Index: projects/clang800-import/UPDATING
Updating Information for FreeBSD current users.
This file is maintained and copyrighted by M. Warner Losh <imp@freebsd.org>.
See end of file for further details. For commonly done items, please see the
COMMON ITEMS: section later in the file. These instructions assume that you
basically know what you are doing. If not, then please consult the FreeBSD
handbook:
https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/makeworld.html
Items affecting the ports and packages system can be found in
/usr/ports/UPDATING. Please read that file before running portupgrade.
NOTE: FreeBSD has switched from gcc to clang. If you have trouble bootstrapping
from older versions of FreeBSD, try WITHOUT_CLANG and WITH_GCC to bootstrap to
the tip of head, and then rebuild without this option. The bootstrap process
from older versions of current across the gcc/clang cutover is a bit fragile.
NOTE TO PEOPLE WHO THINK THAT FreeBSD 13.x IS SLOW:
FreeBSD 13.x has many debugging features turned on, in both the kernel
and userland. These features attempt to detect incorrect use of
system primitives, and encourage loud failure through extra sanity
checking and fail stop semantics. They also substantially impact
system performance. If you want to do performance measurement,
benchmarking, and optimization, you'll want to turn them off. This
includes various WITNESS- related kernel options, INVARIANTS, malloc
debugging flags in userland, and various verbose features in the
kernel. Many developers choose to disable these features on build
machines to maximize performance. (To completely disable malloc
debugging, define MALLOC_PRODUCTION in /etc/make.conf, or to merely
disable the most expensive debugging functionality run
"ln -s 'abort:false,junk:false' /etc/malloc.conf".)
2019mmdd:
Clang, llvm, lld, lldb, compiler-rt and libc++ have been upgraded to
8.0.0. Please see the 20141231 entry below for information about
prerequisites and upgrading, if you are not already using clang 3.5.0
or higher.
20190131:
Iflib is no longer unconditionally compiled into the kernel. Drivers
that use iflib and are statically compiled into the kernel now require
the 'device iflib' config option. For the same drivers loaded as
modules on kernels not having 'device iflib', the iflib.ko module
is loaded automatically.
20181230:
r342635 changes the way efibootmgr(8) works by requiring users to add
the -b (bootnum) parameter for commands where the bootnum was previously
specified with each option. For example 'efibootmgr -B 0001' is now
'efibootmgr -B -b 0001'.
20181220:
r342286 modifies the NFSv4 server so that it obeys vfs.nfsd.nfs_privport
in the same way as it is applied to NFSv2 and 3. This implies that NFSv4
servers that have vfs.nfsd.nfs_privport set will only allow mounts
from clients using a reserved port#. Since both the FreeBSD and Linux
NFSv4 clients use reserved port#s by default, this should not affect
most NFSv4 mounts.
20181219:
The XLP config has been removed. We can't support 64-bit atomics in this
kernel because it is running in 32-bit mode. XLP users must transition
to running a 64-bit kernel (XLP64 or XLPN32).
The mips GXEMUL support has been removed from FreeBSD. MALTA* + qemu is
the preferred emulator today and we don't need two different ones.
The old sibyte / swarm / Broadcom BCM1250 support has been
removed from the mips port.
20181211:
Clang, llvm, lld, lldb, compiler-rt and libc++ have been upgraded to
7.0.1. Please see the 20141231 entry below for information about
prerequisites and upgrading, if you are not already using clang 3.5.0
or higher.
20181211:
Remove the timed and netdate programs from the base tree. Setting
the time with these daemons has been obsolete for over a decade.
20181126:
On amd64, arm64 and armv7 (architectures that install LLVM's ld.lld
linker as /usr/bin/ld) GNU ld is no longer installed as ld.bfd, as
it produces broken binaries when ifuncs are in use. Users needing
GNU ld should install the binutils port or package.
20181123:
The BSD crtbegin and crtend code has been enabled by default. It has
had extensive testing on amd64, arm64, and i386. It can be disabled
by building a world with -DWITHOUT_BSD_CRTBEGIN.
20181115:
The set of CTM commands (ctm, ctm_smail, ctm_rmail, ctm_dequeue)
has been converted to a port (misc/ctm) and will be removed from
FreeBSD-13. It is available as a package (ctm) for all supported
FreeBSD versions.
20181110:
The default newsyslog.conf(5) file has been changed to only include
files in /etc/newsyslog.conf.d/ and /usr/local/etc/newsyslog.conf.d/ if
the filenames end in '.conf' and do not begin with a '.'.
You should check that the configuration files in these two directories match
this naming convention. You can verify which configuration files are
being included using the command:
$ newsyslog -Nrv
20181015:
Ports for the DRM modules have been simplified. Now, amd64 users should
just install the drm-kmod port. All others should install
drm-legacy-kmod.
Graphics hardware that's newer than about 2010 usually works with
drm-kmod. For hardware older than 2013, however, some users will need
to use drm-legacy-kmod if drm-kmod doesn't work for them. Hardware older
than 2008 usually only works in drm-legacy-kmod. The graphics team can
only commit to supporting hardware made since 2013 due to the complexity
of the market and the difficulty of testing all the older cards
effectively. If you
have hardware supported by drm-kmod, you are strongly encouraged to use
that as you will get better support.
Other than KPI chasing, drm-legacy-kmod will not be updated. As outlined
elsewhere, the drm and drm2 modules will be eliminated from the src base
soon (with a limited exception for arm). Please update to the package asap
and report any issues to x11@freebsd.org.
Generally, anybody using the drm*-kmod packages should add
WITHOUT_DRM_MODULE=t and WITHOUT_DRM2_MODULE=t to avoid nasty
cross-threading surprises, especially with automatic driver
loading from X11 startup. These will become the defaults in 13-current
shortly.
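As a sketch, the knobs above go in src.conf(5); this assumes you have
already installed the appropriate drm*-kmod package:
```
# /etc/src.conf -- skip building the in-tree drm and drm2 modules
WITHOUT_DRM_MODULE=t
WITHOUT_DRM2_MODULE=t
```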
20181012:
The ixlv(4) driver has been renamed to iavf(4). As a consequence,
custom kernel and module loading configuration files must be updated
accordingly. Moreover, interfaces previously presented as ixlvN to the
system are now exposed as iavfN, and network configuration files must
be adjusted as necessary.
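For instance, an rc.conf(5) entry for a renamed interface would change
along these lines (the unit number and DHCP setting are assumptions):
```
# /etc/rc.conf
# before: ifconfig_ixlv0="DHCP"
ifconfig_iavf0="DHCP"
```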
20181009:
OpenSSL has been updated to version 1.1.1. This update includes
various API changes throughout the base system. It is
important to rebuild third-party software after upgrading. The value
of __FreeBSD_version has been bumped accordingly.
20181006:
The legacy DRM modules and drivers have now been added to the loader's
module blacklist, in favor of loading them with kld_list in rc.conf(5).
The module blacklist may be overridden with the loader.conf(5)
'module_blacklist' variable, but loading them via rc.conf(5) is strongly
encouraged.
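For example, loading a legacy DRM module via rc.conf(5) instead of the
loader looks like this (the module name is just an illustration; use
the one matching your hardware):
```
# /etc/rc.conf -- load the driver from rc, not from the loader
kld_list="i915kms"
```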
20181002:
The cam(4) based nda(4) driver will be used over nvd(4) by default on
powerpc64. You may set 'options NVME_USE_NVD=1' in your kernel conf or
loader tunable 'hw.nvme.use_nvd=1' if you wish to use the existing
driver. Make sure to edit /boot/etc/kboot.conf and fstab to use the
nda device name.
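As a sketch, the tunable from the entry above goes in loader.conf if you
want to keep nvd(4); the fstab line shows the shape of an nda name (the
device and partition numbers are assumptions):
```
# /boot/loader.conf -- keep using the existing nvd(4) driver:
hw.nvme.use_nvd="1"

# /etc/fstab -- example entry after switching to nda names:
/dev/nda0p2  /  ufs  rw  1  1
```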
20180913:
Reproducible build mode is now on by default, in preparation for
FreeBSD 12.0. This eliminates build metadata such as the user,
host, and time from the kernel (and uname), unless the working tree
corresponds to a modified checkout from a version control system.
The previous behavior can be obtained by setting the /etc/src.conf
knob WITHOUT_REPRODUCIBLE_BUILD.
20180826:
The Yarrow CSPRNG has been removed from the kernel as it has not been
supported by its designers since at least 2003. Fortuna has been the
default since FreeBSD-11.
20180822:
devctl freeze/thaw have gone into the tree, the rc scripts have been
updated to use them and devmatch has been changed. You should update
kernel, userland and rc scripts all at the same time.
20180818:
The default interpreter has been switched from 4th to Lua.
LOADER_DEFAULT_INTERP, documented in build(7), will override the default
interpreter. If you have custom FORTH code you will need to set
LOADER_DEFAULT_INTERP=4th (valid values are 4th, lua or simp) in
src.conf for the build. This will create default hard links between
loader and loader_4th instead of loader and loader_lua, the new default.
If you are using UEFI it will create the proper hard link to loader.efi.
bhyve uses userboot.so. It remains 4th-only until issues regarding
coexistence with multiple versions of FreeBSD are resolved.
20180815:
ls(1) now respects the COLORTERM environment variable used in other
systems and software to indicate that a colored terminal is both
supported and desired. If ls(1) is suddenly emitting colors, they may
be disabled again by either removing the unwanted COLORTERM from your
environment, or using `ls --color=never`. The ls(1) specific CLICOLOR
may not be observed in a future release.
20180808:
The default pager for most commands has been changed to "less". To
restore the old behavior, set PAGER="more" and MANPAGER="more -s" in
your environment.
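In a POSIX shell startup file, restoring the old behavior looks like:
```sh
# e.g. in ~/.profile: restore the pre-less pager defaults
export PAGER="more"
export MANPAGER="more -s"
```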
20180731:
The jedec_ts(4) driver has been removed. A superset of its functionality
is available in the jedec_dimm(4) driver, and the manpage for that
driver includes migration instructions. If you have "device jedec_ts"
in your kernel configuration file, it must be removed.
20180730:
amd64/GENERIC now has EFI runtime services, EFIRT, enabled by default.
This should have no effect if the kernel is booted via BIOS/legacy boot.
EFIRT may be disabled via a loader tunable, efi.rt.disabled, if a system
has a buggy firmware that prevents a successful boot due to use of
runtime services.
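A sketch of the tunable for systems whose firmware cannot handle EFI
runtime services:
```
# /boot/loader.conf -- disable EFI runtime services
efi.rt.disabled="1"
```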
20180727:
Atmel AT91RM9200 and AT91SAM9, Cavium CNS 11xx and XScale
support has been removed from the tree. These ports were
obsolete and/or known to be broken for many years.
20180723:
loader.efi has been augmented to participate more fully in the
UEFI boot manager protocol. loader.efi will now look at the
BootXXXX environment variable to determine if a specific kernel
or root partition was specified. XXXX is derived from BootCurrent.
efibootmgr(8) manages these standard UEFI variables.
20180720:
zfsloader's functionality has now been folded into loader.
zfsloader is no longer necessary once you've updated your
boot blocks. For a transition period, we will install a
hardlink for zfsloader to loader to allow a smooth transition
until the boot blocks can be updated (hard link because old
zfs boot blocks don't understand symlinks).
20180719:
ARM64 now has efifb support. If you want a serial console on your
arm64 board when a screen is connected and the bootloader sets up a
framebuffer for us to use, just add:
boot_serial=YES
boot_multicons=YES
in /boot/loader.conf.
For Raspberry Pi 3 (RPI) users, this is needed even if you don't have
a screen connected, as the firmware will set up a framebuffer area that
u-boot will expose as an EFI framebuffer.
20180719:
New uid:gid added, ntpd:ntpd (123:123). Be sure to run mergemaster
or take steps to update /etc/passwd before doing installworld on
existing systems. Do not skip the "mergemaster -Fp" step before
installworld, as described in the update procedures near the bottom
of this document. Also, rc.d/ntpd now starts ntpd(8) as user ntpd
if the new mac_ntpd(4) policy is available, unless ntpd_flags or
the ntp config file contain options that change file/dir locations.
When such options (e.g., "statsdir" or "crypto") are used, ntpd can
still be run as non-root by setting ntpd_user=ntpd in rc.conf, after
taking steps to ensure that all required files/dirs are accessible
by the ntpd user.
20180717:
Big endian arm support has been removed.
20180711:
The static environment setup in kernel configs is no longer mutually
exclusive with the loader(8) environment by default. In order to
restore the previous default behavior of disabling the loader(8)
environment if a static environment is present, you must specify
loader_env.disabled=1 in the static environment.
20180705:
The ABI of syscalls used by management tools like sockstat and
netstat has been broken to allow 32-bit binaries to work on
64-bit kernels without modification. These programs will need
to match the kernel in order to function. External programs may
require minor modifications to accommodate a change of type in
structures from pointers to 64-bit virtual addresses.
20180702:
On i386 and amd64 atomics are now inlined. Out of tree modules using
atomics will need to be rebuilt.
20180701:
The '%I' format in the kern.corefile sysctl limits the number of
core files that a process can generate to the number stored in the
debug.ncores sysctl. The '%I' format is replaced by the single digit
index. Previously, if all indexes were taken the kernel would overwrite
only a core file with the highest index in a filename.
Currently the system will create a new core file if there is a free
index; if all slots are taken, it will overwrite the oldest one.
20180630:
Clang, llvm, lld, lldb, compiler-rt and libc++ have been upgraded to
6.0.1. Please see the 20141231 entry below for information about
prerequisites and upgrading, if you are not already using clang 3.5.0
or higher.
20180628:
r335753 introduced a new quoting method. However, etc/devd/devmatch.conf
needed to be changed to work with it. This change was made with r335763
and requires a mergemaster / etcupdate / etc to update the installed file.
20180612:
r334930 changed the interface between the NFS modules, so they all
need to be rebuilt. r335018 did a __FreeBSD_version bump for this.
20180530:
As of r334391 lld is the default amd64 system linker; it is installed
as /usr/bin/ld. Kernel build workarounds (see 20180510 entry) are no
longer necessary.
20180530:
The kernel / userland interface for devinfo changed, so you'll
need a new kernel and userland as a pair for it to work (rebuilding
lib/libdevinfo is all that's required). devinfo and devmatch will
not work, but everything else will when there's a mismatch.
20180523:
The on-disk format for hwpmc callchain records has changed to include
threadid corresponding to a given record. This changes the field offsets
and thus requires that libpmcstat be rebuilt before using a kernel
later than r334108.
20180517:
The vxge(4) driver has been removed. This driver was introduced into
HEAD one week before Exar left the Ethernet market and is not
known to be used. If you have device vxge in your kernel config file
it must be removed.
20180510:
The amd64 kernel now requires a ld that supports ifunc to produce a
working kernel, either lld or a newer binutils. lld is built by default
on amd64, and the 'buildkernel' target uses it automatically. However,
it is not the default linker, so building the kernel the traditional
way requires LD=ld.lld on the command line (or LD=/usr/local/bin/ld for
binutils port/package). lld will soon be default, and this requirement
will go away.
NOTE: As of r334391 lld is the default system linker on amd64, and no
workaround is necessary.
20180508:
The nxge(4) driver has been removed. This driver was for PCI-X 10g
cards made by s2io/Neterion. The company was acquired by Exar and
no longer sells or supports Ethernet products. If you have device
nxge in your kernel config file it must be removed.
20180504:
The tz database (tzdb) has been updated to 2018e. This version more
correctly models time stamps in time zones with negative DST such as
Europe/Dublin (from 1971 on), Europe/Prague (1946/7), and
Africa/Windhoek (1994/2017). This does not affect the UT offsets, only
time zone abbreviations and the tm_isdst flag.
20180502:
The ixgb(4) driver has been removed. This driver was for an early and
uncommon legacy PCI 10GbE for a single ASIC, Intel 82597EX. Intel
quickly shifted to the long lived ixgbe family. If you have device
ixgb in your kernel config file it must be removed.
20180501:
The lmc(4) driver has been removed. This was a WAN interface
card that was already reportedly rare in 2003, and had an ambiguous
license. If you have device lmc in your kernel config file it must
be removed.
20180413:
Support for Arcnet networks has been removed. If you have device
arcnet or device cm in your kernel config file they must be
removed.
20180411:
Support for FDDI networks has been removed. If you have device
fddi or device fpa in your kernel config file they must be
removed.
20180406:
In addition to supporting RFC 3164 formatted messages, the
syslogd(8) service is now capable of parsing RFC 5424 formatted
log messages. The main benefit of using RFC 5424 is that clients
may now send log messages with timestamps containing year numbers,
microseconds and time zone offsets.
Similarly, the syslog(3) C library function has been altered to
send RFC 5424 formatted messages to the local system logging
daemon. On systems using syslogd(8), this change should have no
negative impact, as long as syslogd(8) and the C library are
updated at the same time. On systems using a different system
logging daemon, it may be necessary to make configuration
adjustments, depending on the software used.
When using syslog-ng, add the 'syslog-protocol' flag to local
input sources to enable parsing of RFC 5424 formatted messages:
source src {
unix-dgram("/var/run/log" flags(syslog-protocol));
}
When using rsyslog, disable the 'SysSock.UseSpecialParser' option
of the 'imuxsock' module to let messages be processed by the
regular RFC 3164/5424 parsing pipeline:
module(load="imuxsock" SysSock.UseSpecialParser="off")
Do note that these changes only affect communication between local
applications and syslogd(8). The format that syslogd(8) uses to
store messages on disk or forward messages to other systems
remains unchanged. syslogd(8) still uses RFC 3164 for these
purposes. Options to customize this behaviour will be added in the
future. Utilities that process log files stored in /var/log are
thus expected to continue to function as before.
__FreeBSD_version has been incremented to 1200061 to denote this
change.
20180328:
Support for token ring networks has been removed. If you
have "device token" in your kernel config you should remove
it. No device drivers supported token ring.
20180323:
makefs was modified to be able to tag ISO9660 El Torito boot catalog
entries as EFI instead of overloading the i386 tag as done previously.
The amd64 mkisoimages.sh script used to build amd64 ISO images for
release was updated to use this. This may mean that makefs must be
updated before "make cdrom" can be run in the release directory. This
should be as simple as:
$ cd $SRCDIR/usr.sbin/makefs
$ make depend all install
20180212:
FreeBSD boot loader enhanced with Lua scripting. It's purely opt-in for
now by building WITH_LOADER_LUA and WITHOUT_FORTH in /etc/src.conf.
Co-existence for the transition period will come shortly. Booting is a
complex environment and test coverage for Lua-enabled loaders has been
thin, so it would be prudent to assume it might not work and make
provisions for backup boot methods.
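A sketch of the opt-in knobs described above:
```
# /etc/src.conf -- build the Lua loader only
WITH_LOADER_LUA=yes
WITHOUT_FORTH=yes
```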
20180211:
devmatch functionality has been turned on in devd. It will automatically
load drivers for unattached devices. This may cause unexpected drivers to
be loaded. Please report any problems to current@ and imp@freebsd.org.
20180114:
Clang, llvm, lld, lldb, compiler-rt and libc++ have been upgraded to
6.0.0. Please see the 20141231 entry below for information about
prerequisites and upgrading, if you are not already using clang 3.5.0
or higher.
20180110:
LLVM's lld linker is now used as the FreeBSD/amd64 bootstrap linker.
This means it is used to link the kernel and userland libraries and
executables, but is not yet installed as /usr/bin/ld by default.
To revert to ld.bfd as the bootstrap linker, in /etc/src.conf set
WITHOUT_LLD_BOOTSTRAP=yes
20180110:
On i386, pmtimer has been removed. Its functionality has been folded
into apm. It was a no-op on ACPI in current for a while now (but was still
needed on i386 in FreeBSD 11 and earlier). Users may need to remove it
from kernel config files.
20180104:
The use of RSS hash from the network card aka flowid has been
disabled by default for lagg(4) as it's currently incompatible with
the lacp and loadbalance protocols.
This can be re-enabled by setting the following in loader.conf:
net.link.lagg.default_use_flowid="1"
20180102:
The SW_WATCHDOG option is no longer necessary to enable the
hardclock-based software watchdog if no hardware watchdog is
configured. As before, SW_WATCHDOG will cause the software
watchdog to be enabled even if a hardware watchdog is configured.
20171215:
r326887 fixes the issue described in the 20171214 UPDATING entry.
r326888 flips the switch back to building GELI support always.
20171214:
r326593 broke ZFS + GELI support for reasons unknown. However,
it also broke ZFS support generally, so GELI has been turned off
by default as the lesser evil in r326857. If you boot off ZFS and/or
GELI, it might not be a good time to update.
20171125:
PowerPC users must update loader(8) by rebuilding world before
installing a new kernel, as the protocol connecting them has
changed. Without the update, loader metadata will not be passed
successfully to the kernel and users will have to enter their
root partition at the kernel mountroot prompt to continue booting.
Newer versions of loader can boot old kernels without issue.
20171110:
The LOADER_FIREWIRE_SUPPORT build variable as been renamed to
WITH/OUT_LOADER_FIREWIRE. LOADER_{NO_,}GELI_SUPPORT has been renamed
to WITH/OUT_LOADER_GELI.
20171106:
The naive and non-compliant support of posix_fallocate(2) in ZFS
has been removed as of r325320. The system call now returns EINVAL
when used on a ZFS file. Although the new behavior complies with the
standard, some consumers are not prepared to cope with it.
One known victim is lld prior to r325420.
20171102:
Building in a FreeBSD src checkout will automatically create object
directories now rather than store files in the current directory if
'make obj' was not run. Calling 'make obj' is no longer necessary.
This feature can be disabled by setting WITHOUT_AUTO_OBJ=yes in
/etc/src-env.conf (not /etc/src.conf), or passing the option in the
environment.
20171101:
The default MAKEOBJDIR has changed from /usr/obj/<srcdir> for native
builds, and /usr/obj/<arch>/<srcdir> for cross-builds, to a unified
/usr/obj/<srcdir>/<arch>. This behavior can be changed to the old
format by setting WITHOUT_UNIFIED_OBJDIR=yes in /etc/src-env.conf,
the environment, or with -DWITHOUT_UNIFIED_OBJDIR when building.
The UNIFIED_OBJDIR option is a transitional feature that will be
removed for the 12.0 release; please migrate any tools to the new
format by looking up the OBJDIR via 'make -V .OBJDIR' rather than
hardcoding paths.
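Querying the object directory instead of hardcoding it looks like
this (run from the tree you are building):
```sh
# Ask bmake where objects for the current directory will go
cd /usr/src
make -V .OBJDIR
```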
20171028:
The native-xtools target no longer installs the files by default to the
OBJDIR. Use the native-xtools-install target with a DESTDIR to install
to ${DESTDIR}/${NXTP} where NXTP defaults to /nxb-bin.
20171021:
As part of the boot loader infrastructure cleanup, LOADER_*_SUPPORT
options are changing from controlling the build if defined / undefined
to controlling the build with explicit 'yes' or 'no' values. They will
shift to WITH/WITHOUT options to match other options in the system.
20171010:
libstand has turned into a private library for sys/boot use only.
It is no longer supported as a public interface outside of sys/boot.
20171005:
The arm port has split armv6 into armv6 and armv7. armv7 is now
a valid TARGET_ARCH/MACHINE_ARCH setting. If you have an armv7 system
and are running a kernel from before r324363, you will need to add
MACHINE_ARCH=armv7 to 'make buildworld' to do a native build.
20171003:
When building multiple kernels using KERNCONF, non-existent KERNCONF
files will produce an error and buildkernel will fail. Previously
missing KERNCONF files silently failed giving no indication as to
why, only to subsequently discover during installkernel that the
desired kernel was never built in the first place.
20170912:
The default serial number format for CTL LUNs has changed. This will
affect users who use /dev/diskid/* device nodes, or whose FibreChannel
or iSCSI clients care about their LUNs' serial numbers. Users who
require serial number stability should hardcode serial numbers in
/etc/ctl.conf .
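A sketch of hardcoding a serial number in ctl.conf(5); the target name,
backing path, and serial value below are made-up examples:
```
target iqn.2012-06.com.example:target0 {
	lun 0 {
		path /dev/zvol/tank/lun0
		serial "FBSD0000001"
	}
}
```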
20170912:
For 32-bit arm compiled for hard-float support, soft-floating point
binaries now always get their shared libraries from
LD_SOFT_LIBRARY_PATH (in the past, this was only used if
/usr/libsoft also existed). Only users with a hard-float ld.so, but
soft-float everything else should be affected.
20170826:
The geli password typed at boot is now hidden. To restore the previous
behavior, see geli(8) for configuration options.
20170825:
Move PMTUD blackhole counters to TCPSTATS and remove them from bare
sysctl values. Minor nit, but requires a rebuild of both world/kernel
to complete.
20170814:
"make check" behavior (made in ^/head@r295380) has been changed to
execute from a limited sandbox, as opposed to executing from
${TESTSDIR}.
Behavioral changes:
- The "beforecheck" and "aftercheck" targets are now specified.
- ${CHECKDIR} (added in commit noted above) has been removed.
- Legacy behavior can be enabled by setting
WITHOUT_MAKE_CHECK_USE_SANDBOX in src.conf(5) or the environment.
If the limited sandbox mode is enabled, "make check" will execute
"make distribution", then install, execute the tests, and clean up the
sandbox if successful.
The "make distribution" and "make install" targets are typically run as
root to set appropriate permissions and ownership at installation time.
The end-user should set "WITH_INSTALL_AS_USER" in src.conf(5) or the
environment if executing "make check" with limited sandbox mode using
an unprivileged user.
20170808:
Since the switch to GPT disk labels, fsck for UFS/FFS has been
unable to automatically find alternate superblocks. As of r322297,
the information needed to find alternate superblocks has been
moved to the end of the area reserved for the boot block.
Filesystems created with a newfs of this vintage or later
will create the recovery information. If you have a filesystem
created prior to this change and wish to have a recovery block
created for your filesystem, you can do so by running fsck in
foreground mode (i.e., do not use the -p or -y options). As it
starts, fsck will ask ``SAVE DATA TO FIND ALTERNATE SUPERBLOCKS''
to which you should answer yes.
20170728:
As of r321665, an NFSv4 server configuration that services
Kerberos mounts or clients that do not support the uid/gid in
owner/owner_group string capability, must explicitly enable
the nfsuserd daemon by adding nfsuserd_enable="YES" to the
machine's /etc/rc.conf file.
20170722:
Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to 5.0.0.
Please see the 20141231 entry below for information about prerequisites
and upgrading, if you are not already using clang 3.5.0 or higher.
20170701:
WITHOUT_RCMDS is now the default. Set WITH_RCMDS if you need the
r-commands (rlogin, rsh, etc.) to be built with the base system.
20170625:
The FreeBSD/powerpc platform now uses a 64-bit type for time_t. This is
a very major ABI incompatible change, so users of FreeBSD/powerpc must
be careful when performing source upgrades. It is best to run
'make installworld' from an alternate root system, either a live
CD/memory stick, or a temporary root partition. Additionally, all ports
must be recompiled. powerpc64 is largely unaffected, except in the case
of 32-bit compatibility. All 32-bit binaries will be affected.
20170623:
Forward compatibility for the "ino64" project has been committed. This
will allow most new binaries to run on older kernels in a limited
fashion. This prevents many of the common foot-shooting actions in the
upgrade as well as the limited ability to roll back the kernel across
the ino64 upgrade. Complicated use cases may not work properly, though
enough simpler ones work to allow recovery in most situations.
20170620:
Switch back to the BSDL dtc (Device Tree Compiler). Set WITH_GPL_DTC
if you require the GPL compiler.
20170618:
The internal ABI used for communication between the NFS kernel modules
was changed by r320085, so __FreeBSD_version was bumped to
ensure all the NFS related modules are updated together.
20170617:
The ABI of struct event was changed by extending the data
member to 64bit and adding ext fields. For upgrade, same
precautions as for the entry 20170523 "ino64" must be
followed.
20170531:
The GNU roff toolchain has been removed from base. To render manpages
which are not supported by mandoc(1), man(1) can fall back on GNU roff
from ports (and will recommend installing it).
To render roff(7) documents, consider using GNU roff or the heirloom
doctools roff toolchain, both available from ports via pkg install groff
or pkg install heirloom-doctools.
20170524:
The ath(4) and ath_hal(4) modules now build piecemeal to allow for
smaller runtime footprint builds. This is useful for embedded systems
which require support for only one chipset.
If you load it as a module, make sure this is in /boot/loader.conf:
if_ath_load="YES"
This will load the HAL, all chip/RF backends and if_ath_pci.
If you have if_ath_pci in /boot/loader.conf, ensure it is after
if_ath or it will not load any HAL chipset support.
If you want to selectively load things (eg on ye cheape ARM/MIPS
platforms where RAM is at a premium) you should:
* load ath_hal
* load the chip modules in question
* load ath_rate, ath_dfs
* load ath_main
* load if_ath_pci and/or if_ath_ahb depending upon your particular
bus bind type - this is where probe/attach is done.
For further comments/feedback, poke adrian@ .
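A loader.conf sketch of the selective approach; the chip module name
below (ath_ar9300) is an assumption, substitute the module(s) for your
chipset:
```
# /boot/loader.conf -- piecemeal ath load for RAM-constrained systems
ath_hal_load="YES"
ath_ar9300_load="YES"
ath_rate_load="YES"
ath_dfs_load="YES"
ath_main_load="YES"
if_ath_pci_load="YES"
```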
20170523:
The "ino64" 64-bit inode project has been committed, which extends
a number of types to 64 bits. Upgrading in place requires care and
adherence to the documented upgrade procedure.
If using a custom kernel configuration ensure that the
COMPAT_FREEBSD11 option is included (as during the upgrade the
system will be running the ino64 kernel with the existing world).
For the safest in-place upgrade begin by removing previous build
artifacts via "rm -rf /usr/obj/*". Then, carefully follow the
full procedure documented below under the heading "To rebuild
everything and install it on the current system." Specifically,
a reboot is required after installing the new kernel before
installing world.
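The in-place procedure above can be sketched as follows; this is an
outline only (etcupdate/mergemaster steps elided), so follow the full
documented procedure:
```sh
# ino64 in-place upgrade sketch
rm -rf /usr/obj/*
cd /usr/src
make buildworld
make buildkernel        # custom configs must include COMPAT_FREEBSD11
make installkernel
shutdown -r now         # reboot on the new kernel first
# after the reboot:
#   cd /usr/src && make installworld
```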
20170424:
The NATM framework including the en(4), fatm(4), hatm(4), and
patm(4) devices has been removed. Consumers should plan a
migration before the end-of-life date for FreeBSD 11.
20170420:
GNU diff has been replaced by a BSD licensed diff. Some features of GNU
diff have not been implemented; if those are needed, a newer version of
GNU diff is available via the diffutils package under the gdiff name.
20170413:
As of r316810 for ipfilter, keep frags is no longer assumed when
keep state is specified in a rule. r316810 aligns ipfilter with
documentation in man pages separating keep frags from keep state.
This allows keep state to be specified without forcing keep frags
and allows keep frags to be specified independently of keep state.
To maintain previous behaviour, also specify keep frags with
keep state (as documented in ipf.conf.5).
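An ipf.conf sketch; the rule itself is an arbitrary example, the point
being the explicit keep frags wherever the old behaviour is wanted:
```
# /etc/ipf.conf -- keep frags is no longer implied by keep state
pass in quick proto tcp from any to any port = 22 keep state keep frags
```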
20170407:
arm64 builds now use the base system LLD 4.0.0 linker by default,
instead of requiring that the aarch64-binutils port or package be
installed. To continue using aarch64-binutils, set
CROSS_BINUTILS_PREFIX=/usr/local/aarch64-freebsd/bin .
20170405:
The UDP optimization in entry 20160818 that added the sysctl
net.inet.udp.require_l2_bcast has been reverted. L2 broadcast
packets will no longer be treated as L3 broadcast packets.
20170331:
Binds and sends to the loopback addresses, IPv6 and IPv4, will now
use any explicitly assigned loopback address available in the jail
instead of using the first assigned address of the jail.
20170329:
The ctl.ko module no longer implements the iSCSI target frontend:
cfiscsi.ko does instead.
If building cfiscsi.ko as a kernel module, the module can be loaded
via one of the following methods:
- `cfiscsi_load="YES"` in loader.conf(5).
- Add `cfiscsi` to `$kld_list` in rc.conf(5).
- ctladm(8)/ctld(8), when compiled with iSCSI support
(`WITH_ISCSI=yes` in src.conf(5))
Please see cfiscsi(4) for more details.
20170316:
The mmcsd.ko module now additionally depends on geom_flashmap.ko.
Also, mmc.ko and mmcsd.ko need to be a matching pair built from the
same source (previously, the dependency of mmcsd.ko on mmc.ko was
missing, but mmcsd.ko now will refuse to load if it is incompatible
with mmc.ko).
20170315:
The syntax of ipfw(8) named states was changed to avoid ambiguity.
If you have used named states in your firewall rules, you need to modify
them after installworld and before rebooting. Named states must now
be prefixed with a colon.
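As an illustrative sketch (the rule body and state name here are
hypothetical), a rule using a named state would change like this:
```
# Before (old syntax, no longer accepted):
#   ipfw add allow tcp from any to me 22 setup keep-state ssh_st
# After (named state prefixed with a colon):
ipfw add allow tcp from any to me 22 setup keep-state :ssh_st
```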
20170311:
The old drm (sys/dev/drm/) drivers for i915 and radeon have been
removed as the userland we provide cannot use them. The KMS version
(sys/dev/drm2) supports the same hardware.
20170302:
Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to 4.0.0.
Please see the 20141231 entry below for information about prerequisites
and upgrading, if you are not already using clang 3.5.0 or higher.
20170221:
The code that provides support for ZFS .zfs/ directory functionality
has been reimplemented. It is no longer possible to create a snapshot
by doing mkdir under .zfs/snapshot/. That should be the only user
visible change.
20170216:
EISA bus support has been removed. The WITH_EISA option is no longer
valid.
20170215:
MCA bus support has been removed.
20170127:
The WITH_LLD_AS_LD / WITHOUT_LLD_AS_LD build knobs have been renamed
WITH_LLD_IS_LD / WITHOUT_LLD_IS_LD, for consistency with CLANG_IS_CC.
20170112:
The EM_MULTIQUEUE kernel configuration option is deprecated now that
the em(4) driver conforms to iflib specifications.
20170109:
The igb(4), em(4) and lem(4) ethernet drivers are now implemented via
IFLIB. If you have a custom kernel configuration that excludes em(4)
but you use igb(4), you need to re-add em(4) to your custom configuration.
20161217:
Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to 3.9.1.
Please see the 20141231 entry below for information about prerequisites
and upgrading, if you are not already using clang 3.5.0 or higher.
20161124:
Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to 3.9.0.
Please see the 20141231 entry below for information about prerequisites
and upgrading, if you are not already using clang 3.5.0 or higher.
20161119:
The layout of the pmap structure has changed for powerpc to put the pmap
statistics at the front for all CPU variations. libkvm(3) and all tools
that link against it need to be recompiled.
20161030:
isl(4) and cyapa(4) drivers now require a new driver,
chromebook_platform(4), to work properly on Chromebook-class hardware.
On other types of hardware the drivers may need to be configured using
device hints. Please see the corresponding manual pages for details.
20161017:
The urtwn(4) driver was merged into rtwn(4) and now consists of
rtwn(4) main module + rtwn_usb(4) and rtwn_pci(4) bus-specific
parts.
Also, firmware for RTL8188CE was renamed due to possible name
conflict (rtwnrtl8192cU(B) -> rtwnrtl8192cE(B))
20161015:
GNU rcs has been removed from base. It is available as packages:
- rcs: Latest GPLv3 GNU rcs version.
- rcs57: Copy of the latest version of GNU rcs (GPLv2) before it was
removed from base.
20161008:
Use of the cc_cdg, cc_chd, cc_hd, or cc_vegas congestion control
modules now requires that the kernel configuration contain the
TCP_HHOOK option. (This option is included in the GENERIC kernel.)
20161003:
The WITHOUT_ELFCOPY_AS_OBJCOPY src.conf(5) knob has been retired.
ELF Tool Chain's elfcopy is always installed as /usr/bin/objcopy.
20160924:
Relocatable object files with the extension of .So have been renamed
to use an extension of .pico instead. The purpose of this change is
to avoid a name clash with shared libraries on case-insensitive file
systems. On those file systems, foo.So is the same file as foo.so.
20160918:
GNU rcs has been turned off by default. It can (temporarily) be built
again by adding the WITH_RCS knob in src.conf.
Otherwise, GNU rcs is available from packages:
- rcs: Latest GPLv3 GNU rcs version.
- rcs57: Copy of the latest version of GNU rcs (GPLv2) from base.
20160918:
The backup_uses_rcs functionality has been removed from rc.subr.
20160908:
The queue(3) debugging macro, QUEUE_MACRO_DEBUG, has been split into
two separate components, QUEUE_MACRO_DEBUG_TRACE and
QUEUE_MACRO_DEBUG_TRASH. Define both for the original
QUEUE_MACRO_DEBUG behavior.
20160824:
r304787 changed some ioctl interfaces between the iSCSI userspace
programs and the kernel. ctladm, ctld, iscsictl, and iscsid must be
rebuilt to work with new kernels. __FreeBSD_version has been bumped
to 1200005.
20160818:
The UDP receive code has been updated to only treat incoming UDP
packets that were addressed to an L2 broadcast address as L3
broadcast packets. It is not expected that this will affect any
standards-conforming UDP application. The new behaviour can be
disabled by setting the sysctl net.inet.udp.require_l2_bcast to
0.
20160818:
Remove the openbsd_poll system call.
__FreeBSD_version has been bumped because of this.
20160708:
The stable/11 branch has been created from head@r302406.
20160622:
The libc stub for the pipe(2) system call has been replaced with
a wrapper that calls the pipe2(2) system call and the pipe(2)
system call is now only implemented by the kernels that include
"options COMPAT_FREEBSD10" in their config file (this is the
default). Users should ensure that this option is enabled in
their kernel or upgrade userspace to r302092 before upgrading their
kernel.
20160527:
CAM will now strip leading spaces from SCSI disks' serial numbers.
This will affect users who create UFS filesystems on SCSI disks using
those disks' diskid device nodes. For example, if /etc/fstab
previously contained a line like
"/dev/diskid/DISK-%20%20%20%20%20%20%20ABCDEFG0123456", you should
change it to "/dev/diskid/DISK-ABCDEFG0123456". Users of geom
transforms like gmirror may also be affected. ZFS users should
generally be fine.
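Deriving the new device name from an old %20-encoded entry amounts to
stripping the escapes; a sketch using the example serial number from
above:

```shell
# Old-style diskid node with URL-encoded leading spaces (example serial)
old='/dev/diskid/DISK-%20%20%20%20%20%20%20ABCDEFG0123456'
# CAM now strips the leading spaces, so remove the %20 escapes
new=$(printf '%s\n' "$old" | sed 's/%20//g')
echo "$new"
```

The same substitution can be applied to /etc/fstab itself, after making
a backup, e.g. with sed -i '.bak' 's/%20//g' /etc/fstab on FreeBSD's sed.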
20160523:
The bitstring(3) API has been updated with new functionality and
improved performance. But it is binary-incompatible with the old API.
Objects built with the new headers may not be linked against objects
built with the old headers.
20160520:
The brk and sbrk functions have been removed from libc on arm64.
Binutils from ports has been updated to not link to these
functions and should be updated to the latest version before
installing a new libc.
20160517:
The armv6 port now defaults to hard float ABI. Limited support
for running both hardfloat and soft float on the same system
is available using the libraries installed with -DWITH_LIBSOFT.
This has only been tested as an upgrade path for installworld
and packages may fail or need manual intervention to run. New
packages will be needed.
To update an existing self-hosted armv6hf system, you must add
TARGET_ARCH=armv6 on the make command line for both the build
and the install steps.
20160510:
Kernel modules compiled outside of a kernel build now default to
installing to /boot/modules instead of /boot/kernel. Many kernel
modules built this way (such as those in ports) already overrode
KMODDIR explicitly to install into /boot/modules. However,
manually building and installing a module from /sys/modules will
now install to /boot/modules instead of /boot/kernel.
20160414:
The CAM I/O scheduler has been committed to the kernel. There should be
no user visible impact. This does enable NCQ Trim on ada SSDs. The
list of known rogue drives that claim NCQ TRIM support but actually
corrupt data is believed to be complete, but be on the lookout for
data corruption anyway:
o Crucial MX100, M550 drives with MU01 firmware.
o Micron M510 and M550 drives with MU01 firmware.
o Micron M500 prior to MU07 firmware
o Samsung 830, 840, and 850 all firmwares
o FCCT M500 all firmwares
Crucial has firmware http://www.crucial.com/usa/en/support-ssd-firmware
with working NCQ TRIM. For Micron branded drives, see your sales rep for
updated firmware. Blacklisted drives will work correctly, because no
NCQ TRIMs are ever sent to them. Given that this list is the same as
the one found in Linux, it is believed there are no other rogues in
the marketplace; all other models from the above vendors work.
To be safe, if you are at all concerned, you can quirk each of your
drives to prevent NCQ from being sent by setting:
kern.cam.ada.X.quirks="0x2"
in loader.conf. If the drive requires the 4k sector quirk, set the
quirks entry to 0x3.
20160330:
The FAST_DEPEND build option has been removed and its functionality is
now the one true way. The old mkdep(1) style of 'make depend' has
been removed. See 20160311 for further details.
20160317:
Resource range types have grown from unsigned long to uintmax_t. All
drivers, and anything using libdevinfo, need to be recompiled.
20160311:
WITH_FAST_DEPEND is now enabled by default for in-tree and out-of-tree
builds. It no longer runs mkdep(1) during 'make depend', and the
'make depend' stage can now safely be skipped, as it is run automatically
during 'make all' and will generate all SRCS and DPSRCS before
building anything else. Dependencies are gathered at compile time with
-MF flags kept in separate .depend files per object file. Users should
run 'make cleandepend' once if using -DNO_CLEAN to clean out older
stale .depend files.
20160306:
On amd64, clang 3.8.0 can now insert sections of type AMD64_UNWIND into
kernel modules. Therefore, if you load any kernel modules at boot time,
please install the boot loaders after you install the kernel, but before
rebooting, e.g.:
make buildworld
make buildkernel KERNCONF=YOUR_KERNEL_HERE
make installkernel KERNCONF=YOUR_KERNEL_HERE
make -C sys/boot install
<reboot in single user>
Then follow the usual steps, described in the General Notes section,
below.
20160305:
Clang, llvm, lldb and compiler-rt have been upgraded to 3.8.0. Please
see the 20141231 entry below for information about prerequisites and
upgrading, if you are not already using clang 3.5.0 or higher.
20160301:
The AIO subsystem is now a standard part of the kernel. The
VFS_AIO kernel option and aio.ko kernel module have been removed.
Due to stability concerns, asynchronous I/O requests are only
permitted on sockets and raw disks by default. To enable
asynchronous I/O requests on all file types, set the
vfs.aio.enable_unsafe sysctl to a non-zero value.
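To enable it persistently across reboots, a sysctl.conf entry such as
the following can be used (a minimal sketch; the sysctl name is the
one given above):
```
# /etc/sysctl.conf: allow async I/O on all file types (unsafe)
vfs.aio.enable_unsafe=1
```
Running sysctl vfs.aio.enable_unsafe=1 applies the same setting at
runtime without a reboot.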
20160226:
The ELF object manipulation tool objcopy is now provided by the
ELF Tool Chain project rather than by GNU binutils. It should be a
drop-in replacement, with the addition of arm64 support. The
(temporary) src.conf knob WITHOUT_ELFCOPY_AS_OBJCOPY knob may be set
to obtain the GNU version if necessary.
20160129:
Building ZFS pools on top of zvols is prohibited by default. That
feature has never worked safely; it's always been prone to deadlocks.
Using a zvol as the backing store for a VM guest's virtual disk will
still work, even if the guest is using ZFS. Legacy behavior can be
restored by setting vfs.zfs.vol.recursive=1.
20160119:
The NONE and HPN patches have been removed from OpenSSH. They are
still available in the security/openssh-portable port.
20160113:
With the addition of ypldap(8), a new _ypldap user is now required
during installworld. "mergemaster -p" can be used to add the user
prior to installworld, as documented in the handbook.
20151216:
The tftp loader (pxeboot) now uses the option root-path directive. As a
consequence it no longer looks for a pxeboot.4th file on the tftp
server. Instead it uses the regular /boot infrastructure as with the
other loaders.
20151211:
The code to start recording plug and play data into the modules has
been committed. While the old tools will properly build a new kernel,
a number of warnings about "unknown metadata record 4" will be produced
for an older kldxref. To avoid such warnings, make sure to rebuild
the kernel toolchain (or world). Make sure that you have r292078 or
later when trying to build 292077 or later before rebuilding.
20151207:
Debug data files are now built by default with 'make buildworld' and
installed with 'make installworld'. This facilitates debugging but
requires more disk space both during the build and for the installed
world. Debug files may be disabled by setting WITHOUT_DEBUG_FILES=yes
in src.conf(5).
20151130:
r291527 changed the internal interface between the nfsd.ko and
nfscommon.ko modules. As such, they must both be upgraded together.
__FreeBSD_version has been bumped because of this.
20151108:
Added support for Unicode collation strings changes the order of
files listed by ls(1), for example. To get back the old behaviour,
set the LC_COLLATE environment variable to "C".
Database administrators will need to reindex their databases, as
collation results will be different.
Due to a bug in install(1) it is recommended to remove the ancient
locales before running make installworld.
rm -rf /usr/share/locale/*
20151030:
OpenSSL has been upgraded to 1.0.2d. Any binaries requiring
libcrypto.so.7 or libssl.so.7 must be recompiled.
20151020:
Qlogic 24xx/25xx firmware images were updated from 5.5.0 to 7.3.0.
Kernel modules isp_2400_multi and isp_2500_multi were removed and
should be replaced with isp_2400 and isp_2500 modules respectively.
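If the removed modules were loaded from loader.conf(5), the entries
need renaming as well; a sketch, assuming the standard <module>_load
naming convention:
```
# /boot/loader.conf
# Old (module removed):
#isp_2400_multi_load="YES"
# New:
isp_2400_load="YES"
```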
20151017:
The build previously allowed using 'make -n' to not recurse into
sub-directories while showing what commands would be executed, and
'make -n -n' to recursively show commands. Now 'make -n' will recurse
and 'make -N' will not.
20151012:
If you specify SENDMAIL_MC or SENDMAIL_CF in make.conf, mergemaster
and etcupdate will now use this file. A custom sendmail.cf is now
updated via this mechanism rather than via installworld. If you had
excluded sendmail.cf in mergemaster.rc or etcupdate.conf, you may
want to remove the exclusion or change it to "always install".
/etc/mail/sendmail.cf is now managed the same way regardless of
whether SENDMAIL_MC/SENDMAIL_CF is used. If you are not using
SENDMAIL_MC/SENDMAIL_CF there should be no change in behavior.
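A minimal make.conf fragment for this mechanism might look like the
following (the .mc path is hypothetical):
```
# /etc/make.conf: let mergemaster/etcupdate regenerate sendmail.cf
SENDMAIL_MC=/etc/mail/myhost.mc
```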
20151011:
Compatibility shims for legacy ATA device names have been removed.
It includes ATA_STATIC_ID kernel option, kern.cam.ada.legacy_aliases
and kern.geom.raid.legacy_aliases loader tunables, kern.devalias.*
environment variables, /dev/ad* and /dev/ar* symbolic links.
20151006:
Clang, llvm, lldb, compiler-rt and libc++ have been upgraded to 3.7.0.
Please see the 20141231 entry below for information about prerequisites
and upgrading, if you are not already using clang 3.5.0 or higher.
20150924:
Kernel debug files have been moved to /usr/lib/debug/boot/kernel/,
and renamed from .symbols to .debug. This reduces the size requirements
on the boot partition or file system and provides consistency with
userland debug files.
When using the supported kernel installation method the
/usr/lib/debug/boot/kernel directory will be renamed (to kernel.old)
as is done with /boot/kernel.
Developers wishing to maintain the historical behavior of installing
debug files in /boot/kernel/ can set KERN_DEBUGDIR="" in src.conf(5).
20150827:
The wireless drivers have undergone changes that remove the 'parent
interface' from the ifconfig -l output. The rc.d network scripts
used to check for the presence of a parent interface in the list, so
old scripts would fail to start wireless networking. Thus, an
etcupdate(8) or mergemaster(8) run is required after a kernel update,
to update your rc.d scripts in /etc.
20150827:
pf no longer supports 'scrub fragment crop' or 'scrub fragment drop-ovl'
These configurations are now automatically interpreted as
'scrub fragment reassemble'.
20150817:
Kernel-loadable modules for the random(4) device are back. To use
them, the kernel must have
device random
options RANDOM_LOADABLE
kldload(8) can then be used to load random_fortuna.ko
or random_yarrow.ko. Please note that due to the indirect
function calls that the loadable modules need to provide,
the built-in variants will be slightly more efficient.
The random(4) kernel option RANDOM_DUMMY has been retired due to
unpopularity. It was not all that useful anyway.
20150813:
The WITHOUT_ELFTOOLCHAIN_TOOLS src.conf(5) knob has been retired.
Control over building the ELF Tool Chain tools is now provided by
the WITHOUT_TOOLCHAIN knob.
20150810:
The polarity of Pulse Per Second (PPS) capture events with the
uart(4) driver has been corrected. Prior to this change the PPS
"assert" event corresponded to the trailing edge of a positive PPS
pulse and the "clear" event was the leading edge of the next pulse.
As the width of a PPS pulse in a typical GPS receiver is on the
order of 1 millisecond, most users will not notice any significant
difference with this change.
Anyone who has compensated for the historical polarity reversal by
configuring a negative offset equal to the pulse width will need to
remove that workaround.
20150809:
The default group assigned to /dev/dri entries has been changed
from 'wheel' to 'video' with the id of '44'. If you want to have
access to the dri devices please add yourself to the video group
with:
# pw groupmod video -m $USER
20150806:
The menu.rc and loader.rc files will now be replaced during
upgrades. Please migrate local changes to menu.rc.local and
loader.rc.local instead.
20150805:
GNU Binutils versions of addr2line, c++filt, nm, readelf, size,
strings and strip have been removed. The src.conf(5) knob
WITHOUT_ELFTOOLCHAIN_TOOLS no longer provides the binutils tools.
20150728:
As ZFS requires more kernel stack pages than the default on some
architectures, e.g. i386, it now warns if KSTACK_PAGES is less than
ZFS_MIN_KSTACK_PAGES (which is 4 at the time of writing).
Please consider using 'options KSTACK_PAGES=X', where X is greater
than or equal to ZFS_MIN_KSTACK_PAGES, i.e. 4, in such configurations.
20150706:
sendmail has been updated to 8.15.2. Starting with FreeBSD 11.0
and sendmail 8.15, sendmail uses uncompressed IPv6 addresses by
default, i.e., they will not contain "::". For example, instead
of ::1, it will be 0:0:0:0:0:0:0:1. This permits a zero subnet
to have a more specific match, such as different map entries for
IPv6:0:0 vs IPv6:0. This change requires that configuration
data (including maps, files, classes, custom rulesets, etc.)
use the same format, so make certain such configuration data is
upgraded. As a very simple check, search for patterns like
'IPv6:[0-9a-fA-F:]*::' and 'IPv6::'. To return to the old
behavior, set the m4 option confUSE_COMPRESSED_IPV6_ADDRESSES or
the cf option UseCompressedIPv6Addresses.
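The suggested check can be scripted with grep(1); this sketch scans a
file for the compressed forms (the two sample ruleset lines are made up
purely for illustration):

```shell
# Write two sample lines: one with a compressed IPv6 address,
# one already in uncompressed form.
f=$(mktemp)
printf 'RIPv6:0:0:1::\nRIPv6:0:0:0:0:0:0:0:1\n' > "$f"
# Count lines still using the compressed "::" notation
matches=$(grep -cE 'IPv6:[0-9a-fA-F:]*::|IPv6::' "$f")
echo "$matches"
rm -f "$f"
```

Any nonzero count indicates configuration data that still needs to be
converted to the uncompressed format.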
20150630:
The default kernel entropy-processing algorithm is now
Fortuna, replacing Yarrow.
Assuming you have 'device random' in your kernel config
file, the configurations allow a kernel option to override
this default. You may choose *ONE* of:
options RANDOM_YARROW # Legacy /dev/random algorithm.
options RANDOM_DUMMY # Blocking-only driver.
If you have neither, you get Fortuna. For most people,
read no further, Fortuna will give a /dev/random that works
like it always used to, and the difference will be irrelevant.
If you remove 'device random', you get *NO* kernel-processed
entropy at all. This may be acceptable to folks building
embedded systems, but has complications. Carry on reading,
and it is assumed you know what you need.
*PLEASE* read random(4) and random(9) if you are in the
habit of tweaking kernel configs, and/or if you are a member
of the embedded community, wanting specific and not-usual
behaviour from your security subsystems.
NOTE!! If you use RANDOM_DUMMY and/or have no 'device
random', you will NOT have a functioning /dev/random, and
many cryptographic features will not work, including SSH.
You may also find strange behaviour from the random(3) set
of library functions, in particular sranddev(3), srandomdev(3)
and arc4random(3). The reason for this is that the KERN_ARND
sysctl only returns entropy if it thinks it has some to
share, and with RANDOM_DUMMY or no 'device random' this
will never happen.
20150623:
An additional fix for the issue described in the 20150614 sendmail
entry below has been committed in revision 284717.
20150616:
FreeBSD's old make (fmake) has been removed from the system. It is
available as the devel/fmake port or via pkg install fmake.
20150615:
The fix for the issue described in the 20150614 sendmail entry
below has been committed in revision 284436. The work
around described in that entry is no longer needed unless the
default setting is overridden by a confDH_PARAMETERS configuration
setting of '5' or pointing to a 512 bit DH parameter file.
20150614:
ALLOW_DEPRECATED_ATF_TOOLS/ATFFILE support has been removed from
atf.test.mk (included from bsd.test.mk). Please upgrade devel/atf
and devel/kyua to version 0.20+ and adjust any calling code to work
with Kyuafile and kyua.
20150614:
The import of openssl to address the FreeBSD-SA-15:10.openssl
security advisory includes a change which rejects handshakes
with DH parameters below 768 bits. sendmail releases prior
to 8.15.2 (not yet released), defaulted to a 512 bit
DH parameter setting for client connections. To work around
this interoperability, sendmail can be configured to use a
2048 bit DH parameter by:
1. Edit /etc/mail/`hostname`.mc
2. If a setting for confDH_PARAMETERS does not exist or
exists and is set to a string beginning with '5',
replace it with '2'.
3. If a setting for confDH_PARAMETERS exists and is set to
a file path, create a new file with:
openssl dhparam -out /path/to/file 2048
4. Rebuild the .cf file:
cd /etc/mail/; make; make install
5. Restart sendmail:
cd /etc/mail/; make restart
A sendmail patch is coming, at which time this file will be
updated.
20150604:
Generation of legacy formatted entries has been disabled by default
in pwd_mkdb(8), as all base system consumers of the legacy format
were converted to use the new, machine-independent format, which has
been available and supported since FreeBSD 5.x.
Please see the pwd_mkdb(8) manual page for further details.
20150525:
Clang and llvm have been upgraded to 3.6.1 release. Please see the
20141231 entry below for information about prerequisites and upgrading,
if you are not already using 3.5.0 or higher.
20150521:
TI platform code switched to using vendor DTS files and this update
may break existing systems running on Beaglebone, Beaglebone Black,
and Pandaboard:
- dtb files should be regenerated/reinstalled. Filenames are the
same but content is different now
- GPIO addressing was changed, now each GPIO bank (32 pins per bank)
has its own /dev/gpiocX device, e.g. pin 121 on /dev/gpioc0 in old
addressing scheme is now pin 25 on /dev/gpioc3.
- Pandaboard: /etc/ttys should be updated, serial console device is
now /dev/ttyu2, not /dev/ttyu0
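On the Pandaboard this means moving the existing getty line in
/etc/ttys from ttyu0 to ttyu2; a sketch (keep whatever getty arguments
your current ttyu0 line uses, which may differ from the ones shown):
```
# /etc/ttys (Pandaboard): serial console is now ttyu2
ttyu2   "/usr/libexec/getty 3wire"      vt100   onifconsole secure
```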
20150501:
soelim(1) from gnu/usr.bin/groff has been replaced by usr.bin/soelim.
If you need the GNU extensions from groff's soelim(1), install groff
from packages: pkg install groff, or via ports: textproc/groff.
20150423:
chmod, chflags, chown and chgrp now affect symlinks in -R mode as
defined in symlink(7); previously symlinks were silently ignored.
20150415:
The const qualifier has been removed from iconv(3) to comply with
POSIX. The ports tree is aware of this from r384038 onwards.
20150416:
Libraries specified by LIBADD in Makefiles must have a corresponding
DPADD_<lib> variable to ensure correct dependencies. This is now
enforced in src.libnames.mk.
20150324:
Support for SATA controllers handled by the more functional ahci(4),
siis(4) and mvs(4) drivers has been removed from the legacy ata(4)
driver. The ataahci and ataadaptec kernel modules were removed
completely, replaced by the ahci and mvs modules respectively.
20150315:
Clang, llvm and lldb have been upgraded to 3.6.0 release. Please see
the 20141231 entry below for information about prerequisites and
upgrading, if you are not already using 3.5.0 or higher.
20150307:
The 32-bit PowerPC kernel has been changed to a position-independent
executable. This can only be booted with a version of loader(8)
newer than January 31, 2015, so make sure to update both world and
kernel before rebooting.
20150217:
If you are running a -CURRENT kernel since r273872 (Oct 30th, 2014),
but before r278950, the RNG was not seeded properly. Immediately
upgrade the kernel to r278950 or later and regenerate any keys (e.g.
ssh keys or openssl keys) that were generated w/ a kernel from that
range. This does not affect programs that directly used /dev/random
or /dev/urandom. All userland uses of arc4random(3) are affected.
20150210:
The autofs(4) ABI was changed in order to restore binary compatibility
with 10.1-RELEASE. The automountd(8) daemon needs to be rebuilt to work
with the new kernel.
20150131:
The powerpc64 kernel has been changed to a position-independent
executable. This can only be booted with a new version of loader(8),
so make sure to update both world and kernel before rebooting.
20150118:
Clang and llvm have been upgraded to 3.5.1 release. This is a bugfix
only release, no new features have been added. Please see the 20141231
entry below for information about prerequisites and upgrading, if you
are not already using 3.5.0.
20150107:
ELF tools addr2line, elfcopy (strip), nm, size, and strings are now
taken from the ELF Tool Chain project rather than GNU binutils. They
should be drop-in replacements, with the addition of arm64 support.
The WITHOUT_ELFTOOLCHAIN_TOOLS= knob may be used to obtain the
binutils tools, if necessary. See 20150805 for updated information.
20150105:
The default Unbound configuration now enables remote control
using a local socket. Users who have already enabled the
local_unbound service should regenerate their configuration
by running "service local_unbound setup" as root.
20150102:
The GNU texinfo and GNU info pages have been removed.
To be able to view GNU info pages please install texinfo from ports.
20141231:
Clang, llvm and lldb have been upgraded to 3.5.0 release.
As of this release, a prerequisite for building clang, llvm and lldb is
a C++11 capable compiler and C++11 standard library. This means that to
be able to successfully build the cross-tools stage of buildworld, with
clang as the bootstrap compiler, your system compiler or cross compiler
should either be clang 3.3 or later, or gcc 4.8 or later, and your
system C++ library should be libc++, or libstdc++ from gcc 4.8 or
later.
On any standard FreeBSD 10.x or 11.x installation, where clang and
libc++ are on by default (that is, on x86 or arm), this should work out
of the box.
On 9.x installations where clang is enabled by default, e.g. on x86 and
powerpc, libc++ will not be enabled by default, so libc++ should be
built (with clang) and installed first. If both clang and libc++ are
missing, build clang first, then use it to build libc++.
On 8.x and earlier installations, upgrade to 9.x first, and then follow
the instructions for 9.x above.
Sparc64 and mips users are unaffected, as they still use gcc 4.2.1 by
default, and do not build clang.
Many embedded systems are resource constrained, and will not be able to
build clang in a reasonable time, or in some cases at all. In those
cases, cross building bootable systems on amd64 is a workaround.
This new version of clang introduces a number of new warnings, of which
the following are most likely to appear:
-Wabsolute-value
This warns in two cases, for both C and C++:
* When the code is trying to take the absolute value of an unsigned
quantity, which is effectively a no-op, and almost never what was
intended. The code should be fixed, if at all possible. If you are
sure that the unsigned quantity can be safely cast to signed, without
loss of information or undefined behavior, you can add an explicit
cast, or disable the warning.
* When the code is trying to take an absolute value, but the called
abs() variant is for the wrong type, which can lead to truncation.
If you want to disable the warning instead of fixing the code, please
make sure that truncation will not occur, or it might lead to unwanted
side-effects.
-Wtautological-undefined-compare and
-Wundefined-bool-conversion
These warn when C++ code is trying to compare 'this' against NULL, while
'this' should never be NULL in well-defined C++ code. However, there is
some legacy (pre C++11) code out there, which actively abuses this
feature, which was less strictly defined in previous C++ versions.
Squid and openjdk do this, for example. The warning can be turned off
for C++98 and earlier, but compiling the code in C++11 mode might result
in unexpected behavior; for example, the parts of the program that are
unreachable could be optimized away.
20141222:
The old NFS client and server (kernel options NFSCLIENT, NFSSERVER)
kernel sources have been removed. The .h files remain, since some
utilities include them. This will need to be fixed later.
If "mount -t oldnfs ..." is attempted, it will fail.
If the "-o" option on mountd(8), nfsd(8) or nfsstat(1) is used,
the utilities will report errors.
20141121:
The handling of LOCAL_LIB_DIRS has been altered to skip addition of
directories to top level SUBDIR variable when their parent
directory is included in LOCAL_DIRS. Users with build systems with
such hierarchies and without SUBDIR entries in the parent
directory Makefiles should add them or add the directories to
LOCAL_DIRS.
20141109:
faith(4) and faithd(8) have been removed from the base system. Faith
has been obsolete for a very long time.
20141104:
vt(4), the new console driver, is enabled by default. It brings
support for Unicode and double-width characters, as well as
support for UEFI and integration with the KMS kernel video
drivers.
You may need to update your console settings in /etc/rc.conf,
most probably the keymap. During boot, /etc/rc.d/syscons will
indicate what you need to do.
vt(4) still has issues and lacks some features compared to
syscons(4). See the wiki for up-to-date information:
https://wiki.freebsd.org/Newcons
If you want to keep using syscons(4), you can do so by adding
the following line to /boot/loader.conf:
kern.vty=sc
20141102:
pjdfstest has been integrated into kyua as an opt-in test suite.
Please see share/doc/pjdfstest/README for more details on how to
execute it.
20141009:
gperf has been removed from the base system for architectures
that use clang. Ports that require gperf will obtain it from the
devel/gperf port.
20140923:
pjdfstest has been moved from tools/regression/pjdfstest to
contrib/pjdfstest.
20140922:
At svn r271982, the default Linux compat kernel ABI has been adjusted
to 2.6.18 in support of the linux-c6 compat ports infrastructure
update. If you wish to continue using the linux-f10 compat ports,
add compat.linux.osrelease=2.6.16 to your local sysctl.conf. Users are
encouraged to update their linux-compat packages to linux-c6 during
their next update cycle.
20140729:
The ofwfb driver, used to provide a graphics console on PowerPC when
using vt(4), no longer allows mmap() of all physical memory. This
will prevent Xorg on PowerPC with some ATI graphics cards from
initializing properly unless x11-servers/xorg-server is updated to
1.12.4_8 or newer.
20140723:
The xdev targets have been converted to using TARGET and
TARGET_ARCH instead of XDEV and XDEV_ARCH.
20140719:
The default unbound configuration has been modified to address
issues with reverse lookups on networks that use private
address ranges. If you use the local_unbound service, run
"service local_unbound setup" as root to regenerate your
configuration, then "service local_unbound reload" to load the
new configuration.
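The regeneration described above boils down to two commands run as
root; shown here as an illustrative transcript:

```
# Regenerate the local_unbound configuration, then load it (run as root)
service local_unbound setup
service local_unbound reload
```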
20140709:
The GNU texinfo and GNU info pages are no longer built and installed;
the WITH_INFO knob has been added to allow building and installing
them again.
UPDATE: see 20150102 entry on texinfo's removal
20140708:
The GNU readline library is now an INTERNALLIB - that is, it is
statically linked into consumers (GDB and variants) in the base
system, and the shared library is no longer installed. The
devel/readline port is available for third party software that
requires readline.
20140702:
The Itanium architecture (ia64) has been removed from the list of
known architectures. This is the first step in the removal of the
architecture.
20140701:
Commit r268115 has added NFSv4.1 server support, merged from
projects/nfsv4.1-server. Since this includes changes to the
internal interfaces between the NFS related modules, a full
build of the kernel and modules will be necessary.
__FreeBSD_version has been bumped.
20140629:
The WITHOUT_VT_SUPPORT kernel config knob has been renamed
WITHOUT_VT. (The other _SUPPORT knobs have a consistent meaning
which differs from the behaviour controlled by this knob.)
20140619:
The maximal length of the serial number in CTL was increased from 16 to
64 chars, which breaks the ABI. All CTL-related tools, such as ctladm
and ctld, need to be rebuilt to work with the new kernel.
20140606:
The libatf-c and libatf-c++ major versions were downgraded to 0 and
1 respectively to match the upstream numbers. They were out of
sync because, when they were originally added to FreeBSD, the
upstream versions were not respected. These libraries are private
and not yet built by default, so renumbering them should be a
non-issue. However, unclean source trees will yield broken test
programs once the operator executes "make delete-old-libs" after a
"make installworld".
Additionally, the atf-sh binary was made private by moving it into
/usr/libexec/. Already-built shell test programs will keep the
path to the old binary so they will break after "make delete-old"
is run.
If you are using WITH_TESTS=yes (not the default), wipe the object
tree and rebuild from scratch to prevent spurious test failures.
This is only needed once: the misnumbered libraries and misplaced
binaries have been added to OptionalObsoleteFiles.inc so they will
be removed during a clean upgrade.
20140512:
Clang and llvm have been upgraded to 3.4.1 release.
20140508:
We bogusly installed src.opts.mk in /usr/share/mk. This file should
be removed to avoid issues in the future (and has been added to
ObsoleteFiles.inc).
20140505:
/etc/src.conf now affects only builds of the FreeBSD src tree. In the
past, it affected all builds that used the bsd.*.mk files. The old
behavior was a bug, but people may have relied upon it. To get this
behavior back, you can .include /etc/src.conf from /etc/make.conf
(which is still global and isn't changed). This also changes the
behavior of incremental builds inside the tree of individual
directories. Set MAKESYSPATH to ".../share/mk" to do that.
Although this has survived make universe and some upgrade scenarios,
other upgrade scenarios may have broken. At least one form of
temporary breakage was fixed by setting MAKESYSPATH for buildworld
as well. In cases where MAKESYSPATH isn't working with this
setting, you'll need to set it to the full path to your tree.
One side effect of all this cleaning up is that bsd.compiler.mk
is no longer implicitly included by bsd.own.mk. If you wish to
use COMPILER_TYPE, you must now explicitly include bsd.compiler.mk
as well.
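If you relied on the old global behavior, one way to restore it (a
sketch, assuming a standard /etc layout) is to pull src.conf back into
the global make.conf:

```
# /etc/make.conf -- optionally restore the old behavior where src.conf
# settings applied to all bsd.*.mk builds
.include "/etc/src.conf"
```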
20140430:
The lindev device has been removed since /dev/full has been made a
standard device. __FreeBSD_version has been bumped.
20140424:
The knob WITHOUT_VI was added to the base system, which controls
building ex(1), vi(1), etc. Older releases of FreeBSD required ex(1)
in order to reorder the share/termcap file and didn't build ex(1) as a
build tool, so building/installing with WITH_VI is highly advised on
build hosts for older releases.
This issue has been fixed in stable/9 and stable/10 in r277022 and
r276991, respectively.
20140418:
The YES_HESIOD knob has been removed. It has been obsolete for
a decade. Please move to using WITH_HESIOD instead or your builds
will silently lack HESIOD.
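A minimal src.conf sketch for keeping HESIOD support, replacing the
obsolete knob:

```
# /etc/src.conf -- replaces the obsolete YES_HESIOD knob
WITH_HESIOD=yes
```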
20140405:
The uart(4) driver has been changed with respect to its handling
of the low-level console. Previously the uart(4) driver prevented
any process from changing the baudrate or the CLOCAL and HUPCL
control flags. By removing the restrictions, operators can make
changes to the serial console port without having to reboot.
However, when getty(8) is started on the serial device that is
associated with the low-level console, a misconfigured terminal
line in /etc/ttys will now have a real impact.
Before upgrading the kernel, make sure that /etc/ttys has the
serial console device configured as 3wire without baudrate to
preserve the previous behaviour. E.g.:
ttyu0 "/usr/libexec/getty 3wire" vt100 on secure
20140306:
Support for libwrap (TCP wrappers) in rpcbind was disabled by default
to improve performance. To re-enable it, if needed, run rpcbind
with command line option -W.
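To make the -W flag persistent across reboots, it can be set via
rc.conf; a minimal sketch:

```
# /etc/rc.conf -- re-enable libwrap (TCP wrappers) support in rpcbind
rpcbind_flags="-W"
```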
20140226:
Switched back to the GPL dtc compiler due to updates in the upstream
dts files not being supported by the BSDL dtc compiler. You will need
to rebuild your kernel toolchain to pick up the new compiler. Core dumps
may result while building dtb files during a kernel build if you fail
to do so. Set WITHOUT_GPL_DTC if you require the BSDL compiler.
20140216:
Clang and llvm have been upgraded to 3.4 release.
20140216:
The nve(4) driver has been removed. Please use the nfe(4) driver
for NVIDIA nForce MCP Ethernet adapters instead.
20140212:
An ABI incompatibility crept into the libc++ 3.4 import in r261283.
This could cause certain C++ applications using shared libraries built
against the previous version of libc++ to crash. The incompatibility
has now been fixed, but any C++ applications or shared libraries built
between r261283 and r261801 should be recompiled.
20140204:
OpenSSH will now ignore errors caused by the kernel lacking Capsicum
capability mode support. Please note that enabling the feature in the
kernel is still highly recommended.
20140131:
OpenSSH is now built with sandbox support, and will use sandbox as
the default privilege separation method. This requires Capsicum
capability mode support in kernel.
20140128:
The libelf and libdwarf libraries have been updated to newer
versions from upstream. Shared library version numbers for
these two libraries were bumped. Any ports or binaries
requiring these two libraries should be recompiled.
__FreeBSD_version is bumped to 1100006.
20140110:
If a Makefile in a tests/ directory was auto-generating a Kyuafile
instead of providing an explicit one, this would prevent such
Makefile from providing its own Kyuafile in the future during
NO_CLEAN builds. This has been fixed in the Makefiles but manual
intervention is needed to clean an objdir if you use NO_CLEAN:
# find /usr/obj -name Kyuafile | xargs rm -f
20131213:
The behavior of gss_pseudo_random() for the krb5 mechanism
has changed, for applications requesting a longer random string
than produced by the underlying enctype's pseudo-random() function.
In particular, the random string produced from a session key of
enctype aes128-cts-hmac-sha1-96 or aes256-cts-hmac-sha1-96 will
be different at the 17th octet and later, after this change.
The counter used in the PRF+ construction is now encoded as a
big-endian integer in accordance with RFC 4402.
__FreeBSD_version is bumped to 1100004.
20131108:
The WITHOUT_ATF build knob has been removed and its functionality
has been subsumed into the more generic WITHOUT_TESTS. If you were
using the former to disable the build of the ATF libraries, you
should change your settings to use the latter.
20131025:
The default version of mtree is nmtree, which is obtained from
NetBSD. The output is generally the same, but may vary
slightly. If you find you need identical output, adding
"-F freebsd9" to the command line should do the trick. For the
time being, the old mtree is available as fmtree.
20131014:
libbsdyml has been renamed to libyaml and moved to /usr/lib/private.
This will break ports-mgmt/pkg. Rebuild the port, or upgrade to pkg
1.1.4_8 and verify bsdyml not linked in, before running "make
delete-old-libs":
# make -C /usr/ports/ports-mgmt/pkg build deinstall install clean
or
# pkg install pkg; ldd /usr/local/sbin/pkg | grep bsdyml
20131010:
The stable/10 branch has been created in subversion from head
revision r256279.
COMMON ITEMS:
General Notes
-------------
Avoid using make -j when upgrading. While generally safe, there are
sometimes problems using -j to upgrade. If your upgrade fails with
-j, please try again without -j. From time to time in the past there
have been problems using -j with buildworld and/or installworld. This
is especially true when upgrading between "distant" versions (e.g. ones
that cross a major release boundary or several minor releases, or when
several months have passed on the -current branch).
Sometimes, obscure build problems are the result of environment
poisoning. This can happen because the make utility reads its
environment when searching for values for global variables. To run
your build attempts in an "environmental clean room", prefix all make
commands with 'env -i '. See the env(1) manual page for more details.
When upgrading from one major version to another it is generally best to
upgrade to the latest code in the currently installed branch first, then
do an upgrade to the new branch. This is the best-tested upgrade path,
and has the highest probability of being successful. Please try this
approach if you encounter problems with a major version upgrade. Since
the stable 4.x branch point, one has generally been able to upgrade from
anywhere in the most recent stable branch to head / current (or even the
last couple of stable branches). See the top of this file when there's
an exception.
When upgrading a live system, having a root shell around before
installing anything can help undo problems. Not having a root shell
around can lead to problems if pam has changed too much from your
starting point to allow continued authentication after the upgrade.
This file should be read as a log of events. When a later event changes
information of a prior event, the prior event should not be deleted.
Instead, a pointer to the entry with the new information should be
placed in the old entry. Readers of this file should also sanity check
older entries before relying on them blindly. Authors of new entries
should write them with this in mind.
ZFS notes
---------
When upgrading the boot ZFS pool to a new version, always follow
these two steps:
1.) recompile and reinstall the ZFS boot loader and boot block
(this is part of "make buildworld" and "make installworld")
2.) update the ZFS boot block on your boot drive
The following example updates the ZFS boot block on the first
partition (freebsd-boot) of a GPT partitioned drive ada0:
"gpart bootcode -p /boot/gptzfsboot -i 1 ada0"
Non-boot pools do not need these updates.
To build a kernel
-----------------
If you are updating from a prior version of FreeBSD (even one just
a few days old), you should follow this procedure. It is the most
failsafe as it uses a /usr/obj tree with a fresh mini-buildworld:
make kernel-toolchain
make -DALWAYS_CHECK_MAKE buildkernel KERNCONF=YOUR_KERNEL_HERE
make -DALWAYS_CHECK_MAKE installkernel KERNCONF=YOUR_KERNEL_HERE
To test a kernel once
---------------------
If you just want to boot a kernel once (because you are not sure
if it works, or if you want to boot a known bad kernel to provide
debugging information) run
make installkernel KERNCONF=YOUR_KERNEL_HERE KODIR=/boot/testkernel
nextboot -k testkernel
To rebuild everything and install it on the current system.
-----------------------------------------------------------
# Note: sometimes, if you are running current and upgrading from a
# really old current, you may need to do more than is listed here.
<make sure you have good level 0 dumps>
make buildworld
make buildkernel KERNCONF=YOUR_KERNEL_HERE
make installkernel KERNCONF=YOUR_KERNEL_HERE
[1]
<reboot in single user> [3]
mergemaster -Fp [5]
make installworld
mergemaster -Fi [4]
make delete-old [6]
<reboot>
To cross-install current onto a separate partition
--------------------------------------------------
# In this approach we use a separate partition to hold
# current's root, 'usr', and 'var' directories. A partition
# holding "/", "/usr" and "/var" should be about 2GB in
# size.
<make sure you have good level 0 dumps>
<boot into -stable>
make buildworld
make buildkernel KERNCONF=YOUR_KERNEL_HERE
<maybe newfs current's root partition>
<mount current's root partition on directory ${CURRENT_ROOT}>
make installworld DESTDIR=${CURRENT_ROOT} -DDB_FROM_SRC
make distribution DESTDIR=${CURRENT_ROOT} # if newfs'd
make installkernel KERNCONF=YOUR_KERNEL_HERE DESTDIR=${CURRENT_ROOT}
cp /etc/fstab ${CURRENT_ROOT}/etc/fstab # if newfs'd
<edit ${CURRENT_ROOT}/etc/fstab to mount "/" from the correct partition>
<reboot into current>
<do a "native" rebuild/install as described in the previous section>
<maybe install compatibility libraries from ports/misc/compat*>
<reboot>
To upgrade in-place from stable to current
------------------------------------------
<make sure you have good level 0 dumps>
make buildworld [9]
make buildkernel KERNCONF=YOUR_KERNEL_HERE [8]
make installkernel KERNCONF=YOUR_KERNEL_HERE
[1]
<reboot in single user> [3]
mergemaster -Fp [5]
make installworld
mergemaster -Fi [4]
make delete-old [6]
<reboot>
Make sure that you've read the UPDATING file to understand the
tweaks to various things you need. At this point in the life
cycle of current, things change often and you are on your own
to cope. The defaults can also change, so please read ALL of
the UPDATING entries.
Also, if you are tracking -current, you must be subscribed to
freebsd-current@freebsd.org. Make sure that before you update
your sources that you have read and understood all the recent
messages there. If in doubt, please track -stable, which has
far fewer pitfalls.
[1] If you have third party modules, such as vmware, you
should disable them at this point so they don't crash your
system on reboot.
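For modules loaded from loader.conf, disabling usually means
commenting out the load line; the module name below is purely
illustrative:

```
# /boot/loader.conf -- comment out third-party module loads before reboot
#example_module_load="YES"
```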
[3] From the bootblocks, boot -s, and then do
fsck -p
mount -u /
mount -a
cd src
adjkerntz -i # if CMOS is wall time
Also, when doing a major release upgrade, it is required that
you boot into single user mode to do the installworld.
[4] Note: This step is non-optional. Failure to do this step
can result in a significant reduction in the functionality of the
system. Attempting to do it by hand is not recommended and those
that pursue this avenue should read this file carefully, as well
as the archives of freebsd-current and freebsd-hackers mailing lists
for potential gotchas. The -U option is also useful to consider.
See mergemaster(8) for more information.
[5] Usually this step is a no-op. However, from time to time
you may need to do this if you get unknown user in the following
step. It never hurts to do it all the time. You may need to
install a new mergemaster (cd src/usr.sbin/mergemaster && make
install) after the buildworld before this step if you last updated
from current before 20130425 or from -stable before 20130430.
[6] This only deletes old files and directories. Old libraries
can be deleted by "make delete-old-libs", but you have to make
sure that no program is using those libraries anymore.
[8] The new kernel must be able to run existing binaries used by
an installworld. When upgrading across major versions, the new
kernel's configuration must include the correct COMPAT_FREEBSD<n>
option for existing binaries (e.g. COMPAT_FREEBSD11 to run 11.x
binaries). Failure to do so may leave you with a system that is
hard to boot to recover. A GENERIC kernel will include suitable
compatibility options to run binaries from older branches.
Make sure that you merge any new devices from GENERIC since the
last time you updated your kernel config file.
[9] If CPUTYPE is defined in your /etc/make.conf, make sure to use the
"?=" instead of the "=" assignment operator, so that buildworld can
override the CPUTYPE if it needs to.
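A make.conf sketch of the recommended assignment (the CPU type shown
is just an example):

```
# /etc/make.conf -- "?=" lets buildworld override CPUTYPE when it must
CPUTYPE?=core2
```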
MAKEOBJDIRPREFIX must be defined in an environment variable, and
not on the command line, or in /etc/make.conf. buildworld will
warn if it is improperly defined.
FORMAT:
This file contains a list, in reverse chronological order, of major
breakages in tracking -current. It is not guaranteed to be a complete
list of such breakages, and only contains entries since September 23, 2011.
If you need to see UPDATING entries from before that date, you will need
to fetch an UPDATING file from an older FreeBSD release.
Copyright information:
-Copyright 1998-2009 M. Warner Losh. All Rights Reserved.
+Copyright 1998-2009 M. Warner Losh.
Redistribution, publication, translation and use, with or without
modification, in full or in part, in any form or format of this
document are permitted without further permission from the author.
THIS DOCUMENT IS PROVIDED BY WARNER LOSH ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL WARNER LOSH BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Contact Warner Losh if you have any questions about your use of
this document.
$FreeBSD$
Index: projects/clang800-import/contrib/llvm
===================================================================
--- projects/clang800-import/contrib/llvm (revision 343955)
+++ projects/clang800-import/contrib/llvm (revision 343956)
Property changes on: projects/clang800-import/contrib/llvm
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
Merged /head/contrib/llvm:r343571-343955
Index: projects/clang800-import/contrib/netbsd-tests/lib/libm/t_cbrt.c
===================================================================
--- projects/clang800-import/contrib/netbsd-tests/lib/libm/t_cbrt.c (revision 343955)
+++ projects/clang800-import/contrib/netbsd-tests/lib/libm/t_cbrt.c (revision 343956)
@@ -1,378 +1,379 @@
/* $NetBSD: t_cbrt.c,v 1.3 2014/03/03 10:39:08 martin Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Jukka Ruohonen.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
* TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__RCSID("$NetBSD: t_cbrt.c,v 1.3 2014/03/03 10:39:08 martin Exp $");
#include <atf-c.h>
#include <math.h>
#include <stdio.h>
/*
* cbrt(3)
*/
ATF_TC(cbrt_nan);
ATF_TC_HEAD(cbrt_nan, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrt(NaN) == NaN");
}
ATF_TC_BODY(cbrt_nan, tc)
{
const double x = 0.0L / 0.0L;
ATF_CHECK(isnan(x) != 0);
ATF_CHECK(isnan(cbrt(x)) != 0);
}
ATF_TC(cbrt_pow);
ATF_TC_HEAD(cbrt_pow, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrt(3) vs. pow(3)");
}
ATF_TC_BODY(cbrt_pow, tc)
{
const double x[] = { 0.0, 0.005, 1.0, 99.0, 123.123, 9999.0 };
const double eps = 1.0e-14;
double y, z;
size_t i;
for (i = 0; i < __arraycount(x); i++) {
y = cbrt(x[i]);
z = pow(x[i], 1.0 / 3.0);
if (fabs(y - z) > eps)
atf_tc_fail_nonfatal("cbrt(%0.03f) != "
"pow(%0.03f, 1/3)\n", x[i], x[i]);
}
}
ATF_TC(cbrt_inf_neg);
ATF_TC_HEAD(cbrt_inf_neg, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrt(-Inf) == -Inf");
}
ATF_TC_BODY(cbrt_inf_neg, tc)
{
const double x = -1.0L / 0.0L;
double y = cbrt(x);
ATF_CHECK(isinf(y) != 0);
ATF_CHECK(signbit(y) != 0);
}
ATF_TC(cbrt_inf_pos);
ATF_TC_HEAD(cbrt_inf_pos, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrt(+Inf) == +Inf");
}
ATF_TC_BODY(cbrt_inf_pos, tc)
{
const double x = 1.0L / 0.0L;
double y = cbrt(x);
ATF_CHECK(isinf(y) != 0);
ATF_CHECK(signbit(y) == 0);
}
ATF_TC(cbrt_zero_neg);
ATF_TC_HEAD(cbrt_zero_neg, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrt(-0.0) == -0.0");
}
ATF_TC_BODY(cbrt_zero_neg, tc)
{
const double x = -0.0L;
double y = cbrt(x);
if (fabs(y) > 0.0 || signbit(y) == 0)
atf_tc_fail_nonfatal("cbrt(-0.0) != -0.0");
}
ATF_TC(cbrt_zero_pos);
ATF_TC_HEAD(cbrt_zero_pos, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrt(+0.0) == +0.0");
}
ATF_TC_BODY(cbrt_zero_pos, tc)
{
const double x = 0.0L;
double y = cbrt(x);
if (fabs(y) > 0.0 || signbit(y) != 0)
atf_tc_fail_nonfatal("cbrt(+0.0) != +0.0");
}
/*
* cbrtf(3)
*/
ATF_TC(cbrtf_nan);
ATF_TC_HEAD(cbrtf_nan, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtf(NaN) == NaN");
}
ATF_TC_BODY(cbrtf_nan, tc)
{
const float x = 0.0L / 0.0L;
ATF_CHECK(isnan(x) != 0);
ATF_CHECK(isnan(cbrtf(x)) != 0);
}
ATF_TC(cbrtf_powf);
ATF_TC_HEAD(cbrtf_powf, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtf(3) vs. powf(3)");
}
ATF_TC_BODY(cbrtf_powf, tc)
{
const float x[] = { 0.0, 0.005, 1.0, 99.0, 123.123, 9999.0 };
const float eps = 1.0e-5;
float y, z;
size_t i;
for (i = 0; i < __arraycount(x); i++) {
y = cbrtf(x[i]);
z = powf(x[i], 1.0 / 3.0);
if (fabsf(y - z) > eps)
atf_tc_fail_nonfatal("cbrtf(%0.03f) != "
"powf(%0.03f, 1/3)\n", x[i], x[i]);
}
}
ATF_TC(cbrtf_inf_neg);
ATF_TC_HEAD(cbrtf_inf_neg, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtf(-Inf) == -Inf");
}
ATF_TC_BODY(cbrtf_inf_neg, tc)
{
const float x = -1.0L / 0.0L;
float y = cbrtf(x);
ATF_CHECK(isinf(y) != 0);
ATF_CHECK(signbit(y) != 0);
}
ATF_TC(cbrtf_inf_pos);
ATF_TC_HEAD(cbrtf_inf_pos, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtf(+Inf) == +Inf");
}
ATF_TC_BODY(cbrtf_inf_pos, tc)
{
const float x = 1.0L / 0.0L;
float y = cbrtf(x);
ATF_CHECK(isinf(y) != 0);
ATF_CHECK(signbit(y) == 0);
}
ATF_TC(cbrtf_zero_neg);
ATF_TC_HEAD(cbrtf_zero_neg, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtf(-0.0) == -0.0");
}
ATF_TC_BODY(cbrtf_zero_neg, tc)
{
const float x = -0.0L;
float y = cbrtf(x);
if (fabsf(y) > 0.0 || signbit(y) == 0)
atf_tc_fail_nonfatal("cbrtf(-0.0) != -0.0");
}
ATF_TC(cbrtf_zero_pos);
ATF_TC_HEAD(cbrtf_zero_pos, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtf(+0.0) == +0.0");
}
ATF_TC_BODY(cbrtf_zero_pos, tc)
{
const float x = 0.0L;
float y = cbrtf(x);
if (fabsf(y) > 0.0 || signbit(y) != 0)
atf_tc_fail_nonfatal("cbrtf(+0.0) != +0.0");
}
#if !defined(__FreeBSD__) || LDBL_PREC != 53
/*
* cbrtl(3)
*/
ATF_TC(cbrtl_nan);
ATF_TC_HEAD(cbrtl_nan, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtl(NaN) == NaN");
}
ATF_TC_BODY(cbrtl_nan, tc)
{
const long double x = 0.0L / 0.0L;
ATF_CHECK(isnan(x) != 0);
ATF_CHECK(isnan(cbrtl(x)) != 0);
}
ATF_TC(cbrtl_powl);
ATF_TC_HEAD(cbrtl_powl, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtl(3) vs. powl(3)");
}
ATF_TC_BODY(cbrtl_powl, tc)
{
const long double x[] = { 0.0, 0.005, 1.0, 99.0, 123.123, 9999.0 };
const long double eps = 1.0e-15;
long double y, z;
size_t i;
-#if defined(__amd64__) && defined(__clang__) && __clang_major__ >= 7
+#if defined(__amd64__) && defined(__clang__) && __clang_major__ >= 7 && \
+ __FreeBSD_cc_version < 1300002
atf_tc_expect_fail("test fails with clang 7+ - bug 234040");
#endif
for (i = 0; i < __arraycount(x); i++) {
y = cbrtl(x[i]);
#ifdef __FreeBSD__
z = powl(x[i], (long double)1.0 / 3.0);
#else
z = powl(x[i], 1.0 / 3.0);
#endif
if (fabsl(y - z) > eps * fabsl(1 + x[i]))
atf_tc_fail_nonfatal("cbrtl(%0.03Lf) != "
"powl(%0.03Lf, 1/3)\n", x[i], x[i]);
}
}
ATF_TC(cbrtl_inf_neg);
ATF_TC_HEAD(cbrtl_inf_neg, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtl(-Inf) == -Inf");
}
ATF_TC_BODY(cbrtl_inf_neg, tc)
{
const long double x = -1.0L / 0.0L;
long double y = cbrtl(x);
ATF_CHECK(isinf(y) != 0);
ATF_CHECK(signbit(y) != 0);
}
ATF_TC(cbrtl_inf_pos);
ATF_TC_HEAD(cbrtl_inf_pos, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtl(+Inf) == +Inf");
}
ATF_TC_BODY(cbrtl_inf_pos, tc)
{
const long double x = 1.0L / 0.0L;
long double y = cbrtl(x);
ATF_CHECK(isinf(y) != 0);
ATF_CHECK(signbit(y) == 0);
}
ATF_TC(cbrtl_zero_neg);
ATF_TC_HEAD(cbrtl_zero_neg, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtl(-0.0) == -0.0");
}
ATF_TC_BODY(cbrtl_zero_neg, tc)
{
const long double x = -0.0L;
long double y = cbrtl(x);
if (fabsl(y) > 0.0 || signbit(y) == 0)
atf_tc_fail_nonfatal("cbrtl(-0.0) != -0.0");
}
ATF_TC(cbrtl_zero_pos);
ATF_TC_HEAD(cbrtl_zero_pos, tc)
{
atf_tc_set_md_var(tc, "descr", "Test cbrtl(+0.0) == +0.0");
}
ATF_TC_BODY(cbrtl_zero_pos, tc)
{
const long double x = 0.0L;
long double y = cbrtl(x);
if (fabsl(y) > 0.0 || signbit(y) != 0)
atf_tc_fail_nonfatal("cbrtl(+0.0) != +0.0");
}
#endif
ATF_TP_ADD_TCS(tp)
{
ATF_TP_ADD_TC(tp, cbrt_nan);
ATF_TP_ADD_TC(tp, cbrt_pow);
ATF_TP_ADD_TC(tp, cbrt_inf_neg);
ATF_TP_ADD_TC(tp, cbrt_inf_pos);
ATF_TP_ADD_TC(tp, cbrt_zero_neg);
ATF_TP_ADD_TC(tp, cbrt_zero_pos);
ATF_TP_ADD_TC(tp, cbrtf_nan);
ATF_TP_ADD_TC(tp, cbrtf_powf);
ATF_TP_ADD_TC(tp, cbrtf_inf_neg);
ATF_TP_ADD_TC(tp, cbrtf_inf_pos);
ATF_TP_ADD_TC(tp, cbrtf_zero_neg);
ATF_TP_ADD_TC(tp, cbrtf_zero_pos);
#if !defined(__FreeBSD__) || LDBL_PREC != 53
ATF_TP_ADD_TC(tp, cbrtl_nan);
ATF_TP_ADD_TC(tp, cbrtl_powl);
ATF_TP_ADD_TC(tp, cbrtl_inf_neg);
ATF_TP_ADD_TC(tp, cbrtl_inf_pos);
ATF_TP_ADD_TC(tp, cbrtl_zero_neg);
ATF_TP_ADD_TC(tp, cbrtl_zero_pos);
#endif
return atf_no_error();
}
Index: projects/clang800-import/contrib/netbsd-tests
===================================================================
--- projects/clang800-import/contrib/netbsd-tests (revision 343955)
+++ projects/clang800-import/contrib/netbsd-tests (revision 343956)
Property changes on: projects/clang800-import/contrib/netbsd-tests
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
Merged /head/contrib/netbsd-tests:r343571-343955
Index: projects/clang800-import/etc/mtree/BSD.root.dist
===================================================================
--- projects/clang800-import/etc/mtree/BSD.root.dist (revision 343955)
+++ projects/clang800-import/etc/mtree/BSD.root.dist (revision 343956)
@@ -1,120 +1,124 @@
# $FreeBSD$
#
# Please see the file src/etc/mtree/README before making changes to this file.
#
/set type=dir uname=root gname=wheel mode=0755
.
bin
..
boot
defaults
..
dtb
+ allwinner tags=package=runtime
+ ..
overlays tags=package=runtime
+ ..
+ rockchip tags=package=runtime
..
..
firmware
..
lua
..
kernel
..
modules
..
zfs
..
..
dev mode=0555
..
etc
X11
..
authpf
..
autofs
..
bluetooth
..
cron.d
..
defaults
..
devd
..
dma
..
gss
..
mail
..
mtree
..
newsyslog.conf.d
..
ntp mode=0700
..
pam.d
..
periodic
daily
..
monthly
..
security
..
weekly
..
..
pkg
..
ppp
..
rc.conf.d
..
rc.d
..
security
..
ssh
..
ssl
..
syslog.d
..
zfs
..
..
lib
casper
..
geom
..
nvmecontrol
..
..
libexec
resolvconf
..
..
media
..
mnt
..
net
..
proc mode=0555
..
rescue
..
root
..
sbin
..
tmp mode=01777
..
usr
..
var
..
..
Index: projects/clang800-import/lib/libc/stdio/fgetln.c
===================================================================
--- projects/clang800-import/lib/libc/stdio/fgetln.c (revision 343955)
+++ projects/clang800-import/lib/libc/stdio/fgetln.c (revision 343956)
@@ -1,176 +1,166 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1990, 1993
* The Regents of the University of California. All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* Chris Torek.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#if defined(LIBC_SCCS) && !defined(lint)
static char sccsid[] = "@(#)fgetln.c 8.2 (Berkeley) 1/2/94";
#endif /* LIBC_SCCS and not lint */
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "namespace.h"
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "un-namespace.h"
#include "libc_private.h"
#include "local.h"
/*
* Expand the line buffer. Return -1 on error.
-#ifdef notdef
- * The `new size' does not account for a terminating '\0',
- * so we add 1 here.
-#endif
*/
int
__slbexpand(FILE *fp, size_t newsize)
{
void *p;
-#ifdef notdef
- ++newsize;
-#endif
if (fp->_lb._size >= newsize)
return (0);
if (newsize > INT_MAX) {
errno = ENOMEM;
return (-1);
}
if ((p = realloc(fp->_lb._base, newsize)) == NULL)
return (-1);
fp->_lb._base = p;
fp->_lb._size = newsize;
return (0);
}
/*
* Get an input line. The returned pointer often (but not always)
* points into a stdio buffer. Fgetln does not alter the text of
* the returned line (which is thus not a C string because it will
* not necessarily end with '\0'), but does allow callers to modify
* it if they wish. Thus, we set __SMOD in case the caller does.
*/
char *
fgetln(FILE *fp, size_t *lenp)
{
unsigned char *p;
char *ret;
size_t len;
size_t off;
FLOCKFILE_CANCELSAFE(fp);
ORIENT(fp, -1);
/* make sure there is input */
if (fp->_r <= 0 && __srefill(fp)) {
*lenp = 0;
ret = NULL;
goto end;
}
/* look for a newline in the input */
if ((p = memchr((void *)fp->_p, '\n', (size_t)fp->_r)) != NULL) {
/*
* Found one. Flag buffer as modified to keep fseek from
* `optimising' a backward seek, in case the user stomps on
* the text.
*/
p++; /* advance over it */
ret = (char *)fp->_p;
*lenp = len = p - fp->_p;
fp->_flags |= __SMOD;
fp->_r -= len;
fp->_p = p;
goto end;
}
/*
* We have to copy the current buffered data to the line buffer.
* As a bonus, though, we can leave off the __SMOD.
*
* OPTIMISTIC is length that we (optimistically) expect will
* accommodate the `rest' of the string, on each trip through the
* loop below.
*/
#define OPTIMISTIC 80
for (len = fp->_r, off = 0;; len += fp->_r) {
size_t diff;
/*
* Make sure there is room for more bytes. Copy data from
* file buffer to line buffer, refill file and look for
* newline. The loop stops only when we find a newline.
*/
if (__slbexpand(fp, len + OPTIMISTIC))
goto error;
(void)memcpy((void *)(fp->_lb._base + off), (void *)fp->_p,
len - off);
off = len;
if (__srefill(fp)) {
if (__sfeof(fp))
break;
goto error;
}
if ((p = memchr((void *)fp->_p, '\n', (size_t)fp->_r)) == NULL)
continue;
/* got it: finish up the line (like code above) */
p++;
diff = p - fp->_p;
len += diff;
if (__slbexpand(fp, len))
goto error;
(void)memcpy((void *)(fp->_lb._base + off), (void *)fp->_p,
diff);
fp->_r -= diff;
fp->_p = p;
break;
}
*lenp = len;
-#ifdef notdef
- fp->_lb._base[len] = '\0';
-#endif
ret = (char *)fp->_lb._base;
end:
FUNLOCKFILE_CANCELSAFE();
return (ret);
error:
*lenp = 0; /* ??? */
fp->_flags |= __SERR;
ret = NULL;
goto end;
}
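The header comment above is the whole contract: the returned line is not NUL-terminated, `*lenp` is the only valid length, and the pointer is good only until the next stdio operation on the stream. A minimal caller sketch (FreeBSD-specific, since fgetln() is a BSD extension rather than POSIX; /etc/hosts is just a convenient input file):

```c
#include <stdio.h>

int
main(void)
{
	FILE *fp = fopen("/etc/hosts", "r");
	char *line;
	size_t len;

	if (fp == NULL)
		return (1);
	/*
	 * line is NOT a C string: it is not NUL-terminated, so every
	 * use must be bounded by len.  Copy it out (and append '\0')
	 * if it must survive the next read from fp.
	 */
	while ((line = fgetln(fp, &len)) != NULL)
		fwrite(line, 1, len, stdout);
	fclose(fp);
	return (0);
}
```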
Index: projects/clang800-import/lib/libc/sys/getsockopt.2
===================================================================
--- projects/clang800-import/lib/libc/sys/getsockopt.2 (revision 343955)
+++ projects/clang800-import/lib/libc/sys/getsockopt.2 (revision 343956)
@@ -1,589 +1,602 @@
.\" Copyright (c) 1983, 1991, 1993
.\" The Regents of the University of California. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\" may be used to endorse or promote products derived from this software
.\" without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" @(#)getsockopt.2 8.4 (Berkeley) 5/2/95
.\" $FreeBSD$
.\"
-.Dd August 21, 2018
+.Dd February 10, 2019
.Dt GETSOCKOPT 2
.Os
.Sh NAME
.Nm getsockopt ,
.Nm setsockopt
.Nd get and set options on sockets
.Sh LIBRARY
.Lb libc
.Sh SYNOPSIS
.In sys/types.h
.In sys/socket.h
.Ft int
.Fn getsockopt "int s" "int level" "int optname" "void * restrict optval" "socklen_t * restrict optlen"
.Ft int
.Fn setsockopt "int s" "int level" "int optname" "const void *optval" "socklen_t optlen"
.Sh DESCRIPTION
The
.Fn getsockopt
and
.Fn setsockopt
system calls
manipulate the
.Em options
associated with a socket.
Options may exist at multiple
protocol levels; they are always present at the uppermost
.Dq socket
level.
.Pp
When manipulating socket options the level at which the
option resides and the name of the option must be specified.
To manipulate options at the socket level,
.Fa level
is specified as
.Dv SOL_SOCKET .
To manipulate options at any
other level the protocol number of the appropriate protocol
controlling the option is supplied.
For example,
to indicate that an option is to be interpreted by the
.Tn TCP
protocol,
.Fa level
should be set to the protocol number of
.Tn TCP ;
see
.Xr getprotoent 3 .
.Pp
The
.Fa optval
and
.Fa optlen
arguments
are used to access option values for
.Fn setsockopt .
For
.Fn getsockopt
they identify a buffer in which the value for the
requested option(s) is to be returned.
For
.Fn getsockopt ,
.Fa optlen
is a value-result argument, initially containing the
size of the buffer pointed to by
.Fa optval ,
and modified on return to indicate the actual size of
the value returned.
If no option value is
to be supplied or returned,
.Fa optval
may be NULL.
.Pp
The
.Fa optname
argument
and any specified options are passed uninterpreted to the appropriate
protocol module for interpretation.
The include file
.In sys/socket.h
contains definitions for
socket level options, described below.
Options at other protocol levels vary in format and
name; consult the appropriate entries in
section
4 of the manual.
.Pp
Most socket-level options utilize an
.Vt int
argument for
.Fa optval .
For
.Fn setsockopt ,
the argument should be non-zero to enable a boolean option,
or zero if the option is to be disabled.
.Dv SO_LINGER
uses a
.Vt "struct linger"
argument, defined in
.In sys/socket.h ,
which specifies the desired state of the option and the
linger interval (see below).
.Dv SO_SNDTIMEO
and
.Dv SO_RCVTIMEO
use a
.Vt "struct timeval"
argument, defined in
.In sys/time.h .
.Pp
The following options are recognized at the socket level.
For protocol-specific options, see protocol manual pages,
e.g.\&
.Xr ip 4
or
.Xr tcp 4 .
Except as noted, each may be examined with
.Fn getsockopt
and set with
.Fn setsockopt .
.Bl -column SO_ACCEPTFILTER -offset indent
.It Dv SO_DEBUG Ta "enables recording of debugging information"
.It Dv SO_REUSEADDR Ta "enables local address reuse"
.It Dv SO_REUSEPORT Ta "enables duplicate address and port bindings"
.It Dv SO_REUSEPORT_LB Ta "enables duplicate address and port bindings with load balancing"
.It Dv SO_KEEPALIVE Ta "enables keep connections alive"
.It Dv SO_DONTROUTE Ta "enables routing bypass for outgoing messages"
.It Dv SO_LINGER Ta "linger on close if data present"
.It Dv SO_BROADCAST Ta "enables permission to transmit broadcast messages"
.It Dv SO_OOBINLINE Ta "enables reception of out-of-band data in band"
.It Dv SO_SNDBUF Ta "set buffer size for output"
.It Dv SO_RCVBUF Ta "set buffer size for input"
.It Dv SO_SNDLOWAT Ta "set minimum count for output"
.It Dv SO_RCVLOWAT Ta "set minimum count for input"
.It Dv SO_SNDTIMEO Ta "set timeout value for output"
.It Dv SO_RCVTIMEO Ta "set timeout value for input"
.It Dv SO_ACCEPTFILTER Ta "set accept filter on listening socket"
.It Dv SO_NOSIGPIPE Ta
controls generation of
.Dv SIGPIPE
for the socket
.It Dv SO_TIMESTAMP Ta "enables reception of a timestamp with datagrams"
.It Dv SO_BINTIME Ta "enables reception of a timestamp with datagrams"
.It Dv SO_ACCEPTCONN Ta "get listening status of the socket (get only)"
.It Dv SO_DOMAIN Ta "get the domain of the socket (get only)"
.It Dv SO_TYPE Ta "get the type of the socket (get only)"
.It Dv SO_PROTOCOL Ta "get the protocol number for the socket (get only)"
.It Dv SO_PROTOTYPE Ta "SunOS alias for the Linux SO_PROTOCOL (get only)"
.It Dv SO_ERROR Ta "get and clear error on the socket (get only)"
.It Dv SO_SETFIB Ta "set the associated FIB (routing table) for the socket (set only)"
.El
.Pp
The following options are recognized in
.Fx :
.Bl -column SO_LISTENINCQLEN -offset indent
.It Dv SO_LABEL Ta "get MAC label of the socket (get only)"
.It Dv SO_PEERLABEL Ta "get socket's peer's MAC label (get only)"
.It Dv SO_LISTENQLIMIT Ta "get backlog limit of the socket (get only)"
.It Dv SO_LISTENQLEN Ta "get complete queue length of the socket (get only)"
.It Dv SO_LISTENINCQLEN Ta "get incomplete queue length of the socket (get only)"
.It Dv SO_USER_COOKIE Ta "set the 'so_user_cookie' value for the socket (uint32_t, set only)"
.It Dv SO_TS_CLOCK Ta "set specific format of timestamp returned by SO_TIMESTAMP"
.It Dv SO_MAX_PACING_RATE Ta "set the maximum transmit rate in bytes per second for the socket"
.El
.Pp
.Dv SO_DEBUG
enables debugging in the underlying protocol modules.
.Pp
.Dv SO_REUSEADDR
indicates that the rules used in validating addresses supplied
in a
.Xr bind 2
system call should allow reuse of local addresses.
.Pp
.Dv SO_REUSEPORT
allows completely duplicate bindings by multiple processes
if they all set
.Dv SO_REUSEPORT
before binding the port.
This option permits multiple instances of a program to each
receive UDP/IP multicast or broadcast datagrams destined for the bound port.
.Pp
.Dv SO_REUSEPORT_LB
allows completely duplicate bindings by multiple processes
if they all set
.Dv SO_REUSEPORT_LB
before binding the port.
Incoming TCP and UDP connections are distributed among the sharing
processes based on a hash function of local port number, foreign IP
address and port number.
A maximum of 256 processes can share one socket.
.Pp
.Dv SO_KEEPALIVE
enables the
periodic transmission of messages on a connected socket.
Should the
connected party fail to respond to these messages, the connection is
considered broken and processes using the socket are notified via a
.Dv SIGPIPE
signal when attempting to send data.
.Pp
.Dv SO_DONTROUTE
indicates that outgoing messages should
bypass the standard routing facilities.
Instead, messages are directed
to the appropriate network interface according to the network portion
of the destination address.
.Pp
.Dv SO_LINGER
controls the action taken when unsent messages
are queued on socket and a
.Xr close 2
is performed.
If the socket promises reliable delivery of data and
.Dv SO_LINGER
is set,
the system will block the process on the
.Xr close 2
attempt until it is able to transmit the data or until it decides it
is unable to deliver the information (a timeout period, termed the
linger interval, is specified in seconds in the
.Fn setsockopt
system call when
.Dv SO_LINGER
is requested).
If
.Dv SO_LINGER
is disabled and a
.Xr close 2
is issued, the system will process the close in a manner that allows
the process to continue as quickly as possible.
.Pp
The option
.Dv SO_BROADCAST
requests permission to send broadcast datagrams
on the socket.
Broadcast was a privileged operation in earlier versions of the system.
.Pp
With protocols that support out-of-band data, the
.Dv SO_OOBINLINE
option
requests that out-of-band data be placed in the normal data input queue
as received; it will then be accessible with
.Xr recv 2
or
.Xr read 2
calls without the
.Dv MSG_OOB
flag.
Some protocols always behave as if this option is set.
.Pp
.Dv SO_SNDBUF
and
.Dv SO_RCVBUF
are options to adjust the normal
buffer sizes allocated for output and input buffers, respectively.
The buffer size may be increased for high-volume connections,
or may be decreased to limit the possible backlog of incoming data.
The system places an absolute maximum on these values, which is accessible
through the
.Xr sysctl 3
MIB variable
.Dq Li kern.ipc.maxsockbuf .
.Pp
.Dv SO_SNDLOWAT
is an option to set the minimum count for output operations.
Most output operations process all of the data supplied
by the call, delivering data to the protocol for transmission
and blocking as necessary for flow control.
Nonblocking output operations will process as much data as permitted
subject to flow control without blocking, but will process no data
if flow control does not allow the smaller of the low water mark value
or the entire request to be processed.
A
.Xr select 2
operation testing the ability to write to a socket will return true
only if the low water mark amount could be processed.
The default value for
.Dv SO_SNDLOWAT
is set to a convenient size for network efficiency, often 1024.
.Pp
.Dv SO_RCVLOWAT
is an option to set the minimum count for input operations.
In general, receive calls will block until any (non-zero) amount of data
is received, then return with the smaller of the amount available or the amount
requested.
The default value for
.Dv SO_RCVLOWAT
is 1.
If
.Dv SO_RCVLOWAT
is set to a larger value, blocking receive calls normally
wait until they have received the smaller of the low water mark value
or the requested amount.
Receive calls may still return less than the low water mark if an error
occurs, a signal is caught, or the type of data next in the receive queue
is different from that which was returned.
.Pp
.Dv SO_SNDTIMEO
is an option to set a timeout value for output operations.
It accepts a
.Vt "struct timeval"
argument with the number of seconds and microseconds
used to limit waits for output operations to complete.
If a send operation has blocked for this much time,
it returns with a partial count
or with the error
.Er EWOULDBLOCK
if no data were sent.
In the current implementation, this timer is restarted each time additional
data are delivered to the protocol,
implying that the limit applies to output portions ranging in size
from the low water mark to the high water mark for output.
.Pp
.Dv SO_RCVTIMEO
is an option to set a timeout value for input operations.
It accepts a
.Vt "struct timeval"
argument with the number of seconds and microseconds
used to limit waits for input operations to complete.
In the current implementation, this timer is restarted each time additional
data are received by the protocol,
and thus the limit is in effect an inactivity timer.
If a receive operation has been blocked for this much time without
receiving additional data, it returns with a short count
or with the error
.Er EWOULDBLOCK
if no data were received.
.Pp
.Dv SO_SETFIB
can be used to override the default FIB (routing table) for the given socket.
The value must be from 0 to one less than the number returned from
the sysctl
.Em net.fibs .
.Pp
.Dv SO_USER_COOKIE
can be used to set the uint32_t so_user_cookie field in the socket.
The value is a uint32_t and can be used by kernel code that
manipulates traffic related to the socket.
The default value for the field is 0.
As an example, the value can be used as the skipto target or
pipe number in
.Nm ipfw/dummynet .
.Pp
.Dv SO_ACCEPTFILTER
places an
.Xr accept_filter 9
on the socket,
which will filter incoming connections
on a listening stream socket before being presented for
.Xr accept 2 .
Note that
.Xr listen 2
must be called on the socket before
trying to install the filter on it,
or else the
.Fn setsockopt
system call will fail.
.Bd -literal
struct accept_filter_arg {
char af_name[16];
char af_arg[256-16];
};
.Ed
.Pp
The
.Fa optval
argument
should point to a
.Fa struct accept_filter_arg
that will select and configure the
.Xr accept_filter 9 .
The
.Fa af_name
argument
should be filled with the name of the accept filter
that the application wishes to place on the listening socket.
The optional argument
.Fa af_arg
can be passed to the accept
filter specified by
.Fa af_name
to provide additional configuration options at attach time.
Passing in an
.Fa optval
of NULL will remove the filter.
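A sketch of the attach sequence, observing the listen-before-attach rule stated above (FreeBSD-specific: SO_ACCEPTFILTER and the "dataready" filter name, provided by accf_data(9), exist only on FreeBSD, and the accf_data module must be loaded for the setsockopt to succeed):

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct accept_filter_arg afa;
	struct sockaddr_in sin = { .sin_family = AF_INET };
	int s = socket(AF_INET, SOCK_STREAM, 0);

	if (s == -1)
		return (1);
	/* The socket must already be listening, or setsockopt fails. */
	if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1 ||
	    listen(s, 128) == -1)
		return (1);
	memset(&afa, 0, sizeof(afa));
	strlcpy(afa.af_name, "dataready", sizeof(afa.af_name));
	/* af_arg is left empty; accf_data takes no arguments. */
	if (setsockopt(s, SOL_SOCKET, SO_ACCEPTFILTER, &afa,
	    sizeof(afa)) == -1)
		return (1);
	close(s);
	return (0);
}
```

Passing a NULL optval with the same option name removes the filter again, as noted above.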
.Pp
The
.Dv SO_NOSIGPIPE
option controls generation of the
.Dv SIGPIPE
signal normally sent
when writing to a connected socket where the other end has been
closed returns with the error
.Er EPIPE .
.Pp
If the
.Dv SO_TIMESTAMP
or
.Dv SO_BINTIME
option is enabled on a
.Dv SOCK_DGRAM
socket, the
.Xr recvmsg 2
call will return a timestamp corresponding to when the datagram was received.
The
.Va msg_control
field in the
.Vt msghdr
structure points to a buffer that contains a
.Vt cmsghdr
structure followed by a
.Vt "struct timeval"
for
.Dv SO_TIMESTAMP
and
.Vt "struct bintime"
for
.Dv SO_BINTIME .
The
.Vt cmsghdr
fields have the following values for
.Dv SO_TIMESTAMP
by default:
.Bd -literal
cmsg_len = CMSG_LEN(sizeof(struct timeval));
cmsg_level = SOL_SOCKET;
cmsg_type = SCM_TIMESTAMP;
.Ed
.Pp
and for
.Dv SO_BINTIME :
.Bd -literal
cmsg_len = CMSG_LEN(sizeof(struct bintime));
cmsg_level = SOL_SOCKET;
cmsg_type = SCM_BINTIME;
.Ed
.Pp
Additional timestamp types are available by following
.Dv SO_TIMESTAMP
with
.Dv SO_TS_CLOCK ,
which requests a specific timestamp format to be returned instead of
.Dv SCM_TIMESTAMP
when
.Dv SO_TIMESTAMP
is enabled.
These
.Dv SO_TS_CLOCK
values are recognized in
.Fx :
.Bl -column SO_TS_CLOCK -offset indent
.It Dv SO_TS_REALTIME_MICRO Ta "realtime (SCM_TIMESTAMP, struct timeval), default"
.It Dv SO_TS_BINTIME Ta "realtime (SCM_BINTIME, struct bintime)"
.It Dv SO_TS_REALTIME Ta "realtime (SCM_REALTIME, struct timespec)"
.It Dv SO_TS_MONOTONIC Ta "monotonic time (SCM_MONOTONIC, struct timespec)"
.El
.Pp
.Dv SO_ACCEPTCONN ,
.Dv SO_TYPE ,
.Dv SO_PROTOCOL
(and its alias
.Dv SO_PROTOTYPE )
and
.Dv SO_ERROR
are options used only with
.Fn getsockopt .
.Dv SO_ACCEPTCONN
returns whether the socket is currently accepting connections,
that is, whether or not the
.Xr listen 2
system call was invoked on the socket.
.Dv SO_TYPE
returns the type of the socket, such as
.Dv SOCK_STREAM ;
it is useful for servers that inherit sockets on startup.
.Dv SO_PROTOCOL
returns the protocol number for the socket, for
.Dv AF_INET
and
.Dv AF_INET6
address families.
.Dv SO_ERROR
returns any pending error on the socket and clears
the error status.
It may be used to check for asynchronous errors on connected
datagram sockets or for other asynchronous errors.
.Pp
Finally,
.Dv SO_LABEL
returns the MAC label of the socket.
.Dv SO_PEERLABEL
returns the MAC label of the socket's peer.
Note that your kernel must be compiled with MAC support.
See
.Xr mac 3
for more information.
.Dv SO_LISTENQLIMIT
returns the maximal number of queued connections, as set by
.Xr listen 2 .
.Dv SO_LISTENQLEN
returns the number of unaccepted complete connections.
.Dv SO_LISTENINCQLEN
returns the number of unaccepted incomplete connections.
.Pp
.Dv SO_MAX_PACING_RATE
instructs the socket and underlying network adapter layers to limit the
transfer rate to the given unsigned 32-bit value in bytes per second.
.Sh RETURN VALUES
.Rv -std
.Sh ERRORS
-The call succeeds unless:
+The
+.Fn getsockopt
+and
+.Fn setsockopt
+system calls succeed unless:
.Bl -tag -width Er
.It Bq Er EBADF
The argument
.Fa s
is not a valid descriptor.
.It Bq Er ENOTSOCK
The argument
.Fa s
is a file, not a socket.
.It Bq Er ENOPROTOOPT
The option is unknown at the level indicated.
.It Bq Er EFAULT
The address pointed to by
.Fa optval
is not in a valid part of the process address space.
For
.Fn getsockopt ,
this error may also be returned if
.Fa optlen
is not in a valid part of the process address space.
.It Bq Er EINVAL
Installing an
.Xr accept_filter 9
on a non-listening socket was attempted.
.It Bq Er ENOMEM
A memory allocation failed that was required to service the request.
+.El
+.Pp
+The
+.Fn setsockopt
+system call may also return the following error:
+.Bl -tag -width Er
+.It Bq Er ENOBUFS
+Insufficient resources were available in the system
+to perform the operation.
.El
.Sh SEE ALSO
.Xr ioctl 2 ,
.Xr listen 2 ,
.Xr recvmsg 2 ,
.Xr socket 2 ,
.Xr getprotoent 3 ,
.Xr mac 3 ,
.Xr sysctl 3 ,
.Xr ip 4 ,
.Xr ip6 4 ,
.Xr sctp 4 ,
.Xr tcp 4 ,
.Xr protocols 5 ,
.Xr sysctl 8 ,
.Xr accept_filter 9 ,
.Xr bintime 9
.Sh HISTORY
The
.Fn getsockopt
and
.Fn setsockopt
system calls appeared in
.Bx 4.2 .
.Sh BUGS
Several of the socket options should be handled at lower levels of the system.
Index: projects/clang800-import/lib/libc/x86/sys/__vdso_gettc.c
===================================================================
--- projects/clang800-import/lib/libc/x86/sys/__vdso_gettc.c (revision 343955)
+++ projects/clang800-import/lib/libc/x86/sys/__vdso_gettc.c (revision 343956)
@@ -1,333 +1,295 @@
/*-
* Copyright (c) 2012 Konstantin Belousov <kib@FreeBSD.org>
- * Copyright (c) 2016, 2017 The FreeBSD Foundation
+ * Copyright (c) 2016, 2017, 2019 The FreeBSD Foundation
* All rights reserved.
*
* Portions of this software were developed by Konstantin Belousov
* under sponsorship from the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include "namespace.h"
#include <sys/capsicum.h>
#include <sys/elf.h>
#include <sys/fcntl.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/vdso.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include "un-namespace.h"
#include <machine/atomic.h>
#include <machine/cpufunc.h>
#include <machine/specialreg.h>
#include <dev/acpica/acpi_hpet.h>
#ifdef WANT_HYPERV
#include <dev/hyperv/hyperv.h>
#endif
+#include <x86/ifunc.h>
#include "libc_private.h"
-static enum LMB {
- LMB_UNKNOWN,
- LMB_NONE,
- LMB_MFENCE,
- LMB_LFENCE
-} lfence_works = LMB_UNKNOWN;
-
static void
cpuidp(u_int leaf, u_int p[4])
{
__asm __volatile(
#if defined(__i386__)
" pushl %%ebx\n"
#endif
" cpuid\n"
#if defined(__i386__)
" movl %%ebx,%1\n"
" popl %%ebx"
#endif
: "=a" (p[0]),
#if defined(__i386__)
"=r" (p[1]),
#elif defined(__amd64__)
"=b" (p[1]),
#else
#error "Arch"
#endif
"=c" (p[2]), "=d" (p[3])
: "0" (leaf));
}
-static enum LMB
-select_lmb(void)
+static void
+rdtsc_mb_lfence(void)
{
- u_int p[4];
- static const char intel_id[] = "GenuntelineI";
- cpuidp(0, p);
- return (memcmp(p + 1, intel_id, sizeof(intel_id) - 1) == 0 ?
- LMB_LFENCE : LMB_MFENCE);
+ lfence();
}
static void
-init_fence(void)
+rdtsc_mb_mfence(void)
{
-#if defined(__i386__)
- u_int cpuid_supported, p[4];
- lfence_works = LMB_NONE;
- __asm __volatile(
- " pushfl\n"
- " popl %%eax\n"
- " movl %%eax,%%ecx\n"
- " xorl $0x200000,%%eax\n"
- " pushl %%eax\n"
- " popfl\n"
- " pushfl\n"
- " popl %%eax\n"
- " xorl %%eax,%%ecx\n"
- " je 1f\n"
- " movl $1,%0\n"
- " jmp 2f\n"
- "1: movl $0,%0\n"
- "2:\n"
- : "=r" (cpuid_supported) : : "eax", "ecx", "cc");
- if (cpuid_supported) {
- cpuidp(0x1, p);
- if ((p[3] & CPUID_SSE2) != 0)
- lfence_works = select_lmb();
- }
-#elif defined(__amd64__)
- lfence_works = select_lmb();
-#else
-#error "Arch"
-#endif
+ mfence();
}
static void
-rdtsc_mb(void)
+rdtsc_mb_none(void)
{
+}
-again:
- if (__predict_true(lfence_works == LMB_LFENCE)) {
- lfence();
- return;
- } else if (lfence_works == LMB_MFENCE) {
- mfence();
- return;
- } else if (lfence_works == LMB_NONE) {
- return;
- }
- init_fence();
- goto again;
+DEFINE_UIFUNC(static, void, rdtsc_mb, (void), static)
+{
+ u_int p[4];
+ /* Not a typo, string matches our cpuidp() registers use. */
+ static const char intel_id[] = "GenuntelineI";
+
+ if ((cpu_feature & CPUID_SSE2) == 0)
+ return (rdtsc_mb_none);
+ cpuidp(0, p);
+ return (memcmp(p + 1, intel_id, sizeof(intel_id) - 1) == 0 ?
+ rdtsc_mb_lfence : rdtsc_mb_mfence);
}
static u_int
__vdso_gettc_rdtsc_low(const struct vdso_timehands *th)
{
u_int rv;
rdtsc_mb();
__asm __volatile("rdtsc; shrd %%cl, %%edx, %0"
: "=a" (rv) : "c" (th->th_x86_shift) : "edx");
return (rv);
}
static u_int
__vdso_rdtsc32(void)
{
rdtsc_mb();
return (rdtsc32());
}
#define HPET_DEV_MAP_MAX 10
static volatile char *hpet_dev_map[HPET_DEV_MAP_MAX];
static void
__vdso_init_hpet(uint32_t u)
{
static const char devprefix[] = "/dev/hpet";
char devname[64], *c, *c1, t;
volatile char *new_map, *old_map;
unsigned int mode;
uint32_t u1;
int fd;
c1 = c = stpcpy(devname, devprefix);
u1 = u;
do {
*c++ = u1 % 10 + '0';
u1 /= 10;
} while (u1 != 0);
*c = '\0';
for (c--; c1 != c; c1++, c--) {
t = *c1;
*c1 = *c;
*c = t;
}
old_map = hpet_dev_map[u];
if (old_map != NULL)
return;
/*
* Explicitly check for capability mode to avoid
* triggering trap_enocap on the device open by absolute path.
*/
if ((cap_getmode(&mode) == 0 && mode != 0) ||
(fd = _open(devname, O_RDONLY)) == -1) {
/* Prevent the caller from re-entering. */
atomic_cmpset_rel_ptr((volatile uintptr_t *)&hpet_dev_map[u],
(uintptr_t)old_map, (uintptr_t)MAP_FAILED);
return;
}
new_map = mmap(NULL, PAGE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
_close(fd);
if (atomic_cmpset_rel_ptr((volatile uintptr_t *)&hpet_dev_map[u],
(uintptr_t)old_map, (uintptr_t)new_map) == 0 &&
new_map != MAP_FAILED)
munmap((void *)new_map, PAGE_SIZE);
}
#ifdef WANT_HYPERV
#define HYPERV_REFTSC_DEVPATH "/dev/" HYPERV_REFTSC_DEVNAME
/*
* NOTE:
* We use 'NULL' for this variable to indicate that initialization
* is required, and 'MAP_FAILED' to indicate that the Hyper-V
* reference TSC cannot be used, e.g., in a misconfigured jail.
*/
static struct hyperv_reftsc *hyperv_ref_tsc;
static void
__vdso_init_hyperv_tsc(void)
{
int fd;
unsigned int mode;
if (cap_getmode(&mode) == 0 && mode != 0)
goto fail;
fd = _open(HYPERV_REFTSC_DEVPATH, O_RDONLY);
if (fd < 0)
goto fail;
hyperv_ref_tsc = mmap(NULL, sizeof(*hyperv_ref_tsc), PROT_READ,
MAP_SHARED, fd, 0);
_close(fd);
return;
fail:
/* Prevent the caller from re-entering. */
hyperv_ref_tsc = MAP_FAILED;
}
static int
__vdso_hyperv_tsc(struct hyperv_reftsc *tsc_ref, u_int *tc)
{
uint64_t disc, ret, tsc, scale;
uint32_t seq;
int64_t ofs;
while ((seq = atomic_load_acq_int(&tsc_ref->tsc_seq)) != 0) {
scale = tsc_ref->tsc_scale;
ofs = tsc_ref->tsc_ofs;
rdtsc_mb();
tsc = rdtsc();
/* ret = ((tsc * scale) >> 64) + ofs */
__asm__ __volatile__ ("mulq %3" :
"=d" (ret), "=a" (disc) :
"a" (tsc), "r" (scale));
ret += ofs;
atomic_thread_fence_acq();
if (tsc_ref->tsc_seq == seq) {
*tc = ret;
return (0);
}
/* Sequence changed; re-sync. */
}
return (ENOSYS);
}
#endif /* WANT_HYPERV */
#pragma weak __vdso_gettc
int
__vdso_gettc(const struct vdso_timehands *th, u_int *tc)
{
volatile char *map;
uint32_t idx;
switch (th->th_algo) {
case VDSO_TH_ALGO_X86_TSC:
*tc = th->th_x86_shift > 0 ? __vdso_gettc_rdtsc_low(th) :
__vdso_rdtsc32();
return (0);
case VDSO_TH_ALGO_X86_HPET:
idx = th->th_x86_hpet_idx;
if (idx >= HPET_DEV_MAP_MAX)
return (ENOSYS);
map = (volatile char *)atomic_load_acq_ptr(
(volatile uintptr_t *)&hpet_dev_map[idx]);
if (map == NULL) {
__vdso_init_hpet(idx);
map = (volatile char *)atomic_load_acq_ptr(
(volatile uintptr_t *)&hpet_dev_map[idx]);
}
if (map == MAP_FAILED)
return (ENOSYS);
*tc = *(volatile uint32_t *)(map + HPET_MAIN_COUNTER);
return (0);
#ifdef WANT_HYPERV
case VDSO_TH_ALGO_X86_HVTSC:
if (hyperv_ref_tsc == NULL)
__vdso_init_hyperv_tsc();
if (hyperv_ref_tsc == MAP_FAILED)
return (ENOSYS);
return (__vdso_hyperv_tsc(hyperv_ref_tsc, tc));
#endif
default:
return (ENOSYS);
}
}
#pragma weak __vdso_gettimekeep
int
__vdso_gettimekeep(struct vdso_timekeep **tk)
{
return (_elf_aux_info(AT_TIMEKEEP, tk, sizeof(*tk)));
}
Index: projects/clang800-import/lib/libcasper/services/cap_syslog/cap_syslog.c
===================================================================
--- projects/clang800-import/lib/libcasper/services/cap_syslog/cap_syslog.c (revision 343955)
+++ projects/clang800-import/lib/libcasper/services/cap_syslog/cap_syslog.c (revision 343956)
@@ -1,201 +1,224 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2017 Mariusz Zaborski <oshogbo@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/dnv.h>
#include <sys/nv.h>
#include <assert.h>
#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <syslog.h>
#include <libcasper.h>
#include <libcasper_service.h>
#include "cap_syslog.h"
#define CAP_SYSLOG_LIMIT 2048
void
cap_syslog(cap_channel_t *chan, int pri, const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
cap_vsyslog(chan, pri, fmt, ap);
va_end(ap);
}
void
cap_vsyslog(cap_channel_t *chan, int priority, const char *fmt, va_list ap)
{
nvlist_t *nvl;
char message[CAP_SYSLOG_LIMIT];
(void)vsnprintf(message, sizeof(message), fmt, ap);
nvl = nvlist_create(0);
nvlist_add_string(nvl, "cmd", "vsyslog");
nvlist_add_number(nvl, "priority", priority);
nvlist_add_string(nvl, "message", message);
nvl = cap_xfer_nvlist(chan, nvl);
if (nvl == NULL) {
return;
}
nvlist_destroy(nvl);
}
void
cap_openlog(cap_channel_t *chan, const char *ident, int logopt, int facility)
{
nvlist_t *nvl;
nvl = nvlist_create(0);
nvlist_add_string(nvl, "cmd", "openlog");
if (ident != NULL) {
nvlist_add_string(nvl, "ident", ident);
}
nvlist_add_number(nvl, "logopt", logopt);
nvlist_add_number(nvl, "facility", facility);
+ if (logopt & LOG_PERROR) {
+ nvlist_add_descriptor(nvl, "stderr", STDERR_FILENO);
+ }
nvl = cap_xfer_nvlist(chan, nvl);
if (nvl == NULL) {
return;
}
nvlist_destroy(nvl);
}
void
cap_closelog(cap_channel_t *chan)
{
nvlist_t *nvl;
nvl = nvlist_create(0);
nvlist_add_string(nvl, "cmd", "closelog");
nvl = cap_xfer_nvlist(chan, nvl);
if (nvl == NULL) {
return;
}
nvlist_destroy(nvl);
}
int
cap_setlogmask(cap_channel_t *chan, int maskpri)
{
nvlist_t *nvl;
int omask;
nvl = nvlist_create(0);
nvlist_add_string(nvl, "cmd", "setlogmask");
nvlist_add_number(nvl, "maskpri", maskpri);
nvl = cap_xfer_nvlist(chan, nvl);
omask = nvlist_get_number(nvl, "omask");
nvlist_destroy(nvl);
return (omask);
}
/*
* Service functions.
*/
static char *LogTag;
+static int prev_stderr = -1;
static void
slog_vsyslog(const nvlist_t *limits __unused, const nvlist_t *nvlin,
nvlist_t *nvlout __unused)
{
syslog(nvlist_get_number(nvlin, "priority"), "%s",
nvlist_get_string(nvlin, "message"));
}
static void
slog_openlog(const nvlist_t *limits __unused, const nvlist_t *nvlin,
nvlist_t *nvlout __unused)
{
const char *ident;
+ uint64_t logopt;
+ int stderr_fd;
ident = dnvlist_get_string(nvlin, "ident", NULL);
if (ident != NULL) {
free(LogTag);
LogTag = strdup(ident);
}
- openlog(LogTag, nvlist_get_number(nvlin, "logopt"),
- nvlist_get_number(nvlin, "facility"));
+ logopt = nvlist_get_number(nvlin, "logopt");
+ if (logopt & LOG_PERROR) {
+ stderr_fd = dnvlist_get_descriptor(nvlin, "stderr", -1);
+ if (prev_stderr == -1)
+ prev_stderr = dup(STDERR_FILENO);
+ if (prev_stderr != -1)
+ (void)dup2(stderr_fd, STDERR_FILENO);
+ } else if (prev_stderr != -1) {
+ (void)dup2(prev_stderr, STDERR_FILENO);
+ close(prev_stderr);
+ prev_stderr = -1;
+ }
+ openlog(LogTag, logopt, nvlist_get_number(nvlin, "facility"));
}
static void
slog_closelog(const nvlist_t *limits __unused, const nvlist_t *nvlin __unused,
nvlist_t *nvlout __unused)
{
closelog();
free(LogTag);
LogTag = NULL;
+
+ if (prev_stderr != -1) {
+ (void)dup2(prev_stderr, STDERR_FILENO);
+ close(prev_stderr);
+ prev_stderr = -1;
+ }
}
static void
slog_setlogmask(const nvlist_t *limits __unused, const nvlist_t *nvlin,
nvlist_t *nvlout)
{
int omask;
omask = setlogmask(nvlist_get_number(nvlin, "maskpri"));
nvlist_add_number(nvlout, "omask", omask);
}
static int
syslog_command(const char *cmd, const nvlist_t *limits, nvlist_t *nvlin,
nvlist_t *nvlout)
{
if (strcmp(cmd, "vsyslog") == 0) {
slog_vsyslog(limits, nvlin, nvlout);
} else if (strcmp(cmd, "openlog") == 0) {
slog_openlog(limits, nvlin, nvlout);
} else if (strcmp(cmd, "closelog") == 0) {
slog_closelog(limits, nvlin, nvlout);
} else if (strcmp(cmd, "setlogmask") == 0) {
slog_setlogmask(limits, nvlin, nvlout);
} else {
return (EINVAL);
}
return (0);
}
-CREATE_SERVICE("system.syslog", NULL, syslog_command, CASPER_SERVICE_STDIO);
+CREATE_SERVICE("system.syslog", NULL, syslog_command, 0);
Index: projects/clang800-import/lib/libutil/quotafile.c
===================================================================
--- projects/clang800-import/lib/libutil/quotafile.c (revision 343955)
+++ projects/clang800-import/lib/libutil/quotafile.c (revision 343956)
@@ -1,600 +1,607 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2008 Dag-Erling Coïdan Smørgrav
* Copyright (c) 2008 Marshall Kirk McKusick
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer
* in this position and unchanged.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#include <sys/types.h>
#include <sys/endian.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <ufs/ufs/quota.h>
#include <errno.h>
#include <fcntl.h>
#include <fstab.h>
#include <grp.h>
#include <pwd.h>
#include <libutil.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
struct quotafile {
int fd; /* -1 means using quotactl for access */
int accmode; /* access mode */
int wordsize; /* 32-bit or 64-bit limits */
int quotatype; /* USRQUOTA or GRPQUOTA */
dev_t dev; /* device */
char fsname[MAXPATHLEN + 1]; /* mount point of filesystem */
char qfname[MAXPATHLEN + 1]; /* quota file if not using quotactl */
};
static const char *qfextension[] = INITQFNAMES;
/*
* Check to see if a particular quota is to be enabled.
*/
static int
hasquota(struct fstab *fs, int type, char *qfnamep, int qfbufsize)
{
char *opt;
char *cp;
struct statfs sfb;
char buf[BUFSIZ];
static char initname, usrname[100], grpname[100];
/*
* 1) we only need one of these
* 2) fstab may specify a different filename
*/
if (!initname) {
(void)snprintf(usrname, sizeof(usrname), "%s%s",
qfextension[USRQUOTA], QUOTAFILENAME);
(void)snprintf(grpname, sizeof(grpname), "%s%s",
qfextension[GRPQUOTA], QUOTAFILENAME);
initname = 1;
}
strcpy(buf, fs->fs_mntops);
for (opt = strtok(buf, ","); opt; opt = strtok(NULL, ",")) {
if ((cp = strchr(opt, '=')))
*cp++ = '\0';
if (type == USRQUOTA && strcmp(opt, usrname) == 0)
break;
if (type == GRPQUOTA && strcmp(opt, grpname) == 0)
break;
}
if (!opt)
return (0);
/*
* Ensure that the filesystem is mounted.
*/
if (statfs(fs->fs_file, &sfb) != 0 ||
strcmp(fs->fs_file, sfb.f_mntonname)) {
return (0);
}
if (cp) {
strlcpy(qfnamep, cp, qfbufsize);
} else {
(void)snprintf(qfnamep, qfbufsize, "%s/%s.%s", fs->fs_file,
QUOTAFILENAME, qfextension[type]);
}
return (1);
}
struct quotafile *
quota_open(struct fstab *fs, int quotatype, int openflags)
{
struct quotafile *qf;
struct dqhdr64 dqh;
struct group *grp;
struct stat st;
- int qcmd, serrno;
+ int qcmd, serrno = 0;
+ int ufs;
if ((qf = calloc(1, sizeof(*qf))) == NULL)
return (NULL);
qf->fd = -1;
qf->quotatype = quotatype;
strlcpy(qf->fsname, fs->fs_file, sizeof(qf->fsname));
if (stat(qf->fsname, &st) != 0)
goto error;
qf->dev = st.st_dev;
qcmd = QCMD(Q_GETQUOTASIZE, quotatype);
+ ufs = strcmp(fs->fs_vfstype, "ufs") == 0;
+ /*
+ * hasquota() fills in qf->qfname, which quota_on() and the
+ * quota-file fallback below rely on.  Only UFS has a quota file,
+ * so for UFS call it before trying quotactl().
+ */
+ if (ufs) {
+ serrno = hasquota(fs, quotatype, qf->qfname,
+ sizeof(qf->qfname));
+ }
if (quotactl(qf->fsname, qcmd, 0, &qf->wordsize) == 0)
return (qf);
- /* We only check the quota file for ufs */
- if (strcmp(fs->fs_vfstype, "ufs")) {
+ if (!ufs) {
errno = 0;
goto error;
- }
- serrno = hasquota(fs, quotatype, qf->qfname, sizeof(qf->qfname));
- if (serrno == 0) {
+ } else if (serrno == 0) {
errno = EOPNOTSUPP;
goto error;
}
qf->accmode = openflags & O_ACCMODE;
if ((qf->fd = open(qf->qfname, qf->accmode|O_CLOEXEC)) < 0 &&
(openflags & O_CREAT) != O_CREAT)
goto error;
/* File open worked, so process it */
if (qf->fd != -1) {
qf->wordsize = 32;
switch (read(qf->fd, &dqh, sizeof(dqh))) {
case -1:
goto error;
case sizeof(dqh):
if (strcmp(dqh.dqh_magic, Q_DQHDR64_MAGIC) != 0) {
/* no magic, assume 32 bits */
qf->wordsize = 32;
return (qf);
}
if (be32toh(dqh.dqh_version) != Q_DQHDR64_VERSION ||
be32toh(dqh.dqh_hdrlen) != sizeof(struct dqhdr64) ||
be32toh(dqh.dqh_reclen) != sizeof(struct dqblk64)) {
/* correct magic, wrong version / lengths */
errno = EINVAL;
goto error;
}
qf->wordsize = 64;
return (qf);
default:
qf->wordsize = 32;
return (qf);
}
/* not reached */
}
/* open failed, but O_CREAT was specified, so create a new file */
if ((qf->fd = open(qf->qfname, O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0)) <
0)
goto error;
qf->wordsize = 64;
memset(&dqh, 0, sizeof(dqh));
memcpy(dqh.dqh_magic, Q_DQHDR64_MAGIC, sizeof(dqh.dqh_magic));
dqh.dqh_version = htobe32(Q_DQHDR64_VERSION);
dqh.dqh_hdrlen = htobe32(sizeof(struct dqhdr64));
dqh.dqh_reclen = htobe32(sizeof(struct dqblk64));
if (write(qf->fd, &dqh, sizeof(dqh)) != sizeof(dqh)) {
/* it was one we created ourselves */
unlink(qf->qfname);
goto error;
}
grp = getgrnam(QUOTAGROUP);
fchown(qf->fd, 0, grp ? grp->gr_gid : 0);
fchmod(qf->fd, 0640);
return (qf);
error:
serrno = errno;
/* did we have an open file? */
if (qf->fd != -1)
close(qf->fd);
free(qf);
errno = serrno;
return (NULL);
}
void
quota_close(struct quotafile *qf)
{
if (qf->fd != -1)
close(qf->fd);
free(qf);
}
int
quota_on(struct quotafile *qf)
{
int qcmd;
qcmd = QCMD(Q_QUOTAON, qf->quotatype);
return (quotactl(qf->fsname, qcmd, 0, qf->qfname));
}
int
quota_off(struct quotafile *qf)
{
return (quotactl(qf->fsname, QCMD(Q_QUOTAOFF, qf->quotatype), 0, 0));
}
const char *
quota_fsname(const struct quotafile *qf)
{
return (qf->fsname);
}
const char *
quota_qfname(const struct quotafile *qf)
{
return (qf->qfname);
}
int
quota_check_path(const struct quotafile *qf, const char *path)
{
struct stat st;
if (stat(path, &st) == -1)
return (-1);
return (st.st_dev == qf->dev);
}
int
quota_maxid(struct quotafile *qf)
{
struct stat st;
int maxid;
if (stat(qf->qfname, &st) < 0)
return (0);
switch (qf->wordsize) {
case 32:
maxid = st.st_size / sizeof(struct dqblk32) - 1;
break;
case 64:
maxid = st.st_size / sizeof(struct dqblk64) - 2;
break;
default:
maxid = 0;
break;
}
return (maxid > 0 ? maxid : 0);
}
static int
quota_read32(struct quotafile *qf, struct dqblk *dqb, int id)
{
struct dqblk32 dqb32;
off_t off;
off = id * sizeof(struct dqblk32);
if (lseek(qf->fd, off, SEEK_SET) == -1)
return (-1);
switch (read(qf->fd, &dqb32, sizeof(dqb32))) {
case 0:
memset(dqb, 0, sizeof(*dqb));
return (0);
case sizeof(dqb32):
dqb->dqb_bhardlimit = dqb32.dqb_bhardlimit;
dqb->dqb_bsoftlimit = dqb32.dqb_bsoftlimit;
dqb->dqb_curblocks = dqb32.dqb_curblocks;
dqb->dqb_ihardlimit = dqb32.dqb_ihardlimit;
dqb->dqb_isoftlimit = dqb32.dqb_isoftlimit;
dqb->dqb_curinodes = dqb32.dqb_curinodes;
dqb->dqb_btime = dqb32.dqb_btime;
dqb->dqb_itime = dqb32.dqb_itime;
return (0);
default:
return (-1);
}
}
static int
quota_read64(struct quotafile *qf, struct dqblk *dqb, int id)
{
struct dqblk64 dqb64;
off_t off;
off = sizeof(struct dqhdr64) + id * sizeof(struct dqblk64);
if (lseek(qf->fd, off, SEEK_SET) == -1)
return (-1);
switch (read(qf->fd, &dqb64, sizeof(dqb64))) {
case 0:
memset(dqb, 0, sizeof(*dqb));
return (0);
case sizeof(dqb64):
dqb->dqb_bhardlimit = be64toh(dqb64.dqb_bhardlimit);
dqb->dqb_bsoftlimit = be64toh(dqb64.dqb_bsoftlimit);
dqb->dqb_curblocks = be64toh(dqb64.dqb_curblocks);
dqb->dqb_ihardlimit = be64toh(dqb64.dqb_ihardlimit);
dqb->dqb_isoftlimit = be64toh(dqb64.dqb_isoftlimit);
dqb->dqb_curinodes = be64toh(dqb64.dqb_curinodes);
dqb->dqb_btime = be64toh(dqb64.dqb_btime);
dqb->dqb_itime = be64toh(dqb64.dqb_itime);
return (0);
default:
return (-1);
}
}
int
quota_read(struct quotafile *qf, struct dqblk *dqb, int id)
{
int qcmd;
if (qf->fd == -1) {
qcmd = QCMD(Q_GETQUOTA, qf->quotatype);
return (quotactl(qf->fsname, qcmd, id, dqb));
}
switch (qf->wordsize) {
case 32:
return (quota_read32(qf, dqb, id));
case 64:
return (quota_read64(qf, dqb, id));
default:
errno = EINVAL;
return (-1);
}
/* not reached */
}
#define CLIP32(u64) ((u64) > UINT32_MAX ? UINT32_MAX : (uint32_t)(u64))
static int
quota_write32(struct quotafile *qf, const struct dqblk *dqb, int id)
{
struct dqblk32 dqb32;
off_t off;
dqb32.dqb_bhardlimit = CLIP32(dqb->dqb_bhardlimit);
dqb32.dqb_bsoftlimit = CLIP32(dqb->dqb_bsoftlimit);
dqb32.dqb_curblocks = CLIP32(dqb->dqb_curblocks);
dqb32.dqb_ihardlimit = CLIP32(dqb->dqb_ihardlimit);
dqb32.dqb_isoftlimit = CLIP32(dqb->dqb_isoftlimit);
dqb32.dqb_curinodes = CLIP32(dqb->dqb_curinodes);
dqb32.dqb_btime = CLIP32(dqb->dqb_btime);
dqb32.dqb_itime = CLIP32(dqb->dqb_itime);
off = id * sizeof(struct dqblk32);
if (lseek(qf->fd, off, SEEK_SET) == -1)
return (-1);
if (write(qf->fd, &dqb32, sizeof(dqb32)) == sizeof(dqb32))
return (0);
return (-1);
}
static int
quota_write64(struct quotafile *qf, const struct dqblk *dqb, int id)
{
struct dqblk64 dqb64;
off_t off;
dqb64.dqb_bhardlimit = htobe64(dqb->dqb_bhardlimit);
dqb64.dqb_bsoftlimit = htobe64(dqb->dqb_bsoftlimit);
dqb64.dqb_curblocks = htobe64(dqb->dqb_curblocks);
dqb64.dqb_ihardlimit = htobe64(dqb->dqb_ihardlimit);
dqb64.dqb_isoftlimit = htobe64(dqb->dqb_isoftlimit);
dqb64.dqb_curinodes = htobe64(dqb->dqb_curinodes);
dqb64.dqb_btime = htobe64(dqb->dqb_btime);
dqb64.dqb_itime = htobe64(dqb->dqb_itime);
off = sizeof(struct dqhdr64) + id * sizeof(struct dqblk64);
if (lseek(qf->fd, off, SEEK_SET) == -1)
return (-1);
if (write(qf->fd, &dqb64, sizeof(dqb64)) == sizeof(dqb64))
return (0);
return (-1);
}
int
quota_write_usage(struct quotafile *qf, struct dqblk *dqb, int id)
{
struct dqblk dqbuf;
int qcmd;
if (qf->fd == -1) {
qcmd = QCMD(Q_SETUSE, qf->quotatype);
return (quotactl(qf->fsname, qcmd, id, dqb));
}
/*
* Have to do read-modify-write of quota in file.
*/
if ((qf->accmode & O_RDWR) != O_RDWR) {
errno = EBADF;
return (-1);
}
if (quota_read(qf, &dqbuf, id) != 0)
return (-1);
/*
* Reset time limit if have a soft limit and were
* previously under it, but are now over it.
*/
if (dqbuf.dqb_bsoftlimit && id != 0 &&
dqbuf.dqb_curblocks < dqbuf.dqb_bsoftlimit &&
dqb->dqb_curblocks >= dqbuf.dqb_bsoftlimit)
dqbuf.dqb_btime = 0;
if (dqbuf.dqb_isoftlimit && id != 0 &&
dqbuf.dqb_curinodes < dqbuf.dqb_isoftlimit &&
dqb->dqb_curinodes >= dqbuf.dqb_isoftlimit)
dqbuf.dqb_itime = 0;
dqbuf.dqb_curinodes = dqb->dqb_curinodes;
dqbuf.dqb_curblocks = dqb->dqb_curblocks;
/*
* Write it back.
*/
switch (qf->wordsize) {
case 32:
return (quota_write32(qf, &dqbuf, id));
case 64:
return (quota_write64(qf, &dqbuf, id));
default:
errno = EINVAL;
return (-1);
}
/* not reached */
}
int
quota_write_limits(struct quotafile *qf, struct dqblk *dqb, int id)
{
struct dqblk dqbuf;
int qcmd;
if (qf->fd == -1) {
qcmd = QCMD(Q_SETQUOTA, qf->quotatype);
return (quotactl(qf->fsname, qcmd, id, dqb));
}
/*
* Have to do read-modify-write of quota in file.
*/
if ((qf->accmode & O_RDWR) != O_RDWR) {
errno = EBADF;
return (-1);
}
if (quota_read(qf, &dqbuf, id) != 0)
return (-1);
/*
* Reset time limit if have a soft limit and were
* previously under it, but are now over it
* or if there previously was no soft limit, but
* now have one and are over it.
*/
if (dqbuf.dqb_bsoftlimit && id != 0 &&
dqbuf.dqb_curblocks < dqbuf.dqb_bsoftlimit &&
dqbuf.dqb_curblocks >= dqb->dqb_bsoftlimit)
dqb->dqb_btime = 0;
if (dqbuf.dqb_bsoftlimit == 0 && id != 0 &&
dqb->dqb_bsoftlimit > 0 &&
dqbuf.dqb_curblocks >= dqb->dqb_bsoftlimit)
dqb->dqb_btime = 0;
if (dqbuf.dqb_isoftlimit && id != 0 &&
dqbuf.dqb_curinodes < dqbuf.dqb_isoftlimit &&
dqbuf.dqb_curinodes >= dqb->dqb_isoftlimit)
dqb->dqb_itime = 0;
if (dqbuf.dqb_isoftlimit == 0 && id != 0 &&
dqb->dqb_isoftlimit > 0 &&
dqbuf.dqb_curinodes >= dqb->dqb_isoftlimit)
dqb->dqb_itime = 0;
dqb->dqb_curinodes = dqbuf.dqb_curinodes;
dqb->dqb_curblocks = dqbuf.dqb_curblocks;
/*
* Write it back.
*/
switch (qf->wordsize) {
case 32:
return (quota_write32(qf, dqb, id));
case 64:
return (quota_write64(qf, dqb, id));
default:
errno = EINVAL;
return (-1);
}
/* not reached */
}
/*
* Convert a quota file from one format to another.
*/
int
quota_convert(struct quotafile *qf, int wordsize)
{
struct quotafile *newqf;
struct dqhdr64 dqh;
struct dqblk dqblk;
struct group *grp;
int serrno, maxid, id, fd;
/*
* Quotas must not be active and quotafile must be open
* for reading and writing.
*/
if ((qf->accmode & O_RDWR) != O_RDWR || qf->fd == -1) {
errno = EBADF;
return (-1);
}
if ((wordsize != 32 && wordsize != 64) ||
wordsize == qf->wordsize) {
errno = EINVAL;
return (-1);
}
maxid = quota_maxid(qf);
if ((newqf = calloc(1, sizeof(*qf))) == NULL) {
errno = ENOMEM;
return (-1);
}
*newqf = *qf;
snprintf(newqf->qfname, MAXPATHLEN + 1, "%s_%d.orig", qf->qfname,
qf->wordsize);
if (rename(qf->qfname, newqf->qfname) < 0) {
free(newqf);
return (-1);
}
if ((newqf->fd = open(qf->qfname, O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC,
0)) < 0) {
serrno = errno;
goto error;
}
newqf->wordsize = wordsize;
if (wordsize == 64) {
memset(&dqh, 0, sizeof(dqh));
memcpy(dqh.dqh_magic, Q_DQHDR64_MAGIC, sizeof(dqh.dqh_magic));
dqh.dqh_version = htobe32(Q_DQHDR64_VERSION);
dqh.dqh_hdrlen = htobe32(sizeof(struct dqhdr64));
dqh.dqh_reclen = htobe32(sizeof(struct dqblk64));
if (write(newqf->fd, &dqh, sizeof(dqh)) != sizeof(dqh)) {
serrno = errno;
goto error;
}
}
grp = getgrnam(QUOTAGROUP);
fchown(newqf->fd, 0, grp ? grp->gr_gid : 0);
fchmod(newqf->fd, 0640);
for (id = 0; id <= maxid; id++) {
if ((quota_read(qf, &dqblk, id)) < 0)
break;
switch (newqf->wordsize) {
case 32:
if ((quota_write32(newqf, &dqblk, id)) < 0)
break;
continue;
case 64:
if ((quota_write64(newqf, &dqblk, id)) < 0)
break;
continue;
default:
errno = EINVAL;
break;
}
}
if (id < maxid) {
serrno = errno;
goto error;
}
/*
* Update the passed in quotafile to reference the new file
* of the converted format size.
*/
fd = qf->fd;
qf->fd = newqf->fd;
newqf->fd = fd;
qf->wordsize = newqf->wordsize;
quota_close(newqf);
return (0);
error:
/* put back the original file */
(void) rename(newqf->qfname, qf->qfname);
quota_close(newqf);
errno = serrno;
return (-1);
}
Index: projects/clang800-import/lib/msun/src/e_j0.c
===================================================================
--- projects/clang800-import/lib/msun/src/e_j0.c (revision 343955)
+++ projects/clang800-import/lib/msun/src/e_j0.c (revision 343956)
@@ -1,391 +1,389 @@
/* @(#)e_j0.c 1.3 95/01/18 */
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunSoft, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/* __ieee754_j0(x), __ieee754_y0(x)
* Bessel function of the first and second kinds of order zero.
* Method -- j0(x):
* 1. For tiny x, we use j0(x) = 1 - x^2/4 + x^4/64 - ...
* 2. Reduce x to |x| since j0(x)=j0(-x), and
* for x in (0,2)
* j0(x) = 1-z/4+ z^2*R0/S0, where z = x*x;
* (precision: |j0-1+z/4-z^2R0/S0 |<2**-63.67 )
* for x in (2,inf)
* j0(x) = sqrt(2/(pi*x))*(p0(x)*cos(x0)-q0(x)*sin(x0))
* where x0 = x-pi/4. It is better to compute sin(x0),cos(x0)
* as follows:
* cos(x0) = cos(x)cos(pi/4)+sin(x)sin(pi/4)
* = 1/sqrt(2) * (cos(x) + sin(x))
* sin(x0) = sin(x)cos(pi/4)-cos(x)sin(pi/4)
* = 1/sqrt(2) * (sin(x) - cos(x))
* (To avoid cancellation, use
* sin(x) +- cos(x) = -cos(2x)/(sin(x) -+ cos(x))
* to compute the worse one.)
*
* 3 Special cases
* j0(nan)= nan
* j0(0) = 1
* j0(inf) = 0
*
* Method -- y0(x):
* 1. For x<2.
* Since
* y0(x) = 2/pi*(j0(x)*(ln(x/2)+Euler) + x^2/4 - ...)
* therefore y0(x)-2/pi*j0(x)*ln(x) is an even function.
* We use the following function to approximate y0,
* y0(x) = U(z)/V(z) + (2/pi)*(j0(x)*ln(x)), z= x^2
* where
* U(z) = u00 + u01*z + ... + u06*z^6
* V(z) = 1 + v01*z + ... + v04*z^4
* with absolute approximation error bounded by 2**-72.
* Note: For tiny x, U/V = u0 and j0(x)~1, hence
* y0(tiny) = u0 + (2/pi)*ln(tiny), (choose tiny<2**-27)
* 2. For x>=2.
* y0(x) = sqrt(2/(pi*x))*(p0(x)*cos(x0)+q0(x)*sin(x0))
* where x0 = x-pi/4. It is better to compute sin(x0),cos(x0)
* by the method mentioned above.
* 3. Special cases: y0(0)=-inf, y0(x<0)=NaN, y0(inf)=0.
*/
#include "math.h"
#include "math_private.h"
static __inline double pzero(double), qzero(double);
static const volatile double vone = 1, vzero = 0;
static const double
huge = 1e300,
one = 1.0,
invsqrtpi= 5.64189583547756279280e-01, /* 0x3FE20DD7, 0x50429B6D */
tpi = 6.36619772367581382433e-01, /* 0x3FE45F30, 0x6DC9C883 */
/* R0/S0 on [0, 2.00] */
R02 = 1.56249999999999947958e-02, /* 0x3F8FFFFF, 0xFFFFFFFD */
R03 = -1.89979294238854721751e-04, /* 0xBF28E6A5, 0xB61AC6E9 */
R04 = 1.82954049532700665670e-06, /* 0x3EBEB1D1, 0x0C503919 */
R05 = -4.61832688532103189199e-09, /* 0xBE33D5E7, 0x73D63FCE */
S01 = 1.56191029464890010492e-02, /* 0x3F8FFCE8, 0x82C8C2A4 */
S02 = 1.16926784663337450260e-04, /* 0x3F1EA6D2, 0xDD57DBF4 */
S03 = 5.13546550207318111446e-07, /* 0x3EA13B54, 0xCE84D5A9 */
S04 = 1.16614003333790000205e-09; /* 0x3E1408BC, 0xF4745D8F */
static const double zero = 0, qrtr = 0.25;
double
__ieee754_j0(double x)
{
double z, s,c,ss,cc,r,u,v;
int32_t hx,ix;
GET_HIGH_WORD(hx,x);
ix = hx&0x7fffffff;
if(ix>=0x7ff00000) return one/(x*x);
x = fabs(x);
if(ix >= 0x40000000) { /* |x| >= 2.0 */
- s = sin(x);
- c = cos(x);
+ sincos(x, &s, &c);
ss = s-c;
cc = s+c;
if(ix<0x7fe00000) { /* Make sure x+x does not overflow. */
z = -cos(x+x);
if ((s*c)<zero) cc = z/ss;
else ss = z/cc;
}
/*
* j0(x) = 1/sqrt(pi) * (P(0,x)*cc - Q(0,x)*ss) / sqrt(x)
* y0(x) = 1/sqrt(pi) * (P(0,x)*ss + Q(0,x)*cc) / sqrt(x)
*/
if(ix>0x48000000) z = (invsqrtpi*cc)/sqrt(x);
else {
u = pzero(x); v = qzero(x);
z = invsqrtpi*(u*cc-v*ss)/sqrt(x);
}
return z;
}
if(ix<0x3f200000) { /* |x| < 2**-13 */
if(huge+x>one) { /* raise inexact if x != 0 */
if(ix<0x3e400000) return one; /* |x|<2**-27 */
else return one - x*x/4;
}
}
z = x*x;
r = z*(R02+z*(R03+z*(R04+z*R05)));
s = one+z*(S01+z*(S02+z*(S03+z*S04)));
if(ix < 0x3FF00000) { /* |x| < 1.00 */
return one + z*((r/s)-qrtr);
} else {
u = x/2;
return((one+u)*(one-u)+z*(r/s));
}
}
static const double
u00 = -7.38042951086872317523e-02, /* 0xBFB2E4D6, 0x99CBD01F */
u01 = 1.76666452509181115538e-01, /* 0x3FC69D01, 0x9DE9E3FC */
u02 = -1.38185671945596898896e-02, /* 0xBF8C4CE8, 0xB16CFA97 */
u03 = 3.47453432093683650238e-04, /* 0x3F36C54D, 0x20B29B6B */
u04 = -3.81407053724364161125e-06, /* 0xBECFFEA7, 0x73D25CAD */
u05 = 1.95590137035022920206e-08, /* 0x3E550057, 0x3B4EABD4 */
u06 = -3.98205194132103398453e-11, /* 0xBDC5E43D, 0x693FB3C8 */
v01 = 1.27304834834123699328e-02, /* 0x3F8A1270, 0x91C9C71A */
v02 = 7.60068627350353253702e-05, /* 0x3F13ECBB, 0xF578C6C1 */
v03 = 2.59150851840457805467e-07, /* 0x3E91642D, 0x7FF202FD */
v04 = 4.41110311332675467403e-10; /* 0x3DFE5018, 0x3BD6D9EF */
double
__ieee754_y0(double x)
{
double z, s,c,ss,cc,u,v;
int32_t hx,ix,lx;
EXTRACT_WORDS(hx,lx,x);
ix = 0x7fffffff&hx;
/*
* y0(NaN) = NaN.
* y0(Inf) = 0.
* y0(-Inf) = NaN and raise invalid exception.
*/
if(ix>=0x7ff00000) return vone/(x+x*x);
/* y0(+-0) = -inf and raise divide-by-zero exception. */
if((ix|lx)==0) return -one/vzero;
/* y0(x<0) = NaN and raise invalid exception. */
if(hx<0) return vzero/vzero;
if(ix >= 0x40000000) { /* |x| >= 2.0 */
/* y0(x) = sqrt(2/(pi*x))*(p0(x)*sin(x0)+q0(x)*cos(x0))
* where x0 = x-pi/4
* Better formula:
* cos(x0) = cos(x)cos(pi/4)+sin(x)sin(pi/4)
* = 1/sqrt(2) * (sin(x) + cos(x))
* sin(x0) = sin(x)cos(3pi/4)-cos(x)sin(3pi/4)
* = 1/sqrt(2) * (sin(x) - cos(x))
* To avoid cancellation, use
* sin(x) +- cos(x) = -cos(2x)/(sin(x) -+ cos(x))
* to compute the worse one.
*/
- s = sin(x);
- c = cos(x);
+ sincos(x, &s, &c);
ss = s-c;
cc = s+c;
/*
* j0(x) = 1/sqrt(pi) * (P(0,x)*cc - Q(0,x)*ss) / sqrt(x)
* y0(x) = 1/sqrt(pi) * (P(0,x)*ss + Q(0,x)*cc) / sqrt(x)
*/
if(ix<0x7fe00000) { /* Make sure x+x does not overflow. */
z = -cos(x+x);
if ((s*c)<zero) cc = z/ss;
else ss = z/cc;
}
if(ix>0x48000000) z = (invsqrtpi*ss)/sqrt(x);
else {
u = pzero(x); v = qzero(x);
z = invsqrtpi*(u*ss+v*cc)/sqrt(x);
}
return z;
}
if(ix<=0x3e400000) { /* x < 2**-27 */
return(u00 + tpi*__ieee754_log(x));
}
z = x*x;
u = u00+z*(u01+z*(u02+z*(u03+z*(u04+z*(u05+z*u06)))));
v = one+z*(v01+z*(v02+z*(v03+z*v04)));
return(u/v + tpi*(__ieee754_j0(x)*__ieee754_log(x)));
}
/* The asymptotic expansion of pzero is
* 1 - 9/128 s^2 + 11025/98304 s^4 - ..., where s = 1/x.
* For x >= 2, we approximate pzero by
* pzero(x) = 1 + (R/S)
* where R = pR0 + pR1*s^2 + pR2*s^4 + ... + pR5*s^10
* S = 1 + pS0*s^2 + ... + pS4*s^10
* and
* | pzero(x)-1-R/S | <= 2 ** ( -60.26)
*/
static const double pR8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.00000000000000000000e+00, /* 0x00000000, 0x00000000 */
-7.03124999999900357484e-02, /* 0xBFB1FFFF, 0xFFFFFD32 */
-8.08167041275349795626e+00, /* 0xC02029D0, 0xB44FA779 */
-2.57063105679704847262e+02, /* 0xC0701102, 0x7B19E863 */
-2.48521641009428822144e+03, /* 0xC0A36A6E, 0xCD4DCAFC */
-5.25304380490729545272e+03, /* 0xC0B4850B, 0x36CC643D */
};
static const double pS8[5] = {
1.16534364619668181717e+02, /* 0x405D2233, 0x07A96751 */
3.83374475364121826715e+03, /* 0x40ADF37D, 0x50596938 */
4.05978572648472545552e+04, /* 0x40E3D2BB, 0x6EB6B05F */
1.16752972564375915681e+05, /* 0x40FC810F, 0x8F9FA9BD */
4.76277284146730962675e+04, /* 0x40E74177, 0x4F2C49DC */
};
static const double pR5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
-1.14125464691894502584e-11, /* 0xBDA918B1, 0x47E495CC */
-7.03124940873599280078e-02, /* 0xBFB1FFFF, 0xE69AFBC6 */
-4.15961064470587782438e+00, /* 0xC010A370, 0xF90C6BBF */
-6.76747652265167261021e+01, /* 0xC050EB2F, 0x5A7D1783 */
-3.31231299649172967747e+02, /* 0xC074B3B3, 0x6742CC63 */
-3.46433388365604912451e+02, /* 0xC075A6EF, 0x28A38BD7 */
};
static const double pS5[5] = {
6.07539382692300335975e+01, /* 0x404E6081, 0x0C98C5DE */
1.05125230595704579173e+03, /* 0x40906D02, 0x5C7E2864 */
5.97897094333855784498e+03, /* 0x40B75AF8, 0x8FBE1D60 */
9.62544514357774460223e+03, /* 0x40C2CCB8, 0xFA76FA38 */
2.40605815922939109441e+03, /* 0x40A2CC1D, 0xC70BE864 */
};
static const double pR3[6] = {/* for x in [4.547,2.8571]=1/[0.2199,0.35001] */
-2.54704601771951915620e-09, /* 0xBE25E103, 0x6FE1AA86 */
-7.03119616381481654654e-02, /* 0xBFB1FFF6, 0xF7C0E24B */
-2.40903221549529611423e+00, /* 0xC00345B2, 0xAEA48074 */
-2.19659774734883086467e+01, /* 0xC035F74A, 0x4CB94E14 */
-5.80791704701737572236e+01, /* 0xC04D0A22, 0x420A1A45 */
-3.14479470594888503854e+01, /* 0xC03F72AC, 0xA892D80F */
};
static const double pS3[5] = {
3.58560338055209726349e+01, /* 0x4041ED92, 0x84077DD3 */
3.61513983050303863820e+02, /* 0x40769839, 0x464A7C0E */
1.19360783792111533330e+03, /* 0x4092A66E, 0x6D1061D6 */
1.12799679856907414432e+03, /* 0x40919FFC, 0xB8C39B7E */
1.73580930813335754692e+02, /* 0x4065B296, 0xFC379081 */
};
static const double pR2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
-8.87534333032526411254e-08, /* 0xBE77D316, 0xE927026D */
-7.03030995483624743247e-02, /* 0xBFB1FF62, 0x495E1E42 */
-1.45073846780952986357e+00, /* 0xBFF73639, 0x8A24A843 */
-7.63569613823527770791e+00, /* 0xC01E8AF3, 0xEDAFA7F3 */
-1.11931668860356747786e+01, /* 0xC02662E6, 0xC5246303 */
-3.23364579351335335033e+00, /* 0xC009DE81, 0xAF8FE70F */
};
static const double pS2[5] = {
2.22202997532088808441e+01, /* 0x40363865, 0x908B5959 */
1.36206794218215208048e+02, /* 0x4061069E, 0x0EE8878F */
2.70470278658083486789e+02, /* 0x4070E786, 0x42EA079B */
1.53875394208320329881e+02, /* 0x40633C03, 0x3AB6FAFF */
1.46576176948256193810e+01, /* 0x402D50B3, 0x44391809 */
};
static __inline double
pzero(double x)
{
const double *p,*q;
double z,r,s;
int32_t ix;
GET_HIGH_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x40200000) {p = pR8; q= pS8;}
else if(ix>=0x40122E8B){p = pR5; q= pS5;}
else if(ix>=0x4006DB6D){p = pR3; q= pS3;}
else {p = pR2; q= pS2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*q[4]))));
return one+ r/s;
}
/* For x >= 8, the asymptotic expansion of qzero is
* -1/8 s + 75/1024 s^3 - ..., where s = 1/x.
* We approximate qzero by
* qzero(x) = s*(-1.25 + (R/S))
* where R = qR0 + qR1*s^2 + qR2*s^4 + ... + qR5*s^10
* S = 1 + qS0*s^2 + ... + qS5*s^12
* and
* | qzero(x)/s +1.25-R/S | <= 2 ** ( -61.22)
*/
static const double qR8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.00000000000000000000e+00, /* 0x00000000, 0x00000000 */
7.32421874999935051953e-02, /* 0x3FB2BFFF, 0xFFFFFE2C */
1.17682064682252693899e+01, /* 0x40278952, 0x5BB334D6 */
5.57673380256401856059e+02, /* 0x40816D63, 0x15301825 */
8.85919720756468632317e+03, /* 0x40C14D99, 0x3E18F46D */
3.70146267776887834771e+04, /* 0x40E212D4, 0x0E901566 */
};
static const double qS8[6] = {
1.63776026895689824414e+02, /* 0x406478D5, 0x365B39BC */
8.09834494656449805916e+03, /* 0x40BFA258, 0x4E6B0563 */
1.42538291419120476348e+05, /* 0x41016652, 0x54D38C3F */
8.03309257119514397345e+05, /* 0x412883DA, 0x83A52B43 */
8.40501579819060512818e+05, /* 0x4129A66B, 0x28DE0B3D */
-3.43899293537866615225e+05, /* 0xC114FD6D, 0x2C9530C5 */
};
static const double qR5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
1.84085963594515531381e-11, /* 0x3DB43D8F, 0x29CC8CD9 */
7.32421766612684765896e-02, /* 0x3FB2BFFF, 0xD172B04C */
5.83563508962056953777e+00, /* 0x401757B0, 0xB9953DD3 */
1.35111577286449829671e+02, /* 0x4060E392, 0x0A8788E9 */
1.02724376596164097464e+03, /* 0x40900CF9, 0x9DC8C481 */
1.98997785864605384631e+03, /* 0x409F17E9, 0x53C6E3A6 */
};
static const double qS5[6] = {
8.27766102236537761883e+01, /* 0x4054B1B3, 0xFB5E1543 */
2.07781416421392987104e+03, /* 0x40A03BA0, 0xDA21C0CE */
1.88472887785718085070e+04, /* 0x40D267D2, 0x7B591E6D */
5.67511122894947329769e+04, /* 0x40EBB5E3, 0x97E02372 */
3.59767538425114471465e+04, /* 0x40E19118, 0x1F7A54A0 */
-5.35434275601944773371e+03, /* 0xC0B4EA57, 0xBEDBC609 */
};
static const double qR3[6] = {/* for x in [4.547,2.8571]=1/[0.2199,0.35001] */
4.37741014089738620906e-09, /* 0x3E32CD03, 0x6ADECB82 */
7.32411180042911447163e-02, /* 0x3FB2BFEE, 0x0E8D0842 */
3.34423137516170720929e+00, /* 0x400AC0FC, 0x61149CF5 */
4.26218440745412650017e+01, /* 0x40454F98, 0x962DAEDD */
1.70808091340565596283e+02, /* 0x406559DB, 0xE25EFD1F */
1.66733948696651168575e+02, /* 0x4064D77C, 0x81FA21E0 */
};
static const double qS3[6] = {
4.87588729724587182091e+01, /* 0x40486122, 0xBFE343A6 */
7.09689221056606015736e+02, /* 0x40862D83, 0x86544EB3 */
3.70414822620111362994e+03, /* 0x40ACF04B, 0xE44DFC63 */
6.46042516752568917582e+03, /* 0x40B93C6C, 0xD7C76A28 */
2.51633368920368957333e+03, /* 0x40A3A8AA, 0xD94FB1C0 */
-1.49247451836156386662e+02, /* 0xC062A7EB, 0x201CF40F */
};
static const double qR2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
1.50444444886983272379e-07, /* 0x3E84313B, 0x54F76BDB */
7.32234265963079278272e-02, /* 0x3FB2BEC5, 0x3E883E34 */
1.99819174093815998816e+00, /* 0x3FFFF897, 0xE727779C */
1.44956029347885735348e+01, /* 0x402CFDBF, 0xAAF96FE5 */
3.16662317504781540833e+01, /* 0x403FAA8E, 0x29FBDC4A */
1.62527075710929267416e+01, /* 0x403040B1, 0x71814BB4 */
};
static const double qS2[6] = {
3.03655848355219184498e+01, /* 0x403E5D96, 0xF7C07AED */
2.69348118608049844624e+02, /* 0x4070D591, 0xE4D14B40 */
8.44783757595320139444e+02, /* 0x408A6645, 0x22B3BF22 */
8.82935845112488550512e+02, /* 0x408B977C, 0x9C5CC214 */
2.12666388511798828631e+02, /* 0x406A9553, 0x0E001365 */
-5.31095493882666946917e+00, /* 0xC0153E6A, 0xF8B32931 */
};
static __inline double
qzero(double x)
{
static const double eighth = 0.125;
const double *p,*q;
double s,r,z;
int32_t ix;
GET_HIGH_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x40200000) {p = qR8; q= qS8;}
else if(ix>=0x40122E8B){p = qR5; q= qS5;}
else if(ix>=0x4006DB6D){p = qR3; q= qS3;}
else {p = qR2; q= qS2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*(q[4]+z*q[5])))));
return (r/s-eighth)/x;
}
Index: projects/clang800-import/lib/msun/src/e_j0f.c
===================================================================
--- projects/clang800-import/lib/msun/src/e_j0f.c (revision 343955)
+++ projects/clang800-import/lib/msun/src/e_j0f.c (revision 343956)
@@ -1,345 +1,343 @@
/* e_j0f.c -- float version of e_j0.c.
* Conversion to float by Ian Lance Taylor, Cygnus Support, ian@cygnus.com.
*/
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunPro, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* See e_j0.c for complete comments.
*/
#include "math.h"
#include "math_private.h"
static __inline float pzerof(float), qzerof(float);
static const volatile float vone = 1, vzero = 0;
static const float
huge = 1e30,
one = 1.0,
invsqrtpi= 5.6418961287e-01, /* 0x3f106ebb */
tpi = 6.3661974669e-01, /* 0x3f22f983 */
/* R0/S0 on [0, 2.00] */
R02 = 1.5625000000e-02, /* 0x3c800000 */
R03 = -1.8997929874e-04, /* 0xb947352e */
R04 = 1.8295404516e-06, /* 0x35f58e88 */
R05 = -4.6183270541e-09, /* 0xb19eaf3c */
S01 = 1.5619102865e-02, /* 0x3c7fe744 */
S02 = 1.1692678527e-04, /* 0x38f53697 */
S03 = 5.1354652442e-07, /* 0x3509daa6 */
S04 = 1.1661400734e-09; /* 0x30a045e8 */
static const float zero = 0, qrtr = 0.25;
float
__ieee754_j0f(float x)
{
float z, s,c,ss,cc,r,u,v;
int32_t hx,ix;
GET_FLOAT_WORD(hx,x);
ix = hx&0x7fffffff;
if(ix>=0x7f800000) return one/(x*x);
x = fabsf(x);
if(ix >= 0x40000000) { /* |x| >= 2.0 */
- s = sinf(x);
- c = cosf(x);
+ sincosf(x, &s, &c);
ss = s-c;
cc = s+c;
if(ix<0x7f000000) { /* Make sure x+x does not overflow. */
z = -cosf(x+x);
if ((s*c)<zero) cc = z/ss;
else ss = z/cc;
}
/*
* j0(x) = 1/sqrt(pi) * (P(0,x)*cc - Q(0,x)*ss) / sqrt(x)
* y0(x) = 1/sqrt(pi) * (P(0,x)*ss + Q(0,x)*cc) / sqrt(x)
*/
if(ix>0x58000000) z = (invsqrtpi*cc)/sqrtf(x); /* |x|>2**49 */
else {
u = pzerof(x); v = qzerof(x);
z = invsqrtpi*(u*cc-v*ss)/sqrtf(x);
}
return z;
}
if(ix<0x3b000000) { /* |x| < 2**-9 */
if(huge+x>one) { /* raise inexact if x != 0 */
if(ix<0x39800000) return one; /* |x|<2**-12 */
else return one - x*x/4;
}
}
z = x*x;
r = z*(R02+z*(R03+z*(R04+z*R05)));
s = one+z*(S01+z*(S02+z*(S03+z*S04)));
if(ix < 0x3F800000) { /* |x| < 1.00 */
return one + z*((r/s)-qrtr);
} else {
u = x/2;
return((one+u)*(one-u)+z*(r/s));
}
}
static const float
u00 = -7.3804296553e-02, /* 0xbd9726b5 */
u01 = 1.7666645348e-01, /* 0x3e34e80d */
u02 = -1.3818567619e-02, /* 0xbc626746 */
u03 = 3.4745343146e-04, /* 0x39b62a69 */
u04 = -3.8140706238e-06, /* 0xb67ff53c */
u05 = 1.9559013964e-08, /* 0x32a802ba */
u06 = -3.9820518410e-11, /* 0xae2f21eb */
v01 = 1.2730483897e-02, /* 0x3c509385 */
v02 = 7.6006865129e-05, /* 0x389f65e0 */
v03 = 2.5915085189e-07, /* 0x348b216c */
v04 = 4.4111031494e-10; /* 0x2ff280c2 */
float
__ieee754_y0f(float x)
{
float z, s,c,ss,cc,u,v;
int32_t hx,ix;
GET_FLOAT_WORD(hx,x);
ix = 0x7fffffff&hx;
if(ix>=0x7f800000) return vone/(x+x*x);
if(ix==0) return -one/vzero;
if(hx<0) return vzero/vzero;
if(ix >= 0x40000000) { /* |x| >= 2.0 */
/* y0(x) = sqrt(2/(pi*x))*(p0(x)*sin(x0)+q0(x)*cos(x0))
* where x0 = x-pi/4
* Better formula:
* cos(x0) = cos(x)cos(pi/4)+sin(x)sin(pi/4)
* = 1/sqrt(2) * (sin(x) + cos(x))
* sin(x0) = sin(x)cos(pi/4)-cos(x)sin(pi/4)
* = 1/sqrt(2) * (sin(x) - cos(x))
* To avoid cancellation, use
* sin(x) +- cos(x) = -cos(2x)/(sin(x) -+ cos(x))
* to compute the worse one.
*/
- s = sinf(x);
- c = cosf(x);
+ sincosf(x, &s, &c);
ss = s-c;
cc = s+c;
/*
* j0(x) = 1/sqrt(pi) * (P(0,x)*cc - Q(0,x)*ss) / sqrt(x)
* y0(x) = 1/sqrt(pi) * (P(0,x)*ss + Q(0,x)*cc) / sqrt(x)
*/
if(ix<0x7f000000) { /* make sure x+x does not overflow */
z = -cosf(x+x);
if ((s*c)<zero) cc = z/ss;
else ss = z/cc;
}
if(ix>0x58000000) z = (invsqrtpi*ss)/sqrtf(x); /* |x|>2**49 */
else {
u = pzerof(x); v = qzerof(x);
z = invsqrtpi*(u*ss+v*cc)/sqrtf(x);
}
return z;
}
if(ix<=0x39000000) { /* x < 2**-13 */
return(u00 + tpi*__ieee754_logf(x));
}
z = x*x;
u = u00+z*(u01+z*(u02+z*(u03+z*(u04+z*(u05+z*u06)))));
v = one+z*(v01+z*(v02+z*(v03+z*v04)));
return(u/v + tpi*(__ieee754_j0f(x)*__ieee754_logf(x)));
}
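The constants invsqrtpi and tpi used above are single-precision roundings of 1/sqrt(pi) and 2/pi; the 1/sqrt(pi) factor absorbs both the sqrt(2/(pi*x)) prefactor and the 1/sqrt(2) from the angle-addition step. A small sketch checking the decimal values printed in this file against libm (the float literals are copied from the source above):

```c
#include <math.h>

/* Decimal values of the float constants from e_j0f.c. */
static const float invsqrtpi_f = 5.6418961287e-01f;	/* 0x3f106ebb */
static const float tpi_f = 6.3661974669e-01f;		/* 0x3f22f983 */

/* Absolute error of each constant against a double-precision reference;
 * both should be below single-precision rounding, ~6e-8. */
static double
invsqrtpi_err(void)
{
	return (fabs((double)invsqrtpi_f - 1.0 / sqrt(acos(-1.0))));
}

static double
tpi_err(void)
{
	return (fabs((double)tpi_f - 2.0 / acos(-1.0)));
}
```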
/* The asymptotic expansion of pzero is
* 1 - 9/128 s^2 + 11025/98304 s^4 - ..., where s = 1/x.
* For x >= 2, we approximate pzero by
* pzero(x) = 1 + (R/S)
* where R = pR0 + pR1*s^2 + pR2*s^4 + ... + pR5*s^10
* S = 1 + pS0*s^2 + ... + pS4*s^10
* and
* | pzero(x)-1-R/S | <= 2 ** ( -60.26)
*/
static const float pR8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.0000000000e+00, /* 0x00000000 */
-7.0312500000e-02, /* 0xbd900000 */
-8.0816707611e+00, /* 0xc1014e86 */
-2.5706311035e+02, /* 0xc3808814 */
-2.4852163086e+03, /* 0xc51b5376 */
-5.2530439453e+03, /* 0xc5a4285a */
};
static const float pS8[5] = {
1.1653436279e+02, /* 0x42e91198 */
3.8337448730e+03, /* 0x456f9beb */
4.0597855469e+04, /* 0x471e95db */
1.1675296875e+05, /* 0x47e4087c */
4.7627726562e+04, /* 0x473a0bba */
};
static const float pR5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
-1.1412546255e-11, /* 0xad48c58a */
-7.0312492549e-02, /* 0xbd8fffff */
-4.1596107483e+00, /* 0xc0851b88 */
-6.7674766541e+01, /* 0xc287597b */
-3.3123129272e+02, /* 0xc3a59d9b */
-3.4643338013e+02, /* 0xc3ad3779 */
};
static const float pS5[5] = {
6.0753936768e+01, /* 0x42730408 */
1.0512523193e+03, /* 0x44836813 */
5.9789707031e+03, /* 0x45bad7c4 */
9.6254453125e+03, /* 0x461665c8 */
2.4060581055e+03, /* 0x451660ee */
};
static const float pR3[6] = {/* for x in [4.547,2.8571]=1/[0.2199,0.35001] */
-2.5470459075e-09, /* 0xb12f081b */
-7.0311963558e-02, /* 0xbd8fffb8 */
-2.4090321064e+00, /* 0xc01a2d95 */
-2.1965976715e+01, /* 0xc1afba52 */
-5.8079170227e+01, /* 0xc2685112 */
-3.1447946548e+01, /* 0xc1fb9565 */
};
static const float pS3[5] = {
3.5856033325e+01, /* 0x420f6c94 */
3.6151397705e+02, /* 0x43b4c1ca */
1.1936077881e+03, /* 0x44953373 */
1.1279968262e+03, /* 0x448cffe6 */
1.7358093262e+02, /* 0x432d94b8 */
};
static const float pR2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
-8.8753431271e-08, /* 0xb3be98b7 */
-7.0303097367e-02, /* 0xbd8ffb12 */
-1.4507384300e+00, /* 0xbfb9b1cc */
-7.6356959343e+00, /* 0xc0f4579f */
-1.1193166733e+01, /* 0xc1331736 */
-3.2336456776e+00, /* 0xc04ef40d */
};
static const float pS2[5] = {
2.2220300674e+01, /* 0x41b1c32d */
1.3620678711e+02, /* 0x430834f0 */
2.7047027588e+02, /* 0x43873c32 */
1.5387539673e+02, /* 0x4319e01a */
1.4657617569e+01, /* 0x416a859a */
};
static __inline float
pzerof(float x)
{
const float *p,*q;
float z,r,s;
int32_t ix;
GET_FLOAT_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x41000000) {p = pR8; q= pS8;}
else if(ix>=0x409173eb){p = pR5; q= pS5;}
else if(ix>=0x4036d917){p = pR3; q= pS3;}
else {p = pR2; q= pS2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*q[4]))));
return one+ r/s;
}
/* For x >= 8, the asymptotic expansion of qzero is
* -1/8 s + 75/1024 s^3 - ..., where s = 1/x.
* We approximate qzero by
* qzero(x) = s*(-0.125 + (R/S))
* where R = qR0 + qR1*s^2 + qR2*s^4 + ... + qR5*s^10
* S = 1 + qS0*s^2 + ... + qS5*s^12
* and
* | qzero(x)/s +0.125-R/S | <= 2 ** ( -61.22)
*/
static const float qR8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.0000000000e+00, /* 0x00000000 */
7.3242187500e-02, /* 0x3d960000 */
1.1768206596e+01, /* 0x413c4a93 */
5.5767340088e+02, /* 0x440b6b19 */
8.8591972656e+03, /* 0x460a6cca */
3.7014625000e+04, /* 0x471096a0 */
};
static const float qS8[6] = {
1.6377603149e+02, /* 0x4323c6aa */
8.0983447266e+03, /* 0x45fd12c2 */
1.4253829688e+05, /* 0x480b3293 */
8.0330925000e+05, /* 0x49441ed4 */
8.4050156250e+05, /* 0x494d3359 */
-3.4389928125e+05, /* 0xc8a7eb69 */
};
static const float qR5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
1.8408595828e-11, /* 0x2da1ec79 */
7.3242180049e-02, /* 0x3d95ffff */
5.8356351852e+00, /* 0x40babd86 */
1.3511157227e+02, /* 0x43071c90 */
1.0272437744e+03, /* 0x448067cd */
1.9899779053e+03, /* 0x44f8bf4b */
};
static const float qS5[6] = {
8.2776611328e+01, /* 0x42a58da0 */
2.0778142090e+03, /* 0x4501dd07 */
1.8847289062e+04, /* 0x46933e94 */
5.6751113281e+04, /* 0x475daf1d */
3.5976753906e+04, /* 0x470c88c1 */
-5.3543427734e+03, /* 0xc5a752be */
};
static const float qR3[6] = {/* for x in [4.547,2.8571]=1/[0.2199,0.35001] */
4.3774099900e-09, /* 0x3196681b */
7.3241114616e-02, /* 0x3d95ff70 */
3.3442313671e+00, /* 0x405607e3 */
4.2621845245e+01, /* 0x422a7cc5 */
1.7080809021e+02, /* 0x432acedf */
1.6673394775e+02, /* 0x4326bbe4 */
};
static const float qS3[6] = {
4.8758872986e+01, /* 0x42430916 */
7.0968920898e+02, /* 0x44316c1c */
3.7041481934e+03, /* 0x4567825f */
6.4604252930e+03, /* 0x45c9e367 */
2.5163337402e+03, /* 0x451d4557 */
-1.4924745178e+02, /* 0xc3153f59 */
};
static const float qR2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
1.5044444979e-07, /* 0x342189db */
7.3223426938e-02, /* 0x3d95f62a */
1.9981917143e+00, /* 0x3fffc4bf */
1.4495602608e+01, /* 0x4167edfd */
3.1666231155e+01, /* 0x41fd5471 */
1.6252708435e+01, /* 0x4182058c */
};
static const float qS2[6] = {
3.0365585327e+01, /* 0x41f2ecb8 */
2.6934811401e+02, /* 0x4386ac8f */
8.4478375244e+02, /* 0x44533229 */
8.8293585205e+02, /* 0x445cbbe5 */
2.1266638184e+02, /* 0x4354aa98 */
-5.3109550476e+00, /* 0xc0a9f358 */
};
static __inline float
qzerof(float x)
{
static const float eighth = 0.125;
const float *p,*q;
float s,r,z;
int32_t ix;
GET_FLOAT_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x41000000) {p = qR8; q= qS8;}
else if(ix>=0x409173eb){p = qR5; q= qS5;}
else if(ix>=0x4036d917){p = qR3; q= qS3;}
else {p = qR2; q= qS2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*(q[4]+z*q[5])))));
return (r/s-eighth)/x;
}
Index: projects/clang800-import/lib/msun/src/e_j1.c
===================================================================
--- projects/clang800-import/lib/msun/src/e_j1.c (revision 343955)
+++ projects/clang800-import/lib/msun/src/e_j1.c (revision 343956)
@@ -1,385 +1,383 @@
/* @(#)e_j1.c 1.3 95/01/18 */
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunSoft, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/* __ieee754_j1(x), __ieee754_y1(x)
* Bessel function of the first and second kinds of order one.
* Method -- j1(x):
* 1. For tiny x, we use j1(x) = x/2 - x^3/16 + x^5/384 - ...
* 2. Reduce x to |x| since j1(x)=-j1(-x), and
* for x in (0,2)
* j1(x) = x/2 + x*z*R0/S0, where z = x*x;
* (precision: |j1/x - 1/2 - R0/S0 |<2**-61.51 )
* for x in (2,inf)
* j1(x) = sqrt(2/(pi*x))*(p1(x)*cos(x1)-q1(x)*sin(x1))
* y1(x) = sqrt(2/(pi*x))*(p1(x)*sin(x1)+q1(x)*cos(x1))
* where x1 = x-3*pi/4. It is better to compute sin(x1),cos(x1)
* as follows:
* cos(x1) = cos(x)cos(3pi/4)+sin(x)sin(3pi/4)
* = 1/sqrt(2) * (sin(x) - cos(x))
* sin(x1) = sin(x)cos(3pi/4)-cos(x)sin(3pi/4)
* = -1/sqrt(2) * (sin(x) + cos(x))
* (To avoid cancellation, use
* sin(x) +- cos(x) = -cos(2x)/(sin(x) -+ cos(x))
* to compute the worse one.)
*
* 3 Special cases
* j1(nan)= nan
* j1(0) = 0
* j1(inf) = 0
*
* Method -- y1(x):
* 1. screen out x<=0 cases: y1(0)=-inf, y1(x<0)=NaN
* 2. For x<2.
* Since
* y1(x) = 2/pi*(j1(x)*(ln(x/2)+Euler)-1/x-x/2+5/64*x^3-...)
* therefore y1(x)-2/pi*j1(x)*ln(x)-1/x is an odd function.
* We use the following function to approximate y1,
* y1(x) = x*U(z)/V(z) + (2/pi)*(j1(x)*ln(x)-1/x), z= x^2
* where for x in [0,2] (abs err less than 2**-65.89)
* U(z) = U0[0] + U0[1]*z + ... + U0[4]*z^4
* V(z) = 1 + v0[0]*z + ... + v0[4]*z^5
* Note: For tiny x, 1/x dominates y1 and hence
* y1(tiny) = -2/pi/tiny, (choose tiny<2**-54)
* 3. For x>=2.
* y1(x) = sqrt(2/(pi*x))*(p1(x)*sin(x1)+q1(x)*cos(x1))
* where x1 = x-3*pi/4. It is better to compute sin(x1),cos(x1)
* by method mentioned above.
*/
#include "math.h"
#include "math_private.h"
static __inline double pone(double), qone(double);
static const volatile double vone = 1, vzero = 0;
static const double
huge = 1e300,
one = 1.0,
invsqrtpi= 5.64189583547756279280e-01, /* 0x3FE20DD7, 0x50429B6D */
tpi = 6.36619772367581382433e-01, /* 0x3FE45F30, 0x6DC9C883 */
/* R0/S0 on [0,2] */
r00 = -6.25000000000000000000e-02, /* 0xBFB00000, 0x00000000 */
r01 = 1.40705666955189706048e-03, /* 0x3F570D9F, 0x98472C61 */
r02 = -1.59955631084035597520e-05, /* 0xBEF0C5C6, 0xBA169668 */
r03 = 4.96727999609584448412e-08, /* 0x3E6AAAFA, 0x46CA0BD9 */
s01 = 1.91537599538363460805e-02, /* 0x3F939D0B, 0x12637E53 */
s02 = 1.85946785588630915560e-04, /* 0x3F285F56, 0xB9CDF664 */
s03 = 1.17718464042623683263e-06, /* 0x3EB3BFF8, 0x333F8498 */
s04 = 5.04636257076217042715e-09, /* 0x3E35AC88, 0xC97DFF2C */
s05 = 1.23542274426137913908e-11; /* 0x3DAB2ACF, 0xCFB97ED8 */
static const double zero = 0.0;
double
__ieee754_j1(double x)
{
double z, s,c,ss,cc,r,u,v,y;
int32_t hx,ix;
GET_HIGH_WORD(hx,x);
ix = hx&0x7fffffff;
if(ix>=0x7ff00000) return one/x;
y = fabs(x);
if(ix >= 0x40000000) { /* |x| >= 2.0 */
- s = sin(y);
- c = cos(y);
+ sincos(y, &s, &c);
ss = -s-c;
cc = s-c;
if(ix<0x7fe00000) { /* make sure y+y does not overflow */
z = cos(y+y);
if ((s*c)>zero) cc = z/ss;
else ss = z/cc;
}
/*
* j1(x) = 1/sqrt(pi) * (P(1,x)*cc - Q(1,x)*ss) / sqrt(x)
* y1(x) = 1/sqrt(pi) * (P(1,x)*ss + Q(1,x)*cc) / sqrt(x)
*/
if(ix>0x48000000) z = (invsqrtpi*cc)/sqrt(y);
else {
u = pone(y); v = qone(y);
z = invsqrtpi*(u*cc-v*ss)/sqrt(y);
}
if(hx<0) return -z;
else return z;
}
if(ix<0x3e400000) { /* |x|<2**-27 */
if(huge+x>one) return 0.5*x;/* inexact if x!=0 necessary */
}
z = x*x;
r = z*(r00+z*(r01+z*(r02+z*r03)));
s = one+z*(s01+z*(s02+z*(s03+z*(s04+z*s05))));
r *= x;
return(x*0.5+r/s);
}
static const double U0[5] = {
-1.96057090646238940668e-01, /* 0xBFC91866, 0x143CBC8A */
5.04438716639811282616e-02, /* 0x3FA9D3C7, 0x76292CD1 */
-1.91256895875763547298e-03, /* 0xBF5F55E5, 0x4844F50F */
2.35252600561610495928e-05, /* 0x3EF8AB03, 0x8FA6B88E */
-9.19099158039878874504e-08, /* 0xBE78AC00, 0x569105B8 */
};
static const double V0[5] = {
1.99167318236649903973e-02, /* 0x3F94650D, 0x3F4DA9F0 */
2.02552581025135171496e-04, /* 0x3F2A8C89, 0x6C257764 */
1.35608801097516229404e-06, /* 0x3EB6C05A, 0x894E8CA6 */
6.22741452364621501295e-09, /* 0x3E3ABF1D, 0x5BA69A86 */
1.66559246207992079114e-11, /* 0x3DB25039, 0xDACA772A */
};
double
__ieee754_y1(double x)
{
double z, s,c,ss,cc,u,v;
int32_t hx,ix,lx;
EXTRACT_WORDS(hx,lx,x);
ix = 0x7fffffff&hx;
/*
* y1(NaN) = NaN.
* y1(Inf) = 0.
* y1(-Inf) = NaN and raise invalid exception.
*/
if(ix>=0x7ff00000) return vone/(x+x*x);
/* y1(+-0) = -inf and raise divide-by-zero exception. */
if((ix|lx)==0) return -one/vzero;
/* y1(x<0) = NaN and raise invalid exception. */
if(hx<0) return vzero/vzero;
if(ix >= 0x40000000) { /* |x| >= 2.0 */
- s = sin(x);
- c = cos(x);
+ sincos(x, &s, &c);
ss = -s-c;
cc = s-c;
if(ix<0x7fe00000) { /* make sure x+x does not overflow */
z = cos(x+x);
if ((s*c)>zero) cc = z/ss;
else ss = z/cc;
}
/* y1(x) = sqrt(2/(pi*x))*(p1(x)*sin(x0)+q1(x)*cos(x0))
* where x0 = x-3pi/4
* Better formula:
* cos(x0) = cos(x)cos(3pi/4)+sin(x)sin(3pi/4)
* = 1/sqrt(2) * (sin(x) - cos(x))
* sin(x0) = sin(x)cos(3pi/4)-cos(x)sin(3pi/4)
* = -1/sqrt(2) * (cos(x) + sin(x))
* To avoid cancellation, use
* sin(x) +- cos(x) = -cos(2x)/(sin(x) -+ cos(x))
* to compute the worse one.
*/
if(ix>0x48000000) z = (invsqrtpi*ss)/sqrt(x);
else {
u = pone(x); v = qone(x);
z = invsqrtpi*(u*ss+v*cc)/sqrt(x);
}
return z;
}
if(ix<=0x3c900000) { /* x < 2**-54 */
return(-tpi/x);
}
z = x*x;
u = U0[0]+z*(U0[1]+z*(U0[2]+z*(U0[3]+z*U0[4])));
v = one+z*(V0[0]+z*(V0[1]+z*(V0[2]+z*(V0[3]+z*V0[4]))));
return(x*(u/v) + tpi*(__ieee754_j1(x)*__ieee754_log(x)-one/x));
}
/* For x >= 8, the asymptotic expansion of pone is
* 1 + 15/128 s^2 - 4725/2^15 s^4 - ..., where s = 1/x.
* We approximate pone by
* pone(x) = 1 + (R/S)
* where R = pr0 + pr1*s^2 + pr2*s^4 + ... + pr5*s^10
* S = 1 + ps0*s^2 + ... + ps4*s^10
* and
* | pone(x)-1-R/S | <= 2 ** ( -60.06)
*/
static const double pr8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.00000000000000000000e+00, /* 0x00000000, 0x00000000 */
1.17187499999988647970e-01, /* 0x3FBDFFFF, 0xFFFFFCCE */
1.32394806593073575129e+01, /* 0x402A7A9D, 0x357F7FCE */
4.12051854307378562225e+02, /* 0x4079C0D4, 0x652EA590 */
3.87474538913960532227e+03, /* 0x40AE457D, 0xA3A532CC */
7.91447954031891731574e+03, /* 0x40BEEA7A, 0xC32782DD */
};
static const double ps8[5] = {
1.14207370375678408436e+02, /* 0x405C8D45, 0x8E656CAC */
3.65093083420853463394e+03, /* 0x40AC85DC, 0x964D274F */
3.69562060269033463555e+04, /* 0x40E20B86, 0x97C5BB7F */
9.76027935934950801311e+04, /* 0x40F7D42C, 0xB28F17BB */
3.08042720627888811578e+04, /* 0x40DE1511, 0x697A0B2D */
};
static const double pr5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
1.31990519556243522749e-11, /* 0x3DAD0667, 0xDAE1CA7D */
1.17187493190614097638e-01, /* 0x3FBDFFFF, 0xE2C10043 */
6.80275127868432871736e+00, /* 0x401B3604, 0x6E6315E3 */
1.08308182990189109773e+02, /* 0x405B13B9, 0x452602ED */
5.17636139533199752805e+02, /* 0x40802D16, 0xD052D649 */
5.28715201363337541807e+02, /* 0x408085B8, 0xBB7E0CB7 */
};
static const double ps5[5] = {
5.92805987221131331921e+01, /* 0x404DA3EA, 0xA8AF633D */
9.91401418733614377743e+02, /* 0x408EFB36, 0x1B066701 */
5.35326695291487976647e+03, /* 0x40B4E944, 0x5706B6FB */
7.84469031749551231769e+03, /* 0x40BEA4B0, 0xB8A5BB15 */
1.50404688810361062679e+03, /* 0x40978030, 0x036F5E51 */
};
static const double pr3[6] = {
3.02503916137373618024e-09, /* 0x3E29FC21, 0xA7AD9EDD */
1.17186865567253592491e-01, /* 0x3FBDFFF5, 0x5B21D17B */
3.93297750033315640650e+00, /* 0x400F76BC, 0xE85EAD8A */
3.51194035591636932736e+01, /* 0x40418F48, 0x9DA6D129 */
9.10550110750781271918e+01, /* 0x4056C385, 0x4D2C1837 */
4.85590685197364919645e+01, /* 0x4048478F, 0x8EA83EE5 */
};
static const double ps3[5] = {
3.47913095001251519989e+01, /* 0x40416549, 0xA134069C */
3.36762458747825746741e+02, /* 0x40750C33, 0x07F1A75F */
1.04687139975775130551e+03, /* 0x40905B7C, 0x5037D523 */
8.90811346398256432622e+02, /* 0x408BD67D, 0xA32E31E9 */
1.03787932439639277504e+02, /* 0x4059F26D, 0x7C2EED53 */
};
static const double pr2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
1.07710830106873743082e-07, /* 0x3E7CE9D4, 0xF65544F4 */
1.17176219462683348094e-01, /* 0x3FBDFF42, 0xBE760D83 */
2.36851496667608785174e+00, /* 0x4002F2B7, 0xF98FAEC0 */
1.22426109148261232917e+01, /* 0x40287C37, 0x7F71A964 */
1.76939711271687727390e+01, /* 0x4031B1A8, 0x177F8EE2 */
5.07352312588818499250e+00, /* 0x40144B49, 0xA574C1FE */
};
static const double ps2[5] = {
2.14364859363821409488e+01, /* 0x40356FBD, 0x8AD5ECDC */
1.25290227168402751090e+02, /* 0x405F5293, 0x14F92CD5 */
2.32276469057162813669e+02, /* 0x406D08D8, 0xD5A2DBD9 */
1.17679373287147100768e+02, /* 0x405D6B7A, 0xDA1884A9 */
8.36463893371618283368e+00, /* 0x4020BAB1, 0xF44E5192 */
};
static __inline double
pone(double x)
{
const double *p,*q;
double z,r,s;
int32_t ix;
GET_HIGH_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x40200000) {p = pr8; q= ps8;}
else if(ix>=0x40122E8B){p = pr5; q= ps5;}
else if(ix>=0x4006DB6D){p = pr3; q= ps3;}
else {p = pr2; q= ps2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*q[4]))));
return one+ r/s;
}
/* For x >= 8, the asymptotic expansion of qone is
* 3/8 s - 105/1024 s^3 - ..., where s = 1/x.
* We approximate qone by
* qone(x) = s*(0.375 + (R/S))
* where R = qr1*s^2 + qr2*s^4 + ... + qr5*s^10
* S = 1 + qs1*s^2 + ... + qs6*s^12
* and
* | qone(x)/s -0.375-R/S | <= 2 ** ( -61.13)
*/
static const double qr8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.00000000000000000000e+00, /* 0x00000000, 0x00000000 */
-1.02539062499992714161e-01, /* 0xBFBA3FFF, 0xFFFFFDF3 */
-1.62717534544589987888e+01, /* 0xC0304591, 0xA26779F7 */
-7.59601722513950107896e+02, /* 0xC087BCD0, 0x53E4B576 */
-1.18498066702429587167e+04, /* 0xC0C724E7, 0x40F87415 */
-4.84385124285750353010e+04, /* 0xC0E7A6D0, 0x65D09C6A */
};
static const double qs8[6] = {
1.61395369700722909556e+02, /* 0x40642CA6, 0xDE5BCDE5 */
7.82538599923348465381e+03, /* 0x40BE9162, 0xD0D88419 */
1.33875336287249578163e+05, /* 0x4100579A, 0xB0B75E98 */
7.19657723683240939863e+05, /* 0x4125F653, 0x72869C19 */
6.66601232617776375264e+05, /* 0x412457D2, 0x7719AD5C */
-2.94490264303834643215e+05, /* 0xC111F969, 0x0EA5AA18 */
};
static const double qr5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
-2.08979931141764104297e-11, /* 0xBDB6FA43, 0x1AA1A098 */
-1.02539050241375426231e-01, /* 0xBFBA3FFF, 0xCB597FEF */
-8.05644828123936029840e+00, /* 0xC0201CE6, 0xCA03AD4B */
-1.83669607474888380239e+02, /* 0xC066F56D, 0x6CA7B9B0 */
-1.37319376065508163265e+03, /* 0xC09574C6, 0x6931734F */
-2.61244440453215656817e+03, /* 0xC0A468E3, 0x88FDA79D */
};
static const double qs5[6] = {
8.12765501384335777857e+01, /* 0x405451B2, 0xFF5A11B2 */
1.99179873460485964642e+03, /* 0x409F1F31, 0xE77BF839 */
1.74684851924908907677e+04, /* 0x40D10F1F, 0x0D64CE29 */
4.98514270910352279316e+04, /* 0x40E8576D, 0xAABAD197 */
2.79480751638918118260e+04, /* 0x40DB4B04, 0xCF7C364B */
-4.71918354795128470869e+03, /* 0xC0B26F2E, 0xFCFFA004 */
};
static const double qr3[6] = {
-5.07831226461766561369e-09, /* 0xBE35CFA9, 0xD38FC84F */
-1.02537829820837089745e-01, /* 0xBFBA3FEB, 0x51AEED54 */
-4.61011581139473403113e+00, /* 0xC01270C2, 0x3302D9FF */
-5.78472216562783643212e+01, /* 0xC04CEC71, 0xC25D16DA */
-2.28244540737631695038e+02, /* 0xC06C87D3, 0x4718D55F */
-2.19210128478909325622e+02, /* 0xC06B66B9, 0x5F5C1BF6 */
};
static const double qs3[6] = {
4.76651550323729509273e+01, /* 0x4047D523, 0xCCD367E4 */
6.73865112676699709482e+02, /* 0x40850EEB, 0xC031EE3E */
3.38015286679526343505e+03, /* 0x40AA684E, 0x448E7C9A */
5.54772909720722782367e+03, /* 0x40B5ABBA, 0xA61D54A6 */
1.90311919338810798763e+03, /* 0x409DBC7A, 0x0DD4DF4B */
-1.35201191444307340817e+02, /* 0xC060E670, 0x290A311F */
};
static const double qr2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
-1.78381727510958865572e-07, /* 0xBE87F126, 0x44C626D2 */
-1.02517042607985553460e-01, /* 0xBFBA3E8E, 0x9148B010 */
-2.75220568278187460720e+00, /* 0xC0060484, 0x69BB4EDA */
-1.96636162643703720221e+01, /* 0xC033A9E2, 0xC168907F */
-4.23253133372830490089e+01, /* 0xC04529A3, 0xDE104AAA */
-2.13719211703704061733e+01, /* 0xC0355F36, 0x39CF6E52 */
};
static const double qs2[6] = {
2.95333629060523854548e+01, /* 0x403D888A, 0x78AE64FF */
2.52981549982190529136e+02, /* 0x406F9F68, 0xDB821CBA */
7.57502834868645436472e+02, /* 0x4087AC05, 0xCE49A0F7 */
7.39393205320467245656e+02, /* 0x40871B25, 0x48D4C029 */
1.55949003336666123687e+02, /* 0x40637E5E, 0x3C3ED8D4 */
-4.95949898822628210127e+00, /* 0xC013D686, 0xE71BE86B */
};
static __inline double
qone(double x)
{
const double *p,*q;
double s,r,z;
int32_t ix;
GET_HIGH_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x40200000) {p = qr8; q= qs8;}
else if(ix>=0x40122E8B){p = qr5; q= qs5;}
else if(ix>=0x4006DB6D){p = qr3; q= qs3;}
else {p = qr2; q= qs2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*(q[4]+z*q[5])))));
return (.375 + r/s)/x;
}
Index: projects/clang800-import/lib/msun/src/e_j1f.c
===================================================================
--- projects/clang800-import/lib/msun/src/e_j1f.c (revision 343955)
+++ projects/clang800-import/lib/msun/src/e_j1f.c (revision 343956)
@@ -1,340 +1,338 @@
/* e_j1f.c -- float version of e_j1.c.
* Conversion to float by Ian Lance Taylor, Cygnus Support, ian@cygnus.com.
*/
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunPro, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* See e_j1.c for complete comments.
*/
#include "math.h"
#include "math_private.h"
static __inline float ponef(float), qonef(float);
static const volatile float vone = 1, vzero = 0;
static const float
huge = 1e30,
one = 1.0,
invsqrtpi= 5.6418961287e-01, /* 0x3f106ebb */
tpi = 6.3661974669e-01, /* 0x3f22f983 */
/* R0/S0 on [0,2] */
r00 = -6.2500000000e-02, /* 0xbd800000 */
r01 = 1.4070566976e-03, /* 0x3ab86cfd */
r02 = -1.5995563444e-05, /* 0xb7862e36 */
r03 = 4.9672799207e-08, /* 0x335557d2 */
s01 = 1.9153760746e-02, /* 0x3c9ce859 */
s02 = 1.8594678841e-04, /* 0x3942fab6 */
s03 = 1.1771846857e-06, /* 0x359dffc2 */
s04 = 5.0463624390e-09, /* 0x31ad6446 */
s05 = 1.2354227016e-11; /* 0x2d59567e */
static const float zero = 0.0;
float
__ieee754_j1f(float x)
{
float z, s,c,ss,cc,r,u,v,y;
int32_t hx,ix;
GET_FLOAT_WORD(hx,x);
ix = hx&0x7fffffff;
if(ix>=0x7f800000) return one/x;
y = fabsf(x);
if(ix >= 0x40000000) { /* |x| >= 2.0 */
- s = sinf(y);
- c = cosf(y);
+ sincosf(y, &s, &c);
ss = -s-c;
cc = s-c;
if(ix<0x7f000000) { /* make sure y+y does not overflow */
z = cosf(y+y);
if ((s*c)>zero) cc = z/ss;
else ss = z/cc;
}
/*
* j1(x) = 1/sqrt(pi) * (P(1,x)*cc - Q(1,x)*ss) / sqrt(x)
* y1(x) = 1/sqrt(pi) * (P(1,x)*ss + Q(1,x)*cc) / sqrt(x)
*/
if(ix>0x58000000) z = (invsqrtpi*cc)/sqrtf(y); /* |x|>2**49 */
else {
u = ponef(y); v = qonef(y);
z = invsqrtpi*(u*cc-v*ss)/sqrtf(y);
}
if(hx<0) return -z;
else return z;
}
if(ix<0x39000000) { /* |x|<2**-13 */
if(huge+x>one) return (float)0.5*x;/* inexact if x!=0 necessary */
}
z = x*x;
r = z*(r00+z*(r01+z*(r02+z*r03)));
s = one+z*(s01+z*(s02+z*(s03+z*(s04+z*s05))));
r *= x;
return(x*(float)0.5+r/s);
}
static const float U0[5] = {
-1.9605709612e-01, /* 0xbe48c331 */
5.0443872809e-02, /* 0x3d4e9e3c */
-1.9125689287e-03, /* 0xbafaaf2a */
2.3525259166e-05, /* 0x37c5581c */
-9.1909917899e-08, /* 0xb3c56003 */
};
static const float V0[5] = {
1.9916731864e-02, /* 0x3ca3286a */
2.0255257550e-04, /* 0x3954644b */
1.3560879779e-06, /* 0x35b602d4 */
6.2274145840e-09, /* 0x31d5f8eb */
1.6655924903e-11, /* 0x2d9281cf */
};
float
__ieee754_y1f(float x)
{
float z, s,c,ss,cc,u,v;
int32_t hx,ix;
GET_FLOAT_WORD(hx,x);
ix = 0x7fffffff&hx;
if(ix>=0x7f800000) return vone/(x+x*x);
if(ix==0) return -one/vzero;
if(hx<0) return vzero/vzero;
if(ix >= 0x40000000) { /* |x| >= 2.0 */
- s = sinf(x);
- c = cosf(x);
+ sincosf(x, &s, &c);
ss = -s-c;
cc = s-c;
if(ix<0x7f000000) { /* make sure x+x does not overflow */
z = cosf(x+x);
if ((s*c)>zero) cc = z/ss;
else ss = z/cc;
}
/* y1(x) = sqrt(2/(pi*x))*(p1(x)*sin(x0)+q1(x)*cos(x0))
* where x0 = x-3pi/4
* Better formula:
* cos(x0) = cos(x)cos(3pi/4)+sin(x)sin(3pi/4)
* = 1/sqrt(2) * (sin(x) - cos(x))
* sin(x0) = sin(x)cos(3pi/4)-cos(x)sin(3pi/4)
* = -1/sqrt(2) * (cos(x) + sin(x))
* To avoid cancellation, use
* sin(x) +- cos(x) = -cos(2x)/(sin(x) -+ cos(x))
* to compute the worse one.
*/
if(ix>0x58000000) z = (invsqrtpi*ss)/sqrtf(x); /* |x|>2**49 */
else {
u = ponef(x); v = qonef(x);
z = invsqrtpi*(u*ss+v*cc)/sqrtf(x);
}
return z;
}
if(ix<=0x33000000) { /* x < 2**-25 */
return(-tpi/x);
}
z = x*x;
u = U0[0]+z*(U0[1]+z*(U0[2]+z*(U0[3]+z*U0[4])));
v = one+z*(V0[0]+z*(V0[1]+z*(V0[2]+z*(V0[3]+z*V0[4]))));
return(x*(u/v) + tpi*(__ieee754_j1f(x)*__ieee754_logf(x)-one/x));
}
/* For x >= 8, the asymptotic expansion of pone is
* 1 + 15/128 s^2 - 4725/2^15 s^4 - ..., where s = 1/x.
* We approximate pone by
* pone(x) = 1 + (R/S)
* where R = pr0 + pr1*s^2 + pr2*s^4 + ... + pr5*s^10
* S = 1 + ps0*s^2 + ... + ps4*s^10
* and
* | pone(x)-1-R/S | <= 2 ** ( -60.06)
*/
static const float pr8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.0000000000e+00, /* 0x00000000 */
1.1718750000e-01, /* 0x3df00000 */
1.3239480972e+01, /* 0x4153d4ea */
4.1205184937e+02, /* 0x43ce06a3 */
3.8747453613e+03, /* 0x45722bed */
7.9144794922e+03, /* 0x45f753d6 */
};
static const float ps8[5] = {
1.1420736694e+02, /* 0x42e46a2c */
3.6509309082e+03, /* 0x45642ee5 */
3.6956207031e+04, /* 0x47105c35 */
9.7602796875e+04, /* 0x47bea166 */
3.0804271484e+04, /* 0x46f0a88b */
};
static const float pr5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
1.3199052094e-11, /* 0x2d68333f */
1.1718749255e-01, /* 0x3defffff */
6.8027510643e+00, /* 0x40d9b023 */
1.0830818176e+02, /* 0x42d89dca */
5.1763616943e+02, /* 0x440168b7 */
5.2871520996e+02, /* 0x44042dc6 */
};
static const float ps5[5] = {
5.9280597687e+01, /* 0x426d1f55 */
9.9140142822e+02, /* 0x4477d9b1 */
5.3532670898e+03, /* 0x45a74a23 */
7.8446904297e+03, /* 0x45f52586 */
1.5040468750e+03, /* 0x44bc0180 */
};
static const float pr3[6] = {
3.0250391081e-09, /* 0x314fe10d */
1.1718686670e-01, /* 0x3defffab */
3.9329774380e+00, /* 0x407bb5e7 */
3.5119403839e+01, /* 0x420c7a45 */
9.1055007935e+01, /* 0x42b61c2a */
4.8559066772e+01, /* 0x42423c7c */
};
static const float ps3[5] = {
3.4791309357e+01, /* 0x420b2a4d */
3.3676245117e+02, /* 0x43a86198 */
1.0468714600e+03, /* 0x4482dbe3 */
8.9081134033e+02, /* 0x445eb3ed */
1.0378793335e+02, /* 0x42cf936c */
};
static const float pr2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
1.0771083225e-07, /* 0x33e74ea8 */
1.1717621982e-01, /* 0x3deffa16 */
2.3685150146e+00, /* 0x401795c0 */
1.2242610931e+01, /* 0x4143e1bc */
1.7693971634e+01, /* 0x418d8d41 */
5.0735230446e+00, /* 0x40a25a4d */
};
static const float ps2[5] = {
2.1436485291e+01, /* 0x41ab7dec */
1.2529022980e+02, /* 0x42fa9499 */
2.3227647400e+02, /* 0x436846c7 */
1.1767937469e+02, /* 0x42eb5bd7 */
8.3646392822e+00, /* 0x4105d590 */
};
static __inline float
ponef(float x)
{
const float *p,*q;
float z,r,s;
int32_t ix;
GET_FLOAT_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x41000000) {p = pr8; q= ps8;}
else if(ix>=0x409173eb){p = pr5; q= ps5;}
else if(ix>=0x4036d917){p = pr3; q= ps3;}
else {p = pr2; q= ps2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*q[4]))));
return one+ r/s;
}
/* For x >= 8, the asymptotic expansion of qone is
* 3/8 s - 105/1024 s^3 - ..., where s = 1/x.
* We approximate qone by
* qone(x) = s*(0.375 + (R/S))
* where R = qr1*s^2 + qr2*s^4 + ... + qr5*s^10
* S = 1 + qs1*s^2 + ... + qs6*s^12
* and
* | qone(x)/s -0.375-R/S | <= 2 ** ( -61.13)
*/
static const float qr8[6] = { /* for x in [inf, 8]=1/[0,0.125] */
0.0000000000e+00, /* 0x00000000 */
-1.0253906250e-01, /* 0xbdd20000 */
-1.6271753311e+01, /* 0xc1822c8d */
-7.5960174561e+02, /* 0xc43de683 */
-1.1849806641e+04, /* 0xc639273a */
-4.8438511719e+04, /* 0xc73d3683 */
};
static const float qs8[6] = {
1.6139537048e+02, /* 0x43216537 */
7.8253862305e+03, /* 0x45f48b17 */
1.3387534375e+05, /* 0x4802bcd6 */
7.1965775000e+05, /* 0x492fb29c */
6.6660125000e+05, /* 0x4922be94 */
-2.9449025000e+05, /* 0xc88fcb48 */
};
static const float qr5[6] = { /* for x in [8,4.5454]=1/[0.125,0.22001] */
-2.0897993405e-11, /* 0xadb7d219 */
-1.0253904760e-01, /* 0xbdd1fffe */
-8.0564479828e+00, /* 0xc100e736 */
-1.8366960144e+02, /* 0xc337ab6b */
-1.3731937256e+03, /* 0xc4aba633 */
-2.6124443359e+03, /* 0xc523471c */
};
static const float qs5[6] = {
8.1276550293e+01, /* 0x42a28d98 */
1.9917987061e+03, /* 0x44f8f98f */
1.7468484375e+04, /* 0x468878f8 */
4.9851425781e+04, /* 0x4742bb6d */
2.7948074219e+04, /* 0x46da5826 */
-4.7191835938e+03, /* 0xc5937978 */
};
static const float qr3[6] = {
-5.0783124372e-09, /* 0xb1ae7d4f */
-1.0253783315e-01, /* 0xbdd1ff5b */
-4.6101160049e+00, /* 0xc0938612 */
-5.7847221375e+01, /* 0xc267638e */
-2.2824453735e+02, /* 0xc3643e9a */
-2.1921012878e+02, /* 0xc35b35cb */
};
static const float qs3[6] = {
4.7665153503e+01, /* 0x423ea91e */
6.7386511230e+02, /* 0x4428775e */
3.3801528320e+03, /* 0x45534272 */
5.5477290039e+03, /* 0x45ad5dd5 */
1.9031191406e+03, /* 0x44ede3d0 */
-1.3520118713e+02, /* 0xc3073381 */
};
static const float qr2[6] = {/* for x in [2.8570,2]=1/[0.3499,0.5] */
-1.7838172539e-07, /* 0xb43f8932 */
-1.0251704603e-01, /* 0xbdd1f475 */
-2.7522056103e+00, /* 0xc0302423 */
-1.9663616180e+01, /* 0xc19d4f16 */
-4.2325313568e+01, /* 0xc2294d1f */
-2.1371921539e+01, /* 0xc1aaf9b2 */
};
static const float qs2[6] = {
2.9533363342e+01, /* 0x41ec4454 */
2.5298155212e+02, /* 0x437cfb47 */
7.5750280762e+02, /* 0x443d602e */
7.3939318848e+02, /* 0x4438d92a */
1.5594900513e+02, /* 0x431bf2f2 */
-4.9594988823e+00, /* 0xc09eb437 */
};
static __inline float
qonef(float x)
{
const float *p,*q;
float s,r,z;
int32_t ix;
GET_FLOAT_WORD(ix,x);
ix &= 0x7fffffff;
if(ix>=0x41000000) {p = qr8; q= qs8;}
else if(ix>=0x409173eb){p = qr5; q= qs5;}
else if(ix>=0x4036d917){p = qr3; q= qs3;}
else {p = qr2; q= qs2;} /* ix>=0x40000000 */
z = one/(x*x);
r = p[0]+z*(p[1]+z*(p[2]+z*(p[3]+z*(p[4]+z*p[5]))));
s = one+z*(q[0]+z*(q[1]+z*(q[2]+z*(q[3]+z*(q[4]+z*q[5])))));
return ((float).375 + r/s)/x;
}
Index: projects/clang800-import/lib/msun/src/e_jn.c
===================================================================
--- projects/clang800-import/lib/msun/src/e_jn.c (revision 343955)
+++ projects/clang800-import/lib/msun/src/e_jn.c (revision 343956)
@@ -1,272 +1,274 @@
/* @(#)e_jn.c 1.4 95/01/18 */
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunSoft, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* __ieee754_jn(n, x), __ieee754_yn(n, x)
* floating point Bessel's function of the 1st and 2nd kind
* of order n
*
* Special cases:
* y0(0)=y1(0)=yn(n,0) = -inf with division by zero signal;
* y0(-ve)=y1(-ve)=yn(n,-ve) are NaN with invalid signal.
* Note 2. About jn(n,x), yn(n,x)
* For n=0, j0(x) is called,
* for n=1, j1(x) is called,
* for n<x, forward recursion is used starting
* from values of j0(x) and j1(x).
* for n>x, a continued fraction approximation to
* j(n,x)/j(n-1,x) is evaluated and then backward
* recursion is used starting from a supposed value
* for j(n,x). The resulting value of j(0,x) is
* compared with the actual value to correct the
* supposed value of j(n,x).
*
* yn(n,x) is similar in all respects, except
* that forward recursion is used for all
* values of n>1.
*/
#include "math.h"
#include "math_private.h"
static const volatile double vone = 1, vzero = 0;
static const double
invsqrtpi= 5.64189583547756279280e-01, /* 0x3FE20DD7, 0x50429B6D */
two = 2.00000000000000000000e+00, /* 0x40000000, 0x00000000 */
one = 1.00000000000000000000e+00; /* 0x3FF00000, 0x00000000 */
static const double zero = 0.00000000000000000000e+00;
double
__ieee754_jn(int n, double x)
{
int32_t i,hx,ix,lx, sgn;
- double a, b, temp, di;
+ double a, b, c, s, temp, di;
double z, w;
/* J(-n,x) = (-1)^n * J(n, x), J(n, -x) = (-1)^n * J(n, x)
* Thus, J(-n,x) = J(n,-x)
*/
EXTRACT_WORDS(hx,lx,x);
ix = 0x7fffffff&hx;
/* if J(n,NaN) is NaN */
if((ix|((u_int32_t)(lx|-lx))>>31)>0x7ff00000) return x+x;
if(n<0){
n = -n;
x = -x;
hx ^= 0x80000000;
}
if(n==0) return(__ieee754_j0(x));
if(n==1) return(__ieee754_j1(x));
sgn = (n&1)&(hx>>31); /* even n -- 0, odd n -- sign(x) */
x = fabs(x);
if((ix|lx)==0||ix>=0x7ff00000) /* if x is 0 or inf */
b = zero;
else if((double)n<=x) {
/* Safe to use J(n+1,x)=2n/x *J(n,x)-J(n-1,x) */
if(ix>=0x52D00000) { /* x > 2**302 */
/* (x >> n**2)
* Jn(x) = cos(x-(2n+1)*pi/4)*sqrt(2/x*pi)
* Yn(x) = sin(x-(2n+1)*pi/4)*sqrt(2/x*pi)
* Let s=sin(x), c=cos(x),
* xn=x-(2n+1)*pi/4, sqt2 = sqrt(2), then
*
* n sin(xn)*sqt2 cos(xn)*sqt2
* ----------------------------------
* 0 s-c c+s
* 1 -s-c -c+s
* 2 -s+c -c-s
* 3 s+c c-s
*/
+ sincos(x, &s, &c);
switch(n&3) {
- case 0: temp = cos(x)+sin(x); break;
- case 1: temp = -cos(x)+sin(x); break;
- case 2: temp = -cos(x)-sin(x); break;
- case 3: temp = cos(x)-sin(x); break;
+ case 0: temp = c+s; break;
+ case 1: temp = -c+s; break;
+ case 2: temp = -c-s; break;
+ case 3: temp = c-s; break;
}
b = invsqrtpi*temp/sqrt(x);
} else {
a = __ieee754_j0(x);
b = __ieee754_j1(x);
for(i=1;i<n;i++){
temp = b;
b = b*((double)(i+i)/x) - a; /* avoid underflow */
a = temp;
}
}
} else {
if(ix<0x3e100000) { /* x < 2**-29 */
/* x is tiny, return the first Taylor expansion of J(n,x)
* J(n,x) = 1/n!*(x/2)^n - ...
*/
if(n>33) /* underflow */
b = zero;
else {
temp = x*0.5; b = temp;
for (a=one,i=2;i<=n;i++) {
a *= (double)i; /* a = n! */
b *= temp; /* b = (x/2)^n */
}
b = b/a;
}
} else {
/* use backward recurrence */
/* x x^2 x^2
* J(n,x)/J(n-1,x) = ---- ------ ------ .....
* 2n - 2(n+1) - 2(n+2)
*
* 1 1 1
* (for large x) = ---- ------ ------ .....
* 2n 2(n+1) 2(n+2)
* -- - ------ - ------ -
* x x x
*
* Let w = 2n/x and h=2/x, then the above quotient
* is equal to the continued fraction:
* 1
* = -----------------------
* 1
* w - -----------------
* 1
* w+h - ---------
* w+2h - ...
*
* To determine how many terms needed, let
* Q(0) = w, Q(1) = w(w+h) - 1,
* Q(k) = (w+k*h)*Q(k-1) - Q(k-2),
* When Q(k) > 1e4 good for single
* When Q(k) > 1e9 good for double
* When Q(k) > 1e17 good for quadruple
*/
/* determine k */
double t,v;
double q0,q1,h,tmp; int32_t k,m;
w = (n+n)/(double)x; h = 2.0/(double)x;
q0 = w; z = w+h; q1 = w*z - 1.0; k=1;
while(q1<1.0e9) {
k += 1; z += h;
tmp = z*q1 - q0;
q0 = q1;
q1 = tmp;
}
m = n+n;
for(t=zero, i = 2*(n+k); i>=m; i -= 2) t = one/(i/x-t);
a = t;
b = one;
/* estimate log((2/x)^n*n!) = n*log(2/x)+n*ln(n)
* Hence, if n*(log(2n/x)) > ...
* single 8.8722839355e+01
* double 7.09782712893383973096e+02
* long double 1.1356523406294143949491931077970765006170e+04
* then recurrent value may overflow and the result is
* likely underflow to zero
*/
tmp = n;
v = two/x;
tmp = tmp*__ieee754_log(fabs(v*tmp));
if(tmp<7.09782712893383973096e+02) {
for(i=n-1,di=(double)(i+i);i>0;i--){
temp = b;
b *= di;
b = b/x - a;
a = temp;
di -= two;
}
} else {
for(i=n-1,di=(double)(i+i);i>0;i--){
temp = b;
b *= di;
b = b/x - a;
a = temp;
di -= two;
/* scale b to avoid spurious overflow */
if(b>1e100) {
a /= b;
t /= b;
b = one;
}
}
}
z = __ieee754_j0(x);
w = __ieee754_j1(x);
if (fabs(z) >= fabs(w))
b = (t*z/b);
else
b = (t*w/a);
}
}
if(sgn==1) return -b; else return b;
}
double
__ieee754_yn(int n, double x)
{
int32_t i,hx,ix,lx;
int32_t sign;
- double a, b, temp;
+ double a, b, c, s, temp;
EXTRACT_WORDS(hx,lx,x);
ix = 0x7fffffff&hx;
/* yn(n,NaN) = NaN */
if((ix|((u_int32_t)(lx|-lx))>>31)>0x7ff00000) return x+x;
/* yn(n,+-0) = -inf and raise divide-by-zero exception. */
if((ix|lx)==0) return -one/vzero;
/* yn(n,x<0) = NaN and raise invalid exception. */
if(hx<0) return vzero/vzero;
sign = 1;
if(n<0){
n = -n;
sign = 1 - ((n&1)<<1);
}
if(n==0) return(__ieee754_y0(x));
if(n==1) return(sign*__ieee754_y1(x));
if(ix==0x7ff00000) return zero;
if(ix>=0x52D00000) { /* x > 2**302 */
/* (x >> n**2)
* Jn(x) = cos(x-(2n+1)*pi/4)*sqrt(2/x*pi)
* Yn(x) = sin(x-(2n+1)*pi/4)*sqrt(2/x*pi)
* Let s=sin(x), c=cos(x),
* xn=x-(2n+1)*pi/4, sqt2 = sqrt(2), then
*
* n sin(xn)*sqt2 cos(xn)*sqt2
* ----------------------------------
* 0 s-c c+s
* 1 -s-c -c+s
* 2 -s+c -c-s
* 3 s+c c-s
*/
+ sincos(x, &s, &c);
switch(n&3) {
- case 0: temp = sin(x)-cos(x); break;
- case 1: temp = -sin(x)-cos(x); break;
- case 2: temp = -sin(x)+cos(x); break;
- case 3: temp = sin(x)+cos(x); break;
+ case 0: temp = s-c; break;
+ case 1: temp = -s-c; break;
+ case 2: temp = -s+c; break;
+ case 3: temp = s+c; break;
}
b = invsqrtpi*temp/sqrt(x);
} else {
u_int32_t high;
a = __ieee754_y0(x);
b = __ieee754_y1(x);
/* quit if b is -inf */
GET_HIGH_WORD(high,b);
for(i=1;i<n&&high!=0xfff00000;i++){
temp = b;
b = ((double)(i+i)/x)*b - a;
GET_HIGH_WORD(high,b);
a = temp;
}
}
if(sign>0) return b; else return -b;
}
Index: projects/clang800-import/lib/msun/tests/trig_test.c
===================================================================
--- projects/clang800-import/lib/msun/tests/trig_test.c (revision 343955)
+++ projects/clang800-import/lib/msun/tests/trig_test.c (revision 343956)
@@ -1,284 +1,285 @@
/*-
* Copyright (c) 2008 David Schultz <das@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
* Tests for corner cases in trigonometric functions. Some accuracy tests
* are included as well, but these are very basic sanity checks, not
* intended to be comprehensive.
*
* The program for generating representable numbers near multiples of pi is
* available at http://www.cs.berkeley.edu/~wkahan/testpi/ .
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <assert.h>
#include <fenv.h>
#include <float.h>
#include <math.h>
#include <stdio.h>
#include <atf-c.h>
#include "test-utils.h"
#pragma STDC FENV_ACCESS ON
/*
* Test that a function returns the correct value and sets the
* exception flags correctly. The exceptmask specifies which
* exceptions we should check. We need to be lenient for several
* reasons, but mainly because on some architectures it's impossible
* to raise FE_OVERFLOW without raising FE_INEXACT.
*
* These are macros instead of functions so that assert provides more
* meaningful error messages.
*
* XXX The volatile here is to avoid gcc's bogus constant folding and work
* around the lack of support for the FENV_ACCESS pragma.
*/
#define test(func, x, result, exceptmask, excepts) do { \
volatile long double _d = x; \
ATF_CHECK(feclearexcept(FE_ALL_EXCEPT) == 0); \
ATF_CHECK(fpequal((func)(_d), (result))); \
ATF_CHECK(((void)(func), fetestexcept(exceptmask) == (excepts))); \
} while (0)
#define testall(prefix, x, result, exceptmask, excepts) do { \
test(prefix, x, (double)result, exceptmask, excepts); \
test(prefix##f, x, (float)result, exceptmask, excepts); \
test(prefix##l, x, result, exceptmask, excepts); \
} while (0)
#define testdf(prefix, x, result, exceptmask, excepts) do { \
test(prefix, x, (double)result, exceptmask, excepts); \
test(prefix##f, x, (float)result, exceptmask, excepts); \
} while (0)
ATF_TC(special);
ATF_TC_HEAD(special, tc)
{
atf_tc_set_md_var(tc, "descr",
"test special cases in sin(), cos(), and tan()");
}
ATF_TC_BODY(special, tc)
{
/* Values at 0 should be exact. */
testall(tan, 0.0, 0.0, ALL_STD_EXCEPT, 0);
testall(tan, -0.0, -0.0, ALL_STD_EXCEPT, 0);
testall(cos, 0.0, 1.0, ALL_STD_EXCEPT, 0);
testall(cos, -0.0, 1.0, ALL_STD_EXCEPT, 0);
testall(sin, 0.0, 0.0, ALL_STD_EXCEPT, 0);
testall(sin, -0.0, -0.0, ALL_STD_EXCEPT, 0);
/* func(+-Inf) == NaN */
testall(tan, INFINITY, NAN, ALL_STD_EXCEPT, FE_INVALID);
testall(sin, INFINITY, NAN, ALL_STD_EXCEPT, FE_INVALID);
testall(cos, INFINITY, NAN, ALL_STD_EXCEPT, FE_INVALID);
testall(tan, -INFINITY, NAN, ALL_STD_EXCEPT, FE_INVALID);
testall(sin, -INFINITY, NAN, ALL_STD_EXCEPT, FE_INVALID);
testall(cos, -INFINITY, NAN, ALL_STD_EXCEPT, FE_INVALID);
/* func(NaN) == NaN */
testall(tan, NAN, NAN, ALL_STD_EXCEPT, 0);
testall(sin, NAN, NAN, ALL_STD_EXCEPT, 0);
testall(cos, NAN, NAN, ALL_STD_EXCEPT, 0);
}
#ifndef __i386__
ATF_TC(reduction);
ATF_TC_HEAD(reduction, tc)
{
atf_tc_set_md_var(tc, "descr",
"tests to ensure argument reduction for large arguments is accurate");
}
ATF_TC_BODY(reduction, tc)
{
/* floats very close to odd multiples of pi */
static const float f_pi_odd[] = {
85563208.0f,
43998769152.0f,
9.2763667655669323e+25f,
1.5458357838905804e+29f,
};
/* doubles very close to odd multiples of pi */
static const double d_pi_odd[] = {
3.1415926535897931,
91.106186954104004,
642615.9188844458,
3397346.5699258847,
6134899525417045.0,
3.0213551960457761e+43,
1.2646209897993783e+295,
6.2083625380677099e+307,
};
/* long doubles very close to odd multiples of pi */
#if LDBL_MANT_DIG == 64
static const long double ld_pi_odd[] = {
1.1891886960373841596e+101L,
1.07999475322710967206e+2087L,
6.522151627890431836e+2147L,
8.9368974898260328229e+2484L,
9.2961044110572205863e+2555L,
4.90208421886578286e+3189L,
1.5275546401232615884e+3317L,
1.7227465626338900093e+3565L,
2.4160090594000745334e+3808L,
9.8477555741888350649e+4314L,
1.6061597222105160737e+4326L,
};
#endif
unsigned i;
-#if defined(__amd64__) && defined(__clang__) && __clang_major__ >= 7
+#if defined(__amd64__) && defined(__clang__) && __clang_major__ >= 7 && \
+ __FreeBSD_cc_version < 1300002
atf_tc_expect_fail("test fails with clang 7+ - bug 234040");
#endif
for (i = 0; i < nitems(f_pi_odd); i++) {
ATF_CHECK(fabs(sinf(f_pi_odd[i])) < FLT_EPSILON);
ATF_CHECK(cosf(f_pi_odd[i]) == -1.0);
ATF_CHECK(fabs(tanf(f_pi_odd[i])) < FLT_EPSILON);
ATF_CHECK(fabs(sinf(-f_pi_odd[i])) < FLT_EPSILON);
ATF_CHECK(cosf(-f_pi_odd[i]) == -1.0);
ATF_CHECK(fabs(tanf(-f_pi_odd[i])) < FLT_EPSILON);
ATF_CHECK(fabs(sinf(f_pi_odd[i] * 2)) < FLT_EPSILON);
ATF_CHECK(cosf(f_pi_odd[i] * 2) == 1.0);
ATF_CHECK(fabs(tanf(f_pi_odd[i] * 2)) < FLT_EPSILON);
ATF_CHECK(fabs(sinf(-f_pi_odd[i] * 2)) < FLT_EPSILON);
ATF_CHECK(cosf(-f_pi_odd[i] * 2) == 1.0);
ATF_CHECK(fabs(tanf(-f_pi_odd[i] * 2)) < FLT_EPSILON);
}
for (i = 0; i < nitems(d_pi_odd); i++) {
ATF_CHECK(fabs(sin(d_pi_odd[i])) < 2 * DBL_EPSILON);
ATF_CHECK(cos(d_pi_odd[i]) == -1.0);
ATF_CHECK(fabs(tan(d_pi_odd[i])) < 2 * DBL_EPSILON);
ATF_CHECK(fabs(sin(-d_pi_odd[i])) < 2 * DBL_EPSILON);
ATF_CHECK(cos(-d_pi_odd[i]) == -1.0);
ATF_CHECK(fabs(tan(-d_pi_odd[i])) < 2 * DBL_EPSILON);
ATF_CHECK(fabs(sin(d_pi_odd[i] * 2)) < 2 * DBL_EPSILON);
ATF_CHECK(cos(d_pi_odd[i] * 2) == 1.0);
ATF_CHECK(fabs(tan(d_pi_odd[i] * 2)) < 2 * DBL_EPSILON);
ATF_CHECK(fabs(sin(-d_pi_odd[i] * 2)) < 2 * DBL_EPSILON);
ATF_CHECK(cos(-d_pi_odd[i] * 2) == 1.0);
ATF_CHECK(fabs(tan(-d_pi_odd[i] * 2)) < 2 * DBL_EPSILON);
}
#if LDBL_MANT_DIG == 64 /* XXX: || LDBL_MANT_DIG == 113 */
for (i = 0; i < nitems(ld_pi_odd); i++) {
ATF_CHECK(fabsl(sinl(ld_pi_odd[i])) < LDBL_EPSILON);
ATF_CHECK(cosl(ld_pi_odd[i]) == -1.0);
ATF_CHECK(fabsl(tanl(ld_pi_odd[i])) < LDBL_EPSILON);
ATF_CHECK(fabsl(sinl(-ld_pi_odd[i])) < LDBL_EPSILON);
ATF_CHECK(cosl(-ld_pi_odd[i]) == -1.0);
ATF_CHECK(fabsl(tanl(-ld_pi_odd[i])) < LDBL_EPSILON);
ATF_CHECK(fabsl(sinl(ld_pi_odd[i] * 2)) < LDBL_EPSILON);
ATF_CHECK(cosl(ld_pi_odd[i] * 2) == 1.0);
ATF_CHECK(fabsl(tanl(ld_pi_odd[i] * 2)) < LDBL_EPSILON);
ATF_CHECK(fabsl(sinl(-ld_pi_odd[i] * 2)) < LDBL_EPSILON);
ATF_CHECK(cosl(-ld_pi_odd[i] * 2) == 1.0);
ATF_CHECK(fabsl(tanl(-ld_pi_odd[i] * 2)) < LDBL_EPSILON);
}
#endif
}
ATF_TC(accuracy);
ATF_TC_HEAD(accuracy, tc)
{
atf_tc_set_md_var(tc, "descr",
"tests the accuracy of these functions over the primary range");
}
ATF_TC_BODY(accuracy, tc)
{
/* For small args, sin(x) = tan(x) = x, and cos(x) = 1. */
testall(sin, 0xd.50ee515fe4aea16p-114L, 0xd.50ee515fe4aea16p-114L,
ALL_STD_EXCEPT, FE_INEXACT);
testall(tan, 0xd.50ee515fe4aea16p-114L, 0xd.50ee515fe4aea16p-114L,
ALL_STD_EXCEPT, FE_INEXACT);
testall(cos, 0xd.50ee515fe4aea16p-114L, 1.0,
ALL_STD_EXCEPT, FE_INEXACT);
/*
* These tests should pass for f32, d64, and ld80 as long as
* the error is <= 0.75 ulp (round to nearest)
*/
#if LDBL_MANT_DIG <= 64
#define testacc testall
#else
#define testacc testdf
#endif
testacc(sin, 0.17255452780841205174L, 0.17169949801444412683L,
ALL_STD_EXCEPT, FE_INEXACT);
testacc(sin, -0.75431944555904520893L, -0.68479288156557286353L,
ALL_STD_EXCEPT, FE_INEXACT);
testacc(cos, 0.70556358769838947292L, 0.76124620693117771850L,
ALL_STD_EXCEPT, FE_INEXACT);
testacc(cos, -0.34061437849088045332L, 0.94254960031831729956L,
ALL_STD_EXCEPT, FE_INEXACT);
testacc(tan, -0.15862817413325692897L, -0.15997221861309522115L,
ALL_STD_EXCEPT, FE_INEXACT);
testacc(tan, 0.38374784931303813530L, 0.40376500259976759951L,
ALL_STD_EXCEPT, FE_INEXACT);
/*
* XXX missing:
* - tests for ld128
* - tests for other rounding modes (probably won't pass for now)
* - tests for large numbers that get reduced to hi+lo with lo!=0
*/
}
#endif
ATF_TP_ADD_TCS(tp)
{
ATF_TP_ADD_TC(tp, special);
#ifndef __i386__
ATF_TP_ADD_TC(tp, accuracy);
ATF_TP_ADD_TC(tp, reduction);
#endif
return (atf_no_error());
}
Index: projects/clang800-import/libexec/rc/rc.d/growfs
===================================================================
--- projects/clang800-import/libexec/rc/rc.d/growfs (revision 343955)
+++ projects/clang800-import/libexec/rc/rc.d/growfs (revision 343956)
@@ -1,98 +1,118 @@
#!/bin/sh
#
# Copyright 2014 John-Mark Gurney
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
# $FreeBSD$
#
# PROVIDE: growfs
# BEFORE: sysctl
# KEYWORD: firstboot
# This allows us to distribute an image
# and have it work on essentially any size drive.
#
# TODO: Figure out where this should really be ordered.
# I suspect it should go just after fsck but before mountcritlocal.
#
. /etc/rc.subr
name="growfs"
desc="Grow root partition to fill device"
start_cmd="growfs_start"
stop_cmd=":"
rcvar="growfs_enable"
growfs_start ()
{
echo "Growing root partition to fill device"
- rootdev=$(df / | tail -n 1 | awk '{ sub("/dev/", "", $1); print $1 }')
+ FSTYPE=$(mount -p | awk '{ if ( $2 == "/") { print $3 }}')
+ FSDEV=$(mount -p | awk '{ if ( $2 == "/") { print $1 }}')
+ case "$FSTYPE" in
+ ufs)
+ rootdev=${FSDEV#/dev/}
+ ;;
+ zfs)
+ pool=${FSDEV%%/*}
+ rootdev=$(zpool list -v $pool | tail -n 1 | awk '{ print $1 }')
+ ;;
+ *)
+ echo "Don't know how to grow root filesystem type: $FSTYPE"
+ return
+ esac
if [ x"$rootdev" = x"${rootdev%/*}" ]; then
# raw device
rawdev="$rootdev"
else
rawdev=$(glabel status | awk '$1 == "'"$rootdev"'" { print $3 }')
if [ x"$rawdev" = x"" ]; then
echo "Can't figure out device for: $rootdev"
return
fi
fi
sysctl -b kern.geom.conftxt | awk '
{
lvl=$1
device[lvl] = $3
type[lvl] = $2
idx[lvl] = $7
parttype[lvl] = $13
if (dev == $3) {
for (i = 1; i <= lvl; i++) {
# resize
if (type[i] == "PART") {
pdev = device[i - 1]
cmd[i] = "gpart resize -i " idx[i] " " pdev
if (parttype[i] == "GPT")
cmd[i] = "gpart recover " pdev " ; " cmd[i]
} else if (type[i] == "LABEL") {
continue
} else {
print "unhandled type: " type[i]
exit 1
}
}
for (i = 1; i <= lvl; i++) {
if (cmd[i])
system(cmd[i])
}
exit 0
}
}' dev="$rawdev"
gpart commit "$rootdev"
- growfs -y /dev/"$rootdev"
+ case "$FSTYPE" in
+ ufs)
+ growfs -y /dev/"$rootdev"
+ ;;
+ zfs)
+ zpool online -e $pool $rootdev
+ ;;
+ esac
}
load_rc_config $name
run_rc_command "$1"
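The new growfs_start() branches on the root filesystem type and leans on two POSIX shell parameter expansions: `${FSDEV#/dev/}` strips the /dev/ prefix for ufs, and `${FSDEV%%/*}` keeps everything before the first slash to get the zfs pool name. A standalone sketch with hypothetical device names (no mount or zpool invoked):

```shell
#!/bin/sh
# Sketch of the parameter expansions the script relies on, with
# hypothetical device and dataset names.

# ufs: strip the leading /dev/ from the mounted device.
FSDEV=/dev/gpt/rootfs
rootdev=${FSDEV#/dev/}
[ "$rootdev" = "gpt/rootfs" ] || exit 1

# zfs: the pool is everything before the first / of the dataset name.
FSDEV=zroot/ROOT/default
pool=${FSDEV%%/*}
[ "$pool" = "zroot" ] || exit 1

echo "ufs rootdev=$rootdev zfs pool=$pool"
```

`#pattern` removes the shortest matching prefix while `%%pattern` removes the longest matching suffix, so a dataset like zroot/ROOT/default correctly yields the pool zroot rather than zroot/ROOT.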
Index: projects/clang800-import/libexec/talkd/extern.h
===================================================================
--- projects/clang800-import/libexec/talkd/extern.h (revision 343955)
+++ projects/clang800-import/libexec/talkd/extern.h (revision 343956)
@@ -1,45 +1,45 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2002 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2002 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
extern int debug;
extern char hostname[];
int announce(CTL_MSG *, const char *);
int delete_invite(u_int32_t);
void do_announce(CTL_MSG *, CTL_RESPONSE *);
CTL_MSG *find_match(CTL_MSG *request);
CTL_MSG *find_request(CTL_MSG *request);
int find_user(const char *name, char *tty);
void insert_table(CTL_MSG *, CTL_RESPONSE *);
int new_id(void);
int print_mesg(const char *, CTL_MSG *, const char *);
void print_request(const char *, CTL_MSG *);
void print_response(const char *, CTL_RESPONSE *);
void process_request(CTL_MSG *mp, CTL_RESPONSE *rp);
void timeout(int sig);
Index: projects/clang800-import/sbin/dhclient/dhclient.c
===================================================================
--- projects/clang800-import/sbin/dhclient/dhclient.c (revision 343955)
+++ projects/clang800-import/sbin/dhclient/dhclient.c (revision 343956)
@@ -1,2839 +1,2840 @@
/* $OpenBSD: dhclient.c,v 1.63 2005/02/06 17:10:13 krw Exp $ */
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2004 Henning Brauer <henning@openbsd.org>
* Copyright (c) 1995, 1996, 1997, 1998, 1999
* The Internet Software Consortium. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of The Internet Software Consortium nor the names
* of its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE INTERNET SOFTWARE CONSORTIUM AND
* CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
* INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE INTERNET SOFTWARE CONSORTIUM OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
* USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
* OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This software has been written for the Internet Software Consortium
* by Ted Lemon <mellon@fugue.com> in cooperation with Vixie
* Enterprises. To learn more about the Internet Software Consortium,
* see ``http://www.vix.com/isc''. To learn more about Vixie
* Enterprises, see ``http://www.vix.com''.
*
* This client was substantially modified and enhanced by Elliot Poger
* for use on Linux while he was working on the MosquitoNet project at
* Stanford.
*
* The current version owes much to Elliot's Linux enhancements, but
* was substantially reorganized and partially rewritten by Ted Lemon
* so as to use the same networking framework that the Internet Software
* Consortium DHCP server uses. Much system-specific configuration code
* was moved into a shell script so that as support for more operating
* systems is added, it will not be necessary to port and maintain
* system-specific configuration code to these operating systems - instead,
* the shell script can invoke the native tools to accomplish the same
* purpose.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "dhcpd.h"
#include "privsep.h"
#include <sys/capsicum.h>
#include <sys/endian.h>
#include <capsicum_helpers.h>
#include <libgen.h>
#include <net80211/ieee80211_freebsd.h>
#ifndef _PATH_VAREMPTY
#define _PATH_VAREMPTY "/var/empty"
#endif
#define PERIOD 0x2e
#define hyphenchar(c) ((c) == 0x2d)
#define bslashchar(c) ((c) == 0x5c)
#define periodchar(c) ((c) == PERIOD)
#define asterchar(c) ((c) == 0x2a)
#define alphachar(c) (((c) >= 0x41 && (c) <= 0x5a) || \
((c) >= 0x61 && (c) <= 0x7a))
#define digitchar(c) ((c) >= 0x30 && (c) <= 0x39)
#define whitechar(c) ((c) == ' ' || (c) == '\t')
#define borderchar(c) (alphachar(c) || digitchar(c))
#define middlechar(c) (borderchar(c) || hyphenchar(c))
#define domainchar(c) ((c) > 0x20 && (c) < 0x7f)
#define CLIENT_PATH "PATH=/usr/bin:/usr/sbin:/bin:/sbin"
cap_channel_t *capsyslog;
time_t cur_time;
static time_t default_lease_time = 43200; /* 12 hours... */
const char *path_dhclient_conf = _PATH_DHCLIENT_CONF;
char *path_dhclient_db = NULL;
int log_perror = 1;
static int privfd;
static int nullfd = -1;
static char hostname[_POSIX_HOST_NAME_MAX + 1];
static struct iaddr iaddr_broadcast = { 4, { 255, 255, 255, 255 } };
static struct in_addr inaddr_any, inaddr_broadcast;
static char *path_dhclient_pidfile;
struct pidfh *pidfile;
/*
* ASSERT_STATE() does nothing now; it used to be
* assert (state_is == state_shouldbe).
*/
#define ASSERT_STATE(state_is, state_shouldbe) {}
/*
* We need to check that the expiry, renewal and rebind times are not beyond
* the end of time (~2038 when a 32-bit time_t is being used).
*/
#define TIME_MAX ((((time_t) 1 << (sizeof(time_t) * CHAR_BIT - 2)) - 1) * 2 + 1)
int log_priority;
static int no_daemon;
static int unknown_ok = 1;
static int routefd;
struct interface_info *ifi;
int findproto(char *, int);
struct sockaddr *get_ifa(char *, int);
void routehandler(struct protocol *);
void usage(void);
int check_option(struct client_lease *l, int option);
int check_classless_option(unsigned char *data, int len);
int ipv4addrs(const char * buf);
int res_hnok(const char *dn);
int check_search(const char *srch);
const char *option_as_string(unsigned int code, unsigned char *data, int len);
int fork_privchld(int, int);
#define ROUNDUP(a) \
((a) > 0 ? (1 + (((a) - 1) | (sizeof(long) - 1))) : sizeof(long))
#define ADVANCE(x, n) (x += ROUNDUP((n)->sa_len))
/* Minimum MTU is 68 as per RFC791, p. 24 */
#define MIN_MTU 68
static time_t scripttime;
int
findproto(char *cp, int n)
{
struct sockaddr *sa;
unsigned i;
if (n == 0)
return -1;
for (i = 1; i; i <<= 1) {
if (i & n) {
sa = (struct sockaddr *)cp;
switch (i) {
case RTA_IFA:
case RTA_DST:
case RTA_GATEWAY:
case RTA_NETMASK:
if (sa->sa_family == AF_INET)
return AF_INET;
if (sa->sa_family == AF_INET6)
return AF_INET6;
break;
case RTA_IFP:
break;
}
ADVANCE(cp, sa);
}
}
return (-1);
}
struct sockaddr *
get_ifa(char *cp, int n)
{
struct sockaddr *sa;
unsigned i;
if (n == 0)
return (NULL);
for (i = 1; i; i <<= 1)
if (i & n) {
sa = (struct sockaddr *)cp;
if (i == RTA_IFA)
return (sa);
ADVANCE(cp, sa);
}
return (NULL);
}
static struct iaddr defaddr = { .len = 4 };
static uint8_t curbssid[6];
static void
disassoc(void *arg)
{
struct interface_info *_ifi = arg;
/*
* Clear existing state.
*/
if (_ifi->client->active != NULL) {
script_init("EXPIRE", NULL);
script_write_params("old_",
_ifi->client->active);
if (_ifi->client->alias)
script_write_params("alias_",
_ifi->client->alias);
script_go();
}
_ifi->client->state = S_INIT;
}
void
routehandler(struct protocol *p __unused)
{
char msg[2048], *addr;
struct rt_msghdr *rtm;
struct if_msghdr *ifm;
struct ifa_msghdr *ifam;
struct if_announcemsghdr *ifan;
struct ieee80211_join_event *jev;
struct client_lease *l;
time_t t = time(NULL);
struct sockaddr_in *sa;
struct iaddr a;
ssize_t n;
int linkstat;
n = read(routefd, &msg, sizeof(msg));
rtm = (struct rt_msghdr *)msg;
if (n < (ssize_t)sizeof(rtm->rtm_msglen) ||
n < (ssize_t)rtm->rtm_msglen ||
rtm->rtm_version != RTM_VERSION)
return;
switch (rtm->rtm_type) {
case RTM_NEWADDR:
case RTM_DELADDR:
ifam = (struct ifa_msghdr *)rtm;
if (ifam->ifam_index != ifi->index)
break;
if (findproto((char *)(ifam + 1), ifam->ifam_addrs) != AF_INET)
break;
if (scripttime == 0 || t < scripttime + 10)
break;
sa = (struct sockaddr_in*)get_ifa((char *)(ifam + 1), ifam->ifam_addrs);
if (sa == NULL)
break;
if ((a.len = sizeof(struct in_addr)) > sizeof(a.iabuf))
error("king bula sez: len mismatch");
memcpy(a.iabuf, &sa->sin_addr, a.len);
if (addr_eq(a, defaddr))
break;
for (l = ifi->client->active; l != NULL; l = l->next)
if (addr_eq(a, l->address))
break;
if (l == NULL) /* added/deleted addr is not the one we set */
break;
addr = inet_ntoa(sa->sin_addr);
if (rtm->rtm_type == RTM_NEWADDR) {
/*
* XXX: If someone other than us adds our address,
* should we assume they are taking over from us,
* delete the lease record, and exit without modifying
* the interface?
*/
warning("My address (%s) was re-added", addr);
} else {
warning("My address (%s) was deleted, dhclient exiting",
addr);
goto die;
}
break;
case RTM_IFINFO:
ifm = (struct if_msghdr *)rtm;
if (ifm->ifm_index != ifi->index)
break;
if ((rtm->rtm_flags & RTF_UP) == 0) {
warning("Interface %s is down, dhclient exiting",
ifi->name);
goto die;
}
linkstat = interface_link_status(ifi->name);
if (linkstat != ifi->linkstat) {
debug("%s link state %s -> %s", ifi->name,
ifi->linkstat ? "up" : "down",
linkstat ? "up" : "down");
ifi->linkstat = linkstat;
if (linkstat)
state_reboot(ifi);
}
break;
case RTM_IFANNOUNCE:
ifan = (struct if_announcemsghdr *)rtm;
if (ifan->ifan_what == IFAN_DEPARTURE &&
ifan->ifan_index == ifi->index) {
warning("Interface %s is gone, dhclient exiting",
ifi->name);
goto die;
}
break;
case RTM_IEEE80211:
ifan = (struct if_announcemsghdr *)rtm;
if (ifan->ifan_index != ifi->index)
break;
switch (ifan->ifan_what) {
case RTM_IEEE80211_ASSOC:
case RTM_IEEE80211_REASSOC:
/*
* Use assoc/reassoc event to kick state machine
* in case we roam. Otherwise fall back to the
* normal state machine just like a wired network.
*/
jev = (struct ieee80211_join_event *) &ifan[1];
if (memcmp(curbssid, jev->iev_addr, 6)) {
disassoc(ifi);
state_reboot(ifi);
}
memcpy(curbssid, jev->iev_addr, 6);
break;
}
break;
default:
break;
}
return;
die:
script_init("FAIL", NULL);
if (ifi->client->alias)
script_write_params("alias_", ifi->client->alias);
script_go();
if (pidfile != NULL)
pidfile_remove(pidfile);
exit(1);
}
static void
init_casper(void)
{
cap_channel_t *casper;
casper = cap_init();
if (casper == NULL)
error("unable to start casper");
capsyslog = cap_service_open(casper, "system.syslog");
cap_close(casper);
if (capsyslog == NULL)
error("unable to open system.syslog service");
}
int
main(int argc, char *argv[])
{
u_int capmode;
int ch, fd, quiet = 0, i = 0;
int pipe_fd[2];
int immediate_daemon = 0;
struct passwd *pw;
pid_t otherpid;
cap_rights_t rights;
init_casper();
/* Initially, log errors to stderr as well as to syslogd. */
cap_openlog(capsyslog, getprogname(), LOG_PID | LOG_NDELAY, DHCPD_LOG_FACILITY);
cap_setlogmask(capsyslog, LOG_UPTO(LOG_DEBUG));
while ((ch = getopt(argc, argv, "bc:dl:p:qu")) != -1)
switch (ch) {
case 'b':
immediate_daemon = 1;
break;
case 'c':
path_dhclient_conf = optarg;
break;
case 'd':
no_daemon = 1;
break;
case 'l':
path_dhclient_db = optarg;
break;
case 'p':
path_dhclient_pidfile = optarg;
break;
case 'q':
quiet = 1;
break;
case 'u':
unknown_ok = 0;
break;
default:
usage();
}
argc -= optind;
argv += optind;
if (argc != 1)
usage();
if (path_dhclient_pidfile == NULL) {
asprintf(&path_dhclient_pidfile,
"%s/dhclient/dhclient.%s.pid", _PATH_VARRUN, *argv);
if (path_dhclient_pidfile == NULL)
error("asprintf");
}
pidfile = pidfile_open(path_dhclient_pidfile, 0644, &otherpid);
if (pidfile == NULL) {
if (errno == EEXIST)
error("dhclient already running, pid: %d.", (int)otherpid);
if (errno == EAGAIN)
error("dhclient already running.");
warning("Cannot open or create pidfile: %m");
}
if ((ifi = calloc(1, sizeof(struct interface_info))) == NULL)
error("calloc");
if (strlcpy(ifi->name, argv[0], IFNAMSIZ) >= IFNAMSIZ)
error("Interface name too long");
if (path_dhclient_db == NULL && asprintf(&path_dhclient_db, "%s.%s",
_PATH_DHCLIENT_DB, ifi->name) == -1)
error("asprintf");
if (quiet)
log_perror = 0;
tzset();
time(&cur_time);
inaddr_broadcast.s_addr = INADDR_BROADCAST;
inaddr_any.s_addr = INADDR_ANY;
read_client_conf();
/* The next bit is potentially very time-consuming, so write out
the pidfile right away. We will write it out again with the
correct pid after daemonizing. */
if (pidfile != NULL)
pidfile_write(pidfile);
if (!interface_link_status(ifi->name)) {
fprintf(stderr, "%s: no link ...", ifi->name);
fflush(stderr);
sleep(1);
while (!interface_link_status(ifi->name)) {
fprintf(stderr, ".");
fflush(stderr);
if (++i > 10) {
fprintf(stderr, " giving up\n");
exit(1);
}
sleep(1);
}
fprintf(stderr, " got link\n");
}
ifi->linkstat = 1;
if ((nullfd = open(_PATH_DEVNULL, O_RDWR, 0)) == -1)
error("cannot open %s: %m", _PATH_DEVNULL);
if ((pw = getpwnam("_dhcp")) == NULL) {
warning("no such user: _dhcp, falling back to \"nobody\"");
if ((pw = getpwnam("nobody")) == NULL)
error("no such user: nobody");
}
/*
* Obtain hostname before entering capability mode - it won't be
* possible then, as reading kern.hostname is not permitted.
*/
if (gethostname(hostname, sizeof(hostname)) < 0)
hostname[0] = '\0';
priv_script_init("PREINIT", NULL);
if (ifi->client->alias)
priv_script_write_params("alias_", ifi->client->alias);
priv_script_go();
/* set up the interface */
discover_interfaces(ifi);
if (pipe(pipe_fd) == -1)
error("pipe");
fork_privchld(pipe_fd[0], pipe_fd[1]);
close(ifi->ufdesc);
ifi->ufdesc = -1;
close(ifi->wfdesc);
ifi->wfdesc = -1;
close(pipe_fd[0]);
privfd = pipe_fd[1];
cap_rights_init(&rights, CAP_READ, CAP_WRITE);
if (caph_rights_limit(privfd, &rights) < 0)
error("can't limit private descriptor: %m");
if ((fd = open(path_dhclient_db, O_RDONLY|O_EXLOCK|O_CREAT, 0)) == -1)
error("can't open and lock %s: %m", path_dhclient_db);
read_client_leases();
rewrite_client_leases();
close(fd);
if ((routefd = socket(PF_ROUTE, SOCK_RAW, 0)) == -1)
error("can't open route socket: %m");
add_protocol("AF_ROUTE", routefd, routehandler, ifi);
if (shutdown(routefd, SHUT_WR) < 0)
error("can't shutdown route socket: %m");
cap_rights_init(&rights, CAP_EVENT, CAP_READ);
if (caph_rights_limit(routefd, &rights) < 0)
error("can't limit route socket: %m");
endpwent();
setproctitle("%s", ifi->name);
/* setgroups(2) is not permitted in capability mode. */
if (setgroups(1, &pw->pw_gid) != 0)
error("can't restrict groups: %m");
if (caph_enter_casper() < 0)
error("can't enter capability mode: %m");
/*
* If we are not in capability mode (i.e., Capsicum or libcasper is
* disabled), try to restrict filesystem access. This will fail if
* kern.chroot_allow_open_directories is 0 or the process is jailed.
*/
if (cap_getmode(&capmode) < 0 || capmode == 0) {
if (chroot(_PATH_VAREMPTY) == -1)
error("chroot");
if (chdir("/") == -1)
error("chdir(\"/\")");
}
if (setegid(pw->pw_gid) || setgid(pw->pw_gid) ||
seteuid(pw->pw_uid) || setuid(pw->pw_uid))
error("can't drop privileges: %m");
if (immediate_daemon)
go_daemon();
ifi->client->state = S_INIT;
state_reboot(ifi);
bootp_packet_handler = do_packet;
dispatch();
/* not reached */
return (0);
}
void
usage(void)
{
fprintf(stderr, "usage: %s [-bdqu] ", getprogname());
fprintf(stderr, "[-c conffile] [-l leasefile] interface\n");
exit(1);
}
/*
* Individual States:
*
* Each routine is called from the dhclient_state_machine() in one of
* these conditions:
* -> entering INIT state
* -> recvpacket_flag == 0: timeout in this state
* -> otherwise: received a packet in this state
*
* Return conditions as handled by dhclient_state_machine():
* Returns 1, sendpacket_flag = 1: send packet, reset timer.
* Returns 1, sendpacket_flag = 0: just reset the timer (wait for a milestone).
* Returns 0: finish the nap which was interrupted for no good reason.
*
* Several per-interface variables are used to keep track of the process:
* active_lease: the lease that is being used on the interface
* (null pointer if not configured yet).
* offered_leases: leases corresponding to DHCPOFFER messages that have
* been sent to us by DHCP servers.
* acked_leases: leases corresponding to DHCPACK messages that have been
* sent to us by DHCP servers.
* sendpacket: DHCP packet we're trying to send.
* destination: IP address to send sendpacket to
* In addition, there are several relevant per-lease variables.
* T1_expiry, T2_expiry, lease_expiry: lease milestones
* In the active lease, these control the process of renewing the lease;
* In leases on the acked_leases list, this simply determines when we
* can no longer legitimately use the lease.
*/
void
state_reboot(void *ipp)
{
struct interface_info *ip = ipp;
/* If we don't remember an active lease, go straight to INIT. */
if (!ip->client->active || ip->client->active->is_bootp) {
state_init(ip);
return;
}
/* We are in the rebooting state. */
ip->client->state = S_REBOOTING;
/* make_request doesn't initialize xid because it normally comes
from the DHCPDISCOVER, but we haven't sent a DHCPDISCOVER,
so pick an xid now. */
ip->client->xid = arc4random();
/* Make a DHCPREQUEST packet, and set appropriate per-interface
flags. */
make_request(ip, ip->client->active);
ip->client->destination = iaddr_broadcast;
ip->client->first_sending = cur_time;
ip->client->interval = ip->client->config->initial_interval;
/* Zap the medium list... */
ip->client->medium = NULL;
/* Send out the first DHCPREQUEST packet. */
send_request(ip);
}
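The block comment above spells out when each state routine runs; state_reboot() itself embodies the first decision the client makes on startup. As a minimal, standalone sketch (hypothetical names, not helpers from this file): with no remembered non-BOOTP lease there is nothing to DHCPREQUEST, so the client must fall back to INIT and DISCOVER from scratch.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the client states used in this file. */
enum dhcp_state_sk { SK_INIT, SK_REBOOTING };

/*
 * Sketch of the decision state_reboot() makes: only a remembered
 * lease that was obtained via DHCP (not BOOTP) can be re-requested
 * in the REBOOTING state; otherwise start over in INIT.
 */
static enum dhcp_state_sk
next_state_on_reboot(bool have_active_lease, bool lease_is_bootp)
{
	if (!have_active_lease || lease_is_bootp)
		return (SK_INIT);
	return (SK_REBOOTING);
}
```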
/*
* Called when a lease has completely expired and we've
* been unable to renew it.
*/
void
state_init(void *ipp)
{
struct interface_info *ip = ipp;
ASSERT_STATE(state, S_INIT);
/* Make a DHCPDISCOVER packet, and set appropriate per-interface
flags. */
make_discover(ip, ip->client->active);
ip->client->xid = ip->client->packet.xid;
ip->client->destination = iaddr_broadcast;
ip->client->state = S_SELECTING;
ip->client->first_sending = cur_time;
ip->client->interval = ip->client->config->initial_interval;
/* Add an immediate timeout to cause the first DHCPDISCOVER packet
to go out. */
send_discover(ip);
}
/*
* state_selecting is called when one or more DHCPOFFER packets
* have been received and a configurable period of time has passed.
*/
void
state_selecting(void *ipp)
{
struct interface_info *ip = ipp;
struct client_lease *lp, *next, *picked;
ASSERT_STATE(state, S_SELECTING);
/* Cancel state_selecting and send_discover timeouts, since either
one could have got us here. */
cancel_timeout(state_selecting, ip);
cancel_timeout(send_discover, ip);
/* We have received one or more DHCPOFFER packets. Currently,
the only criterion by which we judge leases is whether or
not we get a response when we arp for them. */
picked = NULL;
for (lp = ip->client->offered_leases; lp; lp = next) {
next = lp->next;
/* Check to see if we got an ARPREPLY for the address
in this particular lease. */
if (!picked) {
script_init("ARPCHECK", lp->medium);
script_write_params("check_", lp);
/* If the ARPCHECK code detects another
machine using the offered address, it exits
nonzero. We need to send a DHCPDECLINE and
toss the lease. */
if (script_go()) {
make_decline(ip, lp);
send_decline(ip);
goto freeit;
}
picked = lp;
picked->next = NULL;
} else {
freeit:
free_client_lease(lp);
}
}
ip->client->offered_leases = NULL;
/* If we just tossed all the leases we were offered, go back
to square one. */
if (!picked) {
ip->client->state = S_INIT;
state_init(ip);
return;
}
/* If it was a BOOTREPLY, we can just take the address right now. */
if (!picked->options[DHO_DHCP_MESSAGE_TYPE].len) {
ip->client->new = picked;
/* Make up some lease expiry times
XXX these should be configurable. */
ip->client->new->expiry = cur_time + 12000;
ip->client->new->renewal = cur_time + 8000;
ip->client->new->rebind = cur_time + 10000;
ip->client->state = S_REQUESTING;
/* Bind to the address we received. */
bind_lease(ip);
return;
}
/* Go to the REQUESTING state. */
ip->client->destination = iaddr_broadcast;
ip->client->state = S_REQUESTING;
ip->client->first_sending = cur_time;
ip->client->interval = ip->client->config->initial_interval;
/* Make a DHCPREQUEST packet from the lease we picked. */
make_request(ip, picked);
ip->client->xid = ip->client->packet.xid;
/* Toss the lease we picked - we'll get it back in a DHCPACK. */
free_client_lease(picked);
/* Add an immediate timeout to send the first DHCPREQUEST packet. */
send_request(ip);
}
/* dhcpack is called when we receive a DHCPACK message after
having sent out one or more DHCPREQUEST packets. */
void
dhcpack(struct packet *packet)
{
struct interface_info *ip = packet->interface;
struct client_lease *lease;
/* If we're not receptive to an offer right now, or if the offer
has an unrecognizable transaction id, then just drop it. */
if (packet->interface->client->xid != packet->raw->xid ||
(packet->interface->hw_address.hlen != packet->raw->hlen) ||
(memcmp(packet->interface->hw_address.haddr,
packet->raw->chaddr, packet->raw->hlen)))
return;
if (ip->client->state != S_REBOOTING &&
ip->client->state != S_REQUESTING &&
ip->client->state != S_RENEWING &&
ip->client->state != S_REBINDING)
return;
note("DHCPACK from %s", piaddr(packet->client_addr));
lease = packet_to_lease(packet);
if (!lease) {
note("packet_to_lease failed.");
return;
}
ip->client->new = lease;
/* Stop resending DHCPREQUEST. */
cancel_timeout(send_request, ip);
/* Figure out the lease time. */
if (ip->client->config->default_actions[DHO_DHCP_LEASE_TIME] ==
ACTION_SUPERSEDE)
ip->client->new->expiry = getULong(
ip->client->config->defaults[DHO_DHCP_LEASE_TIME].data);
else if (ip->client->new->options[DHO_DHCP_LEASE_TIME].data)
ip->client->new->expiry = getULong(
ip->client->new->options[DHO_DHCP_LEASE_TIME].data);
else
ip->client->new->expiry = default_lease_time;
/* A number that looks negative here is really just very large,
because the lease expiry offset is unsigned. Also make sure that
the addition of cur_time below does not overflow (a 32 bit) time_t. */
if (ip->client->new->expiry < 0 ||
ip->client->new->expiry > TIME_MAX - cur_time)
ip->client->new->expiry = TIME_MAX - cur_time;
/* XXX should be fixed by resetting the client state */
if (ip->client->new->expiry < 60)
ip->client->new->expiry = 60;
/* Unless overridden in the config, take the server-provided renewal
* time if there is one. Otherwise figure it out according to the spec.
* Also make sure the renewal time does not exceed the expiry time.
*/
if (ip->client->config->default_actions[DHO_DHCP_RENEWAL_TIME] ==
ACTION_SUPERSEDE)
ip->client->new->renewal = getULong(
ip->client->config->defaults[DHO_DHCP_RENEWAL_TIME].data);
else if (ip->client->new->options[DHO_DHCP_RENEWAL_TIME].len)
ip->client->new->renewal = getULong(
ip->client->new->options[DHO_DHCP_RENEWAL_TIME].data);
else
ip->client->new->renewal = ip->client->new->expiry / 2;
if (ip->client->new->renewal < 0 ||
ip->client->new->renewal > ip->client->new->expiry / 2)
ip->client->new->renewal = ip->client->new->expiry / 2;
/* Same deal with the rebind time. */
if (ip->client->config->default_actions[DHO_DHCP_REBINDING_TIME] ==
ACTION_SUPERSEDE)
ip->client->new->rebind = getULong(
ip->client->config->defaults[DHO_DHCP_REBINDING_TIME].data);
else if (ip->client->new->options[DHO_DHCP_REBINDING_TIME].len)
ip->client->new->rebind = getULong(
ip->client->new->options[DHO_DHCP_REBINDING_TIME].data);
else
ip->client->new->rebind = ip->client->new->renewal / 4 * 7;
if (ip->client->new->rebind < 0 ||
ip->client->new->rebind > ip->client->new->renewal / 4 * 7)
ip->client->new->rebind = ip->client->new->renewal / 4 * 7;
/* Convert the time offsets into seconds-since-the-epoch */
ip->client->new->expiry += cur_time;
ip->client->new->renewal += cur_time;
ip->client->new->rebind += cur_time;
bind_lease(ip);
}
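When the server supplies no renewal (T1) or rebinding (T2) option, dhcpack() falls back to deriving them from the lease length: T1 defaults to half the lease and T2 to renewal / 4 * 7, i.e. 7/8 of the lease, and server-supplied values are clamped to those same ceilings. A small sketch of that fallback arithmetic (not a helper from this file):

```c
#include <time.h>

/*
 * Sketch of dhcpack()'s fallback timers: T1 = expiry / 2,
 * T2 = T1 / 4 * 7 (7/8 of the lease), matching the defaults
 * RFC 2131 suggests for clients.
 */
static void
default_lease_times(time_t expiry, time_t *renewal, time_t *rebind)
{
	*renewal = expiry / 2;
	*rebind = *renewal / 4 * 7;
}
```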
void
bind_lease(struct interface_info *ip)
{
struct option_data *opt;
/* Remember the medium. */
ip->client->new->medium = ip->client->medium;
opt = &ip->client->new->options[DHO_INTERFACE_MTU];
if (opt->len == sizeof(u_int16_t)) {
u_int16_t mtu = 0;
bool supersede = (ip->client->config->default_actions[DHO_INTERFACE_MTU] ==
ACTION_SUPERSEDE);
if (supersede)
mtu = getUShort(ip->client->config->defaults[DHO_INTERFACE_MTU].data);
else
mtu = be16dec(opt->data);
if (mtu < MIN_MTU) {
/* Treat 0 as the user intentionally declining to change the
* MTU, so no warning is needed. */
if (!supersede || mtu != 0)
warning("mtu size %u < %d: ignored", (unsigned)mtu, MIN_MTU);
} else {
interface_set_mtu_unpriv(privfd, mtu);
}
}
/* Write out the new lease. */
write_client_lease(ip, ip->client->new, 0);
/* Run the client script with the new parameters. */
script_init((ip->client->state == S_REQUESTING ? "BOUND" :
(ip->client->state == S_RENEWING ? "RENEW" :
(ip->client->state == S_REBOOTING ? "REBOOT" : "REBIND"))),
ip->client->new->medium);
if (ip->client->active && ip->client->state != S_REBOOTING)
script_write_params("old_", ip->client->active);
script_write_params("new_", ip->client->new);
if (ip->client->alias)
script_write_params("alias_", ip->client->alias);
script_go();
/* Replace the old active lease with the new one. */
if (ip->client->active)
free_client_lease(ip->client->active);
ip->client->active = ip->client->new;
ip->client->new = NULL;
/* Set up a timeout to start the renewal process. */
add_timeout(ip->client->active->renewal, state_bound, ip);
note("bound to %s -- renewal in %d seconds.",
piaddr(ip->client->active->address),
(int)(ip->client->active->renewal - cur_time));
ip->client->state = S_BOUND;
reinitialize_interfaces();
go_daemon();
}
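bind_lease() decodes the two-byte interface-MTU option payload with be16dec() from FreeBSD's sys/endian.h. A portable sketch of the same big-endian decode, with the MIN_MTU sanity check applied by the caller (the function name and the 68-byte floor shown here are illustrative):

```c
#include <stdint.h>

/*
 * Portable equivalent of be16dec() for the two-byte big-endian
 * DHCP interface-mtu option payload.
 */
static uint16_t
dhcp_mtu_decode(const unsigned char *data)
{
	return (uint16_t)((data[0] << 8) | data[1]);
}
```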
/*
* state_bound is called when we've successfully bound to a particular
* lease, but the renewal time on that lease has expired. We are
* expected to unicast a DHCPREQUEST to the server that gave us our
* original lease.
*/
void
state_bound(void *ipp)
{
struct interface_info *ip = ipp;
ASSERT_STATE(state, S_BOUND);
/* T1 has expired. */
make_request(ip, ip->client->active);
ip->client->xid = ip->client->packet.xid;
if (ip->client->active->options[DHO_DHCP_SERVER_IDENTIFIER].len == 4) {
memcpy(ip->client->destination.iabuf, ip->client->active->
options[DHO_DHCP_SERVER_IDENTIFIER].data, 4);
ip->client->destination.len = 4;
} else
ip->client->destination = iaddr_broadcast;
ip->client->first_sending = cur_time;
ip->client->interval = ip->client->config->initial_interval;
ip->client->state = S_RENEWING;
/* Send the first packet immediately. */
send_request(ip);
}
void
bootp(struct packet *packet)
{
struct iaddrlist *ap;
if (packet->raw->op != BOOTREPLY)
return;
/* If there's a reject list, make sure this packet's sender isn't
on it. */
for (ap = packet->interface->client->config->reject_list;
ap; ap = ap->next) {
if (addr_eq(packet->client_addr, ap->addr)) {
note("BOOTREPLY from %s rejected.", piaddr(ap->addr));
return;
}
}
dhcpoffer(packet);
}
void
dhcp(struct packet *packet)
{
struct iaddrlist *ap;
void (*handler)(struct packet *);
const char *type;
switch (packet->packet_type) {
case DHCPOFFER:
handler = dhcpoffer;
type = "DHCPOFFER";
break;
case DHCPNAK:
handler = dhcpnak;
type = "DHCPNAK";
break;
case DHCPACK:
handler = dhcpack;
type = "DHCPACK";
break;
default:
return;
}
/* If there's a reject list, make sure this packet's sender isn't
on it. */
for (ap = packet->interface->client->config->reject_list;
ap; ap = ap->next) {
if (addr_eq(packet->client_addr, ap->addr)) {
note("%s from %s rejected.", type, piaddr(ap->addr));
return;
}
}
(*handler)(packet);
}
void
dhcpoffer(struct packet *packet)
{
struct interface_info *ip = packet->interface;
struct client_lease *lease, *lp;
int i;
int arp_timeout_needed, stop_selecting;
const char *name = packet->options[DHO_DHCP_MESSAGE_TYPE].len ?
"DHCPOFFER" : "BOOTREPLY";
/* If we're not receptive to an offer right now, or if the offer
has an unrecognizable transaction id, then just drop it. */
if (ip->client->state != S_SELECTING ||
packet->interface->client->xid != packet->raw->xid ||
(packet->interface->hw_address.hlen != packet->raw->hlen) ||
(memcmp(packet->interface->hw_address.haddr,
packet->raw->chaddr, packet->raw->hlen)))
return;
note("%s from %s", name, piaddr(packet->client_addr));
/* If this lease doesn't supply the minimum required parameters,
blow it off. */
for (i = 0; ip->client->config->required_options[i]; i++) {
if (!packet->options[ip->client->config->
required_options[i]].len) {
note("%s isn't satisfactory.", name);
return;
}
}
/* If we've already seen this lease, don't record it again. */
for (lease = ip->client->offered_leases;
lease; lease = lease->next) {
if (lease->address.len == sizeof(packet->raw->yiaddr) &&
!memcmp(lease->address.iabuf,
&packet->raw->yiaddr, lease->address.len)) {
debug("%s already seen.", name);
return;
}
}
lease = packet_to_lease(packet);
if (!lease) {
note("packet_to_lease failed.");
return;
}
/* If this lease was acquired through a BOOTREPLY, record that
fact. */
if (!packet->options[DHO_DHCP_MESSAGE_TYPE].len)
lease->is_bootp = 1;
/* Record the medium under which this lease was offered. */
lease->medium = ip->client->medium;
/* Send out an ARP Request for the offered IP address. */
script_init("ARPSEND", lease->medium);
script_write_params("check_", lease);
/* If the script can't send an ARP request without waiting,
we'll be waiting when we do the ARPCHECK, so don't wait now. */
if (script_go())
arp_timeout_needed = 0;
else
arp_timeout_needed = 2;
/* Figure out when we're supposed to stop selecting. */
stop_selecting =
ip->client->first_sending + ip->client->config->select_interval;
/* If this is the lease we asked for, put it at the head of the
list, and don't mess with the arp request timeout. */
if (lease->address.len == ip->client->requested_address.len &&
!memcmp(lease->address.iabuf,
ip->client->requested_address.iabuf,
ip->client->requested_address.len)) {
lease->next = ip->client->offered_leases;
ip->client->offered_leases = lease;
} else {
/* If we already have an offer, and arping for this
offer would take us past the selection timeout,
then don't extend the timeout - just hope for the
best. */
if (ip->client->offered_leases &&
(cur_time + arp_timeout_needed) > stop_selecting)
arp_timeout_needed = 0;
/* Put the lease at the end of the list. */
lease->next = NULL;
if (!ip->client->offered_leases)
ip->client->offered_leases = lease;
else {
for (lp = ip->client->offered_leases; lp->next;
lp = lp->next)
; /* nothing */
lp->next = lease;
}
}
/* If we're supposed to stop selecting before we've had time
to wait for the ARPREPLY, add some delay to wait for
the ARPREPLY. */
if (stop_selecting - cur_time < arp_timeout_needed)
stop_selecting = cur_time + arp_timeout_needed;
/* If the selecting interval has expired, go immediately to
state_selecting(). Otherwise, time out into
state_selecting at the select interval. */
if (stop_selecting <= cur_time)
state_selecting(ip);
else {
add_timeout(stop_selecting, state_selecting, ip);
cancel_timeout(send_discover, ip);
}
}
/* Allocate a client_lease structure and initialize it from the parameters
in the specified packet. */
struct client_lease *
packet_to_lease(struct packet *packet)
{
struct client_lease *lease;
int i;
lease = malloc(sizeof(struct client_lease));
if (!lease) {
warning("dhcpoffer: no memory to record lease.");
return (NULL);
}
memset(lease, 0, sizeof(*lease));
/* Copy the lease options. */
for (i = 0; i < 256; i++) {
if (packet->options[i].len) {
lease->options[i].data =
malloc(packet->options[i].len + 1);
if (!lease->options[i].data) {
warning("dhcpoffer: no memory for option %d", i);
free_client_lease(lease);
return (NULL);
} else {
memcpy(lease->options[i].data,
packet->options[i].data,
packet->options[i].len);
lease->options[i].len =
packet->options[i].len;
lease->options[i].data[lease->options[i].len] =
0;
}
if (!check_option(lease,i)) {
/* ignore a bogus lease offer */
warning("Invalid lease option - ignoring offer");
free_client_lease(lease);
return (NULL);
}
}
}
lease->address.len = sizeof(packet->raw->yiaddr);
memcpy(lease->address.iabuf, &packet->raw->yiaddr, lease->address.len);
lease->nextserver.len = sizeof(packet->raw->siaddr);
memcpy(lease->nextserver.iabuf, &packet->raw->siaddr, lease->nextserver.len);
/* If the server name was filled out, copy it.
Do not attempt to validate the server name as a host name.
RFC 2131 merely states that sname is NUL-terminated (which we
do not assume) and that it is the server's host name. Since
the ISC client and server allow arbitrary characters, we do
as well. */
if ((!packet->options[DHO_DHCP_OPTION_OVERLOAD].len ||
!(packet->options[DHO_DHCP_OPTION_OVERLOAD].data[0] & 2)) &&
packet->raw->sname[0]) {
lease->server_name = malloc(DHCP_SNAME_LEN + 1);
if (!lease->server_name) {
warning("dhcpoffer: no memory for server name.");
free_client_lease(lease);
return (NULL);
}
memcpy(lease->server_name, packet->raw->sname, DHCP_SNAME_LEN);
lease->server_name[DHCP_SNAME_LEN]='\0';
}
/* Ditto for the filename. */
if ((!packet->options[DHO_DHCP_OPTION_OVERLOAD].len ||
!(packet->options[DHO_DHCP_OPTION_OVERLOAD].data[0] & 1)) &&
packet->raw->file[0]) {
/* Don't count on the NUL terminator. */
lease->filename = malloc(DHCP_FILE_LEN + 1);
if (!lease->filename) {
warning("dhcpoffer: no memory for filename.");
free_client_lease(lease);
return (NULL);
}
memcpy(lease->filename, packet->raw->file, DHCP_FILE_LEN);
lease->filename[DHCP_FILE_LEN]='\0';
}
return lease;
}
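Since the fixed-width sname and file fields in the BOOTP header need not be NUL-terminated, packet_to_lease() allocates one extra byte and terminates the copy itself. A standalone sketch of that pattern (field_len stands in for DHCP_SNAME_LEN or DHCP_FILE_LEN):

```c
#include <stdlib.h>
#include <string.h>

/*
 * Copy a fixed-width, possibly unterminated packet field into a
 * freshly allocated, NUL-terminated string. Returns NULL on
 * allocation failure; the caller owns the result.
 */
static char *
copy_fixed_field(const unsigned char *field, size_t field_len)
{
	char *s;

	if ((s = malloc(field_len + 1)) == NULL)
		return (NULL);
	memcpy(s, field, field_len);
	s[field_len] = '\0';
	return (s);
}
```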
void
dhcpnak(struct packet *packet)
{
struct interface_info *ip = packet->interface;
/* If we're not receptive to an offer right now, or if the offer
has an unrecognizable transaction id, then just drop it. */
if (packet->interface->client->xid != packet->raw->xid ||
(packet->interface->hw_address.hlen != packet->raw->hlen) ||
(memcmp(packet->interface->hw_address.haddr,
packet->raw->chaddr, packet->raw->hlen)))
return;
if (ip->client->state != S_REBOOTING &&
ip->client->state != S_REQUESTING &&
ip->client->state != S_RENEWING &&
ip->client->state != S_REBINDING)
return;
note("DHCPNAK from %s", piaddr(packet->client_addr));
if (!ip->client->active) {
note("DHCPNAK with no active lease.");
return;
}
free_client_lease(ip->client->active);
ip->client->active = NULL;
/* Stop sending DHCPREQUEST packets... */
cancel_timeout(send_request, ip);
ip->client->state = S_INIT;
state_init(ip);
}
/* Send out a DHCPDISCOVER packet, and set a timeout to send out another
one after the right interval has expired. If we don't get an offer by
the time we reach the panic interval, call the panic function. */
void
send_discover(void *ipp)
{
struct interface_info *ip = ipp;
int interval, increase = 1;
/* Figure out how long it's been since we started transmitting. */
interval = cur_time - ip->client->first_sending;
/* If we're past the panic timeout, call the script and tell it
we haven't found anything for this interface yet. */
if (interval > ip->client->config->timeout) {
state_panic(ip);
return;
}
/* If we're selecting media, try the whole list before doing
the exponential backoff, but if we've already received an
offer, stop looping, because we obviously have it right. */
if (!ip->client->offered_leases &&
ip->client->config->media) {
int fail = 0;
again:
if (ip->client->medium) {
ip->client->medium = ip->client->medium->next;
increase = 0;
}
if (!ip->client->medium) {
if (fail)
error("No valid media types for %s!", ip->name);
ip->client->medium = ip->client->config->media;
increase = 1;
}
note("Trying medium \"%s\" %d", ip->client->medium->string,
increase);
script_init("MEDIUM", ip->client->medium);
if (script_go())
goto again;
}
/*
* If we're supposed to increase the interval, do so. If it's
* currently zero (i.e., we haven't sent any packets yet), set
* it to one; otherwise, add to it a random number between zero
* and two times itself. On average, this means that it will
* double with every transmission.
*/
if (increase) {
if (!ip->client->interval)
ip->client->interval =
ip->client->config->initial_interval;
else {
ip->client->interval += (arc4random() >> 2) %
(2 * ip->client->interval);
}
/* Don't backoff past cutoff. */
if (ip->client->interval >
ip->client->config->backoff_cutoff)
ip->client->interval =
((ip->client->config->backoff_cutoff / 2)
+ ((arc4random() >> 2) %
ip->client->config->backoff_cutoff));
} else if (!ip->client->interval)
ip->client->interval =
ip->client->config->initial_interval;
/* If the backoff would take us to the panic timeout, just use that
as the interval. */
if (cur_time + ip->client->interval >
ip->client->first_sending + ip->client->config->timeout)
ip->client->interval =
(ip->client->first_sending +
ip->client->config->timeout) - cur_time + 1;
/* Record the number of seconds since we started sending. */
if (interval < 65536)
ip->client->packet.secs = htons(interval);
else
ip->client->packet.secs = htons(65535);
ip->client->secs = ip->client->packet.secs;
note("DHCPDISCOVER on %s to %s port %d interval %d",
ip->name, inet_ntoa(inaddr_broadcast), REMOTE_PORT,
(int)ip->client->interval);
/* Send out a packet. */
send_packet_unpriv(privfd, &ip->client->packet,
ip->client->packet_length, inaddr_any, inaddr_broadcast);
add_timeout(cur_time + ip->client->interval, send_discover, ip);
}
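The backoff rule described in the comment inside send_discover() can be isolated into a pure function for illustration (hypothetical helper; rnd stands in for the `arc4random() >> 2` draw): start at the initial interval, add a random amount in [0, 2*interval) so the interval doubles on average, and re-randomize below the cutoff once it is exceeded.

```c
#include <stdint.h>
#include <time.h>

/*
 * One step of dhclient-style randomized exponential backoff.
 * A zero interval means no packet has been sent yet, so start
 * at the configured initial interval.
 */
static time_t
backoff_next(time_t interval, time_t initial, time_t cutoff, uint32_t rnd)
{
	if (interval == 0)
		return (initial);
	interval += rnd % (2 * interval);
	/* Don't back off past the cutoff; re-randomize around it. */
	if (interval > cutoff)
		interval = cutoff / 2 + rnd % cutoff;
	return (interval);
}
```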
/*
* state_panic gets called if we haven't received any offers in a preset
* amount of time. When this happens, we try to use existing leases
* that haven't yet expired, and failing that, we call the client script
* and hope it can do something.
*/
void
state_panic(void *ipp)
{
struct interface_info *ip = ipp;
struct client_lease *loop = ip->client->active;
struct client_lease *lp;
note("No DHCPOFFERS received.");
/* We may not have an active lease, but we may have some
predefined leases that we can try. */
if (!ip->client->active && ip->client->leases)
goto activate_next;
/* Run through the list of leases and see if one can be used. */
while (ip->client->active) {
if (ip->client->active->expiry > cur_time) {
note("Trying recorded lease %s",
piaddr(ip->client->active->address));
/* Run the client script with the existing
parameters. */
script_init("TIMEOUT",
ip->client->active->medium);
script_write_params("new_", ip->client->active);
if (ip->client->alias)
script_write_params("alias_",
ip->client->alias);
/* If the old lease is still good and doesn't
yet need renewal, go into BOUND state and
timeout at the renewal time. */
if (!script_go()) {
if (cur_time <
ip->client->active->renewal) {
ip->client->state = S_BOUND;
note("bound: renewal in %d seconds.",
(int)(ip->client->active->renewal -
cur_time));
add_timeout(
ip->client->active->renewal,
state_bound, ip);
} else {
ip->client->state = S_BOUND;
note("bound: immediate renewal.");
state_bound(ip);
}
reinitialize_interfaces();
go_daemon();
return;
}
}
/* If there are no other leases, give up. */
if (!ip->client->leases) {
ip->client->leases = ip->client->active;
ip->client->active = NULL;
break;
}
activate_next:
/* Otherwise, put the active lease at the end of the
lease list, and try another lease. */
for (lp = ip->client->leases; lp->next; lp = lp->next)
;
lp->next = ip->client->active;
if (lp->next)
lp->next->next = NULL;
ip->client->active = ip->client->leases;
ip->client->leases = ip->client->leases->next;
/* If we already tried this lease, we've exhausted the
set of leases, so we might as well give up for
now. */
if (ip->client->active == loop)
break;
else if (!loop)
loop = ip->client->active;
}
/* No leases were available, or what was available didn't work, so
tell the shell script that we failed to allocate an address,
and try again later. */
note("No working leases in persistent database - sleeping.");
script_init("FAIL", NULL);
if (ip->client->alias)
script_write_params("alias_", ip->client->alias);
script_go();
ip->client->state = S_INIT;
add_timeout(cur_time + ip->client->config->retry_interval, state_init,
ip);
go_daemon();
}
void
send_request(void *ipp)
{
struct interface_info *ip = ipp;
struct in_addr from, to;
int interval;
/* Figure out how long it's been since we started transmitting. */
interval = cur_time - ip->client->first_sending;
/* If we're in the INIT-REBOOT or REQUESTING state and we're
past the reboot timeout, go to INIT and see if we can
DISCOVER an address... */
/* XXX In the INIT-REBOOT state, if we don't get an ACK, it
means either that we're on a network with no DHCP server,
or that our server is down. In the latter case, assuming
that there is a backup DHCP server, DHCPDISCOVER will get
us a new address, but we could also have successfully
reused our old address. In the former case, we're hosed
anyway. This is not a win-prone situation. */
if ((ip->client->state == S_REBOOTING ||
ip->client->state == S_REQUESTING) &&
interval > ip->client->config->reboot_timeout) {
cancel:
ip->client->state = S_INIT;
cancel_timeout(send_request, ip);
state_init(ip);
return;
}
/* If we're in the reboot state, make sure the media is set up
correctly. */
if (ip->client->state == S_REBOOTING &&
!ip->client->medium &&
ip->client->active->medium ) {
script_init("MEDIUM", ip->client->active->medium);
/* If the medium we chose won't fly, go to INIT state. */
if (script_go())
goto cancel;
/* Record the medium. */
ip->client->medium = ip->client->active->medium;
}
/* If the lease has expired, relinquish the address and go back
to the INIT state. */
if (ip->client->state != S_REQUESTING &&
cur_time > ip->client->active->expiry) {
/* Run the client script with the new parameters. */
script_init("EXPIRE", NULL);
script_write_params("old_", ip->client->active);
if (ip->client->alias)
script_write_params("alias_", ip->client->alias);
script_go();
/* Now do a preinit on the interface so that we can
discover a new address. */
script_init("PREINIT", NULL);
if (ip->client->alias)
script_write_params("alias_", ip->client->alias);
script_go();
ip->client->state = S_INIT;
state_init(ip);
return;
}
/* Do the exponential backoff... */
if (!ip->client->interval)
ip->client->interval = ip->client->config->initial_interval;
else
ip->client->interval += ((arc4random() >> 2) %
(2 * ip->client->interval));
/* Don't backoff past cutoff. */
if (ip->client->interval >
ip->client->config->backoff_cutoff)
ip->client->interval =
((ip->client->config->backoff_cutoff / 2) +
((arc4random() >> 2) % ip->client->interval));
/* If the backoff would take us to the expiry time, just set the
timeout to the expiry time. */
if (ip->client->state != S_REQUESTING &&
cur_time + ip->client->interval >
ip->client->active->expiry)
ip->client->interval =
ip->client->active->expiry - cur_time + 1;
/* If the lease T2 time has elapsed, or if we're not yet bound,
broadcast the DHCPREQUEST rather than unicasting. */
if (ip->client->state == S_REQUESTING ||
ip->client->state == S_REBOOTING ||
cur_time > ip->client->active->rebind)
to.s_addr = INADDR_BROADCAST;
else
memcpy(&to.s_addr, ip->client->destination.iabuf,
sizeof(to.s_addr));
if (ip->client->state != S_REQUESTING &&
ip->client->state != S_REBOOTING)
memcpy(&from, ip->client->active->address.iabuf,
sizeof(from));
else
from.s_addr = INADDR_ANY;
/* Record the number of seconds since we started sending. */
if (ip->client->state == S_REQUESTING)
ip->client->packet.secs = ip->client->secs;
else {
if (interval < 65536)
ip->client->packet.secs = htons(interval);
else
ip->client->packet.secs = htons(65535);
}
note("DHCPREQUEST on %s to %s port %d", ip->name, inet_ntoa(to),
REMOTE_PORT);
/* Send out a packet. */
send_packet_unpriv(privfd, &ip->client->packet,
ip->client->packet_length, from, to);
add_timeout(cur_time + ip->client->interval, send_request, ip);
}
void
send_decline(void *ipp)
{
struct interface_info *ip = ipp;
note("DHCPDECLINE on %s to %s port %d", ip->name,
inet_ntoa(inaddr_broadcast), REMOTE_PORT);
/* Send out a packet. */
send_packet_unpriv(privfd, &ip->client->packet,
ip->client->packet_length, inaddr_any, inaddr_broadcast);
}
void
make_discover(struct interface_info *ip, struct client_lease *lease)
{
unsigned char discover = DHCPDISCOVER;
struct tree_cache *options[256];
struct tree_cache option_elements[256];
int i;
memset(option_elements, 0, sizeof(option_elements));
memset(options, 0, sizeof(options));
memset(&ip->client->packet, 0, sizeof(ip->client->packet));
/* Set DHCP_MESSAGE_TYPE to DHCPDISCOVER */
i = DHO_DHCP_MESSAGE_TYPE;
options[i] = &option_elements[i];
options[i]->value = &discover;
options[i]->len = sizeof(discover);
options[i]->buf_size = sizeof(discover);
options[i]->timeout = 0xFFFFFFFF;
/* Request the options we want */
i = DHO_DHCP_PARAMETER_REQUEST_LIST;
options[i] = &option_elements[i];
options[i]->value = ip->client->config->requested_options;
options[i]->len = ip->client->config->requested_option_count;
options[i]->buf_size =
ip->client->config->requested_option_count;
options[i]->timeout = 0xFFFFFFFF;
/* If we had an address, try to get it again. */
if (lease) {
ip->client->requested_address = lease->address;
i = DHO_DHCP_REQUESTED_ADDRESS;
options[i] = &option_elements[i];
options[i]->value = lease->address.iabuf;
options[i]->len = lease->address.len;
options[i]->buf_size = lease->address.len;
options[i]->timeout = 0xFFFFFFFF;
} else
ip->client->requested_address.len = 0;
/* Send any options requested in the config file. */
for (i = 0; i < 256; i++)
if (!options[i] &&
ip->client->config->send_options[i].data) {
options[i] = &option_elements[i];
options[i]->value =
ip->client->config->send_options[i].data;
options[i]->len =
ip->client->config->send_options[i].len;
options[i]->buf_size =
ip->client->config->send_options[i].len;
options[i]->timeout = 0xFFFFFFFF;
}
/* send host name if not set via config file. */
if (!options[DHO_HOST_NAME]) {
if (hostname[0] != '\0') {
size_t len;
char* posDot = strchr(hostname, '.');
if (posDot != NULL)
len = posDot - hostname;
else
len = strlen(hostname);
options[DHO_HOST_NAME] = &option_elements[DHO_HOST_NAME];
options[DHO_HOST_NAME]->value = hostname;
options[DHO_HOST_NAME]->len = len;
options[DHO_HOST_NAME]->buf_size = len;
options[DHO_HOST_NAME]->timeout = 0xFFFFFFFF;
}
}
/* set unique client identifier */
char client_ident[sizeof(ip->hw_address.haddr) + 1];
if (!options[DHO_DHCP_CLIENT_IDENTIFIER]) {
int hwlen = (ip->hw_address.hlen < sizeof(client_ident)-1) ?
ip->hw_address.hlen : sizeof(client_ident)-1;
client_ident[0] = ip->hw_address.htype;
memcpy(&client_ident[1], ip->hw_address.haddr, hwlen);
options[DHO_DHCP_CLIENT_IDENTIFIER] = &option_elements[DHO_DHCP_CLIENT_IDENTIFIER];
options[DHO_DHCP_CLIENT_IDENTIFIER]->value = client_ident;
options[DHO_DHCP_CLIENT_IDENTIFIER]->len = hwlen+1;
options[DHO_DHCP_CLIENT_IDENTIFIER]->buf_size = hwlen+1;
options[DHO_DHCP_CLIENT_IDENTIFIER]->timeout = 0xFFFFFFFF;
}
/* Set up the option buffer... */
ip->client->packet_length = cons_options(NULL, &ip->client->packet, 0,
options, 0, 0, 0, NULL, 0);
if (ip->client->packet_length < BOOTP_MIN_LEN)
ip->client->packet_length = BOOTP_MIN_LEN;
ip->client->packet.op = BOOTREQUEST;
ip->client->packet.htype = ip->hw_address.htype;
ip->client->packet.hlen = ip->hw_address.hlen;
ip->client->packet.hops = 0;
ip->client->packet.xid = arc4random();
ip->client->packet.secs = 0; /* filled in by send_discover. */
ip->client->packet.flags = 0;
memset(&(ip->client->packet.ciaddr),
0, sizeof(ip->client->packet.ciaddr));
memset(&(ip->client->packet.yiaddr),
0, sizeof(ip->client->packet.yiaddr));
memset(&(ip->client->packet.siaddr),
0, sizeof(ip->client->packet.siaddr));
memset(&(ip->client->packet.giaddr),
0, sizeof(ip->client->packet.giaddr));
memcpy(ip->client->packet.chaddr,
ip->hw_address.haddr, ip->hw_address.hlen);
}
void
make_request(struct interface_info *ip, struct client_lease * lease)
{
unsigned char request = DHCPREQUEST;
struct tree_cache *options[256];
struct tree_cache option_elements[256];
int i;
memset(options, 0, sizeof(options));
memset(&ip->client->packet, 0, sizeof(ip->client->packet));
/* Set DHCP_MESSAGE_TYPE to DHCPREQUEST */
i = DHO_DHCP_MESSAGE_TYPE;
options[i] = &option_elements[i];
options[i]->value = &request;
options[i]->len = sizeof(request);
options[i]->buf_size = sizeof(request);
options[i]->timeout = 0xFFFFFFFF;
/* Request the options we want */
i = DHO_DHCP_PARAMETER_REQUEST_LIST;
options[i] = &option_elements[i];
options[i]->value = ip->client->config->requested_options;
options[i]->len = ip->client->config->requested_option_count;
options[i]->buf_size =
ip->client->config->requested_option_count;
options[i]->timeout = 0xFFFFFFFF;
/* If we are requesting an address that hasn't yet been assigned
to us, use the DHCP Requested Address option. */
if (ip->client->state == S_REQUESTING) {
/* Send back the server identifier... */
i = DHO_DHCP_SERVER_IDENTIFIER;
options[i] = &option_elements[i];
options[i]->value = lease->options[i].data;
options[i]->len = lease->options[i].len;
options[i]->buf_size = lease->options[i].len;
options[i]->timeout = 0xFFFFFFFF;
}
if (ip->client->state == S_REQUESTING ||
ip->client->state == S_REBOOTING) {
ip->client->requested_address = lease->address;
i = DHO_DHCP_REQUESTED_ADDRESS;
options[i] = &option_elements[i];
options[i]->value = lease->address.iabuf;
options[i]->len = lease->address.len;
options[i]->buf_size = lease->address.len;
options[i]->timeout = 0xFFFFFFFF;
} else
ip->client->requested_address.len = 0;
/* Send any options requested in the config file. */
for (i = 0; i < 256; i++)
if (!options[i] &&
ip->client->config->send_options[i].data) {
options[i] = &option_elements[i];
options[i]->value =
ip->client->config->send_options[i].data;
options[i]->len =
ip->client->config->send_options[i].len;
options[i]->buf_size =
ip->client->config->send_options[i].len;
options[i]->timeout = 0xFFFFFFFF;
}
/* send host name if not set via config file. */
if (!options[DHO_HOST_NAME]) {
if (hostname[0] != '\0') {
size_t len;
char* posDot = strchr(hostname, '.');
if (posDot != NULL)
len = posDot - hostname;
else
len = strlen(hostname);
options[DHO_HOST_NAME] = &option_elements[DHO_HOST_NAME];
options[DHO_HOST_NAME]->value = hostname;
options[DHO_HOST_NAME]->len = len;
options[DHO_HOST_NAME]->buf_size = len;
options[DHO_HOST_NAME]->timeout = 0xFFFFFFFF;
}
}
/* set unique client identifier */
char client_ident[sizeof(struct hardware)];
if (!options[DHO_DHCP_CLIENT_IDENTIFIER]) {
int hwlen = (ip->hw_address.hlen < sizeof(client_ident)-1) ?
ip->hw_address.hlen : sizeof(client_ident)-1;
client_ident[0] = ip->hw_address.htype;
memcpy(&client_ident[1], ip->hw_address.haddr, hwlen);
options[DHO_DHCP_CLIENT_IDENTIFIER] = &option_elements[DHO_DHCP_CLIENT_IDENTIFIER];
options[DHO_DHCP_CLIENT_IDENTIFIER]->value = client_ident;
options[DHO_DHCP_CLIENT_IDENTIFIER]->len = hwlen+1;
options[DHO_DHCP_CLIENT_IDENTIFIER]->buf_size = hwlen+1;
options[DHO_DHCP_CLIENT_IDENTIFIER]->timeout = 0xFFFFFFFF;
}
/* Set up the option buffer... */
ip->client->packet_length = cons_options(NULL, &ip->client->packet, 0,
options, 0, 0, 0, NULL, 0);
if (ip->client->packet_length < BOOTP_MIN_LEN)
ip->client->packet_length = BOOTP_MIN_LEN;
ip->client->packet.op = BOOTREQUEST;
ip->client->packet.htype = ip->hw_address.htype;
ip->client->packet.hlen = ip->hw_address.hlen;
ip->client->packet.hops = 0;
ip->client->packet.xid = ip->client->xid;
ip->client->packet.secs = 0; /* Filled in by send_request. */
/* If we own the address we're requesting, put it in ciaddr;
otherwise set ciaddr to zero. */
if (ip->client->state == S_BOUND ||
ip->client->state == S_RENEWING ||
ip->client->state == S_REBINDING) {
memcpy(&ip->client->packet.ciaddr,
lease->address.iabuf, lease->address.len);
ip->client->packet.flags = 0;
} else {
memset(&ip->client->packet.ciaddr, 0,
sizeof(ip->client->packet.ciaddr));
ip->client->packet.flags = 0;
}
memset(&ip->client->packet.yiaddr, 0,
sizeof(ip->client->packet.yiaddr));
memset(&ip->client->packet.siaddr, 0,
sizeof(ip->client->packet.siaddr));
memset(&ip->client->packet.giaddr, 0,
sizeof(ip->client->packet.giaddr));
memcpy(ip->client->packet.chaddr,
ip->hw_address.haddr, ip->hw_address.hlen);
}
void
make_decline(struct interface_info *ip, struct client_lease *lease)
{
struct tree_cache *options[256], message_type_tree;
struct tree_cache requested_address_tree;
struct tree_cache server_id_tree, client_id_tree;
unsigned char decline = DHCPDECLINE;
int i;
memset(options, 0, sizeof(options));
memset(&ip->client->packet, 0, sizeof(ip->client->packet));
/* Set DHCP_MESSAGE_TYPE to DHCPDECLINE */
i = DHO_DHCP_MESSAGE_TYPE;
options[i] = &message_type_tree;
options[i]->value = &decline;
options[i]->len = sizeof(decline);
options[i]->buf_size = sizeof(decline);
options[i]->timeout = 0xFFFFFFFF;
/* Send back the server identifier... */
i = DHO_DHCP_SERVER_IDENTIFIER;
options[i] = &server_id_tree;
options[i]->value = lease->options[i].data;
options[i]->len = lease->options[i].len;
options[i]->buf_size = lease->options[i].len;
options[i]->timeout = 0xFFFFFFFF;
/* Send back the address we're declining. */
i = DHO_DHCP_REQUESTED_ADDRESS;
options[i] = &requested_address_tree;
options[i]->value = lease->address.iabuf;
options[i]->len = lease->address.len;
options[i]->buf_size = lease->address.len;
options[i]->timeout = 0xFFFFFFFF;
/* Send the uid if the user supplied one. */
i = DHO_DHCP_CLIENT_IDENTIFIER;
if (ip->client->config->send_options[i].len) {
options[i] = &client_id_tree;
options[i]->value = ip->client->config->send_options[i].data;
options[i]->len = ip->client->config->send_options[i].len;
options[i]->buf_size = ip->client->config->send_options[i].len;
options[i]->timeout = 0xFFFFFFFF;
}
/* Set up the option buffer... */
ip->client->packet_length = cons_options(NULL, &ip->client->packet, 0,
options, 0, 0, 0, NULL, 0);
if (ip->client->packet_length < BOOTP_MIN_LEN)
ip->client->packet_length = BOOTP_MIN_LEN;
ip->client->packet.op = BOOTREQUEST;
ip->client->packet.htype = ip->hw_address.htype;
ip->client->packet.hlen = ip->hw_address.hlen;
ip->client->packet.hops = 0;
ip->client->packet.xid = ip->client->xid;
ip->client->packet.secs = 0; /* Filled in by send_request. */
ip->client->packet.flags = 0;
/* ciaddr must always be zero. */
memset(&ip->client->packet.ciaddr, 0,
sizeof(ip->client->packet.ciaddr));
memset(&ip->client->packet.yiaddr, 0,
sizeof(ip->client->packet.yiaddr));
memset(&ip->client->packet.siaddr, 0,
sizeof(ip->client->packet.siaddr));
memset(&ip->client->packet.giaddr, 0,
sizeof(ip->client->packet.giaddr));
memcpy(ip->client->packet.chaddr,
ip->hw_address.haddr, ip->hw_address.hlen);
}
void
free_client_lease(struct client_lease *lease)
{
int i;
if (lease->server_name)
free(lease->server_name);
if (lease->filename)
free(lease->filename);
for (i = 0; i < 256; i++) {
if (lease->options[i].len)
free(lease->options[i].data);
}
free(lease);
}
static FILE *leaseFile;
void
rewrite_client_leases(void)
{
struct client_lease *lp;
cap_rights_t rights;
if (!leaseFile) {
leaseFile = fopen(path_dhclient_db, "w");
if (!leaseFile)
error("can't create %s: %m", path_dhclient_db);
cap_rights_init(&rights, CAP_FCNTL, CAP_FSTAT, CAP_FSYNC,
CAP_FTRUNCATE, CAP_SEEK, CAP_WRITE);
if (caph_rights_limit(fileno(leaseFile), &rights) < 0) {
error("can't limit lease descriptor: %m");
}
if (caph_fcntls_limit(fileno(leaseFile), CAP_FCNTL_GETFL) < 0) {
error("can't limit lease descriptor fcntls: %m");
}
} else {
fflush(leaseFile);
rewind(leaseFile);
}
for (lp = ifi->client->leases; lp; lp = lp->next)
write_client_lease(ifi, lp, 1);
if (ifi->client->active)
write_client_lease(ifi, ifi->client->active, 1);
fflush(leaseFile);
ftruncate(fileno(leaseFile), ftello(leaseFile));
fsync(fileno(leaseFile));
}
void
write_client_lease(struct interface_info *ip, struct client_lease *lease,
int rewrite)
{
static int leases_written;
struct tm *t;
int i;
if (!rewrite) {
if (leases_written++ > 20) {
rewrite_client_leases();
leases_written = 0;
}
}
/* If the lease came from the config file, we don't need to stash
a copy in the lease database. */
if (lease->is_static)
return;
if (!leaseFile) { /* XXX */
leaseFile = fopen(path_dhclient_db, "w");
if (!leaseFile)
error("can't create %s: %m", path_dhclient_db);
}
fprintf(leaseFile, "lease {\n");
if (lease->is_bootp)
fprintf(leaseFile, " bootp;\n");
fprintf(leaseFile, " interface \"%s\";\n", ip->name);
fprintf(leaseFile, " fixed-address %s;\n", piaddr(lease->address));
if (lease->nextserver.len == sizeof(inaddr_any) &&
0 != memcmp(lease->nextserver.iabuf, &inaddr_any,
sizeof(inaddr_any)))
fprintf(leaseFile, " next-server %s;\n",
piaddr(lease->nextserver));
if (lease->filename)
fprintf(leaseFile, " filename \"%s\";\n", lease->filename);
if (lease->server_name)
fprintf(leaseFile, " server-name \"%s\";\n",
lease->server_name);
if (lease->medium)
fprintf(leaseFile, " medium \"%s\";\n", lease->medium->string);
for (i = 0; i < 256; i++)
if (lease->options[i].len)
fprintf(leaseFile, " option %s %s;\n",
dhcp_options[i].name,
pretty_print_option(i, lease->options[i].data,
lease->options[i].len, 1, 1));
t = gmtime(&lease->renewal);
fprintf(leaseFile, " renew %d %d/%d/%d %02d:%02d:%02d;\n",
t->tm_wday, t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
t->tm_hour, t->tm_min, t->tm_sec);
t = gmtime(&lease->rebind);
fprintf(leaseFile, " rebind %d %d/%d/%d %02d:%02d:%02d;\n",
t->tm_wday, t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
t->tm_hour, t->tm_min, t->tm_sec);
t = gmtime(&lease->expiry);
fprintf(leaseFile, " expire %d %d/%d/%d %02d:%02d:%02d;\n",
t->tm_wday, t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
t->tm_hour, t->tm_min, t->tm_sec);
fprintf(leaseFile, "}\n");
fflush(leaseFile);
}
void
script_init(const char *reason, struct string_list *medium)
{
size_t len, mediumlen = 0;
struct imsg_hdr hdr;
struct buf *buf;
int errs;
if (medium != NULL && medium->string != NULL)
mediumlen = strlen(medium->string);
hdr.code = IMSG_SCRIPT_INIT;
hdr.len = sizeof(struct imsg_hdr) +
sizeof(size_t) + mediumlen +
sizeof(size_t) + strlen(reason);
if ((buf = buf_open(hdr.len)) == NULL)
error("buf_open: %m");
errs = 0;
errs += buf_add(buf, &hdr, sizeof(hdr));
errs += buf_add(buf, &mediumlen, sizeof(mediumlen));
if (mediumlen > 0)
errs += buf_add(buf, medium->string, mediumlen);
len = strlen(reason);
errs += buf_add(buf, &len, sizeof(len));
errs += buf_add(buf, reason, len);
if (errs)
error("buf_add: %m");
if (buf_close(privfd, buf) == -1)
error("buf_close: %m");
}
void
priv_script_init(const char *reason, char *medium)
{
struct interface_info *ip = ifi;
if (ip) {
ip->client->scriptEnvsize = 100;
if (ip->client->scriptEnv == NULL)
ip->client->scriptEnv =
malloc(ip->client->scriptEnvsize * sizeof(char *));
if (ip->client->scriptEnv == NULL)
error("script_init: no memory for environment");
ip->client->scriptEnv[0] = strdup(CLIENT_PATH);
if (ip->client->scriptEnv[0] == NULL)
error("script_init: no memory for environment");
ip->client->scriptEnv[1] = NULL;
script_set_env(ip->client, "", "interface", ip->name);
if (medium)
script_set_env(ip->client, "", "medium", medium);
script_set_env(ip->client, "", "reason", reason);
}
}
void
priv_script_write_params(const char *prefix, struct client_lease *lease)
{
struct interface_info *ip = ifi;
u_int8_t dbuf[1500], *dp = NULL;
int i;
size_t len;
char tbuf[128];
script_set_env(ip->client, prefix, "ip_address",
piaddr(lease->address));
if (ip->client->config->default_actions[DHO_SUBNET_MASK] ==
ACTION_SUPERSEDE) {
dp = ip->client->config->defaults[DHO_SUBNET_MASK].data;
len = ip->client->config->defaults[DHO_SUBNET_MASK].len;
} else {
dp = lease->options[DHO_SUBNET_MASK].data;
len = lease->options[DHO_SUBNET_MASK].len;
}
if (len && (len < sizeof(lease->address.iabuf))) {
struct iaddr netmask, subnet, broadcast;
memcpy(netmask.iabuf, dp, len);
netmask.len = len;
subnet = subnet_number(lease->address, netmask);
if (subnet.len) {
script_set_env(ip->client, prefix, "network_number",
piaddr(subnet));
if (!lease->options[DHO_BROADCAST_ADDRESS].len) {
broadcast = broadcast_addr(subnet, netmask);
if (broadcast.len)
script_set_env(ip->client, prefix,
"broadcast_address",
piaddr(broadcast));
}
}
}
if (lease->filename)
script_set_env(ip->client, prefix, "filename", lease->filename);
if (lease->server_name)
script_set_env(ip->client, prefix, "server_name",
lease->server_name);
for (i = 0; i < 256; i++) {
len = 0;
if (ip->client->config->defaults[i].len) {
if (lease->options[i].len) {
switch (
ip->client->config->default_actions[i]) {
case ACTION_DEFAULT:
dp = lease->options[i].data;
len = lease->options[i].len;
break;
case ACTION_SUPERSEDE:
supersede:
dp = ip->client->
config->defaults[i].data;
len = ip->client->
config->defaults[i].len;
break;
case ACTION_PREPEND:
len = ip->client->
config->defaults[i].len +
lease->options[i].len;
if (len >= sizeof(dbuf)) {
warning("no space to %s %s",
"prepend option",
dhcp_options[i].name);
goto supersede;
}
dp = dbuf;
memcpy(dp,
ip->client->
config->defaults[i].data,
ip->client->
config->defaults[i].len);
memcpy(dp + ip->client->
config->defaults[i].len,
lease->options[i].data,
lease->options[i].len);
dp[len] = '\0';
break;
case ACTION_APPEND:
/*
* When we append, we assume that we're
* appending to text. Some MS servers
* include a NUL byte at the end of
* the search string provided.
*/
len = ip->client->
config->defaults[i].len +
lease->options[i].len;
if (len >= sizeof(dbuf)) {
warning("no space to %s %s",
"append option",
dhcp_options[i].name);
goto supersede;
}
memcpy(dbuf,
lease->options[i].data,
lease->options[i].len);
for (dp = dbuf + lease->options[i].len;
dp > dbuf; dp--, len--)
if (dp[-1] != '\0')
break;
memcpy(dp,
ip->client->
config->defaults[i].data,
ip->client->
config->defaults[i].len);
dp = dbuf;
dp[len] = '\0';
}
} else {
dp = ip->client->
config->defaults[i].data;
len = ip->client->
config->defaults[i].len;
}
} else if (lease->options[i].len) {
len = lease->options[i].len;
dp = lease->options[i].data;
} else {
len = 0;
}
if (len) {
char name[256];
if (dhcp_option_ev_name(name, sizeof(name),
&dhcp_options[i]))
script_set_env(ip->client, prefix, name,
pretty_print_option(i, dp, len, 0, 0));
}
}
snprintf(tbuf, sizeof(tbuf), "%d", (int)lease->expiry);
script_set_env(ip->client, prefix, "expiry", tbuf);
}
void
script_write_params(const char *prefix, struct client_lease *lease)
{
size_t fn_len = 0, sn_len = 0, pr_len = 0;
struct imsg_hdr hdr;
struct buf *buf;
int errs, i;
if (lease->filename != NULL)
fn_len = strlen(lease->filename);
if (lease->server_name != NULL)
sn_len = strlen(lease->server_name);
if (prefix != NULL)
pr_len = strlen(prefix);
hdr.code = IMSG_SCRIPT_WRITE_PARAMS;
hdr.len = sizeof(hdr) + sizeof(*lease) +
sizeof(fn_len) + fn_len + sizeof(sn_len) + sn_len +
sizeof(pr_len) + pr_len;
for (i = 0; i < 256; i++) {
hdr.len += sizeof(lease->options[i].len);
hdr.len += lease->options[i].len;
}
scripttime = time(NULL);
if ((buf = buf_open(hdr.len)) == NULL)
error("buf_open: %m");
errs = 0;
errs += buf_add(buf, &hdr, sizeof(hdr));
errs += buf_add(buf, lease, sizeof(*lease));
errs += buf_add(buf, &fn_len, sizeof(fn_len));
errs += buf_add(buf, lease->filename, fn_len);
errs += buf_add(buf, &sn_len, sizeof(sn_len));
errs += buf_add(buf, lease->server_name, sn_len);
errs += buf_add(buf, &pr_len, sizeof(pr_len));
errs += buf_add(buf, prefix, pr_len);
for (i = 0; i < 256; i++) {
errs += buf_add(buf, &lease->options[i].len,
sizeof(lease->options[i].len));
errs += buf_add(buf, lease->options[i].data,
lease->options[i].len);
}
if (errs)
error("buf_add: %m");
if (buf_close(privfd, buf) == -1)
error("buf_close: %m");
}
int
script_go(void)
{
struct imsg_hdr hdr;
struct buf *buf;
int ret;
hdr.code = IMSG_SCRIPT_GO;
hdr.len = sizeof(struct imsg_hdr);
if ((buf = buf_open(hdr.len)) == NULL)
error("buf_open: %m");
if (buf_add(buf, &hdr, sizeof(hdr)))
error("buf_add: %m");
if (buf_close(privfd, buf) == -1)
error("buf_close: %m");
bzero(&hdr, sizeof(hdr));
buf_read(privfd, &hdr, sizeof(hdr));
if (hdr.code != IMSG_SCRIPT_GO_RET)
error("unexpected msg type %u", hdr.code);
if (hdr.len != sizeof(hdr) + sizeof(int))
error("received corrupted message");
buf_read(privfd, &ret, sizeof(ret));
scripttime = time(NULL);
return (ret);
}
int
priv_script_go(void)
{
char *scriptName, *argv[2], **envp, *epp[3], reason[] = "REASON=NBI";
static char client_path[] = CLIENT_PATH;
struct interface_info *ip = ifi;
int pid, wpid, wstatus;
scripttime = time(NULL);
if (ip) {
scriptName = ip->client->config->script_name;
envp = ip->client->scriptEnv;
} else {
scriptName = top_level_config.script_name;
epp[0] = reason;
epp[1] = client_path;
epp[2] = NULL;
envp = epp;
}
argv[0] = scriptName;
argv[1] = NULL;
pid = fork();
if (pid < 0) {
error("fork: %m");
wstatus = 0;
} else if (pid) {
do {
wpid = wait(&wstatus);
} while (wpid != pid && wpid > 0);
if (wpid < 0) {
error("wait: %m");
wstatus = 0;
}
} else {
execve(scriptName, argv, envp);
error("execve (%s, ...): %m", scriptName);
}
if (ip)
script_flush_env(ip->client);
- return (wstatus & 0xff);
+ return (WIFEXITED(wstatus) ?
+ WEXITSTATUS(wstatus) : 128 + WTERMSIG(wstatus));
}
void
script_set_env(struct client_state *client, const char *prefix,
const char *name, const char *value)
{
int i, namelen;
size_t j;
/* No `` or $() command substitution allowed in environment values! */
for (j=0; j < strlen(value); j++)
switch (value[j]) {
case '`':
case '$':
warning("illegal character (%c) in value '%s'",
value[j], value);
/* Ignore this option */
return;
}
namelen = strlen(name);
for (i = 0; client->scriptEnv[i]; i++)
if (strncmp(client->scriptEnv[i], name, namelen) == 0 &&
client->scriptEnv[i][namelen] == '=')
break;
if (client->scriptEnv[i])
/* Reuse the slot. */
free(client->scriptEnv[i]);
else {
/* New variable. Expand if necessary. */
if (i >= client->scriptEnvsize - 1) {
char **newscriptEnv;
int newscriptEnvsize = client->scriptEnvsize + 50;
newscriptEnv = realloc(client->scriptEnv,
newscriptEnvsize * sizeof(char *));
if (newscriptEnv == NULL) {
free(client->scriptEnv);
client->scriptEnv = NULL;
client->scriptEnvsize = 0;
error("script_set_env: no memory for variable");
}
client->scriptEnv = newscriptEnv;
client->scriptEnvsize = newscriptEnvsize;
}
/* need to set the NULL pointer at end of array beyond
the new slot. */
client->scriptEnv[i + 1] = NULL;
}
/* Allocate space and format the variable in the appropriate slot. */
client->scriptEnv[i] = malloc(strlen(prefix) + strlen(name) + 1 +
strlen(value) + 1);
if (client->scriptEnv[i] == NULL)
error("script_set_env: no memory for variable assignment");
snprintf(client->scriptEnv[i], strlen(prefix) + strlen(name) +
1 + strlen(value) + 1, "%s%s=%s", prefix, name, value);
}
void
script_flush_env(struct client_state *client)
{
int i;
for (i = 0; client->scriptEnv[i]; i++) {
free(client->scriptEnv[i]);
client->scriptEnv[i] = NULL;
}
client->scriptEnvsize = 0;
}
int
dhcp_option_ev_name(char *buf, size_t buflen, struct option *option)
{
size_t i;
for (i = 0; option->name[i]; i++) {
if (i + 1 == buflen)
return 0;
if (option->name[i] == '-')
buf[i] = '_';
else
buf[i] = option->name[i];
}
buf[i] = 0;
return 1;
}
void
go_daemon(void)
{
static int state = 0;
cap_rights_t rights;
if (no_daemon || state)
return;
state = 1;
/* Stop logging to stderr... */
log_perror = 0;
if (daemonfd(-1, nullfd) == -1)
error("daemon");
cap_rights_init(&rights);
if (pidfile != NULL) {
pidfile_write(pidfile);
if (caph_rights_limit(pidfile_fileno(pidfile), &rights) < 0)
error("can't limit pidfile descriptor: %m");
}
if (nullfd != -1) {
close(nullfd);
nullfd = -1;
}
if (caph_rights_limit(STDIN_FILENO, &rights) < 0)
error("can't limit stdin: %m");
cap_rights_init(&rights, CAP_WRITE);
if (caph_rights_limit(STDOUT_FILENO, &rights) < 0)
error("can't limit stdout: %m");
if (caph_rights_limit(STDERR_FILENO, &rights) < 0)
error("can't limit stderr: %m");
}
int
check_option(struct client_lease *l, int option)
{
const char *opbuf;
const char *sbuf;
/* we use this, since this is what gets passed to dhclient-script */
opbuf = pretty_print_option(option, l->options[option].data,
l->options[option].len, 0, 0);
sbuf = option_as_string(option, l->options[option].data,
l->options[option].len);
switch (option) {
case DHO_SUBNET_MASK:
case DHO_TIME_SERVERS:
case DHO_NAME_SERVERS:
case DHO_ROUTERS:
case DHO_DOMAIN_NAME_SERVERS:
case DHO_LOG_SERVERS:
case DHO_COOKIE_SERVERS:
case DHO_LPR_SERVERS:
case DHO_IMPRESS_SERVERS:
case DHO_RESOURCE_LOCATION_SERVERS:
case DHO_SWAP_SERVER:
case DHO_BROADCAST_ADDRESS:
case DHO_NIS_SERVERS:
case DHO_NTP_SERVERS:
case DHO_NETBIOS_NAME_SERVERS:
case DHO_NETBIOS_DD_SERVER:
case DHO_FONT_SERVERS:
case DHO_DHCP_SERVER_IDENTIFIER:
case DHO_NISPLUS_SERVERS:
case DHO_MOBILE_IP_HOME_AGENT:
case DHO_SMTP_SERVER:
case DHO_POP_SERVER:
case DHO_NNTP_SERVER:
case DHO_WWW_SERVER:
case DHO_FINGER_SERVER:
case DHO_IRC_SERVER:
case DHO_STREETTALK_SERVER:
case DHO_STREETTALK_DA_SERVER:
if (!ipv4addrs(opbuf)) {
warning("Invalid IP address in option: %s", opbuf);
return (0);
}
return (1) ;
case DHO_HOST_NAME:
case DHO_NIS_DOMAIN:
case DHO_NISPLUS_DOMAIN:
case DHO_TFTP_SERVER_NAME:
if (!res_hnok(sbuf)) {
warning("Bogus Host Name option %d: %s (%s)", option,
sbuf, opbuf);
l->options[option].len = 0;
free(l->options[option].data);
}
return (1);
case DHO_DOMAIN_NAME:
case DHO_DOMAIN_SEARCH:
if (!res_hnok(sbuf)) {
if (!check_search(sbuf)) {
warning("Bogus domain search list %d: %s (%s)",
option, sbuf, opbuf);
l->options[option].len = 0;
free(l->options[option].data);
}
}
return (1);
case DHO_PAD:
case DHO_TIME_OFFSET:
case DHO_BOOT_SIZE:
case DHO_MERIT_DUMP:
case DHO_ROOT_PATH:
case DHO_EXTENSIONS_PATH:
case DHO_IP_FORWARDING:
case DHO_NON_LOCAL_SOURCE_ROUTING:
case DHO_POLICY_FILTER:
case DHO_MAX_DGRAM_REASSEMBLY:
case DHO_DEFAULT_IP_TTL:
case DHO_PATH_MTU_AGING_TIMEOUT:
case DHO_PATH_MTU_PLATEAU_TABLE:
case DHO_INTERFACE_MTU:
case DHO_ALL_SUBNETS_LOCAL:
case DHO_PERFORM_MASK_DISCOVERY:
case DHO_MASK_SUPPLIER:
case DHO_ROUTER_DISCOVERY:
case DHO_ROUTER_SOLICITATION_ADDRESS:
case DHO_STATIC_ROUTES:
case DHO_TRAILER_ENCAPSULATION:
case DHO_ARP_CACHE_TIMEOUT:
case DHO_IEEE802_3_ENCAPSULATION:
case DHO_DEFAULT_TCP_TTL:
case DHO_TCP_KEEPALIVE_INTERVAL:
case DHO_TCP_KEEPALIVE_GARBAGE:
case DHO_VENDOR_ENCAPSULATED_OPTIONS:
case DHO_NETBIOS_NODE_TYPE:
case DHO_NETBIOS_SCOPE:
case DHO_X_DISPLAY_MANAGER:
case DHO_DHCP_REQUESTED_ADDRESS:
case DHO_DHCP_LEASE_TIME:
case DHO_DHCP_OPTION_OVERLOAD:
case DHO_DHCP_MESSAGE_TYPE:
case DHO_DHCP_PARAMETER_REQUEST_LIST:
case DHO_DHCP_MESSAGE:
case DHO_DHCP_MAX_MESSAGE_SIZE:
case DHO_DHCP_RENEWAL_TIME:
case DHO_DHCP_REBINDING_TIME:
case DHO_DHCP_CLASS_IDENTIFIER:
case DHO_DHCP_CLIENT_IDENTIFIER:
case DHO_BOOTFILE_NAME:
case DHO_DHCP_USER_CLASS_ID:
case DHO_END:
return (1);
case DHO_CLASSLESS_ROUTES:
return (check_classless_option(l->options[option].data,
l->options[option].len));
default:
warning("unknown dhcp option value 0x%x", option);
return (unknown_ok);
}
}
/* RFC 3442 The Classless Static Routes option checks */
int
check_classless_option(unsigned char *data, int len)
{
int i = 0;
unsigned char width;
in_addr_t addr, mask;
if (len < 5) {
warning("Too small length: %d", len);
return (0);
}
while(i < len) {
width = data[i++];
if (width == 0) {
i += 4;
continue;
} else if (width < 9) {
addr = (in_addr_t)(data[i] << 24);
i += 1;
} else if (width < 17) {
addr = (in_addr_t)(data[i] << 24) +
(in_addr_t)(data[i + 1] << 16);
i += 2;
} else if (width < 25) {
addr = (in_addr_t)(data[i] << 24) +
(in_addr_t)(data[i + 1] << 16) +
(in_addr_t)(data[i + 2] << 8);
i += 3;
} else if (width < 33) {
addr = (in_addr_t)(data[i] << 24) +
(in_addr_t)(data[i + 1] << 16) +
(in_addr_t)(data[i + 2] << 8) +
data[i + 3];
i += 4;
} else {
warning("Incorrect subnet width: %d", width);
return (0);
}
mask = (in_addr_t)(~0) << (32 - width);
addr = ntohl(addr);
mask = ntohl(mask);
/*
* From RFC 3442:
* ... After deriving a subnet number and subnet mask
* from each destination descriptor, the DHCP client
* MUST zero any bits in the subnet number where the
* corresponding bit in the mask is zero...
*/
if ((addr & mask) != addr) {
addr &= mask;
data[i - 1] = (unsigned char)(
(addr >> (((32 - width)/8)*8)) & 0xFF);
}
i += 4;
}
if (i > len) {
warning("Incorrect data length: %d (must be %d)", len, i);
return (0);
}
return (1);
}
int
res_hnok(const char *dn)
{
int pch = PERIOD, ch = *dn++;
while (ch != '\0') {
int nch = *dn++;
if (periodchar(ch)) {
;
} else if (periodchar(pch)) {
if (!borderchar(ch))
return (0);
} else if (periodchar(nch) || nch == '\0') {
if (!borderchar(ch))
return (0);
} else {
if (!middlechar(ch))
return (0);
}
pch = ch, ch = nch;
}
return (1);
}
int
check_search(const char *srch)
{
int pch = PERIOD, ch = *srch++;
int domains = 1;
/* 256 char limit re resolv.conf(5) */
if (strlen(srch) > 256)
return (0);
while (whitechar(ch))
ch = *srch++;
while (ch != '\0') {
int nch = *srch++;
if (periodchar(ch) || whitechar(ch)) {
;
} else if (periodchar(pch)) {
if (!borderchar(ch))
return (0);
} else if (periodchar(nch) || nch == '\0') {
if (!borderchar(ch))
return (0);
} else {
if (!middlechar(ch))
return (0);
}
if (!whitechar(ch)) {
pch = ch;
} else {
while (whitechar(nch)) {
nch = *srch++;
}
if (nch != '\0')
domains++;
pch = PERIOD;
}
ch = nch;
}
/* 6 domain limit re resolv.conf(5) */
if (domains > 6)
return (0);
return (1);
}
/* Does buf consist only of dotted decimal ipv4 addrs?
* return how many if so,
* otherwise, return 0
*/
int
ipv4addrs(const char * buf)
{
struct in_addr jnk;
int count = 0;
while (inet_aton(buf, &jnk) == 1){
count++;
while (periodchar(*buf) || digitchar(*buf))
buf++;
if (*buf == '\0')
return (count);
while (*buf == ' ')
buf++;
}
return (0);
}
const char *
option_as_string(unsigned int code, unsigned char *data, int len)
{
static char optbuf[32768]; /* XXX */
char *op = optbuf;
int opleft = sizeof(optbuf);
unsigned char *dp = data;
if (code > 255)
error("option_as_string: bad code %d", code);
for (; dp < data + len; dp++) {
if (!isascii(*dp) || !isprint(*dp)) {
if (dp + 1 != data + len || *dp != 0) {
snprintf(op, opleft, "\\%03o", *dp);
op += 4;
opleft -= 4;
}
} else if (*dp == '"' || *dp == '\'' || *dp == '$' ||
*dp == '`' || *dp == '\\') {
*op++ = '\\';
*op++ = *dp;
opleft -= 2;
} else {
*op++ = *dp;
opleft--;
}
}
if (opleft < 1)
goto toobig;
*op = 0;
return optbuf;
toobig:
warning("dhcp option too large");
return "<error>";
}
int
fork_privchld(int fd, int fd2)
{
struct pollfd pfd[1];
int nfds;
switch (fork()) {
case -1:
error("cannot fork");
case 0:
break;
default:
return (0);
}
setproctitle("%s [priv]", ifi->name);
setsid();
dup2(nullfd, STDIN_FILENO);
dup2(nullfd, STDOUT_FILENO);
dup2(nullfd, STDERR_FILENO);
close(nullfd);
close(fd2);
close(ifi->rfdesc);
ifi->rfdesc = -1;
for (;;) {
pfd[0].fd = fd;
pfd[0].events = POLLIN;
if ((nfds = poll(pfd, 1, INFTIM)) == -1)
if (errno != EINTR)
error("poll error");
if (nfds == 0 || !(pfd[0].revents & POLLIN))
continue;
dispatch_imsg(ifi, fd);
}
}
Index: projects/clang800-import/sbin/ipfw/tables.c
===================================================================
--- projects/clang800-import/sbin/ipfw/tables.c (revision 343955)
+++ projects/clang800-import/sbin/ipfw/tables.c (revision 343956)
@@ -1,2033 +1,2037 @@
/*
* Copyright (c) 2014 Yandex LLC
* Copyright (c) 2014 Alexander V. Chernikov
*
* Redistribution and use in source forms, with and without modification,
* are permitted provided that this entire comment appears intact.
*
* Redistribution in binary form may occur without any restrictions.
* Obviously, it would be nice if you gave credit where credit is due
* but requiring it would be too onerous.
*
* This software is provided ``AS IS'' without any warranties of any kind.
*
* in-kernel ipfw tables support.
*
* $FreeBSD$
*/
#include <sys/types.h>
#include <sys/param.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <ctype.h>
#include <err.h>
#include <errno.h>
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sysexits.h>
#include <net/if.h>
#include <netinet/in.h>
#include <netinet/ip_fw.h>
#include <arpa/inet.h>
#include <netdb.h>
#include "ipfw2.h"
static void table_modify_record(ipfw_obj_header *oh, int ac, char *av[],
int add, int quiet, int update, int atomic);
static int table_flush(ipfw_obj_header *oh);
static int table_destroy(ipfw_obj_header *oh);
static int table_do_create(ipfw_obj_header *oh, ipfw_xtable_info *i);
static int table_do_modify(ipfw_obj_header *oh, ipfw_xtable_info *i);
static int table_do_swap(ipfw_obj_header *oh, char *second);
static void table_create(ipfw_obj_header *oh, int ac, char *av[]);
static void table_modify(ipfw_obj_header *oh, int ac, char *av[]);
static void table_lookup(ipfw_obj_header *oh, int ac, char *av[]);
static void table_lock(ipfw_obj_header *oh, int lock);
static int table_swap(ipfw_obj_header *oh, char *second);
static int table_get_info(ipfw_obj_header *oh, ipfw_xtable_info *i);
static int table_show_info(ipfw_xtable_info *i, void *arg);
static int table_destroy_one(ipfw_xtable_info *i, void *arg);
static int table_flush_one(ipfw_xtable_info *i, void *arg);
static int table_show_one(ipfw_xtable_info *i, void *arg);
static int table_do_get_list(ipfw_xtable_info *i, ipfw_obj_header **poh);
static void table_show_list(ipfw_obj_header *oh, int need_header);
static void table_show_entry(ipfw_xtable_info *i, ipfw_obj_tentry *tent);
static void tentry_fill_key(ipfw_obj_header *oh, ipfw_obj_tentry *tent,
char *key, int add, uint8_t *ptype, uint32_t *pvmask, ipfw_xtable_info *xi);
static void tentry_fill_value(ipfw_obj_header *oh, ipfw_obj_tentry *tent,
char *arg, uint8_t type, uint32_t vmask);
static void table_show_value(char *buf, size_t bufsize, ipfw_table_value *v,
uint32_t vmask, int print_ip);
typedef int (table_cb_t)(ipfw_xtable_info *i, void *arg);
static int tables_foreach(table_cb_t *f, void *arg, int sort);
#ifndef s6_addr32
#define s6_addr32 __u6_addr.__u6_addr32
#endif
static struct _s_x tabletypes[] = {
{ "addr", IPFW_TABLE_ADDR },
{ "iface", IPFW_TABLE_INTERFACE },
{ "number", IPFW_TABLE_NUMBER },
{ "flow", IPFW_TABLE_FLOW },
{ NULL, 0 }
};
static struct _s_x tablevaltypes[] = {
{ "skipto", IPFW_VTYPE_SKIPTO },
{ "pipe", IPFW_VTYPE_PIPE },
{ "fib", IPFW_VTYPE_FIB },
{ "nat", IPFW_VTYPE_NAT },
{ "dscp", IPFW_VTYPE_DSCP },
{ "tag", IPFW_VTYPE_TAG },
{ "divert", IPFW_VTYPE_DIVERT },
{ "netgraph", IPFW_VTYPE_NETGRAPH },
{ "limit", IPFW_VTYPE_LIMIT },
{ "ipv4", IPFW_VTYPE_NH4 },
{ "ipv6", IPFW_VTYPE_NH6 },
{ NULL, 0 }
};
static struct _s_x tablecmds[] = {
{ "add", TOK_ADD },
{ "delete", TOK_DEL },
{ "create", TOK_CREATE },
{ "destroy", TOK_DESTROY },
{ "flush", TOK_FLUSH },
{ "modify", TOK_MODIFY },
{ "swap", TOK_SWAP },
{ "info", TOK_INFO },
{ "detail", TOK_DETAIL },
{ "list", TOK_LIST },
{ "lookup", TOK_LOOKUP },
{ "atomic", TOK_ATOMIC },
{ "lock", TOK_LOCK },
{ "unlock", TOK_UNLOCK },
{ NULL, 0 }
};
static int
lookup_host (char *host, struct in_addr *ipaddr)
{
struct hostent *he;
if (!inet_aton(host, ipaddr)) {
if ((he = gethostbyname(host)) == NULL)
return(-1);
*ipaddr = *(struct in_addr *)he->h_addr_list[0];
}
return(0);
}
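`lookup_host()` first attempts a numeric parse and only falls back to a DNS lookup when that fails. A hedged sketch of the numeric path only (hypothetical name; the resolver fallback is omitted so the example needs no network):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Parse a dotted-quad address; returns 0 and fills *ipaddr on
 * success, -1 otherwise.  A full implementation would fall back
 * to gethostbyname()/getaddrinfo() here, as lookup_host() does. */
static int
parse_ipv4(const char *host, struct in_addr *ipaddr)
{
	if (inet_pton(AF_INET, host, ipaddr) != 1)
		return (-1);
	return (0);
}
```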
/*
* This one handles all table-related commands
* ipfw table NAME create ...
* ipfw table NAME modify ...
* ipfw table {NAME | all} destroy
* ipfw table NAME swap NAME
* ipfw table NAME lock
* ipfw table NAME unlock
* ipfw table NAME add addr[/masklen] [value]
* ipfw table NAME add [addr[/masklen] value] [addr[/masklen] value] ..
* ipfw table NAME delete addr[/masklen] [addr[/masklen]] ..
* ipfw table NAME lookup addr
* ipfw table {NAME | all} flush
* ipfw table {NAME | all} list
* ipfw table {NAME | all} info
* ipfw table {NAME | all} detail
*/
void
ipfw_table_handler(int ac, char *av[])
{
int do_add, is_all;
int atomic, error, tcmd;
ipfw_xtable_info i;
ipfw_obj_header oh;
char *tablename;
uint8_t set;
void *arg;
memset(&oh, 0, sizeof(oh));
is_all = 0;
if (co.use_set != 0)
set = co.use_set - 1;
else
set = 0;
ac--; av++;
NEED1("table needs name");
tablename = *av;
if (table_check_name(tablename) == 0) {
table_fill_ntlv(&oh.ntlv, *av, set, 1);
oh.idx = 1;
} else {
if (strcmp(tablename, "all") == 0)
is_all = 1;
else
errx(EX_USAGE, "table name %s is invalid", tablename);
}
ac--; av++;
NEED1("table needs command");
tcmd = get_token(tablecmds, *av, "table command");
/* Check if atomic operation was requested */
atomic = 0;
if (tcmd == TOK_ATOMIC) {
ac--; av++;
NEED1("atomic needs command");
tcmd = get_token(tablecmds, *av, "table command");
switch (tcmd) {
case TOK_ADD:
break;
default:
errx(EX_USAGE, "atomic is not compatible with %s", *av);
}
atomic = 1;
}
switch (tcmd) {
case TOK_LIST:
case TOK_INFO:
case TOK_DETAIL:
case TOK_FLUSH:
case TOK_DESTROY:
break;
default:
if (is_all != 0)
errx(EX_USAGE, "table name required");
}
switch (tcmd) {
case TOK_ADD:
case TOK_DEL:
do_add = **av == 'a';
ac--; av++;
table_modify_record(&oh, ac, av, do_add, co.do_quiet,
co.do_quiet, atomic);
break;
case TOK_CREATE:
ac--; av++;
table_create(&oh, ac, av);
break;
case TOK_MODIFY:
ac--; av++;
table_modify(&oh, ac, av);
break;
case TOK_DESTROY:
if (is_all == 0) {
if (table_destroy(&oh) == 0)
break;
if (errno != ESRCH)
err(EX_OSERR, "failed to destroy table %s",
tablename);
/* ESRCH isn't fatal, warn if not quiet mode */
if (co.do_quiet == 0)
warn("failed to destroy table %s", tablename);
} else {
error = tables_foreach(table_destroy_one, &oh, 1);
if (error != 0)
err(EX_OSERR,
"failed to destroy tables list");
}
break;
case TOK_FLUSH:
if (is_all == 0) {
if ((error = table_flush(&oh)) == 0)
break;
if (errno != ESRCH)
err(EX_OSERR, "failed to flush table %s info",
tablename);
/* ESRCH isn't fatal, warn if not quiet mode */
if (co.do_quiet == 0)
warn("failed to flush table %s info",
tablename);
} else {
error = tables_foreach(table_flush_one, &oh, 1);
if (error != 0)
err(EX_OSERR, "failed to flush tables list");
/* XXX: we ignore errors here */
}
break;
case TOK_SWAP:
ac--; av++;
NEED1("second table name required");
table_swap(&oh, *av);
break;
case TOK_LOCK:
case TOK_UNLOCK:
table_lock(&oh, (tcmd == TOK_LOCK));
break;
case TOK_DETAIL:
case TOK_INFO:
arg = (tcmd == TOK_DETAIL) ? (void *)1 : NULL;
if (is_all == 0) {
if ((error = table_get_info(&oh, &i)) != 0)
err(EX_OSERR, "failed to request table info");
table_show_info(&i, arg);
} else {
error = tables_foreach(table_show_info, arg, 1);
if (error != 0)
err(EX_OSERR, "failed to request tables list");
}
break;
case TOK_LIST:
+ arg = is_all ? (void*)1 : NULL;
if (is_all == 0) {
ipfw_xtable_info i;
if ((error = table_get_info(&oh, &i)) != 0)
err(EX_OSERR, "failed to request table info");
- table_show_one(&i, NULL);
+ table_show_one(&i, arg);
} else {
- error = tables_foreach(table_show_one, NULL, 1);
+ error = tables_foreach(table_show_one, arg, 1);
if (error != 0)
err(EX_OSERR, "failed to request tables list");
}
break;
case TOK_LOOKUP:
ac--; av++;
table_lookup(&oh, ac, av);
break;
}
}
void
table_fill_ntlv(ipfw_obj_ntlv *ntlv, const char *name, uint8_t set,
uint16_t uidx)
{
ntlv->head.type = IPFW_TLV_TBL_NAME;
ntlv->head.length = sizeof(ipfw_obj_ntlv);
ntlv->idx = uidx;
ntlv->set = set;
strlcpy(ntlv->name, name, sizeof(ntlv->name));
}
static void
table_fill_objheader(ipfw_obj_header *oh, ipfw_xtable_info *i)
{
oh->idx = 1;
table_fill_ntlv(&oh->ntlv, i->tablename, i->set, 1);
}
static struct _s_x tablenewcmds[] = {
{ "type", TOK_TYPE },
{ "valtype", TOK_VALTYPE },
{ "algo", TOK_ALGO },
{ "limit", TOK_LIMIT },
{ "locked", TOK_LOCK },
{ NULL, 0 }
};
static struct _s_x flowtypecmds[] = {
{ "src-ip", IPFW_TFFLAG_SRCIP },
{ "proto", IPFW_TFFLAG_PROTO },
{ "src-port", IPFW_TFFLAG_SRCPORT },
{ "dst-ip", IPFW_TFFLAG_DSTIP },
{ "dst-port", IPFW_TFFLAG_DSTPORT },
{ NULL, 0 }
};
int
table_parse_type(uint8_t ttype, char *p, uint8_t *tflags)
{
uint32_t fset, fclear;
char *e;
/* Parse type options */
switch(ttype) {
case IPFW_TABLE_FLOW:
fset = fclear = 0;
if (fill_flags(flowtypecmds, p, &e, &fset, &fclear) != 0)
errx(EX_USAGE,
"unable to parse flow option %s", e);
*tflags = fset;
break;
default:
return (EX_USAGE);
}
return (0);
}
void
table_print_type(char *tbuf, size_t size, uint8_t type, uint8_t tflags)
{
const char *tname;
int l;
if ((tname = match_value(tabletypes, type)) == NULL)
tname = "unknown";
l = snprintf(tbuf, size, "%s", tname);
tbuf += l;
size -= l;
switch(type) {
case IPFW_TABLE_FLOW:
if (tflags != 0) {
*tbuf++ = ':';
size--;
print_flags_buffer(tbuf, size, flowtypecmds, tflags);
}
break;
}
}
/*
* Creates new table
*
* ipfw table NAME create [ type { addr | iface | number | flow } ]
* [ algo algoname ]
*/
static void
table_create(ipfw_obj_header *oh, int ac, char *av[])
{
ipfw_xtable_info xi;
int error, tcmd, val;
uint32_t fset, fclear;
char *e, *p;
char tbuf[128];
memset(&xi, 0, sizeof(xi));
while (ac > 0) {
tcmd = get_token(tablenewcmds, *av, "option");
ac--; av++;
switch (tcmd) {
case TOK_LIMIT:
NEED1("limit value required");
xi.limit = strtol(*av, NULL, 10);
ac--; av++;
break;
case TOK_TYPE:
NEED1("table type required");
/* Type may have suboptions after ':' */
if ((p = strchr(*av, ':')) != NULL)
*p++ = '\0';
val = match_token(tabletypes, *av);
if (val == -1) {
concat_tokens(tbuf, sizeof(tbuf), tabletypes,
", ");
errx(EX_USAGE,
"Unknown tabletype: %s. Supported: %s",
*av, tbuf);
}
xi.type = val;
if (p != NULL) {
error = table_parse_type(val, p, &xi.tflags);
if (error != 0)
errx(EX_USAGE,
"Unsupported suboptions: %s", p);
}
ac--; av++;
break;
case TOK_VALTYPE:
NEED1("table value type required");
fset = fclear = 0;
val = fill_flags(tablevaltypes, *av, &e, &fset, &fclear);
if (val != -1) {
xi.vmask = fset;
ac--; av++;
break;
}
concat_tokens(tbuf, sizeof(tbuf), tablevaltypes, ", ");
errx(EX_USAGE, "Unknown value type: %s. Supported: %s",
e, tbuf);
break;
case TOK_ALGO:
NEED1("table algorithm name required");
if (strlen(*av) >= sizeof(xi.algoname))
errx(EX_USAGE, "algorithm name too long");
strlcpy(xi.algoname, *av, sizeof(xi.algoname));
ac--; av++;
break;
case TOK_LOCK:
xi.flags |= IPFW_TGFLAGS_LOCKED;
break;
}
}
/* Set some defaults to preserve compatibility. */
if (xi.algoname[0] == '\0' && xi.type == 0)
xi.type = IPFW_TABLE_ADDR;
if (xi.vmask == 0)
xi.vmask = IPFW_VTYPE_LEGACY;
if ((error = table_do_create(oh, &xi)) != 0)
err(EX_OSERR, "Table creation failed");
}
/*
* Creates new table
*
* Request: [ ipfw_obj_header ipfw_xtable_info ]
*
* Returns 0 on success.
*/
static int
table_do_create(ipfw_obj_header *oh, ipfw_xtable_info *i)
{
char tbuf[sizeof(ipfw_obj_header) + sizeof(ipfw_xtable_info)];
int error;
memcpy(tbuf, oh, sizeof(*oh));
memcpy(tbuf + sizeof(*oh), i, sizeof(*i));
oh = (ipfw_obj_header *)tbuf;
error = do_set3(IP_FW_TABLE_XCREATE, &oh->opheader, sizeof(tbuf));
return (error);
}
/*
* Modifies existing table
*
* ipfw table NAME modify [ limit number ]
*/
static void
table_modify(ipfw_obj_header *oh, int ac, char *av[])
{
ipfw_xtable_info xi;
int tcmd;
memset(&xi, 0, sizeof(xi));
while (ac > 0) {
tcmd = get_token(tablenewcmds, *av, "option");
ac--; av++;
switch (tcmd) {
case TOK_LIMIT:
NEED1("limit value required");
xi.limit = strtol(*av, NULL, 10);
xi.mflags |= IPFW_TMFLAGS_LIMIT;
ac--; av++;
break;
default:
errx(EX_USAGE, "cmd is not supported for modification");
}
}
if (table_do_modify(oh, &xi) != 0)
err(EX_OSERR, "Table modification failed");
}
/*
* Modifies existing table.
*
* Request: [ ipfw_obj_header ipfw_xtable_info ]
*
* Returns 0 on success.
*/
static int
table_do_modify(ipfw_obj_header *oh, ipfw_xtable_info *i)
{
char tbuf[sizeof(ipfw_obj_header) + sizeof(ipfw_xtable_info)];
int error;
memcpy(tbuf, oh, sizeof(*oh));
memcpy(tbuf + sizeof(*oh), i, sizeof(*i));
oh = (ipfw_obj_header *)tbuf;
error = do_set3(IP_FW_TABLE_XMODIFY, &oh->opheader, sizeof(tbuf));
return (error);
}
/*
* Locks or unlocks given table
*/
static void
table_lock(ipfw_obj_header *oh, int lock)
{
ipfw_xtable_info xi;
memset(&xi, 0, sizeof(xi));
xi.mflags |= IPFW_TMFLAGS_LOCK;
xi.flags |= (lock != 0) ? IPFW_TGFLAGS_LOCKED : 0;
if (table_do_modify(oh, &xi) != 0)
err(EX_OSERR, "Table %s failed", lock != 0 ? "lock" : "unlock");
}
/*
* Destroys given table specified by @oh->ntlv.
* Returns 0 on success.
*/
static int
table_destroy(ipfw_obj_header *oh)
{
if (do_set3(IP_FW_TABLE_XDESTROY, &oh->opheader, sizeof(*oh)) != 0)
return (-1);
return (0);
}
static int
table_destroy_one(ipfw_xtable_info *i, void *arg)
{
ipfw_obj_header *oh;
oh = (ipfw_obj_header *)arg;
table_fill_ntlv(&oh->ntlv, i->tablename, i->set, 1);
if (table_destroy(oh) != 0) {
if (co.do_quiet == 0)
warn("failed to destroy table(%s) in set %u",
i->tablename, i->set);
return (-1);
}
return (0);
}
/*
* Flushes given table specified by @oh->ntlv.
* Returns 0 on success.
*/
static int
table_flush(ipfw_obj_header *oh)
{
if (do_set3(IP_FW_TABLE_XFLUSH, &oh->opheader, sizeof(*oh)) != 0)
return (-1);
return (0);
}
static int
table_do_swap(ipfw_obj_header *oh, char *second)
{
char tbuf[sizeof(ipfw_obj_header) + sizeof(ipfw_obj_ntlv)];
int error;
memset(tbuf, 0, sizeof(tbuf));
memcpy(tbuf, oh, sizeof(*oh));
oh = (ipfw_obj_header *)tbuf;
table_fill_ntlv((ipfw_obj_ntlv *)(oh + 1), second, oh->ntlv.set, 1);
error = do_set3(IP_FW_TABLE_XSWAP, &oh->opheader, sizeof(tbuf));
return (error);
}
/*
* Swaps given table with @second one.
*/
static int
table_swap(ipfw_obj_header *oh, char *second)
{
if (table_check_name(second) != 0)
errx(EX_USAGE, "table name %s is invalid", second);
if (table_do_swap(oh, second) == 0)
return (0);
switch (errno) {
case EINVAL:
errx(EX_USAGE, "Unable to swap table: check types");
case EFBIG:
errx(EX_USAGE, "Unable to swap table: check limits");
}
return (0);
}
/*
* Retrieves info for the table specified by @oh->ntlv and stores
* it inside @i.
* Returns 0 on success.
*/
static int
table_get_info(ipfw_obj_header *oh, ipfw_xtable_info *i)
{
char tbuf[sizeof(ipfw_obj_header) + sizeof(ipfw_xtable_info)];
size_t sz;
sz = sizeof(tbuf);
memset(tbuf, 0, sizeof(tbuf));
memcpy(tbuf, oh, sizeof(*oh));
oh = (ipfw_obj_header *)tbuf;
if (do_get3(IP_FW_TABLE_XINFO, &oh->opheader, &sz) != 0)
return (errno);
if (sz < sizeof(tbuf))
return (EINVAL);
*i = *(ipfw_xtable_info *)(oh + 1);
return (0);
}
static struct _s_x tablealgoclass[] = {
{ "hash", IPFW_TACLASS_HASH },
{ "array", IPFW_TACLASS_ARRAY },
{ "radix", IPFW_TACLASS_RADIX },
{ NULL, 0 }
};
struct ta_cldata {
uint8_t taclass;
uint8_t spare4;
uint16_t itemsize;
uint16_t itemsize6;
uint32_t size;
uint32_t count;
};
/*
* Print global/per-AF table @i algorithm info.
*/
static void
table_show_tainfo(ipfw_xtable_info *i, struct ta_cldata *d,
const char *af, const char *taclass)
{
switch (d->taclass) {
case IPFW_TACLASS_HASH:
case IPFW_TACLASS_ARRAY:
printf(" %salgorithm %s info\n", af, taclass);
if (d->itemsize == d->itemsize6)
printf(" size: %u items: %u itemsize: %u\n",
d->size, d->count, d->itemsize);
else
printf(" size: %u items: %u "
"itemsize4: %u itemsize6: %u\n",
d->size, d->count,
d->itemsize, d->itemsize6);
break;
case IPFW_TACLASS_RADIX:
printf(" %salgorithm %s info\n", af, taclass);
if (d->itemsize == d->itemsize6)
printf(" items: %u itemsize: %u\n",
d->count, d->itemsize);
else
printf(" items: %u "
"itemsize4: %u itemsize6: %u\n",
d->count, d->itemsize, d->itemsize6);
break;
default:
printf(" algo class: %s\n", taclass);
}
}
static void
table_print_valheader(char *buf, size_t bufsize, uint32_t vmask)
{
if (vmask == IPFW_VTYPE_LEGACY) {
snprintf(buf, bufsize, "legacy");
return;
}
memset(buf, 0, bufsize);
print_flags_buffer(buf, bufsize, tablevaltypes, vmask);
}
/*
* Prints table info struct @i in human-readable form.
*/
static int
table_show_info(ipfw_xtable_info *i, void *arg)
{
const char *vtype;
ipfw_ta_tinfo *tainfo;
int afdata, afitem;
struct ta_cldata d;
char ttype[64], tvtype[64];
table_print_type(ttype, sizeof(ttype), i->type, i->tflags);
table_print_valheader(tvtype, sizeof(tvtype), i->vmask);
printf("--- table(%s), set(%u) ---\n", i->tablename, i->set);
if ((i->flags & IPFW_TGFLAGS_LOCKED) != 0)
printf(" kindex: %d, type: %s, locked\n", i->kidx, ttype);
else
printf(" kindex: %d, type: %s\n", i->kidx, ttype);
printf(" references: %u, valtype: %s\n", i->refcnt, tvtype);
printf(" algorithm: %s\n", i->algoname);
printf(" items: %u, size: %u\n", i->count, i->size);
if (i->limit > 0)
printf(" limit: %u\n", i->limit);
/* Print algo-specific info if requested & set */
if (arg == NULL)
return (0);
if ((i->ta_info.flags & IPFW_TATFLAGS_DATA) == 0)
return (0);
tainfo = &i->ta_info;
afdata = 0;
afitem = 0;
if (tainfo->flags & IPFW_TATFLAGS_AFDATA)
afdata = 1;
if (tainfo->flags & IPFW_TATFLAGS_AFITEM)
afitem = 1;
memset(&d, 0, sizeof(d));
d.taclass = tainfo->taclass4;
d.size = tainfo->size4;
d.count = tainfo->count4;
d.itemsize = tainfo->itemsize4;
if (afdata == 0 && afitem != 0)
d.itemsize6 = tainfo->itemsize6;
else
d.itemsize6 = d.itemsize;
if ((vtype = match_value(tablealgoclass, d.taclass)) == NULL)
vtype = "unknown";
if (afdata == 0) {
table_show_tainfo(i, &d, "", vtype);
} else {
table_show_tainfo(i, &d, "IPv4 ", vtype);
memset(&d, 0, sizeof(d));
d.taclass = tainfo->taclass6;
if ((vtype = match_value(tablealgoclass, d.taclass)) == NULL)
vtype = "unknown";
d.size = tainfo->size6;
d.count = tainfo->count6;
d.itemsize = tainfo->itemsize6;
d.itemsize6 = d.itemsize;
table_show_tainfo(i, &d, "IPv6 ", vtype);
}
return (0);
}
/*
* Function wrappers which can be used either
* as is or as foreach function parameter.
*/
static int
table_show_one(ipfw_xtable_info *i, void *arg)
{
ipfw_obj_header *oh;
int error;
+ int is_all;
+ is_all = arg == NULL ? 0 : 1;
+
if ((error = table_do_get_list(i, &oh)) != 0) {
err(EX_OSERR, "Error requesting table %s list", i->tablename);
return (error);
}
- table_show_list(oh, 1);
+ table_show_list(oh, is_all);
free(oh);
return (0);
}
static int
table_flush_one(ipfw_xtable_info *i, void *arg)
{
ipfw_obj_header *oh;
oh = (ipfw_obj_header *)arg;
table_fill_ntlv(&oh->ntlv, i->tablename, i->set, 1);
return (table_flush(oh));
}
static int
table_do_modify_record(int cmd, ipfw_obj_header *oh,
ipfw_obj_tentry *tent, int count, int atomic)
{
ipfw_obj_ctlv *ctlv;
ipfw_obj_tentry *tent_base;
caddr_t pbuf;
char xbuf[sizeof(*oh) + sizeof(ipfw_obj_ctlv) + sizeof(*tent)];
int error, i;
size_t sz;
sz = sizeof(*ctlv) + sizeof(*tent) * count;
if (count == 1) {
memset(xbuf, 0, sizeof(xbuf));
pbuf = xbuf;
} else {
if ((pbuf = calloc(1, sizeof(*oh) + sz)) == NULL)
return (ENOMEM);
}
memcpy(pbuf, oh, sizeof(*oh));
oh = (ipfw_obj_header *)pbuf;
oh->opheader.version = 1;
ctlv = (ipfw_obj_ctlv *)(oh + 1);
ctlv->count = count;
ctlv->head.length = sz;
if (atomic != 0)
ctlv->flags |= IPFW_CTF_ATOMIC;
tent_base = tent;
memcpy(ctlv + 1, tent, sizeof(*tent) * count);
tent = (ipfw_obj_tentry *)(ctlv + 1);
for (i = 0; i < count; i++, tent++) {
tent->head.length = sizeof(ipfw_obj_tentry);
tent->idx = oh->idx;
}
sz += sizeof(*oh);
error = do_get3(cmd, &oh->opheader, &sz);
if (error != 0)
error = errno;
tent = (ipfw_obj_tentry *)(ctlv + 1);
/* Copy result back to provided buffer */
memcpy(tent_base, ctlv + 1, sizeof(*tent) * count);
if (pbuf != xbuf)
free(pbuf);
return (error);
}
static void
table_modify_record(ipfw_obj_header *oh, int ac, char *av[], int add,
int quiet, int update, int atomic)
{
ipfw_obj_tentry *ptent, tent, *tent_buf;
ipfw_xtable_info xi;
uint8_t type;
uint32_t vmask;
int cmd, count, error, i, ignored;
char *texterr, *etxt, *px;
if (ac == 0)
errx(EX_USAGE, "address required");
if (add != 0) {
cmd = IP_FW_TABLE_XADD;
texterr = "Adding record failed";
} else {
cmd = IP_FW_TABLE_XDEL;
texterr = "Deleting record failed";
}
/*
* Calculate number of entries:
* Assume [key val] x N for add
* and
* key x N for delete
*/
count = (add != 0) ? ac / 2 + 1 : ac;
if (count <= 1) {
/* Adding single entry with/without value */
memset(&tent, 0, sizeof(tent));
tent_buf = &tent;
} else {
if ((tent_buf = calloc(count, sizeof(tent))) == NULL)
errx(EX_OSERR,
"Unable to allocate memory for all entries");
}
ptent = tent_buf;
memset(&xi, 0, sizeof(xi));
count = 0;
while (ac > 0) {
tentry_fill_key(oh, ptent, *av, add, &type, &vmask, &xi);
/*
* Compatibility layer: auto-create table if not exists.
*/
if (xi.tablename[0] == '\0') {
xi.type = type;
xi.vmask = vmask;
strlcpy(xi.tablename, oh->ntlv.name,
sizeof(xi.tablename));
if (quiet == 0)
warnx("DEPRECATED: inserting data into "
"non-existent table %s. (auto-created)",
xi.tablename);
table_do_create(oh, &xi);
}
oh->ntlv.type = type;
ac--; av++;
if (add != 0 && ac > 0) {
tentry_fill_value(oh, ptent, *av, type, vmask);
ac--; av++;
}
if (update != 0)
ptent->head.flags |= IPFW_TF_UPDATE;
count++;
ptent++;
}
error = table_do_modify_record(cmd, oh, tent_buf, count, atomic);
/*
* Compatibility stuff: do not yell on duplicate keys or
* failed deletions.
*/
if (error == 0 || (error == EEXIST && add != 0) ||
(error == ENOENT && add == 0)) {
if (quiet != 0) {
if (tent_buf != &tent)
free(tent_buf);
return;
}
}
/* Report results back */
ptent = tent_buf;
for (i = 0; i < count; ptent++, i++) {
ignored = 0;
switch (ptent->result) {
case IPFW_TR_ADDED:
px = "added";
break;
case IPFW_TR_DELETED:
px = "deleted";
break;
case IPFW_TR_UPDATED:
px = "updated";
break;
case IPFW_TR_LIMIT:
px = "limit";
ignored = 1;
break;
case IPFW_TR_ERROR:
px = "error";
ignored = 1;
break;
case IPFW_TR_NOTFOUND:
px = "notfound";
ignored = 1;
break;
case IPFW_TR_EXISTS:
px = "exists";
ignored = 1;
break;
case IPFW_TR_IGNORED:
px = "ignored";
ignored = 1;
break;
default:
px = "unknown";
ignored = 1;
}
if (error != 0 && atomic != 0 && ignored == 0)
printf("%s(reverted): ", px);
else
printf("%s: ", px);
table_show_entry(&xi, ptent);
}
if (tent_buf != &tent)
free(tent_buf);
if (error == 0)
return;
/* Get real OS error */
error = errno;
/* Try to provide more human-readable error */
switch (error) {
case EEXIST:
etxt = "record already exists";
break;
case EFBIG:
etxt = "limit hit";
break;
case ESRCH:
etxt = "table not found";
break;
case ENOENT:
etxt = "record not found";
break;
case EACCES:
etxt = "table is locked";
break;
default:
etxt = strerror(error);
}
errx(EX_OSERR, "%s: %s", texterr, etxt);
}
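The entry-count calculation near the top of `table_modify_record()` pre-allocates based on the argument layout: adds arrive as `[key value]` pairs (the value being optional for the last entry), deletes as bare keys, so `ac / 2 + 1` is an upper bound for adds. A one-liner restating that formula:

```c
/* Upper bound on the number of table entries encoded in ac
 * arguments: [key value] pairs for add (ac / 2 + 1, which may
 * over-count by one when every key carries a value), bare keys
 * for delete.  Mirrors the count calculation in
 * table_modify_record(). */
static int
entry_count(int ac, int add)
{
	return (add != 0 ? ac / 2 + 1 : ac);
}
```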
static int
table_do_lookup(ipfw_obj_header *oh, char *key, ipfw_xtable_info *xi,
ipfw_obj_tentry *xtent)
{
char xbuf[sizeof(ipfw_obj_header) + sizeof(ipfw_obj_tentry)];
ipfw_obj_tentry *tent;
uint8_t type;
uint32_t vmask;
size_t sz;
memcpy(xbuf, oh, sizeof(*oh));
oh = (ipfw_obj_header *)xbuf;
tent = (ipfw_obj_tentry *)(oh + 1);
memset(tent, 0, sizeof(*tent));
tent->head.length = sizeof(*tent);
tent->idx = 1;
tentry_fill_key(oh, tent, key, 0, &type, &vmask, xi);
oh->ntlv.type = type;
sz = sizeof(xbuf);
if (do_get3(IP_FW_TABLE_XFIND, &oh->opheader, &sz) != 0)
return (errno);
if (sz < sizeof(xbuf))
return (EINVAL);
*xtent = *tent;
return (0);
}
static void
table_lookup(ipfw_obj_header *oh, int ac, char *av[])
{
ipfw_obj_tentry xtent;
ipfw_xtable_info xi;
char key[64];
int error;
if (ac == 0)
errx(EX_USAGE, "address required");
strlcpy(key, *av, sizeof(key));
memset(&xi, 0, sizeof(xi));
error = table_do_lookup(oh, key, &xi, &xtent);
switch (error) {
case 0:
break;
case ESRCH:
errx(EX_UNAVAILABLE, "Table %s not found", oh->ntlv.name);
case ENOENT:
errx(EX_UNAVAILABLE, "Entry %s not found", *av);
case ENOTSUP:
errx(EX_UNAVAILABLE, "Table %s algo does not support "
"\"lookup\" method", oh->ntlv.name);
default:
err(EX_OSERR, "getsockopt(IP_FW_TABLE_XFIND)");
}
table_show_entry(&xi, &xtent);
}
static void
tentry_fill_key_type(char *arg, ipfw_obj_tentry *tentry, uint8_t type,
uint8_t tflags)
{
char *p, *pp;
int mask, af;
struct in6_addr *paddr, tmp;
struct tflow_entry *tfe;
uint32_t key, *pkey;
uint16_t port;
struct protoent *pent;
struct servent *sent;
int masklen;
masklen = 0;
af = 0;
paddr = (struct in6_addr *)&tentry->k;
switch (type) {
case IPFW_TABLE_ADDR:
/* Remove / if exists */
if ((p = strchr(arg, '/')) != NULL) {
*p = '\0';
mask = atoi(p + 1);
}
if (inet_pton(AF_INET, arg, paddr) == 1) {
if (p != NULL && mask > 32)
errx(EX_DATAERR, "bad IPv4 mask width: %s",
p + 1);
masklen = p ? mask : 32;
af = AF_INET;
} else if (inet_pton(AF_INET6, arg, paddr) == 1) {
if (IN6_IS_ADDR_V4COMPAT(paddr))
errx(EX_DATAERR,
"Use IPv4 instead of v4-compatible");
if (p != NULL && mask > 128)
errx(EX_DATAERR, "bad IPv6 mask width: %s",
p + 1);
masklen = p ? mask : 128;
af = AF_INET6;
} else {
/* Assume FQDN */
if (lookup_host(arg, (struct in_addr *)paddr) != 0)
errx(EX_NOHOST, "hostname ``%s'' unknown", arg);
masklen = 32;
type = IPFW_TABLE_ADDR;
af = AF_INET;
}
break;
case IPFW_TABLE_INTERFACE:
/* Assume interface name. Copy significant data only */
mask = MIN(strlen(arg), IF_NAMESIZE - 1);
memcpy(paddr, arg, mask);
/* Set mask to exact match */
masklen = 8 * IF_NAMESIZE;
break;
case IPFW_TABLE_NUMBER:
/* Port or any other key */
key = strtol(arg, &p, 10);
if (*p != '\0')
errx(EX_DATAERR, "Invalid number: %s", arg);
pkey = (uint32_t *)paddr;
*pkey = key;
masklen = 32;
break;
case IPFW_TABLE_FLOW:
/* Assume [src-ip][,proto][,src-port][,dst-ip][,dst-port] */
tfe = &tentry->k.flow;
af = 0;
/* Handle <ipv4|ipv6> */
if ((tflags & IPFW_TFFLAG_SRCIP) != 0) {
if ((p = strchr(arg, ',')) != NULL)
*p++ = '\0';
/* Determine family using temporary storage */
if (inet_pton(AF_INET, arg, &tmp) == 1) {
if (af != 0 && af != AF_INET)
errx(EX_DATAERR,
"Inconsistent address family");
af = AF_INET;
memcpy(&tfe->a.a4.sip, &tmp, 4);
} else if (inet_pton(AF_INET6, arg, &tmp) == 1) {
if (af != 0 && af != AF_INET6)
errx(EX_DATAERR,
"Inconsistent address family");
af = AF_INET6;
memcpy(&tfe->a.a6.sip6, &tmp, 16);
}
arg = p;
}
/* Handle <proto-num|proto-name> */
if ((tflags & IPFW_TFFLAG_PROTO) != 0) {
if (arg == NULL)
errx(EX_DATAERR, "invalid key: proto missing");
if ((p = strchr(arg, ',')) != NULL)
*p++ = '\0';
key = strtol(arg, &pp, 10);
if (*pp != '\0') {
if ((pent = getprotobyname(arg)) == NULL)
errx(EX_DATAERR, "Unknown proto: %s",
arg);
else
key = pent->p_proto;
}
if (key > 255)
errx(EX_DATAERR, "Bad protocol number: %u",key);
tfe->proto = key;
arg = p;
}
/* Handle <port-num|service-name> */
if ((tflags & IPFW_TFFLAG_SRCPORT) != 0) {
if (arg == NULL)
errx(EX_DATAERR, "invalid key: src port missing");
if ((p = strchr(arg, ',')) != NULL)
*p++ = '\0';
port = htons(strtol(arg, &pp, 10));
if (*pp != '\0') {
if ((sent = getservbyname(arg, NULL)) == NULL)
errx(EX_DATAERR, "Unknown service: %s",
arg);
port = sent->s_port;
}
tfe->sport = port;
arg = p;
}
/* Handle <ipv4|ipv6>*/
if ((tflags & IPFW_TFFLAG_DSTIP) != 0) {
if (arg == NULL)
errx(EX_DATAERR, "invalid key: dst ip missing");
if ((p = strchr(arg, ',')) != NULL)
*p++ = '\0';
/* Determine family using temporary storage */
if (inet_pton(AF_INET, arg, &tmp) == 1) {
if (af != 0 && af != AF_INET)
errx(EX_DATAERR,
"Inconsistent address family");
af = AF_INET;
memcpy(&tfe->a.a4.dip, &tmp, 4);
} else if (inet_pton(AF_INET6, arg, &tmp) == 1) {
if (af != 0 && af != AF_INET6)
errx(EX_DATAERR,
"Inconsistent address family");
af = AF_INET6;
memcpy(&tfe->a.a6.dip6, &tmp, 16);
}
arg = p;
}
/* Handle <port-num|service-name> */
if ((tflags & IPFW_TFFLAG_DSTPORT) != 0) {
if (arg == NULL)
errx(EX_DATAERR, "invalid key: dst port missing");
if ((p = strchr(arg, ',')) != NULL)
*p++ = '\0';
port = htons(strtol(arg, &pp, 10));
if (*pp != '\0') {
if ((sent = getservbyname(arg, NULL)) == NULL)
errx(EX_DATAERR, "Unknown service: %s",
arg);
port = sent->s_port;
}
tfe->dport = port;
arg = p;
}
tfe->af = af;
break;
default:
errx(EX_DATAERR, "Unsupported table type: %d", type);
}
tentry->subtype = af;
tentry->masklen = masklen;
}
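The `IPFW_TABLE_FLOW` branch above consumes one comma-separated field per pass: it terminates the current field at the comma and advances `arg` to the remainder. That split step, isolated as a self-contained helper (hypothetical name):

```c
#include <stddef.h>
#include <string.h>

/* Split a comma-separated flow spec in place, the way the
 * IPFW_TABLE_FLOW parser consumes one field at a time:
 * NUL-terminate the current field and return a pointer to the
 * rest of the string, or NULL when this was the last field. */
static char *
next_field(char *arg)
{
	char *p;

	if ((p = strchr(arg, ',')) != NULL)
		*p++ = '\0';
	return (p);
}
```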
/*
* Tries to guess table key type.
* This procedure is used in legacy table auto-create
* code AND in `ipfw -n` ruleset checking.
*
* Imported from old table_fill_xentry() parse code.
*/
static int
guess_key_type(char *key, uint8_t *ptype)
{
char *p;
struct in6_addr addr;
uint32_t kv;
if (ishexnumber(*key) != 0 || *key == ':') {
/* Remove / if exists */
if ((p = strchr(key, '/')) != NULL)
*p = '\0';
if ((inet_pton(AF_INET, key, &addr) == 1) ||
(inet_pton(AF_INET6, key, &addr) == 1)) {
*ptype = IPFW_TABLE_CIDR;
if (p != NULL)
*p = '/';
return (0);
} else {
/* Port or any other key */
/* Skip non-base 10 entries like 'fa1' */
kv = strtol(key, &p, 10);
if (*p == '\0') {
*ptype = IPFW_TABLE_NUMBER;
return (0);
} else if ((p != key) && (*p == '.')) {
/*
* Warn on IPv4 address strings
* which are "valid" for inet_aton() but not
* in inet_pton().
*
* Typical examples: '10.5' or '10.0.0.05'
*/
return (1);
}
}
}
if (strchr(key, '.') == NULL) {
*ptype = IPFW_TABLE_INTERFACE;
return (0);
}
if (lookup_host(key, (struct in_addr *)&addr) != 0)
return (1);
*ptype = IPFW_TABLE_CIDR;
return (0);
}
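The heuristic in `guess_key_type()` can be summarized as: address literal (after stripping any `/masklen`) means CIDR, an all-digit string means number, a dot-free string means interface name, and dotted strings that fail `inet_pton()` (e.g. `10.5`) are rejected rather than passed to `inet_aton()`. A simplified, resolver-free sketch of that ordering (hypothetical names; the real function also checks `ishexnumber()` first and falls back to a hostname lookup):

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum key_type { KT_CIDR, KT_NUMBER, KT_INTERFACE, KT_UNKNOWN };

/* Guess a table key type in the same order as guess_key_type(),
 * minus the hostname fallback. */
static enum key_type
guess_type(const char *key)
{
	struct in6_addr addr;
	char tmp[64], *slash, *end;

	snprintf(tmp, sizeof(tmp), "%s", key);
	if ((slash = strchr(tmp, '/')) != NULL)
		*slash = '\0';	/* strip masklen before parsing */
	if (inet_pton(AF_INET, tmp, &addr) == 1 ||
	    inet_pton(AF_INET6, tmp, &addr) == 1)
		return (KT_CIDR);
	(void)strtol(tmp, &end, 10);
	if (end != tmp && *end == '\0')
		return (KT_NUMBER);
	if (strchr(key, '.') == NULL)
		return (KT_INTERFACE);
	return (KT_UNKNOWN);	/* e.g. '10.5': aton-valid, pton-invalid */
}
```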
static void
tentry_fill_key(ipfw_obj_header *oh, ipfw_obj_tentry *tent, char *key,
int add, uint8_t *ptype, uint32_t *pvmask, ipfw_xtable_info *xi)
{
uint8_t type, tflags;
uint32_t vmask;
int error;
type = 0;
tflags = 0;
vmask = 0;
if (xi->tablename[0] == '\0')
error = table_get_info(oh, xi);
else
error = 0;
if (error == 0) {
if (co.test_only == 0) {
/* Table found */
type = xi->type;
tflags = xi->tflags;
vmask = xi->vmask;
} else {
/*
* We're running `ipfw -n`
* Compatibility layer: try to guess key type
* before failing.
*/
if (guess_key_type(key, &type) != 0) {
/* Unknown key */
errx(EX_USAGE, "Cannot guess "
"key '%s' type", key);
}
vmask = IPFW_VTYPE_LEGACY;
}
} else {
if (error != ESRCH)
errx(EX_OSERR, "Error requesting table %s info",
oh->ntlv.name);
if (add == 0)
errx(EX_DATAERR, "Table %s does not exist",
oh->ntlv.name);
/*
* Table does not exist
* Compatibility layer: try to guess key type before failing.
*/
if (guess_key_type(key, &type) != 0) {
/* Unknown key */
errx(EX_USAGE, "Table %s does not exist, cannot guess "
"key '%s' type", oh->ntlv.name, key);
}
vmask = IPFW_VTYPE_LEGACY;
}
tentry_fill_key_type(key, tent, type, tflags);
*ptype = type;
*pvmask = vmask;
}
static void
set_legacy_value(uint32_t val, ipfw_table_value *v)
{
v->tag = val;
v->pipe = val;
v->divert = val;
v->skipto = val;
v->netgraph = val;
v->fib = val;
v->nat = val;
v->nh4 = val;
v->dscp = (uint8_t)val;
v->limit = val;
}
static void
tentry_fill_value(ipfw_obj_header *oh, ipfw_obj_tentry *tent, char *arg,
uint8_t type, uint32_t vmask)
{
struct addrinfo hints, *res;
uint32_t a4, flag, val;
ipfw_table_value *v;
uint32_t i;
int dval;
char *comma, *e, *etype, *n, *p;
struct in_addr ipaddr;
v = &tent->v.value;
/* Compat layer: keep old behavior for legacy value types */
if (vmask == IPFW_VTYPE_LEGACY) {
/* Try to interpret as number first */
val = strtoul(arg, &p, 0);
if (*p == '\0') {
set_legacy_value(val, v);
return;
}
if (inet_pton(AF_INET, arg, &val) == 1) {
set_legacy_value(ntohl(val), v);
return;
}
/* Try hostname */
if (lookup_host(arg, &ipaddr) == 0) {
set_legacy_value(ntohl(ipaddr.s_addr), v);
return;
}
errx(EX_OSERR, "Unable to parse value %s", arg);
}
/*
* Shorthand: handle a single value when vmask consists of
* numeric types only, e.g.:
* vmask = "fib,skipto" -> treat input "1" as "1,1"
*/
n = arg;
etype = NULL;
for (i = 1; i < (1U << 31); i *= 2) {
if ((flag = (vmask & i)) == 0)
continue;
vmask &= ~flag;
if ((comma = strchr(n, ',')) != NULL)
*comma = '\0';
switch (flag) {
case IPFW_VTYPE_TAG:
v->tag = strtol(n, &e, 10);
if (*e != '\0')
etype = "tag";
break;
case IPFW_VTYPE_PIPE:
v->pipe = strtol(n, &e, 10);
if (*e != '\0')
etype = "pipe";
break;
case IPFW_VTYPE_DIVERT:
v->divert = strtol(n, &e, 10);
if (*e != '\0')
etype = "divert";
break;
case IPFW_VTYPE_SKIPTO:
v->skipto = strtol(n, &e, 10);
if (*e != '\0')
etype = "skipto";
break;
case IPFW_VTYPE_NETGRAPH:
v->netgraph = strtol(n, &e, 10);
if (*e != '\0')
etype = "netgraph";
break;
case IPFW_VTYPE_FIB:
v->fib = strtol(n, &e, 10);
if (*e != '\0')
etype = "fib";
break;
case IPFW_VTYPE_NAT:
v->nat = strtol(n, &e, 10);
if (*e != '\0')
etype = "nat";
break;
case IPFW_VTYPE_LIMIT:
v->limit = strtol(n, &e, 10);
if (*e != '\0')
etype = "limit";
break;
case IPFW_VTYPE_NH4:
if (strchr(n, '.') != NULL &&
inet_pton(AF_INET, n, &a4) == 1) {
v->nh4 = ntohl(a4);
break;
}
if (lookup_host(n, &ipaddr) == 0) {
v->nh4 = ntohl(ipaddr.s_addr);
break;
}
etype = "ipv4";
break;
case IPFW_VTYPE_DSCP:
if (isalpha(*n)) {
if ((dval = match_token(f_ipdscp, n)) != -1) {
v->dscp = dval;
break;
} else
etype = "DSCP code";
} else {
v->dscp = strtol(n, &e, 10);
if (v->dscp > 63 || *e != '\0')
etype = "DSCP value";
}
break;
case IPFW_VTYPE_NH6:
if (strchr(n, ':') != NULL) {
memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_INET6;
hints.ai_flags = AI_NUMERICHOST;
if (getaddrinfo(n, NULL, &hints, &res) == 0) {
v->nh6 = ((struct sockaddr_in6 *)
res->ai_addr)->sin6_addr;
v->zoneid = ((struct sockaddr_in6 *)
res->ai_addr)->sin6_scope_id;
freeaddrinfo(res);
break;
}
}
etype = "ipv6";
break;
}
if (etype != NULL)
errx(EX_USAGE, "Unable to parse %s as %s", n, etype);
if (comma != NULL)
*comma++ = ',';
if ((n = comma) != NULL)
continue;
/* End of input. */
if (vmask != 0)
errx(EX_USAGE, "Not enough fields inside value");
}
}
/*
* Compare table names.
* Honor number comparison.
*/
static int
tablename_cmp(const void *a, const void *b)
{
ipfw_xtable_info *ia, *ib;
ia = (ipfw_xtable_info *)a;
ib = (ipfw_xtable_info *)b;
return (stringnum_cmp(ia->tablename, ib->tablename));
}
/*
* Retrieves table list from kernel,
* optionally sorts it and calls requested function for each table.
* Returns 0 on success.
*/
static int
tables_foreach(table_cb_t *f, void *arg, int sort)
{
ipfw_obj_lheader *olh;
ipfw_xtable_info *info;
size_t sz;
int i, error;
/* Start with reasonable default */
sz = sizeof(*olh) + 16 * sizeof(ipfw_xtable_info);
for (;;) {
if ((olh = calloc(1, sz)) == NULL)
return (ENOMEM);
olh->size = sz;
if (do_get3(IP_FW_TABLES_XLIST, &olh->opheader, &sz) != 0) {
sz = olh->size;
free(olh);
if (errno != ENOMEM)
return (errno);
continue;
}
if (sort != 0)
qsort(olh + 1, olh->count, olh->objsize,
tablename_cmp);
info = (ipfw_xtable_info *)(olh + 1);
for (i = 0; i < olh->count; i++) {
if (co.use_set == 0 || info->set == co.use_set - 1)
error = f(info, arg);
info = (ipfw_xtable_info *)((caddr_t)info +
olh->objsize);
}
free(olh);
break;
}
return (0);
}
/*
* Retrieves all entries for given table @i in
* eXtended format. Allocates a buffer large enough
* to store the result. Caller needs to free it later.
*
* Returns 0 on success.
*/
static int
table_do_get_list(ipfw_xtable_info *i, ipfw_obj_header **poh)
{
ipfw_obj_header *oh;
size_t sz;
int c;
sz = 0;
oh = NULL;
for (c = 0; c < 8; c++) {
if (sz < i->size)
sz = i->size + 44;
if (oh != NULL)
free(oh);
if ((oh = calloc(1, sz)) == NULL)
continue;
table_fill_objheader(oh, i);
oh->opheader.version = 1; /* Current version */
if (do_get3(IP_FW_TABLE_XLIST, &oh->opheader, &sz) == 0) {
*poh = oh;
return (0);
}
if (errno != ENOMEM)
break;
}
free(oh);
return (errno);
}
/*
* Shows all entries from @oh in human-readable format
*/
static void
table_show_list(ipfw_obj_header *oh, int need_header)
{
ipfw_obj_tentry *tent;
uint32_t count;
ipfw_xtable_info *i;
i = (ipfw_xtable_info *)(oh + 1);
tent = (ipfw_obj_tentry *)(i + 1);
if (need_header)
printf("--- table(%s), set(%u) ---\n", i->tablename, i->set);
count = i->count;
while (count > 0) {
table_show_entry(i, tent);
tent = (ipfw_obj_tentry *)((caddr_t)tent + tent->head.length);
count--;
}
}
static void
table_show_value(char *buf, size_t bufsize, ipfw_table_value *v,
uint32_t vmask, int print_ip)
{
char abuf[INET6_ADDRSTRLEN + IF_NAMESIZE + 2];
struct sockaddr_in6 sa6;
uint32_t flag, i, l;
size_t sz;
struct in_addr a4;
sz = bufsize;
/*
* Some shorthands for printing values:
* legacy assumes all values are equal, so keep the first one.
*/
if (vmask == IPFW_VTYPE_LEGACY) {
if (print_ip != 0) {
flag = htonl(v->tag);
inet_ntop(AF_INET, &flag, buf, sz);
} else
snprintf(buf, sz, "%u", v->tag);
return;
}
for (i = 1; i < (1 << 31); i *= 2) {
if ((flag = (vmask & i)) == 0)
continue;
l = 0;
switch (flag) {
case IPFW_VTYPE_TAG:
l = snprintf(buf, sz, "%u,", v->tag);
break;
case IPFW_VTYPE_PIPE:
l = snprintf(buf, sz, "%u,", v->pipe);
break;
case IPFW_VTYPE_DIVERT:
l = snprintf(buf, sz, "%d,", v->divert);
break;
case IPFW_VTYPE_SKIPTO:
l = snprintf(buf, sz, "%d,", v->skipto);
break;
case IPFW_VTYPE_NETGRAPH:
l = snprintf(buf, sz, "%u,", v->netgraph);
break;
case IPFW_VTYPE_FIB:
l = snprintf(buf, sz, "%u,", v->fib);
break;
case IPFW_VTYPE_NAT:
l = snprintf(buf, sz, "%u,", v->nat);
break;
case IPFW_VTYPE_LIMIT:
l = snprintf(buf, sz, "%u,", v->limit);
break;
case IPFW_VTYPE_NH4:
a4.s_addr = htonl(v->nh4);
inet_ntop(AF_INET, &a4, abuf, sizeof(abuf));
l = snprintf(buf, sz, "%s,", abuf);
break;
case IPFW_VTYPE_DSCP:
l = snprintf(buf, sz, "%d,", v->dscp);
break;
case IPFW_VTYPE_NH6:
sa6.sin6_family = AF_INET6;
sa6.sin6_len = sizeof(sa6);
sa6.sin6_addr = v->nh6;
sa6.sin6_port = 0;
sa6.sin6_scope_id = v->zoneid;
if (getnameinfo((const struct sockaddr *)&sa6,
sa6.sin6_len, abuf, sizeof(abuf), NULL, 0,
NI_NUMERICHOST) == 0)
l = snprintf(buf, sz, "%s,", abuf);
break;
}
buf += l;
sz -= l;
}
if (sz != bufsize)
*(buf - 1) = '\0';
}
static void
table_show_entry(ipfw_xtable_info *i, ipfw_obj_tentry *tent)
{
char *comma, tbuf[128], pval[128];
void *paddr;
struct tflow_entry *tfe;
table_show_value(pval, sizeof(pval), &tent->v.value, i->vmask,
co.do_value_as_ip);
switch (i->type) {
case IPFW_TABLE_ADDR:
/* IPv4 or IPv6 prefixes */
inet_ntop(tent->subtype, &tent->k, tbuf, sizeof(tbuf));
printf("%s/%u %s\n", tbuf, tent->masklen, pval);
break;
case IPFW_TABLE_INTERFACE:
/* Interface names */
printf("%s %s\n", tent->k.iface, pval);
break;
case IPFW_TABLE_NUMBER:
/* numbers */
printf("%u %s\n", tent->k.key, pval);
break;
case IPFW_TABLE_FLOW:
/* flows */
tfe = &tent->k.flow;
comma = "";
if ((i->tflags & IPFW_TFFLAG_SRCIP) != 0) {
if (tfe->af == AF_INET)
paddr = &tfe->a.a4.sip;
else
paddr = &tfe->a.a6.sip6;
inet_ntop(tfe->af, paddr, tbuf, sizeof(tbuf));
printf("%s%s", comma, tbuf);
comma = ",";
}
if ((i->tflags & IPFW_TFFLAG_PROTO) != 0) {
printf("%s%d", comma, tfe->proto);
comma = ",";
}
if ((i->tflags & IPFW_TFFLAG_SRCPORT) != 0) {
printf("%s%d", comma, ntohs(tfe->sport));
comma = ",";
}
if ((i->tflags & IPFW_TFFLAG_DSTIP) != 0) {
if (tfe->af == AF_INET)
paddr = &tfe->a.a4.dip;
else
paddr = &tfe->a.a6.dip6;
inet_ntop(tfe->af, paddr, tbuf, sizeof(tbuf));
printf("%s%s", comma, tbuf);
comma = ",";
}
if ((i->tflags & IPFW_TFFLAG_DSTPORT) != 0) {
printf("%s%d", comma, ntohs(tfe->dport));
comma = ",";
}
printf(" %s\n", pval);
}
}
static int
table_do_get_stdlist(uint16_t opcode, ipfw_obj_lheader **polh)
{
ipfw_obj_lheader req, *olh;
size_t sz;
memset(&req, 0, sizeof(req));
sz = sizeof(req);
if (do_get3(opcode, &req.opheader, &sz) != 0)
if (errno != ENOMEM)
return (errno);
sz = req.size;
if ((olh = calloc(1, sz)) == NULL)
return (ENOMEM);
olh->size = sz;
if (do_get3(opcode, &olh->opheader, &sz) != 0) {
free(olh);
return (errno);
}
*polh = olh;
return (0);
}
static int
table_do_get_algolist(ipfw_obj_lheader **polh)
{
return (table_do_get_stdlist(IP_FW_TABLES_ALIST, polh));
}
static int
table_do_get_vlist(ipfw_obj_lheader **polh)
{
return (table_do_get_stdlist(IP_FW_TABLE_VLIST, polh));
}
void
ipfw_list_ta(int ac, char *av[])
{
ipfw_obj_lheader *olh;
ipfw_ta_info *info;
int error, i;
const char *atype;
error = table_do_get_algolist(&olh);
if (error != 0)
err(EX_OSERR, "Unable to request algorithm list");
info = (ipfw_ta_info *)(olh + 1);
for (i = 0; i < olh->count; i++) {
if ((atype = match_value(tabletypes, info->type)) == NULL)
atype = "unknown";
printf("--- %s ---\n", info->algoname);
printf(" type: %s\n refcount: %u\n", atype, info->refcnt);
info = (ipfw_ta_info *)((caddr_t)info + olh->objsize);
}
free(olh);
}
/* Copy of current kernel table_value structure */
struct _table_value {
uint32_t tag; /* O_TAG/O_TAGGED */
uint32_t pipe; /* O_PIPE/O_QUEUE */
uint16_t divert; /* O_DIVERT/O_TEE */
uint16_t skipto; /* skipto, CALLRET */
uint32_t netgraph; /* O_NETGRAPH/O_NGTEE */
uint32_t fib; /* O_SETFIB */
uint32_t nat; /* O_NAT */
uint32_t nh4;
uint8_t dscp;
uint8_t spare0;
uint16_t spare1;
/* -- 32 bytes -- */
struct in6_addr nh6;
uint32_t limit; /* O_LIMIT */
uint32_t zoneid;
uint64_t refcnt; /* Number of references */
};
int
compare_values(const void *_a, const void *_b)
{
struct _table_value *a, *b;
a = (struct _table_value *)_a;
b = (struct _table_value *)_b;
if (a->spare1 < b->spare1)
return (-1);
else if (a->spare1 > b->spare1)
return (1);
return (0);
}
void
ipfw_list_values(int ac, char *av[])
{
ipfw_obj_lheader *olh;
struct _table_value *v;
int error, i;
uint32_t vmask;
char buf[128];
error = table_do_get_vlist(&olh);
if (error != 0)
err(EX_OSERR, "Unable to request value list");
vmask = 0x7FFFFFFF; /* Similar to IPFW_VTYPE_LEGACY */
table_print_valheader(buf, sizeof(buf), vmask);
printf("HEADER: %s\n", buf);
v = (struct _table_value *)(olh + 1);
qsort(v, olh->count, olh->objsize, compare_values);
for (i = 0; i < olh->count; i++) {
table_show_value(buf, sizeof(buf), (ipfw_table_value *)v,
vmask, 0);
printf("[%u] refs=%lu %s\n", v->spare1, (u_long)v->refcnt, buf);
v = (struct _table_value *)((caddr_t)v + olh->objsize);
}
free(olh);
}
int
table_check_name(const char *tablename)
{
if (ipfw_check_object_name(tablename) != 0)
return (EINVAL);
/* Restrict some 'special' names */
if (strcmp(tablename, "all") == 0)
return (EINVAL);
return (0);
}
Index: projects/clang800-import/sbin/recoverdisk/recoverdisk.c
===================================================================
--- projects/clang800-import/sbin/recoverdisk/recoverdisk.c (revision 343955)
+++ projects/clang800-import/sbin/recoverdisk/recoverdisk.c (revision 343956)
@@ -1,324 +1,325 @@
/*-
* SPDX-License-Identifier: Beerware
*
* ----------------------------------------------------------------------------
* "THE BEER-WARE LICENSE" (Revision 42):
* <phk@FreeBSD.ORG> wrote this file. As long as you retain this notice you
* can do whatever you want with this stuff. If we meet some day, and you think
* this stuff is worth it, you can buy me a beer in return. Poul-Henning Kamp
* ----------------------------------------------------------------------------
*
* $FreeBSD$
*/
#include <sys/param.h>
#include <sys/queue.h>
#include <sys/disk.h>
#include <sys/stat.h>
#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
static volatile sig_atomic_t aborting = 0;
static size_t bigsize = 1024 * 1024;
static size_t medsize;
static size_t minsize = 512;
struct lump {
off_t start;
off_t len;
int state;
TAILQ_ENTRY(lump) list;
};
static TAILQ_HEAD(, lump) lumps = TAILQ_HEAD_INITIALIZER(lumps);
static void
new_lump(off_t start, off_t len, int state)
{
struct lump *lp;
lp = malloc(sizeof *lp);
if (lp == NULL)
err(1, "Malloc failed");
lp->start = start;
lp->len = len;
lp->state = state;
TAILQ_INSERT_TAIL(&lumps, lp, list);
}
static struct lump *lp;
static char *wworklist = NULL;
static char *rworklist = NULL;
#define PRINT_HEADER \
printf("%13s %7s %13s %5s %13s %13s %9s\n", \
"start", "size", "block-len", "state", "done", "remaining", "% done")
#define PRINT_STATUS(start, i, len, state, d, t) \
printf("\r%13jd %7zu %13jd %5d %13jd %13jd %9.5f", \
(intmax_t)start, \
i, \
(intmax_t)len, \
state, \
(intmax_t)d, \
(intmax_t)(t - d), \
100*(double)d/(double)t)
/* Save the worklist if -w was given */
static void
save_worklist(void)
{
FILE *file;
struct lump *llp;
if (wworklist != NULL) {
(void)fprintf(stderr, "\nSaving worklist ...");
fflush(stderr);
file = fopen(wworklist, "w");
if (file == NULL)
err(1, "Error opening file %s", wworklist);
TAILQ_FOREACH(llp, &lumps, list)
fprintf(file, "%jd %jd %d\n",
(intmax_t)llp->start, (intmax_t)llp->len,
llp->state);
fclose(file);
(void)fprintf(stderr, " done.\n");
}
}
/* Read the worklist if -r was given */
static off_t
read_worklist(off_t t)
{
off_t s, l, d;
int state, lines;
FILE *file;
(void)fprintf(stderr, "Reading worklist ...");
fflush(stderr);
file = fopen(rworklist, "r");
if (file == NULL)
err(1, "Error opening file %s", rworklist);
lines = 0;
d = t;
for (;;) {
++lines;
if (3 != fscanf(file, "%jd %jd %d\n", &s, &l, &state)) {
if (!feof(file))
err(1, "Error parsing file %s at line %d",
rworklist, lines);
else
break;
}
new_lump(s, l, state);
d -= l;
}
+ fclose(file);
(void)fprintf(stderr, " done.\n");
/*
* Return the number of bytes already read
* (at least not in worklist).
*/
return (d);
}
static void
usage(void)
{
(void)fprintf(stderr, "usage: recoverdisk [-b bigsize] [-r readlist] "
"[-s interval] [-w writelist] source [destination]\n");
exit(1);
}
static void
sighandler(__unused int sig)
{
aborting = 1;
}
int
main(int argc, char * const argv[])
{
int ch;
int fdr, fdw;
off_t t, d, start, len;
size_t i, j;
int error, state;
u_char *buf;
u_int sectorsize;
off_t stripesize;
time_t t1, t2;
struct stat sb;
u_int n, snapshot = 60;
while ((ch = getopt(argc, argv, "b:r:w:s:")) != -1) {
switch (ch) {
case 'b':
bigsize = strtoul(optarg, NULL, 0);
break;
case 'r':
rworklist = strdup(optarg);
if (rworklist == NULL)
err(1, "Cannot allocate enough memory");
break;
case 's':
snapshot = strtoul(optarg, NULL, 0);
break;
case 'w':
wworklist = strdup(optarg);
if (wworklist == NULL)
err(1, "Cannot allocate enough memory");
break;
default:
usage();
/* NOTREACHED */
}
}
argc -= optind;
argv += optind;
if (argc < 1 || argc > 2)
usage();
fdr = open(argv[0], O_RDONLY);
if (fdr < 0)
err(1, "Cannot open read descriptor %s", argv[0]);
error = fstat(fdr, &sb);
if (error < 0)
err(1, "fstat failed");
if (S_ISBLK(sb.st_mode) || S_ISCHR(sb.st_mode)) {
error = ioctl(fdr, DIOCGSECTORSIZE, &sectorsize);
if (error < 0)
err(1, "DIOCGSECTORSIZE failed");
error = ioctl(fdr, DIOCGSTRIPESIZE, &stripesize);
if (error == 0 && stripesize > sectorsize)
sectorsize = stripesize;
minsize = sectorsize;
bigsize = rounddown(bigsize, sectorsize);
error = ioctl(fdr, DIOCGMEDIASIZE, &t);
if (error < 0)
err(1, "DIOCGMEDIASIZE failed");
} else {
t = sb.st_size;
}
if (bigsize < minsize)
bigsize = minsize;
for (ch = 0; (bigsize >> ch) > minsize; ch++)
continue;
medsize = bigsize >> (ch / 2);
medsize = rounddown(medsize, minsize);
fprintf(stderr, "Bigsize = %zu, medsize = %zu, minsize = %zu\n",
bigsize, medsize, minsize);
buf = malloc(bigsize);
if (buf == NULL)
err(1, "Cannot allocate %zu bytes buffer", bigsize);
if (argc > 1) {
fdw = open(argv[1], O_WRONLY | O_CREAT, DEFFILEMODE);
if (fdw < 0)
err(1, "Cannot open write descriptor %s", argv[1]);
if (ftruncate(fdw, t) < 0)
err(1, "Cannot truncate output %s to %jd bytes",
argv[1], (intmax_t)t);
} else
fdw = -1;
if (rworklist != NULL) {
d = read_worklist(t);
} else {
new_lump(0, t, 0);
d = 0;
}
if (wworklist != NULL)
signal(SIGINT, sighandler);
t1 = 0;
start = len = i = state = 0;
PRINT_HEADER;
n = 0;
for (;;) {
lp = TAILQ_FIRST(&lumps);
if (lp == NULL)
break;
while (lp->len > 0 && !aborting) {
/* These are only copied for printing stats */
start = lp->start;
len = lp->len;
state = lp->state;
i = MIN(lp->len, (off_t)bigsize);
if (lp->state == 1)
i = MIN(lp->len, (off_t)medsize);
if (lp->state > 1)
i = MIN(lp->len, (off_t)minsize);
time(&t2);
if (t1 != t2 || lp->len < (off_t)bigsize) {
PRINT_STATUS(start, i, len, state, d, t);
t1 = t2;
if (++n == snapshot) {
save_worklist();
n = 0;
}
}
if (i == 0) {
errx(1, "BOGUS i %10jd", (intmax_t)i);
}
fflush(stdout);
j = pread(fdr, buf, i, lp->start);
if (j == i) {
d += i;
if (fdw >= 0)
j = pwrite(fdw, buf, i, lp->start);
else
j = i;
if (j != i)
printf("\nWrite error at %jd/%zu\n",
lp->start, i);
lp->start += i;
lp->len -= i;
continue;
}
printf("\n%jd %zu failed (%s)\n",
lp->start, i, strerror(errno));
if (errno == EINVAL) {
printf("read() size too big? Try with -b 131072\n");
aborting = 1;
}
if (errno == ENXIO)
aborting = 1;
new_lump(lp->start, i, lp->state + 1);
lp->start += i;
lp->len -= i;
}
if (aborting) {
save_worklist();
return (0);
}
TAILQ_REMOVE(&lumps, lp, list);
free(lp);
}
PRINT_STATUS(start, i, len, state, d, t);
save_worklist();
printf("\nCompleted\n");
return (0);
}
Index: projects/clang800-import/sbin/sysctl/sysctl.8
===================================================================
--- projects/clang800-import/sbin/sysctl/sysctl.8 (revision 343955)
+++ projects/clang800-import/sbin/sysctl/sysctl.8 (revision 343956)
@@ -1,328 +1,328 @@
.\" Copyright (c) 1993
.\" The Regents of the University of California. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\" may be used to endorse or promote products derived from this software
.\" without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" From: @(#)sysctl.8 8.1 (Berkeley) 6/6/93
.\" $FreeBSD$
.\"
-.Dd September 24, 2018
+.Dd February 8, 2019
.Dt SYSCTL 8
.Os
.Sh NAME
.Nm sysctl
.Nd get or set kernel state
.Sh SYNOPSIS
.Nm
-.Op Fl bdehiNnoRTtqx
+.Op Fl bdehiNnoTtqWx
.Op Fl B Ar bufsize
.Op Fl f Ar filename
.Ar name Ns Op = Ns Ar value Ns Op , Ns Ar value
.Ar ...
.Nm
-.Op Fl bdehNnoRTtqx
+.Op Fl bdehNnoTtqWx
.Op Fl B Ar bufsize
.Fl a
.Sh DESCRIPTION
The
.Nm
utility retrieves kernel state and allows processes with appropriate
privilege to set kernel state.
The state to be retrieved or set is described using a
.Dq Management Information Base
.Pq Dq MIB
style name, described as a dotted set of
components.
.Pp
The following options are available:
.Bl -tag -width indent
.It Fl A
Equivalent to
.Fl o a
(for compatibility).
.It Fl a
List all the currently available non-opaque values.
This option is ignored if one or more variable names are specified on
the command line.
.It Fl b
Force the value of the variable(s) to be output in raw, binary format.
No names are printed and no terminating newlines are output.
This is mostly useful with a single variable.
.It Fl B Ar bufsize
Set the buffer size to read from the
.Nm
to
.Ar bufsize .
This is necessary for a
.Nm
that has variable length, and the probe value of 0 is a valid length, such as
.Va kern.arandom .
.It Fl d
Print the description of the variable instead of its value.
.It Fl e
Separate the name and the value of the variable(s) with
.Ql = .
This is useful for producing output which can be fed back to the
.Nm
utility.
This option is ignored if either
.Fl N
or
.Fl n
is specified, or a variable is being set.
.It Fl f Ar filename
Specify a file which contains a pair of name and value in each line.
.Nm
reads and processes the specified file first and then processes the name
and value pairs in the command line argument.
.It Fl h
Format output for human, rather than machine, readability.
.It Fl i
Ignore unknown OIDs.
The purpose is to make use of
.Nm
for collecting data from a variety of machines (not all of which
are necessarily running exactly the same software) easier.
.It Fl N
Show only variable names, not their values.
This is particularly useful with shells that offer programmable
completion.
To enable completion of variable names in
.Xr zsh 1 Pq Pa ports/shells/zsh ,
use the following code:
.Bd -literal -offset indent
listsysctls () { set -A reply $(sysctl -AN ${1%.*}) }
compctl -K listsysctls sysctl
.Ed
.Pp
To enable completion of variable names in
.Xr tcsh 1 ,
use:
.Pp
.Dl "complete sysctl 'n/*/`sysctl -Na`/'"
.It Fl n
Show only variable values, not their names.
This option is useful for setting shell variables.
For instance, to save the pagesize in variable
.Va psize ,
use:
.Pp
.Dl "set psize=`sysctl -n hw.pagesize`"
.It Fl o
Show opaque variables (which are normally suppressed).
The format and length are printed, as well as a hex dump of the first
sixteen bytes of the value.
.It Fl q
Suppress some warnings generated by
.Nm
to standard error.
.It Fl T
Display only variables that are settable via loader (CTLFLAG_TUN).
.It Fl t
Print the type of the variable.
.It Fl W
Display only writable variables that are not statistical.
Useful for determining the set of runtime tunable sysctls.
.It Fl X
Equivalent to
.Fl x a
(for compatibility).
.It Fl x
As
.Fl o ,
but prints a hex dump of the entire value instead of just the first
few bytes.
.El
.Pp
The information available from
.Nm
consists of integers, strings, and opaque types.
The
.Nm
utility
only knows about a couple of opaque types, and will resort to hexdumps
for the rest.
The opaque information is much more useful if retrieved by special
purpose programs such as
.Xr ps 1 ,
.Xr systat 1 ,
and
.Xr netstat 1 .
.Pp
Some of the variables which cannot be modified during normal system
operation can be initialized via
.Xr loader 8
tunables.
This can for example be done by setting them in
.Xr loader.conf 5 .
Please refer to
.Xr loader.conf 5
for more information on which tunables are available and how to set them.
.Pp
The string and integer information is summarized below.
For a detailed description of these variables, see
.Xr sysctl 3 .
.Pp
The changeable column indicates whether a process with appropriate
privilege can change the value.
String and integer values can be set using
.Nm .
.Bl -column security.bsd.unprivileged_read_msgbuf integerxxx
.It Sy "Name Type Changeable"
.It "kern.ostype string no"
.It "kern.osrelease string no"
.It "kern.osrevision integer no"
.It "kern.version string no"
.It "kern.maxvnodes integer yes"
.It "kern.maxproc integer no"
.It "kern.maxprocperuid integer yes"
.It "kern.maxfiles integer yes"
.It "kern.maxfilesperproc integer yes"
.It "kern.argmax integer no"
.It "kern.securelevel integer raise only"
.It "kern.hostname string yes"
.It "kern.hostid integer yes"
.It "kern.clockrate struct no"
.It "kern.posix1version integer no"
.It "kern.ngroups integer no"
.It "kern.job_control integer no"
.It "kern.saved_ids integer no"
.It "kern.boottime struct no"
.It "kern.domainname string yes"
.It "kern.filedelay integer yes"
.It "kern.dirdelay integer yes"
.It "kern.metadelay integer yes"
.It "kern.osreldate integer no"
.It "kern.bootfile string yes"
.It "kern.corefile string yes"
.It "kern.logsigexit integer yes"
.It "security.bsd.suser_enabled integer yes"
.It "security.bsd.see_other_uids integer yes"
.It "security.bsd.unprivileged_proc_debug integer yes"
.It "security.bsd.unprivileged_read_msgbuf integer yes"
.It "vm.loadavg struct no"
.It "hw.machine string no"
.It "hw.model string no"
.It "hw.ncpu integer no"
.It "hw.byteorder integer no"
.It "hw.physmem integer no"
.It "hw.usermem integer no"
.It "hw.pagesize integer no"
.It "hw.floatingpoint integer no"
.It "hw.machine_arch string no"
.It "hw.realmem integer no"
.It "machdep.adjkerntz integer yes"
.It "machdep.disable_rtc_set integer yes"
.It "machdep.guessed_bootdev string no"
.It "user.cs_path string no"
.It "user.bc_base_max integer no"
.It "user.bc_dim_max integer no"
.It "user.bc_scale_max integer no"
.It "user.bc_string_max integer no"
.It "user.coll_weights_max integer no"
.It "user.expr_nest_max integer no"
.It "user.line_max integer no"
.It "user.re_dup_max integer no"
.It "user.posix2_version integer no"
.It "user.posix2_c_bind integer no"
.It "user.posix2_c_dev integer no"
.It "user.posix2_char_term integer no"
.It "user.posix2_fort_dev integer no"
.It "user.posix2_fort_run integer no"
.It "user.posix2_localedef integer no"
.It "user.posix2_sw_dev integer no"
.It "user.posix2_upe integer no"
.It "user.stream_max integer no"
.It "user.tzname_max integer no"
.El
.Sh FILES
.Bl -tag -width ".In netinet/icmp_var.h" -compact
.It In sys/sysctl.h
definitions for top level identifiers, second level kernel and hardware
identifiers, and user level identifiers
.It In sys/socket.h
definitions for second level network identifiers
.It In sys/gmon.h
definitions for third level profiling identifiers
.It In vm/vm_param.h
definitions for second level virtual memory identifiers
.It In netinet/in.h
definitions for third level Internet identifiers and
fourth level IP identifiers
.It In netinet/icmp_var.h
definitions for fourth level ICMP identifiers
.It In netinet/udp_var.h
definitions for fourth level UDP identifiers
.El
.Sh EXIT STATUS
.Ex -std
.Sh EXAMPLES
For example, to retrieve the maximum number of processes allowed
in the system, one would use the following request:
.Pp
.Dl "sysctl kern.maxproc"
.Pp
To set the maximum number of processes allowed
per uid to 1000, one would use the following request:
.Pp
.Dl "sysctl kern.maxprocperuid=1000"
.Pp
Information about the system clock rate may be obtained with:
.Pp
.Dl "sysctl kern.clockrate"
.Pp
Information about the load average history may be obtained with:
.Pp
.Dl "sysctl vm.loadavg"
.Pp
More variables than these exist, and the best and likely only place
to search for their deeper meaning is undoubtedly the source where
they are defined.
.Sh COMPATIBILITY
The
.Fl w
option has been deprecated and is silently ignored.
.Sh SEE ALSO
.Xr sysctl 3 ,
.Xr loader.conf 5 ,
.Xr sysctl.conf 5 ,
.Xr loader 8
.Sh HISTORY
A
.Nm
utility first appeared in
.Bx 4.4 .
.Pp
In
.Fx 2.2 ,
.Nm
was significantly remodeled.
.Sh BUGS
The
.Nm
utility presently exploits an undocumented interface to the kernel
sysctl facility to traverse the sysctl tree and to retrieve format
and name information.
A correct interface is still being considered.
Index: projects/clang800-import/share/man/man4/ng_iface.4
===================================================================
--- projects/clang800-import/share/man/man4/ng_iface.4 (revision 343955)
+++ projects/clang800-import/share/man/man4/ng_iface.4 (revision 343956)
@@ -1,160 +1,172 @@
.\" Copyright (c) 1996-1999 Whistle Communications, Inc.
.\" All rights reserved.
.\"
.\" Subject to the following obligations and disclaimer of warranty, use and
.\" redistribution of this software, in source or object code forms, with or
.\" without modifications are expressly permitted by Whistle Communications;
.\" provided, however, that:
.\" 1. Any and all reproductions of the source or object code must include the
.\" copyright notice above and the following disclaimer of warranties; and
.\" 2. No rights are granted, in any manner or form, to use Whistle
.\" Communications, Inc. trademarks, including the mark "WHISTLE
.\" COMMUNICATIONS" on advertising, endorsements, or otherwise except as
.\" such appears in the above copyright notice or in the software.
.\"
.\" THIS SOFTWARE IS BEING PROVIDED BY WHISTLE COMMUNICATIONS "AS IS", AND
.\" TO THE MAXIMUM EXTENT PERMITTED BY LAW, WHISTLE COMMUNICATIONS MAKES NO
.\" REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, REGARDING THIS SOFTWARE,
.\" INCLUDING WITHOUT LIMITATION, ANY AND ALL IMPLIED WARRANTIES OF
.\" MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
.\" WHISTLE COMMUNICATIONS DOES NOT WARRANT, GUARANTEE, OR MAKE ANY
.\" REPRESENTATIONS REGARDING THE USE OF, OR THE RESULTS OF THE USE OF THIS
.\" SOFTWARE IN TERMS OF ITS CORRECTNESS, ACCURACY, RELIABILITY OR OTHERWISE.
.\" IN NO EVENT SHALL WHISTLE COMMUNICATIONS BE LIABLE FOR ANY DAMAGES
.\" RESULTING FROM OR ARISING OUT OF ANY USE OF THIS SOFTWARE, INCLUDING
.\" WITHOUT LIMITATION, ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
.\" PUNITIVE, OR CONSEQUENTIAL DAMAGES, PROCUREMENT OF SUBSTITUTE GOODS OR
.\" SERVICES, LOSS OF USE, DATA OR PROFITS, HOWEVER CAUSED AND UNDER ANY
.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
.\" THIS SOFTWARE, EVEN IF WHISTLE COMMUNICATIONS IS ADVISED OF THE POSSIBILITY
.\" OF SUCH DAMAGE.
.\"
.\" Author: Archie Cobbs <archie@FreeBSD.org>
.\"
.\" $FreeBSD$
.\" $Whistle: ng_iface.8,v 1.5 1999/01/25 23:46:26 archie Exp $
.\"
-.Dd January 12, 2015
+.Dd February 6, 2019
.Dt NG_IFACE 4
.Os
.Sh NAME
.Nm ng_iface
.Nd interface netgraph node type
.Sh SYNOPSIS
.In netgraph/ng_iface.h
.Sh DESCRIPTION
An
.Nm iface
node is both a netgraph node and a system networking interface.
When an
.Nm iface
node is created, a new interface appears which is accessible via
.Xr ifconfig 8 .
.Nm Iface
node interfaces are named
.Dv ng0 ,
.Dv ng1 ,
etc.
When a node is shutdown, the corresponding interface is removed
and the interface name becomes available for reuse by future
.Nm iface
nodes; new nodes always take the first unused interface.
The node itself is assigned the same name as its interface, unless the name
already exists, in which case the node remains unnamed.
.Pp
An
.Nm iface
node has a single hook corresponding to each supported protocol.
Packets transmitted via the interface flow out the corresponding
protocol-specific hook.
Similarly, packets received on a hook appear on the interface as
packets received into the corresponding protocol stack.
The currently supported protocols are IP, IPv6, ATM, NATM, and NS.
.Pp
An
.Nm iface
node can be configured as a point-to-point interface or a broadcast interface.
The configuration can only be changed when the interface is down.
The default mode is point-to-point.
.Pp
.Nm Iface
nodes support the Berkeley Packet Filter (BPF).
.Sh HOOKS
This node type supports the following hooks:
.Bl -tag -width ".Va inet6"
.It Va inet
Transmission and reception of IP packets.
.It Va inet6
Transmission and reception of IPv6 packets.
.It Va atm
Transmission and reception of ATM packets.
.It Va natm
Transmission and reception of NATM packets.
.It Va ns
Transmission and reception of NS packets.
.El
.Sh CONTROL MESSAGES
This node type supports the generic control messages, plus the following:
.Bl -tag -width foo
.It Dv NGM_IFACE_GET_IFNAME Pq Ic getifname
Returns the name of the associated interface as a
.Dv NUL Ns -terminated
.Tn ASCII
string.
Normally this is the same as the name of the node.
.It Dv NGM_IFACE_GET_IFINDEX Pq Ic getifindex
Returns the global index of the associated interface as a 32 bit integer.
.It Dv NGM_IFACE_POINT2POINT Pq Ic point2point
Set the interface to point-to-point mode.
The interface must not currently be up.
.It Dv NGM_IFACE_BROADCAST Pq Ic broadcast
Set the interface to broadcast mode.
The interface must not currently be up.
.El
.Sh SHUTDOWN
This node shuts down upon receipt of a
.Dv NGM_SHUTDOWN
control message.
The associated interface is removed and becomes available
for use by future
.Nm iface
nodes.
.Pp
Unlike most other node types, an
.Nm iface
node does
.Em not
go away when all hooks have been disconnected; rather, an explicit
.Dv NGM_SHUTDOWN
control message is required.
.Sh ALTQ Support
The
.Nm
interface supports the ALTQ bandwidth management feature.
However,
.Nm
is a special case, since it is not a physical interface with limited bandwidth.
One should not turn ALTQ on
.Nm
if the latter corresponds to some tunneled connection, e.g.\& PPPoE or PPTP.
In this case, ALTQ should be configured on the interface that is used to
transmit the encapsulated packets.
If the graph instead ends up at some kind of serial line, either
synchronous or modem, then
.Nm
is the right place to enable ALTQ.
+.Sh Nesting
+.Nm
+supports nesting, a configuration in which the traffic of one
+.Nm
+interface flows through another.
+The default maximum allowed nesting level is 2.
+It can be changed at runtime by setting the
+.Xr sysctl 8
+variable
+.Va net.graph.iface.max_nesting
+to the desired level of nesting.
.Sh SEE ALSO
.Xr altq 4 ,
.Xr bpf 4 ,
.Xr netgraph 4 ,
.Xr ng_cisco 4 ,
.Xr ifconfig 8 ,
-.Xr ngctl 8
+.Xr ngctl 8 ,
+.Xr sysctl 8
.Sh HISTORY
The
.Nm iface
node type was implemented in
.Fx 4.0 .
.Sh AUTHORS
.An Archie Cobbs Aq Mt archie@FreeBSD.org
Index: projects/clang800-import/share/man/man9/bus_space.9
===================================================================
--- projects/clang800-import/share/man/man9/bus_space.9 (revision 343955)
+++ projects/clang800-import/share/man/man9/bus_space.9 (revision 343956)
@@ -1,1715 +1,1716 @@
.\" $NetBSD: bus_space.9,v 1.9 1999/03/06 22:09:29 mycroft Exp $
.\"
-.\" Copyright (c) 2005 M. Warner Losh. All Rights Reserved.
+.\" Copyright (c) 2005 M. Warner Losh.
+.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
.\" PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGE.
.\"
.\"
.\" Copyright (c) 1997 The NetBSD Foundation, Inc.
.\" All rights reserved.
.\"
.\" This code is derived from software contributed to The NetBSD Foundation
.\" by Christopher G. Demetriou.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
.\" PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd January 15, 2017
.Dt BUS_SPACE 9
.Os
.Sh NAME
.Nm bus_space ,
.Nm bus_space_alloc ,
.Nm bus_space_barrier ,
.Nm bus_space_copy_region_1 ,
.Nm bus_space_copy_region_2 ,
.Nm bus_space_copy_region_4 ,
.Nm bus_space_copy_region_8 ,
.Nm bus_space_copy_region_stream_1 ,
.Nm bus_space_copy_region_stream_2 ,
.Nm bus_space_copy_region_stream_4 ,
.Nm bus_space_copy_region_stream_8 ,
.Nm bus_space_free ,
.Nm bus_space_map ,
.Nm bus_space_read_1 ,
.Nm bus_space_read_2 ,
.Nm bus_space_read_4 ,
.Nm bus_space_read_8 ,
.Nm bus_space_read_multi_1 ,
.Nm bus_space_read_multi_2 ,
.Nm bus_space_read_multi_4 ,
.Nm bus_space_read_multi_8 ,
.Nm bus_space_read_multi_stream_1 ,
.Nm bus_space_read_multi_stream_2 ,
.Nm bus_space_read_multi_stream_4 ,
.Nm bus_space_read_multi_stream_8 ,
.Nm bus_space_read_region_1 ,
.Nm bus_space_read_region_2 ,
.Nm bus_space_read_region_4 ,
.Nm bus_space_read_region_8 ,
.Nm bus_space_read_region_stream_1 ,
.Nm bus_space_read_region_stream_2 ,
.Nm bus_space_read_region_stream_4 ,
.Nm bus_space_read_region_stream_8 ,
.Nm bus_space_read_stream_1 ,
.Nm bus_space_read_stream_2 ,
.Nm bus_space_read_stream_4 ,
.Nm bus_space_read_stream_8 ,
.Nm bus_space_set_multi_1 ,
.Nm bus_space_set_multi_2 ,
.Nm bus_space_set_multi_4 ,
.Nm bus_space_set_multi_8 ,
.Nm bus_space_set_multi_stream_1 ,
.Nm bus_space_set_multi_stream_2 ,
.Nm bus_space_set_multi_stream_4 ,
.Nm bus_space_set_multi_stream_8 ,
.Nm bus_space_set_region_1 ,
.Nm bus_space_set_region_2 ,
.Nm bus_space_set_region_4 ,
.Nm bus_space_set_region_8 ,
.Nm bus_space_set_region_stream_1 ,
.Nm bus_space_set_region_stream_2 ,
.Nm bus_space_set_region_stream_4 ,
.Nm bus_space_set_region_stream_8 ,
.Nm bus_space_subregion ,
.Nm bus_space_unmap ,
.Nm bus_space_write_1 ,
.Nm bus_space_write_2 ,
.Nm bus_space_write_4 ,
.Nm bus_space_write_8 ,
.Nm bus_space_write_multi_1 ,
.Nm bus_space_write_multi_2 ,
.Nm bus_space_write_multi_4 ,
.Nm bus_space_write_multi_8 ,
.Nm bus_space_write_multi_stream_1 ,
.Nm bus_space_write_multi_stream_2 ,
.Nm bus_space_write_multi_stream_4 ,
.Nm bus_space_write_multi_stream_8 ,
.Nm bus_space_write_region_1 ,
.Nm bus_space_write_region_2 ,
.Nm bus_space_write_region_4 ,
.Nm bus_space_write_region_8 ,
.Nm bus_space_write_region_stream_1 ,
.Nm bus_space_write_region_stream_2 ,
.Nm bus_space_write_region_stream_4 ,
.Nm bus_space_write_region_stream_8 ,
.Nm bus_space_write_stream_1 ,
.Nm bus_space_write_stream_2 ,
.Nm bus_space_write_stream_4 ,
.Nm bus_space_write_stream_8
.Nd "bus space manipulation functions"
.Sh SYNOPSIS
.In machine/bus.h
.Ft int
.Fo bus_space_map
.Fa "bus_space_tag_t space" "bus_addr_t address"
.Fa "bus_size_t size" "int flags" "bus_space_handle_t *handlep"
.Fc
.Ft void
.Fo bus_space_unmap
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t size"
.Fc
.Ft int
.Fo bus_space_subregion
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "bus_size_t size" "bus_space_handle_t *nhandlep"
.Fc
.Ft int
.Fo bus_space_alloc
.Fa "bus_space_tag_t space" "bus_addr_t reg_start" "bus_addr_t reg_end"
.Fa "bus_size_t size" "bus_size_t alignment" "bus_size_t boundary"
.Fa "int flags" "bus_addr_t *addrp" "bus_space_handle_t *handlep"
.Fc
.Ft void
.Fo bus_space_free
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t size"
.Fc
.Ft uint8_t
.Fo bus_space_read_1
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft uint16_t
.Fo bus_space_read_2
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft uint32_t
.Fo bus_space_read_4
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft uint64_t
.Fo bus_space_read_8
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft uint8_t
.Fo bus_space_read_stream_1
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft uint16_t
.Fo bus_space_read_stream_2
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft uint32_t
.Fo bus_space_read_stream_4
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft uint64_t
.Fo bus_space_read_stream_8
.Fa "bus_space_tag_t space" "bus_space_handle_t handle" "bus_size_t offset"
.Fc
.Ft void
.Fo bus_space_write_1
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint8_t value"
.Fc
.Ft void
.Fo bus_space_write_2
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint16_t value"
.Fc
.Ft void
.Fo bus_space_write_4
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint32_t value"
.Fc
.Ft void
.Fo bus_space_write_8
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint64_t value"
.Fc
.Ft void
.Fo bus_space_write_stream_1
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint8_t value"
.Fc
.Ft void
.Fo bus_space_write_stream_2
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint16_t value"
.Fc
.Ft void
.Fo bus_space_write_stream_4
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint32_t value"
.Fc
.Ft void
.Fo bus_space_write_stream_8
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "uint64_t value"
.Fc
.Ft void
.Fo bus_space_barrier
.Fa "bus_space_tag_t space" "bus_space_handle_t handle"
.Fa "bus_size_t offset" "bus_size_t length" "int flags"
.Fc
.Ft void
.Fo bus_space_read_region_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_region_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_region_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_region_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_region_stream_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_region_stream_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_region_stream_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_region_stream_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_stream_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_stream_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_stream_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_region_stream_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_stream_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_stream_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_stream_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_copy_region_stream_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t srchandle" "bus_size_t srcoffset"
.Fa "bus_space_handle_t dsthandle" "bus_size_t dstoffset" "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_stream_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_stream_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_stream_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_region_stream_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_stream_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_stream_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_stream_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_read_multi_stream_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_stream_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_stream_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_stream_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_write_multi_stream_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t *datap"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_stream_1
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint8_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_stream_2
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint16_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_stream_4
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint32_t value"
.Fa "bus_size_t count"
.Fc
.Ft void
.Fo bus_space_set_multi_stream_8
.Fa "bus_space_tag_t space"
.Fa "bus_space_handle_t handle" "bus_size_t offset" "uint64_t value"
.Fa "bus_size_t count"
.Fc
.Sh DESCRIPTION
The
.Nm
functions exist to give device drivers
machine-independent access to bus memory and register areas.
All of the
functions and types described in this document can be used by including
the
.In machine/bus.h
header file.
.Pp
Many common devices are used on multiple architectures, but are accessed
differently on each because of architectural constraints.
For instance, a device which is mapped in one system's I/O space may be
mapped in memory space on a second system.
On a third system, architectural
limitations might change the way registers need to be accessed (e.g.\&
creating a non-linear register space).
In some cases, a single
driver may need to access the same type of device in multiple ways in a
single system or architecture.
The goal of the
.Nm
functions is to allow a single driver source file to manipulate a set
of devices on different system architectures, and to allow a single driver
object file to manipulate a set of devices on multiple bus types on a
single architecture.
.Pp
Not all buses have to implement all functions described in this
document, though that is encouraged if the operations are logically
supported by the bus.
Unimplemented functions should cause
compile-time errors if possible.
.Pp
All of the interface definitions described in this document are shown as
function prototypes and discussed as if they were required to be
functions.
Implementations are encouraged to implement prototyped
(type-checked) versions of these interfaces, but may implement them as
macros if appropriate.
Machine-dependent types, variables, and functions
should be marked clearly in
.In machine/bus.h
to avoid confusion with the
machine-independent types and functions, and, if possible, should be
given names which make the machine-dependence clear.
.Sh CONCEPTS AND GUIDELINES
Bus spaces are described by bus space tags, which can be created only by
machine-dependent code.
A given machine may have several different types
of bus space (e.g.\& memory space and I/O space), and thus may provide
multiple different bus space tags.
Individual buses or devices on a machine may use more than one bus space
tag.
For instance, ISA devices are
given an ISA memory space tag and an ISA I/O space tag.
Architectures
may have several different tags which represent the same type of
space, for instance because of multiple different host bus interface
chipsets.
.Pp
A range in bus space is described by a bus address and a bus size.
The
bus address describes the start of the range in bus space.
The bus
size describes the size of the range in bytes.
Buses which are not byte
addressable may require use of bus space ranges with appropriately
aligned addresses and properly rounded sizes.
.Pp
Access to regions of bus space is facilitated by use of bus space handles,
which are usually created by mapping a specific range of a bus space.
Handles may also be created by allocating
and mapping a range of bus space, the actual location of which is picked
by the implementation within bounds specified by the caller of the
allocation function.
.Pp
All of the bus space access functions require one bus space tag
argument, at least one handle argument, and at least one offset argument
(a bus size).
The bus space tag specifies the space, each handle specifies a region in
the space, and each offset specifies the offset into the region of the
actual location(s) to be accessed.
Offsets are given in bytes, though buses
may impose alignment constraints.
The offset used to access data
relative to a given handle must be such that all of the data being
accessed is in the mapped region that the handle describes.
Trying to
access data outside that region is an error.
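The containment rule above is simple byte arithmetic.
The helper below is hypothetical (it is not part of the
.Nm
API); it models a mapped region purely by its size, with offsets
counted in bytes as described above.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper, not part of the bus_space API: returns non-zero
 * when every byte of an access of `width' bytes starting at `offset'
 * lies inside a mapped region of `region_size' bytes.
 */
static int
access_in_region(uint64_t region_size, uint64_t offset, uint64_t width)
{
	/* Phrased this way to avoid overflow in `offset + width'. */
	return (offset < region_size && width <= region_size - offset);
}
```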
.Pp
Because some architectures' memory systems use buffering to improve
memory and device access performance, there is a mechanism which can be
used to create
.Dq barriers
in the bus space read and write stream.
There
are three types of barriers: read, write, and read/write.
All reads
started to the region before a read barrier must complete before any reads
after the read barrier are started.
(The analogous requirement is true for
write barriers.)
Read/write barriers force all reads and writes started
before the barrier to complete before any reads or writes after the
barrier are started.
Correctly-written drivers will include all
appropriate barriers, and assume only the read/write ordering imposed by
the barrier operations.
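As an illustration only, the three barrier flavours correspond loosely
to C11 memory fences.
The flag names below are invented for this sketch; a real
.Fn bus_space_barrier
implementation is machine-dependent and may need stronger,
device-visible ordering than CPU fences provide.

```c
#include <assert.h>
#include <stdatomic.h>

/* Flag names invented for this sketch (the real API defines its own). */
enum { MODEL_BARRIER_READ = 1, MODEL_BARRIER_WRITE = 2 };

/*
 * Model a barrier with a C11 fence and return the memory order used:
 * read barrier -> acquire, write barrier -> release, both -> seq_cst.
 * At least one flag is assumed to be set.
 */
static memory_order
model_barrier(int flags)
{
	memory_order mo;

	if ((flags & (MODEL_BARRIER_READ | MODEL_BARRIER_WRITE)) ==
	    (MODEL_BARRIER_READ | MODEL_BARRIER_WRITE))
		mo = memory_order_seq_cst;
	else if (flags & MODEL_BARRIER_READ)
		mo = memory_order_acquire;
	else
		mo = memory_order_release;
	atomic_thread_fence(mo);
	return (mo);
}
```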
.Pp
People trying to write portable drivers with the
.Nm
functions should
try to make minimal assumptions about what the system allows.
In particular,
they should expect that the system requires bus space addresses being
accessed to be naturally aligned (i.e., base address of handle added to
offset is a multiple of the access size), and that the system does
alignment checking on pointers (i.e., pointer to objects being read and
written must point to properly-aligned data).
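The natural-alignment expectation can be checked arithmetically.
This helper is hypothetical and assumes the access size is a power of
two (1, 2, 4, or 8), matching the
.Fa _N
suffixes of the access functions.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper: non-zero when (base + offset) is a multiple of
 * `size'.  `size' must be a power of two, as for the 1/2/4/8-byte
 * bus_space access functions.
 */
static int
naturally_aligned(uint64_t base, uint64_t offset, unsigned size)
{
	return (((base + offset) & (uint64_t)(size - 1)) == 0);
}
```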
.Pp
The descriptions of the
.Nm
functions given below all assume that
they are called with proper arguments.
If called with invalid arguments
or arguments that are out of range (e.g.\& trying to access data outside of
the region mapped when a given handle was created), undefined behaviour
results.
In that case, they may cause the
system to halt, either intentionally (via panic) or unintentionally (by
causing a fatal trap or by some other means) or may cause improper
operation which is not immediately fatal.
Functions which return
.Ft void
or which return data read from bus space (i.e., functions which
do not obviously return an error code) do not fail.
They could only fail
if given invalid arguments, and in that case their behaviour is undefined.
Functions which take a count of bytes have undefined results if the specified
.Fa count
is zero.
.Sh TYPES
Several types are defined in
.In machine/bus.h
to facilitate use of the
.Nm
functions by drivers.
.Ss Vt bus_addr_t
The
.Vt bus_addr_t
type is used to describe bus addresses.
It must be an
unsigned integral type
capable of holding the largest bus address usable by the architecture.
This
type is primarily used when mapping and unmapping bus space.
.Ss Vt bus_size_t
The
.Vt bus_size_t
type is used to describe sizes of ranges in bus space.
It must be an
unsigned integral type capable of holding the size of the largest bus
address range usable on the architecture.
This type is used by virtually all
of the
.Nm
functions, describing sizes when mapping regions and
offsets into regions when performing space access operations.
.Ss Vt bus_space_tag_t
The
.Vt bus_space_tag_t
type is used to describe a particular bus space on a machine.
Its
contents are machine-dependent and should be considered opaque by
machine-independent code.
This type is used by all
.Nm
functions to name the space on which they are operating.
.Ss Vt bus_space_handle_t
The
.Vt bus_space_handle_t
type is used to describe a mapping of a range of bus space.
Its
contents are machine-dependent and should be considered opaque by
machine-independent code.
This type is used when performing bus space
access operations.
.Sh MAPPING AND UNMAPPING BUS SPACE
This section is specific to the
.Nx
version of these functions and may or may not apply to the
.Fx
version.
.Pp
Bus space must be mapped before it can be used, and should be
unmapped when it is no longer needed.
The
.Fn bus_space_map
and
.Fn bus_space_unmap
functions provide these capabilities.
.Pp
Some drivers need to be able to pass a subregion of already-mapped bus
space to another driver or module within a driver.
The
.Fn bus_space_subregion
function allows such subregions to be created.
.Ss Fn bus_space_map space address size flags handlep
The
.Fn bus_space_map
function maps the region of bus space named by the
.Fa space , address ,
and
.Fa size
arguments.
If successful, it returns zero
and fills in the bus space handle pointed to by
.Fa handlep
with the handle
that can be used to access the mapped region.
If unsuccessful,
it will return non-zero and leave the bus space handle pointed
to by
.Fa handlep
in an undefined state.
.Pp
The
.Fa flags
argument controls how the space is to be mapped.
Supported flags include:
.Bl -tag -width ".Dv BUS_SPACE_MAP_CACHEABLE"
.It Dv BUS_SPACE_MAP_CACHEABLE
Try to map the space so that accesses can be cached and/or
prefetched by the system.
If this flag is not specified, the
implementation should map the space so that it will not be cached or
prefetched.
.Pp
This flag must have a value of 1 on all implementations for backward
compatibility.
.It Dv BUS_SPACE_MAP_LINEAR
Try to map the space so that its contents can be accessed linearly via
normal memory access methods (e.g.\& pointer dereferencing and structure
accesses).
This is useful when software wants to do direct access to a memory
device, e.g.\& a frame buffer.
If this flag is specified and linear
mapping is not possible, the
.Fn bus_space_map
call should fail.
If this
flag is not specified, the system may map the space in whatever way is
most convenient.
.El
.Pp
Not all combinations of flags make sense or are supported with all
spaces.
For instance,
.Dv BUS_SPACE_MAP_CACHEABLE
may be meaningless when
used on many systems' I/O port spaces, and on some systems
.Dv BUS_SPACE_MAP_LINEAR
without
.Dv BUS_SPACE_MAP_CACHEABLE
may never work.
When the system hardware or firmware provides hints as to how spaces should be
mapped (e.g.\& the PCI memory mapping registers'
.Dq prefetchable
bit), those
hints should be followed for maximum compatibility.
On some systems,
requesting a mapping that cannot be satisfied (e.g.\& requesting a
non-cacheable mapping when the system can only provide a cacheable one)
will cause the request to fail.
.Pp
Some implementations may keep track of use of bus space for some or all
bus spaces and refuse to allow duplicate allocations.
This is encouraged
for bus spaces which have no notion of slot-specific space addressing,
such as ISA, and for spaces which coexist with those spaces
(e.g.\& PCI memory and I/O spaces co-existing with ISA memory and
I/O spaces).
.Pp
Mapped regions may contain areas for which there is no device on the
bus.
If space in those areas is accessed, the results are
bus-dependent.
.Ss Fn bus_space_unmap space handle size
The
.Fn bus_space_unmap
function unmaps a region of bus space mapped with
.Fn bus_space_map .
When unmapping a region, the
.Fa size
specified should be
the same as the size given to
.Fn bus_space_map
when mapping that region.
.Pp
After
.Fn bus_space_unmap
is called on a handle, that handle is no longer
valid.
(If copies were made of the handle they are no longer valid,
either.)
.Pp
This function will never fail.
If it would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case,
.Fn bus_space_unmap
will never return.
.Ss Fn bus_space_subregion space handle offset size nhandlep
The
.Fn bus_space_subregion
function is a convenience function which makes a
new handle to some subregion of an already-mapped region of bus space.
The subregion described by the new handle starts at byte offset
.Fa offset
into the region described by
.Fa handle ,
with the size given by
.Fa size ,
and must be wholly contained within the original region.
.Pp
If successful,
.Fn bus_space_subregion
returns zero and fills in the bus
space handle pointed to by
.Fa nhandlep .
If unsuccessful, it returns non-zero and leaves the bus space handle
pointed to by
.Fa nhandlep
in an
undefined state.
In either case, the handle described by
.Fa handle
remains valid and is unmodified.
.Pp
When done with a handle created by
.Fn bus_space_subregion ,
the handle should
be thrown away.
Under no circumstances should
.Fn bus_space_unmap
be used on the handle.
Doing so may confuse any resource management
being done on the space, and will result in undefined behaviour.
When
.Fn bus_space_unmap
or
.Fn bus_space_free
is called on a handle, all subregions of that handle become invalid.
.Sh ALLOCATING AND FREEING BUS SPACE
This section is specific to the
.Nx
version of these functions and may or may not apply to the
.Fx
version.
.Pp
Some devices require or allow bus space to be allocated by the operating
system for device use.
When the devices no longer need the space, the
operating system should free it for use by other devices.
The
.Fn bus_space_alloc
and
.Fn bus_space_free
functions provide these capabilities.
.Ss Fn bus_space_alloc space reg_start reg_end size alignment boundary \
flags addrp handlep
The
.Fn bus_space_alloc
function allocates and maps a region of bus space with the size given by
.Fa size ,
corresponding to the given constraints.
If successful, it returns
zero, fills in the bus address pointed to by
.Fa addrp
with the bus space address of the allocated region, and fills in
the bus space handle pointed to by
.Fa handlep
with the handle that can be used to access that region.
If unsuccessful, it returns non-zero and leaves the bus address pointed to by
.Fa addrp
and the bus space handle pointed to by
.Fa handlep
in an undefined state.
.Pp
Constraints on the allocation are given by the
.Fa reg_start , reg_end , alignment ,
and
.Fa boundary
parameters.
The allocated region will start at or after
.Fa reg_start
and end before or at
.Fa reg_end .
The
.Fa alignment
constraint must be a power of two, and the allocated region will start at
an address that is an even multiple of that power of two.
The
.Fa boundary
constraint, if non-zero, ensures that the region is allocated so that
.Fa "first address in region"
/
.Fa boundary
has the same value as
.Fa "last address in region"
/
.Fa boundary .
If the constraints cannot be met,
.Fn bus_space_alloc
will fail.
It is an error to specify a set of
constraints that can never be met
(for example,
.Fa size
greater than
.Fa boundary ) .
.Pp
The
.Fa flags
parameter is the same as the like-named parameter to
.Fn bus_space_map ,
the same flag values should be used, and they have the
same meanings.
.Pp
Handles created by
.Fn bus_space_alloc
should only be freed with
.Fn bus_space_free .
Trying to use
.Fn bus_space_unmap
on them causes undefined behaviour.
The
.Fn bus_space_subregion
function can be used on
handles created by
.Fn bus_space_alloc .
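.Pp
A hedged sketch of an allocation call (NetBSD interface; all numbers are invented for the example):

```c
/*
 * Sketch: allocate a 64-byte region, aligned to 8 bytes, within
 * bus addresses [0x1000, 0x1fff], not crossing a 256-byte
 * boundary.  t is the bus space tag; values are illustrative.
 */
bus_addr_t addr;
bus_space_handle_t h;

if (bus_space_alloc(t, 0x1000, 0x1fff, 64, 8, 256, 0,
    &addr, &h) != 0) {
        /* allocation failed; addr and h are undefined */
}
```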
.Ss Fn bus_space_free space handle size
The
.Fn bus_space_free
function unmaps and frees a region of bus space mapped
and allocated with
.Fn bus_space_alloc .
When unmapping a region, the
.Fa size
specified should be the same as the size given to
.Fn bus_space_alloc
when allocating the region.
.Pp
After
.Fn bus_space_free
is called on a handle, that handle is no longer valid.
(If copies were
made of the handle, they are no longer valid, either.)
.Pp
This function will never fail.
If it would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case,
.Fn bus_space_free
will never return.
.Sh READING AND WRITING SINGLE DATA ITEMS
The simplest way to access bus space is to read or write a single data
item.
The
.Fn bus_space_read_N
and
.Fn bus_space_write_N
families of functions provide
the ability to read and write 1, 2, 4, and 8 byte data items on buses
which support those access sizes.
.Ss Fn bus_space_read_1 space handle offset
.Ss Fn bus_space_read_2 space handle offset
.Ss Fn bus_space_read_4 space handle offset
.Ss Fn bus_space_read_8 space handle offset
The
.Fn bus_space_read_N
family of functions reads a 1, 2, 4, or 8 byte data item from
the offset specified by
.Fa offset
into the region specified by
.Fa handle
of the bus space specified by
.Fa space .
The location being read must lie within the bus space region specified by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of the data item being read.
On some systems, not obeying this requirement may cause incorrect data to
be read, on others it may cause a system crash.
.Pp
Read operations done by the
.Fn bus_space_read_N
functions may be executed out
of order with respect to other pending read and write operations unless
order is enforced by use of the
.Fn bus_space_barrier
function.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
.Ss Fn bus_space_write_1 space handle offset value
.Ss Fn bus_space_write_2 space handle offset value
.Ss Fn bus_space_write_4 space handle offset value
.Ss Fn bus_space_write_8 space handle offset value
The
.Fn bus_space_write_N
family of functions writes a 1, 2, 4, or 8 byte data item to the offset
specified by
.Fa offset
into the region specified by
.Fa handle
of the bus space specified by
.Fa space .
The location being written must lie within
the bus space region specified by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of the data item being
written.
On some systems, not obeying this requirement may cause
incorrect data to be written, on others it may cause a system crash.
.Pp
Write operations done by the
.Fn bus_space_write_N
functions may be executed
out of order with respect to other pending read and write operations
unless order is enforced by use of the
.Fn bus_space_barrier
function.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
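.Pp
A minimal sketch of single-item access (t and h are the tag and handle of a mapped region; the offsets, widths, and values are invented for the example):

```c
/*
 * Sketch: read a 2-byte status register at offset 0x04 and, if the
 * ready bit is set, write a 4-byte command register at offset 0x08.
 * Offsets, bit values, and register layout are illustrative.
 */
uint16_t status;

status = bus_space_read_2(t, h, 0x04);
if (status & 0x1)
        bus_space_write_4(t, h, 0x08, 0x1);
```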
.Sh BARRIERS
In order to allow high-performance buffering implementations to avoid bus
activity on every operation, read and write ordering should be specified
explicitly by drivers when necessary.
The
.Fn bus_space_barrier
function provides that ability.
.Ss Fn bus_space_barrier space handle offset length flags
The
.Fn bus_space_barrier
function enforces ordering of bus space read and write operations
for the specified subregion (described by the
.Fa offset
and
.Fa length
parameters) of the region named by
.Fa handle
in the space named by
.Fa space .
.Pp
The
.Fa flags
argument controls what types of operations are to be ordered.
Supported flags are:
.Bl -tag -width ".Dv BUS_SPACE_BARRIER_WRITE"
.It Dv BUS_SPACE_BARRIER_READ
Synchronize read operations.
.It Dv BUS_SPACE_BARRIER_WRITE
Synchronize write operations.
.El
.Pp
Those flags can be combined (or-ed together) to enforce ordering on both
read and write operations.
.Pp
All of the specified type(s) of operation which are done to the region
before the barrier operation are guaranteed to complete before any of the
specified type(s) of operation done after the barrier.
.Pp
Example: Consider a hypothetical device with two single-byte ports, one
write-only input port (at offset 0) and a read-only output port (at
offset 1).
Operation of the device is as follows: data bytes are written
to the input port, and are placed by the device on a stack, the top of
which is read by reading from the output port.
The sequence to correctly
write two data bytes to the device then read those two data bytes back
would be:
.Bd -literal
/*
 * t and h are the tag and handle for the mapped device's
 * space.
 */
bus_space_write_1(t, h, 0, data0);
bus_space_barrier(t, h, 0, 1, BUS_SPACE_BARRIER_WRITE); /* 1 */
bus_space_write_1(t, h, 0, data1);
bus_space_barrier(t, h, 0, 2,
    BUS_SPACE_BARRIER_READ|BUS_SPACE_BARRIER_WRITE); /* 2 */
ndata1 = bus_space_read_1(t, h, 1);
bus_space_barrier(t, h, 1, 1, BUS_SPACE_BARRIER_READ); /* 3 */
ndata0 = bus_space_read_1(t, h, 1);
/* data0 == ndata0, data1 == ndata1 */
.Ed
.Pp
The first barrier makes sure that the first write finishes before the
second write is issued, so that two writes to the input port are done
in order and are not collapsed into a single write.
This ensures that
the data bytes are written to the device correctly and in order.
.Pp
The second barrier makes sure that the writes to the input port finish
before any of the reads from the output port are issued, thereby making
sure that all of the writes are finished before data is read.
This ensures
that the first byte read from the device really is the last one that was
written.
.Pp
The third barrier makes sure that the first read finishes before the
second read is issued, ensuring that data is read correctly and in order.
.Pp
The barriers in the example above are specified to cover the absolute
minimum number of bus space locations.
It is correct (and often
easier) to make barrier operations cover the device's whole range of bus
space, that is, to specify an offset of zero and the size of the
whole region.
.Sh REGION OPERATIONS
Some devices use buffers which are mapped as regions in bus space.
Often, drivers want to copy the contents of those buffers to or from
memory, e.g.\& into mbufs which can be passed to higher levels of the
system or from mbufs to be output to a network.
In order to allow
drivers to do this as efficiently as possible, the
.Fn bus_space_read_region_N
and
.Fn bus_space_write_region_N
families of functions are provided.
.Pp
Drivers occasionally need to copy one region of a bus space to another,
or to set all locations in a region of bus space to contain a single
value.
The
.Fn bus_space_copy_region_N
family of functions and the
.Fn bus_space_set_region_N
family of functions allow drivers to perform these operations.
.Ss Fn bus_space_read_region_1 space handle offset datap count
.Ss Fn bus_space_read_region_2 space handle offset datap count
.Ss Fn bus_space_read_region_4 space handle offset datap count
.Ss Fn bus_space_read_region_8 space handle offset datap count
The
.Fn bus_space_read_region_N
family of functions reads
.Fa count
1, 2, 4, or 8 byte data items from bus space
starting at byte offset
.Fa offset
in the region specified by
.Fa handle
of the bus space specified by
.Fa space
and writes them into the array specified by
.Fa datap .
Each successive data item is read from an offset
1, 2, 4, or 8 bytes after the previous data item (depending on which
function is used).
All locations being read must lie within the bus
space region specified by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of data items being
read and the data array pointer should be properly aligned.
On some
systems, not obeying these requirements may cause incorrect data to be
read, on others it may cause a system crash.
.Pp
Read operations done by the
.Fn bus_space_read_region_N
functions may be executed in any order.
They may also be executed out
of order with respect to other pending read and write operations unless
order is enforced by use of the
.Fn bus_space_barrier
function.
There is no way to insert barriers between reads of
individual bus space locations executed by the
.Fn bus_space_read_region_N
functions.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
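.Pp
A hedged sketch of a region read (t and h as in the earlier example; the offset and buffer size are invented for the example):

```c
/*
 * Sketch: copy a 64-byte receive buffer out of device memory into
 * driver memory as 16 32-bit words.  The 0x100 offset is
 * illustrative and must lie within the mapped region.
 */
uint32_t buf[16];

bus_space_read_region_4(t, h, 0x100, buf, 16);
```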
.Ss Fn bus_space_write_region_1 space handle offset datap count
.Ss Fn bus_space_write_region_2 space handle offset datap count
.Ss Fn bus_space_write_region_4 space handle offset datap count
.Ss Fn bus_space_write_region_8 space handle offset datap count
The
.Fn bus_space_write_region_N
family of functions reads
.Fa count
1, 2, 4, or 8 byte data items from the array
specified by
.Fa datap
and writes them to bus space starting at byte offset
.Fa offset
in the region specified by
.Fa handle
of the bus space specified
by
.Fa space .
Each successive data item is written to an offset 1, 2, 4,
or 8 bytes after the previous data item (depending on which function is
used).
All locations being written must lie within the bus space region
specified by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of data items being
written and the data array pointer should be properly aligned.
On some
systems, not obeying these requirements may cause incorrect data to be
written, on others it may cause a system crash.
.Pp
Write operations done by the
.Fn bus_space_write_region_N
functions may be
executed in any order.
They may also be executed out of order with
respect to other pending read and write operations unless order is
enforced by use of the
.Fn bus_space_barrier
function.
There is no way to insert barriers between writes of
individual bus space locations executed by the
.Fn bus_space_write_region_N
functions.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
.Ss Fn bus_space_copy_region_1 space srchandle srcoffset dsthandle \
dstoffset count
.Ss Fn bus_space_copy_region_2 space srchandle srcoffset dsthandle \
dstoffset count
.Ss Fn bus_space_copy_region_4 space srchandle srcoffset dsthandle \
dstoffset count
.Ss Fn bus_space_copy_region_8 space srchandle srcoffset dsthandle \
dstoffset count
The
.Fn bus_space_copy_region_N
family of functions copies
.Fa count
1, 2, 4, or 8 byte data items in bus space
from the area starting at byte offset
.Fa srcoffset
in the region specified by
.Fa srchandle
of the bus space specified by
.Fa space
to the area starting at byte offset
.Fa dstoffset
in the region specified by
.Fa dsthandle
in the same bus space.
Each successive data item read or written has
an offset 1, 2, 4, or 8 bytes after the previous data item (depending
on which function is used).
All locations being read and written must
lie within the bus space region specified by their respective handles.
.Pp
For portability, the starting address of the region specified by
each handle plus its respective offset should be a multiple of the size
of the data items being copied.
On some systems, not obeying this
requirement may cause incorrect data to be copied, on others it may cause
a system crash.
.Pp
Read and write operations done by the
.Fn bus_space_copy_region_N
functions may be executed in any order.
They may also be executed out
of order with respect to other pending read and write operations unless
order is enforced by use of the
.Fn bus_space_barrier
function.
There is no way to insert barriers between reads or writes of
individual bus space locations executed by the
.Fn bus_space_copy_region_N
functions.
.Pp
Overlapping copies between different subregions of a single region
of bus space are handled correctly by the
.Fn bus_space_copy_region_N
functions.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
.Ss Fn bus_space_set_region_1 space handle offset value count
.Ss Fn bus_space_set_region_2 space handle offset value count
.Ss Fn bus_space_set_region_4 space handle offset value count
.Ss Fn bus_space_set_region_8 space handle offset value count
The
.Fn bus_space_set_region_N
family of functions writes the given
.Fa value
to
.Fa count
1, 2, 4, or 8 byte
data items in bus space starting at byte offset
.Fa offset
in the region specified by
.Fa handle
of the bus space specified by
.Fa space .
Each successive data item has an offset 1, 2, 4, or 8 bytes after the
previous data item (depending on which function is used).
All
locations being written must lie within the bus space region specified
by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of data items being
written.
On some systems, not obeying this requirement may cause
incorrect data to be written, on others it may cause a system crash.
.Pp
Write operations done by the
.Fn bus_space_set_region_N
functions may be
executed in any order.
They may also be executed out of order with
respect to other pending read and write operations unless order is
enforced by use of the
.Fn bus_space_barrier
function.
There is no way to insert barriers between writes of
individual bus space locations executed by the
.Fn bus_space_set_region_N
functions.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
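.Pp
A sketch combining the two operations (offsets and counts are invented for the example):

```c
/*
 * Sketch: clear eight 32-bit words at offset 0x100, then copy them
 * over the corresponding words at offset 0x200 within the same
 * mapped region.  Offsets are illustrative.
 */
bus_space_set_region_4(t, h, 0x100, 0, 8);
bus_space_copy_region_4(t, h, 0x100, h, 0x200, 8);
```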
.Sh READING AND WRITING A SINGLE LOCATION MULTIPLE TIMES
Some devices implement single locations in bus space which are to be read
or written multiple times to communicate data, e.g.\& some ethernet
devices' packet buffer FIFOs.
In order to allow drivers to manipulate
these types of devices as efficiently as possible, the
.Fn bus_space_read_multi_N ,
.Fn bus_space_set_multi_N ,
and
.Fn bus_space_write_multi_N
families of functions are provided.
.Ss Fn bus_space_read_multi_1 space handle offset datap count
.Ss Fn bus_space_read_multi_2 space handle offset datap count
.Ss Fn bus_space_read_multi_4 space handle offset datap count
.Ss Fn bus_space_read_multi_8 space handle offset datap count
The
.Fn bus_space_read_multi_N
family of functions reads
.Fa count
1, 2, 4, or 8 byte data items from bus space
at byte offset
.Fa offset
in the region specified by
.Fa handle
of the bus space specified by
.Fa space
and writes them into the array specified by
.Fa datap .
Each successive data item is read from the same location in bus
space.
The location being read must lie within the bus space region
specified by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of data items being
read and the data array pointer should be properly aligned.
On some
systems, not obeying these requirements may cause incorrect data to be
read, on others it may cause a system crash.
.Pp
Read operations done by the
.Fn bus_space_read_multi_N
functions may be
executed out of order with respect to other pending read and write
operations unless order is enforced by use of the
.Fn bus_space_barrier
function.
Because the
.Fn bus_space_read_multi_N
functions read the same bus space location multiple times, they
place an implicit read barrier between each successive read of that bus
space location.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
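.Pp
A hedged sketch of FIFO draining (FIFO_DATA and pktlen are hypothetical driver names, not part of this interface):

```c
/*
 * Sketch: drain a received packet from a one-byte-wide FIFO data
 * port.  FIFO_DATA and pktlen are illustrative; each read hits the
 * same bus space location, with an implicit read barrier between
 * successive reads.
 */
uint8_t pkt[1536];

bus_space_read_multi_1(t, h, FIFO_DATA, pkt, pktlen);
```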
.Ss Fn bus_space_write_multi_1 space handle offset datap count
.Ss Fn bus_space_write_multi_2 space handle offset datap count
.Ss Fn bus_space_write_multi_4 space handle offset datap count
.Ss Fn bus_space_write_multi_8 space handle offset datap count
The
.Fn bus_space_write_multi_N
family of functions reads
.Fa count
1, 2, 4, or 8 byte data items from the array
specified by
.Fa datap
and writes them into bus space at byte offset
.Fa offset
in the region specified by
.Fa handle
of the bus space specified by
.Fa space .
Each successive data item is written to the same location in
bus space.
The location being written must lie within the bus space
region specified by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of data items being
written and the data array pointer should be properly aligned.
On some
systems, not obeying these requirements may cause incorrect data to be
written, on others it may cause a system crash.
.Pp
Write operations done by the
.Fn bus_space_write_multi_N
functions may be executed out of order with respect to other pending
read and write operations unless order is enforced by use of the
.Fn bus_space_barrier
function.
Because the
.Fn bus_space_write_multi_N
functions write the same bus space location multiple times, they
place an implicit write barrier between each successive write of that
bus space location.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
.Ss Fn bus_space_set_multi_1 space handle offset value count
.Ss Fn bus_space_set_multi_2 space handle offset value count
.Ss Fn bus_space_set_multi_4 space handle offset value count
.Ss Fn bus_space_set_multi_8 space handle offset value count
The
.Fn bus_space_set_multi_N
family of functions writes
.Fa value
into bus space at byte offset
.Fa offset
in the region specified by
.Fa handle
of the bus space specified by
.Fa space ,
.Fa count
times.
The location being written must lie within the bus space
region specified by
.Fa handle .
.Pp
For portability, the starting address of the region specified by
.Fa handle
plus the offset should be a multiple of the size of the data item being
written.
On some systems, not obeying this requirement may cause
incorrect data to be written, on others it may cause a system crash.
.Pp
Write operations done by the
.Fn bus_space_set_multi_N
functions may be executed out of order with respect to other pending
read and write operations unless order is enforced by use of the
.Fn bus_space_barrier
function.
Because the
.Fn bus_space_set_multi_N
functions write the same bus space location multiple times, they
place an implicit write barrier between each successive write of that
bus space location.
.Pp
These functions will never fail.
If they would fail (e.g.\& because of an
argument error), that indicates a software bug which should cause a
panic.
In that case, they will never return.
.Sh STREAM FUNCTIONS
Most of the
.Nm
functions imply a host byte-order and a bus byte-order and take care of
any translation for the caller.
In some cases, however, hardware may map a FIFO or some other memory region
for which the caller may want to use multi-word, yet untranslated access.
Access to these types of memory regions should be with the
.Fn bus_space_*_stream_N
functions.
.Pp
.Bl -tag -compact -width Fn
.It Fn bus_space_read_stream_1
.It Fn bus_space_read_stream_2
.It Fn bus_space_read_stream_4
.It Fn bus_space_read_stream_8
.It Fn bus_space_read_multi_stream_1
.It Fn bus_space_read_multi_stream_2
.It Fn bus_space_read_multi_stream_4
.It Fn bus_space_read_multi_stream_8
.It Fn bus_space_read_region_stream_1
.It Fn bus_space_read_region_stream_2
.It Fn bus_space_read_region_stream_4
.It Fn bus_space_read_region_stream_8
.It Fn bus_space_write_stream_1
.It Fn bus_space_write_stream_2
.It Fn bus_space_write_stream_4
.It Fn bus_space_write_stream_8
.It Fn bus_space_write_multi_stream_1
.It Fn bus_space_write_multi_stream_2
.It Fn bus_space_write_multi_stream_4
.It Fn bus_space_write_multi_stream_8
.It Fn bus_space_write_region_stream_1
.It Fn bus_space_write_region_stream_2
.It Fn bus_space_write_region_stream_4
.It Fn bus_space_write_region_stream_8
.It Fn bus_space_copy_region_stream_1
.It Fn bus_space_copy_region_stream_2
.It Fn bus_space_copy_region_stream_4
.It Fn bus_space_copy_region_stream_8
.It Fn bus_space_set_multi_stream_1
.It Fn bus_space_set_multi_stream_2
.It Fn bus_space_set_multi_stream_4
.It Fn bus_space_set_multi_stream_8
.It Fn bus_space_set_region_stream_1
.It Fn bus_space_set_region_stream_2
.It Fn bus_space_set_region_stream_4
.It Fn bus_space_set_region_stream_8
.El
.Pp
These functions are defined just as their non-stream counterparts,
except that they provide no byte-order translation.
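.Pp
A minimal sketch of untranslated access (the offset is invented for the example):

```c
/*
 * Sketch: copy 32-bit words out of a device buffer without
 * byte-order translation, so the data keeps the device's own
 * (e.g. big-endian network) byte order.  Offset is illustrative.
 */
uint32_t buf[16];

bus_space_read_region_stream_4(t, h, 0x00, buf, 16);
```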
.Sh COMPATIBILITY
The current
.Nx
version of the
.Nm
interface specification differs slightly from the original
specification that came into wide use and
.Fx
adopted.
A few of the function names and arguments have changed
for consistency and increased functionality.
.Sh SEE ALSO
.Xr bus_dma 9
.Sh HISTORY
The
.Nm
functions were introduced in a different form (memory and I/O spaces
were accessed via different sets of functions) in
.Nx 1.2 .
The functions were merged to work on generic
.Dq spaces
early in the
.Nx 1.3
development cycle, and many drivers were converted to use them.
This document was written later during the
.Nx 1.3
development cycle, and the specification was updated to fix some
consistency problems and to add some missing functionality.
.Pp
The manual page was then adapted to the version of the interface that
.Fx
imported for the CAM SCSI drivers, plus subsequent evolution.
The
.Fx
.Nm
version was imported in
.Fx 3.0 .
.Sh AUTHORS
.An -nosplit
The
.Nm
interfaces were designed and implemented by the
.Nx
developer
community.
Primary contributors and implementors were
.An Chris Demetriou ,
.An Jason Thorpe ,
and
.An Charles Hannum ,
but the rest of the
.Nx
developers and the user community played a significant role in development.
.Pp
.An Justin Gibbs
ported these interfaces to
.Fx .
.Pp
.An Chris Demetriou
wrote this manual page.
.Pp
.An Warner Losh
modified it for the
.Fx
implementation.
.Sh BUGS
This manual may not completely and accurately document the interface,
and many parts of the interface are unspecified.
Index: projects/clang800-import/share/man/man9/config_intrhook.9
===================================================================
--- projects/clang800-import/share/man/man9/config_intrhook.9 (revision 343955)
+++ projects/clang800-import/share/man/man9/config_intrhook.9 (revision 343956)
@@ -1,120 +1,120 @@
.\"
-.\" Copyright (C) 2006 M. Warner Losh <imp@FreeBSD.org>. All rights reserved.
+.\" Copyright (C) 2006 M. Warner Losh <imp@FreeBSD.org>.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice(s), this list of conditions and the following disclaimer as
.\" the first lines of this file unmodified other than the possible
.\" addition of one or more copyright notices.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice(s), this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
.\" EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
.\" WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
.\" DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY
.\" DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
.\" (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
.\" SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
.\" CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
.\" DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd August 10, 2017
.Dt CONFIG_INTRHOOK 9
.Os
.Sh NAME
.Nm config_intrhook
.Nd schedule a function to be run after interrupts have been enabled,
but before root is mounted
.Sh SYNOPSIS
.In sys/kernel.h
.Vt typedef void (*ich_func_t)(void *arg);
.Ft int
.Fn config_intrhook_establish "struct intr_config_hook *hook"
.Ft void
.Fn config_intrhook_disestablish "struct intr_config_hook *hook"
.Ft void
.Fn config_intrhook_oneshot "ich_func_t func" "void *arg"
.Sh DESCRIPTION
The
.Fn config_intrhook_establish
function schedules a function to be run after interrupts have been
enabled, but before root is mounted.
If the system has already passed this point in its initialization,
the function is called immediately.
.Pp
The
.Fn config_intrhook_disestablish
function removes the entry from the hook queue.
.Pp
The
.Fn config_intrhook_oneshot
function schedules a function to be run as described for
.Fn config_intrhook_establish ;
the entry is automatically removed from the hook queue
after that function runs.
This is appropriate when additional device configuration must be done
after interrupts are enabled, but there is no need to stall the
boot process after that.
This function allocates memory using M_WAITOK; do not call this while
holding any non-sleepable locks.
.Pp
Before root is mounted, all the previously established hooks are
run.
The boot process is then stalled until all handlers remove their hook
from the hook queue with
.Fn config_intrhook_disestablish .
The boot process then proceeds to attempt to mount the root file
system.
Any driver that can potentially provide a device to be mounted as
root must either use this hook or probe all such devices during its
initial probe.
Since interrupts are disabled during the probe process, many drivers
need a method to probe for devices with interrupts enabled.
.Pp
The requests are made with the
.Vt intr_config_hook
structure.
This structure is defined as follows:
.Bd -literal
struct intr_config_hook {
        TAILQ_ENTRY(intr_config_hook) ich_links;  /* Private */
        ich_func_t  ich_func;                     /* function to call */
        void        *ich_arg;                     /* Argument to call */
.Ed
.Pp
Storage for the
.Vt intr_config_hook
structure must be provided by the driver.
It must be stable from just before the hook is established until
after the hook is disestablished.
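.Pp
A hedged sketch of the one-shot form, where no storage needs to be managed by the driver (struct mydev_softc, mydev_init, and the surrounding driver names are hypothetical):

```c
/*
 * Sketch: finish a driver's setup after interrupts are enabled,
 * using a one-shot hook.  struct mydev_softc, mydev_init(), and
 * the mydev_* names are hypothetical.
 */
static void
mydev_delayed_attach(void *arg)
{
        struct mydev_softc *sc = arg;

        mydev_init(sc);         /* interrupts are enabled here */
}

static int
mydev_attach(device_t dev)
{
        struct mydev_softc *sc = device_get_softc(dev);

        /* ... allocate resources ... */
        config_intrhook_oneshot(mydev_delayed_attach, sc);
        return (0);
}
```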
.Pp
Specifically, hooks are run at
.Dv SI_SUB_INT_CONFIG_HOOKS ,
which is immediately after the scheduler is started,
and just before the root file system device is discovered.
.Sh RETURN VALUES
A zero return value means the hook was successfully added to the queue
(with either deferred or immediate execution).
A non-zero return value means the hook could not be added to the queue
because it was already on the queue.
.Sh SEE ALSO
.Xr DEVICE_ATTACH 9
.Sh HISTORY
These functions were introduced in
.Fx 3.0
with the CAM subsystem, but are available for any driver to use.
.Sh AUTHORS
.An -nosplit
The functions were written by
.An Justin Gibbs Aq Mt gibbs@FreeBSD.org .
This manual page was written by
.An M. Warner Losh Aq Mt imp@FreeBSD.org .
Index: projects/clang800-import/share/man/man9/pwm.9
===================================================================
--- projects/clang800-import/share/man/man9/pwm.9 (revision 343955)
+++ projects/clang800-import/share/man/man9/pwm.9 (revision 343956)
@@ -1,93 +1,93 @@
.\" Copyright (c) 2018 Emmanuel Vadot <manu@freebsd.org>
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR
.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
.\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
.\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
.\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
.\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
.\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
-.Dd November 12, 2018
+.Dd January 12, 2019
.Dt PWM 9
.Os
.Sh NAME
.Nm pwm ,
.Nm PWM_GET_BUS ,
.Nm PWM_CHANNEL_CONFIG ,
.Nm PWM_CHANNEL_GET_CONFIG ,
.Nm PWM_CHANNEL_SET_FLAGS ,
.Nm PWM_CHANNEL_GET_FLAGS ,
.Nm PWM_CHANNEL_ENABLE ,
.Nm PWM_CHANNEL_IS_ENABLED ,
.Nm PWM_CHANNEL_MAX
.Nd PWM methods
.Sh SYNOPSIS
.Cd "device pwm"
.In "pwm_if.h"
.Ft device_t
.Fn PWM_GET_BUS "device_t dev"
.Ft int
.Fn PWM_CHANNEL_CONFIG "device_t dev" "int channel" "uint64_t period" "uint64_t duty"
.Ft int
.Fn PWM_CHANNEL_GET_CONFIG "device_t dev" "int channel" "uint64_t *period" "uint64_t *duty"
.Ft int
.Fn PWM_CHANNEL_SET_FLAGS "device_t dev" "int channel" "uint32_t flags"
.Ft int
.Fn PWM_CHANNEL_GET_FLAGS "device_t dev" "int channel" "uint32_t *flags"
.Ft int
.Fn PWM_CHANNEL_ENABLE "device_t dev" "int channel" "bool enable"
.Ft int
.Fn PWM_CHANNEL_IS_ENABLED "device_t dev" "int channel" "bool *enabled"
.Ft int
.Fn PWM_CHANNEL_MAX "device_t dev" "int channel" "int *nchannel"
.Sh DESCRIPTION
The PWM (Pulse-Width Modulation) interface allows a device driver to register with a
global bus so that other devices in the kernel can use it in a generic way.
.Sh INTERFACE
.Bl -tag -width indent
.It Fn PWM_GET_BUS "device_t dev"
Return the bus device.
.It Fn PWM_CHANNEL_CONFIG "device_t dev" "int channel" "uint64_t period" "uint64_t duty"
Configure the period and duty (in nanoseconds) in the PWM controller for the specified channel.
Returns 0 on success or
.Er EINVAL
if the values are not supported by the controller or
.Er EBUSY
if the PWM controller is in use and does not support changing the values on the fly.
.It Fn PWM_CHANNEL_GET_CONFIG "device_t dev" "int channel" "uint64_t *period" "uint64_t *duty"
Get the current configuration of the period and duty for the specified channel.
.It Fn PWM_CHANNEL_SET_FLAGS "device_t dev" "int channel" "uint32_t flags"
Set the flags of the channel (such as inverted polarity).
.It Fn PWM_CHANNEL_GET_FLAGS "device_t dev" "int channel" "uint32_t *flags"
Get the current flags for the channel.
.It Fn PWM_CHANNEL_ENABLE "device_t dev" "int channel" "bool enable"
Enable the PWM channel.
.It Fn PWM_CHANNEL_IS_ENABLED "device_t dev" "int channel" "bool *enabled"
Test whether the PWM channel is enabled.
-.It PWM_CHANNEL_MAX "device_t dev" "int channel" "int *nchannel"
+.It Fn PWM_CHANNEL_MAX "device_t dev" "int channel" "int *nchannel"
Get the maximum number of channels supported by the controller.
.El
.Sh HISTORY
The
.Nm pwm
interface first appeared in
.Fx 13.0 .
The
.Nm pwm
interface and manual page were written by
.An Emmanuel Vadot Aq Mt manu@FreeBSD.org .
Index: projects/clang800-import/share/misc/bsd-family-tree
===================================================================
--- projects/clang800-import/share/misc/bsd-family-tree (revision 343955)
+++ projects/clang800-import/share/misc/bsd-family-tree (revision 343956)
@@ -1,827 +1,828 @@
The UNIX system family tree: Research and BSD
---------------------------------------------
First Edition (V1)
|
Second Edition (V2)
|
Third Edition (V3)
|
Fourth Edition (V4)
|
Fifth Edition (V5)
|
Sixth Edition (V6) -----*
\ |
\ |
\ |
Seventh Edition (V7) |
\ |
\ 1BSD
32V |
\ 2BSD---------------*
\ / |
\ / |
\/ |
3BSD |
| |
4.0BSD 2.79BSD
| |
4.1BSD --------------> 2.8BSD
| |
4.1aBSD -----------\ |
| \ |
4.1bBSD \ |
| \ |
*------ 4.1cBSD --------------> 2.9BSD
/ | |
Eighth Edition | 2.9BSD-Seismo
| | |
+----<--- 4.2BSD 2.9.1BSD
| | |
+----<--- 4.3BSD -------------> 2.10BSD
| | / |
Ninth Edition | / 2.10.1BSD
| 4.3BSD Tahoe-----+ |
| | \ |
| | \ |
v | 2.11BSD
Tenth Edition | |
| 2.11BSD rev #430
4.3BSD NET/1 |
| v
4.3BSD Reno
|
*---------- 4.3BSD NET/2 -------------------+-------------*
| | | |
386BSD 0.0 | | BSD/386 ALPHA
| | | |
386BSD 0.1 ------------>+ | BSD/386 0.3.[13]
| \ | 4.4BSD Alpha |
| 386BSD 1.0 | | BSD/386 0.9.[34]
| | 4.4BSD |
| | / | |
| | 4.4BSD-Encumbered | |
| -NetBSD 0.8 | BSD/386 1.0
| / | | |
FreeBSD 1.0 <-----' NetBSD 0.9 | BSD/386 1.1
| | .----- 4.4BSD Lite |
FreeBSD 1.1 | / / | \ |
| | / / | \ |
FreeBSD 1.1.5 .---|--------' / | \ |
| / | / | \ |
FreeBSD 1.1.5.1 / | / | \ |
| / NetBSD 1.0 <-' | \ |
| / | | \ |
FreeBSD 2.0 <--' | | BSD/OS 2.0
| \ | |
FreeBSD 2.0.5 \ | BSD/OS 2.0.1
| .-----\------------- 4.4BSD Lite2 |
| | \ | | | | |
| | .-----|------Rhapsody | | | |
| | | | NetBSD 1.3 | | |
| | | | OpenBSD 2.3 | |
| | | | BSD/OS 3.0 |
FreeBSD 2.1 | | | |
| | | | NetBSD 1.1 ------. BSD/OS 2.1
| FreeBSD 2.1.5 | | | \ |
| | | | NetBSD 1.2 \ BSD/OS 3.0
| FreeBSD 2.1.6 | | | \ OpenBSD 2.0 |
| | | | | \ | |
| FreeBSD 2.1.6.1 | | | \ | |
| | | | | \ | |
| FreeBSD 2.1.7 | | | | | |
| | | | | NetBSD 1.2.1 | |
| FreeBSD 2.1.7.1 | | | | |
| | | | | |
| | | | | |
*-FreeBSD 2.2 | | | | |
| \ | | | | |
| FreeBSD 2.2.1 | | | | |
| | | | | | |
| FreeBSD 2.2.2 | | | OpenBSD 2.1 |
| | | | | | |
| FreeBSD 2.2.5 | | | | |
| | | | | OpenBSD 2.2 |
| | | | NetBSD 1.3 | |
| FreeBSD 2.2.6 | | | | | |
| | | | | NetBSD 1.3.1 | BSD/OS 3.1
| | | | | | OpenBSD 2.3 |
| | | | | NetBSD 1.3.2 | |
| FreeBSD 2.2.7 | | | | | |
| | | | | | | BSD/OS 4.0
| FreeBSD 2.2.8 | | | | | |
| | | | | | | |
| v | | | | OpenBSD 2.4 |
| FreeBSD 2.2.9 | | | | | |
| | | | | | |
FreeBSD 3.0 <--------* | | v | |
| | | NetBSD 1.3.3 | |
*---FreeBSD 3.1 | | | |
| | | | | BSD/OS 4.0.1
| FreeBSD 3.2----* | NetBSD 1.4 OpenBSD 2.5 |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| FreeBSD 3.3 | | | | NetBSD 1.4.1 | |
| | | | | | | OpenBSD 2.6 |
| FreeBSD 3.4 | | | | | | |
| | | | | | | | BSD/OS 4.1
FreeBSD 4.0 | | | | | NetBSD 1.4.2 | |
| | | | | | | | |
| | | | | | | | |
| FreeBSD 3.5 | | | | | OpenBSD 2.7 |
| | | | | | | | |
| FreeBSD 3.5.1 | | | | | | |
| | | | | | | |
*---FreeBSD 4.1 | | | | | | |
| | | | (?) | | | |
| FreeBSD 4.1.1 | | / | | | |
| | | | / | | | |
| FreeBSD 4.2 Darwin/ | NetBSD 1.4.3 | |
| | Mac OS X | OpenBSD 2.8 BSD/OS 4.2
| | | | | |
| | | | | |
| | 10.0 NetBSD 1.5 | |
| FreeBSD 4.3 | | | | |
| | | | | OpenBSD 2.9 |
| | | | NetBSD 1.5.1 | |
| | | | | | |
| FreeBSD 4.4-. | | NetBSD 1.5.2 | |
| | | Mac OS X | | | |
| | | 10.1 | | OpenBSD 3.0 |
| FreeBSD 4.5 | | | | | |
| | \ | | | | BSD/OS 4.3
| FreeBSD 4.6 \ | | | OpenBSD 3.1 |
| | \ | | NetBSD 1.5.3 | |
| FreeBSD 4.6.2 Mac OS X | | |
| | 10.2 | | |
| FreeBSD 4.7 | | | |
| | | NetBSD 1.6 OpenBSD 3.2 |
| FreeBSD 4.8 | | | | |
| | | | NetBSD 1.6.1 | |
| |--------. | | | OpenBSD 3.3 BSD/OS 5.0
| | \ | | | | |
| FreeBSD 4.9 | | | | OpenBSD 3.4 BSD/OS 5.1 ISE
| | | | | | |
| | | | | NetBSD 1.6.2 |
| | | | | | |
| | | | | | OpenBSD 3.5
| | | | | v |
| FreeBSD 4.10 | | | |
| | | | | |
| FreeBSD 4.11 | | | |
| | | | |
| `-|------|-----------------|---------------------.
| | | | \
FreeBSD 5.0 | | | |
| | | | |
FreeBSD 5.1 | | | DragonFly 1.0
| \ | | | |
| ----- Mac OS X | | |
| 10.3 | | |
FreeBSD 5.2 | | | |
| | | | | |
| FreeBSD 5.2.1 | | | |
| | | | |
*-------FreeBSD 5.3 | | | |
| | | | OpenBSD 3.6 |
| | | NetBSD 2.0 | |
| | | | | | | DragonFly 1.2.0
| | Mac OS X | | NetBSD 2.0.2 | |
| | 10.4 | | | | |
| FreeBSD 5.4 | | | | | |
| | | | | | OpenBSD 3.7 |
| | | | | NetBSD 2.0.3 | |
| | | | | | | |
*--FreeBSD | | | | v OpenBSD 3.8 |
| 6.0 | | | | | |
| | | | | \ | |
| | | | | NetBSD 2.1 | |
| | | | | | |
| | | | NetBSD 3.0 | |
| | | | | | | | DragonFly 1.4.0
| | | | | | | OpenBSD 3.9 |
| FreeBSD | | | | | | |
| 6.1 | | | | | | |
| | FreeBSD 5.5 | | | | | |
| | | | | NetBSD 3.0.1 | DragonFly 1.6.0
| | | | | | | |
| | | | | | OpenBSD 4.0 |
| | | | | NetBSD 3.0.2 | |
| | | | NetBSD 3.1 | |
| FreeBSD 6.2 | | | |
| | | | | DragonFly 1.8.0
| | | | OpenBSD 4.1 |
| | | | | DragonFly 1.10.0
| | Mac OS X | | |
| | 10.5 | | |
| | | | OpenBSD 4.2 |
| | | NetBSD 4.0 | |
| FreeBSD 6.3 | | | | |
| \ | | | | |
*--FreeBSD | | | | | DragonFly 1.12.0
| 7.0 | | | | | |
| | | | | | OpenBSD 4.3 |
| | | | | NetBSD | DragonFly 2.0.0
| | FreeBSD | | 4.0.1 OpenBSD 4.4 |
| | 6.4 | | | |
| | | | | |
| FreeBSD 7.1 | | | |
| | | | | DragonFly 2.2.0
| FreeBSD 7.2 | NetBSD 5.0 OpenBSD 4.5 |
| \ | | | \ | |
| | Mac OS X | | \ | |
| | 10.6 | | \ | |
| | | | | NetBSD | DragonFly 2.4.0
| | | | | 5.0.1 OpenBSD 4.6 |
| | | | | | | |
*--FreeBSD | | | | | | |
| 8.0 | | | | | | |
| | FreeBSD | | | NetBSD | |
| | 7.3 | | | 5.0.2 | DragonFly 2.6.0
| | | | | | OpenBSD 4.7 |
| FreeBSD | | | | | |
| 8.1 | | | | | |
| | | | | | | DragonFly 2.8.2
| | | | | | OpenBSD 4.8 |
| | | | | *--NetBSD | |
| FreeBSD FreeBSD | | | 5.1 | |
| 8.2 7.4 | | | | | DragonFly 2.10.1
| | | | | | OpenBSD 4.9 |
| `-----. Mac OS X | | | | |
| \ 10.7 | | | | |
| | | | | | OpenBSD 5.0 |
*--FreeBSD | | | | | | |
| 9.0 | | | | NetBSD | DragonFly 3.0.1
| | FreeBSD | | | 5.1.2 | |
| | 8.3 | | | | | |
| | | | | | NetBSD | |
| | | | | | 5.1.3 | |
| | | | | | | | |
| | | | | | NetBSD | |
| | | | | | 5.1.4 | |
| | | | | | OpenBSD 5.1 |
| | | Mac OS X | `----. | |
| | | 10.8 | \ | |
| | | | NetBSD 6.0 | | |
| | | | | | | | OpenBSD 5.2 DragonFly 3.2.1
| FreeBSD | | | | | NetBSD | |
| 9.1 | | | | | 5.2 | |
| | | | | | | | | |
| | | | | | | NetBSD | |
| | | | | | | 5.2.1 | |
| | | | | | | | | |
| | | | | | | NetBSD | |
| | | | | | | 5.2.2 | |
| | | | | | | | |
| | | | | | \ | |
| | | | | | NetBSD | |
| | | | | | 6.0.1 | |
| | | | | | | OpenBSD 5.3 DragonFly 3.4.1
| | | | | | NetBSD | |
| | | | | | 6.0.2 | |
| | | | | | | | |
| | | | | | NetBSD | |
| | | | | | 6.0.3 | |
| | | | | | | | |
| | | | | | NetBSD | |
| | | | | | 6.0.4 | |
| | | | | | | | |
| | | | | | NetBSD | |
| | | | | | 6.0.5 | |
| | | | | | | | |
| | | | | | NetBSD | |
| | | | | | 6.0.6 | |
| | | | | | | |
| | | | | |`-NetBSD 6.1 | |
| | FreeBSD | | | | |
| | 8.4 | | NetBSD 6.1.1 | |
| | | | | | |
| FreeBSD | | NetBSD 6.1.2 | |
| 9.2 Mac OS X | | | |
| | 10.9 | | OpenBSD 5.4 |
| `-----. | | | | DragonFly 3.6.0
| \ | | | | |
*--FreeBSD | | | NetBSD 6.1.3 | |
| 10.0 | | | | | |
| | | | | | | DragonFly 3.6.1
| | | | | | | |
| | | | | | | |
| | | | | | | DragonFly 3.6.2
| | | | | NetBSD 6.1.4 | |
| | | | | | | |
| | | | | | OpenBSD 5.5 |
| | | | | | | |
| | | | | | | DragonFly 3.8.0
| | | | | | | |
| | | | | | | |
| | | | | | | DragonFly 3.8.1
| | | | | | | |
| | | | | | | |
| | | | | | | DragonFly 3.6.3
| | | | | | | |
| | FreeBSD | | | | |
| | 9.3 | | | | |
| | | | NetBSD 6.1.5 | DragonFly 3.8.2
| | Mac OS X | | |
| | 10.10 | | |
| | | | OpenBSD 5.6 |
| FreeBSD | | | |
| 10.1 | | | DragonFly 4.0.1
| | | | | |
| | | | | DragonFly 4.0.2
| | | | | |
| | | | | DragonFly 4.0.3
| | | | | |
| | | | | DragonFly 4.0.4
| | | | | |
| | | | | DragonFly 4.0.5
| | | | | |
| | | | OpenBSD 5.7 |
| | | | | DragonFly 4.2.0
| FreeBSD | | | |
| 10.2 | | | |
| | macOS NetBSD 7.0 | |
| | 10.11 | | | OpenBSD 5.8 |
| | | | | `--. | DragonFly 4.4.1
| FreeBSD | | | | OpenBSD 5.9 |
| 10.3 | | | | | |
| | | | | NetBSD 7.0.1 | |
| `------. | | | | | DragonFly 4.6.0
| | | | | | | |
| | | | | | | |
*--FreeBSD | macOS | | | OpenBSD 6.0 |
| 11.0 | 10.12 | | NetBSD 7.0.2 | |
| | | | | | | |
| | | | | *- NetBSD 7.1 | |
| | | | | | | |
| | | | | | | |
| | | macOS | | | DragonFly 4.8.0
| | | 10.13 | | OpenBSD 6.1 |
| FreeBSD | | | | | DragonFly 5.0.0
| 11.1 FreeBSD | | | | |
| | 10.4 | | | OpenBSD 6.2 DragonFly 5.0.1
| | | | | | |
| `------. | | NetBSD 7.1.1 | DragonFly 5.0.2
| | | | | | |
| | | | NetBSD 7.1.2 | |
| | | | | | |
| | | | | OpenBSD 6.3 |
| | | NetBSD | | DragonFly 5.2.0
| | | 8.0 | | |
| | | | | | DragonFly 5.2.1
| | | | | | |
| | | | | | DragonFly 5.2.2
| FreeBSD | | NetBSD 7.2 | |
- | 11.2 | | | | |
- | | | | OpenBSD 6.4 |
+ | 11.2 macOS | | | |
+ | 10.14 | | OpenBSD 6.4 |
| | | | | DragonFly 5.4.0
*--FreeBSD | | v | |
| 12.0 | | | DragonFly 5.4.1
| | | | |
FreeBSD 13 -current | NetBSD -current OpenBSD -current DragonFly -current
| | | | |
v v v v v
Time
----------------
Time tolerance +/- 6 months, depending on which book/article you read and on
whether the date reflects the Usenet announcement or tape availability.
[44B] McKusick, Marshall Kirk, Keith Bostic, Michael J Karels,
and John Quarterman. The Design and Implementation of
the 4.4BSD Operating System.
[APL] Apple website [https://www.apple.com/macosx/]
[BSDI] Berkeley Software Design, Inc.
[DFB] DragonFlyBSD Project, The.
[DOC] README, COPYRIGHT on tape.
[FBD] FreeBSD Project, The.
[KB] Keith Bostic. BSD2.10 available from Usenix. comp.unix.sources,
Volume 11, Info 4, April, 1987.
[KKK] Mike Karels, Kirk McKusick, and Keith Bostic. tahoe announcement.
comp.bugs.4bsd.ucb-fixes, June 15, 1988.
[KSJ] Michael J. Karels, Carl F. Smith, and William F. Jolitz.
Changes in the Kernel in 2.9BSD. Second Berkeley Software
Distribution UNIX Version 2.9, July, 1983.
[NBD] NetBSD Project, The.
[OBD] OpenBSD Project, The.
[QCU] Salus, Peter H. A quarter century of UNIX.
[SMS] Steven M. Schultz. 2.11BSD, UNIX for the PDP-11.
[TUHS] The Unix Heritage Society. http://minnie.tuhs.org/Unix_History/.
[USE] Usenet announcement.
[WRS] Wind River Systems, Inc.
[dmr] Dennis Ritchie, via E-Mail
Multics 1965
UNIX Summer 1969
DEC PDP-7
First Edition 1971-11-03 [QCU]
DEC PDP-11/20, Assembler
Second Edition 1972-06-12 [QCU]
10 UNIX installations
Third Edition 1973-02-xx [QCU]
Pipes, 16 installations
Fourth Edition 1973-11-xx [QCU]
rewriting in C effected,
above 30 installations
Fifth Edition 1974-06-xx [QCU]
above 50 installations
Sixth Edition 1975-05-xx [QCU]
port to DEC Vax
Seventh Edition 1979-01-xx [QCU] 1979-01-10 [TUHS]
first portable UNIX
Eighth Edition 1985-02-xx [QCU]
VAX 11/750, VAX 11/780 [dmr]
descended from 4.1c BSD [dmr]
descended from 4.1 BSD [44B]
scooping-out and replacement of the character-device
and networking part by the streams mechanism
Ninth Edition 1986-09-xx [QCU]
Tenth Edition 1989-10-xx [QCU]
1BSD late 1977
1978-03-09 [QCU]
PDP-11, Pascal, ex(1)
30 free copies of 1BSD sent out
35 tapes sold for 50 USD [QCU]
2BSD mid 1978 [QCU] 1979-05-10 [TUHS]
75 2BSD tapes shipped
2.79BSD 1980-04-xx [TUHS]
2.8BSD 1981-07-xx [KSJ]
2.8.1BSD 1982-01-xx [QCU]
set of performance improvements
2.9BSD 1983-07-xx [KSJ]
2.9.1BSD 1983-11-xx [TUHS]
2.9BSD-Seismo 1985-08-xx [SMS]
2.10BSD 1987-04-xx [KKK]
2.10.1BSD 1989-01-xx [SMS]
2.11BSD 1992-02-xx [SMS]
2.11BSD rev #430 1999-12-13 [SMS]
32V 1978-1[01]-xx [QCU] 1979-03-26 [TUHS]
3BSD late 1979 [QCU] March 1980 [TUHS]
virtual memory, page replacement,
demand paging
4.0BSD 1980-10-xx
4.1BSD 1981-07-08 [DOC]
4.1aBSD 1982-04-xx
alpha release, 100 sites, networking [44B]
4.1bBSD internal release, fast filesystem [44B]
4.1cBSD late 1982
beta release, IPC [44B]
4.2BSD 1983-09-xx [QCU]
1983-08-03 [DOC]
4.3BSD 1986-06-xx [QCU]
1986-04-05 [KB], [DOC]
4.3BSD Tahoe 1988-06-15 [QCU], [DOC]
4.3BSD NET/1 1988-11-xx [QCU]
1989-01-01 [DOC]
4.3BSD Reno 1990-06-29 [QCU], [DOC]
4.3BSD NET/2 1991-06-28 [QCU], [DOC]
BSD/386 ALPHA 1991-12-xx [BSDI]
first code released to people outside BSDI
386BSD 0.0 1992-02-xx [DOC]
BSD/386 0.3.1 1992-04-xx [BSDI] first ext. beta; B customers
BSD/386 0.3.3 1992-06-xx [BSDI] first CDROM version
386BSD 0.1 1992-07-28 [DOC]
4.4BSD Alpha 1992-07-07
BSD/386 0.9.3 1992-10-xx [BSDI]
first external gamma; G customers
BSD/386 0.9.4 1992-12-xx [BSDI]
would have been 1.0 except for request
for preliminary injunction
BSD/386 1.0 1993-03-xx [BSDI]
injunction denied; first official release
NetBSD 0.8 1993-04-20 [NBD]
4.4BSD 1993-06-01 [USE]
NetBSD 0.9 1993-08-23 [NBD]
FreeBSD 1.0 1993-11-01 [FBD]
FreeBSD 1.0.2 1993-11-14 [FBD]
supersedes 1.0 13 days after release.
BSD/386 1.1 1994-02-xx [BSDI]
4.4BSD Lite 1994-03-01 [USE]
FreeBSD 1.1 1994-05-07 [FBD]
FreeBSD 1.1.5 1994-06-30 [FBD]
FreeBSD 1.1.5.1 1994-07-05 [FBD]
supersedes 1.1.5 5 days after release.
NetBSD 1.0 1994-10-26 [NBD]
386BSD 1.0 1994-11-12 [USE]
FreeBSD 2.0 1994-11-23 [FBD]
BSD/OS 2.0 1995-01-xx [BSDI] 4.4 lite based
FreeBSD 2.0.5 1995-06-10 [FBD]
BSD/OS 2.0.1 1995-06-xx [BSDI]
4.4BSD Lite Release 2 1995-06-xx [44B]
the true final distribution from the CSRG
FreeBSD 2.1.0 1995-11-19 [FBD]
NetBSD 1.1 1995-11-26 [NBD]
BSD/OS 2.1 1996-01-xx [BSDI]
FreeBSD 2.1.5 1996-07-14 [FBD]
NetBSD 1.2 1996-10-04 [NBD]
OpenBSD 2.0 1996-10-18 [OBD]
FreeBSD 2.1.6 1996-11-16 [FBD]
FreeBSD 2.1.6.1 1996-11-25 [FBD] (sendmail security release)
Rhapsody 1997-xx-xx
FreeBSD 2.1.7 1997-02-20 [FBD]
BSD/OS 3.0 1997-02-xx [BSDI] 4.4 lite2 based
FreeBSD 2.2.0 1997-03-16 [FBD]
FreeBSD 2.2.1 1997-03-25 [FBD]
FreeBSD 2.2.2 1997-05-16 [FBD]
NetBSD 1.2.1 1997-05-20 [NBD] (patch release)
OpenBSD 2.1 1997-06-01 [OBD]
FreeBSD 2.2.5 1997-10-22 [FBD]
OpenBSD 2.2 1997-12-01 [OBD]
NetBSD 1.3 1998-01-04 [NBD]
FreeBSD 2.2.6 1998-03-25 [FBD]
NetBSD 1.3.1 1998-03-09 [NBD] (patch release)
BSD/OS 3.1 1998-03-xx [BSDI]
OpenBSD 2.3 1998-05-19 [OBD]
NetBSD 1.3.2 1998-05-29 [NBD] (patch release)
FreeBSD 2.2.7 1998-07-22 [FBD]
BSD/OS 4.0 1998-08-xx [BSDI]
2-lock MP support, ELF executables
FreeBSD 3.0 1998-10-16 [FBD]
FreeBSD-3.0 is a snapshot from -current,
while 3.1 and 3.2 are from 3.x-stable which
was branched quite some time after 3.0-release
FreeBSD 2.2.8 1998-11-29 [FBD]
OpenBSD 2.4 1998-12-01 [OBD]
NetBSD 1.3.3 1998-12-23 [NBD] (patch release)
FreeBSD 3.1 1999-02-15 [FBD]
BSD/OS 4.0.1 1999-03-xx [BSDI]
NetBSD 1.4 1999-05-12 [NBD]
FreeBSD 3.2 1999-05-17 [FBD]
OpenBSD 2.5 1999-05-19 [OBD]
NetBSD 1.4.1 1999-08-26 [NBD] (patch release)
FreeBSD 3.3 1999-09-17 [FBD]
OpenBSD 2.6 1999-12-01 [OBD]
FreeBSD 3.4 1999-12-20 [FBD]
BSD/OS 4.1 1999-12-xx [BSDI]
FreeBSD 4.0 2000-03-13 [FBD]
NetBSD 1.4.2 2000-03-19 [NBD] (patch release)
OpenBSD 2.7 2000-06-15 [OBD]
FreeBSD 3.5 2000-06-24 [FBD]
FreeBSD 4.1 2000-07-27 [FBD]
FreeBSD 3.5.1 2000-07-28 [FBD]
FreeBSD 4.1.1 2000-09-25 [FBD] (a network-only patch release)
FreeBSD 4.2 2000-11-21 [FBD]
NetBSD 1.4.3 2000-11-25 [NBD] (patch release)
BSD/OS 4.2 2000-11-29 [BSDI]
OpenBSD 2.8 2000-12-01 [OBD]
NetBSD 1.5 2000-12-06 [NBD]
Mac OS X 10.0 2001-03-24 [APL]
FreeBSD 4.3 2001-04-20 [FBD]
OpenBSD 2.9 2001-06-01 [OBD]
NetBSD 1.5.1 2001-07-11 [NBD] (patch release)
NetBSD 1.5.2 2001-09-13 [NBD] (patch release)
FreeBSD 4.4 2001-09-18 [FBD]
Mac OS X 10.1 2001-09-29 [APL]
OpenBSD 3.0 2001-12-01 [OBD]
FreeBSD 4.5 2002-01-29 [FBD]
BSD/OS 4.3 2002-03-14 [WRS]
OpenBSD 3.1 2002-05-19 [OBD]
FreeBSD 4.6 2002-06-15 [FBD]
NetBSD 1.5.3 2002-07-22 [NBD] (patch release)
FreeBSD 4.6.2 2002-08-15 [FBD] (patch release)
Mac OS X 10.2 2002-08-23 [APL]
NetBSD 1.6 2002-09-14 [NBD]
FreeBSD 4.7 2002-10-08 [FBD]
OpenBSD 3.2 2002-11-01 [OBD]
FreeBSD 5.0 2003-01-17 [FBD]
FreeBSD 5.0 is a separate branch off of
-current, similar to 3.0.
FreeBSD 4.8 2003-04-03 [FBD]
NetBSD 1.6.1 2003-04-21 [NBD] (patch release)
OpenBSD 3.3 2003-05-01 [OBD]
BSD/OS 5.0 2003-05-?? [WRS]
FreeBSD 5.1 2003-06-09 [FBD]
Mac OS X 10.3 2003-10-24 [APL]
FreeBSD 4.9 2003-10-28 [FBD]
BSD/OS 5.1 ISE 2003-10-?? [WRS] (final version)
OpenBSD 3.4 2003-11-01 [OBD]
FreeBSD 5.2 2004-01-12 [FBD]
FreeBSD 5.2.1 2004-02-22 [FBD] (patch release)
NetBSD 1.6.2 2004-03-01 [NBD] (patch release)
OpenBSD 3.5 2004-04-01 [OBD]
FreeBSD 4.10 2004-05-27 [FBD]
DragonFly 1.0 2004-07-12 [DFB]
OpenBSD 3.6 2004-10-29 [OBD]
FreeBSD 5.3 2004-11-06 [FBD]
NetBSD 2.0 2004-12-09 [NBD]
FreeBSD 4.11 2005-01-25 [FBD]
DragonFly 1.2.0 2005-04-08 [DFB]
NetBSD 2.0.2 2005-04-14 [NBD] (security/critical release)
Mac OS X 10.4 2005-04-29 [APL]
FreeBSD 5.4 2005-05-09 [FBD]
OpenBSD 3.7 2005-05-19 [OBD]
NetBSD 2.0.3 2005-10-31 [NBD] (security/critical release)
OpenBSD 3.8 2005-11-01 [OBD]
FreeBSD 6.0 2005-11-01 [FBD]
NetBSD 2.1 2005-11-02 [NBD]
NetBSD 3.0 2005-12-23 [NBD]
DragonFly 1.4.0 2006-01-08 [DFB]
FreeBSD 2.2.9 2006-04-01 [FBD]
OpenBSD 3.9 2006-05-01 [OBD]
FreeBSD 6.1 2006-05-08 [FBD]
FreeBSD 5.5 2006-05-25 [FBD]
NetBSD 3.0.1 2006-07-24 [NBD] (security/critical release)
DragonFly 1.6.0 2006-07-24 [DFB]
OpenBSD 4.0 2006-11-01 [OBD]
NetBSD 3.0.2 2006-11-04 [NBD] (security/critical release)
NetBSD 3.1 2006-11-04 [NBD]
FreeBSD 6.2 2007-01-15 [FBD]
DragonFly 1.8.0 2007-01-30 [DFB]
OpenBSD 4.1 2007-05-01 [OBD]
DragonFly 1.10.0 2007-08-06 [DFB]
Mac OS X 10.5 2007-10-26 [APL]
OpenBSD 4.2 2007-11-01 [OBD]
NetBSD 4.0 2007-12-19 [NBD]
FreeBSD 6.3 2008-01-18 [FBD]
DragonFly 1.12.0 2008-02-26 [DFB]
FreeBSD 7.0 2008-02-27 [FBD]
OpenBSD 4.3 2008-05-01 [OBD]
DragonFly 2.0.0 2008-07-21 [DFB]
OpenBSD 4.4 2008-11-01 [OBD]
FreeBSD 6.4 2008-11-28 [FBD]
FreeBSD 7.1 2009-01-04 [FBD]
DragonFly 2.2.0 2009-02-17 [DFB]
NetBSD 5.0 2009-04-29 [NBD]
OpenBSD 4.5 2009-05-01 [OBD]
FreeBSD 7.2 2009-05-04 [FBD]
Mac OS X 10.6 2009-06-08 [APL]
NetBSD 5.0.1 2009-08-02 [NBD] (security/critical release)
DragonFly 2.4.0 2009-09-16 [DFB]
OpenBSD 4.6 2009-10-18 [OBD]
FreeBSD 8.0 2009-11-26 [FBD]
NetBSD 5.0.2 2010-02-12 [NBD] (security/critical release)
FreeBSD 7.3 2010-03-23 [FBD]
DragonFly 2.6.0 2010-03-28 [DFB]
OpenBSD 4.7 2010-05-19 [OBD]
FreeBSD 8.1 2010-07-24 [FBD]
DragonFly 2.8.2 2010-10-30 [DFB]
OpenBSD 4.8 2010-11-01 [OBD]
NetBSD 5.1 2010-11-19 [NBD]
FreeBSD 7.4 2011-02-24 [FBD]
FreeBSD 8.2 2011-02-24 [FBD]
DragonFly 2.10.1 2011-04-26 [DFB]
OpenBSD 4.9 2011-05-01 [OBD]
Mac OS X 10.7 2011-07-20 [APL]
OpenBSD 5.0 2011-11-01 [OBD]
FreeBSD 9.0 2012-01-12 [FBD]
NetBSD 5.1.2 2012-02-02 [NBD] (security/critical release)
DragonFly 3.0.1 2012-02-21 [DFB]
FreeBSD 8.3 2012-04-18 [FBD]
OpenBSD 5.1 2012-05-01 [OBD]
Mac OS X 10.8 2012-07-25 [APL]
NetBSD 6.0 2012-10-17 [NBD]
OpenBSD 5.2 2012-11-01 [OBD]
DragonFly 3.2.1 2012-11-02 [DFB]
NetBSD 5.2 2012-12-03 [NBD]
NetBSD 6.0.1 2012-12-26 [NBD] (security/critical release)
FreeBSD 9.1 2012-12-30 [FBD]
DragonFly 3.4.1 2013-04-29 [DFB]
OpenBSD 5.3 2013-05-01 [OBD]
NetBSD 6.0.2 2013-05-18 [NBD] (security/critical release)
NetBSD 6.1 2013-05-18 [NBD]
FreeBSD 8.4 2013-06-07 [FBD]
NetBSD 6.1.1 2013-08-22 [NBD]
NetBSD 5.1.3 2013-09-29 [NBD]
NetBSD 5.2.1 2013-09-29 [NBD]
FreeBSD 9.2 2013-09-30 [FBD]
NetBSD 6.0.3 2013-09-30 [NBD]
NetBSD 6.1.2 2013-09-30 [NBD]
Mac OS X 10.9 2013-10-22 [APL]
OpenBSD 5.4 2013-11-01 [OBD]
DragonFly 3.6.0 2013-11-25 [DFB]
FreeBSD 10.0 2014-01-20 [FBD]
NetBSD 5.1.4 2014-01-25 [NBD]
NetBSD 5.2.2 2014-01-25 [NBD]
NetBSD 6.0.4 2014-01-25 [NBD]
NetBSD 6.1.3 2014-01-25 [NBD]
DragonFly 3.6.1 2014-02-22 [DFB]
DragonFly 3.6.2 2014-04-10 [DFB]
NetBSD 6.0.5 2014-04-12 [NBD]
NetBSD 6.1.4 2014-04-12 [NBD]
OpenBSD 5.5 2014-05-01 [OBD]
DragonFly 3.8.0 2014-06-04 [DFB]
DragonFly 3.8.1 2014-06-16 [DFB]
DragonFly 3.6.3 2014-06-17 [DFB]
FreeBSD 9.3 2014-07-05 [FBD]
DragonFly 3.8.2 2014-08-08 [DFB]
NetBSD 6.0.6 2014-09-22 [NBD]
NetBSD 6.1.5 2014-09-22 [NBD]
Mac OS X 10.10 2014-10-16 [APL]
OpenBSD 5.6 2014-11-01 [OBD]
FreeBSD 10.1 2014-11-14 [FBD]
DragonFly 4.0.1 2014-11-25 [DFB]
DragonFly 4.0.2 2015-01-07 [DFB]
DragonFly 4.0.3 2015-01-21 [DFB]
DragonFly 4.0.4 2015-03-09 [DFB]
DragonFly 4.0.5 2015-03-23 [DFB]
OpenBSD 5.7 2015-05-01 [OBD]
DragonFly 4.2.0 2015-06-29 [DFB]
FreeBSD 10.2 2015-08-13 [FBD]
NetBSD 7.0 2015-09-25 [NBD]
OS X 10.11 2015-09-30 [APL]
OpenBSD 5.8 2015-10-18 [OBD]
DragonFly 4.4.1 2015-12-07 [DFB]
OpenBSD 5.9 2016-03-29 [OBD]
FreeBSD 10.3 2016-04-04 [FBD]
NetBSD 7.0.1 2016-05-22 [NBD]
DragonFly 4.6.0 2016-08-02 [DFB]
OpenBSD 6.0 2016-09-01 [OBD]
macOS 10.12 2016-09-20 [APL]
FreeBSD 11.0 2016-10-10 [FBD]
NetBSD 7.0.2 2016-10-21 [NBD]
NetBSD 7.1 2017-03-11 [NBD]
DragonFly 4.8.0 2017-03-27 [DFB]
OpenBSD 6.1 2017-04-11 [OBD]
FreeBSD 11.1 2017-07-26 [FBD]
macOS 10.13 2017-09-25 [APL]
FreeBSD 10.4 2017-10-03 [FBD]
OpenBSD 6.2 2017-10-09 [OBD]
DragonFly 5.0.0 2017-10-16 [DFB]
DragonFly 5.0.1 2017-11-06 [DFB]
DragonFly 5.0.2 2017-12-04 [DFB]
NetBSD 7.1.1 2017-12-22 [NBD]
NetBSD 7.1.2 2018-03-15 [NBD]
OpenBSD 6.3 2018-04-02 [OBD]
DragonFly 5.2.0 2018-04-10 [DFB]
DragonFly 5.2.1 2018-05-20 [DFB]
DragonFly 5.2.2 2018-06-18 [DFB]
FreeBSD 11.2 2018-06-27 [FBD]
NetBSD 8.0 2018-07-17 [NBD]
NetBSD 7.2 2018-08-29 [NBD]
+macOS 10.14 2018-09-24 [APL]
OpenBSD 6.4 2018-10-18 [OBD]
DragonFly 5.4.0 2018-12-03 [DFB]
FreeBSD 12.0 2018-12-11 [FBD]
DragonFly 5.4.1 2018-12-24 [DFB]
Bibliography
------------------------
Leffler, Samuel J., Marshall Kirk McKusick, Michael J Karels and John
Quarterman. The Design and Implementation of the 4.3BSD UNIX Operating
System. Reading, Mass. Addison-Wesley, 1989. ISBN 0-201-06196-1
Salus, Peter H. A quarter century of UNIX. Addison-Wesley Publishing
Company, Inc., 1994. ISBN 0-201-54777-5
McKusick, Marshall Kirk, Keith Bostic, Michael J Karels, and John
Quarterman. The Design and Implementation of the 4.4BSD Operating
System. Reading, Mass. Addison-Wesley, 1996. ISBN 0-201-54979-4
McKusick, Marshall Kirk, George Neville-Neil. The Design and
Implementation of the FreeBSD Operating System.
Addison-Wesley Professional, Published: Aug 2, 2004. ISBN 0-201-70245-2
McKusick, Marshall Kirk, George Neville-Neil, Robert Watson. The
Design and Implementation of the FreeBSD Operating System, 2nd Edition.
Pearson Education, Inc., 2014. ISBN 0-321-96897-2
Doug McIlroy. Research Unix Reader.
Michael G. Brown. The Role of BSD in the Development of Unix.
Presented to the Tasmanian Unix Special Interest Group of the
Australian Computer Society, Hobart, August 1993.
Peter H. Salus. Unix at 25. Byte Magazine, October 1994.
URL: http://www.byte.com/art/9410/sec8/art3.htm
Andreas Klemm, Lars Köller. If you're going to San Francisco ...
Die freien BSD-Varianten von Unix. c't April 1997, page 368ff.
BSD Release Announcements collection.
URL: https://www.FreeBSD.org/releases/
BSD Hypertext Man Pages
URL: https://www.FreeBSD.org/cgi/man.cgi
UNIX history graphing project
URL: http://minnie.tuhs.org/Unix_History/index.html
UNIX history
URL: http://www.levenez.com/unix/
James Howard: The BSD Family Tree
URL: http://ezine.daemonnews.org/200104/bsd_family.html
("what are the differences between FreeBSD, NetBSD, and OpenBSD?")
Acknowledgments
---------------
Josh Gilliam for suggestions, bug fixes, and finding very old
original BSD announcements from Usenet or tapes.
Steven M. Schultz for providing 2.8BSD, 2.10BSD, 2.11BSD manual pages.
--
Copyright (c) 1997-2012 Wolfram Schneider <wosch@FreeBSD.ORG>
URL: https://svnweb.freebsd.org/base/head/share/misc/bsd-family-tree
$FreeBSD$
Index: projects/clang800-import/share/misc/committers-ports.dot
===================================================================
--- projects/clang800-import/share/misc/committers-ports.dot (revision 343955)
+++ projects/clang800-import/share/misc/committers-ports.dot (revision 343956)
@@ -1,752 +1,757 @@
# $FreeBSD$
# This file is meant to list all FreeBSD ports committers and describe the
# mentor-mentee relationships between them.
# The graphical output can be generated from this file with the following
# command:
# $ dot -T png -o file.png committers-ports.dot
#
# The dot binary is part of the graphics/graphviz port.
digraph ports {
# Node definitions follow this example:
#
# foo [label="Foo Bar\nfoo@FreeBSD.org\n????/??/??"]
#
# ????/??/?? is the date when the commit bit was obtained, usually the one you
# can find looking at svn logs for the svnadmin/access file.
# Use YYYY/MM/DD format.
#
# For returned commit bits, the node definition will follow this example:
#
# foo [label="Foo Bar\nfoo@FreeBSD.org\n????/??/??\n????/??/??"]
#
# The first date is the same as for an active committer, the second date is
# the date when the commit bit has been returned. Again, check svn logs.
node [color=grey62, style=filled, bgcolor=black];
# Alumni go here. Try to keep things sorted.
asami [label="Satoshi Asami\nasami@FreeBSD.org\n1994/11/18\n2001/09/11"]
billf [label="Bill Fumerola\nbillf@FreeBSD.org\n1998/11/11\n2006/12/14"]
jmallett [label="Juli Mallett\njmallett@FreeBSD.org\n2003/01/16\n2006/08/10"]
marcel [label="Marcel Moolenaar\nmarcel@FreeBSD.org\n1999/07/03\n2007/07/01"]
sobomax [label="Maxim Sobolev\nsobomax@FreeBSD.org\n2000/05/17\n2018/12/03"]
steve [label="Steve Price\nsteve@FreeBSD.org\nxxxx/xx/xx\nxxxx/xx/xx"]
will [label="Will Andrews\nwill@FreeBSD.org\n2000/03/20\n2006/09/01"]
node [color=lightblue2, style=filled, bgcolor=black];
# Current ports committers go here. Try to keep things sorted.
"0mp" [label="Mateusz Piotrowski\n0mp@FreeBSD.org\n2018/06/16"]
ache [label="Andrey Chernov\nache@FreeBSD.org\n1994/11/15"]
acm [label="Jose Alonso Cardenas Marquez\nacm@FreeBSD.org\n2006/07/18"]
adamw [label="Adam Weinberger\nadamw@FreeBSD.org\n2002/10/16"]
adridg [label="Adriaan de Groot\nadridg@FreeBSD.org\n2017/09/08"]
ahze [label="Michael Johnson\nahze@FreeBSD.org\n2004/10/29"]
ak [label="Alex Kozlov\nak@FreeBSD.org\n2012/02/29"]
ale [label="Alex Dupre\nale@FreeBSD.org\n2004/01/12"]
alepulver [label="Alejandro Pulver\nalepulver@FreeBSD.org\n2006/04/01"]
alexbl [label="Alexander Botero-Lowry\nalexbl@FreeBSD.org\n2006/09/11"]
alexey [label="Alexey Degtyarev\nalexey@FreeBSD.org\n2013/11/09"]
alonso [label="Alonso Schaich\nalonso@FreeBSD.org\n2014/08/14"]
amdmi3 [label="Dmitry Marakasov\namdmi3@FreeBSD.org\n2008/06/19"]
anray [label="Andrey Slusar\nanray@FreeBSD.org\n2005/12/11"]
antoine [label="Antoine Brodin\nantoine@FreeBSD.org\n2013/04/03"]
araujo [label="Marcelo Araujo\naraujo@FreeBSD.org\n2007/04/26"]
arrowd [label="Gleb Popov\narrowd@FreeBSD.org\n2018/05/18"]
arved [label="Tilman Linneweh\narved@FreeBSD.org\n2002/10/15"]
ashish [label="Ashish SHUKLA\nashish@FreeBSD.org\n2010/06/10"]
avilla [label="Alberto Villa\navilla@FreeBSD.org\n2010/01/24"]
avl [label="Alexander Logvinov\navl@FreeBSD.org\n2009/05/27"]
az [label="Andrej Zverev\naz@FreeBSD.org\n2005/10/03"]
bapt [label="Baptiste Daroussin\nbapt@FreeBSD.org\n2010/07/27"]
bar [label="Barbara Guida\nbar@FreeBSD.org\n2012/11/25"]
bdrewery [label="Bryan Drewery\nbdrewery@FreeBSD.org\n2012/07/31"]
beat [label="Beat Gaetzi\nbeat@FreeBSD.org\n2009/01/28"]
beech [label="Beech Rintoul\nbeech@FreeBSD.org\n2007/05/30"]
bf [label="Brendan Fabeny\nbf@FreeBSD.org\n2010/06/02"]
bland [label="Alexander Nedotsukov\nbland@FreeBSD.org\n2003/08/14"]
bmah [label="Bruce A. Mah\nbmah@FreeBSD.org\n2000/08/23"]
bofh [label="Muhammad Moinur Rahman\nbofh@FreeBSD.org\n2014/12/23"]
brnrd [label="Bernard Spil\nbrnrd@FreeBSD.org\n2015/05/24"]
brix [label="Henrik Brix Andersen\nbrix@FreeBSD.org\n2007/10/31"]
brooks [label="Brooks Davis\nbrooks@FreeBSD.org\n2004/05/03"]
bsam [label="Boris Samorodov\nbsam@FreeBSD.org\n2006/07/20"]
chinsan [label="Chinsan Huang\nchinsan@FreeBSD.org\n2007/06/12"]
clement [label="Clement Laforet\nclement@FreeBSD.org\n2003/12/17"]
clsung [label="Cheng-Lung Sung\nclsung@FreeBSD.org\n2004/08/18"]
cmt [label="Christoph Moench-Tegeder\ncmt@FreeBSD.org\n2016/03/01"]
cperciva [label="Colin Percival\ncperciva@FreeBSD.org\n2006/01/31"]
crees [label="Chris Rees\ncrees@FreeBSD.org\n2011/06/11"]
cs [label="Carlo Strub\ncs@FreeBSD.org\n2011/09/13"]
culot [label="Frederic Culot\nculot@FreeBSD.org\n2010/10/16"]
daichi [label="Daichi Goto\ndaichi@FreeBSD.org\n2002/10/17"]
danfe [label="Alexey Dokuchaev\ndanfe@FreeBSD.org\n2004/08/20"]
danilo [label="Danilo E. Gondolfo\ndanilo@FreeBSD.org\n2013/09/23"]
db [label="Diane Bruce\ndb@FreeBSD.org\n2007/01/18"]
dbaio [label="Danilo G. Baio\ndbaio@FreeBSD.org\n2017/05/03"]
dbn [label="David Naylor\ndbn@FreeBSD.org\n2013/01/14"]
dch [label="Dave Cottlehuber\ndch@FreeBSD.org\n2017/09/09"]
decke [label="Bernhard Froehlich\ndecke@FreeBSD.org\n2010/03/21"]
delphij [label="Xin Li\ndelphij@FreeBSD.org\n2006/05/01"]
demon [label="Dmitry Sivachenko\ndemon@FreeBSD.org\n2000/11/13"]
dhn [label="Dennis Herrmann\ndhn@FreeBSD.org\n2009/03/03"]
dryice [label="Dryice Dong Liu\ndryice@FreeBSD.org\n2006/12/25"]
dteske [label="Devin Teske\ndteske@FreeBSD.org\n2018/03/01"]
dumbbell [label="Jean-Sebastien Pedron\ndumbbell@FreeBSD.org\n2017/01/10"]
dvl [label="Dan Langille\ndvl@FreeBSD.org\n2014/08/10"]
eadler [label="Eitan Adler\neadler@FreeBSD.org\n2011/08/17"]
edwin [label="Edwin Groothuis\nedwin@FreeBSD.org\n2002/10/22"]
egypcio [label="Vin&iacute;cius Zavam\negypcio@FreeBSD.org\n2018/10/04"]
ehaupt [label="Emanuel Haupt\nehaupt@FreeBSD.org\n2005/10/03"]
eik [label="Oliver Eikemeier\neik@FreeBSD.org\n2003/11/12"]
ericbsd [label="Eric Turgeon\nericbsd@FreeBSD.org\n2018/03/17"]
erwin [label="Erwin Lansing\nerwin@FreeBSD.org\n2003/06/04"]
eugen [label="Eugene Grosbein\neugen@FreeBSD.org\n2017/03/04"]
farrokhi [label="Babak Farrokhi\nfarrokhi@FreeBSD.org\n2006/11/07"]
feld [label="Mark Felder\nfeld@FreeBSD.org\n2013/06/25"]
fernape [label="Fernando Apesteguia\nfernape@FreeBSD.org\n2018/03/03"]
fjoe [label="Max Khon\nfjoe@FreeBSD.org\n2001/08/06"]
flo [label="Florian Smeets\nflo@FreeBSD.org\n2010/12/07"]
fluffy [label="Dima Panov\nfluffy@FreeBSD.org\n2009/08/10"]
flz [label="Florent Thoumie\nflz@FreeBSD.org\n2005/03/01"]
gabor [label="Gabor Kovesdan\ngabor@FreeBSD.org\n2006/12/05"]
gahr [label="Pietro Cerutti\ngahr@FreeBSD.org\n2008/02/20"]
garga [label="Renato Botelho\ngarga@FreeBSD.org\n2005/07/11"]
gblach [label="Grzegorz Blach\ngblach@FreeBSD.org\n2012/11/03"]
gerald [label="Gerald Pfeifer\ngerald@FreeBSD.org\n2002/04/03"]
gjb [label="Glen Barber\ngjb@FreeBSD.org\n2012/06/19"]
glarkin [label="Greg Larkin\nglarkin@FreeBSD.org\n2008/07/17"]
glewis [label="Greg Lewis\nglewis@FreeBSD.org\n2002/04/08"]
gordon [label="Gordon Tetlow\ngordon@FreeBSD.org\n2014/10/14"]
grembo [label="Michael Gmelin\ngrembo@FreeBSD.org\n2014/01/21"]
gnn [label="George Neville-Neil\ngnn@FreeBSD.org\n2013/09/04"]
hq [label="Herve Quiroz\nhq@FreeBSD.org\n2004/08/05"]
hrs [label="Hiroki Sato\nhrs@FreeBSD.org\n2004/04/10"]
ijliao [label="Ying-Chieh Liao\nijliao@FreeBSD.org\n2001/01/20"]
itetcu [label="Ion-Mihai Tetcu\nitetcu@FreeBSD.org\n2006/06/07"]
jacula [label="Giuseppe Pilichi\njacula@FreeBSD.org\n2010/04/05"]
jadawin [label="Philippe Audeoud\njadawin@FreeBSD.org\n2008/03/02"]
jase [label="Jase Thew\njase@FreeBSD.org\n2012/05/30"]
jbeich [label="Jan Beich\njbeich@FreeBSD.org\n2015/01/19"]
jgh [label="Jason Helfman\njgh@FreeBSD.org\n2011/12/16"]
jhale [label="Jason E. Hale\njhale@FreeBSD.org\n2012/09/10"]
jhixson [label="John Hixson\njhixson@FreeBSD.org\n2018/07/16"]
jkim [label="Jung-uk Kim\njkim@FreeBSD.org\n2007/09/12"]
jlaffaye [label="Julien Laffaye\njlaffaye@FreeBSD.org\n2011/06/06"]
jmd [label="Johannes M. Dieterich\njmd@FreeBSD.org\n2017/01/09"]
jmelo [label="Jean Milanez Melo\njmelo@FreeBSD.org\n2006/03/31"]
joneum [label="Jochen Neumeister\njoneum@FreeBSD.org\n2017/05/11"]
joerg [label="Joerg Wunsch\njoerg@FreeBSD.org\n1994/08/22"]
johans [label="Johan Selst\njohans@FreeBSD.org\n2006/04/01"]
josef [label="Josef El-Rayes\njosef@FreeBSD.org\n2004/12/20"]
jpaetzel [label="Josh Paetzel\njpaetzel@FreeBSD.org\n2008/09/05"]
jrm [label="Joseph R. Mingrone\njrm@FreeBSD.org\n2016/09/17"]
jsa [label="Joseph S. Atkinson\njsa@FreeBSD.org\n2010/07/15"]
jsm [label="Jesper Schmitz Mouridsen\njsm@FreeBSD.org\n2018/06/30"]
junovitch [label="Jason Unovitch\njunovitch@FreeBSD.org\n2015/07/27"]
jylefort [label="Jean-Yves Lefort\njylefort@FreeBSD.org\n2005/04/12"]
+kai [label="Kai Knoblich\nkai@FreeBSD.org\n2019/02/01"]
kami [label="Dominic Fandrey\nkami@FreeBSD.org\n2014/09/09"]
kbowling [label="Kevin Bowling\nkbowling@FreeBSD.org\n2018/09/02"]
kevlo [label="Kevin Lo\nkevlo@FreeBSD.org\n2003/02/21"]
kmoore [label="Kris Moore\nkmoore@FreeBSD.org\n2009/04/14"]
knu [label="Akinori Musha\nknu@FreeBSD.org\n2000/03/22"]
koitsu [label="Jeremy Chadwick\nkoitsu@FreeBSD.org\n2006/11/10"]
krion [label="Kirill Ponomarew\nkrion@FreeBSD.org\n2003/07/20"]
kwm [label="Koop Mast\nkwm@FreeBSD.org\n2004/09/14"]
laszlof [label="Frank Laszlo\nlaszlof@FreeBSD.org\n2006/11/07"]
lawrance [label="Sam Lawrance\nlawrance@FreeBSD.org\n2005/04/11\n2007/02/21"]
lbr [label="Lars Balker Rasmussen\nlbr@FreeBSD.org\n2006/04/30"]
leeym [label="Yen-Ming Lee\nleeym@FreeBSD.org\n2002/08/14"]
ler [label="Larry Rosenman\nler@FreeBSD.org\n2017/01/09"]
leres [label="Craig Leres\nleres@FreeBSD.org\n2017/10/10"]
lev [label="Lev Serebryakov\nlev@FreeBSD.org\n2003/06/17"]
lifanov [label="Nikolai Lifanov\nlifanov@FreeBSD.org\n2016/12/11"]
linimon [label="Mark Linimon\nlinimon@FreeBSD.org\n2003/10/23"]
lioux [label="Mario Sergio Fujikawa Ferreira\nlioux@FreeBSD.org\n2000/10/14"]
lippe [label="Felippe de Meirelles Motta\nlippe@FreeBSD.org\n2008/03/08"]
lme [label="Lars Engels\nlme@FreeBSD.org\n2007/07/09"]
lth [label="Lars Thegler\nlth@FreeBSD.org\n2004/05/04"]
lwhsu [label="Li-Wen Hsu\nlwhsu@FreeBSD.org\n2007/04/03"]
lx [label="David Thiel\nlx@FreeBSD.org\n2006/11/29"]
madpilot [label="Guido Falsi\nmadpilot@FreeBSD.org\n2012/04/12"]
maho [label="Maho Nakata\nmaho@FreeBSD.org\n2002/10/17"]
makc [label="Max Brazhnikov\nmakc@FreeBSD.org\n2008/08/25"]
mandree [label="Matthias Andree\nmandree@FreeBSD.org\n2009/11/18"]
manu [label="Emmanuel Vadot\nmanu@FreeBSD.org\n2018/09/06"]
marcus [label="Joe Marcus Clarke\nmarcus@FreeBSD.org\n2002/04/05"]
marino [label="John Marino\nmarino@FreeBSD.org\n2013/07/04"]
marius [label="Marius Strobl\nmarius@FreeBSD.org\n2012/12/29"]
markus [label="Markus Brueffer\nmarkus@FreeBSD.org\n2004/02/21"]
martymac [label="Ganael Laplanche\nmartymac@FreeBSD.org\n2010/09/24"]
mat [label="Mathieu Arnold\nmat@FreeBSD.org\n2003/08/15"]
matthew [label="Matthew Seaman\nmatthew@FreeBSD.org\n2012/02/07"]
meta [label="Koichiro Iwao\nmeta@FreeBSD.org\n2018/03/19"]
mezz [label="Jeremy Messenger\nmezz@FreeBSD.org\n2004/04/30"]
mfechner [label="Matthias Fechner\nmfechner@FreeBSD.org\n2018/03/01"]
mharo [label="Michael Haro\nmharo@FreeBSD.org\n1999/04/13"]
milki [label="Jonathan Chu\nmilki@FreeBSD.org\n2013/12/15"]
misha [label="Mikhail Pchelin\nmisha@FreeBSD.org\n2016/11/15"]
miwi [label="Martin Wilke\nmiwi@FreeBSD.org\n2006/06/04"]
mm [label="Martin Matuska\nmm@FreeBSD.org\n2007/04/04"]
mmokhi [label="Mahdi Mokhtari\nmmokhi@FreeBSD.org\n2017/02/09"]
mnag [label="Marcus Alves Grando\nmnag@FreeBSD.org\n2005/09/15"]
mva [label="Marcus von Appen\nmva@FreeBSD.org\n2009/02/16"]
nemysis [label="Rusmir Dusko\nnemysis@FreeBSD.org\n2013/07/31"]
nemoliu [label="Tong Liu\nnemoliu@FreeBSD.org\n2007/04/25"]
netchild [label="Alexander Leidinger\nnetchild@FreeBSD.org\n2002/03/19"]
nobutaka [label="Nobutaka Mantani\nnobutaka@FreeBSD.org\n2001/11/02"]
nork [label="Norikatsu Shigemura\nnork@FreeBSD.org\n2002/04/01"]
novel [label="Roman Bogorodskiy\nnovel@FreeBSD.org\n2005/03/07"]
nox [label="Juergen Lock\nnox@FreeBSD.org\n2006/12/22"]
obrien [label="David E. O'Brien\nobrien@FreeBSD.org\n1996/10/29"]
olivier [label="Olivier Cochard-Labbe\nolivier@FreeBSD.org\n2016/02/02"]
olivierd [label="Olivier Duchateau\nolivierd@FreeBSD.org\n2012/05/29"]
osa [label="Sergey A. Osokin\nosa@FreeBSD.org\n2003/06/04"]
pat [label="Patrick Li\npat@FreeBSD.org\n2001/11/14"]
pav [label="Pav Lucistnik\npav@FreeBSD.org\n2003/11/12"]
pawel [label="Pawel Pekala\npawel@FreeBSD.org\n2011/03/11"]
pclin [label="Po-Chien Lin\npclin@FreeBSD.org\n2013/02/11"]
pgj [label="Gabor Pali\npgj@FreeBSD.org\n2009/04/12"]
pgollucci [label="Philip M. Gollucci\npgollucci@FreeBSD.org\n2008/07/21"]
philip [label="Philip Paeps\nphilip@FreeBSD.org\n2005/10/19"]
pi [label="Kurt Jaeger\npi@FreeBSD.org\n2014/03/14"]
pizzamig [label="Luca Pizzamiglio\npizzamig@FreeBSD.org\n2017/08/25"]
rafan [label="Rong-En Fan\nrafan@FreeBSD.org\n2006/06/23"]
rakuco [label="Raphael Kubo da Costa\nrakuco@FreeBSD.org\n2011/08/22"]
rene [label="Rene Ladan\nrene@FreeBSD.org\n2010/04/11"]
rezny [label="Matthew Rezny\nrezny@FreeBSD.org\n2017/01/09"]
riggs [label="Thomas Zander\nriggs@FreeBSD.org\n2014/01/09"]
rigoletto [label="Alexandre C. Guimaraes\nrigoletto@FreeBSD.org\n2018/10/01"]
rm [label="Ruslan Makhmatkhanov\nrm@FreeBSD.org\n2011/11/06"]
rnoland [label="Robert Noland\nrnoland@FreeBSD.org\n2008/07/21"]
robak [label="Bartek Rutkowski\nrobak@FreeBSD.org\n2014/06/10"]
rodrigo [label="Rodrigo Osorio\nrodrigo@FreeBSD.org\n2014/01/15"]
romain [label="Romain Tartiere\nromain@FreeBSD.org\n2010/01/24"]
rpaulo [label="Rui Paulo\nrpaulo@FreeBSD.org\n2014/07/15"]
sahil [label="Sahil Tandon\nsahil@FreeBSD.org\n2010/04/11"]
sat [label="Andrew Pantyukhin\nsat@FreeBSD.org\n2006/05/06"]
sbruno [label="Sean Bruno\nsbruno@FreeBSD.org\n2014/09/14"]
sbz [label="Sofian Brabez\nsbz@FreeBSD.org\n2011/03/14"]
scheidell [label="Michael Scheidell\nscheidell@FreeBSD.org\n2011/11/06"]
seanc [label="Sean Chittenden\nseanc@FreeBSD.org\n2002/08/15"]
sem [label="Sergey Matveychuk\nsem@FreeBSD.org\n2004/07/07"]
sergei [label="Sergei Kolobov\nsergei@FreeBSD.org\n2003/10/21"]
shaun [label="Shaun Amott\nshaun@FreeBSD.org\n2006/06/19"]
shurd [label="Stephen Hurd\nshurd@FreeBSD.org\n2014/06/14"]
simon [label="Simon L. Nielsen\nsimon@FreeBSD.org\n2005/01/08"]
skozlov [label="Sergey Kozlov\nskozlov@FreeBSD.org\n2018/09/21"]
skreuzer [label="Steven Kreuzer\nskreuzer@FreeBSD.org\n2009/03/25"]
sperber [label="Armin Pirkovitsch\nsperber@FreeBSD.org\n2012/04/15"]
stas [label="Stanislav Sedov\nstas@FreeBSD.org\n2006/09/18"]
stefan [label="Stefan Walter\nstefan@FreeBSD.org\n2006/05/07"]
stephen [label="Stephen Montgomery-Smith\nstephen@FreeBSD.org\n2011/06/13"]
sunpoet [label="Po-Chuan Hsieh\nsunpoet@FreeBSD.org\n2010/09/21"]
swills [label="Steve Wills\nswills@FreeBSD.org\n2010/09/03"]
sylvio [label="Sylvio Cesar Teixeira\nsylvio@FreeBSD.org\n2009/10/29"]
tabthorpe [label="Thomas Abthorpe\ntabthorpe@FreeBSD.org\n2007/08/20"]
tcberner [label="Tobias C. Berner\ntcberner@FreeBSD.org\n2016/07/06"]
tdb [label="Tim Bishop\ntdb@FreeBSD.org\n2005/11/30"]
thierry [label="Thierry Thomas\nthierry@FreeBSD.org\n2004/03/15"]
tijl [label="Tijl Coosemans\ntijl@FreeBSD.org\n2013/03/27"]
timur [label="Timur Bakeyev\ntimur@FreeBSD.org\n2007/06/07"]
tj [label="Tom Judge\ntj@FreeBSD.org\n2012/05/28"]
tmclaugh [label="Tom McLaughlin\ntmclaugh@FreeBSD.org\n2005/09/15"]
tobik [label="Tobias Kortkamp\ntobik@FreeBSD.org\n2017/02/08"]
tota [label="TAKATSU Tomonari\ntota@FreeBSD.org\n2009/03/30"]
trasz [label="Edward Tomasz Napierala\ntrasz@FreeBSD.org\n2007/04/12"]
trhodes [label="Tom Rhodes\ntrhodes@FreeBSD.org\n2004/07/06"]
trociny [label="Mikolaj Golub\ntrociny@FreeBSD.org\n2013/10/17"]
tz [label="Torsten Zuehlsdorff\ntz@FreeBSD.org\n2016/06/04"]
ultima [label="Richard Gallamore\nultima@FreeBSD.org\n2017/06/07"]
uqs [label="Ulrich Spoerlein\nuqs@FreeBSD.org\n2012/01/19"]
vd [label="Vasil Dimov\nvd@FreeBSD.org\n2006/01/19"]
vg [label="Veniamin Gvozdikov\nvg@FreeBSD.org\n2013/06/11"]
vsevolod [label="Vsevolod Stakhov\nvsevolod@FreeBSD.org\n2005/07/22"]
wen [label="Wen Heping\nwen@FreeBSD.org\n2010/12/13"]
wg [label="William Grzybowski\nwg@FreeBSD.org\n2013/04/01"]
woodsb02 [label="Ben Woods\nwoodsb02@FreeBSD.org\n2016/05/09"]
wxs [label="Wesley Shields\nwxs@FreeBSD.org\n2008/01/03"]
xmj [label="Johannes Jost Meixner\nxmj@FreeBSD.org\n2014/04/07"]
xride [label="Soeren Straarup\nxride@FreeBSD.org\n2006/09/27"]
yuri [label="Yuri Victorovich\nyuri@FreeBSD.org\n2017/10/30"]
yzlin [label="Yi-Jheng Lin\nyzlin@FreeBSD.org\n2009/07/19"]
zeising [label="Niclas Zeising\nzeising@FreeBSD.org\n2012/07/03"]
zi [label="Ryan Steinmetz\nzi@FreeBSD.org\n2011/07/14"]
znerd [label="Ernst de Haan\nznerd@FreeBSD.org\n2001/11/15"]
# Here are the mentor/mentee relationships.
# Group together all the mentees for a particular mentor.
# Keep the list sorted by mentor login.
adamw -> ahze
adamw -> jylefort
adamw -> ler
adamw -> mezz
adamw -> pav
adamw -> woodsb02
ade -> jpaetzel
ahze -> shaun
ahze -> tmclaugh
amdmi3 -> jrm
amdmi3 -> arrowd
antoine -> dumbbell
araujo -> egypcio
araujo -> jhixson
araujo -> lippe
araujo -> pclin
araujo -> pgollucci
arved -> markus
arved -> stefan
asami -> obrien
avilla -> jhale
avilla -> rakuco
az -> eugen
bdrewery -> dbn
bdrewery -> sbruno
bdrewery -> trociny
bapt -> bdrewery
bapt -> bofh
bapt -> dumbbell
bapt -> eadler
bapt -> ericbsd
bapt -> grembo
bapt -> jbeich
bapt -> jlaffaye
bapt -> manu
bapt -> marius
bapt -> marino
bapt -> rodrigo
bapt -> rpaulo
bapt -> sbruno
beat -> decke
beat -> egypcio
beat -> marius
beat -> sperber
beat -> uqs
beech -> glarkin
beech -> mva
billf -> sobomax
billf -> will
brooks -> kmoore
clement -> tdb
clement -> lawrance
clsung -> lwhsu
clsung -> tabthorpe
crees -> feld
crees -> gjb
crees -> jgh
crees -> madpilot
crees -> gblach
crees -> tijl
cs -> kami
culot -> danilo
culot -> jase
culot -> marino
culot -> pi
culot -> wg
db -> tj
db -> shurd
decke -> sperber
delphij -> junovitch
delphij -> nemoliu
delphij -> rafan
demon -> mat
eadler -> ak
eadler -> antoine
eadler -> dbn
eadler -> bdrewery
eadler -> gjb
eadler -> milki
eadler -> tj
eadler -> vg
edwin -> cperciva
edwin -> erwin
edwin -> linimon
edwin -> lx
ehaupt -> db
ehaupt -> martymac
eik -> sem
eik -> trhodes
erwin -> brix
erwin -> clement
erwin -> gabor
erwin -> gordon
erwin -> lbr
erwin -> lth
erwin -> simon
feld -> brnrd
feld -> junovitch
feld -> mmokhi
feld -> rezny
fjoe -> danfe
fjoe -> flo
fjoe -> krion
fjoe -> osa
flo -> bar
flo -> jase
flo -> jbeich
flo -> grembo
flz -> garga
flz -> johans
flz -> laszlof
flz -> romain
jpaetzel -> misha
jpaetzel -> wg
gabor -> lippe
gabor -> pgj
gabor -> stephen
gabor -> scheidell
garga -> acm
garga -> alepulver
garga -> dbaio
garga -> mandree
garga -> mm
garga -> rnoland
garga -> vd
garga -> wxs
garga -> xride
glarkin -> avl
glarkin -> cs
glarkin -> rm
glewis -> hq
glewis -> jkim
hrs -> meta
ijliao -> leeym
imp -> dteske
itetcu -> ak
itetcu -> araujo
itetcu -> dryice
itetcu -> sahil
itetcu -> sylvio
jadawin -> bapt
jadawin -> flo
jadawin -> olivier
jadawin -> pi
jadawin -> riggs
jadawin -> sbz
jadawin -> wen
joerg -> netchild
+joneum -> kai
+
jrm -> dch
jrm -> jwb
junovitch -> tz
kmoore -> jhixson
knu -> daichi
knu -> maho
knu -> nobutaka
knu -> nork
koobs -> brnrd
koobs -> kami
koobs -> woodsb02
koobs -> xmj
krion -> "0mp"
krion -> brooks
krion -> kbowling
krion -> miwi
krion -> novel
krion -> philip
krion -> sat
krion -> sem
krion -> sergei
kwm -> jsa
kwm -> rodrigo
kwm -> zeising
lawrance -> itetcu
leeym -> clsung
ler -> leres
lifanov -> ultima
linimon -> hrs
lioux -> pat
lme -> pizzamig
lme -> tobik
lwhsu -> yzlin
maho -> stephen
maho -> tota
marcus -> ahze
marcus -> bland
marcus -> eik
marcus -> jmallett
marino -> bofh
marino -> robak
makc -> alonso
makc -> bf
makc -> jhale
makc -> rakuco
mat -> "0mp"
mat -> bmah
mat -> dteske
mat -> dvl
mat -> gordon
mat -> mmokhi
mat -> seanc
mat -> tcberner
mat -> thierry
mat -> tobik
mat -> woodsb02
mat -> rigoletto
matthew -> leres
matthew -> lifanov
matthew -> ultima
mezz -> tmclaugh
miwi -> amdmi3
miwi -> antoine
miwi -> avilla
miwi -> beat
miwi -> bf
miwi -> cmt
miwi -> decke
miwi -> dhn
miwi -> farrokhi
miwi -> fluffy
miwi -> gahr
miwi -> jhixson
miwi -> joneum
miwi -> jsm
+miwi -> kai
miwi -> kmoore
miwi -> lme
miwi -> makc
miwi -> mandree
miwi -> mva
miwi -> nemysis
miwi -> nox
miwi -> olivierd
miwi -> pawel
miwi -> rm
miwi -> sbz
miwi -> sperber
miwi -> sylvio
miwi -> tabthorpe
miwi -> trasz
miwi -> wen
miwi -> zeising
mnag -> jmelo
netchild -> bsam
nork -> ale
novel -> alexbl
novel -> ehaupt
novel -> rm
obrien -> mharo
obrien -> gerald
olivier -> pizzamig
osa -> vg
pat -> adamw
pav -> ahze
pav -> flz
pav -> josef
pav -> kwm
pav -> mnag
pawel -> nemysis
pgj -> ashish
pgj -> jacula
pgollucci -> junovitch
pgollucci -> sunpoet
pgollucci -> swills
philip -> koitsu
pi -> meta
pi -> tz
rafan -> chinsan
rakuco -> adridg
rakuco -> alonso
rakuco -> tcberner
rene -> bar
rene -> cmt
rene -> crees
rene -> egypcio
rene -> jgh
rene -> jmd
rene -> joneum
rene -> ler
rene -> olivierd
rm -> koobs
rm -> vg
sahil -> culot
sahil -> eadler
sat -> beech
sbruno -> skozlov
sem -> az
sem -> anray
sem -> delphij
sem -> stas
shaun -> timur
shaun -> matthew
skreuzer -> gnn
skreuzer -> shurd
sobomax -> demon
sobomax -> glewis
sobomax -> lev
sobomax -> marcus
sobomax -> znerd
stas -> araujo
steve -> netchild
swills -> dch
swills -> feld
swills -> jmd
swills -> jrm
swills -> jsm
swills -> mfechner
swills -> milki
swills -> pclin
swills -> rezny
swills -> robak
swills -> rpaulo
swills -> seanc
swills -> tz
swills -> xmj
tabthorpe -> ashish
tabthorpe -> avilla
tabthorpe -> avl
tabthorpe -> bapt
tabthorpe -> crees
tabthorpe -> dhn
tabthorpe -> fluffy
tabthorpe -> jacula
tabthorpe -> jadawin
tabthorpe -> jlaffaye
tabthorpe -> madpilot
tabthorpe -> pgj
tabthorpe -> rene
tabthorpe -> zi
tabthorpe -> gblach
tcberner -> adridg
tcberner -> joneum
tcberner -> yuri
tcberner -> fernape
tcberner -> arrowd
tcberner -> rigoletto
+tcberner -> kai
thierry -> jadawin
thierry -> riggs
timur -> kbowling
tmclaugh -> itetcu
tmclaugh -> xride
tz -> joneum
tz -> fernape
tz -> mfechner
vsevolod -> eugen
wen -> cs
wen -> culot
wen -> pawel
wg -> alexey
wg -> danilo
wg -> dvl
wg -> ericbsd
wg -> misha
wg -> nemysis
will -> lioux
wxs -> jsa
wxs -> nemysis
wxs -> sahil
wxs -> skreuzer
wxs -> swills
wxs -> zi
}
Index: projects/clang800-import/share/misc/committers-src.dot
===================================================================
--- projects/clang800-import/share/misc/committers-src.dot (revision 343955)
+++ projects/clang800-import/share/misc/committers-src.dot (revision 343956)
@@ -1,872 +1,874 @@
# $FreeBSD$
# This file is meant to list all FreeBSD src committers and describe the
# mentor-mentee relationships between them.
# The graphical output can be generated from this file with the following
# command:
# $ dot -T png -o file.png committers-src.dot
#
# The dot binary is part of the graphics/graphviz port.
digraph src {
# Node definitions follow this example:
#
# foo [label="Foo Bar\nfoo@FreeBSD.org\n????/??/??"]
#
# ????/??/?? is the date when the commit bit was obtained, usually the one you
# can find looking at svn logs for the svnadmin/conf/access file.
# Use YYYY/MM/DD format.
#
# For returned commit bits, the node definition will follow this example:
#
# foo [label="Foo Bar\nfoo@FreeBSD.org\n????/??/??\n????/??/??"]
#
# The first date is the same as for an active committer, the second date is
# the date when the commit bit has been returned. Again, check svn logs.
node [color=grey62, style=filled, bgcolor=black];
# Alumni go here. Try to keep things sorted.
alm [label="Andrew Moore\nalm@FreeBSD.org\n1993/06/12\n????/??/??"]
anholt [label="Eric Anholt\nanholt@FreeBSD.org\n2002/04/22\n2008/08/07"]
archie [label="Archie Cobbs\narchie@FreeBSD.org\n1998/11/06\n2006/06/09"]
arr [label="Andrew R. Reiter\narr@FreeBSD.org\n2001/11/02\n2005/05/25"]
arun [label="Arun Sharma\narun@FreeBSD.org\n2003/03/06\n2006/12/16"]
asmodai [label="Jeroen Ruigrok\nasmodai@FreeBSD.org\n1999/12/16\n2001/11/16"]
benjsc [label="Benjamin Close\nbenjsc@FreeBSD.org\n2007/02/09\n2010/09/15"]
billf [label="Bill Fumerola\nbillf@FreeBSD.org\n1998/11/11\n2008/11/10"]
bmah [label="Bruce A. Mah\nbmah@FreeBSD.org\n2002/01/29\n2009/09/13"]
bmilekic [label="Bosko Milekic\nbmilekic@FreeBSD.org\n2000/09/21\n2008/11/10"]
bushman [label="Michael Bushkov\nbushman@FreeBSD.org\n2007/03/10\n2010/04/29"]
carl [label="Carl Delsey\ncarl@FreeBSD.org\n2013/01/14\n2014/03/06"]
ceri [label="Ceri Davies\nceri@FreeBSD.org\n2006/11/07\n2012/03/07"]
cjc [label="Crist J. Clark\ncjc@FreeBSD.org\n2001/06/01\n2006/12/29"]
davidxu [label="David Xu\ndavidxu@FreeBSD.org\n2002/09/02\n2014/04/14"]
dds [label="Diomidis Spinellis\ndds@FreeBSD.org\n2003/06/20\n2010/09/22"]
dhartmei [label="Daniel Hartmeier\ndhartmei@FreeBSD.org\n2004/04/06\n2008/12/08"]
dmlb [label="Duncan Barclay\ndmlb@FreeBSD.org\n2001/12/14\n2008/11/10"]
dougb [label="Doug Barton\ndougb@FreeBSD.org\n2000/10/26\n2012/10/08"]
eik [label="Oliver Eikemeier\neik@FreeBSD.org\n2004/05/20\n2008/11/10"]
furuta [label="Atsushi Furuta\nfuruta@FreeBSD.org\n2000/06/21\n2003/03/08"]
gj [label="Gary L. Jennejohn\ngj@FreeBSD.org\n1994/??/??\n2006/04/28"]
groudier [label="Gerard Roudier\ngroudier@FreeBSD.org\n1999/12/30\n2006/04/06"]
jake [label="Jake Burkholder\njake@FreeBSD.org\n2000/05/16\n2008/11/10"]
jayanth [label="Jayanth Vijayaraghavan\njayanth@FreeBSD.org\n2000/05/08\n2008/11/10"]
jb [label="John Birrell\njb@FreeBSD.org\n1997/03/27\n2009/12/15"]
jdp [label="John Polstra\njdp@FreeBSD.org\n1995/12/07\n2008/02/26"]
jedgar [label="Chris D. Faulhaber\njedgar@FreeBSD.org\n1999/12/15\n2006/04/07"]
jkh [label="Jordan K. Hubbard\njkh@FreeBSD.org\n1993/06/12\n2008/06/13"]
jlemon [label="Jonathan Lemon\njlemon@FreeBSD.org\n1997/08/14\n2008/11/10"]
joe [label="Josef Karthauser\njoe@FreeBSD.org\n1999/10/22\n2008/08/10"]
jtc [label="J.T. Conklin\njtc@FreeBSD.org\n1993/06/12\n????/??/??"]
kargl [label="Steven G. Kargl\nkargl@FreeBSD.org\n2011/01/17\n2015/06/28"]
kbyanc [label="Kelly Yancey\nkbyanc@FreeBSD.org\n2000/07/11\n2006/07/25"]
keichii [label="Michael Wu\nkeichii@FreeBSD.org\n2001/03/07\n2006/04/28"]
linimon [label="Mark Linimon\nlinimon@FreeBSD.org\n2006/09/30\n2008/05/04"]
lulf [label="Ulf Lilleengen\nlulf@FreeBSD.org\n2007/10/24\n2012/01/19"]
mb [label="Maxim Bolotin\nmb@FreeBSD.org\n2000/04/06\n2003/03/08"]
marks [label="Mark Santcroos\nmarks@FreeBSD.org\n2004/03/18\n2008/09/29"]
mike [label="Mike Barcroft\nmike@FreeBSD.org\n2001/07/17\n2006/04/28"]
msmith [label="Mike Smith\nmsmith@FreeBSD.org\n1996/10/22\n2003/12/15"]
murray [label="Murray Stokely\nmurray@FreeBSD.org\n2000/04/05\n2010/07/25"]
mux [label="Maxime Henrion\nmux@FreeBSD.org\n2002/03/03\n2011/06/22"]
nate [label="Nate Williams\nnate@FreeBSD.org\n1993/06/12\n2003/12/15"]
njl [label="Nate Lawson\nnjl@FreeBSD.org\n2002/08/07\n2008/02/16"]
non [label="Noriaki Mitsunaga\nnon@FreeBSD.org\n2000/06/19\n2007/03/06"]
onoe [label="Atsushi Onoe\nonoe@FreeBSD.org\n2000/07/21\n2008/11/10"]
rafan [label="Rong-En Fan\nrafan@FreeBSD.org\n2007/01/31\n2012/07/23"]
randi [label="Randi Harper\nrandi@FreeBSD.org\n2010/04/20\n2012/05/10"]
rink [label="Rink Springer\nrink@FreeBSD.org\n2006/01/16\n2010/11/04"]
robert [label="Robert Drehmel\nrobert@FreeBSD.org\n2001/08/23\n2006/05/13"]
sah [label="Sam Hopkins\nsah@FreeBSD.org\n2004/12/15\n2008/11/10"]
shafeeq [label="Shafeeq Sinnamohideen\nshafeeq@FreeBSD.org\n2000/06/19\n2006/04/06"]
sheldonh [label="Sheldon Hearn\nsheldonh@FreeBSD.org\n1999/06/14\n2006/05/13"]
shiba [label="Takeshi Shibagaki\nshiba@FreeBSD.org\n2000/06/19\n2008/11/10"]
shin [label="Yoshinobu Inoue\nshin@FreeBSD.org\n1999/07/29\n2003/03/08"]
snb [label="Nick Barkas\nsnb@FreeBSD.org\n2009/05/05\n2010/11/04"]
tmm [label="Thomas Moestl\ntmm@FreeBSD.org\n2001/03/07\n2006/07/12"]
toshi [label="Toshihiko Arai\ntoshi@FreeBSD.org\n2000/07/06\n2003/03/08"]
tshiozak [label="Takuya SHIOZAKI\ntshiozak@FreeBSD.org\n2001/04/25\n2003/03/08"]
uch [label="UCHIYAMA Yasushi\nuch@FreeBSD.org\n2000/06/21\n2002/04/24"]
wilko [label="Wilko Bulte\nwilko@FreeBSD.org\n2000/01/13\n2013/01/17"]
yar [label="Yar Tikhiy\nyar@FreeBSD.org\n2001/03/25\n2012/05/23"]
zack [label="Zack Kirsch\nzack@FreeBSD.org\n2010/11/05\n2012/09/08"]
node [color=lightblue2, style=filled, bgcolor=black];
# Current src committers go here. Try to keep things sorted.
ache [label="Andrey Chernov\nache@FreeBSD.org\n1993/10/31"]
achim [label="Achim Leubner\nachim@FreeBSD.org\n2013/01/23"]
adrian [label="Adrian Chadd\nadrian@FreeBSD.org\n2000/07/03"]
ae [label="Andrey V. Elsukov\nae@FreeBSD.org\n2010/06/03"]
akiyama [label="Shunsuke Akiyama\nakiyama@FreeBSD.org\n2000/06/19"]
alc [label="Alan Cox\nalc@FreeBSD.org\n1999/02/23"]
allanjude [label="Allan Jude\nallanjude@FreeBSD.org\n2015/07/30"]
ambrisko [label="Doug Ambrisko\nambrisko@FreeBSD.org\n2001/12/19"]
anchie [label="Ana Kukec\nanchie@FreeBSD.org\n2010/04/14"]
andre [label="Andre Oppermann\nandre@FreeBSD.org\n2003/11/12"]
andreast [label="Andreas Tobler\nandreast@FreeBSD.org\n2010/09/05"]
andrew [label="Andrew Turner\nandrew@FreeBSD.org\n2010/07/19"]
antoine [label="Antoine Brodin\nantoine@FreeBSD.org\n2008/02/03"]
araujo [label="Marcelo Araujo\naraujo@FreeBSD.org\n2015/08/04"]
arichardson [label="Alex Richardson\narichardson@FreeBSD.org\n2017/10/30"]
ariff [label="Ariff Abdullah\nariff@FreeBSD.org\n2005/11/14"]
art [label="Artem Belevich\nart@FreeBSD.org\n2011/03/29"]
arybchik [label="Andrew Rybchenko\narybchik@FreeBSD.org\n2014/10/12"]
asomers [label="Alan Somers\nasomers@FreeBSD.org\n2013/04/24"]
avg [label="Andriy Gapon\navg@FreeBSD.org\n2009/02/18"]
avos [label="Andriy Voskoboinyk\navos@FreeBSD.org\n2015/09/24"]
badger [label="Eric Badger\nbadger@FreeBSD.org\n2016/07/01"]
bapt [label="Baptiste Daroussin\nbapt@FreeBSD.org\n2011/12/23"]
bcran [label="Rebecca Cran\nbcran@FreeBSD.org\n2010/01/29"]
bde [label="Bruce Evans\nbde@FreeBSD.org\n1994/08/20"]
bdrewery [label="Bryan Drewery\nbdrewery@FreeBSD.org\n2013/12/14"]
benl [label="Ben Laurie\nbenl@FreeBSD.org\n2011/05/18"]
benno [label="Benno Rice\nbenno@FreeBSD.org\n2000/11/02"]
bms [label="Bruce M Simpson\nbms@FreeBSD.org\n2003/08/06"]
br [label="Ruslan Bukin\nbr@FreeBSD.org\n2013/09/02"]
brian [label="Brian Somers\nbrian@FreeBSD.org\n1996/12/16"]
brooks [label="Brooks Davis\nbrooks@FreeBSD.org\n2001/06/21"]
brueffer [label="Christian Brueffer\nbrueffer@FreeBSD.org\n2006/02/28"]
bruno [label="Bruno Ducrot\nbruno@FreeBSD.org\n2005/07/18"]
bryanv [label="Bryan Venteicher\nbryanv@FreeBSD.org\n2012/11/03"]
bschmidt [label="Bernhard Schmidt\nbschmidt@FreeBSD.org\n2010/02/06"]
bwidawsk [label="Ben Widawsky\nbwidawsk@FreeBSD.org\n2018/07/05"]
bz [label="Bjoern A. Zeeb\nbz@FreeBSD.org\n2004/07/27"]
cem [label="Conrad Meyer\ncem@FreeBSD.org\n2015/07/05"]
chuck [label="Chuck Tuffli\nchuck@FreeBSD.org\n2017/09/06"]
cognet [label="Olivier Houchard\ncognet@FreeBSD.org\n2002/10/09"]
cokane [label="Coleman Kane\ncokane@FreeBSD.org\n2000/06/19"]
cperciva [label="Colin Percival\ncperciva@FreeBSD.org\n2004/01/20"]
csjp [label="Christian S.J. Peron\ncsjp@FreeBSD.org\n2004/05/04"]
dab [label="David Bright\ndab@FreeBSD.org\n2016/10/24"]
das [label="David Schultz\ndas@FreeBSD.org\n2003/02/21"]
davide [label="Davide Italiano\ndavide@FreeBSD.org\n2012/01/27"]
dchagin [label="Dmitry Chagin\ndchagin@FreeBSD.org\n2009/02/28"]
def [label="Konrad Witaszczyk\ndef@FreeBSD.org\n2016/11/02"]
delphij [label="Xin Li\ndelphij@FreeBSD.org\n2004/09/14"]
des [label="Dag-Erling Smorgrav\ndes@FreeBSD.org\n1998/04/03"]
dexuan [label="Dexuan Cui\ndexuan@FreeBSD.org\n2016/10/24"]
dfr [label="Doug Rabson\ndfr@FreeBSD.org\n????/??/??"]
dg [label="David Greenman\ndg@FreeBSD.org\n1993/06/14"]
dim [label="Dimitry Andric\ndim@FreeBSD.org\n2010/08/30"]
dteske [label="Devin Teske\ndteske@FreeBSD.org\n2012/04/10"]
dumbbell [label="Jean-Sebastien Pedron\ndumbbell@FreeBSD.org\n2004/11/29"]
dwmalone [label="David Malone\ndwmalone@FreeBSD.org\n2000/07/11"]
eadler [label="Eitan Adler\neadler@FreeBSD.org\n2012/01/18"]
ed [label="Ed Schouten\ned@FreeBSD.org\n2008/05/22"]
edavis [label="Eric Davis\nedavis@FreeBSD.org\n2013/10/09"]
edwin [label="Edwin Groothuis\nedwin@FreeBSD.org\n2007/06/25"]
eivind [label="Eivind Eklund\neivind@FreeBSD.org\n1997/02/02"]
emaste [label="Ed Maste\nemaste@FreeBSD.org\n2005/10/04"]
emax [label="Maksim Yevmenkin\nemax@FreeBSD.org\n2003/10/12"]
eri [label="Ermal Luci\neri@FreeBSD.org\n2008/06/11"]
erj [label="Eric Joyner\nerj@FreeBSD.org\n2014/12/14"]
eugen [label="Eugene Grosbein\neugen@FreeBSD.org\n2017/09/19"]
fabient [label="Fabien Thomas\nfabient@FreeBSD.org\n2009/03/16"]
fanf [label="Tony Finch\nfanf@FreeBSD.org\n2002/05/05"]
fjoe [label="Max Khon\nfjoe@FreeBSD.org\n2001/08/06"]
flz [label="Florent Thoumie\nflz@FreeBSD.org\n2006/03/30"]
fsu [label="Fedor Uporov\nfsu@FreeBSD.org\n2017/08/28"]
gabor [label="Gabor Kovesdan\ngabor@FreeBSD.org\n2010/02/02"]
gad [label="Garance A. Drosehn\ngad@FreeBSD.org\n2000/10/27"]
gallatin [label="Andrew Gallatin\ngallatin@FreeBSD.org\n1999/01/15"]
ganbold [label="Ganbold Tsagaankhuu\nganbold@FreeBSD.org\n2013/12/18"]
gavin [label="Gavin Atkinson\ngavin@FreeBSD.org\n2009/12/07"]
gibbs [label="Justin T. Gibbs\ngibbs@FreeBSD.org\n????/??/??"]
gjb [label="Glen Barber\ngjb@FreeBSD.org\n2013/06/04"]
gleb [label="Gleb Kurtsou\ngleb@FreeBSD.org\n2011/09/19"]
glebius [label="Gleb Smirnoff\nglebius@FreeBSD.org\n2004/07/14"]
gnn [label="George V. Neville-Neil\ngnn@FreeBSD.org\n2004/10/11"]
gordon [label="Gordon Tetlow\ngordon@FreeBSD.org\n2002/05/17"]
grehan [label="Peter Grehan\ngrehan@FreeBSD.org\n2002/08/08"]
grog [label="Greg Lehey\ngrog@FreeBSD.org\n1998/08/30"]
gshapiro [label="Gregory Shapiro\ngshapiro@FreeBSD.org\n2000/07/12"]
harti [label="Hartmut Brandt\nharti@FreeBSD.org\n2003/01/29"]
hiren [label="Hiren Panchasara\nhiren@FreeBSD.org\n2013/04/12"]
hmp [label="Hiten Pandya\nhmp@FreeBSD.org\n2004/03/23"]
hselasky [label="Hans Petter Selasky\nhselasky@FreeBSD.org\n"]
ian [label="Ian Lepore\nian@FreeBSD.org\n2013/01/07"]
iedowse [label="Ian Dowse\niedowse@FreeBSD.org\n2000/12/01"]
imp [label="Warner Losh\nimp@FreeBSD.org\n1996/09/20"]
ivoras [label="Ivan Voras\nivoras@FreeBSD.org\n2008/06/10"]
jah [label="Jason A. Harmening\njah@FreeBSD.org\n2015/03/08"]
jamie [label="Jamie Gritton\njamie@FreeBSD.org\n2009/01/28"]
jasone [label="Jason Evans\njasone@FreeBSD.org\n1999/03/03"]
jceel [label="Jakub Klama\njceel@FreeBSD.org\n2011/09/25"]
jch [label="Julien Charbon\njch@FreeBSD.org\n2014/09/24"]
jchandra [label="Jayachandran C.\njchandra@FreeBSD.org\n2010/05/19"]
jeb [label="Jeb Cramer\njeb@FreeBSD.org\n2018/01/25"]
jeff [label="Jeff Roberson\njeff@FreeBSD.org\n2002/02/21"]
jh [label="Jaakko Heinonen\njh@FreeBSD.org\n2009/10/02"]
jhb [label="John Baldwin\njhb@FreeBSD.org\n1999/08/23"]
jhibbits [label="Justin Hibbits\njhibbits@FreeBSD.org\n2011/11/30"]
jilles [label="Jilles Tjoelker\njilles@FreeBSD.org\n2009/05/22"]
jimharris [label="Jim Harris\njimharris@FreeBSD.org\n2011/12/09"]
jinmei [label="JINMEI Tatuya\njinmei@FreeBSD.org\n2007/03/17"]
jkim [label="Jung-uk Kim\njkim@FreeBSD.org\n2005/07/06"]
jkoshy [label="A. Joseph Koshy\njkoshy@FreeBSD.org\n1998/05/13"]
jlh [label="Jeremie Le Hen\njlh@FreeBSD.org\n2012/04/22"]
jls [label="Jordan Sissel\njls@FreeBSD.org\n2006/12/06"]
jmcneill [label="Jared McNeill\njmcneill@FreeBSD.org\n2016/02/24"]
jmg [label="John-Mark Gurney\njmg@FreeBSD.org\n1997/02/13"]
jmmv [label="Julio Merino\njmmv@FreeBSD.org\n2013/11/02"]
joerg [label="Joerg Wunsch\njoerg@FreeBSD.org\n1993/11/14"]
+johalun [label="Johannes Lundberg\njohalun@FreeBSD.org\n2019/01/19"]
jon [label="Jonathan Chen\njon@FreeBSD.org\n2000/10/17"]
jonathan [label="Jonathan Anderson\njonathan@FreeBSD.org\n2010/10/07"]
jpaetzel [label="Josh Paetzel\njpaetzel@FreeBSD.org\n2011/01/21"]
jtl [label="Jonathan T. Looney\njtl@FreeBSD.org\n2015/10/26"]
julian [label="Julian Elischer\njulian@FreeBSD.org\n1993/04/19"]
jwd [label="John De Boskey\njwd@FreeBSD.org\n2000/05/19"]
kaiw [label="Kai Wang\nkaiw@FreeBSD.org\n2007/09/26"]
kan [label="Alexander Kabaev\nkan@FreeBSD.org\n2002/07/21"]
karels [label="Mike Karels\nkarels@FreeBSD.org\n2016/06/09"]
ken [label="Ken Merry\nken@FreeBSD.org\n1998/09/08"]
kensmith [label="Ken Smith\nkensmith@FreeBSD.org\n2004/01/23"]
kevans [label="Kyle Evans\nkevans@FreeBSD.org\n2017/06/20"]
kevlo [label="Kevin Lo\nkevlo@FreeBSD.org\n2006/07/23"]
kib [label="Konstantin Belousov\nkib@FreeBSD.org\n2006/06/03"]
kibab [label="Ilya Bakulin\nkibab@FreeBSD.org\n2017/09/02"]
kmacy [label="Kip Macy\nkmacy@FreeBSD.org\n2005/06/01"]
kp [label="Kristof Provost\nkp@FreeBSD.org\n2015/03/22"]
landonf [label="Landon Fuller\nlandonf@FreeBSD.org\n2016/05/31"]
le [label="Lukas Ertl\nle@FreeBSD.org\n2004/02/02"]
leitao [label="Breno Leitao\nleitao@FreeBSD.org\n2018/05/22"]
lidl [label="Kurt Lidl\nlidl@FreeBSD.org\n2015/10/21"]
loos [label="Luiz Otavio O Souza\nloos@FreeBSD.org\n2013/07/03"]
lstewart [label="Lawrence Stewart\nlstewart@FreeBSD.org\n2008/10/06"]
luporl [label="Leandro Lupori\nluporl@FreeBSD.org\n2018/05/21"]
lwhsu [label="Li-Wen Hsu\nlwhsu@FreeBSD.org\n2018/08/09"]
manu [label="Emmanuel Vadot\nmanu@FreeBSD.org\n2016/04/24"]
marcel [label="Marcel Moolenaar\nmarcel@FreeBSD.org\n1999/07/03"]
marius [label="Marius Strobl\nmarius@FreeBSD.org\n2004/04/17"]
markj [label="Mark Johnston\nmarkj@FreeBSD.org\n2012/12/18"]
markm [label="Mark Murray\nmarkm@FreeBSD.org\n1995/04/24"]
markus [label="Markus Brueffer\nmarkus@FreeBSD.org\n2006/06/01"]
matteo [label="Matteo Riondato\nmatteo@FreeBSD.org\n2006/01/18"]
mav [label="Alexander Motin\nmav@FreeBSD.org\n2007/04/12"]
maxim [label="Maxim Konovalov\nmaxim@FreeBSD.org\n2002/02/07"]
mdf [label="Matthew Fleming\nmdf@FreeBSD.org\n2010/06/04"]
mdodd [label="Matthew N. Dodd\nmdodd@FreeBSD.org\n1999/07/27"]
melifaro [label="Alexander V. Chernikov\nmelifaro@FreeBSD.org\n2011/10/04"]
miwi [label="Martin Wilke\nmiwi@FreeBSD.org\n2011/02/18\n2018/06/14"]
mizhka [label="Michael Zhilin\nmizhka@FreeBSD.org\n2016/07/19"]
mjacob [label="Matt Jacob\nmjacob@FreeBSD.org\n1997/08/13"]
mjg [label="Mateusz Guzik\nmjg@FreeBSD.org\n2012/06/04"]
mjoras [label="Matt Joras\nmjoras@FreeBSD.org\n2017/07/12"]
mlaier [label="Max Laier\nmlaier@FreeBSD.org\n2004/02/10"]
mmel [label="Michal Meloun\nmmel@FreeBSD.org\n2015/11/01"]
monthadar [label="Monthadar Al Jaberi\nmonthadar@FreeBSD.org\n2012/04/02"]
mp [label="Mark Peek\nmp@FreeBSD.org\n2001/07/27"]
mr [label="Michael Reifenberger\nmr@FreeBSD.org\n2001/09/30"]
mw [label="Marcin Wojtas\nmw@FreeBSD.org\n2017/07/18"]
neel [label="Neel Natu\nneel@FreeBSD.org\n2009/09/20"]
netchild [label="Alexander Leidinger\nnetchild@FreeBSD.org\n2005/03/31"]
ngie [label="Enji Cooper\nngie@FreeBSD.org\n2014/07/27"]
nork [label="Norikatsu Shigemura\nnork@FreeBSD.org\n2009/06/09"]
np [label="Navdeep Parhar\nnp@FreeBSD.org\n2009/06/05"]
nwhitehorn [label="Nathan Whitehorn\nnwhitehorn@FreeBSD.org\n2008/07/03"]
n_hibma [label="Nick Hibma\nn_hibma@FreeBSD.org\n1998/11/26"]
obrien [label="David E. O'Brien\nobrien@FreeBSD.org\n1996/10/29"]
olli [label="Oliver Fromme\nolli@FreeBSD.org\n2008/02/14"]
oshogbo [label="Mariusz Zaborski\noshogbo@FreeBSD.org\n2015/04/15"]
peadar [label="Peter Edwards\npeadar@FreeBSD.org\n2004/03/08"]
peter [label="Peter Wemm\npeter@FreeBSD.org\n1995/07/04"]
peterj [label="Peter Jeremy\npeterj@FreeBSD.org\n2012/09/14"]
pfg [label="Pedro Giffuni\npfg@FreeBSD.org\n2011/12/01"]
phil [label="Phil Shafer\nphil@FreeBSD.org\n2015/12/30"]
philip [label="Philip Paeps\nphilip@FreeBSD.org\n2004/01/21"]
phk [label="Poul-Henning Kamp\nphk@FreeBSD.org\n1994/02/21"]
pho [label="Peter Holm\npho@FreeBSD.org\n2008/11/16"]
pjd [label="Pawel Jakub Dawidek\npjd@FreeBSD.org\n2004/02/02"]
pkelsey [label="Patrick Kelsey\npkelsey@FreeBSD.org\n2014/05/29"]
pluknet [label="Sergey Kandaurov\npluknet@FreeBSD.org\n2010/10/05"]
ps [label="Paul Saab\nps@FreeBSD.org\n2000/02/23"]
qingli [label="Qing Li\nqingli@FreeBSD.org\n2005/04/13"]
ram [label="Ram Kishore Vegesna\nram@FreeBSD.org\n2018/04/04"]
ray [label="Aleksandr Rybalko\nray@FreeBSD.org\n2011/05/25"]
rdivacky [label="Roman Divacky\nrdivacky@FreeBSD.org\n2008/03/13"]
remko [label="Remko Lodder\nremko@FreeBSD.org\n2007/02/23"]
rgrimes [label="Rodney W. Grimes\nrgrimes@FreeBSD.org\n1993/06/12\n2017/03/03"]
rik [label="Roman Kurakin\nrik@FreeBSD.org\n2003/12/18"]
rlibby [label="Ryan Libby\nrlibby@FreeBSD.org\n2017/06/07"]
rmacklem [label="Rick Macklem\nrmacklem@FreeBSD.org\n2009/03/27"]
rmh [label="Robert Millan\nrmh@FreeBSD.org\n2011/09/18"]
rnoland [label="Robert Noland\nrnoland@FreeBSD.org\n2008/09/15"]
roberto [label="Ollivier Robert\nroberto@FreeBSD.org\n1995/02/22"]
rodrigc [label="Craig Rodrigues\nrodrigc@FreeBSD.org\n2005/05/14"]
royger [label="Roger Pau Monne\nroyger@FreeBSD.org\n2013/11/26"]
rpaulo [label="Rui Paulo\nrpaulo@FreeBSD.org\n2007/09/25"]
rpokala [label="Ravi Pokala\nrpokala@FreeBSD.org\n2015/11/19"]
rrs [label="Randall R Stewart\nrrs@FreeBSD.org\n2007/02/08"]
rse [label="Ralf S. Engelschall\nrse@FreeBSD.org\n1997/07/31"]
rstone [label="Ryan Stone\nrstone@FreeBSD.org\n2010/04/19"]
ru [label="Ruslan Ermilov\nru@FreeBSD.org\n1999/05/27"]
rwatson [label="Robert N. M. Watson\nrwatson@FreeBSD.org\n1999/12/16"]
sam [label="Sam Leffler\nsam@FreeBSD.org\n2002/07/02"]
sanpei [label="MIHIRA Sanpei Yoshiro\nsanpei@FreeBSD.org\n2000/06/19"]
sbruno [label="Sean Bruno\nsbruno@FreeBSD.org\n2008/08/02"]
scf [label="Sean C. Farley\nscf@FreeBSD.org\n2007/06/24"]
schweikh [label="Jens Schweikhardt\nschweikh@FreeBSD.org\n2001/04/06"]
scottl [label="Scott Long\nscottl@FreeBSD.org\n2000/09/28"]
se [label="Stefan Esser\nse@FreeBSD.org\n1994/08/26"]
sephe [label="Sepherosa Ziehau\nsephe@FreeBSD.org\n2007/03/28"]
sepotvin [label="Stephane E. Potvin\nsepotvin@FreeBSD.org\n2007/02/15"]
sgalabov [label="Stanislav Galabov\nsgalabov@FreeBSD.org\n2016/02/24"]
shurd [label="Stephen Hurd\nshurd@FreeBSD.org\n2017/09/02"]
simon [label="Simon L. Nielsen\nsimon@FreeBSD.org\n2006/03/07"]
sjg [label="Simon J. Gerraty\nsjg@FreeBSD.org\n2012/10/23"]
skra [label="Svatopluk Kraus\nskra@FreeBSD.org\n2015/10/28"]
slavash [label="Slava Shwartsman\nslavash@FreeBSD.org\n2018/02/08"]
slm [label="Stephen McConnell\nslm@FreeBSD.org\n2014/05/07"]
smh [label="Steven Hartland\nsmh@FreeBSD.org\n2012/11/12"]
sobomax [label="Maxim Sobolev\nsobomax@FreeBSD.org\n2001/07/25"]
sos [label="Soren Schmidt\nsos@FreeBSD.org\n????/??/??"]
sson [label="Stacey Son\nsson@FreeBSD.org\n2008/07/08"]
stas [label="Stanislav Sedov\nstas@FreeBSD.org\n2008/08/22"]
stevek [label="Stephen J. Kiernan\nstevek@FreeBSD.org\n2016/07/18"]
suz [label="SUZUKI Shinsuke\nsuz@FreeBSD.org\n2002/03/26"]
syrinx [label="Shteryana Shopova\nsyrinx@FreeBSD.org\n2006/10/07"]
takawata [label="Takanori Watanabe\ntakawata@FreeBSD.org\n2000/07/06"]
theraven [label="David Chisnall\ntheraven@FreeBSD.org\n2011/11/11"]
thj [label="Tom Jones\nthj@FreeBSD.org\n2018/04/07"]
thompsa [label="Andrew Thompson\nthompsa@FreeBSD.org\n2005/05/25"]
ticso [label="Bernd Walter\nticso@FreeBSD.org\n2002/01/31"]
tijl [label="Tijl Coosemans\ntijl@FreeBSD.org\n2010/07/16"]
tsoome [label="Toomas Soome\ntsoome@FreeBSD.org\n2016/08/10"]
trasz [label="Edward Tomasz Napierala\ntrasz@FreeBSD.org\n2008/08/22"]
trhodes [label="Tom Rhodes\ntrhodes@FreeBSD.org\n2002/05/28"]
trociny [label="Mikolaj Golub\ntrociny@FreeBSD.org\n2011/03/10"]
tuexen [label="Michael Tuexen\ntuexen@FreeBSD.org\n2009/06/06"]
tychon [label="Tycho Nightingale\ntychon@FreeBSD.org\n2014/01/21"]
ume [label="Hajimu UMEMOTO\nume@FreeBSD.org\n2000/02/26"]
uqs [label="Ulrich Spoerlein\nuqs@FreeBSD.org\n2010/01/28"]
vangyzen [label="Eric van Gyzen\nvangyzen@FreeBSD.org\n2015/03/08"]
vanhu [label="Yvan Vanhullebus\nvanhu@FreeBSD.org\n2008/07/21"]
versus [label="Konrad Jankowski\nversus@FreeBSD.org\n2008/10/27"]
weongyo [label="Weongyo Jeong\nweongyo@FreeBSD.org\n2007/12/21"]
wes [label="Wes Peters\nwes@FreeBSD.org\n1998/11/25"]
whu [label="Wei Hu\nwhu@FreeBSD.org\n2015/02/11"]
will [label="Will Andrews\nwill@FreeBSD.org\n2000/03/20"]
wkoszek [label="Wojciech A. Koszek\nwkoszek@FreeBSD.org\n2006/02/21"]
wma [label="Wojciech Macek\nwma@FreeBSD.org\n2016/01/18"]
wollman [label="Garrett Wollman\nwollman@FreeBSD.org\n????/??/??"]
wsalamon [label="Wayne Salamon\nwsalamon@FreeBSD.org\n2005/06/25"]
wulf [label="Vladimir Kondratyev\nwulf@FreeBSD.org\n2017/04/27"]
yongari [label="Pyun YongHyeon\nyongari@FreeBSD.org\n2004/08/01"]
yuripv [label="Yuri Pankov\nyuripv@FreeBSD.org\n2018/10/09"]
zbb [label="Zbigniew Bodek\nzbb@FreeBSD.org\n2013/09/02"]
zec [label="Marko Zec\nzec@FreeBSD.org\n2008/06/22"]
zml [label="Zachary Loafman\nzml@FreeBSD.org\n2009/05/27"]
zont [label="Andrey Zonov\nzont@FreeBSD.org\n2012/08/21"]
# Pseudo target representing rev 1.1 of commit.allow
day1 [label="Birth of FreeBSD"]
# Here are the mentor/mentee relationships.
# Group together all the mentees for a particular mentor.
# Keep the list sorted by mentor login.
day1 -> jtc
day1 -> jkh
day1 -> nate
day1 -> rgrimes
day1 -> alm
day1 -> dg
adrian -> avos
adrian -> jmcneill
adrian -> landonf
adrian -> lidl
adrian -> loos
adrian -> mizhka
adrian -> monthadar
adrian -> ray
adrian -> rmh
adrian -> sephe
adrian -> sgalabov
ae -> melifaro
allanjude -> tsoome
alc -> davide
andre -> qingli
andrew -> manu
anholt -> jkim
araujo -> miwi
avg -> art
avg -> eugen
avg -> pluknet
avg -> smh
bapt -> allanjude
bapt -> araujo
bapt -> bdrewery
bapt -> wulf
bde -> rgrimes
benno -> grehan
billf -> dougb
billf -> gad
billf -> jedgar
billf -> jhb
billf -> shafeeq
billf -> will
bmilekic -> csjp
bms -> dhartmei
bms -> mlaier
bms -> thompsa
brian -> joe
brooks -> bushman
brooks -> jamie
brooks -> theraven
brooks -> arichardson
bz -> anchie
bz -> jamie
bz -> syrinx
cognet -> br
cognet -> jceel
cognet -> kevlo
cognet -> ian
cognet -> manu
cognet -> mw
cognet -> wkoszek
cognet -> wma
cognet -> zbb
cperciva -> eadler
cperciva -> flz
cperciva -> randi
cperciva -> simon
csjp -> bushman
das -> kargl
das -> rodrigc
delphij -> gabor
delphij -> rafan
delphij -> sephe
des -> anholt
des -> hmp
des -> mike
des -> olli
des -> ru
des -> bapt
dds -> versus
dfr -> gallatin
dfr -> zml
dg -> peter
dim -> theraven
dwmalone -> fanf
dwmalone -> peadar
dwmalone -> snb
eadler -> bcran
ed -> dim
ed -> gavin
ed -> jilles
ed -> rdivacky
ed -> uqs
eivind -> des
eivind -> rwatson
emaste -> achim
emaste -> bwidawsk
emaste -> dteske
emaste -> kevans
emaste -> lwhsu
emaste -> markj
emaste -> ngie
emaste -> rstone
emax -> markus
erj -> jeb
fjoe -> versus
gallatin -> ticso
gavin -> versus
gibbs -> mjacob
gibbs -> njl
gibbs -> royger
gibbs -> whu
glebius -> mav
gnn -> jinmei
gnn -> rrs
gnn -> ivoras
gnn -> vanhu
gnn -> lstewart
gnn -> np
gnn -> davide
gnn -> arybchik
gnn -> erj
gnn -> kp
gnn -> jtl
gnn -> karels
gonzo -> jmcneill
gonzo -> wulf
grehan -> bryanv
grehan -> rgrimes
grog -> edwin
grog -> le
grog -> peterj
hselasky -> slavash
imp -> akiyama
imp -> ambrisko
imp -> andrew
imp -> bmah
imp -> bruno
imp -> chuck
imp -> dmlb
imp -> emax
imp -> furuta
imp -> joe
+imp -> johalun
imp -> jon
imp -> keichii
imp -> kibab
imp -> mb
imp -> mr
imp -> neel
imp -> non
imp -> nork
imp -> onoe
imp -> remko
imp -> rik
imp -> rink
imp -> sanpei
imp -> shiba
imp -> takawata
imp -> toshi
imp -> tsoome
imp -> uch
jake -> bms
jake -> gordon
jake -> harti
jake -> jeff
jake -> kmacy
jake -> robert
jake -> yongari
jb -> sson
jdp -> fjoe
jfv -> erj
jhb -> arr
jhb -> avg
jhb -> jch
jhb -> jeff
jhb -> kbyanc
jhb -> peterj
jhb -> pfg
jhb -> rnoland
jhb -> rpokala
jhb -> arichardson
jhibbits -> leitao
jhibbits -> luporl
jimharris -> carl
jkh -> dfr
jkh -> gj
jkh -> grog
jkh -> imp
jkh -> jlemon
jkh -> joerg
jkh -> jwd
jkh -> msmith
jkh -> murray
jkh -> phk
jkh -> wes
jkh -> yar
jkoshy -> kaiw
jkoshy -> fabient
jkoshy -> rstone
jlemon -> bmilekic
jlemon -> brooks
jmallett -> pkelsey
jmmv -> ngie
joerg -> brian
joerg -> eik
joerg -> jmg
joerg -> le
joerg -> netchild
joerg -> schweikh
jtl -> ngie
jtl -> thj
julian -> glebius
julian -> davidxu
julian -> archie
julian -> adrian
julian -> zec
julian -> mp
kan -> kib
ken -> asomers
ken -> chuck
ken -> ram
ken -> slm
ken -> will
kib -> ae
kib -> badger
kib -> dchagin
kib -> gjb
kib -> jah
kib -> jlh
kib -> jpaetzel
kib -> lulf
kib -> melifaro
kib -> mmel
kib -> pho
kib -> pluknet
kib -> rdivacky
kib -> rmacklem
kib -> rmh
kib -> skra
kib -> slavash
kib -> stas
kib -> tijl
kib -> trociny
kib -> vangyzen
kib -> yuripv
kib -> zont
kmacy -> lstewart
marcel -> allanjude
marcel -> art
marcel -> arun
marcel -> marius
marcel -> nwhitehorn
marcel -> sjg
markj -> cem
markj -> lwhsu
markj -> rlibby
markm -> jasone
markm -> sheldonh
mav -> ae
mav -> eugen
mav -> ram
mdf -> gleb
mdodd -> jake
mike -> das
mlaier -> benjsc
mlaier -> dhartmei
mlaier -> thompsa
mlaier -> eri
msmith -> cokane
msmith -> jasone
msmith -> scottl
murray -> delphij
mux -> cognet
mux -> dumbbell
netchild -> ariff
njl -> marks
njl -> philip
njl -> rpaulo
njl -> sepotvin
nwhitehorn -> andreast
nwhitehorn -> jhibbits
nwhitehorn -> leitao
nwhitehorn -> luporl
obrien -> benno
obrien -> groudier
obrien -> gshapiro
obrien -> kan
obrien -> sam
pfg -> fsu
peter -> asmodai
peter -> jayanth
peter -> ps
philip -> benl
philip -> ed
philip -> jls
philip -> matteo
philip -> uqs
philip -> kp
phk -> jkoshy
phk -> mux
phk -> rgrimes
pjd -> def
pjd -> kib
pjd -> lulf
pjd -> oshogbo
pjd -> smh
pjd -> trociny
rgrimes -> markm
rmacklem -> jwd
royger -> whu
rpaulo -> avg
rpaulo -> bschmidt
rpaulo -> dim
rpaulo -> jmmv
rpaulo -> lidl
rpaulo -> ngie
rrs -> bcran
rrs -> jchandra
rrs -> tuexen
rstone -> markj
rstone -> mjoras
ru -> ceri
ru -> cjc
ru -> eik
ru -> maxim
ru -> sobomax
rwatson -> adrian
rwatson -> antoine
rwatson -> bmah
rwatson -> brueffer
rwatson -> bz
rwatson -> cperciva
rwatson -> emaste
rwatson -> gnn
rwatson -> jh
rwatson -> jonathan
rwatson -> kensmith
rwatson -> kmacy
rwatson -> linimon
rwatson -> rmacklem
rwatson -> shafeeq
rwatson -> tmm
rwatson -> trasz
rwatson -> trhodes
rwatson -> wsalamon
rodrigc -> araujo
sam -> andre
sam -> benjsc
sam -> sephe
sbruno -> hiren
sbruno -> jeb
sbruno -> jimharris
sbruno -> shurd
schweikh -> dds
scottl -> achim
scottl -> jimharris
scottl -> pjd
scottl -> sah
scottl -> sbruno
scottl -> slm
scottl -> yongari
sephe -> dexuan
sheldonh -> dwmalone
sheldonh -> iedowse
shin -> ume
simon -> benl
sjg -> phil
sjg -> stevek
sos -> marcel
stas -> ganbold
theraven -> phil
thompsa -> weongyo
thompsa -> eri
trasz -> jh
trasz -> mjg
ume -> jinmei
ume -> suz
ume -> tshiozak
vangyzen -> badger
vangyzen -> dab
wes -> scf
wkoszek -> jceel
wollman -> gad
zml -> mdf
zml -> zack
}
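The mentor/mentee section above is plain Graphviz `a -> b` edge syntax, so it can be tallied mechanically. A minimal sketch (Python; it assumes each edge sits on its own line in exactly the `mentor -> mentee` form used above, and ignores the `name [label=...]` node lines):

```python
import re

# Matches one Graphviz edge line such as "imp -> tsoome".
DOT_EDGE = re.compile(r"^(\w+)\s*->\s*(\w+)$")

def mentee_counts(lines):
    """Count mentees per mentor from 'a -> b' edge lines; skip node lines."""
    counts = {}
    for line in lines:
        m = DOT_EDGE.match(line.strip())
        if m:
            mentor, _mentee = m.groups()
            counts[mentor] = counts.get(mentor, 0) + 1
    return counts

edges = [
    "imp -> tsoome",
    "imp -> remko",
    "kib -> pho",
    'pho [label="Peter Holm\\npho@FreeBSD.org\\n2008/11/16"]',
]
print(mentee_counts(edges))  # {'imp': 2, 'kib': 1}
```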
Index: projects/clang800-import/share/mk/bsd.cpu.mk
===================================================================
--- projects/clang800-import/share/mk/bsd.cpu.mk (revision 343955)
+++ projects/clang800-import/share/mk/bsd.cpu.mk (revision 343956)
@@ -1,406 +1,406 @@
# $FreeBSD$
# Set default CPU compile flags and baseline CPUTYPE for each arch. The
# compile flags must support the minimum CPU type for each architecture but
# may tune support for more advanced processors.
.if !defined(CPUTYPE) || empty(CPUTYPE)
_CPUCFLAGS =
. if ${MACHINE_CPUARCH} == "aarch64"
MACHINE_CPU = arm64
. elif ${MACHINE_CPUARCH} == "amd64"
MACHINE_CPU = amd64 sse2 sse mmx
. elif ${MACHINE_CPUARCH} == "arm"
MACHINE_CPU = arm
. elif ${MACHINE_CPUARCH} == "i386"
MACHINE_CPU = i486
. elif ${MACHINE_CPUARCH} == "mips"
MACHINE_CPU = mips
. elif ${MACHINE_CPUARCH} == "powerpc"
MACHINE_CPU = aim
. elif ${MACHINE_CPUARCH} == "riscv"
MACHINE_CPU = riscv
. elif ${MACHINE_CPUARCH} == "sparc64"
MACHINE_CPU = ultrasparc
. endif
.else
# Handle aliases (not documented in make.conf to avoid user confusion
# between e.g. i586 and pentium)
. if ${MACHINE_CPUARCH} == "amd64" || ${MACHINE_CPUARCH} == "i386"
. if ${CPUTYPE} == "barcelona"
CPUTYPE = amdfam10
. elif ${CPUTYPE} == "core-avx2"
CPUTYPE = haswell
. elif ${CPUTYPE} == "core-avx-i"
CPUTYPE = ivybridge
. elif ${CPUTYPE} == "corei7-avx"
CPUTYPE = sandybridge
. elif ${CPUTYPE} == "corei7"
CPUTYPE = nehalem
. elif ${CPUTYPE} == "slm"
CPUTYPE = silvermont
. elif ${CPUTYPE} == "atom"
CPUTYPE = bonnell
. elif ${CPUTYPE} == "core"
CPUTYPE = prescott
. endif
. if ${MACHINE_CPUARCH} == "amd64"
. if ${CPUTYPE} == "prescott"
CPUTYPE = nocona
. endif
. else
. if ${CPUTYPE} == "k7"
CPUTYPE = athlon
. elif ${CPUTYPE} == "p4"
CPUTYPE = pentium4
. elif ${CPUTYPE} == "p4m"
CPUTYPE = pentium4m
. elif ${CPUTYPE} == "p3"
CPUTYPE = pentium3
. elif ${CPUTYPE} == "p3m"
CPUTYPE = pentium3m
. elif ${CPUTYPE} == "p-m"
CPUTYPE = pentium-m
. elif ${CPUTYPE} == "p2"
CPUTYPE = pentium2
. elif ${CPUTYPE} == "i686"
CPUTYPE = pentiumpro
. elif ${CPUTYPE} == "i586/mmx"
CPUTYPE = pentium-mmx
. elif ${CPUTYPE} == "i586"
CPUTYPE = pentium
. endif
. endif
. elif ${MACHINE_ARCH} == "sparc64"
. if ${CPUTYPE} == "us"
CPUTYPE = ultrasparc
. elif ${CPUTYPE} == "us3"
CPUTYPE = ultrasparc3
. endif
. endif
###############################################################################
# Logic to set up correct gcc optimization flag. This must be included
# after /etc/make.conf so it can react to the local value of CPUTYPE
# defined therein. Consult:
# http://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html
# http://gcc.gnu.org/onlinedocs/gcc/RS-6000-and-PowerPC-Options.html
# http://gcc.gnu.org/onlinedocs/gcc/MIPS-Options.html
# http://gcc.gnu.org/onlinedocs/gcc/SPARC-Options.html
# http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86_002d64-Options.html
. if ${MACHINE_CPUARCH} == "i386"
. if ${CPUTYPE} == "crusoe"
_CPUCFLAGS = -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
. elif ${CPUTYPE} == "k5"
_CPUCFLAGS = -march=pentium
. elif ${CPUTYPE} == "c7"
_CPUCFLAGS = -march=c3-2
. else
_CPUCFLAGS = -march=${CPUTYPE}
. endif
. elif ${MACHINE_CPUARCH} == "amd64"
_CPUCFLAGS = -march=${CPUTYPE}
. elif ${MACHINE_CPUARCH} == "arm"
. if ${CPUTYPE} == "xscale"
#XXX: gcc doesn't seem to like -mcpu=xscale, and dies while rebuilding itself
#_CPUCFLAGS = -mcpu=xscale
_CPUCFLAGS = -march=armv5te -D__XSCALE__
. elif ${CPUTYPE:M*soft*} != ""
_CPUCFLAGS = -mfloat-abi=softfp
. elif ${CPUTYPE} == "cortexa"
_CPUCFLAGS = -march=armv7 -mfpu=vfp
. elif ${CPUTYPE:Marmv[4567]*} != ""
# Handle all the armvX types that FreeBSD runs:
# armv4, armv4t, armv5, armv5te, armv6, armv6t2, armv7, armv7-a, armv7ve
# they require -march=. All the others require -mcpu=.
_CPUCFLAGS = -march=${CPUTYPE}
. else
# Common values for FreeBSD
# arm: (any arm v4 or v5 processor you are targeting)
# arm920t, arm926ej-s, marvell-pj4, fa526, fa626,
# fa606te, fa626te, fa726te
# armv6: (any arm v7 or v8 processor you are targeting and the arm1176jzf-s)
# arm1176jzf-s, generic-armv7-a, cortex-a5, cortex-a7, cortex-a8,
# cortex-a9, cortex-a12, cortex-a15, cortex-a17, cortex-a53, cortex-a57,
# cortex-a72, exynos-m1
_CPUCFLAGS = -mcpu=${CPUTYPE}
. endif
. elif ${MACHINE_ARCH} == "powerpc"
. if ${CPUTYPE} == "e500"
_CPUCFLAGS = -Wa,-me500 -msoft-float
. else
_CPUCFLAGS = -mcpu=${CPUTYPE} -mno-powerpc64
. endif
. elif ${MACHINE_ARCH} == "powerpcspe"
-_CPUCFLAGS = -Wa,-me500 -mspe=yes -mabi=spe -mfloat-gprs=double
+_CPUCFLAGS = -Wa,-me500 -mspe=yes -mabi=spe -mfloat-gprs=double -mcpu=8548
. elif ${MACHINE_ARCH} == "powerpc64"
_CPUCFLAGS = -mcpu=${CPUTYPE}
. elif ${MACHINE_CPUARCH} == "mips"
# mips[1234], mips32, mips64, and all later releases need to have mips
# preserved (releases later than r2 require external toolchain)
. if ${CPUTYPE:Mmips32*} != "" || ${CPUTYPE:Mmips64*} != "" || \
${CPUTYPE:Mmips[1234]} != ""
_CPUCFLAGS = -march=${CPUTYPE}
. else
# Default -march to the CPUTYPE passed in, with mips stripped off so we
# accept either mips4kc or 4kc, mostly for historical reasons
# Typical values for cores:
# 4kc, 24kc, 34kc, 74kc, 1004kc, octeon, octeon+, octeon2, octeon3,
# sb1, xlp, xlr
_CPUCFLAGS = -march=${CPUTYPE:S/^mips//}
. endif
. elif ${MACHINE_ARCH} == "sparc64"
. if ${CPUTYPE} == "v9"
_CPUCFLAGS = -mcpu=v9
. elif ${CPUTYPE} == "ultrasparc"
_CPUCFLAGS = -mcpu=ultrasparc
. elif ${CPUTYPE} == "ultrasparc3"
_CPUCFLAGS = -mcpu=ultrasparc3
. endif
. elif ${MACHINE_CPUARCH} == "aarch64"
_CPUCFLAGS = -mcpu=${CPUTYPE}
. endif
# Set up the list of CPU features based on the CPU type. This is an
# unordered list to make it easy for client makefiles to test for the
# presence of a CPU feature.
########## i386
. if ${MACHINE_CPUARCH} == "i386"
. if ${CPUTYPE} == "znver1"
MACHINE_CPU = avx2 avx sse42 sse41 ssse3 sse4a sse3 sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "bdver4"
MACHINE_CPU = xop avx2 avx sse42 sse41 ssse3 sse4a sse3 sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "bdver3" || ${CPUTYPE} == "bdver2" || \
${CPUTYPE} == "bdver1"
MACHINE_CPU = xop avx sse42 sse41 ssse3 sse4a sse3 sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "btver2"
MACHINE_CPU = avx sse42 sse41 ssse3 sse4a sse3 sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "btver1"
MACHINE_CPU = ssse3 sse4a sse3 sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "amdfam10"
MACHINE_CPU = athlon-xp athlon k7 3dnow sse4a sse3 sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "opteron-sse3" || ${CPUTYPE} == "athlon64-sse3"
MACHINE_CPU = athlon-xp athlon k7 3dnow sse3 sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "opteron" || ${CPUTYPE} == "athlon64" || \
${CPUTYPE} == "athlon-fx"
MACHINE_CPU = athlon-xp athlon k7 3dnow sse2 sse mmx k6 k5 i586
. elif ${CPUTYPE} == "athlon-mp" || ${CPUTYPE} == "athlon-xp" || \
${CPUTYPE} == "athlon-4"
MACHINE_CPU = athlon-xp athlon k7 3dnow sse mmx k6 k5 i586
. elif ${CPUTYPE} == "athlon" || ${CPUTYPE} == "athlon-tbird"
MACHINE_CPU = athlon k7 3dnow mmx k6 k5 i586
. elif ${CPUTYPE} == "k6-3" || ${CPUTYPE} == "k6-2" || ${CPUTYPE} == "geode"
MACHINE_CPU = 3dnow mmx k6 k5 i586
. elif ${CPUTYPE} == "k6"
MACHINE_CPU = mmx k6 k5 i586
. elif ${CPUTYPE} == "k5"
MACHINE_CPU = k5 i586
. elif ${CPUTYPE} == "cannonlake" || ${CPUTYPE} == "knm" || \
${CPUTYPE} == "skylake-avx512" || ${CPUTYPE} == "knl"
MACHINE_CPU = avx512 avx2 avx sse42 sse41 ssse3 sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "skylake" || ${CPUTYPE} == "broadwell" || \
${CPUTYPE} == "haswell"
MACHINE_CPU = avx2 avx sse42 sse41 ssse3 sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "ivybridge" || ${CPUTYPE} == "sandybridge"
MACHINE_CPU = avx sse42 sse41 ssse3 sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "goldmont" || ${CPUTYPE} == "westmere" || \
${CPUTYPE} == "nehalem" || ${CPUTYPE} == "silvermont"
MACHINE_CPU = sse42 sse41 ssse3 sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "penryn"
MACHINE_CPU = sse41 ssse3 sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "core2" || ${CPUTYPE} == "bonnell"
MACHINE_CPU = ssse3 sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "yonah" || ${CPUTYPE} == "prescott"
MACHINE_CPU = sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "pentium4" || ${CPUTYPE} == "pentium4m" || \
${CPUTYPE} == "pentium-m"
MACHINE_CPU = sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "pentium3" || ${CPUTYPE} == "pentium3m"
MACHINE_CPU = sse i686 mmx i586
. elif ${CPUTYPE} == "pentium2"
MACHINE_CPU = i686 mmx i586
. elif ${CPUTYPE} == "pentiumpro"
MACHINE_CPU = i686 i586
. elif ${CPUTYPE} == "pentium-mmx"
MACHINE_CPU = mmx i586
. elif ${CPUTYPE} == "pentium"
MACHINE_CPU = i586
. elif ${CPUTYPE} == "c7"
MACHINE_CPU = sse3 sse2 sse i686 mmx i586
. elif ${CPUTYPE} == "c3-2"
MACHINE_CPU = sse i686 mmx i586
. elif ${CPUTYPE} == "c3"
MACHINE_CPU = 3dnow mmx i586
. elif ${CPUTYPE} == "winchip2"
MACHINE_CPU = 3dnow mmx
. elif ${CPUTYPE} == "winchip-c6"
MACHINE_CPU = mmx
. endif
MACHINE_CPU += i486
########## amd64
. elif ${MACHINE_CPUARCH} == "amd64"
. if ${CPUTYPE} == "znver1"
MACHINE_CPU = avx2 avx sse42 sse41 ssse3 sse4a sse3
. elif ${CPUTYPE} == "bdver4"
MACHINE_CPU = xop avx2 avx sse42 sse41 ssse3 sse4a sse3
. elif ${CPUTYPE} == "bdver3" || ${CPUTYPE} == "bdver2" || \
${CPUTYPE} == "bdver1"
MACHINE_CPU = xop avx sse42 sse41 ssse3 sse4a sse3
. elif ${CPUTYPE} == "btver2"
MACHINE_CPU = avx sse42 sse41 ssse3 sse4a sse3
. elif ${CPUTYPE} == "btver1"
MACHINE_CPU = ssse3 sse4a sse3
. elif ${CPUTYPE} == "amdfam10"
MACHINE_CPU = k8 3dnow sse4a sse3
. elif ${CPUTYPE} == "opteron-sse3" || ${CPUTYPE} == "athlon64-sse3" || \
${CPUTYPE} == "k8-sse3"
MACHINE_CPU = k8 3dnow sse3
. elif ${CPUTYPE} == "opteron" || ${CPUTYPE} == "athlon64" || \
${CPUTYPE} == "athlon-fx" || ${CPUTYPE} == "k8"
MACHINE_CPU = k8 3dnow
. elif ${CPUTYPE} == "cannonlake" || ${CPUTYPE} == "knm" || \
${CPUTYPE} == "skylake-avx512" || ${CPUTYPE} == "knl"
MACHINE_CPU = avx512 avx2 avx sse42 sse41 ssse3 sse3
. elif ${CPUTYPE} == "skylake" || ${CPUTYPE} == "broadwell" || \
${CPUTYPE} == "haswell"
MACHINE_CPU = avx2 avx sse42 sse41 ssse3 sse3
. elif ${CPUTYPE} == "ivybridge" || ${CPUTYPE} == "sandybridge"
MACHINE_CPU = avx sse42 sse41 ssse3 sse3
. elif ${CPUTYPE} == "goldmont" || ${CPUTYPE} == "westmere" || \
${CPUTYPE} == "nehalem" || ${CPUTYPE} == "silvermont"
MACHINE_CPU = sse42 sse41 ssse3 sse3
. elif ${CPUTYPE} == "penryn"
MACHINE_CPU = sse41 ssse3 sse3
. elif ${CPUTYPE} == "core2" || ${CPUTYPE} == "bonnell"
MACHINE_CPU = ssse3 sse3
. elif ${CPUTYPE} == "nocona"
MACHINE_CPU = sse3
. endif
MACHINE_CPU += amd64 sse2 sse mmx
########## Mips
. elif ${MACHINE_CPUARCH} == "mips"
MACHINE_CPU = mips
########## powerpc
. elif ${MACHINE_ARCH} == "powerpc"
. if ${CPUTYPE} == "e500"
MACHINE_CPU = booke softfp
. endif
########## riscv
. elif ${MACHINE_CPUARCH} == "riscv"
MACHINE_CPU = riscv
########## sparc64
. elif ${MACHINE_ARCH} == "sparc64"
. if ${CPUTYPE} == "v9"
MACHINE_CPU = v9
. elif ${CPUTYPE} == "ultrasparc"
MACHINE_CPU = v9 ultrasparc
. elif ${CPUTYPE} == "ultrasparc3"
MACHINE_CPU = v9 ultrasparc ultrasparc3
. endif
. endif
.endif
.if ${MACHINE_CPUARCH} == "mips"
CFLAGS += -G0
. if ${MACHINE_ARCH:Mmips*el*} != ""
AFLAGS += -EL
CFLAGS += -EL
LDFLAGS += -EL
. else
AFLAGS += -EB
CFLAGS += -EB
LDFLAGS += -EB
. endif
. if ${MACHINE_ARCH:Mmips64*} != ""
AFLAGS+= -mabi=64
CFLAGS+= -mabi=64
LDFLAGS+= -mabi=64
. elif ${MACHINE_ARCH:Mmipsn32*} != ""
AFLAGS+= -mabi=n32
CFLAGS+= -mabi=n32
LDFLAGS+= -mabi=n32
. else
AFLAGS+= -mabi=32
CFLAGS+= -mabi=32
LDFLAGS+= -mabi=32
. endif
. if ${MACHINE_ARCH:Mmips*hf}
CFLAGS += -mhard-float
. else
CFLAGS += -msoft-float
. endif
.endif
########## arm
.if ${MACHINE_CPUARCH} == "arm"
MACHINE_CPU += arm
. if ${MACHINE_ARCH:Marmv6*} != ""
MACHINE_CPU += armv6
. endif
. if ${MACHINE_ARCH:Marmv7*} != ""
MACHINE_CPU += armv7
. endif
# armv6 and armv7 are hybrids. They can use the softfp ABI, but don't emulate
# floating point in the general case, so don't define softfp for them at this
# time. arm is pure softfp, so define it there.
. if ${MACHINE_ARCH:Marmv[67]*} == ""
MACHINE_CPU += softfp
. endif
# Normally armv6 and armv7 are hard float ABI from FreeBSD 11 onwards. However
# when CPUTYPE has 'soft' in it, we use the soft-float ABI to allow building of
# soft-float ABI libraries. In this case, we have to add the -mfloat-abi=softfp
# to force that.
.if ${MACHINE_ARCH:Marmv[67]*} && defined(CPUTYPE) && ${CPUTYPE:M*soft*} != ""
# Needs to be CFLAGS not _CPUCFLAGS because it's needed for the ABI
# not a nice optimization.
CFLAGS += -mfloat-abi=softfp
.endif
.endif
.if ${MACHINE_ARCH} == "powerpcspe"
-CFLAGS += -mcpu=8540 -Wa,-me500 -mspe=yes -mabi=spe -mfloat-gprs=double
+CFLAGS += -mcpu=8548 -Wa,-me500 -mspe=yes -mabi=spe -mfloat-gprs=double
.endif
.if ${MACHINE_CPUARCH} == "riscv"
.if ${MACHINE_ARCH:Mriscv*sf}
CFLAGS += -march=rv64imac -mabi=lp64
.else
CFLAGS += -march=rv64imafdc -mabi=lp64d
.endif
.endif
# NB: COPTFLAGS is handled in /usr/src/sys/conf/kern.pre.mk
.if !defined(NO_CPU_CFLAGS)
CFLAGS += ${_CPUCFLAGS}
.endif
#
# Prohibit the compiler from emitting SIMD instructions.
# These flags are added to CFLAGS in areas where the extra context-switch
# cost outweighs the advantages of SIMD instructions.
#
# gcc:
# Setting -mno-mmx implies -mno-3dnow
# Setting -mno-sse implies -mno-sse2, -mno-sse3, -mno-ssse3 and -mfpmath=387
#
# clang:
# Setting -mno-mmx implies -mno-3dnow and -mno-3dnowa
# Setting -mno-sse implies -mno-sse2, -mno-sse3, -mno-ssse3, -mno-sse41 and
# -mno-sse42
# (-mfpmath= is not supported)
#
.if ${MACHINE_CPUARCH} == "i386" || ${MACHINE_CPUARCH} == "amd64"
CFLAGS_NO_SIMD.clang= -mno-avx -mno-avx2
CFLAGS_NO_SIMD= -mno-mmx -mno-sse
.endif
CFLAGS_NO_SIMD += ${CFLAGS_NO_SIMD.${COMPILER_TYPE}}
# Add in any architecture-specific CFLAGS.
# These come from make.conf or the command line or the environment.
CFLAGS += ${CFLAGS.${MACHINE_ARCH}}
CXXFLAGS += ${CXXFLAGS.${MACHINE_ARCH}}
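The CPUTYPE alias handling near the top of bsd.cpu.mk can be modelled as a small lookup. A sketch (Python) of just the shared amd64/i386 alias table plus the amd64-only prescott-to-nocona step; the i386-only aliases (k7, p4, i686, and so on) are omitted here:

```python
# Aliases shared by the amd64 and i386 branches of bsd.cpu.mk.
ALIASES = {
    "barcelona": "amdfam10",
    "core-avx2": "haswell",
    "core-avx-i": "ivybridge",
    "corei7-avx": "sandybridge",
    "corei7": "nehalem",
    "slm": "silvermont",
    "atom": "bonnell",
    "core": "prescott",
}

def canonical_cputype(cputype, cpuarch):
    """Mimic the alias rewriting bsd.cpu.mk does for amd64/i386."""
    cputype = ALIASES.get(cputype, cputype)
    # On amd64 only, "prescott" is further rewritten to "nocona".
    if cpuarch == "amd64" and cputype == "prescott":
        cputype = "nocona"
    return cputype

print(canonical_cputype("core", "amd64"))  # core -> prescott -> nocona
```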
Index: projects/clang800-import/share/mk/src.opts.mk
===================================================================
--- projects/clang800-import/share/mk/src.opts.mk (revision 343955)
+++ projects/clang800-import/share/mk/src.opts.mk (revision 343956)
@@ -1,574 +1,573 @@
# $FreeBSD$
#
# Option file for FreeBSD /usr/src builds.
#
# Users define WITH_FOO and WITHOUT_FOO on the command line or in /etc/src.conf
# and /etc/make.conf files. These translate in the build system to MK_FOO={yes,no}
# with sensible (usually) defaults.
#
# Makefiles must include bsd.opts.mk after defining specific MK_FOO options that
# are applicable for that Makefile (typically there are none, but sometimes there
# are exceptions). Recursive makes usually add MK_FOO=no for options that they wish
# to omit from that make.
#
# Makefiles must include bsd.mkopt.mk before they test the value of any MK_FOO
# variable.
#
# Makefiles may also assume that this file is included by src.opts.mk should it
# need variables defined there prior to the end of the Makefile where
# bsd.{subdir,lib.bin}.mk is traditionally included.
#
# The old-style YES_FOO and NO_FOO are being phased out. No new instances of them
# should be added. Old instances should be removed since they were just to
# bridge the gap between FreeBSD 4 and FreeBSD 5.
#
# Makefiles should never test WITH_FOO or WITHOUT_FOO directly (although an
# exception is made for _WITHOUT_SRCONF which turns off this mechanism
# completely inside bsd.*.mk files).
#
.if !target(__<src.opts.mk>__)
__<src.opts.mk>__:
.include <bsd.own.mk>
#
# Define MK_* variables (which are either "yes" or "no") for users
# to set via WITH_*/WITHOUT_* in /etc/src.conf and override in the
# make(1) environment.
# These should be tested with `== "no"' or `!= "no"' in makefiles.
# The NO_* variables should only be set by makefiles for variables
# that haven't been converted over.
#
# These options are used by the src builds. Those listed in
# __DEFAULT_YES_OPTIONS default to 'yes' and will build unless turned
# off. __DEFAULT_NO_OPTIONS will default to 'no' and won't build
# unless turned on. Any options listed in 'BROKEN_OPTIONS' will be
# hard-wired to 'no'. "Broken" here means not working or
# not-appropriate and/or not supported. It doesn't imply something is
# wrong with the code. There's not a single good word for this, so
# BROKEN was selected as the least imperfect one considered at the
# time. Options are added to BROKEN_OPTIONS list on a per-arch basis.
# At this time, there's no provision for mutually incompatible options.
__DEFAULT_YES_OPTIONS = \
ACCT \
ACPI \
AMD \
APM \
AT \
ATM \
AUDIT \
AUTHPF \
AUTOFS \
BHYVE \
BINUTILS \
BINUTILS_BOOTSTRAP \
BLACKLIST \
BLUETOOTH \
BOOT \
BOOTPARAMD \
BOOTPD \
BSD_CPIO \
BSD_CRTBEGIN \
BSDINSTALL \
BSNMP \
BZIP2 \
CALENDAR \
CAPSICUM \
CASPER \
CCD \
CDDL \
CPP \
CROSS_COMPILER \
CRYPT \
- CTM \
CUSE \
CXX \
CXGBETOOL \
DIALOG \
DICT \
DMAGENT \
DYNAMICROOT \
EE \
EFI \
ELFTOOLCHAIN_BOOTSTRAP \
EXAMPLES \
FDT \
FILE \
FINGER \
FLOPPY \
FMTREE \
FORTH \
FP_LIBC \
FREEBSD_UPDATE \
FTP \
GAMES \
GCOV \
GDB \
GNU_DIFF \
GNU_GREP \
GPIO \
HAST \
HTML \
HYPERV \
ICONV \
INET \
INET6 \
INETD \
IPFILTER \
IPFW \
ISCSI \
JAIL \
KDUMP \
KVM \
LDNS \
LDNS_UTILS \
LEGACY_CONSOLE \
LIB32 \
LIBPTHREAD \
LIBTHR \
LLVM_COV \
LOADER_GELI \
LOADER_LUA \
LOADER_OFW \
LOADER_UBOOT \
LOCALES \
LOCATE \
LPR \
LS_COLORS \
LZMA_SUPPORT \
MAIL \
MAILWRAPPER \
MAKE \
MLX5TOOL \
NDIS \
NETCAT \
NETGRAPH \
NLS_CATALOGS \
NS_CACHING \
NTP \
NVME \
OFED \
OPENSSL \
PAM \
PC_SYSINSTALL \
PF \
PKGBOOTSTRAP \
PMC \
PORTSNAP \
PPP \
QUOTAS \
RADIUS_SUPPORT \
RBOOTD \
REPRODUCIBLE_BUILD \
RESCUE \
ROUTED \
SENDMAIL \
SERVICESDB \
SETUID_LOGIN \
SHAREDOCS \
SOURCELESS \
SOURCELESS_HOST \
SOURCELESS_UCODE \
SVNLITE \
SYSCONS \
SYSTEM_COMPILER \
SYSTEM_LINKER \
TALK \
TCP_WRAPPERS \
TCSH \
TELNET \
TEXTPROC \
TFTP \
TIMED \
UNBOUND \
USB \
UTMPX \
VI \
VT \
WIRELESS \
WPA_SUPPLICANT_EAPOL \
ZFS \
LOADER_ZFS \
ZONEINFO
__DEFAULT_NO_OPTIONS = \
BSD_GREP \
CLANG_EXTRAS \
DTRACE_TESTS \
EXPERIMENTAL \
GNU_GREP_COMPAT \
HESIOD \
LIBSOFT \
LOADER_FIREWIRE \
LOADER_FORCE_LE \
LOADER_VERBOSE \
NAND \
OFED_EXTRA \
OPENLDAP \
RPCBIND_WARMSTART_SUPPORT \
SHARED_TOOLCHAIN \
SORT_THREADS \
SVN \
ZONEINFO_LEAPSECONDS_SUPPORT \
ZONEINFO_OLD_TIMEZONES_SUPPORT \
# LEFT/RIGHT. Left options which default to "yes" unless their corresponding
# RIGHT option is disabled.
__DEFAULT_DEPENDENT_OPTIONS= \
CLANG_FULL/CLANG \
LLVM_TARGET_ALL/CLANG \
# MK_*_SUPPORT options which default to "yes" unless their corresponding
# MK_* variable is set to "no".
#
.for var in \
BLACKLIST \
BZIP2 \
INET \
INET6 \
KERBEROS \
KVM \
NETGRAPH \
PAM \
TESTS \
WIRELESS
__DEFAULT_DEPENDENT_OPTIONS+= ${var}_SUPPORT/${var}
.endfor
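The `.for` loop above expands each listed option to a `FOO_SUPPORT/FOO` dependent pair, i.e. `MK_FOO_SUPPORT` defaults to the value of `MK_FOO`. A rough sketch of that defaulting rule (Python; an approximation only, since the real machinery also honors explicit WITH_/WITHOUT_ overrides):

```python
def dependent_defaults(option_pairs, mk):
    """For each 'DEP/BASE' pair, default MK_DEP to MK_BASE unless already set."""
    for pair in option_pairs:
        dep, base = pair.split("/")
        mk.setdefault("MK_" + dep, mk.get("MK_" + base, "yes"))
    return mk

pairs = ["INET_SUPPORT/INET", "PAM_SUPPORT/PAM"]
settings = {"MK_INET": "no", "MK_PAM": "yes"}
print(dependent_defaults(pairs, settings))
```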
#
# Default behaviour of some options depends on the architecture. Unfortunately
# this means that we have to test TARGET_ARCH (the buildworld case) as well
# as MACHINE_ARCH (the non-buildworld case). Normally TARGET_ARCH is not
# used at all in bsd.*.mk, but we have to make an exception here if we want
# to allow defaults for some things like clang to vary by target architecture.
# Additional per-target behavior should be added rarely, and only after much
# gnashing of teeth and grinding of gears.
#
.if defined(TARGET_ARCH)
__T=${TARGET_ARCH}
.else
__T=${MACHINE_ARCH}
.endif
.if defined(TARGET)
__TT=${TARGET}
.else
__TT=${MACHINE}
.endif
# All supported backends for LLVM_TARGET_XXX
__LLVM_TARGETS= \
aarch64 \
arm \
mips \
powerpc \
sparc \
x86
__LLVM_TARGET_FILT= C/(amd64|i386)/x86/:S/sparc64/sparc/:S/arm64/aarch64/
.for __llt in ${__LLVM_TARGETS}
# Default the given TARGET's LLVM_TARGET support to the value of MK_CLANG.
.if ${__TT:${__LLVM_TARGET_FILT}} == ${__llt}
__DEFAULT_DEPENDENT_OPTIONS+= LLVM_TARGET_${__llt:${__LLVM_TARGET_FILT}:tu}/CLANG
# Disable other targets for arm and armv6, to work around "relocation truncated
# to fit" errors with BFD ld, since libllvm.a will get too large to link.
.elif ${__T} == "arm" || ${__T} == "armv6"
__DEFAULT_NO_OPTIONS+=LLVM_TARGET_${__llt:tu}
# aarch64 needs arm for -m32 support.
.elif ${__TT} == "arm64" && ${__llt} == "arm"
__DEFAULT_DEPENDENT_OPTIONS+= LLVM_TARGET_ARM/LLVM_TARGET_AARCH64
# Default the rest of the LLVM_TARGETs to the value of MK_LLVM_TARGET_ALL
# which is based on MK_CLANG.
.else
__DEFAULT_DEPENDENT_OPTIONS+= LLVM_TARGET_${__llt:${__LLVM_TARGET_FILT}:tu}/LLVM_TARGET_ALL
.endif
.endfor
__DEFAULT_NO_OPTIONS+=LLVM_TARGET_BPF
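The `__LLVM_TARGET_FILT` variable modifier above maps FreeBSD architecture names onto LLVM backend names: `C/(amd64|i386)/x86/` is a regex substitution and the two `:S` modifiers are literal ones. An equivalent sketch (Python; for these inputs the replace-all semantics match make's replace-first behavior):

```python
import re

def llvm_backend(freebsd_arch):
    """Mirror __LLVM_TARGET_FILT: FreeBSD arch name -> LLVM backend name."""
    arch = re.sub(r"amd64|i386", "x86", freebsd_arch)  # the :C modifier
    arch = arch.replace("sparc64", "sparc")            # first :S modifier
    arch = arch.replace("arm64", "aarch64")            # second :S modifier
    return arch

for a in ("amd64", "i386", "sparc64", "arm64", "mips"):
    print(a, "->", llvm_backend(a))
```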
.include <bsd.compiler.mk>
# If the compiler is not C++11 capable, disable Clang and use GCC instead.
# This means that architectures that default to GCC 4.2 cannot build
# Clang without using an external compiler.
.if ${COMPILER_FEATURES:Mc++11} && (${__T} == "aarch64" || \
${__T} == "amd64" || ${__TT} == "arm" || ${__T} == "i386")
# Clang is enabled, and will be installed as the default /usr/bin/cc.
__DEFAULT_YES_OPTIONS+=CLANG CLANG_BOOTSTRAP CLANG_IS_CC LLD
__DEFAULT_NO_OPTIONS+=GCC GCC_BOOTSTRAP GNUCXX GPL_DTC
.elif ${COMPILER_FEATURES:Mc++11} && ${__T:Mriscv*} == "" && ${__T} != "sparc64"
# If an external compiler that supports C++11 is used as ${CC} and Clang
# supports the target, then Clang is enabled but GCC is installed as the
# default /usr/bin/cc.
__DEFAULT_YES_OPTIONS+=CLANG GCC GCC_BOOTSTRAP GNUCXX GPL_DTC LLD
__DEFAULT_NO_OPTIONS+=CLANG_BOOTSTRAP CLANG_IS_CC
.else
# Everything else disables Clang, and uses GCC instead.
__DEFAULT_YES_OPTIONS+=GCC GCC_BOOTSTRAP GNUCXX GPL_DTC
__DEFAULT_NO_OPTIONS+=CLANG CLANG_BOOTSTRAP CLANG_IS_CC LLD
.endif
# In-tree binutils/gcc are older versions without modern architecture support.
.if ${__T} == "aarch64" || ${__T:Mriscv*} != ""
BROKEN_OPTIONS+=BINUTILS BINUTILS_BOOTSTRAP GCC GCC_BOOTSTRAP GDB
.endif
.if ${__T:Mriscv*} != ""
BROKEN_OPTIONS+=OFED
.endif
.if ${__T} == "aarch64" || ${__T} == "amd64" || ${__T} == "i386" || \
${__T:Mriscv*} != "" || ${__TT} == "mips"
__DEFAULT_YES_OPTIONS+=LLVM_LIBUNWIND
.else
__DEFAULT_NO_OPTIONS+=LLVM_LIBUNWIND
.endif
.if ${__T} == "aarch64" || ${__T} == "amd64" || ${__T} == "armv7" || \
${__T} == "i386"
__DEFAULT_YES_OPTIONS+=LLD_BOOTSTRAP LLD_IS_LD
.else
__DEFAULT_NO_OPTIONS+=LLD_BOOTSTRAP LLD_IS_LD
.endif
.if ${__T} == "aarch64" || ${__T} == "amd64" || ${__T} == "i386"
__DEFAULT_YES_OPTIONS+=LLDB
.else
__DEFAULT_NO_OPTIONS+=LLDB
.endif
# LLVM lacks support for FreeBSD 64-bit atomic operations for ARMv4/ARMv5
.if ${__T} == "arm"
BROKEN_OPTIONS+=LLDB
.endif
# GDB in base is generally less functional than GDB in ports. The ports GDB's
# sparc64 kernel support has not been tested.
.if ${__T} == "sparc64"
__DEFAULT_NO_OPTIONS+=GDB_LIBEXEC
.else
__DEFAULT_YES_OPTIONS+=GDB_LIBEXEC
.endif
# Only doing soft float API stuff on armv6 and armv7
.if ${__T} != "armv6" && ${__T} != "armv7"
BROKEN_OPTIONS+=LIBSOFT
.endif
.if ${__T:Mmips*}
BROKEN_OPTIONS+=SSP
.endif
# EFI doesn't exist on mips, powerpc, sparc or riscv.
.if ${__T:Mmips*} || ${__T:Mpowerpc*} || ${__T:Msparc64} || ${__T:Mriscv*}
BROKEN_OPTIONS+=EFI
.endif
# OFW is only for powerpc and sparc64, exclude others
.if ${__T:Mpowerpc*} == "" && ${__T:Msparc64} == ""
BROKEN_OPTIONS+=LOADER_OFW
.endif
# UBOOT is only for arm, mips and powerpc, exclude others
.if ${__T:Marm*} == "" && ${__T:Mmips*} == "" && ${__T:Mpowerpc*} == ""
BROKEN_OPTIONS+=LOADER_UBOOT
.endif
# GELI and Lua in loader currently cause boot failures on sparc64 and powerpc.
# Further debugging is required -- probably they are just broken on big
# endian systems generically (they jump to null pointers or try to read
# crazy high addresses, which is typical of endianness problems).
.if ${__T} == "sparc64" || ${__T:Mpowerpc*}
BROKEN_OPTIONS+=LOADER_GELI LOADER_LUA
.endif
.if ${__T:Mmips64*}
# profiling won't work on MIPS64 because there is only assembly for o32
BROKEN_OPTIONS+=PROFILE
.endif
.if ${__T} != "aarch64" && ${__T} != "amd64" && ${__T} != "i386" && \
${__T} != "powerpc64" && ${__T} != "sparc64"
BROKEN_OPTIONS+=CXGBETOOL
BROKEN_OPTIONS+=MLX5TOOL
.endif
# HyperV is currently x86-only
.if ${__T} != "amd64" && ${__T} != "i386"
BROKEN_OPTIONS+=HYPERV
.endif
# NVME is only x86 and powerpc64
.if ${__T} != "amd64" && ${__T} != "i386" && ${__T} != "powerpc64"
BROKEN_OPTIONS+=NVME
.endif
# PowerPC and Sparc64 need extra crt*.o files
.if ${__T:Mpowerpc*} || ${__T:Msparc64}
BROKEN_OPTIONS+=BSD_CRTBEGIN
.endif
.include <bsd.mkopt.mk>
#
# MK_* options that default to "yes" if the compiler is a C++11 compiler.
#
.for var in \
LIBCPLUSPLUS
.if !defined(MK_${var})
.if ${COMPILER_FEATURES:Mc++11}
.if defined(WITHOUT_${var})
MK_${var}:= no
.else
MK_${var}:= yes
.endif
.else
.if defined(WITH_${var})
MK_${var}:= yes
.else
MK_${var}:= no
.endif
.endif
.endif
.endfor
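The loop above implements the usual WITH_/WITHOUT_ knob precedence, gated on C++11 compiler support: with a capable compiler the option defaults to "yes" unless WITHOUT_&lt;var&gt; is set, otherwise it defaults to "no" unless WITH_&lt;var&gt; is set. A sketch of that decision table in C (illustrative only; the function name is made up here):

```c
#include <stdbool.h>

/*
 * Illustrative decision table for the MK_LIBCPLUSPLUS default above:
 * a C++11-capable compiler defaults the option to "yes" unless
 * WITHOUT_<var> is defined; a non-capable compiler defaults it to
 * "no" unless WITH_<var> is defined.
 */
static bool
mk_cxx11_default(bool compiler_has_cxx11, bool with_set, bool without_set)
{
	if (compiler_has_cxx11)
		return (!without_set);
	return (with_set);
}
```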
#
# Force some options off if their dependencies are off.
# Order is somewhat important.
#
.if !${COMPILER_FEATURES:Mc++11}
MK_LLVM_LIBUNWIND:= no
.endif
.if ${MK_BINUTILS} == "no"
MK_GDB:= no
.endif
.if ${MK_CAPSICUM} == "no"
MK_CASPER:= no
.endif
.if ${MK_LIBPTHREAD} == "no"
MK_LIBTHR:= no
.endif
.if ${MK_LDNS} == "no"
MK_LDNS_UTILS:= no
MK_UNBOUND:= no
.endif
.if ${MK_SOURCELESS} == "no"
MK_SOURCELESS_HOST:= no
MK_SOURCELESS_UCODE:= no
.endif
.if ${MK_CDDL} == "no"
MK_ZFS:= no
MK_LOADER_ZFS:= no
MK_CTF:= no
.endif
.if ${MK_CRYPT} == "no"
MK_OPENSSL:= no
MK_OPENSSH:= no
MK_KERBEROS:= no
.endif
.if ${MK_CXX} == "no"
MK_CLANG:= no
MK_GNUCXX:= no
MK_TESTS:= no
.endif
.if ${MK_DIALOG} == "no"
MK_BSDINSTALL:= no
.endif
.if ${MK_MAIL} == "no"
MK_MAILWRAPPER:= no
MK_SENDMAIL:= no
MK_DMAGENT:= no
.endif
.if ${MK_NETGRAPH} == "no"
MK_ATM:= no
MK_BLUETOOTH:= no
.endif
.if ${MK_NLS} == "no"
MK_NLS_CATALOGS:= no
.endif
.if ${MK_OPENSSL} == "no"
MK_OPENSSH:= no
MK_KERBEROS:= no
.endif
.if ${MK_PF} == "no"
MK_AUTHPF:= no
.endif
.if ${MK_OFED} == "no"
MK_OFED_EXTRA:= no
.endif
.if ${MK_PORTSNAP} == "no"
# freebsd-update depends on phttpget from portsnap
MK_FREEBSD_UPDATE:= no
.endif
.if ${MK_TESTS} == "no"
MK_DTRACE_TESTS:= no
.endif
.if ${MK_ZONEINFO} == "no"
MK_ZONEINFO_LEAPSECONDS_SUPPORT:= no
MK_ZONEINFO_OLD_TIMEZONES_SUPPORT:= no
.endif
.if ${MK_CROSS_COMPILER} == "no"
MK_BINUTILS_BOOTSTRAP:= no
MK_CLANG_BOOTSTRAP:= no
MK_ELFTOOLCHAIN_BOOTSTRAP:= no
MK_GCC_BOOTSTRAP:= no
MK_LLD_BOOTSTRAP:= no
.endif
.if ${MK_TOOLCHAIN} == "no"
MK_BINUTILS:= no
MK_CLANG:= no
MK_GCC:= no
MK_GDB:= no
MK_INCLUDES:= no
MK_LLD:= no
MK_LLDB:= no
.endif
.if ${MK_CLANG} == "no"
MK_CLANG_EXTRAS:= no
MK_CLANG_FULL:= no
MK_LLVM_COV:= no
.endif
#
# MK_* options whose default value depends on another option.
#
.for vv in \
GSSAPI/KERBEROS \
MAN_UTILS/MAN
.if defined(WITH_${vv:H})
MK_${vv:H}:= yes
.elif defined(WITHOUT_${vv:H})
MK_${vv:H}:= no
.else
MK_${vv:H}:= ${MK_${vv:T}}
.endif
.endfor
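Each entry in the list above is an OPTION/DEPENDENCY pair; make(1)'s :H and :T modifiers split it at the '/' into the option name and the option it inherits its default from. An illustrative C equivalent of that split (assumes the input contains a '/', as the pairs above do):

```c
#include <stdio.h>
#include <string.h>

/*
 * Illustrative equivalent of make(1)'s :H (head) and :T (tail)
 * modifiers applied to an "OPTION/DEPENDENCY" pair such as
 * "GSSAPI/KERBEROS". Assumes pair contains a '/'.
 */
static void
split_pair(const char *pair, char *head, char *tail, size_t n)
{
	const char *slash = strrchr(pair, '/');

	snprintf(head, n, "%.*s", (int)(slash - pair), pair);
	snprintf(tail, n, "%s", slash + 1);
}
```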
#
# Set defaults for the MK_*_SUPPORT variables.
#
.if !${COMPILER_FEATURES:Mc++11}
MK_LLDB:= no
.endif
# gcc 4.8 and newer supports libc++, so suppress gnuc++ in that case.
# While in theory we could build it with such a compiler, we don't want
# to, since it creates too much confusion for too little gain.
# XXX: This is incomplete and needs X_COMPILER_TYPE/VERSION checks too
# to prevent Makefile.inc1 from bootstrapping unneeded dependencies
# and to support 'make delete-old' when supplying an external toolchain.
.if ${COMPILER_TYPE} == "gcc" && ${COMPILER_VERSION} >= 40800
MK_GNUCXX:=no
MK_GCC:=no
.endif
.endif # !target(__<src.opts.mk>__)
Index: projects/clang800-import/share/mk/suite.test.mk
===================================================================
--- projects/clang800-import/share/mk/suite.test.mk (revision 343955)
+++ projects/clang800-import/share/mk/suite.test.mk (revision 343956)
@@ -1,124 +1,126 @@
# $FreeBSD$
#
# You must include bsd.test.mk instead of this file from your Makefile.
#
# Internal glue for the build of /usr/tests/.
.if !target(__<bsd.test.mk>__)
.error suite.test.mk cannot be included directly.
.endif
.include <bsd.opts.mk>
# Name of the test suite these tests belong to. Should rarely be changed for
# Makefiles built into the FreeBSD src tree.
TESTSUITE?= FreeBSD
# Knob to control the handling of the Kyuafile for this Makefile.
#
# If 'yes', a Kyuafile exists in the source tree and is installed into
# TESTSDIR.
#
# If 'auto', a Kyuafile is automatically generated based on the list of test
# programs built by the Makefile and is installed into TESTSDIR. This is the
# default and is sufficient in the majority of the cases.
#
# If 'no', no Kyuafile is installed.
KYUAFILE?= auto
# Per-test program interface definition.
#
# The name provided here must match one of the interface names supported by
# Kyua as this is later encoded in the Kyuafile test program definitions.
#TEST_INTERFACE.<test-program>= interface-name
# Metadata properties applicable to all test programs.
#
# All the variables for a test program defined in the Makefile are appended
# to the test program's definition in the Kyuafile. This feature can be
# used to avoid having to explicitly supply a Kyuafile in the source
# directory, allowing the caller Makefile to rely on the KYUAFILE=auto
# behavior defined here.
#TEST_METADATA+= key="value"
# Per-test program metadata properties as a list of key/value pairs.
#
# These per-test program settings _extend_ the values provided in the
# unqualified TEST_METADATA variable.
#TEST_METADATA.<test-program>+= key="value"
.if ${KYUAFILE:tl} != "no"
${PACKAGE}FILES+= Kyuafile
${PACKAGE}FILESDIR_Kyuafile= ${TESTSDIR}
.endif
.for _T in ${_TESTS}
_TEST_METADATA.${_T}= ${TEST_METADATA} ${TEST_METADATA.${_T}}
.endfor
.if ${KYUAFILE:tl} == "auto"
CLEANFILES+= Kyuafile Kyuafile.tmp
Kyuafile: Makefile
@{ \
echo '-- Automatically generated by bsd.test.mk.'; \
echo; \
echo 'syntax(2)'; \
echo; \
echo 'test_suite("${TESTSUITE}")'; \
echo; \
} > ${.TARGET}.tmp
.for _T in ${_TESTS}
@echo '${TEST_INTERFACE.${_T}}_test_program{name="${_T}"${_TEST_METADATA.${_T}:C/$/,/:tW:C/^/, /W:C/,$//W}}' \
>>${.TARGET}.tmp
.endfor
.for _T in ${TESTS_SUBDIRS:N.WAIT}
@echo "include(\"${_T}/${.TARGET}\")" >>${.TARGET}.tmp
.endfor
@mv ${.TARGET}.tmp ${.TARGET}
.endif
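For reference, the recipe above would emit a Kyuafile along these lines for a hypothetical ATF test program `foo_test` with one entry in TESTS_SUBDIRS (`subdir` and `foo_test` are made-up names for illustration):

```lua
-- Automatically generated by bsd.test.mk.

syntax(2)

test_suite("FreeBSD")

atf_test_program{name="foo_test"}
include("subdir/Kyuafile")
```

Any per-program TEST_METADATA settings would be appended inside the `atf_test_program{...}` definition as additional key/value pairs.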
KYUA= ${LOCALBASE}/bin/kyua
# Definition of the "make check" target and supporting variables.
#
# This target, by necessity, can only work for native builds (i.e. a FreeBSD
# host building a release for the same system). The target runs Kyua, which is
# not in the toolchain, and the tests execute code built for the target host.
#
# Due to the dependencies of the binaries built by the source tree and how they
# are used by tests, it is quite possible for an execution of "make check" to
# report bogus results unless the new binaries are put in place.
realcheck: .PHONY
@if [ ! -x ${KYUA} ]; then \
echo; \
echo "kyua binary not installed at expected location (${KYUA})"; \
echo; \
echo "Please install via pkg install, or specify the path to the kyua"; \
echo "package via the \$${LOCALBASE} variable, e.g. "; \
echo "LOCALBASE=\"${LOCALBASE}\""; \
false; \
fi
@env ${TESTS_ENV:Q} ${KYUA} test -k ${DESTDIR}${TESTSDIR}/Kyuafile
MAKE_CHECK_SANDBOX_DIR= checkdir
CLEANDIRS+= ${MAKE_CHECK_SANDBOX_DIR}
.if ${MK_MAKE_CHECK_USE_SANDBOX} != "no" && make(check)
DESTDIR:= ${.OBJDIR}/${MAKE_CHECK_SANDBOX_DIR}
beforecheck:
.for t in clean depend all
@cd ${.CURDIR} && ${MAKE} $t
.endfor
@cd ${SRCTOP} && ${MAKE} hierarchy DESTDIR=${DESTDIR}
@cd ${.CURDIR} && ${MAKE} install \
DESTDIR=${DESTDIR}
# NOTE: this is intentional to ensure that "make check" can be run multiple
# times. "aftercheck" won't be run if "make check" fails, is interrupted,
# etc.
aftercheck:
@cd ${.CURDIR} && ${MAKE} clean
+ @chflags -R 0 "${DESTDIR}"
+ @rm -Rf "${DESTDIR}"
.endif
Index: projects/clang800-import/stand/efi/libefi/efienv.c
===================================================================
--- projects/clang800-import/stand/efi/libefi/efienv.c (revision 343955)
+++ projects/clang800-import/stand/efi/libefi/efienv.c (revision 343956)
@@ -1,86 +1,86 @@
/*-
* Copyright (c) 2018 Netflix, Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <stand.h>
#include <efi.h>
#include <efichar.h>
#include <efilib.h>
static EFI_GUID FreeBSDBootVarGUID = FREEBSD_BOOT_VAR_GUID;
static EFI_GUID GlobalBootVarGUID = EFI_GLOBAL_VARIABLE;
EFI_STATUS
efi_getenv(EFI_GUID *g, const char *v, void *data, size_t *len)
{
size_t ul;
CHAR16 *uv;
UINT32 attr;
UINTN dl;
EFI_STATUS rv;
uv = NULL;
if (utf8_to_ucs2(v, &uv, &ul) != 0)
return (EFI_OUT_OF_RESOURCES);
dl = *len;
rv = RS->GetVariable(uv, g, &attr, &dl, data);
- if (rv == EFI_SUCCESS)
+ if (rv == EFI_SUCCESS || rv == EFI_BUFFER_TOO_SMALL)
*len = dl;
free(uv);
return (rv);
}
EFI_STATUS
efi_global_getenv(const char *v, void *data, size_t *len)
{
return (efi_getenv(&GlobalBootVarGUID, v, data, len));
}
EFI_STATUS
efi_freebsd_getenv(const char *v, void *data, size_t *len)
{
return (efi_getenv(&FreeBSDBootVarGUID, v, data, len));
}
EFI_STATUS
efi_setenv_freebsd_wcs(const char *varname, CHAR16 *valstr)
{
CHAR16 *var = NULL;
size_t len;
EFI_STATUS rv;
if (utf8_to_ucs2(varname, &var, &len) != 0)
return (EFI_OUT_OF_RESOURCES);
rv = RS->SetVariable(var, &FreeBSDBootVarGUID,
EFI_VARIABLE_BOOTSERVICE_ACCESS | EFI_VARIABLE_RUNTIME_ACCESS,
(ucs2len(valstr) + 1) * sizeof(efi_char), valstr);
free(var);
return (rv);
}
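The efi_getenv() change in this file makes the function report the required length on EFI_BUFFER_TOO_SMALL as well as on success, which is what enables the standard probe-then-allocate calling pattern for UEFI GetVariable(). A self-contained mock of that contract (not real EFI code; sim_getenv(), sim_value, and the SIM_* status codes are stand-ins):

```c
#include <stddef.h>
#include <string.h>

#define	SIM_SUCCESS		0
#define	SIM_BUFFER_TOO_SMALL	5

static const char sim_value[] = "0x1000";	/* stand-in variable data */

/*
 * Mock of the fixed efi_getenv() contract: on both success and
 * "buffer too small", *len is updated to the data length, so a
 * caller can probe with a too-small buffer and retry with a
 * right-sized one.
 */
static int
sim_getenv(void *data, size_t *len)
{
	size_t need = sizeof(sim_value);

	if (*len < need) {
		*len = need;		/* report required size */
		return (SIM_BUFFER_TOO_SMALL);
	}
	memcpy(data, sim_value, need);
	*len = need;
	return (SIM_SUCCESS);
}
```

Before the fix, the probe call would leave *len untouched on EFI_BUFFER_TOO_SMALL, so callers could not discover how large a buffer to allocate.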
Index: projects/clang800-import/sys/amd64/conf/GENERIC
===================================================================
--- projects/clang800-import/sys/amd64/conf/GENERIC (revision 343955)
+++ projects/clang800-import/sys/amd64/conf/GENERIC (revision 343956)
@@ -1,382 +1,382 @@
#
# GENERIC -- Generic kernel configuration file for FreeBSD/amd64
#
# For more information on this file, please read the config(5) manual page,
# and/or the handbook section on Kernel Configuration Files:
#
# https://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/kernelconfig-config.html
#
# The handbook is also available locally in /usr/share/doc/handbook
# if you've installed the doc distribution, otherwise always see the
# FreeBSD World Wide Web server (https://www.FreeBSD.org/) for the
# latest information.
#
# An exhaustive list of options and more detailed explanations of the
# device lines is also present in the ../../conf/NOTES and NOTES files.
# If you are in doubt as to the purpose or necessity of a line, check first
# in NOTES.
#
# $FreeBSD$
cpu HAMMER
ident GENERIC
makeoptions DEBUG=-g # Build kernel with gdb(1) debug symbols
makeoptions WITH_CTF=1 # Run ctfconvert(1) for DTrace support
options SCHED_ULE # ULE scheduler
options NUMA # Non-Uniform Memory Architecture support
options PREEMPTION # Enable kernel thread preemption
options VIMAGE # Subsystem virtualization, e.g. VNET
options INET # InterNETworking
options INET6 # IPv6 communications protocols
options IPSEC # IP (v4/v6) security
options IPSEC_SUPPORT # Allow kldload of ipsec and tcpmd5
options TCP_OFFLOAD # TCP offload
options TCP_BLACKBOX # Enhanced TCP event logging
options TCP_HHOOK # hhook(9) framework for TCP
options TCP_RFC7413 # TCP Fast Open
options SCTP # Stream Control Transmission Protocol
options FFS # Berkeley Fast Filesystem
options SOFTUPDATES # Enable FFS soft updates support
options UFS_ACL # Support for access control lists
options UFS_DIRHASH # Improve performance on big directories
options UFS_GJOURNAL # Enable gjournal-based UFS journaling
options QUOTA # Enable disk quotas for UFS
options MD_ROOT # MD is a potential root device
options NFSCL # Network Filesystem Client
options NFSD # Network Filesystem Server
options NFSLOCKD # Network Lock Manager
options NFS_ROOT # NFS usable as /, requires NFSCL
options MSDOSFS # MSDOS Filesystem
options CD9660 # ISO 9660 Filesystem
options PROCFS # Process filesystem (requires PSEUDOFS)
options PSEUDOFS # Pseudo-filesystem framework
options GEOM_RAID # Soft RAID functionality.
options GEOM_LABEL # Provides labelization
options EFIRT # EFI Runtime Services support
options COMPAT_FREEBSD32 # Compatible with i386 binaries
options COMPAT_FREEBSD4 # Compatible with FreeBSD4
options COMPAT_FREEBSD5 # Compatible with FreeBSD5
options COMPAT_FREEBSD6 # Compatible with FreeBSD6
options COMPAT_FREEBSD7 # Compatible with FreeBSD7
options COMPAT_FREEBSD9 # Compatible with FreeBSD9
options COMPAT_FREEBSD10 # Compatible with FreeBSD10
options COMPAT_FREEBSD11 # Compatible with FreeBSD11
options SCSI_DELAY=5000 # Delay (in ms) before probing SCSI
options KTRACE # ktrace(1) support
options STACK # stack(9) support
options SYSVSHM # SYSV-style shared memory
options SYSVMSG # SYSV-style message queues
options SYSVSEM # SYSV-style semaphores
options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options PRINTF_BUFR_SIZE=128 # Prevent printf output being interspersed.
options KBD_INSTALL_CDEV # install a CDEV entry in /dev
options HWPMC_HOOKS # Necessary kernel hooks for hwpmc(4)
options AUDIT # Security event auditing
options CAPABILITY_MODE # Capsicum capability mode
options CAPABILITIES # Capsicum capabilities
options MAC # TrustedBSD MAC Framework
options KDTRACE_FRAME # Ensure frames are compiled in
options KDTRACE_HOOKS # Kernel DTrace hooks
options DDB_CTF # Kernel ELF linker loads CTF data
options INCLUDE_CONFIG_FILE # Include this file in kernel
options RACCT # Resource accounting framework
options RACCT_DEFAULT_TO_DISABLED # Set kern.racct.enable=0 by default
options RCTL # Resource limits
# Debugging support. Always need this:
options KDB # Enable kernel debugger support.
options KDB_TRACE # Print a stack trace for a panic.
# For full debugger support use (turn off in stable branch):
options BUF_TRACKING # Track buffer history
options DDB # Support DDB.
options FULL_BUF_TRACKING # Track more buffer history
options GDB # Support remote GDB.
options DEADLKRES # Enable the deadlock resolver
options INVARIANTS # Enable calls of extra sanity checking
options INVARIANT_SUPPORT # Extra sanity checks of internal structures, required by INVARIANTS
options WITNESS # Enable checks to detect deadlocks and cycles
options WITNESS_SKIPSPIN # Don't run witness on spinlocks for speed
options MALLOC_DEBUG_MAXZONES=8 # Separate malloc(9) zones
options VERBOSE_SYSINIT=0 # Support debug.verbose_sysinit, off by default
# Kernel Sanitizers
-options COVERAGE # Generic kernel coverage. Used by KCOV
-options KCOV # Kernel Coverage Sanitizer
+#options COVERAGE # Generic kernel coverage. Used by KCOV
+#options KCOV # Kernel Coverage Sanitizer
# Warning: KUBSAN can result in a kernel too large for loader to load
#options KUBSAN # Kernel Undefined Behavior Sanitizer
# Kernel dump features.
options EKCD # Support for encrypted kernel dumps
options GZIO # gzip-compressed kernel and user dumps
options ZSTDIO # zstd-compressed kernel and user dumps
options NETDUMP # netdump(4) client support
# Make an SMP-capable kernel by default
options SMP # Symmetric MultiProcessor Kernel
options EARLY_AP_STARTUP
# CPU frequency control
device cpufreq
# Bus support.
device acpi
options ACPI_DMAR
device pci
options PCI_HP # PCI-Express native HotPlug
options PCI_IOV # PCI SR-IOV support
# Floppy drives
device fdc
# ATA controllers
device ahci # AHCI-compatible SATA controllers
device ata # Legacy ATA/SATA controllers
device mvs # Marvell 88SX50XX/88SX60XX/88SX70XX/SoC SATA
device siis # SiliconImage SiI3124/SiI3132/SiI3531 SATA
# SCSI Controllers
device ahc # AHA2940 and onboard AIC7xxx devices
device ahd # AHA39320/29320 and onboard AIC79xx devices
device esp # AMD Am53C974 (Tekram DC-390(T))
device hptiop # Highpoint RocketRaid 3xxx series
device isp # Qlogic family
#device ispfw # Firmware for QLogic HBAs- normally a module
device mpt # LSI-Logic MPT-Fusion
device mps # LSI-Logic MPT-Fusion 2
device mpr # LSI-Logic MPT-Fusion 3
device sym # NCR/Symbios Logic
device trm # Tekram DC395U/UW/F DC315U adapters
device isci # Intel C600 SAS controller
device ocs_fc # Emulex FC adapters
# ATA/SCSI peripherals
device scbus # SCSI bus (required for ATA/SCSI)
device ch # SCSI media changers
device da # Direct Access (disks)
device sa # Sequential Access (tape etc)
device cd # CD
device pass # Passthrough device (direct ATA/SCSI access)
device ses # Enclosure Services (SES and SAF-TE)
#device ctl # CAM Target Layer
# RAID controllers interfaced to the SCSI subsystem
device amr # AMI MegaRAID
device arcmsr # Areca SATA II RAID
device ciss # Compaq Smart RAID 5*
device hptmv # Highpoint RocketRAID 182x
device hptnr # Highpoint DC7280, R750
device hptrr # Highpoint RocketRAID 17xx, 22xx, 23xx, 25xx
device hpt27xx # Highpoint RocketRAID 27xx
device iir # Intel Integrated RAID
device ips # IBM (Adaptec) ServeRAID
device mly # Mylex AcceleRAID/eXtremeRAID
device twa # 3ware 9000 series PATA/SATA RAID
device smartpqi # Microsemi smartpqi driver
device tws # LSI 3ware 9750 SATA+SAS 6Gb/s RAID controller
# RAID controllers
device aac # Adaptec FSA RAID
device aacp # SCSI passthrough for aac (requires CAM)
device aacraid # Adaptec by PMC RAID
device ida # Compaq Smart RAID
device mfi # LSI MegaRAID SAS
device mlx # Mylex DAC960 family
device mrsas # LSI/Avago MegaRAID SAS/SATA, 6Gb/s and 12Gb/s
device pmspcv # PMC-Sierra SAS/SATA Controller driver
#XXX pointer/int warnings
#device pst # Promise Supertrak SX6000
device twe # 3ware ATA RAID
# NVM Express (NVMe) support
device nvme # base NVMe driver
device nvd # expose NVMe namespaces as disks, depends on nvme
# atkbdc0 controls both the keyboard and the PS/2 mouse
device atkbdc # AT keyboard controller
device atkbd # AT keyboard
device psm # PS/2 mouse
device kbdmux # keyboard multiplexer
device vga # VGA video card driver
options VESA # Add support for VESA BIOS Extensions (VBE)
device splash # Splash screen and screen saver support
# syscons is the default console driver, resembling an SCO console
device sc
options SC_PIXEL_MODE # add support for the raster text mode
# vt is the new video console driver
device vt
device vt_vga
device vt_efifb
device agp # support several AGP chipsets
# PCCARD (PCMCIA) support
# PCMCIA and cardbus bridge support
device cbb # cardbus (yenta) bridge
device pccard # PC Card (16-bit) bus
device cardbus # CardBus (32-bit) bus
# Serial (COM) ports
device uart # Generic UART driver
# Parallel port
device ppc
device ppbus # Parallel port bus (required)
device lpt # Printer
device ppi # Parallel port interface device
#device vpo # Requires scbus and da
device puc # Multi I/O cards and multi-channel UARTs
# PCI/PCI-X/PCIe Ethernet NICs that use iflib infrastructure
device iflib
device em # Intel PRO/1000 Gigabit Ethernet Family
device ix # Intel PRO/10GbE PCIE PF Ethernet
device ixv # Intel PRO/10GbE PCIE VF Ethernet
device ixl # Intel 700 Series Physical Function
device iavf # Intel Adaptive Virtual Function
device vmx # VMware VMXNET3 Ethernet
# PCI Ethernet NICs.
device bxe # Broadcom NetXtreme II BCM5771X/BCM578XX 10GbE
device de # DEC/Intel DC21x4x (``Tulip'')
device le # AMD Am7900 LANCE and Am79C9xx PCnet
device ti # Alteon Networks Tigon I/II gigabit Ethernet
device txp # 3Com 3cR990 (``Typhoon'')
device vx # 3Com 3c590, 3c595 (``Vortex'')
# PCI Ethernet NICs that use the common MII bus controller code.
# NOTE: Be sure to keep the 'device miibus' line in order to use these NICs!
device miibus # MII bus support
device ae # Attansic/Atheros L2 FastEthernet
device age # Attansic/Atheros L1 Gigabit Ethernet
device alc # Atheros AR8131/AR8132 Ethernet
device ale # Atheros AR8121/AR8113/AR8114 Ethernet
device bce # Broadcom BCM5706/BCM5708 Gigabit Ethernet
device bfe # Broadcom BCM440x 10/100 Ethernet
device bge # Broadcom BCM570xx Gigabit Ethernet
device cas # Sun Cassini/Cassini+ and NS DP83065 Saturn
device dc # DEC/Intel 21143 and various workalikes
device et # Agere ET1310 10/100/Gigabit Ethernet
device fxp # Intel EtherExpress PRO/100B (82557, 82558)
device gem # Sun GEM/Sun ERI/Apple GMAC
device hme # Sun HME (Happy Meal Ethernet)
device jme # JMicron JMC250 Gigabit/JMC260 Fast Ethernet
device lge # Level 1 LXT1001 gigabit Ethernet
device msk # Marvell/SysKonnect Yukon II Gigabit Ethernet
device nfe # nVidia nForce MCP on-board Ethernet
device nge # NatSemi DP83820 gigabit Ethernet
device pcn # AMD Am79C97x PCI 10/100 (precedence over 'le')
device re # RealTek 8139C+/8169/8169S/8110S
device rl # RealTek 8129/8139
device sf # Adaptec AIC-6915 (``Starfire'')
device sge # Silicon Integrated Systems SiS190/191
device sis # Silicon Integrated Systems SiS 900/SiS 7016
device sk # SysKonnect SK-984x & SK-982x gigabit Ethernet
device ste # Sundance ST201 (D-Link DFE-550TX)
device stge # Sundance/Tamarack TC9021 gigabit Ethernet
device tl # Texas Instruments ThunderLAN
device tx # SMC EtherPower II (83c170 ``EPIC'')
device vge # VIA VT612x gigabit Ethernet
device vr # VIA Rhine, Rhine II
device wb # Winbond W89C840F
device xl # 3Com 3c90x (``Boomerang'', ``Cyclone'')
# Wireless NIC cards
device wlan # 802.11 support
options IEEE80211_DEBUG # enable debug msgs
options IEEE80211_SUPPORT_MESH # enable 802.11s draft support
device wlan_wep # 802.11 WEP support
device wlan_ccmp # 802.11 CCMP support
device wlan_tkip # 802.11 TKIP support
device wlan_amrr # AMRR transmit rate control algorithm
device an # Aironet 4500/4800 802.11 wireless NICs.
device ath # Atheros NICs
device ath_pci # Atheros pci/cardbus glue
device ath_hal # pci/cardbus chip support
options AH_AR5416_INTERRUPT_MITIGATION # AR5416 interrupt mitigation
options ATH_ENABLE_11N # Enable 802.11n support for AR5416 and later
device ath_rate_sample # SampleRate tx rate control for ath
#device bwi # Broadcom BCM430x/BCM431x wireless NICs.
#device bwn # Broadcom BCM43xx wireless NICs.
device ipw # Intel 2100 wireless NICs.
device iwi # Intel 2200BG/2225BG/2915ABG wireless NICs.
device iwn # Intel 4965/1000/5000/6000 wireless NICs.
device malo # Marvell Libertas wireless NICs.
device mwl # Marvell 88W8363 802.11n wireless NICs.
device ral # Ralink Technology RT2500 wireless NICs.
device wi # WaveLAN/Intersil/Symbol 802.11 wireless NICs.
device wpi # Intel 3945ABG wireless NICs.
# Pseudo devices.
device crypto # core crypto support
device loop # Network loopback
device random # Entropy device
device padlock_rng # VIA Padlock RNG
device rdrand_rng # Intel Bull Mountain RNG
device ether # Ethernet support
device vlan # 802.1Q VLAN support
device tun # Packet tunnel.
device md # Memory "disks"
device gif # IPv6 and IPv4 tunneling
device firmware # firmware assist module
# The `bpf' device enables the Berkeley Packet Filter.
# Be aware of the administrative consequences of enabling this!
# Note that 'bpf' is required for DHCP.
device bpf # Berkeley packet filter
# USB support
options USB_DEBUG # enable debug msgs
device uhci # UHCI PCI->USB interface
device ohci # OHCI PCI->USB interface
device ehci # EHCI PCI->USB interface (USB 2.0)
device xhci # XHCI PCI->USB interface (USB 3.0)
device usb # USB Bus (required)
device ukbd # Keyboard
device umass # Disks/Mass storage - Requires scbus and da
# Sound support
device sound # Generic sound driver (required)
device snd_cmi # CMedia CMI8338/CMI8738
device snd_csa # Crystal Semiconductor CS461x/428x
device snd_emu10kx # Creative SoundBlaster Live! and Audigy
device snd_es137x # Ensoniq AudioPCI ES137x
device snd_hda # Intel High Definition Audio
device snd_ich # Intel, NVidia and other ICH AC'97 Audio
device snd_via8233 # VIA VT8233x Audio
# MMC/SD
device mmc # MMC/SD bus
device mmcsd # MMC/SD memory card
device sdhci # Generic PCI SD Host Controller
# VirtIO support
device virtio # Generic VirtIO bus (required)
device virtio_pci # VirtIO PCI device
device vtnet # VirtIO Ethernet device
device virtio_blk # VirtIO Block device
device virtio_scsi # VirtIO SCSI device
device virtio_balloon # VirtIO Memory Balloon device
# HyperV drivers and enhancement support
device hyperv # HyperV drivers
# Xen HVM Guest Optimizations
# NOTE: XENHVM depends on xenpci. They must be added or removed together.
options XENHVM # Xen HVM kernel infrastructure
device xenpci # Xen HVM Hypervisor services driver
# Netmap provides direct access to TX/RX rings on supported NICs
device netmap # netmap(4) support
# evdev interface
options EVDEV_SUPPORT # evdev support in legacy drivers
device evdev # input event device support
device uinput # install /dev/uinput cdev
Index: projects/clang800-import/sys/amd64/conf/NOTES
===================================================================
--- projects/clang800-import/sys/amd64/conf/NOTES (revision 343955)
+++ projects/clang800-import/sys/amd64/conf/NOTES (revision 343956)
@@ -1,680 +1,677 @@
#
# NOTES -- Lines that can be cut/pasted into kernel and hints configs.
#
# This file contains machine dependent kernel configuration notes. For
# machine independent notes, look in /sys/conf/NOTES.
#
# $FreeBSD$
#
#
# We want LINT to cover profiling as well.
profile 2
#
# Enable the kernel DTrace hooks which are required to load the DTrace
# kernel modules.
#
options KDTRACE_HOOKS
# DTrace core
# NOTE: introduces CDDL-licensed components into the kernel
#device dtrace
# DTrace modules
#device dtrace_profile
#device dtrace_sdt
#device dtrace_fbt
#device dtrace_systrace
#device dtrace_prototype
#device dtnfscl
#device dtmalloc
# Alternatively include all the DTrace modules
#device dtraceall
#####################################################################
# SMP OPTIONS:
#
# Notes:
#
# IPI_PREEMPTION instructs the kernel to preempt threads running on other
# CPUS if needed. Relies on the PREEMPTION option
# Optional:
options IPI_PREEMPTION
device atpic # Optional legacy pic support
device mptable # Optional MPSPEC mptable support
#
# Watchdog routines.
#
options MP_WATCHDOG
# Debugging options.
#
options COUNT_XINVLTLB_HITS # Counters for TLB events
options COUNT_IPIS # Per-CPU IPI interrupt counters
#####################################################################
# CPU OPTIONS
#
# You must specify at least one CPU (the one you intend to run on);
# deleting the specification for CPUs you don't need to use may make
# parts of the system run faster.
#
cpu HAMMER # aka K8, aka Opteron & Athlon64
#
# Options for CPU features.
#
#####################################################################
# NETWORKING OPTIONS
#
# DEVICE_POLLING adds support for mixed interrupt-polling handling
# of network device drivers, which has significant benefits in terms
# of robustness to overloads and responsiveness, as well as permitting
# accurate scheduling of the CPU time between kernel network processing
# and other activities. The drawback is a moderate (up to 1/HZ seconds)
# potential increase in response times.
# It is strongly recommended to use HZ=1000 or 2000 with DEVICE_POLLING
# to achieve smoother behaviour.
# Additionally, you can enable/disable polling at runtime with help of
# the ifconfig(8) utility, and select the CPU fraction reserved to
# userland with the sysctl variable kern.polling.user_frac
# (default 50, range 0..100).
#
# Not all device drivers support this mode of operation at the time of
# this writing. See polling(4) for more details.
options DEVICE_POLLING
# BPF_JITTER adds support for BPF just-in-time compiler.
options BPF_JITTER
# OpenFabrics Enterprise Distribution (Infiniband).
options OFED
options OFED_DEBUG_INIT
# Sockets Direct Protocol
options SDP
options SDP_DEBUG
# IP over Infiniband
options IPOIB
options IPOIB_DEBUG
options IPOIB_CM
#####################################################################
# CLOCK OPTIONS
# Provide read/write access to the memory in the clock chip.
device nvram # Access to rtc cmos via /dev/nvram
#####################################################################
# MISCELLANEOUS DEVICES AND OPTIONS
device speaker #Play IBM BASIC-style noises out your speaker
hint.speaker.0.at="isa"
hint.speaker.0.port="0x61"
device gzip #Exec gzipped a.out's. REQUIRES COMPAT_AOUT!
#####################################################################
# HARDWARE BUS CONFIGURATION
#
# ISA bus
#
device isa
#
# Options for `isa':
#
# AUTO_EOI_1 enables the `automatic EOI' feature for the master 8259A
# interrupt controller. This saves about 0.7-1.25 usec for each interrupt.
# This option breaks suspend/resume on some portables.
#
# AUTO_EOI_2 enables the `automatic EOI' feature for the slave 8259A
# interrupt controller. This saves about 0.7-1.25 usec for each interrupt.
# Automatic EOI is documented not to work for the slave with the
# original i8259A, but it works for some clones and some integrated
# versions.
#
# MAXMEM specifies the amount of RAM on the machine; if this is not
# specified, FreeBSD will first read the amount of memory from the CMOS
# RAM, so the amount of memory will initially be limited to 64MB or 16MB
# depending on the BIOS. If the BIOS reports 64MB, a memory probe will
# then attempt to detect the installed amount of RAM. If this probe
# fails to detect >64MB RAM you will have to use the MAXMEM option.
# The amount is in kilobytes, so for a machine with 128MB of RAM, it would
# be 131072 (128 * 1024).
#
# BROKEN_KEYBOARD_RESET disables the use of the keyboard controller to
# reset the CPU for reboot. This is needed on some systems with broken
# keyboard controllers.
options AUTO_EOI_1
#options AUTO_EOI_2
options MAXMEM=(128*1024)
#options BROKEN_KEYBOARD_RESET
#
# AGP GART support
device agp
#
# AGP debugging.
#
options AGP_DEBUG
#####################################################################
# HARDWARE DEVICE CONFIGURATION
# To include support for VGA VESA video modes
options VESA
# Turn on extra debugging checks and output for VESA support.
options VESA_DEBUG
device dpms # DPMS suspend & resume via VESA BIOS
# x86 real mode BIOS emulator, required by atkbdc/dpms/vesa
options X86BIOS
#
# Optional devices:
#
# PS/2 mouse
device psm
hint.psm.0.at="atkbdc"
hint.psm.0.irq="12"
# Options for psm:
options PSM_HOOKRESUME #hook the system resume event, useful
#for some laptops
options PSM_RESETAFTERSUSPEND #reset the device at the resume event
# The keyboard controller; it controls the keyboard and the PS/2 mouse.
device atkbdc
hint.atkbdc.0.at="isa"
hint.atkbdc.0.port="0x060"
# The AT keyboard
device atkbd
hint.atkbd.0.at="atkbdc"
hint.atkbd.0.irq="1"
# Options for atkbd:
options ATKBD_DFLT_KEYMAP # specify the built-in keymap
makeoptions ATKBD_DFLT_KEYMAP=fr.dvorak
# `flags' for atkbd:
# 0x01 Force detection of keyboard, else we always assume a keyboard
# 0x02 Don't reset keyboard, useful for some newer ThinkPads
# 0x03 Force detection and avoid reset, might help with certain
# docking stations
# 0x04 Old-style (XT) keyboard support, useful for older ThinkPads
# Video card driver for VGA adapters.
device vga
hint.vga.0.at="isa"
# Options for vga:
# Try the following option if the mouse pointer is not drawn correctly
# or font does not seem to be loaded properly. May cause flicker on
# some systems.
options VGA_ALT_SEQACCESS
# If you can dispense with some vga driver features, you may want to
# use the following options to save some memory.
#options VGA_NO_FONT_LOADING # don't save/load font
#options VGA_NO_MODE_CHANGE # don't change video modes
# Older video cards may require this option for proper operation.
options VGA_SLOW_IOACCESS # do byte-wide i/o's to TS and GDC regs
# The following option probably won't work with the LCD displays.
options VGA_WIDTH90 # support 90 column modes
# Debugging.
options VGA_DEBUG
# vt(4) drivers.
device vt_vga # VGA
device vt_efifb # EFI framebuffer
# Linear framebuffer driver for S3 VESA 1.2 cards. Works on top of VESA.
device s3pci
# 3Dfx Voodoo Graphics, Voodoo II /dev/3dfx CDEV support. This will create
# the /dev/3dfx0 device to work with glide implementations. This should get
# linked to /dev/3dfx and /dev/voodoo. Note that this is not the same as
# the tdfx DRI module from XFree86 and is completely unrelated.
#
# To enable Linuxulator support, one must also include COMPAT_LINUX in the
# config as well. The other option is to load both as modules.
device tdfx # Enable 3Dfx Voodoo support
#XXX#device tdfx_linux # Enable Linuxulator support
#
# ACPI support using the Intel ACPI Component Architecture reference
# implementation.
#
# ACPI_DEBUG enables the use of the debug.acpi.level and debug.acpi.layer
# kernel environment variables to select initial debugging levels for the
# Intel ACPICA code. (Note that the Intel code must also have USE_DEBUGGER
# defined when it is built).
device acpi
options ACPI_DEBUG
# The cpufreq(4) driver provides support for non-ACPI CPU frequency control
device cpufreq
# Direct Rendering modules for 3D acceleration.
device drm # DRM core module required by DRM drivers
device mach64drm # ATI Rage Pro, Rage Mobility P/M, Rage XL
device mgadrm # AGP Matrox G200, G400, G450, G550
device r128drm # ATI Rage 128
device savagedrm # S3 Savage3D, Savage4
device sisdrm # SiS 300/305, 540, 630
device tdfxdrm # 3dfx Voodoo 3/4/5 and Banshee
device viadrm # VIA
options DRM_DEBUG # Include debug printfs (slow)
#
# Network interfaces:
#
# bxe: Broadcom NetXtreme II (BCM5771X/BCM578XX) PCIe 10Gb Ethernet
# adapters.
# ed: Western Digital and SMC 80xx; Novell NE1000 and NE2000; 3Com 3C503
# HP PC Lan+, various PC Card devices
# (requires miibus)
# ipw: Intel PRO/Wireless 2100 IEEE 802.11 adapter
# Requires the ipw firmware module
# iwi: Intel PRO/Wireless 2200BG/2225BG/2915ABG IEEE 802.11 adapters
# Requires the iwi firmware module
# iwn: Intel Wireless WiFi Link 1000/105/135/2000/4965/5000/6000/6050 abgn
# 802.11 network adapters
# Requires the iwn firmware module
# mthca: Mellanox HCA InfiniBand
# mlx4ib: Mellanox ConnectX HCA InfiniBand
# mlx4en: Mellanox ConnectX HCA Ethernet
# nfe: nVidia nForce MCP on-board Ethernet Networking (BSD open source)
# sfxge: Solarflare SFC9000 family 10Gb Ethernet adapters
# vmx: VMware VMXNET3 Ethernet (BSD open source)
# wpi: Intel 3945ABG Wireless LAN controller
# Requires the wpi firmware module
device bxe # Broadcom NetXtreme II BCM5771X/BCM578XX 10GbE
device ed # NE[12]000, SMC Ultra, 3c503, DS8390 cards
options ED_3C503
options ED_HPP
options ED_SIC
device ipw # Intel 2100 wireless NICs.
device iwi # Intel 2200BG/2225BG/2915ABG wireless NICs.
device iwn # Intel 4965/1000/5000/6000 wireless NICs.
device ixl # Intel 700 Series Physical Function
device iavf # Intel Adaptive Virtual Function
device mthca # Mellanox HCA InfiniBand
device mlx4 # Shared code module between IB and Ethernet
device mlx4ib # Mellanox ConnectX HCA InfiniBand
device mlx4en # Mellanox ConnectX HCA Ethernet
device nfe # nVidia nForce MCP on-board Ethernet
device sfxge # Solarflare SFC9000 10Gb Ethernet
device vmx # VMware VMXNET3 Ethernet
device wpi # Intel 3945ABG wireless NICs.
# IEEE 802.11 adapter firmware modules
# Intel PRO/Wireless 2100 firmware:
# ipwfw: BSS/IBSS/monitor mode firmware
# ipwbssfw: BSS mode firmware
# ipwibssfw: IBSS mode firmware
# ipwmonitorfw: Monitor mode firmware
# Intel PRO/Wireless 2200BG/2225BG/2915ABG firmware:
# iwifw: BSS/IBSS/monitor mode firmware
# iwibssfw: BSS mode firmware
# iwiibssfw: IBSS mode firmware
# iwimonitorfw: Monitor mode firmware
# Intel Wireless WiFi Link 4965/1000/5000/6000 series firmware:
# iwnfw: Single module to support all devices
# iwn1000fw: Specific module for the 1000 only
# iwn105fw: Specific module for the 105 only
# iwn135fw: Specific module for the 135 only
# iwn2000fw: Specific module for the 2000 only
# iwn2030fw: Specific module for the 2030 only
# iwn4965fw: Specific module for the 4965 only
# iwn5000fw: Specific module for the 5000 only
# iwn5150fw: Specific module for the 5150 only
# iwn6000fw: Specific module for the 6000 only
# iwn6000g2afw: Specific module for the 6000g2a only
# iwn6000g2bfw: Specific module for the 6000g2b only
# iwn6050fw: Specific module for the 6050 only
# wpifw: Intel 3945ABG Wireless LAN Controller firmware
device iwifw
device iwibssfw
device iwiibssfw
device iwimonitorfw
device ipwfw
device ipwbssfw
device ipwibssfw
device ipwmonitorfw
device iwnfw
device iwn1000fw
device iwn105fw
device iwn135fw
device iwn2000fw
device iwn2030fw
device iwn4965fw
device iwn5000fw
device iwn5150fw
device iwn6000fw
device iwn6000g2afw
device iwn6000g2bfw
device iwn6050fw
device wpifw
#
# Non-Transparent Bridge (NTB) drivers
#
device if_ntb # Virtual NTB network interface
device ntb_transport # NTB packet transport driver
device ntb # NTB hardware interface
device ntb_hw_intel # Intel NTB hardware driver
device ntb_hw_plx # PLX NTB hardware driver
#
#XXX this stores pointers in a 32bit field that is defined by the hardware
#device pst
#
# Areca 11xx and 12xx series of SATA II RAID controllers.
# CAM is required.
#
device arcmsr # Areca SATA II RAID
#
# Microsemi smartpqi controllers.
# These controllers have a SCSI-like interface, and require the
# CAM infrastructure.
#
device smartpqi
#
# 3ware 9000 series PATA/SATA RAID controller driver and options.
# The driver is implemented as a SIM, and so, needs the CAM infrastructure.
#
options TWA_DEBUG # 0-10; 10 prints the most messages.
device twa # 3ware 9000 series PATA/SATA RAID
#
# Adaptec FSA RAID controllers, including integrated DELL controllers,
# the Dell PERC 2/QC and the HP NetRAID-4M
device aac
device aacp # SCSI Passthrough interface (optional, CAM required)
#
# Adaptec by PMC RAID controllers, Series 6/7/8 and upcoming families
device aacraid # Container interface, CAM required
#
# Highpoint RocketRAID 27xx.
device hpt27xx
#
# Highpoint RocketRAID 182x.
device hptmv
#
# Highpoint DC7280 and R750.
device hptnr
#
# Highpoint RocketRAID. Supports RR172x, RR222x, RR2240, RR232x, RR2340,
# RR2210, RR174x, RR2522, RR231x, RR230x.
device hptrr
#
# Highpoint RocketRAID 3xxx series SATA RAID
device hptiop
#
# IBM (now Adaptec) ServeRAID controllers
device ips
#
# Intel integrated Memory Controller (iMC) SMBus controller
# Sandybridge-Xeon, Ivybridge-Xeon, Haswell-Xeon, Broadwell-Xeon
device imcsmb
#
# Intel C600 (Patsburg) integrated SAS controller
device isci
options ISCI_LOGGING # enable debugging in isci HAL
#
# NVM Express (NVMe) support
device nvme # base NVMe driver
device nvd # expose NVMe namespaces as disks, depends on nvme
#
# PMC-Sierra SAS/SATA controller
device pmspcv
#
# SafeNet crypto driver: can be moved to the MI NOTES as soon as
# it's tested on a big-endian machine
#
device safe # SafeNet 1141
options SAFE_DEBUG # enable debugging support: hw.safe.debug
options SAFE_RNDTEST # enable rndtest support
#
# VirtIO support
#
# The virtio entry provides a generic bus for use by the device drivers.
# It must be combined with an interface that communicates with the host.
# Multiple such interfaces are defined by the VirtIO specification. FreeBSD
# only has support for PCI. Therefore, virtio_pci must be statically
# compiled in or loaded as a module for the device drivers to function.
#
device virtio # Generic VirtIO bus (required)
device virtio_pci # VirtIO PCI Interface
device vtnet # VirtIO Ethernet device
device virtio_blk # VirtIO Block device
device virtio_scsi # VirtIO SCSI device
device virtio_balloon # VirtIO Memory Balloon device
device virtio_random # VirtIO Entropy device
device virtio_console # VirtIO Console device
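# As an alternative sketch to compiling VirtIO in statically, the same
# drivers can be loaded as modules from /boot/loader.conf (module names
# assumed to match the devices above):
#	virtio_load="YES"
#	virtio_pci_load="YES"
#	if_vtnet_load="YES"
#	virtio_blk_load="YES"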
# Microsoft Hyper-V enhancement support
device hyperv # HyperV drivers
# Xen HVM Guest Optimizations
options XENHVM # Xen HVM kernel infrastructure
device xenpci # Xen HVM Hypervisor services driver
#####################################################################
#
# Miscellaneous hardware:
#
# ipmi: Intelligent Platform Management Interface
# pbio: Parallel (8255 PPI) basic I/O (mode 0) port (e.g. Advantech PCL-724)
# smbios: DMI/SMBIOS entry point
# vpd: Vital Product Data kernel interface
# asmc: Apple System Management Controller
# si: Specialix International SI/XIO or SX intelligent serial card
# tpm: Trusted Platform Module
# Notes on the Specialix SI/XIO driver:
# The host card is memory, not IO mapped.
# The Rev 1 host cards use a 64K chunk, on a 32K boundary.
# The Rev 2 host cards use a 32K chunk, on a 32K boundary.
# The cards can use an IRQ of 11, 12 or 15.
device ipmi
device pbio
hint.pbio.0.at="isa"
hint.pbio.0.port="0x360"
device smbios
device vpd
device asmc
device tpm
device padlock_rng # VIA Padlock RNG
device rdrand_rng # Intel Bull Mountain RNG
device aesni # AES-NI OpenCrypto module
device ioat # Intel I/OAT DMA engine
#
# Laptop/Notebook options:
#
#
# I2C Bus
#
#
# Hardware watchdog timers:
#
# ichwd: Intel ICH watchdog timer
# amdsbwd: AMD SB7xx watchdog timer
# viawd: VIA south bridge watchdog timer
# wbwd: Winbond watchdog timer
#
device ichwd
device amdsbwd
device viawd
device wbwd
#
# Temperature sensors:
#
# coretemp: on-die sensor on Intel Core and newer CPUs
# amdtemp: on-die sensor on AMD K8/K10/K11 CPUs
#
device coretemp
device amdtemp
#
# CPU control pseudo-device. Provides access to MSRs, CPUID info and
# microcode update feature.
#
device cpuctl
#
# System Management Bus (SMB)
#
options ENABLE_ALART # Control alarm on Intel intpm driver
#
# AMD System Management Network (SMN)
#
device amdsmn
#
# Number of initial kernel page table pages used for early bootstrap.
# This number should include enough pages to map the kernel and any
# modules or other data loaded with the kernel by the loader. Each
# page table page maps 2MB.
#
options NKPT=31
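# For example, NKPT=31 maps 31 * 2MB = 62MB, which must be enough to
# cover the kernel itself plus any loader-preloaded modules and data.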
# EFI Runtime Services support
options EFIRT
#####################################################################
# ABI Emulation
#XXX keep these here for now and reactivate when support for emulating
#XXX these 32 bit binaries is added.
# Enable 32-bit runtime support for FreeBSD/i386 binaries.
options COMPAT_FREEBSD32
-# Emulate spx device for client side of SVR3 local X interface
-#XXX#options SPX_HACK
-
# Enable (32-bit) a.out binary support
options COMPAT_AOUT
# Enable 32-bit runtime support for CloudABI binaries.
options COMPAT_CLOUDABI32
# Enable 64-bit runtime support for CloudABI binaries.
options COMPAT_CLOUDABI64
# Enable Linux ABI emulation
#XXX#options COMPAT_LINUX
# Enable 32-bit Linux ABI emulation (requires COMPAT_FREEBSD32).
options COMPAT_LINUX32
# Enable the linux-like proc filesystem support (requires COMPAT_LINUX32
# and PSEUDOFS)
options LINPROCFS
# Enable the linux-like sys filesystem support (requires COMPAT_LINUX32
# and PSEUDOFS)
options LINSYSFS
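# A typical sketch for mounting these from /etc/fstab (the mount points
# shown are the conventional ones and may differ on your system):
#	linprocfs	/compat/linux/proc	linprocfs	rw	0	0
#	linsysfs	/compat/linux/sys	linsysfs	rw	0	0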
#####################################################################
# ZFS support
options ZFS
#####################################################################
# VM OPTIONS
# KSTACK_PAGES is the number of memory pages to assign to the kernel
# stack of each thread.
options KSTACK_PAGES=5
# Enable detailed accounting by the PV entry allocator.
options PV_STATS
#####################################################################
# More undocumented options for linting.
# Note that documenting these is not considered an affront.
options FB_INSTALL_CDEV # install a CDEV entry in /dev
options KBDIO_DEBUG=2
options KBD_MAXRETRY=4
options KBD_MAXWAIT=6
options KBD_RESETDELAY=201
options PSM_DEBUG=1
options TIMER_FREQ=((14318182+6)/12)
options VM_KMEM_SIZE
options VM_KMEM_SIZE_MAX
options VM_KMEM_SIZE_SCALE
# Enable NDIS binary driver support
options NDISAPI
device ndis
Index: projects/clang800-import/sys/arm/allwinner/axp81x.c
===================================================================
--- projects/clang800-import/sys/arm/allwinner/axp81x.c (revision 343955)
+++ projects/clang800-import/sys/arm/allwinner/axp81x.c (revision 343956)
@@ -1,1174 +1,1312 @@
/*-
* Copyright (c) 2018 Emmanuel Vadot <manu@freebsd.org>
* Copyright (c) 2016 Jared McNeill <jmcneill@invisible.ca>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
* AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
/*
* X-Powers AXP803/813/818 PMU for Allwinner SoCs
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/eventhandler.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <sys/kernel.h>
#include <sys/reboot.h>
#include <sys/gpio.h>
#include <sys/module.h>
#include <machine/bus.h>
#include <dev/iicbus/iicbus.h>
#include <dev/iicbus/iiconf.h>
#include <dev/gpio/gpiobusvar.h>
#include <dev/ofw/ofw_bus.h>
#include <dev/ofw/ofw_bus_subr.h>
#include <dev/extres/regulator/regulator.h>
#include "gpio_if.h"
#include "iicbus_if.h"
#include "regdev_if.h"
MALLOC_DEFINE(M_AXP8XX_REG, "AXP8xx regulator", "AXP8xx power regulator");
#define AXP_POWERSRC 0x00
#define AXP_POWERSRC_ACIN (1 << 7)
#define AXP_POWERSRC_VBUS (1 << 5)
#define AXP_POWERSRC_VBAT (1 << 3)
-#define AXP_POWERSRC_CHARING (1 << 2)
+#define AXP_POWERSRC_CHARING (1 << 2) /* Charging Direction */
#define AXP_POWERSRC_SHORTED (1 << 1)
#define AXP_POWERSRC_STARTUP (1 << 0)
+#define AXP_POWERMODE 0x01
+#define AXP_POWERMODE_BAT_CHARGING (1 << 6)
+#define AXP_POWERMODE_BAT_PRESENT (1 << 5)
+#define AXP_POWERMODE_BAT_VALID (1 << 4)
#define AXP_ICTYPE 0x03
#define AXP_POWERCTL1 0x10
#define AXP_POWERCTL1_DCDC7 (1 << 6) /* AXP813/818 only */
#define AXP_POWERCTL1_DCDC6 (1 << 5)
#define AXP_POWERCTL1_DCDC5 (1 << 4)
#define AXP_POWERCTL1_DCDC4 (1 << 3)
#define AXP_POWERCTL1_DCDC3 (1 << 2)
#define AXP_POWERCTL1_DCDC2 (1 << 1)
#define AXP_POWERCTL1_DCDC1 (1 << 0)
#define AXP_POWERCTL2 0x12
#define AXP_POWERCTL2_DC1SW (1 << 7) /* AXP803 only */
#define AXP_POWERCTL2_DLDO4 (1 << 6)
#define AXP_POWERCTL2_DLDO3 (1 << 5)
#define AXP_POWERCTL2_DLDO2 (1 << 4)
#define AXP_POWERCTL2_DLDO1 (1 << 3)
#define AXP_POWERCTL2_ELDO3 (1 << 2)
#define AXP_POWERCTL2_ELDO2 (1 << 1)
#define AXP_POWERCTL2_ELDO1 (1 << 0)
#define AXP_POWERCTL3 0x13
#define AXP_POWERCTL3_ALDO3 (1 << 7)
#define AXP_POWERCTL3_ALDO2 (1 << 6)
#define AXP_POWERCTL3_ALDO1 (1 << 5)
#define AXP_POWERCTL3_FLDO3 (1 << 4) /* AXP813/818 only */
#define AXP_POWERCTL3_FLDO2 (1 << 3)
#define AXP_POWERCTL3_FLDO1 (1 << 2)
#define AXP_VOLTCTL_DLDO1 0x15
#define AXP_VOLTCTL_DLDO2 0x16
#define AXP_VOLTCTL_DLDO3 0x17
#define AXP_VOLTCTL_DLDO4 0x18
#define AXP_VOLTCTL_ELDO1 0x19
#define AXP_VOLTCTL_ELDO2 0x1A
#define AXP_VOLTCTL_ELDO3 0x1B
#define AXP_VOLTCTL_FLDO1 0x1C
#define AXP_VOLTCTL_FLDO2 0x1D
#define AXP_VOLTCTL_DCDC1 0x20
#define AXP_VOLTCTL_DCDC2 0x21
#define AXP_VOLTCTL_DCDC3 0x22
#define AXP_VOLTCTL_DCDC4 0x23
#define AXP_VOLTCTL_DCDC5 0x24
#define AXP_VOLTCTL_DCDC6 0x25
#define AXP_VOLTCTL_DCDC7 0x26
#define AXP_VOLTCTL_ALDO1 0x28
#define AXP_VOLTCTL_ALDO2 0x29
#define AXP_VOLTCTL_ALDO3 0x2A
#define AXP_VOLTCTL_STATUS (1 << 7)
#define AXP_VOLTCTL_MASK 0x7f
#define AXP_POWERBAT 0x32
#define AXP_POWERBAT_SHUTDOWN (1 << 7)
#define AXP_IRQEN1 0x40
+#define AXP_IRQEN1_ACIN_HI (1 << 6)
+#define AXP_IRQEN1_ACIN_LO (1 << 5)
+#define AXP_IRQEN1_VBUS_HI (1 << 3)
+#define AXP_IRQEN1_VBUS_LO (1 << 2)
#define AXP_IRQEN2 0x41
+#define AXP_IRQEN2_BAT_IN (1 << 7)
+#define AXP_IRQEN2_BAT_NO (1 << 6)
+#define AXP_IRQEN2_BATCHGC (1 << 3)
+#define AXP_IRQEN2_BATCHGD (1 << 2)
#define AXP_IRQEN3 0x42
#define AXP_IRQEN4 0x43
+#define AXP_IRQEN4_BATLVL_LO1 (1 << 1)
+#define AXP_IRQEN4_BATLVL_LO0 (1 << 0)
#define AXP_IRQEN5 0x44
#define AXP_IRQEN5_POKSIRQ (1 << 4)
+#define AXP_IRQEN5_POKLIRQ (1 << 3)
#define AXP_IRQEN6 0x45
+#define AXP_IRQSTAT1 0x48
+#define AXP_IRQSTAT1_ACIN_HI (1 << 6)
+#define AXP_IRQSTAT1_ACIN_LO (1 << 5)
+#define AXP_IRQSTAT1_VBUS_HI (1 << 3)
+#define AXP_IRQSTAT1_VBUS_LO (1 << 2)
+#define AXP_IRQSTAT2 0x49
+#define AXP_IRQSTAT2_BAT_IN (1 << 7)
+#define AXP_IRQSTAT2_BAT_NO (1 << 6)
+#define AXP_IRQSTAT2_BATCHGC (1 << 3)
+#define AXP_IRQSTAT2_BATCHGD (1 << 2)
+#define AXP_IRQSTAT3 0x4a
+#define AXP_IRQSTAT4 0x4b
+#define AXP_IRQSTAT4_BATLVL_LO1 (1 << 1)
+#define AXP_IRQSTAT4_BATLVL_LO0 (1 << 0)
#define AXP_IRQSTAT5 0x4c
#define AXP_IRQSTAT5_POKSIRQ (1 << 4)
+#define AXP_IRQSTAT5_POKLIRQ (1 << 3)
+#define AXP_IRQSTAT6 0x4d
+#define AXP_BATSENSE_HI 0x78
+#define AXP_BATSENSE_LO 0x79
+#define AXP_BATCHG_HI 0x7a
+#define AXP_BATCHG_LO 0x7b
+#define AXP_BATDISCHG_HI 0x7c
+#define AXP_BATDISCHG_LO 0x7d
#define AXP_GPIO0_CTRL 0x90
#define AXP_GPIO0LDO_CTRL 0x91
#define AXP_GPIO1_CTRL 0x92
#define AXP_GPIO1LDO_CTRL 0x93
#define AXP_GPIO_FUNC (0x7 << 0)
#define AXP_GPIO_FUNC_SHIFT 0
#define AXP_GPIO_FUNC_DRVLO 0
#define AXP_GPIO_FUNC_DRVHI 1
#define AXP_GPIO_FUNC_INPUT 2
#define AXP_GPIO_FUNC_LDO_ON 3
#define AXP_GPIO_FUNC_LDO_OFF 4
#define AXP_GPIO_SIGBIT 0x94
#define AXP_GPIO_PD 0x97
+#define AXP_FUEL_GAUGECTL 0xb8
+#define AXP_FUEL_GAUGECTL_EN (1 << 7)
+#define AXP_BAT_CAP 0xb9
+#define AXP_BAT_CAP_VALID (1 << 7)
+#define AXP_BAT_CAP_PERCENT 0x7f
+
+#define AXP_BAT_MAX_CAP_HI 0xe0
+#define AXP_BAT_MAX_CAP_VALID (1 << 7)
+#define AXP_BAT_MAX_CAP_LO 0xe1
+
+#define AXP_BAT_COULOMB_HI 0xe2
+#define AXP_BAT_COULOMB_VALID (1 << 7)
+#define AXP_BAT_COULOMB_LO 0xe3
+
+#define AXP_BAT_CAP_WARN 0xe6
+#define AXP_BAT_CAP_WARN_LV1 0xf0 /* Bits 4, 5, 6, 7 */
+#define AXP_BAT_CAP_WARN_LV2 0xf /* Bits 0, 1, 2, 3 */
+
static const struct {
const char *name;
uint8_t ctrl_reg;
} axp8xx_pins[] = {
{ "GPIO0", AXP_GPIO0_CTRL },
{ "GPIO1", AXP_GPIO1_CTRL },
};
enum AXP8XX_TYPE {
AXP803 = 1,
AXP813,
};
static struct ofw_compat_data compat_data[] = {
{ "x-powers,axp803", AXP803 },
{ "x-powers,axp813", AXP813 },
{ "x-powers,axp818", AXP813 },
{ NULL, 0 }
};
static struct resource_spec axp8xx_spec[] = {
{ SYS_RES_IRQ, 0, RF_ACTIVE },
{ -1, 0 }
};
struct axp8xx_regdef {
intptr_t id;
char *name;
char *supply_name;
uint8_t enable_reg;
uint8_t enable_mask;
uint8_t enable_value;
uint8_t disable_value;
uint8_t voltage_reg;
int voltage_min;
int voltage_max;
int voltage_step1;
int voltage_nstep1;
int voltage_step2;
int voltage_nstep2;
};
enum axp8xx_reg_id {
AXP8XX_REG_ID_DCDC1 = 100,
AXP8XX_REG_ID_DCDC2,
AXP8XX_REG_ID_DCDC3,
AXP8XX_REG_ID_DCDC4,
AXP8XX_REG_ID_DCDC5,
AXP8XX_REG_ID_DCDC6,
AXP813_REG_ID_DCDC7,
AXP803_REG_ID_DC1SW,
AXP8XX_REG_ID_DLDO1,
AXP8XX_REG_ID_DLDO2,
AXP8XX_REG_ID_DLDO3,
AXP8XX_REG_ID_DLDO4,
AXP8XX_REG_ID_ELDO1,
AXP8XX_REG_ID_ELDO2,
AXP8XX_REG_ID_ELDO3,
AXP8XX_REG_ID_ALDO1,
AXP8XX_REG_ID_ALDO2,
AXP8XX_REG_ID_ALDO3,
AXP8XX_REG_ID_FLDO1,
AXP8XX_REG_ID_FLDO2,
AXP813_REG_ID_FLDO3,
AXP8XX_REG_ID_GPIO0_LDO,
AXP8XX_REG_ID_GPIO1_LDO,
};
static struct axp8xx_regdef axp803_regdefs[] = {
{
.id = AXP803_REG_ID_DC1SW,
.name = "dc1sw",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_DC1SW,
.enable_value = AXP_POWERCTL2_DC1SW,
},
};
static struct axp8xx_regdef axp813_regdefs[] = {
{
.id = AXP813_REG_ID_DCDC7,
.name = "dcdc7",
.enable_reg = AXP_POWERCTL1,
.enable_mask = (uint8_t) AXP_POWERCTL1_DCDC7,
.enable_value = AXP_POWERCTL1_DCDC7,
.voltage_reg = AXP_VOLTCTL_DCDC7,
.voltage_min = 600,
.voltage_max = 1520,
.voltage_step1 = 10,
.voltage_nstep1 = 50,
.voltage_step2 = 20,
.voltage_nstep2 = 21,
},
};
static struct axp8xx_regdef axp8xx_common_regdefs[] = {
{
.id = AXP8XX_REG_ID_DCDC1,
.name = "dcdc1",
.enable_reg = AXP_POWERCTL1,
.enable_mask = (uint8_t) AXP_POWERCTL1_DCDC1,
.enable_value = AXP_POWERCTL1_DCDC1,
.voltage_reg = AXP_VOLTCTL_DCDC1,
.voltage_min = 1600,
.voltage_max = 3400,
.voltage_step1 = 100,
.voltage_nstep1 = 18,
},
{
.id = AXP8XX_REG_ID_DCDC2,
.name = "dcdc2",
.enable_reg = AXP_POWERCTL1,
.enable_mask = (uint8_t) AXP_POWERCTL1_DCDC2,
.enable_value = AXP_POWERCTL1_DCDC2,
.voltage_reg = AXP_VOLTCTL_DCDC2,
.voltage_min = 500,
.voltage_max = 1300,
.voltage_step1 = 10,
.voltage_nstep1 = 70,
.voltage_step2 = 20,
.voltage_nstep2 = 5,
},
{
.id = AXP8XX_REG_ID_DCDC3,
.name = "dcdc3",
.enable_reg = AXP_POWERCTL1,
.enable_mask = (uint8_t) AXP_POWERCTL1_DCDC3,
.enable_value = AXP_POWERCTL1_DCDC3,
.voltage_reg = AXP_VOLTCTL_DCDC3,
.voltage_min = 500,
.voltage_max = 1300,
.voltage_step1 = 10,
.voltage_nstep1 = 70,
.voltage_step2 = 20,
.voltage_nstep2 = 5,
},
{
.id = AXP8XX_REG_ID_DCDC4,
.name = "dcdc4",
.enable_reg = AXP_POWERCTL1,
.enable_mask = (uint8_t) AXP_POWERCTL1_DCDC4,
.enable_value = AXP_POWERCTL1_DCDC4,
.voltage_reg = AXP_VOLTCTL_DCDC4,
.voltage_min = 500,
.voltage_max = 1300,
.voltage_step1 = 10,
.voltage_nstep1 = 70,
.voltage_step2 = 20,
.voltage_nstep2 = 5,
},
{
.id = AXP8XX_REG_ID_DCDC5,
.name = "dcdc5",
.enable_reg = AXP_POWERCTL1,
.enable_mask = (uint8_t) AXP_POWERCTL1_DCDC5,
.enable_value = AXP_POWERCTL1_DCDC5,
.voltage_reg = AXP_VOLTCTL_DCDC5,
.voltage_min = 800,
.voltage_max = 1840,
.voltage_step1 = 10,
.voltage_nstep1 = 42,
.voltage_step2 = 20,
.voltage_nstep2 = 36,
},
{
.id = AXP8XX_REG_ID_DCDC6,
.name = "dcdc6",
.enable_reg = AXP_POWERCTL1,
.enable_mask = (uint8_t) AXP_POWERCTL1_DCDC6,
.enable_value = AXP_POWERCTL1_DCDC6,
.voltage_reg = AXP_VOLTCTL_DCDC6,
.voltage_min = 600,
.voltage_max = 1520,
.voltage_step1 = 10,
.voltage_nstep1 = 50,
.voltage_step2 = 20,
.voltage_nstep2 = 21,
},
{
.id = AXP8XX_REG_ID_DLDO1,
.name = "dldo1",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_DLDO1,
.enable_value = AXP_POWERCTL2_DLDO1,
.voltage_reg = AXP_VOLTCTL_DLDO1,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
{
.id = AXP8XX_REG_ID_DLDO2,
.name = "dldo2",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_DLDO2,
.enable_value = AXP_POWERCTL2_DLDO2,
.voltage_reg = AXP_VOLTCTL_DLDO2,
.voltage_min = 700,
.voltage_max = 4200,
.voltage_step1 = 100,
.voltage_nstep1 = 27,
.voltage_step2 = 200,
.voltage_nstep2 = 4,
},
{
.id = AXP8XX_REG_ID_DLDO3,
.name = "dldo3",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_DLDO3,
.enable_value = AXP_POWERCTL2_DLDO3,
.voltage_reg = AXP_VOLTCTL_DLDO3,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
{
.id = AXP8XX_REG_ID_DLDO4,
.name = "dldo4",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_DLDO4,
.enable_value = AXP_POWERCTL2_DLDO4,
.voltage_reg = AXP_VOLTCTL_DLDO4,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
{
.id = AXP8XX_REG_ID_ALDO1,
.name = "aldo1",
.enable_reg = AXP_POWERCTL3,
.enable_mask = (uint8_t) AXP_POWERCTL3_ALDO1,
.enable_value = AXP_POWERCTL3_ALDO1,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
{
.id = AXP8XX_REG_ID_ALDO2,
.name = "aldo2",
.enable_reg = AXP_POWERCTL3,
.enable_mask = (uint8_t) AXP_POWERCTL3_ALDO2,
.enable_value = AXP_POWERCTL3_ALDO2,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
{
.id = AXP8XX_REG_ID_ALDO3,
.name = "aldo3",
.enable_reg = AXP_POWERCTL3,
.enable_mask = (uint8_t) AXP_POWERCTL3_ALDO3,
.enable_value = AXP_POWERCTL3_ALDO3,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
{
.id = AXP8XX_REG_ID_ELDO1,
.name = "eldo1",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_ELDO1,
.enable_value = AXP_POWERCTL2_ELDO1,
.voltage_min = 700,
.voltage_max = 1900,
.voltage_step1 = 50,
.voltage_nstep1 = 24,
},
{
.id = AXP8XX_REG_ID_ELDO2,
.name = "eldo2",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_ELDO2,
.enable_value = AXP_POWERCTL2_ELDO2,
.voltage_min = 700,
.voltage_max = 1900,
.voltage_step1 = 50,
.voltage_nstep1 = 24,
},
{
.id = AXP8XX_REG_ID_ELDO3,
.name = "eldo3",
.enable_reg = AXP_POWERCTL2,
.enable_mask = (uint8_t) AXP_POWERCTL2_ELDO3,
.enable_value = AXP_POWERCTL2_ELDO3,
.voltage_min = 700,
.voltage_max = 1900,
.voltage_step1 = 50,
.voltage_nstep1 = 24,
},
{
.id = AXP8XX_REG_ID_FLDO1,
.name = "fldo1",
.enable_reg = AXP_POWERCTL3,
.enable_mask = (uint8_t) AXP_POWERCTL3_FLDO1,
.enable_value = AXP_POWERCTL3_FLDO1,
.voltage_min = 700,
.voltage_max = 1450,
.voltage_step1 = 50,
.voltage_nstep1 = 15,
},
{
.id = AXP8XX_REG_ID_FLDO2,
.name = "fldo2",
.enable_reg = AXP_POWERCTL3,
.enable_mask = (uint8_t) AXP_POWERCTL3_FLDO2,
.enable_value = AXP_POWERCTL3_FLDO2,
.voltage_min = 700,
.voltage_max = 1450,
.voltage_step1 = 50,
.voltage_nstep1 = 15,
},
{
.id = AXP8XX_REG_ID_GPIO0_LDO,
.name = "ldo-io0",
.enable_reg = AXP_GPIO0_CTRL,
.enable_mask = (uint8_t) AXP_GPIO_FUNC,
.enable_value = AXP_GPIO_FUNC_LDO_ON,
.disable_value = AXP_GPIO_FUNC_LDO_OFF,
.voltage_reg = AXP_GPIO0LDO_CTRL,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
{
.id = AXP8XX_REG_ID_GPIO1_LDO,
.name = "ldo-io1",
.enable_reg = AXP_GPIO1_CTRL,
.enable_mask = (uint8_t) AXP_GPIO_FUNC,
.enable_value = AXP_GPIO_FUNC_LDO_ON,
.disable_value = AXP_GPIO_FUNC_LDO_OFF,
.voltage_reg = AXP_GPIO1LDO_CTRL,
.voltage_min = 700,
.voltage_max = 3300,
.voltage_step1 = 100,
.voltage_nstep1 = 26,
},
};
struct axp8xx_softc;
struct axp8xx_reg_sc {
struct regnode *regnode;
device_t base_dev;
struct axp8xx_regdef *def;
phandle_t xref;
struct regnode_std_param *param;
};
struct axp8xx_softc {
struct resource *res;
uint16_t addr;
void *ih;
device_t gpiodev;
struct mtx mtx;
int busy;
int type;
/* Regulators */
struct axp8xx_reg_sc **regs;
int nregs;
};
#define AXP_LOCK(sc) mtx_lock(&(sc)->mtx)
#define AXP_UNLOCK(sc) mtx_unlock(&(sc)->mtx)
static int
axp8xx_read(device_t dev, uint8_t reg, uint8_t *data, uint8_t size)
{
struct axp8xx_softc *sc;
struct iic_msg msg[2];
sc = device_get_softc(dev);
msg[0].slave = sc->addr;
msg[0].flags = IIC_M_WR;
msg[0].len = 1;
msg[0].buf = &reg;
msg[1].slave = sc->addr;
msg[1].flags = IIC_M_RD;
msg[1].len = size;
msg[1].buf = data;
return (iicbus_transfer(dev, msg, 2));
}
static int
axp8xx_write(device_t dev, uint8_t reg, uint8_t val)
{
struct axp8xx_softc *sc;
struct iic_msg msg[2];
sc = device_get_softc(dev);
msg[0].slave = sc->addr;
msg[0].flags = IIC_M_WR;
msg[0].len = 1;
msg[0].buf = &reg;
msg[1].slave = sc->addr;
msg[1].flags = IIC_M_WR;
msg[1].len = 1;
msg[1].buf = &val;
return (iicbus_transfer(dev, msg, 2));
}
static int
axp8xx_regnode_init(struct regnode *regnode)
{
return (0);
}
static int
axp8xx_regnode_enable(struct regnode *regnode, bool enable, int *udelay)
{
struct axp8xx_reg_sc *sc;
uint8_t val;
sc = regnode_get_softc(regnode);
if (bootverbose)
device_printf(sc->base_dev, "%sable %s (%s)\n",
enable ? "En" : "Dis",
regnode_get_name(regnode),
sc->def->name);
axp8xx_read(sc->base_dev, sc->def->enable_reg, &val, 1);
val &= ~sc->def->enable_mask;
if (enable)
val |= sc->def->enable_value;
else {
if (sc->def->disable_value)
val |= sc->def->disable_value;
else
val &= ~sc->def->enable_value;
}
axp8xx_write(sc->base_dev, sc->def->enable_reg, val);
*udelay = 0;
return (0);
}
static void
axp8xx_regnode_reg_to_voltage(struct axp8xx_reg_sc *sc, uint8_t val, int *uv)
{
if (val < sc->def->voltage_nstep1)
*uv = sc->def->voltage_min + val * sc->def->voltage_step1;
else
*uv = sc->def->voltage_min +
(sc->def->voltage_nstep1 * sc->def->voltage_step1) +
((val - sc->def->voltage_nstep1) * sc->def->voltage_step2);
*uv *= 1000;
}
static int
axp8xx_regnode_voltage_to_reg(struct axp8xx_reg_sc *sc, int min_uvolt,
int max_uvolt, uint8_t *val)
{
uint8_t nval;
int nstep, uvolt;
nval = 0;
uvolt = sc->def->voltage_min * 1000;
for (nstep = 0; nstep < sc->def->voltage_nstep1 && uvolt < min_uvolt;
nstep++) {
++nval;
uvolt += (sc->def->voltage_step1 * 1000);
}
for (nstep = 0; nstep < sc->def->voltage_nstep2 && uvolt < min_uvolt;
nstep++) {
++nval;
uvolt += (sc->def->voltage_step2 * 1000);
}
if (uvolt > max_uvolt)
return (EINVAL);
*val = nval;
return (0);
}
static int
axp8xx_regnode_set_voltage(struct regnode *regnode, int min_uvolt,
int max_uvolt, int *udelay)
{
struct axp8xx_reg_sc *sc;
uint8_t val;
sc = regnode_get_softc(regnode);
if (bootverbose)
device_printf(sc->base_dev, "Setting %s (%s) to %d<->%d\n",
regnode_get_name(regnode),
sc->def->name,
min_uvolt, max_uvolt);
if (sc->def->voltage_step1 == 0)
return (ENXIO);
if (axp8xx_regnode_voltage_to_reg(sc, min_uvolt, max_uvolt, &val) != 0)
return (ERANGE);
axp8xx_write(sc->base_dev, sc->def->voltage_reg, val);
*udelay = 0;
return (0);
}
static int
axp8xx_regnode_get_voltage(struct regnode *regnode, int *uvolt)
{
struct axp8xx_reg_sc *sc;
uint8_t val;
sc = regnode_get_softc(regnode);
if (!sc->def->voltage_step1 || !sc->def->voltage_step2)
return (ENXIO);
axp8xx_read(sc->base_dev, sc->def->voltage_reg, &val, 1);
axp8xx_regnode_reg_to_voltage(sc, val & AXP_VOLTCTL_MASK, uvolt);
return (0);
}
static regnode_method_t axp8xx_regnode_methods[] = {
/* Regulator interface */
REGNODEMETHOD(regnode_init, axp8xx_regnode_init),
REGNODEMETHOD(regnode_enable, axp8xx_regnode_enable),
REGNODEMETHOD(regnode_set_voltage, axp8xx_regnode_set_voltage),
REGNODEMETHOD(regnode_get_voltage, axp8xx_regnode_get_voltage),
REGNODEMETHOD_END
};
DEFINE_CLASS_1(axp8xx_regnode, axp8xx_regnode_class, axp8xx_regnode_methods,
sizeof(struct axp8xx_reg_sc), regnode_class);
static void
axp8xx_shutdown(void *devp, int howto)
{
device_t dev;
if ((howto & RB_POWEROFF) == 0)
return;
dev = devp;
if (bootverbose)
device_printf(dev, "Shutdown Axp8xx\n");
axp8xx_write(dev, AXP_POWERBAT, AXP_POWERBAT_SHUTDOWN);
}
static void
axp8xx_intr(void *arg)
{
device_t dev;
uint8_t val;
int error;
dev = arg;
+ error = axp8xx_read(dev, AXP_IRQSTAT1, &val, 1);
+ if (error != 0)
+ return;
+
+ if (val) {
+ if (bootverbose)
+ device_printf(dev, "AXP_IRQSTAT1 val: %x\n", val);
+ if (val & AXP_IRQSTAT1_ACIN_HI)
+ devctl_notify("PMU", "AC", "plugged", NULL);
+ if (val & AXP_IRQSTAT1_ACIN_LO)
+ devctl_notify("PMU", "AC", "unplugged", NULL);
+ if (val & AXP_IRQSTAT1_VBUS_HI)
+ devctl_notify("PMU", "USB", "plugged", NULL);
+ if (val & AXP_IRQSTAT1_VBUS_LO)
+ devctl_notify("PMU", "USB", "unplugged", NULL);
+ /* Acknowledge */
+ axp8xx_write(dev, AXP_IRQSTAT1, val);
+ }
+
+ error = axp8xx_read(dev, AXP_IRQSTAT2, &val, 1);
+ if (error != 0)
+ return;
+
+ if (val) {
+ if (bootverbose)
+ device_printf(dev, "AXP_IRQSTAT2 val: %x\n", val);
+ if (val & AXP_IRQSTAT2_BATCHGD)
+ devctl_notify("PMU", "Battery", "charged", NULL);
+ if (val & AXP_IRQSTAT2_BATCHGC)
+ devctl_notify("PMU", "Battery", "charging", NULL);
+ if (val & AXP_IRQSTAT2_BAT_NO)
+ devctl_notify("PMU", "Battery", "absent", NULL);
+ if (val & AXP_IRQSTAT2_BAT_IN)
+ devctl_notify("PMU", "Battery", "plugged", NULL);
+ /* Acknowledge */
+ axp8xx_write(dev, AXP_IRQSTAT2, val);
+ }
+
+ error = axp8xx_read(dev, AXP_IRQSTAT3, &val, 1);
+ if (error != 0)
+ return;
+
+ if (val) {
+ /* Acknowledge */
+ axp8xx_write(dev, AXP_IRQSTAT3, val);
+ }
+
+ error = axp8xx_read(dev, AXP_IRQSTAT4, &val, 1);
+ if (error != 0)
+ return;
+
+ if (val) {
+ if (bootverbose)
+ device_printf(dev, "AXP_IRQSTAT4 val: %x\n", val);
+ if (val & AXP_IRQSTAT4_BATLVL_LO0)
+ devctl_notify("PMU", "Battery", "lower than level 2", NULL);
+ if (val & AXP_IRQSTAT4_BATLVL_LO1)
+ devctl_notify("PMU", "Battery", "lower than level 1", NULL);
+ /* Acknowledge */
+ axp8xx_write(dev, AXP_IRQSTAT4, val);
+ }
+
error = axp8xx_read(dev, AXP_IRQSTAT5, &val, 1);
if (error != 0)
return;
if (val != 0) {
if ((val & AXP_IRQSTAT5_POKSIRQ) != 0) {
if (bootverbose)
device_printf(dev, "Power button pressed\n");
shutdown_nice(RB_POWEROFF);
}
/* Acknowledge */
axp8xx_write(dev, AXP_IRQSTAT5, val);
}
+
+ error = axp8xx_read(dev, AXP_IRQSTAT6, &val, 1);
+ if (error != 0)
+ return;
+
+ if (val) {
+ /* Acknowledge */
+ axp8xx_write(dev, AXP_IRQSTAT6, val);
+ }
}
static device_t
axp8xx_gpio_get_bus(device_t dev)
{
struct axp8xx_softc *sc;
sc = device_get_softc(dev);
return (sc->gpiodev);
}
static int
axp8xx_gpio_pin_max(device_t dev, int *maxpin)
{
*maxpin = nitems(axp8xx_pins) - 1;
return (0);
}
static int
axp8xx_gpio_pin_getname(device_t dev, uint32_t pin, char *name)
{
if (pin >= nitems(axp8xx_pins))
return (EINVAL);
snprintf(name, GPIOMAXNAME, "%s", axp8xx_pins[pin].name);
return (0);
}
static int
axp8xx_gpio_pin_getcaps(device_t dev, uint32_t pin, uint32_t *caps)
{
if (pin >= nitems(axp8xx_pins))
return (EINVAL);
*caps = GPIO_PIN_INPUT | GPIO_PIN_OUTPUT;
return (0);
}
static int
axp8xx_gpio_pin_getflags(device_t dev, uint32_t pin, uint32_t *flags)
{
struct axp8xx_softc *sc;
uint8_t data, func;
int error;
if (pin >= nitems(axp8xx_pins))
return (EINVAL);
sc = device_get_softc(dev);
AXP_LOCK(sc);
error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1);
if (error == 0) {
func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT;
if (func == AXP_GPIO_FUNC_INPUT)
*flags = GPIO_PIN_INPUT;
else if (func == AXP_GPIO_FUNC_DRVLO ||
func == AXP_GPIO_FUNC_DRVHI)
*flags = GPIO_PIN_OUTPUT;
else
*flags = 0;
}
AXP_UNLOCK(sc);
return (error);
}
static int
axp8xx_gpio_pin_setflags(device_t dev, uint32_t pin, uint32_t flags)
{
struct axp8xx_softc *sc;
uint8_t data;
int error;
if (pin >= nitems(axp8xx_pins))
return (EINVAL);
sc = device_get_softc(dev);
AXP_LOCK(sc);
error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1);
if (error == 0) {
data &= ~AXP_GPIO_FUNC;
if ((flags & (GPIO_PIN_INPUT|GPIO_PIN_OUTPUT)) != 0) {
if ((flags & GPIO_PIN_OUTPUT) == 0)
data |= AXP_GPIO_FUNC_INPUT;
}
error = axp8xx_write(dev, axp8xx_pins[pin].ctrl_reg, data);
}
AXP_UNLOCK(sc);
return (error);
}
static int
axp8xx_gpio_pin_get(device_t dev, uint32_t pin, unsigned int *val)
{
struct axp8xx_softc *sc;
uint8_t data, func;
int error;
if (pin >= nitems(axp8xx_pins))
return (EINVAL);
sc = device_get_softc(dev);
AXP_LOCK(sc);
error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1);
if (error == 0) {
func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT;
switch (func) {
case AXP_GPIO_FUNC_DRVLO:
*val = 0;
break;
case AXP_GPIO_FUNC_DRVHI:
*val = 1;
break;
case AXP_GPIO_FUNC_INPUT:
error = axp8xx_read(dev, AXP_GPIO_SIGBIT, &data, 1);
if (error == 0)
*val = (data & (1 << pin)) ? 1 : 0;
break;
default:
error = EIO;
break;
}
}
AXP_UNLOCK(sc);
return (error);
}
static int
axp8xx_gpio_pin_set(device_t dev, uint32_t pin, unsigned int val)
{
struct axp8xx_softc *sc;
uint8_t data, func;
int error;
if (pin >= nitems(axp8xx_pins))
return (EINVAL);
sc = device_get_softc(dev);
AXP_LOCK(sc);
error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1);
if (error == 0) {
func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT;
switch (func) {
case AXP_GPIO_FUNC_DRVLO:
case AXP_GPIO_FUNC_DRVHI:
data &= ~AXP_GPIO_FUNC;
data |= (val << AXP_GPIO_FUNC_SHIFT);
break;
default:
error = EIO;
break;
}
}
if (error == 0)
error = axp8xx_write(dev, axp8xx_pins[pin].ctrl_reg, data);
AXP_UNLOCK(sc);
return (error);
}
static int
axp8xx_gpio_pin_toggle(device_t dev, uint32_t pin)
{
struct axp8xx_softc *sc;
uint8_t data, func;
int error;
if (pin >= nitems(axp8xx_pins))
return (EINVAL);
sc = device_get_softc(dev);
AXP_LOCK(sc);
error = axp8xx_read(dev, axp8xx_pins[pin].ctrl_reg, &data, 1);
if (error == 0) {
func = (data & AXP_GPIO_FUNC) >> AXP_GPIO_FUNC_SHIFT;
switch (func) {
case AXP_GPIO_FUNC_DRVLO:
data &= ~AXP_GPIO_FUNC;
data |= (AXP_GPIO_FUNC_DRVHI << AXP_GPIO_FUNC_SHIFT);
break;
case AXP_GPIO_FUNC_DRVHI:
data &= ~AXP_GPIO_FUNC;
data |= (AXP_GPIO_FUNC_DRVLO << AXP_GPIO_FUNC_SHIFT);
break;
default:
error = EIO;
break;
}
}
if (error == 0)
error = axp8xx_write(dev, axp8xx_pins[pin].ctrl_reg, data);
AXP_UNLOCK(sc);
return (error);
}
static int
axp8xx_gpio_map_gpios(device_t bus, phandle_t dev, phandle_t gparent,
int gcells, pcell_t *gpios, uint32_t *pin, uint32_t *flags)
{
if (gpios[0] >= nitems(axp8xx_pins))
return (EINVAL);
*pin = gpios[0];
*flags = gpios[1];
return (0);
}
static phandle_t
axp8xx_get_node(device_t dev, device_t bus)
{
return (ofw_bus_get_node(dev));
}
static struct axp8xx_reg_sc *
axp8xx_reg_attach(device_t dev, phandle_t node,
struct axp8xx_regdef *def)
{
struct axp8xx_reg_sc *reg_sc;
struct regnode_init_def initdef;
struct regnode *regnode;
memset(&initdef, 0, sizeof(initdef));
if (regulator_parse_ofw_stdparam(dev, node, &initdef) != 0)
return (NULL);
if (initdef.std_param.min_uvolt == 0)
initdef.std_param.min_uvolt = def->voltage_min * 1000;
if (initdef.std_param.max_uvolt == 0)
initdef.std_param.max_uvolt = def->voltage_max * 1000;
initdef.id = def->id;
initdef.ofw_node = node;
regnode = regnode_create(dev, &axp8xx_regnode_class, &initdef);
if (regnode == NULL) {
device_printf(dev, "cannot create regulator\n");
return (NULL);
}
reg_sc = regnode_get_softc(regnode);
reg_sc->regnode = regnode;
reg_sc->base_dev = dev;
reg_sc->def = def;
reg_sc->xref = OF_xref_from_node(node);
reg_sc->param = regnode_get_stdparam(regnode);
regnode_register(regnode);
return (reg_sc);
}
static int
axp8xx_regdev_map(device_t dev, phandle_t xref, int ncells, pcell_t *cells,
intptr_t *num)
{
struct axp8xx_softc *sc;
int i;
sc = device_get_softc(dev);
for (i = 0; i < sc->nregs; i++) {
if (sc->regs[i] == NULL)
continue;
if (sc->regs[i]->xref == xref) {
*num = sc->regs[i]->def->id;
return (0);
}
}
return (ENXIO);
}
static int
axp8xx_probe(device_t dev)
{
if (!ofw_bus_status_okay(dev))
return (ENXIO);
switch (ofw_bus_search_compatible(dev, compat_data)->ocd_data)
{
case AXP803:
device_set_desc(dev, "X-Powers AXP803 Power Management Unit");
break;
case AXP813:
device_set_desc(dev, "X-Powers AXP813 Power Management Unit");
break;
default:
return (ENXIO);
}
return (BUS_PROBE_DEFAULT);
}
static int
axp8xx_attach(device_t dev)
{
struct axp8xx_softc *sc;
struct axp8xx_reg_sc *reg;
uint8_t chip_id;
phandle_t rnode, child;
int error, i;
sc = device_get_softc(dev);
sc->addr = iicbus_get_addr(dev);
mtx_init(&sc->mtx, device_get_nameunit(dev), NULL, MTX_DEF);
error = bus_alloc_resources(dev, axp8xx_spec, &sc->res);
if (error != 0) {
device_printf(dev, "cannot allocate resources for device\n");
return (error);
}
if (bootverbose) {
axp8xx_read(dev, AXP_ICTYPE, &chip_id, 1);
device_printf(dev, "chip ID 0x%02x\n", chip_id);
}
sc->nregs = nitems(axp8xx_common_regdefs);
sc->type = ofw_bus_search_compatible(dev, compat_data)->ocd_data;
switch (sc->type) {
case AXP803:
sc->nregs += nitems(axp803_regdefs);
break;
case AXP813:
sc->nregs += nitems(axp813_regdefs);
break;
}
sc->regs = malloc(sizeof(struct axp8xx_reg_sc *) * sc->nregs,
M_AXP8XX_REG, M_WAITOK | M_ZERO);
/* Attach known regulators that exist in the DT */
rnode = ofw_bus_find_child(ofw_bus_get_node(dev), "regulators");
if (rnode > 0) {
for (i = 0; i < sc->nregs; i++) {
char *regname;
struct axp8xx_regdef *regdef;
if (i < nitems(axp8xx_common_regdefs)) {
regname = axp8xx_common_regdefs[i].name;
regdef = &axp8xx_common_regdefs[i];
} else {
int off;
off = i - nitems(axp8xx_common_regdefs);
switch (sc->type) {
case AXP803:
regname = axp803_regdefs[off].name;
regdef = &axp803_regdefs[off];
break;
case AXP813:
regname = axp813_regdefs[off].name;
regdef = &axp813_regdefs[off];
break;
}
}
child = ofw_bus_find_child(rnode,
regname);
if (child == 0)
continue;
reg = axp8xx_reg_attach(dev, child,
regdef);
if (reg == NULL) {
device_printf(dev,
"cannot attach regulator %s\n",
regname);
continue;
}
sc->regs[i] = reg;
}
}
- /* Enable IRQ on short power key press */
- axp8xx_write(dev, AXP_IRQEN1, 0);
- axp8xx_write(dev, AXP_IRQEN2, 0);
+ /* Enable interrupts */
+ axp8xx_write(dev, AXP_IRQEN1,
+ AXP_IRQEN1_VBUS_LO |
+ AXP_IRQEN1_VBUS_HI |
+ AXP_IRQEN1_ACIN_LO |
+ AXP_IRQEN1_ACIN_HI);
+ axp8xx_write(dev, AXP_IRQEN2,
+ AXP_IRQEN2_BATCHGD |
+ AXP_IRQEN2_BATCHGC |
+ AXP_IRQEN2_BAT_NO |
+ AXP_IRQEN2_BAT_IN);
axp8xx_write(dev, AXP_IRQEN3, 0);
- axp8xx_write(dev, AXP_IRQEN4, 0);
- axp8xx_write(dev, AXP_IRQEN5, AXP_IRQEN5_POKSIRQ);
+ axp8xx_write(dev, AXP_IRQEN4,
+ AXP_IRQEN4_BATLVL_LO0 |
+ AXP_IRQEN4_BATLVL_LO1);
+ axp8xx_write(dev, AXP_IRQEN5,
+ AXP_IRQEN5_POKSIRQ |
+ AXP_IRQEN5_POKLIRQ);
axp8xx_write(dev, AXP_IRQEN6, 0);
/* Install interrupt handler */
error = bus_setup_intr(dev, sc->res, INTR_TYPE_MISC | INTR_MPSAFE,
NULL, axp8xx_intr, dev, &sc->ih);
if (error != 0) {
device_printf(dev, "cannot setup interrupt handler\n");
return (error);
}
EVENTHANDLER_REGISTER(shutdown_final, axp8xx_shutdown, dev,
SHUTDOWN_PRI_LAST);
sc->gpiodev = gpiobus_attach_bus(dev);
return (0);
}
static device_method_t axp8xx_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, axp8xx_probe),
DEVMETHOD(device_attach, axp8xx_attach),
/* GPIO interface */
DEVMETHOD(gpio_get_bus, axp8xx_gpio_get_bus),
DEVMETHOD(gpio_pin_max, axp8xx_gpio_pin_max),
DEVMETHOD(gpio_pin_getname, axp8xx_gpio_pin_getname),
DEVMETHOD(gpio_pin_getcaps, axp8xx_gpio_pin_getcaps),
DEVMETHOD(gpio_pin_getflags, axp8xx_gpio_pin_getflags),
DEVMETHOD(gpio_pin_setflags, axp8xx_gpio_pin_setflags),
DEVMETHOD(gpio_pin_get, axp8xx_gpio_pin_get),
DEVMETHOD(gpio_pin_set, axp8xx_gpio_pin_set),
DEVMETHOD(gpio_pin_toggle, axp8xx_gpio_pin_toggle),
DEVMETHOD(gpio_map_gpios, axp8xx_gpio_map_gpios),
/* Regdev interface */
DEVMETHOD(regdev_map, axp8xx_regdev_map),
/* OFW bus interface */
DEVMETHOD(ofw_bus_get_node, axp8xx_get_node),
DEVMETHOD_END
};
static driver_t axp8xx_driver = {
"axp8xx_pmu",
axp8xx_methods,
sizeof(struct axp8xx_softc),
};
static devclass_t axp8xx_devclass;
extern devclass_t ofwgpiobus_devclass, gpioc_devclass;
extern driver_t ofw_gpiobus_driver, gpioc_driver;
EARLY_DRIVER_MODULE(axp8xx, iicbus, axp8xx_driver, axp8xx_devclass, 0, 0,
BUS_PASS_INTERRUPT + BUS_PASS_ORDER_LAST);
EARLY_DRIVER_MODULE(ofw_gpiobus, axp8xx_pmu, ofw_gpiobus_driver,
ofwgpiobus_devclass, 0, 0, BUS_PASS_INTERRUPT + BUS_PASS_ORDER_LAST);
DRIVER_MODULE(gpioc, axp8xx_pmu, gpioc_driver, gpioc_devclass, 0, 0);
MODULE_VERSION(axp8xx, 1);
MODULE_DEPEND(axp8xx, iicbus, 1, 1, 1);
Index: projects/clang800-import/sys/arm64/acpica/acpi_iort.c
===================================================================
--- projects/clang800-import/sys/arm64/acpica/acpi_iort.c (nonexistent)
+++ projects/clang800-import/sys/arm64/acpica/acpi_iort.c (revision 343956)
@@ -0,0 +1,502 @@
+/*-
+ * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * Author: Jayachandran C Nair <jchandra@freebsd.org>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include "opt_acpi.h"
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/bus.h>
+#include <sys/kernel.h>
+#include <sys/malloc.h>
+
+#include <machine/intr.h>
+
+#include <contrib/dev/acpica/include/acpi.h>
+#include <contrib/dev/acpica/include/accommon.h>
+#include <contrib/dev/acpica/include/actables.h>
+
+#include <dev/acpica/acpivar.h>
+
+/*
+ * Track next XREF available for ITS groups.
+ */
+static u_int acpi_its_xref = ACPI_MSI_XREF;
+
+/*
+ * Some types of IORT nodes have a set of mappings. Each of them maps
+ * a range of device IDs [base..end] from the current node to another
+ * node. The corresponding device IDs on the destination node start
+ * at outbase.
+ */
+struct iort_map_entry {
+ u_int base;
+ u_int end;
+ u_int outbase;
+ u_int flags;
+ u_int out_node_offset;
+ struct iort_node *out_node;
+};
+
+/*
+ * The ITS group node does not have any outgoing mappings. It has a
+ * list of GIC ITS blocks which can handle the device ID. We will
+ * store the PIC XREF used by the block and the block's proximity
+ * data here, so that they can be retrieved together.
+ */
+struct iort_its_entry {
+ u_int its_id;
+ u_int xref;
+ int pxm;
+};
+
+/*
+ * IORT node. Each node has some device specific data depending on the
+ * type of the node. The node can also have a set of mappings, OR in
+ * case of ITS group nodes a set of ITS entries.
+ * The nodes are kept in a TAILQ by type.
+ */
+struct iort_node {
+ TAILQ_ENTRY(iort_node) next; /* next entry with same type */
+ enum AcpiIortNodeType type; /* ACPI type */
+ u_int node_offset; /* offset in IORT - node ID */
+ u_int nentries; /* items in array below */
+ u_int usecount; /* for bookkeeping */
+ union {
+ ACPI_IORT_ROOT_COMPLEX pci_rc; /* PCI root complex */
+ ACPI_IORT_SMMU smmu;
+ ACPI_IORT_SMMU_V3 smmu_v3;
+ } data;
+ union {
+ struct iort_map_entry *mappings; /* node mappings */
+ struct iort_its_entry *its; /* ITS IDs array */
+ } entries;
+};
+
+/* Lists for each of the types. */
+static TAILQ_HEAD(, iort_node) pci_nodes = TAILQ_HEAD_INITIALIZER(pci_nodes);
+static TAILQ_HEAD(, iort_node) smmu_nodes = TAILQ_HEAD_INITIALIZER(smmu_nodes);
+static TAILQ_HEAD(, iort_node) its_groups = TAILQ_HEAD_INITIALIZER(its_groups);
+
+/*
+ * Look up an ID in the mappings array. If successful, map the input ID
+ * to the output ID and return the output node found.
+ */
+static struct iort_node *
+iort_entry_lookup(struct iort_node *node, u_int id, u_int *outid)
+{
+ struct iort_map_entry *entry;
+ int i;
+
+ entry = node->entries.mappings;
+ for (i = 0; i < node->nentries; i++, entry++) {
+ if (entry->base <= id && id <= entry->end)
+ break;
+ }
+ if (i == node->nentries)
+ return (NULL);
+ if ((entry->flags & ACPI_IORT_ID_SINGLE_MAPPING) == 0)
+ *outid = entry->outbase + (id - entry->base);
+ else
+ *outid = entry->outbase;
+ return (entry->out_node);
+}
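The lookup above is a linear scan over ID ranges followed by either an offset translation or a collapse to a single output ID, depending on ACPI_IORT_ID_SINGLE_MAPPING. A standalone sketch of the same translation, with simplified stand-in types (the struct and names below are illustrative, not the kernel's):

```c
#include <stddef.h>

/* Simplified stand-in for struct iort_map_entry (illustration only). */
struct map_entry {
	unsigned base;		/* first input ID covered */
	unsigned end;		/* last input ID covered */
	unsigned outbase;	/* first output ID */
	int	 single;	/* ACPI_IORT_ID_SINGLE_MAPPING analogue */
};

/*
 * Translate an input ID through a mapping table: find the covering
 * range, then either offset into the output range or collapse to a
 * single output ID. Returns 0 on success, -1 if no range matches.
 */
static int
map_id(const struct map_entry *map, size_t n, unsigned id, unsigned *out)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (map[i].base <= id && id <= map[i].end) {
			*out = map[i].single ? map[i].outbase :
			    map[i].outbase + (id - map[i].base);
			return (0);
		}
	}
	return (-1);
}
```

For example, with a range mapping {0x100..0x1ff -> 0x0} an input ID of 0x110 translates to 0x10, while a single mapping always yields its outbase regardless of the input offset.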
+
+/*
+ * Map a PCI RID to a SMMU node or an ITS node, based on outtype.
+ */
+static struct iort_node *
+iort_pci_rc_map(u_int seg, u_int rid, u_int outtype, u_int *outid)
+{
+ struct iort_node *node, *out_node;
+ u_int nxtid;
+
+ out_node = NULL;
+ TAILQ_FOREACH(node, &pci_nodes, next) {
+ if (node->data.pci_rc.PciSegmentNumber != seg)
+ continue;
+ out_node = iort_entry_lookup(node, rid, &nxtid);
+ if (out_node != NULL)
+ break;
+ }
+
+ /* Could not find a PCI RC node with segment and device ID. */
+ if (out_node == NULL)
+ return (NULL);
+
+ /* Node can be SMMU or ITS. If SMMU, we need another lookup. */
+ if (outtype == ACPI_IORT_NODE_ITS_GROUP &&
+ (out_node->type == ACPI_IORT_NODE_SMMU_V3 ||
+ out_node->type == ACPI_IORT_NODE_SMMU)) {
+ out_node = iort_entry_lookup(out_node, nxtid, &nxtid);
+ if (out_node == NULL)
+ return (NULL);
+ }
+
+ KASSERT(out_node->type == outtype, ("mapping fail"));
+ *outid = nxtid;
+ return (out_node);
+}
+
+#ifdef notyet
+/*
+ * Not implemented, map a PCIe device to the SMMU it is associated with.
+ */
+int
+acpi_iort_map_smmu(u_int seg, u_int devid, void **smmu, u_int *sid)
+{
+ /* XXX: convert oref to SMMU device */
+ return (ENXIO);
+}
+#endif
+
+/*
+ * Allocate memory for a node's mapping array and copy the ID
+ * mappings from the IORT node entry into it.
+ */
+static void
+iort_copy_data(struct iort_node *node, ACPI_IORT_NODE *node_entry)
+{
+ ACPI_IORT_ID_MAPPING *map_entry;
+ struct iort_map_entry *mapping;
+ int i;
+
+ map_entry = ACPI_ADD_PTR(ACPI_IORT_ID_MAPPING, node_entry,
+ node_entry->MappingOffset);
+ node->nentries = node_entry->MappingCount;
+ node->usecount = 0;
+ mapping = malloc(sizeof(*mapping) * node->nentries, M_DEVBUF,
+ M_WAITOK | M_ZERO);
+ node->entries.mappings = mapping;
+ for (i = 0; i < node->nentries; i++, mapping++, map_entry++) {
+ mapping->base = map_entry->InputBase;
+ mapping->end = map_entry->InputBase + map_entry->IdCount - 1;
+ mapping->outbase = map_entry->OutputBase;
+ mapping->out_node_offset = map_entry->OutputReference;
+ mapping->flags = map_entry->Flags;
+ mapping->out_node = NULL;
+ }
+}
+
+/*
+ * Allocate and copy an ITS group.
+ */
+static void
+iort_copy_its(struct iort_node *node, ACPI_IORT_NODE *node_entry)
+{
+ struct iort_its_entry *its;
+ ACPI_IORT_ITS_GROUP *itsg_entry;
+ UINT32 *id;
+ int i;
+
+ itsg_entry = (ACPI_IORT_ITS_GROUP *)node_entry->NodeData;
+ node->nentries = itsg_entry->ItsCount;
+ node->usecount = 0;
+ its = malloc(sizeof(*its) * node->nentries, M_DEVBUF, M_WAITOK | M_ZERO);
+ node->entries.its = its;
+ id = &itsg_entry->Identifiers[0];
+ for (i = 0; i < node->nentries; i++, its++, id++) {
+ its->its_id = *id;
+ its->pxm = -1;
+ its->xref = 0;
+ }
+}
+
+/*
+ * Walk the IORT table and add nodes to corresponding list.
+ */
+static void
+iort_add_nodes(ACPI_IORT_NODE *node_entry, u_int node_offset)
+{
+ ACPI_IORT_ROOT_COMPLEX *pci_rc;
+ ACPI_IORT_SMMU *smmu;
+ ACPI_IORT_SMMU_V3 *smmu_v3;
+ struct iort_node *node;
+
+ node = malloc(sizeof(*node), M_DEVBUF, M_WAITOK | M_ZERO);
+ node->type = node_entry->Type;
+ node->node_offset = node_offset;
+
+ /* copy nodes depending on type */
+ switch(node_entry->Type) {
+ case ACPI_IORT_NODE_PCI_ROOT_COMPLEX:
+ pci_rc = (ACPI_IORT_ROOT_COMPLEX *)node_entry->NodeData;
+ memcpy(&node->data.pci_rc, pci_rc, sizeof(*pci_rc));
+ iort_copy_data(node, node_entry);
+ TAILQ_INSERT_TAIL(&pci_nodes, node, next);
+ break;
+ case ACPI_IORT_NODE_SMMU:
+ smmu = (ACPI_IORT_SMMU *)node_entry->NodeData;
+ memcpy(&node->data.smmu, smmu, sizeof(*smmu));
+ iort_copy_data(node, node_entry);
+ TAILQ_INSERT_TAIL(&smmu_nodes, node, next);
+ break;
+ case ACPI_IORT_NODE_SMMU_V3:
+ smmu_v3 = (ACPI_IORT_SMMU_V3 *)node_entry->NodeData;
+ memcpy(&node->data.smmu_v3, smmu_v3, sizeof(*smmu_v3));
+ iort_copy_data(node, node_entry);
+ TAILQ_INSERT_TAIL(&smmu_nodes, node, next);
+ break;
+ case ACPI_IORT_NODE_ITS_GROUP:
+ iort_copy_its(node, node_entry);
+ TAILQ_INSERT_TAIL(&its_groups, node, next);
+ break;
+ default:
+ printf("ACPI: IORT: Dropping unhandled type %u\n",
+ node_entry->Type);
+ free(node, M_DEVBUF);
+ break;
+ }
+}
+
+/*
+ * For the given mapping entry, walk through all the possible destination
+ * nodes and resolve the output reference.
+ */
+static void
+iort_resolve_node(struct iort_map_entry *entry, int check_smmu)
+{
+ struct iort_node *node, *np;
+
+ node = NULL;
+ if (check_smmu) {
+ TAILQ_FOREACH(np, &smmu_nodes, next) {
+ if (entry->out_node_offset == np->node_offset) {
+ node = np;
+ break;
+ }
+ }
+ }
+ if (node == NULL) {
+ TAILQ_FOREACH(np, &its_groups, next) {
+ if (entry->out_node_offset == np->node_offset) {
+ node = np;
+ break;
+ }
+ }
+ }
+ if (node != NULL) {
+ node->usecount++;
+ entry->out_node = node;
+ } else {
+ printf("ACPI: IORT: Firmware Bug: no mapping for node %u\n",
+ entry->out_node_offset);
+ }
+}
+
+/*
+ * Resolve all output node references to node pointers.
+ */
+static void
+iort_post_process_mappings(void)
+{
+ struct iort_node *node;
+ int i;
+
+ TAILQ_FOREACH(node, &pci_nodes, next)
+ for (i = 0; i < node->nentries; i++)
+ iort_resolve_node(&node->entries.mappings[i], TRUE);
+ TAILQ_FOREACH(node, &smmu_nodes, next)
+ for (i = 0; i < node->nentries; i++)
+ iort_resolve_node(&node->entries.mappings[i], FALSE);
+ /* TODO: named nodes */
+}
+
+/*
+ * Walk MADT table, assign PIC xrefs to all ITS entries.
+ */
+static void
+madt_resolve_its_xref(ACPI_SUBTABLE_HEADER *entry, void *arg)
+{
+ ACPI_MADT_GENERIC_TRANSLATOR *gict;
+ struct iort_node *its_node;
+ struct iort_its_entry *its_entry;
+ u_int xref;
+ int i, matches;
+
+ if (entry->Type != ACPI_MADT_TYPE_GENERIC_TRANSLATOR)
+ return;
+
+ gict = (ACPI_MADT_GENERIC_TRANSLATOR *)entry;
+ matches = 0;
+ xref = acpi_its_xref++;
+ TAILQ_FOREACH(its_node, &its_groups, next) {
+ its_entry = its_node->entries.its;
+ for (i = 0; i < its_node->nentries; i++, its_entry++) {
+ if (its_entry->its_id == gict->TranslationId) {
+ its_entry->xref = xref;
+ matches++;
+ }
+ }
+ }
+ if (matches == 0)
+ printf("ACPI: IORT: Unused ITS block, ID %u\n",
+ gict->TranslationId);
+}
+
+/*
+ * Walk SRAT, assign proximity to all ITS entries.
+ */
+static void
+srat_resolve_its_pxm(ACPI_SUBTABLE_HEADER *entry, void *arg)
+{
+ ACPI_SRAT_GIC_ITS_AFFINITY *gicits;
+ struct iort_node *its_node;
+ struct iort_its_entry *its_entry;
+ int i, matches;
+
+ if (entry->Type != ACPI_SRAT_TYPE_GIC_ITS_AFFINITY)
+ return;
+
+ matches = 0;
+ gicits = (ACPI_SRAT_GIC_ITS_AFFINITY *)entry;
+ TAILQ_FOREACH(its_node, &its_groups, next) {
+ its_entry = its_node->entries.its;
+ for (i = 0; i < its_node->nentries; i++, its_entry++) {
+ if (its_entry->its_id == gicits->ItsId) {
+ its_entry->pxm = acpi_map_pxm_to_vm_domainid(
+ gicits->ProximityDomain);
+ matches++;
+ }
+ }
+ }
+ if (matches == 0)
+ printf("ACPI: IORT: ITS block %u in SRAT not found in IORT!\n",
+ gicits->ItsId);
+}
+
+/*
+ * Cross check the ITS Id with MADT and (if available) SRAT.
+ */
+static int
+iort_post_process_its(void)
+{
+ ACPI_TABLE_MADT *madt;
+ ACPI_TABLE_SRAT *srat;
+ vm_paddr_t madt_pa, srat_pa;
+
+ /* Check ITS block in MADT */
+ madt_pa = acpi_find_table(ACPI_SIG_MADT);
+ KASSERT(madt_pa != 0, ("no MADT!"));
+ madt = acpi_map_table(madt_pa, ACPI_SIG_MADT);
+ KASSERT(madt != NULL, ("can't map MADT!"));
+ acpi_walk_subtables(madt + 1, (char *)madt + madt->Header.Length,
+ madt_resolve_its_xref, NULL);
+ acpi_unmap_table(madt);
+
+ /* Get proximity if available */
+ srat_pa = acpi_find_table(ACPI_SIG_SRAT);
+ if (srat_pa != 0) {
+ srat = acpi_map_table(srat_pa, ACPI_SIG_SRAT);
+ KASSERT(srat != NULL, ("can't map SRAT!"));
+ acpi_walk_subtables(srat + 1, (char *)srat + srat->Header.Length,
+ srat_resolve_its_pxm, NULL);
+ acpi_unmap_table(srat);
+ }
+ return (0);
+}
+
+/*
+ * Find, parse, and save IO Remapping Table ("IORT").
+ */
+static int
+acpi_parse_iort(void *dummy __unused)
+{
+ ACPI_TABLE_IORT *iort;
+ ACPI_IORT_NODE *node_entry;
+ vm_paddr_t iort_pa;
+ u_int node_offset;
+
+ iort_pa = acpi_find_table(ACPI_SIG_IORT);
+ if (iort_pa == 0)
+ return (ENXIO);
+
+ iort = acpi_map_table(iort_pa, ACPI_SIG_IORT);
+ if (iort == NULL) {
+ printf("ACPI: Unable to map the IORT table!\n");
+ return (ENXIO);
+ }
+ for (node_offset = iort->NodeOffset;
+ node_offset < iort->Header.Length;
+ node_offset += node_entry->Length) {
+ node_entry = ACPI_ADD_PTR(ACPI_IORT_NODE, iort, node_offset);
+ iort_add_nodes(node_entry, node_offset);
+ }
+ acpi_unmap_table(iort);
+ iort_post_process_mappings();
+ iort_post_process_its();
+ return (0);
+}
+SYSINIT(acpi_parse_iort, SI_SUB_DRIVERS, SI_ORDER_FIRST, acpi_parse_iort, NULL);
+
+/*
+ * Provide ITS ID to PIC xref mapping.
+ */
+int
+acpi_iort_its_lookup(u_int its_id, u_int *xref, int *pxm)
+{
+ struct iort_node *its_node;
+ struct iort_its_entry *its_entry;
+ int i;
+
+ TAILQ_FOREACH(its_node, &its_groups, next) {
+ its_entry = its_node->entries.its;
+ for (i = 0; i < its_node->nentries; i++, its_entry++) {
+ if (its_entry->its_id == its_id) {
+ *xref = its_entry->xref;
+ *pxm = its_entry->pxm;
+ return (0);
+ }
+ }
+ }
+ return (ENOENT);
+}
+
+/*
+ * Find the mapping for a PCIe device, given its segment and device ID.
+ * Returns the XREF for MSI interrupt setup and the device ID to use
+ * for that setup.
+ */
+int
+acpi_iort_map_pci_msi(u_int seg, u_int rid, u_int *xref, u_int *devid)
+{
+ struct iort_node *node;
+
+ node = iort_pci_rc_map(seg, rid, ACPI_IORT_NODE_ITS_GROUP, devid);
+ if (node == NULL)
+ return (ENOENT);
+
+ /* This should be an ITS node */
+ KASSERT(node->type == ACPI_IORT_NODE_ITS_GROUP, ("bad group"));
+
+ /* Return the first entry; we don't handle more than one for now. */
+ *xref = node->entries.its[0].xref;
+ return (0);
+}
Property changes on: projects/clang800-import/sys/arm64/acpica/acpi_iort.c
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: svn:keywords
## -0,0 +1 ##
+FreeBSD=%H
\ No newline at end of property
Added: svn:mime-type
## -0,0 +1 ##
+text/plain
\ No newline at end of property
Index: projects/clang800-import/sys/arm64/arm64/cpufunc_asm.S
===================================================================
--- projects/clang800-import/sys/arm64/arm64/cpufunc_asm.S (revision 343955)
+++ projects/clang800-import/sys/arm64/arm64/cpufunc_asm.S (revision 343956)
@@ -1,180 +1,181 @@
/*-
* Copyright (c) 2014 Robin Randhawa
* Copyright (c) 2015 The FreeBSD Foundation
* All rights reserved.
*
* Portions of this software were developed by Andrew Turner
* under sponsorship from the FreeBSD Foundation
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
*/
#include <sys/errno.h>
#include <machine/asm.h>
#include <machine/param.h>
#include "assym.inc"
__FBSDID("$FreeBSD$");
/*
* FIXME:
* Need big.LITTLE awareness at some point.
* Using arm64_p[id]cache_line_size may not be the best option.
* Need better SMP awareness.
*/
.text
.align 2
.Lpage_mask:
.word PAGE_MASK
/*
* Macro to handle the cache. This takes the start address in x0, length
* in x1. It will corrupt x0, x1, x2, x3, and x4.
*/
.macro cache_handle_range dcop = 0, ic = 0, icop = 0
.if \ic == 0
ldr x3, =dcache_line_size /* Load the D cache line size */
.else
ldr x3, =idcache_line_size /* Load the I & D cache line size */
.endif
ldr x3, [x3]
sub x4, x3, #1 /* Get the address mask */
and x2, x0, x4 /* Get the low bits of the address */
add x1, x1, x2 /* Add these to the size */
bic x0, x0, x4 /* Clear the low bit of the address */
1:
dc \dcop, x0
dsb ish
.if \ic != 0
ic \icop, x0
dsb ish
.endif
add x0, x0, x3 /* Move to the next line */
subs x1, x1, x3 /* Reduce the size */
b.hi 1b /* Check if we are done */
.if \ic != 0
isb
.endif
.endm
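The mask arithmetic at the top of the macro rounds the start address down to a cache-line boundary and grows the length by the discarded slack, so the loop covers every line that the original range touched. In C, the same adjustment amounts to (a sketch, assuming the line size is a power of two as on arm64):

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Round [*addr, *addr + *len) out to cache-line granularity, mirroring
 * the macro's mask arithmetic. 'line' must be a power of two.
 */
static void
align_range(uintptr_t *addr, size_t *len, uintptr_t line)
{
	uintptr_t mask = line - 1;
	uintptr_t low = *addr & mask;	/* low bits of the address */

	*len += low;			/* extend the size by the slack */
	*addr &= ~mask;			/* clear the low bits */
}
```

After this adjustment the loop body can step by whole line sizes; a range that started 7 bytes into a 64-byte line, for instance, is widened by those 7 bytes so the first partial line is still maintained.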
ENTRY(arm64_nullop)
ret
END(arm64_nullop)
/*
* Generic functions to read/modify/write the internal coprocessor registers
*/
ENTRY(arm64_setttb)
dsb ish
msr ttbr0_el1, x0
dsb ish
isb
ret
END(arm64_setttb)
ENTRY(arm64_tlb_flushID)
+ dsb ishst
#ifdef SMP
tlbi vmalle1is
#else
tlbi vmalle1
#endif
dsb ish
isb
ret
END(arm64_tlb_flushID)
/*
* void arm64_dcache_wb_range(vm_offset_t, vm_size_t)
*/
ENTRY(arm64_dcache_wb_range)
cache_handle_range dcop = cvac
ret
END(arm64_dcache_wb_range)
/*
* void arm64_dcache_wbinv_range(vm_offset_t, vm_size_t)
*/
ENTRY(arm64_dcache_wbinv_range)
cache_handle_range dcop = civac
ret
END(arm64_dcache_wbinv_range)
/*
* void arm64_dcache_inv_range(vm_offset_t, vm_size_t)
*
* Note, we must not invalidate everything. If the range is too big we
* must use wb-inv of the entire cache.
*/
ENTRY(arm64_dcache_inv_range)
cache_handle_range dcop = ivac
ret
END(arm64_dcache_inv_range)
/*
* void arm64_idcache_wbinv_range(vm_offset_t, vm_size_t)
*/
ENTRY(arm64_idcache_wbinv_range)
cache_handle_range dcop = civac, ic = 1, icop = ivau
ret
END(arm64_idcache_wbinv_range)
/*
* void arm64_icache_sync_range(vm_offset_t, vm_size_t)
*/
ENTRY(arm64_icache_sync_range)
/*
* XXX Temporary solution - I-cache flush should be range based for
* PIPT cache or IALLUIS for VIVT or VIPT caches
*/
/* cache_handle_range dcop = cvau, ic = 1, icop = ivau */
cache_handle_range dcop = cvau
ic ialluis
dsb ish
isb
ret
END(arm64_icache_sync_range)
/*
* int arm64_icache_sync_range_checked(vm_offset_t, vm_size_t)
*/
ENTRY(arm64_icache_sync_range_checked)
adr x5, cache_maint_fault
SET_FAULT_HANDLER(x5, x6)
/* XXX: See comment in arm64_icache_sync_range */
cache_handle_range dcop = cvau
ic ialluis
dsb ish
isb
SET_FAULT_HANDLER(xzr, x6)
mov x0, #0
ret
END(arm64_icache_sync_range_checked)
ENTRY(cache_maint_fault)
SET_FAULT_HANDLER(xzr, x1)
mov x0, #EFAULT
ret
END(cache_maint_fault)
Index: projects/clang800-import/sys/arm64/arm64/gic_v3_acpi.c
===================================================================
--- projects/clang800-import/sys/arm64/arm64/gic_v3_acpi.c (revision 343955)
+++ projects/clang800-import/sys/arm64/arm64/gic_v3_acpi.c (revision 343956)
@@ -1,381 +1,390 @@
/*-
* Copyright (c) 2016 The FreeBSD Foundation
* All rights reserved.
*
* This software was developed by Andrew Turner under
* the sponsorship of the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include "opt_acpi.h"
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/types.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/rman.h>
#include <machine/intr.h>
#include <machine/resource.h>
#include <contrib/dev/acpica/include/acpi.h>
#include <dev/acpica/acpivar.h>
#include "gic_v3_reg.h"
#include "gic_v3_var.h"
struct gic_v3_acpi_devinfo {
struct gic_v3_devinfo di_gic_dinfo;
struct resource_list di_rl;
};
static device_identify_t gic_v3_acpi_identify;
static device_probe_t gic_v3_acpi_probe;
static device_attach_t gic_v3_acpi_attach;
static bus_alloc_resource_t gic_v3_acpi_bus_alloc_res;
static void gic_v3_acpi_bus_attach(device_t);
static device_method_t gic_v3_acpi_methods[] = {
/* Device interface */
DEVMETHOD(device_identify, gic_v3_acpi_identify),
DEVMETHOD(device_probe, gic_v3_acpi_probe),
DEVMETHOD(device_attach, gic_v3_acpi_attach),
/* Bus interface */
DEVMETHOD(bus_alloc_resource, gic_v3_acpi_bus_alloc_res),
DEVMETHOD(bus_activate_resource, bus_generic_activate_resource),
/* End */
DEVMETHOD_END
};
DEFINE_CLASS_1(gic, gic_v3_acpi_driver, gic_v3_acpi_methods,
sizeof(struct gic_v3_softc), gic_v3_driver);
static devclass_t gic_v3_acpi_devclass;
EARLY_DRIVER_MODULE(gic_v3, acpi, gic_v3_acpi_driver, gic_v3_acpi_devclass,
0, 0, BUS_PASS_INTERRUPT + BUS_PASS_ORDER_MIDDLE);
struct madt_table_data {
device_t parent;
device_t dev;
ACPI_MADT_GENERIC_DISTRIBUTOR *dist;
int count;
};
static void
madt_handler(ACPI_SUBTABLE_HEADER *entry, void *arg)
{
struct madt_table_data *madt_data;
madt_data = (struct madt_table_data *)arg;
switch(entry->Type) {
case ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR:
if (madt_data->dist != NULL) {
if (bootverbose)
device_printf(madt_data->parent,
"gic: Already have a distributor table\n");
break;
}
madt_data->dist = (ACPI_MADT_GENERIC_DISTRIBUTOR *)entry;
break;
case ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR:
break;
default:
break;
}
}
static void
rdist_map(ACPI_SUBTABLE_HEADER *entry, void *arg)
{
ACPI_MADT_GENERIC_REDISTRIBUTOR *redist;
struct madt_table_data *madt_data;
madt_data = (struct madt_table_data *)arg;
switch(entry->Type) {
case ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR:
redist = (ACPI_MADT_GENERIC_REDISTRIBUTOR *)entry;
madt_data->count++;
BUS_SET_RESOURCE(madt_data->parent, madt_data->dev,
SYS_RES_MEMORY, madt_data->count, redist->BaseAddress,
redist->Length);
break;
default:
break;
}
}
static void
gic_v3_acpi_identify(driver_t *driver, device_t parent)
{
struct madt_table_data madt_data;
ACPI_TABLE_MADT *madt;
vm_paddr_t physaddr;
device_t dev;
physaddr = acpi_find_table(ACPI_SIG_MADT);
if (physaddr == 0)
return;
madt = acpi_map_table(physaddr, ACPI_SIG_MADT);
if (madt == NULL) {
device_printf(parent, "gic: Unable to map the MADT\n");
return;
}
madt_data.parent = parent;
madt_data.dist = NULL;
madt_data.count = 0;
acpi_walk_subtables(madt + 1, (char *)madt + madt->Header.Length,
madt_handler, &madt_data);
if (madt_data.dist == NULL) {
device_printf(parent,
"No gic interrupt or distributor table\n");
goto out;
}
/* Skip if this is for the wrong GIC version */
if (madt_data.dist->Version != ACPI_MADT_GIC_VERSION_V3)
goto out;
dev = BUS_ADD_CHILD(parent, BUS_PASS_INTERRUPT + BUS_PASS_ORDER_MIDDLE,
"gic", -1);
if (dev == NULL) {
device_printf(parent, "add gic child failed\n");
goto out;
}
/* Add the MADT data */
BUS_SET_RESOURCE(parent, dev, SYS_RES_MEMORY, 0,
madt_data.dist->BaseAddress, 128 * 1024);
madt_data.dev = dev;
acpi_walk_subtables(madt + 1, (char *)madt + madt->Header.Length,
rdist_map, &madt_data);
acpi_set_private(dev, (void *)(uintptr_t)madt_data.dist->Version);
out:
acpi_unmap_table(madt);
}
static int
gic_v3_acpi_probe(device_t dev)
{
switch((uintptr_t)acpi_get_private(dev)) {
case ACPI_MADT_GIC_VERSION_V3:
break;
default:
return (ENXIO);
}
device_set_desc(dev, GIC_V3_DEVSTR);
return (BUS_PROBE_NOWILDCARD);
}
static void
madt_count_redistrib(ACPI_SUBTABLE_HEADER *entry, void *arg)
{
struct gic_v3_softc *sc = arg;
if (entry->Type == ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR)
sc->gic_redists.nregions++;
}
static int
gic_v3_acpi_count_regions(device_t dev)
{
struct gic_v3_softc *sc;
ACPI_TABLE_MADT *madt;
vm_paddr_t physaddr;
sc = device_get_softc(dev);
physaddr = acpi_find_table(ACPI_SIG_MADT);
if (physaddr == 0)
return (ENXIO);
madt = acpi_map_table(physaddr, ACPI_SIG_MADT);
if (madt == NULL) {
device_printf(dev, "Unable to map the MADT\n");
return (ENXIO);
}
acpi_walk_subtables(madt + 1, (char *)madt + madt->Header.Length,
madt_count_redistrib, sc);
acpi_unmap_table(madt);
return (sc->gic_redists.nregions > 0 ? 0 : ENXIO);
}
static int
gic_v3_acpi_attach(device_t dev)
{
struct gic_v3_softc *sc;
int err;
sc = device_get_softc(dev);
sc->dev = dev;
sc->gic_bus = GIC_BUS_ACPI;
err = gic_v3_acpi_count_regions(dev);
if (err != 0)
goto error;
err = gic_v3_attach(dev);
if (err != 0)
goto error;
sc->gic_pic = intr_pic_register(dev, ACPI_INTR_XREF);
if (sc->gic_pic == NULL) {
device_printf(dev, "could not register PIC\n");
err = ENXIO;
goto error;
}
if (intr_pic_claim_root(dev, ACPI_INTR_XREF, arm_gic_v3_intr, sc,
GIC_LAST_SGI - GIC_FIRST_SGI + 1) != 0) {
err = ENXIO;
goto error;
}
/*
* Try to register the ITS driver to this GIC. The GIC will act as
* a bus in that case. Failure here will not affect the main GIC
* functionality.
*/
gic_v3_acpi_bus_attach(dev);
if (device_get_children(dev, &sc->gic_children, &sc->gic_nchildren) != 0)
sc->gic_nchildren = 0;
return (0);
error:
if (bootverbose) {
device_printf(dev,
"Failed to attach. Error %d\n", err);
}
/* Failure so free resources */
gic_v3_detach(dev);
return (err);
}
static void
gic_v3_add_children(ACPI_SUBTABLE_HEADER *entry, void *arg)
{
ACPI_MADT_GENERIC_TRANSLATOR *gict;
struct gic_v3_acpi_devinfo *di;
struct gic_v3_softc *sc;
device_t child, dev;
+ u_int xref;
+ int err, pxm;
if (entry->Type == ACPI_MADT_TYPE_GENERIC_TRANSLATOR) {
/* We have an ITS, add it as a child */
gict = (ACPI_MADT_GENERIC_TRANSLATOR *)entry;
dev = arg;
sc = device_get_softc(dev);
child = device_add_child(dev, "its", -1);
if (child == NULL)
return;
di = malloc(sizeof(*di), M_GIC_V3, M_WAITOK | M_ZERO);
resource_list_init(&di->di_rl);
resource_list_add(&di->di_rl, SYS_RES_MEMORY, 0,
gict->BaseAddress, gict->BaseAddress + 128 * 1024 - 1,
128 * 1024);
- di->di_gic_dinfo.gic_domain = -1;
+ err = acpi_iort_its_lookup(gict->TranslationId, &xref, &pxm);
+ if (err == 0) {
+ di->di_gic_dinfo.gic_domain = pxm;
+ di->di_gic_dinfo.msi_xref = xref;
+ } else {
+ di->di_gic_dinfo.gic_domain = -1;
+ di->di_gic_dinfo.msi_xref = ACPI_MSI_XREF;
+ }
sc->gic_nchildren++;
device_set_ivars(child, di);
}
}
static void
gic_v3_acpi_bus_attach(device_t dev)
{
ACPI_TABLE_MADT *madt;
vm_paddr_t physaddr;
physaddr = acpi_find_table(ACPI_SIG_MADT);
if (physaddr == 0)
return;
madt = acpi_map_table(physaddr, ACPI_SIG_MADT);
if (madt == NULL) {
device_printf(dev, "Unable to map the MADT to add children\n");
return;
}
acpi_walk_subtables(madt + 1, (char *)madt + madt->Header.Length,
gic_v3_add_children, dev);
acpi_unmap_table(madt);
bus_generic_attach(dev);
}
static struct resource *
gic_v3_acpi_bus_alloc_res(device_t bus, device_t child, int type, int *rid,
rman_res_t start, rman_res_t end, rman_res_t count, u_int flags)
{
struct gic_v3_acpi_devinfo *di;
struct resource_list_entry *rle;
/* We only allocate memory */
if (type != SYS_RES_MEMORY)
return (NULL);
if (RMAN_IS_DEFAULT_RANGE(start, end)) {
if ((di = device_get_ivars(child)) == NULL)
return (NULL);
/* Find defaults for this rid */
rle = resource_list_find(&di->di_rl, type, *rid);
if (rle == NULL)
return (NULL);
start = rle->start;
end = rle->end;
count = rle->count;
}
return (bus_generic_alloc_resource(bus, child, type, rid, start, end,
count, flags));
}
Index: projects/clang800-import/sys/arm64/arm64/gic_v3_var.h
===================================================================
--- projects/clang800-import/sys/arm64/arm64/gic_v3_var.h (revision 343955)
+++ projects/clang800-import/sys/arm64/arm64/gic_v3_var.h (revision 343956)
@@ -1,146 +1,147 @@
/*-
* Copyright (c) 2015 The FreeBSD Foundation
* All rights reserved.
*
* This software was developed by Semihalf under
* the sponsorship of the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#ifndef _GIC_V3_VAR_H_
#define _GIC_V3_VAR_H_
#include <arm/arm/gic_common.h>
#define GIC_V3_DEVSTR "ARM Generic Interrupt Controller v3.0"
DECLARE_CLASS(gic_v3_driver);
struct gic_v3_irqsrc;
struct redist_lpis {
vm_offset_t conf_base;
vm_offset_t pend_base[MAXCPU];
uint64_t flags;
};
struct gic_redists {
/*
* Re-Distributor region description.
* We will have a few of these, depending
* on the #redistributor-regions property in FDT.
*/
struct resource ** regions;
/* Number of Re-Distributor regions */
u_int nregions;
/* Per-CPU Re-Distributor handler */
struct resource * pcpu[MAXCPU];
/* LPIs data */
struct redist_lpis lpis;
};
struct gic_v3_softc {
device_t dev;
struct resource ** gic_res;
struct mtx gic_mtx;
/* Distributor */
struct resource * gic_dist;
/* Re-Distributors */
struct gic_redists gic_redists;
uint32_t gic_pidr2;
u_int gic_bus;
u_int gic_nirqs;
u_int gic_idbits;
boolean_t gic_registered;
int gic_nchildren;
device_t *gic_children;
struct intr_pic *gic_pic;
struct gic_v3_irqsrc *gic_irqs;
};
struct gic_v3_devinfo {
int gic_domain;
+ int msi_xref;
};
#define GIC_INTR_ISRC(sc, irq) (&sc->gic_irqs[irq].gi_isrc)
MALLOC_DECLARE(M_GIC_V3);
/* ivars */
#define GICV3_IVAR_NIRQS 1000
#define GICV3_IVAR_REDIST_VADDR 1001
__BUS_ACCESSOR(gicv3, nirqs, GICV3, NIRQS, u_int);
__BUS_ACCESSOR(gicv3, redist_vaddr, GICV3, REDIST_VADDR, void *);
/* Device methods */
int gic_v3_attach(device_t dev);
int gic_v3_detach(device_t dev);
int arm_gic_v3_intr(void *);
uint32_t gic_r_read_4(device_t, bus_size_t);
uint64_t gic_r_read_8(device_t, bus_size_t);
void gic_r_write_4(device_t, bus_size_t, uint32_t var);
void gic_r_write_8(device_t, bus_size_t, uint64_t var);
/*
* GIC Distributor accessors.
* Note that only the GIC softc can be passed.
*/
#define gic_d_read(sc, len, reg) \
({ \
bus_read_##len(sc->gic_dist, reg); \
})
#define gic_d_write(sc, len, reg, val) \
({ \
bus_write_##len(sc->gic_dist, reg, val);\
})
/* GIC Re-Distributor accessors (per-CPU) */
#define gic_r_read(sc, len, reg) \
({ \
u_int cpu = PCPU_GET(cpuid); \
\
bus_read_##len( \
sc->gic_redists.pcpu[cpu], \
reg); \
})
#define gic_r_write(sc, len, reg, val) \
({ \
u_int cpu = PCPU_GET(cpuid); \
\
bus_write_##len( \
sc->gic_redists.pcpu[cpu], \
reg, val); \
})
#endif /* _GIC_V3_VAR_H_ */
Index: projects/clang800-import/sys/arm64/arm64/gicv3_its.c
===================================================================
--- projects/clang800-import/sys/arm64/arm64/gicv3_its.c (revision 343955)
+++ projects/clang800-import/sys/arm64/arm64/gicv3_its.c (revision 343956)
@@ -1,1742 +1,1743 @@
/*-
* Copyright (c) 2015-2016 The FreeBSD Foundation
* All rights reserved.
*
* This software was developed by Andrew Turner under
* the sponsorship of the FreeBSD Foundation.
*
* This software was developed by Semihalf under
* the sponsorship of the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include "opt_acpi.h"
#include "opt_platform.h"
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/cpuset.h>
#include <sys/endian.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/proc.h>
#include <sys/queue.h>
#include <sys/rman.h>
#include <sys/smp.h>
#include <sys/vmem.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <machine/bus.h>
#include <machine/intr.h>
#include <arm/arm/gic_common.h>
#include <arm64/arm64/gic_v3_reg.h>
#include <arm64/arm64/gic_v3_var.h>
#ifdef FDT
#include <dev/ofw/openfirm.h>
#include <dev/ofw/ofw_bus.h>
#include <dev/ofw/ofw_bus_subr.h>
#endif
#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>
#include "pcib_if.h"
#include "pic_if.h"
#include "msi_if.h"
MALLOC_DEFINE(M_GICV3_ITS, "GICv3 ITS",
"ARM GICv3 Interrupt Translation Service");
#define LPI_NIRQS (64 * 1024)
/* The size and alignment of the command circular buffer */
#define ITS_CMDQ_SIZE (64 * 1024) /* Must be a multiple of 4K */
#define ITS_CMDQ_ALIGN (64 * 1024)
#define LPI_CONFTAB_SIZE LPI_NIRQS
#define LPI_CONFTAB_ALIGN (64 * 1024)
#define LPI_CONFTAB_MAX_ADDR ((1ul << 48) - 1) /* We need a 47 bit PA */
/* 1 bit per SPI, PPI, and SGI (8k), and 1 bit per LPI (LPI_CONFTAB_SIZE) */
#define LPI_PENDTAB_SIZE ((LPI_NIRQS + GIC_FIRST_LPI) / 8)
#define LPI_PENDTAB_ALIGN (64 * 1024)
#define LPI_PENDTAB_MAX_ADDR ((1ul << 48) - 1) /* We need a 47 bit PA */
#define LPI_INT_TRANS_TAB_ALIGN 256
#define LPI_INT_TRANS_TAB_MAX_ADDR ((1ul << 48) - 1)
/* ITS commands encoding */
#define ITS_CMD_MOVI (0x01)
#define ITS_CMD_SYNC (0x05)
#define ITS_CMD_MAPD (0x08)
#define ITS_CMD_MAPC (0x09)
#define ITS_CMD_MAPTI (0x0a)
#define ITS_CMD_MAPI (0x0b)
#define ITS_CMD_INV (0x0c)
#define ITS_CMD_INVALL (0x0d)
/* Command */
#define CMD_COMMAND_MASK (0xFFUL)
/* PCI device ID */
#define CMD_DEVID_SHIFT (32)
#define CMD_DEVID_MASK (0xFFFFFFFFUL << CMD_DEVID_SHIFT)
/* Size of IRQ ID bitfield */
#define CMD_SIZE_MASK (0xFFUL)
/* Virtual LPI ID */
#define CMD_ID_MASK (0xFFFFFFFFUL)
/* Physical LPI ID */
#define CMD_PID_SHIFT (32)
#define CMD_PID_MASK (0xFFFFFFFFUL << CMD_PID_SHIFT)
/* Collection */
#define CMD_COL_MASK (0xFFFFUL)
/* Target (CPU or Re-Distributor) */
#define CMD_TARGET_SHIFT (16)
#define CMD_TARGET_MASK (0xFFFFFFFFUL << CMD_TARGET_SHIFT)
/* Interrupt Translation Table address */
#define CMD_ITT_MASK (0xFFFFFFFFFF00UL)
/* Valid command bit */
#define CMD_VALID_SHIFT (63)
#define CMD_VALID_MASK (1UL << CMD_VALID_SHIFT)
#define ITS_TARGET_NONE 0xFBADBEEF
/* LPI chunk owned by ITS device */
struct lpi_chunk {
u_int lpi_base;
u_int lpi_free; /* First free LPI in set */
u_int lpi_num; /* Total number of LPIs in chunk */
u_int lpi_busy; /* Number of busy LPIs in chunk */
};
/* ITS device */
struct its_dev {
TAILQ_ENTRY(its_dev) entry;
/* PCI device */
device_t pci_dev;
/* Device ID (i.e. PCI device ID) */
uint32_t devid;
/* List of assigned LPIs */
struct lpi_chunk lpis;
/* Virtual address of ITT */
vm_offset_t itt;
size_t itt_size;
};
/*
* ITS command descriptor.
* Idea for command description passing taken from Linux.
*/
struct its_cmd_desc {
uint8_t cmd_type;
union {
struct {
struct its_dev *its_dev;
struct its_col *col;
uint32_t id;
} cmd_desc_movi;
struct {
struct its_col *col;
} cmd_desc_sync;
struct {
struct its_col *col;
uint8_t valid;
} cmd_desc_mapc;
struct {
struct its_dev *its_dev;
struct its_col *col;
uint32_t pid;
uint32_t id;
} cmd_desc_mapvi;
struct {
struct its_dev *its_dev;
struct its_col *col;
uint32_t pid;
} cmd_desc_mapi;
struct {
struct its_dev *its_dev;
uint8_t valid;
} cmd_desc_mapd;
struct {
struct its_dev *its_dev;
struct its_col *col;
uint32_t pid;
} cmd_desc_inv;
struct {
struct its_col *col;
} cmd_desc_invall;
};
};
/* ITS command. Each command is 32 bytes long */
struct its_cmd {
uint64_t cmd_dword[4]; /* ITS command double word */
};
/* An ITS private table */
struct its_ptable {
vm_offset_t ptab_vaddr;
unsigned long ptab_size;
};
/* ITS collection description. */
struct its_col {
uint64_t col_target; /* Target Re-Distributor */
uint64_t col_id; /* Collection ID */
};
struct gicv3_its_irqsrc {
struct intr_irqsrc gi_isrc;
u_int gi_irq;
struct its_dev *gi_its_dev;
};
struct gicv3_its_softc {
struct intr_pic *sc_pic;
struct resource *sc_its_res;
cpuset_t sc_cpus;
u_int gic_irq_cpu;
struct its_ptable sc_its_ptab[GITS_BASER_NUM];
struct its_col *sc_its_cols[MAXCPU]; /* Per-CPU collections */
/*
* TODO: We should get these from the parent as we only want a
* single copy of each across the interrupt controller.
*/
vm_offset_t sc_conf_base;
vm_offset_t sc_pend_base[MAXCPU];
/* Command handling */
struct mtx sc_its_cmd_lock;
struct its_cmd *sc_its_cmd_base; /* Command circular buffer address */
size_t sc_its_cmd_next_idx;
vmem_t *sc_irq_alloc;
struct gicv3_its_irqsrc *sc_irqs;
u_int sc_irq_base;
u_int sc_irq_length;
struct mtx sc_its_dev_lock;
TAILQ_HEAD(its_dev_list, its_dev) sc_its_dev_list;
#define ITS_FLAGS_CMDQ_FLUSH 0x00000001
#define ITS_FLAGS_LPI_CONF_FLUSH 0x00000002
#define ITS_FLAGS_ERRATA_CAVIUM_22375 0x00000004
u_int sc_its_flags;
};
typedef void (its_quirk_func_t)(device_t);
static its_quirk_func_t its_quirk_cavium_22375;
static const struct {
const char *desc;
uint32_t iidr;
uint32_t iidr_mask;
its_quirk_func_t *func;
} its_quirks[] = {
{
/* Cavium ThunderX Pass 1.x */
.desc = "Cavium ThunderX errata: 22375, 24313",
.iidr = GITS_IIDR_RAW(GITS_IIDR_IMPL_CAVIUM,
GITS_IIDR_PROD_THUNDER, GITS_IIDR_VAR_THUNDER_1, 0),
.iidr_mask = ~GITS_IIDR_REVISION_MASK,
.func = its_quirk_cavium_22375,
},
};
#define gic_its_read_4(sc, reg) \
bus_read_4((sc)->sc_its_res, (reg))
#define gic_its_read_8(sc, reg) \
bus_read_8((sc)->sc_its_res, (reg))
#define gic_its_write_4(sc, reg, val) \
bus_write_4((sc)->sc_its_res, (reg), (val))
#define gic_its_write_8(sc, reg, val) \
bus_write_8((sc)->sc_its_res, (reg), (val))
static device_attach_t gicv3_its_attach;
static device_detach_t gicv3_its_detach;
static pic_disable_intr_t gicv3_its_disable_intr;
static pic_enable_intr_t gicv3_its_enable_intr;
static pic_map_intr_t gicv3_its_map_intr;
static pic_setup_intr_t gicv3_its_setup_intr;
static pic_post_filter_t gicv3_its_post_filter;
static pic_post_ithread_t gicv3_its_post_ithread;
static pic_pre_ithread_t gicv3_its_pre_ithread;
static pic_bind_intr_t gicv3_its_bind_intr;
#ifdef SMP
static pic_init_secondary_t gicv3_its_init_secondary;
#endif
static msi_alloc_msi_t gicv3_its_alloc_msi;
static msi_release_msi_t gicv3_its_release_msi;
static msi_alloc_msix_t gicv3_its_alloc_msix;
static msi_release_msix_t gicv3_its_release_msix;
static msi_map_msi_t gicv3_its_map_msi;
static void its_cmd_movi(device_t, struct gicv3_its_irqsrc *);
static void its_cmd_mapc(device_t, struct its_col *, uint8_t);
static void its_cmd_mapti(device_t, struct gicv3_its_irqsrc *);
static void its_cmd_mapd(device_t, struct its_dev *, uint8_t);
static void its_cmd_inv(device_t, struct its_dev *, struct gicv3_its_irqsrc *);
static void its_cmd_invall(device_t, struct its_col *);
static device_method_t gicv3_its_methods[] = {
/* Device interface */
DEVMETHOD(device_detach, gicv3_its_detach),
/* Interrupt controller interface */
DEVMETHOD(pic_disable_intr, gicv3_its_disable_intr),
DEVMETHOD(pic_enable_intr, gicv3_its_enable_intr),
DEVMETHOD(pic_map_intr, gicv3_its_map_intr),
DEVMETHOD(pic_setup_intr, gicv3_its_setup_intr),
DEVMETHOD(pic_post_filter, gicv3_its_post_filter),
DEVMETHOD(pic_post_ithread, gicv3_its_post_ithread),
DEVMETHOD(pic_pre_ithread, gicv3_its_pre_ithread),
#ifdef SMP
DEVMETHOD(pic_bind_intr, gicv3_its_bind_intr),
DEVMETHOD(pic_init_secondary, gicv3_its_init_secondary),
#endif
/* MSI/MSI-X */
DEVMETHOD(msi_alloc_msi, gicv3_its_alloc_msi),
DEVMETHOD(msi_release_msi, gicv3_its_release_msi),
DEVMETHOD(msi_alloc_msix, gicv3_its_alloc_msix),
DEVMETHOD(msi_release_msix, gicv3_its_release_msix),
DEVMETHOD(msi_map_msi, gicv3_its_map_msi),
/* End */
DEVMETHOD_END
};
static DEFINE_CLASS_0(gic, gicv3_its_driver, gicv3_its_methods,
sizeof(struct gicv3_its_softc));
static void
gicv3_its_cmdq_init(struct gicv3_its_softc *sc)
{
vm_paddr_t cmd_paddr;
uint64_t reg, tmp;
/* Set up the command circular buffer */
sc->sc_its_cmd_base = contigmalloc(ITS_CMDQ_SIZE, M_GICV3_ITS,
M_WAITOK | M_ZERO, 0, (1ul << 48) - 1, ITS_CMDQ_ALIGN, 0);
sc->sc_its_cmd_next_idx = 0;
cmd_paddr = vtophys(sc->sc_its_cmd_base);
/* Set the base of the command buffer */
reg = GITS_CBASER_VALID |
(GITS_CBASER_CACHE_NIWAWB << GITS_CBASER_CACHE_SHIFT) |
cmd_paddr | (GITS_CBASER_SHARE_IS << GITS_CBASER_SHARE_SHIFT) |
(ITS_CMDQ_SIZE / 4096 - 1);
gic_its_write_8(sc, GITS_CBASER, reg);
/* Read back to check for fixed value fields */
tmp = gic_its_read_8(sc, GITS_CBASER);
if ((tmp & GITS_CBASER_SHARE_MASK) !=
(GITS_CBASER_SHARE_IS << GITS_CBASER_SHARE_SHIFT)) {
/* Check if the hardware reported non-shareable */
if ((tmp & GITS_CBASER_SHARE_MASK) ==
(GITS_CBASER_SHARE_NS << GITS_CBASER_SHARE_SHIFT)) {
/* If so remove the cache attribute */
reg &= ~GITS_CBASER_CACHE_MASK;
reg &= ~GITS_CBASER_SHARE_MASK;
/* Set to Non-cacheable, Non-shareable */
reg |= GITS_CBASER_CACHE_NIN << GITS_CBASER_CACHE_SHIFT;
reg |= GITS_CBASER_SHARE_NS << GITS_CBASER_SHARE_SHIFT;
gic_its_write_8(sc, GITS_CBASER, reg);
}
/* The command queue has to be flushed after each command */
sc->sc_its_flags |= ITS_FLAGS_CMDQ_FLUSH;
}
/* Get the next command from the start of the buffer */
gic_its_write_8(sc, GITS_CWRITER, 0x0);
}
static int
gicv3_its_table_init(device_t dev, struct gicv3_its_softc *sc)
{
vm_offset_t table;
vm_paddr_t paddr;
uint64_t cache, reg, share, tmp, type;
size_t esize, its_tbl_size, nidents, nitspages, npages;
int i, page_size;
int devbits;
if ((sc->sc_its_flags & ITS_FLAGS_ERRATA_CAVIUM_22375) != 0) {
/*
* GITS_TYPER[17:13] of ThunderX reports that device IDs
* are to be 21 bits in length. The entry size of the ITS
* table can be read from GITS_BASERn[52:48] and on ThunderX
* is supposed to be 8 bytes in length (for device table).
* Finally the page size that is to be used by ITS to access
* this table will be set to 64KB.
*
* This gives 0x200000 entries of size 0x8 bytes covered by
* 256 pages each of which 64KB in size. The number of pages
* (minus 1) should then be written to GITS_BASERn[7:0]. In
* that case this value would be 0xFF but on ThunderX the
* maximum value that HW accepts is 0xFD.
*
* Set an arbitrary number of device ID bits to 20 in order
* to limit the number of entries in ITS device table to
* 0x100000 and the table size to 8MB.
*/
devbits = 20;
cache = 0;
} else {
devbits = GITS_TYPER_DEVB(gic_its_read_8(sc, GITS_TYPER));
cache = GITS_BASER_CACHE_WAWB;
}
share = GITS_BASER_SHARE_IS;
page_size = PAGE_SIZE_64K;
for (i = 0; i < GITS_BASER_NUM; i++) {
reg = gic_its_read_8(sc, GITS_BASER(i));
/* The type of table */
type = GITS_BASER_TYPE(reg);
/* The table entry size */
esize = GITS_BASER_ESIZE(reg);
switch(type) {
case GITS_BASER_TYPE_DEV:
nidents = (1 << devbits);
its_tbl_size = esize * nidents;
its_tbl_size = roundup2(its_tbl_size, PAGE_SIZE_64K);
break;
case GITS_BASER_TYPE_VP:
case GITS_BASER_TYPE_PP: /* Undocumented? */
case GITS_BASER_TYPE_IC:
its_tbl_size = page_size;
break;
default:
continue;
}
npages = howmany(its_tbl_size, PAGE_SIZE);
/* Allocate the table */
table = (vm_offset_t)contigmalloc(npages * PAGE_SIZE,
M_GICV3_ITS, M_WAITOK | M_ZERO, 0, (1ul << 48) - 1,
PAGE_SIZE_64K, 0);
sc->sc_its_ptab[i].ptab_vaddr = table;
sc->sc_its_ptab[i].ptab_size = npages * PAGE_SIZE;
paddr = vtophys(table);
while (1) {
nitspages = howmany(its_tbl_size, page_size);
/* Clear the fields we will be setting */
reg &= ~(GITS_BASER_VALID |
GITS_BASER_CACHE_MASK | GITS_BASER_TYPE_MASK |
GITS_BASER_ESIZE_MASK | GITS_BASER_PA_MASK |
GITS_BASER_SHARE_MASK | GITS_BASER_PSZ_MASK |
GITS_BASER_SIZE_MASK);
/* Set the new values */
reg |= GITS_BASER_VALID |
(cache << GITS_BASER_CACHE_SHIFT) |
(type << GITS_BASER_TYPE_SHIFT) |
((esize - 1) << GITS_BASER_ESIZE_SHIFT) |
paddr | (share << GITS_BASER_SHARE_SHIFT) |
(nitspages - 1);
switch (page_size) {
case PAGE_SIZE: /* 4KB */
reg |=
GITS_BASER_PSZ_4K << GITS_BASER_PSZ_SHIFT;
break;
case PAGE_SIZE_16K: /* 16KB */
reg |=
GITS_BASER_PSZ_16K << GITS_BASER_PSZ_SHIFT;
break;
case PAGE_SIZE_64K: /* 64KB */
reg |=
GITS_BASER_PSZ_64K << GITS_BASER_PSZ_SHIFT;
break;
}
gic_its_write_8(sc, GITS_BASER(i), reg);
/* Read back to check */
tmp = gic_its_read_8(sc, GITS_BASER(i));
/* Do the shareability masks line up? */
if ((tmp & GITS_BASER_SHARE_MASK) !=
(reg & GITS_BASER_SHARE_MASK)) {
share = (tmp & GITS_BASER_SHARE_MASK) >>
GITS_BASER_SHARE_SHIFT;
continue;
}
if ((tmp & GITS_BASER_PSZ_MASK) !=
(reg & GITS_BASER_PSZ_MASK)) {
switch (page_size) {
case PAGE_SIZE_16K:
page_size = PAGE_SIZE;
continue;
case PAGE_SIZE_64K:
page_size = PAGE_SIZE_16K;
continue;
}
}
if (tmp != reg) {
device_printf(dev, "GITS_BASER%d: "
"unable to be updated: %lx != %lx\n",
i, reg, tmp);
return (ENXIO);
}
/* We should have made all needed changes */
break;
}
}
return (0);
}
static void
gicv3_its_conftable_init(struct gicv3_its_softc *sc)
{
sc->sc_conf_base = (vm_offset_t)contigmalloc(LPI_CONFTAB_SIZE,
M_GICV3_ITS, M_WAITOK, 0, LPI_CONFTAB_MAX_ADDR, LPI_CONFTAB_ALIGN,
0);
/* Set the default configuration */
memset((void *)sc->sc_conf_base, GIC_PRIORITY_MAX | LPI_CONF_GROUP1,
LPI_CONFTAB_SIZE);
/* Flush the table to memory */
cpu_dcache_wb_range(sc->sc_conf_base, LPI_CONFTAB_SIZE);
}
static void
gicv3_its_pendtables_init(struct gicv3_its_softc *sc)
{
int i;
for (i = 0; i <= mp_maxid; i++) {
if (CPU_ISSET(i, &sc->sc_cpus) == 0)
continue;
sc->sc_pend_base[i] = (vm_offset_t)contigmalloc(
LPI_PENDTAB_SIZE, M_GICV3_ITS, M_WAITOK | M_ZERO,
0, LPI_PENDTAB_MAX_ADDR, LPI_PENDTAB_ALIGN, 0);
/* Flush so the ITS can see the memory */
cpu_dcache_wb_range((vm_offset_t)sc->sc_pend_base[i],
LPI_PENDTAB_SIZE);
}
}
static int
its_init_cpu(device_t dev, struct gicv3_its_softc *sc)
{
device_t gicv3;
vm_paddr_t target;
uint64_t xbaser, tmp;
uint32_t ctlr;
u_int cpuid;
gicv3 = device_get_parent(dev);
cpuid = PCPU_GET(cpuid);
if (!CPU_ISSET(cpuid, &sc->sc_cpus))
return (0);
/* Check if the ITS is enabled on this CPU */
if ((gic_r_read_4(gicv3, GICR_TYPER) & GICR_TYPER_PLPIS) == 0) {
return (ENXIO);
}
/* Disable LPIs */
ctlr = gic_r_read_4(gicv3, GICR_CTLR);
ctlr &= ~GICR_CTLR_LPI_ENABLE;
gic_r_write_4(gicv3, GICR_CTLR, ctlr);
/* Make sure changes are observable by the GIC */
dsb(sy);
/*
* Set the redistributor base
*/
xbaser = vtophys(sc->sc_conf_base) |
(GICR_PROPBASER_SHARE_IS << GICR_PROPBASER_SHARE_SHIFT) |
(GICR_PROPBASER_CACHE_NIWAWB << GICR_PROPBASER_CACHE_SHIFT) |
(flsl(LPI_CONFTAB_SIZE | GIC_FIRST_LPI) - 1);
gic_r_write_8(gicv3, GICR_PROPBASER, xbaser);
/* Check the cache attributes we set */
tmp = gic_r_read_8(gicv3, GICR_PROPBASER);
if ((tmp & GICR_PROPBASER_SHARE_MASK) !=
(xbaser & GICR_PROPBASER_SHARE_MASK)) {
if ((tmp & GICR_PROPBASER_SHARE_MASK) ==
(GICR_PROPBASER_SHARE_NS << GICR_PROPBASER_SHARE_SHIFT)) {
/* We need to mark as non-cacheable */
xbaser &= ~(GICR_PROPBASER_SHARE_MASK |
GICR_PROPBASER_CACHE_MASK);
/* Non-cacheable */
xbaser |= GICR_PROPBASER_CACHE_NIN <<
GICR_PROPBASER_CACHE_SHIFT;
/* Non-shareable */
xbaser |= GICR_PROPBASER_SHARE_NS <<
GICR_PROPBASER_SHARE_SHIFT;
gic_r_write_8(gicv3, GICR_PROPBASER, xbaser);
}
sc->sc_its_flags |= ITS_FLAGS_LPI_CONF_FLUSH;
}
/*
* Set the LPI pending table base
*/
xbaser = vtophys(sc->sc_pend_base[cpuid]) |
(GICR_PENDBASER_CACHE_NIWAWB << GICR_PENDBASER_CACHE_SHIFT) |
(GICR_PENDBASER_SHARE_IS << GICR_PENDBASER_SHARE_SHIFT);
gic_r_write_8(gicv3, GICR_PENDBASER, xbaser);
tmp = gic_r_read_8(gicv3, GICR_PENDBASER);
if ((tmp & GICR_PENDBASER_SHARE_MASK) ==
(GICR_PENDBASER_SHARE_NS << GICR_PENDBASER_SHARE_SHIFT)) {
/* Clear the cache and shareability bits */
xbaser &= ~(GICR_PENDBASER_CACHE_MASK |
GICR_PENDBASER_SHARE_MASK);
/* Mark as non-shareable */
xbaser |= GICR_PENDBASER_SHARE_NS << GICR_PENDBASER_SHARE_SHIFT;
/* And non-cacheable */
xbaser |= GICR_PENDBASER_CACHE_NIN <<
GICR_PENDBASER_CACHE_SHIFT;
}
/* Enable LPIs */
ctlr = gic_r_read_4(gicv3, GICR_CTLR);
ctlr |= GICR_CTLR_LPI_ENABLE;
gic_r_write_4(gicv3, GICR_CTLR, ctlr);
/* Make sure the GIC has seen everything */
dsb(sy);
if ((gic_its_read_8(sc, GITS_TYPER) & GITS_TYPER_PTA) != 0) {
/* This ITS wants the redistributor physical address */
target = vtophys(gicv3_get_redist_vaddr(dev));
} else {
/* This ITS wants the unique processor number */
target = GICR_TYPER_CPUNUM(gic_r_read_8(gicv3, GICR_TYPER));
}
sc->sc_its_cols[cpuid]->col_target = target;
sc->sc_its_cols[cpuid]->col_id = cpuid;
its_cmd_mapc(dev, sc->sc_its_cols[cpuid], 1);
its_cmd_invall(dev, sc->sc_its_cols[cpuid]);
return (0);
}
static int
gicv3_its_attach(device_t dev)
{
struct gicv3_its_softc *sc;
const char *name;
uint32_t iidr;
int domain, err, i, rid;
sc = device_get_softc(dev);
sc->sc_irq_length = gicv3_get_nirqs(dev);
sc->sc_irq_base = GIC_FIRST_LPI;
sc->sc_irq_base += device_get_unit(dev) * sc->sc_irq_length;
rid = 0;
sc->sc_its_res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid,
RF_ACTIVE);
if (sc->sc_its_res == NULL) {
device_printf(dev, "Could not allocate memory\n");
return (ENXIO);
}
iidr = gic_its_read_4(sc, GITS_IIDR);
for (i = 0; i < nitems(its_quirks); i++) {
if ((iidr & its_quirks[i].iidr_mask) == its_quirks[i].iidr) {
if (bootverbose) {
device_printf(dev, "Applying %s\n",
its_quirks[i].desc);
}
its_quirks[i].func(dev);
break;
}
}
/* Allocate the private tables */
err = gicv3_its_table_init(dev, sc);
if (err != 0)
return (err);
/* Protects access to the device list */
mtx_init(&sc->sc_its_dev_lock, "ITS device lock", NULL, MTX_SPIN);
/* Protects access to the ITS command circular buffer. */
mtx_init(&sc->sc_its_cmd_lock, "ITS cmd lock", NULL, MTX_SPIN);
CPU_ZERO(&sc->sc_cpus);
if (bus_get_domain(dev, &domain) == 0) {
if (domain < MAXMEMDOM)
CPU_COPY(&cpuset_domain[domain], &sc->sc_cpus);
} else {
/* XXX: cannot handle more than one ITS per CPU */
if (device_get_unit(dev) == 0)
CPU_COPY(&all_cpus, &sc->sc_cpus);
}
/* Allocate the command circular buffer */
gicv3_its_cmdq_init(sc);
/* Allocate the per-CPU collections */
for (int cpu = 0; cpu <= mp_maxid; cpu++)
if (CPU_ISSET(cpu, &sc->sc_cpus) != 0)
sc->sc_its_cols[cpu] = malloc(
sizeof(*sc->sc_its_cols[0]), M_GICV3_ITS,
M_WAITOK | M_ZERO);
else
sc->sc_its_cols[cpu] = NULL;
/* Enable the ITS */
gic_its_write_4(sc, GITS_CTLR,
gic_its_read_4(sc, GITS_CTLR) | GITS_CTLR_EN);
/* Create the LPI configuration table */
gicv3_its_conftable_init(sc);
/* And the pending tables */
gicv3_its_pendtables_init(sc);
/* Enable LPIs on this CPU */
its_init_cpu(dev, sc);
TAILQ_INIT(&sc->sc_its_dev_list);
/*
* Create the vmem object to allocate INTRNG IRQs from. We try to
* use all IRQs not already used by the GICv3.
* XXX: This assumes there are no other interrupt controllers in the
* system.
*/
sc->sc_irq_alloc = vmem_create("GICv3 ITS IRQs", 0,
gicv3_get_nirqs(dev), 1, 1, M_FIRSTFIT | M_WAITOK);
sc->sc_irqs = malloc(sizeof(*sc->sc_irqs) * sc->sc_irq_length,
M_GICV3_ITS, M_WAITOK | M_ZERO);
name = device_get_nameunit(dev);
for (i = 0; i < sc->sc_irq_length; i++) {
sc->sc_irqs[i].gi_irq = i;
err = intr_isrc_register(&sc->sc_irqs[i].gi_isrc, dev, 0,
"%s,%u", name, i);
}
return (0);
}
static int
gicv3_its_detach(device_t dev)
{
return (ENXIO);
}
static void
its_quirk_cavium_22375(device_t dev)
{
struct gicv3_its_softc *sc;
sc = device_get_softc(dev);
sc->sc_its_flags |= ITS_FLAGS_ERRATA_CAVIUM_22375;
}
static void
gicv3_its_disable_intr(device_t dev, struct intr_irqsrc *isrc)
{
struct gicv3_its_softc *sc;
struct gicv3_its_irqsrc *girq;
uint8_t *conf;
sc = device_get_softc(dev);
girq = (struct gicv3_its_irqsrc *)isrc;
conf = (uint8_t *)sc->sc_conf_base;
conf[girq->gi_irq] &= ~LPI_CONF_ENABLE;
if ((sc->sc_its_flags & ITS_FLAGS_LPI_CONF_FLUSH) != 0) {
/* Clean D-cache under the LPI configuration byte. */
cpu_dcache_wb_range((vm_offset_t)&conf[girq->gi_irq], 1);
} else {
/* DSB inner shareable, store */
dsb(ishst);
}
its_cmd_inv(dev, girq->gi_its_dev, girq);
}
static void
gicv3_its_enable_intr(device_t dev, struct intr_irqsrc *isrc)
{
struct gicv3_its_softc *sc;
struct gicv3_its_irqsrc *girq;
uint8_t *conf;
sc = device_get_softc(dev);
girq = (struct gicv3_its_irqsrc *)isrc;
conf = (uint8_t *)sc->sc_conf_base;
conf[girq->gi_irq] |= LPI_CONF_ENABLE;
if ((sc->sc_its_flags & ITS_FLAGS_LPI_CONF_FLUSH) != 0) {
/* Clean D-cache under the LPI configuration byte. */
cpu_dcache_wb_range((vm_offset_t)&conf[girq->gi_irq], 1);
} else {
/* DSB inner shareable, store */
dsb(ishst);
}
its_cmd_inv(dev, girq->gi_its_dev, girq);
}
static int
gicv3_its_intr(void *arg, uintptr_t irq)
{
struct gicv3_its_softc *sc = arg;
struct gicv3_its_irqsrc *girq;
struct trapframe *tf;
irq -= sc->sc_irq_base;
if (irq >= sc->sc_irq_length)
panic("gicv3_its_intr: Invalid interrupt %ld",
irq + sc->sc_irq_base);
girq = &sc->sc_irqs[irq];
tf = curthread->td_intr_frame;
intr_isrc_dispatch(&girq->gi_isrc, tf);
return (FILTER_HANDLED);
}
static void
gicv3_its_pre_ithread(device_t dev, struct intr_irqsrc *isrc)
{
struct gicv3_its_irqsrc *girq;
struct gicv3_its_softc *sc;
sc = device_get_softc(dev);
girq = (struct gicv3_its_irqsrc *)isrc;
gicv3_its_disable_intr(dev, isrc);
gic_icc_write(EOIR1, girq->gi_irq + sc->sc_irq_base);
}
static void
gicv3_its_post_ithread(device_t dev, struct intr_irqsrc *isrc)
{
gicv3_its_enable_intr(dev, isrc);
}
static void
gicv3_its_post_filter(device_t dev, struct intr_irqsrc *isrc)
{
struct gicv3_its_irqsrc *girq;
struct gicv3_its_softc *sc;
sc = device_get_softc(dev);
girq = (struct gicv3_its_irqsrc *)isrc;
gic_icc_write(EOIR1, girq->gi_irq + sc->sc_irq_base);
}
static int
gicv3_its_bind_intr(device_t dev, struct intr_irqsrc *isrc)
{
struct gicv3_its_irqsrc *girq;
struct gicv3_its_softc *sc;
sc = device_get_softc(dev);
girq = (struct gicv3_its_irqsrc *)isrc;
if (CPU_EMPTY(&isrc->isrc_cpu)) {
sc->gic_irq_cpu = intr_irq_next_cpu(sc->gic_irq_cpu,
&sc->sc_cpus);
CPU_SETOF(sc->gic_irq_cpu, &isrc->isrc_cpu);
}
its_cmd_movi(dev, girq);
return (0);
}
static int
gicv3_its_map_intr(device_t dev, struct intr_map_data *data,
struct intr_irqsrc **isrcp)
{
/*
* This should never happen; this function is only used to map
* interrupts found before the controller driver is ready.
*/
panic("gicv3_its_map_intr: Unable to map an MSI interrupt");
}
static int
gicv3_its_setup_intr(device_t dev, struct intr_irqsrc *isrc,
struct resource *res, struct intr_map_data *data)
{
/* Bind the interrupt to a CPU */
gicv3_its_bind_intr(dev, isrc);
return (0);
}
#ifdef SMP
static void
gicv3_its_init_secondary(device_t dev)
{
struct gicv3_its_softc *sc;
sc = device_get_softc(dev);
/*
* This is fatal, as otherwise we may bind interrupts to this CPU.
* We need a way to tell the interrupt framework to only bind to a
* subset of the given CPUs when it performs the shuffle.
*/
if (its_init_cpu(dev, sc) != 0)
panic("gicv3_its_init_secondary: No usable ITS on CPU%d",
PCPU_GET(cpuid));
}
#endif
static uint32_t
its_get_devid(device_t pci_dev)
{
uintptr_t id;
if (pci_get_id(pci_dev, PCI_ID_MSI, &id) != 0)
panic("its_get_devid: Unable to get the MSI DeviceID");
return (id);
}
static struct its_dev *
its_device_find(device_t dev, device_t child)
{
struct gicv3_its_softc *sc;
struct its_dev *its_dev = NULL;
sc = device_get_softc(dev);
mtx_lock_spin(&sc->sc_its_dev_lock);
TAILQ_FOREACH(its_dev, &sc->sc_its_dev_list, entry) {
if (its_dev->pci_dev == child)
break;
}
mtx_unlock_spin(&sc->sc_its_dev_lock);
return (its_dev);
}
static struct its_dev *
its_device_get(device_t dev, device_t child, u_int nvecs)
{
struct gicv3_its_softc *sc;
struct its_dev *its_dev;
vmem_addr_t irq_base;
size_t esize;
sc = device_get_softc(dev);
its_dev = its_device_find(dev, child);
if (its_dev != NULL)
return (its_dev);
its_dev = malloc(sizeof(*its_dev), M_GICV3_ITS, M_NOWAIT | M_ZERO);
if (its_dev == NULL)
return (NULL);
its_dev->pci_dev = child;
its_dev->devid = its_get_devid(child);
its_dev->lpis.lpi_busy = 0;
its_dev->lpis.lpi_num = nvecs;
its_dev->lpis.lpi_free = nvecs;
if (vmem_alloc(sc->sc_irq_alloc, nvecs, M_FIRSTFIT | M_NOWAIT,
&irq_base) != 0) {
free(its_dev, M_GICV3_ITS);
return (NULL);
}
its_dev->lpis.lpi_base = irq_base;
/* Get ITT entry size */
esize = GITS_TYPER_ITTES(gic_its_read_8(sc, GITS_TYPER));
/*
* Allocate the ITT for this device.
* The physical address must be 256 B aligned, with room for at
* least two entries.
*/
its_dev->itt_size = roundup2(MAX(nvecs, 2) * esize, 256);
its_dev->itt = (vm_offset_t)contigmalloc(its_dev->itt_size,
M_GICV3_ITS, M_NOWAIT | M_ZERO, 0, LPI_INT_TRANS_TAB_MAX_ADDR,
LPI_INT_TRANS_TAB_ALIGN, 0);
if (its_dev->itt == 0) {
vmem_free(sc->sc_irq_alloc, its_dev->lpis.lpi_base, nvecs);
free(its_dev, M_GICV3_ITS);
return (NULL);
}
mtx_lock_spin(&sc->sc_its_dev_lock);
TAILQ_INSERT_TAIL(&sc->sc_its_dev_list, its_dev, entry);
mtx_unlock_spin(&sc->sc_its_dev_lock);
/* Map device to its ITT */
its_cmd_mapd(dev, its_dev, 1);
return (its_dev);
}
static void
its_device_release(device_t dev, struct its_dev *its_dev)
{
struct gicv3_its_softc *sc;
KASSERT(its_dev->lpis.lpi_busy == 0,
("its_device_release: Trying to release an in-use ITS device"));
/* Unmap device in ITS */
its_cmd_mapd(dev, its_dev, 0);
sc = device_get_softc(dev);
/* Remove the device from the list of devices */
mtx_lock_spin(&sc->sc_its_dev_lock);
TAILQ_REMOVE(&sc->sc_its_dev_list, its_dev, entry);
mtx_unlock_spin(&sc->sc_its_dev_lock);
/* Free ITT */
KASSERT(its_dev->itt != 0, ("Invalid ITT in valid ITS device"));
contigfree((void *)its_dev->itt, its_dev->itt_size, M_GICV3_ITS);
/* Free the IRQ allocation */
vmem_free(sc->sc_irq_alloc, its_dev->lpis.lpi_base,
its_dev->lpis.lpi_num);
free(its_dev, M_GICV3_ITS);
}
static int
gicv3_its_alloc_msi(device_t dev, device_t child, int count, int maxcount,
device_t *pic, struct intr_irqsrc **srcs)
{
struct gicv3_its_softc *sc;
struct gicv3_its_irqsrc *girq;
struct its_dev *its_dev;
u_int irq;
int i;
its_dev = its_device_get(dev, child, count);
if (its_dev == NULL)
return (ENXIO);
KASSERT(its_dev->lpis.lpi_free >= count,
("gicv3_its_alloc_msi: No free LPIs"));
sc = device_get_softc(dev);
irq = its_dev->lpis.lpi_base + its_dev->lpis.lpi_num -
its_dev->lpis.lpi_free;
for (i = 0; i < count; i++, irq++) {
its_dev->lpis.lpi_free--;
girq = &sc->sc_irqs[irq];
girq->gi_its_dev = its_dev;
srcs[i] = (struct intr_irqsrc *)girq;
}
its_dev->lpis.lpi_busy += count;
*pic = dev;
return (0);
}
static int
gicv3_its_release_msi(device_t dev, device_t child, int count,
struct intr_irqsrc **isrc)
{
struct gicv3_its_irqsrc *girq;
struct its_dev *its_dev;
int i;
its_dev = its_device_find(dev, child);
KASSERT(its_dev != NULL,
("gicv3_its_release_msi: Releasing an MSI interrupt with "
"no ITS device"));
KASSERT(its_dev->lpis.lpi_busy >= count,
("gicv3_its_release_msi: Releasing more interrupts than "
"were allocated: releasing %d, allocated %d", count,
its_dev->lpis.lpi_busy));
for (i = 0; i < count; i++) {
girq = (struct gicv3_its_irqsrc *)isrc[i];
girq->gi_its_dev = NULL;
}
its_dev->lpis.lpi_busy -= count;
if (its_dev->lpis.lpi_busy == 0)
its_device_release(dev, its_dev);
return (0);
}
static int
gicv3_its_alloc_msix(device_t dev, device_t child, device_t *pic,
struct intr_irqsrc **isrcp)
{
struct gicv3_its_softc *sc;
struct gicv3_its_irqsrc *girq;
struct its_dev *its_dev;
u_int nvecs, irq;
nvecs = pci_msix_count(child);
its_dev = its_device_get(dev, child, nvecs);
if (its_dev == NULL)
return (ENXIO);
KASSERT(its_dev->lpis.lpi_free > 0,
("gicv3_its_alloc_msix: No free LPIs"));
sc = device_get_softc(dev);
irq = its_dev->lpis.lpi_base + its_dev->lpis.lpi_num -
its_dev->lpis.lpi_free;
its_dev->lpis.lpi_free--;
its_dev->lpis.lpi_busy++;
girq = &sc->sc_irqs[irq];
girq->gi_its_dev = its_dev;
*pic = dev;
*isrcp = (struct intr_irqsrc *)girq;
return (0);
}
static int
gicv3_its_release_msix(device_t dev, device_t child, struct intr_irqsrc *isrc)
{
struct gicv3_its_irqsrc *girq;
struct its_dev *its_dev;
its_dev = its_device_find(dev, child);
KASSERT(its_dev != NULL,
("gicv3_its_release_msix: Releasing an MSI-X interrupt with "
"no ITS device"));
KASSERT(its_dev->lpis.lpi_busy > 0,
("gicv3_its_release_msix: Releasing more interrupts than "
"were allocated: allocated %d", its_dev->lpis.lpi_busy));
girq = (struct gicv3_its_irqsrc *)isrc;
girq->gi_its_dev = NULL;
its_dev->lpis.lpi_busy--;
if (its_dev->lpis.lpi_busy == 0)
its_device_release(dev, its_dev);
return (0);
}
static int
gicv3_its_map_msi(device_t dev, device_t child, struct intr_irqsrc *isrc,
uint64_t *addr, uint32_t *data)
{
struct gicv3_its_softc *sc;
struct gicv3_its_irqsrc *girq;
sc = device_get_softc(dev);
girq = (struct gicv3_its_irqsrc *)isrc;
/* Map the message to the given IRQ */
its_cmd_mapti(dev, girq);
*addr = vtophys(rman_get_virtual(sc->sc_its_res)) + GITS_TRANSLATER;
*data = girq->gi_irq - girq->gi_its_dev->lpis.lpi_base;
return (0);
}
/*
* Commands handling.
*/
static __inline void
cmd_format_command(struct its_cmd *cmd, uint8_t cmd_type)
{
/* Command field: DW0 [7:0] */
cmd->cmd_dword[0] &= htole64(~CMD_COMMAND_MASK);
cmd->cmd_dword[0] |= htole64(cmd_type);
}
static __inline void
cmd_format_devid(struct its_cmd *cmd, uint32_t devid)
{
/* Device ID field: DW0 [63:32] */
cmd->cmd_dword[0] &= htole64(~CMD_DEVID_MASK);
cmd->cmd_dword[0] |= htole64((uint64_t)devid << CMD_DEVID_SHIFT);
}
static __inline void
cmd_format_size(struct its_cmd *cmd, uint16_t size)
{
/* Size field: DW1 [4:0] */
cmd->cmd_dword[1] &= htole64(~CMD_SIZE_MASK);
cmd->cmd_dword[1] |= htole64((size & CMD_SIZE_MASK));
}
static __inline void
cmd_format_id(struct its_cmd *cmd, uint32_t id)
{
/* ID field: DW1 [31:0] */
cmd->cmd_dword[1] &= htole64(~CMD_ID_MASK);
cmd->cmd_dword[1] |= htole64(id);
}
static __inline void
cmd_format_pid(struct its_cmd *cmd, uint32_t pid)
{
/* Physical ID field: DW1 [63:32] */
cmd->cmd_dword[1] &= htole64(~CMD_PID_MASK);
cmd->cmd_dword[1] |= htole64((uint64_t)pid << CMD_PID_SHIFT);
}
static __inline void
cmd_format_col(struct its_cmd *cmd, uint16_t col_id)
{
/* Collection field: DW2 [15:0] */
cmd->cmd_dword[2] &= htole64(~CMD_COL_MASK);
cmd->cmd_dword[2] |= htole64(col_id);
}
static __inline void
cmd_format_target(struct its_cmd *cmd, uint64_t target)
{
/* Target Address field: DW2 [47:16] */
cmd->cmd_dword[2] &= htole64(~CMD_TARGET_MASK);
cmd->cmd_dword[2] |= htole64(target & CMD_TARGET_MASK);
}
static __inline void
cmd_format_itt(struct its_cmd *cmd, uint64_t itt)
{
/* ITT Address field: DW2 [47:8] */
cmd->cmd_dword[2] &= htole64(~CMD_ITT_MASK);
cmd->cmd_dword[2] |= htole64(itt & CMD_ITT_MASK);
}
static __inline void
cmd_format_valid(struct its_cmd *cmd, uint8_t valid)
{
/* Valid field: DW2 [63] */
cmd->cmd_dword[2] &= htole64(~CMD_VALID_MASK);
cmd->cmd_dword[2] |= htole64((uint64_t)valid << CMD_VALID_SHIFT);
}
static inline bool
its_cmd_queue_full(struct gicv3_its_softc *sc)
{
size_t read_idx, next_write_idx;
/* Get the index of the next command */
next_write_idx = (sc->sc_its_cmd_next_idx + 1) %
(ITS_CMDQ_SIZE / sizeof(struct its_cmd));
/* And the index of the current command being read */
read_idx = gic_its_read_4(sc, GITS_CREADR) / sizeof(struct its_cmd);
/*
* The queue is full when the write offset points
* at the command before the current read offset.
*/
return (next_write_idx == read_idx);
}
static inline void
its_cmd_sync(struct gicv3_its_softc *sc, struct its_cmd *cmd)
{
if ((sc->sc_its_flags & ITS_FLAGS_CMDQ_FLUSH) != 0) {
/* Clean D-cache under command. */
cpu_dcache_wb_range((vm_offset_t)cmd, sizeof(*cmd));
} else {
/* DSB inner shareable, store */
dsb(ishst);
}
}
static inline uint64_t
its_cmd_cwriter_offset(struct gicv3_its_softc *sc, struct its_cmd *cmd)
{
uint64_t off;
off = (cmd - sc->sc_its_cmd_base) * sizeof(*cmd);
return (off);
}
static void
its_cmd_wait_completion(device_t dev, struct its_cmd *cmd_first,
struct its_cmd *cmd_last)
{
struct gicv3_its_softc *sc;
uint64_t first, last, read;
size_t us_left;
sc = device_get_softc(dev);
/*
* XXX ARM64TODO: This is obviously a significant delay.
* The reason for that is that currently the time frames for
* the command to complete are not known.
*/
us_left = 1000000;
first = its_cmd_cwriter_offset(sc, cmd_first);
last = its_cmd_cwriter_offset(sc, cmd_last);
for (;;) {
read = gic_its_read_8(sc, GITS_CREADR);
if (first < last) {
if (read < first || read >= last)
break;
} else if (read < first && read >= last)
break;
if (us_left-- == 0) {
/* This means timeout */
device_printf(dev,
"Timeout while waiting for CMD completion.\n");
return;
}
DELAY(1);
}
}
static struct its_cmd *
its_cmd_alloc_locked(device_t dev)
{
struct gicv3_its_softc *sc;
struct its_cmd *cmd;
size_t us_left;
sc = device_get_softc(dev);
/*
* XXX ARM64TODO: This is obviously a significant delay.
* The reason for that is that currently the time frames for
* the command to complete (and therefore free the descriptor)
* are not known.
*/
us_left = 1000000;
mtx_assert(&sc->sc_its_cmd_lock, MA_OWNED);
while (its_cmd_queue_full(sc)) {
if (us_left-- == 0) {
/* Timeout while waiting for free command */
device_printf(dev,
"Timeout while waiting for free command\n");
return (NULL);
}
DELAY(1);
}
cmd = &sc->sc_its_cmd_base[sc->sc_its_cmd_next_idx];
sc->sc_its_cmd_next_idx++;
sc->sc_its_cmd_next_idx %= ITS_CMDQ_SIZE / sizeof(struct its_cmd);
return (cmd);
}
static uint64_t
its_cmd_prepare(struct its_cmd *cmd, struct its_cmd_desc *desc)
{
uint64_t target;
uint8_t cmd_type;
u_int size;
cmd_type = desc->cmd_type;
target = ITS_TARGET_NONE;
switch (cmd_type) {
case ITS_CMD_MOVI: /* Move interrupt ID to another collection */
target = desc->cmd_desc_movi.col->col_target;
cmd_format_command(cmd, ITS_CMD_MOVI);
cmd_format_id(cmd, desc->cmd_desc_movi.id);
cmd_format_col(cmd, desc->cmd_desc_movi.col->col_id);
cmd_format_devid(cmd, desc->cmd_desc_movi.its_dev->devid);
break;
case ITS_CMD_SYNC: /* Wait for previous commands completion */
target = desc->cmd_desc_sync.col->col_target;
cmd_format_command(cmd, ITS_CMD_SYNC);
cmd_format_target(cmd, target);
break;
case ITS_CMD_MAPD: /* Assign ITT to device */
cmd_format_command(cmd, ITS_CMD_MAPD);
cmd_format_itt(cmd, vtophys(desc->cmd_desc_mapd.its_dev->itt));
/*
* Size describes the number of bits used to encode the interrupt
* IDs supported by the device, minus one.
* When the V (valid) bit is zero, this field must be written
* as zero.
*/
if (desc->cmd_desc_mapd.valid != 0) {
size = fls(desc->cmd_desc_mapd.its_dev->lpis.lpi_num);
size = MAX(1, size) - 1;
} else
size = 0;
cmd_format_size(cmd, size);
cmd_format_devid(cmd, desc->cmd_desc_mapd.its_dev->devid);
cmd_format_valid(cmd, desc->cmd_desc_mapd.valid);
break;
case ITS_CMD_MAPC: /* Map collection to Re-Distributor */
target = desc->cmd_desc_mapc.col->col_target;
cmd_format_command(cmd, ITS_CMD_MAPC);
cmd_format_col(cmd, desc->cmd_desc_mapc.col->col_id);
cmd_format_valid(cmd, desc->cmd_desc_mapc.valid);
cmd_format_target(cmd, target);
break;
case ITS_CMD_MAPTI:
target = desc->cmd_desc_mapvi.col->col_target;
cmd_format_command(cmd, ITS_CMD_MAPTI);
cmd_format_devid(cmd, desc->cmd_desc_mapvi.its_dev->devid);
cmd_format_id(cmd, desc->cmd_desc_mapvi.id);
cmd_format_pid(cmd, desc->cmd_desc_mapvi.pid);
cmd_format_col(cmd, desc->cmd_desc_mapvi.col->col_id);
break;
case ITS_CMD_MAPI:
target = desc->cmd_desc_mapi.col->col_target;
cmd_format_command(cmd, ITS_CMD_MAPI);
cmd_format_devid(cmd, desc->cmd_desc_mapi.its_dev->devid);
cmd_format_id(cmd, desc->cmd_desc_mapi.pid);
cmd_format_col(cmd, desc->cmd_desc_mapi.col->col_id);
break;
case ITS_CMD_INV:
target = desc->cmd_desc_inv.col->col_target;
cmd_format_command(cmd, ITS_CMD_INV);
cmd_format_devid(cmd, desc->cmd_desc_inv.its_dev->devid);
cmd_format_id(cmd, desc->cmd_desc_inv.pid);
break;
case ITS_CMD_INVALL:
cmd_format_command(cmd, ITS_CMD_INVALL);
cmd_format_col(cmd, desc->cmd_desc_invall.col->col_id);
break;
default:
panic("its_cmd_prepare: Invalid command: %x", cmd_type);
}
return (target);
}
static int
its_cmd_send(device_t dev, struct its_cmd_desc *desc)
{
struct gicv3_its_softc *sc;
struct its_cmd *cmd, *cmd_sync, *cmd_write;
struct its_col col_sync;
struct its_cmd_desc desc_sync;
uint64_t target, cwriter;
sc = device_get_softc(dev);
mtx_lock_spin(&sc->sc_its_cmd_lock);
cmd = its_cmd_alloc_locked(dev);
if (cmd == NULL) {
device_printf(dev, "could not allocate ITS command\n");
mtx_unlock_spin(&sc->sc_its_cmd_lock);
return (EBUSY);
}
target = its_cmd_prepare(cmd, desc);
its_cmd_sync(sc, cmd);
if (target != ITS_TARGET_NONE) {
cmd_sync = its_cmd_alloc_locked(dev);
if (cmd_sync != NULL) {
desc_sync.cmd_type = ITS_CMD_SYNC;
col_sync.col_target = target;
desc_sync.cmd_desc_sync.col = &col_sync;
its_cmd_prepare(cmd_sync, &desc_sync);
its_cmd_sync(sc, cmd_sync);
}
}
/* Update GITS_CWRITER */
cwriter = sc->sc_its_cmd_next_idx * sizeof(struct its_cmd);
gic_its_write_8(sc, GITS_CWRITER, cwriter);
cmd_write = &sc->sc_its_cmd_base[sc->sc_its_cmd_next_idx];
mtx_unlock_spin(&sc->sc_its_cmd_lock);
its_cmd_wait_completion(dev, cmd, cmd_write);
return (0);
}
/* Handlers to send commands */
static void
its_cmd_movi(device_t dev, struct gicv3_its_irqsrc *girq)
{
struct gicv3_its_softc *sc;
struct its_cmd_desc desc;
struct its_col *col;
sc = device_get_softc(dev);
col = sc->sc_its_cols[CPU_FFS(&girq->gi_isrc.isrc_cpu) - 1];
desc.cmd_type = ITS_CMD_MOVI;
desc.cmd_desc_movi.its_dev = girq->gi_its_dev;
desc.cmd_desc_movi.col = col;
desc.cmd_desc_movi.id = girq->gi_irq - girq->gi_its_dev->lpis.lpi_base;
its_cmd_send(dev, &desc);
}
static void
its_cmd_mapc(device_t dev, struct its_col *col, uint8_t valid)
{
struct its_cmd_desc desc;
desc.cmd_type = ITS_CMD_MAPC;
desc.cmd_desc_mapc.col = col;
/*
* Valid bit set - map the collection.
* Valid bit cleared - unmap the collection.
*/
desc.cmd_desc_mapc.valid = valid;
its_cmd_send(dev, &desc);
}
static void
its_cmd_mapti(device_t dev, struct gicv3_its_irqsrc *girq)
{
struct gicv3_its_softc *sc;
struct its_cmd_desc desc;
struct its_col *col;
u_int col_id;
sc = device_get_softc(dev);
col_id = CPU_FFS(&girq->gi_isrc.isrc_cpu) - 1;
col = sc->sc_its_cols[col_id];
desc.cmd_type = ITS_CMD_MAPTI;
desc.cmd_desc_mapvi.its_dev = girq->gi_its_dev;
desc.cmd_desc_mapvi.col = col;
/* The EventID sent to the device */
desc.cmd_desc_mapvi.id = girq->gi_irq - girq->gi_its_dev->lpis.lpi_base;
/* The physical interrupt presented to software */
desc.cmd_desc_mapvi.pid = girq->gi_irq + sc->sc_irq_base;
its_cmd_send(dev, &desc);
}
static void
its_cmd_mapd(device_t dev, struct its_dev *its_dev, uint8_t valid)
{
struct its_cmd_desc desc;
desc.cmd_type = ITS_CMD_MAPD;
desc.cmd_desc_mapd.its_dev = its_dev;
desc.cmd_desc_mapd.valid = valid;
its_cmd_send(dev, &desc);
}
static void
its_cmd_inv(device_t dev, struct its_dev *its_dev,
struct gicv3_its_irqsrc *girq)
{
struct gicv3_its_softc *sc;
struct its_cmd_desc desc;
struct its_col *col;
sc = device_get_softc(dev);
col = sc->sc_its_cols[CPU_FFS(&girq->gi_isrc.isrc_cpu) - 1];
desc.cmd_type = ITS_CMD_INV;
/* The EventID sent to the device */
desc.cmd_desc_inv.pid = girq->gi_irq - its_dev->lpis.lpi_base;
desc.cmd_desc_inv.its_dev = its_dev;
desc.cmd_desc_inv.col = col;
its_cmd_send(dev, &desc);
}
static void
its_cmd_invall(device_t dev, struct its_col *col)
{
struct its_cmd_desc desc;
desc.cmd_type = ITS_CMD_INVALL;
desc.cmd_desc_invall.col = col;
its_cmd_send(dev, &desc);
}
#ifdef FDT
static device_probe_t gicv3_its_fdt_probe;
static device_attach_t gicv3_its_fdt_attach;
static device_method_t gicv3_its_fdt_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, gicv3_its_fdt_probe),
DEVMETHOD(device_attach, gicv3_its_fdt_attach),
/* End */
DEVMETHOD_END
};
#define its_baseclasses its_fdt_baseclasses
DEFINE_CLASS_1(its, gicv3_its_fdt_driver, gicv3_its_fdt_methods,
sizeof(struct gicv3_its_softc), gicv3_its_driver);
#undef its_baseclasses
static devclass_t gicv3_its_fdt_devclass;
EARLY_DRIVER_MODULE(its_fdt, gic, gicv3_its_fdt_driver,
gicv3_its_fdt_devclass, 0, 0, BUS_PASS_INTERRUPT + BUS_PASS_ORDER_MIDDLE);
static int
gicv3_its_fdt_probe(device_t dev)
{
if (!ofw_bus_status_okay(dev))
return (ENXIO);
if (!ofw_bus_is_compatible(dev, "arm,gic-v3-its"))
return (ENXIO);
device_set_desc(dev, "ARM GIC Interrupt Translation Service");
return (BUS_PROBE_DEFAULT);
}
static int
gicv3_its_fdt_attach(device_t dev)
{
struct gicv3_its_softc *sc;
phandle_t xref;
int err;
sc = device_get_softc(dev);
err = gicv3_its_attach(dev);
if (err != 0)
return (err);
/* Register this device as an interrupt controller */
xref = OF_xref_from_node(ofw_bus_get_node(dev));
sc->sc_pic = intr_pic_register(dev, xref);
intr_pic_add_handler(device_get_parent(dev), sc->sc_pic,
gicv3_its_intr, sc, sc->sc_irq_base, sc->sc_irq_length);
/* Register this device to handle MSI interrupts */
intr_msi_register(dev, xref);
return (0);
}
#endif
#ifdef DEV_ACPI
static device_probe_t gicv3_its_acpi_probe;
static device_attach_t gicv3_its_acpi_attach;
static device_method_t gicv3_its_acpi_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, gicv3_its_acpi_probe),
DEVMETHOD(device_attach, gicv3_its_acpi_attach),
/* End */
DEVMETHOD_END
};
#define its_baseclasses its_acpi_baseclasses
DEFINE_CLASS_1(its, gicv3_its_acpi_driver, gicv3_its_acpi_methods,
sizeof(struct gicv3_its_softc), gicv3_its_driver);
#undef its_baseclasses
static devclass_t gicv3_its_acpi_devclass;
EARLY_DRIVER_MODULE(its_acpi, gic, gicv3_its_acpi_driver,
gicv3_its_acpi_devclass, 0, 0, BUS_PASS_INTERRUPT + BUS_PASS_ORDER_MIDDLE);
static int
gicv3_its_acpi_probe(device_t dev)
{
if (gic_get_bus(dev) != GIC_BUS_ACPI)
return (EINVAL);
if (gic_get_hw_rev(dev) < 3)
return (EINVAL);
device_set_desc(dev, "ARM GIC Interrupt Translation Service");
return (BUS_PROBE_DEFAULT);
}
static int
gicv3_its_acpi_attach(device_t dev)
{
struct gicv3_its_softc *sc;
+ struct gic_v3_devinfo *di;
int err;
sc = device_get_softc(dev);
err = gicv3_its_attach(dev);
if (err != 0)
return (err);
- sc->sc_pic = intr_pic_register(dev,
- device_get_unit(dev) + ACPI_MSI_XREF);
+ di = device_get_ivars(dev);
+ sc->sc_pic = intr_pic_register(dev, di->msi_xref);
intr_pic_add_handler(device_get_parent(dev), sc->sc_pic,
gicv3_its_intr, sc, sc->sc_irq_base, sc->sc_irq_length);
/* Register this device to handle MSI interrupts */
- intr_msi_register(dev, device_get_unit(dev) + ACPI_MSI_XREF);
+ intr_msi_register(dev, di->msi_xref);
return (0);
}
#endif
Index: projects/clang800-import/sys/arm64/arm64/pmap.c
===================================================================
--- projects/clang800-import/sys/arm64/arm64/pmap.c (revision 343955)
+++ projects/clang800-import/sys/arm64/arm64/pmap.c (revision 343956)
@@ -1,5415 +1,5418 @@
/*-
* Copyright (c) 1991 Regents of the University of California.
* All rights reserved.
* Copyright (c) 1994 John S. Dyson
* All rights reserved.
* Copyright (c) 1994 David Greenman
* All rights reserved.
* Copyright (c) 2003 Peter Wemm
* All rights reserved.
* Copyright (c) 2005-2010 Alan L. Cox <alc@cs.rice.edu>
* All rights reserved.
* Copyright (c) 2014 Andrew Turner
* All rights reserved.
* Copyright (c) 2014-2016 The FreeBSD Foundation
* All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* the Systems Programming Group of the University of Utah Computer
* Science Department and William Jolitz of UUNET Technologies Inc.
*
* This software was developed by Andrew Turner under sponsorship from
* the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the University of
* California, Berkeley and its contributors.
* 4. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: @(#)pmap.c 7.7 (Berkeley) 5/12/91
*/
/*-
* Copyright (c) 2003 Networks Associates Technology, Inc.
* All rights reserved.
*
* This software was developed for the FreeBSD Project by Jake Burkholder,
* Safeport Network Services, and Network Associates Laboratories, the
* Security Research Division of Network Associates, Inc. under
* DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the DARPA
* CHATS research program.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* Manages physical address maps.
*
* Since the information managed by this module is
* also stored by the logical address mapping module,
* this module may throw away valid virtual-to-physical
* mappings at almost any time. However, invalidations
* of virtual-to-physical mappings must be done as
* requested.
*
* In order to cope with hardware architectures which
* make virtual-to-physical map invalidates expensive,
* this module may delay invalidate or reduced protection
* operations until such time as they are actually
* necessary. This module is given full information as
* to which processors are currently using which maps,
* and to when physical maps must be made correct.
*/
#include "opt_vm.h"
#include <sys/param.h>
#include <sys/bitstring.h>
#include <sys/bus.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mman.h>
#include <sys/msgbuf.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/rwlock.h>
#include <sys/sx.h>
#include <sys/vmem.h>
#include <sys/vmmeter.h>
#include <sys/sched.h>
#include <sys/sysctl.h>
#include <sys/_unrhdr.h>
#include <sys/smp.h>
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_kern.h>
#include <vm/vm_page.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_extern.h>
#include <vm/vm_pageout.h>
#include <vm/vm_pager.h>
#include <vm/vm_phys.h>
#include <vm/vm_radix.h>
#include <vm/vm_reserv.h>
#include <vm/uma.h>
#include <machine/machdep.h>
#include <machine/md_var.h>
#include <machine/pcb.h>
#include <arm/include/physmem.h>
#define NL0PG (PAGE_SIZE/(sizeof (pd_entry_t)))
#define NL1PG (PAGE_SIZE/(sizeof (pd_entry_t)))
#define NL2PG (PAGE_SIZE/(sizeof (pd_entry_t)))
#define NL3PG (PAGE_SIZE/(sizeof (pt_entry_t)))
#define NUL0E L0_ENTRIES
#define NUL1E (NUL0E * NL1PG)
#define NUL2E (NUL1E * NL2PG)
#if !defined(DIAGNOSTIC)
#ifdef __GNUC_GNU_INLINE__
#define PMAP_INLINE __attribute__((__gnu_inline__)) inline
#else
#define PMAP_INLINE extern inline
#endif
#else
#define PMAP_INLINE
#endif
/*
* These are configured by the mair_el1 register. This is set up in locore.S
*/
#define DEVICE_MEMORY 0
#define UNCACHED_MEMORY 1
#define CACHED_MEMORY 2
#ifdef PV_STATS
#define PV_STAT(x) do { x ; } while (0)
#else
#define PV_STAT(x) do { } while (0)
#endif
#define pmap_l2_pindex(v) ((v) >> L2_SHIFT)
#define pa_to_pvh(pa) (&pv_table[pmap_l2_pindex(pa)])
#define NPV_LIST_LOCKS MAXCPU
#define PHYS_TO_PV_LIST_LOCK(pa) \
(&pv_list_locks[pa_index(pa) % NPV_LIST_LOCKS])
#define CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa) do { \
struct rwlock **_lockp = (lockp); \
struct rwlock *_new_lock; \
\
_new_lock = PHYS_TO_PV_LIST_LOCK(pa); \
if (_new_lock != *_lockp) { \
if (*_lockp != NULL) \
rw_wunlock(*_lockp); \
*_lockp = _new_lock; \
rw_wlock(*_lockp); \
} \
} while (0)
#define CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m) \
CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, VM_PAGE_TO_PHYS(m))
#define RELEASE_PV_LIST_LOCK(lockp) do { \
struct rwlock **_lockp = (lockp); \
\
if (*_lockp != NULL) { \
rw_wunlock(*_lockp); \
*_lockp = NULL; \
} \
} while (0)
#define VM_PAGE_TO_PV_LIST_LOCK(m) \
PHYS_TO_PV_LIST_LOCK(VM_PAGE_TO_PHYS(m))
struct pmap kernel_pmap_store;
/* Used for mapping ACPI memory before VM is initialized */
#define PMAP_PREINIT_MAPPING_COUNT 32
#define PMAP_PREINIT_MAPPING_SIZE (PMAP_PREINIT_MAPPING_COUNT * L2_SIZE)
static vm_offset_t preinit_map_va; /* Start VA of pre-init mapping space */
static int vm_initialized = 0; /* No need to use pre-init maps when set */
/*
* Reserve a few L2 blocks starting from 'preinit_map_va' pointer.
* Always map entire L2 block for simplicity.
* VA of L2 block = preinit_map_va + i * L2_SIZE
*/
static struct pmap_preinit_mapping {
vm_paddr_t pa;
vm_offset_t va;
vm_size_t size;
} pmap_preinit_mapping[PMAP_PREINIT_MAPPING_COUNT];
vm_offset_t virtual_avail; /* VA of first avail page (after kernel bss) */
vm_offset_t virtual_end; /* VA of last avail page (end of kernel AS) */
vm_offset_t kernel_vm_end = 0;
/*
* Data for the pv entry allocation mechanism.
*/
static TAILQ_HEAD(pch, pv_chunk) pv_chunks = TAILQ_HEAD_INITIALIZER(pv_chunks);
static struct mtx pv_chunks_mutex;
static struct rwlock pv_list_locks[NPV_LIST_LOCKS];
static struct md_page *pv_table;
static struct md_page pv_dummy;
vm_paddr_t dmap_phys_base; /* The start of the dmap region */
vm_paddr_t dmap_phys_max; /* The limit of the dmap region */
vm_offset_t dmap_max_addr; /* The virtual address limit of the dmap */
/* This code assumes all L1 DMAP entries will be used */
CTASSERT((DMAP_MIN_ADDRESS & ~L0_OFFSET) == DMAP_MIN_ADDRESS);
CTASSERT((DMAP_MAX_ADDRESS & ~L0_OFFSET) == DMAP_MAX_ADDRESS);
#define DMAP_TABLES ((DMAP_MAX_ADDRESS - DMAP_MIN_ADDRESS) >> L0_SHIFT)
extern pt_entry_t pagetable_dmap[];
#define PHYSMAP_SIZE (2 * (VM_PHYSSEG_MAX - 1))
static vm_paddr_t physmap[PHYSMAP_SIZE];
static u_int physmap_idx;
static SYSCTL_NODE(_vm, OID_AUTO, pmap, CTLFLAG_RD, 0, "VM/pmap parameters");
static int superpages_enabled = 1;
SYSCTL_INT(_vm_pmap, OID_AUTO, superpages_enabled,
CTLFLAG_RDTUN | CTLFLAG_NOFETCH, &superpages_enabled, 0,
"Are large page mappings enabled?");
/*
* Internal flags for pmap_enter()'s helper functions.
*/
#define PMAP_ENTER_NORECLAIM 0x1000000 /* Don't reclaim PV entries. */
#define PMAP_ENTER_NOREPLACE 0x2000000 /* Don't replace mappings. */
static void free_pv_chunk(struct pv_chunk *pc);
static void free_pv_entry(pmap_t pmap, pv_entry_t pv);
static pv_entry_t get_pv_entry(pmap_t pmap, struct rwlock **lockp);
static vm_page_t reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp);
static void pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va);
static pv_entry_t pmap_pvh_remove(struct md_page *pvh, pmap_t pmap,
vm_offset_t va);
static int pmap_change_attr(vm_offset_t va, vm_size_t size, int mode);
static int pmap_change_attr_locked(vm_offset_t va, vm_size_t size, int mode);
static pt_entry_t *pmap_demote_l1(pmap_t pmap, pt_entry_t *l1, vm_offset_t va);
static pt_entry_t *pmap_demote_l2_locked(pmap_t pmap, pt_entry_t *l2,
vm_offset_t va, struct rwlock **lockp);
static pt_entry_t *pmap_demote_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t va);
static vm_page_t pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va,
vm_page_t m, vm_prot_t prot, vm_page_t mpte, struct rwlock **lockp);
static int pmap_enter_l2(pmap_t pmap, vm_offset_t va, pd_entry_t new_l2,
u_int flags, vm_page_t m, struct rwlock **lockp);
static int pmap_remove_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t sva,
pd_entry_t l1e, struct spglist *free, struct rwlock **lockp);
static int pmap_remove_l3(pmap_t pmap, pt_entry_t *l3, vm_offset_t sva,
pd_entry_t l2e, struct spglist *free, struct rwlock **lockp);
static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va,
vm_page_t m, struct rwlock **lockp);
static vm_page_t _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex,
struct rwlock **lockp);
static void _pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t m,
struct spglist *free);
static int pmap_unuse_pt(pmap_t, vm_offset_t, pd_entry_t, struct spglist *);
static __inline vm_page_t pmap_remove_pt_page(pmap_t pmap, vm_offset_t va);
/*
* These load the old table data and store the new value.
* They need to be atomic as the System MMU may write to the table at
* the same time as the CPU.
*/
#define pmap_load_store(table, entry) atomic_swap_64(table, entry)
#define pmap_set(table, mask) atomic_set_64(table, mask)
#define pmap_load_clear(table) atomic_swap_64(table, 0)
#define pmap_load(table) (*table)
/********************/
/* Inline functions */
/********************/
static __inline void
pagecopy(void *s, void *d)
{
memcpy(d, s, PAGE_SIZE);
}
static __inline pd_entry_t *
pmap_l0(pmap_t pmap, vm_offset_t va)
{
return (&pmap->pm_l0[pmap_l0_index(va)]);
}
static __inline pd_entry_t *
pmap_l0_to_l1(pd_entry_t *l0, vm_offset_t va)
{
pd_entry_t *l1;
l1 = (pd_entry_t *)PHYS_TO_DMAP(pmap_load(l0) & ~ATTR_MASK);
return (&l1[pmap_l1_index(va)]);
}
static __inline pd_entry_t *
pmap_l1(pmap_t pmap, vm_offset_t va)
{
pd_entry_t *l0;
l0 = pmap_l0(pmap, va);
if ((pmap_load(l0) & ATTR_DESCR_MASK) != L0_TABLE)
return (NULL);
return (pmap_l0_to_l1(l0, va));
}
static __inline pd_entry_t *
pmap_l1_to_l2(pd_entry_t *l1, vm_offset_t va)
{
pd_entry_t *l2;
l2 = (pd_entry_t *)PHYS_TO_DMAP(pmap_load(l1) & ~ATTR_MASK);
return (&l2[pmap_l2_index(va)]);
}
static __inline pd_entry_t *
pmap_l2(pmap_t pmap, vm_offset_t va)
{
pd_entry_t *l1;
l1 = pmap_l1(pmap, va);
if ((pmap_load(l1) & ATTR_DESCR_MASK) != L1_TABLE)
return (NULL);
return (pmap_l1_to_l2(l1, va));
}
static __inline pt_entry_t *
pmap_l2_to_l3(pd_entry_t *l2, vm_offset_t va)
{
pt_entry_t *l3;
l3 = (pt_entry_t *)PHYS_TO_DMAP(pmap_load(l2) & ~ATTR_MASK);
return (&l3[pmap_l3_index(va)]);
}
/*
* Returns the lowest valid pde for a given virtual address.
* The next level may or may not point to a valid page or block.
*/
static __inline pd_entry_t *
pmap_pde(pmap_t pmap, vm_offset_t va, int *level)
{
pd_entry_t *l0, *l1, *l2, desc;
l0 = pmap_l0(pmap, va);
desc = pmap_load(l0) & ATTR_DESCR_MASK;
if (desc != L0_TABLE) {
*level = -1;
return (NULL);
}
l1 = pmap_l0_to_l1(l0, va);
desc = pmap_load(l1) & ATTR_DESCR_MASK;
if (desc != L1_TABLE) {
*level = 0;
return (l0);
}
l2 = pmap_l1_to_l2(l1, va);
desc = pmap_load(l2) & ATTR_DESCR_MASK;
if (desc != L2_TABLE) {
*level = 1;
return (l1);
}
*level = 2;
return (l2);
}
/*
* Returns the lowest valid pte block or table entry for a given virtual
* address. If there are no valid entries, return NULL and set the level to
* the first invalid level.
*/
static __inline pt_entry_t *
pmap_pte(pmap_t pmap, vm_offset_t va, int *level)
{
pd_entry_t *l1, *l2, desc;
pt_entry_t *l3;
l1 = pmap_l1(pmap, va);
if (l1 == NULL) {
*level = 0;
return (NULL);
}
desc = pmap_load(l1) & ATTR_DESCR_MASK;
if (desc == L1_BLOCK) {
*level = 1;
return (l1);
}
if (desc != L1_TABLE) {
*level = 1;
return (NULL);
}
l2 = pmap_l1_to_l2(l1, va);
desc = pmap_load(l2) & ATTR_DESCR_MASK;
if (desc == L2_BLOCK) {
*level = 2;
return (l2);
}
if (desc != L2_TABLE) {
*level = 2;
return (NULL);
}
*level = 3;
l3 = pmap_l2_to_l3(l2, va);
if ((pmap_load(l3) & ATTR_DESCR_MASK) != L3_PAGE)
return (NULL);
return (l3);
}
bool
pmap_ps_enabled(pmap_t pmap __unused)
{
return (superpages_enabled != 0);
}
bool
pmap_get_tables(pmap_t pmap, vm_offset_t va, pd_entry_t **l0, pd_entry_t **l1,
pd_entry_t **l2, pt_entry_t **l3)
{
pd_entry_t *l0p, *l1p, *l2p;
if (pmap->pm_l0 == NULL)
return (false);
l0p = pmap_l0(pmap, va);
*l0 = l0p;
if ((pmap_load(l0p) & ATTR_DESCR_MASK) != L0_TABLE)
return (false);
l1p = pmap_l0_to_l1(l0p, va);
*l1 = l1p;
if ((pmap_load(l1p) & ATTR_DESCR_MASK) == L1_BLOCK) {
*l2 = NULL;
*l3 = NULL;
return (true);
}
if ((pmap_load(l1p) & ATTR_DESCR_MASK) != L1_TABLE)
return (false);
l2p = pmap_l1_to_l2(l1p, va);
*l2 = l2p;
if ((pmap_load(l2p) & ATTR_DESCR_MASK) == L2_BLOCK) {
*l3 = NULL;
return (true);
}
if ((pmap_load(l2p) & ATTR_DESCR_MASK) != L2_TABLE)
return (false);
*l3 = pmap_l2_to_l3(l2p, va);
return (true);
}
static __inline int
pmap_l3_valid(pt_entry_t l3)
{
return ((l3 & ATTR_DESCR_MASK) == L3_PAGE);
}
CTASSERT(L1_BLOCK == L2_BLOCK);
/*
* Checks if the page is dirty. We currently lack proper tracking of this on
* arm64, so for now assume that a page mapped read/write and accessed is dirty.
*/
static inline int
pmap_page_dirty(pt_entry_t pte)
{
return ((pte & (ATTR_AF | ATTR_AP_RW_BIT)) ==
(ATTR_AF | ATTR_AP(ATTR_AP_RW)));
}
static __inline void
pmap_resident_count_inc(pmap_t pmap, int count)
{
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
pmap->pm_stats.resident_count += count;
}
static __inline void
pmap_resident_count_dec(pmap_t pmap, int count)
{
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
KASSERT(pmap->pm_stats.resident_count >= count,
("pmap %p resident count underflow %ld %d", pmap,
pmap->pm_stats.resident_count, count));
pmap->pm_stats.resident_count -= count;
}
static pt_entry_t *
pmap_early_page_idx(vm_offset_t l1pt, vm_offset_t va, u_int *l1_slot,
u_int *l2_slot)
{
pt_entry_t *l2;
pd_entry_t *l1;
l1 = (pd_entry_t *)l1pt;
*l1_slot = (va >> L1_SHIFT) & Ln_ADDR_MASK;
/* Check that locore used an L1 table mapping */
KASSERT((l1[*l1_slot] & ATTR_DESCR_MASK) == L1_TABLE,
("Invalid bootstrap L1 table"));
/* Find the address of the L2 table */
l2 = (pt_entry_t *)init_pt_va;
*l2_slot = pmap_l2_index(va);
return (l2);
}
static vm_paddr_t
pmap_early_vtophys(vm_offset_t l1pt, vm_offset_t va)
{
u_int l1_slot, l2_slot;
pt_entry_t *l2;
l2 = pmap_early_page_idx(l1pt, va, &l1_slot, &l2_slot);
return ((l2[l2_slot] & ~ATTR_MASK) + (va & L2_OFFSET));
}
static vm_offset_t
pmap_bootstrap_dmap(vm_offset_t kern_l1, vm_paddr_t min_pa,
vm_offset_t freemempos)
{
pt_entry_t *l2;
vm_offset_t va;
vm_paddr_t l2_pa, pa;
u_int l1_slot, l2_slot, prev_l1_slot;
int i;
dmap_phys_base = min_pa & ~L1_OFFSET;
dmap_phys_max = 0;
dmap_max_addr = 0;
l2 = NULL;
prev_l1_slot = -1;
memset(pagetable_dmap, 0, PAGE_SIZE * DMAP_TABLES);
for (i = 0; i < (physmap_idx * 2); i += 2) {
pa = physmap[i] & ~L2_OFFSET;
va = pa - dmap_phys_base + DMAP_MIN_ADDRESS;
/* Create L2 mappings at the start of the region */
if ((pa & L1_OFFSET) != 0) {
l1_slot = ((va - DMAP_MIN_ADDRESS) >> L1_SHIFT);
if (l1_slot != prev_l1_slot) {
prev_l1_slot = l1_slot;
l2 = (pt_entry_t *)freemempos;
l2_pa = pmap_early_vtophys(kern_l1,
(vm_offset_t)l2);
freemempos += PAGE_SIZE;
pmap_load_store(&pagetable_dmap[l1_slot],
(l2_pa & ~Ln_TABLE_MASK) | L1_TABLE);
memset(l2, 0, PAGE_SIZE);
}
KASSERT(l2 != NULL,
("pmap_bootstrap_dmap: NULL l2 map"));
for (; va < DMAP_MAX_ADDRESS && pa < physmap[i + 1];
pa += L2_SIZE, va += L2_SIZE) {
/*
* We are on a boundary, stop to
* create a level 1 block
*/
if ((pa & L1_OFFSET) == 0)
break;
l2_slot = pmap_l2_index(va);
KASSERT(l2_slot != 0, ("..."));
pmap_load_store(&l2[l2_slot],
(pa & ~L2_OFFSET) | ATTR_DEFAULT | ATTR_XN |
ATTR_IDX(CACHED_MEMORY) | L2_BLOCK);
}
KASSERT(va == (pa - dmap_phys_base + DMAP_MIN_ADDRESS),
("..."));
}
for (; va < DMAP_MAX_ADDRESS && pa < physmap[i + 1] &&
(physmap[i + 1] - pa) >= L1_SIZE;
pa += L1_SIZE, va += L1_SIZE) {
l1_slot = ((va - DMAP_MIN_ADDRESS) >> L1_SHIFT);
pmap_load_store(&pagetable_dmap[l1_slot],
(pa & ~L1_OFFSET) | ATTR_DEFAULT | ATTR_XN |
ATTR_IDX(CACHED_MEMORY) | L1_BLOCK);
}
/* Create L2 mappings at the end of the region */
if (pa < physmap[i + 1]) {
l1_slot = ((va - DMAP_MIN_ADDRESS) >> L1_SHIFT);
if (l1_slot != prev_l1_slot) {
prev_l1_slot = l1_slot;
l2 = (pt_entry_t *)freemempos;
l2_pa = pmap_early_vtophys(kern_l1,
(vm_offset_t)l2);
freemempos += PAGE_SIZE;
pmap_load_store(&pagetable_dmap[l1_slot],
(l2_pa & ~Ln_TABLE_MASK) | L1_TABLE);
memset(l2, 0, PAGE_SIZE);
}
KASSERT(l2 != NULL,
("pmap_bootstrap_dmap: NULL l2 map"));
for (; va < DMAP_MAX_ADDRESS && pa < physmap[i + 1];
pa += L2_SIZE, va += L2_SIZE) {
l2_slot = pmap_l2_index(va);
pmap_load_store(&l2[l2_slot],
(pa & ~L2_OFFSET) | ATTR_DEFAULT | ATTR_XN |
ATTR_IDX(CACHED_MEMORY) | L2_BLOCK);
}
}
if (pa > dmap_phys_max) {
dmap_phys_max = pa;
dmap_max_addr = va;
}
}
cpu_tlb_flushID();
return (freemempos);
}
static vm_offset_t
pmap_bootstrap_l2(vm_offset_t l1pt, vm_offset_t va, vm_offset_t l2_start)
{
vm_offset_t l2pt;
vm_paddr_t pa;
pd_entry_t *l1;
u_int l1_slot;
KASSERT((va & L1_OFFSET) == 0, ("Invalid virtual address"));
l1 = (pd_entry_t *)l1pt;
l1_slot = pmap_l1_index(va);
l2pt = l2_start;
for (; va < VM_MAX_KERNEL_ADDRESS; l1_slot++, va += L1_SIZE) {
KASSERT(l1_slot < Ln_ENTRIES, ("Invalid L1 index"));
pa = pmap_early_vtophys(l1pt, l2pt);
pmap_load_store(&l1[l1_slot],
(pa & ~Ln_TABLE_MASK) | L1_TABLE);
l2pt += PAGE_SIZE;
}
/* Clean the L2 page table */
memset((void *)l2_start, 0, l2pt - l2_start);
return (l2pt);
}
static vm_offset_t
pmap_bootstrap_l3(vm_offset_t l1pt, vm_offset_t va, vm_offset_t l3_start)
{
vm_offset_t l3pt;
vm_paddr_t pa;
pd_entry_t *l2;
u_int l2_slot;
KASSERT((va & L2_OFFSET) == 0, ("Invalid virtual address"));
l2 = pmap_l2(kernel_pmap, va);
l2 = (pd_entry_t *)rounddown2((uintptr_t)l2, PAGE_SIZE);
l2_slot = pmap_l2_index(va);
l3pt = l3_start;
for (; va < VM_MAX_KERNEL_ADDRESS; l2_slot++, va += L2_SIZE) {
KASSERT(l2_slot < Ln_ENTRIES, ("Invalid L2 index"));
pa = pmap_early_vtophys(l1pt, l3pt);
pmap_load_store(&l2[l2_slot],
(pa & ~Ln_TABLE_MASK) | L2_TABLE);
l3pt += PAGE_SIZE;
}
/* Clean the L2 page table */
memset((void *)l3_start, 0, l3pt - l3_start);
return (l3pt);
}
/*
* Bootstrap the system enough to run with virtual memory.
*/
void
pmap_bootstrap(vm_offset_t l0pt, vm_offset_t l1pt, vm_paddr_t kernstart,
vm_size_t kernlen)
{
u_int l1_slot, l2_slot;
uint64_t kern_delta;
pt_entry_t *l2;
vm_offset_t va, freemempos;
vm_offset_t dpcpu, msgbufpv;
vm_paddr_t start_pa, pa, min_pa;
int i;
kern_delta = KERNBASE - kernstart;
printf("pmap_bootstrap %lx %lx %lx\n", l1pt, kernstart, kernlen);
printf("%lx\n", l1pt);
printf("%lx\n", (KERNBASE >> L1_SHIFT) & Ln_ADDR_MASK);
/* Set this early so we can use the pagetable walking functions */
kernel_pmap_store.pm_l0 = (pd_entry_t *)l0pt;
PMAP_LOCK_INIT(kernel_pmap);
/* Assume the address we were loaded to is a valid physical address */
min_pa = KERNBASE - kern_delta;
physmap_idx = arm_physmem_avail(physmap, nitems(physmap));
physmap_idx /= 2;
/*
* Find the minimum physical address. physmap is sorted,
* but may contain empty ranges.
*/
for (i = 0; i < (physmap_idx * 2); i += 2) {
if (physmap[i] == physmap[i + 1])
continue;
if (physmap[i] <= min_pa)
min_pa = physmap[i];
}
freemempos = KERNBASE + kernlen;
freemempos = roundup2(freemempos, PAGE_SIZE);
/* Create a direct map region early so we can use it for pa -> va */
freemempos = pmap_bootstrap_dmap(l1pt, min_pa, freemempos);
va = KERNBASE;
start_pa = pa = KERNBASE - kern_delta;
/*
* Read the page table to find out what is already mapped.
* This assumes we have mapped a block of memory from KERNBASE
* using a single L1 entry.
*/
l2 = pmap_early_page_idx(l1pt, KERNBASE, &l1_slot, &l2_slot);
/* Sanity check the index, KERNBASE should be the first VA */
KASSERT(l2_slot == 0, ("The L2 index is non-zero"));
/* Find how many pages we have mapped */
for (; l2_slot < Ln_ENTRIES; l2_slot++) {
if ((l2[l2_slot] & ATTR_DESCR_MASK) == 0)
break;
/* Check locore used L2 blocks */
KASSERT((l2[l2_slot] & ATTR_DESCR_MASK) == L2_BLOCK,
("Invalid bootstrap L2 table"));
KASSERT((l2[l2_slot] & ~ATTR_MASK) == pa,
("Incorrect PA in L2 table"));
va += L2_SIZE;
pa += L2_SIZE;
}
va = roundup2(va, L1_SIZE);
/* Create the l2 tables up to VM_MAX_KERNEL_ADDRESS */
freemempos = pmap_bootstrap_l2(l1pt, va, freemempos);
/* And the l3 tables for the early devmap */
freemempos = pmap_bootstrap_l3(l1pt,
VM_MAX_KERNEL_ADDRESS - (PMAP_MAPDEV_EARLY_SIZE), freemempos);
cpu_tlb_flushID();
#define alloc_pages(var, np) \
(var) = freemempos; \
freemempos += (np * PAGE_SIZE); \
memset((char *)(var), 0, ((np) * PAGE_SIZE));
/* Allocate dynamic per-cpu area. */
alloc_pages(dpcpu, DPCPU_SIZE / PAGE_SIZE);
dpcpu_init((void *)dpcpu, 0);
/* Allocate memory for the msgbuf, e.g. for /sbin/dmesg */
alloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE);
msgbufp = (void *)msgbufpv;
/* Reserve some VA space for early BIOS/ACPI mapping */
preinit_map_va = roundup2(freemempos, L2_SIZE);
virtual_avail = preinit_map_va + PMAP_PREINIT_MAPPING_SIZE;
virtual_avail = roundup2(virtual_avail, L1_SIZE);
virtual_end = VM_MAX_KERNEL_ADDRESS - (PMAP_MAPDEV_EARLY_SIZE);
kernel_vm_end = virtual_avail;
pa = pmap_early_vtophys(l1pt, freemempos);
arm_physmem_exclude_region(start_pa, pa - start_pa, EXFLAG_NOALLOC);
cpu_tlb_flushID();
}
/*
* Initialize a vm_page's machine-dependent fields.
*/
void
pmap_page_init(vm_page_t m)
{
TAILQ_INIT(&m->md.pv_list);
m->md.pv_memattr = VM_MEMATTR_WRITE_BACK;
}
/*
* Initialize the pmap module.
* Called by vm_init, to initialize any structures that the pmap
* system needs to map virtual memory.
*/
void
pmap_init(void)
{
vm_size_t s;
int i, pv_npg;
/*
* Are large page mappings enabled?
*/
TUNABLE_INT_FETCH("vm.pmap.superpages_enabled", &superpages_enabled);
if (superpages_enabled) {
KASSERT(MAXPAGESIZES > 1 && pagesizes[1] == 0,
("pmap_init: can't assign to pagesizes[1]"));
pagesizes[1] = L2_SIZE;
}
/*
* Initialize the pv chunk list mutex.
*/
mtx_init(&pv_chunks_mutex, "pmap pv chunk list", NULL, MTX_DEF);
/*
* Initialize the pool of pv list locks.
*/
for (i = 0; i < NPV_LIST_LOCKS; i++)
rw_init(&pv_list_locks[i], "pmap pv list");
/*
* Calculate the size of the pv head table for superpages.
*/
pv_npg = howmany(vm_phys_segs[vm_phys_nsegs - 1].end, L2_SIZE);
/*
* Allocate memory for the pv head table for superpages.
*/
s = (vm_size_t)(pv_npg * sizeof(struct md_page));
s = round_page(s);
pv_table = (struct md_page *)kmem_malloc(s, M_WAITOK | M_ZERO);
for (i = 0; i < pv_npg; i++)
TAILQ_INIT(&pv_table[i].pv_list);
TAILQ_INIT(&pv_dummy.pv_list);
vm_initialized = 1;
}
static SYSCTL_NODE(_vm_pmap, OID_AUTO, l2, CTLFLAG_RD, 0,
"2MB page mapping counters");
static u_long pmap_l2_demotions;
SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, demotions, CTLFLAG_RD,
&pmap_l2_demotions, 0, "2MB page demotions");
static u_long pmap_l2_mappings;
SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, mappings, CTLFLAG_RD,
&pmap_l2_mappings, 0, "2MB page mappings");
static u_long pmap_l2_p_failures;
SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, p_failures, CTLFLAG_RD,
&pmap_l2_p_failures, 0, "2MB page promotion failures");
static u_long pmap_l2_promotions;
SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, promotions, CTLFLAG_RD,
&pmap_l2_promotions, 0, "2MB page promotions");
/*
* Invalidate a single TLB entry.
*/
static __inline void
pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
{
sched_pin();
__asm __volatile(
"dsb ishst \n"
"tlbi vaae1is, %0 \n"
"dsb ish \n"
"isb \n"
: : "r"(va >> PAGE_SHIFT));
sched_unpin();
}
static __inline void
pmap_invalidate_range_nopin(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
vm_offset_t addr;
dsb(ishst);
for (addr = sva; addr < eva; addr += PAGE_SIZE) {
__asm __volatile(
"tlbi vaae1is, %0" : : "r"(addr >> PAGE_SHIFT));
}
__asm __volatile(
"dsb ish \n"
"isb \n");
}
static __inline void
pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
sched_pin();
pmap_invalidate_range_nopin(pmap, sva, eva);
sched_unpin();
}
static __inline void
pmap_invalidate_all(pmap_t pmap)
{
sched_pin();
__asm __volatile(
"dsb ishst \n"
"tlbi vmalle1is \n"
"dsb ish \n"
"isb \n");
sched_unpin();
}
/*
* Routine: pmap_extract
* Function:
* Extract the physical page address associated
* with the given map/virtual_address pair.
*/
vm_paddr_t
pmap_extract(pmap_t pmap, vm_offset_t va)
{
pt_entry_t *pte, tpte;
vm_paddr_t pa;
int lvl;
pa = 0;
PMAP_LOCK(pmap);
/*
* Find the block or page map for this virtual address. pmap_pte
* will return either a valid block/page entry, or NULL.
*/
pte = pmap_pte(pmap, va, &lvl);
if (pte != NULL) {
tpte = pmap_load(pte);
pa = tpte & ~ATTR_MASK;
switch (lvl) {
case 1:
KASSERT((tpte & ATTR_DESCR_MASK) == L1_BLOCK,
("pmap_extract: Invalid L1 pte found: %lx",
tpte & ATTR_DESCR_MASK));
pa |= (va & L1_OFFSET);
break;
case 2:
KASSERT((tpte & ATTR_DESCR_MASK) == L2_BLOCK,
("pmap_extract: Invalid L2 pte found: %lx",
tpte & ATTR_DESCR_MASK));
pa |= (va & L2_OFFSET);
break;
case 3:
KASSERT((tpte & ATTR_DESCR_MASK) == L3_PAGE,
("pmap_extract: Invalid L3 pte found: %lx",
tpte & ATTR_DESCR_MASK));
pa |= (va & L3_OFFSET);
break;
}
}
PMAP_UNLOCK(pmap);
return (pa);
}
/*
* Routine: pmap_extract_and_hold
* Function:
* Atomically extract and hold the physical page
* with the given pmap and virtual address pair
* if that mapping permits the given protection.
*/
vm_page_t
pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
{
pt_entry_t *pte, tpte;
vm_offset_t off;
vm_paddr_t pa;
vm_page_t m;
int lvl;
pa = 0;
m = NULL;
PMAP_LOCK(pmap);
retry:
pte = pmap_pte(pmap, va, &lvl);
if (pte != NULL) {
tpte = pmap_load(pte);
KASSERT(lvl > 0 && lvl <= 3,
("pmap_extract_and_hold: Invalid level %d", lvl));
CTASSERT(L1_BLOCK == L2_BLOCK);
KASSERT((lvl == 3 && (tpte & ATTR_DESCR_MASK) == L3_PAGE) ||
(lvl < 3 && (tpte & ATTR_DESCR_MASK) == L1_BLOCK),
("pmap_extract_and_hold: Invalid pte at L%d: %lx", lvl,
tpte & ATTR_DESCR_MASK));
if (((tpte & ATTR_AP_RW_BIT) == ATTR_AP(ATTR_AP_RW)) ||
((prot & VM_PROT_WRITE) == 0)) {
switch (lvl) {
case 1:
off = va & L1_OFFSET;
break;
case 2:
off = va & L2_OFFSET;
break;
case 3:
default:
off = 0;
}
if (vm_page_pa_tryrelock(pmap,
(tpte & ~ATTR_MASK) | off, &pa))
goto retry;
m = PHYS_TO_VM_PAGE((tpte & ~ATTR_MASK) | off);
vm_page_hold(m);
}
}
PA_UNLOCK_COND(pa);
PMAP_UNLOCK(pmap);
return (m);
}
vm_paddr_t
pmap_kextract(vm_offset_t va)
{
pt_entry_t *pte, tpte;
vm_paddr_t pa;
int lvl;
if (va >= DMAP_MIN_ADDRESS && va < DMAP_MAX_ADDRESS) {
pa = DMAP_TO_PHYS(va);
} else {
pa = 0;
pte = pmap_pte(kernel_pmap, va, &lvl);
if (pte != NULL) {
tpte = pmap_load(pte);
pa = tpte & ~ATTR_MASK;
switch (lvl) {
case 1:
KASSERT((tpte & ATTR_DESCR_MASK) == L1_BLOCK,
("pmap_kextract: Invalid L1 pte found: %lx",
tpte & ATTR_DESCR_MASK));
pa |= (va & L1_OFFSET);
break;
case 2:
KASSERT((tpte & ATTR_DESCR_MASK) == L2_BLOCK,
("pmap_kextract: Invalid L2 pte found: %lx",
tpte & ATTR_DESCR_MASK));
pa |= (va & L2_OFFSET);
break;
case 3:
KASSERT((tpte & ATTR_DESCR_MASK) == L3_PAGE,
("pmap_kextract: Invalid L3 pte found: %lx",
tpte & ATTR_DESCR_MASK));
pa |= (va & L3_OFFSET);
break;
}
}
}
return (pa);
}
/***************************************************
* Low level mapping routines.....
***************************************************/
void
pmap_kenter(vm_offset_t sva, vm_size_t size, vm_paddr_t pa, int mode)
{
pd_entry_t *pde;
pt_entry_t *pte, attr;
vm_offset_t va;
int lvl;
KASSERT((pa & L3_OFFSET) == 0,
("pmap_kenter: Invalid physical address"));
KASSERT((sva & L3_OFFSET) == 0,
("pmap_kenter: Invalid virtual address"));
KASSERT((size & PAGE_MASK) == 0,
("pmap_kenter: Mapping is not page-sized"));
attr = ATTR_DEFAULT | ATTR_IDX(mode) | L3_PAGE;
if (mode == DEVICE_MEMORY)
attr |= ATTR_XN;
va = sva;
while (size != 0) {
pde = pmap_pde(kernel_pmap, va, &lvl);
KASSERT(pde != NULL,
("pmap_kenter: Invalid page entry, va: 0x%lx", va));
KASSERT(lvl == 2, ("pmap_kenter: Invalid level %d", lvl));
pte = pmap_l2_to_l3(pde, va);
pmap_load_store(pte, (pa & ~L3_OFFSET) | attr);
va += PAGE_SIZE;
pa += PAGE_SIZE;
size -= PAGE_SIZE;
}
pmap_invalidate_range(kernel_pmap, sva, va);
}
void
pmap_kenter_device(vm_offset_t sva, vm_size_t size, vm_paddr_t pa)
{
pmap_kenter(sva, size, pa, DEVICE_MEMORY);
}
/*
* Remove a page from the kernel pagetables.
*/
PMAP_INLINE void
pmap_kremove(vm_offset_t va)
{
pt_entry_t *pte;
int lvl;
pte = pmap_pte(kernel_pmap, va, &lvl);
KASSERT(pte != NULL, ("pmap_kremove: Invalid address"));
KASSERT(lvl == 3, ("pmap_kremove: Invalid pte level %d", lvl));
pmap_load_clear(pte);
pmap_invalidate_page(kernel_pmap, va);
}
void
pmap_kremove_device(vm_offset_t sva, vm_size_t size)
{
pt_entry_t *pte;
vm_offset_t va;
int lvl;
KASSERT((sva & L3_OFFSET) == 0,
("pmap_kremove_device: Invalid virtual address"));
KASSERT((size & PAGE_MASK) == 0,
("pmap_kremove_device: Mapping is not page-sized"));
va = sva;
while (size != 0) {
pte = pmap_pte(kernel_pmap, va, &lvl);
KASSERT(pte != NULL, ("Invalid page table, va: 0x%lx", va));
KASSERT(lvl == 3,
("Invalid device pagetable level: %d != 3", lvl));
pmap_load_clear(pte);
va += PAGE_SIZE;
size -= PAGE_SIZE;
}
pmap_invalidate_range(kernel_pmap, sva, va);
}
/*
* Used to map a range of physical addresses into kernel
* virtual address space.
*
* The value passed in '*virt' is a suggested virtual address for
* the mapping. Architectures which can support a direct-mapped
* physical to virtual region can return the appropriate address
* within that region, leaving '*virt' unchanged. Other
* architectures should map the pages starting at '*virt' and
* update '*virt' with the first usable address after the mapped
* region.
*/
vm_offset_t
pmap_map(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end, int prot)
{
return (PHYS_TO_DMAP(start));
}
/*
* Add a list of wired pages to the kva.
* This routine is only used for temporary
* kernel mappings that do not need to have
* page modification or references recorded.
* Note that old mappings are simply written
* over. The page *must* be wired.
* Note: SMP coherent. Uses a ranged shootdown IPI.
*/
void
pmap_qenter(vm_offset_t sva, vm_page_t *ma, int count)
{
pd_entry_t *pde;
pt_entry_t *pte, pa;
vm_offset_t va;
vm_page_t m;
int i, lvl;
va = sva;
for (i = 0; i < count; i++) {
pde = pmap_pde(kernel_pmap, va, &lvl);
KASSERT(pde != NULL,
("pmap_qenter: Invalid page entry, va: 0x%lx", va));
KASSERT(lvl == 2,
("pmap_qenter: Invalid level %d", lvl));
m = ma[i];
pa = VM_PAGE_TO_PHYS(m) | ATTR_DEFAULT | ATTR_AP(ATTR_AP_RW) |
ATTR_IDX(m->md.pv_memattr) | L3_PAGE;
if (m->md.pv_memattr == DEVICE_MEMORY)
pa |= ATTR_XN;
pte = pmap_l2_to_l3(pde, va);
pmap_load_store(pte, pa);
va += L3_SIZE;
}
pmap_invalidate_range(kernel_pmap, sva, va);
}
/*
* This routine tears out page mappings from the
* kernel -- it is meant only for temporary mappings.
*/
void
pmap_qremove(vm_offset_t sva, int count)
{
pt_entry_t *pte;
vm_offset_t va;
int lvl;
KASSERT(sva >= VM_MIN_KERNEL_ADDRESS, ("usermode va %lx", sva));
va = sva;
while (count-- > 0) {
pte = pmap_pte(kernel_pmap, va, &lvl);
KASSERT(lvl == 3,
("pmap_qremove: Invalid pte level %d", lvl));
if (pte != NULL) {
pmap_load_clear(pte);
}
va += PAGE_SIZE;
}
pmap_invalidate_range(kernel_pmap, sva, va);
}
/***************************************************
* Page table page management routines.....
***************************************************/
/*
* Schedule the specified unused page table page to be freed. Specifically,
* add the page to the specified list of pages that will be released to the
* physical memory manager after the TLB has been updated.
*/
static __inline void
pmap_add_delayed_free_list(vm_page_t m, struct spglist *free,
boolean_t set_PG_ZERO)
{
if (set_PG_ZERO)
m->flags |= PG_ZERO;
else
m->flags &= ~PG_ZERO;
SLIST_INSERT_HEAD(free, m, plinks.s.ss);
}
/*
* Decrements a page table page's wire count, which is used to record the
* number of valid page table entries within the page. If the wire count
* drops to zero, then the page table page is unmapped. Returns TRUE if the
* page table page was unmapped and FALSE otherwise.
*/
static inline boolean_t
pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free)
{
--m->wire_count;
if (m->wire_count == 0) {
_pmap_unwire_l3(pmap, va, m, free);
return (TRUE);
} else
return (FALSE);
}
static void
_pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free)
{
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
/*
* unmap the page table page
*/
if (m->pindex >= (NUL2E + NUL1E)) {
/* l1 page */
pd_entry_t *l0;
l0 = pmap_l0(pmap, va);
pmap_load_clear(l0);
} else if (m->pindex >= NUL2E) {
/* l2 page */
pd_entry_t *l1;
l1 = pmap_l1(pmap, va);
pmap_load_clear(l1);
} else {
/* l3 page */
pd_entry_t *l2;
l2 = pmap_l2(pmap, va);
pmap_load_clear(l2);
}
pmap_resident_count_dec(pmap, 1);
if (m->pindex < NUL2E) {
/* We just released an l3, unhold the matching l2 */
pd_entry_t *l1, tl1;
vm_page_t l2pg;
l1 = pmap_l1(pmap, va);
tl1 = pmap_load(l1);
l2pg = PHYS_TO_VM_PAGE(tl1 & ~ATTR_MASK);
pmap_unwire_l3(pmap, va, l2pg, free);
} else if (m->pindex < (NUL2E + NUL1E)) {
/* We just released an l2, unhold the matching l1 */
pd_entry_t *l0, tl0;
vm_page_t l1pg;
l0 = pmap_l0(pmap, va);
tl0 = pmap_load(l0);
l1pg = PHYS_TO_VM_PAGE(tl0 & ~ATTR_MASK);
pmap_unwire_l3(pmap, va, l1pg, free);
}
pmap_invalidate_page(pmap, va);
vm_wire_sub(1);
/*
* Put page on a list so that it is released after
* *ALL* TLB shootdown is done
*/
pmap_add_delayed_free_list(m, free, TRUE);
}
/*
* After removing a page table entry, this routine is used to
* conditionally free the page, and manage the hold/wire counts.
*/
static int
pmap_unuse_pt(pmap_t pmap, vm_offset_t va, pd_entry_t ptepde,
struct spglist *free)
{
vm_page_t mpte;
if (va >= VM_MAXUSER_ADDRESS)
return (0);
KASSERT(ptepde != 0, ("pmap_unuse_pt: ptepde != 0"));
mpte = PHYS_TO_VM_PAGE(ptepde & ~ATTR_MASK);
return (pmap_unwire_l3(pmap, va, mpte, free));
}
void
pmap_pinit0(pmap_t pmap)
{
PMAP_LOCK_INIT(pmap);
bzero(&pmap->pm_stats, sizeof(pmap->pm_stats));
pmap->pm_l0 = kernel_pmap->pm_l0;
pmap->pm_root.rt_root = 0;
}
int
pmap_pinit(pmap_t pmap)
{
vm_paddr_t l0phys;
vm_page_t l0pt;
/*
* allocate the l0 page
*/
while ((l0pt = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL)
vm_wait(NULL);
l0phys = VM_PAGE_TO_PHYS(l0pt);
pmap->pm_l0 = (pd_entry_t *)PHYS_TO_DMAP(l0phys);
if ((l0pt->flags & PG_ZERO) == 0)
pagezero(pmap->pm_l0);
pmap->pm_root.rt_root = 0;
bzero(&pmap->pm_stats, sizeof(pmap->pm_stats));
return (1);
}
/*
* This routine is called if the desired page table page does not exist.
*
* If page table page allocation fails, this routine may sleep before
* returning NULL. It sleeps only if a lock pointer was given.
*
* Note: If a page allocation fails at page table level two or three,
* one or two pages may be held during the wait, only to be released
* afterwards. This conservative approach makes it easy to argue that
* no race conditions can occur.
*/
static vm_page_t
_pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, struct rwlock **lockp)
{
vm_page_t m, l1pg, l2pg;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
/*
* Allocate a page table page.
*/
if ((m = vm_page_alloc(NULL, ptepindex, VM_ALLOC_NOOBJ |
VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL) {
if (lockp != NULL) {
RELEASE_PV_LIST_LOCK(lockp);
PMAP_UNLOCK(pmap);
vm_wait(NULL);
PMAP_LOCK(pmap);
}
/*
* Indicate the need to retry. While waiting, the page table
* page may have been allocated.
*/
return (NULL);
}
if ((m->flags & PG_ZERO) == 0)
pmap_zero_page(m);
/*
* Map the page table page into the process address space, if
* it isn't already there.
*/
if (ptepindex >= (NUL2E + NUL1E)) {
pd_entry_t *l0;
vm_pindex_t l0index;
l0index = ptepindex - (NUL2E + NUL1E);
l0 = &pmap->pm_l0[l0index];
pmap_load_store(l0, VM_PAGE_TO_PHYS(m) | L0_TABLE);
} else if (ptepindex >= NUL2E) {
vm_pindex_t l0index, l1index;
pd_entry_t *l0, *l1;
pd_entry_t tl0;
l1index = ptepindex - NUL2E;
l0index = l1index >> L0_ENTRIES_SHIFT;
l0 = &pmap->pm_l0[l0index];
tl0 = pmap_load(l0);
if (tl0 == 0) {
/* recurse for allocating page dir */
if (_pmap_alloc_l3(pmap, NUL2E + NUL1E + l0index,
lockp) == NULL) {
vm_page_unwire_noq(m);
vm_page_free_zero(m);
return (NULL);
}
} else {
l1pg = PHYS_TO_VM_PAGE(tl0 & ~ATTR_MASK);
l1pg->wire_count++;
}
l1 = (pd_entry_t *)PHYS_TO_DMAP(pmap_load(l0) & ~ATTR_MASK);
l1 = &l1[ptepindex & Ln_ADDR_MASK];
pmap_load_store(l1, VM_PAGE_TO_PHYS(m) | L1_TABLE);
} else {
vm_pindex_t l0index, l1index;
pd_entry_t *l0, *l1, *l2;
pd_entry_t tl0, tl1;
l1index = ptepindex >> Ln_ENTRIES_SHIFT;
l0index = l1index >> L0_ENTRIES_SHIFT;
l0 = &pmap->pm_l0[l0index];
tl0 = pmap_load(l0);
if (tl0 == 0) {
/* recurse for allocating page dir */
if (_pmap_alloc_l3(pmap, NUL2E + l1index,
lockp) == NULL) {
vm_page_unwire_noq(m);
vm_page_free_zero(m);
return (NULL);
}
tl0 = pmap_load(l0);
l1 = (pd_entry_t *)PHYS_TO_DMAP(tl0 & ~ATTR_MASK);
l1 = &l1[l1index & Ln_ADDR_MASK];
} else {
l1 = (pd_entry_t *)PHYS_TO_DMAP(tl0 & ~ATTR_MASK);
l1 = &l1[l1index & Ln_ADDR_MASK];
tl1 = pmap_load(l1);
if (tl1 == 0) {
/* recurse for allocating page dir */
if (_pmap_alloc_l3(pmap, NUL2E + l1index,
lockp) == NULL) {
vm_page_unwire_noq(m);
vm_page_free_zero(m);
return (NULL);
}
} else {
l2pg = PHYS_TO_VM_PAGE(tl1 & ~ATTR_MASK);
l2pg->wire_count++;
}
}
l2 = (pd_entry_t *)PHYS_TO_DMAP(pmap_load(l1) & ~ATTR_MASK);
l2 = &l2[ptepindex & Ln_ADDR_MASK];
pmap_load_store(l2, VM_PAGE_TO_PHYS(m) | L2_TABLE);
}
pmap_resident_count_inc(pmap, 1);
return (m);
}
static vm_page_t
pmap_alloc_l2(pmap_t pmap, vm_offset_t va, struct rwlock **lockp)
{
pd_entry_t *l1;
vm_page_t l2pg;
vm_pindex_t l2pindex;
retry:
l1 = pmap_l1(pmap, va);
if (l1 != NULL && (pmap_load(l1) & ATTR_DESCR_MASK) == L1_TABLE) {
/* Add a reference to the L2 page. */
l2pg = PHYS_TO_VM_PAGE(pmap_load(l1) & ~ATTR_MASK);
l2pg->wire_count++;
} else {
/* Allocate a L2 page. */
l2pindex = pmap_l2_pindex(va) >> Ln_ENTRIES_SHIFT;
l2pg = _pmap_alloc_l3(pmap, NUL2E + l2pindex, lockp);
if (l2pg == NULL && lockp != NULL)
goto retry;
}
return (l2pg);
}
static vm_page_t
pmap_alloc_l3(pmap_t pmap, vm_offset_t va, struct rwlock **lockp)
{
vm_pindex_t ptepindex;
pd_entry_t *pde, tpde;
#ifdef INVARIANTS
pt_entry_t *pte;
#endif
vm_page_t m;
int lvl;
/*
* Calculate the page table page index.
*/
ptepindex = pmap_l2_pindex(va);
retry:
/*
* Get the page directory entry
*/
pde = pmap_pde(pmap, va, &lvl);
/*
* If the page table page is mapped, we just increment the hold count,
* and activate it. If we get a level 2 pde it will point to a level 3
* table.
*/
switch (lvl) {
case -1:
break;
case 0:
#ifdef INVARIANTS
pte = pmap_l0_to_l1(pde, va);
KASSERT(pmap_load(pte) == 0,
("pmap_alloc_l3: TODO: l0 superpages"));
#endif
break;
case 1:
#ifdef INVARIANTS
pte = pmap_l1_to_l2(pde, va);
KASSERT(pmap_load(pte) == 0,
("pmap_alloc_l3: TODO: l1 superpages"));
#endif
break;
case 2:
tpde = pmap_load(pde);
if (tpde != 0) {
m = PHYS_TO_VM_PAGE(tpde & ~ATTR_MASK);
m->wire_count++;
return (m);
}
break;
default:
panic("pmap_alloc_l3: Invalid level %d", lvl);
}
/*
* We get here if the page table page isn't mapped, or if it has been
* deallocated.
*/
m = _pmap_alloc_l3(pmap, ptepindex, lockp);
if (m == NULL && lockp != NULL)
goto retry;
return (m);
}
/***************************************************
* Pmap allocation/deallocation routines.
***************************************************/
/*
* Release any resources held by the given physical map.
* Called when a pmap initialized by pmap_pinit is being released.
* Should only be called if the map contains no valid mappings.
*/
void
pmap_release(pmap_t pmap)
{
vm_page_t m;
KASSERT(pmap->pm_stats.resident_count == 0,
("pmap_release: pmap resident count %ld != 0",
pmap->pm_stats.resident_count));
KASSERT(vm_radix_is_empty(&pmap->pm_root),
("pmap_release: pmap has reserved page table page(s)"));
m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pmap->pm_l0));
vm_page_unwire_noq(m);
vm_page_free_zero(m);
}
static int
kvm_size(SYSCTL_HANDLER_ARGS)
{
unsigned long ksize = VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS;
return (sysctl_handle_long(oidp, &ksize, 0, req));
}
SYSCTL_PROC(_vm, OID_AUTO, kvm_size, CTLTYPE_LONG|CTLFLAG_RD,
0, 0, kvm_size, "LU", "Size of KVM");
static int
kvm_free(SYSCTL_HANDLER_ARGS)
{
unsigned long kfree = VM_MAX_KERNEL_ADDRESS - kernel_vm_end;
return (sysctl_handle_long(oidp, &kfree, 0, req));
}
SYSCTL_PROC(_vm, OID_AUTO, kvm_free, CTLTYPE_LONG|CTLFLAG_RD,
0, 0, kvm_free, "LU", "Amount of KVM free");
/*
* grow the number of kernel page table entries, if needed
*/
void
pmap_growkernel(vm_offset_t addr)
{
vm_paddr_t paddr;
vm_page_t nkpg;
pd_entry_t *l0, *l1, *l2;
mtx_assert(&kernel_map->system_mtx, MA_OWNED);
addr = roundup2(addr, L2_SIZE);
if (addr - 1 >= vm_map_max(kernel_map))
addr = vm_map_max(kernel_map);
while (kernel_vm_end < addr) {
l0 = pmap_l0(kernel_pmap, kernel_vm_end);
KASSERT(pmap_load(l0) != 0,
("pmap_growkernel: No level 0 kernel entry"));
l1 = pmap_l0_to_l1(l0, kernel_vm_end);
if (pmap_load(l1) == 0) {
/* We need a new L1 table entry */
nkpg = vm_page_alloc(NULL, kernel_vm_end >> L1_SHIFT,
VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ |
VM_ALLOC_WIRED | VM_ALLOC_ZERO);
if (nkpg == NULL)
panic("pmap_growkernel: no memory to grow kernel");
if ((nkpg->flags & PG_ZERO) == 0)
pmap_zero_page(nkpg);
paddr = VM_PAGE_TO_PHYS(nkpg);
pmap_load_store(l1, paddr | L1_TABLE);
continue; /* try again */
}
l2 = pmap_l1_to_l2(l1, kernel_vm_end);
if ((pmap_load(l2) & ATTR_AF) != 0) {
kernel_vm_end = (kernel_vm_end + L2_SIZE) & ~L2_OFFSET;
if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) {
kernel_vm_end = vm_map_max(kernel_map);
break;
}
continue;
}
nkpg = vm_page_alloc(NULL, kernel_vm_end >> L2_SHIFT,
VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
VM_ALLOC_ZERO);
if (nkpg == NULL)
panic("pmap_growkernel: no memory to grow kernel");
if ((nkpg->flags & PG_ZERO) == 0)
pmap_zero_page(nkpg);
paddr = VM_PAGE_TO_PHYS(nkpg);
pmap_load_store(l2, paddr | L2_TABLE);
pmap_invalidate_page(kernel_pmap, kernel_vm_end);
kernel_vm_end = (kernel_vm_end + L2_SIZE) & ~L2_OFFSET;
if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) {
kernel_vm_end = vm_map_max(kernel_map);
break;
}
}
}
/***************************************************
* page management routines.
***************************************************/
CTASSERT(sizeof(struct pv_chunk) == PAGE_SIZE);
CTASSERT(_NPCM == 3);
CTASSERT(_NPCPV == 168);
static __inline struct pv_chunk *
pv_to_chunk(pv_entry_t pv)
{
return ((struct pv_chunk *)((uintptr_t)pv & ~(uintptr_t)PAGE_MASK));
}
#define PV_PMAP(pv) (pv_to_chunk(pv)->pc_pmap)
#define PC_FREE0 0xfffffffffffffffful
#define PC_FREE1 0xfffffffffffffffful
#define PC_FREE2 0x000000fffffffffful
static const uint64_t pc_freemask[_NPCM] = { PC_FREE0, PC_FREE1, PC_FREE2 };
#if 0
#ifdef PV_STATS
static int pc_chunk_count, pc_chunk_allocs, pc_chunk_frees, pc_chunk_tryfail;
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_count, CTLFLAG_RD, &pc_chunk_count, 0,
"Current number of pv entry chunks");
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_allocs, CTLFLAG_RD, &pc_chunk_allocs, 0,
"Current number of pv entry chunks allocated");
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_frees, CTLFLAG_RD, &pc_chunk_frees, 0,
"Current number of pv entry chunks frees");
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_tryfail, CTLFLAG_RD, &pc_chunk_tryfail, 0,
"Number of times tried to get a chunk page but failed.");
static long pv_entry_frees, pv_entry_allocs, pv_entry_count;
static int pv_entry_spare;
SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_frees, CTLFLAG_RD, &pv_entry_frees, 0,
"Current number of pv entry frees");
SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_allocs, CTLFLAG_RD, &pv_entry_allocs, 0,
"Current number of pv entry allocs");
SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_count, CTLFLAG_RD, &pv_entry_count, 0,
"Current number of pv entries");
SYSCTL_INT(_vm_pmap, OID_AUTO, pv_entry_spare, CTLFLAG_RD, &pv_entry_spare, 0,
"Current number of spare pv entries");
#endif
#endif /* 0 */
/*
* We are in a serious low memory condition. Resort to
* drastic measures to free some pages so we can allocate
* another pv entry chunk.
*
* Returns NULL if PV entries were reclaimed from the specified pmap.
*
* We do not, however, unmap 2mpages because subsequent accesses will
* allocate per-page pv entries until repromotion occurs, thereby
* exacerbating the shortage of free pv entries.
*/
static vm_page_t
reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp)
{
struct pv_chunk *pc, *pc_marker, *pc_marker_end;
struct pv_chunk_header pc_marker_b, pc_marker_end_b;
struct md_page *pvh;
pd_entry_t *pde;
pmap_t next_pmap, pmap;
pt_entry_t *pte, tpte;
pv_entry_t pv;
vm_offset_t va;
vm_page_t m, m_pc;
struct spglist free;
uint64_t inuse;
int bit, field, freed, lvl;
static int active_reclaims = 0;
PMAP_LOCK_ASSERT(locked_pmap, MA_OWNED);
KASSERT(lockp != NULL, ("reclaim_pv_chunk: lockp is NULL"));
pmap = NULL;
m_pc = NULL;
SLIST_INIT(&free);
bzero(&pc_marker_b, sizeof(pc_marker_b));
bzero(&pc_marker_end_b, sizeof(pc_marker_end_b));
pc_marker = (struct pv_chunk *)&pc_marker_b;
pc_marker_end = (struct pv_chunk *)&pc_marker_end_b;
mtx_lock(&pv_chunks_mutex);
active_reclaims++;
TAILQ_INSERT_HEAD(&pv_chunks, pc_marker, pc_lru);
TAILQ_INSERT_TAIL(&pv_chunks, pc_marker_end, pc_lru);
while ((pc = TAILQ_NEXT(pc_marker, pc_lru)) != pc_marker_end &&
SLIST_EMPTY(&free)) {
next_pmap = pc->pc_pmap;
if (next_pmap == NULL) {
/*
* The next chunk is a marker. However, it is
* not our marker, so active_reclaims must be
* > 1. Consequently, the next_chunk code
* will not rotate the pv_chunks list.
*/
goto next_chunk;
}
mtx_unlock(&pv_chunks_mutex);
/*
* A pv_chunk can only be removed from the pc_lru list
* when both pv_chunks_mutex is owned and the
* corresponding pmap is locked.
*/
if (pmap != next_pmap) {
if (pmap != NULL && pmap != locked_pmap)
PMAP_UNLOCK(pmap);
pmap = next_pmap;
/* Avoid deadlock and lock recursion. */
if (pmap > locked_pmap) {
RELEASE_PV_LIST_LOCK(lockp);
PMAP_LOCK(pmap);
mtx_lock(&pv_chunks_mutex);
continue;
} else if (pmap != locked_pmap) {
if (PMAP_TRYLOCK(pmap)) {
mtx_lock(&pv_chunks_mutex);
continue;
} else {
pmap = NULL; /* pmap is not locked */
mtx_lock(&pv_chunks_mutex);
pc = TAILQ_NEXT(pc_marker, pc_lru);
if (pc == NULL ||
pc->pc_pmap != next_pmap)
continue;
goto next_chunk;
}
}
}
/*
* Destroy every non-wired, 4 KB page mapping in the chunk.
*/
freed = 0;
for (field = 0; field < _NPCM; field++) {
for (inuse = ~pc->pc_map[field] & pc_freemask[field];
inuse != 0; inuse &= ~(1UL << bit)) {
bit = ffsl(inuse) - 1;
pv = &pc->pc_pventry[field * 64 + bit];
va = pv->pv_va;
pde = pmap_pde(pmap, va, &lvl);
if (lvl != 2)
continue;
pte = pmap_l2_to_l3(pde, va);
tpte = pmap_load(pte);
if ((tpte & ATTR_SW_WIRED) != 0)
continue;
tpte = pmap_load_clear(pte);
pmap_invalidate_page(pmap, va);
m = PHYS_TO_VM_PAGE(tpte & ~ATTR_MASK);
if (pmap_page_dirty(tpte))
vm_page_dirty(m);
if ((tpte & ATTR_AF) != 0)
vm_page_aflag_set(m, PGA_REFERENCED);
CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m);
TAILQ_REMOVE(&m->md.pv_list, pv, pv_next);
m->md.pv_gen++;
if (TAILQ_EMPTY(&m->md.pv_list) &&
(m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
if (TAILQ_EMPTY(&pvh->pv_list)) {
vm_page_aflag_clear(m,
PGA_WRITEABLE);
}
}
pc->pc_map[field] |= 1UL << bit;
pmap_unuse_pt(pmap, va, pmap_load(pde), &free);
freed++;
}
}
if (freed == 0) {
mtx_lock(&pv_chunks_mutex);
goto next_chunk;
}
/* Every freed mapping is for a 4 KB page. */
pmap_resident_count_dec(pmap, freed);
PV_STAT(atomic_add_long(&pv_entry_frees, freed));
PV_STAT(atomic_add_int(&pv_entry_spare, freed));
PV_STAT(atomic_subtract_long(&pv_entry_count, freed));
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
if (pc->pc_map[0] == PC_FREE0 && pc->pc_map[1] == PC_FREE1 &&
pc->pc_map[2] == PC_FREE2) {
PV_STAT(atomic_subtract_int(&pv_entry_spare, _NPCPV));
PV_STAT(atomic_subtract_int(&pc_chunk_count, 1));
PV_STAT(atomic_add_int(&pc_chunk_frees, 1));
/* Entire chunk is free; return it. */
m_pc = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pc));
dump_drop_page(m_pc->phys_addr);
mtx_lock(&pv_chunks_mutex);
TAILQ_REMOVE(&pv_chunks, pc, pc_lru);
break;
}
TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list);
mtx_lock(&pv_chunks_mutex);
/* One freed pv entry in locked_pmap is sufficient. */
if (pmap == locked_pmap)
break;
next_chunk:
TAILQ_REMOVE(&pv_chunks, pc_marker, pc_lru);
TAILQ_INSERT_AFTER(&pv_chunks, pc, pc_marker, pc_lru);
if (active_reclaims == 1 && pmap != NULL) {
/*
* Rotate the pv chunks list so that we do not
* scan the same pv chunks that could not be
* freed (because they contained a wired
* and/or superpage mapping) on every
* invocation of reclaim_pv_chunk().
*/
while ((pc = TAILQ_FIRST(&pv_chunks)) != pc_marker) {
MPASS(pc->pc_pmap != NULL);
TAILQ_REMOVE(&pv_chunks, pc, pc_lru);
TAILQ_INSERT_TAIL(&pv_chunks, pc, pc_lru);
}
}
}
TAILQ_REMOVE(&pv_chunks, pc_marker, pc_lru);
TAILQ_REMOVE(&pv_chunks, pc_marker_end, pc_lru);
active_reclaims--;
mtx_unlock(&pv_chunks_mutex);
if (pmap != NULL && pmap != locked_pmap)
PMAP_UNLOCK(pmap);
if (m_pc == NULL && !SLIST_EMPTY(&free)) {
m_pc = SLIST_FIRST(&free);
SLIST_REMOVE_HEAD(&free, plinks.s.ss);
/* Recycle a freed page table page. */
m_pc->wire_count = 1;
vm_wire_add(1);
}
vm_page_free_pages_toq(&free, false);
return (m_pc);
}
/*
* free the pv_entry back to the free list
*/
static void
free_pv_entry(pmap_t pmap, pv_entry_t pv)
{
struct pv_chunk *pc;
int idx, field, bit;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
PV_STAT(atomic_add_long(&pv_entry_frees, 1));
PV_STAT(atomic_add_int(&pv_entry_spare, 1));
PV_STAT(atomic_subtract_long(&pv_entry_count, 1));
pc = pv_to_chunk(pv);
idx = pv - &pc->pc_pventry[0];
field = idx / 64;
bit = idx % 64;
pc->pc_map[field] |= 1ul << bit;
if (pc->pc_map[0] != PC_FREE0 || pc->pc_map[1] != PC_FREE1 ||
pc->pc_map[2] != PC_FREE2) {
/* 98% of the time, pc is already at the head of the list. */
if (__predict_false(pc != TAILQ_FIRST(&pmap->pm_pvchunk))) {
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list);
}
return;
}
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
free_pv_chunk(pc);
}
static void
free_pv_chunk(struct pv_chunk *pc)
{
vm_page_t m;
mtx_lock(&pv_chunks_mutex);
TAILQ_REMOVE(&pv_chunks, pc, pc_lru);
mtx_unlock(&pv_chunks_mutex);
PV_STAT(atomic_subtract_int(&pv_entry_spare, _NPCPV));
PV_STAT(atomic_subtract_int(&pc_chunk_count, 1));
PV_STAT(atomic_add_int(&pc_chunk_frees, 1));
/* entire chunk is free, return it */
m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pc));
dump_drop_page(m->phys_addr);
vm_page_unwire_noq(m);
vm_page_free(m);
}
/*
* Returns a new PV entry, allocating a new PV chunk from the system when
* needed. If this PV chunk allocation fails and a PV list lock pointer was
* given, a PV chunk is reclaimed from an arbitrary pmap. Otherwise, NULL is
* returned.
*
* The given PV list lock may be released.
*/
static pv_entry_t
get_pv_entry(pmap_t pmap, struct rwlock **lockp)
{
int bit, field;
pv_entry_t pv;
struct pv_chunk *pc;
vm_page_t m;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
PV_STAT(atomic_add_long(&pv_entry_allocs, 1));
retry:
pc = TAILQ_FIRST(&pmap->pm_pvchunk);
if (pc != NULL) {
for (field = 0; field < _NPCM; field++) {
if (pc->pc_map[field]) {
bit = ffsl(pc->pc_map[field]) - 1;
break;
}
}
if (field < _NPCM) {
pv = &pc->pc_pventry[field * 64 + bit];
pc->pc_map[field] &= ~(1ul << bit);
/* If this was the last item, move it to tail */
if (pc->pc_map[0] == 0 && pc->pc_map[1] == 0 &&
pc->pc_map[2] == 0) {
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc,
pc_list);
}
PV_STAT(atomic_add_long(&pv_entry_count, 1));
PV_STAT(atomic_subtract_int(&pv_entry_spare, 1));
return (pv);
}
}
/* No free items, allocate another chunk */
m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ |
VM_ALLOC_WIRED);
if (m == NULL) {
if (lockp == NULL) {
PV_STAT(pc_chunk_tryfail++);
return (NULL);
}
m = reclaim_pv_chunk(pmap, lockp);
if (m == NULL)
goto retry;
}
PV_STAT(atomic_add_int(&pc_chunk_count, 1));
PV_STAT(atomic_add_int(&pc_chunk_allocs, 1));
dump_add_page(m->phys_addr);
pc = (void *)PHYS_TO_DMAP(m->phys_addr);
pc->pc_pmap = pmap;
pc->pc_map[0] = PC_FREE0 & ~1ul; /* preallocated bit 0 */
pc->pc_map[1] = PC_FREE1;
pc->pc_map[2] = PC_FREE2;
mtx_lock(&pv_chunks_mutex);
TAILQ_INSERT_TAIL(&pv_chunks, pc, pc_lru);
mtx_unlock(&pv_chunks_mutex);
pv = &pc->pc_pventry[0];
TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list);
PV_STAT(atomic_add_long(&pv_entry_count, 1));
PV_STAT(atomic_add_int(&pv_entry_spare, _NPCPV - 1));
return (pv);
}
/*
* Ensure that the number of spare PV entries in the specified pmap meets or
* exceeds the given count, "needed".
*
* The given PV list lock may be released.
*/
static void
reserve_pv_entries(pmap_t pmap, int needed, struct rwlock **lockp)
{
struct pch new_tail;
struct pv_chunk *pc;
vm_page_t m;
int avail, free;
bool reclaimed;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
KASSERT(lockp != NULL, ("reserve_pv_entries: lockp is NULL"));
/*
* Newly allocated PV chunks must be stored in a private list until
* the required number of PV chunks have been allocated. Otherwise,
* reclaim_pv_chunk() could recycle one of these chunks. In
* contrast, these chunks must be added to the pmap upon allocation.
*/
TAILQ_INIT(&new_tail);
retry:
avail = 0;
TAILQ_FOREACH(pc, &pmap->pm_pvchunk, pc_list) {
bit_count((bitstr_t *)pc->pc_map, 0,
sizeof(pc->pc_map) * NBBY, &free);
if (free == 0)
break;
avail += free;
if (avail >= needed)
break;
}
for (reclaimed = false; avail < needed; avail += _NPCPV) {
m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ |
VM_ALLOC_WIRED);
if (m == NULL) {
m = reclaim_pv_chunk(pmap, lockp);
if (m == NULL)
goto retry;
reclaimed = true;
}
PV_STAT(atomic_add_int(&pc_chunk_count, 1));
PV_STAT(atomic_add_int(&pc_chunk_allocs, 1));
dump_add_page(m->phys_addr);
pc = (void *)PHYS_TO_DMAP(m->phys_addr);
pc->pc_pmap = pmap;
pc->pc_map[0] = PC_FREE0;
pc->pc_map[1] = PC_FREE1;
pc->pc_map[2] = PC_FREE2;
TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list);
TAILQ_INSERT_TAIL(&new_tail, pc, pc_lru);
PV_STAT(atomic_add_int(&pv_entry_spare, _NPCPV));
/*
* The reclaim might have freed a chunk from the current pmap.
* If that chunk contained available entries, we need to
* re-count the number of available entries.
*/
if (reclaimed)
goto retry;
}
if (!TAILQ_EMPTY(&new_tail)) {
mtx_lock(&pv_chunks_mutex);
TAILQ_CONCAT(&pv_chunks, &new_tail, pc_lru);
mtx_unlock(&pv_chunks_mutex);
}
}
/*
* First find and then remove the pv entry for the specified pmap and virtual
* address from the specified pv list. Returns the pv entry if found and NULL
* otherwise. This operation can be performed on pv lists for either 4KB or
* 2MB page mappings.
*/
static __inline pv_entry_t
pmap_pvh_remove(struct md_page *pvh, pmap_t pmap, vm_offset_t va)
{
pv_entry_t pv;
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
if (pmap == PV_PMAP(pv) && va == pv->pv_va) {
TAILQ_REMOVE(&pvh->pv_list, pv, pv_next);
pvh->pv_gen++;
break;
}
}
return (pv);
}
/*
* After demotion from a 2MB page mapping to 512 4KB page mappings,
* destroy the pv entry for the 2MB page mapping and reinstantiate the pv
* entries for each of the 4KB page mappings.
*/
static void
pmap_pv_demote_l2(pmap_t pmap, vm_offset_t va, vm_paddr_t pa,
struct rwlock **lockp)
{
struct md_page *pvh;
struct pv_chunk *pc;
pv_entry_t pv;
vm_offset_t va_last;
vm_page_t m;
int bit, field;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
KASSERT((pa & L2_OFFSET) == 0,
("pmap_pv_demote_l2: pa is not 2mpage aligned"));
CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa);
/*
* Transfer the 2mpage's pv entry for this mapping to the first
* page's pv list. Once this transfer begins, the pv list lock
* must not be released until the last pv entry is reinstantiated.
*/
pvh = pa_to_pvh(pa);
va = va & ~L2_OFFSET;
pv = pmap_pvh_remove(pvh, pmap, va);
KASSERT(pv != NULL, ("pmap_pv_demote_l2: pv not found"));
m = PHYS_TO_VM_PAGE(pa);
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
m->md.pv_gen++;
/* Instantiate the remaining Ln_ENTRIES - 1 pv entries. */
PV_STAT(atomic_add_long(&pv_entry_allocs, Ln_ENTRIES - 1));
va_last = va + L2_SIZE - PAGE_SIZE;
for (;;) {
pc = TAILQ_FIRST(&pmap->pm_pvchunk);
KASSERT(pc->pc_map[0] != 0 || pc->pc_map[1] != 0 ||
pc->pc_map[2] != 0, ("pmap_pv_demote_l2: missing spare"));
for (field = 0; field < _NPCM; field++) {
while (pc->pc_map[field]) {
bit = ffsl(pc->pc_map[field]) - 1;
pc->pc_map[field] &= ~(1ul << bit);
pv = &pc->pc_pventry[field * 64 + bit];
va += PAGE_SIZE;
pv->pv_va = va;
m++;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_pv_demote_l2: page %p is not managed", m));
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
m->md.pv_gen++;
if (va == va_last)
goto out;
}
}
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list);
}
out:
if (pc->pc_map[0] == 0 && pc->pc_map[1] == 0 && pc->pc_map[2] == 0) {
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list);
}
PV_STAT(atomic_add_long(&pv_entry_count, Ln_ENTRIES - 1));
PV_STAT(atomic_subtract_int(&pv_entry_spare, Ln_ENTRIES - 1));
}
/*
* First find and then destroy the pv entry for the specified pmap and virtual
* address. This operation can be performed on pv lists for either 4KB or 2MB
* page mappings.
*/
static void
pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va)
{
pv_entry_t pv;
pv = pmap_pvh_remove(pvh, pmap, va);
KASSERT(pv != NULL, ("pmap_pvh_free: pv not found"));
free_pv_entry(pmap, pv);
}
/*
* Conditionally create the PV entry for a 4KB page mapping if the required
* memory can be allocated without resorting to reclamation.
*/
static boolean_t
pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va, vm_page_t m,
struct rwlock **lockp)
{
pv_entry_t pv;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
/* Pass NULL instead of the lock pointer to disable reclamation. */
if ((pv = get_pv_entry(pmap, NULL)) != NULL) {
pv->pv_va = va;
CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m);
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
m->md.pv_gen++;
return (TRUE);
} else
return (FALSE);
}
/*
* Create the PV entry for a 2MB page mapping. Always returns true unless the
* flag PMAP_ENTER_NORECLAIM is specified. If that flag is specified, returns
* false if the PV entry cannot be allocated without resorting to reclamation.
*/
static bool
pmap_pv_insert_l2(pmap_t pmap, vm_offset_t va, pd_entry_t l2e, u_int flags,
struct rwlock **lockp)
{
struct md_page *pvh;
pv_entry_t pv;
vm_paddr_t pa;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
/* Pass NULL instead of the lock pointer to disable reclamation. */
if ((pv = get_pv_entry(pmap, (flags & PMAP_ENTER_NORECLAIM) != 0 ?
NULL : lockp)) == NULL)
return (false);
pv->pv_va = va;
pa = l2e & ~ATTR_MASK;
CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa);
pvh = pa_to_pvh(pa);
TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next);
pvh->pv_gen++;
return (true);
}
static void
pmap_remove_kernel_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t va)
{
pt_entry_t newl2, oldl2;
vm_page_t ml3;
vm_paddr_t ml3pa;
KASSERT(!VIRT_IN_DMAP(va), ("removing direct mapping of %#lx", va));
KASSERT(pmap == kernel_pmap, ("pmap %p is not kernel_pmap", pmap));
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
ml3 = pmap_remove_pt_page(pmap, va);
if (ml3 == NULL)
panic("pmap_remove_kernel_l2: Missing pt page");
ml3pa = VM_PAGE_TO_PHYS(ml3);
newl2 = ml3pa | L2_TABLE;
/*
* Initialize the page table page.
*/
pagezero((void *)PHYS_TO_DMAP(ml3pa));
/*
* Demote the mapping. The caller must have already invalidated the
* mapping (i.e., the "break" in break-before-make).
*/
oldl2 = pmap_load_store(l2, newl2);
KASSERT(oldl2 == 0, ("%s: found existing mapping at %p: %#lx",
__func__, l2, oldl2));
}
/*
* pmap_remove_l2: Unmap a level 2 superpage (block) mapping.
*/
static int
pmap_remove_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t sva,
pd_entry_t l1e, struct spglist *free, struct rwlock **lockp)
{
struct md_page *pvh;
pt_entry_t old_l2;
vm_offset_t eva, va;
vm_page_t m, ml3;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
KASSERT((sva & L2_OFFSET) == 0, ("pmap_remove_l2: sva is not aligned"));
old_l2 = pmap_load_clear(l2);
KASSERT((old_l2 & ATTR_DESCR_MASK) == L2_BLOCK,
("pmap_remove_l2: L2e %lx is not a block mapping", old_l2));
pmap_invalidate_range(pmap, sva, sva + L2_SIZE);
if (old_l2 & ATTR_SW_WIRED)
pmap->pm_stats.wired_count -= L2_SIZE / PAGE_SIZE;
pmap_resident_count_dec(pmap, L2_SIZE / PAGE_SIZE);
if (old_l2 & ATTR_SW_MANAGED) {
CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, old_l2 & ~ATTR_MASK);
pvh = pa_to_pvh(old_l2 & ~ATTR_MASK);
pmap_pvh_free(pvh, pmap, sva);
eva = sva + L2_SIZE;
for (va = sva, m = PHYS_TO_VM_PAGE(old_l2 & ~ATTR_MASK);
va < eva; va += PAGE_SIZE, m++) {
if (pmap_page_dirty(old_l2))
vm_page_dirty(m);
if (old_l2 & ATTR_AF)
vm_page_aflag_set(m, PGA_REFERENCED);
if (TAILQ_EMPTY(&m->md.pv_list) &&
TAILQ_EMPTY(&pvh->pv_list))
vm_page_aflag_clear(m, PGA_WRITEABLE);
}
}
if (pmap == kernel_pmap) {
pmap_remove_kernel_l2(pmap, l2, sva);
} else {
ml3 = pmap_remove_pt_page(pmap, sva);
if (ml3 != NULL) {
pmap_resident_count_dec(pmap, 1);
KASSERT(ml3->wire_count == NL3PG,
("pmap_remove_l2: l3 page wire count error"));
ml3->wire_count = 1;
vm_page_unwire_noq(ml3);
pmap_add_delayed_free_list(ml3, free, FALSE);
}
}
return (pmap_unuse_pt(pmap, sva, l1e, free));
}
/*
* pmap_remove_l3: Unmap a single page within a process's address space.
*/
static int
pmap_remove_l3(pmap_t pmap, pt_entry_t *l3, vm_offset_t va,
pd_entry_t l2e, struct spglist *free, struct rwlock **lockp)
{
struct md_page *pvh;
pt_entry_t old_l3;
vm_page_t m;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
old_l3 = pmap_load_clear(l3);
pmap_invalidate_page(pmap, va);
if (old_l3 & ATTR_SW_WIRED)
pmap->pm_stats.wired_count -= 1;
pmap_resident_count_dec(pmap, 1);
if (old_l3 & ATTR_SW_MANAGED) {
m = PHYS_TO_VM_PAGE(old_l3 & ~ATTR_MASK);
if (pmap_page_dirty(old_l3))
vm_page_dirty(m);
if (old_l3 & ATTR_AF)
vm_page_aflag_set(m, PGA_REFERENCED);
CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m);
pmap_pvh_free(&m->md, pmap, va);
if (TAILQ_EMPTY(&m->md.pv_list) &&
(m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
if (TAILQ_EMPTY(&pvh->pv_list))
vm_page_aflag_clear(m, PGA_WRITEABLE);
}
}
return (pmap_unuse_pt(pmap, va, l2e, free));
}
/*
* Remove the given range of addresses from the specified map.
*
* It is assumed that the start and end are properly
* rounded to the page size.
*/
void
pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
struct rwlock *lock;
vm_offset_t va, va_next;
pd_entry_t *l0, *l1, *l2;
pt_entry_t l3_paddr, *l3;
struct spglist free;
/*
* Perform an unsynchronized read of the resident count. This is safe
* because a stale value only affects this early-return optimization.
*/
if (pmap->pm_stats.resident_count == 0)
return;
SLIST_INIT(&free);
PMAP_LOCK(pmap);
lock = NULL;
for (; sva < eva; sva = va_next) {
if (pmap->pm_stats.resident_count == 0)
break;
l0 = pmap_l0(pmap, sva);
if (pmap_load(l0) == 0) {
va_next = (sva + L0_SIZE) & ~L0_OFFSET;
if (va_next < sva)
va_next = eva;
continue;
}
l1 = pmap_l0_to_l1(l0, sva);
if (pmap_load(l1) == 0) {
va_next = (sva + L1_SIZE) & ~L1_OFFSET;
if (va_next < sva)
va_next = eva;
continue;
}
/*
* Calculate index for next page table.
*/
va_next = (sva + L2_SIZE) & ~L2_OFFSET;
if (va_next < sva)
va_next = eva;
l2 = pmap_l1_to_l2(l1, sva);
if (l2 == NULL)
continue;
l3_paddr = pmap_load(l2);
if ((l3_paddr & ATTR_DESCR_MASK) == L2_BLOCK) {
if (sva + L2_SIZE == va_next && eva >= va_next) {
pmap_remove_l2(pmap, l2, sva, pmap_load(l1),
&free, &lock);
continue;
} else if (pmap_demote_l2_locked(pmap, l2,
sva & ~L2_OFFSET, &lock) == NULL)
continue;
l3_paddr = pmap_load(l2);
}
/*
* Weed out invalid mappings.
*/
if ((l3_paddr & ATTR_DESCR_MASK) != L2_TABLE)
continue;
/*
* Limit our scan to either the end of the va represented
* by the current page table page, or to the end of the
* range being removed.
*/
if (va_next > eva)
va_next = eva;
va = va_next;
for (l3 = pmap_l2_to_l3(l2, sva); sva != va_next; l3++,
sva += L3_SIZE) {
if (l3 == NULL)
panic("l3 == NULL");
if (pmap_load(l3) == 0) {
if (va != va_next) {
pmap_invalidate_range(pmap, va, sva);
va = va_next;
}
continue;
}
if (va == va_next)
va = sva;
if (pmap_remove_l3(pmap, l3, sva, l3_paddr, &free,
&lock)) {
sva += L3_SIZE;
break;
}
}
if (va != va_next)
pmap_invalidate_range(pmap, va, sva);
}
if (lock != NULL)
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
vm_page_free_pages_toq(&free, false);
}
/*
* Routine: pmap_remove_all
* Function:
* Removes this physical page from
* all physical maps in which it resides.
* Reflects back modify bits to the pager.
*
* Notes:
* Original versions of this routine were very
* inefficient because they iteratively called
* pmap_remove (slow...)
*/
void
pmap_remove_all(vm_page_t m)
{
struct md_page *pvh;
pv_entry_t pv;
pmap_t pmap;
struct rwlock *lock;
pd_entry_t *pde, tpde;
pt_entry_t *pte, tpte;
vm_offset_t va;
struct spglist free;
int lvl, pvh_gen, md_gen;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_remove_all: page %p is not managed", m));
SLIST_INIT(&free);
lock = VM_PAGE_TO_PV_LIST_LOCK(m);
pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy :
pa_to_pvh(VM_PAGE_TO_PHYS(m));
retry:
rw_wlock(lock);
while ((pv = TAILQ_FIRST(&pvh->pv_list)) != NULL) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
pvh_gen = pvh->pv_gen;
rw_wunlock(lock);
PMAP_LOCK(pmap);
rw_wlock(lock);
if (pvh_gen != pvh->pv_gen) {
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
goto retry;
}
}
va = pv->pv_va;
pte = pmap_pte(pmap, va, &lvl);
KASSERT(pte != NULL,
("pmap_remove_all: no page table entry found"));
KASSERT(lvl == 2,
("pmap_remove_all: invalid pte level %d", lvl));
pmap_demote_l2_locked(pmap, pte, va, &lock);
PMAP_UNLOCK(pmap);
}
while ((pv = TAILQ_FIRST(&m->md.pv_list)) != NULL) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
pvh_gen = pvh->pv_gen;
md_gen = m->md.pv_gen;
rw_wunlock(lock);
PMAP_LOCK(pmap);
rw_wlock(lock);
if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) {
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
goto retry;
}
}
pmap_resident_count_dec(pmap, 1);
pde = pmap_pde(pmap, pv->pv_va, &lvl);
KASSERT(pde != NULL,
("pmap_remove_all: no page directory entry found"));
KASSERT(lvl == 2,
("pmap_remove_all: invalid pde level %d", lvl));
tpde = pmap_load(pde);
pte = pmap_l2_to_l3(pde, pv->pv_va);
tpte = pmap_load(pte);
pmap_load_clear(pte);
pmap_invalidate_page(pmap, pv->pv_va);
if (tpte & ATTR_SW_WIRED)
pmap->pm_stats.wired_count--;
if ((tpte & ATTR_AF) != 0)
vm_page_aflag_set(m, PGA_REFERENCED);
/*
* Update the vm_page_t clean and reference bits.
*/
if (pmap_page_dirty(tpte))
vm_page_dirty(m);
pmap_unuse_pt(pmap, pv->pv_va, tpde, &free);
TAILQ_REMOVE(&m->md.pv_list, pv, pv_next);
m->md.pv_gen++;
free_pv_entry(pmap, pv);
PMAP_UNLOCK(pmap);
}
vm_page_aflag_clear(m, PGA_WRITEABLE);
rw_wunlock(lock);
vm_page_free_pages_toq(&free, false);
}
/*
* Set the physical protection on the
* specified range of this map as requested.
*/
void
pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot)
{
vm_offset_t va, va_next;
pd_entry_t *l0, *l1, *l2;
pt_entry_t *l3p, l3, nbits;
KASSERT((prot & ~VM_PROT_ALL) == 0, ("invalid prot %x", prot));
if (prot == VM_PROT_NONE) {
pmap_remove(pmap, sva, eva);
return;
}
if ((prot & (VM_PROT_WRITE | VM_PROT_EXECUTE)) ==
(VM_PROT_WRITE | VM_PROT_EXECUTE))
return;
PMAP_LOCK(pmap);
for (; sva < eva; sva = va_next) {
l0 = pmap_l0(pmap, sva);
if (pmap_load(l0) == 0) {
va_next = (sva + L0_SIZE) & ~L0_OFFSET;
if (va_next < sva)
va_next = eva;
continue;
}
l1 = pmap_l0_to_l1(l0, sva);
if (pmap_load(l1) == 0) {
va_next = (sva + L1_SIZE) & ~L1_OFFSET;
if (va_next < sva)
va_next = eva;
continue;
}
va_next = (sva + L2_SIZE) & ~L2_OFFSET;
if (va_next < sva)
va_next = eva;
l2 = pmap_l1_to_l2(l1, sva);
if (pmap_load(l2) == 0)
continue;
if ((pmap_load(l2) & ATTR_DESCR_MASK) == L2_BLOCK) {
l3p = pmap_demote_l2(pmap, l2, sva);
if (l3p == NULL)
continue;
}
KASSERT((pmap_load(l2) & ATTR_DESCR_MASK) == L2_TABLE,
("pmap_protect: Invalid L2 entry after demotion"));
if (va_next > eva)
va_next = eva;
va = va_next;
for (l3p = pmap_l2_to_l3(l2, sva); sva != va_next; l3p++,
sva += L3_SIZE) {
l3 = pmap_load(l3p);
if (!pmap_l3_valid(l3))
continue;
nbits = 0;
if ((prot & VM_PROT_WRITE) == 0) {
if ((l3 & ATTR_SW_MANAGED) &&
pmap_page_dirty(l3)) {
vm_page_dirty(PHYS_TO_VM_PAGE(l3 &
~ATTR_MASK));
}
nbits |= ATTR_AP(ATTR_AP_RO);
}
if ((prot & VM_PROT_EXECUTE) == 0)
nbits |= ATTR_XN;
pmap_set(l3p, nbits);
/* XXX: Use pmap_invalidate_range */
pmap_invalidate_page(pmap, sva);
}
}
PMAP_UNLOCK(pmap);
}
/*
* Inserts the specified page table page into the specified pmap's collection
* of idle page table pages. Each of a pmap's page table pages is responsible
* for mapping a distinct range of virtual addresses. The pmap's collection is
* ordered by this virtual address range.
*/
static __inline int
pmap_insert_pt_page(pmap_t pmap, vm_page_t mpte)
{
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
return (vm_radix_insert(&pmap->pm_root, mpte));
}
/*
* Removes the page table page mapping the specified virtual address from the
* specified pmap's collection of idle page table pages, and returns it.
* Otherwise, returns NULL if there is no page table page corresponding to the
* specified virtual address.
*/
static __inline vm_page_t
pmap_remove_pt_page(pmap_t pmap, vm_offset_t va)
{
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
return (vm_radix_remove(&pmap->pm_root, pmap_l2_pindex(va)));
}
/*
* Performs a break-before-make update of a pmap entry. This is needed when
* either promoting or demoting pages to ensure the TLB doesn't get into an
* inconsistent state.
*/
static void
pmap_update_entry(pmap_t pmap, pd_entry_t *pte, pd_entry_t newpte,
vm_offset_t va, vm_size_t size)
{
register_t intr;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
/*
* Ensure we don't get switched out with the page table in an
* inconsistent state. We also need to ensure no interrupts fire
* as they may make use of an address we are about to invalidate.
*/
intr = intr_disable();
critical_enter();
/* Clear the old mapping */
pmap_load_clear(pte);
pmap_invalidate_range_nopin(pmap, va, va + size);
/* Create the new mapping */
pmap_load_store(pte, newpte);
dsb(ishst);
critical_exit();
intr_restore(intr);
}
#if VM_NRESERVLEVEL > 0
/*
* After promotion from 512 4KB page mappings to a single 2MB page mapping,
* replace the many pv entries for the 4KB page mappings by a single pv entry
* for the 2MB page mapping.
*/
static void
pmap_pv_promote_l2(pmap_t pmap, vm_offset_t va, vm_paddr_t pa,
struct rwlock **lockp)
{
struct md_page *pvh;
pv_entry_t pv;
vm_offset_t va_last;
vm_page_t m;
KASSERT((pa & L2_OFFSET) == 0,
("pmap_pv_promote_l2: pa is not 2mpage aligned"));
CHANGE_PV_LIST_LOCK_TO_PHYS(lockp, pa);
/*
* Transfer the first page's pv entry for this mapping to the 2mpage's
* pv list. Aside from avoiding the cost of a call to get_pv_entry(),
* a transfer avoids the possibility that get_pv_entry() calls
* reclaim_pv_chunk() and that reclaim_pv_chunk() removes one of the
* mappings that is being promoted.
*/
m = PHYS_TO_VM_PAGE(pa);
va = va & ~L2_OFFSET;
pv = pmap_pvh_remove(&m->md, pmap, va);
KASSERT(pv != NULL, ("pmap_pv_promote_l2: pv not found"));
pvh = pa_to_pvh(pa);
TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next);
pvh->pv_gen++;
/* Free the remaining NPTEPG - 1 pv entries. */
va_last = va + L2_SIZE - PAGE_SIZE;
do {
m++;
va += PAGE_SIZE;
pmap_pvh_free(&m->md, pmap, va);
} while (va < va_last);
}
/*
* Tries to promote the 512, contiguous 4KB page mappings that are within a
* single level 2 table entry to a single 2MB page mapping. For promotion
* to occur, two conditions must be met: (1) the 4KB page mappings must map
* aligned, contiguous physical memory and (2) the 4KB page mappings must have
* identical characteristics.
*/
static void
pmap_promote_l2(pmap_t pmap, pd_entry_t *l2, vm_offset_t va,
struct rwlock **lockp)
{
pt_entry_t *firstl3, *l3, newl2, oldl3, pa;
vm_page_t mpte;
vm_offset_t sva;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
sva = va & ~L2_OFFSET;
firstl3 = pmap_l2_to_l3(l2, sva);
newl2 = pmap_load(firstl3);
/* Check that the alignment is valid */
if (((newl2 & ~ATTR_MASK) & L2_OFFSET) != 0) {
atomic_add_long(&pmap_l2_p_failures, 1);
CTR2(KTR_PMAP, "pmap_promote_l2: failure for va %#lx"
" in pmap %p", va, pmap);
return;
}
pa = newl2 + L2_SIZE - PAGE_SIZE;
for (l3 = firstl3 + NL3PG - 1; l3 > firstl3; l3--) {
oldl3 = pmap_load(l3);
if (oldl3 != pa) {
atomic_add_long(&pmap_l2_p_failures, 1);
CTR2(KTR_PMAP, "pmap_promote_l2: failure for va %#lx"
" in pmap %p", va, pmap);
return;
}
pa -= PAGE_SIZE;
}
/*
* Save the page table page in its current state until the L2
* mapping the superpage is demoted by pmap_demote_l2() or
* destroyed by pmap_remove_l3().
*/
mpte = PHYS_TO_VM_PAGE(pmap_load(l2) & ~ATTR_MASK);
KASSERT(mpte >= vm_page_array &&
mpte < &vm_page_array[vm_page_array_size],
("pmap_promote_l2: page table page is out of range"));
KASSERT(mpte->pindex == pmap_l2_pindex(va),
("pmap_promote_l2: page table page's pindex is wrong"));
if (pmap_insert_pt_page(pmap, mpte)) {
atomic_add_long(&pmap_l2_p_failures, 1);
CTR2(KTR_PMAP,
"pmap_promote_l2: failure for va %#lx in pmap %p", va,
pmap);
return;
}
if ((newl2 & ATTR_SW_MANAGED) != 0)
pmap_pv_promote_l2(pmap, va, newl2 & ~ATTR_MASK, lockp);
newl2 &= ~ATTR_DESCR_MASK;
newl2 |= L2_BLOCK;
pmap_update_entry(pmap, l2, newl2, sva, L2_SIZE);
atomic_add_long(&pmap_l2_promotions, 1);
CTR2(KTR_PMAP, "pmap_promote_l2: success for va %#lx in pmap %p", va,
pmap);
}
#endif /* VM_NRESERVLEVEL > 0 */
/*
* Insert the given physical page (p) at
* the specified virtual address (v) in the
* target physical map with the protection requested.
*
* If specified, the page will be wired down, meaning
* that the related pte can not be reclaimed.
*
* NB: This is the only routine which MAY NOT lazy-evaluate
* or lose information. That is, this routine must actually
* insert this page into the given map NOW.
*/
int
pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot,
u_int flags, int8_t psind)
{
struct rwlock *lock;
pd_entry_t *pde;
pt_entry_t new_l3, orig_l3;
pt_entry_t *l2, *l3;
pv_entry_t pv;
vm_paddr_t opa, pa, l1_pa, l2_pa, l3_pa;
vm_page_t mpte, om, l1_m, l2_m, l3_m;
boolean_t nosleep;
int lvl, rv;
va = trunc_page(va);
if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m))
VM_OBJECT_ASSERT_LOCKED(m->object);
pa = VM_PAGE_TO_PHYS(m);
new_l3 = (pt_entry_t)(pa | ATTR_DEFAULT | ATTR_IDX(m->md.pv_memattr) |
L3_PAGE);
if ((prot & VM_PROT_WRITE) == 0)
new_l3 |= ATTR_AP(ATTR_AP_RO);
if ((prot & VM_PROT_EXECUTE) == 0 || m->md.pv_memattr == DEVICE_MEMORY)
new_l3 |= ATTR_XN;
if ((flags & PMAP_ENTER_WIRED) != 0)
new_l3 |= ATTR_SW_WIRED;
if (va < VM_MAXUSER_ADDRESS)
new_l3 |= ATTR_AP(ATTR_AP_USER) | ATTR_PXN;
if ((m->oflags & VPO_UNMANAGED) == 0)
new_l3 |= ATTR_SW_MANAGED;
CTR2(KTR_PMAP, "pmap_enter: %.16lx -> %.16lx", va, pa);
lock = NULL;
mpte = NULL;
PMAP_LOCK(pmap);
if (psind == 1) {
/* Assert the required virtual and physical alignment. */
KASSERT((va & L2_OFFSET) == 0, ("pmap_enter: va unaligned"));
KASSERT(m->psind > 0, ("pmap_enter: m->psind < psind"));
rv = pmap_enter_l2(pmap, va, (new_l3 & ~L3_PAGE) | L2_BLOCK,
flags, m, &lock);
goto out;
}
pde = pmap_pde(pmap, va, &lvl);
if (pde != NULL && lvl == 1) {
l2 = pmap_l1_to_l2(pde, va);
if ((pmap_load(l2) & ATTR_DESCR_MASK) == L2_BLOCK &&
(l3 = pmap_demote_l2_locked(pmap, l2, va & ~L2_OFFSET,
&lock)) != NULL) {
l3 = &l3[pmap_l3_index(va)];
if (va < VM_MAXUSER_ADDRESS) {
mpte = PHYS_TO_VM_PAGE(
pmap_load(l2) & ~ATTR_MASK);
mpte->wire_count++;
}
goto havel3;
}
}
if (va < VM_MAXUSER_ADDRESS) {
nosleep = (flags & PMAP_ENTER_NOSLEEP) != 0;
mpte = pmap_alloc_l3(pmap, va, nosleep ? NULL : &lock);
if (mpte == NULL && nosleep) {
CTR0(KTR_PMAP, "pmap_enter: mpte == NULL");
if (lock != NULL)
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
return (KERN_RESOURCE_SHORTAGE);
}
pde = pmap_pde(pmap, va, &lvl);
KASSERT(pde != NULL,
("pmap_enter: Invalid page entry, va: 0x%lx", va));
KASSERT(lvl == 2,
("pmap_enter: Invalid level %d", lvl));
} else {
/*
* If we get a level 2 pde it must point to a level 3 entry
* otherwise we will need to create the intermediate tables
*/
if (lvl < 2) {
switch (lvl) {
default:
case -1:
/* Get the l0 pde to update */
pde = pmap_l0(pmap, va);
KASSERT(pde != NULL, ("..."));
l1_m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
VM_ALLOC_ZERO);
if (l1_m == NULL)
panic("pmap_enter: l1 pte_m == NULL");
if ((l1_m->flags & PG_ZERO) == 0)
pmap_zero_page(l1_m);
l1_pa = VM_PAGE_TO_PHYS(l1_m);
pmap_load_store(pde, l1_pa | L0_TABLE);
/* FALLTHROUGH */
case 0:
/* Get the l1 pde to update */
pde = pmap_l1_to_l2(pde, va);
KASSERT(pde != NULL, ("..."));
l2_m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
VM_ALLOC_ZERO);
if (l2_m == NULL)
panic("pmap_enter: l2 pte_m == NULL");
if ((l2_m->flags & PG_ZERO) == 0)
pmap_zero_page(l2_m);
l2_pa = VM_PAGE_TO_PHYS(l2_m);
pmap_load_store(pde, l2_pa | L1_TABLE);
/* FALLTHROUGH */
case 1:
/* Get the l2 pde to update */
pde = pmap_l1_to_l2(pde, va);
KASSERT(pde != NULL, ("..."));
l3_m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
VM_ALLOC_ZERO);
if (l3_m == NULL)
panic("pmap_enter: l3 pte_m == NULL");
if ((l3_m->flags & PG_ZERO) == 0)
pmap_zero_page(l3_m);
l3_pa = VM_PAGE_TO_PHYS(l3_m);
pmap_load_store(pde, l3_pa | L2_TABLE);
break;
}
}
}
l3 = pmap_l2_to_l3(pde, va);
havel3:
orig_l3 = pmap_load(l3);
opa = orig_l3 & ~ATTR_MASK;
pv = NULL;
/*
* Is the specified virtual address already mapped?
*/
if (pmap_l3_valid(orig_l3)) {
/*
* Wiring change, just update stats. We don't worry about
* wiring PT pages as they remain resident as long as there
* are valid mappings in them. Hence, if a user page is wired,
* the PT page will be also.
*/
if ((flags & PMAP_ENTER_WIRED) != 0 &&
(orig_l3 & ATTR_SW_WIRED) == 0)
pmap->pm_stats.wired_count++;
else if ((flags & PMAP_ENTER_WIRED) == 0 &&
(orig_l3 & ATTR_SW_WIRED) != 0)
pmap->pm_stats.wired_count--;
/*
* Remove the extra PT page reference.
*/
if (mpte != NULL) {
mpte->wire_count--;
KASSERT(mpte->wire_count > 0,
("pmap_enter: missing reference to page table page,"
" va: 0x%lx", va));
}
/*
* Has the physical page changed?
*/
if (opa == pa) {
/*
* No, might be a protection or wiring change.
*/
if ((orig_l3 & ATTR_SW_MANAGED) != 0) {
if ((new_l3 & ATTR_AP(ATTR_AP_RW)) ==
ATTR_AP(ATTR_AP_RW)) {
vm_page_aflag_set(m, PGA_WRITEABLE);
}
}
goto validate;
}
/*
* The physical page has changed.
*/
(void)pmap_load_clear(l3);
KASSERT((orig_l3 & ~ATTR_MASK) == opa,
("pmap_enter: unexpected pa update for %#lx", va));
if ((orig_l3 & ATTR_SW_MANAGED) != 0) {
om = PHYS_TO_VM_PAGE(opa);
/*
* The pmap lock is sufficient to synchronize with
* concurrent calls to pmap_page_test_mappings() and
* pmap_ts_referenced().
*/
if (pmap_page_dirty(orig_l3))
vm_page_dirty(om);
if ((orig_l3 & ATTR_AF) != 0)
vm_page_aflag_set(om, PGA_REFERENCED);
CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, opa);
pv = pmap_pvh_remove(&om->md, pmap, va);
if ((m->oflags & VPO_UNMANAGED) != 0)
free_pv_entry(pmap, pv);
if ((om->aflags & PGA_WRITEABLE) != 0 &&
TAILQ_EMPTY(&om->md.pv_list) &&
((om->flags & PG_FICTITIOUS) != 0 ||
TAILQ_EMPTY(&pa_to_pvh(opa)->pv_list)))
vm_page_aflag_clear(om, PGA_WRITEABLE);
}
pmap_invalidate_page(pmap, va);
orig_l3 = 0;
} else {
/*
* Increment the counters.
*/
if ((new_l3 & ATTR_SW_WIRED) != 0)
pmap->pm_stats.wired_count++;
pmap_resident_count_inc(pmap, 1);
}
/*
* Enter on the PV list if part of our managed memory.
*/
if ((m->oflags & VPO_UNMANAGED) == 0) {
if (pv == NULL) {
pv = get_pv_entry(pmap, &lock);
pv->pv_va = va;
}
CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, pa);
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
m->md.pv_gen++;
if ((new_l3 & ATTR_AP_RW_BIT) == ATTR_AP(ATTR_AP_RW))
vm_page_aflag_set(m, PGA_WRITEABLE);
}
validate:
/*
* Sync icache if exec permission and attribute VM_MEMATTR_WRITE_BACK
* is set. Do it now, before the mapping is stored and made
* valid for hardware table walk. If done later, then others can
* access this page before the caches are properly synced.
* Don't do it for kernel memory which is mapped with exec
* permission even if the memory isn't going to hold executable
* code. The only time an icache sync is needed is after a
* kernel module is loaded and the relocation info is processed;
* that is done in elf_cpu_load_file().
*/
if ((prot & VM_PROT_EXECUTE) && pmap != kernel_pmap &&
m->md.pv_memattr == VM_MEMATTR_WRITE_BACK &&
(opa != pa || (orig_l3 & ATTR_XN)))
cpu_icache_sync_range(PHYS_TO_DMAP(pa), PAGE_SIZE);
/*
* Update the L3 entry
*/
if (pmap_l3_valid(orig_l3)) {
KASSERT(opa == pa, ("pmap_enter: invalid update"));
if ((orig_l3 & ~ATTR_AF) != (new_l3 & ~ATTR_AF)) {
/* same PA, different attributes */
pmap_load_store(l3, new_l3);
pmap_invalidate_page(pmap, va);
if (pmap_page_dirty(orig_l3) &&
(orig_l3 & ATTR_SW_MANAGED) != 0)
vm_page_dirty(m);
} else {
/*
* orig_l3 == new_l3
* This can happen if multiple threads simultaneously
* access a not yet mapped page. This is bad for
* performance since it can cause a full
* demotion-NOP-promotion cycle.
* Other possible reasons are:
* - the VM and pmap memory layouts have diverged
* - a TLB flush is missing somewhere and the CPU doesn't
* see the actual mapping.
*/
CTR4(KTR_PMAP, "%s: already mapped page - "
"pmap %p va %#lx pte %#lx",
__func__, pmap, va, new_l3);
}
} else {
/* New mapping */
pmap_load_store(l3, new_l3);
dsb(ishst);
}
#if VM_NRESERVLEVEL > 0
if (pmap != pmap_kernel() &&
(mpte == NULL || mpte->wire_count == NL3PG) &&
pmap_ps_enabled(pmap) &&
(m->flags & PG_FICTITIOUS) == 0 &&
vm_reserv_level_iffullpop(m) == 0) {
pmap_promote_l2(pmap, pde, va, &lock);
}
#endif
rv = KERN_SUCCESS;
out:
if (lock != NULL)
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
return (rv);
}
/*
* Tries to create a read- and/or execute-only 2MB page mapping. Returns true
* if successful. Returns false if (1) a page table page cannot be allocated
* without sleeping, (2) a mapping already exists at the specified virtual
* address, or (3) a PV entry cannot be allocated without reclaiming another
* PV entry.
*/
static bool
pmap_enter_2mpage(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot,
struct rwlock **lockp)
{
pd_entry_t new_l2;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
new_l2 = (pd_entry_t)(VM_PAGE_TO_PHYS(m) | ATTR_DEFAULT |
ATTR_IDX(m->md.pv_memattr) | ATTR_AP(ATTR_AP_RO) | L2_BLOCK);
if ((m->oflags & VPO_UNMANAGED) == 0)
new_l2 |= ATTR_SW_MANAGED;
if ((prot & VM_PROT_EXECUTE) == 0 || m->md.pv_memattr == DEVICE_MEMORY)
new_l2 |= ATTR_XN;
if (va < VM_MAXUSER_ADDRESS)
new_l2 |= ATTR_AP(ATTR_AP_USER) | ATTR_PXN;
return (pmap_enter_l2(pmap, va, new_l2, PMAP_ENTER_NOSLEEP |
PMAP_ENTER_NOREPLACE | PMAP_ENTER_NORECLAIM, NULL, lockp) ==
KERN_SUCCESS);
}
/*
* Tries to create the specified 2MB page mapping. Returns KERN_SUCCESS if
* the mapping was created, and either KERN_FAILURE or KERN_RESOURCE_SHORTAGE
* otherwise. Returns KERN_FAILURE if PMAP_ENTER_NOREPLACE was specified and
* a mapping already exists at the specified virtual address. Returns
* KERN_RESOURCE_SHORTAGE if PMAP_ENTER_NOSLEEP was specified and a page table
* page allocation failed. Returns KERN_RESOURCE_SHORTAGE if
* PMAP_ENTER_NORECLAIM was specified and a PV entry allocation failed.
*
* The parameter "m" is only used when creating a managed, writeable mapping.
*/
static int
pmap_enter_l2(pmap_t pmap, vm_offset_t va, pd_entry_t new_l2, u_int flags,
vm_page_t m, struct rwlock **lockp)
{
struct spglist free;
pd_entry_t *l2, *l3, old_l2;
vm_offset_t sva;
vm_page_t l2pg, mt;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
if ((l2pg = pmap_alloc_l2(pmap, va, (flags & PMAP_ENTER_NOSLEEP) != 0 ?
NULL : lockp)) == NULL) {
CTR2(KTR_PMAP, "pmap_enter_l2: failure for va %#lx in pmap %p",
va, pmap);
return (KERN_RESOURCE_SHORTAGE);
}
l2 = (pd_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(l2pg));
l2 = &l2[pmap_l2_index(va)];
if ((old_l2 = pmap_load(l2)) != 0) {
KASSERT(l2pg->wire_count > 1,
("pmap_enter_l2: l2pg's wire count is too low"));
if ((flags & PMAP_ENTER_NOREPLACE) != 0) {
l2pg->wire_count--;
CTR2(KTR_PMAP,
"pmap_enter_l2: failure for va %#lx in pmap %p",
va, pmap);
return (KERN_FAILURE);
}
SLIST_INIT(&free);
if ((old_l2 & ATTR_DESCR_MASK) == L2_BLOCK)
(void)pmap_remove_l2(pmap, l2, va,
pmap_load(pmap_l1(pmap, va)), &free, lockp);
else
for (sva = va; sva < va + L2_SIZE; sva += PAGE_SIZE) {
l3 = pmap_l2_to_l3(l2, sva);
if (pmap_l3_valid(pmap_load(l3)) &&
pmap_remove_l3(pmap, l3, sva, old_l2, &free,
lockp) != 0)
break;
}
vm_page_free_pages_toq(&free, true);
if (va >= VM_MAXUSER_ADDRESS) {
mt = PHYS_TO_VM_PAGE(pmap_load(l2) & ~ATTR_MASK);
if (pmap_insert_pt_page(pmap, mt)) {
/*
* XXX Currently, this can't happen because
* we do not perform pmap_enter(psind == 1)
* on the kernel pmap.
*/
panic("pmap_enter_l2: trie insert failed");
}
} else
KASSERT(pmap_load(l2) == 0,
("pmap_enter_l2: non-zero L2 entry %p", l2));
}
if ((new_l2 & ATTR_SW_MANAGED) != 0) {
/*
* Abort this mapping if its PV entry could not be created.
*/
if (!pmap_pv_insert_l2(pmap, va, new_l2, flags, lockp)) {
SLIST_INIT(&free);
if (pmap_unwire_l3(pmap, va, l2pg, &free)) {
/*
* Although "va" is not mapped, paging-structure
* caches could nonetheless have entries that
* refer to the freed page table pages.
* Invalidate those entries.
*/
pmap_invalidate_page(pmap, va);
vm_page_free_pages_toq(&free, true);
}
CTR2(KTR_PMAP,
"pmap_enter_l2: failure for va %#lx in pmap %p",
va, pmap);
return (KERN_RESOURCE_SHORTAGE);
}
if ((new_l2 & ATTR_AP_RW_BIT) == ATTR_AP(ATTR_AP_RW))
for (mt = m; mt < &m[L2_SIZE / PAGE_SIZE]; mt++)
vm_page_aflag_set(mt, PGA_WRITEABLE);
}
/*
* Increment counters.
*/
if ((new_l2 & ATTR_SW_WIRED) != 0)
pmap->pm_stats.wired_count += L2_SIZE / PAGE_SIZE;
pmap->pm_stats.resident_count += L2_SIZE / PAGE_SIZE;
/*
* Map the superpage.
*/
(void)pmap_load_store(l2, new_l2);
dsb(ishst);
atomic_add_long(&pmap_l2_mappings, 1);
CTR2(KTR_PMAP, "pmap_enter_l2: success for va %#lx in pmap %p",
va, pmap);
return (KERN_SUCCESS);
}
/*
* Maps a sequence of resident pages belonging to the same object.
* The sequence begins with the given page m_start. This page is
* mapped at the given virtual address start. Each subsequent page is
* mapped at a virtual address that is offset from start by the same
* amount as the page is offset from m_start within the object. The
* last page in the sequence is the page with the largest offset from
* m_start that can be mapped at a virtual address less than the given
* virtual address end. Not every virtual page between start and end
* is mapped; only those for which a resident page exists with the
* corresponding offset from m_start are mapped.
*/
void
pmap_enter_object(pmap_t pmap, vm_offset_t start, vm_offset_t end,
vm_page_t m_start, vm_prot_t prot)
{
struct rwlock *lock;
vm_offset_t va;
vm_page_t m, mpte;
vm_pindex_t diff, psize;
VM_OBJECT_ASSERT_LOCKED(m_start->object);
psize = atop(end - start);
mpte = NULL;
m = m_start;
lock = NULL;
PMAP_LOCK(pmap);
while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) {
va = start + ptoa(diff);
if ((va & L2_OFFSET) == 0 && va + L2_SIZE <= end &&
m->psind == 1 && pmap_ps_enabled(pmap) &&
pmap_enter_2mpage(pmap, va, m, prot, &lock))
m = &m[L2_SIZE / PAGE_SIZE - 1];
else
mpte = pmap_enter_quick_locked(pmap, va, m, prot, mpte,
&lock);
m = TAILQ_NEXT(m, listq);
}
if (lock != NULL)
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
}
/*
* this code makes some *MAJOR* assumptions:
* 1. Current pmap & pmap exists.
* 2. Not wired.
* 3. Read access.
* 4. No page table pages.
* but is *MUCH* faster than pmap_enter...
*/
void
pmap_enter_quick(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot)
{
struct rwlock *lock;
lock = NULL;
PMAP_LOCK(pmap);
(void)pmap_enter_quick_locked(pmap, va, m, prot, NULL, &lock);
if (lock != NULL)
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
}
static vm_page_t
pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m,
vm_prot_t prot, vm_page_t mpte, struct rwlock **lockp)
{
struct spglist free;
pd_entry_t *pde;
pt_entry_t *l2, *l3, l3_val;
vm_paddr_t pa;
int lvl;
KASSERT(va < kmi.clean_sva || va >= kmi.clean_eva ||
(m->oflags & VPO_UNMANAGED) != 0,
("pmap_enter_quick_locked: managed mapping within the clean submap"));
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
CTR2(KTR_PMAP, "pmap_enter_quick_locked: %p %lx", pmap, va);
/*
* In the case that a page table page is not
* resident, we are creating it here.
*/
if (va < VM_MAXUSER_ADDRESS) {
vm_pindex_t l2pindex;
/*
* Calculate pagetable page index
*/
l2pindex = pmap_l2_pindex(va);
if (mpte && (mpte->pindex == l2pindex)) {
mpte->wire_count++;
} else {
/*
* Get the l2 entry
*/
pde = pmap_pde(pmap, va, &lvl);
/*
* If the page table page is mapped, we just increment
* the hold count, and activate it. Otherwise, we
* attempt to allocate a page table page. If this
* attempt fails, we don't retry. Instead, we give up.
*/
if (lvl == 1) {
l2 = pmap_l1_to_l2(pde, va);
if ((pmap_load(l2) & ATTR_DESCR_MASK) ==
L2_BLOCK)
return (NULL);
}
if (lvl == 2 && pmap_load(pde) != 0) {
mpte =
PHYS_TO_VM_PAGE(pmap_load(pde) & ~ATTR_MASK);
mpte->wire_count++;
} else {
/*
* Pass NULL instead of the PV list lock
* pointer, because we don't intend to sleep.
*/
mpte = _pmap_alloc_l3(pmap, l2pindex, NULL);
if (mpte == NULL)
return (mpte);
}
}
l3 = (pt_entry_t *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(mpte));
l3 = &l3[pmap_l3_index(va)];
} else {
mpte = NULL;
pde = pmap_pde(kernel_pmap, va, &lvl);
KASSERT(pde != NULL,
("pmap_enter_quick_locked: Invalid page entry, va: 0x%lx",
va));
KASSERT(lvl == 2,
("pmap_enter_quick_locked: Invalid level %d", lvl));
l3 = pmap_l2_to_l3(pde, va);
}
if (pmap_load(l3) != 0) {
if (mpte != NULL) {
mpte->wire_count--;
mpte = NULL;
}
return (mpte);
}
/*
* Enter on the PV list if part of our managed memory.
*/
if ((m->oflags & VPO_UNMANAGED) == 0 &&
!pmap_try_insert_pv_entry(pmap, va, m, lockp)) {
if (mpte != NULL) {
SLIST_INIT(&free);
if (pmap_unwire_l3(pmap, va, mpte, &free)) {
pmap_invalidate_page(pmap, va);
vm_page_free_pages_toq(&free, false);
}
mpte = NULL;
}
return (mpte);
}
/*
* Increment counters
*/
pmap_resident_count_inc(pmap, 1);
pa = VM_PAGE_TO_PHYS(m);
l3_val = pa | ATTR_DEFAULT | ATTR_IDX(m->md.pv_memattr) |
ATTR_AP(ATTR_AP_RO) | L3_PAGE;
if ((prot & VM_PROT_EXECUTE) == 0 || m->md.pv_memattr == DEVICE_MEMORY)
l3_val |= ATTR_XN;
else if (va < VM_MAXUSER_ADDRESS)
l3_val |= ATTR_PXN;
/*
* Now validate mapping with RO protection
*/
if ((m->oflags & VPO_UNMANAGED) == 0)
l3_val |= ATTR_SW_MANAGED;
/* Sync icache before the mapping is stored to PTE */
if ((prot & VM_PROT_EXECUTE) && pmap != kernel_pmap &&
m->md.pv_memattr == VM_MEMATTR_WRITE_BACK)
cpu_icache_sync_range(PHYS_TO_DMAP(pa), PAGE_SIZE);
pmap_load_store(l3, l3_val);
pmap_invalidate_page(pmap, va);
return (mpte);
}
/*
* This code maps large physical mmap regions into the
* processor address space. Note that some shortcuts
* are taken, but the code works.
*/
void
pmap_object_init_pt(pmap_t pmap, vm_offset_t addr, vm_object_t object,
vm_pindex_t pindex, vm_size_t size)
{
VM_OBJECT_ASSERT_WLOCKED(object);
KASSERT(object->type == OBJT_DEVICE || object->type == OBJT_SG,
("pmap_object_init_pt: non-device object"));
}
/*
* Clear the wired attribute from the mappings for the specified range of
* addresses in the given pmap. Every valid mapping within that range
* must have the wired attribute set. In contrast, invalid mappings
* cannot have the wired attribute set, so they are ignored.
*
* The wired attribute of the page table entry is not a hardware feature,
* so there is no need to invalidate any TLB entries.
*/
void
pmap_unwire(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
vm_offset_t va_next;
pd_entry_t *l0, *l1, *l2;
pt_entry_t *l3;
PMAP_LOCK(pmap);
for (; sva < eva; sva = va_next) {
l0 = pmap_l0(pmap, sva);
if (pmap_load(l0) == 0) {
va_next = (sva + L0_SIZE) & ~L0_OFFSET;
if (va_next < sva)
va_next = eva;
continue;
}
l1 = pmap_l0_to_l1(l0, sva);
if (pmap_load(l1) == 0) {
va_next = (sva + L1_SIZE) & ~L1_OFFSET;
if (va_next < sva)
va_next = eva;
continue;
}
va_next = (sva + L2_SIZE) & ~L2_OFFSET;
if (va_next < sva)
va_next = eva;
l2 = pmap_l1_to_l2(l1, sva);
if (pmap_load(l2) == 0)
continue;
if ((pmap_load(l2) & ATTR_DESCR_MASK) == L2_BLOCK) {
l3 = pmap_demote_l2(pmap, l2, sva);
if (l3 == NULL)
continue;
}
KASSERT((pmap_load(l2) & ATTR_DESCR_MASK) == L2_TABLE,
("pmap_unwire: Invalid l2 entry after demotion"));
if (va_next > eva)
va_next = eva;
for (l3 = pmap_l2_to_l3(l2, sva); sva != va_next; l3++,
sva += L3_SIZE) {
if (pmap_load(l3) == 0)
continue;
if ((pmap_load(l3) & ATTR_SW_WIRED) == 0)
panic("pmap_unwire: l3 %#jx is missing "
"ATTR_SW_WIRED", (uintmax_t)pmap_load(l3));
/*
* PG_W must be cleared atomically. Although the pmap
* lock synchronizes access to PG_W, another processor
* could be setting PG_M and/or PG_A concurrently.
*/
atomic_clear_long(l3, ATTR_SW_WIRED);
pmap->pm_stats.wired_count--;
}
}
PMAP_UNLOCK(pmap);
}
/*
* Copy the range specified by src_addr/len
* from the source map to the range dst_addr/len
* in the destination map.
*
* This routine is only advisory and need not do anything.
*/
void
pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr, vm_size_t len,
vm_offset_t src_addr)
{
}
/*
* pmap_zero_page zeros the specified hardware page by mapping
* the page into KVM and using bzero to clear its contents.
*/
void
pmap_zero_page(vm_page_t m)
{
vm_offset_t va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m));
pagezero((void *)va);
}
/*
* pmap_zero_page_area zeros the specified hardware page by mapping
* the page into KVM and using bzero to clear its contents.
*
* off and size may not cover an area beyond a single hardware page.
*/
void
pmap_zero_page_area(vm_page_t m, int off, int size)
{
vm_offset_t va = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m));
if (off == 0 && size == PAGE_SIZE)
pagezero((void *)va);
else
bzero((char *)va + off, size);
}
/*
* pmap_copy_page copies the specified (machine independent)
* page by mapping the page into virtual memory and using
* bcopy to copy the page, one machine dependent page at a
* time.
*/
void
pmap_copy_page(vm_page_t msrc, vm_page_t mdst)
{
vm_offset_t src = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(msrc));
vm_offset_t dst = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(mdst));
pagecopy((void *)src, (void *)dst);
}
int unmapped_buf_allowed = 1;
void
pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[],
vm_offset_t b_offset, int xfersize)
{
void *a_cp, *b_cp;
vm_page_t m_a, m_b;
vm_paddr_t p_a, p_b;
vm_offset_t a_pg_offset, b_pg_offset;
int cnt;
while (xfersize > 0) {
a_pg_offset = a_offset & PAGE_MASK;
m_a = ma[a_offset >> PAGE_SHIFT];
p_a = m_a->phys_addr;
b_pg_offset = b_offset & PAGE_MASK;
m_b = mb[b_offset >> PAGE_SHIFT];
p_b = m_b->phys_addr;
cnt = min(xfersize, PAGE_SIZE - a_pg_offset);
cnt = min(cnt, PAGE_SIZE - b_pg_offset);
if (__predict_false(!PHYS_IN_DMAP(p_a))) {
panic("!DMAP a %lx", p_a);
} else {
a_cp = (char *)PHYS_TO_DMAP(p_a) + a_pg_offset;
}
if (__predict_false(!PHYS_IN_DMAP(p_b))) {
panic("!DMAP b %lx", p_b);
} else {
b_cp = (char *)PHYS_TO_DMAP(p_b) + b_pg_offset;
}
bcopy(a_cp, b_cp, cnt);
a_offset += cnt;
b_offset += cnt;
xfersize -= cnt;
}
}
vm_offset_t
pmap_quick_enter_page(vm_page_t m)
{
return (PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)));
}
void
pmap_quick_remove_page(vm_offset_t addr)
{
}
/*
* Returns true if the pmap's pv is one of the first
* 16 pvs linked to from this page. This count may
* be changed upwards or downwards in the future; it
* is only necessary that true be returned for a small
* subset of pmaps for proper page aging.
*/
boolean_t
pmap_page_exists_quick(pmap_t pmap, vm_page_t m)
{
struct md_page *pvh;
struct rwlock *lock;
pv_entry_t pv;
int loops = 0;
boolean_t rv;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_page_exists_quick: page %p is not managed", m));
rv = FALSE;
lock = VM_PAGE_TO_PV_LIST_LOCK(m);
rw_rlock(lock);
TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) {
if (PV_PMAP(pv) == pmap) {
rv = TRUE;
break;
}
loops++;
if (loops >= 16)
break;
}
if (!rv && loops < 16 && (m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
if (PV_PMAP(pv) == pmap) {
rv = TRUE;
break;
}
loops++;
if (loops >= 16)
break;
}
}
rw_runlock(lock);
return (rv);
}
/*
* pmap_page_wired_mappings:
*
* Return the number of managed mappings to the given physical page
* that are wired.
*/
int
pmap_page_wired_mappings(vm_page_t m)
{
struct rwlock *lock;
struct md_page *pvh;
pmap_t pmap;
pt_entry_t *pte;
pv_entry_t pv;
int count, lvl, md_gen, pvh_gen;
if ((m->oflags & VPO_UNMANAGED) != 0)
return (0);
lock = VM_PAGE_TO_PV_LIST_LOCK(m);
rw_rlock(lock);
restart:
count = 0;
TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
md_gen = m->md.pv_gen;
rw_runlock(lock);
PMAP_LOCK(pmap);
rw_rlock(lock);
if (md_gen != m->md.pv_gen) {
PMAP_UNLOCK(pmap);
goto restart;
}
}
pte = pmap_pte(pmap, pv->pv_va, &lvl);
if (pte != NULL && (pmap_load(pte) & ATTR_SW_WIRED) != 0)
count++;
PMAP_UNLOCK(pmap);
}
if ((m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
md_gen = m->md.pv_gen;
pvh_gen = pvh->pv_gen;
rw_runlock(lock);
PMAP_LOCK(pmap);
rw_rlock(lock);
if (md_gen != m->md.pv_gen ||
pvh_gen != pvh->pv_gen) {
PMAP_UNLOCK(pmap);
goto restart;
}
}
pte = pmap_pte(pmap, pv->pv_va, &lvl);
if (pte != NULL &&
(pmap_load(pte) & ATTR_SW_WIRED) != 0)
count++;
PMAP_UNLOCK(pmap);
}
}
rw_runlock(lock);
return (count);
}
/*
* Destroy all managed, non-wired mappings in the given user-space
* pmap. This pmap cannot be active on any processor besides the
* caller.
*
* This function cannot be applied to the kernel pmap. Moreover, it
* is not intended for general use. It is only to be used during
* process termination. Consequently, it can be implemented in ways
* that make it faster than pmap_remove(). First, it can more quickly
* destroy mappings by iterating over the pmap's collection of PV
* entries, rather than searching the page table. Second, it doesn't
* have to test and clear the page table entries atomically, because
* no processor is currently accessing the user address space. In
* particular, a page table entry's dirty bit won't change state once
* this function starts.
*/
void
pmap_remove_pages(pmap_t pmap)
{
pd_entry_t *pde;
pt_entry_t *pte, tpte;
struct spglist free;
vm_page_t m, ml3, mt;
pv_entry_t pv;
struct md_page *pvh;
struct pv_chunk *pc, *npc;
struct rwlock *lock;
int64_t bit;
uint64_t inuse, bitmask;
int allfree, field, freed, idx, lvl;
vm_paddr_t pa;
lock = NULL;
SLIST_INIT(&free);
PMAP_LOCK(pmap);
TAILQ_FOREACH_SAFE(pc, &pmap->pm_pvchunk, pc_list, npc) {
allfree = 1;
freed = 0;
for (field = 0; field < _NPCM; field++) {
inuse = ~pc->pc_map[field] & pc_freemask[field];
while (inuse != 0) {
bit = ffsl(inuse) - 1;
bitmask = 1UL << bit;
idx = field * 64 + bit;
pv = &pc->pc_pventry[idx];
inuse &= ~bitmask;
pde = pmap_pde(pmap, pv->pv_va, &lvl);
KASSERT(pde != NULL,
("Attempting to remove an unmapped page"));
switch (lvl) {
case 1:
pte = pmap_l1_to_l2(pde, pv->pv_va);
tpte = pmap_load(pte);
KASSERT((tpte & ATTR_DESCR_MASK) ==
L2_BLOCK,
("Attempting to remove an invalid "
"block: %lx", tpte));
break;
case 2:
pte = pmap_l2_to_l3(pde, pv->pv_va);
tpte = pmap_load(pte);
KASSERT((tpte & ATTR_DESCR_MASK) ==
L3_PAGE,
("Attempting to remove an invalid "
"page: %lx", tpte));
break;
default:
panic(
"Invalid page directory level: %d",
lvl);
}
/*
* We cannot remove wired pages from a process' mapping at this time
*/
if (tpte & ATTR_SW_WIRED) {
allfree = 0;
continue;
}
pa = tpte & ~ATTR_MASK;
m = PHYS_TO_VM_PAGE(pa);
KASSERT(m->phys_addr == pa,
("vm_page_t %p phys_addr mismatch %016jx %016jx",
m, (uintmax_t)m->phys_addr,
(uintmax_t)tpte));
KASSERT((m->flags & PG_FICTITIOUS) != 0 ||
m < &vm_page_array[vm_page_array_size],
("pmap_remove_pages: bad pte %#jx",
(uintmax_t)tpte));
pmap_load_clear(pte);
/*
* Update the vm_page_t clean/reference bits.
*/
if ((tpte & ATTR_AP_RW_BIT) ==
ATTR_AP(ATTR_AP_RW)) {
switch (lvl) {
case 1:
for (mt = m; mt < &m[L2_SIZE / PAGE_SIZE]; mt++)
vm_page_dirty(mt);
break;
case 2:
vm_page_dirty(m);
break;
}
}
CHANGE_PV_LIST_LOCK_TO_VM_PAGE(&lock, m);
/* Mark free */
pc->pc_map[field] |= bitmask;
switch (lvl) {
case 1:
pmap_resident_count_dec(pmap,
L2_SIZE / PAGE_SIZE);
pvh = pa_to_pvh(tpte & ~ATTR_MASK);
TAILQ_REMOVE(&pvh->pv_list, pv, pv_next);
pvh->pv_gen++;
if (TAILQ_EMPTY(&pvh->pv_list)) {
for (mt = m; mt < &m[L2_SIZE / PAGE_SIZE]; mt++)
if ((mt->aflags & PGA_WRITEABLE) != 0 &&
TAILQ_EMPTY(&mt->md.pv_list))
vm_page_aflag_clear(mt, PGA_WRITEABLE);
}
ml3 = pmap_remove_pt_page(pmap,
pv->pv_va);
if (ml3 != NULL) {
pmap_resident_count_dec(pmap, 1);
KASSERT(ml3->wire_count == NL3PG,
("pmap_remove_pages: l3 page wire count error"));
ml3->wire_count = 1;
vm_page_unwire_noq(ml3);
pmap_add_delayed_free_list(ml3,
&free, FALSE);
}
break;
case 2:
pmap_resident_count_dec(pmap, 1);
TAILQ_REMOVE(&m->md.pv_list, pv,
pv_next);
m->md.pv_gen++;
if ((m->aflags & PGA_WRITEABLE) != 0 &&
TAILQ_EMPTY(&m->md.pv_list) &&
(m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(
VM_PAGE_TO_PHYS(m));
if (TAILQ_EMPTY(&pvh->pv_list))
vm_page_aflag_clear(m,
PGA_WRITEABLE);
}
break;
}
pmap_unuse_pt(pmap, pv->pv_va, pmap_load(pde),
&free);
freed++;
}
}
PV_STAT(atomic_add_long(&pv_entry_frees, freed));
PV_STAT(atomic_add_int(&pv_entry_spare, freed));
PV_STAT(atomic_subtract_long(&pv_entry_count, freed));
if (allfree) {
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
free_pv_chunk(pc);
}
}
pmap_invalidate_all(pmap);
if (lock != NULL)
rw_wunlock(lock);
PMAP_UNLOCK(pmap);
vm_page_free_pages_toq(&free, false);
}
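The pc_map walk in pmap_remove_pages() above can be illustrated with a small userspace sketch (this is not the kernel code itself; `SKETCH_NPCM`, the freemask values, and the helper name are stand-ins for `_NPCM`, `pc_freemask[]`, and the inline loop): each 64-bit word of `pc_map` has a 1 bit per *free* slot, so inverting it and masking with the valid-slot mask yields the in-use slots, which an ffs-style scan then peels off one by one.

```c
#include <stdint.h>

#define SKETCH_NPCM 3	/* stand-in for _NPCM */

/*
 * Collect the indices of the in-use PV entries in one chunk, the same
 * way the loop in pmap_remove_pages() walks them: invert the free
 * bitmap, mask off the invalid tail bits, then peel off set bits with
 * an ffs-style scan.
 */
static int
pv_chunk_used_slots(const uint64_t pc_map[SKETCH_NPCM],
    const uint64_t freemask[SKETCH_NPCM], int idx_out[], int max)
{
	uint64_t bitmask, inuse;
	int bit, field, idx, n;

	n = 0;
	for (field = 0; field < SKETCH_NPCM; field++) {
		inuse = ~pc_map[field] & freemask[field];
		while (inuse != 0) {
			bit = __builtin_ffsll(inuse) - 1;
			bitmask = (uint64_t)1 << bit;
			idx = field * 64 + bit;
			inuse &= ~bitmask;
			if (n < max)
				idx_out[n] = idx;
			n++;
		}
	}
	return (n);
}
```

A chunk with slots 3, 7, and 128 allocated (bits cleared in `pc_map`) yields exactly those indices, in ascending order.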
/*
* This is used to check if a page has been accessed or modified. As we
* don't have a bit to see if it has been modified we have to assume it
* has been if the page is read/write.
*/
static boolean_t
pmap_page_test_mappings(vm_page_t m, boolean_t accessed, boolean_t modified)
{
struct rwlock *lock;
pv_entry_t pv;
struct md_page *pvh;
pt_entry_t *pte, mask, value;
pmap_t pmap;
int lvl, md_gen, pvh_gen;
boolean_t rv;
rv = FALSE;
lock = VM_PAGE_TO_PV_LIST_LOCK(m);
rw_rlock(lock);
restart:
TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
md_gen = m->md.pv_gen;
rw_runlock(lock);
PMAP_LOCK(pmap);
rw_rlock(lock);
if (md_gen != m->md.pv_gen) {
PMAP_UNLOCK(pmap);
goto restart;
}
}
pte = pmap_pte(pmap, pv->pv_va, &lvl);
KASSERT(lvl == 3,
("pmap_page_test_mappings: Invalid level %d", lvl));
mask = 0;
value = 0;
if (modified) {
mask |= ATTR_AP_RW_BIT;
value |= ATTR_AP(ATTR_AP_RW);
}
if (accessed) {
mask |= ATTR_AF | ATTR_DESCR_MASK;
value |= ATTR_AF | L3_PAGE;
}
rv = (pmap_load(pte) & mask) == value;
PMAP_UNLOCK(pmap);
if (rv)
goto out;
}
if ((m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
md_gen = m->md.pv_gen;
pvh_gen = pvh->pv_gen;
rw_runlock(lock);
PMAP_LOCK(pmap);
rw_rlock(lock);
if (md_gen != m->md.pv_gen ||
pvh_gen != pvh->pv_gen) {
PMAP_UNLOCK(pmap);
goto restart;
}
}
pte = pmap_pte(pmap, pv->pv_va, &lvl);
KASSERT(lvl == 2,
("pmap_page_test_mappings: Invalid level %d", lvl));
mask = 0;
value = 0;
if (modified) {
mask |= ATTR_AP_RW_BIT;
value |= ATTR_AP(ATTR_AP_RW);
}
if (accessed) {
mask |= ATTR_AF | ATTR_DESCR_MASK;
value |= ATTR_AF | L2_BLOCK;
}
rv = (pmap_load(pte) & mask) == value;
PMAP_UNLOCK(pmap);
if (rv)
goto out;
}
}
out:
rw_runlock(lock);
return (rv);
}
/*
* pmap_is_modified:
*
* Return whether or not the specified physical page was modified
* in any physical maps.
*/
boolean_t
pmap_is_modified(vm_page_t m)
{
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_is_modified: page %p is not managed", m));
/*
* If the page is not exclusive busied, then PGA_WRITEABLE cannot be
* concurrently set while the object is locked. Thus, if PGA_WRITEABLE
* is clear, no PTEs can have PG_M set.
*/
VM_OBJECT_ASSERT_WLOCKED(m->object);
if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0)
return (FALSE);
return (pmap_page_test_mappings(m, FALSE, TRUE));
}
/*
* pmap_is_prefaultable:
*
* Return whether or not the specified virtual address is eligible
* for prefault.
*/
boolean_t
pmap_is_prefaultable(pmap_t pmap, vm_offset_t addr)
{
pt_entry_t *pte;
boolean_t rv;
int lvl;
rv = FALSE;
PMAP_LOCK(pmap);
pte = pmap_pte(pmap, addr, &lvl);
if (pte != NULL && pmap_load(pte) != 0) {
rv = TRUE;
}
PMAP_UNLOCK(pmap);
return (rv);
}
/*
* pmap_is_referenced:
*
* Return whether or not the specified physical page was referenced
* in any physical maps.
*/
boolean_t
pmap_is_referenced(vm_page_t m)
{
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_is_referenced: page %p is not managed", m));
return (pmap_page_test_mappings(m, TRUE, FALSE));
}
/*
* Clear the write and modified bits in each of the given page's mappings.
*/
void
pmap_remove_write(vm_page_t m)
{
struct md_page *pvh;
pmap_t pmap;
struct rwlock *lock;
pv_entry_t next_pv, pv;
pt_entry_t oldpte, *pte;
vm_offset_t va;
int lvl, md_gen, pvh_gen;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_remove_write: page %p is not managed", m));
/*
* If the page is not exclusive busied, then PGA_WRITEABLE cannot be
* set by another thread while the object is locked. Thus,
* if PGA_WRITEABLE is clear, no page table entries need updating.
*/
VM_OBJECT_ASSERT_WLOCKED(m->object);
if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0)
return;
lock = VM_PAGE_TO_PV_LIST_LOCK(m);
pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy :
pa_to_pvh(VM_PAGE_TO_PHYS(m));
retry_pv_loop:
rw_wlock(lock);
TAILQ_FOREACH_SAFE(pv, &pvh->pv_list, pv_next, next_pv) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
pvh_gen = pvh->pv_gen;
rw_wunlock(lock);
PMAP_LOCK(pmap);
rw_wlock(lock);
if (pvh_gen != pvh->pv_gen) {
PMAP_UNLOCK(pmap);
rw_wunlock(lock);
goto retry_pv_loop;
}
}
va = pv->pv_va;
pte = pmap_pte(pmap, pv->pv_va, &lvl);
if ((pmap_load(pte) & ATTR_AP_RW_BIT) == ATTR_AP(ATTR_AP_RW))
pmap_demote_l2_locked(pmap, pte, va & ~L2_OFFSET,
&lock);
KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m),
("inconsistent pv lock %p %p for page %p",
lock, VM_PAGE_TO_PV_LIST_LOCK(m), m));
PMAP_UNLOCK(pmap);
}
TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) {
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
pvh_gen = pvh->pv_gen;
md_gen = m->md.pv_gen;
rw_wunlock(lock);
PMAP_LOCK(pmap);
rw_wlock(lock);
if (pvh_gen != pvh->pv_gen ||
md_gen != m->md.pv_gen) {
PMAP_UNLOCK(pmap);
rw_wunlock(lock);
goto retry_pv_loop;
}
}
pte = pmap_pte(pmap, pv->pv_va, &lvl);
retry:
oldpte = pmap_load(pte);
if ((oldpte & ATTR_AP_RW_BIT) == ATTR_AP(ATTR_AP_RW)) {
if (!atomic_cmpset_long(pte, oldpte,
oldpte | ATTR_AP(ATTR_AP_RO)))
goto retry;
if ((oldpte & ATTR_AF) != 0)
vm_page_dirty(m);
pmap_invalidate_page(pmap, pv->pv_va);
}
PMAP_UNLOCK(pmap);
}
rw_wunlock(lock);
vm_page_aflag_clear(m, PGA_WRITEABLE);
}
static __inline boolean_t
safe_to_clear_referenced(pmap_t pmap, pt_entry_t pte)
{
return (FALSE);
}
/*
* pmap_ts_referenced:
*
* Return a count of reference bits for a page, clearing those bits.
* It is not necessary for every reference bit to be cleared, but it
* is necessary that 0 only be returned when there are truly no
* reference bits set.
*
* As an optimization, update the page's dirty field if a modified bit is
* found while counting reference bits. This opportunistic update can be
* performed at low cost and can eliminate the need for some future calls
* to pmap_is_modified(). However, since this function stops after
* finding PMAP_TS_REFERENCED_MAX reference bits, it may not detect some
* dirty pages. Those dirty pages will only be detected by a future call
* to pmap_is_modified().
*/
int
pmap_ts_referenced(vm_page_t m)
{
struct md_page *pvh;
pv_entry_t pv, pvf;
pmap_t pmap;
struct rwlock *lock;
pd_entry_t *pde, tpde;
pt_entry_t *pte, tpte;
pt_entry_t *l3;
vm_offset_t va;
vm_paddr_t pa;
int cleared, md_gen, not_cleared, lvl, pvh_gen;
struct spglist free;
bool demoted;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_ts_referenced: page %p is not managed", m));
SLIST_INIT(&free);
cleared = 0;
pa = VM_PAGE_TO_PHYS(m);
lock = PHYS_TO_PV_LIST_LOCK(pa);
pvh = (m->flags & PG_FICTITIOUS) != 0 ? &pv_dummy : pa_to_pvh(pa);
rw_wlock(lock);
retry:
not_cleared = 0;
if ((pvf = TAILQ_FIRST(&pvh->pv_list)) == NULL)
goto small_mappings;
pv = pvf;
do {
if (pvf == NULL)
pvf = pv;
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
pvh_gen = pvh->pv_gen;
rw_wunlock(lock);
PMAP_LOCK(pmap);
rw_wlock(lock);
if (pvh_gen != pvh->pv_gen) {
PMAP_UNLOCK(pmap);
goto retry;
}
}
va = pv->pv_va;
pde = pmap_pde(pmap, pv->pv_va, &lvl);
KASSERT(pde != NULL, ("pmap_ts_referenced: no l1 table found"));
KASSERT(lvl == 1,
("pmap_ts_referenced: invalid pde level %d", lvl));
tpde = pmap_load(pde);
KASSERT((tpde & ATTR_DESCR_MASK) == L1_TABLE,
("pmap_ts_referenced: found an invalid l1 table"));
pte = pmap_l1_to_l2(pde, pv->pv_va);
tpte = pmap_load(pte);
if (pmap_page_dirty(tpte)) {
/*
* Although "tpte" is mapping a 2MB page, because
* this function is called at a 4KB page granularity,
* we only update the 4KB page under test.
*/
vm_page_dirty(m);
}
if ((tpte & ATTR_AF) != 0) {
/*
* Since this reference bit is shared by 512 4KB
* pages, it should not be cleared every time it is
* tested. Apply a simple "hash" function on the
* physical page number, the virtual superpage number,
* and the pmap address to select one 4KB page out of
* the 512 on which testing the reference bit will
* result in clearing that reference bit. This
* function is designed to avoid the selection of the
* same 4KB page for every 2MB page mapping.
*
* On demotion, a mapping that hasn't been referenced
* is simply destroyed. To avoid the possibility of a
* subsequent page fault on a demoted wired mapping,
* always leave its reference bit set. Moreover,
* since the superpage is wired, the current state of
* its reference bit won't affect page replacement.
*/
if ((((pa >> PAGE_SHIFT) ^ (pv->pv_va >> L2_SHIFT) ^
(uintptr_t)pmap) & (Ln_ENTRIES - 1)) == 0 &&
(tpte & ATTR_SW_WIRED) == 0) {
if (safe_to_clear_referenced(pmap, tpte)) {
/*
* TODO: We don't handle the access
* flag at all. We need to be able
* to set it in the exception handler.
*/
panic("ARM64TODO: "
"safe_to_clear_referenced\n");
} else if (pmap_demote_l2_locked(pmap, pte,
pv->pv_va, &lock) != NULL) {
demoted = true;
va += VM_PAGE_TO_PHYS(m) -
(tpte & ~ATTR_MASK);
l3 = pmap_l2_to_l3(pte, va);
pmap_remove_l3(pmap, l3, va,
pmap_load(pte), NULL, &lock);
} else
demoted = true;
if (demoted) {
/*
* The superpage mapping was removed
* entirely and therefore 'pv' is no
* longer valid.
*/
if (pvf == pv)
pvf = NULL;
pv = NULL;
}
cleared++;
KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m),
("inconsistent pv lock %p %p for page %p",
lock, VM_PAGE_TO_PV_LIST_LOCK(m), m));
} else
not_cleared++;
}
PMAP_UNLOCK(pmap);
/* Rotate the PV list if it has more than one entry. */
if (pv != NULL && TAILQ_NEXT(pv, pv_next) != NULL) {
TAILQ_REMOVE(&pvh->pv_list, pv, pv_next);
TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next);
pvh->pv_gen++;
}
if (cleared + not_cleared >= PMAP_TS_REFERENCED_MAX)
goto out;
} while ((pv = TAILQ_FIRST(&pvh->pv_list)) != pvf);
small_mappings:
if ((pvf = TAILQ_FIRST(&m->md.pv_list)) == NULL)
goto out;
pv = pvf;
do {
if (pvf == NULL)
pvf = pv;
pmap = PV_PMAP(pv);
if (!PMAP_TRYLOCK(pmap)) {
pvh_gen = pvh->pv_gen;
md_gen = m->md.pv_gen;
rw_wunlock(lock);
PMAP_LOCK(pmap);
rw_wlock(lock);
if (pvh_gen != pvh->pv_gen || md_gen != m->md.pv_gen) {
PMAP_UNLOCK(pmap);
goto retry;
}
}
pde = pmap_pde(pmap, pv->pv_va, &lvl);
KASSERT(pde != NULL, ("pmap_ts_referenced: no l2 table found"));
KASSERT(lvl == 2,
("pmap_ts_referenced: invalid pde level %d", lvl));
tpde = pmap_load(pde);
KASSERT((tpde & ATTR_DESCR_MASK) == L2_TABLE,
("pmap_ts_referenced: found an invalid l2 table"));
pte = pmap_l2_to_l3(pde, pv->pv_va);
tpte = pmap_load(pte);
if (pmap_page_dirty(tpte))
vm_page_dirty(m);
if ((tpte & ATTR_AF) != 0) {
if (safe_to_clear_referenced(pmap, tpte)) {
/*
* TODO: We don't handle the access flag
* at all. We need to be able to set it in
* the exception handler.
*/
panic("ARM64TODO: safe_to_clear_referenced\n");
} else if ((tpte & ATTR_SW_WIRED) == 0) {
/*
* Wired pages cannot be paged out so
* doing accessed bit emulation for
* them is wasted effort. We do the
* hard work for unwired pages only.
*/
pmap_remove_l3(pmap, pte, pv->pv_va, tpde,
&free, &lock);
pmap_invalidate_page(pmap, pv->pv_va);
cleared++;
if (pvf == pv)
pvf = NULL;
pv = NULL;
KASSERT(lock == VM_PAGE_TO_PV_LIST_LOCK(m),
("inconsistent pv lock %p %p for page %p",
lock, VM_PAGE_TO_PV_LIST_LOCK(m), m));
} else
not_cleared++;
}
PMAP_UNLOCK(pmap);
/* Rotate the PV list if it has more than one entry. */
if (pv != NULL && TAILQ_NEXT(pv, pv_next) != NULL) {
TAILQ_REMOVE(&m->md.pv_list, pv, pv_next);
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
m->md.pv_gen++;
}
} while ((pv = TAILQ_FIRST(&m->md.pv_list)) != pvf && cleared +
not_cleared < PMAP_TS_REFERENCED_MAX);
out:
rw_wunlock(lock);
vm_page_free_pages_toq(&free, false);
return (cleared + not_cleared);
}
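The 1-out-of-512 selection described in the long comment inside pmap_ts_referenced() can be sketched in userspace C (constants inlined for arm64 with 4 KiB granules: PAGE_SHIFT is 12, L2_SHIFT is 21, Ln_ENTRIES is 512; the function name is invented for the sketch):

```c
#include <stdint.h>

/*
 * Sketch of the predicate pmap_ts_referenced() uses to decide whether
 * to clear the shared reference bit of a 2 MiB mapping: hash the
 * physical page number, the virtual superpage number, and the pmap
 * address so that exactly one of the 512 4 KiB pages under the
 * superpage triggers the clear.
 */
static int
ts_ref_should_clear(uint64_t pa, uint64_t va, uintptr_t pmap_addr)
{
	return ((((pa >> 12) ^ (va >> 21) ^ (uint64_t)pmap_addr) &
	    (512 - 1)) == 0);
}
```

Because the low 9 bits of the physical page number run through all residues as pa sweeps the 512 pages of one superpage, exactly one page in each 2 MiB mapping satisfies the predicate, and different pmaps or virtual superpages pick different pages.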
/*
* Apply the given advice to the specified range of addresses within the
* given pmap. Depending on the advice, clear the referenced and/or
* modified flags in each mapping and set the mapped page's dirty field.
*/
void
pmap_advise(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, int advice)
{
}
/*
* Clear the modify bits on the specified physical page.
*/
void
pmap_clear_modify(vm_page_t m)
{
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_clear_modify: page %p is not managed", m));
VM_OBJECT_ASSERT_WLOCKED(m->object);
KASSERT(!vm_page_xbusied(m),
("pmap_clear_modify: page %p is exclusive busied", m));
/*
* If the page is not PGA_WRITEABLE, then no PTEs can have PG_M set.
* If the object containing the page is locked and the page is not
* exclusive busied, then PGA_WRITEABLE cannot be concurrently set.
*/
if ((m->aflags & PGA_WRITEABLE) == 0)
return;
/* ARM64TODO: We lack support for tracking if a page is modified */
}
void *
pmap_mapbios(vm_paddr_t pa, vm_size_t size)
{
struct pmap_preinit_mapping *ppim;
vm_offset_t va, offset;
pd_entry_t *pde;
pt_entry_t *l2;
int i, lvl, l2_blocks, free_l2_count, start_idx;
if (!vm_initialized) {
/*
* No L3 ptables so map entire L2 blocks where start VA is:
* preinit_map_va + start_idx * L2_SIZE
* There may be duplicate mappings (multiple VA -> same PA) but
* ARM64 dcache is always PIPT so that's acceptable.
*/
if (size == 0)
return (NULL);
/* Calculate how many L2 blocks are needed for the mapping */
l2_blocks = (roundup2(pa + size, L2_SIZE) -
rounddown2(pa, L2_SIZE)) >> L2_SHIFT;
offset = pa & L2_OFFSET;
if (preinit_map_va == 0)
return (NULL);
/* Map 2MiB L2 blocks from reserved VA space */
free_l2_count = 0;
start_idx = -1;
/* Find enough free contiguous VA space */
for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) {
ppim = pmap_preinit_mapping + i;
if (free_l2_count > 0 && ppim->pa != 0) {
/* Not enough space here */
free_l2_count = 0;
start_idx = -1;
continue;
}
if (ppim->pa == 0) {
/* Free L2 block */
if (start_idx == -1)
start_idx = i;
free_l2_count++;
if (free_l2_count == l2_blocks)
break;
}
}
if (free_l2_count != l2_blocks)
panic("%s: too many preinit mappings", __func__);
va = preinit_map_va + (start_idx * L2_SIZE);
for (i = start_idx; i < start_idx + l2_blocks; i++) {
/* Mark entries as allocated */
ppim = pmap_preinit_mapping + i;
ppim->pa = pa;
ppim->va = va + offset;
ppim->size = size;
}
/* Map L2 blocks */
pa = rounddown2(pa, L2_SIZE);
for (i = 0; i < l2_blocks; i++) {
pde = pmap_pde(kernel_pmap, va, &lvl);
KASSERT(pde != NULL,
("pmap_mapbios: Invalid page entry, va: 0x%lx",
va));
KASSERT(lvl == 1,
("pmap_mapbios: Invalid level %d", lvl));
/* Insert L2_BLOCK */
l2 = pmap_l1_to_l2(pde, va);
pmap_load_store(l2,
pa | ATTR_DEFAULT | ATTR_XN |
ATTR_IDX(CACHED_MEMORY) | L2_BLOCK);
va += L2_SIZE;
pa += L2_SIZE;
}
pmap_invalidate_all(kernel_pmap);
va = preinit_map_va + (start_idx * L2_SIZE);
} else {
/* kva_alloc may be used to map the pages */
offset = pa & PAGE_MASK;
size = round_page(offset + size);
va = kva_alloc(size);
if (va == 0)
panic("%s: Couldn't allocate KVA", __func__);
pde = pmap_pde(kernel_pmap, va, &lvl);
KASSERT(lvl == 2, ("pmap_mapbios: Invalid level %d", lvl));
/* L3 table is linked */
va = trunc_page(va);
pa = trunc_page(pa);
pmap_kenter(va, size, pa, CACHED_MEMORY);
}
return ((void *)(va + offset));
}
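The l2_blocks computation at the top of pmap_mapbios() can be mirrored in a standalone sketch (the `sk_` helpers are invented names; the roundup/rounddown arithmetic matches the kernel's roundup2()/rounddown2() macros for the 2 MiB L2 block size):

```c
#include <stdint.h>

#define SK_L2_SHIFT	21			/* 2 MiB L2 blocks */
#define SK_L2_SIZE	((uint64_t)1 << SK_L2_SHIFT)

static uint64_t
sk_roundup2(uint64_t x, uint64_t align)
{
	return ((x + align - 1) & ~(align - 1));
}

static uint64_t
sk_rounddown2(uint64_t x, uint64_t align)
{
	return (x & ~(align - 1));
}

/*
 * How many whole L2 blocks does the physical range [pa, pa + size)
 * touch?  This mirrors the l2_blocks computation in pmap_mapbios().
 */
static int
sk_l2_blocks(uint64_t pa, uint64_t size)
{
	return ((int)((sk_roundup2(pa + size, SK_L2_SIZE) -
	    sk_rounddown2(pa, SK_L2_SIZE)) >> SK_L2_SHIFT));
}
```

Note that a tiny range straddling an L2 boundary needs two blocks even though it is far smaller than one, which is why the kernel rounds the start down and the end up before shifting.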
void
pmap_unmapbios(vm_offset_t va, vm_size_t size)
{
struct pmap_preinit_mapping *ppim;
vm_offset_t offset, tmpsize, va_trunc;
pd_entry_t *pde;
pt_entry_t *l2;
int i, lvl, l2_blocks, block;
bool preinit_map;
l2_blocks =
(roundup2(va + size, L2_SIZE) - rounddown2(va, L2_SIZE)) >> L2_SHIFT;
KASSERT(l2_blocks > 0, ("pmap_unmapbios: invalid size %lx", size));
/* Remove preinit mapping */
preinit_map = false;
block = 0;
for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) {
ppim = pmap_preinit_mapping + i;
if (ppim->va == va) {
KASSERT(ppim->size == size,
("pmap_unmapbios: size mismatch"));
ppim->va = 0;
ppim->pa = 0;
ppim->size = 0;
preinit_map = true;
offset = block * L2_SIZE;
va_trunc = rounddown2(va, L2_SIZE) + offset;
/* Remove L2_BLOCK */
pde = pmap_pde(kernel_pmap, va_trunc, &lvl);
KASSERT(pde != NULL,
("pmap_unmapbios: Invalid page entry, va: 0x%lx",
va_trunc));
l2 = pmap_l1_to_l2(pde, va_trunc);
pmap_load_clear(l2);
if (block == (l2_blocks - 1))
break;
block++;
}
}
if (preinit_map) {
pmap_invalidate_all(kernel_pmap);
return;
}
/* Unmap the pages reserved with kva_alloc. */
if (vm_initialized) {
offset = va & PAGE_MASK;
size = round_page(offset + size);
va = trunc_page(va);
pde = pmap_pde(kernel_pmap, va, &lvl);
KASSERT(pde != NULL,
("pmap_unmapbios: Invalid page entry, va: 0x%lx", va));
KASSERT(lvl == 2, ("pmap_unmapbios: Invalid level %d", lvl));
/* Unmap and invalidate the pages */
for (tmpsize = 0; tmpsize < size; tmpsize += PAGE_SIZE)
pmap_kremove(va + tmpsize);
kva_free(va, size);
}
}
/*
* Sets the memory attribute for the specified page.
*/
void
pmap_page_set_memattr(vm_page_t m, vm_memattr_t ma)
{
m->md.pv_memattr = ma;
/*
* If "m" is a normal page, update its direct mapping. This update
* can be relied upon to perform any cache operations that are
* required for data coherence.
*/
if ((m->flags & PG_FICTITIOUS) == 0 &&
pmap_change_attr(PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)), PAGE_SIZE,
m->md.pv_memattr) != 0)
panic("memory attribute change on the direct map failed");
}
/*
* Changes the specified virtual address range's memory type to that given by
* the parameter "mode". The specified virtual address range must be
* completely contained within either the direct map or the kernel map. If
* the virtual address range is contained within the kernel map, then the
* memory type for each of the corresponding ranges of the direct map is also
* changed. (The corresponding ranges of the direct map are those ranges that
* map the same physical pages as the specified virtual address range.) These
* changes to the direct map are necessary because Intel describes the
* behavior of their processors as "undefined" if two or more mappings to the
* same physical page have different memory types.
*
* Returns zero if the change completed successfully, and either EINVAL or
* ENOMEM if the change failed. Specifically, EINVAL is returned if some part
* of the virtual address range was not mapped, and ENOMEM is returned if
* there was insufficient memory available to complete the change. In the
* latter case, the memory type may have been changed on some part of the
* virtual address range or the direct map.
*/
static int
pmap_change_attr(vm_offset_t va, vm_size_t size, int mode)
{
int error;
PMAP_LOCK(kernel_pmap);
error = pmap_change_attr_locked(va, size, mode);
PMAP_UNLOCK(kernel_pmap);
return (error);
}
static int
pmap_change_attr_locked(vm_offset_t va, vm_size_t size, int mode)
{
vm_offset_t base, offset, tmpva;
pt_entry_t l3, *pte, *newpte;
int lvl;
PMAP_LOCK_ASSERT(kernel_pmap, MA_OWNED);
base = trunc_page(va);
offset = va & PAGE_MASK;
size = round_page(offset + size);
if (!VIRT_IN_DMAP(base))
return (EINVAL);
for (tmpva = base; tmpva < base + size; ) {
pte = pmap_pte(kernel_pmap, tmpva, &lvl);
if (pte == NULL)
return (EINVAL);
if ((pmap_load(pte) & ATTR_IDX_MASK) == ATTR_IDX(mode)) {
/*
* We already have the correct attribute,
* ignore this entry.
*/
switch (lvl) {
default:
panic("Invalid DMAP table level: %d\n", lvl);
case 1:
tmpva = (tmpva & ~L1_OFFSET) + L1_SIZE;
break;
case 2:
tmpva = (tmpva & ~L2_OFFSET) + L2_SIZE;
break;
case 3:
tmpva += PAGE_SIZE;
break;
}
} else {
/*
* Split the entry into a level 3 table, then
* set the new attribute.
*/
switch (lvl) {
default:
panic("Invalid DMAP table level: %d\n", lvl);
case 1:
newpte = pmap_demote_l1(kernel_pmap, pte,
tmpva & ~L1_OFFSET);
if (newpte == NULL)
return (EINVAL);
pte = pmap_l1_to_l2(pte, tmpva);
/* FALLTHROUGH */
case 2:
newpte = pmap_demote_l2(kernel_pmap, pte,
tmpva & ~L2_OFFSET);
if (newpte == NULL)
return (EINVAL);
pte = pmap_l2_to_l3(pte, tmpva);
/* FALLTHROUGH */
case 3:
/* Update the entry */
l3 = pmap_load(pte);
l3 &= ~ATTR_IDX_MASK;
l3 |= ATTR_IDX(mode);
if (mode == DEVICE_MEMORY)
l3 |= ATTR_XN;
pmap_update_entry(kernel_pmap, pte, l3, tmpva,
PAGE_SIZE);
/*
* If moving to a non-cacheable entry flush
* the cache.
*/
if (mode == VM_MEMATTR_UNCACHEABLE)
cpu_dcache_wbinv_range(tmpva, L3_SIZE);
break;
}
tmpva += PAGE_SIZE;
}
}
return (0);
}
/*
* Create an L2 table to map all addresses within an L1 mapping.
*/
static pt_entry_t *
pmap_demote_l1(pmap_t pmap, pt_entry_t *l1, vm_offset_t va)
{
pt_entry_t *l2, newl2, oldl1;
vm_offset_t tmpl1;
vm_paddr_t l2phys, phys;
vm_page_t ml2;
int i;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
oldl1 = pmap_load(l1);
KASSERT((oldl1 & ATTR_DESCR_MASK) == L1_BLOCK,
("pmap_demote_l1: Demoting a non-block entry"));
KASSERT((va & L1_OFFSET) == 0,
("pmap_demote_l1: Invalid virtual address %#lx", va));
KASSERT((oldl1 & ATTR_SW_MANAGED) == 0,
("pmap_demote_l1: Level 1 table shouldn't be managed"));
tmpl1 = 0;
if (va <= (vm_offset_t)l1 && va + L1_SIZE > (vm_offset_t)l1) {
tmpl1 = kva_alloc(PAGE_SIZE);
if (tmpl1 == 0)
return (NULL);
}
if ((ml2 = vm_page_alloc(NULL, 0, VM_ALLOC_INTERRUPT |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == NULL) {
CTR2(KTR_PMAP, "pmap_demote_l1: failure for va %#lx"
" in pmap %p", va, pmap);
return (NULL);
}
l2phys = VM_PAGE_TO_PHYS(ml2);
l2 = (pt_entry_t *)PHYS_TO_DMAP(l2phys);
/* Address the range points at */
phys = oldl1 & ~ATTR_MASK;
/* The attributes from the old l1 entry to be copied */
newl2 = oldl1 & ATTR_MASK;
/* Create the new entries */
for (i = 0; i < Ln_ENTRIES; i++) {
l2[i] = newl2 | phys;
phys += L2_SIZE;
}
KASSERT(l2[0] == ((oldl1 & ~ATTR_DESCR_MASK) | L2_BLOCK),
("Invalid l2 page (%lx != %lx)", l2[0],
(oldl1 & ~ATTR_DESCR_MASK) | L2_BLOCK));
if (tmpl1 != 0) {
pmap_kenter(tmpl1, PAGE_SIZE,
DMAP_TO_PHYS((vm_offset_t)l1) & ~L3_OFFSET, CACHED_MEMORY);
l1 = (pt_entry_t *)(tmpl1 + ((vm_offset_t)l1 & PAGE_MASK));
}
pmap_update_entry(pmap, l1, l2phys | L1_TABLE, va, PAGE_SIZE);
if (tmpl1 != 0) {
pmap_kremove(tmpl1);
kva_free(tmpl1, PAGE_SIZE);
}
return (l2);
}
/*
* Create an L3 table to map all addresses within an L2 mapping.
*/
static pt_entry_t *
pmap_demote_l2_locked(pmap_t pmap, pt_entry_t *l2, vm_offset_t va,
struct rwlock **lockp)
{
pt_entry_t *l3, newl3, oldl2;
vm_offset_t tmpl2;
vm_paddr_t l3phys, phys;
vm_page_t ml3;
int i;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
l3 = NULL;
oldl2 = pmap_load(l2);
KASSERT((oldl2 & ATTR_DESCR_MASK) == L2_BLOCK,
("pmap_demote_l2: Demoting a non-block entry"));
KASSERT((va & L2_OFFSET) == 0,
("pmap_demote_l2: Invalid virtual address %#lx", va));
tmpl2 = 0;
if (va <= (vm_offset_t)l2 && va + L2_SIZE > (vm_offset_t)l2) {
tmpl2 = kva_alloc(PAGE_SIZE);
if (tmpl2 == 0)
return (NULL);
}
if ((ml3 = pmap_remove_pt_page(pmap, va)) == NULL) {
ml3 = vm_page_alloc(NULL, pmap_l2_pindex(va),
(VIRT_IN_DMAP(va) ? VM_ALLOC_INTERRUPT : VM_ALLOC_NORMAL) |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
if (ml3 == NULL) {
CTR2(KTR_PMAP, "pmap_demote_l2: failure for va %#lx"
" in pmap %p", va, pmap);
goto fail;
}
if (va < VM_MAXUSER_ADDRESS)
pmap_resident_count_inc(pmap, 1);
}
l3phys = VM_PAGE_TO_PHYS(ml3);
l3 = (pt_entry_t *)PHYS_TO_DMAP(l3phys);
/* Address the range points at */
phys = oldl2 & ~ATTR_MASK;
/* The attributes from the old l2 entry to be copied */
newl3 = (oldl2 & (ATTR_MASK & ~ATTR_DESCR_MASK)) | L3_PAGE;
/*
* If the page table page is new, initialize it.
*/
if (ml3->wire_count == 1) {
ml3->wire_count = NL3PG;
for (i = 0; i < Ln_ENTRIES; i++) {
l3[i] = newl3 | phys;
phys += L3_SIZE;
}
}
KASSERT(l3[0] == ((oldl2 & ~ATTR_DESCR_MASK) | L3_PAGE),
("Invalid l3 page (%lx != %lx)", l3[0],
(oldl2 & ~ATTR_DESCR_MASK) | L3_PAGE));
/*
* Map the temporary page so we don't lose access to the l2 table.
*/
if (tmpl2 != 0) {
pmap_kenter(tmpl2, PAGE_SIZE,
DMAP_TO_PHYS((vm_offset_t)l2) & ~L3_OFFSET, CACHED_MEMORY);
l2 = (pt_entry_t *)(tmpl2 + ((vm_offset_t)l2 & PAGE_MASK));
}
/*
* The spare PV entries must be reserved prior to demoting the
* mapping, that is, prior to changing the PDE. Otherwise, the state
* of the L2 and the PV lists will be inconsistent, which can result
* in reclaim_pv_chunk() attempting to remove a PV entry from the
* wrong PV list and pmap_pv_demote_l2() failing to find the expected
* PV entry for the 2MB page mapping that is being demoted.
*/
if ((oldl2 & ATTR_SW_MANAGED) != 0)
reserve_pv_entries(pmap, Ln_ENTRIES - 1, lockp);
pmap_update_entry(pmap, l2, l3phys | L2_TABLE, va, PAGE_SIZE);
/*
* Demote the PV entry.
*/
if ((oldl2 & ATTR_SW_MANAGED) != 0)
pmap_pv_demote_l2(pmap, va, oldl2 & ~ATTR_MASK, lockp);
atomic_add_long(&pmap_l2_demotions, 1);
CTR3(KTR_PMAP, "pmap_demote_l2: success for va %#lx"
" in pmap %p %lx", va, pmap, l3[0]);
fail:
if (tmpl2 != 0) {
pmap_kremove(tmpl2);
kva_free(tmpl2, PAGE_SIZE);
}
return (l3);
}
static pt_entry_t *
pmap_demote_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t va)
{
struct rwlock *lock;
pt_entry_t *l3;
lock = NULL;
l3 = pmap_demote_l2_locked(pmap, l2, va, &lock);
if (lock != NULL)
rw_wunlock(lock);
return (l3);
}
/*
* perform the pmap work for mincore
*/
int
pmap_mincore(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa)
{
pt_entry_t *pte, tpte;
vm_paddr_t mask, pa;
int lvl, val;
bool managed;
PMAP_LOCK(pmap);
retry:
val = 0;
pte = pmap_pte(pmap, addr, &lvl);
if (pte != NULL) {
tpte = pmap_load(pte);
switch (lvl) {
case 3:
mask = L3_OFFSET;
break;
case 2:
mask = L2_OFFSET;
break;
case 1:
mask = L1_OFFSET;
break;
default:
panic("pmap_mincore: invalid level %d", lvl);
}
val = MINCORE_INCORE;
if (lvl != 3)
val |= MINCORE_SUPER;
if (pmap_page_dirty(tpte))
val |= MINCORE_MODIFIED | MINCORE_MODIFIED_OTHER;
if ((tpte & ATTR_AF) == ATTR_AF)
val |= MINCORE_REFERENCED | MINCORE_REFERENCED_OTHER;
managed = (tpte & ATTR_SW_MANAGED) == ATTR_SW_MANAGED;
pa = (tpte & ~ATTR_MASK) | (addr & mask);
} else
managed = false;
if ((val & (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER)) !=
(MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER) && managed) {
/* Ensure that "PHYS_TO_VM_PAGE(pa)->object" doesn't change. */
if (vm_page_pa_tryrelock(pmap, pa, locked_pa))
goto retry;
} else
PA_UNLOCK_COND(*locked_pa);
PMAP_UNLOCK(pmap);
return (val);
}
void
pmap_activate(struct thread *td)
{
pmap_t pmap;
critical_enter();
pmap = vmspace_pmap(td->td_proc->p_vmspace);
td->td_proc->p_md.md_l0addr = vtophys(pmap->pm_l0);
__asm __volatile("msr ttbr0_el1, %0" : :
"r"(td->td_proc->p_md.md_l0addr));
pmap_invalidate_all(pmap);
critical_exit();
}
struct pcb *
pmap_switch(struct thread *old, struct thread *new)
{
pcpu_bp_harden bp_harden;
struct pcb *pcb;
/* Store the new curthread */
PCPU_SET(curthread, new);
/* And the new pcb */
pcb = new->td_pcb;
PCPU_SET(curpcb, pcb);
/*
* TODO: We may need to flush the cache here if switching
* to a user process.
*/
if (old == NULL ||
old->td_proc->p_md.md_l0addr != new->td_proc->p_md.md_l0addr) {
__asm __volatile(
/* Switch to the new pmap */
"msr ttbr0_el1, %0 \n"
"isb \n"
/* Invalidate the TLB */
"dsb ishst \n"
"tlbi vmalle1is \n"
"dsb ish \n"
"isb \n"
: : "r"(new->td_proc->p_md.md_l0addr));
/*
* Stop userspace from training the branch predictor against
* other processes. This will call into a CPU specific
* function that clears the branch predictor state.
*/
bp_harden = PCPU_GET(bp_harden);
if (bp_harden != NULL)
bp_harden();
}
return (pcb);
}
void
pmap_sync_icache(pmap_t pmap, vm_offset_t va, vm_size_t sz)
{
if (va >= VM_MIN_KERNEL_ADDRESS) {
cpu_icache_sync_range(va, sz);
} else {
u_int len, offset;
vm_paddr_t pa;
/* Find the length of data in this page to flush */
offset = va & PAGE_MASK;
len = imin(PAGE_SIZE - offset, sz);
while (sz != 0) {
/* Extract the physical address & find it in the DMAP */
pa = pmap_extract(pmap, va);
if (pa != 0)
cpu_icache_sync_range(PHYS_TO_DMAP(pa), len);
/* Move to the next page */
sz -= len;
va += len;
/* Set the length for the next iteration */
len = imin(PAGE_SIZE, sz);
}
}
}
int
pmap_fault(pmap_t pmap, uint64_t esr, uint64_t far)
{
#ifdef SMP
register_t intr;
uint64_t par;
switch (ESR_ELx_EXCEPTION(esr)) {
case EXCP_INSN_ABORT_L:
case EXCP_INSN_ABORT:
case EXCP_DATA_ABORT_L:
case EXCP_DATA_ABORT:
break;
default:
return (KERN_FAILURE);
}
/* Data and insn aborts use same encoding for FSC field. */
switch (esr & ISS_DATA_DFSC_MASK) {
case ISS_DATA_DFSC_TF_L0:
case ISS_DATA_DFSC_TF_L1:
case ISS_DATA_DFSC_TF_L2:
case ISS_DATA_DFSC_TF_L3:
PMAP_LOCK(pmap);
/* Ask the MMU to check the address */
intr = intr_disable();
if (pmap == kernel_pmap)
par = arm64_address_translate_s1e1r(far);
else
par = arm64_address_translate_s1e0r(far);
intr_restore(intr);
PMAP_UNLOCK(pmap);
/*
* If the translation was successful the address was invalid
* due to a break-before-make sequence. We can unlock and
* return success to the trap handler.
*/
if (PAR_SUCCESS(par))
return (KERN_SUCCESS);
break;
default:
break;
}
#endif
return (KERN_FAILURE);
}
/*
* Increase the starting virtual address of the given mapping if a
* different alignment might result in more superpage mappings.
*/
void
pmap_align_superpage(vm_object_t object, vm_ooffset_t offset,
vm_offset_t *addr, vm_size_t size)
{
vm_offset_t superpage_offset;
if (size < L2_SIZE)
return;
if (object != NULL && (object->flags & OBJ_COLORED) != 0)
offset += ptoa(object->pg_color);
superpage_offset = offset & L2_OFFSET;
if (size - ((L2_SIZE - superpage_offset) & L2_OFFSET) < L2_SIZE ||
(*addr & L2_OFFSET) == superpage_offset)
return;
if ((*addr & L2_OFFSET) < superpage_offset)
*addr = (*addr & ~L2_OFFSET) + superpage_offset;
else
*addr = ((*addr + L2_OFFSET) & ~L2_OFFSET) + superpage_offset;
}
/**
* Get the kernel virtual address of a set of physical pages. If there are
* physical addresses not covered by the DMAP, perform a transient mapping
* that will be removed when pmap_unmap_io_transient is called.
*
* \param page The pages for which the caller wishes to obtain
* kernel virtual addresses.
* \param vaddr On return contains the kernel virtual memory addresses
* of the pages passed in the page parameter.
* \param count Number of pages passed in.
* \param can_fault TRUE if the thread using the mapped pages can take
* page faults, FALSE otherwise.
*
* \returns TRUE if the caller must call pmap_unmap_io_transient when
* finished or FALSE otherwise.
*
*/
boolean_t
pmap_map_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count,
boolean_t can_fault)
{
vm_paddr_t paddr;
boolean_t needs_mapping;
int error, i;
/*
* Allocate any KVA space that we need; this is done in a separate
* loop to prevent calling vmem_alloc while pinned.
*/
needs_mapping = FALSE;
for (i = 0; i < count; i++) {
paddr = VM_PAGE_TO_PHYS(page[i]);
if (__predict_false(!PHYS_IN_DMAP(paddr))) {
error = vmem_alloc(kernel_arena, PAGE_SIZE,
M_BESTFIT | M_WAITOK, &vaddr[i]);
KASSERT(error == 0, ("vmem_alloc failed: %d", error));
needs_mapping = TRUE;
} else {
vaddr[i] = PHYS_TO_DMAP(paddr);
}
}
/* Exit early if everything is covered by the DMAP */
if (!needs_mapping)
return (FALSE);
if (!can_fault)
sched_pin();
for (i = 0; i < count; i++) {
paddr = VM_PAGE_TO_PHYS(page[i]);
if (!PHYS_IN_DMAP(paddr)) {
panic(
"pmap_map_io_transient: TODO: Map out of DMAP data");
}
}
return (needs_mapping);
}
void
pmap_unmap_io_transient(vm_page_t page[], vm_offset_t vaddr[], int count,
boolean_t can_fault)
{
vm_paddr_t paddr;
int i;
if (!can_fault)
sched_unpin();
for (i = 0; i < count; i++) {
paddr = VM_PAGE_TO_PHYS(page[i]);
if (!PHYS_IN_DMAP(paddr)) {
panic("ARM64TODO: pmap_unmap_io_transient: Unmap data");
}
}
}
boolean_t
pmap_is_valid_memattr(pmap_t pmap __unused, vm_memattr_t mode)
{
return (mode >= VM_MEMATTR_DEVICE && mode <= VM_MEMATTR_WRITE_THROUGH);
}
Index: projects/clang800-import/sys/arm64/conf/GENERIC
===================================================================
--- projects/clang800-import/sys/arm64/conf/GENERIC (revision 343955)
+++ projects/clang800-import/sys/arm64/conf/GENERIC (revision 343956)
@@ -1,305 +1,307 @@
#
# GENERIC -- Generic kernel configuration file for FreeBSD/arm64
#
# For more information on this file, please read the config(5) manual page,
# and/or the handbook section on Kernel Configuration Files:
#
# https://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/kernelconfig-config.html
#
# The handbook is also available locally in /usr/share/doc/handbook
# if you've installed the doc distribution, otherwise always see the
# FreeBSD World Wide Web server (https://www.FreeBSD.org/) for the
# latest information.
#
# An exhaustive list of options and more detailed explanations of the
# device lines is also present in the ../../conf/NOTES and NOTES files.
# If you are in doubt as to the purpose or necessity of a line, check first
# in NOTES.
#
# $FreeBSD$
cpu ARM64
ident GENERIC
makeoptions DEBUG=-g # Build kernel with gdb(1) debug symbols
makeoptions WITH_CTF=1 # Run ctfconvert(1) for DTrace support
options SCHED_ULE # ULE scheduler
options PREEMPTION # Enable kernel thread preemption
options VIMAGE # Subsystem virtualization, e.g. VNET
options INET # InterNETworking
options INET6 # IPv6 communications protocols
options IPSEC # IP (v4/v6) security
options IPSEC_SUPPORT # Allow kldload of ipsec and tcpmd5
options TCP_HHOOK # hhook(9) framework for TCP
options TCP_OFFLOAD # TCP offload
options TCP_RFC7413 # TCP Fast Open
options SCTP # Stream Control Transmission Protocol
options FFS # Berkeley Fast Filesystem
options SOFTUPDATES # Enable FFS soft updates support
options UFS_ACL # Support for access control lists
options UFS_DIRHASH # Improve performance on big directories
options UFS_GJOURNAL # Enable gjournal-based UFS journaling
options QUOTA # Enable disk quotas for UFS
options MD_ROOT # MD is a potential root device
options NFSCL # Network Filesystem Client
options NFSD # Network Filesystem Server
options NFSLOCKD # Network Lock Manager
options NFS_ROOT # NFS usable as /, requires NFSCL
options MSDOSFS # MSDOS Filesystem
options CD9660 # ISO 9660 Filesystem
options PROCFS # Process filesystem (requires PSEUDOFS)
options PSEUDOFS # Pseudo-filesystem framework
options GEOM_RAID # Soft RAID functionality.
options GEOM_LABEL # Provides labelization
options COMPAT_FREEBSD32 # Compatible with FreeBSD/arm
options COMPAT_FREEBSD11 # Compatible with FreeBSD11
options SCSI_DELAY=5000 # Delay (in ms) before probing SCSI
options KTRACE # ktrace(1) support
options STACK # stack(9) support
options SYSVSHM # SYSV-style shared memory
options SYSVMSG # SYSV-style message queues
options SYSVSEM # SYSV-style semaphores
options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options PRINTF_BUFR_SIZE=128 # Prevent printf output being interspersed.
options KBD_INSTALL_CDEV # install a CDEV entry in /dev
options HWPMC_HOOKS # Necessary kernel hooks for hwpmc(4)
options AUDIT # Security event auditing
options CAPABILITY_MODE # Capsicum capability mode
options CAPABILITIES # Capsicum capabilities
options MAC # TrustedBSD MAC Framework
options KDTRACE_FRAME # Ensure frames are compiled in
options KDTRACE_HOOKS # Kernel DTrace hooks
options VFP # Floating-point support
options RACCT # Resource accounting framework
options RACCT_DEFAULT_TO_DISABLED # Set kern.racct.enable=0 by default
options RCTL # Resource limits
options SMP
options INTRNG
# Debugging support. Always need this:
options KDB # Enable kernel debugger support.
options KDB_TRACE # Print a stack trace for a panic.
# For full debugger support use (turn off in stable branch):
options DDB # Support DDB.
#options GDB # Support remote GDB.
options DEADLKRES # Enable the deadlock resolver
options INVARIANTS # Enable calls of extra sanity checking
options INVARIANT_SUPPORT # Extra sanity checks of internal structures, required by INVARIANTS
options WITNESS # Enable checks to detect deadlocks and cycles
options WITNESS_SKIPSPIN # Don't run witness on spinlocks for speed
options MALLOC_DEBUG_MAXZONES=8 # Separate malloc(9) zones
options ALT_BREAK_TO_DEBUGGER # Enter debugger on keyboard escape sequence
options USB_DEBUG # enable debug msgs
options VERBOSE_SYSINIT=0 # Support debug.verbose_sysinit, off by default
# Kernel Sanitizers
-options COVERAGE # Generic kernel coverage. Used by KCOV
-options KCOV # Kernel Coverage Sanitizer
+#options COVERAGE # Generic kernel coverage. Used by KCOV
+#options KCOV # Kernel Coverage Sanitizer
# Warning: KUBSAN can result in a kernel too large for loader to load
#options KUBSAN # Kernel Undefined Behavior Sanitizer
# Kernel dump features.
options EKCD # Support for encrypted kernel dumps
options GZIO # gzip-compressed kernel and user dumps
options ZSTDIO # zstd-compressed kernel and user dumps
options NETDUMP # netdump(4) client support
# SoC support
options SOC_ALLWINNER_A64
options SOC_ALLWINNER_H5
options SOC_CAVM_THUNDERX
options SOC_HISI_HI6220
options SOC_BRCM_BCM2837
options SOC_MARVELL_8K
options SOC_ROCKCHIP_RK3328
options SOC_ROCKCHIP_RK3399
options SOC_XILINX_ZYNQ
# Timer drivers
device a10_timer
# Annapurna Alpine drivers
device al_ccu # Alpine Cache Coherency Unit
device al_nb_service # Alpine North Bridge Service
device al_iofic # I/O Fabric Interrupt Controller
device al_serdes # Serializer/Deserializer
device al_udma # Universal DMA
# Qualcomm Snapdragon drivers
device qcom_gcc # Global Clock Controller
# VirtIO support
device virtio
device virtio_pci
device virtio_mmio
device virtio_blk
device vtnet
# CPU frequency control
device cpufreq
# Bus drivers
device pci
device al_pci # Annapurna Alpine PCI-E
options PCI_HP # PCI-Express native HotPlug
options PCI_IOV # PCI SR-IOV support
# PCI/PCI-X/PCIe Ethernet NICs that use iflib infrastructure
device iflib
device em # Intel PRO/1000 Gigabit Ethernet Family
device ix # Intel 10Gb Ethernet Family
# Ethernet NICs
device mdio
device mii
device miibus # MII bus support
device awg # Allwinner EMAC Gigabit Ethernet
device axgbe # AMD Opteron A1100 integrated NIC
device msk # Marvell/SysKonnect Yukon II Gigabit Ethernet
device neta # Marvell Armada 370/38x/XP/3700 NIC
device smc # SMSC LAN91C111
device vnic # Cavium ThunderX NIC
device al_eth # Annapurna Alpine Ethernet NIC
device dwc_rk # Rockchip Designware
# Block devices
device ahci
device scbus
device da
# ATA/SCSI peripherals
device pass # Passthrough device (direct ATA/SCSI access)
# MMC/SD/SDIO Card slot support
device sdhci
device sdhci_xenon # Marvell Xenon SD/MMC controller
device aw_mmc # Allwinner SD/MMC controller
device mmc # mmc/sd bus
device mmcsd # mmc/sd flash cards
device dwmmc
# Serial (COM) ports
device uart # Generic UART driver
device uart_msm # Qualcomm MSM UART driver
device uart_mu # RPI3 aux port
device uart_mvebu # Armada 3700 UART driver
device uart_ns8250 # ns8250-type UART driver
device uart_snps
device pl011
# USB support
device aw_ehci # Allwinner EHCI USB interface (USB 2.0)
device aw_usbphy # Allwinner USB PHY
device dwcotg # DWC OTG controller
device ohci # OHCI USB interface
device ehci # EHCI USB interface (USB 2.0)
device ehci_mv # Marvell EHCI USB interface
device xhci # XHCI PCI->USB interface (USB 3.0)
device xhci_mv # Marvell XHCI USB interface
device usb # USB Bus (required)
device ukbd # Keyboard
device umass # Disks/Mass storage - Requires scbus and da
# USB ethernet support
device muge
device smcphy
device smsc
# GPIO / PINCTRL
device aw_gpio # Allwinner GPIO controller
device gpio
device gpioled
device fdt_pinctrl
device mv_gpio # Marvell GPIO controller
device mvebu_pinctrl # Marvell Pinmux Controller
+device rk_gpio # RockChip GPIO Controller
+device rk_pinctrl # RockChip Pinmux Controller
# I2C
device aw_rsb # Allwinner Reduced Serial Bus
device bcm2835_bsc # Broadcom BCM283x I2C bus
device iicbus
device iic
device twsi # Allwinner I2C controller
device rk_i2c # RockChip I2C controller
device syr827 # Silergy SYR827 PMIC
# Clock and reset controllers
device aw_ccu # Allwinner clock controller
# Interrupt controllers
device aw_nmi # Allwinner NMI support
device mv_cp110_icu # Marvell CP110 ICU
device mv_ap806_gicp # Marvell AP806 GICP
# Real-time clock support
device aw_rtc # Allwinner Real-time Clock
device mv_rtc # Marvell Real-time Clock
# Watchdog controllers
device aw_wdog # Allwinner Watchdog
# Power management controllers
device axp81x # X-Powers AXP81x PMIC
device rk805 # RockChip RK805 PMIC
# EFUSE
device aw_sid # Allwinner Secure ID EFUSE
# Thermal sensors
device aw_thermal # Allwinner Thermal Sensor Controller
device mv_thermal # Marvell Thermal Sensor Controller
# SPI
device spibus
device bcm2835_spi # Broadcom BCM283x SPI bus
# PWM
device pwm
device aw_pwm
# Console
device vt
device kbdmux
device vt_efifb
# EVDEV support
device evdev # input event device support
options EVDEV_SUPPORT # evdev support in legacy drivers
device uinput # install /dev/uinput cdev
# Pseudo devices.
device crypto # core crypto support
device loop # Network loopback
device random # Entropy device
device ether # Ethernet support
device vlan # 802.1Q VLAN support
device tun # Packet tunnel.
device md # Memory "disks"
device gif # IPv6 and IPv4 tunneling
device firmware # firmware assist module
options EFIRT # EFI Runtime Services
# EXT_RESOURCES pseudo devices
options EXT_RESOURCES
device clk
device phy
device hwreset
device nvmem
device regulator
device syscon
device aw_syscon
# The `bpf' device enables the Berkeley Packet Filter.
# Be aware of the administrative consequences of enabling this!
# Note that 'bpf' is required for DHCP.
device bpf # Berkeley packet filter
# Chip-specific errata
options THUNDERX_PASS_1_1_ERRATA
options FDT
device acpi
# DTBs
makeoptions MODULES_EXTRA="dtb/allwinner dtb/rockchip dtb/rpi"
Index: projects/clang800-import/sys/cam/ata/ata_da.c
===================================================================
--- projects/clang800-import/sys/cam/ata/ata_da.c (revision 343955)
+++ projects/clang800-import/sys/cam/ata/ata_da.c (revision 343956)
@@ -1,3621 +1,3630 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2009 Alexander Motin <mav@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer,
* without modification, immediately at the beginning of the file.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_ada.h"
#include <sys/param.h>
#ifdef _KERNEL
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/bio.h>
#include <sys/sysctl.h>
#include <sys/taskqueue.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/conf.h>
#include <sys/devicestat.h>
#include <sys/eventhandler.h>
#include <sys/malloc.h>
#include <sys/endian.h>
#include <sys/cons.h>
#include <sys/proc.h>
#include <sys/reboot.h>
#include <sys/sbuf.h>
#include <geom/geom_disk.h>
#endif /* _KERNEL */
#ifndef _KERNEL
#include <stdio.h>
#include <string.h>
#endif /* _KERNEL */
#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/cam_periph.h>
#include <cam/cam_xpt_periph.h>
#include <cam/scsi/scsi_all.h>
#include <cam/scsi/scsi_da.h>
#include <cam/cam_sim.h>
#include <cam/cam_iosched.h>
#include <cam/ata/ata_all.h>
#include <machine/md_var.h> /* geometry translation */
#ifdef _KERNEL
#define ATA_MAX_28BIT_LBA 268435455UL
extern int iosched_debug;
typedef enum {
ADA_STATE_RAHEAD,
ADA_STATE_WCACHE,
ADA_STATE_LOGDIR,
ADA_STATE_IDDIR,
ADA_STATE_SUP_CAP,
ADA_STATE_ZONE,
ADA_STATE_NORMAL
} ada_state;
typedef enum {
ADA_FLAG_CAN_48BIT = 0x00000002,
ADA_FLAG_CAN_FLUSHCACHE = 0x00000004,
ADA_FLAG_CAN_NCQ = 0x00000008,
ADA_FLAG_CAN_DMA = 0x00000010,
ADA_FLAG_NEED_OTAG = 0x00000020,
ADA_FLAG_WAS_OTAG = 0x00000040,
ADA_FLAG_CAN_TRIM = 0x00000080,
ADA_FLAG_OPEN = 0x00000100,
ADA_FLAG_SCTX_INIT = 0x00000200,
ADA_FLAG_CAN_CFA = 0x00000400,
ADA_FLAG_CAN_POWERMGT = 0x00000800,
ADA_FLAG_CAN_DMA48 = 0x00001000,
ADA_FLAG_CAN_LOG = 0x00002000,
ADA_FLAG_CAN_IDLOG = 0x00004000,
ADA_FLAG_CAN_SUPCAP = 0x00008000,
ADA_FLAG_CAN_ZONE = 0x00010000,
ADA_FLAG_CAN_WCACHE = 0x00020000,
ADA_FLAG_CAN_RAHEAD = 0x00040000,
ADA_FLAG_PROBED = 0x00080000,
ADA_FLAG_ANNOUNCED = 0x00100000,
ADA_FLAG_DIRTY = 0x00200000,
ADA_FLAG_CAN_NCQ_TRIM = 0x00400000, /* CAN_TRIM also set */
ADA_FLAG_PIM_ATA_EXT = 0x00800000
} ada_flags;
typedef enum {
ADA_Q_NONE = 0x00,
ADA_Q_4K = 0x01,
ADA_Q_NCQ_TRIM_BROKEN = 0x02,
ADA_Q_LOG_BROKEN = 0x04,
ADA_Q_SMR_DM = 0x08,
- ADA_Q_NO_TRIM = 0x10
+ ADA_Q_NO_TRIM = 0x10,
+ ADA_Q_128KB = 0x20
} ada_quirks;
#define ADA_Q_BIT_STRING \
"\020" \
"\0014K" \
"\002NCQ_TRIM_BROKEN" \
"\003LOG_BROKEN" \
"\004SMR_DM" \
- "\005NO_TRIM"
+ "\005NO_TRIM" \
+ "\006128KB"
typedef enum {
ADA_CCB_RAHEAD = 0x01,
ADA_CCB_WCACHE = 0x02,
ADA_CCB_BUFFER_IO = 0x03,
ADA_CCB_DUMP = 0x05,
ADA_CCB_TRIM = 0x06,
ADA_CCB_LOGDIR = 0x07,
ADA_CCB_IDDIR = 0x08,
ADA_CCB_SUP_CAP = 0x09,
ADA_CCB_ZONE = 0x0a,
ADA_CCB_TYPE_MASK = 0x0F,
} ada_ccb_state;
typedef enum {
ADA_ZONE_NONE = 0x00,
ADA_ZONE_DRIVE_MANAGED = 0x01,
ADA_ZONE_HOST_AWARE = 0x02,
ADA_ZONE_HOST_MANAGED = 0x03
} ada_zone_mode;
typedef enum {
ADA_ZONE_FLAG_RZ_SUP = 0x0001,
ADA_ZONE_FLAG_OPEN_SUP = 0x0002,
ADA_ZONE_FLAG_CLOSE_SUP = 0x0004,
ADA_ZONE_FLAG_FINISH_SUP = 0x0008,
ADA_ZONE_FLAG_RWP_SUP = 0x0010,
ADA_ZONE_FLAG_SUP_MASK = (ADA_ZONE_FLAG_RZ_SUP |
ADA_ZONE_FLAG_OPEN_SUP |
ADA_ZONE_FLAG_CLOSE_SUP |
ADA_ZONE_FLAG_FINISH_SUP |
ADA_ZONE_FLAG_RWP_SUP),
ADA_ZONE_FLAG_URSWRZ = 0x0020,
ADA_ZONE_FLAG_OPT_SEQ_SET = 0x0040,
ADA_ZONE_FLAG_OPT_NONSEQ_SET = 0x0080,
ADA_ZONE_FLAG_MAX_SEQ_SET = 0x0100,
ADA_ZONE_FLAG_SET_MASK = (ADA_ZONE_FLAG_OPT_SEQ_SET |
ADA_ZONE_FLAG_OPT_NONSEQ_SET |
ADA_ZONE_FLAG_MAX_SEQ_SET)
} ada_zone_flags;
static struct ada_zone_desc {
ada_zone_flags value;
const char *desc;
} ada_zone_desc_table[] = {
{ADA_ZONE_FLAG_RZ_SUP, "Report Zones" },
{ADA_ZONE_FLAG_OPEN_SUP, "Open" },
{ADA_ZONE_FLAG_CLOSE_SUP, "Close" },
{ADA_ZONE_FLAG_FINISH_SUP, "Finish" },
{ADA_ZONE_FLAG_RWP_SUP, "Reset Write Pointer" },
};
/* Offsets into our private area for storing information */
#define ccb_state ppriv_field0
#define ccb_bp ppriv_ptr1
typedef enum {
ADA_DELETE_NONE,
ADA_DELETE_DISABLE,
ADA_DELETE_CFA_ERASE,
ADA_DELETE_DSM_TRIM,
ADA_DELETE_NCQ_DSM_TRIM,
ADA_DELETE_MIN = ADA_DELETE_CFA_ERASE,
ADA_DELETE_MAX = ADA_DELETE_NCQ_DSM_TRIM,
} ada_delete_methods;
static const char *ada_delete_method_names[] =
{ "NONE", "DISABLE", "CFA_ERASE", "DSM_TRIM", "NCQ_DSM_TRIM" };
#if 0
static const char *ada_delete_method_desc[] =
{ "NONE", "DISABLED", "CFA Erase", "DSM Trim", "DSM Trim via NCQ" };
#endif
struct disk_params {
u_int8_t heads;
u_int8_t secs_per_track;
u_int32_t cylinders;
u_int32_t secsize; /* Number of bytes/logical sector */
u_int64_t sectors; /* Total number of sectors */
};
#define TRIM_MAX_BLOCKS 8
#define TRIM_MAX_RANGES (TRIM_MAX_BLOCKS * ATA_DSM_BLK_RANGES)
struct trim_request {
uint8_t data[TRIM_MAX_RANGES * ATA_DSM_RANGE_SIZE];
TAILQ_HEAD(, bio) bps;
};
struct ada_softc {
struct cam_iosched_softc *cam_iosched;
int outstanding_cmds; /* Number of active commands */
int refcount; /* Active xpt_action() calls */
ada_state state;
ada_flags flags;
ada_zone_mode zone_mode;
ada_zone_flags zone_flags;
struct ata_gp_log_dir ata_logdir;
int valid_logdir_len;
struct ata_identify_log_pages ata_iddir;
int valid_iddir_len;
uint64_t optimal_seq_zones;
uint64_t optimal_nonseq_zones;
uint64_t max_seq_zones;
ada_quirks quirks;
ada_delete_methods delete_method;
int trim_max_ranges;
int read_ahead;
int write_cache;
int unmappedio;
int rotating;
#ifdef CAM_TEST_FAILURE
int force_read_error;
int force_write_error;
int periodic_read_error;
int periodic_read_count;
#endif
struct disk_params params;
struct disk *disk;
struct task sysctl_task;
struct sysctl_ctx_list sysctl_ctx;
struct sysctl_oid *sysctl_tree;
struct callout sendordered_c;
struct trim_request trim_req;
uint64_t trim_count;
uint64_t trim_ranges;
uint64_t trim_lbas;
#ifdef CAM_IO_STATS
struct sysctl_ctx_list sysctl_stats_ctx;
struct sysctl_oid *sysctl_stats_tree;
u_int timeouts;
u_int errors;
u_int invalidations;
#endif
#define ADA_ANNOUNCETMP_SZ 80
char announce_temp[ADA_ANNOUNCETMP_SZ];
#define ADA_ANNOUNCE_SZ 400
char announce_buffer[ADA_ANNOUNCE_SZ];
};
struct ada_quirk_entry {
struct scsi_inquiry_pattern inq_pat;
ada_quirks quirks;
};
static struct ada_quirk_entry ada_quirk_table[] =
{
{
+ /* Sandisk X400 */
+ { T_DIRECT, SIP_MEDIA_FIXED, "*", "SanDisk?SD8SB8U1T00*", "X4162000*" },
+ /*quirks*/ADA_Q_128KB
+ },
+ {
/* Hitachi Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Hitachi H??????????E3*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Samsung Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG HD155UI*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Samsung Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG HD204UI*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Barracuda Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST????DL*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Barracuda Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST???DM*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Barracuda Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST????DM*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST9500423AS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST9500424AS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST9640423AS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST9640424AS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST9750420AS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST9750422AS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST9750423AS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* Seagate Momentus Thin Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST???LT*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Red Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD????CX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD????RS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Green/Red Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD????RX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Red Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD??????CX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD????AZEX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD????FZEX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD??????RS*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD??????RX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Scorpio Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD???PKT*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Scorpio Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD?????PKT*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Scorpio Blue Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD???PVT*", "*" },
/*quirks*/ADA_Q_4K
},
{
/* WDC Scorpio Blue Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "WDC WD?????PVT*", "*" },
/*quirks*/ADA_Q_4K
},
/* SSDs */
{
/*
* Corsair Force 2 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Corsair CSSD-F*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Corsair Force 3 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Corsair Force 3*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Corsair Neutron GTX SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Corsair Neutron GTX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Corsair Force GT & GS SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Corsair Force G*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Crucial M4 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "M4-CT???M4SSD2*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Crucial M500 SSDs MU07 firmware
* NCQ Trim works
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Crucial CT*M500*", "MU07" },
/*quirks*/0
},
{
/*
* Crucial M500 SSDs all other firmware
* NCQ Trim doesn't work
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Crucial CT*M500*", "*" },
/*quirks*/ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Crucial M550 SSDs
* NCQ Trim doesn't work, but only on MU01 firmware
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Crucial CT*M550*", "MU01" },
/*quirks*/ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Crucial MX100 SSDs
* NCQ Trim doesn't work, but only on MU01 firmware
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Crucial CT*MX100*", "MU01" },
/*quirks*/ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Crucial RealSSD C300 SSDs
* 4k optimised
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "C300-CTFDDAC???MAG*",
"*" }, /*quirks*/ADA_Q_4K
},
{
/*
* FCCT M500 SSDs
* NCQ Trim doesn't work
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "FCCT*M500*", "*" },
/*quirks*/ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Intel 320 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "INTEL SSDSA2CW*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Intel 330 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "INTEL SSDSC2CT*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Intel 510 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "INTEL SSDSC2MH*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Intel 520 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "INTEL SSDSC2BW*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Intel S3610 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "INTEL SSDSC2BX*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Intel X25-M Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "INTEL SSDSA2M*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* KingDian S200 60GB P0921B
* Trimming crashes the SSD
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "KingDian S200 *", "*" },
/*quirks*/ADA_Q_NO_TRIM
},
{
/*
* Kingston E100 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "KINGSTON SE100S3*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Kingston HyperX 3k SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "KINGSTON SH103S3*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Marvell SSDs (entry taken from OpenSolaris)
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "MARVELL SD88SA02*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Micron M500 SSDs firmware MU07
* NCQ Trim works?
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Micron M500*", "MU07" },
/*quirks*/0
},
{
/*
* Micron M500 SSDs all other firmware
* NCQ Trim doesn't work
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Micron M500*", "*" },
/*quirks*/ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Micron M5[15]0 SSDs
* NCQ Trim doesn't work, but only MU01 firmware
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Micron M5[15]0*", "MU01" },
/*quirks*/ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Micron 5100 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Micron 5100 MTFDDAK*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* OCZ Agility 2 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "OCZ-AGILITY2*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* OCZ Agility 3 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "OCZ-AGILITY3*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* OCZ Deneva R Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "DENRSTE251M45*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* OCZ Vertex 2 SSDs (inc pro series)
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "OCZ?VERTEX2*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* OCZ Vertex 3 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "OCZ-VERTEX3*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* OCZ Vertex 4 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "OCZ-VERTEX4*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Samsung 750 SSDs
* 4k optimised, NCQ TRIM seems to work
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Samsung SSD 750*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Samsung 830 Series SSDs
* 4k optimised, NCQ TRIM Broken (normal TRIM is fine)
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG SSD 830 Series*", "*" },
/*quirks*/ADA_Q_4K | ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Samsung 840 SSDs
* 4k optimised, NCQ TRIM Broken (normal TRIM is fine)
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Samsung SSD 840*", "*" },
/*quirks*/ADA_Q_4K | ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Samsung 845 SSDs
* 4k optimised, NCQ TRIM Broken (normal TRIM is fine)
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Samsung SSD 845*", "*" },
/*quirks*/ADA_Q_4K | ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Samsung 850 SSDs
* 4k optimised, NCQ TRIM broken (normal TRIM fine)
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Samsung SSD 850*", "*" },
/*quirks*/ADA_Q_4K | ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Samsung SM863 Series SSDs (MZ7KM*)
* 4k optimised, NCQ believed to be working
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG MZ7KM*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Samsung 843T Series SSDs (MZ7WD*)
* Samsung PM851 Series SSDs (MZ7TE*)
* Samsung PM853T Series SSDs (MZ7GE*)
* 4k optimised, NCQ believed to be broken since these
* appear to be built with the same controllers as the 840/850.
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG MZ7*", "*" },
/*quirks*/ADA_Q_4K | ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Same as for SAMSUNG MZ7*, but enable the quirks for SSDs
* starting with MZ7* too
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "MZ7*", "*" },
/*quirks*/ADA_Q_4K | ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* Samsung PM851 Series SSDs Dell OEM
* device model "SAMSUNG SSD PM851 mSATA 256GB"
* 4k optimised, NCQ broken
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG SSD PM851*", "*" },
/*quirks*/ADA_Q_4K | ADA_Q_NCQ_TRIM_BROKEN
},
{
/*
* SuperTalent TeraDrive CT SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "FTM??CT25H*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* XceedIOPS SATA SSDs
* 4k optimised
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SG9XCS2D*", "*" },
/*quirks*/ADA_Q_4K
},
{
/*
* Samsung drive that doesn't support READ LOG EXT or
* READ LOG DMA EXT, despite reporting that it does in
* ATA identify data:
* SAMSUNG HD200HJ KF100-06
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG HD200*", "*" },
/*quirks*/ADA_Q_LOG_BROKEN
},
{
/*
* Samsung drive that doesn't support READ LOG EXT or
* READ LOG DMA EXT, despite reporting that it does in
* ATA identify data:
* SAMSUNG HD501LJ CR100-10
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "SAMSUNG HD501*", "*" },
/*quirks*/ADA_Q_LOG_BROKEN
},
{
/*
* Seagate Lamarr 8TB Shingled Magnetic Recording (SMR)
* Drive Managed SATA hard drive. This drive doesn't report
* in firmware that it is a drive-managed SMR drive.
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "ST8000AS000[23]*", "*" },
/*quirks*/ADA_Q_SMR_DM
},
{
/* Default */
{
T_ANY, SIP_MEDIA_REMOVABLE|SIP_MEDIA_FIXED,
/*vendor*/"*", /*product*/"*", /*revision*/"*"
},
/*quirks*/0
},
};
static disk_strategy_t adastrategy;
static dumper_t adadump;
static periph_init_t adainit;
static void adadiskgonecb(struct disk *dp);
static periph_oninv_t adaoninvalidate;
static periph_dtor_t adacleanup;
static void adaasync(void *callback_arg, u_int32_t code,
struct cam_path *path, void *arg);
static int adazonemodesysctl(SYSCTL_HANDLER_ARGS);
static int adazonesupsysctl(SYSCTL_HANDLER_ARGS);
static void adasysctlinit(void *context, int pending);
static int adagetattr(struct bio *bp);
static void adasetflags(struct ada_softc *softc,
struct ccb_getdev *cgd);
static periph_ctor_t adaregister;
static void ada_dsmtrim(struct ada_softc *softc, struct bio *bp,
struct ccb_ataio *ataio);
static void ada_cfaerase(struct ada_softc *softc, struct bio *bp,
struct ccb_ataio *ataio);
static int ada_zone_bio_to_ata(int disk_zone_cmd);
static int ada_zone_cmd(struct cam_periph *periph, union ccb *ccb,
struct bio *bp, int *queue_ccb);
static periph_start_t adastart;
static void adaprobedone(struct cam_periph *periph, union ccb *ccb);
static void adazonedone(struct cam_periph *periph, union ccb *ccb);
static void adadone(struct cam_periph *periph,
union ccb *done_ccb);
static int adaerror(union ccb *ccb, u_int32_t cam_flags,
u_int32_t sense_flags);
static void adagetparams(struct cam_periph *periph,
struct ccb_getdev *cgd);
static timeout_t adasendorderedtag;
static void adashutdown(void *arg, int howto);
static void adasuspend(void *arg);
static void adaresume(void *arg);
#ifndef ADA_DEFAULT_TIMEOUT
#define ADA_DEFAULT_TIMEOUT 30 /* Timeout in seconds */
#endif
#ifndef ADA_DEFAULT_RETRY
#define ADA_DEFAULT_RETRY 4
#endif
#ifndef ADA_DEFAULT_SEND_ORDERED
#define ADA_DEFAULT_SEND_ORDERED 1
#endif
#ifndef ADA_DEFAULT_SPINDOWN_SHUTDOWN
#define ADA_DEFAULT_SPINDOWN_SHUTDOWN 1
#endif
#ifndef ADA_DEFAULT_SPINDOWN_SUSPEND
#define ADA_DEFAULT_SPINDOWN_SUSPEND 1
#endif
#ifndef ADA_DEFAULT_READ_AHEAD
#define ADA_DEFAULT_READ_AHEAD 1
#endif
#ifndef ADA_DEFAULT_WRITE_CACHE
#define ADA_DEFAULT_WRITE_CACHE 1
#endif
/*
 * A per-device read_ahead/write_cache value of -1 (the initial value)
 * means "unset"; fall back to the global ada_read_ahead and
 * ada_write_cache tunables in that case.
 */
#define ADA_RA (softc->read_ahead >= 0 ? \
 softc->read_ahead : ada_read_ahead)
#define ADA_WC (softc->write_cache >= 0 ? \
 softc->write_cache : ada_write_cache)
/*
* Most platforms map firmware geometry to the actual geometry, but
* some don't. If not overridden, default to a no-op.
*/
#ifndef ata_disk_firmware_geom_adjust
#define ata_disk_firmware_geom_adjust(disk)
#endif
static int ada_retry_count = ADA_DEFAULT_RETRY;
static int ada_default_timeout = ADA_DEFAULT_TIMEOUT;
static int ada_send_ordered = ADA_DEFAULT_SEND_ORDERED;
static int ada_spindown_shutdown = ADA_DEFAULT_SPINDOWN_SHUTDOWN;
static int ada_spindown_suspend = ADA_DEFAULT_SPINDOWN_SUSPEND;
static int ada_read_ahead = ADA_DEFAULT_READ_AHEAD;
static int ada_write_cache = ADA_DEFAULT_WRITE_CACHE;
static SYSCTL_NODE(_kern_cam, OID_AUTO, ada, CTLFLAG_RD, 0,
"CAM Direct Access Disk driver");
SYSCTL_INT(_kern_cam_ada, OID_AUTO, retry_count, CTLFLAG_RWTUN,
&ada_retry_count, 0, "Normal I/O retry count");
SYSCTL_INT(_kern_cam_ada, OID_AUTO, default_timeout, CTLFLAG_RWTUN,
&ada_default_timeout, 0, "Normal I/O timeout (in seconds)");
SYSCTL_INT(_kern_cam_ada, OID_AUTO, send_ordered, CTLFLAG_RWTUN,
&ada_send_ordered, 0, "Send Ordered Tags");
SYSCTL_INT(_kern_cam_ada, OID_AUTO, spindown_shutdown, CTLFLAG_RWTUN,
&ada_spindown_shutdown, 0, "Spin down upon shutdown");
SYSCTL_INT(_kern_cam_ada, OID_AUTO, spindown_suspend, CTLFLAG_RWTUN,
&ada_spindown_suspend, 0, "Spin down upon suspend");
SYSCTL_INT(_kern_cam_ada, OID_AUTO, read_ahead, CTLFLAG_RWTUN,
&ada_read_ahead, 0, "Enable disk read-ahead");
SYSCTL_INT(_kern_cam_ada, OID_AUTO, write_cache, CTLFLAG_RWTUN,
&ada_write_cache, 0, "Enable disk write cache");
/*
* ADA_ORDEREDTAG_INTERVAL determines how often, relative
* to the default timeout, we check to see whether an ordered
* tagged transaction is appropriate to prevent simple tag
* starvation. Since we'd like to ensure that there is at least
* 1/2 of the timeout length left for a starved transaction to
* complete after we've sent an ordered tag, we must poll at least
* four times in every timeout period. This takes care of the worst
* case where a starved transaction starts during an interval that
* satisfies the "don't send an ordered tag" test, so it takes
* us two intervals to determine that a tag must be sent.
*/
#ifndef ADA_ORDEREDTAG_INTERVAL
#define ADA_ORDEREDTAG_INTERVAL 4
#endif
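/*
 * Worked example, assuming the defaults above: with ADA_DEFAULT_TIMEOUT
 * of 30 seconds and ADA_ORDEREDTAG_INTERVAL of 4, the sendordered
 * callout fires every 30/4 = 7.5 seconds, so in the worst case a
 * starved transaction is detected after two intervals (15 seconds),
 * leaving half of the timeout for it to complete after the ordered
 * tag is sent.
 */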
static struct periph_driver adadriver =
{
adainit, "ada",
TAILQ_HEAD_INITIALIZER(adadriver.units), /* generation */ 0
};
static int adadeletemethodsysctl(SYSCTL_HANDLER_ARGS);
PERIPHDRIVER_DECLARE(ada, adadriver);
static MALLOC_DEFINE(M_ATADA, "ata_da", "ata_da buffers");
static int
adaopen(struct disk *dp)
{
struct cam_periph *periph;
struct ada_softc *softc;
int error;
periph = (struct cam_periph *)dp->d_drv1;
if (cam_periph_acquire(periph) != 0) {
return(ENXIO);
}
cam_periph_lock(periph);
if ((error = cam_periph_hold(periph, PRIBIO|PCATCH)) != 0) {
cam_periph_unlock(periph);
cam_periph_release(periph);
return (error);
}
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE | CAM_DEBUG_PERIPH,
("adaopen\n"));
softc = (struct ada_softc *)periph->softc;
softc->flags |= ADA_FLAG_OPEN;
cam_periph_unhold(periph);
cam_periph_unlock(periph);
return (0);
}
static int
adaclose(struct disk *dp)
{
struct cam_periph *periph;
struct ada_softc *softc;
union ccb *ccb;
int error;
periph = (struct cam_periph *)dp->d_drv1;
softc = (struct ada_softc *)periph->softc;
cam_periph_lock(periph);
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE | CAM_DEBUG_PERIPH,
("adaclose\n"));
/* We only sync the cache if the drive is capable of it. */
if ((softc->flags & ADA_FLAG_DIRTY) != 0 &&
(softc->flags & ADA_FLAG_CAN_FLUSHCACHE) != 0 &&
(periph->flags & CAM_PERIPH_INVALID) == 0 &&
cam_periph_hold(periph, PRIBIO) == 0) {
ccb = cam_periph_getccb(periph, CAM_PRIORITY_NORMAL);
cam_fill_ataio(&ccb->ataio,
1,
NULL,
CAM_DIR_NONE,
0,
NULL,
0,
ada_default_timeout*1000);
if (softc->flags & ADA_FLAG_CAN_48BIT)
ata_48bit_cmd(&ccb->ataio, ATA_FLUSHCACHE48, 0, 0, 0);
else
ata_28bit_cmd(&ccb->ataio, ATA_FLUSHCACHE, 0, 0, 0);
error = cam_periph_runccb(ccb, adaerror, /*cam_flags*/0,
/*sense_flags*/0, softc->disk->d_devstat);
if (error != 0)
xpt_print(periph->path, "Synchronize cache failed\n");
softc->flags &= ~ADA_FLAG_DIRTY;
xpt_release_ccb(ccb);
cam_periph_unhold(periph);
}
softc->flags &= ~ADA_FLAG_OPEN;
while (softc->refcount != 0)
cam_periph_sleep(periph, &softc->refcount, PRIBIO, "adaclose", 1);
cam_periph_unlock(periph);
cam_periph_release(periph);
return (0);
}
static void
adaschedule(struct cam_periph *periph)
{
struct ada_softc *softc = (struct ada_softc *)periph->softc;
if (softc->state != ADA_STATE_NORMAL)
return;
cam_iosched_schedule(softc->cam_iosched, periph);
}
/*
* Actually translate the requested transfer into one the physical driver
* can understand. The transfer is described by a buf and will include
* only one physical transfer.
*/
static void
adastrategy(struct bio *bp)
{
struct cam_periph *periph;
struct ada_softc *softc;
periph = (struct cam_periph *)bp->bio_disk->d_drv1;
softc = (struct ada_softc *)periph->softc;
cam_periph_lock(periph);
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("adastrategy(%p)\n", bp));
/*
* If the device has been made invalid, error out
*/
if ((periph->flags & CAM_PERIPH_INVALID) != 0) {
cam_periph_unlock(periph);
biofinish(bp, NULL, ENXIO);
return;
}
/*
* Zone commands must be ordered, because they can depend on the
* effects of previously issued commands, and they may affect
* commands after them.
*/
if (bp->bio_cmd == BIO_ZONE)
bp->bio_flags |= BIO_ORDERED;
/*
* Place it in the queue of disk activities for this disk
*/
cam_iosched_queue_work(softc->cam_iosched, bp);
/*
* Schedule ourselves for performing the work.
*/
adaschedule(periph);
cam_periph_unlock(periph);
return;
}
static int
adadump(void *arg, void *virtual, vm_offset_t physical, off_t offset, size_t length)
{
struct cam_periph *periph;
struct ada_softc *softc;
u_int secsize;
struct ccb_ataio ataio;
struct disk *dp;
uint64_t lba;
uint16_t count;
int error = 0;
dp = arg;
periph = dp->d_drv1;
softc = (struct ada_softc *)periph->softc;
secsize = softc->params.secsize;
lba = offset / secsize;
count = length / secsize;
if ((periph->flags & CAM_PERIPH_INVALID) != 0)
return (ENXIO);
memset(&ataio, 0, sizeof(ataio));
if (length > 0) {
xpt_setup_ccb(&ataio.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
ataio.ccb_h.ccb_state = ADA_CCB_DUMP;
cam_fill_ataio(&ataio,
0,
NULL,
CAM_DIR_OUT,
0,
(u_int8_t *) virtual,
length,
ada_default_timeout*1000);
if ((softc->flags & ADA_FLAG_CAN_48BIT) &&
(lba + count >= ATA_MAX_28BIT_LBA ||
count >= 256)) {
ata_48bit_cmd(&ataio, ATA_WRITE_DMA48,
0, lba, count);
} else {
ata_28bit_cmd(&ataio, ATA_WRITE_DMA,
0, lba, count);
}
error = cam_periph_runccb((union ccb *)&ataio, adaerror,
0, SF_NO_RECOVERY | SF_NO_RETRY, NULL);
if (error != 0)
printf("Aborting dump due to I/O error.\n");
return (error);
}
if (softc->flags & ADA_FLAG_CAN_FLUSHCACHE) {
xpt_setup_ccb(&ataio.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
/*
* Tell the drive to flush its internal cache. If we
* can't flush in 5s we have big problems. No need to
* wait the default 30s timeout to detect problems.
*/
ataio.ccb_h.ccb_state = ADA_CCB_DUMP;
cam_fill_ataio(&ataio,
0,
NULL,
CAM_DIR_NONE,
0,
NULL,
0,
5*1000);
if (softc->flags & ADA_FLAG_CAN_48BIT)
ata_48bit_cmd(&ataio, ATA_FLUSHCACHE48, 0, 0, 0);
else
ata_28bit_cmd(&ataio, ATA_FLUSHCACHE, 0, 0, 0);
error = cam_periph_runccb((union ccb *)&ataio, adaerror,
0, SF_NO_RECOVERY | SF_NO_RETRY, NULL);
if (error != 0)
xpt_print(periph->path, "Synchronize cache failed\n");
}
return (error);
}
static void
adainit(void)
{
cam_status status;
/*
* Install a global async callback. This callback will
* receive async callbacks like "new device found".
*/
status = xpt_register_async(AC_FOUND_DEVICE, adaasync, NULL, NULL);
if (status != CAM_REQ_CMP) {
printf("ada: Failed to attach master async callback "
"due to status 0x%x!\n", status);
} else if (ada_send_ordered) {
/* Register our event handlers */
if ((EVENTHANDLER_REGISTER(power_suspend, adasuspend,
NULL, EVENTHANDLER_PRI_LAST)) == NULL)
printf("adainit: power event registration failed!\n");
if ((EVENTHANDLER_REGISTER(power_resume, adaresume,
NULL, EVENTHANDLER_PRI_LAST)) == NULL)
printf("adainit: power event registration failed!\n");
if ((EVENTHANDLER_REGISTER(shutdown_post_sync, adashutdown,
NULL, SHUTDOWN_PRI_DEFAULT)) == NULL)
printf("adainit: shutdown event registration failed!\n");
}
}
/*
* Callback from GEOM, called when it has finished cleaning up its
* resources.
*/
static void
adadiskgonecb(struct disk *dp)
{
struct cam_periph *periph;
periph = (struct cam_periph *)dp->d_drv1;
cam_periph_release(periph);
}
static void
adaoninvalidate(struct cam_periph *periph)
{
struct ada_softc *softc;
softc = (struct ada_softc *)periph->softc;
/*
* De-register any async callbacks.
*/
xpt_register_async(0, adaasync, periph, periph->path);
#ifdef CAM_IO_STATS
softc->invalidations++;
#endif
/*
* Return all queued I/O with ENXIO.
* XXX Handle any transactions queued to the card
* with XPT_ABORT_CCB.
*/
cam_iosched_flush(softc->cam_iosched, NULL, ENXIO);
disk_gone(softc->disk);
}
static void
adacleanup(struct cam_periph *periph)
{
struct ada_softc *softc;
softc = (struct ada_softc *)periph->softc;
cam_periph_unlock(periph);
cam_iosched_fini(softc->cam_iosched);
/*
* If we can't free the sysctl tree, oh well...
*/
if ((softc->flags & ADA_FLAG_SCTX_INIT) != 0) {
#ifdef CAM_IO_STATS
if (sysctl_ctx_free(&softc->sysctl_stats_ctx) != 0)
xpt_print(periph->path,
"can't remove sysctl stats context\n");
#endif
if (sysctl_ctx_free(&softc->sysctl_ctx) != 0)
xpt_print(periph->path,
"can't remove sysctl context\n");
}
disk_destroy(softc->disk);
callout_drain(&softc->sendordered_c);
free(softc, M_DEVBUF);
cam_periph_lock(periph);
}
static void
adasetdeletemethod(struct ada_softc *softc)
{
if (softc->flags & ADA_FLAG_CAN_NCQ_TRIM)
softc->delete_method = ADA_DELETE_NCQ_DSM_TRIM;
else if (softc->flags & ADA_FLAG_CAN_TRIM)
softc->delete_method = ADA_DELETE_DSM_TRIM;
else if ((softc->flags & ADA_FLAG_CAN_CFA) && !(softc->flags & ADA_FLAG_CAN_48BIT))
softc->delete_method = ADA_DELETE_CFA_ERASE;
else
softc->delete_method = ADA_DELETE_NONE;
}
static void
adaasync(void *callback_arg, u_int32_t code,
struct cam_path *path, void *arg)
{
struct ccb_getdev cgd;
struct cam_periph *periph;
struct ada_softc *softc;
periph = (struct cam_periph *)callback_arg;
switch (code) {
case AC_FOUND_DEVICE:
{
struct ccb_getdev *cgd;
cam_status status;
cgd = (struct ccb_getdev *)arg;
if (cgd == NULL)
break;
if (cgd->protocol != PROTO_ATA)
break;
/*
* Allocate a peripheral instance for
* this device and start the probe
* process.
*/
status = cam_periph_alloc(adaregister, adaoninvalidate,
adacleanup, adastart,
"ada", CAM_PERIPH_BIO,
path, adaasync,
AC_FOUND_DEVICE, cgd);
if (status != CAM_REQ_CMP
&& status != CAM_REQ_INPROG)
printf("adaasync: Unable to attach to new device "
"due to status 0x%x\n", status);
break;
}
case AC_GETDEV_CHANGED:
{
softc = (struct ada_softc *)periph->softc;
xpt_setup_ccb(&cgd.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
cgd.ccb_h.func_code = XPT_GDEV_TYPE;
xpt_action((union ccb *)&cgd);
/*
* Set/clear support flags based on the new Identify data.
*/
adasetflags(softc, &cgd);
cam_periph_async(periph, code, path, arg);
break;
}
case AC_ADVINFO_CHANGED:
{
uintptr_t buftype;
buftype = (uintptr_t)arg;
if (buftype == CDAI_TYPE_PHYS_PATH) {
struct ada_softc *softc;
softc = periph->softc;
disk_attr_changed(softc->disk, "GEOM::physpath",
M_NOWAIT);
}
break;
}
case AC_SENT_BDR:
case AC_BUS_RESET:
{
softc = (struct ada_softc *)periph->softc;
cam_periph_async(periph, code, path, arg);
if (softc->state != ADA_STATE_NORMAL)
break;
xpt_setup_ccb(&cgd.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
cgd.ccb_h.func_code = XPT_GDEV_TYPE;
xpt_action((union ccb *)&cgd);
if (ADA_RA >= 0 && softc->flags & ADA_FLAG_CAN_RAHEAD)
softc->state = ADA_STATE_RAHEAD;
else if (ADA_WC >= 0 && softc->flags & ADA_FLAG_CAN_WCACHE)
softc->state = ADA_STATE_WCACHE;
else if ((softc->flags & ADA_FLAG_CAN_LOG)
&& (softc->zone_mode != ADA_ZONE_NONE))
softc->state = ADA_STATE_LOGDIR;
else
break;
if (cam_periph_acquire(periph) != 0)
softc->state = ADA_STATE_NORMAL;
else
xpt_schedule(periph, CAM_PRIORITY_DEV);
}
default:
cam_periph_async(periph, code, path, arg);
break;
}
}
static int
adazonemodesysctl(SYSCTL_HANDLER_ARGS)
{
char tmpbuf[40];
struct ada_softc *softc;
int error;
softc = (struct ada_softc *)arg1;
switch (softc->zone_mode) {
case ADA_ZONE_DRIVE_MANAGED:
snprintf(tmpbuf, sizeof(tmpbuf), "Drive Managed");
break;
case ADA_ZONE_HOST_AWARE:
snprintf(tmpbuf, sizeof(tmpbuf), "Host Aware");
break;
case ADA_ZONE_HOST_MANAGED:
snprintf(tmpbuf, sizeof(tmpbuf), "Host Managed");
break;
case ADA_ZONE_NONE:
default:
snprintf(tmpbuf, sizeof(tmpbuf), "Not Zoned");
break;
}
error = sysctl_handle_string(oidp, tmpbuf, sizeof(tmpbuf), req);
return (error);
}
static int
adazonesupsysctl(SYSCTL_HANDLER_ARGS)
{
char tmpbuf[180];
struct ada_softc *softc;
struct sbuf sb;
int error, first;
unsigned int i;
softc = (struct ada_softc *)arg1;
error = 0;
first = 1;
sbuf_new(&sb, tmpbuf, sizeof(tmpbuf), 0);
for (i = 0; i < nitems(ada_zone_desc_table); i++) {
if (softc->zone_flags & ada_zone_desc_table[i].value) {
if (first == 0)
sbuf_printf(&sb, ", ");
else
first = 0;
sbuf_cat(&sb, ada_zone_desc_table[i].desc);
}
}
if (first == 1)
sbuf_printf(&sb, "None");
sbuf_finish(&sb);
error = sysctl_handle_string(oidp, sbuf_data(&sb), sbuf_len(&sb), req);
return (error);
}
static void
adasysctlinit(void *context, int pending)
{
struct cam_periph *periph;
struct ada_softc *softc;
char tmpstr[32], tmpstr2[16];
periph = (struct cam_periph *)context;
/* periph was held for us when this task was enqueued */
if ((periph->flags & CAM_PERIPH_INVALID) != 0) {
cam_periph_release(periph);
return;
}
softc = (struct ada_softc *)periph->softc;
snprintf(tmpstr, sizeof(tmpstr), "CAM ADA unit %d",periph->unit_number);
snprintf(tmpstr2, sizeof(tmpstr2), "%d", periph->unit_number);
sysctl_ctx_init(&softc->sysctl_ctx);
softc->flags |= ADA_FLAG_SCTX_INIT;
softc->sysctl_tree = SYSCTL_ADD_NODE_WITH_LABEL(&softc->sysctl_ctx,
SYSCTL_STATIC_CHILDREN(_kern_cam_ada), OID_AUTO, tmpstr2,
CTLFLAG_RD, 0, tmpstr, "device_index");
if (softc->sysctl_tree == NULL) {
printf("adasysctlinit: unable to allocate sysctl tree\n");
cam_periph_release(periph);
return;
}
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "delete_method", CTLTYPE_STRING | CTLFLAG_RW,
softc, 0, adadeletemethodsysctl, "A",
"BIO_DELETE execution method");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"trim_count", CTLFLAG_RD, &softc->trim_count,
"Total number of DSM commands sent");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"trim_ranges", CTLFLAG_RD, &softc->trim_ranges,
"Total number of ranges in DSM commands");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"trim_lbas", CTLFLAG_RD, &softc->trim_lbas,
"Total LBAs in the DSM commands sent");
SYSCTL_ADD_INT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "read_ahead", CTLFLAG_RW | CTLFLAG_MPSAFE,
&softc->read_ahead, 0, "Enable disk read ahead.");
SYSCTL_ADD_INT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "write_cache", CTLFLAG_RW | CTLFLAG_MPSAFE,
&softc->write_cache, 0, "Enable disk write cache.");
SYSCTL_ADD_INT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "unmapped_io", CTLFLAG_RD | CTLFLAG_MPSAFE,
&softc->unmappedio, 0, "Unmapped I/O leaf");
SYSCTL_ADD_INT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "rotating", CTLFLAG_RD | CTLFLAG_MPSAFE,
&softc->rotating, 0, "Rotating media");
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "zone_mode", CTLTYPE_STRING | CTLFLAG_RD,
softc, 0, adazonemodesysctl, "A",
"Zone Mode");
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "zone_support", CTLTYPE_STRING | CTLFLAG_RD,
softc, 0, adazonesupsysctl, "A",
"Zone Support");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"optimal_seq_zones", CTLFLAG_RD, &softc->optimal_seq_zones,
"Optimal Number of Open Sequential Write Preferred Zones");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"optimal_nonseq_zones", CTLFLAG_RD,
&softc->optimal_nonseq_zones,
"Optimal Number of Non-Sequentially Written Sequential Write "
"Preferred Zones");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"max_seq_zones", CTLFLAG_RD, &softc->max_seq_zones,
"Maximum Number of Open Sequential Write Required Zones");
#ifdef CAM_TEST_FAILURE
/*
* Add a 'door bell' sysctl which allows one to set it from userland
* and cause something bad to happen. For the moment, we only allow
* whacking the next read or write.
*/
SYSCTL_ADD_INT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "force_read_error", CTLFLAG_RW | CTLFLAG_MPSAFE,
&softc->force_read_error, 0,
"Force a read error for the next N reads.");
SYSCTL_ADD_INT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "force_write_error", CTLFLAG_RW | CTLFLAG_MPSAFE,
&softc->force_write_error, 0,
"Force a write error for the next N writes.");
SYSCTL_ADD_INT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "periodic_read_error", CTLFLAG_RW | CTLFLAG_MPSAFE,
&softc->periodic_read_error, 0,
"Force a read error every N reads (don't set too low).");
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "invalidate", CTLTYPE_U64 | CTLFLAG_RW | CTLFLAG_MPSAFE,
periph, 0, cam_periph_invalidate_sysctl, "I",
"Write 1 to invalidate the drive immediately");
#endif
#ifdef CAM_IO_STATS
softc->sysctl_stats_tree = SYSCTL_ADD_NODE(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO, "stats",
CTLFLAG_RD, 0, "Statistics");
SYSCTL_ADD_INT(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_stats_tree),
OID_AUTO, "timeouts", CTLFLAG_RD | CTLFLAG_MPSAFE,
&softc->timeouts, 0,
"Device timeouts reported by the SIM");
SYSCTL_ADD_INT(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_stats_tree),
OID_AUTO, "errors", CTLFLAG_RD | CTLFLAG_MPSAFE,
&softc->errors, 0,
"Transport errors reported by the SIM.");
SYSCTL_ADD_INT(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_stats_tree),
OID_AUTO, "pack_invalidations", CTLFLAG_RD | CTLFLAG_MPSAFE,
&softc->invalidations, 0,
"Device pack invalidations.");
#endif
cam_iosched_sysctl_init(softc->cam_iosched, &softc->sysctl_ctx,
softc->sysctl_tree);
cam_periph_release(periph);
}
static int
adagetattr(struct bio *bp)
{
int ret;
struct cam_periph *periph;
periph = (struct cam_periph *)bp->bio_disk->d_drv1;
cam_periph_lock(periph);
ret = xpt_getattr(bp->bio_data, bp->bio_length, bp->bio_attribute,
periph->path);
cam_periph_unlock(periph);
if (ret == 0)
bp->bio_completed = bp->bio_length;
return ret;
}
static int
adadeletemethodsysctl(SYSCTL_HANDLER_ARGS)
{
char buf[16];
const char *p;
struct ada_softc *softc;
int i, error, value, methods;
softc = (struct ada_softc *)arg1;
value = softc->delete_method;
if (value < 0 || value > ADA_DELETE_MAX)
p = "UNKNOWN";
else
p = ada_delete_method_names[value];
strlcpy(buf, p, sizeof(buf));
error = sysctl_handle_string(oidp, buf, sizeof(buf), req);
if (error != 0 || req->newptr == NULL)
return (error);
methods = 1 << ADA_DELETE_DISABLE;
if ((softc->flags & ADA_FLAG_CAN_CFA) &&
!(softc->flags & ADA_FLAG_CAN_48BIT))
methods |= 1 << ADA_DELETE_CFA_ERASE;
if (softc->flags & ADA_FLAG_CAN_TRIM)
methods |= 1 << ADA_DELETE_DSM_TRIM;
if (softc->flags & ADA_FLAG_CAN_NCQ_TRIM)
methods |= 1 << ADA_DELETE_NCQ_DSM_TRIM;
for (i = 0; i <= ADA_DELETE_MAX; i++) {
if (!(methods & (1 << i)) ||
strcmp(buf, ada_delete_method_names[i]) != 0)
continue;
softc->delete_method = i;
return (0);
}
return (EINVAL);
}
static void
adasetflags(struct ada_softc *softc, struct ccb_getdev *cgd)
{
if ((cgd->ident_data.capabilities1 & ATA_SUPPORT_DMA) &&
(cgd->inq_flags & SID_DMA))
softc->flags |= ADA_FLAG_CAN_DMA;
else
softc->flags &= ~ADA_FLAG_CAN_DMA;
if (cgd->ident_data.support.command2 & ATA_SUPPORT_ADDRESS48) {
softc->flags |= ADA_FLAG_CAN_48BIT;
if (cgd->inq_flags & SID_DMA48)
softc->flags |= ADA_FLAG_CAN_DMA48;
else
softc->flags &= ~ADA_FLAG_CAN_DMA48;
} else
softc->flags &= ~(ADA_FLAG_CAN_48BIT | ADA_FLAG_CAN_DMA48);
if (cgd->ident_data.support.command2 & ATA_SUPPORT_FLUSHCACHE)
softc->flags |= ADA_FLAG_CAN_FLUSHCACHE;
else
softc->flags &= ~ADA_FLAG_CAN_FLUSHCACHE;
if (cgd->ident_data.support.command1 & ATA_SUPPORT_POWERMGT)
softc->flags |= ADA_FLAG_CAN_POWERMGT;
else
softc->flags &= ~ADA_FLAG_CAN_POWERMGT;
if ((cgd->ident_data.satacapabilities & ATA_SUPPORT_NCQ) &&
(cgd->inq_flags & SID_DMA) && (cgd->inq_flags & SID_CmdQue))
softc->flags |= ADA_FLAG_CAN_NCQ;
else
softc->flags &= ~ADA_FLAG_CAN_NCQ;
if ((cgd->ident_data.support_dsm & ATA_SUPPORT_DSM_TRIM) &&
(cgd->inq_flags & SID_DMA)) {
softc->flags |= ADA_FLAG_CAN_TRIM;
softc->trim_max_ranges = TRIM_MAX_RANGES;
if (cgd->ident_data.max_dsm_blocks != 0) {
softc->trim_max_ranges =
min(cgd->ident_data.max_dsm_blocks *
ATA_DSM_BLK_RANGES, softc->trim_max_ranges);
}
/*
* If we can do RCVSND_FPDMA_QUEUED commands, we may be able
* to do NCQ trims, if we support trims at all. We also need
* support from the SIM to do things properly. Perhaps we
* should look at log 13 dword 0 bit 0 and dword 1 bit 0 are
* set too...
*/
if ((softc->quirks & ADA_Q_NCQ_TRIM_BROKEN) == 0 &&
(softc->flags & ADA_FLAG_PIM_ATA_EXT) != 0 &&
(cgd->ident_data.satacapabilities2 &
ATA_SUPPORT_RCVSND_FPDMA_QUEUED) != 0 &&
(softc->flags & ADA_FLAG_CAN_TRIM) != 0)
softc->flags |= ADA_FLAG_CAN_NCQ_TRIM;
else
softc->flags &= ~ADA_FLAG_CAN_NCQ_TRIM;
} else
softc->flags &= ~(ADA_FLAG_CAN_TRIM | ADA_FLAG_CAN_NCQ_TRIM);
if (cgd->ident_data.support.command2 & ATA_SUPPORT_CFA)
softc->flags |= ADA_FLAG_CAN_CFA;
else
softc->flags &= ~ADA_FLAG_CAN_CFA;
/*
* Now that we've set the appropriate flags, setup the delete
* method.
*/
adasetdeletemethod(softc);
if ((cgd->ident_data.support.extension & ATA_SUPPORT_GENLOG)
&& ((softc->quirks & ADA_Q_LOG_BROKEN) == 0))
softc->flags |= ADA_FLAG_CAN_LOG;
else
softc->flags &= ~ADA_FLAG_CAN_LOG;
if ((cgd->ident_data.support3 & ATA_SUPPORT_ZONE_MASK) ==
ATA_SUPPORT_ZONE_HOST_AWARE)
softc->zone_mode = ADA_ZONE_HOST_AWARE;
else if (((cgd->ident_data.support3 & ATA_SUPPORT_ZONE_MASK) ==
ATA_SUPPORT_ZONE_DEV_MANAGED)
|| (softc->quirks & ADA_Q_SMR_DM))
softc->zone_mode = ADA_ZONE_DRIVE_MANAGED;
else
softc->zone_mode = ADA_ZONE_NONE;
if (cgd->ident_data.support.command1 & ATA_SUPPORT_LOOKAHEAD)
softc->flags |= ADA_FLAG_CAN_RAHEAD;
else
softc->flags &= ~ADA_FLAG_CAN_RAHEAD;
if (cgd->ident_data.support.command1 & ATA_SUPPORT_WRITECACHE)
softc->flags |= ADA_FLAG_CAN_WCACHE;
else
softc->flags &= ~ADA_FLAG_CAN_WCACHE;
}
static cam_status
adaregister(struct cam_periph *periph, void *arg)
{
struct ada_softc *softc;
struct ccb_pathinq cpi;
struct ccb_getdev *cgd;
struct disk_params *dp;
struct sbuf sb;
char *announce_buf;
caddr_t match;
u_int maxio;
int quirks;
cgd = (struct ccb_getdev *)arg;
if (cgd == NULL) {
printf("adaregister: no getdev CCB, can't register device\n");
return(CAM_REQ_CMP_ERR);
}
softc = (struct ada_softc *)malloc(sizeof(*softc), M_DEVBUF,
M_NOWAIT|M_ZERO);
if (softc == NULL) {
printf("adaregister: Unable to probe new device. "
"Unable to allocate softc\n");
return(CAM_REQ_CMP_ERR);
}
announce_buf = softc->announce_temp;
bzero(announce_buf, ADA_ANNOUNCETMP_SZ);
if (cam_iosched_init(&softc->cam_iosched, periph) != 0) {
printf("adaregister: Unable to probe new device. "
"Unable to allocate iosched memory\n");
free(softc, M_DEVBUF);
return(CAM_REQ_CMP_ERR);
}
periph->softc = softc;
/*
* See if this device has any quirks.
*/
match = cam_quirkmatch((caddr_t)&cgd->ident_data,
(caddr_t)ada_quirk_table,
nitems(ada_quirk_table),
sizeof(*ada_quirk_table), ata_identify_match);
if (match != NULL)
softc->quirks = ((struct ada_quirk_entry *)match)->quirks;
else
softc->quirks = ADA_Q_NONE;
xpt_path_inq(&cpi, periph->path);
TASK_INIT(&softc->sysctl_task, 0, adasysctlinit, periph);
/*
* Register this media as a disk
*/
(void)cam_periph_hold(periph, PRIBIO);
cam_periph_unlock(periph);
snprintf(announce_buf, ADA_ANNOUNCETMP_SZ,
"kern.cam.ada.%d.quirks", periph->unit_number);
quirks = softc->quirks;
TUNABLE_INT_FETCH(announce_buf, &quirks);
softc->quirks = quirks;
softc->read_ahead = -1;
snprintf(announce_buf, ADA_ANNOUNCETMP_SZ,
"kern.cam.ada.%d.read_ahead", periph->unit_number);
TUNABLE_INT_FETCH(announce_buf, &softc->read_ahead);
softc->write_cache = -1;
snprintf(announce_buf, ADA_ANNOUNCETMP_SZ,
"kern.cam.ada.%d.write_cache", periph->unit_number);
TUNABLE_INT_FETCH(announce_buf, &softc->write_cache);
/*
* Set support flags based on the Identify data and quirks.
*/
adasetflags(softc, cgd);
/* Disable queue sorting for non-rotational media by default. */
if (cgd->ident_data.media_rotation_rate == ATA_RATE_NON_ROTATING) {
softc->rotating = 0;
} else {
softc->rotating = 1;
}
cam_iosched_set_sort_queue(softc->cam_iosched, softc->rotating ? -1 : 0);
adagetparams(periph, cgd);
softc->disk = disk_alloc();
softc->disk->d_rotation_rate = cgd->ident_data.media_rotation_rate;
softc->disk->d_devstat = devstat_new_entry(periph->periph_name,
periph->unit_number, softc->params.secsize,
DEVSTAT_ALL_SUPPORTED,
DEVSTAT_TYPE_DIRECT |
XPORT_DEVSTAT_TYPE(cpi.transport),
DEVSTAT_PRIORITY_DISK);
softc->disk->d_open = adaopen;
softc->disk->d_close = adaclose;
softc->disk->d_strategy = adastrategy;
softc->disk->d_getattr = adagetattr;
softc->disk->d_dump = adadump;
softc->disk->d_gone = adadiskgonecb;
softc->disk->d_name = "ada";
softc->disk->d_drv1 = periph;
maxio = cpi.maxio; /* Honor max I/O size of SIM */
if (maxio == 0)
maxio = DFLTPHYS; /* traditional default */
else if (maxio > MAXPHYS)
maxio = MAXPHYS; /* for safety */
if (softc->flags & ADA_FLAG_CAN_48BIT)
maxio = min(maxio, 65536 * softc->params.secsize);
else /* 28bit ATA command limit */
maxio = min(maxio, 256 * softc->params.secsize);
if (softc->quirks & ADA_Q_128KB)
maxio = min(maxio, 128 * 1024);
softc->disk->d_maxsize = maxio;
softc->disk->d_unit = periph->unit_number;
softc->disk->d_flags = DISKFLAG_DIRECT_COMPLETION | DISKFLAG_CANZONE;
if (softc->flags & ADA_FLAG_CAN_FLUSHCACHE)
softc->disk->d_flags |= DISKFLAG_CANFLUSHCACHE;
/* Device lies about TRIM capability. */
if ((softc->quirks & ADA_Q_NO_TRIM) &&
(softc->flags & ADA_FLAG_CAN_TRIM))
softc->flags &= ~ADA_FLAG_CAN_TRIM;
if (softc->flags & ADA_FLAG_CAN_TRIM) {
softc->disk->d_flags |= DISKFLAG_CANDELETE;
softc->disk->d_delmaxsize = softc->params.secsize *
ATA_DSM_RANGE_MAX *
softc->trim_max_ranges;
} else if ((softc->flags & ADA_FLAG_CAN_CFA) &&
!(softc->flags & ADA_FLAG_CAN_48BIT)) {
softc->disk->d_flags |= DISKFLAG_CANDELETE;
softc->disk->d_delmaxsize = 256 * softc->params.secsize;
} else
softc->disk->d_delmaxsize = maxio;
if ((cpi.hba_misc & PIM_UNMAPPED) != 0) {
softc->disk->d_flags |= DISKFLAG_UNMAPPED_BIO;
softc->unmappedio = 1;
}
if (cpi.hba_misc & PIM_ATA_EXT)
softc->flags |= ADA_FLAG_PIM_ATA_EXT;
strlcpy(softc->disk->d_descr, cgd->ident_data.model,
MIN(sizeof(softc->disk->d_descr), sizeof(cgd->ident_data.model)));
strlcpy(softc->disk->d_ident, cgd->ident_data.serial,
MIN(sizeof(softc->disk->d_ident), sizeof(cgd->ident_data.serial)));
softc->disk->d_hba_vendor = cpi.hba_vendor;
softc->disk->d_hba_device = cpi.hba_device;
softc->disk->d_hba_subvendor = cpi.hba_subvendor;
softc->disk->d_hba_subdevice = cpi.hba_subdevice;
softc->disk->d_sectorsize = softc->params.secsize;
softc->disk->d_mediasize = (off_t)softc->params.sectors *
softc->params.secsize;
if (ata_physical_sector_size(&cgd->ident_data) !=
softc->params.secsize) {
softc->disk->d_stripesize =
ata_physical_sector_size(&cgd->ident_data);
softc->disk->d_stripeoffset = (softc->disk->d_stripesize -
ata_logical_sector_offset(&cgd->ident_data)) %
softc->disk->d_stripesize;
} else if (softc->quirks & ADA_Q_4K) {
softc->disk->d_stripesize = 4096;
softc->disk->d_stripeoffset = 0;
}
softc->disk->d_fwsectors = softc->params.secs_per_track;
softc->disk->d_fwheads = softc->params.heads;
ata_disk_firmware_geom_adjust(softc->disk);
/*
* Acquire a reference to the periph before we register with GEOM.
* We'll release this reference once GEOM calls us back (via
* adadiskgonecb()) telling us that our provider has been freed.
*/
if (cam_periph_acquire(periph) != 0) {
xpt_print(periph->path, "%s: lost periph during "
"registration!\n", __func__);
cam_periph_lock(periph);
return (CAM_REQ_CMP_ERR);
}
disk_create(softc->disk, DISK_VERSION);
cam_periph_lock(periph);
dp = &softc->params;
snprintf(announce_buf, ADA_ANNOUNCETMP_SZ,
"%juMB (%ju %u byte sectors)",
((uintmax_t)dp->secsize * dp->sectors) / (1024 * 1024),
(uintmax_t)dp->sectors, dp->secsize);
sbuf_new(&sb, softc->announce_buffer, ADA_ANNOUNCE_SZ, SBUF_FIXEDLEN);
xpt_announce_periph_sbuf(periph, &sb, announce_buf);
xpt_announce_quirks_sbuf(periph, &sb, softc->quirks, ADA_Q_BIT_STRING);
sbuf_finish(&sb);
sbuf_putbuf(&sb);
/*
* Create our sysctl variables, now that we know
* we have successfully attached.
*/
if (cam_periph_acquire(periph) == 0)
taskqueue_enqueue(taskqueue_thread, &softc->sysctl_task);
/*
* Add async callbacks for bus reset and
* bus device reset calls. I don't bother
* checking if this fails as, in most cases,
* the system will function just fine without
* them and the only alternative would be to
* not attach the device on failure.
*/
xpt_register_async(AC_SENT_BDR | AC_BUS_RESET | AC_LOST_DEVICE |
AC_GETDEV_CHANGED | AC_ADVINFO_CHANGED,
adaasync, periph, periph->path);
/*
* Schedule a periodic event to occasionally send an
* ordered tag to a device.
*/
callout_init_mtx(&softc->sendordered_c, cam_periph_mtx(periph), 0);
callout_reset(&softc->sendordered_c,
(ada_default_timeout * hz) / ADA_ORDEREDTAG_INTERVAL,
adasendorderedtag, softc);
if (ADA_RA >= 0 && softc->flags & ADA_FLAG_CAN_RAHEAD) {
softc->state = ADA_STATE_RAHEAD;
} else if (ADA_WC >= 0 && softc->flags & ADA_FLAG_CAN_WCACHE) {
softc->state = ADA_STATE_WCACHE;
} else if ((softc->flags & ADA_FLAG_CAN_LOG)
&& (softc->zone_mode != ADA_ZONE_NONE)) {
softc->state = ADA_STATE_LOGDIR;
} else {
/*
* Nothing to probe, so we can just transition to the
* normal state.
*/
adaprobedone(periph, NULL);
return(CAM_REQ_CMP);
}
xpt_schedule(periph, CAM_PRIORITY_DEV);
return(CAM_REQ_CMP);
}
static int
ada_dsmtrim_req_create(struct ada_softc *softc, struct bio *bp, struct trim_request *req)
{
uint64_t lastlba = (uint64_t)-1, lbas = 0;
int c, lastcount = 0, off, ranges = 0;
bzero(req, sizeof(*req));
TAILQ_INIT(&req->bps);
do {
uint64_t lba = bp->bio_pblkno;
int count = bp->bio_bcount / softc->params.secsize;
/* Try to extend the previous range. */
if (lba == lastlba) {
c = min(count, ATA_DSM_RANGE_MAX - lastcount);
lastcount += c;
off = (ranges - 1) * ATA_DSM_RANGE_SIZE;
req->data[off + 6] = lastcount & 0xff;
req->data[off + 7] =
(lastcount >> 8) & 0xff;
count -= c;
lba += c;
lbas += c;
}
while (count > 0) {
c = min(count, ATA_DSM_RANGE_MAX);
off = ranges * ATA_DSM_RANGE_SIZE;
req->data[off + 0] = lba & 0xff;
req->data[off + 1] = (lba >> 8) & 0xff;
req->data[off + 2] = (lba >> 16) & 0xff;
req->data[off + 3] = (lba >> 24) & 0xff;
req->data[off + 4] = (lba >> 32) & 0xff;
req->data[off + 5] = (lba >> 40) & 0xff;
req->data[off + 6] = c & 0xff;
req->data[off + 7] = (c >> 8) & 0xff;
lba += c;
lbas += c;
count -= c;
lastcount = c;
ranges++;
/*
* It's the caller's responsibility to ensure that the
* request will fit, so we don't need to check for
* overrun here.
*/
}
lastlba = lba;
TAILQ_INSERT_TAIL(&req->bps, bp, bio_queue);
bp = cam_iosched_next_trim(softc->cam_iosched);
if (bp == NULL)
break;
if (bp->bio_bcount / softc->params.secsize >
(softc->trim_max_ranges - ranges) * ATA_DSM_RANGE_MAX) {
cam_iosched_put_back_trim(softc->cam_iosched, bp);
break;
}
} while (1);
softc->trim_count++;
softc->trim_ranges += ranges;
softc->trim_lbas += lbas;
return (ranges);
}
static void
ada_dsmtrim(struct ada_softc *softc, struct bio *bp, struct ccb_ataio *ataio)
{
struct trim_request *req = &softc->trim_req;
int ranges;
ranges = ada_dsmtrim_req_create(softc, bp, req);
cam_fill_ataio(ataio,
ada_retry_count,
adadone,
CAM_DIR_OUT,
0,
req->data,
howmany(ranges, ATA_DSM_BLK_RANGES) * ATA_DSM_BLK_SIZE,
ada_default_timeout * 1000);
ata_48bit_cmd(ataio, ATA_DATA_SET_MANAGEMENT,
ATA_DSM_TRIM, 0, howmany(ranges, ATA_DSM_BLK_RANGES));
}
static void
ada_ncq_dsmtrim(struct ada_softc *softc, struct bio *bp, struct ccb_ataio *ataio)
{
struct trim_request *req = &softc->trim_req;
int ranges;
ranges = ada_dsmtrim_req_create(softc, bp, req);
cam_fill_ataio(ataio,
ada_retry_count,
adadone,
CAM_DIR_OUT,
0,
req->data,
howmany(ranges, ATA_DSM_BLK_RANGES) * ATA_DSM_BLK_SIZE,
ada_default_timeout * 1000);
ata_ncq_cmd(ataio,
ATA_SEND_FPDMA_QUEUED,
0,
howmany(ranges, ATA_DSM_BLK_RANGES));
ataio->cmd.sector_count_exp = ATA_SFPDMA_DSM;
ataio->ata_flags |= ATA_FLAG_AUX;
ataio->aux = 1;
}
static void
ada_cfaerase(struct ada_softc *softc, struct bio *bp, struct ccb_ataio *ataio)
{
struct trim_request *req = &softc->trim_req;
uint64_t lba = bp->bio_pblkno;
uint16_t count = bp->bio_bcount / softc->params.secsize;
bzero(req, sizeof(*req));
TAILQ_INIT(&req->bps);
TAILQ_INSERT_TAIL(&req->bps, bp, bio_queue);
cam_fill_ataio(ataio,
ada_retry_count,
adadone,
CAM_DIR_NONE,
0,
NULL,
0,
ada_default_timeout*1000);
if (count >= 256)
count = 0;
ata_28bit_cmd(ataio, ATA_CFA_ERASE, 0, lba, count);
}
static int
ada_zone_bio_to_ata(int disk_zone_cmd)
{
switch (disk_zone_cmd) {
case DISK_ZONE_OPEN:
return ATA_ZM_OPEN_ZONE;
case DISK_ZONE_CLOSE:
return ATA_ZM_CLOSE_ZONE;
case DISK_ZONE_FINISH:
return ATA_ZM_FINISH_ZONE;
case DISK_ZONE_RWP:
return ATA_ZM_RWP;
}
return -1;
}
static int
ada_zone_cmd(struct cam_periph *periph, union ccb *ccb, struct bio *bp,
int *queue_ccb)
{
struct ada_softc *softc;
int error;
error = 0;
if (bp->bio_cmd != BIO_ZONE) {
error = EINVAL;
goto bailout;
}
softc = periph->softc;
switch (bp->bio_zone.zone_cmd) {
case DISK_ZONE_OPEN:
case DISK_ZONE_CLOSE:
case DISK_ZONE_FINISH:
case DISK_ZONE_RWP: {
int zone_flags;
int zone_sa;
uint64_t lba;
zone_sa = ada_zone_bio_to_ata(bp->bio_zone.zone_cmd);
if (zone_sa == -1) {
xpt_print(periph->path, "Cannot translate zone "
"cmd %#x to ATA\n", bp->bio_zone.zone_cmd);
error = EINVAL;
goto bailout;
}
zone_flags = 0;
lba = bp->bio_zone.zone_params.rwp.id;
if (bp->bio_zone.zone_params.rwp.flags &
DISK_ZONE_RWP_FLAG_ALL)
zone_flags |= ZBC_OUT_ALL;
ata_zac_mgmt_out(&ccb->ataio,
/*retries*/ ada_retry_count,
/*cbfcnp*/ adadone,
/*use_ncq*/ (softc->flags &
ADA_FLAG_PIM_ATA_EXT) ? 1 : 0,
/*zm_action*/ zone_sa,
/*zone_id*/ lba,
/*zone_flags*/ zone_flags,
/*sector_count*/ 0,
/*data_ptr*/ NULL,
/*dxfer_len*/ 0,
/*timeout*/ ada_default_timeout * 1000);
*queue_ccb = 1;
break;
}
case DISK_ZONE_REPORT_ZONES: {
uint8_t *rz_ptr;
uint32_t num_entries, alloc_size;
struct disk_zone_report *rep;
rep = &bp->bio_zone.zone_params.report;
num_entries = rep->entries_allocated;
if (num_entries == 0) {
xpt_print(periph->path, "No entries allocated for "
"Report Zones request\n");
error = EINVAL;
goto bailout;
}
alloc_size = sizeof(struct scsi_report_zones_hdr) +
(sizeof(struct scsi_report_zones_desc) * num_entries);
alloc_size = min(alloc_size, softc->disk->d_maxsize);
rz_ptr = malloc(alloc_size, M_ATADA, M_NOWAIT | M_ZERO);
if (rz_ptr == NULL) {
xpt_print(periph->path, "Unable to allocate memory "
"for Report Zones request\n");
error = ENOMEM;
goto bailout;
}
ata_zac_mgmt_in(&ccb->ataio,
/*retries*/ ada_retry_count,
/*cbfcnp*/ adadone,
/*use_ncq*/ (softc->flags &
ADA_FLAG_PIM_ATA_EXT) ? 1 : 0,
/*zm_action*/ ATA_ZM_REPORT_ZONES,
/*zone_id*/ rep->starting_id,
/*zone_flags*/ rep->rep_options,
/*data_ptr*/ rz_ptr,
/*dxfer_len*/ alloc_size,
/*timeout*/ ada_default_timeout * 1000);
/*
* For BIO_ZONE, this isn't normally needed. However, it
* is used by devstat_end_transaction_bio() to determine
* how much data was transferred.
*/
/*
* XXX KDM we have a problem. But I'm not sure how to fix
* it. devstat uses bio_bcount - bio_resid to calculate
* the amount of data transferred. The GEOM disk code
* uses bio_length - bio_resid to calculate the amount of
* data in bio_completed. We have different structure
* sizes above and below the ada(4) driver. So, if we
* use the sizes above, the amount transferred won't be
* quite accurate for devstat. If we use different sizes
* for bio_bcount and bio_length (above and below
* respectively), then the residual needs to match one or
* the other. Everything is calculated after the bio
* leaves the driver, so changing the values around isn't
* really an option. For now, just set the count to the
* passed in length. This means that the calculations
* above (e.g. bio_completed) will be correct, but the
* amount of data reported to devstat will be slightly
* under or overstated.
*/
bp->bio_bcount = bp->bio_length;
*queue_ccb = 1;
break;
}
case DISK_ZONE_GET_PARAMS: {
struct disk_zone_disk_params *params;
params = &bp->bio_zone.zone_params.disk_params;
bzero(params, sizeof(*params));
switch (softc->zone_mode) {
case ADA_ZONE_DRIVE_MANAGED:
params->zone_mode = DISK_ZONE_MODE_DRIVE_MANAGED;
break;
case ADA_ZONE_HOST_AWARE:
params->zone_mode = DISK_ZONE_MODE_HOST_AWARE;
break;
case ADA_ZONE_HOST_MANAGED:
params->zone_mode = DISK_ZONE_MODE_HOST_MANAGED;
break;
default:
case ADA_ZONE_NONE:
params->zone_mode = DISK_ZONE_MODE_NONE;
break;
}
if (softc->zone_flags & ADA_ZONE_FLAG_URSWRZ)
params->flags |= DISK_ZONE_DISK_URSWRZ;
if (softc->zone_flags & ADA_ZONE_FLAG_OPT_SEQ_SET) {
params->optimal_seq_zones = softc->optimal_seq_zones;
params->flags |= DISK_ZONE_OPT_SEQ_SET;
}
if (softc->zone_flags & ADA_ZONE_FLAG_OPT_NONSEQ_SET) {
params->optimal_nonseq_zones =
softc->optimal_nonseq_zones;
params->flags |= DISK_ZONE_OPT_NONSEQ_SET;
}
if (softc->zone_flags & ADA_ZONE_FLAG_MAX_SEQ_SET) {
params->max_seq_zones = softc->max_seq_zones;
params->flags |= DISK_ZONE_MAX_SEQ_SET;
}
if (softc->zone_flags & ADA_ZONE_FLAG_RZ_SUP)
params->flags |= DISK_ZONE_RZ_SUP;
if (softc->zone_flags & ADA_ZONE_FLAG_OPEN_SUP)
params->flags |= DISK_ZONE_OPEN_SUP;
if (softc->zone_flags & ADA_ZONE_FLAG_CLOSE_SUP)
params->flags |= DISK_ZONE_CLOSE_SUP;
if (softc->zone_flags & ADA_ZONE_FLAG_FINISH_SUP)
params->flags |= DISK_ZONE_FINISH_SUP;
if (softc->zone_flags & ADA_ZONE_FLAG_RWP_SUP)
params->flags |= DISK_ZONE_RWP_SUP;
break;
}
default:
break;
}
bailout:
return (error);
}
static void
adastart(struct cam_periph *periph, union ccb *start_ccb)
{
struct ada_softc *softc = (struct ada_softc *)periph->softc;
struct ccb_ataio *ataio = &start_ccb->ataio;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("adastart\n"));
switch (softc->state) {
case ADA_STATE_NORMAL:
{
struct bio *bp;
u_int8_t tag_code;
bp = cam_iosched_next_bio(softc->cam_iosched);
if (bp == NULL) {
xpt_release_ccb(start_ccb);
break;
}
if ((bp->bio_flags & BIO_ORDERED) != 0 ||
(bp->bio_cmd != BIO_DELETE && (softc->flags & ADA_FLAG_NEED_OTAG) != 0)) {
softc->flags &= ~ADA_FLAG_NEED_OTAG;
softc->flags |= ADA_FLAG_WAS_OTAG;
tag_code = 0;
} else {
tag_code = 1;
}
switch (bp->bio_cmd) {
case BIO_WRITE:
case BIO_READ:
{
uint64_t lba = bp->bio_pblkno;
uint16_t count = bp->bio_bcount / softc->params.secsize;
void *data_ptr;
int rw_op;
if (bp->bio_cmd == BIO_WRITE) {
softc->flags |= ADA_FLAG_DIRTY;
rw_op = CAM_DIR_OUT;
} else {
rw_op = CAM_DIR_IN;
}
data_ptr = bp->bio_data;
if ((bp->bio_flags & (BIO_UNMAPPED|BIO_VLIST)) != 0) {
rw_op |= CAM_DATA_BIO;
data_ptr = bp;
}
#ifdef CAM_TEST_FAILURE
int fail = 0;
/*
* Support the failure ioctls. If the command is a
* read, and there are pending forced read errors, or
* if a write and pending write errors, then fail this
* operation with EIO. This is useful for testing
* purposes. Also, support having every Nth read fail.
*
* This is a rather blunt tool.
*/
if (bp->bio_cmd == BIO_READ) {
if (softc->force_read_error) {
softc->force_read_error--;
fail = 1;
}
if (softc->periodic_read_error > 0) {
if (++softc->periodic_read_count >=
softc->periodic_read_error) {
softc->periodic_read_count = 0;
fail = 1;
}
}
} else {
if (softc->force_write_error) {
softc->force_write_error--;
fail = 1;
}
}
if (fail) {
biofinish(bp, NULL, EIO);
xpt_release_ccb(start_ccb);
adaschedule(periph);
return;
}
#endif
KASSERT((bp->bio_flags & BIO_UNMAPPED) == 0 ||
round_page(bp->bio_bcount + bp->bio_ma_offset) /
PAGE_SIZE == bp->bio_ma_n,
("Short bio %p", bp));
cam_fill_ataio(ataio,
ada_retry_count,
adadone,
rw_op,
0,
data_ptr,
bp->bio_bcount,
ada_default_timeout*1000);
if ((softc->flags & ADA_FLAG_CAN_NCQ) && tag_code) {
if (bp->bio_cmd == BIO_READ) {
ata_ncq_cmd(ataio, ATA_READ_FPDMA_QUEUED,
lba, count);
} else {
ata_ncq_cmd(ataio, ATA_WRITE_FPDMA_QUEUED,
lba, count);
}
} else if ((softc->flags & ADA_FLAG_CAN_48BIT) &&
(lba + count >= ATA_MAX_28BIT_LBA ||
count > 256)) {
if (softc->flags & ADA_FLAG_CAN_DMA48) {
if (bp->bio_cmd == BIO_READ) {
ata_48bit_cmd(ataio, ATA_READ_DMA48,
0, lba, count);
} else {
ata_48bit_cmd(ataio, ATA_WRITE_DMA48,
0, lba, count);
}
} else {
if (bp->bio_cmd == BIO_READ) {
ata_48bit_cmd(ataio, ATA_READ_MUL48,
0, lba, count);
} else {
ata_48bit_cmd(ataio, ATA_WRITE_MUL48,
0, lba, count);
}
}
} else {
if (count == 256)
count = 0;
if (softc->flags & ADA_FLAG_CAN_DMA) {
if (bp->bio_cmd == BIO_READ) {
ata_28bit_cmd(ataio, ATA_READ_DMA,
0, lba, count);
} else {
ata_28bit_cmd(ataio, ATA_WRITE_DMA,
0, lba, count);
}
} else {
if (bp->bio_cmd == BIO_READ) {
ata_28bit_cmd(ataio, ATA_READ_MUL,
0, lba, count);
} else {
ata_28bit_cmd(ataio, ATA_WRITE_MUL,
0, lba, count);
}
}
}
break;
}
case BIO_DELETE:
switch (softc->delete_method) {
case ADA_DELETE_NCQ_DSM_TRIM:
ada_ncq_dsmtrim(softc, bp, ataio);
break;
case ADA_DELETE_DSM_TRIM:
ada_dsmtrim(softc, bp, ataio);
break;
case ADA_DELETE_CFA_ERASE:
ada_cfaerase(softc, bp, ataio);
break;
default:
biofinish(bp, NULL, EOPNOTSUPP);
xpt_release_ccb(start_ccb);
adaschedule(periph);
return;
}
start_ccb->ccb_h.ccb_state = ADA_CCB_TRIM;
start_ccb->ccb_h.flags |= CAM_UNLOCKED;
cam_iosched_submit_trim(softc->cam_iosched);
goto out;
case BIO_FLUSH:
cam_fill_ataio(ataio,
1,
adadone,
CAM_DIR_NONE,
0,
NULL,
0,
ada_default_timeout*1000);
if (softc->flags & ADA_FLAG_CAN_48BIT)
ata_48bit_cmd(ataio, ATA_FLUSHCACHE48, 0, 0, 0);
else
ata_28bit_cmd(ataio, ATA_FLUSHCACHE, 0, 0, 0);
break;
case BIO_ZONE: {
int error, queue_ccb;
queue_ccb = 0;
error = ada_zone_cmd(periph, start_ccb, bp, &queue_ccb);
if ((error != 0)
|| (queue_ccb == 0)) {
biofinish(bp, NULL, error);
xpt_release_ccb(start_ccb);
return;
}
break;
}
}
start_ccb->ccb_h.ccb_state = ADA_CCB_BUFFER_IO;
start_ccb->ccb_h.flags |= CAM_UNLOCKED;
out:
start_ccb->ccb_h.ccb_bp = bp;
softc->outstanding_cmds++;
softc->refcount++;
cam_periph_unlock(periph);
xpt_action(start_ccb);
cam_periph_lock(periph);
/* May have more work to do, so ensure we stay scheduled */
adaschedule(periph);
break;
}
case ADA_STATE_RAHEAD:
case ADA_STATE_WCACHE:
{
cam_fill_ataio(ataio,
1,
adadone,
CAM_DIR_NONE,
0,
NULL,
0,
ada_default_timeout*1000);
if (softc->state == ADA_STATE_RAHEAD) {
ata_28bit_cmd(ataio, ATA_SETFEATURES, ADA_RA ?
ATA_SF_ENAB_RCACHE : ATA_SF_DIS_RCACHE, 0, 0);
start_ccb->ccb_h.ccb_state = ADA_CCB_RAHEAD;
} else {
ata_28bit_cmd(ataio, ATA_SETFEATURES, ADA_WC ?
ATA_SF_ENAB_WCACHE : ATA_SF_DIS_WCACHE, 0, 0);
start_ccb->ccb_h.ccb_state = ADA_CCB_WCACHE;
}
start_ccb->ccb_h.flags |= CAM_DEV_QFREEZE;
xpt_action(start_ccb);
break;
}
case ADA_STATE_LOGDIR:
{
struct ata_gp_log_dir *log_dir;
if ((softc->flags & ADA_FLAG_CAN_LOG) == 0) {
adaprobedone(periph, start_ccb);
break;
}
log_dir = malloc(sizeof(*log_dir), M_ATADA, M_NOWAIT|M_ZERO);
if (log_dir == NULL) {
xpt_print(periph->path, "Couldn't malloc log_dir "
"data\n");
softc->state = ADA_STATE_NORMAL;
xpt_release_ccb(start_ccb);
break;
}
ata_read_log(ataio,
/*retries*/1,
/*cbfcnp*/adadone,
/*log_address*/ ATA_LOG_DIRECTORY,
/*page_number*/ 0,
/*block_count*/ 1,
/*protocol*/ softc->flags & ADA_FLAG_CAN_DMA ?
CAM_ATAIO_DMA : 0,
/*data_ptr*/ (uint8_t *)log_dir,
/*dxfer_len*/sizeof(*log_dir),
/*timeout*/ada_default_timeout*1000);
start_ccb->ccb_h.ccb_state = ADA_CCB_LOGDIR;
xpt_action(start_ccb);
break;
}
case ADA_STATE_IDDIR:
{
struct ata_identify_log_pages *id_dir;
id_dir = malloc(sizeof(*id_dir), M_ATADA, M_NOWAIT | M_ZERO);
if (id_dir == NULL) {
xpt_print(periph->path, "Couldn't malloc id_dir "
"data\n");
adaprobedone(periph, start_ccb);
break;
}
ata_read_log(ataio,
/*retries*/1,
/*cbfcnp*/adadone,
/*log_address*/ ATA_IDENTIFY_DATA_LOG,
/*page_number*/ ATA_IDL_PAGE_LIST,
/*block_count*/ 1,
/*protocol*/ softc->flags & ADA_FLAG_CAN_DMA ?
CAM_ATAIO_DMA : 0,
/*data_ptr*/ (uint8_t *)id_dir,
/*dxfer_len*/ sizeof(*id_dir),
/*timeout*/ada_default_timeout*1000);
start_ccb->ccb_h.ccb_state = ADA_CCB_IDDIR;
xpt_action(start_ccb);
break;
}
case ADA_STATE_SUP_CAP:
{
struct ata_identify_log_sup_cap *sup_cap;
sup_cap = malloc(sizeof(*sup_cap), M_ATADA, M_NOWAIT|M_ZERO);
if (sup_cap == NULL) {
xpt_print(periph->path, "Couldn't malloc sup_cap "
"data\n");
adaprobedone(periph, start_ccb);
break;
}
ata_read_log(ataio,
/*retries*/1,
/*cbfcnp*/adadone,
/*log_address*/ ATA_IDENTIFY_DATA_LOG,
/*page_number*/ ATA_IDL_SUP_CAP,
/*block_count*/ 1,
/*protocol*/ softc->flags & ADA_FLAG_CAN_DMA ?
CAM_ATAIO_DMA : 0,
/*data_ptr*/ (uint8_t *)sup_cap,
/*dxfer_len*/ sizeof(*sup_cap),
/*timeout*/ada_default_timeout*1000);
start_ccb->ccb_h.ccb_state = ADA_CCB_SUP_CAP;
xpt_action(start_ccb);
break;
}
case ADA_STATE_ZONE:
{
struct ata_zoned_info_log *ata_zone;
ata_zone = malloc(sizeof(*ata_zone), M_ATADA, M_NOWAIT|M_ZERO);
if (ata_zone == NULL) {
xpt_print(periph->path, "Couldn't malloc ata_zone "
"data\n");
adaprobedone(periph, start_ccb);
break;
}
ata_read_log(ataio,
/*retries*/1,
/*cbfcnp*/adadone,
/*log_address*/ ATA_IDENTIFY_DATA_LOG,
/*page_number*/ ATA_IDL_ZDI,
/*block_count*/ 1,
/*protocol*/ softc->flags & ADA_FLAG_CAN_DMA ?
CAM_ATAIO_DMA : 0,
/*data_ptr*/ (uint8_t *)ata_zone,
/*dxfer_len*/ sizeof(*ata_zone),
/*timeout*/ada_default_timeout*1000);
start_ccb->ccb_h.ccb_state = ADA_CCB_ZONE;
xpt_action(start_ccb);
break;
}
}
}
static void
adaprobedone(struct cam_periph *periph, union ccb *ccb)
{
struct ada_softc *softc;
softc = (struct ada_softc *)periph->softc;
if (ccb != NULL)
xpt_release_ccb(ccb);
softc->state = ADA_STATE_NORMAL;
softc->flags |= ADA_FLAG_PROBED;
adaschedule(periph);
if ((softc->flags & ADA_FLAG_ANNOUNCED) == 0) {
softc->flags |= ADA_FLAG_ANNOUNCED;
cam_periph_unhold(periph);
} else {
cam_periph_release_locked(periph);
}
}
static void
adazonedone(struct cam_periph *periph, union ccb *ccb)
{
struct bio *bp;
bp = (struct bio *)ccb->ccb_h.ccb_bp;
switch (bp->bio_zone.zone_cmd) {
case DISK_ZONE_OPEN:
case DISK_ZONE_CLOSE:
case DISK_ZONE_FINISH:
case DISK_ZONE_RWP:
break;
case DISK_ZONE_REPORT_ZONES: {
uint32_t avail_len;
struct disk_zone_report *rep;
struct scsi_report_zones_hdr *hdr;
struct scsi_report_zones_desc *desc;
struct disk_zone_rep_entry *entry;
uint32_t hdr_len, num_avail;
uint32_t num_to_fill, i;
rep = &bp->bio_zone.zone_params.report;
avail_len = ccb->ataio.dxfer_len - ccb->ataio.resid;
/*
* Note that bio_resid isn't normally used for zone
* commands, but it is used by devstat_end_transaction_bio()
* to determine how much data was transferred. Because
* the size of the SCSI/ATA data structures is different
* than the size of the BIO interface structures, the
* amount of data actually transferred from the drive will
* be different than the amount of data transferred to
* the user.
*/
hdr = (struct scsi_report_zones_hdr *)ccb->ataio.data_ptr;
if (avail_len < sizeof(*hdr)) {
/*
* Is there a better error than EIO here? We asked
* for at least the header, and we got less than
* that.
*/
bp->bio_error = EIO;
bp->bio_flags |= BIO_ERROR;
bp->bio_resid = bp->bio_bcount;
break;
}
hdr_len = le32dec(hdr->length);
if (hdr_len > 0)
rep->entries_available = hdr_len / sizeof(*desc);
else
rep->entries_available = 0;
/*
* NOTE: using the same values for the BIO version of the
* same field as the SCSI/ATA values. This means we could
* get some additional values that aren't defined in bio.h
* if more values of the same field are defined later.
*/
rep->header.same = hdr->byte4 & SRZ_SAME_MASK;
rep->header.maximum_lba = le64dec(hdr->maximum_lba);
/*
* If the drive reports no entries that match the query,
* we're done.
*/
if (hdr_len == 0) {
rep->entries_filled = 0;
bp->bio_resid = bp->bio_bcount;
break;
}
num_avail = min((avail_len - sizeof(*hdr)) / sizeof(*desc),
hdr_len / sizeof(*desc));
/*
* If the drive didn't return any data, then we're done.
*/
if (num_avail == 0) {
rep->entries_filled = 0;
bp->bio_resid = bp->bio_bcount;
break;
}
num_to_fill = min(num_avail, rep->entries_allocated);
/*
* If the user didn't allocate any entries for us to fill,
* we're done.
*/
if (num_to_fill == 0) {
rep->entries_filled = 0;
bp->bio_resid = bp->bio_bcount;
break;
}
for (i = 0, desc = &hdr->desc_list[0], entry = &rep->entries[0];
i < num_to_fill; i++, desc++, entry++) {
/*
* NOTE: we're mapping the values here directly
* from the SCSI/ATA bit definitions to the bio.h
* definitions. There is also a warning in
* disk_zone.h, but the impact is that if
* additional values are added in the SCSI/ATA
* specs these will be visible to consumers of
* this interface.
*/
entry->zone_type = desc->zone_type & SRZ_TYPE_MASK;
entry->zone_condition =
(desc->zone_flags & SRZ_ZONE_COND_MASK) >>
SRZ_ZONE_COND_SHIFT;
entry->zone_flags |= desc->zone_flags &
(SRZ_ZONE_NON_SEQ|SRZ_ZONE_RESET);
entry->zone_length = le64dec(desc->zone_length);
entry->zone_start_lba = le64dec(desc->zone_start_lba);
entry->write_pointer_lba =
le64dec(desc->write_pointer_lba);
}
rep->entries_filled = num_to_fill;
/*
* Note that this residual is accurate from the user's
* standpoint, but the amount transferred isn't accurate
* from the standpoint of what actually came back from the
* drive.
*/
bp->bio_resid = bp->bio_bcount - (num_to_fill * sizeof(*entry));
break;
}
case DISK_ZONE_GET_PARAMS:
default:
/*
* In theory we should not get a GET_PARAMS bio, since it
* should be handled without queueing the command to the
* drive.
*/
panic("%s: Invalid zone command %d", __func__,
bp->bio_zone.zone_cmd);
break;
}
if (bp->bio_zone.zone_cmd == DISK_ZONE_REPORT_ZONES)
free(ccb->ataio.data_ptr, M_ATADA);
}
static void
adadone(struct cam_periph *periph, union ccb *done_ccb)
{
struct ada_softc *softc;
struct ccb_ataio *ataio;
struct cam_path *path;
uint32_t priority;
int state;
softc = (struct ada_softc *)periph->softc;
ataio = &done_ccb->ataio;
path = done_ccb->ccb_h.path;
priority = done_ccb->ccb_h.pinfo.priority;
CAM_DEBUG(path, CAM_DEBUG_TRACE, ("adadone\n"));
state = ataio->ccb_h.ccb_state & ADA_CCB_TYPE_MASK;
switch (state) {
case ADA_CCB_BUFFER_IO:
case ADA_CCB_TRIM:
{
struct bio *bp;
int error;
cam_periph_lock(periph);
bp = (struct bio *)done_ccb->ccb_h.ccb_bp;
if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
error = adaerror(done_ccb, 0, 0);
if (error == ERESTART) {
/* A retry was scheduled, so just return. */
cam_periph_unlock(periph);
return;
}
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
cam_release_devq(path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
/*
* If we get an error on an NCQ DSM TRIM, fall back
* to a non-NCQ DSM TRIM forever. Please note that if
* CAN_NCQ_TRIM is set, CAN_TRIM is necessarily set too.
* However, for this one trim, we treat it as advisory
* and return success up the stack.
*/
if (state == ADA_CCB_TRIM &&
error != 0 &&
(softc->flags & ADA_FLAG_CAN_NCQ_TRIM) != 0) {
softc->flags &= ~ADA_FLAG_CAN_NCQ_TRIM;
error = 0;
adasetdeletemethod(softc);
}
} else {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
panic("REQ_CMP with QFRZN");
error = 0;
}
bp->bio_error = error;
if (error != 0) {
bp->bio_resid = bp->bio_bcount;
bp->bio_flags |= BIO_ERROR;
} else {
if (bp->bio_cmd == BIO_ZONE)
adazonedone(periph, done_ccb);
else if (state == ADA_CCB_TRIM)
bp->bio_resid = 0;
else
bp->bio_resid = ataio->resid;
if ((bp->bio_resid > 0)
&& (bp->bio_cmd != BIO_ZONE))
bp->bio_flags |= BIO_ERROR;
}
softc->outstanding_cmds--;
if (softc->outstanding_cmds == 0)
softc->flags |= ADA_FLAG_WAS_OTAG;
/*
* We need to call cam_iosched before we call biodone so that we
* don't measure any activity that happens in the completion
* routine, which in the case of sendfile can be quite
* extensive. Release the periph refcount taken in adastart()
* for each CCB.
*/
cam_iosched_bio_complete(softc->cam_iosched, bp, done_ccb);
xpt_release_ccb(done_ccb);
KASSERT(softc->refcount >= 1, ("adadone softc %p refcount %d", softc, softc->refcount));
softc->refcount--;
if (state == ADA_CCB_TRIM) {
TAILQ_HEAD(, bio) queue;
struct bio *bp1;
TAILQ_INIT(&queue);
TAILQ_CONCAT(&queue, &softc->trim_req.bps, bio_queue);
/*
* Normally, the xpt_release_ccb() above would make sure
* that when we have more work to do, that work would
* get kicked off. However, we specifically keep
* trim_running set to 0 before the call above to allow
* other I/O to progress when many BIO_DELETE requests
* are pushed down. We set trim_running to 0 and call
* adaschedule again so that we don't stall if there are
* no other I/Os pending apart from BIO_DELETEs.
*/
cam_iosched_trim_done(softc->cam_iosched);
adaschedule(periph);
cam_periph_unlock(periph);
while ((bp1 = TAILQ_FIRST(&queue)) != NULL) {
TAILQ_REMOVE(&queue, bp1, bio_queue);
bp1->bio_error = error;
if (error != 0) {
bp1->bio_flags |= BIO_ERROR;
bp1->bio_resid = bp1->bio_bcount;
} else
bp1->bio_resid = 0;
biodone(bp1);
}
} else {
adaschedule(periph);
cam_periph_unlock(periph);
biodone(bp);
}
return;
}
case ADA_CCB_RAHEAD:
{
if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
if (adaerror(done_ccb, 0, 0) == ERESTART) {
/* Drop freeze taken due to CAM_DEV_QFREEZE */
cam_release_devq(path, 0, 0, 0, FALSE);
return;
} else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
cam_release_devq(path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
/*
* Since our peripheral may be invalidated by an error
* above or an external event, we must release our CCB
* before releasing the reference on the peripheral.
* The peripheral will only go away once the last reference
* is removed, and we need it around for the CCB release
* operation.
*/
xpt_release_ccb(done_ccb);
softc->state = ADA_STATE_WCACHE;
xpt_schedule(periph, priority);
/* Drop freeze taken due to CAM_DEV_QFREEZE */
cam_release_devq(path, 0, 0, 0, FALSE);
return;
}
case ADA_CCB_WCACHE:
{
if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
if (adaerror(done_ccb, 0, 0) == ERESTART) {
/* Drop freeze taken due to CAM_DEV_QFREEZE */
cam_release_devq(path, 0, 0, 0, FALSE);
return;
} else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
cam_release_devq(path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
/* Drop freeze taken due to CAM_DEV_QFREEZE */
cam_release_devq(path, 0, 0, 0, FALSE);
if ((softc->flags & ADA_FLAG_CAN_LOG)
&& (softc->zone_mode != ADA_ZONE_NONE)) {
xpt_release_ccb(done_ccb);
softc->state = ADA_STATE_LOGDIR;
xpt_schedule(periph, priority);
} else {
adaprobedone(periph, done_ccb);
}
return;
}
case ADA_CCB_LOGDIR:
{
int error;
if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
error = 0;
softc->valid_logdir_len = 0;
bzero(&softc->ata_logdir, sizeof(softc->ata_logdir));
softc->valid_logdir_len =
ataio->dxfer_len - ataio->resid;
if (softc->valid_logdir_len > 0)
bcopy(ataio->data_ptr, &softc->ata_logdir,
min(softc->valid_logdir_len,
sizeof(softc->ata_logdir)));
/*
* Figure out whether the Identify Device log is
* supported. The General Purpose log directory
* has a header, and lists the number of pages
* available for each GP log identified by the
* offset into the list.
*/
if ((softc->valid_logdir_len >=
((ATA_IDENTIFY_DATA_LOG + 1) * sizeof(uint16_t)))
&& (le16dec(softc->ata_logdir.header) ==
ATA_GP_LOG_DIR_VERSION)
&& (le16dec(&softc->ata_logdir.num_pages[
(ATA_IDENTIFY_DATA_LOG *
sizeof(uint16_t)) - sizeof(uint16_t)]) > 0)){
softc->flags |= ADA_FLAG_CAN_IDLOG;
} else {
softc->flags &= ~ADA_FLAG_CAN_IDLOG;
}
} else {
error = adaerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
/*
* If we can't get the ATA log directory,
* then ATA logs are effectively not
* supported even if the bit is set in the
* identify data.
*/
softc->flags &= ~(ADA_FLAG_CAN_LOG |
ADA_FLAG_CAN_IDLOG);
if ((done_ccb->ccb_h.status &
CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(ataio->data_ptr, M_ATADA);
if ((error == 0)
&& (softc->flags & ADA_FLAG_CAN_IDLOG)) {
softc->state = ADA_STATE_IDDIR;
xpt_release_ccb(done_ccb);
xpt_schedule(periph, priority);
} else
adaprobedone(periph, done_ccb);
return;
}
case ADA_CCB_IDDIR: {
int error;
if ((ataio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
off_t entries_offset, max_entries;
error = 0;
softc->valid_iddir_len = 0;
bzero(&softc->ata_iddir, sizeof(softc->ata_iddir));
softc->flags &= ~(ADA_FLAG_CAN_SUPCAP |
ADA_FLAG_CAN_ZONE);
softc->valid_iddir_len =
ataio->dxfer_len - ataio->resid;
if (softc->valid_iddir_len > 0)
bcopy(ataio->data_ptr, &softc->ata_iddir,
min(softc->valid_iddir_len,
sizeof(softc->ata_iddir)));
entries_offset =
__offsetof(struct ata_identify_log_pages,entries);
max_entries = softc->valid_iddir_len - entries_offset;
if ((softc->valid_iddir_len > (entries_offset + 1))
&& (le64dec(softc->ata_iddir.header) ==
ATA_IDLOG_REVISION)
&& (softc->ata_iddir.entry_count > 0)) {
int num_entries, i;
num_entries = softc->ata_iddir.entry_count;
num_entries = min(num_entries,
softc->valid_iddir_len - entries_offset);
for (i = 0; i < num_entries &&
i < max_entries; i++) {
if (softc->ata_iddir.entries[i] ==
ATA_IDL_SUP_CAP)
softc->flags |=
ADA_FLAG_CAN_SUPCAP;
else if (softc->ata_iddir.entries[i] ==
ATA_IDL_ZDI)
softc->flags |=
ADA_FLAG_CAN_ZONE;
if ((softc->flags &
ADA_FLAG_CAN_SUPCAP)
&& (softc->flags &
ADA_FLAG_CAN_ZONE))
break;
}
}
} else {
error = adaerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
/*
* If we can't get the ATA Identify Data log
* directory, then it effectively isn't
* supported even if the ATA Log directory
* lists a non-zero number of pages for
* this log.
*/
softc->flags &= ~ADA_FLAG_CAN_IDLOG;
if ((done_ccb->ccb_h.status &
CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(ataio->data_ptr, M_ATADA);
if ((error == 0)
&& (softc->flags & ADA_FLAG_CAN_SUPCAP)) {
softc->state = ADA_STATE_SUP_CAP;
xpt_release_ccb(done_ccb);
xpt_schedule(periph, priority);
} else
adaprobedone(periph, done_ccb);
return;
}
case ADA_CCB_SUP_CAP: {
int error;
if ((ataio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
uint32_t valid_len;
size_t needed_size;
struct ata_identify_log_sup_cap *sup_cap;
error = 0;
sup_cap = (struct ata_identify_log_sup_cap *)
ataio->data_ptr;
valid_len = ataio->dxfer_len - ataio->resid;
needed_size =
__offsetof(struct ata_identify_log_sup_cap,
sup_zac_cap) + 1 + sizeof(sup_cap->sup_zac_cap);
if (valid_len >= needed_size) {
uint64_t zoned, zac_cap;
zoned = le64dec(sup_cap->zoned_cap);
if (zoned & ATA_ZONED_VALID) {
/*
* This should have already been
* set, because this is also in the
* ATA identify data.
*/
if ((zoned & ATA_ZONED_MASK) ==
ATA_SUPPORT_ZONE_HOST_AWARE)
softc->zone_mode =
ADA_ZONE_HOST_AWARE;
else if ((zoned & ATA_ZONED_MASK) ==
ATA_SUPPORT_ZONE_DEV_MANAGED)
softc->zone_mode =
ADA_ZONE_DRIVE_MANAGED;
}
zac_cap = le64dec(sup_cap->sup_zac_cap);
if (zac_cap & ATA_SUP_ZAC_CAP_VALID) {
if (zac_cap & ATA_REPORT_ZONES_SUP)
softc->zone_flags |=
ADA_ZONE_FLAG_RZ_SUP;
if (zac_cap & ATA_ND_OPEN_ZONE_SUP)
softc->zone_flags |=
ADA_ZONE_FLAG_OPEN_SUP;
if (zac_cap & ATA_ND_CLOSE_ZONE_SUP)
softc->zone_flags |=
ADA_ZONE_FLAG_CLOSE_SUP;
if (zac_cap & ATA_ND_FINISH_ZONE_SUP)
softc->zone_flags |=
ADA_ZONE_FLAG_FINISH_SUP;
if (zac_cap & ATA_ND_RWP_SUP)
softc->zone_flags |=
ADA_ZONE_FLAG_RWP_SUP;
} else {
/*
* This field was introduced in
* ACS-4, r08 on April 28th, 2015.
* If the drive firmware was written
* to an earlier spec, it won't have
* the field. So, assume all
* commands are supported.
*/
softc->zone_flags |=
ADA_ZONE_FLAG_SUP_MASK;
}
}
} else {
error = adaerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
/*
* If we can't get the ATA Identify Data
* Supported Capabilities page, clear the
* flag...
*/
softc->flags &= ~ADA_FLAG_CAN_SUPCAP;
/*
* And clear zone capabilities.
*/
softc->zone_flags &= ~ADA_ZONE_FLAG_SUP_MASK;
if ((done_ccb->ccb_h.status &
CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(ataio->data_ptr, M_ATADA);
if ((error == 0)
&& (softc->flags & ADA_FLAG_CAN_ZONE)) {
softc->state = ADA_STATE_ZONE;
xpt_release_ccb(done_ccb);
xpt_schedule(periph, priority);
} else
adaprobedone(periph, done_ccb);
return;
}
case ADA_CCB_ZONE: {
int error;
if ((ataio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
struct ata_zoned_info_log *zi_log;
uint32_t valid_len;
size_t needed_size;
zi_log = (struct ata_zoned_info_log *)ataio->data_ptr;
valid_len = ataio->dxfer_len - ataio->resid;
needed_size = __offsetof(struct ata_zoned_info_log,
version_info) + 1 + sizeof(zi_log->version_info);
if (valid_len >= needed_size) {
uint64_t tmpvar;
tmpvar = le64dec(zi_log->zoned_cap);
if (tmpvar & ATA_ZDI_CAP_VALID) {
if (tmpvar & ATA_ZDI_CAP_URSWRZ)
softc->zone_flags |=
ADA_ZONE_FLAG_URSWRZ;
else
softc->zone_flags &=
~ADA_ZONE_FLAG_URSWRZ;
}
tmpvar = le64dec(zi_log->optimal_seq_zones);
if (tmpvar & ATA_ZDI_OPT_SEQ_VALID) {
softc->zone_flags |=
ADA_ZONE_FLAG_OPT_SEQ_SET;
softc->optimal_seq_zones = (tmpvar &
ATA_ZDI_OPT_SEQ_MASK);
} else {
softc->zone_flags &=
~ADA_ZONE_FLAG_OPT_SEQ_SET;
softc->optimal_seq_zones = 0;
}
tmpvar = le64dec(zi_log->optimal_nonseq_zones);
if (tmpvar & ATA_ZDI_OPT_NS_VALID) {
softc->zone_flags |=
ADA_ZONE_FLAG_OPT_NONSEQ_SET;
softc->optimal_nonseq_zones =
(tmpvar & ATA_ZDI_OPT_NS_MASK);
} else {
softc->zone_flags &=
~ADA_ZONE_FLAG_OPT_NONSEQ_SET;
softc->optimal_nonseq_zones = 0;
}
tmpvar = le64dec(zi_log->max_seq_req_zones);
if (tmpvar & ATA_ZDI_MAX_SEQ_VALID) {
softc->zone_flags |=
ADA_ZONE_FLAG_MAX_SEQ_SET;
softc->max_seq_zones =
(tmpvar & ATA_ZDI_MAX_SEQ_MASK);
} else {
softc->zone_flags &=
~ADA_ZONE_FLAG_MAX_SEQ_SET;
softc->max_seq_zones = 0;
}
}
} else {
error = adaerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
softc->flags &= ~ADA_FLAG_CAN_ZONE;
softc->zone_flags &= ~ADA_ZONE_FLAG_SET_MASK;
if ((done_ccb->ccb_h.status &
CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(ataio->data_ptr, M_ATADA);
adaprobedone(periph, done_ccb);
return;
}
case ADA_CCB_DUMP:
/* No-op. We're polling */
return;
default:
break;
}
xpt_release_ccb(done_ccb);
}
static int
adaerror(union ccb *ccb, u_int32_t cam_flags, u_int32_t sense_flags)
{
#ifdef CAM_IO_STATS
struct ada_softc *softc;
struct cam_periph *periph;
periph = xpt_path_periph(ccb->ccb_h.path);
softc = (struct ada_softc *)periph->softc;
switch (ccb->ccb_h.status & CAM_STATUS_MASK) {
case CAM_CMD_TIMEOUT:
softc->timeouts++;
break;
case CAM_REQ_ABORTED:
case CAM_REQ_CMP_ERR:
case CAM_REQ_TERMIO:
case CAM_UNREC_HBA_ERROR:
case CAM_DATA_RUN_ERR:
case CAM_ATA_STATUS_ERROR:
softc->errors++;
break;
default:
break;
}
#endif
return(cam_periph_error(ccb, cam_flags, sense_flags));
}
static void
adagetparams(struct cam_periph *periph, struct ccb_getdev *cgd)
{
struct ada_softc *softc = (struct ada_softc *)periph->softc;
struct disk_params *dp = &softc->params;
u_int64_t lbasize48;
u_int32_t lbasize;
dp->secsize = ata_logical_sector_size(&cgd->ident_data);
if ((cgd->ident_data.atavalid & ATA_FLAG_54_58) &&
cgd->ident_data.current_heads && cgd->ident_data.current_sectors) {
dp->heads = cgd->ident_data.current_heads;
dp->secs_per_track = cgd->ident_data.current_sectors;
dp->cylinders = cgd->ident_data.cylinders;
dp->sectors = (u_int32_t)cgd->ident_data.current_size_1 |
((u_int32_t)cgd->ident_data.current_size_2 << 16);
} else {
dp->heads = cgd->ident_data.heads;
dp->secs_per_track = cgd->ident_data.sectors;
dp->cylinders = cgd->ident_data.cylinders;
dp->sectors = cgd->ident_data.cylinders *
(u_int32_t)(dp->heads * dp->secs_per_track);
}
lbasize = (u_int32_t)cgd->ident_data.lba_size_1 |
((u_int32_t)cgd->ident_data.lba_size_2 << 16);
/* Use the 28-bit LBA size if valid or bigger than the CHS mapping. */
if (cgd->ident_data.cylinders == 16383 || dp->sectors < lbasize)
dp->sectors = lbasize;
/* Use the 48-bit LBA size if valid. */
lbasize48 = ((u_int64_t)cgd->ident_data.lba_size48_1) |
((u_int64_t)cgd->ident_data.lba_size48_2 << 16) |
((u_int64_t)cgd->ident_data.lba_size48_3 << 32) |
((u_int64_t)cgd->ident_data.lba_size48_4 << 48);
if ((cgd->ident_data.support.command2 & ATA_SUPPORT_ADDRESS48) &&
lbasize48 > ATA_MAX_28BIT_LBA)
dp->sectors = lbasize48;
}
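The 48-bit capacity assembly in adagetparams() can be exercised in isolation: four little-endian 16-bit identify words are shifted into one 64-bit sector count. A minimal userland sketch (the helper name and word values are made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Assemble a 48-bit LBA sector count from four 16-bit identify-data
 * words, mirroring what adagetparams() does with lba_size48_1..4.
 */
static uint64_t
lba48_from_words(uint16_t w1, uint16_t w2, uint16_t w3, uint16_t w4)
{
	return ((uint64_t)w1) |
	    ((uint64_t)w2 << 16) |
	    ((uint64_t)w3 << 32) |
	    ((uint64_t)w4 << 48);
}
```

As in the driver, the result is only used when the device reports ATA_SUPPORT_ADDRESS48 and the value exceeds ATA_MAX_28BIT_LBA.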
static void
adasendorderedtag(void *arg)
{
struct ada_softc *softc = arg;
if (ada_send_ordered) {
if (softc->outstanding_cmds > 0) {
if ((softc->flags & ADA_FLAG_WAS_OTAG) == 0)
softc->flags |= ADA_FLAG_NEED_OTAG;
softc->flags &= ~ADA_FLAG_WAS_OTAG;
}
}
/* Queue us up again */
callout_reset(&softc->sendordered_c,
(ada_default_timeout * hz) / ADA_ORDEREDTAG_INTERVAL,
adasendorderedtag, softc);
}
/*
* Step through all ADA peripheral drivers, and if the device is still open,
* sync the disk cache to physical media.
*/
static void
adaflush(void)
{
struct cam_periph *periph;
struct ada_softc *softc;
union ccb *ccb;
int error;
CAM_PERIPH_FOREACH(periph, &adadriver) {
softc = (struct ada_softc *)periph->softc;
if (SCHEDULER_STOPPED()) {
/* If we panicked with the lock held, do not recurse. */
if (!cam_periph_owned(periph) &&
(softc->flags & ADA_FLAG_OPEN)) {
adadump(softc->disk, NULL, 0, 0, 0);
}
continue;
}
cam_periph_lock(periph);
/*
* We only sync the cache if the drive is still open, and
* if the drive is capable of it.
*/
if (((softc->flags & ADA_FLAG_OPEN) == 0) ||
(softc->flags & ADA_FLAG_CAN_FLUSHCACHE) == 0) {
cam_periph_unlock(periph);
continue;
}
ccb = cam_periph_getccb(periph, CAM_PRIORITY_NORMAL);
cam_fill_ataio(&ccb->ataio,
0,
NULL,
CAM_DIR_NONE,
0,
NULL,
0,
ada_default_timeout*1000);
if (softc->flags & ADA_FLAG_CAN_48BIT)
ata_48bit_cmd(&ccb->ataio, ATA_FLUSHCACHE48, 0, 0, 0);
else
ata_28bit_cmd(&ccb->ataio, ATA_FLUSHCACHE, 0, 0, 0);
error = cam_periph_runccb(ccb, adaerror, /*cam_flags*/0,
/*sense_flags*/ SF_NO_RECOVERY | SF_NO_RETRY,
softc->disk->d_devstat);
if (error != 0)
xpt_print(periph->path, "Synchronize cache failed\n");
xpt_release_ccb(ccb);
cam_periph_unlock(periph);
}
}
static void
adaspindown(uint8_t cmd, int flags)
{
struct cam_periph *periph;
struct ada_softc *softc;
struct ccb_ataio local_ccb;
int error;
CAM_PERIPH_FOREACH(periph, &adadriver) {
/* If we panicked with the lock held, do not recurse here. */
if (cam_periph_owned(periph))
continue;
cam_periph_lock(periph);
softc = (struct ada_softc *)periph->softc;
/*
* We only spin-down the drive if it is capable of it.
*/
if ((softc->flags & ADA_FLAG_CAN_POWERMGT) == 0) {
cam_periph_unlock(periph);
continue;
}
if (bootverbose)
xpt_print(periph->path, "spin-down\n");
memset(&local_ccb, 0, sizeof(local_ccb));
xpt_setup_ccb(&local_ccb.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
local_ccb.ccb_h.ccb_state = ADA_CCB_DUMP;
cam_fill_ataio(&local_ccb,
0,
NULL,
CAM_DIR_NONE | flags,
0,
NULL,
0,
ada_default_timeout*1000);
ata_28bit_cmd(&local_ccb, cmd, 0, 0, 0);
error = cam_periph_runccb((union ccb *)&local_ccb, adaerror,
/*cam_flags*/0, /*sense_flags*/ SF_NO_RECOVERY | SF_NO_RETRY,
softc->disk->d_devstat);
if (error != 0)
xpt_print(periph->path, "Spin-down disk failed\n");
cam_periph_unlock(periph);
}
}
static void
adashutdown(void *arg, int howto)
{
int how;
adaflush();
/*
* STANDBY IMMEDIATE saves any volatile data to the drive. It also spins
* down hard drives. IDLE IMMEDIATE also saves the volatile data without
* a spindown. We send the former when we expect to lose power soon. For
* a warm boot, we send the latter to avoid a thundering herd of spinups
* just after the kernel loads while probing. We have to do something to
* flush the data because the BIOS in many systems resets the HBA,
* causing a COMINIT/COMRESET negotiation, which some drives interpret
* as license to discard the volatile data, while others count it as an
* unclean shutdown in their SMART attributes when in the Active PM state.
*
* adaspindown will ensure that we don't send this to a drive that
* doesn't support it.
*/
if (ada_spindown_shutdown != 0) {
how = (howto & (RB_HALT | RB_POWEROFF | RB_POWERCYCLE)) ?
ATA_STANDBY_IMMEDIATE : ATA_IDLE_IMMEDIATE;
adaspindown(how, 0);
}
}
static void
adasuspend(void *arg)
{
adaflush();
/*
* SLEEP also flushes any volatile data, like STANDBY IMMEDIATE,
* so we don't need to send that as well.
*/
if (ada_spindown_suspend != 0)
adaspindown(ATA_SLEEP, CAM_DEV_QFREEZE);
}
static void
adaresume(void *arg)
{
struct cam_periph *periph;
struct ada_softc *softc;
if (ada_spindown_suspend == 0)
return;
CAM_PERIPH_FOREACH(periph, &adadriver) {
cam_periph_lock(periph);
softc = (struct ada_softc *)periph->softc;
/*
* We only spin-down the drive if it is capable of it.
*/
if ((softc->flags & ADA_FLAG_CAN_POWERMGT) == 0) {
cam_periph_unlock(periph);
continue;
}
if (bootverbose)
xpt_print(periph->path, "resume\n");
/*
* Drop freeze taken due to CAM_DEV_QFREEZE flag set on
* sleep request.
*/
cam_release_devq(periph->path,
/*relsim_flags*/0,
/*openings*/0,
/*timeout*/0,
/*getcount_only*/0);
cam_periph_unlock(periph);
}
}
#endif /* _KERNEL */
Index: projects/clang800-import/sys/cam/scsi/scsi_da.c
===================================================================
--- projects/clang800-import/sys/cam/scsi/scsi_da.c (revision 343955)
+++ projects/clang800-import/sys/cam/scsi/scsi_da.c (revision 343956)
@@ -1,6516 +1,6525 @@
/*-
* Implementation of SCSI Direct Access Peripheral driver for CAM.
*
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 1997 Justin T. Gibbs.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification, immediately at the beginning of the file.
* 2. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#ifdef _KERNEL
#include "opt_da.h"
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/bio.h>
#include <sys/sysctl.h>
#include <sys/taskqueue.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/conf.h>
#include <sys/devicestat.h>
#include <sys/eventhandler.h>
#include <sys/malloc.h>
#include <sys/cons.h>
#include <sys/endian.h>
#include <sys/proc.h>
#include <sys/sbuf.h>
#include <geom/geom.h>
#include <geom/geom_disk.h>
#include <machine/atomic.h>
#endif /* _KERNEL */
#ifndef _KERNEL
#include <stdio.h>
#include <string.h>
#endif /* _KERNEL */
#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/cam_periph.h>
#include <cam/cam_xpt_periph.h>
#include <cam/cam_sim.h>
#include <cam/cam_iosched.h>
#include <cam/scsi/scsi_message.h>
#include <cam/scsi/scsi_da.h>
#ifdef _KERNEL
/*
* Note that there are probe ordering dependencies here. The order isn't
* controlled by this enumeration, but by explicit state transitions in
* dastart() and dadone(). Here are some of the dependencies:
*
* 1. RC should come first, before RC16, unless there is evidence that RC16
* is supported.
* 2. BDC needs to come before any of the ATA probes, or the ZONE probe.
* 3. The ATA probes should go in this order:
* ATA -> LOGDIR -> IDDIR -> SUP -> ATA_ZONE
*/
typedef enum {
DA_STATE_PROBE_WP,
DA_STATE_PROBE_RC,
DA_STATE_PROBE_RC16,
DA_STATE_PROBE_LBP,
DA_STATE_PROBE_BLK_LIMITS,
DA_STATE_PROBE_BDC,
DA_STATE_PROBE_ATA,
DA_STATE_PROBE_ATA_LOGDIR,
DA_STATE_PROBE_ATA_IDDIR,
DA_STATE_PROBE_ATA_SUP,
DA_STATE_PROBE_ATA_ZONE,
DA_STATE_PROBE_ZONE,
DA_STATE_NORMAL
} da_state;
typedef enum {
DA_FLAG_PACK_INVALID = 0x000001,
DA_FLAG_NEW_PACK = 0x000002,
DA_FLAG_PACK_LOCKED = 0x000004,
DA_FLAG_PACK_REMOVABLE = 0x000008,
DA_FLAG_NEED_OTAG = 0x000020,
DA_FLAG_WAS_OTAG = 0x000040,
DA_FLAG_RETRY_UA = 0x000080,
DA_FLAG_OPEN = 0x000100,
DA_FLAG_SCTX_INIT = 0x000200,
DA_FLAG_CAN_RC16 = 0x000400,
DA_FLAG_PROBED = 0x000800,
DA_FLAG_DIRTY = 0x001000,
DA_FLAG_ANNOUNCED = 0x002000,
DA_FLAG_CAN_ATA_DMA = 0x004000,
DA_FLAG_CAN_ATA_LOG = 0x008000,
DA_FLAG_CAN_ATA_IDLOG = 0x010000,
DA_FLAG_CAN_ATA_SUPCAP = 0x020000,
DA_FLAG_CAN_ATA_ZONE = 0x040000,
DA_FLAG_TUR_PENDING = 0x080000
} da_flags;
typedef enum {
DA_Q_NONE = 0x00,
DA_Q_NO_SYNC_CACHE = 0x01,
DA_Q_NO_6_BYTE = 0x02,
DA_Q_NO_PREVENT = 0x04,
DA_Q_4K = 0x08,
DA_Q_NO_RC16 = 0x10,
DA_Q_NO_UNMAP = 0x20,
DA_Q_RETRY_BUSY = 0x40,
DA_Q_SMR_DM = 0x80,
- DA_Q_STRICT_UNMAP = 0x100
+ DA_Q_STRICT_UNMAP = 0x100,
+ DA_Q_128KB = 0x200
} da_quirks;
#define DA_Q_BIT_STRING \
"\020" \
"\001NO_SYNC_CACHE" \
"\002NO_6_BYTE" \
"\003NO_PREVENT" \
"\0044K" \
"\005NO_RC16" \
"\006NO_UNMAP" \
"\007RETRY_BUSY" \
"\010SMR_DM" \
- "\011STRICT_UNMAP"
+ "\011STRICT_UNMAP" \
+ "\012128KB"
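The \020-prefixed string above is the kernel printf(9) %b bit-field format: the first byte gives the numeric base for printing the value (\20 = hex), and each subsequent entry is a 1-based bit number (a control character) followed by the bit's printable name. A toy userland decoder, purely for illustration (not the kernel implementation; the caller must size `out` generously):

```c
#include <assert.h>
#include <string.h>

/*
 * Toy decoder for the %b bit-string format used by DA_Q_BIT_STRING:
 * skip the base byte, then each entry is a bit number (1-based control
 * character) followed by a printable name ending at the next control
 * character. Appends the names of all set bits, comma-separated.
 */
static void
decode_bits(unsigned v, const char *fmt, char *out)
{
	const char *p = fmt + 1;	/* skip the numeric-base byte */

	out[0] = '\0';
	while (*p != '\0') {
		unsigned bit = (unsigned char)*p++;
		const char *name = p;

		while ((unsigned char)*p >= 32)	/* name runs to next ctrl char */
			p++;
		if (v & (1u << (bit - 1))) {
			if (out[0] != '\0')
				strcat(out, ",");
			strncat(out, name, (size_t)(p - name));
		}
	}
}
```

So a quirk mask of DA_Q_4K | DA_Q_NO_RC16 (0x18) decodes to the names at bits 4 and 5 of the table above.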
typedef enum {
DA_CCB_PROBE_RC = 0x01,
DA_CCB_PROBE_RC16 = 0x02,
DA_CCB_PROBE_LBP = 0x03,
DA_CCB_PROBE_BLK_LIMITS = 0x04,
DA_CCB_PROBE_BDC = 0x05,
DA_CCB_PROBE_ATA = 0x06,
DA_CCB_BUFFER_IO = 0x07,
DA_CCB_DUMP = 0x0A,
DA_CCB_DELETE = 0x0B,
DA_CCB_TUR = 0x0C,
DA_CCB_PROBE_ZONE = 0x0D,
DA_CCB_PROBE_ATA_LOGDIR = 0x0E,
DA_CCB_PROBE_ATA_IDDIR = 0x0F,
DA_CCB_PROBE_ATA_SUP = 0x10,
DA_CCB_PROBE_ATA_ZONE = 0x11,
DA_CCB_PROBE_WP = 0x12,
DA_CCB_TYPE_MASK = 0x1F,
DA_CCB_RETRY_UA = 0x20
} da_ccb_state;
/*
* Order here is important for method choice
*
* We prefer ATA_TRIM: tests run against a Sandforce 2281 SSD attached to an
* LSI 2008 (mps) controller (FW: v12, Drv: v14) resulted in 20% quicker
* deletes using ATA_TRIM than the corresponding UNMAP results for a
* real-world mysql import taking 5 minutes.
*
*/
typedef enum {
DA_DELETE_NONE,
DA_DELETE_DISABLE,
DA_DELETE_ATA_TRIM,
DA_DELETE_UNMAP,
DA_DELETE_WS16,
DA_DELETE_WS10,
DA_DELETE_ZERO,
DA_DELETE_MIN = DA_DELETE_ATA_TRIM,
DA_DELETE_MAX = DA_DELETE_ZERO
} da_delete_methods;
/*
* For SCSI, host managed drives show up as a separate device type. For
* ATA, host managed drives also have a different device signature.
* XXX KDM figure out the ATA host managed signature.
*/
typedef enum {
DA_ZONE_NONE = 0x00,
DA_ZONE_DRIVE_MANAGED = 0x01,
DA_ZONE_HOST_AWARE = 0x02,
DA_ZONE_HOST_MANAGED = 0x03
} da_zone_mode;
/*
* We distinguish between these interface cases in addition to the drive type:
* o ATA drive behind a SCSI translation layer that knows about ZBC/ZAC
* o ATA drive behind a SCSI translation layer that does not know about
* ZBC/ZAC, and so needs to be managed via ATA passthrough. In this
* case, we would need to share the ATA code with the ada(4) driver.
* o SCSI drive.
*/
typedef enum {
DA_ZONE_IF_SCSI,
DA_ZONE_IF_ATA_PASS,
DA_ZONE_IF_ATA_SAT,
} da_zone_interface;
typedef enum {
DA_ZONE_FLAG_RZ_SUP = 0x0001,
DA_ZONE_FLAG_OPEN_SUP = 0x0002,
DA_ZONE_FLAG_CLOSE_SUP = 0x0004,
DA_ZONE_FLAG_FINISH_SUP = 0x0008,
DA_ZONE_FLAG_RWP_SUP = 0x0010,
DA_ZONE_FLAG_SUP_MASK = (DA_ZONE_FLAG_RZ_SUP |
DA_ZONE_FLAG_OPEN_SUP |
DA_ZONE_FLAG_CLOSE_SUP |
DA_ZONE_FLAG_FINISH_SUP |
DA_ZONE_FLAG_RWP_SUP),
DA_ZONE_FLAG_URSWRZ = 0x0020,
DA_ZONE_FLAG_OPT_SEQ_SET = 0x0040,
DA_ZONE_FLAG_OPT_NONSEQ_SET = 0x0080,
DA_ZONE_FLAG_MAX_SEQ_SET = 0x0100,
DA_ZONE_FLAG_SET_MASK = (DA_ZONE_FLAG_OPT_SEQ_SET |
DA_ZONE_FLAG_OPT_NONSEQ_SET |
DA_ZONE_FLAG_MAX_SEQ_SET)
} da_zone_flags;
static struct da_zone_desc {
da_zone_flags value;
const char *desc;
} da_zone_desc_table[] = {
{DA_ZONE_FLAG_RZ_SUP, "Report Zones" },
{DA_ZONE_FLAG_OPEN_SUP, "Open" },
{DA_ZONE_FLAG_CLOSE_SUP, "Close" },
{DA_ZONE_FLAG_FINISH_SUP, "Finish" },
{DA_ZONE_FLAG_RWP_SUP, "Reset Write Pointer" },
};
typedef void da_delete_func_t (struct cam_periph *periph, union ccb *ccb,
struct bio *bp);
static da_delete_func_t da_delete_trim;
static da_delete_func_t da_delete_unmap;
static da_delete_func_t da_delete_ws;
static const void * da_delete_functions[] = {
NULL,
NULL,
da_delete_trim,
da_delete_unmap,
da_delete_ws,
da_delete_ws,
da_delete_ws
};
static const char *da_delete_method_names[] =
{ "NONE", "DISABLE", "ATA_TRIM", "UNMAP", "WS16", "WS10", "ZERO" };
static const char *da_delete_method_desc[] =
{ "NONE", "DISABLED", "ATA TRIM", "UNMAP", "WRITE SAME(16) with UNMAP",
"WRITE SAME(10) with UNMAP", "ZERO" };
/* Offsets into our private area for storing information */
#define ccb_state ppriv_field0
#define ccb_bp ppriv_ptr1
struct disk_params {
u_int8_t heads;
u_int32_t cylinders;
u_int8_t secs_per_track;
u_int32_t secsize; /* Number of bytes/sector */
u_int64_t sectors; /* total number sectors */
u_int stripesize;
u_int stripeoffset;
};
#define UNMAP_RANGE_MAX 0xffffffff
#define UNMAP_HEAD_SIZE 8
#define UNMAP_RANGE_SIZE 16
#define UNMAP_MAX_RANGES 2048 /* Protocol Max is 4095 */
#define UNMAP_BUF_SIZE ((UNMAP_MAX_RANGES * UNMAP_RANGE_SIZE) + \
UNMAP_HEAD_SIZE)
#define WS10_MAX_BLKS 0xffff
#define WS16_MAX_BLKS 0xffffffff
#define ATA_TRIM_MAX_RANGES ((UNMAP_BUF_SIZE / \
(ATA_DSM_RANGE_SIZE * ATA_DSM_BLK_SIZE)) * ATA_DSM_BLK_SIZE)
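A quick sanity check of the sizing above: UNMAP_BUF_SIZE is 2048 ranges of 16 bytes plus an 8-byte header, and ATA_TRIM_MAX_RANGES packs 8-byte ATA DSM range entries into that same buffer in 512-byte DSM blocks (ATA_DSM_RANGE_SIZE = 8 and ATA_DSM_BLK_SIZE = 512 are assumed here, matching their definitions in <sys/ata.h>):

```c
#include <assert.h>

/* Values restated from scsi_da.c; ATA_DSM_* assumed from <sys/ata.h>. */
#define UNMAP_HEAD_SIZE		8
#define UNMAP_RANGE_SIZE	16
#define UNMAP_MAX_RANGES	2048	/* protocol max is 4095 */
#define UNMAP_BUF_SIZE		((UNMAP_MAX_RANGES * UNMAP_RANGE_SIZE) + \
				 UNMAP_HEAD_SIZE)
#define ATA_DSM_RANGE_SIZE	8	/* bytes per TRIM range entry */
#define ATA_DSM_BLK_SIZE	512	/* bytes per DSM block */
#define ATA_TRIM_MAX_RANGES	((UNMAP_BUF_SIZE / \
	(ATA_DSM_RANGE_SIZE * ATA_DSM_BLK_SIZE)) * ATA_DSM_BLK_SIZE)
```

The integer division rounds down to whole DSM blocks, so the TRIM range entries always fit inside the shared unmap_buf.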
#define DA_WORK_TUR (1 << 16)
typedef enum {
DA_REF_OPEN = 1,
DA_REF_OPEN_HOLD,
DA_REF_CLOSE_HOLD,
DA_REF_PROBE_HOLD,
DA_REF_TUR,
DA_REF_GEOM,
DA_REF_SYSCTL,
DA_REF_REPROBE,
DA_REF_MAX /* KEEP LAST */
} da_ref_token;
struct da_softc {
struct cam_iosched_softc *cam_iosched;
struct bio_queue_head delete_run_queue;
LIST_HEAD(, ccb_hdr) pending_ccbs;
int refcount; /* Active xpt_action() calls */
da_state state;
da_flags flags;
da_quirks quirks;
int minimum_cmd_size;
int error_inject;
int trim_max_ranges;
int delete_available; /* Delete methods possibly available */
da_zone_mode zone_mode;
da_zone_interface zone_interface;
da_zone_flags zone_flags;
struct ata_gp_log_dir ata_logdir;
int valid_logdir_len;
struct ata_identify_log_pages ata_iddir;
int valid_iddir_len;
uint64_t optimal_seq_zones;
uint64_t optimal_nonseq_zones;
uint64_t max_seq_zones;
u_int maxio;
uint32_t unmap_max_ranges;
uint32_t unmap_max_lba; /* Max LBAs in UNMAP req */
uint32_t unmap_gran;
uint32_t unmap_gran_align;
uint64_t ws_max_blks;
uint64_t trim_count;
uint64_t trim_ranges;
uint64_t trim_lbas;
da_delete_methods delete_method_pref;
da_delete_methods delete_method;
da_delete_func_t *delete_func;
int unmappedio;
int rotating;
struct disk_params params;
struct disk *disk;
union ccb saved_ccb;
struct task sysctl_task;
struct sysctl_ctx_list sysctl_ctx;
struct sysctl_oid *sysctl_tree;
struct callout sendordered_c;
uint64_t wwpn;
uint8_t unmap_buf[UNMAP_BUF_SIZE];
struct scsi_read_capacity_data_long rcaplong;
struct callout mediapoll_c;
int ref_flags[DA_REF_MAX];
#ifdef CAM_IO_STATS
struct sysctl_ctx_list sysctl_stats_ctx;
struct sysctl_oid *sysctl_stats_tree;
u_int errors;
u_int timeouts;
u_int invalidations;
#endif
#define DA_ANNOUNCETMP_SZ 160
char announce_temp[DA_ANNOUNCETMP_SZ];
#define DA_ANNOUNCE_SZ 400
char announcebuf[DA_ANNOUNCE_SZ];
};
#define dadeleteflag(softc, delete_method, enable) \
if (enable) { \
softc->delete_available |= (1 << delete_method); \
} else { \
softc->delete_available &= ~(1 << delete_method); \
}
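The dadeleteflag() macro above keeps delete_available as a bitmask indexed by the da_delete_methods enumerators. The same set/clear idiom as a standalone function, with hypothetical names for illustration:

```c
#include <assert.h>

/* Hypothetical stand-ins for the da_delete_methods enumerators. */
enum delete_method { DEL_NONE, DEL_DISABLE, DEL_ATA_TRIM, DEL_UNMAP, DEL_WS16 };

/* Same set/clear pattern as the dadeleteflag() macro, as a function. */
static void
set_delete_flag(int *available, enum delete_method m, int enable)
{
	if (enable)
		*available |= (1 << m);
	else
		*available &= ~(1 << m);
}
```

Each bit records whether the corresponding delete method is possibly available on the device, independent of the method currently selected.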
struct da_quirk_entry {
struct scsi_inquiry_pattern inq_pat;
da_quirks quirks;
};
static const char quantum[] = "QUANTUM";
static const char microp[] = "MICROP";
static struct da_quirk_entry da_quirk_table[] =
{
/* SPI, FC devices */
{
/*
* Fujitsu M2513A MO drives.
* Tested devices: M2513A2 firmware versions 1200 & 1300.
* (dip switch selects whether T_DIRECT or T_OPTICAL device)
* Reported by: W.Scholten <whs@xs4all.nl>
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "FUJITSU", "M2513A", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/* See above. */
{T_OPTICAL, SIP_MEDIA_REMOVABLE, "FUJITSU", "M2513A", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* This particular Fujitsu drive doesn't like the
* synchronize cache command.
* Reported by: Tom Jackson <toj@gorilla.net>
*/
{T_DIRECT, SIP_MEDIA_FIXED, "FUJITSU", "M2954*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* This drive doesn't like the synchronize cache command
* either. Reported by: Matthew Jacob <mjacob@feral.com>
* in NetBSD PR kern/6027, August 24, 1998.
*/
{T_DIRECT, SIP_MEDIA_FIXED, microp, "2217*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* This drive doesn't like the synchronize cache command
* either. Reported by: Hellmuth Michaelis (hm@kts.org)
* (PR 8882).
*/
{T_DIRECT, SIP_MEDIA_FIXED, microp, "2112*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Doesn't like the synchronize cache command.
* Reported by: Blaz Zupan <blaz@gold.amis.net>
*/
{T_DIRECT, SIP_MEDIA_FIXED, "NEC", "D3847*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Doesn't like the synchronize cache command.
* Reported by: Blaz Zupan <blaz@gold.amis.net>
*/
{T_DIRECT, SIP_MEDIA_FIXED, quantum, "MAVERICK 540S", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Doesn't like the synchronize cache command.
*/
{T_DIRECT, SIP_MEDIA_FIXED, quantum, "LPS525S", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Doesn't like the synchronize cache command.
* Reported by: walter@pelissero.de
*/
{T_DIRECT, SIP_MEDIA_FIXED, quantum, "LPS540S", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Doesn't work correctly with 6 byte reads/writes.
* Returns illegal request, and points to byte 9 of the
* 6-byte CDB.
* Reported by: Adam McDougall <bsdx@spawnet.com>
*/
{T_DIRECT, SIP_MEDIA_FIXED, quantum, "VIKING 4*", "*"},
/*quirks*/ DA_Q_NO_6_BYTE
},
{
/* See above. */
{T_DIRECT, SIP_MEDIA_FIXED, quantum, "VIKING 2*", "*"},
/*quirks*/ DA_Q_NO_6_BYTE
},
{
/*
* Doesn't like the synchronize cache command.
* Reported by: walter@pelissero.de
*/
{T_DIRECT, SIP_MEDIA_FIXED, "CONNER", "CP3500*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* The CISS RAID controllers do not support SYNC_CACHE
*/
{T_DIRECT, SIP_MEDIA_FIXED, "COMPAQ", "RAID*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* The STEC SSDs sometimes hang on UNMAP.
*/
{T_DIRECT, SIP_MEDIA_FIXED, "STEC", "*", "*"},
/*quirks*/ DA_Q_NO_UNMAP
},
{
/*
* VMware returns BUSY status when storage has transient
* connectivity problems, so better wait.
* Also VMware returns odd errors on misaligned UNMAPs.
*/
{T_DIRECT, SIP_MEDIA_FIXED, "VMware*", "*", "*"},
/*quirks*/ DA_Q_RETRY_BUSY | DA_Q_STRICT_UNMAP
},
/* USB mass storage devices supported by umass(4) */
{
/*
* EXATELECOM (Sigmatel) i-Bead 100/105 USB Flash MP3 Player
* PR: kern/51675
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "EXATEL", "i-BEAD10*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Power Quotient Int. (PQI) USB flash key
* PR: kern/53067
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Generic*", "USB Flash Disk*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Creative Nomad MUVO mp3 player (USB)
* PR: kern/53094
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "CREATIVE", "NOMAD_MUVO", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE|DA_Q_NO_PREVENT
},
{
/*
* Jungsoft NEXDISK USB flash key
* PR: kern/54737
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "JUNGSOFT", "NEXDISK*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* FreeDik USB Mini Data Drive
* PR: kern/54786
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "FreeDik*", "Mini Data Drive",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Sigmatel USB Flash MP3 Player
* PR: kern/57046
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "SigmaTel", "MSCN", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE|DA_Q_NO_PREVENT
},
{
/*
* Neuros USB Digital Audio Computer
* PR: kern/63645
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "NEUROS", "dig. audio comp.",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* SEAGRAND NP-900 MP3 Player
* PR: kern/64563
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "SEAGRAND", "NP-900*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE|DA_Q_NO_PREVENT
},
{
/*
* iRiver iFP MP3 player (with UMS Firmware)
* PR: kern/54881, i386/63941, kern/66124
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "iRiver", "iFP*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Frontier Labs NEX IA+ Digital Audio Player, rev 1.10/0.01
* PR: kern/70158
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "FL" , "Nex*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* ZICPlay USB MP3 Player with FM
* PR: kern/75057
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "ACTIONS*" , "USB DISK*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* TEAC USB floppy mechanisms
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "TEAC" , "FD-05*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Kingston DataTraveler II+ USB Pen-Drive.
* Reported by: Pawel Jakub Dawidek <pjd@FreeBSD.org>
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Kingston" , "DataTraveler II+",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* USB DISK Pro PMAP
* Reported by: jhs
* PR: usb/96381
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, " ", "USB DISK Pro", "PMAP"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Motorola E398 Mobile Phone (TransFlash memory card).
* Reported by: Wojciech A. Koszek <dunstan@FreeBSD.czest.pl>
* PR: usb/89889
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Motorola" , "Motorola Phone",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Qware BeatZkey! Pro
* PR: usb/79164
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "GENERIC", "USB DISK DEVICE",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Time DPA20B 1GB MP3 Player
* PR: usb/81846
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "USB2.0*", "(FS) FLASH DISK*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Samsung USB key 128Mb
* PR: usb/90081
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "USB-DISK", "FreeDik-FlashUsb",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Kingston DataTraveler 2.0 USB Flash memory.
* PR: usb/89196
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Kingston", "DataTraveler 2.0",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Creative MUVO Slim mp3 player (USB)
* PR: usb/86131
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "CREATIVE", "MuVo Slim",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE|DA_Q_NO_PREVENT
},
{
/*
* United MP5512 Portable MP3 Player (2-in-1 USB DISK/MP3)
* PR: usb/80487
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Generic*", "MUSIC DISK",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* SanDisk Micro Cruzer 128MB
* PR: usb/75970
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "SanDisk" , "Micro Cruzer",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* TOSHIBA TransMemory USB sticks
* PR: kern/94660
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "TOSHIBA", "TransMemory",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* PNY USB 3.0 Flash Drives
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "PNY", "USB 3.0 FD*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE | DA_Q_NO_RC16
},
{
/*
* PNY USB Flash keys
* PR: usb/75578, usb/72344, usb/65436
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "*" , "USB DISK*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Genesys GL3224
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Generic*", "STORAGE DEVICE*",
"120?"}, /*quirks*/ DA_Q_NO_SYNC_CACHE | DA_Q_4K | DA_Q_NO_RC16
},
{
/*
* Genesys 6-in-1 Card Reader
* PR: usb/94647
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Generic*", "STORAGE DEVICE*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Rekam Digital CAMERA
* PR: usb/98713
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "CAMERA*", "4MP-9J6*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* iRiver H10 MP3 player
* PR: usb/102547
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "iriver", "H10*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* iRiver U10 MP3 player
* PR: usb/92306
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "iriver", "U10*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* X-Micro Flash Disk
* PR: usb/96901
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "X-Micro", "Flash Disk",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* EasyMP3 EM732X USB 2.0 Flash MP3 Player
* PR: usb/96546
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "EM732X", "MP3 Player*",
"1.00"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Denver MP3 player
* PR: usb/107101
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "DENVER", "MP3 PLAYER",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Philips USB Key Audio KEY013
* PR: usb/68412
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "PHILIPS", "Key*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE | DA_Q_NO_PREVENT
},
{
/*
* JNC MP3 Player
* PR: usb/94439
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "JNC*" , "MP3 Player*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* SAMSUNG MP0402H
* PR: usb/108427
*/
{T_DIRECT, SIP_MEDIA_FIXED, "SAMSUNG", "MP0402H", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* I/O Magic USB flash - Giga Bank
* PR: usb/108810
*/
{T_DIRECT, SIP_MEDIA_FIXED, "GS-Magic", "stor*", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* JoyFly 128mb USB Flash Drive
* PR: 96133
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "USB 2.0", "Flash Disk*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* ChipsBnk usb stick
* PR: 103702
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "ChipsBnk", "USB*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Storcase (Kingston) InfoStation IFS FC2/SATA-R 201A
* PR: 129858
*/
{T_DIRECT, SIP_MEDIA_FIXED, "IFS", "FC2/SATA-R*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Samsung YP-U3 mp3-player
* PR: 125398
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Samsung", "YP-U3",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Netac", "OnlyDisk*",
"2000"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Sony Cyber-Shot DSC cameras
* PR: usb/137035
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Sony", "Sony DSC", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE | DA_Q_NO_PREVENT
},
{
{T_DIRECT, SIP_MEDIA_REMOVABLE, "Kingston", "DataTraveler G3",
"1.00"}, /*quirks*/ DA_Q_NO_PREVENT
},
{
/* At least several Transcend USB sticks lie about RC16. */
{T_DIRECT, SIP_MEDIA_REMOVABLE, "JetFlash", "Transcend*",
"*"}, /*quirks*/ DA_Q_NO_RC16
},
{
/*
* I-O Data USB Flash Disk
* PR: usb/211716
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "I-O DATA", "USB Flash Disk*",
"*"}, /*quirks*/ DA_Q_NO_RC16
},
{
/*
* 16GB SLC CHIPFANCIER
* PR: usb/234503
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "16G SLC", "CHIPFANCIER",
"1.00"}, /*quirks*/ DA_Q_NO_RC16
},
/* ATA/SATA devices over SAS/USB/... */
{
+ /* Sandisk X400 */
+ { T_DIRECT, SIP_MEDIA_FIXED, "ATA", "SanDisk SD8SB8U1*", "*" },
+ /*quirks*/DA_Q_128KB
+ },
+ {
/* Hitachi Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "Hitachi", "H??????????E3*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Micron Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Micron 5100 MTFDDAK*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Samsung Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "SAMSUNG HD155UI*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Samsung Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "SAMSUNG", "HD155UI*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Samsung Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "SAMSUNG HD204UI*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Samsung Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "SAMSUNG", "HD204UI*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Barracuda Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST????DL*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Barracuda Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST????DL", "*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Barracuda Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST???DM*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Barracuda Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST???DM*", "*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Barracuda Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST????DM*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Barracuda Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST????DM", "*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST9500423AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST950042", "3AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST9500424AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST950042", "4AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST9640423AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST964042", "3AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST9640424AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST964042", "4AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST9750420AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST975042", "0AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST9750422AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST975042", "2AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST9750423AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST975042", "3AS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Thin Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST???LT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* Seagate Momentus Thin Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ST???LT*", "*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD????RS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "??RS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD????RX*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "??RX*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD??????RS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "????RS*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD??????RX*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Caviar Green Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "????RX*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD???PKT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "?PKT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD?????PKT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Black Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "???PKT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Blue Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD???PVT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Blue Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "?PVT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Blue Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "WDC WD?????PVT*", "*" },
/*quirks*/DA_Q_4K
},
{
/* WDC Scorpio Blue Advanced Format (4k) drives */
{ T_DIRECT, SIP_MEDIA_FIXED, "WDC WD??", "???PVT*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Olympus digital cameras (C-3040ZOOM, C-2040ZOOM, C-1)
* PR: usb/97472
*/
{ T_DIRECT, SIP_MEDIA_REMOVABLE, "OLYMPUS", "C*", "*"},
/*quirks*/ DA_Q_NO_6_BYTE | DA_Q_NO_SYNC_CACHE
},
{
/*
* Olympus digital cameras (D-370)
* PR: usb/97472
*/
{ T_DIRECT, SIP_MEDIA_REMOVABLE, "OLYMPUS", "D*", "*"},
/*quirks*/ DA_Q_NO_6_BYTE
},
{
/*
* Olympus digital cameras (E-100RS, E-10).
* PR: usb/97472
*/
{ T_DIRECT, SIP_MEDIA_REMOVABLE, "OLYMPUS", "E*", "*"},
/*quirks*/ DA_Q_NO_6_BYTE | DA_Q_NO_SYNC_CACHE
},
{
/*
* Olympus FE-210 camera
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "OLYMPUS", "FE210*",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Pentax Digital Camera
* PR: usb/93389
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "PENTAX", "DIGITAL CAMERA",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* LG UP3S MP3 player
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "LG", "UP3S",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* Laser MP3-2GA13 MP3 player
*/
{T_DIRECT, SIP_MEDIA_REMOVABLE, "USB 2.0", "(HS) Flash Disk",
"*"}, /*quirks*/ DA_Q_NO_SYNC_CACHE
},
{
/*
* LaCie external 250GB hard drive, designed by Porsche
* Submitted by: Ben Stuyts <ben@altesco.nl>
* PR: 121474
*/
{T_DIRECT, SIP_MEDIA_FIXED, "SAMSUNG", "HM250JI", "*"},
/*quirks*/ DA_Q_NO_SYNC_CACHE
},
/* SATA SSDs */
{
/*
* Corsair Force 2 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Corsair CSSD-F*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Corsair Force 3 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Corsair Force 3*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Corsair Neutron GTX SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "Corsair Neutron GTX*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Corsair Force GT & GS SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Corsair Force G*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Crucial M4 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "M4-CT???M4SSD2*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Crucial RealSSD C300 SSDs
* 4k optimised
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "C300-CTFDDAC???MAG*",
"*" }, /*quirks*/DA_Q_4K
},
{
/*
* Intel 320 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "INTEL SSDSA2CW*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Intel 330 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "INTEL SSDSC2CT*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Intel 510 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "INTEL SSDSC2MH*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Intel 520 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "INTEL SSDSC2BW*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Intel S3610 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "INTEL SSDSC2BX*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Intel X25-M Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "INTEL SSDSA2M*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Kingston E100 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "KINGSTON SE100S3*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Kingston HyperX 3k SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "KINGSTON SH103S3*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Marvell SSDs (entry taken from OpenSolaris)
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "MARVELL SD88SA02*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* OCZ Agility 2 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "*", "OCZ-AGILITY2*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* OCZ Agility 3 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "OCZ-AGILITY3*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* OCZ Deneva R Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "DENRSTE251M45*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* OCZ Vertex 2 SSDs (inc pro series)
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "OCZ?VERTEX2*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* OCZ Vertex 3 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "OCZ-VERTEX3*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* OCZ Vertex 4 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "OCZ-VERTEX4*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Samsung 750 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Samsung SSD 750*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Samsung 830 Series SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "SAMSUNG SSD 830 Series*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Samsung 840 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Samsung SSD 840*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Samsung 845 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Samsung SSD 845*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Samsung 850 SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "Samsung SSD 850*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Samsung 843T Series SSDs (MZ7WD*)
* Samsung PM851 Series SSDs (MZ7TE*)
* Samsung PM853T Series SSDs (MZ7GE*)
* Samsung SM863 Series SSDs (MZ7KM*)
* 4k optimised
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "SAMSUNG MZ7*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Same as for SAMSUNG MZ7* above, but also enable the quirks
* for SSDs that report a bare MZ7* product string.
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "MZ7*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* SuperTalent TeraDrive CT SSDs
* 4k optimised & trim only works in 4k requests + 4k aligned
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "FTM??CT25H*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* XceedIOPS SATA SSDs
* 4k optimised
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "SG9XCS2D*", "*" },
/*quirks*/DA_Q_4K
},
{
/*
* Hama Innostor USB-Stick
*/
{ T_DIRECT, SIP_MEDIA_REMOVABLE, "Innostor", "Innostor*", "*" },
/*quirks*/DA_Q_NO_RC16
},
{
/*
* Seagate Lamarr 8TB Shingled Magnetic Recording (SMR)
* Drive-Managed SATA hard drive. This drive doesn't report
* in firmware that it is a drive-managed SMR drive.
*/
{ T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST8000AS000[23]*", "*" },
/*quirks*/DA_Q_SMR_DM
},
{
/*
* MX-ES USB Drive by Mach Xtreme
*/
{ T_DIRECT, SIP_MEDIA_REMOVABLE, "MX", "MXUB3*", "*"},
/*quirks*/DA_Q_NO_RC16
},
};
static disk_strategy_t dastrategy;
static dumper_t dadump;
static periph_init_t dainit;
static void daasync(void *callback_arg, u_int32_t code,
struct cam_path *path, void *arg);
static void dasysctlinit(void *context, int pending);
static int dasysctlsofttimeout(SYSCTL_HANDLER_ARGS);
static int dacmdsizesysctl(SYSCTL_HANDLER_ARGS);
static int dadeletemethodsysctl(SYSCTL_HANDLER_ARGS);
static int dazonemodesysctl(SYSCTL_HANDLER_ARGS);
static int dazonesupsysctl(SYSCTL_HANDLER_ARGS);
static int dadeletemaxsysctl(SYSCTL_HANDLER_ARGS);
static void dadeletemethodset(struct da_softc *softc,
da_delete_methods delete_method);
static off_t dadeletemaxsize(struct da_softc *softc,
da_delete_methods delete_method);
static void dadeletemethodchoose(struct da_softc *softc,
da_delete_methods default_method);
static void daprobedone(struct cam_periph *periph, union ccb *ccb);
static periph_ctor_t daregister;
static periph_dtor_t dacleanup;
static periph_start_t dastart;
static periph_oninv_t daoninvalidate;
static void dazonedone(struct cam_periph *periph, union ccb *ccb);
static void dadone(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probewp(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_proberc(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probelbp(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probeblklimits(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probebdc(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probeata(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probeatalogdir(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probeataiddir(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probeatasup(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probeatazone(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_probezone(struct cam_periph *periph,
union ccb *done_ccb);
static void dadone_tur(struct cam_periph *periph,
union ccb *done_ccb);
static int daerror(union ccb *ccb, u_int32_t cam_flags,
u_int32_t sense_flags);
static void daprevent(struct cam_periph *periph, int action);
static void dareprobe(struct cam_periph *periph);
static void dasetgeom(struct cam_periph *periph, uint32_t block_len,
uint64_t maxsector,
struct scsi_read_capacity_data_long *rcaplong,
size_t rcap_size);
static timeout_t dasendorderedtag;
static void dashutdown(void *arg, int howto);
static timeout_t damediapoll;
#ifndef DA_DEFAULT_POLL_PERIOD
#define DA_DEFAULT_POLL_PERIOD 3
#endif
#ifndef DA_DEFAULT_TIMEOUT
#define DA_DEFAULT_TIMEOUT 60 /* Timeout in seconds */
#endif
#ifndef DA_DEFAULT_SOFTTIMEOUT
#define DA_DEFAULT_SOFTTIMEOUT 0
#endif
#ifndef DA_DEFAULT_RETRY
#define DA_DEFAULT_RETRY 4
#endif
#ifndef DA_DEFAULT_SEND_ORDERED
#define DA_DEFAULT_SEND_ORDERED 1
#endif
static int da_poll_period = DA_DEFAULT_POLL_PERIOD;
static int da_retry_count = DA_DEFAULT_RETRY;
static int da_default_timeout = DA_DEFAULT_TIMEOUT;
static sbintime_t da_default_softtimeout = DA_DEFAULT_SOFTTIMEOUT;
static int da_send_ordered = DA_DEFAULT_SEND_ORDERED;
static int da_disable_wp_detection = 0;
static SYSCTL_NODE(_kern_cam, OID_AUTO, da, CTLFLAG_RD, 0,
"CAM Direct Access Disk driver");
SYSCTL_INT(_kern_cam_da, OID_AUTO, poll_period, CTLFLAG_RWTUN,
&da_poll_period, 0, "Media polling period in seconds");
SYSCTL_INT(_kern_cam_da, OID_AUTO, retry_count, CTLFLAG_RWTUN,
&da_retry_count, 0, "Normal I/O retry count");
SYSCTL_INT(_kern_cam_da, OID_AUTO, default_timeout, CTLFLAG_RWTUN,
&da_default_timeout, 0, "Normal I/O timeout (in seconds)");
SYSCTL_INT(_kern_cam_da, OID_AUTO, send_ordered, CTLFLAG_RWTUN,
&da_send_ordered, 0, "Send Ordered Tags");
SYSCTL_INT(_kern_cam_da, OID_AUTO, disable_wp_detection, CTLFLAG_RWTUN,
&da_disable_wp_detection, 0,
"Disable detection of write-protected disks");
SYSCTL_PROC(_kern_cam_da, OID_AUTO, default_softtimeout,
CTLTYPE_UINT | CTLFLAG_RW, NULL, 0, dasysctlsofttimeout, "I",
"Soft I/O timeout (ms)");
TUNABLE_INT64("kern.cam.da.default_softtimeout", &da_default_softtimeout);
/*
* DA_ORDEREDTAG_INTERVAL determines how often, relative
* to the default timeout, we check to see whether an ordered
* tagged transaction is appropriate to prevent simple tag
* starvation. Since we'd like to ensure that there is at least
* 1/2 of the timeout length left for a starved transaction to
* complete after we've sent an ordered tag, we must poll at least
* four times in every timeout period. This takes care of the worst
* case where a starved transaction starts during an interval that
* meets the "don't send an ordered tag" test, so it takes us
* two intervals to determine that a tag must be sent.
*/
#ifndef DA_ORDEREDTAG_INTERVAL
#define DA_ORDEREDTAG_INTERVAL 4
#endif
static struct periph_driver dadriver =
{
dainit, "da",
TAILQ_HEAD_INITIALIZER(dadriver.units), /* generation */ 0
};
PERIPHDRIVER_DECLARE(da, dadriver);
static MALLOC_DEFINE(M_SCSIDA, "scsi_da", "scsi_da buffers");
/*
* This driver takes out references / holds in well-defined pairs, never
* recursively. These macros / inline functions enforce those rules. They
* are only enabled with DA_TRACK_REFS or INVARIANTS. If DA_TRACK_REFS is
* defined to be 2 or larger, the tracking also includes debug printfs.
*/
#if defined(DA_TRACK_REFS) || defined(INVARIANTS)
#ifndef DA_TRACK_REFS
#define DA_TRACK_REFS 1
#endif
#if DA_TRACK_REFS > 1
static const char *da_ref_text[] = {
"bogus",
"open",
"open hold",
"close hold",
"reprobe hold",
"Test Unit Ready",
"Geom",
"sysctl",
"reprobe",
"max -- also bogus"
};
#define DA_PERIPH_PRINT(periph, msg, args...) \
CAM_PERIPH_PRINT(periph, msg, ##args)
#else
#define DA_PERIPH_PRINT(periph, msg, args...)
#endif
static inline void
token_sanity(da_ref_token token)
{
if ((unsigned)token >= DA_REF_MAX)
panic("Bad token value passed in %d\n", token);
}
static inline int
da_periph_hold(struct cam_periph *periph, int priority, da_ref_token token)
{
int err = cam_periph_hold(periph, priority);
token_sanity(token);
DA_PERIPH_PRINT(periph, "Holding device %s (%d): %d\n",
da_ref_text[token], token, err);
if (err == 0) {
int cnt;
struct da_softc *softc = periph->softc;
cnt = atomic_fetchadd_int(&softc->ref_flags[token], 1);
if (cnt != 0)
panic("Re-holding for reason %d, cnt = %d", token, cnt);
}
return (err);
}
static inline void
da_periph_unhold(struct cam_periph *periph, da_ref_token token)
{
int cnt;
struct da_softc *softc = periph->softc;
token_sanity(token);
DA_PERIPH_PRINT(periph, "Unholding device %s (%d)\n",
da_ref_text[token], token);
cnt = atomic_fetchadd_int(&softc->ref_flags[token], -1);
if (cnt != 1)
panic("Unholding %d with cnt = %d", token, cnt);
cam_periph_unhold(periph);
}
static inline int
da_periph_acquire(struct cam_periph *periph, da_ref_token token)
{
int err = cam_periph_acquire(periph);
token_sanity(token);
DA_PERIPH_PRINT(periph, "acquiring device %s (%d): %d\n",
da_ref_text[token], token, err);
if (err == 0) {
int cnt;
struct da_softc *softc = periph->softc;
cnt = atomic_fetchadd_int(&softc->ref_flags[token], 1);
if (cnt != 0)
panic("Re-refing for reason %d, cnt = %d", token, cnt);
}
return (err);
}
static inline void
da_periph_release(struct cam_periph *periph, da_ref_token token)
{
int cnt;
struct da_softc *softc = periph->softc;
token_sanity(token);
DA_PERIPH_PRINT(periph, "releasing device %s (%d)\n",
da_ref_text[token], token);
cnt = atomic_fetchadd_int(&softc->ref_flags[token], -1);
if (cnt != 1)
panic("Releasing %d with cnt = %d", token, cnt);
cam_periph_release(periph);
}
static inline void
da_periph_release_locked(struct cam_periph *periph, da_ref_token token)
{
int cnt;
struct da_softc *softc = periph->softc;
token_sanity(token);
DA_PERIPH_PRINT(periph, "releasing device (locked) %s (%d)\n",
da_ref_text[token], token);
cnt = atomic_fetchadd_int(&softc->ref_flags[token], -1);
if (cnt != 1)
panic("Releasing (locked) %d with cnt = %d", token, cnt);
cam_periph_release_locked(periph);
}
#define cam_periph_hold POISON
#define cam_periph_unhold POISON
#define cam_periph_acquire POISON
#define cam_periph_release POISON
#define cam_periph_release_locked POISON
#else
#define da_periph_hold(periph, prio, token) cam_periph_hold((periph), (prio))
#define da_periph_unhold(periph, token) cam_periph_unhold((periph))
#define da_periph_acquire(periph, token) cam_periph_acquire((periph))
#define da_periph_release(periph, token) cam_periph_release((periph))
#define da_periph_release_locked(periph, token) cam_periph_release_locked((periph))
#endif
static int
daopen(struct disk *dp)
{
struct cam_periph *periph;
struct da_softc *softc;
int error;
periph = (struct cam_periph *)dp->d_drv1;
if (da_periph_acquire(periph, DA_REF_OPEN) != 0) {
return (ENXIO);
}
cam_periph_lock(periph);
if ((error = da_periph_hold(periph, PRIBIO|PCATCH, DA_REF_OPEN_HOLD)) != 0) {
cam_periph_unlock(periph);
da_periph_release(periph, DA_REF_OPEN);
return (error);
}
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE | CAM_DEBUG_PERIPH,
("daopen\n"));
softc = (struct da_softc *)periph->softc;
dareprobe(periph);
/* Wait for the disk size update. */
error = cam_periph_sleep(periph, &softc->disk->d_mediasize, PRIBIO,
"dareprobe", 0);
if (error != 0)
xpt_print(periph->path, "unable to retrieve capacity data\n");
if (periph->flags & CAM_PERIPH_INVALID)
error = ENXIO;
if (error == 0 && (softc->flags & DA_FLAG_PACK_REMOVABLE) != 0 &&
(softc->quirks & DA_Q_NO_PREVENT) == 0)
daprevent(periph, PR_PREVENT);
if (error == 0) {
softc->flags &= ~DA_FLAG_PACK_INVALID;
softc->flags |= DA_FLAG_OPEN;
}
da_periph_unhold(periph, DA_REF_OPEN_HOLD);
cam_periph_unlock(periph);
if (error != 0)
da_periph_release(periph, DA_REF_OPEN);
return (error);
}
static int
daclose(struct disk *dp)
{
struct cam_periph *periph;
struct da_softc *softc;
union ccb *ccb;
periph = (struct cam_periph *)dp->d_drv1;
softc = (struct da_softc *)periph->softc;
cam_periph_lock(periph);
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE | CAM_DEBUG_PERIPH,
("daclose\n"));
if (da_periph_hold(periph, PRIBIO, DA_REF_CLOSE_HOLD) == 0) {
/* Flush disk cache. */
if ((softc->flags & DA_FLAG_DIRTY) != 0 &&
(softc->quirks & DA_Q_NO_SYNC_CACHE) == 0 &&
(softc->flags & DA_FLAG_PACK_INVALID) == 0) {
ccb = cam_periph_getccb(periph, CAM_PRIORITY_NORMAL);
scsi_synchronize_cache(&ccb->csio, /*retries*/1,
/*cbfcnp*/NULL, MSG_SIMPLE_Q_TAG,
/*begin_lba*/0, /*lb_count*/0, SSD_FULL_SIZE,
5 * 60 * 1000);
cam_periph_runccb(ccb, daerror, /*cam_flags*/0,
/*sense_flags*/SF_RETRY_UA | SF_QUIET_IR,
softc->disk->d_devstat);
softc->flags &= ~DA_FLAG_DIRTY;
xpt_release_ccb(ccb);
}
/* Allow medium removal. */
if ((softc->flags & DA_FLAG_PACK_REMOVABLE) != 0 &&
(softc->quirks & DA_Q_NO_PREVENT) == 0)
daprevent(periph, PR_ALLOW);
da_periph_unhold(periph, DA_REF_CLOSE_HOLD);
}
/*
* If we've got removable media, mark the blocksize as
* unavailable, since it could change when new media is
* inserted.
*/
if ((softc->flags & DA_FLAG_PACK_REMOVABLE) != 0)
softc->disk->d_devstat->flags |= DEVSTAT_BS_UNAVAILABLE;
softc->flags &= ~DA_FLAG_OPEN;
while (softc->refcount != 0)
cam_periph_sleep(periph, &softc->refcount, PRIBIO, "daclose", 1);
cam_periph_unlock(periph);
da_periph_release(periph, DA_REF_OPEN);
return (0);
}
static void
daschedule(struct cam_periph *periph)
{
struct da_softc *softc = (struct da_softc *)periph->softc;
if (softc->state != DA_STATE_NORMAL)
return;
cam_iosched_schedule(softc->cam_iosched, periph);
}
/*
* Actually translate the requested transfer into one the physical driver
* can understand. The transfer is described by a buf and will include
* only one physical transfer.
*/
static void
dastrategy(struct bio *bp)
{
struct cam_periph *periph;
struct da_softc *softc;
periph = (struct cam_periph *)bp->bio_disk->d_drv1;
softc = (struct da_softc *)periph->softc;
cam_periph_lock(periph);
/*
* If the device has been made invalid, error out
*/
if ((softc->flags & DA_FLAG_PACK_INVALID) != 0) {
cam_periph_unlock(periph);
biofinish(bp, NULL, ENXIO);
return;
}
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dastrategy(%p)\n", bp));
/*
* Zone commands must be ordered, because they can depend on the
* effects of previously issued commands, and they may affect
* commands after them.
*/
if (bp->bio_cmd == BIO_ZONE)
bp->bio_flags |= BIO_ORDERED;
/*
* Place it in the queue of disk activities for this disk
*/
cam_iosched_queue_work(softc->cam_iosched, bp);
/*
* Schedule ourselves for performing the work.
*/
daschedule(periph);
cam_periph_unlock(periph);
return;
}
static int
dadump(void *arg, void *virtual, vm_offset_t physical, off_t offset, size_t length)
{
struct cam_periph *periph;
struct da_softc *softc;
u_int secsize;
struct ccb_scsiio csio;
struct disk *dp;
int error = 0;
dp = arg;
periph = dp->d_drv1;
softc = (struct da_softc *)periph->softc;
secsize = softc->params.secsize;
if ((softc->flags & DA_FLAG_PACK_INVALID) != 0)
return (ENXIO);
memset(&csio, 0, sizeof(csio));
if (length > 0) {
xpt_setup_ccb(&csio.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
csio.ccb_h.ccb_state = DA_CCB_DUMP;
scsi_read_write(&csio,
/*retries*/0,
/*cbfcnp*/NULL,
MSG_ORDERED_Q_TAG,
/*read*/SCSI_RW_WRITE,
/*byte2*/0,
/*minimum_cmd_size*/ softc->minimum_cmd_size,
offset / secsize,
length / secsize,
/*data_ptr*/(u_int8_t *) virtual,
/*dxfer_len*/length,
/*sense_len*/SSD_FULL_SIZE,
da_default_timeout * 1000);
error = cam_periph_runccb((union ccb *)&csio, cam_periph_error,
0, SF_NO_RECOVERY | SF_NO_RETRY, NULL);
if (error != 0)
printf("Aborting dump due to I/O error.\n");
return (error);
}
/*
* Sync the disk cache contents to the physical media.
*/
if ((softc->quirks & DA_Q_NO_SYNC_CACHE) == 0) {
xpt_setup_ccb(&csio.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
csio.ccb_h.ccb_state = DA_CCB_DUMP;
scsi_synchronize_cache(&csio,
/*retries*/0,
/*cbfcnp*/NULL,
MSG_SIMPLE_Q_TAG,
/*begin_lba*/0,/* Cover the whole disk */
/*lb_count*/0,
SSD_FULL_SIZE,
5 * 1000);
error = cam_periph_runccb((union ccb *)&csio, cam_periph_error,
0, SF_NO_RECOVERY | SF_NO_RETRY, NULL);
if (error != 0)
xpt_print(periph->path, "Synchronize cache failed\n");
}
return (error);
}
static int
dagetattr(struct bio *bp)
{
int ret;
struct cam_periph *periph;
periph = (struct cam_periph *)bp->bio_disk->d_drv1;
cam_periph_lock(periph);
ret = xpt_getattr(bp->bio_data, bp->bio_length, bp->bio_attribute,
periph->path);
cam_periph_unlock(periph);
if (ret == 0)
bp->bio_completed = bp->bio_length;
return (ret);
}
static void
dainit(void)
{
cam_status status;
/*
* Install a global async callback. This callback will
* receive async callbacks like "new device found".
*/
status = xpt_register_async(AC_FOUND_DEVICE, daasync, NULL, NULL);
if (status != CAM_REQ_CMP) {
printf("da: Failed to attach master async callback "
"due to status 0x%x!\n", status);
} else if (da_send_ordered) {
/* Register our shutdown event handler */
if ((EVENTHANDLER_REGISTER(shutdown_post_sync, dashutdown,
NULL, SHUTDOWN_PRI_DEFAULT)) == NULL)
printf("dainit: shutdown event registration failed!\n");
}
}
/*
* Callback from GEOM, called when it has finished cleaning up its
* resources.
*/
static void
dadiskgonecb(struct disk *dp)
{
struct cam_periph *periph;
periph = (struct cam_periph *)dp->d_drv1;
da_periph_release(periph, DA_REF_GEOM);
}
static void
daoninvalidate(struct cam_periph *periph)
{
struct da_softc *softc;
cam_periph_assert(periph, MA_OWNED);
softc = (struct da_softc *)periph->softc;
/*
* De-register any async callbacks.
*/
xpt_register_async(0, daasync, periph, periph->path);
softc->flags |= DA_FLAG_PACK_INVALID;
#ifdef CAM_IO_STATS
softc->invalidations++;
#endif
/*
* Return all queued I/O with ENXIO.
* XXX Handle any transactions queued to the card
* with XPT_ABORT_CCB.
*/
cam_iosched_flush(softc->cam_iosched, NULL, ENXIO);
/*
* Tell GEOM that we've gone away, we'll get a callback when it is
* done cleaning up its resources.
*/
disk_gone(softc->disk);
}
static void
dacleanup(struct cam_periph *periph)
{
struct da_softc *softc;
softc = (struct da_softc *)periph->softc;
cam_periph_unlock(periph);
cam_iosched_fini(softc->cam_iosched);
/*
* If we can't free the sysctl tree, oh well...
*/
if ((softc->flags & DA_FLAG_SCTX_INIT) != 0) {
#ifdef CAM_IO_STATS
if (sysctl_ctx_free(&softc->sysctl_stats_ctx) != 0)
xpt_print(periph->path,
"can't remove sysctl stats context\n");
#endif
if (sysctl_ctx_free(&softc->sysctl_ctx) != 0)
xpt_print(periph->path,
"can't remove sysctl context\n");
}
callout_drain(&softc->mediapoll_c);
disk_destroy(softc->disk);
callout_drain(&softc->sendordered_c);
free(softc, M_DEVBUF);
cam_periph_lock(periph);
}
static void
daasync(void *callback_arg, u_int32_t code,
struct cam_path *path, void *arg)
{
struct cam_periph *periph;
struct da_softc *softc;
periph = (struct cam_periph *)callback_arg;
switch (code) {
case AC_FOUND_DEVICE: /* callback to create periph, no locking yet */
{
struct ccb_getdev *cgd;
cam_status status;
cgd = (struct ccb_getdev *)arg;
if (cgd == NULL)
break;
if (cgd->protocol != PROTO_SCSI)
break;
if (SID_QUAL(&cgd->inq_data) != SID_QUAL_LU_CONNECTED)
break;
if (SID_TYPE(&cgd->inq_data) != T_DIRECT
&& SID_TYPE(&cgd->inq_data) != T_RBC
&& SID_TYPE(&cgd->inq_data) != T_OPTICAL
&& SID_TYPE(&cgd->inq_data) != T_ZBC_HM)
break;
/*
* Allocate a peripheral instance for
* this device and start the probe
* process.
*/
status = cam_periph_alloc(daregister, daoninvalidate,
dacleanup, dastart,
"da", CAM_PERIPH_BIO,
path, daasync,
AC_FOUND_DEVICE, cgd);
if (status != CAM_REQ_CMP
&& status != CAM_REQ_INPROG)
printf("daasync: Unable to attach to new device "
"due to status 0x%x\n", status);
return;
}
case AC_ADVINFO_CHANGED: /* Doesn't touch periph */
{
uintptr_t buftype;
buftype = (uintptr_t)arg;
if (buftype == CDAI_TYPE_PHYS_PATH) {
struct da_softc *softc;
softc = periph->softc;
disk_attr_changed(softc->disk, "GEOM::physpath",
M_NOWAIT);
}
break;
}
case AC_UNIT_ATTENTION:
{
union ccb *ccb;
int error_code, sense_key, asc, ascq;
softc = (struct da_softc *)periph->softc;
ccb = (union ccb *)arg;
/*
* Handle all UNIT ATTENTIONs except our own, as they will be
* handled by daerror(). Since this comes from a different
* periph, that periph's lock is held, not ours, so we have to
* take ours out to touch the softc flags.
*/
if (xpt_path_periph(ccb->ccb_h.path) != periph &&
scsi_extract_sense_ccb(ccb,
&error_code, &sense_key, &asc, &ascq)) {
if (asc == 0x2A && ascq == 0x09) {
xpt_print(ccb->ccb_h.path,
"Capacity data has changed\n");
cam_periph_lock(periph);
softc->flags &= ~DA_FLAG_PROBED;
cam_periph_unlock(periph);
dareprobe(periph);
} else if (asc == 0x28 && ascq == 0x00) {
cam_periph_lock(periph);
softc->flags &= ~DA_FLAG_PROBED;
cam_periph_unlock(periph);
disk_media_changed(softc->disk, M_NOWAIT);
} else if (asc == 0x3F && ascq == 0x03) {
xpt_print(ccb->ccb_h.path,
"INQUIRY data has changed\n");
cam_periph_lock(periph);
softc->flags &= ~DA_FLAG_PROBED;
cam_periph_unlock(periph);
dareprobe(periph);
}
}
break;
}
case AC_SCSI_AEN: /* Called for this path: periph locked */
/*
* Appears to be currently unused for SCSI devices; only ATA SIMs
* generate this.
*/
cam_periph_assert(periph, MA_OWNED);
softc = (struct da_softc *)periph->softc;
if (!cam_iosched_has_work_flags(softc->cam_iosched, DA_WORK_TUR) &&
(softc->flags & DA_FLAG_TUR_PENDING) == 0) {
if (da_periph_acquire(periph, DA_REF_TUR) == 0) {
cam_iosched_set_work_flags(softc->cam_iosched, DA_WORK_TUR);
daschedule(periph);
}
}
/* FALLTHROUGH */
case AC_SENT_BDR: /* Called for this path: periph locked */
case AC_BUS_RESET: /* Called for this path: periph locked */
{
struct ccb_hdr *ccbh;
cam_periph_assert(periph, MA_OWNED);
softc = (struct da_softc *)periph->softc;
/*
* Don't fail on the expected unit attention
* that will occur.
*/
softc->flags |= DA_FLAG_RETRY_UA;
LIST_FOREACH(ccbh, &softc->pending_ccbs, periph_links.le)
ccbh->ccb_state |= DA_CCB_RETRY_UA;
break;
}
case AC_INQ_CHANGED: /* Called for this path: periph locked */
cam_periph_assert(periph, MA_OWNED);
softc = (struct da_softc *)periph->softc;
softc->flags &= ~DA_FLAG_PROBED;
dareprobe(periph);
break;
default:
break;
}
cam_periph_async(periph, code, path, arg);
}
static void
dasysctlinit(void *context, int pending)
{
struct cam_periph *periph;
struct da_softc *softc;
char tmpstr[32], tmpstr2[16];
struct ccb_trans_settings cts;
periph = (struct cam_periph *)context;
/*
* periph was held for us when this task was enqueued
*/
if (periph->flags & CAM_PERIPH_INVALID) {
da_periph_release(periph, DA_REF_SYSCTL);
return;
}
softc = (struct da_softc *)periph->softc;
snprintf(tmpstr, sizeof(tmpstr), "CAM DA unit %d", periph->unit_number);
snprintf(tmpstr2, sizeof(tmpstr2), "%d", periph->unit_number);
sysctl_ctx_init(&softc->sysctl_ctx);
cam_periph_lock(periph);
softc->flags |= DA_FLAG_SCTX_INIT;
cam_periph_unlock(periph);
softc->sysctl_tree = SYSCTL_ADD_NODE_WITH_LABEL(&softc->sysctl_ctx,
SYSCTL_STATIC_CHILDREN(_kern_cam_da), OID_AUTO, tmpstr2,
CTLFLAG_RD, 0, tmpstr, "device_index");
if (softc->sysctl_tree == NULL) {
printf("dasysctlinit: unable to allocate sysctl tree\n");
da_periph_release(periph, DA_REF_SYSCTL);
return;
}
/*
* Now register the sysctl handler, so the user can change the value on
* the fly.
*/
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "delete_method", CTLTYPE_STRING | CTLFLAG_RWTUN,
softc, 0, dadeletemethodsysctl, "A",
"BIO_DELETE execution method");
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "delete_max", CTLTYPE_U64 | CTLFLAG_RW,
softc, 0, dadeletemaxsysctl, "Q",
"Maximum BIO_DELETE size");
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "minimum_cmd_size", CTLTYPE_INT | CTLFLAG_RW,
&softc->minimum_cmd_size, 0, dacmdsizesysctl, "I",
"Minimum CDB size");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"trim_count", CTLFLAG_RD, &softc->trim_count,
"Total number of unmap/dsm commands sent");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"trim_ranges", CTLFLAG_RD, &softc->trim_ranges,
"Total number of ranges in unmap/dsm commands");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"trim_lbas", CTLFLAG_RD, &softc->trim_lbas,
"Total lbas in the unmap/dsm commands sent");
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "zone_mode", CTLTYPE_STRING | CTLFLAG_RD,
softc, 0, dazonemodesysctl, "A",
"Zone Mode");
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "zone_support", CTLTYPE_STRING | CTLFLAG_RD,
softc, 0, dazonesupsysctl, "A",
"Zone Support");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"optimal_seq_zones", CTLFLAG_RD, &softc->optimal_seq_zones,
"Optimal Number of Open Sequential Write Preferred Zones");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"optimal_nonseq_zones", CTLFLAG_RD,
&softc->optimal_nonseq_zones,
"Optimal Number of Non-Sequentially Written Sequential Write "
"Preferred Zones");
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO,
"max_seq_zones", CTLFLAG_RD, &softc->max_seq_zones,
"Maximum Number of Open Sequential Write Required Zones");
SYSCTL_ADD_INT(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO,
"error_inject",
CTLFLAG_RW,
&softc->error_inject,
0,
"error_inject leaf");
SYSCTL_ADD_INT(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO,
"unmapped_io",
CTLFLAG_RD,
&softc->unmappedio,
0,
"Unmapped I/O leaf");
SYSCTL_ADD_INT(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO,
"rotating",
CTLFLAG_RD,
&softc->rotating,
0,
"Rotating media");
#ifdef CAM_TEST_FAILURE
SYSCTL_ADD_PROC(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "invalidate", CTLTYPE_U64 | CTLFLAG_RW | CTLFLAG_MPSAFE,
periph, 0, cam_periph_invalidate_sysctl, "I",
"Write 1 to invalidate the drive immediately");
#endif
/*
* Add some addressing info.
*/
memset(&cts, 0, sizeof (cts));
xpt_setup_ccb(&cts.ccb_h, periph->path, CAM_PRIORITY_NONE);
cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
cts.type = CTS_TYPE_CURRENT_SETTINGS;
cam_periph_lock(periph);
xpt_action((union ccb *)&cts);
cam_periph_unlock(periph);
if (cts.ccb_h.status != CAM_REQ_CMP) {
da_periph_release(periph, DA_REF_SYSCTL);
return;
}
if (cts.protocol == PROTO_SCSI && cts.transport == XPORT_FC) {
struct ccb_trans_settings_fc *fc = &cts.xport_specific.fc;
if (fc->valid & CTS_FC_VALID_WWPN) {
softc->wwpn = fc->wwpn;
SYSCTL_ADD_UQUAD(&softc->sysctl_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree),
OID_AUTO, "wwpn", CTLFLAG_RD,
&softc->wwpn, "World Wide Port Name");
}
}
#ifdef CAM_IO_STATS
/*
* Now add some useful stats.
* XXX These should live in cam_periph and be common to all periphs
*/
softc->sysctl_stats_tree = SYSCTL_ADD_NODE(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO, "stats",
CTLFLAG_RD, 0, "Statistics");
SYSCTL_ADD_INT(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_stats_tree),
OID_AUTO,
"errors",
CTLFLAG_RD,
&softc->errors,
0,
"Transport errors reported by the SIM");
SYSCTL_ADD_INT(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_stats_tree),
OID_AUTO,
"timeouts",
CTLFLAG_RD,
&softc->timeouts,
0,
"Device timeouts reported by the SIM");
SYSCTL_ADD_INT(&softc->sysctl_stats_ctx,
SYSCTL_CHILDREN(softc->sysctl_stats_tree),
OID_AUTO,
"pack_invalidations",
CTLFLAG_RD,
&softc->invalidations,
0,
"Device pack invalidations");
#endif
cam_iosched_sysctl_init(softc->cam_iosched, &softc->sysctl_ctx,
softc->sysctl_tree);
da_periph_release(periph, DA_REF_SYSCTL);
}
static int
dadeletemaxsysctl(SYSCTL_HANDLER_ARGS)
{
int error;
uint64_t value;
struct da_softc *softc;
softc = (struct da_softc *)arg1;
value = softc->disk->d_delmaxsize;
error = sysctl_handle_64(oidp, &value, 0, req);
if ((error != 0) || (req->newptr == NULL))
return (error);
/* Only accept values no larger than the calculated maximum. */
if (value > dadeletemaxsize(softc, softc->delete_method)) {
return (EINVAL);
}
softc->disk->d_delmaxsize = value;
return (0);
}
static int
dacmdsizesysctl(SYSCTL_HANDLER_ARGS)
{
int error, value;
value = *(int *)arg1;
error = sysctl_handle_int(oidp, &value, 0, req);
if ((error != 0)
|| (req->newptr == NULL))
return (error);
/*
* Acceptable values here are 6, 10, 12 or 16.
*/
if (value < 6)
value = 6;
else if ((value > 6)
&& (value <= 10))
value = 10;
else if ((value > 10)
&& (value <= 12))
value = 12;
else if (value > 12)
value = 16;
*(int *)arg1 = value;
return (0);
}
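The handler above bumps any written value up to the next valid CDB size (6, 10, 12, or 16 bytes). For reference, that rounding rule can be sketched in isolation; `round_cdb_size` is a hypothetical stand-alone helper, not part of the driver:

```c
#include <assert.h>

/*
 * Round an arbitrary requested CDB size up to the nearest size the
 * driver accepts: 6, 10, 12 or 16 bytes. Mirrors the clamping logic
 * in dacmdsizesysctl(); the helper name is illustrative only.
 */
static int
round_cdb_size(int value)
{
	if (value <= 6)
		return (6);
	else if (value <= 10)
		return (10);
	else if (value <= 12)
		return (12);
	else
		return (16);
}
```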
static int
dasysctlsofttimeout(SYSCTL_HANDLER_ARGS)
{
sbintime_t value;
int error;
value = da_default_softtimeout / SBT_1MS;
error = sysctl_handle_int(oidp, (int *)&value, 0, req);
if ((error != 0) || (req->newptr == NULL))
return (error);
/* XXX Should clip this to a reasonable level */
if (value > da_default_timeout * 1000)
return (EINVAL);
da_default_softtimeout = value * SBT_1MS;
return (0);
}
static void
dadeletemethodset(struct da_softc *softc, da_delete_methods delete_method)
{
softc->delete_method = delete_method;
softc->disk->d_delmaxsize = dadeletemaxsize(softc, delete_method);
softc->delete_func = da_delete_functions[delete_method];
if (softc->delete_method > DA_DELETE_DISABLE)
softc->disk->d_flags |= DISKFLAG_CANDELETE;
else
softc->disk->d_flags &= ~DISKFLAG_CANDELETE;
}
static off_t
dadeletemaxsize(struct da_softc *softc, da_delete_methods delete_method)
{
off_t sectors;
switch(delete_method) {
case DA_DELETE_UNMAP:
sectors = (off_t)softc->unmap_max_lba;
break;
case DA_DELETE_ATA_TRIM:
sectors = (off_t)ATA_DSM_RANGE_MAX * softc->trim_max_ranges;
break;
case DA_DELETE_WS16:
sectors = omin(softc->ws_max_blks, WS16_MAX_BLKS);
break;
case DA_DELETE_ZERO:
case DA_DELETE_WS10:
sectors = omin(softc->ws_max_blks, WS10_MAX_BLKS);
break;
default:
return 0;
}
return (off_t)softc->params.secsize *
omin(sectors, softc->params.sectors);
}
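The return value above is the method-specific sector limit, clamped to the device's sector count, times the sector size. A minimal stand-alone sketch of that arithmetic (function and parameter names hypothetical, using `int64_t` in place of the kernel's `off_t`):

```c
#include <assert.h>
#include <stdint.h>

/* Smaller of two signed 64-bit values, as omin() does in the kernel. */
static int64_t
omin64(int64_t a, int64_t b)
{
	return (a < b ? a : b);
}

/*
 * Maximum delete request in bytes: the delete method's sector limit,
 * bounded by the device's total sector count, times the sector size.
 * Illustrative only; mirrors the final computation in dadeletemaxsize().
 */
static int64_t
delete_max_bytes(int64_t method_sectors, int64_t dev_sectors,
    int64_t secsize)
{
	return (secsize * omin64(method_sectors, dev_sectors));
}
```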
static void
daprobedone(struct cam_periph *periph, union ccb *ccb)
{
struct da_softc *softc;
softc = (struct da_softc *)periph->softc;
cam_periph_assert(periph, MA_OWNED);
dadeletemethodchoose(softc, DA_DELETE_NONE);
if (bootverbose && (softc->flags & DA_FLAG_ANNOUNCED) == 0) {
char buf[80];
int i, sep;
snprintf(buf, sizeof(buf), "Delete methods: <");
sep = 0;
for (i = 0; i <= DA_DELETE_MAX; i++) {
if ((softc->delete_available & (1 << i)) == 0 &&
i != softc->delete_method)
continue;
if (sep)
strlcat(buf, ",", sizeof(buf));
strlcat(buf, da_delete_method_names[i],
sizeof(buf));
if (i == softc->delete_method)
strlcat(buf, "(*)", sizeof(buf));
sep = 1;
}
strlcat(buf, ">", sizeof(buf));
printf("%s%d: %s\n", periph->periph_name,
periph->unit_number, buf);
}
if ((softc->disk->d_flags & DISKFLAG_WRITE_PROTECT) != 0 &&
(softc->flags & DA_FLAG_ANNOUNCED) == 0) {
printf("%s%d: Write Protected\n", periph->periph_name,
periph->unit_number);
}
/*
* Since our peripheral may be invalidated by an error
* above or an external event, we must release our CCB
* before releasing the probe lock on the peripheral.
* The peripheral will only go away once the last lock
* is removed, and we need it around for the CCB release
* operation.
*/
xpt_release_ccb(ccb);
softc->state = DA_STATE_NORMAL;
softc->flags |= DA_FLAG_PROBED;
daschedule(periph);
wakeup(&softc->disk->d_mediasize);
if ((softc->flags & DA_FLAG_ANNOUNCED) == 0) {
softc->flags |= DA_FLAG_ANNOUNCED;
da_periph_unhold(periph, DA_REF_PROBE_HOLD);
} else
da_periph_release_locked(periph, DA_REF_REPROBE);
}
static void
dadeletemethodchoose(struct da_softc *softc, da_delete_methods default_method)
{
int i, methods;
/* If available, prefer the method requested by the user. */
i = softc->delete_method_pref;
methods = softc->delete_available | (1 << DA_DELETE_DISABLE);
if (methods & (1 << i)) {
dadeletemethodset(softc, i);
return;
}
/* Use the pre-defined order to choose the best performing delete. */
for (i = DA_DELETE_MIN; i <= DA_DELETE_MAX; i++) {
if (i == DA_DELETE_ZERO)
continue;
if (softc->delete_available & (1 << i)) {
dadeletemethodset(softc, i);
return;
}
}
/* Fallback to default. */
dadeletemethodset(softc, default_method);
}
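The selection logic above treats the available methods as a bitmask: honor the user preference if its bit is set, otherwise scan methods in the pre-defined order (skipping ZERO), and fall back to the supplied default. A sketch under placeholder constants (these are not the driver's real enum values):

```c
#include <assert.h>

/* Hypothetical method indices standing in for da_delete_methods. */
#define METHOD_DISABLE	0
#define METHOD_MIN	1
#define METHOD_ZERO	2
#define METHOD_MAX	4

/*
 * Pick a delete method: use the preferred method if its bit is set in
 * the availability mask, otherwise take the first available method in
 * order, skipping ZERO, and finally fall back to the default. Mirrors
 * the shape of dadeletemethodchoose().
 */
static int
choose_method(int pref, int avail, int dflt)
{
	int i;

	if (avail & (1 << pref))
		return (pref);
	for (i = METHOD_MIN; i <= METHOD_MAX; i++) {
		if (i == METHOD_ZERO)
			continue;
		if (avail & (1 << i))
			return (i);
	}
	return (dflt);
}
```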
static int
dadeletemethodsysctl(SYSCTL_HANDLER_ARGS)
{
char buf[16];
const char *p;
struct da_softc *softc;
int i, error, value;
softc = (struct da_softc *)arg1;
value = softc->delete_method;
if (value < 0 || value > DA_DELETE_MAX)
p = "UNKNOWN";
else
p = da_delete_method_names[value];
strncpy(buf, p, sizeof(buf));
error = sysctl_handle_string(oidp, buf, sizeof(buf), req);
if (error != 0 || req->newptr == NULL)
return (error);
for (i = 0; i <= DA_DELETE_MAX; i++) {
if (strcmp(buf, da_delete_method_names[i]) == 0)
break;
}
if (i > DA_DELETE_MAX)
return (EINVAL);
softc->delete_method_pref = i;
dadeletemethodchoose(softc, DA_DELETE_NONE);
return (0);
}
static int
dazonemodesysctl(SYSCTL_HANDLER_ARGS)
{
char tmpbuf[40];
struct da_softc *softc;
int error;
softc = (struct da_softc *)arg1;
switch (softc->zone_mode) {
case DA_ZONE_DRIVE_MANAGED:
snprintf(tmpbuf, sizeof(tmpbuf), "Drive Managed");
break;
case DA_ZONE_HOST_AWARE:
snprintf(tmpbuf, sizeof(tmpbuf), "Host Aware");
break;
case DA_ZONE_HOST_MANAGED:
snprintf(tmpbuf, sizeof(tmpbuf), "Host Managed");
break;
case DA_ZONE_NONE:
default:
snprintf(tmpbuf, sizeof(tmpbuf), "Not Zoned");
break;
}
error = sysctl_handle_string(oidp, tmpbuf, sizeof(tmpbuf), req);
return (error);
}
static int
dazonesupsysctl(SYSCTL_HANDLER_ARGS)
{
char tmpbuf[180];
struct da_softc *softc;
struct sbuf sb;
int error, first;
unsigned int i;
softc = (struct da_softc *)arg1;
error = 0;
first = 1;
sbuf_new(&sb, tmpbuf, sizeof(tmpbuf), 0);
for (i = 0; i < sizeof(da_zone_desc_table) /
sizeof(da_zone_desc_table[0]); i++) {
if (softc->zone_flags & da_zone_desc_table[i].value) {
if (first == 0)
sbuf_printf(&sb, ", ");
else
first = 0;
sbuf_cat(&sb, da_zone_desc_table[i].desc);
}
}
if (first == 1)
sbuf_printf(&sb, "None");
sbuf_finish(&sb);
error = sysctl_handle_string(oidp, sbuf_data(&sb), sbuf_len(&sb), req);
return (error);
}
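The handler above walks a descriptor table and joins the descriptions of all set flags with ", ", emitting "None" when nothing matches. The same shape can be shown with plain `snprintf` instead of the kernel's sbuf(9); the struct and function names here are hypothetical:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Value/description pair, standing in for the driver's zone table. */
struct flag_desc {
	unsigned	value;
	const char	*desc;
};

/*
 * Join the descriptions of all flags set in 'flags' with ", ",
 * writing "None" when no flag matches. Mirrors dazonesupsysctl()
 * but uses snprintf rather than sbuf(9).
 */
static void
join_flags(unsigned flags, const struct flag_desc *tbl, size_t n,
    char *buf, size_t buflen)
{
	size_t i, len = 0;
	int first = 1;

	buf[0] = '\0';
	for (i = 0; i < n; i++) {
		if ((flags & tbl[i].value) == 0)
			continue;
		len += snprintf(buf + len, buflen - len, "%s%s",
		    first ? "" : ", ", tbl[i].desc);
		first = 0;
	}
	if (first)
		snprintf(buf, buflen, "None");
}
```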
static cam_status
daregister(struct cam_periph *periph, void *arg)
{
struct da_softc *softc;
struct ccb_pathinq cpi;
struct ccb_getdev *cgd;
char tmpstr[80];
caddr_t match;
cgd = (struct ccb_getdev *)arg;
if (cgd == NULL) {
printf("daregister: no getdev CCB, can't register device\n");
return(CAM_REQ_CMP_ERR);
}
softc = (struct da_softc *)malloc(sizeof(*softc), M_DEVBUF,
M_NOWAIT|M_ZERO);
if (softc == NULL) {
printf("daregister: Unable to probe new device. "
"Unable to allocate softc\n");
return(CAM_REQ_CMP_ERR);
}
if (cam_iosched_init(&softc->cam_iosched, periph) != 0) {
printf("daregister: Unable to probe new device. "
"Unable to allocate iosched memory\n");
free(softc, M_DEVBUF);
return(CAM_REQ_CMP_ERR);
}
LIST_INIT(&softc->pending_ccbs);
softc->state = DA_STATE_PROBE_WP;
bioq_init(&softc->delete_run_queue);
if (SID_IS_REMOVABLE(&cgd->inq_data))
softc->flags |= DA_FLAG_PACK_REMOVABLE;
softc->unmap_max_ranges = UNMAP_MAX_RANGES;
softc->unmap_max_lba = UNMAP_RANGE_MAX;
softc->unmap_gran = 0;
softc->unmap_gran_align = 0;
softc->ws_max_blks = WS16_MAX_BLKS;
softc->trim_max_ranges = ATA_TRIM_MAX_RANGES;
softc->rotating = 1;
periph->softc = softc;
/*
* See if this device has any quirks.
*/
match = cam_quirkmatch((caddr_t)&cgd->inq_data,
(caddr_t)da_quirk_table,
nitems(da_quirk_table),
sizeof(*da_quirk_table), scsi_inquiry_match);
if (match != NULL)
softc->quirks = ((struct da_quirk_entry *)match)->quirks;
else
softc->quirks = DA_Q_NONE;
/* Check if the SIM does not want 6 byte commands */
xpt_path_inq(&cpi, periph->path);
if (cpi.ccb_h.status == CAM_REQ_CMP && (cpi.hba_misc & PIM_NO_6_BYTE))
softc->quirks |= DA_Q_NO_6_BYTE;
if (SID_TYPE(&cgd->inq_data) == T_ZBC_HM)
softc->zone_mode = DA_ZONE_HOST_MANAGED;
else if (softc->quirks & DA_Q_SMR_DM)
softc->zone_mode = DA_ZONE_DRIVE_MANAGED;
else
softc->zone_mode = DA_ZONE_NONE;
if (softc->zone_mode != DA_ZONE_NONE) {
if (scsi_vpd_supported_page(periph, SVPD_ATA_INFORMATION)) {
if (scsi_vpd_supported_page(periph, SVPD_ZONED_BDC))
softc->zone_interface = DA_ZONE_IF_ATA_SAT;
else
softc->zone_interface = DA_ZONE_IF_ATA_PASS;
} else
softc->zone_interface = DA_ZONE_IF_SCSI;
}
TASK_INIT(&softc->sysctl_task, 0, dasysctlinit, periph);
/*
* Take an exclusive section lock on the periph while dastart is called
* to finish the probe. The lock will be dropped in dadone at the end
* of probe. This locks out daopen and daclose from racing with the
* probe.
*
* XXX if cam_periph_hold returns an error, we don't hold a refcount.
*/
(void)da_periph_hold(periph, PRIBIO, DA_REF_PROBE_HOLD);
/*
* Schedule a periodic event to occasionally send an
* ordered tag to a device.
*/
callout_init_mtx(&softc->sendordered_c, cam_periph_mtx(periph), 0);
callout_reset(&softc->sendordered_c,
(da_default_timeout * hz) / DA_ORDEREDTAG_INTERVAL,
dasendorderedtag, periph);
cam_periph_unlock(periph);
/*
* RBC devices don't have to support READ(6), only READ(10).
*/
if (softc->quirks & DA_Q_NO_6_BYTE || SID_TYPE(&cgd->inq_data) == T_RBC)
softc->minimum_cmd_size = 10;
else
softc->minimum_cmd_size = 6;
/*
* Load the user's default, if any.
*/
snprintf(tmpstr, sizeof(tmpstr), "kern.cam.da.%d.minimum_cmd_size",
periph->unit_number);
TUNABLE_INT_FETCH(tmpstr, &softc->minimum_cmd_size);
/*
* 6, 10, 12 and 16 are the currently permissible values.
*/
if (softc->minimum_cmd_size > 12)
softc->minimum_cmd_size = 16;
else if (softc->minimum_cmd_size > 10)
softc->minimum_cmd_size = 12;
else if (softc->minimum_cmd_size > 6)
softc->minimum_cmd_size = 10;
else
softc->minimum_cmd_size = 6;
/* Predict whether device may support READ CAPACITY(16). */
if (SID_ANSI_REV(&cgd->inq_data) >= SCSI_REV_SPC3 &&
(softc->quirks & DA_Q_NO_RC16) == 0) {
softc->flags |= DA_FLAG_CAN_RC16;
}
/*
* Register this media as a disk.
*/
softc->disk = disk_alloc();
softc->disk->d_devstat = devstat_new_entry(periph->periph_name,
periph->unit_number, 0,
DEVSTAT_BS_UNAVAILABLE,
SID_TYPE(&cgd->inq_data) |
XPORT_DEVSTAT_TYPE(cpi.transport),
DEVSTAT_PRIORITY_DISK);
softc->disk->d_open = daopen;
softc->disk->d_close = daclose;
softc->disk->d_strategy = dastrategy;
softc->disk->d_dump = dadump;
softc->disk->d_getattr = dagetattr;
softc->disk->d_gone = dadiskgonecb;
softc->disk->d_name = "da";
softc->disk->d_drv1 = periph;
if (cpi.maxio == 0)
softc->maxio = DFLTPHYS; /* traditional default */
else if (cpi.maxio > MAXPHYS)
softc->maxio = MAXPHYS; /* for safety */
else
softc->maxio = cpi.maxio;
+ if (softc->quirks & DA_Q_128KB)
+ softc->maxio = min(softc->maxio, 128 * 1024);
softc->disk->d_maxsize = softc->maxio;
softc->disk->d_unit = periph->unit_number;
softc->disk->d_flags = DISKFLAG_DIRECT_COMPLETION | DISKFLAG_CANZONE;
if ((softc->quirks & DA_Q_NO_SYNC_CACHE) == 0)
softc->disk->d_flags |= DISKFLAG_CANFLUSHCACHE;
if ((cpi.hba_misc & PIM_UNMAPPED) != 0) {
softc->unmappedio = 1;
softc->disk->d_flags |= DISKFLAG_UNMAPPED_BIO;
}
cam_strvis(softc->disk->d_descr, cgd->inq_data.vendor,
sizeof(cgd->inq_data.vendor), sizeof(softc->disk->d_descr));
strlcat(softc->disk->d_descr, " ", sizeof(softc->disk->d_descr));
cam_strvis(&softc->disk->d_descr[strlen(softc->disk->d_descr)],
cgd->inq_data.product, sizeof(cgd->inq_data.product),
sizeof(softc->disk->d_descr) - strlen(softc->disk->d_descr));
softc->disk->d_hba_vendor = cpi.hba_vendor;
softc->disk->d_hba_device = cpi.hba_device;
softc->disk->d_hba_subvendor = cpi.hba_subvendor;
softc->disk->d_hba_subdevice = cpi.hba_subdevice;
/*
* Acquire a reference to the periph before we register with GEOM.
* We'll release this reference once GEOM calls us back (via
* dadiskgonecb()) telling us that our provider has been freed.
*/
if (da_periph_acquire(periph, DA_REF_GEOM) != 0) {
xpt_print(periph->path, "%s: lost periph during "
"registration!\n", __func__);
cam_periph_lock(periph);
return (CAM_REQ_CMP_ERR);
}
disk_create(softc->disk, DISK_VERSION);
cam_periph_lock(periph);
/*
* Add async callbacks for events of interest.
* I don't bother checking if this fails as,
* in most cases, the system will function just
* fine without them and the only alternative
* would be to not attach the device on failure.
*/
xpt_register_async(AC_SENT_BDR | AC_BUS_RESET | AC_LOST_DEVICE |
AC_ADVINFO_CHANGED | AC_SCSI_AEN | AC_UNIT_ATTENTION |
AC_INQ_CHANGED, daasync, periph, periph->path);
/*
* Emit an attribute changed notification just in case
* physical path information arrived before our async
* event handler was registered, but after anyone attaching
* to our disk device polled it.
*/
disk_attr_changed(softc->disk, "GEOM::physpath", M_NOWAIT);
/*
* Schedule periodic media polling events.
*/
callout_init_mtx(&softc->mediapoll_c, cam_periph_mtx(periph), 0);
if ((softc->flags & DA_FLAG_PACK_REMOVABLE) &&
(cgd->inq_flags & SID_AEN) == 0 &&
da_poll_period != 0)
callout_reset(&softc->mediapoll_c, da_poll_period * hz,
damediapoll, periph);
xpt_schedule(periph, CAM_PRIORITY_DEV);
return(CAM_REQ_CMP);
}
static int
da_zone_bio_to_scsi(int disk_zone_cmd)
{
switch (disk_zone_cmd) {
case DISK_ZONE_OPEN:
return ZBC_OUT_SA_OPEN;
case DISK_ZONE_CLOSE:
return ZBC_OUT_SA_CLOSE;
case DISK_ZONE_FINISH:
return ZBC_OUT_SA_FINISH;
case DISK_ZONE_RWP:
return ZBC_OUT_SA_RWP;
}
return -1;
}
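The translation above is a straight opcode mapping, with -1 signalling an untranslatable command. A self-contained sketch, using placeholder enum values rather than the real disk(9) and ZBC constants:

```c
#include <assert.h>

/*
 * Hypothetical stand-ins for the disk(9) zone commands and the ZBC OUT
 * service actions they map to; the real values live in the disk and
 * scsi_da headers. Mirrors da_zone_bio_to_scsi().
 */
enum { ZONE_OPEN, ZONE_CLOSE, ZONE_FINISH, ZONE_RWP };
enum { SA_CLOSE = 0x01, SA_FINISH = 0x02, SA_OPEN = 0x03, SA_RWP = 0x04 };

static int
zone_cmd_to_sa(int cmd)
{
	switch (cmd) {
	case ZONE_OPEN:
		return (SA_OPEN);
	case ZONE_CLOSE:
		return (SA_CLOSE);
	case ZONE_FINISH:
		return (SA_FINISH);
	case ZONE_RWP:
		return (SA_RWP);
	}
	return (-1);	/* no SCSI equivalent for this command */
}
```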
static int
da_zone_cmd(struct cam_periph *periph, union ccb *ccb, struct bio *bp,
int *queue_ccb)
{
struct da_softc *softc;
int error;
error = 0;
if (bp->bio_cmd != BIO_ZONE) {
error = EINVAL;
goto bailout;
}
softc = periph->softc;
switch (bp->bio_zone.zone_cmd) {
case DISK_ZONE_OPEN:
case DISK_ZONE_CLOSE:
case DISK_ZONE_FINISH:
case DISK_ZONE_RWP: {
int zone_flags;
int zone_sa;
uint64_t lba;
zone_sa = da_zone_bio_to_scsi(bp->bio_zone.zone_cmd);
if (zone_sa == -1) {
xpt_print(periph->path, "Cannot translate zone "
"cmd %#x to SCSI\n", bp->bio_zone.zone_cmd);
error = EINVAL;
goto bailout;
}
zone_flags = 0;
lba = bp->bio_zone.zone_params.rwp.id;
if (bp->bio_zone.zone_params.rwp.flags &
DISK_ZONE_RWP_FLAG_ALL)
zone_flags |= ZBC_OUT_ALL;
if (softc->zone_interface != DA_ZONE_IF_ATA_PASS) {
scsi_zbc_out(&ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*service_action*/ zone_sa,
/*zone_id*/ lba,
/*zone_flags*/ zone_flags,
/*data_ptr*/ NULL,
/*dxfer_len*/ 0,
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
} else {
/*
* Note that in this case, even though we can
* technically use NCQ, we don't bother for several
* reasons:
* 1. It hasn't been tested on a SAT layer that
* supports it. This is new as of SAT-4.
* 2. Even when there is a SAT layer that supports
* it, that SAT layer will also probably support
* ZBC -> ZAC translation, since they are both
* in the SAT-4 spec.
* 3. Translation will likely be preferable to ATA
* passthrough. LSI / Avago at least single
* steps ATA passthrough commands in the HBA,
* regardless of protocol, so unless that
* changes, there is a performance penalty for
* doing ATA passthrough no matter whether
* you're using NCQ/FPDMA, DMA or PIO.
* 4. It requires a 32-byte CDB, which at least at
* this point in CAM requires a CDB pointer, which
* would require us to allocate an additional bit
* of storage separate from the CCB.
*/
error = scsi_ata_zac_mgmt_out(&ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*use_ncq*/ 0,
/*zm_action*/ zone_sa,
/*zone_id*/ lba,
/*zone_flags*/ zone_flags,
/*data_ptr*/ NULL,
/*dxfer_len*/ 0,
/*cdb_storage*/ NULL,
/*cdb_storage_len*/ 0,
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
if (error != 0) {
error = EINVAL;
xpt_print(periph->path,
"scsi_ata_zac_mgmt_out() returned an "
"error!");
goto bailout;
}
}
*queue_ccb = 1;
break;
}
case DISK_ZONE_REPORT_ZONES: {
uint8_t *rz_ptr;
uint32_t num_entries, alloc_size;
struct disk_zone_report *rep;
rep = &bp->bio_zone.zone_params.report;
num_entries = rep->entries_allocated;
if (num_entries == 0) {
xpt_print(periph->path, "No entries allocated for "
"Report Zones request\n");
error = EINVAL;
goto bailout;
}
alloc_size = sizeof(struct scsi_report_zones_hdr) +
(sizeof(struct scsi_report_zones_desc) * num_entries);
alloc_size = min(alloc_size, softc->disk->d_maxsize);
rz_ptr = malloc(alloc_size, M_SCSIDA, M_NOWAIT | M_ZERO);
if (rz_ptr == NULL) {
xpt_print(periph->path, "Unable to allocate memory "
"for Report Zones request\n");
error = ENOMEM;
goto bailout;
}
if (softc->zone_interface != DA_ZONE_IF_ATA_PASS) {
scsi_zbc_in(&ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*service_action*/ ZBC_IN_SA_REPORT_ZONES,
/*zone_start_lba*/ rep->starting_id,
/*zone_options*/ rep->rep_options,
/*data_ptr*/ rz_ptr,
/*dxfer_len*/ alloc_size,
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
} else {
/*
* Note that in this case, even though we can
* technically use NCQ, we don't bother for several
* reasons:
* 1. It hasn't been tested on a SAT layer that
* supports it. This is new as of SAT-4.
* 2. Even when there is a SAT layer that supports
* it, that SAT layer will also probably support
* ZBC -> ZAC translation, since they are both
* in the SAT-4 spec.
* 3. Translation will likely be preferable to ATA
* passthrough. LSI / Avago at least single
* steps ATA passthrough commands in the HBA,
* regardless of protocol, so unless that
* changes, there is a performance penalty for
* doing ATA passthrough no matter whether
* you're using NCQ/FPDMA, DMA or PIO.
* 4. It requires a 32-byte CDB, which at least at
* this point in CAM requires a CDB pointer, which
* would require us to allocate an additional bit
* of storage separate from the CCB.
*/
error = scsi_ata_zac_mgmt_in(&ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*use_ncq*/ 0,
/*zm_action*/ ATA_ZM_REPORT_ZONES,
/*zone_id*/ rep->starting_id,
/*zone_flags*/ rep->rep_options,
/*data_ptr*/ rz_ptr,
/*dxfer_len*/ alloc_size,
/*cdb_storage*/ NULL,
/*cdb_storage_len*/ 0,
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
if (error != 0) {
error = EINVAL;
xpt_print(periph->path,
"scsi_ata_zac_mgmt_in() returned an "
"error!");
goto bailout;
}
}
/*
* For BIO_ZONE, this isn't normally needed. However, it
* is used by devstat_end_transaction_bio() to determine
* how much data was transferred.
*/
/*
* XXX KDM we have a problem. But I'm not sure how to fix
* it. devstat uses bio_bcount - bio_resid to calculate
* the amount of data transferred. The GEOM disk code
* uses bio_length - bio_resid to calculate the amount of
* data in bio_completed. We have different structure
* sizes above and below the da(4) driver. So, if we
* use the sizes above, the amount transferred won't be
* quite accurate for devstat. If we use different sizes
* for bio_bcount and bio_length (above and below
* respectively), then the residual needs to match one or
* the other. Everything is calculated after the bio
* leaves the driver, so changing the values around isn't
* really an option. For now, just set the count to the
* passed in length. This means that the calculations
* above (e.g. bio_completed) will be correct, but the
* amount of data reported to devstat will be slightly
* under or overstated.
*/
bp->bio_bcount = bp->bio_length;
*queue_ccb = 1;
break;
}
case DISK_ZONE_GET_PARAMS: {
struct disk_zone_disk_params *params;
params = &bp->bio_zone.zone_params.disk_params;
bzero(params, sizeof(*params));
switch (softc->zone_mode) {
case DA_ZONE_DRIVE_MANAGED:
params->zone_mode = DISK_ZONE_MODE_DRIVE_MANAGED;
break;
case DA_ZONE_HOST_AWARE:
params->zone_mode = DISK_ZONE_MODE_HOST_AWARE;
break;
case DA_ZONE_HOST_MANAGED:
params->zone_mode = DISK_ZONE_MODE_HOST_MANAGED;
break;
default:
case DA_ZONE_NONE:
params->zone_mode = DISK_ZONE_MODE_NONE;
break;
}
if (softc->zone_flags & DA_ZONE_FLAG_URSWRZ)
params->flags |= DISK_ZONE_DISK_URSWRZ;
if (softc->zone_flags & DA_ZONE_FLAG_OPT_SEQ_SET) {
params->optimal_seq_zones = softc->optimal_seq_zones;
params->flags |= DISK_ZONE_OPT_SEQ_SET;
}
if (softc->zone_flags & DA_ZONE_FLAG_OPT_NONSEQ_SET) {
params->optimal_nonseq_zones =
softc->optimal_nonseq_zones;
params->flags |= DISK_ZONE_OPT_NONSEQ_SET;
}
if (softc->zone_flags & DA_ZONE_FLAG_MAX_SEQ_SET) {
params->max_seq_zones = softc->max_seq_zones;
params->flags |= DISK_ZONE_MAX_SEQ_SET;
}
if (softc->zone_flags & DA_ZONE_FLAG_RZ_SUP)
params->flags |= DISK_ZONE_RZ_SUP;
if (softc->zone_flags & DA_ZONE_FLAG_OPEN_SUP)
params->flags |= DISK_ZONE_OPEN_SUP;
if (softc->zone_flags & DA_ZONE_FLAG_CLOSE_SUP)
params->flags |= DISK_ZONE_CLOSE_SUP;
if (softc->zone_flags & DA_ZONE_FLAG_FINISH_SUP)
params->flags |= DISK_ZONE_FINISH_SUP;
if (softc->zone_flags & DA_ZONE_FLAG_RWP_SUP)
params->flags |= DISK_ZONE_RWP_SUP;
break;
}
default:
break;
}
bailout:
return (error);
}
static void
dastart(struct cam_periph *periph, union ccb *start_ccb)
{
struct da_softc *softc;
cam_periph_assert(periph, MA_OWNED);
softc = (struct da_softc *)periph->softc;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dastart\n"));
skipstate:
switch (softc->state) {
case DA_STATE_NORMAL:
{
struct bio *bp;
uint8_t tag_code;
more:
bp = cam_iosched_next_bio(softc->cam_iosched);
if (bp == NULL) {
if (cam_iosched_has_work_flags(softc->cam_iosched,
DA_WORK_TUR)) {
softc->flags |= DA_FLAG_TUR_PENDING;
cam_iosched_clr_work_flags(softc->cam_iosched,
DA_WORK_TUR);
scsi_test_unit_ready(&start_ccb->csio,
/*retries*/ da_retry_count,
dadone_tur,
MSG_SIMPLE_Q_TAG,
SSD_FULL_SIZE,
da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_TUR;
xpt_action(start_ccb);
} else
xpt_release_ccb(start_ccb);
break;
}
if (bp->bio_cmd == BIO_DELETE) {
if (softc->delete_func != NULL) {
softc->delete_func(periph, start_ccb, bp);
goto out;
} else {
/*
* Not sure this is possible, but failsafe by
* lying and saying "sure, done."
*/
biofinish(bp, NULL, 0);
goto more;
}
}
if (cam_iosched_has_work_flags(softc->cam_iosched,
DA_WORK_TUR)) {
cam_iosched_clr_work_flags(softc->cam_iosched,
DA_WORK_TUR);
da_periph_release_locked(periph, DA_REF_TUR);
}
if ((bp->bio_flags & BIO_ORDERED) != 0 ||
(softc->flags & DA_FLAG_NEED_OTAG) != 0) {
softc->flags &= ~DA_FLAG_NEED_OTAG;
softc->flags |= DA_FLAG_WAS_OTAG;
tag_code = MSG_ORDERED_Q_TAG;
} else {
tag_code = MSG_SIMPLE_Q_TAG;
}
switch (bp->bio_cmd) {
case BIO_WRITE:
case BIO_READ:
{
void *data_ptr;
int rw_op;
biotrack(bp, __func__);
if (bp->bio_cmd == BIO_WRITE) {
softc->flags |= DA_FLAG_DIRTY;
rw_op = SCSI_RW_WRITE;
} else {
rw_op = SCSI_RW_READ;
}
data_ptr = bp->bio_data;
if ((bp->bio_flags & (BIO_UNMAPPED|BIO_VLIST)) != 0) {
rw_op |= SCSI_RW_BIO;
data_ptr = bp;
}
scsi_read_write(&start_ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone,
/*tag_action*/tag_code,
rw_op,
/*byte2*/0,
softc->minimum_cmd_size,
/*lba*/bp->bio_pblkno,
/*block_count*/bp->bio_bcount /
softc->params.secsize,
data_ptr,
/*dxfer_len*/ bp->bio_bcount,
/*sense_len*/SSD_FULL_SIZE,
da_default_timeout * 1000);
#if defined(BUF_TRACKING) || defined(FULL_BUF_TRACKING)
start_ccb->csio.bio = bp;
#endif
break;
}
case BIO_FLUSH:
/*
* If we don't support sync cache, or the disk
* isn't dirty, FLUSH is a no-op. Use the
* allocated CCB for the next bio if one is
* available.
*/
if ((softc->quirks & DA_Q_NO_SYNC_CACHE) != 0 ||
(softc->flags & DA_FLAG_DIRTY) == 0) {
biodone(bp);
goto skipstate;
}
/*
* BIO_FLUSH doesn't currently communicate
* range data, so we synchronize the cache
* over the whole disk.
*/
scsi_synchronize_cache(&start_ccb->csio,
/*retries*/1,
/*cbfcnp*/dadone,
/*tag_action*/tag_code,
/*begin_lba*/0,
/*lb_count*/0,
SSD_FULL_SIZE,
da_default_timeout*1000);
/*
* Clear the dirty flag before sending the command.
* Either this sync cache will be successful, or it
* will fail after a retry. If it fails, it is
* unlikely to be successful if retried later, so
* we'll save ourselves time by just marking the
* device clean.
*/
softc->flags &= ~DA_FLAG_DIRTY;
break;
case BIO_ZONE: {
int error, queue_ccb;
queue_ccb = 0;
error = da_zone_cmd(periph, start_ccb, bp, &queue_ccb);
if ((error != 0)
|| (queue_ccb == 0)) {
biofinish(bp, NULL, error);
xpt_release_ccb(start_ccb);
return;
}
break;
}
}
start_ccb->ccb_h.ccb_state = DA_CCB_BUFFER_IO;
start_ccb->ccb_h.flags |= CAM_UNLOCKED;
start_ccb->ccb_h.softtimeout = sbttotv(da_default_softtimeout);
out:
LIST_INSERT_HEAD(&softc->pending_ccbs,
&start_ccb->ccb_h, periph_links.le);
/* We expect a unit attention from this device */
if ((softc->flags & DA_FLAG_RETRY_UA) != 0) {
start_ccb->ccb_h.ccb_state |= DA_CCB_RETRY_UA;
softc->flags &= ~DA_FLAG_RETRY_UA;
}
start_ccb->ccb_h.ccb_bp = bp;
softc->refcount++;
cam_periph_unlock(periph);
xpt_action(start_ccb);
cam_periph_lock(periph);
/* May have more work to do, so ensure we stay scheduled */
daschedule(periph);
break;
}
case DA_STATE_PROBE_WP:
{
void *mode_buf;
int mode_buf_len;
if (da_disable_wp_detection) {
if ((softc->flags & DA_FLAG_CAN_RC16) != 0)
softc->state = DA_STATE_PROBE_RC16;
else
softc->state = DA_STATE_PROBE_RC;
goto skipstate;
}
mode_buf_len = 192;
mode_buf = malloc(mode_buf_len, M_SCSIDA, M_NOWAIT);
if (mode_buf == NULL) {
xpt_print(periph->path, "Unable to send mode sense - "
"malloc failure\n");
if ((softc->flags & DA_FLAG_CAN_RC16) != 0)
softc->state = DA_STATE_PROBE_RC16;
else
softc->state = DA_STATE_PROBE_RC;
goto skipstate;
}
scsi_mode_sense_len(&start_ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone_probewp,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*dbd*/ FALSE,
/*pc*/ SMS_PAGE_CTRL_CURRENT,
/*page*/ SMS_ALL_PAGES_PAGE,
/*param_buf*/ mode_buf,
/*param_len*/ mode_buf_len,
/*minimum_cmd_size*/ softc->minimum_cmd_size,
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_WP;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_RC:
{
struct scsi_read_capacity_data *rcap;
rcap = (struct scsi_read_capacity_data *)
malloc(sizeof(*rcap), M_SCSIDA, M_NOWAIT|M_ZERO);
if (rcap == NULL) {
printf("dastart: Couldn't malloc read_capacity data\n");
/* da_free_periph??? */
break;
}
scsi_read_capacity(&start_ccb->csio,
/*retries*/da_retry_count,
dadone_proberc,
MSG_SIMPLE_Q_TAG,
rcap,
SSD_FULL_SIZE,
/*timeout*/5000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_RC;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_RC16:
{
struct scsi_read_capacity_data_long *rcaplong;
rcaplong = (struct scsi_read_capacity_data_long *)
malloc(sizeof(*rcaplong), M_SCSIDA, M_NOWAIT|M_ZERO);
if (rcaplong == NULL) {
printf("dastart: Couldn't malloc read_capacity data\n");
/* da_free_periph??? */
break;
}
scsi_read_capacity_16(&start_ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone_proberc,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*lba*/ 0,
/*reladr*/ 0,
/*pmi*/ 0,
/*rcap_buf*/ (uint8_t *)rcaplong,
/*rcap_buf_len*/ sizeof(*rcaplong),
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_RC16;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_LBP:
{
struct scsi_vpd_logical_block_prov *lbp;
if (!scsi_vpd_supported_page(periph, SVPD_LBP)) {
/*
* If we get here, we don't support any SBC-3 delete
* methods with UNMAP, since Logical Block Provisioning
* VPD page support is required for devices that
* support it according to T10/1799-D Revision 31.
* However, older revisions of the spec don't mandate
* this, so we currently don't remove these methods
* from the available set.
*/
softc->state = DA_STATE_PROBE_BLK_LIMITS;
goto skipstate;
}
lbp = (struct scsi_vpd_logical_block_prov *)
malloc(sizeof(*lbp), M_SCSIDA, M_NOWAIT|M_ZERO);
if (lbp == NULL) {
printf("dastart: Couldn't malloc lbp data\n");
/* da_free_periph??? */
break;
}
scsi_inquiry(&start_ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone_probelbp,
/*tag_action*/MSG_SIMPLE_Q_TAG,
/*inq_buf*/(u_int8_t *)lbp,
/*inq_len*/sizeof(*lbp),
/*evpd*/TRUE,
/*page_code*/SVPD_LBP,
/*sense_len*/SSD_MIN_SIZE,
/*timeout*/da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_LBP;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_BLK_LIMITS:
{
struct scsi_vpd_block_limits *block_limits;
if (!scsi_vpd_supported_page(periph, SVPD_BLOCK_LIMITS)) {
/* Not supported; skip to the next probe. */
softc->state = DA_STATE_PROBE_BDC;
goto skipstate;
}
block_limits = (struct scsi_vpd_block_limits *)
malloc(sizeof(*block_limits), M_SCSIDA, M_NOWAIT|M_ZERO);
if (block_limits == NULL) {
printf("dastart: Couldn't malloc block_limits data\n");
/* da_free_periph??? */
break;
}
scsi_inquiry(&start_ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone_probeblklimits,
/*tag_action*/MSG_SIMPLE_Q_TAG,
/*inq_buf*/(u_int8_t *)block_limits,
/*inq_len*/sizeof(*block_limits),
/*evpd*/TRUE,
/*page_code*/SVPD_BLOCK_LIMITS,
/*sense_len*/SSD_MIN_SIZE,
/*timeout*/da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_BLK_LIMITS;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_BDC:
{
struct scsi_vpd_block_characteristics *bdc;
if (!scsi_vpd_supported_page(periph, SVPD_BDC)) {
softc->state = DA_STATE_PROBE_ATA;
goto skipstate;
}
bdc = (struct scsi_vpd_block_characteristics *)
malloc(sizeof(*bdc), M_SCSIDA, M_NOWAIT|M_ZERO);
if (bdc == NULL) {
printf("dastart: Couldn't malloc bdc data\n");
/* da_free_periph??? */
break;
}
scsi_inquiry(&start_ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone_probebdc,
/*tag_action*/MSG_SIMPLE_Q_TAG,
/*inq_buf*/(u_int8_t *)bdc,
/*inq_len*/sizeof(*bdc),
/*evpd*/TRUE,
/*page_code*/SVPD_BDC,
/*sense_len*/SSD_MIN_SIZE,
/*timeout*/da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_BDC;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_ATA:
{
struct ata_params *ata_params;
if (!scsi_vpd_supported_page(periph, SVPD_ATA_INFORMATION)) {
if ((softc->zone_mode == DA_ZONE_HOST_AWARE)
|| (softc->zone_mode == DA_ZONE_HOST_MANAGED)) {
/*
* Note that if the ATA VPD page isn't
* supported, we aren't talking to an ATA
* device anyway. Support for that VPD
* page is mandatory for SCSI to ATA (SAT)
* translation layers.
*/
softc->state = DA_STATE_PROBE_ZONE;
goto skipstate;
}
daprobedone(periph, start_ccb);
break;
}
ata_params = (struct ata_params*)
malloc(sizeof(*ata_params), M_SCSIDA,M_NOWAIT|M_ZERO);
if (ata_params == NULL) {
xpt_print(periph->path, "Couldn't malloc ata_params "
"data\n");
/* da_free_periph??? */
break;
}
scsi_ata_identify(&start_ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone_probeata,
/*tag_action*/MSG_SIMPLE_Q_TAG,
/*data_ptr*/(u_int8_t *)ata_params,
/*dxfer_len*/sizeof(*ata_params),
/*sense_len*/SSD_FULL_SIZE,
/*timeout*/da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_ATA;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_ATA_LOGDIR:
{
struct ata_gp_log_dir *log_dir;
int retval;
retval = 0;
if ((softc->flags & DA_FLAG_CAN_ATA_LOG) == 0) {
/*
* If we don't have log support, not much point in
* trying to probe zone support.
*/
daprobedone(periph, start_ccb);
break;
}
/*
* If we have an ATA device (the SCSI ATA Information VPD
* page should be present and the ATA identify should have
* succeeded) and it supports logs, ask for the log directory.
*/
log_dir = malloc(sizeof(*log_dir), M_SCSIDA, M_NOWAIT|M_ZERO);
if (log_dir == NULL) {
xpt_print(periph->path, "Couldn't malloc log_dir "
"data\n");
daprobedone(periph, start_ccb);
break;
}
retval = scsi_ata_read_log(&start_ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone_probeatalogdir,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*log_address*/ ATA_LOG_DIRECTORY,
/*page_number*/ 0,
/*block_count*/ 1,
/*protocol*/ softc->flags & DA_FLAG_CAN_ATA_DMA ?
AP_PROTO_DMA : AP_PROTO_PIO_IN,
/*data_ptr*/ (uint8_t *)log_dir,
/*dxfer_len*/ sizeof(*log_dir),
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
if (retval != 0) {
xpt_print(periph->path, "scsi_ata_read_log() failed!\n");
free(log_dir, M_SCSIDA);
daprobedone(periph, start_ccb);
break;
}
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_ATA_LOGDIR;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_ATA_IDDIR:
{
struct ata_identify_log_pages *id_dir;
int retval;
retval = 0;
/*
* Check here to see whether the Identify Device log is
* supported in the directory of logs. If so, continue
* with requesting the log of identify device pages.
*/
if ((softc->flags & DA_FLAG_CAN_ATA_IDLOG) == 0) {
daprobedone(periph, start_ccb);
break;
}
id_dir = malloc(sizeof(*id_dir), M_SCSIDA, M_NOWAIT | M_ZERO);
if (id_dir == NULL) {
xpt_print(periph->path, "Couldn't malloc id_dir "
"data\n");
daprobedone(periph, start_ccb);
break;
}
retval = scsi_ata_read_log(&start_ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone_probeataiddir,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*log_address*/ ATA_IDENTIFY_DATA_LOG,
/*page_number*/ ATA_IDL_PAGE_LIST,
/*block_count*/ 1,
/*protocol*/ softc->flags & DA_FLAG_CAN_ATA_DMA ?
AP_PROTO_DMA : AP_PROTO_PIO_IN,
/*data_ptr*/ (uint8_t *)id_dir,
/*dxfer_len*/ sizeof(*id_dir),
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
if (retval != 0) {
xpt_print(periph->path, "scsi_ata_read_log() failed!\n");
free(id_dir, M_SCSIDA);
daprobedone(periph, start_ccb);
break;
}
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_ATA_IDDIR;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_ATA_SUP:
{
struct ata_identify_log_sup_cap *sup_cap;
int retval;
retval = 0;
/*
* Check here to see whether the Supported Capabilities log
* is in the list of Identify Device logs.
*/
if ((softc->flags & DA_FLAG_CAN_ATA_SUPCAP) == 0) {
daprobedone(periph, start_ccb);
break;
}
sup_cap = malloc(sizeof(*sup_cap), M_SCSIDA, M_NOWAIT|M_ZERO);
if (sup_cap == NULL) {
xpt_print(periph->path, "Couldn't malloc sup_cap "
"data\n");
daprobedone(periph, start_ccb);
break;
}
retval = scsi_ata_read_log(&start_ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone_probeatasup,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*log_address*/ ATA_IDENTIFY_DATA_LOG,
/*page_number*/ ATA_IDL_SUP_CAP,
/*block_count*/ 1,
/*protocol*/ softc->flags & DA_FLAG_CAN_ATA_DMA ?
AP_PROTO_DMA : AP_PROTO_PIO_IN,
/*data_ptr*/ (uint8_t *)sup_cap,
/*dxfer_len*/ sizeof(*sup_cap),
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
if (retval != 0) {
xpt_print(periph->path, "scsi_ata_read_log() failed!\n");
free(sup_cap, M_SCSIDA);
daprobedone(periph, start_ccb);
break;
}
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_ATA_SUP;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_ATA_ZONE:
{
struct ata_zoned_info_log *ata_zone;
int retval;
retval = 0;
/*
* Check here to see whether the zoned device information
* page is supported. If so, continue on to request it.
* If not, skip to DA_STATE_PROBE_LOG or done.
*/
if ((softc->flags & DA_FLAG_CAN_ATA_ZONE) == 0) {
daprobedone(periph, start_ccb);
break;
}
ata_zone = malloc(sizeof(*ata_zone), M_SCSIDA,
M_NOWAIT|M_ZERO);
if (ata_zone == NULL) {
xpt_print(periph->path, "Couldn't malloc ata_zone "
"data\n");
daprobedone(periph, start_ccb);
break;
}
retval = scsi_ata_read_log(&start_ccb->csio,
/*retries*/ da_retry_count,
/*cbfcnp*/ dadone_probeatazone,
/*tag_action*/ MSG_SIMPLE_Q_TAG,
/*log_address*/ ATA_IDENTIFY_DATA_LOG,
/*page_number*/ ATA_IDL_ZDI,
/*block_count*/ 1,
/*protocol*/ softc->flags & DA_FLAG_CAN_ATA_DMA ?
AP_PROTO_DMA : AP_PROTO_PIO_IN,
/*data_ptr*/ (uint8_t *)ata_zone,
/*dxfer_len*/ sizeof(*ata_zone),
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ da_default_timeout * 1000);
if (retval != 0) {
xpt_print(periph->path, "scsi_ata_read_log() failed!\n");
free(ata_zone, M_SCSIDA);
daprobedone(periph, start_ccb);
break;
}
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_ATA_ZONE;
xpt_action(start_ccb);
break;
}
case DA_STATE_PROBE_ZONE:
{
struct scsi_vpd_zoned_bdc *bdc;
/*
* Note that this page will be supported for SCSI protocol
* devices that support ZBC (SMR devices), as well as ATA
* protocol devices that are behind a SAT (SCSI to ATA
* Translation) layer that supports converting ZBC commands
* to their ZAC equivalents.
*/
if (!scsi_vpd_supported_page(periph, SVPD_ZONED_BDC)) {
daprobedone(periph, start_ccb);
break;
}
bdc = (struct scsi_vpd_zoned_bdc *)
malloc(sizeof(*bdc), M_SCSIDA, M_NOWAIT|M_ZERO);
if (bdc == NULL) {
xpt_release_ccb(start_ccb);
xpt_print(periph->path, "Couldn't malloc zone VPD "
"data\n");
break;
}
scsi_inquiry(&start_ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone_probezone,
/*tag_action*/MSG_SIMPLE_Q_TAG,
/*inq_buf*/(u_int8_t *)bdc,
/*inq_len*/sizeof(*bdc),
/*evpd*/TRUE,
/*page_code*/SVPD_ZONED_BDC,
/*sense_len*/SSD_FULL_SIZE,
/*timeout*/da_default_timeout * 1000);
start_ccb->ccb_h.ccb_bp = NULL;
start_ccb->ccb_h.ccb_state = DA_CCB_PROBE_ZONE;
xpt_action(start_ccb);
break;
}
}
}
/*
* In each of the methods below, while it is the caller's
* responsibility to ensure the request will fit into a
* single device request, we might have changed the delete
* method due to the device incorrectly advertising either
* its supported methods or limits.
*
* To prevent this causing further issues, we validate the
* request against the method's limits and warn, which would
* otherwise be unnecessary.
*/
static void
da_delete_unmap(struct cam_periph *periph, union ccb *ccb, struct bio *bp)
{
struct da_softc *softc = (struct da_softc *)periph->softc;
struct bio *bp1;
uint8_t *buf = softc->unmap_buf;
struct scsi_unmap_desc *d = (void *)&buf[UNMAP_HEAD_SIZE];
uint64_t lba, lastlba = (uint64_t)-1;
uint64_t totalcount = 0;
uint64_t count;
uint32_t c, lastcount = 0, ranges = 0;
/*
* Currently this doesn't take the UNMAP
* Granularity and Granularity Alignment
* fields into account.
*
* This could result in both suboptimal unmap
* requests as well as UNMAP calls unmapping
* fewer LBAs than requested.
*/
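/*
* For reference, a sketch of the UNMAP parameter list assembled below,
* taken from the SBC-3 description of the command rather than from
* this file:
*
*	bytes 0-1: UNMAP data length (ranges * 16 + 6)
*	bytes 2-3: UNMAP block descriptor data length (ranges * 16)
*	bytes 4-7: reserved
*	bytes 8..: 16-byte descriptors, each an 8-byte big-endian LBA,
*	           a 4-byte big-endian block count, and 4 reserved bytes
*
* which is why the transfer length passed to scsi_unmap() below is
* ranges * 16 + 8.
*/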
bzero(softc->unmap_buf, sizeof(softc->unmap_buf));
bp1 = bp;
do {
/*
* Note: ada and da are different in how they store the
* pending bp's in a trim. ada stores all of them in the
* trim_req.bps. da stores all but the first one in the
* delete_run_queue. ada then completes all the bps in
* its adadone() loop. da completes all the bps in the
* delete_run_queue in dadone, and relies on the biodone
* after to complete. This should be reconciled since there's
* no real reason to do it differently. XXX
*/
if (bp1 != bp)
bioq_insert_tail(&softc->delete_run_queue, bp1);
lba = bp1->bio_pblkno;
count = bp1->bio_bcount / softc->params.secsize;
/* Try to extend the previous range. */
if (lba == lastlba) {
c = omin(count, UNMAP_RANGE_MAX - lastcount);
lastlba += c;
lastcount += c;
scsi_ulto4b(lastcount, d[ranges - 1].length);
count -= c;
lba += c;
totalcount += c;
} else if ((softc->quirks & DA_Q_STRICT_UNMAP) &&
softc->unmap_gran != 0) {
/* Align length of the previous range. */
if ((c = lastcount % softc->unmap_gran) != 0) {
if (lastcount <= c) {
totalcount -= lastcount;
lastlba = (uint64_t)-1;
lastcount = 0;
ranges--;
} else {
totalcount -= c;
lastlba -= c;
lastcount -= c;
scsi_ulto4b(lastcount,
d[ranges - 1].length);
}
}
/* Align beginning of the new range. */
c = (lba - softc->unmap_gran_align) % softc->unmap_gran;
if (c != 0) {
c = softc->unmap_gran - c;
if (count <= c) {
count = 0;
} else {
lba += c;
count -= c;
}
}
}
while (count > 0) {
c = omin(count, UNMAP_RANGE_MAX);
if (totalcount + c > softc->unmap_max_lba ||
ranges >= softc->unmap_max_ranges) {
xpt_print(periph->path,
"%s issuing short delete %ld > %ld "
"|| %d >= %d\n",
da_delete_method_desc[softc->delete_method],
totalcount + c, softc->unmap_max_lba,
ranges, softc->unmap_max_ranges);
break;
}
scsi_u64to8b(lba, d[ranges].lba);
scsi_ulto4b(c, d[ranges].length);
lba += c;
totalcount += c;
ranges++;
count -= c;
lastlba = lba;
lastcount = c;
}
bp1 = cam_iosched_next_trim(softc->cam_iosched);
if (bp1 == NULL)
break;
if (ranges >= softc->unmap_max_ranges ||
totalcount + bp1->bio_bcount /
softc->params.secsize > softc->unmap_max_lba) {
cam_iosched_put_back_trim(softc->cam_iosched, bp1);
break;
}
} while (1);
/* Align length of the last range. */
if ((softc->quirks & DA_Q_STRICT_UNMAP) && softc->unmap_gran != 0 &&
(c = lastcount % softc->unmap_gran) != 0) {
if (lastcount <= c)
ranges--;
else
scsi_ulto4b(lastcount - c, d[ranges - 1].length);
}
scsi_ulto2b(ranges * 16 + 6, &buf[0]);
scsi_ulto2b(ranges * 16, &buf[2]);
scsi_unmap(&ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone,
/*tag_action*/MSG_SIMPLE_Q_TAG,
/*byte2*/0,
/*data_ptr*/ buf,
/*dxfer_len*/ ranges * 16 + 8,
/*sense_len*/SSD_FULL_SIZE,
da_default_timeout * 1000);
ccb->ccb_h.ccb_state = DA_CCB_DELETE;
ccb->ccb_h.flags |= CAM_UNLOCKED;
softc->trim_count++;
softc->trim_ranges += ranges;
softc->trim_lbas += totalcount;
cam_iosched_submit_trim(softc->cam_iosched);
}
static void
da_delete_trim(struct cam_periph *periph, union ccb *ccb, struct bio *bp)
{
struct da_softc *softc = (struct da_softc *)periph->softc;
struct bio *bp1;
uint8_t *buf = softc->unmap_buf;
uint64_t lastlba = (uint64_t)-1;
uint64_t count;
uint64_t lba;
uint32_t lastcount = 0, c, requestcount;
int ranges = 0, off, block_count;
bzero(softc->unmap_buf, sizeof(softc->unmap_buf));
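/*
* For reference, the DATA SET MANAGEMENT (TRIM) range format packed
* byte-by-byte below, per the ATA ACS specification rather than this
* file: each 8-byte entry holds a little-endian 48-bit starting LBA
* in bytes 0-5 and a little-endian 16-bit block count in bytes 6-7,
* hence the ATA_DSM_RANGE_MAX cap on blocks per entry.
*/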
bp1 = bp;
do {
if (bp1 != bp) /* XXX imp XXX */
bioq_insert_tail(&softc->delete_run_queue, bp1);
lba = bp1->bio_pblkno;
count = bp1->bio_bcount / softc->params.secsize;
requestcount = count;
/* Try to extend the previous range. */
if (lba == lastlba) {
c = omin(count, ATA_DSM_RANGE_MAX - lastcount);
lastcount += c;
off = (ranges - 1) * 8;
buf[off + 6] = lastcount & 0xff;
buf[off + 7] = (lastcount >> 8) & 0xff;
count -= c;
lba += c;
}
while (count > 0) {
c = omin(count, ATA_DSM_RANGE_MAX);
off = ranges * 8;
buf[off + 0] = lba & 0xff;
buf[off + 1] = (lba >> 8) & 0xff;
buf[off + 2] = (lba >> 16) & 0xff;
buf[off + 3] = (lba >> 24) & 0xff;
buf[off + 4] = (lba >> 32) & 0xff;
buf[off + 5] = (lba >> 40) & 0xff;
buf[off + 6] = c & 0xff;
buf[off + 7] = (c >> 8) & 0xff;
lba += c;
ranges++;
count -= c;
lastcount = c;
if (count != 0 && ranges == softc->trim_max_ranges) {
xpt_print(periph->path,
"%s issuing short delete %ld > %ld\n",
da_delete_method_desc[softc->delete_method],
requestcount,
(softc->trim_max_ranges - ranges) *
ATA_DSM_RANGE_MAX);
break;
}
}
lastlba = lba;
bp1 = cam_iosched_next_trim(softc->cam_iosched);
if (bp1 == NULL)
break;
if (bp1->bio_bcount / softc->params.secsize >
(softc->trim_max_ranges - ranges) * ATA_DSM_RANGE_MAX) {
cam_iosched_put_back_trim(softc->cam_iosched, bp1);
break;
}
} while (1);
block_count = howmany(ranges, ATA_DSM_BLK_RANGES);
scsi_ata_trim(&ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone,
/*tag_action*/MSG_SIMPLE_Q_TAG,
block_count,
/*data_ptr*/buf,
/*dxfer_len*/block_count * ATA_DSM_BLK_SIZE,
/*sense_len*/SSD_FULL_SIZE,
da_default_timeout * 1000);
ccb->ccb_h.ccb_state = DA_CCB_DELETE;
ccb->ccb_h.flags |= CAM_UNLOCKED;
cam_iosched_submit_trim(softc->cam_iosched);
}
/*
* We calculate ws_max_blks here based on d_delmaxsize instead
* of using softc->ws_max_blks, since the latter is the absolute
* maximum for the device, not the protocol maximum, which may
* well be lower.
*/
static void
da_delete_ws(struct cam_periph *periph, union ccb *ccb, struct bio *bp)
{
struct da_softc *softc;
struct bio *bp1;
uint64_t ws_max_blks;
uint64_t lba;
uint64_t count; /* forward compat with WS32 */
softc = (struct da_softc *)periph->softc;
ws_max_blks = softc->disk->d_delmaxsize / softc->params.secsize;
lba = bp->bio_pblkno;
count = 0;
bp1 = bp;
do {
if (bp1 != bp) /* XXX imp XXX */
bioq_insert_tail(&softc->delete_run_queue, bp1);
count += bp1->bio_bcount / softc->params.secsize;
if (count > ws_max_blks) {
xpt_print(periph->path,
"%s issuing short delete %ld > %ld\n",
da_delete_method_desc[softc->delete_method],
count, ws_max_blks);
count = omin(count, ws_max_blks);
break;
}
bp1 = cam_iosched_next_trim(softc->cam_iosched);
if (bp1 == NULL)
break;
if (lba + count != bp1->bio_pblkno ||
count + bp1->bio_bcount /
softc->params.secsize > ws_max_blks) {
cam_iosched_put_back_trim(softc->cam_iosched, bp1);
break;
}
} while (1);
scsi_write_same(&ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone,
/*tag_action*/MSG_SIMPLE_Q_TAG,
/*byte2*/softc->delete_method ==
DA_DELETE_ZERO ? 0 : SWS_UNMAP,
softc->delete_method == DA_DELETE_WS16 ? 16 : 10,
/*lba*/lba,
/*block_count*/count,
/*data_ptr*/ __DECONST(void *, zero_region),
/*dxfer_len*/ softc->params.secsize,
/*sense_len*/SSD_FULL_SIZE,
da_default_timeout * 1000);
ccb->ccb_h.ccb_state = DA_CCB_DELETE;
ccb->ccb_h.flags |= CAM_UNLOCKED;
cam_iosched_submit_trim(softc->cam_iosched);
}
static int
cmd6workaround(union ccb *ccb)
{
struct scsi_rw_6 cmd6;
struct scsi_rw_10 *cmd10;
struct da_softc *softc;
u_int8_t *cdb;
struct bio *bp;
int frozen;
cdb = ccb->csio.cdb_io.cdb_bytes;
softc = (struct da_softc *)xpt_path_periph(ccb->ccb_h.path)->softc;
if (ccb->ccb_h.ccb_state == DA_CCB_DELETE) {
da_delete_methods old_method = softc->delete_method;
/*
* Typically there are two reasons for failure here:
* 1. Delete method was detected as supported but isn't.
* 2. Delete failed due to invalid params, e.g. too big.
*
* While we will attempt to choose an alternative delete method,
* this may result in short deletes if the existing delete
* requests from geom are too big for the new method chosen.
*
* This method assumes that the error which triggered this
* will not retry the I/O; otherwise a panic will occur.
*/
dadeleteflag(softc, old_method, 0);
dadeletemethodchoose(softc, DA_DELETE_DISABLE);
if (softc->delete_method == DA_DELETE_DISABLE)
xpt_print(ccb->ccb_h.path,
"%s failed, disabling BIO_DELETE\n",
da_delete_method_desc[old_method]);
else
xpt_print(ccb->ccb_h.path,
"%s failed, switching to %s BIO_DELETE\n",
da_delete_method_desc[old_method],
da_delete_method_desc[softc->delete_method]);
while ((bp = bioq_takefirst(&softc->delete_run_queue)) != NULL)
cam_iosched_queue_work(softc->cam_iosched, bp);
cam_iosched_queue_work(softc->cam_iosched,
(struct bio *)ccb->ccb_h.ccb_bp);
ccb->ccb_h.ccb_bp = NULL;
return (0);
}
/* Detect unsupported PREVENT ALLOW MEDIUM REMOVAL. */
if ((ccb->ccb_h.flags & CAM_CDB_POINTER) == 0 &&
(*cdb == PREVENT_ALLOW) &&
(softc->quirks & DA_Q_NO_PREVENT) == 0) {
if (bootverbose)
xpt_print(ccb->ccb_h.path,
"PREVENT ALLOW MEDIUM REMOVAL not supported.\n");
softc->quirks |= DA_Q_NO_PREVENT;
return (0);
}
/* Detect unsupported SYNCHRONIZE CACHE(10). */
if ((ccb->ccb_h.flags & CAM_CDB_POINTER) == 0 &&
(*cdb == SYNCHRONIZE_CACHE) &&
(softc->quirks & DA_Q_NO_SYNC_CACHE) == 0) {
if (bootverbose)
xpt_print(ccb->ccb_h.path,
"SYNCHRONIZE CACHE(10) not supported.\n");
softc->quirks |= DA_Q_NO_SYNC_CACHE;
softc->disk->d_flags &= ~DISKFLAG_CANFLUSHCACHE;
return (0);
}
/* Translation only possible if CDB is an array and cmd is R/W6 */
if ((ccb->ccb_h.flags & CAM_CDB_POINTER) != 0 ||
(*cdb != READ_6 && *cdb != WRITE_6))
return 0;
xpt_print(ccb->ccb_h.path, "READ(6)/WRITE(6) not supported, "
"increasing minimum_cmd_size to 10.\n");
softc->minimum_cmd_size = 10;
bcopy(cdb, &cmd6, sizeof(struct scsi_rw_6));
cmd10 = (struct scsi_rw_10 *)cdb;
cmd10->opcode = (cmd6.opcode == READ_6) ? READ_10 : WRITE_10;
cmd10->byte2 = 0;
scsi_ulto4b(scsi_3btoul(cmd6.addr), cmd10->addr);
cmd10->reserved = 0;
scsi_ulto2b(cmd6.length, cmd10->length);
cmd10->control = cmd6.control;
ccb->csio.cdb_len = sizeof(*cmd10);
/* Requeue request, unfreezing queue if necessary */
frozen = (ccb->ccb_h.status & CAM_DEV_QFRZN) != 0;
ccb->ccb_h.status = CAM_REQUEUE_REQ;
xpt_action(ccb);
if (frozen) {
cam_release_devq(ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
return (ERESTART);
}
static void
dazonedone(struct cam_periph *periph, union ccb *ccb)
{
struct da_softc *softc;
struct bio *bp;
softc = periph->softc;
bp = (struct bio *)ccb->ccb_h.ccb_bp;
switch (bp->bio_zone.zone_cmd) {
case DISK_ZONE_OPEN:
case DISK_ZONE_CLOSE:
case DISK_ZONE_FINISH:
case DISK_ZONE_RWP:
break;
case DISK_ZONE_REPORT_ZONES: {
uint32_t avail_len;
struct disk_zone_report *rep;
struct scsi_report_zones_hdr *hdr;
struct scsi_report_zones_desc *desc;
struct disk_zone_rep_entry *entry;
uint32_t hdr_len, num_avail;
uint32_t num_to_fill, i;
int ata;
rep = &bp->bio_zone.zone_params.report;
avail_len = ccb->csio.dxfer_len - ccb->csio.resid;
/*
* Note that bio_resid isn't normally used for zone
* commands, but it is used by devstat_end_transaction_bio()
* to determine how much data was transferred. Because
* the size of the SCSI/ATA data structures is different
* than the size of the BIO interface structures, the
* amount of data actually transferred from the drive will
* be different than the amount of data transferred to
* the user.
*/
bp->bio_resid = ccb->csio.resid;
hdr = (struct scsi_report_zones_hdr *)ccb->csio.data_ptr;
if (avail_len < sizeof(*hdr)) {
/*
* Is there a better error than EIO here? We asked
* for at least the header, and we got less than
* that.
*/
bp->bio_error = EIO;
bp->bio_flags |= BIO_ERROR;
bp->bio_resid = bp->bio_bcount;
break;
}
if (softc->zone_interface == DA_ZONE_IF_ATA_PASS)
ata = 1;
else
ata = 0;
hdr_len = ata ? le32dec(hdr->length) :
scsi_4btoul(hdr->length);
if (hdr_len > 0)
rep->entries_available = hdr_len / sizeof(*desc);
else
rep->entries_available = 0;
/*
* NOTE: using the same values for the BIO version of the
* same field as the SCSI/ATA values. This means we could
* get some additional values that aren't defined in bio.h
* if more values of the same field are defined later.
*/
rep->header.same = hdr->byte4 & SRZ_SAME_MASK;
rep->header.maximum_lba = ata ? le64dec(hdr->maximum_lba) :
scsi_8btou64(hdr->maximum_lba);
/*
* If the drive reports no entries that match the query,
* we're done.
*/
if (hdr_len == 0) {
rep->entries_filled = 0;
break;
}
num_avail = min((avail_len - sizeof(*hdr)) / sizeof(*desc),
hdr_len / sizeof(*desc));
/*
* If the drive didn't return any data, then we're done.
*/
if (num_avail == 0) {
rep->entries_filled = 0;
break;
}
num_to_fill = min(num_avail, rep->entries_allocated);
/*
* If the user didn't allocate any entries for us to fill,
* we're done.
*/
if (num_to_fill == 0) {
rep->entries_filled = 0;
break;
}
for (i = 0, desc = &hdr->desc_list[0], entry=&rep->entries[0];
i < num_to_fill; i++, desc++, entry++) {
/*
* NOTE: we're mapping the values here directly
* from the SCSI/ATA bit definitions to the bio.h
* definitions. There is also a warning in
* disk_zone.h, but the impact is that if
* additional values are added in the SCSI/ATA
* specs these will be visible to consumers of
* this interface.
*/
entry->zone_type = desc->zone_type & SRZ_TYPE_MASK;
entry->zone_condition =
(desc->zone_flags & SRZ_ZONE_COND_MASK) >>
SRZ_ZONE_COND_SHIFT;
entry->zone_flags |= desc->zone_flags &
(SRZ_ZONE_NON_SEQ|SRZ_ZONE_RESET);
entry->zone_length =
ata ? le64dec(desc->zone_length) :
scsi_8btou64(desc->zone_length);
entry->zone_start_lba =
ata ? le64dec(desc->zone_start_lba) :
scsi_8btou64(desc->zone_start_lba);
entry->write_pointer_lba =
ata ? le64dec(desc->write_pointer_lba) :
scsi_8btou64(desc->write_pointer_lba);
}
rep->entries_filled = num_to_fill;
break;
}
case DISK_ZONE_GET_PARAMS:
default:
/*
* In theory we should not get a GET_PARAMS bio, since it
* should be handled without queueing the command to the
* drive.
*/
panic("%s: Invalid zone command %d", __func__,
bp->bio_zone.zone_cmd);
break;
}
if (bp->bio_zone.zone_cmd == DISK_ZONE_REPORT_ZONES)
free(ccb->csio.data_ptr, M_SCSIDA);
}
static void
dadone(struct cam_periph *periph, union ccb *done_ccb)
{
struct bio *bp, *bp1;
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
da_ccb_state state;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
#if defined(BUF_TRACKING) || defined(FULL_BUF_TRACKING)
if (csio->bio != NULL)
biotrack(csio->bio, __func__);
#endif
state = csio->ccb_h.ccb_state & DA_CCB_TYPE_MASK;
cam_periph_lock(periph);
bp = (struct bio *)done_ccb->ccb_h.ccb_bp;
if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
int error;
int sf;
if ((csio->ccb_h.ccb_state & DA_CCB_RETRY_UA) != 0)
sf = SF_RETRY_UA;
else
sf = 0;
error = daerror(done_ccb, CAM_RETRY_SELTO, sf);
if (error == ERESTART) {
/* A retry was scheduled, so just return. */
cam_periph_unlock(periph);
return;
}
bp = (struct bio *)done_ccb->ccb_h.ccb_bp;
if (error != 0) {
int queued_error;
/*
* return all queued I/O with EIO, so that
* the client can retry these I/Os in the
* proper order should it attempt to recover.
*/
queued_error = EIO;
if (error == ENXIO
&& (softc->flags & DA_FLAG_PACK_INVALID)== 0) {
/*
* Catastrophic error. Mark our pack as
* invalid.
*
* XXX See if this is really a media
* XXX change first?
*/
xpt_print(periph->path, "Invalidating pack\n");
softc->flags |= DA_FLAG_PACK_INVALID;
#ifdef CAM_IO_STATS
softc->invalidations++;
#endif
queued_error = ENXIO;
}
cam_iosched_flush(softc->cam_iosched, NULL,
queued_error);
if (bp != NULL) {
bp->bio_error = error;
bp->bio_resid = bp->bio_bcount;
bp->bio_flags |= BIO_ERROR;
}
} else if (bp != NULL) {
if (state == DA_CCB_DELETE)
bp->bio_resid = 0;
else
bp->bio_resid = csio->resid;
bp->bio_error = 0;
if (bp->bio_resid != 0)
bp->bio_flags |= BIO_ERROR;
}
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
} else if (bp != NULL) {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
panic("REQ_CMP with QFRZN");
if (bp->bio_cmd == BIO_ZONE)
dazonedone(periph, done_ccb);
else if (state == DA_CCB_DELETE)
bp->bio_resid = 0;
else
bp->bio_resid = csio->resid;
if ((csio->resid > 0) && (bp->bio_cmd != BIO_ZONE))
bp->bio_flags |= BIO_ERROR;
if (softc->error_inject != 0) {
bp->bio_error = softc->error_inject;
bp->bio_resid = bp->bio_bcount;
bp->bio_flags |= BIO_ERROR;
softc->error_inject = 0;
}
}
if (bp != NULL)
biotrack(bp, __func__);
LIST_REMOVE(&done_ccb->ccb_h, periph_links.le);
if (LIST_EMPTY(&softc->pending_ccbs))
softc->flags |= DA_FLAG_WAS_OTAG;
/*
* We need to call cam_iosched before we call biodone so that we don't
* measure any activity that happens in the completion routine, which in
* the case of sendfile can be quite extensive. Release the periph
* refcount taken in dastart() for each CCB.
*/
cam_iosched_bio_complete(softc->cam_iosched, bp, done_ccb);
xpt_release_ccb(done_ccb);
KASSERT(softc->refcount >= 1, ("dadone softc %p refcount %d", softc, softc->refcount));
softc->refcount--;
if (state == DA_CCB_DELETE) {
TAILQ_HEAD(, bio) queue;
TAILQ_INIT(&queue);
TAILQ_CONCAT(&queue, &softc->delete_run_queue.queue, bio_queue);
softc->delete_run_queue.insert_point = NULL;
/*
* Normally, the xpt_release_ccb() above would make sure
* that when we have more work to do, that work would
* get kicked off. However, we specifically keep
* delete_running set to 0 before the call above to
* allow other I/O to progress when many BIO_DELETE
* requests are pushed down. We set delete_running to 0
* and call daschedule again so that we don't stall if
* there are no other I/Os pending apart from BIO_DELETEs.
*/
cam_iosched_trim_done(softc->cam_iosched);
daschedule(periph);
cam_periph_unlock(periph);
while ((bp1 = TAILQ_FIRST(&queue)) != NULL) {
TAILQ_REMOVE(&queue, bp1, bio_queue);
bp1->bio_error = bp->bio_error;
if (bp->bio_flags & BIO_ERROR) {
bp1->bio_flags |= BIO_ERROR;
bp1->bio_resid = bp1->bio_bcount;
} else
bp1->bio_resid = 0;
biodone(bp1);
}
} else {
daschedule(periph);
cam_periph_unlock(periph);
}
if (bp != NULL)
biodone(bp);
return;
}
static void
dadone_probewp(struct cam_periph *periph, union ccb *done_ccb)
{
struct scsi_mode_header_6 *mode_hdr6;
struct scsi_mode_header_10 *mode_hdr10;
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
uint8_t dev_spec;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probewp\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
cam_periph_assert(periph, MA_OWNED);
if (softc->minimum_cmd_size > 6) {
mode_hdr10 = (struct scsi_mode_header_10 *)csio->data_ptr;
dev_spec = mode_hdr10->dev_spec;
} else {
mode_hdr6 = (struct scsi_mode_header_6 *)csio->data_ptr;
dev_spec = mode_hdr6->dev_spec;
}
if (cam_ccb_status(done_ccb) == CAM_REQ_CMP) {
if ((dev_spec & 0x80) != 0)
softc->disk->d_flags |= DISKFLAG_WRITE_PROTECT;
else
softc->disk->d_flags &= ~DISKFLAG_WRITE_PROTECT;
} else {
int error;
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(csio->data_ptr, M_SCSIDA);
xpt_release_ccb(done_ccb);
if ((softc->flags & DA_FLAG_CAN_RC16) != 0)
softc->state = DA_STATE_PROBE_RC16;
else
softc->state = DA_STATE_PROBE_RC;
xpt_schedule(periph, priority);
return;
}
static void
dadone_proberc(struct cam_periph *periph, union ccb *done_ccb)
{
struct scsi_read_capacity_data *rdcap;
struct scsi_read_capacity_data_long *rcaplong;
struct da_softc *softc;
struct ccb_scsiio *csio;
da_ccb_state state;
char *announce_buf;
u_int32_t priority;
int lbp;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_proberc\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
state = csio->ccb_h.ccb_state & DA_CCB_TYPE_MASK;
lbp = 0;
rdcap = NULL;
rcaplong = NULL;
/* XXX TODO: can this be a malloc? */
announce_buf = softc->announce_temp;
bzero(announce_buf, DA_ANNOUNCETMP_SZ);
if (state == DA_CCB_PROBE_RC)
rdcap = (struct scsi_read_capacity_data *)csio->data_ptr;
else
rcaplong = (struct scsi_read_capacity_data_long *)
csio->data_ptr;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
struct disk_params *dp;
uint32_t block_size;
uint64_t maxsector;
u_int lalba; /* Lowest aligned LBA. */
if (state == DA_CCB_PROBE_RC) {
block_size = scsi_4btoul(rdcap->length);
maxsector = scsi_4btoul(rdcap->addr);
lalba = 0;
/*
* According to SBC-2, if the standard 10
* byte READ CAPACITY command returns 2^32,
* we should issue the 16 byte version of
* the command, since the device in question
* has more sectors than can be represented
* with the short version of the command.
*/
if (maxsector == 0xffffffff) {
free(rdcap, M_SCSIDA);
xpt_release_ccb(done_ccb);
softc->state = DA_STATE_PROBE_RC16;
xpt_schedule(periph, priority);
return;
}
} else {
block_size = scsi_4btoul(rcaplong->length);
maxsector = scsi_8btou64(rcaplong->addr);
lalba = scsi_2btoul(rcaplong->lalba_lbp);
}
/*
* Because the GEOM code will panic if we
* give it an 'illegal' value, we avoid
* that here.
*/
if (block_size == 0) {
block_size = 512;
if (maxsector == 0)
maxsector = -1;
}
if (block_size >= MAXPHYS) {
xpt_print(periph->path,
"unsupportable block size %ju\n",
(uintmax_t) block_size);
announce_buf = NULL;
cam_periph_invalidate(periph);
} else {
/*
* We pass rcaplong into dasetgeom(),
* because it will only use it if it is
* non-NULL.
*/
dasetgeom(periph, block_size, maxsector,
rcaplong, sizeof(*rcaplong));
lbp = (lalba & SRC16_LBPME_A);
dp = &softc->params;
snprintf(announce_buf, DA_ANNOUNCETMP_SZ,
"%juMB (%ju %u byte sectors)",
((uintmax_t)dp->secsize * dp->sectors) /
(1024 * 1024),
(uintmax_t)dp->sectors, dp->secsize);
}
} else {
int error;
/*
* Retry any UNIT ATTENTION type errors. They
* are expected at boot.
*/
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART) {
/*
* A retry was scheduled, so
* just return.
*/
return;
} else if (error != 0) {
int asc, ascq;
int sense_key, error_code;
int have_sense;
cam_status status;
struct ccb_getdev cgd;
/* Don't wedge this device's queue */
status = done_ccb->ccb_h.status;
if ((status & CAM_DEV_QFRZN) != 0)
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
xpt_setup_ccb(&cgd.ccb_h, done_ccb->ccb_h.path,
CAM_PRIORITY_NORMAL);
cgd.ccb_h.func_code = XPT_GDEV_TYPE;
xpt_action((union ccb *)&cgd);
if (scsi_extract_sense_ccb(done_ccb,
&error_code, &sense_key, &asc, &ascq))
have_sense = TRUE;
else
have_sense = FALSE;
/*
* If we tried READ CAPACITY(16) and failed,
* fall back to READ CAPACITY(10).
*/
if ((state == DA_CCB_PROBE_RC16) &&
(softc->flags & DA_FLAG_CAN_RC16) &&
(((csio->ccb_h.status & CAM_STATUS_MASK) ==
CAM_REQ_INVALID) ||
((have_sense) &&
(error_code == SSD_CURRENT_ERROR ||
error_code == SSD_DESC_CURRENT_ERROR) &&
(sense_key == SSD_KEY_ILLEGAL_REQUEST)))) {
cam_periph_assert(periph, MA_OWNED);
softc->flags &= ~DA_FLAG_CAN_RC16;
free(rdcap, M_SCSIDA);
xpt_release_ccb(done_ccb);
softc->state = DA_STATE_PROBE_RC;
xpt_schedule(periph, priority);
return;
}
/*
* Attach to anything that claims to be a
* direct access or optical disk device,
* as long as it doesn't return a "Logical
* unit not supported" (0x25) error.
* "Internal Target Failure" (0x44) is also
* special and typically means that the
* device is a SATA drive behind a SATL
* translation that's fallen into a
* terminally fatal state.
*/
if ((have_sense)
&& (asc != 0x25) && (asc != 0x44)
&& (error_code == SSD_CURRENT_ERROR
|| error_code == SSD_DESC_CURRENT_ERROR)) {
const char *sense_key_desc;
const char *asc_desc;
dasetgeom(periph, 512, -1, NULL, 0);
scsi_sense_desc(sense_key, asc, ascq,
&cgd.inq_data, &sense_key_desc,
&asc_desc);
snprintf(announce_buf, DA_ANNOUNCETMP_SZ,
"Attempt to query device "
"size failed: %s, %s",
sense_key_desc, asc_desc);
} else {
if (have_sense)
scsi_sense_print(&done_ccb->csio);
else {
xpt_print(periph->path,
"got CAM status %#x\n",
done_ccb->ccb_h.status);
}
xpt_print(periph->path, "fatal error, "
"failed to attach to device\n");
announce_buf = NULL;
/*
* Free up resources.
*/
cam_periph_invalidate(periph);
}
}
}
free(csio->data_ptr, M_SCSIDA);
if (announce_buf != NULL &&
((softc->flags & DA_FLAG_ANNOUNCED) == 0)) {
struct sbuf sb;
sbuf_new(&sb, softc->announcebuf, DA_ANNOUNCE_SZ,
SBUF_FIXEDLEN);
xpt_announce_periph_sbuf(periph, &sb, announce_buf);
xpt_announce_quirks_sbuf(periph, &sb, softc->quirks,
DA_Q_BIT_STRING);
sbuf_finish(&sb);
sbuf_putbuf(&sb);
/*
* Create our sysctl variables, now that we know
* we have successfully attached.
*/
/* increase the refcount */
if (da_periph_acquire(periph, DA_REF_SYSCTL) == 0) {
taskqueue_enqueue(taskqueue_thread,
&softc->sysctl_task);
} else {
/* XXX This message is useless! */
xpt_print(periph->path, "fatal error, "
"could not acquire reference count\n");
}
}
/* We already probed the device. */
if (softc->flags & DA_FLAG_PROBED) {
daprobedone(periph, done_ccb);
return;
}
/* Ensure re-probe doesn't see old delete. */
softc->delete_available = 0;
dadeleteflag(softc, DA_DELETE_ZERO, 1);
if (lbp && (softc->quirks & DA_Q_NO_UNMAP) == 0) {
/*
* Based on older SBC-3 spec revisions
* any of the UNMAP methods "may" be
* available via LBP given this flag so
* we flag all of them as available and
* then remove those which further
* probes confirm aren't available
* later.
*
* We could also check readcap(16) p_type
* flag to exclude one or more invalid
* write same (X) types here
*/
dadeleteflag(softc, DA_DELETE_WS16, 1);
dadeleteflag(softc, DA_DELETE_WS10, 1);
dadeleteflag(softc, DA_DELETE_UNMAP, 1);
xpt_release_ccb(done_ccb);
softc->state = DA_STATE_PROBE_LBP;
xpt_schedule(periph, priority);
return;
}
xpt_release_ccb(done_ccb);
softc->state = DA_STATE_PROBE_BDC;
xpt_schedule(periph, priority);
return;
}
static void
dadone_probelbp(struct cam_periph *periph, union ccb *done_ccb)
{
struct scsi_vpd_logical_block_prov *lbp;
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probelbp\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
lbp = (struct scsi_vpd_logical_block_prov *)csio->data_ptr;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
/*
* T10/1799-D Revision 31 states at least one of these
* must be supported but we don't currently enforce this.
*/
dadeleteflag(softc, DA_DELETE_WS16,
(lbp->flags & SVPD_LBP_WS16));
dadeleteflag(softc, DA_DELETE_WS10,
(lbp->flags & SVPD_LBP_WS10));
dadeleteflag(softc, DA_DELETE_UNMAP,
(lbp->flags & SVPD_LBP_UNMAP));
} else {
int error;
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
/*
* Failure indicates we don't support any SBC-3
* delete methods with UNMAP
*/
}
}
free(lbp, M_SCSIDA);
xpt_release_ccb(done_ccb);
softc->state = DA_STATE_PROBE_BLK_LIMITS;
xpt_schedule(periph, priority);
return;
}
static void
dadone_probeblklimits(struct cam_periph *periph, union ccb *done_ccb)
{
struct scsi_vpd_block_limits *block_limits;
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probeblklimits\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
block_limits = (struct scsi_vpd_block_limits *)csio->data_ptr;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
uint32_t max_txfer_len = scsi_4btoul(
block_limits->max_txfer_len);
uint32_t max_unmap_lba_cnt = scsi_4btoul(
block_limits->max_unmap_lba_cnt);
uint32_t max_unmap_blk_cnt = scsi_4btoul(
block_limits->max_unmap_blk_cnt);
uint32_t unmap_gran = scsi_4btoul(
block_limits->opt_unmap_grain);
uint32_t unmap_gran_align = scsi_4btoul(
block_limits->unmap_grain_align);
uint64_t ws_max_blks = scsi_8btou64(
block_limits->max_write_same_length);
if (max_txfer_len != 0) {
softc->disk->d_maxsize = MIN(softc->maxio,
(off_t)max_txfer_len * softc->params.secsize);
}
/*
* We should already support UNMAP, but we check the
* LBA and block counts to be sure.
*/
if (max_unmap_lba_cnt != 0x00L &&
max_unmap_blk_cnt != 0x00L) {
softc->unmap_max_lba = max_unmap_lba_cnt;
softc->unmap_max_ranges = min(max_unmap_blk_cnt,
UNMAP_MAX_RANGES);
if (unmap_gran > 1) {
softc->unmap_gran = unmap_gran;
if (unmap_gran_align & 0x80000000) {
softc->unmap_gran_align =
unmap_gran_align & 0x7fffffff;
}
}
} else {
/*
* Unexpected UNMAP limits, which means the
* device doesn't actually support UNMAP.
*/
dadeleteflag(softc, DA_DELETE_UNMAP, 0);
}
if (ws_max_blks != 0x00L)
softc->ws_max_blks = ws_max_blks;
} else {
int error;
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
/*
* Failure here doesn't mean UNMAP is not
* supported as this is an optional page.
*/
softc->unmap_max_lba = 1;
softc->unmap_max_ranges = 1;
}
}
free(block_limits, M_SCSIDA);
xpt_release_ccb(done_ccb);
softc->state = DA_STATE_PROBE_BDC;
xpt_schedule(periph, priority);
return;
}
static void
dadone_probebdc(struct cam_periph *periph, union ccb *done_ccb)
{
struct scsi_vpd_block_device_characteristics *bdc;
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probebdc\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
bdc = (struct scsi_vpd_block_device_characteristics *)csio->data_ptr;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
uint32_t valid_len;
/*
* Disable queue sorting for non-rotational media
* by default.
*/
u_int16_t old_rate = softc->disk->d_rotation_rate;
valid_len = csio->dxfer_len - csio->resid;
if (SBDC_IS_PRESENT(bdc, valid_len,
medium_rotation_rate)) {
softc->disk->d_rotation_rate =
scsi_2btoul(bdc->medium_rotation_rate);
if (softc->disk->d_rotation_rate ==
SVPD_BDC_RATE_NON_ROTATING) {
cam_iosched_set_sort_queue(
softc->cam_iosched, 0);
softc->rotating = 0;
}
if (softc->disk->d_rotation_rate != old_rate) {
disk_attr_changed(softc->disk,
"GEOM::rotation_rate", M_NOWAIT);
}
}
if ((SBDC_IS_PRESENT(bdc, valid_len, flags))
&& (softc->zone_mode == DA_ZONE_NONE)) {
int ata_proto;
if (scsi_vpd_supported_page(periph,
SVPD_ATA_INFORMATION))
ata_proto = 1;
else
ata_proto = 0;
/*
* The Zoned field will only be set for
* Drive Managed and Host Aware drives. If
* they are Host Managed, the device type
* in the standard INQUIRY data should be
* set to T_ZBC_HM (0x14).
*/
if ((bdc->flags & SVPD_ZBC_MASK) ==
SVPD_HAW_ZBC) {
softc->zone_mode = DA_ZONE_HOST_AWARE;
softc->zone_interface = (ata_proto) ?
DA_ZONE_IF_ATA_SAT : DA_ZONE_IF_SCSI;
} else if ((bdc->flags & SVPD_ZBC_MASK) ==
SVPD_DM_ZBC) {
softc->zone_mode = DA_ZONE_DRIVE_MANAGED;
softc->zone_interface = (ata_proto) ?
DA_ZONE_IF_ATA_SAT : DA_ZONE_IF_SCSI;
} else if ((bdc->flags & SVPD_ZBC_MASK) !=
SVPD_ZBC_NR) {
xpt_print(periph->path, "Unknown zoned "
"type %#x",
bdc->flags & SVPD_ZBC_MASK);
}
}
} else {
int error;
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(bdc, M_SCSIDA);
xpt_release_ccb(done_ccb);
softc->state = DA_STATE_PROBE_ATA;
xpt_schedule(periph, priority);
return;
}
static void
dadone_probeata(struct cam_periph *periph, union ccb *done_ccb)
{
struct ata_params *ata_params;
struct ccb_scsiio *csio;
struct da_softc *softc;
u_int32_t priority;
int continue_probe;
int error, i;
uint16_t *ptr;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probeata\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
ata_params = (struct ata_params *)csio->data_ptr;
ptr = (uint16_t *)ata_params;
continue_probe = 0;
error = 0;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
uint16_t old_rate;
for (i = 0; i < sizeof(*ata_params) / 2; i++)
ptr[i] = le16toh(ptr[i]);
if (ata_params->support_dsm & ATA_SUPPORT_DSM_TRIM &&
(softc->quirks & DA_Q_NO_UNMAP) == 0) {
dadeleteflag(softc, DA_DELETE_ATA_TRIM, 1);
if (ata_params->max_dsm_blocks != 0)
softc->trim_max_ranges = min(
softc->trim_max_ranges,
ata_params->max_dsm_blocks *
ATA_DSM_BLK_RANGES);
}
/*
* Disable queue sorting for non-rotational media
* by default.
*/
old_rate = softc->disk->d_rotation_rate;
softc->disk->d_rotation_rate = ata_params->media_rotation_rate;
if (softc->disk->d_rotation_rate == ATA_RATE_NON_ROTATING) {
cam_iosched_set_sort_queue(softc->cam_iosched, 0);
softc->rotating = 0;
}
if (softc->disk->d_rotation_rate != old_rate) {
disk_attr_changed(softc->disk,
"GEOM::rotation_rate", M_NOWAIT);
}
cam_periph_assert(periph, MA_OWNED);
if (ata_params->capabilities1 & ATA_SUPPORT_DMA)
softc->flags |= DA_FLAG_CAN_ATA_DMA;
if (ata_params->support.extension & ATA_SUPPORT_GENLOG)
softc->flags |= DA_FLAG_CAN_ATA_LOG;
/*
* At this point, if we have a SATA host aware drive,
* we communicate via ATA passthrough unless the
* SAT layer supports ZBC -> ZAC translation. In
* that case, we prefer to use the translated
* SCSI interface.
*
* XXX KDM figure out how to detect a host managed
* SATA drive.
*/
if (softc->zone_mode == DA_ZONE_NONE) {
/*
* Note that we don't override the zone
* mode or interface if it has already been
* set. This is because it has either been
* set as a quirk, or when we probed the
* SCSI Block Device Characteristics page,
* the zoned field was set. The latter
* means that the SAT layer supports ZBC to
* ZAC translation, and we would prefer to
* use that if it is available.
*/
if ((ata_params->support3 &
ATA_SUPPORT_ZONE_MASK) ==
ATA_SUPPORT_ZONE_HOST_AWARE) {
softc->zone_mode = DA_ZONE_HOST_AWARE;
softc->zone_interface =
DA_ZONE_IF_ATA_PASS;
} else if ((ata_params->support3 &
ATA_SUPPORT_ZONE_MASK) ==
ATA_SUPPORT_ZONE_DEV_MANAGED) {
softc->zone_mode = DA_ZONE_DRIVE_MANAGED;
softc->zone_interface = DA_ZONE_IF_ATA_PASS;
}
}
} else {
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(ata_params, M_SCSIDA);
if ((softc->zone_mode == DA_ZONE_HOST_AWARE)
|| (softc->zone_mode == DA_ZONE_HOST_MANAGED)) {
/*
* If the ATA IDENTIFY failed, we could be talking
* to a SCSI drive, although that seems unlikely,
* since the drive did report that it supported the
* ATA Information VPD page. If the ATA IDENTIFY
* succeeded, and the SAT layer doesn't support
* ZBC -> ZAC translation, continue on to get the
* directory of ATA logs, and complete the rest of
* the ZAC probe. If the SAT layer does support
* ZBC -> ZAC translation, we want to use that,
* and we'll probe the SCSI Zoned Block Device
* Characteristics VPD page next.
*/
if ((error == 0)
&& (softc->flags & DA_FLAG_CAN_ATA_LOG)
&& (softc->zone_interface == DA_ZONE_IF_ATA_PASS))
softc->state = DA_STATE_PROBE_ATA_LOGDIR;
else
softc->state = DA_STATE_PROBE_ZONE;
continue_probe = 1;
}
if (continue_probe != 0) {
xpt_release_ccb(done_ccb);
xpt_schedule(periph, priority);
return;
} else
daprobedone(periph, done_ccb);
return;
}
static void
dadone_probeatalogdir(struct cam_periph *periph, union ccb *done_ccb)
{
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
int error;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probeatalogdir\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
error = 0;
softc->valid_logdir_len = 0;
bzero(&softc->ata_logdir, sizeof(softc->ata_logdir));
softc->valid_logdir_len = csio->dxfer_len - csio->resid;
if (softc->valid_logdir_len > 0)
bcopy(csio->data_ptr, &softc->ata_logdir,
min(softc->valid_logdir_len,
sizeof(softc->ata_logdir)));
/*
* Figure out whether the Identify Device log is
* supported. The General Purpose log directory
* has a header, and lists the number of pages
* available for each GP log identified by the
* offset into the list.
*/
if ((softc->valid_logdir_len >=
((ATA_IDENTIFY_DATA_LOG + 1) * sizeof(uint16_t)))
&& (le16dec(softc->ata_logdir.header) ==
ATA_GP_LOG_DIR_VERSION)
&& (le16dec(&softc->ata_logdir.num_pages[
(ATA_IDENTIFY_DATA_LOG *
sizeof(uint16_t)) - sizeof(uint16_t)]) > 0)) {
softc->flags |= DA_FLAG_CAN_ATA_IDLOG;
} else {
softc->flags &= ~DA_FLAG_CAN_ATA_IDLOG;
}
} else {
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
/*
* If we can't get the ATA log directory,
* then ATA logs are effectively not
* supported even if the bit is set in the
* identify data.
*/
softc->flags &= ~(DA_FLAG_CAN_ATA_LOG |
DA_FLAG_CAN_ATA_IDLOG);
if ((done_ccb->ccb_h.status &
CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(csio->data_ptr, M_SCSIDA);
if ((error == 0)
&& (softc->flags & DA_FLAG_CAN_ATA_IDLOG)) {
softc->state = DA_STATE_PROBE_ATA_IDDIR;
xpt_release_ccb(done_ccb);
xpt_schedule(periph, priority);
return;
}
daprobedone(periph, done_ccb);
return;
}
static void
dadone_probeataiddir(struct cam_periph *periph, union ccb *done_ccb)
{
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
int error;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probeataiddir\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
off_t entries_offset, max_entries;
error = 0;
softc->valid_iddir_len = 0;
bzero(&softc->ata_iddir, sizeof(softc->ata_iddir));
softc->flags &= ~(DA_FLAG_CAN_ATA_SUPCAP |
DA_FLAG_CAN_ATA_ZONE);
softc->valid_iddir_len = csio->dxfer_len - csio->resid;
if (softc->valid_iddir_len > 0)
bcopy(csio->data_ptr, &softc->ata_iddir,
min(softc->valid_iddir_len,
sizeof(softc->ata_iddir)));
entries_offset =
__offsetof(struct ata_identify_log_pages, entries);
max_entries = softc->valid_iddir_len - entries_offset;
if ((softc->valid_iddir_len > (entries_offset + 1))
&& (le64dec(softc->ata_iddir.header) == ATA_IDLOG_REVISION)
&& (softc->ata_iddir.entry_count > 0)) {
int num_entries, i;
num_entries = softc->ata_iddir.entry_count;
num_entries = min(num_entries,
softc->valid_iddir_len - entries_offset);
for (i = 0; i < num_entries && i < max_entries; i++) {
if (softc->ata_iddir.entries[i] ==
ATA_IDL_SUP_CAP)
softc->flags |= DA_FLAG_CAN_ATA_SUPCAP;
else if (softc->ata_iddir.entries[i] ==
ATA_IDL_ZDI)
softc->flags |= DA_FLAG_CAN_ATA_ZONE;
if ((softc->flags & DA_FLAG_CAN_ATA_SUPCAP)
&& (softc->flags & DA_FLAG_CAN_ATA_ZONE))
break;
}
}
} else {
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
/*
* If we can't get the ATA Identify Data log
* directory, then it effectively isn't
* supported even if the ATA Log directory
* lists a non-zero number of pages present
* for this log.
*/
softc->flags &= ~DA_FLAG_CAN_ATA_IDLOG;
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(csio->data_ptr, M_SCSIDA);
if ((error == 0) && (softc->flags & DA_FLAG_CAN_ATA_SUPCAP)) {
softc->state = DA_STATE_PROBE_ATA_SUP;
xpt_release_ccb(done_ccb);
xpt_schedule(periph, priority);
return;
}
daprobedone(periph, done_ccb);
return;
}
static void
dadone_probeatasup(struct cam_periph *periph, union ccb *done_ccb)
{
struct da_softc *softc;
struct ccb_scsiio *csio;
u_int32_t priority;
int error;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probeatasup\n"));
softc = (struct da_softc *)periph->softc;
priority = done_ccb->ccb_h.pinfo.priority;
csio = &done_ccb->csio;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
uint32_t valid_len;
size_t needed_size;
struct ata_identify_log_sup_cap *sup_cap;
error = 0;
sup_cap = (struct ata_identify_log_sup_cap *)csio->data_ptr;
valid_len = csio->dxfer_len - csio->resid;
needed_size = __offsetof(struct ata_identify_log_sup_cap,
sup_zac_cap) + 1 + sizeof(sup_cap->sup_zac_cap);
if (valid_len >= needed_size) {
uint64_t zoned, zac_cap;
zoned = le64dec(sup_cap->zoned_cap);
if (zoned & ATA_ZONED_VALID) {
/*
* This should have already been
* set, because this is also in the
* ATA identify data.
*/
if ((zoned & ATA_ZONED_MASK) ==
ATA_SUPPORT_ZONE_HOST_AWARE)
softc->zone_mode = DA_ZONE_HOST_AWARE;
else if ((zoned & ATA_ZONED_MASK) ==
ATA_SUPPORT_ZONE_DEV_MANAGED)
softc->zone_mode =
DA_ZONE_DRIVE_MANAGED;
}
zac_cap = le64dec(sup_cap->sup_zac_cap);
if (zac_cap & ATA_SUP_ZAC_CAP_VALID) {
if (zac_cap & ATA_REPORT_ZONES_SUP)
softc->zone_flags |=
DA_ZONE_FLAG_RZ_SUP;
if (zac_cap & ATA_ND_OPEN_ZONE_SUP)
softc->zone_flags |=
DA_ZONE_FLAG_OPEN_SUP;
if (zac_cap & ATA_ND_CLOSE_ZONE_SUP)
softc->zone_flags |=
DA_ZONE_FLAG_CLOSE_SUP;
if (zac_cap & ATA_ND_FINISH_ZONE_SUP)
softc->zone_flags |=
DA_ZONE_FLAG_FINISH_SUP;
if (zac_cap & ATA_ND_RWP_SUP)
softc->zone_flags |=
DA_ZONE_FLAG_RWP_SUP;
} else {
/*
* This field was introduced in
* ACS-4, r08 on April 28th, 2015.
* If the drive firmware was written
* to an earlier spec, it won't have
* the field. So, assume all
* commands are supported.
*/
softc->zone_flags |= DA_ZONE_FLAG_SUP_MASK;
}
}
} else {
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
/*
* If we can't get the ATA Identify Data
* Supported Capabilities page, clear the
* flag...
*/
softc->flags &= ~DA_FLAG_CAN_ATA_SUPCAP;
/*
* And clear zone capabilities.
*/
softc->zone_flags &= ~DA_ZONE_FLAG_SUP_MASK;
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(csio->data_ptr, M_SCSIDA);
if ((error == 0) && (softc->flags & DA_FLAG_CAN_ATA_ZONE)) {
softc->state = DA_STATE_PROBE_ATA_ZONE;
xpt_release_ccb(done_ccb);
xpt_schedule(periph, priority);
return;
}
daprobedone(periph, done_ccb);
return;
}
static void
dadone_probeatazone(struct cam_periph *periph, union ccb *done_ccb)
{
struct da_softc *softc;
struct ccb_scsiio *csio;
int error;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probeatazone\n"));
softc = (struct da_softc *)periph->softc;
csio = &done_ccb->csio;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
struct ata_zoned_info_log *zi_log;
uint32_t valid_len;
size_t needed_size;
zi_log = (struct ata_zoned_info_log *)csio->data_ptr;
valid_len = csio->dxfer_len - csio->resid;
needed_size = __offsetof(struct ata_zoned_info_log,
version_info) + 1 + sizeof(zi_log->version_info);
if (valid_len >= needed_size) {
uint64_t tmpvar;
tmpvar = le64dec(zi_log->zoned_cap);
if (tmpvar & ATA_ZDI_CAP_VALID) {
if (tmpvar & ATA_ZDI_CAP_URSWRZ)
softc->zone_flags |=
DA_ZONE_FLAG_URSWRZ;
else
softc->zone_flags &=
~DA_ZONE_FLAG_URSWRZ;
}
tmpvar = le64dec(zi_log->optimal_seq_zones);
if (tmpvar & ATA_ZDI_OPT_SEQ_VALID) {
softc->zone_flags |= DA_ZONE_FLAG_OPT_SEQ_SET;
softc->optimal_seq_zones = (tmpvar &
ATA_ZDI_OPT_SEQ_MASK);
} else {
softc->zone_flags &= ~DA_ZONE_FLAG_OPT_SEQ_SET;
softc->optimal_seq_zones = 0;
}
tmpvar = le64dec(zi_log->optimal_nonseq_zones);
if (tmpvar & ATA_ZDI_OPT_NS_VALID) {
softc->zone_flags |=
DA_ZONE_FLAG_OPT_NONSEQ_SET;
softc->optimal_nonseq_zones =
(tmpvar & ATA_ZDI_OPT_NS_MASK);
} else {
softc->zone_flags &=
~DA_ZONE_FLAG_OPT_NONSEQ_SET;
softc->optimal_nonseq_zones = 0;
}
tmpvar = le64dec(zi_log->max_seq_req_zones);
if (tmpvar & ATA_ZDI_MAX_SEQ_VALID) {
softc->zone_flags |= DA_ZONE_FLAG_MAX_SEQ_SET;
softc->max_seq_zones =
(tmpvar & ATA_ZDI_MAX_SEQ_MASK);
} else {
softc->zone_flags &= ~DA_ZONE_FLAG_MAX_SEQ_SET;
softc->max_seq_zones = 0;
}
}
} else {
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
softc->flags &= ~DA_FLAG_CAN_ATA_ZONE;
softc->zone_flags &= ~DA_ZONE_FLAG_SET_MASK;
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(csio->data_ptr, M_SCSIDA);
daprobedone(periph, done_ccb);
return;
}
static void
dadone_probezone(struct cam_periph *periph, union ccb *done_ccb)
{
struct da_softc *softc;
struct ccb_scsiio *csio;
int error;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_probezone\n"));
softc = (struct da_softc *)periph->softc;
csio = &done_ccb->csio;
cam_periph_assert(periph, MA_OWNED);
if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
uint32_t valid_len;
size_t needed_len;
struct scsi_vpd_zoned_bdc *zoned_bdc;
error = 0;
zoned_bdc = (struct scsi_vpd_zoned_bdc *)csio->data_ptr;
valid_len = csio->dxfer_len - csio->resid;
needed_len = __offsetof(struct scsi_vpd_zoned_bdc,
max_seq_req_zones) + 1 +
sizeof(zoned_bdc->max_seq_req_zones);
if ((valid_len >= needed_len)
&& (scsi_2btoul(zoned_bdc->page_length) >= SVPD_ZBDC_PL)) {
if (zoned_bdc->flags & SVPD_ZBDC_URSWRZ)
softc->zone_flags |= DA_ZONE_FLAG_URSWRZ;
else
softc->zone_flags &= ~DA_ZONE_FLAG_URSWRZ;
softc->optimal_seq_zones =
scsi_4btoul(zoned_bdc->optimal_seq_zones);
softc->zone_flags |= DA_ZONE_FLAG_OPT_SEQ_SET;
softc->optimal_nonseq_zones = scsi_4btoul(
zoned_bdc->optimal_nonseq_zones);
softc->zone_flags |= DA_ZONE_FLAG_OPT_NONSEQ_SET;
softc->max_seq_zones =
scsi_4btoul(zoned_bdc->max_seq_req_zones);
softc->zone_flags |= DA_ZONE_FLAG_MAX_SEQ_SET;
}
/*
* All of the zone commands are mandatory for SCSI
* devices.
*
* XXX KDM this is valid as of September 2015.
* Re-check this assumption once the SAT spec is
* updated to support SCSI ZBC to ATA ZAC mapping.
* Since ATA allows zone commands to be reported
* as supported or not, this may not necessarily
* be true for an ATA device behind a SAT (SCSI to
* ATA Translation) layer.
*/
softc->zone_flags |= DA_ZONE_FLAG_SUP_MASK;
} else {
error = daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA|SF_NO_PRINT);
if (error == ERESTART)
return;
else if (error != 0) {
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
/* Don't wedge this device's queue */
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
}
}
free(csio->data_ptr, M_SCSIDA);
daprobedone(periph, done_ccb);
return;
}
static void
dadone_tur(struct cam_periph *periph, union ccb *done_ccb)
{
struct da_softc *softc;
struct ccb_scsiio *csio;
CAM_DEBUG(periph->path, CAM_DEBUG_TRACE, ("dadone_tur\n"));
softc = (struct da_softc *)periph->softc;
csio = &done_ccb->csio;
cam_periph_assert(periph, MA_OWNED);
if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
if (daerror(done_ccb, CAM_RETRY_SELTO,
SF_RETRY_UA | SF_NO_RECOVERY | SF_NO_PRINT) == ERESTART)
return; /* Will complete again, keep reference */
if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
cam_release_devq(done_ccb->ccb_h.path,
/*relsim_flags*/0,
/*reduction*/0,
/*timeout*/0,
/*getcount_only*/0);
}
xpt_release_ccb(done_ccb);
softc->flags &= ~DA_FLAG_TUR_PENDING;
da_periph_release_locked(periph, DA_REF_TUR);
return;
}
static void
dareprobe(struct cam_periph *periph)
{
struct da_softc *softc;
int status;
softc = (struct da_softc *)periph->softc;
/* Probe in progress; don't interfere. */
if (softc->state != DA_STATE_NORMAL)
return;
status = da_periph_acquire(periph, DA_REF_REPROBE);
KASSERT(status == 0, ("dareprobe: da_periph_acquire failed"));
softc->state = DA_STATE_PROBE_WP;
xpt_schedule(periph, CAM_PRIORITY_DEV);
}
static int
daerror(union ccb *ccb, u_int32_t cam_flags, u_int32_t sense_flags)
{
struct da_softc *softc;
struct cam_periph *periph;
int error, error_code, sense_key, asc, ascq;
#if defined(BUF_TRACKING) || defined(FULL_BUF_TRACKING)
if (ccb->csio.bio != NULL)
biotrack(ccb->csio.bio, __func__);
#endif
periph = xpt_path_periph(ccb->ccb_h.path);
softc = (struct da_softc *)periph->softc;
cam_periph_assert(periph, MA_OWNED);
/*
* Automatically detect devices that do not support
* READ(6)/WRITE(6) and upgrade to using 10 byte cdbs.
*/
error = 0;
if ((ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_INVALID) {
error = cmd6workaround(ccb);
} else if (scsi_extract_sense_ccb(ccb,
&error_code, &sense_key, &asc, &ascq)) {
if (sense_key == SSD_KEY_ILLEGAL_REQUEST)
error = cmd6workaround(ccb);
/*
* If the target replied with CAPACITY DATA HAS CHANGED UA,
* query the capacity and notify upper layers.
*/
else if (sense_key == SSD_KEY_UNIT_ATTENTION &&
asc == 0x2A && ascq == 0x09) {
xpt_print(periph->path, "Capacity data has changed\n");
softc->flags &= ~DA_FLAG_PROBED;
dareprobe(periph);
sense_flags |= SF_NO_PRINT;
} else if (sense_key == SSD_KEY_UNIT_ATTENTION &&
asc == 0x28 && ascq == 0x00) {
softc->flags &= ~DA_FLAG_PROBED;
disk_media_changed(softc->disk, M_NOWAIT);
} else if (sense_key == SSD_KEY_UNIT_ATTENTION &&
asc == 0x3F && ascq == 0x03) {
xpt_print(periph->path, "INQUIRY data has changed\n");
softc->flags &= ~DA_FLAG_PROBED;
dareprobe(periph);
sense_flags |= SF_NO_PRINT;
} else if (sense_key == SSD_KEY_NOT_READY &&
asc == 0x3a && (softc->flags & DA_FLAG_PACK_INVALID) == 0) {
softc->flags |= DA_FLAG_PACK_INVALID;
disk_media_gone(softc->disk, M_NOWAIT);
}
}
if (error == ERESTART)
return (ERESTART);
#ifdef CAM_IO_STATS
switch (ccb->ccb_h.status & CAM_STATUS_MASK) {
case CAM_CMD_TIMEOUT:
softc->timeouts++;
break;
case CAM_REQ_ABORTED:
case CAM_REQ_CMP_ERR:
case CAM_REQ_TERMIO:
case CAM_UNREC_HBA_ERROR:
case CAM_DATA_RUN_ERR:
softc->errors++;
break;
default:
break;
}
#endif
/*
* XXX
* Until we have a better way of doing pack validation,
* don't treat UAs as errors.
*/
sense_flags |= SF_RETRY_UA;
if (softc->quirks & DA_Q_RETRY_BUSY)
sense_flags |= SF_RETRY_BUSY;
return(cam_periph_error(ccb, cam_flags, sense_flags));
}
static void
damediapoll(void *arg)
{
struct cam_periph *periph = arg;
struct da_softc *softc = periph->softc;
if (!cam_iosched_has_work_flags(softc->cam_iosched, DA_WORK_TUR) &&
(softc->flags & DA_FLAG_TUR_PENDING) == 0 &&
LIST_EMPTY(&softc->pending_ccbs)) {
if (da_periph_acquire(periph, DA_REF_TUR) == 0) {
cam_iosched_set_work_flags(softc->cam_iosched, DA_WORK_TUR);
daschedule(periph);
}
}
/* Queue us up again */
if (da_poll_period != 0)
callout_schedule(&softc->mediapoll_c, da_poll_period * hz);
}
static void
daprevent(struct cam_periph *periph, int action)
{
struct da_softc *softc;
union ccb *ccb;
int error;
cam_periph_assert(periph, MA_OWNED);
softc = (struct da_softc *)periph->softc;
if (((action == PR_ALLOW)
&& (softc->flags & DA_FLAG_PACK_LOCKED) == 0)
|| ((action == PR_PREVENT)
&& (softc->flags & DA_FLAG_PACK_LOCKED) != 0)) {
return;
}
ccb = cam_periph_getccb(periph, CAM_PRIORITY_NORMAL);
scsi_prevent(&ccb->csio,
/*retries*/1,
/*cbcfp*/NULL,
MSG_SIMPLE_Q_TAG,
action,
SSD_FULL_SIZE,
5000);
error = cam_periph_runccb(ccb, daerror, CAM_RETRY_SELTO,
SF_RETRY_UA | SF_NO_PRINT, softc->disk->d_devstat);
if (error == 0) {
if (action == PR_ALLOW)
softc->flags &= ~DA_FLAG_PACK_LOCKED;
else
softc->flags |= DA_FLAG_PACK_LOCKED;
}
xpt_release_ccb(ccb);
}
static void
dasetgeom(struct cam_periph *periph, uint32_t block_len, uint64_t maxsector,
struct scsi_read_capacity_data_long *rcaplong, size_t rcap_len)
{
struct ccb_calc_geometry ccg;
struct da_softc *softc;
struct disk_params *dp;
u_int lbppbe, lalba;
int error;
softc = (struct da_softc *)periph->softc;
dp = &softc->params;
dp->secsize = block_len;
dp->sectors = maxsector + 1;
if (rcaplong != NULL) {
lbppbe = rcaplong->prot_lbppbe & SRC16_LBPPBE;
lalba = scsi_2btoul(rcaplong->lalba_lbp);
lalba &= SRC16_LALBA_A;
} else {
lbppbe = 0;
lalba = 0;
}
if (lbppbe > 0) {
dp->stripesize = block_len << lbppbe;
dp->stripeoffset = (dp->stripesize - block_len * lalba) %
dp->stripesize;
} else if (softc->quirks & DA_Q_4K) {
dp->stripesize = 4096;
dp->stripeoffset = 0;
} else if (softc->unmap_gran != 0) {
dp->stripesize = block_len * softc->unmap_gran;
dp->stripeoffset = (dp->stripesize - block_len *
softc->unmap_gran_align) % dp->stripesize;
} else {
dp->stripesize = 0;
dp->stripeoffset = 0;
}
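/*
 * Worked example (hypothetical values, for illustration only): a
 * drive reporting 512-byte logical blocks with lbppbe = 3 has
 * 4096-byte physical blocks, so stripesize = 512 << 3 = 4096.
 * If the lowest aligned LBA (lalba) is 1, then stripeoffset =
 * (4096 - 512 * 1) % 4096 = 3584, i.e. logical block 0 begins
 * 3584 bytes into a physical block.
 */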
/*
* Have the controller provide us with a geometry
* for this disk. The only time the geometry
* matters is when we boot and the controller
* is the only one knowledgeable enough to come
* up with something that will make this a bootable
* device.
*/
xpt_setup_ccb(&ccg.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
ccg.ccb_h.func_code = XPT_CALC_GEOMETRY;
ccg.block_size = dp->secsize;
ccg.volume_size = dp->sectors;
ccg.heads = 0;
ccg.secs_per_track = 0;
ccg.cylinders = 0;
xpt_action((union ccb*)&ccg);
if ((ccg.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
/*
* We don't know what went wrong here- but just pick
* a geometry so we don't have nasty things like divide
* by zero.
*/
dp->heads = 255;
dp->secs_per_track = 255;
dp->cylinders = dp->sectors / (255 * 255);
if (dp->cylinders == 0) {
dp->cylinders = 1;
}
} else {
dp->heads = ccg.heads;
dp->secs_per_track = ccg.secs_per_track;
dp->cylinders = ccg.cylinders;
}
/*
* If the user supplied a read capacity buffer, and if it is
* different than the previous buffer, update the data in the EDT.
* If it's the same, we don't bother. This avoids sending an
* update every time someone opens this device.
*/
if ((rcaplong != NULL)
&& (bcmp(rcaplong, &softc->rcaplong,
min(sizeof(softc->rcaplong), rcap_len)) != 0)) {
struct ccb_dev_advinfo cdai;
xpt_setup_ccb(&cdai.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
cdai.ccb_h.func_code = XPT_DEV_ADVINFO;
cdai.buftype = CDAI_TYPE_RCAPLONG;
cdai.flags = CDAI_FLAG_STORE;
cdai.bufsiz = rcap_len;
cdai.buf = (uint8_t *)rcaplong;
xpt_action((union ccb *)&cdai);
if ((cdai.ccb_h.status & CAM_DEV_QFRZN) != 0)
cam_release_devq(cdai.ccb_h.path, 0, 0, 0, FALSE);
if (cdai.ccb_h.status != CAM_REQ_CMP) {
xpt_print(periph->path, "%s: failed to set read "
"capacity advinfo\n", __func__);
/* Use cam_error_print() to decode the status */
cam_error_print((union ccb *)&cdai, CAM_ESF_CAM_STATUS,
CAM_EPF_ALL);
} else {
bcopy(rcaplong, &softc->rcaplong,
min(sizeof(softc->rcaplong), rcap_len));
}
}
softc->disk->d_sectorsize = softc->params.secsize;
softc->disk->d_mediasize = softc->params.secsize * (off_t)softc->params.sectors;
softc->disk->d_stripesize = softc->params.stripesize;
softc->disk->d_stripeoffset = softc->params.stripeoffset;
/* XXX: these are not actually "firmware" values, so they may be wrong */
softc->disk->d_fwsectors = softc->params.secs_per_track;
softc->disk->d_fwheads = softc->params.heads;
softc->disk->d_devstat->block_size = softc->params.secsize;
softc->disk->d_devstat->flags &= ~DEVSTAT_BS_UNAVAILABLE;
error = disk_resize(softc->disk, M_NOWAIT);
if (error != 0)
xpt_print(periph->path, "disk_resize(9) failed, error = %d\n", error);
}
static void
dasendorderedtag(void *arg)
{
struct cam_periph *periph = arg;
struct da_softc *softc = periph->softc;
cam_periph_assert(periph, MA_OWNED);
if (da_send_ordered) {
if (!LIST_EMPTY(&softc->pending_ccbs)) {
if ((softc->flags & DA_FLAG_WAS_OTAG) == 0)
softc->flags |= DA_FLAG_NEED_OTAG;
softc->flags &= ~DA_FLAG_WAS_OTAG;
}
}
/* Queue us up again */
callout_reset(&softc->sendordered_c,
(da_default_timeout * hz) / DA_ORDEREDTAG_INTERVAL,
dasendorderedtag, periph);
}
/*
* Step through all DA peripheral drivers, and if the device is still open,
* sync the disk cache to physical media.
*/
static void
dashutdown(void * arg, int howto)
{
struct cam_periph *periph;
struct da_softc *softc;
union ccb *ccb;
int error;
CAM_PERIPH_FOREACH(periph, &dadriver) {
softc = (struct da_softc *)periph->softc;
if (SCHEDULER_STOPPED()) {
/* If we panicked with the lock held, do not recurse. */
if (!cam_periph_owned(periph) &&
(softc->flags & DA_FLAG_OPEN)) {
dadump(softc->disk, NULL, 0, 0, 0);
}
continue;
}
cam_periph_lock(periph);
/*
* We only sync the cache if the drive is still open, and
* if the drive is capable of it.
*/
if (((softc->flags & DA_FLAG_OPEN) == 0)
|| (softc->quirks & DA_Q_NO_SYNC_CACHE)) {
cam_periph_unlock(periph);
continue;
}
ccb = cam_periph_getccb(periph, CAM_PRIORITY_NORMAL);
scsi_synchronize_cache(&ccb->csio,
/*retries*/0,
/*cbfcnp*/NULL,
MSG_SIMPLE_Q_TAG,
/*begin_lba*/0, /* whole disk */
/*lb_count*/0,
SSD_FULL_SIZE,
60 * 60 * 1000);
error = cam_periph_runccb(ccb, daerror, /*cam_flags*/0,
/*sense_flags*/ SF_NO_RECOVERY | SF_NO_RETRY | SF_QUIET_IR,
softc->disk->d_devstat);
if (error != 0)
xpt_print(periph->path, "Synchronize cache failed\n");
xpt_release_ccb(ccb);
cam_periph_unlock(periph);
}
}
#else /* !_KERNEL */
/*
* XXX These are only left out of the kernel build to silence warnings. If,
* for some reason these functions are used in the kernel, the ifdefs should
* be moved so they are included both in the kernel and userland.
*/
void
scsi_format_unit(struct ccb_scsiio *csio, u_int32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
u_int8_t tag_action, u_int8_t byte2, u_int16_t ileave,
u_int8_t *data_ptr, u_int32_t dxfer_len, u_int8_t sense_len,
u_int32_t timeout)
{
struct scsi_format_unit *scsi_cmd;
scsi_cmd = (struct scsi_format_unit *)&csio->cdb_io.cdb_bytes;
scsi_cmd->opcode = FORMAT_UNIT;
scsi_cmd->byte2 = byte2;
scsi_ulto2b(ileave, scsi_cmd->interleave);
cam_fill_csio(csio,
retries,
cbfcnp,
/*flags*/ (dxfer_len > 0) ? CAM_DIR_OUT : CAM_DIR_NONE,
tag_action,
data_ptr,
dxfer_len,
sense_len,
sizeof(*scsi_cmd),
timeout);
}
void
scsi_read_defects(struct ccb_scsiio *csio, uint32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
uint8_t tag_action, uint8_t list_format,
uint32_t addr_desc_index, uint8_t *data_ptr,
uint32_t dxfer_len, int minimum_cmd_size,
uint8_t sense_len, uint32_t timeout)
{
uint8_t cdb_len;
/*
* These conditions allow using the 10 byte command. Otherwise we
* need to use the 12 byte command.
*/
if ((minimum_cmd_size <= 10)
&& (addr_desc_index == 0)
&& (dxfer_len <= SRDD10_MAX_LENGTH)) {
struct scsi_read_defect_data_10 *cdb10;
cdb10 = (struct scsi_read_defect_data_10 *)
&csio->cdb_io.cdb_bytes;
cdb_len = sizeof(*cdb10);
bzero(cdb10, cdb_len);
cdb10->opcode = READ_DEFECT_DATA_10;
cdb10->format = list_format;
scsi_ulto2b(dxfer_len, cdb10->alloc_length);
} else {
struct scsi_read_defect_data_12 *cdb12;
cdb12 = (struct scsi_read_defect_data_12 *)
&csio->cdb_io.cdb_bytes;
cdb_len = sizeof(*cdb12);
bzero(cdb12, cdb_len);
cdb12->opcode = READ_DEFECT_DATA_12;
cdb12->format = list_format;
scsi_ulto4b(dxfer_len, cdb12->alloc_length);
scsi_ulto4b(addr_desc_index, cdb12->address_descriptor_index);
}
cam_fill_csio(csio,
retries,
cbfcnp,
/*flags*/ CAM_DIR_IN,
tag_action,
data_ptr,
dxfer_len,
sense_len,
cdb_len,
timeout);
}
void
scsi_sanitize(struct ccb_scsiio *csio, u_int32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
u_int8_t tag_action, u_int8_t byte2, u_int16_t control,
u_int8_t *data_ptr, u_int32_t dxfer_len, u_int8_t sense_len,
u_int32_t timeout)
{
struct scsi_sanitize *scsi_cmd;
scsi_cmd = (struct scsi_sanitize *)&csio->cdb_io.cdb_bytes;
scsi_cmd->opcode = SANITIZE;
scsi_cmd->byte2 = byte2;
scsi_cmd->control = control;
scsi_ulto2b(dxfer_len, scsi_cmd->length);
cam_fill_csio(csio,
retries,
cbfcnp,
/*flags*/ (dxfer_len > 0) ? CAM_DIR_OUT : CAM_DIR_NONE,
tag_action,
data_ptr,
dxfer_len,
sense_len,
sizeof(*scsi_cmd),
timeout);
}
#endif /* _KERNEL */
void
scsi_zbc_out(struct ccb_scsiio *csio, uint32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
uint8_t tag_action, uint8_t service_action, uint64_t zone_id,
uint8_t zone_flags, uint8_t *data_ptr, uint32_t dxfer_len,
uint8_t sense_len, uint32_t timeout)
{
struct scsi_zbc_out *scsi_cmd;
scsi_cmd = (struct scsi_zbc_out *)&csio->cdb_io.cdb_bytes;
scsi_cmd->opcode = ZBC_OUT;
scsi_cmd->service_action = service_action;
scsi_u64to8b(zone_id, scsi_cmd->zone_id);
scsi_cmd->zone_flags = zone_flags;
cam_fill_csio(csio,
retries,
cbfcnp,
/*flags*/ (dxfer_len > 0) ? CAM_DIR_OUT : CAM_DIR_NONE,
tag_action,
data_ptr,
dxfer_len,
sense_len,
sizeof(*scsi_cmd),
timeout);
}
void
scsi_zbc_in(struct ccb_scsiio *csio, uint32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
uint8_t tag_action, uint8_t service_action, uint64_t zone_start_lba,
uint8_t zone_options, uint8_t *data_ptr, uint32_t dxfer_len,
uint8_t sense_len, uint32_t timeout)
{
struct scsi_zbc_in *scsi_cmd;
scsi_cmd = (struct scsi_zbc_in *)&csio->cdb_io.cdb_bytes;
scsi_cmd->opcode = ZBC_IN;
scsi_cmd->service_action = service_action;
scsi_ulto4b(dxfer_len, scsi_cmd->length);
scsi_u64to8b(zone_start_lba, scsi_cmd->zone_start_lba);
scsi_cmd->zone_options = zone_options;
cam_fill_csio(csio,
retries,
cbfcnp,
/*flags*/ (dxfer_len > 0) ? CAM_DIR_IN : CAM_DIR_NONE,
tag_action,
data_ptr,
dxfer_len,
sense_len,
sizeof(*scsi_cmd),
timeout);
}
int
scsi_ata_zac_mgmt_out(struct ccb_scsiio *csio, uint32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
uint8_t tag_action, int use_ncq,
uint8_t zm_action, uint64_t zone_id, uint8_t zone_flags,
uint8_t *data_ptr, uint32_t dxfer_len,
uint8_t *cdb_storage, size_t cdb_storage_len,
uint8_t sense_len, uint32_t timeout)
{
uint8_t command_out, protocol, ata_flags;
uint16_t features_out;
uint32_t sectors_out, auxiliary;
int retval;
retval = 0;
if (use_ncq == 0) {
command_out = ATA_ZAC_MANAGEMENT_OUT;
features_out = (zm_action & 0xf) | (zone_flags << 8);
ata_flags = AP_FLAG_BYT_BLOK_BLOCKS;
if (dxfer_len == 0) {
protocol = AP_PROTO_NON_DATA;
ata_flags |= AP_FLAG_TLEN_NO_DATA;
sectors_out = 0;
} else {
protocol = AP_PROTO_DMA;
ata_flags |= AP_FLAG_TLEN_SECT_CNT |
AP_FLAG_TDIR_TO_DEV;
sectors_out = ((dxfer_len >> 9) & 0xffff);
}
auxiliary = 0;
} else {
ata_flags = AP_FLAG_BYT_BLOK_BLOCKS;
if (dxfer_len == 0) {
command_out = ATA_NCQ_NON_DATA;
features_out = ATA_NCQ_ZAC_MGMT_OUT;
/*
* We're assuming the SCSI to ATA translation layer
* will set the NCQ tag number in the tag field.
* That isn't clear from the SAT-4 spec (as of rev 05).
*/
sectors_out = 0;
ata_flags |= AP_FLAG_TLEN_NO_DATA;
} else {
command_out = ATA_SEND_FPDMA_QUEUED;
/*
* Note that we're defaulting to normal priority,
* and assuming that the SCSI to ATA translation
* layer will insert the NCQ tag number in the tag
* field. That isn't clear in the SAT-4 spec (as
* of rev 05).
*/
sectors_out = ATA_SFPDMA_ZAC_MGMT_OUT << 8;
ata_flags |= AP_FLAG_TLEN_FEAT |
AP_FLAG_TDIR_TO_DEV;
/*
* For SEND FPDMA QUEUED, the transfer length is
* encoded in the FEATURE register, and 0 means
* that 65536 512-byte blocks are to be transferred.
* In practice, it seems unlikely that we'll see
* a transfer that large, and it may confuse the
* SAT layer, because generally that means that
* 0 bytes should be transferred.
*/
if (dxfer_len == (65536 * 512)) {
features_out = 0;
} else if (dxfer_len <= (65535 * 512)) {
features_out = ((dxfer_len >> 9) & 0xffff);
} else {
/* The transfer is too big. */
retval = 1;
goto bailout;
}
}
auxiliary = (zm_action & 0xf) | (zone_flags << 8);
protocol = AP_PROTO_FPDMA;
}
protocol |= AP_EXTEND;
retval = scsi_ata_pass(csio,
retries,
cbfcnp,
/*flags*/ (dxfer_len > 0) ? CAM_DIR_OUT : CAM_DIR_NONE,
tag_action,
/*protocol*/ protocol,
/*ata_flags*/ ata_flags,
/*features*/ features_out,
/*sector_count*/ sectors_out,
/*lba*/ zone_id,
/*command*/ command_out,
/*device*/ 0,
/*icc*/ 0,
/*auxiliary*/ auxiliary,
/*control*/ 0,
/*data_ptr*/ data_ptr,
/*dxfer_len*/ dxfer_len,
/*cdb_storage*/ cdb_storage,
/*cdb_storage_len*/ cdb_storage_len,
/*minimum_cmd_size*/ 0,
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ timeout);
bailout:
return (retval);
}
int
scsi_ata_zac_mgmt_in(struct ccb_scsiio *csio, uint32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
uint8_t tag_action, int use_ncq,
uint8_t zm_action, uint64_t zone_id, uint8_t zone_flags,
uint8_t *data_ptr, uint32_t dxfer_len,
uint8_t *cdb_storage, size_t cdb_storage_len,
uint8_t sense_len, uint32_t timeout)
{
uint8_t command_out, protocol;
uint16_t features_out, sectors_out;
uint32_t auxiliary;
int ata_flags;
int retval;
retval = 0;
ata_flags = AP_FLAG_TDIR_FROM_DEV | AP_FLAG_BYT_BLOK_BLOCKS;
if (use_ncq == 0) {
command_out = ATA_ZAC_MANAGEMENT_IN;
/* XXX KDM put a macro here */
features_out = (zm_action & 0xf) | (zone_flags << 8);
sectors_out = dxfer_len >> 9; /* XXX KDM macro */
protocol = AP_PROTO_DMA;
ata_flags |= AP_FLAG_TLEN_SECT_CNT;
auxiliary = 0;
} else {
ata_flags |= AP_FLAG_TLEN_FEAT;
command_out = ATA_RECV_FPDMA_QUEUED;
sectors_out = ATA_RFPDMA_ZAC_MGMT_IN << 8;
/*
* For RECEIVE FPDMA QUEUED, the transfer length is
* encoded in the FEATURE register, and 0 means
* that 65536 512-byte blocks are to be transferred.
* In practice, it seems unlikely that we'll see
* a transfer that large, and it may confuse the
* SAT layer, because generally that means that
* 0 bytes should be transferred.
*/
if (dxfer_len == (65536 * 512)) {
features_out = 0;
} else if (dxfer_len <= (65535 * 512)) {
features_out = ((dxfer_len >> 9) & 0xffff);
} else {
/* The transfer is too big. */
retval = 1;
goto bailout;
}
auxiliary = (zm_action & 0xf) | (zone_flags << 8);
protocol = AP_PROTO_FPDMA;
}
protocol |= AP_EXTEND;
retval = scsi_ata_pass(csio,
retries,
cbfcnp,
/*flags*/ CAM_DIR_IN,
tag_action,
/*protocol*/ protocol,
/*ata_flags*/ ata_flags,
/*features*/ features_out,
/*sector_count*/ sectors_out,
/*lba*/ zone_id,
/*command*/ command_out,
/*device*/ 0,
/*icc*/ 0,
/*auxiliary*/ auxiliary,
/*control*/ 0,
/*data_ptr*/ data_ptr,
/*dxfer_len*/ (dxfer_len >> 9) * 512, /* XXX KDM */
/*cdb_storage*/ cdb_storage,
/*cdb_storage_len*/ cdb_storage_len,
/*minimum_cmd_size*/ 0,
/*sense_len*/ SSD_FULL_SIZE,
/*timeout*/ timeout);
bailout:
return (retval);
}
Index: projects/clang800-import/sys/compat/freebsd32/freebsd32_misc.c
===================================================================
--- projects/clang800-import/sys/compat/freebsd32/freebsd32_misc.c (revision 343955)
+++ projects/clang800-import/sys/compat/freebsd32/freebsd32_misc.c (revision 343956)
@@ -1,3471 +1,3474 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2002 Doug Rabson
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_inet.h"
#include "opt_inet6.h"
#include "opt_ktrace.h"
#define __ELF_WORD_SIZE 32
#ifdef COMPAT_FREEBSD11
#define _WANT_FREEBSD11_KEVENT
#endif
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/capsicum.h>
#include <sys/clock.h>
#include <sys/exec.h>
#include <sys/fcntl.h>
#include <sys/filedesc.h>
#include <sys/imgact.h>
#include <sys/jail.h>
#include <sys/kernel.h>
#include <sys/limits.h>
#include <sys/linker.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/file.h> /* Must come after sys/malloc.h */
#include <sys/imgact.h>
#include <sys/mbuf.h>
#include <sys/mman.h>
#include <sys/module.h>
#include <sys/mount.h>
#include <sys/mutex.h>
#include <sys/namei.h>
#include <sys/proc.h>
#include <sys/procctl.h>
#include <sys/reboot.h>
#include <sys/resource.h>
#include <sys/resourcevar.h>
#include <sys/selinfo.h>
#include <sys/eventvar.h> /* Must come after sys/selinfo.h */
#include <sys/pipe.h> /* Must come after sys/selinfo.h */
#include <sys/signal.h>
#include <sys/signalvar.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/syscallsubr.h>
#include <sys/sysctl.h>
#include <sys/sysent.h>
#include <sys/sysproto.h>
#include <sys/systm.h>
#include <sys/thr.h>
#include <sys/unistd.h>
#include <sys/ucontext.h>
#include <sys/vnode.h>
#include <sys/wait.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/sem.h>
#include <sys/shm.h>
#ifdef KTRACE
#include <sys/ktrace.h>
#endif
#ifdef INET
#include <netinet/in.h>
#endif
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_extern.h>
#include <machine/cpu.h>
#include <machine/elf.h>
+#ifdef __amd64__
+#include <machine/md_var.h>
+#endif
#include <security/audit/audit.h>
#include <compat/freebsd32/freebsd32_util.h>
#include <compat/freebsd32/freebsd32.h>
#include <compat/freebsd32/freebsd32_ipc.h>
#include <compat/freebsd32/freebsd32_misc.h>
#include <compat/freebsd32/freebsd32_signal.h>
#include <compat/freebsd32/freebsd32_proto.h>
FEATURE(compat_freebsd_32bit, "Compatible with 32-bit FreeBSD");
#ifdef __amd64__
CTASSERT(sizeof(struct timeval32) == 8);
CTASSERT(sizeof(struct timespec32) == 8);
CTASSERT(sizeof(struct itimerval32) == 16);
CTASSERT(sizeof(struct bintime32) == 12);
#endif
CTASSERT(sizeof(struct statfs32) == 256);
#ifdef __amd64__
CTASSERT(sizeof(struct rusage32) == 72);
#endif
CTASSERT(sizeof(struct sigaltstack32) == 12);
#ifdef __amd64__
CTASSERT(sizeof(struct kevent32) == 56);
#else
CTASSERT(sizeof(struct kevent32) == 64);
#endif
CTASSERT(sizeof(struct iovec32) == 8);
CTASSERT(sizeof(struct msghdr32) == 28);
#ifdef __amd64__
CTASSERT(sizeof(struct stat32) == 208);
CTASSERT(sizeof(struct freebsd11_stat32) == 96);
#endif
CTASSERT(sizeof(struct sigaction32) == 24);
static int freebsd32_kevent_copyout(void *arg, struct kevent *kevp, int count);
static int freebsd32_kevent_copyin(void *arg, struct kevent *kevp, int count);
static int freebsd32_user_clock_nanosleep(struct thread *td, clockid_t clock_id,
int flags, const struct timespec32 *ua_rqtp, struct timespec32 *ua_rmtp);
void
freebsd32_rusage_out(const struct rusage *s, struct rusage32 *s32)
{
TV_CP(*s, *s32, ru_utime);
TV_CP(*s, *s32, ru_stime);
CP(*s, *s32, ru_maxrss);
CP(*s, *s32, ru_ixrss);
CP(*s, *s32, ru_idrss);
CP(*s, *s32, ru_isrss);
CP(*s, *s32, ru_minflt);
CP(*s, *s32, ru_majflt);
CP(*s, *s32, ru_nswap);
CP(*s, *s32, ru_inblock);
CP(*s, *s32, ru_oublock);
CP(*s, *s32, ru_msgsnd);
CP(*s, *s32, ru_msgrcv);
CP(*s, *s32, ru_nsignals);
CP(*s, *s32, ru_nvcsw);
CP(*s, *s32, ru_nivcsw);
}
int
freebsd32_wait4(struct thread *td, struct freebsd32_wait4_args *uap)
{
int error, status;
struct rusage32 ru32;
struct rusage ru, *rup;
if (uap->rusage != NULL)
rup = &ru;
else
rup = NULL;
error = kern_wait(td, uap->pid, &status, uap->options, rup);
if (error)
return (error);
if (uap->status != NULL)
error = copyout(&status, uap->status, sizeof(status));
if (uap->rusage != NULL && error == 0) {
freebsd32_rusage_out(&ru, &ru32);
error = copyout(&ru32, uap->rusage, sizeof(ru32));
}
return (error);
}
int
freebsd32_wait6(struct thread *td, struct freebsd32_wait6_args *uap)
{
struct wrusage32 wru32;
struct __wrusage wru, *wrup;
struct siginfo32 si32;
struct __siginfo si, *sip;
int error, status;
if (uap->wrusage != NULL)
wrup = &wru;
else
wrup = NULL;
if (uap->info != NULL) {
sip = &si;
bzero(sip, sizeof(*sip));
} else
sip = NULL;
error = kern_wait6(td, uap->idtype, PAIR32TO64(id_t, uap->id),
&status, uap->options, wrup, sip);
if (error != 0)
return (error);
if (uap->status != NULL)
error = copyout(&status, uap->status, sizeof(status));
if (uap->wrusage != NULL && error == 0) {
freebsd32_rusage_out(&wru.wru_self, &wru32.wru_self);
freebsd32_rusage_out(&wru.wru_children, &wru32.wru_children);
error = copyout(&wru32, uap->wrusage, sizeof(wru32));
}
if (uap->info != NULL && error == 0) {
siginfo_to_siginfo32 (&si, &si32);
error = copyout(&si32, uap->info, sizeof(si32));
}
return (error);
}
#ifdef COMPAT_FREEBSD4
static void
copy_statfs(struct statfs *in, struct statfs32 *out)
{
statfs_scale_blocks(in, INT32_MAX);
bzero(out, sizeof(*out));
CP(*in, *out, f_bsize);
out->f_iosize = MIN(in->f_iosize, INT32_MAX);
CP(*in, *out, f_blocks);
CP(*in, *out, f_bfree);
CP(*in, *out, f_bavail);
out->f_files = MIN(in->f_files, INT32_MAX);
out->f_ffree = MIN(in->f_ffree, INT32_MAX);
CP(*in, *out, f_fsid);
CP(*in, *out, f_owner);
CP(*in, *out, f_type);
CP(*in, *out, f_flags);
out->f_syncwrites = MIN(in->f_syncwrites, INT32_MAX);
out->f_asyncwrites = MIN(in->f_asyncwrites, INT32_MAX);
strlcpy(out->f_fstypename,
in->f_fstypename, MFSNAMELEN);
strlcpy(out->f_mntonname,
in->f_mntonname, min(MNAMELEN, FREEBSD4_MNAMELEN));
out->f_syncreads = MIN(in->f_syncreads, INT32_MAX);
out->f_asyncreads = MIN(in->f_asyncreads, INT32_MAX);
strlcpy(out->f_mntfromname,
in->f_mntfromname, min(MNAMELEN, FREEBSD4_MNAMELEN));
}
#endif
#ifdef COMPAT_FREEBSD4
int
freebsd4_freebsd32_getfsstat(struct thread *td,
struct freebsd4_freebsd32_getfsstat_args *uap)
{
struct statfs *buf, *sp;
struct statfs32 stat32;
size_t count, size, copycount;
int error;
count = uap->bufsize / sizeof(struct statfs32);
size = count * sizeof(struct statfs);
error = kern_getfsstat(td, &buf, size, &count, UIO_SYSSPACE, uap->mode);
if (size > 0) {
sp = buf;
copycount = count;
while (copycount > 0 && error == 0) {
copy_statfs(sp, &stat32);
error = copyout(&stat32, uap->buf, sizeof(stat32));
sp++;
uap->buf++;
copycount--;
}
free(buf, M_STATFS);
}
if (error == 0)
td->td_retval[0] = count;
return (error);
}
#endif
#ifdef COMPAT_FREEBSD10
int
freebsd10_freebsd32_pipe(struct thread *td,
struct freebsd10_freebsd32_pipe_args *uap) {
return (freebsd10_pipe(td, (struct freebsd10_pipe_args*)uap));
}
#endif
int
freebsd32_sigaltstack(struct thread *td,
struct freebsd32_sigaltstack_args *uap)
{
struct sigaltstack32 s32;
struct sigaltstack ss, oss, *ssp;
int error;
if (uap->ss != NULL) {
error = copyin(uap->ss, &s32, sizeof(s32));
if (error)
return (error);
PTRIN_CP(s32, ss, ss_sp);
CP(s32, ss, ss_size);
CP(s32, ss, ss_flags);
ssp = &ss;
} else
ssp = NULL;
error = kern_sigaltstack(td, ssp, &oss);
if (error == 0 && uap->oss != NULL) {
PTROUT_CP(oss, s32, ss_sp);
CP(oss, s32, ss_size);
CP(oss, s32, ss_flags);
error = copyout(&s32, uap->oss, sizeof(s32));
}
return (error);
}
/*
* Custom version of exec_copyin_args() so that we can translate
* the pointers.
*/
int
freebsd32_exec_copyin_args(struct image_args *args, const char *fname,
enum uio_seg segflg, u_int32_t *argv, u_int32_t *envv)
{
char *argp, *envp;
u_int32_t *p32, arg;
int error;
bzero(args, sizeof(*args));
if (argv == NULL)
return (EFAULT);
/*
* Allocate demand-paged memory for the file name, argument, and
* environment strings.
*/
error = exec_alloc_args(args);
if (error != 0)
return (error);
/*
* Copy the file name.
*/
error = exec_args_add_fname(args, fname, segflg);
if (error != 0)
goto err_exit;
/*
* extract arguments first
*/
p32 = argv;
for (;;) {
error = copyin(p32++, &arg, sizeof(arg));
if (error)
goto err_exit;
if (arg == 0)
break;
argp = PTRIN(arg);
error = exec_args_add_arg(args, argp, UIO_USERSPACE);
if (error != 0)
goto err_exit;
}
/*
* extract environment strings
*/
if (envv) {
p32 = envv;
for (;;) {
error = copyin(p32++, &arg, sizeof(arg));
if (error)
goto err_exit;
if (arg == 0)
break;
envp = PTRIN(arg);
error = exec_args_add_env(args, envp, UIO_USERSPACE);
if (error != 0)
goto err_exit;
}
}
return (0);
err_exit:
exec_free_args(args);
return (error);
}
int
freebsd32_execve(struct thread *td, struct freebsd32_execve_args *uap)
{
struct image_args eargs;
struct vmspace *oldvmspace;
int error;
error = pre_execve(td, &oldvmspace);
if (error != 0)
return (error);
error = freebsd32_exec_copyin_args(&eargs, uap->fname, UIO_USERSPACE,
uap->argv, uap->envv);
if (error == 0)
error = kern_execve(td, &eargs, NULL);
post_execve(td, error, oldvmspace);
return (error);
}
int
freebsd32_fexecve(struct thread *td, struct freebsd32_fexecve_args *uap)
{
struct image_args eargs;
struct vmspace *oldvmspace;
int error;
error = pre_execve(td, &oldvmspace);
if (error != 0)
return (error);
error = freebsd32_exec_copyin_args(&eargs, NULL, UIO_SYSSPACE,
uap->argv, uap->envv);
if (error == 0) {
eargs.fd = uap->fd;
error = kern_execve(td, &eargs, NULL);
}
post_execve(td, error, oldvmspace);
return (error);
}
int
freebsd32_mknodat(struct thread *td, struct freebsd32_mknodat_args *uap)
{
return (kern_mknodat(td, uap->fd, uap->path, UIO_USERSPACE,
uap->mode, PAIR32TO64(dev_t, uap->dev)));
}
int
freebsd32_mprotect(struct thread *td, struct freebsd32_mprotect_args *uap)
{
int prot;
prot = uap->prot;
#if defined(__amd64__)
if (i386_read_exec && (prot & PROT_READ) != 0)
prot |= PROT_EXEC;
#endif
return (kern_mprotect(td, (uintptr_t)PTRIN(uap->addr), uap->len,
prot));
}
int
freebsd32_mmap(struct thread *td, struct freebsd32_mmap_args *uap)
{
int prot;
prot = uap->prot;
#if defined(__amd64__)
if (i386_read_exec && (prot & PROT_READ))
prot |= PROT_EXEC;
#endif
return (kern_mmap(td, (uintptr_t)uap->addr, uap->len, prot,
uap->flags, uap->fd, PAIR32TO64(off_t, uap->pos)));
}
#ifdef COMPAT_FREEBSD6
int
freebsd6_freebsd32_mmap(struct thread *td,
struct freebsd6_freebsd32_mmap_args *uap)
{
int prot;
prot = uap->prot;
#if defined(__amd64__)
if (i386_read_exec && (prot & PROT_READ))
prot |= PROT_EXEC;
#endif
return (kern_mmap(td, (uintptr_t)uap->addr, uap->len, prot,
uap->flags, uap->fd, PAIR32TO64(off_t, uap->pos)));
}
#endif
int
freebsd32_setitimer(struct thread *td, struct freebsd32_setitimer_args *uap)
{
struct itimerval itv, oitv, *itvp;
struct itimerval32 i32;
int error;
if (uap->itv != NULL) {
error = copyin(uap->itv, &i32, sizeof(i32));
if (error)
return (error);
TV_CP(i32, itv, it_interval);
TV_CP(i32, itv, it_value);
itvp = &itv;
} else
itvp = NULL;
error = kern_setitimer(td, uap->which, itvp, &oitv);
if (error || uap->oitv == NULL)
return (error);
TV_CP(oitv, i32, it_interval);
TV_CP(oitv, i32, it_value);
return (copyout(&i32, uap->oitv, sizeof(i32)));
}
int
freebsd32_getitimer(struct thread *td, struct freebsd32_getitimer_args *uap)
{
struct itimerval itv;
struct itimerval32 i32;
int error;
error = kern_getitimer(td, uap->which, &itv);
if (error || uap->itv == NULL)
return (error);
TV_CP(itv, i32, it_interval);
TV_CP(itv, i32, it_value);
return (copyout(&i32, uap->itv, sizeof(i32)));
}
int
freebsd32_select(struct thread *td, struct freebsd32_select_args *uap)
{
struct timeval32 tv32;
struct timeval tv, *tvp;
int error;
if (uap->tv != NULL) {
error = copyin(uap->tv, &tv32, sizeof(tv32));
if (error)
return (error);
CP(tv32, tv, tv_sec);
CP(tv32, tv, tv_usec);
tvp = &tv;
} else
tvp = NULL;
/*
* XXX Do pointers need PTRIN()?
*/
return (kern_select(td, uap->nd, uap->in, uap->ou, uap->ex, tvp,
sizeof(int32_t) * 8));
}
int
freebsd32_pselect(struct thread *td, struct freebsd32_pselect_args *uap)
{
struct timespec32 ts32;
struct timespec ts;
struct timeval tv, *tvp;
sigset_t set, *uset;
int error;
if (uap->ts != NULL) {
error = copyin(uap->ts, &ts32, sizeof(ts32));
if (error != 0)
return (error);
CP(ts32, ts, tv_sec);
CP(ts32, ts, tv_nsec);
TIMESPEC_TO_TIMEVAL(&tv, &ts);
tvp = &tv;
} else
tvp = NULL;
if (uap->sm != NULL) {
error = copyin(uap->sm, &set, sizeof(set));
if (error != 0)
return (error);
uset = &set;
} else
uset = NULL;
/*
* XXX Do pointers need PTRIN()?
*/
error = kern_pselect(td, uap->nd, uap->in, uap->ou, uap->ex, tvp,
uset, sizeof(int32_t) * 8);
return (error);
}
/*
* Copy 'count' items into the destination list pointed to by uap->eventlist.
*/
static int
freebsd32_kevent_copyout(void *arg, struct kevent *kevp, int count)
{
struct freebsd32_kevent_args *uap;
struct kevent32 ks32[KQ_NEVENTS];
uint64_t e;
int i, j, error;
KASSERT(count <= KQ_NEVENTS, ("count (%d) > KQ_NEVENTS", count));
uap = (struct freebsd32_kevent_args *)arg;
for (i = 0; i < count; i++) {
CP(kevp[i], ks32[i], ident);
CP(kevp[i], ks32[i], filter);
CP(kevp[i], ks32[i], flags);
CP(kevp[i], ks32[i], fflags);
#if BYTE_ORDER == LITTLE_ENDIAN
ks32[i].data1 = kevp[i].data;
ks32[i].data2 = kevp[i].data >> 32;
#else
ks32[i].data1 = kevp[i].data >> 32;
ks32[i].data2 = kevp[i].data;
#endif
PTROUT_CP(kevp[i], ks32[i], udata);
for (j = 0; j < nitems(kevp->ext); j++) {
e = kevp[i].ext[j];
#if BYTE_ORDER == LITTLE_ENDIAN
ks32[i].ext64[2 * j] = e;
ks32[i].ext64[2 * j + 1] = e >> 32;
#else
ks32[i].ext64[2 * j] = e >> 32;
ks32[i].ext64[2 * j + 1] = e;
#endif
}
}
error = copyout(ks32, uap->eventlist, count * sizeof *ks32);
if (error == 0)
uap->eventlist += count;
return (error);
}
/*
* Copy 'count' items from the list pointed to by uap->changelist.
*/
static int
freebsd32_kevent_copyin(void *arg, struct kevent *kevp, int count)
{
struct freebsd32_kevent_args *uap;
struct kevent32 ks32[KQ_NEVENTS];
uint64_t e;
int i, j, error;
KASSERT(count <= KQ_NEVENTS, ("count (%d) > KQ_NEVENTS", count));
uap = (struct freebsd32_kevent_args *)arg;
error = copyin(uap->changelist, ks32, count * sizeof *ks32);
if (error)
goto done;
uap->changelist += count;
for (i = 0; i < count; i++) {
CP(ks32[i], kevp[i], ident);
CP(ks32[i], kevp[i], filter);
CP(ks32[i], kevp[i], flags);
CP(ks32[i], kevp[i], fflags);
kevp[i].data = PAIR32TO64(uint64_t, ks32[i].data);
PTRIN_CP(ks32[i], kevp[i], udata);
for (j = 0; j < nitems(kevp->ext); j++) {
#if BYTE_ORDER == LITTLE_ENDIAN
e = ks32[i].ext64[2 * j + 1];
e <<= 32;
e += ks32[i].ext64[2 * j];
#else
e = ks32[i].ext64[2 * j];
e <<= 32;
e += ks32[i].ext64[2 * j + 1];
#endif
kevp[i].ext[j] = e;
}
}
done:
return (error);
}
int
freebsd32_kevent(struct thread *td, struct freebsd32_kevent_args *uap)
{
struct timespec32 ts32;
struct timespec ts, *tsp;
struct kevent_copyops k_ops = {
.arg = uap,
.k_copyout = freebsd32_kevent_copyout,
.k_copyin = freebsd32_kevent_copyin,
};
#ifdef KTRACE
struct kevent32 *eventlist = uap->eventlist;
#endif
int error;
if (uap->timeout) {
error = copyin(uap->timeout, &ts32, sizeof(ts32));
if (error)
return (error);
CP(ts32, ts, tv_sec);
CP(ts32, ts, tv_nsec);
tsp = &ts;
} else
tsp = NULL;
#ifdef KTRACE
if (KTRPOINT(td, KTR_STRUCT_ARRAY))
ktrstructarray("kevent32", UIO_USERSPACE, uap->changelist,
uap->nchanges, sizeof(struct kevent32));
#endif
error = kern_kevent(td, uap->fd, uap->nchanges, uap->nevents,
&k_ops, tsp);
#ifdef KTRACE
if (error == 0 && KTRPOINT(td, KTR_STRUCT_ARRAY))
ktrstructarray("kevent32", UIO_USERSPACE, eventlist,
td->td_retval[0], sizeof(struct kevent32));
#endif
return (error);
}
#ifdef COMPAT_FREEBSD11
static int
freebsd32_kevent11_copyout(void *arg, struct kevent *kevp, int count)
{
struct freebsd11_freebsd32_kevent_args *uap;
struct kevent32_freebsd11 ks32[KQ_NEVENTS];
int i, error;
KASSERT(count <= KQ_NEVENTS, ("count (%d) > KQ_NEVENTS", count));
uap = (struct freebsd11_freebsd32_kevent_args *)arg;
for (i = 0; i < count; i++) {
CP(kevp[i], ks32[i], ident);
CP(kevp[i], ks32[i], filter);
CP(kevp[i], ks32[i], flags);
CP(kevp[i], ks32[i], fflags);
CP(kevp[i], ks32[i], data);
PTROUT_CP(kevp[i], ks32[i], udata);
}
error = copyout(ks32, uap->eventlist, count * sizeof *ks32);
if (error == 0)
uap->eventlist += count;
return (error);
}
/*
* Copy 'count' items from the list pointed to by uap->changelist.
*/
static int
freebsd32_kevent11_copyin(void *arg, struct kevent *kevp, int count)
{
struct freebsd11_freebsd32_kevent_args *uap;
struct kevent32_freebsd11 ks32[KQ_NEVENTS];
int i, j, error;
KASSERT(count <= KQ_NEVENTS, ("count (%d) > KQ_NEVENTS", count));
uap = (struct freebsd11_freebsd32_kevent_args *)arg;
error = copyin(uap->changelist, ks32, count * sizeof *ks32);
if (error)
goto done;
uap->changelist += count;
for (i = 0; i < count; i++) {
CP(ks32[i], kevp[i], ident);
CP(ks32[i], kevp[i], filter);
CP(ks32[i], kevp[i], flags);
CP(ks32[i], kevp[i], fflags);
CP(ks32[i], kevp[i], data);
PTRIN_CP(ks32[i], kevp[i], udata);
for (j = 0; j < nitems(kevp->ext); j++)
kevp[i].ext[j] = 0;
}
done:
return (error);
}
int
freebsd11_freebsd32_kevent(struct thread *td,
struct freebsd11_freebsd32_kevent_args *uap)
{
struct timespec32 ts32;
struct timespec ts, *tsp;
struct kevent_copyops k_ops = {
.arg = uap,
.k_copyout = freebsd32_kevent11_copyout,
.k_copyin = freebsd32_kevent11_copyin,
};
#ifdef KTRACE
struct kevent32_freebsd11 *eventlist = uap->eventlist;
#endif
int error;
if (uap->timeout) {
error = copyin(uap->timeout, &ts32, sizeof(ts32));
if (error)
return (error);
CP(ts32, ts, tv_sec);
CP(ts32, ts, tv_nsec);
tsp = &ts;
} else
tsp = NULL;
#ifdef KTRACE
if (KTRPOINT(td, KTR_STRUCT_ARRAY))
ktrstructarray("kevent32_freebsd11", UIO_USERSPACE,
uap->changelist, uap->nchanges,
sizeof(struct kevent32_freebsd11));
#endif
error = kern_kevent(td, uap->fd, uap->nchanges, uap->nevents,
&k_ops, tsp);
#ifdef KTRACE
if (error == 0 && KTRPOINT(td, KTR_STRUCT_ARRAY))
ktrstructarray("kevent32_freebsd11", UIO_USERSPACE,
eventlist, td->td_retval[0],
sizeof(struct kevent32_freebsd11));
#endif
return (error);
}
#endif
int
freebsd32_gettimeofday(struct thread *td,
struct freebsd32_gettimeofday_args *uap)
{
struct timeval atv;
struct timeval32 atv32;
struct timezone rtz;
int error = 0;
if (uap->tp) {
microtime(&atv);
CP(atv, atv32, tv_sec);
CP(atv, atv32, tv_usec);
error = copyout(&atv32, uap->tp, sizeof (atv32));
}
if (error == 0 && uap->tzp != NULL) {
rtz.tz_minuteswest = tz_minuteswest;
rtz.tz_dsttime = tz_dsttime;
error = copyout(&rtz, uap->tzp, sizeof (rtz));
}
return (error);
}
int
freebsd32_getrusage(struct thread *td, struct freebsd32_getrusage_args *uap)
{
struct rusage32 s32;
struct rusage s;
int error;
error = kern_getrusage(td, uap->who, &s);
if (error == 0) {
freebsd32_rusage_out(&s, &s32);
error = copyout(&s32, uap->rusage, sizeof(s32));
}
return (error);
}
static int
freebsd32_copyinuio(struct iovec32 *iovp, u_int iovcnt, struct uio **uiop)
{
struct iovec32 iov32;
struct iovec *iov;
struct uio *uio;
u_int iovlen;
int error, i;
*uiop = NULL;
if (iovcnt > UIO_MAXIOV)
return (EINVAL);
iovlen = iovcnt * sizeof(struct iovec);
uio = malloc(iovlen + sizeof *uio, M_IOV, M_WAITOK);
iov = (struct iovec *)(uio + 1);
for (i = 0; i < iovcnt; i++) {
error = copyin(&iovp[i], &iov32, sizeof(struct iovec32));
if (error) {
free(uio, M_IOV);
return (error);
}
iov[i].iov_base = PTRIN(iov32.iov_base);
iov[i].iov_len = iov32.iov_len;
}
uio->uio_iov = iov;
uio->uio_iovcnt = iovcnt;
uio->uio_segflg = UIO_USERSPACE;
uio->uio_offset = -1;
uio->uio_resid = 0;
for (i = 0; i < iovcnt; i++) {
if (iov->iov_len > INT_MAX - uio->uio_resid) {
free(uio, M_IOV);
return (EINVAL);
}
uio->uio_resid += iov->iov_len;
iov++;
}
*uiop = uio;
return (0);
}
int
freebsd32_readv(struct thread *td, struct freebsd32_readv_args *uap)
{
struct uio *auio;
int error;
error = freebsd32_copyinuio(uap->iovp, uap->iovcnt, &auio);
if (error)
return (error);
error = kern_readv(td, uap->fd, auio);
free(auio, M_IOV);
return (error);
}
int
freebsd32_writev(struct thread *td, struct freebsd32_writev_args *uap)
{
struct uio *auio;
int error;
error = freebsd32_copyinuio(uap->iovp, uap->iovcnt, &auio);
if (error)
return (error);
error = kern_writev(td, uap->fd, auio);
free(auio, M_IOV);
return (error);
}
int
freebsd32_preadv(struct thread *td, struct freebsd32_preadv_args *uap)
{
struct uio *auio;
int error;
error = freebsd32_copyinuio(uap->iovp, uap->iovcnt, &auio);
if (error)
return (error);
error = kern_preadv(td, uap->fd, auio, PAIR32TO64(off_t,uap->offset));
free(auio, M_IOV);
return (error);
}
int
freebsd32_pwritev(struct thread *td, struct freebsd32_pwritev_args *uap)
{
struct uio *auio;
int error;
error = freebsd32_copyinuio(uap->iovp, uap->iovcnt, &auio);
if (error)
return (error);
error = kern_pwritev(td, uap->fd, auio, PAIR32TO64(off_t,uap->offset));
free(auio, M_IOV);
return (error);
}
int
freebsd32_copyiniov(struct iovec32 *iovp32, u_int iovcnt, struct iovec **iovp,
int error)
{
struct iovec32 iov32;
struct iovec *iov;
u_int iovlen;
int i;
*iovp = NULL;
if (iovcnt > UIO_MAXIOV)
return (error);
iovlen = iovcnt * sizeof(struct iovec);
iov = malloc(iovlen, M_IOV, M_WAITOK);
for (i = 0; i < iovcnt; i++) {
error = copyin(&iovp32[i], &iov32, sizeof(struct iovec32));
if (error) {
free(iov, M_IOV);
return (error);
}
iov[i].iov_base = PTRIN(iov32.iov_base);
iov[i].iov_len = iov32.iov_len;
}
*iovp = iov;
return (0);
}
static int
freebsd32_copyinmsghdr(struct msghdr32 *msg32, struct msghdr *msg)
{
struct msghdr32 m32;
int error;
error = copyin(msg32, &m32, sizeof(m32));
if (error)
return (error);
msg->msg_name = PTRIN(m32.msg_name);
msg->msg_namelen = m32.msg_namelen;
msg->msg_iov = PTRIN(m32.msg_iov);
msg->msg_iovlen = m32.msg_iovlen;
msg->msg_control = PTRIN(m32.msg_control);
msg->msg_controllen = m32.msg_controllen;
msg->msg_flags = m32.msg_flags;
return (0);
}
static int
freebsd32_copyoutmsghdr(struct msghdr *msg, struct msghdr32 *msg32)
{
struct msghdr32 m32;
int error;
m32.msg_name = PTROUT(msg->msg_name);
m32.msg_namelen = msg->msg_namelen;
m32.msg_iov = PTROUT(msg->msg_iov);
m32.msg_iovlen = msg->msg_iovlen;
m32.msg_control = PTROUT(msg->msg_control);
m32.msg_controllen = msg->msg_controllen;
m32.msg_flags = msg->msg_flags;
error = copyout(&m32, msg32, sizeof(m32));
return (error);
}
#ifndef __mips__
#define FREEBSD32_ALIGNBYTES (sizeof(int) - 1)
#else
#define FREEBSD32_ALIGNBYTES (sizeof(long) - 1)
#endif
#define FREEBSD32_ALIGN(p) \
(((u_long)(p) + FREEBSD32_ALIGNBYTES) & ~FREEBSD32_ALIGNBYTES)
#define FREEBSD32_CMSG_SPACE(l) \
(FREEBSD32_ALIGN(sizeof(struct cmsghdr)) + FREEBSD32_ALIGN(l))
#define FREEBSD32_CMSG_DATA(cmsg) ((unsigned char *)(cmsg) + \
FREEBSD32_ALIGN(sizeof(struct cmsghdr)))
static size_t
freebsd32_cmsg_convert(const struct cmsghdr *cm, void *data, socklen_t datalen)
{
size_t copylen;
union {
struct timespec32 ts;
struct timeval32 tv;
struct bintime32 bt;
} tmp32;
union {
struct timespec ts;
struct timeval tv;
struct bintime bt;
} *in;
in = data;
copylen = 0;
switch (cm->cmsg_level) {
case SOL_SOCKET:
switch (cm->cmsg_type) {
case SCM_TIMESTAMP:
TV_CP(*in, tmp32, tv);
copylen = sizeof(tmp32.tv);
break;
case SCM_BINTIME:
BT_CP(*in, tmp32, bt);
copylen = sizeof(tmp32.bt);
break;
case SCM_REALTIME:
case SCM_MONOTONIC:
TS_CP(*in, tmp32, ts);
copylen = sizeof(tmp32.ts);
break;
default:
break;
}
default:
break;
}
if (copylen == 0)
return (datalen);
KASSERT((datalen >= copylen), ("corrupted cmsghdr"));
bcopy(&tmp32, data, copylen);
return (copylen);
}
static int
freebsd32_copy_msg_out(struct msghdr *msg, struct mbuf *control)
{
struct cmsghdr *cm;
void *data;
socklen_t clen, datalen, datalen_out, oldclen;
int error;
caddr_t ctlbuf;
int len, maxlen, copylen;
struct mbuf *m;
error = 0;
len = msg->msg_controllen;
maxlen = msg->msg_controllen;
msg->msg_controllen = 0;
ctlbuf = msg->msg_control;
for (m = control; m != NULL && len > 0; m = m->m_next) {
cm = mtod(m, struct cmsghdr *);
clen = m->m_len;
while (cm != NULL) {
if (sizeof(struct cmsghdr) > clen ||
cm->cmsg_len > clen) {
error = EINVAL;
break;
}
data = CMSG_DATA(cm);
datalen = (caddr_t)cm + cm->cmsg_len - (caddr_t)data;
datalen_out = freebsd32_cmsg_convert(cm, data, datalen);
/*
* Copy out the message header. Preserve the native
* message size in case we need to inspect the message
* contents later.
*/
copylen = sizeof(struct cmsghdr);
if (len < copylen) {
msg->msg_flags |= MSG_CTRUNC;
m_dispose_extcontrolm(m);
goto exit;
}
oldclen = cm->cmsg_len;
cm->cmsg_len = FREEBSD32_ALIGN(sizeof(struct cmsghdr)) +
datalen_out;
error = copyout(cm, ctlbuf, copylen);
cm->cmsg_len = oldclen;
if (error != 0)
goto exit;
ctlbuf += FREEBSD32_ALIGN(copylen);
len -= FREEBSD32_ALIGN(copylen);
copylen = datalen_out;
if (len < copylen) {
msg->msg_flags |= MSG_CTRUNC;
m_dispose_extcontrolm(m);
break;
}
/* Copy out the message data. */
error = copyout(data, ctlbuf, copylen);
if (error)
goto exit;
ctlbuf += FREEBSD32_ALIGN(copylen);
len -= FREEBSD32_ALIGN(copylen);
if (CMSG_SPACE(datalen) < clen) {
clen -= CMSG_SPACE(datalen);
cm = (struct cmsghdr *)
((caddr_t)cm + CMSG_SPACE(datalen));
} else {
clen = 0;
cm = NULL;
}
msg->msg_controllen += FREEBSD32_ALIGN(sizeof(*cm)) +
datalen_out;
}
}
if (len == 0 && m != NULL) {
msg->msg_flags |= MSG_CTRUNC;
m_dispose_extcontrolm(m);
}
exit:
return (error);
}
int
freebsd32_recvmsg(struct thread *td, struct freebsd32_recvmsg_args *uap)
{
struct msghdr msg;
struct msghdr32 m32;
struct iovec *uiov, *iov;
struct mbuf *control = NULL;
struct mbuf **controlp;
int error;
error = copyin(uap->msg, &m32, sizeof(m32));
if (error)
return (error);
error = freebsd32_copyinmsghdr(uap->msg, &msg);
if (error)
return (error);
error = freebsd32_copyiniov(PTRIN(m32.msg_iov), m32.msg_iovlen, &iov,
EMSGSIZE);
if (error)
return (error);
msg.msg_flags = uap->flags;
uiov = msg.msg_iov;
msg.msg_iov = iov;
controlp = (msg.msg_control != NULL) ? &control : NULL;
error = kern_recvit(td, uap->s, &msg, UIO_USERSPACE, controlp);
if (error == 0) {
msg.msg_iov = uiov;
if (control != NULL)
error = freebsd32_copy_msg_out(&msg, control);
else
msg.msg_controllen = 0;
if (error == 0)
error = freebsd32_copyoutmsghdr(&msg, uap->msg);
}
free(iov, M_IOV);
if (control != NULL) {
if (error != 0)
m_dispose_extcontrolm(control);
m_freem(control);
}
return (error);
}
/*
* Copy-in the array of control messages constructed using alignment
* and padding suitable for a 32-bit environment and construct an
* mbuf using alignment and padding suitable for a 64-bit kernel.
* The alignment and padding are defined indirectly by CMSG_DATA(),
* CMSG_SPACE() and CMSG_LEN().
*/
static int
freebsd32_copyin_control(struct mbuf **mp, caddr_t buf, u_int buflen)
{
struct mbuf *m;
void *md;
u_int idx, len, msglen;
int error;
buflen = FREEBSD32_ALIGN(buflen);
if (buflen > MCLBYTES)
return (EINVAL);
/*
* Iterate over the buffer and get the length of each message
* in there. This has 32-bit alignment and padding. Use it to
* determine the length of these messages when using 64-bit
* alignment and padding.
*/
idx = 0;
len = 0;
while (idx < buflen) {
error = copyin(buf + idx, &msglen, sizeof(msglen));
if (error)
return (error);
if (msglen < sizeof(struct cmsghdr))
return (EINVAL);
msglen = FREEBSD32_ALIGN(msglen);
if (idx + msglen > buflen)
return (EINVAL);
idx += msglen;
msglen += CMSG_ALIGN(sizeof(struct cmsghdr)) -
FREEBSD32_ALIGN(sizeof(struct cmsghdr));
len += CMSG_ALIGN(msglen);
}
if (len > MCLBYTES)
return (EINVAL);
m = m_get(M_WAITOK, MT_CONTROL);
if (len > MLEN)
MCLGET(m, M_WAITOK);
m->m_len = len;
md = mtod(m, void *);
while (buflen > 0) {
error = copyin(buf, md, sizeof(struct cmsghdr));
if (error)
break;
msglen = *(u_int *)md;
msglen = FREEBSD32_ALIGN(msglen);
/* Modify the message length to account for alignment. */
*(u_int *)md = msglen + CMSG_ALIGN(sizeof(struct cmsghdr)) -
FREEBSD32_ALIGN(sizeof(struct cmsghdr));
md = (char *)md + CMSG_ALIGN(sizeof(struct cmsghdr));
buf += FREEBSD32_ALIGN(sizeof(struct cmsghdr));
buflen -= FREEBSD32_ALIGN(sizeof(struct cmsghdr));
msglen -= FREEBSD32_ALIGN(sizeof(struct cmsghdr));
if (msglen > 0) {
error = copyin(buf, md, msglen);
if (error)
break;
md = (char *)md + CMSG_ALIGN(msglen);
buf += msglen;
buflen -= msglen;
}
}
if (error)
m_free(m);
else
*mp = m;
return (error);
}
int
freebsd32_sendmsg(struct thread *td,
struct freebsd32_sendmsg_args *uap)
{
struct msghdr msg;
struct msghdr32 m32;
struct iovec *iov;
struct mbuf *control = NULL;
struct sockaddr *to = NULL;
int error;
error = copyin(uap->msg, &m32, sizeof(m32));
if (error)
return (error);
error = freebsd32_copyinmsghdr(uap->msg, &msg);
if (error)
return (error);
error = freebsd32_copyiniov(PTRIN(m32.msg_iov), m32.msg_iovlen, &iov,
EMSGSIZE);
if (error)
return (error);
msg.msg_iov = iov;
if (msg.msg_name != NULL) {
error = getsockaddr(&to, msg.msg_name, msg.msg_namelen);
if (error) {
to = NULL;
goto out;
}
msg.msg_name = to;
}
if (msg.msg_control) {
if (msg.msg_controllen < sizeof(struct cmsghdr)) {
error = EINVAL;
goto out;
}
error = freebsd32_copyin_control(&control, msg.msg_control,
msg.msg_controllen);
if (error)
goto out;
msg.msg_control = NULL;
msg.msg_controllen = 0;
}
error = kern_sendit(td, uap->s, &msg, uap->flags, control,
UIO_USERSPACE);
out:
free(iov, M_IOV);
if (to)
free(to, M_SONAME);
return (error);
}
int
freebsd32_recvfrom(struct thread *td,
struct freebsd32_recvfrom_args *uap)
{
struct msghdr msg;
struct iovec aiov;
int error;
if (uap->fromlenaddr) {
error = copyin(PTRIN(uap->fromlenaddr), &msg.msg_namelen,
sizeof(msg.msg_namelen));
if (error)
return (error);
} else {
msg.msg_namelen = 0;
}
msg.msg_name = PTRIN(uap->from);
msg.msg_iov = &aiov;
msg.msg_iovlen = 1;
aiov.iov_base = PTRIN(uap->buf);
aiov.iov_len = uap->len;
msg.msg_control = NULL;
msg.msg_flags = uap->flags;
error = kern_recvit(td, uap->s, &msg, UIO_USERSPACE, NULL);
if (error == 0 && uap->fromlenaddr)
error = copyout(&msg.msg_namelen, PTRIN(uap->fromlenaddr),
sizeof (msg.msg_namelen));
return (error);
}
int
freebsd32_settimeofday(struct thread *td,
struct freebsd32_settimeofday_args *uap)
{
struct timeval32 tv32;
struct timeval tv, *tvp;
struct timezone tz, *tzp;
int error;
if (uap->tv) {
error = copyin(uap->tv, &tv32, sizeof(tv32));
if (error)
return (error);
CP(tv32, tv, tv_sec);
CP(tv32, tv, tv_usec);
tvp = &tv;
} else
tvp = NULL;
if (uap->tzp) {
error = copyin(uap->tzp, &tz, sizeof(tz));
if (error)
return (error);
tzp = &tz;
} else
tzp = NULL;
return (kern_settimeofday(td, tvp, tzp));
}
int
freebsd32_utimes(struct thread *td, struct freebsd32_utimes_args *uap)
{
struct timeval32 s32[2];
struct timeval s[2], *sp;
int error;
if (uap->tptr != NULL) {
error = copyin(uap->tptr, s32, sizeof(s32));
if (error)
return (error);
CP(s32[0], s[0], tv_sec);
CP(s32[0], s[0], tv_usec);
CP(s32[1], s[1], tv_sec);
CP(s32[1], s[1], tv_usec);
sp = s;
} else
sp = NULL;
return (kern_utimesat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
sp, UIO_SYSSPACE));
}
int
freebsd32_lutimes(struct thread *td, struct freebsd32_lutimes_args *uap)
{
struct timeval32 s32[2];
struct timeval s[2], *sp;
int error;
if (uap->tptr != NULL) {
error = copyin(uap->tptr, s32, sizeof(s32));
if (error)
return (error);
CP(s32[0], s[0], tv_sec);
CP(s32[0], s[0], tv_usec);
CP(s32[1], s[1], tv_sec);
CP(s32[1], s[1], tv_usec);
sp = s;
} else
sp = NULL;
return (kern_lutimes(td, uap->path, UIO_USERSPACE, sp, UIO_SYSSPACE));
}
int
freebsd32_futimes(struct thread *td, struct freebsd32_futimes_args *uap)
{
struct timeval32 s32[2];
struct timeval s[2], *sp;
int error;
if (uap->tptr != NULL) {
error = copyin(uap->tptr, s32, sizeof(s32));
if (error)
return (error);
CP(s32[0], s[0], tv_sec);
CP(s32[0], s[0], tv_usec);
CP(s32[1], s[1], tv_sec);
CP(s32[1], s[1], tv_usec);
sp = s;
} else
sp = NULL;
return (kern_futimes(td, uap->fd, sp, UIO_SYSSPACE));
}
int
freebsd32_futimesat(struct thread *td, struct freebsd32_futimesat_args *uap)
{
struct timeval32 s32[2];
struct timeval s[2], *sp;
int error;
if (uap->times != NULL) {
error = copyin(uap->times, s32, sizeof(s32));
if (error)
return (error);
CP(s32[0], s[0], tv_sec);
CP(s32[0], s[0], tv_usec);
CP(s32[1], s[1], tv_sec);
CP(s32[1], s[1], tv_usec);
sp = s;
} else
sp = NULL;
return (kern_utimesat(td, uap->fd, uap->path, UIO_USERSPACE,
sp, UIO_SYSSPACE));
}
int
freebsd32_futimens(struct thread *td, struct freebsd32_futimens_args *uap)
{
struct timespec32 ts32[2];
struct timespec ts[2], *tsp;
int error;
if (uap->times != NULL) {
error = copyin(uap->times, ts32, sizeof(ts32));
if (error)
return (error);
CP(ts32[0], ts[0], tv_sec);
CP(ts32[0], ts[0], tv_nsec);
CP(ts32[1], ts[1], tv_sec);
CP(ts32[1], ts[1], tv_nsec);
tsp = ts;
} else
tsp = NULL;
return (kern_futimens(td, uap->fd, tsp, UIO_SYSSPACE));
}
int
freebsd32_utimensat(struct thread *td, struct freebsd32_utimensat_args *uap)
{
struct timespec32 ts32[2];
struct timespec ts[2], *tsp;
int error;
if (uap->times != NULL) {
error = copyin(uap->times, ts32, sizeof(ts32));
if (error)
return (error);
CP(ts32[0], ts[0], tv_sec);
CP(ts32[0], ts[0], tv_nsec);
CP(ts32[1], ts[1], tv_sec);
CP(ts32[1], ts[1], tv_nsec);
tsp = ts;
} else
tsp = NULL;
return (kern_utimensat(td, uap->fd, uap->path, UIO_USERSPACE,
tsp, UIO_SYSSPACE, uap->flag));
}
int
freebsd32_adjtime(struct thread *td, struct freebsd32_adjtime_args *uap)
{
struct timeval32 tv32;
struct timeval delta, olddelta, *deltap;
int error;
if (uap->delta) {
error = copyin(uap->delta, &tv32, sizeof(tv32));
if (error)
return (error);
CP(tv32, delta, tv_sec);
CP(tv32, delta, tv_usec);
deltap = &delta;
} else
deltap = NULL;
error = kern_adjtime(td, deltap, &olddelta);
if (uap->olddelta && error == 0) {
CP(olddelta, tv32, tv_sec);
CP(olddelta, tv32, tv_usec);
error = copyout(&tv32, uap->olddelta, sizeof(tv32));
}
return (error);
}
#ifdef COMPAT_FREEBSD4
int
freebsd4_freebsd32_statfs(struct thread *td, struct freebsd4_freebsd32_statfs_args *uap)
{
struct statfs32 s32;
struct statfs *sp;
int error;
sp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_statfs(td, uap->path, UIO_USERSPACE, sp);
if (error == 0) {
copy_statfs(sp, &s32);
error = copyout(&s32, uap->buf, sizeof(s32));
}
free(sp, M_STATFS);
return (error);
}
#endif
#ifdef COMPAT_FREEBSD4
int
freebsd4_freebsd32_fstatfs(struct thread *td, struct freebsd4_freebsd32_fstatfs_args *uap)
{
struct statfs32 s32;
struct statfs *sp;
int error;
sp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fstatfs(td, uap->fd, sp);
if (error == 0) {
copy_statfs(sp, &s32);
error = copyout(&s32, uap->buf, sizeof(s32));
}
free(sp, M_STATFS);
return (error);
}
#endif
#ifdef COMPAT_FREEBSD4
int
freebsd4_freebsd32_fhstatfs(struct thread *td, struct freebsd4_freebsd32_fhstatfs_args *uap)
{
struct statfs32 s32;
struct statfs *sp;
fhandle_t fh;
int error;
if ((error = copyin(uap->u_fhp, &fh, sizeof(fhandle_t))) != 0)
return (error);
sp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fhstatfs(td, fh, sp);
if (error == 0) {
copy_statfs(sp, &s32);
error = copyout(&s32, uap->buf, sizeof(s32));
}
free(sp, M_STATFS);
return (error);
}
#endif
int
freebsd32_pread(struct thread *td, struct freebsd32_pread_args *uap)
{
return (kern_pread(td, uap->fd, uap->buf, uap->nbyte,
PAIR32TO64(off_t, uap->offset)));
}
int
freebsd32_pwrite(struct thread *td, struct freebsd32_pwrite_args *uap)
{
return (kern_pwrite(td, uap->fd, uap->buf, uap->nbyte,
PAIR32TO64(off_t, uap->offset)));
}
#ifdef COMPAT_43
int
ofreebsd32_lseek(struct thread *td, struct ofreebsd32_lseek_args *uap)
{
return (kern_lseek(td, uap->fd, uap->offset, uap->whence));
}
#endif
int
freebsd32_lseek(struct thread *td, struct freebsd32_lseek_args *uap)
{
int error;
off_t pos;
error = kern_lseek(td, uap->fd, PAIR32TO64(off_t, uap->offset),
uap->whence);
/* Expand the quad return into two parts for eax and edx */
pos = td->td_uretoff.tdu_off;
td->td_retval[RETVAL_LO] = pos & 0xffffffff; /* %eax */
td->td_retval[RETVAL_HI] = pos >> 32; /* %edx */
return (error);
}
int
freebsd32_truncate(struct thread *td, struct freebsd32_truncate_args *uap)
{
return (kern_truncate(td, uap->path, UIO_USERSPACE,
PAIR32TO64(off_t, uap->length)));
}
int
freebsd32_ftruncate(struct thread *td, struct freebsd32_ftruncate_args *uap)
{
return (kern_ftruncate(td, uap->fd, PAIR32TO64(off_t, uap->length)));
}
#ifdef COMPAT_43
int
ofreebsd32_getdirentries(struct thread *td,
struct ofreebsd32_getdirentries_args *uap)
{
struct ogetdirentries_args ap;
int error;
long loff;
int32_t loff_cut;
ap.fd = uap->fd;
ap.buf = uap->buf;
ap.count = uap->count;
ap.basep = NULL;
error = kern_ogetdirentries(td, &ap, &loff);
if (error == 0) {
loff_cut = loff;
error = copyout(&loff_cut, uap->basep, sizeof(int32_t));
}
return (error);
}
#endif
#if defined(COMPAT_FREEBSD11)
int
freebsd11_freebsd32_getdirentries(struct thread *td,
struct freebsd11_freebsd32_getdirentries_args *uap)
{
long base;
int32_t base32;
int error;
error = freebsd11_kern_getdirentries(td, uap->fd, uap->buf, uap->count,
&base, NULL);
if (error)
return (error);
if (uap->basep != NULL) {
base32 = base;
error = copyout(&base32, uap->basep, sizeof(int32_t));
}
return (error);
}
int
freebsd11_freebsd32_getdents(struct thread *td,
struct freebsd11_freebsd32_getdents_args *uap)
{
struct freebsd11_freebsd32_getdirentries_args ap;
ap.fd = uap->fd;
ap.buf = uap->buf;
ap.count = uap->count;
ap.basep = NULL;
return (freebsd11_freebsd32_getdirentries(td, &ap));
}
#endif /* COMPAT_FREEBSD11 */
#ifdef COMPAT_FREEBSD6
/* versions with the 'int pad' argument */
int
freebsd6_freebsd32_pread(struct thread *td, struct freebsd6_freebsd32_pread_args *uap)
{
return (kern_pread(td, uap->fd, uap->buf, uap->nbyte,
PAIR32TO64(off_t, uap->offset)));
}
int
freebsd6_freebsd32_pwrite(struct thread *td, struct freebsd6_freebsd32_pwrite_args *uap)
{
return (kern_pwrite(td, uap->fd, uap->buf, uap->nbyte,
PAIR32TO64(off_t, uap->offset)));
}
int
freebsd6_freebsd32_lseek(struct thread *td, struct freebsd6_freebsd32_lseek_args *uap)
{
int error;
off_t pos;
error = kern_lseek(td, uap->fd, PAIR32TO64(off_t, uap->offset),
uap->whence);
/* Expand the quad return into two parts for eax and edx */
pos = td->td_uretoff.tdu_off;
td->td_retval[RETVAL_LO] = pos & 0xffffffff; /* %eax */
td->td_retval[RETVAL_HI] = pos >> 32; /* %edx */
return (error);
}
int
freebsd6_freebsd32_truncate(struct thread *td, struct freebsd6_freebsd32_truncate_args *uap)
{
return (kern_truncate(td, uap->path, UIO_USERSPACE,
PAIR32TO64(off_t, uap->length)));
}
int
freebsd6_freebsd32_ftruncate(struct thread *td, struct freebsd6_freebsd32_ftruncate_args *uap)
{
return (kern_ftruncate(td, uap->fd, PAIR32TO64(off_t, uap->length)));
}
#endif /* COMPAT_FREEBSD6 */
struct sf_hdtr32 {
uint32_t headers;
int hdr_cnt;
uint32_t trailers;
int trl_cnt;
};
static int
freebsd32_do_sendfile(struct thread *td,
struct freebsd32_sendfile_args *uap, int compat)
{
struct sf_hdtr32 hdtr32;
struct sf_hdtr hdtr;
struct uio *hdr_uio, *trl_uio;
struct file *fp;
cap_rights_t rights;
struct iovec32 *iov32;
off_t offset, sbytes;
int error;
offset = PAIR32TO64(off_t, uap->offset);
if (offset < 0)
return (EINVAL);
hdr_uio = trl_uio = NULL;
if (uap->hdtr != NULL) {
error = copyin(uap->hdtr, &hdtr32, sizeof(hdtr32));
if (error)
goto out;
PTRIN_CP(hdtr32, hdtr, headers);
CP(hdtr32, hdtr, hdr_cnt);
PTRIN_CP(hdtr32, hdtr, trailers);
CP(hdtr32, hdtr, trl_cnt);
if (hdtr.headers != NULL) {
iov32 = PTRIN(hdtr32.headers);
error = freebsd32_copyinuio(iov32,
hdtr32.hdr_cnt, &hdr_uio);
if (error)
goto out;
#ifdef COMPAT_FREEBSD4
/*
* In FreeBSD < 5.0 the nbytes to send also included
* the header. If compat is specified subtract the
* header size from nbytes.
*/
if (compat) {
if (uap->nbytes > hdr_uio->uio_resid)
uap->nbytes -= hdr_uio->uio_resid;
else
uap->nbytes = 0;
}
#endif
}
if (hdtr.trailers != NULL) {
iov32 = PTRIN(hdtr32.trailers);
error = freebsd32_copyinuio(iov32,
hdtr32.trl_cnt, &trl_uio);
if (error)
goto out;
}
}
AUDIT_ARG_FD(uap->fd);
if ((error = fget_read(td, uap->fd,
cap_rights_init(&rights, CAP_PREAD), &fp)) != 0)
goto out;
error = fo_sendfile(fp, uap->s, hdr_uio, trl_uio, offset,
uap->nbytes, &sbytes, uap->flags, td);
fdrop(fp, td);
if (uap->sbytes != NULL)
copyout(&sbytes, uap->sbytes, sizeof(off_t));
out:
if (hdr_uio)
free(hdr_uio, M_IOV);
if (trl_uio)
free(trl_uio, M_IOV);
return (error);
}
#ifdef COMPAT_FREEBSD4
int
freebsd4_freebsd32_sendfile(struct thread *td,
struct freebsd4_freebsd32_sendfile_args *uap)
{
return (freebsd32_do_sendfile(td,
(struct freebsd32_sendfile_args *)uap, 1));
}
#endif
int
freebsd32_sendfile(struct thread *td, struct freebsd32_sendfile_args *uap)
{
return (freebsd32_do_sendfile(td, uap, 0));
}
static void
copy_stat(struct stat *in, struct stat32 *out)
{
CP(*in, *out, st_dev);
CP(*in, *out, st_ino);
CP(*in, *out, st_mode);
CP(*in, *out, st_nlink);
CP(*in, *out, st_uid);
CP(*in, *out, st_gid);
CP(*in, *out, st_rdev);
TS_CP(*in, *out, st_atim);
TS_CP(*in, *out, st_mtim);
TS_CP(*in, *out, st_ctim);
CP(*in, *out, st_size);
CP(*in, *out, st_blocks);
CP(*in, *out, st_blksize);
CP(*in, *out, st_flags);
CP(*in, *out, st_gen);
TS_CP(*in, *out, st_birthtim);
out->st_padding0 = 0;
out->st_padding1 = 0;
#ifdef __STAT32_TIME_T_EXT
out->st_atim_ext = 0;
out->st_mtim_ext = 0;
out->st_ctim_ext = 0;
out->st_btim_ext = 0;
#endif
bzero(out->st_spare, sizeof(out->st_spare));
}
#ifdef COMPAT_43
static void
copy_ostat(struct stat *in, struct ostat32 *out)
{
bzero(out, sizeof(*out));
CP(*in, *out, st_dev);
CP(*in, *out, st_ino);
CP(*in, *out, st_mode);
CP(*in, *out, st_nlink);
CP(*in, *out, st_uid);
CP(*in, *out, st_gid);
CP(*in, *out, st_rdev);
out->st_size = MIN(in->st_size, INT32_MAX);
TS_CP(*in, *out, st_atim);
TS_CP(*in, *out, st_mtim);
TS_CP(*in, *out, st_ctim);
CP(*in, *out, st_blksize);
CP(*in, *out, st_blocks);
CP(*in, *out, st_flags);
CP(*in, *out, st_gen);
}
#endif
#ifdef COMPAT_43
int
ofreebsd32_stat(struct thread *td, struct ofreebsd32_stat_args *uap)
{
struct stat sb;
struct ostat32 sb32;
int error;
error = kern_statat(td, 0, AT_FDCWD, uap->path, UIO_USERSPACE,
&sb, NULL);
if (error)
return (error);
copy_ostat(&sb, &sb32);
error = copyout(&sb32, uap->ub, sizeof (sb32));
return (error);
}
#endif
int
freebsd32_fstat(struct thread *td, struct freebsd32_fstat_args *uap)
{
struct stat ub;
struct stat32 ub32;
int error;
error = kern_fstat(td, uap->fd, &ub);
if (error)
return (error);
copy_stat(&ub, &ub32);
error = copyout(&ub32, uap->ub, sizeof(ub32));
return (error);
}
#ifdef COMPAT_43
int
ofreebsd32_fstat(struct thread *td, struct ofreebsd32_fstat_args *uap)
{
struct stat ub;
struct ostat32 ub32;
int error;
error = kern_fstat(td, uap->fd, &ub);
if (error)
return (error);
copy_ostat(&ub, &ub32);
error = copyout(&ub32, uap->ub, sizeof(ub32));
return (error);
}
#endif
int
freebsd32_fstatat(struct thread *td, struct freebsd32_fstatat_args *uap)
{
struct stat ub;
struct stat32 ub32;
int error;
error = kern_statat(td, uap->flag, uap->fd, uap->path, UIO_USERSPACE,
&ub, NULL);
if (error)
return (error);
copy_stat(&ub, &ub32);
error = copyout(&ub32, uap->buf, sizeof(ub32));
return (error);
}
#ifdef COMPAT_43
int
ofreebsd32_lstat(struct thread *td, struct ofreebsd32_lstat_args *uap)
{
struct stat sb;
struct ostat32 sb32;
int error;
error = kern_statat(td, AT_SYMLINK_NOFOLLOW, AT_FDCWD, uap->path,
UIO_USERSPACE, &sb, NULL);
if (error)
return (error);
copy_ostat(&sb, &sb32);
error = copyout(&sb32, uap->ub, sizeof (sb32));
return (error);
}
#endif
int
freebsd32_fhstat(struct thread *td, struct freebsd32_fhstat_args *uap)
{
struct stat sb;
struct stat32 sb32;
struct fhandle fh;
int error;
error = copyin(uap->u_fhp, &fh, sizeof(fhandle_t));
if (error != 0)
return (error);
error = kern_fhstat(td, fh, &sb);
if (error != 0)
return (error);
copy_stat(&sb, &sb32);
error = copyout(&sb32, uap->sb, sizeof (sb32));
return (error);
}
#if defined(COMPAT_FREEBSD11)
extern int ino64_trunc_error;
static int
freebsd11_cvtstat32(struct stat *in, struct freebsd11_stat32 *out)
{
CP(*in, *out, st_ino);
if (in->st_ino != out->st_ino) {
switch (ino64_trunc_error) {
default:
case 0:
break;
case 1:
return (EOVERFLOW);
case 2:
out->st_ino = UINT32_MAX;
break;
}
}
CP(*in, *out, st_nlink);
if (in->st_nlink != out->st_nlink) {
switch (ino64_trunc_error) {
default:
case 0:
break;
case 1:
return (EOVERFLOW);
case 2:
out->st_nlink = UINT16_MAX;
break;
}
}
out->st_dev = in->st_dev;
if (out->st_dev != in->st_dev) {
switch (ino64_trunc_error) {
default:
break;
case 1:
return (EOVERFLOW);
}
}
CP(*in, *out, st_mode);
CP(*in, *out, st_uid);
CP(*in, *out, st_gid);
out->st_rdev = in->st_rdev;
if (out->st_rdev != in->st_rdev) {
switch (ino64_trunc_error) {
default:
break;
case 1:
return (EOVERFLOW);
}
}
TS_CP(*in, *out, st_atim);
TS_CP(*in, *out, st_mtim);
TS_CP(*in, *out, st_ctim);
CP(*in, *out, st_size);
CP(*in, *out, st_blocks);
CP(*in, *out, st_blksize);
CP(*in, *out, st_flags);
CP(*in, *out, st_gen);
TS_CP(*in, *out, st_birthtim);
out->st_lspare = 0;
bzero((char *)&out->st_birthtim + sizeof(out->st_birthtim),
sizeof(*out) - offsetof(struct freebsd11_stat32,
st_birthtim) - sizeof(out->st_birthtim));
return (0);
}
int
freebsd11_freebsd32_stat(struct thread *td,
struct freebsd11_freebsd32_stat_args *uap)
{
struct stat sb;
struct freebsd11_stat32 sb32;
int error;
error = kern_statat(td, 0, AT_FDCWD, uap->path, UIO_USERSPACE,
&sb, NULL);
if (error != 0)
return (error);
error = freebsd11_cvtstat32(&sb, &sb32);
if (error == 0)
error = copyout(&sb32, uap->ub, sizeof (sb32));
return (error);
}
int
freebsd11_freebsd32_fstat(struct thread *td,
struct freebsd11_freebsd32_fstat_args *uap)
{
struct stat sb;
struct freebsd11_stat32 sb32;
int error;
error = kern_fstat(td, uap->fd, &sb);
if (error != 0)
return (error);
error = freebsd11_cvtstat32(&sb, &sb32);
if (error == 0)
error = copyout(&sb32, uap->ub, sizeof (sb32));
return (error);
}
int
freebsd11_freebsd32_fstatat(struct thread *td,
struct freebsd11_freebsd32_fstatat_args *uap)
{
struct stat sb;
struct freebsd11_stat32 sb32;
int error;
error = kern_statat(td, uap->flag, uap->fd, uap->path, UIO_USERSPACE,
&sb, NULL);
if (error != 0)
return (error);
error = freebsd11_cvtstat32(&sb, &sb32);
if (error == 0)
error = copyout(&sb32, uap->buf, sizeof (sb32));
return (error);
}
int
freebsd11_freebsd32_lstat(struct thread *td,
struct freebsd11_freebsd32_lstat_args *uap)
{
struct stat sb;
struct freebsd11_stat32 sb32;
int error;
error = kern_statat(td, AT_SYMLINK_NOFOLLOW, AT_FDCWD, uap->path,
UIO_USERSPACE, &sb, NULL);
if (error != 0)
return (error);
error = freebsd11_cvtstat32(&sb, &sb32);
if (error == 0)
error = copyout(&sb32, uap->ub, sizeof (sb32));
return (error);
}
int
freebsd11_freebsd32_fhstat(struct thread *td,
struct freebsd11_freebsd32_fhstat_args *uap)
{
struct stat sb;
struct freebsd11_stat32 sb32;
struct fhandle fh;
int error;
error = copyin(uap->u_fhp, &fh, sizeof(fhandle_t));
if (error != 0)
return (error);
error = kern_fhstat(td, fh, &sb);
if (error != 0)
return (error);
error = freebsd11_cvtstat32(&sb, &sb32);
if (error == 0)
error = copyout(&sb32, uap->sb, sizeof (sb32));
return (error);
}
#endif
int
freebsd32___sysctl(struct thread *td, struct freebsd32___sysctl_args *uap)
{
int error, name[CTL_MAXNAME];
size_t j, oldlen;
uint32_t tmp;
if (uap->namelen > CTL_MAXNAME || uap->namelen < 2)
return (EINVAL);
error = copyin(uap->name, name, uap->namelen * sizeof(int));
if (error)
return (error);
if (uap->oldlenp) {
error = fueword32(uap->oldlenp, &tmp);
oldlen = tmp;
} else {
oldlen = 0;
}
if (error != 0)
return (EFAULT);
error = userland_sysctl(td, name, uap->namelen,
uap->old, &oldlen, 1,
uap->new, uap->newlen, &j, SCTL_MASK32);
if (error)
return (error);
if (uap->oldlenp)
suword32(uap->oldlenp, j);
return (0);
}
int
freebsd32_jail(struct thread *td, struct freebsd32_jail_args *uap)
{
uint32_t version;
int error;
struct jail j;
error = copyin(uap->jail, &version, sizeof(uint32_t));
if (error)
return (error);
switch (version) {
case 0:
{
/* FreeBSD single IPv4 jails. */
struct jail32_v0 j32_v0;
bzero(&j, sizeof(struct jail));
error = copyin(uap->jail, &j32_v0, sizeof(struct jail32_v0));
if (error)
return (error);
CP(j32_v0, j, version);
PTRIN_CP(j32_v0, j, path);
PTRIN_CP(j32_v0, j, hostname);
j.ip4s = htonl(j32_v0.ip_number); /* jail_v0 is host order */
break;
}
case 1:
/*
* Version 1 was used by multi-IPv4 jail implementations
* that never made it into the official kernel.
*/
return (EINVAL);
case 2: /* JAIL_API_VERSION */
{
/* FreeBSD multi-IPv4/IPv6,noIP jails. */
struct jail32 j32;
error = copyin(uap->jail, &j32, sizeof(struct jail32));
if (error)
return (error);
CP(j32, j, version);
PTRIN_CP(j32, j, path);
PTRIN_CP(j32, j, hostname);
PTRIN_CP(j32, j, jailname);
CP(j32, j, ip4s);
CP(j32, j, ip6s);
PTRIN_CP(j32, j, ip4);
PTRIN_CP(j32, j, ip6);
break;
}
default:
/* Sci-Fi jails are not supported, sorry. */
return (EINVAL);
}
return (kern_jail(td, &j));
}
int
freebsd32_jail_set(struct thread *td, struct freebsd32_jail_set_args *uap)
{
struct uio *auio;
int error;
/* Check that we have an even number of iovecs. */
if (uap->iovcnt & 1)
return (EINVAL);
error = freebsd32_copyinuio(uap->iovp, uap->iovcnt, &auio);
if (error)
return (error);
error = kern_jail_set(td, auio, uap->flags);
free(auio, M_IOV);
return (error);
}
int
freebsd32_jail_get(struct thread *td, struct freebsd32_jail_get_args *uap)
{
struct iovec32 iov32;
struct uio *auio;
int error, i;
/* Check that we have an even number of iovecs. */
if (uap->iovcnt & 1)
return (EINVAL);
error = freebsd32_copyinuio(uap->iovp, uap->iovcnt, &auio);
if (error)
return (error);
error = kern_jail_get(td, auio, uap->flags);
if (error == 0)
for (i = 0; i < uap->iovcnt; i++) {
PTROUT_CP(auio->uio_iov[i], iov32, iov_base);
CP(auio->uio_iov[i], iov32, iov_len);
error = copyout(&iov32, uap->iovp + i, sizeof(iov32));
if (error != 0)
break;
}
free(auio, M_IOV);
return (error);
}
int
freebsd32_sigaction(struct thread *td, struct freebsd32_sigaction_args *uap)
{
struct sigaction32 s32;
struct sigaction sa, osa, *sap;
int error;
if (uap->act) {
error = copyin(uap->act, &s32, sizeof(s32));
if (error)
return (error);
sa.sa_handler = PTRIN(s32.sa_u);
CP(s32, sa, sa_flags);
CP(s32, sa, sa_mask);
sap = &sa;
} else
sap = NULL;
error = kern_sigaction(td, uap->sig, sap, &osa, 0);
if (error == 0 && uap->oact != NULL) {
s32.sa_u = PTROUT(osa.sa_handler);
CP(osa, s32, sa_flags);
CP(osa, s32, sa_mask);
error = copyout(&s32, uap->oact, sizeof(s32));
}
return (error);
}
#ifdef COMPAT_FREEBSD4
int
freebsd4_freebsd32_sigaction(struct thread *td,
struct freebsd4_freebsd32_sigaction_args *uap)
{
struct sigaction32 s32;
struct sigaction sa, osa, *sap;
int error;
if (uap->act) {
error = copyin(uap->act, &s32, sizeof(s32));
if (error)
return (error);
sa.sa_handler = PTRIN(s32.sa_u);
CP(s32, sa, sa_flags);
CP(s32, sa, sa_mask);
sap = &sa;
} else
sap = NULL;
error = kern_sigaction(td, uap->sig, sap, &osa, KSA_FREEBSD4);
if (error == 0 && uap->oact != NULL) {
s32.sa_u = PTROUT(osa.sa_handler);
CP(osa, s32, sa_flags);
CP(osa, s32, sa_mask);
error = copyout(&s32, uap->oact, sizeof(s32));
}
return (error);
}
#endif
#ifdef COMPAT_43
struct osigaction32 {
u_int32_t sa_u;
osigset_t sa_mask;
int sa_flags;
};
#define ONSIG 32
int
ofreebsd32_sigaction(struct thread *td,
struct ofreebsd32_sigaction_args *uap)
{
struct osigaction32 s32;
struct sigaction sa, osa, *sap;
int error;
if (uap->signum <= 0 || uap->signum >= ONSIG)
return (EINVAL);
if (uap->nsa) {
error = copyin(uap->nsa, &s32, sizeof(s32));
if (error)
return (error);
sa.sa_handler = PTRIN(s32.sa_u);
CP(s32, sa, sa_flags);
OSIG2SIG(s32.sa_mask, sa.sa_mask);
sap = &sa;
} else
sap = NULL;
error = kern_sigaction(td, uap->signum, sap, &osa, KSA_OSIGSET);
if (error == 0 && uap->osa != NULL) {
s32.sa_u = PTROUT(osa.sa_handler);
CP(osa, s32, sa_flags);
SIG2OSIG(osa.sa_mask, s32.sa_mask);
error = copyout(&s32, uap->osa, sizeof(s32));
}
return (error);
}
int
ofreebsd32_sigprocmask(struct thread *td,
struct ofreebsd32_sigprocmask_args *uap)
{
sigset_t set, oset;
int error;
OSIG2SIG(uap->mask, set);
error = kern_sigprocmask(td, uap->how, &set, &oset, SIGPROCMASK_OLD);
SIG2OSIG(oset, td->td_retval[0]);
return (error);
}
int
ofreebsd32_sigpending(struct thread *td,
struct ofreebsd32_sigpending_args *uap)
{
struct proc *p = td->td_proc;
sigset_t siglist;
PROC_LOCK(p);
siglist = p->p_siglist;
SIGSETOR(siglist, td->td_siglist);
PROC_UNLOCK(p);
SIG2OSIG(siglist, td->td_retval[0]);
return (0);
}
struct sigvec32 {
u_int32_t sv_handler;
int sv_mask;
int sv_flags;
};
int
ofreebsd32_sigvec(struct thread *td,
struct ofreebsd32_sigvec_args *uap)
{
struct sigvec32 vec;
struct sigaction sa, osa, *sap;
int error;
if (uap->signum <= 0 || uap->signum >= ONSIG)
return (EINVAL);
if (uap->nsv) {
error = copyin(uap->nsv, &vec, sizeof(vec));
if (error)
return (error);
sa.sa_handler = PTRIN(vec.sv_handler);
OSIG2SIG(vec.sv_mask, sa.sa_mask);
sa.sa_flags = vec.sv_flags;
sa.sa_flags ^= SA_RESTART;
sap = &sa;
} else
sap = NULL;
error = kern_sigaction(td, uap->signum, sap, &osa, KSA_OSIGSET);
if (error == 0 && uap->osv != NULL) {
vec.sv_handler = PTROUT(osa.sa_handler);
SIG2OSIG(osa.sa_mask, vec.sv_mask);
vec.sv_flags = osa.sa_flags;
vec.sv_flags &= ~SA_NOCLDWAIT;
vec.sv_flags ^= SA_RESTART;
error = copyout(&vec, uap->osv, sizeof(vec));
}
return (error);
}
int
ofreebsd32_sigblock(struct thread *td,
struct ofreebsd32_sigblock_args *uap)
{
sigset_t set, oset;
OSIG2SIG(uap->mask, set);
kern_sigprocmask(td, SIG_BLOCK, &set, &oset, 0);
SIG2OSIG(oset, td->td_retval[0]);
return (0);
}
int
ofreebsd32_sigsetmask(struct thread *td,
struct ofreebsd32_sigsetmask_args *uap)
{
sigset_t set, oset;
OSIG2SIG(uap->mask, set);
kern_sigprocmask(td, SIG_SETMASK, &set, &oset, 0);
SIG2OSIG(oset, td->td_retval[0]);
return (0);
}
int
ofreebsd32_sigsuspend(struct thread *td,
struct ofreebsd32_sigsuspend_args *uap)
{
sigset_t mask;
OSIG2SIG(uap->mask, mask);
return (kern_sigsuspend(td, mask));
}
struct sigstack32 {
u_int32_t ss_sp;
int ss_onstack;
};
int
ofreebsd32_sigstack(struct thread *td,
struct ofreebsd32_sigstack_args *uap)
{
struct sigstack32 s32;
struct sigstack nss, oss;
int error = 0, unss;
if (uap->nss != NULL) {
error = copyin(uap->nss, &s32, sizeof(s32));
if (error)
return (error);
nss.ss_sp = PTRIN(s32.ss_sp);
CP(s32, nss, ss_onstack);
unss = 1;
} else {
unss = 0;
}
oss.ss_sp = td->td_sigstk.ss_sp;
oss.ss_onstack = sigonstack(cpu_getstack(td));
if (unss) {
td->td_sigstk.ss_sp = nss.ss_sp;
td->td_sigstk.ss_size = 0;
td->td_sigstk.ss_flags |= (nss.ss_onstack & SS_ONSTACK);
td->td_pflags |= TDP_ALTSTACK;
}
if (uap->oss != NULL) {
s32.ss_sp = PTROUT(oss.ss_sp);
CP(oss, s32, ss_onstack);
error = copyout(&s32, uap->oss, sizeof(s32));
}
return (error);
}
#endif
int
freebsd32_nanosleep(struct thread *td, struct freebsd32_nanosleep_args *uap)
{
return (freebsd32_user_clock_nanosleep(td, CLOCK_REALTIME,
TIMER_RELTIME, uap->rqtp, uap->rmtp));
}
int
freebsd32_clock_nanosleep(struct thread *td,
struct freebsd32_clock_nanosleep_args *uap)
{
int error;
error = freebsd32_user_clock_nanosleep(td, uap->clock_id, uap->flags,
uap->rqtp, uap->rmtp);
return (kern_posix_error(td, error));
}
static int
freebsd32_user_clock_nanosleep(struct thread *td, clockid_t clock_id,
int flags, const struct timespec32 *ua_rqtp, struct timespec32 *ua_rmtp)
{
struct timespec32 rmt32, rqt32;
struct timespec rmt, rqt;
int error;
error = copyin(ua_rqtp, &rqt32, sizeof(rqt32));
if (error)
return (error);
CP(rqt32, rqt, tv_sec);
CP(rqt32, rqt, tv_nsec);
if (ua_rmtp != NULL && (flags & TIMER_ABSTIME) == 0 &&
!useracc(ua_rmtp, sizeof(rmt32), VM_PROT_WRITE))
return (EFAULT);
error = kern_clock_nanosleep(td, clock_id, flags, &rqt, &rmt);
if (error == EINTR && ua_rmtp != NULL && (flags & TIMER_ABSTIME) == 0) {
int error2;
CP(rmt, rmt32, tv_sec);
CP(rmt, rmt32, tv_nsec);
error2 = copyout(&rmt32, ua_rmtp, sizeof(rmt32));
if (error2)
error = error2;
}
return (error);
}
int
freebsd32_clock_gettime(struct thread *td,
struct freebsd32_clock_gettime_args *uap)
{
struct timespec ats;
struct timespec32 ats32;
int error;
error = kern_clock_gettime(td, uap->clock_id, &ats);
if (error == 0) {
CP(ats, ats32, tv_sec);
CP(ats, ats32, tv_nsec);
error = copyout(&ats32, uap->tp, sizeof(ats32));
}
return (error);
}
int
freebsd32_clock_settime(struct thread *td,
struct freebsd32_clock_settime_args *uap)
{
struct timespec ats;
struct timespec32 ats32;
int error;
error = copyin(uap->tp, &ats32, sizeof(ats32));
if (error)
return (error);
CP(ats32, ats, tv_sec);
CP(ats32, ats, tv_nsec);
return (kern_clock_settime(td, uap->clock_id, &ats));
}
int
freebsd32_clock_getres(struct thread *td,
struct freebsd32_clock_getres_args *uap)
{
struct timespec ts;
struct timespec32 ts32;
int error;
if (uap->tp == NULL)
return (0);
error = kern_clock_getres(td, uap->clock_id, &ts);
if (error == 0) {
CP(ts, ts32, tv_sec);
CP(ts, ts32, tv_nsec);
error = copyout(&ts32, uap->tp, sizeof(ts32));
}
return (error);
}
int
freebsd32_ktimer_create(struct thread *td,
struct freebsd32_ktimer_create_args *uap)
{
struct sigevent32 ev32;
struct sigevent ev, *evp;
int error, id;
if (uap->evp == NULL) {
evp = NULL;
} else {
evp = &ev;
error = copyin(uap->evp, &ev32, sizeof(ev32));
if (error != 0)
return (error);
error = convert_sigevent32(&ev32, &ev);
if (error != 0)
return (error);
}
error = kern_ktimer_create(td, uap->clock_id, evp, &id, -1);
if (error == 0) {
error = copyout(&id, uap->timerid, sizeof(int));
if (error != 0)
kern_ktimer_delete(td, id);
}
return (error);
}
int
freebsd32_ktimer_settime(struct thread *td,
struct freebsd32_ktimer_settime_args *uap)
{
struct itimerspec32 val32, oval32;
struct itimerspec val, oval, *ovalp;
int error;
error = copyin(uap->value, &val32, sizeof(val32));
if (error != 0)
return (error);
ITS_CP(val32, val);
ovalp = uap->ovalue != NULL ? &oval : NULL;
error = kern_ktimer_settime(td, uap->timerid, uap->flags, &val, ovalp);
if (error == 0 && uap->ovalue != NULL) {
ITS_CP(oval, oval32);
error = copyout(&oval32, uap->ovalue, sizeof(oval32));
}
return (error);
}
int
freebsd32_ktimer_gettime(struct thread *td,
struct freebsd32_ktimer_gettime_args *uap)
{
struct itimerspec32 val32;
struct itimerspec val;
int error;
error = kern_ktimer_gettime(td, uap->timerid, &val);
if (error == 0) {
ITS_CP(val, val32);
error = copyout(&val32, uap->value, sizeof(val32));
}
return (error);
}
int
freebsd32_clock_getcpuclockid2(struct thread *td,
struct freebsd32_clock_getcpuclockid2_args *uap)
{
clockid_t clk_id;
int error;
error = kern_clock_getcpuclockid2(td, PAIR32TO64(id_t, uap->id),
uap->which, &clk_id);
if (error == 0)
error = copyout(&clk_id, uap->clock_id, sizeof(clockid_t));
return (error);
}
int
freebsd32_thr_new(struct thread *td,
struct freebsd32_thr_new_args *uap)
{
struct thr_param32 param32;
struct thr_param param;
int error;
if (uap->param_size < 0 ||
uap->param_size > sizeof(struct thr_param32))
return (EINVAL);
bzero(&param, sizeof(struct thr_param));
bzero(&param32, sizeof(struct thr_param32));
error = copyin(uap->param, &param32, uap->param_size);
if (error != 0)
return (error);
param.start_func = PTRIN(param32.start_func);
param.arg = PTRIN(param32.arg);
param.stack_base = PTRIN(param32.stack_base);
param.stack_size = param32.stack_size;
param.tls_base = PTRIN(param32.tls_base);
param.tls_size = param32.tls_size;
param.child_tid = PTRIN(param32.child_tid);
param.parent_tid = PTRIN(param32.parent_tid);
param.flags = param32.flags;
param.rtp = PTRIN(param32.rtp);
param.spare[0] = PTRIN(param32.spare[0]);
param.spare[1] = PTRIN(param32.spare[1]);
param.spare[2] = PTRIN(param32.spare[2]);
return (kern_thr_new(td, &param));
}
int
freebsd32_thr_suspend(struct thread *td, struct freebsd32_thr_suspend_args *uap)
{
struct timespec32 ts32;
struct timespec ts, *tsp;
int error;
error = 0;
tsp = NULL;
if (uap->timeout != NULL) {
error = copyin((const void *)uap->timeout, (void *)&ts32,
sizeof(struct timespec32));
if (error != 0)
return (error);
ts.tv_sec = ts32.tv_sec;
ts.tv_nsec = ts32.tv_nsec;
tsp = &ts;
}
return (kern_thr_suspend(td, tsp));
}
void
siginfo_to_siginfo32(const siginfo_t *src, struct siginfo32 *dst)
{
bzero(dst, sizeof(*dst));
dst->si_signo = src->si_signo;
dst->si_errno = src->si_errno;
dst->si_code = src->si_code;
dst->si_pid = src->si_pid;
dst->si_uid = src->si_uid;
dst->si_status = src->si_status;
dst->si_addr = (uintptr_t)src->si_addr;
dst->si_value.sival_int = src->si_value.sival_int;
dst->si_timerid = src->si_timerid;
dst->si_overrun = src->si_overrun;
}
#ifndef _FREEBSD32_SYSPROTO_H_
struct freebsd32_sigqueue_args {
pid_t pid;
int signum;
/* union sigval32 */ int value;
};
#endif
int
freebsd32_sigqueue(struct thread *td, struct freebsd32_sigqueue_args *uap)
{
union sigval sv;
/*
* On 32-bit ABIs, sival_int and sival_ptr are the same.
* On 64-bit little-endian ABIs, the low bits are the same.
* In 64-bit big-endian ABIs, sival_int overlaps with
* sival_ptr's HIGH bits. We choose to support sival_int
* rather than sival_ptr in this case as it seems to be
* more common.
*/
bzero(&sv, sizeof(sv));
sv.sival_int = uap->value;
return (kern_sigqueue(td, uap->pid, uap->signum, &sv));
}
int
freebsd32_sigtimedwait(struct thread *td, struct freebsd32_sigtimedwait_args *uap)
{
struct timespec32 ts32;
struct timespec ts;
struct timespec *timeout;
sigset_t set;
ksiginfo_t ksi;
struct siginfo32 si32;
int error;
if (uap->timeout) {
error = copyin(uap->timeout, &ts32, sizeof(ts32));
if (error)
return (error);
ts.tv_sec = ts32.tv_sec;
ts.tv_nsec = ts32.tv_nsec;
timeout = &ts;
} else
timeout = NULL;
error = copyin(uap->set, &set, sizeof(set));
if (error)
return (error);
error = kern_sigtimedwait(td, set, &ksi, timeout);
if (error)
return (error);
if (uap->info) {
siginfo_to_siginfo32(&ksi.ksi_info, &si32);
error = copyout(&si32, uap->info, sizeof(struct siginfo32));
}
if (error == 0)
td->td_retval[0] = ksi.ksi_signo;
return (error);
}
/*
* MPSAFE
*/
int
freebsd32_sigwaitinfo(struct thread *td, struct freebsd32_sigwaitinfo_args *uap)
{
ksiginfo_t ksi;
struct siginfo32 si32;
sigset_t set;
int error;
error = copyin(uap->set, &set, sizeof(set));
if (error)
return (error);
error = kern_sigtimedwait(td, set, &ksi, NULL);
if (error)
return (error);
if (uap->info) {
siginfo_to_siginfo32(&ksi.ksi_info, &si32);
error = copyout(&si32, uap->info, sizeof(struct siginfo32));
}
if (error == 0)
td->td_retval[0] = ksi.ksi_signo;
return (error);
}
int
freebsd32_cpuset_setid(struct thread *td,
struct freebsd32_cpuset_setid_args *uap)
{
return (kern_cpuset_setid(td, uap->which,
PAIR32TO64(id_t, uap->id), uap->setid));
}
int
freebsd32_cpuset_getid(struct thread *td,
struct freebsd32_cpuset_getid_args *uap)
{
return (kern_cpuset_getid(td, uap->level, uap->which,
PAIR32TO64(id_t, uap->id), uap->setid));
}
int
freebsd32_cpuset_getaffinity(struct thread *td,
struct freebsd32_cpuset_getaffinity_args *uap)
{
return (kern_cpuset_getaffinity(td, uap->level, uap->which,
PAIR32TO64(id_t,uap->id), uap->cpusetsize, uap->mask));
}
int
freebsd32_cpuset_setaffinity(struct thread *td,
struct freebsd32_cpuset_setaffinity_args *uap)
{
return (kern_cpuset_setaffinity(td, uap->level, uap->which,
PAIR32TO64(id_t,uap->id), uap->cpusetsize, uap->mask));
}
int
freebsd32_cpuset_getdomain(struct thread *td,
struct freebsd32_cpuset_getdomain_args *uap)
{
return (kern_cpuset_getdomain(td, uap->level, uap->which,
PAIR32TO64(id_t,uap->id), uap->domainsetsize, uap->mask, uap->policy));
}
int
freebsd32_cpuset_setdomain(struct thread *td,
struct freebsd32_cpuset_setdomain_args *uap)
{
return (kern_cpuset_setdomain(td, uap->level, uap->which,
PAIR32TO64(id_t,uap->id), uap->domainsetsize, uap->mask, uap->policy));
}
int
freebsd32_nmount(struct thread *td,
struct freebsd32_nmount_args /* {
struct iovec *iovp;
unsigned int iovcnt;
int flags;
} */ *uap)
{
struct uio *auio;
uint64_t flags;
int error;
/*
* Mount flags are now 64-bits. On 32-bit architectures only
* 32-bits are passed in, but from here on everything handles
* 64-bit flags correctly.
*/
flags = uap->flags;
AUDIT_ARG_FFLAGS(flags);
/*
* Filter out MNT_ROOTFS. We do not want clients of nmount() in
* userspace to set this flag, but we must filter it out if we want
* MNT_UPDATE on the root file system to work.
* MNT_ROOTFS should only be set by the kernel when mounting its
* root file system.
*/
flags &= ~MNT_ROOTFS;
/*
* Check that we have an even number of iovecs
* and that we have at least two options.
*/
if ((uap->iovcnt & 1) || (uap->iovcnt < 4))
return (EINVAL);
error = freebsd32_copyinuio(uap->iovp, uap->iovcnt, &auio);
if (error)
return (error);
error = vfs_donmount(td, flags, auio);
free(auio, M_IOV);
return (error);
}
#if 0
int
freebsd32_xxx(struct thread *td, struct freebsd32_xxx_args *uap)
{
struct yyy32 *p32, s32;
struct yyy *p = NULL, s;
struct xxx_arg ap;
int error;
if (uap->zzz) {
error = copyin(uap->zzz, &s32, sizeof(s32));
if (error)
return (error);
/* translate in */
p = &s;
}
error = kern_xxx(td, p);
if (error)
return (error);
if (uap->zzz) {
/* translate out */
error = copyout(&s32, p32, sizeof(s32));
}
return (error);
}
#endif
int
syscall32_module_handler(struct module *mod, int what, void *arg)
{
return (kern_syscall_module_handler(freebsd32_sysent, mod, what, arg));
}
int
syscall32_helper_register(struct syscall_helper_data *sd, int flags)
{
return (kern_syscall_helper_register(freebsd32_sysent, sd, flags));
}
int
syscall32_helper_unregister(struct syscall_helper_data *sd)
{
return (kern_syscall_helper_unregister(freebsd32_sysent, sd));
}
register_t *
freebsd32_copyout_strings(struct image_params *imgp)
{
int argc, envc, i;
u_int32_t *vectp;
char *stringp;
uintptr_t destp;
u_int32_t *stack_base;
struct freebsd32_ps_strings *arginfo;
char canary[sizeof(long) * 8];
int32_t pagesizes32[MAXPAGESIZES];
size_t execpath_len;
int szsigcode;
/*
* Calculate string base and vector table pointers.
* Also deal with signal trampoline code for this exec type.
*/
if (imgp->execpath != NULL && imgp->auxargs != NULL)
execpath_len = strlen(imgp->execpath) + 1;
else
execpath_len = 0;
arginfo = (struct freebsd32_ps_strings *)curproc->p_sysent->
sv_psstrings;
if (imgp->proc->p_sysent->sv_sigcode_base == 0)
szsigcode = *(imgp->proc->p_sysent->sv_szsigcode);
else
szsigcode = 0;
destp = (uintptr_t)arginfo;
/*
* install sigcode
*/
if (szsigcode != 0) {
destp -= szsigcode;
destp = rounddown2(destp, sizeof(uint32_t));
copyout(imgp->proc->p_sysent->sv_sigcode, (void *)destp,
szsigcode);
}
/*
* Copy the image path for the rtld.
*/
if (execpath_len != 0) {
destp -= execpath_len;
imgp->execpathp = destp;
copyout(imgp->execpath, (void *)destp, execpath_len);
}
/*
* Prepare the canary for SSP.
*/
arc4rand(canary, sizeof(canary), 0);
destp -= sizeof(canary);
imgp->canary = destp;
copyout(canary, (void *)destp, sizeof(canary));
imgp->canarylen = sizeof(canary);
/*
* Prepare the pagesizes array.
*/
for (i = 0; i < MAXPAGESIZES; i++)
pagesizes32[i] = (uint32_t)pagesizes[i];
destp -= sizeof(pagesizes32);
destp = rounddown2(destp, sizeof(uint32_t));
imgp->pagesizes = destp;
copyout(pagesizes32, (void *)destp, sizeof(pagesizes32));
imgp->pagesizeslen = sizeof(pagesizes32);
destp -= ARG_MAX - imgp->args->stringspace;
destp = rounddown2(destp, sizeof(uint32_t));
vectp = (uint32_t *)destp;
if (imgp->auxargs) {
/*
* Allocate room on the stack for the ELF auxargs
* array. It has up to AT_COUNT entries.
*/
vectp -= howmany(AT_COUNT * sizeof(Elf32_Auxinfo),
sizeof(*vectp));
}
/*
* Allocate room for the argv[] and env vectors including the
* terminating NULL pointers.
*/
vectp -= imgp->args->argc + 1 + imgp->args->envc + 1;
/*
* vectp also becomes our initial stack base
*/
stack_base = vectp;
stringp = imgp->args->begin_argv;
argc = imgp->args->argc;
envc = imgp->args->envc;
/*
* Copy out strings - arguments and environment.
*/
copyout(stringp, (void *)destp, ARG_MAX - imgp->args->stringspace);
/*
* Fill in "ps_strings" struct for ps, w, etc.
*/
suword32(&arginfo->ps_argvstr, (u_int32_t)(intptr_t)vectp);
suword32(&arginfo->ps_nargvstr, argc);
/*
* Fill in argument portion of vector table.
*/
for (; argc > 0; --argc) {
suword32(vectp++, (u_int32_t)(intptr_t)destp);
while (*stringp++ != 0)
destp++;
destp++;
}
/* a null vector table pointer separates the argp's from the envp's */
suword32(vectp++, 0);
suword32(&arginfo->ps_envstr, (u_int32_t)(intptr_t)vectp);
suword32(&arginfo->ps_nenvstr, envc);
/*
* Fill in environment portion of vector table.
*/
for (; envc > 0; --envc) {
suword32(vectp++, (u_int32_t)(intptr_t)destp);
while (*stringp++ != 0)
destp++;
destp++;
}
/* end of vector table is a null pointer */
suword32(vectp, 0);
return ((register_t *)stack_base);
}
int
freebsd32_kldstat(struct thread *td, struct freebsd32_kldstat_args *uap)
{
struct kld_file_stat *stat;
struct kld32_file_stat *stat32;
int error, version;
if ((error = copyin(&uap->stat->version, &version, sizeof(version)))
!= 0)
return (error);
if (version != sizeof(struct kld32_file_stat_1) &&
version != sizeof(struct kld32_file_stat))
return (EINVAL);
stat = malloc(sizeof(*stat), M_TEMP, M_WAITOK | M_ZERO);
stat32 = malloc(sizeof(*stat32), M_TEMP, M_WAITOK | M_ZERO);
error = kern_kldstat(td, uap->fileid, stat);
if (error == 0) {
bcopy(&stat->name[0], &stat32->name[0], sizeof(stat->name));
CP(*stat, *stat32, refs);
CP(*stat, *stat32, id);
PTROUT_CP(*stat, *stat32, address);
CP(*stat, *stat32, size);
bcopy(&stat->pathname[0], &stat32->pathname[0],
sizeof(stat->pathname));
stat32->version = version;
error = copyout(stat32, uap->stat, version);
}
free(stat, M_TEMP);
free(stat32, M_TEMP);
return (error);
}
int
freebsd32_posix_fallocate(struct thread *td,
struct freebsd32_posix_fallocate_args *uap)
{
int error;
error = kern_posix_fallocate(td, uap->fd,
PAIR32TO64(off_t, uap->offset), PAIR32TO64(off_t, uap->len));
return (kern_posix_error(td, error));
}
int
freebsd32_posix_fadvise(struct thread *td,
struct freebsd32_posix_fadvise_args *uap)
{
int error;
error = kern_posix_fadvise(td, uap->fd, PAIR32TO64(off_t, uap->offset),
PAIR32TO64(off_t, uap->len), uap->advice);
return (kern_posix_error(td, error));
}
int
convert_sigevent32(struct sigevent32 *sig32, struct sigevent *sig)
{
CP(*sig32, *sig, sigev_notify);
switch (sig->sigev_notify) {
case SIGEV_NONE:
break;
case SIGEV_THREAD_ID:
CP(*sig32, *sig, sigev_notify_thread_id);
/* FALLTHROUGH */
case SIGEV_SIGNAL:
CP(*sig32, *sig, sigev_signo);
PTRIN_CP(*sig32, *sig, sigev_value.sival_ptr);
break;
case SIGEV_KEVENT:
CP(*sig32, *sig, sigev_notify_kqueue);
CP(*sig32, *sig, sigev_notify_kevent_flags);
PTRIN_CP(*sig32, *sig, sigev_value.sival_ptr);
break;
default:
return (EINVAL);
}
return (0);
}
int
freebsd32_procctl(struct thread *td, struct freebsd32_procctl_args *uap)
{
void *data;
union {
struct procctl_reaper_status rs;
struct procctl_reaper_pids rp;
struct procctl_reaper_kill rk;
} x;
union {
struct procctl_reaper_pids32 rp;
} x32;
int error, error1, flags, signum;
switch (uap->com) {
case PROC_SPROTECT:
case PROC_TRACE_CTL:
case PROC_TRAPCAP_CTL:
error = copyin(PTRIN(uap->data), &flags, sizeof(flags));
if (error != 0)
return (error);
data = &flags;
break;
case PROC_REAP_ACQUIRE:
case PROC_REAP_RELEASE:
if (uap->data != NULL)
return (EINVAL);
data = NULL;
break;
case PROC_REAP_STATUS:
data = &x.rs;
break;
case PROC_REAP_GETPIDS:
error = copyin(uap->data, &x32.rp, sizeof(x32.rp));
if (error != 0)
return (error);
CP(x32.rp, x.rp, rp_count);
PTRIN_CP(x32.rp, x.rp, rp_pids);
data = &x.rp;
break;
case PROC_REAP_KILL:
error = copyin(uap->data, &x.rk, sizeof(x.rk));
if (error != 0)
return (error);
data = &x.rk;
break;
case PROC_TRACE_STATUS:
case PROC_TRAPCAP_STATUS:
data = &flags;
break;
case PROC_PDEATHSIG_CTL:
error = copyin(uap->data, &signum, sizeof(signum));
if (error != 0)
return (error);
data = &signum;
break;
case PROC_PDEATHSIG_STATUS:
data = &signum;
break;
default:
return (EINVAL);
}
error = kern_procctl(td, uap->idtype, PAIR32TO64(id_t, uap->id),
uap->com, data);
switch (uap->com) {
case PROC_REAP_STATUS:
if (error == 0)
error = copyout(&x.rs, uap->data, sizeof(x.rs));
break;
case PROC_REAP_KILL:
error1 = copyout(&x.rk, uap->data, sizeof(x.rk));
if (error == 0)
error = error1;
break;
case PROC_TRACE_STATUS:
case PROC_TRAPCAP_STATUS:
if (error == 0)
error = copyout(&flags, uap->data, sizeof(flags));
break;
case PROC_PDEATHSIG_STATUS:
if (error == 0)
error = copyout(&signum, uap->data, sizeof(signum));
break;
}
return (error);
}
int
freebsd32_fcntl(struct thread *td, struct freebsd32_fcntl_args *uap)
{
long tmp;
switch (uap->cmd) {
/*
* Do unsigned conversion for arg when operation
* interprets it as flags or pointer.
*/
case F_SETLK_REMOTE:
case F_SETLKW:
case F_SETLK:
case F_GETLK:
case F_SETFD:
case F_SETFL:
case F_OGETLK:
case F_OSETLK:
case F_OSETLKW:
tmp = (unsigned int)(uap->arg);
break;
default:
tmp = uap->arg;
break;
}
return (kern_fcntl_freebsd(td, uap->fd, uap->cmd, tmp));
}
int
freebsd32_ppoll(struct thread *td, struct freebsd32_ppoll_args *uap)
{
struct timespec32 ts32;
struct timespec ts, *tsp;
sigset_t set, *ssp;
int error;
if (uap->ts != NULL) {
error = copyin(uap->ts, &ts32, sizeof(ts32));
if (error != 0)
return (error);
CP(ts32, ts, tv_sec);
CP(ts32, ts, tv_nsec);
tsp = &ts;
} else
tsp = NULL;
if (uap->set != NULL) {
error = copyin(uap->set, &set, sizeof(set));
if (error != 0)
return (error);
ssp = &set;
} else
ssp = NULL;
return (kern_poll(td, uap->fds, uap->nfds, tsp, ssp));
}
int
freebsd32_sched_rr_get_interval(struct thread *td,
struct freebsd32_sched_rr_get_interval_args *uap)
{
struct timespec ts;
struct timespec32 ts32;
int error;
error = kern_sched_rr_get_interval(td, uap->pid, &ts);
if (error == 0) {
CP(ts, ts32, tv_sec);
CP(ts, ts32, tv_nsec);
error = copyout(&ts32, uap->interval, sizeof(ts32));
}
return (error);
}
Index: projects/clang800-import/sys/conf/files.arm64
===================================================================
--- projects/clang800-import/sys/conf/files.arm64 (revision 343955)
+++ projects/clang800-import/sys/conf/files.arm64 (revision 343956)
@@ -1,281 +1,285 @@
# $FreeBSD$
cloudabi32_vdso.o optional compat_cloudabi32 \
dependency "$S/contrib/cloudabi/cloudabi_vdso_armv6_on_64bit.S" \
compile-with "${CC} -x assembler-with-cpp -m32 -shared -nostdinc -nostdlib -Wl,-T$S/compat/cloudabi/cloudabi_vdso.lds $S/contrib/cloudabi/cloudabi_vdso_armv6_on_64bit.S -o ${.TARGET}" \
no-obj no-implicit-rule \
clean "cloudabi32_vdso.o"
#
cloudabi32_vdso_blob.o optional compat_cloudabi32 \
dependency "cloudabi32_vdso.o" \
compile-with "${OBJCOPY} --input-target binary --output-target elf64-littleaarch64 --binary-architecture aarch64 cloudabi32_vdso.o ${.TARGET}" \
no-implicit-rule \
clean "cloudabi32_vdso_blob.o"
#
cloudabi64_vdso.o optional compat_cloudabi64 \
dependency "$S/contrib/cloudabi/cloudabi_vdso_aarch64.S" \
compile-with "${CC} -x assembler-with-cpp -shared -nostdinc -nostdlib -Wl,-T$S/compat/cloudabi/cloudabi_vdso.lds $S/contrib/cloudabi/cloudabi_vdso_aarch64.S -o ${.TARGET}" \
no-obj no-implicit-rule \
clean "cloudabi64_vdso.o"
#
cloudabi64_vdso_blob.o optional compat_cloudabi64 \
dependency "cloudabi64_vdso.o" \
compile-with "${OBJCOPY} --input-target binary --output-target elf64-littleaarch64 --binary-architecture aarch64 cloudabi64_vdso.o ${.TARGET}" \
no-implicit-rule \
clean "cloudabi64_vdso_blob.o"
#
# Allwinner common files
arm/allwinner/a10_ehci.c optional ehci aw_ehci fdt
arm/allwinner/a10_timer.c optional a10_timer fdt
arm/allwinner/aw_gpio.c optional gpio aw_gpio fdt
arm/allwinner/aw_mmc.c optional mmc aw_mmc fdt | mmccam aw_mmc fdt
arm/allwinner/aw_nmi.c optional aw_nmi fdt \
compile-with "${NORMAL_C} -I$S/gnu/dts/include"
arm/allwinner/aw_pwm.c optional aw_pwm fdt
arm/allwinner/aw_rsb.c optional aw_rsb fdt
arm/allwinner/aw_rtc.c optional aw_rtc fdt
arm/allwinner/aw_sid.c optional aw_sid nvmem fdt
arm/allwinner/aw_spi.c optional aw_spi fdt
arm/allwinner/aw_syscon.c optional aw_syscon ext_resources syscon fdt
arm/allwinner/aw_thermal.c optional aw_thermal nvmem fdt
arm/allwinner/aw_usbphy.c optional ehci aw_usbphy fdt
arm/allwinner/aw_wdog.c optional aw_wdog fdt
arm/allwinner/axp81x.c optional axp81x fdt
arm/allwinner/if_awg.c optional awg ext_resources syscon aw_sid nvmem fdt
# Allwinner clock driver
arm/allwinner/clkng/aw_ccung.c optional aw_ccu fdt
arm/allwinner/clkng/aw_clk_nkmp.c optional aw_ccu fdt
arm/allwinner/clkng/aw_clk_nm.c optional aw_ccu fdt
arm/allwinner/clkng/aw_clk_prediv_mux.c optional aw_ccu fdt
arm/allwinner/clkng/ccu_a64.c optional soc_allwinner_a64 aw_ccu fdt
arm/allwinner/clkng/ccu_h3.c optional soc_allwinner_h5 aw_ccu fdt
arm/allwinner/clkng/ccu_sun8i_r.c optional aw_ccu fdt
# Allwinner padconf files
arm/allwinner/a64/a64_padconf.c optional soc_allwinner_a64 fdt
arm/allwinner/a64/a64_r_padconf.c optional soc_allwinner_a64 fdt
arm/allwinner/h3/h3_padconf.c optional soc_allwinner_h5 fdt
arm/allwinner/h3/h3_r_padconf.c optional soc_allwinner_h5 fdt
arm/annapurna/alpine/alpine_ccu.c optional al_ccu fdt
arm/annapurna/alpine/alpine_nb_service.c optional al_nb_service fdt
arm/annapurna/alpine/alpine_pci.c optional al_pci fdt
arm/annapurna/alpine/alpine_pci_msix.c optional al_pci fdt
arm/annapurna/alpine/alpine_serdes.c optional al_serdes fdt \
no-depend \
compile-with "${CC} -c -o ${.TARGET} ${CFLAGS} -I$S/contrib/alpine-hal -I$S/contrib/alpine-hal/eth ${PROF} ${.IMPSRC}"
arm/arm/generic_timer.c standard
arm/arm/gic.c standard
arm/arm/gic_acpi.c optional acpi
arm/arm/gic_fdt.c optional fdt
arm/arm/pmu.c standard
arm/arm/physmem.c standard
arm/broadcom/bcm2835/bcm2835_audio.c optional sound vchiq fdt \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
arm/broadcom/bcm2835/bcm2835_bsc.c optional bcm2835_bsc soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_cpufreq.c optional soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_dma.c optional soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_fbd.c optional vt soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_ft5406.c optional evdev bcm2835_ft5406 soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_gpio.c optional gpio soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_intr.c optional soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_mbox.c optional soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_rng.c optional random soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_sdhci.c optional sdhci soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_sdhost.c optional sdhci soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_spi.c optional bcm2835_spi soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_vcio.c optional soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2835_wdog.c optional soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm2836.c optional soc_brcm_bcm2837 fdt
arm/broadcom/bcm2835/bcm283x_dwc_fdt.c optional dwcotg fdt soc_brcm_bcm2837
arm/mv/gpio.c optional mv_gpio fdt
arm/mv/mvebu_pinctrl.c optional mvebu_pinctrl fdt
arm/mv/mv_cp110_icu.c optional mv_cp110_icu fdt
arm/mv/mv_ap806_gicp.c optional mv_ap806_gicp fdt
arm/mv/mv_ap806_clock.c optional SOC_MARVELL_8K fdt
arm/mv/mv_cp110_clock.c optional SOC_MARVELL_8K fdt
arm/mv/mv_thermal.c optional SOC_MARVELL_8K mv_thermal fdt
arm/mv/armada38x/armada38x_rtc.c optional mv_rtc fdt
arm/xilinx/uart_dev_cdnc.c optional uart soc_xilinx_zynq
+arm64/acpica/acpi_iort.c optional acpi
arm64/acpica/acpi_machdep.c optional acpi
arm64/acpica/OsdEnvironment.c optional acpi
arm64/acpica/acpi_wakeup.c optional acpi
arm64/acpica/pci_cfgreg.c optional acpi pci
arm64/arm64/autoconf.c standard
arm64/arm64/bus_machdep.c standard
arm64/arm64/bus_space_asm.S standard
arm64/arm64/busdma_bounce.c standard
arm64/arm64/busdma_machdep.c standard
arm64/arm64/bzero.S standard
arm64/arm64/clock.c standard
arm64/arm64/copyinout.S standard
arm64/arm64/copystr.c standard
arm64/arm64/cpu_errata.c standard
arm64/arm64/cpufunc_asm.S standard
arm64/arm64/db_disasm.c optional ddb
arm64/arm64/db_interface.c optional ddb
arm64/arm64/db_trace.c optional ddb
arm64/arm64/debug_monitor.c optional ddb
arm64/arm64/disassem.c optional ddb
arm64/arm64/dump_machdep.c standard
arm64/arm64/efirt_machdep.c optional efirt
arm64/arm64/elf32_machdep.c optional compat_freebsd32
arm64/arm64/elf_machdep.c standard
arm64/arm64/exception.S standard
arm64/arm64/freebsd32_machdep.c optional compat_freebsd32
arm64/arm64/gicv3_its.c optional intrng fdt
arm64/arm64/gic_v3.c standard
arm64/arm64/gic_v3_acpi.c optional acpi
arm64/arm64/gic_v3_fdt.c optional fdt
arm64/arm64/identcpu.c standard
arm64/arm64/in_cksum.c optional inet | inet6
arm64/arm64/locore.S standard no-obj
arm64/arm64/machdep.c standard
arm64/arm64/mem.c standard
arm64/arm64/memcpy.S standard
arm64/arm64/memmove.S standard
arm64/arm64/minidump_machdep.c standard
arm64/arm64/mp_machdep.c optional smp
arm64/arm64/nexus.c standard
arm64/arm64/ofw_machdep.c optional fdt
arm64/arm64/pmap.c standard
arm64/arm64/stack_machdep.c optional ddb | stack
arm64/arm64/support.S standard
arm64/arm64/swtch.S standard
arm64/arm64/sys_machdep.c standard
arm64/arm64/trap.c standard
arm64/arm64/uio_machdep.c standard
arm64/arm64/uma_machdep.c standard
arm64/arm64/undefined.c standard
arm64/arm64/unwind.c optional ddb | kdtrace_hooks | stack
arm64/arm64/vfp.c standard
arm64/arm64/vm_machdep.c standard
arm64/cavium/thunder_pcie_fdt.c optional soc_cavm_thunderx pci fdt
arm64/cavium/thunder_pcie_pem.c optional soc_cavm_thunderx pci
arm64/cavium/thunder_pcie_pem_fdt.c optional soc_cavm_thunderx pci fdt
arm64/cavium/thunder_pcie_common.c optional soc_cavm_thunderx pci
arm64/cloudabi32/cloudabi32_sysvec.c optional compat_cloudabi32
arm64/cloudabi64/cloudabi64_sysvec.c optional compat_cloudabi64
arm64/coresight/coresight.c standard
arm64/coresight/coresight_if.m standard
arm64/coresight/coresight-cmd.c standard
arm64/coresight/coresight-cpu-debug.c standard
arm64/coresight/coresight-dynamic-replicator.c standard
arm64/coresight/coresight-etm4x.c standard
arm64/coresight/coresight-funnel.c standard
arm64/coresight/coresight-tmc.c standard
arm64/qualcomm/qcom_gcc.c optional qcom_gcc fdt
contrib/vchiq/interface/compat/vchi_bsd.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_2835_arm.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -Wno-unused -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_arm.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -Wno-unused -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_connected.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_core.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_kern_lib.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_kmod.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_shim.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
contrib/vchiq/interface/vchiq_arm/vchiq_util.c optional vchiq soc_brcm_bcm2837 \
compile-with "${NORMAL_C} -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000 -I$S/contrib/vchiq"
crypto/armv8/armv8_crypto.c optional armv8crypto
armv8_crypto_wrap.o optional armv8crypto \
dependency "$S/crypto/armv8/armv8_crypto_wrap.c" \
compile-with "${CC} -c ${CFLAGS:C/^-O2$/-O3/:N-nostdinc:N-mgeneral-regs-only} -I$S/crypto/armv8/ ${WERROR} ${NO_WCAST_QUAL} ${PROF} -march=armv8-a+crypto ${.IMPSRC}" \
no-implicit-rule \
clean "armv8_crypto_wrap.o"
crypto/blowfish/bf_enc.c optional crypto | ipsec | ipsec_support
crypto/des/des_enc.c optional crypto | ipsec | ipsec_support | netsmb
dev/acpica/acpi_bus_if.m optional acpi
dev/acpica/acpi_if.m optional acpi
dev/acpica/acpi_pci_link.c optional acpi pci
dev/acpica/acpi_pcib.c optional acpi pci
dev/acpica/acpi_pxm.c optional acpi
dev/ahci/ahci_generic.c optional ahci
dev/axgbe/if_axgbe.c optional axgbe
dev/axgbe/xgbe-desc.c optional axgbe
dev/axgbe/xgbe-dev.c optional axgbe
dev/axgbe/xgbe-drv.c optional axgbe
dev/axgbe/xgbe-mdio.c optional axgbe
dev/cpufreq/cpufreq_dt.c optional cpufreq fdt
dev/iicbus/twsi/mv_twsi.c optional twsi fdt
dev/iicbus/twsi/a10_twsi.c optional twsi fdt
dev/iicbus/twsi/twsi.c optional twsi fdt
dev/hwpmc/hwpmc_arm64.c optional hwpmc
dev/hwpmc/hwpmc_arm64_md.c optional hwpmc
dev/mbox/mbox_if.m optional soc_brcm_bcm2837
dev/mmc/host/dwmmc.c optional dwmmc fdt
dev/mmc/host/dwmmc_hisi.c optional dwmmc fdt soc_hisi_hi6220
dev/mmc/host/dwmmc_rockchip.c optional dwmmc fdt soc_rockchip_rk3328
dev/neta/if_mvneta_fdt.c optional neta fdt
dev/neta/if_mvneta.c optional neta mdio mii
dev/ofw/ofw_cpu.c optional fdt
dev/ofw/ofwpci.c optional fdt pci
dev/pci/pci_host_generic.c optional pci
dev/pci/pci_host_generic_acpi.c optional pci acpi
dev/pci/pci_host_generic_fdt.c optional pci fdt
dev/psci/psci.c standard
dev/psci/psci_arm64.S standard
dev/psci/smccc.c standard
dev/sdhci/sdhci_xenon.c optional sdhci_xenon sdhci fdt
dev/uart/uart_cpu_arm64.c optional uart
dev/uart/uart_dev_mu.c optional uart uart_mu
dev/uart/uart_dev_pl011.c optional uart pl011
dev/usb/controller/dwc_otg_hisi.c optional dwcotg fdt soc_hisi_hi6220
dev/usb/controller/ehci_mv.c optional ehci_mv fdt
dev/usb/controller/generic_ehci.c optional ehci acpi
dev/usb/controller/generic_ohci.c optional ohci fdt
dev/usb/controller/generic_usb_if.m optional ohci fdt
dev/usb/controller/xhci_mv.c optional xhci_mv fdt
dev/vnic/mrml_bridge.c optional vnic fdt
dev/vnic/nic_main.c optional vnic pci
dev/vnic/nicvf_main.c optional vnic pci pci_iov
dev/vnic/nicvf_queues.c optional vnic pci pci_iov
dev/vnic/thunder_bgx_fdt.c optional vnic fdt
dev/vnic/thunder_bgx.c optional vnic pci
dev/vnic/thunder_mdio_fdt.c optional vnic fdt
dev/vnic/thunder_mdio.c optional vnic
dev/vnic/lmac_if.m optional inet | inet6 | vnic
kern/kern_clocksource.c standard
kern/msi_if.m optional intrng
kern/pic_if.m optional intrng
kern/subr_devmap.c standard
kern/subr_intr.c optional intrng
libkern/bcmp.c standard
libkern/ffs.c standard
libkern/ffsl.c standard
libkern/ffsll.c standard
libkern/fls.c standard
libkern/flsl.c standard
libkern/flsll.c standard
libkern/memcmp.c standard
libkern/memset.c standard
libkern/arm64/crc32c_armv8.S standard
cddl/contrib/opensolaris/common/atomic/aarch64/opensolaris_atomic.S optional zfs | dtrace compile-with "${CDDL_C}"
cddl/dev/dtrace/aarch64/dtrace_asm.S optional dtrace compile-with "${DTRACE_S}"
cddl/dev/dtrace/aarch64/dtrace_subr.c optional dtrace compile-with "${DTRACE_C}"
cddl/dev/fbt/aarch64/fbt_isa.c optional dtrace_fbt | dtraceall compile-with "${FBT_C}"
-arm64/rockchip/rk_i2c.c optional rk_i2c fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/rk805.c optional rk805 fdt soc_rockchip_rk3328
-arm64/rockchip/rk_grf.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/rk_pinctrl.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/rk_gpio.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/clk/rk_cru.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/clk/rk_clk_armclk.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/clk/rk_clk_composite.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/clk/rk_clk_gate.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/clk/rk_clk_mux.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-arm64/rockchip/clk/rk_clk_pll.c optional fdt soc_rockchip_rk3328 soc_rockchip_rk3399
+# RockChip Drivers
+arm64/rockchip/rk_i2c.c optional fdt rk_i2c soc_rockchip_rk3328 | fdt rk_i2c soc_rockchip_rk3399
+arm64/rockchip/rk805.c optional fdt rk805 soc_rockchip_rk3328 | fdt rk805 soc_rockchip_rk3399
+arm64/rockchip/rk_grf.c optional fdt soc_rockchip_rk3328 | fdt soc_rockchip_rk3399
+arm64/rockchip/rk_pinctrl.c optional fdt rk_pinctrl soc_rockchip_rk3328 | fdt rk_pinctrl soc_rockchip_rk3399
+arm64/rockchip/rk_gpio.c optional fdt rk_gpio soc_rockchip_rk3328 | fdt rk_gpio soc_rockchip_rk3399
+arm64/rockchip/if_dwc_rk.c optional fdt dwc_rk soc_rockchip_rk3328 | fdt dwc_rk soc_rockchip_rk3399
+dev/dwc/if_dwc.c optional fdt dwc_rk soc_rockchip_rk3328 | fdt dwc_rk soc_rockchip_rk3399
+dev/dwc/if_dwc_if.m optional fdt dwc_rk soc_rockchip_rk3328 | fdt dwc_rk soc_rockchip_rk3399
+
+# RockChip Clock support
+arm64/rockchip/clk/rk_cru.c optional fdt soc_rockchip_rk3328 | fdt soc_rockchip_rk3399
+arm64/rockchip/clk/rk_clk_armclk.c optional fdt soc_rockchip_rk3328 | fdt soc_rockchip_rk3399
+arm64/rockchip/clk/rk_clk_composite.c optional fdt soc_rockchip_rk3328 | fdt soc_rockchip_rk3399
+arm64/rockchip/clk/rk_clk_gate.c optional fdt soc_rockchip_rk3328 | fdt soc_rockchip_rk3399
+arm64/rockchip/clk/rk_clk_mux.c optional fdt soc_rockchip_rk3328 | fdt soc_rockchip_rk3399
+arm64/rockchip/clk/rk_clk_pll.c optional fdt soc_rockchip_rk3328 | fdt soc_rockchip_rk3399
arm64/rockchip/clk/rk3328_cru.c optional fdt soc_rockchip_rk3328
arm64/rockchip/clk/rk3399_cru.c optional fdt soc_rockchip_rk3399
arm64/rockchip/clk/rk3399_pmucru.c optional fdt soc_rockchip_rk3399
-arm64/rockchip/if_dwc_rk.c optional dwc_rk fdt soc_rockchip_rk3328 soc_rockchip_rk3399
-dev/dwc/if_dwc.c optional dwc_rk
-dev/dwc/if_dwc_if.m optional dwc_rk
Index: projects/clang800-import/sys/conf/options
===================================================================
--- projects/clang800-import/sys/conf/options (revision 343955)
+++ projects/clang800-import/sys/conf/options (revision 343956)
@@ -1,1012 +1,1011 @@
# $FreeBSD$
#
# On the handling of kernel options
#
# All kernel options should be listed in NOTES, with suitable
# descriptions. Negative options (options that make some code not
# compile) should be commented out; LINT (generated from NOTES) should
# compile as much code as possible. Try to structure option-using
# code so that a single option only switches code on, or only switches
# code off, to make it possible to have a full compile-test. If
# necessary, you can check for COMPILING_LINT to get maximum code
# coverage.
#
# All new options shall also be listed in either "conf/options" or
# "conf/options.<machine>". Options that affect a single source-file
# <xxx>.[c|s] should be directed into "opt_<xxx>.h", while options
# that affect multiple files should either go in "opt_global.h" if
# this is a kernel-wide option (used just about everywhere), or in
# "opt_<option-name-in-lower-case>.h" if it affects only some files.
# Note that the effect of listing only an option without a
# header-file-name in conf/options (and cousins) is that the last
# convention is followed.
#
# This handling scheme is not yet fully implemented.
#
#
# Format of this file:
# Option name filename
#
# If filename is missing, the default is
# opt_<name-of-option-in-lower-case>.h
AAC_DEBUG opt_aac.h
AACRAID_DEBUG opt_aacraid.h
AHC_ALLOW_MEMIO opt_aic7xxx.h
AHC_TMODE_ENABLE opt_aic7xxx.h
AHC_DUMP_EEPROM opt_aic7xxx.h
AHC_DEBUG opt_aic7xxx.h
AHC_DEBUG_OPTS opt_aic7xxx.h
AHC_REG_PRETTY_PRINT opt_aic7xxx.h
AHD_DEBUG opt_aic79xx.h
AHD_DEBUG_OPTS opt_aic79xx.h
AHD_TMODE_ENABLE opt_aic79xx.h
AHD_REG_PRETTY_PRINT opt_aic79xx.h
TWA_DEBUG opt_twa.h
# Debugging options.
ALT_BREAK_TO_DEBUGGER opt_kdb.h
BREAK_TO_DEBUGGER opt_kdb.h
BUF_TRACKING opt_global.h
DDB
DDB_BUFR_SIZE opt_ddb.h
DDB_CAPTURE_DEFAULTBUFSIZE opt_ddb.h
DDB_CAPTURE_MAXBUFSIZE opt_ddb.h
DDB_CTF opt_ddb.h
DDB_NUMSYM opt_ddb.h
FULL_BUF_TRACKING opt_global.h
GDB
KDB opt_global.h
KDB_TRACE opt_kdb.h
KDB_UNATTENDED opt_kdb.h
KLD_DEBUG opt_kld.h
SYSCTL_DEBUG opt_sysctl.h
EARLY_PRINTF opt_global.h
TEXTDUMP_PREFERRED opt_ddb.h
TEXTDUMP_VERBOSE opt_ddb.h
NUM_CORE_FILES opt_global.h
TSLOG opt_global.h
TSLOGSIZE opt_global.h
# Miscellaneous options.
ALQ
ALTERA_SDCARD_FAST_SIM opt_altera_sdcard.h
ATSE_CFI_HACK opt_cfi.h
AUDIT opt_global.h
BOOTHOWTO opt_global.h
BOOTVERBOSE opt_global.h
CALLOUT_PROFILING
CAPABILITIES opt_capsicum.h
CAPABILITY_MODE opt_capsicum.h
COMPAT_43 opt_global.h
COMPAT_43TTY opt_global.h
COMPAT_FREEBSD4 opt_global.h
COMPAT_FREEBSD5 opt_global.h
COMPAT_FREEBSD6 opt_global.h
COMPAT_FREEBSD7 opt_global.h
COMPAT_FREEBSD9 opt_global.h
COMPAT_FREEBSD10 opt_global.h
COMPAT_FREEBSD11 opt_global.h
COMPAT_CLOUDABI32 opt_dontuse.h
COMPAT_CLOUDABI64 opt_dontuse.h
COMPAT_LINUXKPI opt_dontuse.h
_COMPAT_LINUX32 opt_compat.h # XXX: make sure opt_compat.h exists
COMPILING_LINT opt_global.h
CY_PCI_FASTINTR
DEADLKRES opt_watchdog.h
EXPERIMENTAL opt_global.h
EXT_RESOURCES opt_global.h
DIRECTIO
FILEMON opt_dontuse.h
FFCLOCK
FULL_PREEMPTION opt_sched.h
GZIO opt_gzio.h
IMAGACT_BINMISC opt_dontuse.h
IPI_PREEMPTION opt_sched.h
GEOM_BDE opt_geom.h
GEOM_BSD opt_geom.h
GEOM_CACHE opt_geom.h
GEOM_CONCAT opt_geom.h
GEOM_ELI opt_geom.h
GEOM_FOX opt_geom.h
GEOM_GATE opt_geom.h
GEOM_JOURNAL opt_geom.h
GEOM_LABEL opt_geom.h
GEOM_LABEL_GPT opt_geom.h
GEOM_LINUX_LVM opt_geom.h
GEOM_MAP opt_geom.h
GEOM_MBR opt_geom.h
GEOM_MIRROR opt_geom.h
GEOM_MOUNTVER opt_geom.h
GEOM_MULTIPATH opt_geom.h
GEOM_NOP opt_geom.h
GEOM_PART_APM opt_geom.h
GEOM_PART_BSD opt_geom.h
GEOM_PART_BSD64 opt_geom.h
GEOM_PART_EBR opt_geom.h
GEOM_PART_EBR_COMPAT opt_geom.h
GEOM_PART_GPT opt_geom.h
GEOM_PART_LDM opt_geom.h
GEOM_PART_MBR opt_geom.h
GEOM_PART_VTOC8 opt_geom.h
GEOM_RAID opt_geom.h
GEOM_RAID3 opt_geom.h
GEOM_SHSEC opt_geom.h
GEOM_STRIPE opt_geom.h
GEOM_SUNLABEL opt_geom.h
GEOM_UZIP opt_geom.h
GEOM_UZIP_DEBUG opt_geom.h
GEOM_VINUM opt_geom.h
GEOM_VIRSTOR opt_geom.h
GEOM_VOL opt_geom.h
GEOM_ZERO opt_geom.h
IFLIB opt_iflib.h
KDTRACE_HOOKS opt_global.h
KDTRACE_FRAME opt_kdtrace.h
KN_HASHSIZE opt_kqueue.h
KSTACK_MAX_PAGES
KSTACK_PAGES
KSTACK_USAGE_PROF
KTRACE
KTRACE_REQUEST_POOL opt_ktrace.h
LIBICONV
MAC opt_global.h
MAC_BIBA opt_dontuse.h
MAC_BSDEXTENDED opt_dontuse.h
MAC_IFOFF opt_dontuse.h
MAC_LOMAC opt_dontuse.h
MAC_MLS opt_dontuse.h
MAC_NONE opt_dontuse.h
MAC_NTPD opt_dontuse.h
MAC_PARTITION opt_dontuse.h
MAC_PORTACL opt_dontuse.h
MAC_SEEOTHERUIDS opt_dontuse.h
MAC_STATIC opt_mac.h
MAC_STUB opt_dontuse.h
MAC_TEST opt_dontuse.h
MAC_VERIEXEC opt_dontuse.h
MAC_VERIEXEC_SHA1 opt_dontuse.h
MAC_VERIEXEC_SHA256 opt_dontuse.h
MAC_VERIEXEC_SHA384 opt_dontuse.h
MAC_VERIEXEC_SHA512 opt_dontuse.h
MD_ROOT opt_md.h
MD_ROOT_FSTYPE opt_md.h
MD_ROOT_READONLY opt_md.h
MD_ROOT_SIZE opt_md.h
MD_ROOT_MEM opt_md.h
MFI_DEBUG opt_mfi.h
MFI_DECODE_LOG opt_mfi.h
MPROF_BUFFERS opt_mprof.h
MPROF_HASH_SIZE opt_mprof.h
NEW_PCIB opt_global.h
NO_ADAPTIVE_MUTEXES opt_adaptive_mutexes.h
NO_ADAPTIVE_RWLOCKS
NO_ADAPTIVE_SX
NO_EVENTTIMERS opt_timer.h
NO_OBSOLETE_CODE opt_global.h
NO_SYSCTL_DESCR opt_global.h
NSWBUF_MIN opt_param.h
MBUF_PACKET_ZONE_DISABLE opt_global.h
PANIC_REBOOT_WAIT_TIME opt_panic.h
PCI_HP opt_pci.h
PCI_IOV opt_global.h
PPC_DEBUG opt_ppc.h
PPC_PROBE_CHIPSET opt_ppc.h
PPS_SYNC opt_ntp.h
PREEMPTION opt_sched.h
QUOTA
SCHED_4BSD opt_sched.h
SCHED_STATS opt_sched.h
SCHED_ULE opt_sched.h
SLEEPQUEUE_PROFILING
SLHCI_DEBUG opt_slhci.h
-SPX_HACK
STACK opt_stack.h
SUIDDIR
MSGMNB opt_sysvipc.h
MSGMNI opt_sysvipc.h
MSGSEG opt_sysvipc.h
MSGSSZ opt_sysvipc.h
MSGTQL opt_sysvipc.h
SEMMNI opt_sysvipc.h
SEMMNS opt_sysvipc.h
SEMMNU opt_sysvipc.h
SEMMSL opt_sysvipc.h
SEMOPM opt_sysvipc.h
SEMUME opt_sysvipc.h
SHMALL opt_sysvipc.h
SHMMAX opt_sysvipc.h
SHMMAXPGS opt_sysvipc.h
SHMMIN opt_sysvipc.h
SHMMNI opt_sysvipc.h
SHMSEG opt_sysvipc.h
SYSVMSG opt_sysvipc.h
SYSVSEM opt_sysvipc.h
SYSVSHM opt_sysvipc.h
SW_WATCHDOG opt_watchdog.h
TCPHPTS opt_inet.h
TURNSTILE_PROFILING
UMTX_PROFILING
UMTX_CHAINS opt_global.h
VERBOSE_SYSINIT
ZSTDIO opt_zstdio.h
# Sanitizers
COVERAGE opt_global.h
KCOV
KUBSAN opt_global.h
# POSIX kernel options
P1003_1B_MQUEUE opt_posix.h
P1003_1B_SEMAPHORES opt_posix.h
_KPOSIX_PRIORITY_SCHEDULING opt_posix.h
# Do we want the config file compiled into the kernel?
INCLUDE_CONFIG_FILE opt_config.h
# Options for static filesystems. These should only be used at config
# time, since the corresponding lkms cannot work if there are any static
# dependencies. Unusability is enforced by hiding the defines for the
# options in a never-included header.
AUTOFS opt_dontuse.h
CD9660 opt_dontuse.h
EXT2FS opt_dontuse.h
FDESCFS opt_dontuse.h
FFS opt_dontuse.h
FUSE opt_dontuse.h
MSDOSFS opt_dontuse.h
NANDFS opt_dontuse.h
NULLFS opt_dontuse.h
PROCFS opt_dontuse.h
PSEUDOFS opt_dontuse.h
SMBFS opt_dontuse.h
TMPFS opt_dontuse.h
UDF opt_dontuse.h
UNIONFS opt_dontuse.h
ZFS opt_dontuse.h
# Pseudofs debugging
PSEUDOFS_TRACE opt_pseudofs.h
# In-kernel GSS-API
KGSSAPI opt_kgssapi.h
KGSSAPI_DEBUG opt_kgssapi.h
# These static filesystems have one slightly bogus static dependency in
# sys/i386/i386/autoconf.c. If any of these filesystems are
# statically compiled into the kernel, code for mounting them as root
# filesystems will be enabled - but look below.
# NFSCL - client
# NFSD - server
NFSCL opt_nfs.h
NFSD opt_nfs.h
# filesystems and libiconv bridge
CD9660_ICONV opt_dontuse.h
MSDOSFS_ICONV opt_dontuse.h
UDF_ICONV opt_dontuse.h
# If you are following the conditions in the copyright,
# you can enable soft-updates, which will speed up a lot of things
# and make the system safer from crashes at the same time.
# Otherwise a STUB module will be compiled in.
SOFTUPDATES opt_ffs.h
# On small, embedded systems, it can be useful to turn off support for
# snapshots. It saves about 30-40k for a feature that would be lightly
# used, if it is used at all.
NO_FFS_SNAPSHOT opt_ffs.h
# Enabling this option turns on support for Access Control Lists in UFS,
# which can be used to support high security configurations. Depends on
# UFS_EXTATTR.
UFS_ACL opt_ufs.h
# Enabling this option turns on support for extended attributes in UFS-based
# filesystems, which can be used to support high security configurations
# as well as new filesystem features.
UFS_EXTATTR opt_ufs.h
UFS_EXTATTR_AUTOSTART opt_ufs.h
# Enable fast hash lookups for large directories on UFS-based filesystems.
UFS_DIRHASH opt_ufs.h
# Enable gjournal-based UFS journal.
UFS_GJOURNAL opt_ufs.h
# We plan to remove the static dependences above, with a
# <filesystem>_ROOT option to control if it usable as root. This list
# allows these options to be present in config files already (though
# they won't make any difference yet).
NFS_ROOT opt_nfsroot.h
# SMB/CIFS requester
NETSMB opt_netsmb.h
# Enable netdump(4) client support.
NETDUMP opt_global.h
# Options used only in subr_param.c.
HZ opt_param.h
MAXFILES opt_param.h
NBUF opt_param.h
NSFBUFS opt_param.h
VM_BCACHE_SIZE_MAX opt_param.h
VM_SWZONE_SIZE_MAX opt_param.h
MAXUSERS
DFLDSIZ opt_param.h
MAXDSIZ opt_param.h
MAXSSIZ opt_param.h
# Generic SCSI options.
CAM_MAX_HIGHPOWER opt_cam.h
CAMDEBUG opt_cam.h
CAM_DEBUG_COMPILE opt_cam.h
CAM_DEBUG_DELAY opt_cam.h
CAM_DEBUG_BUS opt_cam.h
CAM_DEBUG_TARGET opt_cam.h
CAM_DEBUG_LUN opt_cam.h
CAM_DEBUG_FLAGS opt_cam.h
CAM_BOOT_DELAY opt_cam.h
CAM_IOSCHED_DYNAMIC opt_cam.h
CAM_TEST_FAILURE opt_cam.h
SCSI_DELAY opt_scsi.h
SCSI_NO_SENSE_STRINGS opt_scsi.h
SCSI_NO_OP_STRINGS opt_scsi.h
# Options used only in cam/ata/ata_da.c
ATA_STATIC_ID opt_ada.h
# Options used only in cam/scsi/scsi_cd.c
CHANGER_MIN_BUSY_SECONDS opt_cd.h
CHANGER_MAX_BUSY_SECONDS opt_cd.h
# Options used only in cam/scsi/scsi_da.c
DA_TRACK_REFS opt_da.h
# Options used only in cam/scsi/scsi_sa.c.
SA_IO_TIMEOUT opt_sa.h
SA_SPACE_TIMEOUT opt_sa.h
SA_REWIND_TIMEOUT opt_sa.h
SA_ERASE_TIMEOUT opt_sa.h
SA_1FM_AT_EOD opt_sa.h
# Options used only in cam/scsi/scsi_pt.c
SCSI_PT_DEFAULT_TIMEOUT opt_pt.h
# Options used only in cam/scsi/scsi_ses.c
SES_ENABLE_PASSTHROUGH opt_ses.h
# Options used in dev/sym/ (Symbios SCSI driver).
SYM_SETUP_SCSI_DIFF opt_sym.h #-HVD support for 825a, 875, 885
# disabled:0 (default), enabled:1
SYM_SETUP_PCI_PARITY opt_sym.h #-PCI parity checking
# disabled:0, enabled:1 (default)
SYM_SETUP_MAX_LUN opt_sym.h #-Number of LUNs supported
# default:8, range:[1..64]
# Options used only in dev/isp/*
ISP_TARGET_MODE opt_isp.h
ISP_FW_CRASH_DUMP opt_isp.h
ISP_DEFAULT_ROLES opt_isp.h
ISP_INTERNAL_TARGET opt_isp.h
ISP_FCTAPE_OFF opt_isp.h
# Options used only in dev/iscsi
ISCSI_INITIATOR_DEBUG opt_iscsi_initiator.h
# Net stuff.
ACCEPT_FILTER_DATA
ACCEPT_FILTER_DNS
ACCEPT_FILTER_HTTP
ALTQ opt_global.h
ALTQ_CBQ opt_altq.h
ALTQ_CDNR opt_altq.h
ALTQ_CODEL opt_altq.h
ALTQ_DEBUG opt_altq.h
ALTQ_HFSC opt_altq.h
ALTQ_FAIRQ opt_altq.h
ALTQ_NOPCC opt_altq.h
ALTQ_PRIQ opt_altq.h
ALTQ_RED opt_altq.h
ALTQ_RIO opt_altq.h
BOOTP opt_bootp.h
BOOTP_BLOCKSIZE opt_bootp.h
BOOTP_COMPAT opt_bootp.h
BOOTP_NFSROOT opt_bootp.h
BOOTP_NFSV3 opt_bootp.h
BOOTP_WIRED_TO opt_bootp.h
DEVICE_POLLING
DUMMYNET opt_ipdn.h
RATELIMIT opt_ratelimit.h
RATELIMIT_DEBUG opt_ratelimit.h
INET opt_inet.h
INET6 opt_inet6.h
IPDIVERT
IPFILTER opt_ipfilter.h
IPFILTER_DEFAULT_BLOCK opt_ipfilter.h
IPFILTER_LOG opt_ipfilter.h
IPFILTER_LOOKUP opt_ipfilter.h
IPFIREWALL opt_ipfw.h
IPFIREWALL_DEFAULT_TO_ACCEPT opt_ipfw.h
IPFIREWALL_NAT opt_ipfw.h
IPFIREWALL_NAT64 opt_ipfw.h
IPFIREWALL_NPTV6 opt_ipfw.h
IPFIREWALL_VERBOSE opt_ipfw.h
IPFIREWALL_VERBOSE_LIMIT opt_ipfw.h
IPFIREWALL_PMOD opt_ipfw.h
IPSEC opt_ipsec.h
IPSEC_DEBUG opt_ipsec.h
IPSEC_SUPPORT opt_ipsec.h
IPSTEALTH
KRPC
LIBALIAS
LIBMCHAIN
MBUF_PROFILING
MBUF_STRESS_TEST
MROUTING opt_mrouting.h
NFSLOCKD
PCBGROUP opt_pcbgroup.h
PF_DEFAULT_TO_DROP opt_pf.h
RADIX_MPATH opt_mpath.h
ROUTETABLES opt_route.h
RSS opt_rss.h
SLIP_IFF_OPTS opt_slip.h
TCPDEBUG
TCPPCAP opt_global.h
SIFTR
TCP_BLACKBOX opt_global.h
TCP_HHOOK opt_inet.h
TCP_OFFLOAD opt_inet.h # Enable code to dispatch TCP offloading
TCP_RFC7413 opt_inet.h
TCP_RFC7413_MAX_KEYS opt_inet.h
TCP_RFC7413_MAX_PSKS opt_inet.h
TCP_SIGNATURE opt_ipsec.h
VLAN_ARRAY opt_vlan.h
XBONEHACK
#
# SCTP
#
SCTP opt_sctp.h
SCTP_DEBUG opt_sctp.h # Enable debug printfs
SCTP_LOCK_LOGGING opt_sctp.h # Log to KTR lock activity
SCTP_MBUF_LOGGING opt_sctp.h # Log to KTR general mbuf alloc/free
SCTP_MBCNT_LOGGING opt_sctp.h # Log to KTR mbcnt activity
SCTP_PACKET_LOGGING opt_sctp.h # Log to a packet buffer last N packets
SCTP_LTRACE_CHUNKS opt_sctp.h # Log to KTR chunks processed
SCTP_LTRACE_ERRORS opt_sctp.h # Log to KTR error returns.
SCTP_USE_PERCPU_STAT opt_sctp.h # Use per cpu stats.
SCTP_MCORE_INPUT opt_sctp.h # Have multiple input threads for input mbufs
SCTP_LOCAL_TRACE_BUF opt_sctp.h # Use tracebuffer exported via sysctl
SCTP_DETAILED_STR_STATS opt_sctp.h # Use per PR-SCTP policy stream stats
#
#
#
# Netgraph(4). Use option NETGRAPH to enable the base netgraph code.
# Each netgraph node type can either be compiled into the kernel
# or loaded dynamically. To get the former, include the corresponding
# option below. Each type has its own man page, e.g. ng_async(4).
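For example, to compile the base netgraph code plus the ng_echo(4) node type statically into the kernel, a config file would contain lines like these (a sketch in the same syntax this file already uses):

```
options 	NETGRAPH
options 	NETGRAPH_ECHO
```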
NETGRAPH
NETGRAPH_DEBUG opt_netgraph.h
NETGRAPH_ASYNC opt_netgraph.h
NETGRAPH_ATMLLC opt_netgraph.h
NETGRAPH_ATM_ATMPIF opt_netgraph.h
NETGRAPH_BLUETOOTH opt_netgraph.h
NETGRAPH_BLUETOOTH_BT3C opt_netgraph.h
NETGRAPH_BLUETOOTH_H4 opt_netgraph.h
NETGRAPH_BLUETOOTH_HCI opt_netgraph.h
NETGRAPH_BLUETOOTH_L2CAP opt_netgraph.h
NETGRAPH_BLUETOOTH_SOCKET opt_netgraph.h
NETGRAPH_BLUETOOTH_UBT opt_netgraph.h
NETGRAPH_BLUETOOTH_UBTBCMFW opt_netgraph.h
NETGRAPH_BPF opt_netgraph.h
NETGRAPH_BRIDGE opt_netgraph.h
NETGRAPH_CAR opt_netgraph.h
NETGRAPH_CHECKSUM opt_netgraph.h
NETGRAPH_CISCO opt_netgraph.h
NETGRAPH_DEFLATE opt_netgraph.h
NETGRAPH_DEVICE opt_netgraph.h
NETGRAPH_ECHO opt_netgraph.h
NETGRAPH_EIFACE opt_netgraph.h
NETGRAPH_ETHER opt_netgraph.h
NETGRAPH_ETHER_ECHO opt_netgraph.h
NETGRAPH_FEC opt_netgraph.h
NETGRAPH_FRAME_RELAY opt_netgraph.h
NETGRAPH_GIF opt_netgraph.h
NETGRAPH_GIF_DEMUX opt_netgraph.h
NETGRAPH_HOLE opt_netgraph.h
NETGRAPH_IFACE opt_netgraph.h
NETGRAPH_IP_INPUT opt_netgraph.h
NETGRAPH_IPFW opt_netgraph.h
NETGRAPH_KSOCKET opt_netgraph.h
NETGRAPH_L2TP opt_netgraph.h
NETGRAPH_LMI opt_netgraph.h
NETGRAPH_MPPC_COMPRESSION opt_netgraph.h
NETGRAPH_MPPC_ENCRYPTION opt_netgraph.h
NETGRAPH_NAT opt_netgraph.h
NETGRAPH_NETFLOW opt_netgraph.h
NETGRAPH_ONE2MANY opt_netgraph.h
NETGRAPH_PATCH opt_netgraph.h
NETGRAPH_PIPE opt_netgraph.h
NETGRAPH_PPP opt_netgraph.h
NETGRAPH_PPPOE opt_netgraph.h
NETGRAPH_PPTPGRE opt_netgraph.h
NETGRAPH_PRED1 opt_netgraph.h
NETGRAPH_RFC1490 opt_netgraph.h
NETGRAPH_SOCKET opt_netgraph.h
NETGRAPH_SPLIT opt_netgraph.h
NETGRAPH_SPPP opt_netgraph.h
NETGRAPH_TAG opt_netgraph.h
NETGRAPH_TCPMSS opt_netgraph.h
NETGRAPH_TEE opt_netgraph.h
NETGRAPH_TTY opt_netgraph.h
NETGRAPH_UI opt_netgraph.h
NETGRAPH_VJC opt_netgraph.h
NETGRAPH_VLAN opt_netgraph.h
# NgATM options
NGATM_ATM opt_netgraph.h
NGATM_ATMBASE opt_netgraph.h
NGATM_SSCOP opt_netgraph.h
NGATM_SSCFU opt_netgraph.h
NGATM_UNI opt_netgraph.h
NGATM_CCATM opt_netgraph.h
# DRM options
DRM_DEBUG opt_drm.h
TI_SF_BUF_JUMBO opt_ti.h
TI_JUMBO_HDRSPLIT opt_ti.h
# Misc debug flags. Most of these should probably be replaced with
# 'DEBUG', and then let people recompile just the interesting modules
# with 'make CC="cc -DDEBUG"'.
CLUSTERDEBUG opt_debug_cluster.h
DEBUG_1284 opt_ppb_1284.h
VP0_DEBUG opt_vpo.h
LPT_DEBUG opt_lpt.h
PLIP_DEBUG opt_plip.h
LOCKF_DEBUG opt_debug_lockf.h
SI_DEBUG opt_debug_si.h
IFMEDIA_DEBUG opt_ifmedia.h
# Fb options
FB_DEBUG opt_fb.h
FB_INSTALL_CDEV opt_fb.h
# ppbus related options
PERIPH_1284 opt_ppb_1284.h
DONTPROBE_1284 opt_ppb_1284.h
# smbus related options
ENABLE_ALART opt_intpm.h
# These cause changes all over the kernel
BLKDEV_IOSIZE opt_global.h
BURN_BRIDGES opt_global.h
DEBUG opt_global.h
DEBUG_LOCKS opt_global.h
DEBUG_VFS_LOCKS opt_global.h
DFLTPHYS opt_global.h
DIAGNOSTIC opt_global.h
INVARIANT_SUPPORT opt_global.h
INVARIANTS opt_global.h
KASSERT_PANIC_OPTIONAL opt_global.h
MAXCPU opt_global.h
MAXMEMDOM opt_global.h
MAXPHYS opt_global.h
MCLSHIFT opt_global.h
MUTEX_NOINLINE opt_global.h
LOCK_PROFILING opt_global.h
LOCK_PROFILING_FAST opt_global.h
MSIZE opt_global.h
REGRESSION opt_global.h
RWLOCK_NOINLINE opt_global.h
SX_NOINLINE opt_global.h
VFS_BIO_DEBUG opt_global.h
# These are VM related options
VM_KMEM_SIZE opt_vm.h
VM_KMEM_SIZE_SCALE opt_vm.h
VM_KMEM_SIZE_MAX opt_vm.h
VM_NRESERVLEVEL opt_vm.h
VM_LEVEL_0_ORDER opt_vm.h
NO_SWAPPING opt_vm.h
MALLOC_MAKE_FAILURES opt_vm.h
MALLOC_PROFILE opt_vm.h
MALLOC_DEBUG_MAXZONES opt_vm.h
# The MemGuard replacement allocator used for tamper-after-free detection
DEBUG_MEMGUARD opt_vm.h
# The RedZone malloc(9) protection
DEBUG_REDZONE opt_vm.h
# Standard SMP options
EARLY_AP_STARTUP opt_global.h
SMP opt_global.h
NUMA opt_global.h
# Size of the kernel message buffer
MSGBUF_SIZE opt_msgbuf.h
# NFS options
NFS_MINATTRTIMO opt_nfs.h
NFS_MAXATTRTIMO opt_nfs.h
NFS_MINDIRATTRTIMO opt_nfs.h
NFS_MAXDIRATTRTIMO opt_nfs.h
NFS_DEBUG opt_nfs.h
# For the Bt848/Bt848A/Bt849/Bt878/Bt879 driver
OVERRIDE_CARD opt_bktr.h
OVERRIDE_TUNER opt_bktr.h
OVERRIDE_DBX opt_bktr.h
OVERRIDE_MSP opt_bktr.h
BROOKTREE_SYSTEM_DEFAULT opt_bktr.h
BROOKTREE_ALLOC_PAGES opt_bktr.h
BKTR_OVERRIDE_CARD opt_bktr.h
BKTR_OVERRIDE_TUNER opt_bktr.h
BKTR_OVERRIDE_DBX opt_bktr.h
BKTR_OVERRIDE_MSP opt_bktr.h
BKTR_SYSTEM_DEFAULT opt_bktr.h
BKTR_ALLOC_PAGES opt_bktr.h
BKTR_USE_PLL opt_bktr.h
BKTR_GPIO_ACCESS opt_bktr.h
BKTR_NO_MSP_RESET opt_bktr.h
BKTR_430_FX_MODE opt_bktr.h
BKTR_SIS_VIA_MODE opt_bktr.h
BKTR_USE_FREEBSD_SMBUS opt_bktr.h
BKTR_NEW_MSP34XX_DRIVER opt_bktr.h
# Options for uart(4)
UART_PPS_ON_CTS opt_uart.h
UART_POLL_FREQ opt_uart.h
UART_DEV_TOLERANCE_PCT opt_uart.h
# options for bus/device framework
BUS_DEBUG opt_bus.h
# options for USB support
USB_DEBUG opt_usb.h
USB_HOST_ALIGN opt_usb.h
USB_REQ_DEBUG opt_usb.h
USB_TEMPLATE opt_usb.h
USB_VERBOSE opt_usb.h
USB_DMA_SINGLE_ALLOC opt_usb.h
USB_EHCI_BIG_ENDIAN_DESC opt_usb.h
U3G_DEBUG opt_u3g.h
UKBD_DFLT_KEYMAP opt_ukbd.h
UPLCOM_INTR_INTERVAL opt_uplcom.h
UVSCOM_DEFAULT_OPKTSIZE opt_uvscom.h
UVSCOM_INTR_INTERVAL opt_uvscom.h
# options for the Realtek rtwn driver
RTWN_DEBUG opt_rtwn.h
RTWN_WITHOUT_UCODE opt_rtwn.h
# Embedded system options
INIT_PATH
ROOTDEVNAME
FDC_DEBUG opt_fdc.h
PCFCLOCK_VERBOSE opt_pcfclock.h
PCFCLOCK_MAX_RETRIES opt_pcfclock.h
KTR opt_global.h
KTR_ALQ opt_ktr.h
KTR_MASK opt_ktr.h
KTR_CPUMASK opt_ktr.h
KTR_COMPILE opt_global.h
KTR_BOOT_ENTRIES opt_global.h
KTR_ENTRIES opt_global.h
KTR_VERBOSE opt_ktr.h
WITNESS opt_global.h
WITNESS_KDB opt_witness.h
WITNESS_NO_VNODE opt_witness.h
WITNESS_SKIPSPIN opt_witness.h
WITNESS_COUNT opt_witness.h
OPENSOLARIS_WITNESS opt_global.h
# options for ACPI support
ACPI_DEBUG opt_acpi.h
ACPI_MAX_TASKS opt_acpi.h
ACPI_MAX_THREADS opt_acpi.h
ACPI_DMAR opt_acpi.h
DEV_ACPI opt_acpi.h
# ISA support
DEV_ISA opt_isa.h
ISAPNP opt_isa.h
# various 'device presence' options.
DEV_BPF opt_bpf.h
DEV_CARP opt_carp.h
DEV_NETMAP opt_global.h
DEV_PCI opt_pci.h
DEV_PF opt_pf.h
DEV_PFLOG opt_pf.h
DEV_PFSYNC opt_pf.h
DEV_RANDOM opt_global.h
DEV_SPLASH opt_splash.h
DEV_VLAN opt_vlan.h
# ed driver
ED_HPP opt_ed.h
ED_3C503 opt_ed.h
ED_SIC opt_ed.h
# bce driver
BCE_DEBUG opt_bce.h
BCE_NVRAM_WRITE_SUPPORT opt_bce.h
SOCKBUF_DEBUG opt_global.h
# options for ubsec driver
UBSEC_DEBUG opt_ubsec.h
UBSEC_RNDTEST opt_ubsec.h
UBSEC_NO_RNG opt_ubsec.h
# options for hifn driver
HIFN_DEBUG opt_hifn.h
HIFN_RNDTEST opt_hifn.h
# options for safenet driver
SAFE_DEBUG opt_safe.h
SAFE_NO_RNG opt_safe.h
SAFE_RNDTEST opt_safe.h
# syscons/vt options
MAXCONS opt_syscons.h
SC_ALT_MOUSE_IMAGE opt_syscons.h
SC_CUT_SPACES2TABS opt_syscons.h
SC_CUT_SEPCHARS opt_syscons.h
SC_DEBUG_LEVEL opt_syscons.h
SC_DFLT_FONT opt_syscons.h
SC_DISABLE_KDBKEY opt_syscons.h
SC_DISABLE_REBOOT opt_syscons.h
SC_HISTORY_SIZE opt_syscons.h
SC_KERNEL_CONS_ATTR opt_syscons.h
SC_KERNEL_CONS_ATTRS opt_syscons.h
SC_KERNEL_CONS_REV_ATTR opt_syscons.h
SC_MOUSE_CHAR opt_syscons.h
SC_NO_CUTPASTE opt_syscons.h
SC_NO_FONT_LOADING opt_syscons.h
SC_NO_HISTORY opt_syscons.h
SC_NO_MODE_CHANGE opt_syscons.h
SC_NO_SUSPEND_VTYSWITCH opt_syscons.h
SC_NO_SYSMOUSE opt_syscons.h
SC_NORM_ATTR opt_syscons.h
SC_NORM_REV_ATTR opt_syscons.h
SC_PIXEL_MODE opt_syscons.h
SC_RENDER_DEBUG opt_syscons.h
SC_TWOBUTTON_MOUSE opt_syscons.h
VT_ALT_TO_ESC_HACK opt_syscons.h
VT_FB_DEFAULT_WIDTH opt_syscons.h
VT_FB_DEFAULT_HEIGHT opt_syscons.h
VT_MAXWINDOWS opt_syscons.h
VT_TWOBUTTON_MOUSE opt_syscons.h
DEV_SC opt_syscons.h
DEV_VT opt_syscons.h
# teken terminal emulator options
TEKEN_CONS25 opt_teken.h
TEKEN_UTF8 opt_teken.h
TERMINAL_KERN_ATTR opt_teken.h
TERMINAL_NORM_ATTR opt_teken.h
# options for printf
PRINTF_BUFR_SIZE opt_printf.h
BOOT_TAG opt_printf.h
BOOT_TAG_SZ opt_printf.h
# kbd options
KBD_DISABLE_KEYMAP_LOAD opt_kbd.h
KBD_INSTALL_CDEV opt_kbd.h
KBD_MAXRETRY opt_kbd.h
KBD_MAXWAIT opt_kbd.h
KBD_RESETDELAY opt_kbd.h
KBDIO_DEBUG opt_kbd.h
KBDMUX_DFLT_KEYMAP opt_kbdmux.h
# options for the Atheros driver
ATH_DEBUG opt_ath.h
ATH_TXBUF opt_ath.h
ATH_RXBUF opt_ath.h
ATH_DIAGAPI opt_ath.h
ATH_TX99_DIAG opt_ath.h
ATH_ENABLE_11N opt_ath.h
ATH_ENABLE_DFS opt_ath.h
ATH_EEPROM_FIRMWARE opt_ath.h
ATH_ENABLE_RADIOTAP_VENDOR_EXT opt_ath.h
ATH_DEBUG_ALQ opt_ath.h
ATH_KTR_INTR_DEBUG opt_ath.h
# options for the Atheros hal
# XXX For now, this breaks non-AR9130 chipsets, so only use it
# XXX when actually targeting AR9130.
AH_SUPPORT_AR9130 opt_ah.h
# This is required for AR933x SoC support
AH_SUPPORT_AR9330 opt_ah.h
AH_SUPPORT_AR9340 opt_ah.h
AH_SUPPORT_QCA9530 opt_ah.h
AH_SUPPORT_QCA9550 opt_ah.h
AH_DEBUG opt_ah.h
AH_ASSERT opt_ah.h
AH_DEBUG_ALQ opt_ah.h
AH_REGOPS_FUNC opt_ah.h
AH_WRITE_REGDOMAIN opt_ah.h
AH_DEBUG_COUNTRY opt_ah.h
AH_WRITE_EEPROM opt_ah.h
AH_PRIVATE_DIAG opt_ah.h
AH_NEED_DESC_SWAP opt_ah.h
AH_USE_INIPDGAIN opt_ah.h
AH_MAXCHAN opt_ah.h
AH_RXCFG_SDMAMW_4BYTES opt_ah.h
AH_INTERRUPT_DEBUGGING opt_ah.h
# AR5416 and later interrupt mitigation
# XXX do not use this for AR9130
AH_AR5416_INTERRUPT_MITIGATION opt_ah.h
# options for the Broadcom BCM43xx driver (bwi)
BWI_DEBUG opt_bwi.h
BWI_DEBUG_VERBOSE opt_bwi.h
# options for the Broadcom BCM43xx driver (bwn)
BWN_DEBUG opt_bwn.h
BWN_GPL_PHY opt_bwn.h
BWN_USE_SIBA opt_bwn.h
# Options for the SIBA driver
SIBA_DEBUG opt_siba.h
# options for the Marvell 8335 wireless driver
MALO_DEBUG opt_malo.h
MALO_TXBUF opt_malo.h
MALO_RXBUF opt_malo.h
# options for the Marvell wireless driver
MWL_DEBUG opt_mwl.h
MWL_TXBUF opt_mwl.h
MWL_RXBUF opt_mwl.h
MWL_DIAGAPI opt_mwl.h
MWL_AGGR_SIZE opt_mwl.h
MWL_TX_NODROP opt_mwl.h
# Options for the Marvell NETA driver
MVNETA_MULTIQUEUE opt_mvneta.h
MVNETA_KTR opt_mvneta.h
# Options for the Intel 802.11ac wireless driver
IWM_DEBUG opt_iwm.h
# Options for the Intel 802.11n wireless driver
IWN_DEBUG opt_iwn.h
# Options for the Intel 3945ABG wireless driver
WPI_DEBUG opt_wpi.h
# dcons options
DCONS_BUF_SIZE opt_dcons.h
DCONS_POLL_HZ opt_dcons.h
DCONS_FORCE_CONSOLE opt_dcons.h
DCONS_FORCE_GDB opt_dcons.h
# HWPMC options
HWPMC_DEBUG opt_global.h
HWPMC_HOOKS
HWPMC_MIPS_BACKTRACE opt_hwpmc_hooks.h
# 802.11 support layer
IEEE80211_DEBUG opt_wlan.h
IEEE80211_DEBUG_REFCNT opt_wlan.h
IEEE80211_SUPPORT_MESH opt_wlan.h
IEEE80211_SUPPORT_SUPERG opt_wlan.h
IEEE80211_SUPPORT_TDMA opt_wlan.h
IEEE80211_ALQ opt_wlan.h
IEEE80211_DFS_DEBUG opt_wlan.h
# 802.11 TDMA support
TDMA_SLOTLEN_DEFAULT opt_tdma.h
TDMA_SLOTCNT_DEFAULT opt_tdma.h
TDMA_BINTVAL_DEFAULT opt_tdma.h
TDMA_TXRATE_11B_DEFAULT opt_tdma.h
TDMA_TXRATE_11G_DEFAULT opt_tdma.h
TDMA_TXRATE_11A_DEFAULT opt_tdma.h
TDMA_TXRATE_TURBO_DEFAULT opt_tdma.h
TDMA_TXRATE_HALF_DEFAULT opt_tdma.h
TDMA_TXRATE_QUARTER_DEFAULT opt_tdma.h
TDMA_TXRATE_11NA_DEFAULT opt_tdma.h
TDMA_TXRATE_11NG_DEFAULT opt_tdma.h
# VideoMode
PICKMODE_DEBUG opt_videomode.h
# Network stack virtualization options
VIMAGE opt_global.h
VNET_DEBUG opt_global.h
# Common Flash Interface (CFI) options
CFI_SUPPORT_STRATAFLASH opt_cfi.h
CFI_ARMEDANDDANGEROUS opt_cfi.h
CFI_HARDWAREBYTESWAP opt_cfi.h
# Sound options
SND_DEBUG opt_snd.h
SND_DIAGNOSTIC opt_snd.h
SND_FEEDER_MULTIFORMAT opt_snd.h
SND_FEEDER_FULL_MULTIFORMAT opt_snd.h
SND_FEEDER_RATE_HP opt_snd.h
SND_PCM_64 opt_snd.h
SND_OLDSTEREO opt_snd.h
X86BIOS
# Flattened device tree options
FDT opt_platform.h
FDT_DTB_STATIC opt_platform.h
# OFED Infiniband stack
OFED opt_ofed.h
OFED_DEBUG_INIT opt_ofed.h
SDP opt_ofed.h
SDP_DEBUG opt_ofed.h
IPOIB opt_ofed.h
IPOIB_DEBUG opt_ofed.h
IPOIB_CM opt_ofed.h
# Resource Accounting
RACCT opt_global.h
RACCT_DEFAULT_TO_DISABLED opt_global.h
# Resource Limits
RCTL opt_global.h
# Random number generator(s)
# With this, no entropy processor is loaded, but the entropy
# harvesting infrastructure is present. This means an entropy
# processor may be loaded as a module.
RANDOM_LOADABLE opt_global.h
# This turns on high-rate and potentially expensive harvesting in
# the uma slab allocator.
RANDOM_ENABLE_UMA opt_global.h
RANDOM_ENABLE_ETHER opt_global.h
# BHND(4) driver
BHND_LOGLEVEL opt_global.h
# GPIO and child devices
GPIO_SPI_DEBUG opt_gpio.h
# SPI devices
SPIGEN_LEGACY_CDEVNAME opt_spi.h
# etherswitch(4) driver
RTL8366_SOFT_RESET opt_etherswitch.h
# evdev protocol support
EVDEV_SUPPORT opt_evdev.h
EVDEV_DEBUG opt_evdev.h
UINPUT_DEBUG opt_evdev.h
# Hyper-V network driver
HN_DEBUG opt_hn.h
# CAM-based MMC stack
MMCCAM
# Encrypted kernel crash dumps
EKCD opt_ekcd.h
# NVME options
NVME_USE_NVD opt_nvme.h
# amdsbwd options
AMDSBWD_DEBUG opt_amdsbwd.h
Index: projects/clang800-import/sys/ddb/db_ps.c
===================================================================
--- projects/clang800-import/sys/ddb/db_ps.c (revision 343955)
+++ projects/clang800-import/sys/ddb/db_ps.c (revision 343956)
@@ -1,535 +1,536 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1993 The Regents of the University of California.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_kstack_pages.h"
#include <sys/param.h>
#include <sys/cons.h>
#include <sys/jail.h>
#include <sys/kdb.h>
#include <sys/kernel.h>
#include <sys/proc.h>
#include <sys/sysent.h>
#include <sys/systm.h>
#include <sys/_kstack_cache.h>
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <ddb/ddb.h>
#define PRINT_NONE 0
#define PRINT_ARGS 1
static void dumpthread(volatile struct proc *p, volatile struct thread *td,
int all);
static int ps_mode;
/*
* At least one non-optional show-command must be implemented using
* DB_SHOW_ALL_COMMAND() so that db_show_all_cmd_set gets created.
* Here is one.
*/
DB_SHOW_ALL_COMMAND(procs, db_procs_cmd)
{
db_ps(addr, have_addr, count, modif);
}
static void
dump_args(volatile struct proc *p)
{
char *args;
int i, len;
if (p->p_args == NULL)
return;
args = p->p_args->ar_args;
len = (int)p->p_args->ar_length;
for (i = 0; i < len; i++) {
if (args[i] == '\0')
db_printf(" ");
else
db_printf("%c", args[i]);
}
}
/*
* Layout:
* - column counts
* - header
* - single-threaded process
* - multi-threaded process
* - thread in a MT process
*
* 1 2 3 4 5 6 7
* 1234567890123456789012345678901234567890123456789012345678901234567890
* pid ppid pgrp uid state wmesg wchan cmd
* <pid> <ppi> <pgi> <uid> <stat> <wmesg> <wchan > <name>
* <pid> <ppi> <pgi> <uid> <stat> (threaded) <command>
* <tid > <stat> <wmesg> <wchan > <name>
*
* For machines with 64-bit pointers, we expand the wchan field 8 more
* characters.
*/
void
db_ps(db_expr_t addr, bool hasaddr, db_expr_t count, char *modif)
{
volatile struct proc *p, *pp;
volatile struct thread *td;
struct ucred *cred;
struct pgrp *pgrp;
char state[9];
int np, rflag, sflag, dflag, lflag, wflag;
ps_mode = modif[0] == 'a' ? PRINT_ARGS : PRINT_NONE;
np = nprocs;
if (!LIST_EMPTY(&allproc))
p = LIST_FIRST(&allproc);
else
p = &proc0;
#ifdef __LP64__
db_printf(" pid ppid pgrp uid state wmesg wchan cmd\n");
#else
db_printf(" pid ppid pgrp uid state wmesg wchan cmd\n");
#endif
while (--np >= 0 && !db_pager_quit) {
if (p == NULL) {
db_printf("oops, ran out of processes early!\n");
break;
}
pp = p->p_pptr;
if (pp == NULL)
pp = p;
cred = p->p_ucred;
pgrp = p->p_pgrp;
db_printf("%5d %5d %5d %5d ", p->p_pid, pp->p_pid,
pgrp != NULL ? pgrp->pg_id : 0,
cred != NULL ? cred->cr_ruid : 0);
/* Determine our primary process state. */
switch (p->p_state) {
case PRS_NORMAL:
if (P_SHOULDSTOP(p))
state[0] = 'T';
else {
/*
* One of D, L, R, S, W. For a
* multithreaded process we will use
* the state of the thread with the
* highest precedence. The
* precedence order from high to low
* is R, L, D, S, W. If no thread is
* in a sane state we use '?' for our
* primary state.
*/
rflag = sflag = dflag = lflag = wflag = 0;
FOREACH_THREAD_IN_PROC(p, td) {
if (td->td_state == TDS_RUNNING ||
td->td_state == TDS_RUNQ ||
td->td_state == TDS_CAN_RUN)
rflag++;
if (TD_ON_LOCK(td))
lflag++;
if (TD_IS_SLEEPING(td)) {
if (!(td->td_flags & TDF_SINTR))
dflag++;
else
sflag++;
}
if (TD_AWAITING_INTR(td))
wflag++;
}
if (rflag)
state[0] = 'R';
else if (lflag)
state[0] = 'L';
else if (dflag)
state[0] = 'D';
else if (sflag)
state[0] = 'S';
else if (wflag)
state[0] = 'W';
else
state[0] = '?';
}
break;
case PRS_NEW:
state[0] = 'N';
break;
case PRS_ZOMBIE:
state[0] = 'Z';
break;
default:
state[0] = 'U';
break;
}
state[1] = '\0';
/* Additional process state flags. */
if (!(p->p_flag & P_INMEM))
strlcat(state, "W", sizeof(state));
if (p->p_flag & P_TRACED)
strlcat(state, "X", sizeof(state));
if (p->p_flag & P_WEXIT && p->p_state != PRS_ZOMBIE)
strlcat(state, "E", sizeof(state));
if (p->p_flag & P_PPWAIT)
strlcat(state, "V", sizeof(state));
if (p->p_flag & P_SYSTEM || p->p_lock > 0)
strlcat(state, "L", sizeof(state));
if (p->p_pgrp != NULL && p->p_session != NULL &&
SESS_LEADER(p))
strlcat(state, "s", sizeof(state));
/* Cheated here and didn't compare pgid's. */
if (p->p_flag & P_CONTROLT)
strlcat(state, "+", sizeof(state));
if (cred != NULL && jailed(cred))
strlcat(state, "J", sizeof(state));
db_printf(" %-6.6s ", state);
if (p->p_flag & P_HADTHREADS) {
#ifdef __LP64__
db_printf(" (threaded) ");
#else
db_printf(" (threaded) ");
#endif
if (p->p_flag & P_SYSTEM)
db_printf("[");
db_printf("%s", p->p_comm);
if (p->p_flag & P_SYSTEM)
db_printf("]");
if (ps_mode == PRINT_ARGS) {
db_printf(" ");
dump_args(p);
}
db_printf("\n");
}
FOREACH_THREAD_IN_PROC(p, td) {
dumpthread(p, td, p->p_flag & P_HADTHREADS);
if (db_pager_quit)
break;
}
p = LIST_NEXT(p, p_list);
if (p == NULL && np > 0)
p = LIST_FIRST(&zombproc);
}
}
static void
dumpthread(volatile struct proc *p, volatile struct thread *td, int all)
{
char state[9], wprefix;
const char *wmesg;
void *wchan;
if (all) {
db_printf("%6d ", td->td_tid);
switch (td->td_state) {
case TDS_RUNNING:
snprintf(state, sizeof(state), "Run");
break;
case TDS_RUNQ:
snprintf(state, sizeof(state), "RunQ");
break;
case TDS_CAN_RUN:
snprintf(state, sizeof(state), "CanRun");
break;
case TDS_INACTIVE:
snprintf(state, sizeof(state), "Inactv");
break;
case TDS_INHIBITED:
state[0] = '\0';
if (TD_ON_LOCK(td))
strlcat(state, "L", sizeof(state));
if (TD_IS_SLEEPING(td)) {
if (td->td_flags & TDF_SINTR)
strlcat(state, "S", sizeof(state));
else
strlcat(state, "D", sizeof(state));
}
if (TD_IS_SWAPPED(td))
strlcat(state, "W", sizeof(state));
if (TD_AWAITING_INTR(td))
strlcat(state, "I", sizeof(state));
if (TD_IS_SUSPENDED(td))
strlcat(state, "s", sizeof(state));
if (state[0] != '\0')
break;
default:
snprintf(state, sizeof(state), "???");
}
db_printf(" %-6.6s ", state);
}
wprefix = ' ';
if (TD_ON_LOCK(td)) {
wprefix = '*';
wmesg = td->td_lockname;
wchan = td->td_blocked;
} else if (TD_ON_SLEEPQ(td)) {
wmesg = td->td_wmesg;
wchan = td->td_wchan;
} else if (TD_IS_RUNNING(td)) {
snprintf(state, sizeof(state), "CPU %d", td->td_oncpu);
wmesg = state;
wchan = NULL;
} else {
wmesg = "";
wchan = NULL;
}
db_printf("%c%-7.7s ", wprefix, wmesg);
if (wchan == NULL)
#ifdef __LP64__
db_printf("%18s ", "");
#else
db_printf("%10s ", "");
#endif
else
db_printf("%p ", wchan);
if (p->p_flag & P_SYSTEM)
db_printf("[");
if (td->td_name[0] != '\0')
db_printf("%s", td->td_name);
else
db_printf("%s", td->td_proc->p_comm);
if (p->p_flag & P_SYSTEM)
db_printf("]");
if (ps_mode == PRINT_ARGS && all == 0) {
db_printf(" ");
dump_args(p);
}
db_printf("\n");
}
DB_SHOW_COMMAND(thread, db_show_thread)
{
struct thread *td;
struct lock_object *lock;
bool comma;
int delta;
/* Determine which thread to examine. */
if (have_addr)
td = db_lookup_thread(addr, false);
else
td = kdb_thread;
lock = (struct lock_object *)td->td_lock;
db_printf("Thread %d at %p:\n", td->td_tid, td);
db_printf(" proc (pid %d): %p\n", td->td_proc->p_pid, td->td_proc);
if (td->td_name[0] != '\0')
db_printf(" name: %s\n", td->td_name);
+ db_printf(" pcb: %p\n", td->td_pcb);
db_printf(" stack: %p-%p\n", (void *)td->td_kstack,
(void *)(td->td_kstack + td->td_kstack_pages * PAGE_SIZE - 1));
db_printf(" flags: %#x ", td->td_flags);
db_printf(" pflags: %#x\n", td->td_pflags);
db_printf(" state: ");
switch (td->td_state) {
case TDS_INACTIVE:
db_printf("INACTIVE\n");
break;
case TDS_CAN_RUN:
db_printf("CAN RUN\n");
break;
case TDS_RUNQ:
db_printf("RUNQ\n");
break;
case TDS_RUNNING:
db_printf("RUNNING (CPU %d)\n", td->td_oncpu);
break;
case TDS_INHIBITED:
db_printf("INHIBITED: {");
comma = false;
if (TD_IS_SLEEPING(td)) {
db_printf("SLEEPING");
comma = true;
}
if (TD_IS_SUSPENDED(td)) {
if (comma)
db_printf(", ");
db_printf("SUSPENDED");
comma = true;
}
if (TD_IS_SWAPPED(td)) {
if (comma)
db_printf(", ");
db_printf("SWAPPED");
comma = true;
}
if (TD_ON_LOCK(td)) {
if (comma)
db_printf(", ");
db_printf("LOCK");
comma = true;
}
if (TD_AWAITING_INTR(td)) {
if (comma)
db_printf(", ");
db_printf("IWAIT");
}
db_printf("}\n");
break;
default:
db_printf("??? (%#x)\n", td->td_state);
break;
}
if (TD_ON_LOCK(td))
db_printf(" lock: %s turnstile: %p\n", td->td_lockname,
td->td_blocked);
if (TD_ON_SLEEPQ(td))
db_printf(
" wmesg: %s wchan: %p sleeptimo %lx. %jx (curr %lx. %jx)\n",
td->td_wmesg, td->td_wchan,
(long)sbttobt(td->td_sleeptimo).sec,
(uintmax_t)sbttobt(td->td_sleeptimo).frac,
(long)sbttobt(sbinuptime()).sec,
(uintmax_t)sbttobt(sbinuptime()).frac);
db_printf(" priority: %d\n", td->td_priority);
db_printf(" container lock: %s (%p)\n", lock->lo_name, lock);
if (td->td_swvoltick != 0) {
delta = (u_int)ticks - (u_int)td->td_swvoltick;
db_printf(" last voluntary switch: %d ms ago\n",
1000 * delta / hz);
}
if (td->td_swinvoltick != 0) {
delta = (u_int)ticks - (u_int)td->td_swinvoltick;
db_printf(" last involuntary switch: %d ms ago\n",
1000 * delta / hz);
}
}
DB_SHOW_COMMAND(proc, db_show_proc)
{
struct thread *td;
struct proc *p;
int i;
/* Determine which process to examine. */
if (have_addr)
p = db_lookup_proc(addr);
else
p = kdb_thread->td_proc;
db_printf("Process %d (%s) at %p:\n", p->p_pid, p->p_comm, p);
db_printf(" state: ");
switch (p->p_state) {
case PRS_NEW:
db_printf("NEW\n");
break;
case PRS_NORMAL:
db_printf("NORMAL\n");
break;
case PRS_ZOMBIE:
db_printf("ZOMBIE\n");
break;
default:
db_printf("??? (%#x)\n", p->p_state);
}
if (p->p_ucred != NULL) {
db_printf(" uid: %d gids: ", p->p_ucred->cr_uid);
for (i = 0; i < p->p_ucred->cr_ngroups; i++) {
db_printf("%d", p->p_ucred->cr_groups[i]);
if (i < (p->p_ucred->cr_ngroups - 1))
db_printf(", ");
}
db_printf("\n");
}
if (p->p_pptr != NULL)
db_printf(" parent: pid %d at %p\n", p->p_pptr->p_pid,
p->p_pptr);
if (p->p_leader != NULL && p->p_leader != p)
db_printf(" leader: pid %d at %p\n", p->p_leader->p_pid,
p->p_leader);
if (p->p_sysent != NULL)
db_printf(" ABI: %s\n", p->p_sysent->sv_name);
if (p->p_args != NULL) {
db_printf(" arguments: ");
dump_args(p);
db_printf("\n");
}
db_printf(" reaper: %p reapsubtree: %d\n",
p->p_reaper, p->p_reapsubtree);
db_printf(" sigparent: %d\n", p->p_sigparent);
db_printf(" vmspace: %p\n", p->p_vmspace);
db_printf(" (map %p)\n",
(p->p_vmspace != NULL) ? &p->p_vmspace->vm_map : 0);
db_printf(" (map.pmap %p)\n",
(p->p_vmspace != NULL) ? &p->p_vmspace->vm_map.pmap : 0);
db_printf(" (pmap %p)\n",
(p->p_vmspace != NULL) ? &p->p_vmspace->vm_pmap : 0);
db_printf(" threads: %d\n", p->p_numthreads);
FOREACH_THREAD_IN_PROC(p, td) {
dumpthread(p, td, 1);
if (db_pager_quit)
break;
}
}
void
db_findstack_cmd(db_expr_t addr, bool have_addr, db_expr_t dummy3 __unused,
char *dummy4 __unused)
{
struct proc *p;
struct thread *td;
struct kstack_cache_entry *ks_ce;
vm_offset_t saddr;
if (have_addr)
saddr = addr;
else {
db_printf("Usage: findstack <address>\n");
return;
}
FOREACH_PROC_IN_SYSTEM(p) {
FOREACH_THREAD_IN_PROC(p, td) {
if (td->td_kstack <= saddr && saddr < td->td_kstack +
PAGE_SIZE * td->td_kstack_pages) {
db_printf("Thread %p\n", td);
return;
}
}
}
for (ks_ce = kstack_cache; ks_ce != NULL;
ks_ce = ks_ce->next_ks_entry) {
if ((vm_offset_t)ks_ce <= saddr && saddr < (vm_offset_t)ks_ce +
PAGE_SIZE * kstack_pages) {
db_printf("Cached stack %p\n", ks_ce);
return;
}
}
}
Index: projects/clang800-import/sys/dev/acpica/acpivar.h
===================================================================
--- projects/clang800-import/sys/dev/acpica/acpivar.h (revision 343955)
+++ projects/clang800-import/sys/dev/acpica/acpivar.h (revision 343956)
@@ -1,548 +1,555 @@
/*-
* Copyright (c) 2000 Mitsuru IWASAKI <iwasaki@jp.freebsd.org>
* Copyright (c) 2000 Michael Smith <msmith@freebsd.org>
* Copyright (c) 2000 BSDi
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#ifndef _ACPIVAR_H_
#define _ACPIVAR_H_
#ifdef _KERNEL
#include "acpi_if.h"
#include "bus_if.h"
#include <sys/eventhandler.h>
#ifdef INTRNG
#include <sys/intr.h>
#endif
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/selinfo.h>
#include <sys/sx.h>
#include <sys/sysctl.h>
#include <machine/bus.h>
#include <machine/resource.h>
struct apm_clone_data;
struct acpi_softc {
device_t acpi_dev;
struct cdev *acpi_dev_t;
int acpi_enabled;
int acpi_sstate;
int acpi_sleep_disabled;
int acpi_resources_reserved;
struct sysctl_ctx_list acpi_sysctl_ctx;
struct sysctl_oid *acpi_sysctl_tree;
int acpi_power_button_sx;
int acpi_sleep_button_sx;
int acpi_lid_switch_sx;
int acpi_standby_sx;
int acpi_suspend_sx;
int acpi_sleep_delay;
int acpi_s4bios;
int acpi_do_disable;
int acpi_verbose;
int acpi_handle_reboot;
vm_offset_t acpi_wakeaddr;
vm_paddr_t acpi_wakephys;
int acpi_next_sstate; /* Next suspend Sx state. */
struct apm_clone_data *acpi_clone; /* Pseudo-dev for devd(8). */
STAILQ_HEAD(,apm_clone_data) apm_cdevs; /* All apm/apmctl/acpi cdevs. */
struct callout susp_force_to; /* Force suspend if no acks. */
};
struct acpi_device {
/* ACPI ivars */
ACPI_HANDLE ad_handle;
void *ad_private;
int ad_flags;
int ad_cls_class;
/* Resources */
struct resource_list ad_rl;
};
#ifdef INTRNG
struct intr_map_data_acpi {
struct intr_map_data hdr;
u_int irq;
u_int pol;
u_int trig;
};
#endif
/* Track device (/dev/{apm,apmctl} and /dev/acpi) notification status. */
struct apm_clone_data {
STAILQ_ENTRY(apm_clone_data) entries;
struct cdev *cdev;
int flags;
#define ACPI_EVF_NONE 0 /* /dev/apm semantics */
#define ACPI_EVF_DEVD 1 /* /dev/acpi is handled via devd(8) */
#define ACPI_EVF_WRITE 2 /* Device instance is opened writable. */
int notify_status;
#define APM_EV_NONE 0 /* Device not yet aware of pending sleep. */
#define APM_EV_NOTIFIED 1 /* Device saw next sleep state. */
#define APM_EV_ACKED 2 /* Device agreed sleep can occur. */
struct acpi_softc *acpi_sc;
struct selinfo sel_read;
};
#define ACPI_PRW_MAX_POWERRES 8
struct acpi_prw_data {
ACPI_HANDLE gpe_handle;
int gpe_bit;
int lowest_wake;
ACPI_OBJECT power_res[ACPI_PRW_MAX_POWERRES];
int power_res_count;
};
/* Flags for each device defined in the AML namespace. */
#define ACPI_FLAG_WAKE_ENABLED 0x1
/* Macros for extracting parts of a PCI address from an _ADR value. */
#define ACPI_ADR_PCI_SLOT(adr) (((adr) & 0xffff0000) >> 16)
#define ACPI_ADR_PCI_FUNC(adr) ((adr) & 0xffff)
/*
* Entry points to ACPI from above are global functions defined in this
* file, sysctls, and I/O on the control device. Entry points from below
* are interrupts (the SCI), notifies, task queue threads, and the thermal
* zone polling thread.
*
* ACPI tables and global shared data are protected by a global lock
* (acpi_mutex).
*
* Each ACPI device can have its own driver-specific mutex for protecting
* shared access to local data. The ACPI_LOCK macros handle mutexes.
*
* Drivers that need to serialize access to functions (e.g., to route
* interrupts, get/set control paths, etc.) should use the sx lock macros
* (ACPI_SERIAL).
*
* ACPI-CA handles its own locking and should not be called with locks held.
*
* The most complicated path is:
* GPE -> EC runs _Qxx -> _Qxx reads EC space -> GPE
*/
extern struct mtx acpi_mutex;
#define ACPI_LOCK(sys) mtx_lock(&sys##_mutex)
#define ACPI_UNLOCK(sys) mtx_unlock(&sys##_mutex)
#define ACPI_LOCK_ASSERT(sys) mtx_assert(&sys##_mutex, MA_OWNED);
#define ACPI_LOCK_DECL(sys, name) \
static struct mtx sys##_mutex; \
MTX_SYSINIT(sys##_mutex, &sys##_mutex, name, MTX_DEF)
#define ACPI_SERIAL_BEGIN(sys) sx_xlock(&sys##_sxlock)
#define ACPI_SERIAL_END(sys) sx_xunlock(&sys##_sxlock)
#define ACPI_SERIAL_ASSERT(sys) sx_assert(&sys##_sxlock, SX_XLOCKED);
#define ACPI_SERIAL_DECL(sys, name) \
static struct sx sys##_sxlock; \
SX_SYSINIT(sys##_sxlock, &sys##_sxlock, name)
/*
* ACPI CA does not define layers for non-ACPI CA drivers.
* We define some here within the range provided.
*/
#define ACPI_AC_ADAPTER 0x00010000
#define ACPI_BATTERY 0x00020000
#define ACPI_BUS 0x00040000
#define ACPI_BUTTON 0x00080000
#define ACPI_EC 0x00100000
#define ACPI_FAN 0x00200000
#define ACPI_POWERRES 0x00400000
#define ACPI_PROCESSOR 0x00800000
#define ACPI_THERMAL 0x01000000
#define ACPI_TIMER 0x02000000
#define ACPI_OEM 0x04000000
/*
* Constants for different interrupt models used with acpi_SetIntrModel().
*/
#define ACPI_INTR_PIC 0
#define ACPI_INTR_APIC 1
#define ACPI_INTR_SAPIC 2
/*
* Various features and capabilities for the acpi_get_features() method.
* In particular, these are used for the ACPI 3.0 _PDC and _OSC methods.
* See the Intel document titled "Intel Processor Vendor-Specific ACPI",
* number 302223-007.
*/
#define ACPI_CAP_PERF_MSRS (1 << 0) /* Intel SpeedStep PERF_CTL MSRs */
#define ACPI_CAP_C1_IO_HALT (1 << 1) /* Intel C1 "IO then halt" sequence */
#define ACPI_CAP_THR_MSRS (1 << 2) /* Intel OnDemand throttling MSRs */
#define ACPI_CAP_SMP_SAME (1 << 3) /* MP C1, Px, and Tx (all the same) */
#define ACPI_CAP_SMP_SAME_C3 (1 << 4) /* MP C2 and C3 (all the same) */
#define ACPI_CAP_SMP_DIFF_PX (1 << 5) /* MP Px (different, using _PSD) */
#define ACPI_CAP_SMP_DIFF_CX (1 << 6) /* MP Cx (different, using _CSD) */
#define ACPI_CAP_SMP_DIFF_TX (1 << 7) /* MP Tx (different, using _TSD) */
#define ACPI_CAP_SMP_C1_NATIVE (1 << 8) /* MP C1 support other than halt */
#define ACPI_CAP_SMP_C3_NATIVE (1 << 9) /* MP C2 and C3 support */
#define ACPI_CAP_PX_HW_COORD (1 << 11) /* Intel P-state HW coordination */
#define ACPI_CAP_INTR_CPPC (1 << 12) /* Native Interrupt Handling for
Collaborative Processor Performance Control notifications */
#define ACPI_CAP_HW_DUTY_C (1 << 13) /* Hardware Duty Cycling */
/*
* Quirk flags.
*
* ACPI_Q_BROKEN: Disables all ACPI support.
* ACPI_Q_TIMER: Disables support for the ACPI timer.
* ACPI_Q_MADT_IRQ0: Specifies that ISA IRQ 0 is wired up to pin 0 of the
* first APIC and that the MADT should force that by ignoring the PC-AT
* compatible flag and ignoring overrides that redirect IRQ 0 to pin 2.
*/
extern int acpi_quirks;
#define ACPI_Q_OK 0
#define ACPI_Q_BROKEN (1 << 0)
#define ACPI_Q_TIMER (1 << 1)
#define ACPI_Q_MADT_IRQ0 (1 << 2)
/*
* Note that the low ivar values are reserved to provide
* interface compatibility with ISA drivers which can also
* attach to ACPI.
*/
#define ACPI_IVAR_HANDLE 0x100
#define ACPI_IVAR_UNUSED 0x101 /* Unused/reserved. */
#define ACPI_IVAR_PRIVATE 0x102
#define ACPI_IVAR_FLAGS 0x103
/*
* Accessor functions for our ivars. Default value for BUS_READ_IVAR is
* (type) 0. The <sys/bus.h> accessor functions don't check return values.
*/
#define __ACPI_BUS_ACCESSOR(varp, var, ivarp, ivar, type) \
\
static __inline type varp ## _get_ ## var(device_t dev) \
{ \
uintptr_t v = 0; \
BUS_READ_IVAR(device_get_parent(dev), dev, \
ivarp ## _IVAR_ ## ivar, &v); \
return ((type) v); \
} \
\
static __inline void varp ## _set_ ## var(device_t dev, type t) \
{ \
uintptr_t v = (uintptr_t) t; \
BUS_WRITE_IVAR(device_get_parent(dev), dev, \
ivarp ## _IVAR_ ## ivar, v); \
}
__ACPI_BUS_ACCESSOR(acpi, handle, ACPI, HANDLE, ACPI_HANDLE)
__ACPI_BUS_ACCESSOR(acpi, private, ACPI, PRIVATE, void *)
__ACPI_BUS_ACCESSOR(acpi, flags, ACPI, FLAGS, int)
void acpi_fake_objhandler(ACPI_HANDLE h, void *data);
static __inline device_t
acpi_get_device(ACPI_HANDLE handle)
{
void *dev = NULL;
AcpiGetData(handle, acpi_fake_objhandler, &dev);
return ((device_t)dev);
}
static __inline ACPI_OBJECT_TYPE
acpi_get_type(device_t dev)
{
ACPI_HANDLE h;
ACPI_OBJECT_TYPE t;
if ((h = acpi_get_handle(dev)) == NULL)
return (ACPI_TYPE_NOT_FOUND);
if (ACPI_FAILURE(AcpiGetType(h, &t)))
return (ACPI_TYPE_NOT_FOUND);
return (t);
}
/* Find the difference between two PM tick counts. */
static __inline uint32_t
acpi_TimerDelta(uint32_t end, uint32_t start)
{
if (end < start && (AcpiGbl_FADT.Flags & ACPI_FADT_32BIT_TIMER) == 0)
end |= 0x01000000;
return (end - start);
}
#ifdef ACPI_DEBUGGER
void acpi_EnterDebugger(void);
#endif
#ifdef ACPI_DEBUG
#include <sys/cons.h>
#define STEP(x) do {printf x, printf("\n"); cngetc();} while (0)
#else
#define STEP(x)
#endif
#define ACPI_VPRINT(dev, acpi_sc, x...) do { \
if (acpi_get_verbose(acpi_sc)) \
device_printf(dev, x); \
} while (0)
/* Values for the first status word returned by _OSC. */
#define ACPI_OSC_FAILURE (1 << 1)
#define ACPI_OSC_BAD_UUID (1 << 2)
#define ACPI_OSC_BAD_REVISION (1 << 3)
#define ACPI_OSC_CAPS_MASKED (1 << 4)
#define ACPI_DEVINFO_PRESENT(x, flags) \
(((x) & (flags)) == (flags))
#define ACPI_DEVICE_PRESENT(x) \
ACPI_DEVINFO_PRESENT(x, ACPI_STA_DEVICE_PRESENT | \
ACPI_STA_DEVICE_FUNCTIONING)
#define ACPI_BATTERY_PRESENT(x) \
ACPI_DEVINFO_PRESENT(x, ACPI_STA_DEVICE_PRESENT | \
ACPI_STA_DEVICE_FUNCTIONING | ACPI_STA_BATTERY_PRESENT)
/* Callback function type for walking subtables within a table. */
typedef void acpi_subtable_handler(ACPI_SUBTABLE_HEADER *, void *);
BOOLEAN acpi_DeviceIsPresent(device_t dev);
BOOLEAN acpi_BatteryIsPresent(device_t dev);
ACPI_STATUS acpi_GetHandleInScope(ACPI_HANDLE parent, char *path,
ACPI_HANDLE *result);
ACPI_BUFFER *acpi_AllocBuffer(int size);
ACPI_STATUS acpi_ConvertBufferToInteger(ACPI_BUFFER *bufp,
UINT32 *number);
ACPI_STATUS acpi_GetInteger(ACPI_HANDLE handle, char *path,
UINT32 *number);
ACPI_STATUS acpi_SetInteger(ACPI_HANDLE handle, char *path,
UINT32 number);
ACPI_STATUS acpi_ForeachPackageObject(ACPI_OBJECT *obj,
void (*func)(ACPI_OBJECT *comp, void *arg), void *arg);
ACPI_STATUS acpi_FindIndexedResource(ACPI_BUFFER *buf, int index,
ACPI_RESOURCE **resp);
ACPI_STATUS acpi_AppendBufferResource(ACPI_BUFFER *buf,
ACPI_RESOURCE *res);
UINT8 acpi_DSMQuery(ACPI_HANDLE h, uint8_t *uuid, int revision);
ACPI_STATUS acpi_EvaluateDSM(ACPI_HANDLE handle, uint8_t *uuid,
int revision, uint64_t function, union acpi_object *package,
ACPI_BUFFER *out_buf);
ACPI_STATUS acpi_EvaluateOSC(ACPI_HANDLE handle, uint8_t *uuid,
int revision, int count, uint32_t *caps_in,
uint32_t *caps_out, bool query);
ACPI_STATUS acpi_OverrideInterruptLevel(UINT32 InterruptNumber);
ACPI_STATUS acpi_SetIntrModel(int model);
int acpi_ReqSleepState(struct acpi_softc *sc, int state);
int acpi_AckSleepState(struct apm_clone_data *clone, int error);
ACPI_STATUS acpi_SetSleepState(struct acpi_softc *sc, int state);
int acpi_wake_set_enable(device_t dev, int enable);
int acpi_parse_prw(ACPI_HANDLE h, struct acpi_prw_data *prw);
ACPI_STATUS acpi_Startup(void);
void acpi_UserNotify(const char *subsystem, ACPI_HANDLE h,
uint8_t notify);
int acpi_bus_alloc_gas(device_t dev, int *type, int *rid,
ACPI_GENERIC_ADDRESS *gas, struct resource **res,
u_int flags);
void acpi_walk_subtables(void *first, void *end,
acpi_subtable_handler *handler, void *arg);
int acpi_MatchHid(ACPI_HANDLE h, const char *hid);
#define ACPI_MATCHHID_NOMATCH 0
#define ACPI_MATCHHID_HID 1
#define ACPI_MATCHHID_CID 2
struct acpi_parse_resource_set {
void (*set_init)(device_t dev, void *arg, void **context);
void (*set_done)(device_t dev, void *context);
void (*set_ioport)(device_t dev, void *context, uint64_t base,
uint64_t length);
void (*set_iorange)(device_t dev, void *context, uint64_t low,
uint64_t high, uint64_t length, uint64_t align);
void (*set_memory)(device_t dev, void *context, uint64_t base,
uint64_t length);
void (*set_memoryrange)(device_t dev, void *context, uint64_t low,
uint64_t high, uint64_t length, uint64_t align);
void (*set_irq)(device_t dev, void *context, uint8_t *irq,
int count, int trig, int pol);
void (*set_ext_irq)(device_t dev, void *context, uint32_t *irq,
int count, int trig, int pol);
void (*set_drq)(device_t dev, void *context, uint8_t *drq,
int count);
void (*set_start_dependent)(device_t dev, void *context,
int preference);
void (*set_end_dependent)(device_t dev, void *context);
};
extern struct acpi_parse_resource_set acpi_res_parse_set;
int acpi_identify(void);
void acpi_config_intr(device_t dev, ACPI_RESOURCE *res);
#ifdef INTRNG
int acpi_map_intr(device_t dev, u_int irq, ACPI_HANDLE handle);
#endif
ACPI_STATUS acpi_lookup_irq_resource(device_t dev, int rid,
struct resource *res, ACPI_RESOURCE *acpi_res);
ACPI_STATUS acpi_parse_resources(device_t dev, ACPI_HANDLE handle,
struct acpi_parse_resource_set *set, void *arg);
struct resource *acpi_alloc_sysres(device_t child, int type, int *rid,
rman_res_t start, rman_res_t end, rman_res_t count,
u_int flags);
/* ACPI event handling */
UINT32 acpi_event_power_button_sleep(void *context);
UINT32 acpi_event_power_button_wake(void *context);
UINT32 acpi_event_sleep_button_sleep(void *context);
UINT32 acpi_event_sleep_button_wake(void *context);
#define ACPI_EVENT_PRI_FIRST 0
#define ACPI_EVENT_PRI_DEFAULT 10000
#define ACPI_EVENT_PRI_LAST 20000
typedef void (*acpi_event_handler_t)(void *, int);
EVENTHANDLER_DECLARE(acpi_sleep_event, acpi_event_handler_t);
EVENTHANDLER_DECLARE(acpi_wakeup_event, acpi_event_handler_t);
/* Device power control. */
ACPI_STATUS acpi_pwr_wake_enable(ACPI_HANDLE consumer, int enable);
ACPI_STATUS acpi_pwr_switch_consumer(ACPI_HANDLE consumer, int state);
int acpi_device_pwr_for_sleep(device_t bus, device_t dev,
int *dstate);
/* APM emulation */
void acpi_apm_init(struct acpi_softc *);
/* Misc. */
static __inline struct acpi_softc *
acpi_device_get_parent_softc(device_t child)
{
device_t parent;
parent = device_get_parent(child);
if (parent == NULL)
return (NULL);
return (device_get_softc(parent));
}
static __inline int
acpi_get_verbose(struct acpi_softc *sc)
{
if (sc)
return (sc->acpi_verbose);
return (0);
}
char *acpi_name(ACPI_HANDLE handle);
int acpi_avoid(ACPI_HANDLE handle);
int acpi_disabled(char *subsys);
int acpi_machdep_init(device_t dev);
void acpi_install_wakeup_handler(struct acpi_softc *sc);
int acpi_sleep_machdep(struct acpi_softc *sc, int state);
int acpi_wakeup_machdep(struct acpi_softc *sc, int state,
int sleep_result, int intr_enabled);
int acpi_table_quirks(int *quirks);
int acpi_machdep_quirks(int *quirks);
uint32_t hpet_get_uid(device_t dev);
/* Battery Abstraction. */
struct acpi_battinfo;
int acpi_battery_register(device_t dev);
int acpi_battery_remove(device_t dev);
int acpi_battery_get_units(void);
int acpi_battery_get_info_expire(void);
int acpi_battery_bst_valid(struct acpi_bst *bst);
int acpi_battery_bif_valid(struct acpi_bif *bif);
int acpi_battery_get_battinfo(device_t dev,
struct acpi_battinfo *info);
/* Embedded controller. */
void acpi_ec_ecdt_probe(device_t);
/* AC adapter interface. */
int acpi_acad_get_acline(int *);
/* Package manipulation convenience functions. */
#define ACPI_PKG_VALID(pkg, size) \
((pkg) != NULL && (pkg)->Type == ACPI_TYPE_PACKAGE && \
(pkg)->Package.Count >= (size))
int acpi_PkgInt(ACPI_OBJECT *res, int idx, UINT64 *dst);
int acpi_PkgInt32(ACPI_OBJECT *res, int idx, uint32_t *dst);
int acpi_PkgStr(ACPI_OBJECT *res, int idx, void *dst, size_t size);
int acpi_PkgGas(device_t dev, ACPI_OBJECT *res, int idx, int *type,
int *rid, struct resource **dst, u_int flags);
int acpi_PkgFFH_IntelCpu(ACPI_OBJECT *res, int idx, int *vendor,
int *class, uint64_t *address, int *accsize);
ACPI_HANDLE acpi_GetReference(ACPI_HANDLE scope, ACPI_OBJECT *obj);
/*
* Base level for BUS_ADD_CHILD. Special devices are added at orders less
* than this, and normal devices at or above this level. This keeps the
* probe order sorted so that things like sysresource are available before
* their children need them.
*/
#define ACPI_DEV_BASE_ORDER 100
/* Default maximum number of tasks to enqueue. */
#ifndef ACPI_MAX_TASKS
#define ACPI_MAX_TASKS MAX(32, MAXCPU * 4)
#endif
/* Default number of task queue threads to start. */
#ifndef ACPI_MAX_THREADS
#define ACPI_MAX_THREADS 3
#endif
/* Use the device logging level for ktr(4). */
#define KTR_ACPI KTR_DEV
SYSCTL_DECL(_debug_acpi);
/*
* Parse and use proximity information in SRAT and SLIT.
*/
int acpi_pxm_init(int ncpus, vm_paddr_t maxphys);
void acpi_pxm_parse_tables(void);
void acpi_pxm_set_mem_locality(void);
void acpi_pxm_set_cpu_locality(void);
void acpi_pxm_free(void);
/*
* Map a PXM to a VM domain.
*
* Returns the VM domain ID if found, or -1 if not found / invalid.
*/
int acpi_map_pxm_to_vm_domainid(int pxm);
int acpi_get_cpus(device_t dev, device_t child, enum cpu_sets op,
size_t setsize, cpuset_t *cpuset);
int acpi_get_domain(device_t dev, device_t child, int *domain);
+#ifdef __aarch64__
+/*
+ * ARM specific ACPI interfaces, relating to IORT table.
+ */
+int acpi_iort_map_pci_msi(u_int seg, u_int rid, u_int *xref, u_int *devid);
+int acpi_iort_its_lookup(u_int its_id, u_int *xref, int *pxm);
+#endif
#endif /* _KERNEL */
#endif /* !_ACPIVAR_H_ */
Index: projects/clang800-import/sys/dev/cardbus/cardbus.c
===================================================================
--- projects/clang800-import/sys/dev/cardbus/cardbus.c (revision 343955)
+++ projects/clang800-import/sys/dev/cardbus/cardbus.c (revision 343956)
@@ -1,371 +1,372 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2003-2008 M. Warner Losh. All Rights Reserved.
* Copyright (c) 2000,2001 Jonathan Chen. All rights reserved.
+ *
+ * Copyright (c) 2003-2008 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/bus.h>
#include <machine/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>
#include <sys/pciio.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pcireg.h>
#include <dev/pci/pci_private.h>
#include <dev/cardbus/cardbusreg.h>
#include <dev/cardbus/cardbusvar.h>
#include <dev/cardbus/cardbus_cis.h>
#include <dev/pccard/pccard_cis.h>
#include <dev/pccard/pccardvar.h>
#include "power_if.h"
#include "pcib_if.h"
/* sysctl vars */
static SYSCTL_NODE(_hw, OID_AUTO, cardbus, CTLFLAG_RD, 0, "CardBus parameters");
int cardbus_debug = 0;
SYSCTL_INT(_hw_cardbus, OID_AUTO, debug, CTLFLAG_RWTUN,
&cardbus_debug, 0, "CardBus debug");
int cardbus_cis_debug = 0;
SYSCTL_INT(_hw_cardbus, OID_AUTO, cis_debug, CTLFLAG_RWTUN,
&cardbus_cis_debug, 0, "CardBus CIS debug");
#define DPRINTF(a) do { if (cardbus_debug) printf a; } while (0)
#define DEVPRINTF(x) do { if (cardbus_debug) device_printf x; } while (0)
static int cardbus_attach(device_t cbdev);
static int cardbus_attach_card(device_t cbdev);
static int cardbus_detach(device_t cbdev);
static int cardbus_detach_card(device_t cbdev);
static void cardbus_device_setup_regs(pcicfgregs *cfg);
static void cardbus_driver_added(device_t cbdev, driver_t *driver);
static int cardbus_probe(device_t cbdev);
static int cardbus_read_ivar(device_t cbdev, device_t child, int which,
uintptr_t *result);
/************************************************************************/
/* Probe/Attach */
/************************************************************************/
static int
cardbus_probe(device_t cbdev)
{
device_set_desc(cbdev, "CardBus bus");
return (0);
}
static int
cardbus_attach(device_t cbdev)
{
struct cardbus_softc *sc;
#ifdef PCI_RES_BUS
int rid;
#endif
sc = device_get_softc(cbdev);
sc->sc_dev = cbdev;
#ifdef PCI_RES_BUS
rid = 0;
sc->sc_bus = bus_alloc_resource(cbdev, PCI_RES_BUS, &rid,
pcib_get_bus(cbdev), pcib_get_bus(cbdev), 1, 0);
if (sc->sc_bus == NULL) {
device_printf(cbdev, "failed to allocate bus number\n");
return (ENXIO);
}
#else
device_printf(cbdev, "Your bus numbers may be AFU\n");
#endif
return (0);
}
static int
cardbus_detach(device_t cbdev)
{
#ifdef PCI_RES_BUS
struct cardbus_softc *sc;
#endif
cardbus_detach_card(cbdev);
#ifdef PCI_RES_BUS
sc = device_get_softc(cbdev);
device_printf(cbdev, "Freeing up the allocated bus\n");
(void)bus_release_resource(cbdev, PCI_RES_BUS, 0, sc->sc_bus);
#endif
return (0);
}
static int
cardbus_suspend(device_t self)
{
cardbus_detach_card(self);
return (0);
}
static int
cardbus_resume(device_t self)
{
return (0);
}
/************************************************************************/
/* Attach/Detach card */
/************************************************************************/
static void
cardbus_device_setup_regs(pcicfgregs *cfg)
{
device_t dev = cfg->dev;
int i;
/*
* Some cards power up with garbage in their BARs. This
* code clears all that junk out.
*/
for (i = 0; i < PCIR_MAX_BAR_0; i++)
pci_write_config(dev, PCIR_BAR(i), 0, 4);
cfg->intline =
pci_get_irq(device_get_parent(device_get_parent(dev)));
pci_write_config(dev, PCIR_INTLINE, cfg->intline, 1);
pci_write_config(dev, PCIR_CACHELNSZ, 0x08, 1);
pci_write_config(dev, PCIR_LATTIMER, 0xa8, 1);
pci_write_config(dev, PCIR_MINGNT, 0x14, 1);
pci_write_config(dev, PCIR_MAXLAT, 0x14, 1);
}
static struct pci_devinfo *
cardbus_alloc_devinfo(device_t dev)
{
struct cardbus_devinfo *dinfo;
dinfo = malloc(sizeof(*dinfo), M_DEVBUF, M_WAITOK | M_ZERO);
return (&dinfo->pci);
}
static int
cardbus_attach_card(device_t cbdev)
{
device_t brdev = device_get_parent(cbdev);
device_t child;
int bus, domain, slot, func;
int cardattached = 0;
int cardbusfunchigh = 0;
struct cardbus_softc *sc;
sc = device_get_softc(cbdev);
cardbus_detach_card(cbdev); /* detach existing cards */
POWER_DISABLE_SOCKET(brdev, cbdev); /* Turn the socket off first */
POWER_ENABLE_SOCKET(brdev, cbdev);
domain = pcib_get_domain(cbdev);
bus = pcib_get_bus(cbdev);
slot = 0;
mtx_lock(&Giant);
/* For each function, set it up and try to attach a driver to it */
for (func = 0; func <= cardbusfunchigh; func++) {
struct cardbus_devinfo *dinfo;
dinfo = (struct cardbus_devinfo *)
pci_read_device(brdev, cbdev, domain, bus, slot, func);
if (dinfo == NULL)
continue;
if (dinfo->pci.cfg.mfdev)
cardbusfunchigh = PCI_FUNCMAX;
child = device_add_child(cbdev, NULL, -1);
if (child == NULL) {
DEVPRINTF((cbdev, "Cannot add child!\n"));
pci_freecfg((struct pci_devinfo *)dinfo);
continue;
}
dinfo->pci.cfg.dev = child;
resource_list_init(&dinfo->pci.resources);
device_set_ivars(child, dinfo);
cardbus_device_create(sc, dinfo, cbdev, child);
if (cardbus_do_cis(cbdev, child) != 0)
DEVPRINTF((cbdev, "Warning: Bogus CIS ignored\n"));
pci_cfg_save(dinfo->pci.cfg.dev, &dinfo->pci, 0);
pci_cfg_restore(dinfo->pci.cfg.dev, &dinfo->pci);
cardbus_device_setup_regs(&dinfo->pci.cfg);
pci_add_resources(cbdev, child, 1, dinfo->mprefetchable);
pci_print_verbose(&dinfo->pci);
if (device_probe_and_attach(child) == 0)
cardattached++;
else
pci_cfg_save(dinfo->pci.cfg.dev, &dinfo->pci, 1);
}
mtx_unlock(&Giant);
if (cardattached > 0)
return (0);
/* POWER_DISABLE_SOCKET(brdev, cbdev); */
return (ENOENT);
}
static void
cardbus_child_deleted(device_t cbdev, device_t child)
{
struct cardbus_devinfo *dinfo = device_get_ivars(child);
if (dinfo->pci.cfg.dev != child)
device_printf(cbdev, "devinfo dev mismatch\n");
cardbus_device_destroy(dinfo);
pci_child_deleted(cbdev, child);
}
static int
cardbus_detach_card(device_t cbdev)
{
int err = 0;
err = bus_generic_detach(cbdev);
if (err)
return (err);
err = device_delete_children(cbdev);
if (err)
return (err);
POWER_DISABLE_SOCKET(device_get_parent(cbdev), cbdev);
return (err);
}
static void
cardbus_driver_added(device_t cbdev, driver_t *driver)
{
int numdevs;
device_t *devlist;
device_t dev;
int i;
struct cardbus_devinfo *dinfo;
DEVICE_IDENTIFY(driver, cbdev);
if (device_get_children(cbdev, &devlist, &numdevs) != 0)
return;
/*
* If there are no drivers attached, but there are children,
* then power the card up.
*/
for (i = 0; i < numdevs; i++) {
dev = devlist[i];
if (device_get_state(dev) != DS_NOTPRESENT)
break;
}
if (i > 0 && i == numdevs)
POWER_ENABLE_SOCKET(device_get_parent(cbdev), cbdev);
for (i = 0; i < numdevs; i++) {
dev = devlist[i];
if (device_get_state(dev) != DS_NOTPRESENT)
continue;
dinfo = device_get_ivars(dev);
pci_print_verbose(&dinfo->pci);
if (bootverbose)
printf("pci%d:%d:%d:%d: reprobing on driver added\n",
dinfo->pci.cfg.domain, dinfo->pci.cfg.bus,
dinfo->pci.cfg.slot, dinfo->pci.cfg.func);
pci_cfg_restore(dinfo->pci.cfg.dev, &dinfo->pci);
if (device_probe_and_attach(dev) != 0)
pci_cfg_save(dev, &dinfo->pci, 1);
}
free(devlist, M_TEMP);
}
/************************************************************************/
/* Other Bus Methods */
/************************************************************************/
static int
cardbus_read_ivar(device_t cbdev, device_t child, int which, uintptr_t *result)
{
struct cardbus_devinfo *dinfo;
pcicfgregs *cfg;
dinfo = device_get_ivars(child);
cfg = &dinfo->pci.cfg;
switch (which) {
case PCI_IVAR_ETHADDR:
/*
* The generic accessor doesn't deal with failure, so
* we set the return value, then return an error.
*/
if (dinfo->fepresent & (1 << PCCARD_TPLFE_TYPE_LAN_NID)) {
*((uint8_t **) result) = dinfo->funce.lan.nid;
break;
}
*((uint8_t **) result) = NULL;
return (EINVAL);
default:
return (pci_read_ivar(cbdev, child, which, result));
}
return (0);
}
static device_method_t cardbus_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, cardbus_probe),
DEVMETHOD(device_attach, cardbus_attach),
DEVMETHOD(device_detach, cardbus_detach),
DEVMETHOD(device_suspend, cardbus_suspend),
DEVMETHOD(device_resume, cardbus_resume),
/* Bus interface */
DEVMETHOD(bus_child_deleted, cardbus_child_deleted),
DEVMETHOD(bus_get_dma_tag, bus_generic_get_dma_tag),
DEVMETHOD(bus_read_ivar, cardbus_read_ivar),
DEVMETHOD(bus_driver_added, cardbus_driver_added),
DEVMETHOD(bus_rescan, bus_null_rescan),
/* Card Interface */
DEVMETHOD(card_attach_card, cardbus_attach_card),
DEVMETHOD(card_detach_card, cardbus_detach_card),
/* PCI interface */
DEVMETHOD(pci_alloc_devinfo, cardbus_alloc_devinfo),
{0,0}
};
DEFINE_CLASS_1(cardbus, cardbus_driver, cardbus_methods,
sizeof(struct cardbus_softc), pci_driver);
static devclass_t cardbus_devclass;
DRIVER_MODULE(cardbus, cbb, cardbus_driver, cardbus_devclass, 0, 0);
MODULE_VERSION(cardbus, 1);
Index: projects/clang800-import/sys/dev/cxgbe/adapter.h
===================================================================
--- projects/clang800-import/sys/dev/cxgbe/adapter.h (revision 343955)
+++ projects/clang800-import/sys/dev/cxgbe/adapter.h (revision 343956)
@@ -1,1309 +1,1311 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2011 Chelsio Communications, Inc.
* All rights reserved.
* Written by: Navdeep Parhar <np@FreeBSD.org>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*
*/
#ifndef __T4_ADAPTER_H__
#define __T4_ADAPTER_H__
#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <sys/types.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/rwlock.h>
#include <sys/sx.h>
#include <sys/vmem.h>
#include <vm/uma.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pcireg.h>
#include <machine/bus.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <net/ethernet.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/if_media.h>
#include <netinet/in.h>
#include <netinet/tcp_lro.h>
#include "offload.h"
#include "t4_ioctl.h"
#include "common/t4_msg.h"
#include "firmware/t4fw_interface.h"
#define KTR_CXGBE KTR_SPARE3
MALLOC_DECLARE(M_CXGBE);
#define CXGBE_UNIMPLEMENTED(s) \
panic("%s (%s, line %d) not implemented yet.", s, __FILE__, __LINE__)
#if defined(__i386__) || defined(__amd64__)
static __inline void
prefetch(void *x)
{
__asm volatile("prefetcht0 %0" :: "m" (*(unsigned long *)x));
}
#else
#define prefetch(x) __builtin_prefetch(x)
#endif
#ifndef SYSCTL_ADD_UQUAD
#define SYSCTL_ADD_UQUAD SYSCTL_ADD_QUAD
#define sysctl_handle_64 sysctl_handle_quad
#define CTLTYPE_U64 CTLTYPE_QUAD
#endif
SYSCTL_DECL(_hw_cxgbe);
struct adapter;
typedef struct adapter adapter_t;
enum {
/*
* All ingress queues use this entry size. Note that the firmware event
* queue and any iq expecting CPL_RX_PKT in the descriptor needs this to
* be at least 64.
*/
IQ_ESIZE = 64,
/* Default queue sizes for all kinds of ingress queues */
FW_IQ_QSIZE = 256,
RX_IQ_QSIZE = 1024,
/* All egress queues use this entry size */
EQ_ESIZE = 64,
/* Default queue sizes for all kinds of egress queues */
CTRL_EQ_QSIZE = 1024,
TX_EQ_QSIZE = 1024,
#if MJUMPAGESIZE != MCLBYTES
SW_ZONE_SIZES = 4, /* cluster, jumbop, jumbo9k, jumbo16k */
#else
SW_ZONE_SIZES = 3, /* cluster, jumbo9k, jumbo16k */
#endif
CL_METADATA_SIZE = CACHE_LINE_SIZE,
SGE_MAX_WR_NDESC = SGE_MAX_WR_LEN / EQ_ESIZE, /* max WR size in desc */
TX_SGL_SEGS = 39,
TX_SGL_SEGS_TSO = 38,
TX_SGL_SEGS_EO_TSO = 30, /* XXX: lower for IPv6. */
TX_WR_FLITS = SGE_MAX_WR_LEN / 8
};
enum {
/* adapter intr_type */
INTR_INTX = (1 << 0),
INTR_MSI = (1 << 1),
INTR_MSIX = (1 << 2)
};
enum {
XGMAC_MTU = (1 << 0),
XGMAC_PROMISC = (1 << 1),
XGMAC_ALLMULTI = (1 << 2),
XGMAC_VLANEX = (1 << 3),
XGMAC_UCADDR = (1 << 4),
XGMAC_MCADDRS = (1 << 5),
XGMAC_ALL = 0xffff
};
enum {
/* flags understood by begin_synchronized_op */
HOLD_LOCK = (1 << 0),
SLEEP_OK = (1 << 1),
INTR_OK = (1 << 2),
/* flags understood by end_synchronized_op */
LOCK_HELD = HOLD_LOCK,
};
enum {
/* adapter flags */
FULL_INIT_DONE = (1 << 0),
FW_OK = (1 << 1),
CHK_MBOX_ACCESS = (1 << 2),
MASTER_PF = (1 << 3),
ADAP_SYSCTL_CTX = (1 << 4),
ADAP_ERR = (1 << 5),
BUF_PACKING_OK = (1 << 6),
IS_VF = (1 << 7),
CXGBE_BUSY = (1 << 9),
/* port flags */
HAS_TRACEQ = (1 << 3),
FIXED_IFMEDIA = (1 << 4), /* ifmedia list doesn't change. */
/* VI flags */
DOOMED = (1 << 0),
VI_INIT_DONE = (1 << 1),
VI_SYSCTL_CTX = (1 << 2),
/* adapter debug_flags */
DF_DUMP_MBOX = (1 << 0), /* Log all mbox cmd/rpl. */
DF_LOAD_FW_ANYTIME = (1 << 1), /* Allow LOAD_FW after init */
DF_DISABLE_TCB_CACHE = (1 << 2), /* Disable TCB cache (T6+) */
DF_DISABLE_CFG_RETRY = (1 << 3), /* Disable fallback config */
DF_VERBOSE_SLOWINTR = (1 << 4), /* Chatty slow intr handler */
};
#define IS_DOOMED(vi) ((vi)->flags & DOOMED)
#define SET_DOOMED(vi) do {(vi)->flags |= DOOMED;} while (0)
#define IS_BUSY(sc) ((sc)->flags & CXGBE_BUSY)
#define SET_BUSY(sc) do {(sc)->flags |= CXGBE_BUSY;} while (0)
#define CLR_BUSY(sc) do {(sc)->flags &= ~CXGBE_BUSY;} while (0)
struct vi_info {
device_t dev;
struct port_info *pi;
struct ifnet *ifp;
unsigned long flags;
int if_flags;
uint16_t *rss, *nm_rss;
int smt_idx; /* for convenience */
uint16_t viid;
int16_t xact_addr_filt;/* index of exact MAC address filter */
uint16_t rss_size; /* size of VI's RSS table slice */
uint16_t rss_base; /* start of VI's RSS table slice */
int hashen;
int nintr;
int first_intr;
/* These need to be int as they are used in sysctl */
int ntxq; /* # of tx queues */
int first_txq; /* index of first tx queue */
int rsrv_noflowq; /* Reserve queue 0 for non-flowid packets */
int nrxq; /* # of rx queues */
int first_rxq; /* index of first rx queue */
int nofldtxq; /* # of offload tx queues */
int first_ofld_txq; /* index of first offload tx queue */
int nofldrxq; /* # of offload rx queues */
int first_ofld_rxq; /* index of first offload rx queue */
int nnmtxq;
int first_nm_txq;
int nnmrxq;
int first_nm_rxq;
int tmr_idx;
int ofld_tmr_idx;
int pktc_idx;
int ofld_pktc_idx;
int qsize_rxq;
int qsize_txq;
struct timeval last_refreshed;
struct fw_vi_stats_vf stats;
struct callout tick;
struct sysctl_ctx_list ctx; /* from ifconfig up to driver detach */
uint8_t hw_addr[ETHER_ADDR_LEN]; /* factory MAC address, won't change */
};
struct tx_ch_rl_params {
enum fw_sched_params_rate ratemode; /* %port (REL) or kbps (ABS) */
uint32_t maxrate;
};
enum {
CLRL_USER = (1 << 0), /* allocated manually. */
CLRL_SYNC = (1 << 1), /* sync hw update in progress. */
CLRL_ASYNC = (1 << 2), /* async hw update requested. */
CLRL_ERR = (1 << 3), /* last hw setup ended in error. */
};
struct tx_cl_rl_params {
int refcount;
uint8_t flags;
enum fw_sched_params_rate ratemode; /* %port REL or ABS value */
enum fw_sched_params_unit rateunit; /* kbps or pps (when ABS) */
enum fw_sched_params_mode mode; /* aggr or per-flow */
uint32_t maxrate;
uint16_t pktsize;
uint16_t burstsize;
};
/* Tx scheduler parameters for a channel/port */
struct tx_sched_params {
/* Channel Rate Limiter */
struct tx_ch_rl_params ch_rl;
/* Class WRR */
/* XXX */
/* Class Rate Limiter (including the default pktsize and burstsize). */
int pktsize;
int burstsize;
struct tx_cl_rl_params cl_rl[];
};
struct port_info {
device_t dev;
struct adapter *adapter;
struct vi_info *vi;
int nvi;
int up_vis;
int uld_vis;
struct tx_sched_params *sched_params;
struct mtx pi_lock;
char lockname[16];
unsigned long flags;
uint8_t lport; /* associated offload logical port */
int8_t mdio_addr;
uint8_t port_type;
uint8_t mod_type;
uint8_t port_id;
uint8_t tx_chan;
uint8_t mps_bg_map; /* rx MPS buffer group bitmap */
uint8_t rx_e_chan_map; /* rx TP e-channel bitmap */
struct link_config link_cfg;
struct ifmedia media;
struct timeval last_refreshed;
struct port_stats stats;
u_int tnl_cong_drops;
u_int tx_parse_error;
u_long tx_tls_records;
u_long tx_tls_octets;
u_long rx_tls_records;
u_long rx_tls_octets;
struct callout tick;
};
#define IS_MAIN_VI(vi) ((vi) == &((vi)->pi->vi[0]))
/* Where the cluster came from, how it has been carved up. */
struct cluster_layout {
int8_t zidx;
int8_t hwidx;
uint16_t region1; /* mbufs laid out within this region */
/* region2 is the DMA region */
uint16_t region3; /* cluster_metadata within this region */
};
struct cluster_metadata {
u_int refcount;
struct fl_sdesc *sd; /* For debug only. Could easily be stale */
};
struct fl_sdesc {
caddr_t cl;
uint16_t nmbuf; /* # of driver originated mbufs with ref on cluster */
struct cluster_layout cll;
};
struct tx_desc {
__be64 flit[8];
};
struct tx_sdesc {
struct mbuf *m; /* m_nextpkt linked chain of frames */
uint8_t desc_used; /* # of hardware descriptors used by the WR */
};
#define IQ_PAD (IQ_ESIZE - sizeof(struct rsp_ctrl) - sizeof(struct rss_header))
struct iq_desc {
struct rss_header rss;
uint8_t cpl[IQ_PAD];
struct rsp_ctrl rsp;
};
#undef IQ_PAD
CTASSERT(sizeof(struct iq_desc) == IQ_ESIZE);
enum {
/* iq flags */
IQ_ALLOCATED = (1 << 0), /* firmware resources allocated */
IQ_HAS_FL = (1 << 1), /* iq associated with a freelist */
IQ_RX_TIMESTAMP = (1 << 2), /* provide the SGE rx timestamp */
IQ_LRO_ENABLED = (1 << 3), /* iq is an eth rxq with LRO enabled */
IQ_ADJ_CREDIT = (1 << 4), /* hw is off by 1 credit for this iq */
/* iq state */
IQS_DISABLED = 0,
IQS_BUSY = 1,
IQS_IDLE = 2,
/* netmap related flags */
NM_OFF = 0,
NM_ON = 1,
NM_BUSY = 2,
};
enum {
CPL_COOKIE_RESERVED = 0,
CPL_COOKIE_FILTER,
CPL_COOKIE_DDP0,
CPL_COOKIE_DDP1,
CPL_COOKIE_TOM,
CPL_COOKIE_HASHFILTER,
CPL_COOKIE_ETHOFLD,
CPL_COOKIE_AVAILABLE3,
NUM_CPL_COOKIES = 8 /* Limited by M_COOKIE. Do not increase. */
};
struct sge_iq;
struct rss_header;
typedef int (*cpl_handler_t)(struct sge_iq *, const struct rss_header *,
struct mbuf *);
typedef int (*an_handler_t)(struct sge_iq *, const struct rsp_ctrl *);
typedef int (*fw_msg_handler_t)(struct adapter *, const __be64 *);
/*
* Ingress Queue: T4 is producer, driver is consumer.
*/
struct sge_iq {
uint32_t flags;
volatile int state;
struct adapter *adapter;
struct iq_desc *desc; /* KVA of descriptor ring */
int8_t intr_pktc_idx; /* packet count threshold index */
uint8_t gen; /* generation bit */
uint8_t intr_params; /* interrupt holdoff parameters */
uint8_t intr_next; /* XXX: holdoff for next interrupt */
uint16_t qsize; /* size (# of entries) of the queue */
uint16_t sidx; /* index of the entry with the status page */
uint16_t cidx; /* consumer index */
uint16_t cntxt_id; /* SGE context id for the iq */
uint16_t abs_id; /* absolute SGE id for the iq */
STAILQ_ENTRY(sge_iq) link;
bus_dma_tag_t desc_tag;
bus_dmamap_t desc_map;
bus_addr_t ba; /* bus address of descriptor ring */
};
enum {
EQ_CTRL = 1,
EQ_ETH = 2,
EQ_OFLD = 3,
/* eq flags */
EQ_TYPEMASK = 0x3, /* 2 lsbits hold the type (see above) */
EQ_ALLOCATED = (1 << 2), /* firmware resources allocated */
EQ_ENABLED = (1 << 3), /* open for business */
EQ_QFLUSH = (1 << 4), /* if_qflush in progress */
};
/* Listed in order of preference. Update t4_sysctls too if you change these */
enum {DOORBELL_UDB, DOORBELL_WCWR, DOORBELL_UDBWC, DOORBELL_KDB};
/*
* Egress Queue: driver is producer, T4 is consumer.
*
* Note: A free list is an egress queue (driver produces the buffers and T4
* consumes them) but it's special enough to have its own struct (see sge_fl).
*/
struct sge_eq {
unsigned int flags; /* MUST be first */
unsigned int cntxt_id; /* SGE context id for the eq */
unsigned int abs_id; /* absolute SGE id for the eq */
struct mtx eq_lock;
struct tx_desc *desc; /* KVA of descriptor ring */
uint8_t doorbells;
volatile uint32_t *udb; /* KVA of doorbell (lies within BAR2) */
u_int udb_qid; /* relative qid within the doorbell page */
uint16_t sidx; /* index of the entry with the status page */
uint16_t cidx; /* consumer idx (desc idx) */
uint16_t pidx; /* producer idx (desc idx) */
uint16_t equeqidx; /* EQUEQ last requested at this pidx */
uint16_t dbidx; /* pidx of the most recent doorbell */
uint16_t iqid; /* iq that gets egr_update for the eq */
uint8_t tx_chan; /* tx channel used by the eq */
volatile u_int equiq; /* EQUIQ outstanding */
bus_dma_tag_t desc_tag;
bus_dmamap_t desc_map;
bus_addr_t ba; /* bus address of descriptor ring */
char lockname[16];
};
struct sw_zone_info {
uma_zone_t zone; /* zone that this cluster comes from */
int size; /* size of cluster: 2K, 4K, 9K, 16K, etc. */
int type; /* EXT_xxx type of the cluster */
int8_t head_hwidx;
int8_t tail_hwidx;
};
struct hw_buf_info {
int8_t zidx; /* backpointer to zone; negative means unused */
int8_t next; /* next hwidx for this zone; -1 means no more */
int size;
};
enum {
NUM_MEMWIN = 3,
MEMWIN0_APERTURE = 2048,
MEMWIN0_BASE = 0x1b800,
MEMWIN1_APERTURE = 32768,
MEMWIN1_BASE = 0x28000,
MEMWIN2_APERTURE_T4 = 65536,
MEMWIN2_BASE_T4 = 0x30000,
MEMWIN2_APERTURE_T5 = 128 * 1024,
MEMWIN2_BASE_T5 = 0x60000,
};
struct memwin {
struct rwlock mw_lock __aligned(CACHE_LINE_SIZE);
uint32_t mw_base; /* constant after setup_memwin */
uint32_t mw_aperture; /* ditto */
uint32_t mw_curpos; /* protected by mw_lock */
};
enum {
FL_STARVING = (1 << 0), /* on the adapter's list of starving fl's */
FL_DOOMED = (1 << 1), /* about to be destroyed */
FL_BUF_PACKING = (1 << 2), /* buffer packing enabled */
FL_BUF_RESUME = (1 << 3), /* resume from the middle of the frame */
};
#define FL_RUNNING_LOW(fl) \
(IDXDIFF(fl->dbidx * 8, fl->cidx, fl->sidx * 8) <= fl->lowat)
#define FL_NOT_RUNNING_LOW(fl) \
(IDXDIFF(fl->dbidx * 8, fl->cidx, fl->sidx * 8) >= 2 * fl->lowat)
struct sge_fl {
struct mtx fl_lock;
__be64 *desc; /* KVA of descriptor ring, ptr to addresses */
struct fl_sdesc *sdesc; /* KVA of software descriptor ring */
struct cluster_layout cll_def; /* default refill zone, layout */
uint16_t lowat; /* # of buffers <= this means fl needs help */
int flags;
uint16_t buf_boundary;
/* The 16b idx all deal with hw descriptors */
uint16_t dbidx; /* hw pidx after last doorbell */
uint16_t sidx; /* index of status page */
volatile uint16_t hw_cidx;
/* The 32b idx are all buffer idx, not hardware descriptor idx */
uint32_t cidx; /* consumer index */
uint32_t pidx; /* producer index */
uint32_t dbval;
u_int rx_offset; /* offset in fl buf (when buffer packing) */
volatile uint32_t *udb;
uint64_t mbuf_allocated;/* # of mbuf allocated from zone_mbuf */
uint64_t mbuf_inlined; /* # of mbuf created within clusters */
uint64_t cl_allocated; /* # of clusters allocated */
uint64_t cl_recycled; /* # of clusters recycled */
uint64_t cl_fast_recycled; /* # of clusters recycled (fast) */
/* These 3 are valid when FL_BUF_RESUME is set, stale otherwise. */
struct mbuf *m0;
struct mbuf **pnext;
u_int remaining;
uint16_t qsize; /* # of hw descriptors (status page included) */
uint16_t cntxt_id; /* SGE context id for the freelist */
TAILQ_ENTRY(sge_fl) link; /* All starving freelists */
bus_dma_tag_t desc_tag;
bus_dmamap_t desc_map;
char lockname[16];
bus_addr_t ba; /* bus address of descriptor ring */
struct cluster_layout cll_alt; /* alternate refill zone, layout */
};
struct mp_ring;
/* txq: SGE egress queue + what's needed for Ethernet NIC */
struct sge_txq {
struct sge_eq eq; /* MUST be first */
struct ifnet *ifp; /* the interface this txq belongs to */
struct mp_ring *r; /* tx software ring */
struct tx_sdesc *sdesc; /* KVA of software descriptor ring */
struct sglist *gl;
__be32 cpl_ctrl0; /* for convenience */
int tc_idx; /* traffic class */
struct task tx_reclaim_task;
/* stats for common events first */
uint64_t txcsum; /* # of times hardware assisted with checksum */
uint64_t tso_wrs; /* # of TSO work requests */
uint64_t vlan_insertion;/* # of times VLAN tag was inserted */
uint64_t imm_wrs; /* # of work requests with immediate data */
uint64_t sgl_wrs; /* # of work requests with direct SGL */
uint64_t txpkt_wrs; /* # of txpkt work requests (not coalesced) */
uint64_t txpkts0_wrs; /* # of type0 coalesced tx work requests */
uint64_t txpkts1_wrs; /* # of type1 coalesced tx work requests */
uint64_t txpkts0_pkts; /* # of frames in type0 coalesced tx WRs */
uint64_t txpkts1_pkts; /* # of frames in type1 coalesced tx WRs */
uint64_t raw_wrs; /* # of raw work requests (alloc_wr_mbuf) */
/* stats for not-that-common events */
} __aligned(CACHE_LINE_SIZE);
/* rxq: SGE ingress queue + SGE free list + miscellaneous items */
struct sge_rxq {
struct sge_iq iq; /* MUST be first */
struct sge_fl fl; /* MUST follow iq */
struct ifnet *ifp; /* the interface this rxq belongs to */
#if defined(INET) || defined(INET6)
struct lro_ctrl lro; /* LRO state */
#endif
/* stats for common events first */
uint64_t rxcsum; /* # of times hardware assisted with checksum */
uint64_t vlan_extraction;/* # of times VLAN tag was extracted */
/* stats for not-that-common events */
} __aligned(CACHE_LINE_SIZE);
static inline struct sge_rxq *
iq_to_rxq(struct sge_iq *iq)
{
return (__containerof(iq, struct sge_rxq, iq));
}
/* ofld_rxq: SGE ingress queue + SGE free list + miscellaneous items */
struct sge_ofld_rxq {
struct sge_iq iq; /* MUST be first */
struct sge_fl fl; /* MUST follow iq */
} __aligned(CACHE_LINE_SIZE);
static inline struct sge_ofld_rxq *
iq_to_ofld_rxq(struct sge_iq *iq)
{
return (__containerof(iq, struct sge_ofld_rxq, iq));
}
struct wrqe {
STAILQ_ENTRY(wrqe) link;
struct sge_wrq *wrq;
int wr_len;
char wr[] __aligned(16);
};
struct wrq_cookie {
TAILQ_ENTRY(wrq_cookie) link;
int ndesc;
int pidx;
};
/*
* wrq: SGE egress queue that is given prebuilt work requests. Both the control
* and offload tx queues are of this type.
*/
struct sge_wrq {
struct sge_eq eq; /* MUST be first */
struct adapter *adapter;
struct task wrq_tx_task;
/* Tx desc reserved but WR not "committed" yet. */
TAILQ_HEAD(wrq_incomplete_wrs, wrq_cookie) incomplete_wrs;
/* List of WRs ready to go out as soon as descriptors are available. */
STAILQ_HEAD(, wrqe) wr_list;
u_int nwr_pending;
u_int ndesc_needed;
/* stats for common events first */
uint64_t tx_wrs_direct; /* # of WRs written directly to desc ring. */
uint64_t tx_wrs_ss; /* # of WRs copied from scratch space. */
uint64_t tx_wrs_copied; /* # of WRs queued and copied to desc ring. */
/* stats for not-that-common events */
/*
* Scratch space for work requests that wrap around after reaching the
* status page, and some information about the last WR that used it.
*/
uint16_t ss_pidx;
uint16_t ss_len;
uint8_t ss[SGE_MAX_WR_LEN];
} __aligned(CACHE_LINE_SIZE);
#define INVALID_NM_RXQ_CNTXT_ID ((uint16_t)(-1))
struct sge_nm_rxq {
volatile int nm_state; /* NM_OFF, NM_ON, or NM_BUSY */
struct vi_info *vi;
struct iq_desc *iq_desc;
uint16_t iq_abs_id;
uint16_t iq_cntxt_id;
uint16_t iq_cidx;
uint16_t iq_sidx;
uint8_t iq_gen;
__be64 *fl_desc;
uint16_t fl_cntxt_id;
uint32_t fl_cidx;
uint32_t fl_pidx;
uint32_t fl_sidx;
uint32_t fl_db_val;
u_int fl_hwidx:4;
u_int fl_db_saved;
u_int nid; /* netmap ring # for this queue */
/* infrequently used items after this */
bus_dma_tag_t iq_desc_tag;
bus_dmamap_t iq_desc_map;
bus_addr_t iq_ba;
int intr_idx;
bus_dma_tag_t fl_desc_tag;
bus_dmamap_t fl_desc_map;
bus_addr_t fl_ba;
} __aligned(CACHE_LINE_SIZE);
#define INVALID_NM_TXQ_CNTXT_ID ((u_int)(-1))
struct sge_nm_txq {
struct tx_desc *desc;
uint16_t cidx;
uint16_t pidx;
uint16_t sidx;
uint16_t equiqidx; /* EQUIQ last requested at this pidx */
uint16_t equeqidx; /* EQUEQ last requested at this pidx */
uint16_t dbidx; /* pidx of the most recent doorbell */
uint8_t doorbells;
volatile uint32_t *udb;
u_int udb_qid;
u_int cntxt_id;
__be32 cpl_ctrl0; /* for convenience */
u_int nid; /* netmap ring # for this queue */
/* infrequently used items after this */
bus_dma_tag_t desc_tag;
bus_dmamap_t desc_map;
bus_addr_t ba;
int iqidx;
} __aligned(CACHE_LINE_SIZE);
struct sge {
int nrxq; /* total # of Ethernet rx queues */
int ntxq; /* total # of Ethernet tx queues */
int nofldrxq; /* total # of TOE rx queues */
int nofldtxq; /* total # of TOE tx queues */
int nnmrxq; /* total # of netmap rx queues */
int nnmtxq; /* total # of netmap tx queues */
int niq; /* total # of ingress queues */
int neq; /* total # of egress queues */
struct sge_iq fwq; /* Firmware event queue */
struct sge_wrq *ctrlq; /* Control queues */
struct sge_txq *txq; /* NIC tx queues */
struct sge_rxq *rxq; /* NIC rx queues */
struct sge_wrq *ofld_txq; /* TOE tx queues */
struct sge_ofld_rxq *ofld_rxq; /* TOE rx queues */
struct sge_nm_txq *nm_txq; /* netmap tx queues */
struct sge_nm_rxq *nm_rxq; /* netmap rx queues */
uint16_t iq_start; /* first cntxt_id */
uint16_t iq_base; /* first abs_id */
int eq_start; /* first cntxt_id */
int eq_base; /* first abs_id */
struct sge_iq **iqmap; /* iq->cntxt_id to iq mapping */
struct sge_eq **eqmap; /* eq->cntxt_id to eq mapping */
int8_t safe_hwidx1; /* may not have room for metadata */
int8_t safe_hwidx2; /* with room for metadata and maybe more */
struct sw_zone_info sw_zone_info[SW_ZONE_SIZES];
struct hw_buf_info hw_buf_info[SGE_FLBUF_SIZES];
};
struct devnames {
const char *nexus_name;
const char *ifnet_name;
const char *vi_ifnet_name;
const char *pf03_drv_name;
const char *vf_nexus_name;
const char *vf_ifnet_name;
};
struct clip_entry;
struct adapter {
SLIST_ENTRY(adapter) link;
device_t dev;
struct cdev *cdev;
const struct devnames *names;
/* PCIe register resources */
int regs_rid;
struct resource *regs_res;
int msix_rid;
struct resource *msix_res;
bus_space_handle_t bh;
bus_space_tag_t bt;
bus_size_t mmio_len;
int udbs_rid;
struct resource *udbs_res;
volatile uint8_t *udbs_base;
unsigned int pf;
unsigned int mbox;
unsigned int vpd_busy;
unsigned int vpd_flag;
/* Interrupt information */
int intr_type;
int intr_count;
struct irq {
struct resource *res;
int rid;
void *tag;
struct sge_rxq *rxq;
struct sge_nm_rxq *nm_rxq;
} __aligned(CACHE_LINE_SIZE) *irq;
int sge_gts_reg;
int sge_kdoorbell_reg;
bus_dma_tag_t dmat; /* Parent DMA tag */
struct sge sge;
int lro_timeout;
int sc_do_rxcopy;
struct taskqueue *tq[MAX_NCHAN]; /* General purpose taskqueues */
struct port_info *port[MAX_NPORTS];
uint8_t chan_map[MAX_NCHAN]; /* channel -> port */
struct mtx clip_table_lock;
TAILQ_HEAD(, clip_entry) clip_table;
int clip_gen;
void *tom_softc; /* (struct tom_data *) */
struct tom_tunables tt;
struct t4_offload_policy *policy;
struct rwlock policy_lock;
void *iwarp_softc; /* (struct c4iw_dev *) */
struct iw_tunables iwt;
void *iscsi_ulp_softc; /* (struct cxgbei_data *) */
void *ccr_softc; /* (struct ccr_softc *) */
struct l2t_data *l2t; /* L2 table */
struct smt_data *smt; /* Source MAC Table */
struct tid_info tids;
vmem_t *key_map;
uint8_t doorbells;
int offload_map; /* ports with IFCAP_TOE enabled */
int active_ulds; /* ULDs activated on this adapter */
int flags;
int debug_flags;
char ifp_lockname[16];
struct mtx ifp_lock;
struct ifnet *ifp; /* tracer ifp */
struct ifmedia media;
int traceq; /* iq used by all tracers, -1 if none */
int tracer_valid; /* bitmap of valid tracers */
int tracer_enabled; /* bitmap of enabled tracers */
char fw_version[16];
char tp_version[16];
char er_version[16];
char bs_version[16];
char cfg_file[32];
u_int cfcsum;
struct adapter_params params;
const struct chip_params *chip_params;
struct t4_virt_res vres;
uint16_t nbmcaps;
uint16_t linkcaps;
uint16_t switchcaps;
uint16_t niccaps;
uint16_t toecaps;
uint16_t rdmacaps;
uint16_t cryptocaps;
uint16_t iscsicaps;
uint16_t fcoecaps;
struct sysctl_ctx_list ctx; /* from adapter_full_init to full_uninit */
struct mtx sc_lock;
char lockname[16];
/* Starving free lists */
struct mtx sfl_lock; /* same cache-line as sc_lock? but that's ok */
TAILQ_HEAD(, sge_fl) sfl;
struct callout sfl_callout;
struct mtx reg_lock; /* for indirect register access */
struct memwin memwin[NUM_MEMWIN]; /* memory windows */
struct mtx tc_lock;
struct task tc_task;
const char *last_op;
const void *last_op_thr;
int last_op_flags;
};
#define ADAPTER_LOCK(sc) mtx_lock(&(sc)->sc_lock)
#define ADAPTER_UNLOCK(sc) mtx_unlock(&(sc)->sc_lock)
#define ADAPTER_LOCK_ASSERT_OWNED(sc) mtx_assert(&(sc)->sc_lock, MA_OWNED)
#define ADAPTER_LOCK_ASSERT_NOTOWNED(sc) mtx_assert(&(sc)->sc_lock, MA_NOTOWNED)
#define ASSERT_SYNCHRONIZED_OP(sc) \
KASSERT(IS_BUSY(sc) && \
(mtx_owned(&(sc)->sc_lock) || sc->last_op_thr == curthread), \
("%s: operation not synchronized.", __func__))
#define PORT_LOCK(pi) mtx_lock(&(pi)->pi_lock)
#define PORT_UNLOCK(pi) mtx_unlock(&(pi)->pi_lock)
#define PORT_LOCK_ASSERT_OWNED(pi) mtx_assert(&(pi)->pi_lock, MA_OWNED)
#define PORT_LOCK_ASSERT_NOTOWNED(pi) mtx_assert(&(pi)->pi_lock, MA_NOTOWNED)
#define FL_LOCK(fl) mtx_lock(&(fl)->fl_lock)
#define FL_TRYLOCK(fl) mtx_trylock(&(fl)->fl_lock)
#define FL_UNLOCK(fl) mtx_unlock(&(fl)->fl_lock)
#define FL_LOCK_ASSERT_OWNED(fl) mtx_assert(&(fl)->fl_lock, MA_OWNED)
#define FL_LOCK_ASSERT_NOTOWNED(fl) mtx_assert(&(fl)->fl_lock, MA_NOTOWNED)
#define RXQ_FL_LOCK(rxq) FL_LOCK(&(rxq)->fl)
#define RXQ_FL_UNLOCK(rxq) FL_UNLOCK(&(rxq)->fl)
#define RXQ_FL_LOCK_ASSERT_OWNED(rxq) FL_LOCK_ASSERT_OWNED(&(rxq)->fl)
#define RXQ_FL_LOCK_ASSERT_NOTOWNED(rxq) FL_LOCK_ASSERT_NOTOWNED(&(rxq)->fl)
#define EQ_LOCK(eq) mtx_lock(&(eq)->eq_lock)
#define EQ_TRYLOCK(eq) mtx_trylock(&(eq)->eq_lock)
#define EQ_UNLOCK(eq) mtx_unlock(&(eq)->eq_lock)
#define EQ_LOCK_ASSERT_OWNED(eq) mtx_assert(&(eq)->eq_lock, MA_OWNED)
#define EQ_LOCK_ASSERT_NOTOWNED(eq) mtx_assert(&(eq)->eq_lock, MA_NOTOWNED)
#define TXQ_LOCK(txq) EQ_LOCK(&(txq)->eq)
#define TXQ_TRYLOCK(txq) EQ_TRYLOCK(&(txq)->eq)
#define TXQ_UNLOCK(txq) EQ_UNLOCK(&(txq)->eq)
#define TXQ_LOCK_ASSERT_OWNED(txq) EQ_LOCK_ASSERT_OWNED(&(txq)->eq)
#define TXQ_LOCK_ASSERT_NOTOWNED(txq) EQ_LOCK_ASSERT_NOTOWNED(&(txq)->eq)
#define for_each_txq(vi, iter, q) \
for (q = &vi->pi->adapter->sge.txq[vi->first_txq], iter = 0; \
iter < vi->ntxq; ++iter, ++q)
#define for_each_rxq(vi, iter, q) \
for (q = &vi->pi->adapter->sge.rxq[vi->first_rxq], iter = 0; \
iter < vi->nrxq; ++iter, ++q)
#define for_each_ofld_txq(vi, iter, q) \
for (q = &vi->pi->adapter->sge.ofld_txq[vi->first_ofld_txq], iter = 0; \
iter < vi->nofldtxq; ++iter, ++q)
#define for_each_ofld_rxq(vi, iter, q) \
for (q = &vi->pi->adapter->sge.ofld_rxq[vi->first_ofld_rxq], iter = 0; \
iter < vi->nofldrxq; ++iter, ++q)
#define for_each_nm_txq(vi, iter, q) \
for (q = &vi->pi->adapter->sge.nm_txq[vi->first_nm_txq], iter = 0; \
iter < vi->nnmtxq; ++iter, ++q)
#define for_each_nm_rxq(vi, iter, q) \
for (q = &vi->pi->adapter->sge.nm_rxq[vi->first_nm_rxq], iter = 0; \
iter < vi->nnmrxq; ++iter, ++q)
#define for_each_vi(_pi, _iter, _vi) \
for ((_vi) = (_pi)->vi, (_iter) = 0; (_iter) < (_pi)->nvi; \
++(_iter), ++(_vi))
#define IDXINCR(idx, incr, wrap) do { \
idx = wrap - idx > incr ? idx + incr : incr - (wrap - idx); \
} while (0)
#define IDXDIFF(head, tail, wrap) \
((head) >= (tail) ? (head) - (tail) : (wrap) - (tail) + (head))
/* One for errors, one for firmware events */
#define T4_EXTRA_INTR 2
/* One for firmware events */
#define T4VF_EXTRA_INTR 1
static inline int
forwarding_intr_to_fwq(struct adapter *sc)
{
return (sc->intr_count == 1);
}
static inline uint32_t
t4_read_reg(struct adapter *sc, uint32_t reg)
{
return bus_space_read_4(sc->bt, sc->bh, reg);
}
static inline void
t4_write_reg(struct adapter *sc, uint32_t reg, uint32_t val)
{
bus_space_write_4(sc->bt, sc->bh, reg, val);
}
static inline uint64_t
t4_read_reg64(struct adapter *sc, uint32_t reg)
{
#ifdef __LP64__
return bus_space_read_8(sc->bt, sc->bh, reg);
#else
return (uint64_t)bus_space_read_4(sc->bt, sc->bh, reg) +
((uint64_t)bus_space_read_4(sc->bt, sc->bh, reg + 4) << 32);
#endif
}
static inline void
t4_write_reg64(struct adapter *sc, uint32_t reg, uint64_t val)
{
#ifdef __LP64__
bus_space_write_8(sc->bt, sc->bh, reg, val);
#else
bus_space_write_4(sc->bt, sc->bh, reg, val);
bus_space_write_4(sc->bt, sc->bh, reg + 4, val >> 32);
#endif
}
static inline void
t4_os_pci_read_cfg1(struct adapter *sc, int reg, uint8_t *val)
{
*val = pci_read_config(sc->dev, reg, 1);
}
static inline void
t4_os_pci_write_cfg1(struct adapter *sc, int reg, uint8_t val)
{
pci_write_config(sc->dev, reg, val, 1);
}
static inline void
t4_os_pci_read_cfg2(struct adapter *sc, int reg, uint16_t *val)
{
*val = pci_read_config(sc->dev, reg, 2);
}
static inline void
t4_os_pci_write_cfg2(struct adapter *sc, int reg, uint16_t val)
{
pci_write_config(sc->dev, reg, val, 2);
}
static inline void
t4_os_pci_read_cfg4(struct adapter *sc, int reg, uint32_t *val)
{
*val = pci_read_config(sc->dev, reg, 4);
}
static inline void
t4_os_pci_write_cfg4(struct adapter *sc, int reg, uint32_t val)
{
pci_write_config(sc->dev, reg, val, 4);
}
static inline struct port_info *
adap2pinfo(struct adapter *sc, int idx)
{
return (sc->port[idx]);
}
static inline void
t4_os_set_hw_addr(struct port_info *pi, uint8_t hw_addr[])
{
bcopy(hw_addr, pi->vi[0].hw_addr, ETHER_ADDR_LEN);
}
static inline int
tx_resume_threshold(struct sge_eq *eq)
{
/* not quite the same as qsize / 4, but this will do. */
return (eq->sidx / 4);
}
static inline int
t4_use_ldst(struct adapter *sc)
{
#ifdef notyet
return (sc->flags & FW_OK || !sc->use_bd);
#else
return (0);
#endif
}
static inline void
CH_DUMP_MBOX(struct adapter *sc, int mbox, const int reg,
const char *msg, const __be64 *const p, const bool err)
{
if (!(sc->debug_flags & DF_DUMP_MBOX) && !err)
return;
if (p != NULL) {
log(err ? LOG_ERR : LOG_DEBUG,
"%s: mbox %u %s %016llx %016llx %016llx %016llx "
"%016llx %016llx %016llx %016llx\n",
device_get_nameunit(sc->dev), mbox, msg,
(long long)be64_to_cpu(p[0]), (long long)be64_to_cpu(p[1]),
(long long)be64_to_cpu(p[2]), (long long)be64_to_cpu(p[3]),
(long long)be64_to_cpu(p[4]), (long long)be64_to_cpu(p[5]),
(long long)be64_to_cpu(p[6]), (long long)be64_to_cpu(p[7]));
} else {
log(err ? LOG_ERR : LOG_DEBUG,
"%s: mbox %u %s %016llx %016llx %016llx %016llx "
"%016llx %016llx %016llx %016llx\n",
device_get_nameunit(sc->dev), mbox, msg,
(long long)t4_read_reg64(sc, reg),
(long long)t4_read_reg64(sc, reg + 8),
(long long)t4_read_reg64(sc, reg + 16),
(long long)t4_read_reg64(sc, reg + 24),
(long long)t4_read_reg64(sc, reg + 32),
(long long)t4_read_reg64(sc, reg + 40),
(long long)t4_read_reg64(sc, reg + 48),
(long long)t4_read_reg64(sc, reg + 56));
}
}
/* t4_main.c */
extern int t4_ntxq;
extern int t4_nrxq;
extern int t4_intr_types;
extern int t4_tmr_idx;
extern int t4_pktc_idx;
extern unsigned int t4_qsize_rxq;
extern unsigned int t4_qsize_txq;
extern device_method_t cxgbe_methods[];
int t4_os_find_pci_capability(struct adapter *, int);
int t4_os_pci_save_state(struct adapter *);
int t4_os_pci_restore_state(struct adapter *);
void t4_os_portmod_changed(struct port_info *);
void t4_os_link_changed(struct port_info *);
void t4_iterate(void (*)(struct adapter *, void *), void *);
void t4_init_devnames(struct adapter *);
void t4_add_adapter(struct adapter *);
void t4_aes_getdeckey(void *, const void *, unsigned int);
int t4_detach_common(device_t);
int t4_map_bars_0_and_4(struct adapter *);
int t4_map_bar_2(struct adapter *);
int t4_setup_intr_handlers(struct adapter *);
void t4_sysctls(struct adapter *);
int begin_synchronized_op(struct adapter *, struct vi_info *, int, char *);
void doom_vi(struct adapter *, struct vi_info *);
void end_synchronized_op(struct adapter *, int);
int update_mac_settings(struct ifnet *, int);
int adapter_full_init(struct adapter *);
int adapter_full_uninit(struct adapter *);
uint64_t cxgbe_get_counter(struct ifnet *, ift_counter);
int vi_full_init(struct vi_info *);
int vi_full_uninit(struct vi_info *);
void vi_sysctls(struct vi_info *);
void vi_tick(void *);
int rw_via_memwin(struct adapter *, int, uint32_t, uint32_t *, int, int);
int alloc_atid_tab(struct tid_info *, int);
void free_atid_tab(struct tid_info *);
int alloc_atid(struct adapter *, void *);
void *lookup_atid(struct adapter *, int);
void free_atid(struct adapter *, int);
void release_tid(struct adapter *, int, struct sge_wrq *);
int cxgbe_media_change(struct ifnet *);
void cxgbe_media_status(struct ifnet *, struct ifmediareq *);
+bool t4_os_dump_cimla(struct adapter *, int, bool);
+void t4_os_dump_devlog(struct adapter *);
#ifdef DEV_NETMAP
/* t4_netmap.c */
struct sge_nm_rxq;
void cxgbe_nm_attach(struct vi_info *);
void cxgbe_nm_detach(struct vi_info *);
void service_nm_rxq(struct sge_nm_rxq *);
#endif
/* t4_sge.c */
void t4_sge_modload(void);
void t4_sge_modunload(void);
uint64_t t4_sge_extfree_refs(void);
void t4_tweak_chip_settings(struct adapter *);
int t4_read_chip_settings(struct adapter *);
int t4_create_dma_tag(struct adapter *);
void t4_sge_sysctls(struct adapter *, struct sysctl_ctx_list *,
struct sysctl_oid_list *);
int t4_destroy_dma_tag(struct adapter *);
int t4_setup_adapter_queues(struct adapter *);
int t4_teardown_adapter_queues(struct adapter *);
int t4_setup_vi_queues(struct vi_info *);
int t4_teardown_vi_queues(struct vi_info *);
void t4_intr_all(void *);
void t4_intr(void *);
#ifdef DEV_NETMAP
void t4_nm_intr(void *);
void t4_vi_intr(void *);
#endif
void t4_intr_err(void *);
void t4_intr_evt(void *);
void t4_wrq_tx_locked(struct adapter *, struct sge_wrq *, struct wrqe *);
void t4_update_fl_bufsize(struct ifnet *);
struct mbuf *alloc_wr_mbuf(int, int);
int parse_pkt(struct adapter *, struct mbuf **);
void *start_wrq_wr(struct sge_wrq *, int, struct wrq_cookie *);
void commit_wrq_wr(struct sge_wrq *, void *, struct wrq_cookie *);
int tnl_cong(struct port_info *, int);
void t4_register_an_handler(an_handler_t);
void t4_register_fw_msg_handler(int, fw_msg_handler_t);
void t4_register_cpl_handler(int, cpl_handler_t);
void t4_register_shared_cpl_handler(int, cpl_handler_t, int);
#ifdef RATELIMIT
int ethofld_transmit(struct ifnet *, struct mbuf *);
void send_etid_flush_wr(struct cxgbe_snd_tag *);
#endif
/* t4_tracer.c */
struct t4_tracer;
void t4_tracer_modload(void);
void t4_tracer_modunload(void);
void t4_tracer_port_detach(struct adapter *);
int t4_get_tracer(struct adapter *, struct t4_tracer *);
int t4_set_tracer(struct adapter *, struct t4_tracer *);
int t4_trace_pkt(struct sge_iq *, const struct rss_header *, struct mbuf *);
int t5_trace_pkt(struct sge_iq *, const struct rss_header *, struct mbuf *);
/* t4_sched.c */
int t4_set_sched_class(struct adapter *, struct t4_sched_params *);
int t4_set_sched_queue(struct adapter *, struct t4_sched_queue *);
int t4_init_tx_sched(struct adapter *);
int t4_free_tx_sched(struct adapter *);
void t4_update_tx_sched(struct adapter *);
int t4_reserve_cl_rl_kbps(struct adapter *, int, u_int, int *);
void t4_release_cl_rl(struct adapter *, int, int);
int sysctl_tc(SYSCTL_HANDLER_ARGS);
int sysctl_tc_params(SYSCTL_HANDLER_ARGS);
#ifdef RATELIMIT
void t4_init_etid_table(struct adapter *);
void t4_free_etid_table(struct adapter *);
struct cxgbe_snd_tag *lookup_etid(struct adapter *, int);
int cxgbe_snd_tag_alloc(struct ifnet *, union if_snd_tag_alloc_params *,
struct m_snd_tag **);
int cxgbe_snd_tag_modify(struct m_snd_tag *, union if_snd_tag_modify_params *);
int cxgbe_snd_tag_query(struct m_snd_tag *, union if_snd_tag_query_params *);
void cxgbe_snd_tag_free(struct m_snd_tag *);
void cxgbe_snd_tag_free_locked(struct cxgbe_snd_tag *);
#endif
/* t4_filter.c */
int get_filter_mode(struct adapter *, uint32_t *);
int set_filter_mode(struct adapter *, uint32_t);
int get_filter(struct adapter *, struct t4_filter *);
int set_filter(struct adapter *, struct t4_filter *);
int del_filter(struct adapter *, struct t4_filter *);
int t4_filter_rpl(struct sge_iq *, const struct rss_header *, struct mbuf *);
int t4_hashfilter_ao_rpl(struct sge_iq *, const struct rss_header *, struct mbuf *);
int t4_hashfilter_tcb_rpl(struct sge_iq *, const struct rss_header *, struct mbuf *);
int t4_del_hashfilter_rpl(struct sge_iq *, const struct rss_header *, struct mbuf *);
void free_hftid_hash(struct tid_info *);
static inline struct wrqe *
alloc_wrqe(int wr_len, struct sge_wrq *wrq)
{
int len = offsetof(struct wrqe, wr) + wr_len;
struct wrqe *wr;
wr = malloc(len, M_CXGBE, M_NOWAIT);
if (__predict_false(wr == NULL))
return (NULL);
wr->wr_len = wr_len;
wr->wrq = wrq;
return (wr);
}
static inline void *
wrtod(struct wrqe *wr)
{
return (&wr->wr[0]);
}
static inline void
free_wrqe(struct wrqe *wr)
{
free(wr, M_CXGBE);
}
static inline void
t4_wrq_tx(struct adapter *sc, struct wrqe *wr)
{
struct sge_wrq *wrq = wr->wrq;
TXQ_LOCK(wrq);
t4_wrq_tx_locked(sc, wrq, wr);
TXQ_UNLOCK(wrq);
}
static inline int
read_via_memwin(struct adapter *sc, int idx, uint32_t addr, uint32_t *val,
int len)
{
return (rw_via_memwin(sc, idx, addr, val, len, 0));
}
static inline int
write_via_memwin(struct adapter *sc, int idx, uint32_t addr,
const uint32_t *val, int len)
{
return (rw_via_memwin(sc, idx, addr, (void *)(uintptr_t)val, len, 1));
}
#endif
Index: projects/clang800-import/sys/dev/cxgbe/common/t4_hw.c
===================================================================
--- projects/clang800-import/sys/dev/cxgbe/common/t4_hw.c (revision 343955)
+++ projects/clang800-import/sys/dev/cxgbe/common/t4_hw.c (revision 343956)
@@ -1,10714 +1,10726 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2012, 2016 Chelsio Communications, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_inet.h"
#include <sys/param.h>
#include <sys/eventhandler.h>
#include "common.h"
#include "t4_regs.h"
#include "t4_regs_values.h"
#include "firmware/t4fw_interface.h"
#undef msleep
#define msleep(x) do { \
if (cold) \
DELAY((x) * 1000); \
else \
pause("t4hw", (x) * hz / 1000); \
} while (0)
/**
* t4_wait_op_done_val - wait until an operation is completed
* @adapter: the adapter performing the operation
* @reg: the register to check for completion
* @mask: a single-bit field within @reg that indicates completion
* @polarity: the value of the field when the operation is completed
* @attempts: number of check iterations
* @delay: delay in usecs between iterations
* @valp: where to store the value of the register at completion time
*
* Wait until an operation is completed by checking a bit in a register
* up to @attempts times. If @valp is not NULL the value of the register
* at the time it indicated completion is stored there. Returns 0 if the
* operation completes and -EAGAIN otherwise.
*/
static int t4_wait_op_done_val(struct adapter *adapter, int reg, u32 mask,
int polarity, int attempts, int delay, u32 *valp)
{
while (1) {
u32 val = t4_read_reg(adapter, reg);
if (!!(val & mask) == polarity) {
if (valp)
*valp = val;
return 0;
}
if (--attempts == 0)
return -EAGAIN;
if (delay)
udelay(delay);
}
}
static inline int t4_wait_op_done(struct adapter *adapter, int reg, u32 mask,
int polarity, int attempts, int delay)
{
return t4_wait_op_done_val(adapter, reg, mask, polarity, attempts,
delay, NULL);
}
/**
* t4_set_reg_field - set a register field to a value
* @adapter: the adapter to program
* @addr: the register address
* @mask: specifies the portion of the register to modify
* @val: the new value for the register field
*
* Sets a register field specified by the supplied mask to the
* given value.
*/
void t4_set_reg_field(struct adapter *adapter, unsigned int addr, u32 mask,
u32 val)
{
u32 v = t4_read_reg(adapter, addr) & ~mask;
t4_write_reg(adapter, addr, v | val);
(void) t4_read_reg(adapter, addr); /* flush */
}
/**
* t4_read_indirect - read indirectly addressed registers
* @adap: the adapter
* @addr_reg: register holding the indirect address
* @data_reg: register holding the value of the indirect register
* @vals: where the read register values are stored
* @nregs: how many indirect registers to read
* @start_idx: index of first indirect register to read
*
* Reads registers that are accessed indirectly through an address/data
* register pair.
*/
void t4_read_indirect(struct adapter *adap, unsigned int addr_reg,
unsigned int data_reg, u32 *vals,
unsigned int nregs, unsigned int start_idx)
{
while (nregs--) {
t4_write_reg(adap, addr_reg, start_idx);
*vals++ = t4_read_reg(adap, data_reg);
start_idx++;
}
}
/**
* t4_write_indirect - write indirectly addressed registers
* @adap: the adapter
* @addr_reg: register holding the indirect addresses
* @data_reg: register holding the value for the indirect registers
* @vals: values to write
* @nregs: how many indirect registers to write
* @start_idx: index of first indirect register to write
*
* Writes a sequential block of registers that are accessed indirectly
* through an address/data register pair.
*/
void t4_write_indirect(struct adapter *adap, unsigned int addr_reg,
unsigned int data_reg, const u32 *vals,
unsigned int nregs, unsigned int start_idx)
{
while (nregs--) {
t4_write_reg(adap, addr_reg, start_idx++);
t4_write_reg(adap, data_reg, *vals++);
}
}
/*
* Read a 32-bit PCI Configuration Space register via the PCI-E backdoor
* mechanism. This guarantees that we get the real value even if we're
* operating within a Virtual Machine and the Hypervisor is trapping our
* Configuration Space accesses.
*
* N.B. This routine should only be used as a last resort: the firmware uses
* the backdoor registers on a regular basis and we can end up
* conflicting with its uses!
*/
u32 t4_hw_pci_read_cfg4(adapter_t *adap, int reg)
{
u32 req = V_FUNCTION(adap->pf) | V_REGISTER(reg);
u32 val;
if (chip_id(adap) <= CHELSIO_T5)
req |= F_ENABLE;
else
req |= F_T6_ENABLE;
if (is_t4(adap))
req |= F_LOCALCFG;
t4_write_reg(adap, A_PCIE_CFG_SPACE_REQ, req);
val = t4_read_reg(adap, A_PCIE_CFG_SPACE_DATA);
/*
* Reset F_ENABLE to 0 so reads of PCIE_CFG_SPACE_DATA won't cause a
* Configuration Space read. (None of the other fields matter when
* F_ENABLE is 0 so a simple register write is easier than a
* read-modify-write via t4_set_reg_field().)
*/
t4_write_reg(adap, A_PCIE_CFG_SPACE_REQ, 0);
return val;
}
/*
* t4_report_fw_error - report firmware error
* @adap: the adapter
*
* The adapter firmware can indicate error conditions to the host.
* If the firmware has indicated an error, print out the reason for
* the firmware error.
*/
static void t4_report_fw_error(struct adapter *adap)
{
static const char *const reason[] = {
"Crash", /* PCIE_FW_EVAL_CRASH */
"During Device Preparation", /* PCIE_FW_EVAL_PREP */
"During Device Configuration", /* PCIE_FW_EVAL_CONF */
"During Device Initialization", /* PCIE_FW_EVAL_INIT */
"Unexpected Event", /* PCIE_FW_EVAL_UNEXPECTEDEVENT */
"Insufficient Airflow", /* PCIE_FW_EVAL_OVERHEAT */
"Device Shutdown", /* PCIE_FW_EVAL_DEVICESHUTDOWN */
"Reserved", /* reserved */
};
u32 pcie_fw;
pcie_fw = t4_read_reg(adap, A_PCIE_FW);
if (pcie_fw & F_PCIE_FW_ERR) {
+ adap->flags &= ~FW_OK;
CH_ERR(adap, "firmware reports adapter error: %s (0x%08x)\n",
reason[G_PCIE_FW_EVAL(pcie_fw)], pcie_fw);
- adap->flags &= ~FW_OK;
+ if (pcie_fw != 0xffffffff)
+ t4_os_dump_devlog(adap);
}
}
/*
* Get the reply to a mailbox command and store it in @rpl in big-endian order.
*/
static void get_mbox_rpl(struct adapter *adap, __be64 *rpl, int nflit,
u32 mbox_addr)
{
for ( ; nflit; nflit--, mbox_addr += 8)
*rpl++ = cpu_to_be64(t4_read_reg64(adap, mbox_addr));
}
/*
* Handle a FW assertion reported in a mailbox.
*/
static void fw_asrt(struct adapter *adap, struct fw_debug_cmd *asrt)
{
CH_ALERT(adap,
"FW assertion at %.16s:%u, val0 %#x, val1 %#x\n",
asrt->u.assert.filename_0_7,
be32_to_cpu(asrt->u.assert.line),
be32_to_cpu(asrt->u.assert.x),
be32_to_cpu(asrt->u.assert.y));
}
struct port_tx_state {
uint64_t rx_pause;
uint64_t tx_frames;
};
static void
read_tx_state_one(struct adapter *sc, int i, struct port_tx_state *tx_state)
{
uint32_t rx_pause_reg, tx_frames_reg;
if (is_t4(sc)) {
tx_frames_reg = PORT_REG(i, A_MPS_PORT_STAT_TX_PORT_FRAMES_L);
rx_pause_reg = PORT_REG(i, A_MPS_PORT_STAT_RX_PORT_PAUSE_L);
} else {
tx_frames_reg = T5_PORT_REG(i, A_MPS_PORT_STAT_TX_PORT_FRAMES_L);
rx_pause_reg = T5_PORT_REG(i, A_MPS_PORT_STAT_RX_PORT_PAUSE_L);
}
tx_state->rx_pause = t4_read_reg64(sc, rx_pause_reg);
tx_state->tx_frames = t4_read_reg64(sc, tx_frames_reg);
}
static void
read_tx_state(struct adapter *sc, struct port_tx_state *tx_state)
{
int i;
for_each_port(sc, i)
read_tx_state_one(sc, i, &tx_state[i]);
}
static void
check_tx_state(struct adapter *sc, struct port_tx_state *tx_state)
{
uint32_t port_ctl_reg;
uint64_t tx_frames, rx_pause;
int i;
for_each_port(sc, i) {
rx_pause = tx_state[i].rx_pause;
tx_frames = tx_state[i].tx_frames;
read_tx_state_one(sc, i, &tx_state[i]); /* update */
if (is_t4(sc))
port_ctl_reg = PORT_REG(i, A_MPS_PORT_CTL);
else
port_ctl_reg = T5_PORT_REG(i, A_MPS_PORT_CTL);
if (t4_read_reg(sc, port_ctl_reg) & F_PORTTXEN &&
rx_pause != tx_state[i].rx_pause &&
tx_frames == tx_state[i].tx_frames) {
t4_set_reg_field(sc, port_ctl_reg, F_PORTTXEN, 0);
mdelay(1);
t4_set_reg_field(sc, port_ctl_reg, F_PORTTXEN, F_PORTTXEN);
}
}
}
#define X_CIM_PF_NOACCESS 0xeeeeeeee
/**
* t4_wr_mbox_meat_timeout - send a command to FW through the given mailbox
* @adap: the adapter
* @mbox: index of the mailbox to use
* @cmd: the command to write
* @size: command length in bytes
* @rpl: where to optionally store the reply
* @sleep_ok: if true we may sleep while awaiting command completion
* @timeout: time to wait for command to finish before timing out
* (negative implies @sleep_ok=false)
*
* Sends the given command to FW through the selected mailbox and waits
* for the FW to execute the command. If @rpl is not %NULL it is used to
* store the FW's reply to the command. The command and its optional
* reply are of the same length. Some FW commands like RESET and
* INITIALIZE can take a considerable amount of time to execute.
* @sleep_ok determines whether we may sleep while awaiting the response.
* If sleeping is allowed we use progressive backoff otherwise we spin.
* Note that passing in a negative @timeout is an alternate mechanism
* for specifying @sleep_ok=false. This is useful when a higher level
* interface allows for specification of @timeout but not @sleep_ok ...
*
* The return value is 0 on success or a negative errno on failure. A
* failure can happen either because we are not able to execute the
* command or FW executes it but signals an error. In the latter case
* the return value is the error code indicated by FW (negated).
*/
int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
int size, void *rpl, bool sleep_ok, int timeout)
{
/*
* We delay in small increments at first in an effort to maintain
* responsiveness for simple, fast executing commands but then back
* off to larger delays to a maximum retry delay.
*/
static const int delay[] = {
1, 1, 3, 5, 10, 10, 20, 50, 100
};
u32 v;
u64 res;
int i, ms, delay_idx, ret, next_tx_check;
u32 data_reg = PF_REG(mbox, A_CIM_PF_MAILBOX_DATA);
u32 ctl_reg = PF_REG(mbox, A_CIM_PF_MAILBOX_CTRL);
u32 ctl;
__be64 cmd_rpl[MBOX_LEN/8];
u32 pcie_fw;
struct port_tx_state tx_state[MAX_NPORTS];
if (adap->flags & CHK_MBOX_ACCESS)
ASSERT_SYNCHRONIZED_OP(adap);
if (size <= 0 || (size & 15) || size > MBOX_LEN)
return -EINVAL;
if (adap->flags & IS_VF) {
if (is_t6(adap))
data_reg = FW_T6VF_MBDATA_BASE_ADDR;
else
data_reg = FW_T4VF_MBDATA_BASE_ADDR;
ctl_reg = VF_CIM_REG(A_CIM_VF_EXT_MAILBOX_CTRL);
}
/*
* If we have a negative timeout, that implies that we can't sleep.
*/
if (timeout < 0) {
sleep_ok = false;
timeout = -timeout;
}
/*
* Attempt to gain access to the mailbox.
*/
for (i = 0; i < 4; i++) {
ctl = t4_read_reg(adap, ctl_reg);
v = G_MBOWNER(ctl);
if (v != X_MBOWNER_NONE)
break;
}
/*
* If we were unable to gain access, report the error to our caller.
*/
if (v != X_MBOWNER_PL) {
t4_report_fw_error(adap);
ret = (v == X_MBOWNER_FW) ? -EBUSY : -ETIMEDOUT;
return ret;
}
/*
* If we gain ownership of the mailbox and there's a "valid" message
* in it, this is likely an asynchronous error message from the
* firmware. So we'll report that and then proceed on with attempting
* to issue our own command ... which may well fail if the error
* presaged the firmware crashing ...
*/
if (ctl & F_MBMSGVALID) {
CH_DUMP_MBOX(adap, mbox, data_reg, "VLD", NULL, true);
}
/*
* Copy in the new mailbox command and send it on its way ...
*/
memset(cmd_rpl, 0, sizeof(cmd_rpl));
memcpy(cmd_rpl, cmd, size);
CH_DUMP_MBOX(adap, mbox, 0, "cmd", cmd_rpl, false);
for (i = 0; i < ARRAY_SIZE(cmd_rpl); i++)
t4_write_reg64(adap, data_reg + i * 8, be64_to_cpu(cmd_rpl[i]));
if (adap->flags & IS_VF) {
/*
* For the VFs, the Mailbox Data "registers" are
* actually backed by T4's "MA" interface rather than
* PL Registers (as is the case for the PFs). Because
* these are in different coherency domains, the write
* to the VF's PL-register-backed Mailbox Control can
* race in front of the writes to the MA-backed VF
* Mailbox Data "registers". So we need to do a
* read-back on at least one byte of the VF Mailbox
* Data registers before doing the write to the VF
* Mailbox Control register.
*/
t4_read_reg(adap, data_reg);
}
t4_write_reg(adap, ctl_reg, F_MBMSGVALID | V_MBOWNER(X_MBOWNER_FW));
read_tx_state(adap, &tx_state[0]); /* also flushes the write_reg */
next_tx_check = 1000;
delay_idx = 0;
ms = delay[0];
/*
* Loop waiting for the reply; bail out if we time out or the firmware
* reports an error.
*/
pcie_fw = 0;
for (i = 0; i < timeout; i += ms) {
if (!(adap->flags & IS_VF)) {
pcie_fw = t4_read_reg(adap, A_PCIE_FW);
if (pcie_fw & F_PCIE_FW_ERR)
break;
}
if (i >= next_tx_check) {
check_tx_state(adap, &tx_state[0]);
next_tx_check = i + 1000;
}
if (sleep_ok) {
ms = delay[delay_idx]; /* last element may repeat */
if (delay_idx < ARRAY_SIZE(delay) - 1)
delay_idx++;
msleep(ms);
} else {
mdelay(ms);
}
v = t4_read_reg(adap, ctl_reg);
if (v == X_CIM_PF_NOACCESS)
continue;
if (G_MBOWNER(v) == X_MBOWNER_PL) {
if (!(v & F_MBMSGVALID)) {
t4_write_reg(adap, ctl_reg,
V_MBOWNER(X_MBOWNER_NONE));
continue;
}
/*
* Retrieve the command reply and release the mailbox.
*/
get_mbox_rpl(adap, cmd_rpl, MBOX_LEN/8, data_reg);
CH_DUMP_MBOX(adap, mbox, 0, "rpl", cmd_rpl, false);
t4_write_reg(adap, ctl_reg, V_MBOWNER(X_MBOWNER_NONE));
res = be64_to_cpu(cmd_rpl[0]);
if (G_FW_CMD_OP(res >> 32) == FW_DEBUG_CMD) {
fw_asrt(adap, (struct fw_debug_cmd *)cmd_rpl);
res = V_FW_CMD_RETVAL(EIO);
} else if (rpl)
memcpy(rpl, cmd_rpl, size);
return -G_FW_CMD_RETVAL((int)res);
}
}
/*
* We timed out waiting for a reply to our mailbox command. Report
* the error and also check to see if the firmware reported any
* errors ...
*/
- ret = (pcie_fw & F_PCIE_FW_ERR) ? -ENXIO : -ETIMEDOUT;
CH_ERR(adap, "command %#x in mbox %d timed out (0x%08x).\n",
*(const u8 *)cmd, mbox, pcie_fw);
CH_DUMP_MBOX(adap, mbox, 0, "cmdsent", cmd_rpl, true);
CH_DUMP_MBOX(adap, mbox, data_reg, "current", NULL, true);
- t4_report_fw_error(adap);
+ if (pcie_fw & F_PCIE_FW_ERR) {
+ ret = -ENXIO;
+ t4_report_fw_error(adap);
+ } else {
+ ret = -ETIMEDOUT;
+ t4_os_dump_devlog(adap);
+ }
+
t4_fatal_err(adap, true);
return ret;
}
int t4_wr_mbox_meat(struct adapter *adap, int mbox, const void *cmd, int size,
void *rpl, bool sleep_ok)
{
return t4_wr_mbox_meat_timeout(adap, mbox, cmd, size, rpl,
sleep_ok, FW_CMD_MAX_TIMEOUT);
}
static int t4_edc_err_read(struct adapter *adap, int idx)
{
u32 edc_ecc_err_addr_reg;
u32 edc_bist_status_rdata_reg;
if (is_t4(adap)) {
CH_WARN(adap, "%s: T4 NOT supported.\n", __func__);
return 0;
}
if (idx != MEM_EDC0 && idx != MEM_EDC1) {
CH_WARN(adap, "%s: idx %d NOT supported.\n", __func__, idx);
return 0;
}
edc_ecc_err_addr_reg = EDC_T5_REG(A_EDC_H_ECC_ERR_ADDR, idx);
edc_bist_status_rdata_reg = EDC_T5_REG(A_EDC_H_BIST_STATUS_RDATA, idx);
CH_WARN(adap,
"edc%d err addr 0x%x: 0x%x.\n",
idx, edc_ecc_err_addr_reg,
t4_read_reg(adap, edc_ecc_err_addr_reg));
CH_WARN(adap,
"bist: 0x%x, status %llx %llx %llx %llx %llx %llx %llx %llx %llx.\n",
edc_bist_status_rdata_reg,
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 8),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 16),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 24),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 32),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 40),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 48),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 56),
(unsigned long long)t4_read_reg64(adap, edc_bist_status_rdata_reg + 64));
return 0;
}
/**
* t4_mc_read - read from MC through backdoor accesses
* @adap: the adapter
* @idx: which MC to access
* @addr: address of first byte requested
* @data: 64 bytes of data containing the requested address
* @ecc: where to store the corresponding 64-bit ECC word
*
* Read 64 bytes of data from MC starting at a 64-byte-aligned address
* that covers the requested address @addr. If @ecc is not %NULL it
* is assigned the 64-bit ECC word for the read data.
*/
int t4_mc_read(struct adapter *adap, int idx, u32 addr, __be32 *data, u64 *ecc)
{
int i;
u32 mc_bist_cmd_reg, mc_bist_cmd_addr_reg, mc_bist_cmd_len_reg;
u32 mc_bist_status_rdata_reg, mc_bist_data_pattern_reg;
if (is_t4(adap)) {
mc_bist_cmd_reg = A_MC_BIST_CMD;
mc_bist_cmd_addr_reg = A_MC_BIST_CMD_ADDR;
mc_bist_cmd_len_reg = A_MC_BIST_CMD_LEN;
mc_bist_status_rdata_reg = A_MC_BIST_STATUS_RDATA;
mc_bist_data_pattern_reg = A_MC_BIST_DATA_PATTERN;
} else {
mc_bist_cmd_reg = MC_REG(A_MC_P_BIST_CMD, idx);
mc_bist_cmd_addr_reg = MC_REG(A_MC_P_BIST_CMD_ADDR, idx);
mc_bist_cmd_len_reg = MC_REG(A_MC_P_BIST_CMD_LEN, idx);
mc_bist_status_rdata_reg = MC_REG(A_MC_P_BIST_STATUS_RDATA,
idx);
mc_bist_data_pattern_reg = MC_REG(A_MC_P_BIST_DATA_PATTERN,
idx);
}
if (t4_read_reg(adap, mc_bist_cmd_reg) & F_START_BIST)
return -EBUSY;
t4_write_reg(adap, mc_bist_cmd_addr_reg, addr & ~0x3fU);
t4_write_reg(adap, mc_bist_cmd_len_reg, 64);
t4_write_reg(adap, mc_bist_data_pattern_reg, 0xc);
t4_write_reg(adap, mc_bist_cmd_reg, V_BIST_OPCODE(1) |
F_START_BIST | V_BIST_CMD_GAP(1));
i = t4_wait_op_done(adap, mc_bist_cmd_reg, F_START_BIST, 0, 10, 1);
if (i)
return i;
#define MC_DATA(i) MC_BIST_STATUS_REG(mc_bist_status_rdata_reg, i)
for (i = 15; i >= 0; i--)
*data++ = ntohl(t4_read_reg(adap, MC_DATA(i)));
if (ecc)
*ecc = t4_read_reg64(adap, MC_DATA(16));
#undef MC_DATA
return 0;
}
/**
* t4_edc_read - read from EDC through backdoor accesses
* @adap: the adapter
* @idx: which EDC to access
* @addr: address of first byte requested
* @data: 64 bytes of data containing the requested address
* @ecc: where to store the corresponding 64-bit ECC word
*
* Read 64 bytes of data from EDC starting at a 64-byte-aligned address
that covers the requested address @addr. If @ecc is not %NULL it
* is assigned the 64-bit ECC word for the read data.
*/
int t4_edc_read(struct adapter *adap, int idx, u32 addr, __be32 *data, u64 *ecc)
{
int i;
u32 edc_bist_cmd_reg, edc_bist_cmd_addr_reg, edc_bist_cmd_len_reg;
u32 edc_bist_cmd_data_pattern, edc_bist_status_rdata_reg;
if (is_t4(adap)) {
edc_bist_cmd_reg = EDC_REG(A_EDC_BIST_CMD, idx);
edc_bist_cmd_addr_reg = EDC_REG(A_EDC_BIST_CMD_ADDR, idx);
edc_bist_cmd_len_reg = EDC_REG(A_EDC_BIST_CMD_LEN, idx);
edc_bist_cmd_data_pattern = EDC_REG(A_EDC_BIST_DATA_PATTERN,
idx);
edc_bist_status_rdata_reg = EDC_REG(A_EDC_BIST_STATUS_RDATA,
idx);
} else {
/*
* These macros are missing from t4_regs.h.
* Added temporarily for testing.
*/
#define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR)
#define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx)
edc_bist_cmd_reg = EDC_REG_T5(A_EDC_H_BIST_CMD, idx);
edc_bist_cmd_addr_reg = EDC_REG_T5(A_EDC_H_BIST_CMD_ADDR, idx);
edc_bist_cmd_len_reg = EDC_REG_T5(A_EDC_H_BIST_CMD_LEN, idx);
edc_bist_cmd_data_pattern = EDC_REG_T5(A_EDC_H_BIST_DATA_PATTERN,
idx);
edc_bist_status_rdata_reg = EDC_REG_T5(A_EDC_H_BIST_STATUS_RDATA,
idx);
#undef EDC_REG_T5
#undef EDC_STRIDE_T5
}
if (t4_read_reg(adap, edc_bist_cmd_reg) & F_START_BIST)
return -EBUSY;
t4_write_reg(adap, edc_bist_cmd_addr_reg, addr & ~0x3fU);
t4_write_reg(adap, edc_bist_cmd_len_reg, 64);
t4_write_reg(adap, edc_bist_cmd_data_pattern, 0xc);
t4_write_reg(adap, edc_bist_cmd_reg,
V_BIST_OPCODE(1) | V_BIST_CMD_GAP(1) | F_START_BIST);
i = t4_wait_op_done(adap, edc_bist_cmd_reg, F_START_BIST, 0, 10, 1);
if (i)
return i;
#define EDC_DATA(i) EDC_BIST_STATUS_REG(edc_bist_status_rdata_reg, i)
for (i = 15; i >= 0; i--)
*data++ = ntohl(t4_read_reg(adap, EDC_DATA(i)));
if (ecc)
*ecc = t4_read_reg64(adap, EDC_DATA(16));
#undef EDC_DATA
return 0;
}
/**
* t4_mem_read - read EDC 0, EDC 1 or MC into buffer
* @adap: the adapter
* @mtype: memory type: MEM_EDC0, MEM_EDC1 or MEM_MC
* @addr: address within indicated memory type
* @len: amount of memory to read
* @buf: host memory buffer
*
* Reads an [almost] arbitrary memory region in the firmware: the
* firmware memory address, length and host buffer must be aligned on
32-bit boundaries. The memory is returned as a raw byte sequence from
the firmware's memory. If this memory contains data structures which
contain multi-byte integers, it is the caller's responsibility to
* perform appropriate byte order conversions.
*/
int t4_mem_read(struct adapter *adap, int mtype, u32 addr, u32 len,
__be32 *buf)
{
u32 pos, start, end, offset;
int ret;
/*
* Argument sanity checks ...
*/
if ((addr & 0x3) || (len & 0x3))
return -EINVAL;
/*
The underlying EDC/MC read routines read 64 bytes at a time so we
* need to round down the start and round up the end. We'll start
* copying out of the first line at (addr - start) a word at a time.
*/
start = rounddown2(addr, 64);
end = roundup2(addr + len, 64);
offset = (addr - start)/sizeof(__be32);
for (pos = start; pos < end; pos += 64, offset = 0) {
__be32 data[16];
/*
* Read the chip's memory block and bail if there's an error.
*/
if ((mtype == MEM_MC) || (mtype == MEM_MC1))
ret = t4_mc_read(adap, mtype - MEM_MC, pos, data, NULL);
else
ret = t4_edc_read(adap, mtype, pos, data, NULL);
if (ret)
return ret;
/*
* Copy the data into the caller's memory buffer.
*/
while (offset < 16 && len > 0) {
*buf++ = data[offset++];
len -= sizeof(__be32);
}
}
return 0;
}
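The comment above describes the 64-byte alignment and word-offset logic used by t4_mem_read. As a minimal user-space sketch of that copy loop, the following stand-alone function replays the same rounddown2/roundup2 arithmetic against a plain memory buffer; mem_read_sketch and the macro stand-ins are illustrative only (the real driver reads through the EDC/MC BIST backdoor and byte-swaps with ntohl, which this sketch omits):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins for the driver's alignment helpers. */
#define rounddown2(x, y) ((x) & ~((y) - 1))
#define roundup2(x, y)   (((x) + ((y) - 1)) & ~((y) - 1))

/*
 * Sketch of the t4_mem_read copy loop: round the request down/up to
 * 64-byte lines, read a full line at a time, and start copying out of
 * the first line at word offset (addr - start) / 4.
 */
static int mem_read_sketch(const uint8_t *mem, uint32_t addr, uint32_t len,
    uint32_t *buf)
{
	uint32_t pos, start, end, offset;

	/* Same sanity checks as the driver: 32-bit aligned addr and len. */
	if ((addr & 0x3) || (len & 0x3))
		return -1;
	start = rounddown2(addr, 64);
	end = roundup2(addr + len, 64);
	offset = (addr - start) / sizeof(uint32_t);
	for (pos = start; pos < end; pos += 64, offset = 0) {
		uint32_t data[16];

		/* memcpy stands in for t4_mc_read()/t4_edc_read(). */
		memcpy(data, mem + pos, 64);
		while (offset < 16 && len > 0) {
			*buf++ = data[offset++];
			len -= sizeof(uint32_t);
		}
	}
	return 0;
}
```

Note how only the first iteration uses a non-zero offset; every subsequent 64-byte line is copied from word 0, exactly as in the loop above.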
/*
* Return the specified PCI-E Configuration Space register from our Physical
Function. We try first via a Firmware LDST Command (if drv_fw_attach != 0)
* since we prefer to let the firmware own all of these registers, but if that
* fails we go for it directly ourselves.
*/
u32 t4_read_pcie_cfg4(struct adapter *adap, int reg, int drv_fw_attach)
{
/*
If drv_fw_attach != 0, construct and send the Firmware LDST Command to
* retrieve the specified PCI-E Configuration Space register.
*/
if (drv_fw_attach != 0) {
struct fw_ldst_cmd ldst_cmd;
int ret;
memset(&ldst_cmd, 0, sizeof(ldst_cmd));
ldst_cmd.op_to_addrspace =
cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_READ |
V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_FUNC_PCIE));
ldst_cmd.cycles_to_len16 = cpu_to_be32(FW_LEN16(ldst_cmd));
ldst_cmd.u.pcie.select_naccess = V_FW_LDST_CMD_NACCESS(1);
ldst_cmd.u.pcie.ctrl_to_fn =
(F_FW_LDST_CMD_LC | V_FW_LDST_CMD_FN(adap->pf));
ldst_cmd.u.pcie.r = reg;
/*
* If the LDST Command succeeds, return the result, otherwise
* fall through to reading it directly ourselves ...
*/
ret = t4_wr_mbox(adap, adap->mbox, &ldst_cmd, sizeof(ldst_cmd),
&ldst_cmd);
if (ret == 0)
return be32_to_cpu(ldst_cmd.u.pcie.data[0]);
CH_WARN(adap, "Firmware failed to return "
"Configuration Space register %d, err = %d\n",
reg, -ret);
}
/*
* Read the desired Configuration Space register via the PCI-E
* Backdoor mechanism.
*/
return t4_hw_pci_read_cfg4(adap, reg);
}
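The comment block above describes a try-the-firmware-first, fall-back-to-direct-access pattern. A hypothetical stand-alone sketch of that control flow follows; the *_stub helpers are invented for illustration and model a firmware that is unavailable (the real function builds an FW_LDST_CMD mailbox command and falls back to t4_hw_pci_read_cfg4):

```c
#include <errno.h>

/* Pretend the firmware path is down, as when fw_attach is lost. */
static int fw_read_stub(int reg, unsigned int *val)
{
	(void)reg;
	(void)val;
	return -EBUSY;
}

/* Direct backdoor read; value is synthetic for the sketch. */
static unsigned int direct_read_stub(int reg)
{
	return 0x100u + (unsigned int)reg;
}

/* Prefer the firmware-owned path, but fall through on any failure. */
static unsigned int read_cfg_sketch(int reg, int fw_attach)
{
	unsigned int val;

	if (fw_attach != 0 && fw_read_stub(reg, &val) == 0)
		return val;
	return direct_read_stub(reg);
}
```

Either way the caller gets a value; the fw_attach flag only selects which path is tried first, mirroring how t4_read_pcie_cfg4 warns and falls through rather than failing.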
/**
* t4_get_regs_len - return the size of the chip's register set
* @adapter: the adapter
*
* Returns the size of the chip's BAR0 register space.
*/
unsigned int t4_get_regs_len(struct adapter *adapter)
{
unsigned int chip_version = chip_id(adapter);
switch (chip_version) {
case CHELSIO_T4:
if (adapter->flags & IS_VF)
return FW_T4VF_REGMAP_SIZE;
return T4_REGMAP_SIZE;
case CHELSIO_T5:
case CHELSIO_T6:
if (adapter->flags & IS_VF)
return FW_T4VF_REGMAP_SIZE;
return T5_REGMAP_SIZE;
}
CH_ERR(adapter,
"Unsupported chip version %d\n", chip_version);
return 0;
}
/**
* t4_get_regs - read chip registers into provided buffer
* @adap: the adapter
* @buf: register buffer
* @buf_size: size (in bytes) of register buffer
*
* If the provided register buffer isn't large enough for the chip's
* full register range, the register dump will be truncated to the
* register buffer's size.
*/
void t4_get_regs(struct adapter *adap, u8 *buf, size_t buf_size)
{
static const unsigned int t4_reg_ranges[] = {
0x1008, 0x1108,
0x1180, 0x1184,
0x1190, 0x1194,
0x11a0, 0x11a4,
0x11b0, 0x11b4,
0x11fc, 0x123c,
0x1300, 0x173c,
0x1800, 0x18fc,
0x3000, 0x30d8,
0x30e0, 0x30e4,
0x30ec, 0x5910,
0x5920, 0x5924,
0x5960, 0x5960,
0x5968, 0x5968,
0x5970, 0x5970,
0x5978, 0x5978,
0x5980, 0x5980,
0x5988, 0x5988,
0x5990, 0x5990,
0x5998, 0x5998,
0x59a0, 0x59d4,
0x5a00, 0x5ae0,
0x5ae8, 0x5ae8,
0x5af0, 0x5af0,
0x5af8, 0x5af8,
0x6000, 0x6098,
0x6100, 0x6150,
0x6200, 0x6208,
0x6240, 0x6248,
0x6280, 0x62b0,
0x62c0, 0x6338,
0x6370, 0x638c,
0x6400, 0x643c,
0x6500, 0x6524,
0x6a00, 0x6a04,
0x6a14, 0x6a38,
0x6a60, 0x6a70,
0x6a78, 0x6a78,
0x6b00, 0x6b0c,
0x6b1c, 0x6b84,
0x6bf0, 0x6bf8,
0x6c00, 0x6c0c,
0x6c1c, 0x6c84,
0x6cf0, 0x6cf8,
0x6d00, 0x6d0c,
0x6d1c, 0x6d84,
0x6df0, 0x6df8,
0x6e00, 0x6e0c,
0x6e1c, 0x6e84,
0x6ef0, 0x6ef8,
0x6f00, 0x6f0c,
0x6f1c, 0x6f84,
0x6ff0, 0x6ff8,
0x7000, 0x700c,
0x701c, 0x7084,
0x70f0, 0x70f8,
0x7100, 0x710c,
0x711c, 0x7184,
0x71f0, 0x71f8,
0x7200, 0x720c,
0x721c, 0x7284,
0x72f0, 0x72f8,
0x7300, 0x730c,
0x731c, 0x7384,
0x73f0, 0x73f8,
0x7400, 0x7450,
0x7500, 0x7530,
0x7600, 0x760c,
0x7614, 0x761c,
0x7680, 0x76cc,
0x7700, 0x7798,
0x77c0, 0x77fc,
0x7900, 0x79fc,
0x7b00, 0x7b58,
0x7b60, 0x7b84,
0x7b8c, 0x7c38,
0x7d00, 0x7d38,
0x7d40, 0x7d80,
0x7d8c, 0x7ddc,
0x7de4, 0x7e04,
0x7e10, 0x7e1c,
0x7e24, 0x7e38,
0x7e40, 0x7e44,
0x7e4c, 0x7e78,
0x7e80, 0x7ea4,
0x7eac, 0x7edc,
0x7ee8, 0x7efc,
0x8dc0, 0x8e04,
0x8e10, 0x8e1c,
0x8e30, 0x8e78,
0x8ea0, 0x8eb8,
0x8ec0, 0x8f6c,
0x8fc0, 0x9008,
0x9010, 0x9058,
0x9060, 0x9060,
0x9068, 0x9074,
0x90fc, 0x90fc,
0x9400, 0x9408,
0x9410, 0x9458,
0x9600, 0x9600,
0x9608, 0x9638,
0x9640, 0x96bc,
0x9800, 0x9808,
0x9820, 0x983c,
0x9850, 0x9864,
0x9c00, 0x9c6c,
0x9c80, 0x9cec,
0x9d00, 0x9d6c,
0x9d80, 0x9dec,
0x9e00, 0x9e6c,
0x9e80, 0x9eec,
0x9f00, 0x9f6c,
0x9f80, 0x9fec,
0xd004, 0xd004,
0xd010, 0xd03c,
0xdfc0, 0xdfe0,
0xe000, 0xea7c,
0xf000, 0x11110,
0x11118, 0x11190,
0x19040, 0x1906c,
0x19078, 0x19080,
0x1908c, 0x190e4,
0x190f0, 0x190f8,
0x19100, 0x19110,
0x19120, 0x19124,
0x19150, 0x19194,
0x1919c, 0x191b0,
0x191d0, 0x191e8,
0x19238, 0x1924c,
0x193f8, 0x1943c,
0x1944c, 0x19474,
0x19490, 0x194e0,
0x194f0, 0x194f8,
0x19800, 0x19c08,
0x19c10, 0x19c90,
0x19ca0, 0x19ce4,
0x19cf0, 0x19d40,
0x19d50, 0x19d94,
0x19da0, 0x19de8,
0x19df0, 0x19e40,
0x19e50, 0x19e90,
0x19ea0, 0x19f4c,
0x1a000, 0x1a004,
0x1a010, 0x1a06c,
0x1a0b0, 0x1a0e4,
0x1a0ec, 0x1a0f4,
0x1a100, 0x1a108,
0x1a114, 0x1a120,
0x1a128, 0x1a130,
0x1a138, 0x1a138,
0x1a190, 0x1a1c4,
0x1a1fc, 0x1a1fc,
0x1e040, 0x1e04c,
0x1e284, 0x1e28c,
0x1e2c0, 0x1e2c0,
0x1e2e0, 0x1e2e0,
0x1e300, 0x1e384,
0x1e3c0, 0x1e3c8,
0x1e440, 0x1e44c,
0x1e684, 0x1e68c,
0x1e6c0, 0x1e6c0,
0x1e6e0, 0x1e6e0,
0x1e700, 0x1e784,
0x1e7c0, 0x1e7c8,
0x1e840, 0x1e84c,
0x1ea84, 0x1ea8c,
0x1eac0, 0x1eac0,
0x1eae0, 0x1eae0,
0x1eb00, 0x1eb84,
0x1ebc0, 0x1ebc8,
0x1ec40, 0x1ec4c,
0x1ee84, 0x1ee8c,
0x1eec0, 0x1eec0,
0x1eee0, 0x1eee0,
0x1ef00, 0x1ef84,
0x1efc0, 0x1efc8,
0x1f040, 0x1f04c,
0x1f284, 0x1f28c,
0x1f2c0, 0x1f2c0,
0x1f2e0, 0x1f2e0,
0x1f300, 0x1f384,
0x1f3c0, 0x1f3c8,
0x1f440, 0x1f44c,
0x1f684, 0x1f68c,
0x1f6c0, 0x1f6c0,
0x1f6e0, 0x1f6e0,
0x1f700, 0x1f784,
0x1f7c0, 0x1f7c8,
0x1f840, 0x1f84c,
0x1fa84, 0x1fa8c,
0x1fac0, 0x1fac0,
0x1fae0, 0x1fae0,
0x1fb00, 0x1fb84,
0x1fbc0, 0x1fbc8,
0x1fc40, 0x1fc4c,
0x1fe84, 0x1fe8c,
0x1fec0, 0x1fec0,
0x1fee0, 0x1fee0,
0x1ff00, 0x1ff84,
0x1ffc0, 0x1ffc8,
0x20000, 0x2002c,
0x20100, 0x2013c,
0x20190, 0x201a0,
0x201a8, 0x201b8,
0x201c4, 0x201c8,
0x20200, 0x20318,
0x20400, 0x204b4,
0x204c0, 0x20528,
0x20540, 0x20614,
0x21000, 0x21040,
0x2104c, 0x21060,
0x210c0, 0x210ec,
0x21200, 0x21268,
0x21270, 0x21284,
0x212fc, 0x21388,
0x21400, 0x21404,
0x21500, 0x21500,
0x21510, 0x21518,
0x2152c, 0x21530,
0x2153c, 0x2153c,
0x21550, 0x21554,
0x21600, 0x21600,
0x21608, 0x2161c,
0x21624, 0x21628,
0x21630, 0x21634,
0x2163c, 0x2163c,
0x21700, 0x2171c,
0x21780, 0x2178c,
0x21800, 0x21818,
0x21820, 0x21828,
0x21830, 0x21848,
0x21850, 0x21854,
0x21860, 0x21868,
0x21870, 0x21870,
0x21878, 0x21898,
0x218a0, 0x218a8,
0x218b0, 0x218c8,
0x218d0, 0x218d4,
0x218e0, 0x218e8,
0x218f0, 0x218f0,
0x218f8, 0x21a18,
0x21a20, 0x21a28,
0x21a30, 0x21a48,
0x21a50, 0x21a54,
0x21a60, 0x21a68,
0x21a70, 0x21a70,
0x21a78, 0x21a98,
0x21aa0, 0x21aa8,
0x21ab0, 0x21ac8,
0x21ad0, 0x21ad4,
0x21ae0, 0x21ae8,
0x21af0, 0x21af0,
0x21af8, 0x21c18,
0x21c20, 0x21c20,
0x21c28, 0x21c30,
0x21c38, 0x21c38,
0x21c80, 0x21c98,
0x21ca0, 0x21ca8,
0x21cb0, 0x21cc8,
0x21cd0, 0x21cd4,
0x21ce0, 0x21ce8,
0x21cf0, 0x21cf0,
0x21cf8, 0x21d7c,
0x21e00, 0x21e04,
0x22000, 0x2202c,
0x22100, 0x2213c,
0x22190, 0x221a0,
0x221a8, 0x221b8,
0x221c4, 0x221c8,
0x22200, 0x22318,
0x22400, 0x224b4,
0x224c0, 0x22528,
0x22540, 0x22614,
0x23000, 0x23040,
0x2304c, 0x23060,
0x230c0, 0x230ec,
0x23200, 0x23268,
0x23270, 0x23284,
0x232fc, 0x23388,
0x23400, 0x23404,
0x23500, 0x23500,
0x23510, 0x23518,
0x2352c, 0x23530,
0x2353c, 0x2353c,
0x23550, 0x23554,
0x23600, 0x23600,
0x23608, 0x2361c,
0x23624, 0x23628,
0x23630, 0x23634,
0x2363c, 0x2363c,
0x23700, 0x2371c,
0x23780, 0x2378c,
0x23800, 0x23818,
0x23820, 0x23828,
0x23830, 0x23848,
0x23850, 0x23854,
0x23860, 0x23868,
0x23870, 0x23870,
0x23878, 0x23898,
0x238a0, 0x238a8,
0x238b0, 0x238c8,
0x238d0, 0x238d4,
0x238e0, 0x238e8,
0x238f0, 0x238f0,
0x238f8, 0x23a18,
0x23a20, 0x23a28,
0x23a30, 0x23a48,
0x23a50, 0x23a54,
0x23a60, 0x23a68,
0x23a70, 0x23a70,
0x23a78, 0x23a98,
0x23aa0, 0x23aa8,
0x23ab0, 0x23ac8,
0x23ad0, 0x23ad4,
0x23ae0, 0x23ae8,
0x23af0, 0x23af0,
0x23af8, 0x23c18,
0x23c20, 0x23c20,
0x23c28, 0x23c30,
0x23c38, 0x23c38,
0x23c80, 0x23c98,
0x23ca0, 0x23ca8,
0x23cb0, 0x23cc8,
0x23cd0, 0x23cd4,
0x23ce0, 0x23ce8,
0x23cf0, 0x23cf0,
0x23cf8, 0x23d7c,
0x23e00, 0x23e04,
0x24000, 0x2402c,
0x24100, 0x2413c,
0x24190, 0x241a0,
0x241a8, 0x241b8,
0x241c4, 0x241c8,
0x24200, 0x24318,
0x24400, 0x244b4,
0x244c0, 0x24528,
0x24540, 0x24614,
0x25000, 0x25040,
0x2504c, 0x25060,
0x250c0, 0x250ec,
0x25200, 0x25268,
0x25270, 0x25284,
0x252fc, 0x25388,
0x25400, 0x25404,
0x25500, 0x25500,
0x25510, 0x25518,
0x2552c, 0x25530,
0x2553c, 0x2553c,
0x25550, 0x25554,
0x25600, 0x25600,
0x25608, 0x2561c,
0x25624, 0x25628,
0x25630, 0x25634,
0x2563c, 0x2563c,
0x25700, 0x2571c,
0x25780, 0x2578c,
0x25800, 0x25818,
0x25820, 0x25828,
0x25830, 0x25848,
0x25850, 0x25854,
0x25860, 0x25868,
0x25870, 0x25870,
0x25878, 0x25898,
0x258a0, 0x258a8,
0x258b0, 0x258c8,
0x258d0, 0x258d4,
0x258e0, 0x258e8,
0x258f0, 0x258f0,
0x258f8, 0x25a18,
0x25a20, 0x25a28,
0x25a30, 0x25a48,
0x25a50, 0x25a54,
0x25a60, 0x25a68,
0x25a70, 0x25a70,
0x25a78, 0x25a98,
0x25aa0, 0x25aa8,
0x25ab0, 0x25ac8,
0x25ad0, 0x25ad4,
0x25ae0, 0x25ae8,
0x25af0, 0x25af0,
0x25af8, 0x25c18,
0x25c20, 0x25c20,
0x25c28, 0x25c30,
0x25c38, 0x25c38,
0x25c80, 0x25c98,
0x25ca0, 0x25ca8,
0x25cb0, 0x25cc8,
0x25cd0, 0x25cd4,
0x25ce0, 0x25ce8,
0x25cf0, 0x25cf0,
0x25cf8, 0x25d7c,
0x25e00, 0x25e04,
0x26000, 0x2602c,
0x26100, 0x2613c,
0x26190, 0x261a0,
0x261a8, 0x261b8,
0x261c4, 0x261c8,
0x26200, 0x26318,
0x26400, 0x264b4,
0x264c0, 0x26528,
0x26540, 0x26614,
0x27000, 0x27040,
0x2704c, 0x27060,
0x270c0, 0x270ec,
0x27200, 0x27268,
0x27270, 0x27284,
0x272fc, 0x27388,
0x27400, 0x27404,
0x27500, 0x27500,
0x27510, 0x27518,
0x2752c, 0x27530,
0x2753c, 0x2753c,
0x27550, 0x27554,
0x27600, 0x27600,
0x27608, 0x2761c,
0x27624, 0x27628,
0x27630, 0x27634,
0x2763c, 0x2763c,
0x27700, 0x2771c,
0x27780, 0x2778c,
0x27800, 0x27818,
0x27820, 0x27828,
0x27830, 0x27848,
0x27850, 0x27854,
0x27860, 0x27868,
0x27870, 0x27870,
0x27878, 0x27898,
0x278a0, 0x278a8,
0x278b0, 0x278c8,
0x278d0, 0x278d4,
0x278e0, 0x278e8,
0x278f0, 0x278f0,
0x278f8, 0x27a18,
0x27a20, 0x27a28,
0x27a30, 0x27a48,
0x27a50, 0x27a54,
0x27a60, 0x27a68,
0x27a70, 0x27a70,
0x27a78, 0x27a98,
0x27aa0, 0x27aa8,
0x27ab0, 0x27ac8,
0x27ad0, 0x27ad4,
0x27ae0, 0x27ae8,
0x27af0, 0x27af0,
0x27af8, 0x27c18,
0x27c20, 0x27c20,
0x27c28, 0x27c30,
0x27c38, 0x27c38,
0x27c80, 0x27c98,
0x27ca0, 0x27ca8,
0x27cb0, 0x27cc8,
0x27cd0, 0x27cd4,
0x27ce0, 0x27ce8,
0x27cf0, 0x27cf0,
0x27cf8, 0x27d7c,
0x27e00, 0x27e04,
};
static const unsigned int t4vf_reg_ranges[] = {
VF_SGE_REG(A_SGE_VF_KDOORBELL), VF_SGE_REG(A_SGE_VF_GTS),
VF_MPS_REG(A_MPS_VF_CTL),
VF_MPS_REG(A_MPS_VF_STAT_RX_VF_ERR_FRAMES_H),
VF_PL_REG(A_PL_VF_WHOAMI), VF_PL_REG(A_PL_VF_WHOAMI),
VF_CIM_REG(A_CIM_VF_EXT_MAILBOX_CTRL),
VF_CIM_REG(A_CIM_VF_EXT_MAILBOX_STATUS),
FW_T4VF_MBDATA_BASE_ADDR,
FW_T4VF_MBDATA_BASE_ADDR +
((NUM_CIM_PF_MAILBOX_DATA_INSTANCES - 1) * 4),
};
static const unsigned int t5_reg_ranges[] = {
0x1008, 0x10c0,
0x10cc, 0x10f8,
0x1100, 0x1100,
0x110c, 0x1148,
0x1180, 0x1184,
0x1190, 0x1194,
0x11a0, 0x11a4,
0x11b0, 0x11b4,
0x11fc, 0x123c,
0x1280, 0x173c,
0x1800, 0x18fc,
0x3000, 0x3028,
0x3060, 0x30b0,
0x30b8, 0x30d8,
0x30e0, 0x30fc,
0x3140, 0x357c,
0x35a8, 0x35cc,
0x35ec, 0x35ec,
0x3600, 0x5624,
0x56cc, 0x56ec,
0x56f4, 0x5720,
0x5728, 0x575c,
0x580c, 0x5814,
0x5890, 0x589c,
0x58a4, 0x58ac,
0x58b8, 0x58bc,
0x5940, 0x59c8,
0x59d0, 0x59dc,
0x59fc, 0x5a18,
0x5a60, 0x5a70,
0x5a80, 0x5a9c,
0x5b94, 0x5bfc,
0x6000, 0x6020,
0x6028, 0x6040,
0x6058, 0x609c,
0x60a8, 0x614c,
0x7700, 0x7798,
0x77c0, 0x78fc,
0x7b00, 0x7b58,
0x7b60, 0x7b84,
0x7b8c, 0x7c54,
0x7d00, 0x7d38,
0x7d40, 0x7d80,
0x7d8c, 0x7ddc,
0x7de4, 0x7e04,
0x7e10, 0x7e1c,
0x7e24, 0x7e38,
0x7e40, 0x7e44,
0x7e4c, 0x7e78,
0x7e80, 0x7edc,
0x7ee8, 0x7efc,
0x8dc0, 0x8de0,
0x8df8, 0x8e04,
0x8e10, 0x8e84,
0x8ea0, 0x8f84,
0x8fc0, 0x9058,
0x9060, 0x9060,
0x9068, 0x90f8,
0x9400, 0x9408,
0x9410, 0x9470,
0x9600, 0x9600,
0x9608, 0x9638,
0x9640, 0x96f4,
0x9800, 0x9808,
0x9820, 0x983c,
0x9850, 0x9864,
0x9c00, 0x9c6c,
0x9c80, 0x9cec,
0x9d00, 0x9d6c,
0x9d80, 0x9dec,
0x9e00, 0x9e6c,
0x9e80, 0x9eec,
0x9f00, 0x9f6c,
0x9f80, 0xa020,
0xd004, 0xd004,
0xd010, 0xd03c,
0xdfc0, 0xdfe0,
0xe000, 0x1106c,
0x11074, 0x11088,
0x1109c, 0x1117c,
0x11190, 0x11204,
0x19040, 0x1906c,
0x19078, 0x19080,
0x1908c, 0x190e8,
0x190f0, 0x190f8,
0x19100, 0x19110,
0x19120, 0x19124,
0x19150, 0x19194,
0x1919c, 0x191b0,
0x191d0, 0x191e8,
0x19238, 0x19290,
0x193f8, 0x19428,
0x19430, 0x19444,
0x1944c, 0x1946c,
0x19474, 0x19474,
0x19490, 0x194cc,
0x194f0, 0x194f8,
0x19c00, 0x19c08,
0x19c10, 0x19c60,
0x19c94, 0x19ce4,
0x19cf0, 0x19d40,
0x19d50, 0x19d94,
0x19da0, 0x19de8,
0x19df0, 0x19e10,
0x19e50, 0x19e90,
0x19ea0, 0x19f24,
0x19f34, 0x19f34,
0x19f40, 0x19f50,
0x19f90, 0x19fb4,
0x19fc4, 0x19fe4,
0x1a000, 0x1a004,
0x1a010, 0x1a06c,
0x1a0b0, 0x1a0e4,
0x1a0ec, 0x1a0f8,
0x1a100, 0x1a108,
0x1a114, 0x1a120,
0x1a128, 0x1a130,
0x1a138, 0x1a138,
0x1a190, 0x1a1c4,
0x1a1fc, 0x1a1fc,
0x1e008, 0x1e00c,
0x1e040, 0x1e044,
0x1e04c, 0x1e04c,
0x1e284, 0x1e290,
0x1e2c0, 0x1e2c0,
0x1e2e0, 0x1e2e0,
0x1e300, 0x1e384,
0x1e3c0, 0x1e3c8,
0x1e408, 0x1e40c,
0x1e440, 0x1e444,
0x1e44c, 0x1e44c,
0x1e684, 0x1e690,
0x1e6c0, 0x1e6c0,
0x1e6e0, 0x1e6e0,
0x1e700, 0x1e784,
0x1e7c0, 0x1e7c8,
0x1e808, 0x1e80c,
0x1e840, 0x1e844,
0x1e84c, 0x1e84c,
0x1ea84, 0x1ea90,
0x1eac0, 0x1eac0,
0x1eae0, 0x1eae0,
0x1eb00, 0x1eb84,
0x1ebc0, 0x1ebc8,
0x1ec08, 0x1ec0c,
0x1ec40, 0x1ec44,
0x1ec4c, 0x1ec4c,
0x1ee84, 0x1ee90,
0x1eec0, 0x1eec0,
0x1eee0, 0x1eee0,
0x1ef00, 0x1ef84,
0x1efc0, 0x1efc8,
0x1f008, 0x1f00c,
0x1f040, 0x1f044,
0x1f04c, 0x1f04c,
0x1f284, 0x1f290,
0x1f2c0, 0x1f2c0,
0x1f2e0, 0x1f2e0,
0x1f300, 0x1f384,
0x1f3c0, 0x1f3c8,
0x1f408, 0x1f40c,
0x1f440, 0x1f444,
0x1f44c, 0x1f44c,
0x1f684, 0x1f690,
0x1f6c0, 0x1f6c0,
0x1f6e0, 0x1f6e0,
0x1f700, 0x1f784,
0x1f7c0, 0x1f7c8,
0x1f808, 0x1f80c,
0x1f840, 0x1f844,
0x1f84c, 0x1f84c,
0x1fa84, 0x1fa90,
0x1fac0, 0x1fac0,
0x1fae0, 0x1fae0,
0x1fb00, 0x1fb84,
0x1fbc0, 0x1fbc8,
0x1fc08, 0x1fc0c,
0x1fc40, 0x1fc44,
0x1fc4c, 0x1fc4c,
0x1fe84, 0x1fe90,
0x1fec0, 0x1fec0,
0x1fee0, 0x1fee0,
0x1ff00, 0x1ff84,
0x1ffc0, 0x1ffc8,
0x30000, 0x30030,
0x30100, 0x30144,
0x30190, 0x301a0,
0x301a8, 0x301b8,
0x301c4, 0x301c8,
0x301d0, 0x301d0,
0x30200, 0x30318,
0x30400, 0x304b4,
0x304c0, 0x3052c,
0x30540, 0x3061c,
0x30800, 0x30828,
0x30834, 0x30834,
0x308c0, 0x30908,
0x30910, 0x309ac,
0x30a00, 0x30a14,
0x30a1c, 0x30a2c,
0x30a44, 0x30a50,
0x30a74, 0x30a74,
0x30a7c, 0x30afc,
0x30b08, 0x30c24,
0x30d00, 0x30d00,
0x30d08, 0x30d14,
0x30d1c, 0x30d20,
0x30d3c, 0x30d3c,
0x30d48, 0x30d50,
0x31200, 0x3120c,
0x31220, 0x31220,
0x31240, 0x31240,
0x31600, 0x3160c,
0x31a00, 0x31a1c,
0x31e00, 0x31e20,
0x31e38, 0x31e3c,
0x31e80, 0x31e80,
0x31e88, 0x31ea8,
0x31eb0, 0x31eb4,
0x31ec8, 0x31ed4,
0x31fb8, 0x32004,
0x32200, 0x32200,
0x32208, 0x32240,
0x32248, 0x32280,
0x32288, 0x322c0,
0x322c8, 0x322fc,
0x32600, 0x32630,
0x32a00, 0x32abc,
0x32b00, 0x32b10,
0x32b20, 0x32b30,
0x32b40, 0x32b50,
0x32b60, 0x32b70,
0x33000, 0x33028,
0x33030, 0x33048,
0x33060, 0x33068,
0x33070, 0x3309c,
0x330f0, 0x33128,
0x33130, 0x33148,
0x33160, 0x33168,
0x33170, 0x3319c,
0x331f0, 0x33238,
0x33240, 0x33240,
0x33248, 0x33250,
0x3325c, 0x33264,
0x33270, 0x332b8,
0x332c0, 0x332e4,
0x332f8, 0x33338,
0x33340, 0x33340,
0x33348, 0x33350,
0x3335c, 0x33364,
0x33370, 0x333b8,
0x333c0, 0x333e4,
0x333f8, 0x33428,
0x33430, 0x33448,
0x33460, 0x33468,
0x33470, 0x3349c,
0x334f0, 0x33528,
0x33530, 0x33548,
0x33560, 0x33568,
0x33570, 0x3359c,
0x335f0, 0x33638,
0x33640, 0x33640,
0x33648, 0x33650,
0x3365c, 0x33664,
0x33670, 0x336b8,
0x336c0, 0x336e4,
0x336f8, 0x33738,
0x33740, 0x33740,
0x33748, 0x33750,
0x3375c, 0x33764,
0x33770, 0x337b8,
0x337c0, 0x337e4,
0x337f8, 0x337fc,
0x33814, 0x33814,
0x3382c, 0x3382c,
0x33880, 0x3388c,
0x338e8, 0x338ec,
0x33900, 0x33928,
0x33930, 0x33948,
0x33960, 0x33968,
0x33970, 0x3399c,
0x339f0, 0x33a38,
0x33a40, 0x33a40,
0x33a48, 0x33a50,
0x33a5c, 0x33a64,
0x33a70, 0x33ab8,
0x33ac0, 0x33ae4,
0x33af8, 0x33b10,
0x33b28, 0x33b28,
0x33b3c, 0x33b50,
0x33bf0, 0x33c10,
0x33c28, 0x33c28,
0x33c3c, 0x33c50,
0x33cf0, 0x33cfc,
0x34000, 0x34030,
0x34100, 0x34144,
0x34190, 0x341a0,
0x341a8, 0x341b8,
0x341c4, 0x341c8,
0x341d0, 0x341d0,
0x34200, 0x34318,
0x34400, 0x344b4,
0x344c0, 0x3452c,
0x34540, 0x3461c,
0x34800, 0x34828,
0x34834, 0x34834,
0x348c0, 0x34908,
0x34910, 0x349ac,
0x34a00, 0x34a14,
0x34a1c, 0x34a2c,
0x34a44, 0x34a50,
0x34a74, 0x34a74,
0x34a7c, 0x34afc,
0x34b08, 0x34c24,
0x34d00, 0x34d00,
0x34d08, 0x34d14,
0x34d1c, 0x34d20,
0x34d3c, 0x34d3c,
0x34d48, 0x34d50,
0x35200, 0x3520c,
0x35220, 0x35220,
0x35240, 0x35240,
0x35600, 0x3560c,
0x35a00, 0x35a1c,
0x35e00, 0x35e20,
0x35e38, 0x35e3c,
0x35e80, 0x35e80,
0x35e88, 0x35ea8,
0x35eb0, 0x35eb4,
0x35ec8, 0x35ed4,
0x35fb8, 0x36004,
0x36200, 0x36200,
0x36208, 0x36240,
0x36248, 0x36280,
0x36288, 0x362c0,
0x362c8, 0x362fc,
0x36600, 0x36630,
0x36a00, 0x36abc,
0x36b00, 0x36b10,
0x36b20, 0x36b30,
0x36b40, 0x36b50,
0x36b60, 0x36b70,
0x37000, 0x37028,
0x37030, 0x37048,
0x37060, 0x37068,
0x37070, 0x3709c,
0x370f0, 0x37128,
0x37130, 0x37148,
0x37160, 0x37168,
0x37170, 0x3719c,
0x371f0, 0x37238,
0x37240, 0x37240,
0x37248, 0x37250,
0x3725c, 0x37264,
0x37270, 0x372b8,
0x372c0, 0x372e4,
0x372f8, 0x37338,
0x37340, 0x37340,
0x37348, 0x37350,
0x3735c, 0x37364,
0x37370, 0x373b8,
0x373c0, 0x373e4,
0x373f8, 0x37428,
0x37430, 0x37448,
0x37460, 0x37468,
0x37470, 0x3749c,
0x374f0, 0x37528,
0x37530, 0x37548,
0x37560, 0x37568,
0x37570, 0x3759c,
0x375f0, 0x37638,
0x37640, 0x37640,
0x37648, 0x37650,
0x3765c, 0x37664,
0x37670, 0x376b8,
0x376c0, 0x376e4,
0x376f8, 0x37738,
0x37740, 0x37740,
0x37748, 0x37750,
0x3775c, 0x37764,
0x37770, 0x377b8,
0x377c0, 0x377e4,
0x377f8, 0x377fc,
0x37814, 0x37814,
0x3782c, 0x3782c,
0x37880, 0x3788c,
0x378e8, 0x378ec,
0x37900, 0x37928,
0x37930, 0x37948,
0x37960, 0x37968,
0x37970, 0x3799c,
0x379f0, 0x37a38,
0x37a40, 0x37a40,
0x37a48, 0x37a50,
0x37a5c, 0x37a64,
0x37a70, 0x37ab8,
0x37ac0, 0x37ae4,
0x37af8, 0x37b10,
0x37b28, 0x37b28,
0x37b3c, 0x37b50,
0x37bf0, 0x37c10,
0x37c28, 0x37c28,
0x37c3c, 0x37c50,
0x37cf0, 0x37cfc,
0x38000, 0x38030,
0x38100, 0x38144,
0x38190, 0x381a0,
0x381a8, 0x381b8,
0x381c4, 0x381c8,
0x381d0, 0x381d0,
0x38200, 0x38318,
0x38400, 0x384b4,
0x384c0, 0x3852c,
0x38540, 0x3861c,
0x38800, 0x38828,
0x38834, 0x38834,
0x388c0, 0x38908,
0x38910, 0x389ac,
0x38a00, 0x38a14,
0x38a1c, 0x38a2c,
0x38a44, 0x38a50,
0x38a74, 0x38a74,
0x38a7c, 0x38afc,
0x38b08, 0x38c24,
0x38d00, 0x38d00,
0x38d08, 0x38d14,
0x38d1c, 0x38d20,
0x38d3c, 0x38d3c,
0x38d48, 0x38d50,
0x39200, 0x3920c,
0x39220, 0x39220,
0x39240, 0x39240,
0x39600, 0x3960c,
0x39a00, 0x39a1c,
0x39e00, 0x39e20,
0x39e38, 0x39e3c,
0x39e80, 0x39e80,
0x39e88, 0x39ea8,
0x39eb0, 0x39eb4,
0x39ec8, 0x39ed4,
0x39fb8, 0x3a004,
0x3a200, 0x3a200,
0x3a208, 0x3a240,
0x3a248, 0x3a280,
0x3a288, 0x3a2c0,
0x3a2c8, 0x3a2fc,
0x3a600, 0x3a630,
0x3aa00, 0x3aabc,
0x3ab00, 0x3ab10,
0x3ab20, 0x3ab30,
0x3ab40, 0x3ab50,
0x3ab60, 0x3ab70,
0x3b000, 0x3b028,
0x3b030, 0x3b048,
0x3b060, 0x3b068,
0x3b070, 0x3b09c,
0x3b0f0, 0x3b128,
0x3b130, 0x3b148,
0x3b160, 0x3b168,
0x3b170, 0x3b19c,
0x3b1f0, 0x3b238,
0x3b240, 0x3b240,
0x3b248, 0x3b250,
0x3b25c, 0x3b264,
0x3b270, 0x3b2b8,
0x3b2c0, 0x3b2e4,
0x3b2f8, 0x3b338,
0x3b340, 0x3b340,
0x3b348, 0x3b350,
0x3b35c, 0x3b364,
0x3b370, 0x3b3b8,
0x3b3c0, 0x3b3e4,
0x3b3f8, 0x3b428,
0x3b430, 0x3b448,
0x3b460, 0x3b468,
0x3b470, 0x3b49c,
0x3b4f0, 0x3b528,
0x3b530, 0x3b548,
0x3b560, 0x3b568,
0x3b570, 0x3b59c,
0x3b5f0, 0x3b638,
0x3b640, 0x3b640,
0x3b648, 0x3b650,
0x3b65c, 0x3b664,
0x3b670, 0x3b6b8,
0x3b6c0, 0x3b6e4,
0x3b6f8, 0x3b738,
0x3b740, 0x3b740,
0x3b748, 0x3b750,
0x3b75c, 0x3b764,
0x3b770, 0x3b7b8,
0x3b7c0, 0x3b7e4,
0x3b7f8, 0x3b7fc,
0x3b814, 0x3b814,
0x3b82c, 0x3b82c,
0x3b880, 0x3b88c,
0x3b8e8, 0x3b8ec,
0x3b900, 0x3b928,
0x3b930, 0x3b948,
0x3b960, 0x3b968,
0x3b970, 0x3b99c,
0x3b9f0, 0x3ba38,
0x3ba40, 0x3ba40,
0x3ba48, 0x3ba50,
0x3ba5c, 0x3ba64,
0x3ba70, 0x3bab8,
0x3bac0, 0x3bae4,
0x3baf8, 0x3bb10,
0x3bb28, 0x3bb28,
0x3bb3c, 0x3bb50,
0x3bbf0, 0x3bc10,
0x3bc28, 0x3bc28,
0x3bc3c, 0x3bc50,
0x3bcf0, 0x3bcfc,
0x3c000, 0x3c030,
0x3c100, 0x3c144,
0x3c190, 0x3c1a0,
0x3c1a8, 0x3c1b8,
0x3c1c4, 0x3c1c8,
0x3c1d0, 0x3c1d0,
0x3c200, 0x3c318,
0x3c400, 0x3c4b4,
0x3c4c0, 0x3c52c,
0x3c540, 0x3c61c,
0x3c800, 0x3c828,
0x3c834, 0x3c834,
0x3c8c0, 0x3c908,
0x3c910, 0x3c9ac,
0x3ca00, 0x3ca14,
0x3ca1c, 0x3ca2c,
0x3ca44, 0x3ca50,
0x3ca74, 0x3ca74,
0x3ca7c, 0x3cafc,
0x3cb08, 0x3cc24,
0x3cd00, 0x3cd00,
0x3cd08, 0x3cd14,
0x3cd1c, 0x3cd20,
0x3cd3c, 0x3cd3c,
0x3cd48, 0x3cd50,
0x3d200, 0x3d20c,
0x3d220, 0x3d220,
0x3d240, 0x3d240,
0x3d600, 0x3d60c,
0x3da00, 0x3da1c,
0x3de00, 0x3de20,
0x3de38, 0x3de3c,
0x3de80, 0x3de80,
0x3de88, 0x3dea8,
0x3deb0, 0x3deb4,
0x3dec8, 0x3ded4,
0x3dfb8, 0x3e004,
0x3e200, 0x3e200,
0x3e208, 0x3e240,
0x3e248, 0x3e280,
0x3e288, 0x3e2c0,
0x3e2c8, 0x3e2fc,
0x3e600, 0x3e630,
0x3ea00, 0x3eabc,
0x3eb00, 0x3eb10,
0x3eb20, 0x3eb30,
0x3eb40, 0x3eb50,
0x3eb60, 0x3eb70,
0x3f000, 0x3f028,
0x3f030, 0x3f048,
0x3f060, 0x3f068,
0x3f070, 0x3f09c,
0x3f0f0, 0x3f128,
0x3f130, 0x3f148,
0x3f160, 0x3f168,
0x3f170, 0x3f19c,
0x3f1f0, 0x3f238,
0x3f240, 0x3f240,
0x3f248, 0x3f250,
0x3f25c, 0x3f264,
0x3f270, 0x3f2b8,
0x3f2c0, 0x3f2e4,
0x3f2f8, 0x3f338,
0x3f340, 0x3f340,
0x3f348, 0x3f350,
0x3f35c, 0x3f364,
0x3f370, 0x3f3b8,
0x3f3c0, 0x3f3e4,
0x3f3f8, 0x3f428,
0x3f430, 0x3f448,
0x3f460, 0x3f468,
0x3f470, 0x3f49c,
0x3f4f0, 0x3f528,
0x3f530, 0x3f548,
0x3f560, 0x3f568,
0x3f570, 0x3f59c,
0x3f5f0, 0x3f638,
0x3f640, 0x3f640,
0x3f648, 0x3f650,
0x3f65c, 0x3f664,
0x3f670, 0x3f6b8,
0x3f6c0, 0x3f6e4,
0x3f6f8, 0x3f738,
0x3f740, 0x3f740,
0x3f748, 0x3f750,
0x3f75c, 0x3f764,
0x3f770, 0x3f7b8,
0x3f7c0, 0x3f7e4,
0x3f7f8, 0x3f7fc,
0x3f814, 0x3f814,
0x3f82c, 0x3f82c,
0x3f880, 0x3f88c,
0x3f8e8, 0x3f8ec,
0x3f900, 0x3f928,
0x3f930, 0x3f948,
0x3f960, 0x3f968,
0x3f970, 0x3f99c,
0x3f9f0, 0x3fa38,
0x3fa40, 0x3fa40,
0x3fa48, 0x3fa50,
0x3fa5c, 0x3fa64,
0x3fa70, 0x3fab8,
0x3fac0, 0x3fae4,
0x3faf8, 0x3fb10,
0x3fb28, 0x3fb28,
0x3fb3c, 0x3fb50,
0x3fbf0, 0x3fc10,
0x3fc28, 0x3fc28,
0x3fc3c, 0x3fc50,
0x3fcf0, 0x3fcfc,
0x40000, 0x4000c,
0x40040, 0x40050,
0x40060, 0x40068,
0x4007c, 0x4008c,
0x40094, 0x400b0,
0x400c0, 0x40144,
0x40180, 0x4018c,
0x40200, 0x40254,
0x40260, 0x40264,
0x40270, 0x40288,
0x40290, 0x40298,
0x402ac, 0x402c8,
0x402d0, 0x402e0,
0x402f0, 0x402f0,
0x40300, 0x4033c,
0x403f8, 0x403fc,
0x41304, 0x413c4,
0x41400, 0x4140c,
0x41414, 0x4141c,
0x41480, 0x414d0,
0x44000, 0x44054,
0x4405c, 0x44078,
0x440c0, 0x44174,
0x44180, 0x441ac,
0x441b4, 0x441b8,
0x441c0, 0x44254,
0x4425c, 0x44278,
0x442c0, 0x44374,
0x44380, 0x443ac,
0x443b4, 0x443b8,
0x443c0, 0x44454,
0x4445c, 0x44478,
0x444c0, 0x44574,
0x44580, 0x445ac,
0x445b4, 0x445b8,
0x445c0, 0x44654,
0x4465c, 0x44678,
0x446c0, 0x44774,
0x44780, 0x447ac,
0x447b4, 0x447b8,
0x447c0, 0x44854,
0x4485c, 0x44878,
0x448c0, 0x44974,
0x44980, 0x449ac,
0x449b4, 0x449b8,
0x449c0, 0x449fc,
0x45000, 0x45004,
0x45010, 0x45030,
0x45040, 0x45060,
0x45068, 0x45068,
0x45080, 0x45084,
0x450a0, 0x450b0,
0x45200, 0x45204,
0x45210, 0x45230,
0x45240, 0x45260,
0x45268, 0x45268,
0x45280, 0x45284,
0x452a0, 0x452b0,
0x460c0, 0x460e4,
0x47000, 0x4703c,
0x47044, 0x4708c,
0x47200, 0x47250,
0x47400, 0x47408,
0x47414, 0x47420,
0x47600, 0x47618,
0x47800, 0x47814,
0x48000, 0x4800c,
0x48040, 0x48050,
0x48060, 0x48068,
0x4807c, 0x4808c,
0x48094, 0x480b0,
0x480c0, 0x48144,
0x48180, 0x4818c,
0x48200, 0x48254,
0x48260, 0x48264,
0x48270, 0x48288,
0x48290, 0x48298,
0x482ac, 0x482c8,
0x482d0, 0x482e0,
0x482f0, 0x482f0,
0x48300, 0x4833c,
0x483f8, 0x483fc,
0x49304, 0x493c4,
0x49400, 0x4940c,
0x49414, 0x4941c,
0x49480, 0x494d0,
0x4c000, 0x4c054,
0x4c05c, 0x4c078,
0x4c0c0, 0x4c174,
0x4c180, 0x4c1ac,
0x4c1b4, 0x4c1b8,
0x4c1c0, 0x4c254,
0x4c25c, 0x4c278,
0x4c2c0, 0x4c374,
0x4c380, 0x4c3ac,
0x4c3b4, 0x4c3b8,
0x4c3c0, 0x4c454,
0x4c45c, 0x4c478,
0x4c4c0, 0x4c574,
0x4c580, 0x4c5ac,
0x4c5b4, 0x4c5b8,
0x4c5c0, 0x4c654,
0x4c65c, 0x4c678,
0x4c6c0, 0x4c774,
0x4c780, 0x4c7ac,
0x4c7b4, 0x4c7b8,
0x4c7c0, 0x4c854,
0x4c85c, 0x4c878,
0x4c8c0, 0x4c974,
0x4c980, 0x4c9ac,
0x4c9b4, 0x4c9b8,
0x4c9c0, 0x4c9fc,
0x4d000, 0x4d004,
0x4d010, 0x4d030,
0x4d040, 0x4d060,
0x4d068, 0x4d068,
0x4d080, 0x4d084,
0x4d0a0, 0x4d0b0,
0x4d200, 0x4d204,
0x4d210, 0x4d230,
0x4d240, 0x4d260,
0x4d268, 0x4d268,
0x4d280, 0x4d284,
0x4d2a0, 0x4d2b0,
0x4e0c0, 0x4e0e4,
0x4f000, 0x4f03c,
0x4f044, 0x4f08c,
0x4f200, 0x4f250,
0x4f400, 0x4f408,
0x4f414, 0x4f420,
0x4f600, 0x4f618,
0x4f800, 0x4f814,
0x50000, 0x50084,
0x50090, 0x500cc,
0x50400, 0x50400,
0x50800, 0x50884,
0x50890, 0x508cc,
0x50c00, 0x50c00,
0x51000, 0x5101c,
0x51300, 0x51308,
};
static const unsigned int t5vf_reg_ranges[] = {
VF_SGE_REG(A_SGE_VF_KDOORBELL), VF_SGE_REG(A_SGE_VF_GTS),
VF_MPS_REG(A_MPS_VF_CTL),
VF_MPS_REG(A_MPS_VF_STAT_RX_VF_ERR_FRAMES_H),
VF_PL_REG(A_PL_VF_WHOAMI), VF_PL_REG(A_PL_VF_REVISION),
VF_CIM_REG(A_CIM_VF_EXT_MAILBOX_CTRL),
VF_CIM_REG(A_CIM_VF_EXT_MAILBOX_STATUS),
FW_T4VF_MBDATA_BASE_ADDR,
FW_T4VF_MBDATA_BASE_ADDR +
((NUM_CIM_PF_MAILBOX_DATA_INSTANCES - 1) * 4),
};
static const unsigned int t6_reg_ranges[] = {
0x1008, 0x101c,
0x1024, 0x10a8,
0x10b4, 0x10f8,
0x1100, 0x1114,
0x111c, 0x112c,
0x1138, 0x113c,
0x1144, 0x114c,
0x1180, 0x1184,
0x1190, 0x1194,
0x11a0, 0x11a4,
0x11b0, 0x11b4,
0x11fc, 0x1274,
0x1280, 0x133c,
0x1800, 0x18fc,
0x3000, 0x302c,
0x3060, 0x30b0,
0x30b8, 0x30d8,
0x30e0, 0x30fc,
0x3140, 0x357c,
0x35a8, 0x35cc,
0x35ec, 0x35ec,
0x3600, 0x5624,
0x56cc, 0x56ec,
0x56f4, 0x5720,
0x5728, 0x575c,
0x580c, 0x5814,
0x5890, 0x589c,
0x58a4, 0x58ac,
0x58b8, 0x58bc,
0x5940, 0x595c,
0x5980, 0x598c,
0x59b0, 0x59c8,
0x59d0, 0x59dc,
0x59fc, 0x5a18,
0x5a60, 0x5a6c,
0x5a80, 0x5a8c,
0x5a94, 0x5a9c,
0x5b94, 0x5bfc,
0x5c10, 0x5e48,
0x5e50, 0x5e94,
0x5ea0, 0x5eb0,
0x5ec0, 0x5ec0,
0x5ec8, 0x5ed0,
0x5ee0, 0x5ee0,
0x5ef0, 0x5ef0,
0x5f00, 0x5f00,
0x6000, 0x6020,
0x6028, 0x6040,
0x6058, 0x609c,
0x60a8, 0x619c,
0x7700, 0x7798,
0x77c0, 0x7880,
0x78cc, 0x78fc,
0x7b00, 0x7b58,
0x7b60, 0x7b84,
0x7b8c, 0x7c54,
0x7d00, 0x7d38,
0x7d40, 0x7d84,
0x7d8c, 0x7ddc,
0x7de4, 0x7e04,
0x7e10, 0x7e1c,
0x7e24, 0x7e38,
0x7e40, 0x7e44,
0x7e4c, 0x7e78,
0x7e80, 0x7edc,
0x7ee8, 0x7efc,
0x8dc0, 0x8de4,
0x8df8, 0x8e04,
0x8e10, 0x8e84,
0x8ea0, 0x8f88,
0x8fb8, 0x9058,
0x9060, 0x9060,
0x9068, 0x90f8,
0x9100, 0x9124,
0x9400, 0x9470,
0x9600, 0x9600,
0x9608, 0x9638,
0x9640, 0x9704,
0x9710, 0x971c,
0x9800, 0x9808,
0x9820, 0x983c,
0x9850, 0x9864,
0x9c00, 0x9c6c,
0x9c80, 0x9cec,
0x9d00, 0x9d6c,
0x9d80, 0x9dec,
0x9e00, 0x9e6c,
0x9e80, 0x9eec,
0x9f00, 0x9f6c,
0x9f80, 0xa020,
0xd004, 0xd03c,
0xd100, 0xd118,
0xd200, 0xd214,
0xd220, 0xd234,
0xd240, 0xd254,
0xd260, 0xd274,
0xd280, 0xd294,
0xd2a0, 0xd2b4,
0xd2c0, 0xd2d4,
0xd2e0, 0xd2f4,
0xd300, 0xd31c,
0xdfc0, 0xdfe0,
0xe000, 0xf008,
0xf010, 0xf018,
0xf020, 0xf028,
0x11000, 0x11014,
0x11048, 0x1106c,
0x11074, 0x11088,
0x11098, 0x11120,
0x1112c, 0x1117c,
0x11190, 0x112e0,
0x11300, 0x1130c,
0x12000, 0x1206c,
0x19040, 0x1906c,
0x19078, 0x19080,
0x1908c, 0x190e8,
0x190f0, 0x190f8,
0x19100, 0x19110,
0x19120, 0x19124,
0x19150, 0x19194,
0x1919c, 0x191b0,
0x191d0, 0x191e8,
0x19238, 0x19290,
0x192a4, 0x192b0,
0x192bc, 0x192bc,
0x19348, 0x1934c,
0x193f8, 0x19418,
0x19420, 0x19428,
0x19430, 0x19444,
0x1944c, 0x1946c,
0x19474, 0x19474,
0x19490, 0x194cc,
0x194f0, 0x194f8,
0x19c00, 0x19c48,
0x19c50, 0x19c80,
0x19c94, 0x19c98,
0x19ca0, 0x19cbc,
0x19ce4, 0x19ce4,
0x19cf0, 0x19cf8,
0x19d00, 0x19d28,
0x19d50, 0x19d78,
0x19d94, 0x19d98,
0x19da0, 0x19dc8,
0x19df0, 0x19e10,
0x19e50, 0x19e6c,
0x19ea0, 0x19ebc,
0x19ec4, 0x19ef4,
0x19f04, 0x19f2c,
0x19f34, 0x19f34,
0x19f40, 0x19f50,
0x19f90, 0x19fac,
0x19fc4, 0x19fc8,
0x19fd0, 0x19fe4,
0x1a000, 0x1a004,
0x1a010, 0x1a06c,
0x1a0b0, 0x1a0e4,
0x1a0ec, 0x1a0f8,
0x1a100, 0x1a108,
0x1a114, 0x1a120,
0x1a128, 0x1a130,
0x1a138, 0x1a138,
0x1a190, 0x1a1c4,
0x1a1fc, 0x1a1fc,
0x1e008, 0x1e00c,
0x1e040, 0x1e044,
0x1e04c, 0x1e04c,
0x1e284, 0x1e290,
0x1e2c0, 0x1e2c0,
0x1e2e0, 0x1e2e0,
0x1e300, 0x1e384,
0x1e3c0, 0x1e3c8,
0x1e408, 0x1e40c,
0x1e440, 0x1e444,
0x1e44c, 0x1e44c,
0x1e684, 0x1e690,
0x1e6c0, 0x1e6c0,
0x1e6e0, 0x1e6e0,
0x1e700, 0x1e784,
0x1e7c0, 0x1e7c8,
0x1e808, 0x1e80c,
0x1e840, 0x1e844,
0x1e84c, 0x1e84c,
0x1ea84, 0x1ea90,
0x1eac0, 0x1eac0,
0x1eae0, 0x1eae0,
0x1eb00, 0x1eb84,
0x1ebc0, 0x1ebc8,
0x1ec08, 0x1ec0c,
0x1ec40, 0x1ec44,
0x1ec4c, 0x1ec4c,
0x1ee84, 0x1ee90,
0x1eec0, 0x1eec0,
0x1eee0, 0x1eee0,
0x1ef00, 0x1ef84,
0x1efc0, 0x1efc8,
0x1f008, 0x1f00c,
0x1f040, 0x1f044,
0x1f04c, 0x1f04c,
0x1f284, 0x1f290,
0x1f2c0, 0x1f2c0,
0x1f2e0, 0x1f2e0,
0x1f300, 0x1f384,
0x1f3c0, 0x1f3c8,
0x1f408, 0x1f40c,
0x1f440, 0x1f444,
0x1f44c, 0x1f44c,
0x1f684, 0x1f690,
0x1f6c0, 0x1f6c0,
0x1f6e0, 0x1f6e0,
0x1f700, 0x1f784,
0x1f7c0, 0x1f7c8,
0x1f808, 0x1f80c,
0x1f840, 0x1f844,
0x1f84c, 0x1f84c,
0x1fa84, 0x1fa90,
0x1fac0, 0x1fac0,
0x1fae0, 0x1fae0,
0x1fb00, 0x1fb84,
0x1fbc0, 0x1fbc8,
0x1fc08, 0x1fc0c,
0x1fc40, 0x1fc44,
0x1fc4c, 0x1fc4c,
0x1fe84, 0x1fe90,
0x1fec0, 0x1fec0,
0x1fee0, 0x1fee0,
0x1ff00, 0x1ff84,
0x1ffc0, 0x1ffc8,
0x30000, 0x30030,
0x30100, 0x30168,
0x30190, 0x301a0,
0x301a8, 0x301b8,
0x301c4, 0x301c8,
0x301d0, 0x301d0,
0x30200, 0x30320,
0x30400, 0x304b4,
0x304c0, 0x3052c,
0x30540, 0x3061c,
0x30800, 0x308a0,
0x308c0, 0x30908,
0x30910, 0x309b8,
0x30a00, 0x30a04,
0x30a0c, 0x30a14,
0x30a1c, 0x30a2c,
0x30a44, 0x30a50,
0x30a74, 0x30a74,
0x30a7c, 0x30afc,
0x30b08, 0x30c24,
0x30d00, 0x30d14,
0x30d1c, 0x30d3c,
0x30d44, 0x30d4c,
0x30d54, 0x30d74,
0x30d7c, 0x30d7c,
0x30de0, 0x30de0,
0x30e00, 0x30ed4,
0x30f00, 0x30fa4,
0x30fc0, 0x30fc4,
0x31000, 0x31004,
0x31080, 0x310fc,
0x31208, 0x31220,
0x3123c, 0x31254,
0x31300, 0x31300,
0x31308, 0x3131c,
0x31338, 0x3133c,
0x31380, 0x31380,
0x31388, 0x313a8,
0x313b4, 0x313b4,
0x31400, 0x31420,
0x31438, 0x3143c,
0x31480, 0x31480,
0x314a8, 0x314a8,
0x314b0, 0x314b4,
0x314c8, 0x314d4,
0x31a40, 0x31a4c,
0x31af0, 0x31b20,
0x31b38, 0x31b3c,
0x31b80, 0x31b80,
0x31ba8, 0x31ba8,
0x31bb0, 0x31bb4,
0x31bc8, 0x31bd4,
0x32140, 0x3218c,
0x321f0, 0x321f4,
0x32200, 0x32200,
0x32218, 0x32218,
0x32400, 0x32400,
0x32408, 0x3241c,
0x32618, 0x32620,
0x32664, 0x32664,
0x326a8, 0x326a8,
0x326ec, 0x326ec,
0x32a00, 0x32abc,
0x32b00, 0x32b18,
0x32b20, 0x32b38,
0x32b40, 0x32b58,
0x32b60, 0x32b78,
0x32c00, 0x32c00,
0x32c08, 0x32c3c,
0x33000, 0x3302c,
0x33034, 0x33050,
0x33058, 0x33058,
0x33060, 0x3308c,
0x3309c, 0x330ac,
0x330c0, 0x330c0,
0x330c8, 0x330d0,
0x330d8, 0x330e0,
0x330ec, 0x3312c,
0x33134, 0x33150,
0x33158, 0x33158,
0x33160, 0x3318c,
0x3319c, 0x331ac,
0x331c0, 0x331c0,
0x331c8, 0x331d0,
0x331d8, 0x331e0,
0x331ec, 0x33290,
0x33298, 0x332c4,
0x332e4, 0x33390,
0x33398, 0x333c4,
0x333e4, 0x3342c,
0x33434, 0x33450,
0x33458, 0x33458,
0x33460, 0x3348c,
0x3349c, 0x334ac,
0x334c0, 0x334c0,
0x334c8, 0x334d0,
0x334d8, 0x334e0,
0x334ec, 0x3352c,
0x33534, 0x33550,
0x33558, 0x33558,
0x33560, 0x3358c,
0x3359c, 0x335ac,
0x335c0, 0x335c0,
0x335c8, 0x335d0,
0x335d8, 0x335e0,
0x335ec, 0x33690,
0x33698, 0x336c4,
0x336e4, 0x33790,
0x33798, 0x337c4,
0x337e4, 0x337fc,
0x33814, 0x33814,
0x33854, 0x33868,
0x33880, 0x3388c,
0x338c0, 0x338d0,
0x338e8, 0x338ec,
0x33900, 0x3392c,
0x33934, 0x33950,
0x33958, 0x33958,
0x33960, 0x3398c,
0x3399c, 0x339ac,
0x339c0, 0x339c0,
0x339c8, 0x339d0,
0x339d8, 0x339e0,
0x339ec, 0x33a90,
0x33a98, 0x33ac4,
0x33ae4, 0x33b10,
0x33b24, 0x33b28,
0x33b38, 0x33b50,
0x33bf0, 0x33c10,
0x33c24, 0x33c28,
0x33c38, 0x33c50,
0x33cf0, 0x33cfc,
0x34000, 0x34030,
0x34100, 0x34168,
0x34190, 0x341a0,
0x341a8, 0x341b8,
0x341c4, 0x341c8,
0x341d0, 0x341d0,
0x34200, 0x34320,
0x34400, 0x344b4,
0x344c0, 0x3452c,
0x34540, 0x3461c,
0x34800, 0x348a0,
0x348c0, 0x34908,
0x34910, 0x349b8,
0x34a00, 0x34a04,
0x34a0c, 0x34a14,
0x34a1c, 0x34a2c,
0x34a44, 0x34a50,
0x34a74, 0x34a74,
0x34a7c, 0x34afc,
0x34b08, 0x34c24,
0x34d00, 0x34d14,
0x34d1c, 0x34d3c,
0x34d44, 0x34d4c,
0x34d54, 0x34d74,
0x34d7c, 0x34d7c,
0x34de0, 0x34de0,
0x34e00, 0x34ed4,
0x34f00, 0x34fa4,
0x34fc0, 0x34fc4,
0x35000, 0x35004,
0x35080, 0x350fc,
0x35208, 0x35220,
0x3523c, 0x35254,
0x35300, 0x35300,
0x35308, 0x3531c,
0x35338, 0x3533c,
0x35380, 0x35380,
0x35388, 0x353a8,
0x353b4, 0x353b4,
0x35400, 0x35420,
0x35438, 0x3543c,
0x35480, 0x35480,
0x354a8, 0x354a8,
0x354b0, 0x354b4,
0x354c8, 0x354d4,
0x35a40, 0x35a4c,
0x35af0, 0x35b20,
0x35b38, 0x35b3c,
0x35b80, 0x35b80,
0x35ba8, 0x35ba8,
0x35bb0, 0x35bb4,
0x35bc8, 0x35bd4,
0x36140, 0x3618c,
0x361f0, 0x361f4,
0x36200, 0x36200,
0x36218, 0x36218,
0x36400, 0x36400,
0x36408, 0x3641c,
0x36618, 0x36620,
0x36664, 0x36664,
0x366a8, 0x366a8,
0x366ec, 0x366ec,
0x36a00, 0x36abc,
0x36b00, 0x36b18,
0x36b20, 0x36b38,
0x36b40, 0x36b58,
0x36b60, 0x36b78,
0x36c00, 0x36c00,
0x36c08, 0x36c3c,
0x37000, 0x3702c,
0x37034, 0x37050,
0x37058, 0x37058,
0x37060, 0x3708c,
0x3709c, 0x370ac,
0x370c0, 0x370c0,
0x370c8, 0x370d0,
0x370d8, 0x370e0,
0x370ec, 0x3712c,
0x37134, 0x37150,
0x37158, 0x37158,
0x37160, 0x3718c,
0x3719c, 0x371ac,
0x371c0, 0x371c0,
0x371c8, 0x371d0,
0x371d8, 0x371e0,
0x371ec, 0x37290,
0x37298, 0x372c4,
0x372e4, 0x37390,
0x37398, 0x373c4,
0x373e4, 0x3742c,
0x37434, 0x37450,
0x37458, 0x37458,
0x37460, 0x3748c,
0x3749c, 0x374ac,
0x374c0, 0x374c0,
0x374c8, 0x374d0,
0x374d8, 0x374e0,
0x374ec, 0x3752c,
0x37534, 0x37550,
0x37558, 0x37558,
0x37560, 0x3758c,
0x3759c, 0x375ac,
0x375c0, 0x375c0,
0x375c8, 0x375d0,
0x375d8, 0x375e0,
0x375ec, 0x37690,
0x37698, 0x376c4,
0x376e4, 0x37790,
0x37798, 0x377c4,
0x377e4, 0x377fc,
0x37814, 0x37814,
0x37854, 0x37868,
0x37880, 0x3788c,
0x378c0, 0x378d0,
0x378e8, 0x378ec,
0x37900, 0x3792c,
0x37934, 0x37950,
0x37958, 0x37958,
0x37960, 0x3798c,
0x3799c, 0x379ac,
0x379c0, 0x379c0,
0x379c8, 0x379d0,
0x379d8, 0x379e0,
0x379ec, 0x37a90,
0x37a98, 0x37ac4,
0x37ae4, 0x37b10,
0x37b24, 0x37b28,
0x37b38, 0x37b50,
0x37bf0, 0x37c10,
0x37c24, 0x37c28,
0x37c38, 0x37c50,
0x37cf0, 0x37cfc,
0x40040, 0x40040,
0x40080, 0x40084,
0x40100, 0x40100,
0x40140, 0x401bc,
0x40200, 0x40214,
0x40228, 0x40228,
0x40240, 0x40258,
0x40280, 0x40280,
0x40304, 0x40304,
0x40330, 0x4033c,
0x41304, 0x413c8,
0x413d0, 0x413dc,
0x413f0, 0x413f0,
0x41400, 0x4140c,
0x41414, 0x4141c,
0x41480, 0x414d0,
0x44000, 0x4407c,
0x440c0, 0x441ac,
0x441b4, 0x4427c,
0x442c0, 0x443ac,
0x443b4, 0x4447c,
0x444c0, 0x445ac,
0x445b4, 0x4467c,
0x446c0, 0x447ac,
0x447b4, 0x4487c,
0x448c0, 0x449ac,
0x449b4, 0x44a7c,
0x44ac0, 0x44bac,
0x44bb4, 0x44c7c,
0x44cc0, 0x44dac,
0x44db4, 0x44e7c,
0x44ec0, 0x44fac,
0x44fb4, 0x4507c,
0x450c0, 0x451ac,
0x451b4, 0x451fc,
0x45800, 0x45804,
0x45810, 0x45830,
0x45840, 0x45860,
0x45868, 0x45868,
0x45880, 0x45884,
0x458a0, 0x458b0,
0x45a00, 0x45a04,
0x45a10, 0x45a30,
0x45a40, 0x45a60,
0x45a68, 0x45a68,
0x45a80, 0x45a84,
0x45aa0, 0x45ab0,
0x460c0, 0x460e4,
0x47000, 0x4703c,
0x47044, 0x4708c,
0x47200, 0x47250,
0x47400, 0x47408,
0x47414, 0x47420,
0x47600, 0x47618,
0x47800, 0x47814,
0x47820, 0x4782c,
0x50000, 0x50084,
0x50090, 0x500cc,
0x50300, 0x50384,
0x50400, 0x50400,
0x50800, 0x50884,
0x50890, 0x508cc,
0x50b00, 0x50b84,
0x50c00, 0x50c00,
0x51000, 0x51020,
0x51028, 0x510b0,
0x51300, 0x51324,
};
static const unsigned int t6vf_reg_ranges[] = {
VF_SGE_REG(A_SGE_VF_KDOORBELL), VF_SGE_REG(A_SGE_VF_GTS),
VF_MPS_REG(A_MPS_VF_CTL),
VF_MPS_REG(A_MPS_VF_STAT_RX_VF_ERR_FRAMES_H),
VF_PL_REG(A_PL_VF_WHOAMI), VF_PL_REG(A_PL_VF_REVISION),
VF_CIM_REG(A_CIM_VF_EXT_MAILBOX_CTRL),
VF_CIM_REG(A_CIM_VF_EXT_MAILBOX_STATUS),
FW_T6VF_MBDATA_BASE_ADDR,
FW_T6VF_MBDATA_BASE_ADDR +
((NUM_CIM_PF_MAILBOX_DATA_INSTANCES - 1) * 4),
};
u32 *buf_end = (u32 *)(buf + buf_size);
const unsigned int *reg_ranges;
int reg_ranges_size, range;
unsigned int chip_version = chip_id(adap);
/*
* Select the right set of register ranges to dump depending on the
* adapter chip type.
*/
switch (chip_version) {
case CHELSIO_T4:
if (adap->flags & IS_VF) {
reg_ranges = t4vf_reg_ranges;
reg_ranges_size = ARRAY_SIZE(t4vf_reg_ranges);
} else {
reg_ranges = t4_reg_ranges;
reg_ranges_size = ARRAY_SIZE(t4_reg_ranges);
}
break;
case CHELSIO_T5:
if (adap->flags & IS_VF) {
reg_ranges = t5vf_reg_ranges;
reg_ranges_size = ARRAY_SIZE(t5vf_reg_ranges);
} else {
reg_ranges = t5_reg_ranges;
reg_ranges_size = ARRAY_SIZE(t5_reg_ranges);
}
break;
case CHELSIO_T6:
if (adap->flags & IS_VF) {
reg_ranges = t6vf_reg_ranges;
reg_ranges_size = ARRAY_SIZE(t6vf_reg_ranges);
} else {
reg_ranges = t6_reg_ranges;
reg_ranges_size = ARRAY_SIZE(t6_reg_ranges);
}
break;
default:
CH_ERR(adap,
"Unsupported chip version %d\n", chip_version);
return;
}
/*
* Clear the register buffer and insert the appropriate register
* values selected by the above register ranges.
*/
memset(buf, 0, buf_size);
for (range = 0; range < reg_ranges_size; range += 2) {
unsigned int reg = reg_ranges[range];
unsigned int last_reg = reg_ranges[range + 1];
u32 *bufp = (u32 *)(buf + reg);
/*
* Iterate across the register range filling in the register
* buffer but don't write past the end of the register buffer.
*/
while (reg <= last_reg && bufp < buf_end) {
*bufp++ = t4_read_reg(adap, reg);
reg += sizeof(u32);
}
}
}
/*
* Partial EEPROM Vital Product Data structure. The VPD starts with one ID
* header followed by one or more VPD-R sections, each with its own header.
*/
struct t4_vpd_hdr {
u8 id_tag;
u8 id_len[2];
u8 id_data[ID_LEN];
};
struct t4_vpdr_hdr {
u8 vpdr_tag;
u8 vpdr_len[2];
};
/*
* EEPROM reads take a few tens of us while writes can take a bit over 5 ms.
*/
#define EEPROM_DELAY 10 /* 10us per poll spin */
#define EEPROM_MAX_POLL 5000 /* x 5000 == 50ms */
#define EEPROM_STAT_ADDR 0x7bfc
#define VPD_SIZE 0x800
#define VPD_BASE 0x400
#define VPD_BASE_OLD 0
#define VPD_LEN 1024
#define VPD_INFO_FLD_HDR_SIZE 3
#define CHELSIO_VPD_UNIQUE_ID 0x82
/*
* Small utility function to wait till any outstanding VPD Access is complete.
* We have a per-adapter state variable "VPD Busy" to indicate when we have a
* VPD Access in flight. This allows us to handle the problem of having a
* previous VPD Access time out and prevent an attempt to inject a new VPD
* Request before any in-flight VPD request has completed.
*/
static int t4_seeprom_wait(struct adapter *adapter)
{
unsigned int base = adapter->params.pci.vpd_cap_addr;
int max_poll;
/*
* If no VPD Access is in flight, we can just return success right
* away.
*/
if (!adapter->vpd_busy)
return 0;
/*
* Poll the VPD Capability Address/Flag register waiting for it
* to indicate that the operation is complete.
*/
max_poll = EEPROM_MAX_POLL;
do {
u16 val;
udelay(EEPROM_DELAY);
t4_os_pci_read_cfg2(adapter, base + PCI_VPD_ADDR, &val);
/*
* If the operation is complete, mark the VPD as no longer
* busy and return success.
*/
if ((val & PCI_VPD_ADDR_F) == adapter->vpd_flag) {
adapter->vpd_busy = 0;
return 0;
}
} while (--max_poll);
/*
* Failure! Note that we leave the VPD Busy status set in order to
* avoid pushing a new VPD Access request into the VPD Capability till
* the current operation eventually succeeds. It's a bug to issue a
* new request when an existing request is in flight and will result
* in corrupt hardware state.
*/
return -ETIMEDOUT;
}
/**
* t4_seeprom_read - read a serial EEPROM location
* @adapter: adapter to read
* @addr: EEPROM virtual address
* @data: where to store the read data
*
* Read a 32-bit word from a location in serial EEPROM using the card's PCI
* VPD capability. Note that this function must be called with a virtual
* address.
*/
int t4_seeprom_read(struct adapter *adapter, u32 addr, u32 *data)
{
unsigned int base = adapter->params.pci.vpd_cap_addr;
int ret;
/*
* VPD Accesses must always be 4-byte aligned!
*/
if (addr >= EEPROMVSIZE || (addr & 3))
return -EINVAL;
/*
* Wait for any previous operation which may still be in flight to
* complete.
*/
ret = t4_seeprom_wait(adapter);
if (ret) {
CH_ERR(adapter, "VPD still busy from previous operation\n");
return ret;
}
/*
* Issue our new VPD Read request, mark the VPD as being busy and wait
* for our request to complete. If it doesn't complete, note the
* error and return it to our caller. Note that we do not reset the
* VPD Busy status!
*/
t4_os_pci_write_cfg2(adapter, base + PCI_VPD_ADDR, (u16)addr);
adapter->vpd_busy = 1;
adapter->vpd_flag = PCI_VPD_ADDR_F;
ret = t4_seeprom_wait(adapter);
if (ret) {
CH_ERR(adapter, "VPD read of address %#x failed\n", addr);
return ret;
}
/*
* Grab the returned data, swizzle it into our endianness and
* return success.
*/
t4_os_pci_read_cfg4(adapter, base + PCI_VPD_DATA, data);
*data = le32_to_cpu(*data);
return 0;
}
/**
* t4_seeprom_write - write a serial EEPROM location
* @adapter: adapter to write
* @addr: virtual EEPROM address
* @data: value to write
*
* Write a 32-bit word to a location in serial EEPROM using the card's PCI
* VPD capability. Note that this function must be called with a virtual
* address.
*/
int t4_seeprom_write(struct adapter *adapter, u32 addr, u32 data)
{
unsigned int base = adapter->params.pci.vpd_cap_addr;
int ret;
u32 stats_reg;
int max_poll;
/*
* VPD Accesses must always be 4-byte aligned!
*/
if (addr >= EEPROMVSIZE || (addr & 3))
return -EINVAL;
/*
* Wait for any previous operation which may still be in flight to
* complete.
*/
ret = t4_seeprom_wait(adapter);
if (ret) {
CH_ERR(adapter, "VPD still busy from previous operation\n");
return ret;
}
/*
* Issue our new VPD Write request, mark the VPD as being busy and wait
* for our request to complete. If it doesn't complete, note the
* error and return it to our caller. Note that we do not reset the
* VPD Busy status!
*/
t4_os_pci_write_cfg4(adapter, base + PCI_VPD_DATA,
cpu_to_le32(data));
t4_os_pci_write_cfg2(adapter, base + PCI_VPD_ADDR,
(u16)addr | PCI_VPD_ADDR_F);
adapter->vpd_busy = 1;
adapter->vpd_flag = 0;
ret = t4_seeprom_wait(adapter);
if (ret) {
CH_ERR(adapter, "VPD write of address %#x failed\n", addr);
return ret;
}
/*
* Reset PCI_VPD_DATA register after a transaction and wait for our
* request to complete. If it doesn't complete, return error.
*/
t4_os_pci_write_cfg4(adapter, base + PCI_VPD_DATA, 0);
max_poll = EEPROM_MAX_POLL;
do {
udelay(EEPROM_DELAY);
t4_seeprom_read(adapter, EEPROM_STAT_ADDR, &stats_reg);
} while ((stats_reg & 0x1) && --max_poll);
if (!max_poll)
return -ETIMEDOUT;
/* Return success! */
return 0;
}
/**
* t4_eeprom_ptov - translate a physical EEPROM address to virtual
* @phys_addr: the physical EEPROM address
* @fn: the PCI function number
* @sz: size of function-specific area
*
* Translate a physical EEPROM address to virtual. The first 1K is
* accessed through virtual addresses starting at 31K, the rest is
* accessed through virtual addresses starting at 0.
*
* The mapping is as follows:
* [0..1K) -> [31K..32K)
* [1K..1K+A) -> [ES-A..ES)
* [1K+A..ES) -> [0..ES-A-1K)
*
* where A = @fn * @sz, and ES = EEPROM size.
*/
int t4_eeprom_ptov(unsigned int phys_addr, unsigned int fn, unsigned int sz)
{
fn *= sz;
if (phys_addr < 1024)
return phys_addr + (31 << 10);
if (phys_addr < 1024 + fn)
return EEPROMSIZE - fn + phys_addr - 1024;
if (phys_addr < EEPROMSIZE)
return phys_addr - 1024 - fn;
return -EINVAL;
}
/**
* t4_seeprom_wp - enable/disable EEPROM write protection
* @adapter: the adapter
* @enable: whether to enable or disable write protection
*
* Enables or disables write protection on the serial EEPROM.
*/
int t4_seeprom_wp(struct adapter *adapter, int enable)
{
return t4_seeprom_write(adapter, EEPROM_STAT_ADDR, enable ? 0xc : 0);
}
/**
* get_vpd_keyword_val - Locates an information field keyword in the VPD
* @vpd: Pointer to buffered vpd data structure
* @kw: The keyword to search for
* @region: VPD region to search (starting from 0)
*
* Returns the value of the information field keyword or
* -ENOENT otherwise.
*/
static int get_vpd_keyword_val(const u8 *vpd, const char *kw, int region)
{
int i, tag;
unsigned int offset, len;
const struct t4_vpdr_hdr *vpdr;
offset = sizeof(struct t4_vpd_hdr);
vpdr = (const void *)(vpd + offset);
tag = vpdr->vpdr_tag;
len = (u16)vpdr->vpdr_len[0] + ((u16)vpdr->vpdr_len[1] << 8);
while (region--) {
offset += sizeof(struct t4_vpdr_hdr) + len;
vpdr = (const void *)(vpd + offset);
if (++tag != vpdr->vpdr_tag)
return -ENOENT;
len = (u16)vpdr->vpdr_len[0] + ((u16)vpdr->vpdr_len[1] << 8);
}
offset += sizeof(struct t4_vpdr_hdr);
if (offset + len > VPD_LEN) {
return -ENOENT;
}
for (i = offset; i + VPD_INFO_FLD_HDR_SIZE <= offset + len;) {
if (memcmp(vpd + i, kw, 2) == 0) {
i += VPD_INFO_FLD_HDR_SIZE;
return i;
}
i += VPD_INFO_FLD_HDR_SIZE + vpd[i+2];
}
return -ENOENT;
}
/**
* get_vpd_params - read VPD parameters from VPD EEPROM
* @adapter: adapter to read
* @p: where to store the parameters
* @device_id: the PCI device id of the adapter
* @buf: caller-provided temporary space to read the VPD into
*
* Reads card parameters stored in VPD EEPROM.
*/
static int get_vpd_params(struct adapter *adapter, struct vpd_params *p,
uint16_t device_id, u32 *buf)
{
int i, ret, addr;
int ec, sn, pn, na, md;
u8 csum;
const u8 *vpd = (const u8 *)buf;
/*
* Card information normally starts at VPD_BASE but early cards had
* it at 0.
*/
ret = t4_seeprom_read(adapter, VPD_BASE, buf);
if (ret)
return (ret);
/*
* The VPD shall have a unique identifier specified by the PCI SIG.
* For Chelsio adapters the first byte of the VPD shall be
* CHELSIO_VPD_UNIQUE_ID (0x82). The VPD programming software is
* expected to automatically put this entry at the beginning of the
* VPD.
*/
addr = *vpd == CHELSIO_VPD_UNIQUE_ID ? VPD_BASE : VPD_BASE_OLD;
for (i = 0; i < VPD_LEN; i += 4) {
ret = t4_seeprom_read(adapter, addr + i, buf++);
if (ret)
return ret;
}
#define FIND_VPD_KW(var,name) do { \
var = get_vpd_keyword_val(vpd, name, 0); \
if (var < 0) { \
CH_ERR(adapter, "missing VPD keyword " name "\n"); \
return -EINVAL; \
} \
} while (0)
FIND_VPD_KW(i, "RV");
for (csum = 0; i >= 0; i--)
csum += vpd[i];
if (csum) {
CH_ERR(adapter,
"corrupted VPD EEPROM, actual csum %u\n", csum);
return -EINVAL;
}
FIND_VPD_KW(ec, "EC");
FIND_VPD_KW(sn, "SN");
FIND_VPD_KW(pn, "PN");
FIND_VPD_KW(na, "NA");
#undef FIND_VPD_KW
memcpy(p->id, vpd + offsetof(struct t4_vpd_hdr, id_data), ID_LEN);
strstrip(p->id);
memcpy(p->ec, vpd + ec, EC_LEN);
strstrip(p->ec);
i = vpd[sn - VPD_INFO_FLD_HDR_SIZE + 2];
memcpy(p->sn, vpd + sn, min(i, SERNUM_LEN));
strstrip(p->sn);
i = vpd[pn - VPD_INFO_FLD_HDR_SIZE + 2];
memcpy(p->pn, vpd + pn, min(i, PN_LEN));
strstrip((char *)p->pn);
i = vpd[na - VPD_INFO_FLD_HDR_SIZE + 2];
memcpy(p->na, vpd + na, min(i, MACADDR_LEN));
strstrip((char *)p->na);
if (device_id & 0x80)
return 0; /* Custom card */
md = get_vpd_keyword_val(vpd, "VF", 1);
if (md < 0) {
snprintf(p->md, sizeof(p->md), "unknown");
} else {
i = vpd[md - VPD_INFO_FLD_HDR_SIZE + 2];
memcpy(p->md, vpd + md, min(i, MD_LEN));
strstrip((char *)p->md);
}
return 0;
}
/* serial flash and firmware constants and flash config file constants */
enum {
SF_ATTEMPTS = 10, /* max retries for SF operations */
/* flash command opcodes */
SF_PROG_PAGE = 2, /* program 256B page */
SF_WR_DISABLE = 4, /* disable writes */
SF_RD_STATUS = 5, /* read status register */
SF_WR_ENABLE = 6, /* enable writes */
SF_RD_DATA_FAST = 0xb, /* read flash */
SF_RD_ID = 0x9f, /* read ID */
SF_ERASE_SECTOR = 0xd8, /* erase 64KB sector */
};
/**
* sf1_read - read data from the serial flash
* @adapter: the adapter
* @byte_cnt: number of bytes to read
* @cont: whether another operation will be chained
* @lock: whether to lock SF for PL access only
* @valp: where to store the read data
*
* Reads up to 4 bytes of data from the serial flash. The location of
* the read needs to be specified prior to calling this by issuing the
* appropriate commands to the serial flash.
*/
static int sf1_read(struct adapter *adapter, unsigned int byte_cnt, int cont,
int lock, u32 *valp)
{
int ret;
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
if (t4_read_reg(adapter, A_SF_OP) & F_BUSY)
return -EBUSY;
t4_write_reg(adapter, A_SF_OP,
V_SF_LOCK(lock) | V_CONT(cont) | V_BYTECNT(byte_cnt - 1));
ret = t4_wait_op_done(adapter, A_SF_OP, F_BUSY, 0, SF_ATTEMPTS, 5);
if (!ret)
*valp = t4_read_reg(adapter, A_SF_DATA);
return ret;
}
/**
* sf1_write - write data to the serial flash
* @adapter: the adapter
* @byte_cnt: number of bytes to write
* @cont: whether another operation will be chained
* @lock: whether to lock SF for PL access only
* @val: value to write
*
* Writes up to 4 bytes of data to the serial flash. The location of
* the write needs to be specified prior to calling this by issuing the
* appropriate commands to the serial flash.
*/
static int sf1_write(struct adapter *adapter, unsigned int byte_cnt, int cont,
int lock, u32 val)
{
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
if (t4_read_reg(adapter, A_SF_OP) & F_BUSY)
return -EBUSY;
t4_write_reg(adapter, A_SF_DATA, val);
t4_write_reg(adapter, A_SF_OP, V_SF_LOCK(lock) |
V_CONT(cont) | V_BYTECNT(byte_cnt - 1) | V_OP(1));
return t4_wait_op_done(adapter, A_SF_OP, F_BUSY, 0, SF_ATTEMPTS, 5);
}
/**
* flash_wait_op - wait for a flash operation to complete
* @adapter: the adapter
* @attempts: max number of polls of the status register
* @delay: delay between polls in ms
*
* Wait for a flash operation to complete by polling the status register.
*/
static int flash_wait_op(struct adapter *adapter, int attempts, int delay)
{
int ret;
u32 status;
while (1) {
if ((ret = sf1_write(adapter, 1, 1, 1, SF_RD_STATUS)) != 0 ||
(ret = sf1_read(adapter, 1, 0, 1, &status)) != 0)
return ret;
if (!(status & 1))
return 0;
if (--attempts == 0)
return -EAGAIN;
if (delay)
msleep(delay);
}
}
/**
* t4_read_flash - read words from serial flash
* @adapter: the adapter
* @addr: the start address for the read
* @nwords: how many 32-bit words to read
* @data: where to store the read data
* @byte_oriented: whether to store data as bytes or as words
*
* Read the specified number of 32-bit words from the serial flash.
* If @byte_oriented is set the read data is stored as a byte array
* (i.e., big-endian), otherwise as 32-bit words in the platform's
* natural endianness.
*/
int t4_read_flash(struct adapter *adapter, unsigned int addr,
unsigned int nwords, u32 *data, int byte_oriented)
{
int ret;
if (addr + nwords * sizeof(u32) > adapter->params.sf_size || (addr & 3))
return -EINVAL;
addr = swab32(addr) | SF_RD_DATA_FAST;
if ((ret = sf1_write(adapter, 4, 1, 0, addr)) != 0 ||
(ret = sf1_read(adapter, 1, 1, 0, data)) != 0)
return ret;
for ( ; nwords; nwords--, data++) {
ret = sf1_read(adapter, 4, nwords > 1, nwords == 1, data);
if (nwords == 1)
t4_write_reg(adapter, A_SF_OP, 0); /* unlock SF */
if (ret)
return ret;
if (byte_oriented)
*data = (__force __u32)(cpu_to_be32(*data));
}
return 0;
}
/**
* t4_write_flash - write up to a page of data to the serial flash
* @adapter: the adapter
* @addr: the start address to write
* @n: length of data to write in bytes
* @data: the data to write
* @byte_oriented: whether to store data as bytes or as words
*
* Writes up to a page of data (256 bytes) to the serial flash starting
* at the given address. All the data must be written to the same page.
* If @byte_oriented is set the write data is stored as a byte stream
* (i.e., matching what is on disk), otherwise in big-endian.
*/
int t4_write_flash(struct adapter *adapter, unsigned int addr,
unsigned int n, const u8 *data, int byte_oriented)
{
int ret;
u32 buf[SF_PAGE_SIZE / 4];
unsigned int i, c, left, val, offset = addr & 0xff;
if (addr >= adapter->params.sf_size || offset + n > SF_PAGE_SIZE)
return -EINVAL;
val = swab32(addr) | SF_PROG_PAGE;
if ((ret = sf1_write(adapter, 1, 0, 1, SF_WR_ENABLE)) != 0 ||
(ret = sf1_write(adapter, 4, 1, 1, val)) != 0)
goto unlock;
for (left = n; left; left -= c) {
c = min(left, 4U);
for (val = 0, i = 0; i < c; ++i)
val = (val << 8) + *data++;
if (!byte_oriented)
val = cpu_to_be32(val);
ret = sf1_write(adapter, c, c != left, 1, val);
if (ret)
goto unlock;
}
ret = flash_wait_op(adapter, 8, 1);
if (ret)
goto unlock;
t4_write_reg(adapter, A_SF_OP, 0); /* unlock SF */
/* Read the page to verify the write succeeded */
ret = t4_read_flash(adapter, addr & ~0xff, ARRAY_SIZE(buf), buf,
byte_oriented);
if (ret)
return ret;
if (memcmp(data - n, (u8 *)buf + offset, n)) {
CH_ERR(adapter,
"failed to correctly write the flash page at %#x\n",
addr);
return -EIO;
}
return 0;
unlock:
t4_write_reg(adapter, A_SF_OP, 0); /* unlock SF */
return ret;
}
/**
* t4_get_fw_version - read the firmware version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the FW version from flash.
*/
int t4_get_fw_version(struct adapter *adapter, u32 *vers)
{
return t4_read_flash(adapter, FLASH_FW_START +
offsetof(struct fw_hdr, fw_ver), 1,
vers, 0);
}
/**
* t4_get_fw_hdr - read the firmware header
* @adapter: the adapter
* @hdr: where to place the firmware header
*
* Reads the FW header from flash into caller provided buffer.
*/
int t4_get_fw_hdr(struct adapter *adapter, struct fw_hdr *hdr)
{
return t4_read_flash(adapter, FLASH_FW_START,
sizeof (*hdr) / sizeof (uint32_t), (uint32_t *)hdr, 1);
}
/**
* t4_get_bs_version - read the firmware bootstrap version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the FW Bootstrap version from flash.
*/
int t4_get_bs_version(struct adapter *adapter, u32 *vers)
{
return t4_read_flash(adapter, FLASH_FWBOOTSTRAP_START +
offsetof(struct fw_hdr, fw_ver), 1,
vers, 0);
}
/**
* t4_get_tp_version - read the TP microcode version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the TP microcode version from flash.
*/
int t4_get_tp_version(struct adapter *adapter, u32 *vers)
{
return t4_read_flash(adapter, FLASH_FW_START +
offsetof(struct fw_hdr, tp_microcode_ver),
1, vers, 0);
}
/**
* t4_get_exprom_version - return the Expansion ROM version (if any)
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the Expansion ROM header from FLASH and returns the version
* number (if present) through the @vers return value pointer. We return
* this in the Firmware Version Format since it's convenient. Return
* 0 on success, -ENOENT if no Expansion ROM is present.
*/
int t4_get_exprom_version(struct adapter *adap, u32 *vers)
{
struct exprom_header {
unsigned char hdr_arr[16]; /* must start with 0x55aa */
unsigned char hdr_ver[4]; /* Expansion ROM version */
} *hdr;
u32 exprom_header_buf[DIV_ROUND_UP(sizeof(struct exprom_header),
sizeof(u32))];
int ret;
ret = t4_read_flash(adap, FLASH_EXP_ROM_START,
ARRAY_SIZE(exprom_header_buf), exprom_header_buf,
0);
if (ret)
return ret;
hdr = (struct exprom_header *)exprom_header_buf;
if (hdr->hdr_arr[0] != 0x55 || hdr->hdr_arr[1] != 0xaa)
return -ENOENT;
*vers = (V_FW_HDR_FW_VER_MAJOR(hdr->hdr_ver[0]) |
V_FW_HDR_FW_VER_MINOR(hdr->hdr_ver[1]) |
V_FW_HDR_FW_VER_MICRO(hdr->hdr_ver[2]) |
V_FW_HDR_FW_VER_BUILD(hdr->hdr_ver[3]));
return 0;
}
/**
* t4_get_scfg_version - return the Serial Configuration version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the Serial Configuration Version via the Firmware interface
* (thus this can only be called once we're ready to issue Firmware
* commands). The format of the Serial Configuration version is
* adapter specific. Returns 0 on success, an error on failure.
*
* Note that early versions of the Firmware didn't include the ability
* to retrieve the Serial Configuration version, so we zero-out the
* return-value parameter in that case to avoid leaving it with
* garbage in it.
*
* Also note that the Firmware will return its cached copy of the Serial
* Initialization Revision ID, not the actual Revision ID as written in
* the Serial EEPROM. This is only an issue if a new VPD has been written
* and the Firmware/Chip haven't yet gone through a RESET sequence. So
* it's best to defer calling this routine till after a FW_RESET_CMD has
* been issued if the Host Driver will be performing a full adapter
* initialization.
*/
int t4_get_scfg_version(struct adapter *adapter, u32 *vers)
{
u32 scfgrev_param;
int ret;
scfgrev_param = (V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_SCFGREV));
ret = t4_query_params(adapter, adapter->mbox, adapter->pf, 0,
1, &scfgrev_param, vers);
if (ret)
*vers = 0;
return ret;
}
/**
* t4_get_vpd_version - return the VPD version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the VPD via the Firmware interface (thus this can only be called
* once we're ready to issue Firmware commands). The format of the
* VPD version is adapter specific. Returns 0 on success, an error on
* failure.
*
* Note that early versions of the Firmware didn't include the ability
* to retrieve the VPD version, so we zero-out the return-value parameter
* in that case to avoid leaving it with garbage in it.
*
* Also note that the Firmware will return its cached copy of the VPD
* Revision ID, not the actual Revision ID as written in the Serial
* EEPROM. This is only an issue if a new VPD has been written and the
* Firmware/Chip haven't yet gone through a RESET sequence. So it's best
* to defer calling this routine till after a FW_RESET_CMD has been issued
* if the Host Driver will be performing a full adapter initialization.
*/
int t4_get_vpd_version(struct adapter *adapter, u32 *vers)
{
u32 vpdrev_param;
int ret;
vpdrev_param = (V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_VPDREV));
ret = t4_query_params(adapter, adapter->mbox, adapter->pf, 0,
1, &vpdrev_param, vers);
if (ret)
*vers = 0;
return ret;
}
/**
* t4_get_version_info - extract various chip/firmware version information
* @adapter: the adapter
*
* Reads various chip/firmware version numbers and stores them into the
* adapter's Adapter Parameters structure. If any of the reads fails,
* the first error will be returned, but all of the version numbers
* will still be read.
*/
int t4_get_version_info(struct adapter *adapter)
{
int ret = 0;
#define FIRST_RET(__getvinfo) \
do { \
int __ret = __getvinfo; \
if (__ret && !ret) \
ret = __ret; \
} while (0)
FIRST_RET(t4_get_fw_version(adapter, &adapter->params.fw_vers));
FIRST_RET(t4_get_bs_version(adapter, &adapter->params.bs_vers));
FIRST_RET(t4_get_tp_version(adapter, &adapter->params.tp_vers));
FIRST_RET(t4_get_exprom_version(adapter, &adapter->params.er_vers));
FIRST_RET(t4_get_scfg_version(adapter, &adapter->params.scfg_vers));
FIRST_RET(t4_get_vpd_version(adapter, &adapter->params.vpd_vers));
#undef FIRST_RET
return ret;
}
/**
* t4_flash_erase_sectors - erase a range of flash sectors
* @adapter: the adapter
* @start: the first sector to erase
* @end: the last sector to erase
*
* Erases the sectors in the given inclusive range.
*/
int t4_flash_erase_sectors(struct adapter *adapter, int start, int end)
{
int ret = 0;
if (end >= adapter->params.sf_nsec)
return -EINVAL;
while (start <= end) {
if ((ret = sf1_write(adapter, 1, 0, 1, SF_WR_ENABLE)) != 0 ||
(ret = sf1_write(adapter, 4, 0, 1,
SF_ERASE_SECTOR | (start << 8))) != 0 ||
(ret = flash_wait_op(adapter, 14, 500)) != 0) {
CH_ERR(adapter,
"erase of flash sector %d failed, error %d\n",
start, ret);
break;
}
start++;
}
t4_write_reg(adapter, A_SF_OP, 0); /* unlock SF */
return ret;
}
/**
* t4_flash_cfg_addr - return the address of the flash configuration file
* @adapter: the adapter
*
* Return the address within the flash where the Firmware Configuration
* File is stored, or an error if the device FLASH is too small to contain
* a Firmware Configuration File.
*/
int t4_flash_cfg_addr(struct adapter *adapter)
{
/*
* If the device FLASH isn't large enough to hold a Firmware
* Configuration File, return an error.
*/
if (adapter->params.sf_size < FLASH_CFG_START + FLASH_CFG_MAX_SIZE)
return -ENOSPC;
return FLASH_CFG_START;
}
/*
* Return TRUE if the specified firmware matches the adapter. I.e. T4
* firmware for T4 adapters, T5 firmware for T5 adapters, etc. We go ahead
* and emit an error message for mismatched firmware to save our caller the
* effort ...
*/
static int t4_fw_matches_chip(struct adapter *adap,
const struct fw_hdr *hdr)
{
/*
* The expression below will return FALSE for any unsupported adapter
* which will keep us "honest" in the future ...
*/
if ((is_t4(adap) && hdr->chip == FW_HDR_CHIP_T4) ||
(is_t5(adap) && hdr->chip == FW_HDR_CHIP_T5) ||
(is_t6(adap) && hdr->chip == FW_HDR_CHIP_T6))
return 1;
CH_ERR(adap,
"FW image (%d) is not suitable for this adapter (%d)\n",
hdr->chip, chip_id(adap));
return 0;
}
/**
* t4_load_fw - download firmware
* @adap: the adapter
* @fw_data: the firmware image to write
* @size: image size
*
* Write the supplied firmware image to the card's serial flash.
*/
int t4_load_fw(struct adapter *adap, const u8 *fw_data, unsigned int size)
{
u32 csum;
int ret, addr;
unsigned int i;
u8 first_page[SF_PAGE_SIZE];
const u32 *p = (const u32 *)fw_data;
const struct fw_hdr *hdr = (const struct fw_hdr *)fw_data;
unsigned int sf_sec_size = adap->params.sf_size / adap->params.sf_nsec;
unsigned int fw_start_sec;
unsigned int fw_start;
unsigned int fw_size;
if (ntohl(hdr->magic) == FW_HDR_MAGIC_BOOTSTRAP) {
fw_start_sec = FLASH_FWBOOTSTRAP_START_SEC;
fw_start = FLASH_FWBOOTSTRAP_START;
fw_size = FLASH_FWBOOTSTRAP_MAX_SIZE;
} else {
fw_start_sec = FLASH_FW_START_SEC;
fw_start = FLASH_FW_START;
fw_size = FLASH_FW_MAX_SIZE;
}
if (!size) {
CH_ERR(adap, "FW image has no data\n");
return -EINVAL;
}
if (size & 511) {
CH_ERR(adap,
"FW image size not multiple of 512 bytes\n");
return -EINVAL;
}
if ((unsigned int) be16_to_cpu(hdr->len512) * 512 != size) {
CH_ERR(adap,
"FW image size differs from size in FW header\n");
return -EINVAL;
}
if (size > fw_size) {
CH_ERR(adap, "FW image too large, max is %u bytes\n",
fw_size);
return -EFBIG;
}
if (!t4_fw_matches_chip(adap, hdr))
return -EINVAL;
for (csum = 0, i = 0; i < size / sizeof(csum); i++)
csum += be32_to_cpu(p[i]);
if (csum != 0xffffffff) {
CH_ERR(adap,
"corrupted firmware image, checksum %#x\n", csum);
return -EINVAL;
}
i = DIV_ROUND_UP(size, sf_sec_size); /* # of sectors spanned */
ret = t4_flash_erase_sectors(adap, fw_start_sec, fw_start_sec + i - 1);
if (ret)
goto out;
/*
* We write the correct version at the end so the driver can see a bad
* version if the FW write fails. Start by writing a copy of the
* first page with a bad version.
*/
memcpy(first_page, fw_data, SF_PAGE_SIZE);
((struct fw_hdr *)first_page)->fw_ver = cpu_to_be32(0xffffffff);
ret = t4_write_flash(adap, fw_start, SF_PAGE_SIZE, first_page, 1);
if (ret)
goto out;
addr = fw_start;
for (size -= SF_PAGE_SIZE; size; size -= SF_PAGE_SIZE) {
addr += SF_PAGE_SIZE;
fw_data += SF_PAGE_SIZE;
ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, fw_data, 1);
if (ret)
goto out;
}
ret = t4_write_flash(adap,
fw_start + offsetof(struct fw_hdr, fw_ver),
sizeof(hdr->fw_ver), (const u8 *)&hdr->fw_ver, 1);
out:
if (ret)
CH_ERR(adap, "firmware download failed, error %d\n",
ret);
return ret;
}
/**
* t4_fwcache - firmware cache operation
* @adap: the adapter
* @op: the operation (flush or flush and invalidate)
*/
int t4_fwcache(struct adapter *adap, enum fw_params_param_dev_fwcache op)
{
struct fw_params_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn =
cpu_to_be32(V_FW_CMD_OP(FW_PARAMS_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
V_FW_PARAMS_CMD_PFN(adap->pf) |
V_FW_PARAMS_CMD_VFN(0));
c.retval_len16 = cpu_to_be32(FW_LEN16(c));
c.param[0].mnem =
cpu_to_be32(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_FWCACHE));
c.param[0].val = (__force __be32)op;
return t4_wr_mbox(adap, adap->mbox, &c, sizeof(c), NULL);
}
void t4_cim_read_pif_la(struct adapter *adap, u32 *pif_req, u32 *pif_rsp,
unsigned int *pif_req_wrptr,
unsigned int *pif_rsp_wrptr)
{
int i, j;
u32 cfg, val, req, rsp;
cfg = t4_read_reg(adap, A_CIM_DEBUGCFG);
if (cfg & F_LADBGEN)
t4_write_reg(adap, A_CIM_DEBUGCFG, cfg ^ F_LADBGEN);
val = t4_read_reg(adap, A_CIM_DEBUGSTS);
req = G_POLADBGWRPTR(val);
rsp = G_PILADBGWRPTR(val);
if (pif_req_wrptr)
*pif_req_wrptr = req;
if (pif_rsp_wrptr)
*pif_rsp_wrptr = rsp;
for (i = 0; i < CIM_PIFLA_SIZE; i++) {
for (j = 0; j < 6; j++) {
t4_write_reg(adap, A_CIM_DEBUGCFG, V_POLADBGRDPTR(req) |
V_PILADBGRDPTR(rsp));
*pif_req++ = t4_read_reg(adap, A_CIM_PO_LA_DEBUGDATA);
*pif_rsp++ = t4_read_reg(adap, A_CIM_PI_LA_DEBUGDATA);
req++;
rsp++;
}
req = (req + 2) & M_POLADBGRDPTR;
rsp = (rsp + 2) & M_PILADBGRDPTR;
}
t4_write_reg(adap, A_CIM_DEBUGCFG, cfg);
}
void t4_cim_read_ma_la(struct adapter *adap, u32 *ma_req, u32 *ma_rsp)
{
u32 cfg;
int i, j, idx;
cfg = t4_read_reg(adap, A_CIM_DEBUGCFG);
if (cfg & F_LADBGEN)
t4_write_reg(adap, A_CIM_DEBUGCFG, cfg ^ F_LADBGEN);
for (i = 0; i < CIM_MALA_SIZE; i++) {
for (j = 0; j < 5; j++) {
idx = 8 * i + j;
t4_write_reg(adap, A_CIM_DEBUGCFG, V_POLADBGRDPTR(idx) |
V_PILADBGRDPTR(idx));
*ma_req++ = t4_read_reg(adap, A_CIM_PO_LA_MADEBUGDATA);
*ma_rsp++ = t4_read_reg(adap, A_CIM_PI_LA_MADEBUGDATA);
}
}
t4_write_reg(adap, A_CIM_DEBUGCFG, cfg);
}
void t4_ulprx_read_la(struct adapter *adap, u32 *la_buf)
{
unsigned int i, j;
for (i = 0; i < 8; i++) {
u32 *p = la_buf + i;
t4_write_reg(adap, A_ULP_RX_LA_CTL, i);
j = t4_read_reg(adap, A_ULP_RX_LA_WRPTR);
t4_write_reg(adap, A_ULP_RX_LA_RDPTR, j);
for (j = 0; j < ULPRX_LA_SIZE; j++, p += 8)
*p = t4_read_reg(adap, A_ULP_RX_LA_RDDATA);
}
}
/**
* fwcaps16_to_caps32 - convert 16-bit Port Capabilities to 32-bits
* @caps16: a 16-bit Port Capabilities value
*
* Returns the equivalent 32-bit Port Capabilities value.
*/
static uint32_t fwcaps16_to_caps32(uint16_t caps16)
{
uint32_t caps32 = 0;
#define CAP16_TO_CAP32(__cap) \
do { \
if (caps16 & FW_PORT_CAP_##__cap) \
caps32 |= FW_PORT_CAP32_##__cap; \
} while (0)
CAP16_TO_CAP32(SPEED_100M);
CAP16_TO_CAP32(SPEED_1G);
CAP16_TO_CAP32(SPEED_25G);
CAP16_TO_CAP32(SPEED_10G);
CAP16_TO_CAP32(SPEED_40G);
CAP16_TO_CAP32(SPEED_100G);
CAP16_TO_CAP32(FC_RX);
CAP16_TO_CAP32(FC_TX);
CAP16_TO_CAP32(ANEG);
CAP16_TO_CAP32(FORCE_PAUSE);
CAP16_TO_CAP32(MDIAUTO);
CAP16_TO_CAP32(MDISTRAIGHT);
CAP16_TO_CAP32(FEC_RS);
CAP16_TO_CAP32(FEC_BASER_RS);
CAP16_TO_CAP32(802_3_PAUSE);
CAP16_TO_CAP32(802_3_ASM_DIR);
#undef CAP16_TO_CAP32
return caps32;
}
/**
* fwcaps32_to_caps16 - convert 32-bit Port Capabilities to 16-bits
* @caps32: a 32-bit Port Capabilities value
*
* Returns the equivalent 16-bit Port Capabilities value. Note that
* not all 32-bit Port Capabilities can be represented in the 16-bit
* Port Capabilities and some fields/values may not make it.
*/
static uint16_t fwcaps32_to_caps16(uint32_t caps32)
{
uint16_t caps16 = 0;
#define CAP32_TO_CAP16(__cap) \
do { \
if (caps32 & FW_PORT_CAP32_##__cap) \
caps16 |= FW_PORT_CAP_##__cap; \
} while (0)
CAP32_TO_CAP16(SPEED_100M);
CAP32_TO_CAP16(SPEED_1G);
CAP32_TO_CAP16(SPEED_10G);
CAP32_TO_CAP16(SPEED_25G);
CAP32_TO_CAP16(SPEED_40G);
CAP32_TO_CAP16(SPEED_100G);
CAP32_TO_CAP16(FC_RX);
CAP32_TO_CAP16(FC_TX);
CAP32_TO_CAP16(802_3_PAUSE);
CAP32_TO_CAP16(802_3_ASM_DIR);
CAP32_TO_CAP16(ANEG);
CAP32_TO_CAP16(FORCE_PAUSE);
CAP32_TO_CAP16(MDIAUTO);
CAP32_TO_CAP16(MDISTRAIGHT);
CAP32_TO_CAP16(FEC_RS);
CAP32_TO_CAP16(FEC_BASER_RS);
#undef CAP32_TO_CAP16
return caps16;
}
static bool
is_bt(struct port_info *pi)
{
return (pi->port_type == FW_PORT_TYPE_BT_SGMII ||
pi->port_type == FW_PORT_TYPE_BT_XFI ||
pi->port_type == FW_PORT_TYPE_BT_XAUI);
}
/**
* t4_link_l1cfg - apply link configuration to MAC/PHY
* @adap: the adapter
* @mbox: the Firmware Mailbox to use
* @port: the Port ID
* @lc: the requested link configuration
*
* Set up a port's MAC and PHY according to a desired link configuration.
* - If the PHY can auto-negotiate first decide what to advertise, then
* enable/disable auto-negotiation as desired, and reset.
* - If the PHY does not auto-negotiate just reset it.
* - If auto-negotiation is off set the MAC to the proper speed/duplex/FC,
* otherwise do it later based on the outcome of auto-negotiation.
*/
int t4_link_l1cfg(struct adapter *adap, unsigned int mbox, unsigned int port,
struct link_config *lc)
{
struct fw_port_cmd c;
unsigned int mdi = V_FW_PORT_CAP32_MDI(FW_PORT_CAP32_MDI_AUTO);
unsigned int aneg, fc, fec, speed, rcap;
fc = 0;
if (lc->requested_fc & PAUSE_RX)
fc |= FW_PORT_CAP32_FC_RX;
if (lc->requested_fc & PAUSE_TX)
fc |= FW_PORT_CAP32_FC_TX;
if (!(lc->requested_fc & PAUSE_AUTONEG))
fc |= FW_PORT_CAP32_FORCE_PAUSE;
fec = 0;
if (lc->requested_fec == FEC_AUTO)
fec = lc->fec_hint;
else {
if (lc->requested_fec & FEC_RS)
fec |= FW_PORT_CAP32_FEC_RS;
if (lc->requested_fec & FEC_BASER_RS)
fec |= FW_PORT_CAP32_FEC_BASER_RS;
}
if (lc->requested_aneg == AUTONEG_DISABLE)
aneg = 0;
else if (lc->requested_aneg == AUTONEG_ENABLE)
aneg = FW_PORT_CAP32_ANEG;
else
aneg = lc->supported & FW_PORT_CAP32_ANEG;
if (aneg) {
speed = lc->supported & V_FW_PORT_CAP32_SPEED(M_FW_PORT_CAP32_SPEED);
} else if (lc->requested_speed != 0)
speed = speed_to_fwcap(lc->requested_speed);
else
speed = fwcap_top_speed(lc->supported);
/* Force AN on for BT cards. */
if (is_bt(adap->port[port]))
aneg = lc->supported & FW_PORT_CAP32_ANEG;
rcap = aneg | speed | fc | fec;
if ((rcap | lc->supported) != lc->supported) {
#ifdef INVARIANTS
CH_WARN(adap, "rcap 0x%08x, pcap 0x%08x\n", rcap,
lc->supported);
#endif
rcap &= lc->supported;
}
rcap |= mdi;
memset(&c, 0, sizeof(c));
c.op_to_portid = cpu_to_be32(V_FW_CMD_OP(FW_PORT_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
V_FW_PORT_CMD_PORTID(port));
if (adap->params.port_caps32) {
c.action_to_len16 =
cpu_to_be32(V_FW_PORT_CMD_ACTION(FW_PORT_ACTION_L1_CFG32) |
FW_LEN16(c));
c.u.l1cfg32.rcap32 = cpu_to_be32(rcap);
} else {
c.action_to_len16 =
cpu_to_be32(V_FW_PORT_CMD_ACTION(FW_PORT_ACTION_L1_CFG) |
FW_LEN16(c));
c.u.l1cfg.rcap = cpu_to_be32(fwcaps32_to_caps16(rcap));
}
return t4_wr_mbox_ns(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_restart_aneg - restart autonegotiation
* @adap: the adapter
* @mbox: mbox to use for the FW command
* @port: the port id
*
* Restarts autonegotiation for the selected port.
*/
int t4_restart_aneg(struct adapter *adap, unsigned int mbox, unsigned int port)
{
struct fw_port_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_portid = cpu_to_be32(V_FW_CMD_OP(FW_PORT_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
V_FW_PORT_CMD_PORTID(port));
c.action_to_len16 =
cpu_to_be32(V_FW_PORT_CMD_ACTION(FW_PORT_ACTION_L1_CFG) |
FW_LEN16(c));
c.u.l1cfg.rcap = cpu_to_be32(FW_PORT_CAP_ANEG);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
struct intr_details {
u32 mask;
const char *msg;
};
struct intr_action {
u32 mask;
int arg;
bool (*action)(struct adapter *, int, bool);
};
struct intr_info {
const char *name; /* name of the INT_CAUSE register */
int cause_reg; /* INT_CAUSE register */
int enable_reg; /* INT_ENABLE register */
u32 fatal; /* bits that are fatal */
const struct intr_details *details;
const struct intr_action *actions;
};
static inline char
intr_alert_char(u32 cause, u32 enable, u32 fatal)
{
if (cause & fatal)
return ('!');
if (cause & enable)
return ('*');
return ('-');
}
static void
t4_show_intr_info(struct adapter *adap, const struct intr_info *ii, u32 cause)
{
u32 enable, leftover;
const struct intr_details *details;
char alert;
enable = t4_read_reg(adap, ii->enable_reg);
alert = intr_alert_char(cause, enable, ii->fatal);
CH_ALERT(adap, "%c %s 0x%x = 0x%08x, E 0x%08x, F 0x%08x\n",
alert, ii->name, ii->cause_reg, cause, enable, ii->fatal);
leftover = cause;
for (details = ii->details; details && details->mask != 0; details++) {
u32 msgbits = details->mask & cause;
if (msgbits == 0)
continue;
alert = intr_alert_char(msgbits, enable, ii->fatal);
CH_ALERT(adap, " %c [0x%08x] %s\n", alert, msgbits,
details->msg);
leftover &= ~msgbits;
}
if (leftover != 0 && leftover != cause)
CH_ALERT(adap, " ? [0x%08x]\n", leftover);
}
/*
* Returns true for fatal error.
*/
static bool
t4_handle_intr(struct adapter *adap, const struct intr_info *ii,
u32 additional_cause, bool verbose)
{
u32 cause;
bool fatal;
const struct intr_action *action;
/* read and display cause. */
cause = t4_read_reg(adap, ii->cause_reg);
if (verbose || cause != 0)
t4_show_intr_info(adap, ii, cause);
fatal = (cause & ii->fatal) != 0;
cause |= additional_cause;
if (cause == 0)
return (false);
for (action = ii->actions; action && action->mask != 0; action++) {
if (!(action->mask & cause))
continue;
fatal |= (action->action)(adap, action->arg, verbose);
}
/* clear */
t4_write_reg(adap, ii->cause_reg, cause);
(void)t4_read_reg(adap, ii->cause_reg);
return (fatal);
}
/*
* Interrupt handler for the PCIE module.
*/
static bool pcie_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details sysbus_intr_details[] = {
{ F_RNPP, "RXNP array parity error" },
{ F_RPCP, "RXPC array parity error" },
{ F_RCIP, "RXCIF array parity error" },
{ F_RCCP, "Rx completions control array parity error" },
{ F_RFTP, "RXFT array parity error" },
{ 0 }
};
static const struct intr_info sysbus_intr_info = {
.name = "PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS",
.cause_reg = A_PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
.enable_reg = A_PCIE_CORE_UTL_SYSTEM_BUS_AGENT_INTERRUPT_ENABLE,
.fatal = F_RFTP | F_RCCP | F_RCIP | F_RPCP | F_RNPP,
.details = sysbus_intr_details,
.actions = NULL,
};
static const struct intr_details pcie_port_intr_details[] = {
{ F_TPCP, "TXPC array parity error" },
{ F_TNPP, "TXNP array parity error" },
{ F_TFTP, "TXFT array parity error" },
{ F_TCAP, "TXCA array parity error" },
{ F_TCIP, "TXCIF array parity error" },
{ F_RCAP, "RXCA array parity error" },
{ F_OTDD, "outbound request TLP discarded" },
{ F_RDPE, "Rx data parity error" },
{ F_TDUE, "Tx uncorrectable data error" },
{ 0 }
};
static const struct intr_info pcie_port_intr_info = {
.name = "PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS",
.cause_reg = A_PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
.enable_reg = A_PCIE_CORE_UTL_PCI_EXPRESS_PORT_INTERRUPT_ENABLE,
.fatal = F_TPCP | F_TNPP | F_TFTP | F_TCAP | F_TCIP | F_RCAP |
F_OTDD | F_RDPE | F_TDUE,
.details = pcie_port_intr_details,
.actions = NULL,
};
static const struct intr_details pcie_intr_details[] = {
{ F_MSIADDRLPERR, "MSI AddrL parity error" },
{ F_MSIADDRHPERR, "MSI AddrH parity error" },
{ F_MSIDATAPERR, "MSI data parity error" },
{ F_MSIXADDRLPERR, "MSI-X AddrL parity error" },
{ F_MSIXADDRHPERR, "MSI-X AddrH parity error" },
{ F_MSIXDATAPERR, "MSI-X data parity error" },
{ F_MSIXDIPERR, "MSI-X DI parity error" },
{ F_PIOCPLPERR, "PCIe PIO completion FIFO parity error" },
{ F_PIOREQPERR, "PCIe PIO request FIFO parity error" },
{ F_TARTAGPERR, "PCIe target tag FIFO parity error" },
{ F_CCNTPERR, "PCIe CMD channel count parity error" },
{ F_CREQPERR, "PCIe CMD channel request parity error" },
{ F_CRSPPERR, "PCIe CMD channel response parity error" },
{ F_DCNTPERR, "PCIe DMA channel count parity error" },
{ F_DREQPERR, "PCIe DMA channel request parity error" },
{ F_DRSPPERR, "PCIe DMA channel response parity error" },
{ F_HCNTPERR, "PCIe HMA channel count parity error" },
{ F_HREQPERR, "PCIe HMA channel request parity error" },
{ F_HRSPPERR, "PCIe HMA channel response parity error" },
{ F_CFGSNPPERR, "PCIe config snoop FIFO parity error" },
{ F_FIDPERR, "PCIe FID parity error" },
{ F_INTXCLRPERR, "PCIe INTx clear parity error" },
{ F_MATAGPERR, "PCIe MA tag parity error" },
{ F_PIOTAGPERR, "PCIe PIO tag parity error" },
{ F_RXCPLPERR, "PCIe Rx completion parity error" },
{ F_RXWRPERR, "PCIe Rx write parity error" },
{ F_RPLPERR, "PCIe replay buffer parity error" },
{ F_PCIESINT, "PCIe core secondary fault" },
{ F_PCIEPINT, "PCIe core primary fault" },
{ F_UNXSPLCPLERR, "PCIe unexpected split completion error" },
{ 0 }
};
static const struct intr_details t5_pcie_intr_details[] = {
{ F_IPGRPPERR, "Parity errors observed by IP" },
{ F_NONFATALERR, "PCIe non-fatal error" },
{ F_READRSPERR, "Outbound read error" },
{ F_TRGT1GRPPERR, "PCIe TRGT1 group FIFOs parity error" },
{ F_IPSOTPERR, "PCIe IP SOT buffer SRAM parity error" },
{ F_IPRETRYPERR, "PCIe IP replay buffer parity error" },
{ F_IPRXDATAGRPPERR, "PCIe IP Rx data group SRAMs parity error" },
{ F_IPRXHDRGRPPERR, "PCIe IP Rx header group SRAMs parity error" },
{ F_PIOTAGQPERR, "PIO tag queue FIFO parity error" },
{ F_MAGRPPERR, "MA group FIFO parity error" },
{ F_VFIDPERR, "VFID SRAM parity error" },
{ F_FIDPERR, "FID SRAM parity error" },
{ F_CFGSNPPERR, "config snoop FIFO parity error" },
{ F_HRSPPERR, "HMA channel response data SRAM parity error" },
{ F_HREQRDPERR, "HMA channel read request SRAM parity error" },
{ F_HREQWRPERR, "HMA channel write request SRAM parity error" },
{ F_DRSPPERR, "DMA channel response data SRAM parity error" },
{ F_DREQRDPERR, "DMA channel read request SRAM parity error" },
{ F_CRSPPERR, "CMD channel response data SRAM parity error" },
{ F_CREQRDPERR, "CMD channel read request SRAM parity error" },
{ F_MSTTAGQPERR, "PCIe master tag queue SRAM parity error" },
{ F_TGTTAGQPERR, "PCIe target tag queue FIFO parity error" },
{ F_PIOREQGRPPERR, "PIO request group FIFOs parity error" },
{ F_PIOCPLGRPPERR, "PIO completion group FIFOs parity error" },
{ F_MSIXDIPERR, "MSI-X DI SRAM parity error" },
{ F_MSIXDATAPERR, "MSI-X data SRAM parity error" },
{ F_MSIXADDRHPERR, "MSI-X AddrH SRAM parity error" },
{ F_MSIXADDRLPERR, "MSI-X AddrL SRAM parity error" },
{ F_MSIXSTIPERR, "MSI-X STI SRAM parity error" },
{ F_MSTTIMEOUTPERR, "Master timeout FIFO parity error" },
{ F_MSTGRPPERR, "Master response read queue SRAM parity error" },
{ 0 }
};
struct intr_info pcie_intr_info = {
.name = "PCIE_INT_CAUSE",
.cause_reg = A_PCIE_INT_CAUSE,
.enable_reg = A_PCIE_INT_ENABLE,
.fatal = 0,
.details = NULL,
.actions = NULL,
};
bool fatal = false;
if (is_t4(adap)) {
fatal |= t4_handle_intr(adap, &sysbus_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &pcie_port_intr_info, 0, verbose);
pcie_intr_info.fatal = 0x3fffffc0;
pcie_intr_info.details = pcie_intr_details;
} else {
pcie_intr_info.fatal = is_t5(adap) ? 0xbfffff40 : 0x9fffff40;
pcie_intr_info.details = t5_pcie_intr_details;
}
fatal |= t4_handle_intr(adap, &pcie_intr_info, 0, verbose);
return (fatal);
}
/*
* TP interrupt handler.
*/
static bool tp_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details tp_intr_details[] = {
{ 0x3fffffff, "TP parity error" },
{ F_FLMTXFLSTEMPTY, "TP out of Tx pages" },
{ 0 }
};
static const struct intr_info tp_intr_info = {
.name = "TP_INT_CAUSE",
.cause_reg = A_TP_INT_CAUSE,
.enable_reg = A_TP_INT_ENABLE,
.fatal = 0x7fffffff,
.details = tp_intr_details,
.actions = NULL,
};
return (t4_handle_intr(adap, &tp_intr_info, 0, verbose));
}
/*
* SGE interrupt handler.
*/
static bool sge_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_info sge_int1_info = {
.name = "SGE_INT_CAUSE1",
.cause_reg = A_SGE_INT_CAUSE1,
.enable_reg = A_SGE_INT_ENABLE1,
.fatal = 0xffffffff,
.details = NULL,
.actions = NULL,
};
static const struct intr_info sge_int2_info = {
.name = "SGE_INT_CAUSE2",
.cause_reg = A_SGE_INT_CAUSE2,
.enable_reg = A_SGE_INT_ENABLE2,
.fatal = 0xffffffff,
.details = NULL,
.actions = NULL,
};
static const struct intr_details sge_int3_details[] = {
{ F_ERR_FLM_DBP,
"DBP pointer delivery for invalid context or QID" },
{ F_ERR_FLM_IDMA1 | F_ERR_FLM_IDMA0,
"Invalid QID or header request by IDMA" },
{ F_ERR_FLM_HINT, "FLM hint is for invalid context or QID" },
{ F_ERR_PCIE_ERROR3, "SGE PCIe error for DBP thread 3" },
{ F_ERR_PCIE_ERROR2, "SGE PCIe error for DBP thread 2" },
{ F_ERR_PCIE_ERROR1, "SGE PCIe error for DBP thread 1" },
{ F_ERR_PCIE_ERROR0, "SGE PCIe error for DBP thread 0" },
{ F_ERR_TIMER_ABOVE_MAX_QID,
"SGE GTS with timer 0-5 for IQID > 1023" },
{ F_ERR_CPL_EXCEED_IQE_SIZE,
"SGE received CPL exceeding IQE size" },
{ F_ERR_INVALID_CIDX_INC, "SGE GTS CIDX increment too large" },
{ F_ERR_ITP_TIME_PAUSED, "SGE ITP error" },
{ F_ERR_CPL_OPCODE_0, "SGE received 0-length CPL" },
{ F_ERR_DROPPED_DB, "SGE DB dropped" },
{ F_ERR_DATA_CPL_ON_HIGH_QID1 | F_ERR_DATA_CPL_ON_HIGH_QID0,
"SGE IQID > 1023 received CPL for FL" },
{ F_ERR_BAD_DB_PIDX3 | F_ERR_BAD_DB_PIDX2 | F_ERR_BAD_DB_PIDX1 |
F_ERR_BAD_DB_PIDX0, "SGE DBP pidx increment too large" },
{ F_ERR_ING_PCIE_CHAN, "SGE Ingress PCIe channel mismatch" },
{ F_ERR_ING_CTXT_PRIO,
"Ingress context manager priority user error" },
{ F_ERR_EGR_CTXT_PRIO,
"Egress context manager priority user error" },
{ F_DBFIFO_HP_INT, "High priority DB FIFO threshold reached" },
{ F_DBFIFO_LP_INT, "Low priority DB FIFO threshold reached" },
{ F_REG_ADDRESS_ERR, "Undefined SGE register accessed" },
{ F_INGRESS_SIZE_ERR, "SGE illegal ingress QID" },
{ F_EGRESS_SIZE_ERR, "SGE illegal egress QID" },
{ 0x0000000f, "SGE context access for invalid queue" },
{ 0 }
};
static const struct intr_details t6_sge_int3_details[] = {
{ F_ERR_FLM_DBP,
"DBP pointer delivery for invalid context or QID" },
{ F_ERR_FLM_IDMA1 | F_ERR_FLM_IDMA0,
"Invalid QID or header request by IDMA" },
{ F_ERR_FLM_HINT, "FLM hint is for invalid context or QID" },
{ F_ERR_PCIE_ERROR3, "SGE PCIe error for DBP thread 3" },
{ F_ERR_PCIE_ERROR2, "SGE PCIe error for DBP thread 2" },
{ F_ERR_PCIE_ERROR1, "SGE PCIe error for DBP thread 1" },
{ F_ERR_PCIE_ERROR0, "SGE PCIe error for DBP thread 0" },
{ F_ERR_TIMER_ABOVE_MAX_QID,
"SGE GTS with timer 0-5 for IQID > 1023" },
{ F_ERR_CPL_EXCEED_IQE_SIZE,
"SGE received CPL exceeding IQE size" },
{ F_ERR_INVALID_CIDX_INC, "SGE GTS CIDX increment too large" },
{ F_ERR_ITP_TIME_PAUSED, "SGE ITP error" },
{ F_ERR_CPL_OPCODE_0, "SGE received 0-length CPL" },
{ F_ERR_DROPPED_DB, "SGE DB dropped" },
{ F_ERR_DATA_CPL_ON_HIGH_QID1 | F_ERR_DATA_CPL_ON_HIGH_QID0,
"SGE IQID > 1023 received CPL for FL" },
{ F_ERR_BAD_DB_PIDX3 | F_ERR_BAD_DB_PIDX2 | F_ERR_BAD_DB_PIDX1 |
F_ERR_BAD_DB_PIDX0, "SGE DBP pidx increment too large" },
{ F_ERR_ING_PCIE_CHAN, "SGE Ingress PCIe channel mismatch" },
{ F_ERR_ING_CTXT_PRIO,
"Ingress context manager priority user error" },
{ F_ERR_EGR_CTXT_PRIO,
"Egress context manager priority user error" },
{ F_DBP_TBUF_FULL, "SGE DBP tbuf full" },
{ F_FATAL_WRE_LEN,
"SGE WRE packet less than advertised length" },
{ F_REG_ADDRESS_ERR, "Undefined SGE register accessed" },
{ F_INGRESS_SIZE_ERR, "SGE illegal ingress QID" },
{ F_EGRESS_SIZE_ERR, "SGE illegal egress QID" },
{ 0x0000000f, "SGE context access for invalid queue" },
{ 0 }
};
struct intr_info sge_int3_info = {
.name = "SGE_INT_CAUSE3",
.cause_reg = A_SGE_INT_CAUSE3,
.enable_reg = A_SGE_INT_ENABLE3,
.fatal = F_ERR_CPL_EXCEED_IQE_SIZE,
.details = NULL,
.actions = NULL,
};
static const struct intr_info sge_int4_info = {
.name = "SGE_INT_CAUSE4",
.cause_reg = A_SGE_INT_CAUSE4,
.enable_reg = A_SGE_INT_ENABLE4,
.fatal = 0,
.details = NULL,
.actions = NULL,
};
static const struct intr_info sge_int5_info = {
.name = "SGE_INT_CAUSE5",
.cause_reg = A_SGE_INT_CAUSE5,
.enable_reg = A_SGE_INT_ENABLE5,
.fatal = 0xffffffff,
.details = NULL,
.actions = NULL,
};
static const struct intr_info sge_int6_info = {
.name = "SGE_INT_CAUSE6",
.cause_reg = A_SGE_INT_CAUSE6,
.enable_reg = A_SGE_INT_ENABLE6,
.fatal = 0,
.details = NULL,
.actions = NULL,
};
bool fatal;
u32 v;
if (chip_id(adap) <= CHELSIO_T5) {
sge_int3_info.details = sge_int3_details;
} else {
sge_int3_info.details = t6_sge_int3_details;
}
fatal = false;
fatal |= t4_handle_intr(adap, &sge_int1_info, 0, verbose);
fatal |= t4_handle_intr(adap, &sge_int2_info, 0, verbose);
fatal |= t4_handle_intr(adap, &sge_int3_info, 0, verbose);
fatal |= t4_handle_intr(adap, &sge_int4_info, 0, verbose);
if (chip_id(adap) >= CHELSIO_T5)
fatal |= t4_handle_intr(adap, &sge_int5_info, 0, verbose);
if (chip_id(adap) >= CHELSIO_T6)
fatal |= t4_handle_intr(adap, &sge_int6_info, 0, verbose);
v = t4_read_reg(adap, A_SGE_ERROR_STATS);
if (v & F_ERROR_QID_VALID) {
CH_ERR(adap, "SGE error for QID %u\n", G_ERROR_QID(v));
if (v & F_UNCAPTURED_ERROR)
CH_ERR(adap, "SGE UNCAPTURED_ERROR set (clearing)\n");
t4_write_reg(adap, A_SGE_ERROR_STATS,
F_ERROR_QID_VALID | F_UNCAPTURED_ERROR);
}
return (fatal);
}
/*
* CIM interrupt handler.
*/
static bool cim_intr_handler(struct adapter *adap, int arg, bool verbose)
{
+ static const struct intr_action cim_host_intr_actions[] = {
+ { F_TIMER0INT, 0, t4_os_dump_cimla },
+ { 0 },
+ };
static const struct intr_details cim_host_intr_details[] = {
/* T6+ */
{ F_PCIE2CIMINTFPARERR, "CIM IBQ PCIe interface parity error" },
/* T5+ */
{ F_MA_CIM_INTFPERR, "MA2CIM interface parity error" },
{ F_PLCIM_MSTRSPDATAPARERR,
"PL2CIM master response data parity error" },
{ F_NCSI2CIMINTFPARERR, "CIM IBQ NC-SI interface parity error" },
{ F_SGE2CIMINTFPARERR, "CIM IBQ SGE interface parity error" },
{ F_ULP2CIMINTFPARERR, "CIM IBQ ULP_TX interface parity error" },
{ F_TP2CIMINTFPARERR, "CIM IBQ TP interface parity error" },
{ F_OBQSGERX1PARERR, "CIM OBQ SGE1_RX parity error" },
{ F_OBQSGERX0PARERR, "CIM OBQ SGE0_RX parity error" },
/* T4+ */
{ F_TIEQOUTPARERRINT, "CIM TIEQ outgoing FIFO parity error" },
{ F_TIEQINPARERRINT, "CIM TIEQ incoming FIFO parity error" },
{ F_MBHOSTPARERR, "CIM mailbox host read parity error" },
{ F_MBUPPARERR, "CIM mailbox uP parity error" },
{ F_IBQTP0PARERR, "CIM IBQ TP0 parity error" },
{ F_IBQTP1PARERR, "CIM IBQ TP1 parity error" },
{ F_IBQULPPARERR, "CIM IBQ ULP parity error" },
{ F_IBQSGELOPARERR, "CIM IBQ SGE_LO parity error" },
{ F_IBQSGEHIPARERR | F_IBQPCIEPARERR, /* same bit */
"CIM IBQ PCIe/SGE_HI parity error" },
{ F_IBQNCSIPARERR, "CIM IBQ NC-SI parity error" },
{ F_OBQULP0PARERR, "CIM OBQ ULP0 parity error" },
{ F_OBQULP1PARERR, "CIM OBQ ULP1 parity error" },
{ F_OBQULP2PARERR, "CIM OBQ ULP2 parity error" },
{ F_OBQULP3PARERR, "CIM OBQ ULP3 parity error" },
{ F_OBQSGEPARERR, "CIM OBQ SGE parity error" },
{ F_OBQNCSIPARERR, "CIM OBQ NC-SI parity error" },
{ F_TIMER1INT, "CIM TIMER1 interrupt" },
{ F_TIMER0INT, "CIM TIMER0 interrupt" },
{ F_PREFDROPINT, "CIM control register prefetch drop" },
{ 0}
};
struct intr_info cim_host_intr_info = {
.name = "CIM_HOST_INT_CAUSE",
.cause_reg = A_CIM_HOST_INT_CAUSE,
.enable_reg = A_CIM_HOST_INT_ENABLE,
.fatal = 0,
.details = cim_host_intr_details,
- .actions = NULL,
+ .actions = cim_host_intr_actions,
};
static const struct intr_details cim_host_upacc_intr_details[] = {
{ F_EEPROMWRINT, "CIM EEPROM came out of busy state" },
{ F_TIMEOUTMAINT, "CIM PIF MA timeout" },
{ F_TIMEOUTINT, "CIM PIF timeout" },
{ F_RSPOVRLOOKUPINT, "CIM response FIFO overwrite" },
{ F_REQOVRLOOKUPINT, "CIM request FIFO overwrite" },
{ F_BLKWRPLINT, "CIM block write to PL space" },
{ F_BLKRDPLINT, "CIM block read from PL space" },
{ F_SGLWRPLINT,
"CIM single write to PL space with illegal BEs" },
{ F_SGLRDPLINT,
"CIM single read from PL space with illegal BEs" },
{ F_BLKWRCTLINT, "CIM block write to CTL space" },
{ F_BLKRDCTLINT, "CIM block read from CTL space" },
{ F_SGLWRCTLINT,
"CIM single write to CTL space with illegal BEs" },
{ F_SGLRDCTLINT,
"CIM single read from CTL space with illegal BEs" },
{ F_BLKWREEPROMINT, "CIM block write to EEPROM space" },
{ F_BLKRDEEPROMINT, "CIM block read from EEPROM space" },
{ F_SGLWREEPROMINT,
"CIM single write to EEPROM space with illegal BEs" },
{ F_SGLRDEEPROMINT,
"CIM single read from EEPROM space with illegal BEs" },
{ F_BLKWRFLASHINT, "CIM block write to flash space" },
{ F_BLKRDFLASHINT, "CIM block read from flash space" },
{ F_SGLWRFLASHINT, "CIM single write to flash space" },
{ F_SGLRDFLASHINT,
"CIM single read from flash space with illegal BEs" },
{ F_BLKWRBOOTINT, "CIM block write to boot space" },
{ F_BLKRDBOOTINT, "CIM block read from boot space" },
{ F_SGLWRBOOTINT, "CIM single write to boot space" },
{ F_SGLRDBOOTINT,
"CIM single read from boot space with illegal BEs" },
{ F_ILLWRBEINT, "CIM illegal write BEs" },
{ F_ILLRDBEINT, "CIM illegal read BEs" },
{ F_ILLRDINT, "CIM illegal read" },
{ F_ILLWRINT, "CIM illegal write" },
{ F_ILLTRANSINT, "CIM illegal transaction" },
{ F_RSVDSPACEINT, "CIM reserved space access" },
{0}
};
static const struct intr_info cim_host_upacc_intr_info = {
.name = "CIM_HOST_UPACC_INT_CAUSE",
.cause_reg = A_CIM_HOST_UPACC_INT_CAUSE,
.enable_reg = A_CIM_HOST_UPACC_INT_ENABLE,
.fatal = 0x3fffeeff,
.details = cim_host_upacc_intr_details,
.actions = NULL,
};
static const struct intr_info cim_pf_host_intr_info = {
.name = "CIM_PF_HOST_INT_CAUSE",
.cause_reg = MYPF_REG(A_CIM_PF_HOST_INT_CAUSE),
.enable_reg = MYPF_REG(A_CIM_PF_HOST_INT_ENABLE),
.fatal = 0,
.details = NULL,
.actions = NULL,
};
u32 val, fw_err;
bool fatal;
fw_err = t4_read_reg(adap, A_PCIE_FW);
if (fw_err & F_PCIE_FW_ERR)
t4_report_fw_error(adap);
/*
* When the Firmware detects an internal error which normally wouldn't
* raise a Host Interrupt, it forces a CIM Timer0 interrupt in order
* to make sure the Host sees the Firmware Crash. So if we have a
* Timer0 interrupt and don't see a Firmware Crash, ignore the Timer0
* interrupt.
*/
val = t4_read_reg(adap, A_CIM_HOST_INT_CAUSE);
if (val & F_TIMER0INT && (!(fw_err & F_PCIE_FW_ERR) ||
G_PCIE_FW_EVAL(fw_err) != PCIE_FW_EVAL_CRASH)) {
t4_write_reg(adap, A_CIM_HOST_INT_CAUSE, F_TIMER0INT);
}
fatal = false;
if (is_t4(adap))
cim_host_intr_info.fatal = 0x001fffe2;
else if (is_t5(adap))
cim_host_intr_info.fatal = 0x007dffe2;
else
cim_host_intr_info.fatal = 0x007dffe6;
fatal |= t4_handle_intr(adap, &cim_host_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &cim_host_upacc_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &cim_pf_host_intr_info, 0, verbose);
return (fatal);
}
/*
* ULP RX interrupt handler.
*/
static bool ulprx_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details ulprx_intr_details[] = {
/* T5+ */
{ F_SE_CNT_MISMATCH_1, "ULPRX SE count mismatch in channel 1" },
{ F_SE_CNT_MISMATCH_0, "ULPRX SE count mismatch in channel 0" },
/* T4+ */
{ F_CAUSE_CTX_1, "ULPRX channel 1 context error" },
{ F_CAUSE_CTX_0, "ULPRX channel 0 context error" },
{ 0x007fffff, "ULPRX parity error" },
{ 0 }
};
static const struct intr_info ulprx_intr_info = {
.name = "ULP_RX_INT_CAUSE",
.cause_reg = A_ULP_RX_INT_CAUSE,
.enable_reg = A_ULP_RX_INT_ENABLE,
.fatal = 0x07ffffff,
.details = ulprx_intr_details,
.actions = NULL,
};
static const struct intr_info ulprx_intr2_info = {
.name = "ULP_RX_INT_CAUSE_2",
.cause_reg = A_ULP_RX_INT_CAUSE_2,
.enable_reg = A_ULP_RX_INT_ENABLE_2,
.fatal = 0,
.details = NULL,
.actions = NULL,
};
bool fatal = false;
fatal |= t4_handle_intr(adap, &ulprx_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &ulprx_intr2_info, 0, verbose);
return (fatal);
}
/*
* ULP TX interrupt handler.
*/
static bool ulptx_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details ulptx_intr_details[] = {
{ F_PBL_BOUND_ERR_CH3, "ULPTX channel 3 PBL out of bounds" },
{ F_PBL_BOUND_ERR_CH2, "ULPTX channel 2 PBL out of bounds" },
{ F_PBL_BOUND_ERR_CH1, "ULPTX channel 1 PBL out of bounds" },
{ F_PBL_BOUND_ERR_CH0, "ULPTX channel 0 PBL out of bounds" },
{ 0x0fffffff, "ULPTX parity error" },
{ 0 }
};
static const struct intr_info ulptx_intr_info = {
.name = "ULP_TX_INT_CAUSE",
.cause_reg = A_ULP_TX_INT_CAUSE,
.enable_reg = A_ULP_TX_INT_ENABLE,
.fatal = 0x0fffffff,
.details = ulptx_intr_details,
.actions = NULL,
};
static const struct intr_info ulptx_intr2_info = {
.name = "ULP_TX_INT_CAUSE_2",
.cause_reg = A_ULP_TX_INT_CAUSE_2,
.enable_reg = A_ULP_TX_INT_ENABLE_2,
.fatal = 0,
.details = NULL,
.actions = NULL,
};
bool fatal = false;
fatal |= t4_handle_intr(adap, &ulptx_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &ulptx_intr2_info, 0, verbose);
return (fatal);
}
static bool pmtx_dump_dbg_stats(struct adapter *adap, int arg, bool verbose)
{
int i;
u32 data[17];
t4_read_indirect(adap, A_PM_TX_DBG_CTRL, A_PM_TX_DBG_DATA, &data[0],
ARRAY_SIZE(data), A_PM_TX_DBG_STAT0);
for (i = 0; i < ARRAY_SIZE(data); i++) {
CH_ALERT(adap, " - PM_TX_DBG_STAT%u (0x%x) = 0x%08x\n", i,
A_PM_TX_DBG_STAT0 + i, data[i]);
}
return (false);
}
/*
* PM TX interrupt handler.
*/
static bool pmtx_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_action pmtx_intr_actions[] = {
{ 0xffffffff, 0, pmtx_dump_dbg_stats },
{ 0 },
};
static const struct intr_details pmtx_intr_details[] = {
{ F_PCMD_LEN_OVFL0, "PMTX channel 0 pcmd too large" },
{ F_PCMD_LEN_OVFL1, "PMTX channel 1 pcmd too large" },
{ F_PCMD_LEN_OVFL2, "PMTX channel 2 pcmd too large" },
{ F_ZERO_C_CMD_ERROR, "PMTX 0-length pcmd" },
{ 0x0f000000, "PMTX icspi FIFO2X Rx framing error" },
{ 0x00f00000, "PMTX icspi FIFO Rx framing error" },
{ 0x000f0000, "PMTX icspi FIFO Tx framing error" },
{ 0x0000f000, "PMTX oespi FIFO Rx framing error" },
{ 0x00000f00, "PMTX oespi FIFO Tx framing error" },
{ 0x000000f0, "PMTX oespi FIFO2X Tx framing error" },
{ F_OESPI_PAR_ERROR, "PMTX oespi parity error" },
{ F_DB_OPTIONS_PAR_ERROR, "PMTX db_options parity error" },
{ F_ICSPI_PAR_ERROR, "PMTX icspi parity error" },
{ F_C_PCMD_PAR_ERROR, "PMTX c_pcmd parity error" },
{ 0 }
};
static const struct intr_info pmtx_intr_info = {
.name = "PM_TX_INT_CAUSE",
.cause_reg = A_PM_TX_INT_CAUSE,
.enable_reg = A_PM_TX_INT_ENABLE,
.fatal = 0xffffffff,
.details = pmtx_intr_details,
.actions = pmtx_intr_actions,
};
return (t4_handle_intr(adap, &pmtx_intr_info, 0, verbose));
}
/*
* PM RX interrupt handler.
*/
static bool pmrx_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details pmrx_intr_details[] = {
/* T6+ */
{ 0x18000000, "PMRX ospi overflow" },
{ F_MA_INTF_SDC_ERR, "PMRX MA interface SDC parity error" },
{ F_BUNDLE_LEN_PARERR, "PMRX bundle len FIFO parity error" },
{ F_BUNDLE_LEN_OVFL, "PMRX bundle len FIFO overflow" },
{ F_SDC_ERR, "PMRX SDC error" },
/* T4+ */
{ F_ZERO_E_CMD_ERROR, "PMRX 0-length pcmd" },
{ 0x003c0000, "PMRX iespi FIFO2X Rx framing error" },
{ 0x0003c000, "PMRX iespi Rx framing error" },
{ 0x00003c00, "PMRX iespi Tx framing error" },
{ 0x00000300, "PMRX ocspi Rx framing error" },
{ 0x000000c0, "PMRX ocspi Tx framing error" },
{ 0x00000030, "PMRX ocspi FIFO2X Tx framing error" },
{ F_OCSPI_PAR_ERROR, "PMRX ocspi parity error" },
{ F_DB_OPTIONS_PAR_ERROR, "PMRX db_options parity error" },
{ F_IESPI_PAR_ERROR, "PMRX iespi parity error" },
{ F_E_PCMD_PAR_ERROR, "PMRX e_pcmd parity error"},
{ 0 }
};
static const struct intr_info pmrx_intr_info = {
.name = "PM_RX_INT_CAUSE",
.cause_reg = A_PM_RX_INT_CAUSE,
.enable_reg = A_PM_RX_INT_ENABLE,
.fatal = 0x1fffffff,
.details = pmrx_intr_details,
.actions = NULL,
};
return (t4_handle_intr(adap, &pmrx_intr_info, 0, verbose));
}
/*
* CPL switch interrupt handler.
*/
static bool cplsw_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details cplsw_intr_details[] = {
/* T5+ */
{ F_PERR_CPL_128TO128_1, "CPLSW 128TO128 FIFO1 parity error" },
{ F_PERR_CPL_128TO128_0, "CPLSW 128TO128 FIFO0 parity error" },
/* T4+ */
{ F_CIM_OP_MAP_PERR, "CPLSW CIM op_map parity error" },
{ F_CIM_OVFL_ERROR, "CPLSW CIM overflow" },
{ F_TP_FRAMING_ERROR, "CPLSW TP framing error" },
{ F_SGE_FRAMING_ERROR, "CPLSW SGE framing error" },
{ F_CIM_FRAMING_ERROR, "CPLSW CIM framing error" },
{ F_ZERO_SWITCH_ERROR, "CPLSW no-switch error" },
{ 0 }
};
struct intr_info cplsw_intr_info = {
.name = "CPL_INTR_CAUSE",
.cause_reg = A_CPL_INTR_CAUSE,
.enable_reg = A_CPL_INTR_ENABLE,
.fatal = 0,
.details = cplsw_intr_details,
.actions = NULL,
};
if (is_t4(adap))
cplsw_intr_info.fatal = 0x2f;
else if (is_t5(adap))
cplsw_intr_info.fatal = 0xef;
else
cplsw_intr_info.fatal = 0xff;
return (t4_handle_intr(adap, &cplsw_intr_info, 0, verbose));
}
#define T4_LE_FATAL_MASK (F_PARITYERR | F_UNKNOWNCMD | F_REQQPARERR)
#define T6_LE_PERRCRC_MASK (F_PIPELINEERR | F_CLIPTCAMACCFAIL | \
F_SRVSRAMACCFAIL | F_CLCAMCRCPARERR | F_CLCAMINTPERR | F_SSRAMINTPERR | \
F_SRVSRAMPERR | F_VFSRAMPERR | F_TCAMINTPERR | F_TCAMCRCERR | \
F_HASHTBLMEMACCERR | F_MAIFWRINTPERR | F_HASHTBLMEMCRCERR)
#define T6_LE_FATAL_MASK (T6_LE_PERRCRC_MASK | F_T6_UNKNOWNCMD | \
F_TCAMACCFAIL | F_HASHTBLACCFAIL | F_CMDTIDERR | F_CMDPRSRINTERR | \
F_TOTCNTERR | F_CLCAMFIFOERR | F_CLIPSUBERR)
/*
* LE interrupt handler.
*/
static bool le_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details le_intr_details[] = {
{ F_REQQPARERR, "LE request queue parity error" },
{ F_UNKNOWNCMD, "LE unknown command" },
{ F_ACTRGNFULL, "LE active region full" },
{ F_PARITYERR, "LE parity error" },
{ F_LIPMISS, "LE LIP miss" },
{ F_LIP0, "LE 0 LIP error" },
{ 0 }
};
static const struct intr_details t6_le_intr_details[] = {
{ F_CLIPSUBERR, "LE CLIP CAM reverse substitution error" },
{ F_CLCAMFIFOERR, "LE CLIP CAM internal FIFO error" },
{ F_CTCAMINVLDENT, "Invalid IPv6 CLIP TCAM entry" },
{ F_TCAMINVLDENT, "Invalid IPv6 TCAM entry" },
{ F_TOTCNTERR, "LE total active < TCAM count" },
{ F_CMDPRSRINTERR, "LE internal error in parser" },
{ F_CMDTIDERR, "Incorrect tid in LE command" },
{ F_T6_ACTRGNFULL, "LE active region full" },
{ F_T6_ACTCNTIPV6TZERO, "LE IPv6 active open TCAM counter is negative" },
{ F_T6_ACTCNTIPV4TZERO, "LE IPv4 active open TCAM counter is negative" },
{ F_T6_ACTCNTIPV6ZERO, "LE IPv6 active open counter is negative" },
{ F_T6_ACTCNTIPV4ZERO, "LE IPv4 active open counter is negative" },
{ F_HASHTBLACCFAIL, "Hash table read error (proto conflict)" },
{ F_TCAMACCFAIL, "LE TCAM access failure" },
{ F_T6_UNKNOWNCMD, "LE unknown command" },
{ F_T6_LIP0, "LE found 0 LIP during CLIP substitution" },
{ F_T6_LIPMISS, "LE CLIP lookup miss" },
{ T6_LE_PERRCRC_MASK, "LE parity/CRC error" },
{ 0 }
};
struct intr_info le_intr_info = {
.name = "LE_DB_INT_CAUSE",
.cause_reg = A_LE_DB_INT_CAUSE,
.enable_reg = A_LE_DB_INT_ENABLE,
.fatal = 0,
.details = NULL,
.actions = NULL,
};
if (chip_id(adap) <= CHELSIO_T5) {
le_intr_info.details = le_intr_details;
le_intr_info.fatal = T4_LE_FATAL_MASK;
if (is_t5(adap))
le_intr_info.fatal |= F_VFPARERR;
} else {
le_intr_info.details = t6_le_intr_details;
le_intr_info.fatal = T6_LE_FATAL_MASK;
}
return (t4_handle_intr(adap, &le_intr_info, 0, verbose));
}
/*
* MPS interrupt handler.
*/
static bool mps_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details mps_rx_perr_intr_details[] = {
{ 0xffffffff, "MPS Rx parity error" },
{ 0 }
};
static const struct intr_info mps_rx_perr_intr_info = {
.name = "MPS_RX_PERR_INT_CAUSE",
.cause_reg = A_MPS_RX_PERR_INT_CAUSE,
.enable_reg = A_MPS_RX_PERR_INT_ENABLE,
.fatal = 0xffffffff,
.details = mps_rx_perr_intr_details,
.actions = NULL,
};
static const struct intr_details mps_tx_intr_details[] = {
{ F_PORTERR, "MPS Tx destination port is disabled" },
{ F_FRMERR, "MPS Tx framing error" },
{ F_SECNTERR, "MPS Tx SOP/EOP error" },
{ F_BUBBLE, "MPS Tx underflow" },
{ V_TXDESCFIFO(M_TXDESCFIFO), "MPS Tx desc FIFO parity error" },
{ V_TXDATAFIFO(M_TXDATAFIFO), "MPS Tx data FIFO parity error" },
{ F_NCSIFIFO, "MPS Tx NC-SI FIFO parity error" },
{ V_TPFIFO(M_TPFIFO), "MPS Tx TP FIFO parity error" },
{ 0 }
};
struct intr_info mps_tx_intr_info = {
.name = "MPS_TX_INT_CAUSE",
.cause_reg = A_MPS_TX_INT_CAUSE,
.enable_reg = A_MPS_TX_INT_ENABLE,
.fatal = 0x1ffff,
.details = mps_tx_intr_details,
.actions = NULL,
};
static const struct intr_details mps_trc_intr_details[] = {
{ F_MISCPERR, "MPS TRC misc parity error" },
{ V_PKTFIFO(M_PKTFIFO), "MPS TRC packet FIFO parity error" },
{ V_FILTMEM(M_FILTMEM), "MPS TRC filter parity error" },
{ 0 }
};
static const struct intr_info mps_trc_intr_info = {
.name = "MPS_TRC_INT_CAUSE",
.cause_reg = A_MPS_TRC_INT_CAUSE,
.enable_reg = A_MPS_TRC_INT_ENABLE,
.fatal = F_MISCPERR | V_PKTFIFO(M_PKTFIFO) | V_FILTMEM(M_FILTMEM),
.details = mps_trc_intr_details,
.actions = NULL,
};
static const struct intr_details mps_stat_sram_intr_details[] = {
{ 0xffffffff, "MPS statistics SRAM parity error" },
{ 0 }
};
static const struct intr_info mps_stat_sram_intr_info = {
.name = "MPS_STAT_PERR_INT_CAUSE_SRAM",
.cause_reg = A_MPS_STAT_PERR_INT_CAUSE_SRAM,
.enable_reg = A_MPS_STAT_PERR_INT_ENABLE_SRAM,
.fatal = 0x1fffffff,
.details = mps_stat_sram_intr_details,
.actions = NULL,
};
static const struct intr_details mps_stat_tx_intr_details[] = {
{ 0xffffff, "MPS statistics Tx FIFO parity error" },
{ 0 }
};
static const struct intr_info mps_stat_tx_intr_info = {
.name = "MPS_STAT_PERR_INT_CAUSE_TX_FIFO",
.cause_reg = A_MPS_STAT_PERR_INT_CAUSE_TX_FIFO,
.enable_reg = A_MPS_STAT_PERR_INT_ENABLE_TX_FIFO,
.fatal = 0xffffff,
.details = mps_stat_tx_intr_details,
.actions = NULL,
};
static const struct intr_details mps_stat_rx_intr_details[] = {
{ 0xffffff, "MPS statistics Rx FIFO parity error" },
{ 0 }
};
static const struct intr_info mps_stat_rx_intr_info = {
.name = "MPS_STAT_PERR_INT_CAUSE_RX_FIFO",
.cause_reg = A_MPS_STAT_PERR_INT_CAUSE_RX_FIFO,
.enable_reg = A_MPS_STAT_PERR_INT_ENABLE_RX_FIFO,
.fatal = 0xffffff,
.details = mps_stat_rx_intr_details,
.actions = NULL,
};
static const struct intr_details mps_cls_intr_details[] = {
{ F_HASHSRAM, "MPS hash SRAM parity error" },
{ F_MATCHTCAM, "MPS match TCAM parity error" },
{ F_MATCHSRAM, "MPS match SRAM parity error" },
{ 0 }
};
static const struct intr_info mps_cls_intr_info = {
.name = "MPS_CLS_INT_CAUSE",
.cause_reg = A_MPS_CLS_INT_CAUSE,
.enable_reg = A_MPS_CLS_INT_ENABLE,
.fatal = F_MATCHSRAM | F_MATCHTCAM | F_HASHSRAM,
.details = mps_cls_intr_details,
.actions = NULL,
};
static const struct intr_details mps_stat_sram1_intr_details[] = {
{ 0xff, "MPS statistics SRAM1 parity error" },
{ 0 }
};
static const struct intr_info mps_stat_sram1_intr_info = {
.name = "MPS_STAT_PERR_INT_CAUSE_SRAM1",
.cause_reg = A_MPS_STAT_PERR_INT_CAUSE_SRAM1,
.enable_reg = A_MPS_STAT_PERR_INT_ENABLE_SRAM1,
.fatal = 0xff,
.details = mps_stat_sram1_intr_details,
.actions = NULL,
};
bool fatal;
if (chip_id(adap) == CHELSIO_T6)
mps_tx_intr_info.fatal &= ~F_BUBBLE;
fatal = false;
fatal |= t4_handle_intr(adap, &mps_rx_perr_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &mps_tx_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &mps_trc_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &mps_stat_sram_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &mps_stat_tx_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &mps_stat_rx_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &mps_cls_intr_info, 0, verbose);
if (chip_id(adap) > CHELSIO_T4) {
fatal |= t4_handle_intr(adap, &mps_stat_sram1_intr_info, 0,
verbose);
}
t4_write_reg(adap, A_MPS_INT_CAUSE, is_t4(adap) ? 0 : 0xffffffff);
t4_read_reg(adap, A_MPS_INT_CAUSE); /* flush */
return (fatal);
}
/*
* EDC/MC interrupt handler.
*/
static bool mem_intr_handler(struct adapter *adap, int idx, bool verbose)
{
static const char name[4][5] = { "EDC0", "EDC1", "MC0", "MC1" };
unsigned int count_reg, v;
static const struct intr_details mem_intr_details[] = {
{ F_ECC_UE_INT_CAUSE, "Uncorrectable ECC data error(s)" },
{ F_ECC_CE_INT_CAUSE, "Correctable ECC data error(s)" },
{ F_PERR_INT_CAUSE, "FIFO parity error" },
{ 0 }
};
struct intr_info ii = {
.fatal = F_PERR_INT_CAUSE | F_ECC_UE_INT_CAUSE,
.details = mem_intr_details,
.actions = NULL,
};
bool fatal;
switch (idx) {
case MEM_EDC0:
ii.name = "EDC0_INT_CAUSE";
ii.cause_reg = EDC_REG(A_EDC_INT_CAUSE, 0);
ii.enable_reg = EDC_REG(A_EDC_INT_ENABLE, 0);
count_reg = EDC_REG(A_EDC_ECC_STATUS, 0);
break;
case MEM_EDC1:
ii.name = "EDC1_INT_CAUSE";
ii.cause_reg = EDC_REG(A_EDC_INT_CAUSE, 1);
ii.enable_reg = EDC_REG(A_EDC_INT_ENABLE, 1);
count_reg = EDC_REG(A_EDC_ECC_STATUS, 1);
break;
case MEM_MC0:
ii.name = "MC0_INT_CAUSE";
if (is_t4(adap)) {
ii.cause_reg = A_MC_INT_CAUSE;
ii.enable_reg = A_MC_INT_ENABLE;
count_reg = A_MC_ECC_STATUS;
} else {
ii.cause_reg = A_MC_P_INT_CAUSE;
ii.enable_reg = A_MC_P_INT_ENABLE;
count_reg = A_MC_P_ECC_STATUS;
}
break;
case MEM_MC1:
ii.name = "MC1_INT_CAUSE";
ii.cause_reg = MC_REG(A_MC_P_INT_CAUSE, 1);
ii.enable_reg = MC_REG(A_MC_P_INT_ENABLE, 1);
count_reg = MC_REG(A_MC_P_ECC_STATUS, 1);
break;
}
fatal = t4_handle_intr(adap, &ii, 0, verbose);
v = t4_read_reg(adap, count_reg);
if (v != 0) {
if (G_ECC_UECNT(v) != 0) {
CH_ALERT(adap,
"%s: %u uncorrectable ECC data error(s)\n",
name[idx], G_ECC_UECNT(v));
}
if (G_ECC_CECNT(v) != 0) {
if (idx <= MEM_EDC1)
t4_edc_err_read(adap, idx);
CH_WARN_RATELIMIT(adap,
"%s: %u correctable ECC data error(s)\n",
name[idx], G_ECC_CECNT(v));
}
t4_write_reg(adap, count_reg, 0xffffffff);
}
return (fatal);
}
static bool ma_wrap_status(struct adapter *adap, int arg, bool verbose)
{
u32 v;
v = t4_read_reg(adap, A_MA_INT_WRAP_STATUS);
CH_ALERT(adap,
"MA address wrap-around error by client %u to address %#x\n",
G_MEM_WRAP_CLIENT_NUM(v), G_MEM_WRAP_ADDRESS(v) << 4);
t4_write_reg(adap, A_MA_INT_WRAP_STATUS, v);
return (false);
}
/*
* MA interrupt handler.
*/
static bool ma_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_action ma_intr_actions[] = {
{ F_MEM_WRAP_INT_CAUSE, 0, ma_wrap_status },
{ 0 },
};
static const struct intr_info ma_intr_info = {
.name = "MA_INT_CAUSE",
.cause_reg = A_MA_INT_CAUSE,
.enable_reg = A_MA_INT_ENABLE,
.fatal = F_MEM_WRAP_INT_CAUSE | F_MEM_PERR_INT_CAUSE |
F_MEM_TO_INT_CAUSE,
.details = NULL,
.actions = ma_intr_actions,
};
static const struct intr_info ma_perr_status1 = {
.name = "MA_PARITY_ERROR_STATUS1",
.cause_reg = A_MA_PARITY_ERROR_STATUS1,
.enable_reg = A_MA_PARITY_ERROR_ENABLE1,
.fatal = 0xffffffff,
.details = NULL,
.actions = NULL,
};
static const struct intr_info ma_perr_status2 = {
.name = "MA_PARITY_ERROR_STATUS2",
.cause_reg = A_MA_PARITY_ERROR_STATUS2,
.enable_reg = A_MA_PARITY_ERROR_ENABLE2,
.fatal = 0xffffffff,
.details = NULL,
.actions = NULL,
};
bool fatal;
fatal = false;
fatal |= t4_handle_intr(adap, &ma_intr_info, 0, verbose);
fatal |= t4_handle_intr(adap, &ma_perr_status1, 0, verbose);
if (chip_id(adap) > CHELSIO_T4)
fatal |= t4_handle_intr(adap, &ma_perr_status2, 0, verbose);
return (fatal);
}
/*
* SMB interrupt handler.
*/
static bool smb_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details smb_intr_details[] = {
{ F_MSTTXFIFOPARINT, "SMB master Tx FIFO parity error" },
{ F_MSTRXFIFOPARINT, "SMB master Rx FIFO parity error" },
{ F_SLVFIFOPARINT, "SMB slave FIFO parity error" },
{ 0 }
};
static const struct intr_info smb_intr_info = {
.name = "SMB_INT_CAUSE",
.cause_reg = A_SMB_INT_CAUSE,
.enable_reg = A_SMB_INT_ENABLE,
.fatal = F_SLVFIFOPARINT | F_MSTRXFIFOPARINT | F_MSTTXFIFOPARINT,
.details = smb_intr_details,
.actions = NULL,
};
return (t4_handle_intr(adap, &smb_intr_info, 0, verbose));
}
/*
* NC-SI interrupt handler.
*/
static bool ncsi_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details ncsi_intr_details[] = {
{ F_CIM_DM_PRTY_ERR, "NC-SI CIM parity error" },
{ F_MPS_DM_PRTY_ERR, "NC-SI MPS parity error" },
{ F_TXFIFO_PRTY_ERR, "NC-SI Tx FIFO parity error" },
{ F_RXFIFO_PRTY_ERR, "NC-SI Rx FIFO parity error" },
{ 0 }
};
static const struct intr_info ncsi_intr_info = {
.name = "NCSI_INT_CAUSE",
.cause_reg = A_NCSI_INT_CAUSE,
.enable_reg = A_NCSI_INT_ENABLE,
.fatal = F_RXFIFO_PRTY_ERR | F_TXFIFO_PRTY_ERR |
F_MPS_DM_PRTY_ERR | F_CIM_DM_PRTY_ERR,
.details = ncsi_intr_details,
.actions = NULL,
};
return (t4_handle_intr(adap, &ncsi_intr_info, 0, verbose));
}
/*
* MAC interrupt handler.
*/
static bool mac_intr_handler(struct adapter *adap, int port, bool verbose)
{
static const struct intr_details mac_intr_details[] = {
{ F_TXFIFO_PRTY_ERR, "MAC Tx FIFO parity error" },
{ F_RXFIFO_PRTY_ERR, "MAC Rx FIFO parity error" },
{ 0 }
};
char name[32];
struct intr_info ii;
bool fatal = false;
if (is_t4(adap)) {
snprintf(name, sizeof(name), "XGMAC_PORT%u_INT_CAUSE", port);
ii.name = &name[0];
ii.cause_reg = PORT_REG(port, A_XGMAC_PORT_INT_CAUSE);
ii.enable_reg = PORT_REG(port, A_XGMAC_PORT_INT_EN);
ii.fatal = F_TXFIFO_PRTY_ERR | F_RXFIFO_PRTY_ERR;
ii.details = mac_intr_details;
ii.actions = NULL;
} else {
snprintf(name, sizeof(name), "MAC_PORT%u_INT_CAUSE", port);
ii.name = &name[0];
ii.cause_reg = T5_PORT_REG(port, A_MAC_PORT_INT_CAUSE);
ii.enable_reg = T5_PORT_REG(port, A_MAC_PORT_INT_EN);
ii.fatal = F_TXFIFO_PRTY_ERR | F_RXFIFO_PRTY_ERR;
ii.details = mac_intr_details;
ii.actions = NULL;
}
fatal |= t4_handle_intr(adap, &ii, 0, verbose);
if (chip_id(adap) >= CHELSIO_T5) {
snprintf(name, sizeof(name), "MAC_PORT%u_PERR_INT_CAUSE", port);
ii.name = &name[0];
ii.cause_reg = T5_PORT_REG(port, A_MAC_PORT_PERR_INT_CAUSE);
ii.enable_reg = T5_PORT_REG(port, A_MAC_PORT_PERR_INT_EN);
ii.fatal = 0;
ii.details = NULL;
ii.actions = NULL;
fatal |= t4_handle_intr(adap, &ii, 0, verbose);
}
if (chip_id(adap) >= CHELSIO_T6) {
snprintf(name, sizeof(name), "MAC_PORT%u_PERR_INT_CAUSE_100G", port);
ii.name = &name[0];
ii.cause_reg = T5_PORT_REG(port, A_MAC_PORT_PERR_INT_CAUSE_100G);
ii.enable_reg = T5_PORT_REG(port, A_MAC_PORT_PERR_INT_EN_100G);
ii.fatal = 0;
ii.details = NULL;
ii.actions = NULL;
fatal |= t4_handle_intr(adap, &ii, 0, verbose);
}
return (fatal);
}
static bool plpl_intr_handler(struct adapter *adap, int arg, bool verbose)
{
static const struct intr_details plpl_intr_details[] = {
{ F_FATALPERR, "Fatal parity error" },
{ F_PERRVFID, "VFID_MAP parity error" },
{ 0 }
};
struct intr_info plpl_intr_info = {
.name = "PL_PL_INT_CAUSE",
.cause_reg = A_PL_PL_INT_CAUSE,
.enable_reg = A_PL_PL_INT_ENABLE,
.fatal = F_FATALPERR,
.details = plpl_intr_details,
.actions = NULL,
};
if (is_t4(adap))
plpl_intr_info.fatal |= F_PERRVFID;
return (t4_handle_intr(adap, &plpl_intr_info, 0, verbose));
}
/**
* t4_slow_intr_handler - control path interrupt handler
* @adap: the adapter
* @verbose: increased verbosity, for debug
*
* T4 interrupt handler for non-data global interrupt events, e.g., errors.
* The designation 'slow' is because it involves register reads, while
* data interrupts typically don't involve any MMIOs.
*/
int t4_slow_intr_handler(struct adapter *adap, bool verbose)
{
static const struct intr_details pl_intr_details[] = {
{ F_MC1, "MC1" },
{ F_UART, "UART" },
{ F_ULP_TX, "ULP TX" },
{ F_SGE, "SGE" },
{ F_HMA, "HMA" },
{ F_CPL_SWITCH, "CPL Switch" },
{ F_ULP_RX, "ULP RX" },
{ F_PM_RX, "PM RX" },
{ F_PM_TX, "PM TX" },
{ F_MA, "MA" },
{ F_TP, "TP" },
{ F_LE, "LE" },
{ F_EDC1, "EDC1" },
{ F_EDC0, "EDC0" },
{ F_MC, "MC0" },
{ F_PCIE, "PCIE" },
{ F_PMU, "PMU" },
{ F_MAC3, "MAC3" },
{ F_MAC2, "MAC2" },
{ F_MAC1, "MAC1" },
{ F_MAC0, "MAC0" },
{ F_SMB, "SMB" },
{ F_SF, "SF" },
{ F_PL, "PL" },
{ F_NCSI, "NC-SI" },
{ F_MPS, "MPS" },
{ F_MI, "MI" },
{ F_DBG, "DBG" },
{ F_I2CM, "I2CM" },
{ F_CIM, "CIM" },
{ 0 }
};
static const struct intr_info pl_perr_cause = {
.name = "PL_PERR_CAUSE",
.cause_reg = A_PL_PERR_CAUSE,
.enable_reg = A_PL_PERR_ENABLE,
.fatal = 0xffffffff,
.details = pl_intr_details,
.actions = NULL,
};
static const struct intr_action pl_intr_action[] = {
{ F_MC1, MEM_MC1, mem_intr_handler },
{ F_ULP_TX, -1, ulptx_intr_handler },
{ F_SGE, -1, sge_intr_handler },
{ F_CPL_SWITCH, -1, cplsw_intr_handler },
{ F_ULP_RX, -1, ulprx_intr_handler },
{ F_PM_RX, -1, pmrx_intr_handler},
{ F_PM_TX, -1, pmtx_intr_handler},
{ F_MA, -1, ma_intr_handler },
{ F_TP, -1, tp_intr_handler },
{ F_LE, -1, le_intr_handler },
{ F_EDC1, MEM_EDC1, mem_intr_handler },
{ F_EDC0, MEM_EDC0, mem_intr_handler },
{ F_MC0, MEM_MC0, mem_intr_handler },
{ F_PCIE, -1, pcie_intr_handler },
{ F_MAC3, 3, mac_intr_handler},
{ F_MAC2, 2, mac_intr_handler},
{ F_MAC1, 1, mac_intr_handler},
{ F_MAC0, 0, mac_intr_handler},
{ F_SMB, -1, smb_intr_handler},
{ F_PL, -1, plpl_intr_handler },
{ F_NCSI, -1, ncsi_intr_handler},
{ F_MPS, -1, mps_intr_handler },
{ F_CIM, -1, cim_intr_handler },
{ 0 }
};
static const struct intr_info pl_intr_info = {
.name = "PL_INT_CAUSE",
.cause_reg = A_PL_INT_CAUSE,
.enable_reg = A_PL_INT_ENABLE,
.fatal = 0,
.details = pl_intr_details,
.actions = pl_intr_action,
};
bool fatal;
u32 perr;
perr = t4_read_reg(adap, pl_perr_cause.cause_reg);
if (verbose || perr != 0) {
t4_show_intr_info(adap, &pl_perr_cause, perr);
if (perr != 0)
t4_write_reg(adap, pl_perr_cause.cause_reg, perr);
if (verbose)
perr |= t4_read_reg(adap, pl_intr_info.enable_reg);
}
fatal = t4_handle_intr(adap, &pl_intr_info, perr, verbose);
if (fatal)
t4_fatal_err(adap, false);
return (0);
}
#define PF_INTR_MASK (F_PFSW | F_PFCIM)
/**
* t4_intr_enable - enable interrupts
* @adap: the adapter whose interrupts should be enabled
*
* Enable PF-specific interrupts for the calling function and the top-level
* interrupt concentrator for global interrupts. Interrupts are already
* enabled at each module, here we just enable the roots of the interrupt
* hierarchies.
*
* Note: this function should be called only when the driver manages
* non-PF-specific interrupts from the various HW modules. Only one PCI
* function at a time should be doing this.
*/
void t4_intr_enable(struct adapter *adap)
{
u32 val = 0;
if (chip_id(adap) <= CHELSIO_T5)
val = F_ERR_DROPPED_DB | F_ERR_EGR_CTXT_PRIO | F_DBFIFO_HP_INT;
else
val = F_ERR_PCIE_ERROR0 | F_ERR_PCIE_ERROR1 | F_FATAL_WRE_LEN;
val |= F_ERR_CPL_EXCEED_IQE_SIZE | F_ERR_INVALID_CIDX_INC |
F_ERR_CPL_OPCODE_0 | F_ERR_DATA_CPL_ON_HIGH_QID1 |
F_INGRESS_SIZE_ERR | F_ERR_DATA_CPL_ON_HIGH_QID0 |
F_ERR_BAD_DB_PIDX3 | F_ERR_BAD_DB_PIDX2 | F_ERR_BAD_DB_PIDX1 |
F_ERR_BAD_DB_PIDX0 | F_ERR_ING_CTXT_PRIO | F_DBFIFO_LP_INT |
F_EGRESS_SIZE_ERR;
t4_set_reg_field(adap, A_SGE_INT_ENABLE3, val, val);
t4_write_reg(adap, MYPF_REG(A_PL_PF_INT_ENABLE), PF_INTR_MASK);
t4_set_reg_field(adap, A_PL_INT_MAP0, 0, 1 << adap->pf);
}
/**
* t4_intr_disable - disable interrupts
* @adap: the adapter whose interrupts should be disabled
*
* Disable interrupts. We only disable the top-level interrupt
* concentrators. The caller must be a PCI function managing global
* interrupts.
*/
void t4_intr_disable(struct adapter *adap)
{
t4_write_reg(adap, MYPF_REG(A_PL_PF_INT_ENABLE), 0);
t4_set_reg_field(adap, A_PL_INT_MAP0, 1 << adap->pf, 0);
}
/**
* t4_intr_clear - clear all interrupts
* @adap: the adapter whose interrupts should be cleared
*
* Clears all interrupts. The caller must be a PCI function managing
* global interrupts.
*/
void t4_intr_clear(struct adapter *adap)
{
static const u32 cause_reg[] = {
A_CIM_HOST_INT_CAUSE,
A_CIM_HOST_UPACC_INT_CAUSE,
MYPF_REG(A_CIM_PF_HOST_INT_CAUSE),
A_CPL_INTR_CAUSE,
EDC_REG(A_EDC_INT_CAUSE, 0), EDC_REG(A_EDC_INT_CAUSE, 1),
A_LE_DB_INT_CAUSE,
A_MA_INT_WRAP_STATUS,
A_MA_PARITY_ERROR_STATUS1,
A_MA_INT_CAUSE,
A_MPS_CLS_INT_CAUSE,
A_MPS_RX_PERR_INT_CAUSE,
A_MPS_STAT_PERR_INT_CAUSE_RX_FIFO,
A_MPS_STAT_PERR_INT_CAUSE_SRAM,
A_MPS_TRC_INT_CAUSE,
A_MPS_TX_INT_CAUSE,
A_MPS_STAT_PERR_INT_CAUSE_TX_FIFO,
A_NCSI_INT_CAUSE,
A_PCIE_INT_CAUSE,
A_PCIE_NONFAT_ERR,
A_PL_PL_INT_CAUSE,
A_PM_RX_INT_CAUSE,
A_PM_TX_INT_CAUSE,
A_SGE_INT_CAUSE1,
A_SGE_INT_CAUSE2,
A_SGE_INT_CAUSE3,
A_SGE_INT_CAUSE4,
A_SMB_INT_CAUSE,
A_TP_INT_CAUSE,
A_ULP_RX_INT_CAUSE,
A_ULP_RX_INT_CAUSE_2,
A_ULP_TX_INT_CAUSE,
A_ULP_TX_INT_CAUSE_2,
MYPF_REG(A_PL_PF_INT_CAUSE),
};
int i;
const int nchan = adap->chip_params->nchan;
for (i = 0; i < ARRAY_SIZE(cause_reg); i++)
t4_write_reg(adap, cause_reg[i], 0xffffffff);
if (is_t4(adap)) {
t4_write_reg(adap, A_PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
0xffffffff);
t4_write_reg(adap, A_PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
0xffffffff);
t4_write_reg(adap, A_MC_INT_CAUSE, 0xffffffff);
for (i = 0; i < nchan; i++) {
t4_write_reg(adap, PORT_REG(i, A_XGMAC_PORT_INT_CAUSE),
0xffffffff);
}
}
if (chip_id(adap) >= CHELSIO_T5) {
t4_write_reg(adap, A_MA_PARITY_ERROR_STATUS2, 0xffffffff);
t4_write_reg(adap, A_MPS_STAT_PERR_INT_CAUSE_SRAM1, 0xffffffff);
t4_write_reg(adap, A_SGE_INT_CAUSE5, 0xffffffff);
t4_write_reg(adap, A_MC_P_INT_CAUSE, 0xffffffff);
if (is_t5(adap)) {
t4_write_reg(adap, MC_REG(A_MC_P_INT_CAUSE, 1),
0xffffffff);
}
for (i = 0; i < nchan; i++) {
t4_write_reg(adap, T5_PORT_REG(i,
A_MAC_PORT_PERR_INT_CAUSE), 0xffffffff);
if (chip_id(adap) > CHELSIO_T5) {
t4_write_reg(adap, T5_PORT_REG(i,
A_MAC_PORT_PERR_INT_CAUSE_100G),
0xffffffff);
}
t4_write_reg(adap, T5_PORT_REG(i, A_MAC_PORT_INT_CAUSE),
0xffffffff);
}
}
if (chip_id(adap) >= CHELSIO_T6) {
t4_write_reg(adap, A_SGE_INT_CAUSE6, 0xffffffff);
}
t4_write_reg(adap, A_MPS_INT_CAUSE, is_t4(adap) ? 0 : 0xffffffff);
t4_write_reg(adap, A_PL_PERR_CAUSE, 0xffffffff);
t4_write_reg(adap, A_PL_INT_CAUSE, 0xffffffff);
(void) t4_read_reg(adap, A_PL_INT_CAUSE); /* flush */
}
/**
* hash_mac_addr - return the hash value of a MAC address
* @addr: the 48-bit Ethernet MAC address
*
* Hashes a MAC address according to the hash function used by HW inexact
* (hash) address matching.
*/
static int hash_mac_addr(const u8 *addr)
{
u32 a = ((u32)addr[0] << 16) | ((u32)addr[1] << 8) | addr[2];
u32 b = ((u32)addr[3] << 16) | ((u32)addr[4] << 8) | addr[5];
a ^= b;
a ^= (a >> 12);
a ^= (a >> 6);
return a & 0x3f;
}
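The 6-bit hash above folds the two 24-bit halves of the MAC address together and then folds the result down twice. A stand-alone copy for experimenting outside the driver (userland types, same arithmetic):

```c
#include <stdint.h>

/* Stand-alone copy of hash_mac_addr: the 6-bit hash the hardware uses
 * for inexact (hash) MAC address matching. */
static int demo_hash_mac_addr(const uint8_t *addr)
{
	uint32_t a = ((uint32_t)addr[0] << 16) | ((uint32_t)addr[1] << 8) |
	    addr[2];
	uint32_t b = ((uint32_t)addr[3] << 16) | ((uint32_t)addr[4] << 8) |
	    addr[5];

	a ^= b;			/* fold the two halves together */
	a ^= (a >> 12);		/* fold 24 bits down to 12 */
	a ^= (a >> 6);		/* fold 12 bits down to 6 */
	return (a & 0x3f);
}

/* Sample addresses: 00:00:00:00:00:01 and the broadcast address. */
static const uint8_t demo_mac1[6] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 };
static const uint8_t demo_mac2[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
```

For 00:00:00:00:00:01 the halves XOR to 1 and both folds leave it unchanged, giving bucket 1; the broadcast address's identical halves cancel to bucket 0.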
/**
* t4_config_rss_range - configure a portion of the RSS mapping table
* @adapter: the adapter
* @mbox: mbox to use for the FW command
* @viid: virtual interface whose RSS subtable is to be written
* @start: start entry in the table to write
* @n: how many table entries to write
* @rspq: values for the "response queue" (Ingress Queue) lookup table
* @nrspq: number of values in @rspq
*
* Programs the selected part of the VI's RSS mapping table with the
* provided values. If @nrspq < @n the supplied values are used repeatedly
* until the full table range is populated.
*
* The caller must ensure the values in @rspq are in the range allowed for
* @viid.
*/
int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
int start, int n, const u16 *rspq, unsigned int nrspq)
{
int ret;
const u16 *rsp = rspq;
const u16 *rsp_end = rspq + nrspq;
struct fw_rss_ind_tbl_cmd cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_RSS_IND_TBL_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
V_FW_RSS_IND_TBL_CMD_VIID(viid));
cmd.retval_len16 = cpu_to_be32(FW_LEN16(cmd));
/*
* Each firmware RSS command can accommodate up to 32 RSS Ingress
* Queue Identifiers. These Ingress Queue IDs are packed three to
* a 32-bit word as 10-bit values with the upper remaining 2 bits
* reserved.
*/
while (n > 0) {
int nq = min(n, 32);
int nq_packed = 0;
__be32 *qp = &cmd.iq0_to_iq2;
/*
* Set up the firmware RSS command header to send the next
* "nq" Ingress Queue IDs to the firmware.
*/
cmd.niqid = cpu_to_be16(nq);
cmd.startidx = cpu_to_be16(start);
/*
* "nq" more done for the start of the next loop.
*/
start += nq;
n -= nq;
/*
* While there are still Ingress Queue IDs to stuff into the
* current firmware RSS command, retrieve them from the
* Ingress Queue ID array and insert them into the command.
*/
while (nq > 0) {
/*
* Grab up to the next 3 Ingress Queue IDs (wrapping
* around the Ingress Queue ID array if necessary) and
* insert them into the firmware RSS command at the
* current 3-tuple position within the command.
*/
u16 qbuf[3];
u16 *qbp = qbuf;
int nqbuf = min(3, nq);
nq -= nqbuf;
qbuf[0] = qbuf[1] = qbuf[2] = 0;
while (nqbuf && nq_packed < 32) {
nqbuf--;
nq_packed++;
*qbp++ = *rsp++;
if (rsp >= rsp_end)
rsp = rspq;
}
*qp++ = cpu_to_be32(V_FW_RSS_IND_TBL_CMD_IQ0(qbuf[0]) |
V_FW_RSS_IND_TBL_CMD_IQ1(qbuf[1]) |
V_FW_RSS_IND_TBL_CMD_IQ2(qbuf[2]));
}
/*
* Send this portion of the RSS table update to the firmware;
* bail out on any errors.
*/
ret = t4_wr_mbox(adapter, mbox, &cmd, sizeof(cmd), NULL);
if (ret)
return ret;
}
return 0;
}
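The inner loop above packs three 10-bit Ingress Queue IDs into each 32-bit word via the `V_FW_RSS_IND_TBL_CMD_IQ0/1/2` macros. A sketch of that packing, assuming field positions of 20/10/0 for IQ0/IQ1/IQ2 (an assumption; the authoritative shifts are in the firmware interface header):

```c
#include <stdint.h>

/* Illustrative packing of three 10-bit Ingress Queue IDs into one
 * 32-bit word, mimicking V_FW_RSS_IND_TBL_CMD_IQ0/1/2.  The shift
 * positions (20, 10, 0) are assumptions here, not taken from the real
 * t4fw headers; the top two bits stay reserved/zero either way. */
static uint32_t demo_pack_iq_triple(uint16_t iq0, uint16_t iq1, uint16_t iq2)
{
	const uint32_t m = 0x3ff;	/* 10-bit field mask */

	return (((uint32_t)(iq0 & m) << 20) |
	    ((uint32_t)(iq1 & m) << 10) |
	    (uint32_t)(iq2 & m));
}
```

With this layout, queues (1, 2, 3) pack to 0x100803, and any value wider than 10 bits is silently truncated by the field mask, which is why the caller must keep IQ IDs within the range allowed for the VI.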
/**
* t4_config_glbl_rss - configure the global RSS mode
* @adapter: the adapter
* @mbox: mbox to use for the FW command
* @mode: global RSS mode
* @flags: mode-specific flags
*
* Sets the global RSS mode.
*/
int t4_config_glbl_rss(struct adapter *adapter, int mbox, unsigned int mode,
unsigned int flags)
{
struct fw_rss_glb_config_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_RSS_GLB_CONFIG_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE);
c.retval_len16 = cpu_to_be32(FW_LEN16(c));
if (mode == FW_RSS_GLB_CONFIG_CMD_MODE_MANUAL) {
c.u.manual.mode_pkd =
cpu_to_be32(V_FW_RSS_GLB_CONFIG_CMD_MODE(mode));
} else if (mode == FW_RSS_GLB_CONFIG_CMD_MODE_BASICVIRTUAL) {
c.u.basicvirtual.mode_keymode =
cpu_to_be32(V_FW_RSS_GLB_CONFIG_CMD_MODE(mode));
c.u.basicvirtual.synmapen_to_hashtoeplitz = cpu_to_be32(flags);
} else
return -EINVAL;
return t4_wr_mbox(adapter, mbox, &c, sizeof(c), NULL);
}
/**
* t4_config_vi_rss - configure per VI RSS settings
* @adapter: the adapter
* @mbox: mbox to use for the FW command
* @viid: the VI id
* @flags: RSS flags
* @defq: id of the default RSS queue for the VI.
* @skeyidx: RSS secret key table index for non-global mode
* @skey: RSS vf_scramble key for VI.
*
* Configures VI-specific RSS properties.
*/
int t4_config_vi_rss(struct adapter *adapter, int mbox, unsigned int viid,
unsigned int flags, unsigned int defq, unsigned int skeyidx,
unsigned int skey)
{
struct fw_rss_vi_config_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_RSS_VI_CONFIG_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
V_FW_RSS_VI_CONFIG_CMD_VIID(viid));
c.retval_len16 = cpu_to_be32(FW_LEN16(c));
c.u.basicvirtual.defaultq_to_udpen = cpu_to_be32(flags |
V_FW_RSS_VI_CONFIG_CMD_DEFAULTQ(defq));
c.u.basicvirtual.secretkeyidx_pkd = cpu_to_be32(
V_FW_RSS_VI_CONFIG_CMD_SECRETKEYIDX(skeyidx));
c.u.basicvirtual.secretkeyxor = cpu_to_be32(skey);
return t4_wr_mbox(adapter, mbox, &c, sizeof(c), NULL);
}
/* Read an RSS table row */
static int rd_rss_row(struct adapter *adap, int row, u32 *val)
{
t4_write_reg(adap, A_TP_RSS_LKP_TABLE, 0xfff00000 | row);
return t4_wait_op_done_val(adap, A_TP_RSS_LKP_TABLE, F_LKPTBLROWVLD, 1,
5, 0, val);
}
/**
* t4_read_rss - read the contents of the RSS mapping table
* @adapter: the adapter
* @map: holds the contents of the RSS mapping table
*
* Reads the contents of the RSS hash->queue mapping table.
*/
int t4_read_rss(struct adapter *adapter, u16 *map)
{
u32 val;
int i, ret;
for (i = 0; i < RSS_NENTRIES / 2; ++i) {
ret = rd_rss_row(adapter, i, &val);
if (ret)
return ret;
*map++ = G_LKPTBLQUEUE0(val);
*map++ = G_LKPTBLQUEUE1(val);
}
return 0;
}
/**
* t4_tp_fw_ldst_rw - Access TP indirect register through LDST
* @adap: the adapter
* @cmd: TP fw ldst address space type
* @vals: where the indirect register values are stored/written
* @nregs: how many indirect registers to read/write
* @start_index: index of first indirect register to read/write
* @rw: Read (1) or Write (0)
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Access TP indirect registers through LDST
**/
static int t4_tp_fw_ldst_rw(struct adapter *adap, int cmd, u32 *vals,
unsigned int nregs, unsigned int start_index,
unsigned int rw, bool sleep_ok)
{
int ret = 0;
unsigned int i;
struct fw_ldst_cmd c;
for (i = 0; i < nregs; i++) {
memset(&c, 0, sizeof(c));
c.op_to_addrspace = cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST |
(rw ? F_FW_CMD_READ :
F_FW_CMD_WRITE) |
V_FW_LDST_CMD_ADDRSPACE(cmd));
c.cycles_to_len16 = cpu_to_be32(FW_LEN16(c));
c.u.addrval.addr = cpu_to_be32(start_index + i);
c.u.addrval.val = rw ? 0 : cpu_to_be32(vals[i]);
ret = t4_wr_mbox_meat(adap, adap->mbox, &c, sizeof(c), &c,
sleep_ok);
if (ret)
return ret;
if (rw)
vals[i] = be32_to_cpu(c.u.addrval.val);
}
return 0;
}
/**
* t4_tp_indirect_rw - Read/Write TP indirect register through LDST or backdoor
* @adap: the adapter
* @reg_addr: Address Register
* @reg_data: Data register
* @buff: where the indirect register values are stored/written
* @nregs: how many indirect registers to read/write
* @start_index: index of first indirect register to read/write
* @rw: READ(1) or WRITE(0)
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Read/Write TP indirect registers through LDST if possible.
* Else, use backdoor access
**/
static void t4_tp_indirect_rw(struct adapter *adap, u32 reg_addr, u32 reg_data,
u32 *buff, u32 nregs, u32 start_index, int rw,
bool sleep_ok)
{
int rc = -EINVAL;
int cmd;
switch (reg_addr) {
case A_TP_PIO_ADDR:
cmd = FW_LDST_ADDRSPC_TP_PIO;
break;
case A_TP_TM_PIO_ADDR:
cmd = FW_LDST_ADDRSPC_TP_TM_PIO;
break;
case A_TP_MIB_INDEX:
cmd = FW_LDST_ADDRSPC_TP_MIB;
break;
default:
goto indirect_access;
}
if (t4_use_ldst(adap))
rc = t4_tp_fw_ldst_rw(adap, cmd, buff, nregs, start_index, rw,
sleep_ok);
indirect_access:
if (rc) {
if (rw)
t4_read_indirect(adap, reg_addr, reg_data, buff, nregs,
start_index);
else
t4_write_indirect(adap, reg_addr, reg_data, buff, nregs,
start_index);
}
}
/**
* t4_tp_pio_read - Read TP PIO registers
* @adap: the adapter
* @buff: where the indirect register values are written
* @nregs: how many indirect registers to read
* @start_index: index of first indirect register to read
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Read TP PIO Registers
**/
void t4_tp_pio_read(struct adapter *adap, u32 *buff, u32 nregs,
u32 start_index, bool sleep_ok)
{
t4_tp_indirect_rw(adap, A_TP_PIO_ADDR, A_TP_PIO_DATA, buff, nregs,
start_index, 1, sleep_ok);
}
/**
* t4_tp_pio_write - Write TP PIO registers
* @adap: the adapter
* @buff: where the indirect register values are stored
* @nregs: how many indirect registers to write
* @start_index: index of first indirect register to write
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Write TP PIO Registers
**/
void t4_tp_pio_write(struct adapter *adap, const u32 *buff, u32 nregs,
u32 start_index, bool sleep_ok)
{
t4_tp_indirect_rw(adap, A_TP_PIO_ADDR, A_TP_PIO_DATA,
__DECONST(u32 *, buff), nregs, start_index, 0, sleep_ok);
}
/**
* t4_tp_tm_pio_read - Read TP TM PIO registers
* @adap: the adapter
* @buff: where the indirect register values are written
* @nregs: how many indirect registers to read
* @start_index: index of first indirect register to read
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Read TP TM PIO Registers
**/
void t4_tp_tm_pio_read(struct adapter *adap, u32 *buff, u32 nregs,
u32 start_index, bool sleep_ok)
{
t4_tp_indirect_rw(adap, A_TP_TM_PIO_ADDR, A_TP_TM_PIO_DATA, buff,
nregs, start_index, 1, sleep_ok);
}
/**
* t4_tp_mib_read - Read TP MIB registers
* @adap: the adapter
* @buff: where the indirect register values are written
* @nregs: how many indirect registers to read
* @start_index: index of first indirect register to read
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Read TP MIB Registers
**/
void t4_tp_mib_read(struct adapter *adap, u32 *buff, u32 nregs, u32 start_index,
bool sleep_ok)
{
t4_tp_indirect_rw(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, buff, nregs,
start_index, 1, sleep_ok);
}
/**
* t4_read_rss_key - read the global RSS key
* @adap: the adapter
* @key: 10-entry array holding the 320-bit RSS key
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Reads the global 320-bit RSS key.
*/
void t4_read_rss_key(struct adapter *adap, u32 *key, bool sleep_ok)
{
t4_tp_pio_read(adap, key, 10, A_TP_RSS_SECRET_KEY0, sleep_ok);
}
/**
* t4_write_rss_key - program one of the RSS keys
* @adap: the adapter
* @key: 10-entry array holding the 320-bit RSS key
* @idx: which RSS key to write
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Writes one of the RSS keys with the given 320-bit value. If @idx is
* 0..15 the corresponding entry in the RSS key table is written,
* otherwise the global RSS key is written.
*/
void t4_write_rss_key(struct adapter *adap, const u32 *key, int idx,
bool sleep_ok)
{
u8 rss_key_addr_cnt = 16;
u32 vrt = t4_read_reg(adap, A_TP_RSS_CONFIG_VRT);
/*
* T6 and later: KeyMode 3 (per-VF and per-VF scramble) allows
* access to key addresses 16-63, with KeyWrAddrX supplying the
* upper two bits (index[5:4]) of the key-table index.
*/
if ((chip_id(adap) > CHELSIO_T5) &&
(vrt & F_KEYEXTEND) && (G_KEYMODE(vrt) == 3))
rss_key_addr_cnt = 32;
t4_tp_pio_write(adap, key, 10, A_TP_RSS_SECRET_KEY0, sleep_ok);
if (idx >= 0 && idx < rss_key_addr_cnt) {
if (rss_key_addr_cnt > 16)
t4_write_reg(adap, A_TP_RSS_CONFIG_VRT,
vrt | V_KEYWRADDRX(idx >> 4) |
V_T6_VFWRADDR(idx) | F_KEYWREN);
else
t4_write_reg(adap, A_TP_RSS_CONFIG_VRT,
vrt | V_KEYWRADDR(idx) | F_KEYWREN);
}
}
/**
* t4_read_rss_pf_config - read PF RSS Configuration Table
* @adapter: the adapter
* @index: the entry in the PF RSS table to read
* @valp: where to store the returned value
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Reads the PF RSS Configuration Table at the specified index and returns
* the value found there.
*/
void t4_read_rss_pf_config(struct adapter *adapter, unsigned int index,
u32 *valp, bool sleep_ok)
{
t4_tp_pio_read(adapter, valp, 1, A_TP_RSS_PF0_CONFIG + index, sleep_ok);
}
/**
* t4_write_rss_pf_config - write PF RSS Configuration Table
* @adapter: the adapter
* @index: the entry in the PF RSS table to write
* @val: the value to store
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Writes the PF RSS Configuration Table at the specified index with the
* specified value.
*/
void t4_write_rss_pf_config(struct adapter *adapter, unsigned int index,
u32 val, bool sleep_ok)
{
t4_tp_pio_write(adapter, &val, 1, A_TP_RSS_PF0_CONFIG + index,
sleep_ok);
}
/**
* t4_read_rss_vf_config - read VF RSS Configuration Table
* @adapter: the adapter
* @index: the entry in the VF RSS table to read
* @vfl: where to store the returned VFL
* @vfh: where to store the returned VFH
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Reads the VF RSS Configuration Table at the specified index and returns
* the (VFL, VFH) values found there.
*/
void t4_read_rss_vf_config(struct adapter *adapter, unsigned int index,
u32 *vfl, u32 *vfh, bool sleep_ok)
{
u32 vrt, mask, data;
if (chip_id(adapter) <= CHELSIO_T5) {
mask = V_VFWRADDR(M_VFWRADDR);
data = V_VFWRADDR(index);
} else {
mask = V_T6_VFWRADDR(M_T6_VFWRADDR);
data = V_T6_VFWRADDR(index);
}
/*
* Request that the index'th VF Table values be read into VFL/VFH.
*/
vrt = t4_read_reg(adapter, A_TP_RSS_CONFIG_VRT);
vrt &= ~(F_VFRDRG | F_VFWREN | F_KEYWREN | mask);
vrt |= data | F_VFRDEN;
t4_write_reg(adapter, A_TP_RSS_CONFIG_VRT, vrt);
/*
* Grab the VFL/VFH values ...
*/
t4_tp_pio_read(adapter, vfl, 1, A_TP_RSS_VFL_CONFIG, sleep_ok);
t4_tp_pio_read(adapter, vfh, 1, A_TP_RSS_VFH_CONFIG, sleep_ok);
}
/**
* t4_write_rss_vf_config - write VF RSS Configuration Table
* @adapter: the adapter
* @index: the entry in the VF RSS table to write
* @vfl: the VFL to store
* @vfh: the VFH to store
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Writes the VF RSS Configuration Table at the specified index with the
* specified (VFL, VFH) values.
*/
void t4_write_rss_vf_config(struct adapter *adapter, unsigned int index,
u32 vfl, u32 vfh, bool sleep_ok)
{
u32 vrt, mask, data;
if (chip_id(adapter) <= CHELSIO_T5) {
mask = V_VFWRADDR(M_VFWRADDR);
data = V_VFWRADDR(index);
} else {
mask = V_T6_VFWRADDR(M_T6_VFWRADDR);
data = V_T6_VFWRADDR(index);
}
/*
* Load up VFL/VFH with the values to be written ...
*/
t4_tp_pio_write(adapter, &vfl, 1, A_TP_RSS_VFL_CONFIG, sleep_ok);
t4_tp_pio_write(adapter, &vfh, 1, A_TP_RSS_VFH_CONFIG, sleep_ok);
/*
* Write the VFL/VFH into the VF Table at index'th location.
*/
vrt = t4_read_reg(adapter, A_TP_RSS_CONFIG_VRT);
vrt &= ~(F_VFRDRG | F_VFWREN | F_KEYWREN | mask);
vrt |= data | F_VFRDEN;
t4_write_reg(adapter, A_TP_RSS_CONFIG_VRT, vrt);
}
/**
* t4_read_rss_pf_map - read PF RSS Map
* @adapter: the adapter
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Reads the PF RSS Map register and returns its value.
*/
u32 t4_read_rss_pf_map(struct adapter *adapter, bool sleep_ok)
{
u32 pfmap;
t4_tp_pio_read(adapter, &pfmap, 1, A_TP_RSS_PF_MAP, sleep_ok);
return pfmap;
}
/**
* t4_write_rss_pf_map - write PF RSS Map
* @adapter: the adapter
* @pfmap: PF RSS Map value
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Writes the specified value to the PF RSS Map register.
*/
void t4_write_rss_pf_map(struct adapter *adapter, u32 pfmap, bool sleep_ok)
{
t4_tp_pio_write(adapter, &pfmap, 1, A_TP_RSS_PF_MAP, sleep_ok);
}
/**
* t4_read_rss_pf_mask - read PF RSS Mask
* @adapter: the adapter
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Reads the PF RSS Mask register and returns its value.
*/
u32 t4_read_rss_pf_mask(struct adapter *adapter, bool sleep_ok)
{
u32 pfmask;
t4_tp_pio_read(adapter, &pfmask, 1, A_TP_RSS_PF_MSK, sleep_ok);
return pfmask;
}
/**
* t4_write_rss_pf_mask - write PF RSS Mask
* @adapter: the adapter
* @pfmask: PF RSS Mask value
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Writes the specified value to the PF RSS Mask register.
*/
void t4_write_rss_pf_mask(struct adapter *adapter, u32 pfmask, bool sleep_ok)
{
t4_tp_pio_write(adapter, &pfmask, 1, A_TP_RSS_PF_MSK, sleep_ok);
}
/**
* t4_tp_get_tcp_stats - read TP's TCP MIB counters
* @adap: the adapter
* @v4: holds the TCP/IP counter values
* @v6: holds the TCP/IPv6 counter values
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Returns the values of TP's TCP/IP and TCP/IPv6 MIB counters.
* Either @v4 or @v6 may be %NULL to skip the corresponding stats.
*/
void t4_tp_get_tcp_stats(struct adapter *adap, struct tp_tcp_stats *v4,
struct tp_tcp_stats *v6, bool sleep_ok)
{
u32 val[A_TP_MIB_TCP_RXT_SEG_LO - A_TP_MIB_TCP_OUT_RST + 1];
#define STAT_IDX(x) ((A_TP_MIB_TCP_##x) - A_TP_MIB_TCP_OUT_RST)
#define STAT(x) val[STAT_IDX(x)]
#define STAT64(x) (((u64)STAT(x##_HI) << 32) | STAT(x##_LO))
if (v4) {
t4_tp_mib_read(adap, val, ARRAY_SIZE(val),
A_TP_MIB_TCP_OUT_RST, sleep_ok);
v4->tcp_out_rsts = STAT(OUT_RST);
v4->tcp_in_segs = STAT64(IN_SEG);
v4->tcp_out_segs = STAT64(OUT_SEG);
v4->tcp_retrans_segs = STAT64(RXT_SEG);
}
if (v6) {
t4_tp_mib_read(adap, val, ARRAY_SIZE(val),
A_TP_MIB_TCP_V6OUT_RST, sleep_ok);
v6->tcp_out_rsts = STAT(OUT_RST);
v6->tcp_in_segs = STAT64(IN_SEG);
v6->tcp_out_segs = STAT64(OUT_SEG);
v6->tcp_retrans_segs = STAT64(RXT_SEG);
}
#undef STAT64
#undef STAT
#undef STAT_IDX
}
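The STAT64() macro above reconstructs a 64-bit MIB counter from two adjacent 32-bit register reads. A minimal standalone sketch of that HI/LO combine (the helper name here is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Combine the HI/LO halves of a split 64-bit MIB counter, as the
 * STAT64() macro above does with two adjacent 32-bit reads.
 */
static uint64_t
mib_combine64(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```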
/**
* t4_tp_get_err_stats - read TP's error MIB counters
* @adap: the adapter
* @st: holds the counter values
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Returns the values of TP's error counters.
*/
void t4_tp_get_err_stats(struct adapter *adap, struct tp_err_stats *st,
bool sleep_ok)
{
int nchan = adap->chip_params->nchan;
t4_tp_mib_read(adap, st->mac_in_errs, nchan, A_TP_MIB_MAC_IN_ERR_0,
sleep_ok);
t4_tp_mib_read(adap, st->hdr_in_errs, nchan, A_TP_MIB_HDR_IN_ERR_0,
sleep_ok);
t4_tp_mib_read(adap, st->tcp_in_errs, nchan, A_TP_MIB_TCP_IN_ERR_0,
sleep_ok);
t4_tp_mib_read(adap, st->tnl_cong_drops, nchan,
A_TP_MIB_TNL_CNG_DROP_0, sleep_ok);
t4_tp_mib_read(adap, st->ofld_chan_drops, nchan,
A_TP_MIB_OFD_CHN_DROP_0, sleep_ok);
t4_tp_mib_read(adap, st->tnl_tx_drops, nchan, A_TP_MIB_TNL_DROP_0,
sleep_ok);
t4_tp_mib_read(adap, st->ofld_vlan_drops, nchan,
A_TP_MIB_OFD_VLN_DROP_0, sleep_ok);
t4_tp_mib_read(adap, st->tcp6_in_errs, nchan,
A_TP_MIB_TCP_V6IN_ERR_0, sleep_ok);
t4_tp_mib_read(adap, &st->ofld_no_neigh, 2, A_TP_MIB_OFD_ARP_DROP,
sleep_ok);
}
/**
* t4_tp_get_proxy_stats - read TP's proxy MIB counters
* @adap: the adapter
* @st: holds the counter values
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Returns the values of TP's proxy counters.
*/
void t4_tp_get_proxy_stats(struct adapter *adap, struct tp_proxy_stats *st,
bool sleep_ok)
{
int nchan = adap->chip_params->nchan;
t4_tp_mib_read(adap, st->proxy, nchan, A_TP_MIB_TNL_LPBK_0, sleep_ok);
}
/**
* t4_tp_get_cpl_stats - read TP's CPL MIB counters
* @adap: the adapter
* @st: holds the counter values
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Returns the values of TP's CPL counters.
*/
void t4_tp_get_cpl_stats(struct adapter *adap, struct tp_cpl_stats *st,
bool sleep_ok)
{
int nchan = adap->chip_params->nchan;
t4_tp_mib_read(adap, st->req, nchan, A_TP_MIB_CPL_IN_REQ_0, sleep_ok);
t4_tp_mib_read(adap, st->rsp, nchan, A_TP_MIB_CPL_OUT_RSP_0, sleep_ok);
}
/**
* t4_tp_get_rdma_stats - read TP's RDMA MIB counters
* @adap: the adapter
* @st: holds the counter values
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Returns the values of TP's RDMA counters.
*/
void t4_tp_get_rdma_stats(struct adapter *adap, struct tp_rdma_stats *st,
bool sleep_ok)
{
t4_tp_mib_read(adap, &st->rqe_dfr_pkt, 2, A_TP_MIB_RQE_DFR_PKT,
sleep_ok);
}
/**
* t4_get_fcoe_stats - read TP's FCoE MIB counters for a port
* @adap: the adapter
* @idx: the port index
* @st: holds the counter values
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Returns the values of TP's FCoE counters for the selected port.
*/
void t4_get_fcoe_stats(struct adapter *adap, unsigned int idx,
struct tp_fcoe_stats *st, bool sleep_ok)
{
u32 val[2];
t4_tp_mib_read(adap, &st->frames_ddp, 1, A_TP_MIB_FCOE_DDP_0 + idx,
sleep_ok);
t4_tp_mib_read(adap, &st->frames_drop, 1,
A_TP_MIB_FCOE_DROP_0 + idx, sleep_ok);
t4_tp_mib_read(adap, val, 2, A_TP_MIB_FCOE_BYTE_0_HI + 2 * idx,
sleep_ok);
st->octets_ddp = ((u64)val[0] << 32) | val[1];
}
/**
* t4_get_usm_stats - read TP's non-TCP DDP MIB counters
* @adap: the adapter
* @st: holds the counter values
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Returns the values of TP's counters for non-TCP directly-placed packets.
*/
void t4_get_usm_stats(struct adapter *adap, struct tp_usm_stats *st,
bool sleep_ok)
{
u32 val[4];
t4_tp_mib_read(adap, val, 4, A_TP_MIB_USM_PKTS, sleep_ok);
st->frames = val[0];
st->drops = val[1];
st->octets = ((u64)val[2] << 32) | val[3];
}
/**
* t4_read_mtu_tbl - returns the values in the HW path MTU table
* @adap: the adapter
* @mtus: where to store the MTU values
* @mtu_log: where to store the MTU base-2 log (may be %NULL)
*
* Reads the HW path MTU table.
*/
void t4_read_mtu_tbl(struct adapter *adap, u16 *mtus, u8 *mtu_log)
{
u32 v;
int i;
for (i = 0; i < NMTUS; ++i) {
t4_write_reg(adap, A_TP_MTU_TABLE,
V_MTUINDEX(0xff) | V_MTUVALUE(i));
v = t4_read_reg(adap, A_TP_MTU_TABLE);
mtus[i] = G_MTUVALUE(v);
if (mtu_log)
mtu_log[i] = G_MTUWIDTH(v);
}
}
/**
* t4_read_cong_tbl - reads the congestion control table
* @adap: the adapter
* @incr: where to store the additive increments
*
* Reads the additive increments programmed into the HW congestion
* control table.
*/
void t4_read_cong_tbl(struct adapter *adap, u16 incr[NMTUS][NCCTRL_WIN])
{
unsigned int mtu, w;
for (mtu = 0; mtu < NMTUS; ++mtu)
for (w = 0; w < NCCTRL_WIN; ++w) {
t4_write_reg(adap, A_TP_CCTRL_TABLE,
V_ROWINDEX(0xffff) | (mtu << 5) | w);
incr[mtu][w] = (u16)t4_read_reg(adap,
A_TP_CCTRL_TABLE) & 0x1fff;
}
}
/**
* t4_tp_wr_bits_indirect - set/clear bits in an indirect TP register
* @adap: the adapter
* @addr: the indirect TP register address
* @mask: specifies the field within the register to modify
* @val: new value for the field
*
* Sets a field of an indirect TP register to the given value.
*/
void t4_tp_wr_bits_indirect(struct adapter *adap, unsigned int addr,
unsigned int mask, unsigned int val)
{
t4_write_reg(adap, A_TP_PIO_ADDR, addr);
val |= t4_read_reg(adap, A_TP_PIO_DATA) & ~mask;
t4_write_reg(adap, A_TP_PIO_DATA, val);
}
/**
* init_cong_ctrl - initialize congestion control parameters
* @a: the alpha values for congestion control
* @b: the beta values for congestion control
*
* Initialize the congestion control parameters.
*/
static void init_cong_ctrl(unsigned short *a, unsigned short *b)
{
a[0] = a[1] = a[2] = a[3] = a[4] = a[5] = a[6] = a[7] = a[8] = 1;
a[9] = 2;
a[10] = 3;
a[11] = 4;
a[12] = 5;
a[13] = 6;
a[14] = 7;
a[15] = 8;
a[16] = 9;
a[17] = 10;
a[18] = 14;
a[19] = 17;
a[20] = 21;
a[21] = 25;
a[22] = 30;
a[23] = 35;
a[24] = 45;
a[25] = 60;
a[26] = 80;
a[27] = 100;
a[28] = 200;
a[29] = 300;
a[30] = 400;
a[31] = 500;
b[0] = b[1] = b[2] = b[3] = b[4] = b[5] = b[6] = b[7] = b[8] = 0;
b[9] = b[10] = 1;
b[11] = b[12] = 2;
b[13] = b[14] = b[15] = b[16] = 3;
b[17] = b[18] = b[19] = b[20] = b[21] = 4;
b[22] = b[23] = b[24] = b[25] = b[26] = b[27] = 5;
b[28] = b[29] = 6;
b[30] = b[31] = 7;
}
/* The minimum additive increment value for the congestion control table */
#define CC_MIN_INCR 2U
/**
* t4_load_mtus - write the MTU and congestion control HW tables
* @adap: the adapter
* @mtus: the values for the MTU table
* @alpha: the values for the congestion control alpha parameter
* @beta: the values for the congestion control beta parameter
*
* Write the HW MTU table with the supplied MTUs and the high-speed
* congestion control table with the supplied alpha, beta, and MTUs.
* We write the two tables together because the additive increments
* depend on the MTUs.
*/
void t4_load_mtus(struct adapter *adap, const unsigned short *mtus,
const unsigned short *alpha, const unsigned short *beta)
{
static const unsigned int avg_pkts[NCCTRL_WIN] = {
2, 6, 10, 14, 20, 28, 40, 56, 80, 112, 160, 224, 320, 448, 640,
896, 1281, 1792, 2560, 3584, 5120, 7168, 10240, 14336, 20480,
28672, 40960, 57344, 81920, 114688, 163840, 229376
};
unsigned int i, w;
for (i = 0; i < NMTUS; ++i) {
unsigned int mtu = mtus[i];
unsigned int log2 = fls(mtu);
if (!(mtu & ((1 << log2) >> 2))) /* round */
log2--;
t4_write_reg(adap, A_TP_MTU_TABLE, V_MTUINDEX(i) |
V_MTUWIDTH(log2) | V_MTUVALUE(mtu));
for (w = 0; w < NCCTRL_WIN; ++w) {
unsigned int inc;
inc = max(((mtu - 40) * alpha[w]) / avg_pkts[w],
CC_MIN_INCR);
t4_write_reg(adap, A_TP_CCTRL_TABLE, (i << 21) |
(w << 16) | (beta[w] << 13) | inc);
}
}
}
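The fls()-based rounding in t4_load_mtus() picks the exponent of the power of two nearest the MTU for the MTUWIDTH field. A self-contained sketch of just that rounding step, where fls_local() is a stand-in for the kernel's fls() (1-based index of the highest set bit):

```c
#include <assert.h>

/* Stand-in for the kernel's fls(): 1-based index of the highest set bit. */
static int
fls_local(unsigned int x)
{
	int i = 0;

	while (x != 0) {
		x >>= 1;
		i++;
	}
	return i;
}

/*
 * Exponent of the power of two nearest to mtu, as t4_load_mtus()
 * computes the MTUWIDTH field: start from fls() and step down unless
 * the bit two below it pushes the value past the halfway point.
 */
static unsigned int
mtu_log2(unsigned int mtu)
{
	unsigned int log2 = fls_local(mtu);

	if (!(mtu & ((1 << log2) >> 2)))	/* round */
		log2--;
	return log2;
}
```

For a standard 1500-byte MTU this selects width 2^10 (1024 is nearer than 2048); for a 9000-byte jumbo MTU it selects 2^13.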
/**
* t4_set_pace_tbl - set the pace table
* @adap: the adapter
* @pace_vals: the pace values in microseconds
* @start: index of the first entry in the HW pace table to set
* @n: how many entries to set
*
* Sets (a subset of the) HW pace table.
*/
int t4_set_pace_tbl(struct adapter *adap, const unsigned int *pace_vals,
unsigned int start, unsigned int n)
{
unsigned int vals[NTX_SCHED], i;
unsigned int tick_ns = dack_ticks_to_usec(adap, 1000);
if (n > NTX_SCHED)
return -ERANGE;
/* convert values from us to dack ticks, rounding to closest value */
for (i = 0; i < n; i++, pace_vals++) {
vals[i] = (1000 * *pace_vals + tick_ns / 2) / tick_ns;
if (vals[i] > 0x7ff)
return -ERANGE;
if (*pace_vals && vals[i] == 0)
return -ERANGE;
}
for (i = 0; i < n; i++, start++)
t4_write_reg(adap, A_TP_PACE_TABLE, (start << 16) | vals[i]);
return 0;
}
/**
* t4_set_sched_bps - set the bit rate for a HW traffic scheduler
* @adap: the adapter
* @sched: the scheduler index
* @kbps: target rate in Kbps
*
* Configure a Tx HW scheduler for the target rate.
*/
int t4_set_sched_bps(struct adapter *adap, int sched, unsigned int kbps)
{
unsigned int v, tps, cpt, bpt, delta, mindelta = ~0;
unsigned int clk = adap->params.vpd.cclk * 1000;
unsigned int selected_cpt = 0, selected_bpt = 0;
if (kbps > 0) {
kbps *= 125; /* -> bytes */
for (cpt = 1; cpt <= 255; cpt++) {
tps = clk / cpt;
bpt = (kbps + tps / 2) / tps;
if (bpt > 0 && bpt <= 255) {
v = bpt * tps;
delta = v >= kbps ? v - kbps : kbps - v;
if (delta < mindelta) {
mindelta = delta;
selected_cpt = cpt;
selected_bpt = bpt;
}
} else if (selected_cpt)
break;
}
if (!selected_cpt)
return -EINVAL;
}
t4_write_reg(adap, A_TP_TM_PIO_ADDR,
A_TP_TX_MOD_Q1_Q0_RATE_LIMIT - sched / 2);
v = t4_read_reg(adap, A_TP_TM_PIO_DATA);
if (sched & 1)
v = (v & 0xffff) | (selected_cpt << 16) | (selected_bpt << 24);
else
v = (v & 0xffff0000) | selected_cpt | (selected_bpt << 8);
t4_write_reg(adap, A_TP_TM_PIO_DATA, v);
return 0;
}
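The loop in t4_set_sched_bps() brute-forces the (cycles-per-tick, bytes-per-tick) pair, each in 1..255, whose product of bytes-per-tick and ticks-per-second best approximates the target rate. A hardware-free sketch of the same search, returning the achieved rate (the clock and target values in the checks below are arbitrary):

```c
#include <assert.h>

/*
 * Mirror of the t4_set_sched_bps() search: for each candidate
 * cycles-per-tick (cpt), derive the rounded bytes-per-tick (bpt) and
 * keep the pair with the smallest error.  Returns the achievable rate
 * in bytes/s, or 0 if no (cpt, bpt) pair can reach the target.
 */
static unsigned int
best_rate_bytes(unsigned int clk, unsigned int target)
{
	unsigned int cpt, tps, bpt, v, delta, mindelta = ~0u;
	unsigned int selected_cpt = 0, best = 0;

	for (cpt = 1; cpt <= 255; cpt++) {
		tps = clk / cpt;		/* scheduler ticks per second */
		bpt = (target + tps / 2) / tps;	/* bytes per tick, rounded */
		if (bpt > 0 && bpt <= 255) {
			v = bpt * tps;
			delta = v >= target ? v - target : target - v;
			if (delta < mindelta) {
				mindelta = delta;
				selected_cpt = cpt;
				best = v;
			}
		} else if (selected_cpt)
			break;		/* past the feasible window */
	}
	return selected_cpt ? best : 0;
}
```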
/**
* t4_set_sched_ipg - set the IPG for a Tx HW packet rate scheduler
* @adap: the adapter
* @sched: the scheduler index
* @ipg: the interpacket delay in tenths of nanoseconds
*
* Set the interpacket delay for a HW packet rate scheduler.
*/
int t4_set_sched_ipg(struct adapter *adap, int sched, unsigned int ipg)
{
unsigned int v, addr = A_TP_TX_MOD_Q1_Q0_TIMER_SEPARATOR - sched / 2;
/* convert ipg to nearest number of core clocks */
ipg *= core_ticks_per_usec(adap);
ipg = (ipg + 5000) / 10000;
if (ipg > M_TXTIMERSEPQ0)
return -EINVAL;
t4_write_reg(adap, A_TP_TM_PIO_ADDR, addr);
v = t4_read_reg(adap, A_TP_TM_PIO_DATA);
if (sched & 1)
v = (v & V_TXTIMERSEPQ0(M_TXTIMERSEPQ0)) | V_TXTIMERSEPQ1(ipg);
else
v = (v & V_TXTIMERSEPQ1(M_TXTIMERSEPQ1)) | V_TXTIMERSEPQ0(ipg);
t4_write_reg(adap, A_TP_TM_PIO_DATA, v);
t4_read_reg(adap, A_TP_TM_PIO_DATA);
return 0;
}
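t4_set_sched_ipg() converts a gap given in tenths of a nanosecond to core-clock ticks: multiply by ticks-per-microsecond, then divide by 10000 with rounding. A standalone sketch of that unit conversion (ticks_per_us stands in for core_ticks_per_usec(), i.e. the core frequency in MHz):

```c
#include <assert.h>

/*
 * Convert an inter-packet gap in tenths of a nanosecond to core-clock
 * ticks, rounded to nearest, as t4_set_sched_ipg() does.  There are
 * 10000 tenth-nanoseconds per microsecond, hence the divisor.
 */
static unsigned int
ipg_to_ticks(unsigned int ipg_tenths_ns, unsigned int ticks_per_us)
{
	unsigned int v = ipg_tenths_ns * ticks_per_us;

	return (v + 5000) / 10000;
}
```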
/*
* Calculates a rate in bytes/s given the number of 256-byte units per 4K core
* clocks. The formula is
*
* bytes/s = bytes256 * 256 * ClkFreq / 4096
*
* which is equivalent to
*
* bytes/s = 62.5 * bytes256 * ClkFreq (with ClkFreq in kHz)
*/
static u64 chan_rate(struct adapter *adap, unsigned int bytes256)
{
u64 v = (u64)bytes256 * adap->params.vpd.cclk;
return v * 62 + v / 2;
}
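chan_rate() above performs the 62.5x multiplication with integers only, as v * 62 + v / 2, which is exact whenever v is even. The trick in isolation:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Multiply by 62.5 using only integer operations, as chan_rate()
 * does: 62.5 * v == 62 * v + v / 2 (exact for even v, truncating the
 * half for odd v).
 */
static uint64_t
times_62_5(uint64_t v)
{
	return v * 62 + v / 2;
}
```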
/**
* t4_get_chan_txrate - get the current per channel Tx rates
* @adap: the adapter
* @nic_rate: rates for NIC traffic
* @ofld_rate: rates for offloaded traffic
*
* Return the current Tx rates in bytes/s for NIC and offloaded traffic
* for each channel.
*/
void t4_get_chan_txrate(struct adapter *adap, u64 *nic_rate, u64 *ofld_rate)
{
u32 v;
v = t4_read_reg(adap, A_TP_TX_TRATE);
nic_rate[0] = chan_rate(adap, G_TNLRATE0(v));
nic_rate[1] = chan_rate(adap, G_TNLRATE1(v));
if (adap->chip_params->nchan > 2) {
nic_rate[2] = chan_rate(adap, G_TNLRATE2(v));
nic_rate[3] = chan_rate(adap, G_TNLRATE3(v));
}
v = t4_read_reg(adap, A_TP_TX_ORATE);
ofld_rate[0] = chan_rate(adap, G_OFDRATE0(v));
ofld_rate[1] = chan_rate(adap, G_OFDRATE1(v));
if (adap->chip_params->nchan > 2) {
ofld_rate[2] = chan_rate(adap, G_OFDRATE2(v));
ofld_rate[3] = chan_rate(adap, G_OFDRATE3(v));
}
}
/**
* t4_set_trace_filter - configure one of the tracing filters
* @adap: the adapter
* @tp: the desired trace filter parameters
* @idx: which filter to configure
* @enable: whether to enable or disable the filter
*
* Configures one of the tracing filters available in HW. If @tp is %NULL
* it indicates that the filter is already written in the register and it
* just needs to be enabled or disabled.
*/
int t4_set_trace_filter(struct adapter *adap, const struct trace_params *tp,
int idx, int enable)
{
int i, ofst = idx * 4;
u32 data_reg, mask_reg, cfg;
u32 multitrc = F_TRCMULTIFILTER;
u32 en = is_t4(adap) ? F_TFEN : F_T5_TFEN;
if (idx < 0 || idx >= NTRACE)
return -EINVAL;
if (tp == NULL || !enable) {
t4_set_reg_field(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst, en,
enable ? en : 0);
return 0;
}
/*
* TODO - After T4 data book is updated, specify the exact
* section below.
*
* See T4 data book - MPS section for a complete description
* of the below if..else handling of A_MPS_TRC_CFG register
* value.
*/
cfg = t4_read_reg(adap, A_MPS_TRC_CFG);
if (cfg & F_TRCMULTIFILTER) {
/*
* If multiple tracers are enabled, then maximum
* capture size is 2.5KB (FIFO size of a single channel)
* minus 2 flits for CPL_TRACE_PKT header.
*/
if (tp->snap_len > ((10 * 1024 / 4) - (2 * 8)))
return -EINVAL;
} else {
/*
* If multiple tracers are disabled, to avoid deadlocks
* maximum packet capture size of 9600 bytes is recommended.
* Also in this mode, only trace0 can be enabled and running.
*/
multitrc = 0;
if (tp->snap_len > 9600 || idx)
return -EINVAL;
}
if (tp->port > (is_t4(adap) ? 11 : 19) || tp->invert > 1 ||
tp->skip_len > M_TFLENGTH || tp->skip_ofst > M_TFOFFSET ||
tp->min_len > M_TFMINPKTSIZE)
return -EINVAL;
/* stop the tracer we'll be changing */
t4_set_reg_field(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst, en, 0);
idx *= (A_MPS_TRC_FILTER1_MATCH - A_MPS_TRC_FILTER0_MATCH);
data_reg = A_MPS_TRC_FILTER0_MATCH + idx;
mask_reg = A_MPS_TRC_FILTER0_DONT_CARE + idx;
for (i = 0; i < TRACE_LEN / 4; i++, data_reg += 4, mask_reg += 4) {
t4_write_reg(adap, data_reg, tp->data[i]);
t4_write_reg(adap, mask_reg, ~tp->mask[i]);
}
t4_write_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_B + ofst,
V_TFCAPTUREMAX(tp->snap_len) |
V_TFMINPKTSIZE(tp->min_len));
t4_write_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst,
V_TFOFFSET(tp->skip_ofst) | V_TFLENGTH(tp->skip_len) | en |
(is_t4(adap) ?
V_TFPORT(tp->port) | V_TFINVERTMATCH(tp->invert) :
V_T5_TFPORT(tp->port) | V_T5_TFINVERTMATCH(tp->invert)));
return 0;
}
/**
* t4_get_trace_filter - query one of the tracing filters
* @adap: the adapter
* @tp: the current trace filter parameters
* @idx: which trace filter to query
* @enabled: non-zero if the filter is enabled
*
* Returns the current settings of one of the HW tracing filters.
*/
void t4_get_trace_filter(struct adapter *adap, struct trace_params *tp, int idx,
int *enabled)
{
u32 ctla, ctlb;
int i, ofst = idx * 4;
u32 data_reg, mask_reg;
ctla = t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_A + ofst);
ctlb = t4_read_reg(adap, A_MPS_TRC_FILTER_MATCH_CTL_B + ofst);
if (is_t4(adap)) {
*enabled = !!(ctla & F_TFEN);
tp->port = G_TFPORT(ctla);
tp->invert = !!(ctla & F_TFINVERTMATCH);
} else {
*enabled = !!(ctla & F_T5_TFEN);
tp->port = G_T5_TFPORT(ctla);
tp->invert = !!(ctla & F_T5_TFINVERTMATCH);
}
tp->snap_len = G_TFCAPTUREMAX(ctlb);
tp->min_len = G_TFMINPKTSIZE(ctlb);
tp->skip_ofst = G_TFOFFSET(ctla);
tp->skip_len = G_TFLENGTH(ctla);
ofst = (A_MPS_TRC_FILTER1_MATCH - A_MPS_TRC_FILTER0_MATCH) * idx;
data_reg = A_MPS_TRC_FILTER0_MATCH + ofst;
mask_reg = A_MPS_TRC_FILTER0_DONT_CARE + ofst;
for (i = 0; i < TRACE_LEN / 4; i++, data_reg += 4, mask_reg += 4) {
tp->mask[i] = ~t4_read_reg(adap, mask_reg);
tp->data[i] = t4_read_reg(adap, data_reg) & tp->mask[i];
}
}
/**
* t4_pmtx_get_stats - returns the HW stats from PMTX
* @adap: the adapter
* @cnt: where to store the count statistics
* @cycles: where to store the cycle statistics
*
* Returns performance statistics from PMTX.
*/
void t4_pmtx_get_stats(struct adapter *adap, u32 cnt[], u64 cycles[])
{
int i;
u32 data[2];
for (i = 0; i < adap->chip_params->pm_stats_cnt; i++) {
t4_write_reg(adap, A_PM_TX_STAT_CONFIG, i + 1);
cnt[i] = t4_read_reg(adap, A_PM_TX_STAT_COUNT);
if (is_t4(adap))
cycles[i] = t4_read_reg64(adap, A_PM_TX_STAT_LSB);
else {
t4_read_indirect(adap, A_PM_TX_DBG_CTRL,
A_PM_TX_DBG_DATA, data, 2,
A_PM_TX_DBG_STAT_MSB);
cycles[i] = (((u64)data[0] << 32) | data[1]);
}
}
}
/**
* t4_pmrx_get_stats - returns the HW stats from PMRX
* @adap: the adapter
* @cnt: where to store the count statistics
* @cycles: where to store the cycle statistics
*
* Returns performance statistics from PMRX.
*/
void t4_pmrx_get_stats(struct adapter *adap, u32 cnt[], u64 cycles[])
{
int i;
u32 data[2];
for (i = 0; i < adap->chip_params->pm_stats_cnt; i++) {
t4_write_reg(adap, A_PM_RX_STAT_CONFIG, i + 1);
cnt[i] = t4_read_reg(adap, A_PM_RX_STAT_COUNT);
if (is_t4(adap)) {
cycles[i] = t4_read_reg64(adap, A_PM_RX_STAT_LSB);
} else {
t4_read_indirect(adap, A_PM_RX_DBG_CTRL,
A_PM_RX_DBG_DATA, data, 2,
A_PM_RX_DBG_STAT_MSB);
cycles[i] = (((u64)data[0] << 32) | data[1]);
}
}
}
/**
* t4_get_mps_bg_map - return the buffer groups associated with a port
* @adap: the adapter
* @idx: the port index
*
* Returns a bitmap indicating which MPS buffer groups are associated
* with the given port. Bit i is set if buffer group i is used by the
* port.
*/
static unsigned int t4_get_mps_bg_map(struct adapter *adap, int idx)
{
u32 n;
if (adap->params.mps_bg_map)
return ((adap->params.mps_bg_map >> (idx << 3)) & 0xff);
n = G_NUMPORTS(t4_read_reg(adap, A_MPS_CMN_CTL));
if (n == 0)
return idx == 0 ? 0xf : 0;
if (n == 1 && chip_id(adap) <= CHELSIO_T5)
return idx < 2 ? (3 << (2 * idx)) : 0;
return 1 << idx;
}
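The fallback path of t4_get_mps_bg_map() derives the buffer-group bitmap from the raw NUMPORTS field of A_MPS_CMN_CTL: 0 encodes a single port owning all four groups, 1 encodes two ports with two groups each (pre-T6 rules), and anything else maps one group per port. A sketch of just that mapping, with the register read and the cached-map shortcut dropped:

```c
#include <assert.h>

/*
 * Buffer-group bitmap for a port, given the raw NUMPORTS field value
 * (nports_field), mirroring t4_get_mps_bg_map()'s fallback logic for
 * T4/T5-era chips.
 */
static unsigned int
bg_map(unsigned int nports_field, int idx)
{
	if (nports_field == 0)
		return idx == 0 ? 0xf : 0;	/* one port: all 4 groups */
	if (nports_field == 1)
		return idx < 2 ? (3 << (2 * idx)) : 0; /* 2 groups each */
	return 1 << idx;			/* one group per port */
}
```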
/*
* TP RX e-channels associated with the port.
*/
static unsigned int t4_get_rx_e_chan_map(struct adapter *adap, int idx)
{
u32 n = G_NUMPORTS(t4_read_reg(adap, A_MPS_CMN_CTL));
if (n == 0)
return idx == 0 ? 0xf : 0;
if (n == 1 && chip_id(adap) <= CHELSIO_T5)
return idx < 2 ? (3 << (2 * idx)) : 0;
return 1 << idx;
}
/**
* t4_get_port_type_description - return Port Type string description
* @port_type: firmware Port Type enumeration
*/
const char *t4_get_port_type_description(enum fw_port_type port_type)
{
static const char *const port_type_description[] = {
"Fiber_XFI",
"Fiber_XAUI",
"BT_SGMII",
"BT_XFI",
"BT_XAUI",
"KX4",
"CX4",
"KX",
"KR",
"SFP",
"BP_AP",
"BP4_AP",
"QSFP_10G",
"QSA",
"QSFP",
"BP40_BA",
"KR4_100G",
"CR4_QSFP",
"CR_QSFP",
"CR2_QSFP",
"SFP28",
"KR_SFP28",
};
if (port_type < ARRAY_SIZE(port_type_description))
return port_type_description[port_type];
return "UNKNOWN";
}
/**
* t4_get_port_stats_offset - collect port stats relative to a previous
* snapshot
* @adap: The adapter
* @idx: The port
* @stats: Current stats to fill
* @offset: Previous stats snapshot
*/
void t4_get_port_stats_offset(struct adapter *adap, int idx,
struct port_stats *stats,
struct port_stats *offset)
{
u64 *s, *o;
int i;
t4_get_port_stats(adap, idx, stats);
for (i = 0, s = (u64 *)stats, o = (u64 *)offset ;
i < (sizeof(struct port_stats)/sizeof(u64)) ;
i++, s++, o++)
*s -= *o;
}
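t4_get_port_stats_offset() exploits the fact that the stats struct is composed entirely of u64 counters: it walks it as a flat u64 array and subtracts the snapshot element-wise. A minimal sketch with a three-field stand-in struct (demo_stats is illustrative; the real struct port_stats has many more fields, all u64):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Counters-only stand-in for struct port_stats. */
struct demo_stats {
	uint64_t frames;
	uint64_t octets;
	uint64_t errors;
};

/*
 * Subtract a previous snapshot from the current counters in place,
 * walking the struct as a flat array of u64 exactly as
 * t4_get_port_stats_offset() does.
 */
static void
stats_delta(struct demo_stats *cur, const struct demo_stats *prev)
{
	uint64_t *s = (uint64_t *)cur;
	const uint64_t *o = (const uint64_t *)prev;
	size_t i;

	for (i = 0; i < sizeof(*cur) / sizeof(uint64_t); i++)
		s[i] -= o[i];
}

/* Exercise stats_delta() and fold the result for a simple check. */
static uint64_t
demo_delta_sum(void)
{
	struct demo_stats cur = { 100, 5000, 7 };
	struct demo_stats prev = { 40, 2000, 2 };

	stats_delta(&cur, &prev);
	return cur.frames + cur.octets + cur.errors;
}
```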
/**
* t4_get_port_stats - collect port statistics
* @adap: the adapter
* @idx: the port index
* @p: the stats structure to fill
*
* Collect statistics related to the given port from HW.
*/
void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p)
{
u32 bgmap = adap2pinfo(adap, idx)->mps_bg_map;
u32 stat_ctl = t4_read_reg(adap, A_MPS_STAT_CTL);
#define GET_STAT(name) \
t4_read_reg64(adap, \
(is_t4(adap) ? PORT_REG(idx, A_MPS_PORT_STAT_##name##_L) : \
T5_PORT_REG(idx, A_MPS_PORT_STAT_##name##_L)))
#define GET_STAT_COM(name) t4_read_reg64(adap, A_MPS_STAT_##name##_L)
p->tx_pause = GET_STAT(TX_PORT_PAUSE);
p->tx_octets = GET_STAT(TX_PORT_BYTES);
p->tx_frames = GET_STAT(TX_PORT_FRAMES);
p->tx_bcast_frames = GET_STAT(TX_PORT_BCAST);
p->tx_mcast_frames = GET_STAT(TX_PORT_MCAST);
p->tx_ucast_frames = GET_STAT(TX_PORT_UCAST);
p->tx_error_frames = GET_STAT(TX_PORT_ERROR);
p->tx_frames_64 = GET_STAT(TX_PORT_64B);
p->tx_frames_65_127 = GET_STAT(TX_PORT_65B_127B);
p->tx_frames_128_255 = GET_STAT(TX_PORT_128B_255B);
p->tx_frames_256_511 = GET_STAT(TX_PORT_256B_511B);
p->tx_frames_512_1023 = GET_STAT(TX_PORT_512B_1023B);
p->tx_frames_1024_1518 = GET_STAT(TX_PORT_1024B_1518B);
p->tx_frames_1519_max = GET_STAT(TX_PORT_1519B_MAX);
p->tx_drop = GET_STAT(TX_PORT_DROP);
p->tx_ppp0 = GET_STAT(TX_PORT_PPP0);
p->tx_ppp1 = GET_STAT(TX_PORT_PPP1);
p->tx_ppp2 = GET_STAT(TX_PORT_PPP2);
p->tx_ppp3 = GET_STAT(TX_PORT_PPP3);
p->tx_ppp4 = GET_STAT(TX_PORT_PPP4);
p->tx_ppp5 = GET_STAT(TX_PORT_PPP5);
p->tx_ppp6 = GET_STAT(TX_PORT_PPP6);
p->tx_ppp7 = GET_STAT(TX_PORT_PPP7);
if (chip_id(adap) >= CHELSIO_T5) {
if (stat_ctl & F_COUNTPAUSESTATTX) {
p->tx_frames -= p->tx_pause;
p->tx_octets -= p->tx_pause * 64;
}
if (stat_ctl & F_COUNTPAUSEMCTX)
p->tx_mcast_frames -= p->tx_pause;
}
p->rx_pause = GET_STAT(RX_PORT_PAUSE);
p->rx_octets = GET_STAT(RX_PORT_BYTES);
p->rx_frames = GET_STAT(RX_PORT_FRAMES);
p->rx_bcast_frames = GET_STAT(RX_PORT_BCAST);
p->rx_mcast_frames = GET_STAT(RX_PORT_MCAST);
p->rx_ucast_frames = GET_STAT(RX_PORT_UCAST);
p->rx_too_long = GET_STAT(RX_PORT_MTU_ERROR);
p->rx_jabber = GET_STAT(RX_PORT_MTU_CRC_ERROR);
p->rx_fcs_err = GET_STAT(RX_PORT_CRC_ERROR);
p->rx_len_err = GET_STAT(RX_PORT_LEN_ERROR);
p->rx_symbol_err = GET_STAT(RX_PORT_SYM_ERROR);
p->rx_runt = GET_STAT(RX_PORT_LESS_64B);
p->rx_frames_64 = GET_STAT(RX_PORT_64B);
p->rx_frames_65_127 = GET_STAT(RX_PORT_65B_127B);
p->rx_frames_128_255 = GET_STAT(RX_PORT_128B_255B);
p->rx_frames_256_511 = GET_STAT(RX_PORT_256B_511B);
p->rx_frames_512_1023 = GET_STAT(RX_PORT_512B_1023B);
p->rx_frames_1024_1518 = GET_STAT(RX_PORT_1024B_1518B);
p->rx_frames_1519_max = GET_STAT(RX_PORT_1519B_MAX);
p->rx_ppp0 = GET_STAT(RX_PORT_PPP0);
p->rx_ppp1 = GET_STAT(RX_PORT_PPP1);
p->rx_ppp2 = GET_STAT(RX_PORT_PPP2);
p->rx_ppp3 = GET_STAT(RX_PORT_PPP3);
p->rx_ppp4 = GET_STAT(RX_PORT_PPP4);
p->rx_ppp5 = GET_STAT(RX_PORT_PPP5);
p->rx_ppp6 = GET_STAT(RX_PORT_PPP6);
p->rx_ppp7 = GET_STAT(RX_PORT_PPP7);
if (chip_id(adap) >= CHELSIO_T5) {
if (stat_ctl & F_COUNTPAUSESTATRX) {
p->rx_frames -= p->rx_pause;
p->rx_octets -= p->rx_pause * 64;
}
if (stat_ctl & F_COUNTPAUSEMCRX)
p->rx_mcast_frames -= p->rx_pause;
}
p->rx_ovflow0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_MAC_DROP_FRAME) : 0;
p->rx_ovflow1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_MAC_DROP_FRAME) : 0;
p->rx_ovflow2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_MAC_DROP_FRAME) : 0;
p->rx_ovflow3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_MAC_DROP_FRAME) : 0;
p->rx_trunc0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_MAC_TRUNC_FRAME) : 0;
p->rx_trunc1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_MAC_TRUNC_FRAME) : 0;
p->rx_trunc2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_MAC_TRUNC_FRAME) : 0;
p->rx_trunc3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_MAC_TRUNC_FRAME) : 0;
#undef GET_STAT
#undef GET_STAT_COM
}
/**
* t4_get_lb_stats - collect loopback port statistics
* @adap: the adapter
* @idx: the loopback port index
* @p: the stats structure to fill
*
* Return HW statistics for the given loopback port.
*/
void t4_get_lb_stats(struct adapter *adap, int idx, struct lb_port_stats *p)
{
u32 bgmap = adap2pinfo(adap, idx)->mps_bg_map;
#define GET_STAT(name) \
t4_read_reg64(adap, \
(is_t4(adap) ? \
PORT_REG(idx, A_MPS_PORT_STAT_LB_PORT_##name##_L) : \
T5_PORT_REG(idx, A_MPS_PORT_STAT_LB_PORT_##name##_L)))
#define GET_STAT_COM(name) t4_read_reg64(adap, A_MPS_STAT_##name##_L)
p->octets = GET_STAT(BYTES);
p->frames = GET_STAT(FRAMES);
p->bcast_frames = GET_STAT(BCAST);
p->mcast_frames = GET_STAT(MCAST);
p->ucast_frames = GET_STAT(UCAST);
p->error_frames = GET_STAT(ERROR);
p->frames_64 = GET_STAT(64B);
p->frames_65_127 = GET_STAT(65B_127B);
p->frames_128_255 = GET_STAT(128B_255B);
p->frames_256_511 = GET_STAT(256B_511B);
p->frames_512_1023 = GET_STAT(512B_1023B);
p->frames_1024_1518 = GET_STAT(1024B_1518B);
p->frames_1519_max = GET_STAT(1519B_MAX);
p->drop = GET_STAT(DROP_FRAMES);
p->ovflow0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_LB_DROP_FRAME) : 0;
p->ovflow1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_LB_DROP_FRAME) : 0;
p->ovflow2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_LB_DROP_FRAME) : 0;
p->ovflow3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_LB_DROP_FRAME) : 0;
p->trunc0 = (bgmap & 1) ? GET_STAT_COM(RX_BG_0_LB_TRUNC_FRAME) : 0;
p->trunc1 = (bgmap & 2) ? GET_STAT_COM(RX_BG_1_LB_TRUNC_FRAME) : 0;
p->trunc2 = (bgmap & 4) ? GET_STAT_COM(RX_BG_2_LB_TRUNC_FRAME) : 0;
p->trunc3 = (bgmap & 8) ? GET_STAT_COM(RX_BG_3_LB_TRUNC_FRAME) : 0;
#undef GET_STAT
#undef GET_STAT_COM
}
/**
* t4_wol_magic_enable - enable/disable magic packet WoL
* @adap: the adapter
* @port: the physical port index
* @addr: MAC address expected in magic packets, %NULL to disable
*
* Enables/disables magic packet wake-on-LAN for the selected port.
*/
void t4_wol_magic_enable(struct adapter *adap, unsigned int port,
const u8 *addr)
{
u32 mag_id_reg_l, mag_id_reg_h, port_cfg_reg;
if (is_t4(adap)) {
mag_id_reg_l = PORT_REG(port, A_XGMAC_PORT_MAGIC_MACID_LO);
mag_id_reg_h = PORT_REG(port, A_XGMAC_PORT_MAGIC_MACID_HI);
port_cfg_reg = PORT_REG(port, A_XGMAC_PORT_CFG2);
} else {
mag_id_reg_l = T5_PORT_REG(port, A_MAC_PORT_MAGIC_MACID_LO);
mag_id_reg_h = T5_PORT_REG(port, A_MAC_PORT_MAGIC_MACID_HI);
port_cfg_reg = T5_PORT_REG(port, A_MAC_PORT_CFG2);
}
if (addr) {
t4_write_reg(adap, mag_id_reg_l,
(addr[2] << 24) | (addr[3] << 16) |
(addr[4] << 8) | addr[5]);
t4_write_reg(adap, mag_id_reg_h,
(addr[0] << 8) | addr[1]);
}
t4_set_reg_field(adap, port_cfg_reg, F_MAGICEN,
V_MAGICEN(addr != NULL));
}
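The two register writes above split the 6-byte station address big-endian across a 32-bit LO register (bytes 2-5) and a HI register (bytes 0-1). A minimal host-side sketch of that packing, with hypothetical helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers mirroring the MAGIC_MACID packing done by
 * t4_wol_magic_enable(): bytes 2..5 of the MAC fill the LO register,
 * bytes 0..1 the HI register, most-significant byte first. */
static uint32_t magic_macid_lo(const uint8_t *addr)
{
	return ((uint32_t)addr[2] << 24) | ((uint32_t)addr[3] << 16) |
	    ((uint32_t)addr[4] << 8) | addr[5];
}

static uint32_t magic_macid_hi(const uint8_t *addr)
{
	return ((uint32_t)addr[0] << 8) | addr[1];
}
```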
/**
* t4_wol_pat_enable - enable/disable pattern-based WoL
* @adap: the adapter
* @port: the physical port index
* @map: bitmap of which HW pattern filters to set
* @mask0: byte mask for bytes 0-63 of a packet
* @mask1: byte mask for bytes 64-127 of a packet
* @crc: Ethernet CRC for selected bytes
* @enable: enable/disable switch
*
* Sets the pattern filters indicated in @map to mask out the bytes
* specified in @mask0/@mask1 in received packets and compare the CRC of
* the resulting packet against @crc. If @enable is %true pattern-based
* WoL is enabled, otherwise disabled.
*/
int t4_wol_pat_enable(struct adapter *adap, unsigned int port, unsigned int map,
u64 mask0, u64 mask1, unsigned int crc, bool enable)
{
int i;
u32 port_cfg_reg;
if (is_t4(adap))
port_cfg_reg = PORT_REG(port, A_XGMAC_PORT_CFG2);
else
port_cfg_reg = T5_PORT_REG(port, A_MAC_PORT_CFG2);
if (!enable) {
t4_set_reg_field(adap, port_cfg_reg, F_PATEN, 0);
return 0;
}
if (map > 0xff)
return -EINVAL;
#define EPIO_REG(name) \
(is_t4(adap) ? PORT_REG(port, A_XGMAC_PORT_EPIO_##name) : \
T5_PORT_REG(port, A_MAC_PORT_EPIO_##name))
t4_write_reg(adap, EPIO_REG(DATA1), mask0 >> 32);
t4_write_reg(adap, EPIO_REG(DATA2), mask1);
t4_write_reg(adap, EPIO_REG(DATA3), mask1 >> 32);
for (i = 0; i < NWOL_PAT; i++, map >>= 1) {
if (!(map & 1))
continue;
/* write byte masks */
t4_write_reg(adap, EPIO_REG(DATA0), mask0);
t4_write_reg(adap, EPIO_REG(OP), V_ADDRESS(i) | F_EPIOWR);
t4_read_reg(adap, EPIO_REG(OP)); /* flush */
if (t4_read_reg(adap, EPIO_REG(OP)) & F_BUSY)
return -ETIMEDOUT;
/* write CRC */
t4_write_reg(adap, EPIO_REG(DATA0), crc);
t4_write_reg(adap, EPIO_REG(OP), V_ADDRESS(i + 32) | F_EPIOWR);
t4_read_reg(adap, EPIO_REG(OP)); /* flush */
if (t4_read_reg(adap, EPIO_REG(OP)) & F_BUSY)
return -ETIMEDOUT;
}
#undef EPIO_REG
t4_set_reg_field(adap, port_cfg_reg, 0, F_PATEN);
return 0;
}
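The pattern loop above walks @map one bit at a time, programming one EPIO pattern slot per set bit; a map wider than the 8 available slots is rejected up front. A standalone sketch of that validation and walk (count_selected_patterns() is a hypothetical stand-in, not driver code):

```c
#include <assert.h>

/* Sketch of the pattern-map handling in t4_wol_pat_enable(): only
 * NWOL_PAT (8) pattern slots exist, so any map wider than 0xff is
 * rejected; otherwise each set bit selects one slot. */
#define NWOL_PAT 8

static int count_selected_patterns(unsigned int map)
{
	int i, n = 0;

	if (map > 0xff)
		return -1;	/* -EINVAL in the real function */
	for (i = 0; i < NWOL_PAT; i++, map >>= 1)
		if (map & 1)
			n++;
	return n;
}
```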
/**
* t4_mk_filtdelwr - create a delete filter WR
* @ftid: the filter ID
* @wr: the filter work request to populate
* @qid: ingress queue to receive the delete notification
*
* Creates a filter work request to delete the supplied filter. If @qid is
* negative the delete notification is suppressed.
*/
void t4_mk_filtdelwr(unsigned int ftid, struct fw_filter_wr *wr, int qid)
{
memset(wr, 0, sizeof(*wr));
wr->op_pkd = cpu_to_be32(V_FW_WR_OP(FW_FILTER_WR));
wr->len16_pkd = cpu_to_be32(V_FW_WR_LEN16(sizeof(*wr) / 16));
wr->tid_to_iq = cpu_to_be32(V_FW_FILTER_WR_TID(ftid) |
V_FW_FILTER_WR_NOREPLY(qid < 0));
wr->del_filter_to_l2tix = cpu_to_be32(F_FW_FILTER_WR_DEL_FILTER);
if (qid >= 0)
wr->rx_chan_rx_rpl_iq =
cpu_to_be16(V_FW_FILTER_WR_RX_RPL_IQ(qid));
}
#define INIT_CMD(var, cmd, rd_wr) do { \
(var).op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_##cmd##_CMD) | \
F_FW_CMD_REQUEST | \
F_FW_CMD_##rd_wr); \
(var).retval_len16 = cpu_to_be32(FW_LEN16(var)); \
} while (0)
int t4_fwaddrspace_write(struct adapter *adap, unsigned int mbox,
u32 addr, u32 val)
{
u32 ldst_addrspace;
struct fw_ldst_cmd c;
memset(&c, 0, sizeof(c));
ldst_addrspace = V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_FIRMWARE);
c.op_to_addrspace = cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE |
ldst_addrspace);
c.cycles_to_len16 = cpu_to_be32(FW_LEN16(c));
c.u.addrval.addr = cpu_to_be32(addr);
c.u.addrval.val = cpu_to_be32(val);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_mdio_rd - read a PHY register through MDIO
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @phy_addr: the PHY address
* @mmd: the PHY MMD to access (0 for clause 22 PHYs)
* @reg: the register to read
* @valp: where to store the value
*
* Issues a FW command through the given mailbox to read a PHY register.
*/
int t4_mdio_rd(struct adapter *adap, unsigned int mbox, unsigned int phy_addr,
unsigned int mmd, unsigned int reg, unsigned int *valp)
{
int ret;
u32 ldst_addrspace;
struct fw_ldst_cmd c;
memset(&c, 0, sizeof(c));
ldst_addrspace = V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_MDIO);
c.op_to_addrspace = cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ |
ldst_addrspace);
c.cycles_to_len16 = cpu_to_be32(FW_LEN16(c));
c.u.mdio.paddr_mmd = cpu_to_be16(V_FW_LDST_CMD_PADDR(phy_addr) |
V_FW_LDST_CMD_MMD(mmd));
c.u.mdio.raddr = cpu_to_be16(reg);
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
if (ret == 0)
*valp = be16_to_cpu(c.u.mdio.rval);
return ret;
}
/**
* t4_mdio_wr - write a PHY register through MDIO
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @phy_addr: the PHY address
* @mmd: the PHY MMD to access (0 for clause 22 PHYs)
* @reg: the register to write
* @val: value to write
*
* Issues a FW command through the given mailbox to write a PHY register.
*/
int t4_mdio_wr(struct adapter *adap, unsigned int mbox, unsigned int phy_addr,
unsigned int mmd, unsigned int reg, unsigned int val)
{
u32 ldst_addrspace;
struct fw_ldst_cmd c;
memset(&c, 0, sizeof(c));
ldst_addrspace = V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_MDIO);
c.op_to_addrspace = cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
ldst_addrspace);
c.cycles_to_len16 = cpu_to_be32(FW_LEN16(c));
c.u.mdio.paddr_mmd = cpu_to_be16(V_FW_LDST_CMD_PADDR(phy_addr) |
V_FW_LDST_CMD_MMD(mmd));
c.u.mdio.raddr = cpu_to_be16(reg);
c.u.mdio.rval = cpu_to_be16(val);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_sge_decode_idma_state - decode the idma state
* @adapter: the adapter
* @state: the state idma is stuck in
*/
void t4_sge_decode_idma_state(struct adapter *adapter, int state)
{
static const char * const t4_decode[] = {
"IDMA_IDLE",
"IDMA_PUSH_MORE_CPL_FIFO",
"IDMA_PUSH_CPL_MSG_HEADER_TO_FIFO",
"Not used",
"IDMA_PHYSADDR_SEND_PCIEHDR",
"IDMA_PHYSADDR_SEND_PAYLOAD_FIRST",
"IDMA_PHYSADDR_SEND_PAYLOAD",
"IDMA_SEND_FIFO_TO_IMSG",
"IDMA_FL_REQ_DATA_FL_PREP",
"IDMA_FL_REQ_DATA_FL",
"IDMA_FL_DROP",
"IDMA_FL_H_REQ_HEADER_FL",
"IDMA_FL_H_SEND_PCIEHDR",
"IDMA_FL_H_PUSH_CPL_FIFO",
"IDMA_FL_H_SEND_CPL",
"IDMA_FL_H_SEND_IP_HDR_FIRST",
"IDMA_FL_H_SEND_IP_HDR",
"IDMA_FL_H_REQ_NEXT_HEADER_FL",
"IDMA_FL_H_SEND_NEXT_PCIEHDR",
"IDMA_FL_H_SEND_IP_HDR_PADDING",
"IDMA_FL_D_SEND_PCIEHDR",
"IDMA_FL_D_SEND_CPL_AND_IP_HDR",
"IDMA_FL_D_REQ_NEXT_DATA_FL",
"IDMA_FL_SEND_PCIEHDR",
"IDMA_FL_PUSH_CPL_FIFO",
"IDMA_FL_SEND_CPL",
"IDMA_FL_SEND_PAYLOAD_FIRST",
"IDMA_FL_SEND_PAYLOAD",
"IDMA_FL_REQ_NEXT_DATA_FL",
"IDMA_FL_SEND_NEXT_PCIEHDR",
"IDMA_FL_SEND_PADDING",
"IDMA_FL_SEND_COMPLETION_TO_IMSG",
"IDMA_FL_SEND_FIFO_TO_IMSG",
"IDMA_FL_REQ_DATAFL_DONE",
"IDMA_FL_REQ_HEADERFL_DONE",
};
static const char * const t5_decode[] = {
"IDMA_IDLE",
"IDMA_ALMOST_IDLE",
"IDMA_PUSH_MORE_CPL_FIFO",
"IDMA_PUSH_CPL_MSG_HEADER_TO_FIFO",
"IDMA_SGEFLRFLUSH_SEND_PCIEHDR",
"IDMA_PHYSADDR_SEND_PCIEHDR",
"IDMA_PHYSADDR_SEND_PAYLOAD_FIRST",
"IDMA_PHYSADDR_SEND_PAYLOAD",
"IDMA_SEND_FIFO_TO_IMSG",
"IDMA_FL_REQ_DATA_FL",
"IDMA_FL_DROP",
"IDMA_FL_DROP_SEND_INC",
"IDMA_FL_H_REQ_HEADER_FL",
"IDMA_FL_H_SEND_PCIEHDR",
"IDMA_FL_H_PUSH_CPL_FIFO",
"IDMA_FL_H_SEND_CPL",
"IDMA_FL_H_SEND_IP_HDR_FIRST",
"IDMA_FL_H_SEND_IP_HDR",
"IDMA_FL_H_REQ_NEXT_HEADER_FL",
"IDMA_FL_H_SEND_NEXT_PCIEHDR",
"IDMA_FL_H_SEND_IP_HDR_PADDING",
"IDMA_FL_D_SEND_PCIEHDR",
"IDMA_FL_D_SEND_CPL_AND_IP_HDR",
"IDMA_FL_D_REQ_NEXT_DATA_FL",
"IDMA_FL_SEND_PCIEHDR",
"IDMA_FL_PUSH_CPL_FIFO",
"IDMA_FL_SEND_CPL",
"IDMA_FL_SEND_PAYLOAD_FIRST",
"IDMA_FL_SEND_PAYLOAD",
"IDMA_FL_REQ_NEXT_DATA_FL",
"IDMA_FL_SEND_NEXT_PCIEHDR",
"IDMA_FL_SEND_PADDING",
"IDMA_FL_SEND_COMPLETION_TO_IMSG",
};
static const char * const t6_decode[] = {
"IDMA_IDLE",
"IDMA_PUSH_MORE_CPL_FIFO",
"IDMA_PUSH_CPL_MSG_HEADER_TO_FIFO",
"IDMA_SGEFLRFLUSH_SEND_PCIEHDR",
"IDMA_PHYSADDR_SEND_PCIEHDR",
"IDMA_PHYSADDR_SEND_PAYLOAD_FIRST",
"IDMA_PHYSADDR_SEND_PAYLOAD",
"IDMA_FL_REQ_DATA_FL",
"IDMA_FL_DROP",
"IDMA_FL_DROP_SEND_INC",
"IDMA_FL_H_REQ_HEADER_FL",
"IDMA_FL_H_SEND_PCIEHDR",
"IDMA_FL_H_PUSH_CPL_FIFO",
"IDMA_FL_H_SEND_CPL",
"IDMA_FL_H_SEND_IP_HDR_FIRST",
"IDMA_FL_H_SEND_IP_HDR",
"IDMA_FL_H_REQ_NEXT_HEADER_FL",
"IDMA_FL_H_SEND_NEXT_PCIEHDR",
"IDMA_FL_H_SEND_IP_HDR_PADDING",
"IDMA_FL_D_SEND_PCIEHDR",
"IDMA_FL_D_SEND_CPL_AND_IP_HDR",
"IDMA_FL_D_REQ_NEXT_DATA_FL",
"IDMA_FL_SEND_PCIEHDR",
"IDMA_FL_PUSH_CPL_FIFO",
"IDMA_FL_SEND_CPL",
"IDMA_FL_SEND_PAYLOAD_FIRST",
"IDMA_FL_SEND_PAYLOAD",
"IDMA_FL_REQ_NEXT_DATA_FL",
"IDMA_FL_SEND_NEXT_PCIEHDR",
"IDMA_FL_SEND_PADDING",
"IDMA_FL_SEND_COMPLETION_TO_IMSG",
};
static const u32 sge_regs[] = {
A_SGE_DEBUG_DATA_LOW_INDEX_2,
A_SGE_DEBUG_DATA_LOW_INDEX_3,
A_SGE_DEBUG_DATA_HIGH_INDEX_10,
};
const char * const *sge_idma_decode;
int sge_idma_decode_nstates;
int i;
unsigned int chip_version = chip_id(adapter);
/* Select the right set of decode strings to dump depending on the
* adapter chip type.
*/
switch (chip_version) {
case CHELSIO_T4:
sge_idma_decode = (const char * const *)t4_decode;
sge_idma_decode_nstates = ARRAY_SIZE(t4_decode);
break;
case CHELSIO_T5:
sge_idma_decode = (const char * const *)t5_decode;
sge_idma_decode_nstates = ARRAY_SIZE(t5_decode);
break;
case CHELSIO_T6:
sge_idma_decode = (const char * const *)t6_decode;
sge_idma_decode_nstates = ARRAY_SIZE(t6_decode);
break;
default:
CH_ERR(adapter, "Unsupported chip version %d\n", chip_version);
return;
}
if (state < sge_idma_decode_nstates)
CH_WARN(adapter, "idma state %s\n", sge_idma_decode[state]);
else
CH_WARN(adapter, "idma state %d unknown\n", state);
for (i = 0; i < ARRAY_SIZE(sge_regs); i++)
CH_WARN(adapter, "SGE register %#x value %#x\n",
sge_regs[i], t4_read_reg(adapter, sge_regs[i]));
}
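The lookup above guards the table index before dereferencing it, falling back to a plain numeric report for states past the end of the per-chip table. A generic, hedged sketch of that bounds-checked lookup (decode_state() is a hypothetical helper, not part of the driver):

```c
#include <assert.h>
#include <string.h>

/* Bounds-checked decode-table lookup in the style of
 * t4_sge_decode_idma_state(): out-of-range states fall back to a
 * placeholder instead of indexing past the table. */
static const char *decode_state(const char *const *tbl, int n, int state)
{
	return (state >= 0 && state < n) ? tbl[state] : "unknown";
}
```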
/**
* t4_sge_ctxt_flush - flush the SGE context cache
* @adap: the adapter
* @mbox: mailbox to use for the FW command
*
* Issues a FW command through the given mailbox to flush the
* SGE context cache.
*/
int t4_sge_ctxt_flush(struct adapter *adap, unsigned int mbox)
{
int ret;
u32 ldst_addrspace;
struct fw_ldst_cmd c;
memset(&c, 0, sizeof(c));
ldst_addrspace = V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_SGE_EGRC);
c.op_to_addrspace = cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ |
ldst_addrspace);
c.cycles_to_len16 = cpu_to_be32(FW_LEN16(c));
c.u.idctxt.msg_ctxtflush = cpu_to_be32(F_FW_LDST_CMD_CTXTFLUSH);
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
return ret;
}
/**
* t4_fw_hello - establish communication with FW
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @evt_mbox: mailbox to receive async FW events
* @master: specifies the caller's willingness to be the device master
* @state: returns the current device state (if non-NULL)
*
* Issues a command to establish communication with FW. Returns either
* an error (negative integer) or the mailbox of the Master PF.
*/
int t4_fw_hello(struct adapter *adap, unsigned int mbox, unsigned int evt_mbox,
enum dev_master master, enum dev_state *state)
{
int ret;
struct fw_hello_cmd c;
u32 v;
unsigned int master_mbox;
int retries = FW_CMD_HELLO_RETRIES;
retry:
memset(&c, 0, sizeof(c));
INIT_CMD(c, HELLO, WRITE);
c.err_to_clearinit = cpu_to_be32(
V_FW_HELLO_CMD_MASTERDIS(master == MASTER_CANT) |
V_FW_HELLO_CMD_MASTERFORCE(master == MASTER_MUST) |
V_FW_HELLO_CMD_MBMASTER(master == MASTER_MUST ?
mbox : M_FW_HELLO_CMD_MBMASTER) |
V_FW_HELLO_CMD_MBASYNCNOT(evt_mbox) |
V_FW_HELLO_CMD_STAGE(FW_HELLO_CMD_STAGE_OS) |
F_FW_HELLO_CMD_CLEARINIT);
/*
* Issue the HELLO command to the firmware. If it's not successful
* but indicates that we got a "busy" or "timeout" condition, retry
* the HELLO until we exhaust our retry limit. If we do exceed our
* retry limit, check to see if the firmware left us any error
* information and report that if so ...
*/
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
if (ret != FW_SUCCESS) {
if ((ret == -EBUSY || ret == -ETIMEDOUT) && retries-- > 0)
goto retry;
if (t4_read_reg(adap, A_PCIE_FW) & F_PCIE_FW_ERR)
t4_report_fw_error(adap);
return ret;
}
v = be32_to_cpu(c.err_to_clearinit);
master_mbox = G_FW_HELLO_CMD_MBMASTER(v);
if (state) {
if (v & F_FW_HELLO_CMD_ERR)
*state = DEV_STATE_ERR;
else if (v & F_FW_HELLO_CMD_INIT)
*state = DEV_STATE_INIT;
else
*state = DEV_STATE_UNINIT;
}
/*
* If we're not the Master PF then we need to wait around for the
* Master PF Driver to finish setting up the adapter.
*
* Note that we also do this wait if we're a non-Master-capable PF and
* there is no current Master PF; a Master PF may show up momentarily
* and we wouldn't want to fail pointlessly. (This can happen when an
* OS loads lots of different drivers rapidly at the same time). In
* this case, the Master PF returned by the firmware will be
* M_PCIE_FW_MASTER so the test below will work ...
*/
if ((v & (F_FW_HELLO_CMD_ERR|F_FW_HELLO_CMD_INIT)) == 0 &&
master_mbox != mbox) {
int waiting = FW_CMD_HELLO_TIMEOUT;
/*
* Wait for the firmware to either indicate an error or
* initialized state. If we see either of these we bail out
* and report the issue to the caller. If we exhaust the
* "hello timeout" and we haven't exhausted our retries, try
* again. Otherwise bail with a timeout error.
*/
for (;;) {
u32 pcie_fw;
msleep(50);
waiting -= 50;
/*
* If neither Error nor Initialized is indicated
* by the firmware keep waiting till we exhaust our
* timeout ... and then retry if we haven't exhausted
* our retries ...
*/
pcie_fw = t4_read_reg(adap, A_PCIE_FW);
if (!(pcie_fw & (F_PCIE_FW_ERR|F_PCIE_FW_INIT))) {
if (waiting <= 0) {
if (retries-- > 0)
goto retry;
return -ETIMEDOUT;
}
continue;
}
/*
* We either have an Error or Initialized condition;
* report errors preferentially.
*/
if (state) {
if (pcie_fw & F_PCIE_FW_ERR)
*state = DEV_STATE_ERR;
else if (pcie_fw & F_PCIE_FW_INIT)
*state = DEV_STATE_INIT;
}
/*
* If we arrived before a Master PF was selected and
* there's now a valid Master PF, grab its identity
* for our caller.
*/
if (master_mbox == M_PCIE_FW_MASTER &&
(pcie_fw & F_PCIE_FW_MASTER_VLD))
master_mbox = G_PCIE_FW_MASTER(pcie_fw);
break;
}
}
return master_mbox;
}
/**
* t4_fw_bye - end communication with FW
* @adap: the adapter
* @mbox: mailbox to use for the FW command
*
* Issues a command to terminate communication with FW.
*/
int t4_fw_bye(struct adapter *adap, unsigned int mbox)
{
struct fw_bye_cmd c;
memset(&c, 0, sizeof(c));
INIT_CMD(c, BYE, WRITE);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_fw_reset - issue a reset to FW
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @reset: specifies the type of reset to perform
*
* Issues a reset command of the specified type to FW.
*/
int t4_fw_reset(struct adapter *adap, unsigned int mbox, int reset)
{
struct fw_reset_cmd c;
memset(&c, 0, sizeof(c));
INIT_CMD(c, RESET, WRITE);
c.val = cpu_to_be32(reset);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_fw_halt - issue a reset/halt to FW and put uP into RESET
* @adap: the adapter
* @mbox: mailbox to use for the FW RESET command (if desired)
* @force: force uP into RESET even if FW RESET command fails
*
* Issues a RESET command to firmware (if desired) with a HALT indication
* and then puts the microprocessor into RESET state. The RESET command
* will only be issued if a legitimate mailbox is provided (mbox <=
* M_PCIE_FW_MASTER).
*
* This is generally used in order for the host to safely manipulate the
* adapter without fear of conflicting with whatever the firmware might
* be doing. The only way out of this state is to RESTART the firmware
* ...
*/
int t4_fw_halt(struct adapter *adap, unsigned int mbox, int force)
{
int ret = 0;
/*
* If a legitimate mailbox is provided, issue a RESET command
* with a HALT indication.
*/
if (adap->flags & FW_OK && mbox <= M_PCIE_FW_MASTER) {
struct fw_reset_cmd c;
memset(&c, 0, sizeof(c));
INIT_CMD(c, RESET, WRITE);
c.val = cpu_to_be32(F_PIORST | F_PIORSTMODE);
c.halt_pkd = cpu_to_be32(F_FW_RESET_CMD_HALT);
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/*
* Normally we won't complete the operation if the firmware RESET
* command fails but if our caller insists we'll go ahead and put the
* uP into RESET. This can be useful if the firmware is hung or even
* missing ... We'll have to take the risk of putting the uP into
* RESET without the cooperation of firmware in that case.
*
* We also force the firmware's HALT flag to be on in case we bypassed
* the firmware RESET command above or we're dealing with old firmware
* which doesn't have the HALT capability. This will serve as a flag
* for the incoming firmware to know that it's coming out of a HALT
* rather than a RESET ... if it's new enough to understand that ...
*/
if (ret == 0 || force) {
t4_set_reg_field(adap, A_CIM_BOOT_CFG, F_UPCRST, F_UPCRST);
t4_set_reg_field(adap, A_PCIE_FW, F_PCIE_FW_HALT,
F_PCIE_FW_HALT);
}
/*
* And we always return the result of the firmware RESET command
* even when we force the uP into RESET ...
*/
return ret;
}
/**
* t4_fw_restart - restart the firmware by taking the uP out of RESET
* @adap: the adapter
*
* Restart firmware previously halted by t4_fw_halt(). On successful
* return the previous PF Master remains as the new PF Master and there
* is no need to issue a new HELLO command, etc.
*/
int t4_fw_restart(struct adapter *adap, unsigned int mbox)
{
int ms;
t4_set_reg_field(adap, A_CIM_BOOT_CFG, F_UPCRST, 0);
for (ms = 0; ms < FW_CMD_MAX_TIMEOUT; ) {
if (!(t4_read_reg(adap, A_PCIE_FW) & F_PCIE_FW_HALT))
return FW_SUCCESS;
msleep(100);
ms += 100;
}
return -ETIMEDOUT;
}
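The restart path above is a fixed-budget poll: re-read a status bit every 100 ms until it clears or FW_CMD_MAX_TIMEOUT expires. The same shape, lifted out with a hypothetical callback standing in for the F_PCIE_FW_HALT register read:

```c
#include <assert.h>
#include <errno.h>

/* Generic poll-until-done loop in the style of t4_fw_restart(): test a
 * condition every `interval` ms until `timeout` ms have elapsed. The
 * `done` callback is a stand-in for the register check; the real
 * driver sleeps with msleep() between polls. */
static int poll_until(int (*done)(void *), void *arg, int timeout, int interval)
{
	int ms;

	for (ms = 0; ms < timeout; ms += interval) {
		if (done(arg))
			return 0;	/* FW_SUCCESS */
		/* msleep(interval) here in the driver */
	}
	return -ETIMEDOUT;
}

/* Test helper: reports "done" once its counter reaches zero. */
static int countdown(void *arg)
{
	return --*(int *)arg <= 0;
}
```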
/**
* t4_fw_upgrade - perform all of the steps necessary to upgrade FW
* @adap: the adapter
* @mbox: mailbox to use for the FW RESET command (if desired)
* @fw_data: the firmware image to write
* @size: image size
* @force: force upgrade even if firmware doesn't cooperate
*
* Perform all of the steps necessary for upgrading an adapter's
* firmware image. Normally this requires the cooperation of the
* existing firmware in order to halt all existing activities
* but if an invalid mailbox token is passed in we skip that step
* (though we'll still put the adapter microprocessor into RESET in
* that case).
*
* On successful return the new firmware will have been loaded and
* the adapter will have been fully RESET losing all previous setup
* state. On unsuccessful return the adapter may be completely hosed ...
* positive errno indicates that the adapter is ~probably~ intact, a
* negative errno indicates that things are looking bad ...
*/
int t4_fw_upgrade(struct adapter *adap, unsigned int mbox,
const u8 *fw_data, unsigned int size, int force)
{
const struct fw_hdr *fw_hdr = (const struct fw_hdr *)fw_data;
unsigned int bootstrap =
be32_to_cpu(fw_hdr->magic) == FW_HDR_MAGIC_BOOTSTRAP;
int ret;
if (!t4_fw_matches_chip(adap, fw_hdr))
return -EINVAL;
if (!bootstrap) {
ret = t4_fw_halt(adap, mbox, force);
if (ret < 0 && !force)
return ret;
}
ret = t4_load_fw(adap, fw_data, size);
if (ret < 0 || bootstrap)
return ret;
return t4_fw_restart(adap, mbox);
}
/**
* t4_fw_initialize - ask FW to initialize the device
* @adap: the adapter
* @mbox: mailbox to use for the FW command
*
* Issues a command to FW to partially initialize the device. This
* performs initialization that generally doesn't depend on user input.
*/
int t4_fw_initialize(struct adapter *adap, unsigned int mbox)
{
struct fw_initialize_cmd c;
memset(&c, 0, sizeof(c));
INIT_CMD(c, INITIALIZE, WRITE);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_query_params_rw - query FW or device parameters
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF
* @vf: the VF
* @nparams: the number of parameters
* @params: the parameter names
* @val: the parameter values
* @rw: non-zero to write @val to the parameters before reading them back
*
* Reads the value of FW or device parameters. Up to 7 parameters can be
* queried at once.
*/
int t4_query_params_rw(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int nparams, const u32 *params,
u32 *val, int rw)
{
int i, ret;
struct fw_params_cmd c;
__be32 *p = &c.param[0].mnem;
if (nparams > 7)
return -EINVAL;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_PARAMS_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ |
V_FW_PARAMS_CMD_PFN(pf) |
V_FW_PARAMS_CMD_VFN(vf));
c.retval_len16 = cpu_to_be32(FW_LEN16(c));
for (i = 0; i < nparams; i++) {
*p++ = cpu_to_be32(*params++);
if (rw)
*p = cpu_to_be32(*(val + i));
p++;
}
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
if (ret == 0)
for (i = 0, p = &c.param[0].val; i < nparams; i++, p += 2)
*val++ = be32_to_cpu(*p);
return ret;
}
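The loop above advances its cursor two words per parameter because the FW_PARAMS_CMD payload interleaves mnemonic and value slots, and replies are read back from every second word. A hypothetical host-order sketch of that layout (the real code byte-swaps each word with cpu_to_be32()):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the mnemonic/value interleaving used by
 * t4_query_params_rw(): each parameter occupies two consecutive
 * 32-bit slots, and the value slot is only loaded when rw is set. */
static void pack_params(uint32_t *p, const uint32_t *params,
    const uint32_t *val, int nparams, int rw)
{
	int i;

	for (i = 0; i < nparams; i++) {
		*p++ = params[i];		/* mnemonic slot */
		*p++ = rw ? val[i] : 0;		/* value slot, writes only */
	}
}
```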
int t4_query_params(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int nparams, const u32 *params,
u32 *val)
{
return t4_query_params_rw(adap, mbox, pf, vf, nparams, params, val, 0);
}
/**
* t4_set_params_timeout - sets FW or device parameters
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF
* @vf: the VF
* @nparams: the number of parameters
* @params: the parameter names
* @val: the parameter values
* @timeout: the timeout time
*
* Sets the value of FW or device parameters. Up to 7 parameters can be
* specified at once.
*/
int t4_set_params_timeout(struct adapter *adap, unsigned int mbox,
unsigned int pf, unsigned int vf,
unsigned int nparams, const u32 *params,
const u32 *val, int timeout)
{
struct fw_params_cmd c;
__be32 *p = &c.param[0].mnem;
if (nparams > 7)
return -EINVAL;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_PARAMS_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
V_FW_PARAMS_CMD_PFN(pf) |
V_FW_PARAMS_CMD_VFN(vf));
c.retval_len16 = cpu_to_be32(FW_LEN16(c));
while (nparams--) {
*p++ = cpu_to_be32(*params++);
*p++ = cpu_to_be32(*val++);
}
return t4_wr_mbox_timeout(adap, mbox, &c, sizeof(c), NULL, timeout);
}
/**
* t4_set_params - sets FW or device parameters
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF
* @vf: the VF
* @nparams: the number of parameters
* @params: the parameter names
* @val: the parameter values
*
* Sets the value of FW or device parameters. Up to 7 parameters can be
* specified at once.
*/
int t4_set_params(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int nparams, const u32 *params,
const u32 *val)
{
return t4_set_params_timeout(adap, mbox, pf, vf, nparams, params, val,
FW_CMD_MAX_TIMEOUT);
}
/**
* t4_cfg_pfvf - configure PF/VF resource limits
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF being configured
* @vf: the VF being configured
* @txq: the max number of egress queues
* @txq_eth_ctrl: the max number of egress Ethernet or control queues
* @rxqi: the max number of interrupt-capable ingress queues
* @rxq: the max number of interruptless ingress queues
* @tc: the PCI traffic class
* @vi: the max number of virtual interfaces
* @cmask: the channel access rights mask for the PF/VF
* @pmask: the port access rights mask for the PF/VF
* @nexact: the maximum number of exact MPS filters
* @rcaps: read capabilities
* @wxcaps: write/execute capabilities
*
* Configures resource limits and capabilities for a physical or virtual
* function.
*/
int t4_cfg_pfvf(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int txq, unsigned int txq_eth_ctrl,
unsigned int rxqi, unsigned int rxq, unsigned int tc,
unsigned int vi, unsigned int cmask, unsigned int pmask,
unsigned int nexact, unsigned int rcaps, unsigned int wxcaps)
{
struct fw_pfvf_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_PFVF_CMD) | F_FW_CMD_REQUEST |
F_FW_CMD_WRITE | V_FW_PFVF_CMD_PFN(pf) |
V_FW_PFVF_CMD_VFN(vf));
c.retval_len16 = cpu_to_be32(FW_LEN16(c));
c.niqflint_niq = cpu_to_be32(V_FW_PFVF_CMD_NIQFLINT(rxqi) |
V_FW_PFVF_CMD_NIQ(rxq));
c.type_to_neq = cpu_to_be32(V_FW_PFVF_CMD_CMASK(cmask) |
V_FW_PFVF_CMD_PMASK(pmask) |
V_FW_PFVF_CMD_NEQ(txq));
c.tc_to_nexactf = cpu_to_be32(V_FW_PFVF_CMD_TC(tc) |
V_FW_PFVF_CMD_NVI(vi) |
V_FW_PFVF_CMD_NEXACTF(nexact));
c.r_caps_to_nethctrl = cpu_to_be32(V_FW_PFVF_CMD_R_CAPS(rcaps) |
V_FW_PFVF_CMD_WX_CAPS(wxcaps) |
V_FW_PFVF_CMD_NETHCTRL(txq_eth_ctrl));
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_alloc_vi_func - allocate a virtual interface
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @port: physical port associated with the VI
* @pf: the PF owning the VI
* @vf: the VF owning the VI
* @nmac: number of MAC addresses needed (1 to 5)
* @mac: the MAC addresses of the VI
* @rss_size: size of RSS table slice associated with this VI
* @portfunc: which Port Application Function MAC Address is desired
* @idstype: Intrusion Detection Type
*
* Allocates a virtual interface for the given physical port. If @mac is
* not %NULL it contains the MAC addresses of the VI as assigned by FW.
* If @rss_size is %NULL the VI is not assigned any RSS slice by FW.
* @mac should be large enough to hold @nmac Ethernet addresses, they are
* stored consecutively so the space needed is @nmac * 6 bytes.
* Returns a negative error number or the non-negative VI id.
*/
int t4_alloc_vi_func(struct adapter *adap, unsigned int mbox,
unsigned int port, unsigned int pf, unsigned int vf,
unsigned int nmac, u8 *mac, u16 *rss_size,
unsigned int portfunc, unsigned int idstype)
{
int ret;
struct fw_vi_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_VI_CMD) | F_FW_CMD_REQUEST |
F_FW_CMD_WRITE | F_FW_CMD_EXEC |
V_FW_VI_CMD_PFN(pf) | V_FW_VI_CMD_VFN(vf));
c.alloc_to_len16 = cpu_to_be32(F_FW_VI_CMD_ALLOC | FW_LEN16(c));
c.type_to_viid = cpu_to_be16(V_FW_VI_CMD_TYPE(idstype) |
V_FW_VI_CMD_FUNC(portfunc));
c.portid_pkd = V_FW_VI_CMD_PORTID(port);
c.nmac = nmac - 1;
if (!rss_size)
c.norss_rsssize = F_FW_VI_CMD_NORSS;
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
if (ret)
return ret;
if (mac) {
memcpy(mac, c.mac, sizeof(c.mac));
switch (nmac) {
case 5:
memcpy(mac + 24, c.nmac3, sizeof(c.nmac3));
/* FALLTHROUGH */
case 4:
memcpy(mac + 18, c.nmac2, sizeof(c.nmac2));
/* FALLTHROUGH */
case 3:
memcpy(mac + 12, c.nmac1, sizeof(c.nmac1));
/* FALLTHROUGH */
case 2:
memcpy(mac + 6, c.nmac0, sizeof(c.nmac0));
}
}
if (rss_size)
*rss_size = G_FW_VI_CMD_RSSSIZE(be16_to_cpu(c.norss_rsssize));
return G_FW_VI_CMD_VIID(be16_to_cpu(c.type_to_viid));
}
/**
* t4_alloc_vi - allocate an [Ethernet Function] virtual interface
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @port: physical port associated with the VI
* @pf: the PF owning the VI
* @vf: the VF owning the VI
* @nmac: number of MAC addresses needed (1 to 5)
* @mac: the MAC addresses of the VI
* @rss_size: size of RSS table slice associated with this VI
*
* Backwards-compatible convenience routine to allocate a Virtual
* Interface with an Ethernet Port Application Function and Intrusion
* Detection System disabled.
*/
int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port,
unsigned int pf, unsigned int vf, unsigned int nmac, u8 *mac,
u16 *rss_size)
{
return t4_alloc_vi_func(adap, mbox, port, pf, vf, nmac, mac, rss_size,
FW_VI_FUNC_ETH, 0);
}
/**
* t4_free_vi - free a virtual interface
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF owning the VI
* @vf: the VF owning the VI
* @viid: virtual interface identifier
*
* Free a previously allocated virtual interface.
*/
int t4_free_vi(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int viid)
{
struct fw_vi_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_VI_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_EXEC |
V_FW_VI_CMD_PFN(pf) |
V_FW_VI_CMD_VFN(vf));
c.alloc_to_len16 = cpu_to_be32(F_FW_VI_CMD_FREE | FW_LEN16(c));
c.type_to_viid = cpu_to_be16(V_FW_VI_CMD_VIID(viid));
return t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
}
/**
* t4_set_rxmode - set Rx properties of a virtual interface
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @viid: the VI id
* @mtu: the new MTU or -1
* @promisc: 1 to enable promiscuous mode, 0 to disable it, -1 no change
* @all_multi: 1 to enable all-multi mode, 0 to disable it, -1 no change
* @bcast: 1 to enable broadcast Rx, 0 to disable it, -1 no change
* @vlanex: 1 to enable HW VLAN extraction, 0 to disable it, -1 no change
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Sets Rx properties of a virtual interface.
*/
int t4_set_rxmode(struct adapter *adap, unsigned int mbox, unsigned int viid,
int mtu, int promisc, int all_multi, int bcast, int vlanex,
bool sleep_ok)
{
struct fw_vi_rxmode_cmd c;
/* convert to FW values */
if (mtu < 0)
mtu = M_FW_VI_RXMODE_CMD_MTU;
if (promisc < 0)
promisc = M_FW_VI_RXMODE_CMD_PROMISCEN;
if (all_multi < 0)
all_multi = M_FW_VI_RXMODE_CMD_ALLMULTIEN;
if (bcast < 0)
bcast = M_FW_VI_RXMODE_CMD_BROADCASTEN;
if (vlanex < 0)
vlanex = M_FW_VI_RXMODE_CMD_VLANEXEN;
memset(&c, 0, sizeof(c));
c.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_VI_RXMODE_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
V_FW_VI_RXMODE_CMD_VIID(viid));
c.retval_len16 = cpu_to_be32(FW_LEN16(c));
c.mtu_to_vlanexen =
cpu_to_be32(V_FW_VI_RXMODE_CMD_MTU(mtu) |
V_FW_VI_RXMODE_CMD_PROMISCEN(promisc) |
V_FW_VI_RXMODE_CMD_ALLMULTIEN(all_multi) |
V_FW_VI_RXMODE_CMD_BROADCASTEN(bcast) |
V_FW_VI_RXMODE_CMD_VLANEXEN(vlanex));
return t4_wr_mbox_meat(adap, mbox, &c, sizeof(c), NULL, sleep_ok);
}
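The block of `if (x < 0)` conversions at the top of t4_set_rxmode() maps a caller's -1 ("no change") onto each field's all-ones mask, which the firmware interprets as "leave as-is". A minimal sketch of that sentinel mapping, where M_EXAMPLE_FIELD is a stand-in for the real per-field M_FW_VI_RXMODE_CMD_* masks:

```c
#include <assert.h>

/* -1 means "no change"; it becomes the field-width mask so the
 * firmware leaves the setting alone. Non-negative values pass
 * through unchanged. */
#define M_EXAMPLE_FIELD 0x3

static int rxmode_field(int requested)
{
	return requested < 0 ? M_EXAMPLE_FIELD : requested;
}
```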
/**
* t4_alloc_mac_filt - allocates exact-match filters for MAC addresses
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @viid: the VI id
* @free: if true any existing filters for this VI id are first removed
* @naddr: the number of MAC addresses to allocate filters for (up to 7)
* @addr: the MAC address(es)
* @idx: where to store the index of each allocated filter
* @hash: pointer to hash address filter bitmap
* @sleep_ok: call is allowed to sleep
*
* Allocates an exact-match filter for each of the supplied addresses and
* sets it to the corresponding address. If @idx is not %NULL it should
* have at least @naddr entries, each of which will be set to the index of
* the filter allocated for the corresponding MAC address. If a filter
* could not be allocated for an address its index is set to 0xffff.
* If @hash is not %NULL addresses that fail to allocate an exact filter
* are hashed and update the hash filter bitmap pointed at by @hash.
*
* Returns a negative error number or the number of filters allocated.
*/
int t4_alloc_mac_filt(struct adapter *adap, unsigned int mbox,
unsigned int viid, bool free, unsigned int naddr,
const u8 **addr, u16 *idx, u64 *hash, bool sleep_ok)
{
int offset, ret = 0;
struct fw_vi_mac_cmd c;
unsigned int nfilters = 0;
unsigned int max_naddr = adap->chip_params->mps_tcam_size;
unsigned int rem = naddr;
if (naddr > max_naddr)
return -EINVAL;
for (offset = 0; offset < naddr ; /**/) {
unsigned int fw_naddr = (rem < ARRAY_SIZE(c.u.exact)
? rem
: ARRAY_SIZE(c.u.exact));
size_t len16 = DIV_ROUND_UP(offsetof(struct fw_vi_mac_cmd,
u.exact[fw_naddr]), 16);
struct fw_vi_mac_exact *p;
int i;
memset(&c, 0, sizeof(c));
c.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_VI_MAC_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE |
V_FW_CMD_EXEC(free) |
V_FW_VI_MAC_CMD_VIID(viid));
c.freemacs_to_len16 = cpu_to_be32(V_FW_VI_MAC_CMD_FREEMACS(free) |
V_FW_CMD_LEN16(len16));
for (i = 0, p = c.u.exact; i < fw_naddr; i++, p++) {
p->valid_to_idx =
cpu_to_be16(F_FW_VI_MAC_CMD_VALID |
V_FW_VI_MAC_CMD_IDX(FW_VI_MAC_ADD_MAC));
memcpy(p->macaddr, addr[offset+i], sizeof(p->macaddr));
}
/*
* It's okay if we run out of space in our MAC address arena.
* Some of the addresses we submit may get stored so we need
* to run through the reply to see what the results were ...
*/
ret = t4_wr_mbox_meat(adap, mbox, &c, sizeof(c), &c, sleep_ok);
if (ret && ret != -FW_ENOMEM)
break;
for (i = 0, p = c.u.exact; i < fw_naddr; i++, p++) {
u16 index = G_FW_VI_MAC_CMD_IDX(
be16_to_cpu(p->valid_to_idx));
if (idx)
idx[offset+i] = (index >= max_naddr
? 0xffff
: index);
if (index < max_naddr)
nfilters++;
else if (hash)
*hash |= (1ULL << hash_mac_addr(addr[offset+i]));
}
free = false;
offset += fw_naddr;
rem -= fw_naddr;
}
if (ret == 0 || ret == -FW_ENOMEM)
ret = nfilters;
return ret;
}
/**
* t4_change_mac - modifies the exact-match filter for a MAC address
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @viid: the VI id
* @idx: index of existing filter for old value of MAC address, or -1
* @addr: the new MAC address value
* @persist: whether a new MAC allocation should be persistent
* @add_smt: if true also add the address to the HW SMT
*
* Modifies an exact-match filter and sets it to the new MAC address if
* @idx >= 0, or adds the MAC address to a new filter if @idx < 0. In the
* latter case the address is added persistently if @persist is %true.
*
* Note that in general it is not possible to modify the value of a given
* filter so the generic way to modify an address filter is to free the one
* being used by the old address value and allocate a new filter for the
* new address value.
*
* Returns a negative error number or the index of the filter with the new
* MAC value. Note that this index may differ from @idx.
*/
int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid,
int idx, const u8 *addr, bool persist, bool add_smt)
{
int ret, mode;
struct fw_vi_mac_cmd c;
struct fw_vi_mac_exact *p = c.u.exact;
unsigned int max_mac_addr = adap->chip_params->mps_tcam_size;
if (idx < 0) /* new allocation */
idx = persist ? FW_VI_MAC_ADD_PERSIST_MAC : FW_VI_MAC_ADD_MAC;
mode = add_smt ? FW_VI_MAC_SMT_AND_MPSTCAM : FW_VI_MAC_MPS_TCAM_ENTRY;
memset(&c, 0, sizeof(c));
c.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_VI_MAC_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
V_FW_VI_MAC_CMD_VIID(viid));
c.freemacs_to_len16 = cpu_to_be32(V_FW_CMD_LEN16(1));
p->valid_to_idx = cpu_to_be16(F_FW_VI_MAC_CMD_VALID |
V_FW_VI_MAC_CMD_SMAC_RESULT(mode) |
V_FW_VI_MAC_CMD_IDX(idx));
memcpy(p->macaddr, addr, sizeof(p->macaddr));
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
if (ret == 0) {
ret = G_FW_VI_MAC_CMD_IDX(be16_to_cpu(p->valid_to_idx));
if (ret >= max_mac_addr)
ret = -ENOMEM;
}
return ret;
}
/**
* t4_set_addr_hash - program the MAC inexact-match hash filter
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @viid: the VI id
* @ucast: whether the hash filter should also match unicast addresses
* @vec: the value to be written to the hash filter
* @sleep_ok: call is allowed to sleep
*
* Sets the 64-bit inexact-match hash filter for a virtual interface.
*/
int t4_set_addr_hash(struct adapter *adap, unsigned int mbox, unsigned int viid,
bool ucast, u64 vec, bool sleep_ok)
{
struct fw_vi_mac_cmd c;
u32 val;
memset(&c, 0, sizeof(c));
c.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_VI_MAC_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE |
V_FW_VI_ENABLE_CMD_VIID(viid));
val = V_FW_VI_MAC_CMD_ENTRY_TYPE(FW_VI_MAC_TYPE_HASHVEC) |
V_FW_VI_MAC_CMD_HASHUNIEN(ucast) | V_FW_CMD_LEN16(1);
c.freemacs_to_len16 = cpu_to_be32(val);
c.u.hash.hashvec = cpu_to_be64(vec);
return t4_wr_mbox_meat(adap, mbox, &c, sizeof(c), NULL, sleep_ok);
}
/**
* t4_enable_vi_params - enable/disable a virtual interface
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @viid: the VI id
* @rx_en: 1=enable Rx, 0=disable Rx
* @tx_en: 1=enable Tx, 0=disable Tx
* @dcb_en: 1=enable delivery of Data Center Bridging messages.
*
* Enables/disables a virtual interface. Note that setting DCB Enable
* only makes sense when enabling a Virtual Interface ...
*/
int t4_enable_vi_params(struct adapter *adap, unsigned int mbox,
unsigned int viid, bool rx_en, bool tx_en, bool dcb_en)
{
struct fw_vi_enable_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_VI_ENABLE_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
V_FW_VI_ENABLE_CMD_VIID(viid));
c.ien_to_len16 = cpu_to_be32(V_FW_VI_ENABLE_CMD_IEN(rx_en) |
V_FW_VI_ENABLE_CMD_EEN(tx_en) |
V_FW_VI_ENABLE_CMD_DCB_INFO(dcb_en) |
FW_LEN16(c));
return t4_wr_mbox_ns(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_enable_vi - enable/disable a virtual interface
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @viid: the VI id
* @rx_en: 1=enable Rx, 0=disable Rx
* @tx_en: 1=enable Tx, 0=disable Tx
*
* Enables/disables a virtual interface. Note that setting DCB Enable
* only makes sense when enabling a Virtual Interface ...
*/
int t4_enable_vi(struct adapter *adap, unsigned int mbox, unsigned int viid,
bool rx_en, bool tx_en)
{
return t4_enable_vi_params(adap, mbox, viid, rx_en, tx_en, 0);
}
/**
* t4_identify_port - identify a VI's port by blinking its LED
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @viid: the VI id
* @nblinks: how many times to blink LED at 2.5 Hz
*
* Identifies a VI's port by blinking its LED.
*/
int t4_identify_port(struct adapter *adap, unsigned int mbox, unsigned int viid,
unsigned int nblinks)
{
struct fw_vi_enable_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_viid = cpu_to_be32(V_FW_CMD_OP(FW_VI_ENABLE_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
V_FW_VI_ENABLE_CMD_VIID(viid));
c.ien_to_len16 = cpu_to_be32(F_FW_VI_ENABLE_CMD_LED | FW_LEN16(c));
c.blinkdur = cpu_to_be16(nblinks);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_iq_stop - stop an ingress queue and its FLs
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF owning the queues
* @vf: the VF owning the queues
* @iqtype: the ingress queue type (FW_IQ_TYPE_FL_INT_CAP, etc.)
* @iqid: ingress queue id
* @fl0id: FL0 queue id or 0xffff if no attached FL0
* @fl1id: FL1 queue id or 0xffff if no attached FL1
*
* Stops an ingress queue and its associated FLs, if any. This causes
* any current or future data/messages destined for these queues to be
* tossed.
*/
int t4_iq_stop(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int iqtype, unsigned int iqid,
unsigned int fl0id, unsigned int fl1id)
{
struct fw_iq_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_IQ_CMD) | F_FW_CMD_REQUEST |
F_FW_CMD_EXEC | V_FW_IQ_CMD_PFN(pf) |
V_FW_IQ_CMD_VFN(vf));
c.alloc_to_len16 = cpu_to_be32(F_FW_IQ_CMD_IQSTOP | FW_LEN16(c));
c.type_to_iqandstindex = cpu_to_be32(V_FW_IQ_CMD_TYPE(iqtype));
c.iqid = cpu_to_be16(iqid);
c.fl0id = cpu_to_be16(fl0id);
c.fl1id = cpu_to_be16(fl1id);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_iq_free - free an ingress queue and its FLs
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF owning the queues
* @vf: the VF owning the queues
* @iqtype: the ingress queue type (FW_IQ_TYPE_FL_INT_CAP, etc.)
* @iqid: ingress queue id
* @fl0id: FL0 queue id or 0xffff if no attached FL0
* @fl1id: FL1 queue id or 0xffff if no attached FL1
*
* Frees an ingress queue and its associated FLs, if any.
*/
int t4_iq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int iqtype, unsigned int iqid,
unsigned int fl0id, unsigned int fl1id)
{
struct fw_iq_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_IQ_CMD) | F_FW_CMD_REQUEST |
F_FW_CMD_EXEC | V_FW_IQ_CMD_PFN(pf) |
V_FW_IQ_CMD_VFN(vf));
c.alloc_to_len16 = cpu_to_be32(F_FW_IQ_CMD_FREE | FW_LEN16(c));
c.type_to_iqandstindex = cpu_to_be32(V_FW_IQ_CMD_TYPE(iqtype));
c.iqid = cpu_to_be16(iqid);
c.fl0id = cpu_to_be16(fl0id);
c.fl1id = cpu_to_be16(fl1id);
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_eth_eq_free - free an Ethernet egress queue
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF owning the queue
* @vf: the VF owning the queue
* @eqid: egress queue id
*
* Frees an Ethernet egress queue.
*/
int t4_eth_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int eqid)
{
struct fw_eq_eth_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_EQ_ETH_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
V_FW_EQ_ETH_CMD_PFN(pf) |
V_FW_EQ_ETH_CMD_VFN(vf));
c.alloc_to_len16 = cpu_to_be32(F_FW_EQ_ETH_CMD_FREE | FW_LEN16(c));
c.eqid_pkd = cpu_to_be32(V_FW_EQ_ETH_CMD_EQID(eqid));
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_ctrl_eq_free - free a control egress queue
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF owning the queue
* @vf: the VF owning the queue
* @eqid: egress queue id
*
* Frees a control egress queue.
*/
int t4_ctrl_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int eqid)
{
struct fw_eq_ctrl_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_EQ_CTRL_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
V_FW_EQ_CTRL_CMD_PFN(pf) |
V_FW_EQ_CTRL_CMD_VFN(vf));
c.alloc_to_len16 = cpu_to_be32(F_FW_EQ_CTRL_CMD_FREE | FW_LEN16(c));
c.cmpliqid_eqid = cpu_to_be32(V_FW_EQ_CTRL_CMD_EQID(eqid));
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_ofld_eq_free - free an offload egress queue
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF owning the queue
* @vf: the VF owning the queue
* @eqid: egress queue id
*
* Frees an offload egress queue.
*/
int t4_ofld_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int eqid)
{
struct fw_eq_ofld_cmd c;
memset(&c, 0, sizeof(c));
c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_EQ_OFLD_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
V_FW_EQ_OFLD_CMD_PFN(pf) |
V_FW_EQ_OFLD_CMD_VFN(vf));
c.alloc_to_len16 = cpu_to_be32(F_FW_EQ_OFLD_CMD_FREE | FW_LEN16(c));
c.eqid_pkd = cpu_to_be32(V_FW_EQ_OFLD_CMD_EQID(eqid));
return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
/**
* t4_link_down_rc_str - return a string for a Link Down Reason Code
* @link_down_rc: Link Down Reason Code
*
* Returns a string representation of the Link Down Reason Code.
*/
const char *t4_link_down_rc_str(unsigned char link_down_rc)
{
static const char *reason[] = {
"Link Down",
"Remote Fault",
"Auto-negotiation Failure",
"Reserved3",
"Insufficient Airflow",
"Unable To Determine Reason",
"No RX Signal Detected",
"Reserved7",
};
if (link_down_rc >= ARRAY_SIZE(reason))
return "Bad Reason Code";
return reason[link_down_rc];
}
/*
* Return the highest speed set in the port capabilities, in Mb/s.
*/
unsigned int fwcap_to_speed(uint32_t caps)
{
#define TEST_SPEED_RETURN(__caps_speed, __speed) \
do { \
if (caps & FW_PORT_CAP32_SPEED_##__caps_speed) \
return __speed; \
} while (0)
TEST_SPEED_RETURN(400G, 400000);
TEST_SPEED_RETURN(200G, 200000);
TEST_SPEED_RETURN(100G, 100000);
TEST_SPEED_RETURN(50G, 50000);
TEST_SPEED_RETURN(40G, 40000);
TEST_SPEED_RETURN(25G, 25000);
TEST_SPEED_RETURN(10G, 10000);
TEST_SPEED_RETURN(1G, 1000);
TEST_SPEED_RETURN(100M, 100);
#undef TEST_SPEED_RETURN
return 0;
}
/*
* Return the port capabilities bit for the given speed, which is in Mb/s.
*/
uint32_t speed_to_fwcap(unsigned int speed)
{
#define TEST_SPEED_RETURN(__caps_speed, __speed) \
do { \
if (speed == __speed) \
return FW_PORT_CAP32_SPEED_##__caps_speed; \
} while (0)
TEST_SPEED_RETURN(400G, 400000);
TEST_SPEED_RETURN(200G, 200000);
TEST_SPEED_RETURN(100G, 100000);
TEST_SPEED_RETURN(50G, 50000);
TEST_SPEED_RETURN(40G, 40000);
TEST_SPEED_RETURN(25G, 25000);
TEST_SPEED_RETURN(10G, 10000);
TEST_SPEED_RETURN(1G, 1000);
TEST_SPEED_RETURN(100M, 100);
#undef TEST_SPEED_RETURN
return 0;
}
/*
* Return the port capabilities bit for the highest speed in the capabilities.
*/
uint32_t fwcap_top_speed(uint32_t caps)
{
#define TEST_SPEED_RETURN(__caps_speed) \
do { \
if (caps & FW_PORT_CAP32_SPEED_##__caps_speed) \
return FW_PORT_CAP32_SPEED_##__caps_speed; \
} while (0)
TEST_SPEED_RETURN(400G);
TEST_SPEED_RETURN(200G);
TEST_SPEED_RETURN(100G);
TEST_SPEED_RETURN(50G);
TEST_SPEED_RETURN(40G);
TEST_SPEED_RETURN(25G);
TEST_SPEED_RETURN(10G);
TEST_SPEED_RETURN(1G);
TEST_SPEED_RETURN(100M);
#undef TEST_SPEED_RETURN
return 0;
}
/**
* lstatus_to_fwcap - translate old lstatus to 32-bit Port Capabilities
* @lstatus: old FW_PORT_ACTION_GET_PORT_INFO lstatus value
*
* Translates old FW_PORT_ACTION_GET_PORT_INFO lstatus field into new
* 32-bit Port Capabilities value.
*/
static uint32_t lstatus_to_fwcap(u32 lstatus)
{
uint32_t linkattr = 0;
/*
* Unfortunately the format of the Link Status in the old
* 16-bit Port Information message isn't the same as the
* 16-bit Port Capabilities bitfield used everywhere else ...
*/
if (lstatus & F_FW_PORT_CMD_RXPAUSE)
linkattr |= FW_PORT_CAP32_FC_RX;
if (lstatus & F_FW_PORT_CMD_TXPAUSE)
linkattr |= FW_PORT_CAP32_FC_TX;
if (lstatus & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_100M))
linkattr |= FW_PORT_CAP32_SPEED_100M;
if (lstatus & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_1G))
linkattr |= FW_PORT_CAP32_SPEED_1G;
if (lstatus & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_10G))
linkattr |= FW_PORT_CAP32_SPEED_10G;
if (lstatus & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_25G))
linkattr |= FW_PORT_CAP32_SPEED_25G;
if (lstatus & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_40G))
linkattr |= FW_PORT_CAP32_SPEED_40G;
if (lstatus & V_FW_PORT_CMD_LSPEED(FW_PORT_CAP_SPEED_100G))
linkattr |= FW_PORT_CAP32_SPEED_100G;
return linkattr;
}
/*
* Updates all fields owned by the common code in port_info and link_config
* based on information provided by the firmware. Does not touch any
* requested_* field.
*/
static void handle_port_info(struct port_info *pi, const struct fw_port_cmd *p,
enum fw_port_action action, bool *mod_changed, bool *link_changed)
{
struct link_config old_lc, *lc = &pi->link_cfg;
unsigned char fc, fec;
u32 stat, linkattr;
int old_ptype, old_mtype;
old_ptype = pi->port_type;
old_mtype = pi->mod_type;
old_lc = *lc;
if (action == FW_PORT_ACTION_GET_PORT_INFO) {
stat = be32_to_cpu(p->u.info.lstatus_to_modtype);
pi->port_type = G_FW_PORT_CMD_PTYPE(stat);
pi->mod_type = G_FW_PORT_CMD_MODTYPE(stat);
pi->mdio_addr = stat & F_FW_PORT_CMD_MDIOCAP ?
G_FW_PORT_CMD_MDIOADDR(stat) : -1;
lc->supported = fwcaps16_to_caps32(be16_to_cpu(p->u.info.pcap));
lc->advertising = fwcaps16_to_caps32(be16_to_cpu(p->u.info.acap));
lc->lp_advertising = fwcaps16_to_caps32(be16_to_cpu(p->u.info.lpacap));
lc->link_ok = (stat & F_FW_PORT_CMD_LSTATUS) != 0;
lc->link_down_rc = G_FW_PORT_CMD_LINKDNRC(stat);
linkattr = lstatus_to_fwcap(stat);
} else if (action == FW_PORT_ACTION_GET_PORT_INFO32) {
stat = be32_to_cpu(p->u.info32.lstatus32_to_cbllen32);
pi->port_type = G_FW_PORT_CMD_PORTTYPE32(stat);
pi->mod_type = G_FW_PORT_CMD_MODTYPE32(stat);
pi->mdio_addr = stat & F_FW_PORT_CMD_MDIOCAP32 ?
G_FW_PORT_CMD_MDIOADDR32(stat) : -1;
lc->supported = be32_to_cpu(p->u.info32.pcaps32);
lc->advertising = be32_to_cpu(p->u.info32.acaps32);
lc->lp_advertising = be32_to_cpu(p->u.info32.lpacaps32);
lc->link_ok = (stat & F_FW_PORT_CMD_LSTATUS32) != 0;
lc->link_down_rc = G_FW_PORT_CMD_LINKDNRC32(stat);
linkattr = be32_to_cpu(p->u.info32.linkattr32);
} else {
CH_ERR(pi->adapter, "bad port_info action 0x%x\n", action);
return;
}
lc->speed = fwcap_to_speed(linkattr);
fc = 0;
if (linkattr & FW_PORT_CAP32_FC_RX)
fc |= PAUSE_RX;
if (linkattr & FW_PORT_CAP32_FC_TX)
fc |= PAUSE_TX;
lc->fc = fc;
fec = FEC_NONE;
if (linkattr & FW_PORT_CAP32_FEC_RS)
fec |= FEC_RS;
if (linkattr & FW_PORT_CAP32_FEC_BASER_RS)
fec |= FEC_BASER_RS;
lc->fec = fec;
if (mod_changed != NULL)
*mod_changed = false;
if (link_changed != NULL)
*link_changed = false;
if (old_ptype != pi->port_type || old_mtype != pi->mod_type ||
old_lc.supported != lc->supported) {
if (pi->mod_type != FW_PORT_MOD_TYPE_NONE) {
lc->fec_hint = lc->advertising &
V_FW_PORT_CAP32_FEC(M_FW_PORT_CAP32_FEC);
}
if (mod_changed != NULL)
*mod_changed = true;
}
if (old_lc.link_ok != lc->link_ok || old_lc.speed != lc->speed ||
old_lc.fec != lc->fec || old_lc.fc != lc->fc) {
if (link_changed != NULL)
*link_changed = true;
}
}
/**
* t4_update_port_info - retrieve and update port information if changed
* @pi: the port_info
*
* We issue a Get Port Information Command to the Firmware and, if
* successful, we check to see if anything is different from what we
* last recorded and update things accordingly.
*/
int t4_update_port_info(struct port_info *pi)
{
struct adapter *sc = pi->adapter;
struct fw_port_cmd cmd;
enum fw_port_action action;
int ret;
memset(&cmd, 0, sizeof(cmd));
cmd.op_to_portid = cpu_to_be32(V_FW_CMD_OP(FW_PORT_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ |
V_FW_PORT_CMD_PORTID(pi->tx_chan));
action = sc->params.port_caps32 ? FW_PORT_ACTION_GET_PORT_INFO32 :
FW_PORT_ACTION_GET_PORT_INFO;
cmd.action_to_len16 = cpu_to_be32(V_FW_PORT_CMD_ACTION(action) |
FW_LEN16(cmd));
ret = t4_wr_mbox_ns(sc, sc->mbox, &cmd, sizeof(cmd), &cmd);
if (ret)
return ret;
handle_port_info(pi, &cmd, action, NULL, NULL);
return 0;
}
/**
* t4_handle_fw_rpl - process a FW reply message
* @adap: the adapter
* @rpl: start of the FW message
*
* Processes a FW message, such as link state change messages.
*/
int t4_handle_fw_rpl(struct adapter *adap, const __be64 *rpl)
{
u8 opcode = *(const u8 *)rpl;
const struct fw_port_cmd *p = (const void *)rpl;
enum fw_port_action action =
G_FW_PORT_CMD_ACTION(be32_to_cpu(p->action_to_len16));
bool mod_changed, link_changed;
if (opcode == FW_PORT_CMD &&
(action == FW_PORT_ACTION_GET_PORT_INFO ||
action == FW_PORT_ACTION_GET_PORT_INFO32)) {
/* link/module state change message */
int i;
int chan = G_FW_PORT_CMD_PORTID(be32_to_cpu(p->op_to_portid));
struct port_info *pi = NULL;
struct link_config *lc;
for_each_port(adap, i) {
pi = adap2pinfo(adap, i);
if (pi->tx_chan == chan)
break;
}
lc = &pi->link_cfg;
PORT_LOCK(pi);
handle_port_info(pi, p, action, &mod_changed, &link_changed);
PORT_UNLOCK(pi);
if (mod_changed)
t4_os_portmod_changed(pi);
if (link_changed) {
PORT_LOCK(pi);
t4_os_link_changed(pi);
PORT_UNLOCK(pi);
}
} else {
CH_WARN_RATELIMIT(adap, "Unknown firmware reply %d\n", opcode);
return -EINVAL;
}
return 0;
}
/**
* get_pci_mode - determine a card's PCI mode
* @adapter: the adapter
* @p: where to store the PCI settings
*
* Determines a card's PCI mode and associated parameters, such as speed
* and width.
*/
static void get_pci_mode(struct adapter *adapter,
struct pci_params *p)
{
u16 val;
u32 pcie_cap;
pcie_cap = t4_os_find_pci_capability(adapter, PCI_CAP_ID_EXP);
if (pcie_cap) {
t4_os_pci_read_cfg2(adapter, pcie_cap + PCI_EXP_LNKSTA, &val);
p->speed = val & PCI_EXP_LNKSTA_CLS;
p->width = (val & PCI_EXP_LNKSTA_NLW) >> 4;
}
}
struct flash_desc {
u32 vendor_and_model_id;
u32 size_mb;
};
int t4_get_flash_params(struct adapter *adapter)
{
/*
* Table for non-standard supported Flash parts. Note, all Flash
* parts must have 64KB sectors.
*/
static struct flash_desc supported_flash[] = {
{ 0x00150201, 4 << 20 }, /* Spansion 4MB S25FL032P */
};
int ret;
u32 flashid = 0;
unsigned int part, manufacturer;
unsigned int density, size = 0;
/*
* Issue a Read ID Command to the Flash part. We decode supported
* Flash parts and their sizes from this. There's a newer Query
* Command which can retrieve detailed geometry information but many
* Flash parts don't support it.
*/
ret = sf1_write(adapter, 1, 1, 0, SF_RD_ID);
if (!ret)
ret = sf1_read(adapter, 3, 0, 1, &flashid);
t4_write_reg(adapter, A_SF_OP, 0); /* unlock SF */
if (ret < 0)
return ret;
/*
* Check to see if it's one of our non-standard supported Flash parts.
*/
for (part = 0; part < ARRAY_SIZE(supported_flash); part++)
if (supported_flash[part].vendor_and_model_id == flashid) {
adapter->params.sf_size =
supported_flash[part].size_mb;
adapter->params.sf_nsec =
adapter->params.sf_size / SF_SEC_SIZE;
goto found;
}
/*
* Decode Flash part size. The code below looks repetitive with
* common encodings, but that's not guaranteed in the JEDEC
* specification for the Read JEDEC ID command. The only thing that
* we're guaranteed by the JEDEC specification is where the
* Manufacturer ID is in the returned result. After that each
* Manufacturer ~could~ encode things completely differently.
* Note, all Flash parts must have 64KB sectors.
*/
manufacturer = flashid & 0xff;
switch (manufacturer) {
case 0x20: /* Micron/Numonix */
/*
* This Density -> Size decoding table is taken from Micron
* Data Sheets.
*/
density = (flashid >> 16) & 0xff;
switch (density) {
case 0x14: size = 1 << 20; break; /* 1MB */
case 0x15: size = 1 << 21; break; /* 2MB */
case 0x16: size = 1 << 22; break; /* 4MB */
case 0x17: size = 1 << 23; break; /* 8MB */
case 0x18: size = 1 << 24; break; /* 16MB */
case 0x19: size = 1 << 25; break; /* 32MB */
case 0x20: size = 1 << 26; break; /* 64MB */
case 0x21: size = 1 << 27; break; /* 128MB */
case 0x22: size = 1 << 28; break; /* 256MB */
}
break;
case 0x9d: /* ISSI -- Integrated Silicon Solution, Inc. */
/*
* This Density -> Size decoding table is taken from ISSI
* Data Sheets.
*/
density = (flashid >> 16) & 0xff;
switch (density) {
case 0x16: size = 1 << 25; break; /* 32MB */
case 0x17: size = 1 << 26; break; /* 64MB */
}
break;
case 0xc2: /* Macronix */
/*
* This Density -> Size decoding table is taken from Macronix
* Data Sheets.
*/
density = (flashid >> 16) & 0xff;
switch (density) {
case 0x17: size = 1 << 23; break; /* 8MB */
case 0x18: size = 1 << 24; break; /* 16MB */
}
break;
case 0xef: /* Winbond */
/*
* This Density -> Size decoding table is taken from Winbond
* Data Sheets.
*/
density = (flashid >> 16) & 0xff;
switch (density) {
case 0x17: size = 1 << 23; break; /* 8MB */
case 0x18: size = 1 << 24; break; /* 16MB */
}
break;
}
/* If we didn't recognize the FLASH part, that's no real issue: the
* Hardware/Software contract says that Hardware will _*ALWAYS*_
* use a FLASH part which is at least 4MB in size and has 64KB
* sectors. The unrecognized FLASH part is likely to be much larger
* than 4MB, but that's all we really need.
*/
if (size == 0) {
CH_WARN(adapter, "Unknown Flash Part, ID = %#x, assuming 4MB\n", flashid);
size = 1 << 22;
}
/*
* Store decoded Flash size and fall through into vetting code.
*/
adapter->params.sf_size = size;
adapter->params.sf_nsec = size / SF_SEC_SIZE;
found:
/*
* We should ~probably~ reject adapters with FLASHes which are too
* small but we have some legacy FPGAs with small FLASHes that we'd
* still like to use. So instead we emit a scary message ...
*/
if (adapter->params.sf_size < FLASH_MIN_SIZE)
CH_WARN(adapter, "WARNING: Flash Part ID %#x, size %#x < %#x\n",
flashid, adapter->params.sf_size, FLASH_MIN_SIZE);
return 0;
}
static void set_pcie_completion_timeout(struct adapter *adapter,
u8 range)
{
u16 val;
u32 pcie_cap;
pcie_cap = t4_os_find_pci_capability(adapter, PCI_CAP_ID_EXP);
if (pcie_cap) {
t4_os_pci_read_cfg2(adapter, pcie_cap + PCI_EXP_DEVCTL2, &val);
val &= 0xfff0;
val |= range;
t4_os_pci_write_cfg2(adapter, pcie_cap + PCI_EXP_DEVCTL2, val);
}
}
const struct chip_params *t4_get_chip_params(int chipid)
{
static const struct chip_params chip_params[] = {
{
/* T4 */
.nchan = NCHAN,
.pm_stats_cnt = PM_NSTATS,
.cng_ch_bits_log = 2,
.nsched_cls = 15,
.cim_num_obq = CIM_NUM_OBQ,
.mps_rplc_size = 128,
.vfcount = 128,
.sge_fl_db = F_DBPRIO,
.mps_tcam_size = NUM_MPS_CLS_SRAM_L_INSTANCES,
},
{
/* T5 */
.nchan = NCHAN,
.pm_stats_cnt = PM_NSTATS,
.cng_ch_bits_log = 2,
.nsched_cls = 16,
.cim_num_obq = CIM_NUM_OBQ_T5,
.mps_rplc_size = 128,
.vfcount = 128,
.sge_fl_db = F_DBPRIO | F_DBTYPE,
.mps_tcam_size = NUM_MPS_T5_CLS_SRAM_L_INSTANCES,
},
{
/* T6 */
.nchan = T6_NCHAN,
.pm_stats_cnt = T6_PM_NSTATS,
.cng_ch_bits_log = 3,
.nsched_cls = 16,
.cim_num_obq = CIM_NUM_OBQ_T5,
.mps_rplc_size = 256,
.vfcount = 256,
.sge_fl_db = 0,
.mps_tcam_size = NUM_MPS_T5_CLS_SRAM_L_INSTANCES,
},
};
chipid -= CHELSIO_T4;
if (chipid < 0 || chipid >= ARRAY_SIZE(chip_params))
return NULL;
return &chip_params[chipid];
}
/**
* t4_prep_adapter - prepare SW and HW for operation
* @adapter: the adapter
* @buf: temporary space of at least VPD_LEN size provided by the caller.
*
* Initialize adapter SW state for the various HW modules, set initial
* values for some adapter tunables, take PHYs out of reset, and
* initialize the MDIO interface.
*/
int t4_prep_adapter(struct adapter *adapter, u32 *buf)
{
int ret;
uint16_t device_id;
uint32_t pl_rev;
get_pci_mode(adapter, &adapter->params.pci);
pl_rev = t4_read_reg(adapter, A_PL_REV);
adapter->params.chipid = G_CHIPID(pl_rev);
adapter->params.rev = G_REV(pl_rev);
if (adapter->params.chipid == 0) {
/* T4 did not have chipid in PL_REV (T5 onwards do) */
adapter->params.chipid = CHELSIO_T4;
/* T4A1 chip is not supported */
if (adapter->params.rev == 1) {
CH_ALERT(adapter, "T4 rev 1 chip is not supported.\n");
return -EINVAL;
}
}
adapter->chip_params = t4_get_chip_params(chip_id(adapter));
if (adapter->chip_params == NULL)
return -EINVAL;
adapter->params.pci.vpd_cap_addr =
t4_os_find_pci_capability(adapter, PCI_CAP_ID_VPD);
ret = t4_get_flash_params(adapter);
if (ret < 0)
return ret;
/* Cards with real ASICs have the chipid in the PCIe device id */
t4_os_pci_read_cfg2(adapter, PCI_DEVICE_ID, &device_id);
if (device_id >> 12 == chip_id(adapter))
adapter->params.cim_la_size = CIMLA_SIZE;
else {
/* FPGA */
adapter->params.fpga = 1;
adapter->params.cim_la_size = 2 * CIMLA_SIZE;
}
ret = get_vpd_params(adapter, &adapter->params.vpd, device_id, buf);
if (ret < 0)
return ret;
init_cong_ctrl(adapter->params.a_wnd, adapter->params.b_wnd);
/*
* Default port and clock for debugging in case we can't reach FW.
*/
adapter->params.nports = 1;
adapter->params.portvec = 1;
adapter->params.vpd.cclk = 50000;
/* Set pci completion timeout value to 4 seconds. */
set_pcie_completion_timeout(adapter, 0xd);
return 0;
}
/**
* t4_shutdown_adapter - shut down adapter, host & wire
* @adapter: the adapter
*
* Perform an emergency shutdown of the adapter and stop it from
* continuing any further communication on the ports or DMA to the
* host. This is typically used when the adapter and/or firmware
* have crashed and we want to prevent any further accidental
* communication with the rest of the world. This will also force
* the port Link Status to go down -- if register writes work --
* which should help our peers figure out that we're down.
*/
int t4_shutdown_adapter(struct adapter *adapter)
{
int port;
t4_intr_disable(adapter);
t4_write_reg(adapter, A_DBG_GPIO_EN, 0);
for_each_port(adapter, port) {
u32 a_port_cfg = is_t4(adapter) ?
PORT_REG(port, A_XGMAC_PORT_CFG) :
T5_PORT_REG(port, A_MAC_PORT_CFG);
t4_write_reg(adapter, a_port_cfg,
t4_read_reg(adapter, a_port_cfg)
& ~V_SIGNAL_DET(1));
}
t4_set_reg_field(adapter, A_SGE_CONTROL, F_GLOBALENABLE, 0);
return 0;
}
/**
* t4_bar2_sge_qregs - return BAR2 SGE Queue register information
* @adapter: the adapter
* @qid: the Queue ID
* @qtype: the Ingress or Egress type for @qid
* @user: true if this request is for a user mode queue
* @pbar2_qoffset: BAR2 Queue Offset
* @pbar2_qid: BAR2 Queue ID or 0 for Queue ID inferred SGE Queues
*
* Returns the BAR2 SGE Queue Registers information associated with the
* indicated Absolute Queue ID. These are passed back in return value
* pointers. @qtype should be T4_BAR2_QTYPE_EGRESS for Egress Queue
* and T4_BAR2_QTYPE_INGRESS for Ingress Queues.
*
* This may return an error which indicates that BAR2 SGE Queue
* registers aren't available. If an error is not returned, then the
* following values are returned:
*
* *@pbar2_qoffset: the BAR2 Offset of the @qid Registers
* *@pbar2_qid: the BAR2 SGE Queue ID or 0 of @qid
*
* If the returned BAR2 Queue ID is 0, then BAR2 SGE registers which
* require the "Inferred Queue ID" ability may be used. E.g. the
* Write Combining Doorbell Buffer. If the BAR2 Queue ID is not 0,
* then these "Inferred Queue ID" registers may not be used.
*/
int t4_bar2_sge_qregs(struct adapter *adapter,
unsigned int qid,
enum t4_bar2_qtype qtype,
int user,
u64 *pbar2_qoffset,
unsigned int *pbar2_qid)
{
unsigned int page_shift, page_size, qpp_shift, qpp_mask;
u64 bar2_page_offset, bar2_qoffset;
unsigned int bar2_qid, bar2_qid_offset, bar2_qinferred;
/* T4 doesn't support BAR2 SGE Queue registers for kernel
* mode queues.
*/
if (!user && is_t4(adapter))
return -EINVAL;
/* Get our SGE Page Size parameters.
*/
page_shift = adapter->params.sge.page_shift;
page_size = 1 << page_shift;
/* Get the right Queues per Page parameters for our Queue.
*/
qpp_shift = (qtype == T4_BAR2_QTYPE_EGRESS
? adapter->params.sge.eq_s_qpp
: adapter->params.sge.iq_s_qpp);
qpp_mask = (1 << qpp_shift) - 1;
/* Calculate the basics of the BAR2 SGE Queue register area:
* o The BAR2 page the Queue registers will be in.
* o The BAR2 Queue ID.
* o The BAR2 Queue ID Offset into the BAR2 page.
*/
bar2_page_offset = ((u64)(qid >> qpp_shift) << page_shift);
bar2_qid = qid & qpp_mask;
bar2_qid_offset = bar2_qid * SGE_UDB_SIZE;
/* If the BAR2 Queue ID Offset is less than the Page Size, then the
* hardware will infer the Absolute Queue ID simply from the writes to
* the BAR2 Queue ID Offset within the BAR2 Page (and we need to use a
* BAR2 Queue ID of 0 for those writes). Otherwise, we'll simply
* write to the first BAR2 SGE Queue Area within the BAR2 Page with
* the BAR2 Queue ID and the hardware will infer the Absolute Queue ID
* from the BAR2 Page and BAR2 Queue ID.
*
* One important consequence of this is that some BAR2 SGE registers
* have a "Queue ID" field and we can write the BAR2 SGE Queue ID
* there. But other registers synthesize the SGE Queue ID purely
* from the writes to the registers -- the Write Combined Doorbell
* Buffer is a good example. These BAR2 SGE Registers are only
* available for those BAR2 SGE Register areas where the SGE Absolute
* Queue ID can be inferred from simple writes.
*/
bar2_qoffset = bar2_page_offset;
bar2_qinferred = (bar2_qid_offset < page_size);
if (bar2_qinferred) {
bar2_qoffset += bar2_qid_offset;
bar2_qid = 0;
}
*pbar2_qoffset = bar2_qoffset;
*pbar2_qid = bar2_qid;
return 0;
}
/**
* t4_init_devlog_params - initialize adapter->params.devlog
* @adap: the adapter
* @fw_attach: whether we can talk to the firmware
*
* Initialize various fields of the adapter's Firmware Device Log
* Parameters structure.
*/
int t4_init_devlog_params(struct adapter *adap, int fw_attach)
{
struct devlog_params *dparams = &adap->params.devlog;
u32 pf_dparams;
unsigned int devlog_meminfo;
struct fw_devlog_cmd devlog_cmd;
int ret;
/* If we're dealing with newer firmware, the Device Log Parameters
* are stored in a designated register which allows us to access the
* Device Log even if we can't talk to the firmware.
*/
pf_dparams =
t4_read_reg(adap, PCIE_FW_REG(A_PCIE_FW_PF, PCIE_FW_PF_DEVLOG));
if (pf_dparams) {
unsigned int nentries, nentries128;
dparams->memtype = G_PCIE_FW_PF_DEVLOG_MEMTYPE(pf_dparams);
dparams->start = G_PCIE_FW_PF_DEVLOG_ADDR16(pf_dparams) << 4;
nentries128 = G_PCIE_FW_PF_DEVLOG_NENTRIES128(pf_dparams);
nentries = (nentries128 + 1) * 128;
dparams->size = nentries * sizeof(struct fw_devlog_e);
return 0;
}
/*
* For any failing returns ...
*/
memset(dparams, 0, sizeof *dparams);
/*
* If we can't talk to the firmware, there's really nothing we can do
* at this point.
*/
if (!fw_attach)
return -ENXIO;
/* Otherwise, ask the firmware for its Device Log Parameters.
*/
memset(&devlog_cmd, 0, sizeof devlog_cmd);
devlog_cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_DEVLOG_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ);
devlog_cmd.retval_len16 = cpu_to_be32(FW_LEN16(devlog_cmd));
ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),
&devlog_cmd);
if (ret)
return ret;
devlog_meminfo =
be32_to_cpu(devlog_cmd.memtype_devlog_memaddr16_devlog);
dparams->memtype = G_FW_DEVLOG_CMD_MEMTYPE_DEVLOG(devlog_meminfo);
dparams->start = G_FW_DEVLOG_CMD_MEMADDR16_DEVLOG(devlog_meminfo) << 4;
dparams->size = be32_to_cpu(devlog_cmd.memsize_devlog);
return 0;
}
/**
* t4_init_sge_params - initialize adap->params.sge
* @adapter: the adapter
*
* Initialize various fields of the adapter's SGE Parameters structure.
*/
int t4_init_sge_params(struct adapter *adapter)
{
u32 r;
struct sge_params *sp = &adapter->params.sge;
unsigned i, tscale = 1;
r = t4_read_reg(adapter, A_SGE_INGRESS_RX_THRESHOLD);
sp->counter_val[0] = G_THRESHOLD_0(r);
sp->counter_val[1] = G_THRESHOLD_1(r);
sp->counter_val[2] = G_THRESHOLD_2(r);
sp->counter_val[3] = G_THRESHOLD_3(r);
if (chip_id(adapter) >= CHELSIO_T6) {
r = t4_read_reg(adapter, A_SGE_ITP_CONTROL);
tscale = G_TSCALE(r);
if (tscale == 0)
tscale = 1;
else
tscale += 2;
}
r = t4_read_reg(adapter, A_SGE_TIMER_VALUE_0_AND_1);
sp->timer_val[0] = core_ticks_to_us(adapter, G_TIMERVALUE0(r)) * tscale;
sp->timer_val[1] = core_ticks_to_us(adapter, G_TIMERVALUE1(r)) * tscale;
r = t4_read_reg(adapter, A_SGE_TIMER_VALUE_2_AND_3);
sp->timer_val[2] = core_ticks_to_us(adapter, G_TIMERVALUE2(r)) * tscale;
sp->timer_val[3] = core_ticks_to_us(adapter, G_TIMERVALUE3(r)) * tscale;
r = t4_read_reg(adapter, A_SGE_TIMER_VALUE_4_AND_5);
sp->timer_val[4] = core_ticks_to_us(adapter, G_TIMERVALUE4(r)) * tscale;
sp->timer_val[5] = core_ticks_to_us(adapter, G_TIMERVALUE5(r)) * tscale;
r = t4_read_reg(adapter, A_SGE_CONM_CTRL);
sp->fl_starve_threshold = G_EGRTHRESHOLD(r) * 2 + 1;
if (is_t4(adapter))
sp->fl_starve_threshold2 = sp->fl_starve_threshold;
else if (is_t5(adapter))
sp->fl_starve_threshold2 = G_EGRTHRESHOLDPACKING(r) * 2 + 1;
else
sp->fl_starve_threshold2 = G_T6_EGRTHRESHOLDPACKING(r) * 2 + 1;
/* egress queues: log2 of # of doorbells per BAR2 page */
r = t4_read_reg(adapter, A_SGE_EGRESS_QUEUES_PER_PAGE_PF);
r >>= S_QUEUESPERPAGEPF0 +
(S_QUEUESPERPAGEPF1 - S_QUEUESPERPAGEPF0) * adapter->pf;
sp->eq_s_qpp = r & M_QUEUESPERPAGEPF0;
/* ingress queues: log2 of # of doorbells per BAR2 page */
r = t4_read_reg(adapter, A_SGE_INGRESS_QUEUES_PER_PAGE_PF);
r >>= S_QUEUESPERPAGEPF0 +
(S_QUEUESPERPAGEPF1 - S_QUEUESPERPAGEPF0) * adapter->pf;
sp->iq_s_qpp = r & M_QUEUESPERPAGEPF0;
r = t4_read_reg(adapter, A_SGE_HOST_PAGE_SIZE);
r >>= S_HOSTPAGESIZEPF0 +
(S_HOSTPAGESIZEPF1 - S_HOSTPAGESIZEPF0) * adapter->pf;
sp->page_shift = (r & M_HOSTPAGESIZEPF0) + 10;
r = t4_read_reg(adapter, A_SGE_CONTROL);
sp->sge_control = r;
sp->spg_len = r & F_EGRSTATUSPAGESIZE ? 128 : 64;
sp->fl_pktshift = G_PKTSHIFT(r);
if (chip_id(adapter) <= CHELSIO_T5) {
sp->pad_boundary = 1 << (G_INGPADBOUNDARY(r) +
X_INGPADBOUNDARY_SHIFT);
} else {
sp->pad_boundary = 1 << (G_INGPADBOUNDARY(r) +
X_T6_INGPADBOUNDARY_SHIFT);
}
if (is_t4(adapter))
sp->pack_boundary = sp->pad_boundary;
else {
r = t4_read_reg(adapter, A_SGE_CONTROL2);
if (G_INGPACKBOUNDARY(r) == 0)
sp->pack_boundary = 16;
else
sp->pack_boundary = 1 << (G_INGPACKBOUNDARY(r) + 5);
}
for (i = 0; i < SGE_FLBUF_SIZES; i++)
sp->sge_fl_buffer_size[i] = t4_read_reg(adapter,
A_SGE_FL_BUFFER_SIZE0 + (4 * i));
return 0;
}
/*
* Read and cache the adapter's compressed filter mode and ingress config.
*/
static void read_filter_mode_and_ingress_config(struct adapter *adap,
bool sleep_ok)
{
uint32_t v;
struct tp_params *tpp = &adap->params.tp;
t4_tp_pio_read(adap, &tpp->vlan_pri_map, 1, A_TP_VLAN_PRI_MAP,
sleep_ok);
t4_tp_pio_read(adap, &tpp->ingress_config, 1, A_TP_INGRESS_CONFIG,
sleep_ok);
/*
* Now that we have TP_VLAN_PRI_MAP cached, we can calculate the field
* shift positions of several elements of the Compressed Filter Tuple
* for this adapter which we need frequently ...
*/
tpp->fcoe_shift = t4_filter_field_shift(adap, F_FCOE);
tpp->port_shift = t4_filter_field_shift(adap, F_PORT);
tpp->vnic_shift = t4_filter_field_shift(adap, F_VNIC_ID);
tpp->vlan_shift = t4_filter_field_shift(adap, F_VLAN);
tpp->tos_shift = t4_filter_field_shift(adap, F_TOS);
tpp->protocol_shift = t4_filter_field_shift(adap, F_PROTOCOL);
tpp->ethertype_shift = t4_filter_field_shift(adap, F_ETHERTYPE);
tpp->macmatch_shift = t4_filter_field_shift(adap, F_MACMATCH);
tpp->matchtype_shift = t4_filter_field_shift(adap, F_MPSHITTYPE);
tpp->frag_shift = t4_filter_field_shift(adap, F_FRAGMENTATION);
if (chip_id(adap) > CHELSIO_T4) {
v = t4_read_reg(adap, LE_HASH_MASK_GEN_IPV4T5(3));
adap->params.tp.hash_filter_mask = v;
v = t4_read_reg(adap, LE_HASH_MASK_GEN_IPV4T5(4));
adap->params.tp.hash_filter_mask |= (u64)v << 32;
}
}
/**
* t4_init_tp_params - initialize adap->params.tp
* @adap: the adapter
*
* Initialize various fields of the adapter's TP Parameters structure.
*/
int t4_init_tp_params(struct adapter *adap, bool sleep_ok)
{
int chan;
u32 v;
struct tp_params *tpp = &adap->params.tp;
v = t4_read_reg(adap, A_TP_TIMER_RESOLUTION);
tpp->tre = G_TIMERRESOLUTION(v);
tpp->dack_re = G_DELAYEDACKRESOLUTION(v);
/* MODQ_REQ_MAP defaults to setting queues 0-3 to chan 0-3 */
for (chan = 0; chan < MAX_NCHAN; chan++)
tpp->tx_modq[chan] = chan;
read_filter_mode_and_ingress_config(adap, sleep_ok);
/*
* Cache a mask of the bits that represent the error vector portion of
* rx_pkt.err_vec. T6+ can use a compressed error vector to make room
* for information about outer encapsulation (GENEVE/VXLAN/NVGRE).
*/
tpp->err_vec_mask = htobe16(0xffff);
if (chip_id(adap) > CHELSIO_T5) {
v = t4_read_reg(adap, A_TP_OUT_CONFIG);
if (v & F_CRXPKTENC) {
tpp->err_vec_mask =
htobe16(V_T6_COMPR_RXERR_VEC(M_T6_COMPR_RXERR_VEC));
}
}
return 0;
}
/**
* t4_filter_field_shift - calculate filter field shift
* @adap: the adapter
* @filter_sel: the desired field (from TP_VLAN_PRI_MAP bits)
*
* Return the shift position of a filter field within the Compressed
* Filter Tuple. The filter field is specified via its selection bit
* within TP_VLAN_PRI_MAP (filter mode). E.g. F_VLAN.
*/
int t4_filter_field_shift(const struct adapter *adap, int filter_sel)
{
unsigned int filter_mode = adap->params.tp.vlan_pri_map;
unsigned int sel;
int field_shift;
if ((filter_mode & filter_sel) == 0)
return -1;
for (sel = 1, field_shift = 0; sel < filter_sel; sel <<= 1) {
switch (filter_mode & sel) {
case F_FCOE:
field_shift += W_FT_FCOE;
break;
case F_PORT:
field_shift += W_FT_PORT;
break;
case F_VNIC_ID:
field_shift += W_FT_VNIC_ID;
break;
case F_VLAN:
field_shift += W_FT_VLAN;
break;
case F_TOS:
field_shift += W_FT_TOS;
break;
case F_PROTOCOL:
field_shift += W_FT_PROTOCOL;
break;
case F_ETHERTYPE:
field_shift += W_FT_ETHERTYPE;
break;
case F_MACMATCH:
field_shift += W_FT_MACMATCH;
break;
case F_MPSHITTYPE:
field_shift += W_FT_MPSHITTYPE;
break;
case F_FRAGMENTATION:
field_shift += W_FT_FRAGMENTATION;
break;
}
}
return field_shift;
}
int t4_port_init(struct adapter *adap, int mbox, int pf, int vf, int port_id)
{
u8 addr[6];
int ret, i, j;
u16 rss_size;
struct port_info *p = adap2pinfo(adap, port_id);
u32 param, val;
for (i = 0, j = -1; i <= p->port_id; i++) {
do {
j++;
} while ((adap->params.portvec & (1 << j)) == 0);
}
p->tx_chan = j;
p->mps_bg_map = t4_get_mps_bg_map(adap, j);
p->rx_e_chan_map = t4_get_rx_e_chan_map(adap, j);
p->lport = j;
if (!(adap->flags & IS_VF) ||
adap->params.vfres.r_caps & FW_CMD_CAP_PORT) {
t4_update_port_info(p);
}
ret = t4_alloc_vi(adap, mbox, j, pf, vf, 1, addr, &rss_size);
if (ret < 0)
return ret;
p->vi[0].viid = ret;
if (chip_id(adap) <= CHELSIO_T5)
p->vi[0].smt_idx = (ret & 0x7f) << 1;
else
p->vi[0].smt_idx = (ret & 0x7f);
p->vi[0].rss_size = rss_size;
t4_os_set_hw_addr(p, addr);
param = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_RSSINFO) |
V_FW_PARAMS_PARAM_YZ(p->vi[0].viid);
ret = t4_query_params(adap, mbox, pf, vf, 1, &param, &val);
if (ret)
p->vi[0].rss_base = 0xffff;
else {
/* MPASS((val >> 16) == rss_size); */
p->vi[0].rss_base = val & 0xffff;
}
return 0;
}
/**
* t4_read_cimq_cfg - read CIM queue configuration
* @adap: the adapter
* @base: holds the queue base addresses in bytes
* @size: holds the queue sizes in bytes
* @thres: holds the queue full thresholds in bytes
*
* Returns the current configuration of the CIM queues, starting with
* the IBQs, then the OBQs.
*/
void t4_read_cimq_cfg(struct adapter *adap, u16 *base, u16 *size, u16 *thres)
{
unsigned int i, v;
int cim_num_obq = adap->chip_params->cim_num_obq;
for (i = 0; i < CIM_NUM_IBQ; i++) {
t4_write_reg(adap, A_CIM_QUEUE_CONFIG_REF, F_IBQSELECT |
V_QUENUMSELECT(i));
v = t4_read_reg(adap, A_CIM_QUEUE_CONFIG_CTRL);
/* value is in 256-byte units */
*base++ = G_CIMQBASE(v) * 256;
*size++ = G_CIMQSIZE(v) * 256;
*thres++ = G_QUEFULLTHRSH(v) * 8; /* 8-byte unit */
}
for (i = 0; i < cim_num_obq; i++) {
t4_write_reg(adap, A_CIM_QUEUE_CONFIG_REF, F_OBQSELECT |
V_QUENUMSELECT(i));
v = t4_read_reg(adap, A_CIM_QUEUE_CONFIG_CTRL);
/* value is in 256-byte units */
*base++ = G_CIMQBASE(v) * 256;
*size++ = G_CIMQSIZE(v) * 256;
}
}
/**
* t4_read_cim_ibq - read the contents of a CIM inbound queue
* @adap: the adapter
* @qid: the queue index
* @data: where to store the queue contents
* @n: capacity of @data in 32-bit words
*
* Reads the contents of the selected CIM queue starting at address 0 up
* to the capacity of @data. @n must be a multiple of 4. Returns < 0 on
* error and the number of 32-bit words actually read on success.
*/
int t4_read_cim_ibq(struct adapter *adap, unsigned int qid, u32 *data, size_t n)
{
int i, err, attempts;
unsigned int addr;
const unsigned int nwords = CIM_IBQ_SIZE * 4;
if (qid > 5 || (n & 3))
return -EINVAL;
addr = qid * nwords;
if (n > nwords)
n = nwords;
/* It might take 3-10ms before the IBQ debug read access is allowed.
* Wait up to 1 second, polling with a 1 usec delay.
*/
attempts = 1000000;
for (i = 0; i < n; i++, addr++) {
t4_write_reg(adap, A_CIM_IBQ_DBG_CFG, V_IBQDBGADDR(addr) |
F_IBQDBGEN);
err = t4_wait_op_done(adap, A_CIM_IBQ_DBG_CFG, F_IBQDBGBUSY, 0,
attempts, 1);
if (err)
return err;
*data++ = t4_read_reg(adap, A_CIM_IBQ_DBG_DATA);
}
t4_write_reg(adap, A_CIM_IBQ_DBG_CFG, 0);
return i;
}
/**
* t4_read_cim_obq - read the contents of a CIM outbound queue
* @adap: the adapter
* @qid: the queue index
* @data: where to store the queue contents
* @n: capacity of @data in 32-bit words
*
* Reads the contents of the selected CIM queue starting at address 0 up
* to the capacity of @data. @n must be a multiple of 4. Returns < 0 on
* error and the number of 32-bit words actually read on success.
*/
int t4_read_cim_obq(struct adapter *adap, unsigned int qid, u32 *data, size_t n)
{
int i, err;
unsigned int addr, v, nwords;
int cim_num_obq = adap->chip_params->cim_num_obq;
if ((qid > (cim_num_obq - 1)) || (n & 3))
return -EINVAL;
t4_write_reg(adap, A_CIM_QUEUE_CONFIG_REF, F_OBQSELECT |
V_QUENUMSELECT(qid));
v = t4_read_reg(adap, A_CIM_QUEUE_CONFIG_CTRL);
addr = G_CIMQBASE(v) * 64; /* multiple of 256 -> multiple of 4 */
nwords = G_CIMQSIZE(v) * 64; /* same */
if (n > nwords)
n = nwords;
for (i = 0; i < n; i++, addr++) {
t4_write_reg(adap, A_CIM_OBQ_DBG_CFG, V_OBQDBGADDR(addr) |
F_OBQDBGEN);
err = t4_wait_op_done(adap, A_CIM_OBQ_DBG_CFG, F_OBQDBGBUSY, 0,
2, 1);
if (err)
return err;
*data++ = t4_read_reg(adap, A_CIM_OBQ_DBG_DATA);
}
t4_write_reg(adap, A_CIM_OBQ_DBG_CFG, 0);
return i;
}
enum {
CIM_QCTL_BASE = 0,
CIM_CTL_BASE = 0x2000,
CIM_PBT_ADDR_BASE = 0x2800,
CIM_PBT_LRF_BASE = 0x3000,
CIM_PBT_DATA_BASE = 0x3800
};
/**
* t4_cim_read - read a block from CIM internal address space
* @adap: the adapter
* @addr: the start address within the CIM address space
* @n: number of words to read
* @valp: where to store the result
*
* Reads a block of 4-byte words from the CIM internal address space.
*/
int t4_cim_read(struct adapter *adap, unsigned int addr, unsigned int n,
unsigned int *valp)
{
int ret = 0;
if (t4_read_reg(adap, A_CIM_HOST_ACC_CTRL) & F_HOSTBUSY)
return -EBUSY;
for ( ; !ret && n--; addr += 4) {
t4_write_reg(adap, A_CIM_HOST_ACC_CTRL, addr);
ret = t4_wait_op_done(adap, A_CIM_HOST_ACC_CTRL, F_HOSTBUSY,
0, 5, 2);
if (!ret)
*valp++ = t4_read_reg(adap, A_CIM_HOST_ACC_DATA);
}
return ret;
}
/**
* t4_cim_write - write a block into CIM internal address space
* @adap: the adapter
* @addr: the start address within the CIM address space
* @n: number of words to write
* @valp: set of values to write
*
* Writes a block of 4-byte words into the CIM internal address space.
*/
int t4_cim_write(struct adapter *adap, unsigned int addr, unsigned int n,
const unsigned int *valp)
{
int ret = 0;
if (t4_read_reg(adap, A_CIM_HOST_ACC_CTRL) & F_HOSTBUSY)
return -EBUSY;
for ( ; !ret && n--; addr += 4) {
t4_write_reg(adap, A_CIM_HOST_ACC_DATA, *valp++);
t4_write_reg(adap, A_CIM_HOST_ACC_CTRL, addr | F_HOSTWRITE);
ret = t4_wait_op_done(adap, A_CIM_HOST_ACC_CTRL, F_HOSTBUSY,
0, 5, 2);
}
return ret;
}
static int t4_cim_write1(struct adapter *adap, unsigned int addr,
unsigned int val)
{
return t4_cim_write(adap, addr, 1, &val);
}
/**
* t4_cim_ctl_read - read a block from CIM control region
* @adap: the adapter
* @addr: the start address within the CIM control region
* @n: number of words to read
* @valp: where to store the result
*
* Reads a block of 4-byte words from the CIM control region.
*/
int t4_cim_ctl_read(struct adapter *adap, unsigned int addr, unsigned int n,
unsigned int *valp)
{
return t4_cim_read(adap, addr + CIM_CTL_BASE, n, valp);
}
/**
* t4_cim_read_la - read CIM LA capture buffer
* @adap: the adapter
* @la_buf: where to store the LA data
* @wrptr: the HW write pointer within the capture buffer
*
* Reads the contents of the CIM LA buffer with the most recent entry at
* the end of the returned data and with the entry at @wrptr first.
* We try to leave the LA in the running state we find it in.
*/
int t4_cim_read_la(struct adapter *adap, u32 *la_buf, unsigned int *wrptr)
{
int i, ret;
unsigned int cfg, val, idx;
ret = t4_cim_read(adap, A_UP_UP_DBG_LA_CFG, 1, &cfg);
if (ret)
return ret;
if (cfg & F_UPDBGLAEN) { /* LA is running, freeze it */
ret = t4_cim_write1(adap, A_UP_UP_DBG_LA_CFG, 0);
if (ret)
return ret;
}
ret = t4_cim_read(adap, A_UP_UP_DBG_LA_CFG, 1, &val);
if (ret)
goto restart;
idx = G_UPDBGLAWRPTR(val);
if (wrptr)
*wrptr = idx;
for (i = 0; i < adap->params.cim_la_size; i++) {
ret = t4_cim_write1(adap, A_UP_UP_DBG_LA_CFG,
V_UPDBGLARDPTR(idx) | F_UPDBGLARDEN);
if (ret)
break;
ret = t4_cim_read(adap, A_UP_UP_DBG_LA_CFG, 1, &val);
if (ret)
break;
if (val & F_UPDBGLARDEN) {
ret = -ETIMEDOUT;
break;
}
ret = t4_cim_read(adap, A_UP_UP_DBG_LA_DATA, 1, &la_buf[i]);
if (ret)
break;
/* address can't exceed 0xfff (UpDbgLaRdPtr is 12 bits wide) */
idx = (idx + 1) & M_UPDBGLARDPTR;
/*
* Bits 0-3 of UpDbgLaRdPtr can range from 0000 to 1001 and
* identify the 32-bit portion of the full 312-bit data
*/
if (is_t6(adap))
while ((idx & 0xf) > 9)
idx = (idx + 1) % M_UPDBGLARDPTR;
}
restart:
if (cfg & F_UPDBGLAEN) {
int r = t4_cim_write1(adap, A_UP_UP_DBG_LA_CFG,
cfg & ~F_UPDBGLARDEN);
if (!ret)
ret = r;
}
return ret;
}
/**
* t4_tp_read_la - read TP LA capture buffer
* @adap: the adapter
* @la_buf: where to store the LA data
* @wrptr: the HW write pointer within the capture buffer
*
* Reads the contents of the TP LA buffer with the most recent entry at
* the end of the returned data and with the entry at @wrptr first.
* We leave the LA in the running state we find it in.
*/
void t4_tp_read_la(struct adapter *adap, u64 *la_buf, unsigned int *wrptr)
{
bool last_incomplete;
unsigned int i, cfg, val, idx;
cfg = t4_read_reg(adap, A_TP_DBG_LA_CONFIG) & 0xffff;
if (cfg & F_DBGLAENABLE) /* freeze LA */
t4_write_reg(adap, A_TP_DBG_LA_CONFIG,
adap->params.tp.la_mask | (cfg ^ F_DBGLAENABLE));
val = t4_read_reg(adap, A_TP_DBG_LA_CONFIG);
idx = G_DBGLAWPTR(val);
last_incomplete = G_DBGLAMODE(val) >= 2 && (val & F_DBGLAWHLF) == 0;
if (last_incomplete)
idx = (idx + 1) & M_DBGLARPTR;
if (wrptr)
*wrptr = idx;
val &= 0xffff;
val &= ~V_DBGLARPTR(M_DBGLARPTR);
val |= adap->params.tp.la_mask;
for (i = 0; i < TPLA_SIZE; i++) {
t4_write_reg(adap, A_TP_DBG_LA_CONFIG, V_DBGLARPTR(idx) | val);
la_buf[i] = t4_read_reg64(adap, A_TP_DBG_LA_DATAL);
idx = (idx + 1) & M_DBGLARPTR;
}
/* Wipe out last entry if it isn't valid */
if (last_incomplete)
la_buf[TPLA_SIZE - 1] = ~0ULL;
if (cfg & F_DBGLAENABLE) /* restore running state */
t4_write_reg(adap, A_TP_DBG_LA_CONFIG,
cfg | adap->params.tp.la_mask);
}
/*
* SGE Hung Ingress DMA Warning Threshold time and Warning Repeat Rate (in
* seconds). If we find one of the SGE Ingress DMA State Machines in the same
* state for more than the Warning Threshold then we'll issue a warning about
* a potential hang. We'll repeat the warning every Warning Repeat seconds
* for as long as the SGE Ingress DMA Channel appears to be hung.
* If the situation clears, we'll note that as well.
*/
#define SGE_IDMA_WARN_THRESH 1
#define SGE_IDMA_WARN_REPEAT 300
/**
* t4_idma_monitor_init - initialize SGE Ingress DMA Monitor
* @adapter: the adapter
* @idma: the adapter IDMA Monitor state
*
* Initialize the state of an SGE Ingress DMA Monitor.
*/
void t4_idma_monitor_init(struct adapter *adapter,
struct sge_idma_monitor_state *idma)
{
/* Initialize the state variables for detecting an SGE Ingress DMA
* hang. The SGE has internal counters which count up on each clock
* tick whenever the SGE finds its Ingress DMA State Engines in the
* same state they were on the previous clock tick. The clock used is
* the Core Clock so we have a limit on the maximum "time" they can
* record; typically a very small number of seconds. For instance,
* with a 600MHz Core Clock, we can only count up to a bit more than
* 7s. So we'll synthesize a larger counter in order to not run the
* risk of having the "timers" overflow and give us the flexibility to
* maintain a Hung SGE State Machine of our own which operates across
* a longer time frame.
*/
idma->idma_1s_thresh = core_ticks_per_usec(adapter) * 1000000; /* 1s */
idma->idma_stalled[0] = idma->idma_stalled[1] = 0;
}
/**
* t4_idma_monitor - monitor SGE Ingress DMA state
* @adapter: the adapter
* @idma: the adapter IDMA Monitor state
* @hz: number of ticks/second
* @ticks: number of ticks since the last IDMA Monitor call
*/
void t4_idma_monitor(struct adapter *adapter,
struct sge_idma_monitor_state *idma,
int hz, int ticks)
{
int i, idma_same_state_cnt[2];
/* Read the SGE Debug Ingress DMA Same State Count registers. These
* are counters inside the SGE which count up on each clock when the
* SGE finds its Ingress DMA State Engines in the same states they
* were in the previous clock. The counters will peg out at
* 0xffffffff without wrapping around so once they pass the 1s
* threshold they'll stay above that till the IDMA state changes.
*/
t4_write_reg(adapter, A_SGE_DEBUG_INDEX, 13);
idma_same_state_cnt[0] = t4_read_reg(adapter, A_SGE_DEBUG_DATA_HIGH);
idma_same_state_cnt[1] = t4_read_reg(adapter, A_SGE_DEBUG_DATA_LOW);
for (i = 0; i < 2; i++) {
u32 debug0, debug11;
/* If the Ingress DMA Same State Counter ("timer") is less
* than 1s, then we can reset our synthesized Stall Timer and
* continue. If we have previously emitted warnings about a
* potential stalled Ingress Queue, issue a note indicating
* that the Ingress Queue has resumed forward progress.
*/
if (idma_same_state_cnt[i] < idma->idma_1s_thresh) {
if (idma->idma_stalled[i] >= SGE_IDMA_WARN_THRESH*hz)
CH_WARN(adapter, "SGE idma%d, queue %u, "
"resumed after %d seconds\n",
i, idma->idma_qid[i],
idma->idma_stalled[i]/hz);
idma->idma_stalled[i] = 0;
continue;
}
/* Synthesize an SGE Ingress DMA Same State Timer in the Hz
* domain. The first time we get here it'll be because we
* passed the 1s Threshold; each additional time it'll be
* because the RX Timer Callback is being fired on its regular
* schedule.
*
* If the stall is below our Potential Hung Ingress Queue
* Warning Threshold, continue.
*/
if (idma->idma_stalled[i] == 0) {
idma->idma_stalled[i] = hz;
idma->idma_warn[i] = 0;
} else {
idma->idma_stalled[i] += ticks;
idma->idma_warn[i] -= ticks;
}
if (idma->idma_stalled[i] < SGE_IDMA_WARN_THRESH*hz)
continue;
/* We'll issue a warning every SGE_IDMA_WARN_REPEAT seconds.
*/
if (idma->idma_warn[i] > 0)
continue;
idma->idma_warn[i] = SGE_IDMA_WARN_REPEAT*hz;
/* Read and save the SGE IDMA State and Queue ID information.
* We do this every time in case it changes across time ...
* can't be too careful ...
*/
t4_write_reg(adapter, A_SGE_DEBUG_INDEX, 0);
debug0 = t4_read_reg(adapter, A_SGE_DEBUG_DATA_LOW);
idma->idma_state[i] = (debug0 >> (i * 9)) & 0x3f;
t4_write_reg(adapter, A_SGE_DEBUG_INDEX, 11);
debug11 = t4_read_reg(adapter, A_SGE_DEBUG_DATA_LOW);
idma->idma_qid[i] = (debug11 >> (i * 16)) & 0xffff;
CH_WARN(adapter, "SGE idma%u, queue %u, potentially stuck in "
" state %u for %d seconds (debug0=%#x, debug11=%#x)\n",
i, idma->idma_qid[i], idma->idma_state[i],
idma->idma_stalled[i]/hz,
debug0, debug11);
t4_sge_decode_idma_state(adapter, idma->idma_state[i]);
}
}
/**
* t4_read_pace_tbl - read the pace table
* @adap: the adapter
* @pace_vals: holds the returned values
*
* Returns the values of TP's pace table in microseconds.
*/
void t4_read_pace_tbl(struct adapter *adap, unsigned int pace_vals[NTX_SCHED])
{
unsigned int i, v;
for (i = 0; i < NTX_SCHED; i++) {
t4_write_reg(adap, A_TP_PACE_TABLE, 0xffff0000 + i);
v = t4_read_reg(adap, A_TP_PACE_TABLE);
pace_vals[i] = dack_ticks_to_usec(adap, v);
}
}
/**
* t4_get_tx_sched - get the configuration of a Tx HW traffic scheduler
* @adap: the adapter
* @sched: the scheduler index
* @kbps: the byte rate in Kbps
* @ipg: the interpacket delay in tenths of nanoseconds
*
* Return the current configuration of a HW Tx scheduler.
*/
void t4_get_tx_sched(struct adapter *adap, unsigned int sched, unsigned int *kbps,
unsigned int *ipg, bool sleep_ok)
{
unsigned int v, addr, bpt, cpt;
if (kbps) {
addr = A_TP_TX_MOD_Q1_Q0_RATE_LIMIT - sched / 2;
t4_tp_tm_pio_read(adap, &v, 1, addr, sleep_ok);
if (sched & 1)
v >>= 16;
bpt = (v >> 8) & 0xff;
cpt = v & 0xff;
if (!cpt)
*kbps = 0; /* scheduler disabled */
else {
v = (adap->params.vpd.cclk * 1000) / cpt; /* ticks/s */
*kbps = (v * bpt) / 125;
}
}
if (ipg) {
addr = A_TP_TX_MOD_Q1_Q0_TIMER_SEPARATOR - sched / 2;
t4_tp_tm_pio_read(adap, &v, 1, addr, sleep_ok);
if (sched & 1)
v >>= 16;
v &= 0xffff;
*ipg = (10000 * v) / core_ticks_per_usec(adap);
}
}
/**
* t4_load_cfg - download config file
* @adap: the adapter
* @cfg_data: the cfg text file to write
* @size: text file size
*
* Write the supplied config text file to the card's serial flash.
*/
int t4_load_cfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
{
int ret, i, n, cfg_addr;
unsigned int addr;
unsigned int flash_cfg_start_sec;
unsigned int sf_sec_size = adap->params.sf_size / adap->params.sf_nsec;
cfg_addr = t4_flash_cfg_addr(adap);
if (cfg_addr < 0)
return cfg_addr;
addr = cfg_addr;
flash_cfg_start_sec = addr / SF_SEC_SIZE;
if (size > FLASH_CFG_MAX_SIZE) {
CH_ERR(adap, "cfg file too large, max is %u bytes\n",
FLASH_CFG_MAX_SIZE);
return -EFBIG;
}
i = DIV_ROUND_UP(FLASH_CFG_MAX_SIZE, /* # of sectors spanned */
sf_sec_size);
ret = t4_flash_erase_sectors(adap, flash_cfg_start_sec,
flash_cfg_start_sec + i - 1);
/*
* If size == 0 then we're simply erasing the FLASH sectors associated
* with the on-adapter Firmware Configuration File.
*/
if (ret || size == 0)
goto out;
/* this will write to the flash up to SF_PAGE_SIZE at a time */
for (i = 0; i < size; i += SF_PAGE_SIZE) {
if ((size - i) < SF_PAGE_SIZE)
n = size - i;
else
n = SF_PAGE_SIZE;
ret = t4_write_flash(adap, addr, n, cfg_data, 1);
if (ret)
goto out;
addr += SF_PAGE_SIZE;
cfg_data += SF_PAGE_SIZE;
}
out:
if (ret)
CH_ERR(adap, "config file %s failed %d\n",
(size == 0 ? "clear" : "download"), ret);
return ret;
}
/**
* t5_fw_init_extern_mem - initialize the external memory
* @adap: the adapter
*
* Initializes the external memory on T5.
*/
int t5_fw_init_extern_mem(struct adapter *adap)
{
u32 params[1], val[1];
int ret;
if (!is_t5(adap))
return 0;
val[0] = 0xff; /* Initialize all MCs */
params[0] = (V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_MCINIT));
ret = t4_set_params_timeout(adap, adap->mbox, adap->pf, 0, 1, params, val,
FW_CMD_MAX_TIMEOUT);
return ret;
}
/* BIOS boot headers */
typedef struct pci_expansion_rom_header {
u8 signature[2]; /* ROM Signature. Should be 0xaa55 */
u8 reserved[22]; /* Reserved per processor Architecture data */
u8 pcir_offset[2]; /* Offset to PCI Data Structure */
} pci_exp_rom_header_t; /* PCI_EXPANSION_ROM_HEADER */
/* Legacy PCI Expansion ROM Header */
typedef struct legacy_pci_expansion_rom_header {
u8 signature[2]; /* ROM Signature. Should be 0xaa55 */
u8 size512; /* Current Image Size in units of 512 bytes */
u8 initentry_point[4];
u8 cksum; /* Checksum computed on the entire Image */
u8 reserved[16]; /* Reserved */
u8 pcir_offset[2]; /* Offset to PCI Data Structure */
} legacy_pci_exp_rom_header_t; /* LEGACY_PCI_EXPANSION_ROM_HEADER */
/* EFI PCI Expansion ROM Header */
typedef struct efi_pci_expansion_rom_header {
u8 signature[2]; /* ROM signature. The value 0xaa55 */
u8 initialization_size[2]; /* Units 512. Includes this header */
u8 efi_signature[4]; /* Signature from EFI image header. 0x0EF1 */
u8 efi_subsystem[2]; /* Subsystem value for EFI image header */
u8 efi_machine_type[2]; /* Machine type from EFI image header */
u8 compression_type[2]; /* Compression type. */
/*
* Compression type definition
* 0x0: uncompressed
* 0x1: Compressed
* 0x2-0xFFFF: Reserved
*/
u8 reserved[8]; /* Reserved */
u8 efi_image_header_offset[2]; /* Offset to EFI Image */
u8 pcir_offset[2]; /* Offset to PCI Data Structure */
} efi_pci_exp_rom_header_t; /* EFI PCI Expansion ROM Header */
/* PCI Data Structure Format */
typedef struct pcir_data_structure { /* PCI Data Structure */
u8 signature[4]; /* Signature. The string "PCIR" */
u8 vendor_id[2]; /* Vendor Identification */
u8 device_id[2]; /* Device Identification */
u8 vital_product[2]; /* Pointer to Vital Product Data */
u8 length[2]; /* PCIR Data Structure Length */
u8 revision; /* PCIR Data Structure Revision */
u8 class_code[3]; /* Class Code */
u8 image_length[2]; /* Image Length. Multiple of 512B */
u8 code_revision[2]; /* Revision Level of Code/Data */
u8 code_type; /* Code Type. */
/*
* PCI Expansion ROM Code Types
* 0x00: Intel IA-32, PC-AT compatible. Legacy
* 0x01: Open Firmware standard for PCI. FCODE
* 0x02: Hewlett-Packard PA RISC. HP reserved
* 0x03: EFI Image. EFI
* 0x04-0xFF: Reserved.
*/
u8 indicator; /* Indicator. Identifies the last image in the ROM */
u8 reserved[2]; /* Reserved */
} pcir_data_t; /* PCI__DATA_STRUCTURE */
/* BOOT constants */
enum {
BOOT_FLASH_BOOT_ADDR = 0x0,/* start address of boot image in flash */
BOOT_SIGNATURE = 0xaa55, /* signature of BIOS boot ROM */
BOOT_SIZE_INC = 512, /* image size measured in 512B chunks */
BOOT_MIN_SIZE = sizeof(pci_exp_rom_header_t), /* basic header */
BOOT_MAX_SIZE = 1024*BOOT_SIZE_INC, /* 1024 * 512B = 512KB max */
VENDOR_ID = 0x1425, /* Vendor ID */
PCIR_SIGNATURE = 0x52494350 /* PCIR signature */
};
/*
* modify_device_id - Modifies the device ID of the Boot BIOS image
* @device_id: the device ID to write.
* @boot_data: the boot image to modify.
*
* Write the supplied device ID to the boot BIOS image.
*/
static void modify_device_id(int device_id, u8 *boot_data)
{
legacy_pci_exp_rom_header_t *header;
pcir_data_t *pcir_header;
u32 cur_header = 0;
/*
* Loop through all chained images and change the device ID's
*/
while (1) {
header = (legacy_pci_exp_rom_header_t *) &boot_data[cur_header];
pcir_header = (pcir_data_t *) &boot_data[cur_header +
le16_to_cpu(*(u16*)header->pcir_offset)];
/*
* Only modify the Device ID if code type is Legacy or HP.
* 0x00: Okay to modify
* 0x01: FCODE. Do not modify
* 0x03: Okay to modify
* 0x04-0xFF: Do not modify
*/
if (pcir_header->code_type == 0x00) {
u8 csum = 0;
int i;
/*
* Modify Device ID to match the current adapter
*/
*(u16*) pcir_header->device_id = device_id;
/*
* Set checksum temporarily to 0.
* We will recalculate it later.
*/
header->cksum = 0x0;
/*
* Calculate and update checksum
*/
for (i = 0; i < (header->size512 * 512); i++)
csum += (u8)boot_data[cur_header + i];
/*
* Negate the summed value so the image bytes sum to zero,
* and write the new checksum directly into the boot data.
*/
boot_data[cur_header + 7] = -csum;
} else if (pcir_header->code_type == 0x03) {
/*
* Modify Device ID to match the current adapter
*/
*(u16*) pcir_header->device_id = device_id;
}
/*
* Check indicator element to identify if this is the last
* image in the ROM.
*/
if (pcir_header->indicator & 0x80)
break;
/*
* Move header pointer up to the next image in the ROM.
*/
cur_header += header->size512 * 512;
}
}
/*
* t4_load_boot - download boot flash
* @adapter: the adapter
* @boot_data: the boot image to write
* @boot_addr: offset in flash to write boot_data
* @size: image size
*
* Write the supplied boot image to the card's serial flash.
* The boot image has the following sections: a 28-byte header and the
* boot image.
*/
int t4_load_boot(struct adapter *adap, u8 *boot_data,
unsigned int boot_addr, unsigned int size)
{
pci_exp_rom_header_t *header;
int pcir_offset;
pcir_data_t *pcir_header;
int ret, addr;
uint16_t device_id;
unsigned int i;
unsigned int boot_sector = boot_addr * 1024;
unsigned int sf_sec_size = adap->params.sf_size / adap->params.sf_nsec;
/*
* Make sure the boot image does not encroach on the firmware region
*/
if ((boot_sector + size) >> 16 > FLASH_FW_START_SEC) {
CH_ERR(adap, "boot image encroaching on firmware region\n");
return -EFBIG;
}
/*
* The boot sector is comprised of the Expansion-ROM boot, iSCSI boot,
* and Boot configuration data sections. These 3 boot sections span
* sectors 0 to 7 in flash and live right before the FW image location.
*/
i = DIV_ROUND_UP(size ? size : FLASH_FW_START,
sf_sec_size);
ret = t4_flash_erase_sectors(adap, boot_sector >> 16,
(boot_sector >> 16) + i - 1);
/*
* If size == 0 then we're simply erasing the FLASH sectors associated
* with the on-adapter option ROM file
*/
if (ret || (size == 0))
goto out;
/* Get boot header */
header = (pci_exp_rom_header_t *)boot_data;
pcir_offset = le16_to_cpu(*(u16 *)header->pcir_offset);
/* PCIR Data Structure */
pcir_header = (pcir_data_t *) &boot_data[pcir_offset];
/*
* Perform some primitive sanity testing to avoid accidentally
* writing garbage over the boot sectors. We ought to check for
* more but it's not worth it for now ...
*/
if (size < BOOT_MIN_SIZE || size > BOOT_MAX_SIZE) {
CH_ERR(adap, "boot image too small/large\n");
return -EFBIG;
}
#ifndef CHELSIO_T4_DIAGS
/*
* Check BOOT ROM header signature
*/
if (le16_to_cpu(*(u16*)header->signature) != BOOT_SIGNATURE) {
CH_ERR(adap, "Boot image missing signature\n");
return -EINVAL;
}
/*
* Check PCI header signature
*/
if (le32_to_cpu(*(u32*)pcir_header->signature) != PCIR_SIGNATURE) {
CH_ERR(adap, "PCI header missing signature\n");
return -EINVAL;
}
/*
* Check Vendor ID matches Chelsio ID
*/
if (le16_to_cpu(*(u16*)pcir_header->vendor_id) != VENDOR_ID) {
CH_ERR(adap, "Vendor ID mismatch\n");
return -EINVAL;
}
#endif
/*
* Retrieve adapter's device ID
*/
t4_os_pci_read_cfg2(adap, PCI_DEVICE_ID, &device_id);
/* We want to deal with PF 0, so strip off the PF 4 indicator. */
device_id = device_id & 0xf0ff;
/*
* Check PCIE Device ID
*/
if (le16_to_cpu(*(u16*)pcir_header->device_id) != device_id) {
/*
* Change the device ID in the Boot BIOS image to match
* the Device ID of the current adapter.
*/
modify_device_id(device_id, boot_data);
}
/*
* Skip over the first SF_PAGE_SIZE worth of data and write it after
* we finish copying the rest of the boot image. This will ensure
* that the BIOS boot header will only be written if the boot image
* was written in full.
*/
addr = boot_sector;
for (size -= SF_PAGE_SIZE; size; size -= SF_PAGE_SIZE) {
addr += SF_PAGE_SIZE;
boot_data += SF_PAGE_SIZE;
ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, boot_data, 0);
if (ret)
goto out;
}
ret = t4_write_flash(adap, boot_sector, SF_PAGE_SIZE,
(const u8 *)header, 0);
out:
if (ret)
CH_ERR(adap, "boot image download failed, error %d\n", ret);
return ret;
}
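The write ordering above (skip the first SF_PAGE_SIZE, write the body pages, then write the header page last) is what makes an interrupted download safe: the BIOS signature only lands on flash once everything else is in place. A minimal, self-contained sketch of that pattern against an in-memory "flash"; `PAGE` and `write_image_header_last` are illustrative stand-ins for SF_PAGE_SIZE and the t4_write_flash loop, not the driver's API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE 256	/* stand-in for SF_PAGE_SIZE */

/* Copy `size` bytes (a multiple of PAGE) of `img` into `flash`, body
 * pages first and page 0 (the header) last, so an interrupted write
 * never leaves a valid-looking header over a partial image. */
static void write_image_header_last(unsigned char *flash,
    const unsigned char *img, size_t size)
{
	size_t off;

	for (off = PAGE; off < size; off += PAGE)	/* body pages */
		memcpy(flash + off, img + off, PAGE);
	memcpy(flash, img, PAGE);			/* header page last */
}
```

If the loop is cut short, page 0 still holds erased flash (no boot signature), so the option ROM is simply ignored rather than half-booted.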
/*
* t4_flash_bootcfg_addr - return the address of the flash optionrom configuration
* @adapter: the adapter
*
* Return the address within the flash where the OptionROM Configuration
* is stored, or an error if the device FLASH is too small to contain
* an OptionROM Configuration.
*/
static int t4_flash_bootcfg_addr(struct adapter *adapter)
{
/*
* If the device FLASH isn't large enough to hold a Firmware
* Configuration File, return an error.
*/
if (adapter->params.sf_size < FLASH_BOOTCFG_START + FLASH_BOOTCFG_MAX_SIZE)
return -ENOSPC;
return FLASH_BOOTCFG_START;
}
int t4_load_bootcfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
{
int ret, i, n, cfg_addr;
unsigned int addr;
unsigned int flash_cfg_start_sec;
unsigned int sf_sec_size = adap->params.sf_size / adap->params.sf_nsec;
cfg_addr = t4_flash_bootcfg_addr(adap);
if (cfg_addr < 0)
return cfg_addr;
addr = cfg_addr;
flash_cfg_start_sec = addr / SF_SEC_SIZE;
if (size > FLASH_BOOTCFG_MAX_SIZE) {
CH_ERR(adap, "bootcfg file too large, max is %u bytes\n",
FLASH_BOOTCFG_MAX_SIZE);
return -EFBIG;
}
i = DIV_ROUND_UP(FLASH_BOOTCFG_MAX_SIZE, /* # of sectors spanned */
sf_sec_size);
ret = t4_flash_erase_sectors(adap, flash_cfg_start_sec,
flash_cfg_start_sec + i - 1);
/*
* If size == 0 then we're simply erasing the FLASH sectors associated
* with the on-adapter OptionROM Configuration File.
*/
if (ret || size == 0)
goto out;
/* this will write to the flash up to SF_PAGE_SIZE at a time */
for (i = 0; i < size; i += SF_PAGE_SIZE) {
if ((size - i) < SF_PAGE_SIZE)
n = size - i;
else
n = SF_PAGE_SIZE;
ret = t4_write_flash(adap, addr, n, cfg_data, 0);
if (ret)
goto out;
addr += SF_PAGE_SIZE;
cfg_data += SF_PAGE_SIZE;
}
out:
if (ret)
CH_ERR(adap, "boot config data %s failed %d\n",
(size == 0 ? "clear" : "download"), ret);
return ret;
}
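The page-at-a-time loop above always advances by SF_PAGE_SIZE but writes only the bytes that remain on the final page. The chunk computation can be sketched on its own (helper name and the 256-byte page size are mine, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE 256	/* stand-in for SF_PAGE_SIZE */

/* Bytes to write at offset `off` of a `total`-byte transfer: a full
 * page on every iteration except possibly the last. */
static size_t chunk_len(size_t total, size_t off)
{
	return ((total - off) < PAGE) ? total - off : PAGE;
}
```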
/**
* t4_set_filter_mode - configure the optional components of filter tuples
* @adap: the adapter
* @mode_map: a bitmap selecting which optional filter components to enable
* @sleep_ok: if true we may sleep while awaiting command completion
*
* Sets the filter mode by selecting the optional components to enable
* in filter tuples. Returns 0 on success and a negative error if the
* requested mode needs more bits than are available for optional
* components.
*/
int t4_set_filter_mode(struct adapter *adap, unsigned int mode_map,
bool sleep_ok)
{
static u8 width[] = { 1, 3, 17, 17, 8, 8, 16, 9, 3, 1 };
int i, nbits = 0;
for (i = S_FCOE; i <= S_FRAGMENTATION; i++)
if (mode_map & (1 << i))
nbits += width[i];
if (nbits > FILTER_OPT_LEN)
return -EINVAL;
t4_tp_pio_write(adap, &mode_map, 1, A_TP_VLAN_PRI_MAP, sleep_ok);
read_filter_mode_and_ingress_config(adap, sleep_ok);
return 0;
}
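The bookkeeping in t4_set_filter_mode() is a widths-times-bitmap sum checked against a bit budget before the mode is committed to hardware. A standalone sketch, with S_FCOE..S_FRAGMENTATION collapsed to indices 0..9 and a hypothetical 36-bit budget standing in for FILTER_OPT_LEN:

```c
#include <assert.h>

/* Field widths of the ten optional tuple components, matching the
 * width[] table in t4_set_filter_mode() above. */
static const unsigned char opt_width[] = { 1, 3, 17, 17, 8, 8, 16, 9, 3, 1 };

/* Total bits consumed by the components selected in mode_map, or -1
 * if they exceed the available budget. */
static int opt_bits(unsigned int mode_map, int budget)
{
	int i, nbits = 0;

	for (i = 0; i < 10; i++)
		if (mode_map & (1u << i))
			nbits += opt_width[i];
	return (nbits > budget) ? -1 : nbits;
}
```

Selecting both 17-bit components plus the 16-bit one (50 bits) already overflows a 36-bit budget, which is exactly the -EINVAL case in the function above.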
/**
* t4_clr_port_stats - clear port statistics
* @adap: the adapter
* @idx: the port index
*
* Clear HW statistics for the given port.
*/
void t4_clr_port_stats(struct adapter *adap, int idx)
{
unsigned int i;
u32 bgmap = adap2pinfo(adap, idx)->mps_bg_map;
u32 port_base_addr;
if (is_t4(adap))
port_base_addr = PORT_BASE(idx);
else
port_base_addr = T5_PORT_BASE(idx);
for (i = A_MPS_PORT_STAT_TX_PORT_BYTES_L;
i <= A_MPS_PORT_STAT_TX_PORT_PPP7_H; i += 8)
t4_write_reg(adap, port_base_addr + i, 0);
for (i = A_MPS_PORT_STAT_RX_PORT_BYTES_L;
i <= A_MPS_PORT_STAT_RX_PORT_LESS_64B_H; i += 8)
t4_write_reg(adap, port_base_addr + i, 0);
for (i = 0; i < 4; i++)
if (bgmap & (1 << i)) {
t4_write_reg(adap,
A_MPS_STAT_RX_BG_0_MAC_DROP_FRAME_L + i * 8, 0);
t4_write_reg(adap,
A_MPS_STAT_RX_BG_0_MAC_TRUNC_FRAME_L + i * 8, 0);
}
}
/**
* t4_i2c_rd - read I2C data from adapter
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @port: Port number if per-port device; <0 if not
* @devid: per-port device ID or absolute device ID
* @offset: byte offset into device I2C space
* @len: byte length of I2C space data
* @buf: buffer in which to return I2C data
*
* Reads the I2C data from the indicated device and location.
*/
int t4_i2c_rd(struct adapter *adap, unsigned int mbox,
int port, unsigned int devid,
unsigned int offset, unsigned int len,
u8 *buf)
{
u32 ldst_addrspace;
struct fw_ldst_cmd ldst;
int ret;
if (port >= 4 ||
devid >= 256 ||
offset >= 256 ||
len > sizeof ldst.u.i2c.data)
return -EINVAL;
memset(&ldst, 0, sizeof ldst);
ldst_addrspace = V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_I2C);
ldst.op_to_addrspace =
cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_READ |
ldst_addrspace);
ldst.cycles_to_len16 = cpu_to_be32(FW_LEN16(ldst));
ldst.u.i2c.pid = (port < 0 ? 0xff : port);
ldst.u.i2c.did = devid;
ldst.u.i2c.boffset = offset;
ldst.u.i2c.blen = len;
ret = t4_wr_mbox(adap, mbox, &ldst, sizeof ldst, &ldst);
if (!ret)
memcpy(buf, ldst.u.i2c.data, len);
return ret;
}
/**
* t4_i2c_wr - write I2C data to adapter
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @port: Port number if per-port device; <0 if not
* @devid: per-port device ID or absolute device ID
* @offset: byte offset into device I2C space
* @len: byte length of I2C space data
* @buf: buffer containing new I2C data
*
* Write the I2C data to the indicated device and location.
*/
int t4_i2c_wr(struct adapter *adap, unsigned int mbox,
int port, unsigned int devid,
unsigned int offset, unsigned int len,
u8 *buf)
{
u32 ldst_addrspace;
struct fw_ldst_cmd ldst;
if (port >= 4 ||
devid >= 256 ||
offset >= 256 ||
len > sizeof ldst.u.i2c.data)
return -EINVAL;
memset(&ldst, 0, sizeof ldst);
ldst_addrspace = V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_I2C);
ldst.op_to_addrspace =
cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE |
ldst_addrspace);
ldst.cycles_to_len16 = cpu_to_be32(FW_LEN16(ldst));
ldst.u.i2c.pid = (port < 0 ? 0xff : port);
ldst.u.i2c.did = devid;
ldst.u.i2c.boffset = offset;
ldst.u.i2c.blen = len;
memcpy(ldst.u.i2c.data, buf, len);
return t4_wr_mbox(adap, mbox, &ldst, sizeof ldst, &ldst);
}
/**
* t4_sge_ctxt_rd - read an SGE context through FW
* @adap: the adapter
* @mbox: mailbox to use for the FW command
* @cid: the context id
* @ctype: the context type
* @data: where to store the context data
*
* Issues a FW command through the given mailbox to read an SGE context.
*/
int t4_sge_ctxt_rd(struct adapter *adap, unsigned int mbox, unsigned int cid,
enum ctxt_type ctype, u32 *data)
{
int ret;
struct fw_ldst_cmd c;
if (ctype == CTXT_EGRESS)
ret = FW_LDST_ADDRSPC_SGE_EGRC;
else if (ctype == CTXT_INGRESS)
ret = FW_LDST_ADDRSPC_SGE_INGC;
else if (ctype == CTXT_FLM)
ret = FW_LDST_ADDRSPC_SGE_FLMC;
else
ret = FW_LDST_ADDRSPC_SGE_CONMC;
memset(&c, 0, sizeof(c));
c.op_to_addrspace = cpu_to_be32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ |
V_FW_LDST_CMD_ADDRSPACE(ret));
c.cycles_to_len16 = cpu_to_be32(FW_LEN16(c));
c.u.idctxt.physid = cpu_to_be32(cid);
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
if (ret == 0) {
data[0] = be32_to_cpu(c.u.idctxt.ctxt_data0);
data[1] = be32_to_cpu(c.u.idctxt.ctxt_data1);
data[2] = be32_to_cpu(c.u.idctxt.ctxt_data2);
data[3] = be32_to_cpu(c.u.idctxt.ctxt_data3);
data[4] = be32_to_cpu(c.u.idctxt.ctxt_data4);
data[5] = be32_to_cpu(c.u.idctxt.ctxt_data5);
}
return ret;
}
/**
* t4_sge_ctxt_rd_bd - read an SGE context bypassing FW
* @adap: the adapter
* @cid: the context id
* @ctype: the context type
* @data: where to store the context data
*
* Reads an SGE context directly, bypassing FW. This is only for
* debugging when FW is unavailable.
*/
int t4_sge_ctxt_rd_bd(struct adapter *adap, unsigned int cid, enum ctxt_type ctype,
u32 *data)
{
int i, ret;
t4_write_reg(adap, A_SGE_CTXT_CMD, V_CTXTQID(cid) | V_CTXTTYPE(ctype));
ret = t4_wait_op_done(adap, A_SGE_CTXT_CMD, F_BUSY, 0, 3, 1);
if (!ret)
for (i = A_SGE_CTXT_DATA0; i <= A_SGE_CTXT_DATA5; i += 4)
*data++ = t4_read_reg(adap, i);
return ret;
}
int t4_sched_config(struct adapter *adapter, int type, int minmaxen,
int sleep_ok)
{
struct fw_sched_cmd cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_SCHED_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE);
cmd.retval_len16 = cpu_to_be32(FW_LEN16(cmd));
cmd.u.config.sc = FW_SCHED_SC_CONFIG;
cmd.u.config.type = type;
cmd.u.config.minmaxen = minmaxen;
return t4_wr_mbox_meat(adapter, adapter->mbox, &cmd, sizeof(cmd),
NULL, sleep_ok);
}
int t4_sched_params(struct adapter *adapter, int type, int level, int mode,
int rateunit, int ratemode, int channel, int cl,
int minrate, int maxrate, int weight, int pktsize,
int burstsize, int sleep_ok)
{
struct fw_sched_cmd cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_SCHED_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE);
cmd.retval_len16 = cpu_to_be32(FW_LEN16(cmd));
cmd.u.params.sc = FW_SCHED_SC_PARAMS;
cmd.u.params.type = type;
cmd.u.params.level = level;
cmd.u.params.mode = mode;
cmd.u.params.ch = channel;
cmd.u.params.cl = cl;
cmd.u.params.unit = rateunit;
cmd.u.params.rate = ratemode;
cmd.u.params.min = cpu_to_be32(minrate);
cmd.u.params.max = cpu_to_be32(maxrate);
cmd.u.params.weight = cpu_to_be16(weight);
cmd.u.params.pktsize = cpu_to_be16(pktsize);
cmd.u.params.burstsize = cpu_to_be16(burstsize);
return t4_wr_mbox_meat(adapter, adapter->mbox, &cmd, sizeof(cmd),
NULL, sleep_ok);
}
int t4_sched_params_ch_rl(struct adapter *adapter, int channel, int ratemode,
unsigned int maxrate, int sleep_ok)
{
struct fw_sched_cmd cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_SCHED_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE);
cmd.retval_len16 = cpu_to_be32(FW_LEN16(cmd));
cmd.u.params.sc = FW_SCHED_SC_PARAMS;
cmd.u.params.type = FW_SCHED_TYPE_PKTSCHED;
cmd.u.params.level = FW_SCHED_PARAMS_LEVEL_CH_RL;
cmd.u.params.ch = channel;
cmd.u.params.rate = ratemode; /* REL or ABS */
cmd.u.params.max = cpu_to_be32(maxrate);/* % or kbps */
return t4_wr_mbox_meat(adapter, adapter->mbox, &cmd, sizeof(cmd),
NULL, sleep_ok);
}
int t4_sched_params_cl_wrr(struct adapter *adapter, int channel, int cl,
int weight, int sleep_ok)
{
struct fw_sched_cmd cmd;
if (weight < 0 || weight > 100)
return -EINVAL;
memset(&cmd, 0, sizeof(cmd));
cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_SCHED_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE);
cmd.retval_len16 = cpu_to_be32(FW_LEN16(cmd));
cmd.u.params.sc = FW_SCHED_SC_PARAMS;
cmd.u.params.type = FW_SCHED_TYPE_PKTSCHED;
cmd.u.params.level = FW_SCHED_PARAMS_LEVEL_CL_WRR;
cmd.u.params.ch = channel;
cmd.u.params.cl = cl;
cmd.u.params.weight = cpu_to_be16(weight);
return t4_wr_mbox_meat(adapter, adapter->mbox, &cmd, sizeof(cmd),
NULL, sleep_ok);
}
int t4_sched_params_cl_rl_kbps(struct adapter *adapter, int channel, int cl,
int mode, unsigned int maxrate, int pktsize, int sleep_ok)
{
struct fw_sched_cmd cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_SCHED_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE);
cmd.retval_len16 = cpu_to_be32(FW_LEN16(cmd));
cmd.u.params.sc = FW_SCHED_SC_PARAMS;
cmd.u.params.type = FW_SCHED_TYPE_PKTSCHED;
cmd.u.params.level = FW_SCHED_PARAMS_LEVEL_CL_RL;
cmd.u.params.mode = mode;
cmd.u.params.ch = channel;
cmd.u.params.cl = cl;
cmd.u.params.unit = FW_SCHED_PARAMS_UNIT_BITRATE;
cmd.u.params.rate = FW_SCHED_PARAMS_RATE_ABS;
cmd.u.params.max = cpu_to_be32(maxrate);
cmd.u.params.pktsize = cpu_to_be16(pktsize);
return t4_wr_mbox_meat(adapter, adapter->mbox, &cmd, sizeof(cmd),
NULL, sleep_ok);
}
/**
* t4_config_watchdog - configure (enable/disable) a watchdog timer
* @adapter: the adapter
* @mbox: mailbox to use for the FW command
* @pf: the PF owning the queue
* @vf: the VF owning the queue
* @timeout: watchdog timeout in ms
* @action: watchdog timer / action
*
* There are separate watchdog timers for each possible watchdog
* action. Configure one of the watchdog timers by setting a non-zero
* timeout. Disable a watchdog timer by using a timeout of zero.
*/
int t4_config_watchdog(struct adapter *adapter, unsigned int mbox,
unsigned int pf, unsigned int vf,
unsigned int timeout, unsigned int action)
{
struct fw_watchdog_cmd wdog;
unsigned int ticks;
/*
* The watchdog command expects a timeout in units of 10ms so we need
* to convert it here (via rounding) and force a minimum of one 10ms
* "tick" if the timeout is non-zero but the conversion results in 0
* ticks.
*/
ticks = (timeout + 5) / 10;
if (timeout && !ticks)
ticks = 1;
memset(&wdog, 0, sizeof wdog);
wdog.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_WATCHDOG_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE |
V_FW_PARAMS_CMD_PFN(pf) |
V_FW_PARAMS_CMD_VFN(vf));
wdog.retval_len16 = cpu_to_be32(FW_LEN16(wdog));
wdog.timeout = cpu_to_be32(ticks);
wdog.action = cpu_to_be32(action);
return t4_wr_mbox(adapter, mbox, &wdog, sizeof wdog, NULL);
}
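The timeout conversion in t4_config_watchdog() (round to the nearest 10 ms tick, but never round a non-zero timeout down to zero) is worth seeing in isolation:

```c
#include <assert.h>

/* Convert a millisecond timeout to 10 ms watchdog ticks, rounding to
 * nearest and forcing at least one tick for any non-zero timeout, as
 * t4_config_watchdog() does.  Zero stays zero (watchdog disabled). */
static unsigned int wdog_ticks(unsigned int timeout_ms)
{
	unsigned int ticks = (timeout_ms + 5) / 10;

	if (timeout_ms && !ticks)
		ticks = 1;
	return ticks;
}
```

Without the forced minimum, a 4 ms request would round to 0 ticks and silently disable the watchdog instead of arming it.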
int t4_get_devlog_level(struct adapter *adapter, unsigned int *level)
{
struct fw_devlog_cmd devlog_cmd;
int ret;
memset(&devlog_cmd, 0, sizeof(devlog_cmd));
devlog_cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_DEVLOG_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ);
devlog_cmd.retval_len16 = cpu_to_be32(FW_LEN16(devlog_cmd));
ret = t4_wr_mbox(adapter, adapter->mbox, &devlog_cmd,
sizeof(devlog_cmd), &devlog_cmd);
if (ret)
return ret;
*level = devlog_cmd.level;
return 0;
}
int t4_set_devlog_level(struct adapter *adapter, unsigned int level)
{
struct fw_devlog_cmd devlog_cmd;
memset(&devlog_cmd, 0, sizeof(devlog_cmd));
devlog_cmd.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_DEVLOG_CMD) |
F_FW_CMD_REQUEST |
F_FW_CMD_WRITE);
devlog_cmd.level = level;
devlog_cmd.retval_len16 = cpu_to_be32(FW_LEN16(devlog_cmd));
return t4_wr_mbox(adapter, adapter->mbox, &devlog_cmd,
sizeof(devlog_cmd), &devlog_cmd);
}
Index: projects/clang800-import/sys/dev/cxgbe/t4_main.c
===================================================================
--- projects/clang800-import/sys/dev/cxgbe/t4_main.c (revision 343955)
+++ projects/clang800-import/sys/dev/cxgbe/t4_main.c (revision 343956)
@@ -1,10764 +1,10811 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2011 Chelsio Communications, Inc.
* All rights reserved.
* Written by: Navdeep Parhar <np@FreeBSD.org>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_ddb.h"
#include "opt_inet.h"
#include "opt_inet6.h"
#include "opt_ratelimit.h"
#include "opt_rss.h"
#include <sys/param.h>
#include <sys/conf.h>
#include <sys/priv.h>
#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/module.h>
#include <sys/malloc.h>
#include <sys/queue.h>
#include <sys/taskqueue.h>
#include <sys/pciio.h>
#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pci_private.h>
#include <sys/firmware.h>
#include <sys/sbuf.h>
#include <sys/smp.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <sys/sysctl.h>
#include <net/ethernet.h>
#include <net/if.h>
#include <net/if_types.h>
#include <net/if_dl.h>
#include <net/if_vlan_var.h>
#ifdef RSS
#include <net/rss_config.h>
#endif
#include <netinet/in.h>
#include <netinet/ip.h>
#if defined(__i386__) || defined(__amd64__)
#include <machine/md_var.h>
#include <machine/cputypes.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#endif
#include <crypto/rijndael/rijndael.h>
#ifdef DDB
#include <ddb/ddb.h>
#include <ddb/db_lex.h>
#endif
#include "common/common.h"
#include "common/t4_msg.h"
#include "common/t4_regs.h"
#include "common/t4_regs_values.h"
#include "cudbg/cudbg.h"
#include "t4_clip.h"
#include "t4_ioctl.h"
#include "t4_l2t.h"
#include "t4_mp_ring.h"
#include "t4_if.h"
#include "t4_smt.h"
/* T4 bus driver interface */
static int t4_probe(device_t);
static int t4_attach(device_t);
static int t4_detach(device_t);
static int t4_child_location_str(device_t, device_t, char *, size_t);
static int t4_ready(device_t);
static int t4_read_port_device(device_t, int, device_t *);
static device_method_t t4_methods[] = {
DEVMETHOD(device_probe, t4_probe),
DEVMETHOD(device_attach, t4_attach),
DEVMETHOD(device_detach, t4_detach),
DEVMETHOD(bus_child_location_str, t4_child_location_str),
DEVMETHOD(t4_is_main_ready, t4_ready),
DEVMETHOD(t4_read_port_device, t4_read_port_device),
DEVMETHOD_END
};
static driver_t t4_driver = {
"t4nex",
t4_methods,
sizeof(struct adapter)
};
/* T4 port (cxgbe) interface */
static int cxgbe_probe(device_t);
static int cxgbe_attach(device_t);
static int cxgbe_detach(device_t);
device_method_t cxgbe_methods[] = {
DEVMETHOD(device_probe, cxgbe_probe),
DEVMETHOD(device_attach, cxgbe_attach),
DEVMETHOD(device_detach, cxgbe_detach),
{ 0, 0 }
};
static driver_t cxgbe_driver = {
"cxgbe",
cxgbe_methods,
sizeof(struct port_info)
};
/* T4 VI (vcxgbe) interface */
static int vcxgbe_probe(device_t);
static int vcxgbe_attach(device_t);
static int vcxgbe_detach(device_t);
static device_method_t vcxgbe_methods[] = {
DEVMETHOD(device_probe, vcxgbe_probe),
DEVMETHOD(device_attach, vcxgbe_attach),
DEVMETHOD(device_detach, vcxgbe_detach),
{ 0, 0 }
};
static driver_t vcxgbe_driver = {
"vcxgbe",
vcxgbe_methods,
sizeof(struct vi_info)
};
static d_ioctl_t t4_ioctl;
static struct cdevsw t4_cdevsw = {
.d_version = D_VERSION,
.d_ioctl = t4_ioctl,
.d_name = "t4nex",
};
/* T5 bus driver interface */
static int t5_probe(device_t);
static device_method_t t5_methods[] = {
DEVMETHOD(device_probe, t5_probe),
DEVMETHOD(device_attach, t4_attach),
DEVMETHOD(device_detach, t4_detach),
DEVMETHOD(bus_child_location_str, t4_child_location_str),
DEVMETHOD(t4_is_main_ready, t4_ready),
DEVMETHOD(t4_read_port_device, t4_read_port_device),
DEVMETHOD_END
};
static driver_t t5_driver = {
"t5nex",
t5_methods,
sizeof(struct adapter)
};
/* T5 port (cxl) interface */
static driver_t cxl_driver = {
"cxl",
cxgbe_methods,
sizeof(struct port_info)
};
/* T5 VI (vcxl) interface */
static driver_t vcxl_driver = {
"vcxl",
vcxgbe_methods,
sizeof(struct vi_info)
};
/* T6 bus driver interface */
static int t6_probe(device_t);
static device_method_t t6_methods[] = {
DEVMETHOD(device_probe, t6_probe),
DEVMETHOD(device_attach, t4_attach),
DEVMETHOD(device_detach, t4_detach),
DEVMETHOD(bus_child_location_str, t4_child_location_str),
DEVMETHOD(t4_is_main_ready, t4_ready),
DEVMETHOD(t4_read_port_device, t4_read_port_device),
DEVMETHOD_END
};
static driver_t t6_driver = {
"t6nex",
t6_methods,
sizeof(struct adapter)
};
/* T6 port (cc) interface */
static driver_t cc_driver = {
"cc",
cxgbe_methods,
sizeof(struct port_info)
};
/* T6 VI (vcc) interface */
static driver_t vcc_driver = {
"vcc",
vcxgbe_methods,
sizeof(struct vi_info)
};
/* ifnet interface */
static void cxgbe_init(void *);
static int cxgbe_ioctl(struct ifnet *, unsigned long, caddr_t);
static int cxgbe_transmit(struct ifnet *, struct mbuf *);
static void cxgbe_qflush(struct ifnet *);
MALLOC_DEFINE(M_CXGBE, "cxgbe", "Chelsio T4/T5 Ethernet driver and services");
/*
* Correct lock order when you need to acquire multiple locks is t4_list_lock,
* then ADAPTER_LOCK, then t4_uld_list_lock.
*/
static struct sx t4_list_lock;
SLIST_HEAD(, adapter) t4_list;
#ifdef TCP_OFFLOAD
static struct sx t4_uld_list_lock;
SLIST_HEAD(, uld_info) t4_uld_list;
#endif
/*
* Tunables. See tweak_tunables() too.
*
* Each tunable is set to a default value here if it's known at compile-time.
* Otherwise it is set to -n as an indication to tweak_tunables() that it should
* provide a reasonable default (up to n) when the driver is loaded.
*
* Tunables applicable to both T4 and T5 are under hw.cxgbe. Those specific to
* T5 are under hw.cxl.
*/
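The sign convention described above (a tunable left at -n means "auto-pick a value, capped at n") can be sketched as below. The helper name and the core-count cap are illustrative assumptions about what tweak_tunables() does, not its actual code:

```c
#include <assert.h>

/* Resolve a tunable under the -n convention: a positive value set by
 * the administrator is taken as-is; a negative default -n means pick a
 * sensible value (here, the core count) capped at n.  Illustrative
 * only -- the real policy lives in tweak_tunables(). */
static int resolve_default(int tunable, int ncores)
{
	if (tunable > 0)
		return tunable;
	return (ncores < -tunable) ? ncores : -tunable;
}
```

So with `t4_ntxq = -16`, an 8-core machine would end up with 8 TX queues per port while a 32-core machine would be capped at 16, under this assumed policy.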
SYSCTL_NODE(_hw, OID_AUTO, cxgbe, CTLFLAG_RD, 0, "cxgbe(4) parameters");
SYSCTL_NODE(_hw, OID_AUTO, cxl, CTLFLAG_RD, 0, "cxgbe(4) T5+ parameters");
SYSCTL_NODE(_hw_cxgbe, OID_AUTO, toe, CTLFLAG_RD, 0, "cxgbe(4) TOE parameters");
/*
* Number of queues for tx and rx, NIC and offload.
*/
#define NTXQ 16
int t4_ntxq = -NTXQ;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, ntxq, CTLFLAG_RDTUN, &t4_ntxq, 0,
"Number of TX queues per port");
TUNABLE_INT("hw.cxgbe.ntxq10g", &t4_ntxq); /* Old name, undocumented */
#define NRXQ 8
int t4_nrxq = -NRXQ;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nrxq, CTLFLAG_RDTUN, &t4_nrxq, 0,
"Number of RX queues per port");
TUNABLE_INT("hw.cxgbe.nrxq10g", &t4_nrxq); /* Old name, undocumented */
#define NTXQ_VI 1
static int t4_ntxq_vi = -NTXQ_VI;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, ntxq_vi, CTLFLAG_RDTUN, &t4_ntxq_vi, 0,
"Number of TX queues per VI");
#define NRXQ_VI 1
static int t4_nrxq_vi = -NRXQ_VI;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nrxq_vi, CTLFLAG_RDTUN, &t4_nrxq_vi, 0,
"Number of RX queues per VI");
static int t4_rsrv_noflowq = 0;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, rsrv_noflowq, CTLFLAG_RDTUN, &t4_rsrv_noflowq,
0, "Reserve TX queue 0 of each VI for non-flowid packets");
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
#define NOFLDTXQ 8
static int t4_nofldtxq = -NOFLDTXQ;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nofldtxq, CTLFLAG_RDTUN, &t4_nofldtxq, 0,
"Number of offload TX queues per port");
#define NOFLDRXQ 2
static int t4_nofldrxq = -NOFLDRXQ;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nofldrxq, CTLFLAG_RDTUN, &t4_nofldrxq, 0,
"Number of offload RX queues per port");
#define NOFLDTXQ_VI 1
static int t4_nofldtxq_vi = -NOFLDTXQ_VI;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nofldtxq_vi, CTLFLAG_RDTUN, &t4_nofldtxq_vi, 0,
"Number of offload TX queues per VI");
#define NOFLDRXQ_VI 1
static int t4_nofldrxq_vi = -NOFLDRXQ_VI;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nofldrxq_vi, CTLFLAG_RDTUN, &t4_nofldrxq_vi, 0,
"Number of offload RX queues per VI");
#define TMR_IDX_OFLD 1
int t4_tmr_idx_ofld = TMR_IDX_OFLD;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, holdoff_timer_idx_ofld, CTLFLAG_RDTUN,
&t4_tmr_idx_ofld, 0, "Holdoff timer index for offload queues");
#define PKTC_IDX_OFLD (-1)
int t4_pktc_idx_ofld = PKTC_IDX_OFLD;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, holdoff_pktc_idx_ofld, CTLFLAG_RDTUN,
&t4_pktc_idx_ofld, 0, "holdoff packet counter index for offload queues");
/* 0 means chip/fw default, non-zero number is value in microseconds */
static u_long t4_toe_keepalive_idle = 0;
SYSCTL_ULONG(_hw_cxgbe_toe, OID_AUTO, keepalive_idle, CTLFLAG_RDTUN,
&t4_toe_keepalive_idle, 0, "TOE keepalive idle timer (us)");
/* 0 means chip/fw default, non-zero number is value in microseconds */
static u_long t4_toe_keepalive_interval = 0;
SYSCTL_ULONG(_hw_cxgbe_toe, OID_AUTO, keepalive_interval, CTLFLAG_RDTUN,
&t4_toe_keepalive_interval, 0, "TOE keepalive interval timer (us)");
/* 0 means chip/fw default, non-zero number is # of keepalives before abort */
static int t4_toe_keepalive_count = 0;
SYSCTL_INT(_hw_cxgbe_toe, OID_AUTO, keepalive_count, CTLFLAG_RDTUN,
&t4_toe_keepalive_count, 0, "Number of TOE keepalive probes before abort");
/* 0 means chip/fw default, non-zero number is value in microseconds */
static u_long t4_toe_rexmt_min = 0;
SYSCTL_ULONG(_hw_cxgbe_toe, OID_AUTO, rexmt_min, CTLFLAG_RDTUN,
&t4_toe_rexmt_min, 0, "Minimum TOE retransmit interval (us)");
/* 0 means chip/fw default, non-zero number is value in microseconds */
static u_long t4_toe_rexmt_max = 0;
SYSCTL_ULONG(_hw_cxgbe_toe, OID_AUTO, rexmt_max, CTLFLAG_RDTUN,
&t4_toe_rexmt_max, 0, "Maximum TOE retransmit interval (us)");
/* 0 means chip/fw default, non-zero number is # of rexmt before abort */
static int t4_toe_rexmt_count = 0;
SYSCTL_INT(_hw_cxgbe_toe, OID_AUTO, rexmt_count, CTLFLAG_RDTUN,
&t4_toe_rexmt_count, 0, "Number of TOE retransmissions before abort");
/* -1 means chip/fw default, other values are raw backoff values to use */
static int t4_toe_rexmt_backoff[16] = {
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
};
SYSCTL_NODE(_hw_cxgbe_toe, OID_AUTO, rexmt_backoff, CTLFLAG_RD, 0,
"cxgbe(4) TOE retransmit backoff values");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 0, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[0], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 1, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[1], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 2, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[2], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 3, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[3], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 4, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[4], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 5, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[5], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 6, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[6], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 7, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[7], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 8, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[8], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 9, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[9], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 10, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[10], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 11, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[11], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 12, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[12], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 13, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[13], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 14, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[14], 0, "");
SYSCTL_INT(_hw_cxgbe_toe_rexmt_backoff, OID_AUTO, 15, CTLFLAG_RDTUN,
&t4_toe_rexmt_backoff[15], 0, "");
#endif
#ifdef DEV_NETMAP
#define NNMTXQ_VI 2
static int t4_nnmtxq_vi = -NNMTXQ_VI;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nnmtxq_vi, CTLFLAG_RDTUN, &t4_nnmtxq_vi, 0,
"Number of netmap TX queues per VI");
#define NNMRXQ_VI 2
static int t4_nnmrxq_vi = -NNMRXQ_VI;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nnmrxq_vi, CTLFLAG_RDTUN, &t4_nnmrxq_vi, 0,
"Number of netmap RX queues per VI");
#endif
/*
* Holdoff parameters for ports.
*/
#define TMR_IDX 1
int t4_tmr_idx = TMR_IDX;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, holdoff_timer_idx, CTLFLAG_RDTUN, &t4_tmr_idx,
0, "Holdoff timer index");
TUNABLE_INT("hw.cxgbe.holdoff_timer_idx_10G", &t4_tmr_idx); /* Old name */
#define PKTC_IDX (-1)
int t4_pktc_idx = PKTC_IDX;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, holdoff_pktc_idx, CTLFLAG_RDTUN, &t4_pktc_idx,
0, "Holdoff packet counter index");
TUNABLE_INT("hw.cxgbe.holdoff_pktc_idx_10G", &t4_pktc_idx); /* Old name */
/*
* Size (# of entries) of each tx and rx queue.
*/
unsigned int t4_qsize_txq = TX_EQ_QSIZE;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, qsize_txq, CTLFLAG_RDTUN, &t4_qsize_txq, 0,
"Number of descriptors in each TX queue");
unsigned int t4_qsize_rxq = RX_IQ_QSIZE;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, qsize_rxq, CTLFLAG_RDTUN, &t4_qsize_rxq, 0,
"Number of descriptors in each RX queue");
/*
* Interrupt types allowed (bits 0, 1, 2 = INTx, MSI, MSI-X respectively).
*/
int t4_intr_types = INTR_MSIX | INTR_MSI | INTR_INTX;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, interrupt_types, CTLFLAG_RDTUN, &t4_intr_types,
0, "Interrupt types allowed (bit 0 = INTx, 1 = MSI, 2 = MSI-X)");
/*
* Configuration file. All the _CF names here are special.
*/
#define DEFAULT_CF "default"
#define BUILTIN_CF "built-in"
#define FLASH_CF "flash"
#define UWIRE_CF "uwire"
#define FPGA_CF "fpga"
static char t4_cfg_file[32] = DEFAULT_CF;
SYSCTL_STRING(_hw_cxgbe, OID_AUTO, config_file, CTLFLAG_RDTUN, t4_cfg_file,
sizeof(t4_cfg_file), "Firmware configuration file");
/*
* PAUSE settings (bit 0, 1, 2 = rx_pause, tx_pause, pause_autoneg respectively).
* rx_pause = 1 to heed incoming PAUSE frames, 0 to ignore them.
* tx_pause = 1 to emit PAUSE frames when the rx FIFO reaches its high water
* mark or when signalled to do so, 0 to never emit PAUSE.
* pause_autoneg = 1 means PAUSE will be negotiated if possible and the
* negotiated settings will override rx_pause/tx_pause.
* Otherwise rx_pause/tx_pause are applied forcibly.
*/
static int t4_pause_settings = PAUSE_RX | PAUSE_TX | PAUSE_AUTONEG;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, pause_settings, CTLFLAG_RDTUN,
&t4_pause_settings, 0,
"PAUSE settings (bit 0 = rx_pause, 1 = tx_pause, 2 = pause_autoneg)");
/*
* Forward Error Correction settings (bit 0, 1 = RS, BASER respectively).
* -1 to run with the firmware default. Same as FEC_AUTO (bit 5)
* 0 to disable FEC.
*/
static int t4_fec = -1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, fec, CTLFLAG_RDTUN, &t4_fec, 0,
"Forward Error Correction (bit 0 = RS, bit 1 = BASER_RS)");
/*
* Link autonegotiation.
* -1 to run with the firmware default.
* 0 to disable.
* 1 to enable.
*/
static int t4_autoneg = -1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, autoneg, CTLFLAG_RDTUN, &t4_autoneg, 0,
"Link autonegotiation");
/*
* Firmware auto-install by driver during attach (0, 1, 2 = prohibited, allowed,
* encouraged respectively). '-n' is the same as 'n' except the firmware
* version used in the checks is read from the firmware bundled with the driver.
*/
static int t4_fw_install = 1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, fw_install, CTLFLAG_RDTUN, &t4_fw_install, 0,
"Firmware auto-install (0 = prohibited, 1 = allowed, 2 = encouraged)");
/*
* ASIC features that will be used. Disable the ones you don't want so that the
* chip resources aren't wasted on features that will not be used.
*/
static int t4_nbmcaps_allowed = 0;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, nbmcaps_allowed, CTLFLAG_RDTUN,
&t4_nbmcaps_allowed, 0, "Default NBM capabilities");
static int t4_linkcaps_allowed = 0; /* No DCBX, PPP, etc. by default */
SYSCTL_INT(_hw_cxgbe, OID_AUTO, linkcaps_allowed, CTLFLAG_RDTUN,
&t4_linkcaps_allowed, 0, "Default link capabilities");
static int t4_switchcaps_allowed = FW_CAPS_CONFIG_SWITCH_INGRESS |
FW_CAPS_CONFIG_SWITCH_EGRESS;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, switchcaps_allowed, CTLFLAG_RDTUN,
&t4_switchcaps_allowed, 0, "Default switch capabilities");
#ifdef RATELIMIT
static int t4_niccaps_allowed = FW_CAPS_CONFIG_NIC |
FW_CAPS_CONFIG_NIC_HASHFILTER | FW_CAPS_CONFIG_NIC_ETHOFLD;
#else
static int t4_niccaps_allowed = FW_CAPS_CONFIG_NIC |
FW_CAPS_CONFIG_NIC_HASHFILTER;
#endif
SYSCTL_INT(_hw_cxgbe, OID_AUTO, niccaps_allowed, CTLFLAG_RDTUN,
&t4_niccaps_allowed, 0, "Default NIC capabilities");
static int t4_toecaps_allowed = -1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, toecaps_allowed, CTLFLAG_RDTUN,
&t4_toecaps_allowed, 0, "Default TCP offload capabilities");
static int t4_rdmacaps_allowed = -1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, rdmacaps_allowed, CTLFLAG_RDTUN,
&t4_rdmacaps_allowed, 0, "Default RDMA capabilities");
static int t4_cryptocaps_allowed = -1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, cryptocaps_allowed, CTLFLAG_RDTUN,
&t4_cryptocaps_allowed, 0, "Default crypto capabilities");
static int t4_iscsicaps_allowed = -1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, iscsicaps_allowed, CTLFLAG_RDTUN,
&t4_iscsicaps_allowed, 0, "Default iSCSI capabilities");
static int t4_fcoecaps_allowed = 0;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, fcoecaps_allowed, CTLFLAG_RDTUN,
&t4_fcoecaps_allowed, 0, "Default FCoE capabilities");
static int t5_write_combine = 0;
SYSCTL_INT(_hw_cxl, OID_AUTO, write_combine, CTLFLAG_RDTUN, &t5_write_combine,
0, "Use WC instead of UC for BAR2");
static int t4_num_vis = 1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, num_vis, CTLFLAG_RDTUN, &t4_num_vis, 0,
"Number of VIs per port");
/*
* PCIe Relaxed Ordering.
* -1: driver should figure out a good value.
* 0: disable RO.
* 1: enable RO.
* 2: leave RO alone.
*/
static int pcie_relaxed_ordering = -1;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, pcie_relaxed_ordering, CTLFLAG_RDTUN,
&pcie_relaxed_ordering, 0,
"PCIe Relaxed Ordering: 0 = disable, 1 = enable, 2 = leave alone");
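/*
 * Example (hypothetical values) of trimming unused ASIC features and forcing
 * PCIe Relaxed Ordering off from /boot/loader.conf, per the knobs above:
 *
 *	hw.cxgbe.toecaps_allowed="0"		# no TCP offload
 *	hw.cxgbe.rdmacaps_allowed="0"		# no RDMA
 *	hw.cxgbe.pcie_relaxed_ordering="0"	# disable RO
 */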
static int t4_panic_on_fatal_err = 0;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, panic_on_fatal_err, CTLFLAG_RDTUN,
&t4_panic_on_fatal_err, 0, "panic on fatal errors");
#ifdef TCP_OFFLOAD
/*
* TOE tunables.
*/
static int t4_cop_managed_offloading = 0;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, cop_managed_offloading, CTLFLAG_RDTUN,
&t4_cop_managed_offloading, 0,
"COP (Connection Offload Policy) controls all TOE offload");
#endif
/* Functions used by VIs to obtain unique MAC addresses for each VI. */
static int vi_mac_funcs[] = {
FW_VI_FUNC_ETH,
FW_VI_FUNC_OFLD,
FW_VI_FUNC_IWARP,
FW_VI_FUNC_OPENISCSI,
FW_VI_FUNC_OPENFCOE,
FW_VI_FUNC_FOISCSI,
FW_VI_FUNC_FOFCOE,
};
struct intrs_and_queues {
uint16_t intr_type; /* INTx, MSI, or MSI-X */
uint16_t num_vis; /* number of VIs for each port */
uint16_t nirq; /* Total # of vectors */
uint16_t ntxq; /* # of NIC txq's for each port */
uint16_t nrxq; /* # of NIC rxq's for each port */
uint16_t nofldtxq; /* # of TOE/ETHOFLD txq's for each port */
uint16_t nofldrxq; /* # of TOE rxq's for each port */
/* The vcxgbe/vcxl interfaces use these and not the ones above. */
uint16_t ntxq_vi; /* # of NIC txq's */
uint16_t nrxq_vi; /* # of NIC rxq's */
uint16_t nofldtxq_vi; /* # of TOE txq's */
uint16_t nofldrxq_vi; /* # of TOE rxq's */
uint16_t nnmtxq_vi; /* # of netmap txq's */
uint16_t nnmrxq_vi; /* # of netmap rxq's */
};
static void setup_memwin(struct adapter *);
static void position_memwin(struct adapter *, int, uint32_t);
static int validate_mem_range(struct adapter *, uint32_t, uint32_t);
static int fwmtype_to_hwmtype(int);
static int validate_mt_off_len(struct adapter *, int, uint32_t, uint32_t,
uint32_t *);
static int fixup_devlog_params(struct adapter *);
static int cfg_itype_and_nqueues(struct adapter *, struct intrs_and_queues *);
static int contact_firmware(struct adapter *);
static int partition_resources(struct adapter *);
static int get_params__pre_init(struct adapter *);
static int get_params__post_init(struct adapter *);
static int set_params__post_init(struct adapter *);
static void t4_set_desc(struct adapter *);
static bool fixed_ifmedia(struct port_info *);
static void build_medialist(struct port_info *);
static void init_link_config(struct port_info *);
static int fixup_link_config(struct port_info *);
static int apply_link_config(struct port_info *);
static int cxgbe_init_synchronized(struct vi_info *);
static int cxgbe_uninit_synchronized(struct vi_info *);
static void quiesce_txq(struct adapter *, struct sge_txq *);
static void quiesce_wrq(struct adapter *, struct sge_wrq *);
static void quiesce_iq(struct adapter *, struct sge_iq *);
static void quiesce_fl(struct adapter *, struct sge_fl *);
static int t4_alloc_irq(struct adapter *, struct irq *, int rid,
driver_intr_t *, void *, char *);
static int t4_free_irq(struct adapter *, struct irq *);
static void get_regs(struct adapter *, struct t4_regdump *, uint8_t *);
static void vi_refresh_stats(struct adapter *, struct vi_info *);
static void cxgbe_refresh_stats(struct adapter *, struct port_info *);
static void cxgbe_tick(void *);
static void cxgbe_sysctls(struct port_info *);
static int sysctl_int_array(SYSCTL_HANDLER_ARGS);
static int sysctl_bitfield_8b(SYSCTL_HANDLER_ARGS);
static int sysctl_bitfield_16b(SYSCTL_HANDLER_ARGS);
static int sysctl_btphy(SYSCTL_HANDLER_ARGS);
static int sysctl_noflowq(SYSCTL_HANDLER_ARGS);
static int sysctl_holdoff_tmr_idx(SYSCTL_HANDLER_ARGS);
static int sysctl_holdoff_pktc_idx(SYSCTL_HANDLER_ARGS);
static int sysctl_qsize_rxq(SYSCTL_HANDLER_ARGS);
static int sysctl_qsize_txq(SYSCTL_HANDLER_ARGS);
static int sysctl_pause_settings(SYSCTL_HANDLER_ARGS);
static int sysctl_fec(SYSCTL_HANDLER_ARGS);
static int sysctl_autoneg(SYSCTL_HANDLER_ARGS);
static int sysctl_handle_t4_reg64(SYSCTL_HANDLER_ARGS);
static int sysctl_temperature(SYSCTL_HANDLER_ARGS);
static int sysctl_loadavg(SYSCTL_HANDLER_ARGS);
static int sysctl_cctrl(SYSCTL_HANDLER_ARGS);
static int sysctl_cim_ibq_obq(SYSCTL_HANDLER_ARGS);
static int sysctl_cim_la(SYSCTL_HANDLER_ARGS);
static int sysctl_cim_ma_la(SYSCTL_HANDLER_ARGS);
static int sysctl_cim_pif_la(SYSCTL_HANDLER_ARGS);
static int sysctl_cim_qcfg(SYSCTL_HANDLER_ARGS);
static int sysctl_cpl_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_ddp_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_devlog(SYSCTL_HANDLER_ARGS);
static int sysctl_fcoe_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_hw_sched(SYSCTL_HANDLER_ARGS);
static int sysctl_lb_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_linkdnrc(SYSCTL_HANDLER_ARGS);
static int sysctl_meminfo(SYSCTL_HANDLER_ARGS);
static int sysctl_mps_tcam(SYSCTL_HANDLER_ARGS);
static int sysctl_mps_tcam_t6(SYSCTL_HANDLER_ARGS);
static int sysctl_path_mtus(SYSCTL_HANDLER_ARGS);
static int sysctl_pm_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_rdma_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_tcp_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_tids(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_err_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_la_mask(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_la(SYSCTL_HANDLER_ARGS);
static int sysctl_tx_rate(SYSCTL_HANDLER_ARGS);
static int sysctl_ulprx_la(SYSCTL_HANDLER_ARGS);
static int sysctl_wcwr_stats(SYSCTL_HANDLER_ARGS);
static int sysctl_cpus(SYSCTL_HANDLER_ARGS);
#ifdef TCP_OFFLOAD
static int sysctl_tls_rx_ports(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_tick(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_dack_timer(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_timer(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_shift_cnt(SYSCTL_HANDLER_ARGS);
static int sysctl_tp_backoff(SYSCTL_HANDLER_ARGS);
static int sysctl_holdoff_tmr_idx_ofld(SYSCTL_HANDLER_ARGS);
static int sysctl_holdoff_pktc_idx_ofld(SYSCTL_HANDLER_ARGS);
#endif
static int get_sge_context(struct adapter *, struct t4_sge_context *);
static int load_fw(struct adapter *, struct t4_data *);
static int load_cfg(struct adapter *, struct t4_data *);
static int load_boot(struct adapter *, struct t4_bootrom *);
static int load_bootcfg(struct adapter *, struct t4_data *);
static int cudbg_dump(struct adapter *, struct t4_cudbg_dump *);
static void free_offload_policy(struct t4_offload_policy *);
static int set_offload_policy(struct adapter *, struct t4_offload_policy *);
static int read_card_mem(struct adapter *, int, struct t4_mem_range *);
static int read_i2c(struct adapter *, struct t4_i2c_data *);
#ifdef TCP_OFFLOAD
static int toe_capability(struct vi_info *, int);
#endif
static int mod_event(module_t, int, void *);
static int notify_siblings(device_t, int);
struct {
uint16_t device;
char *desc;
} t4_pciids[] = {
{0xa000, "Chelsio Terminator 4 FPGA"},
{0x4400, "Chelsio T440-dbg"},
{0x4401, "Chelsio T420-CR"},
{0x4402, "Chelsio T422-CR"},
{0x4403, "Chelsio T440-CR"},
{0x4404, "Chelsio T420-BCH"},
{0x4405, "Chelsio T440-BCH"},
{0x4406, "Chelsio T440-CH"},
{0x4407, "Chelsio T420-SO"},
{0x4408, "Chelsio T420-CX"},
{0x4409, "Chelsio T420-BT"},
{0x440a, "Chelsio T404-BT"},
{0x440e, "Chelsio T440-LP-CR"},
}, t5_pciids[] = {
{0xb000, "Chelsio Terminator 5 FPGA"},
{0x5400, "Chelsio T580-dbg"},
{0x5401, "Chelsio T520-CR"}, /* 2 x 10G */
{0x5402, "Chelsio T522-CR"}, /* 2 x 10G, 2 X 1G */
{0x5403, "Chelsio T540-CR"}, /* 4 x 10G */
{0x5407, "Chelsio T520-SO"}, /* 2 x 10G, nomem */
{0x5409, "Chelsio T520-BT"}, /* 2 x 10GBaseT */
{0x540a, "Chelsio T504-BT"}, /* 4 x 1G */
{0x540d, "Chelsio T580-CR"}, /* 2 x 40G */
{0x540e, "Chelsio T540-LP-CR"}, /* 4 x 10G */
{0x5410, "Chelsio T580-LP-CR"}, /* 2 x 40G */
{0x5411, "Chelsio T520-LL-CR"}, /* 2 x 10G */
{0x5412, "Chelsio T560-CR"}, /* 1 x 40G, 2 x 10G */
{0x5414, "Chelsio T580-LP-SO-CR"}, /* 2 x 40G, nomem */
{0x5415, "Chelsio T502-BT"}, /* 2 x 1G */
{0x5418, "Chelsio T540-BT"}, /* 4 x 10GBaseT */
{0x5419, "Chelsio T540-LP-BT"}, /* 4 x 10GBaseT */
{0x541a, "Chelsio T540-SO-BT"}, /* 4 x 10GBaseT, nomem */
{0x541b, "Chelsio T540-SO-CR"}, /* 4 x 10G, nomem */
/* Custom */
{0x5483, "Custom T540-CR"},
{0x5484, "Custom T540-BT"},
}, t6_pciids[] = {
{0xc006, "Chelsio Terminator 6 FPGA"}, /* T6 PE10K6 FPGA (PF0) */
{0x6400, "Chelsio T6-DBG-25"}, /* 2 x 10/25G, debug */
{0x6401, "Chelsio T6225-CR"}, /* 2 x 10/25G */
{0x6402, "Chelsio T6225-SO-CR"}, /* 2 x 10/25G, nomem */
{0x6403, "Chelsio T6425-CR"}, /* 4 x 10/25G */
{0x6404, "Chelsio T6425-SO-CR"}, /* 4 x 10/25G, nomem */
{0x6405, "Chelsio T6225-OCP-SO"}, /* 2 x 10/25G, nomem */
{0x6406, "Chelsio T62100-OCP-SO"}, /* 2 x 40/50/100G, nomem */
{0x6407, "Chelsio T62100-LP-CR"}, /* 2 x 40/50/100G */
{0x6408, "Chelsio T62100-SO-CR"}, /* 2 x 40/50/100G, nomem */
{0x6409, "Chelsio T6210-BT"}, /* 2 x 10GBASE-T */
{0x640d, "Chelsio T62100-CR"}, /* 2 x 40/50/100G */
{0x6410, "Chelsio T6-DBG-100"}, /* 2 x 40/50/100G, debug */
{0x6411, "Chelsio T6225-LL-CR"}, /* 2 x 10/25G */
{0x6414, "Chelsio T61100-OCP-SO"}, /* 1 x 40/50/100G, nomem */
{0x6415, "Chelsio T6201-BT"}, /* 2 x 1000BASE-T */
/* Custom */
{0x6480, "Custom T6225-CR"},
{0x6481, "Custom T62100-CR"},
{0x6482, "Custom T6225-CR"},
{0x6483, "Custom T62100-CR"},
{0x6484, "Custom T64100-CR"},
{0x6485, "Custom T6240-SO"},
{0x6486, "Custom T6225-SO-CR"},
{0x6487, "Custom T6225-CR"},
};
#ifdef TCP_OFFLOAD
/*
* service_iq_fl() has an iq and needs the fl. The offset of the fl from the
* iq must be exactly the same for both rxq and ofld_rxq.
*/
CTASSERT(offsetof(struct sge_ofld_rxq, iq) == offsetof(struct sge_rxq, iq));
CTASSERT(offsetof(struct sge_ofld_rxq, fl) == offsetof(struct sge_rxq, fl));
#endif
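/*
 * Illustrative sketch (not driver code): because the offsets asserted above
 * match, a routine holding the iq embedded in either queue type can locate
 * the companion fl with the same pointer arithmetic, e.g.
 *
 *	struct sge_rxq *rxq = __containerof(iq, struct sge_rxq, iq);
 *	struct sge_fl *fl = &rxq->fl;	(equally valid for an sge_ofld_rxq)
 */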
CTASSERT(sizeof(struct cluster_metadata) <= CL_METADATA_SIZE);
static int
t4_probe(device_t dev)
{
int i;
uint16_t v = pci_get_vendor(dev);
uint16_t d = pci_get_device(dev);
uint8_t f = pci_get_function(dev);
if (v != PCI_VENDOR_ID_CHELSIO)
return (ENXIO);
/* Attach only to PF0 of the FPGA */
if (d == 0xa000 && f != 0)
return (ENXIO);
for (i = 0; i < nitems(t4_pciids); i++) {
if (d == t4_pciids[i].device) {
device_set_desc(dev, t4_pciids[i].desc);
return (BUS_PROBE_DEFAULT);
}
}
return (ENXIO);
}
static int
t5_probe(device_t dev)
{
int i;
uint16_t v = pci_get_vendor(dev);
uint16_t d = pci_get_device(dev);
uint8_t f = pci_get_function(dev);
if (v != PCI_VENDOR_ID_CHELSIO)
return (ENXIO);
/* Attach only to PF0 of the FPGA */
if (d == 0xb000 && f != 0)
return (ENXIO);
for (i = 0; i < nitems(t5_pciids); i++) {
if (d == t5_pciids[i].device) {
device_set_desc(dev, t5_pciids[i].desc);
return (BUS_PROBE_DEFAULT);
}
}
return (ENXIO);
}
static int
t6_probe(device_t dev)
{
int i;
uint16_t v = pci_get_vendor(dev);
uint16_t d = pci_get_device(dev);
if (v != PCI_VENDOR_ID_CHELSIO)
return (ENXIO);
for (i = 0; i < nitems(t6_pciids); i++) {
if (d == t6_pciids[i].device) {
device_set_desc(dev, t6_pciids[i].desc);
return (BUS_PROBE_DEFAULT);
}
}
return (ENXIO);
}
static void
t5_attribute_workaround(device_t dev)
{
device_t root_port;
uint32_t v;
/*
* The T5 chips do not properly echo the No Snoop and Relaxed
* Ordering attributes when replying to a TLP from a Root
* Port. As a workaround, find the parent Root Port and
* disable No Snoop and Relaxed Ordering. Note that this
* affects all devices under this root port.
*/
root_port = pci_find_pcie_root_port(dev);
if (root_port == NULL) {
device_printf(dev, "Unable to find parent root port\n");
return;
}
v = pcie_adjust_config(root_port, PCIER_DEVICE_CTL,
PCIEM_CTL_RELAXED_ORD_ENABLE | PCIEM_CTL_NOSNOOP_ENABLE, 0, 2);
if ((v & (PCIEM_CTL_RELAXED_ORD_ENABLE | PCIEM_CTL_NOSNOOP_ENABLE)) !=
0)
device_printf(dev, "Disabled No Snoop/Relaxed Ordering on %s\n",
device_get_nameunit(root_port));
}
static const struct devnames devnames[] = {
{
.nexus_name = "t4nex",
.ifnet_name = "cxgbe",
.vi_ifnet_name = "vcxgbe",
.pf03_drv_name = "t4iov",
.vf_nexus_name = "t4vf",
.vf_ifnet_name = "cxgbev"
}, {
.nexus_name = "t5nex",
.ifnet_name = "cxl",
.vi_ifnet_name = "vcxl",
.pf03_drv_name = "t5iov",
.vf_nexus_name = "t5vf",
.vf_ifnet_name = "cxlv"
}, {
.nexus_name = "t6nex",
.ifnet_name = "cc",
.vi_ifnet_name = "vcc",
.pf03_drv_name = "t6iov",
.vf_nexus_name = "t6vf",
.vf_ifnet_name = "ccv"
}
};
void
t4_init_devnames(struct adapter *sc)
{
int id;
id = chip_id(sc);
if (id >= CHELSIO_T4 && id - CHELSIO_T4 < nitems(devnames))
sc->names = &devnames[id - CHELSIO_T4];
else {
device_printf(sc->dev, "chip id %d is not supported.\n", id);
sc->names = NULL;
}
}
static int
t4_ifnet_unit(struct adapter *sc, struct port_info *pi)
{
const char *parent, *name;
long value;
int line, unit;
line = 0;
parent = device_get_nameunit(sc->dev);
name = sc->names->ifnet_name;
while (resource_find_dev(&line, name, &unit, "at", parent) == 0) {
if (resource_long_value(name, unit, "port", &value) == 0 &&
value == pi->port_id)
return (unit);
}
return (-1);
}
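/*
 * Example (hypothetical) /boot/device.hints entries that the lookup above
 * would honor, pinning ifnet unit 4 to port 1 of the first T4 nexus:
 *
 *	hint.cxgbe.4.at="t4nex0"
 *	hint.cxgbe.4.port="1"
 */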
static int
t4_attach(device_t dev)
{
struct adapter *sc;
int rc = 0, i, j, rqidx, tqidx, nports;
struct make_dev_args mda;
struct intrs_and_queues iaq;
struct sge *s;
uint32_t *buf;
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
int ofld_tqidx;
#endif
#ifdef TCP_OFFLOAD
int ofld_rqidx;
#endif
#ifdef DEV_NETMAP
int nm_rqidx, nm_tqidx;
#endif
int num_vis;
sc = device_get_softc(dev);
sc->dev = dev;
TUNABLE_INT_FETCH("hw.cxgbe.dflags", &sc->debug_flags);
if ((pci_get_device(dev) & 0xff00) == 0x5400)
t5_attribute_workaround(dev);
pci_enable_busmaster(dev);
if (pci_find_cap(dev, PCIY_EXPRESS, &i) == 0) {
uint32_t v;
pci_set_max_read_req(dev, 4096);
v = pci_read_config(dev, i + PCIER_DEVICE_CTL, 2);
sc->params.pci.mps = 128 << ((v & PCIEM_CTL_MAX_PAYLOAD) >> 5);
if (pcie_relaxed_ordering == 0 &&
(v & PCIEM_CTL_RELAXED_ORD_ENABLE) != 0) {
v &= ~PCIEM_CTL_RELAXED_ORD_ENABLE;
pci_write_config(dev, i + PCIER_DEVICE_CTL, v, 2);
} else if (pcie_relaxed_ordering == 1 &&
(v & PCIEM_CTL_RELAXED_ORD_ENABLE) == 0) {
v |= PCIEM_CTL_RELAXED_ORD_ENABLE;
pci_write_config(dev, i + PCIER_DEVICE_CTL, v, 2);
}
}
sc->sge_gts_reg = MYPF_REG(A_SGE_PF_GTS);
sc->sge_kdoorbell_reg = MYPF_REG(A_SGE_PF_KDOORBELL);
sc->traceq = -1;
mtx_init(&sc->ifp_lock, sc->ifp_lockname, 0, MTX_DEF);
snprintf(sc->ifp_lockname, sizeof(sc->ifp_lockname), "%s tracer",
device_get_nameunit(dev));
snprintf(sc->lockname, sizeof(sc->lockname), "%s",
device_get_nameunit(dev));
mtx_init(&sc->sc_lock, sc->lockname, 0, MTX_DEF);
t4_add_adapter(sc);
mtx_init(&sc->sfl_lock, "starving freelists", 0, MTX_DEF);
TAILQ_INIT(&sc->sfl);
callout_init_mtx(&sc->sfl_callout, &sc->sfl_lock, 0);
mtx_init(&sc->reg_lock, "indirect register access", 0, MTX_DEF);
sc->policy = NULL;
rw_init(&sc->policy_lock, "connection offload policy");
rc = t4_map_bars_0_and_4(sc);
if (rc != 0)
goto done; /* error message displayed already */
memset(sc->chan_map, 0xff, sizeof(sc->chan_map));
/* Prepare the adapter for operation. */
buf = malloc(PAGE_SIZE, M_CXGBE, M_ZERO | M_WAITOK);
rc = -t4_prep_adapter(sc, buf);
free(buf, M_CXGBE);
if (rc != 0) {
device_printf(dev, "failed to prepare adapter: %d.\n", rc);
goto done;
}
/*
* This is the real PF# to which we're attaching. Works from within PCI
* passthrough environments too, where pci_get_function() could return a
* different PF# depending on the passthrough configuration. We need to
* use the real PF# in all our communication with the firmware.
*/
j = t4_read_reg(sc, A_PL_WHOAMI);
sc->pf = chip_id(sc) <= CHELSIO_T5 ? G_SOURCEPF(j) : G_T6_SOURCEPF(j);
sc->mbox = sc->pf;
t4_init_devnames(sc);
if (sc->names == NULL) {
rc = ENOTSUP;
goto done; /* error message displayed already */
}
/*
* Do this really early, with the memory windows set up even before the
* character device. The userland tool's register i/o and mem read
* will work even in "recovery mode".
*/
setup_memwin(sc);
if (t4_init_devlog_params(sc, 0) == 0)
fixup_devlog_params(sc);
make_dev_args_init(&mda);
mda.mda_devsw = &t4_cdevsw;
mda.mda_uid = UID_ROOT;
mda.mda_gid = GID_WHEEL;
mda.mda_mode = 0600;
mda.mda_si_drv1 = sc;
rc = make_dev_s(&mda, &sc->cdev, "%s", device_get_nameunit(dev));
if (rc != 0)
device_printf(dev, "failed to create nexus char device: %d.\n",
rc);
/* Go no further if recovery mode has been requested. */
if (TUNABLE_INT_FETCH("hw.cxgbe.sos", &i) && i != 0) {
device_printf(dev, "recovery mode.\n");
goto done;
}
#if defined(__i386__)
if ((cpu_feature & CPUID_CX8) == 0) {
device_printf(dev, "64 bit atomics not available.\n");
rc = ENOTSUP;
goto done;
}
#endif
/* Contact the firmware and try to become the master driver. */
rc = contact_firmware(sc);
if (rc != 0)
goto done; /* error message displayed already */
MPASS(sc->flags & FW_OK);
rc = get_params__pre_init(sc);
if (rc != 0)
goto done; /* error message displayed already */
if (sc->flags & MASTER_PF) {
rc = partition_resources(sc);
if (rc != 0)
goto done; /* error message displayed already */
t4_intr_clear(sc);
}
rc = get_params__post_init(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = set_params__post_init(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = t4_map_bar_2(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = t4_create_dma_tag(sc);
if (rc != 0)
goto done; /* error message displayed already */
/*
* First pass over all the ports - allocate VIs and initialize some
* basic parameters like mac address, port type, etc.
*/
for_each_port(sc, i) {
struct port_info *pi;
pi = malloc(sizeof(*pi), M_CXGBE, M_ZERO | M_WAITOK);
sc->port[i] = pi;
/* These must be set before t4_port_init */
pi->adapter = sc;
pi->port_id = i;
/*
* XXX: vi[0] is special so we can't delay this allocation until
* pi->nvi's final value is known.
*/
pi->vi = malloc(sizeof(struct vi_info) * t4_num_vis, M_CXGBE,
M_ZERO | M_WAITOK);
/*
* Allocate the "main" VI and initialize parameters
* like mac addr.
*/
rc = -t4_port_init(sc, sc->mbox, sc->pf, 0, i);
if (rc != 0) {
device_printf(dev, "unable to initialize port %d: %d\n",
i, rc);
free(pi->vi, M_CXGBE);
free(pi, M_CXGBE);
sc->port[i] = NULL;
goto done;
}
snprintf(pi->lockname, sizeof(pi->lockname), "%sp%d",
device_get_nameunit(dev), i);
mtx_init(&pi->pi_lock, pi->lockname, 0, MTX_DEF);
sc->chan_map[pi->tx_chan] = i;
/* All VIs on this port share this media. */
ifmedia_init(&pi->media, IFM_IMASK, cxgbe_media_change,
cxgbe_media_status);
PORT_LOCK(pi);
init_link_config(pi);
fixup_link_config(pi);
build_medialist(pi);
if (fixed_ifmedia(pi))
pi->flags |= FIXED_IFMEDIA;
PORT_UNLOCK(pi);
pi->dev = device_add_child(dev, sc->names->ifnet_name,
t4_ifnet_unit(sc, pi));
if (pi->dev == NULL) {
device_printf(dev,
"failed to add device for port %d.\n", i);
rc = ENXIO;
goto done;
}
pi->vi[0].dev = pi->dev;
device_set_softc(pi->dev, pi);
}
/*
* Interrupt type, # of interrupts, # of rx/tx queues, etc.
*/
nports = sc->params.nports;
rc = cfg_itype_and_nqueues(sc, &iaq);
if (rc != 0)
goto done; /* error message displayed already */
num_vis = iaq.num_vis;
sc->intr_type = iaq.intr_type;
sc->intr_count = iaq.nirq;
s = &sc->sge;
s->nrxq = nports * iaq.nrxq;
s->ntxq = nports * iaq.ntxq;
if (num_vis > 1) {
s->nrxq += nports * (num_vis - 1) * iaq.nrxq_vi;
s->ntxq += nports * (num_vis - 1) * iaq.ntxq_vi;
}
s->neq = s->ntxq + s->nrxq; /* the free list in an rxq is an eq */
s->neq += nports; /* ctrl queues: 1 per port */
s->niq = s->nrxq + 1; /* 1 extra for firmware event queue */
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
if (is_offload(sc) || is_ethoffload(sc)) {
s->nofldtxq = nports * iaq.nofldtxq;
if (num_vis > 1)
s->nofldtxq += nports * (num_vis - 1) * iaq.nofldtxq_vi;
s->neq += s->nofldtxq;
s->ofld_txq = malloc(s->nofldtxq * sizeof(struct sge_wrq),
M_CXGBE, M_ZERO | M_WAITOK);
}
#endif
#ifdef TCP_OFFLOAD
if (is_offload(sc)) {
s->nofldrxq = nports * iaq.nofldrxq;
if (num_vis > 1)
s->nofldrxq += nports * (num_vis - 1) * iaq.nofldrxq_vi;
s->neq += s->nofldrxq; /* free list */
s->niq += s->nofldrxq;
s->ofld_rxq = malloc(s->nofldrxq * sizeof(struct sge_ofld_rxq),
M_CXGBE, M_ZERO | M_WAITOK);
}
#endif
#ifdef DEV_NETMAP
if (num_vis > 1) {
s->nnmrxq = nports * (num_vis - 1) * iaq.nnmrxq_vi;
s->nnmtxq = nports * (num_vis - 1) * iaq.nnmtxq_vi;
}
s->neq += s->nnmtxq + s->nnmrxq;
s->niq += s->nnmrxq;
s->nm_rxq = malloc(s->nnmrxq * sizeof(struct sge_nm_rxq),
M_CXGBE, M_ZERO | M_WAITOK);
s->nm_txq = malloc(s->nnmtxq * sizeof(struct sge_nm_txq),
M_CXGBE, M_ZERO | M_WAITOK);
#endif
s->ctrlq = malloc(nports * sizeof(struct sge_wrq), M_CXGBE,
M_ZERO | M_WAITOK);
s->rxq = malloc(s->nrxq * sizeof(struct sge_rxq), M_CXGBE,
M_ZERO | M_WAITOK);
s->txq = malloc(s->ntxq * sizeof(struct sge_txq), M_CXGBE,
M_ZERO | M_WAITOK);
s->iqmap = malloc(s->niq * sizeof(struct sge_iq *), M_CXGBE,
M_ZERO | M_WAITOK);
s->eqmap = malloc(s->neq * sizeof(struct sge_eq *), M_CXGBE,
M_ZERO | M_WAITOK);
sc->irq = malloc(sc->intr_count * sizeof(struct irq), M_CXGBE,
M_ZERO | M_WAITOK);
t4_init_l2t(sc, M_WAITOK);
t4_init_smt(sc, M_WAITOK);
t4_init_tx_sched(sc);
#ifdef RATELIMIT
t4_init_etid_table(sc);
#endif
#ifdef INET6
t4_init_clip_table(sc);
#endif
if (sc->vres.key.size != 0)
sc->key_map = vmem_create("T4TLS key map", sc->vres.key.start,
sc->vres.key.size, 32, 0, M_FIRSTFIT | M_WAITOK);
/*
* Second pass over the ports. This time we know the number of rx and
* tx queues that each port should get.
*/
rqidx = tqidx = 0;
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
ofld_tqidx = 0;
#endif
#ifdef TCP_OFFLOAD
ofld_rqidx = 0;
#endif
#ifdef DEV_NETMAP
nm_rqidx = nm_tqidx = 0;
#endif
for_each_port(sc, i) {
struct port_info *pi = sc->port[i];
struct vi_info *vi;
if (pi == NULL)
continue;
pi->nvi = num_vis;
for_each_vi(pi, j, vi) {
vi->pi = pi;
vi->qsize_rxq = t4_qsize_rxq;
vi->qsize_txq = t4_qsize_txq;
vi->first_rxq = rqidx;
vi->first_txq = tqidx;
vi->tmr_idx = t4_tmr_idx;
vi->pktc_idx = t4_pktc_idx;
vi->nrxq = j == 0 ? iaq.nrxq : iaq.nrxq_vi;
vi->ntxq = j == 0 ? iaq.ntxq : iaq.ntxq_vi;
rqidx += vi->nrxq;
tqidx += vi->ntxq;
if (j == 0 && vi->ntxq > 1)
vi->rsrv_noflowq = t4_rsrv_noflowq ? 1 : 0;
else
vi->rsrv_noflowq = 0;
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
vi->first_ofld_txq = ofld_tqidx;
vi->nofldtxq = j == 0 ? iaq.nofldtxq : iaq.nofldtxq_vi;
ofld_tqidx += vi->nofldtxq;
#endif
#ifdef TCP_OFFLOAD
vi->ofld_tmr_idx = t4_tmr_idx_ofld;
vi->ofld_pktc_idx = t4_pktc_idx_ofld;
vi->first_ofld_rxq = ofld_rqidx;
vi->nofldrxq = j == 0 ? iaq.nofldrxq : iaq.nofldrxq_vi;
ofld_rqidx += vi->nofldrxq;
#endif
#ifdef DEV_NETMAP
if (j > 0) {
vi->first_nm_rxq = nm_rqidx;
vi->first_nm_txq = nm_tqidx;
vi->nnmrxq = iaq.nnmrxq_vi;
vi->nnmtxq = iaq.nnmtxq_vi;
nm_rqidx += vi->nnmrxq;
nm_tqidx += vi->nnmtxq;
}
#endif
}
}
rc = t4_setup_intr_handlers(sc);
if (rc != 0) {
device_printf(dev,
"failed to setup interrupt handlers: %d\n", rc);
goto done;
}
rc = bus_generic_probe(dev);
if (rc != 0) {
device_printf(dev, "failed to probe child drivers: %d\n", rc);
goto done;
}
/*
* Ensure thread-safe mailbox access (in debug builds).
*
* So far this was the only thread accessing the mailbox but various
* ifnets and sysctls are about to be created and their handlers/ioctls
* will access the mailbox from different threads.
*/
sc->flags |= CHK_MBOX_ACCESS;
rc = bus_generic_attach(dev);
if (rc != 0) {
device_printf(dev,
"failed to attach all child ports: %d\n", rc);
goto done;
}
device_printf(dev,
"PCIe gen%d x%d, %d ports, %d %s interrupt%s, %d eq, %d iq\n",
sc->params.pci.speed, sc->params.pci.width, sc->params.nports,
sc->intr_count, sc->intr_type == INTR_MSIX ? "MSI-X" :
(sc->intr_type == INTR_MSI ? "MSI" : "INTx"),
sc->intr_count > 1 ? "s" : "", sc->sge.neq, sc->sge.niq);
t4_set_desc(sc);
notify_siblings(dev, 0);
done:
if (rc != 0 && sc->cdev) {
/* cdev was created and so cxgbetool works; recover that way. */
device_printf(dev,
"error during attach, adapter is now in recovery mode.\n");
rc = 0;
}
if (rc != 0)
t4_detach_common(dev);
else
t4_sysctls(sc);
return (rc);
}
static int
t4_child_location_str(device_t bus, device_t dev, char *buf, size_t buflen)
{
struct port_info *pi;
pi = device_get_softc(dev);
snprintf(buf, buflen, "port=%d", pi->port_id);
return (0);
}
static int
t4_ready(device_t dev)
{
struct adapter *sc;
sc = device_get_softc(dev);
if (sc->flags & FW_OK)
return (0);
return (ENXIO);
}
static int
t4_read_port_device(device_t dev, int port, device_t *child)
{
struct adapter *sc;
struct port_info *pi;
sc = device_get_softc(dev);
if (port < 0 || port >= MAX_NPORTS)
return (EINVAL);
pi = sc->port[port];
if (pi == NULL || pi->dev == NULL)
return (ENXIO);
*child = pi->dev;
return (0);
}
static int
notify_siblings(device_t dev, int detaching)
{
device_t sibling;
int error, i;
error = 0;
for (i = 0; i < PCI_FUNCMAX; i++) {
if (i == pci_get_function(dev))
continue;
sibling = pci_find_dbsf(pci_get_domain(dev), pci_get_bus(dev),
pci_get_slot(dev), i);
if (sibling == NULL || !device_is_attached(sibling))
continue;
if (detaching)
error = T4_DETACH_CHILD(sibling);
else
(void)T4_ATTACH_CHILD(sibling);
if (error)
break;
}
return (error);
}
/*
* Idempotent
*/
static int
t4_detach(device_t dev)
{
struct adapter *sc;
int rc;
sc = device_get_softc(dev);
rc = notify_siblings(dev, 1);
if (rc) {
device_printf(dev,
"failed to detach sibling devices: %d\n", rc);
return (rc);
}
return (t4_detach_common(dev));
}
int
t4_detach_common(device_t dev)
{
struct adapter *sc;
struct port_info *pi;
int i, rc;
sc = device_get_softc(dev);
if (sc->cdev) {
destroy_dev(sc->cdev);
sc->cdev = NULL;
}
sc->flags &= ~CHK_MBOX_ACCESS;
if (sc->flags & FULL_INIT_DONE) {
if (!(sc->flags & IS_VF))
t4_intr_disable(sc);
}
if (device_is_attached(dev)) {
rc = bus_generic_detach(dev);
if (rc) {
device_printf(dev,
"failed to detach child devices: %d\n", rc);
return (rc);
}
}
for (i = 0; i < sc->intr_count; i++)
t4_free_irq(sc, &sc->irq[i]);
if ((sc->flags & (IS_VF | FW_OK)) == FW_OK)
t4_free_tx_sched(sc);
for (i = 0; i < MAX_NPORTS; i++) {
pi = sc->port[i];
if (pi) {
t4_free_vi(sc, sc->mbox, sc->pf, 0, pi->vi[0].viid);
if (pi->dev)
device_delete_child(dev, pi->dev);
mtx_destroy(&pi->pi_lock);
free(pi->vi, M_CXGBE);
free(pi, M_CXGBE);
}
}
device_delete_children(dev);
if (sc->flags & FULL_INIT_DONE)
adapter_full_uninit(sc);
if ((sc->flags & (IS_VF | FW_OK)) == FW_OK)
t4_fw_bye(sc, sc->mbox);
if (sc->intr_type == INTR_MSI || sc->intr_type == INTR_MSIX)
pci_release_msi(dev);
if (sc->regs_res)
bus_release_resource(dev, SYS_RES_MEMORY, sc->regs_rid,
sc->regs_res);
if (sc->udbs_res)
bus_release_resource(dev, SYS_RES_MEMORY, sc->udbs_rid,
sc->udbs_res);
if (sc->msix_res)
bus_release_resource(dev, SYS_RES_MEMORY, sc->msix_rid,
sc->msix_res);
if (sc->l2t)
t4_free_l2t(sc->l2t);
if (sc->smt)
t4_free_smt(sc->smt);
#ifdef RATELIMIT
t4_free_etid_table(sc);
#endif
if (sc->key_map)
vmem_destroy(sc->key_map);
#ifdef INET6
t4_destroy_clip_table(sc);
#endif
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
free(sc->sge.ofld_txq, M_CXGBE);
#endif
#ifdef TCP_OFFLOAD
free(sc->sge.ofld_rxq, M_CXGBE);
#endif
#ifdef DEV_NETMAP
free(sc->sge.nm_rxq, M_CXGBE);
free(sc->sge.nm_txq, M_CXGBE);
#endif
free(sc->irq, M_CXGBE);
free(sc->sge.rxq, M_CXGBE);
free(sc->sge.txq, M_CXGBE);
free(sc->sge.ctrlq, M_CXGBE);
free(sc->sge.iqmap, M_CXGBE);
free(sc->sge.eqmap, M_CXGBE);
free(sc->tids.ftid_tab, M_CXGBE);
free(sc->tids.hpftid_tab, M_CXGBE);
free_hftid_hash(&sc->tids);
free(sc->tids.atid_tab, M_CXGBE);
free(sc->tids.tid_tab, M_CXGBE);
free(sc->tt.tls_rx_ports, M_CXGBE);
t4_destroy_dma_tag(sc);
if (mtx_initialized(&sc->sc_lock)) {
sx_xlock(&t4_list_lock);
SLIST_REMOVE(&t4_list, sc, adapter, link);
sx_xunlock(&t4_list_lock);
mtx_destroy(&sc->sc_lock);
}
callout_drain(&sc->sfl_callout);
if (mtx_initialized(&sc->tids.ftid_lock)) {
mtx_destroy(&sc->tids.ftid_lock);
cv_destroy(&sc->tids.ftid_cv);
}
if (mtx_initialized(&sc->tids.atid_lock))
mtx_destroy(&sc->tids.atid_lock);
if (mtx_initialized(&sc->sfl_lock))
mtx_destroy(&sc->sfl_lock);
if (mtx_initialized(&sc->ifp_lock))
mtx_destroy(&sc->ifp_lock);
if (mtx_initialized(&sc->reg_lock))
mtx_destroy(&sc->reg_lock);
if (rw_initialized(&sc->policy_lock)) {
rw_destroy(&sc->policy_lock);
#ifdef TCP_OFFLOAD
if (sc->policy != NULL)
free_offload_policy(sc->policy);
#endif
}
for (i = 0; i < NUM_MEMWIN; i++) {
struct memwin *mw = &sc->memwin[i];
if (rw_initialized(&mw->mw_lock))
rw_destroy(&mw->mw_lock);
}
bzero(sc, sizeof(*sc));
return (0);
}
static int
cxgbe_probe(device_t dev)
{
char buf[128];
struct port_info *pi = device_get_softc(dev);
snprintf(buf, sizeof(buf), "port %d", pi->port_id);
device_set_desc_copy(dev, buf);
return (BUS_PROBE_DEFAULT);
}
#define T4_CAP (IFCAP_VLAN_HWTAGGING | IFCAP_VLAN_MTU | IFCAP_HWCSUM | \
IFCAP_VLAN_HWCSUM | IFCAP_TSO | IFCAP_JUMBO_MTU | IFCAP_LRO | \
IFCAP_VLAN_HWTSO | IFCAP_LINKSTATE | IFCAP_HWCSUM_IPV6 | IFCAP_HWSTATS | \
IFCAP_HWRXTSTMP)
#define T4_CAP_ENABLE (T4_CAP)
static int
cxgbe_vi_attach(device_t dev, struct vi_info *vi)
{
struct ifnet *ifp;
struct sbuf *sb;
vi->xact_addr_filt = -1;
callout_init(&vi->tick, 1);
/* Allocate an ifnet and set it up */
ifp = if_alloc(IFT_ETHER);
if (ifp == NULL) {
device_printf(dev, "Cannot allocate ifnet\n");
return (ENOMEM);
}
vi->ifp = ifp;
ifp->if_softc = vi;
if_initname(ifp, device_get_name(dev), device_get_unit(dev));
ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
ifp->if_init = cxgbe_init;
ifp->if_ioctl = cxgbe_ioctl;
ifp->if_transmit = cxgbe_transmit;
ifp->if_qflush = cxgbe_qflush;
ifp->if_get_counter = cxgbe_get_counter;
#ifdef RATELIMIT
ifp->if_snd_tag_alloc = cxgbe_snd_tag_alloc;
ifp->if_snd_tag_modify = cxgbe_snd_tag_modify;
ifp->if_snd_tag_query = cxgbe_snd_tag_query;
ifp->if_snd_tag_free = cxgbe_snd_tag_free;
#endif
ifp->if_capabilities = T4_CAP;
ifp->if_capenable = T4_CAP_ENABLE;
#ifdef TCP_OFFLOAD
if (vi->nofldrxq != 0)
ifp->if_capabilities |= IFCAP_TOE;
#endif
#ifdef RATELIMIT
if (is_ethoffload(vi->pi->adapter) && vi->nofldtxq != 0) {
ifp->if_capabilities |= IFCAP_TXRTLMT;
ifp->if_capenable |= IFCAP_TXRTLMT;
}
#endif
ifp->if_hwassist = CSUM_TCP | CSUM_UDP | CSUM_IP | CSUM_TSO |
CSUM_UDP_IPV6 | CSUM_TCP_IPV6;
ifp->if_hw_tsomax = IP_MAXPACKET;
ifp->if_hw_tsomaxsegcount = TX_SGL_SEGS_TSO;
#ifdef RATELIMIT
if (is_ethoffload(vi->pi->adapter) && vi->nofldtxq != 0)
ifp->if_hw_tsomaxsegcount = TX_SGL_SEGS_EO_TSO;
#endif
ifp->if_hw_tsomaxsegsize = 65536;
ether_ifattach(ifp, vi->hw_addr);
#ifdef DEV_NETMAP
if (vi->nnmrxq != 0)
cxgbe_nm_attach(vi);
#endif
sb = sbuf_new_auto();
sbuf_printf(sb, "%d txq, %d rxq (NIC)", vi->ntxq, vi->nrxq);
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
switch (ifp->if_capabilities & (IFCAP_TOE | IFCAP_TXRTLMT)) {
case IFCAP_TOE:
sbuf_printf(sb, "; %d txq (TOE)", vi->nofldtxq);
break;
case IFCAP_TOE | IFCAP_TXRTLMT:
sbuf_printf(sb, "; %d txq (TOE/ETHOFLD)", vi->nofldtxq);
break;
case IFCAP_TXRTLMT:
sbuf_printf(sb, "; %d txq (ETHOFLD)", vi->nofldtxq);
break;
}
#endif
#ifdef TCP_OFFLOAD
if (ifp->if_capabilities & IFCAP_TOE)
sbuf_printf(sb, ", %d rxq (TOE)", vi->nofldrxq);
#endif
#ifdef DEV_NETMAP
if (ifp->if_capabilities & IFCAP_NETMAP)
sbuf_printf(sb, "; %d txq, %d rxq (netmap)",
vi->nnmtxq, vi->nnmrxq);
#endif
sbuf_finish(sb);
device_printf(dev, "%s\n", sbuf_data(sb));
sbuf_delete(sb);
vi_sysctls(vi);
return (0);
}
static int
cxgbe_attach(device_t dev)
{
struct port_info *pi = device_get_softc(dev);
struct adapter *sc = pi->adapter;
struct vi_info *vi;
int i, rc;
callout_init_mtx(&pi->tick, &pi->pi_lock, 0);
rc = cxgbe_vi_attach(dev, &pi->vi[0]);
if (rc)
return (rc);
for_each_vi(pi, i, vi) {
if (i == 0)
continue;
vi->dev = device_add_child(dev, sc->names->vi_ifnet_name, -1);
if (vi->dev == NULL) {
device_printf(dev, "failed to add VI %d\n", i);
continue;
}
device_set_softc(vi->dev, vi);
}
cxgbe_sysctls(pi);
bus_generic_attach(dev);
return (0);
}
static void
cxgbe_vi_detach(struct vi_info *vi)
{
struct ifnet *ifp = vi->ifp;
ether_ifdetach(ifp);
/* Let detach proceed even if these fail. */
#ifdef DEV_NETMAP
if (ifp->if_capabilities & IFCAP_NETMAP)
cxgbe_nm_detach(vi);
#endif
cxgbe_uninit_synchronized(vi);
callout_drain(&vi->tick);
vi_full_uninit(vi);
if_free(vi->ifp);
vi->ifp = NULL;
}
static int
cxgbe_detach(device_t dev)
{
struct port_info *pi = device_get_softc(dev);
struct adapter *sc = pi->adapter;
int rc;
/* Detach the extra VIs first. */
rc = bus_generic_detach(dev);
if (rc)
return (rc);
device_delete_children(dev);
doom_vi(sc, &pi->vi[0]);
if (pi->flags & HAS_TRACEQ) {
sc->traceq = -1; /* cloner should not create ifnet */
t4_tracer_port_detach(sc);
}
cxgbe_vi_detach(&pi->vi[0]);
callout_drain(&pi->tick);
ifmedia_removeall(&pi->media);
end_synchronized_op(sc, 0);
return (0);
}
static void
cxgbe_init(void *arg)
{
struct vi_info *vi = arg;
struct adapter *sc = vi->pi->adapter;
if (begin_synchronized_op(sc, vi, SLEEP_OK | INTR_OK, "t4init") != 0)
return;
cxgbe_init_synchronized(vi);
end_synchronized_op(sc, 0);
}
static int
cxgbe_ioctl(struct ifnet *ifp, unsigned long cmd, caddr_t data)
{
int rc = 0, mtu, flags;
struct vi_info *vi = ifp->if_softc;
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
struct ifreq *ifr = (struct ifreq *)data;
uint32_t mask;
switch (cmd) {
case SIOCSIFMTU:
mtu = ifr->ifr_mtu;
if (mtu < ETHERMIN || mtu > MAX_MTU)
return (EINVAL);
rc = begin_synchronized_op(sc, vi, SLEEP_OK | INTR_OK, "t4mtu");
if (rc)
return (rc);
ifp->if_mtu = mtu;
if (vi->flags & VI_INIT_DONE) {
t4_update_fl_bufsize(ifp);
if (ifp->if_drv_flags & IFF_DRV_RUNNING)
rc = update_mac_settings(ifp, XGMAC_MTU);
}
end_synchronized_op(sc, 0);
break;
case SIOCSIFFLAGS:
rc = begin_synchronized_op(sc, vi, SLEEP_OK | INTR_OK, "t4flg");
if (rc)
return (rc);
if (ifp->if_flags & IFF_UP) {
if (ifp->if_drv_flags & IFF_DRV_RUNNING) {
flags = vi->if_flags;
if ((ifp->if_flags ^ flags) &
(IFF_PROMISC | IFF_ALLMULTI)) {
rc = update_mac_settings(ifp,
XGMAC_PROMISC | XGMAC_ALLMULTI);
}
} else {
rc = cxgbe_init_synchronized(vi);
}
vi->if_flags = ifp->if_flags;
} else if (ifp->if_drv_flags & IFF_DRV_RUNNING) {
rc = cxgbe_uninit_synchronized(vi);
}
end_synchronized_op(sc, 0);
break;
case SIOCADDMULTI:
case SIOCDELMULTI:
rc = begin_synchronized_op(sc, vi, SLEEP_OK | INTR_OK, "t4multi");
if (rc)
return (rc);
if (ifp->if_drv_flags & IFF_DRV_RUNNING)
rc = update_mac_settings(ifp, XGMAC_MCADDRS);
end_synchronized_op(sc, 0);
break;
case SIOCSIFCAP:
rc = begin_synchronized_op(sc, vi, SLEEP_OK | INTR_OK, "t4cap");
if (rc)
return (rc);
mask = ifr->ifr_reqcap ^ ifp->if_capenable;
if (mask & IFCAP_TXCSUM) {
ifp->if_capenable ^= IFCAP_TXCSUM;
ifp->if_hwassist ^= (CSUM_TCP | CSUM_UDP | CSUM_IP);
if (IFCAP_TSO4 & ifp->if_capenable &&
!(IFCAP_TXCSUM & ifp->if_capenable)) {
ifp->if_capenable &= ~IFCAP_TSO4;
if_printf(ifp,
"tso4 disabled due to -txcsum.\n");
}
}
if (mask & IFCAP_TXCSUM_IPV6) {
ifp->if_capenable ^= IFCAP_TXCSUM_IPV6;
ifp->if_hwassist ^= (CSUM_UDP_IPV6 | CSUM_TCP_IPV6);
if (IFCAP_TSO6 & ifp->if_capenable &&
!(IFCAP_TXCSUM_IPV6 & ifp->if_capenable)) {
ifp->if_capenable &= ~IFCAP_TSO6;
if_printf(ifp,
"tso6 disabled due to -txcsum6.\n");
}
}
if (mask & IFCAP_RXCSUM)
ifp->if_capenable ^= IFCAP_RXCSUM;
if (mask & IFCAP_RXCSUM_IPV6)
ifp->if_capenable ^= IFCAP_RXCSUM_IPV6;
/*
* Note that we leave CSUM_TSO alone (it is always set). The
* kernel takes both IFCAP_TSOx and CSUM_TSO into account before
* sending a TSO request our way, so it's sufficient to toggle
* IFCAP_TSOx only.
*/
if (mask & IFCAP_TSO4) {
if (!(IFCAP_TSO4 & ifp->if_capenable) &&
!(IFCAP_TXCSUM & ifp->if_capenable)) {
if_printf(ifp, "enable txcsum first.\n");
rc = EAGAIN;
goto fail;
}
ifp->if_capenable ^= IFCAP_TSO4;
}
if (mask & IFCAP_TSO6) {
if (!(IFCAP_TSO6 & ifp->if_capenable) &&
!(IFCAP_TXCSUM_IPV6 & ifp->if_capenable)) {
if_printf(ifp, "enable txcsum6 first.\n");
rc = EAGAIN;
goto fail;
}
ifp->if_capenable ^= IFCAP_TSO6;
}
if (mask & IFCAP_LRO) {
#if defined(INET) || defined(INET6)
int i;
struct sge_rxq *rxq;
ifp->if_capenable ^= IFCAP_LRO;
for_each_rxq(vi, i, rxq) {
if (ifp->if_capenable & IFCAP_LRO)
rxq->iq.flags |= IQ_LRO_ENABLED;
else
rxq->iq.flags &= ~IQ_LRO_ENABLED;
}
#endif
}
#ifdef TCP_OFFLOAD
if (mask & IFCAP_TOE) {
int enable = (ifp->if_capenable ^ mask) & IFCAP_TOE;
rc = toe_capability(vi, enable);
if (rc != 0)
goto fail;
ifp->if_capenable ^= mask;
}
#endif
if (mask & IFCAP_VLAN_HWTAGGING) {
ifp->if_capenable ^= IFCAP_VLAN_HWTAGGING;
if (ifp->if_drv_flags & IFF_DRV_RUNNING)
rc = update_mac_settings(ifp, XGMAC_VLANEX);
}
if (mask & IFCAP_VLAN_MTU) {
ifp->if_capenable ^= IFCAP_VLAN_MTU;
/* Need to find out how to disable auto-mtu-inflation */
}
if (mask & IFCAP_VLAN_HWTSO)
ifp->if_capenable ^= IFCAP_VLAN_HWTSO;
if (mask & IFCAP_VLAN_HWCSUM)
ifp->if_capenable ^= IFCAP_VLAN_HWCSUM;
#ifdef RATELIMIT
if (mask & IFCAP_TXRTLMT)
ifp->if_capenable ^= IFCAP_TXRTLMT;
#endif
if (mask & IFCAP_HWRXTSTMP) {
int i;
struct sge_rxq *rxq;
ifp->if_capenable ^= IFCAP_HWRXTSTMP;
for_each_rxq(vi, i, rxq) {
if (ifp->if_capenable & IFCAP_HWRXTSTMP)
rxq->iq.flags |= IQ_RX_TIMESTAMP;
else
rxq->iq.flags &= ~IQ_RX_TIMESTAMP;
}
}
#ifdef VLAN_CAPABILITIES
VLAN_CAPABILITIES(ifp);
#endif
fail:
end_synchronized_op(sc, 0);
break;
case SIOCSIFMEDIA:
case SIOCGIFMEDIA:
case SIOCGIFXMEDIA:
ifmedia_ioctl(ifp, ifr, &pi->media, cmd);
break;
case SIOCGI2C: {
struct ifi2creq i2c;
rc = copyin(ifr_data_get_ptr(ifr), &i2c, sizeof(i2c));
if (rc != 0)
break;
if (i2c.dev_addr != 0xA0 && i2c.dev_addr != 0xA2) {
rc = EPERM;
break;
}
if (i2c.len > sizeof(i2c.data)) {
rc = EINVAL;
break;
}
rc = begin_synchronized_op(sc, vi, SLEEP_OK | INTR_OK, "t4i2c");
if (rc)
return (rc);
rc = -t4_i2c_rd(sc, sc->mbox, pi->port_id, i2c.dev_addr,
i2c.offset, i2c.len, &i2c.data[0]);
end_synchronized_op(sc, 0);
if (rc == 0)
rc = copyout(&i2c, ifr_data_get_ptr(ifr), sizeof(i2c));
break;
}
default:
rc = ether_ioctl(ifp, cmd, data);
}
return (rc);
}
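The SIOCSIFCAP handler above relies on the XOR-mask idiom: `mask = ifr_reqcap ^ if_capenable` holds exactly the bits the request wants changed, and XOR-ing a masked bit into `if_capenable` flips it to match the request. A minimal userspace sketch of that arithmetic (the `CAP_A`/`CAP_B` bits are illustrative, not real IFCAP_* values):

```c
#include <assert.h>

#define CAP_A 0x1	/* stand-in for an IFCAP_* bit */
#define CAP_B 0x2	/* stand-in for another IFCAP_* bit */

/* Sketch of the SIOCSIFCAP mask logic: toggle only the bits that
 * differ between the requested and the currently enabled set. */
static unsigned
apply_caps(unsigned reqcap, unsigned capenable)
{
	unsigned mask = reqcap ^ capenable;

	if (mask & CAP_A)
		capenable ^= CAP_A;
	if (mask & CAP_B)
		capenable ^= CAP_B;
	return (capenable);
}
```

Bits that are already in the requested state are untouched, which is why the driver can hang extra work (hwassist updates, TSO dependency checks) off each `mask & IFCAP_*` test.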
static int
cxgbe_transmit(struct ifnet *ifp, struct mbuf *m)
{
struct vi_info *vi = ifp->if_softc;
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
struct sge_txq *txq;
void *items[1];
int rc;
M_ASSERTPKTHDR(m);
MPASS(m->m_nextpkt == NULL); /* not quite ready for this yet */
if (__predict_false(pi->link_cfg.link_ok == false)) {
m_freem(m);
return (ENETDOWN);
}
rc = parse_pkt(sc, &m);
if (__predict_false(rc != 0)) {
MPASS(m == NULL); /* was freed already */
atomic_add_int(&pi->tx_parse_error, 1); /* rare, atomic is ok */
return (rc);
}
#ifdef RATELIMIT
if (m->m_pkthdr.snd_tag != NULL) {
/* EAGAIN tells the stack we are not the correct interface. */
if (__predict_false(ifp != m->m_pkthdr.snd_tag->ifp)) {
m_freem(m);
return (EAGAIN);
}
return (ethofld_transmit(ifp, m));
}
#endif
/* Select a txq. */
txq = &sc->sge.txq[vi->first_txq];
if (M_HASHTYPE_GET(m) != M_HASHTYPE_NONE)
txq += ((m->m_pkthdr.flowid % (vi->ntxq - vi->rsrv_noflowq)) +
vi->rsrv_noflowq);
items[0] = m;
rc = mp_ring_enqueue(txq->r, items, 1, 4096);
if (__predict_false(rc != 0))
m_freem(m);
return (rc);
}
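The txq selection in cxgbe_transmit() reserves the first `rsrv_noflowq` queues for packets without a flow hash and spreads hashed packets over the rest. A minimal userspace sketch of just that index arithmetic (queue counts are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the txq index math in cxgbe_transmit(): unhashed
 * packets go to the first txq; hashed packets land on one of the
 * queues past the reserved no-flow ones. */
static int
pick_txq(uint32_t flowid, int has_hash, int ntxq, int rsrv_noflowq)
{
	if (!has_hash)
		return (0);
	return ((int)(flowid % (ntxq - rsrv_noflowq)) + rsrv_noflowq);
}
```

The modulo is over `ntxq - rsrv_noflowq`, so the result is always in `[rsrv_noflowq, ntxq)` for hashed traffic.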
static void
cxgbe_qflush(struct ifnet *ifp)
{
struct vi_info *vi = ifp->if_softc;
struct sge_txq *txq;
int i;
/* queues do not exist if !VI_INIT_DONE. */
if (vi->flags & VI_INIT_DONE) {
for_each_txq(vi, i, txq) {
TXQ_LOCK(txq);
txq->eq.flags |= EQ_QFLUSH;
TXQ_UNLOCK(txq);
while (!mp_ring_is_idle(txq->r)) {
mp_ring_check_drainage(txq->r, 0);
pause("qflush", 1);
}
TXQ_LOCK(txq);
txq->eq.flags &= ~EQ_QFLUSH;
TXQ_UNLOCK(txq);
}
}
if_qflush(ifp);
}
static uint64_t
vi_get_counter(struct ifnet *ifp, ift_counter c)
{
struct vi_info *vi = ifp->if_softc;
struct fw_vi_stats_vf *s = &vi->stats;
vi_refresh_stats(vi->pi->adapter, vi);
switch (c) {
case IFCOUNTER_IPACKETS:
return (s->rx_bcast_frames + s->rx_mcast_frames +
s->rx_ucast_frames);
case IFCOUNTER_IERRORS:
return (s->rx_err_frames);
case IFCOUNTER_OPACKETS:
return (s->tx_bcast_frames + s->tx_mcast_frames +
s->tx_ucast_frames + s->tx_offload_frames);
case IFCOUNTER_OERRORS:
return (s->tx_drop_frames);
case IFCOUNTER_IBYTES:
return (s->rx_bcast_bytes + s->rx_mcast_bytes +
s->rx_ucast_bytes);
case IFCOUNTER_OBYTES:
return (s->tx_bcast_bytes + s->tx_mcast_bytes +
s->tx_ucast_bytes + s->tx_offload_bytes);
case IFCOUNTER_IMCASTS:
return (s->rx_mcast_frames);
case IFCOUNTER_OMCASTS:
return (s->tx_mcast_frames);
case IFCOUNTER_OQDROPS: {
uint64_t drops;
drops = 0;
if (vi->flags & VI_INIT_DONE) {
int i;
struct sge_txq *txq;
for_each_txq(vi, i, txq)
drops += counter_u64_fetch(txq->r->drops);
}
return (drops);
}
default:
return (if_get_counter_default(ifp, c));
}
}
uint64_t
cxgbe_get_counter(struct ifnet *ifp, ift_counter c)
{
struct vi_info *vi = ifp->if_softc;
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
struct port_stats *s = &pi->stats;
if (pi->nvi > 1 || sc->flags & IS_VF)
return (vi_get_counter(ifp, c));
cxgbe_refresh_stats(sc, pi);
switch (c) {
case IFCOUNTER_IPACKETS:
return (s->rx_frames);
case IFCOUNTER_IERRORS:
return (s->rx_jabber + s->rx_runt + s->rx_too_long +
s->rx_fcs_err + s->rx_len_err);
case IFCOUNTER_OPACKETS:
return (s->tx_frames);
case IFCOUNTER_OERRORS:
return (s->tx_error_frames);
case IFCOUNTER_IBYTES:
return (s->rx_octets);
case IFCOUNTER_OBYTES:
return (s->tx_octets);
case IFCOUNTER_IMCASTS:
return (s->rx_mcast_frames);
case IFCOUNTER_OMCASTS:
return (s->tx_mcast_frames);
case IFCOUNTER_IQDROPS:
return (s->rx_ovflow0 + s->rx_ovflow1 + s->rx_ovflow2 +
s->rx_ovflow3 + s->rx_trunc0 + s->rx_trunc1 + s->rx_trunc2 +
s->rx_trunc3 + pi->tnl_cong_drops);
case IFCOUNTER_OQDROPS: {
uint64_t drops;
drops = s->tx_drop;
if (vi->flags & VI_INIT_DONE) {
int i;
struct sge_txq *txq;
for_each_txq(vi, i, txq)
drops += counter_u64_fetch(txq->r->drops);
}
return (drops);
}
default:
return (if_get_counter_default(ifp, c));
}
}
/*
* The kernel picks a media from the list we provided but we still validate
* the request.
*/
int
cxgbe_media_change(struct ifnet *ifp)
{
struct vi_info *vi = ifp->if_softc;
struct port_info *pi = vi->pi;
struct ifmedia *ifm = &pi->media;
struct link_config *lc = &pi->link_cfg;
struct adapter *sc = pi->adapter;
int rc;
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4mec");
if (rc != 0)
return (rc);
PORT_LOCK(pi);
if (IFM_SUBTYPE(ifm->ifm_media) == IFM_AUTO) {
/* ifconfig .. media autoselect */
if (!(lc->supported & FW_PORT_CAP32_ANEG)) {
rc = ENOTSUP; /* AN not supported by transceiver */
goto done;
}
lc->requested_aneg = AUTONEG_ENABLE;
lc->requested_speed = 0;
lc->requested_fc |= PAUSE_AUTONEG;
} else {
lc->requested_aneg = AUTONEG_DISABLE;
lc->requested_speed =
ifmedia_baudrate(ifm->ifm_media) / 1000000;
lc->requested_fc = 0;
if (IFM_OPTIONS(ifm->ifm_media) & IFM_ETH_RXPAUSE)
lc->requested_fc |= PAUSE_RX;
if (IFM_OPTIONS(ifm->ifm_media) & IFM_ETH_TXPAUSE)
lc->requested_fc |= PAUSE_TX;
}
if (pi->up_vis > 0) {
fixup_link_config(pi);
rc = apply_link_config(pi);
}
done:
PORT_UNLOCK(pi);
end_synchronized_op(sc, 0);
return (rc);
}
/*
* Base media word (without ETHER, pause, link active, etc.) for the port at the
* given speed.
*/
static int
port_mword(struct port_info *pi, uint32_t speed)
{
MPASS(speed & M_FW_PORT_CAP32_SPEED);
MPASS(powerof2(speed));
switch (pi->port_type) {
case FW_PORT_TYPE_BT_SGMII:
case FW_PORT_TYPE_BT_XFI:
case FW_PORT_TYPE_BT_XAUI:
/* BaseT */
switch (speed) {
case FW_PORT_CAP32_SPEED_100M:
return (IFM_100_T);
case FW_PORT_CAP32_SPEED_1G:
return (IFM_1000_T);
case FW_PORT_CAP32_SPEED_10G:
return (IFM_10G_T);
}
break;
case FW_PORT_TYPE_KX4:
if (speed == FW_PORT_CAP32_SPEED_10G)
return (IFM_10G_KX4);
break;
case FW_PORT_TYPE_CX4:
if (speed == FW_PORT_CAP32_SPEED_10G)
return (IFM_10G_CX4);
break;
case FW_PORT_TYPE_KX:
if (speed == FW_PORT_CAP32_SPEED_1G)
return (IFM_1000_KX);
break;
case FW_PORT_TYPE_KR:
case FW_PORT_TYPE_BP_AP:
case FW_PORT_TYPE_BP4_AP:
case FW_PORT_TYPE_BP40_BA:
case FW_PORT_TYPE_KR4_100G:
case FW_PORT_TYPE_KR_SFP28:
case FW_PORT_TYPE_KR_XLAUI:
switch (speed) {
case FW_PORT_CAP32_SPEED_1G:
return (IFM_1000_KX);
case FW_PORT_CAP32_SPEED_10G:
return (IFM_10G_KR);
case FW_PORT_CAP32_SPEED_25G:
return (IFM_25G_KR);
case FW_PORT_CAP32_SPEED_40G:
return (IFM_40G_KR4);
case FW_PORT_CAP32_SPEED_50G:
return (IFM_50G_KR2);
case FW_PORT_CAP32_SPEED_100G:
return (IFM_100G_KR4);
}
break;
case FW_PORT_TYPE_FIBER_XFI:
case FW_PORT_TYPE_FIBER_XAUI:
case FW_PORT_TYPE_SFP:
case FW_PORT_TYPE_QSFP_10G:
case FW_PORT_TYPE_QSA:
case FW_PORT_TYPE_QSFP:
case FW_PORT_TYPE_CR4_QSFP:
case FW_PORT_TYPE_CR_QSFP:
case FW_PORT_TYPE_CR2_QSFP:
case FW_PORT_TYPE_SFP28:
/* Pluggable transceiver */
switch (pi->mod_type) {
case FW_PORT_MOD_TYPE_LR:
switch (speed) {
case FW_PORT_CAP32_SPEED_1G:
return (IFM_1000_LX);
case FW_PORT_CAP32_SPEED_10G:
return (IFM_10G_LR);
case FW_PORT_CAP32_SPEED_25G:
return (IFM_25G_LR);
case FW_PORT_CAP32_SPEED_40G:
return (IFM_40G_LR4);
case FW_PORT_CAP32_SPEED_50G:
return (IFM_50G_LR2);
case FW_PORT_CAP32_SPEED_100G:
return (IFM_100G_LR4);
}
break;
case FW_PORT_MOD_TYPE_SR:
switch (speed) {
case FW_PORT_CAP32_SPEED_1G:
return (IFM_1000_SX);
case FW_PORT_CAP32_SPEED_10G:
return (IFM_10G_SR);
case FW_PORT_CAP32_SPEED_25G:
return (IFM_25G_SR);
case FW_PORT_CAP32_SPEED_40G:
return (IFM_40G_SR4);
case FW_PORT_CAP32_SPEED_50G:
return (IFM_50G_SR2);
case FW_PORT_CAP32_SPEED_100G:
return (IFM_100G_SR4);
}
break;
case FW_PORT_MOD_TYPE_ER:
if (speed == FW_PORT_CAP32_SPEED_10G)
return (IFM_10G_ER);
break;
case FW_PORT_MOD_TYPE_TWINAX_PASSIVE:
case FW_PORT_MOD_TYPE_TWINAX_ACTIVE:
switch (speed) {
case FW_PORT_CAP32_SPEED_1G:
return (IFM_1000_CX);
case FW_PORT_CAP32_SPEED_10G:
return (IFM_10G_TWINAX);
case FW_PORT_CAP32_SPEED_25G:
return (IFM_25G_CR);
case FW_PORT_CAP32_SPEED_40G:
return (IFM_40G_CR4);
case FW_PORT_CAP32_SPEED_50G:
return (IFM_50G_CR2);
case FW_PORT_CAP32_SPEED_100G:
return (IFM_100G_CR4);
}
break;
case FW_PORT_MOD_TYPE_LRM:
if (speed == FW_PORT_CAP32_SPEED_10G)
return (IFM_10G_LRM);
break;
case FW_PORT_MOD_TYPE_NA:
MPASS(0); /* Not pluggable? */
/* fall through */
case FW_PORT_MOD_TYPE_ERROR:
case FW_PORT_MOD_TYPE_UNKNOWN:
case FW_PORT_MOD_TYPE_NOTSUPPORTED:
break;
case FW_PORT_MOD_TYPE_NONE:
return (IFM_NONE);
}
break;
case FW_PORT_TYPE_NONE:
return (IFM_NONE);
}
return (IFM_UNKNOWN);
}
void
cxgbe_media_status(struct ifnet *ifp, struct ifmediareq *ifmr)
{
struct vi_info *vi = ifp->if_softc;
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
struct link_config *lc = &pi->link_cfg;
if (begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4med") != 0)
return;
PORT_LOCK(pi);
if (pi->up_vis == 0) {
/*
* If all the interfaces are administratively down the firmware
* does not report transceiver changes. Refresh port info here
* so that ifconfig displays accurate ifmedia at all times.
* This is the only reason we have a synchronized op in this
* function. Just PORT_LOCK would have been enough otherwise.
*/
t4_update_port_info(pi);
build_medialist(pi);
}
/* ifm_status */
ifmr->ifm_status = IFM_AVALID;
if (lc->link_ok == false)
goto done;
ifmr->ifm_status |= IFM_ACTIVE;
/* ifm_active */
ifmr->ifm_active = IFM_ETHER | IFM_FDX;
ifmr->ifm_active &= ~(IFM_ETH_TXPAUSE | IFM_ETH_RXPAUSE);
if (lc->fc & PAUSE_RX)
ifmr->ifm_active |= IFM_ETH_RXPAUSE;
if (lc->fc & PAUSE_TX)
ifmr->ifm_active |= IFM_ETH_TXPAUSE;
ifmr->ifm_active |= port_mword(pi, speed_to_fwcap(lc->speed));
done:
PORT_UNLOCK(pi);
end_synchronized_op(sc, 0);
}
static int
vcxgbe_probe(device_t dev)
{
char buf[128];
struct vi_info *vi = device_get_softc(dev);
snprintf(buf, sizeof(buf), "port %d vi %td", vi->pi->port_id,
vi - vi->pi->vi);
device_set_desc_copy(dev, buf);
return (BUS_PROBE_DEFAULT);
}
static int
alloc_extra_vi(struct adapter *sc, struct port_info *pi, struct vi_info *vi)
{
int func, index, rc;
uint32_t param, val;
ASSERT_SYNCHRONIZED_OP(sc);
index = vi - pi->vi;
MPASS(index > 0); /* This function deals with _extra_ VIs only */
KASSERT(index < nitems(vi_mac_funcs),
("%s: VI %s doesn't have a MAC func", __func__,
device_get_nameunit(vi->dev)));
func = vi_mac_funcs[index];
rc = t4_alloc_vi_func(sc, sc->mbox, pi->tx_chan, sc->pf, 0, 1,
vi->hw_addr, &vi->rss_size, func, 0);
if (rc < 0) {
device_printf(vi->dev, "failed to allocate virtual interface %d "
"for port %d: %d\n", index, pi->port_id, -rc);
return (-rc);
}
vi->viid = rc;
if (chip_id(sc) <= CHELSIO_T5)
vi->smt_idx = (rc & 0x7f) << 1;
else
vi->smt_idx = (rc & 0x7f);
if (vi->rss_size == 1) {
/*
* This VI didn't get a slice of the RSS table. Reduce the
* number of VIs being created (hw.cxgbe.num_vis) or modify the
* configuration file (nvi, rssnvi for this PF) if this is a
* problem.
*/
device_printf(vi->dev, "RSS table not available.\n");
vi->rss_base = 0xffff;
return (0);
}
param = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_RSSINFO) |
V_FW_PARAMS_PARAM_YZ(vi->viid);
rc = t4_query_params(sc, sc->mbox, sc->pf, 0, 1, &param, &val);
if (rc)
vi->rss_base = 0xffff;
else {
MPASS((val >> 16) == vi->rss_size);
vi->rss_base = val & 0xffff;
}
return (0);
}
static int
vcxgbe_attach(device_t dev)
{
struct vi_info *vi;
struct port_info *pi;
struct adapter *sc;
int rc;
vi = device_get_softc(dev);
pi = vi->pi;
sc = pi->adapter;
rc = begin_synchronized_op(sc, vi, SLEEP_OK | INTR_OK, "t4via");
if (rc)
return (rc);
rc = alloc_extra_vi(sc, pi, vi);
end_synchronized_op(sc, 0);
if (rc)
return (rc);
rc = cxgbe_vi_attach(dev, vi);
if (rc) {
t4_free_vi(sc, sc->mbox, sc->pf, 0, vi->viid);
return (rc);
}
return (0);
}
static int
vcxgbe_detach(device_t dev)
{
struct vi_info *vi;
struct adapter *sc;
vi = device_get_softc(dev);
sc = vi->pi->adapter;
doom_vi(sc, vi);
cxgbe_vi_detach(vi);
t4_free_vi(sc, sc->mbox, sc->pf, 0, vi->viid);
end_synchronized_op(sc, 0);
return (0);
}
+static struct callout fatal_callout;
+
+static void
+delayed_panic(void *arg)
+{
+ struct adapter *sc = arg;
+
+ panic("%s: panic on fatal error", device_get_nameunit(sc->dev));
+}
+
void
t4_fatal_err(struct adapter *sc, bool fw_error)
{
t4_shutdown_adapter(sc);
log(LOG_ALERT, "%s: encountered fatal error, adapter stopped.\n",
device_get_nameunit(sc->dev));
- if (t4_panic_on_fatal_err)
- panic("panic requested on fatal error");
-
if (fw_error) {
ASSERT_SYNCHRONIZED_OP(sc);
sc->flags |= ADAP_ERR;
} else {
ADAPTER_LOCK(sc);
sc->flags |= ADAP_ERR;
ADAPTER_UNLOCK(sc);
}
+
+ if (t4_panic_on_fatal_err) {
+ log(LOG_ALERT, "%s: panic on fatal error after 30s\n",
+ device_get_nameunit(sc->dev));
+ callout_reset(&fatal_callout, hz * 30, delayed_panic, sc);
+ }
}
void
t4_add_adapter(struct adapter *sc)
{
sx_xlock(&t4_list_lock);
SLIST_INSERT_HEAD(&t4_list, sc, link);
sx_xunlock(&t4_list_lock);
}
int
t4_map_bars_0_and_4(struct adapter *sc)
{
sc->regs_rid = PCIR_BAR(0);
sc->regs_res = bus_alloc_resource_any(sc->dev, SYS_RES_MEMORY,
&sc->regs_rid, RF_ACTIVE);
if (sc->regs_res == NULL) {
device_printf(sc->dev, "cannot map registers.\n");
return (ENXIO);
}
sc->bt = rman_get_bustag(sc->regs_res);
sc->bh = rman_get_bushandle(sc->regs_res);
sc->mmio_len = rman_get_size(sc->regs_res);
setbit(&sc->doorbells, DOORBELL_KDB);
sc->msix_rid = PCIR_BAR(4);
sc->msix_res = bus_alloc_resource_any(sc->dev, SYS_RES_MEMORY,
&sc->msix_rid, RF_ACTIVE);
if (sc->msix_res == NULL) {
device_printf(sc->dev, "cannot map MSI-X BAR.\n");
return (ENXIO);
}
return (0);
}
int
t4_map_bar_2(struct adapter *sc)
{
/*
* T4: only iWARP driver uses the userspace doorbells. There is no need
* to map it if RDMA is disabled.
*/
if (is_t4(sc) && sc->rdmacaps == 0)
return (0);
sc->udbs_rid = PCIR_BAR(2);
sc->udbs_res = bus_alloc_resource_any(sc->dev, SYS_RES_MEMORY,
&sc->udbs_rid, RF_ACTIVE);
if (sc->udbs_res == NULL) {
device_printf(sc->dev, "cannot map doorbell BAR.\n");
return (ENXIO);
}
sc->udbs_base = rman_get_virtual(sc->udbs_res);
if (chip_id(sc) >= CHELSIO_T5) {
setbit(&sc->doorbells, DOORBELL_UDB);
#if defined(__i386__) || defined(__amd64__)
if (t5_write_combine) {
int rc, mode;
/*
* Enable write combining on BAR2. This is the
* userspace doorbell BAR and is split into 128B
* (UDBS_SEG_SIZE) doorbell regions, each associated
* with an egress queue. The first 64B has the doorbell
* and the second 64B can be used to submit a tx work
* request with an implicit doorbell.
*/
rc = pmap_change_attr((vm_offset_t)sc->udbs_base,
rman_get_size(sc->udbs_res), PAT_WRITE_COMBINING);
if (rc == 0) {
clrbit(&sc->doorbells, DOORBELL_UDB);
setbit(&sc->doorbells, DOORBELL_WCWR);
setbit(&sc->doorbells, DOORBELL_UDBWC);
} else {
device_printf(sc->dev,
"couldn't enable write combining: %d\n",
rc);
}
mode = is_t5(sc) ? V_STATMODE(0) : V_T6_STATMODE(0);
t4_write_reg(sc, A_SGE_STAT_CFG,
V_STATSOURCE_T5(7) | mode);
}
#endif
}
sc->iwt.wc_en = isset(&sc->doorbells, DOORBELL_UDBWC) ? 1 : 0;
return (0);
}
struct memwin_init {
uint32_t base;
uint32_t aperture;
};
static const struct memwin_init t4_memwin[NUM_MEMWIN] = {
{ MEMWIN0_BASE, MEMWIN0_APERTURE },
{ MEMWIN1_BASE, MEMWIN1_APERTURE },
{ MEMWIN2_BASE_T4, MEMWIN2_APERTURE_T4 }
};
static const struct memwin_init t5_memwin[NUM_MEMWIN] = {
{ MEMWIN0_BASE, MEMWIN0_APERTURE },
{ MEMWIN1_BASE, MEMWIN1_APERTURE },
{ MEMWIN2_BASE_T5, MEMWIN2_APERTURE_T5 },
};
static void
setup_memwin(struct adapter *sc)
{
const struct memwin_init *mw_init;
struct memwin *mw;
int i;
uint32_t bar0;
if (is_t4(sc)) {
/*
* Read low 32b of bar0 indirectly via the hardware backdoor
* mechanism. Works from within PCI passthrough environments
* too, where rman_get_start() can return a different value. We
* need to program the T4 memory window decoders with the actual
* addresses that will be coming across the PCIe link.
*/
bar0 = t4_hw_pci_read_cfg4(sc, PCIR_BAR(0));
bar0 &= (uint32_t) PCIM_BAR_MEM_BASE;
mw_init = &t4_memwin[0];
} else {
/* T5+ use the relative offset inside the PCIe BAR */
bar0 = 0;
mw_init = &t5_memwin[0];
}
for (i = 0, mw = &sc->memwin[0]; i < NUM_MEMWIN; i++, mw_init++, mw++) {
rw_init(&mw->mw_lock, "memory window access");
mw->mw_base = mw_init->base;
mw->mw_aperture = mw_init->aperture;
mw->mw_curpos = 0;
t4_write_reg(sc,
PCIE_MEM_ACCESS_REG(A_PCIE_MEM_ACCESS_BASE_WIN, i),
(mw->mw_base + bar0) | V_BIR(0) |
V_WINDOW(ilog2(mw->mw_aperture) - 10));
rw_wlock(&mw->mw_lock);
position_memwin(sc, i, 0);
rw_wunlock(&mw->mw_lock);
}
/* flush */
t4_read_reg(sc, PCIE_MEM_ACCESS_REG(A_PCIE_MEM_ACCESS_BASE_WIN, 2));
}
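The `V_WINDOW(ilog2(mw->mw_aperture) - 10)` encoding above stores the aperture size as log2 minus 10, i.e. in units where 1KB encodes as 0. A small sketch of that encoding, assuming power-of-two apertures as the driver does:

```c
#include <assert.h>

/* Sketch of the window-size encoding in setup_memwin(): the
 * register field holds log2(aperture) - 10. */
static int
window_encoding(unsigned aperture)
{
	int lg = 0;

	while ((1u << lg) < aperture)
		lg++;
	return (lg - 10);
}
```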
/*
* Positions the memory window at the given address in the card's address space.
* There are some alignment requirements and the actual position may be at an
* address prior to the requested address. mw->mw_curpos always has the actual
* position of the window.
*/
static void
position_memwin(struct adapter *sc, int idx, uint32_t addr)
{
struct memwin *mw;
uint32_t pf;
uint32_t reg;
MPASS(idx >= 0 && idx < NUM_MEMWIN);
mw = &sc->memwin[idx];
rw_assert(&mw->mw_lock, RA_WLOCKED);
if (is_t4(sc)) {
pf = 0;
mw->mw_curpos = addr & ~0xf; /* start must be 16B aligned */
} else {
pf = V_PFNUM(sc->pf);
mw->mw_curpos = addr & ~0x7f; /* start must be 128B aligned */
}
reg = PCIE_MEM_ACCESS_REG(A_PCIE_MEM_ACCESS_OFFSET, idx);
t4_write_reg(sc, reg, mw->mw_curpos | pf);
t4_read_reg(sc, reg); /* flush */
}
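position_memwin() rounds the requested address down to the chip's alignment (16B on T4, 128B on T5+), which is why readers of `mw_curpos` must add back the offset `addr - mw_curpos`. A sketch of the alignment masks:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the rounding in position_memwin(): the window start is
 * aligned down, so the actual position may precede the request. */
static uint32_t
align_memwin(uint32_t addr, int is_t4_chip)
{
	return (is_t4_chip ? addr & ~0xfu : addr & ~0x7fu);
}
```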
int
rw_via_memwin(struct adapter *sc, int idx, uint32_t addr, uint32_t *val,
int len, int rw)
{
struct memwin *mw;
uint32_t mw_end, v;
MPASS(idx >= 0 && idx < NUM_MEMWIN);
/* Memory can only be accessed in naturally aligned 4 byte units */
if (addr & 3 || len & 3 || len <= 0)
return (EINVAL);
mw = &sc->memwin[idx];
while (len > 0) {
rw_rlock(&mw->mw_lock);
mw_end = mw->mw_curpos + mw->mw_aperture;
if (addr >= mw_end || addr < mw->mw_curpos) {
/* Will need to reposition the window */
if (!rw_try_upgrade(&mw->mw_lock)) {
rw_runlock(&mw->mw_lock);
rw_wlock(&mw->mw_lock);
}
rw_assert(&mw->mw_lock, RA_WLOCKED);
position_memwin(sc, idx, addr);
rw_downgrade(&mw->mw_lock);
mw_end = mw->mw_curpos + mw->mw_aperture;
}
rw_assert(&mw->mw_lock, RA_RLOCKED);
while (addr < mw_end && len > 0) {
if (rw == 0) {
v = t4_read_reg(sc, mw->mw_base + addr -
mw->mw_curpos);
*val++ = le32toh(v);
} else {
v = *val++;
t4_write_reg(sc, mw->mw_base + addr -
mw->mw_curpos, htole32(v));
}
addr += 4;
len -= 4;
}
rw_runlock(&mw->mw_lock);
}
return (0);
}
int
alloc_atid_tab(struct tid_info *t, int flags)
{
int i;
MPASS(t->natids > 0);
MPASS(t->atid_tab == NULL);
t->atid_tab = malloc(t->natids * sizeof(*t->atid_tab), M_CXGBE,
M_ZERO | flags);
if (t->atid_tab == NULL)
return (ENOMEM);
mtx_init(&t->atid_lock, "atid lock", NULL, MTX_DEF);
t->afree = t->atid_tab;
t->atids_in_use = 0;
for (i = 1; i < t->natids; i++)
t->atid_tab[i - 1].next = &t->atid_tab[i];
t->atid_tab[t->natids - 1].next = NULL;
return (0);
}
void
free_atid_tab(struct tid_info *t)
{
KASSERT(t->atids_in_use == 0,
("%s: %d atids still in use.", __func__, t->atids_in_use));
if (mtx_initialized(&t->atid_lock))
mtx_destroy(&t->atid_lock);
free(t->atid_tab, M_CXGBE);
t->atid_tab = NULL;
}
int
alloc_atid(struct adapter *sc, void *ctx)
{
struct tid_info *t = &sc->tids;
int atid = -1;
mtx_lock(&t->atid_lock);
if (t->afree) {
union aopen_entry *p = t->afree;
atid = p - t->atid_tab;
MPASS(atid <= M_TID_TID);
t->afree = p->next;
p->data = ctx;
t->atids_in_use++;
}
mtx_unlock(&t->atid_lock);
return (atid);
}
void *
lookup_atid(struct adapter *sc, int atid)
{
struct tid_info *t = &sc->tids;
return (t->atid_tab[atid].data);
}
void
free_atid(struct adapter *sc, int atid)
{
struct tid_info *t = &sc->tids;
union aopen_entry *p = &t->atid_tab[atid];
mtx_lock(&t->atid_lock);
p->next = t->afree;
t->afree = p;
t->atids_in_use--;
mtx_unlock(&t->atid_lock);
}
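The atid table above threads a free list through the table entries themselves (via `union aopen_entry`), so an atid is just the entry's index and alloc/free are O(1) pointer swaps under the lock. A standalone userspace sketch of the same structure (table size and entry layout are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the atid free list: free entries double as list links,
 * and the id handed out is the entry's index in the table. */
struct entry {
	struct entry *next;
};

static struct entry tab[4];
static struct entry *afree;

static void
init_tab(int n)
{
	int i;

	for (i = 1; i < n; i++)
		tab[i - 1].next = &tab[i];
	tab[n - 1].next = NULL;
	afree = &tab[0];
}

static int
alloc_id(void)
{
	struct entry *p = afree;

	if (p == NULL)
		return (-1);
	afree = p->next;
	return ((int)(p - tab));
}

static void
free_id(int id)
{
	tab[id].next = afree;
	afree = &tab[id];
}
```

Freed ids are pushed on the head of the list, so the most recently freed id is handed out first, just as in free_atid()/alloc_atid().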
static void
queue_tid_release(struct adapter *sc, int tid)
{
CXGBE_UNIMPLEMENTED("deferred tid release");
}
void
release_tid(struct adapter *sc, int tid, struct sge_wrq *ctrlq)
{
struct wrqe *wr;
struct cpl_tid_release *req;
wr = alloc_wrqe(sizeof(*req), ctrlq);
if (wr == NULL) {
queue_tid_release(sc, tid); /* defer */
return;
}
req = wrtod(wr);
INIT_TP_WR_MIT_CPL(req, CPL_TID_RELEASE, tid);
t4_wrq_tx(sc, wr);
}
static int
t4_range_cmp(const void *a, const void *b)
{
return ((const struct t4_range *)a)->start -
((const struct t4_range *)b)->start;
}
/*
* Verify that the memory range specified by the addr/len pair is valid within
* the card's address space.
*/
static int
validate_mem_range(struct adapter *sc, uint32_t addr, uint32_t len)
{
struct t4_range mem_ranges[4], *r, *next;
uint32_t em, addr_len;
int i, n, remaining;
/* Memory can only be accessed in naturally aligned 4 byte units */
if (addr & 3 || len & 3 || len == 0)
return (EINVAL);
/* Enabled memories */
em = t4_read_reg(sc, A_MA_TARGET_MEM_ENABLE);
r = &mem_ranges[0];
n = 0;
bzero(r, sizeof(mem_ranges));
if (em & F_EDRAM0_ENABLE) {
addr_len = t4_read_reg(sc, A_MA_EDRAM0_BAR);
r->size = G_EDRAM0_SIZE(addr_len) << 20;
if (r->size > 0) {
r->start = G_EDRAM0_BASE(addr_len) << 20;
if (addr >= r->start &&
addr + len <= r->start + r->size)
return (0);
r++;
n++;
}
}
if (em & F_EDRAM1_ENABLE) {
addr_len = t4_read_reg(sc, A_MA_EDRAM1_BAR);
r->size = G_EDRAM1_SIZE(addr_len) << 20;
if (r->size > 0) {
r->start = G_EDRAM1_BASE(addr_len) << 20;
if (addr >= r->start &&
addr + len <= r->start + r->size)
return (0);
r++;
n++;
}
}
if (em & F_EXT_MEM_ENABLE) {
addr_len = t4_read_reg(sc, A_MA_EXT_MEMORY_BAR);
r->size = G_EXT_MEM_SIZE(addr_len) << 20;
if (r->size > 0) {
r->start = G_EXT_MEM_BASE(addr_len) << 20;
if (addr >= r->start &&
addr + len <= r->start + r->size)
return (0);
r++;
n++;
}
}
if (is_t5(sc) && em & F_EXT_MEM1_ENABLE) {
addr_len = t4_read_reg(sc, A_MA_EXT_MEMORY1_BAR);
r->size = G_EXT_MEM1_SIZE(addr_len) << 20;
if (r->size > 0) {
r->start = G_EXT_MEM1_BASE(addr_len) << 20;
if (addr >= r->start &&
addr + len <= r->start + r->size)
return (0);
r++;
n++;
}
}
MPASS(n <= nitems(mem_ranges));
if (n > 1) {
/* Sort and merge the ranges. */
qsort(mem_ranges, n, sizeof(struct t4_range), t4_range_cmp);
/* Start from index 0 and examine the next n - 1 entries. */
r = &mem_ranges[0];
for (remaining = n - 1; remaining > 0; remaining--, r++) {
MPASS(r->size > 0); /* r is a valid entry. */
next = r + 1;
MPASS(next->size > 0); /* and so is the next one. */
while (r->start + r->size >= next->start) {
/* Merge the next one into the current entry. */
r->size = max(r->start + r->size,
next->start + next->size) - r->start;
n--; /* One fewer entry in total. */
if (--remaining == 0)
goto done; /* short circuit */
next++;
}
if (next != r + 1) {
/*
* Some entries were merged into r and next
* points to the first valid entry that couldn't
* be merged.
*/
MPASS(next->size > 0); /* must be valid */
memcpy(r + 1, next, remaining * sizeof(*r));
#ifdef INVARIANTS
/*
* This is so that the next->size assertion in the
* next iteration of the loop does the right
* thing for entries that were pulled up and are
* no longer valid.
*/
MPASS(n < nitems(mem_ranges));
bzero(&mem_ranges[n], (nitems(mem_ranges) - n) *
sizeof(struct t4_range));
#endif
}
}
done:
/* Done merging the ranges. */
MPASS(n > 0);
r = &mem_ranges[0];
for (i = 0; i < n; i++, r++) {
if (addr >= r->start &&
addr + len <= r->start + r->size)
return (0);
}
}
return (EFAULT);
}
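The sort-and-merge pass in validate_mem_range() collapses overlapping or adjacent enabled-memory ranges before the final containment check. A compact userspace sketch of that merge, using the same `[start, start + size)` convention (the in-place compaction differs from the driver's memcpy-based version but produces the same merged set):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct range {
	uint32_t start;
	uint32_t size;
};

static int
range_cmp(const void *a, const void *b)
{
	return ((const struct range *)a)->start -
	    ((const struct range *)b)->start;
}

/* Sketch of the merge in validate_mem_range(): sort by start, then
 * fold each range that touches the previous one into it. Returns
 * the number of ranges left. */
static int
merge_ranges(struct range *r, int n)
{
	int i, m = 0;

	qsort(r, n, sizeof(*r), range_cmp);
	for (i = 1; i < n; i++) {
		if (r[m].start + r[m].size >= r[i].start) {
			uint32_t end = r[m].start + r[m].size;
			uint32_t iend = r[i].start + r[i].size;

			r[m].size = (iend > end ? iend : end) - r[m].start;
		} else
			r[++m] = r[i];
	}
	return (m + 1);
}
```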
static int
fwmtype_to_hwmtype(int mtype)
{
switch (mtype) {
case FW_MEMTYPE_EDC0:
return (MEM_EDC0);
case FW_MEMTYPE_EDC1:
return (MEM_EDC1);
case FW_MEMTYPE_EXTMEM:
return (MEM_MC0);
case FW_MEMTYPE_EXTMEM1:
return (MEM_MC1);
default:
panic("%s: cannot translate fw mtype %d.", __func__, mtype);
}
}
/*
* Verify that the memory range specified by the memtype/offset/len pair is
* valid and lies entirely within the memtype specified. The global address of
* the start of the range is returned in addr.
*/
static int
validate_mt_off_len(struct adapter *sc, int mtype, uint32_t off, uint32_t len,
uint32_t *addr)
{
uint32_t em, addr_len, maddr;
/* Memory can only be accessed in naturally aligned 4 byte units */
if (off & 3 || len & 3 || len == 0)
return (EINVAL);
em = t4_read_reg(sc, A_MA_TARGET_MEM_ENABLE);
switch (fwmtype_to_hwmtype(mtype)) {
case MEM_EDC0:
if (!(em & F_EDRAM0_ENABLE))
return (EINVAL);
addr_len = t4_read_reg(sc, A_MA_EDRAM0_BAR);
maddr = G_EDRAM0_BASE(addr_len) << 20;
break;
case MEM_EDC1:
if (!(em & F_EDRAM1_ENABLE))
return (EINVAL);
addr_len = t4_read_reg(sc, A_MA_EDRAM1_BAR);
maddr = G_EDRAM1_BASE(addr_len) << 20;
break;
case MEM_MC:
if (!(em & F_EXT_MEM_ENABLE))
return (EINVAL);
addr_len = t4_read_reg(sc, A_MA_EXT_MEMORY_BAR);
maddr = G_EXT_MEM_BASE(addr_len) << 20;
break;
case MEM_MC1:
if (!is_t5(sc) || !(em & F_EXT_MEM1_ENABLE))
return (EINVAL);
addr_len = t4_read_reg(sc, A_MA_EXT_MEMORY1_BAR);
maddr = G_EXT_MEM1_BASE(addr_len) << 20;
break;
default:
return (EINVAL);
}
*addr = maddr + off; /* global address */
return (validate_mem_range(sc, *addr, len));
}
static int
fixup_devlog_params(struct adapter *sc)
{
struct devlog_params *dparams = &sc->params.devlog;
int rc;
rc = validate_mt_off_len(sc, dparams->memtype, dparams->start,
dparams->size, &dparams->addr);
return (rc);
}
static void
update_nirq(struct intrs_and_queues *iaq, int nports)
{
int extra = T4_EXTRA_INTR;
iaq->nirq = extra;
iaq->nirq += nports * (iaq->nrxq + iaq->nofldrxq);
iaq->nirq += nports * (iaq->num_vis - 1) *
max(iaq->nrxq_vi, iaq->nnmrxq_vi);
iaq->nirq += nports * (iaq->num_vis - 1) * iaq->nofldrxq_vi;
}
/*
* Adjust requirements to fit the number of interrupts available.
*/
static void
calculate_iaq(struct adapter *sc, struct intrs_and_queues *iaq, int itype,
int navail)
{
int old_nirq;
const int nports = sc->params.nports;
MPASS(nports > 0);
MPASS(navail > 0);
bzero(iaq, sizeof(*iaq));
iaq->intr_type = itype;
iaq->num_vis = t4_num_vis;
iaq->ntxq = t4_ntxq;
iaq->ntxq_vi = t4_ntxq_vi;
iaq->nrxq = t4_nrxq;
iaq->nrxq_vi = t4_nrxq_vi;
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
if (is_offload(sc) || is_ethoffload(sc)) {
iaq->nofldtxq = t4_nofldtxq;
iaq->nofldtxq_vi = t4_nofldtxq_vi;
}
#endif
#ifdef TCP_OFFLOAD
if (is_offload(sc)) {
iaq->nofldrxq = t4_nofldrxq;
iaq->nofldrxq_vi = t4_nofldrxq_vi;
}
#endif
#ifdef DEV_NETMAP
iaq->nnmtxq_vi = t4_nnmtxq_vi;
iaq->nnmrxq_vi = t4_nnmrxq_vi;
#endif
update_nirq(iaq, nports);
if (iaq->nirq <= navail &&
(itype != INTR_MSI || powerof2(iaq->nirq))) {
/*
* This is the normal case -- there are enough interrupts for
* everything.
*/
goto done;
}
/*
* If extra VIs have been configured, try reducing their count and see if
* that works.
*/
while (iaq->num_vis > 1) {
iaq->num_vis--;
update_nirq(iaq, nports);
if (iaq->nirq <= navail &&
(itype != INTR_MSI || powerof2(iaq->nirq))) {
device_printf(sc->dev, "virtual interfaces per port "
"reduced to %d from %d. nrxq=%u, nofldrxq=%u, "
"nrxq_vi=%u nofldrxq_vi=%u, nnmrxq_vi=%u. "
"itype %d, navail %u, nirq %d.\n",
iaq->num_vis, t4_num_vis, iaq->nrxq, iaq->nofldrxq,
iaq->nrxq_vi, iaq->nofldrxq_vi, iaq->nnmrxq_vi,
itype, navail, iaq->nirq);
goto done;
}
}
/*
* Extra VIs will not be created. Log a message if they were requested.
*/
MPASS(iaq->num_vis == 1);
iaq->ntxq_vi = iaq->nrxq_vi = 0;
iaq->nofldtxq_vi = iaq->nofldrxq_vi = 0;
iaq->nnmtxq_vi = iaq->nnmrxq_vi = 0;
if (iaq->num_vis != t4_num_vis) {
device_printf(sc->dev, "extra virtual interfaces disabled. "
"nrxq=%u, nofldrxq=%u, nrxq_vi=%u nofldrxq_vi=%u, "
"nnmrxq_vi=%u. itype %d, navail %u, nirq %d.\n",
iaq->nrxq, iaq->nofldrxq, iaq->nrxq_vi, iaq->nofldrxq_vi,
iaq->nnmrxq_vi, itype, navail, iaq->nirq);
}
/*
* Keep reducing the number of NIC rx queues to the next lower power of
* 2 (for even RSS distribution), halving the TOE rx queues as well, and
* see if that works.
*/
do {
if (iaq->nrxq > 1) {
do {
iaq->nrxq--;
} while (!powerof2(iaq->nrxq));
}
if (iaq->nofldrxq > 1)
iaq->nofldrxq >>= 1;
old_nirq = iaq->nirq;
update_nirq(iaq, nports);
if (iaq->nirq <= navail &&
(itype != INTR_MSI || powerof2(iaq->nirq))) {
device_printf(sc->dev, "running with reduced number of "
"rx queues because of shortage of interrupts. "
"nrxq=%u, nofldrxq=%u. "
"itype %d, navail %u, nirq %d.\n", iaq->nrxq,
iaq->nofldrxq, itype, navail, iaq->nirq);
goto done;
}
} while (old_nirq != iaq->nirq);
/* One interrupt for everything. Ugh. */
device_printf(sc->dev, "running with minimal number of queues. "
"itype %d, navail %u.\n", itype, navail);
iaq->nirq = 1;
MPASS(iaq->nrxq == 1);
iaq->ntxq = 1;
if (iaq->nofldrxq > 0)
iaq->nofldtxq = 1;
done:
MPASS(iaq->num_vis > 0);
if (iaq->num_vis > 1) {
MPASS(iaq->nrxq_vi > 0);
MPASS(iaq->ntxq_vi > 0);
}
MPASS(iaq->nirq > 0);
MPASS(iaq->nrxq > 0);
MPASS(iaq->ntxq > 0);
if (itype == INTR_MSI) {
MPASS(powerof2(iaq->nirq));
}
}
static int
cfg_itype_and_nqueues(struct adapter *sc, struct intrs_and_queues *iaq)
{
int rc, itype, navail, nalloc;
for (itype = INTR_MSIX; itype; itype >>= 1) {
if ((itype & t4_intr_types) == 0)
continue; /* not allowed */
if (itype == INTR_MSIX)
navail = pci_msix_count(sc->dev);
else if (itype == INTR_MSI)
navail = pci_msi_count(sc->dev);
else
navail = 1;
restart:
if (navail == 0)
continue;
calculate_iaq(sc, iaq, itype, navail);
nalloc = iaq->nirq;
rc = 0;
if (itype == INTR_MSIX)
rc = pci_alloc_msix(sc->dev, &nalloc);
else if (itype == INTR_MSI)
rc = pci_alloc_msi(sc->dev, &nalloc);
if (rc == 0 && nalloc > 0) {
if (nalloc == iaq->nirq)
return (0);
/*
* Didn't get the number requested. Use whatever number
* the kernel is willing to allocate.
*/
device_printf(sc->dev, "fewer vectors than requested, "
"type=%d, req=%d, rcvd=%d; will downshift req.\n",
itype, iaq->nirq, nalloc);
pci_release_msi(sc->dev);
navail = nalloc;
goto restart;
}
device_printf(sc->dev,
"failed to allocate vectors: %d, type=%d, req=%d, rcvd=%d\n",
rc, itype, iaq->nirq, nalloc);
}
device_printf(sc->dev,
"failed to find a usable interrupt type. "
"allowed=%d, msi-x=%d, msi=%d, intx=1\n", t4_intr_types,
pci_msix_count(sc->dev), pci_msi_count(sc->dev));
return (ENXIO);
}
#define FW_VERSION(chip) ( \
V_FW_HDR_FW_VER_MAJOR(chip##FW_VERSION_MAJOR) | \
V_FW_HDR_FW_VER_MINOR(chip##FW_VERSION_MINOR) | \
V_FW_HDR_FW_VER_MICRO(chip##FW_VERSION_MICRO) | \
V_FW_HDR_FW_VER_BUILD(chip##FW_VERSION_BUILD))
#define FW_INTFVER(chip, intf) (chip##FW_HDR_INTFVER_##intf)
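/*
 * Example (illustrative, assuming the usual t4fw_interface.h layout
 * where MAJOR occupies bits 31:24, MINOR 23:16, MICRO 15:8, and
 * BUILD 7:0): a bundled firmware version 1.20.1.0 would pack as
 * 0x01140100, and FW_INTFVER(T5, NIC) expands to T5FW_HDR_INTFVER_NIC
 * via token pasting.
 */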
/* Just enough of fw_hdr to cover all version info. */
struct fw_h {
__u8 ver;
__u8 chip;
__be16 len512;
__be32 fw_ver;
__be32 tp_microcode_ver;
__u8 intfver_nic;
__u8 intfver_vnic;
__u8 intfver_ofld;
__u8 intfver_ri;
__u8 intfver_iscsipdu;
__u8 intfver_iscsi;
__u8 intfver_fcoepdu;
__u8 intfver_fcoe;
};
/* Spot check a couple of fields. */
CTASSERT(offsetof(struct fw_h, fw_ver) == offsetof(struct fw_hdr, fw_ver));
CTASSERT(offsetof(struct fw_h, intfver_nic) == offsetof(struct fw_hdr, intfver_nic));
CTASSERT(offsetof(struct fw_h, intfver_fcoe) == offsetof(struct fw_hdr, intfver_fcoe));
struct fw_info {
uint8_t chip;
char *kld_name;
char *fw_mod_name;
struct fw_h fw_h;
} fw_info[] = {
{
.chip = CHELSIO_T4,
.kld_name = "t4fw_cfg",
.fw_mod_name = "t4fw",
.fw_h = {
.chip = FW_HDR_CHIP_T4,
.fw_ver = htobe32(FW_VERSION(T4)),
.intfver_nic = FW_INTFVER(T4, NIC),
.intfver_vnic = FW_INTFVER(T4, VNIC),
.intfver_ofld = FW_INTFVER(T4, OFLD),
.intfver_ri = FW_INTFVER(T4, RI),
.intfver_iscsipdu = FW_INTFVER(T4, ISCSIPDU),
.intfver_iscsi = FW_INTFVER(T4, ISCSI),
.intfver_fcoepdu = FW_INTFVER(T4, FCOEPDU),
.intfver_fcoe = FW_INTFVER(T4, FCOE),
},
}, {
.chip = CHELSIO_T5,
.kld_name = "t5fw_cfg",
.fw_mod_name = "t5fw",
.fw_h = {
.chip = FW_HDR_CHIP_T5,
.fw_ver = htobe32(FW_VERSION(T5)),
.intfver_nic = FW_INTFVER(T5, NIC),
.intfver_vnic = FW_INTFVER(T5, VNIC),
.intfver_ofld = FW_INTFVER(T5, OFLD),
.intfver_ri = FW_INTFVER(T5, RI),
.intfver_iscsipdu = FW_INTFVER(T5, ISCSIPDU),
.intfver_iscsi = FW_INTFVER(T5, ISCSI),
.intfver_fcoepdu = FW_INTFVER(T5, FCOEPDU),
.intfver_fcoe = FW_INTFVER(T5, FCOE),
},
}, {
.chip = CHELSIO_T6,
.kld_name = "t6fw_cfg",
.fw_mod_name = "t6fw",
.fw_h = {
.chip = FW_HDR_CHIP_T6,
.fw_ver = htobe32(FW_VERSION(T6)),
.intfver_nic = FW_INTFVER(T6, NIC),
.intfver_vnic = FW_INTFVER(T6, VNIC),
.intfver_ofld = FW_INTFVER(T6, OFLD),
.intfver_ri = FW_INTFVER(T6, RI),
.intfver_iscsipdu = FW_INTFVER(T6, ISCSIPDU),
.intfver_iscsi = FW_INTFVER(T6, ISCSI),
.intfver_fcoepdu = FW_INTFVER(T6, FCOEPDU),
.intfver_fcoe = FW_INTFVER(T6, FCOE),
},
}
};
static struct fw_info *
find_fw_info(int chip)
{
int i;
for (i = 0; i < nitems(fw_info); i++) {
if (fw_info[i].chip == chip)
return (&fw_info[i]);
}
return (NULL);
}
/*
* Is the given firmware API compatible with the one the driver was compiled
* with?
*/
static int
fw_compatible(const struct fw_h *hdr1, const struct fw_h *hdr2)
{
/* short circuit if it's the exact same firmware version */
if (hdr1->chip == hdr2->chip && hdr1->fw_ver == hdr2->fw_ver)
return (1);
/*
* XXX: Is this too conservative? Perhaps I should limit this to the
* features that are supported in the driver.
*/
#define SAME_INTF(x) (hdr1->intfver_##x == hdr2->intfver_##x)
if (hdr1->chip == hdr2->chip && SAME_INTF(nic) && SAME_INTF(vnic) &&
SAME_INTF(ofld) && SAME_INTF(ri) && SAME_INTF(iscsipdu) &&
SAME_INTF(iscsi) && SAME_INTF(fcoepdu) && SAME_INTF(fcoe))
return (1);
#undef SAME_INTF
return (0);
}
static int
load_fw_module(struct adapter *sc, const struct firmware **dcfg,
const struct firmware **fw)
{
struct fw_info *fw_info;
*dcfg = NULL;
if (fw != NULL)
*fw = NULL;
fw_info = find_fw_info(chip_id(sc));
if (fw_info == NULL) {
device_printf(sc->dev,
"unable to look up firmware information for chip %d.\n",
chip_id(sc));
return (EINVAL);
}
*dcfg = firmware_get(fw_info->kld_name);
if (*dcfg != NULL) {
if (fw != NULL)
*fw = firmware_get(fw_info->fw_mod_name);
return (0);
}
return (ENOENT);
}
static void
unload_fw_module(struct adapter *sc, const struct firmware *dcfg,
const struct firmware *fw)
{
if (fw != NULL)
firmware_put(fw, FIRMWARE_UNLOAD);
if (dcfg != NULL)
firmware_put(dcfg, FIRMWARE_UNLOAD);
}
/*
* Return values:
* 0 means no firmware install attempted.
* ERESTART means a firmware install was attempted and was successful.
* A positive errno means a firmware install was attempted but failed.
*/
static int
install_kld_firmware(struct adapter *sc, struct fw_h *card_fw,
const struct fw_h *drv_fw, const char *reason, int *already)
{
const struct firmware *cfg, *fw;
const uint32_t c = be32toh(card_fw->fw_ver);
uint32_t d, k;
int rc, fw_install;
struct fw_h bundled_fw;
bool load_attempted;
cfg = fw = NULL;
load_attempted = false;
fw_install = t4_fw_install < 0 ? -t4_fw_install : t4_fw_install;
if (reason != NULL)
goto install;
if ((sc->flags & FW_OK) == 0) {
if (c == 0xffffffff) {
reason = "missing";
goto install;
}
return (0);
}
memcpy(&bundled_fw, drv_fw, sizeof(bundled_fw));
if (t4_fw_install < 0) {
rc = load_fw_module(sc, &cfg, &fw);
if (rc != 0 || fw == NULL) {
device_printf(sc->dev,
"failed to load firmware module: %d. cfg %p, fw %p;"
" will use compiled-in firmware version for "
"hw.cxgbe.fw_install checks.\n",
rc, cfg, fw);
} else {
memcpy(&bundled_fw, fw->data, sizeof(bundled_fw));
}
load_attempted = true;
}
d = be32toh(bundled_fw.fw_ver);
if (!fw_compatible(card_fw, &bundled_fw)) {
reason = "incompatible or unusable";
goto install;
}
if (d > c) {
reason = "older than the version bundled with this driver";
goto install;
}
if (fw_install == 2 && d != c) {
reason = "different than the version bundled with this driver";
goto install;
}
/* No reason to do anything to the firmware already on the card. */
rc = 0;
goto done;
install:
rc = 0;
if ((*already)++)
goto done;
if (fw_install == 0) {
device_printf(sc->dev, "firmware on card (%u.%u.%u.%u) is %s, "
"but the driver is prohibited from installing a firmware "
"on the card.\n",
G_FW_HDR_FW_VER_MAJOR(c), G_FW_HDR_FW_VER_MINOR(c),
G_FW_HDR_FW_VER_MICRO(c), G_FW_HDR_FW_VER_BUILD(c), reason);
goto done;
}
/*
* We'll attempt to install a firmware. Load the module first (if it
* hasn't been loaded already).
*/
if (!load_attempted) {
rc = load_fw_module(sc, &cfg, &fw);
if (rc != 0 || fw == NULL) {
device_printf(sc->dev,
"failed to load firmware module: %d. cfg %p, fw %p\n",
rc, cfg, fw);
/* carry on */
}
}
if (fw == NULL) {
device_printf(sc->dev, "firmware on card (%u.%u.%u.%u) is %s, "
"but the driver cannot take corrective action because it "
"is unable to load the firmware module.\n",
G_FW_HDR_FW_VER_MAJOR(c), G_FW_HDR_FW_VER_MINOR(c),
G_FW_HDR_FW_VER_MICRO(c), G_FW_HDR_FW_VER_BUILD(c), reason);
rc = sc->flags & FW_OK ? 0 : ENOENT;
goto done;
}
k = be32toh(((const struct fw_hdr *)fw->data)->fw_ver);
if (k != d) {
MPASS(t4_fw_install > 0);
device_printf(sc->dev,
"firmware in KLD (%u.%u.%u.%u) is not what the driver was "
"expecting (%u.%u.%u.%u) and will not be used.\n",
G_FW_HDR_FW_VER_MAJOR(k), G_FW_HDR_FW_VER_MINOR(k),
G_FW_HDR_FW_VER_MICRO(k), G_FW_HDR_FW_VER_BUILD(k),
G_FW_HDR_FW_VER_MAJOR(d), G_FW_HDR_FW_VER_MINOR(d),
G_FW_HDR_FW_VER_MICRO(d), G_FW_HDR_FW_VER_BUILD(d));
rc = sc->flags & FW_OK ? 0 : EINVAL;
goto done;
}
device_printf(sc->dev, "firmware on card (%u.%u.%u.%u) is %s, "
"installing firmware %u.%u.%u.%u on card.\n",
G_FW_HDR_FW_VER_MAJOR(c), G_FW_HDR_FW_VER_MINOR(c),
G_FW_HDR_FW_VER_MICRO(c), G_FW_HDR_FW_VER_BUILD(c), reason,
G_FW_HDR_FW_VER_MAJOR(d), G_FW_HDR_FW_VER_MINOR(d),
G_FW_HDR_FW_VER_MICRO(d), G_FW_HDR_FW_VER_BUILD(d));
rc = -t4_fw_upgrade(sc, sc->mbox, fw->data, fw->datasize, 0);
if (rc != 0) {
device_printf(sc->dev, "failed to install firmware: %d\n", rc);
} else {
/* Installed successfully, update the cached header too. */
rc = ERESTART;
memcpy(card_fw, fw->data, sizeof(*card_fw));
}
done:
unload_fw_module(sc, cfg, fw);
return (rc);
}
/*
* Establish contact with the firmware and attempt to become the master driver.
*
* A firmware will be installed to the card if needed (if the driver is allowed
* to do so).
*/
static int
contact_firmware(struct adapter *sc)
{
int rc, already = 0;
enum dev_state state;
struct fw_info *fw_info;
struct fw_hdr *card_fw; /* fw on the card */
const struct fw_h *drv_fw;
fw_info = find_fw_info(chip_id(sc));
if (fw_info == NULL) {
device_printf(sc->dev,
"unable to look up firmware information for chip %d.\n",
chip_id(sc));
return (EINVAL);
}
drv_fw = &fw_info->fw_h;
/* Read the header of the firmware on the card */
card_fw = malloc(sizeof(*card_fw), M_CXGBE, M_ZERO | M_WAITOK);
restart:
rc = -t4_get_fw_hdr(sc, card_fw);
if (rc != 0) {
device_printf(sc->dev,
"unable to read firmware header from card's flash: %d\n",
rc);
goto done;
}
rc = install_kld_firmware(sc, (struct fw_h *)card_fw, drv_fw, NULL,
&already);
if (rc == ERESTART)
goto restart;
if (rc != 0)
goto done;
rc = t4_fw_hello(sc, sc->mbox, sc->mbox, MASTER_MAY, &state);
if (rc < 0 || state == DEV_STATE_ERR) {
rc = -rc;
device_printf(sc->dev,
"failed to connect to the firmware: %d, %d. "
"PCIE_FW 0x%08x\n", rc, state, t4_read_reg(sc, A_PCIE_FW));
#if 0
if (install_kld_firmware(sc, (struct fw_h *)card_fw, drv_fw,
"not responding properly to HELLO", &already) == ERESTART)
goto restart;
#endif
goto done;
}
MPASS(be32toh(card_fw->flags) & FW_HDR_FLAGS_RESET_HALT);
sc->flags |= FW_OK; /* The firmware responded to the FW_HELLO. */
if (rc == sc->pf) {
sc->flags |= MASTER_PF;
rc = install_kld_firmware(sc, (struct fw_h *)card_fw, drv_fw,
NULL, &already);
if (rc == ERESTART)
rc = 0;
else if (rc != 0)
goto done;
} else if (state == DEV_STATE_UNINIT) {
/*
* We didn't get to be the master, so we definitely won't be
* configuring the chip. It's a bug if someone else hasn't
* configured it already.
*/
device_printf(sc->dev, "couldn't be master(%d), "
"device not already initialized either(%d). "
"PCIE_FW 0x%08x\n", rc, state, t4_read_reg(sc, A_PCIE_FW));
rc = EPROTO;
goto done;
} else {
/*
* Some other PF is the master and has configured the chip.
* This is allowed but untested.
*/
device_printf(sc->dev, "PF%d is master, device state %d. "
"PCIE_FW 0x%08x\n", rc, state, t4_read_reg(sc, A_PCIE_FW));
snprintf(sc->cfg_file, sizeof(sc->cfg_file), "pf%d", rc);
sc->cfcsum = 0;
rc = 0;
}
done:
if (rc != 0 && sc->flags & FW_OK) {
t4_fw_bye(sc, sc->mbox);
sc->flags &= ~FW_OK;
}
free(card_fw, M_CXGBE);
return (rc);
}
static int
copy_cfg_file_to_card(struct adapter *sc, char *cfg_file,
uint32_t mtype, uint32_t moff)
{
struct fw_info *fw_info;
const struct firmware *dcfg, *rcfg = NULL;
const uint32_t *cfdata;
uint32_t cflen, addr;
int rc;
load_fw_module(sc, &dcfg, NULL);
/* Card-specific interpretation of "default". */
if (strncmp(cfg_file, DEFAULT_CF, sizeof(t4_cfg_file)) == 0) {
if (pci_get_device(sc->dev) == 0x440a)
snprintf(cfg_file, sizeof(t4_cfg_file), UWIRE_CF);
if (is_fpga(sc))
snprintf(cfg_file, sizeof(t4_cfg_file), FPGA_CF);
}
if (strncmp(cfg_file, DEFAULT_CF, sizeof(t4_cfg_file)) == 0) {
if (dcfg == NULL) {
device_printf(sc->dev,
"KLD with default config is not available.\n");
rc = ENOENT;
goto done;
}
cfdata = dcfg->data;
cflen = dcfg->datasize & ~3;
} else {
char s[32];
fw_info = find_fw_info(chip_id(sc));
if (fw_info == NULL) {
device_printf(sc->dev,
"unable to look up firmware information for chip %d.\n",
chip_id(sc));
rc = EINVAL;
goto done;
}
snprintf(s, sizeof(s), "%s_%s", fw_info->kld_name, cfg_file);
rcfg = firmware_get(s);
if (rcfg == NULL) {
device_printf(sc->dev,
"unable to load module \"%s\" for configuration "
"profile \"%s\".\n", s, cfg_file);
rc = ENOENT;
goto done;
}
cfdata = rcfg->data;
cflen = rcfg->datasize & ~3;
}
if (cflen > FLASH_CFG_MAX_SIZE) {
device_printf(sc->dev,
"config file too long (%d, max allowed is %d).\n",
cflen, FLASH_CFG_MAX_SIZE);
rc = EINVAL;
goto done;
}
rc = validate_mt_off_len(sc, mtype, moff, cflen, &addr);
if (rc != 0) {
device_printf(sc->dev,
"%s: addr (%d/0x%x) or len %d is not valid: %d.\n",
__func__, mtype, moff, cflen, rc);
rc = EINVAL;
goto done;
}
write_via_memwin(sc, 2, addr, cfdata, cflen);
done:
if (rcfg != NULL)
firmware_put(rcfg, FIRMWARE_UNLOAD);
unload_fw_module(sc, dcfg, NULL);
return (rc);
}
struct caps_allowed {
uint16_t nbmcaps;
uint16_t linkcaps;
uint16_t switchcaps;
uint16_t niccaps;
uint16_t toecaps;
uint16_t rdmacaps;
uint16_t cryptocaps;
uint16_t iscsicaps;
uint16_t fcoecaps;
};
#define FW_PARAM_DEV(param) \
(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
#define FW_PARAM_PFVF(param) \
(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param))
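/*
 * Sketch of intended usage (mirrors the queries later in this file):
 * FW_PARAM_DEV(PORTVEC) builds a 32-bit parameter id with the DEV
 * mnemonic in the mnem field and FW_PARAMS_PARAM_DEV_PORTVEC in the
 * param-X field, suitable for a t4_query_params() call, e.g.
 *
 * param[0] = FW_PARAM_DEV(PORTVEC);
 * rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, param, val);
 */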
/*
* Provide a configuration profile to the firmware and have it initialize the
* chip accordingly. This may involve uploading a configuration file to the
* card.
*/
static int
apply_cfg_and_initialize(struct adapter *sc, char *cfg_file,
const struct caps_allowed *caps_allowed)
{
int rc;
struct fw_caps_config_cmd caps;
uint32_t mtype, moff, finicsum, cfcsum, param, val;
rc = -t4_fw_reset(sc, sc->mbox, F_PIORSTMODE | F_PIORST);
if (rc != 0) {
device_printf(sc->dev, "firmware reset failed: %d.\n", rc);
return (rc);
}
bzero(&caps, sizeof(caps));
caps.op_to_write = htobe32(V_FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ);
if (strncmp(cfg_file, BUILTIN_CF, sizeof(t4_cfg_file)) == 0) {
mtype = 0;
moff = 0;
caps.cfvalid_to_len16 = htobe32(FW_LEN16(caps));
} else if (strncmp(cfg_file, FLASH_CF, sizeof(t4_cfg_file)) == 0) {
mtype = FW_MEMTYPE_FLASH;
moff = t4_flash_cfg_addr(sc);
caps.cfvalid_to_len16 = htobe32(F_FW_CAPS_CONFIG_CMD_CFVALID |
V_FW_CAPS_CONFIG_CMD_MEMTYPE_CF(mtype) |
V_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF(moff >> 16) |
FW_LEN16(caps));
} else {
/*
* Ask the firmware where it wants us to upload the config file.
*/
param = FW_PARAM_DEV(CF);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, &param, &val);
if (rc != 0) {
/* No support for config file? Shouldn't happen. */
device_printf(sc->dev,
"failed to query config file location: %d.\n", rc);
goto done;
}
mtype = G_FW_PARAMS_PARAM_Y(val);
moff = G_FW_PARAMS_PARAM_Z(val) << 16;
caps.cfvalid_to_len16 = htobe32(F_FW_CAPS_CONFIG_CMD_CFVALID |
V_FW_CAPS_CONFIG_CMD_MEMTYPE_CF(mtype) |
V_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF(moff >> 16) |
FW_LEN16(caps));
rc = copy_cfg_file_to_card(sc, cfg_file, mtype, moff);
if (rc != 0) {
device_printf(sc->dev,
"failed to upload config file to card: %d.\n", rc);
goto done;
}
}
rc = -t4_wr_mbox(sc, sc->mbox, &caps, sizeof(caps), &caps);
if (rc != 0) {
device_printf(sc->dev, "failed to pre-process config file: %d "
"(mtype %d, moff 0x%x).\n", rc, mtype, moff);
goto done;
}
finicsum = be32toh(caps.finicsum);
cfcsum = be32toh(caps.cfcsum); /* actual */
if (finicsum != cfcsum) {
device_printf(sc->dev,
"WARNING: config file checksum mismatch: %08x %08x\n",
finicsum, cfcsum);
}
sc->cfcsum = cfcsum;
snprintf(sc->cfg_file, sizeof(sc->cfg_file), "%s", cfg_file);
/*
* Let the firmware know what features will (not) be used so it can tune
* things accordingly.
*/
#define LIMIT_CAPS(x) do { \
caps.x##caps &= htobe16(caps_allowed->x##caps); \
} while (0)
LIMIT_CAPS(nbm);
LIMIT_CAPS(link);
LIMIT_CAPS(switch);
LIMIT_CAPS(nic);
LIMIT_CAPS(toe);
LIMIT_CAPS(rdma);
LIMIT_CAPS(crypto);
LIMIT_CAPS(iscsi);
LIMIT_CAPS(fcoe);
#undef LIMIT_CAPS
if (caps.niccaps & htobe16(FW_CAPS_CONFIG_NIC_HASHFILTER)) {
/*
* TOE and hashfilters are mutually exclusive. It is a config
* file or firmware bug if both are reported as available. Try
* to cope with the situation in non-debug builds by disabling
* TOE.
*/
MPASS(caps.toecaps == 0);
caps.toecaps = 0;
caps.rdmacaps = 0;
caps.iscsicaps = 0;
}
caps.op_to_write = htobe32(V_FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_WRITE);
caps.cfvalid_to_len16 = htobe32(FW_LEN16(caps));
rc = -t4_wr_mbox(sc, sc->mbox, &caps, sizeof(caps), NULL);
if (rc != 0) {
device_printf(sc->dev,
"failed to process config file: %d.\n", rc);
goto done;
}
t4_tweak_chip_settings(sc);
/* get basic stuff going */
rc = -t4_fw_initialize(sc, sc->mbox);
if (rc != 0) {
device_printf(sc->dev, "fw_initialize failed: %d.\n", rc);
goto done;
}
done:
return (rc);
}
/*
* Partition chip resources for use between various PFs, VFs, etc.
*/
static int
partition_resources(struct adapter *sc)
{
char cfg_file[sizeof(t4_cfg_file)];
struct caps_allowed caps_allowed;
int rc;
bool fallback;
/* Only the master driver gets to configure the chip resources. */
MPASS(sc->flags & MASTER_PF);
#define COPY_CAPS(x) do { \
caps_allowed.x##caps = t4_##x##caps_allowed; \
} while (0)
bzero(&caps_allowed, sizeof(caps_allowed));
COPY_CAPS(nbm);
COPY_CAPS(link);
COPY_CAPS(switch);
COPY_CAPS(nic);
COPY_CAPS(toe);
COPY_CAPS(rdma);
COPY_CAPS(crypto);
COPY_CAPS(iscsi);
COPY_CAPS(fcoe);
fallback = sc->debug_flags & DF_DISABLE_CFG_RETRY ? false : true;
snprintf(cfg_file, sizeof(cfg_file), "%s", t4_cfg_file);
retry:
rc = apply_cfg_and_initialize(sc, cfg_file, &caps_allowed);
if (rc != 0 && fallback) {
device_printf(sc->dev,
"failed (%d) to configure card with \"%s\" profile, "
"will fall back to a basic configuration and retry.\n",
rc, cfg_file);
snprintf(cfg_file, sizeof(cfg_file), "%s", BUILTIN_CF);
bzero(&caps_allowed, sizeof(caps_allowed));
COPY_CAPS(nbm);
COPY_CAPS(link);
COPY_CAPS(switch);
COPY_CAPS(nic);
fallback = false;
goto retry;
}
#undef COPY_CAPS
return (rc);
}
/*
* Retrieve parameters that are needed (or nice to have) very early.
*/
static int
get_params__pre_init(struct adapter *sc)
{
int rc;
uint32_t param[2], val[2];
t4_get_version_info(sc);
snprintf(sc->fw_version, sizeof(sc->fw_version), "%u.%u.%u.%u",
G_FW_HDR_FW_VER_MAJOR(sc->params.fw_vers),
G_FW_HDR_FW_VER_MINOR(sc->params.fw_vers),
G_FW_HDR_FW_VER_MICRO(sc->params.fw_vers),
G_FW_HDR_FW_VER_BUILD(sc->params.fw_vers));
snprintf(sc->bs_version, sizeof(sc->bs_version), "%u.%u.%u.%u",
G_FW_HDR_FW_VER_MAJOR(sc->params.bs_vers),
G_FW_HDR_FW_VER_MINOR(sc->params.bs_vers),
G_FW_HDR_FW_VER_MICRO(sc->params.bs_vers),
G_FW_HDR_FW_VER_BUILD(sc->params.bs_vers));
snprintf(sc->tp_version, sizeof(sc->tp_version), "%u.%u.%u.%u",
G_FW_HDR_FW_VER_MAJOR(sc->params.tp_vers),
G_FW_HDR_FW_VER_MINOR(sc->params.tp_vers),
G_FW_HDR_FW_VER_MICRO(sc->params.tp_vers),
G_FW_HDR_FW_VER_BUILD(sc->params.tp_vers));
snprintf(sc->er_version, sizeof(sc->er_version), "%u.%u.%u.%u",
G_FW_HDR_FW_VER_MAJOR(sc->params.er_vers),
G_FW_HDR_FW_VER_MINOR(sc->params.er_vers),
G_FW_HDR_FW_VER_MICRO(sc->params.er_vers),
G_FW_HDR_FW_VER_BUILD(sc->params.er_vers));
param[0] = FW_PARAM_DEV(PORTVEC);
param[1] = FW_PARAM_DEV(CCLK);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 2, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query parameters (pre_init): %d.\n", rc);
return (rc);
}
sc->params.portvec = val[0];
sc->params.nports = bitcount32(val[0]);
sc->params.vpd.cclk = val[1];
/* Read device log parameters. */
rc = -t4_init_devlog_params(sc, 1);
if (rc == 0)
fixup_devlog_params(sc);
else {
device_printf(sc->dev,
"failed to get devlog parameters: %d.\n", rc);
rc = 0; /* devlog isn't critical for device operation */
}
return (rc);
}
/*
* Retrieve various parameters that are of interest to the driver. The device
* has been initialized by the firmware at this point.
*/
static int
get_params__post_init(struct adapter *sc)
{
int rc;
uint32_t param[7], val[7];
struct fw_caps_config_cmd caps;
param[0] = FW_PARAM_PFVF(IQFLINT_START);
param[1] = FW_PARAM_PFVF(EQ_START);
param[2] = FW_PARAM_PFVF(FILTER_START);
param[3] = FW_PARAM_PFVF(FILTER_END);
param[4] = FW_PARAM_PFVF(L2T_START);
param[5] = FW_PARAM_PFVF(L2T_END);
param[6] = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_DIAG) |
V_FW_PARAMS_PARAM_Y(FW_PARAM_DEV_DIAG_VDD);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 7, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query parameters (post_init): %d.\n", rc);
return (rc);
}
sc->sge.iq_start = val[0];
sc->sge.eq_start = val[1];
if ((int)val[3] > (int)val[2]) {
sc->tids.ftid_base = val[2];
sc->tids.ftid_end = val[3];
sc->tids.nftids = val[3] - val[2] + 1;
}
sc->vres.l2t.start = val[4];
sc->vres.l2t.size = val[5] - val[4] + 1;
KASSERT(sc->vres.l2t.size <= L2T_SIZE,
("%s: L2 table size (%u) larger than expected (%u)",
__func__, sc->vres.l2t.size, L2T_SIZE));
sc->params.core_vdd = val[6];
if (chip_id(sc) >= CHELSIO_T6) {
#ifdef INVARIANTS
if (sc->params.fw_vers >=
(V_FW_HDR_FW_VER_MAJOR(1) | V_FW_HDR_FW_VER_MINOR(20) |
V_FW_HDR_FW_VER_MICRO(1) | V_FW_HDR_FW_VER_BUILD(0))) {
/*
* Note that the code to enable the region should run
* before t4_fw_initialize and not here. This is just a
* reminder to add said code.
*/
device_printf(sc->dev,
"hpfilter region not enabled.\n");
}
#endif
sc->tids.tid_base = t4_read_reg(sc,
A_LE_DB_ACTIVE_TABLE_START_INDEX);
param[0] = FW_PARAM_PFVF(HPFILTER_START);
param[1] = FW_PARAM_PFVF(HPFILTER_END);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 2, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query hpfilter parameters: %d.\n", rc);
return (rc);
}
if ((int)val[1] > (int)val[0]) {
sc->tids.hpftid_base = val[0];
sc->tids.hpftid_end = val[1];
sc->tids.nhpftids = val[1] - val[0] + 1;
/*
* These should go off if the layout changes and the
* driver needs to catch up.
*/
MPASS(sc->tids.hpftid_base == 0);
MPASS(sc->tids.tid_base == sc->tids.nhpftids);
}
}
/*
* MPSBGMAP is queried separately because only recent firmwares support
* it as a parameter and we don't want the compound query above to fail
* on older firmwares.
*/
param[0] = FW_PARAM_DEV(MPSBGMAP);
val[0] = 0;
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, param, val);
if (rc == 0)
sc->params.mps_bg_map = val[0];
else
sc->params.mps_bg_map = 0;
/*
* Determine whether the firmware supports the filter2 work request.
* This is queried separately for the same reason as MPSBGMAP above.
*/
param[0] = FW_PARAM_DEV(FILTER2_WR);
val[0] = 0;
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, param, val);
if (rc == 0)
sc->params.filter2_wr_support = val[0] != 0;
else
sc->params.filter2_wr_support = 0;
/*
* Find out whether we're allowed to use the ULPTX MEMWRITE DSGL.
* This is queried separately for the same reason as other params above.
*/
param[0] = FW_PARAM_DEV(ULPTX_MEMWRITE_DSGL);
val[0] = 0;
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, param, val);
if (rc == 0)
sc->params.ulptx_memwrite_dsgl = val[0] != 0;
else
sc->params.ulptx_memwrite_dsgl = false;
/* get capabilities */
bzero(&caps, sizeof(caps));
caps.op_to_write = htobe32(V_FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ);
caps.cfvalid_to_len16 = htobe32(FW_LEN16(caps));
rc = -t4_wr_mbox(sc, sc->mbox, &caps, sizeof(caps), &caps);
if (rc != 0) {
device_printf(sc->dev,
"failed to get card capabilities: %d.\n", rc);
return (rc);
}
#define READ_CAPS(x) do { \
sc->x = htobe16(caps.x); \
} while (0)
READ_CAPS(nbmcaps);
READ_CAPS(linkcaps);
READ_CAPS(switchcaps);
READ_CAPS(niccaps);
READ_CAPS(toecaps);
READ_CAPS(rdmacaps);
READ_CAPS(cryptocaps);
READ_CAPS(iscsicaps);
READ_CAPS(fcoecaps);
if (sc->niccaps & FW_CAPS_CONFIG_NIC_HASHFILTER) {
MPASS(chip_id(sc) > CHELSIO_T4);
MPASS(sc->toecaps == 0);
sc->toecaps = 0;
param[0] = FW_PARAM_DEV(NTID);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query HASHFILTER parameters: %d.\n", rc);
return (rc);
}
sc->tids.ntids = val[0];
if (sc->params.fw_vers <
(V_FW_HDR_FW_VER_MAJOR(1) | V_FW_HDR_FW_VER_MINOR(20) |
V_FW_HDR_FW_VER_MICRO(5) | V_FW_HDR_FW_VER_BUILD(0))) {
MPASS(sc->tids.ntids >= sc->tids.nhpftids);
sc->tids.ntids -= sc->tids.nhpftids;
}
sc->tids.natids = min(sc->tids.ntids / 2, MAX_ATIDS);
sc->params.hash_filter = 1;
}
if (sc->niccaps & FW_CAPS_CONFIG_NIC_ETHOFLD) {
param[0] = FW_PARAM_PFVF(ETHOFLD_START);
param[1] = FW_PARAM_PFVF(ETHOFLD_END);
param[2] = FW_PARAM_DEV(FLOWC_BUFFIFO_SZ);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 3, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query NIC parameters: %d.\n", rc);
return (rc);
}
if ((int)val[1] > (int)val[0]) {
sc->tids.etid_base = val[0];
sc->tids.etid_end = val[1];
sc->tids.netids = val[1] - val[0] + 1;
sc->params.eo_wr_cred = val[2];
sc->params.ethoffload = 1;
}
}
if (sc->toecaps) {
/* query offload-related parameters */
param[0] = FW_PARAM_DEV(NTID);
param[1] = FW_PARAM_PFVF(SERVER_START);
param[2] = FW_PARAM_PFVF(SERVER_END);
param[3] = FW_PARAM_PFVF(TDDP_START);
param[4] = FW_PARAM_PFVF(TDDP_END);
param[5] = FW_PARAM_DEV(FLOWC_BUFFIFO_SZ);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 6, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query TOE parameters: %d.\n", rc);
return (rc);
}
sc->tids.ntids = val[0];
if (sc->params.fw_vers <
(V_FW_HDR_FW_VER_MAJOR(1) | V_FW_HDR_FW_VER_MINOR(20) |
V_FW_HDR_FW_VER_MICRO(5) | V_FW_HDR_FW_VER_BUILD(0))) {
MPASS(sc->tids.ntids >= sc->tids.nhpftids);
sc->tids.ntids -= sc->tids.nhpftids;
}
sc->tids.natids = min(sc->tids.ntids / 2, MAX_ATIDS);
if ((int)val[2] > (int)val[1]) {
sc->tids.stid_base = val[1];
sc->tids.nstids = val[2] - val[1] + 1;
}
sc->vres.ddp.start = val[3];
sc->vres.ddp.size = val[4] - val[3] + 1;
sc->params.ofldq_wr_cred = val[5];
sc->params.offload = 1;
} else {
/*
* The firmware attempts memfree TOE configuration for -SO cards
* and will report toecaps=0 if it runs out of resources (this
* depends on the config file). It may not report 0 for other
* capabilities dependent on the TOE in this case. Set them to
* 0 here so that the driver doesn't bother tracking resources
* that will never be used.
*/
sc->iscsicaps = 0;
sc->rdmacaps = 0;
}
if (sc->rdmacaps) {
param[0] = FW_PARAM_PFVF(STAG_START);
param[1] = FW_PARAM_PFVF(STAG_END);
param[2] = FW_PARAM_PFVF(RQ_START);
param[3] = FW_PARAM_PFVF(RQ_END);
param[4] = FW_PARAM_PFVF(PBL_START);
param[5] = FW_PARAM_PFVF(PBL_END);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 6, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query RDMA parameters(1): %d.\n", rc);
return (rc);
}
sc->vres.stag.start = val[0];
sc->vres.stag.size = val[1] - val[0] + 1;
sc->vres.rq.start = val[2];
sc->vres.rq.size = val[3] - val[2] + 1;
sc->vres.pbl.start = val[4];
sc->vres.pbl.size = val[5] - val[4] + 1;
param[0] = FW_PARAM_PFVF(SQRQ_START);
param[1] = FW_PARAM_PFVF(SQRQ_END);
param[2] = FW_PARAM_PFVF(CQ_START);
param[3] = FW_PARAM_PFVF(CQ_END);
param[4] = FW_PARAM_PFVF(OCQ_START);
param[5] = FW_PARAM_PFVF(OCQ_END);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 6, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query RDMA parameters(2): %d.\n", rc);
return (rc);
}
sc->vres.qp.start = val[0];
sc->vres.qp.size = val[1] - val[0] + 1;
sc->vres.cq.start = val[2];
sc->vres.cq.size = val[3] - val[2] + 1;
sc->vres.ocq.start = val[4];
sc->vres.ocq.size = val[5] - val[4] + 1;
param[0] = FW_PARAM_PFVF(SRQ_START);
param[1] = FW_PARAM_PFVF(SRQ_END);
param[2] = FW_PARAM_DEV(MAXORDIRD_QP);
param[3] = FW_PARAM_DEV(MAXIRD_ADAPTER);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 4, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query RDMA parameters(3): %d.\n", rc);
return (rc);
}
sc->vres.srq.start = val[0];
sc->vres.srq.size = val[1] - val[0] + 1;
sc->params.max_ordird_qp = val[2];
sc->params.max_ird_adapter = val[3];
}
if (sc->iscsicaps) {
param[0] = FW_PARAM_PFVF(ISCSI_START);
param[1] = FW_PARAM_PFVF(ISCSI_END);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 2, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query iSCSI parameters: %d.\n", rc);
return (rc);
}
sc->vres.iscsi.start = val[0];
sc->vres.iscsi.size = val[1] - val[0] + 1;
}
if (sc->cryptocaps & FW_CAPS_CONFIG_TLSKEYS) {
param[0] = FW_PARAM_PFVF(TLS_START);
param[1] = FW_PARAM_PFVF(TLS_END);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 2, param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query TLS parameters: %d.\n", rc);
return (rc);
}
sc->vres.key.start = val[0];
sc->vres.key.size = val[1] - val[0] + 1;
}
t4_init_sge_params(sc);
/*
* We've got the params we wanted to query via the firmware. Now grab
* some others directly from the chip.
*/
rc = t4_read_chip_settings(sc);
return (rc);
}
static int
set_params__post_init(struct adapter *sc)
{
uint32_t param, val;
#ifdef TCP_OFFLOAD
int i, v, shift;
#endif
/* Ask for encapsulated CPLs. */
param = FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
val = 1;
(void)t4_set_params(sc, sc->mbox, sc->pf, 0, 1, &param, &val);
/* Enable 32b port caps if the firmware supports it. */
param = FW_PARAM_PFVF(PORT_CAPS32);
val = 1;
if (t4_set_params(sc, sc->mbox, sc->pf, 0, 1, &param, &val) == 0)
sc->params.port_caps32 = 1;
/* Let filter + maskhash steer to a part of the VI's RSS region. */
val = 1 << (G_MASKSIZE(t4_read_reg(sc, A_TP_RSS_CONFIG_TNL)) - 1);
t4_set_reg_field(sc, A_TP_RSS_CONFIG_TNL, V_MASKFILTER(M_MASKFILTER),
V_MASKFILTER(val - 1));
#ifdef TCP_OFFLOAD
/*
* Override the TOE timers with user-provided tunables. This is not the
* recommended way to change the timers (the firmware config file is the
* recommended way), so these tunables are not documented.
*
* All the timer tunables are in microseconds.
*/
if (t4_toe_keepalive_idle != 0) {
v = us_to_tcp_ticks(sc, t4_toe_keepalive_idle);
v &= M_KEEPALIVEIDLE;
t4_set_reg_field(sc, A_TP_KEEP_IDLE,
V_KEEPALIVEIDLE(M_KEEPALIVEIDLE), V_KEEPALIVEIDLE(v));
}
if (t4_toe_keepalive_interval != 0) {
v = us_to_tcp_ticks(sc, t4_toe_keepalive_interval);
v &= M_KEEPALIVEINTVL;
t4_set_reg_field(sc, A_TP_KEEP_INTVL,
V_KEEPALIVEINTVL(M_KEEPALIVEINTVL), V_KEEPALIVEINTVL(v));
}
if (t4_toe_keepalive_count != 0) {
v = t4_toe_keepalive_count & M_KEEPALIVEMAXR2;
t4_set_reg_field(sc, A_TP_SHIFT_CNT,
V_KEEPALIVEMAXR1(M_KEEPALIVEMAXR1) |
V_KEEPALIVEMAXR2(M_KEEPALIVEMAXR2),
V_KEEPALIVEMAXR1(1) | V_KEEPALIVEMAXR2(v));
}
if (t4_toe_rexmt_min != 0) {
v = us_to_tcp_ticks(sc, t4_toe_rexmt_min);
v &= M_RXTMIN;
t4_set_reg_field(sc, A_TP_RXT_MIN,
V_RXTMIN(M_RXTMIN), V_RXTMIN(v));
}
if (t4_toe_rexmt_max != 0) {
v = us_to_tcp_ticks(sc, t4_toe_rexmt_max);
v &= M_RXTMAX;
t4_set_reg_field(sc, A_TP_RXT_MAX,
V_RXTMAX(M_RXTMAX), V_RXTMAX(v));
}
if (t4_toe_rexmt_count != 0) {
v = t4_toe_rexmt_count & M_RXTSHIFTMAXR2;
t4_set_reg_field(sc, A_TP_SHIFT_CNT,
V_RXTSHIFTMAXR1(M_RXTSHIFTMAXR1) |
V_RXTSHIFTMAXR2(M_RXTSHIFTMAXR2),
V_RXTSHIFTMAXR1(1) | V_RXTSHIFTMAXR2(v));
}
for (i = 0; i < nitems(t4_toe_rexmt_backoff); i++) {
if (t4_toe_rexmt_backoff[i] != -1) {
v = t4_toe_rexmt_backoff[i] & M_TIMERBACKOFFINDEX0;
shift = (i & 3) << 3;
t4_set_reg_field(sc, A_TP_TCP_BACKOFF_REG0 + (i & ~3),
M_TIMERBACKOFFINDEX0 << shift, v << shift);
}
}
#endif
return (0);
}
#undef FW_PARAM_PFVF
#undef FW_PARAM_DEV
static void
t4_set_desc(struct adapter *sc)
{
char buf[128];
struct adapter_params *p = &sc->params;
snprintf(buf, sizeof(buf), "Chelsio %s", p->vpd.id);
device_set_desc_copy(sc->dev, buf);
}
static inline void
ifmedia_add4(struct ifmedia *ifm, int m)
{
ifmedia_add(ifm, m, 0, NULL);
ifmedia_add(ifm, m | IFM_ETH_TXPAUSE, 0, NULL);
ifmedia_add(ifm, m | IFM_ETH_RXPAUSE, 0, NULL);
ifmedia_add(ifm, m | IFM_ETH_TXPAUSE | IFM_ETH_RXPAUSE, 0, NULL);
}
/*
* This is the selected media, which is not quite the same as the active media.
* The media line in ifconfig is "media: Ethernet selected (active)" if selected
* and active are not the same, and "media: Ethernet selected" otherwise.
*/
static void
set_current_media(struct port_info *pi)
{
struct link_config *lc;
struct ifmedia *ifm;
int mword;
u_int speed;
PORT_LOCK_ASSERT_OWNED(pi);
/* Leave current media alone if it's already set to IFM_NONE. */
ifm = &pi->media;
if (ifm->ifm_cur != NULL &&
IFM_SUBTYPE(ifm->ifm_cur->ifm_media) == IFM_NONE)
return;
lc = &pi->link_cfg;
if (lc->requested_aneg != AUTONEG_DISABLE &&
lc->supported & FW_PORT_CAP32_ANEG) {
ifmedia_set(ifm, IFM_ETHER | IFM_AUTO);
return;
}
mword = IFM_ETHER | IFM_FDX;
if (lc->requested_fc & PAUSE_TX)
mword |= IFM_ETH_TXPAUSE;
if (lc->requested_fc & PAUSE_RX)
mword |= IFM_ETH_RXPAUSE;
if (lc->requested_speed == 0)
speed = port_top_speed(pi) * 1000; /* Gbps -> Mbps */
else
speed = lc->requested_speed;
mword |= port_mword(pi, speed_to_fwcap(speed));
ifmedia_set(ifm, mword);
}
/*
* Returns true if the ifmedia list for the port cannot change.
*/
static bool
fixed_ifmedia(struct port_info *pi)
{
return (pi->port_type == FW_PORT_TYPE_BT_SGMII ||
pi->port_type == FW_PORT_TYPE_BT_XFI ||
pi->port_type == FW_PORT_TYPE_BT_XAUI ||
pi->port_type == FW_PORT_TYPE_KX4 ||
pi->port_type == FW_PORT_TYPE_KX ||
pi->port_type == FW_PORT_TYPE_KR ||
pi->port_type == FW_PORT_TYPE_BP_AP ||
pi->port_type == FW_PORT_TYPE_BP4_AP ||
pi->port_type == FW_PORT_TYPE_BP40_BA ||
pi->port_type == FW_PORT_TYPE_KR4_100G ||
pi->port_type == FW_PORT_TYPE_KR_SFP28 ||
pi->port_type == FW_PORT_TYPE_KR_XLAUI);
}
static void
build_medialist(struct port_info *pi)
{
uint32_t ss, speed;
int unknown, mword, bit;
struct link_config *lc;
struct ifmedia *ifm;
PORT_LOCK_ASSERT_OWNED(pi);
if (pi->flags & FIXED_IFMEDIA)
return;
/*
* Rebuild the ifmedia list.
*/
ifm = &pi->media;
ifmedia_removeall(ifm);
lc = &pi->link_cfg;
ss = G_FW_PORT_CAP32_SPEED(lc->supported); /* Supported Speeds */
if (__predict_false(ss == 0)) { /* not supposed to happen. */
MPASS(ss != 0);
no_media:
MPASS(LIST_EMPTY(&ifm->ifm_list));
ifmedia_add(ifm, IFM_ETHER | IFM_NONE, 0, NULL);
ifmedia_set(ifm, IFM_ETHER | IFM_NONE);
return;
}
unknown = 0;
for (bit = S_FW_PORT_CAP32_SPEED; bit < fls(ss); bit++) {
speed = 1 << bit;
MPASS(speed & M_FW_PORT_CAP32_SPEED);
if (ss & speed) {
mword = port_mword(pi, speed);
if (mword == IFM_NONE) {
goto no_media;
} else if (mword == IFM_UNKNOWN)
unknown++;
else
ifmedia_add4(ifm, IFM_ETHER | IFM_FDX | mword);
}
}
if (unknown > 0) /* Add one unknown for all unknown media types. */
ifmedia_add4(ifm, IFM_ETHER | IFM_FDX | IFM_UNKNOWN);
if (lc->supported & FW_PORT_CAP32_ANEG)
ifmedia_add(ifm, IFM_ETHER | IFM_AUTO, 0, NULL);
set_current_media(pi);
}
/*
* Initialize the requested fields in the link config based on driver tunables.
*/
static void
init_link_config(struct port_info *pi)
{
struct link_config *lc = &pi->link_cfg;
PORT_LOCK_ASSERT_OWNED(pi);
lc->requested_speed = 0;
if (t4_autoneg == 0)
lc->requested_aneg = AUTONEG_DISABLE;
else if (t4_autoneg == 1)
lc->requested_aneg = AUTONEG_ENABLE;
else
lc->requested_aneg = AUTONEG_AUTO;
lc->requested_fc = t4_pause_settings & (PAUSE_TX | PAUSE_RX |
PAUSE_AUTONEG);
if (t4_fec == -1 || t4_fec & FEC_AUTO)
lc->requested_fec = FEC_AUTO;
else {
lc->requested_fec = FEC_NONE;
if (t4_fec & FEC_RS)
lc->requested_fec |= FEC_RS;
if (t4_fec & FEC_BASER_RS)
lc->requested_fec |= FEC_BASER_RS;
}
}
/*
* Makes sure that all requested settings comply with what's supported by the
* port. Returns the number of settings that were invalid and had to be fixed.
*/
static int
fixup_link_config(struct port_info *pi)
{
int n = 0;
struct link_config *lc = &pi->link_cfg;
uint32_t fwspeed;
PORT_LOCK_ASSERT_OWNED(pi);
/* Speed (when not autonegotiating) */
if (lc->requested_speed != 0) {
fwspeed = speed_to_fwcap(lc->requested_speed);
if ((fwspeed & lc->supported) == 0) {
n++;
lc->requested_speed = 0;
}
}
/* Link autonegotiation */
MPASS(lc->requested_aneg == AUTONEG_ENABLE ||
lc->requested_aneg == AUTONEG_DISABLE ||
lc->requested_aneg == AUTONEG_AUTO);
if (lc->requested_aneg == AUTONEG_ENABLE &&
!(lc->supported & FW_PORT_CAP32_ANEG)) {
n++;
lc->requested_aneg = AUTONEG_AUTO;
}
/* Flow control */
MPASS((lc->requested_fc & ~(PAUSE_TX | PAUSE_RX | PAUSE_AUTONEG)) == 0);
if (lc->requested_fc & PAUSE_TX &&
!(lc->supported & FW_PORT_CAP32_FC_TX)) {
n++;
lc->requested_fc &= ~PAUSE_TX;
}
if (lc->requested_fc & PAUSE_RX &&
!(lc->supported & FW_PORT_CAP32_FC_RX)) {
n++;
lc->requested_fc &= ~PAUSE_RX;
}
if (!(lc->requested_fc & PAUSE_AUTONEG) &&
!(lc->supported & FW_PORT_CAP32_FORCE_PAUSE)) {
n++;
lc->requested_fc |= PAUSE_AUTONEG;
}
/* FEC */
if ((lc->requested_fec & FEC_RS &&
!(lc->supported & FW_PORT_CAP32_FEC_RS)) ||
(lc->requested_fec & FEC_BASER_RS &&
!(lc->supported & FW_PORT_CAP32_FEC_BASER_RS))) {
n++;
lc->requested_fec = FEC_AUTO;
}
return (n);
}
/*
* Apply the requested L1 settings, which are expected to be valid, to the
* hardware.
*/
static int
apply_link_config(struct port_info *pi)
{
struct adapter *sc = pi->adapter;
struct link_config *lc = &pi->link_cfg;
int rc;
#ifdef INVARIANTS
ASSERT_SYNCHRONIZED_OP(sc);
PORT_LOCK_ASSERT_OWNED(pi);
if (lc->requested_aneg == AUTONEG_ENABLE)
MPASS(lc->supported & FW_PORT_CAP32_ANEG);
if (!(lc->requested_fc & PAUSE_AUTONEG))
MPASS(lc->supported & FW_PORT_CAP32_FORCE_PAUSE);
if (lc->requested_fc & PAUSE_TX)
MPASS(lc->supported & FW_PORT_CAP32_FC_TX);
if (lc->requested_fc & PAUSE_RX)
MPASS(lc->supported & FW_PORT_CAP32_FC_RX);
if (lc->requested_fec & FEC_RS)
MPASS(lc->supported & FW_PORT_CAP32_FEC_RS);
if (lc->requested_fec & FEC_BASER_RS)
MPASS(lc->supported & FW_PORT_CAP32_FEC_BASER_RS);
#endif
rc = -t4_link_l1cfg(sc, sc->mbox, pi->tx_chan, lc);
if (rc != 0) {
/* Don't complain if the VF driver gets back an EPERM. */
if (!(sc->flags & IS_VF) || rc != FW_EPERM)
device_printf(pi->dev, "l1cfg failed: %d\n", rc);
} else {
/*
* An L1_CFG will almost always result in a link-change event if
* the link is up, and the driver will refresh the actual
* fec/fc/etc. when the notification is processed. If the link
* is down then the actual settings are meaningless.
*
* This takes care of the case where a change in the L1 settings
* may not result in a notification.
*/
if (lc->link_ok && !(lc->requested_fc & PAUSE_AUTONEG))
lc->fc = lc->requested_fc & (PAUSE_TX | PAUSE_RX);
}
return (rc);
}
#define FW_MAC_EXACT_CHUNK 7
/*
* Program the port's XGMAC based on parameters in ifnet. The caller also
* indicates which parameters should be programmed (the rest are left alone).
*/
int
update_mac_settings(struct ifnet *ifp, int flags)
{
int rc = 0;
struct vi_info *vi = ifp->if_softc;
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
int mtu = -1, promisc = -1, allmulti = -1, vlanex = -1;
ASSERT_SYNCHRONIZED_OP(sc);
KASSERT(flags, ("%s: not told what to update.", __func__));
if (flags & XGMAC_MTU)
mtu = ifp->if_mtu;
if (flags & XGMAC_PROMISC)
promisc = ifp->if_flags & IFF_PROMISC ? 1 : 0;
if (flags & XGMAC_ALLMULTI)
allmulti = ifp->if_flags & IFF_ALLMULTI ? 1 : 0;
if (flags & XGMAC_VLANEX)
vlanex = ifp->if_capenable & IFCAP_VLAN_HWTAGGING ? 1 : 0;
if (flags & (XGMAC_MTU|XGMAC_PROMISC|XGMAC_ALLMULTI|XGMAC_VLANEX)) {
rc = -t4_set_rxmode(sc, sc->mbox, vi->viid, mtu, promisc,
allmulti, 1, vlanex, false);
if (rc) {
if_printf(ifp, "set_rxmode (%x) failed: %d\n", flags,
rc);
return (rc);
}
}
if (flags & XGMAC_UCADDR) {
uint8_t ucaddr[ETHER_ADDR_LEN];
bcopy(IF_LLADDR(ifp), ucaddr, sizeof(ucaddr));
rc = t4_change_mac(sc, sc->mbox, vi->viid, vi->xact_addr_filt,
ucaddr, true, true);
if (rc < 0) {
rc = -rc;
if_printf(ifp, "change_mac failed: %d\n", rc);
return (rc);
} else {
vi->xact_addr_filt = rc;
rc = 0;
}
}
if (flags & XGMAC_MCADDRS) {
const uint8_t *mcaddr[FW_MAC_EXACT_CHUNK];
int del = 1;
uint64_t hash = 0;
struct ifmultiaddr *ifma;
int i = 0, j;
if_maddr_rlock(ifp);
CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
if (ifma->ifma_addr->sa_family != AF_LINK)
continue;
mcaddr[i] =
LLADDR((struct sockaddr_dl *)ifma->ifma_addr);
MPASS(ETHER_IS_MULTICAST(mcaddr[i]));
i++;
if (i == FW_MAC_EXACT_CHUNK) {
rc = t4_alloc_mac_filt(sc, sc->mbox, vi->viid,
del, i, mcaddr, NULL, &hash, 0);
if (rc < 0) {
rc = -rc;
for (j = 0; j < i; j++) {
if_printf(ifp,
"failed to add mc address"
" %02x:%02x:%02x:"
"%02x:%02x:%02x rc=%d\n",
mcaddr[j][0], mcaddr[j][1],
mcaddr[j][2], mcaddr[j][3],
mcaddr[j][4], mcaddr[j][5],
rc);
}
goto mcfail;
}
del = 0;
i = 0;
}
}
if (i > 0) {
rc = t4_alloc_mac_filt(sc, sc->mbox, vi->viid, del, i,
mcaddr, NULL, &hash, 0);
if (rc < 0) {
rc = -rc;
for (j = 0; j < i; j++) {
if_printf(ifp,
"failed to add mc address"
" %02x:%02x:%02x:"
"%02x:%02x:%02x rc=%d\n",
mcaddr[j][0], mcaddr[j][1],
mcaddr[j][2], mcaddr[j][3],
mcaddr[j][4], mcaddr[j][5],
rc);
}
goto mcfail;
}
}
rc = -t4_set_addr_hash(sc, sc->mbox, vi->viid, 0, hash, 0);
if (rc != 0)
if_printf(ifp, "failed to set mc address hash: %d\n", rc);
mcfail:
if_maddr_runlock(ifp);
}
return (rc);
}
/*
* {begin|end}_synchronized_op must be called from the same thread.
*/
int
begin_synchronized_op(struct adapter *sc, struct vi_info *vi, int flags,
char *wmesg)
{
int rc, pri;
#ifdef WITNESS
/* the caller thinks it's ok to sleep, but is it really? */
if (flags & SLEEP_OK)
WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
"begin_synchronized_op");
#endif
if (INTR_OK)
pri = PCATCH;
else
pri = 0;
ADAPTER_LOCK(sc);
for (;;) {
if (vi && IS_DOOMED(vi)) {
rc = ENXIO;
goto done;
}
if (!IS_BUSY(sc)) {
rc = 0;
break;
}
if (!(flags & SLEEP_OK)) {
rc = EBUSY;
goto done;
}
if (mtx_sleep(&sc->flags, &sc->sc_lock, pri, wmesg, 0)) {
rc = EINTR;
goto done;
}
}
KASSERT(!IS_BUSY(sc), ("%s: controller busy.", __func__));
SET_BUSY(sc);
#ifdef INVARIANTS
sc->last_op = wmesg;
sc->last_op_thr = curthread;
sc->last_op_flags = flags;
#endif
done:
if (!(flags & HOLD_LOCK) || rc)
ADAPTER_UNLOCK(sc);
return (rc);
}
/*
* Tell if_ioctl and if_init that the VI is going away. This is a
* special variant of begin_synchronized_op and must be paired with a
* call to end_synchronized_op.
*/
void
doom_vi(struct adapter *sc, struct vi_info *vi)
{
ADAPTER_LOCK(sc);
SET_DOOMED(vi);
wakeup(&sc->flags);
while (IS_BUSY(sc))
mtx_sleep(&sc->flags, &sc->sc_lock, 0, "t4detach", 0);
SET_BUSY(sc);
#ifdef INVARIANTS
sc->last_op = "t4detach";
sc->last_op_thr = curthread;
sc->last_op_flags = 0;
#endif
ADAPTER_UNLOCK(sc);
}
/*
* {begin|end}_synchronized_op must be called from the same thread.
*/
void
end_synchronized_op(struct adapter *sc, int flags)
{
if (flags & LOCK_HELD)
ADAPTER_LOCK_ASSERT_OWNED(sc);
else
ADAPTER_LOCK(sc);
KASSERT(IS_BUSY(sc), ("%s: controller not busy.", __func__));
CLR_BUSY(sc);
wakeup(&sc->flags);
ADAPTER_UNLOCK(sc);
}
static int
cxgbe_init_synchronized(struct vi_info *vi)
{
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
struct ifnet *ifp = vi->ifp;
int rc = 0, i;
struct sge_txq *txq;
ASSERT_SYNCHRONIZED_OP(sc);
if (ifp->if_drv_flags & IFF_DRV_RUNNING)
return (0); /* already running */
if (!(sc->flags & FULL_INIT_DONE) &&
((rc = adapter_full_init(sc)) != 0))
return (rc); /* error message displayed already */
if (!(vi->flags & VI_INIT_DONE) &&
((rc = vi_full_init(vi)) != 0))
return (rc); /* error message displayed already */
rc = update_mac_settings(ifp, XGMAC_ALL);
if (rc)
goto done; /* error message displayed already */
PORT_LOCK(pi);
if (pi->up_vis == 0) {
t4_update_port_info(pi);
fixup_link_config(pi);
build_medialist(pi);
apply_link_config(pi);
}
rc = -t4_enable_vi(sc, sc->mbox, vi->viid, true, true);
if (rc != 0) {
if_printf(ifp, "enable_vi failed: %d\n", rc);
PORT_UNLOCK(pi);
goto done;
}
/*
* Can't fail from this point onwards. Review cxgbe_uninit_synchronized
* if this changes.
*/
for_each_txq(vi, i, txq) {
TXQ_LOCK(txq);
txq->eq.flags |= EQ_ENABLED;
TXQ_UNLOCK(txq);
}
/*
* The first iq of the first port to come up is used for tracing.
*/
if (sc->traceq < 0 && IS_MAIN_VI(vi)) {
sc->traceq = sc->sge.rxq[vi->first_rxq].iq.abs_id;
t4_write_reg(sc, is_t4(sc) ? A_MPS_TRC_RSS_CONTROL :
A_MPS_T5_TRC_RSS_CONTROL, V_RSSCONTROL(pi->tx_chan) |
V_QUEUENUMBER(sc->traceq));
pi->flags |= HAS_TRACEQ;
}
/* all ok */
pi->up_vis++;
ifp->if_drv_flags |= IFF_DRV_RUNNING;
if (pi->nvi > 1 || sc->flags & IS_VF)
callout_reset(&vi->tick, hz, vi_tick, vi);
else
callout_reset(&pi->tick, hz, cxgbe_tick, pi);
PORT_UNLOCK(pi);
done:
if (rc != 0)
cxgbe_uninit_synchronized(vi);
return (rc);
}
/*
* Idempotent.
*/
static int
cxgbe_uninit_synchronized(struct vi_info *vi)
{
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
struct ifnet *ifp = vi->ifp;
int rc, i;
struct sge_txq *txq;
ASSERT_SYNCHRONIZED_OP(sc);
if (!(vi->flags & VI_INIT_DONE)) {
if (__predict_false(ifp->if_drv_flags & IFF_DRV_RUNNING)) {
KASSERT(0, ("uninited VI is running"));
if_printf(ifp, "uninited VI with running ifnet. "
"vi->flags 0x%016lx, if_flags 0x%08x, "
"if_drv_flags 0x%08x\n", vi->flags, ifp->if_flags,
ifp->if_drv_flags);
}
return (0);
}
/*
* Disable the VI so that all its data in either direction is discarded
* by the MPS. Leave everything else (the queues, interrupts, and 1Hz
* tick) intact as the TP can deliver negative advice or data that it's
* holding in its RAM (for an offloaded connection) even after the VI is
* disabled.
*/
rc = -t4_enable_vi(sc, sc->mbox, vi->viid, false, false);
if (rc) {
if_printf(ifp, "disable_vi failed: %d\n", rc);
return (rc);
}
for_each_txq(vi, i, txq) {
TXQ_LOCK(txq);
txq->eq.flags &= ~EQ_ENABLED;
TXQ_UNLOCK(txq);
}
PORT_LOCK(pi);
if (pi->nvi > 1 || sc->flags & IS_VF)
callout_stop(&vi->tick);
else
callout_stop(&pi->tick);
if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) {
PORT_UNLOCK(pi);
return (0);
}
ifp->if_drv_flags &= ~IFF_DRV_RUNNING;
pi->up_vis--;
if (pi->up_vis > 0) {
PORT_UNLOCK(pi);
return (0);
}
pi->link_cfg.link_ok = false;
pi->link_cfg.speed = 0;
pi->link_cfg.link_down_rc = 255;
t4_os_link_changed(pi);
PORT_UNLOCK(pi);
return (0);
}
/*
* It is ok for this function to fail midway and return right away. t4_detach
* will walk the entire sc->irq list and clean up whatever is valid.
*/
int
t4_setup_intr_handlers(struct adapter *sc)
{
int rc, rid, p, q, v;
char s[8];
struct irq *irq;
struct port_info *pi;
struct vi_info *vi;
struct sge *sge = &sc->sge;
struct sge_rxq *rxq;
#ifdef TCP_OFFLOAD
struct sge_ofld_rxq *ofld_rxq;
#endif
#ifdef DEV_NETMAP
struct sge_nm_rxq *nm_rxq;
#endif
#ifdef RSS
int nbuckets = rss_getnumbuckets();
#endif
/*
* Setup interrupts.
*/
irq = &sc->irq[0];
rid = sc->intr_type == INTR_INTX ? 0 : 1;
if (forwarding_intr_to_fwq(sc))
return (t4_alloc_irq(sc, irq, rid, t4_intr_all, sc, "all"));
/* Multiple interrupts. */
if (sc->flags & IS_VF)
KASSERT(sc->intr_count >= T4VF_EXTRA_INTR + sc->params.nports,
("%s: too few intr.", __func__));
else
KASSERT(sc->intr_count >= T4_EXTRA_INTR + sc->params.nports,
("%s: too few intr.", __func__));
/* The first one is always error intr on PFs */
if (!(sc->flags & IS_VF)) {
rc = t4_alloc_irq(sc, irq, rid, t4_intr_err, sc, "err");
if (rc != 0)
return (rc);
irq++;
rid++;
}
/* The second one is always the firmware event queue (first on VFs) */
rc = t4_alloc_irq(sc, irq, rid, t4_intr_evt, &sge->fwq, "evt");
if (rc != 0)
return (rc);
irq++;
rid++;
for_each_port(sc, p) {
pi = sc->port[p];
for_each_vi(pi, v, vi) {
vi->first_intr = rid - 1;
if (vi->nnmrxq > 0) {
int n = max(vi->nrxq, vi->nnmrxq);
rxq = &sge->rxq[vi->first_rxq];
#ifdef DEV_NETMAP
nm_rxq = &sge->nm_rxq[vi->first_nm_rxq];
#endif
for (q = 0; q < n; q++) {
snprintf(s, sizeof(s), "%x%c%x", p,
'a' + v, q);
if (q < vi->nrxq)
irq->rxq = rxq++;
#ifdef DEV_NETMAP
if (q < vi->nnmrxq)
irq->nm_rxq = nm_rxq++;
if (irq->nm_rxq != NULL &&
irq->rxq == NULL) {
/* Netmap rx only */
rc = t4_alloc_irq(sc, irq, rid,
t4_nm_intr, irq->nm_rxq, s);
}
if (irq->nm_rxq != NULL &&
irq->rxq != NULL) {
/* NIC and Netmap rx */
rc = t4_alloc_irq(sc, irq, rid,
t4_vi_intr, irq, s);
}
#endif
if (irq->rxq != NULL &&
irq->nm_rxq == NULL) {
/* NIC rx only */
rc = t4_alloc_irq(sc, irq, rid,
t4_intr, irq->rxq, s);
}
if (rc != 0)
return (rc);
#ifdef RSS
if (q < vi->nrxq) {
bus_bind_intr(sc->dev, irq->res,
rss_getcpu(q % nbuckets));
}
#endif
irq++;
rid++;
vi->nintr++;
}
} else {
for_each_rxq(vi, q, rxq) {
snprintf(s, sizeof(s), "%x%c%x", p,
'a' + v, q);
rc = t4_alloc_irq(sc, irq, rid,
t4_intr, rxq, s);
if (rc != 0)
return (rc);
#ifdef RSS
bus_bind_intr(sc->dev, irq->res,
rss_getcpu(q % nbuckets));
#endif
irq++;
rid++;
vi->nintr++;
}
}
#ifdef TCP_OFFLOAD
for_each_ofld_rxq(vi, q, ofld_rxq) {
snprintf(s, sizeof(s), "%x%c%x", p, 'A' + v, q);
rc = t4_alloc_irq(sc, irq, rid, t4_intr,
ofld_rxq, s);
if (rc != 0)
return (rc);
irq++;
rid++;
vi->nintr++;
}
#endif
}
}
MPASS(irq == &sc->irq[sc->intr_count]);
return (0);
}
int
adapter_full_init(struct adapter *sc)
{
int rc, i;
#ifdef RSS
uint32_t raw_rss_key[RSS_KEYSIZE / sizeof(uint32_t)];
uint32_t rss_key[RSS_KEYSIZE / sizeof(uint32_t)];
#endif
ASSERT_SYNCHRONIZED_OP(sc);
ADAPTER_LOCK_ASSERT_NOTOWNED(sc);
KASSERT((sc->flags & FULL_INIT_DONE) == 0,
("%s: FULL_INIT_DONE already", __func__));
/*
* Set up queues that belong to the adapter (not to any particular port).
*/
rc = t4_setup_adapter_queues(sc);
if (rc != 0)
goto done;
for (i = 0; i < nitems(sc->tq); i++) {
sc->tq[i] = taskqueue_create("t4 taskq", M_NOWAIT,
taskqueue_thread_enqueue, &sc->tq[i]);
if (sc->tq[i] == NULL) {
device_printf(sc->dev,
"failed to allocate task queue %d\n", i);
rc = ENOMEM;
goto done;
}
taskqueue_start_threads(&sc->tq[i], 1, PI_NET, "%s tq%d",
device_get_nameunit(sc->dev), i);
}
#ifdef RSS
MPASS(RSS_KEYSIZE == 40);
rss_getkey((void *)&raw_rss_key[0]);
for (i = 0; i < nitems(rss_key); i++) {
rss_key[i] = htobe32(raw_rss_key[nitems(rss_key) - 1 - i]);
}
t4_write_rss_key(sc, &rss_key[0], -1, 1);
#endif
if (!(sc->flags & IS_VF))
t4_intr_enable(sc);
sc->flags |= FULL_INIT_DONE;
done:
if (rc != 0)
adapter_full_uninit(sc);
return (rc);
}
int
adapter_full_uninit(struct adapter *sc)
{
int i;
ADAPTER_LOCK_ASSERT_NOTOWNED(sc);
t4_teardown_adapter_queues(sc);
for (i = 0; i < nitems(sc->tq) && sc->tq[i]; i++) {
taskqueue_free(sc->tq[i]);
sc->tq[i] = NULL;
}
sc->flags &= ~FULL_INIT_DONE;
return (0);
}
#ifdef RSS
#define SUPPORTED_RSS_HASHTYPES (RSS_HASHTYPE_RSS_IPV4 | \
RSS_HASHTYPE_RSS_TCP_IPV4 | RSS_HASHTYPE_RSS_IPV6 | \
RSS_HASHTYPE_RSS_TCP_IPV6 | RSS_HASHTYPE_RSS_UDP_IPV4 | \
RSS_HASHTYPE_RSS_UDP_IPV6)
/* Translates kernel hash types to hardware. */
static int
hashconfig_to_hashen(int hashconfig)
{
int hashen = 0;
if (hashconfig & RSS_HASHTYPE_RSS_IPV4)
hashen |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
if (hashconfig & RSS_HASHTYPE_RSS_IPV6)
hashen |= F_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN;
if (hashconfig & RSS_HASHTYPE_RSS_UDP_IPV4) {
hashen |= F_FW_RSS_VI_CONFIG_CMD_UDPEN |
F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
}
if (hashconfig & RSS_HASHTYPE_RSS_UDP_IPV6) {
hashen |= F_FW_RSS_VI_CONFIG_CMD_UDPEN |
F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN;
}
if (hashconfig & RSS_HASHTYPE_RSS_TCP_IPV4)
hashen |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
if (hashconfig & RSS_HASHTYPE_RSS_TCP_IPV6)
hashen |= F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN;
return (hashen);
}
/* Translates hardware hash types to kernel. */
static int
hashen_to_hashconfig(int hashen)
{
int hashconfig = 0;
if (hashen & F_FW_RSS_VI_CONFIG_CMD_UDPEN) {
/*
* If UDP hashing was enabled it must have been enabled for
* either IPv4 or IPv6 (inclusive or). Enabling UDP without
* enabling any 4-tuple hash would be a nonsensical configuration.
*/
MPASS(hashen & (F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN));
if (hashen & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN)
hashconfig |= RSS_HASHTYPE_RSS_UDP_IPV4;
if (hashen & F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN)
hashconfig |= RSS_HASHTYPE_RSS_UDP_IPV6;
}
if (hashen & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN)
hashconfig |= RSS_HASHTYPE_RSS_TCP_IPV4;
if (hashen & F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN)
hashconfig |= RSS_HASHTYPE_RSS_TCP_IPV6;
if (hashen & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
hashconfig |= RSS_HASHTYPE_RSS_IPV4;
if (hashen & F_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN)
hashconfig |= RSS_HASHTYPE_RSS_IPV6;
return (hashconfig);
}
#endif
int
vi_full_init(struct vi_info *vi)
{
struct adapter *sc = vi->pi->adapter;
struct ifnet *ifp = vi->ifp;
uint16_t *rss;
struct sge_rxq *rxq;
int rc, i, j;
#ifdef RSS
int nbuckets = rss_getnumbuckets();
int hashconfig = rss_gethashconfig();
int extra;
#endif
ASSERT_SYNCHRONIZED_OP(sc);
KASSERT((vi->flags & VI_INIT_DONE) == 0,
("%s: VI_INIT_DONE already", __func__));
sysctl_ctx_init(&vi->ctx);
vi->flags |= VI_SYSCTL_CTX;
/*
* Allocate tx/rx/fl queues for this VI.
*/
rc = t4_setup_vi_queues(vi);
if (rc != 0)
goto done; /* error message displayed already */
/*
* Setup RSS for this VI. Save a copy of the RSS table for later use.
*/
if (vi->nrxq > vi->rss_size) {
if_printf(ifp, "nrxq (%d) > hw RSS table size (%d); "
"some queues will never receive traffic.\n", vi->nrxq,
vi->rss_size);
} else if (vi->rss_size % vi->nrxq) {
if_printf(ifp, "nrxq (%d) does not divide hw RSS table size (%d); "
"expect uneven traffic distribution.\n", vi->nrxq,
vi->rss_size);
}
#ifdef RSS
if (vi->nrxq != nbuckets) {
if_printf(ifp, "nrxq (%d) != kernel RSS buckets (%d); "
"performance will be impacted.\n", vi->nrxq, nbuckets);
}
#endif
rss = malloc(vi->rss_size * sizeof (*rss), M_CXGBE, M_ZERO | M_WAITOK);
for (i = 0; i < vi->rss_size;) {
#ifdef RSS
j = rss_get_indirection_to_bucket(i);
j %= vi->nrxq;
rxq = &sc->sge.rxq[vi->first_rxq + j];
rss[i++] = rxq->iq.abs_id;
#else
for_each_rxq(vi, j, rxq) {
rss[i++] = rxq->iq.abs_id;
if (i == vi->rss_size)
break;
}
#endif
}
rc = -t4_config_rss_range(sc, sc->mbox, vi->viid, 0, vi->rss_size, rss,
vi->rss_size);
if (rc != 0) {
free(rss, M_CXGBE);
if_printf(ifp, "rss_config failed: %d\n", rc);
goto done;
}
#ifdef RSS
vi->hashen = hashconfig_to_hashen(hashconfig);
/*
* We may have had to enable some hashes even though the global config
* wants them disabled. This is a potential problem that must be
* reported to the user.
*/
extra = hashen_to_hashconfig(vi->hashen) ^ hashconfig;
/*
* If we consider only the supported hash types, then the enabled hashes
* are a superset of the requested hashes. In other words, there cannot
* be any supported hash that was requested but not enabled, but there
* can be hashes that were not requested but had to be enabled.
*/
extra &= SUPPORTED_RSS_HASHTYPES;
MPASS((extra & hashconfig) == 0);
if (extra) {
if_printf(ifp,
"global RSS config (0x%x) cannot be accommodated.\n",
hashconfig);
}
if (extra & RSS_HASHTYPE_RSS_IPV4)
if_printf(ifp, "IPv4 2-tuple hashing forced on.\n");
if (extra & RSS_HASHTYPE_RSS_TCP_IPV4)
if_printf(ifp, "TCP/IPv4 4-tuple hashing forced on.\n");
if (extra & RSS_HASHTYPE_RSS_IPV6)
if_printf(ifp, "IPv6 2-tuple hashing forced on.\n");
if (extra & RSS_HASHTYPE_RSS_TCP_IPV6)
if_printf(ifp, "TCP/IPv6 4-tuple hashing forced on.\n");
if (extra & RSS_HASHTYPE_RSS_UDP_IPV4)
if_printf(ifp, "UDP/IPv4 4-tuple hashing forced on.\n");
if (extra & RSS_HASHTYPE_RSS_UDP_IPV6)
if_printf(ifp, "UDP/IPv6 4-tuple hashing forced on.\n");
#else
vi->hashen = F_FW_RSS_VI_CONFIG_CMD_IP6FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_IP6TWOTUPEN |
F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN | F_FW_RSS_VI_CONFIG_CMD_UDPEN;
#endif
rc = -t4_config_vi_rss(sc, sc->mbox, vi->viid, vi->hashen, rss[0], 0, 0);
if (rc != 0) {
free(rss, M_CXGBE);
if_printf(ifp, "rss hash/defaultq config failed: %d\n", rc);
goto done;
}
vi->rss = rss;
vi->flags |= VI_INIT_DONE;
done:
if (rc != 0)
vi_full_uninit(vi);
return (rc);
}
/*
* Idempotent.
*/
int
vi_full_uninit(struct vi_info *vi)
{
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
int i;
struct sge_rxq *rxq;
struct sge_txq *txq;
#ifdef TCP_OFFLOAD
struct sge_ofld_rxq *ofld_rxq;
#endif
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
struct sge_wrq *ofld_txq;
#endif
if (vi->flags & VI_INIT_DONE) {
/* Need to quiesce queues. */
/* XXX: Only for the first VI? */
if (IS_MAIN_VI(vi) && !(sc->flags & IS_VF))
quiesce_wrq(sc, &sc->sge.ctrlq[pi->port_id]);
for_each_txq(vi, i, txq) {
quiesce_txq(sc, txq);
}
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
for_each_ofld_txq(vi, i, ofld_txq) {
quiesce_wrq(sc, ofld_txq);
}
#endif
for_each_rxq(vi, i, rxq) {
quiesce_iq(sc, &rxq->iq);
quiesce_fl(sc, &rxq->fl);
}
#ifdef TCP_OFFLOAD
for_each_ofld_rxq(vi, i, ofld_rxq) {
quiesce_iq(sc, &ofld_rxq->iq);
quiesce_fl(sc, &ofld_rxq->fl);
}
#endif
free(vi->rss, M_CXGBE);
free(vi->nm_rss, M_CXGBE);
}
t4_teardown_vi_queues(vi);
vi->flags &= ~VI_INIT_DONE;
return (0);
}
static void
quiesce_txq(struct adapter *sc, struct sge_txq *txq)
{
struct sge_eq *eq = &txq->eq;
struct sge_qstat *spg = (void *)&eq->desc[eq->sidx];
(void) sc; /* unused */
#ifdef INVARIANTS
TXQ_LOCK(txq);
MPASS((eq->flags & EQ_ENABLED) == 0);
TXQ_UNLOCK(txq);
#endif
/* Wait for the mp_ring to empty. */
while (!mp_ring_is_idle(txq->r)) {
mp_ring_check_drainage(txq->r, 0);
pause("rquiesce", 1);
}
/* Then wait for the hardware to finish. */
while (spg->cidx != htobe16(eq->pidx))
pause("equiesce", 1);
/* Finally, wait for the driver to reclaim all descriptors. */
while (eq->cidx != eq->pidx)
pause("dquiesce", 1);
}
static void
quiesce_wrq(struct adapter *sc, struct sge_wrq *wrq)
{
/* XXXTX */
}
static void
quiesce_iq(struct adapter *sc, struct sge_iq *iq)
{
(void) sc; /* unused */
/* Synchronize with the interrupt handler */
while (!atomic_cmpset_int(&iq->state, IQS_IDLE, IQS_DISABLED))
pause("iqfree", 1);
}
static void
quiesce_fl(struct adapter *sc, struct sge_fl *fl)
{
mtx_lock(&sc->sfl_lock);
FL_LOCK(fl);
fl->flags |= FL_DOOMED;
FL_UNLOCK(fl);
callout_stop(&sc->sfl_callout);
mtx_unlock(&sc->sfl_lock);
KASSERT((fl->flags & FL_STARVING) == 0,
("%s: still starving", __func__));
}
static int
t4_alloc_irq(struct adapter *sc, struct irq *irq, int rid,
driver_intr_t *handler, void *arg, char *name)
{
int rc;
irq->rid = rid;
irq->res = bus_alloc_resource_any(sc->dev, SYS_RES_IRQ, &irq->rid,
RF_SHAREABLE | RF_ACTIVE);
if (irq->res == NULL) {
device_printf(sc->dev,
"failed to allocate IRQ for rid %d, name %s.\n", rid, name);
return (ENOMEM);
}
rc = bus_setup_intr(sc->dev, irq->res, INTR_MPSAFE | INTR_TYPE_NET,
NULL, handler, arg, &irq->tag);
if (rc != 0) {
device_printf(sc->dev,
"failed to setup interrupt for rid %d, name %s: %d\n",
rid, name, rc);
} else if (name)
bus_describe_intr(sc->dev, irq->res, irq->tag, "%s", name);
return (rc);
}
static int
t4_free_irq(struct adapter *sc, struct irq *irq)
{
if (irq->tag)
bus_teardown_intr(sc->dev, irq->res, irq->tag);
if (irq->res)
bus_release_resource(sc->dev, SYS_RES_IRQ, irq->rid, irq->res);
bzero(irq, sizeof(*irq));
return (0);
}
static void
get_regs(struct adapter *sc, struct t4_regdump *regs, uint8_t *buf)
{
regs->version = chip_id(sc) | chip_rev(sc) << 10;
t4_get_regs(sc, buf, regs->len);
}
#define A_PL_INDIR_CMD 0x1f8
#define S_PL_AUTOINC 31
#define M_PL_AUTOINC 0x1U
#define V_PL_AUTOINC(x) ((x) << S_PL_AUTOINC)
#define G_PL_AUTOINC(x) (((x) >> S_PL_AUTOINC) & M_PL_AUTOINC)
#define S_PL_VFID 20
#define M_PL_VFID 0xffU
#define V_PL_VFID(x) ((x) << S_PL_VFID)
#define G_PL_VFID(x) (((x) >> S_PL_VFID) & M_PL_VFID)
#define S_PL_ADDR 0
#define M_PL_ADDR 0xfffffU
#define V_PL_ADDR(x) ((x) << S_PL_ADDR)
#define G_PL_ADDR(x) (((x) >> S_PL_ADDR) & M_PL_ADDR)
#define A_PL_INDIR_DATA 0x1fc
static uint64_t
read_vf_stat(struct adapter *sc, unsigned int viid, int reg)
{
u32 stats[2];
mtx_assert(&sc->reg_lock, MA_OWNED);
if (sc->flags & IS_VF) {
stats[0] = t4_read_reg(sc, VF_MPS_REG(reg));
stats[1] = t4_read_reg(sc, VF_MPS_REG(reg + 4));
} else {
t4_write_reg(sc, A_PL_INDIR_CMD, V_PL_AUTOINC(1) |
V_PL_VFID(G_FW_VIID_VIN(viid)) |
V_PL_ADDR(VF_MPS_REG(reg)));
stats[0] = t4_read_reg(sc, A_PL_INDIR_DATA);
stats[1] = t4_read_reg(sc, A_PL_INDIR_DATA);
}
return (((uint64_t)stats[1]) << 32 | stats[0]);
}
static void
t4_get_vi_stats(struct adapter *sc, unsigned int viid,
struct fw_vi_stats_vf *stats)
{
#define GET_STAT(name) \
read_vf_stat(sc, viid, A_MPS_VF_STAT_##name##_L)
stats->tx_bcast_bytes = GET_STAT(TX_VF_BCAST_BYTES);
stats->tx_bcast_frames = GET_STAT(TX_VF_BCAST_FRAMES);
stats->tx_mcast_bytes = GET_STAT(TX_VF_MCAST_BYTES);
stats->tx_mcast_frames = GET_STAT(TX_VF_MCAST_FRAMES);
stats->tx_ucast_bytes = GET_STAT(TX_VF_UCAST_BYTES);
stats->tx_ucast_frames = GET_STAT(TX_VF_UCAST_FRAMES);
stats->tx_drop_frames = GET_STAT(TX_VF_DROP_FRAMES);
stats->tx_offload_bytes = GET_STAT(TX_VF_OFFLOAD_BYTES);
stats->tx_offload_frames = GET_STAT(TX_VF_OFFLOAD_FRAMES);
stats->rx_bcast_bytes = GET_STAT(RX_VF_BCAST_BYTES);
stats->rx_bcast_frames = GET_STAT(RX_VF_BCAST_FRAMES);
stats->rx_mcast_bytes = GET_STAT(RX_VF_MCAST_BYTES);
stats->rx_mcast_frames = GET_STAT(RX_VF_MCAST_FRAMES);
stats->rx_ucast_bytes = GET_STAT(RX_VF_UCAST_BYTES);
stats->rx_ucast_frames = GET_STAT(RX_VF_UCAST_FRAMES);
stats->rx_err_frames = GET_STAT(RX_VF_ERR_FRAMES);
#undef GET_STAT
}
static void
t4_clr_vi_stats(struct adapter *sc, unsigned int viid)
{
int reg;
t4_write_reg(sc, A_PL_INDIR_CMD, V_PL_AUTOINC(1) |
V_PL_VFID(G_FW_VIID_VIN(viid)) |
V_PL_ADDR(VF_MPS_REG(A_MPS_VF_STAT_TX_VF_BCAST_BYTES_L)));
for (reg = A_MPS_VF_STAT_TX_VF_BCAST_BYTES_L;
reg <= A_MPS_VF_STAT_RX_VF_ERR_FRAMES_H; reg += 4)
t4_write_reg(sc, A_PL_INDIR_DATA, 0);
}
static void
vi_refresh_stats(struct adapter *sc, struct vi_info *vi)
{
struct timeval tv;
const struct timeval interval = {0, 250000}; /* 250ms */
if (!(vi->flags & VI_INIT_DONE))
return;
getmicrotime(&tv);
timevalsub(&tv, &interval);
if (timevalcmp(&tv, &vi->last_refreshed, <))
return;
mtx_lock(&sc->reg_lock);
t4_get_vi_stats(sc, vi->viid, &vi->stats);
getmicrotime(&vi->last_refreshed);
mtx_unlock(&sc->reg_lock);
}
static void
cxgbe_refresh_stats(struct adapter *sc, struct port_info *pi)
{
u_int i, v, tnl_cong_drops, bg_map;
struct timeval tv;
const struct timeval interval = {0, 250000}; /* 250ms */
getmicrotime(&tv);
timevalsub(&tv, &interval);
if (timevalcmp(&tv, &pi->last_refreshed, <))
return;
tnl_cong_drops = 0;
t4_get_port_stats(sc, pi->tx_chan, &pi->stats);
bg_map = pi->mps_bg_map;
while (bg_map) {
i = ffs(bg_map) - 1;
mtx_lock(&sc->reg_lock);
t4_read_indirect(sc, A_TP_MIB_INDEX, A_TP_MIB_DATA, &v, 1,
A_TP_MIB_TNL_CNG_DROP_0 + i);
mtx_unlock(&sc->reg_lock);
tnl_cong_drops += v;
bg_map &= ~(1 << i);
}
pi->tnl_cong_drops = tnl_cong_drops;
getmicrotime(&pi->last_refreshed);
}
static void
cxgbe_tick(void *arg)
{
struct port_info *pi = arg;
struct adapter *sc = pi->adapter;
PORT_LOCK_ASSERT_OWNED(pi);
cxgbe_refresh_stats(sc, pi);
callout_schedule(&pi->tick, hz);
}
void
vi_tick(void *arg)
{
struct vi_info *vi = arg;
struct adapter *sc = vi->pi->adapter;
vi_refresh_stats(sc, vi);
callout_schedule(&vi->tick, hz);
}
/*
* Should match fw_caps_config_<foo> enums in t4fw_interface.h
*/
static char *caps_decoder[] = {
"\20\001IPMI\002NCSI", /* 0: NBM */
"\20\001PPP\002QFC\003DCBX", /* 1: link */
"\20\001INGRESS\002EGRESS", /* 2: switch */
"\20\001NIC\002VM\003IDS\004UM\005UM_ISGL" /* 3: NIC */
"\006HASHFILTER\007ETHOFLD",
"\20\001TOE", /* 4: TOE */
"\20\001RDDP\002RDMAC", /* 5: RDMA */
"\20\001INITIATOR_PDU\002TARGET_PDU" /* 6: iSCSI */
"\003INITIATOR_CNXOFLD\004TARGET_CNXOFLD"
"\005INITIATOR_SSNOFLD\006TARGET_SSNOFLD"
"\007T10DIF"
"\010INITIATOR_CMDOFLD\011TARGET_CMDOFLD",
"\20\001LOOKASIDE\002TLSKEYS", /* 7: Crypto */
"\20\001INITIATOR\002TARGET\003CTRL_OFLD" /* 8: FCoE */
"\004PO_INITIATOR\005PO_TARGET",
};
void
t4_sysctls(struct adapter *sc)
{
struct sysctl_ctx_list *ctx;
struct sysctl_oid *oid;
struct sysctl_oid_list *children, *c0;
static char *doorbells = {"\20\1UDB\2WCWR\3UDBWC\4KDB"};
ctx = device_get_sysctl_ctx(sc->dev);
/*
* dev.t4nex.X.
*/
oid = device_get_sysctl_tree(sc->dev);
c0 = children = SYSCTL_CHILDREN(oid);
sc->sc_do_rxcopy = 1;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "do_rx_copy", CTLFLAG_RW,
&sc->sc_do_rxcopy, 1, "Do RX copy of small frames");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "nports", CTLFLAG_RD, NULL,
sc->params.nports, "# of ports");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "doorbells",
CTLTYPE_STRING | CTLFLAG_RD, doorbells, (uintptr_t)&sc->doorbells,
sysctl_bitfield_8b, "A", "available doorbells");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "core_clock", CTLFLAG_RD, NULL,
sc->params.vpd.cclk, "core clock frequency (in kHz)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "holdoff_timers",
CTLTYPE_STRING | CTLFLAG_RD, sc->params.sge.timer_val,
sizeof(sc->params.sge.timer_val), sysctl_int_array, "A",
"interrupt holdoff timer values (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "holdoff_pkt_counts",
CTLTYPE_STRING | CTLFLAG_RD, sc->params.sge.counter_val,
sizeof(sc->params.sge.counter_val), sysctl_int_array, "A",
"interrupt holdoff packet counter values");
t4_sge_sysctls(sc, ctx, children);
sc->lro_timeout = 100;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "lro_timeout", CTLFLAG_RW,
&sc->lro_timeout, 0, "lro inactive-flush timeout (in us)");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "dflags", CTLFLAG_RW,
&sc->debug_flags, 0, "flags to enable runtime debugging");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "tp_version",
CTLFLAG_RD, sc->tp_version, 0, "TP microcode version");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "firmware_version",
CTLFLAG_RD, sc->fw_version, 0, "firmware version");
if (sc->flags & IS_VF)
return;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "hw_revision", CTLFLAG_RD,
NULL, chip_rev(sc), "chip hardware revision");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "sn",
CTLFLAG_RD, sc->params.vpd.sn, 0, "serial number");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "pn",
CTLFLAG_RD, sc->params.vpd.pn, 0, "part number");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "ec",
CTLFLAG_RD, sc->params.vpd.ec, 0, "engineering change");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "md_version",
CTLFLAG_RD, sc->params.vpd.md, 0, "manufacturing diags version");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "na",
CTLFLAG_RD, sc->params.vpd.na, 0, "network address");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "er_version", CTLFLAG_RD,
sc->er_version, 0, "expansion ROM version");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "bs_version", CTLFLAG_RD,
sc->bs_version, 0, "bootstrap firmware version");
SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "scfg_version", CTLFLAG_RD,
NULL, sc->params.scfg_vers, "serial config version");
SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "vpd_version", CTLFLAG_RD,
NULL, sc->params.vpd_vers, "VPD version");
SYSCTL_ADD_STRING(ctx, children, OID_AUTO, "cf",
CTLFLAG_RD, sc->cfg_file, 0, "configuration file");
SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "cfcsum", CTLFLAG_RD, NULL,
sc->cfcsum, "config file checksum");
#define SYSCTL_CAP(name, n, text) \
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, #name, \
CTLTYPE_STRING | CTLFLAG_RD, caps_decoder[n], (uintptr_t)&sc->name, \
sysctl_bitfield_16b, "A", "available " text " capabilities")
SYSCTL_CAP(nbmcaps, 0, "NBM");
SYSCTL_CAP(linkcaps, 1, "link");
SYSCTL_CAP(switchcaps, 2, "switch");
SYSCTL_CAP(niccaps, 3, "NIC");
SYSCTL_CAP(toecaps, 4, "TCP offload");
SYSCTL_CAP(rdmacaps, 5, "RDMA");
SYSCTL_CAP(iscsicaps, 6, "iSCSI");
SYSCTL_CAP(cryptocaps, 7, "crypto");
SYSCTL_CAP(fcoecaps, 8, "FCoE");
#undef SYSCTL_CAP
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "nfilters", CTLFLAG_RD,
NULL, sc->tids.nftids, "number of filters");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "temperature", CTLTYPE_INT |
CTLFLAG_RD, sc, 0, sysctl_temperature, "I",
"chip temperature (in Celsius)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "loadavg", CTLTYPE_STRING |
CTLFLAG_RD, sc, 0, sysctl_loadavg, "A",
"microprocessor load averages (debug firmwares only)");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "core_vdd", CTLFLAG_RD,
&sc->params.core_vdd, 0, "core Vdd (in mV)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "local_cpus",
CTLTYPE_STRING | CTLFLAG_RD, sc, LOCAL_CPUS,
sysctl_cpus, "A", "local CPUs");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "intr_cpus",
CTLTYPE_STRING | CTLFLAG_RD, sc, INTR_CPUS,
sysctl_cpus, "A", "preferred CPUs for interrupts");
/*
* dev.t4nex.X.misc. Marked CTLFLAG_SKIP to avoid information overload.
*/
oid = SYSCTL_ADD_NODE(ctx, c0, OID_AUTO, "misc",
CTLFLAG_RD | CTLFLAG_SKIP, NULL,
"logs and miscellaneous information");
children = SYSCTL_CHILDREN(oid);
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cctrl",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_cctrl, "A", "congestion control");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_ibq_tp0",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_cim_ibq_obq, "A", "CIM IBQ 0 (TP0)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_ibq_tp1",
CTLTYPE_STRING | CTLFLAG_RD, sc, 1,
sysctl_cim_ibq_obq, "A", "CIM IBQ 1 (TP1)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_ibq_ulp",
CTLTYPE_STRING | CTLFLAG_RD, sc, 2,
sysctl_cim_ibq_obq, "A", "CIM IBQ 2 (ULP)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_ibq_sge0",
CTLTYPE_STRING | CTLFLAG_RD, sc, 3,
sysctl_cim_ibq_obq, "A", "CIM IBQ 3 (SGE0)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_ibq_sge1",
CTLTYPE_STRING | CTLFLAG_RD, sc, 4,
sysctl_cim_ibq_obq, "A", "CIM IBQ 4 (SGE1)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_ibq_ncsi",
CTLTYPE_STRING | CTLFLAG_RD, sc, 5,
sysctl_cim_ibq_obq, "A", "CIM IBQ 5 (NCSI)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_la",
- CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
- chip_id(sc) <= CHELSIO_T5 ? sysctl_cim_la : sysctl_cim_la_t6,
+ CTLTYPE_STRING | CTLFLAG_RD, sc, 0, sysctl_cim_la,
"A", "CIM logic analyzer");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_ma_la",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_cim_ma_la, "A", "CIM MA logic analyzer");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_ulp0",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 0 (ULP0)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_ulp1",
CTLTYPE_STRING | CTLFLAG_RD, sc, 1 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 1 (ULP1)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_ulp2",
CTLTYPE_STRING | CTLFLAG_RD, sc, 2 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 2 (ULP2)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_ulp3",
CTLTYPE_STRING | CTLFLAG_RD, sc, 3 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 3 (ULP3)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_sge",
CTLTYPE_STRING | CTLFLAG_RD, sc, 4 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 4 (SGE)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_ncsi",
CTLTYPE_STRING | CTLFLAG_RD, sc, 5 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 5 (NCSI)");
if (chip_id(sc) > CHELSIO_T4) {
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_sge0_rx",
CTLTYPE_STRING | CTLFLAG_RD, sc, 6 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 6 (SGE0-RX)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_obq_sge1_rx",
CTLTYPE_STRING | CTLFLAG_RD, sc, 7 + CIM_NUM_IBQ,
sysctl_cim_ibq_obq, "A", "CIM OBQ 7 (SGE1-RX)");
}
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_pif_la",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_cim_pif_la, "A", "CIM PIF logic analyzer");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cim_qcfg",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_cim_qcfg, "A", "CIM queue configuration");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cpl_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_cpl_stats, "A", "CPL statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "ddp_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_ddp_stats, "A", "non-TCP DDP statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "devlog",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_devlog, "A", "firmware's device log");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "fcoe_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_fcoe_stats, "A", "FCoE statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "hw_sched",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_hw_sched, "A", "hardware scheduler");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "l2t",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_l2t, "A", "hardware L2 table");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "smt",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_smt, "A", "hardware source MAC table");
#ifdef INET6
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "clip",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_clip, "A", "active CLIP table entries");
#endif
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "lb_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_lb_stats, "A", "loopback statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "meminfo",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_meminfo, "A", "memory regions");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "mps_tcam",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
chip_id(sc) <= CHELSIO_T5 ? sysctl_mps_tcam : sysctl_mps_tcam_t6,
"A", "MPS TCAM entries");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "path_mtus",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_path_mtus, "A", "path MTUs");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "pm_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_pm_stats, "A", "PM statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "rdma_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_rdma_stats, "A", "RDMA statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "tcp_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_tcp_stats, "A", "TCP statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "tids",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_tids, "A", "TID information");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "tp_err_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_tp_err_stats, "A", "TP error statistics");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "tp_la_mask",
CTLTYPE_INT | CTLFLAG_RW, sc, 0, sysctl_tp_la_mask, "I",
"TP logic analyzer event capture mask");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "tp_la",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_tp_la, "A", "TP logic analyzer");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "tx_rate",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_tx_rate, "A", "Tx rate");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "ulprx_la",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_ulprx_la, "A", "ULPRX logic analyzer");
if (chip_id(sc) >= CHELSIO_T5) {
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "wcwr_stats",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0,
sysctl_wcwr_stats, "A", "write combined work requests");
}
#ifdef TCP_OFFLOAD
if (is_offload(sc)) {
int i;
char s[4];
/*
* dev.t4nex.X.toe.
*/
oid = SYSCTL_ADD_NODE(ctx, c0, OID_AUTO, "toe", CTLFLAG_RD,
NULL, "TOE parameters");
children = SYSCTL_CHILDREN(oid);
sc->tt.cong_algorithm = -1;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "cong_algorithm",
CTLFLAG_RW, &sc->tt.cong_algorithm, 0, "congestion control "
"(-1 = default, 0 = reno, 1 = tahoe, 2 = newreno, "
"3 = highspeed)");
sc->tt.sndbuf = 256 * 1024;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "sndbuf", CTLFLAG_RW,
&sc->tt.sndbuf, 0, "max hardware send buffer size");
sc->tt.ddp = 0;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "ddp", CTLFLAG_RW,
&sc->tt.ddp, 0, "DDP allowed");
sc->tt.rx_coalesce = 1;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "rx_coalesce",
CTLFLAG_RW, &sc->tt.rx_coalesce, 0, "receive coalescing");
sc->tt.tls = 0;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "tls", CTLFLAG_RW,
&sc->tt.tls, 0, "Inline TLS allowed");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "tls_rx_ports",
CTLTYPE_INT | CTLFLAG_RW, sc, 0, sysctl_tls_rx_ports,
"I", "TCP ports that use inline TLS+TOE RX");
sc->tt.tx_align = 1;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "tx_align",
CTLFLAG_RW, &sc->tt.tx_align, 0, "chop and align payload");
sc->tt.tx_zcopy = 0;
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "tx_zcopy",
CTLFLAG_RW, &sc->tt.tx_zcopy, 0,
"Enable zero-copy aio_write(2)");
sc->tt.cop_managed_offloading = !!t4_cop_managed_offloading;
SYSCTL_ADD_INT(ctx, children, OID_AUTO,
"cop_managed_offloading", CTLFLAG_RW,
&sc->tt.cop_managed_offloading, 0,
"COP (Connection Offload Policy) controls all TOE offload");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "timer_tick",
CTLTYPE_STRING | CTLFLAG_RD, sc, 0, sysctl_tp_tick, "A",
"TP timer tick (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "timestamp_tick",
CTLTYPE_STRING | CTLFLAG_RD, sc, 1, sysctl_tp_tick, "A",
"TCP timestamp tick (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "dack_tick",
CTLTYPE_STRING | CTLFLAG_RD, sc, 2, sysctl_tp_tick, "A",
"DACK tick (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "dack_timer",
CTLTYPE_UINT | CTLFLAG_RD, sc, 0, sysctl_tp_dack_timer,
"IU", "DACK timer (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "rexmt_min",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_RXT_MIN,
sysctl_tp_timer, "LU", "Minimum retransmit interval (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "rexmt_max",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_RXT_MAX,
sysctl_tp_timer, "LU", "Maximum retransmit interval (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "persist_min",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_PERS_MIN,
sysctl_tp_timer, "LU", "Persist timer min (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "persist_max",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_PERS_MAX,
sysctl_tp_timer, "LU", "Persist timer max (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "keepalive_idle",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_KEEP_IDLE,
sysctl_tp_timer, "LU", "Keepalive idle timer (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "keepalive_interval",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_KEEP_INTVL,
sysctl_tp_timer, "LU", "Keepalive interval timer (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "initial_srtt",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_INIT_SRTT,
sysctl_tp_timer, "LU", "Initial SRTT (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "finwait2_timer",
CTLTYPE_ULONG | CTLFLAG_RD, sc, A_TP_FINWAIT2_TIMER,
sysctl_tp_timer, "LU", "FINWAIT2 timer (us)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "syn_rexmt_count",
CTLTYPE_UINT | CTLFLAG_RD, sc, S_SYNSHIFTMAX,
sysctl_tp_shift_cnt, "IU",
"Number of SYN retransmissions before abort");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "rexmt_count",
CTLTYPE_UINT | CTLFLAG_RD, sc, S_RXTSHIFTMAXR2,
sysctl_tp_shift_cnt, "IU",
"Number of retransmissions before abort");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "keepalive_count",
CTLTYPE_UINT | CTLFLAG_RD, sc, S_KEEPALIVEMAXR2,
sysctl_tp_shift_cnt, "IU",
"Number of keepalive probes before abort");
oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, "rexmt_backoff",
CTLFLAG_RD, NULL, "TOE retransmit backoffs");
children = SYSCTL_CHILDREN(oid);
for (i = 0; i < 16; i++) {
snprintf(s, sizeof(s), "%u", i);
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, s,
CTLTYPE_UINT | CTLFLAG_RD, sc, i, sysctl_tp_backoff,
"IU", "TOE retransmit backoff");
}
}
#endif
}
void
vi_sysctls(struct vi_info *vi)
{
struct sysctl_ctx_list *ctx;
struct sysctl_oid *oid;
struct sysctl_oid_list *children;
ctx = device_get_sysctl_ctx(vi->dev);
/*
* dev.v?(cxgbe|cxl).X.
*/
oid = device_get_sysctl_tree(vi->dev);
children = SYSCTL_CHILDREN(oid);
SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "viid", CTLFLAG_RD, NULL,
vi->viid, "VI identifier");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "nrxq", CTLFLAG_RD,
&vi->nrxq, 0, "# of rx queues");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "ntxq", CTLFLAG_RD,
&vi->ntxq, 0, "# of tx queues");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "first_rxq", CTLFLAG_RD,
&vi->first_rxq, 0, "index of first rx queue");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "first_txq", CTLFLAG_RD,
&vi->first_txq, 0, "index of first tx queue");
SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "rss_base", CTLFLAG_RD, NULL,
vi->rss_base, "start of RSS indirection table");
SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "rss_size", CTLFLAG_RD, NULL,
vi->rss_size, "size of RSS indirection table");
if (IS_MAIN_VI(vi)) {
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "rsrv_noflowq",
CTLTYPE_INT | CTLFLAG_RW, vi, 0, sysctl_noflowq, "IU",
"Reserve queue 0 for non-flowid packets");
}
#ifdef TCP_OFFLOAD
if (vi->nofldrxq != 0) {
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "nofldrxq", CTLFLAG_RD,
&vi->nofldrxq, 0,
"# of rx queues for offloaded TCP connections");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "first_ofld_rxq",
CTLFLAG_RD, &vi->first_ofld_rxq, 0,
"index of first TOE rx queue");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "holdoff_tmr_idx_ofld",
CTLTYPE_INT | CTLFLAG_RW, vi, 0,
sysctl_holdoff_tmr_idx_ofld, "I",
"holdoff timer index for TOE queues");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "holdoff_pktc_idx_ofld",
CTLTYPE_INT | CTLFLAG_RW, vi, 0,
sysctl_holdoff_pktc_idx_ofld, "I",
"holdoff packet counter index for TOE queues");
}
#endif
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
if (vi->nofldtxq != 0) {
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "nofldtxq", CTLFLAG_RD,
&vi->nofldtxq, 0,
"# of tx queues for TOE/ETHOFLD");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "first_ofld_txq",
CTLFLAG_RD, &vi->first_ofld_txq, 0,
"index of first TOE/ETHOFLD tx queue");
}
#endif
#ifdef DEV_NETMAP
if (vi->nnmrxq != 0) {
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "nnmrxq", CTLFLAG_RD,
&vi->nnmrxq, 0, "# of netmap rx queues");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "nnmtxq", CTLFLAG_RD,
&vi->nnmtxq, 0, "# of netmap tx queues");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "first_nm_rxq",
CTLFLAG_RD, &vi->first_nm_rxq, 0,
"index of first netmap rx queue");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "first_nm_txq",
CTLFLAG_RD, &vi->first_nm_txq, 0,
"index of first netmap tx queue");
}
#endif
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "holdoff_tmr_idx",
CTLTYPE_INT | CTLFLAG_RW, vi, 0, sysctl_holdoff_tmr_idx, "I",
"holdoff timer index");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "holdoff_pktc_idx",
CTLTYPE_INT | CTLFLAG_RW, vi, 0, sysctl_holdoff_pktc_idx, "I",
"holdoff packet counter index");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "qsize_rxq",
CTLTYPE_INT | CTLFLAG_RW, vi, 0, sysctl_qsize_rxq, "I",
"rx queue size");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "qsize_txq",
CTLTYPE_INT | CTLFLAG_RW, vi, 0, sysctl_qsize_txq, "I",
"tx queue size");
}
static void
cxgbe_sysctls(struct port_info *pi)
{
struct sysctl_ctx_list *ctx;
struct sysctl_oid *oid;
struct sysctl_oid_list *children, *children2;
struct adapter *sc = pi->adapter;
int i;
char name[16];
static char *tc_flags = {"\20\1USER\2SYNC\3ASYNC\4ERR"};
ctx = device_get_sysctl_ctx(pi->dev);
/*
* dev.cxgbe.X.
*/
oid = device_get_sysctl_tree(pi->dev);
children = SYSCTL_CHILDREN(oid);
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "linkdnrc", CTLTYPE_STRING |
CTLFLAG_RD, pi, 0, sysctl_linkdnrc, "A", "reason why link is down");
if (pi->port_type == FW_PORT_TYPE_BT_XAUI) {
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "temperature",
CTLTYPE_INT | CTLFLAG_RD, pi, 0, sysctl_btphy, "I",
"PHY temperature (in Celsius)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "fw_version",
CTLTYPE_INT | CTLFLAG_RD, pi, 1, sysctl_btphy, "I",
"PHY firmware version");
}
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "pause_settings",
CTLTYPE_STRING | CTLFLAG_RW, pi, 0, sysctl_pause_settings, "A",
"PAUSE settings (bit 0 = rx_pause, 1 = tx_pause, 2 = pause_autoneg)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "fec",
CTLTYPE_STRING | CTLFLAG_RW, pi, 0, sysctl_fec, "A",
"Forward Error Correction (bit 0 = RS, bit 1 = BASER_RS)");
SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "autoneg",
CTLTYPE_INT | CTLFLAG_RW, pi, 0, sysctl_autoneg, "I",
"autonegotiation (-1 = not supported)");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "max_speed", CTLFLAG_RD, NULL,
port_top_speed(pi), "max speed (in Gbps)");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "mps_bg_map", CTLFLAG_RD, NULL,
pi->mps_bg_map, "MPS buffer group map");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "rx_e_chan_map", CTLFLAG_RD,
NULL, pi->rx_e_chan_map, "TP rx e-channel map");
if (sc->flags & IS_VF)
return;
/*
* dev.(cxgbe|cxl).X.tc.
*/
oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, "tc", CTLFLAG_RD, NULL,
"Tx scheduler traffic classes (cl_rl)");
children2 = SYSCTL_CHILDREN(oid);
SYSCTL_ADD_UINT(ctx, children2, OID_AUTO, "pktsize",
CTLFLAG_RW, &pi->sched_params->pktsize, 0,
"pktsize for per-flow cl-rl (0 means up to the driver)");
SYSCTL_ADD_UINT(ctx, children2, OID_AUTO, "burstsize",
CTLFLAG_RW, &pi->sched_params->burstsize, 0,
"burstsize for per-flow cl-rl (0 means up to the driver)");
for (i = 0; i < sc->chip_params->nsched_cls; i++) {
struct tx_cl_rl_params *tc = &pi->sched_params->cl_rl[i];
snprintf(name, sizeof(name), "%d", i);
children2 = SYSCTL_CHILDREN(SYSCTL_ADD_NODE(ctx,
SYSCTL_CHILDREN(oid), OID_AUTO, name, CTLFLAG_RD, NULL,
"traffic class"));
SYSCTL_ADD_PROC(ctx, children2, OID_AUTO, "flags",
CTLTYPE_STRING | CTLFLAG_RD, tc_flags, (uintptr_t)&tc->flags,
sysctl_bitfield_8b, "A", "flags");
SYSCTL_ADD_UINT(ctx, children2, OID_AUTO, "refcount",
CTLFLAG_RD, &tc->refcount, 0, "references to this class");
SYSCTL_ADD_PROC(ctx, children2, OID_AUTO, "params",
CTLTYPE_STRING | CTLFLAG_RD, sc, (pi->port_id << 16) | i,
sysctl_tc_params, "A", "traffic class parameters");
}
/*
* dev.cxgbe.X.stats.
*/
oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, "stats", CTLFLAG_RD,
NULL, "port statistics");
children = SYSCTL_CHILDREN(oid);
SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "tx_parse_error", CTLFLAG_RD,
&pi->tx_parse_error, 0,
"# of tx packets with invalid length or # of segments");
#define SYSCTL_ADD_T4_REG64(pi, name, desc, reg) \
SYSCTL_ADD_OID(ctx, children, OID_AUTO, name, \
CTLTYPE_U64 | CTLFLAG_RD, sc, reg, \
sysctl_handle_t4_reg64, "QU", desc)
SYSCTL_ADD_T4_REG64(pi, "tx_octets", "# of octets in good frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_BYTES_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames", "total # of good frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_FRAMES_L));
SYSCTL_ADD_T4_REG64(pi, "tx_bcast_frames", "# of broadcast frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_BCAST_L));
SYSCTL_ADD_T4_REG64(pi, "tx_mcast_frames", "# of multicast frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_MCAST_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ucast_frames", "# of unicast frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_UCAST_L));
SYSCTL_ADD_T4_REG64(pi, "tx_error_frames", "# of error frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_ERROR_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames_64",
"# of tx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_64B_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames_65_127",
"# of tx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_65B_127B_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames_128_255",
"# of tx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_128B_255B_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames_256_511",
"# of tx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_256B_511B_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames_512_1023",
"# of tx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_512B_1023B_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames_1024_1518",
"# of tx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_1024B_1518B_L));
SYSCTL_ADD_T4_REG64(pi, "tx_frames_1519_max",
"# of tx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_1519B_MAX_L));
SYSCTL_ADD_T4_REG64(pi, "tx_drop", "# of dropped tx frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_DROP_L));
SYSCTL_ADD_T4_REG64(pi, "tx_pause", "# of pause frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PAUSE_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp0", "# of PPP prio 0 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP0_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp1", "# of PPP prio 1 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP1_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp2", "# of PPP prio 2 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP2_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp3", "# of PPP prio 3 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP3_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp4", "# of PPP prio 4 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP4_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp5", "# of PPP prio 5 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP5_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp6", "# of PPP prio 6 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP6_L));
SYSCTL_ADD_T4_REG64(pi, "tx_ppp7", "# of PPP prio 7 frames transmitted",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_TX_PORT_PPP7_L));
SYSCTL_ADD_T4_REG64(pi, "rx_octets", "# of octets in good frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_BYTES_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames", "total # of good frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_FRAMES_L));
SYSCTL_ADD_T4_REG64(pi, "rx_bcast_frames", "# of broadcast frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_BCAST_L));
SYSCTL_ADD_T4_REG64(pi, "rx_mcast_frames", "# of multicast frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_MCAST_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ucast_frames", "# of unicast frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_UCAST_L));
SYSCTL_ADD_T4_REG64(pi, "rx_too_long", "# of frames exceeding MTU",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_MTU_ERROR_L));
SYSCTL_ADD_T4_REG64(pi, "rx_jabber", "# of jabber frames",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_MTU_CRC_ERROR_L));
SYSCTL_ADD_T4_REG64(pi, "rx_fcs_err",
"# of frames received with bad FCS",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_CRC_ERROR_L));
SYSCTL_ADD_T4_REG64(pi, "rx_len_err",
"# of frames received with length error",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_LEN_ERROR_L));
SYSCTL_ADD_T4_REG64(pi, "rx_symbol_err", "symbol errors",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_SYM_ERROR_L));
SYSCTL_ADD_T4_REG64(pi, "rx_runt", "# of short frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_LESS_64B_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames_64",
"# of rx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_64B_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames_65_127",
"# of rx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_65B_127B_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames_128_255",
"# of rx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_128B_255B_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames_256_511",
"# of rx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_256B_511B_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames_512_1023",
"# of rx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_512B_1023B_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames_1024_1518",
"# of rx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_1024B_1518B_L));
SYSCTL_ADD_T4_REG64(pi, "rx_frames_1519_max",
"# of rx frames in this range",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_1519B_MAX_L));
SYSCTL_ADD_T4_REG64(pi, "rx_pause", "# of pause frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PAUSE_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp0", "# of PPP prio 0 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP0_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp1", "# of PPP prio 1 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP1_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp2", "# of PPP prio 2 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP2_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp3", "# of PPP prio 3 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP3_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp4", "# of PPP prio 4 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP4_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp5", "# of PPP prio 5 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP5_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp6", "# of PPP prio 6 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP6_L));
SYSCTL_ADD_T4_REG64(pi, "rx_ppp7", "# of PPP prio 7 frames received",
PORT_REG(pi->tx_chan, A_MPS_PORT_STAT_RX_PORT_PPP7_L));
#undef SYSCTL_ADD_T4_REG64
#define SYSCTL_ADD_T4_PORTSTAT(name, desc) \
SYSCTL_ADD_UQUAD(ctx, children, OID_AUTO, #name, CTLFLAG_RD, \
&pi->stats.name, desc)
/* We get these from port_stats and they may be stale by up to 1s */
SYSCTL_ADD_T4_PORTSTAT(rx_ovflow0,
"# drops due to buffer-group 0 overflows");
SYSCTL_ADD_T4_PORTSTAT(rx_ovflow1,
"# drops due to buffer-group 1 overflows");
SYSCTL_ADD_T4_PORTSTAT(rx_ovflow2,
"# drops due to buffer-group 2 overflows");
SYSCTL_ADD_T4_PORTSTAT(rx_ovflow3,
"# drops due to buffer-group 3 overflows");
SYSCTL_ADD_T4_PORTSTAT(rx_trunc0,
"# of buffer-group 0 truncated packets");
SYSCTL_ADD_T4_PORTSTAT(rx_trunc1,
"# of buffer-group 1 truncated packets");
SYSCTL_ADD_T4_PORTSTAT(rx_trunc2,
"# of buffer-group 2 truncated packets");
SYSCTL_ADD_T4_PORTSTAT(rx_trunc3,
"# of buffer-group 3 truncated packets");
#undef SYSCTL_ADD_T4_PORTSTAT
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "tx_tls_records",
CTLFLAG_RD, &pi->tx_tls_records,
"# of TLS records transmitted");
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "tx_tls_octets",
CTLFLAG_RD, &pi->tx_tls_octets,
"# of payload octets in transmitted TLS records");
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "rx_tls_records",
CTLFLAG_RD, &pi->rx_tls_records,
"# of TLS records received");
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "rx_tls_octets",
CTLFLAG_RD, &pi->rx_tls_octets,
"# of payload octets in received TLS records");
}
static int
sysctl_int_array(SYSCTL_HANDLER_ARGS)
{
int rc, *i, space = 0;
struct sbuf sb;
sbuf_new_for_sysctl(&sb, NULL, 64, req);
for (i = arg1; arg2; arg2 -= sizeof(int), i++) {
if (space)
sbuf_printf(&sb, " ");
sbuf_printf(&sb, "%d", *i);
space = 1;
}
rc = sbuf_finish(&sb);
sbuf_delete(&sb);
return (rc);
}
static int
sysctl_bitfield_8b(SYSCTL_HANDLER_ARGS)
{
int rc;
struct sbuf *sb;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 128, req);
if (sb == NULL)
return (ENOMEM);
sbuf_printf(sb, "%b", *(uint8_t *)(uintptr_t)arg2, (char *)arg1);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_bitfield_16b(SYSCTL_HANDLER_ARGS)
{
int rc;
struct sbuf *sb;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 128, req);
if (sb == NULL)
return (ENOMEM);
sbuf_printf(sb, "%b", *(uint16_t *)(uintptr_t)arg2, (char *)arg1);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_btphy(SYSCTL_HANDLER_ARGS)
{
struct port_info *pi = arg1;
int op = arg2;
struct adapter *sc = pi->adapter;
u_int v;
int rc;
rc = begin_synchronized_op(sc, &pi->vi[0], SLEEP_OK | INTR_OK, "t4btt");
if (rc)
return (rc);
/* XXX: magic numbers */
rc = -t4_mdio_rd(sc, sc->mbox, pi->mdio_addr, 0x1e, op ? 0x20 : 0xc820,
&v);
end_synchronized_op(sc, 0);
if (rc)
return (rc);
if (op == 0)
v /= 256;
rc = sysctl_handle_int(oidp, &v, 0, req);
return (rc);
}
static int
sysctl_noflowq(SYSCTL_HANDLER_ARGS)
{
struct vi_info *vi = arg1;
int rc, val;
val = vi->rsrv_noflowq;
rc = sysctl_handle_int(oidp, &val, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if ((val >= 1) && (vi->ntxq > 1))
vi->rsrv_noflowq = 1;
else
vi->rsrv_noflowq = 0;
return (rc);
}
static int
sysctl_holdoff_tmr_idx(SYSCTL_HANDLER_ARGS)
{
struct vi_info *vi = arg1;
struct adapter *sc = vi->pi->adapter;
int idx, rc, i;
struct sge_rxq *rxq;
uint8_t v;
idx = vi->tmr_idx;
rc = sysctl_handle_int(oidp, &idx, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (idx < 0 || idx >= SGE_NTIMERS)
return (EINVAL);
rc = begin_synchronized_op(sc, vi, HOLD_LOCK | SLEEP_OK | INTR_OK,
"t4tmr");
if (rc)
return (rc);
v = V_QINTR_TIMER_IDX(idx) | V_QINTR_CNT_EN(vi->pktc_idx != -1);
for_each_rxq(vi, i, rxq) {
#ifdef atomic_store_rel_8
atomic_store_rel_8(&rxq->iq.intr_params, v);
#else
rxq->iq.intr_params = v;
#endif
}
vi->tmr_idx = idx;
end_synchronized_op(sc, LOCK_HELD);
return (0);
}
static int
sysctl_holdoff_pktc_idx(SYSCTL_HANDLER_ARGS)
{
struct vi_info *vi = arg1;
struct adapter *sc = vi->pi->adapter;
int idx, rc;
idx = vi->pktc_idx;
rc = sysctl_handle_int(oidp, &idx, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (idx < -1 || idx >= SGE_NCOUNTERS)
return (EINVAL);
rc = begin_synchronized_op(sc, vi, HOLD_LOCK | SLEEP_OK | INTR_OK,
"t4pktc");
if (rc)
return (rc);
if (vi->flags & VI_INIT_DONE)
rc = EBUSY; /* cannot be changed once the queues are created */
else
vi->pktc_idx = idx;
end_synchronized_op(sc, LOCK_HELD);
return (rc);
}
static int
sysctl_qsize_rxq(SYSCTL_HANDLER_ARGS)
{
struct vi_info *vi = arg1;
struct adapter *sc = vi->pi->adapter;
int qsize, rc;
qsize = vi->qsize_rxq;
rc = sysctl_handle_int(oidp, &qsize, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (qsize < 128 || (qsize & 7))
return (EINVAL);
rc = begin_synchronized_op(sc, vi, HOLD_LOCK | SLEEP_OK | INTR_OK,
"t4rxqs");
if (rc)
return (rc);
if (vi->flags & VI_INIT_DONE)
rc = EBUSY; /* cannot be changed once the queues are created */
else
vi->qsize_rxq = qsize;
end_synchronized_op(sc, LOCK_HELD);
return (rc);
}
static int
sysctl_qsize_txq(SYSCTL_HANDLER_ARGS)
{
struct vi_info *vi = arg1;
struct adapter *sc = vi->pi->adapter;
int qsize, rc;
qsize = vi->qsize_txq;
rc = sysctl_handle_int(oidp, &qsize, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (qsize < 128 || qsize > 65536)
return (EINVAL);
rc = begin_synchronized_op(sc, vi, HOLD_LOCK | SLEEP_OK | INTR_OK,
"t4txqs");
if (rc)
return (rc);
if (vi->flags & VI_INIT_DONE)
rc = EBUSY; /* cannot be changed once the queues are created */
else
vi->qsize_txq = qsize;
end_synchronized_op(sc, LOCK_HELD);
return (rc);
}
static int
sysctl_pause_settings(SYSCTL_HANDLER_ARGS)
{
struct port_info *pi = arg1;
struct adapter *sc = pi->adapter;
struct link_config *lc = &pi->link_cfg;
int rc;
if (req->newptr == NULL) {
struct sbuf *sb;
static char *bits = "\20\1RX\2TX\3AUTO";
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 128, req);
if (sb == NULL)
return (ENOMEM);
if (lc->link_ok) {
sbuf_printf(sb, "%b", (lc->fc & (PAUSE_TX | PAUSE_RX)) |
(lc->requested_fc & PAUSE_AUTONEG), bits);
} else {
sbuf_printf(sb, "%b", lc->requested_fc & (PAUSE_TX |
PAUSE_RX | PAUSE_AUTONEG), bits);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
} else {
char s[2];
int n;
s[0] = '0' + (lc->requested_fc & (PAUSE_TX | PAUSE_RX |
PAUSE_AUTONEG));
s[1] = 0;
rc = sysctl_handle_string(oidp, s, sizeof(s), req);
if (rc != 0)
return (rc);
if (s[1] != 0)
return (EINVAL);
if (s[0] < '0' || s[0] > '9')
return (EINVAL); /* not a number */
n = s[0] - '0';
if (n & ~(PAUSE_TX | PAUSE_RX | PAUSE_AUTONEG))
return (EINVAL); /* some other bit is set too */
rc = begin_synchronized_op(sc, &pi->vi[0], SLEEP_OK | INTR_OK,
"t4PAUSE");
if (rc)
return (rc);
PORT_LOCK(pi);
lc->requested_fc = n;
fixup_link_config(pi);
if (pi->up_vis > 0)
rc = apply_link_config(pi);
set_current_media(pi);
PORT_UNLOCK(pi);
end_synchronized_op(sc, 0);
}
return (rc);
}
static int
sysctl_fec(SYSCTL_HANDLER_ARGS)
{
struct port_info *pi = arg1;
struct adapter *sc = pi->adapter;
struct link_config *lc = &pi->link_cfg;
int rc;
int8_t old;
if (req->newptr == NULL) {
struct sbuf *sb;
static char *bits = "\20\1RS\2BASE-R\3RSVD1\4RSVD2\5RSVD3\6AUTO";
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 128, req);
if (sb == NULL)
return (ENOMEM);
/*
* Display the requested_fec when the link is down -- the actual
* FEC makes sense only when the link is up.
*/
if (lc->link_ok) {
sbuf_printf(sb, "%b", (lc->fec & M_FW_PORT_CAP32_FEC) |
(lc->requested_fec & FEC_AUTO), bits);
} else {
sbuf_printf(sb, "%b", lc->requested_fec, bits);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
} else {
char s[3];
int n;
snprintf(s, sizeof(s), "%d",
lc->requested_fec == FEC_AUTO ? -1 :
lc->requested_fec & M_FW_PORT_CAP32_FEC);
rc = sysctl_handle_string(oidp, s, sizeof(s), req);
if (rc != 0)
return (rc);
n = strtol(&s[0], NULL, 0);
if (n < 0 || n & FEC_AUTO)
n = FEC_AUTO;
else {
if (n & ~M_FW_PORT_CAP32_FEC)
return (EINVAL); /* some other bit is set too */
if (!powerof2(n))
return (EINVAL); /* one bit can be set at most */
}
rc = begin_synchronized_op(sc, &pi->vi[0], SLEEP_OK | INTR_OK,
"t4fec");
if (rc)
return (rc);
PORT_LOCK(pi);
old = lc->requested_fec;
if (n == FEC_AUTO)
lc->requested_fec = FEC_AUTO;
else if (n == 0)
lc->requested_fec = FEC_NONE;
else {
if ((lc->supported | V_FW_PORT_CAP32_FEC(n)) !=
lc->supported) {
rc = ENOTSUP;
goto done;
}
lc->requested_fec = n;
}
fixup_link_config(pi);
if (pi->up_vis > 0) {
rc = apply_link_config(pi);
if (rc != 0) {
lc->requested_fec = old;
if (rc == FW_EPROTO)
rc = ENOTSUP;
}
}
done:
PORT_UNLOCK(pi);
end_synchronized_op(sc, 0);
}
return (rc);
}
static int
sysctl_autoneg(SYSCTL_HANDLER_ARGS)
{
struct port_info *pi = arg1;
struct adapter *sc = pi->adapter;
struct link_config *lc = &pi->link_cfg;
int rc, val;
if (lc->supported & FW_PORT_CAP32_ANEG)
val = lc->requested_aneg == AUTONEG_DISABLE ? 0 : 1;
else
val = -1;
rc = sysctl_handle_int(oidp, &val, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (val == 0)
val = AUTONEG_DISABLE;
else if (val == 1)
val = AUTONEG_ENABLE;
else
val = AUTONEG_AUTO;
rc = begin_synchronized_op(sc, &pi->vi[0], SLEEP_OK | INTR_OK,
"t4aneg");
if (rc)
return (rc);
PORT_LOCK(pi);
if (val == AUTONEG_ENABLE && !(lc->supported & FW_PORT_CAP32_ANEG)) {
rc = ENOTSUP;
goto done;
}
lc->requested_aneg = val;
fixup_link_config(pi);
if (pi->up_vis > 0)
rc = apply_link_config(pi);
set_current_media(pi);
done:
PORT_UNLOCK(pi);
end_synchronized_op(sc, 0);
return (rc);
}
static int
sysctl_handle_t4_reg64(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
int reg = arg2;
uint64_t val;
val = t4_read_reg64(sc, reg);
return (sysctl_handle_64(oidp, &val, 0, req));
}
static int
sysctl_temperature(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
int rc, t;
uint32_t param, val;
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4temp");
if (rc)
return (rc);
param = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_DIAG) |
V_FW_PARAMS_PARAM_Y(FW_PARAM_DEV_DIAG_TMP);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, &param, &val);
end_synchronized_op(sc, 0);
if (rc)
return (rc);
/* unknown is returned as 0 but we display -1 in that case */
t = val == 0 ? -1 : val;
rc = sysctl_handle_int(oidp, &t, 0, req);
return (rc);
}
static int
sysctl_loadavg(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
uint32_t param, val;
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4lavg");
if (rc)
return (rc);
param = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_LOAD);
rc = -t4_query_params(sc, sc->mbox, sc->pf, 0, 1, &param, &val);
end_synchronized_op(sc, 0);
if (rc)
return (rc);
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
if (val == 0xffffffff) {
/* Only debug and custom firmwares report load averages. */
sbuf_printf(sb, "not available");
} else {
sbuf_printf(sb, "%d %d %d", val & 0xff, (val >> 8) & 0xff,
(val >> 16) & 0xff);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_cctrl(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i;
uint16_t incr[NMTUS][NCCTRL_WIN];
static const char *dec_fac[] = {
"0.5", "0.5625", "0.625", "0.6875", "0.75", "0.8125", "0.875",
"0.9375"
};
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
t4_read_cong_tbl(sc, incr);
for (i = 0; i < NCCTRL_WIN; ++i) {
sbuf_printf(sb, "%2d: %4u %4u %4u %4u %4u %4u %4u %4u\n", i,
incr[0][i], incr[1][i], incr[2][i], incr[3][i], incr[4][i],
incr[5][i], incr[6][i], incr[7][i]);
sbuf_printf(sb, "%8u %4u %4u %4u %4u %4u %4u %4u %5u %s\n",
incr[8][i], incr[9][i], incr[10][i], incr[11][i],
incr[12][i], incr[13][i], incr[14][i], incr[15][i],
sc->params.a_wnd[i], dec_fac[sc->params.b_wnd[i]]);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static const char *qname[CIM_NUM_IBQ + CIM_NUM_OBQ_T5] = {
"TP0", "TP1", "ULP", "SGE0", "SGE1", "NC-SI", /* ibq's */
"ULP0", "ULP1", "ULP2", "ULP3", "SGE", "NC-SI", /* obq's */
"SGE0-RX", "SGE1-RX" /* additional obq's (T5 onwards) */
};
static int
sysctl_cim_ibq_obq(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i, n, qid = arg2;
uint32_t *buf, *p;
char *qtype;
u_int cim_num_obq = sc->chip_params->cim_num_obq;
KASSERT(qid >= 0 && qid < CIM_NUM_IBQ + cim_num_obq,
("%s: bad qid %d\n", __func__, qid));
if (qid < CIM_NUM_IBQ) {
/* inbound queue */
qtype = "IBQ";
n = 4 * CIM_IBQ_SIZE;
buf = malloc(n * sizeof(uint32_t), M_CXGBE, M_ZERO | M_WAITOK);
rc = t4_read_cim_ibq(sc, qid, buf, n);
} else {
/* outbound queue */
qtype = "OBQ";
qid -= CIM_NUM_IBQ;
n = 4 * cim_num_obq * CIM_OBQ_SIZE;
buf = malloc(n * sizeof(uint32_t), M_CXGBE, M_ZERO | M_WAITOK);
rc = t4_read_cim_obq(sc, qid, buf, n);
}
if (rc < 0) {
rc = -rc;
goto done;
}
n = rc * sizeof(uint32_t); /* rc has # of words actually read */
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
goto done;
sb = sbuf_new_for_sysctl(NULL, NULL, PAGE_SIZE, req);
if (sb == NULL) {
rc = ENOMEM;
goto done;
}
sbuf_printf(sb, "%s%d %s", qtype, qid, qname[arg2]);
for (i = 0, p = buf; i < n; i += 16, p += 4)
sbuf_printf(sb, "\n%#06x: %08x %08x %08x %08x", i, p[0], p[1],
p[2], p[3]);
rc = sbuf_finish(sb);
sbuf_delete(sb);
done:
free(buf, M_CXGBE);
return (rc);
}
-static int
-sysctl_cim_la(SYSCTL_HANDLER_ARGS)
+static void
+sbuf_cim_la4(struct adapter *sc, struct sbuf *sb, uint32_t *buf, uint32_t cfg)
{
- struct adapter *sc = arg1;
- u_int cfg;
- struct sbuf *sb;
- uint32_t *buf, *p;
- int rc;
+ uint32_t *p;
- MPASS(chip_id(sc) <= CHELSIO_T5);
-
- rc = -t4_cim_read(sc, A_UP_UP_DBG_LA_CFG, 1, &cfg);
- if (rc != 0)
- return (rc);
-
- rc = sysctl_wire_old_buffer(req, 0);
- if (rc != 0)
- return (rc);
-
- sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
- if (sb == NULL)
- return (ENOMEM);
-
- buf = malloc(sc->params.cim_la_size * sizeof(uint32_t), M_CXGBE,
- M_ZERO | M_WAITOK);
-
- rc = -t4_cim_read_la(sc, buf, NULL);
- if (rc != 0)
- goto done;
-
sbuf_printf(sb, "Status Data PC%s",
cfg & F_UPDBGLACAPTPCONLY ? "" :
" LS0Stat LS0Addr LS0Data");
for (p = buf; p <= &buf[sc->params.cim_la_size - 8]; p += 8) {
if (cfg & F_UPDBGLACAPTPCONLY) {
sbuf_printf(sb, "\n %02x %08x %08x", p[5] & 0xff,
p[6], p[7]);
sbuf_printf(sb, "\n %02x %02x%06x %02x%06x",
(p[3] >> 8) & 0xff, p[3] & 0xff, p[4] >> 8,
p[4] & 0xff, p[5] >> 8);
sbuf_printf(sb, "\n %02x %x%07x %x%07x",
(p[0] >> 4) & 0xff, p[0] & 0xf, p[1] >> 4,
p[1] & 0xf, p[2] >> 4);
} else {
sbuf_printf(sb,
"\n %02x %x%07x %x%07x %08x %08x "
"%08x%08x%08x%08x",
(p[0] >> 4) & 0xff, p[0] & 0xf, p[1] >> 4,
p[1] & 0xf, p[2] >> 4, p[2] & 0xf, p[3], p[4], p[5],
p[6], p[7]);
}
}
-
- rc = sbuf_finish(sb);
- sbuf_delete(sb);
-done:
- free(buf, M_CXGBE);
- return (rc);
}
-static int
-sysctl_cim_la_t6(SYSCTL_HANDLER_ARGS)
+static void
+sbuf_cim_la6(struct adapter *sc, struct sbuf *sb, uint32_t *buf, uint32_t cfg)
{
- struct adapter *sc = arg1;
- u_int cfg;
- struct sbuf *sb;
- uint32_t *buf, *p;
- int rc;
+ uint32_t *p;
- MPASS(chip_id(sc) > CHELSIO_T5);
-
- rc = -t4_cim_read(sc, A_UP_UP_DBG_LA_CFG, 1, &cfg);
- if (rc != 0)
- return (rc);
-
- rc = sysctl_wire_old_buffer(req, 0);
- if (rc != 0)
- return (rc);
-
- sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
- if (sb == NULL)
- return (ENOMEM);
-
- buf = malloc(sc->params.cim_la_size * sizeof(uint32_t), M_CXGBE,
- M_ZERO | M_WAITOK);
-
- rc = -t4_cim_read_la(sc, buf, NULL);
- if (rc != 0)
- goto done;
-
sbuf_printf(sb, "Status Inst Data PC%s",
cfg & F_UPDBGLACAPTPCONLY ? "" :
" LS0Stat LS0Addr LS0Data LS1Stat LS1Addr LS1Data");
for (p = buf; p <= &buf[sc->params.cim_la_size - 10]; p += 10) {
if (cfg & F_UPDBGLACAPTPCONLY) {
sbuf_printf(sb, "\n %02x %08x %08x %08x",
p[3] & 0xff, p[2], p[1], p[0]);
sbuf_printf(sb, "\n %02x %02x%06x %02x%06x %02x%06x",
(p[6] >> 8) & 0xff, p[6] & 0xff, p[5] >> 8,
p[5] & 0xff, p[4] >> 8, p[4] & 0xff, p[3] >> 8);
sbuf_printf(sb, "\n %02x %04x%04x %04x%04x %04x%04x",
(p[9] >> 16) & 0xff, p[9] & 0xffff, p[8] >> 16,
p[8] & 0xffff, p[7] >> 16, p[7] & 0xffff,
p[6] >> 16);
} else {
sbuf_printf(sb, "\n %02x %04x%04x %04x%04x %04x%04x "
"%08x %08x %08x %08x %08x %08x",
(p[9] >> 16) & 0xff,
p[9] & 0xffff, p[8] >> 16,
p[8] & 0xffff, p[7] >> 16,
p[7] & 0xffff, p[6] >> 16,
p[2], p[1], p[0], p[5], p[4], p[3]);
}
}
+}
- rc = sbuf_finish(sb);
- sbuf_delete(sb);
+static int
+sbuf_cim_la(struct adapter *sc, struct sbuf *sb, int flags)
+{
+ uint32_t cfg, *buf;
+ int rc;
+
+ rc = -t4_cim_read(sc, A_UP_UP_DBG_LA_CFG, 1, &cfg);
+ if (rc != 0)
+ return (rc);
+
+ MPASS(flags == M_WAITOK || flags == M_NOWAIT);
+ buf = malloc(sc->params.cim_la_size * sizeof(uint32_t), M_CXGBE,
+ M_ZERO | flags);
+ if (buf == NULL)
+ return (ENOMEM);
+
+ rc = -t4_cim_read_la(sc, buf, NULL);
+ if (rc != 0)
+ goto done;
+ if (chip_id(sc) < CHELSIO_T6)
+ sbuf_cim_la4(sc, sb, buf, cfg);
+ else
+ sbuf_cim_la6(sc, sb, buf, cfg);
+
done:
free(buf, M_CXGBE);
return (rc);
}
static int
+sysctl_cim_la(SYSCTL_HANDLER_ARGS)
+{
+ struct adapter *sc = arg1;
+ struct sbuf *sb;
+ int rc;
+
+ rc = sysctl_wire_old_buffer(req, 0);
+ if (rc != 0)
+ return (rc);
+ sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
+ if (sb == NULL)
+ return (ENOMEM);
+
+ rc = sbuf_cim_la(sc, sb, M_WAITOK);
+ if (rc == 0)
+ rc = sbuf_finish(sb);
+ sbuf_delete(sb);
+ return (rc);
+}
+
+bool
+t4_os_dump_cimla(struct adapter *sc, int arg, bool verbose)
+{
+ struct sbuf sb;
+ int rc;
+
+ if (sbuf_new(&sb, NULL, 4096, SBUF_AUTOEXTEND) != &sb)
+ return (false);
+ rc = sbuf_cim_la(sc, &sb, M_NOWAIT);
+ if (rc == 0) {
+ rc = sbuf_finish(&sb);
+ if (rc == 0) {
+ log(LOG_DEBUG, "%s: CIM LA dump follows.\n%s",
+ device_get_nameunit(sc->dev), sbuf_data(&sb));
+ }
+ }
+ sbuf_delete(&sb);
+ return (false);
+}
+
+static int
sysctl_cim_ma_la(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
u_int i;
struct sbuf *sb;
uint32_t *buf, *p;
int rc;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
buf = malloc(2 * CIM_MALA_SIZE * 5 * sizeof(uint32_t), M_CXGBE,
M_ZERO | M_WAITOK);
t4_cim_read_ma_la(sc, buf, buf + 5 * CIM_MALA_SIZE);
p = buf;
for (i = 0; i < CIM_MALA_SIZE; i++, p += 5) {
sbuf_printf(sb, "\n%02x%08x%08x%08x%08x", p[4], p[3], p[2],
p[1], p[0]);
}
sbuf_printf(sb, "\n\nCnt ID Tag UE Data RDY VLD");
for (i = 0; i < CIM_MALA_SIZE; i++, p += 5) {
sbuf_printf(sb, "\n%3u %2u %x %u %08x%08x %u %u",
(p[2] >> 10) & 0xff, (p[2] >> 7) & 7,
(p[2] >> 3) & 0xf, (p[2] >> 2) & 1,
(p[1] >> 2) | ((p[2] & 3) << 30),
(p[0] >> 2) | ((p[1] & 3) << 30), (p[0] >> 1) & 1,
p[0] & 1);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
free(buf, M_CXGBE);
return (rc);
}
static int
sysctl_cim_pif_la(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
u_int i;
struct sbuf *sb;
uint32_t *buf, *p;
int rc;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
buf = malloc(2 * CIM_PIFLA_SIZE * 6 * sizeof(uint32_t), M_CXGBE,
M_ZERO | M_WAITOK);
t4_cim_read_pif_la(sc, buf, buf + 6 * CIM_PIFLA_SIZE, NULL, NULL);
p = buf;
sbuf_printf(sb, "Cntl ID DataBE Addr Data");
for (i = 0; i < CIM_PIFLA_SIZE; i++, p += 6) {
sbuf_printf(sb, "\n %02x %02x %04x %08x %08x%08x%08x%08x",
(p[5] >> 22) & 0xff, (p[5] >> 16) & 0x3f, p[5] & 0xffff,
p[4], p[3], p[2], p[1], p[0]);
}
sbuf_printf(sb, "\n\nCntl ID Data");
for (i = 0; i < CIM_PIFLA_SIZE; i++, p += 6) {
sbuf_printf(sb, "\n %02x %02x %08x%08x%08x%08x",
(p[4] >> 6) & 0xff, p[4] & 0x3f, p[3], p[2], p[1], p[0]);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
free(buf, M_CXGBE);
return (rc);
}
static int
sysctl_cim_qcfg(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i;
uint16_t base[CIM_NUM_IBQ + CIM_NUM_OBQ_T5];
uint16_t size[CIM_NUM_IBQ + CIM_NUM_OBQ_T5];
uint16_t thres[CIM_NUM_IBQ];
uint32_t obq_wr[2 * CIM_NUM_OBQ_T5], *wr = obq_wr;
uint32_t stat[4 * (CIM_NUM_IBQ + CIM_NUM_OBQ_T5)], *p = stat;
u_int cim_num_obq, ibq_rdaddr, obq_rdaddr, nq;
cim_num_obq = sc->chip_params->cim_num_obq;
if (is_t4(sc)) {
ibq_rdaddr = A_UP_IBQ_0_RDADDR;
obq_rdaddr = A_UP_OBQ_0_REALADDR;
} else {
ibq_rdaddr = A_UP_IBQ_0_SHADOW_RDADDR;
obq_rdaddr = A_UP_OBQ_0_SHADOW_REALADDR;
}
nq = CIM_NUM_IBQ + cim_num_obq;
rc = -t4_cim_read(sc, ibq_rdaddr, 4 * nq, stat);
if (rc == 0)
rc = -t4_cim_read(sc, obq_rdaddr, 2 * cim_num_obq, obq_wr);
if (rc != 0)
return (rc);
t4_read_cimq_cfg(sc, base, size, thres);
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, PAGE_SIZE, req);
if (sb == NULL)
return (ENOMEM);
sbuf_printf(sb,
" Queue Base Size Thres RdPtr WrPtr SOP EOP Avail");
for (i = 0; i < CIM_NUM_IBQ; i++, p += 4)
sbuf_printf(sb, "\n%7s %5x %5u %5u %6x %4x %4u %4u %5u",
qname[i], base[i], size[i], thres[i], G_IBQRDADDR(p[0]),
G_IBQWRADDR(p[1]), G_QUESOPCNT(p[3]), G_QUEEOPCNT(p[3]),
G_QUEREMFLITS(p[2]) * 16);
for ( ; i < nq; i++, p += 4, wr += 2)
sbuf_printf(sb, "\n%7s %5x %5u %12x %4x %4u %4u %5u", qname[i],
base[i], size[i], G_QUERDADDR(p[0]) & 0x3fff,
wr[0] - base[i], G_QUESOPCNT(p[3]), G_QUEEOPCNT(p[3]),
G_QUEREMFLITS(p[2]) * 16);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_cpl_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
struct tp_cpl_stats stats;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
mtx_lock(&sc->reg_lock);
t4_tp_get_cpl_stats(sc, &stats, 0);
mtx_unlock(&sc->reg_lock);
if (sc->chip_params->nchan > 2) {
sbuf_printf(sb, " channel 0 channel 1"
" channel 2 channel 3");
sbuf_printf(sb, "\nCPL requests: %10u %10u %10u %10u",
stats.req[0], stats.req[1], stats.req[2], stats.req[3]);
sbuf_printf(sb, "\nCPL responses: %10u %10u %10u %10u",
stats.rsp[0], stats.rsp[1], stats.rsp[2], stats.rsp[3]);
} else {
sbuf_printf(sb, " channel 0 channel 1");
sbuf_printf(sb, "\nCPL requests: %10u %10u",
stats.req[0], stats.req[1]);
sbuf_printf(sb, "\nCPL responses: %10u %10u",
stats.rsp[0], stats.rsp[1]);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_ddp_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
struct tp_usm_stats stats;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
t4_get_usm_stats(sc, &stats, 1);
sbuf_printf(sb, "Frames: %u\n", stats.frames);
sbuf_printf(sb, "Octets: %ju\n", stats.octets);
sbuf_printf(sb, "Drops: %u", stats.drops);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static const char * const devlog_level_strings[] = {
[FW_DEVLOG_LEVEL_EMERG] = "EMERG",
[FW_DEVLOG_LEVEL_CRIT] = "CRIT",
[FW_DEVLOG_LEVEL_ERR] = "ERR",
[FW_DEVLOG_LEVEL_NOTICE] = "NOTICE",
[FW_DEVLOG_LEVEL_INFO] = "INFO",
[FW_DEVLOG_LEVEL_DEBUG] = "DEBUG"
};
static const char * const devlog_facility_strings[] = {
[FW_DEVLOG_FACILITY_CORE] = "CORE",
[FW_DEVLOG_FACILITY_CF] = "CF",
[FW_DEVLOG_FACILITY_SCHED] = "SCHED",
[FW_DEVLOG_FACILITY_TIMER] = "TIMER",
[FW_DEVLOG_FACILITY_RES] = "RES",
[FW_DEVLOG_FACILITY_HW] = "HW",
[FW_DEVLOG_FACILITY_FLR] = "FLR",
[FW_DEVLOG_FACILITY_DMAQ] = "DMAQ",
[FW_DEVLOG_FACILITY_PHY] = "PHY",
[FW_DEVLOG_FACILITY_MAC] = "MAC",
[FW_DEVLOG_FACILITY_PORT] = "PORT",
[FW_DEVLOG_FACILITY_VI] = "VI",
[FW_DEVLOG_FACILITY_FILTER] = "FILTER",
[FW_DEVLOG_FACILITY_ACL] = "ACL",
[FW_DEVLOG_FACILITY_TM] = "TM",
[FW_DEVLOG_FACILITY_QFC] = "QFC",
[FW_DEVLOG_FACILITY_DCB] = "DCB",
[FW_DEVLOG_FACILITY_ETH] = "ETH",
[FW_DEVLOG_FACILITY_OFLD] = "OFLD",
[FW_DEVLOG_FACILITY_RI] = "RI",
[FW_DEVLOG_FACILITY_ISCSI] = "ISCSI",
[FW_DEVLOG_FACILITY_FCOE] = "FCOE",
[FW_DEVLOG_FACILITY_FOISCSI] = "FOISCSI",
[FW_DEVLOG_FACILITY_FOFCOE] = "FOFCOE",
[FW_DEVLOG_FACILITY_CHNET] = "CHNET",
};
static int
-sysctl_devlog(SYSCTL_HANDLER_ARGS)
+sbuf_devlog(struct adapter *sc, struct sbuf *sb, int flags)
{
- struct adapter *sc = arg1;
+ int i, j, rc, nentries, first = 0;
struct devlog_params *dparams = &sc->params.devlog;
struct fw_devlog_e *buf, *e;
- int i, j, rc, nentries, first = 0;
- struct sbuf *sb;
uint64_t ftstamp = UINT64_MAX;
if (dparams->addr == 0)
return (ENXIO);
- buf = malloc(dparams->size, M_CXGBE, M_NOWAIT);
+ MPASS(flags == M_WAITOK || flags == M_NOWAIT);
+ buf = malloc(dparams->size, M_CXGBE, M_ZERO | flags);
if (buf == NULL)
return (ENOMEM);
rc = read_via_memwin(sc, 1, dparams->addr, (void *)buf, dparams->size);
if (rc != 0)
goto done;
nentries = dparams->size / sizeof(struct fw_devlog_e);
for (i = 0; i < nentries; i++) {
e = &buf[i];
if (e->timestamp == 0)
break; /* end */
e->timestamp = be64toh(e->timestamp);
e->seqno = be32toh(e->seqno);
for (j = 0; j < 8; j++)
e->params[j] = be32toh(e->params[j]);
if (e->timestamp < ftstamp) {
ftstamp = e->timestamp;
first = i;
}
}
if (buf[first].timestamp == 0)
goto done; /* nothing in the log */
- rc = sysctl_wire_old_buffer(req, 0);
- if (rc != 0)
- goto done;
-
- sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
- if (sb == NULL) {
- rc = ENOMEM;
- goto done;
- }
sbuf_printf(sb, "%10s %15s %8s %8s %s\n",
"Seq#", "Tstamp", "Level", "Facility", "Message");
i = first;
do {
e = &buf[i];
if (e->timestamp == 0)
break; /* end */
sbuf_printf(sb, "%10d %15ju %8s %8s ",
e->seqno, e->timestamp,
(e->level < nitems(devlog_level_strings) ?
devlog_level_strings[e->level] : "UNKNOWN"),
(e->facility < nitems(devlog_facility_strings) ?
devlog_facility_strings[e->facility] : "UNKNOWN"));
sbuf_printf(sb, e->fmt, e->params[0], e->params[1],
e->params[2], e->params[3], e->params[4],
e->params[5], e->params[6], e->params[7]);
if (++i == nentries)
i = 0;
} while (i != first);
-
- rc = sbuf_finish(sb);
- sbuf_delete(sb);
done:
free(buf, M_CXGBE);
return (rc);
}
static int
+sysctl_devlog(SYSCTL_HANDLER_ARGS)
+{
+ struct adapter *sc = arg1;
+ int rc;
+ struct sbuf *sb;
+
+ rc = sysctl_wire_old_buffer(req, 0);
+ if (rc != 0)
+ return (rc);
+ sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
+ if (sb == NULL)
+ return (ENOMEM);
+
+ rc = sbuf_devlog(sc, sb, M_WAITOK);
+ if (rc == 0)
+ rc = sbuf_finish(sb);
+ sbuf_delete(sb);
+ return (rc);
+}
+
+void
+t4_os_dump_devlog(struct adapter *sc)
+{
+ int rc;
+ struct sbuf sb;
+
+ if (sbuf_new(&sb, NULL, 4096, SBUF_AUTOEXTEND) != &sb)
+ return;
+ rc = sbuf_devlog(sc, &sb, M_NOWAIT);
+ if (rc == 0) {
+ rc = sbuf_finish(&sb);
+ if (rc == 0) {
+ log(LOG_DEBUG, "%s: device log follows.\n%s",
+ device_get_nameunit(sc->dev), sbuf_data(&sb));
+ }
+ }
+ sbuf_delete(&sb);
+}
+
+static int
sysctl_fcoe_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
struct tp_fcoe_stats stats[MAX_NCHAN];
int i, nchan = sc->chip_params->nchan;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
for (i = 0; i < nchan; i++)
t4_get_fcoe_stats(sc, i, &stats[i], 1);
if (nchan > 2) {
sbuf_printf(sb, " channel 0 channel 1"
" channel 2 channel 3");
sbuf_printf(sb, "\noctetsDDP: %16ju %16ju %16ju %16ju",
stats[0].octets_ddp, stats[1].octets_ddp,
stats[2].octets_ddp, stats[3].octets_ddp);
sbuf_printf(sb, "\nframesDDP: %16u %16u %16u %16u",
stats[0].frames_ddp, stats[1].frames_ddp,
stats[2].frames_ddp, stats[3].frames_ddp);
sbuf_printf(sb, "\nframesDrop: %16u %16u %16u %16u",
stats[0].frames_drop, stats[1].frames_drop,
stats[2].frames_drop, stats[3].frames_drop);
} else {
sbuf_printf(sb, " channel 0 channel 1");
sbuf_printf(sb, "\noctetsDDP: %16ju %16ju",
stats[0].octets_ddp, stats[1].octets_ddp);
sbuf_printf(sb, "\nframesDDP: %16u %16u",
stats[0].frames_ddp, stats[1].frames_ddp);
sbuf_printf(sb, "\nframesDrop: %16u %16u",
stats[0].frames_drop, stats[1].frames_drop);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_hw_sched(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i;
unsigned int map, kbps, ipg, mode;
unsigned int pace_tab[NTX_SCHED];
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
map = t4_read_reg(sc, A_TP_TX_MOD_QUEUE_REQ_MAP);
mode = G_TIMERMODE(t4_read_reg(sc, A_TP_MOD_CONFIG));
t4_read_pace_tbl(sc, pace_tab);
sbuf_printf(sb, "Scheduler Mode Channel Rate (Kbps) "
"Class IPG (0.1 ns) Flow IPG (us)");
for (i = 0; i < NTX_SCHED; ++i, map >>= 2) {
t4_get_tx_sched(sc, i, &kbps, &ipg, 1);
sbuf_printf(sb, "\n %u %-5s %u ", i,
(mode & (1 << i)) ? "flow" : "class", map & 3);
if (kbps)
sbuf_printf(sb, "%9u ", kbps);
else
sbuf_printf(sb, " disabled ");
if (ipg)
sbuf_printf(sb, "%13u ", ipg);
else
sbuf_printf(sb, " disabled ");
if (pace_tab[i])
sbuf_printf(sb, "%10u", pace_tab[i]);
else
sbuf_printf(sb, " disabled");
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_lb_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i, j;
uint64_t *p0, *p1;
struct lb_port_stats s[2];
static const char *stat_name[] = {
"OctetsOK:", "FramesOK:", "BcastFrames:", "McastFrames:",
"UcastFrames:", "ErrorFrames:", "Frames64:", "Frames65To127:",
"Frames128To255:", "Frames256To511:", "Frames512To1023:",
"Frames1024To1518:", "Frames1519ToMax:", "FramesDropped:",
"BG0FramesDropped:", "BG1FramesDropped:", "BG2FramesDropped:",
"BG3FramesDropped:", "BG0FramesTrunc:", "BG1FramesTrunc:",
"BG2FramesTrunc:", "BG3FramesTrunc:"
};
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
memset(s, 0, sizeof(s));
for (i = 0; i < sc->chip_params->nchan; i += 2) {
t4_get_lb_stats(sc, i, &s[0]);
t4_get_lb_stats(sc, i + 1, &s[1]);
p0 = &s[0].octets;
p1 = &s[1].octets;
sbuf_printf(sb, "%s Loopback %u"
" Loopback %u", i == 0 ? "" : "\n", i, i + 1);
for (j = 0; j < nitems(stat_name); j++)
sbuf_printf(sb, "\n%-17s %20ju %20ju", stat_name[j],
*p0++, *p1++);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_linkdnrc(SYSCTL_HANDLER_ARGS)
{
int rc = 0;
struct port_info *pi = arg1;
struct link_config *lc = &pi->link_cfg;
struct sbuf *sb;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 64, req);
if (sb == NULL)
return (ENOMEM);
if (lc->link_ok || lc->link_down_rc == 255)
sbuf_printf(sb, "n/a");
else
sbuf_printf(sb, "%s", t4_link_down_rc_str(lc->link_down_rc));
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
struct mem_desc {
unsigned int base;
unsigned int limit;
unsigned int idx;
};
static int
mem_desc_cmp(const void *a, const void *b)
{
return ((const struct mem_desc *)a)->base -
((const struct mem_desc *)b)->base;
}
static void
mem_region_show(struct sbuf *sb, const char *name, unsigned int from,
unsigned int to)
{
unsigned int size;
if (from == to)
return;
size = to - from + 1;
if (size == 0)
return;
/* XXX: need humanize_number(3) in libkern for a more readable 'size' */
sbuf_printf(sb, "%-15s %#x-%#x [%u]\n", name, from, to, size);
}
static int
sysctl_meminfo(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i, n;
uint32_t lo, hi, used, alloc;
static const char *memory[] = {"EDC0:", "EDC1:", "MC:", "MC0:", "MC1:"};
static const char *region[] = {
"DBQ contexts:", "IMSG contexts:", "FLM cache:", "TCBs:",
"Pstructs:", "Timers:", "Rx FL:", "Tx FL:", "Pstruct FL:",
"Tx payload:", "Rx payload:", "LE hash:", "iSCSI region:",
"TDDP region:", "TPT region:", "STAG region:", "RQ region:",
"RQUDP region:", "PBL region:", "TXPBL region:",
"DBVFIFO region:", "ULPRX state:", "ULPTX state:",
"On-chip queues:", "TLS keys:",
};
struct mem_desc avail[4];
struct mem_desc mem[nitems(region) + 3]; /* up to 3 holes */
struct mem_desc *md = mem;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
for (i = 0; i < nitems(mem); i++) {
mem[i].limit = 0;
mem[i].idx = i;
}
/* Find and sort the populated memory ranges */
i = 0;
lo = t4_read_reg(sc, A_MA_TARGET_MEM_ENABLE);
if (lo & F_EDRAM0_ENABLE) {
hi = t4_read_reg(sc, A_MA_EDRAM0_BAR);
avail[i].base = G_EDRAM0_BASE(hi) << 20;
avail[i].limit = avail[i].base + (G_EDRAM0_SIZE(hi) << 20);
avail[i].idx = 0;
i++;
}
if (lo & F_EDRAM1_ENABLE) {
hi = t4_read_reg(sc, A_MA_EDRAM1_BAR);
avail[i].base = G_EDRAM1_BASE(hi) << 20;
avail[i].limit = avail[i].base + (G_EDRAM1_SIZE(hi) << 20);
avail[i].idx = 1;
i++;
}
if (lo & F_EXT_MEM_ENABLE) {
hi = t4_read_reg(sc, A_MA_EXT_MEMORY_BAR);
avail[i].base = G_EXT_MEM_BASE(hi) << 20;
avail[i].limit = avail[i].base +
(G_EXT_MEM_SIZE(hi) << 20);
avail[i].idx = is_t5(sc) ? 3 : 2; /* Call it MC0 for T5 */
i++;
}
if (is_t5(sc) && lo & F_EXT_MEM1_ENABLE) {
hi = t4_read_reg(sc, A_MA_EXT_MEMORY1_BAR);
avail[i].base = G_EXT_MEM1_BASE(hi) << 20;
avail[i].limit = avail[i].base +
(G_EXT_MEM1_SIZE(hi) << 20);
avail[i].idx = 4;
i++;
}
if (!i) { /* no memory available */
sbuf_delete(sb); /* don't leak the sbuf on the early return */
return (0);
}
qsort(avail, i, sizeof(struct mem_desc), mem_desc_cmp);
(md++)->base = t4_read_reg(sc, A_SGE_DBQ_CTXT_BADDR);
(md++)->base = t4_read_reg(sc, A_SGE_IMSG_CTXT_BADDR);
(md++)->base = t4_read_reg(sc, A_SGE_FLM_CACHE_BADDR);
(md++)->base = t4_read_reg(sc, A_TP_CMM_TCB_BASE);
(md++)->base = t4_read_reg(sc, A_TP_CMM_MM_BASE);
(md++)->base = t4_read_reg(sc, A_TP_CMM_TIMER_BASE);
(md++)->base = t4_read_reg(sc, A_TP_CMM_MM_RX_FLST_BASE);
(md++)->base = t4_read_reg(sc, A_TP_CMM_MM_TX_FLST_BASE);
(md++)->base = t4_read_reg(sc, A_TP_CMM_MM_PS_FLST_BASE);
/* the next few have explicit upper bounds */
md->base = t4_read_reg(sc, A_TP_PMM_TX_BASE);
md->limit = md->base - 1 +
t4_read_reg(sc, A_TP_PMM_TX_PAGE_SIZE) *
G_PMTXMAXPAGE(t4_read_reg(sc, A_TP_PMM_TX_MAX_PAGE));
md++;
md->base = t4_read_reg(sc, A_TP_PMM_RX_BASE);
md->limit = md->base - 1 +
t4_read_reg(sc, A_TP_PMM_RX_PAGE_SIZE) *
G_PMRXMAXPAGE(t4_read_reg(sc, A_TP_PMM_RX_MAX_PAGE));
md++;
if (t4_read_reg(sc, A_LE_DB_CONFIG) & F_HASHEN) {
if (chip_id(sc) <= CHELSIO_T5)
md->base = t4_read_reg(sc, A_LE_DB_HASH_TID_BASE);
else
md->base = t4_read_reg(sc, A_LE_DB_HASH_TBL_BASE_ADDR);
md->limit = 0;
} else {
md->base = 0;
md->idx = nitems(region); /* hide it */
}
md++;
#define ulp_region(reg) \
md->base = t4_read_reg(sc, A_ULP_ ## reg ## _LLIMIT);\
(md++)->limit = t4_read_reg(sc, A_ULP_ ## reg ## _ULIMIT)
ulp_region(RX_ISCSI);
ulp_region(RX_TDDP);
ulp_region(TX_TPT);
ulp_region(RX_STAG);
ulp_region(RX_RQ);
ulp_region(RX_RQUDP);
ulp_region(RX_PBL);
ulp_region(TX_PBL);
#undef ulp_region
md->base = 0;
md->idx = nitems(region);
if (!is_t4(sc)) {
uint32_t size = 0;
uint32_t sge_ctrl = t4_read_reg(sc, A_SGE_CONTROL2);
uint32_t fifo_size = t4_read_reg(sc, A_SGE_DBVFIFO_SIZE);
if (is_t5(sc)) {
if (sge_ctrl & F_VFIFO_ENABLE)
size = G_DBVFIFO_SIZE(fifo_size);
} else
size = G_T6_DBVFIFO_SIZE(fifo_size);
if (size) {
md->base = G_BASEADDR(t4_read_reg(sc,
A_SGE_DBVFIFO_BADDR));
md->limit = md->base + (size << 2) - 1;
}
}
md++;
md->base = t4_read_reg(sc, A_ULP_RX_CTX_BASE);
md->limit = 0;
md++;
md->base = t4_read_reg(sc, A_ULP_TX_ERR_TABLE_BASE);
md->limit = 0;
md++;
md->base = sc->vres.ocq.start;
if (sc->vres.ocq.size)
md->limit = md->base + sc->vres.ocq.size - 1;
else
md->idx = nitems(region); /* hide it */
md++;
md->base = sc->vres.key.start;
if (sc->vres.key.size)
md->limit = md->base + sc->vres.key.size - 1;
else
md->idx = nitems(region); /* hide it */
md++;
/* add any address-space holes, there can be up to 3 */
for (n = 0; n < i - 1; n++)
if (avail[n].limit < avail[n + 1].base)
(md++)->base = avail[n].limit;
if (avail[n].limit)
(md++)->base = avail[n].limit;
n = md - mem;
qsort(mem, n, sizeof(struct mem_desc), mem_desc_cmp);
for (lo = 0; lo < i; lo++)
mem_region_show(sb, memory[avail[lo].idx], avail[lo].base,
avail[lo].limit - 1);
sbuf_printf(sb, "\n");
for (i = 0; i < n; i++) {
if (mem[i].idx >= nitems(region))
continue; /* skip holes */
if (!mem[i].limit)
mem[i].limit = i < n - 1 ? mem[i + 1].base - 1 : ~0;
mem_region_show(sb, region[mem[i].idx], mem[i].base,
mem[i].limit);
}
sbuf_printf(sb, "\n");
lo = t4_read_reg(sc, A_CIM_SDRAM_BASE_ADDR);
hi = t4_read_reg(sc, A_CIM_SDRAM_ADDR_SIZE) + lo - 1;
mem_region_show(sb, "uP RAM:", lo, hi);
lo = t4_read_reg(sc, A_CIM_EXTMEM2_BASE_ADDR);
hi = t4_read_reg(sc, A_CIM_EXTMEM2_ADDR_SIZE) + lo - 1;
mem_region_show(sb, "uP Extmem2:", lo, hi);
lo = t4_read_reg(sc, A_TP_PMM_RX_MAX_PAGE);
sbuf_printf(sb, "\n%u Rx pages of size %uKiB for %u channels\n",
G_PMRXMAXPAGE(lo),
t4_read_reg(sc, A_TP_PMM_RX_PAGE_SIZE) >> 10,
(lo & F_PMRXNUMCHN) ? 2 : 1);
lo = t4_read_reg(sc, A_TP_PMM_TX_MAX_PAGE);
hi = t4_read_reg(sc, A_TP_PMM_TX_PAGE_SIZE);
sbuf_printf(sb, "%u Tx pages of size %u%ciB for %u channels\n",
G_PMTXMAXPAGE(lo),
hi >= (1 << 20) ? (hi >> 20) : (hi >> 10),
hi >= (1 << 20) ? 'M' : 'K', 1 << G_PMTXNUMCHN(lo));
sbuf_printf(sb, "%u p-structs\n",
t4_read_reg(sc, A_TP_CMM_MM_MAX_PSTRUCT));
for (i = 0; i < 4; i++) {
if (chip_id(sc) > CHELSIO_T5)
lo = t4_read_reg(sc, A_MPS_RX_MAC_BG_PG_CNT0 + i * 4);
else
lo = t4_read_reg(sc, A_MPS_RX_PG_RSV0 + i * 4);
if (is_t5(sc)) {
used = G_T5_USED(lo);
alloc = G_T5_ALLOC(lo);
} else {
used = G_USED(lo);
alloc = G_ALLOC(lo);
}
/* For T6 these are MAC buffer groups */
sbuf_printf(sb, "\nPort %d using %u pages out of %u allocated",
i, used, alloc);
}
for (i = 0; i < sc->chip_params->nchan; i++) {
if (chip_id(sc) > CHELSIO_T5)
lo = t4_read_reg(sc, A_MPS_RX_LPBK_BG_PG_CNT0 + i * 4);
else
lo = t4_read_reg(sc, A_MPS_RX_PG_RSV4 + i * 4);
if (is_t5(sc)) {
used = G_T5_USED(lo);
alloc = G_T5_ALLOC(lo);
} else {
used = G_USED(lo);
alloc = G_ALLOC(lo);
}
/* For T6 these are MAC buffer groups */
sbuf_printf(sb,
"\nLoopback %d using %u pages out of %u allocated",
i, used, alloc);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
/*
 * Convert the (x, y) pair that the hardware uses to encode a TCAM entry into
 * a value/mask.  A bit is significant if it is set in either x or y; the MAC
 * address itself is recovered from y, which carries the entry's value in its
 * low 48 bits.
 */
static inline void
tcamxy2valmask(uint64_t x, uint64_t y, uint8_t *addr, uint64_t *mask)
{
	*mask = x | y;
	y = htobe64(y);
	memcpy(addr, (char *)&y + 2, ETHER_ADDR_LEN);
}
static int
sysctl_mps_tcam(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i;
MPASS(chip_id(sc) <= CHELSIO_T5);
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
sbuf_printf(sb,
"Idx Ethernet address Mask Vld Ports PF"
" VF Replication P0 P1 P2 P3 ML");
for (i = 0; i < sc->chip_params->mps_tcam_size; i++) {
uint64_t tcamx, tcamy, mask;
uint32_t cls_lo, cls_hi;
uint8_t addr[ETHER_ADDR_LEN];
tcamy = t4_read_reg64(sc, MPS_CLS_TCAM_Y_L(i));
tcamx = t4_read_reg64(sc, MPS_CLS_TCAM_X_L(i));
		if (tcamx & tcamy)
			continue;	/* x and y overlap: entry is unused */
tcamxy2valmask(tcamx, tcamy, addr, &mask);
cls_lo = t4_read_reg(sc, MPS_CLS_SRAM_L(i));
cls_hi = t4_read_reg(sc, MPS_CLS_SRAM_H(i));
sbuf_printf(sb, "\n%3u %02x:%02x:%02x:%02x:%02x:%02x %012jx"
" %c %#x%4u%4d", i, addr[0], addr[1], addr[2],
addr[3], addr[4], addr[5], (uintmax_t)mask,
(cls_lo & F_SRAM_VLD) ? 'Y' : 'N',
G_PORTMAP(cls_hi), G_PF(cls_lo),
(cls_lo & F_VF_VALID) ? G_VF(cls_lo) : -1);
if (cls_lo & F_REPLICATE) {
struct fw_ldst_cmd ldst_cmd;
memset(&ldst_cmd, 0, sizeof(ldst_cmd));
ldst_cmd.op_to_addrspace =
htobe32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ |
V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_MPS));
ldst_cmd.cycles_to_len16 = htobe32(FW_LEN16(ldst_cmd));
ldst_cmd.u.mps.rplc.fid_idx =
htobe16(V_FW_LDST_CMD_FID(FW_LDST_MPS_RPLC) |
V_FW_LDST_CMD_IDX(i));
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK,
"t4mps");
if (rc)
break;
rc = -t4_wr_mbox(sc, sc->mbox, &ldst_cmd,
sizeof(ldst_cmd), &ldst_cmd);
end_synchronized_op(sc, 0);
if (rc != 0) {
sbuf_printf(sb, "%36d", rc);
rc = 0;
} else {
sbuf_printf(sb, " %08x %08x %08x %08x",
be32toh(ldst_cmd.u.mps.rplc.rplc127_96),
be32toh(ldst_cmd.u.mps.rplc.rplc95_64),
be32toh(ldst_cmd.u.mps.rplc.rplc63_32),
be32toh(ldst_cmd.u.mps.rplc.rplc31_0));
}
} else
sbuf_printf(sb, "%36s", "");
sbuf_printf(sb, "%4u%3u%3u%3u %#3x", G_SRAM_PRIO0(cls_lo),
G_SRAM_PRIO1(cls_lo), G_SRAM_PRIO2(cls_lo),
G_SRAM_PRIO3(cls_lo), (cls_lo >> S_MULTILISTEN0) & 0xf);
}
if (rc)
(void) sbuf_finish(sb);
else
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_mps_tcam_t6(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i;
MPASS(chip_id(sc) > CHELSIO_T5);
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
sbuf_printf(sb, "Idx Ethernet address Mask VNI Mask"
" IVLAN Vld DIP_Hit Lookup Port Vld Ports PF VF"
" Replication"
" P0 P1 P2 P3 ML\n");
for (i = 0; i < sc->chip_params->mps_tcam_size; i++) {
uint8_t dip_hit, vlan_vld, lookup_type, port_num;
uint16_t ivlan;
uint64_t tcamx, tcamy, val, mask;
uint32_t cls_lo, cls_hi, ctl, data2, vnix, vniy;
uint8_t addr[ETHER_ADDR_LEN];
ctl = V_CTLREQID(1) | V_CTLCMDTYPE(0) | V_CTLXYBITSEL(0);
if (i < 256)
ctl |= V_CTLTCAMINDEX(i) | V_CTLTCAMSEL(0);
else
ctl |= V_CTLTCAMINDEX(i - 256) | V_CTLTCAMSEL(1);
t4_write_reg(sc, A_MPS_CLS_TCAM_DATA2_CTL, ctl);
val = t4_read_reg(sc, A_MPS_CLS_TCAM_RDATA1_REQ_ID1);
tcamy = G_DMACH(val) << 32;
tcamy |= t4_read_reg(sc, A_MPS_CLS_TCAM_RDATA0_REQ_ID1);
data2 = t4_read_reg(sc, A_MPS_CLS_TCAM_RDATA2_REQ_ID1);
lookup_type = G_DATALKPTYPE(data2);
port_num = G_DATAPORTNUM(data2);
if (lookup_type && lookup_type != M_DATALKPTYPE) {
/* Inner header VNI */
vniy = ((data2 & F_DATAVIDH2) << 23) |
(G_DATAVIDH1(data2) << 16) | G_VIDL(val);
dip_hit = data2 & F_DATADIPHIT;
vlan_vld = 0;
} else {
vniy = 0;
dip_hit = 0;
vlan_vld = data2 & F_DATAVIDH2;
ivlan = G_VIDL(val);
}
ctl |= V_CTLXYBITSEL(1);
t4_write_reg(sc, A_MPS_CLS_TCAM_DATA2_CTL, ctl);
val = t4_read_reg(sc, A_MPS_CLS_TCAM_RDATA1_REQ_ID1);
tcamx = G_DMACH(val) << 32;
tcamx |= t4_read_reg(sc, A_MPS_CLS_TCAM_RDATA0_REQ_ID1);
data2 = t4_read_reg(sc, A_MPS_CLS_TCAM_RDATA2_REQ_ID1);
if (lookup_type && lookup_type != M_DATALKPTYPE) {
/* Inner header VNI mask */
vnix = ((data2 & F_DATAVIDH2) << 23) |
(G_DATAVIDH1(data2) << 16) | G_VIDL(val);
} else
vnix = 0;
		if (tcamx & tcamy)
			continue;	/* x and y overlap: entry is unused */
tcamxy2valmask(tcamx, tcamy, addr, &mask);
cls_lo = t4_read_reg(sc, MPS_CLS_SRAM_L(i));
cls_hi = t4_read_reg(sc, MPS_CLS_SRAM_H(i));
if (lookup_type && lookup_type != M_DATALKPTYPE) {
sbuf_printf(sb, "\n%3u %02x:%02x:%02x:%02x:%02x:%02x "
"%012jx %06x %06x - - %3c"
" 'I' %4x %3c %#x%4u%4d", i, addr[0],
addr[1], addr[2], addr[3], addr[4], addr[5],
(uintmax_t)mask, vniy, vnix, dip_hit ? 'Y' : 'N',
port_num, cls_lo & F_T6_SRAM_VLD ? 'Y' : 'N',
G_PORTMAP(cls_hi), G_T6_PF(cls_lo),
cls_lo & F_T6_VF_VALID ? G_T6_VF(cls_lo) : -1);
} else {
sbuf_printf(sb, "\n%3u %02x:%02x:%02x:%02x:%02x:%02x "
"%012jx - - ", i, addr[0], addr[1],
addr[2], addr[3], addr[4], addr[5],
(uintmax_t)mask);
if (vlan_vld)
sbuf_printf(sb, "%4u Y ", ivlan);
else
sbuf_printf(sb, " - N ");
sbuf_printf(sb, "- %3c %4x %3c %#x%4u%4d",
lookup_type ? 'I' : 'O', port_num,
cls_lo & F_T6_SRAM_VLD ? 'Y' : 'N',
G_PORTMAP(cls_hi), G_T6_PF(cls_lo),
cls_lo & F_T6_VF_VALID ? G_T6_VF(cls_lo) : -1);
}
if (cls_lo & F_T6_REPLICATE) {
struct fw_ldst_cmd ldst_cmd;
memset(&ldst_cmd, 0, sizeof(ldst_cmd));
ldst_cmd.op_to_addrspace =
htobe32(V_FW_CMD_OP(FW_LDST_CMD) |
F_FW_CMD_REQUEST | F_FW_CMD_READ |
V_FW_LDST_CMD_ADDRSPACE(FW_LDST_ADDRSPC_MPS));
ldst_cmd.cycles_to_len16 = htobe32(FW_LEN16(ldst_cmd));
ldst_cmd.u.mps.rplc.fid_idx =
htobe16(V_FW_LDST_CMD_FID(FW_LDST_MPS_RPLC) |
V_FW_LDST_CMD_IDX(i));
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK,
"t6mps");
if (rc)
break;
rc = -t4_wr_mbox(sc, sc->mbox, &ldst_cmd,
sizeof(ldst_cmd), &ldst_cmd);
end_synchronized_op(sc, 0);
if (rc != 0) {
sbuf_printf(sb, "%72d", rc);
rc = 0;
} else {
sbuf_printf(sb, " %08x %08x %08x %08x"
" %08x %08x %08x %08x",
be32toh(ldst_cmd.u.mps.rplc.rplc255_224),
be32toh(ldst_cmd.u.mps.rplc.rplc223_192),
be32toh(ldst_cmd.u.mps.rplc.rplc191_160),
be32toh(ldst_cmd.u.mps.rplc.rplc159_128),
be32toh(ldst_cmd.u.mps.rplc.rplc127_96),
be32toh(ldst_cmd.u.mps.rplc.rplc95_64),
be32toh(ldst_cmd.u.mps.rplc.rplc63_32),
be32toh(ldst_cmd.u.mps.rplc.rplc31_0));
}
} else
sbuf_printf(sb, "%72s", "");
sbuf_printf(sb, "%4u%3u%3u%3u %#x",
G_T6_SRAM_PRIO0(cls_lo), G_T6_SRAM_PRIO1(cls_lo),
G_T6_SRAM_PRIO2(cls_lo), G_T6_SRAM_PRIO3(cls_lo),
(cls_lo >> S_T6_MULTILISTEN0) & 0xf);
}
if (rc)
(void) sbuf_finish(sb);
else
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_path_mtus(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
uint16_t mtus[NMTUS];
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
t4_read_mtu_tbl(sc, mtus, NULL);
sbuf_printf(sb, "%u %u %u %u %u %u %u %u %u %u %u %u %u %u %u %u",
mtus[0], mtus[1], mtus[2], mtus[3], mtus[4], mtus[5], mtus[6],
mtus[7], mtus[8], mtus[9], mtus[10], mtus[11], mtus[12], mtus[13],
mtus[14], mtus[15]);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_pm_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, i;
uint32_t tx_cnt[MAX_PM_NSTATS], rx_cnt[MAX_PM_NSTATS];
uint64_t tx_cyc[MAX_PM_NSTATS], rx_cyc[MAX_PM_NSTATS];
static const char *tx_stats[MAX_PM_NSTATS] = {
"Read:", "Write bypass:", "Write mem:", "Bypass + mem:",
"Tx FIFO wait", NULL, "Tx latency"
};
static const char *rx_stats[MAX_PM_NSTATS] = {
"Read:", "Write bypass:", "Write mem:", "Flush:",
"Rx FIFO wait", NULL, "Rx latency"
};
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
t4_pmtx_get_stats(sc, tx_cnt, tx_cyc);
t4_pmrx_get_stats(sc, rx_cnt, rx_cyc);
sbuf_printf(sb, " Tx pcmds Tx bytes");
for (i = 0; i < 4; i++) {
sbuf_printf(sb, "\n%-13s %10u %20ju", tx_stats[i], tx_cnt[i],
tx_cyc[i]);
}
sbuf_printf(sb, "\n Rx pcmds Rx bytes");
for (i = 0; i < 4; i++) {
sbuf_printf(sb, "\n%-13s %10u %20ju", rx_stats[i], rx_cnt[i],
rx_cyc[i]);
}
if (chip_id(sc) > CHELSIO_T5) {
sbuf_printf(sb,
"\n Total wait Total occupancy");
sbuf_printf(sb, "\n%-13s %10u %20ju", tx_stats[i], tx_cnt[i],
tx_cyc[i]);
sbuf_printf(sb, "\n%-13s %10u %20ju", rx_stats[i], rx_cnt[i],
rx_cyc[i]);
i += 2;
MPASS(i < nitems(tx_stats));
sbuf_printf(sb,
"\n Reads Total wait");
sbuf_printf(sb, "\n%-13s %10u %20ju", tx_stats[i], tx_cnt[i],
tx_cyc[i]);
sbuf_printf(sb, "\n%-13s %10u %20ju", rx_stats[i], rx_cnt[i],
rx_cyc[i]);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_rdma_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
struct tp_rdma_stats stats;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
mtx_lock(&sc->reg_lock);
t4_tp_get_rdma_stats(sc, &stats, 0);
mtx_unlock(&sc->reg_lock);
	sbuf_printf(sb, "NoRQEModDeferrals: %u\n", stats.rqe_dfr_mod);
	sbuf_printf(sb, "NoRQEPktDeferrals: %u", stats.rqe_dfr_pkt);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_tcp_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
struct tp_tcp_stats v4, v6;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
mtx_lock(&sc->reg_lock);
t4_tp_get_tcp_stats(sc, &v4, &v6, 0);
mtx_unlock(&sc->reg_lock);
sbuf_printf(sb,
" IP IPv6\n");
sbuf_printf(sb, "OutRsts: %20u %20u\n",
v4.tcp_out_rsts, v6.tcp_out_rsts);
sbuf_printf(sb, "InSegs: %20ju %20ju\n",
v4.tcp_in_segs, v6.tcp_in_segs);
sbuf_printf(sb, "OutSegs: %20ju %20ju\n",
v4.tcp_out_segs, v6.tcp_out_segs);
sbuf_printf(sb, "RetransSegs: %20ju %20ju",
v4.tcp_retrans_segs, v6.tcp_retrans_segs);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_tids(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
struct tid_info *t = &sc->tids;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
if (t->natids) {
sbuf_printf(sb, "ATID range: 0-%u, in use: %u\n", t->natids - 1,
t->atids_in_use);
}
if (t->nhpftids) {
sbuf_printf(sb, "HPFTID range: %u-%u, in use: %u\n",
t->hpftid_base, t->hpftid_end, t->hpftids_in_use);
}
if (t->ntids) {
sbuf_printf(sb, "TID range: ");
if (t4_read_reg(sc, A_LE_DB_CONFIG) & F_HASHEN) {
uint32_t b, hb;
if (chip_id(sc) <= CHELSIO_T5) {
b = t4_read_reg(sc, A_LE_DB_SERVER_INDEX) / 4;
hb = t4_read_reg(sc, A_LE_DB_TID_HASHBASE) / 4;
} else {
b = t4_read_reg(sc, A_LE_DB_SRVR_START_INDEX);
hb = t4_read_reg(sc, A_T6_LE_DB_HASH_TID_BASE);
}
if (b)
sbuf_printf(sb, "%u-%u, ", t->tid_base, b - 1);
sbuf_printf(sb, "%u-%u", hb, t->ntids - 1);
} else
sbuf_printf(sb, "%u-%u", t->tid_base, t->ntids - 1);
sbuf_printf(sb, ", in use: %u\n",
atomic_load_acq_int(&t->tids_in_use));
}
if (t->nstids) {
sbuf_printf(sb, "STID range: %u-%u, in use: %u\n", t->stid_base,
t->stid_base + t->nstids - 1, t->stids_in_use);
}
if (t->nftids) {
sbuf_printf(sb, "FTID range: %u-%u, in use: %u\n", t->ftid_base,
t->ftid_end, t->ftids_in_use);
}
if (t->netids) {
sbuf_printf(sb, "ETID range: %u-%u, in use: %u\n", t->etid_base,
t->etid_base + t->netids - 1, t->etids_in_use);
}
sbuf_printf(sb, "HW TID usage: %u IP users, %u IPv6 users",
t4_read_reg(sc, A_LE_DB_ACT_CNT_IPV4),
t4_read_reg(sc, A_LE_DB_ACT_CNT_IPV6));
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_tp_err_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
struct tp_err_stats stats;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
mtx_lock(&sc->reg_lock);
t4_tp_get_err_stats(sc, &stats, 0);
mtx_unlock(&sc->reg_lock);
if (sc->chip_params->nchan > 2) {
sbuf_printf(sb, " channel 0 channel 1"
" channel 2 channel 3\n");
sbuf_printf(sb, "macInErrs: %10u %10u %10u %10u\n",
stats.mac_in_errs[0], stats.mac_in_errs[1],
stats.mac_in_errs[2], stats.mac_in_errs[3]);
sbuf_printf(sb, "hdrInErrs: %10u %10u %10u %10u\n",
stats.hdr_in_errs[0], stats.hdr_in_errs[1],
stats.hdr_in_errs[2], stats.hdr_in_errs[3]);
sbuf_printf(sb, "tcpInErrs: %10u %10u %10u %10u\n",
stats.tcp_in_errs[0], stats.tcp_in_errs[1],
stats.tcp_in_errs[2], stats.tcp_in_errs[3]);
sbuf_printf(sb, "tcp6InErrs: %10u %10u %10u %10u\n",
stats.tcp6_in_errs[0], stats.tcp6_in_errs[1],
stats.tcp6_in_errs[2], stats.tcp6_in_errs[3]);
sbuf_printf(sb, "tnlCongDrops: %10u %10u %10u %10u\n",
stats.tnl_cong_drops[0], stats.tnl_cong_drops[1],
stats.tnl_cong_drops[2], stats.tnl_cong_drops[3]);
sbuf_printf(sb, "tnlTxDrops: %10u %10u %10u %10u\n",
stats.tnl_tx_drops[0], stats.tnl_tx_drops[1],
stats.tnl_tx_drops[2], stats.tnl_tx_drops[3]);
sbuf_printf(sb, "ofldVlanDrops: %10u %10u %10u %10u\n",
stats.ofld_vlan_drops[0], stats.ofld_vlan_drops[1],
stats.ofld_vlan_drops[2], stats.ofld_vlan_drops[3]);
sbuf_printf(sb, "ofldChanDrops: %10u %10u %10u %10u\n\n",
stats.ofld_chan_drops[0], stats.ofld_chan_drops[1],
stats.ofld_chan_drops[2], stats.ofld_chan_drops[3]);
} else {
sbuf_printf(sb, " channel 0 channel 1\n");
sbuf_printf(sb, "macInErrs: %10u %10u\n",
stats.mac_in_errs[0], stats.mac_in_errs[1]);
sbuf_printf(sb, "hdrInErrs: %10u %10u\n",
stats.hdr_in_errs[0], stats.hdr_in_errs[1]);
sbuf_printf(sb, "tcpInErrs: %10u %10u\n",
stats.tcp_in_errs[0], stats.tcp_in_errs[1]);
sbuf_printf(sb, "tcp6InErrs: %10u %10u\n",
stats.tcp6_in_errs[0], stats.tcp6_in_errs[1]);
sbuf_printf(sb, "tnlCongDrops: %10u %10u\n",
stats.tnl_cong_drops[0], stats.tnl_cong_drops[1]);
sbuf_printf(sb, "tnlTxDrops: %10u %10u\n",
stats.tnl_tx_drops[0], stats.tnl_tx_drops[1]);
sbuf_printf(sb, "ofldVlanDrops: %10u %10u\n",
stats.ofld_vlan_drops[0], stats.ofld_vlan_drops[1]);
sbuf_printf(sb, "ofldChanDrops: %10u %10u\n\n",
stats.ofld_chan_drops[0], stats.ofld_chan_drops[1]);
}
sbuf_printf(sb, "ofldNoNeigh: %u\nofldCongDefer: %u",
stats.ofld_no_neigh, stats.ofld_cong_defer);
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_tp_la_mask(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct tp_params *tpp = &sc->params.tp;
u_int mask;
int rc;
mask = tpp->la_mask >> 16;
rc = sysctl_handle_int(oidp, &mask, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (mask > 0xffff)
return (EINVAL);
tpp->la_mask = mask << 16;
t4_set_reg_field(sc, A_TP_DBG_LA_CONFIG, 0xffff0000U, tpp->la_mask);
return (0);
}
struct field_desc {
const char *name;
u_int start;
u_int width;
};
static void
field_desc_show(struct sbuf *sb, uint64_t v, const struct field_desc *f)
{
char buf[32];
int line_size = 0;
while (f->name) {
uint64_t mask = (1ULL << f->width) - 1;
int len = snprintf(buf, sizeof(buf), "%s: %ju", f->name,
((uintmax_t)v >> f->start) & mask);
if (line_size + len >= 79) {
line_size = 8;
sbuf_printf(sb, "\n ");
}
sbuf_printf(sb, "%s ", buf);
line_size += len + 1;
f++;
}
sbuf_printf(sb, "\n");
}
static const struct field_desc tp_la0[] = {
{ "RcfOpCodeOut", 60, 4 },
{ "State", 56, 4 },
{ "WcfState", 52, 4 },
{ "RcfOpcSrcOut", 50, 2 },
{ "CRxError", 49, 1 },
{ "ERxError", 48, 1 },
{ "SanityFailed", 47, 1 },
{ "SpuriousMsg", 46, 1 },
{ "FlushInputMsg", 45, 1 },
{ "FlushInputCpl", 44, 1 },
{ "RssUpBit", 43, 1 },
{ "RssFilterHit", 42, 1 },
{ "Tid", 32, 10 },
{ "InitTcb", 31, 1 },
{ "LineNumber", 24, 7 },
{ "Emsg", 23, 1 },
{ "EdataOut", 22, 1 },
{ "Cmsg", 21, 1 },
{ "CdataOut", 20, 1 },
{ "EreadPdu", 19, 1 },
{ "CreadPdu", 18, 1 },
{ "TunnelPkt", 17, 1 },
{ "RcfPeerFin", 16, 1 },
{ "RcfReasonOut", 12, 4 },
{ "TxCchannel", 10, 2 },
{ "RcfTxChannel", 8, 2 },
{ "RxEchannel", 6, 2 },
{ "RcfRxChannel", 5, 1 },
{ "RcfDataOutSrdy", 4, 1 },
{ "RxDvld", 3, 1 },
{ "RxOoDvld", 2, 1 },
{ "RxCongestion", 1, 1 },
{ "TxCongestion", 0, 1 },
{ NULL }
};
static const struct field_desc tp_la1[] = {
{ "CplCmdIn", 56, 8 },
{ "CplCmdOut", 48, 8 },
{ "ESynOut", 47, 1 },
{ "EAckOut", 46, 1 },
{ "EFinOut", 45, 1 },
{ "ERstOut", 44, 1 },
{ "SynIn", 43, 1 },
{ "AckIn", 42, 1 },
{ "FinIn", 41, 1 },
{ "RstIn", 40, 1 },
{ "DataIn", 39, 1 },
{ "DataInVld", 38, 1 },
{ "PadIn", 37, 1 },
{ "RxBufEmpty", 36, 1 },
{ "RxDdp", 35, 1 },
{ "RxFbCongestion", 34, 1 },
{ "TxFbCongestion", 33, 1 },
{ "TxPktSumSrdy", 32, 1 },
{ "RcfUlpType", 28, 4 },
{ "Eread", 27, 1 },
{ "Ebypass", 26, 1 },
{ "Esave", 25, 1 },
{ "Static0", 24, 1 },
{ "Cread", 23, 1 },
{ "Cbypass", 22, 1 },
{ "Csave", 21, 1 },
{ "CPktOut", 20, 1 },
{ "RxPagePoolFull", 18, 2 },
{ "RxLpbkPkt", 17, 1 },
{ "TxLpbkPkt", 16, 1 },
{ "RxVfValid", 15, 1 },
{ "SynLearned", 14, 1 },
{ "SetDelEntry", 13, 1 },
{ "SetInvEntry", 12, 1 },
{ "CpcmdDvld", 11, 1 },
{ "CpcmdSave", 10, 1 },
{ "RxPstructsFull", 8, 2 },
{ "EpcmdDvld", 7, 1 },
{ "EpcmdFlush", 6, 1 },
{ "EpcmdTrimPrefix", 5, 1 },
{ "EpcmdTrimPostfix", 4, 1 },
{ "ERssIp4Pkt", 3, 1 },
{ "ERssIp6Pkt", 2, 1 },
{ "ERssTcpUdpPkt", 1, 1 },
{ "ERssFceFipPkt", 0, 1 },
{ NULL }
};
static const struct field_desc tp_la2[] = {
{ "CplCmdIn", 56, 8 },
{ "MpsVfVld", 55, 1 },
{ "MpsPf", 52, 3 },
{ "MpsVf", 44, 8 },
{ "SynIn", 43, 1 },
{ "AckIn", 42, 1 },
{ "FinIn", 41, 1 },
{ "RstIn", 40, 1 },
{ "DataIn", 39, 1 },
{ "DataInVld", 38, 1 },
{ "PadIn", 37, 1 },
{ "RxBufEmpty", 36, 1 },
{ "RxDdp", 35, 1 },
{ "RxFbCongestion", 34, 1 },
{ "TxFbCongestion", 33, 1 },
{ "TxPktSumSrdy", 32, 1 },
{ "RcfUlpType", 28, 4 },
{ "Eread", 27, 1 },
{ "Ebypass", 26, 1 },
{ "Esave", 25, 1 },
{ "Static0", 24, 1 },
{ "Cread", 23, 1 },
{ "Cbypass", 22, 1 },
{ "Csave", 21, 1 },
{ "CPktOut", 20, 1 },
{ "RxPagePoolFull", 18, 2 },
{ "RxLpbkPkt", 17, 1 },
{ "TxLpbkPkt", 16, 1 },
{ "RxVfValid", 15, 1 },
{ "SynLearned", 14, 1 },
{ "SetDelEntry", 13, 1 },
{ "SetInvEntry", 12, 1 },
{ "CpcmdDvld", 11, 1 },
{ "CpcmdSave", 10, 1 },
{ "RxPstructsFull", 8, 2 },
{ "EpcmdDvld", 7, 1 },
{ "EpcmdFlush", 6, 1 },
{ "EpcmdTrimPrefix", 5, 1 },
{ "EpcmdTrimPostfix", 4, 1 },
{ "ERssIp4Pkt", 3, 1 },
{ "ERssIp6Pkt", 2, 1 },
{ "ERssTcpUdpPkt", 1, 1 },
{ "ERssFceFipPkt", 0, 1 },
{ NULL }
};
static void
tp_la_show(struct sbuf *sb, uint64_t *p, int idx)
{
field_desc_show(sb, *p, tp_la0);
}
static void
tp_la_show2(struct sbuf *sb, uint64_t *p, int idx)
{
if (idx)
sbuf_printf(sb, "\n");
field_desc_show(sb, p[0], tp_la0);
if (idx < (TPLA_SIZE / 2 - 1) || p[1] != ~0ULL)
field_desc_show(sb, p[1], tp_la0);
}
static void
tp_la_show3(struct sbuf *sb, uint64_t *p, int idx)
{
	if (idx)
		sbuf_printf(sb, "\n");
	field_desc_show(sb, p[0], tp_la0);
	if (idx < (TPLA_SIZE / 2 - 1) || p[1] != ~0ULL)
		/* TunnelPkt (bit 17 of the first word) selects the layout. */
		field_desc_show(sb, p[1], (p[0] & (1 << 17)) ? tp_la2 : tp_la1);
}
static int
sysctl_tp_la(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
uint64_t *buf, *p;
int rc;
u_int i, inc;
void (*show_func)(struct sbuf *, uint64_t *, int);
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
buf = malloc(TPLA_SIZE * sizeof(uint64_t), M_CXGBE, M_ZERO | M_WAITOK);
t4_tp_read_la(sc, buf, NULL);
p = buf;
switch (G_DBGLAMODE(t4_read_reg(sc, A_TP_DBG_LA_CONFIG))) {
case 2:
inc = 2;
show_func = tp_la_show2;
break;
case 3:
inc = 2;
show_func = tp_la_show3;
break;
default:
inc = 1;
show_func = tp_la_show;
}
for (i = 0; i < TPLA_SIZE / inc; i++, p += inc)
(*show_func)(sb, p, i);
rc = sbuf_finish(sb);
sbuf_delete(sb);
free(buf, M_CXGBE);
return (rc);
}
static int
sysctl_tx_rate(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc;
u64 nrate[MAX_NCHAN], orate[MAX_NCHAN];
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 256, req);
if (sb == NULL)
return (ENOMEM);
t4_get_chan_txrate(sc, nrate, orate);
if (sc->chip_params->nchan > 2) {
sbuf_printf(sb, " channel 0 channel 1"
" channel 2 channel 3\n");
sbuf_printf(sb, "NIC B/s: %10ju %10ju %10ju %10ju\n",
nrate[0], nrate[1], nrate[2], nrate[3]);
sbuf_printf(sb, "Offload B/s: %10ju %10ju %10ju %10ju",
orate[0], orate[1], orate[2], orate[3]);
} else {
sbuf_printf(sb, " channel 0 channel 1\n");
sbuf_printf(sb, "NIC B/s: %10ju %10ju\n",
nrate[0], nrate[1]);
sbuf_printf(sb, "Offload B/s: %10ju %10ju",
orate[0], orate[1]);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_ulprx_la(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
uint32_t *buf, *p;
int rc, i;
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
buf = malloc(ULPRX_LA_SIZE * 8 * sizeof(uint32_t), M_CXGBE,
M_ZERO | M_WAITOK);
t4_ulprx_read_la(sc, buf);
p = buf;
sbuf_printf(sb, " Pcmd Type Message"
" Data");
for (i = 0; i < ULPRX_LA_SIZE; i++, p += 8) {
sbuf_printf(sb, "\n%08x%08x %4x %08x %08x%08x%08x%08x",
p[1], p[0], p[2], p[3], p[7], p[6], p[5], p[4]);
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
free(buf, M_CXGBE);
return (rc);
}
static int
sysctl_wcwr_stats(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
struct sbuf *sb;
int rc, v;
MPASS(chip_id(sc) >= CHELSIO_T5);
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
v = t4_read_reg(sc, A_SGE_STAT_CFG);
if (G_STATSOURCE_T5(v) == 7) {
int mode;
mode = is_t5(sc) ? G_STATMODE(v) : G_T6_STATMODE(v);
if (mode == 0) {
sbuf_printf(sb, "total %d, incomplete %d",
t4_read_reg(sc, A_SGE_STAT_TOTAL),
t4_read_reg(sc, A_SGE_STAT_MATCH));
} else if (mode == 1) {
sbuf_printf(sb, "total %d, data overflow %d",
t4_read_reg(sc, A_SGE_STAT_TOTAL),
t4_read_reg(sc, A_SGE_STAT_MATCH));
} else {
sbuf_printf(sb, "unknown mode %d", mode);
}
}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
static int
sysctl_cpus(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
enum cpu_sets op = arg2;
cpuset_t cpuset;
struct sbuf *sb;
int i, rc;
MPASS(op == LOCAL_CPUS || op == INTR_CPUS);
CPU_ZERO(&cpuset);
rc = bus_get_cpus(sc->dev, op, sizeof(cpuset), &cpuset);
if (rc != 0)
return (rc);
rc = sysctl_wire_old_buffer(req, 0);
if (rc != 0)
return (rc);
sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
if (sb == NULL)
return (ENOMEM);
	CPU_FOREACH(i) {
		if (CPU_ISSET(i, &cpuset))
			sbuf_printf(sb, "%d ", i);
	}
rc = sbuf_finish(sb);
sbuf_delete(sb);
return (rc);
}
#ifdef TCP_OFFLOAD
static int
sysctl_tls_rx_ports(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
int *old_ports, *new_ports;
int i, new_count, rc;
if (req->newptr == NULL && req->oldptr == NULL)
return (SYSCTL_OUT(req, NULL, imax(sc->tt.num_tls_rx_ports, 1) *
sizeof(sc->tt.tls_rx_ports[0])));
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4tlsrx");
if (rc)
return (rc);
if (sc->tt.num_tls_rx_ports == 0) {
i = -1;
rc = SYSCTL_OUT(req, &i, sizeof(i));
} else
rc = SYSCTL_OUT(req, sc->tt.tls_rx_ports,
sc->tt.num_tls_rx_ports * sizeof(sc->tt.tls_rx_ports[0]));
if (rc == 0 && req->newptr != NULL) {
new_count = req->newlen / sizeof(new_ports[0]);
new_ports = malloc(new_count * sizeof(new_ports[0]), M_CXGBE,
M_WAITOK);
rc = SYSCTL_IN(req, new_ports, new_count *
sizeof(new_ports[0]));
if (rc)
goto err;
/* Allow setting to a single '-1' to clear the list. */
if (new_count == 1 && new_ports[0] == -1) {
ADAPTER_LOCK(sc);
old_ports = sc->tt.tls_rx_ports;
sc->tt.tls_rx_ports = NULL;
sc->tt.num_tls_rx_ports = 0;
ADAPTER_UNLOCK(sc);
free(old_ports, M_CXGBE);
} else {
for (i = 0; i < new_count; i++) {
if (new_ports[i] < 1 ||
new_ports[i] > IPPORT_MAX) {
rc = EINVAL;
goto err;
}
}
ADAPTER_LOCK(sc);
old_ports = sc->tt.tls_rx_ports;
sc->tt.tls_rx_ports = new_ports;
sc->tt.num_tls_rx_ports = new_count;
ADAPTER_UNLOCK(sc);
free(old_ports, M_CXGBE);
new_ports = NULL;
}
err:
free(new_ports, M_CXGBE);
}
end_synchronized_op(sc, 0);
return (rc);
}
static void
unit_conv(char *buf, size_t len, u_int val, u_int factor)
{
	u_int rem = val % factor;

	if (rem == 0)
		snprintf(buf, len, "%u", val / factor);
	else {
		int digits = 0;
		u_int f;

		/*
		 * Strip trailing zeros from the remainder but keep track of
		 * the number of fractional digits so that leading zeros are
		 * preserved (e.g. 1050 / 1000 is 1.05, not 1.5).
		 */
		for (f = factor; f > 1; f /= 10)
			digits++;
		while (digits > 1 && rem % 10 == 0) {
			rem /= 10;
			digits--;
		}
		snprintf(buf, len, "%u.%0*u", val / factor, digits, rem);
	}
}
static int
sysctl_tp_tick(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
char buf[16];
u_int res, re;
u_int cclk_ps = 1000000000 / sc->params.vpd.cclk;
res = t4_read_reg(sc, A_TP_TIMER_RESOLUTION);
switch (arg2) {
case 0:
/* timer_tick */
re = G_TIMERRESOLUTION(res);
break;
case 1:
/* TCP timestamp tick */
re = G_TIMESTAMPRESOLUTION(res);
break;
case 2:
/* DACK tick */
re = G_DELAYEDACKRESOLUTION(res);
break;
default:
return (EDOOFUS);
}
unit_conv(buf, sizeof(buf), (cclk_ps << re), 1000000);
return (sysctl_handle_string(oidp, buf, sizeof(buf), req));
}
static int
sysctl_tp_dack_timer(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
u_int res, dack_re, v;
u_int cclk_ps = 1000000000 / sc->params.vpd.cclk;
res = t4_read_reg(sc, A_TP_TIMER_RESOLUTION);
dack_re = G_DELAYEDACKRESOLUTION(res);
v = ((cclk_ps << dack_re) / 1000000) * t4_read_reg(sc, A_TP_DACK_TIMER);
return (sysctl_handle_int(oidp, &v, 0, req));
}
static int
sysctl_tp_timer(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
int reg = arg2;
u_int tre;
u_long tp_tick_us, v;
u_int cclk_ps = 1000000000 / sc->params.vpd.cclk;
MPASS(reg == A_TP_RXT_MIN || reg == A_TP_RXT_MAX ||
reg == A_TP_PERS_MIN || reg == A_TP_PERS_MAX ||
reg == A_TP_KEEP_IDLE || reg == A_TP_KEEP_INTVL ||
reg == A_TP_INIT_SRTT || reg == A_TP_FINWAIT2_TIMER);
tre = G_TIMERRESOLUTION(t4_read_reg(sc, A_TP_TIMER_RESOLUTION));
tp_tick_us = (cclk_ps << tre) / 1000000;
if (reg == A_TP_INIT_SRTT)
v = tp_tick_us * G_INITSRTT(t4_read_reg(sc, reg));
else
v = tp_tick_us * t4_read_reg(sc, reg);
return (sysctl_handle_long(oidp, &v, 0, req));
}
/*
 * All fields in TP_SHIFT_CNT are 4 bits wide.  The starting bit position of
 * the field is passed in via arg2.
 */
static int
sysctl_tp_shift_cnt(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
int idx = arg2;
u_int v;
MPASS(idx >= 0 && idx <= 24);
v = (t4_read_reg(sc, A_TP_SHIFT_CNT) >> idx) & 0xf;
return (sysctl_handle_int(oidp, &v, 0, req));
}
static int
sysctl_tp_backoff(SYSCTL_HANDLER_ARGS)
{
struct adapter *sc = arg1;
int idx = arg2;
u_int shift, v, r;
MPASS(idx >= 0 && idx < 16);
r = A_TP_TCP_BACKOFF_REG0 + (idx & ~3);
shift = (idx & 3) << 3;
v = (t4_read_reg(sc, r) >> shift) & M_TIMERBACKOFFINDEX0;
return (sysctl_handle_int(oidp, &v, 0, req));
}
static int
sysctl_holdoff_tmr_idx_ofld(SYSCTL_HANDLER_ARGS)
{
struct vi_info *vi = arg1;
struct adapter *sc = vi->pi->adapter;
int idx, rc, i;
struct sge_ofld_rxq *ofld_rxq;
uint8_t v;
idx = vi->ofld_tmr_idx;
rc = sysctl_handle_int(oidp, &idx, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (idx < 0 || idx >= SGE_NTIMERS)
return (EINVAL);
rc = begin_synchronized_op(sc, vi, HOLD_LOCK | SLEEP_OK | INTR_OK,
"t4otmr");
if (rc)
return (rc);
v = V_QINTR_TIMER_IDX(idx) | V_QINTR_CNT_EN(vi->ofld_pktc_idx != -1);
for_each_ofld_rxq(vi, i, ofld_rxq) {
#ifdef atomic_store_rel_8
atomic_store_rel_8(&ofld_rxq->iq.intr_params, v);
#else
ofld_rxq->iq.intr_params = v;
#endif
}
vi->ofld_tmr_idx = idx;
end_synchronized_op(sc, LOCK_HELD);
return (0);
}
static int
sysctl_holdoff_pktc_idx_ofld(SYSCTL_HANDLER_ARGS)
{
struct vi_info *vi = arg1;
struct adapter *sc = vi->pi->adapter;
int idx, rc;
idx = vi->ofld_pktc_idx;
rc = sysctl_handle_int(oidp, &idx, 0, req);
if (rc != 0 || req->newptr == NULL)
return (rc);
if (idx < -1 || idx >= SGE_NCOUNTERS)
return (EINVAL);
rc = begin_synchronized_op(sc, vi, HOLD_LOCK | SLEEP_OK | INTR_OK,
"t4opktc");
if (rc)
return (rc);
if (vi->flags & VI_INIT_DONE)
rc = EBUSY; /* cannot be changed once the queues are created */
else
vi->ofld_pktc_idx = idx;
end_synchronized_op(sc, LOCK_HELD);
return (rc);
}
#endif
static int
get_sge_context(struct adapter *sc, struct t4_sge_context *cntxt)
{
int rc;
if (cntxt->cid > M_CTXTQID)
return (EINVAL);
if (cntxt->mem_id != CTXT_EGRESS && cntxt->mem_id != CTXT_INGRESS &&
cntxt->mem_id != CTXT_FLM && cntxt->mem_id != CTXT_CNM)
return (EINVAL);
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4ctxt");
if (rc)
return (rc);
if (sc->flags & FW_OK) {
rc = -t4_sge_ctxt_rd(sc, sc->mbox, cntxt->cid, cntxt->mem_id,
&cntxt->data[0]);
if (rc == 0)
goto done;
}
/*
* Read via firmware failed or wasn't even attempted. Read directly via
* the backdoor.
*/
rc = -t4_sge_ctxt_rd_bd(sc, cntxt->cid, cntxt->mem_id, &cntxt->data[0]);
done:
end_synchronized_op(sc, 0);
return (rc);
}
static int
load_fw(struct adapter *sc, struct t4_data *fw)
{
int rc;
uint8_t *fw_data;
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4ldfw");
if (rc)
return (rc);
/*
* The firmware, with the sole exception of the memory parity error
* handler, runs from memory and not flash. It is almost always safe to
* install a new firmware on a running system. Just set bit 1 in
* hw.cxgbe.dflags or dev.<nexus>.<n>.dflags first.
*/
if (sc->flags & FULL_INIT_DONE &&
(sc->debug_flags & DF_LOAD_FW_ANYTIME) == 0) {
rc = EBUSY;
goto done;
}
fw_data = malloc(fw->len, M_CXGBE, M_WAITOK);
if (fw_data == NULL) {
rc = ENOMEM;
goto done;
}
rc = copyin(fw->data, fw_data, fw->len);
if (rc == 0)
rc = -t4_load_fw(sc, fw_data, fw->len);
free(fw_data, M_CXGBE);
done:
end_synchronized_op(sc, 0);
return (rc);
}
static int
load_cfg(struct adapter *sc, struct t4_data *cfg)
{
int rc;
uint8_t *cfg_data = NULL;
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4ldcf");
if (rc)
return (rc);
if (cfg->len == 0) {
/* clear */
rc = -t4_load_cfg(sc, NULL, 0);
goto done;
}
cfg_data = malloc(cfg->len, M_CXGBE, M_WAITOK);
if (cfg_data == NULL) {
rc = ENOMEM;
goto done;
}
rc = copyin(cfg->data, cfg_data, cfg->len);
if (rc == 0)
rc = -t4_load_cfg(sc, cfg_data, cfg->len);
free(cfg_data, M_CXGBE);
done:
end_synchronized_op(sc, 0);
return (rc);
}
static int
load_boot(struct adapter *sc, struct t4_bootrom *br)
{
int rc;
uint8_t *br_data = NULL;
u_int offset;
if (br->len > 1024 * 1024)
return (EFBIG);
if (br->pf_offset == 0) {
/* pfidx */
if (br->pfidx_addr > 7)
return (EINVAL);
offset = G_OFFSET(t4_read_reg(sc, PF_REG(br->pfidx_addr,
A_PCIE_PF_EXPROM_OFST)));
} else if (br->pf_offset == 1) {
/* offset */
offset = G_OFFSET(br->pfidx_addr);
} else {
return (EINVAL);
}
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4ldbr");
if (rc)
return (rc);
if (br->len == 0) {
/* clear */
rc = -t4_load_boot(sc, NULL, offset, 0);
goto done;
}
br_data = malloc(br->len, M_CXGBE, M_WAITOK);
if (br_data == NULL) {
rc = ENOMEM;
goto done;
}
rc = copyin(br->data, br_data, br->len);
if (rc == 0)
rc = -t4_load_boot(sc, br_data, offset, br->len);
free(br_data, M_CXGBE);
done:
end_synchronized_op(sc, 0);
return (rc);
}
static int
load_bootcfg(struct adapter *sc, struct t4_data *bc)
{
int rc;
uint8_t *bc_data = NULL;
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4ldcf");
if (rc)
return (rc);
if (bc->len == 0) {
/* clear */
rc = -t4_load_bootcfg(sc, NULL, 0);
goto done;
}
bc_data = malloc(bc->len, M_CXGBE, M_WAITOK);
if (bc_data == NULL) {
rc = ENOMEM;
goto done;
}
rc = copyin(bc->data, bc_data, bc->len);
if (rc == 0)
rc = -t4_load_bootcfg(sc, bc_data, bc->len);
free(bc_data, M_CXGBE);
done:
end_synchronized_op(sc, 0);
return (rc);
}
static int
cudbg_dump(struct adapter *sc, struct t4_cudbg_dump *dump)
{
int rc;
struct cudbg_init *cudbg;
void *handle, *buf;
/* buf is large, don't block if no memory is available */
buf = malloc(dump->len, M_CXGBE, M_NOWAIT | M_ZERO);
if (buf == NULL)
return (ENOMEM);
handle = cudbg_alloc_handle();
if (handle == NULL) {
rc = ENOMEM;
goto done;
}
cudbg = cudbg_get_init(handle);
cudbg->adap = sc;
cudbg->print = (cudbg_print_cb)printf;
#ifndef notyet
device_printf(sc->dev, "%s: wr_flash %u, len %u, data %p.\n",
__func__, dump->wr_flash, dump->len, dump->data);
#endif
if (dump->wr_flash)
cudbg->use_flash = 1;
MPASS(sizeof(cudbg->dbg_bitmap) == sizeof(dump->bitmap));
memcpy(cudbg->dbg_bitmap, dump->bitmap, sizeof(cudbg->dbg_bitmap));
rc = cudbg_collect(handle, buf, &dump->len);
if (rc != 0)
goto done;
rc = copyout(buf, dump->data, dump->len);
done:
cudbg_free_handle(handle);
free(buf, M_CXGBE);
return (rc);
}
static void
free_offload_policy(struct t4_offload_policy *op)
{
struct offload_rule *r;
int i;
if (op == NULL)
return;
r = &op->rule[0];
for (i = 0; i < op->nrules; i++, r++) {
free(r->bpf_prog.bf_insns, M_CXGBE);
}
free(op->rule, M_CXGBE);
free(op, M_CXGBE);
}
static int
set_offload_policy(struct adapter *sc, struct t4_offload_policy *uop)
{
int i, rc, len;
struct t4_offload_policy *op, *old;
struct bpf_program *bf;
const struct offload_settings *s;
struct offload_rule *r;
void *u;
if (!is_offload(sc))
return (ENODEV);
if (uop->nrules == 0) {
/* Delete installed policies. */
op = NULL;
goto set_policy;
}
if (uop->nrules > 256) { /* arbitrary */
return (E2BIG);
}
/* Copy userspace offload policy to kernel */
op = malloc(sizeof(*op), M_CXGBE, M_ZERO | M_WAITOK);
op->nrules = uop->nrules;
len = op->nrules * sizeof(struct offload_rule);
op->rule = malloc(len, M_CXGBE, M_ZERO | M_WAITOK);
rc = copyin(uop->rule, op->rule, len);
if (rc) {
free(op->rule, M_CXGBE);
free(op, M_CXGBE);
return (rc);
}
r = &op->rule[0];
for (i = 0; i < op->nrules; i++, r++) {
/* Validate open_type */
if (r->open_type != OPEN_TYPE_LISTEN &&
r->open_type != OPEN_TYPE_ACTIVE &&
r->open_type != OPEN_TYPE_PASSIVE &&
r->open_type != OPEN_TYPE_DONTCARE) {
error:
/*
* Rules 0 to i have malloc'd filters that need to be
* freed. Rules i+1 to nrules have userspace pointers
* and should be left alone.
*/
op->nrules = i;
free_offload_policy(op);
return (rc);
}
/* Validate settings */
s = &r->settings;
if ((s->offload != 0 && s->offload != 1) ||
s->cong_algo < -1 || s->cong_algo > CONG_ALG_HIGHSPEED ||
s->sched_class < -1 ||
s->sched_class >= sc->chip_params->nsched_cls) {
rc = EINVAL;
goto error;
}
bf = &r->bpf_prog;
u = bf->bf_insns; /* userspace ptr */
bf->bf_insns = NULL;
if (bf->bf_len == 0) {
/* legal, matches everything */
continue;
}
len = bf->bf_len * sizeof(*bf->bf_insns);
bf->bf_insns = malloc(len, M_CXGBE, M_ZERO | M_WAITOK);
rc = copyin(u, bf->bf_insns, len);
if (rc != 0)
goto error;
if (!bpf_validate(bf->bf_insns, bf->bf_len)) {
rc = EINVAL;
goto error;
}
}
set_policy:
rw_wlock(&sc->policy_lock);
old = sc->policy;
sc->policy = op;
rw_wunlock(&sc->policy_lock);
free_offload_policy(old);
return (0);
}
#define MAX_READ_BUF_SIZE (128 * 1024)
static int
read_card_mem(struct adapter *sc, int win, struct t4_mem_range *mr)
{
uint32_t addr, remaining, n;
uint32_t *buf;
int rc;
uint8_t *dst;
rc = validate_mem_range(sc, mr->addr, mr->len);
if (rc != 0)
return (rc);
buf = malloc(min(mr->len, MAX_READ_BUF_SIZE), M_CXGBE, M_WAITOK);
addr = mr->addr;
remaining = mr->len;
dst = (void *)mr->data;
while (remaining) {
n = min(remaining, MAX_READ_BUF_SIZE);
read_via_memwin(sc, 2, addr, buf, n);
rc = copyout(buf, dst, n);
if (rc != 0)
break;
dst += n;
remaining -= n;
addr += n;
}
free(buf, M_CXGBE);
return (rc);
}
#undef MAX_READ_BUF_SIZE
static int
read_i2c(struct adapter *sc, struct t4_i2c_data *i2cd)
{
int rc;
if (i2cd->len == 0 || i2cd->port_id >= sc->params.nports)
return (EINVAL);
if (i2cd->len > sizeof(i2cd->data))
return (EFBIG);
rc = begin_synchronized_op(sc, NULL, SLEEP_OK | INTR_OK, "t4i2crd");
if (rc)
return (rc);
rc = -t4_i2c_rd(sc, sc->mbox, i2cd->port_id, i2cd->dev_addr,
i2cd->offset, i2cd->len, &i2cd->data[0]);
end_synchronized_op(sc, 0);
return (rc);
}
int
t4_os_find_pci_capability(struct adapter *sc, int cap)
{
int i;
return (pci_find_cap(sc->dev, cap, &i) == 0 ? i : 0);
}
int
t4_os_pci_save_state(struct adapter *sc)
{
device_t dev;
struct pci_devinfo *dinfo;
dev = sc->dev;
dinfo = device_get_ivars(dev);
pci_cfg_save(dev, dinfo, 0);
return (0);
}
int
t4_os_pci_restore_state(struct adapter *sc)
{
device_t dev;
struct pci_devinfo *dinfo;
dev = sc->dev;
dinfo = device_get_ivars(dev);
pci_cfg_restore(dev, dinfo);
return (0);
}
void
t4_os_portmod_changed(struct port_info *pi)
{
struct adapter *sc = pi->adapter;
struct vi_info *vi;
struct ifnet *ifp;
static const char *mod_str[] = {
NULL, "LR", "SR", "ER", "TWINAX", "active TWINAX", "LRM"
};
KASSERT((pi->flags & FIXED_IFMEDIA) == 0,
("%s: port_type %u", __func__, pi->port_type));
vi = &pi->vi[0];
if (begin_synchronized_op(sc, vi, HOLD_LOCK, "t4mod") == 0) {
PORT_LOCK(pi);
build_medialist(pi);
if (pi->mod_type != FW_PORT_MOD_TYPE_NONE) {
fixup_link_config(pi);
apply_link_config(pi);
}
PORT_UNLOCK(pi);
end_synchronized_op(sc, LOCK_HELD);
}
ifp = vi->ifp;
if (pi->mod_type == FW_PORT_MOD_TYPE_NONE)
if_printf(ifp, "transceiver unplugged.\n");
else if (pi->mod_type == FW_PORT_MOD_TYPE_UNKNOWN)
if_printf(ifp, "unknown transceiver inserted.\n");
else if (pi->mod_type == FW_PORT_MOD_TYPE_NOTSUPPORTED)
if_printf(ifp, "unsupported transceiver inserted.\n");
else if (pi->mod_type > 0 && pi->mod_type < nitems(mod_str)) {
if_printf(ifp, "%dGbps %s transceiver inserted.\n",
port_top_speed(pi), mod_str[pi->mod_type]);
} else {
if_printf(ifp, "transceiver (type %d) inserted.\n",
pi->mod_type);
}
}
void
t4_os_link_changed(struct port_info *pi)
{
struct vi_info *vi;
struct ifnet *ifp;
struct link_config *lc;
int v;
PORT_LOCK_ASSERT_OWNED(pi);
for_each_vi(pi, v, vi) {
ifp = vi->ifp;
if (ifp == NULL)
continue;
lc = &pi->link_cfg;
if (lc->link_ok) {
ifp->if_baudrate = IF_Mbps(lc->speed);
if_link_state_change(ifp, LINK_STATE_UP);
} else {
if_link_state_change(ifp, LINK_STATE_DOWN);
}
}
}
void
t4_iterate(void (*func)(struct adapter *, void *), void *arg)
{
struct adapter *sc;
sx_slock(&t4_list_lock);
SLIST_FOREACH(sc, &t4_list, link) {
/*
* func should not make any assumptions about what state sc is
* in - the only guarantee is that sc->sc_lock is a valid lock.
*/
func(sc, arg);
}
sx_sunlock(&t4_list_lock);
}
static int
t4_ioctl(struct cdev *dev, unsigned long cmd, caddr_t data, int fflag,
struct thread *td)
{
int rc;
struct adapter *sc = dev->si_drv1;
rc = priv_check(td, PRIV_DRIVER);
if (rc != 0)
return (rc);
switch (cmd) {
case CHELSIO_T4_GETREG: {
struct t4_reg *edata = (struct t4_reg *)data;
if ((edata->addr & 0x3) != 0 || edata->addr >= sc->mmio_len)
return (EFAULT);
if (edata->size == 4)
edata->val = t4_read_reg(sc, edata->addr);
else if (edata->size == 8)
edata->val = t4_read_reg64(sc, edata->addr);
else
return (EINVAL);
break;
}
case CHELSIO_T4_SETREG: {
struct t4_reg *edata = (struct t4_reg *)data;
if ((edata->addr & 0x3) != 0 || edata->addr >= sc->mmio_len)
return (EFAULT);
if (edata->size == 4) {
if (edata->val & 0xffffffff00000000)
return (EINVAL);
t4_write_reg(sc, edata->addr, (uint32_t) edata->val);
} else if (edata->size == 8)
t4_write_reg64(sc, edata->addr, edata->val);
else
return (EINVAL);
break;
}
case CHELSIO_T4_REGDUMP: {
struct t4_regdump *regs = (struct t4_regdump *)data;
int reglen = t4_get_regs_len(sc);
uint8_t *buf;
if (regs->len < reglen) {
regs->len = reglen; /* hint to the caller */
return (ENOBUFS);
}
regs->len = reglen;
buf = malloc(reglen, M_CXGBE, M_WAITOK | M_ZERO);
get_regs(sc, regs, buf);
rc = copyout(buf, regs->data, reglen);
free(buf, M_CXGBE);
break;
}
case CHELSIO_T4_GET_FILTER_MODE:
rc = get_filter_mode(sc, (uint32_t *)data);
break;
case CHELSIO_T4_SET_FILTER_MODE:
rc = set_filter_mode(sc, *(uint32_t *)data);
break;
case CHELSIO_T4_GET_FILTER:
rc = get_filter(sc, (struct t4_filter *)data);
break;
case CHELSIO_T4_SET_FILTER:
rc = set_filter(sc, (struct t4_filter *)data);
break;
case CHELSIO_T4_DEL_FILTER:
rc = del_filter(sc, (struct t4_filter *)data);
break;
case CHELSIO_T4_GET_SGE_CONTEXT:
rc = get_sge_context(sc, (struct t4_sge_context *)data);
break;
case CHELSIO_T4_LOAD_FW:
rc = load_fw(sc, (struct t4_data *)data);
break;
case CHELSIO_T4_GET_MEM:
rc = read_card_mem(sc, 2, (struct t4_mem_range *)data);
break;
case CHELSIO_T4_GET_I2C:
rc = read_i2c(sc, (struct t4_i2c_data *)data);
break;
case CHELSIO_T4_CLEAR_STATS: {
int i, v, bg_map;
u_int port_id = *(uint32_t *)data;
struct port_info *pi;
struct vi_info *vi;
if (port_id >= sc->params.nports)
return (EINVAL);
pi = sc->port[port_id];
if (pi == NULL)
return (EIO);
/* MAC stats */
t4_clr_port_stats(sc, pi->tx_chan);
pi->tx_parse_error = 0;
pi->tnl_cong_drops = 0;
mtx_lock(&sc->reg_lock);
for_each_vi(pi, v, vi) {
if (vi->flags & VI_INIT_DONE)
t4_clr_vi_stats(sc, vi->viid);
}
bg_map = pi->mps_bg_map;
v = 0; /* reuse */
while (bg_map) {
i = ffs(bg_map) - 1;
t4_write_indirect(sc, A_TP_MIB_INDEX, A_TP_MIB_DATA, &v,
1, A_TP_MIB_TNL_CNG_DROP_0 + i);
bg_map &= ~(1 << i);
}
mtx_unlock(&sc->reg_lock);
/*
* Since this command accepts a port, clear stats for
* all VIs on this port.
*/
for_each_vi(pi, v, vi) {
if (vi->flags & VI_INIT_DONE) {
struct sge_rxq *rxq;
struct sge_txq *txq;
struct sge_wrq *wrq;
for_each_rxq(vi, i, rxq) {
#if defined(INET) || defined(INET6)
rxq->lro.lro_queued = 0;
rxq->lro.lro_flushed = 0;
#endif
rxq->rxcsum = 0;
rxq->vlan_extraction = 0;
}
for_each_txq(vi, i, txq) {
txq->txcsum = 0;
txq->tso_wrs = 0;
txq->vlan_insertion = 0;
txq->imm_wrs = 0;
txq->sgl_wrs = 0;
txq->txpkt_wrs = 0;
txq->txpkts0_wrs = 0;
txq->txpkts1_wrs = 0;
txq->txpkts0_pkts = 0;
txq->txpkts1_pkts = 0;
txq->raw_wrs = 0;
mp_ring_reset_stats(txq->r);
}
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
/* nothing to clear for each ofld_rxq */
for_each_ofld_txq(vi, i, wrq) {
wrq->tx_wrs_direct = 0;
wrq->tx_wrs_copied = 0;
}
#endif
if (IS_MAIN_VI(vi)) {
wrq = &sc->sge.ctrlq[pi->port_id];
wrq->tx_wrs_direct = 0;
wrq->tx_wrs_copied = 0;
}
}
}
break;
}
case CHELSIO_T4_SCHED_CLASS:
rc = t4_set_sched_class(sc, (struct t4_sched_params *)data);
break;
case CHELSIO_T4_SCHED_QUEUE:
rc = t4_set_sched_queue(sc, (struct t4_sched_queue *)data);
break;
case CHELSIO_T4_GET_TRACER:
rc = t4_get_tracer(sc, (struct t4_tracer *)data);
break;
case CHELSIO_T4_SET_TRACER:
rc = t4_set_tracer(sc, (struct t4_tracer *)data);
break;
case CHELSIO_T4_LOAD_CFG:
rc = load_cfg(sc, (struct t4_data *)data);
break;
case CHELSIO_T4_LOAD_BOOT:
rc = load_boot(sc, (struct t4_bootrom *)data);
break;
case CHELSIO_T4_LOAD_BOOTCFG:
rc = load_bootcfg(sc, (struct t4_data *)data);
break;
case CHELSIO_T4_CUDBG_DUMP:
rc = cudbg_dump(sc, (struct t4_cudbg_dump *)data);
break;
case CHELSIO_T4_SET_OFLD_POLICY:
rc = set_offload_policy(sc, (struct t4_offload_policy *)data);
break;
default:
rc = ENOTTY;
}
return (rc);
}
#ifdef TCP_OFFLOAD
static int
toe_capability(struct vi_info *vi, int enable)
{
int rc;
struct port_info *pi = vi->pi;
struct adapter *sc = pi->adapter;
ASSERT_SYNCHRONIZED_OP(sc);
if (!is_offload(sc))
return (ENODEV);
if (enable) {
if ((vi->ifp->if_capenable & IFCAP_TOE) != 0) {
/* TOE is already enabled. */
return (0);
}
/*
* We need the port's queues around so that we're able to send
* and receive CPLs to/from the TOE even if the ifnet for this
* port has never been UP'd administratively.
*/
if (!(vi->flags & VI_INIT_DONE)) {
rc = vi_full_init(vi);
if (rc)
return (rc);
}
if (!(pi->vi[0].flags & VI_INIT_DONE)) {
rc = vi_full_init(&pi->vi[0]);
if (rc)
return (rc);
}
if (isset(&sc->offload_map, pi->port_id)) {
/* TOE is enabled on another VI of this port. */
pi->uld_vis++;
return (0);
}
if (!uld_active(sc, ULD_TOM)) {
rc = t4_activate_uld(sc, ULD_TOM);
if (rc == EAGAIN) {
log(LOG_WARNING,
"You must kldload t4_tom.ko before trying "
"to enable TOE on a cxgbe interface.\n");
}
if (rc != 0)
return (rc);
KASSERT(sc->tom_softc != NULL,
("%s: TOM activated but softc NULL", __func__));
KASSERT(uld_active(sc, ULD_TOM),
("%s: TOM activated but flag not set", __func__));
}
/* Activate iWARP and iSCSI too, if the modules are loaded. */
if (!uld_active(sc, ULD_IWARP))
(void) t4_activate_uld(sc, ULD_IWARP);
if (!uld_active(sc, ULD_ISCSI))
(void) t4_activate_uld(sc, ULD_ISCSI);
pi->uld_vis++;
setbit(&sc->offload_map, pi->port_id);
} else {
pi->uld_vis--;
if (!isset(&sc->offload_map, pi->port_id) || pi->uld_vis > 0)
return (0);
KASSERT(uld_active(sc, ULD_TOM),
("%s: TOM never initialized?", __func__));
clrbit(&sc->offload_map, pi->port_id);
}
return (0);
}
/*
* Add an upper layer driver to the global list.
*/
int
t4_register_uld(struct uld_info *ui)
{
int rc = 0;
struct uld_info *u;
sx_xlock(&t4_uld_list_lock);
SLIST_FOREACH(u, &t4_uld_list, link) {
if (u->uld_id == ui->uld_id) {
rc = EEXIST;
goto done;
}
}
SLIST_INSERT_HEAD(&t4_uld_list, ui, link);
ui->refcount = 0;
done:
sx_xunlock(&t4_uld_list_lock);
return (rc);
}
int
t4_unregister_uld(struct uld_info *ui)
{
int rc = EINVAL;
struct uld_info *u;
sx_xlock(&t4_uld_list_lock);
SLIST_FOREACH(u, &t4_uld_list, link) {
if (u == ui) {
if (ui->refcount > 0) {
rc = EBUSY;
goto done;
}
SLIST_REMOVE(&t4_uld_list, ui, uld_info, link);
rc = 0;
goto done;
}
}
done:
sx_xunlock(&t4_uld_list_lock);
return (rc);
}
int
t4_activate_uld(struct adapter *sc, int id)
{
int rc;
struct uld_info *ui;
ASSERT_SYNCHRONIZED_OP(sc);
if (id < 0 || id > ULD_MAX)
return (EINVAL);
rc = EAGAIN; /* kldload the module with this ULD and try again. */
sx_slock(&t4_uld_list_lock);
SLIST_FOREACH(ui, &t4_uld_list, link) {
if (ui->uld_id == id) {
if (!(sc->flags & FULL_INIT_DONE)) {
rc = adapter_full_init(sc);
if (rc != 0)
break;
}
rc = ui->activate(sc);
if (rc == 0) {
setbit(&sc->active_ulds, id);
ui->refcount++;
}
break;
}
}
sx_sunlock(&t4_uld_list_lock);
return (rc);
}
int
t4_deactivate_uld(struct adapter *sc, int id)
{
int rc;
struct uld_info *ui;
ASSERT_SYNCHRONIZED_OP(sc);
if (id < 0 || id > ULD_MAX)
return (EINVAL);
rc = ENXIO;
sx_slock(&t4_uld_list_lock);
SLIST_FOREACH(ui, &t4_uld_list, link) {
if (ui->uld_id == id) {
rc = ui->deactivate(sc);
if (rc == 0) {
clrbit(&sc->active_ulds, id);
ui->refcount--;
}
break;
}
}
sx_sunlock(&t4_uld_list_lock);
return (rc);
}
int
uld_active(struct adapter *sc, int uld_id)
{
MPASS(uld_id >= 0 && uld_id <= ULD_MAX);
return (isset(&sc->active_ulds, uld_id));
}
#endif
/*
* t = ptr to tunable.
* nc = number of CPUs.
* c = compiled in default for that tunable.
*/
static void
calculate_nqueues(int *t, int nc, const int c)
{
int nq;
if (*t > 0)
return;
nq = *t < 0 ? -*t : c;
*t = min(nc, nq);
}
/*
* Come up with reasonable defaults for some of the tunables, provided they're
* not set by the user (in which case we'll use the values as is).
*/
static void
tweak_tunables(void)
{
int nc = mp_ncpus; /* our snapshot of the number of CPUs */
if (t4_ntxq < 1) {
#ifdef RSS
t4_ntxq = rss_getnumbuckets();
#else
calculate_nqueues(&t4_ntxq, nc, NTXQ);
#endif
}
calculate_nqueues(&t4_ntxq_vi, nc, NTXQ_VI);
if (t4_nrxq < 1) {
#ifdef RSS
t4_nrxq = rss_getnumbuckets();
#else
calculate_nqueues(&t4_nrxq, nc, NRXQ);
#endif
}
calculate_nqueues(&t4_nrxq_vi, nc, NRXQ_VI);
#if defined(TCP_OFFLOAD) || defined(RATELIMIT)
calculate_nqueues(&t4_nofldtxq, nc, NOFLDTXQ);
calculate_nqueues(&t4_nofldtxq_vi, nc, NOFLDTXQ_VI);
#endif
#ifdef TCP_OFFLOAD
calculate_nqueues(&t4_nofldrxq, nc, NOFLDRXQ);
calculate_nqueues(&t4_nofldrxq_vi, nc, NOFLDRXQ_VI);
if (t4_toecaps_allowed == -1)
t4_toecaps_allowed = FW_CAPS_CONFIG_TOE;
if (t4_rdmacaps_allowed == -1) {
t4_rdmacaps_allowed = FW_CAPS_CONFIG_RDMA_RDDP |
FW_CAPS_CONFIG_RDMA_RDMAC;
}
if (t4_iscsicaps_allowed == -1) {
t4_iscsicaps_allowed = FW_CAPS_CONFIG_ISCSI_INITIATOR_PDU |
FW_CAPS_CONFIG_ISCSI_TARGET_PDU |
FW_CAPS_CONFIG_ISCSI_T10DIF;
}
if (t4_tmr_idx_ofld < 0 || t4_tmr_idx_ofld >= SGE_NTIMERS)
t4_tmr_idx_ofld = TMR_IDX_OFLD;
if (t4_pktc_idx_ofld < -1 || t4_pktc_idx_ofld >= SGE_NCOUNTERS)
t4_pktc_idx_ofld = PKTC_IDX_OFLD;
#else
if (t4_toecaps_allowed == -1)
t4_toecaps_allowed = 0;
if (t4_rdmacaps_allowed == -1)
t4_rdmacaps_allowed = 0;
if (t4_iscsicaps_allowed == -1)
t4_iscsicaps_allowed = 0;
#endif
#ifdef DEV_NETMAP
calculate_nqueues(&t4_nnmtxq_vi, nc, NNMTXQ_VI);
calculate_nqueues(&t4_nnmrxq_vi, nc, NNMRXQ_VI);
#endif
if (t4_tmr_idx < 0 || t4_tmr_idx >= SGE_NTIMERS)
t4_tmr_idx = TMR_IDX;
if (t4_pktc_idx < -1 || t4_pktc_idx >= SGE_NCOUNTERS)
t4_pktc_idx = PKTC_IDX;
if (t4_qsize_txq < 128)
t4_qsize_txq = 128;
if (t4_qsize_rxq < 128)
t4_qsize_rxq = 128;
while (t4_qsize_rxq & 7)
t4_qsize_rxq++;
t4_intr_types &= INTR_MSIX | INTR_MSI | INTR_INTX;
/*
* Number of VIs to create per-port. The first VI is the "main" regular
* VI for the port. The rest are additional virtual interfaces on the
* same physical port. Note that the main VI does not have native
* netmap support but the extra VIs do.
*
* Limit the number of VIs per port to the number of available
* MAC addresses per port.
*/
if (t4_num_vis < 1)
t4_num_vis = 1;
if (t4_num_vis > nitems(vi_mac_funcs)) {
t4_num_vis = nitems(vi_mac_funcs);
printf("cxgbe: number of VIs limited to %d\n", t4_num_vis);
}
if (pcie_relaxed_ordering < 0 || pcie_relaxed_ordering > 2) {
pcie_relaxed_ordering = 1;
#if defined(__i386__) || defined(__amd64__)
if (cpu_vendor_id == CPU_VENDOR_INTEL)
pcie_relaxed_ordering = 0;
#endif
}
}
#ifdef DDB
static void
t4_dump_tcb(struct adapter *sc, int tid)
{
uint32_t base, i, j, off, pf, reg, save, tcb_addr, win_pos;
reg = PCIE_MEM_ACCESS_REG(A_PCIE_MEM_ACCESS_OFFSET, 2);
save = t4_read_reg(sc, reg);
base = sc->memwin[2].mw_base;
/* Dump TCB for the tid */
tcb_addr = t4_read_reg(sc, A_TP_CMM_TCB_BASE);
tcb_addr += tid * TCB_SIZE;
if (is_t4(sc)) {
pf = 0;
win_pos = tcb_addr & ~0xf; /* start must be 16B aligned */
} else {
pf = V_PFNUM(sc->pf);
win_pos = tcb_addr & ~0x7f; /* start must be 128B aligned */
}
t4_write_reg(sc, reg, win_pos | pf);
t4_read_reg(sc, reg);
off = tcb_addr - win_pos;
for (i = 0; i < 4; i++) {
uint32_t buf[8];
for (j = 0; j < 8; j++, off += 4)
buf[j] = htonl(t4_read_reg(sc, base + off));
db_printf("%08x %08x %08x %08x %08x %08x %08x %08x\n",
buf[0], buf[1], buf[2], buf[3], buf[4], buf[5], buf[6],
buf[7]);
}
t4_write_reg(sc, reg, save);
t4_read_reg(sc, reg);
}
static void
t4_dump_devlog(struct adapter *sc)
{
struct devlog_params *dparams = &sc->params.devlog;
struct fw_devlog_e e;
int i, first, j, m, nentries, rc;
uint64_t ftstamp = UINT64_MAX;
if (dparams->start == 0) {
db_printf("devlog params not valid\n");
return;
}
nentries = dparams->size / sizeof(struct fw_devlog_e);
m = fwmtype_to_hwmtype(dparams->memtype);
/* Find the first entry. */
first = -1;
for (i = 0; i < nentries && !db_pager_quit; i++) {
rc = -t4_mem_read(sc, m, dparams->start + i * sizeof(e),
sizeof(e), (void *)&e);
if (rc != 0)
break;
if (e.timestamp == 0)
break;
e.timestamp = be64toh(e.timestamp);
if (e.timestamp < ftstamp) {
ftstamp = e.timestamp;
first = i;
}
}
if (first == -1)
return;
i = first;
do {
rc = -t4_mem_read(sc, m, dparams->start + i * sizeof(e),
sizeof(e), (void *)&e);
if (rc != 0)
return;
if (e.timestamp == 0)
return;
e.timestamp = be64toh(e.timestamp);
e.seqno = be32toh(e.seqno);
for (j = 0; j < 8; j++)
e.params[j] = be32toh(e.params[j]);
db_printf("%10d %15ju %8s %8s ",
e.seqno, e.timestamp,
(e.level < nitems(devlog_level_strings) ?
devlog_level_strings[e.level] : "UNKNOWN"),
(e.facility < nitems(devlog_facility_strings) ?
devlog_facility_strings[e.facility] : "UNKNOWN"));
db_printf(e.fmt, e.params[0], e.params[1], e.params[2],
e.params[3], e.params[4], e.params[5], e.params[6],
e.params[7]);
if (++i == nentries)
i = 0;
} while (i != first && !db_pager_quit);
}
static struct command_table db_t4_table = LIST_HEAD_INITIALIZER(db_t4_table);
_DB_SET(_show, t4, NULL, db_show_table, 0, &db_t4_table);
DB_FUNC(devlog, db_show_devlog, db_t4_table, CS_OWN, NULL)
{
device_t dev;
int t;
bool valid;
valid = false;
t = db_read_token();
if (t == tIDENT) {
dev = device_lookup_by_name(db_tok_string);
valid = true;
}
db_skip_to_eol();
if (!valid) {
db_printf("usage: show t4 devlog <nexus>\n");
return;
}
if (dev == NULL) {
db_printf("device not found\n");
return;
}
t4_dump_devlog(device_get_softc(dev));
}
DB_FUNC(tcb, db_show_t4tcb, db_t4_table, CS_OWN, NULL)
{
device_t dev;
int radix, tid, t;
bool valid;
valid = false;
radix = db_radix;
db_radix = 10;
t = db_read_token();
if (t == tIDENT) {
dev = device_lookup_by_name(db_tok_string);
t = db_read_token();
if (t == tNUMBER) {
tid = db_tok_number;
valid = true;
}
}
db_radix = radix;
db_skip_to_eol();
if (!valid) {
db_printf("usage: show t4 tcb <nexus> <tid>\n");
return;
}
if (dev == NULL) {
db_printf("device not found\n");
return;
}
if (tid < 0) {
db_printf("invalid tid\n");
return;
}
t4_dump_tcb(device_get_softc(dev), tid);
}
#endif
/*
* Borrowed from cesa_prep_aes_key().
*
* NB: The crypto engine wants the words in the decryption key in reverse
* order.
*/
void
t4_aes_getdeckey(void *dec_key, const void *enc_key, unsigned int kbits)
{
uint32_t ek[4 * (RIJNDAEL_MAXNR + 1)];
uint32_t *dkey;
int i;
rijndaelKeySetupEnc(ek, enc_key, kbits);
dkey = dec_key;
dkey += (kbits / 8) / 4;
switch (kbits) {
case 128:
for (i = 0; i < 4; i++)
*--dkey = htobe32(ek[4 * 10 + i]);
break;
case 192:
for (i = 0; i < 2; i++)
*--dkey = htobe32(ek[4 * 11 + 2 + i]);
for (i = 0; i < 4; i++)
*--dkey = htobe32(ek[4 * 12 + i]);
break;
case 256:
for (i = 0; i < 4; i++)
*--dkey = htobe32(ek[4 * 13 + i]);
for (i = 0; i < 4; i++)
*--dkey = htobe32(ek[4 * 14 + i]);
break;
}
MPASS(dkey == dec_key);
}
static struct sx mlu; /* mod load unload */
SX_SYSINIT(cxgbe_mlu, &mlu, "cxgbe mod load/unload");
static int
mod_event(module_t mod, int cmd, void *arg)
{
int rc = 0;
static int loaded = 0;
switch (cmd) {
case MOD_LOAD:
sx_xlock(&mlu);
if (loaded++ == 0) {
t4_sge_modload();
t4_register_shared_cpl_handler(CPL_SET_TCB_RPL,
t4_filter_rpl, CPL_COOKIE_FILTER);
t4_register_shared_cpl_handler(CPL_L2T_WRITE_RPL,
do_l2t_write_rpl, CPL_COOKIE_FILTER);
t4_register_shared_cpl_handler(CPL_ACT_OPEN_RPL,
t4_hashfilter_ao_rpl, CPL_COOKIE_HASHFILTER);
t4_register_shared_cpl_handler(CPL_SET_TCB_RPL,
t4_hashfilter_tcb_rpl, CPL_COOKIE_HASHFILTER);
t4_register_shared_cpl_handler(CPL_ABORT_RPL_RSS,
t4_del_hashfilter_rpl, CPL_COOKIE_HASHFILTER);
t4_register_cpl_handler(CPL_TRACE_PKT, t4_trace_pkt);
t4_register_cpl_handler(CPL_T5_TRACE_PKT, t5_trace_pkt);
t4_register_cpl_handler(CPL_SMT_WRITE_RPL,
do_smt_write_rpl);
sx_init(&t4_list_lock, "T4/T5 adapters");
SLIST_INIT(&t4_list);
callout_init(&fatal_callout, 1);
#ifdef TCP_OFFLOAD
sx_init(&t4_uld_list_lock, "T4/T5 ULDs");
SLIST_INIT(&t4_uld_list);
#endif
#ifdef INET6
t4_clip_modload();
#endif
t4_tracer_modload();
tweak_tunables();
}
sx_xunlock(&mlu);
break;
case MOD_UNLOAD:
sx_xlock(&mlu);
if (--loaded == 0) {
int tries;
sx_slock(&t4_list_lock);
if (!SLIST_EMPTY(&t4_list)) {
rc = EBUSY;
sx_sunlock(&t4_list_lock);
goto done_unload;
}
#ifdef TCP_OFFLOAD
sx_slock(&t4_uld_list_lock);
if (!SLIST_EMPTY(&t4_uld_list)) {
rc = EBUSY;
sx_sunlock(&t4_uld_list_lock);
sx_sunlock(&t4_list_lock);
goto done_unload;
}
#endif
tries = 0;
while (tries++ < 5 && t4_sge_extfree_refs() != 0) {
uprintf("%ju clusters with custom free routine "
"still in use.\n", t4_sge_extfree_refs());
pause("t4unload", 2 * hz);
}
#ifdef TCP_OFFLOAD
sx_sunlock(&t4_uld_list_lock);
#endif
sx_sunlock(&t4_list_lock);
if (t4_sge_extfree_refs() == 0) {
t4_tracer_modunload();
#ifdef INET6
t4_clip_modunload();
#endif
#ifdef TCP_OFFLOAD
sx_destroy(&t4_uld_list_lock);
#endif
sx_destroy(&t4_list_lock);
t4_sge_modunload();
loaded = 0;
} else {
rc = EBUSY;
loaded++; /* undo earlier decrement */
}
}
done_unload:
sx_xunlock(&mlu);
break;
}
return (rc);
}
static devclass_t t4_devclass, t5_devclass, t6_devclass;
static devclass_t cxgbe_devclass, cxl_devclass, cc_devclass;
static devclass_t vcxgbe_devclass, vcxl_devclass, vcc_devclass;
DRIVER_MODULE(t4nex, pci, t4_driver, t4_devclass, mod_event, 0);
MODULE_VERSION(t4nex, 1);
MODULE_DEPEND(t4nex, firmware, 1, 1, 1);
#ifdef DEV_NETMAP
MODULE_DEPEND(t4nex, netmap, 1, 1, 1);
#endif /* DEV_NETMAP */
DRIVER_MODULE(t5nex, pci, t5_driver, t5_devclass, mod_event, 0);
MODULE_VERSION(t5nex, 1);
MODULE_DEPEND(t5nex, firmware, 1, 1, 1);
#ifdef DEV_NETMAP
MODULE_DEPEND(t5nex, netmap, 1, 1, 1);
#endif /* DEV_NETMAP */
DRIVER_MODULE(t6nex, pci, t6_driver, t6_devclass, mod_event, 0);
MODULE_VERSION(t6nex, 1);
MODULE_DEPEND(t6nex, firmware, 1, 1, 1);
#ifdef DEV_NETMAP
MODULE_DEPEND(t6nex, netmap, 1, 1, 1);
#endif /* DEV_NETMAP */
DRIVER_MODULE(cxgbe, t4nex, cxgbe_driver, cxgbe_devclass, 0, 0);
MODULE_VERSION(cxgbe, 1);
DRIVER_MODULE(cxl, t5nex, cxl_driver, cxl_devclass, 0, 0);
MODULE_VERSION(cxl, 1);
DRIVER_MODULE(cc, t6nex, cc_driver, cc_devclass, 0, 0);
MODULE_VERSION(cc, 1);
DRIVER_MODULE(vcxgbe, cxgbe, vcxgbe_driver, vcxgbe_devclass, 0, 0);
MODULE_VERSION(vcxgbe, 1);
DRIVER_MODULE(vcxl, cxl, vcxl_driver, vcxl_devclass, 0, 0);
MODULE_VERSION(vcxl, 1);
DRIVER_MODULE(vcc, cc, vcc_driver, vcc_devclass, 0, 0);
MODULE_VERSION(vcc, 1);
Index: projects/clang800-import/sys/dev/cxgbe/t4_vf.c
===================================================================
--- projects/clang800-import/sys/dev/cxgbe/t4_vf.c (revision 343955)
+++ projects/clang800-import/sys/dev/cxgbe/t4_vf.c (revision 343956)
@@ -1,967 +1,968 @@
/*-
* Copyright (c) 2016 Chelsio Communications, Inc.
* All rights reserved.
* Written by: John Baldwin <jhb@FreeBSD.org>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_inet.h"
#include "opt_inet6.h"
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/priv.h>
#include <dev/pci/pcivar.h>
#if defined(__i386__) || defined(__amd64__)
#include <vm/vm.h>
#include <vm/pmap.h>
#endif
#include "common/common.h"
#include "common/t4_regs.h"
#include "t4_ioctl.h"
#include "t4_mp_ring.h"
/*
* Some notes:
*
* The Virtual Interfaces are connected to an internal switch on the chip
* which allows VIs attached to the same port to talk to each other even when
* the port link is down. As a result, we might want to always report a
* VF's link as being "up".
*
* XXX: Add a TUNABLE and possible per-device sysctl for this?
*/
struct intrs_and_queues {
uint16_t intr_type; /* MSI, or MSI-X */
uint16_t nirq; /* Total # of vectors */
uint16_t ntxq; /* # of NIC txq's for each port */
uint16_t nrxq; /* # of NIC rxq's for each port */
};
struct {
uint16_t device;
char *desc;
} t4vf_pciids[] = {
{0x4800, "Chelsio T440-dbg VF"},
{0x4801, "Chelsio T420-CR VF"},
{0x4802, "Chelsio T422-CR VF"},
{0x4803, "Chelsio T440-CR VF"},
{0x4804, "Chelsio T420-BCH VF"},
{0x4805, "Chelsio T440-BCH VF"},
{0x4806, "Chelsio T440-CH VF"},
{0x4807, "Chelsio T420-SO VF"},
{0x4808, "Chelsio T420-CX VF"},
{0x4809, "Chelsio T420-BT VF"},
{0x480a, "Chelsio T404-BT VF"},
{0x480e, "Chelsio T440-LP-CR VF"},
}, t5vf_pciids[] = {
{0x5800, "Chelsio T580-dbg VF"},
{0x5801, "Chelsio T520-CR VF"}, /* 2 x 10G */
{0x5802, "Chelsio T522-CR VF"}, /* 2 x 10G, 2 X 1G */
{0x5803, "Chelsio T540-CR VF"}, /* 4 x 10G */
{0x5807, "Chelsio T520-SO VF"}, /* 2 x 10G, nomem */
{0x5809, "Chelsio T520-BT VF"}, /* 2 x 10GBaseT */
{0x580a, "Chelsio T504-BT VF"}, /* 4 x 1G */
{0x580d, "Chelsio T580-CR VF"}, /* 2 x 40G */
{0x580e, "Chelsio T540-LP-CR VF"}, /* 4 x 10G */
{0x5810, "Chelsio T580-LP-CR VF"}, /* 2 x 40G */
{0x5811, "Chelsio T520-LL-CR VF"}, /* 2 x 10G */
{0x5812, "Chelsio T560-CR VF"}, /* 1 x 40G, 2 x 10G */
{0x5814, "Chelsio T580-LP-SO-CR VF"}, /* 2 x 40G, nomem */
{0x5815, "Chelsio T502-BT VF"}, /* 2 x 1G */
#ifdef notyet
{0x5804, "Chelsio T520-BCH VF"},
{0x5805, "Chelsio T540-BCH VF"},
{0x5806, "Chelsio T540-CH VF"},
{0x5808, "Chelsio T520-CX VF"},
{0x580b, "Chelsio B520-SR VF"},
{0x580c, "Chelsio B504-BT VF"},
{0x580f, "Chelsio Amsterdam VF"},
{0x5813, "Chelsio T580-CHR VF"},
#endif
}, t6vf_pciids[] = {
{0x6800, "Chelsio T6-DBG-25 VF"}, /* 2 x 10/25G, debug */
{0x6801, "Chelsio T6225-CR VF"}, /* 2 x 10/25G */
{0x6802, "Chelsio T6225-SO-CR VF"}, /* 2 x 10/25G, nomem */
{0x6803, "Chelsio T6425-CR VF"}, /* 4 x 10/25G */
{0x6804, "Chelsio T6425-SO-CR VF"}, /* 4 x 10/25G, nomem */
{0x6805, "Chelsio T6225-OCP-SO VF"}, /* 2 x 10/25G, nomem */
{0x6806, "Chelsio T62100-OCP-SO VF"}, /* 2 x 40/50/100G, nomem */
{0x6807, "Chelsio T62100-LP-CR VF"}, /* 2 x 40/50/100G */
{0x6808, "Chelsio T62100-SO-CR VF"}, /* 2 x 40/50/100G, nomem */
{0x6809, "Chelsio T6210-BT VF"}, /* 2 x 10GBASE-T */
{0x680d, "Chelsio T62100-CR VF"}, /* 2 x 40/50/100G */
{0x6810, "Chelsio T6-DBG-100 VF"}, /* 2 x 40/50/100G, debug */
{0x6811, "Chelsio T6225-LL-CR VF"}, /* 2 x 10/25G */
{0x6814, "Chelsio T61100-OCP-SO VF"}, /* 1 x 40/50/100G, nomem */
{0x6815, "Chelsio T6201-BT VF"}, /* 2 x 1000BASE-T */
/* Custom */
{0x6880, "Chelsio T6225 80 VF"},
{0x6881, "Chelsio T62100 81 VF"},
};
static d_ioctl_t t4vf_ioctl;
static struct cdevsw t4vf_cdevsw = {
.d_version = D_VERSION,
.d_ioctl = t4vf_ioctl,
.d_name = "t4vf",
};
static int
t4vf_probe(device_t dev)
{
uint16_t d;
size_t i;
d = pci_get_device(dev);
for (i = 0; i < nitems(t4vf_pciids); i++) {
if (d == t4vf_pciids[i].device) {
device_set_desc(dev, t4vf_pciids[i].desc);
return (BUS_PROBE_DEFAULT);
}
}
return (ENXIO);
}
static int
t5vf_probe(device_t dev)
{
uint16_t d;
size_t i;
d = pci_get_device(dev);
for (i = 0; i < nitems(t5vf_pciids); i++) {
if (d == t5vf_pciids[i].device) {
device_set_desc(dev, t5vf_pciids[i].desc);
return (BUS_PROBE_DEFAULT);
}
}
return (ENXIO);
}
static int
t6vf_probe(device_t dev)
{
uint16_t d;
size_t i;
d = pci_get_device(dev);
for (i = 0; i < nitems(t6vf_pciids); i++) {
if (d == t6vf_pciids[i].device) {
device_set_desc(dev, t6vf_pciids[i].desc);
return (BUS_PROBE_DEFAULT);
}
}
return (ENXIO);
}
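The three probe routines above share one pattern: scan a device-ID table with nitems() and set the device description on a match, otherwise return ENXIO. A minimal standalone sketch of that lookup (the table below is a subset of the driver's IDs; the helper names are illustrative, not part of the driver):

```c
#include <stddef.h>
#include <assert.h>

/* Mirror of the driver's id/description pairs (subset). */
struct pciid {
	unsigned short device;
	const char *desc;
};

#define NITEMS(a)	(sizeof(a) / sizeof((a)[0]))

static const struct pciid t4vf_ids[] = {
	{0x4800, "Chelsio T440-dbg VF"},
	{0x4801, "Chelsio T420-CR VF"},
	{0x480e, "Chelsio T440-LP-CR VF"},
};

/*
 * Return the matching description, or NULL for an unknown device
 * (the driver returns ENXIO in that case).
 */
const char *
pciid_lookup(const struct pciid *tab, size_t n, unsigned short dev)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (tab[i].device == dev)
			return (tab[i].desc);
	return (NULL);
}
```

In the driver itself, a non-NULL result corresponds to device_set_desc() followed by returning BUS_PROBE_DEFAULT.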
#define FW_PARAM_DEV(param) \
(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
#define FW_PARAM_PFVF(param) \
(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param))
static int
get_params__pre_init(struct adapter *sc)
{
int rc;
uint32_t param[3], val[3];
param[0] = FW_PARAM_DEV(FWREV);
param[1] = FW_PARAM_DEV(TPREV);
param[2] = FW_PARAM_DEV(CCLK);
rc = -t4vf_query_params(sc, nitems(param), param, val);
if (rc != 0) {
device_printf(sc->dev,
"failed to query parameters (pre_init): %d.\n", rc);
return (rc);
}
sc->params.fw_vers = val[0];
sc->params.tp_vers = val[1];
sc->params.vpd.cclk = val[2];
snprintf(sc->fw_version, sizeof(sc->fw_version), "%u.%u.%u.%u",
G_FW_HDR_FW_VER_MAJOR(sc->params.fw_vers),
G_FW_HDR_FW_VER_MINOR(sc->params.fw_vers),
G_FW_HDR_FW_VER_MICRO(sc->params.fw_vers),
G_FW_HDR_FW_VER_BUILD(sc->params.fw_vers));
snprintf(sc->tp_version, sizeof(sc->tp_version), "%u.%u.%u.%u",
G_FW_HDR_FW_VER_MAJOR(sc->params.tp_vers),
G_FW_HDR_FW_VER_MINOR(sc->params.tp_vers),
G_FW_HDR_FW_VER_MICRO(sc->params.tp_vers),
G_FW_HDR_FW_VER_BUILD(sc->params.tp_vers));
return (0);
}
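get_params__pre_init() formats the firmware and TP versions by unpacking four byte-wide fields from a packed 32-bit version word. A standalone sketch of that unpacking (the 8-bits-per-field layout and shift positions follow the common Chelsio encoding used by the G_FW_HDR_FW_VER_* macros, but treat them as an assumption here):

```c
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

/* Extract one byte-wide field from a packed version word. */
unsigned int
fw_ver_field(uint32_t ver, int shift)
{
	return ((ver >> shift) & 0xff);
}

/*
 * Format "major.minor.micro.build", mirroring the snprintf() calls
 * in get_params__pre_init().
 */
void
fw_ver_string(uint32_t ver, char *buf, size_t len)
{
	snprintf(buf, len, "%u.%u.%u.%u",
	    fw_ver_field(ver, 24),	/* major */
	    fw_ver_field(ver, 16),	/* minor */
	    fw_ver_field(ver, 8),	/* micro */
	    fw_ver_field(ver, 0));	/* build */
}
```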
static int
get_params__post_init(struct adapter *sc)
{
int rc;
rc = -t4vf_get_sge_params(sc);
if (rc != 0) {
device_printf(sc->dev,
"unable to retrieve adapter SGE parameters: %d\n", rc);
return (rc);
}
rc = -t4vf_get_rss_glb_config(sc);
if (rc != 0) {
device_printf(sc->dev,
"unable to retrieve adapter RSS parameters: %d\n", rc);
return (rc);
}
if (sc->params.rss.mode != FW_RSS_GLB_CONFIG_CMD_MODE_BASICVIRTUAL) {
device_printf(sc->dev,
"unable to operate with global RSS mode %d\n",
sc->params.rss.mode);
return (EINVAL);
}
rc = t4_read_chip_settings(sc);
if (rc != 0)
return (rc);
/*
* Grab our Virtual Interface resource allocation, extract the
* features that we're interested in and do a bit of sanity testing on
* what we discover.
*/
rc = -t4vf_get_vfres(sc);
if (rc != 0) {
device_printf(sc->dev,
"unable to get virtual interface resources: %d\n", rc);
return (rc);
}
/*
* Check for various parameter sanity issues.
*/
if (sc->params.vfres.pmask == 0) {
device_printf(sc->dev, "no port access configured/usable!\n");
return (EINVAL);
}
if (sc->params.vfres.nvi == 0) {
device_printf(sc->dev,
"no virtual interfaces configured/usable!\n");
return (EINVAL);
}
sc->params.portvec = sc->params.vfres.pmask;
return (0);
}
static int
set_params__post_init(struct adapter *sc)
{
uint32_t param, val;
/* ask for encapsulated CPLs */
param = FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
val = 1;
(void)t4vf_set_params(sc, 1, &param, &val);
return (0);
}
#undef FW_PARAM_PFVF
#undef FW_PARAM_DEV
static int
cfg_itype_and_nqueues(struct adapter *sc, struct intrs_and_queues *iaq)
{
struct vf_resources *vfres;
int nrxq, ntxq, nports;
int itype, iq_avail, navail, rc;
/*
* Figure out the layout of queues across our VIs and ensure
* we can allocate enough interrupts for our layout.
*/
vfres = &sc->params.vfres;
nports = sc->params.nports;
bzero(iaq, sizeof(*iaq));
for (itype = INTR_MSIX; itype != 0; itype >>= 1) {
if (itype == INTR_INTX)
continue;
if (itype == INTR_MSIX)
navail = pci_msix_count(sc->dev);
else
navail = pci_msi_count(sc->dev);
if (navail == 0)
continue;
iaq->intr_type = itype;
/*
* XXX: The Linux driver reserves an Ingress Queue for
* forwarded interrupts when using MSI (but not MSI-X).
* It seems it just always asks for 2 interrupts and
* forwards all rxqs to the forwarded interrupt.
*
* We must reserve one IRQ for the firmware
* event queue.
*
* Every rxq requires an ingress queue with a free
* list and interrupts and an egress queue. Every txq
* requires an ETH egress queue.
*/
iaq->nirq = T4VF_EXTRA_INTR;
/*
* First, determine how many queues we can allocate.
* Start by finding the upper bound on rxqs from the
* limit on ingress queues.
*/
iq_avail = vfres->niqflint - iaq->nirq;
if (iq_avail < nports) {
device_printf(sc->dev,
"Not enough ingress queues (%d) for %d ports\n",
vfres->niqflint, nports);
return (ENXIO);
}
/*
* Try to honor the cap on interrupts. If there aren't
* enough interrupts for at least one interrupt per
* port, then don't bother, we will just forward all
* interrupts to one interrupt in that case.
*/
if (iaq->nirq + nports <= navail) {
if (iq_avail > navail - iaq->nirq)
iq_avail = navail - iaq->nirq;
}
nrxq = nports * t4_nrxq;
if (nrxq > iq_avail) {
/*
* Too many ingress queues. Use what we can.
*/
nrxq = (iq_avail / nports) * nports;
}
KASSERT(nrxq <= iq_avail, ("too many ingress queues"));
/*
* Next, determine the upper bound on txqs from the limit
* on ETH queues.
*/
if (vfres->nethctrl < nports) {
device_printf(sc->dev,
"Not enough ETH queues (%d) for %d ports\n",
vfres->nethctrl, nports);
return (ENXIO);
}
ntxq = nports * t4_ntxq;
if (ntxq > vfres->nethctrl) {
/*
* Too many ETH queues. Use what we can.
*/
ntxq = (vfres->nethctrl / nports) * nports;
}
KASSERT(ntxq <= vfres->nethctrl, ("too many ETH queues"));
/*
* Finally, ensure we have enough egress queues.
*/
if (vfres->neq < nports * 2) {
device_printf(sc->dev,
"Not enough egress queues (%d) for %d ports\n",
vfres->neq, nports);
return (ENXIO);
}
if (nrxq + ntxq > vfres->neq) {
/* Just punt and use 1 for everything. */
nrxq = ntxq = nports;
}
KASSERT(nrxq <= iq_avail, ("too many ingress queues"));
KASSERT(ntxq <= vfres->nethctrl, ("too many ETH queues"));
KASSERT(nrxq + ntxq <= vfres->neq, ("too many egress queues"));
/*
* Do we have enough interrupts? For MSI the interrupts
* have to be a power of 2 as well.
*/
iaq->nirq += nrxq;
iaq->ntxq = ntxq;
iaq->nrxq = nrxq;
if (iaq->nirq <= navail &&
(itype != INTR_MSI || powerof2(iaq->nirq))) {
navail = iaq->nirq;
if (itype == INTR_MSIX)
rc = pci_alloc_msix(sc->dev, &navail);
else
rc = pci_alloc_msi(sc->dev, &navail);
if (rc != 0) {
device_printf(sc->dev,
"failed to allocate vectors:%d, type=%d, req=%d, rcvd=%d\n",
itype, rc, iaq->nirq, navail);
return (rc);
}
if (navail == iaq->nirq) {
return (0);
}
pci_release_msi(sc->dev);
}
/* Fall back to a single interrupt. */
iaq->nirq = 1;
navail = iaq->nirq;
if (itype == INTR_MSIX)
rc = pci_alloc_msix(sc->dev, &navail);
else
rc = pci_alloc_msi(sc->dev, &navail);
if (rc != 0)
device_printf(sc->dev,
"failed to allocate vectors:%d, type=%d, req=%d, rcvd=%d\n",
itype, rc, iaq->nirq, navail);
return (rc);
}
device_printf(sc->dev,
"failed to find a usable interrupt type. "
"allowed=%d, msi-x=%d, msi=%d, intx=1", t4_intr_types,
pci_msix_count(sc->dev), pci_msi_count(sc->dev));
return (ENXIO);
}
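The sizing logic in cfg_itype_and_nqueues() clamps the requested queue totals to the available resources by rounding down to a multiple of nports, and for MSI additionally requires the vector count to be a power of two. That arithmetic can be isolated as follows (helper names are illustrative; the driver uses the kernel's powerof2() macro, which this sketch approximates for nonzero inputs):

```c
#include <assert.h>

/*
 * Round a requested queue total down so it divides evenly across
 * nports, without exceeding the available limit.
 */
int
clamp_queues(int requested, int avail, int nports)
{
	if (requested > avail)
		requested = (avail / nports) * nports;
	return (requested);
}

/* Nonzero iff x is a power of two (zero excluded in this sketch). */
int
power_of_2(unsigned int x)
{
	return (x != 0 && (x & (x - 1)) == 0);
}
```

For example, with 4 ports, 8 requested rxqs per port, and only 13 ingress queues available, the total is rounded down from 32 to 12 so each port still gets the same count.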
static int
t4vf_attach(device_t dev)
{
struct adapter *sc;
int rc = 0, i, j, rqidx, tqidx;
struct make_dev_args mda;
struct intrs_and_queues iaq;
struct sge *s;
sc = device_get_softc(dev);
sc->dev = dev;
pci_enable_busmaster(dev);
pci_set_max_read_req(dev, 4096);
sc->params.pci.mps = pci_get_max_payload(dev);
sc->flags |= IS_VF;
+ TUNABLE_INT_FETCH("hw.cxgbe.dflags", &sc->debug_flags);
sc->sge_gts_reg = VF_SGE_REG(A_SGE_VF_GTS);
sc->sge_kdoorbell_reg = VF_SGE_REG(A_SGE_VF_KDOORBELL);
snprintf(sc->lockname, sizeof(sc->lockname), "%s",
device_get_nameunit(dev));
mtx_init(&sc->sc_lock, sc->lockname, 0, MTX_DEF);
t4_add_adapter(sc);
mtx_init(&sc->sfl_lock, "starving freelists", 0, MTX_DEF);
TAILQ_INIT(&sc->sfl);
callout_init_mtx(&sc->sfl_callout, &sc->sfl_lock, 0);
mtx_init(&sc->reg_lock, "indirect register access", 0, MTX_DEF);
rc = t4_map_bars_0_and_4(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = -t4vf_prep_adapter(sc);
if (rc != 0)
goto done;
t4_init_devnames(sc);
if (sc->names == NULL) {
rc = ENOTSUP;
goto done; /* error message displayed already */
}
/*
* Leave the 'pf' and 'mbox' values as zero. This ensures
* that various firmware messages do not set these fields,
* which is the correct behavior for a VF.
*/
memset(sc->chan_map, 0xff, sizeof(sc->chan_map));
make_dev_args_init(&mda);
mda.mda_devsw = &t4vf_cdevsw;
mda.mda_uid = UID_ROOT;
mda.mda_gid = GID_WHEEL;
mda.mda_mode = 0600;
mda.mda_si_drv1 = sc;
rc = make_dev_s(&mda, &sc->cdev, "%s", device_get_nameunit(dev));
if (rc != 0)
device_printf(dev, "failed to create nexus char device: %d.\n",
rc);
#if defined(__i386__)
if ((cpu_feature & CPUID_CX8) == 0) {
device_printf(dev, "64 bit atomics not available.\n");
rc = ENOTSUP;
goto done;
}
#endif
/*
* Some environments do not properly handle PCIE FLRs -- e.g. in Linux
* 2.6.31 and later we can't call pci_reset_function() in order to
* issue an FLR because of a self-deadlock on the device semaphore.
* Meanwhile, the OS infrastructure doesn't issue FLRs in all the
* cases where they're needed -- for instance, some versions of KVM
* fail to reset "Assigned Devices" when the VM reboots. Therefore we
* use the firmware based reset in order to reset any per function
* state.
*/
rc = -t4vf_fw_reset(sc);
if (rc != 0) {
device_printf(dev, "FW reset failed: %d\n", rc);
goto done;
}
sc->flags |= FW_OK;
/*
* Grab basic operational parameters. These will predominantly have
* been set up by the Physical Function Driver or will be hard coded
* into the adapter. We just have to live with them ... Note that
* we _must_ get our VPD parameters before our SGE parameters because
* we need to know the adapter's core clock from the VPD in order to
* properly decode the SGE Timer Values.
*/
rc = get_params__pre_init(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = get_params__post_init(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = set_params__post_init(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = t4_map_bar_2(sc);
if (rc != 0)
goto done; /* error message displayed already */
rc = t4_create_dma_tag(sc);
if (rc != 0)
goto done; /* error message displayed already */
/*
* The number of "ports" which we support is equal to the number of
* Virtual Interfaces with which we've been provisioned.
*/
sc->params.nports = imin(sc->params.vfres.nvi, MAX_NPORTS);
/*
* We may have been provisioned with more VIs than the number of
* ports we're allowed to access (our Port Access Rights Mask).
* Just use a single VI for each port.
*/
sc->params.nports = imin(sc->params.nports,
bitcount32(sc->params.vfres.pmask));
#ifdef notyet
/*
* XXX: The Linux VF driver will lower nports if it thinks there
* are too few resources in vfres (niqflint, nethctrl, neq).
*/
#endif
/*
* First pass over all the ports - allocate VIs and initialize some
* basic parameters like mac address, port type, etc.
*/
for_each_port(sc, i) {
struct port_info *pi;
pi = malloc(sizeof(*pi), M_CXGBE, M_ZERO | M_WAITOK);
sc->port[i] = pi;
/* These must be set before t4_port_init */
pi->adapter = sc;
pi->port_id = i;
pi->nvi = 1;
pi->vi = malloc(sizeof(struct vi_info) * pi->nvi, M_CXGBE,
M_ZERO | M_WAITOK);
/*
* Allocate the "main" VI and initialize parameters
* like mac addr.
*/
rc = -t4_port_init(sc, sc->mbox, sc->pf, 0, i);
if (rc != 0) {
device_printf(dev, "unable to initialize port %d: %d\n",
i, rc);
free(pi->vi, M_CXGBE);
free(pi, M_CXGBE);
sc->port[i] = NULL;
goto done;
}
/* No t4_link_start. */
snprintf(pi->lockname, sizeof(pi->lockname), "%sp%d",
device_get_nameunit(dev), i);
mtx_init(&pi->pi_lock, pi->lockname, 0, MTX_DEF);
sc->chan_map[pi->tx_chan] = i;
/* All VIs on this port share this media. */
ifmedia_init(&pi->media, IFM_IMASK, cxgbe_media_change,
cxgbe_media_status);
pi->dev = device_add_child(dev, sc->names->vf_ifnet_name, -1);
if (pi->dev == NULL) {
device_printf(dev,
"failed to add device for port %d.\n", i);
rc = ENXIO;
goto done;
}
pi->vi[0].dev = pi->dev;
device_set_softc(pi->dev, pi);
}
/*
* Interrupt type, # of interrupts, # of rx/tx queues, etc.
*/
rc = cfg_itype_and_nqueues(sc, &iaq);
if (rc != 0)
goto done; /* error message displayed already */
sc->intr_type = iaq.intr_type;
sc->intr_count = iaq.nirq;
s = &sc->sge;
s->nrxq = sc->params.nports * iaq.nrxq;
s->ntxq = sc->params.nports * iaq.ntxq;
s->neq = s->ntxq + s->nrxq; /* the free list in an rxq is an eq */
s->neq += sc->params.nports; /* ctrl queues: 1 per port */
s->niq = s->nrxq + 1; /* 1 extra for firmware event queue */
s->rxq = malloc(s->nrxq * sizeof(struct sge_rxq), M_CXGBE,
M_ZERO | M_WAITOK);
s->txq = malloc(s->ntxq * sizeof(struct sge_txq), M_CXGBE,
M_ZERO | M_WAITOK);
s->iqmap = malloc(s->niq * sizeof(struct sge_iq *), M_CXGBE,
M_ZERO | M_WAITOK);
s->eqmap = malloc(s->neq * sizeof(struct sge_eq *), M_CXGBE,
M_ZERO | M_WAITOK);
sc->irq = malloc(sc->intr_count * sizeof(struct irq), M_CXGBE,
M_ZERO | M_WAITOK);
/*
* Second pass over the ports. This time we know the number of rx and
* tx queues that each port should get.
*/
rqidx = tqidx = 0;
for_each_port(sc, i) {
struct port_info *pi = sc->port[i];
struct vi_info *vi;
if (pi == NULL)
continue;
for_each_vi(pi, j, vi) {
vi->pi = pi;
vi->qsize_rxq = t4_qsize_rxq;
vi->qsize_txq = t4_qsize_txq;
vi->first_rxq = rqidx;
vi->first_txq = tqidx;
vi->tmr_idx = t4_tmr_idx;
vi->pktc_idx = t4_pktc_idx;
vi->nrxq = j == 0 ? iaq.nrxq: 1;
vi->ntxq = j == 0 ? iaq.ntxq: 1;
rqidx += vi->nrxq;
tqidx += vi->ntxq;
vi->rsrv_noflowq = 0;
}
}
rc = t4_setup_intr_handlers(sc);
if (rc != 0) {
device_printf(dev,
"failed to setup interrupt handlers: %d\n", rc);
goto done;
}
rc = bus_generic_attach(dev);
if (rc != 0) {
device_printf(dev,
"failed to attach all child ports: %d\n", rc);
goto done;
}
device_printf(dev,
"%d ports, %d %s interrupt%s, %d eq, %d iq\n",
sc->params.nports, sc->intr_count, sc->intr_type == INTR_MSIX ?
"MSI-X" : "MSI", sc->intr_count > 1 ? "s" : "", sc->sge.neq,
sc->sge.niq);
done:
if (rc != 0)
t4_detach_common(dev);
else
t4_sysctls(sc);
return (rc);
}
static void
get_regs(struct adapter *sc, struct t4_regdump *regs, uint8_t *buf)
{
/* 0x3f is used as the revision for VFs. */
regs->version = chip_id(sc) | (0x3f << 10);
t4_get_regs(sc, buf, regs->len);
}
static void
t4_clr_vi_stats(struct adapter *sc)
{
int reg;
for (reg = A_MPS_VF_STAT_TX_VF_BCAST_BYTES_L;
reg <= A_MPS_VF_STAT_RX_VF_ERR_FRAMES_H; reg += 4)
t4_write_reg(sc, VF_MPS_REG(reg), 0);
}
static int
t4vf_ioctl(struct cdev *dev, unsigned long cmd, caddr_t data, int fflag,
struct thread *td)
{
int rc;
struct adapter *sc = dev->si_drv1;
rc = priv_check(td, PRIV_DRIVER);
if (rc != 0)
return (rc);
switch (cmd) {
case CHELSIO_T4_GETREG: {
struct t4_reg *edata = (struct t4_reg *)data;
if ((edata->addr & 0x3) != 0 || edata->addr >= sc->mmio_len)
return (EFAULT);
if (edata->size == 4)
edata->val = t4_read_reg(sc, edata->addr);
else if (edata->size == 8)
edata->val = t4_read_reg64(sc, edata->addr);
else
return (EINVAL);
break;
}
case CHELSIO_T4_SETREG: {
struct t4_reg *edata = (struct t4_reg *)data;
if ((edata->addr & 0x3) != 0 || edata->addr >= sc->mmio_len)
return (EFAULT);
if (edata->size == 4) {
if (edata->val & 0xffffffff00000000)
return (EINVAL);
t4_write_reg(sc, edata->addr, (uint32_t) edata->val);
} else if (edata->size == 8)
t4_write_reg64(sc, edata->addr, edata->val);
else
return (EINVAL);
break;
}
case CHELSIO_T4_REGDUMP: {
struct t4_regdump *regs = (struct t4_regdump *)data;
int reglen = t4_get_regs_len(sc);
uint8_t *buf;
if (regs->len < reglen) {
regs->len = reglen; /* hint to the caller */
return (ENOBUFS);
}
regs->len = reglen;
buf = malloc(reglen, M_CXGBE, M_WAITOK | M_ZERO);
get_regs(sc, regs, buf);
rc = copyout(buf, regs->data, reglen);
free(buf, M_CXGBE);
break;
}
case CHELSIO_T4_CLEAR_STATS: {
int i, v;
u_int port_id = *(uint32_t *)data;
struct port_info *pi;
struct vi_info *vi;
if (port_id >= sc->params.nports)
return (EINVAL);
pi = sc->port[port_id];
/* MAC stats */
pi->tx_parse_error = 0;
t4_clr_vi_stats(sc);
/*
* Since this command accepts a port, clear stats for
* all VIs on this port.
*/
for_each_vi(pi, v, vi) {
if (vi->flags & VI_INIT_DONE) {
struct sge_rxq *rxq;
struct sge_txq *txq;
for_each_rxq(vi, i, rxq) {
#if defined(INET) || defined(INET6)
rxq->lro.lro_queued = 0;
rxq->lro.lro_flushed = 0;
#endif
rxq->rxcsum = 0;
rxq->vlan_extraction = 0;
}
for_each_txq(vi, i, txq) {
txq->txcsum = 0;
txq->tso_wrs = 0;
txq->vlan_insertion = 0;
txq->imm_wrs = 0;
txq->sgl_wrs = 0;
txq->txpkt_wrs = 0;
txq->txpkts0_wrs = 0;
txq->txpkts1_wrs = 0;
txq->txpkts0_pkts = 0;
txq->txpkts1_pkts = 0;
mp_ring_reset_stats(txq->r);
}
}
}
break;
}
case CHELSIO_T4_SCHED_CLASS:
rc = t4_set_sched_class(sc, (struct t4_sched_params *)data);
break;
case CHELSIO_T4_SCHED_QUEUE:
rc = t4_set_sched_queue(sc, (struct t4_sched_queue *)data);
break;
default:
rc = ENOTTY;
}
return (rc);
}
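The CHELSIO_T4_GETREG and CHELSIO_T4_SETREG cases above validate the register offset before touching hardware: it must be 4-byte aligned, within the mapped BAR, and sized for a 32- or 64-bit access. That check in isolation (mmio_len is a parameter here rather than adapter state, and the function name is illustrative):

```c
#include <stdint.h>
#include <errno.h>
#include <assert.h>

/*
 * Returns 0 if the access is acceptable, or an errno value mirroring
 * the ioctl handler: EFAULT for a misaligned or out-of-range address,
 * EINVAL for an unsupported access size.
 */
int
reg_access_check(uint32_t addr, uint32_t size, uint32_t mmio_len)
{
	if ((addr & 0x3) != 0 || addr >= mmio_len)
		return (EFAULT);
	if (size != 4 && size != 8)
		return (EINVAL);
	return (0);
}
```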
static device_method_t t4vf_methods[] = {
DEVMETHOD(device_probe, t4vf_probe),
DEVMETHOD(device_attach, t4vf_attach),
DEVMETHOD(device_detach, t4_detach_common),
DEVMETHOD_END
};
static driver_t t4vf_driver = {
"t4vf",
t4vf_methods,
sizeof(struct adapter)
};
static device_method_t t5vf_methods[] = {
DEVMETHOD(device_probe, t5vf_probe),
DEVMETHOD(device_attach, t4vf_attach),
DEVMETHOD(device_detach, t4_detach_common),
DEVMETHOD_END
};
static driver_t t5vf_driver = {
"t5vf",
t5vf_methods,
sizeof(struct adapter)
};
static device_method_t t6vf_methods[] = {
DEVMETHOD(device_probe, t6vf_probe),
DEVMETHOD(device_attach, t4vf_attach),
DEVMETHOD(device_detach, t4_detach_common),
DEVMETHOD_END
};
static driver_t t6vf_driver = {
"t6vf",
t6vf_methods,
sizeof(struct adapter)
};
static driver_t cxgbev_driver = {
"cxgbev",
cxgbe_methods,
sizeof(struct port_info)
};
static driver_t cxlv_driver = {
"cxlv",
cxgbe_methods,
sizeof(struct port_info)
};
static driver_t ccv_driver = {
"ccv",
cxgbe_methods,
sizeof(struct port_info)
};
static devclass_t t4vf_devclass, t5vf_devclass, t6vf_devclass;
static devclass_t cxgbev_devclass, cxlv_devclass, ccv_devclass;
DRIVER_MODULE(t4vf, pci, t4vf_driver, t4vf_devclass, 0, 0);
MODULE_VERSION(t4vf, 1);
MODULE_DEPEND(t4vf, t4nex, 1, 1, 1);
DRIVER_MODULE(t5vf, pci, t5vf_driver, t5vf_devclass, 0, 0);
MODULE_VERSION(t5vf, 1);
MODULE_DEPEND(t5vf, t5nex, 1, 1, 1);
DRIVER_MODULE(t6vf, pci, t6vf_driver, t6vf_devclass, 0, 0);
MODULE_VERSION(t6vf, 1);
MODULE_DEPEND(t6vf, t6nex, 1, 1, 1);
DRIVER_MODULE(cxgbev, t4vf, cxgbev_driver, cxgbev_devclass, 0, 0);
MODULE_VERSION(cxgbev, 1);
DRIVER_MODULE(cxlv, t5vf, cxlv_driver, cxlv_devclass, 0, 0);
MODULE_VERSION(cxlv, 1);
DRIVER_MODULE(ccv, t6vf, ccv_driver, ccv_devclass, 0, 0);
MODULE_VERSION(ccv, 1);
Index: projects/clang800-import/sys/dev/e1000/if_em.c
===================================================================
--- projects/clang800-import/sys/dev/e1000/if_em.c (revision 343955)
+++ projects/clang800-import/sys/dev/e1000/if_em.c (revision 343956)
@@ -1,4556 +1,4555 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2016 Nicole Graziano <nicole@nextbsd.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/* $FreeBSD$ */
#include "if_em.h"
#include <sys/sbuf.h>
#include <machine/_inttypes.h>
#define em_mac_min e1000_82547
#define igb_mac_min e1000_82575
/*********************************************************************
* Driver version:
*********************************************************************/
char em_driver_version[] = "7.6.1-k";
/*********************************************************************
* PCI Device ID Table
*
* Used by probe to select devices to load on
* Last field stores an index into e1000_strings
* Last entry must be all 0s
*
* { Vendor ID, Device ID, SubVendor ID, SubDevice ID, String Index }
*********************************************************************/
static pci_vendor_info_t em_vendor_info_array[] =
{
/* Intel(R) PRO/1000 Network Connection - Legacy em*/
PVID(0x8086, E1000_DEV_ID_82540EM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82540EM_LOM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82540EP, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82540EP_LOM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82540EP_LP, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82541EI, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82541ER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82541ER_LOM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82541EI_MOBILE, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82541GI, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82541GI_LF, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82541GI_MOBILE, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82542, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82543GC_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82543GC_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82544EI_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82544EI_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82544GC_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82544GC_LOM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82545EM_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82545EM_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82545GM_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82545GM_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82545GM_SERDES, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546EB_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546EB_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546EB_QUAD_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546GB_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546GB_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546GB_SERDES, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546GB_PCIE, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546GB_QUAD_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82546GB_QUAD_COPPER_KSP3, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82547EI, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82547EI_MOBILE, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82547GI, "Intel(R) PRO/1000 Network Connection"),
/* Intel(R) PRO/1000 Network Connection - em */
PVID(0x8086, E1000_DEV_ID_82571EB_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571EB_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571EB_SERDES, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571EB_SERDES_DUAL, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571EB_SERDES_QUAD, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571EB_QUAD_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571EB_QUAD_COPPER_LP, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571EB_QUAD_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82571PT_QUAD_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82572EI, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82572EI_COPPER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82572EI_FIBER, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82572EI_SERDES, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82573E, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82573E_IAMT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82573L, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82583V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_80003ES2LAN_COPPER_SPT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_80003ES2LAN_SERDES_SPT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_80003ES2LAN_COPPER_DPT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_80003ES2LAN_SERDES_DPT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_IGP_M_AMT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_IGP_AMT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_IGP_C, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_IFE, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_IFE_GT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_IFE_G, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_IGP_M, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH8_82567V_3, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IGP_M_AMT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IGP_AMT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IGP_C, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IGP_M, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IGP_M_V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IFE, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IFE_GT, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_IFE_G, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH9_BM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82574L, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_82574LA, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH10_R_BM_LM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH10_R_BM_LF, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH10_R_BM_V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH10_D_BM_LM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH10_D_BM_LF, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_ICH10_D_BM_V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_M_HV_LM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_M_HV_LC, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_D_HV_DM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_D_HV_DC, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH2_LV_LM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH2_LV_V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_LPT_I217_LM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_LPT_I217_V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_LPTLP_I218_LM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_LPTLP_I218_V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_I218_LM2, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_I218_V2, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_I218_LM3, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_I218_V3, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_LM, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_V, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_LM2, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_V2, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_LBG_I219_LM3, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_LM4, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_V4, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_LM5, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_SPT_I219_V5, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_CNP_I219_LM6, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_CNP_I219_V6, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_CNP_I219_LM7, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_CNP_I219_V7, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_ICP_I219_LM8, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_ICP_I219_V8, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_ICP_I219_LM9, "Intel(R) PRO/1000 Network Connection"),
PVID(0x8086, E1000_DEV_ID_PCH_ICP_I219_V9, "Intel(R) PRO/1000 Network Connection"),
/* required last entry */
PVID_END
};
static pci_vendor_info_t igb_vendor_info_array[] =
{
/* Intel(R) PRO/1000 Network Connection - igb */
PVID(0x8086, E1000_DEV_ID_82575EB_COPPER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82575EB_FIBER_SERDES, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82575GB_QUAD_COPPER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_NS, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_NS_SERDES, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_FIBER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_SERDES, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_SERDES_QUAD, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_QUAD_COPPER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_QUAD_COPPER_ET2, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82576_VF, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82580_COPPER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82580_FIBER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82580_SERDES, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82580_SGMII, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82580_COPPER_DUAL, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_82580_QUAD_FIBER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_DH89XXCC_SERDES, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_DH89XXCC_SGMII, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_DH89XXCC_SFP, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_DH89XXCC_BACKPLANE, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I350_COPPER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I350_FIBER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I350_SERDES, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I350_SGMII, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I350_VF, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_COPPER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_COPPER_IT, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_COPPER_OEM1, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_COPPER_FLASHLESS, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_SERDES_FLASHLESS, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_FIBER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_SERDES, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I210_SGMII, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I211_COPPER, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I354_BACKPLANE_1GBPS, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I354_BACKPLANE_2_5GBPS, "Intel(R) PRO/1000 PCI-Express Network Driver"),
PVID(0x8086, E1000_DEV_ID_I354_SGMII, "Intel(R) PRO/1000 PCI-Express Network Driver"),
/* required last entry */
PVID_END
};
/*********************************************************************
* Function prototypes
*********************************************************************/
static void *em_register(device_t dev);
static void *igb_register(device_t dev);
static int em_if_attach_pre(if_ctx_t ctx);
static int em_if_attach_post(if_ctx_t ctx);
static int em_if_detach(if_ctx_t ctx);
static int em_if_shutdown(if_ctx_t ctx);
static int em_if_suspend(if_ctx_t ctx);
static int em_if_resume(if_ctx_t ctx);
static int em_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxqs, int ntxqsets);
static int em_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nrxqs, int nrxqsets);
static void em_if_queues_free(if_ctx_t ctx);
static uint64_t em_if_get_counter(if_ctx_t, ift_counter);
static void em_if_init(if_ctx_t ctx);
static void em_if_stop(if_ctx_t ctx);
static void em_if_media_status(if_ctx_t, struct ifmediareq *);
static int em_if_media_change(if_ctx_t ctx);
static int em_if_mtu_set(if_ctx_t ctx, uint32_t mtu);
static void em_if_timer(if_ctx_t ctx, uint16_t qid);
static void em_if_vlan_register(if_ctx_t ctx, u16 vtag);
static void em_if_vlan_unregister(if_ctx_t ctx, u16 vtag);
+static void em_if_watchdog_reset(if_ctx_t ctx);
static void em_identify_hardware(if_ctx_t ctx);
static int em_allocate_pci_resources(if_ctx_t ctx);
static void em_free_pci_resources(if_ctx_t ctx);
static void em_reset(if_ctx_t ctx);
static int em_setup_interface(if_ctx_t ctx);
static int em_setup_msix(if_ctx_t ctx);
static void em_initialize_transmit_unit(if_ctx_t ctx);
static void em_initialize_receive_unit(if_ctx_t ctx);
static void em_if_enable_intr(if_ctx_t ctx);
static void em_if_disable_intr(if_ctx_t ctx);
static int em_if_rx_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid);
static int em_if_tx_queue_intr_enable(if_ctx_t ctx, uint16_t txqid);
static void em_if_multi_set(if_ctx_t ctx);
static void em_if_update_admin_status(if_ctx_t ctx);
static void em_if_debug(if_ctx_t ctx);
static void em_update_stats_counters(struct adapter *);
static void em_add_hw_stats(struct adapter *adapter);
static int em_if_set_promisc(if_ctx_t ctx, int flags);
static void em_setup_vlan_hw_support(struct adapter *);
static int em_sysctl_nvm_info(SYSCTL_HANDLER_ARGS);
static void em_print_nvm_info(struct adapter *);
static int em_sysctl_debug_info(SYSCTL_HANDLER_ARGS);
static int em_get_rs(SYSCTL_HANDLER_ARGS);
static void em_print_debug_info(struct adapter *);
static int em_is_valid_ether_addr(u8 *);
static int em_sysctl_int_delay(SYSCTL_HANDLER_ARGS);
static void em_add_int_delay_sysctl(struct adapter *, const char *,
const char *, struct em_int_delay_info *, int, int);
/* Management and WOL Support */
static void em_init_manageability(struct adapter *);
static void em_release_manageability(struct adapter *);
static void em_get_hw_control(struct adapter *);
static void em_release_hw_control(struct adapter *);
static void em_get_wakeup(if_ctx_t ctx);
static void em_enable_wakeup(if_ctx_t ctx);
static int em_enable_phy_wakeup(struct adapter *);
static void em_disable_aspm(struct adapter *);
int em_intr(void *arg);
static void em_disable_promisc(if_ctx_t ctx);
/* MSI-X handlers */
static int em_if_msix_intr_assign(if_ctx_t, int);
static int em_msix_link(void *);
static void em_handle_link(void *context);
static void em_enable_vectors_82574(if_ctx_t);
static int em_set_flowcntl(SYSCTL_HANDLER_ARGS);
static int em_sysctl_eee(SYSCTL_HANDLER_ARGS);
static void em_if_led_func(if_ctx_t ctx, int onoff);
static int em_get_regs(SYSCTL_HANDLER_ARGS);
static void lem_smartspeed(struct adapter *adapter);
static void igb_configure_queues(struct adapter *adapter);
/*********************************************************************
* FreeBSD Device Interface Entry Points
*********************************************************************/
static device_method_t em_methods[] = {
/* Device interface */
DEVMETHOD(device_register, em_register),
DEVMETHOD(device_probe, iflib_device_probe),
DEVMETHOD(device_attach, iflib_device_attach),
DEVMETHOD(device_detach, iflib_device_detach),
DEVMETHOD(device_shutdown, iflib_device_shutdown),
DEVMETHOD(device_suspend, iflib_device_suspend),
DEVMETHOD(device_resume, iflib_device_resume),
DEVMETHOD_END
};
static device_method_t igb_methods[] = {
/* Device interface */
DEVMETHOD(device_register, igb_register),
DEVMETHOD(device_probe, iflib_device_probe),
DEVMETHOD(device_attach, iflib_device_attach),
DEVMETHOD(device_detach, iflib_device_detach),
DEVMETHOD(device_shutdown, iflib_device_shutdown),
DEVMETHOD(device_suspend, iflib_device_suspend),
DEVMETHOD(device_resume, iflib_device_resume),
DEVMETHOD_END
};
static driver_t em_driver = {
"em", em_methods, sizeof(struct adapter),
};
static devclass_t em_devclass;
DRIVER_MODULE(em, pci, em_driver, em_devclass, 0, 0);
MODULE_DEPEND(em, pci, 1, 1, 1);
MODULE_DEPEND(em, ether, 1, 1, 1);
MODULE_DEPEND(em, iflib, 1, 1, 1);
IFLIB_PNP_INFO(pci, em, em_vendor_info_array);
static driver_t igb_driver = {
"igb", igb_methods, sizeof(struct adapter),
};
static devclass_t igb_devclass;
DRIVER_MODULE(igb, pci, igb_driver, igb_devclass, 0, 0);
MODULE_DEPEND(igb, pci, 1, 1, 1);
MODULE_DEPEND(igb, ether, 1, 1, 1);
MODULE_DEPEND(igb, iflib, 1, 1, 1);
IFLIB_PNP_INFO(pci, igb, igb_vendor_info_array);
static device_method_t em_if_methods[] = {
DEVMETHOD(ifdi_attach_pre, em_if_attach_pre),
DEVMETHOD(ifdi_attach_post, em_if_attach_post),
DEVMETHOD(ifdi_detach, em_if_detach),
DEVMETHOD(ifdi_shutdown, em_if_shutdown),
DEVMETHOD(ifdi_suspend, em_if_suspend),
DEVMETHOD(ifdi_resume, em_if_resume),
DEVMETHOD(ifdi_init, em_if_init),
DEVMETHOD(ifdi_stop, em_if_stop),
DEVMETHOD(ifdi_msix_intr_assign, em_if_msix_intr_assign),
DEVMETHOD(ifdi_intr_enable, em_if_enable_intr),
DEVMETHOD(ifdi_intr_disable, em_if_disable_intr),
DEVMETHOD(ifdi_tx_queues_alloc, em_if_tx_queues_alloc),
DEVMETHOD(ifdi_rx_queues_alloc, em_if_rx_queues_alloc),
DEVMETHOD(ifdi_queues_free, em_if_queues_free),
DEVMETHOD(ifdi_update_admin_status, em_if_update_admin_status),
DEVMETHOD(ifdi_multi_set, em_if_multi_set),
DEVMETHOD(ifdi_media_status, em_if_media_status),
DEVMETHOD(ifdi_media_change, em_if_media_change),
DEVMETHOD(ifdi_mtu_set, em_if_mtu_set),
DEVMETHOD(ifdi_promisc_set, em_if_set_promisc),
DEVMETHOD(ifdi_timer, em_if_timer),
+ DEVMETHOD(ifdi_watchdog_reset, em_if_watchdog_reset),
DEVMETHOD(ifdi_vlan_register, em_if_vlan_register),
DEVMETHOD(ifdi_vlan_unregister, em_if_vlan_unregister),
DEVMETHOD(ifdi_get_counter, em_if_get_counter),
DEVMETHOD(ifdi_led_func, em_if_led_func),
DEVMETHOD(ifdi_rx_queue_intr_enable, em_if_rx_queue_intr_enable),
DEVMETHOD(ifdi_tx_queue_intr_enable, em_if_tx_queue_intr_enable),
DEVMETHOD(ifdi_debug, em_if_debug),
DEVMETHOD_END
};
/*
 * Note: checks of (adapter->msix_mem) have been replaced by
 * checks of (adapter->intr_type == IFLIB_INTR_MSIX).
 */
static driver_t em_if_driver = {
"em_if", em_if_methods, sizeof(struct adapter)
};
/*********************************************************************
* Tunable default values.
*********************************************************************/
#define EM_TICKS_TO_USECS(ticks) ((1024 * (ticks) + 500) / 1000)
#define EM_USECS_TO_TICKS(usecs) ((1000 * (usecs) + 512) / 1024)
#define MAX_INTS_PER_SEC 8000
#define DEFAULT_ITR (1000000000/(MAX_INTS_PER_SEC * 256))
/* Allow common code without TSO */
#ifndef CSUM_TSO
#define CSUM_TSO 0
#endif
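As a sanity check on the conversion macros above, here is a small self-contained sketch (a hedged copy of the macros, not driver code; the EM_TIDV/EM_RDTR defaults they are applied to are driver constants not shown here). The 82574-era delay registers count in 1.024 us "ticks" and the ITR register counts in 256 ns increments, so DEFAULT_ITR works out to 488, i.e. roughly 125 us between interrupts, matching the MAX_INTS_PER_SEC target of 8000:

```c
#include <assert.h>

/*
 * Hedged copies of the driver's conversion macros. Delay
 * registers count in 1.024 us "ticks"; the ITR register
 * counts in 256 ns increments. The +500/+512 terms give
 * round-to-nearest under integer division.
 */
#define EM_TICKS_TO_USECS(ticks)	((1024 * (ticks) + 500) / 1000)
#define EM_USECS_TO_TICKS(usecs)	((1000 * (usecs) + 512) / 1024)
#define MAX_INTS_PER_SEC	8000
#define DEFAULT_ITR	(1000000000 / (MAX_INTS_PER_SEC * 256))
```

With these definitions, 488 * 256 ns is about 125 us, and the two conversions round-trip: 125 us maps to 122 ticks and 122 ticks maps back to 125 us.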
static SYSCTL_NODE(_hw, OID_AUTO, em, CTLFLAG_RD, 0, "EM driver parameters");
static int em_disable_crc_stripping = 0;
SYSCTL_INT(_hw_em, OID_AUTO, disable_crc_stripping, CTLFLAG_RDTUN,
&em_disable_crc_stripping, 0, "Disable CRC Stripping");
static int em_tx_int_delay_dflt = EM_TICKS_TO_USECS(EM_TIDV);
static int em_rx_int_delay_dflt = EM_TICKS_TO_USECS(EM_RDTR);
SYSCTL_INT(_hw_em, OID_AUTO, tx_int_delay, CTLFLAG_RDTUN, &em_tx_int_delay_dflt,
0, "Default transmit interrupt delay in usecs");
SYSCTL_INT(_hw_em, OID_AUTO, rx_int_delay, CTLFLAG_RDTUN, &em_rx_int_delay_dflt,
0, "Default receive interrupt delay in usecs");
static int em_tx_abs_int_delay_dflt = EM_TICKS_TO_USECS(EM_TADV);
static int em_rx_abs_int_delay_dflt = EM_TICKS_TO_USECS(EM_RADV);
SYSCTL_INT(_hw_em, OID_AUTO, tx_abs_int_delay, CTLFLAG_RDTUN,
&em_tx_abs_int_delay_dflt, 0,
"Default transmit interrupt delay limit in usecs");
SYSCTL_INT(_hw_em, OID_AUTO, rx_abs_int_delay, CTLFLAG_RDTUN,
&em_rx_abs_int_delay_dflt, 0,
"Default receive interrupt delay limit in usecs");
static int em_smart_pwr_down = FALSE;
SYSCTL_INT(_hw_em, OID_AUTO, smart_pwr_down, CTLFLAG_RDTUN, &em_smart_pwr_down,
0, "Set to true to leave smart power down enabled on newer adapters");
/* Controls whether promiscuous also shows bad packets */
static int em_debug_sbp = TRUE;
SYSCTL_INT(_hw_em, OID_AUTO, sbp, CTLFLAG_RDTUN, &em_debug_sbp, 0,
"Show bad packets in promiscuous mode");
/* How many packets rxeof tries to clean at a time */
static int em_rx_process_limit = 100;
SYSCTL_INT(_hw_em, OID_AUTO, rx_process_limit, CTLFLAG_RDTUN,
&em_rx_process_limit, 0,
"Maximum number of received packets to process "
"at a time, -1 means unlimited");
/* Energy efficient ethernet - default to OFF */
static int eee_setting = 1;
SYSCTL_INT(_hw_em, OID_AUTO, eee_setting, CTLFLAG_RDTUN, &eee_setting, 0,
"Disable Energy Efficient Ethernet");
/*
** Tuneable Interrupt rate
*/
static int em_max_interrupt_rate = 8000;
SYSCTL_INT(_hw_em, OID_AUTO, max_interrupt_rate, CTLFLAG_RDTUN,
&em_max_interrupt_rate, 0, "Maximum interrupts per second");
/* Global used in WOL setup with multiport cards */
static int global_quad_port_a = 0;
extern struct if_txrx igb_txrx;
extern struct if_txrx em_txrx;
extern struct if_txrx lem_txrx;
static struct if_shared_ctx em_sctx_init = {
.isc_magic = IFLIB_MAGIC,
.isc_q_align = PAGE_SIZE,
.isc_tx_maxsize = EM_TSO_SIZE + sizeof(struct ether_vlan_header),
.isc_tx_maxsegsize = PAGE_SIZE,
.isc_tso_maxsize = EM_TSO_SIZE + sizeof(struct ether_vlan_header),
.isc_tso_maxsegsize = EM_TSO_SEG_SIZE,
.isc_rx_maxsize = MJUM9BYTES,
.isc_rx_nsegments = 1,
.isc_rx_maxsegsize = MJUM9BYTES,
.isc_nfl = 1,
.isc_nrxqs = 1,
.isc_ntxqs = 1,
.isc_admin_intrcnt = 1,
.isc_vendor_info = em_vendor_info_array,
.isc_driver_version = em_driver_version,
.isc_driver = &em_if_driver,
.isc_flags = IFLIB_NEED_SCRATCH | IFLIB_TSO_INIT_IP | IFLIB_NEED_ZERO_CSUM,
.isc_nrxd_min = {EM_MIN_RXD},
.isc_ntxd_min = {EM_MIN_TXD},
.isc_nrxd_max = {EM_MAX_RXD},
.isc_ntxd_max = {EM_MAX_TXD},
.isc_nrxd_default = {EM_DEFAULT_RXD},
.isc_ntxd_default = {EM_DEFAULT_TXD},
};
if_shared_ctx_t em_sctx = &em_sctx_init;
static struct if_shared_ctx igb_sctx_init = {
.isc_magic = IFLIB_MAGIC,
.isc_q_align = PAGE_SIZE,
.isc_tx_maxsize = EM_TSO_SIZE + sizeof(struct ether_vlan_header),
.isc_tx_maxsegsize = PAGE_SIZE,
.isc_tso_maxsize = EM_TSO_SIZE + sizeof(struct ether_vlan_header),
.isc_tso_maxsegsize = EM_TSO_SEG_SIZE,
.isc_rx_maxsize = MJUM9BYTES,
.isc_rx_nsegments = 1,
.isc_rx_maxsegsize = MJUM9BYTES,
.isc_nfl = 1,
.isc_nrxqs = 1,
.isc_ntxqs = 1,
.isc_admin_intrcnt = 1,
.isc_vendor_info = igb_vendor_info_array,
.isc_driver_version = em_driver_version,
.isc_driver = &em_if_driver,
.isc_flags = IFLIB_NEED_SCRATCH | IFLIB_TSO_INIT_IP | IFLIB_NEED_ZERO_CSUM,
.isc_nrxd_min = {EM_MIN_RXD},
.isc_ntxd_min = {EM_MIN_TXD},
.isc_nrxd_max = {IGB_MAX_RXD},
.isc_ntxd_max = {IGB_MAX_TXD},
.isc_nrxd_default = {EM_DEFAULT_RXD},
.isc_ntxd_default = {EM_DEFAULT_TXD},
};
if_shared_ctx_t igb_sctx = &igb_sctx_init;
/*****************************************************************
*
* Dump Registers
*
****************************************************************/
#define IGB_REGS_LEN 739
static int em_get_regs(SYSCTL_HANDLER_ARGS)
{
struct adapter *adapter = (struct adapter *)arg1;
struct e1000_hw *hw = &adapter->hw;
struct sbuf *sb;
u32 *regs_buff;
int rc;
regs_buff = malloc(sizeof(u32) * IGB_REGS_LEN, M_DEVBUF, M_WAITOK);
memset(regs_buff, 0, IGB_REGS_LEN * sizeof(u32));
rc = sysctl_wire_old_buffer(req, 0);
MPASS(rc == 0);
if (rc != 0) {
free(regs_buff, M_DEVBUF);
return (rc);
}
sb = sbuf_new_for_sysctl(NULL, NULL, 32*400, req);
MPASS(sb != NULL);
if (sb == NULL) {
free(regs_buff, M_DEVBUF);
return (ENOMEM);
}
/* General Registers */
regs_buff[0] = E1000_READ_REG(hw, E1000_CTRL);
regs_buff[1] = E1000_READ_REG(hw, E1000_STATUS);
regs_buff[2] = E1000_READ_REG(hw, E1000_CTRL_EXT);
regs_buff[3] = E1000_READ_REG(hw, E1000_ICR);
regs_buff[4] = E1000_READ_REG(hw, E1000_RCTL);
regs_buff[5] = E1000_READ_REG(hw, E1000_RDLEN(0));
regs_buff[6] = E1000_READ_REG(hw, E1000_RDH(0));
regs_buff[7] = E1000_READ_REG(hw, E1000_RDT(0));
regs_buff[8] = E1000_READ_REG(hw, E1000_RXDCTL(0));
regs_buff[9] = E1000_READ_REG(hw, E1000_RDBAL(0));
regs_buff[10] = E1000_READ_REG(hw, E1000_RDBAH(0));
regs_buff[11] = E1000_READ_REG(hw, E1000_TCTL);
regs_buff[12] = E1000_READ_REG(hw, E1000_TDBAL(0));
regs_buff[13] = E1000_READ_REG(hw, E1000_TDBAH(0));
regs_buff[14] = E1000_READ_REG(hw, E1000_TDLEN(0));
regs_buff[15] = E1000_READ_REG(hw, E1000_TDH(0));
regs_buff[16] = E1000_READ_REG(hw, E1000_TDT(0));
regs_buff[17] = E1000_READ_REG(hw, E1000_TXDCTL(0));
regs_buff[18] = E1000_READ_REG(hw, E1000_TDFH);
regs_buff[19] = E1000_READ_REG(hw, E1000_TDFT);
regs_buff[20] = E1000_READ_REG(hw, E1000_TDFHS);
regs_buff[21] = E1000_READ_REG(hw, E1000_TDFPC);
sbuf_printf(sb, "General Registers\n");
sbuf_printf(sb, "\tCTRL\t %08x\n", regs_buff[0]);
sbuf_printf(sb, "\tSTATUS\t %08x\n", regs_buff[1]);
sbuf_printf(sb, "\tCTRL_EXT\t %08x\n\n", regs_buff[2]);
sbuf_printf(sb, "Interrupt Registers\n");
sbuf_printf(sb, "\tICR\t %08x\n\n", regs_buff[3]);
sbuf_printf(sb, "RX Registers\n");
sbuf_printf(sb, "\tRCTL\t %08x\n", regs_buff[4]);
sbuf_printf(sb, "\tRDLEN\t %08x\n", regs_buff[5]);
sbuf_printf(sb, "\tRDH\t %08x\n", regs_buff[6]);
sbuf_printf(sb, "\tRDT\t %08x\n", regs_buff[7]);
sbuf_printf(sb, "\tRXDCTL\t %08x\n", regs_buff[8]);
sbuf_printf(sb, "\tRDBAL\t %08x\n", regs_buff[9]);
sbuf_printf(sb, "\tRDBAH\t %08x\n\n", regs_buff[10]);
sbuf_printf(sb, "TX Registers\n");
sbuf_printf(sb, "\tTCTL\t %08x\n", regs_buff[11]);
sbuf_printf(sb, "\tTDBAL\t %08x\n", regs_buff[12]);
sbuf_printf(sb, "\tTDBAH\t %08x\n", regs_buff[13]);
sbuf_printf(sb, "\tTDLEN\t %08x\n", regs_buff[14]);
sbuf_printf(sb, "\tTDH\t %08x\n", regs_buff[15]);
sbuf_printf(sb, "\tTDT\t %08x\n", regs_buff[16]);
sbuf_printf(sb, "\tTXDCTL\t %08x\n", regs_buff[17]);
sbuf_printf(sb, "\tTDFH\t %08x\n", regs_buff[18]);
sbuf_printf(sb, "\tTDFT\t %08x\n", regs_buff[19]);
sbuf_printf(sb, "\tTDFHS\t %08x\n", regs_buff[20]);
sbuf_printf(sb, "\tTDFPC\t %08x\n\n", regs_buff[21]);
free(regs_buff, M_DEVBUF);
#ifdef DUMP_DESCS
{
if_softc_ctx_t scctx = adapter->shared;
/* This debug block predates the iflib conversion; dump queue 0. */
struct em_rx_queue *rx_que = &adapter->rx_queues[0];
struct em_tx_queue *tx_que = &adapter->tx_queues[0];
struct rx_ring *rxr = &rx_que->rxr;
struct tx_ring *txr = &tx_que->txr;
int ntxd = scctx->isc_ntxd[0];
int nrxd = scctx->isc_nrxd[0];
int j;
for (j = 0; j < nrxd; j++) {
u32 staterr = le32toh(rxr->rx_base[j].wb.upper.status_error);
u32 length = le32toh(rxr->rx_base[j].wb.upper.length);
sbuf_printf(sb, "\tReceive Descriptor Address %d: %08" PRIx64 " Error:%u Length:%u\n", j, rxr->rx_base[j].read.buffer_addr, staterr, length);
}
for (j = 0; j < min(ntxd, 256); j++) {
unsigned int *ptr = (unsigned int *)&txr->tx_base[j];
/* The stale eop/DD bookkeeping was dropped in the iflib conversion; print raw descriptor words. */
sbuf_printf(sb, "\tTXD[%03d] [0]: %08x [1]: %08x [2]: %08x [3]: %08x\n",
j, ptr[0], ptr[1], ptr[2], ptr[3]);
}
}
#endif
rc = sbuf_finish(sb);
sbuf_delete(sb);
return(rc);
}
static void *
em_register(device_t dev)
{
return (em_sctx);
}
static void *
igb_register(device_t dev)
{
return (igb_sctx);
}
static int
em_set_num_queues(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
int maxqueues;
/* Sanity check based on HW */
switch (adapter->hw.mac.type) {
case e1000_82576:
case e1000_82580:
case e1000_i350:
case e1000_i354:
maxqueues = 8;
break;
case e1000_i210:
case e1000_82575:
maxqueues = 4;
break;
case e1000_i211:
case e1000_82574:
maxqueues = 2;
break;
default:
maxqueues = 1;
break;
}
return (maxqueues);
}
#define LEM_CAPS \
IFCAP_HWCSUM | IFCAP_VLAN_MTU | IFCAP_VLAN_HWTAGGING | \
IFCAP_VLAN_HWCSUM | IFCAP_WOL | IFCAP_VLAN_HWFILTER
#define EM_CAPS \
IFCAP_HWCSUM | IFCAP_VLAN_MTU | IFCAP_VLAN_HWTAGGING | \
IFCAP_VLAN_HWCSUM | IFCAP_WOL | IFCAP_VLAN_HWFILTER | IFCAP_TSO4 | \
IFCAP_LRO | IFCAP_VLAN_HWTSO
#define IGB_CAPS \
IFCAP_HWCSUM | IFCAP_VLAN_MTU | IFCAP_VLAN_HWTAGGING | \
IFCAP_VLAN_HWCSUM | IFCAP_WOL | IFCAP_VLAN_HWFILTER | IFCAP_TSO4 | \
IFCAP_LRO | IFCAP_VLAN_HWTSO | IFCAP_JUMBO_MTU | IFCAP_HWCSUM_IPV6 |\
IFCAP_TSO6
/*********************************************************************
* Device initialization routine
*
* The attach entry point is called when the driver is being loaded.
* This routine identifies the type of hardware, allocates all resources
* and initializes the hardware.
*
* return 0 on success, positive on failure
*********************************************************************/
-
static int
em_if_attach_pre(if_ctx_t ctx)
{
struct adapter *adapter;
if_softc_ctx_t scctx;
device_t dev;
struct e1000_hw *hw;
int error = 0;
- INIT_DEBUGOUT("em_if_attach_pre begin");
+ INIT_DEBUGOUT("em_if_attach_pre: begin");
dev = iflib_get_dev(ctx);
adapter = iflib_get_softc(ctx);
- if (resource_disabled("em", device_get_unit(dev))) {
- device_printf(dev, "Disabled by device hint\n");
- return (ENXIO);
- }
-
adapter->ctx = adapter->osdep.ctx = ctx;
adapter->dev = adapter->osdep.dev = dev;
scctx = adapter->shared = iflib_get_softc_ctx(ctx);
adapter->media = iflib_get_media(ctx);
hw = &adapter->hw;
adapter->tx_process_limit = scctx->isc_ntxd[0];
/* SYSCTL stuff */
SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
OID_AUTO, "nvm", CTLTYPE_INT|CTLFLAG_RW, adapter, 0,
em_sysctl_nvm_info, "I", "NVM Information");
SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
OID_AUTO, "debug", CTLTYPE_INT|CTLFLAG_RW, adapter, 0,
em_sysctl_debug_info, "I", "Debug Information");
SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
OID_AUTO, "fc", CTLTYPE_INT|CTLFLAG_RW, adapter, 0,
em_set_flowcntl, "I", "Flow Control");
SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
OID_AUTO, "reg_dump", CTLTYPE_STRING | CTLFLAG_RD, adapter, 0,
em_get_regs, "A", "Dump Registers");
SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
OID_AUTO, "rs_dump", CTLTYPE_INT | CTLFLAG_RW, adapter, 0,
em_get_rs, "I", "Dump RS indexes");
/* Determine hardware and mac info */
em_identify_hardware(ctx);
- scctx->isc_msix_bar = PCIR_BAR(EM_MSIX_BAR);
scctx->isc_tx_nsegments = EM_MAX_SCATTER;
scctx->isc_nrxqsets_max = scctx->isc_ntxqsets_max = em_set_num_queues(ctx);
if (bootverbose)
device_printf(dev, "attach_pre capping queues at %d\n",
scctx->isc_ntxqsets_max);
if (adapter->hw.mac.type >= igb_mac_min) {
- int try_second_bar;
-
scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0] * sizeof(union e1000_adv_tx_desc), EM_DBA_ALIGN);
scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0] * sizeof(union e1000_adv_rx_desc), EM_DBA_ALIGN);
scctx->isc_txd_size[0] = sizeof(union e1000_adv_tx_desc);
scctx->isc_rxd_size[0] = sizeof(union e1000_adv_rx_desc);
scctx->isc_txrx = &igb_txrx;
scctx->isc_tx_tso_segments_max = EM_MAX_SCATTER;
scctx->isc_tx_tso_size_max = EM_TSO_SIZE;
scctx->isc_tx_tso_segsize_max = EM_TSO_SEG_SIZE;
scctx->isc_capabilities = scctx->isc_capenable = IGB_CAPS;
scctx->isc_tx_csum_flags = CSUM_TCP | CSUM_UDP | CSUM_TSO |
CSUM_IP6_TCP | CSUM_IP6_UDP;
if (adapter->hw.mac.type != e1000_82575)
scctx->isc_tx_csum_flags |= CSUM_SCTP | CSUM_IP6_SCTP;
-
/*
** Some newer devices, as with ixgbe, may use a
** different BAR, so we need to keep track of
** which one is used.
*/
- try_second_bar = pci_read_config(dev, scctx->isc_msix_bar, 4);
- if (try_second_bar == 0)
+ scctx->isc_msix_bar = PCIR_BAR(EM_MSIX_BAR);
+ if (pci_read_config(dev, scctx->isc_msix_bar, 4) == 0)
scctx->isc_msix_bar += 4;
} else if (adapter->hw.mac.type >= em_mac_min) {
scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0]* sizeof(struct e1000_tx_desc), EM_DBA_ALIGN);
scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0] * sizeof(union e1000_rx_desc_extended), EM_DBA_ALIGN);
scctx->isc_txd_size[0] = sizeof(struct e1000_tx_desc);
scctx->isc_rxd_size[0] = sizeof(union e1000_rx_desc_extended);
scctx->isc_txrx = &em_txrx;
scctx->isc_tx_tso_segments_max = EM_MAX_SCATTER;
scctx->isc_tx_tso_size_max = EM_TSO_SIZE;
scctx->isc_tx_tso_segsize_max = EM_TSO_SEG_SIZE;
scctx->isc_capabilities = scctx->isc_capenable = EM_CAPS;
/*
* For EM-class devices, don't enable IFCAP_{TSO4,VLAN_HWTSO}
* by default as we don't have workarounds for all associated
* silicon errata. E. g., with several MACs such as 82573E,
* TSO only works at Gigabit speed and otherwise can cause the
* hardware to hang (which also would be next to impossible to
* work around given that already queued TSO-using descriptors
* would need to be flushed and vlan(4) reconfigured at runtime
* in case of a link speed change). Moreover, MACs like 82579
* still can hang at Gigabit even with all publicly documented
* TSO workarounds implemented. Generally, the penalty of
* these workarounds is rather high and may involve copying
* mbuf data around, so the advantages of TSO lapse. Still, TSO may
* work for a few MACs of this class - at least when sticking
* with Gigabit - in which case users may enable TSO manually.
*/
scctx->isc_capenable &= ~(IFCAP_TSO4 | IFCAP_VLAN_HWTSO);
scctx->isc_tx_csum_flags = CSUM_TCP | CSUM_UDP | CSUM_IP_TSO;
+ /*
+ * We support MSI-X with 82574 only, but indicate to iflib(4)
+ * that it should at least try MSI with other devices.
+ */
+ if (adapter->hw.mac.type == e1000_82574) {
+ scctx->isc_msix_bar = PCIR_BAR(EM_MSIX_BAR);
+ } else {
+ scctx->isc_msix_bar = -1;
+ scctx->isc_disable_msix = 1;
+ }
} else {
scctx->isc_txqsizes[0] = roundup2((scctx->isc_ntxd[0] + 1) * sizeof(struct e1000_tx_desc), EM_DBA_ALIGN);
scctx->isc_rxqsizes[0] = roundup2((scctx->isc_nrxd[0] + 1) * sizeof(struct e1000_rx_desc), EM_DBA_ALIGN);
scctx->isc_txd_size[0] = sizeof(struct e1000_tx_desc);
scctx->isc_rxd_size[0] = sizeof(struct e1000_rx_desc);
scctx->isc_tx_csum_flags = CSUM_TCP | CSUM_UDP;
scctx->isc_txrx = &lem_txrx;
scctx->isc_capabilities = scctx->isc_capenable = LEM_CAPS;
if (adapter->hw.mac.type < e1000_82543)
scctx->isc_capenable &= ~(IFCAP_HWCSUM|IFCAP_VLAN_HWCSUM);
+ /* INTx only */
scctx->isc_msix_bar = 0;
}
/* Setup PCI resources */
if (em_allocate_pci_resources(ctx)) {
device_printf(dev, "Allocation of PCI resources failed\n");
error = ENXIO;
goto err_pci;
}
/*
** For ICH8 and family we need to
** map the flash memory, and this
** must happen after the MAC is
** identified
*/
if ((hw->mac.type == e1000_ich8lan) ||
(hw->mac.type == e1000_ich9lan) ||
(hw->mac.type == e1000_ich10lan) ||
(hw->mac.type == e1000_pchlan) ||
(hw->mac.type == e1000_pch2lan) ||
(hw->mac.type == e1000_pch_lpt)) {
int rid = EM_BAR_TYPE_FLASH;
adapter->flash = bus_alloc_resource_any(dev,
SYS_RES_MEMORY, &rid, RF_ACTIVE);
if (adapter->flash == NULL) {
device_printf(dev, "Mapping of Flash failed\n");
error = ENXIO;
goto err_pci;
}
/* This is used in the shared code */
hw->flash_address = (u8 *)adapter->flash;
adapter->osdep.flash_bus_space_tag =
rman_get_bustag(adapter->flash);
adapter->osdep.flash_bus_space_handle =
rman_get_bushandle(adapter->flash);
}
/*
** In the new SPT device, flash is not a
** separate BAR; rather, it is also in BAR0,
** so use the same tag and an offset handle for the
** FLASH read/write macros in the shared code.
*/
else if (hw->mac.type >= e1000_pch_spt) {
adapter->osdep.flash_bus_space_tag =
adapter->osdep.mem_bus_space_tag;
adapter->osdep.flash_bus_space_handle =
adapter->osdep.mem_bus_space_handle
+ E1000_FLASH_BASE_ADDR;
}
/* Do Shared Code initialization */
error = e1000_setup_init_funcs(hw, TRUE);
if (error) {
device_printf(dev, "Setup of Shared code failed, error %d\n",
error);
error = ENXIO;
goto err_pci;
}
em_setup_msix(ctx);
e1000_get_bus_info(hw);
/* Set up some sysctls for the tunable interrupt delays */
em_add_int_delay_sysctl(adapter, "rx_int_delay",
"receive interrupt delay in usecs", &adapter->rx_int_delay,
E1000_REGISTER(hw, E1000_RDTR), em_rx_int_delay_dflt);
em_add_int_delay_sysctl(adapter, "tx_int_delay",
"transmit interrupt delay in usecs", &adapter->tx_int_delay,
E1000_REGISTER(hw, E1000_TIDV), em_tx_int_delay_dflt);
em_add_int_delay_sysctl(adapter, "rx_abs_int_delay",
"receive interrupt delay limit in usecs",
&adapter->rx_abs_int_delay,
E1000_REGISTER(hw, E1000_RADV),
em_rx_abs_int_delay_dflt);
em_add_int_delay_sysctl(adapter, "tx_abs_int_delay",
"transmit interrupt delay limit in usecs",
&adapter->tx_abs_int_delay,
E1000_REGISTER(hw, E1000_TADV),
em_tx_abs_int_delay_dflt);
em_add_int_delay_sysctl(adapter, "itr",
"interrupt delay limit in usecs/4",
&adapter->tx_itr,
E1000_REGISTER(hw, E1000_ITR),
DEFAULT_ITR);
hw->mac.autoneg = DO_AUTO_NEG;
hw->phy.autoneg_wait_to_complete = FALSE;
hw->phy.autoneg_advertised = AUTONEG_ADV_DEFAULT;
if (adapter->hw.mac.type < em_mac_min) {
e1000_init_script_state_82541(&adapter->hw, TRUE);
e1000_set_tbi_compatibility_82543(&adapter->hw, TRUE);
}
/* Copper options */
if (hw->phy.media_type == e1000_media_type_copper) {
hw->phy.mdix = AUTO_ALL_MODES;
hw->phy.disable_polarity_correction = FALSE;
hw->phy.ms_type = EM_MASTER_SLAVE;
}
/*
* Set the frame limits assuming
* standard ethernet sized frames.
*/
scctx->isc_max_frame_size = adapter->hw.mac.max_frame_size =
ETHERMTU + ETHER_HDR_LEN + ETHERNET_FCS_SIZE;
/*
* This controls when hardware reports transmit completion
* status.
*/
hw->mac.report_tx_early = 1;
/* Allocate multicast array memory. */
adapter->mta = malloc(sizeof(u8) * ETH_ADDR_LEN *
MAX_NUM_MULTICAST_ADDRESSES, M_DEVBUF, M_NOWAIT);
if (adapter->mta == NULL) {
device_printf(dev, "Can not allocate multicast setup array\n");
error = ENOMEM;
goto err_late;
}
/* Check SOL/IDER usage */
if (e1000_check_reset_block(hw))
device_printf(dev, "PHY reset is blocked"
" due to SOL/IDER session.\n");
/* Sysctl for setting Energy Efficient Ethernet */
hw->dev_spec.ich8lan.eee_disable = eee_setting;
SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev),
SYSCTL_CHILDREN(device_get_sysctl_tree(dev)),
OID_AUTO, "eee_control", CTLTYPE_INT|CTLFLAG_RW,
adapter, 0, em_sysctl_eee, "I",
"Disable Energy Efficient Ethernet");
/*
** Start from a known state; this is
** important when reading the NVM and
** MAC address from it.
*/
e1000_reset_hw(hw);
/* Make sure we have a good EEPROM before we read from it */
if (e1000_validate_nvm_checksum(hw) < 0) {
/*
** Some PCI-E parts fail the first check due to
** the link being in a sleep state. Call it again;
** if it fails a second time, it's a real issue.
*/
if (e1000_validate_nvm_checksum(hw) < 0) {
device_printf(dev,
"The EEPROM Checksum Is Not Valid\n");
error = EIO;
goto err_late;
}
}
/* Copy the permanent MAC address out of the EEPROM */
if (e1000_read_mac_addr(hw) < 0) {
device_printf(dev, "EEPROM read error while reading MAC"
" address\n");
error = EIO;
goto err_late;
}
if (!em_is_valid_ether_addr(hw->mac.addr)) {
device_printf(dev, "Invalid MAC address\n");
error = EIO;
goto err_late;
}
/* Disable ULP support */
e1000_disable_ulp_lpt_lp(hw, TRUE);
/*
* Get Wake-on-Lan and Management info for later use
*/
em_get_wakeup(ctx);
/* Enable only WOL MAGIC by default */
scctx->isc_capenable &= ~IFCAP_WOL;
if (adapter->wol != 0)
scctx->isc_capenable |= IFCAP_WOL_MAGIC;
iflib_set_mac(ctx, hw->mac.addr);
return (0);
err_late:
em_release_hw_control(adapter);
err_pci:
em_free_pci_resources(ctx);
free(adapter->mta, M_DEVBUF);
return (error);
}
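The MSI-X BAR probing step in em_if_attach_pre above (read the expected BAR's base register; a zero read-back means the BAR is unimplemented, so fall back to the next one) can be sketched in isolation. This is a hedged illustration only: FAKE_PCIR_BAR and pick_msix_bar are hypothetical stand-ins operating on an in-memory fake config space, not driver or pci(9) API:

```c
#include <stdint.h>

/* Hypothetical stand-in for PCIR_BAR(): 32-bit BARs start at
 * config offset 0x10 and are 4 bytes apart. */
#define FAKE_PCIR_BAR(x)	(0x10 + (x) * 4)

/*
 * Mirror the igb-class logic: start at the expected BAR and,
 * if its base register reads back as zero (BAR not
 * implemented), fall back to the following BAR.
 */
int
pick_msix_bar(const uint32_t *cfg_space, int expected_bar)
{
	int off = FAKE_PCIR_BAR(expected_bar);

	if (cfg_space[off / 4] == 0)
		off += 4;	/* try the next BAR instead */
	return (off);
}
```

For example, with EM_MSIX_BAR-style BAR 3 implemented the probe keeps offset 0x1c; with it unimplemented (all-zero read-back) it moves on to 0x20.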
static int
em_if_attach_post(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct e1000_hw *hw = &adapter->hw;
int error = 0;
/* Setup OS specific network interface */
error = em_setup_interface(ctx);
if (error != 0) {
goto err_late;
}
em_reset(ctx);
/* Initialize statistics */
em_update_stats_counters(adapter);
hw->mac.get_link_status = 1;
em_if_update_admin_status(ctx);
em_add_hw_stats(adapter);
/* Non-AMT based hardware can now take control from firmware */
if (adapter->has_manage && !adapter->has_amt)
em_get_hw_control(adapter);
INIT_DEBUGOUT("em_if_attach_post: end");
return (error);
err_late:
em_release_hw_control(adapter);
em_free_pci_resources(ctx);
em_if_queues_free(ctx);
free(adapter->mta, M_DEVBUF);
return (error);
}
/*********************************************************************
* Device removal routine
*
* The detach entry point is called when the driver is being removed.
* This routine stops the adapter and deallocates all the resources
* that were allocated for driver operation.
*
* return 0 on success, positive on failure
*********************************************************************/
-
static int
em_if_detach(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
- INIT_DEBUGOUT("em_detach: begin");
+ INIT_DEBUGOUT("em_if_detach: begin");
e1000_phy_hw_reset(&adapter->hw);
em_release_manageability(adapter);
em_release_hw_control(adapter);
em_free_pci_resources(ctx);
return (0);
}
/*********************************************************************
*
* Shutdown entry point
*
**********************************************************************/
static int
em_if_shutdown(if_ctx_t ctx)
{
return em_if_suspend(ctx);
}
/*
* Suspend/resume device methods.
*/
static int
em_if_suspend(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
em_release_manageability(adapter);
em_release_hw_control(adapter);
em_enable_wakeup(ctx);
return (0);
}
static int
em_if_resume(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
if (adapter->hw.mac.type == e1000_pch2lan)
e1000_resume_workarounds_pchlan(&adapter->hw);
em_if_init(ctx);
em_init_manageability(adapter);
return(0);
}
static int
em_if_mtu_set(if_ctx_t ctx, uint32_t mtu)
{
int max_frame_size;
struct adapter *adapter = iflib_get_softc(ctx);
if_softc_ctx_t scctx = iflib_get_softc_ctx(ctx);
IOCTL_DEBUGOUT("ioctl rcv'd: SIOCSIFMTU (Set Interface MTU)");
switch (adapter->hw.mac.type) {
case e1000_82571:
case e1000_82572:
case e1000_ich9lan:
case e1000_ich10lan:
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
case e1000_pch_cnp:
case e1000_82574:
case e1000_82583:
case e1000_80003es2lan:
/* 9K Jumbo Frame size */
max_frame_size = 9234;
break;
case e1000_pchlan:
max_frame_size = 4096;
break;
case e1000_82542:
case e1000_ich8lan:
/* Adapters that do not support jumbo frames */
max_frame_size = ETHER_MAX_LEN;
break;
default:
if (adapter->hw.mac.type >= igb_mac_min)
max_frame_size = 9234;
else /* lem */
max_frame_size = MAX_JUMBO_FRAME_SIZE;
}
if (mtu > max_frame_size - ETHER_HDR_LEN - ETHER_CRC_LEN) {
return (EINVAL);
}
scctx->isc_max_frame_size = adapter->hw.mac.max_frame_size =
mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
return (0);
}
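The MTU handler above converts between the MTU and the controller's maximum frame size by adding the Ethernet header (14 bytes) and CRC (4 bytes). A minimal standalone sketch of that check, using the standard framing constants:

```c
#include <assert.h>

/* Standard Ethernet framing constants (values from <net/ethernet.h>) */
#define ETHER_HDR_LEN 14	/* destination + source MAC + ethertype */
#define ETHER_CRC_LEN 4		/* frame check sequence */

/*
 * Mirror of the driver's check: an MTU is accepted only if the
 * resulting frame (MTU + header + CRC) fits in the controller's
 * maximum frame size.  Returns the frame size on success, 0 where
 * the driver would return EINVAL.
 */
static unsigned
em_frame_size_for_mtu(unsigned mtu, unsigned max_frame_size)
{
	if (mtu > max_frame_size - ETHER_HDR_LEN - ETHER_CRC_LEN)
		return (0);	/* driver returns EINVAL */
	return (mtu + ETHER_HDR_LEN + ETHER_CRC_LEN);
}
```

For example, the 9234-byte jumbo limit used for most MACs above corresponds to a maximum MTU of 9216.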
/*********************************************************************
* Init entry point
*
* This routine is used in two ways. It is used by the stack as
* the init entry point in the network interface structure. It is also
* used by the driver as a hw/sw initialization routine to get to a
* consistent state.
*
- * return 0 on success, positive on failure
**********************************************************************/
-
static void
em_if_init(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
if_softc_ctx_t scctx = adapter->shared;
struct ifnet *ifp = iflib_get_ifp(ctx);
struct em_tx_queue *tx_que;
int i;
+
INIT_DEBUGOUT("em_if_init: begin");
/* Get the latest mac address, User can use a LAA */
bcopy(if_getlladdr(ifp), adapter->hw.mac.addr,
ETHER_ADDR_LEN);
/* Put the address into the Receive Address Array */
e1000_rar_set(&adapter->hw, adapter->hw.mac.addr, 0);
/*
* With the 82571 adapter, RAR[0] may be overwritten
* when the other port is reset; we make a duplicate
* in RAR[14] for that eventuality, which ensures
* the interface continues to function.
*/
if (adapter->hw.mac.type == e1000_82571) {
e1000_set_laa_state_82571(&adapter->hw, TRUE);
e1000_rar_set(&adapter->hw, adapter->hw.mac.addr,
E1000_RAR_ENTRIES - 1);
}
/* Initialize the hardware */
em_reset(ctx);
em_if_update_admin_status(ctx);
for (i = 0, tx_que = adapter->tx_queues; i < adapter->tx_num_queues; i++, tx_que++) {
struct tx_ring *txr = &tx_que->txr;
txr->tx_rs_cidx = txr->tx_rs_pidx;
/* Initialize the last processed descriptor to be the end of
* the ring, rather than the start, so that we avoid an
* off-by-one error when calculating how many descriptors are
* done in the credits_update function.
*/
txr->tx_cidx_processed = scctx->isc_ntxd[0] - 1;
}
/* Setup VLAN support, basic and offload if available */
E1000_WRITE_REG(&adapter->hw, E1000_VET, ETHERTYPE_VLAN);
/* Clear bad data from Rx FIFOs */
if (adapter->hw.mac.type >= igb_mac_min)
e1000_rx_fifo_flush_82575(&adapter->hw);
/* Configure for OS presence */
em_init_manageability(adapter);
/* Prepare transmit descriptors and buffers */
em_initialize_transmit_unit(ctx);
/* Setup Multicast table */
em_if_multi_set(ctx);
/*
* Figure out the desired mbuf
* pool for doing jumbos
*/
if (adapter->hw.mac.max_frame_size <= 2048)
adapter->rx_mbuf_sz = MCLBYTES;
#ifndef CONTIGMALLOC_WORKS
else
adapter->rx_mbuf_sz = MJUMPAGESIZE;
#else
else if (adapter->hw.mac.max_frame_size <= 4096)
adapter->rx_mbuf_sz = MJUMPAGESIZE;
else
adapter->rx_mbuf_sz = MJUM9BYTES;
#endif
em_initialize_receive_unit(ctx);
/* Use real VLAN Filter support? */
if (if_getcapenable(ifp) & IFCAP_VLAN_HWTAGGING) {
if (if_getcapenable(ifp) & IFCAP_VLAN_HWFILTER)
/* Use real VLAN Filter support */
em_setup_vlan_hw_support(adapter);
else {
u32 ctrl;
ctrl = E1000_READ_REG(&adapter->hw, E1000_CTRL);
ctrl |= E1000_CTRL_VME;
E1000_WRITE_REG(&adapter->hw, E1000_CTRL, ctrl);
}
}
/* Don't lose promiscuous settings */
em_if_set_promisc(ctx, IFF_PROMISC);
e1000_clear_hw_cntrs_base_generic(&adapter->hw);
/* MSI-X configuration for 82574 */
if (adapter->hw.mac.type == e1000_82574) {
int tmp = E1000_READ_REG(&adapter->hw, E1000_CTRL_EXT);
tmp |= E1000_CTRL_EXT_PBA_CLR;
E1000_WRITE_REG(&adapter->hw, E1000_CTRL_EXT, tmp);
/* Set the IVAR - interrupt vector routing. */
E1000_WRITE_REG(&adapter->hw, E1000_IVAR, adapter->ivars);
} else if (adapter->intr_type == IFLIB_INTR_MSIX) /* Set up queue routing */
igb_configure_queues(adapter);
/* this clears any pending interrupts */
E1000_READ_REG(&adapter->hw, E1000_ICR);
E1000_WRITE_REG(&adapter->hw, E1000_ICS, E1000_ICS_LSC);
/* AMT based hardware can now take control from firmware */
if (adapter->has_manage && adapter->has_amt)
em_get_hw_control(adapter);
/* Set Energy Efficient Ethernet */
if (adapter->hw.mac.type >= igb_mac_min &&
adapter->hw.phy.media_type == e1000_media_type_copper) {
if (adapter->hw.mac.type == e1000_i354)
e1000_set_eee_i354(&adapter->hw, TRUE, TRUE);
else
e1000_set_eee_i350(&adapter->hw, TRUE, TRUE);
}
}
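The receive-mbuf sizing in em_if_init() above picks the smallest cluster that holds a full frame. A sketch of the selection on the CONTIGMALLOC_WORKS path, with cluster sizes as on a 4 KB-page system (values as in <sys/param.h>):

```c
#include <stddef.h>

/* Cluster sizes on a 4 KB-page system */
#define MCLBYTES	2048
#define MJUMPAGESIZE	4096
#define MJUM9BYTES	(9 * 1024)

/*
 * Mirror of em_if_init()'s rx_mbuf_sz selection: choose the
 * smallest mbuf cluster size that can hold max_frame_size.
 * (Without CONTIGMALLOC_WORKS, the driver instead caps the
 * choice at MJUMPAGESIZE.)
 */
static size_t
rx_mbuf_size(size_t max_frame_size)
{
	if (max_frame_size <= MCLBYTES)
		return (MCLBYTES);
	else if (max_frame_size <= MJUMPAGESIZE)
		return (MJUMPAGESIZE);
	else
		return (MJUM9BYTES);
}
```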
/*********************************************************************
*
* Fast Legacy/MSI Combined Interrupt Service routine
*
*********************************************************************/
int
em_intr(void *arg)
{
struct adapter *adapter = arg;
if_ctx_t ctx = adapter->ctx;
u32 reg_icr;
reg_icr = E1000_READ_REG(&adapter->hw, E1000_ICR);
if (adapter->intr_type != IFLIB_INTR_LEGACY)
goto skip_stray;
/* Hot eject? */
if (reg_icr == 0xffffffff)
return FILTER_STRAY;
/* Definitely not our interrupt. */
if (reg_icr == 0x0)
return FILTER_STRAY;
/*
* Starting with the 82571 chip, bit 31 should be used to
* determine whether the interrupt belongs to us.
*/
if (adapter->hw.mac.type >= e1000_82571 &&
(reg_icr & E1000_ICR_INT_ASSERTED) == 0)
return FILTER_STRAY;
skip_stray:
/* Link status change */
if (reg_icr & (E1000_ICR_RXSEQ | E1000_ICR_LSC)) {
adapter->hw.mac.get_link_status = 1;
iflib_admin_intr_deferred(ctx);
}
if (reg_icr & E1000_ICR_RXO)
adapter->rx_overruns++;
return (FILTER_SCHEDULE_THREAD);
}
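em_intr() above classifies a legacy interrupt purely from the ICR contents: all-ones means the device was hot-ejected, zero means the interrupt was not ours, and on 82571 and later MACs bit 31 (INT_ASSERTED) must be set. A sketch of the same decision tree; the bit value follows the e1000 ICR layout but is restated here as an illustrative constant:

```c
#include <stdint.h>

/* Illustrative ICR bit, matching the e1000 register layout */
#define ICR_INT_ASSERTED 0x80000000u	/* bit 31, 82571 and later */

enum intr_verdict { STRAY, HANDLE };

/*
 * Mirror of em_intr()'s stray-interrupt filtering for legacy
 * interrupts: reject hot-eject (all ones), not-ours (zero), and,
 * on 82571+ MACs, reads without INT_ASSERTED set.
 */
static enum intr_verdict
classify_icr(uint32_t reg_icr, int is_82571_or_later)
{
	if (reg_icr == 0xffffffffu)	/* hot eject */
		return (STRAY);
	if (reg_icr == 0)		/* definitely not ours */
		return (STRAY);
	if (is_82571_or_later && (reg_icr & ICR_INT_ASSERTED) == 0)
		return (STRAY);
	return (HANDLE);
}
```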
static void
igb_rx_enable_queue(struct adapter *adapter, struct em_rx_queue *rxq)
{
E1000_WRITE_REG(&adapter->hw, E1000_EIMS, rxq->eims);
}
static void
em_rx_enable_queue(struct adapter *adapter, struct em_rx_queue *rxq)
{
E1000_WRITE_REG(&adapter->hw, E1000_IMS, rxq->eims);
}
static void
igb_tx_enable_queue(struct adapter *adapter, struct em_tx_queue *txq)
{
E1000_WRITE_REG(&adapter->hw, E1000_EIMS, txq->eims);
}
static void
em_tx_enable_queue(struct adapter *adapter, struct em_tx_queue *txq)
{
E1000_WRITE_REG(&adapter->hw, E1000_IMS, txq->eims);
}
static int
em_if_rx_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct em_rx_queue *rxq = &adapter->rx_queues[rxqid];
if (adapter->hw.mac.type >= igb_mac_min)
igb_rx_enable_queue(adapter, rxq);
else
em_rx_enable_queue(adapter, rxq);
return (0);
}
static int
em_if_tx_queue_intr_enable(if_ctx_t ctx, uint16_t txqid)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct em_tx_queue *txq = &adapter->tx_queues[txqid];
if (adapter->hw.mac.type >= igb_mac_min)
igb_tx_enable_queue(adapter, txq);
else
em_tx_enable_queue(adapter, txq);
return (0);
}
/*********************************************************************
*
* MSI-X RX Interrupt Service routine
*
**********************************************************************/
static int
em_msix_que(void *arg)
{
struct em_rx_queue *que = arg;
++que->irqs;
return (FILTER_SCHEDULE_THREAD);
}
/*********************************************************************
*
* MSI-X Link Fast Interrupt Service routine
*
**********************************************************************/
static int
em_msix_link(void *arg)
{
struct adapter *adapter = arg;
u32 reg_icr;
++adapter->link_irq;
MPASS(adapter->hw.back != NULL);
reg_icr = E1000_READ_REG(&adapter->hw, E1000_ICR);
if (reg_icr & E1000_ICR_RXO)
adapter->rx_overruns++;
if (reg_icr & (E1000_ICR_RXSEQ | E1000_ICR_LSC)) {
em_handle_link(adapter->ctx);
} else {
E1000_WRITE_REG(&adapter->hw, E1000_IMS,
EM_MSIX_LINK | E1000_IMS_LSC);
if (adapter->hw.mac.type >= igb_mac_min)
E1000_WRITE_REG(&adapter->hw, E1000_EIMS, adapter->link_mask);
}
/*
* Because we must read the ICR for this interrupt,
* it may clear other causes using autoclear; for
* that reason we simply create a soft interrupt
* for all these vectors.
*/
if (reg_icr && adapter->hw.mac.type < igb_mac_min) {
E1000_WRITE_REG(&adapter->hw,
E1000_ICS, adapter->ims);
}
return (FILTER_HANDLED);
}
static void
em_handle_link(void *context)
{
if_ctx_t ctx = context;
struct adapter *adapter = iflib_get_softc(ctx);
adapter->hw.mac.get_link_status = 1;
iflib_admin_intr_deferred(ctx);
}
/*********************************************************************
*
* Media Ioctl callback
*
* This routine is called whenever the user queries the status of
* the interface using ifconfig.
*
**********************************************************************/
static void
em_if_media_status(if_ctx_t ctx, struct ifmediareq *ifmr)
{
struct adapter *adapter = iflib_get_softc(ctx);
u_char fiber_type = IFM_1000_SX;
INIT_DEBUGOUT("em_if_media_status: begin");
iflib_admin_intr_deferred(ctx);
ifmr->ifm_status = IFM_AVALID;
ifmr->ifm_active = IFM_ETHER;
if (!adapter->link_active) {
return;
}
ifmr->ifm_status |= IFM_ACTIVE;
if ((adapter->hw.phy.media_type == e1000_media_type_fiber) ||
(adapter->hw.phy.media_type == e1000_media_type_internal_serdes)) {
if (adapter->hw.mac.type == e1000_82545)
fiber_type = IFM_1000_LX;
ifmr->ifm_active |= fiber_type | IFM_FDX;
} else {
switch (adapter->link_speed) {
case 10:
ifmr->ifm_active |= IFM_10_T;
break;
case 100:
ifmr->ifm_active |= IFM_100_TX;
break;
case 1000:
ifmr->ifm_active |= IFM_1000_T;
break;
}
if (adapter->link_duplex == FULL_DUPLEX)
ifmr->ifm_active |= IFM_FDX;
else
ifmr->ifm_active |= IFM_HDX;
}
}
/*********************************************************************
*
* Media Ioctl callback
*
* This routine is called when the user changes speed/duplex using
* the media/mediaopt options of ifconfig.
*
**********************************************************************/
static int
em_if_media_change(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct ifmedia *ifm = iflib_get_media(ctx);
INIT_DEBUGOUT("em_if_media_change: begin");
if (IFM_TYPE(ifm->ifm_media) != IFM_ETHER)
return (EINVAL);
switch (IFM_SUBTYPE(ifm->ifm_media)) {
case IFM_AUTO:
adapter->hw.mac.autoneg = DO_AUTO_NEG;
adapter->hw.phy.autoneg_advertised = AUTONEG_ADV_DEFAULT;
break;
case IFM_1000_LX:
case IFM_1000_SX:
case IFM_1000_T:
adapter->hw.mac.autoneg = DO_AUTO_NEG;
adapter->hw.phy.autoneg_advertised = ADVERTISE_1000_FULL;
break;
case IFM_100_TX:
adapter->hw.mac.autoneg = FALSE;
adapter->hw.phy.autoneg_advertised = 0;
if ((ifm->ifm_media & IFM_GMASK) == IFM_FDX)
adapter->hw.mac.forced_speed_duplex = ADVERTISE_100_FULL;
else
adapter->hw.mac.forced_speed_duplex = ADVERTISE_100_HALF;
break;
case IFM_10_T:
adapter->hw.mac.autoneg = FALSE;
adapter->hw.phy.autoneg_advertised = 0;
if ((ifm->ifm_media & IFM_GMASK) == IFM_FDX)
adapter->hw.mac.forced_speed_duplex = ADVERTISE_10_FULL;
else
adapter->hw.mac.forced_speed_duplex = ADVERTISE_10_HALF;
break;
default:
device_printf(adapter->dev, "Unsupported media type\n");
}
em_if_init(ctx);
return (0);
}
static int
em_if_set_promisc(if_ctx_t ctx, int flags)
{
struct adapter *adapter = iflib_get_softc(ctx);
u32 reg_rctl;
em_disable_promisc(ctx);
reg_rctl = E1000_READ_REG(&adapter->hw, E1000_RCTL);
if (flags & IFF_PROMISC) {
reg_rctl |= (E1000_RCTL_UPE | E1000_RCTL_MPE);
/* Turn this on if you want to see bad packets */
if (em_debug_sbp)
reg_rctl |= E1000_RCTL_SBP;
E1000_WRITE_REG(&adapter->hw, E1000_RCTL, reg_rctl);
} else if (flags & IFF_ALLMULTI) {
reg_rctl |= E1000_RCTL_MPE;
reg_rctl &= ~E1000_RCTL_UPE;
E1000_WRITE_REG(&adapter->hw, E1000_RCTL, reg_rctl);
}
return (0);
}
static void
em_disable_promisc(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct ifnet *ifp = iflib_get_ifp(ctx);
u32 reg_rctl;
int mcnt = 0;
reg_rctl = E1000_READ_REG(&adapter->hw, E1000_RCTL);
reg_rctl &= (~E1000_RCTL_UPE);
if (if_getflags(ifp) & IFF_ALLMULTI)
mcnt = MAX_NUM_MULTICAST_ADDRESSES;
else
mcnt = if_multiaddr_count(ifp, MAX_NUM_MULTICAST_ADDRESSES);
/* Don't disable if in MAX groups */
if (mcnt < MAX_NUM_MULTICAST_ADDRESSES)
reg_rctl &= (~E1000_RCTL_MPE);
reg_rctl &= (~E1000_RCTL_SBP);
E1000_WRITE_REG(&adapter->hw, E1000_RCTL, reg_rctl);
}
/*********************************************************************
* Multicast Update
*
* This routine is called whenever the multicast address list is updated.
*
**********************************************************************/
static void
em_if_multi_set(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct ifnet *ifp = iflib_get_ifp(ctx);
u32 reg_rctl = 0;
u8 *mta; /* Multicast array memory */
int mcnt = 0;
IOCTL_DEBUGOUT("em_if_multi_set: begin");
mta = adapter->mta;
bzero(mta, sizeof(u8) * ETH_ADDR_LEN * MAX_NUM_MULTICAST_ADDRESSES);
if (adapter->hw.mac.type == e1000_82542 &&
adapter->hw.revision_id == E1000_REVISION_2) {
reg_rctl = E1000_READ_REG(&adapter->hw, E1000_RCTL);
if (adapter->hw.bus.pci_cmd_word & CMD_MEM_WRT_INVALIDATE)
e1000_pci_clear_mwi(&adapter->hw);
reg_rctl |= E1000_RCTL_RST;
E1000_WRITE_REG(&adapter->hw, E1000_RCTL, reg_rctl);
msec_delay(5);
}
if_multiaddr_array(ifp, mta, &mcnt, MAX_NUM_MULTICAST_ADDRESSES);
if (mcnt >= MAX_NUM_MULTICAST_ADDRESSES) {
reg_rctl = E1000_READ_REG(&adapter->hw, E1000_RCTL);
reg_rctl |= E1000_RCTL_MPE;
E1000_WRITE_REG(&adapter->hw, E1000_RCTL, reg_rctl);
} else
e1000_update_mc_addr_list(&adapter->hw, mta, mcnt);
if (adapter->hw.mac.type == e1000_82542 &&
adapter->hw.revision_id == E1000_REVISION_2) {
reg_rctl = E1000_READ_REG(&adapter->hw, E1000_RCTL);
reg_rctl &= ~E1000_RCTL_RST;
E1000_WRITE_REG(&adapter->hw, E1000_RCTL, reg_rctl);
msec_delay(5);
if (adapter->hw.bus.pci_cmd_word & CMD_MEM_WRT_INVALIDATE)
e1000_pci_set_mwi(&adapter->hw);
}
}
-
/*********************************************************************
* Timer routine
*
- * This routine checks for link status and updates statistics.
+ * This routine schedules em_if_update_admin_status() to check for
+ * link status and to gather statistics as well as to perform some
+ * controller-specific hardware patting.
*
**********************************************************************/
-
static void
em_if_timer(if_ctx_t ctx, uint16_t qid)
{
- struct adapter *adapter = iflib_get_softc(ctx);
- struct em_rx_queue *que;
- int i;
- int trigger = 0;
if (qid != 0)
return;
iflib_admin_intr_deferred(ctx);
-
- /* Mask to use in the irq trigger */
- if (adapter->intr_type == IFLIB_INTR_MSIX) {
- for (i = 0, que = adapter->rx_queues; i < adapter->rx_num_queues; i++, que++)
- trigger |= que->eims;
- } else {
- trigger = E1000_ICS_RXDMT0;
- }
}
-
static void
em_if_update_admin_status(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct e1000_hw *hw = &adapter->hw;
device_t dev = iflib_get_dev(ctx);
u32 link_check, thstat, ctrl;
link_check = thstat = ctrl = 0;
/* Get the cached link value or read phy for real */
switch (hw->phy.media_type) {
case e1000_media_type_copper:
if (hw->mac.get_link_status) {
if (hw->mac.type == e1000_pch_spt)
msec_delay(50);
/* Do the work to read phy */
e1000_check_for_link(hw);
link_check = !hw->mac.get_link_status;
if (link_check) /* ESB2 fix */
e1000_cfg_on_link_up(hw);
} else {
link_check = TRUE;
}
break;
case e1000_media_type_fiber:
e1000_check_for_link(hw);
link_check = (E1000_READ_REG(hw, E1000_STATUS) &
E1000_STATUS_LU);
break;
case e1000_media_type_internal_serdes:
e1000_check_for_link(hw);
link_check = adapter->hw.mac.serdes_has_link;
break;
/* VF device is type_unknown */
case e1000_media_type_unknown:
e1000_check_for_link(hw);
link_check = !hw->mac.get_link_status;
/* FALLTHROUGH */
default:
break;
}
/* Check for thermal downshift or shutdown */
if (hw->mac.type == e1000_i350) {
thstat = E1000_READ_REG(hw, E1000_THSTAT);
ctrl = E1000_READ_REG(hw, E1000_CTRL_EXT);
}
/* Now check for a transition */
if (link_check && (adapter->link_active == 0)) {
e1000_get_speed_and_duplex(hw, &adapter->link_speed,
&adapter->link_duplex);
/* Check if we must disable SPEED_MODE bit on PCI-E */
if ((adapter->link_speed != SPEED_1000) &&
((hw->mac.type == e1000_82571) ||
(hw->mac.type == e1000_82572))) {
int tarc0;
tarc0 = E1000_READ_REG(hw, E1000_TARC(0));
tarc0 &= ~TARC_SPEED_MODE_BIT;
E1000_WRITE_REG(hw, E1000_TARC(0), tarc0);
}
if (bootverbose)
device_printf(dev, "Link is up %d Mbps %s\n",
adapter->link_speed,
((adapter->link_duplex == FULL_DUPLEX) ?
"Full Duplex" : "Half Duplex"));
adapter->link_active = 1;
adapter->smartspeed = 0;
if ((ctrl & E1000_CTRL_EXT_LINK_MODE_MASK) ==
E1000_CTRL_EXT_LINK_MODE_GMII &&
(thstat & E1000_THSTAT_LINK_THROTTLE))
device_printf(dev, "Link: thermal downshift\n");
/* Delay Link Up for Phy update */
if (((hw->mac.type == e1000_i210) ||
(hw->mac.type == e1000_i211)) &&
(hw->phy.id == I210_I_PHY_ID))
msec_delay(I210_LINK_DELAY);
/* Reset if the media type changed. */
if ((hw->dev_spec._82575.media_changed) &&
(adapter->hw.mac.type >= igb_mac_min)) {
hw->dev_spec._82575.media_changed = false;
adapter->flags |= IGB_MEDIA_RESET;
em_reset(ctx);
}
iflib_link_state_change(ctx, LINK_STATE_UP,
IF_Mbps(adapter->link_speed));
} else if (!link_check && (adapter->link_active == 1)) {
adapter->link_speed = 0;
adapter->link_duplex = 0;
adapter->link_active = 0;
iflib_link_state_change(ctx, LINK_STATE_DOWN, 0);
}
em_update_stats_counters(adapter);
/* Reset LAA into RAR[0] on 82571 */
if ((adapter->hw.mac.type == e1000_82571) &&
e1000_get_laa_state_82571(&adapter->hw))
e1000_rar_set(&adapter->hw, adapter->hw.mac.addr, 0);
if (adapter->hw.mac.type < em_mac_min)
lem_smartspeed(adapter);
E1000_WRITE_REG(&adapter->hw, E1000_IMS, EM_MSIX_LINK | E1000_IMS_LSC);
}
+static void
+em_if_watchdog_reset(if_ctx_t ctx)
+{
+ struct adapter *adapter = iflib_get_softc(ctx);
+
+ /*
+ * Just count the event; iflib(4) will already trigger a
+ * sufficient reset of the controller.
+ */
+ adapter->watchdog_events++;
+}
+
/*********************************************************************
*
* This routine disables all traffic on the adapter by issuing a
- * global reset on the MAC and deallocates TX/RX buffers.
+ * global reset on the MAC.
*
- * This routine should always be called with BOTH the CORE
- * and TX locks.
**********************************************************************/
-
static void
em_if_stop(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
- INIT_DEBUGOUT("em_stop: begin");
+ INIT_DEBUGOUT("em_if_stop: begin");
e1000_reset_hw(&adapter->hw);
if (adapter->hw.mac.type >= e1000_82544)
E1000_WRITE_REG(&adapter->hw, E1000_WUFC, 0);
e1000_led_off(&adapter->hw);
e1000_cleanup_led(&adapter->hw);
}
-
/*********************************************************************
*
* Determine hardware revision.
*
**********************************************************************/
static void
em_identify_hardware(if_ctx_t ctx)
{
device_t dev = iflib_get_dev(ctx);
struct adapter *adapter = iflib_get_softc(ctx);
/* Make sure our PCI config space has the necessary stuff set */
adapter->hw.bus.pci_cmd_word = pci_read_config(dev, PCIR_COMMAND, 2);
/* Save off the information about this board */
adapter->hw.vendor_id = pci_get_vendor(dev);
adapter->hw.device_id = pci_get_device(dev);
adapter->hw.revision_id = pci_read_config(dev, PCIR_REVID, 1);
adapter->hw.subsystem_vendor_id =
pci_read_config(dev, PCIR_SUBVEND_0, 2);
adapter->hw.subsystem_device_id =
pci_read_config(dev, PCIR_SUBDEV_0, 2);
/* Do Shared Code Init and Setup */
if (e1000_set_mac_type(&adapter->hw)) {
device_printf(dev, "Setup init failure\n");
return;
}
}
static int
em_allocate_pci_resources(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
int rid, val;
rid = PCIR_BAR(0);
adapter->memory = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
&rid, RF_ACTIVE);
if (adapter->memory == NULL) {
device_printf(dev, "Unable to allocate bus resource: memory\n");
return (ENXIO);
}
adapter->osdep.mem_bus_space_tag = rman_get_bustag(adapter->memory);
adapter->osdep.mem_bus_space_handle =
rman_get_bushandle(adapter->memory);
adapter->hw.hw_addr = (u8 *)&adapter->osdep.mem_bus_space_handle;
/* Only older adapters use IO mapping */
if (adapter->hw.mac.type < em_mac_min &&
adapter->hw.mac.type > e1000_82543) {
/* Figure out where our I/O BAR is */
for (rid = PCIR_BAR(0); rid < PCIR_CIS;) {
val = pci_read_config(dev, rid, 4);
if (EM_BAR_TYPE(val) == EM_BAR_TYPE_IO) {
break;
}
rid += 4;
/* check for 64bit BAR */
if (EM_BAR_MEM_TYPE(val) == EM_BAR_MEM_TYPE_64BIT)
rid += 4;
}
if (rid >= PCIR_CIS) {
device_printf(dev, "Unable to locate IO BAR\n");
return (ENXIO);
}
adapter->ioport = bus_alloc_resource_any(dev, SYS_RES_IOPORT,
&rid, RF_ACTIVE);
if (adapter->ioport == NULL) {
device_printf(dev, "Unable to allocate bus resource: "
"ioport\n");
return (ENXIO);
}
adapter->hw.io_base = 0;
adapter->osdep.io_bus_space_tag =
rman_get_bustag(adapter->ioport);
adapter->osdep.io_bus_space_handle =
rman_get_bushandle(adapter->ioport);
}
adapter->hw.back = &adapter->osdep;
return (0);
}
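em_allocate_pci_resources() locates the I/O BAR by walking config space from PCIR_BAR(0) to PCIR_CIS, consuming an extra dword whenever a 64-bit memory BAR is encountered. A sketch of that walk over an array standing in for config space; the offsets and type-decoding macros are simplified assumptions modeled on the PCI BAR encoding:

```c
#include <stdint.h>

#define BAR0_OFF	0x10	/* PCIR_BAR(0) */
#define CIS_OFF		0x28	/* PCIR_CIS */

/* Simplified BAR type decoding, modeled on the EM_BAR_* macros */
#define BAR_IS_IO(v)	(((v) & 0x1) == 0x1)
#define BAR_IS_64BIT(v)	(((v) & 0x7) == 0x4)

/*
 * Walk the BAR registers the way the driver does: stop at the first
 * I/O BAR, and skip the second dword of any 64-bit memory BAR.
 * 'cfg' holds the BAR register values as consecutive dwords starting
 * at BAR0.  Returns the config-space offset of the I/O BAR, or -1 if
 * none is found before the CIS pointer.
 */
static int
find_io_bar(const uint32_t *cfg)
{
	int rid;
	uint32_t val;

	for (rid = BAR0_OFF; rid < CIS_OFF;) {
		val = cfg[(rid - BAR0_OFF) / 4];
		if (BAR_IS_IO(val))
			return (rid);
		rid += 4;
		if (BAR_IS_64BIT(val))
			rid += 4;	/* 64-bit BAR uses two dwords */
	}
	return (-1);
}
```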
/*********************************************************************
*
* Set up the MSI-X Interrupt handlers
*
**********************************************************************/
static int
em_if_msix_intr_assign(if_ctx_t ctx, int msix)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct em_rx_queue *rx_que = adapter->rx_queues;
struct em_tx_queue *tx_que = adapter->tx_queues;
int error, rid, i, vector = 0, rx_vectors;
char buf[16];
/* First set up ring resources */
for (i = 0; i < adapter->rx_num_queues; i++, rx_que++, vector++) {
rid = vector + 1;
snprintf(buf, sizeof(buf), "rxq%d", i);
error = iflib_irq_alloc_generic(ctx, &rx_que->que_irq, rid, IFLIB_INTR_RXTX, em_msix_que, rx_que, rx_que->me, buf);
if (error) {
device_printf(iflib_get_dev(ctx), "Failed to allocate que int %d err: %d", i, error);
adapter->rx_num_queues = i + 1;
goto fail;
}
rx_que->msix = vector;
/*
* Set the bit to enable interrupt
* in E1000_IMS -- bits 20 and 21
* are for RX0 and RX1, note this has
* NOTHING to do with the MSI-X vector
*/
if (adapter->hw.mac.type == e1000_82574) {
rx_que->eims = 1 << (20 + i);
adapter->ims |= rx_que->eims;
adapter->ivars |= (8 | rx_que->msix) << (i * 4);
} else if (adapter->hw.mac.type == e1000_82575)
rx_que->eims = E1000_EICR_TX_QUEUE0 << vector;
else
rx_que->eims = 1 << vector;
}
rx_vectors = vector;
vector = 0;
for (i = 0; i < adapter->tx_num_queues; i++, tx_que++, vector++) {
snprintf(buf, sizeof(buf), "txq%d", i);
tx_que = &adapter->tx_queues[i];
iflib_softirq_alloc_generic(ctx,
&adapter->rx_queues[i % adapter->rx_num_queues].que_irq,
IFLIB_INTR_TX, tx_que, tx_que->me, buf);
- tx_que->msix = (vector % adapter->tx_num_queues);
+ tx_que->msix = (vector % adapter->rx_num_queues);
/*
* Set the bit to enable interrupt
* in E1000_IMS -- bits 22 and 23
* are for TX0 and TX1, note this has
* NOTHING to do with the MSI-X vector
*/
if (adapter->hw.mac.type == e1000_82574) {
tx_que->eims = 1 << (22 + i);
adapter->ims |= tx_que->eims;
adapter->ivars |= (8 | tx_que->msix) << (8 + (i * 4));
} else if (adapter->hw.mac.type == e1000_82575) {
- tx_que->eims = E1000_EICR_TX_QUEUE0 << (i % adapter->tx_num_queues);
+ tx_que->eims = E1000_EICR_TX_QUEUE0 << i;
} else {
- tx_que->eims = 1 << (i % adapter->tx_num_queues);
+ tx_que->eims = 1 << i;
}
}
/* Link interrupt */
rid = rx_vectors + 1;
error = iflib_irq_alloc_generic(ctx, &adapter->irq, rid, IFLIB_INTR_ADMIN, em_msix_link, adapter, 0, "aq");
if (error) {
device_printf(iflib_get_dev(ctx), "Failed to register admin handler");
goto fail;
}
adapter->linkvec = rx_vectors;
if (adapter->hw.mac.type < igb_mac_min) {
adapter->ivars |= (8 | rx_vectors) << 16;
adapter->ivars |= 0x80000000;
}
return (0);
fail:
iflib_irq_free(ctx, &adapter->irq);
rx_que = adapter->rx_queues;
for (int i = 0; i < adapter->rx_num_queues; i++, rx_que++)
iflib_irq_free(ctx, &rx_que->que_irq);
return (error);
}
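On the 82574, the assign routine above packs each queue's MSI-X vector into IVAR in 4-bit nibbles (with 8, the high bit of the nibble, marking the entry valid), while the IMS enable bits for RX0/RX1 and TX0/TX1 sit at bits 20-23 independent of the vector number. A sketch of that packing:

```c
#include <stdint.h>

/*
 * Reproduce the 82574 bit packing from em_if_msix_intr_assign():
 * RX queue i enables IMS bit (20 + i), TX queue i enables (22 + i),
 * and each IVAR nibble holds (8 | vector), 8 being the valid flag.
 */
struct ivar_state {
	uint32_t ims;
	uint32_t ivars;
};

static void
pack_82574_rx(struct ivar_state *s, int i, int vector)
{
	s->ims   |= 1u << (20 + i);
	s->ivars |= (uint32_t)(8 | vector) << (i * 4);
}

static void
pack_82574_tx(struct ivar_state *s, int i, int vector)
{
	s->ims   |= 1u << (22 + i);
	s->ivars |= (uint32_t)(8 | vector) << (8 + i * 4);
}

static void
pack_82574_link(struct ivar_state *s, int vector)
{
	/* The link vector lands in bits 16-19; bit 31 enables the scheme */
	s->ivars |= (uint32_t)(8 | vector) << 16;
	s->ivars |= 0x80000000u;
}
```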
static void
igb_configure_queues(struct adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
struct em_rx_queue *rx_que;
struct em_tx_queue *tx_que;
u32 tmp, ivar = 0, newitr = 0;
/* First turn on RSS capability */
if (adapter->hw.mac.type != e1000_82575)
E1000_WRITE_REG(hw, E1000_GPIE,
E1000_GPIE_MSIX_MODE | E1000_GPIE_EIAME |
E1000_GPIE_PBA | E1000_GPIE_NSICR);
/* Turn on MSI-X */
switch (adapter->hw.mac.type) {
case e1000_82580:
case e1000_i350:
case e1000_i354:
case e1000_i210:
case e1000_i211:
case e1000_vfadapt:
case e1000_vfadapt_i350:
/* RX entries */
for (int i = 0; i < adapter->rx_num_queues; i++) {
u32 index = i >> 1;
ivar = E1000_READ_REG_ARRAY(hw, E1000_IVAR0, index);
rx_que = &adapter->rx_queues[i];
if (i & 1) {
ivar &= 0xFF00FFFF;
ivar |= (rx_que->msix | E1000_IVAR_VALID) << 16;
} else {
ivar &= 0xFFFFFF00;
ivar |= rx_que->msix | E1000_IVAR_VALID;
}
E1000_WRITE_REG_ARRAY(hw, E1000_IVAR0, index, ivar);
}
/* TX entries */
for (int i = 0; i < adapter->tx_num_queues; i++) {
u32 index = i >> 1;
ivar = E1000_READ_REG_ARRAY(hw, E1000_IVAR0, index);
tx_que = &adapter->tx_queues[i];
if (i & 1) {
ivar &= 0x00FFFFFF;
ivar |= (tx_que->msix | E1000_IVAR_VALID) << 24;
} else {
ivar &= 0xFFFF00FF;
ivar |= (tx_que->msix | E1000_IVAR_VALID) << 8;
}
E1000_WRITE_REG_ARRAY(hw, E1000_IVAR0, index, ivar);
adapter->que_mask |= tx_que->eims;
}
/* And for the link interrupt */
ivar = (adapter->linkvec | E1000_IVAR_VALID) << 8;
adapter->link_mask = 1 << adapter->linkvec;
E1000_WRITE_REG(hw, E1000_IVAR_MISC, ivar);
break;
case e1000_82576:
/* RX entries */
for (int i = 0; i < adapter->rx_num_queues; i++) {
u32 index = i & 0x7; /* Each IVAR has two entries */
ivar = E1000_READ_REG_ARRAY(hw, E1000_IVAR0, index);
rx_que = &adapter->rx_queues[i];
if (i < 8) {
ivar &= 0xFFFFFF00;
ivar |= rx_que->msix | E1000_IVAR_VALID;
} else {
ivar &= 0xFF00FFFF;
ivar |= (rx_que->msix | E1000_IVAR_VALID) << 16;
}
E1000_WRITE_REG_ARRAY(hw, E1000_IVAR0, index, ivar);
adapter->que_mask |= rx_que->eims;
}
/* TX entries */
for (int i = 0; i < adapter->tx_num_queues; i++) {
u32 index = i & 0x7; /* Each IVAR has two entries */
ivar = E1000_READ_REG_ARRAY(hw, E1000_IVAR0, index);
tx_que = &adapter->tx_queues[i];
if (i < 8) {
ivar &= 0xFFFF00FF;
ivar |= (tx_que->msix | E1000_IVAR_VALID) << 8;
} else {
ivar &= 0x00FFFFFF;
ivar |= (tx_que->msix | E1000_IVAR_VALID) << 24;
}
E1000_WRITE_REG_ARRAY(hw, E1000_IVAR0, index, ivar);
adapter->que_mask |= tx_que->eims;
}
/* And for the link interrupt */
ivar = (adapter->linkvec | E1000_IVAR_VALID) << 8;
adapter->link_mask = 1 << adapter->linkvec;
E1000_WRITE_REG(hw, E1000_IVAR_MISC, ivar);
break;
case e1000_82575:
/* Enable MSI-X support */
tmp = E1000_READ_REG(hw, E1000_CTRL_EXT);
tmp |= E1000_CTRL_EXT_PBA_CLR;
/* Auto-Mask interrupts upon ICR read. */
tmp |= E1000_CTRL_EXT_EIAME;
tmp |= E1000_CTRL_EXT_IRCA;
E1000_WRITE_REG(hw, E1000_CTRL_EXT, tmp);
/* Queues */
for (int i = 0; i < adapter->rx_num_queues; i++) {
rx_que = &adapter->rx_queues[i];
tmp = E1000_EICR_RX_QUEUE0 << i;
tmp |= E1000_EICR_TX_QUEUE0 << i;
rx_que->eims = tmp;
E1000_WRITE_REG_ARRAY(hw, E1000_MSIXBM(0),
i, rx_que->eims);
adapter->que_mask |= rx_que->eims;
}
/* Link */
E1000_WRITE_REG(hw, E1000_MSIXBM(adapter->linkvec),
E1000_EIMS_OTHER);
adapter->link_mask |= E1000_EIMS_OTHER;
default:
break;
}
/* Set the starting interrupt rate */
if (em_max_interrupt_rate > 0)
newitr = (4000000 / em_max_interrupt_rate) & 0x7FFC;
if (hw->mac.type == e1000_82575)
newitr |= newitr << 16;
else
newitr |= E1000_EITR_CNT_IGNR;
for (int i = 0; i < adapter->rx_num_queues; i++) {
rx_que = &adapter->rx_queues[i];
E1000_WRITE_REG(hw, E1000_EITR(rx_que->msix), newitr);
}
return;
}
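igb_configure_queues() derives the initial EITR throttle value from the requested maximum interrupt rate: (4000000 / rate) masked to the interval field with 0x7FFC, mirrored into the high half on the 82575, or OR'd with the counter-ignore bit on later MACs. A sketch, with E1000_EITR_CNT_IGNR restated as an assumed constant:

```c
#include <stdint.h>

#define EITR_CNT_IGNR 0x80000000u	/* stands in for E1000_EITR_CNT_IGNR */

/*
 * Initial EITR computation from igb_configure_queues(): derive the
 * throttle interval from the requested max interrupt rate, masking
 * to the interval field (hence & 0x7FFC).  On the 82575 the value
 * is mirrored into the high half of the register; later MACs set
 * the counter-ignore bit instead.
 */
static uint32_t
initial_eitr(int max_interrupt_rate, int is_82575)
{
	uint32_t newitr = 0;

	if (max_interrupt_rate > 0)
		newitr = (4000000 / max_interrupt_rate) & 0x7FFC;
	if (is_82575)
		newitr |= newitr << 16;
	else
		newitr |= EITR_CNT_IGNR;
	return (newitr);
}
```

For instance, a requested rate of 8000 interrupts/s yields an interval value of 500 (0x1F4).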
static void
em_free_pci_resources(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct em_rx_queue *que = adapter->rx_queues;
device_t dev = iflib_get_dev(ctx);
/* Release all MSI-X queue resources */
if (adapter->intr_type == IFLIB_INTR_MSIX)
iflib_irq_free(ctx, &adapter->irq);
for (int i = 0; i < adapter->rx_num_queues; i++, que++) {
iflib_irq_free(ctx, &que->que_irq);
}
if (adapter->memory != NULL) {
bus_release_resource(dev, SYS_RES_MEMORY,
rman_get_rid(adapter->memory), adapter->memory);
adapter->memory = NULL;
}
if (adapter->flash != NULL) {
bus_release_resource(dev, SYS_RES_MEMORY,
rman_get_rid(adapter->flash), adapter->flash);
adapter->flash = NULL;
}
if (adapter->ioport != NULL) {
bus_release_resource(dev, SYS_RES_IOPORT,
rman_get_rid(adapter->ioport), adapter->ioport);
adapter->ioport = NULL;
}
}
/* Set up MSI or MSI-X */
static int
em_setup_msix(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
if (adapter->hw.mac.type == e1000_82574) {
em_enable_vectors_82574(ctx);
}
return (0);
}
/*********************************************************************
*
- * Initialize the hardware to a configuration
- * as specified by the adapter structure.
+ * Workaround for SmartSpeed on 82541 and 82547 controllers
*
**********************************************************************/
-
static void
lem_smartspeed(struct adapter *adapter)
{
u16 phy_tmp;
if (adapter->link_active || (adapter->hw.phy.type != e1000_phy_igp) ||
adapter->hw.mac.autoneg == 0 ||
(adapter->hw.phy.autoneg_advertised & ADVERTISE_1000_FULL) == 0)
return;
if (adapter->smartspeed == 0) {
/* If Master/Slave config fault is asserted twice,
* we assume back-to-back */
e1000_read_phy_reg(&adapter->hw, PHY_1000T_STATUS, &phy_tmp);
if (!(phy_tmp & SR_1000T_MS_CONFIG_FAULT))
return;
e1000_read_phy_reg(&adapter->hw, PHY_1000T_STATUS, &phy_tmp);
if (phy_tmp & SR_1000T_MS_CONFIG_FAULT) {
e1000_read_phy_reg(&adapter->hw,
PHY_1000T_CTRL, &phy_tmp);
if(phy_tmp & CR_1000T_MS_ENABLE) {
phy_tmp &= ~CR_1000T_MS_ENABLE;
e1000_write_phy_reg(&adapter->hw,
PHY_1000T_CTRL, phy_tmp);
adapter->smartspeed++;
if(adapter->hw.mac.autoneg &&
!e1000_copper_link_autoneg(&adapter->hw) &&
!e1000_read_phy_reg(&adapter->hw,
PHY_CONTROL, &phy_tmp)) {
phy_tmp |= (MII_CR_AUTO_NEG_EN |
MII_CR_RESTART_AUTO_NEG);
e1000_write_phy_reg(&adapter->hw,
PHY_CONTROL, phy_tmp);
}
}
}
return;
} else if(adapter->smartspeed == EM_SMARTSPEED_DOWNSHIFT) {
/* If still no link, perhaps using 2/3 pair cable */
e1000_read_phy_reg(&adapter->hw, PHY_1000T_CTRL, &phy_tmp);
phy_tmp |= CR_1000T_MS_ENABLE;
e1000_write_phy_reg(&adapter->hw, PHY_1000T_CTRL, phy_tmp);
if(adapter->hw.mac.autoneg &&
!e1000_copper_link_autoneg(&adapter->hw) &&
!e1000_read_phy_reg(&adapter->hw, PHY_CONTROL, &phy_tmp)) {
phy_tmp |= (MII_CR_AUTO_NEG_EN |
MII_CR_RESTART_AUTO_NEG);
e1000_write_phy_reg(&adapter->hw, PHY_CONTROL, phy_tmp);
}
}
/* Restart process after EM_SMARTSPEED_MAX iterations */
if(adapter->smartspeed++ == EM_SMARTSPEED_MAX)
adapter->smartspeed = 0;
}
/*********************************************************************
*
* Initialize the DMA Coalescing feature
*
**********************************************************************/
static void
igb_init_dmac(struct adapter *adapter, u32 pba)
{
device_t dev = adapter->dev;
struct e1000_hw *hw = &adapter->hw;
u32 dmac, reg = ~E1000_DMACR_DMAC_EN;
u16 hwm;
u16 max_frame_size;
if (hw->mac.type == e1000_i211)
return;
max_frame_size = adapter->shared->isc_max_frame_size;
if (hw->mac.type > e1000_82580) {
if (adapter->dmac == 0) { /* Disabling it */
E1000_WRITE_REG(hw, E1000_DMACR, reg);
return;
} else
device_printf(dev, "DMA Coalescing enabled\n");
/* Set starting threshold */
E1000_WRITE_REG(hw, E1000_DMCTXTH, 0);
hwm = 64 * pba - max_frame_size / 16;
if (hwm < 64 * (pba - 6))
hwm = 64 * (pba - 6);
reg = E1000_READ_REG(hw, E1000_FCRTC);
reg &= ~E1000_FCRTC_RTH_COAL_MASK;
reg |= ((hwm << E1000_FCRTC_RTH_COAL_SHIFT)
& E1000_FCRTC_RTH_COAL_MASK);
E1000_WRITE_REG(hw, E1000_FCRTC, reg);
dmac = pba - max_frame_size / 512;
if (dmac < pba - 10)
dmac = pba - 10;
reg = E1000_READ_REG(hw, E1000_DMACR);
reg &= ~E1000_DMACR_DMACTHR_MASK;
reg |= ((dmac << E1000_DMACR_DMACTHR_SHIFT)
& E1000_DMACR_DMACTHR_MASK);
/* transition to L0s or L1 if available */
reg |= (E1000_DMACR_DMAC_EN | E1000_DMACR_DMAC_LX_MASK);
/* Check for a 2.5Gb backplane connection before
 * configuring the watchdog timer: its field holds
 * msec values in 12.8 usec intervals on 2.5Gb links,
 * and in 32 usec intervals otherwise.
 */
if (hw->mac.type == e1000_i354) {
int status = E1000_READ_REG(hw, E1000_STATUS);
if ((status & E1000_STATUS_2P5_SKU) &&
(!(status & E1000_STATUS_2P5_SKU_OVER)))
reg |= ((adapter->dmac * 5) >> 6);
else
reg |= (adapter->dmac >> 5);
} else {
reg |= (adapter->dmac >> 5);
}
E1000_WRITE_REG(hw, E1000_DMACR, reg);
E1000_WRITE_REG(hw, E1000_DMCRTRH, 0);
/* Set the interval before transition */
reg = E1000_READ_REG(hw, E1000_DMCTLX);
if (hw->mac.type == e1000_i350)
reg |= IGB_DMCTLX_DCFLUSH_DIS;
/*
** On a 2.5Gb connection the TTLX unit is 0.4 usec, so the
** 4 usec delay is 4 / 0.4 = 10 = 0xA units; the delay
** itself is still 4 usec.
*/
if (hw->mac.type == e1000_i354) {
int status = E1000_READ_REG(hw, E1000_STATUS);
if ((status & E1000_STATUS_2P5_SKU) &&
(!(status & E1000_STATUS_2P5_SKU_OVER)))
reg |= 0xA;
else
reg |= 0x4;
} else {
reg |= 0x4;
}
E1000_WRITE_REG(hw, E1000_DMCTLX, reg);
/* free space in tx packet buffer to wake from DMA coal */
E1000_WRITE_REG(hw, E1000_DMCTXTH, (IGB_TXPBSIZE -
(2 * max_frame_size)) >> 6);
/* make low power state decision controlled by DMA coal */
reg = E1000_READ_REG(hw, E1000_PCIEMISC);
reg &= ~E1000_PCIEMISC_LX_DECISION;
E1000_WRITE_REG(hw, E1000_PCIEMISC, reg);
} else if (hw->mac.type == e1000_82580) {
u32 reg = E1000_READ_REG(hw, E1000_PCIEMISC);
E1000_WRITE_REG(hw, E1000_PCIEMISC,
reg & ~E1000_PCIEMISC_LX_DECISION);
E1000_WRITE_REG(hw, E1000_DMACR, 0);
}
}
+/*********************************************************************
+ *
+ * Initialize the hardware to a configuration as specified by the
+ * adapter structure.
+ *
+ **********************************************************************/
static void
em_reset(if_ctx_t ctx)
{
device_t dev = iflib_get_dev(ctx);
struct adapter *adapter = iflib_get_softc(ctx);
struct ifnet *ifp = iflib_get_ifp(ctx);
struct e1000_hw *hw = &adapter->hw;
u16 rx_buffer_size;
u32 pba;
INIT_DEBUGOUT("em_reset: begin");
/* Let the firmware know the OS is in control */
em_get_hw_control(adapter);
/* Set up smart power down as default off on newer adapters. */
if (!em_smart_pwr_down && (hw->mac.type == e1000_82571 ||
hw->mac.type == e1000_82572)) {
u16 phy_tmp = 0;
/* Speed up time to link by disabling smart power down. */
e1000_read_phy_reg(hw, IGP02E1000_PHY_POWER_MGMT, &phy_tmp);
phy_tmp &= ~IGP02E1000_PM_SPD;
e1000_write_phy_reg(hw, IGP02E1000_PHY_POWER_MGMT, phy_tmp);
}
/*
* Packet Buffer Allocation (PBA)
* Writing PBA sets the receive portion of the buffer;
* the remainder is used for the transmit buffer.
*/
switch (hw->mac.type) {
/* Total Packet Buffer on these is 48K */
case e1000_82571:
case e1000_82572:
case e1000_80003es2lan:
pba = E1000_PBA_32K; /* 32K for Rx, 16K for Tx */
break;
case e1000_82573: /* 82573: Total Packet Buffer is 32K */
pba = E1000_PBA_12K; /* 12K for Rx, 20K for Tx */
break;
case e1000_82574:
case e1000_82583:
pba = E1000_PBA_20K; /* 20K for Rx, 20K for Tx */
break;
case e1000_ich8lan:
pba = E1000_PBA_8K;
break;
case e1000_ich9lan:
case e1000_ich10lan:
/* Boost Receive side for jumbo frames */
if (adapter->hw.mac.max_frame_size > 4096)
pba = E1000_PBA_14K;
else
pba = E1000_PBA_10K;
break;
case e1000_pchlan:
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
case e1000_pch_cnp:
pba = E1000_PBA_26K;
break;
case e1000_82575:
pba = E1000_PBA_32K;
break;
case e1000_82576:
case e1000_vfadapt:
pba = E1000_READ_REG(hw, E1000_RXPBS);
pba &= E1000_RXPBS_SIZE_MASK_82576;
break;
case e1000_82580:
case e1000_i350:
case e1000_i354:
case e1000_vfadapt_i350:
pba = E1000_READ_REG(hw, E1000_RXPBS);
pba = e1000_rxpbs_adjust_82580(pba);
break;
case e1000_i210:
case e1000_i211:
pba = E1000_PBA_34K;
break;
default:
if (adapter->hw.mac.max_frame_size > 8192)
pba = E1000_PBA_40K; /* 40K for Rx, 24K for Tx */
else
pba = E1000_PBA_48K; /* 48K for Rx, 16K for Tx */
}
/* Special needs in case of Jumbo frames */
if ((hw->mac.type == e1000_82575) && (ifp->if_mtu > ETHERMTU)) {
u32 tx_space, min_tx, min_rx;
pba = E1000_READ_REG(hw, E1000_PBA);
tx_space = pba >> 16;
pba &= 0xffff;
min_tx = (adapter->hw.mac.max_frame_size +
sizeof(struct e1000_tx_desc) - ETHERNET_FCS_SIZE) * 2;
min_tx = roundup2(min_tx, 1024);
min_tx >>= 10;
min_rx = adapter->hw.mac.max_frame_size;
min_rx = roundup2(min_rx, 1024);
min_rx >>= 10;
if (tx_space < min_tx &&
((min_tx - tx_space) < pba)) {
pba = pba - (min_tx - tx_space);
/*
* if short on rx space, rx wins
* and must trump tx adjustment
*/
if (pba < min_rx)
pba = min_rx;
}
E1000_WRITE_REG(hw, E1000_PBA, pba);
}
if (hw->mac.type < igb_mac_min)
E1000_WRITE_REG(&adapter->hw, E1000_PBA, pba);
INIT_DEBUGOUT1("em_reset: pba=%dK",pba);
/*
* These parameters control the automatic generation (Tx) and
* response (Rx) to Ethernet PAUSE frames.
* - High water mark should allow for at least two frames to be
* received after sending an XOFF.
* - Low water mark works best when it is very near the high water mark.
* This allows the receiver to restart by sending XON when it has
* drained a bit. Here we use an arbitrary value of 1500 which will
* restart after one full frame is pulled from the buffer. There
* could be several smaller frames in the buffer and if so they will
* not trigger the XON until their total number reduces the buffer
* by 1500.
* - The pause time is fairly large at 1000 x 512ns = 512 usec.
*/
rx_buffer_size = (pba & 0xffff) << 10;
hw->fc.high_water = rx_buffer_size -
roundup2(adapter->hw.mac.max_frame_size, 1024);
hw->fc.low_water = hw->fc.high_water - 1500;
if (adapter->fc) /* locally set flow control value? */
hw->fc.requested_mode = adapter->fc;
else
hw->fc.requested_mode = e1000_fc_full;
if (hw->mac.type == e1000_80003es2lan)
hw->fc.pause_time = 0xFFFF;
else
hw->fc.pause_time = EM_FC_PAUSE_TIME;
hw->fc.send_xon = TRUE;
/* Device specific overrides/settings */
switch (hw->mac.type) {
case e1000_pchlan:
/* Workaround: no TX flow ctrl for PCH */
hw->fc.requested_mode = e1000_fc_rx_pause;
hw->fc.pause_time = 0xFFFF; /* override */
if (if_getmtu(ifp) > ETHERMTU) {
hw->fc.high_water = 0x3500;
hw->fc.low_water = 0x1500;
} else {
hw->fc.high_water = 0x5000;
hw->fc.low_water = 0x3000;
}
hw->fc.refresh_time = 0x1000;
break;
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
case e1000_pch_cnp:
hw->fc.high_water = 0x5C20;
hw->fc.low_water = 0x5048;
hw->fc.pause_time = 0x0650;
hw->fc.refresh_time = 0x0400;
/* Jumbos need adjusted PBA */
if (if_getmtu(ifp) > ETHERMTU)
E1000_WRITE_REG(hw, E1000_PBA, 12);
else
E1000_WRITE_REG(hw, E1000_PBA, 26);
break;
case e1000_82575:
case e1000_82576:
/* 8-byte granularity */
hw->fc.low_water = hw->fc.high_water - 8;
break;
case e1000_82580:
case e1000_i350:
case e1000_i354:
case e1000_i210:
case e1000_i211:
case e1000_vfadapt:
case e1000_vfadapt_i350:
/* 16-byte granularity */
hw->fc.low_water = hw->fc.high_water - 16;
break;
case e1000_ich9lan:
case e1000_ich10lan:
if (if_getmtu(ifp) > ETHERMTU) {
hw->fc.high_water = 0x2800;
hw->fc.low_water = hw->fc.high_water - 8;
break;
}
/* FALLTHROUGH */
default:
if (hw->mac.type == e1000_80003es2lan)
hw->fc.pause_time = 0xFFFF;
break;
}
/* Issue a global reset */
e1000_reset_hw(hw);
if (adapter->hw.mac.type >= igb_mac_min) {
E1000_WRITE_REG(hw, E1000_WUC, 0);
} else {
E1000_WRITE_REG(hw, E1000_WUFC, 0);
em_disable_aspm(adapter);
}
if (adapter->flags & IGB_MEDIA_RESET) {
e1000_setup_init_funcs(hw, TRUE);
e1000_get_bus_info(hw);
adapter->flags &= ~IGB_MEDIA_RESET;
}
/* and a re-init */
if (e1000_init_hw(hw) < 0) {
device_printf(dev, "Hardware Initialization Failed\n");
return;
}
if (adapter->hw.mac.type >= igb_mac_min)
igb_init_dmac(adapter, pba);
E1000_WRITE_REG(hw, E1000_VET, ETHERTYPE_VLAN);
e1000_get_phy_info(hw);
e1000_check_for_link(hw);
}
+/*
+ * Initialise the RSS mapping for NICs that support multiple transmit/
+ * receive rings.
+ */
+
#define RSSKEYLEN 10
static void
em_initialize_rss_mapping(struct adapter *adapter)
{
uint8_t rss_key[4 * RSSKEYLEN];
uint32_t reta = 0;
struct e1000_hw *hw = &adapter->hw;
int i;
/*
* Configure RSS key
*/
arc4rand(rss_key, sizeof(rss_key), 0);
for (i = 0; i < RSSKEYLEN; ++i) {
uint32_t rssrk = 0;
rssrk = EM_RSSRK_VAL(rss_key, i);
E1000_WRITE_REG(hw, E1000_RSSRK(i), rssrk);
}
/*
* Configure RSS redirect table in following fashion:
* (hash & ring_cnt_mask) == rdr_table[(hash & rdr_table_mask)]
*/
for (i = 0; i < sizeof(reta); ++i) {
uint32_t q;
q = (i % adapter->rx_num_queues) << 7;
reta |= q << (8 * i);
}
for (i = 0; i < 32; ++i)
E1000_WRITE_REG(hw, E1000_RETA(i), reta);
E1000_WRITE_REG(hw, E1000_MRQC, E1000_MRQC_RSS_ENABLE_2Q |
E1000_MRQC_RSS_FIELD_IPV4_TCP |
E1000_MRQC_RSS_FIELD_IPV4 |
E1000_MRQC_RSS_FIELD_IPV6_TCP_EX |
E1000_MRQC_RSS_FIELD_IPV6_EX |
E1000_MRQC_RSS_FIELD_IPV6);
-
}
static void
igb_initialize_rss_mapping(struct adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
int i;
int queue_id;
u32 reta;
u32 rss_key[10], mrqc, shift = 0;
/* XXX? */
if (adapter->hw.mac.type == e1000_82575)
shift = 6;
/*
* The redirection table controls which destination
* queue each bucket redirects traffic to.
* Each DWORD represents four queues, with the LSB
* being the first queue in the DWORD.
*
* This just allocates buckets to queues using round-robin
* allocation.
*
* NOTE: It Just Happens to line up with the default
* RSS allocation method.
*/
/* Warning FM follows */
reta = 0;
for (i = 0; i < 128; i++) {
#ifdef RSS
queue_id = rss_get_indirection_to_bucket(i);
/*
* If we have more queues than buckets, we'll
* end up mapping buckets to a subset of the
* queues.
*
* If we have more buckets than queues, we'll
* end up instead assigning multiple buckets
* to queues.
*
* Both are suboptimal, but we need to handle
* the case so we don't go out of bounds
* indexing arrays and such.
*/
queue_id = queue_id % adapter->rx_num_queues;
#else
queue_id = (i % adapter->rx_num_queues);
#endif
/* Adjust if required */
queue_id = queue_id << shift;
/*
* The low 8 bits are for hash value (n+0);
* The next 8 bits are for hash value (n+1), etc.
*/
reta = reta >> 8;
reta = reta | ( ((uint32_t) queue_id) << 24);
if ((i & 3) == 3) {
E1000_WRITE_REG(hw, E1000_RETA(i >> 2), reta);
reta = 0;
}
}
/* Now fill in hash table */
/*
* MRQC: Multiple Receive Queues Command
* Set queuing to RSS control, number depends on the device.
*/
mrqc = E1000_MRQC_ENABLE_RSS_8Q;
#ifdef RSS
/* XXX ew typecasting */
rss_getkey((uint8_t *) &rss_key);
#else
arc4rand(&rss_key, sizeof(rss_key), 0);
#endif
for (i = 0; i < 10; i++)
E1000_WRITE_REG_ARRAY(hw, E1000_RSSRK(0), i, rss_key[i]);
/*
* Configure the RSS fields to hash upon.
*/
mrqc |= (E1000_MRQC_RSS_FIELD_IPV4 |
E1000_MRQC_RSS_FIELD_IPV4_TCP);
mrqc |= (E1000_MRQC_RSS_FIELD_IPV6 |
E1000_MRQC_RSS_FIELD_IPV6_TCP);
mrqc |= (E1000_MRQC_RSS_FIELD_IPV4_UDP |
E1000_MRQC_RSS_FIELD_IPV6_UDP);
mrqc |= (E1000_MRQC_RSS_FIELD_IPV6_UDP_EX |
E1000_MRQC_RSS_FIELD_IPV6_TCP_EX);
E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
}
/*********************************************************************
*
- * Setup networking device structure and register an interface.
+ * Setup networking device structure and register interface media.
*
**********************************************************************/
static int
em_setup_interface(if_ctx_t ctx)
{
struct ifnet *ifp = iflib_get_ifp(ctx);
struct adapter *adapter = iflib_get_softc(ctx);
if_softc_ctx_t scctx = adapter->shared;
INIT_DEBUGOUT("em_setup_interface: begin");
/* Single Queue */
if (adapter->tx_num_queues == 1) {
if_setsendqlen(ifp, scctx->isc_ntxd[0] - 1);
if_setsendqready(ifp);
}
/*
* Specify the media types supported by this adapter and register
* callbacks to update media and link information
*/
if ((adapter->hw.phy.media_type == e1000_media_type_fiber) ||
(adapter->hw.phy.media_type == e1000_media_type_internal_serdes)) {
u_char fiber_type = IFM_1000_SX; /* default type */
if (adapter->hw.mac.type == e1000_82545)
fiber_type = IFM_1000_LX;
ifmedia_add(adapter->media, IFM_ETHER | fiber_type | IFM_FDX, 0, NULL);
ifmedia_add(adapter->media, IFM_ETHER | fiber_type, 0, NULL);
} else {
ifmedia_add(adapter->media, IFM_ETHER | IFM_10_T, 0, NULL);
ifmedia_add(adapter->media, IFM_ETHER | IFM_10_T | IFM_FDX, 0, NULL);
ifmedia_add(adapter->media, IFM_ETHER | IFM_100_TX, 0, NULL);
ifmedia_add(adapter->media, IFM_ETHER | IFM_100_TX | IFM_FDX, 0, NULL);
if (adapter->hw.phy.type != e1000_phy_ife) {
ifmedia_add(adapter->media, IFM_ETHER | IFM_1000_T | IFM_FDX, 0, NULL);
ifmedia_add(adapter->media, IFM_ETHER | IFM_1000_T, 0, NULL);
}
}
ifmedia_add(adapter->media, IFM_ETHER | IFM_AUTO, 0, NULL);
ifmedia_set(adapter->media, IFM_ETHER | IFM_AUTO);
return (0);
}
static int
em_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxqs, int ntxqsets)
{
struct adapter *adapter = iflib_get_softc(ctx);
if_softc_ctx_t scctx = adapter->shared;
int error = E1000_SUCCESS;
struct em_tx_queue *que;
int i, j;
MPASS(adapter->tx_num_queues > 0);
MPASS(adapter->tx_num_queues == ntxqsets);
/* First allocate the top level queue structs */
if (!(adapter->tx_queues =
(struct em_tx_queue *) malloc(sizeof(struct em_tx_queue) *
adapter->tx_num_queues, M_DEVBUF, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx), "Unable to allocate queue memory\n");
return(ENOMEM);
}
for (i = 0, que = adapter->tx_queues; i < adapter->tx_num_queues; i++, que++) {
/* Set up some basics */
struct tx_ring *txr = &que->txr;
txr->adapter = que->adapter = adapter;
que->me = txr->me = i;
/* Allocate report status array */
if (!(txr->tx_rsq = (qidx_t *) malloc(sizeof(qidx_t) * scctx->isc_ntxd[0], M_DEVBUF, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx), "failed to allocate rs_idxs memory\n");
error = ENOMEM;
goto fail;
}
for (j = 0; j < scctx->isc_ntxd[0]; j++)
txr->tx_rsq[j] = QIDX_INVALID;
/* get the virtual and physical address of the hardware queues */
txr->tx_base = (struct e1000_tx_desc *)vaddrs[i*ntxqs];
txr->tx_paddr = paddrs[i*ntxqs];
}
if (bootverbose)
device_printf(iflib_get_dev(ctx),
"allocated for %d tx_queues\n", adapter->tx_num_queues);
return (0);
fail:
em_if_queues_free(ctx);
return (error);
}
static int
em_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nrxqs, int nrxqsets)
{
struct adapter *adapter = iflib_get_softc(ctx);
int error = E1000_SUCCESS;
struct em_rx_queue *que;
int i;
MPASS(adapter->rx_num_queues > 0);
MPASS(adapter->rx_num_queues == nrxqsets);
/* First allocate the top level queue structs */
if (!(adapter->rx_queues =
(struct em_rx_queue *) malloc(sizeof(struct em_rx_queue) *
adapter->rx_num_queues, M_DEVBUF, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx), "Unable to allocate queue memory\n");
error = ENOMEM;
goto fail;
}
for (i = 0, que = adapter->rx_queues; i < nrxqsets; i++, que++) {
/* Set up some basics */
struct rx_ring *rxr = &que->rxr;
rxr->adapter = que->adapter = adapter;
rxr->que = que;
que->me = rxr->me = i;
/* get the virtual and physical address of the hardware queues */
rxr->rx_base = (union e1000_rx_desc_extended *)vaddrs[i*nrxqs];
rxr->rx_paddr = paddrs[i*nrxqs];
}
if (bootverbose)
device_printf(iflib_get_dev(ctx),
"allocated for %d rx_queues\n", adapter->rx_num_queues);
return (0);
fail:
em_if_queues_free(ctx);
return (error);
}
static void
em_if_queues_free(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct em_tx_queue *tx_que = adapter->tx_queues;
struct em_rx_queue *rx_que = adapter->rx_queues;
if (tx_que != NULL) {
for (int i = 0; i < adapter->tx_num_queues; i++, tx_que++) {
struct tx_ring *txr = &tx_que->txr;
if (txr->tx_rsq == NULL)
break;
free(txr->tx_rsq, M_DEVBUF);
txr->tx_rsq = NULL;
}
free(adapter->tx_queues, M_DEVBUF);
adapter->tx_queues = NULL;
}
if (rx_que != NULL) {
free(adapter->rx_queues, M_DEVBUF);
adapter->rx_queues = NULL;
}
em_release_hw_control(adapter);
if (adapter->mta != NULL) {
free(adapter->mta, M_DEVBUF);
}
}
/*********************************************************************
*
* Enable transmit unit.
*
**********************************************************************/
static void
em_initialize_transmit_unit(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
if_softc_ctx_t scctx = adapter->shared;
struct em_tx_queue *que;
struct tx_ring *txr;
struct e1000_hw *hw = &adapter->hw;
u32 tctl, txdctl = 0, tarc, tipg = 0;
INIT_DEBUGOUT("em_initialize_transmit_unit: begin");
for (int i = 0; i < adapter->tx_num_queues; i++, txr++) {
u64 bus_addr;
caddr_t offp, endp;
que = &adapter->tx_queues[i];
txr = &que->txr;
bus_addr = txr->tx_paddr;
/* Clear checksum offload context. */
offp = (caddr_t)&txr->csum_flags;
endp = (caddr_t)(txr + 1);
bzero(offp, endp - offp);
/* Base and Len of TX Ring */
E1000_WRITE_REG(hw, E1000_TDLEN(i),
scctx->isc_ntxd[0] * sizeof(struct e1000_tx_desc));
E1000_WRITE_REG(hw, E1000_TDBAH(i),
(u32)(bus_addr >> 32));
E1000_WRITE_REG(hw, E1000_TDBAL(i),
(u32)bus_addr);
/* Init the HEAD/TAIL indices */
E1000_WRITE_REG(hw, E1000_TDT(i), 0);
E1000_WRITE_REG(hw, E1000_TDH(i), 0);
HW_DEBUGOUT2("Base = %x, Length = %x\n",
E1000_READ_REG(&adapter->hw, E1000_TDBAL(i)),
E1000_READ_REG(&adapter->hw, E1000_TDLEN(i)));
txdctl = 0; /* clear txdctl */
txdctl |= 0x1f; /* PTHRESH */
txdctl |= 1 << 8; /* HTHRESH */
txdctl |= 1 << 16;/* WTHRESH */
txdctl |= 1 << 22; /* Reserved bit 22 must always be 1 */
txdctl |= E1000_TXDCTL_GRAN;
txdctl |= 1 << 25; /* LWTHRESH */
E1000_WRITE_REG(hw, E1000_TXDCTL(i), txdctl);
}
/* Set the default values for the Tx Inter Packet Gap timer */
switch (adapter->hw.mac.type) {
case e1000_80003es2lan:
tipg = DEFAULT_82543_TIPG_IPGR1;
tipg |= DEFAULT_80003ES2LAN_TIPG_IPGR2 <<
E1000_TIPG_IPGR2_SHIFT;
break;
case e1000_82542:
tipg = DEFAULT_82542_TIPG_IPGT;
tipg |= DEFAULT_82542_TIPG_IPGR1 << E1000_TIPG_IPGR1_SHIFT;
tipg |= DEFAULT_82542_TIPG_IPGR2 << E1000_TIPG_IPGR2_SHIFT;
break;
default:
if ((adapter->hw.phy.media_type == e1000_media_type_fiber) ||
(adapter->hw.phy.media_type ==
e1000_media_type_internal_serdes))
tipg = DEFAULT_82543_TIPG_IPGT_FIBER;
else
tipg = DEFAULT_82543_TIPG_IPGT_COPPER;
tipg |= DEFAULT_82543_TIPG_IPGR1 << E1000_TIPG_IPGR1_SHIFT;
tipg |= DEFAULT_82543_TIPG_IPGR2 << E1000_TIPG_IPGR2_SHIFT;
}
E1000_WRITE_REG(&adapter->hw, E1000_TIPG, tipg);
E1000_WRITE_REG(&adapter->hw, E1000_TIDV, adapter->tx_int_delay.value);
if(adapter->hw.mac.type >= e1000_82540)
E1000_WRITE_REG(&adapter->hw, E1000_TADV,
adapter->tx_abs_int_delay.value);
if ((adapter->hw.mac.type == e1000_82571) ||
(adapter->hw.mac.type == e1000_82572)) {
tarc = E1000_READ_REG(&adapter->hw, E1000_TARC(0));
tarc |= TARC_SPEED_MODE_BIT;
E1000_WRITE_REG(&adapter->hw, E1000_TARC(0), tarc);
} else if (adapter->hw.mac.type == e1000_80003es2lan) {
/* errata: program both queues to unweighted RR */
tarc = E1000_READ_REG(&adapter->hw, E1000_TARC(0));
tarc |= 1;
E1000_WRITE_REG(&adapter->hw, E1000_TARC(0), tarc);
tarc = E1000_READ_REG(&adapter->hw, E1000_TARC(1));
tarc |= 1;
E1000_WRITE_REG(&adapter->hw, E1000_TARC(1), tarc);
} else if (adapter->hw.mac.type == e1000_82574) {
tarc = E1000_READ_REG(&adapter->hw, E1000_TARC(0));
tarc |= TARC_ERRATA_BIT;
if ( adapter->tx_num_queues > 1) {
tarc |= (TARC_COMPENSATION_MODE | TARC_MQ_FIX);
E1000_WRITE_REG(&adapter->hw, E1000_TARC(0), tarc);
E1000_WRITE_REG(&adapter->hw, E1000_TARC(1), tarc);
} else
E1000_WRITE_REG(&adapter->hw, E1000_TARC(0), tarc);
}
if (adapter->tx_int_delay.value > 0)
adapter->txd_cmd |= E1000_TXD_CMD_IDE;
/* Program the Transmit Control Register */
tctl = E1000_READ_REG(&adapter->hw, E1000_TCTL);
tctl &= ~E1000_TCTL_CT;
tctl |= (E1000_TCTL_PSP | E1000_TCTL_RTLC | E1000_TCTL_EN |
(E1000_COLLISION_THRESHOLD << E1000_CT_SHIFT));
if (adapter->hw.mac.type >= e1000_82571)
tctl |= E1000_TCTL_MULR;
/* This write will effectively turn on the transmit unit. */
E1000_WRITE_REG(&adapter->hw, E1000_TCTL, tctl);
/* SPT and KBL errata workarounds */
if (hw->mac.type == e1000_pch_spt) {
u32 reg;
reg = E1000_READ_REG(hw, E1000_IOSFPC);
reg |= E1000_RCTL_RDMTS_HEX;
E1000_WRITE_REG(hw, E1000_IOSFPC, reg);
/* i218-i219 Specification Update 1.5.4.5 */
reg = E1000_READ_REG(hw, E1000_TARC(0));
reg &= ~E1000_TARC0_CB_MULTIQ_3_REQ;
reg |= E1000_TARC0_CB_MULTIQ_2_REQ;
E1000_WRITE_REG(hw, E1000_TARC(0), reg);
}
}
/*********************************************************************
*
* Enable receive unit.
*
**********************************************************************/
static void
em_initialize_receive_unit(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
if_softc_ctx_t scctx = adapter->shared;
struct ifnet *ifp = iflib_get_ifp(ctx);
struct e1000_hw *hw = &adapter->hw;
struct em_rx_queue *que;
int i;
u32 rctl, rxcsum, rfctl;
INIT_DEBUGOUT("em_initialize_receive_units: begin");
/*
* Make sure receives are disabled while setting
* up the descriptor ring
*/
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* Do not disable if ever enabled on this hardware */
if ((hw->mac.type != e1000_82574) && (hw->mac.type != e1000_82583))
E1000_WRITE_REG(hw, E1000_RCTL, rctl & ~E1000_RCTL_EN);
/* Setup the Receive Control Register */
rctl &= ~(3 << E1000_RCTL_MO_SHIFT);
rctl |= E1000_RCTL_EN | E1000_RCTL_BAM |
E1000_RCTL_LBM_NO | E1000_RCTL_RDMTS_HALF |
(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
/* Do not store bad packets */
rctl &= ~E1000_RCTL_SBP;
/* Enable Long Packet receive */
if (if_getmtu(ifp) > ETHERMTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
/* Strip the CRC */
if (!em_disable_crc_stripping)
rctl |= E1000_RCTL_SECRC;
if (adapter->hw.mac.type >= e1000_82540) {
E1000_WRITE_REG(&adapter->hw, E1000_RADV,
adapter->rx_abs_int_delay.value);
/*
* Set the interrupt throttling rate. Value is calculated
* as DEFAULT_ITR = 1/(MAX_INTS_PER_SEC * 256ns)
*/
E1000_WRITE_REG(hw, E1000_ITR, DEFAULT_ITR);
}
E1000_WRITE_REG(&adapter->hw, E1000_RDTR,
adapter->rx_int_delay.value);
/* Use extended rx descriptor formats */
rfctl = E1000_READ_REG(hw, E1000_RFCTL);
rfctl |= E1000_RFCTL_EXTEN;
/*
* When using MSI-X interrupts we need to throttle
* using the EITR register (82574 only)
*/
if (hw->mac.type == e1000_82574) {
for (int i = 0; i < 4; i++)
E1000_WRITE_REG(hw, E1000_EITR_82574(i),
DEFAULT_ITR);
/* Disable accelerated acknowledge */
rfctl |= E1000_RFCTL_ACK_DIS;
}
E1000_WRITE_REG(hw, E1000_RFCTL, rfctl);
rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
if (if_getcapenable(ifp) & IFCAP_RXCSUM &&
adapter->hw.mac.type >= e1000_82543) {
if (adapter->tx_num_queues > 1) {
if (adapter->hw.mac.type >= igb_mac_min) {
rxcsum |= E1000_RXCSUM_PCSD;
if (hw->mac.type != e1000_82575)
rxcsum |= E1000_RXCSUM_CRCOFL;
} else
rxcsum |= E1000_RXCSUM_TUOFL |
E1000_RXCSUM_IPOFL |
E1000_RXCSUM_PCSD;
} else {
if (adapter->hw.mac.type >= igb_mac_min)
rxcsum |= E1000_RXCSUM_IPPCSE;
else
rxcsum |= E1000_RXCSUM_TUOFL | E1000_RXCSUM_IPOFL;
if (adapter->hw.mac.type > e1000_82575)
rxcsum |= E1000_RXCSUM_CRCOFL;
}
} else
rxcsum &= ~E1000_RXCSUM_TUOFL;
E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
if (adapter->rx_num_queues > 1) {
if (adapter->hw.mac.type >= igb_mac_min)
igb_initialize_rss_mapping(adapter);
else
em_initialize_rss_mapping(adapter);
}
/*
* XXX TEMPORARY WORKAROUND: on some systems with 82573
* long latencies are observed, like Lenovo X60. This
* change eliminates the problem, but since having positive
* values in RDTR is a known source of problems on other
* platforms another solution is being sought.
*/
if (hw->mac.type == e1000_82573)
E1000_WRITE_REG(hw, E1000_RDTR, 0x20);
for (i = 0, que = adapter->rx_queues; i < adapter->rx_num_queues; i++, que++) {
struct rx_ring *rxr = &que->rxr;
/* Setup the Base and Length of the Rx Descriptor Ring */
u64 bus_addr = rxr->rx_paddr;
#if 0
u32 rdt = adapter->rx_num_queues -1; /* default */
#endif
E1000_WRITE_REG(hw, E1000_RDLEN(i),
scctx->isc_nrxd[0] * sizeof(union e1000_rx_desc_extended));
E1000_WRITE_REG(hw, E1000_RDBAH(i), (u32)(bus_addr >> 32));
E1000_WRITE_REG(hw, E1000_RDBAL(i), (u32)bus_addr);
/* Setup the Head and Tail Descriptor Pointers */
E1000_WRITE_REG(hw, E1000_RDH(i), 0);
E1000_WRITE_REG(hw, E1000_RDT(i), 0);
}
/*
* Set PTHRESH for improved jumbo performance
* According to 10.2.5.11 of Intel 82574 Datasheet,
* RXDCTL(1) is written whenever RXDCTL(0) is written.
* Only write to RXDCTL(1) if there is a need for different
* settings.
*/
if (((adapter->hw.mac.type == e1000_ich9lan) ||
(adapter->hw.mac.type == e1000_pch2lan) ||
(adapter->hw.mac.type == e1000_ich10lan)) &&
(if_getmtu(ifp) > ETHERMTU)) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
} else if (adapter->hw.mac.type == e1000_82574) {
for (int i = 0; i < adapter->rx_num_queues; i++) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(i));
rxdctl |= 0x20; /* PTHRESH */
rxdctl |= 4 << 8; /* HTHRESH */
rxdctl |= 4 << 16;/* WTHRESH */
rxdctl |= 1 << 24; /* Switch to granularity */
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
}
} else if (adapter->hw.mac.type >= igb_mac_min) {
u32 psize, srrctl = 0;
if (if_getmtu(ifp) > ETHERMTU) {
/* Set maximum packet len */
if (adapter->rx_mbuf_sz <= 4096) {
srrctl |= 4096 >> E1000_SRRCTL_BSIZEPKT_SHIFT;
rctl |= E1000_RCTL_SZ_4096 | E1000_RCTL_BSEX;
} else if (adapter->rx_mbuf_sz > 4096) {
srrctl |= 8192 >> E1000_SRRCTL_BSIZEPKT_SHIFT;
rctl |= E1000_RCTL_SZ_8192 | E1000_RCTL_BSEX;
}
psize = scctx->isc_max_frame_size;
/* are we on a vlan? */
if (ifp->if_vlantrunk != NULL)
psize += VLAN_TAG_SIZE;
E1000_WRITE_REG(&adapter->hw, E1000_RLPML, psize);
} else {
srrctl |= 2048 >> E1000_SRRCTL_BSIZEPKT_SHIFT;
rctl |= E1000_RCTL_SZ_2048;
}
/*
* If TX flow control is disabled and there's >1 queue defined,
* enable DROP.
*
* This drops frames rather than hanging the RX MAC for all queues.
*/
if ((adapter->rx_num_queues > 1) &&
(adapter->fc == e1000_fc_none ||
adapter->fc == e1000_fc_rx_pause)) {
srrctl |= E1000_SRRCTL_DROP_EN;
}
/* Setup the Base and Length of the Rx Descriptor Rings */
for (i = 0, que = adapter->rx_queues; i < adapter->rx_num_queues; i++, que++) {
struct rx_ring *rxr = &que->rxr;
u64 bus_addr = rxr->rx_paddr;
u32 rxdctl;
#ifdef notyet
/* Configure for header split? -- ignore for now */
rxr->hdr_split = igb_header_split;
#else
srrctl |= E1000_SRRCTL_DESCTYPE_ADV_ONEBUF;
#endif
E1000_WRITE_REG(hw, E1000_RDLEN(i),
scctx->isc_nrxd[0] * sizeof(struct e1000_rx_desc));
E1000_WRITE_REG(hw, E1000_RDBAH(i),
(uint32_t)(bus_addr >> 32));
E1000_WRITE_REG(hw, E1000_RDBAL(i),
(uint32_t)bus_addr);
E1000_WRITE_REG(hw, E1000_SRRCTL(i), srrctl);
/* Enable this Queue */
rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(i));
rxdctl |= E1000_RXDCTL_QUEUE_ENABLE;
rxdctl &= 0xFFF00000;
rxdctl |= IGB_RX_PTHRESH;
rxdctl |= IGB_RX_HTHRESH << 8;
rxdctl |= IGB_RX_WTHRESH << 16;
E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
}
} else if (adapter->hw.mac.type >= e1000_pch2lan) {
if (if_getmtu(ifp) > ETHERMTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
}
/* Make sure VLAN Filters are off */
rctl &= ~E1000_RCTL_VFE;
if (adapter->hw.mac.type < igb_mac_min) {
if (adapter->rx_mbuf_sz == MCLBYTES)
rctl |= E1000_RCTL_SZ_2048;
else if (adapter->rx_mbuf_sz == MJUMPAGESIZE)
rctl |= E1000_RCTL_SZ_4096 | E1000_RCTL_BSEX;
else if (adapter->rx_mbuf_sz > MJUMPAGESIZE)
rctl |= E1000_RCTL_SZ_8192 | E1000_RCTL_BSEX;
/* ensure the DTYP (descriptor type) bits are cleared to 00 here */
rctl &= ~0x00000C00;
}
/* Write out the settings */
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return;
}
static void
em_if_vlan_register(if_ctx_t ctx, u16 vtag)
{
struct adapter *adapter = iflib_get_softc(ctx);
u32 index, bit;
index = (vtag >> 5) & 0x7F;
bit = vtag & 0x1F;
adapter->shadow_vfta[index] |= (1 << bit);
++adapter->num_vlans;
}
static void
em_if_vlan_unregister(if_ctx_t ctx, u16 vtag)
{
struct adapter *adapter = iflib_get_softc(ctx);
u32 index, bit;
index = (vtag >> 5) & 0x7F;
bit = vtag & 0x1F;
adapter->shadow_vfta[index] &= ~(1 << bit);
--adapter->num_vlans;
}
static void
em_setup_vlan_hw_support(struct adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
u32 reg;
/*
* We get here through init_locked, meaning a soft
* reset; that has already cleared the VFTA and other
* state, so if no VLANs have been registered there
* is nothing to do.
*/
if (adapter->num_vlans == 0)
return;
/*
* A soft reset zeroes out the VFTA, so
* we need to repopulate it now.
*/
for (int i = 0; i < EM_VFTA_SIZE; i++)
if (adapter->shadow_vfta[i] != 0)
E1000_WRITE_REG_ARRAY(hw, E1000_VFTA,
i, adapter->shadow_vfta[i]);
reg = E1000_READ_REG(hw, E1000_CTRL);
reg |= E1000_CTRL_VME;
E1000_WRITE_REG(hw, E1000_CTRL, reg);
/* Enable the Filter Table */
reg = E1000_READ_REG(hw, E1000_RCTL);
reg &= ~E1000_RCTL_CFIEN;
reg |= E1000_RCTL_VFE;
E1000_WRITE_REG(hw, E1000_RCTL, reg);
}
static void
em_if_enable_intr(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct e1000_hw *hw = &adapter->hw;
u32 ims_mask = IMS_ENABLE_MASK;
if (hw->mac.type == e1000_82574) {
E1000_WRITE_REG(hw, EM_EIAC, EM_MSIX_MASK);
ims_mask |= adapter->ims;
} else if (adapter->intr_type == IFLIB_INTR_MSIX && hw->mac.type >= igb_mac_min) {
u32 mask = (adapter->que_mask | adapter->link_mask);
E1000_WRITE_REG(&adapter->hw, E1000_EIAC, mask);
E1000_WRITE_REG(&adapter->hw, E1000_EIAM, mask);
E1000_WRITE_REG(&adapter->hw, E1000_EIMS, mask);
ims_mask = E1000_IMS_LSC;
}
E1000_WRITE_REG(hw, E1000_IMS, ims_mask);
}
static void
em_if_disable_intr(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct e1000_hw *hw = &adapter->hw;
if (adapter->intr_type == IFLIB_INTR_MSIX) {
if (hw->mac.type >= igb_mac_min)
E1000_WRITE_REG(&adapter->hw, E1000_EIMC, ~0);
E1000_WRITE_REG(&adapter->hw, E1000_EIAC, 0);
}
E1000_WRITE_REG(&adapter->hw, E1000_IMC, 0xffffffff);
}
/*
* Bit of a misnomer: what this really means is
* to enable OS management of the system, i.e. to
* disable the hardware's special management features.
*/
static void
em_init_manageability(struct adapter *adapter)
{
/* A shared code workaround */
#define E1000_82542_MANC2H E1000_MANC2H
if (adapter->has_manage) {
int manc2h = E1000_READ_REG(&adapter->hw, E1000_MANC2H);
int manc = E1000_READ_REG(&adapter->hw, E1000_MANC);
/* disable hardware interception of ARP */
manc &= ~(E1000_MANC_ARP_EN);
/* enable receiving management packets to the host */
manc |= E1000_MANC_EN_MNG2HOST;
#define E1000_MNG2HOST_PORT_623 (1 << 5)
#define E1000_MNG2HOST_PORT_664 (1 << 6)
manc2h |= E1000_MNG2HOST_PORT_623;
manc2h |= E1000_MNG2HOST_PORT_664;
E1000_WRITE_REG(&adapter->hw, E1000_MANC2H, manc2h);
E1000_WRITE_REG(&adapter->hw, E1000_MANC, manc);
}
}
/*
* Give control back to hardware management
* controller if there is one.
*/
static void
em_release_manageability(struct adapter *adapter)
{
if (adapter->has_manage) {
int manc = E1000_READ_REG(&adapter->hw, E1000_MANC);
/* re-enable hardware interception of ARP */
manc |= E1000_MANC_ARP_EN;
manc &= ~E1000_MANC_EN_MNG2HOST;
E1000_WRITE_REG(&adapter->hw, E1000_MANC, manc);
}
}
/*
* em_get_hw_control sets the {CTRL_EXT|FWSM}:DRV_LOAD bit.
* For ASF and Pass Through versions of f/w this means
* that the driver is loaded. For AMT version type f/w
* this means that the network i/f is open.
*/
static void
em_get_hw_control(struct adapter *adapter)
{
u32 ctrl_ext, swsm;
if (adapter->vf_ifp)
return;
if (adapter->hw.mac.type == e1000_82573) {
swsm = E1000_READ_REG(&adapter->hw, E1000_SWSM);
E1000_WRITE_REG(&adapter->hw, E1000_SWSM,
swsm | E1000_SWSM_DRV_LOAD);
return;
}
/* else */
ctrl_ext = E1000_READ_REG(&adapter->hw, E1000_CTRL_EXT);
E1000_WRITE_REG(&adapter->hw, E1000_CTRL_EXT,
ctrl_ext | E1000_CTRL_EXT_DRV_LOAD);
}
/*
* em_release_hw_control resets {CTRL_EXT|FWSM}:DRV_LOAD bit.
* For ASF and Pass Through versions of f/w this means that
* the driver is no longer loaded. For AMT versions of the
* f/w this means that the network i/f is closed.
*/
static void
em_release_hw_control(struct adapter *adapter)
{
u32 ctrl_ext, swsm;
if (!adapter->has_manage)
return;
if (adapter->hw.mac.type == e1000_82573) {
swsm = E1000_READ_REG(&adapter->hw, E1000_SWSM);
E1000_WRITE_REG(&adapter->hw, E1000_SWSM,
swsm & ~E1000_SWSM_DRV_LOAD);
return;
}
/* else */
ctrl_ext = E1000_READ_REG(&adapter->hw, E1000_CTRL_EXT);
E1000_WRITE_REG(&adapter->hw, E1000_CTRL_EXT,
ctrl_ext & ~E1000_CTRL_EXT_DRV_LOAD);
return;
}
static int
em_is_valid_ether_addr(u8 *addr)
{
char zero_addr[6] = { 0, 0, 0, 0, 0, 0 };
if ((addr[0] & 1) || (!bcmp(addr, zero_addr, ETHER_ADDR_LEN))) {
return (FALSE);
}
return (TRUE);
}
/*
** Parse the interface capabilities with regard
** to both system management and wake-on-lan for
** later use.
*/
static void
em_get_wakeup(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
u16 eeprom_data = 0, device_id, apme_mask;
adapter->has_manage = e1000_enable_mng_pass_thru(&adapter->hw);
apme_mask = EM_EEPROM_APME;
switch (adapter->hw.mac.type) {
case e1000_82542:
case e1000_82543:
break;
case e1000_82544:
e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL2_REG, 1, &eeprom_data);
apme_mask = EM_82544_APME;
break;
case e1000_82546:
case e1000_82546_rev_3:
if (adapter->hw.bus.func == 1) {
e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL3_PORT_B, 1, &eeprom_data);
break;
} else
e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL3_PORT_A, 1, &eeprom_data);
break;
case e1000_82573:
case e1000_82583:
adapter->has_amt = TRUE;
/* FALLTHROUGH */
case e1000_82571:
case e1000_82572:
case e1000_80003es2lan:
if (adapter->hw.bus.func == 1) {
e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL3_PORT_B, 1, &eeprom_data);
break;
} else
e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL3_PORT_A, 1, &eeprom_data);
break;
case e1000_ich8lan:
case e1000_ich9lan:
case e1000_ich10lan:
case e1000_pchlan:
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
case e1000_82575: /* listing all igb devices */
case e1000_82576:
case e1000_82580:
case e1000_i350:
case e1000_i354:
case e1000_i210:
case e1000_i211:
case e1000_vfadapt:
case e1000_vfadapt_i350:
apme_mask = E1000_WUC_APME;
adapter->has_amt = TRUE;
eeprom_data = E1000_READ_REG(&adapter->hw, E1000_WUC);
break;
default:
e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL3_PORT_A, 1, &eeprom_data);
break;
}
if (eeprom_data & apme_mask)
adapter->wol = (E1000_WUFC_MAG | E1000_WUFC_MC);
/*
* We have the eeprom settings, now apply the special cases
* where the eeprom may be wrong or the board won't support
* wake on lan on a particular port
*/
device_id = pci_get_device(dev);
switch (device_id) {
case E1000_DEV_ID_82546GB_PCIE:
adapter->wol = 0;
break;
case E1000_DEV_ID_82546EB_FIBER:
case E1000_DEV_ID_82546GB_FIBER:
/* Wake events only supported on port A for dual fiber
* regardless of eeprom setting */
if (E1000_READ_REG(&adapter->hw, E1000_STATUS) &
E1000_STATUS_FUNC_1)
adapter->wol = 0;
break;
case E1000_DEV_ID_82546GB_QUAD_COPPER_KSP3:
/* if quad port adapter, disable WoL on all but port A */
if (global_quad_port_a != 0)
adapter->wol = 0;
/* Reset for multiple quad port adapters */
if (++global_quad_port_a == 4)
global_quad_port_a = 0;
break;
case E1000_DEV_ID_82571EB_FIBER:
/* Wake events only supported on port A for dual fiber
* regardless of eeprom setting */
if (E1000_READ_REG(&adapter->hw, E1000_STATUS) &
E1000_STATUS_FUNC_1)
adapter->wol = 0;
break;
case E1000_DEV_ID_82571EB_QUAD_COPPER:
case E1000_DEV_ID_82571EB_QUAD_FIBER:
case E1000_DEV_ID_82571EB_QUAD_COPPER_LP:
/* if quad port adapter, disable WoL on all but port A */
if (global_quad_port_a != 0)
adapter->wol = 0;
/* Reset for multiple quad port adapters */
if (++global_quad_port_a == 4)
global_quad_port_a = 0;
break;
}
return;
}
/*
* Enable PCI Wake On Lan capability
*/
static void
em_enable_wakeup(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
device_t dev = iflib_get_dev(ctx);
if_t ifp = iflib_get_ifp(ctx);
int error = 0;
u32 pmc, ctrl, ctrl_ext, rctl;
u16 status;
if (pci_find_cap(dev, PCIY_PMG, &pmc) != 0)
return;
/*
* Determine type of Wakeup: note that wol
* is set with all bits on by default.
*/
if ((if_getcapenable(ifp) & IFCAP_WOL_MAGIC) == 0)
adapter->wol &= ~E1000_WUFC_MAG;
if ((if_getcapenable(ifp) & IFCAP_WOL_UCAST) == 0)
adapter->wol &= ~E1000_WUFC_EX;
if ((if_getcapenable(ifp) & IFCAP_WOL_MCAST) == 0)
adapter->wol &= ~E1000_WUFC_MC;
else {
rctl = E1000_READ_REG(&adapter->hw, E1000_RCTL);
rctl |= E1000_RCTL_MPE;
E1000_WRITE_REG(&adapter->hw, E1000_RCTL, rctl);
}
if (!(adapter->wol & (E1000_WUFC_EX | E1000_WUFC_MAG | E1000_WUFC_MC)))
goto pme;
/* Advertise the wakeup capability */
ctrl = E1000_READ_REG(&adapter->hw, E1000_CTRL);
ctrl |= (E1000_CTRL_SWDPIN2 | E1000_CTRL_SWDPIN3);
E1000_WRITE_REG(&adapter->hw, E1000_CTRL, ctrl);
/* Keep the laser running on Fiber adapters */
if (adapter->hw.phy.media_type == e1000_media_type_fiber ||
adapter->hw.phy.media_type == e1000_media_type_internal_serdes) {
ctrl_ext = E1000_READ_REG(&adapter->hw, E1000_CTRL_EXT);
ctrl_ext |= E1000_CTRL_EXT_SDP3_DATA;
E1000_WRITE_REG(&adapter->hw, E1000_CTRL_EXT, ctrl_ext);
}
if ((adapter->hw.mac.type == e1000_ich8lan) ||
(adapter->hw.mac.type == e1000_pchlan) ||
(adapter->hw.mac.type == e1000_ich9lan) ||
(adapter->hw.mac.type == e1000_ich10lan))
e1000_suspend_workarounds_ich8lan(&adapter->hw);
if (adapter->hw.mac.type >= e1000_pchlan) {
error = em_enable_phy_wakeup(adapter);
if (error)
goto pme;
} else {
/* Enable wakeup by the MAC */
E1000_WRITE_REG(&adapter->hw, E1000_WUC, E1000_WUC_PME_EN);
E1000_WRITE_REG(&adapter->hw, E1000_WUFC, adapter->wol);
}
if (adapter->hw.phy.type == e1000_phy_igp_3)
e1000_igp3_phy_powerdown_workaround_ich8lan(&adapter->hw);
pme:
status = pci_read_config(dev, pmc + PCIR_POWER_STATUS, 2);
status &= ~(PCIM_PSTAT_PME | PCIM_PSTAT_PMEENABLE);
if (!error && (if_getcapenable(ifp) & IFCAP_WOL))
status |= PCIM_PSTAT_PME | PCIM_PSTAT_PMEENABLE;
pci_write_config(dev, pmc + PCIR_POWER_STATUS, status, 2);
return;
}
/*
* WOL in the newer chipset interfaces (pchlan)
* requires things to be copied into the phy
*/
static int
em_enable_phy_wakeup(struct adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
u32 mreg, ret = 0;
u16 preg;
/* copy MAC RARs to PHY RARs */
e1000_copy_rx_addrs_to_phy_ich8lan(hw);
/* copy MAC MTA to PHY MTA */
for (int i = 0; i < adapter->hw.mac.mta_reg_count; i++) {
mreg = E1000_READ_REG_ARRAY(hw, E1000_MTA, i);
e1000_write_phy_reg(hw, BM_MTA(i), (u16)(mreg & 0xFFFF));
e1000_write_phy_reg(hw, BM_MTA(i) + 1,
(u16)((mreg >> 16) & 0xFFFF));
}
/* configure PHY Rx Control register */
e1000_read_phy_reg(&adapter->hw, BM_RCTL, &preg);
mreg = E1000_READ_REG(hw, E1000_RCTL);
if (mreg & E1000_RCTL_UPE)
preg |= BM_RCTL_UPE;
if (mreg & E1000_RCTL_MPE)
preg |= BM_RCTL_MPE;
preg &= ~(BM_RCTL_MO_MASK);
if (mreg & E1000_RCTL_MO_3)
preg |= (((mreg & E1000_RCTL_MO_3) >> E1000_RCTL_MO_SHIFT)
<< BM_RCTL_MO_SHIFT);
if (mreg & E1000_RCTL_BAM)
preg |= BM_RCTL_BAM;
if (mreg & E1000_RCTL_PMCF)
preg |= BM_RCTL_PMCF;
mreg = E1000_READ_REG(hw, E1000_CTRL);
if (mreg & E1000_CTRL_RFCE)
preg |= BM_RCTL_RFCE;
e1000_write_phy_reg(&adapter->hw, BM_RCTL, preg);
/* enable PHY wakeup in MAC register */
E1000_WRITE_REG(hw, E1000_WUC,
E1000_WUC_PHY_WAKE | E1000_WUC_PME_EN | E1000_WUC_APME);
E1000_WRITE_REG(hw, E1000_WUFC, adapter->wol);
/* configure and enable PHY wakeup in PHY registers */
e1000_write_phy_reg(&adapter->hw, BM_WUFC, adapter->wol);
e1000_write_phy_reg(&adapter->hw, BM_WUC, E1000_WUC_PME_EN);
/* activate PHY wakeup */
ret = hw->phy.ops.acquire(hw);
if (ret) {
printf("Could not acquire PHY\n");
return ret;
}
e1000_write_phy_reg_mdic(hw, IGP01E1000_PHY_PAGE_SELECT,
(BM_WUC_ENABLE_PAGE << IGP_PAGE_SHIFT));
ret = e1000_read_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, &preg);
if (ret) {
printf("Could not read PHY page 769\n");
goto out;
}
preg |= BM_WUC_ENABLE_BIT | BM_WUC_HOST_WU_BIT;
ret = e1000_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, preg);
if (ret)
printf("Could not set PHY Host Wakeup bit\n");
out:
hw->phy.ops.release(hw);
return ret;
}
static void
em_if_led_func(if_ctx_t ctx, int onoff)
{
struct adapter *adapter = iflib_get_softc(ctx);
if (onoff) {
e1000_setup_led(&adapter->hw);
e1000_led_on(&adapter->hw);
} else {
e1000_led_off(&adapter->hw);
e1000_cleanup_led(&adapter->hw);
}
}
/*
* Disable the L0S and L1 LINK states
*/
static void
em_disable_aspm(struct adapter *adapter)
{
int base, reg;
u16 link_cap, link_ctrl;
device_t dev = adapter->dev;
switch (adapter->hw.mac.type) {
case e1000_82573:
case e1000_82574:
case e1000_82583:
break;
default:
return;
}
if (pci_find_cap(dev, PCIY_EXPRESS, &base) != 0)
return;
reg = base + PCIER_LINK_CAP;
link_cap = pci_read_config(dev, reg, 2);
if ((link_cap & PCIEM_LINK_CAP_ASPM) == 0)
return;
reg = base + PCIER_LINK_CTL;
link_ctrl = pci_read_config(dev, reg, 2);
link_ctrl &= ~PCIEM_LINK_CTL_ASPMC;
pci_write_config(dev, reg, link_ctrl, 2);
return;
}
/**********************************************************************
*
* Update the board statistics counters.
*
**********************************************************************/
static void
em_update_stats_counters(struct adapter *adapter)
{
if (adapter->hw.phy.media_type == e1000_media_type_copper ||
(E1000_READ_REG(&adapter->hw, E1000_STATUS) & E1000_STATUS_LU)) {
adapter->stats.symerrs += E1000_READ_REG(&adapter->hw, E1000_SYMERRS);
adapter->stats.sec += E1000_READ_REG(&adapter->hw, E1000_SEC);
}
adapter->stats.crcerrs += E1000_READ_REG(&adapter->hw, E1000_CRCERRS);
adapter->stats.mpc += E1000_READ_REG(&adapter->hw, E1000_MPC);
adapter->stats.scc += E1000_READ_REG(&adapter->hw, E1000_SCC);
adapter->stats.ecol += E1000_READ_REG(&adapter->hw, E1000_ECOL);
adapter->stats.mcc += E1000_READ_REG(&adapter->hw, E1000_MCC);
adapter->stats.latecol += E1000_READ_REG(&adapter->hw, E1000_LATECOL);
adapter->stats.colc += E1000_READ_REG(&adapter->hw, E1000_COLC);
adapter->stats.dc += E1000_READ_REG(&adapter->hw, E1000_DC);
adapter->stats.rlec += E1000_READ_REG(&adapter->hw, E1000_RLEC);
adapter->stats.xonrxc += E1000_READ_REG(&adapter->hw, E1000_XONRXC);
adapter->stats.xontxc += E1000_READ_REG(&adapter->hw, E1000_XONTXC);
adapter->stats.xoffrxc += E1000_READ_REG(&adapter->hw, E1000_XOFFRXC);
/*
** For watchdog management we need to know if we have been
** paused during the last interval, so capture that here.
*/
adapter->shared->isc_pause_frames = adapter->stats.xoffrxc;
adapter->stats.xofftxc += E1000_READ_REG(&adapter->hw, E1000_XOFFTXC);
adapter->stats.fcruc += E1000_READ_REG(&adapter->hw, E1000_FCRUC);
adapter->stats.prc64 += E1000_READ_REG(&adapter->hw, E1000_PRC64);
adapter->stats.prc127 += E1000_READ_REG(&adapter->hw, E1000_PRC127);
adapter->stats.prc255 += E1000_READ_REG(&adapter->hw, E1000_PRC255);
adapter->stats.prc511 += E1000_READ_REG(&adapter->hw, E1000_PRC511);
adapter->stats.prc1023 += E1000_READ_REG(&adapter->hw, E1000_PRC1023);
adapter->stats.prc1522 += E1000_READ_REG(&adapter->hw, E1000_PRC1522);
adapter->stats.gprc += E1000_READ_REG(&adapter->hw, E1000_GPRC);
adapter->stats.bprc += E1000_READ_REG(&adapter->hw, E1000_BPRC);
adapter->stats.mprc += E1000_READ_REG(&adapter->hw, E1000_MPRC);
adapter->stats.gptc += E1000_READ_REG(&adapter->hw, E1000_GPTC);
/* For the 64-bit byte counters the low dword must be read first. */
/* Both registers clear on the read of the high dword */
adapter->stats.gorc += E1000_READ_REG(&adapter->hw, E1000_GORCL) +
((u64)E1000_READ_REG(&adapter->hw, E1000_GORCH) << 32);
adapter->stats.gotc += E1000_READ_REG(&adapter->hw, E1000_GOTCL) +
((u64)E1000_READ_REG(&adapter->hw, E1000_GOTCH) << 32);
adapter->stats.rnbc += E1000_READ_REG(&adapter->hw, E1000_RNBC);
adapter->stats.ruc += E1000_READ_REG(&adapter->hw, E1000_RUC);
adapter->stats.rfc += E1000_READ_REG(&adapter->hw, E1000_RFC);
adapter->stats.roc += E1000_READ_REG(&adapter->hw, E1000_ROC);
adapter->stats.rjc += E1000_READ_REG(&adapter->hw, E1000_RJC);
adapter->stats.tor += E1000_READ_REG(&adapter->hw, E1000_TORH);
adapter->stats.tot += E1000_READ_REG(&adapter->hw, E1000_TOTH);
adapter->stats.tpr += E1000_READ_REG(&adapter->hw, E1000_TPR);
adapter->stats.tpt += E1000_READ_REG(&adapter->hw, E1000_TPT);
adapter->stats.ptc64 += E1000_READ_REG(&adapter->hw, E1000_PTC64);
adapter->stats.ptc127 += E1000_READ_REG(&adapter->hw, E1000_PTC127);
adapter->stats.ptc255 += E1000_READ_REG(&adapter->hw, E1000_PTC255);
adapter->stats.ptc511 += E1000_READ_REG(&adapter->hw, E1000_PTC511);
adapter->stats.ptc1023 += E1000_READ_REG(&adapter->hw, E1000_PTC1023);
adapter->stats.ptc1522 += E1000_READ_REG(&adapter->hw, E1000_PTC1522);
adapter->stats.mptc += E1000_READ_REG(&adapter->hw, E1000_MPTC);
adapter->stats.bptc += E1000_READ_REG(&adapter->hw, E1000_BPTC);
/* Interrupt Counts */
adapter->stats.iac += E1000_READ_REG(&adapter->hw, E1000_IAC);
adapter->stats.icrxptc += E1000_READ_REG(&adapter->hw, E1000_ICRXPTC);
adapter->stats.icrxatc += E1000_READ_REG(&adapter->hw, E1000_ICRXATC);
adapter->stats.ictxptc += E1000_READ_REG(&adapter->hw, E1000_ICTXPTC);
adapter->stats.ictxatc += E1000_READ_REG(&adapter->hw, E1000_ICTXATC);
adapter->stats.ictxqec += E1000_READ_REG(&adapter->hw, E1000_ICTXQEC);
adapter->stats.ictxqmtc += E1000_READ_REG(&adapter->hw, E1000_ICTXQMTC);
adapter->stats.icrxdmtc += E1000_READ_REG(&adapter->hw, E1000_ICRXDMTC);
adapter->stats.icrxoc += E1000_READ_REG(&adapter->hw, E1000_ICRXOC);
if (adapter->hw.mac.type >= e1000_82543) {
adapter->stats.algnerrc +=
E1000_READ_REG(&adapter->hw, E1000_ALGNERRC);
adapter->stats.rxerrc +=
E1000_READ_REG(&adapter->hw, E1000_RXERRC);
adapter->stats.tncrs +=
E1000_READ_REG(&adapter->hw, E1000_TNCRS);
adapter->stats.cexterr +=
E1000_READ_REG(&adapter->hw, E1000_CEXTERR);
adapter->stats.tsctc +=
E1000_READ_REG(&adapter->hw, E1000_TSCTC);
adapter->stats.tsctfc +=
E1000_READ_REG(&adapter->hw, E1000_TSCTFC);
}
}
static uint64_t
em_if_get_counter(if_ctx_t ctx, ift_counter cnt)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct ifnet *ifp = iflib_get_ifp(ctx);
switch (cnt) {
case IFCOUNTER_COLLISIONS:
return (adapter->stats.colc);
case IFCOUNTER_IERRORS:
return (adapter->dropped_pkts + adapter->stats.rxerrc +
adapter->stats.crcerrs + adapter->stats.algnerrc +
adapter->stats.ruc + adapter->stats.roc +
adapter->stats.mpc + adapter->stats.cexterr);
case IFCOUNTER_OERRORS:
return (adapter->stats.ecol + adapter->stats.latecol +
adapter->watchdog_events);
default:
return (if_get_counter_default(ifp, cnt));
}
}
/* Export a single 32-bit register via a read-only sysctl. */
static int
em_sysctl_reg_handler(SYSCTL_HANDLER_ARGS)
{
struct adapter *adapter;
u_int val;
adapter = oidp->oid_arg1;
val = E1000_READ_REG(&adapter->hw, oidp->oid_arg2);
return (sysctl_handle_int(oidp, &val, 0, req));
}
/*
* Add sysctl variables, one per statistic, to the system.
*/
static void
em_add_hw_stats(struct adapter *adapter)
{
device_t dev = iflib_get_dev(adapter->ctx);
struct em_tx_queue *tx_que = adapter->tx_queues;
struct em_rx_queue *rx_que = adapter->rx_queues;
struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
struct sysctl_oid *tree = device_get_sysctl_tree(dev);
struct sysctl_oid_list *child = SYSCTL_CHILDREN(tree);
struct e1000_hw_stats *stats = &adapter->stats;
struct sysctl_oid *stat_node, *queue_node, *int_node;
struct sysctl_oid_list *stat_list, *queue_list, *int_list;
#define QUEUE_NAME_LEN 32
char namebuf[QUEUE_NAME_LEN];
/* Driver Statistics */
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "dropped",
CTLFLAG_RD, &adapter->dropped_pkts,
"Driver dropped packets");
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "link_irq",
CTLFLAG_RD, &adapter->link_irq,
"Link MSI-X IRQ Handled");
- SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "mbuf_defrag_fail",
- CTLFLAG_RD, &adapter->mbuf_defrag_failed,
- "Defragmenting mbuf chain failed");
- SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "tx_dma_fail",
- CTLFLAG_RD, &adapter->no_tx_dma_setup,
- "Driver tx dma failure in xmit");
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "rx_overruns",
CTLFLAG_RD, &adapter->rx_overruns,
"RX overruns");
SYSCTL_ADD_ULONG(ctx, child, OID_AUTO, "watchdog_timeouts",
CTLFLAG_RD, &adapter->watchdog_events,
"Watchdog timeouts");
SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "device_control",
CTLTYPE_UINT | CTLFLAG_RD, adapter, E1000_CTRL,
em_sysctl_reg_handler, "IU",
"Device Control Register");
SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "rx_control",
CTLTYPE_UINT | CTLFLAG_RD, adapter, E1000_RCTL,
em_sysctl_reg_handler, "IU",
"Receiver Control Register");
SYSCTL_ADD_UINT(ctx, child, OID_AUTO, "fc_high_water",
CTLFLAG_RD, &adapter->hw.fc.high_water, 0,
"Flow Control High Watermark");
SYSCTL_ADD_UINT(ctx, child, OID_AUTO, "fc_low_water",
CTLFLAG_RD, &adapter->hw.fc.low_water, 0,
"Flow Control Low Watermark");
for (int i = 0; i < adapter->tx_num_queues; i++, tx_que++) {
struct tx_ring *txr = &tx_que->txr;
snprintf(namebuf, QUEUE_NAME_LEN, "queue_tx_%d", i);
queue_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, namebuf,
CTLFLAG_RD, NULL, "TX Queue Name");
queue_list = SYSCTL_CHILDREN(queue_node);
SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "txd_head",
CTLTYPE_UINT | CTLFLAG_RD, adapter,
E1000_TDH(txr->me),
em_sysctl_reg_handler, "IU",
"Transmit Descriptor Head");
SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "txd_tail",
CTLTYPE_UINT | CTLFLAG_RD, adapter,
E1000_TDT(txr->me),
em_sysctl_reg_handler, "IU",
"Transmit Descriptor Tail");
SYSCTL_ADD_ULONG(ctx, queue_list, OID_AUTO, "tx_irq",
CTLFLAG_RD, &txr->tx_irq,
"Queue MSI-X Transmit Interrupts");
}
for (int j = 0; j < adapter->rx_num_queues; j++, rx_que++) {
struct rx_ring *rxr = &rx_que->rxr;
snprintf(namebuf, QUEUE_NAME_LEN, "queue_rx_%d", j);
queue_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, namebuf,
CTLFLAG_RD, NULL, "RX Queue Name");
queue_list = SYSCTL_CHILDREN(queue_node);
SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "rxd_head",
CTLTYPE_UINT | CTLFLAG_RD, adapter,
E1000_RDH(rxr->me),
em_sysctl_reg_handler, "IU",
"Receive Descriptor Head");
SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "rxd_tail",
CTLTYPE_UINT | CTLFLAG_RD, adapter,
E1000_RDT(rxr->me),
em_sysctl_reg_handler, "IU",
"Receive Descriptor Tail");
SYSCTL_ADD_ULONG(ctx, queue_list, OID_AUTO, "rx_irq",
CTLFLAG_RD, &rxr->rx_irq,
"Queue MSI-X Receive Interrupts");
}
/* MAC stats get their own sub node */
stat_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "mac_stats",
CTLFLAG_RD, NULL, "Statistics");
stat_list = SYSCTL_CHILDREN(stat_node);
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "excess_coll",
CTLFLAG_RD, &stats->ecol,
"Excessive collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "single_coll",
CTLFLAG_RD, &stats->scc,
"Single collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "multiple_coll",
CTLFLAG_RD, &stats->mcc,
"Multiple collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "late_coll",
CTLFLAG_RD, &stats->latecol,
"Late collisions");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "collision_count",
CTLFLAG_RD, &stats->colc,
"Collision Count");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "symbol_errors",
CTLFLAG_RD, &adapter->stats.symerrs,
"Symbol Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "sequence_errors",
CTLFLAG_RD, &adapter->stats.sec,
"Sequence Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "defer_count",
CTLFLAG_RD, &adapter->stats.dc,
"Defer Count");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "missed_packets",
CTLFLAG_RD, &adapter->stats.mpc,
"Missed Packets");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_no_buff",
CTLFLAG_RD, &adapter->stats.rnbc,
"Receive No Buffers");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_undersize",
CTLFLAG_RD, &adapter->stats.ruc,
"Receive Undersize");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_fragmented",
CTLFLAG_RD, &adapter->stats.rfc,
"Fragmented Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_oversize",
CTLFLAG_RD, &adapter->stats.roc,
"Oversized Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_jabber",
CTLFLAG_RD, &adapter->stats.rjc,
"Received Jabber");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "recv_errs",
CTLFLAG_RD, &adapter->stats.rxerrc,
"Receive Errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "crc_errs",
CTLFLAG_RD, &adapter->stats.crcerrs,
"CRC errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "alignment_errs",
CTLFLAG_RD, &adapter->stats.algnerrc,
"Alignment Errors");
/* On 82575 these are collision counts */
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "coll_ext_errs",
CTLFLAG_RD, &adapter->stats.cexterr,
"Collision/Carrier extension errors");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xon_recvd",
CTLFLAG_RD, &adapter->stats.xonrxc,
"XON Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xon_txd",
CTLFLAG_RD, &adapter->stats.xontxc,
"XON Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xoff_recvd",
CTLFLAG_RD, &adapter->stats.xoffrxc,
"XOFF Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "xoff_txd",
CTLFLAG_RD, &adapter->stats.xofftxc,
"XOFF Transmitted");
/* Packet Reception Stats */
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "total_pkts_recvd",
CTLFLAG_RD, &adapter->stats.tpr,
"Total Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_pkts_recvd",
CTLFLAG_RD, &adapter->stats.gprc,
"Good Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "bcast_pkts_recvd",
CTLFLAG_RD, &adapter->stats.bprc,
"Broadcast Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "mcast_pkts_recvd",
CTLFLAG_RD, &adapter->stats.mprc,
"Multicast Packets Received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_64",
CTLFLAG_RD, &adapter->stats.prc64,
"64 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_65_127",
CTLFLAG_RD, &adapter->stats.prc127,
"65-127 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_128_255",
CTLFLAG_RD, &adapter->stats.prc255,
"128-255 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_256_511",
CTLFLAG_RD, &adapter->stats.prc511,
"256-511 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_512_1023",
CTLFLAG_RD, &adapter->stats.prc1023,
"512-1023 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "rx_frames_1024_1522",
CTLFLAG_RD, &adapter->stats.prc1522,
"1024-1522 byte frames received");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_octets_recvd",
CTLFLAG_RD, &adapter->stats.gorc,
"Good Octets Received");
/* Packet Transmission Stats */
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_octets_txd",
CTLFLAG_RD, &adapter->stats.gotc,
"Good Octets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "total_pkts_txd",
CTLFLAG_RD, &adapter->stats.tpt,
"Total Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "good_pkts_txd",
CTLFLAG_RD, &adapter->stats.gptc,
"Good Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "bcast_pkts_txd",
CTLFLAG_RD, &adapter->stats.bptc,
"Broadcast Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "mcast_pkts_txd",
CTLFLAG_RD, &adapter->stats.mptc,
"Multicast Packets Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_64",
CTLFLAG_RD, &adapter->stats.ptc64,
"64 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_65_127",
CTLFLAG_RD, &adapter->stats.ptc127,
"65-127 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_128_255",
CTLFLAG_RD, &adapter->stats.ptc255,
"128-255 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_256_511",
CTLFLAG_RD, &adapter->stats.ptc511,
"256-511 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_512_1023",
CTLFLAG_RD, &adapter->stats.ptc1023,
"512-1023 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tx_frames_1024_1522",
CTLFLAG_RD, &adapter->stats.ptc1522,
"1024-1522 byte frames transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tso_txd",
CTLFLAG_RD, &adapter->stats.tsctc,
"TSO Contexts Transmitted");
SYSCTL_ADD_UQUAD(ctx, stat_list, OID_AUTO, "tso_ctx_fail",
CTLFLAG_RD, &adapter->stats.tsctfc,
"TSO Contexts Failed");
/* Interrupt Stats */
int_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "interrupts",
CTLFLAG_RD, NULL, "Interrupt Statistics");
int_list = SYSCTL_CHILDREN(int_node);
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "asserts",
CTLFLAG_RD, &adapter->stats.iac,
"Interrupt Assertion Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "rx_pkt_timer",
CTLFLAG_RD, &adapter->stats.icrxptc,
"Interrupt Cause Rx Pkt Timer Expire Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "rx_abs_timer",
CTLFLAG_RD, &adapter->stats.icrxatc,
"Interrupt Cause Rx Abs Timer Expire Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "tx_pkt_timer",
CTLFLAG_RD, &adapter->stats.ictxptc,
"Interrupt Cause Tx Pkt Timer Expire Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "tx_abs_timer",
CTLFLAG_RD, &adapter->stats.ictxatc,
"Interrupt Cause Tx Abs Timer Expire Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "tx_queue_empty",
CTLFLAG_RD, &adapter->stats.ictxqec,
"Interrupt Cause Tx Queue Empty Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "tx_queue_min_thresh",
CTLFLAG_RD, &adapter->stats.ictxqmtc,
"Interrupt Cause Tx Queue Min Thresh Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "rx_desc_min_thresh",
CTLFLAG_RD, &adapter->stats.icrxdmtc,
"Interrupt Cause Rx Desc Min Thresh Count");
SYSCTL_ADD_UQUAD(ctx, int_list, OID_AUTO, "rx_overrun",
CTLFLAG_RD, &adapter->stats.icrxoc,
"Interrupt Cause Receiver Overrun Count");
}
/**********************************************************************
*
* This routine provides a way to dump out the adapter eeprom,
* often a useful debug/service tool. It only dumps the first
* 32 words; everything that matters is within that extent.
*
**********************************************************************/
static int
em_sysctl_nvm_info(SYSCTL_HANDLER_ARGS)
{
struct adapter *adapter = (struct adapter *)arg1;
int error;
int result;
result = -1;
error = sysctl_handle_int(oidp, &result, 0, req);
if (error || !req->newptr)
return (error);
/*
* This value will cause a hex dump of the
* first 32 16-bit words of the EEPROM to
* the screen.
*/
if (result == 1)
em_print_nvm_info(adapter);
return (error);
}
static void
em_print_nvm_info(struct adapter *adapter)
{
u16 eeprom_data;
int i, j, row = 0;
/* It's a bit crude, but it gets the job done */
printf("\nInterface EEPROM Dump:\n");
printf("Offset\n0x0000 ");
for (i = 0, j = 0; i < 32; i++, j++) {
if (j == 8) { /* Make the offset block */
j = 0; ++row;
printf("\n0x00%x0 ", row);
}
e1000_read_nvm(&adapter->hw, i, 1, &eeprom_data);
printf("%04x ", eeprom_data);
}
printf("\n");
}
static int
em_sysctl_int_delay(SYSCTL_HANDLER_ARGS)
{
struct em_int_delay_info *info;
struct adapter *adapter;
u32 regval;
int error, usecs, ticks;
info = (struct em_int_delay_info *) arg1;
usecs = info->value;
error = sysctl_handle_int(oidp, &usecs, 0, req);
if (error != 0 || req->newptr == NULL)
return (error);
if (usecs < 0 || usecs > EM_TICKS_TO_USECS(65535))
return (EINVAL);
info->value = usecs;
ticks = EM_USECS_TO_TICKS(usecs);
if (info->offset == E1000_ITR) /* units are 256ns here */
ticks *= 4;
adapter = info->adapter;
regval = E1000_READ_OFFSET(&adapter->hw, info->offset);
regval = (regval & ~0xffff) | (ticks & 0xffff);
/* Handle a few special cases. */
switch (info->offset) {
case E1000_RDTR:
break;
case E1000_TIDV:
if (ticks == 0) {
adapter->txd_cmd &= ~E1000_TXD_CMD_IDE;
/* Don't write 0 into the TIDV register. */
regval++;
} else
adapter->txd_cmd |= E1000_TXD_CMD_IDE;
break;
}
E1000_WRITE_OFFSET(&adapter->hw, info->offset, regval);
return (0);
}
static void
em_add_int_delay_sysctl(struct adapter *adapter, const char *name,
const char *description, struct em_int_delay_info *info,
int offset, int value)
{
info->adapter = adapter;
info->offset = offset;
info->value = value;
SYSCTL_ADD_PROC(device_get_sysctl_ctx(adapter->dev),
SYSCTL_CHILDREN(device_get_sysctl_tree(adapter->dev)),
OID_AUTO, name, CTLTYPE_INT|CTLFLAG_RW,
info, 0, em_sysctl_int_delay, "I", description);
}
/*
* Set flow control using sysctl:
* Flow control values:
* 0 - off
* 1 - rx pause
* 2 - tx pause
* 3 - full
*/
static int
em_set_flowcntl(SYSCTL_HANDLER_ARGS)
{
int error;
static int input = 3; /* default is full */
struct adapter *adapter = (struct adapter *) arg1;
error = sysctl_handle_int(oidp, &input, 0, req);
if ((error) || (req->newptr == NULL))
return (error);
if (input == adapter->fc) /* no change? */
return (error);
switch (input) {
case e1000_fc_rx_pause:
case e1000_fc_tx_pause:
case e1000_fc_full:
case e1000_fc_none:
adapter->hw.fc.requested_mode = input;
adapter->fc = input;
break;
default:
/* Do nothing */
return (error);
}
adapter->hw.fc.current_mode = adapter->hw.fc.requested_mode;
e1000_force_mac_fc(&adapter->hw);
return (error);
}
/*
* Manage Energy Efficient Ethernet:
* Control values:
* 0/1 - enabled/disabled
*/
static int
em_sysctl_eee(SYSCTL_HANDLER_ARGS)
{
struct adapter *adapter = (struct adapter *) arg1;
int error, value;
value = adapter->hw.dev_spec.ich8lan.eee_disable;
error = sysctl_handle_int(oidp, &value, 0, req);
if (error || req->newptr == NULL)
return (error);
adapter->hw.dev_spec.ich8lan.eee_disable = (value != 0);
em_if_init(adapter->ctx);
return (0);
}
static int
em_sysctl_debug_info(SYSCTL_HANDLER_ARGS)
{
struct adapter *adapter;
int error;
int result;
result = -1;
error = sysctl_handle_int(oidp, &result, 0, req);
if (error || !req->newptr)
return (error);
if (result == 1) {
adapter = (struct adapter *) arg1;
em_print_debug_info(adapter);
}
return (error);
}
static int
em_get_rs(SYSCTL_HANDLER_ARGS)
{
struct adapter *adapter = (struct adapter *) arg1;
int error;
int result;
result = 0;
error = sysctl_handle_int(oidp, &result, 0, req);
if (error || !req->newptr || result != 1)
return (error);
em_dump_rs(adapter);
return (error);
}
static void
em_if_debug(if_ctx_t ctx)
{
em_dump_rs(iflib_get_softc(ctx));
}
/*
* This routine is meant to be fluid; add whatever is
* needed for debugging a problem. -jfv
*/
static void
em_print_debug_info(struct adapter *adapter)
{
device_t dev = iflib_get_dev(adapter->ctx);
struct ifnet *ifp = iflib_get_ifp(adapter->ctx);
struct tx_ring *txr = &adapter->tx_queues->txr;
struct rx_ring *rxr = &adapter->rx_queues->rxr;
if (if_getdrvflags(ifp) & IFF_DRV_RUNNING)
printf("Interface is RUNNING ");
else
printf("Interface is NOT RUNNING\n");
if (if_getdrvflags(ifp) & IFF_DRV_OACTIVE)
printf("and INACTIVE\n");
else
printf("and ACTIVE\n");
for (int i = 0; i < adapter->tx_num_queues; i++, txr++) {
device_printf(dev, "TX Queue %d ------\n", i);
device_printf(dev, "hw tdh = %d, hw tdt = %d\n",
E1000_READ_REG(&adapter->hw, E1000_TDH(i)),
E1000_READ_REG(&adapter->hw, E1000_TDT(i)));
}
for (int j=0; j < adapter->rx_num_queues; j++, rxr++) {
device_printf(dev, "RX Queue %d ------\n", j);
device_printf(dev, "hw rdh = %d, hw rdt = %d\n",
E1000_READ_REG(&adapter->hw, E1000_RDH(j)),
E1000_READ_REG(&adapter->hw, E1000_RDT(j)));
}
}
/*
* 82574 only:
* Write a new value to the EEPROM increasing the number of MSI-X
* vectors from 3 to 5, for proper multiqueue support.
*/
static void
em_enable_vectors_82574(if_ctx_t ctx)
{
struct adapter *adapter = iflib_get_softc(ctx);
struct e1000_hw *hw = &adapter->hw;
device_t dev = iflib_get_dev(ctx);
u16 edata;
e1000_read_nvm(hw, EM_NVM_PCIE_CTRL, 1, &edata);
- printf("Current cap: %#06x\n", edata);
+ if (bootverbose)
+ device_printf(dev, "EM_NVM_PCIE_CTRL = %#06x\n", edata);
if (((edata & EM_NVM_MSIX_N_MASK) >> EM_NVM_MSIX_N_SHIFT) != 4) {
device_printf(dev, "Writing to eeprom: increasing "
"reported MSI-X vectors from 3 to 5...\n");
edata &= ~(EM_NVM_MSIX_N_MASK);
edata |= 4 << EM_NVM_MSIX_N_SHIFT;
e1000_write_nvm(hw, EM_NVM_PCIE_CTRL, 1, &edata);
e1000_update_nvm_checksum(hw);
device_printf(dev, "Writing to eeprom: done\n");
}
}
Index: projects/clang800-import/sys/dev/e1000/if_em.h
===================================================================
--- projects/clang800-import/sys/dev/e1000/if_em.h (revision 343955)
+++ projects/clang800-import/sys/dev/e1000/if_em.h (revision 343956)
@@ -1,564 +1,560 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2016 Nicole Graziano <nicole@nextbsd.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*$FreeBSD$*/
#include "opt_ddb.h"
#include "opt_inet.h"
#include "opt_inet6.h"
#ifdef HAVE_KERNEL_OPTION_HEADERS
#include "opt_device_polling.h"
#endif
#include <sys/param.h>
#include <sys/systm.h>
#ifdef DDB
#include <sys/types.h>
#include <ddb/ddb.h>
#endif
#if __FreeBSD_version >= 800000
#include <sys/buf_ring.h>
#endif
#include <sys/bus.h>
#include <sys/endian.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>
#include <sys/module.h>
#include <sys/rman.h>
#include <sys/smp.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <sys/sysctl.h>
#include <sys/taskqueue.h>
#include <sys/eventhandler.h>
#include <machine/bus.h>
#include <machine/resource.h>
#include <net/bpf.h>
#include <net/ethernet.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/if_arp.h>
#include <net/if_dl.h>
#include <net/if_media.h>
#include <net/iflib.h>
#include <net/if_types.h>
#include <net/if_vlan_var.h>
#include <netinet/in_systm.h>
#include <netinet/in.h>
#include <netinet/if_ether.h>
#include <netinet/ip.h>
#include <netinet/ip6.h>
#include <netinet/tcp.h>
#include <netinet/udp.h>
#include <machine/in_cksum.h>
#include <dev/led/led.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pcireg.h>
#include "e1000_api.h"
#include "e1000_82571.h"
#include "ifdi_if.h"
#ifndef _EM_H_DEFINED_
#define _EM_H_DEFINED_
/* Tunables */
/*
* EM_MAX_TXD: Maximum number of Transmit Descriptors
* Valid Range: 80-256 for 82542 and 82543-based adapters
* 80-4096 for others
* Default Value: 1024
* This value is the number of transmit descriptors allocated by the driver.
* Increasing this value allows the driver to queue more transmits. Each
* descriptor is 16 bytes.
 * Since TDLEN should be a multiple of 128 bytes, the number of transmit
 * descriptors should meet the following condition.
* (num_tx_desc * sizeof(struct e1000_tx_desc)) % 128 == 0
*/
#define EM_MIN_TXD 128
#define EM_MAX_TXD 4096
#define EM_DEFAULT_TXD 1024
#define EM_DEFAULT_MULTI_TXD 4096
#define IGB_MAX_TXD 4096
/*
* EM_MAX_RXD - Maximum number of receive Descriptors
* Valid Range: 80-256 for 82542 and 82543-based adapters
* 80-4096 for others
* Default Value: 1024
* This value is the number of receive descriptors allocated by the driver.
* Increasing this value allows the driver to buffer more incoming packets.
* Each descriptor is 16 bytes. A receive buffer is also allocated for each
* descriptor. The maximum MTU size is 16110.
 * Since RDLEN should be a multiple of 128 bytes, the number of receive
 * descriptors should meet the following condition.
 * (num_rx_desc * sizeof(union e1000_rx_desc_extended)) % 128 == 0
*/
#define EM_MIN_RXD 128
#define EM_MAX_RXD 4096
#define EM_DEFAULT_RXD 1024
#define EM_DEFAULT_MULTI_RXD 4096
#define IGB_MAX_RXD 4096
/*
* EM_TIDV - Transmit Interrupt Delay Value
* Valid Range: 0-65535 (0=off)
* Default Value: 64
* This value delays the generation of transmit interrupts in units of
* 1.024 microseconds. Transmit interrupt reduction can improve CPU
* efficiency if properly tuned for specific network traffic. If the
* system is reporting dropped transmits, this value may be set too high
* causing the driver to run out of available transmit descriptors.
*/
#define EM_TIDV 64
/*
* EM_TADV - Transmit Absolute Interrupt Delay Value
* (Not valid for 82542/82543/82544)
* Valid Range: 0-65535 (0=off)
* Default Value: 64
* This value, in units of 1.024 microseconds, limits the delay in which a
* transmit interrupt is generated. Useful only if EM_TIDV is non-zero,
* this value ensures that an interrupt is generated after the initial
* packet is sent on the wire within the set amount of time. Proper tuning,
* along with EM_TIDV, may improve traffic throughput in specific
* network conditions.
*/
#define EM_TADV 64
/*
* EM_RDTR - Receive Interrupt Delay Timer (Packet Timer)
* Valid Range: 0-65535 (0=off)
* Default Value: 0
* This value delays the generation of receive interrupts in units of 1.024
* microseconds. Receive interrupt reduction can improve CPU efficiency if
* properly tuned for specific network traffic. Increasing this value adds
* extra latency to frame reception and can end up decreasing the throughput
* of TCP traffic. If the system is reporting dropped receives, this value
* may be set too high, causing the driver to run out of available receive
* descriptors.
*
* CAUTION: When setting EM_RDTR to a value other than 0, adapters
* may hang (stop transmitting) under certain network conditions.
* If this occurs a WATCHDOG message is logged in the system
* event log. In addition, the controller is automatically reset,
* restoring the network connection. To eliminate the potential
* for the hang ensure that EM_RDTR is set to 0.
*/
#define EM_RDTR 0
/*
* Receive Interrupt Absolute Delay Timer (Not valid for 82542/82543/82544)
* Valid Range: 0-65535 (0=off)
* Default Value: 64
* This value, in units of 1.024 microseconds, limits the delay in which a
* receive interrupt is generated. Useful only if EM_RDTR is non-zero,
* this value ensures that an interrupt is generated after the initial
* packet is received within the set amount of time. Proper tuning,
* along with EM_RDTR, may improve traffic throughput in specific network
* conditions.
*/
#define EM_RADV 64
/*
 * This parameter controls whether or not autonegotiation is enabled.
* 0 - Disable autonegotiation
* 1 - Enable autonegotiation
*/
#define DO_AUTO_NEG 1
/*
 * This parameter controls whether or not the driver will wait for
* autonegotiation to complete.
* 1 - Wait for autonegotiation to complete
* 0 - Don't wait for autonegotiation to complete
*/
#define WAIT_FOR_AUTO_NEG_DEFAULT 0
/* Tunables -- End */
#define AUTONEG_ADV_DEFAULT (ADVERTISE_10_HALF | ADVERTISE_10_FULL | \
ADVERTISE_100_HALF | ADVERTISE_100_FULL | \
ADVERTISE_1000_FULL)
#define AUTO_ALL_MODES 0
/* PHY master/slave setting */
#define EM_MASTER_SLAVE e1000_ms_hw_default
/*
 * Miscellaneous constants
*/
#define EM_VENDOR_ID 0x8086
#define EM_FLASH 0x0014
#define EM_JUMBO_PBA 0x00000028
#define EM_DEFAULT_PBA 0x00000030
#define EM_SMARTSPEED_DOWNSHIFT 3
#define EM_SMARTSPEED_MAX 15
#define EM_MAX_LOOP 10
#define MAX_NUM_MULTICAST_ADDRESSES 128
#define PCI_ANY_ID (~0U)
#define ETHER_ALIGN 2
#define EM_FC_PAUSE_TIME 0x0680
#define EM_EEPROM_APME 0x400
#define EM_82544_APME 0x0004
/* Support AutoMediaDetect for Marvell M88 PHY in i354 */
#define IGB_MEDIA_RESET (1 << 0)
/* Define the starting Interrupt rate per Queue */
#define IGB_INTS_PER_SEC 8000
#define IGB_DEFAULT_ITR ((1000000/IGB_INTS_PER_SEC) << 2)
#define IGB_LINK_ITR 2000
#define I210_LINK_DELAY 1000
#define IGB_TXPBSIZE 20408
#define IGB_HDR_BUF 128
#define IGB_PKTTYPE_MASK 0x0000FFF0
#define IGB_DMCTLX_DCFLUSH_DIS 0x80000000 /* Disable DMA Coalesce Flush */
/*
* Driver state logic for the detection of a hung state
* in hardware. Set TX_HUNG whenever a TX packet is used
* (data is sent) and clear it when txeof() is invoked if
* any descriptors from the ring are cleaned/reclaimed.
* Increment internal counter if no descriptors are cleaned
* and compare to TX_MAXTRIES. When counter > TX_MAXTRIES,
* reset adapter.
*/
#define EM_TX_IDLE 0x00000000
#define EM_TX_BUSY 0x00000001
#define EM_TX_HUNG 0x80000000
#define EM_TX_MAXTRIES 10
#define PCICFG_DESC_RING_STATUS 0xe4
#define FLUSH_DESC_REQUIRED 0x100
#define IGB_RX_PTHRESH ((hw->mac.type == e1000_i354) ? 12 : \
((hw->mac.type <= e1000_82576) ? 16 : 8))
#define IGB_RX_HTHRESH 8
#define IGB_RX_WTHRESH ((hw->mac.type == e1000_82576 && \
(adapter->intr_type == IFLIB_INTR_MSIX)) ? 1 : 4)
#define IGB_TX_PTHRESH ((hw->mac.type == e1000_i354) ? 20 : 8)
#define IGB_TX_HTHRESH 1
#define IGB_TX_WTHRESH ((hw->mac.type != e1000_82575 && \
(adapter->intr_type == IFLIB_INTR_MSIX)) ? 1 : 16)
/*
 * TDBA/RDBA need only be aligned on a 16-byte boundary, but TDLEN/RDLEN
 * must be a multiple of 128 bytes, so we align TDBA/RDBA on a 128-byte
 * boundary as well. This also helps cache behavior; the hardware
 * supports cache line sizes up to 128 bytes.
 */
#define EM_DBA_ALIGN 128
/*
* See Intel 82574 Driver Programming Interface Manual, Section 10.2.6.9
*/
#define TARC_COMPENSATION_MODE (1 << 7) /* Compensation Mode */
#define TARC_SPEED_MODE_BIT (1 << 21) /* On PCI-E MACs only */
#define TARC_MQ_FIX ((1 << 23) | \
 (1 << 24) | \
 (1 << 25)) /* Handle errata in MQ mode */
#define TARC_ERRATA_BIT (1 << 26) /* Note from errata on 82574 */
/* PCI Config defines */
#define EM_BAR_TYPE(v) ((v) & EM_BAR_TYPE_MASK)
#define EM_BAR_TYPE_MASK 0x00000001
#define EM_BAR_TYPE_MMEM 0x00000000
#define EM_BAR_TYPE_IO 0x00000001
#define EM_BAR_TYPE_FLASH 0x0014
#define EM_BAR_MEM_TYPE(v) ((v) & EM_BAR_MEM_TYPE_MASK)
#define EM_BAR_MEM_TYPE_MASK 0x00000006
#define EM_BAR_MEM_TYPE_32BIT 0x00000000
#define EM_BAR_MEM_TYPE_64BIT 0x00000004
#define EM_MSIX_BAR 3 /* On 82575 */
/* More backward compatibility */
#if __FreeBSD_version < 900000
#define SYSCTL_ADD_UQUAD SYSCTL_ADD_QUAD
#endif
/* Defines for printing debug information */
#define DEBUG_INIT 0
#define DEBUG_IOCTL 0
#define DEBUG_HW 0
#define INIT_DEBUGOUT(S) if (DEBUG_INIT) printf(S "\n")
#define INIT_DEBUGOUT1(S, A) if (DEBUG_INIT) printf(S "\n", A)
#define INIT_DEBUGOUT2(S, A, B) if (DEBUG_INIT) printf(S "\n", A, B)
#define IOCTL_DEBUGOUT(S) if (DEBUG_IOCTL) printf(S "\n")
#define IOCTL_DEBUGOUT1(S, A) if (DEBUG_IOCTL) printf(S "\n", A)
#define IOCTL_DEBUGOUT2(S, A, B) if (DEBUG_IOCTL) printf(S "\n", A, B)
#define HW_DEBUGOUT(S) if (DEBUG_HW) printf(S "\n")
#define HW_DEBUGOUT1(S, A) if (DEBUG_HW) printf(S "\n", A)
#define HW_DEBUGOUT2(S, A, B) if (DEBUG_HW) printf(S "\n", A, B)
#define EM_MAX_SCATTER 40
#define EM_VFTA_SIZE 128
#define EM_TSO_SIZE 65535
#define EM_TSO_SEG_SIZE 4096 /* Max dma segment size */
#define EM_MSIX_MASK 0x01F00000 /* For 82574 use */
#define EM_MSIX_LINK 0x01000000 /* For 82574 use */
#define ETH_ZLEN 60
#define ETH_ADDR_LEN 6
#define EM_CSUM_OFFLOAD (CSUM_IP | CSUM_IP_UDP | CSUM_IP_TCP) /* Offload bits in mbuf flag */
#define IGB_CSUM_OFFLOAD (CSUM_IP | CSUM_IP_UDP | CSUM_IP_TCP | \
CSUM_IP_SCTP | CSUM_IP6_UDP | CSUM_IP6_TCP | \
CSUM_IP6_SCTP) /* Offload bits in mbuf flag */
/*
 * The 82574 has a nonstandard address for EIAC. Since EIAC is only
 * used in MSI-X mode, and the 82574 is the only device the em driver
 * uses MSI-X on, this define is sufficient to handle it.
 */
#define EM_EIAC 0x000DC
/*
* 82574 only reports 3 MSI-X vectors by default;
* defines assisting with making it report 5 are
* located here.
*/
#define EM_NVM_PCIE_CTRL 0x1B
#define EM_NVM_MSIX_N_MASK (0x7 << EM_NVM_MSIX_N_SHIFT)
#define EM_NVM_MSIX_N_SHIFT 7
struct adapter;
struct em_int_delay_info {
struct adapter *adapter; /* Back-pointer to the adapter struct */
int offset; /* Register offset to read/write */
int value; /* Current value in usecs */
};
/*
* The transmit ring, one per tx queue
*/
struct tx_ring {
struct adapter *adapter;
struct e1000_tx_desc *tx_base;
uint64_t tx_paddr;
qidx_t *tx_rsq;
bool tx_tso; /* last tx was tso */
uint8_t me;
qidx_t tx_rs_cidx;
qidx_t tx_rs_pidx;
qidx_t tx_cidx_processed;
/* Interrupt resources */
void *tag;
struct resource *res;
unsigned long tx_irq;
/* Saved csum offloading context information */
int csum_flags;
int csum_lhlen;
int csum_iphlen;
int csum_thlen;
int csum_mss;
int csum_pktlen;
uint32_t csum_txd_upper;
uint32_t csum_txd_lower; /* last field */
};
/*
* The Receive ring, one per rx queue
*/
struct rx_ring {
struct adapter *adapter;
struct em_rx_queue *que;
u32 me;
u32 payload;
union e1000_rx_desc_extended *rx_base;
uint64_t rx_paddr;
/* Interrupt resources */
void *tag;
struct resource *res;
bool discard;
/* Soft stats */
unsigned long rx_irq;
unsigned long rx_discarded;
unsigned long rx_packets;
unsigned long rx_bytes;
};
struct em_tx_queue {
struct adapter *adapter;
u32 msix;
u32 eims; /* This queue's EIMS bit */
u32 me;
struct tx_ring txr;
};
struct em_rx_queue {
struct adapter *adapter;
u32 me;
u32 msix;
u32 eims;
struct rx_ring rxr;
u64 irqs;
struct if_irq que_irq;
};
/* Our adapter structure */
struct adapter {
struct ifnet *ifp;
struct e1000_hw hw;
if_softc_ctx_t shared;
if_ctx_t ctx;
#define tx_num_queues shared->isc_ntxqsets
#define rx_num_queues shared->isc_nrxqsets
#define intr_type shared->isc_intr
/* FreeBSD operating-system-specific structures. */
struct e1000_osdep osdep;
device_t dev;
struct cdev *led_dev;
struct em_tx_queue *tx_queues;
struct em_rx_queue *rx_queues;
struct if_irq irq;
struct resource *memory;
struct resource *flash;
struct resource *ioport;
struct resource *res;
void *tag;
u32 linkvec;
u32 ivars;
struct ifmedia *media;
int msix;
int if_flags;
int em_insert_vlan_header;
u32 ims;
bool in_detach;
u32 flags;
/* Task for FAST handling */
struct grouptask link_task;
u16 num_vlans;
u32 txd_cmd;
u32 tx_process_limit;
u32 rx_process_limit;
u32 rx_mbuf_sz;
/* Management and WOL features */
u32 wol;
bool has_manage;
bool has_amt;
/* Multicast array memory */
u8 *mta;
/*
** Shadow VFTA table; it is needed because
** the real VLAN filter table gets cleared during
** a soft reset and the driver needs to be able
** to repopulate it.
*/
u32 shadow_vfta[EM_VFTA_SIZE];
/* Info about the interface */
u16 link_active;
u16 fc;
u16 link_speed;
u16 link_duplex;
u32 smartspeed;
u32 dmac;
int link_mask;
u64 que_mask;
-
struct em_int_delay_info tx_int_delay;
struct em_int_delay_info tx_abs_int_delay;
struct em_int_delay_info rx_int_delay;
struct em_int_delay_info rx_abs_int_delay;
struct em_int_delay_info tx_itr;
/* Misc stats maintained by the driver */
unsigned long dropped_pkts;
unsigned long link_irq;
- unsigned long mbuf_defrag_failed;
- unsigned long no_tx_dma_setup;
- unsigned long no_tx_map_avail;
unsigned long rx_overruns;
unsigned long watchdog_events;
struct e1000_hw_stats stats;
u16 vf_ifp;
};
/********************************************************************************
 * vendor_info_array
 *
 * This array contains the list of Vendor/Device and Subvendor/Subdevice
 * IDs on which the driver should load.
 *
 ********************************************************************************/
typedef struct _em_vendor_info_t {
unsigned int vendor_id;
unsigned int device_id;
unsigned int subvendor_id;
unsigned int subdevice_id;
unsigned int index;
} em_vendor_info_t;
void em_dump_rs(struct adapter *);
#define EM_RSSRK_SIZE 4
#define EM_RSSRK_VAL(key, i) (key[(i) * EM_RSSRK_SIZE] | \
key[(i) * EM_RSSRK_SIZE + 1] << 8 | \
key[(i) * EM_RSSRK_SIZE + 2] << 16 | \
key[(i) * EM_RSSRK_SIZE + 3] << 24)
#endif /* _EM_H_DEFINED_ */
Index: projects/clang800-import/sys/dev/flash/mx25l.c
===================================================================
--- projects/clang800-import/sys/dev/flash/mx25l.c (revision 343955)
+++ projects/clang800-import/sys/dev/flash/mx25l.c (revision 343956)
@@ -1,689 +1,689 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
* Copyright (c) 2009 Oleksandr Tymoshenko. All rights reserved.
* Copyright (c) 2018 Ian Lepore. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_platform.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/lock.h>
#include <sys/mbuf.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <geom/geom_disk.h>
#ifdef FDT
#include <dev/fdt/fdt_common.h>
#include <dev/ofw/ofw_bus_subr.h>
#include <dev/ofw/openfirm.h>
#endif
#include <dev/spibus/spi.h>
#include "spibus_if.h"
#include <dev/flash/mx25lreg.h>
#define FL_NONE 0x00
#define FL_ERASE_4K 0x01
#define FL_ERASE_32K 0x02
#define FL_ENABLE_4B_ADDR 0x04
#define FL_DISABLE_4B_ADDR 0x08
/*
 * Define the sectorsize to be smaller than the flash sector size.
 * Trying to run FFS off of a 64k flash sector size results in a
 * completely unusable system.
 */
#define MX25L_SECTORSIZE 512
struct mx25l_flash_ident
{
const char *name;
uint8_t manufacturer_id;
uint16_t device_id;
unsigned int sectorsize;
unsigned int sectorcount;
unsigned int flags;
};
struct mx25l_softc
{
device_t sc_dev;
device_t sc_parent;
uint8_t sc_manufacturer_id;
uint16_t sc_device_id;
unsigned int sc_erasesize;
struct mtx sc_mtx;
struct disk *sc_disk;
struct proc *sc_p;
struct bio_queue_head sc_bio_queue;
unsigned int sc_flags;
unsigned int sc_taskstate;
uint8_t sc_dummybuf[FLASH_PAGE_SIZE];
};
#define TSTATE_STOPPED 0
#define TSTATE_STOPPING 1
#define TSTATE_RUNNING 2
#define M25PXX_LOCK(_sc) mtx_lock(&(_sc)->sc_mtx)
#define M25PXX_UNLOCK(_sc) mtx_unlock(&(_sc)->sc_mtx)
#define M25PXX_LOCK_INIT(_sc) \
mtx_init(&_sc->sc_mtx, device_get_nameunit(_sc->sc_dev), \
"mx25l", MTX_DEF)
#define M25PXX_LOCK_DESTROY(_sc) mtx_destroy(&_sc->sc_mtx);
#define M25PXX_ASSERT_LOCKED(_sc) mtx_assert(&_sc->sc_mtx, MA_OWNED);
#define M25PXX_ASSERT_UNLOCKED(_sc) mtx_assert(&_sc->sc_mtx, MA_NOTOWNED);
/* disk routines */
static int mx25l_open(struct disk *dp);
static int mx25l_close(struct disk *dp);
static int mx25l_ioctl(struct disk *, u_long, void *, int, struct thread *);
static void mx25l_strategy(struct bio *bp);
static int mx25l_getattr(struct bio *bp);
static void mx25l_task(void *arg);
static struct mx25l_flash_ident flash_devices[] = {
{ "en25f32", 0x1c, 0x3116, 64 * 1024, 64, FL_NONE },
{ "en25p32", 0x1c, 0x2016, 64 * 1024, 64, FL_NONE },
{ "en25p64", 0x1c, 0x2017, 64 * 1024, 128, FL_NONE },
{ "en25q32", 0x1c, 0x3016, 64 * 1024, 64, FL_NONE },
{ "en25q64", 0x1c, 0x3017, 64 * 1024, 128, FL_ERASE_4K },
{ "m25p32", 0x20, 0x2016, 64 * 1024, 64, FL_NONE },
{ "m25p64", 0x20, 0x2017, 64 * 1024, 128, FL_NONE },
{ "mx25l1606e", 0xc2, 0x2015, 64 * 1024, 32, FL_ERASE_4K},
{ "mx25ll32", 0xc2, 0x2016, 64 * 1024, 64, FL_NONE },
{ "mx25ll64", 0xc2, 0x2017, 64 * 1024, 128, FL_NONE },
{ "mx25ll128", 0xc2, 0x2018, 64 * 1024, 256, FL_ERASE_4K | FL_ERASE_32K },
{ "mx25ll256", 0xc2, 0x2019, 64 * 1024, 512, FL_ERASE_4K | FL_ERASE_32K | FL_ENABLE_4B_ADDR },
{ "s25fl032", 0x01, 0x0215, 64 * 1024, 64, FL_NONE },
{ "s25fl064", 0x01, 0x0216, 64 * 1024, 128, FL_NONE },
{ "s25fl128", 0x01, 0x2018, 64 * 1024, 256, FL_NONE },
{ "s25fl256s", 0x01, 0x0219, 64 * 1024, 512, FL_NONE },
{ "SST25VF010A", 0xbf, 0x2549, 4 * 1024, 32, FL_ERASE_4K | FL_ERASE_32K },
{ "SST25VF032B", 0xbf, 0x254a, 64 * 1024, 64, FL_ERASE_4K | FL_ERASE_32K },
/* Winbond -- w25x "blocks" are 64K, "sectors" are 4KiB */
{ "w25x32", 0xef, 0x3016, 64 * 1024, 64, FL_ERASE_4K },
{ "w25x64", 0xef, 0x3017, 64 * 1024, 128, FL_ERASE_4K },
{ "w25q32", 0xef, 0x4016, 64 * 1024, 64, FL_ERASE_4K },
{ "w25q64", 0xef, 0x4017, 64 * 1024, 128, FL_ERASE_4K },
{ "w25q64bv", 0xef, 0x4017, 64 * 1024, 128, FL_ERASE_4K },
{ "w25q128", 0xef, 0x4018, 64 * 1024, 256, FL_ERASE_4K },
{ "w25q256", 0xef, 0x4019, 64 * 1024, 512, FL_ERASE_4K },
/* Atmel */
{ "at25df641", 0x1f, 0x4800, 64 * 1024, 128, FL_ERASE_4K },
/* GigaDevice */
{ "gd25q64", 0xc8, 0x4017, 64 * 1024, 128, FL_ERASE_4K },
};
static int
mx25l_wait_for_device_ready(struct mx25l_softc *sc)
{
uint8_t txBuf[2], rxBuf[2];
struct spi_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
do {
txBuf[0] = CMD_READ_STATUS;
cmd.tx_cmd = txBuf;
cmd.rx_cmd = rxBuf;
cmd.rx_cmd_sz = 2;
cmd.tx_cmd_sz = 2;
err = SPIBUS_TRANSFER(sc->sc_parent, sc->sc_dev, &cmd);
} while (err == 0 && (rxBuf[1] & STATUS_WIP));
return (err);
}
static struct mx25l_flash_ident*
mx25l_get_device_ident(struct mx25l_softc *sc)
{
uint8_t txBuf[8], rxBuf[8];
struct spi_command cmd;
uint8_t manufacturer_id;
uint16_t dev_id;
int err, i;
memset(&cmd, 0, sizeof(cmd));
memset(txBuf, 0, sizeof(txBuf));
memset(rxBuf, 0, sizeof(rxBuf));
txBuf[0] = CMD_READ_IDENT;
cmd.tx_cmd = &txBuf;
cmd.rx_cmd = &rxBuf;
/*
 * Some compatible devices have an extended two-byte ID;
 * we use only the manufacturer/device id for now.
 */
cmd.tx_cmd_sz = 4;
cmd.rx_cmd_sz = 4;
err = SPIBUS_TRANSFER(sc->sc_parent, sc->sc_dev, &cmd);
if (err)
return (NULL);
manufacturer_id = rxBuf[1];
dev_id = (rxBuf[2] << 8) | (rxBuf[3]);
for (i = 0; i < nitems(flash_devices); i++) {
if ((flash_devices[i].manufacturer_id == manufacturer_id) &&
(flash_devices[i].device_id == dev_id))
return &flash_devices[i];
}
device_printf(sc->sc_dev,
"Unknown SPI flash device. Vendor: %02x, device id: %04x\n",
manufacturer_id, dev_id);
return (NULL);
}
static int
mx25l_set_writable(struct mx25l_softc *sc, int writable)
{
uint8_t txBuf[1], rxBuf[1];
struct spi_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
memset(txBuf, 0, sizeof(txBuf));
memset(rxBuf, 0, sizeof(rxBuf));
txBuf[0] = writable ? CMD_WRITE_ENABLE : CMD_WRITE_DISABLE;
cmd.tx_cmd = txBuf;
cmd.rx_cmd = rxBuf;
cmd.rx_cmd_sz = 1;
cmd.tx_cmd_sz = 1;
err = SPIBUS_TRANSFER(sc->sc_parent, sc->sc_dev, &cmd);
return (err);
}
static int
mx25l_erase_cmd(struct mx25l_softc *sc, off_t sector)
{
uint8_t txBuf[5], rxBuf[5];
struct spi_command cmd;
int err;
if ((err = mx25l_set_writable(sc, 1)) != 0)
return (err);
memset(&cmd, 0, sizeof(cmd));
memset(txBuf, 0, sizeof(txBuf));
memset(rxBuf, 0, sizeof(rxBuf));
cmd.tx_cmd = txBuf;
cmd.rx_cmd = rxBuf;
if (sc->sc_flags & FL_ERASE_4K)
txBuf[0] = CMD_BLOCK_4K_ERASE;
else if (sc->sc_flags & FL_ERASE_32K)
txBuf[0] = CMD_BLOCK_32K_ERASE;
else
txBuf[0] = CMD_SECTOR_ERASE;
if (sc->sc_flags & FL_ENABLE_4B_ADDR) {
cmd.rx_cmd_sz = 5;
cmd.tx_cmd_sz = 5;
txBuf[1] = ((sector >> 24) & 0xff);
txBuf[2] = ((sector >> 16) & 0xff);
txBuf[3] = ((sector >> 8) & 0xff);
txBuf[4] = (sector & 0xff);
} else {
cmd.rx_cmd_sz = 4;
cmd.tx_cmd_sz = 4;
txBuf[1] = ((sector >> 16) & 0xff);
txBuf[2] = ((sector >> 8) & 0xff);
txBuf[3] = (sector & 0xff);
}
if ((err = SPIBUS_TRANSFER(sc->sc_parent, sc->sc_dev, &cmd)) != 0)
return (err);
err = mx25l_wait_for_device_ready(sc);
return (err);
}
static int
mx25l_write(struct mx25l_softc *sc, off_t offset, caddr_t data, off_t count)
{
uint8_t txBuf[8], rxBuf[8];
struct spi_command cmd;
off_t bytes_to_write;
int err = 0;
if (sc->sc_flags & FL_ENABLE_4B_ADDR) {
cmd.tx_cmd_sz = 5;
cmd.rx_cmd_sz = 5;
} else {
cmd.tx_cmd_sz = 4;
cmd.rx_cmd_sz = 4;
}
/*
* Writes must be aligned to the erase sectorsize, since blocks are
* fully erased before they're written to.
*/
if (count % sc->sc_erasesize != 0 || offset % sc->sc_erasesize != 0)
return (EIO);
/*
* Maximum write size for CMD_PAGE_PROGRAM is FLASH_PAGE_SIZE, so loop
* to write chunks of FLASH_PAGE_SIZE bytes each.
*/
while (count != 0) {
/* If we crossed a sector boundary, erase the next sector. */
if (((offset) % sc->sc_erasesize) == 0) {
err = mx25l_erase_cmd(sc, offset);
if (err)
break;
}
txBuf[0] = CMD_PAGE_PROGRAM;
if (sc->sc_flags & FL_ENABLE_4B_ADDR) {
txBuf[1] = (offset >> 24) & 0xff;
txBuf[2] = (offset >> 16) & 0xff;
txBuf[3] = (offset >> 8) & 0xff;
txBuf[4] = offset & 0xff;
} else {
txBuf[1] = (offset >> 16) & 0xff;
txBuf[2] = (offset >> 8) & 0xff;
txBuf[3] = offset & 0xff;
}
bytes_to_write = MIN(FLASH_PAGE_SIZE, count);
cmd.tx_cmd = txBuf;
cmd.rx_cmd = rxBuf;
cmd.tx_data = data;
cmd.rx_data = sc->sc_dummybuf;
cmd.tx_data_sz = (uint32_t)bytes_to_write;
cmd.rx_data_sz = (uint32_t)bytes_to_write;
/*
* Each completed write operation resets WEL (write enable
* latch) to disabled state, so we re-enable it here.
*/
if ((err = mx25l_wait_for_device_ready(sc)) != 0)
break;
if ((err = mx25l_set_writable(sc, 1)) != 0)
break;
err = SPIBUS_TRANSFER(sc->sc_parent, sc->sc_dev, &cmd);
if (err != 0)
break;
err = mx25l_wait_for_device_ready(sc);
if (err)
break;
data += bytes_to_write;
offset += bytes_to_write;
count -= bytes_to_write;
}
return (err);
}
static int
mx25l_read(struct mx25l_softc *sc, off_t offset, caddr_t data, off_t count)
{
uint8_t txBuf[8], rxBuf[8];
struct spi_command cmd;
int err = 0;
/*
* Enforce that reads are aligned to the disk sectorsize, not the
* erase sectorsize. In this way, smaller read IO is possible,
* dramatically speeding up filesystem/geom_compress access.
*/
if (count % sc->sc_disk->d_sectorsize != 0 ||
offset % sc->sc_disk->d_sectorsize != 0)
return (EIO);
txBuf[0] = CMD_FAST_READ;
if (sc->sc_flags & FL_ENABLE_4B_ADDR) {
cmd.tx_cmd_sz = 6;
cmd.rx_cmd_sz = 6;
txBuf[1] = (offset >> 24) & 0xff;
txBuf[2] = (offset >> 16) & 0xff;
txBuf[3] = (offset >> 8) & 0xff;
txBuf[4] = offset & 0xff;
/* Dummy byte */
txBuf[5] = 0;
} else {
cmd.tx_cmd_sz = 5;
cmd.rx_cmd_sz = 5;
txBuf[1] = (offset >> 16) & 0xff;
txBuf[2] = (offset >> 8) & 0xff;
txBuf[3] = offset & 0xff;
/* Dummy byte */
txBuf[4] = 0;
}
cmd.tx_cmd = txBuf;
cmd.rx_cmd = rxBuf;
cmd.tx_data = data;
cmd.rx_data = data;
cmd.tx_data_sz = count;
cmd.rx_data_sz = count;
err = SPIBUS_TRANSFER(sc->sc_parent, sc->sc_dev, &cmd);
return (err);
}
static int
mx25l_set_4b_mode(struct mx25l_softc *sc, uint8_t command)
{
uint8_t txBuf[1], rxBuf[1];
struct spi_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
memset(txBuf, 0, sizeof(txBuf));
memset(rxBuf, 0, sizeof(rxBuf));
cmd.tx_cmd_sz = cmd.rx_cmd_sz = 1;
cmd.tx_cmd = txBuf;
cmd.rx_cmd = rxBuf;
txBuf[0] = command;
if ((err = SPIBUS_TRANSFER(sc->sc_parent, sc->sc_dev, &cmd)) == 0)
err = mx25l_wait_for_device_ready(sc);
return (err);
}
#ifdef FDT
static struct ofw_compat_data compat_data[] = {
{ "st,m25p", 1 },
{ "jedec,spi-nor", 1 },
{ NULL, 0 },
};
#endif
static int
mx25l_probe(device_t dev)
{
#ifdef FDT
int i;
if (!ofw_bus_status_okay(dev))
return (ENXIO);
/* First try to match the compatible property to the compat_data */
if (ofw_bus_search_compatible(dev, compat_data)->ocd_data == 1)
goto found;
/*
* Next, try to find a compatible device using the names in the
* flash_devices structure
*/
for (i = 0; i < nitems(flash_devices); i++)
if (ofw_bus_is_compatible(dev, flash_devices[i].name))
goto found;
return (ENXIO);
found:
#endif
device_set_desc(dev, "M25Pxx Flash Family");
return (0);
}
static int
mx25l_attach(device_t dev)
{
struct mx25l_softc *sc;
struct mx25l_flash_ident *ident;
int err;
sc = device_get_softc(dev);
sc->sc_dev = dev;
sc->sc_parent = device_get_parent(sc->sc_dev);
M25PXX_LOCK_INIT(sc);
ident = mx25l_get_device_ident(sc);
if (ident == NULL)
return (ENXIO);
if ((err = mx25l_wait_for_device_ready(sc)) != 0)
return (err);
sc->sc_flags = ident->flags;
if (sc->sc_flags & FL_ERASE_4K)
sc->sc_erasesize = 4 * 1024;
else if (sc->sc_flags & FL_ERASE_32K)
sc->sc_erasesize = 32 * 1024;
else
sc->sc_erasesize = ident->sectorsize;
if (sc->sc_flags & FL_ENABLE_4B_ADDR) {
if ((err = mx25l_set_4b_mode(sc, CMD_ENTER_4B_MODE)) != 0)
return (err);
} else if (sc->sc_flags & FL_DISABLE_4B_ADDR) {
if ((err = mx25l_set_4b_mode(sc, CMD_EXIT_4B_MODE)) != 0)
return (err);
}
sc->sc_disk = disk_alloc();
sc->sc_disk->d_open = mx25l_open;
sc->sc_disk->d_close = mx25l_close;
sc->sc_disk->d_strategy = mx25l_strategy;
sc->sc_disk->d_getattr = mx25l_getattr;
sc->sc_disk->d_ioctl = mx25l_ioctl;
sc->sc_disk->d_name = "flash/spi";
sc->sc_disk->d_drv1 = sc;
sc->sc_disk->d_maxsize = DFLTPHYS;
sc->sc_disk->d_sectorsize = MX25L_SECTORSIZE;
sc->sc_disk->d_mediasize = ident->sectorsize * ident->sectorcount;
sc->sc_disk->d_stripesize = sc->sc_erasesize;
sc->sc_disk->d_unit = device_get_unit(sc->sc_dev);
sc->sc_disk->d_dump = NULL; /* NB: no dumps */
strlcpy(sc->sc_disk->d_descr, ident->name,
sizeof(sc->sc_disk->d_descr));
disk_create(sc->sc_disk, DISK_VERSION);
bioq_init(&sc->sc_bio_queue);
kproc_create(&mx25l_task, sc, &sc->sc_p, 0, 0, "task: mx25l flash");
sc->sc_taskstate = TSTATE_RUNNING;
device_printf(sc->sc_dev,
"device type %s, size %dK in %d sectors of %dK, erase size %dK\n",
ident->name,
ident->sectorcount * ident->sectorsize / 1024,
ident->sectorcount, ident->sectorsize / 1024,
sc->sc_erasesize / 1024);
return (0);
}
static int
mx25l_detach(device_t dev)
{
struct mx25l_softc *sc;
int err;
sc = device_get_softc(dev);
err = 0;
M25PXX_LOCK(sc);
if (sc->sc_taskstate == TSTATE_RUNNING) {
sc->sc_taskstate = TSTATE_STOPPING;
wakeup(sc);
while (err == 0 && sc->sc_taskstate != TSTATE_STOPPED) {
err = msleep(sc, &sc->sc_mtx, 0, "mx25dt", hz * 3);
if (err != 0) {
sc->sc_taskstate = TSTATE_RUNNING;
device_printf(sc->sc_dev,
"Failed to stop queue task\n");
}
}
}
M25PXX_UNLOCK(sc);
if (err == 0 && sc->sc_taskstate == TSTATE_STOPPED) {
disk_destroy(sc->sc_disk);
bioq_flush(&sc->sc_bio_queue, NULL, ENXIO);
M25PXX_LOCK_DESTROY(sc);
}
return (err);
}
static int
mx25l_open(struct disk *dp)
{
return (0);
}
static int
mx25l_close(struct disk *dp)
{
return (0);
}
static int
mx25l_ioctl(struct disk *dp, u_long cmd, void *data, int fflag,
struct thread *td)
{
return (EINVAL);
}
static void
mx25l_strategy(struct bio *bp)
{
struct mx25l_softc *sc;
sc = (struct mx25l_softc *)bp->bio_disk->d_drv1;
M25PXX_LOCK(sc);
bioq_disksort(&sc->sc_bio_queue, bp);
wakeup(sc);
M25PXX_UNLOCK(sc);
}
static int
mx25l_getattr(struct bio *bp)
{
struct mx25l_softc *sc;
device_t dev;
if (bp->bio_disk == NULL || bp->bio_disk->d_drv1 == NULL)
return (ENXIO);
sc = bp->bio_disk->d_drv1;
dev = sc->sc_dev;
if (strcmp(bp->bio_attribute, "SPI::device") == 0) {
if (bp->bio_length != sizeof(dev))
return (EFAULT);
bcopy(&dev, bp->bio_data, sizeof(dev));
} else
return (-1);
return (0);
}
static void
mx25l_task(void *arg)
{
struct mx25l_softc *sc = (struct mx25l_softc*)arg;
struct bio *bp;
device_t dev;
for (;;) {
dev = sc->sc_dev;
M25PXX_LOCK(sc);
do {
if (sc->sc_taskstate == TSTATE_STOPPING) {
sc->sc_taskstate = TSTATE_STOPPED;
M25PXX_UNLOCK(sc);
wakeup(sc);
kproc_exit(0);
}
bp = bioq_first(&sc->sc_bio_queue);
if (bp == NULL)
msleep(sc, &sc->sc_mtx, PRIBIO, "mx25jq", 0);
} while (bp == NULL);
bioq_remove(&sc->sc_bio_queue, bp);
M25PXX_UNLOCK(sc);
switch (bp->bio_cmd) {
case BIO_READ:
bp->bio_error = mx25l_read(sc, bp->bio_offset,
bp->bio_data, bp->bio_bcount);
break;
case BIO_WRITE:
bp->bio_error = mx25l_write(sc, bp->bio_offset,
bp->bio_data, bp->bio_bcount);
break;
default:
bp->bio_error = EINVAL;
}
biodone(bp);
}
}
static devclass_t mx25l_devclass;
static device_method_t mx25l_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, mx25l_probe),
DEVMETHOD(device_attach, mx25l_attach),
DEVMETHOD(device_detach, mx25l_detach),
{ 0, 0 }
};
static driver_t mx25l_driver = {
"mx25l",
mx25l_methods,
sizeof(struct mx25l_softc),
};
DRIVER_MODULE(mx25l, spibus, mx25l_driver, mx25l_devclass, 0, 0);
MODULE_DEPEND(mx25l, spibus, 1, 1, 1);
#ifdef FDT
SPIBUS_PNP_INFO(compat_data);
#endif
Index: projects/clang800-import/sys/dev/flash/n25q.c
===================================================================
--- projects/clang800-import/sys/dev/flash/n25q.c (revision 343955)
+++ projects/clang800-import/sys/dev/flash/n25q.c (revision 343956)
@@ -1,490 +1,490 @@
/*-
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
* Copyright (c) 2009 Oleksandr Tymoshenko. All rights reserved.
* Copyright (c) 2017 Ruslan Bukin <br@bsdpad.com>
* Copyright (c) 2018 Ian Lepore. All rights reserved.
* All rights reserved.
*
* This software was developed by SRI International and the University of
* Cambridge Computer Laboratory under DARPA/AFRL contract FA8750-10-C-0237
* ("CTSRD"), as part of the DARPA CRASH research programme.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/* n25q Quad SPI Flash driver. */
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_platform.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/lock.h>
#include <sys/mbuf.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <geom/geom_disk.h>
#include <machine/bus.h>
#include <dev/fdt/fdt_common.h>
#include <dev/ofw/ofw_bus_subr.h>
#include <dev/ofw/openfirm.h>
#include <dev/flash/mx25lreg.h>
#include "qspi_if.h"
#define N25Q_DEBUG
#undef N25Q_DEBUG
#ifdef N25Q_DEBUG
#define dprintf(fmt, ...) printf(fmt, ##__VA_ARGS__)
#else
#define dprintf(fmt, ...)
#endif
#define FL_NONE 0x00
#define FL_ERASE_4K 0x01
#define FL_ERASE_32K 0x02
#define FL_ENABLE_4B_ADDR 0x04
#define FL_DISABLE_4B_ADDR 0x08
/*
* Define the sectorsize to be smaller than the flash erase sector
* size. Trying to run FFS on top of a 64k flash sector size
* results in a completely unusable system.
*/
#define FLASH_SECTORSIZE 512
struct n25q_flash_ident {
const char *name;
uint8_t manufacturer_id;
uint16_t device_id;
unsigned int sectorsize;
unsigned int sectorcount;
unsigned int flags;
};
struct n25q_softc {
device_t dev;
bus_space_tag_t bst;
bus_space_handle_t bsh;
void *ih;
struct resource *res[3];
uint8_t sc_manufacturer_id;
uint16_t device_id;
unsigned int sc_sectorsize;
struct mtx sc_mtx;
struct disk *sc_disk;
struct proc *sc_p;
struct bio_queue_head sc_bio_queue;
unsigned int sc_flags;
unsigned int sc_taskstate;
};
#define TSTATE_STOPPED 0
#define TSTATE_STOPPING 1
#define TSTATE_RUNNING 2
#define N25Q_LOCK(_sc) mtx_lock(&(_sc)->sc_mtx)
#define N25Q_UNLOCK(_sc) mtx_unlock(&(_sc)->sc_mtx)
#define N25Q_LOCK_INIT(_sc) \
mtx_init(&_sc->sc_mtx, device_get_nameunit(_sc->dev), \
"n25q", MTX_DEF)
#define N25Q_LOCK_DESTROY(_sc) mtx_destroy(&_sc->sc_mtx);
#define N25Q_ASSERT_LOCKED(_sc) \
mtx_assert(&_sc->sc_mtx, MA_OWNED);
#define N25Q_ASSERT_UNLOCKED(_sc) \
mtx_assert(&_sc->sc_mtx, MA_NOTOWNED);
static struct ofw_compat_data compat_data[] = {
{ "n25q00aa", 1 },
{ NULL, 0 },
};
/* disk routines */
static int n25q_open(struct disk *dp);
static int n25q_close(struct disk *dp);
static int n25q_ioctl(struct disk *, u_long, void *, int, struct thread *);
static void n25q_strategy(struct bio *bp);
static int n25q_getattr(struct bio *bp);
static void n25q_task(void *arg);
static struct n25q_flash_ident flash_devices[] = {
{ "n25q00", 0x20, 0xbb21, (64 * 1024), 2048, FL_ENABLE_4B_ADDR},
};
static int
n25q_wait_for_device_ready(device_t dev)
{
device_t pdev;
uint8_t status;
int err;
pdev = device_get_parent(dev);
do {
err = QSPI_READ_REG(pdev, dev, CMD_READ_STATUS, &status, 1);
} while (err == 0 && (status & STATUS_WIP));
return (err);
}
static struct n25q_flash_ident*
n25q_get_device_ident(struct n25q_softc *sc)
{
uint8_t manufacturer_id;
uint16_t dev_id;
device_t pdev;
uint8_t data[4];
int i;
pdev = device_get_parent(sc->dev);
QSPI_READ_REG(pdev, sc->dev, CMD_READ_IDENT, (uint8_t *)&data[0], 4);
manufacturer_id = data[0];
dev_id = (data[1] << 8) | (data[2]);
for (i = 0; i < nitems(flash_devices); i++) {
if ((flash_devices[i].manufacturer_id == manufacturer_id) &&
(flash_devices[i].device_id == dev_id))
return &flash_devices[i];
}
printf("Unknown SPI flash device. Vendor: %02x, device id: %04x\n",
manufacturer_id, dev_id);
return (NULL);
}
static int
n25q_write(device_t dev, struct bio *bp, off_t offset, caddr_t data, off_t count)
{
struct n25q_softc *sc;
device_t pdev;
int err;
pdev = device_get_parent(dev);
sc = device_get_softc(dev);
dprintf("%s: offset 0x%jx count %jd bytes\n", __func__, (uintmax_t)offset, (intmax_t)count);
err = QSPI_ERASE(pdev, dev, offset);
if (err != 0) {
return (err);
}
err = QSPI_WRITE(pdev, dev, bp, offset, data, count);
return (err);
}
static int
n25q_read(device_t dev, struct bio *bp, off_t offset, caddr_t data, off_t count)
{
struct n25q_softc *sc;
device_t pdev;
int err;
pdev = device_get_parent(dev);
sc = device_get_softc(dev);
dprintf("%s: offset 0x%jx count %jd bytes\n", __func__, (uintmax_t)offset, (intmax_t)count);
/*
* Enforce the disk read sectorsize, not the erase sectorsize.
* In this way, smaller read I/O is possible, dramatically
* speeding up filesystem/geom_compress access.
*/
if (count % sc->sc_disk->d_sectorsize != 0
|| offset % sc->sc_disk->d_sectorsize != 0) {
printf("EIO\n");
return (EIO);
}
err = QSPI_READ(pdev, dev, bp, offset, data, count);
return (err);
}
static int
n25q_set_4b_mode(device_t dev, uint8_t command)
{
struct n25q_softc *sc;
device_t pdev;
int err;
pdev = device_get_parent(dev);
sc = device_get_softc(dev);
err = QSPI_WRITE_REG(pdev, dev, command, 0, 0);
return (err);
}
static int
n25q_probe(device_t dev)
{
int i;
if (!ofw_bus_status_okay(dev))
return (ENXIO);
/* First try to match the compatible property to the compat_data */
if (ofw_bus_search_compatible(dev, compat_data)->ocd_data == 1)
goto found;
/*
* Next, try to find a compatible device using the names in the
* flash_devices structure
*/
for (i = 0; i < nitems(flash_devices); i++)
if (ofw_bus_is_compatible(dev, flash_devices[i].name))
goto found;
return (ENXIO);
found:
device_set_desc(dev, "Micron n25q");
return (0);
}
static int
n25q_attach(device_t dev)
{
struct n25q_flash_ident *ident;
struct n25q_softc *sc;
int err;
sc = device_get_softc(dev);
sc->dev = dev;
N25Q_LOCK_INIT(sc);
ident = n25q_get_device_ident(sc);
if (ident == NULL) {
return (ENXIO);
}
if ((err = n25q_wait_for_device_ready(sc->dev)) != 0)
return (err);
sc->sc_disk = disk_alloc();
sc->sc_disk->d_open = n25q_open;
sc->sc_disk->d_close = n25q_close;
sc->sc_disk->d_strategy = n25q_strategy;
sc->sc_disk->d_getattr = n25q_getattr;
sc->sc_disk->d_ioctl = n25q_ioctl;
sc->sc_disk->d_name = "flash/qspi";
sc->sc_disk->d_drv1 = sc;
sc->sc_disk->d_maxsize = DFLTPHYS;
sc->sc_disk->d_sectorsize = FLASH_SECTORSIZE;
sc->sc_disk->d_mediasize = (ident->sectorsize * ident->sectorcount);
sc->sc_disk->d_unit = device_get_unit(sc->dev);
sc->sc_disk->d_dump = NULL;
/* Sectorsize for erase operations */
sc->sc_sectorsize = ident->sectorsize;
sc->sc_flags = ident->flags;
if (sc->sc_flags & FL_ENABLE_4B_ADDR)
n25q_set_4b_mode(dev, CMD_ENTER_4B_MODE);
if (sc->sc_flags & FL_DISABLE_4B_ADDR)
n25q_set_4b_mode(dev, CMD_EXIT_4B_MODE);
/* NB: use stripesize to hold the erase/region size for RedBoot */
sc->sc_disk->d_stripesize = ident->sectorsize;
disk_create(sc->sc_disk, DISK_VERSION);
bioq_init(&sc->sc_bio_queue);
kproc_create(&n25q_task, sc, &sc->sc_p, 0, 0, "task: n25q flash");
sc->sc_taskstate = TSTATE_RUNNING;
device_printf(sc->dev, "%s, sector %d bytes, %d sectors\n",
ident->name, ident->sectorsize, ident->sectorcount);
return (0);
}
static int
n25q_detach(device_t dev)
{
struct n25q_softc *sc;
int err;
sc = device_get_softc(dev);
err = 0;
N25Q_LOCK(sc);
if (sc->sc_taskstate == TSTATE_RUNNING) {
sc->sc_taskstate = TSTATE_STOPPING;
wakeup(sc);
while (err == 0 && sc->sc_taskstate != TSTATE_STOPPED) {
err = msleep(sc, &sc->sc_mtx, 0, "n25q", hz * 3);
if (err != 0) {
sc->sc_taskstate = TSTATE_RUNNING;
device_printf(sc->dev,
"Failed to stop queue task\n");
}
}
}
N25Q_UNLOCK(sc);
if (err == 0 && sc->sc_taskstate == TSTATE_STOPPED) {
disk_destroy(sc->sc_disk);
bioq_flush(&sc->sc_bio_queue, NULL, ENXIO);
N25Q_LOCK_DESTROY(sc);
}
return (err);
}
static int
n25q_open(struct disk *dp)
{
return (0);
}
static int
n25q_close(struct disk *dp)
{
return (0);
}
static int
n25q_ioctl(struct disk *dp, u_long cmd, void *data,
int fflag, struct thread *td)
{
return (EINVAL);
}
static void
n25q_strategy(struct bio *bp)
{
struct n25q_softc *sc;
sc = (struct n25q_softc *)bp->bio_disk->d_drv1;
N25Q_LOCK(sc);
bioq_disksort(&sc->sc_bio_queue, bp);
wakeup(sc);
N25Q_UNLOCK(sc);
}
static int
n25q_getattr(struct bio *bp)
{
struct n25q_softc *sc;
device_t dev;
if (bp->bio_disk == NULL || bp->bio_disk->d_drv1 == NULL) {
return (ENXIO);
}
sc = bp->bio_disk->d_drv1;
dev = sc->dev;
if (strcmp(bp->bio_attribute, "SPI::device") == 0) {
if (bp->bio_length != sizeof(dev)) {
return (EFAULT);
}
bcopy(&dev, bp->bio_data, sizeof(dev));
return (0);
}
return (-1);
}
static void
n25q_task(void *arg)
{
struct n25q_softc *sc;
struct bio *bp;
device_t dev;
sc = (struct n25q_softc *)arg;
dev = sc->dev;
for (;;) {
N25Q_LOCK(sc);
do {
if (sc->sc_taskstate == TSTATE_STOPPING) {
sc->sc_taskstate = TSTATE_STOPPED;
N25Q_UNLOCK(sc);
wakeup(sc);
kproc_exit(0);
}
bp = bioq_first(&sc->sc_bio_queue);
if (bp == NULL)
msleep(sc, &sc->sc_mtx, PRIBIO, "jobqueue", hz);
} while (bp == NULL);
bioq_remove(&sc->sc_bio_queue, bp);
N25Q_UNLOCK(sc);
switch (bp->bio_cmd) {
case BIO_READ:
bp->bio_error = n25q_read(dev, bp, bp->bio_offset,
bp->bio_data, bp->bio_bcount);
break;
case BIO_WRITE:
bp->bio_error = n25q_write(dev, bp, bp->bio_offset,
bp->bio_data, bp->bio_bcount);
break;
default:
bp->bio_error = EINVAL;
}
biodone(bp);
}
}
static devclass_t n25q_devclass;
static device_method_t n25q_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, n25q_probe),
DEVMETHOD(device_attach, n25q_attach),
DEVMETHOD(device_detach, n25q_detach),
{ 0, 0 }
};
static driver_t n25q_driver = {
"n25q",
n25q_methods,
sizeof(struct n25q_softc),
};
DRIVER_MODULE(n25q, simplebus, n25q_driver, n25q_devclass, 0, 0);
Index: projects/clang800-import/sys/dev/iwn/if_iwn.c
===================================================================
--- projects/clang800-import/sys/dev/iwn/if_iwn.c (revision 343955)
+++ projects/clang800-import/sys/dev/iwn/if_iwn.c (revision 343956)
@@ -1,9224 +1,9240 @@
/*-
* Copyright (c) 2007-2009 Damien Bergamini <damien.bergamini@free.fr>
* Copyright (c) 2008 Benjamin Close <benjsc@FreeBSD.org>
* Copyright (c) 2008 Sam Leffler, Errno Consulting
* Copyright (c) 2011 Intel Corporation
* Copyright (c) 2013 Cedric GROSS <c.gross@kreiz-it.fr>
* Copyright (c) 2013 Adrian Chadd <adrian@FreeBSD.org>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
* Driver for Intel WiFi Link 4965 and 1000/5000/6000 Series 802.11 network
* adapters.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_wlan.h"
#include "opt_iwn.h"
#include <sys/param.h>
#include <sys/sockio.h>
#include <sys/sysctl.h>
#include <sys/mbuf.h>
#include <sys/kernel.h>
#include <sys/socket.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/rman.h>
#include <sys/endian.h>
#include <sys/firmware.h>
#include <sys/limits.h>
#include <sys/module.h>
#include <sys/priv.h>
#include <sys/queue.h>
#include <sys/taskqueue.h>
#include <machine/bus.h>
#include <machine/resource.h>
#include <machine/clock.h>
#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/if_dl.h>
#include <net/if_media.h>
#include <netinet/in.h>
#include <netinet/if_ether.h>
#include <net80211/ieee80211_var.h>
#include <net80211/ieee80211_radiotap.h>
#include <net80211/ieee80211_regdomain.h>
#include <net80211/ieee80211_ratectl.h>
#include <dev/iwn/if_iwnreg.h>
#include <dev/iwn/if_iwnvar.h>
#include <dev/iwn/if_iwn_devid.h>
#include <dev/iwn/if_iwn_chip_cfg.h>
#include <dev/iwn/if_iwn_debug.h>
#include <dev/iwn/if_iwn_ioctl.h>
struct iwn_ident {
uint16_t vendor;
uint16_t device;
const char *name;
};
static const struct iwn_ident iwn_ident_table[] = {
{ 0x8086, IWN_DID_6x05_1, "Intel Centrino Advanced-N 6205" },
{ 0x8086, IWN_DID_1000_1, "Intel Centrino Wireless-N 1000" },
{ 0x8086, IWN_DID_1000_2, "Intel Centrino Wireless-N 1000" },
{ 0x8086, IWN_DID_6x05_2, "Intel Centrino Advanced-N 6205" },
{ 0x8086, IWN_DID_6050_1, "Intel Centrino Advanced-N + WiMAX 6250" },
{ 0x8086, IWN_DID_6050_2, "Intel Centrino Advanced-N + WiMAX 6250" },
{ 0x8086, IWN_DID_x030_1, "Intel Centrino Wireless-N 1030" },
{ 0x8086, IWN_DID_x030_2, "Intel Centrino Wireless-N 1030" },
{ 0x8086, IWN_DID_x030_3, "Intel Centrino Advanced-N 6230" },
{ 0x8086, IWN_DID_x030_4, "Intel Centrino Advanced-N 6230" },
{ 0x8086, IWN_DID_6150_1, "Intel Centrino Wireless-N + WiMAX 6150" },
{ 0x8086, IWN_DID_6150_2, "Intel Centrino Wireless-N + WiMAX 6150" },
{ 0x8086, IWN_DID_2x00_1, "Intel(R) Centrino(R) Wireless-N 2200 BGN" },
{ 0x8086, IWN_DID_2x00_2, "Intel(R) Centrino(R) Wireless-N 2200 BGN" },
/* XXX 2200D is IWN_SDID_2x00_4; there's no way to express this here! */
{ 0x8086, IWN_DID_2x30_1, "Intel Centrino Wireless-N 2230" },
{ 0x8086, IWN_DID_2x30_2, "Intel Centrino Wireless-N 2230" },
{ 0x8086, IWN_DID_130_1, "Intel Centrino Wireless-N 130" },
{ 0x8086, IWN_DID_130_2, "Intel Centrino Wireless-N 130" },
{ 0x8086, IWN_DID_100_1, "Intel Centrino Wireless-N 100" },
{ 0x8086, IWN_DID_100_2, "Intel Centrino Wireless-N 100" },
{ 0x8086, IWN_DID_105_1, "Intel Centrino Wireless-N 105" },
{ 0x8086, IWN_DID_105_2, "Intel Centrino Wireless-N 105" },
{ 0x8086, IWN_DID_135_1, "Intel Centrino Wireless-N 135" },
{ 0x8086, IWN_DID_135_2, "Intel Centrino Wireless-N 135" },
{ 0x8086, IWN_DID_4965_1, "Intel Wireless WiFi Link 4965" },
{ 0x8086, IWN_DID_6x00_1, "Intel Centrino Ultimate-N 6300" },
{ 0x8086, IWN_DID_6x00_2, "Intel Centrino Advanced-N 6200" },
{ 0x8086, IWN_DID_4965_2, "Intel Wireless WiFi Link 4965" },
{ 0x8086, IWN_DID_4965_3, "Intel Wireless WiFi Link 4965" },
{ 0x8086, IWN_DID_5x00_1, "Intel WiFi Link 5100" },
{ 0x8086, IWN_DID_4965_4, "Intel Wireless WiFi Link 4965" },
{ 0x8086, IWN_DID_5x00_3, "Intel Ultimate N WiFi Link 5300" },
{ 0x8086, IWN_DID_5x00_4, "Intel Ultimate N WiFi Link 5300" },
{ 0x8086, IWN_DID_5x00_2, "Intel WiFi Link 5100" },
{ 0x8086, IWN_DID_6x00_3, "Intel Centrino Ultimate-N 6300" },
{ 0x8086, IWN_DID_6x00_4, "Intel Centrino Advanced-N 6200" },
{ 0x8086, IWN_DID_5x50_1, "Intel WiMAX/WiFi Link 5350" },
{ 0x8086, IWN_DID_5x50_2, "Intel WiMAX/WiFi Link 5350" },
{ 0x8086, IWN_DID_5x50_3, "Intel WiMAX/WiFi Link 5150" },
{ 0x8086, IWN_DID_5x50_4, "Intel WiMAX/WiFi Link 5150" },
{ 0x8086, IWN_DID_6035_1, "Intel Centrino Advanced 6235" },
{ 0x8086, IWN_DID_6035_2, "Intel Centrino Advanced 6235" },
{ 0, 0, NULL }
};
static int iwn_probe(device_t);
static int iwn_attach(device_t);
static void iwn4965_attach(struct iwn_softc *, uint16_t);
static void iwn5000_attach(struct iwn_softc *, uint16_t);
static int iwn_config_specific(struct iwn_softc *, uint16_t);
static void iwn_radiotap_attach(struct iwn_softc *);
static void iwn_sysctlattach(struct iwn_softc *);
static struct ieee80211vap *iwn_vap_create(struct ieee80211com *,
const char [IFNAMSIZ], int, enum ieee80211_opmode, int,
const uint8_t [IEEE80211_ADDR_LEN],
const uint8_t [IEEE80211_ADDR_LEN]);
static void iwn_vap_delete(struct ieee80211vap *);
static int iwn_detach(device_t);
static int iwn_shutdown(device_t);
static int iwn_suspend(device_t);
static int iwn_resume(device_t);
static int iwn_nic_lock(struct iwn_softc *);
static int iwn_eeprom_lock(struct iwn_softc *);
static int iwn_init_otprom(struct iwn_softc *);
static int iwn_read_prom_data(struct iwn_softc *, uint32_t, void *, int);
static void iwn_dma_map_addr(void *, bus_dma_segment_t *, int, int);
static int iwn_dma_contig_alloc(struct iwn_softc *, struct iwn_dma_info *,
void **, bus_size_t, bus_size_t);
static void iwn_dma_contig_free(struct iwn_dma_info *);
static int iwn_alloc_sched(struct iwn_softc *);
static void iwn_free_sched(struct iwn_softc *);
static int iwn_alloc_kw(struct iwn_softc *);
static void iwn_free_kw(struct iwn_softc *);
static int iwn_alloc_ict(struct iwn_softc *);
static void iwn_free_ict(struct iwn_softc *);
static int iwn_alloc_fwmem(struct iwn_softc *);
static void iwn_free_fwmem(struct iwn_softc *);
static int iwn_alloc_rx_ring(struct iwn_softc *, struct iwn_rx_ring *);
static void iwn_reset_rx_ring(struct iwn_softc *, struct iwn_rx_ring *);
static void iwn_free_rx_ring(struct iwn_softc *, struct iwn_rx_ring *);
static int iwn_alloc_tx_ring(struct iwn_softc *, struct iwn_tx_ring *,
int);
static void iwn_reset_tx_ring(struct iwn_softc *, struct iwn_tx_ring *);
static void iwn_free_tx_ring(struct iwn_softc *, struct iwn_tx_ring *);
static void iwn_check_tx_ring(struct iwn_softc *, int);
static void iwn5000_ict_reset(struct iwn_softc *);
static int iwn_read_eeprom(struct iwn_softc *,
uint8_t macaddr[IEEE80211_ADDR_LEN]);
static void iwn4965_read_eeprom(struct iwn_softc *);
#ifdef IWN_DEBUG
static void iwn4965_print_power_group(struct iwn_softc *, int);
#endif
static void iwn5000_read_eeprom(struct iwn_softc *);
static uint32_t iwn_eeprom_channel_flags(struct iwn_eeprom_chan *);
static void iwn_read_eeprom_band(struct iwn_softc *, int, int, int *,
struct ieee80211_channel[]);
static void iwn_read_eeprom_ht40(struct iwn_softc *, int, int, int *,
struct ieee80211_channel[]);
static void iwn_read_eeprom_channels(struct iwn_softc *, int, uint32_t);
static struct iwn_eeprom_chan *iwn_find_eeprom_channel(struct iwn_softc *,
struct ieee80211_channel *);
static void iwn_getradiocaps(struct ieee80211com *, int, int *,
struct ieee80211_channel[]);
static int iwn_setregdomain(struct ieee80211com *,
struct ieee80211_regdomain *, int,
struct ieee80211_channel[]);
static void iwn_read_eeprom_enhinfo(struct iwn_softc *);
static struct ieee80211_node *iwn_node_alloc(struct ieee80211vap *,
const uint8_t mac[IEEE80211_ADDR_LEN]);
static void iwn_newassoc(struct ieee80211_node *, int);
static int iwn_media_change(struct ifnet *);
static int iwn_newstate(struct ieee80211vap *, enum ieee80211_state, int);
static void iwn_calib_timeout(void *);
static void iwn_rx_phy(struct iwn_softc *, struct iwn_rx_desc *);
static void iwn_rx_done(struct iwn_softc *, struct iwn_rx_desc *,
struct iwn_rx_data *);
static void iwn_agg_tx_complete(struct iwn_softc *, struct iwn_tx_ring *,
int, int, int);
static void iwn_rx_compressed_ba(struct iwn_softc *, struct iwn_rx_desc *);
static void iwn5000_rx_calib_results(struct iwn_softc *,
struct iwn_rx_desc *);
static void iwn_rx_statistics(struct iwn_softc *, struct iwn_rx_desc *);
static void iwn4965_tx_done(struct iwn_softc *, struct iwn_rx_desc *,
struct iwn_rx_data *);
static void iwn5000_tx_done(struct iwn_softc *, struct iwn_rx_desc *,
struct iwn_rx_data *);
static void iwn_adj_ampdu_ptr(struct iwn_softc *, struct iwn_tx_ring *);
static void iwn_tx_done(struct iwn_softc *, struct iwn_rx_desc *, int, int,
uint8_t);
static int iwn_ampdu_check_bitmap(uint64_t, int, int);
static int iwn_ampdu_index_check(struct iwn_softc *, struct iwn_tx_ring *,
uint64_t, int, int);
static void iwn_ampdu_tx_done(struct iwn_softc *, int, int, int, void *);
static void iwn_cmd_done(struct iwn_softc *, struct iwn_rx_desc *);
static void iwn_notif_intr(struct iwn_softc *);
static void iwn_wakeup_intr(struct iwn_softc *);
static void iwn_rftoggle_task(void *, int);
static void iwn_fatal_intr(struct iwn_softc *);
static void iwn_intr(void *);
static void iwn4965_update_sched(struct iwn_softc *, int, int, uint8_t,
uint16_t);
static void iwn5000_update_sched(struct iwn_softc *, int, int, uint8_t,
uint16_t);
#ifdef notyet
static void iwn5000_reset_sched(struct iwn_softc *, int, int);
#endif
static int iwn_tx_data(struct iwn_softc *, struct mbuf *,
struct ieee80211_node *);
static int iwn_tx_data_raw(struct iwn_softc *, struct mbuf *,
struct ieee80211_node *,
const struct ieee80211_bpf_params *params);
static int iwn_tx_cmd(struct iwn_softc *, struct mbuf *,
struct ieee80211_node *, struct iwn_tx_ring *);
static void iwn_xmit_task(void *arg0, int pending);
static int iwn_raw_xmit(struct ieee80211_node *, struct mbuf *,
const struct ieee80211_bpf_params *);
static int iwn_transmit(struct ieee80211com *, struct mbuf *);
static void iwn_scan_timeout(void *);
static void iwn_watchdog(void *);
static int iwn_ioctl(struct ieee80211com *, u_long , void *);
static void iwn_parent(struct ieee80211com *);
static int iwn_cmd(struct iwn_softc *, int, const void *, int, int);
static int iwn4965_add_node(struct iwn_softc *, struct iwn_node_info *,
int);
static int iwn5000_add_node(struct iwn_softc *, struct iwn_node_info *,
int);
static int iwn_set_link_quality(struct iwn_softc *,
struct ieee80211_node *);
static int iwn_add_broadcast_node(struct iwn_softc *, int);
static int iwn_updateedca(struct ieee80211com *);
static void iwn_set_promisc(struct iwn_softc *);
static void iwn_update_promisc(struct ieee80211com *);
static void iwn_update_mcast(struct ieee80211com *);
static void iwn_set_led(struct iwn_softc *, uint8_t, uint8_t, uint8_t);
static int iwn_set_critical_temp(struct iwn_softc *);
static int iwn_set_timing(struct iwn_softc *, struct ieee80211_node *);
static void iwn4965_power_calibration(struct iwn_softc *, int);
static int iwn4965_set_txpower(struct iwn_softc *, int);
static int iwn5000_set_txpower(struct iwn_softc *, int);
static int iwn4965_get_rssi(struct iwn_softc *, struct iwn_rx_stat *);
static int iwn5000_get_rssi(struct iwn_softc *, struct iwn_rx_stat *);
static int iwn_get_noise(const struct iwn_rx_general_stats *);
static int iwn4965_get_temperature(struct iwn_softc *);
static int iwn5000_get_temperature(struct iwn_softc *);
static int iwn_init_sensitivity(struct iwn_softc *);
static void iwn_collect_noise(struct iwn_softc *,
const struct iwn_rx_general_stats *);
static int iwn4965_init_gains(struct iwn_softc *);
static int iwn5000_init_gains(struct iwn_softc *);
static int iwn4965_set_gains(struct iwn_softc *);
static int iwn5000_set_gains(struct iwn_softc *);
static void iwn_tune_sensitivity(struct iwn_softc *,
const struct iwn_rx_stats *);
static void iwn_save_stats_counters(struct iwn_softc *,
const struct iwn_stats *);
static int iwn_send_sensitivity(struct iwn_softc *);
static void iwn_check_rx_recovery(struct iwn_softc *, struct iwn_stats *);
static int iwn_set_pslevel(struct iwn_softc *, int, int, int);
static int iwn_send_btcoex(struct iwn_softc *);
static int iwn_send_advanced_btcoex(struct iwn_softc *);
static int iwn5000_runtime_calib(struct iwn_softc *);
static int iwn_check_bss_filter(struct iwn_softc *);
static int iwn4965_rxon_assoc(struct iwn_softc *, int);
static int iwn5000_rxon_assoc(struct iwn_softc *, int);
static int iwn_send_rxon(struct iwn_softc *, int, int);
static int iwn_config(struct iwn_softc *);
static int iwn_scan(struct iwn_softc *, struct ieee80211vap *,
struct ieee80211_scan_state *, struct ieee80211_channel *);
static int iwn_auth(struct iwn_softc *, struct ieee80211vap *vap);
static int iwn_run(struct iwn_softc *, struct ieee80211vap *vap);
static int iwn_ampdu_rx_start(struct ieee80211_node *,
struct ieee80211_rx_ampdu *, int, int, int);
static void iwn_ampdu_rx_stop(struct ieee80211_node *,
struct ieee80211_rx_ampdu *);
static int iwn_addba_request(struct ieee80211_node *,
struct ieee80211_tx_ampdu *, int, int, int);
static int iwn_addba_response(struct ieee80211_node *,
struct ieee80211_tx_ampdu *, int, int, int);
static int iwn_ampdu_tx_start(struct ieee80211com *,
struct ieee80211_node *, uint8_t);
static void iwn_ampdu_tx_stop(struct ieee80211_node *,
struct ieee80211_tx_ampdu *);
static void iwn4965_ampdu_tx_start(struct iwn_softc *,
struct ieee80211_node *, int, uint8_t, uint16_t);
static void iwn4965_ampdu_tx_stop(struct iwn_softc *, int,
uint8_t, uint16_t);
static void iwn5000_ampdu_tx_start(struct iwn_softc *,
struct ieee80211_node *, int, uint8_t, uint16_t);
static void iwn5000_ampdu_tx_stop(struct iwn_softc *, int,
uint8_t, uint16_t);
static int iwn5000_query_calibration(struct iwn_softc *);
static int iwn5000_send_calibration(struct iwn_softc *);
static int iwn5000_send_wimax_coex(struct iwn_softc *);
static int iwn5000_crystal_calib(struct iwn_softc *);
static int iwn5000_temp_offset_calib(struct iwn_softc *);
static int iwn5000_temp_offset_calibv2(struct iwn_softc *);
static int iwn4965_post_alive(struct iwn_softc *);
static int iwn5000_post_alive(struct iwn_softc *);
static int iwn4965_load_bootcode(struct iwn_softc *, const uint8_t *,
int);
static int iwn4965_load_firmware(struct iwn_softc *);
static int iwn5000_load_firmware_section(struct iwn_softc *, uint32_t,
const uint8_t *, int);
static int iwn5000_load_firmware(struct iwn_softc *);
static int iwn_read_firmware_leg(struct iwn_softc *,
struct iwn_fw_info *);
static int iwn_read_firmware_tlv(struct iwn_softc *,
struct iwn_fw_info *, uint16_t);
static int iwn_read_firmware(struct iwn_softc *);
static void iwn_unload_firmware(struct iwn_softc *);
static int iwn_clock_wait(struct iwn_softc *);
static int iwn_apm_init(struct iwn_softc *);
static void iwn_apm_stop_master(struct iwn_softc *);
static void iwn_apm_stop(struct iwn_softc *);
static int iwn4965_nic_config(struct iwn_softc *);
static int iwn5000_nic_config(struct iwn_softc *);
static int iwn_hw_prepare(struct iwn_softc *);
static int iwn_hw_init(struct iwn_softc *);
static void iwn_hw_stop(struct iwn_softc *);
static void iwn_panicked(void *, int);
static int iwn_init_locked(struct iwn_softc *);
static int iwn_init(struct iwn_softc *);
static void iwn_stop_locked(struct iwn_softc *);
static void iwn_stop(struct iwn_softc *);
static void iwn_scan_start(struct ieee80211com *);
static void iwn_scan_end(struct ieee80211com *);
static void iwn_set_channel(struct ieee80211com *);
static void iwn_scan_curchan(struct ieee80211_scan_state *, unsigned long);
static void iwn_scan_mindwell(struct ieee80211_scan_state *);
#ifdef IWN_DEBUG
static char *iwn_get_csr_string(int);
static void iwn_debug_register(struct iwn_softc *);
#endif
static device_method_t iwn_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, iwn_probe),
DEVMETHOD(device_attach, iwn_attach),
DEVMETHOD(device_detach, iwn_detach),
DEVMETHOD(device_shutdown, iwn_shutdown),
DEVMETHOD(device_suspend, iwn_suspend),
DEVMETHOD(device_resume, iwn_resume),
DEVMETHOD_END
};
static driver_t iwn_driver = {
"iwn",
iwn_methods,
sizeof(struct iwn_softc)
};
static devclass_t iwn_devclass;
DRIVER_MODULE(iwn, pci, iwn_driver, iwn_devclass, NULL, NULL);
MODULE_PNP_INFO("U16:vendor;U16:device;D:#", pci, iwn, iwn_ident_table,
nitems(iwn_ident_table) - 1);
MODULE_VERSION(iwn, 1);
MODULE_DEPEND(iwn, firmware, 1, 1, 1);
MODULE_DEPEND(iwn, pci, 1, 1, 1);
MODULE_DEPEND(iwn, wlan, 1, 1, 1);
static d_ioctl_t iwn_cdev_ioctl;
static d_open_t iwn_cdev_open;
static d_close_t iwn_cdev_close;
static struct cdevsw iwn_cdevsw = {
.d_version = D_VERSION,
.d_flags = 0,
.d_open = iwn_cdev_open,
.d_close = iwn_cdev_close,
.d_ioctl = iwn_cdev_ioctl,
.d_name = "iwn",
};
static int
iwn_probe(device_t dev)
{
const struct iwn_ident *ident;
for (ident = iwn_ident_table; ident->name != NULL; ident++) {
if (pci_get_vendor(dev) == ident->vendor &&
pci_get_device(dev) == ident->device) {
device_set_desc(dev, ident->name);
return (BUS_PROBE_DEFAULT);
}
}
return (ENXIO);
}
static int
iwn_is_3stream_device(struct iwn_softc *sc)
{
/* XXX for now only 5300, until the 5350 can be tested */
if (sc->hw_type == IWN_HW_REV_TYPE_5300)
return (1);
return (0);
}
static int
iwn_attach(device_t dev)
{
struct iwn_softc *sc = device_get_softc(dev);
struct ieee80211com *ic;
int i, error, rid;
sc->sc_dev = dev;
#ifdef IWN_DEBUG
error = resource_int_value(device_get_name(sc->sc_dev),
device_get_unit(sc->sc_dev), "debug", &(sc->sc_debug));
if (error != 0)
sc->sc_debug = 0;
#else
sc->sc_debug = 0;
#endif
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: begin\n",__func__);
/*
* Get the offset of the PCI Express Capability Structure in PCI
* Configuration Space.
*/
error = pci_find_cap(dev, PCIY_EXPRESS, &sc->sc_cap_off);
if (error != 0) {
device_printf(dev, "PCIe capability structure not found!\n");
return error;
}
/* Clear device-specific "PCI retry timeout" register (41h). */
pci_write_config(dev, 0x41, 0, 1);
/* Enable bus-mastering. */
pci_enable_busmaster(dev);
rid = PCIR_BAR(0);
sc->mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid,
RF_ACTIVE);
if (sc->mem == NULL) {
device_printf(dev, "can't map mem space\n");
error = ENOMEM;
return error;
}
sc->sc_st = rman_get_bustag(sc->mem);
sc->sc_sh = rman_get_bushandle(sc->mem);
i = 1;
rid = 0;
if (pci_alloc_msi(dev, &i) == 0)
rid = 1;
/* Install interrupt handler. */
sc->irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE |
(rid != 0 ? 0 : RF_SHAREABLE));
if (sc->irq == NULL) {
device_printf(dev, "can't map interrupt\n");
error = ENOMEM;
goto fail;
}
IWN_LOCK_INIT(sc);
/* Read hardware revision and attach. */
sc->hw_type = (IWN_READ(sc, IWN_HW_REV) >> IWN_HW_REV_TYPE_SHIFT)
& IWN_HW_REV_TYPE_MASK;
sc->subdevice_id = pci_get_subdevice(dev);
/*
* The 4965 and the 5000 series (and later) use different methods.
* Let's set those up first.
*/
if (sc->hw_type == IWN_HW_REV_TYPE_4965)
iwn4965_attach(sc, pci_get_device(dev));
else
iwn5000_attach(sc, pci_get_device(dev));
/*
* Next, let's set up the various parameters of each NIC.
*/
error = iwn_config_specific(sc, pci_get_device(dev));
if (error != 0) {
device_printf(dev, "could not attach device, error %d\n",
error);
goto fail;
}
if ((error = iwn_hw_prepare(sc)) != 0) {
device_printf(dev, "hardware not ready, error %d\n", error);
goto fail;
}
/* Allocate DMA memory for firmware transfers. */
if ((error = iwn_alloc_fwmem(sc)) != 0) {
device_printf(dev,
"could not allocate memory for firmware, error %d\n",
error);
goto fail;
}
/* Allocate "Keep Warm" page. */
if ((error = iwn_alloc_kw(sc)) != 0) {
device_printf(dev,
"could not allocate keep warm page, error %d\n", error);
goto fail;
}
/* Allocate ICT table for 5000 Series. */
if (sc->hw_type != IWN_HW_REV_TYPE_4965 &&
(error = iwn_alloc_ict(sc)) != 0) {
device_printf(dev, "could not allocate ICT table, error %d\n",
error);
goto fail;
}
/* Allocate TX scheduler "rings". */
if ((error = iwn_alloc_sched(sc)) != 0) {
device_printf(dev,
"could not allocate TX scheduler rings, error %d\n", error);
goto fail;
}
/* Allocate TX rings (16 on 4965AGN, 20 on >=5000). */
for (i = 0; i < sc->ntxqs; i++) {
if ((error = iwn_alloc_tx_ring(sc, &sc->txq[i], i)) != 0) {
device_printf(dev,
"could not allocate TX ring %d, error %d\n", i,
error);
goto fail;
}
}
/* Allocate RX ring. */
if ((error = iwn_alloc_rx_ring(sc, &sc->rxq)) != 0) {
device_printf(dev, "could not allocate RX ring, error %d\n",
error);
goto fail;
}
/* Clear pending interrupts. */
IWN_WRITE(sc, IWN_INT, 0xffffffff);
ic = &sc->sc_ic;
ic->ic_softc = sc;
ic->ic_name = device_get_nameunit(dev);
ic->ic_phytype = IEEE80211_T_OFDM; /* not only, but not used */
ic->ic_opmode = IEEE80211_M_STA; /* default to BSS mode */
/* Set device capabilities. */
ic->ic_caps =
IEEE80211_C_STA /* station mode supported */
| IEEE80211_C_MONITOR /* monitor mode supported */
#if 0
| IEEE80211_C_BGSCAN /* background scanning */
#endif
| IEEE80211_C_TXPMGT /* tx power management */
| IEEE80211_C_SHSLOT /* short slot time supported */
| IEEE80211_C_WPA
| IEEE80211_C_SHPREAMBLE /* short preamble supported */
#if 0
| IEEE80211_C_IBSS /* ibss/adhoc mode */
#endif
| IEEE80211_C_WME /* WME */
| IEEE80211_C_PMGT /* Station-side power mgmt */
;
/* Read MAC address, channels, etc from EEPROM. */
if ((error = iwn_read_eeprom(sc, ic->ic_macaddr)) != 0) {
device_printf(dev, "could not read EEPROM, error %d\n",
error);
goto fail;
}
/* Count the number of available chains. */
sc->ntxchains =
((sc->txchainmask >> 2) & 1) +
((sc->txchainmask >> 1) & 1) +
((sc->txchainmask >> 0) & 1);
sc->nrxchains =
((sc->rxchainmask >> 2) & 1) +
((sc->rxchainmask >> 1) & 1) +
((sc->rxchainmask >> 0) & 1);
if (bootverbose) {
device_printf(dev, "MIMO %dT%dR, %.4s, address %6D\n",
sc->ntxchains, sc->nrxchains, sc->eeprom_domain,
ic->ic_macaddr, ":");
}
if (sc->sc_flags & IWN_FLAG_HAS_11N) {
ic->ic_rxstream = sc->nrxchains;
ic->ic_txstream = sc->ntxchains;
/*
* Some of the 3-antenna devices (e.g., the 4965) only support
* 2x2 operation. So correct the number of streams if
* it's not a 3-stream device.
*/
if (! iwn_is_3stream_device(sc)) {
if (ic->ic_rxstream > 2)
ic->ic_rxstream = 2;
if (ic->ic_txstream > 2)
ic->ic_txstream = 2;
}
ic->ic_htcaps =
IEEE80211_HTCAP_SMPS_OFF /* SMPS mode disabled */
| IEEE80211_HTCAP_SHORTGI20 /* short GI in 20MHz */
| IEEE80211_HTCAP_CHWIDTH40 /* 40MHz channel width*/
| IEEE80211_HTCAP_SHORTGI40 /* short GI in 40MHz */
#ifdef notyet
| IEEE80211_HTCAP_GREENFIELD
#if IWN_RBUF_SIZE == 8192
| IEEE80211_HTCAP_MAXAMSDU_7935 /* max A-MSDU length */
#else
| IEEE80211_HTCAP_MAXAMSDU_3839 /* max A-MSDU length */
#endif
#endif
/* s/w capabilities */
| IEEE80211_HTC_HT /* HT operation */
| IEEE80211_HTC_AMPDU /* tx A-MPDU */
#ifdef notyet
| IEEE80211_HTC_AMSDU /* tx A-MSDU */
#endif
;
}
ieee80211_ifattach(ic);
ic->ic_vap_create = iwn_vap_create;
ic->ic_ioctl = iwn_ioctl;
ic->ic_parent = iwn_parent;
ic->ic_vap_delete = iwn_vap_delete;
ic->ic_transmit = iwn_transmit;
ic->ic_raw_xmit = iwn_raw_xmit;
ic->ic_node_alloc = iwn_node_alloc;
sc->sc_ampdu_rx_start = ic->ic_ampdu_rx_start;
ic->ic_ampdu_rx_start = iwn_ampdu_rx_start;
sc->sc_ampdu_rx_stop = ic->ic_ampdu_rx_stop;
ic->ic_ampdu_rx_stop = iwn_ampdu_rx_stop;
sc->sc_addba_request = ic->ic_addba_request;
ic->ic_addba_request = iwn_addba_request;
sc->sc_addba_response = ic->ic_addba_response;
ic->ic_addba_response = iwn_addba_response;
sc->sc_addba_stop = ic->ic_addba_stop;
ic->ic_addba_stop = iwn_ampdu_tx_stop;
ic->ic_newassoc = iwn_newassoc;
ic->ic_wme.wme_update = iwn_updateedca;
ic->ic_update_promisc = iwn_update_promisc;
ic->ic_update_mcast = iwn_update_mcast;
ic->ic_scan_start = iwn_scan_start;
ic->ic_scan_end = iwn_scan_end;
ic->ic_set_channel = iwn_set_channel;
ic->ic_scan_curchan = iwn_scan_curchan;
ic->ic_scan_mindwell = iwn_scan_mindwell;
ic->ic_getradiocaps = iwn_getradiocaps;
ic->ic_setregdomain = iwn_setregdomain;
iwn_radiotap_attach(sc);
callout_init_mtx(&sc->calib_to, &sc->sc_mtx, 0);
callout_init_mtx(&sc->scan_timeout, &sc->sc_mtx, 0);
callout_init_mtx(&sc->watchdog_to, &sc->sc_mtx, 0);
TASK_INIT(&sc->sc_rftoggle_task, 0, iwn_rftoggle_task, sc);
TASK_INIT(&sc->sc_panic_task, 0, iwn_panicked, sc);
TASK_INIT(&sc->sc_xmit_task, 0, iwn_xmit_task, sc);
mbufq_init(&sc->sc_xmit_queue, 1024);
sc->sc_tq = taskqueue_create("iwn_taskq", M_WAITOK,
taskqueue_thread_enqueue, &sc->sc_tq);
error = taskqueue_start_threads(&sc->sc_tq, 1, 0, "iwn_taskq");
if (error != 0) {
device_printf(dev, "can't start threads, error %d\n", error);
goto fail;
}
iwn_sysctlattach(sc);
/*
* Hook our interrupt after all initialization is complete.
*/
error = bus_setup_intr(dev, sc->irq, INTR_TYPE_NET | INTR_MPSAFE,
NULL, iwn_intr, sc, &sc->sc_ih);
if (error != 0) {
device_printf(dev, "can't establish interrupt, error %d\n",
error);
goto fail;
}
#if 0
device_printf(sc->sc_dev, "%s: rx_stats=%d, rx_stats_bt=%d\n",
__func__,
sizeof(struct iwn_stats),
sizeof(struct iwn_stats_bt));
#endif
if (bootverbose)
ieee80211_announce(ic);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
/* Add debug ioctl right at the end */
sc->sc_cdev = make_dev(&iwn_cdevsw, device_get_unit(dev),
UID_ROOT, GID_WHEEL, 0600, "%s", device_get_nameunit(dev));
if (sc->sc_cdev == NULL) {
device_printf(dev, "failed to create debug character device\n");
} else {
sc->sc_cdev->si_drv1 = sc;
}
return 0;
fail:
iwn_detach(dev);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end in error\n",__func__);
return error;
}
/*
* Define the device-specific configuration based on the device ID and
* subdevice ID.  pid: PCI device ID.
*/
static int
iwn_config_specific(struct iwn_softc *sc, uint16_t pid)
{
switch (pid) {
/* 4965 series */
case IWN_DID_4965_1:
case IWN_DID_4965_2:
case IWN_DID_4965_3:
case IWN_DID_4965_4:
sc->base_params = &iwn4965_base_params;
sc->limits = &iwn4965_sensitivity_limits;
sc->fwname = "iwn4965fw";
/* Override chains masks, ROM is known to be broken. */
sc->txchainmask = IWN_ANT_AB;
sc->rxchainmask = IWN_ANT_ABC;
/* Enable normal btcoex */
sc->sc_flags |= IWN_FLAG_BTCOEX;
break;
/* 1000 Series */
case IWN_DID_1000_1:
case IWN_DID_1000_2:
switch(sc->subdevice_id) {
case IWN_SDID_1000_1:
case IWN_SDID_1000_2:
case IWN_SDID_1000_3:
case IWN_SDID_1000_4:
case IWN_SDID_1000_5:
case IWN_SDID_1000_6:
case IWN_SDID_1000_7:
case IWN_SDID_1000_8:
case IWN_SDID_1000_9:
case IWN_SDID_1000_10:
case IWN_SDID_1000_11:
case IWN_SDID_1000_12:
sc->limits = &iwn1000_sensitivity_limits;
sc->base_params = &iwn1000_base_params;
sc->fwname = "iwn1000fw";
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 6x00 Series */
case IWN_DID_6x00_2:
case IWN_DID_6x00_4:
case IWN_DID_6x00_1:
case IWN_DID_6x00_3:
sc->fwname = "iwn6000fw";
sc->limits = &iwn6000_sensitivity_limits;
switch(sc->subdevice_id) {
case IWN_SDID_6x00_1:
case IWN_SDID_6x00_2:
case IWN_SDID_6x00_8:
//iwl6000_3agn_cfg
sc->base_params = &iwn_6000_base_params;
break;
case IWN_SDID_6x00_3:
case IWN_SDID_6x00_6:
case IWN_SDID_6x00_9:
//iwl6000i_2agn
case IWN_SDID_6x00_4:
case IWN_SDID_6x00_7:
case IWN_SDID_6x00_10:
//iwl6000i_2abg_cfg
case IWN_SDID_6x00_5:
//iwl6000i_2bg_cfg
sc->base_params = &iwn_6000i_base_params;
sc->sc_flags |= IWN_FLAG_INTERNAL_PA;
sc->txchainmask = IWN_ANT_BC;
sc->rxchainmask = IWN_ANT_BC;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 6x05 Series */
case IWN_DID_6x05_1:
case IWN_DID_6x05_2:
switch(sc->subdevice_id) {
case IWN_SDID_6x05_1:
case IWN_SDID_6x05_4:
case IWN_SDID_6x05_6:
//iwl6005_2agn_cfg
case IWN_SDID_6x05_2:
case IWN_SDID_6x05_5:
case IWN_SDID_6x05_7:
//iwl6005_2abg_cfg
case IWN_SDID_6x05_3:
//iwl6005_2bg_cfg
case IWN_SDID_6x05_8:
case IWN_SDID_6x05_9:
//iwl6005_2agn_sff_cfg
case IWN_SDID_6x05_10:
//iwl6005_2agn_d_cfg
case IWN_SDID_6x05_11:
//iwl6005_2agn_mow1_cfg
case IWN_SDID_6x05_12:
//iwl6005_2agn_mow2_cfg
sc->fwname = "iwn6000g2afw";
sc->limits = &iwn6000_sensitivity_limits;
sc->base_params = &iwn_6000g2_base_params;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 6x35 Series */
case IWN_DID_6035_1:
case IWN_DID_6035_2:
switch(sc->subdevice_id) {
case IWN_SDID_6035_1:
case IWN_SDID_6035_2:
case IWN_SDID_6035_3:
case IWN_SDID_6035_4:
case IWN_SDID_6035_5:
sc->fwname = "iwn6000g2bfw";
sc->limits = &iwn6235_sensitivity_limits;
sc->base_params = &iwn_6235_base_params;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 6x50 WiFi/WiMax Series */
case IWN_DID_6050_1:
case IWN_DID_6050_2:
switch(sc->subdevice_id) {
case IWN_SDID_6050_1:
case IWN_SDID_6050_3:
case IWN_SDID_6050_5:
//iwl6050_2agn_cfg
case IWN_SDID_6050_2:
case IWN_SDID_6050_4:
case IWN_SDID_6050_6:
//iwl6050_2abg_cfg
sc->fwname = "iwn6050fw";
sc->txchainmask = IWN_ANT_AB;
sc->rxchainmask = IWN_ANT_AB;
sc->limits = &iwn6000_sensitivity_limits;
sc->base_params = &iwn_6050_base_params;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 6150 WiFi/WiMax Series */
case IWN_DID_6150_1:
case IWN_DID_6150_2:
switch(sc->subdevice_id) {
case IWN_SDID_6150_1:
case IWN_SDID_6150_3:
case IWN_SDID_6150_5:
// iwl6150_bgn_cfg
case IWN_SDID_6150_2:
case IWN_SDID_6150_4:
case IWN_SDID_6150_6:
//iwl6150_bg_cfg
sc->fwname = "iwn6050fw";
sc->limits = &iwn6000_sensitivity_limits;
sc->base_params = &iwn_6150_base_params;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 6030 Series and 1030 Series */
case IWN_DID_x030_1:
case IWN_DID_x030_2:
case IWN_DID_x030_3:
case IWN_DID_x030_4:
switch(sc->subdevice_id) {
case IWN_SDID_x030_1:
case IWN_SDID_x030_3:
case IWN_SDID_x030_5:
// iwl1030_bgn_cfg
case IWN_SDID_x030_2:
case IWN_SDID_x030_4:
case IWN_SDID_x030_6:
//iwl1030_bg_cfg
case IWN_SDID_x030_7:
case IWN_SDID_x030_10:
case IWN_SDID_x030_14:
//iwl6030_2agn_cfg
case IWN_SDID_x030_8:
case IWN_SDID_x030_11:
case IWN_SDID_x030_15:
// iwl6030_2bgn_cfg
case IWN_SDID_x030_9:
case IWN_SDID_x030_12:
case IWN_SDID_x030_16:
// iwl6030_2abg_cfg
case IWN_SDID_x030_13:
//iwl6030_2bg_cfg
sc->fwname = "iwn6000g2bfw";
sc->limits = &iwn6000_sensitivity_limits;
sc->base_params = &iwn_6000g2b_base_params;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 130 Series WiFi */
/* XXX: This series will need adjustment for rate.
* see rx_with_siso_diversity in the Linux kernel.
*/
case IWN_DID_130_1:
case IWN_DID_130_2:
switch(sc->subdevice_id) {
case IWN_SDID_130_1:
case IWN_SDID_130_3:
case IWN_SDID_130_5:
//iwl130_bgn_cfg
case IWN_SDID_130_2:
case IWN_SDID_130_4:
case IWN_SDID_130_6:
//iwl130_bg_cfg
sc->fwname = "iwn6000g2bfw";
sc->limits = &iwn6000_sensitivity_limits;
sc->base_params = &iwn_6000g2b_base_params;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 100 Series WiFi */
case IWN_DID_100_1:
case IWN_DID_100_2:
switch(sc->subdevice_id) {
case IWN_SDID_100_1:
case IWN_SDID_100_2:
case IWN_SDID_100_3:
case IWN_SDID_100_4:
case IWN_SDID_100_5:
case IWN_SDID_100_6:
sc->limits = &iwn1000_sensitivity_limits;
sc->base_params = &iwn1000_base_params;
sc->fwname = "iwn100fw";
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 105 Series */
/* XXX: This series will need adjustment for rate.
* see rx_with_siso_diversity in the Linux kernel.
*/
case IWN_DID_105_1:
case IWN_DID_105_2:
switch(sc->subdevice_id) {
case IWN_SDID_105_1:
case IWN_SDID_105_2:
case IWN_SDID_105_3:
//iwl105_bgn_cfg
case IWN_SDID_105_4:
//iwl105_bgn_d_cfg
sc->limits = &iwn2030_sensitivity_limits;
sc->base_params = &iwn2000_base_params;
sc->fwname = "iwn105fw";
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 135 Series */
/* XXX: This series will need adjustment for rate.
* see rx_with_siso_diversity in the Linux kernel.
*/
case IWN_DID_135_1:
case IWN_DID_135_2:
switch(sc->subdevice_id) {
case IWN_SDID_135_1:
case IWN_SDID_135_2:
case IWN_SDID_135_3:
sc->limits = &iwn2030_sensitivity_limits;
sc->base_params = &iwn2030_base_params;
sc->fwname = "iwn135fw";
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 2x00 Series */
case IWN_DID_2x00_1:
case IWN_DID_2x00_2:
switch(sc->subdevice_id) {
case IWN_SDID_2x00_1:
case IWN_SDID_2x00_2:
case IWN_SDID_2x00_3:
//iwl2000_2bgn_cfg
case IWN_SDID_2x00_4:
//iwl2000_2bgn_d_cfg
sc->limits = &iwn2030_sensitivity_limits;
sc->base_params = &iwn2000_base_params;
sc->fwname = "iwn2000fw";
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n",
pid, sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 2x30 Series */
case IWN_DID_2x30_1:
case IWN_DID_2x30_2:
switch(sc->subdevice_id) {
case IWN_SDID_2x30_1:
case IWN_SDID_2x30_3:
case IWN_SDID_2x30_5:
//iwl100_bgn_cfg
case IWN_SDID_2x30_2:
case IWN_SDID_2x30_4:
case IWN_SDID_2x30_6:
//iwl100_bg_cfg
sc->limits = &iwn2030_sensitivity_limits;
sc->base_params = &iwn2030_base_params;
sc->fwname = "iwn2030fw";
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 5x00 Series */
case IWN_DID_5x00_1:
case IWN_DID_5x00_2:
case IWN_DID_5x00_3:
case IWN_DID_5x00_4:
sc->limits = &iwn5000_sensitivity_limits;
sc->base_params = &iwn5000_base_params;
sc->fwname = "iwn5000fw";
switch(sc->subdevice_id) {
case IWN_SDID_5x00_1:
case IWN_SDID_5x00_2:
case IWN_SDID_5x00_3:
case IWN_SDID_5x00_4:
case IWN_SDID_5x00_9:
case IWN_SDID_5x00_10:
case IWN_SDID_5x00_11:
case IWN_SDID_5x00_12:
case IWN_SDID_5x00_17:
case IWN_SDID_5x00_18:
case IWN_SDID_5x00_19:
case IWN_SDID_5x00_20:
//iwl5100_agn_cfg
sc->txchainmask = IWN_ANT_B;
sc->rxchainmask = IWN_ANT_AB;
break;
case IWN_SDID_5x00_5:
case IWN_SDID_5x00_6:
case IWN_SDID_5x00_13:
case IWN_SDID_5x00_14:
case IWN_SDID_5x00_21:
case IWN_SDID_5x00_22:
//iwl5100_bgn_cfg
sc->txchainmask = IWN_ANT_B;
sc->rxchainmask = IWN_ANT_AB;
break;
case IWN_SDID_5x00_7:
case IWN_SDID_5x00_8:
case IWN_SDID_5x00_15:
case IWN_SDID_5x00_16:
case IWN_SDID_5x00_23:
case IWN_SDID_5x00_24:
//iwl5100_abg_cfg
sc->txchainmask = IWN_ANT_B;
sc->rxchainmask = IWN_ANT_AB;
break;
case IWN_SDID_5x00_25:
case IWN_SDID_5x00_26:
case IWN_SDID_5x00_27:
case IWN_SDID_5x00_28:
case IWN_SDID_5x00_29:
case IWN_SDID_5x00_30:
case IWN_SDID_5x00_31:
case IWN_SDID_5x00_32:
case IWN_SDID_5x00_33:
case IWN_SDID_5x00_34:
case IWN_SDID_5x00_35:
case IWN_SDID_5x00_36:
//iwl5300_agn_cfg
sc->txchainmask = IWN_ANT_ABC;
sc->rxchainmask = IWN_ANT_ABC;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
/* 5x50 Series */
case IWN_DID_5x50_1:
case IWN_DID_5x50_2:
case IWN_DID_5x50_3:
case IWN_DID_5x50_4:
sc->limits = &iwn5000_sensitivity_limits;
sc->base_params = &iwn5000_base_params;
sc->fwname = "iwn5000fw";
switch(sc->subdevice_id) {
case IWN_SDID_5x50_1:
case IWN_SDID_5x50_2:
case IWN_SDID_5x50_3:
//iwl5350_agn_cfg
sc->limits = &iwn5000_sensitivity_limits;
sc->base_params = &iwn5000_base_params;
sc->fwname = "iwn5000fw";
break;
case IWN_SDID_5x50_4:
case IWN_SDID_5x50_5:
case IWN_SDID_5x50_8:
case IWN_SDID_5x50_9:
case IWN_SDID_5x50_10:
case IWN_SDID_5x50_11:
//iwl5150_agn_cfg
case IWN_SDID_5x50_6:
case IWN_SDID_5x50_7:
case IWN_SDID_5x50_12:
case IWN_SDID_5x50_13:
//iwl5150_abg_cfg
sc->limits = &iwn5000_sensitivity_limits;
sc->fwname = "iwn5150fw";
sc->base_params = &iwn_5x50_base_params;
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id :"
" 0x%04x rev %d not supported (subdevice)\n", pid,
sc->subdevice_id, sc->hw_type);
return ENOTSUP;
}
break;
default:
device_printf(sc->sc_dev, "adapter type id : 0x%04x sub id : 0x%04x"
" rev 0x%08x not supported (device)\n", pid, sc->subdevice_id,
sc->hw_type);
return ENOTSUP;
}
return 0;
}
static void
iwn4965_attach(struct iwn_softc *sc, uint16_t pid)
{
struct iwn_ops *ops = &sc->ops;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
ops->load_firmware = iwn4965_load_firmware;
ops->read_eeprom = iwn4965_read_eeprom;
ops->post_alive = iwn4965_post_alive;
ops->nic_config = iwn4965_nic_config;
ops->update_sched = iwn4965_update_sched;
ops->get_temperature = iwn4965_get_temperature;
ops->get_rssi = iwn4965_get_rssi;
ops->set_txpower = iwn4965_set_txpower;
ops->init_gains = iwn4965_init_gains;
ops->set_gains = iwn4965_set_gains;
ops->rxon_assoc = iwn4965_rxon_assoc;
ops->add_node = iwn4965_add_node;
ops->tx_done = iwn4965_tx_done;
ops->ampdu_tx_start = iwn4965_ampdu_tx_start;
ops->ampdu_tx_stop = iwn4965_ampdu_tx_stop;
sc->ntxqs = IWN4965_NTXQUEUES;
sc->firstaggqueue = IWN4965_FIRSTAGGQUEUE;
sc->ndmachnls = IWN4965_NDMACHNLS;
sc->broadcast_id = IWN4965_ID_BROADCAST;
sc->rxonsz = IWN4965_RXONSZ;
sc->schedsz = IWN4965_SCHEDSZ;
sc->fw_text_maxsz = IWN4965_FW_TEXT_MAXSZ;
sc->fw_data_maxsz = IWN4965_FW_DATA_MAXSZ;
sc->fwsz = IWN4965_FWSZ;
sc->sched_txfact_addr = IWN4965_SCHED_TXFACT;
sc->limits = &iwn4965_sensitivity_limits;
sc->fwname = "iwn4965fw";
/* Override chains masks, ROM is known to be broken. */
sc->txchainmask = IWN_ANT_AB;
sc->rxchainmask = IWN_ANT_ABC;
/* Enable normal btcoex */
sc->sc_flags |= IWN_FLAG_BTCOEX;
DPRINTF(sc, IWN_DEBUG_TRACE, "%s: end\n",__func__);
}
static void
iwn5000_attach(struct iwn_softc *sc, uint16_t pid)
{
struct iwn_ops *ops = &sc->ops;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
ops->load_firmware = iwn5000_load_firmware;
ops->read_eeprom = iwn5000_read_eeprom;
ops->post_alive = iwn5000_post_alive;
ops->nic_config = iwn5000_nic_config;
ops->update_sched = iwn5000_update_sched;
ops->get_temperature = iwn5000_get_temperature;
ops->get_rssi = iwn5000_get_rssi;
ops->set_txpower = iwn5000_set_txpower;
ops->init_gains = iwn5000_init_gains;
ops->set_gains = iwn5000_set_gains;
ops->rxon_assoc = iwn5000_rxon_assoc;
ops->add_node = iwn5000_add_node;
ops->tx_done = iwn5000_tx_done;
ops->ampdu_tx_start = iwn5000_ampdu_tx_start;
ops->ampdu_tx_stop = iwn5000_ampdu_tx_stop;
sc->ntxqs = IWN5000_NTXQUEUES;
sc->firstaggqueue = IWN5000_FIRSTAGGQUEUE;
sc->ndmachnls = IWN5000_NDMACHNLS;
sc->broadcast_id = IWN5000_ID_BROADCAST;
sc->rxonsz = IWN5000_RXONSZ;
sc->schedsz = IWN5000_SCHEDSZ;
sc->fw_text_maxsz = IWN5000_FW_TEXT_MAXSZ;
sc->fw_data_maxsz = IWN5000_FW_DATA_MAXSZ;
sc->fwsz = IWN5000_FWSZ;
sc->sched_txfact_addr = IWN5000_SCHED_TXFACT;
sc->reset_noise_gain = IWN5000_PHY_CALIB_RESET_NOISE_GAIN;
sc->noise_gain = IWN5000_PHY_CALIB_NOISE_GAIN;
DPRINTF(sc, IWN_DEBUG_TRACE, "%s: end\n",__func__);
}
/*
* Attach the interface to 802.11 radiotap.
*/
static void
iwn_radiotap_attach(struct iwn_softc *sc)
{
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
ieee80211_radiotap_attach(&sc->sc_ic,
&sc->sc_txtap.wt_ihdr, sizeof(sc->sc_txtap),
IWN_TX_RADIOTAP_PRESENT,
&sc->sc_rxtap.wr_ihdr, sizeof(sc->sc_rxtap),
IWN_RX_RADIOTAP_PRESENT);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
}
static void
iwn_sysctlattach(struct iwn_softc *sc)
{
#ifdef IWN_DEBUG
struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(sc->sc_dev);
struct sysctl_oid *tree = device_get_sysctl_tree(sc->sc_dev);
SYSCTL_ADD_INT(ctx, SYSCTL_CHILDREN(tree), OID_AUTO,
"debug", CTLFLAG_RW, &sc->sc_debug, sc->sc_debug,
"control debugging printfs");
#endif
}
static struct ieee80211vap *
iwn_vap_create(struct ieee80211com *ic, const char name[IFNAMSIZ], int unit,
enum ieee80211_opmode opmode, int flags,
const uint8_t bssid[IEEE80211_ADDR_LEN],
const uint8_t mac[IEEE80211_ADDR_LEN])
{
struct iwn_softc *sc = ic->ic_softc;
struct iwn_vap *ivp;
struct ieee80211vap *vap;
if (!TAILQ_EMPTY(&ic->ic_vaps)) /* only one at a time */
return NULL;
ivp = malloc(sizeof(struct iwn_vap), M_80211_VAP, M_WAITOK | M_ZERO);
vap = &ivp->iv_vap;
ieee80211_vap_setup(ic, vap, name, unit, opmode, flags, bssid);
ivp->ctx = IWN_RXON_BSS_CTX;
vap->iv_bmissthreshold = 10; /* override default */
/* Override with driver methods. */
ivp->iv_newstate = vap->iv_newstate;
vap->iv_newstate = iwn_newstate;
sc->ivap[IWN_RXON_BSS_CTX] = vap;
ieee80211_ratectl_init(vap);
/* Complete setup. */
ieee80211_vap_attach(vap, iwn_media_change, ieee80211_media_status,
mac);
ic->ic_opmode = opmode;
return vap;
}
static void
iwn_vap_delete(struct ieee80211vap *vap)
{
struct iwn_vap *ivp = IWN_VAP(vap);
ieee80211_ratectl_deinit(vap);
ieee80211_vap_detach(vap);
free(ivp, M_80211_VAP);
}
static void
iwn_xmit_queue_drain(struct iwn_softc *sc)
{
struct mbuf *m;
struct ieee80211_node *ni;
IWN_LOCK_ASSERT(sc);
while ((m = mbufq_dequeue(&sc->sc_xmit_queue)) != NULL) {
ni = (struct ieee80211_node *)m->m_pkthdr.rcvif;
ieee80211_free_node(ni);
m_freem(m);
}
}
static int
iwn_xmit_queue_enqueue(struct iwn_softc *sc, struct mbuf *m)
{
IWN_LOCK_ASSERT(sc);
return (mbufq_enqueue(&sc->sc_xmit_queue, m));
}
static int
iwn_detach(device_t dev)
{
struct iwn_softc *sc = device_get_softc(dev);
int qid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
if (sc->sc_ic.ic_softc != NULL) {
/* Free the mbuf queue and node references */
IWN_LOCK(sc);
iwn_xmit_queue_drain(sc);
IWN_UNLOCK(sc);
iwn_stop(sc);
taskqueue_drain_all(sc->sc_tq);
taskqueue_free(sc->sc_tq);
callout_drain(&sc->watchdog_to);
callout_drain(&sc->scan_timeout);
callout_drain(&sc->calib_to);
ieee80211_ifdetach(&sc->sc_ic);
}
/* Uninstall interrupt handler. */
if (sc->irq != NULL) {
bus_teardown_intr(dev, sc->irq, sc->sc_ih);
bus_release_resource(dev, SYS_RES_IRQ, rman_get_rid(sc->irq),
sc->irq);
pci_release_msi(dev);
}
/* Free DMA resources. */
iwn_free_rx_ring(sc, &sc->rxq);
for (qid = 0; qid < sc->ntxqs; qid++)
iwn_free_tx_ring(sc, &sc->txq[qid]);
iwn_free_sched(sc);
iwn_free_kw(sc);
if (sc->ict != NULL)
iwn_free_ict(sc);
iwn_free_fwmem(sc);
if (sc->mem != NULL)
bus_release_resource(dev, SYS_RES_MEMORY,
rman_get_rid(sc->mem), sc->mem);
if (sc->sc_cdev) {
destroy_dev(sc->sc_cdev);
sc->sc_cdev = NULL;
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n", __func__);
IWN_LOCK_DESTROY(sc);
return 0;
}
static int
iwn_shutdown(device_t dev)
{
struct iwn_softc *sc = device_get_softc(dev);
iwn_stop(sc);
return 0;
}
static int
iwn_suspend(device_t dev)
{
struct iwn_softc *sc = device_get_softc(dev);
ieee80211_suspend_all(&sc->sc_ic);
return 0;
}
static int
iwn_resume(device_t dev)
{
struct iwn_softc *sc = device_get_softc(dev);
/* Clear device-specific "PCI retry timeout" register (41h). */
pci_write_config(dev, 0x41, 0, 1);
ieee80211_resume_all(&sc->sc_ic);
return 0;
}
static int
iwn_nic_lock(struct iwn_softc *sc)
{
int ntries;
/* Request exclusive access to NIC. */
IWN_SETBITS(sc, IWN_GP_CNTRL, IWN_GP_CNTRL_MAC_ACCESS_REQ);
/* Spin until we actually get the lock. */
for (ntries = 0; ntries < 1000; ntries++) {
if ((IWN_READ(sc, IWN_GP_CNTRL) &
(IWN_GP_CNTRL_MAC_ACCESS_ENA | IWN_GP_CNTRL_SLEEP)) ==
IWN_GP_CNTRL_MAC_ACCESS_ENA)
return 0;
DELAY(10);
}
return ETIMEDOUT;
}
static __inline void
iwn_nic_unlock(struct iwn_softc *sc)
{
IWN_CLRBITS(sc, IWN_GP_CNTRL, IWN_GP_CNTRL_MAC_ACCESS_REQ);
}
static __inline uint32_t
iwn_prph_read(struct iwn_softc *sc, uint32_t addr)
{
IWN_WRITE(sc, IWN_PRPH_RADDR, IWN_PRPH_DWORD | addr);
IWN_BARRIER_READ_WRITE(sc);
return IWN_READ(sc, IWN_PRPH_RDATA);
}
static __inline void
iwn_prph_write(struct iwn_softc *sc, uint32_t addr, uint32_t data)
{
IWN_WRITE(sc, IWN_PRPH_WADDR, IWN_PRPH_DWORD | addr);
IWN_BARRIER_WRITE(sc);
IWN_WRITE(sc, IWN_PRPH_WDATA, data);
}
static __inline void
iwn_prph_setbits(struct iwn_softc *sc, uint32_t addr, uint32_t mask)
{
iwn_prph_write(sc, addr, iwn_prph_read(sc, addr) | mask);
}
static __inline void
iwn_prph_clrbits(struct iwn_softc *sc, uint32_t addr, uint32_t mask)
{
iwn_prph_write(sc, addr, iwn_prph_read(sc, addr) & ~mask);
}
static __inline void
iwn_prph_write_region_4(struct iwn_softc *sc, uint32_t addr,
const uint32_t *data, int count)
{
for (; count > 0; count--, data++, addr += 4)
iwn_prph_write(sc, addr, *data);
}
static __inline uint32_t
iwn_mem_read(struct iwn_softc *sc, uint32_t addr)
{
IWN_WRITE(sc, IWN_MEM_RADDR, addr);
IWN_BARRIER_READ_WRITE(sc);
return IWN_READ(sc, IWN_MEM_RDATA);
}
static __inline void
iwn_mem_write(struct iwn_softc *sc, uint32_t addr, uint32_t data)
{
IWN_WRITE(sc, IWN_MEM_WADDR, addr);
IWN_BARRIER_WRITE(sc);
IWN_WRITE(sc, IWN_MEM_WDATA, data);
}
static __inline void
iwn_mem_write_2(struct iwn_softc *sc, uint32_t addr, uint16_t data)
{
uint32_t tmp;
tmp = iwn_mem_read(sc, addr & ~3);
if (addr & 3)
tmp = (tmp & 0x0000ffff) | data << 16;
else
tmp = (tmp & 0xffff0000) | data;
iwn_mem_write(sc, addr & ~3, tmp);
}
static __inline void
iwn_mem_read_region_4(struct iwn_softc *sc, uint32_t addr, uint32_t *data,
int count)
{
for (; count > 0; count--, addr += 4)
*data++ = iwn_mem_read(sc, addr);
}
static __inline void
iwn_mem_set_region_4(struct iwn_softc *sc, uint32_t addr, uint32_t val,
int count)
{
for (; count > 0; count--, addr += 4)
iwn_mem_write(sc, addr, val);
}
static int
iwn_eeprom_lock(struct iwn_softc *sc)
{
int i, ntries;
for (i = 0; i < 100; i++) {
/* Request exclusive access to EEPROM. */
IWN_SETBITS(sc, IWN_HW_IF_CONFIG,
IWN_HW_IF_CONFIG_EEPROM_LOCKED);
/* Spin until we actually get the lock. */
for (ntries = 0; ntries < 100; ntries++) {
if (IWN_READ(sc, IWN_HW_IF_CONFIG) &
IWN_HW_IF_CONFIG_EEPROM_LOCKED)
return 0;
DELAY(10);
}
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end timeout\n", __func__);
return ETIMEDOUT;
}
static __inline void
iwn_eeprom_unlock(struct iwn_softc *sc)
{
IWN_CLRBITS(sc, IWN_HW_IF_CONFIG, IWN_HW_IF_CONFIG_EEPROM_LOCKED);
}
/*
* Initialize access by host to One Time Programmable ROM.
* NB: This kind of ROM can be found on 1000 or 6000 Series only.
*/
static int
iwn_init_otprom(struct iwn_softc *sc)
{
uint16_t prev, base, next;
int count, error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Wait for clock stabilization before accessing prph. */
if ((error = iwn_clock_wait(sc)) != 0)
return error;
if ((error = iwn_nic_lock(sc)) != 0)
return error;
iwn_prph_setbits(sc, IWN_APMG_PS, IWN_APMG_PS_RESET_REQ);
DELAY(5);
iwn_prph_clrbits(sc, IWN_APMG_PS, IWN_APMG_PS_RESET_REQ);
iwn_nic_unlock(sc);
/* Set auto clock gate disable bit for HW with OTP shadow RAM. */
if (sc->base_params->shadow_ram_support) {
IWN_SETBITS(sc, IWN_DBG_LINK_PWR_MGMT,
IWN_RESET_LINK_PWR_MGMT_DIS);
}
IWN_CLRBITS(sc, IWN_EEPROM_GP, IWN_EEPROM_GP_IF_OWNER);
/* Clear ECC status. */
IWN_SETBITS(sc, IWN_OTP_GP,
IWN_OTP_GP_ECC_CORR_STTS | IWN_OTP_GP_ECC_UNCORR_STTS);
/*
* Find the block before last block (contains the EEPROM image)
* for HW without OTP shadow RAM.
*/
if (! sc->base_params->shadow_ram_support) {
/* Switch to absolute addressing mode. */
IWN_CLRBITS(sc, IWN_OTP_GP, IWN_OTP_GP_RELATIVE_ACCESS);
base = prev = 0;
for (count = 0; count < sc->base_params->max_ll_items;
count++) {
error = iwn_read_prom_data(sc, base, &next, 2);
if (error != 0)
return error;
if (next == 0) /* End of linked-list. */
break;
prev = base;
base = le16toh(next);
}
if (count == 0 || count == sc->base_params->max_ll_items)
return EIO;
/* Skip "next" word. */
sc->prom_base = prev + 1;
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
return 0;
}
static int
iwn_read_prom_data(struct iwn_softc *sc, uint32_t addr, void *data, int count)
{
uint8_t *out = data;
uint32_t val, tmp;
int ntries;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
addr += sc->prom_base;
for (; count > 0; count -= 2, addr++) {
IWN_WRITE(sc, IWN_EEPROM, addr << 2);
for (ntries = 0; ntries < 10; ntries++) {
val = IWN_READ(sc, IWN_EEPROM);
if (val & IWN_EEPROM_READ_VALID)
break;
DELAY(5);
}
if (ntries == 10) {
device_printf(sc->sc_dev,
"timeout reading ROM at 0x%x\n", addr);
return ETIMEDOUT;
}
if (sc->sc_flags & IWN_FLAG_HAS_OTPROM) {
/* OTPROM, check for ECC errors. */
tmp = IWN_READ(sc, IWN_OTP_GP);
if (tmp & IWN_OTP_GP_ECC_UNCORR_STTS) {
device_printf(sc->sc_dev,
"OTPROM ECC error at 0x%x\n", addr);
return EIO;
}
if (tmp & IWN_OTP_GP_ECC_CORR_STTS) {
/* Correctable ECC error, clear bit. */
IWN_SETBITS(sc, IWN_OTP_GP,
IWN_OTP_GP_ECC_CORR_STTS);
}
}
*out++ = val >> 16;
if (count > 1)
*out++ = val >> 24;
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
return 0;
}
static void
iwn_dma_map_addr(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
{
if (error != 0)
return;
KASSERT(nsegs == 1, ("too many DMA segments, %d should be 1", nsegs));
*(bus_addr_t *)arg = segs[0].ds_addr;
}
static int
iwn_dma_contig_alloc(struct iwn_softc *sc, struct iwn_dma_info *dma,
void **kvap, bus_size_t size, bus_size_t alignment)
{
int error;
dma->tag = NULL;
dma->size = size;
error = bus_dma_tag_create(bus_get_dma_tag(sc->sc_dev), alignment,
0, BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, size,
1, size, 0, NULL, NULL, &dma->tag);
if (error != 0)
goto fail;
error = bus_dmamem_alloc(dma->tag, (void **)&dma->vaddr,
BUS_DMA_NOWAIT | BUS_DMA_ZERO | BUS_DMA_COHERENT, &dma->map);
if (error != 0)
goto fail;
error = bus_dmamap_load(dma->tag, dma->map, dma->vaddr, size,
iwn_dma_map_addr, &dma->paddr, BUS_DMA_NOWAIT);
if (error != 0)
goto fail;
bus_dmamap_sync(dma->tag, dma->map, BUS_DMASYNC_PREWRITE);
if (kvap != NULL)
*kvap = dma->vaddr;
return 0;
fail: iwn_dma_contig_free(dma);
return error;
}
static void
iwn_dma_contig_free(struct iwn_dma_info *dma)
{
if (dma->vaddr != NULL) {
bus_dmamap_sync(dma->tag, dma->map,
BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
bus_dmamap_unload(dma->tag, dma->map);
bus_dmamem_free(dma->tag, dma->vaddr, dma->map);
dma->vaddr = NULL;
}
if (dma->tag != NULL) {
bus_dma_tag_destroy(dma->tag);
dma->tag = NULL;
}
}
static int
iwn_alloc_sched(struct iwn_softc *sc)
{
/* TX scheduler rings must be aligned on a 1KB boundary. */
return iwn_dma_contig_alloc(sc, &sc->sched_dma, (void **)&sc->sched,
sc->schedsz, 1024);
}
static void
iwn_free_sched(struct iwn_softc *sc)
{
iwn_dma_contig_free(&sc->sched_dma);
}
static int
iwn_alloc_kw(struct iwn_softc *sc)
{
/* "Keep Warm" page must be aligned on a 4KB boundary. */
return iwn_dma_contig_alloc(sc, &sc->kw_dma, NULL, 4096, 4096);
}
static void
iwn_free_kw(struct iwn_softc *sc)
{
iwn_dma_contig_free(&sc->kw_dma);
}
static int
iwn_alloc_ict(struct iwn_softc *sc)
{
/* ICT table must be aligned on a 4KB boundary. */
return iwn_dma_contig_alloc(sc, &sc->ict_dma, (void **)&sc->ict,
IWN_ICT_SIZE, 4096);
}
static void
iwn_free_ict(struct iwn_softc *sc)
{
iwn_dma_contig_free(&sc->ict_dma);
}
static int
iwn_alloc_fwmem(struct iwn_softc *sc)
{
/* Must be aligned on a 16-byte boundary. */
return iwn_dma_contig_alloc(sc, &sc->fw_dma, NULL, sc->fwsz, 16);
}
static void
iwn_free_fwmem(struct iwn_softc *sc)
{
iwn_dma_contig_free(&sc->fw_dma);
}
static int
iwn_alloc_rx_ring(struct iwn_softc *sc, struct iwn_rx_ring *ring)
{
bus_size_t size;
int i, error;
ring->cur = 0;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Allocate RX descriptors (256-byte aligned). */
size = IWN_RX_RING_COUNT * sizeof (uint32_t);
error = iwn_dma_contig_alloc(sc, &ring->desc_dma, (void **)&ring->desc,
size, 256);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not allocate RX ring DMA memory, error %d\n",
__func__, error);
goto fail;
}
/* Allocate RX status area (16-byte aligned). */
error = iwn_dma_contig_alloc(sc, &ring->stat_dma, (void **)&ring->stat,
sizeof (struct iwn_rx_status), 16);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not allocate RX status DMA memory, error %d\n",
__func__, error);
goto fail;
}
/* Create RX buffer DMA tag. */
error = bus_dma_tag_create(bus_get_dma_tag(sc->sc_dev), 1, 0,
BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL,
IWN_RBUF_SIZE, 1, IWN_RBUF_SIZE, 0, NULL, NULL, &ring->data_dmat);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not create RX buf DMA tag, error %d\n",
__func__, error);
goto fail;
}
/*
* Allocate and map RX buffers.
*/
for (i = 0; i < IWN_RX_RING_COUNT; i++) {
struct iwn_rx_data *data = &ring->data[i];
bus_addr_t paddr;
error = bus_dmamap_create(ring->data_dmat, 0, &data->map);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not create RX buf DMA map, error %d\n",
__func__, error);
goto fail;
}
data->m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR,
IWN_RBUF_SIZE);
if (data->m == NULL) {
device_printf(sc->sc_dev,
"%s: could not allocate RX mbuf\n", __func__);
error = ENOBUFS;
goto fail;
}
error = bus_dmamap_load(ring->data_dmat, data->map,
mtod(data->m, void *), IWN_RBUF_SIZE, iwn_dma_map_addr,
&paddr, BUS_DMA_NOWAIT);
if (error != 0 && error != EFBIG) {
device_printf(sc->sc_dev,
"%s: can't map mbuf, error %d\n", __func__,
error);
goto fail;
}
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_PREREAD);
/* Set physical address of RX buffer (256-byte aligned). */
ring->desc[i] = htole32(paddr >> 8);
}
bus_dmamap_sync(ring->desc_dma.tag, ring->desc_dma.map,
BUS_DMASYNC_PREWRITE);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n", __func__);
return 0;
fail: iwn_free_rx_ring(sc, ring);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end in error\n", __func__);
return error;
}
static void
iwn_reset_rx_ring(struct iwn_softc *sc, struct iwn_rx_ring *ring)
{
int ntries;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
if (iwn_nic_lock(sc) == 0) {
IWN_WRITE(sc, IWN_FH_RX_CONFIG, 0);
for (ntries = 0; ntries < 1000; ntries++) {
if (IWN_READ(sc, IWN_FH_RX_STATUS) &
IWN_FH_RX_STATUS_IDLE)
break;
DELAY(10);
}
iwn_nic_unlock(sc);
}
ring->cur = 0;
sc->last_rx_valid = 0;
}
static void
iwn_free_rx_ring(struct iwn_softc *sc, struct iwn_rx_ring *ring)
{
int i;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
iwn_dma_contig_free(&ring->desc_dma);
iwn_dma_contig_free(&ring->stat_dma);
for (i = 0; i < IWN_RX_RING_COUNT; i++) {
struct iwn_rx_data *data = &ring->data[i];
if (data->m != NULL) {
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_POSTREAD);
bus_dmamap_unload(ring->data_dmat, data->map);
m_freem(data->m);
data->m = NULL;
}
if (data->map != NULL)
bus_dmamap_destroy(ring->data_dmat, data->map);
}
if (ring->data_dmat != NULL) {
bus_dma_tag_destroy(ring->data_dmat);
ring->data_dmat = NULL;
}
}
static int
iwn_alloc_tx_ring(struct iwn_softc *sc, struct iwn_tx_ring *ring, int qid)
{
bus_addr_t paddr;
bus_size_t size;
int i, error;
ring->qid = qid;
ring->queued = 0;
ring->cur = 0;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Allocate TX descriptors (256-byte aligned). */
size = IWN_TX_RING_COUNT * sizeof (struct iwn_tx_desc);
error = iwn_dma_contig_alloc(sc, &ring->desc_dma, (void **)&ring->desc,
size, 256);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not allocate TX ring DMA memory, error %d\n",
__func__, error);
goto fail;
}
size = IWN_TX_RING_COUNT * sizeof (struct iwn_tx_cmd);
error = iwn_dma_contig_alloc(sc, &ring->cmd_dma, (void **)&ring->cmd,
size, 4);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not allocate TX cmd DMA memory, error %d\n",
__func__, error);
goto fail;
}
error = bus_dma_tag_create(bus_get_dma_tag(sc->sc_dev), 1, 0,
BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL, MCLBYTES,
IWN_MAX_SCATTER - 1, MCLBYTES, 0, NULL, NULL, &ring->data_dmat);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not create TX buf DMA tag, error %d\n",
__func__, error);
goto fail;
}
paddr = ring->cmd_dma.paddr;
for (i = 0; i < IWN_TX_RING_COUNT; i++) {
struct iwn_tx_data *data = &ring->data[i];
data->cmd_paddr = paddr;
data->scratch_paddr = paddr + 12;
paddr += sizeof (struct iwn_tx_cmd);
error = bus_dmamap_create(ring->data_dmat, 0, &data->map);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not create TX buf DMA map, error %d\n",
__func__, error);
goto fail;
}
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
return 0;
fail: iwn_free_tx_ring(sc, ring);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end in error\n", __func__);
return error;
}
static void
iwn_reset_tx_ring(struct iwn_softc *sc, struct iwn_tx_ring *ring)
{
int i;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
for (i = 0; i < IWN_TX_RING_COUNT; i++) {
struct iwn_tx_data *data = &ring->data[i];
if (data->m != NULL) {
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_POSTWRITE);
bus_dmamap_unload(ring->data_dmat, data->map);
m_freem(data->m);
data->m = NULL;
}
if (data->ni != NULL) {
ieee80211_free_node(data->ni);
data->ni = NULL;
}
data->remapped = 0;
data->long_retries = 0;
}
/* Clear TX descriptors. */
memset(ring->desc, 0, ring->desc_dma.size);
bus_dmamap_sync(ring->desc_dma.tag, ring->desc_dma.map,
BUS_DMASYNC_PREWRITE);
sc->qfullmsk &= ~(1 << ring->qid);
ring->queued = 0;
ring->cur = 0;
}
static void
iwn_free_tx_ring(struct iwn_softc *sc, struct iwn_tx_ring *ring)
{
int i;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
iwn_dma_contig_free(&ring->desc_dma);
iwn_dma_contig_free(&ring->cmd_dma);
for (i = 0; i < IWN_TX_RING_COUNT; i++) {
struct iwn_tx_data *data = &ring->data[i];
if (data->m != NULL) {
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_POSTWRITE);
bus_dmamap_unload(ring->data_dmat, data->map);
m_freem(data->m);
}
if (data->map != NULL)
bus_dmamap_destroy(ring->data_dmat, data->map);
}
if (ring->data_dmat != NULL) {
bus_dma_tag_destroy(ring->data_dmat);
ring->data_dmat = NULL;
}
}
static void
iwn_check_tx_ring(struct iwn_softc *sc, int qid)
{
struct iwn_tx_ring *ring = &sc->txq[qid];
KASSERT(ring->queued >= 0, ("%s: ring->queued (%d) for queue %d < 0!",
__func__, ring->queued, qid));
if (qid >= sc->firstaggqueue) {
struct iwn_ops *ops = &sc->ops;
struct ieee80211_tx_ampdu *tap = sc->qid2tap[qid];
if (ring->queued == 0 && !IEEE80211_AMPDU_RUNNING(tap)) {
uint16_t ssn = tap->txa_start & 0xfff;
uint8_t tid = tap->txa_tid;
int *res = tap->txa_private;
iwn_nic_lock(sc);
ops->ampdu_tx_stop(sc, qid, tid, ssn);
iwn_nic_unlock(sc);
sc->qid2tap[qid] = NULL;
free(res, M_DEVBUF);
}
}
if (ring->queued < IWN_TX_RING_LOMARK) {
sc->qfullmsk &= ~(1 << qid);
if (ring->queued == 0)
sc->sc_tx_timer = 0;
else
sc->sc_tx_timer = 5;
}
}
static void
iwn5000_ict_reset(struct iwn_softc *sc)
{
/* Disable interrupts. */
IWN_WRITE(sc, IWN_INT_MASK, 0);
/* Reset ICT table. */
memset(sc->ict, 0, IWN_ICT_SIZE);
sc->ict_cur = 0;
bus_dmamap_sync(sc->ict_dma.tag, sc->ict_dma.map,
BUS_DMASYNC_PREWRITE);
/* Set physical address of ICT table (4KB aligned). */
DPRINTF(sc, IWN_DEBUG_RESET, "%s: enabling ICT\n", __func__);
IWN_WRITE(sc, IWN_DRAM_INT_TBL, IWN_DRAM_INT_TBL_ENABLE |
IWN_DRAM_INT_TBL_WRAP_CHECK | sc->ict_dma.paddr >> 12);
/* Enable periodic RX interrupt. */
sc->int_mask |= IWN_INT_RX_PERIODIC;
/* Switch to ICT interrupt mode in driver. */
sc->sc_flags |= IWN_FLAG_USE_ICT;
/* Re-enable interrupts. */
IWN_WRITE(sc, IWN_INT, 0xffffffff);
IWN_WRITE(sc, IWN_INT_MASK, sc->int_mask);
}
static int
iwn_read_eeprom(struct iwn_softc *sc, uint8_t macaddr[IEEE80211_ADDR_LEN])
{
struct iwn_ops *ops = &sc->ops;
uint16_t val;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Check whether adapter has an EEPROM or an OTPROM. */
if (sc->hw_type >= IWN_HW_REV_TYPE_1000 &&
(IWN_READ(sc, IWN_OTP_GP) & IWN_OTP_GP_DEV_SEL_OTP))
sc->sc_flags |= IWN_FLAG_HAS_OTPROM;
DPRINTF(sc, IWN_DEBUG_RESET, "%s found\n",
(sc->sc_flags & IWN_FLAG_HAS_OTPROM) ? "OTPROM" : "EEPROM");
/* Adapter has to be powered on for EEPROM access to work. */
if ((error = iwn_apm_init(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not power ON adapter, error %d\n", __func__,
error);
return error;
}
if ((IWN_READ(sc, IWN_EEPROM_GP) & 0x7) == 0) {
device_printf(sc->sc_dev, "%s: bad ROM signature\n", __func__);
return EIO;
}
if ((error = iwn_eeprom_lock(sc)) != 0) {
device_printf(sc->sc_dev, "%s: could not lock ROM, error %d\n",
__func__, error);
return error;
}
if (sc->sc_flags & IWN_FLAG_HAS_OTPROM) {
if ((error = iwn_init_otprom(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not initialize OTPROM, error %d\n",
__func__, error);
return error;
}
}
iwn_read_prom_data(sc, IWN_EEPROM_SKU_CAP, &val, 2);
DPRINTF(sc, IWN_DEBUG_RESET, "SKU capabilities=0x%04x\n", le16toh(val));
/* Check if HT support is bonded out. */
if (val & htole16(IWN_EEPROM_SKU_CAP_11N))
sc->sc_flags |= IWN_FLAG_HAS_11N;
iwn_read_prom_data(sc, IWN_EEPROM_RFCFG, &val, 2);
sc->rfcfg = le16toh(val);
DPRINTF(sc, IWN_DEBUG_RESET, "radio config=0x%04x\n", sc->rfcfg);
/* Read Tx/Rx chains from ROM unless it's known to be broken. */
if (sc->txchainmask == 0)
sc->txchainmask = IWN_RFCFG_TXANTMSK(sc->rfcfg);
if (sc->rxchainmask == 0)
sc->rxchainmask = IWN_RFCFG_RXANTMSK(sc->rfcfg);
/* Read MAC address. */
iwn_read_prom_data(sc, IWN_EEPROM_MAC, macaddr, 6);
/* Read adapter-specific information from EEPROM. */
ops->read_eeprom(sc);
iwn_apm_stop(sc); /* Power OFF adapter. */
iwn_eeprom_unlock(sc);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
return 0;
}
static void
iwn4965_read_eeprom(struct iwn_softc *sc)
{
uint32_t addr;
uint16_t val;
int i;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Read regulatory domain (4 ASCII characters). */
iwn_read_prom_data(sc, IWN4965_EEPROM_DOMAIN, sc->eeprom_domain, 4);
/* Read the list of authorized channels (20MHz & 40MHz). */
for (i = 0; i < IWN_NBANDS - 1; i++) {
addr = iwn4965_regulatory_bands[i];
iwn_read_eeprom_channels(sc, i, addr);
}
/* Read maximum allowed TX power for 2GHz and 5GHz bands. */
iwn_read_prom_data(sc, IWN4965_EEPROM_MAXPOW, &val, 2);
sc->maxpwr2GHz = val & 0xff;
sc->maxpwr5GHz = val >> 8;
/* Check that EEPROM values are within valid range. */
if (sc->maxpwr5GHz < 20 || sc->maxpwr5GHz > 50)
sc->maxpwr5GHz = 38;
if (sc->maxpwr2GHz < 20 || sc->maxpwr2GHz > 50)
sc->maxpwr2GHz = 38;
DPRINTF(sc, IWN_DEBUG_RESET, "maxpwr 2GHz=%d 5GHz=%d\n",
sc->maxpwr2GHz, sc->maxpwr5GHz);
/* Read samples for each TX power group. */
iwn_read_prom_data(sc, IWN4965_EEPROM_BANDS, sc->bands,
sizeof sc->bands);
/* Read voltage at which samples were taken. */
iwn_read_prom_data(sc, IWN4965_EEPROM_VOLTAGE, &val, 2);
sc->eeprom_voltage = (int16_t)le16toh(val);
DPRINTF(sc, IWN_DEBUG_RESET, "voltage=%d (in 0.3V)\n",
sc->eeprom_voltage);
#ifdef IWN_DEBUG
/* Print samples. */
if (sc->sc_debug & IWN_DEBUG_ANY) {
for (i = 0; i < IWN_NBANDS - 1; i++)
iwn4965_print_power_group(sc, i);
}
#endif
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
}
#ifdef IWN_DEBUG
static void
iwn4965_print_power_group(struct iwn_softc *sc, int i)
{
struct iwn4965_eeprom_band *band = &sc->bands[i];
struct iwn4965_eeprom_chan_samples *chans = band->chans;
int j, c;
printf("===band %d===\n", i);
printf("chan lo=%d, chan hi=%d\n", band->lo, band->hi);
printf("chan1 num=%d\n", chans[0].num);
for (c = 0; c < 2; c++) {
for (j = 0; j < IWN_NSAMPLES; j++) {
printf("chain %d, sample %d: temp=%d gain=%d "
"power=%d pa_det=%d\n", c, j,
chans[0].samples[c][j].temp,
chans[0].samples[c][j].gain,
chans[0].samples[c][j].power,
chans[0].samples[c][j].pa_det);
}
}
printf("chan2 num=%d\n", chans[1].num);
for (c = 0; c < 2; c++) {
for (j = 0; j < IWN_NSAMPLES; j++) {
printf("chain %d, sample %d: temp=%d gain=%d "
"power=%d pa_det=%d\n", c, j,
chans[1].samples[c][j].temp,
chans[1].samples[c][j].gain,
chans[1].samples[c][j].power,
chans[1].samples[c][j].pa_det);
}
}
}
#endif
static void
iwn5000_read_eeprom(struct iwn_softc *sc)
{
struct iwn5000_eeprom_calib_hdr hdr;
int32_t volt;
uint32_t base, addr;
uint16_t val;
int i;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Read regulatory domain (4 ASCII characters). */
iwn_read_prom_data(sc, IWN5000_EEPROM_REG, &val, 2);
base = le16toh(val);
iwn_read_prom_data(sc, base + IWN5000_EEPROM_DOMAIN,
sc->eeprom_domain, 4);
/* Read the list of authorized channels (20MHz & 40MHz). */
for (i = 0; i < IWN_NBANDS - 1; i++) {
addr = base + sc->base_params->regulatory_bands[i];
iwn_read_eeprom_channels(sc, i, addr);
}
/* Read enhanced TX power information for 6000 Series. */
if (sc->base_params->enhanced_TX_power)
iwn_read_eeprom_enhinfo(sc);
iwn_read_prom_data(sc, IWN5000_EEPROM_CAL, &val, 2);
base = le16toh(val);
iwn_read_prom_data(sc, base, &hdr, sizeof hdr);
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: calib version=%u pa type=%u voltage=%u\n", __func__,
hdr.version, hdr.pa_type, le16toh(hdr.volt));
sc->calib_ver = hdr.version;
if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_TEMP_OFFSETv2) {
sc->eeprom_voltage = le16toh(hdr.volt);
iwn_read_prom_data(sc, base + IWN5000_EEPROM_TEMP, &val, 2);
sc->eeprom_temp_high = le16toh(val);
iwn_read_prom_data(sc, base + IWN5000_EEPROM_VOLT, &val, 2);
sc->eeprom_temp = le16toh(val);
}
if (sc->hw_type == IWN_HW_REV_TYPE_5150) {
/* Compute temperature offset. */
iwn_read_prom_data(sc, base + IWN5000_EEPROM_TEMP, &val, 2);
sc->eeprom_temp = le16toh(val);
iwn_read_prom_data(sc, base + IWN5000_EEPROM_VOLT, &val, 2);
volt = le16toh(val);
sc->temp_off = sc->eeprom_temp - (volt / -5);
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "temp=%d volt=%d offset=%dK\n",
sc->eeprom_temp, volt, sc->temp_off);
} else {
/* Read crystal calibration. */
iwn_read_prom_data(sc, base + IWN5000_EEPROM_CRYSTAL,
&sc->eeprom_crystal, sizeof (uint32_t));
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "crystal calibration 0x%08x\n",
le32toh(sc->eeprom_crystal));
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
}
/*
* Translate EEPROM flags to net80211.
*/
static uint32_t
iwn_eeprom_channel_flags(struct iwn_eeprom_chan *channel)
{
uint32_t nflags;
nflags = 0;
if ((channel->flags & IWN_EEPROM_CHAN_ACTIVE) == 0)
nflags |= IEEE80211_CHAN_PASSIVE;
if ((channel->flags & IWN_EEPROM_CHAN_IBSS) == 0)
nflags |= IEEE80211_CHAN_NOADHOC;
if (channel->flags & IWN_EEPROM_CHAN_RADAR) {
nflags |= IEEE80211_CHAN_DFS;
/* XXX apparently IBSS may still be marked */
nflags |= IEEE80211_CHAN_NOADHOC;
}
return nflags;
}
static void
iwn_read_eeprom_band(struct iwn_softc *sc, int n, int maxchans, int *nchans,
struct ieee80211_channel chans[])
{
struct iwn_eeprom_chan *channels = sc->eeprom_channels[n];
const struct iwn_chan_band *band = &iwn_bands[n];
uint8_t bands[IEEE80211_MODE_BYTES];
uint8_t chan;
int i, error, nflags;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
memset(bands, 0, sizeof(bands));
if (n == 0) {
setbit(bands, IEEE80211_MODE_11B);
setbit(bands, IEEE80211_MODE_11G);
if (sc->sc_flags & IWN_FLAG_HAS_11N)
setbit(bands, IEEE80211_MODE_11NG);
} else {
setbit(bands, IEEE80211_MODE_11A);
if (sc->sc_flags & IWN_FLAG_HAS_11N)
setbit(bands, IEEE80211_MODE_11NA);
}
for (i = 0; i < band->nchan; i++) {
if (!(channels[i].flags & IWN_EEPROM_CHAN_VALID)) {
DPRINTF(sc, IWN_DEBUG_RESET,
"skip chan %d flags 0x%x maxpwr %d\n",
band->chan[i], channels[i].flags,
channels[i].maxpwr);
continue;
}
chan = band->chan[i];
nflags = iwn_eeprom_channel_flags(&channels[i]);
error = ieee80211_add_channel(chans, maxchans, nchans,
chan, 0, channels[i].maxpwr, nflags, bands);
if (error != 0)
break;
/* Save maximum allowed TX power for this channel. */
/* XXX wrong */
sc->maxpwr[chan] = channels[i].maxpwr;
DPRINTF(sc, IWN_DEBUG_RESET,
"add chan %d flags 0x%x maxpwr %d\n", chan,
channels[i].flags, channels[i].maxpwr);
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
}
static void
iwn_read_eeprom_ht40(struct iwn_softc *sc, int n, int maxchans, int *nchans,
struct ieee80211_channel chans[])
{
struct iwn_eeprom_chan *channels = sc->eeprom_channels[n];
const struct iwn_chan_band *band = &iwn_bands[n];
uint8_t chan;
int i, error, nflags;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s start\n", __func__);
if (!(sc->sc_flags & IWN_FLAG_HAS_11N)) {
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end no 11n\n", __func__);
return;
}
for (i = 0; i < band->nchan; i++) {
if (!(channels[i].flags & IWN_EEPROM_CHAN_VALID)) {
DPRINTF(sc, IWN_DEBUG_RESET,
"skip chan %d flags 0x%x maxpwr %d\n",
band->chan[i], channels[i].flags,
channels[i].maxpwr);
continue;
}
chan = band->chan[i];
nflags = iwn_eeprom_channel_flags(&channels[i]);
nflags |= (n == 5 ? IEEE80211_CHAN_G : IEEE80211_CHAN_A);
error = ieee80211_add_channel_ht40(chans, maxchans, nchans,
chan, channels[i].maxpwr, nflags);
switch (error) {
case EINVAL:
device_printf(sc->sc_dev,
"%s: no entry for channel %d\n", __func__, chan);
continue;
case ENOENT:
DPRINTF(sc, IWN_DEBUG_RESET,
"%s: skip chan %d, extension channel not found\n",
__func__, chan);
continue;
case ENOBUFS:
device_printf(sc->sc_dev,
"%s: channel table is full!\n", __func__);
break;
case 0:
DPRINTF(sc, IWN_DEBUG_RESET,
"add ht40 chan %d flags 0x%x maxpwr %d\n",
chan, channels[i].flags, channels[i].maxpwr);
/* FALLTHROUGH */
default:
break;
}
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
}
static void
iwn_read_eeprom_channels(struct iwn_softc *sc, int n, uint32_t addr)
{
struct ieee80211com *ic = &sc->sc_ic;
iwn_read_prom_data(sc, addr, &sc->eeprom_channels[n],
iwn_bands[n].nchan * sizeof (struct iwn_eeprom_chan));
if (n < 5) {
iwn_read_eeprom_band(sc, n, IEEE80211_CHAN_MAX, &ic->ic_nchans,
ic->ic_channels);
} else {
iwn_read_eeprom_ht40(sc, n, IEEE80211_CHAN_MAX, &ic->ic_nchans,
ic->ic_channels);
}
ieee80211_sort_channels(ic->ic_channels, ic->ic_nchans);
}
static struct iwn_eeprom_chan *
iwn_find_eeprom_channel(struct iwn_softc *sc, struct ieee80211_channel *c)
{
int band, chan, i, j;
if (IEEE80211_IS_CHAN_HT40(c)) {
band = IEEE80211_IS_CHAN_5GHZ(c) ? 6 : 5;
if (IEEE80211_IS_CHAN_HT40D(c))
chan = c->ic_extieee;
else
chan = c->ic_ieee;
for (i = 0; i < iwn_bands[band].nchan; i++) {
if (iwn_bands[band].chan[i] == chan)
return &sc->eeprom_channels[band][i];
}
} else {
for (j = 0; j < 5; j++) {
for (i = 0; i < iwn_bands[j].nchan; i++) {
if (iwn_bands[j].chan[i] == c->ic_ieee &&
((j == 0) ^ IEEE80211_IS_CHAN_A(c)) == 1)
return &sc->eeprom_channels[j][i];
}
}
}
return NULL;
}
static void
iwn_getradiocaps(struct ieee80211com *ic,
int maxchans, int *nchans, struct ieee80211_channel chans[])
{
struct iwn_softc *sc = ic->ic_softc;
int i;
/* Parse the list of authorized channels. */
for (i = 0; i < 5 && *nchans < maxchans; i++)
iwn_read_eeprom_band(sc, i, maxchans, nchans, chans);
for (i = 5; i < IWN_NBANDS - 1 && *nchans < maxchans; i++)
iwn_read_eeprom_ht40(sc, i, maxchans, nchans, chans);
}
/*
* Enforce flags read from EEPROM.
*/
static int
iwn_setregdomain(struct ieee80211com *ic, struct ieee80211_regdomain *rd,
int nchan, struct ieee80211_channel chans[])
{
struct iwn_softc *sc = ic->ic_softc;
int i;
for (i = 0; i < nchan; i++) {
struct ieee80211_channel *c = &chans[i];
struct iwn_eeprom_chan *channel;
channel = iwn_find_eeprom_channel(sc, c);
if (channel == NULL) {
ic_printf(ic, "%s: invalid channel %u freq %u/0x%x\n",
__func__, c->ic_ieee, c->ic_freq, c->ic_flags);
return EINVAL;
}
c->ic_flags |= iwn_eeprom_channel_flags(channel);
}
return 0;
}
static void
iwn_read_eeprom_enhinfo(struct iwn_softc *sc)
{
struct iwn_eeprom_enhinfo enhinfo[35];
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211_channel *c;
uint16_t val, base;
int8_t maxpwr;
uint8_t flags;
int i, j;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
iwn_read_prom_data(sc, IWN5000_EEPROM_REG, &val, 2);
base = le16toh(val);
iwn_read_prom_data(sc, base + IWN6000_EEPROM_ENHINFO,
enhinfo, sizeof enhinfo);
for (i = 0; i < nitems(enhinfo); i++) {
flags = enhinfo[i].flags;
if (!(flags & IWN_ENHINFO_VALID))
continue; /* Skip invalid entries. */
maxpwr = 0;
if (sc->txchainmask & IWN_ANT_A)
maxpwr = MAX(maxpwr, enhinfo[i].chain[0]);
if (sc->txchainmask & IWN_ANT_B)
maxpwr = MAX(maxpwr, enhinfo[i].chain[1]);
if (sc->txchainmask & IWN_ANT_C)
maxpwr = MAX(maxpwr, enhinfo[i].chain[2]);
if (sc->ntxchains == 2)
maxpwr = MAX(maxpwr, enhinfo[i].mimo2);
else if (sc->ntxchains == 3)
maxpwr = MAX(maxpwr, enhinfo[i].mimo3);
for (j = 0; j < ic->ic_nchans; j++) {
c = &ic->ic_channels[j];
if ((flags & IWN_ENHINFO_5GHZ)) {
if (!IEEE80211_IS_CHAN_A(c))
continue;
} else if ((flags & IWN_ENHINFO_OFDM)) {
if (!IEEE80211_IS_CHAN_G(c))
continue;
} else if (!IEEE80211_IS_CHAN_B(c))
continue;
if ((flags & IWN_ENHINFO_HT40)) {
if (!IEEE80211_IS_CHAN_HT40(c))
continue;
} else {
if (IEEE80211_IS_CHAN_HT40(c))
continue;
}
if (enhinfo[i].chan != 0 &&
enhinfo[i].chan != c->ic_ieee)
continue;
DPRINTF(sc, IWN_DEBUG_RESET,
"channel %d(%x), maxpwr %d\n", c->ic_ieee,
c->ic_flags, maxpwr / 2);
c->ic_maxregpower = maxpwr / 2;
c->ic_maxpower = maxpwr;
}
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end\n", __func__);
}
static struct ieee80211_node *
iwn_node_alloc(struct ieee80211vap *vap, const uint8_t mac[IEEE80211_ADDR_LEN])
{
struct iwn_node *wn;
wn = malloc(sizeof (struct iwn_node), M_80211_NODE, M_NOWAIT | M_ZERO);
if (wn == NULL)
return (NULL);
wn->id = IWN_ID_UNDEFINED;
return (&wn->ni);
}
static __inline int
rate2plcp(int rate)
{
switch (rate & 0xff) {
case 12: return 0xd;
case 18: return 0xf;
case 24: return 0x5;
case 36: return 0x7;
case 48: return 0x9;
case 72: return 0xb;
case 96: return 0x1;
case 108: return 0x3;
case 2: return 10;
case 4: return 20;
case 11: return 55;
case 22: return 110;
}
return 0;
}
static __inline uint8_t
plcp2rate(const uint8_t rate_plcp)
{
switch (rate_plcp) {
case 0xd: return 12;
case 0xf: return 18;
case 0x5: return 24;
case 0x7: return 36;
case 0x9: return 48;
case 0xb: return 72;
case 0x1: return 96;
case 0x3: return 108;
case 10: return 2;
case 20: return 4;
case 55: return 11;
case 110: return 22;
default: return 0;
}
}
static int
iwn_get_1stream_tx_antmask(struct iwn_softc *sc)
{
return IWN_LSB(sc->txchainmask);
}
static int
iwn_get_2stream_tx_antmask(struct iwn_softc *sc)
{
int tx;
/*
* The '2 stream' setup is a bit odd.
*
* For NICs that support only 1 antenna, default to IWN_ANT_AB or
* the firmware panics (e.g. Intel 5100).
*
* For NICs that support two antennas, we use ANT_AB.
*
* For NICs that support three antennas, we use the two that
* weren't the default one.
*
* XXX TODO: if bluetooth (full concurrent) is enabled, restrict
* this to only one antenna.
*/
/* Default - transmit on the other antennas */
tx = (sc->txchainmask & ~IWN_LSB(sc->txchainmask));
/* Now, if it's zero, set it to IWN_ANT_AB, so to not panic firmware */
if (tx == 0)
tx = IWN_ANT_AB;
/*
* If the NIC is a two-stream TX NIC, configure the TX mask to
* the default chainmask
*/
else if (sc->ntxchains == 2)
tx = sc->txchainmask;
return (tx);
}
/*
* Calculate the required PLCP value for the given rate
* and the given node.
*
* This will take the node configuration (eg 11n, rate table
* setup, etc) into consideration.
*/
static uint32_t
iwn_rate_to_plcp(struct iwn_softc *sc, struct ieee80211_node *ni,
uint8_t rate)
{
struct ieee80211com *ic = ni->ni_ic;
uint32_t plcp = 0;
int ridx;
/*
* If it's an MCS rate, let's set the plcp correctly
* and set the relevant flags based on the node config.
*/
if (rate & IEEE80211_RATE_MCS) {
/*
* Set the initial PLCP value to be between 0->31 for
* MCS 0 -> MCS 31, then set the "I'm an MCS rate!"
* flag.
*/
plcp = IEEE80211_RV(rate) | IWN_RFLAG_MCS;
/*
* XXX the following should only occur if both
* the local configuration _and_ the remote node
* advertise these capabilities. Thus this code
* may need fixing!
*/
/*
* Set the channel width and guard interval.
*/
if (IEEE80211_IS_CHAN_HT40(ni->ni_chan)) {
plcp |= IWN_RFLAG_HT40;
if (ni->ni_htcap & IEEE80211_HTCAP_SHORTGI40)
plcp |= IWN_RFLAG_SGI;
} else if (ni->ni_htcap & IEEE80211_HTCAP_SHORTGI20) {
plcp |= IWN_RFLAG_SGI;
}
/*
* Ensure the selected rate matches the link quality
* table entries being used.
*/
if (rate > 0x8f)
plcp |= IWN_RFLAG_ANT(sc->txchainmask);
else if (rate > 0x87)
plcp |= IWN_RFLAG_ANT(iwn_get_2stream_tx_antmask(sc));
else
plcp |= IWN_RFLAG_ANT(iwn_get_1stream_tx_antmask(sc));
} else {
/*
* Set the initial PLCP - fine for both
* OFDM and CCK rates.
*/
plcp = rate2plcp(rate);
/* Set CCK flag if it's CCK */
/* XXX It would be nice to have a method
* to map the ridx -> phy table entry
* so we could just query that, rather than
* this hack to check against IWN_RIDX_OFDM6.
*/
ridx = ieee80211_legacy_rate_lookup(ic->ic_rt,
rate & IEEE80211_RATE_VAL);
if (ridx < IWN_RIDX_OFDM6 &&
IEEE80211_IS_CHAN_2GHZ(ni->ni_chan))
plcp |= IWN_RFLAG_CCK;
/* Set antenna configuration */
/* XXX TODO: is this the right antenna to use for legacy? */
plcp |= IWN_RFLAG_ANT(iwn_get_1stream_tx_antmask(sc));
}
DPRINTF(sc, IWN_DEBUG_TXRATE, "%s: rate=0x%02x, plcp=0x%08x\n",
__func__,
rate,
plcp);
return (htole32(plcp));
}
static void
iwn_newassoc(struct ieee80211_node *ni, int isnew)
{
/* Doesn't do anything at the moment */
}
static int
iwn_media_change(struct ifnet *ifp)
{
int error;
error = ieee80211_media_change(ifp);
/* NB: only the fixed rate can change and that doesn't need a reset */
return (error == ENETRESET ? 0 : error);
}
static int
iwn_newstate(struct ieee80211vap *vap, enum ieee80211_state nstate, int arg)
{
struct iwn_vap *ivp = IWN_VAP(vap);
struct ieee80211com *ic = vap->iv_ic;
struct iwn_softc *sc = ic->ic_softc;
int error = 0;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
DPRINTF(sc, IWN_DEBUG_STATE, "%s: %s -> %s\n", __func__,
ieee80211_state_name[vap->iv_state], ieee80211_state_name[nstate]);
IEEE80211_UNLOCK(ic);
IWN_LOCK(sc);
callout_stop(&sc->calib_to);
sc->rxon = &sc->rx_on[IWN_RXON_BSS_CTX];
switch (nstate) {
case IEEE80211_S_ASSOC:
if (vap->iv_state != IEEE80211_S_RUN)
break;
/* FALLTHROUGH */
case IEEE80211_S_AUTH:
if (vap->iv_state == IEEE80211_S_AUTH)
break;
/*
* !AUTH -> AUTH transition requires state reset to handle
* reassociations correctly.
*/
sc->rxon->associd = 0;
sc->rxon->filter &= ~htole32(IWN_FILTER_BSS);
sc->calib.state = IWN_CALIB_STATE_INIT;
/* Wait until we hear a beacon before we transmit */
if (IEEE80211_IS_CHAN_PASSIVE(ic->ic_curchan))
sc->sc_beacon_wait = 1;
if ((error = iwn_auth(sc, vap)) != 0) {
device_printf(sc->sc_dev,
"%s: could not move to auth state\n", __func__);
}
break;
case IEEE80211_S_RUN:
/*
* RUN -> RUN transition: just restart the timers.
*/
if (vap->iv_state == IEEE80211_S_RUN) {
sc->calib_cnt = 0;
break;
}
/* Wait until we hear a beacon before we transmit */
if (IEEE80211_IS_CHAN_PASSIVE(ic->ic_curchan))
sc->sc_beacon_wait = 1;
/*
* !RUN -> RUN requires setting the association id
* which is done with a firmware cmd. We also defer
* starting the timers until that work is done.
*/
if ((error = iwn_run(sc, vap)) != 0) {
device_printf(sc->sc_dev,
"%s: could not move to run state\n", __func__);
}
break;
case IEEE80211_S_INIT:
sc->calib.state = IWN_CALIB_STATE_INIT;
/*
* Purge the xmit queue so we don't have old frames
* during a new association attempt.
*/
sc->sc_beacon_wait = 0;
iwn_xmit_queue_drain(sc);
break;
default:
break;
}
IWN_UNLOCK(sc);
IEEE80211_LOCK(ic);
if (error != 0) {
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end in error\n", __func__);
return error;
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n", __func__);
return ivp->iv_newstate(vap, nstate, arg);
}
static void
iwn_calib_timeout(void *arg)
{
struct iwn_softc *sc = arg;
IWN_LOCK_ASSERT(sc);
/* Force automatic TX power calibration every 60 secs. */
if (++sc->calib_cnt >= 120) {
uint32_t flags = 0;
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "%s\n",
"sending request for statistics");
(void)iwn_cmd(sc, IWN_CMD_GET_STATISTICS, &flags,
sizeof flags, 1);
sc->calib_cnt = 0;
}
callout_reset(&sc->calib_to, msecs_to_ticks(500), iwn_calib_timeout,
sc);
}
/*
* Process an RX_PHY firmware notification. This is usually immediately
* followed by an MPDU_RX_DONE notification.
*/
static void
iwn_rx_phy(struct iwn_softc *sc, struct iwn_rx_desc *desc)
{
struct iwn_rx_stat *stat = (struct iwn_rx_stat *)(desc + 1);
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "%s: received PHY stats\n", __func__);
/* Save RX statistics, they will be used on MPDU_RX_DONE. */
memcpy(&sc->last_rx_stat, stat, sizeof (*stat));
sc->last_rx_valid = 1;
}
/*
* Process an RX_DONE (4965AGN only) or MPDU_RX_DONE firmware notification.
* Each MPDU_RX_DONE notification must be preceded by an RX_PHY one.
*/
static void
iwn_rx_done(struct iwn_softc *sc, struct iwn_rx_desc *desc,
struct iwn_rx_data *data)
{
struct iwn_ops *ops = &sc->ops;
struct ieee80211com *ic = &sc->sc_ic;
struct iwn_rx_ring *ring = &sc->rxq;
struct ieee80211_frame_min *wh;
struct ieee80211_node *ni;
struct mbuf *m, *m1;
struct iwn_rx_stat *stat;
caddr_t head;
bus_addr_t paddr;
uint32_t flags;
int error, len, rssi, nf;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
if (desc->type == IWN_MPDU_RX_DONE) {
/* Check for prior RX_PHY notification. */
if (!sc->last_rx_valid) {
DPRINTF(sc, IWN_DEBUG_ANY,
"%s: missing RX_PHY\n", __func__);
return;
}
stat = &sc->last_rx_stat;
} else
stat = (struct iwn_rx_stat *)(desc + 1);
if (stat->cfg_phy_len > IWN_STAT_MAXLEN) {
device_printf(sc->sc_dev,
"%s: invalid RX statistic header, len %d\n", __func__,
stat->cfg_phy_len);
return;
}
if (desc->type == IWN_MPDU_RX_DONE) {
struct iwn_rx_mpdu *mpdu = (struct iwn_rx_mpdu *)(desc + 1);
head = (caddr_t)(mpdu + 1);
len = le16toh(mpdu->len);
} else {
head = (caddr_t)(stat + 1) + stat->cfg_phy_len;
len = le16toh(stat->len);
}
flags = le32toh(*(uint32_t *)(head + len));
/* Discard frames with a bad FCS early. */
if ((flags & IWN_RX_NOERROR) != IWN_RX_NOERROR) {
DPRINTF(sc, IWN_DEBUG_RECV, "%s: RX flags error %x\n",
__func__, flags);
counter_u64_add(ic->ic_ierrors, 1);
return;
}
/* Discard frames that are too short. */
if (len < sizeof (struct ieee80211_frame_ack)) {
DPRINTF(sc, IWN_DEBUG_RECV, "%s: frame too short: %d\n",
__func__, len);
counter_u64_add(ic->ic_ierrors, 1);
return;
}
m1 = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, IWN_RBUF_SIZE);
if (m1 == NULL) {
DPRINTF(sc, IWN_DEBUG_ANY, "%s: no mbuf to restock ring\n",
__func__);
counter_u64_add(ic->ic_ierrors, 1);
return;
}
bus_dmamap_unload(ring->data_dmat, data->map);
error = bus_dmamap_load(ring->data_dmat, data->map, mtod(m1, void *),
IWN_RBUF_SIZE, iwn_dma_map_addr, &paddr, BUS_DMA_NOWAIT);
if (error != 0 && error != EFBIG) {
device_printf(sc->sc_dev,
"%s: bus_dmamap_load failed, error %d\n", __func__, error);
m_freem(m1);
/* Try to reload the old mbuf. */
error = bus_dmamap_load(ring->data_dmat, data->map,
mtod(data->m, void *), IWN_RBUF_SIZE, iwn_dma_map_addr,
&paddr, BUS_DMA_NOWAIT);
if (error != 0 && error != EFBIG) {
panic("%s: could not load old RX mbuf", __func__);
}
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_PREREAD);
/* Physical address may have changed. */
ring->desc[ring->cur] = htole32(paddr >> 8);
bus_dmamap_sync(ring->desc_dma.tag, ring->desc_dma.map,
BUS_DMASYNC_PREWRITE);
counter_u64_add(ic->ic_ierrors, 1);
return;
}
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_PREREAD);
m = data->m;
data->m = m1;
/* Update RX descriptor. */
ring->desc[ring->cur] = htole32(paddr >> 8);
bus_dmamap_sync(ring->desc_dma.tag, ring->desc_dma.map,
BUS_DMASYNC_PREWRITE);
/* Finalize mbuf. */
m->m_data = head;
m->m_pkthdr.len = m->m_len = len;
/* Grab a reference to the source node. */
wh = mtod(m, struct ieee80211_frame_min *);
if (len >= sizeof(struct ieee80211_frame_min))
ni = ieee80211_find_rxnode(ic, wh);
else
ni = NULL;
nf = (ni != NULL && ni->ni_vap->iv_state == IEEE80211_S_RUN &&
(ic->ic_flags & IEEE80211_F_SCAN) == 0) ? sc->noise : -95;
rssi = ops->get_rssi(sc, stat);
if (ieee80211_radiotap_active(ic)) {
struct iwn_rx_radiotap_header *tap = &sc->sc_rxtap;
uint32_t rate = le32toh(stat->rate);
tap->wr_flags = 0;
if (stat->flags & htole16(IWN_STAT_FLAG_SHPREAMBLE))
tap->wr_flags |= IEEE80211_RADIOTAP_F_SHORTPRE;
tap->wr_dbm_antsignal = (int8_t)rssi;
tap->wr_dbm_antnoise = (int8_t)nf;
tap->wr_tsft = stat->tstamp;
if (rate & IWN_RFLAG_MCS) {
tap->wr_rate = rate & IWN_RFLAG_RATE_MCS;
tap->wr_rate |= IEEE80211_RATE_MCS;
} else
tap->wr_rate = plcp2rate(rate & IWN_RFLAG_RATE);
}
/*
* If it's a beacon and we're waiting, then do the
* wakeup. This should unblock raw_xmit/start.
*/
if (sc->sc_beacon_wait) {
uint8_t type, subtype;
/* NB: Re-assign wh */
wh = mtod(m, struct ieee80211_frame_min *);
type = wh->i_fc[0] & IEEE80211_FC0_TYPE_MASK;
subtype = wh->i_fc[0] & IEEE80211_FC0_SUBTYPE_MASK;
/*
* This assumes at this point we've received our own
* beacon.
*/
DPRINTF(sc, IWN_DEBUG_TRACE,
"%s: beacon_wait, type=%d, subtype=%d\n",
__func__, type, subtype);
if (type == IEEE80211_FC0_TYPE_MGT &&
subtype == IEEE80211_FC0_SUBTYPE_BEACON) {
DPRINTF(sc, IWN_DEBUG_TRACE | IWN_DEBUG_XMIT,
"%s: waking things up\n", __func__);
/* queue taskqueue to transmit! */
taskqueue_enqueue(sc->sc_tq, &sc->sc_xmit_task);
}
}
IWN_UNLOCK(sc);
/* Send the frame to the 802.11 layer. */
if (ni != NULL) {
if (ni->ni_flags & IEEE80211_NODE_HT)
m->m_flags |= M_AMPDU;
(void)ieee80211_input(ni, m, rssi - nf, nf);
/* Node is no longer needed. */
ieee80211_free_node(ni);
} else
(void)ieee80211_input_all(ic, m, rssi - nf, nf);
IWN_LOCK(sc);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
}
static void
iwn_agg_tx_complete(struct iwn_softc *sc, struct iwn_tx_ring *ring, int tid,
int idx, int success)
{
struct ieee80211_ratectl_tx_status *txs = &sc->sc_txs;
struct iwn_tx_data *data = &ring->data[idx];
struct iwn_node *wn;
struct mbuf *m;
struct ieee80211_node *ni;
KASSERT(data->ni != NULL, ("idx %d: no node", idx));
KASSERT(data->m != NULL, ("idx %d: no mbuf", idx));
/* Unmap and free mbuf. */
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_POSTWRITE);
bus_dmamap_unload(ring->data_dmat, data->map);
m = data->m, data->m = NULL;
ni = data->ni, data->ni = NULL;
wn = (void *)ni;
#if 0
/* XXX causes significant performance degradation. */
txs->flags = IEEE80211_RATECTL_STATUS_SHORT_RETRY |
IEEE80211_RATECTL_STATUS_LONG_RETRY;
txs->long_retries = data->long_retries - 1;
#else
txs->flags = IEEE80211_RATECTL_STATUS_SHORT_RETRY;
#endif
txs->short_retries = wn->agg[tid].short_retries;
if (success)
txs->status = IEEE80211_RATECTL_TX_SUCCESS;
else
txs->status = IEEE80211_RATECTL_TX_FAIL_UNSPECIFIED;
wn->agg[tid].short_retries = 0;
data->long_retries = 0;
DPRINTF(sc, IWN_DEBUG_AMPDU, "%s: freeing m %p ni %p idx %d qid %d\n",
__func__, m, ni, idx, ring->qid);
ieee80211_ratectl_tx_complete(ni, txs);
ieee80211_tx_complete(ni, m, !success);
}
/* Process an incoming Compressed BlockAck. */
static void
iwn_rx_compressed_ba(struct iwn_softc *sc, struct iwn_rx_desc *desc)
{
struct iwn_tx_ring *ring;
struct iwn_tx_data *data;
struct iwn_node *wn;
struct iwn_compressed_ba *ba = (struct iwn_compressed_ba *)(desc + 1);
struct ieee80211_tx_ampdu *tap;
uint64_t bitmap;
uint8_t tid;
int i, qid, shift;
int tx_ok = 0;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
qid = le16toh(ba->qid);
tap = sc->qid2tap[qid];
ring = &sc->txq[qid];
tid = tap->txa_tid;
wn = (void *)tap->txa_ni;
DPRINTF(sc, IWN_DEBUG_AMPDU, "%s: qid %d tid %d seq %04X ssn %04X\n"
"bitmap: ba %016jX wn %016jX, start %d\n",
__func__, qid, tid, le16toh(ba->seq), le16toh(ba->ssn),
(uintmax_t)le64toh(ba->bitmap), (uintmax_t)wn->agg[tid].bitmap,
wn->agg[tid].startidx);
if (wn->agg[tid].bitmap == 0)
return;
shift = wn->agg[tid].startidx - ((le16toh(ba->seq) >> 4) & 0xff);
if (shift <= -64)
shift += 0x100;
/*
* Walk the bitmap and calculate how many successful attempts
* were made.
*
* Yes, the rate control code doesn't know these are A-MPDU
* subframes; because of that, long_retries stats are not used here.
*/
bitmap = le64toh(ba->bitmap);
if (shift >= 0)
bitmap >>= shift;
else
bitmap <<= -shift;
bitmap &= wn->agg[tid].bitmap;
wn->agg[tid].bitmap = 0;
for (i = wn->agg[tid].startidx;
bitmap;
bitmap >>= 1, i = (i + 1) % IWN_TX_RING_COUNT) {
if ((bitmap & 1) == 0)
continue;
data = &ring->data[i];
if (__predict_false(data->m == NULL)) {
/*
* There is no frame; skip this entry.
*
* NB: it is "ok" to have both
* 'tx done' + 'compressed BA' replies for a frame
* with STATE_SCD_QUERY status.
*/
DPRINTF(sc, IWN_DEBUG_AMPDU,
"%s: ring %d: no entry %d\n", __func__, qid, i);
continue;
}
tx_ok++;
iwn_agg_tx_complete(sc, ring, tid, i, 1);
}
ring->queued -= tx_ok;
iwn_check_tx_ring(sc, qid);
DPRINTF(sc, IWN_DEBUG_TRACE | IWN_DEBUG_AMPDU,
"->%s: end; %d ok\n",__func__, tx_ok);
}
/*
* Process a CALIBRATION_RESULT notification sent by the initialization
* firmware in response to a CMD_CALIB_CONFIG command (5000 only).
*/
static void
iwn5000_rx_calib_results(struct iwn_softc *sc, struct iwn_rx_desc *desc)
{
struct iwn_phy_calib *calib = (struct iwn_phy_calib *)(desc + 1);
int len, idx = -1;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Runtime firmware should not send such a notification. */
if (sc->sc_flags & IWN_FLAG_CALIB_DONE){
DPRINTF(sc, IWN_DEBUG_TRACE,
"->%s received after calib done\n", __func__);
return;
}
len = (le32toh(desc->len) & 0x3fff) - 4;
switch (calib->code) {
case IWN5000_PHY_CALIB_DC:
if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_DC)
idx = 0;
break;
case IWN5000_PHY_CALIB_LO:
if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_LO)
idx = 1;
break;
case IWN5000_PHY_CALIB_TX_IQ:
if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_TX_IQ)
idx = 2;
break;
case IWN5000_PHY_CALIB_TX_IQ_PERIODIC:
if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_TX_IQ_PERIODIC)
idx = 3;
break;
case IWN5000_PHY_CALIB_BASE_BAND:
if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_BASE_BAND)
idx = 4;
break;
}
if (idx == -1) /* Ignore other results. */
return;
/* Save calibration result. */
if (sc->calibcmd[idx].buf != NULL)
free(sc->calibcmd[idx].buf, M_DEVBUF);
sc->calibcmd[idx].buf = malloc(len, M_DEVBUF, M_NOWAIT);
if (sc->calibcmd[idx].buf == NULL) {
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"not enough memory for calibration result %d\n",
calib->code);
return;
}
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"saving calibration result idx=%d, code=%d len=%d\n", idx, calib->code, len);
sc->calibcmd[idx].len = len;
memcpy(sc->calibcmd[idx].buf, calib, len);
}
static void
iwn_stats_update(struct iwn_softc *sc, struct iwn_calib_state *calib,
struct iwn_stats *stats, int len)
{
struct iwn_stats_bt *stats_bt;
struct iwn_stats *lstats;
/*
* First - check whether the length is the bluetooth or the normal one.
*
* If it's normal - just copy it and bump out.
* Otherwise we have to convert things.
*/
if (len == sizeof(struct iwn_stats) + 4) {
memcpy(&sc->last_stat, stats, sizeof(struct iwn_stats));
sc->last_stat_valid = 1;
return;
}
/*
* If it's not the bluetooth size - log, then just copy.
*/
if (len != sizeof(struct iwn_stats_bt) + 4) {
DPRINTF(sc, IWN_DEBUG_STATS,
"%s: size of rx statistics (%d) not an expected size!\n",
__func__,
len);
memcpy(&sc->last_stat, stats, sizeof(struct iwn_stats));
sc->last_stat_valid = 1;
return;
}
/*
* Ok. Time to copy.
*/
stats_bt = (struct iwn_stats_bt *) stats;
lstats = &sc->last_stat;
/* flags */
lstats->flags = stats_bt->flags;
/* rx_bt */
memcpy(&lstats->rx.ofdm, &stats_bt->rx_bt.ofdm,
sizeof(struct iwn_rx_phy_stats));
memcpy(&lstats->rx.cck, &stats_bt->rx_bt.cck,
sizeof(struct iwn_rx_phy_stats));
memcpy(&lstats->rx.general, &stats_bt->rx_bt.general_bt.common,
sizeof(struct iwn_rx_general_stats));
memcpy(&lstats->rx.ht, &stats_bt->rx_bt.ht,
sizeof(struct iwn_rx_ht_phy_stats));
/* tx */
memcpy(&lstats->tx, &stats_bt->tx,
sizeof(struct iwn_tx_stats));
/* general */
memcpy(&lstats->general, &stats_bt->general,
sizeof(struct iwn_general_stats));
/* XXX TODO: Squirrel away the extra bluetooth stats somewhere */
sc->last_stat_valid = 1;
}
/*
* Process an RX_STATISTICS or BEACON_STATISTICS firmware notification.
* The latter is sent by the firmware after each received beacon.
*/
static void
iwn_rx_statistics(struct iwn_softc *sc, struct iwn_rx_desc *desc)
{
struct iwn_ops *ops = &sc->ops;
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211vap *vap = TAILQ_FIRST(&ic->ic_vaps);
struct iwn_calib_state *calib = &sc->calib;
struct iwn_stats *stats = (struct iwn_stats *)(desc + 1);
struct iwn_stats *lstats;
int temp;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Ignore statistics received during a scan. */
if (vap->iv_state != IEEE80211_S_RUN ||
(ic->ic_flags & IEEE80211_F_SCAN)){
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s received during calib\n",
__func__);
return;
}
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_STATS,
"%s: received statistics, cmd %d, len %d\n",
__func__, desc->type, le16toh(desc->len));
sc->calib_cnt = 0; /* Reset TX power calibration timeout. */
/*
* Collect/track general statistics for reporting.
*
* This takes care of ensuring that the bluetooth sized message
* will be correctly converted to the legacy sized message.
*/
iwn_stats_update(sc, calib, stats, le16toh(desc->len));
/*
* And now, let's take a reference of it to use!
*/
lstats = &sc->last_stat;
/* Test if temperature has changed. */
if (lstats->general.temp != sc->rawtemp) {
/* Convert "raw" temperature to degC. */
sc->rawtemp = stats->general.temp;
temp = ops->get_temperature(sc);
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "%s: temperature %d\n",
__func__, temp);
/* Update TX power if need be (4965AGN only). */
if (sc->hw_type == IWN_HW_REV_TYPE_4965)
iwn4965_power_calibration(sc, temp);
}
if (desc->type != IWN_BEACON_STATISTICS)
return; /* Reply to a statistics request. */
sc->noise = iwn_get_noise(&lstats->rx.general);
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "%s: noise %d\n", __func__, sc->noise);
/* Test that RSSI and noise are present in stats report. */
if (le32toh(lstats->rx.general.flags) != 1) {
DPRINTF(sc, IWN_DEBUG_ANY, "%s\n",
"received statistics without RSSI");
return;
}
if (calib->state == IWN_CALIB_STATE_ASSOC)
iwn_collect_noise(sc, &lstats->rx.general);
else if (calib->state == IWN_CALIB_STATE_RUN) {
iwn_tune_sensitivity(sc, &lstats->rx);
/*
* XXX TODO: Only run the RX recovery if we're associated!
*/
iwn_check_rx_recovery(sc, lstats);
iwn_save_stats_counters(sc, lstats);
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
}
/*
* Save the relevant statistic counters for the next calibration
* pass.
*/
static void
iwn_save_stats_counters(struct iwn_softc *sc, const struct iwn_stats *rs)
{
struct iwn_calib_state *calib = &sc->calib;
/* Save counters values for next call. */
calib->bad_plcp_cck = le32toh(rs->rx.cck.bad_plcp);
calib->fa_cck = le32toh(rs->rx.cck.fa);
calib->bad_plcp_ht = le32toh(rs->rx.ht.bad_plcp);
calib->bad_plcp_ofdm = le32toh(rs->rx.ofdm.bad_plcp);
calib->fa_ofdm = le32toh(rs->rx.ofdm.fa);
/* Last time we received these tick values */
sc->last_calib_ticks = ticks;
}
/*
* Process a TX_DONE firmware notification. Unfortunately, the 4965AGN
* and 5000 adapters have different, incompatible TX status formats.
*/
static void
iwn4965_tx_done(struct iwn_softc *sc, struct iwn_rx_desc *desc,
struct iwn_rx_data *data)
{
struct iwn4965_tx_stat *stat = (struct iwn4965_tx_stat *)(desc + 1);
int qid = desc->qid & IWN_RX_DESC_QID_MSK;
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: "
"qid %d idx %d RTS retries %d ACK retries %d nkill %d rate %x duration %d status %x\n",
__func__, desc->qid, desc->idx,
stat->rtsfailcnt,
stat->ackfailcnt,
stat->btkillcnt,
stat->rate, le16toh(stat->duration),
le32toh(stat->status));
if (qid >= sc->firstaggqueue && stat->nframes != 1) {
iwn_ampdu_tx_done(sc, qid, stat->nframes, stat->rtsfailcnt,
&stat->status);
} else {
iwn_tx_done(sc, desc, stat->rtsfailcnt, stat->ackfailcnt,
le32toh(stat->status) & 0xff);
}
}
static void
iwn5000_tx_done(struct iwn_softc *sc, struct iwn_rx_desc *desc,
struct iwn_rx_data *data)
{
struct iwn5000_tx_stat *stat = (struct iwn5000_tx_stat *)(desc + 1);
int qid = desc->qid & IWN_RX_DESC_QID_MSK;
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: "
"qid %d idx %d RTS retries %d ACK retries %d nkill %d rate %x duration %d status %x\n",
__func__, desc->qid, desc->idx,
stat->rtsfailcnt,
stat->ackfailcnt,
stat->btkillcnt,
stat->rate, le16toh(stat->duration),
le32toh(stat->status));
#ifdef notyet
/* Reset TX scheduler slot. */
iwn5000_reset_sched(sc, qid, desc->idx);
#endif
if (qid >= sc->firstaggqueue && stat->nframes != 1) {
iwn_ampdu_tx_done(sc, qid, stat->nframes, stat->rtsfailcnt,
&stat->status);
} else {
iwn_tx_done(sc, desc, stat->rtsfailcnt, stat->ackfailcnt,
le16toh(stat->status) & 0xff);
}
}
static void
iwn_adj_ampdu_ptr(struct iwn_softc *sc, struct iwn_tx_ring *ring)
{
int i;
for (i = ring->read; i != ring->cur; i = (i + 1) % IWN_TX_RING_COUNT) {
struct iwn_tx_data *data = &ring->data[i];
if (data->m != NULL)
break;
data->remapped = 0;
}
ring->read = i;
}
/*
* Adapter-independent backend for TX_DONE firmware notifications.
*/
static void
iwn_tx_done(struct iwn_softc *sc, struct iwn_rx_desc *desc, int rtsfailcnt,
int ackfailcnt, uint8_t status)
{
struct ieee80211_ratectl_tx_status *txs = &sc->sc_txs;
struct iwn_tx_ring *ring = &sc->txq[desc->qid & IWN_RX_DESC_QID_MSK];
struct iwn_tx_data *data = &ring->data[desc->idx];
struct mbuf *m;
struct ieee80211_node *ni;
if (__predict_false(data->m == NULL &&
ring->qid >= sc->firstaggqueue)) {
/*
* There is no frame; skip this entry.
*/
DPRINTF(sc, IWN_DEBUG_AMPDU, "%s: ring %d: no entry %d\n",
__func__, ring->qid, desc->idx);
return;
}
KASSERT(data->ni != NULL, ("no node"));
KASSERT(data->m != NULL, ("no mbuf"));
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Unmap and free mbuf. */
bus_dmamap_sync(ring->data_dmat, data->map, BUS_DMASYNC_POSTWRITE);
bus_dmamap_unload(ring->data_dmat, data->map);
m = data->m, data->m = NULL;
ni = data->ni, data->ni = NULL;
data->long_retries = 0;
if (ring->qid >= sc->firstaggqueue)
iwn_adj_ampdu_ptr(sc, ring);
/*
* XXX f/w may hang (device timeout) when desc->idx - ring->read == 64
* (aggregation queues only).
*/
ring->queued--;
iwn_check_tx_ring(sc, ring->qid);
/*
* Update rate control statistics for the node.
*/
txs->flags = IEEE80211_RATECTL_STATUS_SHORT_RETRY |
IEEE80211_RATECTL_STATUS_LONG_RETRY;
txs->short_retries = rtsfailcnt;
txs->long_retries = ackfailcnt;
if (!(status & IWN_TX_FAIL))
txs->status = IEEE80211_RATECTL_TX_SUCCESS;
else {
switch (status) {
case IWN_TX_FAIL_SHORT_LIMIT:
txs->status = IEEE80211_RATECTL_TX_FAIL_SHORT;
break;
case IWN_TX_FAIL_LONG_LIMIT:
txs->status = IEEE80211_RATECTL_TX_FAIL_LONG;
break;
case IWN_TX_STATUS_FAIL_LIFE_EXPIRE:
txs->status = IEEE80211_RATECTL_TX_FAIL_EXPIRED;
break;
default:
txs->status = IEEE80211_RATECTL_TX_FAIL_UNSPECIFIED;
break;
}
}
ieee80211_ratectl_tx_complete(ni, txs);
/*
* Channels marked for "radar" require traffic to be received
* to unlock before we can transmit. Until traffic is seen
* any attempt to transmit is returned immediately with status
* set to IWN_TX_FAIL_TX_LOCKED. Unfortunately this can easily
* happen on the first authenticate after scanning. To work around
* this, we ignore a failure of this sort in AUTH state so the
* 802.11 layer will fall back to using a timeout to wait for
* the AUTH reply. This allows the firmware time to see
* traffic so a subsequent retry of AUTH succeeds. It's
* unclear why the firmware does not maintain state for
* channels recently visited as this would allow immediate
* use of the channel after a scan (where we see traffic).
*/
if (status == IWN_TX_FAIL_TX_LOCKED &&
ni->ni_vap->iv_state == IEEE80211_S_AUTH)
ieee80211_tx_complete(ni, m, 0);
else
ieee80211_tx_complete(ni, m,
(status & IWN_TX_FAIL) != 0);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
}
/*
* Process a "command done" firmware notification. This is where we wake up
* processes waiting for a synchronous command completion.
*/
static void
iwn_cmd_done(struct iwn_softc *sc, struct iwn_rx_desc *desc)
{
struct iwn_tx_ring *ring;
struct iwn_tx_data *data;
int cmd_queue_num;
if (sc->sc_flags & IWN_FLAG_PAN_SUPPORT)
cmd_queue_num = IWN_PAN_CMD_QUEUE;
else
cmd_queue_num = IWN_CMD_QUEUE_NUM;
if ((desc->qid & IWN_RX_DESC_QID_MSK) != cmd_queue_num)
return; /* Not a command ack. */
ring = &sc->txq[cmd_queue_num];
data = &ring->data[desc->idx];
/* If the command was mapped in an mbuf, free it. */
if (data->m != NULL) {
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_POSTWRITE);
bus_dmamap_unload(ring->data_dmat, data->map);
m_freem(data->m);
data->m = NULL;
}
wakeup(&ring->desc[desc->idx]);
}
static int
iwn_ampdu_check_bitmap(uint64_t bitmap, int start, int idx)
{
int bit, shift;
bit = idx - start;
shift = 0;
if (bit >= 64) {
shift = 0x100 - bit;
bit = 0;
} else if (bit <= -64)
bit = 0x100 + bit;
else if (bit < 0) {
shift = -bit;
bit = 0;
}
if (bit - shift >= 64)
return (0);
return ((bitmap & (1ULL << (bit - shift))) != 0);
}
/*
* Firmware bug workaround: when the 'retries' counter
* overflows, the 'seqno' field will be incremented:
* status|sequence|status|sequence|status|sequence
* 0000 0A48 0001 0A49 0000 0A6A
* 1000 0A48 1000 0A49 1000 0A6A
* 2000 0A48 2000 0A49 2000 0A6A
* ...
* E000 0A48 E000 0A49 E000 0A6A
* F000 0A48 F000 0A49 F000 0A6A
* 0000 0A49 0000 0A49 0000 0A6B
* 1000 0A49 1000 0A49 1000 0A6B
* ...
* D000 0A49 D000 0A49 D000 0A6B
* E000 0A49 E001 0A49 E000 0A6B
* F000 0A49 F001 0A49 F000 0A6B
* 0000 0A4A 0000 0A4B 0000 0A6A
* 1000 0A4A 1000 0A4B 1000 0A6A
* ...
*
* Odd 'seqno' numbers are incremented by 2 every 2 overflows.
* For even 'seqno' with 'seqno' % 4 != 0 the overflow is cyclic (0 -> +1 -> 0).
* Not checked with nretries >= 64.
*
*/
static int
iwn_ampdu_index_check(struct iwn_softc *sc, struct iwn_tx_ring *ring,
uint64_t bitmap, int start, int idx)
{
struct ieee80211com *ic = &sc->sc_ic;
struct iwn_tx_data *data;
int diff, min_retries, max_retries, new_idx, loop_end;
new_idx = idx - IWN_LONG_RETRY_LIMIT_LOG;
if (new_idx < 0)
new_idx += IWN_TX_RING_COUNT;
/*
* Corner case: check that the retry count is not too big;
* reset the device otherwise.
*/
if (!iwn_ampdu_check_bitmap(bitmap, start, new_idx)) {
data = &ring->data[new_idx];
if (data->long_retries > IWN_LONG_RETRY_LIMIT) {
device_printf(sc->sc_dev,
"%s: retry count (%d) for idx %d/%d overflow, "
"resetting...\n", __func__, data->long_retries,
ring->qid, new_idx);
ieee80211_restart_all(ic);
return (-1);
}
}
/* Correct index if needed. */
loop_end = idx;
do {
data = &ring->data[new_idx];
diff = idx - new_idx;
if (diff < 0)
diff += IWN_TX_RING_COUNT;
min_retries = IWN_LONG_RETRY_FW_OVERFLOW * diff;
if ((new_idx % 2) == 0)
max_retries = IWN_LONG_RETRY_FW_OVERFLOW * (diff + 1);
else
max_retries = IWN_LONG_RETRY_FW_OVERFLOW * (diff + 2);
if (!iwn_ampdu_check_bitmap(bitmap, start, new_idx) &&
((data->long_retries >= min_retries &&
data->long_retries < max_retries) ||
(diff == 1 &&
(new_idx & 0x03) == 0x02 &&
data->long_retries >= IWN_LONG_RETRY_FW_OVERFLOW))) {
DPRINTF(sc, IWN_DEBUG_AMPDU,
"%s: correcting index %d -> %d in queue %d"
" (retries %d)\n", __func__, idx, new_idx,
ring->qid, data->long_retries);
return (new_idx);
}
new_idx = (new_idx + 1) % IWN_TX_RING_COUNT;
} while (new_idx != loop_end);
return (idx);
}
static void
iwn_ampdu_tx_done(struct iwn_softc *sc, int qid, int nframes, int rtsfailcnt,
void *stat)
{
struct iwn_tx_ring *ring = &sc->txq[qid];
struct ieee80211_tx_ampdu *tap = sc->qid2tap[qid];
struct iwn_node *wn = (void *)tap->txa_ni;
struct iwn_tx_data *data;
uint64_t bitmap = 0;
uint16_t *aggstatus = stat;
uint8_t tid = tap->txa_tid;
int bit, i, idx, shift, start, tx_err;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
start = le16toh(*(aggstatus + nframes * 2)) & 0xff;
for (i = 0; i < nframes; i++) {
uint16_t status = le16toh(aggstatus[i * 2]);
if (status & IWN_AGG_TX_STATE_IGNORE_MASK)
continue;
idx = le16toh(aggstatus[i * 2 + 1]) & 0xff;
data = &ring->data[idx];
if (data->remapped) {
idx = iwn_ampdu_index_check(sc, ring, bitmap, start, idx);
if (idx == -1) {
/* skip error (device will be restarted anyway). */
continue;
}
/* Index may have changed. */
data = &ring->data[idx];
}
/*
* XXX Sometimes (rarely) some frames are excluded from events.
* XXX Due to that long_retries counter may be wrong.
*/
data->long_retries &= ~0x0f;
data->long_retries += IWN_AGG_TX_TRY_COUNT(status) + 1;
if (data->long_retries >= IWN_LONG_RETRY_FW_OVERFLOW) {
int diff, wrong_idx;
diff = data->long_retries / IWN_LONG_RETRY_FW_OVERFLOW;
wrong_idx = (idx + diff) % IWN_TX_RING_COUNT;
/*
* Mark the entry so the above code will check it
* next time.
*/
ring->data[wrong_idx].remapped = 1;
}
if (status & IWN_AGG_TX_STATE_UNDERRUN_MSK) {
/*
* NB: count retries but postpone - it was not
* transmitted.
*/
continue;
}
bit = idx - start;
shift = 0;
if (bit >= 64) {
shift = 0x100 - bit;
bit = 0;
} else if (bit <= -64)
bit = 0x100 + bit;
else if (bit < 0) {
shift = -bit;
bit = 0;
}
bitmap = bitmap << shift;
bitmap |= 1ULL << bit;
}
wn->agg[tid].startidx = start;
wn->agg[tid].bitmap = bitmap;
wn->agg[tid].short_retries = rtsfailcnt;
DPRINTF(sc, IWN_DEBUG_AMPDU, "%s: nframes %d start %d bitmap %016jX\n",
__func__, nframes, start, (uintmax_t)bitmap);
i = ring->read;
for (tx_err = 0;
i != wn->agg[tid].startidx;
i = (i + 1) % IWN_TX_RING_COUNT) {
data = &ring->data[i];
data->remapped = 0;
if (data->m == NULL)
continue;
tx_err++;
iwn_agg_tx_complete(sc, ring, tid, i, 0);
}
ring->read = wn->agg[tid].startidx;
ring->queued -= tx_err;
iwn_check_tx_ring(sc, qid);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
}
/*
* Process an INT_FH_RX or INT_SW_RX interrupt.
*/
static void
iwn_notif_intr(struct iwn_softc *sc)
{
struct iwn_ops *ops = &sc->ops;
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211vap *vap = TAILQ_FIRST(&ic->ic_vaps);
uint16_t hw;
+ int is_stopped;
bus_dmamap_sync(sc->rxq.stat_dma.tag, sc->rxq.stat_dma.map,
BUS_DMASYNC_POSTREAD);
hw = le16toh(sc->rxq.stat->closed_count) & 0xfff;
while (sc->rxq.cur != hw) {
struct iwn_rx_data *data = &sc->rxq.data[sc->rxq.cur];
struct iwn_rx_desc *desc;
bus_dmamap_sync(sc->rxq.data_dmat, data->map,
BUS_DMASYNC_POSTREAD);
desc = mtod(data->m, struct iwn_rx_desc *);
DPRINTF(sc, IWN_DEBUG_RECV,
"%s: cur=%d; qid %x idx %d flags %x type %d(%s) len %d\n",
__func__, sc->rxq.cur, desc->qid & IWN_RX_DESC_QID_MSK,
desc->idx, desc->flags, desc->type,
iwn_intr_str(desc->type), le16toh(desc->len));
if (!(desc->qid & IWN_UNSOLICITED_RX_NOTIF)) /* Reply to a command. */
iwn_cmd_done(sc, desc);
switch (desc->type) {
case IWN_RX_PHY:
iwn_rx_phy(sc, desc);
break;
case IWN_RX_DONE: /* 4965AGN only. */
case IWN_MPDU_RX_DONE:
/* An 802.11 frame has been received. */
iwn_rx_done(sc, desc, data);
+
+ is_stopped = (sc->sc_flags & IWN_FLAG_RUNNING) == 0;
+ if (__predict_false(is_stopped))
+ return;
+
break;
case IWN_RX_COMPRESSED_BA:
/* A Compressed BlockAck has been received. */
iwn_rx_compressed_ba(sc, desc);
break;
case IWN_TX_DONE:
/* An 802.11 frame has been transmitted. */
ops->tx_done(sc, desc, data);
break;
case IWN_RX_STATISTICS:
case IWN_BEACON_STATISTICS:
iwn_rx_statistics(sc, desc);
break;
case IWN_BEACON_MISSED:
{
struct iwn_beacon_missed *miss =
(struct iwn_beacon_missed *)(desc + 1);
int misses;
misses = le32toh(miss->consecutive);
DPRINTF(sc, IWN_DEBUG_STATE,
"%s: beacons missed %d/%d\n", __func__,
misses, le32toh(miss->total));
/*
* If more than 5 consecutive beacons are missed,
* reinitialize the sensitivity state machine.
*/
if (vap->iv_state == IEEE80211_S_RUN &&
(ic->ic_flags & IEEE80211_F_SCAN) == 0) {
if (misses > 5)
(void)iwn_init_sensitivity(sc);
if (misses >= vap->iv_bmissthreshold) {
IWN_UNLOCK(sc);
ieee80211_beacon_miss(ic);
IWN_LOCK(sc);
+
+ is_stopped = (sc->sc_flags &
+ IWN_FLAG_RUNNING) == 0;
+ if (__predict_false(is_stopped))
+ return;
}
}
break;
}
case IWN_UC_READY:
{
struct iwn_ucode_info *uc =
(struct iwn_ucode_info *)(desc + 1);
/* The microcontroller is ready. */
DPRINTF(sc, IWN_DEBUG_RESET,
"microcode alive notification version=%d.%d "
"subtype=%x alive=%x\n", uc->major, uc->minor,
uc->subtype, le32toh(uc->valid));
if (le32toh(uc->valid) != 1) {
device_printf(sc->sc_dev,
"microcontroller initialization failed");
break;
}
if (uc->subtype == IWN_UCODE_INIT) {
/* Save microcontroller report. */
memcpy(&sc->ucode_info, uc, sizeof (*uc));
}
/* Save the address of the error log in SRAM. */
sc->errptr = le32toh(uc->errptr);
break;
}
#ifdef IWN_DEBUG
case IWN_STATE_CHANGED:
{
/*
* A state change notification lets a hardware switch
* change be noted. However, we handle this in iwn_intr
* as we get both the enable/disable intr.
*/
uint32_t *status = (uint32_t *)(desc + 1);
DPRINTF(sc, IWN_DEBUG_INTR | IWN_DEBUG_STATE,
"state changed to %x\n",
le32toh(*status));
break;
}
case IWN_START_SCAN:
{
struct iwn_start_scan *scan =
(struct iwn_start_scan *)(desc + 1);
DPRINTF(sc, IWN_DEBUG_ANY,
"%s: scanning channel %d status %x\n",
__func__, scan->chan, le32toh(scan->status));
break;
}
#endif
case IWN_STOP_SCAN:
{
#ifdef IWN_DEBUG
struct iwn_stop_scan *scan =
(struct iwn_stop_scan *)(desc + 1);
DPRINTF(sc, IWN_DEBUG_STATE | IWN_DEBUG_SCAN,
"scan finished nchan=%d status=%d chan=%d\n",
scan->nchan, scan->status, scan->chan);
#endif
sc->sc_is_scanning = 0;
callout_stop(&sc->scan_timeout);
IWN_UNLOCK(sc);
ieee80211_scan_next(vap);
IWN_LOCK(sc);
+
+ is_stopped = (sc->sc_flags & IWN_FLAG_RUNNING) == 0;
+ if (__predict_false(is_stopped))
+ return;
+
break;
}
case IWN5000_CALIBRATION_RESULT:
iwn5000_rx_calib_results(sc, desc);
break;
case IWN5000_CALIBRATION_DONE:
sc->sc_flags |= IWN_FLAG_CALIB_DONE;
wakeup(sc);
break;
}
sc->rxq.cur = (sc->rxq.cur + 1) % IWN_RX_RING_COUNT;
}
/* Tell the firmware what we have processed. */
hw = (hw == 0) ? IWN_RX_RING_COUNT - 1 : hw - 1;
IWN_WRITE(sc, IWN_FH_RX_WPTR, hw & ~7);
}
/*
* Process an INT_WAKEUP interrupt raised when the microcontroller wakes up
* from power-down sleep mode.
*/
static void
iwn_wakeup_intr(struct iwn_softc *sc)
{
int qid;
DPRINTF(sc, IWN_DEBUG_RESET, "%s: ucode wakeup from power-down sleep\n",
__func__);
/* Wakeup RX and TX rings. */
IWN_WRITE(sc, IWN_FH_RX_WPTR, sc->rxq.cur & ~7);
for (qid = 0; qid < sc->ntxqs; qid++) {
struct iwn_tx_ring *ring = &sc->txq[qid];
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, qid << 8 | ring->cur);
}
}
static void
iwn_rftoggle_task(void *arg, int npending)
{
struct iwn_softc *sc = arg;
struct ieee80211com *ic = &sc->sc_ic;
uint32_t tmp;
IWN_LOCK(sc);
tmp = IWN_READ(sc, IWN_GP_CNTRL);
IWN_UNLOCK(sc);
device_printf(sc->sc_dev, "RF switch: radio %s\n",
(tmp & IWN_GP_CNTRL_RFKILL) ? "enabled" : "disabled");
if (!(tmp & IWN_GP_CNTRL_RFKILL)) {
ieee80211_suspend_all(ic);
/* Enable interrupts to get RF toggle notification. */
IWN_LOCK(sc);
IWN_WRITE(sc, IWN_INT, 0xffffffff);
IWN_WRITE(sc, IWN_INT_MASK, sc->int_mask);
IWN_UNLOCK(sc);
} else
ieee80211_resume_all(ic);
}
/*
* Dump the error log of the firmware when a firmware panic occurs. Although
* we can't debug the firmware because it is neither open source nor free, the
* log can help us identify certain classes of problems.
*/
static void
iwn_fatal_intr(struct iwn_softc *sc)
{
struct iwn_fw_dump dump;
int i;
IWN_LOCK_ASSERT(sc);
/* Force a complete recalibration on next init. */
sc->sc_flags &= ~IWN_FLAG_CALIB_DONE;
/* Check that the error log address is valid. */
if (sc->errptr < IWN_FW_DATA_BASE ||
sc->errptr + sizeof (dump) >
IWN_FW_DATA_BASE + sc->fw_data_maxsz) {
printf("%s: bad firmware error log address 0x%08x\n", __func__,
sc->errptr);
return;
}
if (iwn_nic_lock(sc) != 0) {
printf("%s: could not read firmware error log\n", __func__);
return;
}
/* Read firmware error log from SRAM. */
iwn_mem_read_region_4(sc, sc->errptr, (uint32_t *)&dump,
sizeof (dump) / sizeof (uint32_t));
iwn_nic_unlock(sc);
if (dump.valid == 0) {
printf("%s: firmware error log is empty\n", __func__);
return;
}
printf("firmware error log:\n");
printf(" error type = \"%s\" (0x%08X)\n",
(dump.id < nitems(iwn_fw_errmsg)) ?
iwn_fw_errmsg[dump.id] : "UNKNOWN",
dump.id);
printf(" program counter = 0x%08X\n", dump.pc);
printf(" source line = 0x%08X\n", dump.src_line);
printf(" error data = 0x%08X%08X\n",
dump.error_data[0], dump.error_data[1]);
printf(" branch link = 0x%08X%08X\n",
dump.branch_link[0], dump.branch_link[1]);
printf(" interrupt link = 0x%08X%08X\n",
dump.interrupt_link[0], dump.interrupt_link[1]);
printf(" time = %u\n", dump.time[0]);
/* Dump driver status (TX and RX rings) while we're here. */
printf("driver status:\n");
for (i = 0; i < sc->ntxqs; i++) {
struct iwn_tx_ring *ring = &sc->txq[i];
printf(" tx ring %2d: qid=%-2d cur=%-3d queued=%-3d\n",
i, ring->qid, ring->cur, ring->queued);
}
printf(" rx ring: cur=%d\n", sc->rxq.cur);
}
static void
iwn_intr(void *arg)
{
struct iwn_softc *sc = arg;
uint32_t r1, r2, tmp;
IWN_LOCK(sc);
/* Disable interrupts. */
IWN_WRITE(sc, IWN_INT_MASK, 0);
/* Read interrupts from ICT (fast) or from registers (slow). */
if (sc->sc_flags & IWN_FLAG_USE_ICT) {
bus_dmamap_sync(sc->ict_dma.tag, sc->ict_dma.map,
BUS_DMASYNC_POSTREAD);
tmp = 0;
while (sc->ict[sc->ict_cur] != 0) {
tmp |= sc->ict[sc->ict_cur];
sc->ict[sc->ict_cur] = 0; /* Acknowledge. */
sc->ict_cur = (sc->ict_cur + 1) % IWN_ICT_COUNT;
}
tmp = le32toh(tmp);
if (tmp == 0xffffffff) /* Shouldn't happen. */
tmp = 0;
else if (tmp & 0xc0000) /* Workaround a HW bug. */
tmp |= 0x8000;
r1 = (tmp & 0xff00) << 16 | (tmp & 0xff);
r2 = 0; /* Unused. */
} else {
r1 = IWN_READ(sc, IWN_INT);
if (r1 == 0xffffffff || (r1 & 0xfffffff0) == 0xa5a5a5a0) {
IWN_UNLOCK(sc);
return; /* Hardware gone! */
}
r2 = IWN_READ(sc, IWN_FH_INT);
}
DPRINTF(sc, IWN_DEBUG_INTR, "interrupt reg1=0x%08x reg2=0x%08x\n",
r1, r2);
if (r1 == 0 && r2 == 0)
goto done; /* Interrupt not for us. */
/* Acknowledge interrupts. */
IWN_WRITE(sc, IWN_INT, r1);
if (!(sc->sc_flags & IWN_FLAG_USE_ICT))
IWN_WRITE(sc, IWN_FH_INT, r2);
if (r1 & IWN_INT_RF_TOGGLED) {
taskqueue_enqueue(sc->sc_tq, &sc->sc_rftoggle_task);
goto done;
}
if (r1 & IWN_INT_CT_REACHED) {
device_printf(sc->sc_dev, "%s: critical temperature reached!\n",
__func__);
}
if (r1 & (IWN_INT_SW_ERR | IWN_INT_HW_ERR)) {
device_printf(sc->sc_dev, "%s: fatal firmware error\n",
__func__);
#ifdef IWN_DEBUG
iwn_debug_register(sc);
#endif
/* Dump firmware error log and stop. */
iwn_fatal_intr(sc);
taskqueue_enqueue(sc->sc_tq, &sc->sc_panic_task);
goto done;
}
if ((r1 & (IWN_INT_FH_RX | IWN_INT_SW_RX | IWN_INT_RX_PERIODIC)) ||
(r2 & IWN_FH_INT_RX)) {
if (sc->sc_flags & IWN_FLAG_USE_ICT) {
if (r1 & (IWN_INT_FH_RX | IWN_INT_SW_RX))
IWN_WRITE(sc, IWN_FH_INT, IWN_FH_INT_RX);
IWN_WRITE_1(sc, IWN_INT_PERIODIC,
IWN_INT_PERIODIC_DIS);
iwn_notif_intr(sc);
if (r1 & (IWN_INT_FH_RX | IWN_INT_SW_RX)) {
IWN_WRITE_1(sc, IWN_INT_PERIODIC,
IWN_INT_PERIODIC_ENA);
}
} else
iwn_notif_intr(sc);
}
if ((r1 & IWN_INT_FH_TX) || (r2 & IWN_FH_INT_TX)) {
if (sc->sc_flags & IWN_FLAG_USE_ICT)
IWN_WRITE(sc, IWN_FH_INT, IWN_FH_INT_TX);
wakeup(sc); /* FH DMA transfer completed. */
}
if (r1 & IWN_INT_ALIVE)
wakeup(sc); /* Firmware is alive. */
if (r1 & IWN_INT_WAKEUP)
iwn_wakeup_intr(sc);
done:
/* Re-enable interrupts. */
if (sc->sc_flags & IWN_FLAG_RUNNING)
IWN_WRITE(sc, IWN_INT_MASK, sc->int_mask);
IWN_UNLOCK(sc);
}
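The ICT fast path above folds the accumulated interrupt-cause table entries back into the same bit layout as the IWN_INT register: the low byte stays put and the high byte moves to bits 24..31, after applying the all-ones bus-error check and the hardware-bug workaround. A standalone sketch of that folding (the helper name is illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch (not driver code): mirror of the ICT-to-r1
 * mangling in iwn_intr() above.
 */
static uint32_t
ict_to_r1(uint32_t tmp)
{
	if (tmp == 0xffffffff)	/* Bus returned all-ones: no interrupt. */
		tmp = 0;
	else if (tmp & 0xc0000)	/* HW bug workaround from the driver. */
		tmp |= 0x8000;
	/* Low byte kept in bits 0..7, high byte moved to bits 24..31. */
	return (tmp & 0xff00) << 16 | (tmp & 0xff);
}
```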
/*
* Update TX scheduler ring when transmitting an 802.11 frame (4965AGN and
* 5000 adapters use a slightly different format).
*/
static void
iwn4965_update_sched(struct iwn_softc *sc, int qid, int idx, uint8_t id,
uint16_t len)
{
uint16_t *w = &sc->sched[qid * IWN4965_SCHED_COUNT + idx];
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
*w = htole16(len + 8);
bus_dmamap_sync(sc->sched_dma.tag, sc->sched_dma.map,
BUS_DMASYNC_PREWRITE);
if (idx < IWN_SCHED_WINSZ) {
*(w + IWN_TX_RING_COUNT) = *w;
bus_dmamap_sync(sc->sched_dma.tag, sc->sched_dma.map,
BUS_DMASYNC_PREWRITE);
}
}
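The scheduler update above also writes a shadow copy: entries whose index is below IWN_SCHED_WINSZ are duplicated at index + IWN_TX_RING_COUNT, so a hardware read window that wraps past the last ring slot still finds a valid byte count. A minimal sketch, assuming the driver's usual constants (256-entry ring, 64-entry window) and an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>

#define RING_COUNT	256	/* plays the role of IWN_TX_RING_COUNT */
#define WINSZ		64	/* plays the role of IWN_SCHED_WINSZ */

/*
 * Standalone sketch (not driver code): byte-count table update with
 * the wraparound mirror used by iwn4965/iwn5000_update_sched().
 */
static void
sched_set(uint16_t *sched, int idx, uint16_t val)
{
	sched[idx] = val;
	if (idx < WINSZ)
		sched[RING_COUNT + idx] = val;	/* Mirror for wraparound. */
}
```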
static void
iwn5000_update_sched(struct iwn_softc *sc, int qid, int idx, uint8_t id,
uint16_t len)
{
uint16_t *w = &sc->sched[qid * IWN5000_SCHED_COUNT + idx];
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
*w = htole16(id << 12 | (len + 8));
bus_dmamap_sync(sc->sched_dma.tag, sc->sched_dma.map,
BUS_DMASYNC_PREWRITE);
if (idx < IWN_SCHED_WINSZ) {
*(w + IWN_TX_RING_COUNT) = *w;
bus_dmamap_sync(sc->sched_dma.tag, sc->sched_dma.map,
BUS_DMASYNC_PREWRITE);
}
}
#ifdef notyet
static void
iwn5000_reset_sched(struct iwn_softc *sc, int qid, int idx)
{
uint16_t *w = &sc->sched[qid * IWN5000_SCHED_COUNT + idx];
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
*w = (*w & htole16(0xf000)) | htole16(1);
bus_dmamap_sync(sc->sched_dma.tag, sc->sched_dma.map,
BUS_DMASYNC_PREWRITE);
if (idx < IWN_SCHED_WINSZ) {
*(w + IWN_TX_RING_COUNT) = *w;
bus_dmamap_sync(sc->sched_dma.tag, sc->sched_dma.map,
BUS_DMASYNC_PREWRITE);
}
}
#endif
/*
* Check whether OFDM 11g protection will be enabled for the given rate.
*
* The original driver code only enabled protection for OFDM rates.
* It didn't check to see whether it was operating in 11a or 11bg mode.
*/
static int
iwn_check_rate_needs_protection(struct iwn_softc *sc,
struct ieee80211vap *vap, uint8_t rate)
{
struct ieee80211com *ic = vap->iv_ic;
/*
* Not in 2GHz mode? Then there's no need to enable OFDM
* 11bg protection.
*/
if (! IEEE80211_IS_CHAN_2GHZ(ic->ic_curchan)) {
return (0);
}
/*
* 11bg protection not enabled? Then don't use it.
*/
if ((ic->ic_flags & IEEE80211_F_USEPROT) == 0)
return (0);
/*
* If it's an 11n (MCS) rate, skip protection here;
* it is handled by a separate 11n-specific check.
*/
if (rate & IEEE80211_RATE_MCS) {
return (0);
}
/*
* Do a rate table lookup. If the PHY is CCK,
* don't do protection.
*/
if (ieee80211_rate2phytype(ic->ic_rt, rate) == IEEE80211_T_CCK)
return (0);
/*
* Yup, enable protection.
*/
return (1);
}
/*
* Return an index between 0 and IWN_MAX_TX_RETRIES-1 into the link
* quality table that corresponds to this particular rate entry.
*/
static int
iwn_tx_rate_to_linkq_offset(struct iwn_softc *sc, struct ieee80211_node *ni,
uint8_t rate)
{
struct ieee80211_rateset *rs;
int is_11n;
int nr;
int i;
uint8_t cmp_rate;
/*
* Figure out if we're using 11n or not here.
*/
if (IEEE80211_IS_CHAN_HT(ni->ni_chan) && ni->ni_htrates.rs_nrates > 0)
is_11n = 1;
else
is_11n = 0;
/*
* Use the correct rate table.
*/
if (is_11n) {
rs = (struct ieee80211_rateset *) &ni->ni_htrates;
nr = ni->ni_htrates.rs_nrates;
} else {
rs = &ni->ni_rates;
nr = rs->rs_nrates;
}
/*
* Find the relevant link quality entry in the table.
*/
for (i = 0; i < nr && i < IWN_MAX_TX_RETRIES - 1 ; i++) {
/*
* The link quality table index starts at 0 == highest
* rate, so we walk the rate table backwards.
*/
cmp_rate = rs->rs_rates[(nr - 1) - i];
if (rate & IEEE80211_RATE_MCS)
cmp_rate |= IEEE80211_RATE_MCS;
#if 0
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: idx %d: nr=%d, rate=0x%02x, rateentry=0x%02x\n",
__func__,
i,
nr,
rate,
cmp_rate);
#endif
if (cmp_rate == rate)
return (i);
}
/* Not found? Fall back to the last (lowest-rate) entry. */
return (IWN_MAX_TX_RETRIES - 1);
}
static int
iwn_tx_data(struct iwn_softc *sc, struct mbuf *m, struct ieee80211_node *ni)
{
const struct ieee80211_txparam *tp = ni->ni_txparms;
struct ieee80211vap *vap = ni->ni_vap;
struct ieee80211com *ic = ni->ni_ic;
struct iwn_node *wn = (void *)ni;
struct iwn_tx_ring *ring;
struct iwn_tx_cmd *cmd;
struct iwn_cmd_data *tx;
struct ieee80211_frame *wh;
struct ieee80211_key *k = NULL;
uint32_t flags;
uint16_t qos;
uint8_t tid, type;
int ac, totlen, rate;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
IWN_LOCK_ASSERT(sc);
wh = mtod(m, struct ieee80211_frame *);
type = wh->i_fc[0] & IEEE80211_FC0_TYPE_MASK;
/* Select EDCA Access Category and TX ring for this frame. */
if (IEEE80211_QOS_HAS_SEQ(wh)) {
qos = ((const struct ieee80211_qosframe *)wh)->i_qos[0];
tid = qos & IEEE80211_QOS_TID;
} else {
qos = 0;
tid = 0;
}
/* Choose a TX rate index. */
if (type == IEEE80211_FC0_TYPE_MGT ||
type == IEEE80211_FC0_TYPE_CTL ||
(m->m_flags & M_EAPOL) != 0)
rate = tp->mgmtrate;
else if (IEEE80211_IS_MULTICAST(wh->i_addr1))
rate = tp->mcastrate;
else if (tp->ucastrate != IEEE80211_FIXED_RATE_NONE)
rate = tp->ucastrate;
else {
/* XXX pass pktlen */
(void) ieee80211_ratectl_rate(ni, NULL, 0);
rate = ni->ni_txrate;
}
/*
* XXX TODO: Group addressed frames aren't aggregated and must
* go to the normal non-aggregation queue, and have a NONQOS TID
* assigned from net80211.
*/
ac = M_WME_GETAC(m);
if (m->m_flags & M_AMPDU_MPDU) {
struct ieee80211_tx_ampdu *tap = &ni->ni_tx_ampdu[ac];
if (!IEEE80211_AMPDU_RUNNING(tap))
return (EINVAL);
ac = *(int *)tap->txa_private;
}
/* Encrypt the frame if need be. */
if (wh->i_fc[1] & IEEE80211_FC1_PROTECTED) {
/* Retrieve key for TX. */
k = ieee80211_crypto_encap(ni, m);
if (k == NULL) {
return ENOBUFS;
}
/* 802.11 header may have moved. */
wh = mtod(m, struct ieee80211_frame *);
}
totlen = m->m_pkthdr.len;
if (ieee80211_radiotap_active_vap(vap)) {
struct iwn_tx_radiotap_header *tap = &sc->sc_txtap;
tap->wt_flags = 0;
tap->wt_rate = rate;
if (k != NULL)
tap->wt_flags |= IEEE80211_RADIOTAP_F_WEP;
ieee80211_radiotap_tx(vap, m);
}
flags = 0;
if (!IEEE80211_IS_MULTICAST(wh->i_addr1)) {
/* Unicast frame, check if an ACK is expected. */
if (!qos || (qos & IEEE80211_QOS_ACKPOLICY) !=
IEEE80211_QOS_ACKPOLICY_NOACK)
flags |= IWN_TX_NEED_ACK;
}
if ((wh->i_fc[0] &
(IEEE80211_FC0_TYPE_MASK | IEEE80211_FC0_SUBTYPE_MASK)) ==
(IEEE80211_FC0_TYPE_CTL | IEEE80211_FC0_SUBTYPE_BAR))
flags |= IWN_TX_IMM_BA; /* Cannot happen yet. */
if (wh->i_fc[1] & IEEE80211_FC1_MORE_FRAG)
flags |= IWN_TX_MORE_FRAG; /* Cannot happen yet. */
/* Check if frame must be protected using RTS/CTS or CTS-to-self. */
if (!IEEE80211_IS_MULTICAST(wh->i_addr1)) {
/* NB: Group frames are sent using CCK in 802.11b/g. */
if (totlen + IEEE80211_CRC_LEN > vap->iv_rtsthreshold) {
flags |= IWN_TX_NEED_RTS;
} else if (iwn_check_rate_needs_protection(sc, vap, rate)) {
if (ic->ic_protmode == IEEE80211_PROT_CTSONLY)
flags |= IWN_TX_NEED_CTS;
else if (ic->ic_protmode == IEEE80211_PROT_RTSCTS)
flags |= IWN_TX_NEED_RTS;
} else if ((rate & IEEE80211_RATE_MCS) &&
(ic->ic_htprotmode == IEEE80211_PROT_RTSCTS)) {
flags |= IWN_TX_NEED_RTS;
}
/* XXX HT protection? */
if (flags & (IWN_TX_NEED_RTS | IWN_TX_NEED_CTS)) {
if (sc->hw_type != IWN_HW_REV_TYPE_4965) {
/* 5000 autoselects RTS/CTS or CTS-to-self. */
flags &= ~(IWN_TX_NEED_RTS | IWN_TX_NEED_CTS);
flags |= IWN_TX_NEED_PROTECTION;
} else
flags |= IWN_TX_FULL_TXOP;
}
}
ring = &sc->txq[ac];
if (m->m_flags & M_AMPDU_MPDU) {
uint16_t seqno = ni->ni_txseqs[tid];
if (ring->queued > IWN_TX_RING_COUNT / 2 &&
(ring->cur + 1) % IWN_TX_RING_COUNT == ring->read) {
DPRINTF(sc, IWN_DEBUG_AMPDU, "%s: no more space "
"(queued %d) left in queue %d!\n",
__func__, ring->queued, ac);
return (ENOBUFS);
}
/*
* Queue this frame to the hardware ring that we've
* negotiated AMPDU TX on.
*
* Note that the sequence number must match the TX slot
* being used!
*/
if ((seqno % 256) != ring->cur) {
device_printf(sc->sc_dev,
"%s: m=%p: seqno %d (%d) != ring index (%d)!\n",
__func__,
m,
seqno,
seqno % 256,
ring->cur);
/* XXX workaround until D9195 is committed */
ni->ni_txseqs[tid] &= ~0xff;
ni->ni_txseqs[tid] += ring->cur;
seqno = ni->ni_txseqs[tid];
}
*(uint16_t *)wh->i_seq =
htole16(seqno << IEEE80211_SEQ_SEQ_SHIFT);
ni->ni_txseqs[tid]++;
}
/* Prepare TX firmware command. */
cmd = &ring->cmd[ring->cur];
tx = (struct iwn_cmd_data *)cmd->data;
/* NB: No need to clear tx, all fields are reinitialized here. */
tx->scratch = 0; /* clear "scratch" area */
if (IEEE80211_IS_MULTICAST(wh->i_addr1) ||
type != IEEE80211_FC0_TYPE_DATA)
tx->id = sc->broadcast_id;
else
tx->id = wn->id;
if (type == IEEE80211_FC0_TYPE_MGT) {
uint8_t subtype = wh->i_fc[0] & IEEE80211_FC0_SUBTYPE_MASK;
/* Tell HW to set timestamp in probe responses. */
if (subtype == IEEE80211_FC0_SUBTYPE_PROBE_RESP)
flags |= IWN_TX_INSERT_TSTAMP;
if (subtype == IEEE80211_FC0_SUBTYPE_ASSOC_REQ ||
subtype == IEEE80211_FC0_SUBTYPE_REASSOC_REQ)
tx->timeout = htole16(3);
else
tx->timeout = htole16(2);
} else
tx->timeout = htole16(0);
if (tx->id == sc->broadcast_id) {
/* Group or management frame. */
tx->linkq = 0;
} else {
tx->linkq = iwn_tx_rate_to_linkq_offset(sc, ni, rate);
flags |= IWN_TX_LINKQ; /* enable MRR */
}
tx->tid = tid;
tx->rts_ntries = 60;
tx->data_ntries = 15;
tx->lifetime = htole32(IWN_LIFETIME_INFINITE);
tx->rate = iwn_rate_to_plcp(sc, ni, rate);
tx->security = 0;
tx->flags = htole32(flags);
return (iwn_tx_cmd(sc, m, ni, ring));
}
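The A-MPDU path above requires the low 8 bits of the TID's sequence counter to match the current ring slot (IWN_TX_RING_COUNT is 256, so seqno % 256 indexes a slot); when they disagree, the counter's low byte is forced to the slot number. A standalone sketch of that resync step (helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch (not driver code): the seqno/ring-slot resync
 * performed in iwn_tx_data() for A-MPDU frames.
 */
static uint16_t
resync_seqno(uint16_t seqno, int ring_cur)
{
	if ((seqno % 256) != ring_cur) {
		seqno &= ~0xff;		/* Keep the high bits... */
		seqno += ring_cur;	/* ...and align the low byte. */
	}
	return seqno;
}
```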
static int
iwn_tx_data_raw(struct iwn_softc *sc, struct mbuf *m,
struct ieee80211_node *ni, const struct ieee80211_bpf_params *params)
{
struct ieee80211vap *vap = ni->ni_vap;
struct iwn_tx_cmd *cmd;
struct iwn_cmd_data *tx;
struct ieee80211_frame *wh;
struct iwn_tx_ring *ring;
uint32_t flags;
int ac, rate;
uint8_t type;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
IWN_LOCK_ASSERT(sc);
wh = mtod(m, struct ieee80211_frame *);
type = wh->i_fc[0] & IEEE80211_FC0_TYPE_MASK;
ac = params->ibp_pri & 3;
/* Choose a TX rate. */
rate = params->ibp_rate0;
flags = 0;
if ((params->ibp_flags & IEEE80211_BPF_NOACK) == 0)
flags |= IWN_TX_NEED_ACK;
if (params->ibp_flags & IEEE80211_BPF_RTS) {
if (sc->hw_type != IWN_HW_REV_TYPE_4965) {
/* 5000 autoselects RTS/CTS or CTS-to-self. */
flags &= ~IWN_TX_NEED_RTS;
flags |= IWN_TX_NEED_PROTECTION;
} else
flags |= IWN_TX_NEED_RTS | IWN_TX_FULL_TXOP;
}
if (params->ibp_flags & IEEE80211_BPF_CTS) {
if (sc->hw_type != IWN_HW_REV_TYPE_4965) {
/* 5000 autoselects RTS/CTS or CTS-to-self. */
flags &= ~IWN_TX_NEED_CTS;
flags |= IWN_TX_NEED_PROTECTION;
} else
flags |= IWN_TX_NEED_CTS | IWN_TX_FULL_TXOP;
}
if (ieee80211_radiotap_active_vap(vap)) {
struct iwn_tx_radiotap_header *tap = &sc->sc_txtap;
tap->wt_flags = 0;
tap->wt_rate = rate;
ieee80211_radiotap_tx(vap, m);
}
ring = &sc->txq[ac];
cmd = &ring->cmd[ring->cur];
tx = (struct iwn_cmd_data *)cmd->data;
/* NB: No need to clear tx, all fields are reinitialized here. */
tx->scratch = 0; /* clear "scratch" area */
if (type == IEEE80211_FC0_TYPE_MGT) {
uint8_t subtype = wh->i_fc[0] & IEEE80211_FC0_SUBTYPE_MASK;
/* Tell HW to set timestamp in probe responses. */
if (subtype == IEEE80211_FC0_SUBTYPE_PROBE_RESP)
flags |= IWN_TX_INSERT_TSTAMP;
if (subtype == IEEE80211_FC0_SUBTYPE_ASSOC_REQ ||
subtype == IEEE80211_FC0_SUBTYPE_REASSOC_REQ)
tx->timeout = htole16(3);
else
tx->timeout = htole16(2);
} else
tx->timeout = htole16(0);
tx->tid = 0;
tx->id = sc->broadcast_id;
tx->rts_ntries = params->ibp_try1;
tx->data_ntries = params->ibp_try0;
tx->lifetime = htole32(IWN_LIFETIME_INFINITE);
tx->rate = iwn_rate_to_plcp(sc, ni, rate);
tx->security = 0;
tx->flags = htole32(flags);
/* Group or management frame. */
tx->linkq = 0;
return (iwn_tx_cmd(sc, m, ni, ring));
}
static int
iwn_tx_cmd(struct iwn_softc *sc, struct mbuf *m, struct ieee80211_node *ni,
struct iwn_tx_ring *ring)
{
struct iwn_ops *ops = &sc->ops;
struct iwn_tx_cmd *cmd;
struct iwn_cmd_data *tx;
struct ieee80211_frame *wh;
struct iwn_tx_desc *desc;
struct iwn_tx_data *data;
bus_dma_segment_t *seg, segs[IWN_MAX_SCATTER];
struct mbuf *m1;
u_int hdrlen;
int totlen, error, pad, nsegs = 0, i;
wh = mtod(m, struct ieee80211_frame *);
hdrlen = ieee80211_anyhdrsize(wh);
totlen = m->m_pkthdr.len;
desc = &ring->desc[ring->cur];
data = &ring->data[ring->cur];
if (__predict_false(data->m != NULL || data->ni != NULL)) {
device_printf(sc->sc_dev, "%s: ni (%p) or m (%p) for idx %d "
"in queue %d is not NULL!\n", __func__, data->ni, data->m,
ring->cur, ring->qid);
return EIO;
}
/* Prepare TX firmware command. */
cmd = &ring->cmd[ring->cur];
cmd->code = IWN_CMD_TX_DATA;
cmd->flags = 0;
cmd->qid = ring->qid;
cmd->idx = ring->cur;
tx = (struct iwn_cmd_data *)cmd->data;
tx->len = htole16(totlen);
/* Set physical address of "scratch area". */
tx->loaddr = htole32(IWN_LOADDR(data->scratch_paddr));
tx->hiaddr = IWN_HIADDR(data->scratch_paddr);
if (hdrlen & 3) {
/* First segment length must be a multiple of 4. */
tx->flags |= htole32(IWN_TX_NEED_PADDING);
pad = 4 - (hdrlen & 3);
} else
pad = 0;
/* Copy 802.11 header in TX command. */
memcpy((uint8_t *)(tx + 1), wh, hdrlen);
/* Trim 802.11 header. */
m_adj(m, hdrlen);
error = bus_dmamap_load_mbuf_sg(ring->data_dmat, data->map, m, segs,
&nsegs, BUS_DMA_NOWAIT);
if (error != 0) {
if (error != EFBIG) {
device_printf(sc->sc_dev,
"%s: can't map mbuf (error %d)\n", __func__, error);
return error;
}
/* Too many DMA segments, linearize mbuf. */
m1 = m_collapse(m, M_NOWAIT, IWN_MAX_SCATTER - 1);
if (m1 == NULL) {
device_printf(sc->sc_dev,
"%s: could not defrag mbuf\n", __func__);
return ENOBUFS;
}
m = m1;
error = bus_dmamap_load_mbuf_sg(ring->data_dmat, data->map, m,
segs, &nsegs, BUS_DMA_NOWAIT);
if (error != 0) {
/* XXX fix this */
/*
* NB: Do not return error;
* original mbuf does not exist anymore.
*/
device_printf(sc->sc_dev,
"%s: can't map mbuf (error %d)\n",
__func__, error);
if_inc_counter(ni->ni_vap->iv_ifp,
IFCOUNTER_OERRORS, 1);
ieee80211_free_node(ni);
m_freem(m);
return 0;
}
}
data->m = m;
data->ni = ni;
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: qid %d idx %d len %d nsegs %d "
"plcp %d\n",
__func__, ring->qid, ring->cur, totlen, nsegs, tx->rate);
/* Fill TX descriptor. */
desc->nsegs = 1;
if (m->m_len != 0)
desc->nsegs += nsegs;
/* First DMA segment is used by the TX command. */
desc->segs[0].addr = htole32(IWN_LOADDR(data->cmd_paddr));
desc->segs[0].len = htole16(IWN_HIADDR(data->cmd_paddr) |
(4 + sizeof (*tx) + hdrlen + pad) << 4);
/* Other DMA segments are for data payload. */
seg = &segs[0];
for (i = 1; i <= nsegs; i++) {
desc->segs[i].addr = htole32(IWN_LOADDR(seg->ds_addr));
desc->segs[i].len = htole16(IWN_HIADDR(seg->ds_addr) |
seg->ds_len << 4);
seg++;
}
bus_dmamap_sync(ring->data_dmat, data->map, BUS_DMASYNC_PREWRITE);
bus_dmamap_sync(ring->cmd_dma.tag, ring->cmd_dma.map,
BUS_DMASYNC_PREWRITE);
bus_dmamap_sync(ring->desc_dma.tag, ring->desc_dma.map,
BUS_DMASYNC_PREWRITE);
/* Update TX scheduler. */
if (ring->qid >= sc->firstaggqueue)
ops->update_sched(sc, ring->qid, ring->cur, tx->id, totlen);
/* Kick TX ring. */
ring->cur = (ring->cur + 1) % IWN_TX_RING_COUNT;
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, ring->qid << 8 | ring->cur);
/* Mark TX ring as full if we reach a certain threshold. */
if (++ring->queued > IWN_TX_RING_HIMARK)
sc->qfullmsk |= 1 << ring->qid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return 0;
}
static void
iwn_xmit_task(void *arg0, int pending)
{
struct iwn_softc *sc = arg0;
struct ieee80211_node *ni;
struct mbuf *m;
int error;
struct ieee80211_bpf_params p;
int have_p;
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: called\n", __func__);
IWN_LOCK(sc);
/*
* Dequeue frames, attempt to transmit,
* then disable beaconwait when we're done.
*/
while ((m = mbufq_dequeue(&sc->sc_xmit_queue)) != NULL) {
have_p = 0;
ni = (struct ieee80211_node *)m->m_pkthdr.rcvif;
/* Get xmit params if appropriate */
if (ieee80211_get_xmit_params(m, &p) == 0)
have_p = 1;
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: m=%p, have_p=%d\n",
__func__, m, have_p);
/* If we have xmit params, use them */
if (have_p)
error = iwn_tx_data_raw(sc, m, ni, &p);
else
error = iwn_tx_data(sc, m, ni);
if (error != 0) {
if_inc_counter(ni->ni_vap->iv_ifp,
IFCOUNTER_OERRORS, 1);
ieee80211_free_node(ni);
m_freem(m);
}
}
sc->sc_beacon_wait = 0;
IWN_UNLOCK(sc);
}
/*
* Raw frame xmit; free the node/reference on failure.
*/
static int
iwn_raw_xmit(struct ieee80211_node *ni, struct mbuf *m,
const struct ieee80211_bpf_params *params)
{
struct ieee80211com *ic = ni->ni_ic;
struct iwn_softc *sc = ic->ic_softc;
int error = 0;
DPRINTF(sc, IWN_DEBUG_XMIT | IWN_DEBUG_TRACE, "->%s begin\n", __func__);
IWN_LOCK(sc);
if ((sc->sc_flags & IWN_FLAG_RUNNING) == 0) {
m_freem(m);
IWN_UNLOCK(sc);
return (ENETDOWN);
}
/* queue frame if we have to */
if (sc->sc_beacon_wait) {
if (iwn_xmit_queue_enqueue(sc, m) != 0) {
m_freem(m);
IWN_UNLOCK(sc);
return (ENOBUFS);
}
/* Queued, so just return OK */
IWN_UNLOCK(sc);
return (0);
}
if (params == NULL) {
/*
* Legacy path; interpret frame contents to decide
* precisely how to send the frame.
*/
error = iwn_tx_data(sc, m, ni);
} else {
/*
* Caller supplied explicit parameters to use in
* sending the frame.
*/
error = iwn_tx_data_raw(sc, m, ni, params);
}
if (error == 0)
sc->sc_tx_timer = 5;
else
m_freem(m);
IWN_UNLOCK(sc);
DPRINTF(sc, IWN_DEBUG_TRACE | IWN_DEBUG_XMIT, "->%s: end\n",__func__);
return (error);
}
/*
* Transmit entry point; on failure, neither the mbuf nor the node
* reference is freed.
*/
static int
iwn_transmit(struct ieee80211com *ic, struct mbuf *m)
{
struct iwn_softc *sc = ic->ic_softc;
struct ieee80211_node *ni;
int error;
ni = (struct ieee80211_node *)m->m_pkthdr.rcvif;
IWN_LOCK(sc);
if ((sc->sc_flags & IWN_FLAG_RUNNING) == 0 || sc->sc_beacon_wait) {
IWN_UNLOCK(sc);
return (ENXIO);
}
if (sc->qfullmsk) {
IWN_UNLOCK(sc);
return (ENOBUFS);
}
error = iwn_tx_data(sc, m, ni);
if (!error)
sc->sc_tx_timer = 5;
IWN_UNLOCK(sc);
return (error);
}
static void
iwn_scan_timeout(void *arg)
{
struct iwn_softc *sc = arg;
struct ieee80211com *ic = &sc->sc_ic;
ic_printf(ic, "scan timeout\n");
ieee80211_restart_all(ic);
}
static void
iwn_watchdog(void *arg)
{
struct iwn_softc *sc = arg;
struct ieee80211com *ic = &sc->sc_ic;
IWN_LOCK_ASSERT(sc);
KASSERT(sc->sc_flags & IWN_FLAG_RUNNING, ("not running"));
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
if (sc->sc_tx_timer > 0) {
if (--sc->sc_tx_timer == 0) {
ic_printf(ic, "device timeout\n");
ieee80211_restart_all(ic);
return;
}
}
callout_reset(&sc->watchdog_to, hz, iwn_watchdog, sc);
}
static int
iwn_cdev_open(struct cdev *dev, int flags, int type, struct thread *td)
{
return (0);
}
static int
iwn_cdev_close(struct cdev *dev, int flags, int type, struct thread *td)
{
return (0);
}
static int
iwn_cdev_ioctl(struct cdev *dev, unsigned long cmd, caddr_t data, int fflag,
struct thread *td)
{
int rc;
struct iwn_softc *sc = dev->si_drv1;
struct iwn_ioctl_data *d;
rc = priv_check(td, PRIV_DRIVER);
if (rc != 0)
return (rc);
switch (cmd) {
case SIOCGIWNSTATS:
d = (struct iwn_ioctl_data *) data;
IWN_LOCK(sc);
/* XXX validate permissions/memory/etc? */
rc = copyout(&sc->last_stat, d->dst_addr, sizeof(struct iwn_stats));
IWN_UNLOCK(sc);
break;
case SIOCZIWNSTATS:
IWN_LOCK(sc);
memset(&sc->last_stat, 0, sizeof(struct iwn_stats));
IWN_UNLOCK(sc);
break;
default:
rc = EINVAL;
break;
}
return (rc);
}
static int
iwn_ioctl(struct ieee80211com *ic, u_long cmd, void *data)
{
return (ENOTTY);
}
static void
iwn_parent(struct ieee80211com *ic)
{
struct iwn_softc *sc = ic->ic_softc;
struct ieee80211vap *vap;
int error;
if (ic->ic_nrunning > 0) {
error = iwn_init(sc);
switch (error) {
case 0:
ieee80211_start_all(ic);
break;
case 1:
/* radio is disabled via RFkill switch */
taskqueue_enqueue(sc->sc_tq, &sc->sc_rftoggle_task);
break;
default:
vap = TAILQ_FIRST(&ic->ic_vaps);
if (vap != NULL)
ieee80211_stop(vap);
break;
}
} else
iwn_stop(sc);
}
/*
* Send a command to the firmware.
*/
static int
iwn_cmd(struct iwn_softc *sc, int code, const void *buf, int size, int async)
{
struct iwn_tx_ring *ring;
struct iwn_tx_desc *desc;
struct iwn_tx_data *data;
struct iwn_tx_cmd *cmd;
struct mbuf *m;
bus_addr_t paddr;
int totlen, error;
int cmd_queue_num;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
if (async == 0)
IWN_LOCK_ASSERT(sc);
if (sc->sc_flags & IWN_FLAG_PAN_SUPPORT)
cmd_queue_num = IWN_PAN_CMD_QUEUE;
else
cmd_queue_num = IWN_CMD_QUEUE_NUM;
ring = &sc->txq[cmd_queue_num];
desc = &ring->desc[ring->cur];
data = &ring->data[ring->cur];
totlen = 4 + size;
if (size > sizeof cmd->data) {
/* Command is too large to fit in a descriptor. */
if (totlen > MCLBYTES)
return EINVAL;
m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUMPAGESIZE);
if (m == NULL)
return ENOMEM;
cmd = mtod(m, struct iwn_tx_cmd *);
error = bus_dmamap_load(ring->data_dmat, data->map, cmd,
totlen, iwn_dma_map_addr, &paddr, BUS_DMA_NOWAIT);
if (error != 0) {
m_freem(m);
return error;
}
data->m = m;
} else {
cmd = &ring->cmd[ring->cur];
paddr = data->cmd_paddr;
}
cmd->code = code;
cmd->flags = 0;
cmd->qid = ring->qid;
cmd->idx = ring->cur;
memcpy(cmd->data, buf, size);
desc->nsegs = 1;
desc->segs[0].addr = htole32(IWN_LOADDR(paddr));
desc->segs[0].len = htole16(IWN_HIADDR(paddr) | totlen << 4);
DPRINTF(sc, IWN_DEBUG_CMD, "%s: %s (0x%x) flags %d qid %d idx %d\n",
__func__, iwn_intr_str(cmd->code), cmd->code,
cmd->flags, cmd->qid, cmd->idx);
if (size > sizeof cmd->data) {
bus_dmamap_sync(ring->data_dmat, data->map,
BUS_DMASYNC_PREWRITE);
} else {
bus_dmamap_sync(ring->cmd_dma.tag, ring->cmd_dma.map,
BUS_DMASYNC_PREWRITE);
}
bus_dmamap_sync(ring->desc_dma.tag, ring->desc_dma.map,
BUS_DMASYNC_PREWRITE);
/* Kick command ring. */
ring->cur = (ring->cur + 1) % IWN_TX_RING_COUNT;
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, ring->qid << 8 | ring->cur);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return async ? 0 : msleep(desc, &sc->sc_mtx, PCATCH, "iwncmd", hz);
}
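The descriptor setup above packs a segment's byte count into the upper 12 bits of the 16-bit length field (len << 4), while the low nibble carries the upper bits of the DMA address (the IWN_HIADDR part). A minimal sketch of that packing (helper name is illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch (not driver code): TX descriptor length-field
 * packing as used in iwn_cmd()/iwn_tx_cmd() above.
 */
static uint16_t
pack_seg_len(uint8_t hiaddr_nibble, uint16_t len)
{
	/* Length in bits 4..15, high address bits in bits 0..3. */
	return (uint16_t)((hiaddr_nibble & 0xf) | (len << 4));
}
```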
static int
iwn4965_add_node(struct iwn_softc *sc, struct iwn_node_info *node, int async)
{
struct iwn4965_node_info hnode;
caddr_t src, dst;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/*
* We use the node structure for 5000 Series internally (it is
* a superset of the one for 4965AGN). We thus copy the common
* fields before sending the command.
*/
src = (caddr_t)node;
dst = (caddr_t)&hnode;
memcpy(dst, src, 48);
/* Skip TSC, RX MIC and TX MIC fields from ``src''. */
memcpy(dst + 48, src + 72, 20);
return iwn_cmd(sc, IWN_CMD_ADD_NODE, &hnode, sizeof hnode, async);
}
static int
iwn5000_add_node(struct iwn_softc *sc, struct iwn_node_info *node, int async)
{
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Direct mapping. */
return iwn_cmd(sc, IWN_CMD_ADD_NODE, node, sizeof (*node), async);
}
static int
iwn_set_link_quality(struct iwn_softc *sc, struct ieee80211_node *ni)
{
struct iwn_node *wn = (void *)ni;
struct ieee80211_rateset *rs;
struct iwn_cmd_link_quality linkq;
int i, rate, txrate;
int is_11n;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
memset(&linkq, 0, sizeof linkq);
linkq.id = wn->id;
linkq.antmsk_1stream = iwn_get_1stream_tx_antmask(sc);
linkq.antmsk_2stream = iwn_get_2stream_tx_antmask(sc);
linkq.ampdu_max = 32; /* XXX negotiated? */
linkq.ampdu_threshold = 3;
linkq.ampdu_limit = htole16(4000); /* 4ms */
DPRINTF(sc, IWN_DEBUG_XMIT,
"%s: 1stream antenna=0x%02x, 2stream antenna=0x%02x, ntxstreams=%d\n",
__func__,
linkq.antmsk_1stream,
linkq.antmsk_2stream,
sc->ntxchains);
/*
* Are we using 11n rates? Ensure the channel is
* 11n _and_ we have some 11n rates, or don't
* try.
*/
if (IEEE80211_IS_CHAN_HT(ni->ni_chan) && ni->ni_htrates.rs_nrates > 0) {
rs = (struct ieee80211_rateset *) &ni->ni_htrates;
is_11n = 1;
} else {
rs = &ni->ni_rates;
is_11n = 0;
}
/* Start at highest available bit-rate. */
/*
* XXX this is all very dirty!
*/
if (is_11n)
txrate = ni->ni_htrates.rs_nrates - 1;
else
txrate = rs->rs_nrates - 1;
for (i = 0; i < IWN_MAX_TX_RETRIES; i++) {
uint32_t plcp;
/*
* XXX TODO: ensure the last two slots are the two lowest
* rate entries, just for now.
*/
if (i == 14 || i == 15)
txrate = 0;
if (is_11n)
rate = IEEE80211_RATE_MCS | rs->rs_rates[txrate];
else
rate = IEEE80211_RV(rs->rs_rates[txrate]);
/* Do rate -> PLCP config mapping */
plcp = iwn_rate_to_plcp(sc, ni, rate);
linkq.retry[i] = plcp;
DPRINTF(sc, IWN_DEBUG_XMIT,
"%s: i=%d, txrate=%d, rate=0x%02x, plcp=0x%08x\n",
__func__,
i,
txrate,
rate,
le32toh(plcp));
/*
* The mimo field holds the first index in the table at which that
* entry and all subsequent entries no longer use MIMO.
*
* Since we fill linkq slots 0..15 from the highest MCS rates down
* to the lowest, whenever the current entry _is_ a dual-stream
* rate we set mimo to idx+1 (i.e., the next entry). That way, if
* the next entry turns out to be non-MIMO, mimo already points
* at it.
*/
if ((le32toh(plcp) & IWN_RFLAG_MCS) &&
IEEE80211_RV(le32toh(plcp)) > 7)
linkq.mimo = i + 1;
/* Next retry at immediate lower bit-rate. */
if (txrate > 0)
txrate--;
}
/*
* If we reached the end of the list with every entry being a MIMO
* rate (e.g. a 5300 doing MCS 23..15), clamp mimo to 15; setting
* it to 16 panics the firmware.
*/
if (linkq.mimo > 15)
linkq.mimo = 15;
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: mimo = %d\n", __func__, linkq.mimo);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return iwn_cmd(sc, IWN_CMD_LINK_QUALITY, &linkq, sizeof linkq, 1);
}
/*
* The broadcast node is used to send group-addressed and management frames.
*/
static int
iwn_add_broadcast_node(struct iwn_softc *sc, int async)
{
struct iwn_ops *ops = &sc->ops;
struct ieee80211com *ic = &sc->sc_ic;
struct iwn_node_info node;
struct iwn_cmd_link_quality linkq;
uint8_t txant;
int i, error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
sc->rxon = &sc->rx_on[IWN_RXON_BSS_CTX];
memset(&node, 0, sizeof node);
IEEE80211_ADDR_COPY(node.macaddr, ieee80211broadcastaddr);
node.id = sc->broadcast_id;
DPRINTF(sc, IWN_DEBUG_RESET, "%s: adding broadcast node\n", __func__);
if ((error = ops->add_node(sc, &node, async)) != 0)
return error;
/* Use the first valid TX antenna. */
txant = IWN_LSB(sc->txchainmask);
memset(&linkq, 0, sizeof linkq);
linkq.id = sc->broadcast_id;
linkq.antmsk_1stream = iwn_get_1stream_tx_antmask(sc);
linkq.antmsk_2stream = iwn_get_2stream_tx_antmask(sc);
linkq.ampdu_max = 64;
linkq.ampdu_threshold = 3;
linkq.ampdu_limit = htole16(4000); /* 4ms */
/* Use lowest mandatory bit-rate. */
/* XXX rate table lookup? */
if (IEEE80211_IS_CHAN_5GHZ(ic->ic_curchan))
linkq.retry[0] = htole32(0xd);
else
linkq.retry[0] = htole32(10 | IWN_RFLAG_CCK);
linkq.retry[0] |= htole32(IWN_RFLAG_ANT(txant));
/* Use same bit-rate for all TX retries. */
for (i = 1; i < IWN_MAX_TX_RETRIES; i++) {
linkq.retry[i] = linkq.retry[0];
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return iwn_cmd(sc, IWN_CMD_LINK_QUALITY, &linkq, sizeof linkq, async);
}
static int
iwn_updateedca(struct ieee80211com *ic)
{
#define IWN_EXP2(x) ((1 << (x)) - 1) /* CWmin = 2^ECWmin - 1 */
struct iwn_softc *sc = ic->ic_softc;
struct iwn_edca_params cmd;
struct chanAccParams chp;
int aci;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
ieee80211_wme_ic_getparams(ic, &chp);
memset(&cmd, 0, sizeof cmd);
cmd.flags = htole32(IWN_EDCA_UPDATE);
IEEE80211_LOCK(ic);
for (aci = 0; aci < WME_NUM_AC; aci++) {
const struct wmeParams *ac = &chp.cap_wmeParams[aci];
cmd.ac[aci].aifsn = ac->wmep_aifsn;
cmd.ac[aci].cwmin = htole16(IWN_EXP2(ac->wmep_logcwmin));
cmd.ac[aci].cwmax = htole16(IWN_EXP2(ac->wmep_logcwmax));
cmd.ac[aci].txoplimit =
htole16(IEEE80211_TXOP_TO_US(ac->wmep_txopLimit));
}
IEEE80211_UNLOCK(ic);
IWN_LOCK(sc);
(void)iwn_cmd(sc, IWN_CMD_EDCA_PARAMS, &cmd, sizeof cmd, 1);
IWN_UNLOCK(sc);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return 0;
#undef IWN_EXP2
}
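iwn_updateedca() above converts net80211's exponent-coded contention windows into the literal slot counts the firmware expects, via IWN_EXP2: CW = 2^ECW - 1. A standalone sketch of that mapping (the wrapper function is illustrative, not from the driver):

```c
#include <assert.h>

/*
 * Standalone sketch (not driver code): exponent-coded contention
 * window to slot count, as done per-AC in iwn_updateedca().
 */
#define IWN_EXP2(x) ((1 << (x)) - 1)	/* CWmin = 2^ECWmin - 1 */

static int
ecw_to_cw(int ecw)
{
	return IWN_EXP2(ecw);
}
```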
static void
iwn_set_promisc(struct iwn_softc *sc)
{
struct ieee80211com *ic = &sc->sc_ic;
uint32_t promisc_filter;
promisc_filter = IWN_FILTER_CTL | IWN_FILTER_PROMISC;
if (ic->ic_promisc > 0 || ic->ic_opmode == IEEE80211_M_MONITOR)
sc->rxon->filter |= htole32(promisc_filter);
else
sc->rxon->filter &= ~htole32(promisc_filter);
}
static void
iwn_update_promisc(struct ieee80211com *ic)
{
struct iwn_softc *sc = ic->ic_softc;
int error;
if (ic->ic_opmode == IEEE80211_M_MONITOR)
return; /* nothing to do */
IWN_LOCK(sc);
if (!(sc->sc_flags & IWN_FLAG_RUNNING)) {
IWN_UNLOCK(sc);
return;
}
iwn_set_promisc(sc);
if ((error = iwn_send_rxon(sc, 1, 1)) != 0) {
device_printf(sc->sc_dev,
"%s: could not send RXON, error %d\n",
__func__, error);
}
IWN_UNLOCK(sc);
}
static void
iwn_update_mcast(struct ieee80211com *ic)
{
/* Ignore */
}
static void
iwn_set_led(struct iwn_softc *sc, uint8_t which, uint8_t off, uint8_t on)
{
struct iwn_cmd_led led;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
#if 0
/* XXX don't set LEDs during scan? */
if (sc->sc_is_scanning)
return;
#endif
/* Clear microcode LED ownership. */
IWN_CLRBITS(sc, IWN_LED, IWN_LED_BSM_CTRL);
led.which = which;
led.unit = htole32(10000); /* on/off in unit of 100ms */
led.off = off;
led.on = on;
(void)iwn_cmd(sc, IWN_CMD_SET_LED, &led, sizeof led, 1);
}
/*
* Set the critical temperature at which the firmware will stop the radio
* and notify us.
*/
static int
iwn_set_critical_temp(struct iwn_softc *sc)
{
struct iwn_critical_temp crit;
int32_t temp;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
IWN_WRITE(sc, IWN_UCODE_GP1_CLR, IWN_UCODE_GP1_CTEMP_STOP_RF);
if (sc->hw_type == IWN_HW_REV_TYPE_5150)
temp = (IWN_CTOK(110) - sc->temp_off) * -5;
else if (sc->hw_type == IWN_HW_REV_TYPE_4965)
temp = IWN_CTOK(110);
else
temp = 110;
memset(&crit, 0, sizeof crit);
crit.tempR = htole32(temp);
DPRINTF(sc, IWN_DEBUG_RESET, "setting critical temp to %d\n", temp);
return iwn_cmd(sc, IWN_CMD_SET_CRITICAL_TEMP, &crit, sizeof crit, 0);
}
static int
iwn_set_timing(struct iwn_softc *sc, struct ieee80211_node *ni)
{
struct iwn_cmd_timing cmd;
uint64_t val, mod;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
memset(&cmd, 0, sizeof cmd);
memcpy(&cmd.tstamp, ni->ni_tstamp.data, sizeof (uint64_t));
cmd.bintval = htole16(ni->ni_intval);
cmd.lintval = htole16(10);
/* Compute remaining time until next beacon. */
val = (uint64_t)ni->ni_intval * IEEE80211_DUR_TU;
mod = le64toh(cmd.tstamp) % val;
cmd.binitval = htole32((uint32_t)(val - mod));
DPRINTF(sc, IWN_DEBUG_RESET, "timing bintval=%u tstamp=%ju, init=%u\n",
ni->ni_intval, le64toh(cmd.tstamp), (uint32_t)(val - mod));
return iwn_cmd(sc, IWN_CMD_TIMING, &cmd, sizeof cmd, 1);
}
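iwn_set_timing() above computes the remaining time until the next beacon: the TSF modulo the beacon period gives the offset into the current interval, and period minus offset is what remains. A standalone sketch, using the fact that a TU (IEEE80211_DUR_TU) is 1024 microseconds; the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch (not driver code): remaining time until the next
 * beacon, as computed for cmd.binitval in iwn_set_timing().
 */
static uint32_t
time_to_next_beacon(uint64_t tsf_us, uint64_t bintval_tu)
{
	uint64_t period_us = bintval_tu * 1024;	/* TU -> microseconds */
	return (uint32_t)(period_us - tsf_us % period_us);
}
```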
static void
iwn4965_power_calibration(struct iwn_softc *sc, int temp)
{
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Adjust TX power if need be (delta >= 3 degC). */
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "%s: temperature %d->%d\n",
__func__, sc->temp, temp);
if (abs(temp - sc->temp) >= 3) {
/* Record temperature of last calibration. */
sc->temp = temp;
(void)iwn4965_set_txpower(sc, 1);
}
}
/*
* Set TX power for current channel (each rate has its own power settings).
* This function takes into account the regulatory information from EEPROM,
* the current temperature and the current voltage.
*/
static int
iwn4965_set_txpower(struct iwn_softc *sc, int async)
{
/* Fixed-point arithmetic division using an n-bit fractional part. */
#define fdivround(a, b, n) \
((((1 << n) * (a)) / (b) + (1 << n) / 2) / (1 << n))
/* Linear interpolation. */
#define interpolate(x, x1, y1, x2, y2, n) \
((y1) + fdivround(((int)(x) - (x1)) * ((y2) - (y1)), (x2) - (x1), n))
static const int tdiv[IWN_NATTEN_GROUPS] = { 9, 8, 8, 8, 6 };
struct iwn_ucode_info *uc = &sc->ucode_info;
struct iwn4965_cmd_txpower cmd;
struct iwn4965_eeprom_chan_samples *chans;
const uint8_t *rf_gain, *dsp_gain;
int32_t vdiff, tdiff;
int i, is_chan_5ghz, c, grp, maxpwr;
uint8_t chan;
sc->rxon = &sc->rx_on[IWN_RXON_BSS_CTX];
/* Retrieve current channel from last RXON. */
chan = sc->rxon->chan;
is_chan_5ghz = (sc->rxon->flags & htole32(IWN_RXON_24GHZ)) == 0;
DPRINTF(sc, IWN_DEBUG_RESET, "setting TX power for channel %d\n",
chan);
memset(&cmd, 0, sizeof cmd);
cmd.band = is_chan_5ghz ? 0 : 1;
cmd.chan = chan;
if (is_chan_5ghz) {
maxpwr = sc->maxpwr5GHz;
rf_gain = iwn4965_rf_gain_5ghz;
dsp_gain = iwn4965_dsp_gain_5ghz;
} else {
maxpwr = sc->maxpwr2GHz;
rf_gain = iwn4965_rf_gain_2ghz;
dsp_gain = iwn4965_dsp_gain_2ghz;
}
/* Compute voltage compensation. */
vdiff = ((int32_t)le32toh(uc->volt) - sc->eeprom_voltage) / 7;
if (vdiff > 0)
vdiff *= 2;
if (abs(vdiff) > 2)
vdiff = 0;
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_TXPOW,
"%s: voltage compensation=%d (UCODE=%d, EEPROM=%d)\n",
__func__, vdiff, le32toh(uc->volt), sc->eeprom_voltage);
/* Get channel attenuation group. */
if (chan <= 20) /* 1-20 */
grp = 4;
else if (chan <= 43) /* 34-43 */
grp = 0;
else if (chan <= 70) /* 44-70 */
grp = 1;
else if (chan <= 124) /* 71-124 */
grp = 2;
else /* 125-200 */
grp = 3;
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_TXPOW,
"%s: chan %d, attenuation group=%d\n", __func__, chan, grp);
/* Get channel sub-band. */
for (i = 0; i < IWN_NBANDS; i++)
if (sc->bands[i].lo != 0 &&
sc->bands[i].lo <= chan && chan <= sc->bands[i].hi)
break;
if (i == IWN_NBANDS) /* Can't happen in real life. */
return EINVAL;
chans = sc->bands[i].chans;
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_TXPOW,
"%s: chan %d sub-band=%d\n", __func__, chan, i);
for (c = 0; c < 2; c++) {
uint8_t power, gain, temp;
int maxchpwr, pwr, ridx, idx;
power = interpolate(chan,
chans[0].num, chans[0].samples[c][1].power,
chans[1].num, chans[1].samples[c][1].power, 1);
gain = interpolate(chan,
chans[0].num, chans[0].samples[c][1].gain,
chans[1].num, chans[1].samples[c][1].gain, 1);
temp = interpolate(chan,
chans[0].num, chans[0].samples[c][1].temp,
chans[1].num, chans[1].samples[c][1].temp, 1);
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_TXPOW,
"%s: Tx chain %d: power=%d gain=%d temp=%d\n",
__func__, c, power, gain, temp);
/* Compute temperature compensation. */
tdiff = ((sc->temp - temp) * 2) / tdiv[grp];
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_TXPOW,
"%s: temperature compensation=%d (current=%d, EEPROM=%d)\n",
__func__, tdiff, sc->temp, temp);
for (ridx = 0; ridx <= IWN_RIDX_MAX; ridx++) {
/* Convert dBm to half-dBm. */
maxchpwr = sc->maxpwr[chan] * 2;
if ((ridx / 8) & 1)
maxchpwr -= 6; /* MIMO 2T: -3dB */
pwr = maxpwr;
/* Adjust TX power based on rate. */
if ((ridx % 8) == 5)
pwr -= 15; /* OFDM48: -7.5dB */
else if ((ridx % 8) == 6)
pwr -= 17; /* OFDM54: -8.5dB */
else if ((ridx % 8) == 7)
pwr -= 20; /* OFDM60: -10dB */
else
pwr -= 10; /* Others: -5dB */
/* Do not exceed channel max TX power. */
if (pwr > maxchpwr)
pwr = maxchpwr;
idx = gain - (pwr - power) - tdiff - vdiff;
if ((ridx / 8) & 1) /* MIMO */
idx += (int32_t)le32toh(uc->atten[grp][c]);
if (cmd.band == 0)
idx += 9; /* 5GHz */
if (ridx == IWN_RIDX_MAX)
idx += 5; /* CCK */
/* Make sure idx stays in a valid range. */
if (idx < 0)
idx = 0;
else if (idx > IWN4965_MAX_PWR_INDEX)
idx = IWN4965_MAX_PWR_INDEX;
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_TXPOW,
"%s: Tx chain %d, rate idx %d: power=%d\n",
__func__, c, ridx, idx);
cmd.power[ridx].rf_gain[c] = rf_gain[idx];
cmd.power[ridx].dsp_gain[c] = dsp_gain[idx];
}
}
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_TXPOW,
"%s: set tx power for chan %d\n", __func__, chan);
return iwn_cmd(sc, IWN_CMD_TXPOWER, &cmd, sizeof cmd, async);
#undef interpolate
#undef fdivround
}
static int
iwn5000_set_txpower(struct iwn_softc *sc, int async)
{
struct iwn5000_cmd_txpower cmd;
int cmdid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/*
* TX power calibration is handled automatically by the firmware
* for 5000 Series.
*/
memset(&cmd, 0, sizeof cmd);
cmd.global_limit = 2 * IWN5000_TXPOWER_MAX_DBM; /* 16 dBm */
cmd.flags = IWN5000_TXPOWER_NO_CLOSED;
cmd.srv_limit = IWN5000_TXPOWER_AUTO;
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_XMIT,
"%s: setting TX power; rev=%d\n",
__func__,
IWN_UCODE_API(sc->ucode_rev));
if (IWN_UCODE_API(sc->ucode_rev) == 1)
cmdid = IWN_CMD_TXPOWER_DBM_V1;
else
cmdid = IWN_CMD_TXPOWER_DBM;
return iwn_cmd(sc, cmdid, &cmd, sizeof cmd, async);
}
/*
* Retrieve the maximum RSSI (in dBm) among receivers.
*/
static int
iwn4965_get_rssi(struct iwn_softc *sc, struct iwn_rx_stat *stat)
{
struct iwn4965_rx_phystat *phy = (void *)stat->phybuf;
uint8_t mask, agc;
int rssi;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
mask = (le16toh(phy->antenna) >> 4) & IWN_ANT_ABC;
agc = (le16toh(phy->agc) >> 7) & 0x7f;
rssi = 0;
if (mask & IWN_ANT_A)
rssi = MAX(rssi, phy->rssi[0]);
if (mask & IWN_ANT_B)
rssi = MAX(rssi, phy->rssi[2]);
if (mask & IWN_ANT_C)
rssi = MAX(rssi, phy->rssi[4]);
DPRINTF(sc, IWN_DEBUG_RECV,
"%s: agc %d mask 0x%x rssi %d %d %d result %d\n", __func__, agc,
mask, phy->rssi[0], phy->rssi[2], phy->rssi[4],
rssi - agc - IWN_RSSI_TO_DBM);
return rssi - agc - IWN_RSSI_TO_DBM;
}
static int
iwn5000_get_rssi(struct iwn_softc *sc, struct iwn_rx_stat *stat)
{
struct iwn5000_rx_phystat *phy = (void *)stat->phybuf;
uint8_t agc;
int rssi;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
agc = (le32toh(phy->agc) >> 9) & 0x7f;
rssi = MAX(le16toh(phy->rssi[0]) & 0xff,
le16toh(phy->rssi[1]) & 0xff);
rssi = MAX(le16toh(phy->rssi[2]) & 0xff, rssi);
DPRINTF(sc, IWN_DEBUG_RECV,
"%s: agc %d rssi %d %d %d result %d\n", __func__, agc,
phy->rssi[0], phy->rssi[1], phy->rssi[2],
rssi - agc - IWN_RSSI_TO_DBM);
return rssi - agc - IWN_RSSI_TO_DBM;
}
/*
* Retrieve the average noise (in dBm) among receivers.
*/
static int
iwn_get_noise(const struct iwn_rx_general_stats *stats)
{
int i, total, nbant, noise;
total = nbant = 0;
for (i = 0; i < 3; i++) {
if ((noise = le32toh(stats->noise[i]) & 0xff) == 0)
continue;
total += noise;
nbant++;
}
/* There should be at least one antenna but check anyway. */
return (nbant == 0) ? -127 : (total / nbant) - 107;
}
/*
* Compute temperature (in degC) from last received statistics.
*/
static int
iwn4965_get_temperature(struct iwn_softc *sc)
{
struct iwn_ucode_info *uc = &sc->ucode_info;
int32_t r1, r2, r3, r4, temp;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
r1 = le32toh(uc->temp[0].chan20MHz);
r2 = le32toh(uc->temp[1].chan20MHz);
r3 = le32toh(uc->temp[2].chan20MHz);
r4 = le32toh(sc->rawtemp);
if (r1 == r3) /* Prevents division by 0 (should not happen). */
return 0;
/* Sign-extend 23-bit R4 value to 32-bit. */
r4 = ((r4 & 0xffffff) ^ 0x800000) - 0x800000;
/* Compute temperature in Kelvin. */
temp = (259 * (r4 - r2)) / (r3 - r1);
temp = (temp * 97) / 100 + 8;
DPRINTF(sc, IWN_DEBUG_ANY, "temperature %dK/%dC\n", temp,
IWN_KTOC(temp));
return IWN_KTOC(temp);
}
static int
iwn5000_get_temperature(struct iwn_softc *sc)
{
int32_t temp;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/*
* Temperature is not used by the driver for 5000 Series because
* TX power calibration is handled by firmware.
*/
temp = le32toh(sc->rawtemp);
if (sc->hw_type == IWN_HW_REV_TYPE_5150) {
temp = (temp / -5) + sc->temp_off;
temp = IWN_KTOC(temp);
}
return temp;
}
/*
* Initialize sensitivity calibration state machine.
*/
static int
iwn_init_sensitivity(struct iwn_softc *sc)
{
struct iwn_ops *ops = &sc->ops;
struct iwn_calib_state *calib = &sc->calib;
uint32_t flags;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Reset calibration state machine. */
memset(calib, 0, sizeof (*calib));
calib->state = IWN_CALIB_STATE_INIT;
calib->cck_state = IWN_CCK_STATE_HIFA;
/* Set initial correlation values. */
calib->ofdm_x1 = sc->limits->min_ofdm_x1;
calib->ofdm_mrc_x1 = sc->limits->min_ofdm_mrc_x1;
calib->ofdm_x4 = sc->limits->min_ofdm_x4;
calib->ofdm_mrc_x4 = sc->limits->min_ofdm_mrc_x4;
calib->cck_x4 = 125;
calib->cck_mrc_x4 = sc->limits->min_cck_mrc_x4;
calib->energy_cck = sc->limits->energy_cck;
/* Write initial sensitivity. */
if ((error = iwn_send_sensitivity(sc)) != 0)
return error;
/* Write initial gains. */
if ((error = ops->init_gains(sc)) != 0)
return error;
/* Request statistics at each beacon interval. */
flags = 0;
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "%s: sending request for statistics\n",
__func__);
return iwn_cmd(sc, IWN_CMD_GET_STATISTICS, &flags, sizeof flags, 1);
}
/*
* Collect noise and RSSI statistics for the first 20 beacons received
* after association and use them to determine connected antennas and
* to set differential gains.
*/
static void
iwn_collect_noise(struct iwn_softc *sc,
const struct iwn_rx_general_stats *stats)
{
struct iwn_ops *ops = &sc->ops;
struct iwn_calib_state *calib = &sc->calib;
struct ieee80211com *ic = &sc->sc_ic;
uint32_t val;
int i;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Accumulate RSSI and noise for all 3 antennas. */
for (i = 0; i < 3; i++) {
calib->rssi[i] += le32toh(stats->rssi[i]) & 0xff;
calib->noise[i] += le32toh(stats->noise[i]) & 0xff;
}
/* NB: We update differential gains only once after 20 beacons. */
if (++calib->nbeacons < 20)
return;
/* Determine highest average RSSI. */
val = MAX(calib->rssi[0], calib->rssi[1]);
val = MAX(calib->rssi[2], val);
/* Determine which antennas are connected. */
sc->chainmask = sc->rxchainmask;
for (i = 0; i < 3; i++)
if (val - calib->rssi[i] > 15 * 20)
sc->chainmask &= ~(1 << i);
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_XMIT,
"%s: RX chains mask: theoretical=0x%x, actual=0x%x\n",
__func__, sc->rxchainmask, sc->chainmask);
/* If none of the TX antennas are connected, keep at least one. */
if ((sc->chainmask & sc->txchainmask) == 0)
sc->chainmask |= IWN_LSB(sc->txchainmask);
(void)ops->set_gains(sc);
calib->state = IWN_CALIB_STATE_RUN;
#ifdef notyet
/* XXX Disable RX chains with no antennas connected. */
sc->rxon->rxchain = htole16(IWN_RXCHAIN_SEL(sc->chainmask));
if (sc->sc_is_scanning)
device_printf(sc->sc_dev,
"%s: is_scanning set, before RXON\n",
__func__);
(void)iwn_cmd(sc, IWN_CMD_RXON, sc->rxon, sc->rxonsz, 1);
#endif
/* Enable power-saving mode if requested by user. */
if (ic->ic_flags & IEEE80211_F_PMGTON)
(void)iwn_set_pslevel(sc, 0, 3, 1);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
}
static int
iwn4965_init_gains(struct iwn_softc *sc)
{
struct iwn_phy_calib_gain cmd;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
memset(&cmd, 0, sizeof cmd);
cmd.code = IWN4965_PHY_CALIB_DIFF_GAIN;
/* Differential gains initially set to 0 for all 3 antennas. */
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: setting initial differential gains\n", __func__);
return iwn_cmd(sc, IWN_CMD_PHY_CALIB, &cmd, sizeof cmd, 1);
}
static int
iwn5000_init_gains(struct iwn_softc *sc)
{
struct iwn_phy_calib cmd;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
memset(&cmd, 0, sizeof cmd);
cmd.code = sc->reset_noise_gain;
cmd.ngroups = 1;
cmd.isvalid = 1;
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: setting initial differential gains\n", __func__);
return iwn_cmd(sc, IWN_CMD_PHY_CALIB, &cmd, sizeof cmd, 1);
}
static int
iwn4965_set_gains(struct iwn_softc *sc)
{
struct iwn_calib_state *calib = &sc->calib;
struct iwn_phy_calib_gain cmd;
int i, delta, noise;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Get minimal noise among connected antennas. */
noise = INT_MAX; /* NB: There's at least one antenna. */
for (i = 0; i < 3; i++)
if (sc->chainmask & (1 << i))
noise = MIN(calib->noise[i], noise);
memset(&cmd, 0, sizeof cmd);
cmd.code = IWN4965_PHY_CALIB_DIFF_GAIN;
/* Set differential gains for connected antennas. */
for (i = 0; i < 3; i++) {
if (sc->chainmask & (1 << i)) {
/* Compute attenuation (in unit of 1.5dB). */
delta = (noise - (int32_t)calib->noise[i]) / 30;
/* NB: delta <= 0 */
/* Limit to [-4.5dB,0]. */
cmd.gain[i] = MIN(abs(delta), 3);
if (delta < 0)
cmd.gain[i] |= 1 << 2; /* sign bit */
}
}
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"setting differential gains Ant A/B/C: %x/%x/%x (%x)\n",
cmd.gain[0], cmd.gain[1], cmd.gain[2], sc->chainmask);
return iwn_cmd(sc, IWN_CMD_PHY_CALIB, &cmd, sizeof cmd, 1);
}
static int
iwn5000_set_gains(struct iwn_softc *sc)
{
struct iwn_calib_state *calib = &sc->calib;
struct iwn_phy_calib_gain cmd;
int i, ant, div, delta;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* We collected 20 beacons; hardware other than the 6050 also needs a 1.5 factor, hence a divisor of 30. */
div = (sc->hw_type == IWN_HW_REV_TYPE_6050) ? 20 : 30;
memset(&cmd, 0, sizeof cmd);
cmd.code = sc->noise_gain;
cmd.ngroups = 1;
cmd.isvalid = 1;
/* Use the first available RX antenna as reference. */
ant = IWN_LSB(sc->rxchainmask);
/* Set differential gains for other antennas. */
for (i = ant + 1; i < 3; i++) {
if (sc->chainmask & (1 << i)) {
/* The delta is relative to antenna "ant". */
delta = ((int32_t)calib->noise[ant] -
(int32_t)calib->noise[i]) / div;
/* Limit to [-4.5dB,+4.5dB]. */
cmd.gain[i - 1] = MIN(abs(delta), 3);
if (delta < 0)
cmd.gain[i - 1] |= 1 << 2; /* sign bit */
}
}
DPRINTF(sc, IWN_DEBUG_CALIBRATE | IWN_DEBUG_XMIT,
"setting differential gains Ant B/C: %x/%x (%x)\n",
cmd.gain[0], cmd.gain[1], sc->chainmask);
return iwn_cmd(sc, IWN_CMD_PHY_CALIB, &cmd, sizeof cmd, 1);
}
/*
* Tune RF RX sensitivity based on the number of false alarms detected
* during the last beacon period.
*/
static void
iwn_tune_sensitivity(struct iwn_softc *sc, const struct iwn_rx_stats *stats)
{
#define inc(val, inc, max) \
if ((val) < (max)) { \
if ((val) < (max) - (inc)) \
(val) += (inc); \
else \
(val) = (max); \
needs_update = 1; \
}
#define dec(val, dec, min) \
if ((val) > (min)) { \
if ((val) > (min) + (dec)) \
(val) -= (dec); \
else \
(val) = (min); \
needs_update = 1; \
}
const struct iwn_sensitivity_limits *limits = sc->limits;
struct iwn_calib_state *calib = &sc->calib;
uint32_t val, rxena, fa;
uint32_t energy[3], energy_min;
uint8_t noise[3], noise_ref;
int i, needs_update = 0;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Check that we've been enabled long enough. */
if ((rxena = le32toh(stats->general.load)) == 0) {
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end: not enabled long enough\n", __func__);
return;
}
/* Compute number of false alarms since last call for OFDM. */
fa = le32toh(stats->ofdm.bad_plcp) - calib->bad_plcp_ofdm;
fa += le32toh(stats->ofdm.fa) - calib->fa_ofdm;
fa *= 200 * IEEE80211_DUR_TU; /* 200TU */
if (fa > 50 * rxena) {
/* High false alarm count, decrease sensitivity. */
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: OFDM high false alarm count: %u\n", __func__, fa);
inc(calib->ofdm_x1, 1, limits->max_ofdm_x1);
inc(calib->ofdm_mrc_x1, 1, limits->max_ofdm_mrc_x1);
inc(calib->ofdm_x4, 1, limits->max_ofdm_x4);
inc(calib->ofdm_mrc_x4, 1, limits->max_ofdm_mrc_x4);
} else if (fa < 5 * rxena) {
/* Low false alarm count, increase sensitivity. */
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: OFDM low false alarm count: %u\n", __func__, fa);
dec(calib->ofdm_x1, 1, limits->min_ofdm_x1);
dec(calib->ofdm_mrc_x1, 1, limits->min_ofdm_mrc_x1);
dec(calib->ofdm_x4, 1, limits->min_ofdm_x4);
dec(calib->ofdm_mrc_x4, 1, limits->min_ofdm_mrc_x4);
}
/* Compute maximum noise among 3 receivers. */
for (i = 0; i < 3; i++)
noise[i] = (le32toh(stats->general.noise[i]) >> 8) & 0xff;
val = MAX(noise[0], noise[1]);
val = MAX(noise[2], val);
/* Insert it into our samples table. */
calib->noise_samples[calib->cur_noise_sample] = val;
calib->cur_noise_sample = (calib->cur_noise_sample + 1) % 20;
/* Compute maximum noise among last 20 samples. */
noise_ref = calib->noise_samples[0];
for (i = 1; i < 20; i++)
noise_ref = MAX(noise_ref, calib->noise_samples[i]);
/* Compute maximum energy among 3 receivers. */
for (i = 0; i < 3; i++)
energy[i] = le32toh(stats->general.energy[i]);
val = MIN(energy[0], energy[1]);
val = MIN(energy[2], val);
/* Insert it into our samples table. */
calib->energy_samples[calib->cur_energy_sample] = val;
calib->cur_energy_sample = (calib->cur_energy_sample + 1) % 10;
/*
 * Compute minimum energy among last 10 samples.  Raw energy values
 * are inverted (a larger value means less energy), hence MAX.
 */
energy_min = calib->energy_samples[0];
for (i = 1; i < 10; i++)
energy_min = MAX(energy_min, calib->energy_samples[i]);
energy_min += 6;
/* Compute number of false alarms since last call for CCK. */
fa = le32toh(stats->cck.bad_plcp) - calib->bad_plcp_cck;
fa += le32toh(stats->cck.fa) - calib->fa_cck;
fa *= 200 * IEEE80211_DUR_TU; /* 200TU */
if (fa > 50 * rxena) {
/* High false alarm count, decrease sensitivity. */
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: CCK high false alarm count: %u\n", __func__, fa);
calib->cck_state = IWN_CCK_STATE_HIFA;
calib->low_fa = 0;
if (calib->cck_x4 > 160) {
calib->noise_ref = noise_ref;
if (calib->energy_cck > 2)
dec(calib->energy_cck, 2, energy_min);
}
if (calib->cck_x4 < 160) {
calib->cck_x4 = 161;
needs_update = 1;
} else
inc(calib->cck_x4, 3, limits->max_cck_x4);
inc(calib->cck_mrc_x4, 3, limits->max_cck_mrc_x4);
} else if (fa < 5 * rxena) {
/* Low false alarm count, increase sensitivity. */
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: CCK low false alarm count: %u\n", __func__, fa);
calib->cck_state = IWN_CCK_STATE_LOFA;
calib->low_fa++;
if (calib->cck_state != IWN_CCK_STATE_INIT &&
(((int32_t)calib->noise_ref - (int32_t)noise_ref) > 2 ||
calib->low_fa > 100)) {
inc(calib->energy_cck, 2, limits->min_energy_cck);
dec(calib->cck_x4, 3, limits->min_cck_x4);
dec(calib->cck_mrc_x4, 3, limits->min_cck_mrc_x4);
}
} else {
/* Not worth to increase or decrease sensitivity. */
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: CCK normal false alarm count: %u\n", __func__, fa);
calib->low_fa = 0;
calib->noise_ref = noise_ref;
if (calib->cck_state == IWN_CCK_STATE_HIFA) {
/* Previous interval had many false alarms. */
dec(calib->energy_cck, 8, energy_min);
}
calib->cck_state = IWN_CCK_STATE_INIT;
}
if (needs_update)
(void)iwn_send_sensitivity(sc);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n", __func__);
#undef dec
#undef inc
}
static int
iwn_send_sensitivity(struct iwn_softc *sc)
{
struct iwn_calib_state *calib = &sc->calib;
struct iwn_enhanced_sensitivity_cmd cmd;
int len;
memset(&cmd, 0, sizeof cmd);
len = sizeof (struct iwn_sensitivity_cmd);
cmd.which = IWN_SENSITIVITY_WORKTBL;
/* OFDM modulation. */
cmd.corr_ofdm_x1 = htole16(calib->ofdm_x1);
cmd.corr_ofdm_mrc_x1 = htole16(calib->ofdm_mrc_x1);
cmd.corr_ofdm_x4 = htole16(calib->ofdm_x4);
cmd.corr_ofdm_mrc_x4 = htole16(calib->ofdm_mrc_x4);
cmd.energy_ofdm = htole16(sc->limits->energy_ofdm);
cmd.energy_ofdm_th = htole16(62);
/* CCK modulation. */
cmd.corr_cck_x4 = htole16(calib->cck_x4);
cmd.corr_cck_mrc_x4 = htole16(calib->cck_mrc_x4);
cmd.energy_cck = htole16(calib->energy_cck);
/* Barker modulation: use default values. */
cmd.corr_barker = htole16(190);
cmd.corr_barker_mrc = htole16(sc->limits->barker_mrc);
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: set sensitivity %d/%d/%d/%d/%d/%d/%d\n", __func__,
calib->ofdm_x1, calib->ofdm_mrc_x1, calib->ofdm_x4,
calib->ofdm_mrc_x4, calib->cck_x4,
calib->cck_mrc_x4, calib->energy_cck);
if (!(sc->sc_flags & IWN_FLAG_ENH_SENS))
goto send;
/* Enhanced sensitivity settings. */
len = sizeof (struct iwn_enhanced_sensitivity_cmd);
cmd.ofdm_det_slope_mrc = htole16(668);
cmd.ofdm_det_icept_mrc = htole16(4);
cmd.ofdm_det_slope = htole16(486);
cmd.ofdm_det_icept = htole16(37);
cmd.cck_det_slope_mrc = htole16(853);
cmd.cck_det_icept_mrc = htole16(4);
cmd.cck_det_slope = htole16(476);
cmd.cck_det_icept = htole16(99);
send:
return iwn_cmd(sc, IWN_CMD_SET_SENSITIVITY, &cmd, len, 1);
}
/*
* Look at the increase of PLCP errors over time; if it exceeds
* a programmed threshold then trigger an RF retune.
*/
static void
iwn_check_rx_recovery(struct iwn_softc *sc, struct iwn_stats *rs)
{
int32_t delta_ofdm, delta_ht, delta_cck;
struct iwn_calib_state *calib = &sc->calib;
int delta_ticks, cur_ticks;
int delta_msec;
int thresh;
/*
* Calculate the difference between the current and
* previous statistics.
*/
delta_cck = le32toh(rs->rx.cck.bad_plcp) - calib->bad_plcp_cck;
delta_ofdm = le32toh(rs->rx.ofdm.bad_plcp) - calib->bad_plcp_ofdm;
delta_ht = le32toh(rs->rx.ht.bad_plcp) - calib->bad_plcp_ht;
/*
* Calculate the delta in time between successive statistics
* messages. Yes, it can roll over; so we make sure that
* this doesn't happen.
*
* XXX go figure out what to do about rollover
* XXX go figure out what to do if ticks rolls over to -ve instead!
* XXX go stab signed integer overflow undefined-ness in the face.
*/
cur_ticks = ticks;
delta_ticks = cur_ticks - sc->last_calib_ticks;
/*
* If any are negative, then the firmware likely reset; so just
* bail. We'll pick this up next time.
*/
if (delta_cck < 0 || delta_ofdm < 0 || delta_ht < 0 || delta_ticks < 0)
return;
/*
* delta_ticks is in ticks; we need to convert it up to milliseconds
* so we can do some useful math with it.
*/
delta_msec = ticks_to_msecs(delta_ticks);
/*
* Calculate what our threshold is given the current delta_msec.
*/
thresh = sc->base_params->plcp_err_threshold * delta_msec;
DPRINTF(sc, IWN_DEBUG_STATE,
"%s: time delta: %d; cck=%d, ofdm=%d, ht=%d, total=%d, thresh=%d\n",
__func__,
delta_msec,
delta_cck,
delta_ofdm,
delta_ht,
(delta_msec + delta_cck + delta_ofdm + delta_ht),
thresh);
/*
* If we need a retune, then schedule a single channel scan
* to a channel that isn't the currently active one!
*
* The math from linux iwlwifi:
*
* if ((delta * 100 / msecs) > threshold)
*/
if (thresh > 0 && (delta_cck + delta_ofdm + delta_ht) * 100 > thresh) {
DPRINTF(sc, IWN_DEBUG_ANY,
"%s: PLCP error threshold raw (%d) comparison (%d) "
"over limit (%d); retune!\n",
__func__,
(delta_cck + delta_ofdm + delta_ht),
(delta_cck + delta_ofdm + delta_ht) * 100,
thresh);
}
}
/*
* Set STA mode power saving level (between 0 and 5).
* Level 0 is CAM (Continuously Aware Mode), 5 is for maximum power saving.
*/
static int
iwn_set_pslevel(struct iwn_softc *sc, int dtim, int level, int async)
{
struct iwn_pmgt_cmd cmd;
const struct iwn_pmgt *pmgt;
uint32_t max, skip_dtim;
uint32_t reg;
int i;
DPRINTF(sc, IWN_DEBUG_PWRSAVE,
"%s: dtim=%d, level=%d, async=%d\n",
__func__,
dtim,
level,
async);
/* Select which PS parameters to use. */
if (dtim <= 2)
pmgt = &iwn_pmgt[0][level];
else if (dtim <= 10)
pmgt = &iwn_pmgt[1][level];
else
pmgt = &iwn_pmgt[2][level];
memset(&cmd, 0, sizeof cmd);
if (level != 0) /* not CAM */
cmd.flags |= htole16(IWN_PS_ALLOW_SLEEP);
if (level == 5)
cmd.flags |= htole16(IWN_PS_FAST_PD);
/* Retrieve PCIe Active State Power Management (ASPM). */
reg = pci_read_config(sc->sc_dev, sc->sc_cap_off + PCIER_LINK_CTL, 4);
if (!(reg & PCIEM_LINK_CTL_ASPMC_L0S)) /* L0s Entry disabled. */
cmd.flags |= htole16(IWN_PS_PCI_PMGT);
cmd.rxtimeout = htole32(pmgt->rxtimeout * 1024);
cmd.txtimeout = htole32(pmgt->txtimeout * 1024);
if (dtim == 0) {
dtim = 1;
skip_dtim = 0;
} else
skip_dtim = pmgt->skip_dtim;
if (skip_dtim != 0) {
cmd.flags |= htole16(IWN_PS_SLEEP_OVER_DTIM);
max = pmgt->intval[4];
if (max == (uint32_t)-1)
max = dtim * (skip_dtim + 1);
else if (max > dtim)
max = rounddown(max, dtim);
} else
max = dtim;
for (i = 0; i < 5; i++)
cmd.intval[i] = htole32(MIN(max, pmgt->intval[i]));
DPRINTF(sc, IWN_DEBUG_RESET, "setting power saving level to %d\n",
level);
return iwn_cmd(sc, IWN_CMD_SET_POWER_MODE, &cmd, sizeof cmd, async);
}
static int
iwn_send_btcoex(struct iwn_softc *sc)
{
struct iwn_bluetooth cmd;
memset(&cmd, 0, sizeof cmd);
cmd.flags = IWN_BT_COEX_CHAN_ANN | IWN_BT_COEX_BT_PRIO;
cmd.lead_time = IWN_BT_LEAD_TIME_DEF;
cmd.max_kill = IWN_BT_MAX_KILL_DEF;
DPRINTF(sc, IWN_DEBUG_RESET, "%s: configuring bluetooth coexistence\n",
__func__);
return iwn_cmd(sc, IWN_CMD_BT_COEX, &cmd, sizeof(cmd), 0);
}
static int
iwn_send_advanced_btcoex(struct iwn_softc *sc)
{
static const uint32_t btcoex_3wire[12] = {
0xaaaaaaaa, 0xaaaaaaaa, 0xaeaaaaaa, 0xaaaaaaaa,
0xcc00ff28, 0x0000aaaa, 0xcc00aaaa, 0x0000aaaa,
0xc0004000, 0x00004000, 0xf0005000, 0xf0005000,
};
struct iwn6000_btcoex_config btconfig;
struct iwn2000_btcoex_config btconfig2k;
struct iwn_btcoex_priotable btprio;
struct iwn_btcoex_prot btprot;
int error, i;
uint8_t flags;
memset(&btconfig, 0, sizeof btconfig);
memset(&btconfig2k, 0, sizeof btconfig2k);
flags = IWN_BT_FLAG_COEX6000_MODE_3W <<
IWN_BT_FLAG_COEX6000_MODE_SHIFT; /* Done as-is in Linux kernel 3.2. */
if (sc->base_params->bt_sco_disable)
flags &= ~IWN_BT_FLAG_SYNC_2_BT_DISABLE;
else
flags |= IWN_BT_FLAG_SYNC_2_BT_DISABLE;
flags |= IWN_BT_FLAG_COEX6000_CHAN_INHIBITION;
/* Default flags value is 145, matching the old behaviour. */
/*
 * The flags value has to be reviewed.  Values must change if we
 * wish to disable coexistence.
 */
if (sc->base_params->bt_session_2) {
btconfig2k.flags = flags;
btconfig2k.max_kill = 5;
btconfig2k.bt3_t7_timer = 1;
btconfig2k.kill_ack = htole32(0xffff0000);
btconfig2k.kill_cts = htole32(0xffff0000);
btconfig2k.sample_time = 2;
btconfig2k.bt3_t2_timer = 0xc;
for (i = 0; i < 12; i++)
btconfig2k.lookup_table[i] = htole32(btcoex_3wire[i]);
btconfig2k.valid = htole16(0xff);
btconfig2k.prio_boost = htole32(0xf0);
DPRINTF(sc, IWN_DEBUG_RESET,
"%s: configuring advanced bluetooth coexistence"
" session 2, flags : 0x%x\n",
__func__,
flags);
error = iwn_cmd(sc, IWN_CMD_BT_COEX, &btconfig2k,
sizeof(btconfig2k), 1);
} else {
btconfig.flags = flags;
btconfig.max_kill = 5;
btconfig.bt3_t7_timer = 1;
btconfig.kill_ack = htole32(0xffff0000);
btconfig.kill_cts = htole32(0xffff0000);
btconfig.sample_time = 2;
btconfig.bt3_t2_timer = 0xc;
for (i = 0; i < 12; i++)
btconfig.lookup_table[i] = htole32(btcoex_3wire[i]);
btconfig.valid = htole16(0xff);
btconfig.prio_boost = 0xf0;
DPRINTF(sc, IWN_DEBUG_RESET,
"%s: configuring advanced bluetooth coexistence,"
" flags : 0x%x\n",
__func__,
flags);
error = iwn_cmd(sc, IWN_CMD_BT_COEX, &btconfig,
sizeof(btconfig), 1);
}
if (error != 0)
return error;
memset(&btprio, 0, sizeof btprio);
btprio.calib_init1 = 0x6;
btprio.calib_init2 = 0x7;
btprio.calib_periodic_low1 = 0x2;
btprio.calib_periodic_low2 = 0x3;
btprio.calib_periodic_high1 = 0x4;
btprio.calib_periodic_high2 = 0x5;
btprio.dtim = 0x6;
btprio.scan52 = 0x8;
btprio.scan24 = 0xa;
error = iwn_cmd(sc, IWN_CMD_BT_COEX_PRIOTABLE, &btprio, sizeof(btprio),
1);
if (error != 0)
return error;
/* Force BT state machine change. */
memset(&btprot, 0, sizeof btprot);
btprot.open = 1;
btprot.type = 1;
error = iwn_cmd(sc, IWN_CMD_BT_COEX_PROT, &btprot, sizeof(btprot), 1);
if (error != 0)
return error;
btprot.open = 0;
return iwn_cmd(sc, IWN_CMD_BT_COEX_PROT, &btprot, sizeof(btprot), 1);
}
static int
iwn5000_runtime_calib(struct iwn_softc *sc)
{
struct iwn5000_calib_config cmd;
memset(&cmd, 0, sizeof cmd);
cmd.ucode.once.enable = 0xffffffff;
cmd.ucode.once.start = IWN5000_CALIB_DC;
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"%s: configuring runtime calibration\n", __func__);
return iwn_cmd(sc, IWN5000_CMD_CALIB_CONFIG, &cmd, sizeof(cmd), 0);
}
static uint32_t
iwn_get_rxon_ht_flags(struct iwn_softc *sc, struct ieee80211_channel *c)
{
struct ieee80211com *ic = &sc->sc_ic;
uint32_t htflags = 0;
if (! IEEE80211_IS_CHAN_HT(c))
return (0);
htflags |= IWN_RXON_HT_PROTMODE(ic->ic_curhtprotmode);
if (IEEE80211_IS_CHAN_HT40(c)) {
switch (ic->ic_curhtprotmode) {
case IEEE80211_HTINFO_OPMODE_HT20PR:
htflags |= IWN_RXON_HT_MODEPURE40;
break;
default:
htflags |= IWN_RXON_HT_MODEMIXED;
break;
}
}
if (IEEE80211_IS_CHAN_HT40D(c))
htflags |= IWN_RXON_HT_HT40MINUS;
return (htflags);
}
static int
iwn_check_bss_filter(struct iwn_softc *sc)
{
return ((sc->rxon->filter & htole32(IWN_FILTER_BSS)) != 0);
}
static int
iwn4965_rxon_assoc(struct iwn_softc *sc, int async)
{
struct iwn4965_rxon_assoc cmd;
struct iwn_rxon *rxon = sc->rxon;
cmd.flags = rxon->flags;
cmd.filter = rxon->filter;
cmd.ofdm_mask = rxon->ofdm_mask;
cmd.cck_mask = rxon->cck_mask;
cmd.ht_single_mask = rxon->ht_single_mask;
cmd.ht_dual_mask = rxon->ht_dual_mask;
cmd.rxchain = rxon->rxchain;
cmd.reserved = 0;
return (iwn_cmd(sc, IWN_CMD_RXON_ASSOC, &cmd, sizeof(cmd), async));
}
static int
iwn5000_rxon_assoc(struct iwn_softc *sc, int async)
{
struct iwn5000_rxon_assoc cmd;
struct iwn_rxon *rxon = sc->rxon;
cmd.flags = rxon->flags;
cmd.filter = rxon->filter;
cmd.ofdm_mask = rxon->ofdm_mask;
cmd.cck_mask = rxon->cck_mask;
cmd.reserved1 = 0;
cmd.ht_single_mask = rxon->ht_single_mask;
cmd.ht_dual_mask = rxon->ht_dual_mask;
cmd.ht_triple_mask = rxon->ht_triple_mask;
cmd.reserved2 = 0;
cmd.rxchain = rxon->rxchain;
cmd.acquisition = rxon->acquisition;
cmd.reserved3 = 0;
return (iwn_cmd(sc, IWN_CMD_RXON_ASSOC, &cmd, sizeof(cmd), async));
}
static int
iwn_send_rxon(struct iwn_softc *sc, int assoc, int async)
{
struct iwn_ops *ops = &sc->ops;
int error;
IWN_LOCK_ASSERT(sc);
if (assoc && iwn_check_bss_filter(sc) != 0) {
error = ops->rxon_assoc(sc, async);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: RXON_ASSOC command failed, error %d\n",
__func__, error);
return (error);
}
} else {
if (sc->sc_is_scanning)
device_printf(sc->sc_dev,
"%s: is_scanning set, before RXON\n",
__func__);
error = iwn_cmd(sc, IWN_CMD_RXON, sc->rxon, sc->rxonsz, async);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: RXON command failed, error %d\n",
__func__, error);
return (error);
}
/*
* Reconfiguring RXON clears the firmware nodes table so
* we must add the broadcast node again.
*/
if (iwn_check_bss_filter(sc) == 0 &&
(error = iwn_add_broadcast_node(sc, async)) != 0) {
device_printf(sc->sc_dev,
"%s: could not add broadcast node, error %d\n",
__func__, error);
return (error);
}
}
/* Configuration has changed, set TX power accordingly. */
if ((error = ops->set_txpower(sc, async)) != 0) {
device_printf(sc->sc_dev,
"%s: could not set TX power, error %d\n",
__func__, error);
return (error);
}
return (0);
}
static int
iwn_config(struct iwn_softc *sc)
{
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211vap *vap = TAILQ_FIRST(&ic->ic_vaps);
const uint8_t *macaddr;
uint32_t txmask;
uint16_t rxchain;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
if ((sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_TEMP_OFFSET)
&& (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_TEMP_OFFSETv2)) {
device_printf(sc->sc_dev,"%s: temp_offset and temp_offsetv2 are"
" mutually exclusive. Review the NIC config file. Conf"
" : 0x%08x Flags : 0x%08x \n", __func__,
sc->base_params->calib_need,
(IWN_FLG_NEED_PHY_CALIB_TEMP_OFFSET |
IWN_FLG_NEED_PHY_CALIB_TEMP_OFFSETv2));
return (EINVAL);
}
/* Compute temperature calibration if needed. It will be sent by the send-calib path. */
if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_TEMP_OFFSET) {
error = iwn5000_temp_offset_calib(sc);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not set temperature offset\n", __func__);
return (error);
}
} else if (sc->base_params->calib_need & IWN_FLG_NEED_PHY_CALIB_TEMP_OFFSETv2) {
error = iwn5000_temp_offset_calibv2(sc);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not compute temperature offset v2\n",
__func__);
return (error);
}
}
if (sc->hw_type == IWN_HW_REV_TYPE_6050) {
/* Configure runtime DC calibration. */
error = iwn5000_runtime_calib(sc);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not configure runtime calibration\n",
__func__);
return error;
}
}
/* Configure valid TX chains for >=5000 Series. */
if (sc->hw_type != IWN_HW_REV_TYPE_4965 &&
IWN_UCODE_API(sc->ucode_rev) > 1) {
txmask = htole32(sc->txchainmask);
DPRINTF(sc, IWN_DEBUG_RESET | IWN_DEBUG_XMIT,
"%s: configuring valid TX chains 0x%x\n", __func__, txmask);
error = iwn_cmd(sc, IWN5000_CMD_TX_ANT_CONFIG, &txmask,
sizeof txmask, 0);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not configure valid TX chains, "
"error %d\n", __func__, error);
return error;
}
}
/* Configure bluetooth coexistence if needed. */
error = 0;
if (sc->base_params->bt_mode == IWN_BT_ADVANCED)
error = iwn_send_advanced_btcoex(sc);
if (sc->base_params->bt_mode == IWN_BT_SIMPLE)
error = iwn_send_btcoex(sc);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not configure bluetooth coexistence, error %d\n",
__func__, error);
return error;
}
/* Set mode, channel, RX filter and enable RX. */
sc->rxon = &sc->rx_on[IWN_RXON_BSS_CTX];
memset(sc->rxon, 0, sizeof (struct iwn_rxon));
macaddr = vap ? vap->iv_myaddr : ic->ic_macaddr;
IEEE80211_ADDR_COPY(sc->rxon->myaddr, macaddr);
IEEE80211_ADDR_COPY(sc->rxon->wlap, macaddr);
sc->rxon->chan = ieee80211_chan2ieee(ic, ic->ic_curchan);
sc->rxon->flags = htole32(IWN_RXON_TSF | IWN_RXON_CTS_TO_SELF);
if (IEEE80211_IS_CHAN_2GHZ(ic->ic_curchan))
sc->rxon->flags |= htole32(IWN_RXON_AUTO | IWN_RXON_24GHZ);
sc->rxon->filter = htole32(IWN_FILTER_MULTICAST);
switch (ic->ic_opmode) {
case IEEE80211_M_STA:
sc->rxon->mode = IWN_MODE_STA;
break;
case IEEE80211_M_MONITOR:
sc->rxon->mode = IWN_MODE_MONITOR;
break;
default:
/* Should not get here. */
break;
}
iwn_set_promisc(sc);
sc->rxon->cck_mask = 0x0f; /* not yet negotiated */
sc->rxon->ofdm_mask = 0xff; /* not yet negotiated */
sc->rxon->ht_single_mask = 0xff;
sc->rxon->ht_dual_mask = 0xff;
sc->rxon->ht_triple_mask = 0xff;
/*
* In active association mode, ensure that
* all the receive chains are enabled.
*
* Since we're not yet doing SMPS, don't allow the
* number of idle RX chains to be less than the active
* number.
*/
rxchain =
IWN_RXCHAIN_VALID(sc->rxchainmask) |
IWN_RXCHAIN_MIMO_COUNT(sc->nrxchains) |
IWN_RXCHAIN_IDLE_COUNT(sc->nrxchains);
sc->rxon->rxchain = htole16(rxchain);
DPRINTF(sc, IWN_DEBUG_RESET | IWN_DEBUG_XMIT,
"%s: rxchainmask=0x%x, nrxchains=%d\n",
__func__,
sc->rxchainmask,
sc->nrxchains);
sc->rxon->flags |= htole32(iwn_get_rxon_ht_flags(sc, ic->ic_curchan));
DPRINTF(sc, IWN_DEBUG_RESET,
"%s: setting configuration; flags=0x%08x\n",
__func__, le32toh(sc->rxon->flags));
if ((error = iwn_send_rxon(sc, 0, 0)) != 0) {
device_printf(sc->sc_dev, "%s: could not send RXON\n",
__func__);
return error;
}
if ((error = iwn_set_critical_temp(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not set critical temperature\n", __func__);
return error;
}
/* Set power saving level to CAM during initialization. */
if ((error = iwn_set_pslevel(sc, 0, 0, 0)) != 0) {
device_printf(sc->sc_dev,
"%s: could not set power saving level\n", __func__);
return error;
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return 0;
}
static uint16_t
iwn_get_active_dwell_time(struct iwn_softc *sc,
struct ieee80211_channel *c, uint8_t n_probes)
{
/* No channel? Default to 2GHz settings */
if (c == NULL || IEEE80211_IS_CHAN_2GHZ(c)) {
return (IWN_ACTIVE_DWELL_TIME_2GHZ +
IWN_ACTIVE_DWELL_FACTOR_2GHZ * (n_probes + 1));
}
/* 5GHz dwell time */
return (IWN_ACTIVE_DWELL_TIME_5GHZ +
IWN_ACTIVE_DWELL_FACTOR_5GHZ * (n_probes + 1));
}
/*
* Limit the total dwell time to 85% of the beacon interval.
*
* Returns the dwell time in milliseconds.
*/
static uint16_t
iwn_limit_dwell(struct iwn_softc *sc, uint16_t dwell_time)
{
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211vap *vap = NULL;
int bintval = 0;
/* bintval is in TU (1.024 ms) */
if (! TAILQ_EMPTY(&ic->ic_vaps)) {
vap = TAILQ_FIRST(&ic->ic_vaps);
bintval = vap->iv_bss->ni_intval;
}
/*
* If it's non-zero, we should calculate the minimum of
* it and the DWELL_BASE.
*
* XXX Yes, the math should take into account that bintval
* is 1.024 ms, not 1 ms..
*/
if (bintval > 0) {
DPRINTF(sc, IWN_DEBUG_SCAN,
"%s: bintval=%d\n",
__func__,
bintval);
return (MIN(IWN_PASSIVE_DWELL_BASE, ((bintval * 85) / 100)));
}
/* No association context? Default */
return (IWN_PASSIVE_DWELL_BASE);
}
static uint16_t
iwn_get_passive_dwell_time(struct iwn_softc *sc, struct ieee80211_channel *c)
{
uint16_t passive;
if (c == NULL || IEEE80211_IS_CHAN_2GHZ(c)) {
passive = IWN_PASSIVE_DWELL_BASE + IWN_PASSIVE_DWELL_TIME_2GHZ;
} else {
passive = IWN_PASSIVE_DWELL_BASE + IWN_PASSIVE_DWELL_TIME_5GHZ;
}
/* Clamp to the beacon interval if we're associated */
return (iwn_limit_dwell(sc, passive));
}
static int
iwn_scan(struct iwn_softc *sc, struct ieee80211vap *vap,
struct ieee80211_scan_state *ss, struct ieee80211_channel *c)
{
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211_node *ni = vap->iv_bss;
struct iwn_scan_hdr *hdr;
struct iwn_cmd_data *tx;
struct iwn_scan_essid *essid;
struct iwn_scan_chan *chan;
struct ieee80211_frame *wh;
struct ieee80211_rateset *rs;
uint8_t *buf, *frm;
uint16_t rxchain;
uint8_t txant;
int buflen, error;
int is_active;
uint16_t dwell_active, dwell_passive;
uint32_t extra, scan_service_time;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/*
* We are absolutely not allowed to send a scan command when another
* scan command is pending.
*/
if (sc->sc_is_scanning) {
device_printf(sc->sc_dev, "%s: called whilst scanning!\n",
__func__);
return (EAGAIN);
}
/* Assign the scan channel */
c = ic->ic_curchan;
sc->rxon = &sc->rx_on[IWN_RXON_BSS_CTX];
buf = malloc(IWN_SCAN_MAXSZ, M_DEVBUF, M_NOWAIT | M_ZERO);
if (buf == NULL) {
device_printf(sc->sc_dev,
"%s: could not allocate buffer for scan command\n",
__func__);
return ENOMEM;
}
hdr = (struct iwn_scan_hdr *)buf;
/*
* Move to the next channel if no frames are received within 10ms
* after sending the probe request.
*/
hdr->quiet_time = htole16(10); /* timeout in milliseconds */
hdr->quiet_threshold = htole16(1); /* min # of packets */
/*
* Max needs to be greater than active and passive and quiet!
* It's also in microseconds!
*/
hdr->max_svc = htole32(250 * 1024);
/*
* Reset scan: interval=100
* Normal scan: interval=beacon interval
* suspend_time: 100 (TU)
*
*/
extra = (100 /* suspend_time */ / 100 /* beacon interval */) << 22;
//scan_service_time = extra | ((100 /* susp */ % 100 /* int */) * 1024);
scan_service_time = (4 << 22) | (100 * 1024); /* Hardcode for now! */
hdr->pause_svc = htole32(scan_service_time);
/* Select antennas for scanning. */
rxchain =
IWN_RXCHAIN_VALID(sc->rxchainmask) |
IWN_RXCHAIN_FORCE_MIMO_SEL(sc->rxchainmask) |
IWN_RXCHAIN_DRIVER_FORCE;
if (IEEE80211_IS_CHAN_A(c) &&
sc->hw_type == IWN_HW_REV_TYPE_4965) {
/* Ant A must be avoided in 5GHz because of an HW bug. */
rxchain |= IWN_RXCHAIN_FORCE_SEL(IWN_ANT_B);
} else /* Use all available RX antennas. */
rxchain |= IWN_RXCHAIN_FORCE_SEL(sc->rxchainmask);
hdr->rxchain = htole16(rxchain);
hdr->filter = htole32(IWN_FILTER_MULTICAST | IWN_FILTER_BEACON);
tx = (struct iwn_cmd_data *)(hdr + 1);
tx->flags = htole32(IWN_TX_AUTO_SEQ);
tx->id = sc->broadcast_id;
tx->lifetime = htole32(IWN_LIFETIME_INFINITE);
if (IEEE80211_IS_CHAN_5GHZ(c)) {
/* Send probe requests at 6Mbps. */
tx->rate = htole32(0xd);
rs = &ic->ic_sup_rates[IEEE80211_MODE_11A];
} else {
hdr->flags = htole32(IWN_RXON_24GHZ | IWN_RXON_AUTO);
if (sc->hw_type == IWN_HW_REV_TYPE_4965 &&
sc->rxon->associd && sc->rxon->chan > 14)
tx->rate = htole32(0xd);
else {
/* Send probe requests at 1Mbps. */
tx->rate = htole32(10 | IWN_RFLAG_CCK);
}
rs = &ic->ic_sup_rates[IEEE80211_MODE_11G];
}
/* Use the first valid TX antenna. */
txant = IWN_LSB(sc->txchainmask);
tx->rate |= htole32(IWN_RFLAG_ANT(txant));
/*
* Only do active scanning if we're announcing a probe request
* for a given SSID (or more, if we ever add it to the driver.)
*/
is_active = 0;
/*
* If we're scanning for a specific SSID, add it to the command.
*
* XXX maybe look at adding support for scanning multiple SSIDs?
*/
essid = (struct iwn_scan_essid *)(tx + 1);
if (ss != NULL) {
if (ss->ss_ssid[0].len != 0) {
essid[0].id = IEEE80211_ELEMID_SSID;
essid[0].len = ss->ss_ssid[0].len;
memcpy(essid[0].data, ss->ss_ssid[0].ssid, ss->ss_ssid[0].len);
}
DPRINTF(sc, IWN_DEBUG_SCAN, "%s: ssid_len=%d, ssid=%*s\n",
__func__,
ss->ss_ssid[0].len,
ss->ss_ssid[0].len,
ss->ss_ssid[0].ssid);
if (ss->ss_nssid > 0)
is_active = 1;
}
/*
* Build a probe request frame. Most of the following code is a
* copy & paste of what is done in net80211.
*/
wh = (struct ieee80211_frame *)(essid + 20);
wh->i_fc[0] = IEEE80211_FC0_VERSION_0 | IEEE80211_FC0_TYPE_MGT |
IEEE80211_FC0_SUBTYPE_PROBE_REQ;
wh->i_fc[1] = IEEE80211_FC1_DIR_NODS;
IEEE80211_ADDR_COPY(wh->i_addr1, vap->iv_ifp->if_broadcastaddr);
IEEE80211_ADDR_COPY(wh->i_addr2, IF_LLADDR(vap->iv_ifp));
IEEE80211_ADDR_COPY(wh->i_addr3, vap->iv_ifp->if_broadcastaddr);
*(uint16_t *)&wh->i_dur[0] = 0; /* filled by HW */
*(uint16_t *)&wh->i_seq[0] = 0; /* filled by HW */
frm = (uint8_t *)(wh + 1);
frm = ieee80211_add_ssid(frm, NULL, 0);
frm = ieee80211_add_rates(frm, rs);
if (rs->rs_nrates > IEEE80211_RATE_SIZE)
frm = ieee80211_add_xrates(frm, rs);
if (ic->ic_htcaps & IEEE80211_HTC_HT)
frm = ieee80211_add_htcap(frm, ni);
/* Set length of probe request. */
tx->len = htole16(frm - (uint8_t *)wh);
/*
* If active scanning is requested but a certain channel is
* marked passive, we can do active scanning if we detect
* transmissions.
*
* There is an issue with some firmware versions that triggers
* a sysassert on a "good CRC threshold" of zero (== disabled),
* on a radar channel even though this means that we should NOT
* send probes.
*
* The "good CRC threshold" is the number of frames that we
* need to receive during our dwell time on a channel before
* sending out probes -- setting this to a huge value will
* mean we never reach it, but at the same time work around
* the aforementioned issue. Thus use IWL_GOOD_CRC_TH_NEVER
* here instead of IWL_GOOD_CRC_TH_DISABLED.
*
* This was fixed in later versions along with some other
* scan changes, and the threshold behaves as a flag in those
* versions.
*/
/*
* If we're doing active scanning, set the crc_threshold
* to a suitable value. This differs for active versus
* passive scanning depending upon the channel flags; the
* firmware will obey that particular check for us.
*/
if (sc->tlv_feature_flags & IWN_UCODE_TLV_FLAGS_NEWSCAN)
hdr->crc_threshold = is_active ?
IWN_GOOD_CRC_TH_DEFAULT : IWN_GOOD_CRC_TH_DISABLED;
else
hdr->crc_threshold = is_active ?
IWN_GOOD_CRC_TH_DEFAULT : IWN_GOOD_CRC_TH_NEVER;
chan = (struct iwn_scan_chan *)frm;
chan->chan = htole16(ieee80211_chan2ieee(ic, c));
chan->flags = 0;
if (ss->ss_nssid > 0)
chan->flags |= htole32(IWN_CHAN_NPBREQS(1));
chan->dsp_gain = 0x6e;
/*
* Set the passive/active flag depending upon the channel mode.
* XXX TODO: take the is_active flag into account as well?
*/
if (c->ic_flags & IEEE80211_CHAN_PASSIVE)
chan->flags |= htole32(IWN_CHAN_PASSIVE);
else
chan->flags |= htole32(IWN_CHAN_ACTIVE);
/*
* Calculate the active/passive dwell times.
*/
dwell_active = iwn_get_active_dwell_time(sc, c, ss->ss_nssid);
dwell_passive = iwn_get_passive_dwell_time(sc, c);
/* Make sure they're valid */
if (dwell_passive <= dwell_active)
dwell_passive = dwell_active + 1;
chan->active = htole16(dwell_active);
chan->passive = htole16(dwell_passive);
if (IEEE80211_IS_CHAN_5GHZ(c))
chan->rf_gain = 0x3b;
else
chan->rf_gain = 0x28;
DPRINTF(sc, IWN_DEBUG_STATE,
"%s: chan %u flags 0x%x rf_gain 0x%x "
"dsp_gain 0x%x active %d passive %d scan_svc_time %d crc 0x%x "
"isactive=%d numssid=%d\n", __func__,
chan->chan, chan->flags, chan->rf_gain, chan->dsp_gain,
dwell_active, dwell_passive, scan_service_time,
hdr->crc_threshold, is_active, ss->ss_nssid);
hdr->nchan++;
chan++;
buflen = (uint8_t *)chan - buf;
hdr->len = htole16(buflen);
if (sc->sc_is_scanning) {
device_printf(sc->sc_dev,
"%s: called with is_scanning set!\n",
__func__);
}
sc->sc_is_scanning = 1;
DPRINTF(sc, IWN_DEBUG_STATE, "sending scan command nchan=%d\n",
hdr->nchan);
error = iwn_cmd(sc, IWN_CMD_SCAN, buf, buflen, 1);
free(buf, M_DEVBUF);
if (error == 0)
callout_reset(&sc->scan_timeout, 5*hz, iwn_scan_timeout, sc);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return error;
}
static int
iwn_auth(struct iwn_softc *sc, struct ieee80211vap *vap)
{
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211_node *ni = vap->iv_bss;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
sc->rxon = &sc->rx_on[IWN_RXON_BSS_CTX];
/* Update adapter configuration. */
IEEE80211_ADDR_COPY(sc->rxon->bssid, ni->ni_bssid);
sc->rxon->chan = ieee80211_chan2ieee(ic, ni->ni_chan);
sc->rxon->flags = htole32(IWN_RXON_TSF | IWN_RXON_CTS_TO_SELF);
if (IEEE80211_IS_CHAN_2GHZ(ni->ni_chan))
sc->rxon->flags |= htole32(IWN_RXON_AUTO | IWN_RXON_24GHZ);
if (ic->ic_flags & IEEE80211_F_SHSLOT)
sc->rxon->flags |= htole32(IWN_RXON_SHSLOT);
if (ic->ic_flags & IEEE80211_F_SHPREAMBLE)
sc->rxon->flags |= htole32(IWN_RXON_SHPREAMBLE);
if (IEEE80211_IS_CHAN_A(ni->ni_chan)) {
sc->rxon->cck_mask = 0;
sc->rxon->ofdm_mask = 0x15;
} else if (IEEE80211_IS_CHAN_B(ni->ni_chan)) {
sc->rxon->cck_mask = 0x03;
sc->rxon->ofdm_mask = 0;
} else {
/* Assume 802.11b/g. */
sc->rxon->cck_mask = 0x03;
sc->rxon->ofdm_mask = 0x15;
}
/* try HT */
sc->rxon->flags |= htole32(iwn_get_rxon_ht_flags(sc, ic->ic_curchan));
DPRINTF(sc, IWN_DEBUG_STATE, "rxon chan %d flags %x cck %x ofdm %x\n",
sc->rxon->chan, sc->rxon->flags, sc->rxon->cck_mask,
sc->rxon->ofdm_mask);
if ((error = iwn_send_rxon(sc, 0, 1)) != 0) {
device_printf(sc->sc_dev, "%s: could not send RXON\n",
__func__);
return (error);
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return (0);
}
static int
iwn_run(struct iwn_softc *sc, struct ieee80211vap *vap)
{
struct iwn_ops *ops = &sc->ops;
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211_node *ni = vap->iv_bss;
struct iwn_node_info node;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
sc->rxon = &sc->rx_on[IWN_RXON_BSS_CTX];
if (ic->ic_opmode == IEEE80211_M_MONITOR) {
/* Link LED blinks while monitoring. */
iwn_set_led(sc, IWN_LED_LINK, 5, 5);
return 0;
}
if ((error = iwn_set_timing(sc, ni)) != 0) {
device_printf(sc->sc_dev,
"%s: could not set timing, error %d\n", __func__, error);
return error;
}
/* Update adapter configuration. */
IEEE80211_ADDR_COPY(sc->rxon->bssid, ni->ni_bssid);
sc->rxon->associd = htole16(IEEE80211_AID(ni->ni_associd));
sc->rxon->chan = ieee80211_chan2ieee(ic, ni->ni_chan);
sc->rxon->flags = htole32(IWN_RXON_TSF | IWN_RXON_CTS_TO_SELF);
if (IEEE80211_IS_CHAN_2GHZ(ni->ni_chan))
sc->rxon->flags |= htole32(IWN_RXON_AUTO | IWN_RXON_24GHZ);
if (ic->ic_flags & IEEE80211_F_SHSLOT)
sc->rxon->flags |= htole32(IWN_RXON_SHSLOT);
if (ic->ic_flags & IEEE80211_F_SHPREAMBLE)
sc->rxon->flags |= htole32(IWN_RXON_SHPREAMBLE);
if (IEEE80211_IS_CHAN_A(ni->ni_chan)) {
sc->rxon->cck_mask = 0;
sc->rxon->ofdm_mask = 0x15;
} else if (IEEE80211_IS_CHAN_B(ni->ni_chan)) {
sc->rxon->cck_mask = 0x03;
sc->rxon->ofdm_mask = 0;
} else {
/* Assume 802.11b/g. */
sc->rxon->cck_mask = 0x0f;
sc->rxon->ofdm_mask = 0x15;
}
/* try HT */
sc->rxon->flags |= htole32(iwn_get_rxon_ht_flags(sc, ni->ni_chan));
sc->rxon->filter |= htole32(IWN_FILTER_BSS);
DPRINTF(sc, IWN_DEBUG_STATE, "rxon chan %d flags %x, curhtprotmode=%d\n",
sc->rxon->chan, le32toh(sc->rxon->flags), ic->ic_curhtprotmode);
if ((error = iwn_send_rxon(sc, 0, 1)) != 0) {
device_printf(sc->sc_dev, "%s: could not send RXON\n",
__func__);
return error;
}
/* Fake a join to initialize the TX rate. */
((struct iwn_node *)ni)->id = IWN_ID_BSS;
iwn_newassoc(ni, 1);
/* Add BSS node. */
memset(&node, 0, sizeof node);
IEEE80211_ADDR_COPY(node.macaddr, ni->ni_macaddr);
node.id = IWN_ID_BSS;
if (IEEE80211_IS_CHAN_HT(ni->ni_chan)) {
switch (ni->ni_htcap & IEEE80211_HTCAP_SMPS) {
case IEEE80211_HTCAP_SMPS_ENA:
node.htflags |= htole32(IWN_SMPS_MIMO_DIS);
break;
case IEEE80211_HTCAP_SMPS_DYNAMIC:
node.htflags |= htole32(IWN_SMPS_MIMO_PROT);
break;
}
node.htflags |= htole32(IWN_AMDPU_SIZE_FACTOR(3) |
IWN_AMDPU_DENSITY(5)); /* 4us */
if (IEEE80211_IS_CHAN_HT40(ni->ni_chan))
node.htflags |= htole32(IWN_NODE_HT40);
}
DPRINTF(sc, IWN_DEBUG_STATE, "%s: adding BSS node\n", __func__);
error = ops->add_node(sc, &node, 1);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not add BSS node, error %d\n", __func__, error);
return error;
}
DPRINTF(sc, IWN_DEBUG_STATE, "%s: setting link quality for node %d\n",
__func__, node.id);
if ((error = iwn_set_link_quality(sc, ni)) != 0) {
device_printf(sc->sc_dev,
"%s: could not setup link quality for node %d, error %d\n",
__func__, node.id, error);
return error;
}
if ((error = iwn_init_sensitivity(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not set sensitivity, error %d\n", __func__,
error);
return error;
}
/* Start periodic calibration timer. */
sc->calib.state = IWN_CALIB_STATE_ASSOC;
sc->calib_cnt = 0;
callout_reset(&sc->calib_to, msecs_to_ticks(500), iwn_calib_timeout,
sc);
/* Link LED always on while associated. */
iwn_set_led(sc, IWN_LED_LINK, 0, 1);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return 0;
}
/*
* This function is called by upper layer when an ADDBA request is received
* from another STA and before the ADDBA response is sent.
*/
static int
iwn_ampdu_rx_start(struct ieee80211_node *ni, struct ieee80211_rx_ampdu *rap,
int baparamset, int batimeout, int baseqctl)
{
#define MS(_v, _f) (((_v) & _f) >> _f##_S)
struct iwn_softc *sc = ni->ni_ic->ic_softc;
struct iwn_ops *ops = &sc->ops;
struct iwn_node *wn = (void *)ni;
struct iwn_node_info node;
uint16_t ssn;
uint8_t tid;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
tid = MS(le16toh(baparamset), IEEE80211_BAPS_TID);
ssn = MS(le16toh(baseqctl), IEEE80211_BASEQ_START);
if (wn->id == IWN_ID_UNDEFINED)
return (ENOENT);
memset(&node, 0, sizeof node);
node.id = wn->id;
node.control = IWN_NODE_UPDATE;
node.flags = IWN_FLAG_SET_ADDBA;
node.addba_tid = tid;
node.addba_ssn = htole16(ssn);
DPRINTF(sc, IWN_DEBUG_RECV, "ADDBA RA=%d TID=%d SSN=%d\n",
wn->id, tid, ssn);
error = ops->add_node(sc, &node, 1);
if (error != 0)
return error;
return sc->sc_ampdu_rx_start(ni, rap, baparamset, batimeout, baseqctl);
#undef MS
}
/*
* This function is called by upper layer on teardown of an HT-immediate
* Block Ack agreement (e.g. upon receipt of a DELBA frame).
*/
static void
iwn_ampdu_rx_stop(struct ieee80211_node *ni, struct ieee80211_rx_ampdu *rap)
{
struct ieee80211com *ic = ni->ni_ic;
struct iwn_softc *sc = ic->ic_softc;
struct iwn_ops *ops = &sc->ops;
struct iwn_node *wn = (void *)ni;
struct iwn_node_info node;
uint8_t tid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
if (wn->id == IWN_ID_UNDEFINED)
goto end;
/* XXX: tid as an argument */
for (tid = 0; tid < WME_NUM_TID; tid++) {
if (&ni->ni_rx_ampdu[tid] == rap)
break;
}
memset(&node, 0, sizeof node);
node.id = wn->id;
node.control = IWN_NODE_UPDATE;
node.flags = IWN_FLAG_SET_DELBA;
node.delba_tid = tid;
DPRINTF(sc, IWN_DEBUG_RECV, "DELBA RA=%d TID=%d\n", wn->id, tid);
(void)ops->add_node(sc, &node, 1);
end:
sc->sc_ampdu_rx_stop(ni, rap);
}
static int
iwn_addba_request(struct ieee80211_node *ni, struct ieee80211_tx_ampdu *tap,
int dialogtoken, int baparamset, int batimeout)
{
struct iwn_softc *sc = ni->ni_ic->ic_softc;
int qid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
for (qid = sc->firstaggqueue; qid < sc->ntxqs; qid++) {
if (sc->qid2tap[qid] == NULL)
break;
}
if (qid == sc->ntxqs) {
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: no free aggregation queue\n",
__func__);
return 0;
}
tap->txa_private = malloc(sizeof(int), M_DEVBUF, M_NOWAIT);
if (tap->txa_private == NULL) {
device_printf(sc->sc_dev,
"%s: failed to alloc TX aggregation structure\n", __func__);
return 0;
}
sc->qid2tap[qid] = tap;
*(int *)tap->txa_private = qid;
return sc->sc_addba_request(ni, tap, dialogtoken, baparamset,
batimeout);
}
static int
iwn_addba_response(struct ieee80211_node *ni, struct ieee80211_tx_ampdu *tap,
int code, int baparamset, int batimeout)
{
struct iwn_softc *sc = ni->ni_ic->ic_softc;
int qid = *(int *)tap->txa_private;
uint8_t tid = tap->txa_tid;
int ret;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
if (code == IEEE80211_STATUS_SUCCESS) {
ni->ni_txseqs[tid] = tap->txa_start & 0xfff;
ret = iwn_ampdu_tx_start(ni->ni_ic, ni, tid);
if (ret != 1)
return ret;
} else {
sc->qid2tap[qid] = NULL;
free(tap->txa_private, M_DEVBUF);
tap->txa_private = NULL;
}
return sc->sc_addba_response(ni, tap, code, baparamset, batimeout);
}
/*
* This function is called by upper layer when an ADDBA response is received
* from another STA.
*/
static int
iwn_ampdu_tx_start(struct ieee80211com *ic, struct ieee80211_node *ni,
uint8_t tid)
{
struct ieee80211_tx_ampdu *tap = &ni->ni_tx_ampdu[tid];
struct iwn_softc *sc = ni->ni_ic->ic_softc;
struct iwn_ops *ops = &sc->ops;
struct iwn_node *wn = (void *)ni;
struct iwn_node_info node;
int error, qid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
if (wn->id == IWN_ID_UNDEFINED)
return (0);
/* Enable TX for the specified RA/TID. */
wn->disable_tid &= ~(1 << tid);
memset(&node, 0, sizeof node);
node.id = wn->id;
node.control = IWN_NODE_UPDATE;
node.flags = IWN_FLAG_SET_DISABLE_TID;
node.disable_tid = htole16(wn->disable_tid);
error = ops->add_node(sc, &node, 1);
if (error != 0)
return 0;
if ((error = iwn_nic_lock(sc)) != 0)
return 0;
qid = *(int *)tap->txa_private;
DPRINTF(sc, IWN_DEBUG_XMIT, "%s: ra=%d tid=%d ssn=%d qid=%d\n",
__func__, wn->id, tid, tap->txa_start, qid);
ops->ampdu_tx_start(sc, ni, qid, tid, tap->txa_start & 0xfff);
iwn_nic_unlock(sc);
iwn_set_link_quality(sc, ni);
return 1;
}
static void
iwn_ampdu_tx_stop(struct ieee80211_node *ni, struct ieee80211_tx_ampdu *tap)
{
struct iwn_softc *sc = ni->ni_ic->ic_softc;
struct iwn_ops *ops = &sc->ops;
uint8_t tid = tap->txa_tid;
int qid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
sc->sc_addba_stop(ni, tap);
if (tap->txa_private == NULL)
return;
qid = *(int *)tap->txa_private;
if (sc->txq[qid].queued != 0)
return;
if (iwn_nic_lock(sc) != 0)
return;
ops->ampdu_tx_stop(sc, qid, tid, tap->txa_start & 0xfff);
iwn_nic_unlock(sc);
sc->qid2tap[qid] = NULL;
free(tap->txa_private, M_DEVBUF);
tap->txa_private = NULL;
}
static void
iwn4965_ampdu_tx_start(struct iwn_softc *sc, struct ieee80211_node *ni,
int qid, uint8_t tid, uint16_t ssn)
{
struct iwn_node *wn = (void *)ni;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Stop TX scheduler while we're changing its configuration. */
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_STATUS(qid),
IWN4965_TXQ_STATUS_CHGACT);
/* Assign RA/TID translation to the queue. */
iwn_mem_write_2(sc, sc->sched_base + IWN4965_SCHED_TRANS_TBL(qid),
wn->id << 4 | tid);
/* Enable chain-building mode for the queue. */
iwn_prph_setbits(sc, IWN4965_SCHED_QCHAIN_SEL, 1 << qid);
/* Set starting sequence number from the ADDBA request. */
sc->txq[qid].cur = sc->txq[qid].read = (ssn & 0xff);
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, qid << 8 | (ssn & 0xff));
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_RDPTR(qid), ssn);
/* Set scheduler window size. */
iwn_mem_write(sc, sc->sched_base + IWN4965_SCHED_QUEUE_OFFSET(qid),
IWN_SCHED_WINSZ);
/* Set scheduler frame limit. */
iwn_mem_write(sc, sc->sched_base + IWN4965_SCHED_QUEUE_OFFSET(qid) + 4,
IWN_SCHED_LIMIT << 16);
/* Enable interrupts for the queue. */
iwn_prph_setbits(sc, IWN4965_SCHED_INTR_MASK, 1 << qid);
/* Mark the queue as active. */
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_STATUS(qid),
IWN4965_TXQ_STATUS_ACTIVE | IWN4965_TXQ_STATUS_AGGR_ENA |
iwn_tid2fifo[tid] << 1);
}
static void
iwn4965_ampdu_tx_stop(struct iwn_softc *sc, int qid, uint8_t tid, uint16_t ssn)
{
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Stop TX scheduler while we're changing its configuration. */
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_STATUS(qid),
IWN4965_TXQ_STATUS_CHGACT);
/* Set starting sequence number from the ADDBA request. */
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, qid << 8 | (ssn & 0xff));
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_RDPTR(qid), ssn);
/* Disable interrupts for the queue. */
iwn_prph_clrbits(sc, IWN4965_SCHED_INTR_MASK, 1 << qid);
/* Mark the queue as inactive. */
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_STATUS(qid),
IWN4965_TXQ_STATUS_INACTIVE | iwn_tid2fifo[tid] << 1);
}
static void
iwn5000_ampdu_tx_start(struct iwn_softc *sc, struct ieee80211_node *ni,
int qid, uint8_t tid, uint16_t ssn)
{
struct iwn_node *wn = (void *)ni;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Stop TX scheduler while we're changing its configuration. */
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_STATUS(qid),
IWN5000_TXQ_STATUS_CHGACT);
/* Assign RA/TID translation to the queue. */
iwn_mem_write_2(sc, sc->sched_base + IWN5000_SCHED_TRANS_TBL(qid),
wn->id << 4 | tid);
/* Enable chain-building mode for the queue. */
iwn_prph_setbits(sc, IWN5000_SCHED_QCHAIN_SEL, 1 << qid);
/* Enable aggregation for the queue. */
iwn_prph_setbits(sc, IWN5000_SCHED_AGGR_SEL, 1 << qid);
/* Set starting sequence number from the ADDBA request. */
sc->txq[qid].cur = sc->txq[qid].read = (ssn & 0xff);
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, qid << 8 | (ssn & 0xff));
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_RDPTR(qid), ssn);
/* Set scheduler window size and frame limit. */
iwn_mem_write(sc, sc->sched_base + IWN5000_SCHED_QUEUE_OFFSET(qid) + 4,
IWN_SCHED_LIMIT << 16 | IWN_SCHED_WINSZ);
/* Enable interrupts for the queue. */
iwn_prph_setbits(sc, IWN5000_SCHED_INTR_MASK, 1 << qid);
/* Mark the queue as active. */
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_STATUS(qid),
IWN5000_TXQ_STATUS_ACTIVE | iwn_tid2fifo[tid]);
}
static void
iwn5000_ampdu_tx_stop(struct iwn_softc *sc, int qid, uint8_t tid, uint16_t ssn)
{
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Stop TX scheduler while we're changing its configuration. */
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_STATUS(qid),
IWN5000_TXQ_STATUS_CHGACT);
/* Disable aggregation for the queue. */
iwn_prph_clrbits(sc, IWN5000_SCHED_AGGR_SEL, 1 << qid);
/* Set starting sequence number from the ADDBA request. */
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, qid << 8 | (ssn & 0xff));
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_RDPTR(qid), ssn);
/* Disable interrupts for the queue. */
iwn_prph_clrbits(sc, IWN5000_SCHED_INTR_MASK, 1 << qid);
/* Mark the queue as inactive. */
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_STATUS(qid),
IWN5000_TXQ_STATUS_INACTIVE | iwn_tid2fifo[tid]);
}
/*
* Query calibration tables from the initialization firmware. We do this
* only once at first boot. Called from a process context.
*/
static int
iwn5000_query_calibration(struct iwn_softc *sc)
{
struct iwn5000_calib_config cmd;
int error;
memset(&cmd, 0, sizeof cmd);
cmd.ucode.once.enable = htole32(0xffffffff);
cmd.ucode.once.start = htole32(0xffffffff);
cmd.ucode.once.send = htole32(0xffffffff);
cmd.ucode.flags = htole32(0xffffffff);
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "%s: sending calibration query\n",
__func__);
error = iwn_cmd(sc, IWN5000_CMD_CALIB_CONFIG, &cmd, sizeof cmd, 0);
if (error != 0)
return error;
/* Wait at most two seconds for calibration to complete. */
if (!(sc->sc_flags & IWN_FLAG_CALIB_DONE))
error = msleep(sc, &sc->sc_mtx, PCATCH, "iwncal", 2 * hz);
return error;
}
/*
* Send calibration results to the runtime firmware. These results were
* obtained on first boot from the initialization firmware.
*/
static int
iwn5000_send_calibration(struct iwn_softc *sc)
{
int idx, error;
for (idx = 0; idx < IWN5000_PHY_CALIB_MAX_RESULT; idx++) {
if (!(sc->base_params->calib_need & (1<<idx))) {
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"No need for calib %d\n",
idx);
continue; /* no need for this calib */
}
if (sc->calibcmd[idx].buf == NULL) {
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"Need calib idx %d but no data available\n",
idx);
continue;
}
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"send calibration result idx=%d len=%d\n", idx,
sc->calibcmd[idx].len);
error = iwn_cmd(sc, IWN_CMD_PHY_CALIB, sc->calibcmd[idx].buf,
sc->calibcmd[idx].len, 0);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not send calibration result, error %d\n",
__func__, error);
return error;
}
}
return 0;
}
static int
iwn5000_send_wimax_coex(struct iwn_softc *sc)
{
struct iwn5000_wimax_coex wimax;
#if 0
if (sc->hw_type == IWN_HW_REV_TYPE_6050) {
/* Enable WiMAX coexistence for combo adapters. */
wimax.flags =
IWN_WIMAX_COEX_ASSOC_WA_UNMASK |
IWN_WIMAX_COEX_UNASSOC_WA_UNMASK |
IWN_WIMAX_COEX_STA_TABLE_VALID |
IWN_WIMAX_COEX_ENABLE;
memcpy(wimax.events, iwn6050_wimax_events,
sizeof iwn6050_wimax_events);
} else
#endif
{
/* Disable WiMAX coexistence. */
wimax.flags = 0;
memset(wimax.events, 0, sizeof wimax.events);
}
DPRINTF(sc, IWN_DEBUG_RESET, "%s: Configuring WiMAX coexistence\n",
__func__);
return iwn_cmd(sc, IWN5000_CMD_WIMAX_COEX, &wimax, sizeof wimax, 0);
}
static int
iwn5000_crystal_calib(struct iwn_softc *sc)
{
struct iwn5000_phy_calib_crystal cmd;
memset(&cmd, 0, sizeof cmd);
cmd.code = IWN5000_PHY_CALIB_CRYSTAL;
cmd.ngroups = 1;
cmd.isvalid = 1;
cmd.cap_pin[0] = le32toh(sc->eeprom_crystal) & 0xff;
cmd.cap_pin[1] = (le32toh(sc->eeprom_crystal) >> 16) & 0xff;
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "sending crystal calibration %d, %d\n",
cmd.cap_pin[0], cmd.cap_pin[1]);
return iwn_cmd(sc, IWN_CMD_PHY_CALIB, &cmd, sizeof cmd, 0);
}
static int
iwn5000_temp_offset_calib(struct iwn_softc *sc)
{
struct iwn5000_phy_calib_temp_offset cmd;
memset(&cmd, 0, sizeof cmd);
cmd.code = IWN5000_PHY_CALIB_TEMP_OFFSET;
cmd.ngroups = 1;
cmd.isvalid = 1;
if (sc->eeprom_temp != 0)
cmd.offset = htole16(sc->eeprom_temp);
else
cmd.offset = htole16(IWN_DEFAULT_TEMP_OFFSET);
DPRINTF(sc, IWN_DEBUG_CALIBRATE, "setting radio sensor offset to %d\n",
le16toh(cmd.offset));
return iwn_cmd(sc, IWN_CMD_PHY_CALIB, &cmd, sizeof cmd, 0);
}
static int
iwn5000_temp_offset_calibv2(struct iwn_softc *sc)
{
struct iwn5000_phy_calib_temp_offsetv2 cmd;
memset(&cmd, 0, sizeof cmd);
cmd.code = IWN5000_PHY_CALIB_TEMP_OFFSET;
cmd.ngroups = 1;
cmd.isvalid = 1;
if (sc->eeprom_temp != 0) {
cmd.offset_low = htole16(sc->eeprom_temp);
cmd.offset_high = htole16(sc->eeprom_temp_high);
} else {
cmd.offset_low = htole16(IWN_DEFAULT_TEMP_OFFSET);
cmd.offset_high = htole16(IWN_DEFAULT_TEMP_OFFSET);
}
cmd.burnt_voltage_ref = htole16(sc->eeprom_voltage);
DPRINTF(sc, IWN_DEBUG_CALIBRATE,
"setting radio sensor low offset to %d, high offset to %d, voltage to %d\n",
le16toh(cmd.offset_low),
le16toh(cmd.offset_high),
le16toh(cmd.burnt_voltage_ref));
return iwn_cmd(sc, IWN_CMD_PHY_CALIB, &cmd, sizeof cmd, 0);
}
/*
* This function is called after the runtime firmware notifies us of its
* readiness (called in a process context).
*/
static int
iwn4965_post_alive(struct iwn_softc *sc)
{
int error, qid;
if ((error = iwn_nic_lock(sc)) != 0)
return error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Clear TX scheduler state in SRAM. */
sc->sched_base = iwn_prph_read(sc, IWN_SCHED_SRAM_ADDR);
iwn_mem_set_region_4(sc, sc->sched_base + IWN4965_SCHED_CTX_OFF, 0,
IWN4965_SCHED_CTX_LEN / sizeof (uint32_t));
/* Set physical address of TX scheduler rings (1KB aligned). */
iwn_prph_write(sc, IWN4965_SCHED_DRAM_ADDR, sc->sched_dma.paddr >> 10);
IWN_SETBITS(sc, IWN_FH_TX_CHICKEN, IWN_FH_TX_CHICKEN_SCHED_RETRY);
/* Disable chain mode for all our 16 queues. */
iwn_prph_write(sc, IWN4965_SCHED_QCHAIN_SEL, 0);
for (qid = 0; qid < IWN4965_NTXQUEUES; qid++) {
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_RDPTR(qid), 0);
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, qid << 8 | 0);
/* Set scheduler window size. */
iwn_mem_write(sc, sc->sched_base +
IWN4965_SCHED_QUEUE_OFFSET(qid), IWN_SCHED_WINSZ);
/* Set scheduler frame limit. */
iwn_mem_write(sc, sc->sched_base +
IWN4965_SCHED_QUEUE_OFFSET(qid) + 4,
IWN_SCHED_LIMIT << 16);
}
/* Enable interrupts for all our 16 queues. */
iwn_prph_write(sc, IWN4965_SCHED_INTR_MASK, 0xffff);
/* Identify TX FIFO rings (0-7). */
iwn_prph_write(sc, IWN4965_SCHED_TXFACT, 0xff);
/* Mark TX rings (4 EDCA + cmd + 2 HCCA) as active. */
for (qid = 0; qid < 7; qid++) {
static uint8_t qid2fifo[] = { 3, 2, 1, 0, 4, 5, 6 };
iwn_prph_write(sc, IWN4965_SCHED_QUEUE_STATUS(qid),
IWN4965_TXQ_STATUS_ACTIVE | qid2fifo[qid] << 1);
}
iwn_nic_unlock(sc);
return 0;
}
/*
* This function is called after the initialization or runtime firmware
* notifies us of its readiness (called in a process context).
*/
static int
iwn5000_post_alive(struct iwn_softc *sc)
{
int error, qid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Switch to using ICT interrupt mode. */
iwn5000_ict_reset(sc);
if ((error = iwn_nic_lock(sc)) != 0){
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s end in error\n", __func__);
return error;
}
/* Clear TX scheduler state in SRAM. */
sc->sched_base = iwn_prph_read(sc, IWN_SCHED_SRAM_ADDR);
iwn_mem_set_region_4(sc, sc->sched_base + IWN5000_SCHED_CTX_OFF, 0,
IWN5000_SCHED_CTX_LEN / sizeof (uint32_t));
/* Set physical address of TX scheduler rings (1KB aligned). */
iwn_prph_write(sc, IWN5000_SCHED_DRAM_ADDR, sc->sched_dma.paddr >> 10);
IWN_SETBITS(sc, IWN_FH_TX_CHICKEN, IWN_FH_TX_CHICKEN_SCHED_RETRY);
/* Enable chain mode for all queues, except command queue. */
if (sc->sc_flags & IWN_FLAG_PAN_SUPPORT)
iwn_prph_write(sc, IWN5000_SCHED_QCHAIN_SEL, 0xfffdf);
else
iwn_prph_write(sc, IWN5000_SCHED_QCHAIN_SEL, 0xfffef);
iwn_prph_write(sc, IWN5000_SCHED_AGGR_SEL, 0);
for (qid = 0; qid < IWN5000_NTXQUEUES; qid++) {
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_RDPTR(qid), 0);
IWN_WRITE(sc, IWN_HBUS_TARG_WRPTR, qid << 8 | 0);
iwn_mem_write(sc, sc->sched_base +
IWN5000_SCHED_QUEUE_OFFSET(qid), 0);
/* Set scheduler window size and frame limit. */
iwn_mem_write(sc, sc->sched_base +
IWN5000_SCHED_QUEUE_OFFSET(qid) + 4,
IWN_SCHED_LIMIT << 16 | IWN_SCHED_WINSZ);
}
/* Enable interrupts for all our 20 queues. */
iwn_prph_write(sc, IWN5000_SCHED_INTR_MASK, 0xfffff);
/* Identify TX FIFO rings (0-7). */
iwn_prph_write(sc, IWN5000_SCHED_TXFACT, 0xff);
/* Mark TX rings (4 EDCA + cmd + 2 HCCA) as active. */
if (sc->sc_flags & IWN_FLAG_PAN_SUPPORT) {
/* Mark TX rings as active. */
for (qid = 0; qid < 11; qid++) {
static uint8_t qid2fifo[] = { 3, 2, 1, 0, 0, 4, 2, 5, 4, 7, 5 };
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_STATUS(qid),
IWN5000_TXQ_STATUS_ACTIVE | qid2fifo[qid]);
}
} else {
/* Mark TX rings (4 EDCA + cmd + 2 HCCA) as active. */
for (qid = 0; qid < 7; qid++) {
static uint8_t qid2fifo[] = { 3, 2, 1, 0, 7, 5, 6 };
iwn_prph_write(sc, IWN5000_SCHED_QUEUE_STATUS(qid),
IWN5000_TXQ_STATUS_ACTIVE | qid2fifo[qid]);
}
}
iwn_nic_unlock(sc);
/* Configure WiMAX coexistence for combo adapters. */
error = iwn5000_send_wimax_coex(sc);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not configure WiMAX coexistence, error %d\n",
__func__, error);
return error;
}
if (sc->hw_type != IWN_HW_REV_TYPE_5150) {
/* Perform crystal calibration. */
error = iwn5000_crystal_calib(sc);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: crystal calibration failed, error %d\n",
__func__, error);
return error;
}
}
if (!(sc->sc_flags & IWN_FLAG_CALIB_DONE)) {
/* Query calibration from the initialization firmware. */
if ((error = iwn5000_query_calibration(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not query calibration, error %d\n",
__func__, error);
return error;
}
/*
* We have the calibration results now; reboot with the
* runtime firmware (calling ourselves recursively!).
*/
iwn_hw_stop(sc);
error = iwn_hw_init(sc);
} else {
/* Send calibration results to runtime firmware. */
error = iwn5000_send_calibration(sc);
}
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return error;
}
/*
* The firmware boot code is small and is intended to be copied directly into
* the NIC internal memory (no DMA transfer).
*/
static int
iwn4965_load_bootcode(struct iwn_softc *sc, const uint8_t *ucode, int size)
{
int error, ntries;
size /= sizeof (uint32_t);
if ((error = iwn_nic_lock(sc)) != 0)
return error;
/* Copy microcode image into NIC memory. */
iwn_prph_write_region_4(sc, IWN_BSM_SRAM_BASE,
(const uint32_t *)ucode, size);
iwn_prph_write(sc, IWN_BSM_WR_MEM_SRC, 0);
iwn_prph_write(sc, IWN_BSM_WR_MEM_DST, IWN_FW_TEXT_BASE);
iwn_prph_write(sc, IWN_BSM_WR_DWCOUNT, size);
/* Start boot load now. */
iwn_prph_write(sc, IWN_BSM_WR_CTRL, IWN_BSM_WR_CTRL_START);
/* Wait for transfer to complete. */
for (ntries = 0; ntries < 1000; ntries++) {
if (!(iwn_prph_read(sc, IWN_BSM_WR_CTRL) &
IWN_BSM_WR_CTRL_START))
break;
DELAY(10);
}
if (ntries == 1000) {
device_printf(sc->sc_dev, "%s: could not load boot firmware\n",
__func__);
iwn_nic_unlock(sc);
return ETIMEDOUT;
}
/* Enable boot after power up. */
iwn_prph_write(sc, IWN_BSM_WR_CTRL, IWN_BSM_WR_CTRL_START_EN);
iwn_nic_unlock(sc);
return 0;
}
static int
iwn4965_load_firmware(struct iwn_softc *sc)
{
struct iwn_fw_info *fw = &sc->fw;
struct iwn_dma_info *dma = &sc->fw_dma;
int error;
/* Copy initialization sections into pre-allocated DMA-safe memory. */
memcpy(dma->vaddr, fw->init.data, fw->init.datasz);
bus_dmamap_sync(dma->tag, dma->map, BUS_DMASYNC_PREWRITE);
memcpy(dma->vaddr + IWN4965_FW_DATA_MAXSZ,
fw->init.text, fw->init.textsz);
bus_dmamap_sync(dma->tag, dma->map, BUS_DMASYNC_PREWRITE);
/* Tell adapter where to find initialization sections. */
if ((error = iwn_nic_lock(sc)) != 0)
return error;
iwn_prph_write(sc, IWN_BSM_DRAM_DATA_ADDR, dma->paddr >> 4);
iwn_prph_write(sc, IWN_BSM_DRAM_DATA_SIZE, fw->init.datasz);
iwn_prph_write(sc, IWN_BSM_DRAM_TEXT_ADDR,
(dma->paddr + IWN4965_FW_DATA_MAXSZ) >> 4);
iwn_prph_write(sc, IWN_BSM_DRAM_TEXT_SIZE, fw->init.textsz);
iwn_nic_unlock(sc);
/* Load firmware boot code. */
error = iwn4965_load_bootcode(sc, fw->boot.text, fw->boot.textsz);
if (error != 0) {
device_printf(sc->sc_dev, "%s: could not load boot firmware\n",
__func__);
return error;
}
/* Now press "execute". */
IWN_WRITE(sc, IWN_RESET, 0);
/* Wait at most one second for first alive notification. */
if ((error = msleep(sc, &sc->sc_mtx, PCATCH, "iwninit", hz)) != 0) {
device_printf(sc->sc_dev,
"%s: timeout waiting for adapter to initialize, error %d\n",
__func__, error);
return error;
}
/* Retrieve current temperature for initial TX power calibration. */
sc->rawtemp = sc->ucode_info.temp[3].chan20MHz;
sc->temp = iwn4965_get_temperature(sc);
/* Copy runtime sections into pre-allocated DMA-safe memory. */
memcpy(dma->vaddr, fw->main.data, fw->main.datasz);
bus_dmamap_sync(dma->tag, dma->map, BUS_DMASYNC_PREWRITE);
memcpy(dma->vaddr + IWN4965_FW_DATA_MAXSZ,
fw->main.text, fw->main.textsz);
bus_dmamap_sync(dma->tag, dma->map, BUS_DMASYNC_PREWRITE);
/* Tell adapter where to find runtime sections. */
if ((error = iwn_nic_lock(sc)) != 0)
return error;
iwn_prph_write(sc, IWN_BSM_DRAM_DATA_ADDR, dma->paddr >> 4);
iwn_prph_write(sc, IWN_BSM_DRAM_DATA_SIZE, fw->main.datasz);
iwn_prph_write(sc, IWN_BSM_DRAM_TEXT_ADDR,
(dma->paddr + IWN4965_FW_DATA_MAXSZ) >> 4);
iwn_prph_write(sc, IWN_BSM_DRAM_TEXT_SIZE,
IWN_FW_UPDATED | fw->main.textsz);
iwn_nic_unlock(sc);
return 0;
}
static int
iwn5000_load_firmware_section(struct iwn_softc *sc, uint32_t dst,
const uint8_t *section, int size)
{
struct iwn_dma_info *dma = &sc->fw_dma;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Copy firmware section into pre-allocated DMA-safe memory. */
memcpy(dma->vaddr, section, size);
bus_dmamap_sync(dma->tag, dma->map, BUS_DMASYNC_PREWRITE);
if ((error = iwn_nic_lock(sc)) != 0)
return error;
IWN_WRITE(sc, IWN_FH_TX_CONFIG(IWN_SRVC_DMACHNL),
IWN_FH_TX_CONFIG_DMA_PAUSE);
IWN_WRITE(sc, IWN_FH_SRAM_ADDR(IWN_SRVC_DMACHNL), dst);
IWN_WRITE(sc, IWN_FH_TFBD_CTRL0(IWN_SRVC_DMACHNL),
IWN_LOADDR(dma->paddr));
IWN_WRITE(sc, IWN_FH_TFBD_CTRL1(IWN_SRVC_DMACHNL),
IWN_HIADDR(dma->paddr) << 28 | size);
IWN_WRITE(sc, IWN_FH_TXBUF_STATUS(IWN_SRVC_DMACHNL),
IWN_FH_TXBUF_STATUS_TBNUM(1) |
IWN_FH_TXBUF_STATUS_TBIDX(1) |
IWN_FH_TXBUF_STATUS_TFBD_VALID);
/* Kick Flow Handler to start DMA transfer. */
IWN_WRITE(sc, IWN_FH_TX_CONFIG(IWN_SRVC_DMACHNL),
IWN_FH_TX_CONFIG_DMA_ENA | IWN_FH_TX_CONFIG_CIRQ_HOST_ENDTFD);
iwn_nic_unlock(sc);
/* Wait at most five seconds for FH DMA transfer to complete. */
return msleep(sc, &sc->sc_mtx, PCATCH, "iwninit", 5 * hz);
}
static int
iwn5000_load_firmware(struct iwn_softc *sc)
{
struct iwn_fw_part *fw;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Load the initialization firmware on first boot only. */
fw = (sc->sc_flags & IWN_FLAG_CALIB_DONE) ?
&sc->fw.main : &sc->fw.init;
error = iwn5000_load_firmware_section(sc, IWN_FW_TEXT_BASE,
fw->text, fw->textsz);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not load firmware %s section, error %d\n",
__func__, ".text", error);
return error;
}
error = iwn5000_load_firmware_section(sc, IWN_FW_DATA_BASE,
fw->data, fw->datasz);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not load firmware %s section, error %d\n",
__func__, ".data", error);
return error;
}
/* Now press "execute". */
IWN_WRITE(sc, IWN_RESET, 0);
return 0;
}
/*
* Extract text and data sections from a legacy firmware image.
*/
static int
iwn_read_firmware_leg(struct iwn_softc *sc, struct iwn_fw_info *fw)
{
const uint32_t *ptr;
size_t hdrlen = 24;
uint32_t rev;
ptr = (const uint32_t *)fw->data;
rev = le32toh(*ptr++);
sc->ucode_rev = rev;
/* Check firmware API version. */
if (IWN_FW_API(rev) <= 1) {
device_printf(sc->sc_dev,
"%s: bad firmware, need API version >=2\n", __func__);
return EINVAL;
}
if (IWN_FW_API(rev) >= 3) {
/* Skip build number (version 2 header). */
hdrlen += 4;
ptr++;
}
if (fw->size < hdrlen) {
device_printf(sc->sc_dev, "%s: firmware too short: %zu bytes\n",
__func__, fw->size);
return EINVAL;
}
fw->main.textsz = le32toh(*ptr++);
fw->main.datasz = le32toh(*ptr++);
fw->init.textsz = le32toh(*ptr++);
fw->init.datasz = le32toh(*ptr++);
fw->boot.textsz = le32toh(*ptr++);
/* Check that all firmware sections fit. */
if (fw->size < hdrlen + fw->main.textsz + fw->main.datasz +
fw->init.textsz + fw->init.datasz + fw->boot.textsz) {
device_printf(sc->sc_dev, "%s: firmware too short: %zu bytes\n",
__func__, fw->size);
return EINVAL;
}
/* Get pointers to firmware sections. */
fw->main.text = (const uint8_t *)ptr;
fw->main.data = fw->main.text + fw->main.textsz;
fw->init.text = fw->main.data + fw->main.datasz;
fw->init.data = fw->init.text + fw->init.textsz;
fw->boot.text = fw->init.data + fw->init.datasz;
return 0;
}
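The section extraction above follows a common pattern: read fixed-width little-endian size fields, verify the whole image is large enough before trusting any of them, then derive section pointers by accumulation. A simplified sketch of that bounds-check pattern, using a hypothetical two-section image layout rather than the driver's real header:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical image layout: [8-byte header][a bytes][b bytes], where the
 * header holds two little-endian 32-bit section sizes. Returns 0 and fills
 * in the section pointers, or -1 if the image is truncated. */
static int parse_two_sections(const uint8_t *img, size_t size,
    const uint8_t **a, size_t *asz, const uint8_t **b, size_t *bsz)
{
	const size_t hdrlen = 8;

	if (size < hdrlen)
		return -1;
	/* Little-endian 32-bit loads, independent of host byte order. */
	*asz = (size_t)img[0] | (size_t)img[1] << 8 |
	    (size_t)img[2] << 16 | (size_t)img[3] << 24;
	*bsz = (size_t)img[4] | (size_t)img[5] << 8 |
	    (size_t)img[6] << 16 | (size_t)img[7] << 24;
	/* Check that all sections fit; subtract rather than add so an
	 * oversized length field cannot overflow the comparison. */
	if (*asz > size - hdrlen)
		return -1;
	if (*bsz > size - hdrlen - *asz)
		return -1;
	*a = img + hdrlen;
	*b = *a + *asz;
	return 0;
}
```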
/*
* Extract text and data sections from a TLV firmware image.
*/
static int
iwn_read_firmware_tlv(struct iwn_softc *sc, struct iwn_fw_info *fw,
uint16_t alt)
{
const struct iwn_fw_tlv_hdr *hdr;
const struct iwn_fw_tlv *tlv;
const uint8_t *ptr, *end;
uint64_t altmask;
uint32_t len, tmp;
if (fw->size < sizeof (*hdr)) {
device_printf(sc->sc_dev, "%s: firmware too short: %zu bytes\n",
__func__, fw->size);
return EINVAL;
}
hdr = (const struct iwn_fw_tlv_hdr *)fw->data;
if (hdr->signature != htole32(IWN_FW_SIGNATURE)) {
device_printf(sc->sc_dev, "%s: bad firmware signature 0x%08x\n",
__func__, le32toh(hdr->signature));
return EINVAL;
}
DPRINTF(sc, IWN_DEBUG_RESET, "FW: \"%.64s\", build 0x%x\n", hdr->descr,
le32toh(hdr->build));
sc->ucode_rev = le32toh(hdr->rev);
/*
* Select the closest supported alternative that is less than
* or equal to the specified one.
*/
altmask = le64toh(hdr->altmask);
while (alt > 0 && !(altmask & (1ULL << alt)))
alt--; /* Downgrade. */
DPRINTF(sc, IWN_DEBUG_RESET, "using alternative %d\n", alt);
ptr = (const uint8_t *)(hdr + 1);
end = (const uint8_t *)(fw->data + fw->size);
/* Parse type-length-value fields. */
while (ptr + sizeof (*tlv) <= end) {
tlv = (const struct iwn_fw_tlv *)ptr;
len = le32toh(tlv->len);
ptr += sizeof (*tlv);
if (ptr + len > end) {
device_printf(sc->sc_dev,
"%s: firmware too short: %zu bytes\n", __func__,
fw->size);
return EINVAL;
}
/* Skip other alternatives. */
if (tlv->alt != 0 && tlv->alt != htole16(alt))
goto next;
switch (le16toh(tlv->type)) {
case IWN_FW_TLV_MAIN_TEXT:
fw->main.text = ptr;
fw->main.textsz = len;
break;
case IWN_FW_TLV_MAIN_DATA:
fw->main.data = ptr;
fw->main.datasz = len;
break;
case IWN_FW_TLV_INIT_TEXT:
fw->init.text = ptr;
fw->init.textsz = len;
break;
case IWN_FW_TLV_INIT_DATA:
fw->init.data = ptr;
fw->init.datasz = len;
break;
case IWN_FW_TLV_BOOT_TEXT:
fw->boot.text = ptr;
fw->boot.textsz = len;
break;
case IWN_FW_TLV_ENH_SENS:
if (!len)
sc->sc_flags |= IWN_FLAG_ENH_SENS;
break;
case IWN_FW_TLV_PHY_CALIB:
tmp = le32toh(*ptr);
if (tmp < 253) {
sc->reset_noise_gain = tmp;
sc->noise_gain = tmp + 1;
}
break;
case IWN_FW_TLV_PAN:
sc->sc_flags |= IWN_FLAG_PAN_SUPPORT;
DPRINTF(sc, IWN_DEBUG_RESET,
"PAN Support found: %d\n", 1);
break;
case IWN_FW_TLV_FLAGS:
if (len < sizeof(uint32_t))
break;
if (len % sizeof(uint32_t))
break;
sc->tlv_feature_flags = le32toh(*ptr);
DPRINTF(sc, IWN_DEBUG_RESET,
"%s: feature: 0x%08x\n",
__func__,
sc->tlv_feature_flags);
break;
case IWN_FW_TLV_PBREQ_MAXLEN:
case IWN_FW_TLV_RUNT_EVTLOG_PTR:
case IWN_FW_TLV_RUNT_EVTLOG_SIZE:
case IWN_FW_TLV_RUNT_ERRLOG_PTR:
case IWN_FW_TLV_INIT_EVTLOG_PTR:
case IWN_FW_TLV_INIT_EVTLOG_SIZE:
case IWN_FW_TLV_INIT_ERRLOG_PTR:
case IWN_FW_TLV_WOWLAN_INST:
case IWN_FW_TLV_WOWLAN_DATA:
DPRINTF(sc, IWN_DEBUG_RESET,
"TLV type %d recognized but not handled\n",
le16toh(tlv->type));
break;
default:
DPRINTF(sc, IWN_DEBUG_RESET,
"TLV type %d not handled\n", le16toh(tlv->type));
break;
}
next: /* TLV fields are 32-bit aligned. */
ptr += (len + 3) & ~3;
}
return 0;
}
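The TLV walk above advances with `ptr += (len + 3) & ~3` so that every record starts on a 32-bit boundary. A stripped-down version of that loop over a byte buffer, with a hypothetical record layout (16-bit type, 16-bit length, then payload) rather than the driver's struct iwn_fw_tlv:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Count well-formed TLV records in buf, stopping at the first truncated
 * record. Record layout (little-endian): uint16 type, uint16 len, then
 * len payload bytes, padded so the next record is 4-byte aligned. */
static int count_tlvs(const uint8_t *buf, size_t size)
{
	const uint8_t *ptr = buf, *end = buf + size;
	int count = 0;

	while ((size_t)(end - ptr) >= 4) {
		uint32_t len = (uint32_t)ptr[2] | (uint32_t)ptr[3] << 8;

		ptr += 4;			/* skip the record header */
		if (len > (size_t)(end - ptr))
			break;			/* truncated record */
		count++;
		/* TLV fields are 32-bit aligned, as in the driver. */
		ptr += (len + 3) & ~(uint32_t)3;
	}
	return count;
}
```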
static int
iwn_read_firmware(struct iwn_softc *sc)
{
struct iwn_fw_info *fw = &sc->fw;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
IWN_UNLOCK(sc);
memset(fw, 0, sizeof (*fw));
/* Read firmware image from filesystem. */
sc->fw_fp = firmware_get(sc->fwname);
if (sc->fw_fp == NULL) {
device_printf(sc->sc_dev, "%s: could not read firmware %s\n",
__func__, sc->fwname);
IWN_LOCK(sc);
return EINVAL;
}
IWN_LOCK(sc);
fw->size = sc->fw_fp->datasize;
fw->data = (const uint8_t *)sc->fw_fp->data;
if (fw->size < sizeof (uint32_t)) {
device_printf(sc->sc_dev, "%s: firmware too short: %zu bytes\n",
__func__, fw->size);
error = EINVAL;
goto fail;
}
/* Retrieve text and data sections. */
if (*(const uint32_t *)fw->data != 0) /* Legacy image. */
error = iwn_read_firmware_leg(sc, fw);
else
error = iwn_read_firmware_tlv(sc, fw, 1);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not read firmware sections, error %d\n",
__func__, error);
goto fail;
}
device_printf(sc->sc_dev, "%s: ucode rev=0x%08x\n", __func__, sc->ucode_rev);
/* Make sure text and data sections fit in hardware memory. */
if (fw->main.textsz > sc->fw_text_maxsz ||
fw->main.datasz > sc->fw_data_maxsz ||
fw->init.textsz > sc->fw_text_maxsz ||
fw->init.datasz > sc->fw_data_maxsz ||
fw->boot.textsz > IWN_FW_BOOT_TEXT_MAXSZ ||
(fw->boot.textsz & 3) != 0) {
device_printf(sc->sc_dev, "%s: firmware sections too large\n",
__func__);
error = EINVAL;
goto fail;
}
/* We can proceed with loading the firmware. */
return 0;
fail: iwn_unload_firmware(sc);
return error;
}
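iwn_read_firmware() distinguishes the two image formats by the first 32-bit word: legacy images start with a nonzero version word, while TLV images start with a zero word followed by a signature. A small sketch of that dispatch test (the enum and function names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum fw_format { FW_TOO_SHORT, FW_LEGACY, FW_TLV };

/* Classify a firmware image by its first 32-bit word -- the same test
 * the driver applies before choosing a parser. */
static enum fw_format classify_fw(const uint8_t *data, size_t size)
{
	uint32_t first;

	if (size < sizeof(first))
		return FW_TOO_SHORT;
	first = (uint32_t)data[0] | (uint32_t)data[1] << 8 |
	    (uint32_t)data[2] << 16 | (uint32_t)data[3] << 24;
	return (first != 0) ? FW_LEGACY : FW_TLV;
}
```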
static void
iwn_unload_firmware(struct iwn_softc *sc)
{
firmware_put(sc->fw_fp, FIRMWARE_UNLOAD);
sc->fw_fp = NULL;
}
static int
iwn_clock_wait(struct iwn_softc *sc)
{
int ntries;
/* Set "initialization complete" bit. */
IWN_SETBITS(sc, IWN_GP_CNTRL, IWN_GP_CNTRL_INIT_DONE);
/* Wait for clock stabilization. */
for (ntries = 0; ntries < 2500; ntries++) {
if (IWN_READ(sc, IWN_GP_CNTRL) & IWN_GP_CNTRL_MAC_CLOCK_READY)
return 0;
DELAY(10);
}
device_printf(sc->sc_dev,
"%s: timeout waiting for clock stabilization\n", __func__);
return ETIMEDOUT;
}
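iwn_clock_wait() is one instance of a busy-poll pattern used throughout this driver: read a status bit, delay briefly, and return ETIMEDOUT once the retry budget is exhausted. A generic sketch of that shape (the predicate callback is a hypothetical stand-in for the driver's register read):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Poll `ready` up to max_tries times, calling delay_fn between attempts.
 * Returns 0 once ready() reports nonzero, ETIMEDOUT otherwise -- the same
 * contract iwn_clock_wait() implements against IWN_GP_CNTRL. */
static int poll_until(int (*ready)(void *), void *arg,
    int max_tries, void (*delay_fn)(void))
{
	int ntries;

	for (ntries = 0; ntries < max_tries; ntries++) {
		if (ready(arg))
			return 0;
		if (delay_fn != NULL)
			delay_fn();
	}
	return ETIMEDOUT;
}

/* Example predicates: one becomes ready on the third call, one never does. */
static int calls_left = 3;
static int counting_ready(void *arg) { (void)arg; return --calls_left == 0; }
static int never_ready(void *arg)    { (void)arg; return 0; }
```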
static int
iwn_apm_init(struct iwn_softc *sc)
{
uint32_t reg;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Disable L0s exit timer (NMI bug workaround). */
IWN_SETBITS(sc, IWN_GIO_CHICKEN, IWN_GIO_CHICKEN_DIS_L0S_TIMER);
/* Don't wait for ICH L0s (ICH bug workaround). */
IWN_SETBITS(sc, IWN_GIO_CHICKEN, IWN_GIO_CHICKEN_L1A_NO_L0S_RX);
/* Set FH wait threshold to max (HW bug under stress workaround). */
IWN_SETBITS(sc, IWN_DBG_HPET_MEM, 0xffff0000);
/* Enable HAP INTA to move adapter from L1a to L0s. */
IWN_SETBITS(sc, IWN_HW_IF_CONFIG, IWN_HW_IF_CONFIG_HAP_WAKE_L1A);
/* Retrieve PCIe Active State Power Management (ASPM). */
reg = pci_read_config(sc->sc_dev, sc->sc_cap_off + PCIER_LINK_CTL, 4);
/* Workaround for HW instability in PCIe L0->L0s->L1 transition. */
if (reg & PCIEM_LINK_CTL_ASPMC_L1) /* L1 Entry enabled. */
IWN_SETBITS(sc, IWN_GIO, IWN_GIO_L0S_ENA);
else
IWN_CLRBITS(sc, IWN_GIO, IWN_GIO_L0S_ENA);
if (sc->base_params->pll_cfg_val)
IWN_SETBITS(sc, IWN_ANA_PLL, sc->base_params->pll_cfg_val);
/* Wait for clock stabilization before accessing prph. */
if ((error = iwn_clock_wait(sc)) != 0)
return error;
if ((error = iwn_nic_lock(sc)) != 0)
return error;
if (sc->hw_type == IWN_HW_REV_TYPE_4965) {
/* Enable DMA and BSM (Bootstrap State Machine). */
iwn_prph_write(sc, IWN_APMG_CLK_EN,
IWN_APMG_CLK_CTRL_DMA_CLK_RQT |
IWN_APMG_CLK_CTRL_BSM_CLK_RQT);
} else {
/* Enable DMA. */
iwn_prph_write(sc, IWN_APMG_CLK_EN,
IWN_APMG_CLK_CTRL_DMA_CLK_RQT);
}
DELAY(20);
/* Disable L1-Active. */
iwn_prph_setbits(sc, IWN_APMG_PCI_STT, IWN_APMG_PCI_STT_L1A_DIS);
iwn_nic_unlock(sc);
return 0;
}
static void
iwn_apm_stop_master(struct iwn_softc *sc)
{
int ntries;
/* Stop busmaster DMA activity. */
IWN_SETBITS(sc, IWN_RESET, IWN_RESET_STOP_MASTER);
for (ntries = 0; ntries < 100; ntries++) {
if (IWN_READ(sc, IWN_RESET) & IWN_RESET_MASTER_DISABLED)
return;
DELAY(10);
}
device_printf(sc->sc_dev, "%s: timeout waiting for master\n", __func__);
}
static void
iwn_apm_stop(struct iwn_softc *sc)
{
iwn_apm_stop_master(sc);
/* Reset the entire device. */
IWN_SETBITS(sc, IWN_RESET, IWN_RESET_SW);
DELAY(10);
/* Clear "initialization complete" bit. */
IWN_CLRBITS(sc, IWN_GP_CNTRL, IWN_GP_CNTRL_INIT_DONE);
}
static int
iwn4965_nic_config(struct iwn_softc *sc)
{
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
if (IWN_RFCFG_TYPE(sc->rfcfg) == 1) {
/*
* I don't believe this to be correct but this is what the
* vendor driver is doing. Probably the bits should not be
* shifted in IWN_RFCFG_*.
*/
IWN_SETBITS(sc, IWN_HW_IF_CONFIG,
IWN_RFCFG_TYPE(sc->rfcfg) |
IWN_RFCFG_STEP(sc->rfcfg) |
IWN_RFCFG_DASH(sc->rfcfg));
}
IWN_SETBITS(sc, IWN_HW_IF_CONFIG,
IWN_HW_IF_CONFIG_RADIO_SI | IWN_HW_IF_CONFIG_MAC_SI);
return 0;
}
static int
iwn5000_nic_config(struct iwn_softc *sc)
{
uint32_t tmp;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
if (IWN_RFCFG_TYPE(sc->rfcfg) < 3) {
IWN_SETBITS(sc, IWN_HW_IF_CONFIG,
IWN_RFCFG_TYPE(sc->rfcfg) |
IWN_RFCFG_STEP(sc->rfcfg) |
IWN_RFCFG_DASH(sc->rfcfg));
}
IWN_SETBITS(sc, IWN_HW_IF_CONFIG,
IWN_HW_IF_CONFIG_RADIO_SI | IWN_HW_IF_CONFIG_MAC_SI);
if ((error = iwn_nic_lock(sc)) != 0)
return error;
iwn_prph_setbits(sc, IWN_APMG_PS, IWN_APMG_PS_EARLY_PWROFF_DIS);
if (sc->hw_type == IWN_HW_REV_TYPE_1000) {
/*
* Select first Switching Voltage Regulator (1.32V) to
* solve a stability issue related to noisy DC2DC line
* in the silicon of 1000 Series.
*/
tmp = iwn_prph_read(sc, IWN_APMG_DIGITAL_SVR);
tmp &= ~IWN_APMG_DIGITAL_SVR_VOLTAGE_MASK;
tmp |= IWN_APMG_DIGITAL_SVR_VOLTAGE_1_32;
iwn_prph_write(sc, IWN_APMG_DIGITAL_SVR, tmp);
}
iwn_nic_unlock(sc);
if (sc->sc_flags & IWN_FLAG_INTERNAL_PA) {
/* Use internal power amplifier only. */
IWN_WRITE(sc, IWN_GP_DRIVER, IWN_GP_DRIVER_RADIO_2X2_IPA);
}
if (sc->base_params->additional_nic_config && sc->calib_ver >= 6) {
/* Indicate that ROM calibration version is >=6. */
IWN_SETBITS(sc, IWN_GP_DRIVER, IWN_GP_DRIVER_CALIB_VER6);
}
if (sc->base_params->additional_gp_drv_bit)
IWN_SETBITS(sc, IWN_GP_DRIVER,
sc->base_params->additional_gp_drv_bit);
return 0;
}
/*
* Take NIC ownership over Intel Active Management Technology (AMT).
*/
static int
iwn_hw_prepare(struct iwn_softc *sc)
{
int ntries;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
/* Check if hardware is ready. */
IWN_SETBITS(sc, IWN_HW_IF_CONFIG, IWN_HW_IF_CONFIG_NIC_READY);
for (ntries = 0; ntries < 5; ntries++) {
if (IWN_READ(sc, IWN_HW_IF_CONFIG) &
IWN_HW_IF_CONFIG_NIC_READY)
return 0;
DELAY(10);
}
/* Hardware not ready, force into ready state. */
IWN_SETBITS(sc, IWN_HW_IF_CONFIG, IWN_HW_IF_CONFIG_PREPARE);
for (ntries = 0; ntries < 15000; ntries++) {
if (!(IWN_READ(sc, IWN_HW_IF_CONFIG) &
IWN_HW_IF_CONFIG_PREPARE_DONE))
break;
DELAY(10);
}
if (ntries == 15000)
return ETIMEDOUT;
/* Hardware should be ready now. */
IWN_SETBITS(sc, IWN_HW_IF_CONFIG, IWN_HW_IF_CONFIG_NIC_READY);
for (ntries = 0; ntries < 5; ntries++) {
if (IWN_READ(sc, IWN_HW_IF_CONFIG) &
IWN_HW_IF_CONFIG_NIC_READY)
return 0;
DELAY(10);
}
return ETIMEDOUT;
}
static int
iwn_hw_init(struct iwn_softc *sc)
{
struct iwn_ops *ops = &sc->ops;
int error, chnl, qid;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
/* Clear pending interrupts. */
IWN_WRITE(sc, IWN_INT, 0xffffffff);
if ((error = iwn_apm_init(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not power ON adapter, error %d\n", __func__,
error);
return error;
}
/* Select VMAIN power source. */
if ((error = iwn_nic_lock(sc)) != 0)
return error;
iwn_prph_clrbits(sc, IWN_APMG_PS, IWN_APMG_PS_PWR_SRC_MASK);
iwn_nic_unlock(sc);
/* Perform adapter-specific initialization. */
if ((error = ops->nic_config(sc)) != 0)
return error;
/* Initialize RX ring. */
if ((error = iwn_nic_lock(sc)) != 0)
return error;
IWN_WRITE(sc, IWN_FH_RX_CONFIG, 0);
IWN_WRITE(sc, IWN_FH_RX_WPTR, 0);
/* Set physical address of RX ring (256-byte aligned). */
IWN_WRITE(sc, IWN_FH_RX_BASE, sc->rxq.desc_dma.paddr >> 8);
/* Set physical address of RX status (16-byte aligned). */
IWN_WRITE(sc, IWN_FH_STATUS_WPTR, sc->rxq.stat_dma.paddr >> 4);
/* Enable RX. */
IWN_WRITE(sc, IWN_FH_RX_CONFIG,
IWN_FH_RX_CONFIG_ENA |
IWN_FH_RX_CONFIG_IGN_RXF_EMPTY | /* HW bug workaround */
IWN_FH_RX_CONFIG_IRQ_DST_HOST |
IWN_FH_RX_CONFIG_SINGLE_FRAME |
IWN_FH_RX_CONFIG_RB_TIMEOUT(0) |
IWN_FH_RX_CONFIG_NRBD(IWN_RX_RING_COUNT_LOG));
iwn_nic_unlock(sc);
IWN_WRITE(sc, IWN_FH_RX_WPTR, (IWN_RX_RING_COUNT - 1) & ~7);
if ((error = iwn_nic_lock(sc)) != 0)
return error;
/* Initialize TX scheduler. */
iwn_prph_write(sc, sc->sched_txfact_addr, 0);
/* Set physical address of "keep warm" page (16-byte aligned). */
IWN_WRITE(sc, IWN_FH_KW_ADDR, sc->kw_dma.paddr >> 4);
/* Initialize TX rings. */
for (qid = 0; qid < sc->ntxqs; qid++) {
struct iwn_tx_ring *txq = &sc->txq[qid];
/* Set physical address of TX ring (256-byte aligned). */
IWN_WRITE(sc, IWN_FH_CBBC_QUEUE(qid),
txq->desc_dma.paddr >> 8);
}
iwn_nic_unlock(sc);
/* Enable DMA channels. */
for (chnl = 0; chnl < sc->ndmachnls; chnl++) {
IWN_WRITE(sc, IWN_FH_TX_CONFIG(chnl),
IWN_FH_TX_CONFIG_DMA_ENA |
IWN_FH_TX_CONFIG_DMA_CREDIT_ENA);
}
/* Clear "radio off" and "commands blocked" bits. */
IWN_WRITE(sc, IWN_UCODE_GP1_CLR, IWN_UCODE_GP1_RFKILL);
IWN_WRITE(sc, IWN_UCODE_GP1_CLR, IWN_UCODE_GP1_CMD_BLOCKED);
/* Clear pending interrupts. */
IWN_WRITE(sc, IWN_INT, 0xffffffff);
/* Enable interrupt coalescing. */
IWN_WRITE(sc, IWN_INT_COALESCING, 512 / 8);
/* Enable interrupts. */
IWN_WRITE(sc, IWN_INT_MASK, sc->int_mask);
/* _Really_ make sure "radio off" bit is cleared! */
IWN_WRITE(sc, IWN_UCODE_GP1_CLR, IWN_UCODE_GP1_RFKILL);
IWN_WRITE(sc, IWN_UCODE_GP1_CLR, IWN_UCODE_GP1_RFKILL);
/* Enable shadow registers. */
if (sc->base_params->shadow_reg_enable)
IWN_SETBITS(sc, IWN_SHADOW_REG_CTRL, 0x800fffff);
if ((error = ops->load_firmware(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not load firmware, error %d\n", __func__,
error);
return error;
}
/* Wait at most one second for firmware alive notification. */
if ((error = msleep(sc, &sc->sc_mtx, PCATCH, "iwninit", hz)) != 0) {
device_printf(sc->sc_dev,
"%s: timeout waiting for adapter to initialize, error %d\n",
__func__, error);
return error;
}
/* Do post-firmware initialization. */
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return ops->post_alive(sc);
}
static void
iwn_hw_stop(struct iwn_softc *sc)
{
int chnl, qid, ntries;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
IWN_WRITE(sc, IWN_RESET, IWN_RESET_NEVO);
/* Disable interrupts. */
IWN_WRITE(sc, IWN_INT_MASK, 0);
IWN_WRITE(sc, IWN_INT, 0xffffffff);
IWN_WRITE(sc, IWN_FH_INT, 0xffffffff);
sc->sc_flags &= ~IWN_FLAG_USE_ICT;
/* Make sure we no longer hold the NIC lock. */
iwn_nic_unlock(sc);
/* Stop TX scheduler. */
iwn_prph_write(sc, sc->sched_txfact_addr, 0);
/* Stop all DMA channels. */
if (iwn_nic_lock(sc) == 0) {
for (chnl = 0; chnl < sc->ndmachnls; chnl++) {
IWN_WRITE(sc, IWN_FH_TX_CONFIG(chnl), 0);
for (ntries = 0; ntries < 200; ntries++) {
if (IWN_READ(sc, IWN_FH_TX_STATUS) &
IWN_FH_TX_STATUS_IDLE(chnl))
break;
DELAY(10);
}
}
iwn_nic_unlock(sc);
}
/* Stop RX ring. */
iwn_reset_rx_ring(sc, &sc->rxq);
/* Reset all TX rings. */
for (qid = 0; qid < sc->ntxqs; qid++)
iwn_reset_tx_ring(sc, &sc->txq[qid]);
if (iwn_nic_lock(sc) == 0) {
iwn_prph_write(sc, IWN_APMG_CLK_DIS,
IWN_APMG_CLK_CTRL_DMA_CLK_RQT);
iwn_nic_unlock(sc);
}
DELAY(5);
/* Power OFF adapter. */
iwn_apm_stop(sc);
}
static void
iwn_panicked(void *arg0, int pending)
{
struct iwn_softc *sc = arg0;
struct ieee80211com *ic = &sc->sc_ic;
struct ieee80211vap *vap = TAILQ_FIRST(&ic->ic_vaps);
#if 0
int error;
#endif
if (vap == NULL) {
printf("%s: null vap\n", __func__);
return;
}
device_printf(sc->sc_dev, "%s: controller panicked, iv_state = %d; "
"restarting\n", __func__, vap->iv_state);
/*
* This is not enough work. We need to also reinitialise
* the correct transmit state for aggregation enabled queues,
* which has a very specific requirement of
* ring index = 802.11 seqno % 256. If we don't do this (which
* we definitely don't!) then the firmware will just panic again.
*/
#if 1
ieee80211_restart_all(ic);
#else
IWN_LOCK(sc);
iwn_stop_locked(sc);
if ((error = iwn_init_locked(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not init hardware\n", __func__);
goto unlock;
}
if (vap->iv_state >= IEEE80211_S_AUTH &&
(error = iwn_auth(sc, vap)) != 0) {
device_printf(sc->sc_dev,
"%s: could not move to auth state\n", __func__);
}
if (vap->iv_state >= IEEE80211_S_RUN &&
(error = iwn_run(sc, vap)) != 0) {
device_printf(sc->sc_dev,
"%s: could not move to run state\n", __func__);
}
unlock:
IWN_UNLOCK(sc);
#endif
}
static int
iwn_init_locked(struct iwn_softc *sc)
{
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s begin\n", __func__);
IWN_LOCK_ASSERT(sc);
if (sc->sc_flags & IWN_FLAG_RUNNING)
goto end;
sc->sc_flags |= IWN_FLAG_RUNNING;
if ((error = iwn_hw_prepare(sc)) != 0) {
device_printf(sc->sc_dev, "%s: hardware not ready, error %d\n",
__func__, error);
goto fail;
}
/* Initialize interrupt mask to default value. */
sc->int_mask = IWN_INT_MASK_DEF;
sc->sc_flags &= ~IWN_FLAG_USE_ICT;
/* Check that the radio is not disabled by hardware switch. */
if (!(IWN_READ(sc, IWN_GP_CNTRL) & IWN_GP_CNTRL_RFKILL)) {
iwn_stop_locked(sc);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return (1);
}
/* Read firmware images from the filesystem. */
if ((error = iwn_read_firmware(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not read firmware, error %d\n", __func__,
error);
goto fail;
}
/* Initialize hardware and upload firmware. */
error = iwn_hw_init(sc);
iwn_unload_firmware(sc);
if (error != 0) {
device_printf(sc->sc_dev,
"%s: could not initialize hardware, error %d\n", __func__,
error);
goto fail;
}
/* Configure adapter now that it is ready. */
if ((error = iwn_config(sc)) != 0) {
device_printf(sc->sc_dev,
"%s: could not configure device, error %d\n", __func__,
error);
goto fail;
}
callout_reset(&sc->watchdog_to, hz, iwn_watchdog, sc);
end:
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end\n",__func__);
return (0);
fail:
iwn_stop_locked(sc);
DPRINTF(sc, IWN_DEBUG_TRACE, "->%s: end in error\n",__func__);
return (-1);
}
static int
iwn_init(struct iwn_softc *sc)
{
int error;
IWN_LOCK(sc);
error = iwn_init_locked(sc);
IWN_UNLOCK(sc);
return (error);
}
static void
iwn_stop_locked(struct iwn_softc *sc)
{
IWN_LOCK_ASSERT(sc);
if (!(sc->sc_flags & IWN_FLAG_RUNNING))
return;
sc->sc_is_scanning = 0;
sc->sc_tx_timer = 0;
callout_stop(&sc->watchdog_to);
callout_stop(&sc->scan_timeout);
callout_stop(&sc->calib_to);
sc->sc_flags &= ~IWN_FLAG_RUNNING;
/* Power OFF hardware. */
iwn_hw_stop(sc);
}
static void
iwn_stop(struct iwn_softc *sc)
{
IWN_LOCK(sc);
iwn_stop_locked(sc);
IWN_UNLOCK(sc);
}
/*
* Callback from net80211 to start a scan.
*/
static void
iwn_scan_start(struct ieee80211com *ic)
{
struct iwn_softc *sc = ic->ic_softc;
IWN_LOCK(sc);
/* make the link LED blink while we're scanning */
iwn_set_led(sc, IWN_LED_LINK, 20, 2);
IWN_UNLOCK(sc);
}
/*
* Callback from net80211 to terminate a scan.
*/
static void
iwn_scan_end(struct ieee80211com *ic)
{
struct iwn_softc *sc = ic->ic_softc;
struct ieee80211vap *vap = TAILQ_FIRST(&ic->ic_vaps);
IWN_LOCK(sc);
if (vap->iv_state == IEEE80211_S_RUN) {
/* Set link LED to ON status if we are associated */
iwn_set_led(sc, IWN_LED_LINK, 0, 1);
}
IWN_UNLOCK(sc);
}
/*
* Callback from net80211 to force a channel change.
*/
static void
iwn_set_channel(struct ieee80211com *ic)
{
const struct ieee80211_channel *c = ic->ic_curchan;
struct iwn_softc *sc = ic->ic_softc;
int error;
DPRINTF(sc, IWN_DEBUG_TRACE, "->Doing %s\n", __func__);
IWN_LOCK(sc);
sc->sc_rxtap.wr_chan_freq = htole16(c->ic_freq);
sc->sc_rxtap.wr_chan_flags = htole16(c->ic_flags);
sc->sc_txtap.wt_chan_freq = htole16(c->ic_freq);
sc->sc_txtap.wt_chan_flags = htole16(c->ic_flags);
/*
* Only need to set the channel in Monitor mode. AP scanning and auth
* are already taken care of by their respective firmware commands.
*/
if (ic->ic_opmode == IEEE80211_M_MONITOR) {
error = iwn_config(sc);
if (error != 0)
device_printf(sc->sc_dev,
"%s: error %d setting channel\n", __func__, error);
}
IWN_UNLOCK(sc);
}
/*
* Callback from net80211 to start scanning of the current channel.
*/
static void
iwn_scan_curchan(struct ieee80211_scan_state *ss, unsigned long maxdwell)
{
struct ieee80211vap *vap = ss->ss_vap;
struct ieee80211com *ic = vap->iv_ic;
struct iwn_softc *sc = ic->ic_softc;
int error;
IWN_LOCK(sc);
error = iwn_scan(sc, vap, ss, ic->ic_curchan);
IWN_UNLOCK(sc);
if (error != 0)
ieee80211_cancel_scan(vap);
}
/*
* Callback from net80211 to handle the minimum dwell time being met.
* The intent is to terminate the scan but we just let the firmware
* notify us when it's finished as we have no safe way to abort it.
*/
static void
iwn_scan_mindwell(struct ieee80211_scan_state *ss)
{
/* NB: don't try to abort scan; wait for firmware to finish */
}
#ifdef IWN_DEBUG
#define IWN_DESC(x) case x: return #x
/*
* Translate CSR code to string
*/
static char *iwn_get_csr_string(int csr)
{
switch (csr) {
IWN_DESC(IWN_HW_IF_CONFIG);
IWN_DESC(IWN_INT_COALESCING);
IWN_DESC(IWN_INT);
IWN_DESC(IWN_INT_MASK);
IWN_DESC(IWN_FH_INT);
IWN_DESC(IWN_GPIO_IN);
IWN_DESC(IWN_RESET);
IWN_DESC(IWN_GP_CNTRL);
IWN_DESC(IWN_HW_REV);
IWN_DESC(IWN_EEPROM);
IWN_DESC(IWN_EEPROM_GP);
IWN_DESC(IWN_OTP_GP);
IWN_DESC(IWN_GIO);
IWN_DESC(IWN_GP_UCODE);
IWN_DESC(IWN_GP_DRIVER);
IWN_DESC(IWN_UCODE_GP1);
IWN_DESC(IWN_UCODE_GP2);
IWN_DESC(IWN_LED);
IWN_DESC(IWN_DRAM_INT_TBL);
IWN_DESC(IWN_GIO_CHICKEN);
IWN_DESC(IWN_ANA_PLL);
IWN_DESC(IWN_HW_REV_WA);
IWN_DESC(IWN_DBG_HPET_MEM);
default:
return "UNKNOWN CSR";
}
}
/*
* This function prints the firmware CSR registers.
*/
static void
iwn_debug_register(struct iwn_softc *sc)
{
int i;
static const uint32_t csr_tbl[] = {
IWN_HW_IF_CONFIG,
IWN_INT_COALESCING,
IWN_INT,
IWN_INT_MASK,
IWN_FH_INT,
IWN_GPIO_IN,
IWN_RESET,
IWN_GP_CNTRL,
IWN_HW_REV,
IWN_EEPROM,
IWN_EEPROM_GP,
IWN_OTP_GP,
IWN_GIO,
IWN_GP_UCODE,
IWN_GP_DRIVER,
IWN_UCODE_GP1,
IWN_UCODE_GP2,
IWN_LED,
IWN_DRAM_INT_TBL,
IWN_GIO_CHICKEN,
IWN_ANA_PLL,
IWN_HW_REV_WA,
IWN_DBG_HPET_MEM,
};
DPRINTF(sc, IWN_DEBUG_REGISTER,
"CSR values: (2nd byte of IWN_INT_COALESCING is IWN_INT_PERIODIC)%s",
"\n");
for (i = 0; i < nitems(csr_tbl); i++){
DPRINTF(sc, IWN_DEBUG_REGISTER," %10s: 0x%08x ",
iwn_get_csr_string(csr_tbl[i]), IWN_READ(sc, csr_tbl[i]));
if ((i+1) % 3 == 0)
DPRINTF(sc, IWN_DEBUG_REGISTER,"%s","\n");
}
DPRINTF(sc, IWN_DEBUG_REGISTER,"%s","\n");
}
#endif
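The IWN_DESC macro above is the common "case that returns its own stringification" idiom for mapping register codes to symbolic names. A self-contained miniature of the same technique, with made-up constants in place of the CSR offsets:

```c
#include <assert.h>
#include <string.h>

/* Map a code to its symbolic name via a case-returning macro -- the same
 * trick iwn_get_csr_string() uses for CSR offsets. */
#define NAME_CASE(x) case x: return #x

enum { DEMO_OK = 0, DEMO_BUSY = 1, DEMO_FAIL = 2 };

static const char *
demo_name(int code)
{
	switch (code) {
	NAME_CASE(DEMO_OK);
	NAME_CASE(DEMO_BUSY);
	NAME_CASE(DEMO_FAIL);
	default:
		return "UNKNOWN";
	}
}
```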
Index: projects/clang800-import/sys/dev/mmc/bridge.h
===================================================================
--- projects/clang800-import/sys/dev/mmc/bridge.h (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/bridge.h (revision 343956)
@@ -1,194 +1,194 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*
* $FreeBSD$
*/
#ifndef DEV_MMC_BRIDGE_H
#define DEV_MMC_BRIDGE_H
#include <sys/bus.h>
/*
* This file defines interfaces for the mmc bridge. The names chosen
* are similar to or the same as the names used in Linux to allow for
* easy porting of what Linux calls mmc host drivers. I use the
* FreeBSD terminology of bridge and bus for consistency with other
* drivers in the system. This file corresponds roughly to the Linux
* linux/mmc/host.h file.
*
* An mmc bridge is a chipset that can have one or more mmc and/or sd
* cards attached to it. mmc devices are attached on a bus topology,
* while sd and sdio cards usually are attached using a star topology
* (meaning in practice each sd card has its own, independent slot).
* Since SDHCI v3.00, buses for esd and esdio are possible, though.
*
* Attached to the mmc bridge is an mmcbus. The mmcbus is described
* in dev/mmc/mmcbus_if.m.
*/
/*
* mmc_ios is a structure that is used to store the state of the mmc/sd
* bus configuration. This includes the bus's clock speed, its voltage,
* the bus mode for command output, the SPI chip select, some power
* states and the bus width.
*/
enum mmc_vdd {
vdd_150 = 0, vdd_155, vdd_160, vdd_165, vdd_170, vdd_180,
vdd_190, vdd_200, vdd_210, vdd_220, vdd_230, vdd_240, vdd_250,
vdd_260, vdd_270, vdd_280, vdd_290, vdd_300, vdd_310, vdd_320,
vdd_330, vdd_340, vdd_350, vdd_360
};
enum mmc_vccq {
vccq_120 = 0, vccq_180, vccq_330
};
enum mmc_power_mode {
power_off = 0, power_up, power_on
};
enum mmc_bus_mode {
opendrain = 1, pushpull
};
enum mmc_chip_select {
cs_dontcare = 0, cs_high, cs_low
};
enum mmc_bus_width {
bus_width_1 = 0, bus_width_4 = 2, bus_width_8 = 3
};
enum mmc_drv_type {
drv_type_b = 0, drv_type_a, drv_type_c, drv_type_d
};
enum mmc_bus_timing {
bus_timing_normal = 0, bus_timing_hs, bus_timing_uhs_sdr12,
bus_timing_uhs_sdr25, bus_timing_uhs_sdr50, bus_timing_uhs_ddr50,
bus_timing_uhs_sdr104, bus_timing_mmc_ddr52, bus_timing_mmc_hs200,
bus_timing_mmc_hs400, bus_timing_mmc_hs400es, bus_timing_max =
bus_timing_mmc_hs400es
};
struct mmc_ios {
uint32_t clock; /* Speed of the clock in Hz to move data */
enum mmc_vdd vdd; /* Voltage to apply to the power pins */
enum mmc_vccq vccq; /* Voltage to use for signaling */
enum mmc_bus_mode bus_mode;
enum mmc_chip_select chip_select;
enum mmc_bus_width bus_width;
enum mmc_power_mode power_mode;
enum mmc_bus_timing timing;
enum mmc_drv_type drv_type;
};
enum mmc_card_mode {
mode_mmc, mode_sd
};
enum mmc_retune_req {
retune_req_none = 0, retune_req_normal, retune_req_reset
};
struct mmc_host {
int f_min;
int f_max;
uint32_t host_ocr;
uint32_t ocr;
uint32_t caps;
#define MMC_CAP_4_BIT_DATA (1 << 0) /* Can do 4-bit data transfers */
#define MMC_CAP_8_BIT_DATA (1 << 1) /* Can do 8-bit data transfers */
#define MMC_CAP_HSPEED (1 << 2) /* Can do High Speed transfers */
#define MMC_CAP_BOOT_NOACC (1 << 4) /* Cannot access boot partitions */
#define MMC_CAP_WAIT_WHILE_BUSY (1 << 5) /* Host waits for busy responses */
#define MMC_CAP_UHS_SDR12 (1 << 6) /* Can do UHS SDR12 */
#define MMC_CAP_UHS_SDR25 (1 << 7) /* Can do UHS SDR25 */
#define MMC_CAP_UHS_SDR50 (1 << 8) /* Can do UHS SDR50 */
#define MMC_CAP_UHS_SDR104 (1 << 9) /* Can do UHS SDR104 */
#define MMC_CAP_UHS_DDR50 (1 << 10) /* Can do UHS DDR50 */
#define MMC_CAP_MMC_DDR52_120 (1 << 11) /* Can do eMMC DDR52 at 1.2 V */
#define MMC_CAP_MMC_DDR52_180 (1 << 12) /* Can do eMMC DDR52 at 1.8 V */
#define MMC_CAP_MMC_DDR52 (MMC_CAP_MMC_DDR52_120 | MMC_CAP_MMC_DDR52_180)
#define MMC_CAP_MMC_HS200_120 (1 << 13) /* Can do eMMC HS200 at 1.2 V */
#define MMC_CAP_MMC_HS200_180 (1 << 14) /* Can do eMMC HS200 at 1.8 V */
#define MMC_CAP_MMC_HS200 (MMC_CAP_MMC_HS200_120| MMC_CAP_MMC_HS200_180)
#define MMC_CAP_MMC_HS400_120 (1 << 15) /* Can do eMMC HS400 at 1.2 V */
#define MMC_CAP_MMC_HS400_180 (1 << 16) /* Can do eMMC HS400 at 1.8 V */
#define MMC_CAP_MMC_HS400 (MMC_CAP_MMC_HS400_120 | MMC_CAP_MMC_HS400_180)
#define MMC_CAP_MMC_HSX00_120 (MMC_CAP_MMC_HS200_120 | MMC_CAP_MMC_HS400_120)
#define MMC_CAP_MMC_ENH_STROBE (1 << 17) /* Can do eMMC Enhanced Strobe */
#define MMC_CAP_SIGNALING_120 (1 << 18) /* Can do signaling at 1.2 V */
#define MMC_CAP_SIGNALING_180 (1 << 19) /* Can do signaling at 1.8 V */
#define MMC_CAP_SIGNALING_330 (1 << 20) /* Can do signaling at 3.3 V */
#define MMC_CAP_DRIVER_TYPE_A (1 << 21) /* Can do Driver Type A */
#define MMC_CAP_DRIVER_TYPE_C (1 << 22) /* Can do Driver Type C */
#define MMC_CAP_DRIVER_TYPE_D (1 << 23) /* Can do Driver Type D */
enum mmc_card_mode mode;
struct mmc_ios ios; /* Current state of the host */
};
#ifdef _KERNEL
extern driver_t mmc_driver;
extern devclass_t mmc_devclass;
#define MMC_VERSION 5
#define MMC_DECLARE_BRIDGE(name) \
DRIVER_MODULE(mmc, name, mmc_driver, mmc_devclass, NULL, NULL); \
MODULE_DEPEND(name, mmc, MMC_VERSION, MMC_VERSION, MMC_VERSION);
#define MMC_DEPEND(name) \
MODULE_DEPEND(name, mmc, MMC_VERSION, MMC_VERSION, MMC_VERSION);
#endif /* _KERNEL */
#endif /* DEV_MMC_BRIDGE_H */
Index: projects/clang800-import/sys/dev/mmc/mmc.c
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmc.c (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmc.c (revision 343956)
@@ -1,2582 +1,2582 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2006 Bernd Walter. All rights reserved.
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
* Copyright (c) 2017 Marius Strobl <marius@FreeBSD.org>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/lock.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <sys/bus.h>
#include <sys/endian.h>
#include <sys/sysctl.h>
#include <sys/time.h>
#include <dev/mmc/bridge.h>
#include <dev/mmc/mmc_private.h>
#include <dev/mmc/mmc_subr.h>
#include <dev/mmc/mmcreg.h>
#include <dev/mmc/mmcbrvar.h>
#include <dev/mmc/mmcvar.h>
#include "mmcbr_if.h"
#include "mmcbus_if.h"
CTASSERT(bus_timing_max <= sizeof(uint32_t) * NBBY);
/*
* Per-card data
*/
struct mmc_ivars {
uint32_t raw_cid[4]; /* Raw bits of the CID */
uint32_t raw_csd[4]; /* Raw bits of the CSD */
uint32_t raw_scr[2]; /* Raw bits of the SCR */
uint8_t raw_ext_csd[MMC_EXTCSD_SIZE]; /* Raw bits of the EXT_CSD */
uint32_t raw_sd_status[16]; /* Raw bits of the SD_STATUS */
uint16_t rca;
u_char read_only; /* True when the device is read-only */
u_char high_cap; /* High Capacity device (block addressed) */
enum mmc_card_mode mode;
enum mmc_bus_width bus_width; /* Bus width to use */
struct mmc_cid cid; /* cid decoded */
struct mmc_csd csd; /* csd decoded */
struct mmc_scr scr; /* scr decoded */
struct mmc_sd_status sd_status; /* SD_STATUS decoded */
uint32_t sec_count; /* Card capacity in 512byte blocks */
uint32_t timings; /* Mask of bus timings supported */
uint32_t vccq_120; /* Mask of bus timings at VCCQ of 1.2 V */
uint32_t vccq_180; /* Mask of bus timings at VCCQ of 1.8 V */
uint32_t tran_speed; /* Max speed in normal mode */
uint32_t hs_tran_speed; /* Max speed in high speed mode */
uint32_t erase_sector; /* Card native erase sector size */
uint32_t cmd6_time; /* Generic switch timeout [us] */
uint32_t quirks; /* Quirks as per mmc_quirk->quirks */
char card_id_string[64];/* Formatted CID info (serial, MFG, etc) */
char card_sn_string[16];/* Formatted serial # for disk->d_ident */
};
#define CMD_RETRIES 3
static const struct mmc_quirk mmc_quirks[] = {
/*
* For some SanDisk iNAND devices, the CMD38 argument needs to be
* provided in EXT_CSD[113].
*/
{ 0x2, 0x100, "SEM02G", MMC_QUIRK_INAND_CMD38 },
{ 0x2, 0x100, "SEM04G", MMC_QUIRK_INAND_CMD38 },
{ 0x2, 0x100, "SEM08G", MMC_QUIRK_INAND_CMD38 },
{ 0x2, 0x100, "SEM16G", MMC_QUIRK_INAND_CMD38 },
{ 0x2, 0x100, "SEM32G", MMC_QUIRK_INAND_CMD38 },
/*
* Disable TRIM for Kingston eMMCs where a firmware bug can lead to
* unrecoverable data corruption.
*/
{ 0x70, MMC_QUIRK_OID_ANY, "V10008", MMC_QUIRK_BROKEN_TRIM },
{ 0x70, MMC_QUIRK_OID_ANY, "V10016", MMC_QUIRK_BROKEN_TRIM },
{ 0x0, 0x0, NULL, 0x0 }
};
static SYSCTL_NODE(_hw, OID_AUTO, mmc, CTLFLAG_RD, NULL, "mmc driver");
static int mmc_debug;
SYSCTL_INT(_hw_mmc, OID_AUTO, debug, CTLFLAG_RWTUN, &mmc_debug, 0,
"Debug level");
/* bus entry points */
static int mmc_acquire_bus(device_t busdev, device_t dev);
static int mmc_attach(device_t dev);
static int mmc_child_location_str(device_t dev, device_t child, char *buf,
size_t buflen);
static int mmc_detach(device_t dev);
static int mmc_probe(device_t dev);
static int mmc_read_ivar(device_t bus, device_t child, int which,
uintptr_t *result);
static int mmc_release_bus(device_t busdev, device_t dev);
static int mmc_resume(device_t dev);
static void mmc_retune_pause(device_t busdev, device_t dev, bool retune);
static void mmc_retune_unpause(device_t busdev, device_t dev);
static int mmc_suspend(device_t dev);
static int mmc_wait_for_request(device_t busdev, device_t dev,
struct mmc_request *req);
static int mmc_write_ivar(device_t bus, device_t child, int which,
uintptr_t value);
#define MMC_LOCK(_sc) mtx_lock(&(_sc)->sc_mtx)
#define MMC_UNLOCK(_sc) mtx_unlock(&(_sc)->sc_mtx)
#define MMC_LOCK_INIT(_sc) \
mtx_init(&(_sc)->sc_mtx, device_get_nameunit((_sc)->dev), \
"mmc", MTX_DEF)
#define MMC_LOCK_DESTROY(_sc) mtx_destroy(&(_sc)->sc_mtx);
#define MMC_ASSERT_LOCKED(_sc) mtx_assert(&(_sc)->sc_mtx, MA_OWNED);
#define MMC_ASSERT_UNLOCKED(_sc) mtx_assert(&(_sc)->sc_mtx, MA_NOTOWNED);
static int mmc_all_send_cid(struct mmc_softc *sc, uint32_t *rawcid);
static void mmc_app_decode_scr(uint32_t *raw_scr, struct mmc_scr *scr);
static void mmc_app_decode_sd_status(uint32_t *raw_sd_status,
struct mmc_sd_status *sd_status);
static int mmc_app_sd_status(struct mmc_softc *sc, uint16_t rca,
uint32_t *rawsdstatus);
static int mmc_app_send_scr(struct mmc_softc *sc, uint16_t rca,
uint32_t *rawscr);
static int mmc_calculate_clock(struct mmc_softc *sc);
static void mmc_decode_cid_mmc(uint32_t *raw_cid, struct mmc_cid *cid,
bool is_4_41p);
static void mmc_decode_cid_sd(uint32_t *raw_cid, struct mmc_cid *cid);
static void mmc_decode_csd_mmc(uint32_t *raw_csd, struct mmc_csd *csd);
static int mmc_decode_csd_sd(uint32_t *raw_csd, struct mmc_csd *csd);
static void mmc_delayed_attach(void *xsc);
static int mmc_delete_cards(struct mmc_softc *sc, bool final);
static void mmc_discover_cards(struct mmc_softc *sc);
static void mmc_format_card_id_string(struct mmc_ivars *ivar);
static void mmc_go_discovery(struct mmc_softc *sc);
static uint32_t mmc_get_bits(uint32_t *bits, int bit_len, int start,
int size);
static int mmc_highest_voltage(uint32_t ocr);
static bool mmc_host_timing(device_t dev, enum mmc_bus_timing timing);
static void mmc_idle_cards(struct mmc_softc *sc);
static void mmc_ms_delay(int ms);
static void mmc_log_card(device_t dev, struct mmc_ivars *ivar, int newcard);
static void mmc_power_down(struct mmc_softc *sc);
static void mmc_power_up(struct mmc_softc *sc);
static void mmc_rescan_cards(struct mmc_softc *sc);
static int mmc_retune(device_t busdev, device_t dev, bool reset);
static void mmc_scan(struct mmc_softc *sc);
static int mmc_sd_switch(struct mmc_softc *sc, uint8_t mode, uint8_t grp,
uint8_t value, uint8_t *res);
static int mmc_select_card(struct mmc_softc *sc, uint16_t rca);
static uint32_t mmc_select_vdd(struct mmc_softc *sc, uint32_t ocr);
static int mmc_send_app_op_cond(struct mmc_softc *sc, uint32_t ocr,
uint32_t *rocr);
static int mmc_send_csd(struct mmc_softc *sc, uint16_t rca, uint32_t *rawcsd);
static int mmc_send_if_cond(struct mmc_softc *sc, uint8_t vhs);
static int mmc_send_op_cond(struct mmc_softc *sc, uint32_t ocr,
uint32_t *rocr);
static int mmc_send_relative_addr(struct mmc_softc *sc, uint32_t *resp);
static int mmc_set_blocklen(struct mmc_softc *sc, uint32_t len);
static int mmc_set_card_bus_width(struct mmc_softc *sc, struct mmc_ivars *ivar,
enum mmc_bus_timing timing);
static int mmc_set_power_class(struct mmc_softc *sc, struct mmc_ivars *ivar);
static int mmc_set_relative_addr(struct mmc_softc *sc, uint16_t resp);
static int mmc_set_timing(struct mmc_softc *sc, struct mmc_ivars *ivar,
enum mmc_bus_timing timing);
static int mmc_set_vccq(struct mmc_softc *sc, struct mmc_ivars *ivar,
enum mmc_bus_timing timing);
static int mmc_switch_to_hs200(struct mmc_softc *sc, struct mmc_ivars *ivar,
uint32_t clock);
static int mmc_switch_to_hs400(struct mmc_softc *sc, struct mmc_ivars *ivar,
uint32_t max_dtr, enum mmc_bus_timing max_timing);
static int mmc_test_bus_width(struct mmc_softc *sc);
static uint32_t mmc_timing_to_dtr(struct mmc_ivars *ivar,
enum mmc_bus_timing timing);
static const char *mmc_timing_to_string(enum mmc_bus_timing timing);
static void mmc_update_child_list(struct mmc_softc *sc);
static int mmc_wait_for_command(struct mmc_softc *sc, uint32_t opcode,
uint32_t arg, uint32_t flags, uint32_t *resp, int retries);
static int mmc_wait_for_req(struct mmc_softc *sc, struct mmc_request *req);
static void mmc_wakeup(struct mmc_request *req);
static void
mmc_ms_delay(int ms)
{
DELAY(1000 * ms); /* XXX BAD */
}
static int
mmc_probe(device_t dev)
{
device_set_desc(dev, "MMC/SD bus");
return (0);
}
static int
mmc_attach(device_t dev)
{
struct mmc_softc *sc;
sc = device_get_softc(dev);
sc->dev = dev;
MMC_LOCK_INIT(sc);
/* We'll probe and attach our children later, but before / is mounted */
sc->config_intrhook.ich_func = mmc_delayed_attach;
sc->config_intrhook.ich_arg = sc;
if (config_intrhook_establish(&sc->config_intrhook) != 0)
device_printf(dev, "config_intrhook_establish failed\n");
return (0);
}
static int
mmc_detach(device_t dev)
{
struct mmc_softc *sc = device_get_softc(dev);
int err;
err = mmc_delete_cards(sc, true);
if (err != 0)
return (err);
mmc_power_down(sc);
MMC_LOCK_DESTROY(sc);
return (0);
}
static int
mmc_suspend(device_t dev)
{
struct mmc_softc *sc = device_get_softc(dev);
int err;
err = bus_generic_suspend(dev);
if (err != 0)
return (err);
/*
* We power down with the bus acquired here, mainly so that no device
* is selected any longer and sc->last_rca gets set to 0. Otherwise,
* the deselect as part of the bus acquisition in mmc_scan() may fail
* during resume, as the bus isn't powered up again before later in
* mmc_go_discovery().
*/
err = mmc_acquire_bus(dev, dev);
if (err != 0)
return (err);
mmc_power_down(sc);
err = mmc_release_bus(dev, dev);
return (err);
}
static int
mmc_resume(device_t dev)
{
struct mmc_softc *sc = device_get_softc(dev);
mmc_scan(sc);
return (bus_generic_resume(dev));
}
static int
mmc_acquire_bus(device_t busdev, device_t dev)
{
struct mmc_softc *sc;
struct mmc_ivars *ivar;
int err;
uint16_t rca;
enum mmc_bus_timing timing;
err = MMCBR_ACQUIRE_HOST(device_get_parent(busdev), busdev);
if (err)
return (err);
sc = device_get_softc(busdev);
MMC_LOCK(sc);
if (sc->owner)
panic("mmc: host bridge didn't serialize us.");
sc->owner = dev;
MMC_UNLOCK(sc);
if (busdev != dev) {
/*
* Keep track of the last rca that we've selected. If
* we're asked to do it again, don't. We never
* unselect unless the bus code itself wants the mmc
* bus, and constantly reselecting causes problems.
*/
ivar = device_get_ivars(dev);
rca = ivar->rca;
if (sc->last_rca != rca) {
if (mmc_select_card(sc, rca) != MMC_ERR_NONE) {
device_printf(busdev, "Card at relative "
"address %d failed to select\n", rca);
return (ENXIO);
}
sc->last_rca = rca;
timing = mmcbr_get_timing(busdev);
/*
* For eMMC modes, setting/updating bus width and VCCQ
* is only really necessary if there actually is more
* than one device on the bus, as generally that already
* had to be done by mmc_calculate_clock() or one of
* its callees. Moreover, setting the bus width anew
* can trigger re-tuning (via a CRC error on the next
* CMD), even when not switching between devices and the
* previously selected one is still tuned. Obviously,
* we need to re-tune the host controller if devices
* are actually switched, though.
*/
if (timing >= bus_timing_mmc_ddr52 &&
sc->child_count == 1)
return (0);
/* Prepare bus width for the new card. */
if (bootverbose || mmc_debug) {
device_printf(busdev,
"setting bus width to %d bits %s timing\n",
(ivar->bus_width == bus_width_4) ? 4 :
(ivar->bus_width == bus_width_8) ? 8 : 1,
mmc_timing_to_string(timing));
}
if (mmc_set_card_bus_width(sc, ivar, timing) !=
MMC_ERR_NONE) {
device_printf(busdev, "Card at relative "
"address %d failed to set bus width\n",
rca);
return (ENXIO);
}
mmcbr_set_bus_width(busdev, ivar->bus_width);
mmcbr_update_ios(busdev);
if (mmc_set_vccq(sc, ivar, timing) != MMC_ERR_NONE) {
device_printf(busdev, "Failed to set VCCQ "
"for card at relative address %d\n", rca);
return (ENXIO);
}
if (timing >= bus_timing_mmc_hs200 &&
mmc_retune(busdev, dev, true) != 0) {
device_printf(busdev, "Card at relative "
"address %d failed to re-tune\n", rca);
return (ENXIO);
}
}
} else {
/*
* If there's a card selected, stand down.
*/
if (sc->last_rca != 0) {
if (mmc_select_card(sc, 0) != MMC_ERR_NONE)
return (ENXIO);
sc->last_rca = 0;
}
}
return (0);
}
static int
mmc_release_bus(device_t busdev, device_t dev)
{
struct mmc_softc *sc;
int err;
sc = device_get_softc(busdev);
MMC_LOCK(sc);
if (!sc->owner)
panic("mmc: releasing unowned bus.");
if (sc->owner != dev)
panic("mmc: you don't own the bus. game over.");
MMC_UNLOCK(sc);
err = MMCBR_RELEASE_HOST(device_get_parent(busdev), busdev);
if (err)
return (err);
MMC_LOCK(sc);
sc->owner = NULL;
MMC_UNLOCK(sc);
return (0);
}
static uint32_t
mmc_select_vdd(struct mmc_softc *sc, uint32_t ocr)
{
return (ocr & MMC_OCR_VOLTAGE);
}
static int
mmc_highest_voltage(uint32_t ocr)
{
int i;
for (i = MMC_OCR_MAX_VOLTAGE_SHIFT;
i >= MMC_OCR_MIN_VOLTAGE_SHIFT; i--)
if (ocr & (1 << i))
return (i);
return (-1);
}
static void
mmc_wakeup(struct mmc_request *req)
{
struct mmc_softc *sc;
sc = (struct mmc_softc *)req->done_data;
MMC_LOCK(sc);
req->flags |= MMC_REQ_DONE;
MMC_UNLOCK(sc);
wakeup(req);
}
static int
mmc_wait_for_req(struct mmc_softc *sc, struct mmc_request *req)
{
req->done = mmc_wakeup;
req->done_data = sc;
if (__predict_false(mmc_debug > 1)) {
device_printf(sc->dev, "REQUEST: CMD%d arg %#x flags %#x",
req->cmd->opcode, req->cmd->arg, req->cmd->flags);
if (req->cmd->data) {
printf(" data %d\n", (int)req->cmd->data->len);
} else
printf("\n");
}
MMCBR_REQUEST(device_get_parent(sc->dev), sc->dev, req);
MMC_LOCK(sc);
while ((req->flags & MMC_REQ_DONE) == 0)
msleep(req, &sc->sc_mtx, 0, "mmcreq", 0);
MMC_UNLOCK(sc);
if (__predict_false(mmc_debug > 2 || (mmc_debug > 0 &&
req->cmd->error != MMC_ERR_NONE)))
device_printf(sc->dev, "CMD%d RESULT: %d\n",
req->cmd->opcode, req->cmd->error);
return (0);
}
static int
mmc_wait_for_request(device_t busdev, device_t dev, struct mmc_request *req)
{
struct mmc_softc *sc;
struct mmc_ivars *ivar;
int err, i;
enum mmc_retune_req retune_req;
sc = device_get_softc(busdev);
KASSERT(sc->owner != NULL,
("%s: Request from %s without bus being acquired.", __func__,
device_get_nameunit(dev)));
/*
* Unless no device is selected or re-tuning is already ongoing,
* execute re-tuning if a) the bridge is requesting to do so and
* re-tuning hasn't been otherwise paused, or b) if a child asked
* to be re-tuned prior to pausing (see also mmc_retune_pause()).
*/
if (__predict_false(sc->last_rca != 0 && sc->retune_ongoing == 0 &&
(((retune_req = mmcbr_get_retune_req(busdev)) != retune_req_none &&
sc->retune_paused == 0) || sc->retune_needed == 1))) {
if (__predict_false(mmc_debug > 1)) {
device_printf(busdev,
"Re-tuning with%s circuit reset required\n",
retune_req == retune_req_reset ? "" : "out");
}
if (device_get_parent(dev) == busdev)
ivar = device_get_ivars(dev);
else {
for (i = 0; i < sc->child_count; i++) {
ivar = device_get_ivars(sc->child_list[i]);
if (ivar->rca == sc->last_rca)
break;
}
if (ivar->rca != sc->last_rca)
return (EINVAL);
}
sc->retune_ongoing = 1;
err = mmc_retune(busdev, dev, retune_req == retune_req_reset);
sc->retune_ongoing = 0;
switch (err) {
case MMC_ERR_NONE:
case MMC_ERR_FAILED: /* Re-tune error but still might work */
break;
case MMC_ERR_BADCRC: /* Switch failure on HS400 recovery */
return (ENXIO);
case MMC_ERR_INVALID: /* Driver implementation b0rken */
default: /* Unknown error, should not happen */
return (EINVAL);
}
sc->retune_needed = 0;
}
return (mmc_wait_for_req(sc, req));
}
static int
mmc_wait_for_command(struct mmc_softc *sc, uint32_t opcode,
uint32_t arg, uint32_t flags, uint32_t *resp, int retries)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = opcode;
cmd.arg = arg;
cmd.flags = flags;
cmd.data = NULL;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, retries);
if (err)
return (err);
if (resp) {
if (flags & MMC_RSP_136)
memcpy(resp, cmd.resp, 4 * sizeof(uint32_t));
else
*resp = cmd.resp[0];
}
return (0);
}
static void
mmc_idle_cards(struct mmc_softc *sc)
{
device_t dev;
struct mmc_command cmd;
dev = sc->dev;
mmcbr_set_chip_select(dev, cs_high);
mmcbr_update_ios(dev);
mmc_ms_delay(1);
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_GO_IDLE_STATE;
cmd.arg = 0;
cmd.flags = MMC_RSP_NONE | MMC_CMD_BC;
cmd.data = NULL;
mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
mmc_ms_delay(1);
mmcbr_set_chip_select(dev, cs_dontcare);
mmcbr_update_ios(dev);
mmc_ms_delay(1);
}
static int
mmc_send_app_op_cond(struct mmc_softc *sc, uint32_t ocr, uint32_t *rocr)
{
struct mmc_command cmd;
int err = MMC_ERR_NONE, i;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = ACMD_SD_SEND_OP_COND;
cmd.arg = ocr;
cmd.flags = MMC_RSP_R3 | MMC_CMD_BCR;
cmd.data = NULL;
for (i = 0; i < 1000; i++) {
err = mmc_wait_for_app_cmd(sc->dev, sc->dev, 0, &cmd,
CMD_RETRIES);
if (err != MMC_ERR_NONE)
break;
if ((cmd.resp[0] & MMC_OCR_CARD_BUSY) ||
(ocr & MMC_OCR_VOLTAGE) == 0)
break;
err = MMC_ERR_TIMEOUT;
mmc_ms_delay(10);
}
if (rocr && err == MMC_ERR_NONE)
*rocr = cmd.resp[0];
return (err);
}
static int
mmc_send_op_cond(struct mmc_softc *sc, uint32_t ocr, uint32_t *rocr)
{
struct mmc_command cmd;
int err = MMC_ERR_NONE, i;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_SEND_OP_COND;
cmd.arg = ocr;
cmd.flags = MMC_RSP_R3 | MMC_CMD_BCR;
cmd.data = NULL;
for (i = 0; i < 1000; i++) {
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
if (err != MMC_ERR_NONE)
break;
if ((cmd.resp[0] & MMC_OCR_CARD_BUSY) ||
(ocr & MMC_OCR_VOLTAGE) == 0)
break;
err = MMC_ERR_TIMEOUT;
mmc_ms_delay(10);
}
if (rocr && err == MMC_ERR_NONE)
*rocr = cmd.resp[0];
return (err);
}
static int
mmc_send_if_cond(struct mmc_softc *sc, uint8_t vhs)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = SD_SEND_IF_COND;
cmd.arg = (vhs << 8) + 0xAA;
cmd.flags = MMC_RSP_R7 | MMC_CMD_BCR;
cmd.data = NULL;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
return (err);
}
static void
mmc_power_up(struct mmc_softc *sc)
{
device_t dev;
enum mmc_vccq vccq;
dev = sc->dev;
mmcbr_set_vdd(dev, mmc_highest_voltage(mmcbr_get_host_ocr(dev)));
mmcbr_set_bus_mode(dev, opendrain);
mmcbr_set_chip_select(dev, cs_dontcare);
mmcbr_set_bus_width(dev, bus_width_1);
mmcbr_set_power_mode(dev, power_up);
mmcbr_set_clock(dev, 0);
mmcbr_update_ios(dev);
for (vccq = vccq_330; ; vccq--) {
mmcbr_set_vccq(dev, vccq);
if (mmcbr_switch_vccq(dev) == 0 || vccq == vccq_120)
break;
}
mmc_ms_delay(1);
mmcbr_set_clock(dev, SD_MMC_CARD_ID_FREQUENCY);
mmcbr_set_timing(dev, bus_timing_normal);
mmcbr_set_power_mode(dev, power_on);
mmcbr_update_ios(dev);
mmc_ms_delay(2);
}
static void
mmc_power_down(struct mmc_softc *sc)
{
device_t dev = sc->dev;
mmcbr_set_bus_mode(dev, opendrain);
mmcbr_set_chip_select(dev, cs_dontcare);
mmcbr_set_bus_width(dev, bus_width_1);
mmcbr_set_power_mode(dev, power_off);
mmcbr_set_clock(dev, 0);
mmcbr_set_timing(dev, bus_timing_normal);
mmcbr_update_ios(dev);
}
static int
mmc_select_card(struct mmc_softc *sc, uint16_t rca)
{
int err, flags;
flags = (rca ? MMC_RSP_R1B : MMC_RSP_NONE) | MMC_CMD_AC;
sc->retune_paused++;
err = mmc_wait_for_command(sc, MMC_SELECT_CARD, (uint32_t)rca << 16,
flags, NULL, CMD_RETRIES);
sc->retune_paused--;
return (err);
}
static int
mmc_sd_switch(struct mmc_softc *sc, uint8_t mode, uint8_t grp, uint8_t value,
uint8_t *res)
{
int err;
struct mmc_command cmd;
struct mmc_data data;
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
memset(res, 0, 64);
cmd.opcode = SD_SWITCH_FUNC;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.arg = mode << 31; /* 0 - check, 1 - set */
cmd.arg |= 0x00FFFFFF;
cmd.arg &= ~(0xF << (grp * 4));
cmd.arg |= value << (grp * 4);
cmd.data = &data;
data.data = res;
data.len = 64;
data.flags = MMC_DATA_READ;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
return (err);
}
static int
mmc_set_card_bus_width(struct mmc_softc *sc, struct mmc_ivars *ivar,
enum mmc_bus_timing timing)
{
struct mmc_command cmd;
int err;
uint8_t value;
if (mmcbr_get_mode(sc->dev) == mode_sd) {
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = ACMD_SET_CLR_CARD_DETECT;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
cmd.arg = SD_CLR_CARD_DETECT;
err = mmc_wait_for_app_cmd(sc->dev, sc->dev, ivar->rca, &cmd,
CMD_RETRIES);
if (err != 0)
return (err);
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = ACMD_SET_BUS_WIDTH;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
switch (ivar->bus_width) {
case bus_width_1:
cmd.arg = SD_BUS_WIDTH_1;
break;
case bus_width_4:
cmd.arg = SD_BUS_WIDTH_4;
break;
default:
return (MMC_ERR_INVALID);
}
err = mmc_wait_for_app_cmd(sc->dev, sc->dev, ivar->rca, &cmd,
CMD_RETRIES);
} else {
switch (ivar->bus_width) {
case bus_width_1:
if (timing == bus_timing_mmc_hs400 ||
timing == bus_timing_mmc_hs400es)
return (MMC_ERR_INVALID);
value = EXT_CSD_BUS_WIDTH_1;
break;
case bus_width_4:
switch (timing) {
case bus_timing_mmc_ddr52:
value = EXT_CSD_BUS_WIDTH_4_DDR;
break;
case bus_timing_mmc_hs400:
case bus_timing_mmc_hs400es:
return (MMC_ERR_INVALID);
default:
value = EXT_CSD_BUS_WIDTH_4;
break;
}
break;
case bus_width_8:
value = 0;
switch (timing) {
case bus_timing_mmc_hs400es:
value = EXT_CSD_BUS_WIDTH_ES;
/* FALLTHROUGH */
case bus_timing_mmc_ddr52:
case bus_timing_mmc_hs400:
value |= EXT_CSD_BUS_WIDTH_8_DDR;
break;
default:
value = EXT_CSD_BUS_WIDTH_8;
break;
}
break;
default:
return (MMC_ERR_INVALID);
}
err = mmc_switch(sc->dev, sc->dev, ivar->rca,
EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BUS_WIDTH, value,
ivar->cmd6_time, true);
}
return (err);
}
static int
mmc_set_power_class(struct mmc_softc *sc, struct mmc_ivars *ivar)
{
device_t dev;
const uint8_t *ext_csd;
uint32_t clock;
uint8_t value;
enum mmc_bus_timing timing;
enum mmc_bus_width bus_width;
dev = sc->dev;
timing = mmcbr_get_timing(dev);
bus_width = ivar->bus_width;
if (mmcbr_get_mode(dev) != mode_mmc || ivar->csd.spec_vers < 4 ||
timing == bus_timing_normal || bus_width == bus_width_1)
return (MMC_ERR_NONE);
value = 0;
ext_csd = ivar->raw_ext_csd;
clock = mmcbr_get_clock(dev);
switch (1 << mmcbr_get_vdd(dev)) {
case MMC_OCR_LOW_VOLTAGE:
if (clock <= MMC_TYPE_HS_26_MAX)
value = ext_csd[EXT_CSD_PWR_CL_26_195];
else if (clock <= MMC_TYPE_HS_52_MAX) {
if (timing >= bus_timing_mmc_ddr52 &&
bus_width >= bus_width_4)
value = ext_csd[EXT_CSD_PWR_CL_52_195_DDR];
else
value = ext_csd[EXT_CSD_PWR_CL_52_195];
} else if (clock <= MMC_TYPE_HS200_HS400ES_MAX)
value = ext_csd[EXT_CSD_PWR_CL_200_195];
break;
case MMC_OCR_270_280:
case MMC_OCR_280_290:
case MMC_OCR_290_300:
case MMC_OCR_300_310:
case MMC_OCR_310_320:
case MMC_OCR_320_330:
case MMC_OCR_330_340:
case MMC_OCR_340_350:
case MMC_OCR_350_360:
if (clock <= MMC_TYPE_HS_26_MAX)
value = ext_csd[EXT_CSD_PWR_CL_26_360];
else if (clock <= MMC_TYPE_HS_52_MAX) {
if (timing == bus_timing_mmc_ddr52 &&
bus_width >= bus_width_4)
value = ext_csd[EXT_CSD_PWR_CL_52_360_DDR];
else
value = ext_csd[EXT_CSD_PWR_CL_52_360];
} else if (clock <= MMC_TYPE_HS200_HS400ES_MAX) {
if (bus_width == bus_width_8)
value = ext_csd[EXT_CSD_PWR_CL_200_360_DDR];
else
value = ext_csd[EXT_CSD_PWR_CL_200_360];
}
break;
default:
device_printf(dev, "No power class support for VDD 0x%x\n",
1 << mmcbr_get_vdd(dev));
return (MMC_ERR_INVALID);
}
if (bus_width == bus_width_8)
value = (value & EXT_CSD_POWER_CLASS_8BIT_MASK) >>
EXT_CSD_POWER_CLASS_8BIT_SHIFT;
else
value = (value & EXT_CSD_POWER_CLASS_4BIT_MASK) >>
EXT_CSD_POWER_CLASS_4BIT_SHIFT;
if (value == 0)
return (MMC_ERR_NONE);
return (mmc_switch(dev, dev, ivar->rca, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_POWER_CLASS, value, ivar->cmd6_time, true));
}
static int
mmc_set_timing(struct mmc_softc *sc, struct mmc_ivars *ivar,
enum mmc_bus_timing timing)
{
u_char switch_res[64];
uint8_t value;
int err;
if (mmcbr_get_mode(sc->dev) == mode_sd) {
switch (timing) {
case bus_timing_normal:
value = SD_SWITCH_NORMAL_MODE;
break;
case bus_timing_hs:
value = SD_SWITCH_HS_MODE;
break;
default:
return (MMC_ERR_INVALID);
}
err = mmc_sd_switch(sc, SD_SWITCH_MODE_SET, SD_SWITCH_GROUP1,
value, switch_res);
if (err != MMC_ERR_NONE)
return (err);
if ((switch_res[16] & 0xf) != value)
return (MMC_ERR_FAILED);
mmcbr_set_timing(sc->dev, timing);
mmcbr_update_ios(sc->dev);
} else {
switch (timing) {
case bus_timing_normal:
value = EXT_CSD_HS_TIMING_BC;
break;
case bus_timing_hs:
case bus_timing_mmc_ddr52:
value = EXT_CSD_HS_TIMING_HS;
break;
case bus_timing_mmc_hs200:
value = EXT_CSD_HS_TIMING_HS200;
break;
case bus_timing_mmc_hs400:
case bus_timing_mmc_hs400es:
value = EXT_CSD_HS_TIMING_HS400;
break;
default:
return (MMC_ERR_INVALID);
}
err = mmc_switch(sc->dev, sc->dev, ivar->rca,
EXT_CSD_CMD_SET_NORMAL, EXT_CSD_HS_TIMING, value,
ivar->cmd6_time, false);
if (err != MMC_ERR_NONE)
return (err);
mmcbr_set_timing(sc->dev, timing);
mmcbr_update_ios(sc->dev);
err = mmc_switch_status(sc->dev, sc->dev, ivar->rca,
ivar->cmd6_time);
}
return (err);
}
static int
mmc_set_vccq(struct mmc_softc *sc, struct mmc_ivars *ivar,
enum mmc_bus_timing timing)
{
if (isset(&ivar->vccq_120, timing))
mmcbr_set_vccq(sc->dev, vccq_120);
else if (isset(&ivar->vccq_180, timing))
mmcbr_set_vccq(sc->dev, vccq_180);
else
mmcbr_set_vccq(sc->dev, vccq_330);
if (mmcbr_switch_vccq(sc->dev) != 0)
return (MMC_ERR_INVALID);
else
return (MMC_ERR_NONE);
}
static const uint8_t p8[8] = {
0x55, 0xAA, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
};
static const uint8_t p8ok[8] = {
0xAA, 0x55, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
};
static const uint8_t p4[4] = {
0x5A, 0x00, 0x00, 0x00
};
static const uint8_t p4ok[4] = {
0xA5, 0x00, 0x00, 0x00
};
static int
mmc_test_bus_width(struct mmc_softc *sc)
{
struct mmc_command cmd;
struct mmc_data data;
uint8_t buf[8];
int err;
if (mmcbr_get_caps(sc->dev) & MMC_CAP_8_BIT_DATA) {
mmcbr_set_bus_width(sc->dev, bus_width_8);
mmcbr_update_ios(sc->dev);
sc->squelched++; /* Errors are expected, squelch reporting. */
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
cmd.opcode = MMC_BUSTEST_W;
cmd.arg = 0;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.data = &data;
data.data = __DECONST(void *, p8);
data.len = 8;
data.flags = MMC_DATA_WRITE;
mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, 0);
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
cmd.opcode = MMC_BUSTEST_R;
cmd.arg = 0;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.data = &data;
data.data = buf;
data.len = 8;
data.flags = MMC_DATA_READ;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, 0);
sc->squelched--;
mmcbr_set_bus_width(sc->dev, bus_width_1);
mmcbr_update_ios(sc->dev);
if (err == MMC_ERR_NONE && memcmp(buf, p8ok, 8) == 0)
return (bus_width_8);
}
if (mmcbr_get_caps(sc->dev) & MMC_CAP_4_BIT_DATA) {
mmcbr_set_bus_width(sc->dev, bus_width_4);
mmcbr_update_ios(sc->dev);
sc->squelched++; /* Errors are expected, squelch reporting. */
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
cmd.opcode = MMC_BUSTEST_W;
cmd.arg = 0;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.data = &data;
data.data = __DECONST(void *, p4);
data.len = 4;
data.flags = MMC_DATA_WRITE;
mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, 0);
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
cmd.opcode = MMC_BUSTEST_R;
cmd.arg = 0;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.data = &data;
data.data = buf;
data.len = 4;
data.flags = MMC_DATA_READ;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, 0);
sc->squelched--;
mmcbr_set_bus_width(sc->dev, bus_width_1);
mmcbr_update_ios(sc->dev);
if (err == MMC_ERR_NONE && memcmp(buf, p4ok, 4) == 0)
return (bus_width_4);
}
return (bus_width_1);
}
static uint32_t
mmc_get_bits(uint32_t *bits, int bit_len, int start, int size)
{
const int i = (bit_len / 32) - (start / 32) - 1;
const int shift = start & 31;
uint32_t retval = bits[i] >> shift;
if (size + shift > 32)
retval |= bits[i - 1] << (32 - shift);
return (retval & ((1llu << size) - 1));
}
static void
mmc_decode_cid_sd(uint32_t *raw_cid, struct mmc_cid *cid)
{
int i;
/* There's no version info, so we take it on faith */
memset(cid, 0, sizeof(*cid));
cid->mid = mmc_get_bits(raw_cid, 128, 120, 8);
cid->oid = mmc_get_bits(raw_cid, 128, 104, 16);
for (i = 0; i < 5; i++)
cid->pnm[i] = mmc_get_bits(raw_cid, 128, 96 - i * 8, 8);
cid->pnm[5] = 0;
cid->prv = mmc_get_bits(raw_cid, 128, 56, 8);
cid->psn = mmc_get_bits(raw_cid, 128, 24, 32);
cid->mdt_year = mmc_get_bits(raw_cid, 128, 12, 8) + 2000;
cid->mdt_month = mmc_get_bits(raw_cid, 128, 8, 4);
}
static void
mmc_decode_cid_mmc(uint32_t *raw_cid, struct mmc_cid *cid, bool is_4_41p)
{
int i;
/* There's no version info, so we take it on faith */
memset(cid, 0, sizeof(*cid));
cid->mid = mmc_get_bits(raw_cid, 128, 120, 8);
cid->oid = mmc_get_bits(raw_cid, 128, 104, 8);
for (i = 0; i < 6; i++)
cid->pnm[i] = mmc_get_bits(raw_cid, 128, 96 - i * 8, 8);
cid->pnm[6] = 0;
cid->prv = mmc_get_bits(raw_cid, 128, 48, 8);
cid->psn = mmc_get_bits(raw_cid, 128, 16, 32);
cid->mdt_month = mmc_get_bits(raw_cid, 128, 12, 4);
cid->mdt_year = mmc_get_bits(raw_cid, 128, 8, 4);
if (is_4_41p)
cid->mdt_year += 2013;
else
cid->mdt_year += 1997;
}
static void
mmc_format_card_id_string(struct mmc_ivars *ivar)
{
char oidstr[8];
uint8_t c1;
uint8_t c2;
/*
* Format a card ID string for use by the mmcsd driver, it's what
* appears between the <> in the following:
* mmcsd0: 968MB <SD SD01G 8.0 SN 2686905 MFG 08/2008 by 3 TN> at mmc0
* 22.5MHz/4bit/128-block
*
* Also format just the card serial number, which the mmcsd driver will
* use as the disk->d_ident string.
*
* The card_id_string in mmc_ivars is currently allocated as 64 bytes,
* and our max formatted length is currently 55 bytes if every field
* contains the largest value.
*
* Sometimes the oid is two printable ascii chars; when it's not,
* format it as 0xnnnn instead.
*/
c1 = (ivar->cid.oid >> 8) & 0x0ff;
c2 = ivar->cid.oid & 0x0ff;
if (c1 > 0x1f && c1 < 0x7f && c2 > 0x1f && c2 < 0x7f)
snprintf(oidstr, sizeof(oidstr), "%c%c", c1, c2);
else
snprintf(oidstr, sizeof(oidstr), "0x%04x", ivar->cid.oid);
snprintf(ivar->card_sn_string, sizeof(ivar->card_sn_string),
"%08X", ivar->cid.psn);
snprintf(ivar->card_id_string, sizeof(ivar->card_id_string),
"%s%s %s %d.%d SN %08X MFG %02d/%04d by %d %s",
ivar->mode == mode_sd ? "SD" : "MMC", ivar->high_cap ? "HC" : "",
ivar->cid.pnm, ivar->cid.prv >> 4, ivar->cid.prv & 0x0f,
ivar->cid.psn, ivar->cid.mdt_month, ivar->cid.mdt_year,
ivar->cid.mid, oidstr);
}
static const int exp[8] = {
1, 10, 100, 1000, 10000, 100000, 1000000, 10000000
};
static const int mant[16] = {
0, 10, 12, 13, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 70, 80
};
static const int cur_min[8] = {
500, 1000, 5000, 10000, 25000, 35000, 60000, 100000
};
static const int cur_max[8] = {
1000, 5000, 10000, 25000, 35000, 45000, 80000, 200000
};
static int
mmc_decode_csd_sd(uint32_t *raw_csd, struct mmc_csd *csd)
{
int v;
int m;
int e;
memset(csd, 0, sizeof(*csd));
csd->csd_structure = v = mmc_get_bits(raw_csd, 128, 126, 2);
if (v == 0) {
m = mmc_get_bits(raw_csd, 128, 115, 4);
e = mmc_get_bits(raw_csd, 128, 112, 3);
csd->tacc = (exp[e] * mant[m] + 9) / 10;
csd->nsac = mmc_get_bits(raw_csd, 128, 104, 8) * 100;
m = mmc_get_bits(raw_csd, 128, 99, 4);
e = mmc_get_bits(raw_csd, 128, 96, 3);
csd->tran_speed = exp[e] * 10000 * mant[m];
csd->ccc = mmc_get_bits(raw_csd, 128, 84, 12);
csd->read_bl_len = 1 << mmc_get_bits(raw_csd, 128, 80, 4);
csd->read_bl_partial = mmc_get_bits(raw_csd, 128, 79, 1);
csd->write_blk_misalign = mmc_get_bits(raw_csd, 128, 78, 1);
csd->read_blk_misalign = mmc_get_bits(raw_csd, 128, 77, 1);
csd->dsr_imp = mmc_get_bits(raw_csd, 128, 76, 1);
csd->vdd_r_curr_min =
cur_min[mmc_get_bits(raw_csd, 128, 59, 3)];
csd->vdd_r_curr_max =
cur_max[mmc_get_bits(raw_csd, 128, 56, 3)];
csd->vdd_w_curr_min =
cur_min[mmc_get_bits(raw_csd, 128, 53, 3)];
csd->vdd_w_curr_max =
cur_max[mmc_get_bits(raw_csd, 128, 50, 3)];
m = mmc_get_bits(raw_csd, 128, 62, 12);
e = mmc_get_bits(raw_csd, 128, 47, 3);
csd->capacity = ((1 + m) << (e + 2)) * csd->read_bl_len;
csd->erase_blk_en = mmc_get_bits(raw_csd, 128, 46, 1);
csd->erase_sector = mmc_get_bits(raw_csd, 128, 39, 7) + 1;
csd->wp_grp_size = mmc_get_bits(raw_csd, 128, 32, 7);
csd->wp_grp_enable = mmc_get_bits(raw_csd, 128, 31, 1);
csd->r2w_factor = 1 << mmc_get_bits(raw_csd, 128, 26, 3);
csd->write_bl_len = 1 << mmc_get_bits(raw_csd, 128, 22, 4);
csd->write_bl_partial = mmc_get_bits(raw_csd, 128, 21, 1);
return (MMC_ERR_NONE);
} else if (v == 1) {
m = mmc_get_bits(raw_csd, 128, 115, 4);
e = mmc_get_bits(raw_csd, 128, 112, 3);
csd->tacc = (exp[e] * mant[m] + 9) / 10;
csd->nsac = mmc_get_bits(raw_csd, 128, 104, 8) * 100;
m = mmc_get_bits(raw_csd, 128, 99, 4);
e = mmc_get_bits(raw_csd, 128, 96, 3);
csd->tran_speed = exp[e] * 10000 * mant[m];
csd->ccc = mmc_get_bits(raw_csd, 128, 84, 12);
csd->read_bl_len = 1 << mmc_get_bits(raw_csd, 128, 80, 4);
csd->read_bl_partial = mmc_get_bits(raw_csd, 128, 79, 1);
csd->write_blk_misalign = mmc_get_bits(raw_csd, 128, 78, 1);
csd->read_blk_misalign = mmc_get_bits(raw_csd, 128, 77, 1);
csd->dsr_imp = mmc_get_bits(raw_csd, 128, 76, 1);
csd->capacity = ((uint64_t)mmc_get_bits(raw_csd, 128, 48, 22) +
1) * 512 * 1024;
csd->erase_blk_en = mmc_get_bits(raw_csd, 128, 46, 1);
csd->erase_sector = mmc_get_bits(raw_csd, 128, 39, 7) + 1;
csd->wp_grp_size = mmc_get_bits(raw_csd, 128, 32, 7);
csd->wp_grp_enable = mmc_get_bits(raw_csd, 128, 31, 1);
csd->r2w_factor = 1 << mmc_get_bits(raw_csd, 128, 26, 3);
csd->write_bl_len = 1 << mmc_get_bits(raw_csd, 128, 22, 4);
csd->write_bl_partial = mmc_get_bits(raw_csd, 128, 21, 1);
return (MMC_ERR_NONE);
}
return (MMC_ERR_INVALID);
}
static void
mmc_decode_csd_mmc(uint32_t *raw_csd, struct mmc_csd *csd)
{
int m;
int e;
memset(csd, 0, sizeof(*csd));
csd->csd_structure = mmc_get_bits(raw_csd, 128, 126, 2);
csd->spec_vers = mmc_get_bits(raw_csd, 128, 122, 4);
m = mmc_get_bits(raw_csd, 128, 115, 4);
e = mmc_get_bits(raw_csd, 128, 112, 3);
csd->tacc = (exp[e] * mant[m] + 9) / 10;
csd->nsac = mmc_get_bits(raw_csd, 128, 104, 8) * 100;
m = mmc_get_bits(raw_csd, 128, 99, 4);
e = mmc_get_bits(raw_csd, 128, 96, 3);
csd->tran_speed = exp[e] * 10000 * mant[m];
csd->ccc = mmc_get_bits(raw_csd, 128, 84, 12);
csd->read_bl_len = 1 << mmc_get_bits(raw_csd, 128, 80, 4);
csd->read_bl_partial = mmc_get_bits(raw_csd, 128, 79, 1);
csd->write_blk_misalign = mmc_get_bits(raw_csd, 128, 78, 1);
csd->read_blk_misalign = mmc_get_bits(raw_csd, 128, 77, 1);
csd->dsr_imp = mmc_get_bits(raw_csd, 128, 76, 1);
csd->vdd_r_curr_min = cur_min[mmc_get_bits(raw_csd, 128, 59, 3)];
csd->vdd_r_curr_max = cur_max[mmc_get_bits(raw_csd, 128, 56, 3)];
csd->vdd_w_curr_min = cur_min[mmc_get_bits(raw_csd, 128, 53, 3)];
csd->vdd_w_curr_max = cur_max[mmc_get_bits(raw_csd, 128, 50, 3)];
m = mmc_get_bits(raw_csd, 128, 62, 12);
e = mmc_get_bits(raw_csd, 128, 47, 3);
csd->capacity = ((1 + m) << (e + 2)) * csd->read_bl_len;
csd->erase_blk_en = 0;
csd->erase_sector = (mmc_get_bits(raw_csd, 128, 42, 5) + 1) *
(mmc_get_bits(raw_csd, 128, 37, 5) + 1);
csd->wp_grp_size = mmc_get_bits(raw_csd, 128, 32, 5);
csd->wp_grp_enable = mmc_get_bits(raw_csd, 128, 31, 1);
csd->r2w_factor = 1 << mmc_get_bits(raw_csd, 128, 26, 3);
csd->write_bl_len = 1 << mmc_get_bits(raw_csd, 128, 22, 4);
csd->write_bl_partial = mmc_get_bits(raw_csd, 128, 21, 1);
}
static void
mmc_app_decode_scr(uint32_t *raw_scr, struct mmc_scr *scr)
{
unsigned int scr_struct;
memset(scr, 0, sizeof(*scr));
scr_struct = mmc_get_bits(raw_scr, 64, 60, 4);
if (scr_struct != 0) {
printf("Unrecognised SCR structure version %d\n",
scr_struct);
return;
}
scr->sda_vsn = mmc_get_bits(raw_scr, 64, 56, 4);
scr->bus_widths = mmc_get_bits(raw_scr, 64, 48, 4);
}
static void
mmc_app_decode_sd_status(uint32_t *raw_sd_status,
struct mmc_sd_status *sd_status)
{
memset(sd_status, 0, sizeof(*sd_status));
sd_status->bus_width = mmc_get_bits(raw_sd_status, 512, 510, 2);
sd_status->secured_mode = mmc_get_bits(raw_sd_status, 512, 509, 1);
sd_status->card_type = mmc_get_bits(raw_sd_status, 512, 480, 16);
sd_status->prot_area = mmc_get_bits(raw_sd_status, 512, 448, 12);
sd_status->speed_class = mmc_get_bits(raw_sd_status, 512, 440, 8);
sd_status->perf_move = mmc_get_bits(raw_sd_status, 512, 432, 8);
sd_status->au_size = mmc_get_bits(raw_sd_status, 512, 428, 4);
sd_status->erase_size = mmc_get_bits(raw_sd_status, 512, 408, 16);
sd_status->erase_timeout = mmc_get_bits(raw_sd_status, 512, 402, 6);
sd_status->erase_offset = mmc_get_bits(raw_sd_status, 512, 400, 2);
}
static int
mmc_all_send_cid(struct mmc_softc *sc, uint32_t *rawcid)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_ALL_SEND_CID;
cmd.arg = 0;
cmd.flags = MMC_RSP_R2 | MMC_CMD_BCR;
cmd.data = NULL;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
memcpy(rawcid, cmd.resp, 4 * sizeof(uint32_t));
return (err);
}
static int
mmc_send_csd(struct mmc_softc *sc, uint16_t rca, uint32_t *rawcsd)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_SEND_CSD;
cmd.arg = rca << 16;
cmd.flags = MMC_RSP_R2 | MMC_CMD_BCR;
cmd.data = NULL;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
memcpy(rawcsd, cmd.resp, 4 * sizeof(uint32_t));
return (err);
}
static int
mmc_app_send_scr(struct mmc_softc *sc, uint16_t rca, uint32_t *rawscr)
{
int err;
struct mmc_command cmd;
struct mmc_data data;
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
memset(rawscr, 0, 8);
cmd.opcode = ACMD_SEND_SCR;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.arg = 0;
cmd.data = &data;
data.data = rawscr;
data.len = 8;
data.flags = MMC_DATA_READ;
err = mmc_wait_for_app_cmd(sc->dev, sc->dev, rca, &cmd, CMD_RETRIES);
rawscr[0] = be32toh(rawscr[0]);
rawscr[1] = be32toh(rawscr[1]);
return (err);
}
static int
mmc_app_sd_status(struct mmc_softc *sc, uint16_t rca, uint32_t *rawsdstatus)
{
struct mmc_command cmd;
struct mmc_data data;
int err, i;
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
memset(rawsdstatus, 0, 64);
cmd.opcode = ACMD_SD_STATUS;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.arg = 0;
cmd.data = &data;
data.data = rawsdstatus;
data.len = 64;
data.flags = MMC_DATA_READ;
err = mmc_wait_for_app_cmd(sc->dev, sc->dev, rca, &cmd, CMD_RETRIES);
for (i = 0; i < 16; i++)
rawsdstatus[i] = be32toh(rawsdstatus[i]);
return (err);
}
static int
mmc_set_relative_addr(struct mmc_softc *sc, uint16_t resp)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_SET_RELATIVE_ADDR;
cmd.arg = resp << 16;
cmd.flags = MMC_RSP_R6 | MMC_CMD_BCR;
cmd.data = NULL;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
return (err);
}
static int
mmc_send_relative_addr(struct mmc_softc *sc, uint32_t *resp)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = SD_SEND_RELATIVE_ADDR;
cmd.arg = 0;
cmd.flags = MMC_RSP_R6 | MMC_CMD_BCR;
cmd.data = NULL;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
*resp = cmd.resp[0];
return (err);
}
static int
mmc_set_blocklen(struct mmc_softc *sc, uint32_t len)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_SET_BLOCKLEN;
cmd.arg = len;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
cmd.data = NULL;
err = mmc_wait_for_cmd(sc->dev, sc->dev, &cmd, CMD_RETRIES);
return (err);
}
static uint32_t
mmc_timing_to_dtr(struct mmc_ivars *ivar, enum mmc_bus_timing timing)
{
switch (timing) {
case bus_timing_normal:
return (ivar->tran_speed);
case bus_timing_hs:
return (ivar->hs_tran_speed);
case bus_timing_uhs_sdr12:
return (SD_SDR12_MAX);
case bus_timing_uhs_sdr25:
return (SD_SDR25_MAX);
case bus_timing_uhs_ddr50:
return (SD_DDR50_MAX);
case bus_timing_uhs_sdr50:
return (SD_SDR50_MAX);
case bus_timing_uhs_sdr104:
return (SD_SDR104_MAX);
case bus_timing_mmc_ddr52:
return (MMC_TYPE_DDR52_MAX);
case bus_timing_mmc_hs200:
case bus_timing_mmc_hs400:
case bus_timing_mmc_hs400es:
return (MMC_TYPE_HS200_HS400ES_MAX);
}
return (0);
}
static const char *
mmc_timing_to_string(enum mmc_bus_timing timing)
{
switch (timing) {
case bus_timing_normal:
return ("normal speed");
case bus_timing_hs:
return ("high speed");
case bus_timing_uhs_sdr12:
case bus_timing_uhs_sdr25:
case bus_timing_uhs_sdr50:
case bus_timing_uhs_sdr104:
return ("single data rate");
case bus_timing_uhs_ddr50:
case bus_timing_mmc_ddr52:
return ("dual data rate");
case bus_timing_mmc_hs200:
return ("HS200");
case bus_timing_mmc_hs400:
return ("HS400");
case bus_timing_mmc_hs400es:
return ("HS400 with enhanced strobe");
}
return ("");
}
static bool
mmc_host_timing(device_t dev, enum mmc_bus_timing timing)
{
int host_caps;
host_caps = mmcbr_get_caps(dev);
#define HOST_TIMING_CAP(host_caps, cap) ({ \
bool retval; \
if (((host_caps) & (cap)) == (cap)) \
retval = true; \
else \
retval = false; \
retval; \
})
switch (timing) {
case bus_timing_normal:
return (true);
case bus_timing_hs:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_HSPEED));
case bus_timing_uhs_sdr12:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_UHS_SDR12));
case bus_timing_uhs_sdr25:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_UHS_SDR25));
case bus_timing_uhs_ddr50:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_UHS_DDR50));
case bus_timing_uhs_sdr50:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_UHS_SDR50));
case bus_timing_uhs_sdr104:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_UHS_SDR104));
case bus_timing_mmc_ddr52:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_MMC_DDR52));
case bus_timing_mmc_hs200:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_MMC_HS200));
case bus_timing_mmc_hs400:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_MMC_HS400));
case bus_timing_mmc_hs400es:
return (HOST_TIMING_CAP(host_caps, MMC_CAP_MMC_HS400 |
MMC_CAP_MMC_ENH_STROBE));
}
#undef HOST_TIMING_CAP
return (false);
}
static void
mmc_log_card(device_t dev, struct mmc_ivars *ivar, int newcard)
{
enum mmc_bus_timing timing;
device_printf(dev, "Card at relative address 0x%04x%s:\n",
ivar->rca, newcard ? " added" : "");
device_printf(dev, " card: %s\n", ivar->card_id_string);
for (timing = bus_timing_max; timing > bus_timing_normal; timing--) {
if (isset(&ivar->timings, timing))
break;
}
device_printf(dev, " quirks: %b\n", ivar->quirks, MMC_QUIRKS_FMT);
device_printf(dev, " bus: %ubit, %uMHz (%s timing)\n",
(ivar->bus_width == bus_width_1 ? 1 :
(ivar->bus_width == bus_width_4 ? 4 : 8)),
mmc_timing_to_dtr(ivar, timing) / 1000000,
mmc_timing_to_string(timing));
device_printf(dev, " memory: %u blocks, erase sector %u blocks%s\n",
ivar->sec_count, ivar->erase_sector,
ivar->read_only ? ", read-only" : "");
}
static void
mmc_discover_cards(struct mmc_softc *sc)
{
u_char switch_res[64];
uint32_t raw_cid[4];
struct mmc_ivars *ivar = NULL;
const struct mmc_quirk *quirk;
const uint8_t *ext_csd;
device_t child;
int err, host_caps, i, newcard;
uint32_t resp, sec_count, status;
uint16_t rca = 2;
int16_t rev;
uint8_t card_type;
host_caps = mmcbr_get_caps(sc->dev);
if (bootverbose || mmc_debug)
device_printf(sc->dev, "Probing cards\n");
while (1) {
child = NULL;
sc->squelched++; /* Errors are expected, squelch reporting. */
err = mmc_all_send_cid(sc, raw_cid);
sc->squelched--;
if (err == MMC_ERR_TIMEOUT)
break;
if (err != MMC_ERR_NONE) {
device_printf(sc->dev, "Error reading CID %d\n", err);
break;
}
newcard = 1;
for (i = 0; i < sc->child_count; i++) {
ivar = device_get_ivars(sc->child_list[i]);
if (memcmp(ivar->raw_cid, raw_cid, sizeof(raw_cid)) ==
0) {
newcard = 0;
break;
}
}
if (bootverbose || mmc_debug) {
device_printf(sc->dev,
"%sard detected (CID %08x%08x%08x%08x)\n",
newcard ? "New c" : "C",
raw_cid[0], raw_cid[1], raw_cid[2], raw_cid[3]);
}
if (newcard) {
ivar = malloc(sizeof(struct mmc_ivars), M_DEVBUF,
M_WAITOK | M_ZERO);
memcpy(ivar->raw_cid, raw_cid, sizeof(raw_cid));
}
if (mmcbr_get_ro(sc->dev))
ivar->read_only = 1;
ivar->bus_width = bus_width_1;
setbit(&ivar->timings, bus_timing_normal);
ivar->mode = mmcbr_get_mode(sc->dev);
if (ivar->mode == mode_sd) {
mmc_decode_cid_sd(ivar->raw_cid, &ivar->cid);
err = mmc_send_relative_addr(sc, &resp);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error getting RCA %d\n", err);
goto free_ivar;
}
ivar->rca = resp >> 16;
/* Get card CSD. */
err = mmc_send_csd(sc, ivar->rca, ivar->raw_csd);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error getting CSD %d\n", err);
goto free_ivar;
}
if (bootverbose || mmc_debug)
device_printf(sc->dev,
"%sard detected (CSD %08x%08x%08x%08x)\n",
newcard ? "New c" : "C", ivar->raw_csd[0],
ivar->raw_csd[1], ivar->raw_csd[2],
ivar->raw_csd[3]);
err = mmc_decode_csd_sd(ivar->raw_csd, &ivar->csd);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev, "Error decoding CSD\n");
goto free_ivar;
}
ivar->sec_count = ivar->csd.capacity / MMC_SECTOR_SIZE;
if (ivar->csd.csd_structure > 0)
ivar->high_cap = 1;
ivar->tran_speed = ivar->csd.tran_speed;
ivar->erase_sector = ivar->csd.erase_sector *
ivar->csd.write_bl_len / MMC_SECTOR_SIZE;
err = mmc_send_status(sc->dev, sc->dev, ivar->rca,
&status);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error reading card status %d\n", err);
goto free_ivar;
}
if ((status & R1_CARD_IS_LOCKED) != 0) {
device_printf(sc->dev,
"Card is password protected, skipping\n");
goto free_ivar;
}
/* Get card SCR. Card must be selected to fetch it. */
err = mmc_select_card(sc, ivar->rca);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error selecting card %d\n", err);
goto free_ivar;
}
err = mmc_app_send_scr(sc, ivar->rca, ivar->raw_scr);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error reading SCR %d\n", err);
goto free_ivar;
}
mmc_app_decode_scr(ivar->raw_scr, &ivar->scr);
/* Get card switch capabilities (command class 10). */
if ((ivar->scr.sda_vsn >= 1) &&
(ivar->csd.ccc & (1 << 10))) {
err = mmc_sd_switch(sc, SD_SWITCH_MODE_CHECK,
SD_SWITCH_GROUP1, SD_SWITCH_NOCHANGE,
switch_res);
if (err == MMC_ERR_NONE &&
switch_res[13] & (1 << SD_SWITCH_HS_MODE)) {
setbit(&ivar->timings, bus_timing_hs);
ivar->hs_tran_speed = SD_HS_MAX;
}
}
/*
* We deselect then reselect the card here. Some cards
* become unselected and timeout with the above two
* commands, although the state tables / diagrams in the
* standard suggest they go back to the transfer state.
* Other cards don't become deselected, and if we
* attempt to blindly re-select them, we get timeout
* errors from some controllers. So we deselect then
* reselect to handle all situations. The only thing we
* use from the sd_status is the erase sector size, but
* it is still nice to get that right.
*/
(void)mmc_select_card(sc, 0);
(void)mmc_select_card(sc, ivar->rca);
(void)mmc_app_sd_status(sc, ivar->rca,
ivar->raw_sd_status);
mmc_app_decode_sd_status(ivar->raw_sd_status,
&ivar->sd_status);
if (ivar->sd_status.au_size != 0) {
ivar->erase_sector =
16 << ivar->sd_status.au_size;
}
/* Find maximum supported bus width. */
if ((host_caps & MMC_CAP_4_BIT_DATA) &&
(ivar->scr.bus_widths & SD_SCR_BUS_WIDTH_4))
ivar->bus_width = bus_width_4;
goto child_common;
}
ivar->rca = rca++;
err = mmc_set_relative_addr(sc, ivar->rca);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev, "Error setting RCA %d\n", err);
goto free_ivar;
}
/* Get card CSD. */
err = mmc_send_csd(sc, ivar->rca, ivar->raw_csd);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev, "Error getting CSD %d\n", err);
goto free_ivar;
}
if (bootverbose || mmc_debug)
device_printf(sc->dev,
"%sard detected (CSD %08x%08x%08x%08x)\n",
newcard ? "New c" : "C", ivar->raw_csd[0],
ivar->raw_csd[1], ivar->raw_csd[2],
ivar->raw_csd[3]);
mmc_decode_csd_mmc(ivar->raw_csd, &ivar->csd);
ivar->sec_count = ivar->csd.capacity / MMC_SECTOR_SIZE;
ivar->tran_speed = ivar->csd.tran_speed;
ivar->erase_sector = ivar->csd.erase_sector *
ivar->csd.write_bl_len / MMC_SECTOR_SIZE;
err = mmc_send_status(sc->dev, sc->dev, ivar->rca, &status);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error reading card status %d\n", err);
goto free_ivar;
}
if ((status & R1_CARD_IS_LOCKED) != 0) {
device_printf(sc->dev,
"Card is password protected, skipping\n");
goto free_ivar;
}
err = mmc_select_card(sc, ivar->rca);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev, "Error selecting card %d\n",
err);
goto free_ivar;
}
rev = -1;
/* Only MMC >= 4.x devices support EXT_CSD. */
if (ivar->csd.spec_vers >= 4) {
err = mmc_send_ext_csd(sc->dev, sc->dev,
ivar->raw_ext_csd);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error reading EXT_CSD %d\n", err);
goto free_ivar;
}
ext_csd = ivar->raw_ext_csd;
rev = ext_csd[EXT_CSD_REV];
/* Handle extended capacity from EXT_CSD */
sec_count = le32dec(&ext_csd[EXT_CSD_SEC_CNT]);
if (sec_count != 0) {
ivar->sec_count = sec_count;
ivar->high_cap = 1;
}
/* Find maximum supported bus width. */
ivar->bus_width = mmc_test_bus_width(sc);
/* Get device speeds beyond normal mode. */
card_type = ext_csd[EXT_CSD_CARD_TYPE];
if ((card_type & EXT_CSD_CARD_TYPE_HS_52) != 0) {
setbit(&ivar->timings, bus_timing_hs);
ivar->hs_tran_speed = MMC_TYPE_HS_52_MAX;
} else if ((card_type & EXT_CSD_CARD_TYPE_HS_26) != 0) {
setbit(&ivar->timings, bus_timing_hs);
ivar->hs_tran_speed = MMC_TYPE_HS_26_MAX;
}
if ((card_type & EXT_CSD_CARD_TYPE_DDR_52_1_2V) != 0 &&
(host_caps & MMC_CAP_SIGNALING_120) != 0) {
setbit(&ivar->timings, bus_timing_mmc_ddr52);
setbit(&ivar->vccq_120, bus_timing_mmc_ddr52);
}
if ((card_type & EXT_CSD_CARD_TYPE_DDR_52_1_8V) != 0 &&
(host_caps & MMC_CAP_SIGNALING_180) != 0) {
setbit(&ivar->timings, bus_timing_mmc_ddr52);
setbit(&ivar->vccq_180, bus_timing_mmc_ddr52);
}
if ((card_type & EXT_CSD_CARD_TYPE_HS200_1_2V) != 0 &&
(host_caps & MMC_CAP_SIGNALING_120) != 0) {
setbit(&ivar->timings, bus_timing_mmc_hs200);
setbit(&ivar->vccq_120, bus_timing_mmc_hs200);
}
if ((card_type & EXT_CSD_CARD_TYPE_HS200_1_8V) != 0 &&
(host_caps & MMC_CAP_SIGNALING_180) != 0) {
setbit(&ivar->timings, bus_timing_mmc_hs200);
setbit(&ivar->vccq_180, bus_timing_mmc_hs200);
}
if ((card_type & EXT_CSD_CARD_TYPE_HS400_1_2V) != 0 &&
(host_caps & MMC_CAP_SIGNALING_120) != 0 &&
ivar->bus_width == bus_width_8) {
setbit(&ivar->timings, bus_timing_mmc_hs400);
setbit(&ivar->vccq_120, bus_timing_mmc_hs400);
}
if ((card_type & EXT_CSD_CARD_TYPE_HS400_1_8V) != 0 &&
(host_caps & MMC_CAP_SIGNALING_180) != 0 &&
ivar->bus_width == bus_width_8) {
setbit(&ivar->timings, bus_timing_mmc_hs400);
setbit(&ivar->vccq_180, bus_timing_mmc_hs400);
}
if ((card_type & EXT_CSD_CARD_TYPE_HS400_1_2V) != 0 &&
(ext_csd[EXT_CSD_STROBE_SUPPORT] &
EXT_CSD_STROBE_SUPPORT_EN) != 0 &&
(host_caps & MMC_CAP_SIGNALING_120) != 0 &&
ivar->bus_width == bus_width_8) {
setbit(&ivar->timings, bus_timing_mmc_hs400es);
setbit(&ivar->vccq_120, bus_timing_mmc_hs400es);
}
if ((card_type & EXT_CSD_CARD_TYPE_HS400_1_8V) != 0 &&
(ext_csd[EXT_CSD_STROBE_SUPPORT] &
EXT_CSD_STROBE_SUPPORT_EN) != 0 &&
(host_caps & MMC_CAP_SIGNALING_180) != 0 &&
ivar->bus_width == bus_width_8) {
setbit(&ivar->timings, bus_timing_mmc_hs400es);
setbit(&ivar->vccq_180, bus_timing_mmc_hs400es);
}
/*
* Determine generic switch timeout (provided in
* units of 10 ms), defaulting to 500 ms.
*/
ivar->cmd6_time = 500 * 1000;
if (rev >= 6)
ivar->cmd6_time = 10 *
ext_csd[EXT_CSD_GEN_CMD6_TIME];
/* Handle HC erase sector size. */
if (ext_csd[EXT_CSD_ERASE_GRP_SIZE] != 0) {
ivar->erase_sector = 1024 *
ext_csd[EXT_CSD_ERASE_GRP_SIZE];
err = mmc_switch(sc->dev, sc->dev, ivar->rca,
EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_ERASE_GRP_DEF,
EXT_CSD_ERASE_GRP_DEF_EN,
ivar->cmd6_time, true);
if (err != MMC_ERR_NONE) {
device_printf(sc->dev,
"Error setting erase group %d\n",
err);
goto free_ivar;
}
}
}
mmc_decode_cid_mmc(ivar->raw_cid, &ivar->cid, rev >= 5);
child_common:
for (quirk = &mmc_quirks[0]; quirk->mid != 0x0; quirk++) {
if ((quirk->mid == MMC_QUIRK_MID_ANY ||
quirk->mid == ivar->cid.mid) &&
(quirk->oid == MMC_QUIRK_OID_ANY ||
quirk->oid == ivar->cid.oid) &&
strncmp(quirk->pnm, ivar->cid.pnm,
sizeof(ivar->cid.pnm)) == 0) {
ivar->quirks = quirk->quirks;
break;
}
}
/*
* Some cards that report maximum I/O block sizes greater
* than 512 require the block length to be set to 512, even
* though that is supposed to be the default. Example:
*
* Transcend 2GB SDSC card, CID:
* mid=0x1b oid=0x534d pnm="00000" prv=1.0 mdt=00.2000
*/
if (ivar->csd.read_bl_len != MMC_SECTOR_SIZE ||
ivar->csd.write_bl_len != MMC_SECTOR_SIZE)
mmc_set_blocklen(sc, MMC_SECTOR_SIZE);
mmc_format_card_id_string(ivar);
if (bootverbose || mmc_debug)
mmc_log_card(sc->dev, ivar, newcard);
if (newcard) {
/* Add device. */
child = device_add_child(sc->dev, NULL, -1);
if (child != NULL) {
device_set_ivars(child, ivar);
sc->child_list = realloc(sc->child_list,
sizeof(device_t) * (sc->child_count + 1),
M_DEVBUF, M_WAITOK);
sc->child_list[sc->child_count++] = child;
} else
device_printf(sc->dev, "Error adding child\n");
}
free_ivar:
if (newcard && child == NULL)
free(ivar, M_DEVBUF);
(void)mmc_select_card(sc, 0);
/*
* Not returning here when one MMC device could not be added
* potentially would mean looping forever when that device
* is broken (in which case it also may impact the remainder
* of the bus anyway, though).
*/
if ((newcard && child == NULL) ||
mmcbr_get_mode(sc->dev) == mode_sd)
return;
}
}
static void
mmc_update_child_list(struct mmc_softc *sc)
{
device_t child;
int i, j;
if (sc->child_count == 0) {
free(sc->child_list, M_DEVBUF);
sc->child_list = NULL;
return;
}
for (i = j = 0; i < sc->child_count; i++) {
for (;;) {
child = sc->child_list[j++];
if (child != NULL)
break;
}
if (i != j)
sc->child_list[i] = child;
}
sc->child_list = realloc(sc->child_list, sizeof(device_t) *
sc->child_count, M_DEVBUF, M_WAITOK);
}
static void
mmc_rescan_cards(struct mmc_softc *sc)
{
struct mmc_ivars *ivar;
int err, i, j;
for (i = j = 0; i < sc->child_count; i++) {
ivar = device_get_ivars(sc->child_list[i]);
if (mmc_select_card(sc, ivar->rca) != MMC_ERR_NONE) {
if (bootverbose || mmc_debug)
device_printf(sc->dev,
"Card at relative address %d lost\n",
ivar->rca);
err = device_delete_child(sc->dev, sc->child_list[i]);
if (err != 0) {
j++;
continue;
}
free(ivar, M_DEVBUF);
} else
j++;
}
if (sc->child_count == j)
goto out;
sc->child_count = j;
mmc_update_child_list(sc);
out:
(void)mmc_select_card(sc, 0);
}
static int
mmc_delete_cards(struct mmc_softc *sc, bool final)
{
struct mmc_ivars *ivar;
int err, i, j;
err = 0;
for (i = j = 0; i < sc->child_count; i++) {
ivar = device_get_ivars(sc->child_list[i]);
if (bootverbose || mmc_debug)
device_printf(sc->dev,
"Card at relative address %d deleted\n",
ivar->rca);
err = device_delete_child(sc->dev, sc->child_list[i]);
if (err != 0) {
j++;
if (final == false)
continue;
else
break;
}
free(ivar, M_DEVBUF);
}
sc->child_count = j;
mmc_update_child_list(sc);
return (err);
}
static void
mmc_go_discovery(struct mmc_softc *sc)
{
uint32_t ocr;
device_t dev;
int err;
dev = sc->dev;
if (mmcbr_get_power_mode(dev) != power_on) {
/*
* First, try SD modes
*/
sc->squelched++; /* Errors are expected, squelch reporting. */
mmcbr_set_mode(dev, mode_sd);
mmc_power_up(sc);
mmcbr_set_bus_mode(dev, pushpull);
if (bootverbose || mmc_debug)
device_printf(sc->dev, "Probing bus\n");
mmc_idle_cards(sc);
err = mmc_send_if_cond(sc, 1);
if ((bootverbose || mmc_debug) && err == 0)
device_printf(sc->dev,
"SD 2.0 interface conditions: OK\n");
if (mmc_send_app_op_cond(sc, 0, &ocr) != MMC_ERR_NONE) {
if (bootverbose || mmc_debug)
device_printf(sc->dev, "SD probe: failed\n");
/*
* Failed, try MMC
*/
mmcbr_set_mode(dev, mode_mmc);
if (mmc_send_op_cond(sc, 0, &ocr) != MMC_ERR_NONE) {
if (bootverbose || mmc_debug)
device_printf(sc->dev,
"MMC probe: failed\n");
ocr = 0; /* Failed both, powerdown. */
} else if (bootverbose || mmc_debug)
device_printf(sc->dev,
"MMC probe: OK (OCR: 0x%08x)\n", ocr);
} else if (bootverbose || mmc_debug)
device_printf(sc->dev, "SD probe: OK (OCR: 0x%08x)\n",
ocr);
sc->squelched--;
mmcbr_set_ocr(dev, mmc_select_vdd(sc, ocr));
if (mmcbr_get_ocr(dev) != 0)
mmc_idle_cards(sc);
} else {
mmcbr_set_bus_mode(dev, opendrain);
mmcbr_set_clock(dev, SD_MMC_CARD_ID_FREQUENCY);
mmcbr_update_ios(dev);
/* XXX recompute vdd based on new cards? */
}
/*
* Make sure that we have a mutually agreeable voltage to at least
* one card on the bus.
*/
if (bootverbose || mmc_debug)
device_printf(sc->dev, "Current OCR: 0x%08x\n",
mmcbr_get_ocr(dev));
if (mmcbr_get_ocr(dev) == 0) {
device_printf(sc->dev, "No compatible cards found on bus\n");
(void)mmc_delete_cards(sc, false);
mmc_power_down(sc);
return;
}
/*
* Reselect the cards after we've idled them above.
*/
if (mmcbr_get_mode(dev) == mode_sd) {
err = mmc_send_if_cond(sc, 1);
mmc_send_app_op_cond(sc,
(err ? 0 : MMC_OCR_CCS) | mmcbr_get_ocr(dev), NULL);
} else
mmc_send_op_cond(sc, MMC_OCR_CCS | mmcbr_get_ocr(dev), NULL);
mmc_discover_cards(sc);
mmc_rescan_cards(sc);
mmcbr_set_bus_mode(dev, pushpull);
mmcbr_update_ios(dev);
mmc_calculate_clock(sc);
}
static int
mmc_calculate_clock(struct mmc_softc *sc)
{
device_t dev;
struct mmc_ivars *ivar;
int i;
uint32_t dtr, max_dtr;
uint16_t rca;
enum mmc_bus_timing max_timing, timing;
bool changed, hs400;
dev = sc->dev;
max_dtr = mmcbr_get_f_max(dev);
max_timing = bus_timing_max;
do {
changed = false;
for (i = 0; i < sc->child_count; i++) {
ivar = device_get_ivars(sc->child_list[i]);
if (isclr(&ivar->timings, max_timing) ||
!mmc_host_timing(dev, max_timing)) {
for (timing = max_timing - 1; timing >=
bus_timing_normal; timing--) {
if (isset(&ivar->timings, timing) &&
mmc_host_timing(dev, timing)) {
max_timing = timing;
break;
}
}
changed = true;
}
dtr = mmc_timing_to_dtr(ivar, max_timing);
if (dtr < max_dtr) {
max_dtr = dtr;
changed = true;
}
}
} while (changed == true);
if (bootverbose || mmc_debug) {
device_printf(dev,
"setting transfer rate to %d.%03dMHz (%s timing)\n",
max_dtr / 1000000, (max_dtr / 1000) % 1000,
mmc_timing_to_string(max_timing));
}
/*
* HS400 must be tuned in HS200 mode, so in case of HS400 we begin
* with HS200 following the sequence as described in "6.6.2.2 HS200
* timing mode selection" of the eMMC specification v5.1, too, and
* switch to max_timing later. HS400ES requires no tuning and, thus,
* can be switched to directly, but requires the same detour via
* high speed mode as does HS400 (see mmc_switch_to_hs400()).
*/
hs400 = max_timing == bus_timing_mmc_hs400;
timing = hs400 == true ? bus_timing_mmc_hs200 : max_timing;
for (i = 0; i < sc->child_count; i++) {
ivar = device_get_ivars(sc->child_list[i]);
if ((ivar->timings & ~(1 << bus_timing_normal)) == 0)
goto clock;
rca = ivar->rca;
if (mmc_select_card(sc, rca) != MMC_ERR_NONE) {
device_printf(dev, "Card at relative address %d "
"failed to select\n", rca);
continue;
}
if (timing == bus_timing_mmc_hs200 || /* includes HS400 */
timing == bus_timing_mmc_hs400es) {
if (mmc_set_vccq(sc, ivar, timing) != MMC_ERR_NONE) {
device_printf(dev, "Failed to set VCCQ for "
"card at relative address %d\n", rca);
continue;
}
}
if (timing == bus_timing_mmc_hs200) { /* includes HS400 */
/* Set bus width (required for initial tuning). */
if (mmc_set_card_bus_width(sc, ivar, timing) !=
MMC_ERR_NONE) {
device_printf(dev, "Card at relative address "
"%d failed to set bus width\n", rca);
continue;
}
mmcbr_set_bus_width(dev, ivar->bus_width);
mmcbr_update_ios(dev);
} else if (timing == bus_timing_mmc_hs400es) {
if (mmc_switch_to_hs400(sc, ivar, max_dtr, timing) !=
MMC_ERR_NONE) {
device_printf(dev, "Card at relative address "
"%d failed to set %s timing\n", rca,
mmc_timing_to_string(timing));
continue;
}
goto power_class;
}
if (mmc_set_timing(sc, ivar, timing) != MMC_ERR_NONE) {
device_printf(dev, "Card at relative address %d "
"failed to set %s timing\n", rca,
mmc_timing_to_string(timing));
continue;
}
if (timing == bus_timing_mmc_ddr52) {
/*
* Set EXT_CSD_BUS_WIDTH_n_DDR in EXT_CSD_BUS_WIDTH
* (must be done after switching to EXT_CSD_HS_TIMING).
*/
if (mmc_set_card_bus_width(sc, ivar, timing) !=
MMC_ERR_NONE) {
device_printf(dev, "Card at relative address "
"%d failed to set bus width\n", rca);
continue;
}
mmcbr_set_bus_width(dev, ivar->bus_width);
mmcbr_update_ios(dev);
if (mmc_set_vccq(sc, ivar, timing) != MMC_ERR_NONE) {
device_printf(dev, "Failed to set VCCQ for "
"card at relative address %d\n", rca);
continue;
}
}
clock:
/* Set clock (must be done before initial tuning). */
mmcbr_set_clock(dev, max_dtr);
mmcbr_update_ios(dev);
if (mmcbr_tune(dev, hs400) != 0) {
device_printf(dev, "Card at relative address %d "
"failed to execute initial tuning\n", rca);
continue;
}
if (hs400 == true && mmc_switch_to_hs400(sc, ivar, max_dtr,
max_timing) != MMC_ERR_NONE) {
device_printf(dev, "Card at relative address %d "
"failed to set %s timing\n", rca,
mmc_timing_to_string(max_timing));
continue;
}
power_class:
if (mmc_set_power_class(sc, ivar) != MMC_ERR_NONE) {
device_printf(dev, "Card at relative address %d "
"failed to set power class\n", rca);
}
}
(void)mmc_select_card(sc, 0);
return (max_dtr);
}
/*
* Switch from HS200 to HS400 (either initially or for re-tuning) or directly
* to HS400ES. This follows the sequences described in "6.6.2.3 HS400 timing
* mode selection" of the eMMC specification v5.1.
*/
static int
mmc_switch_to_hs400(struct mmc_softc *sc, struct mmc_ivars *ivar,
uint32_t clock, enum mmc_bus_timing max_timing)
{
device_t dev;
int err;
uint16_t rca;
dev = sc->dev;
rca = ivar->rca;
/*
* Both clock and timing must be set as appropriate for high speed
* before eventually switching to HS400/HS400ES; mmc_set_timing()
* will issue mmcbr_update_ios().
*/
mmcbr_set_clock(dev, ivar->hs_tran_speed);
err = mmc_set_timing(sc, ivar, bus_timing_hs);
if (err != MMC_ERR_NONE)
return (err);
/*
* Set EXT_CSD_BUS_WIDTH_8_DDR in EXT_CSD_BUS_WIDTH (and additionally
* EXT_CSD_BUS_WIDTH_ES for HS400ES).
*/
err = mmc_set_card_bus_width(sc, ivar, max_timing);
if (err != MMC_ERR_NONE)
return (err);
mmcbr_set_bus_width(dev, ivar->bus_width);
mmcbr_update_ios(dev);
/* Finally, switch to HS400/HS400ES mode. */
err = mmc_set_timing(sc, ivar, max_timing);
if (err != MMC_ERR_NONE)
return (err);
mmcbr_set_clock(dev, clock);
mmcbr_update_ios(dev);
return (MMC_ERR_NONE);
}
/*
* Switch from HS400 to HS200 (for re-tuning).
*/
static int
mmc_switch_to_hs200(struct mmc_softc *sc, struct mmc_ivars *ivar,
uint32_t clock)
{
device_t dev;
int err;
uint16_t rca;
dev = sc->dev;
rca = ivar->rca;
/*
* Both clock and timing must initially be set as appropriate for
* DDR52 before eventually switching to HS200; mmc_set_timing()
* will issue mmcbr_update_ios().
*/
mmcbr_set_clock(dev, ivar->hs_tran_speed);
err = mmc_set_timing(sc, ivar, bus_timing_mmc_ddr52);
if (err != MMC_ERR_NONE)
return (err);
/*
* Next, switch to high speed. Thus, clear EXT_CSD_BUS_WIDTH_n_DDR
* in EXT_CSD_BUS_WIDTH and update bus width and timing in ios.
*/
err = mmc_set_card_bus_width(sc, ivar, bus_timing_hs);
if (err != MMC_ERR_NONE)
return (err);
mmcbr_set_bus_width(dev, ivar->bus_width);
mmcbr_set_timing(sc->dev, bus_timing_hs);
mmcbr_update_ios(dev);
/* Finally, switch to HS200 mode. */
err = mmc_set_timing(sc, ivar, bus_timing_mmc_hs200);
if (err != MMC_ERR_NONE)
return (err);
mmcbr_set_clock(dev, clock);
mmcbr_update_ios(dev);
return (MMC_ERR_NONE);
}
static int
mmc_retune(device_t busdev, device_t dev, bool reset)
{
struct mmc_softc *sc;
struct mmc_ivars *ivar;
int err;
uint32_t clock;
enum mmc_bus_timing timing;
if (device_get_parent(dev) != busdev)
return (MMC_ERR_INVALID);
sc = device_get_softc(busdev);
if (sc->retune_needed != 1 && sc->retune_paused != 0)
return (MMC_ERR_INVALID);
timing = mmcbr_get_timing(busdev);
if (timing == bus_timing_mmc_hs400) {
/*
* Controllers use the data strobe line to latch data from
* the devices in HS400 mode so periodic re-tuning isn't
* expected to be required, i. e. only if a CRC or tuning
* error is signaled to the bridge. In these latter cases
* we are asked to reset the tuning circuit and need to do
* the switch timing dance.
*/
if (reset == false)
return (0);
ivar = device_get_ivars(dev);
clock = mmcbr_get_clock(busdev);
if (mmc_switch_to_hs200(sc, ivar, clock) != MMC_ERR_NONE)
return (MMC_ERR_BADCRC);
}
err = mmcbr_retune(busdev, reset);
if (err != 0 && timing == bus_timing_mmc_hs400)
return (MMC_ERR_BADCRC);
switch (err) {
case 0:
break;
case EIO:
return (MMC_ERR_FAILED);
default:
return (MMC_ERR_INVALID);
}
if (timing == bus_timing_mmc_hs400) {
if (mmc_switch_to_hs400(sc, ivar, clock, timing) !=
MMC_ERR_NONE)
return (MMC_ERR_BADCRC);
}
return (MMC_ERR_NONE);
}
static void
mmc_retune_pause(device_t busdev, device_t dev, bool retune)
{
struct mmc_softc *sc;
sc = device_get_softc(busdev);
KASSERT(device_get_parent(dev) == busdev,
("%s: %s is not a child of %s", __func__, device_get_nameunit(dev),
device_get_nameunit(busdev)));
KASSERT(sc->owner != NULL,
("%s: Request from %s without bus being acquired.", __func__,
device_get_nameunit(dev)));
if (retune == true && sc->retune_paused == 0)
sc->retune_needed = 1;
sc->retune_paused++;
}
static void
mmc_retune_unpause(device_t busdev, device_t dev)
{
struct mmc_softc *sc;
sc = device_get_softc(busdev);
KASSERT(device_get_parent(dev) == busdev,
("%s: %s is not a child of %s", __func__, device_get_nameunit(dev),
device_get_nameunit(busdev)));
KASSERT(sc->owner != NULL,
("%s: Request from %s without bus being acquired.", __func__,
device_get_nameunit(dev)));
KASSERT(sc->retune_paused != 0,
("%s: Re-tune pause count already at 0", __func__));
sc->retune_paused--;
}
static void
mmc_scan(struct mmc_softc *sc)
{
device_t dev = sc->dev;
int err;
err = mmc_acquire_bus(dev, dev);
if (err != 0) {
device_printf(dev, "Failed to acquire bus for scanning\n");
return;
}
mmc_go_discovery(sc);
err = mmc_release_bus(dev, dev);
if (err != 0) {
device_printf(dev, "Failed to release bus after scanning\n");
return;
}
(void)bus_generic_attach(dev);
}
static int
mmc_read_ivar(device_t bus, device_t child, int which, uintptr_t *result)
{
struct mmc_ivars *ivar = device_get_ivars(child);
switch (which) {
default:
return (EINVAL);
case MMC_IVAR_SPEC_VERS:
*result = ivar->csd.spec_vers;
break;
case MMC_IVAR_DSR_IMP:
*result = ivar->csd.dsr_imp;
break;
case MMC_IVAR_MEDIA_SIZE:
*result = ivar->sec_count;
break;
case MMC_IVAR_RCA:
*result = ivar->rca;
break;
case MMC_IVAR_SECTOR_SIZE:
*result = MMC_SECTOR_SIZE;
break;
case MMC_IVAR_TRAN_SPEED:
*result = mmcbr_get_clock(bus);
break;
case MMC_IVAR_READ_ONLY:
*result = ivar->read_only;
break;
case MMC_IVAR_HIGH_CAP:
*result = ivar->high_cap;
break;
case MMC_IVAR_CARD_TYPE:
*result = ivar->mode;
break;
case MMC_IVAR_BUS_WIDTH:
*result = ivar->bus_width;
break;
case MMC_IVAR_ERASE_SECTOR:
*result = ivar->erase_sector;
break;
case MMC_IVAR_MAX_DATA:
*result = mmcbr_get_max_data(bus);
break;
case MMC_IVAR_CMD6_TIMEOUT:
*result = ivar->cmd6_time;
break;
case MMC_IVAR_QUIRKS:
*result = ivar->quirks;
break;
case MMC_IVAR_CARD_ID_STRING:
*(char **)result = ivar->card_id_string;
break;
case MMC_IVAR_CARD_SN_STRING:
*(char **)result = ivar->card_sn_string;
break;
}
return (0);
}
static int
mmc_write_ivar(device_t bus, device_t child, int which, uintptr_t value)
{
/*
* None are writable ATM
*/
return (EINVAL);
}
static void
mmc_delayed_attach(void *xsc)
{
struct mmc_softc *sc = xsc;
mmc_scan(sc);
config_intrhook_disestablish(&sc->config_intrhook);
}
static int
mmc_child_location_str(device_t dev, device_t child, char *buf,
size_t buflen)
{
snprintf(buf, buflen, "rca=0x%04x", mmc_get_rca(child));
return (0);
}
static device_method_t mmc_methods[] = {
/* device_if */
DEVMETHOD(device_probe, mmc_probe),
DEVMETHOD(device_attach, mmc_attach),
DEVMETHOD(device_detach, mmc_detach),
DEVMETHOD(device_suspend, mmc_suspend),
DEVMETHOD(device_resume, mmc_resume),
/* Bus interface */
DEVMETHOD(bus_read_ivar, mmc_read_ivar),
DEVMETHOD(bus_write_ivar, mmc_write_ivar),
DEVMETHOD(bus_child_location_str, mmc_child_location_str),
/* MMC Bus interface */
DEVMETHOD(mmcbus_retune_pause, mmc_retune_pause),
DEVMETHOD(mmcbus_retune_unpause, mmc_retune_unpause),
DEVMETHOD(mmcbus_wait_for_request, mmc_wait_for_request),
DEVMETHOD(mmcbus_acquire_bus, mmc_acquire_bus),
DEVMETHOD(mmcbus_release_bus, mmc_release_bus),
DEVMETHOD_END
};
driver_t mmc_driver = {
"mmc",
mmc_methods,
sizeof(struct mmc_softc),
};
devclass_t mmc_devclass;
MODULE_VERSION(mmc, MMC_VERSION);
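The child-list growth in mmc_discover_cards() above depends on sizing the new allocation in whole elements; a minimal userland sketch of that pattern (a hypothetical helper using the C library realloc(), not the kernel realloc(9) API):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Growing a pointer list by one entry: the new size must be
 * (count + 1) elements, i.e. sizeof(elem) * (count + 1); writing
 * sizeof(elem) * count + 1 would grow the buffer by a single byte.
 */
struct child;				/* opaque stand-in for device_t */

static struct child **
child_list_grow(struct child **list, int count)
{
	/* Parentheses make the multiplication cover the +1 element. */
	return (realloc(list, sizeof(struct child *) * (count + 1)));
}
```

As in the driver, passing the old list pointer back in lets realloc() preserve the existing entries while appending room for the new child.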
Index: projects/clang800-import/sys/dev/mmc/mmc_private.h
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmc_private.h (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmc_private.h (revision 343956)
@@ -1,74 +1,74 @@
/*-
* Copyright (c) 2006 Bernd Walter. All rights reserved.
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*
* $FreeBSD$
*/
#ifndef DEV_MMC_PRIVATE_H
#define DEV_MMC_PRIVATE_H
struct mmc_softc {
device_t dev;
struct mtx sc_mtx;
struct intr_config_hook config_intrhook;
device_t owner;
device_t *child_list;
int child_count;
uint16_t last_rca;
uint16_t retune_paused;
uint8_t retune_needed;
uint8_t retune_ongoing;
uint16_t squelched; /* suppress reporting of (expected) errors */
int log_count;
struct timeval log_time;
};
#endif /* DEV_MMC_PRIVATE_H */
Index: projects/clang800-import/sys/dev/mmc/mmc_subr.c
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmc_subr.c (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmc_subr.c (revision 343956)
@@ -1,264 +1,264 @@
/*-
* Copyright (c) 2006 Bernd Walter. All rights reserved.
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/time.h>
#include <dev/mmc/bridge.h>
#include <dev/mmc/mmc_private.h>
#include <dev/mmc/mmc_subr.h>
#include <dev/mmc/mmcreg.h>
#include <dev/mmc/mmcbrvar.h>
#include "mmcbus_if.h"
#define CMD_RETRIES 3
#define LOG_PPS 5 /* Log no more than 5 errors per second. */
int
mmc_wait_for_cmd(device_t busdev, device_t dev, struct mmc_command *cmd,
int retries)
{
struct mmc_request mreq;
struct mmc_softc *sc;
int err;
do {
memset(&mreq, 0, sizeof(mreq));
memset(cmd->resp, 0, sizeof(cmd->resp));
cmd->retries = 0; /* Retries done here, not in hardware. */
cmd->mrq = &mreq;
if (cmd->data != NULL)
cmd->data->mrq = &mreq;
mreq.cmd = cmd;
if (MMCBUS_WAIT_FOR_REQUEST(busdev, dev, &mreq) != 0)
err = MMC_ERR_FAILED;
else
err = cmd->error;
} while (err != MMC_ERR_NONE && retries-- > 0);
if (err != MMC_ERR_NONE && busdev == dev) {
sc = device_get_softc(busdev);
if (sc->squelched == 0 && ppsratecheck(&sc->log_time,
&sc->log_count, LOG_PPS)) {
device_printf(sc->dev, "CMD%d failed, RESULT: %d\n",
cmd->opcode, err);
}
}
return (err);
}
int
mmc_wait_for_app_cmd(device_t busdev, device_t dev, uint16_t rca,
struct mmc_command *cmd, int retries)
{
struct mmc_command appcmd;
struct mmc_softc *sc;
int err;
sc = device_get_softc(busdev);
/* Squelch error reporting at lower levels, we report below. */
sc->squelched++;
do {
memset(&appcmd, 0, sizeof(appcmd));
appcmd.opcode = MMC_APP_CMD;
appcmd.arg = (uint32_t)rca << 16;
appcmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
if (mmc_wait_for_cmd(busdev, dev, &appcmd, 0) != 0)
err = MMC_ERR_FAILED;
else
err = appcmd.error;
if (err == MMC_ERR_NONE) {
if (!(appcmd.resp[0] & R1_APP_CMD))
err = MMC_ERR_FAILED;
else if (mmc_wait_for_cmd(busdev, dev, cmd, 0) != 0)
err = MMC_ERR_FAILED;
else
err = cmd->error;
}
} while (err != MMC_ERR_NONE && retries-- > 0);
sc->squelched--;
if (err != MMC_ERR_NONE && busdev == dev) {
if (sc->squelched == 0 && ppsratecheck(&sc->log_time,
&sc->log_count, LOG_PPS)) {
device_printf(sc->dev, "ACMD%d failed, RESULT: %d\n",
cmd->opcode, err);
}
}
return (err);
}
int
mmc_switch(device_t busdev, device_t dev, uint16_t rca, uint8_t set,
uint8_t index, uint8_t value, u_int timeout, bool status)
{
struct mmc_command cmd;
struct mmc_softc *sc;
int err;
KASSERT(timeout != 0, ("%s: no timeout", __func__));
sc = device_get_softc(busdev);
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_SWITCH_FUNC;
cmd.arg = (MMC_SWITCH_FUNC_WR << 24) | (index << 16) | (value << 8) |
set;
/*
* If the hardware supports busy detection but the switch timeout
* exceeds the maximum host timeout, use a R1 instead of a R1B
* response in order to keep the hardware from timing out.
*/
if (mmcbr_get_caps(busdev) & MMC_CAP_WAIT_WHILE_BUSY &&
timeout > mmcbr_get_max_busy_timeout(busdev))
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
else
cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
/*
* Pause re-tuning so it won't interfere with the busy state and also
* so that the result of CMD13 will always refer to switching rather
* than to a tuning command that may have snuck in between.
*/
sc->retune_paused++;
err = mmc_wait_for_cmd(busdev, dev, &cmd, CMD_RETRIES);
if (err != MMC_ERR_NONE || status == false)
goto out;
err = mmc_switch_status(busdev, dev, rca, timeout);
out:
sc->retune_paused--;
return (err);
}
int
mmc_switch_status(device_t busdev, device_t dev, uint16_t rca, u_int timeout)
{
struct timeval cur, end;
int err;
uint32_t status;
KASSERT(timeout != 0, ("%s: no timeout", __func__));
/*
* Note that when using a R1B response in mmc_switch(), bridges of
* type MMC_CAP_WAIT_WHILE_BUSY will issue mmc_send_status() only
* once and then exit the loop.
*/
end.tv_sec = end.tv_usec = 0;
for (;;) {
err = mmc_send_status(busdev, dev, rca, &status);
if (err != MMC_ERR_NONE)
break;
if (R1_CURRENT_STATE(status) == R1_STATE_TRAN)
break;
getmicrouptime(&cur);
if (end.tv_sec == 0 && end.tv_usec == 0) {
end.tv_usec = timeout;
timevaladd(&end, &cur);
}
if (timevalcmp(&cur, &end, >)) {
err = MMC_ERR_TIMEOUT;
break;
}
}
if (err == MMC_ERR_NONE && (status & R1_SWITCH_ERROR) != 0)
return (MMC_ERR_FAILED);
return (err);
}
int
mmc_send_ext_csd(device_t busdev, device_t dev, uint8_t *rawextcsd)
{
struct mmc_command cmd;
struct mmc_data data;
int err;
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
memset(rawextcsd, 0, MMC_EXTCSD_SIZE);
cmd.opcode = MMC_SEND_EXT_CSD;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
cmd.data = &data;
data.data = rawextcsd;
data.len = MMC_EXTCSD_SIZE;
data.flags = MMC_DATA_READ;
err = mmc_wait_for_cmd(busdev, dev, &cmd, CMD_RETRIES);
return (err);
}
int
mmc_send_status(device_t busdev, device_t dev, uint16_t rca, uint32_t *status)
{
struct mmc_command cmd;
int err;
memset(&cmd, 0, sizeof(cmd));
cmd.opcode = MMC_SEND_STATUS;
cmd.arg = (uint32_t)rca << 16;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
err = mmc_wait_for_cmd(busdev, dev, &cmd, CMD_RETRIES);
*status = cmd.resp[0];
return (err);
}
Index: projects/clang800-import/sys/dev/mmc/mmc_subr.h
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmc_subr.h (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmc_subr.h (revision 343956)
@@ -1,72 +1,72 @@
/*-
* Copyright (c) 2006 Bernd Walter. All rights reserved.
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*
* $FreeBSD$
*/
#ifndef DEV_MMC_SUBR_H
#define DEV_MMC_SUBR_H
struct mmc_command;
int mmc_send_ext_csd(device_t busdev, device_t dev, uint8_t *rawextcsd);
int mmc_send_status(device_t busdev, device_t dev, uint16_t rca,
uint32_t *status);
int mmc_switch(device_t busdev, device_t dev, uint16_t rca, uint8_t set,
uint8_t index, uint8_t value, u_int timeout, bool send_status);
int mmc_switch_status(device_t busdev, device_t dev, uint16_t rca,
u_int timeout);
int mmc_wait_for_app_cmd(device_t busdev, device_t dev, uint16_t rca,
struct mmc_command *cmd, int retries);
int mmc_wait_for_cmd(device_t busdev, device_t dev, struct mmc_command *cmd,
int retries);
#endif /* DEV_MMC_SUBR_H */
Index: projects/clang800-import/sys/dev/mmc/mmcbrvar.h
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmcbrvar.h (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmcbrvar.h (revision 343956)
@@ -1,156 +1,156 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2006 Bernd Walter. All rights reserved.
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*
* $FreeBSD$
*/
#ifndef DEV_MMC_MMCBRVAR_H
#define DEV_MMC_MMCBRVAR_H
#include <dev/mmc/mmcreg.h>
#include "mmcbr_if.h"
enum mmcbr_device_ivars {
MMCBR_IVAR_BUS_MODE,
MMCBR_IVAR_BUS_WIDTH,
MMCBR_IVAR_CHIP_SELECT,
MMCBR_IVAR_CLOCK,
MMCBR_IVAR_F_MIN,
MMCBR_IVAR_F_MAX,
MMCBR_IVAR_HOST_OCR,
MMCBR_IVAR_MODE,
MMCBR_IVAR_OCR,
MMCBR_IVAR_POWER_MODE,
MMCBR_IVAR_RETUNE_REQ,
MMCBR_IVAR_VDD,
MMCBR_IVAR_VCCQ,
MMCBR_IVAR_CAPS,
MMCBR_IVAR_TIMING,
MMCBR_IVAR_MAX_DATA,
MMCBR_IVAR_MAX_BUSY_TIMEOUT
};
/*
* Simplified accessors for bridge devices
*/
#define MMCBR_ACCESSOR(var, ivar, type) \
__BUS_ACCESSOR(mmcbr, var, MMCBR, ivar, type)
MMCBR_ACCESSOR(bus_mode, BUS_MODE, int)
MMCBR_ACCESSOR(bus_width, BUS_WIDTH, int)
MMCBR_ACCESSOR(chip_select, CHIP_SELECT, int)
MMCBR_ACCESSOR(clock, CLOCK, int)
MMCBR_ACCESSOR(f_max, F_MAX, int)
MMCBR_ACCESSOR(f_min, F_MIN, int)
MMCBR_ACCESSOR(host_ocr, HOST_OCR, int)
MMCBR_ACCESSOR(mode, MODE, int)
MMCBR_ACCESSOR(ocr, OCR, int)
MMCBR_ACCESSOR(power_mode, POWER_MODE, int)
MMCBR_ACCESSOR(vdd, VDD, int)
MMCBR_ACCESSOR(vccq, VCCQ, int)
MMCBR_ACCESSOR(caps, CAPS, int)
MMCBR_ACCESSOR(timing, TIMING, int)
MMCBR_ACCESSOR(max_data, MAX_DATA, int)
MMCBR_ACCESSOR(max_busy_timeout, MAX_BUSY_TIMEOUT, u_int)
static int __inline
mmcbr_get_retune_req(device_t dev)
{
uintptr_t v;
if (__predict_false(BUS_READ_IVAR(device_get_parent(dev), dev,
MMCBR_IVAR_RETUNE_REQ, &v) != 0))
return (retune_req_none);
return ((int)v);
}
/*
* Convenience wrappers for the mmcbr interface
*/
static int __inline
mmcbr_update_ios(device_t dev)
{
return (MMCBR_UPDATE_IOS(device_get_parent(dev), dev));
}
static int __inline
mmcbr_tune(device_t dev, bool hs400)
{
return (MMCBR_TUNE(device_get_parent(dev), dev, hs400));
}
static int __inline
mmcbr_retune(device_t dev, bool reset)
{
return (MMCBR_RETUNE(device_get_parent(dev), dev, reset));
}
static int __inline
mmcbr_switch_vccq(device_t dev)
{
return (MMCBR_SWITCH_VCCQ(device_get_parent(dev), dev));
}
static int __inline
mmcbr_get_ro(device_t dev)
{
return (MMCBR_GET_RO(device_get_parent(dev), dev));
}
#endif /* DEV_MMC_MMCBRVAR_H */
Index: projects/clang800-import/sys/dev/mmc/mmcreg.h
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmcreg.h (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmcreg.h (revision 343956)
@@ -1,722 +1,722 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
* Copyright (c) 2017 Marius Strobl <marius@FreeBSD.org>
* Copyright (c) 2015-2016 Ilya Bakulin <kibab@FreeBSD.org>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*
* $FreeBSD$
*/
#ifndef DEV_MMC_MMCREG_H
#define DEV_MMC_MMCREG_H
/*
* This file contains the register definitions for the mmc and sd buses.
* They are taken from publicly available sources.
*/
struct mmc_data;
struct mmc_request;
struct mmc_command {
uint32_t opcode;
uint32_t arg;
uint32_t resp[4];
uint32_t flags; /* Expected responses */
#define MMC_RSP_PRESENT (1ul << 0) /* Response */
#define MMC_RSP_136 (1ul << 1) /* 136 bit response */
#define MMC_RSP_CRC (1ul << 2) /* Expect valid crc */
#define MMC_RSP_BUSY (1ul << 3) /* Card may send busy */
#define MMC_RSP_OPCODE (1ul << 4) /* Response includes opcode */
#define MMC_RSP_MASK 0x1ful
#define MMC_CMD_AC (0ul << 5) /* Addressed Command, no data */
#define MMC_CMD_ADTC (1ul << 5) /* Addressed Data transfer cmd */
#define MMC_CMD_BC (2ul << 5) /* Broadcast command, no response */
#define MMC_CMD_BCR (3ul << 5) /* Broadcast command with response */
#define MMC_CMD_MASK (3ul << 5)
/* Possible response types defined in the standard: */
#define MMC_RSP_NONE (0)
#define MMC_RSP_R1 (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE)
#define MMC_RSP_R1B (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE | MMC_RSP_BUSY)
#define MMC_RSP_R2 (MMC_RSP_PRESENT | MMC_RSP_136 | MMC_RSP_CRC)
#define MMC_RSP_R3 (MMC_RSP_PRESENT)
#define MMC_RSP_R4 (MMC_RSP_PRESENT)
#define MMC_RSP_R5 (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE)
#define MMC_RSP_R5B (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE | MMC_RSP_BUSY)
#define MMC_RSP_R6 (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE)
#define MMC_RSP_R7 (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE)
#define MMC_RSP(x) ((x) & MMC_RSP_MASK)
uint32_t retries;
uint32_t error;
#define MMC_ERR_NONE 0
#define MMC_ERR_TIMEOUT 1
#define MMC_ERR_BADCRC 2
#define MMC_ERR_FIFO 3
#define MMC_ERR_FAILED 4
#define MMC_ERR_INVALID 5
#define MMC_ERR_NO_MEMORY 6
#define MMC_ERR_MAX 6
struct mmc_data *data; /* Data segment with cmd */
struct mmc_request *mrq; /* backpointer to request */
};
/*
* R1 responses
*
* Types (per SD 2.0 standard)
* e : error bit
* s : status bit
* r : detected and set for the actual command response
* x : Detected and set during command execution. The host can get
* the status by issuing a command with R1 response.
*
* Clear Condition (per SD 2.0 standard)
* a : according to the card's current state.
* b : always related to the previous command. Reception of a valid
* command will clear it (with a delay of one command).
* c : clear by read
*/
#define R1_OUT_OF_RANGE (1u << 31) /* erx, c */
#define R1_ADDRESS_ERROR (1u << 30) /* erx, c */
#define R1_BLOCK_LEN_ERROR (1u << 29) /* erx, c */
#define R1_ERASE_SEQ_ERROR (1u << 28) /* er, c */
#define R1_ERASE_PARAM (1u << 27) /* erx, c */
#define R1_WP_VIOLATION (1u << 26) /* erx, c */
#define R1_CARD_IS_LOCKED (1u << 25) /* sx, a */
#define R1_LOCK_UNLOCK_FAILED (1u << 24) /* erx, c */
#define R1_COM_CRC_ERROR (1u << 23) /* er, b */
#define R1_ILLEGAL_COMMAND (1u << 22) /* er, b */
#define R1_CARD_ECC_FAILED (1u << 21) /* erx, c */
#define R1_CC_ERROR (1u << 20) /* erx, c */
#define R1_ERROR (1u << 19) /* erx, c */
#define R1_CSD_OVERWRITE (1u << 16) /* erx, c */
#define R1_WP_ERASE_SKIP (1u << 15) /* erx, c */
#define R1_CARD_ECC_DISABLED (1u << 14) /* sx, a */
#define R1_ERASE_RESET (1u << 13) /* sr, c */
#define R1_CURRENT_STATE_MASK (0xfu << 9) /* sx, b */
#define R1_READY_FOR_DATA (1u << 8) /* sx, a */
#define R1_SWITCH_ERROR (1u << 7) /* sx, c */
#define R1_APP_CMD (1u << 5) /* sr, c */
#define R1_AKE_SEQ_ERROR (1u << 3) /* er, c */
#define R1_STATUS(x) ((x) & 0xFFFFE000)
#define R1_CURRENT_STATE(x) (((x) & R1_CURRENT_STATE_MASK) >> 9)
#define R1_STATE_IDLE 0
#define R1_STATE_READY 1
#define R1_STATE_IDENT 2
#define R1_STATE_STBY 3
#define R1_STATE_TRAN 4
#define R1_STATE_DATA 5
#define R1_STATE_RCV 6
#define R1_STATE_PRG 7
#define R1_STATE_DIS 8
/* R4 responses (SDIO) */
#define R4_IO_NUM_FUNCTIONS(ocr) (((ocr) >> 28) & 0x3)
#define R4_IO_MEM_PRESENT (0x1 << 27)
#define R4_IO_OCR_MASK 0x00fffff0
/*
* R5 responses
*
* Types (per SD 2.0 standard)
* e : error bit
* s : status bit
* r : detected and set for the actual command response
* x : Detected and set during command execution. The host can get
* the status by issuing a command with R1 response.
*
* Clear Condition (per SD 2.0 standard)
* a : according to the card's current state.
* b : always related to the previous command. Reception of a valid
* command will clear it (with a delay of one command).
* c : clear by read
*/
#define R5_COM_CRC_ERROR (1u << 15) /* er, b */
#define R5_ILLEGAL_COMMAND (1u << 14) /* er, b */
#define R5_IO_CURRENT_STATE_MASK (3u << 12) /* s, b */
#define R5_IO_CURRENT_STATE(x) (((x) & R5_IO_CURRENT_STATE_MASK) >> 12)
#define R5_ERROR (1u << 11) /* erx, c */
#define R5_FUNCTION_NUMBER (1u << 9) /* er, c */
#define R5_OUT_OF_RANGE (1u << 8) /* er, c */
struct mmc_data {
size_t len; /* size of the data */
size_t xfer_len;
void *data; /* data buffer */
uint32_t flags;
#define MMC_DATA_WRITE (1UL << 0)
#define MMC_DATA_READ (1UL << 1)
#define MMC_DATA_STREAM (1UL << 2)
#define MMC_DATA_MULTI (1UL << 3)
struct mmc_request *mrq;
};
struct mmc_request {
struct mmc_command *cmd;
struct mmc_command *stop;
void (*done)(struct mmc_request *); /* Completion function */
void *done_data; /* requestor set data */
uint32_t flags;
#define MMC_REQ_DONE 1
#define MMC_TUNE_DONE 2
};
/* Command definitions */
/* Class 0 and 1: Basic commands & read stream commands */
#define MMC_GO_IDLE_STATE 0
#define MMC_SEND_OP_COND 1
#define MMC_ALL_SEND_CID 2
#define MMC_SET_RELATIVE_ADDR 3
#define SD_SEND_RELATIVE_ADDR 3
#define MMC_SET_DSR 4
#define MMC_SLEEP_AWAKE 5
#define IO_SEND_OP_COND 5
#define MMC_SWITCH_FUNC 6
#define MMC_SWITCH_FUNC_CMDS 0
#define MMC_SWITCH_FUNC_SET 1
#define MMC_SWITCH_FUNC_CLR 2
#define MMC_SWITCH_FUNC_WR 3
#define MMC_SELECT_CARD 7
#define MMC_DESELECT_CARD 7
#define MMC_SEND_EXT_CSD 8
#define SD_SEND_IF_COND 8
#define MMC_SEND_CSD 9
#define MMC_SEND_CID 10
#define MMC_READ_DAT_UNTIL_STOP 11
#define MMC_STOP_TRANSMISSION 12
#define MMC_SEND_STATUS 13
#define MMC_BUSTEST_R 14
#define MMC_GO_INACTIVE_STATE 15
#define MMC_BUSTEST_W 19
/* Class 2: Block oriented read commands */
#define MMC_SET_BLOCKLEN 16
#define MMC_READ_SINGLE_BLOCK 17
#define MMC_READ_MULTIPLE_BLOCK 18
#define MMC_SEND_TUNING_BLOCK 19
#define MMC_SEND_TUNING_BLOCK_HS200 21
/* Class 3: Stream write commands */
#define MMC_WRITE_DAT_UNTIL_STOP 20
/* reserved: 22 */
/* Class 4: Block oriented write commands */
#define MMC_SET_BLOCK_COUNT 23
#define MMC_WRITE_BLOCK 24
#define MMC_WRITE_MULTIPLE_BLOCK 25
#define MMC_PROGARM_CID 26
#define MMC_PROGRAM_CSD 27
/* Class 6: Block oriented write protection commands */
#define MMC_SET_WRITE_PROT 28
#define MMC_CLR_WRITE_PROT 29
#define MMC_SEND_WRITE_PROT 30
/* reserved: 31 */
/* Class 5: Erase commands */
#define SD_ERASE_WR_BLK_START 32
#define SD_ERASE_WR_BLK_END 33
/* 34 -- reserved old command */
#define MMC_ERASE_GROUP_START 35
#define MMC_ERASE_GROUP_END 36
/* 37 -- reserved old command */
#define MMC_ERASE 38
#define MMC_ERASE_ERASE 0x00000000
#define MMC_ERASE_TRIM 0x00000001
#define MMC_ERASE_FULE 0x00000002
#define MMC_ERASE_DISCARD 0x00000003
#define MMC_ERASE_SECURE_ERASE 0x80000000
#define MMC_ERASE_SECURE_TRIM1 0x80000001
#define MMC_ERASE_SECURE_TRIM2 0x80008000
/* Class 9: I/O mode commands */
#define MMC_FAST_IO 39
#define MMC_GO_IRQ_STATE 40
/* reserved: 41 */
/* Class 7: Lock card */
#define MMC_LOCK_UNLOCK 42
/* reserved: 43 */
/* reserved: 44 */
/* reserved: 45 */
/* reserved: 46 */
/* reserved: 47 */
/* reserved: 48 */
/* reserved: 49 */
/* reserved: 50 */
/* reserved: 51 */
/* reserved: 54 */
/* Class 8: Application specific commands */
#define MMC_APP_CMD 55
#define MMC_GEN_CMD 56
/* reserved: 57 */
/* reserved: 58 */
/* reserved: 59 */
/* reserved for mfg: 60 */
/* reserved for mfg: 61 */
/* reserved for mfg: 62 */
/* reserved for mfg: 63 */
/* Class 9: I/O cards (sd) */
#define SD_IO_RW_DIRECT 52
/* CMD52 arguments */
#define SD_ARG_CMD52_READ (0 << 31)
#define SD_ARG_CMD52_WRITE (1 << 31)
#define SD_ARG_CMD52_FUNC_SHIFT 28
#define SD_ARG_CMD52_FUNC_MASK 0x7
#define SD_ARG_CMD52_EXCHANGE (1 << 27)
#define SD_ARG_CMD52_REG_SHIFT 9
#define SD_ARG_CMD52_REG_MASK 0x1ffff
#define SD_ARG_CMD52_DATA_SHIFT 0
#define SD_ARG_CMD52_DATA_MASK 0xff
#define SD_R5_DATA(resp) ((resp)[0] & 0xff)
#define SD_IO_RW_EXTENDED 53
/* CMD53 arguments */
#define SD_ARG_CMD53_READ (0 << 31)
#define SD_ARG_CMD53_WRITE (1 << 31)
#define SD_ARG_CMD53_FUNC_SHIFT 28
#define SD_ARG_CMD53_FUNC_MASK 0x7
#define SD_ARG_CMD53_BLOCK_MODE (1 << 27)
#define SD_ARG_CMD53_INCREMENT (1 << 26)
#define SD_ARG_CMD53_REG_SHIFT 9
#define SD_ARG_CMD53_REG_MASK 0x1ffff
#define SD_ARG_CMD53_LENGTH_SHIFT 0
#define SD_ARG_CMD53_LENGTH_MASK 0x1ff
#define SD_ARG_CMD53_LENGTH_MAX 64 /* XXX should be 511? */
/* Class 10: Switch function commands */
#define SD_SWITCH_FUNC 6
/* reserved: 34 */
/* reserved: 35 */
/* reserved: 36 */
/* reserved: 37 */
/* reserved: 50 */
/* reserved: 57 */
/* Application specific commands for SD */
#define ACMD_SET_BUS_WIDTH 6
#define ACMD_SD_STATUS 13
#define ACMD_SEND_NUM_WR_BLOCKS 22
#define ACMD_SET_WR_BLK_ERASE_COUNT 23
#define ACMD_SD_SEND_OP_COND 41
#define ACMD_SET_CLR_CARD_DETECT 42
#define ACMD_SEND_SCR 51
/*
* EXT_CSD fields
*/
#define EXT_CSD_FLUSH_CACHE 32 /* W/E */
#define EXT_CSD_CACHE_CTRL 33 /* R/W/E */
#define EXT_CSD_EXT_PART_ATTR 52 /* R/W, 2 bytes */
#define EXT_CSD_ENH_START_ADDR 136 /* R/W, 4 bytes */
#define EXT_CSD_ENH_SIZE_MULT 140 /* R/W, 3 bytes */
#define EXT_CSD_GP_SIZE_MULT 143 /* R/W, 12 bytes */
#define EXT_CSD_PART_SET 155 /* R/W */
#define EXT_CSD_PART_ATTR 156 /* R/W */
#define EXT_CSD_PART_SUPPORT 160 /* RO */
#define EXT_CSD_RPMB_MULT 168 /* RO */
#define EXT_CSD_BOOT_WP_STATUS 174 /* RO */
#define EXT_CSD_ERASE_GRP_DEF 175 /* R/W */
#define EXT_CSD_PART_CONFIG 179 /* R/W */
#define EXT_CSD_BUS_WIDTH 183 /* R/W */
#define EXT_CSD_STROBE_SUPPORT 184 /* RO */
#define EXT_CSD_HS_TIMING 185 /* R/W */
#define EXT_CSD_POWER_CLASS 187 /* R/W */
#define EXT_CSD_CARD_TYPE 196 /* RO */
#define EXT_CSD_DRIVER_STRENGTH 197 /* RO */
#define EXT_CSD_REV 192 /* RO */
#define EXT_CSD_PART_SWITCH_TO 199 /* RO */
#define EXT_CSD_PWR_CL_52_195 200 /* RO */
#define EXT_CSD_PWR_CL_26_195 201 /* RO */
#define EXT_CSD_PWR_CL_52_360 202 /* RO */
#define EXT_CSD_PWR_CL_26_360 203 /* RO */
#define EXT_CSD_SEC_CNT 212 /* RO, 4 bytes */
#define EXT_CSD_HC_WP_GRP_SIZE 221 /* RO */
#define EXT_CSD_ERASE_TO_MULT 223 /* RO */
#define EXT_CSD_ERASE_GRP_SIZE 224 /* RO */
#define EXT_CSD_BOOT_SIZE_MULT 226 /* RO */
#define EXT_CSD_SEC_FEATURE_SUPPORT 231 /* RO */
#define EXT_CSD_PWR_CL_200_195 236 /* RO */
#define EXT_CSD_PWR_CL_200_360 237 /* RO */
#define EXT_CSD_PWR_CL_52_195_DDR 238 /* RO */
#define EXT_CSD_PWR_CL_52_360_DDR 239 /* RO */
#define EXT_CSD_CACHE_FLUSH_POLICY 240 /* RO */
#define EXT_CSD_GEN_CMD6_TIME 248 /* RO */
#define EXT_CSD_CACHE_SIZE 249 /* RO, 4 bytes */
#define EXT_CSD_PWR_CL_200_360_DDR 253 /* RO */
/*
* EXT_CSD field definitions
*/
#define EXT_CSD_FLUSH_CACHE_FLUSH 0x01
#define EXT_CSD_FLUSH_CACHE_BARRIER 0x02
#define EXT_CSD_CACHE_CTRL_CACHE_EN 0x01
#define EXT_CSD_EXT_PART_ATTR_DEFAULT 0x0
#define EXT_CSD_EXT_PART_ATTR_SYSTEMCODE 0x1
#define EXT_CSD_EXT_PART_ATTR_NPERSISTENT 0x2
#define EXT_CSD_PART_SET_COMPLETED 0x01
#define EXT_CSD_PART_ATTR_ENH_USR 0x01
#define EXT_CSD_PART_ATTR_ENH_GP0 0x02
#define EXT_CSD_PART_ATTR_ENH_GP1 0x04
#define EXT_CSD_PART_ATTR_ENH_GP2 0x08
#define EXT_CSD_PART_ATTR_ENH_GP3 0x10
#define EXT_CSD_PART_ATTR_ENH_MASK 0x1f
#define EXT_CSD_PART_SUPPORT_EN 0x01
#define EXT_CSD_PART_SUPPORT_ENH_ATTR_EN 0x02
#define EXT_CSD_PART_SUPPORT_EXT_ATTR_EN 0x04
#define EXT_CSD_BOOT_WP_STATUS_BOOT0_PWR 0x01
#define EXT_CSD_BOOT_WP_STATUS_BOOT0_PERM 0x02
#define EXT_CSD_BOOT_WP_STATUS_BOOT0_MASK 0x03
#define EXT_CSD_BOOT_WP_STATUS_BOOT1_PWR 0x04
#define EXT_CSD_BOOT_WP_STATUS_BOOT1_PERM 0x08
#define EXT_CSD_BOOT_WP_STATUS_BOOT1_MASK 0x0c
#define EXT_CSD_ERASE_GRP_DEF_EN 0x01
#define EXT_CSD_PART_CONFIG_ACC_DEFAULT 0x00
#define EXT_CSD_PART_CONFIG_ACC_BOOT0 0x01
#define EXT_CSD_PART_CONFIG_ACC_BOOT1 0x02
#define EXT_CSD_PART_CONFIG_ACC_RPMB 0x03
#define EXT_CSD_PART_CONFIG_ACC_GP0 0x04
#define EXT_CSD_PART_CONFIG_ACC_GP1 0x05
#define EXT_CSD_PART_CONFIG_ACC_GP2 0x06
#define EXT_CSD_PART_CONFIG_ACC_GP3 0x07
#define EXT_CSD_PART_CONFIG_ACC_MASK 0x07
#define EXT_CSD_PART_CONFIG_BOOT0 0x08
#define EXT_CSD_PART_CONFIG_BOOT1 0x10
#define EXT_CSD_PART_CONFIG_BOOT_USR 0x38
#define EXT_CSD_PART_CONFIG_BOOT_MASK 0x38
#define EXT_CSD_PART_CONFIG_BOOT_ACK 0x40
#define EXT_CSD_CMD_SET_NORMAL 1
#define EXT_CSD_CMD_SET_SECURE 2
#define EXT_CSD_CMD_SET_CPSECURE 4
#define EXT_CSD_HS_TIMING_BC 0
#define EXT_CSD_HS_TIMING_HS 1
#define EXT_CSD_HS_TIMING_HS200 2
#define EXT_CSD_HS_TIMING_HS400 3
#define EXT_CSD_HS_TIMING_DRV_STR_SHIFT 4
#define EXT_CSD_POWER_CLASS_8BIT_MASK 0xf0
#define EXT_CSD_POWER_CLASS_8BIT_SHIFT 4
#define EXT_CSD_POWER_CLASS_4BIT_MASK 0x0f
#define EXT_CSD_POWER_CLASS_4BIT_SHIFT 0
#define EXT_CSD_CARD_TYPE_HS_26 0x0001
#define EXT_CSD_CARD_TYPE_HS_52 0x0002
#define EXT_CSD_CARD_TYPE_DDR_52_1_8V 0x0004
#define EXT_CSD_CARD_TYPE_DDR_52_1_2V 0x0008
#define EXT_CSD_CARD_TYPE_HS200_1_8V 0x0010
#define EXT_CSD_CARD_TYPE_HS200_1_2V 0x0020
#define EXT_CSD_CARD_TYPE_HS400_1_8V 0x0040
#define EXT_CSD_CARD_TYPE_HS400_1_2V 0x0080
#define EXT_CSD_BUS_WIDTH_1 0
#define EXT_CSD_BUS_WIDTH_4 1
#define EXT_CSD_BUS_WIDTH_8 2
#define EXT_CSD_BUS_WIDTH_4_DDR 5
#define EXT_CSD_BUS_WIDTH_8_DDR 6
#define EXT_CSD_BUS_WIDTH_ES 0x80
#define EXT_CSD_STROBE_SUPPORT_EN 0x01
#define EXT_CSD_SEC_FEATURE_SUPPORT_ER_EN 0x01
#define EXT_CSD_SEC_FEATURE_SUPPORT_BD_BLK_EN 0x04
#define EXT_CSD_SEC_FEATURE_SUPPORT_GB_CL_EN 0x10
#define EXT_CSD_SEC_FEATURE_SUPPORT_SANITIZE 0x40
#define EXT_CSD_CACHE_FLUSH_POLICY_FIFO 0x01
/*
* Vendor specific EXT_CSD fields
*/
/* SanDisk iNAND */
#define EXT_CSD_INAND_CMD38 113
#define EXT_CSD_INAND_CMD38_ERASE 0x00
#define EXT_CSD_INAND_CMD38_TRIM 0x01
#define EXT_CSD_INAND_CMD38_SECURE_ERASE 0x80
#define EXT_CSD_INAND_CMD38_SECURE_TRIM1 0x81
#define EXT_CSD_INAND_CMD38_SECURE_TRIM2 0x82
#define MMC_TYPE_HS_26_MAX 26000000
#define MMC_TYPE_HS_52_MAX 52000000
#define MMC_TYPE_DDR52_MAX 52000000
#define MMC_TYPE_HS200_HS400ES_MAX 200000000
/*
* SD bus widths
*/
#define SD_BUS_WIDTH_1 0
#define SD_BUS_WIDTH_4 2
/*
* SD Switch
*/
#define SD_SWITCH_MODE_CHECK 0
#define SD_SWITCH_MODE_SET 1
#define SD_SWITCH_GROUP1 0
#define SD_SWITCH_NORMAL_MODE 0
#define SD_SWITCH_HS_MODE 1
#define SD_SWITCH_SDR50_MODE 2
#define SD_SWITCH_SDR104_MODE 3
#define SD_SWITCH_DDR50 4
#define SD_SWITCH_NOCHANGE 0xF
#define SD_CLR_CARD_DETECT 0
#define SD_SET_CARD_DETECT 1
#define SD_HS_MAX 50000000
#define SD_DDR50_MAX 50000000
#define SD_SDR12_MAX 25000000
#define SD_SDR25_MAX 50000000
#define SD_SDR50_MAX 100000000
#define SD_SDR104_MAX 208000000
/* Specifications require 400 kHz max. during ID phase. */
#define SD_MMC_CARD_ID_FREQUENCY 400000
/*
* SDIO Direct & Extended I/O
*/
#define SD_IO_RW_WR (1u << 31)
#define SD_IO_RW_FUNC(x) (((x) & 0x7) << 28)
#define SD_IO_RW_RAW (1u << 27)
#define SD_IO_RW_INCR (1u << 26)
#define SD_IO_RW_ADR(x) (((x) & 0x1FFFF) << 9)
#define SD_IO_RW_DAT(x) (((x) & 0xFF) << 0)
#define SD_IO_RW_LEN(x) (((x) & 0xFF) << 0)
#define SD_IOE_RW_LEN(x) (((x) & 0x1FF) << 0)
#define SD_IOE_RW_BLK (1u << 27)
/* Card Common Control Registers (CCCR) */
#define SD_IO_CCCR_START 0x00000
#define SD_IO_CCCR_SIZE 0x100
#define SD_IO_CCCR_FN_ENABLE 0x02
#define SD_IO_CCCR_FN_READY 0x03
#define SD_IO_CCCR_INT_ENABLE 0x04
#define SD_IO_CCCR_INT_PENDING 0x05
#define SD_IO_CCCR_CTL 0x06
#define CCCR_CTL_RES (1 << 3)
#define SD_IO_CCCR_BUS_WIDTH 0x07
#define CCCR_BUS_WIDTH_4 (1 << 1)
#define CCCR_BUS_WIDTH_1 (1 << 0)
#define SD_IO_CCCR_CARDCAP 0x08
#define SD_IO_CCCR_CISPTR 0x09 /* XXX 9-10, 10-11, or 9-12 */
/* Function Basic Registers (FBR) */
#define SD_IO_FBR_START 0x00100
#define SD_IO_FBR_SIZE 0x00700
/* Card Information Structure (CIS) */
#define SD_IO_CIS_START 0x01000
#define SD_IO_CIS_SIZE 0x17000
/* CIS tuple codes (based on PC Card 16) */
#define SD_IO_CISTPL_VERS_1 0x15
#define SD_IO_CISTPL_MANFID 0x20
#define SD_IO_CISTPL_FUNCID 0x21
#define SD_IO_CISTPL_FUNCE 0x22
#define SD_IO_CISTPL_END 0xff
/* CISTPL_FUNCID codes */
/* OpenBSD incorrectly defines 0x0c as FUNCTION_WLAN */
/* #define SDMMC_FUNCTION_WLAN 0x0c */
/* OCR bits */
/*
* In the SD 2.0 spec, bits 8-14 are now marked reserved.
* Low voltage in the SD 2.0 spec is bit 7, TBD voltage.
* Low voltage in the MMC 3.31 spec is bit 7, 1.65-1.95V.
* Specs prior to MMC 3.31 defined bits 0-7 as voltages down to 1.5V.
* 3.31 redefined them to be reserved and also said that cards had to
* support the 2.7-3.6V range and fixed the OCR to be 0xfff8000 for high
* voltage cards. MMC 4.0 says that a dual voltage card responds with
* 0xfff8080. It looks like the fine-grained control of the voltage
* tolerance ranges was abandoned.
*
* MMC_OCR_CCS appears to be valid only for SD cards.
*/
#define MMC_OCR_VOLTAGE 0x3fffffffU /* Vdd Voltage mask */
#define MMC_OCR_LOW_VOLTAGE (1u << 7) /* Low Voltage Range -- tbd */
#define MMC_OCR_MIN_VOLTAGE_SHIFT 7
#define MMC_OCR_200_210 (1U << 8) /* Vdd voltage 2.00 ~ 2.10 */
#define MMC_OCR_210_220 (1U << 9) /* Vdd voltage 2.10 ~ 2.20 */
#define MMC_OCR_220_230 (1U << 10) /* Vdd voltage 2.20 ~ 2.30 */
#define MMC_OCR_230_240 (1U << 11) /* Vdd voltage 2.30 ~ 2.40 */
#define MMC_OCR_240_250 (1U << 12) /* Vdd voltage 2.40 ~ 2.50 */
#define MMC_OCR_250_260 (1U << 13) /* Vdd voltage 2.50 ~ 2.60 */
#define MMC_OCR_260_270 (1U << 14) /* Vdd voltage 2.60 ~ 2.70 */
#define MMC_OCR_270_280 (1U << 15) /* Vdd voltage 2.70 ~ 2.80 */
#define MMC_OCR_280_290 (1U << 16) /* Vdd voltage 2.80 ~ 2.90 */
#define MMC_OCR_290_300 (1U << 17) /* Vdd voltage 2.90 ~ 3.00 */
#define MMC_OCR_300_310 (1U << 18) /* Vdd voltage 3.00 ~ 3.10 */
#define MMC_OCR_310_320 (1U << 19) /* Vdd voltage 3.10 ~ 3.20 */
#define MMC_OCR_320_330 (1U << 20) /* Vdd voltage 3.20 ~ 3.30 */
#define MMC_OCR_330_340 (1U << 21) /* Vdd voltage 3.30 ~ 3.40 */
#define MMC_OCR_340_350 (1U << 22) /* Vdd voltage 3.40 ~ 3.50 */
#define MMC_OCR_350_360 (1U << 23) /* Vdd voltage 3.50 ~ 3.60 */
#define MMC_OCR_MAX_VOLTAGE_SHIFT 23
#define MMC_OCR_S18R (1U << 24) /* Switching to 1.8 V requested (SD) */
#define MMC_OCR_S18A MMC_OCR_S18R /* Switching to 1.8 V accepted (SD) */
#define MMC_OCR_XPC (1U << 28) /* SDXC Power Control */
#define MMC_OCR_ACCESS_MODE_BYTE (0U << 29) /* Access Mode Byte (MMC) */
#define MMC_OCR_ACCESS_MODE_SECT (1U << 29) /* Access Mode Sector (MMC) */
#define MMC_OCR_ACCESS_MODE_MASK (3U << 29)
#define MMC_OCR_CCS (1u << 30) /* Card Capacity status (SD vs SDHC) */
#define MMC_OCR_CARD_BUSY (1U << 31) /* Card Power up status */
/* CSD -- decoded structure */
struct mmc_cid {
uint32_t mid;
char pnm[8];
uint32_t psn;
uint16_t oid;
uint16_t mdt_year;
uint8_t mdt_month;
uint8_t prv;
uint8_t fwrev;
};
struct mmc_csd {
uint8_t csd_structure;
uint8_t spec_vers;
uint16_t ccc;
uint16_t tacc;
uint32_t nsac;
uint32_t r2w_factor;
uint32_t tran_speed;
uint32_t read_bl_len;
uint32_t write_bl_len;
uint32_t vdd_r_curr_min;
uint32_t vdd_r_curr_max;
uint32_t vdd_w_curr_min;
uint32_t vdd_w_curr_max;
uint32_t wp_grp_size;
uint32_t erase_sector;
uint64_t capacity;
unsigned int read_bl_partial:1,
read_blk_misalign:1,
write_bl_partial:1,
write_blk_misalign:1,
dsr_imp:1,
erase_blk_en:1,
wp_grp_enable:1;
};
struct mmc_scr {
unsigned char sda_vsn;
unsigned char bus_widths;
#define SD_SCR_BUS_WIDTH_1 (1 << 0)
#define SD_SCR_BUS_WIDTH_4 (1 << 2)
};
struct mmc_sd_status {
uint8_t bus_width;
uint8_t secured_mode;
uint16_t card_type;
uint16_t prot_area;
uint8_t speed_class;
uint8_t perf_move;
uint8_t au_size;
uint16_t erase_size;
uint8_t erase_timeout;
uint8_t erase_offset;
};
struct mmc_quirk {
uint32_t mid;
#define MMC_QUIRK_MID_ANY ((uint32_t)-1)
uint16_t oid;
#define MMC_QUIRK_OID_ANY ((uint16_t)-1)
const char *pnm;
uint32_t quirks;
#define MMC_QUIRK_INAND_CMD38 0x0001
#define MMC_QUIRK_BROKEN_TRIM 0x0002
};
#define MMC_QUIRKS_FMT "\020" "\001INAND_CMD38" "\002BROKEN_TRIM"
/*
* Various MMC/SD constants
*/
#define MMC_BOOT_RPMB_BLOCK_SIZE (128 * 1024)
#define MMC_EXTCSD_SIZE 512
#define MMC_PART_GP_MAX 4
#define MMC_PART_MAX 8
#define MMC_TUNING_MAX 64 /* Maximum tuning iterations */
#define MMC_TUNING_LEN 64 /* Size of tuning data */
#define MMC_TUNING_LEN_HS200 128 /* Size of tuning data in HS200 mode */
/*
* Older versions of the MMC standard had a variable sector size. However,
* I've been able to find no old MMC or SD cards that have a non-512-byte
* sector size anywhere, so we assume that such cards are very rare and
* only note their existence in passing here...
*/
#define MMC_SECTOR_SIZE 512
#endif /* DEV_MMC_MMCREG_H */
Index: projects/clang800-import/sys/dev/mmc/mmcsd.c
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmcsd.c (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmcsd.c (revision 343956)
@@ -1,1576 +1,1576 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2006 Bernd Walter. All rights reserved.
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
* Copyright (c) 2017 Marius Strobl <marius@FreeBSD.org>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/endian.h>
#include <sys/fcntl.h>
#include <sys/ioccom.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <sys/priv.h>
#include <sys/slicer.h>
#include <sys/sysctl.h>
#include <sys/time.h>
#include <geom/geom.h>
#include <geom/geom_disk.h>
#include <dev/mmc/bridge.h>
#include <dev/mmc/mmc_ioctl.h>
#include <dev/mmc/mmc_subr.h>
#include <dev/mmc/mmcbrvar.h>
#include <dev/mmc/mmcreg.h>
#include <dev/mmc/mmcvar.h>
#include "mmcbus_if.h"
#if __FreeBSD_version < 800002
#define kproc_create kthread_create
#define kproc_exit kthread_exit
#endif
#define MMCSD_CMD_RETRIES 5
#define MMCSD_FMT_BOOT "mmcsd%dboot"
#define MMCSD_FMT_GP "mmcsd%dgp"
#define MMCSD_FMT_RPMB "mmcsd%drpmb"
#define MMCSD_LABEL_ENH "enh"
#define MMCSD_PART_NAMELEN (16 + 1)
struct mmcsd_softc;
struct mmcsd_part {
struct mtx disk_mtx;
struct mtx ioctl_mtx;
struct mmcsd_softc *sc;
struct disk *disk;
struct proc *p;
struct bio_queue_head bio_queue;
daddr_t eblock, eend; /* Range remaining after the last erase. */
u_int cnt;
u_int type;
int running;
int suspend;
int ioctl;
bool ro;
char name[MMCSD_PART_NAMELEN];
};
struct mmcsd_softc {
device_t dev;
device_t mmcbus;
struct mmcsd_part *part[MMC_PART_MAX];
enum mmc_card_mode mode;
u_int max_data; /* Maximum data size [blocks] */
u_int erase_sector; /* Device native erase sector size [blocks] */
uint8_t high_cap; /* High Capacity device (block addressed) */
uint8_t part_curr; /* Partition currently switched to */
uint8_t ext_csd[MMC_EXTCSD_SIZE];
uint16_t rca;
uint32_t flags;
#define MMCSD_INAND_CMD38 0x0001
#define MMCSD_USE_TRIM 0x0002
#define MMCSD_FLUSH_CACHE 0x0004
#define MMCSD_DIRTY 0x0008
uint32_t cmd6_time; /* Generic switch timeout [us] */
uint32_t part_time; /* Partition switch timeout [us] */
off_t enh_base; /* Enhanced user data area slice base ... */
off_t enh_size; /* ... and size [bytes] */
int log_count;
struct timeval log_time;
struct cdev *rpmb_dev;
};
static const char *errmsg[] =
{
"None",
"Timeout",
"Bad CRC",
"Fifo",
"Failed",
"Invalid",
"NO MEMORY"
};
static SYSCTL_NODE(_hw, OID_AUTO, mmcsd, CTLFLAG_RD, NULL, "mmcsd driver");
static int mmcsd_cache = 1;
SYSCTL_INT(_hw_mmcsd, OID_AUTO, cache, CTLFLAG_RDTUN, &mmcsd_cache, 0,
"Device R/W cache enabled if present");
#define LOG_PPS 5 /* Log no more than 5 errors per second. */
/* bus entry points */
static int mmcsd_attach(device_t dev);
static int mmcsd_detach(device_t dev);
static int mmcsd_probe(device_t dev);
static int mmcsd_shutdown(device_t dev);
/* disk routines */
static int mmcsd_close(struct disk *dp);
static int mmcsd_dump(void *arg, void *virtual, vm_offset_t physical,
off_t offset, size_t length);
static int mmcsd_getattr(struct bio *);
static int mmcsd_ioctl_disk(struct disk *disk, u_long cmd, void *data,
int fflag, struct thread *td);
static void mmcsd_strategy(struct bio *bp);
static void mmcsd_task(void *arg);
/* RPMB cdev interface */
static int mmcsd_ioctl_rpmb(struct cdev *dev, u_long cmd, caddr_t data,
int fflag, struct thread *td);
static void mmcsd_add_part(struct mmcsd_softc *sc, u_int type,
const char *name, u_int cnt, off_t media_size, bool ro);
static int mmcsd_bus_bit_width(device_t dev);
static daddr_t mmcsd_delete(struct mmcsd_part *part, struct bio *bp);
static const char *mmcsd_errmsg(int e);
static int mmcsd_flush_cache(struct mmcsd_softc *sc);
static int mmcsd_ioctl(struct mmcsd_part *part, u_long cmd, void *data,
int fflag, struct thread *td);
static int mmcsd_ioctl_cmd(struct mmcsd_part *part, struct mmc_ioc_cmd *mic,
int fflag);
static uintmax_t mmcsd_pretty_size(off_t size, char *unit);
static daddr_t mmcsd_rw(struct mmcsd_part *part, struct bio *bp);
static int mmcsd_set_blockcount(struct mmcsd_softc *sc, u_int count, bool rel);
static int mmcsd_slicer(device_t dev, const char *provider,
struct flash_slice *slices, int *nslices);
static int mmcsd_switch_part(device_t bus, device_t dev, uint16_t rca,
u_int part);
#define MMCSD_DISK_LOCK(_part) mtx_lock(&(_part)->disk_mtx)
#define MMCSD_DISK_UNLOCK(_part) mtx_unlock(&(_part)->disk_mtx)
#define MMCSD_DISK_LOCK_INIT(_part) \
mtx_init(&(_part)->disk_mtx, (_part)->name, "mmcsd disk", MTX_DEF)
#define MMCSD_DISK_LOCK_DESTROY(_part) mtx_destroy(&(_part)->disk_mtx);
#define MMCSD_DISK_ASSERT_LOCKED(_part) \
mtx_assert(&(_part)->disk_mtx, MA_OWNED);
#define MMCSD_DISK_ASSERT_UNLOCKED(_part) \
mtx_assert(&(_part)->disk_mtx, MA_NOTOWNED);
#define MMCSD_IOCTL_LOCK(_part) mtx_lock(&(_part)->ioctl_mtx)
#define MMCSD_IOCTL_UNLOCK(_part) mtx_unlock(&(_part)->ioctl_mtx)
#define MMCSD_IOCTL_LOCK_INIT(_part) \
mtx_init(&(_part)->ioctl_mtx, (_part)->name, "mmcsd IOCTL", MTX_DEF)
#define MMCSD_IOCTL_LOCK_DESTROY(_part) mtx_destroy(&(_part)->ioctl_mtx);
#define MMCSD_IOCTL_ASSERT_LOCKED(_part) \
mtx_assert(&(_part)->ioctl_mtx, MA_OWNED);
#define MMCSD_IOCTL_ASSERT_UNLOCKED(_part) \
mtx_assert(&(_part)->ioctl_mtx, MA_NOTOWNED);
static int
mmcsd_probe(device_t dev)
{
device_quiet(dev);
device_set_desc(dev, "MMC/SD Memory Card");
return (0);
}
static int
mmcsd_attach(device_t dev)
{
device_t mmcbus;
struct mmcsd_softc *sc;
const uint8_t *ext_csd;
off_t erase_size, sector_size, size, wp_size;
uintmax_t bytes;
int err, i;
uint32_t quirks;
uint8_t rev;
bool comp, ro;
char unit[2];
sc = device_get_softc(dev);
sc->dev = dev;
sc->mmcbus = mmcbus = device_get_parent(dev);
sc->mode = mmc_get_card_type(dev);
/*
* Note that in principle with an SDHCI-like re-tuning implementation,
* the maximum data size can change at runtime due to a device removal/
* insertion that results in switches to/from a transfer mode involving
* re-tuning, iff there are multiple devices on a given bus. To date,
* however, mmc(4) lacks support for rescanning already attached buses,
* and sdhci(4) has no support for shared buses in the first place
* either.
*/
sc->max_data = mmc_get_max_data(dev);
sc->high_cap = mmc_get_high_cap(dev);
sc->rca = mmc_get_rca(dev);
sc->cmd6_time = mmc_get_cmd6_timeout(dev);
quirks = mmc_get_quirks(dev);
/* Only MMC >= 4.x devices support EXT_CSD. */
if (mmc_get_spec_vers(dev) >= 4) {
MMCBUS_ACQUIRE_BUS(mmcbus, dev);
err = mmc_send_ext_csd(mmcbus, dev, sc->ext_csd);
MMCBUS_RELEASE_BUS(mmcbus, dev);
if (err != MMC_ERR_NONE) {
device_printf(dev, "Error reading EXT_CSD %s\n",
mmcsd_errmsg(err));
return (ENXIO);
}
}
ext_csd = sc->ext_csd;
if ((quirks & MMC_QUIRK_INAND_CMD38) != 0) {
if (mmc_get_spec_vers(dev) < 4) {
device_printf(dev,
"MMC_QUIRK_INAND_CMD38 set but no EXT_CSD\n");
return (EINVAL);
}
sc->flags |= MMCSD_INAND_CMD38;
}
/*
* EXT_CSD_SEC_FEATURE_SUPPORT_GB_CL_EN denotes support for both
* insecure and secure TRIM.
*/
if ((ext_csd[EXT_CSD_SEC_FEATURE_SUPPORT] &
EXT_CSD_SEC_FEATURE_SUPPORT_GB_CL_EN) != 0 &&
(quirks & MMC_QUIRK_BROKEN_TRIM) == 0) {
if (bootverbose)
device_printf(dev, "taking advantage of TRIM\n");
sc->flags |= MMCSD_USE_TRIM;
sc->erase_sector = 1;
} else
sc->erase_sector = mmc_get_erase_sector(dev);
/*
* Enhanced user data area and general purpose partitions are only
* supported in revision 1.4 (EXT_CSD_REV == 4) and later, the RPMB
* partition in revision 1.5 (MMC v4.41, EXT_CSD_REV == 5) and later.
*/
rev = ext_csd[EXT_CSD_REV];
/*
* With revision 1.6 (MMC v4.5, EXT_CSD_REV == 6) and later, take
* advantage of the device R/W cache if present and usage is not
* disabled.
*/
if (rev >= 6 && mmcsd_cache != 0) {
size = le32dec(&ext_csd[EXT_CSD_CACHE_SIZE]);
if (bootverbose)
device_printf(dev, "cache size %juKB\n", (uintmax_t)size);
if (size > 0) {
MMCBUS_ACQUIRE_BUS(mmcbus, dev);
err = mmc_switch(mmcbus, dev, sc->rca,
EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CACHE_CTRL,
EXT_CSD_CACHE_CTRL_CACHE_EN, sc->cmd6_time, true);
MMCBUS_RELEASE_BUS(mmcbus, dev);
if (err != MMC_ERR_NONE)
device_printf(dev, "failed to enable cache\n");
else
sc->flags |= MMCSD_FLUSH_CACHE;
}
}
/*
* Ignore user-creatable enhanced user data area and general purpose
* partitions as long as partitioning hasn't been finished.
*/
comp = (ext_csd[EXT_CSD_PART_SET] & EXT_CSD_PART_SET_COMPLETED) != 0;
/*
* Add enhanced user data area slice, unless it spans the entirety of
* the user data area. The enhanced area is of a multiple of high
* capacity write protect groups ((ERASE_GRP_SIZE + HC_WP_GRP_SIZE) *
* 512 KB) and its offset given in either sectors or bytes, depending
* on whether it's a high capacity device or not.
* NB: The slicer and its slices need to be registered before adding
* the disk for the corresponding user data area as re-tasting is
* racy.
*/
sector_size = mmc_get_sector_size(dev);
size = ext_csd[EXT_CSD_ENH_SIZE_MULT] +
(ext_csd[EXT_CSD_ENH_SIZE_MULT + 1] << 8) +
(ext_csd[EXT_CSD_ENH_SIZE_MULT + 2] << 16);
if (rev >= 4 && comp == TRUE && size > 0 &&
(ext_csd[EXT_CSD_PART_SUPPORT] &
EXT_CSD_PART_SUPPORT_ENH_ATTR_EN) != 0 &&
(ext_csd[EXT_CSD_PART_ATTR] & (EXT_CSD_PART_ATTR_ENH_USR)) != 0) {
erase_size = ext_csd[EXT_CSD_ERASE_GRP_SIZE] * 1024 *
MMC_SECTOR_SIZE;
wp_size = ext_csd[EXT_CSD_HC_WP_GRP_SIZE];
size *= erase_size * wp_size;
if (size != mmc_get_media_size(dev) * sector_size) {
sc->enh_size = size;
sc->enh_base =
le32dec(&ext_csd[EXT_CSD_ENH_START_ADDR]) *
(sc->high_cap == 0 ? MMC_SECTOR_SIZE : 1);
} else if (bootverbose)
device_printf(dev,
"enhanced user data area spans entire device\n");
}
/*
* Add default partition. This may be the only one or the user
* data area in case partitions are supported.
*/
ro = mmc_get_read_only(dev);
mmcsd_add_part(sc, EXT_CSD_PART_CONFIG_ACC_DEFAULT, "mmcsd",
device_get_unit(dev), mmc_get_media_size(dev) * sector_size, ro);
if (mmc_get_spec_vers(dev) < 3)
return (0);
/* Belatedly announce enhanced user data slice. */
if (sc->enh_size != 0) {
bytes = mmcsd_pretty_size(size, unit);
printf(FLASH_SLICES_FMT ": %ju%sB enhanced user data area "
"slice offset 0x%jx at %s\n", device_get_nameunit(dev),
MMCSD_LABEL_ENH, bytes, unit, (uintmax_t)sc->enh_base,
device_get_nameunit(dev));
}
/*
* Determine partition switch timeout (provided in units of 10 ms)
* and ensure it's at least 300 ms as some eMMC chips lie.
*/
sc->part_time = max(ext_csd[EXT_CSD_PART_SWITCH_TO] * 10 * 1000,
300 * 1000);
/* Add boot partitions, which are of a fixed multiple of 128 KB. */
size = ext_csd[EXT_CSD_BOOT_SIZE_MULT] * MMC_BOOT_RPMB_BLOCK_SIZE;
if (size > 0 && (mmcbr_get_caps(mmcbus) & MMC_CAP_BOOT_NOACC) == 0) {
mmcsd_add_part(sc, EXT_CSD_PART_CONFIG_ACC_BOOT0,
MMCSD_FMT_BOOT, 0, size,
ro | ((ext_csd[EXT_CSD_BOOT_WP_STATUS] &
EXT_CSD_BOOT_WP_STATUS_BOOT0_MASK) != 0));
mmcsd_add_part(sc, EXT_CSD_PART_CONFIG_ACC_BOOT1,
MMCSD_FMT_BOOT, 1, size,
ro | ((ext_csd[EXT_CSD_BOOT_WP_STATUS] &
EXT_CSD_BOOT_WP_STATUS_BOOT1_MASK) != 0));
}
/* Add RPMB partition, which also is of a fixed multiple of 128 KB. */
size = ext_csd[EXT_CSD_RPMB_MULT] * MMC_BOOT_RPMB_BLOCK_SIZE;
if (rev >= 5 && size > 0)
mmcsd_add_part(sc, EXT_CSD_PART_CONFIG_ACC_RPMB,
MMCSD_FMT_RPMB, 0, size, ro);
if (rev <= 3 || comp == FALSE)
return (0);
/*
* Add general purpose partitions, which are of a multiple of high
* capacity write protect groups, too.
*/
if ((ext_csd[EXT_CSD_PART_SUPPORT] & EXT_CSD_PART_SUPPORT_EN) != 0) {
erase_size = ext_csd[EXT_CSD_ERASE_GRP_SIZE] * 1024 *
MMC_SECTOR_SIZE;
wp_size = ext_csd[EXT_CSD_HC_WP_GRP_SIZE];
for (i = 0; i < MMC_PART_GP_MAX; i++) {
size = ext_csd[EXT_CSD_GP_SIZE_MULT + i * 3] +
(ext_csd[EXT_CSD_GP_SIZE_MULT + i * 3 + 1] << 8) +
(ext_csd[EXT_CSD_GP_SIZE_MULT + i * 3 + 2] << 16);
if (size == 0)
continue;
mmcsd_add_part(sc, EXT_CSD_PART_CONFIG_ACC_GP0 + i,
MMCSD_FMT_GP, i, size * erase_size * wp_size, ro);
}
}
return (0);
}
static uintmax_t
mmcsd_pretty_size(off_t size, char *unit)
{
uintmax_t bytes;
int i;
/*
* Display in most natural units. There's no card < 1MB, although
* RPMB partitions occasionally are smaller than that. The SD
* standard goes to 2 GiB due to its reliance on FAT, but the data
* format supports up to 4 GiB and some card makers push it up to
* this limit. The SDHC standard only goes to 32 GiB due to FAT32,
* but the data format supports up to 2 TiB. 2048 GB isn't too ugly,
* so we note it in passing here and don't add code to print TB.
* Since these cards are sold in terms of MB and GB not MiB and GiB,
* report them like that. We also round to the nearest unit, since
* many cards are a few percent short, even of the power of 10 size.
*/
bytes = size;
unit[0] = unit[1] = '\0';
for (i = 0; i <= 2 && bytes >= 1000; i++) {
bytes = (bytes + 1000 / 2 - 1) / 1000;
switch (i) {
case 0:
unit[0] = 'k';
break;
case 1:
unit[0] = 'M';
break;
case 2:
unit[0] = 'G';
break;
default:
break;
}
}
return (bytes);
}
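The unit-scaling loop above divides by 1000 (not 1024) per step, rounding to the nearest unit and stopping at 'G'. A standalone userland sketch of the same arithmetic (the name demo_pretty_size is hypothetical, for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userland mirror of the mmcsd_pretty_size() loop: scale by powers of
 * 1000, rounding to the nearest unit, and record the unit prefix.
 */
static uintmax_t
demo_pretty_size(uintmax_t bytes, char *unit)
{
	int i;

	unit[0] = unit[1] = '\0';
	for (i = 0; i <= 2 && bytes >= 1000; i++) {
		bytes = (bytes + 1000 / 2 - 1) / 1000;	/* Round to nearest. */
		unit[0] = "kMG"[i];
	}
	return (bytes);
}
```

With this, a nominal "32 GB" card (32000000000 bytes) reports as 32 GB and a 128 KiB RPMB partition (131072 bytes) as 131 kB, matching what the attach-time banner prints.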
static struct cdevsw mmcsd_rpmb_cdevsw = {
.d_version = D_VERSION,
.d_name = "mmcsdrpmb",
.d_ioctl = mmcsd_ioctl_rpmb
};
static void
mmcsd_add_part(struct mmcsd_softc *sc, u_int type, const char *name, u_int cnt,
off_t media_size, bool ro)
{
struct make_dev_args args;
device_t dev, mmcbus;
const char *ext;
const uint8_t *ext_csd;
struct mmcsd_part *part;
struct disk *d;
uintmax_t bytes;
u_int gp;
uint32_t speed;
uint8_t extattr;
bool enh;
char unit[2];
dev = sc->dev;
mmcbus = sc->mmcbus;
part = sc->part[type] = malloc(sizeof(*part), M_DEVBUF,
M_WAITOK | M_ZERO);
part->sc = sc;
part->cnt = cnt;
part->type = type;
part->ro = ro;
snprintf(part->name, sizeof(part->name), name, device_get_unit(dev));
MMCSD_IOCTL_LOCK_INIT(part);
/*
* For the RPMB partition, allow IOCTL access only.
* NB: If ever attaching RPMB partitions to disk(9), the re-tuning
* implementation and especially its pausing need to be revisited,
* because then re-tuning requests may be issued by the IOCTL half
* of this driver while re-tuning is already paused by the disk(9)
* one and vice versa.
*/
if (type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
make_dev_args_init(&args);
args.mda_flags = MAKEDEV_CHECKNAME | MAKEDEV_WAITOK;
args.mda_devsw = &mmcsd_rpmb_cdevsw;
args.mda_uid = UID_ROOT;
args.mda_gid = GID_OPERATOR;
args.mda_mode = 0640;
args.mda_si_drv1 = part;
if (make_dev_s(&args, &sc->rpmb_dev, "%s", part->name) != 0) {
device_printf(dev, "Failed to make RPMB device\n");
free(part, M_DEVBUF);
return;
}
} else {
MMCSD_DISK_LOCK_INIT(part);
d = part->disk = disk_alloc();
d->d_close = mmcsd_close;
d->d_strategy = mmcsd_strategy;
d->d_ioctl = mmcsd_ioctl_disk;
d->d_dump = mmcsd_dump;
d->d_getattr = mmcsd_getattr;
d->d_name = part->name;
d->d_drv1 = part;
d->d_sectorsize = mmc_get_sector_size(dev);
d->d_maxsize = sc->max_data * d->d_sectorsize;
d->d_mediasize = media_size;
d->d_stripesize = sc->erase_sector * d->d_sectorsize;
d->d_unit = cnt;
d->d_flags = DISKFLAG_CANDELETE;
if ((sc->flags & MMCSD_FLUSH_CACHE) != 0)
d->d_flags |= DISKFLAG_CANFLUSHCACHE;
d->d_delmaxsize = mmc_get_erase_sector(dev) * d->d_sectorsize;
strlcpy(d->d_ident, mmc_get_card_sn_string(dev),
sizeof(d->d_ident));
strlcpy(d->d_descr, mmc_get_card_id_string(dev),
sizeof(d->d_descr));
d->d_rotation_rate = DISK_RR_NON_ROTATING;
disk_create(d, DISK_VERSION);
bioq_init(&part->bio_queue);
part->running = 1;
kproc_create(&mmcsd_task, part, &part->p, 0, 0,
"%s%d: mmc/sd card", part->name, cnt);
}
bytes = mmcsd_pretty_size(media_size, unit);
if (type == EXT_CSD_PART_CONFIG_ACC_DEFAULT) {
speed = mmcbr_get_clock(mmcbus);
printf("%s%d: %ju%sB <%s>%s at %s %d.%01dMHz/%dbit/%d-block\n",
part->name, cnt, bytes, unit, mmc_get_card_id_string(dev),
ro ? " (read-only)" : "", device_get_nameunit(mmcbus),
speed / 1000000, (speed / 100000) % 10,
mmcsd_bus_bit_width(dev), sc->max_data);
} else if (type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
printf("%s: %ju%sB partition %d%s at %s\n", part->name, bytes,
unit, type, ro ? " (read-only)" : "",
device_get_nameunit(dev));
} else {
enh = false;
ext = NULL;
extattr = 0;
if (type >= EXT_CSD_PART_CONFIG_ACC_GP0 &&
type <= EXT_CSD_PART_CONFIG_ACC_GP3) {
ext_csd = sc->ext_csd;
gp = type - EXT_CSD_PART_CONFIG_ACC_GP0;
if ((ext_csd[EXT_CSD_PART_SUPPORT] &
EXT_CSD_PART_SUPPORT_ENH_ATTR_EN) != 0 &&
(ext_csd[EXT_CSD_PART_ATTR] &
(EXT_CSD_PART_ATTR_ENH_GP0 << gp)) != 0)
enh = true;
else if ((ext_csd[EXT_CSD_PART_SUPPORT] &
EXT_CSD_PART_SUPPORT_EXT_ATTR_EN) != 0) {
extattr = (ext_csd[EXT_CSD_EXT_PART_ATTR +
(gp / 2)] >> (4 * (gp % 2))) & 0xF;
switch (extattr) {
case EXT_CSD_EXT_PART_ATTR_DEFAULT:
break;
case EXT_CSD_EXT_PART_ATTR_SYSTEMCODE:
ext = "system code";
break;
case EXT_CSD_EXT_PART_ATTR_NPERSISTENT:
ext = "non-persistent";
break;
default:
ext = "reserved";
break;
}
}
}
if (ext == NULL)
printf("%s%d: %ju%sB partition %d%s%s at %s\n",
part->name, cnt, bytes, unit, type, enh ?
" enhanced" : "", ro ? " (read-only)" : "",
device_get_nameunit(dev));
else
printf("%s%d: %ju%sB partition %d extended 0x%x "
"(%s)%s at %s\n", part->name, cnt, bytes, unit,
type, extattr, ext, ro ? " (read-only)" : "",
device_get_nameunit(dev));
}
}
static int
mmcsd_slicer(device_t dev, const char *provider,
struct flash_slice *slices, int *nslices)
{
char name[MMCSD_PART_NAMELEN];
struct mmcsd_softc *sc;
struct mmcsd_part *part;
*nslices = 0;
if (slices == NULL)
return (ENOMEM);
sc = device_get_softc(dev);
if (sc->enh_size == 0)
return (ENXIO);
part = sc->part[EXT_CSD_PART_CONFIG_ACC_DEFAULT];
snprintf(name, sizeof(name), "%s%d", part->disk->d_name,
part->disk->d_unit);
if (strcmp(name, provider) != 0)
return (ENXIO);
*nslices = 1;
slices[0].base = sc->enh_base;
slices[0].size = sc->enh_size;
slices[0].label = MMCSD_LABEL_ENH;
return (0);
}
static int
mmcsd_detach(device_t dev)
{
struct mmcsd_softc *sc = device_get_softc(dev);
struct mmcsd_part *part;
int i;
for (i = 0; i < MMC_PART_MAX; i++) {
part = sc->part[i];
if (part != NULL) {
if (part->disk != NULL) {
MMCSD_DISK_LOCK(part);
part->suspend = 0;
if (part->running > 0) {
/* kill thread */
part->running = 0;
wakeup(part);
/* wait for thread to finish. */
while (part->running != -1)
msleep(part, &part->disk_mtx, 0,
"mmcsd disk detach", 0);
}
MMCSD_DISK_UNLOCK(part);
}
MMCSD_IOCTL_LOCK(part);
while (part->ioctl > 0)
msleep(part, &part->ioctl_mtx, 0,
"mmcsd IOCTL detach", 0);
part->ioctl = -1;
MMCSD_IOCTL_UNLOCK(part);
}
}
if (sc->rpmb_dev != NULL)
destroy_dev(sc->rpmb_dev);
for (i = 0; i < MMC_PART_MAX; i++) {
part = sc->part[i];
if (part != NULL) {
if (part->disk != NULL) {
/* Flush the request queue. */
bioq_flush(&part->bio_queue, NULL, ENXIO);
/* kill disk */
disk_destroy(part->disk);
MMCSD_DISK_LOCK_DESTROY(part);
}
MMCSD_IOCTL_LOCK_DESTROY(part);
free(part, M_DEVBUF);
}
}
if (mmcsd_flush_cache(sc) != MMC_ERR_NONE)
device_printf(dev, "failed to flush cache\n");
return (0);
}
static int
mmcsd_shutdown(device_t dev)
{
struct mmcsd_softc *sc = device_get_softc(dev);
if (mmcsd_flush_cache(sc) != MMC_ERR_NONE)
device_printf(dev, "failed to flush cache\n");
return (0);
}
static int
mmcsd_suspend(device_t dev)
{
struct mmcsd_softc *sc = device_get_softc(dev);
struct mmcsd_part *part;
int i;
for (i = 0; i < MMC_PART_MAX; i++) {
part = sc->part[i];
if (part != NULL) {
if (part->disk != NULL) {
MMCSD_DISK_LOCK(part);
part->suspend = 1;
if (part->running > 0) {
/* kill thread */
part->running = 0;
wakeup(part);
/* wait for thread to finish. */
while (part->running != -1)
msleep(part, &part->disk_mtx, 0,
"mmcsd disk suspension", 0);
}
MMCSD_DISK_UNLOCK(part);
}
MMCSD_IOCTL_LOCK(part);
while (part->ioctl > 0)
msleep(part, &part->ioctl_mtx, 0,
"mmcsd IOCTL suspension", 0);
part->ioctl = -1;
MMCSD_IOCTL_UNLOCK(part);
}
}
if (mmcsd_flush_cache(sc) != MMC_ERR_NONE)
device_printf(dev, "failed to flush cache\n");
return (0);
}
static int
mmcsd_resume(device_t dev)
{
struct mmcsd_softc *sc = device_get_softc(dev);
struct mmcsd_part *part;
int i;
for (i = 0; i < MMC_PART_MAX; i++) {
part = sc->part[i];
if (part != NULL) {
if (part->disk != NULL) {
MMCSD_DISK_LOCK(part);
part->suspend = 0;
if (part->running <= 0) {
part->running = 1;
MMCSD_DISK_UNLOCK(part);
kproc_create(&mmcsd_task, part,
&part->p, 0, 0, "%s%d: mmc/sd card",
part->name, part->cnt);
} else
MMCSD_DISK_UNLOCK(part);
}
MMCSD_IOCTL_LOCK(part);
part->ioctl = 0;
MMCSD_IOCTL_UNLOCK(part);
}
}
return (0);
}
static int
mmcsd_close(struct disk *dp)
{
struct mmcsd_softc *sc;
if ((dp->d_flags & DISKFLAG_OPEN) != 0) {
sc = ((struct mmcsd_part *)dp->d_drv1)->sc;
if (mmcsd_flush_cache(sc) != MMC_ERR_NONE)
device_printf(sc->dev, "failed to flush cache\n");
}
return (0);
}
static void
mmcsd_strategy(struct bio *bp)
{
struct mmcsd_part *part;
part = bp->bio_disk->d_drv1;
MMCSD_DISK_LOCK(part);
if (part->running > 0 || part->suspend > 0) {
bioq_disksort(&part->bio_queue, bp);
MMCSD_DISK_UNLOCK(part);
wakeup(part);
} else {
MMCSD_DISK_UNLOCK(part);
biofinish(bp, NULL, ENXIO);
}
}
static int
mmcsd_ioctl_rpmb(struct cdev *dev, u_long cmd, caddr_t data,
int fflag, struct thread *td)
{
return (mmcsd_ioctl(dev->si_drv1, cmd, data, fflag, td));
}
static int
mmcsd_ioctl_disk(struct disk *disk, u_long cmd, void *data, int fflag,
struct thread *td)
{
return (mmcsd_ioctl(disk->d_drv1, cmd, data, fflag, td));
}
static int
mmcsd_ioctl(struct mmcsd_part *part, u_long cmd, void *data, int fflag,
struct thread *td)
{
struct mmc_ioc_cmd *mic;
struct mmc_ioc_multi_cmd *mimc;
int i, err;
u_long cnt, size;
if ((fflag & FREAD) == 0)
return (EBADF);
err = priv_check(td, PRIV_DRIVER);
if (err != 0)
return (err);
err = 0;
switch (cmd) {
case MMC_IOC_CMD:
mic = data;
err = mmcsd_ioctl_cmd(part, mic, fflag);
break;
case MMC_IOC_MULTI_CMD:
mimc = data;
if (mimc->num_of_cmds == 0)
break;
if (mimc->num_of_cmds > MMC_IOC_MAX_CMDS)
return (EINVAL);
cnt = mimc->num_of_cmds;
size = sizeof(*mic) * cnt;
mic = malloc(size, M_TEMP, M_WAITOK);
err = copyin((const void *)mimc->cmds, mic, size);
if (err == 0) {
for (i = 0; i < cnt; i++) {
err = mmcsd_ioctl_cmd(part, &mic[i], fflag);
if (err != 0)
break;
}
}
free(mic, M_TEMP);
break;
default:
return (ENOIOCTL);
}
return (err);
}
static int
mmcsd_ioctl_cmd(struct mmcsd_part *part, struct mmc_ioc_cmd *mic, int fflag)
{
struct mmc_command cmd;
struct mmc_data data;
struct mmcsd_softc *sc;
device_t dev, mmcbus;
void *dp;
u_long len;
int err, retries;
uint32_t status;
uint16_t rca;
if ((fflag & FWRITE) == 0 && mic->write_flag != 0)
return (EBADF);
if (part->ro == TRUE && mic->write_flag != 0)
return (EROFS);
/*
* We don't need to explicitly lock against the disk(9) half of this
* driver as MMCBUS_ACQUIRE_BUS() will serialize us. However, it's
* necessary to protect against races with detachment and suspension,
* especially since it's required to switch away from RPMB partitions
* again after an access (see mmcsd_switch_part()).
*/
MMCSD_IOCTL_LOCK(part);
while (part->ioctl != 0) {
if (part->ioctl < 0) {
MMCSD_IOCTL_UNLOCK(part);
return (ENXIO);
}
msleep(part, &part->ioctl_mtx, 0, "mmcsd IOCTL", 0);
}
part->ioctl = 1;
MMCSD_IOCTL_UNLOCK(part);
err = 0;
dp = NULL;
len = mic->blksz * mic->blocks;
if (len > MMC_IOC_MAX_BYTES) {
err = EOVERFLOW;
goto out;
}
if (len != 0) {
dp = malloc(len, M_TEMP, M_WAITOK);
err = copyin((void *)(uintptr_t)mic->data_ptr, dp, len);
if (err != 0)
goto out;
}
memset(&cmd, 0, sizeof(cmd));
memset(&data, 0, sizeof(data));
cmd.opcode = mic->opcode;
cmd.arg = mic->arg;
cmd.flags = mic->flags;
if (len != 0) {
data.len = len;
data.data = dp;
data.flags = mic->write_flag != 0 ? MMC_DATA_WRITE :
MMC_DATA_READ;
cmd.data = &data;
}
sc = part->sc;
rca = sc->rca;
if (mic->is_acmd == 0) {
/* Enforce/patch/restrict RCA-based commands */
switch (cmd.opcode) {
case MMC_SET_RELATIVE_ADDR:
case MMC_SELECT_CARD:
err = EPERM;
goto out;
case MMC_STOP_TRANSMISSION:
if ((cmd.arg & 0x1) == 0)
break;
/* FALLTHROUGH */
case MMC_SLEEP_AWAKE:
case MMC_SEND_CSD:
case MMC_SEND_CID:
case MMC_SEND_STATUS:
case MMC_GO_INACTIVE_STATE:
case MMC_FAST_IO:
case MMC_APP_CMD:
cmd.arg = (cmd.arg & 0x0000FFFF) | (rca << 16);
break;
default:
break;
}
/*
* No partition switching in userland; it's almost impossible
* to recover from that, especially if things go wrong.
*/
if (cmd.opcode == MMC_SWITCH_FUNC && dp != NULL &&
(((uint8_t *)dp)[EXT_CSD_PART_CONFIG] &
EXT_CSD_PART_CONFIG_ACC_MASK) != part->type) {
err = EINVAL;
goto out;
}
}
dev = sc->dev;
mmcbus = sc->mmcbus;
MMCBUS_ACQUIRE_BUS(mmcbus, dev);
err = mmcsd_switch_part(mmcbus, dev, rca, part->type);
if (err != MMC_ERR_NONE)
goto release;
if (part->type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
err = mmcsd_set_blockcount(sc, mic->blocks,
mic->write_flag & (1 << 31));
if (err != MMC_ERR_NONE)
goto switch_back;
}
if (mic->write_flag != 0)
sc->flags |= MMCSD_DIRTY;
if (mic->is_acmd != 0)
(void)mmc_wait_for_app_cmd(mmcbus, dev, rca, &cmd, 0);
else
(void)mmc_wait_for_cmd(mmcbus, dev, &cmd, 0);
if (part->type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
/*
* If the request went to the RPMB partition, try to ensure
* that the command actually has completed.
*/
retries = MMCSD_CMD_RETRIES;
do {
err = mmc_send_status(mmcbus, dev, rca, &status);
if (err != MMC_ERR_NONE)
break;
if (R1_STATUS(status) == 0 &&
R1_CURRENT_STATE(status) != R1_STATE_PRG)
break;
DELAY(1000);
} while (retries-- > 0);
}
/*
* If EXT_CSD was changed, our copy is outdated now. Specifically, the
* upper bits of EXT_CSD_PART_CONFIG are used in mmcsd_switch_part(),
* so retrieve EXT_CSD again.
*/
if (cmd.opcode == MMC_SWITCH_FUNC) {
err = mmc_send_ext_csd(mmcbus, dev, sc->ext_csd);
if (err != MMC_ERR_NONE)
goto release;
}
switch_back:
if (part->type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
/*
* If the request went to the RPMB partition, always switch
* back to the default partition (see mmcsd_switch_part()).
*/
err = mmcsd_switch_part(mmcbus, dev, rca,
EXT_CSD_PART_CONFIG_ACC_DEFAULT);
if (err != MMC_ERR_NONE)
goto release;
}
MMCBUS_RELEASE_BUS(mmcbus, dev);
if (cmd.error != MMC_ERR_NONE) {
switch (cmd.error) {
case MMC_ERR_TIMEOUT:
err = ETIMEDOUT;
break;
case MMC_ERR_BADCRC:
err = EILSEQ;
break;
case MMC_ERR_INVALID:
err = EINVAL;
break;
case MMC_ERR_NO_MEMORY:
err = ENOMEM;
break;
default:
err = EIO;
break;
}
goto out;
}
memcpy(mic->response, cmd.resp, 4 * sizeof(uint32_t));
if (mic->write_flag == 0 && len != 0) {
err = copyout(dp, (void *)(uintptr_t)mic->data_ptr, len);
if (err != 0)
goto out;
}
goto out;
release:
MMCBUS_RELEASE_BUS(mmcbus, dev);
err = EIO;
out:
MMCSD_IOCTL_LOCK(part);
part->ioctl = 0;
MMCSD_IOCTL_UNLOCK(part);
wakeup(part);
if (dp != NULL)
free(dp, M_TEMP);
return (err);
}
static int
mmcsd_getattr(struct bio *bp)
{
struct mmcsd_part *part;
device_t dev;
if (strcmp(bp->bio_attribute, "MMC::device") == 0) {
if (bp->bio_length != sizeof(dev))
return (EFAULT);
part = bp->bio_disk->d_drv1;
dev = part->sc->dev;
bcopy(&dev, bp->bio_data, sizeof(dev));
bp->bio_completed = bp->bio_length;
return (0);
}
return (-1);
}
static int
mmcsd_set_blockcount(struct mmcsd_softc *sc, u_int count, bool reliable)
{
struct mmc_command cmd;
struct mmc_request req;
memset(&req, 0, sizeof(req));
memset(&cmd, 0, sizeof(cmd));
cmd.mrq = &req;
req.cmd = &cmd;
cmd.opcode = MMC_SET_BLOCK_COUNT;
cmd.arg = count & 0x0000FFFF;
if (reliable)
cmd.arg |= 1 << 31;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
MMCBUS_WAIT_FOR_REQUEST(sc->mmcbus, sc->dev, &req);
return (cmd.error);
}
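The SET_BLOCK_COUNT (CMD23) argument built above packs the block count into the low 16 bits and the reliable-write request into bit 31. The encoding in isolation (demo_cmd23_arg is a hypothetical helper, not part of the driver):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the CMD23 argument encoding used by mmcsd_set_blockcount():
 * block count in bits [15:0], reliable-write flag in bit 31.
 */
static uint32_t
demo_cmd23_arg(unsigned count, bool reliable)
{
	uint32_t arg;

	arg = count & 0x0000FFFF;
	if (reliable)
		arg |= 1U << 31;
	return (arg);
}
```

Note that a count wider than 16 bits is silently truncated by the mask, which is why callers keep transfers within the 16-bit block-count limit.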
static int
mmcsd_switch_part(device_t bus, device_t dev, uint16_t rca, u_int part)
{
struct mmcsd_softc *sc;
int err;
uint8_t value;
sc = device_get_softc(dev);
if (sc->mode == mode_sd)
return (MMC_ERR_NONE);
/*
* According to section "6.2.2 Command restrictions" of the eMMC
* specification v5.1, CMD19/CMD21 aren't allowed to be used with
* RPMB partitions. So we pause re-tuning along with triggering
* it up-front to decrease the likelihood of re-tuning becoming
* necessary while accessing an RPMB partition. Consequently, an
* RPMB partition should immediately be switched away from again
* after an access in order to allow for re-tuning to take place
* anew.
*/
if (part == EXT_CSD_PART_CONFIG_ACC_RPMB)
MMCBUS_RETUNE_PAUSE(sc->mmcbus, sc->dev, true);
if (sc->part_curr == part)
return (MMC_ERR_NONE);
value = (sc->ext_csd[EXT_CSD_PART_CONFIG] &
~EXT_CSD_PART_CONFIG_ACC_MASK) | part;
/* Jump! */
err = mmc_switch(bus, dev, rca, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_PART_CONFIG, value, sc->part_time, true);
if (err != MMC_ERR_NONE) {
if (part == EXT_CSD_PART_CONFIG_ACC_RPMB)
MMCBUS_RETUNE_UNPAUSE(sc->mmcbus, sc->dev);
return (err);
}
sc->ext_csd[EXT_CSD_PART_CONFIG] = value;
if (sc->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB)
MMCBUS_RETUNE_UNPAUSE(sc->mmcbus, sc->dev);
sc->part_curr = part;
return (MMC_ERR_NONE);
}
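The partition switch above rewrites only the access bits of EXT_CSD_PART_CONFIG, preserving the upper (boot-configuration) bits. The masking can be checked on its own; the constants and helper below are illustrative stand-ins, using the JEDEC 3-bit access field in bits [2:0]:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the driver's EXT_CSD constants. */
#define DEMO_ACC_MASK	0x07	/* PARTITION_ACCESS, bits [2:0]. */
#define DEMO_ACC_RPMB	0x03	/* RPMB partition access value. */

/*
 * Mirror of the computation in mmcsd_switch_part(): keep the upper bits
 * of the current PARTITION_CONFIG byte, swap in the new access bits.
 */
static uint8_t
demo_part_config(uint8_t cur, uint8_t part)
{
	return ((cur & ~DEMO_ACC_MASK) | part);
}
```

Preserving the upper bits matters because they carry boot partition enable/ack settings that a mere partition switch must not disturb.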
static const char *
mmcsd_errmsg(int e)
{
if (e < 0 || e > MMC_ERR_MAX)
return ("Bad error code");
return (errmsg[e]);
}
static daddr_t
mmcsd_rw(struct mmcsd_part *part, struct bio *bp)
{
daddr_t block, end;
struct mmc_command cmd;
struct mmc_command stop;
struct mmc_request req;
struct mmc_data data;
struct mmcsd_softc *sc;
device_t dev, mmcbus;
u_int numblocks, sz;
char *vaddr;
sc = part->sc;
dev = sc->dev;
mmcbus = sc->mmcbus;
block = bp->bio_pblkno;
sz = part->disk->d_sectorsize;
end = bp->bio_pblkno + (bp->bio_bcount / sz);
while (block < end) {
vaddr = bp->bio_data + (block - bp->bio_pblkno) * sz;
numblocks = min(end - block, sc->max_data);
memset(&req, 0, sizeof(req));
memset(&cmd, 0, sizeof(cmd));
memset(&stop, 0, sizeof(stop));
memset(&data, 0, sizeof(data));
cmd.mrq = &req;
req.cmd = &cmd;
cmd.data = &data;
if (bp->bio_cmd == BIO_READ) {
if (numblocks > 1)
cmd.opcode = MMC_READ_MULTIPLE_BLOCK;
else
cmd.opcode = MMC_READ_SINGLE_BLOCK;
} else {
sc->flags |= MMCSD_DIRTY;
if (numblocks > 1)
cmd.opcode = MMC_WRITE_MULTIPLE_BLOCK;
else
cmd.opcode = MMC_WRITE_BLOCK;
}
cmd.arg = block;
if (sc->high_cap == 0)
cmd.arg <<= 9;
cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
data.data = vaddr;
data.mrq = &req;
if (bp->bio_cmd == BIO_READ)
data.flags = MMC_DATA_READ;
else
data.flags = MMC_DATA_WRITE;
data.len = numblocks * sz;
if (numblocks > 1) {
data.flags |= MMC_DATA_MULTI;
stop.opcode = MMC_STOP_TRANSMISSION;
stop.arg = 0;
stop.flags = MMC_RSP_R1B | MMC_CMD_AC;
stop.mrq = &req;
req.stop = &stop;
}
MMCBUS_WAIT_FOR_REQUEST(mmcbus, dev, &req);
if (req.cmd->error != MMC_ERR_NONE) {
if (ppsratecheck(&sc->log_time, &sc->log_count,
LOG_PPS))
device_printf(dev, "Error indicated: %d %s\n",
req.cmd->error,
mmcsd_errmsg(req.cmd->error));
break;
}
block += numblocks;
}
return (block);
}
static daddr_t
mmcsd_delete(struct mmcsd_part *part, struct bio *bp)
{
daddr_t block, end, start, stop;
struct mmc_command cmd;
struct mmc_request req;
struct mmcsd_softc *sc;
device_t dev, mmcbus;
u_int erase_sector, sz;
int err;
bool use_trim;
sc = part->sc;
dev = sc->dev;
mmcbus = sc->mmcbus;
block = bp->bio_pblkno;
sz = part->disk->d_sectorsize;
end = bp->bio_pblkno + (bp->bio_bcount / sz);
use_trim = sc->flags & MMCSD_USE_TRIM;
if (use_trim == true) {
start = block;
stop = end;
} else {
/* Coalesce with the remainder of the previous request. */
if (block > part->eblock && block <= part->eend)
block = part->eblock;
if (end >= part->eblock && end < part->eend)
end = part->eend;
/* Safely round to the erase sector boundaries. */
erase_sector = sc->erase_sector;
start = block + erase_sector - 1; /* Round up. */
start -= start % erase_sector;
stop = end; /* Round down. */
stop -= end % erase_sector;
/*
* We can't erase an area smaller than an erase sector, so
* store it for later.
*/
if (start >= stop) {
part->eblock = block;
part->eend = end;
return (end);
}
}
if ((sc->flags & MMCSD_INAND_CMD38) != 0) {
err = mmc_switch(mmcbus, dev, sc->rca, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_INAND_CMD38, use_trim == true ?
EXT_CSD_INAND_CMD38_TRIM : EXT_CSD_INAND_CMD38_ERASE,
sc->cmd6_time, true);
if (err != MMC_ERR_NONE) {
device_printf(dev,
"Setting iNAND erase command failed %s\n",
mmcsd_errmsg(err));
return (block);
}
}
/*
* Pause re-tuning so it won't interfere with the order of erase
* commands. Note that the latter don't use the data lines, so
* re-tuning shouldn't actually become necessary during erase.
*/
MMCBUS_RETUNE_PAUSE(mmcbus, dev, false);
/* Set erase start position. */
memset(&req, 0, sizeof(req));
memset(&cmd, 0, sizeof(cmd));
cmd.mrq = &req;
req.cmd = &cmd;
if (sc->mode == mode_sd)
cmd.opcode = SD_ERASE_WR_BLK_START;
else
cmd.opcode = MMC_ERASE_GROUP_START;
cmd.arg = start;
if (sc->high_cap == 0)
cmd.arg <<= 9;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
MMCBUS_WAIT_FOR_REQUEST(mmcbus, dev, &req);
if (req.cmd->error != MMC_ERR_NONE) {
device_printf(dev, "Setting erase start position failed %s\n",
mmcsd_errmsg(req.cmd->error));
block = bp->bio_pblkno;
goto unpause;
}
/* Set erase stop position. */
memset(&req, 0, sizeof(req));
memset(&cmd, 0, sizeof(cmd));
req.cmd = &cmd;
if (sc->mode == mode_sd)
cmd.opcode = SD_ERASE_WR_BLK_END;
else
cmd.opcode = MMC_ERASE_GROUP_END;
cmd.arg = stop;
if (sc->high_cap == 0)
cmd.arg <<= 9;
cmd.arg--;
cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
MMCBUS_WAIT_FOR_REQUEST(mmcbus, dev, &req);
if (req.cmd->error != MMC_ERR_NONE) {
device_printf(dev, "Setting erase stop position failed %s\n",
mmcsd_errmsg(req.cmd->error));
block = bp->bio_pblkno;
goto unpause;
}
/* Erase range. */
memset(&req, 0, sizeof(req));
memset(&cmd, 0, sizeof(cmd));
req.cmd = &cmd;
cmd.opcode = MMC_ERASE;
cmd.arg = use_trim == true ? MMC_ERASE_TRIM : MMC_ERASE_ERASE;
cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
MMCBUS_WAIT_FOR_REQUEST(mmcbus, dev, &req);
if (req.cmd->error != MMC_ERR_NONE) {
device_printf(dev, "Issuing erase command failed %s\n",
mmcsd_errmsg(req.cmd->error));
block = bp->bio_pblkno;
goto unpause;
}
if (use_trim == false) {
/* Store one of the remaining parts for the next call. */
if (bp->bio_pblkno >= part->eblock || block == start) {
part->eblock = stop; /* Predict next forward. */
part->eend = end;
} else {
part->eblock = block; /* Predict next backward. */
part->eend = start;
}
}
block = end;
unpause:
MMCBUS_RETUNE_UNPAUSE(mmcbus, dev);
return (block);
}
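The round-up/round-down arithmetic in the erase path above (align the start up and the stop down to erase-sector boundaries, deferring any sub-sector remainder for a later request) is easy to get wrong. As a standalone illustration only, not part of the driver, the boundary logic can be sketched and exercised in isolation:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch of the erase-boundary rounding used above:
 * round 'block' up and 'end' down to multiples of 'erase_sector'.
 * Returns 0 (leaving the outputs untouched) when the aligned range
 * is empty, mirroring the "store it for later" path in the driver.
 */
static int
align_erase_range(uint64_t block, uint64_t end, uint64_t erase_sector,
    uint64_t *start, uint64_t *stop)
{
	uint64_t s, e;

	s = block + erase_sector - 1;	/* Round up. */
	s -= s % erase_sector;
	e = end;			/* Round down. */
	e -= e % erase_sector;
	if (s >= e)
		return (0);	/* Smaller than one erase sector. */
	*start = s;
	*stop = e;
	return (1);
}
```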
static int
mmcsd_dump(void *arg, void *virtual, vm_offset_t physical, off_t offset,
size_t length)
{
struct bio bp;
daddr_t block, end;
struct disk *disk;
struct mmcsd_softc *sc;
struct mmcsd_part *part;
device_t dev, mmcbus;
int err;
disk = arg;
part = disk->d_drv1;
sc = part->sc;
/* length zero is special and really means flush buffers to media */
if (length == 0) {
err = mmcsd_flush_cache(sc);
if (err != MMC_ERR_NONE)
return (EIO);
return (0);
}
dev = sc->dev;
mmcbus = sc->mmcbus;
g_reset_bio(&bp);
bp.bio_disk = disk;
bp.bio_pblkno = offset / disk->d_sectorsize;
bp.bio_bcount = length;
bp.bio_data = virtual;
bp.bio_cmd = BIO_WRITE;
end = bp.bio_pblkno + bp.bio_bcount / disk->d_sectorsize;
MMCBUS_ACQUIRE_BUS(mmcbus, dev);
err = mmcsd_switch_part(mmcbus, dev, sc->rca, part->type);
if (err != MMC_ERR_NONE) {
if (ppsratecheck(&sc->log_time, &sc->log_count, LOG_PPS))
device_printf(dev, "Partition switch error\n");
MMCBUS_RELEASE_BUS(mmcbus, dev);
return (EIO);
}
block = mmcsd_rw(part, &bp);
MMCBUS_RELEASE_BUS(mmcbus, dev);
return ((block < end) ? EIO : 0);
}
static void
mmcsd_task(void *arg)
{
daddr_t block, end;
struct mmcsd_part *part;
struct mmcsd_softc *sc;
struct bio *bp;
device_t dev, mmcbus;
int err, sz;
part = arg;
sc = part->sc;
dev = sc->dev;
mmcbus = sc->mmcbus;
while (1) {
MMCSD_DISK_LOCK(part);
do {
if (part->running == 0)
goto out;
bp = bioq_takefirst(&part->bio_queue);
if (bp == NULL)
msleep(part, &part->disk_mtx, PRIBIO,
"mmcsd disk jobqueue", 0);
} while (bp == NULL);
MMCSD_DISK_UNLOCK(part);
if (__predict_false(bp->bio_cmd == BIO_FLUSH)) {
if (mmcsd_flush_cache(sc) != MMC_ERR_NONE) {
bp->bio_error = EIO;
bp->bio_flags |= BIO_ERROR;
}
biodone(bp);
continue;
}
if (bp->bio_cmd != BIO_READ && part->ro) {
bp->bio_error = EROFS;
bp->bio_resid = bp->bio_bcount;
bp->bio_flags |= BIO_ERROR;
biodone(bp);
continue;
}
MMCBUS_ACQUIRE_BUS(mmcbus, dev);
sz = part->disk->d_sectorsize;
block = bp->bio_pblkno;
end = bp->bio_pblkno + (bp->bio_bcount / sz);
err = mmcsd_switch_part(mmcbus, dev, sc->rca, part->type);
if (err != MMC_ERR_NONE) {
if (ppsratecheck(&sc->log_time, &sc->log_count,
LOG_PPS))
device_printf(dev, "Partition switch error\n");
goto release;
}
if (bp->bio_cmd == BIO_READ || bp->bio_cmd == BIO_WRITE) {
/* Access to the remaining erase block obsoletes it. */
if (block < part->eend && end > part->eblock)
part->eblock = part->eend = 0;
block = mmcsd_rw(part, bp);
} else if (bp->bio_cmd == BIO_DELETE) {
block = mmcsd_delete(part, bp);
}
release:
MMCBUS_RELEASE_BUS(mmcbus, dev);
if (block < end) {
bp->bio_error = EIO;
bp->bio_resid = (end - block) * sz;
bp->bio_flags |= BIO_ERROR;
} else {
bp->bio_resid = 0;
}
biodone(bp);
}
out:
/* tell parent we're done */
part->running = -1;
MMCSD_DISK_UNLOCK(part);
wakeup(part);
kproc_exit(0);
}
static int
mmcsd_bus_bit_width(device_t dev)
{
if (mmc_get_bus_width(dev) == bus_width_1)
return (1);
if (mmc_get_bus_width(dev) == bus_width_4)
return (4);
return (8);
}
static int
mmcsd_flush_cache(struct mmcsd_softc *sc)
{
device_t dev, mmcbus;
int err;
if ((sc->flags & MMCSD_FLUSH_CACHE) == 0)
return (MMC_ERR_NONE);
dev = sc->dev;
mmcbus = sc->mmcbus;
MMCBUS_ACQUIRE_BUS(mmcbus, dev);
if ((sc->flags & MMCSD_DIRTY) == 0) {
MMCBUS_RELEASE_BUS(mmcbus, dev);
return (MMC_ERR_NONE);
}
err = mmc_switch(mmcbus, dev, sc->rca, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_FLUSH_CACHE, EXT_CSD_FLUSH_CACHE_FLUSH, 60 * 1000, true);
if (err == MMC_ERR_NONE)
sc->flags &= ~MMCSD_DIRTY;
MMCBUS_RELEASE_BUS(mmcbus, dev);
return (err);
}
static device_method_t mmcsd_methods[] = {
DEVMETHOD(device_probe, mmcsd_probe),
DEVMETHOD(device_attach, mmcsd_attach),
DEVMETHOD(device_detach, mmcsd_detach),
DEVMETHOD(device_shutdown, mmcsd_shutdown),
DEVMETHOD(device_suspend, mmcsd_suspend),
DEVMETHOD(device_resume, mmcsd_resume),
DEVMETHOD_END
};
static driver_t mmcsd_driver = {
"mmcsd",
mmcsd_methods,
sizeof(struct mmcsd_softc),
};
static devclass_t mmcsd_devclass;
static int
mmcsd_handler(module_t mod __unused, int what, void *arg __unused)
{
switch (what) {
case MOD_LOAD:
flash_register_slicer(mmcsd_slicer, FLASH_SLICES_TYPE_MMC,
TRUE);
return (0);
case MOD_UNLOAD:
flash_register_slicer(NULL, FLASH_SLICES_TYPE_MMC, TRUE);
return (0);
}
return (0);
}
DRIVER_MODULE(mmcsd, mmc, mmcsd_driver, mmcsd_devclass, mmcsd_handler, NULL);
MODULE_DEPEND(mmcsd, g_flashmap, 0, 0, 0);
MMC_DEPEND(mmcsd);
Index: projects/clang800-import/sys/dev/mmc/mmcvar.h
===================================================================
--- projects/clang800-import/sys/dev/mmc/mmcvar.h (revision 343955)
+++ projects/clang800-import/sys/dev/mmc/mmcvar.h (revision 343956)
@@ -1,102 +1,102 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2006 Bernd Walter. All rights reserved.
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Portions of this software may have been developed with reference to
* the SD Simplified Specification. The following disclaimer may apply:
*
* The following conditions apply to the release of the simplified
* specification ("Simplified Specification") by the SD Card Association and
* the SD Group. The Simplified Specification is a subset of the complete SD
* Specification which is owned by the SD Card Association and the SD
* Group. This Simplified Specification is provided on a non-confidential
* basis subject to the disclaimers below. Any implementation of the
* Simplified Specification may require a license from the SD Card
* Association, SD Group, SD-3C LLC or other third parties.
*
* Disclaimers:
*
* The information contained in the Simplified Specification is presented only
* as a standard specification for SD Cards and SD Host/Ancillary products and
* is provided "AS-IS" without any representations or warranties of any
* kind. No responsibility is assumed by the SD Group, SD-3C LLC or the SD
* Card Association for any damages, any infringements of patents or other
* right of the SD Group, SD-3C LLC, the SD Card Association or any third
* parties, which may result from its use. No license is granted by
* implication, estoppel or otherwise under any patent or other rights of the
* SD Group, SD-3C LLC, the SD Card Association or any third party. Nothing
* herein shall be construed as an obligation by the SD Group, the SD-3C LLC
* or the SD Card Association to disclose or distribute any technical
* information, know-how or other confidential information to any third party.
*
* $FreeBSD$
*/
#ifndef DEV_MMC_MMCVAR_H
#define DEV_MMC_MMCVAR_H
enum mmc_device_ivars {
MMC_IVAR_SPEC_VERS,
MMC_IVAR_DSR_IMP,
MMC_IVAR_MEDIA_SIZE,
MMC_IVAR_RCA,
MMC_IVAR_SECTOR_SIZE,
MMC_IVAR_TRAN_SPEED,
MMC_IVAR_READ_ONLY,
MMC_IVAR_HIGH_CAP,
MMC_IVAR_CARD_TYPE,
MMC_IVAR_BUS_WIDTH,
MMC_IVAR_ERASE_SECTOR,
MMC_IVAR_MAX_DATA,
MMC_IVAR_CMD6_TIMEOUT,
MMC_IVAR_QUIRKS,
MMC_IVAR_CARD_ID_STRING,
MMC_IVAR_CARD_SN_STRING,
};
/*
* Simplified accessors for mmc devices
*/
#define MMC_ACCESSOR(var, ivar, type) \
__BUS_ACCESSOR(mmc, var, MMC, ivar, type)
MMC_ACCESSOR(spec_vers, SPEC_VERS, uint8_t)
MMC_ACCESSOR(dsr_imp, DSR_IMP, int)
MMC_ACCESSOR(media_size, MEDIA_SIZE, long)
MMC_ACCESSOR(rca, RCA, int)
MMC_ACCESSOR(sector_size, SECTOR_SIZE, int)
MMC_ACCESSOR(tran_speed, TRAN_SPEED, int)
MMC_ACCESSOR(read_only, READ_ONLY, int)
MMC_ACCESSOR(high_cap, HIGH_CAP, int)
MMC_ACCESSOR(card_type, CARD_TYPE, int)
MMC_ACCESSOR(bus_width, BUS_WIDTH, int)
MMC_ACCESSOR(erase_sector, ERASE_SECTOR, int)
MMC_ACCESSOR(max_data, MAX_DATA, int)
MMC_ACCESSOR(cmd6_timeout, CMD6_TIMEOUT, u_int)
MMC_ACCESSOR(quirks, QUIRKS, u_int)
MMC_ACCESSOR(card_id_string, CARD_ID_STRING, const char *)
MMC_ACCESSOR(card_sn_string, CARD_SN_STRING, const char *)
#endif /* DEV_MMC_MMCVAR_H */
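The MMC_ACCESSOR/__BUS_ACCESSOR pattern above generates typed getter and setter wrappers by token pasting. The toy reimplementation below illustrates the same technique over a plain struct; it is not the real __BUS_ACCESSOR, which dispatches through BUS_READ_IVAR/BUS_WRITE_IVAR on a device_t:

```c
#include <assert.h>

/* Toy device with instance variables addressed by index. */
struct toy_dev {
	int ivars[4];
};

enum { TOY_IVAR_RCA, TOY_IVAR_SECTOR_SIZE };

/*
 * Token-pasting accessor generator in the spirit of __BUS_ACCESSOR():
 * TOY_ACCESSOR(rca, RCA, int) emits toy_get_rca()/toy_set_rca().
 */
#define TOY_ACCESSOR(var, ivar, type)				\
static inline type						\
toy_get_##var(struct toy_dev *dev)				\
{								\
	return ((type)dev->ivars[TOY_IVAR_##ivar]);		\
}								\
static inline void						\
toy_set_##var(struct toy_dev *dev, type t)			\
{								\
	dev->ivars[TOY_IVAR_##ivar] = (int)t;			\
}

TOY_ACCESSOR(rca, RCA, int)
TOY_ACCESSOR(sector_size, SECTOR_SIZE, int)
```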
Index: projects/clang800-import/sys/dev/netmap/netmap.c
===================================================================
--- projects/clang800-import/sys/dev/netmap/netmap.c (revision 343955)
+++ projects/clang800-import/sys/dev/netmap/netmap.c (revision 343956)
@@ -1,4218 +1,4217 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (C) 2011-2014 Matteo Landi
* Copyright (C) 2011-2016 Luigi Rizzo
* Copyright (C) 2011-2016 Giuseppe Lettieri
* Copyright (C) 2011-2016 Vincenzo Maffione
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
* $FreeBSD$
*
* This module supports memory mapped access to network devices,
* see netmap(4).
*
* The module uses a large, memory pool allocated by the kernel
* and accessible as mmapped memory by multiple userspace threads/processes.
* The memory pool contains packet buffers and "netmap rings",
* i.e. user-accessible copies of the interface's queues.
*
* Access to the network card works like this:
* 1. a process/thread issues one or more open() on /dev/netmap, to create
* a select()able file descriptor on which events are reported.
* 2. on each descriptor, the process issues an ioctl() to identify
* the interface that should report events to the file descriptor.
* 3. on each descriptor, the process issues an mmap() request to
* map the shared memory region within the process' address space.
* The list of interesting queues is indicated by a location in
* the shared memory region.
* 4. using the functions in the netmap(4) userspace API, a process
* can look up the occupation state of a queue, access memory buffers,
* and retrieve received packets or enqueue packets to transmit.
* 5. using some ioctl()s the process can synchronize the userspace view
* of the queue with the actual status in the kernel. This includes both
* receiving the notification of new packets, and transmitting new
* packets on the output interface.
* 6. select() or poll() can be used to wait for events on individual
* transmit or receive queues (or all queues for a given interface).
*
SYNCHRONIZATION (USER)
The netmap rings and data structures may be shared among multiple
user threads or even independent processes.
Any synchronization among those threads/processes is delegated
to the threads themselves. Only one thread at a time can be in
a system call on the same netmap ring. The OS does not enforce
this and only guarantees against system crashes in case of
invalid usage.
LOCKING (INTERNAL)
Within the kernel, access to the netmap rings is protected as follows:
- a spinlock on each ring, to handle producer/consumer races on
RX rings attached to the host stack (against multiple host
threads writing from the host stack to the same ring),
and on 'destination' rings attached to a VALE switch
(i.e. RX rings in VALE ports, and TX rings in NIC/host ports)
* protecting multiple active senders for the same destination.
- an atomic variable to guarantee that there is at most one
instance of *_*xsync() on the ring at any time.
For rings connected to user file
descriptors, an atomic_test_and_set() protects this, and the
lock on the ring is not actually used.
For NIC RX rings connected to a VALE switch, an atomic_test_and_set()
is also used to prevent multiple executions (the driver might indeed
already guarantee this).
For NIC TX rings connected to a VALE switch, the lock arbitrates
access to the queue (both when allocating buffers and when pushing
them out).
- *xsync() should be protected against initializations of the card.
On FreeBSD most devices have the reset routine protected by
a RING lock (ixgbe, igb, em) or core lock (re). lem is missing
* the RING protection on rx_reset(); this should be added.
On linux there is an external lock on the tx path, which probably
also arbitrates access to the reset routine. XXX to be revised
- a per-interface core_lock protecting access from the host stack
while interfaces may be detached from netmap mode.
XXX there should be no need for this lock if we detach the interfaces
only while they are down.
--- VALE SWITCH ---
NMG_LOCK() serializes all modifications to switches and ports.
A switch cannot be deleted until all ports are gone.
For each switch, an SX lock (RWlock on linux) protects
deletion of ports. When configuring or deleting a new port, the
lock is acquired in exclusive mode (after holding NMG_LOCK).
When forwarding, the lock is acquired in shared mode (without NMG_LOCK).
The lock is held throughout the entire forwarding cycle,
* during which the thread may incur a page fault.
Hence it is important that sleepable shared locks are used.
On the rx ring, the per-port lock is grabbed initially to reserve
* a number of slots in the ring, then the lock is released,
packets are copied from source to destination, and then
the lock is acquired again and the receive ring is updated.
(A similar thing is done on the tx ring for NIC and host stack
ports attached to the switch)
*/
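The queue occupancy lookups referred to in step 4 above reduce to simple index arithmetic over a ring's cur/tail pointers. The sketch below mirrors the nm_ring_space()/nm_ring_next() helpers from netmap_user.h over a simplified stand-in ring struct; it is an illustration only, not the real netmap structures:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct netmap_ring (illustration only). */
struct toy_ring {
	uint32_t head;		/* first slot owned by userspace */
	uint32_t cur;		/* wakeup point for poll() */
	uint32_t tail;		/* first slot owned by the kernel */
	uint32_t num_slots;
};

/* Number of slots userspace may consume, as in nm_ring_space(). */
static uint32_t
toy_ring_space(const struct toy_ring *r)
{
	int32_t ret = (int32_t)(r->tail - r->cur);

	if (ret < 0)
		ret += r->num_slots;
	return ((uint32_t)ret);
}

/* Advance an index with wraparound, as in nm_ring_next(). */
static uint32_t
toy_ring_next(const struct toy_ring *r, uint32_t i)
{
	return (i + 1 == r->num_slots ? 0 : i + 1);
}
```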
/* --- internals ----
*
* Roadmap to the code that implements the above.
*
* > 1. a process/thread issues one or more open() on /dev/netmap, to create
* > a select()able file descriptor on which events are reported.
*
* Internally, we allocate a netmap_priv_d structure, that will be
* initialized on ioctl(NIOCREGIF). There is one netmap_priv_d
* structure for each open().
*
* os-specific:
* FreeBSD: see netmap_open() (netmap_freebsd.c)
* linux: see linux_netmap_open() (netmap_linux.c)
*
* > 2. on each descriptor, the process issues an ioctl() to identify
* > the interface that should report events to the file descriptor.
*
* Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0.
* Most important things happen in netmap_get_na() and
* netmap_do_regif(), called from there. Additional details can be
* found in the comments above those functions.
*
* In all cases, this action creates/takes-a-reference-to a
* netmap_*_adapter describing the port, and allocates a netmap_if
* and all necessary netmap rings, filling them with netmap buffers.
*
* In this phase, the sync callbacks for each ring are set (these are used
* in steps 5 and 6 below). The callbacks depend on the type of adapter.
* The adapter creation/initialization code puts them in the
* netmap_adapter (fields na->nm_txsync and na->nm_rxsync). Then, they
* are copied from there to the netmap_kring's during netmap_do_regif(), by
* the nm_krings_create() callback. All the nm_krings_create callbacks
* actually call netmap_krings_create() to perform this and the other
* common stuff. netmap_krings_create() also takes care of the host rings,
* if needed, by setting their sync callbacks appropriately.
*
* Additional actions depend on the kind of netmap_adapter that has been
* registered:
*
* - netmap_hw_adapter: [netmap.c]
* This is a system netdev/ifp with native netmap support.
* The ifp is detached from the host stack by redirecting:
* - transmissions (from the network stack) to netmap_transmit()
* - receive notifications to the nm_notify() callback for
* this adapter. The callback is normally netmap_notify(), unless
* the ifp is attached to a bridge using bwrap, in which case it
* is netmap_bwrap_intr_notify().
*
* - netmap_generic_adapter: [netmap_generic.c]
* A system netdev/ifp without native netmap support.
*
* (the decision about native/non native support is taken in
* netmap_get_hw_na(), called by netmap_get_na())
*
* - netmap_vp_adapter [netmap_vale.c]
* Returned by netmap_get_bdg_na().
* This is a persistent or ephemeral VALE port. Ephemeral ports
* are created on the fly if they don't already exist, and are
* always attached to a bridge.
* Persistent VALE ports must be created separately, and
* then attached like normal NICs. The NIOCREGIF we are examining
* will find them only if they had previously been created and
* attached (see VALE_CTL below).
*
* - netmap_pipe_adapter [netmap_pipe.c]
* Returned by netmap_get_pipe_na().
* Both pipe ends are created, if they didn't already exist.
*
* - netmap_monitor_adapter [netmap_monitor.c]
* Returned by netmap_get_monitor_na().
* If successful, the nm_sync callbacks of the monitored adapter
* will be intercepted by the returned monitor.
*
* - netmap_bwrap_adapter [netmap_vale.c]
* Cannot be obtained in this way, see VALE_CTL below
*
*
* os-specific:
* linux: we first go through linux_netmap_ioctl() to
* adapt the FreeBSD interface to the linux one.
*
*
* > 3. on each descriptor, the process issues an mmap() request to
* > map the shared memory region within the process' address space.
* > The list of interesting queues is indicated by a location in
* > the shared memory region.
*
* os-specific:
* FreeBSD: netmap_mmap_single (netmap_freebsd.c).
* linux: linux_netmap_mmap (netmap_linux.c).
*
* > 4. using the functions in the netmap(4) userspace API, a process
* > can look up the occupation state of a queue, access memory buffers,
* > and retrieve received packets or enqueue packets to transmit.
*
* these actions do not involve the kernel.
*
* > 5. using some ioctl()s the process can synchronize the userspace view
* > of the queue with the actual status in the kernel. This includes both
* > receiving the notification of new packets, and transmitting new
* > packets on the output interface.
*
* These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC
* cases. They invoke the nm_sync callbacks on the netmap_kring
* structures, as initialized in step 2 and maybe later modified
* by a monitor. Monitors, however, will always call the original
* callback before doing anything else.
*
*
* > 6. select() or poll() can be used to wait for events on individual
* > transmit or receive queues (or all queues for a given interface).
*
* Implemented in netmap_poll(). This will call the same nm_sync()
* callbacks as in step 5 above.
*
* os-specific:
* linux: we first go through linux_netmap_poll() to adapt
* the FreeBSD interface to the linux one.
*
*
* ---- VALE_CTL -----
*
* VALE switches are controlled by issuing a NIOCREGIF with a non-null
* nr_cmd in the nmreq structure. These subcommands are handled by
* netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created
* and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF
* subcommands, respectively.
*
* Any network interface known to the system (including a persistent VALE
* port) can be attached to a VALE switch by issuing the
* NETMAP_REQ_VALE_ATTACH command. After the attachment, persistent VALE ports
* look exactly like ephemeral VALE ports (as created in step 2 above). The
* attachment of other interfaces, instead, requires the creation of a
* netmap_bwrap_adapter. Moreover, the attached interface must be put in
* netmap mode. This may require the creation of a netmap_generic_adapter if
* we have no native support for the interface, or if generic adapters have
* been forced by sysctl.
*
* Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(),
* called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach()
* callback. In the case of the bwrap, the callback creates the
* netmap_bwrap_adapter. The initialization of the bwrap is then
* completed by calling netmap_do_regif() on it, in the nm_bdg_ctl()
* callback (netmap_bwrap_bdg_ctl in netmap_vale.c).
* A generic adapter for the wrapped ifp will be created if needed, when
* netmap_get_bdg_na() calls netmap_get_hw_na().
*
*
* ---- DATAPATHS -----
*
* -= SYSTEM DEVICE WITH NATIVE SUPPORT =-
*
* na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach()
*
* - tx from netmap userspace:
* concurrently:
* 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == DEVICE_netmap_txsync()
* 2) device interrupt handler
* na->nm_notify() == netmap_notify()
* - rx from netmap userspace:
* concurrently:
* 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == DEVICE_netmap_rxsync()
* 2) device interrupt handler
* na->nm_notify() == netmap_notify()
* - rx from host stack
* concurrently:
* 1) host stack
* netmap_transmit()
* na->nm_notify == netmap_notify()
* 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_rxsync_from_host
* netmap_rxsync_from_host(na, NULL, NULL)
* - tx to host stack
* ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_txsync_to_host
* netmap_txsync_to_host(na)
* nm_os_send_up()
* FreeBSD: na->if_input() == ether_input()
* linux: netif_rx() with NM_MAGIC_PRIORITY_RX
*
*
* -= SYSTEM DEVICE WITH GENERIC SUPPORT =-
*
* na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach()
*
* - tx from netmap userspace:
* concurrently:
* 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == generic_netmap_txsync()
* nm_os_generic_xmit_frame()
* linux: dev_queue_xmit() with NM_MAGIC_PRIORITY_TX
* ifp->ndo_start_xmit == generic_ndo_start_xmit()
* gna->save_start_xmit == orig. dev. start_xmit
* FreeBSD: na->if_transmit() == orig. dev if_transmit
* 2) generic_mbuf_destructor()
* na->nm_notify() == netmap_notify()
* - rx from netmap userspace:
* 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == generic_netmap_rxsync()
* mbq_safe_dequeue()
* 2) device driver
* generic_rx_handler()
* mbq_safe_enqueue()
* na->nm_notify() == netmap_notify()
* - rx from host stack
* FreeBSD: same as native
* Linux: same as native except:
* 1) host stack
* dev_queue_xmit() without NM_MAGIC_PRIORITY_TX
* ifp->ndo_start_xmit == generic_ndo_start_xmit()
* netmap_transmit()
* na->nm_notify() == netmap_notify()
* - tx to host stack (same as native):
*
*
* -= VALE =-
*
* INCOMING:
*
* - VALE ports:
* ioctl(NIOCTXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_vp_txsync()
*
* - system device with native support:
* from cable:
* interrupt
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
* kring->nm_sync() == DEVICE_netmap_rxsync()
* netmap_vp_txsync()
* kring->nm_sync() == DEVICE_netmap_rxsync()
* from host stack:
* netmap_transmit()
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
* kring->nm_sync() == netmap_rxsync_from_host()
* netmap_vp_txsync()
*
* - system device with generic support:
* from device driver:
* generic_rx_handler()
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring)
* kring->nm_sync() == generic_netmap_rxsync()
* netmap_vp_txsync()
* kring->nm_sync() == generic_netmap_rxsync()
* from host stack:
* netmap_transmit()
* na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring)
* kring->nm_sync() == netmap_rxsync_from_host()
* netmap_vp_txsync()
*
* (all cases) --> nm_bdg_flush()
* dest_na->nm_notify() == (see below)
*
* OUTGOING:
*
* - VALE ports:
* concurrently:
* 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context
* kring->nm_sync() == netmap_vp_rxsync()
* 2) from nm_bdg_flush()
* na->nm_notify() == netmap_notify()
*
* - system device with native support:
* to cable:
* na->nm_notify() == netmap_bwrap_notify()
* netmap_vp_rxsync()
* kring->nm_sync() == DEVICE_netmap_txsync()
* netmap_vp_rxsync()
* to host stack:
* netmap_vp_rxsync()
* kring->nm_sync() == netmap_txsync_to_host
* netmap_vp_rxsync_locked()
*
* - system device with generic adapter:
* to device driver:
* na->nm_notify() == netmap_bwrap_notify()
* netmap_vp_rxsync()
* kring->nm_sync() == generic_netmap_txsync()
* netmap_vp_rxsync()
* to host stack:
* netmap_vp_rxsync()
* kring->nm_sync() == netmap_txsync_to_host
* netmap_vp_rxsync()
*
*/
/*
* OS-specific code that is used only within this file.
* Other OS-specific code that must be accessed by drivers
* is present in netmap_kern.h
*/
#if defined(__FreeBSD__)
#include <sys/cdefs.h> /* prerequisite */
#include <sys/types.h>
#include <sys/errno.h>
#include <sys/param.h> /* defines used in kernel.h */
#include <sys/kernel.h> /* types used in module initialization */
#include <sys/conf.h> /* cdevsw struct, UID, GID */
#include <sys/filio.h> /* FIONBIO */
#include <sys/sockio.h>
#include <sys/socketvar.h> /* struct socket */
#include <sys/malloc.h>
#include <sys/poll.h>
#include <sys/rwlock.h>
#include <sys/socket.h> /* sockaddrs */
#include <sys/selinfo.h>
#include <sys/sysctl.h>
#include <sys/jail.h>
#include <net/vnet.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/bpf.h> /* BIOCIMMEDIATE */
#include <machine/bus.h> /* bus_dmamap_* */
#include <sys/endian.h>
#include <sys/refcount.h>
#include <net/ethernet.h> /* ETHER_BPF_MTAP */
#elif defined(linux)
#include "bsd_glue.h"
#elif defined(__APPLE__)
#warning OSX support is only partial
#include "osx_glue.h"
#elif defined (_WIN32)
#include "win_glue.h"
#else
#error Unsupported platform
#endif /* unsupported */
/*
* common headers
*/
#include <net/netmap.h>
#include <dev/netmap/netmap_kern.h>
#include <dev/netmap/netmap_mem2.h>
/* user-controlled variables */
int netmap_verbose;
#ifdef CONFIG_NETMAP_DEBUG
int netmap_debug;
#endif /* CONFIG_NETMAP_DEBUG */
static int netmap_no_timestamp; /* don't timestamp on rxsync */
int netmap_no_pendintr = 1;
int netmap_txsync_retry = 2;
static int netmap_fwd = 0; /* force transparent forwarding */
/*
* netmap_admode selects the netmap mode to use.
* Invalid values are reset to NETMAP_ADMODE_BEST
*/
enum { NETMAP_ADMODE_BEST = 0, /* use native, fallback to generic */
NETMAP_ADMODE_NATIVE, /* either native or none */
NETMAP_ADMODE_GENERIC, /* force generic */
NETMAP_ADMODE_LAST };
static int netmap_admode = NETMAP_ADMODE_BEST;
/* netmap_generic_mit controls mitigation of RX notifications for
* the generic netmap adapter. The value is a time interval in
* nanoseconds. */
int netmap_generic_mit = 100*1000;
/* We use by default netmap-aware qdiscs with generic netmap adapters,
* even if there can be a little performance hit with hardware NICs.
* However, using the qdisc is the safer approach, for two reasons:
* 1) it prevents non-fifo qdiscs from breaking the TX notification
* scheme, which is based on mbuf destructors when txqdisc is
* not used.
* 2) it makes it possible to transmit over software devices that
* change skb->dev, like bridge, veth, ...
*
* Anyway users looking for the best performance should
* use native adapters.
*/
#ifdef linux
int netmap_generic_txqdisc = 1;
#endif
/* Default number of slots and queues for generic adapters. */
int netmap_generic_ringsize = 1024;
int netmap_generic_rings = 1;
/* Non-zero to enable checksum offloading in NIC drivers */
int netmap_generic_hwcsum = 0;
/* Non-zero if ptnet devices are allowed to use virtio-net headers. */
int ptnet_vnet_hdr = 1;
/*
* SYSCTL calls are grouped between SYSBEGIN and SYSEND to be emulated
* in some other operating systems
*/
SYSBEGIN(main_init);
SYSCTL_DECL(_dev_netmap);
SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
#ifdef CONFIG_NETMAP_DEBUG
SYSCTL_INT(_dev_netmap, OID_AUTO, debug,
CTLFLAG_RW, &netmap_debug, 0, "Debug messages");
#endif /* CONFIG_NETMAP_DEBUG */
SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr, CTLFLAG_RW, &netmap_no_pendintr,
0, "Always look for new received packets.");
SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
&netmap_txsync_retry, 0, "Number of txsync loops in bridge's flush.");
SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0,
"Force NR_FORWARD mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0,
"Adapter mode. 0 selects the best option available, "
"1 forces native adapter, 2 forces emulated adapter");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_hwcsum, CTLFLAG_RW, &netmap_generic_hwcsum,
0, "Hardware checksums. 0 to disable checksum generation by the NIC (default), "
"1 to enable checksum generation by the NIC");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit,
0, "RX notification interval in nanoseconds");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW,
&netmap_generic_ringsize, 0,
"Number of per-ring slots for emulated netmap mode");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW,
&netmap_generic_rings, 0,
"Number of TX/RX queues for emulated netmap adapters");
#ifdef linux
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_txqdisc, CTLFLAG_RW,
&netmap_generic_txqdisc, 0, "Use qdisc for generic adapters");
#endif
SYSCTL_INT(_dev_netmap, OID_AUTO, ptnet_vnet_hdr, CTLFLAG_RW, &ptnet_vnet_hdr,
0, "Allow ptnet devices to use virtio-net headers");
SYSEND;
NMG_LOCK_T netmap_global_lock;
/*
* mark the ring as stopped, and run through the locks
* to make sure other users get to see it.
* stopped must be either NR_KR_STOPPED (for unbounded stop)
* or NR_KR_LOCKED (brief stop for mutual exclusion purposes)
*/
static void
netmap_disable_ring(struct netmap_kring *kr, int stopped)
{
nm_kr_stop(kr, stopped);
// XXX check if nm_kr_stop is sufficient
mtx_lock(&kr->q_lock);
mtx_unlock(&kr->q_lock);
nm_kr_put(kr);
}
/* stop or enable a single ring */
void
netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped)
{
if (stopped)
netmap_disable_ring(NMR(na, t)[ring_id], stopped);
else
NMR(na, t)[ring_id]->nkr_stopped = 0;
}
/* stop or enable all the rings of na */
void
netmap_set_all_rings(struct netmap_adapter *na, int stopped)
{
int i;
enum txrx t;
if (!nm_netmap_on(na))
return;
for_rx_tx(t) {
for (i = 0; i < netmap_real_rings(na, t); i++) {
netmap_set_ring(na, i, t, stopped);
}
}
}
/*
* Convenience function used in drivers. Waits for current txsync()s/rxsync()s
* to finish and prevents any new one from starting. Call this before turning
* netmap mode off, or before removing the hardware rings (e.g., on module
* unload).
*/
void
netmap_disable_all_rings(struct ifnet *ifp)
{
if (NM_NA_VALID(ifp)) {
netmap_set_all_rings(NA(ifp), NM_KR_STOPPED);
}
}
/*
* Convenience function used in drivers. Re-enables rxsync and txsync on the
* adapter's rings. In Linux drivers, this should be placed near each
* napi_enable().
*/
void
netmap_enable_all_rings(struct ifnet *ifp)
{
if (NM_NA_VALID(ifp)) {
netmap_set_all_rings(NA(ifp), 0 /* enabled */);
}
}
void
netmap_make_zombie(struct ifnet *ifp)
{
if (NM_NA_VALID(ifp)) {
struct netmap_adapter *na = NA(ifp);
netmap_set_all_rings(na, NM_KR_LOCKED);
na->na_flags |= NAF_ZOMBIE;
netmap_set_all_rings(na, 0);
}
}
void
netmap_undo_zombie(struct ifnet *ifp)
{
if (NM_NA_VALID(ifp)) {
struct netmap_adapter *na = NA(ifp);
if (na->na_flags & NAF_ZOMBIE) {
netmap_set_all_rings(na, NM_KR_LOCKED);
na->na_flags &= ~NAF_ZOMBIE;
netmap_set_all_rings(na, 0);
}
}
}
/*
* generic bound_checking function
*/
u_int
nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg)
{
u_int oldv = *v;
const char *op = NULL;
if (dflt < lo)
dflt = lo;
if (dflt > hi)
dflt = hi;
if (oldv < lo) {
*v = dflt;
op = "Bump";
} else if (oldv > hi) {
*v = hi;
op = "Clamp";
}
if (op && msg)
nm_prinf("%s %s to %d (was %d)", op, msg, *v, oldv);
return *v;
}
/*
* Packet-dump function, writing into a user-supplied or static buffer.
* The destination buffer must be at least 30+4*len bytes.
*/
const char *
nm_dump_buf(char *p, int len, int lim, char *dst)
{
static char _dst[8192];
int i, j, i0;
static char hex[] = "0123456789abcdef";
char *o; /* output position */
#define P_HI(x) hex[((x) & 0xf0)>>4]
#define P_LO(x) hex[((x) & 0xf)]
#define P_C(x) ((x) >= 0x20 && (x) <= 0x7e ? (x) : '.')
if (!dst)
dst = _dst;
if (lim <= 0 || lim > len)
lim = len;
o = dst;
sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim);
o += strlen(o);
/* hexdump routine */
for (i = 0; i < lim; ) {
sprintf(o, "%5d: ", i);
o += strlen(o);
memset(o, ' ', 48);
i0 = i;
for (j=0; j < 16 && i < lim; i++, j++) {
o[j*3] = P_HI(p[i]);
o[j*3+1] = P_LO(p[i]);
}
i = i0;
for (j=0; j < 16 && i < lim; i++, j++)
o[j + 48] = P_C(p[i]);
o[j+48] = '\n';
o += j+49;
}
*o = '\0';
#undef P_HI
#undef P_LO
#undef P_C
return dst;
}
/*
* Fetch configuration from the device, to cope with dynamic
* reconfigurations after loading the module.
*/
/* call with NMG_LOCK held */
int
netmap_update_config(struct netmap_adapter *na)
{
struct nm_config_info info;
bzero(&info, sizeof(info));
if (na->nm_config == NULL ||
na->nm_config(na, &info)) {
/* take whatever we had at init time */
info.num_tx_rings = na->num_tx_rings;
info.num_tx_descs = na->num_tx_desc;
info.num_rx_rings = na->num_rx_rings;
info.num_rx_descs = na->num_rx_desc;
info.rx_buf_maxsize = na->rx_buf_maxsize;
}
if (na->num_tx_rings == info.num_tx_rings &&
na->num_tx_desc == info.num_tx_descs &&
na->num_rx_rings == info.num_rx_rings &&
na->num_rx_desc == info.num_rx_descs &&
na->rx_buf_maxsize == info.rx_buf_maxsize)
return 0; /* nothing changed */
if (na->active_fds == 0) {
na->num_tx_rings = info.num_tx_rings;
na->num_tx_desc = info.num_tx_descs;
na->num_rx_rings = info.num_rx_rings;
na->num_rx_desc = info.num_rx_descs;
na->rx_buf_maxsize = info.rx_buf_maxsize;
if (netmap_verbose)
nm_prinf("configuration changed for %s: txring %d x %d, "
"rxring %d x %d, rxbufsz %d",
na->name, na->num_tx_rings, na->num_tx_desc,
na->num_rx_rings, na->num_rx_desc, na->rx_buf_maxsize);
return 0;
}
nm_prerr("WARNING: configuration changed for %s while active: "
"txring %d x %d, rxring %d x %d, rxbufsz %d",
na->name, info.num_tx_rings, info.num_tx_descs,
info.num_rx_rings, info.num_rx_descs,
info.rx_buf_maxsize);
return 1;
}
/* nm_sync callbacks for the host rings */
static int netmap_txsync_to_host(struct netmap_kring *kring, int flags);
static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags);
/* create the krings array and initialize the fields common to all adapters.
* The array layout is this:
*
* +----------+
* na->tx_rings ----->| | \
* | | } na->num_tx_rings
* | | /
* +----------+
* | | host tx kring
* na->rx_rings ----> +----------+
* | | \
* | | } na->num_rx_rings
* | | /
* +----------+
* | | host rx kring
* +----------+
* na->tailroom ----->| | \
* | | } tailroom bytes
* | | /
* +----------+
*
* Note: for compatibility, host krings are created even when not needed.
* The tailroom space is currently used by vale ports for allocating leases.
*/
/* call with NMG_LOCK held */
int
netmap_krings_create(struct netmap_adapter *na, u_int tailroom)
{
u_int i, len, ndesc;
struct netmap_kring *kring;
u_int n[NR_TXRX];
enum txrx t;
int err = 0;
if (na->tx_rings != NULL) {
if (netmap_debug & NM_DEBUG_ON)
nm_prerr("warning: krings were already created");
return 0;
}
/* account for the (possibly fake) host rings */
n[NR_TX] = netmap_all_rings(na, NR_TX);
n[NR_RX] = netmap_all_rings(na, NR_RX);
len = (n[NR_TX] + n[NR_RX]) *
(sizeof(struct netmap_kring) + sizeof(struct netmap_kring *))
+ tailroom;
na->tx_rings = nm_os_malloc((size_t)len);
if (na->tx_rings == NULL) {
nm_prerr("Cannot allocate krings");
return ENOMEM;
}
na->rx_rings = na->tx_rings + n[NR_TX];
na->tailroom = na->rx_rings + n[NR_RX];
/* link the krings in the krings array */
kring = (struct netmap_kring *)((char *)na->tailroom + tailroom);
for (i = 0; i < n[NR_TX] + n[NR_RX]; i++) {
na->tx_rings[i] = kring;
kring++;
}
/*
* All fields in krings are 0 except the ones initialized below,
* but it is better to be explicit about important kring fields.
*/
for_rx_tx(t) {
ndesc = nma_get_ndesc(na, t);
for (i = 0; i < n[t]; i++) {
kring = NMR(na, t)[i];
bzero(kring, sizeof(*kring));
kring->notify_na = na;
kring->ring_id = i;
kring->tx = t;
kring->nkr_num_slots = ndesc;
kring->nr_mode = NKR_NETMAP_OFF;
kring->nr_pending_mode = NKR_NETMAP_OFF;
if (i < nma_get_nrings(na, t)) {
kring->nm_sync = (t == NR_TX ? na->nm_txsync : na->nm_rxsync);
} else {
if (!(na->na_flags & NAF_HOST_RINGS))
kring->nr_kflags |= NKR_FAKERING;
kring->nm_sync = (t == NR_TX ?
netmap_txsync_to_host:
netmap_rxsync_from_host);
}
kring->nm_notify = na->nm_notify;
kring->rhead = kring->rcur = kring->nr_hwcur = 0;
/*
* IMPORTANT: Always keep one slot empty.
*/
kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0);
snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name,
nm_txrx2str(t), i);
nm_prdis("ktx %s h %d c %d t %d",
kring->name, kring->rhead, kring->rcur, kring->rtail);
err = nm_os_selinfo_init(&kring->si, kring->name);
if (err) {
netmap_krings_delete(na);
return err;
}
mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF);
kring->na = na; /* setting this field marks the mutex as initialized */
}
err = nm_os_selinfo_init(&na->si[t], na->name);
if (err) {
netmap_krings_delete(na);
return err;
}
}
return 0;
}
/* undo the actions performed by netmap_krings_create */
/* call with NMG_LOCK held */
void
netmap_krings_delete(struct netmap_adapter *na)
{
struct netmap_kring **kring = na->tx_rings;
enum txrx t;
if (na->tx_rings == NULL) {
if (netmap_debug & NM_DEBUG_ON)
nm_prerr("warning: krings were already deleted");
return;
}
for_rx_tx(t)
nm_os_selinfo_uninit(&na->si[t]);
/* we rely on the krings layout described above */
for ( ; kring != na->tailroom; kring++) {
if ((*kring)->na != NULL)
mtx_destroy(&(*kring)->q_lock);
nm_os_selinfo_uninit(&(*kring)->si);
}
nm_os_free(na->tx_rings);
na->tx_rings = na->rx_rings = na->tailroom = NULL;
}
/*
* Destructor for NIC ports. They also have an mbuf queue
* on the rings connected to the host so we need to purge
* them first.
*/
/* call with NMG_LOCK held */
void
netmap_hw_krings_delete(struct netmap_adapter *na)
{
u_int lim = netmap_real_rings(na, NR_RX), i;
for (i = nma_get_nrings(na, NR_RX); i < lim; i++) {
struct mbq *q = &NMR(na, NR_RX)[i]->rx_queue;
nm_prdis("destroy sw mbq with len %d", mbq_len(q));
mbq_purge(q);
mbq_safe_fini(q);
}
netmap_krings_delete(na);
}
static void
netmap_mem_drop(struct netmap_adapter *na)
{
int last = netmap_mem_deref(na->nm_mem, na);
/* if the native allocator had been overridden on regif,
* restore it now and drop the temporary one
*/
if (last && na->nm_mem_prev) {
netmap_mem_put(na->nm_mem);
na->nm_mem = na->nm_mem_prev;
na->nm_mem_prev = NULL;
}
}
/*
* Undo everything that was done in netmap_do_regif(). In particular,
* call nm_register(ifp,0) to stop netmap mode on the interface and
* revert to normal operation.
*/
/* call with NMG_LOCK held */
static void netmap_unset_ringid(struct netmap_priv_d *);
static void netmap_krings_put(struct netmap_priv_d *);
void
netmap_do_unregif(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
NMG_LOCK_ASSERT();
na->active_fds--;
/* unset nr_pending_mode and possibly release exclusive mode */
netmap_krings_put(priv);
#ifdef WITH_MONITOR
/* XXX check whether we have to do something with monitor
* when rings change nr_mode. */
if (na->active_fds <= 0) {
/* walk through all the rings and tell any monitor
* that the port is going to exit netmap mode
*/
netmap_monitor_stop(na);
}
#endif
if (na->active_fds <= 0 || nm_kring_pending(priv)) {
na->nm_register(na, 0);
}
/* delete rings and buffers that are no longer needed */
netmap_mem_rings_delete(na);
if (na->active_fds <= 0) { /* last instance */
/*
* (TO CHECK) We enter here
* when the last reference to this file descriptor goes
* away. This means we cannot have any pending poll()
* or interrupt routine operating on the structure.
* XXX The file may be closed in a thread while
* another thread is using it.
* Linux keeps the file opened until the last reference
* by any outstanding ioctl/poll or mmap is gone.
* FreeBSD does not track mmap()s (but we do) and
* wakes up any sleeping poll(). Need to check what
* happens if the close() occurs while a concurrent
* syscall is running.
*/
if (netmap_debug & NM_DEBUG_ON)
nm_prinf("deleting last instance for %s", na->name);
if (nm_netmap_on(na)) {
nm_prerr("BUG: netmap on while going to delete the krings");
}
na->nm_krings_delete(na);
}
/* possibly decrement counter of tx_si/rx_si users */
netmap_unset_ringid(priv);
/* delete the nifp */
netmap_mem_if_delete(na, priv->np_nifp);
/* drop the allocator */
netmap_mem_drop(na);
/* mark the priv as unregistered */
priv->np_na = NULL;
priv->np_nifp = NULL;
}
struct netmap_priv_d*
netmap_priv_new(void)
{
struct netmap_priv_d *priv;
priv = nm_os_malloc(sizeof(struct netmap_priv_d));
if (priv == NULL)
return NULL;
priv->np_refs = 1;
nm_os_get_module();
return priv;
}
/*
* Destructor of the netmap_priv_d, called when the fd is closed.
* Action: undo all the things done by NIOCREGIF.
* On FreeBSD we need to track whether there are active mmap()s,
* and we use np_active_mmaps for that. On linux, the field is always 0.
* Return: 1 if we can free priv, 0 otherwise.
*
*/
/* call with NMG_LOCK held */
void
netmap_priv_delete(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
/* number of active references to this fd */
if (--priv->np_refs > 0) {
return;
}
nm_os_put_module();
if (na) {
netmap_do_unregif(priv);
}
netmap_unget_na(na, priv->np_ifp);
bzero(priv, sizeof(*priv)); /* for safety */
nm_os_free(priv);
}
/* call with NMG_LOCK *not* held */
void
netmap_dtor(void *data)
{
struct netmap_priv_d *priv = data;
NMG_LOCK();
netmap_priv_delete(priv);
NMG_UNLOCK();
}
/*
* Handlers for synchronization of the rings from/to the host stack.
* These are associated to a network interface and are just another
* ring pair managed by userspace.
*
* Netmap also supports transparent forwarding (NS_FORWARD and NR_FORWARD
* flags):
*
* - Before releasing buffers on hw RX rings, the application can mark
* them with the NS_FORWARD flag. During the next RXSYNC or poll(), they
* will be forwarded to the host stack, similarly to what would happen if
* the application moved them to the host TX ring.
*
* - Before releasing buffers on the host RX ring, the application can
* mark them with the NS_FORWARD flag. During the next RXSYNC or poll(),
* they will be forwarded to the hw TX rings, saving the application
* from doing the same task in user-space.
*
* Transparent forwarding can be enabled per-ring, by setting the NR_FORWARD
* flag, or globally with the netmap_fwd sysctl.
*
* The transfer NIC --> host is relatively easy, just encapsulate
* into mbufs and we are done. The host --> NIC side is slightly
* harder because there might not be room in the tx ring so it
* might take a while before releasing the buffer.
*/
/*
* Pass a whole queue of mbufs to the host stack as coming from 'dst'
* We do not need to lock because the queue is private.
* After this call the queue is empty.
*/
static void
netmap_send_up(struct ifnet *dst, struct mbq *q)
{
struct mbuf *m;
struct mbuf *head = NULL, *prev = NULL;
/* Send packets up, outside the lock; head/prev machinery
* is only useful for Windows. */
while ((m = mbq_dequeue(q)) != NULL) {
if (netmap_debug & NM_DEBUG_HOST)
nm_prinf("sending up pkt %p size %d", m, MBUF_LEN(m));
prev = nm_os_send_up(dst, m, prev);
if (head == NULL)
head = prev;
}
if (head)
nm_os_send_up(dst, NULL, head);
mbq_fini(q);
}
/*
* Scan the buffers from hwcur to ring->head, and put a copy of those
* marked NS_FORWARD (or all of them if forced) into a queue of mbufs.
* Drop remaining packets in the unlikely event
* of an mbuf shortage.
*/
static void
netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
{
u_int const lim = kring->nkr_num_slots - 1;
u_int const head = kring->rhead;
u_int n;
struct netmap_adapter *na = kring->na;
for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) {
struct mbuf *m;
struct netmap_slot *slot = &kring->ring->slot[n];
if ((slot->flags & NS_FORWARD) == 0 && !force)
continue;
if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) {
nm_prlim(5, "bad pkt at %d len %d", n, slot->len);
continue;
}
slot->flags &= ~NS_FORWARD; // XXX needed ?
/* XXX TODO: adapt to the case of a multisegment packet */
m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL);
if (m == NULL)
break;
mbq_enqueue(q, m);
}
}
static inline int
_nm_may_forward(struct netmap_kring *kring)
{
return ((netmap_fwd || kring->ring->flags & NR_FORWARD) &&
kring->na->na_flags & NAF_HOST_RINGS &&
kring->tx == NR_RX);
}
static inline int
nm_may_forward_up(struct netmap_kring *kring)
{
return _nm_may_forward(kring) &&
kring->ring_id != kring->na->num_rx_rings;
}
static inline int
nm_may_forward_down(struct netmap_kring *kring, int sync_flags)
{
return _nm_may_forward(kring) &&
(sync_flags & NAF_CAN_FORWARD_DOWN) &&
kring->ring_id == kring->na->num_rx_rings;
}
/*
* Send to the NIC rings packets marked NS_FORWARD between
* kring->nr_hwcur and kring->rhead.
* Called under kring->rx_queue.lock on the sw rx ring.
*
* It can only be called if the user opened all the TX hw rings,
* see NAF_CAN_FORWARD_DOWN flag.
* We can touch the TX netmap rings (slots, head and cur) since
* we are in poll/ioctl system call context, and the application
* is not supposed to touch the ring (using a different thread)
* during the execution of the system call.
*/
static u_int
netmap_sw_to_nic(struct netmap_adapter *na)
{
struct netmap_kring *kring = na->rx_rings[na->num_rx_rings];
struct netmap_slot *rxslot = kring->ring->slot;
u_int i, rxcur = kring->nr_hwcur;
u_int const head = kring->rhead;
u_int const src_lim = kring->nkr_num_slots - 1;
u_int sent = 0;
/* scan rings to find space, then fill as much as possible */
for (i = 0; i < na->num_tx_rings; i++) {
struct netmap_kring *kdst = na->tx_rings[i];
struct netmap_ring *rdst = kdst->ring;
u_int const dst_lim = kdst->nkr_num_slots - 1;
/* XXX do we trust ring or kring->rcur,rtail ? */
for (; rxcur != head && !nm_ring_empty(rdst);
rxcur = nm_next(rxcur, src_lim) ) {
struct netmap_slot *src, *dst, tmp;
u_int dst_head = rdst->head;
src = &rxslot[rxcur];
if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd)
continue;
sent++;
dst = &rdst->slot[dst_head];
tmp = *src;
src->buf_idx = dst->buf_idx;
src->flags = NS_BUF_CHANGED;
dst->buf_idx = tmp.buf_idx;
dst->len = tmp.len;
dst->flags = NS_BUF_CHANGED;
rdst->head = rdst->cur = nm_next(dst_head, dst_lim);
}
/* if (sent) XXX txsync ? it would be just an optimization */
}
return sent;
}
/*
* netmap_txsync_to_host() passes packets up. We are called from a
* system call in user process context, and the only contention
* can be among multiple user threads erroneously calling
* this routine concurrently.
*/
static int
netmap_txsync_to_host(struct netmap_kring *kring, int flags)
{
struct netmap_adapter *na = kring->na;
u_int const lim = kring->nkr_num_slots - 1;
u_int const head = kring->rhead;
struct mbq q;
/* Take packets from hwcur to head and pass them up.
* Force hwcur = head since netmap_grab_packets() stops at head
*/
mbq_init(&q);
netmap_grab_packets(kring, &q, 1 /* force */);
nm_prdis("have %d pkts in queue", mbq_len(&q));
kring->nr_hwcur = head;
kring->nr_hwtail = head + lim;
if (kring->nr_hwtail > lim)
kring->nr_hwtail -= lim + 1;
netmap_send_up(na->ifp, &q);
return 0;
}
/*
* rxsync backend for packets coming from the host stack.
* They have been put in kring->rx_queue by netmap_transmit().
* We protect access to the kring using kring->rx_queue.lock
*
* also moves to the nic hw rings any packet the user has marked
* for transparent-mode forwarding, then sets the NR_FORWARD
* flag in the kring to let the caller push them out
*/
static int
netmap_rxsync_from_host(struct netmap_kring *kring, int flags)
{
struct netmap_adapter *na = kring->na;
struct netmap_ring *ring = kring->ring;
u_int nm_i, n;
u_int const lim = kring->nkr_num_slots - 1;
u_int const head = kring->rhead;
int ret = 0;
struct mbq *q = &kring->rx_queue, fq;
mbq_init(&fq); /* fq holds packets to be freed */
mbq_lock(q);
/* First part: import newly received packets */
n = mbq_len(q);
if (n) { /* grab packets from the queue */
struct mbuf *m;
uint32_t stop_i;
nm_i = kring->nr_hwtail;
stop_i = nm_prev(kring->nr_hwcur, lim);
while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) {
int len = MBUF_LEN(m);
struct netmap_slot *slot = &ring->slot[nm_i];
m_copydata(m, 0, len, NMB(na, slot));
nm_prdis("nm %d len %d", nm_i, len);
if (netmap_debug & NM_DEBUG_HOST)
nm_prinf("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL));
slot->len = len;
slot->flags = 0;
nm_i = nm_next(nm_i, lim);
mbq_enqueue(&fq, m);
}
kring->nr_hwtail = nm_i;
}
/*
* Second part: skip past packets that userspace has released.
*/
nm_i = kring->nr_hwcur;
if (nm_i != head) { /* something was released */
if (nm_may_forward_down(kring, flags)) {
ret = netmap_sw_to_nic(na);
if (ret > 0) {
kring->nr_kflags |= NR_FORWARD;
ret = 0;
}
}
kring->nr_hwcur = head;
}
mbq_unlock(q);
mbq_purge(&fq);
mbq_fini(&fq);
return ret;
}
/* Get a netmap adapter for the port.
*
* If it is possible to satisfy the request, return 0
* with *na containing the netmap adapter found.
* Otherwise return an error code, with *na containing NULL.
*
* When the port is attached to a bridge, we always return
* EBUSY.
* Otherwise, if the port is already bound to a file descriptor,
* then we unconditionally return the existing adapter into *na.
* In all the other cases, we return (into *na) either native,
* generic or NULL, according to the following table:
*
* native_support
* active_fds dev.netmap.admode YES NO
* -------------------------------------------------------
* >0 * NA(ifp) NA(ifp)
*
* 0 NETMAP_ADMODE_BEST NATIVE GENERIC
* 0 NETMAP_ADMODE_NATIVE NATIVE NULL
* 0 NETMAP_ADMODE_GENERIC GENERIC GENERIC
*
*/
static void netmap_hw_dtor(struct netmap_adapter *); /* needed by NM_IS_NATIVE() */
int
netmap_get_hw_na(struct ifnet *ifp, struct netmap_mem_d *nmd, struct netmap_adapter **na)
{
/* generic support */
int i = netmap_admode; /* Take a snapshot. */
struct netmap_adapter *prev_na;
int error = 0;
*na = NULL; /* default */
/* reset in case of invalid value */
if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST)
i = netmap_admode = NETMAP_ADMODE_BEST;
if (NM_NA_VALID(ifp)) {
prev_na = NA(ifp);
/* If an adapter already exists, return it if
* there are active file descriptors or if
* netmap is not forced to use generic
* adapters.
*/
if (NETMAP_OWNED_BY_ANY(prev_na)
|| i != NETMAP_ADMODE_GENERIC
|| prev_na->na_flags & NAF_FORCE_NATIVE
#ifdef WITH_PIPES
/* ugly, but we cannot allow an adapter switch
* if some pipe is referring to this one
*/
|| prev_na->na_next_pipe > 0
#endif
) {
*na = prev_na;
goto assign_mem;
}
}
/* If there isn't native support and netmap is not allowed
* to use generic adapters, we cannot satisfy the request.
*/
if (!NM_IS_NATIVE(ifp) && i == NETMAP_ADMODE_NATIVE)
return EOPNOTSUPP;
/* Otherwise, create a generic adapter and return it,
* saving the previously used netmap adapter, if any.
*
* Note that here 'prev_na', if not NULL, MUST be a
* native adapter, and CANNOT be a generic one. This is
* true because generic adapters are created on demand, and
* destroyed when not used anymore. Therefore, if the adapter
* currently attached to an interface 'ifp' is generic, it
* must be that
* (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))).
* Consequently, if NA(ifp) is generic, we will enter one of
* the branches above. This ensures that we never override
* a generic adapter with another generic adapter.
*/
error = generic_netmap_attach(ifp);
if (error)
return error;
*na = NA(ifp);
assign_mem:
if (nmd != NULL && !((*na)->na_flags & NAF_MEM_OWNER) &&
(*na)->active_fds == 0 && ((*na)->nm_mem != nmd)) {
(*na)->nm_mem_prev = (*na)->nm_mem;
(*na)->nm_mem = netmap_mem_get(nmd);
}
return 0;
}
/*
* MUST BE CALLED UNDER NMG_LOCK()
*
* Get a refcounted reference to a netmap adapter attached
* to the interface specified by req.
* This is always called in the execution of an ioctl().
*
* Return ENXIO if the interface specified by the request does
* not exist, ENOTSUP if netmap is not supported by the interface,
* EBUSY if the interface is already attached to a bridge,
* EINVAL if parameters are invalid, ENOMEM if needed resources
* could not be allocated.
* If successful, hold a reference to the netmap adapter.
*
* If the interface specified by req is a system one, also keep
* a reference to it and return a valid *ifp.
*/
int
netmap_get_na(struct nmreq_header *hdr,
struct netmap_adapter **na, struct ifnet **ifp,
struct netmap_mem_d *nmd, int create)
{
struct nmreq_register *req = (struct nmreq_register *)(uintptr_t)hdr->nr_body;
int error = 0;
struct netmap_adapter *ret = NULL;
int nmd_ref = 0;
*na = NULL; /* default return value */
*ifp = NULL;
if (hdr->nr_reqtype != NETMAP_REQ_REGISTER) {
return EINVAL;
}
if (req->nr_mode == NR_REG_PIPE_MASTER ||
req->nr_mode == NR_REG_PIPE_SLAVE) {
/* Do not accept deprecated pipe modes. */
nm_prerr("Deprecated pipe nr_mode, use xx{yy or xx}yy syntax");
return EINVAL;
}
NMG_LOCK_ASSERT();
/* if the request contains a memid, try to find the
* corresponding memory region
*/
if (nmd == NULL && req->nr_mem_id) {
nmd = netmap_mem_find(req->nr_mem_id);
if (nmd == NULL)
return EINVAL;
/* keep the reference */
nmd_ref = 1;
}
/* We cascade through all possible types of netmap adapter.
* All netmap_get_*_na() functions return an error and an na,
* with the following combinations:
*
* error na
* 0 NULL type doesn't match
* !0 NULL type matches, but na creation/lookup failed
* 0 !NULL type matches and na created/found
* !0 !NULL impossible
*/
error = netmap_get_null_na(hdr, na, nmd, create);
if (error || *na != NULL)
goto out;
/* try to see if this is a monitor port */
error = netmap_get_monitor_na(hdr, na, nmd, create);
if (error || *na != NULL)
goto out;
/* try to see if this is a pipe port */
error = netmap_get_pipe_na(hdr, na, nmd, create);
if (error || *na != NULL)
goto out;
/* try to see if this is a bridge port */
error = netmap_get_vale_na(hdr, na, nmd, create);
if (error)
goto out;
if (*na != NULL) /* valid match in netmap_get_vale_na() */
goto out;
/*
* This must be a hardware na, lookup the name in the system.
* Note that by hardware we actually mean "it shows up in ifconfig".
* This may still be a tap, a veth/epair, or even a
* persistent VALE port.
*/
*ifp = ifunit_ref(hdr->nr_name);
if (*ifp == NULL) {
error = ENXIO;
goto out;
}
error = netmap_get_hw_na(*ifp, nmd, &ret);
if (error)
goto out;
*na = ret;
netmap_adapter_get(ret);
out:
if (error) {
if (ret)
netmap_adapter_put(ret);
if (*ifp) {
if_rele(*ifp);
*ifp = NULL;
}
}
if (nmd_ref)
netmap_mem_put(nmd);
return error;
}
/* undo netmap_get_na() */
void
netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp)
{
if (ifp)
if_rele(ifp);
if (na)
netmap_adapter_put(na);
}
#define NM_FAIL_ON(t) do { \
if (unlikely(t)) { \
nm_prlim(5, "%s: fail '" #t "' " \
"h %d c %d t %d " \
"rh %d rc %d rt %d " \
"hc %d ht %d", \
kring->name, \
head, cur, ring->tail, \
kring->rhead, kring->rcur, kring->rtail, \
kring->nr_hwcur, kring->nr_hwtail); \
return kring->nkr_num_slots; \
} \
} while (0)
/*
* validate parameters on entry for *_txsync()
* Returns ring->head if ok, or something >= kring->nkr_num_slots
* in case of error.
*
* rhead, rcur and rtail=hwtail are stored from previous round.
* hwcur is the next packet to send to the ring.
*
* We want
* hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail
*
* hwcur, rhead, rtail and hwtail are reliable
*/
u_int
nm_txsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
{
u_int head = ring->head; /* read only once */
u_int cur = ring->cur; /* read only once */
u_int n = kring->nkr_num_slots;
nm_prdis(5, "%s kcur %d ktail %d head %d cur %d tail %d",
kring->name,
kring->nr_hwcur, kring->nr_hwtail,
ring->head, ring->cur, ring->tail);
#if 1 /* kernel sanity checks; but we can trust the kring. */
NM_FAIL_ON(kring->nr_hwcur >= n || kring->rhead >= n ||
kring->rtail >= n || kring->nr_hwtail >= n);
#endif /* kernel sanity checks */
/*
* user sanity checks. We only use head,
* A, B, ... are possible positions for head:
*
* 0 A rhead B rtail C n-1
* 0 D rtail E rhead F n-1
*
* B, F, D are valid. A, C, E are wrong
*/
if (kring->rtail >= kring->rhead) {
/* want rhead <= head <= rtail */
NM_FAIL_ON(head < kring->rhead || head > kring->rtail);
/* and also head <= cur <= rtail */
NM_FAIL_ON(cur < head || cur > kring->rtail);
} else { /* here rtail < rhead */
/* we need head outside rtail .. rhead */
NM_FAIL_ON(head > kring->rtail && head < kring->rhead);
/* two cases now: head <= rtail or head >= rhead */
if (head <= kring->rtail) {
/* want head <= cur <= rtail */
NM_FAIL_ON(cur < head || cur > kring->rtail);
} else { /* head >= rhead */
/* cur must be outside rtail..head */
NM_FAIL_ON(cur > kring->rtail && cur < head);
}
}
if (ring->tail != kring->rtail) {
nm_prlim(5, "%s tail overwritten was %d need %d", kring->name,
ring->tail, kring->rtail);
ring->tail = kring->rtail;
}
kring->rhead = head;
kring->rcur = cur;
return head;
}
/*
* validate parameters on entry for *_rxsync()
* Returns ring->head if ok, kring->nkr_num_slots on error.
*
* For a valid configuration,
* hwcur <= head <= cur <= tail <= hwtail
*
* We only consider head and cur.
* hwcur and hwtail are reliable.
*
*/
u_int
nm_rxsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring)
{
uint32_t const n = kring->nkr_num_slots;
uint32_t head, cur;
nm_prdis(5,"%s kc %d kt %d h %d c %d t %d",
kring->name,
kring->nr_hwcur, kring->nr_hwtail,
ring->head, ring->cur, ring->tail);
/*
* Before storing the new values, we should check they do not
* move backwards. However:
* - head is not an issue because the previous value is hwcur;
* - cur could in principle go back, however it does not matter
* because we are processing a brand new rxsync()
*/
cur = kring->rcur = ring->cur; /* read only once */
head = kring->rhead = ring->head; /* read only once */
#if 1 /* kernel sanity checks */
NM_FAIL_ON(kring->nr_hwcur >= n || kring->nr_hwtail >= n);
#endif /* kernel sanity checks */
/* user sanity checks */
if (kring->nr_hwtail >= kring->nr_hwcur) {
/* want hwcur <= rhead <= hwtail */
NM_FAIL_ON(head < kring->nr_hwcur || head > kring->nr_hwtail);
/* and also rhead <= rcur <= hwtail */
NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
} else {
/* we need rhead outside hwtail..hwcur */
NM_FAIL_ON(head < kring->nr_hwcur && head > kring->nr_hwtail);
/* two cases now: head <= hwtail or head >= hwcur */
if (head <= kring->nr_hwtail) {
/* want head <= cur <= hwtail */
NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
} else {
/* cur must be outside hwtail..head */
NM_FAIL_ON(cur < head && cur > kring->nr_hwtail);
}
}
if (ring->tail != kring->rtail) {
nm_prlim(5, "%s tail overwritten was %d need %d",
kring->name,
ring->tail, kring->rtail);
ring->tail = kring->rtail;
}
return head;
}
/*
* Error routine called when txsync/rxsync detects an error.
* Can't do much more than resetting head = cur = hwcur, tail = hwtail
* Return 1 on reinit.
*
* This routine is only called by the upper half of the kernel.
* It only reads hwcur (which is changed only by the upper half, too)
* and hwtail (which may be changed by the lower half, but only on
* a tx ring and only to increase it, so any error will be recovered
* on the next call). For the above, we don't strictly need to call
* it under lock.
*/
int
netmap_ring_reinit(struct netmap_kring *kring)
{
struct netmap_ring *ring = kring->ring;
u_int i, lim = kring->nkr_num_slots - 1;
int errors = 0;
// XXX KASSERT nm_kr_tryget
nm_prlim(10, "called for %s", kring->name);
// XXX probably wrong to trust userspace
kring->rhead = ring->head;
kring->rcur = ring->cur;
kring->rtail = ring->tail;
if (ring->cur > lim)
errors++;
if (ring->head > lim)
errors++;
if (ring->tail > lim)
errors++;
for (i = 0; i <= lim; i++) {
u_int idx = ring->slot[i].buf_idx;
u_int len = ring->slot[i].len;
if (idx < 2 || idx >= kring->na->na_lut.objtotal) {
nm_prlim(5, "bad index at slot %d idx %d len %d ", i, idx, len);
ring->slot[i].buf_idx = 0;
ring->slot[i].len = 0;
} else if (len > NETMAP_BUF_SIZE(kring->na)) {
ring->slot[i].len = 0;
nm_prlim(5, "bad len at slot %d idx %d len %d", i, idx, len);
}
}
if (errors) {
nm_prlim(10, "total %d errors", errors);
nm_prlim(10, "%s reinit, cur %d -> %d tail %d -> %d",
kring->name,
ring->cur, kring->nr_hwcur,
ring->tail, kring->nr_hwtail);
ring->head = kring->rhead = kring->nr_hwcur;
ring->cur = kring->rcur = kring->nr_hwcur;
ring->tail = kring->rtail = kring->nr_hwtail;
}
return (errors ? 1 : 0);
}
/* interpret the ringid and flags fields of an nmreq, by translating them
* into a pair of intervals of ring indices:
*
* [priv->np_txqfirst, priv->np_txqlast) and
* [priv->np_rxqfirst, priv->np_rxqlast)
*
*/
int
netmap_interp_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
uint16_t nr_ringid, uint64_t nr_flags)
{
struct netmap_adapter *na = priv->np_na;
int excluded_direction[] = { NR_TX_RINGS_ONLY, NR_RX_RINGS_ONLY };
enum txrx t;
u_int j;
for_rx_tx(t) {
if (nr_flags & excluded_direction[t]) {
priv->np_qfirst[t] = priv->np_qlast[t] = 0;
continue;
}
switch (nr_mode) {
case NR_REG_ALL_NIC:
case NR_REG_NULL:
priv->np_qfirst[t] = 0;
priv->np_qlast[t] = nma_get_nrings(na, t);
nm_prdis("ALL/PIPE: %s %d %d", nm_txrx2str(t),
priv->np_qfirst[t], priv->np_qlast[t]);
break;
case NR_REG_SW:
case NR_REG_NIC_SW:
if (!(na->na_flags & NAF_HOST_RINGS)) {
nm_prerr("host rings not supported");
return EINVAL;
}
priv->np_qfirst[t] = (nr_mode == NR_REG_SW ?
nma_get_nrings(na, t) : 0);
priv->np_qlast[t] = netmap_all_rings(na, t);
nm_prdis("%s: %s %d %d", nr_mode == NR_REG_SW ? "SW" : "NIC+SW",
nm_txrx2str(t),
priv->np_qfirst[t], priv->np_qlast[t]);
break;
case NR_REG_ONE_NIC:
if (nr_ringid >= na->num_tx_rings &&
nr_ringid >= na->num_rx_rings) {
nm_prerr("invalid ring id %d", nr_ringid);
return EINVAL;
}
/* if not enough rings, use the first one */
j = nr_ringid;
if (j >= nma_get_nrings(na, t))
j = 0;
priv->np_qfirst[t] = j;
priv->np_qlast[t] = j + 1;
nm_prdis("ONE_NIC: %s %d %d", nm_txrx2str(t),
priv->np_qfirst[t], priv->np_qlast[t]);
break;
default:
nm_prerr("invalid regif type %d", nr_mode);
return EINVAL;
}
}
priv->np_flags = nr_flags;
/* Allow transparent forwarding mode in the host --> nic
* direction only if all the TX hw rings have been opened. */
if (priv->np_qfirst[NR_TX] == 0 &&
priv->np_qlast[NR_TX] >= na->num_tx_rings) {
priv->np_sync_flags |= NAF_CAN_FORWARD_DOWN;
}
if (netmap_verbose) {
nm_prinf("%s: tx [%d,%d) rx [%d,%d) id %d",
na->name,
priv->np_qfirst[NR_TX],
priv->np_qlast[NR_TX],
priv->np_qfirst[NR_RX],
priv->np_qlast[NR_RX],
nr_ringid);
}
return 0;
}
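The NR_REG_ONE_NIC branch above falls back to ring 0 when the requested id only exists on the other direction. A minimal sketch of that clamp, with a hypothetical helper name (`one_nic_interval`) chosen for the example:

```c
#include <assert.h>
#include <stdint.h>

/* Compute the half-open interval [first, last) bound by NR_REG_ONE_NIC
 * for ring nr_ringid on a side that has nrings hardware rings. */
static void
one_nic_interval(uint32_t nr_ringid, uint32_t nrings,
    uint32_t *first, uint32_t *last)
{
	uint32_t j = nr_ringid;

	if (j >= nrings)
		j = 0;	/* not enough rings on this side: use the first one */
	*first = j;
	*last = j + 1;
}
```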
/*
* Set the ring ID. For devices with a single queue, a request
* for all rings is the same as a single ring.
*/
static int
netmap_set_ringid(struct netmap_priv_d *priv, uint32_t nr_mode,
uint16_t nr_ringid, uint64_t nr_flags)
{
struct netmap_adapter *na = priv->np_na;
int error;
enum txrx t;
error = netmap_interp_ringid(priv, nr_mode, nr_ringid, nr_flags);
if (error) {
return error;
}
priv->np_txpoll = (nr_flags & NR_NO_TX_POLL) ? 0 : 1;
/* optimization: count the users registered for more than
* one ring, which are the ones sleeping on the global queue.
* The default netmap_notify() callback will then
* avoid signaling the global queue if nobody is using it
*/
for_rx_tx(t) {
if (nm_si_user(priv, t))
na->si_users[t]++;
}
return 0;
}
static void
netmap_unset_ringid(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
enum txrx t;
for_rx_tx(t) {
if (nm_si_user(priv, t))
na->si_users[t]--;
priv->np_qfirst[t] = priv->np_qlast[t] = 0;
}
priv->np_flags = 0;
priv->np_txpoll = 0;
priv->np_kloop_state = 0;
}
/* Set the nr_pending_mode for the requested rings.
* If requested, also try to get exclusive access to the rings, provided
* the rings we want to bind are not exclusively owned by a previous bind.
*/
static int
netmap_krings_get(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
u_int i;
struct netmap_kring *kring;
int excl = (priv->np_flags & NR_EXCLUSIVE);
enum txrx t;
if (netmap_debug & NM_DEBUG_ON)
nm_prinf("%s: grabbing tx [%d, %d) rx [%d, %d)",
na->name,
priv->np_qfirst[NR_TX],
priv->np_qlast[NR_TX],
priv->np_qfirst[NR_RX],
priv->np_qlast[NR_RX]);
/* first round: check that all the requested rings
* are not already exclusively owned, and that we are not
* asking for exclusive ownership of rings already in use
*/
for_rx_tx(t) {
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = NMR(na, t)[i];
if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
(kring->users && excl))
{
nm_prdis("ring %s busy", kring->name);
return EBUSY;
}
}
}
/* second round: increment usage count (possibly marking them
* as exclusive) and set the nr_pending_mode
*/
for_rx_tx(t) {
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = NMR(na, t)[i];
kring->users++;
if (excl)
kring->nr_kflags |= NKR_EXCLUSIVE;
kring->nr_pending_mode = NKR_NETMAP_ON;
}
}
return 0;
}
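The two-round structure of netmap_krings_get() above (verify everything first, mutate only afterwards) guarantees that a failed bind leaves every ring untouched. A simplified userspace model over a flat array; `krings_get`, `struct kring`, and the field names are illustrative only:

```c
#include <assert.h>
#include <errno.h>

#define NKR_EXCLUSIVE	0x1

struct kring { int users; int kflags; };

/* First pass: fail with EBUSY (modifying nothing) if any ring is already
 * exclusively owned, or in use while we want exclusive access.
 * Second pass: bump usage counts and optionally mark exclusive. */
static int
krings_get(struct kring *kr, int n, int excl)
{
	int i;

	for (i = 0; i < n; i++) {
		if ((kr[i].kflags & NKR_EXCLUSIVE) ||
		    (kr[i].users && excl))
			return EBUSY;	/* state still untouched */
	}
	for (i = 0; i < n; i++) {
		kr[i].users++;
		if (excl)
			kr[i].kflags |= NKR_EXCLUSIVE;
	}
	return 0;
}
```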
/* Undo netmap_krings_get(). This is done by clearing the exclusive mode
 * if it was requested at regif time, and by unsetting nr_pending_mode
 * if we are the last user of the involved rings. */
static void
netmap_krings_put(struct netmap_priv_d *priv)
{
struct netmap_adapter *na = priv->np_na;
u_int i;
struct netmap_kring *kring;
int excl = (priv->np_flags & NR_EXCLUSIVE);
enum txrx t;
nm_prdis("%s: releasing tx [%d, %d) rx [%d, %d)",
na->name,
priv->np_qfirst[NR_TX],
priv->np_qlast[NR_TX],
priv->np_qfirst[NR_RX],
priv->np_qlast[NR_RX]);
for_rx_tx(t) {
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = NMR(na, t)[i];
if (excl)
kring->nr_kflags &= ~NKR_EXCLUSIVE;
kring->users--;
if (kring->users == 0)
kring->nr_pending_mode = NKR_NETMAP_OFF;
}
}
}
static int
nm_priv_rx_enabled(struct netmap_priv_d *priv)
{
return (priv->np_qfirst[NR_RX] != priv->np_qlast[NR_RX]);
}
/* Validate the CSB entries for both directions (atok and ktoa).
* To be called under NMG_LOCK(). */
static int
netmap_csb_validate(struct netmap_priv_d *priv, struct nmreq_opt_csb *csbo)
{
struct nm_csb_atok *csb_atok_base =
(struct nm_csb_atok *)(uintptr_t)csbo->csb_atok;
struct nm_csb_ktoa *csb_ktoa_base =
(struct nm_csb_ktoa *)(uintptr_t)csbo->csb_ktoa;
enum txrx t;
int num_rings[NR_TXRX], tot_rings;
size_t entry_size[2];
void *csb_start[2];
int i;
if (priv->np_kloop_state & NM_SYNC_KLOOP_RUNNING) {
nm_prerr("Cannot update CSB while kloop is running");
return EBUSY;
}
tot_rings = 0;
for_rx_tx(t) {
num_rings[t] = priv->np_qlast[t] - priv->np_qfirst[t];
tot_rings += num_rings[t];
}
if (tot_rings <= 0)
return 0;
if (!(priv->np_flags & NR_EXCLUSIVE)) {
nm_prerr("CSB mode requires NR_EXCLUSIVE");
return EINVAL;
}
entry_size[0] = sizeof(*csb_atok_base);
entry_size[1] = sizeof(*csb_ktoa_base);
csb_start[0] = (void *)csb_atok_base;
csb_start[1] = (void *)csb_ktoa_base;
for (i = 0; i < 2; i++) {
/* On Linux we could use access_ok() to simplify
* the validation. However, this approach has the
* advantage of also working on FreeBSD. */
size_t csb_size = tot_rings * entry_size[i];
void *tmp;
int err;
if ((uintptr_t)csb_start[i] & (entry_size[i]-1)) {
nm_prerr("Unaligned CSB address");
return EINVAL;
}
tmp = nm_os_malloc(csb_size);
if (!tmp)
return ENOMEM;
if (i == 0) {
/* Application --> kernel direction. */
err = copyin(csb_start[i], tmp, csb_size);
} else {
/* Kernel --> application direction. */
memset(tmp, 0, csb_size);
err = copyout(tmp, csb_start[i], csb_size);
}
nm_os_free(tmp);
if (err) {
nm_prerr("Invalid CSB address");
return err;
}
}
priv->np_csb_atok_base = csb_atok_base;
priv->np_csb_ktoa_base = csb_ktoa_base;
/* Initialize the CSB. */
for_rx_tx(t) {
for (i = 0; i < num_rings[t]; i++) {
struct netmap_kring *kring =
NMR(priv->np_na, t)[i + priv->np_qfirst[t]];
struct nm_csb_atok *csb_atok = csb_atok_base + i;
struct nm_csb_ktoa *csb_ktoa = csb_ktoa_base + i;
if (t == NR_RX) {
csb_atok += num_rings[NR_TX];
csb_ktoa += num_rings[NR_TX];
}
CSB_WRITE(csb_atok, head, kring->rhead);
CSB_WRITE(csb_atok, cur, kring->rcur);
CSB_WRITE(csb_atok, appl_need_kick, 1);
CSB_WRITE(csb_atok, sync_flags, 1);
CSB_WRITE(csb_ktoa, hwcur, kring->nr_hwcur);
CSB_WRITE(csb_ktoa, hwtail, kring->nr_hwtail);
CSB_WRITE(csb_ktoa, kern_need_kick, 1);
nm_prinf("csb_init for kring %s: head %u, cur %u, "
"hwcur %u, hwtail %u", kring->name,
kring->rhead, kring->rcur, kring->nr_hwcur,
kring->nr_hwtail);
}
}
return 0;
}
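The alignment check used above relies on the entry size being a power of two: a pointer is suitably aligned exactly when its low bits are clear. A small sketch of that test (the helper name is made up):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns nonzero if p is aligned to entry_size, which must be a
 * power of two for the mask trick to be valid. */
static int
is_aligned(const void *p, size_t entry_size)
{
	return ((uintptr_t)p & (entry_size - 1)) == 0;
}
```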
/* Ensure that the netmap adapter can support the given MTU.
* @return EINVAL if the na cannot be set to mtu, 0 otherwise.
*/
int
netmap_buf_size_validate(const struct netmap_adapter *na, unsigned mtu) {
unsigned nbs = NETMAP_BUF_SIZE(na);
if (mtu <= na->rx_buf_maxsize) {
/* The MTU fits a single NIC slot. We only
* need to check that netmap buffers are
* large enough to hold an MTU. NS_MOREFRAG
* cannot be used in this case. */
if (nbs < mtu) {
nm_prerr("error: netmap buf size (%u) "
"< device MTU (%u)", nbs, mtu);
return EINVAL;
}
} else {
/* More NIC slots may be needed to receive
* or transmit a single packet. Check that
* the adapter supports NS_MOREFRAG and that
* netmap buffers are large enough to hold
* the maximum per-slot size. */
if (!(na->na_flags & NAF_MOREFRAG)) {
nm_prerr("error: large MTU (%u) needed "
"but %s does not support "
"NS_MOREFRAG", mtu,
na->ifp->if_xname);
return EINVAL;
} else if (nbs < na->rx_buf_maxsize) {
nm_prerr("error: using NS_MOREFRAG on "
"%s requires netmap buf size "
">= %u", na->ifp->if_xname,
na->rx_buf_maxsize);
return EINVAL;
} else {
nm_prinf("info: netmap application on "
"%s needs to support "
"NS_MOREFRAG "
"(MTU=%u,netmap_buf_size=%u)",
na->ifp->if_xname, mtu, nbs);
}
}
return 0;
}
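The decision logic of netmap_buf_size_validate() above can be condensed into a pure function for illustration: an MTU within rx_buf_maxsize only needs nbs >= mtu, while a larger MTU additionally needs NS_MOREFRAG support and nbs >= rx_buf_maxsize. This is a hedged userspace model, not the kernel function itself:

```c
#include <assert.h>
#include <errno.h>

/* mtu: interface MTU; nbs: netmap buffer size;
 * rx_buf_maxsize: largest per-slot receive size of the NIC;
 * has_morefrag: whether the adapter supports NS_MOREFRAG. */
static int
buf_size_validate(unsigned mtu, unsigned nbs, unsigned rx_buf_maxsize,
    int has_morefrag)
{
	if (mtu <= rx_buf_maxsize)
		return (nbs < mtu) ? EINVAL : 0;	/* single-slot case */
	if (!has_morefrag)
		return EINVAL;				/* multi-slot needed */
	return (nbs < rx_buf_maxsize) ? EINVAL : 0;
}
```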
/*
* possibly move the interface to netmap-mode.
* On success it returns 0 and publishes the new netmap_if
* through priv->np_nifp; otherwise it returns an error code.
* This must be called with NMG_LOCK held.
*
* The following na callbacks are called in the process:
*
* na->nm_config() [by netmap_update_config]
* (get current number and size of rings)
*
* We have a generic one for linux (netmap_linux_config).
* The bwrap has to override this, since it has to forward
* the request to the wrapped adapter (netmap_bwrap_config).
*
*
* na->nm_krings_create()
* (create and init the krings array)
*
* One of the following:
*
* * netmap_hw_krings_create, (hw ports)
* creates the standard layout for the krings
* and adds the mbq (used for the host rings).
*
* * netmap_vp_krings_create (VALE ports)
* add leases and scratchpads
*
* * netmap_pipe_krings_create (pipes)
* create the krings and rings of both ends and
* cross-link them
*
* * netmap_monitor_krings_create (monitors)
* avoid allocating the mbq
*
* * netmap_bwrap_krings_create (bwraps)
* create both the bwrap krings array,
* the krings array of the wrapped adapter, and
* (if needed) the fake array for the host adapter
*
* na->nm_register(, 1)
* (put the adapter in netmap mode)
*
* This may be one of the following:
*
* * netmap_hw_reg (hw ports)
* checks that the ifp is still there, then calls
* the hardware specific callback;
*
* * netmap_vp_reg (VALE ports)
* If the port is connected to a bridge,
* set the NAF_NETMAP_ON flag under the
* bridge write lock.
*
* * netmap_pipe_reg (pipes)
* inform the other pipe end that it is no
* longer responsible for the lifetime of this
* pipe end
*
* * netmap_monitor_reg (monitors)
* intercept the sync callbacks of the monitored
* rings
*
* * netmap_bwrap_reg (bwraps)
* cross-link the bwrap and hwna rings,
* forward the request to the hwna, override
* the hwna notify callback (to get the frames
* coming from outside go through the bridge).
*
*
*/
int
netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na,
uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags)
{
struct netmap_if *nifp = NULL;
int error;
NMG_LOCK_ASSERT();
priv->np_na = na; /* store the reference */
error = netmap_mem_finalize(na->nm_mem, na);
if (error)
goto err;
if (na->active_fds == 0) {
/* cache the allocator info in the na */
error = netmap_mem_get_lut(na->nm_mem, &na->na_lut);
if (error)
goto err_drop_mem;
nm_prdis("lut %p bufs %u size %u", na->na_lut.lut, na->na_lut.objtotal,
na->na_lut.objsize);
/* ring configuration may have changed, fetch from the card */
netmap_update_config(na);
}
/* compute the range of tx and rx rings to monitor */
error = netmap_set_ringid(priv, nr_mode, nr_ringid, nr_flags);
if (error)
goto err_put_lut;
if (na->active_fds == 0) {
/*
* If this is the first registration of the adapter,
* perform sanity checks and create the in-kernel view
* of the netmap rings (the netmap krings).
*/
if (na->ifp && nm_priv_rx_enabled(priv)) {
/* This netmap adapter is attached to an ifnet. */
unsigned mtu = nm_os_ifnet_mtu(na->ifp);
nm_prdis("%s: mtu %d rx_buf_maxsize %d netmap_buf_size %d",
na->name, mtu, na->rx_buf_maxsize, NETMAP_BUF_SIZE(na));
if (na->rx_buf_maxsize == 0) {
nm_prerr("%s: error: rx_buf_maxsize == 0", na->name);
error = EIO;
goto err_drop_mem;
}
error = netmap_buf_size_validate(na, mtu);
if (error)
goto err_drop_mem;
}
/*
* Depending on the adapter, this may also create
* the netmap rings themselves
*/
error = na->nm_krings_create(na);
if (error)
goto err_put_lut;
}
/* now the krings must exist and we can check whether some
* previous bind has exclusive ownership on them, and set
* nr_pending_mode
*/
error = netmap_krings_get(priv);
if (error)
goto err_del_krings;
/* create all needed missing netmap rings */
error = netmap_mem_rings_create(na);
if (error)
goto err_rel_excl;
/* in all cases, create a new netmap if */
nifp = netmap_mem_if_new(na, priv);
if (nifp == NULL) {
error = ENOMEM;
goto err_rel_excl;
}
if (nm_kring_pending(priv)) {
/* Some kring is switching mode, tell the adapter to
* react on this. */
error = na->nm_register(na, 1);
if (error)
goto err_del_if;
}
/* Commit the reference. */
na->active_fds++;
/*
* advertise that the interface is ready by setting np_nifp.
* The barrier is needed because readers (poll, *SYNC and mmap)
* check for priv->np_nifp != NULL without locking
*/
mb(); /* make sure previous writes are visible to all CPUs */
priv->np_nifp = nifp;
return 0;
err_del_if:
netmap_mem_if_delete(na, nifp);
err_rel_excl:
netmap_krings_put(priv);
netmap_mem_rings_delete(na);
err_del_krings:
if (na->active_fds == 0)
na->nm_krings_delete(na);
err_put_lut:
if (na->active_fds == 0)
memset(&na->na_lut, 0, sizeof(na->na_lut));
err_drop_mem:
netmap_mem_drop(na);
err:
priv->np_na = NULL;
return error;
}
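netmap_do_regif() above uses the classic goto error ladder: each label undoes exactly one earlier acquisition, so resources are released in reverse order no matter where the failure occurs. A minimal two-step model of the pattern (function and names are invented for the sketch):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Acquire two resources; on failure of the second, unwind the first. */
static int
two_step_init(int fail_second, char **a_out, char **b_out)
{
	char *a, *b;

	a = malloc(16);
	if (a == NULL)
		goto err;
	b = fail_second ? NULL : malloc(16);
	if (b == NULL)
		goto err_free_a;
	*a_out = a;
	*b_out = b;
	return 0;

err_free_a:
	free(a);	/* undo the first step only */
err:
	return ENOMEM;
}
```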
/*
* update kring and ring at the end of rxsync/txsync.
*/
static inline void
nm_sync_finalize(struct netmap_kring *kring)
{
/*
* Update ring tail to what the kernel knows
* After txsync: head/rhead/hwcur might be behind cur/rcur
* if no carrier.
*/
kring->ring->tail = kring->rtail = kring->nr_hwtail;
nm_prdis(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d",
kring->name, kring->nr_hwcur, kring->nr_hwtail,
kring->rhead, kring->rcur, kring->rtail);
}
/* set ring timestamp */
static inline void
ring_timestamp_set(struct netmap_ring *ring)
{
if (netmap_no_timestamp == 0 || ring->flags & NR_TIMESTAMP) {
microtime(&ring->ts);
}
}
static int nmreq_copyin(struct nmreq_header *, int);
static int nmreq_copyout(struct nmreq_header *, int);
static int nmreq_checkoptions(struct nmreq_header *);
/*
* ioctl(2) support for the "netmap" device.
*
* Following a list of accepted commands:
* - NIOCCTRL device control API
* - NIOCTXSYNC sync TX rings
* - NIOCRXSYNC sync RX rings
* - SIOCGIFADDR just for convenience
* - NIOCGINFO deprecated (legacy API)
* - NIOCREGIF deprecated (legacy API)
*
* Return 0 on success, errno otherwise.
*/
int
netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data,
struct thread *td, int nr_body_is_user)
{
struct mbq q; /* packets from RX hw queues to host stack */
struct netmap_adapter *na = NULL;
struct netmap_mem_d *nmd = NULL;
struct ifnet *ifp = NULL;
int error = 0;
u_int i, qfirst, qlast;
struct netmap_kring **krings;
int sync_flags;
enum txrx t;
switch (cmd) {
case NIOCCTRL: {
struct nmreq_header *hdr = (struct nmreq_header *)data;
if (hdr->nr_version < NETMAP_MIN_API ||
hdr->nr_version > NETMAP_MAX_API) {
nm_prerr("API mismatch: got %d need %d",
hdr->nr_version, NETMAP_API);
return EINVAL;
}
/* Make a kernel-space copy of the user-space nr_body.
* For convenience, the nr_body pointer and the pointers
* in the options list will be replaced with their
* kernel-space counterparts. The original pointers are
* saved internally and later restored by nmreq_copyout
*/
error = nmreq_copyin(hdr, nr_body_is_user);
if (error) {
return error;
}
/* Sanitize hdr->nr_name. */
hdr->nr_name[sizeof(hdr->nr_name) - 1] = '\0';
switch (hdr->nr_reqtype) {
case NETMAP_REQ_REGISTER: {
struct nmreq_register *req =
(struct nmreq_register *)(uintptr_t)hdr->nr_body;
struct netmap_if *nifp;
/* Protect access to priv from concurrent requests. */
NMG_LOCK();
do {
struct nmreq_option *opt;
u_int memflags;
if (priv->np_nifp != NULL) { /* thread already registered */
error = EBUSY;
break;
}
#ifdef WITH_EXTMEM
opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
NETMAP_REQ_OPT_EXTMEM);
if (opt != NULL) {
struct nmreq_opt_extmem *e =
(struct nmreq_opt_extmem *)opt;
error = nmreq_checkduplicate(opt);
if (error) {
opt->nro_status = error;
break;
}
nmd = netmap_mem_ext_create(e->nro_usrptr,
&e->nro_info, &error);
opt->nro_status = error;
if (nmd == NULL)
break;
}
#endif /* WITH_EXTMEM */
if (nmd == NULL && req->nr_mem_id) {
/* find the allocator and get a reference */
nmd = netmap_mem_find(req->nr_mem_id);
if (nmd == NULL) {
if (netmap_verbose) {
nm_prerr("%s: failed to find mem_id %u",
hdr->nr_name, req->nr_mem_id);
}
error = EINVAL;
break;
}
}
/* find the interface and a reference */
error = netmap_get_na(hdr, &na, &ifp, nmd,
1 /* create */); /* keep reference */
if (error)
break;
if (NETMAP_OWNED_BY_KERN(na)) {
error = EBUSY;
break;
}
if (na->virt_hdr_len && !(req->nr_flags & NR_ACCEPT_VNET_HDR)) {
nm_prerr("virt_hdr_len=%d, but application does "
"not accept it", na->virt_hdr_len);
error = EIO;
break;
}
error = netmap_do_regif(priv, na, req->nr_mode,
req->nr_ringid, req->nr_flags);
if (error) { /* reg. failed, release priv and ref */
break;
}
opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
NETMAP_REQ_OPT_CSB);
if (opt != NULL) {
struct nmreq_opt_csb *csbo =
(struct nmreq_opt_csb *)opt;
error = nmreq_checkduplicate(opt);
if (!error) {
error = netmap_csb_validate(priv, csbo);
}
opt->nro_status = error;
if (error) {
netmap_do_unregif(priv);
break;
}
}
nifp = priv->np_nifp;
/* return the offset of the netmap_if object */
req->nr_rx_rings = na->num_rx_rings;
req->nr_tx_rings = na->num_tx_rings;
req->nr_rx_slots = na->num_rx_desc;
req->nr_tx_slots = na->num_tx_desc;
error = netmap_mem_get_info(na->nm_mem, &req->nr_memsize, &memflags,
&req->nr_mem_id);
if (error) {
netmap_do_unregif(priv);
break;
}
if (memflags & NETMAP_MEM_PRIVATE) {
*(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM;
}
for_rx_tx(t) {
priv->np_si[t] = nm_si_user(priv, t) ?
&na->si[t] : &NMR(na, t)[priv->np_qfirst[t]]->si;
}
if (req->nr_extra_bufs) {
if (netmap_verbose)
nm_prinf("requested %d extra buffers",
req->nr_extra_bufs);
req->nr_extra_bufs = netmap_extra_alloc(na,
&nifp->ni_bufs_head, req->nr_extra_bufs);
if (netmap_verbose)
nm_prinf("got %d extra buffers", req->nr_extra_bufs);
}
req->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp);
error = nmreq_checkoptions(hdr);
if (error) {
netmap_do_unregif(priv);
break;
}
/* store ifp reference so that priv destructor may release it */
priv->np_ifp = ifp;
} while (0);
if (error) {
netmap_unget_na(na, ifp);
}
/* release the reference from netmap_mem_find() or
* netmap_mem_ext_create()
*/
if (nmd)
netmap_mem_put(nmd);
NMG_UNLOCK();
break;
}
case NETMAP_REQ_PORT_INFO_GET: {
struct nmreq_port_info_get *req =
(struct nmreq_port_info_get *)(uintptr_t)hdr->nr_body;
NMG_LOCK();
do {
u_int memflags;
if (hdr->nr_name[0] != '\0') {
/* Build a nmreq_register out of the nmreq_port_info_get,
* so that we can call netmap_get_na(). */
struct nmreq_register regreq;
bzero(&regreq, sizeof(regreq));
regreq.nr_mode = NR_REG_ALL_NIC;
regreq.nr_tx_slots = req->nr_tx_slots;
regreq.nr_rx_slots = req->nr_rx_slots;
regreq.nr_tx_rings = req->nr_tx_rings;
regreq.nr_rx_rings = req->nr_rx_rings;
regreq.nr_mem_id = req->nr_mem_id;
/* get a refcount */
hdr->nr_reqtype = NETMAP_REQ_REGISTER;
hdr->nr_body = (uintptr_t)&regreq;
error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */);
hdr->nr_reqtype = NETMAP_REQ_PORT_INFO_GET; /* reset type */
hdr->nr_body = (uintptr_t)req; /* reset nr_body */
if (error) {
na = NULL;
ifp = NULL;
break;
}
nmd = na->nm_mem; /* get memory allocator */
} else {
nmd = netmap_mem_find(req->nr_mem_id ? req->nr_mem_id : 1);
if (nmd == NULL) {
if (netmap_verbose)
nm_prerr("%s: failed to find mem_id %u",
hdr->nr_name,
req->nr_mem_id ? req->nr_mem_id : 1);
error = EINVAL;
break;
}
}
error = netmap_mem_get_info(nmd, &req->nr_memsize, &memflags,
&req->nr_mem_id);
if (error)
break;
if (na == NULL) /* only memory info */
break;
netmap_update_config(na);
req->nr_rx_rings = na->num_rx_rings;
req->nr_tx_rings = na->num_tx_rings;
req->nr_rx_slots = na->num_rx_desc;
req->nr_tx_slots = na->num_tx_desc;
} while (0);
netmap_unget_na(na, ifp);
NMG_UNLOCK();
break;
}
#ifdef WITH_VALE
case NETMAP_REQ_VALE_ATTACH: {
error = netmap_vale_attach(hdr, NULL /* userspace request */);
break;
}
case NETMAP_REQ_VALE_DETACH: {
error = netmap_vale_detach(hdr, NULL /* userspace request */);
break;
}
case NETMAP_REQ_VALE_LIST: {
error = netmap_vale_list(hdr);
break;
}
case NETMAP_REQ_PORT_HDR_SET: {
struct nmreq_port_hdr *req =
(struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body;
/* Build a nmreq_register out of the nmreq_port_hdr,
* so that we can call netmap_get_bdg_na(). */
struct nmreq_register regreq;
bzero(&regreq, sizeof(regreq));
regreq.nr_mode = NR_REG_ALL_NIC;
/* For now we only support virtio-net headers, and only for
* VALE ports, but this may change in the future. Valid lengths
* for the virtio-net header are 0 (no header), 10 and 12. */
if (req->nr_hdr_len != 0 &&
req->nr_hdr_len != sizeof(struct nm_vnet_hdr) &&
req->nr_hdr_len != 12) {
if (netmap_verbose)
nm_prerr("invalid hdr_len %u", req->nr_hdr_len);
error = EINVAL;
break;
}
NMG_LOCK();
hdr->nr_reqtype = NETMAP_REQ_REGISTER;
hdr->nr_body = (uintptr_t)&regreq;
error = netmap_get_vale_na(hdr, &na, NULL, 0);
hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_SET;
hdr->nr_body = (uintptr_t)req;
if (na && !error) {
struct netmap_vp_adapter *vpna =
(struct netmap_vp_adapter *)na;
na->virt_hdr_len = req->nr_hdr_len;
if (na->virt_hdr_len) {
vpna->mfs = NETMAP_BUF_SIZE(na);
}
if (netmap_verbose)
nm_prinf("Using vnet_hdr_len %d for %p", na->virt_hdr_len, na);
netmap_adapter_put(na);
} else if (!na) {
error = ENXIO;
}
NMG_UNLOCK();
break;
}
case NETMAP_REQ_PORT_HDR_GET: {
/* Get vnet-header length for this netmap port */
struct nmreq_port_hdr *req =
(struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body;
/* Build a nmreq_register out of the nmreq_port_hdr,
* so that we can call netmap_get_bdg_na(). */
struct nmreq_register regreq;
struct ifnet *ifp;
bzero(&regreq, sizeof(regreq));
regreq.nr_mode = NR_REG_ALL_NIC;
NMG_LOCK();
hdr->nr_reqtype = NETMAP_REQ_REGISTER;
hdr->nr_body = (uintptr_t)&regreq;
error = netmap_get_na(hdr, &na, &ifp, NULL, 0);
hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_GET;
hdr->nr_body = (uintptr_t)req;
if (na && !error) {
req->nr_hdr_len = na->virt_hdr_len;
}
netmap_unget_na(na, ifp);
NMG_UNLOCK();
break;
}
case NETMAP_REQ_VALE_NEWIF: {
error = nm_vi_create(hdr);
break;
}
case NETMAP_REQ_VALE_DELIF: {
error = nm_vi_destroy(hdr->nr_name);
break;
}
case NETMAP_REQ_VALE_POLLING_ENABLE:
case NETMAP_REQ_VALE_POLLING_DISABLE: {
error = nm_bdg_polling(hdr);
break;
}
#endif /* WITH_VALE */
case NETMAP_REQ_POOLS_INFO_GET: {
/* Get information from the memory allocator used for
* hdr->nr_name. */
struct nmreq_pools_info *req =
(struct nmreq_pools_info *)(uintptr_t)hdr->nr_body;
NMG_LOCK();
do {
/* Build a nmreq_register out of the nmreq_pools_info,
* so that we can call netmap_get_na(). */
struct nmreq_register regreq;
bzero(&regreq, sizeof(regreq));
regreq.nr_mem_id = req->nr_mem_id;
regreq.nr_mode = NR_REG_ALL_NIC;
hdr->nr_reqtype = NETMAP_REQ_REGISTER;
hdr->nr_body = (uintptr_t)&regreq;
error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */);
hdr->nr_reqtype = NETMAP_REQ_POOLS_INFO_GET; /* reset type */
hdr->nr_body = (uintptr_t)req; /* reset nr_body */
if (error) {
na = NULL;
ifp = NULL;
break;
}
nmd = na->nm_mem; /* grab the memory allocator */
if (nmd == NULL) {
error = EINVAL;
break;
}
/* Finalize the memory allocator, get the pools
* information and release the allocator. */
error = netmap_mem_finalize(nmd, na);
if (error) {
break;
}
error = netmap_mem_pools_info_get(req, nmd);
netmap_mem_drop(na);
} while (0);
netmap_unget_na(na, ifp);
NMG_UNLOCK();
break;
}
case NETMAP_REQ_CSB_ENABLE: {
struct nmreq_option *opt;
opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options,
NETMAP_REQ_OPT_CSB);
if (opt == NULL) {
error = EINVAL;
} else {
struct nmreq_opt_csb *csbo =
(struct nmreq_opt_csb *)opt;
error = nmreq_checkduplicate(opt);
if (!error) {
NMG_LOCK();
error = netmap_csb_validate(priv, csbo);
NMG_UNLOCK();
}
opt->nro_status = error;
}
break;
}
case NETMAP_REQ_SYNC_KLOOP_START: {
error = netmap_sync_kloop(priv, hdr);
break;
}
case NETMAP_REQ_SYNC_KLOOP_STOP: {
error = netmap_sync_kloop_stop(priv);
break;
}
default: {
error = EINVAL;
break;
}
}
/* Write back request body to userspace and reset the
* user-space pointer. */
error = nmreq_copyout(hdr, error);
break;
}
case NIOCTXSYNC:
case NIOCRXSYNC: {
if (unlikely(priv->np_nifp == NULL)) {
error = ENXIO;
break;
}
mb(); /* make sure following reads are not from cache */
if (unlikely(priv->np_csb_atok_base)) {
nm_prerr("Invalid sync in CSB mode");
error = EBUSY;
break;
}
na = priv->np_na; /* we have a reference */
mbq_init(&q);
t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX);
krings = NMR(na, t);
qfirst = priv->np_qfirst[t];
qlast = priv->np_qlast[t];
sync_flags = priv->np_sync_flags;
for (i = qfirst; i < qlast; i++) {
struct netmap_kring *kring = krings[i];
struct netmap_ring *ring = kring->ring;
if (unlikely(nm_kr_tryget(kring, 1, &error))) {
error = (error ? EIO : 0);
continue;
}
if (cmd == NIOCTXSYNC) {
if (netmap_debug & NM_DEBUG_TXSYNC)
nm_prinf("pre txsync ring %d cur %d hwcur %d",
i, ring->cur,
kring->nr_hwcur);
if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
} else if (kring->nm_sync(kring, sync_flags | NAF_FORCE_RECLAIM) == 0) {
nm_sync_finalize(kring);
}
if (netmap_debug & NM_DEBUG_TXSYNC)
nm_prinf("post txsync ring %d cur %d hwcur %d",
i, ring->cur,
kring->nr_hwcur);
} else {
if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
}
if (nm_may_forward_up(kring)) {
/* transparent forwarding, see netmap_poll() */
netmap_grab_packets(kring, &q, netmap_fwd);
}
if (kring->nm_sync(kring, sync_flags | NAF_FORCE_READ) == 0) {
nm_sync_finalize(kring);
}
ring_timestamp_set(ring);
}
nm_kr_put(kring);
}
if (mbq_peek(&q)) {
netmap_send_up(na->ifp, &q);
}
break;
}
default: {
return netmap_ioctl_legacy(priv, cmd, data, td);
}
}
return (error);
}
size_t
nmreq_size_by_type(uint16_t nr_reqtype)
{
switch (nr_reqtype) {
case NETMAP_REQ_REGISTER:
return sizeof(struct nmreq_register);
case NETMAP_REQ_PORT_INFO_GET:
return sizeof(struct nmreq_port_info_get);
case NETMAP_REQ_VALE_ATTACH:
return sizeof(struct nmreq_vale_attach);
case NETMAP_REQ_VALE_DETACH:
return sizeof(struct nmreq_vale_detach);
case NETMAP_REQ_VALE_LIST:
return sizeof(struct nmreq_vale_list);
case NETMAP_REQ_PORT_HDR_SET:
case NETMAP_REQ_PORT_HDR_GET:
return sizeof(struct nmreq_port_hdr);
case NETMAP_REQ_VALE_NEWIF:
return sizeof(struct nmreq_vale_newif);
case NETMAP_REQ_VALE_DELIF:
case NETMAP_REQ_SYNC_KLOOP_STOP:
case NETMAP_REQ_CSB_ENABLE:
return 0;
case NETMAP_REQ_VALE_POLLING_ENABLE:
case NETMAP_REQ_VALE_POLLING_DISABLE:
return sizeof(struct nmreq_vale_polling);
case NETMAP_REQ_POOLS_INFO_GET:
return sizeof(struct nmreq_pools_info);
case NETMAP_REQ_SYNC_KLOOP_START:
return sizeof(struct nmreq_sync_kloop_start);
}
return 0;
}
static size_t
nmreq_opt_size_by_type(uint32_t nro_reqtype, uint64_t nro_size)
{
size_t rv = sizeof(struct nmreq_option);
#ifdef NETMAP_REQ_OPT_DEBUG
if (nro_reqtype & NETMAP_REQ_OPT_DEBUG)
return (nro_reqtype & ~NETMAP_REQ_OPT_DEBUG);
#endif /* NETMAP_REQ_OPT_DEBUG */
switch (nro_reqtype) {
#ifdef WITH_EXTMEM
case NETMAP_REQ_OPT_EXTMEM:
rv = sizeof(struct nmreq_opt_extmem);
break;
#endif /* WITH_EXTMEM */
case NETMAP_REQ_OPT_SYNC_KLOOP_EVENTFDS:
if (nro_size >= rv)
rv = nro_size;
break;
case NETMAP_REQ_OPT_CSB:
rv = sizeof(struct nmreq_opt_csb);
break;
case NETMAP_REQ_OPT_SYNC_KLOOP_MODE:
rv = sizeof(struct nmreq_opt_sync_kloop_mode);
break;
}
/* subtract the common header */
return rv - sizeof(struct nmreq_option);
}
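Note that nmreq_opt_size_by_type() returns the size of the option *body*, i.e. the type-specific part that follows the common option header, hence the subtraction at the end. A sketch of the same arithmetic with made-up structures mirroring the header layout:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for nmreq_option and a CSB-like option. */
struct opt_hdr { uint64_t next; uint32_t reqtype; uint32_t status; uint64_t size; };
struct opt_csb { struct opt_hdr h; uint64_t atok; uint64_t ktoa; };

/* Body size = total option size minus the common header. */
static size_t
opt_body_size(size_t total)
{
	return total - sizeof(struct opt_hdr);
}
```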
int
nmreq_copyin(struct nmreq_header *hdr, int nr_body_is_user)
{
size_t rqsz, optsz, bufsz;
int error;
char *ker = NULL, *p;
struct nmreq_option **next, *src;
struct nmreq_option buf;
uint64_t *ptrs;
if (hdr->nr_reserved) {
if (netmap_verbose)
nm_prerr("nr_reserved must be zero");
return EINVAL;
}
if (!nr_body_is_user)
return 0;
hdr->nr_reserved = nr_body_is_user;
/* compute the total size of the buffer */
rqsz = nmreq_size_by_type(hdr->nr_reqtype);
if (rqsz > NETMAP_REQ_MAXSIZE) {
error = EMSGSIZE;
goto out_err;
}
if ((rqsz && hdr->nr_body == (uintptr_t)NULL) ||
(!rqsz && hdr->nr_body != (uintptr_t)NULL)) {
/* Request body expected, but not found; or
* request body found but unexpected. */
if (netmap_verbose)
nm_prerr("nr_body expected but not found, or vice versa");
error = EINVAL;
goto out_err;
}
bufsz = 2 * sizeof(void *) + rqsz;
optsz = 0;
for (src = (struct nmreq_option *)(uintptr_t)hdr->nr_options; src;
src = (struct nmreq_option *)(uintptr_t)buf.nro_next)
{
error = copyin(src, &buf, sizeof(*src));
if (error)
goto out_err;
optsz += sizeof(*src);
optsz += nmreq_opt_size_by_type(buf.nro_reqtype, buf.nro_size);
if (rqsz + optsz > NETMAP_REQ_MAXSIZE) {
error = EMSGSIZE;
goto out_err;
}
bufsz += optsz + sizeof(void *);
}
ker = nm_os_malloc(bufsz);
if (ker == NULL) {
error = ENOMEM;
goto out_err;
}
p = ker;
/* make a copy of the user pointers */
ptrs = (uint64_t*)p;
*ptrs++ = hdr->nr_body;
*ptrs++ = hdr->nr_options;
p = (char *)ptrs;
/* copy the body */
error = copyin((void *)(uintptr_t)hdr->nr_body, p, rqsz);
if (error)
goto out_restore;
/* overwrite the user pointer with the in-kernel one */
hdr->nr_body = (uintptr_t)p;
p += rqsz;
/* copy the options */
next = (struct nmreq_option **)&hdr->nr_options;
src = *next;
while (src) {
struct nmreq_option *opt;
/* copy the option header */
ptrs = (uint64_t *)p;
opt = (struct nmreq_option *)(ptrs + 1);
error = copyin(src, opt, sizeof(*src));
if (error)
goto out_restore;
/* make a copy of the user next pointer */
*ptrs = opt->nro_next;
/* overwrite the user pointer with the in-kernel one */
*next = opt;
/* initialize the option as not supported.
* Recognized options will update this field.
*/
opt->nro_status = EOPNOTSUPP;
p = (char *)(opt + 1);
/* copy the option body */
optsz = nmreq_opt_size_by_type(opt->nro_reqtype,
opt->nro_size);
if (optsz) {
/* the option body follows the option header */
error = copyin(src + 1, p, optsz);
if (error)
goto out_restore;
p += optsz;
}
/* move to next option */
next = (struct nmreq_option **)&opt->nro_next;
src = *next;
}
return 0;
out_restore:
ptrs = (uint64_t *)ker;
hdr->nr_body = *ptrs++;
hdr->nr_options = *ptrs++;
hdr->nr_reserved = 0;
nm_os_free(ker);
out_err:
return error;
}
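The key trick in nmreq_copyin() above is the buffer layout: each copied option is preceded by one uint64_t slot holding the original user-space next pointer, which nmreq_copyout() later recovers at ((uint64_t *)opt - 1). A minimal userspace model of that stash-and-copy step (struct and helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct opt { uint64_t nro_next; uint32_t nro_reqtype; };

/* Lay out [saved user next pointer][option] at dst (8-byte aligned)
 * and return a pointer to the in-kernel copy of the option. */
static struct opt *
stash_and_copy(char *dst, const struct opt *src)
{
	uint64_t *ptrs = (uint64_t *)dst;
	struct opt *kopt = (struct opt *)(ptrs + 1);

	*ptrs = src->nro_next;	/* save the user-space next pointer */
	memcpy(kopt, src, sizeof(*src));
	return kopt;
}

/* Recover the stashed pointer, as nmreq_copyout() does. */
static uint64_t
saved_next(const struct opt *kopt)
{
	return *((const uint64_t *)kopt - 1);
}
```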
static int
nmreq_copyout(struct nmreq_header *hdr, int rerror)
{
struct nmreq_option *src, *dst;
void *ker = (void *)(uintptr_t)hdr->nr_body, *bufstart;
uint64_t *ptrs;
size_t bodysz;
int error;
if (!hdr->nr_reserved)
return rerror;
/* restore the user pointers in the header */
ptrs = (uint64_t *)ker - 2;
bufstart = ptrs;
hdr->nr_body = *ptrs++;
src = (struct nmreq_option *)(uintptr_t)hdr->nr_options;
hdr->nr_options = *ptrs;
if (!rerror) {
/* copy the body */
bodysz = nmreq_size_by_type(hdr->nr_reqtype);
error = copyout(ker, (void *)(uintptr_t)hdr->nr_body, bodysz);
if (error) {
rerror = error;
goto out;
}
}
/* copy the options */
dst = (struct nmreq_option *)(uintptr_t)hdr->nr_options;
while (src) {
size_t optsz;
uint64_t next;
/* restore the user pointer */
next = src->nro_next;
ptrs = (uint64_t *)src - 1;
src->nro_next = *ptrs;
/* always copy the option header */
error = copyout(src, dst, sizeof(*src));
if (error) {
rerror = error;
goto out;
}
/* copy the option body only if there was no error */
if (!rerror && !src->nro_status) {
optsz = nmreq_opt_size_by_type(src->nro_reqtype,
src->nro_size);
if (optsz) {
error = copyout(src + 1, dst + 1, optsz);
if (error) {
rerror = error;
goto out;
}
}
}
src = (struct nmreq_option *)(uintptr_t)next;
dst = (struct nmreq_option *)(uintptr_t)*ptrs;
}
out:
hdr->nr_reserved = 0;
nm_os_free(bufstart);
return rerror;
}
struct nmreq_option *
nmreq_findoption(struct nmreq_option *opt, uint16_t reqtype)
{
for ( ; opt; opt = (struct nmreq_option *)(uintptr_t)opt->nro_next)
if (opt->nro_reqtype == reqtype)
return opt;
return NULL;
}
int
nmreq_checkduplicate(struct nmreq_option *opt) {
uint16_t type = opt->nro_reqtype;
int dup = 0;
while ((opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)opt->nro_next,
type))) {
dup++;
opt->nro_status = EINVAL;
}
return (dup ? EINVAL : 0);
}
static int
nmreq_checkoptions(struct nmreq_header *hdr)
{
struct nmreq_option *opt;
/* return error if there is still any option
* marked as not supported
*/
for (opt = (struct nmreq_option *)(uintptr_t)hdr->nr_options; opt;
opt = (struct nmreq_option *)(uintptr_t)opt->nro_next)
if (opt->nro_status == EOPNOTSUPP)
return EOPNOTSUPP;
return 0;
}
/*
* select(2) and poll(2) handlers for the "netmap" device.
*
* Can be called for one or more queues.
* Return the event mask corresponding to ready events.
* If there are no ready events (and 'sr' is not NULL), do a
* selrecord on either individual selinfo or on the global one.
* Device-dependent parts (locking and sync of tx/rx rings)
* are done through callbacks.
*
* On linux, arguments are really pwait, the poll table, and 'td' is struct file *
* The first one is remapped to pwait as selrecord() uses the name as a
* hidden argument.
*/
int
netmap_poll(struct netmap_priv_d *priv, int events, NM_SELRECORD_T *sr)
{
struct netmap_adapter *na;
struct netmap_kring *kring;
struct netmap_ring *ring;
u_int i, want[NR_TXRX], revents = 0;
NM_SELINFO_T *si[NR_TXRX];
#define want_tx want[NR_TX]
#define want_rx want[NR_RX]
struct mbq q; /* packets from RX hw queues to host stack */
/*
* In order to avoid nested locks, we need to "double check"
* txsync and rxsync if we decide to do a selrecord().
* retry_tx (and retry_rx, later) prevent looping forever.
*/
int retry_tx = 1, retry_rx = 1;
/* Transparent mode: send_down is 1 if we have found some
* packets to forward (host RX ring --> NIC) during the rx
* scan and we have not sent them down to the NIC yet.
* Transparent mode requires binding all rings to a single
* file descriptor.
*/
int send_down = 0;
int sync_flags = priv->np_sync_flags;
mbq_init(&q);
if (unlikely(priv->np_nifp == NULL)) {
return POLLERR;
}
mb(); /* make sure following reads are not from cache */
na = priv->np_na;
if (unlikely(!nm_netmap_on(na)))
return POLLERR;
if (unlikely(priv->np_csb_atok_base)) {
nm_prerr("Invalid poll in CSB mode");
return POLLERR;
}
if (netmap_debug & NM_DEBUG_ON)
nm_prinf("device %s events 0x%x", na->name, events);
want_tx = events & (POLLOUT | POLLWRNORM);
want_rx = events & (POLLIN | POLLRDNORM);
/*
* If the card has more than one queue AND the file descriptor is
* bound to all of them, we sleep on the "global" selinfo, otherwise
* we sleep on individual selinfo (FreeBSD only allows two selinfo's
* per file descriptor).
* The interrupt routine in the driver wakes one or the other
* (or both) depending on which clients are active.
*
* rxsync() is only called if we run out of buffers on a POLLIN.
* txsync() is called if we run out of buffers on POLLOUT, or
* there are pending packets to send. The latter can be disabled
* passing NETMAP_NO_TX_POLL in the NIOCREG call.
*/
si[NR_RX] = priv->np_si[NR_RX];
si[NR_TX] = priv->np_si[NR_TX];
#ifdef __FreeBSD__
/*
* We start with a lock-free round, which is cheap if we have
* slots available. If this fails, then lock and call the sync
* routines. We can't do this on Linux, as the contract says
* that we must call nm_os_selrecord() unconditionally.
*/
if (want_tx) {
const enum txrx t = NR_TX;
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = NMR(na, t)[i];
if (kring->ring->cur != kring->ring->tail) {
/* Some unseen TX space is available, so
* we do not need to run txsync. */
revents |= want[t];
want[t] = 0;
break;
}
}
}
if (want_rx) {
const enum txrx t = NR_RX;
int rxsync_needed = 0;
for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
kring = NMR(na, t)[i];
if (kring->ring->cur == kring->ring->tail
|| kring->rhead != kring->ring->head) {
/* There are no unseen packets on this ring,
* or there are some buffers to be returned
* to the netmap port. We therefore go ahead
* and run rxsync. */
rxsync_needed = 1;
break;
}
}
if (!rxsync_needed) {
revents |= want_rx;
want_rx = 0;
}
}
#endif
#ifdef linux
/* The selrecord must be unconditional on linux. */
nm_os_selrecord(sr, si[NR_RX]);
nm_os_selrecord(sr, si[NR_TX]);
#endif /* linux */
/*
* If we want to push packets out (priv->np_txpoll) or
* want_tx is still set, we must issue txsync calls
* (on all rings, to avoid that the tx rings stall).
* Fortunately, normal tx mode has np_txpoll set.
*/
if (priv->np_txpoll || want_tx) {
/*
* The first round checks if anyone is ready, if not
* do a selrecord and another round to handle races.
* want_tx goes to 0 if any space is found, and is
* used to skip rings with no pending transmissions.
*/
flush_tx:
for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) {
int found = 0;
kring = na->tx_rings[i];
ring = kring->ring;
/*
* Don't try to txsync this TX ring if we already found some
* space in some of the TX rings (want_tx == 0) and there are no
* TX slots in this ring that need to be flushed to the NIC
* (head == hwcur).
*/
if (!send_down && !want_tx && ring->head == kring->nr_hwcur)
continue;
if (nm_kr_tryget(kring, 1, &revents))
continue;
if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
revents |= POLLERR;
} else {
if (kring->nm_sync(kring, sync_flags))
revents |= POLLERR;
else
nm_sync_finalize(kring);
}
/*
* If we found new slots, notify potential
* listeners on the same ring.
* Since we just did a txsync, look at the copies
* of cur,tail in the kring.
*/
found = kring->rcur != kring->rtail;
nm_kr_put(kring);
if (found) { /* notify other listeners */
revents |= want_tx;
want_tx = 0;
#ifndef linux
kring->nm_notify(kring, 0);
#endif /* linux */
}
}
/* if there were any packets to forward, we must have handled them by now */
send_down = 0;
if (want_tx && retry_tx && sr) {
#ifndef linux
nm_os_selrecord(sr, si[NR_TX]);
#endif /* !linux */
retry_tx = 0;
goto flush_tx;
}
}
/*
* If want_rx is still set scan receive rings.
* Do it on all rings because otherwise we starve.
*/
if (want_rx) {
/* two rounds here for race avoidance */
do_retry_rx:
for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) {
int found = 0;
kring = na->rx_rings[i];
ring = kring->ring;
if (unlikely(nm_kr_tryget(kring, 1, &revents)))
continue;
if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) {
netmap_ring_reinit(kring);
revents |= POLLERR;
}
/* now we can use kring->rcur, rtail */
/*
* transparent mode support: collect packets from
* hw rxring(s) that have been released by the user
*/
if (nm_may_forward_up(kring)) {
netmap_grab_packets(kring, &q, netmap_fwd);
}
/* Clear the NR_FORWARD flag anyway, it may be set by
* the nm_sync() below only for the host RX ring (see
* netmap_rxsync_from_host()). */
kring->nr_kflags &= ~NR_FORWARD;
if (kring->nm_sync(kring, sync_flags))
revents |= POLLERR;
else
nm_sync_finalize(kring);
send_down |= (kring->nr_kflags & NR_FORWARD);
ring_timestamp_set(ring);
found = kring->rcur != kring->rtail;
nm_kr_put(kring);
if (found) {
revents |= want_rx;
retry_rx = 0;
#ifndef linux
kring->nm_notify(kring, 0);
#endif /* linux */
}
}
#ifndef linux
if (retry_rx && sr) {
nm_os_selrecord(sr, si[NR_RX]);
}
#endif /* !linux */
if (send_down || retry_rx) {
retry_rx = 0;
if (send_down)
goto flush_tx; /* and retry_rx */
else
goto do_retry_rx;
}
}
/*
* Transparent mode: released bufs (i.e. between kring->nr_hwcur and
* ring->head) marked with NS_FORWARD on hw rx rings are passed up
* to the host stack.
*/
if (mbq_peek(&q)) {
netmap_send_up(na->ifp, &q);
}
return (revents);
#undef want_tx
#undef want_rx
}
int
nma_intr_enable(struct netmap_adapter *na, int onoff)
{
bool changed = false;
enum txrx t;
int i;
for_rx_tx(t) {
for (i = 0; i < nma_get_nrings(na, t); i++) {
struct netmap_kring *kring = NMR(na, t)[i];
int on = !(kring->nr_kflags & NKR_NOINTR);
if (!!onoff != !!on) {
changed = true;
}
if (onoff) {
kring->nr_kflags &= ~NKR_NOINTR;
} else {
kring->nr_kflags |= NKR_NOINTR;
}
}
}
if (!changed) {
return 0; /* nothing to do */
}
if (!na->nm_intr) {
nm_prerr("Cannot %s interrupts for %s", onoff ? "enable" : "disable",
na->name);
return -1;
}
na->nm_intr(na, onoff);
return 0;
}
/*-------------------- driver support routines -------------------*/
/* default notify callback */
static int
netmap_notify(struct netmap_kring *kring, int flags)
{
struct netmap_adapter *na = kring->notify_na;
enum txrx t = kring->tx;
nm_os_selwakeup(&kring->si);
/* optimization: avoid a wake up on the global
* queue if nobody has registered for more
* than one ring
*/
if (na->si_users[t] > 0)
nm_os_selwakeup(&na->si[t]);
return NM_IRQ_COMPLETED;
}
/* called by all routines that create netmap_adapters.
* provide some defaults and get a reference to the
* memory allocator
*/
int
netmap_attach_common(struct netmap_adapter *na)
{
if (!na->rx_buf_maxsize) {
/* Set a conservative default (larger is safer). */
na->rx_buf_maxsize = PAGE_SIZE;
}
#ifdef __FreeBSD__
if (na->na_flags & NAF_HOST_RINGS && na->ifp) {
na->if_input = na->ifp->if_input; /* for netmap_send_up */
}
na->pdev = na; /* make sure netmap_mem_map() is called */
#endif /* __FreeBSD__ */
if (na->na_flags & NAF_HOST_RINGS) {
if (na->num_host_rx_rings == 0)
na->num_host_rx_rings = 1;
if (na->num_host_tx_rings == 0)
na->num_host_tx_rings = 1;
}
if (na->nm_krings_create == NULL) {
/* we assume that we have been called by a driver,
* since other port types all provide their own
* nm_krings_create
*/
na->nm_krings_create = netmap_hw_krings_create;
na->nm_krings_delete = netmap_hw_krings_delete;
}
if (na->nm_notify == NULL)
na->nm_notify = netmap_notify;
na->active_fds = 0;
if (na->nm_mem == NULL) {
/* use the global allocator */
na->nm_mem = netmap_mem_get(&nm_mem);
}
#ifdef WITH_VALE
if (na->nm_bdg_attach == NULL)
/* no special nm_bdg_attach callback. On VALE
* attach, we need to interpose a bwrap
*/
na->nm_bdg_attach = netmap_default_bdg_attach;
#endif
return 0;
}
/* Wrapper for the register callback provided by netmap-enabled
* hardware drivers.
* nm_iszombie(na) means that the driver module has been
* unloaded, so we cannot call into it.
* nm_os_ifnet_lock() must guarantee mutual exclusion with
* module unloading.
*/
static int
netmap_hw_reg(struct netmap_adapter *na, int onoff)
{
struct netmap_hw_adapter *hwna =
(struct netmap_hw_adapter*)na;
int error = 0;
nm_os_ifnet_lock();
if (nm_iszombie(na)) {
if (onoff) {
error = ENXIO;
} else if (na != NULL) {
na->na_flags &= ~NAF_NETMAP_ON;
}
goto out;
}
error = hwna->nm_hw_register(na, onoff);
out:
nm_os_ifnet_unlock();
return error;
}
static void
netmap_hw_dtor(struct netmap_adapter *na)
{
if (na->ifp == NULL)
return;
NM_DETACH_NA(na->ifp);
}
/*
* Allocate a netmap_adapter object, and initialize it from the
* 'arg' passed by the driver on attach.
* We allocate a block of memory of 'size' bytes, which has room
* for struct netmap_adapter plus additional room private to
* the caller.
* Return 0 on success, ENOMEM otherwise.
*/
int
netmap_attach_ext(struct netmap_adapter *arg, size_t size, int override_reg)
{
struct netmap_hw_adapter *hwna = NULL;
struct ifnet *ifp = NULL;
if (size < sizeof(struct netmap_hw_adapter)) {
if (netmap_debug & NM_DEBUG_ON)
nm_prerr("Invalid netmap adapter size %d", (int)size);
return EINVAL;
}
if (arg == NULL || arg->ifp == NULL) {
if (netmap_debug & NM_DEBUG_ON)
nm_prerr("either arg or arg->ifp is NULL");
return EINVAL;
}
if (arg->num_tx_rings == 0 || arg->num_rx_rings == 0) {
if (netmap_debug & NM_DEBUG_ON)
nm_prerr("%s: invalid rings tx %d rx %d",
arg->name, arg->num_tx_rings, arg->num_rx_rings);
return EINVAL;
}
ifp = arg->ifp;
if (NM_NA_CLASH(ifp)) {
/* If NA(ifp) is not null but there is no valid netmap
* adapter it means that someone else is using the same
* pointer (e.g. ax25_ptr on linux). This happens for
* instance when also PF_RING is in use. */
nm_prerr("Error: netmap adapter hook is busy");
return EBUSY;
}
hwna = nm_os_malloc(size);
if (hwna == NULL)
goto fail;
hwna->up = *arg;
hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE;
strlcpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name));
if (override_reg) {
hwna->nm_hw_register = hwna->up.nm_register;
hwna->up.nm_register = netmap_hw_reg;
}
if (netmap_attach_common(&hwna->up)) {
nm_os_free(hwna);
goto fail;
}
netmap_adapter_get(&hwna->up);
NM_ATTACH_NA(ifp, &hwna->up);
nm_os_onattach(ifp);
if (arg->nm_dtor == NULL) {
hwna->up.nm_dtor = netmap_hw_dtor;
}
- nm_prinf("%s: netmap queues/slots: TX %d/%d, RX %d/%d\n",
- hwna->up.name,
+ if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n",
hwna->up.num_tx_rings, hwna->up.num_tx_desc,
hwna->up.num_rx_rings, hwna->up.num_rx_desc);
return 0;
fail:
nm_prerr("fail, arg %p ifp %p na %p", arg, ifp, hwna);
return (hwna ? EINVAL : ENOMEM);
}
int
netmap_attach(struct netmap_adapter *arg)
{
return netmap_attach_ext(arg, sizeof(struct netmap_hw_adapter),
1 /* override nm_reg */);
}
void
NM_DBG(netmap_adapter_get)(struct netmap_adapter *na)
{
if (!na) {
return;
}
refcount_acquire(&na->na_refcount);
}
/* returns 1 iff the netmap_adapter is destroyed */
int
NM_DBG(netmap_adapter_put)(struct netmap_adapter *na)
{
if (!na)
return 1;
if (!refcount_release(&na->na_refcount))
return 0;
if (na->nm_dtor)
na->nm_dtor(na);
if (na->tx_rings) { /* XXX should not happen */
if (netmap_debug & NM_DEBUG_ON)
nm_prerr("freeing leftover tx_rings");
na->nm_krings_delete(na);
}
netmap_pipe_dealloc(na);
if (na->nm_mem)
netmap_mem_put(na->nm_mem);
bzero(na, sizeof(*na));
nm_os_free(na);
return 1;
}
/* nm_krings_create callback for all hardware native adapters */
int
netmap_hw_krings_create(struct netmap_adapter *na)
{
int ret = netmap_krings_create(na, 0);
if (ret == 0) {
/* initialize the mbq for the sw rx ring */
u_int lim = netmap_real_rings(na, NR_RX), i;
for (i = na->num_rx_rings; i < lim; i++) {
mbq_safe_init(&NMR(na, NR_RX)[i]->rx_queue);
}
nm_prdis("initialized sw rx queue %d", na->num_rx_rings);
}
return ret;
}
/*
* Called on module unload by the netmap-enabled drivers
*/
void
netmap_detach(struct ifnet *ifp)
{
struct netmap_adapter *na = NA(ifp);
if (!na)
return;
NMG_LOCK();
netmap_set_all_rings(na, NM_KR_LOCKED);
/*
* if the netmap adapter is not native, somebody
* changed it, so we cannot release it here.
* The NAF_ZOMBIE flag will notify the new owner that
* the driver is gone.
*/
if (!(na->na_flags & NAF_NATIVE) || !netmap_adapter_put(na)) {
na->na_flags |= NAF_ZOMBIE;
}
/* give active users a chance to notice that NAF_ZOMBIE has been
* turned on, so that they can stop and return an error to userspace.
* Note that this becomes a NOP if there are no active users and,
* therefore, the put() above has deleted the na, since now NA(ifp) is
* NULL.
*/
netmap_enable_all_rings(ifp);
NMG_UNLOCK();
}
/*
* Intercept packets from the network stack and pass them
* to netmap as incoming packets on the 'software' ring.
*
* We only store packets in a bounded mbq and then copy them
* in the relevant rxsync routine.
*
* We rely on the OS to make sure that the ifp and na do not go
* away (typically the caller checks for IFF_DRV_RUNNING or the like).
* In nm_register() or whenever there is a reinitialization,
* we make sure to make the mode change visible here.
*/
int
netmap_transmit(struct ifnet *ifp, struct mbuf *m)
{
struct netmap_adapter *na = NA(ifp);
struct netmap_kring *kring, *tx_kring;
u_int len = MBUF_LEN(m);
u_int error = ENOBUFS;
unsigned int txr;
struct mbq *q;
int busy;
u_int i;
i = MBUF_TXQ(m);
if (i >= na->num_host_rx_rings) {
i = i % na->num_host_rx_rings;
}
kring = NMR(na, NR_RX)[nma_get_nrings(na, NR_RX) + i];
// XXX [Linux] we do not need this lock
// if we follow the down/configure/up protocol -gl
// mtx_lock(&na->core_lock);
if (!nm_netmap_on(na)) {
nm_prerr("%s not in netmap mode anymore", na->name);
error = ENXIO;
goto done;
}
txr = MBUF_TXQ(m);
if (txr >= na->num_tx_rings) {
txr %= na->num_tx_rings;
}
tx_kring = NMR(na, NR_TX)[txr];
if (tx_kring->nr_mode == NKR_NETMAP_OFF) {
return MBUF_TRANSMIT(na, ifp, m);
}
q = &kring->rx_queue;
// XXX reconsider long packets if we handle fragments
if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */
nm_prerr("%s from_host, drop packet size %d > %d", na->name,
len, NETMAP_BUF_SIZE(na));
goto done;
}
if (!netmap_generic_hwcsum) {
if (nm_os_mbuf_has_csum_offld(m)) {
nm_prlim(1, "%s drop mbuf that needs checksum offload", na->name);
goto done;
}
}
if (nm_os_mbuf_has_seg_offld(m)) {
nm_prlim(1, "%s drop mbuf that needs generic segmentation offload", na->name);
goto done;
}
#ifdef __FreeBSD__
ETHER_BPF_MTAP(ifp, m);
#endif /* __FreeBSD__ */
/* protect against netmap_rxsync_from_host(), netmap_sw_to_nic()
* and maybe other instances of netmap_transmit (the latter
* not possible on Linux).
* We enqueue the mbuf only if we are sure there is going to be
* enough room in the host RX ring, otherwise we drop it.
*/
mbq_lock(q);
busy = kring->nr_hwtail - kring->nr_hwcur;
if (busy < 0)
busy += kring->nkr_num_slots;
if (busy + mbq_len(q) >= kring->nkr_num_slots - 1) {
nm_prlim(2, "%s full hwcur %d hwtail %d qlen %d", na->name,
kring->nr_hwcur, kring->nr_hwtail, mbq_len(q));
} else {
mbq_enqueue(q, m);
nm_prdis(2, "%s %d bufs in queue", na->name, mbq_len(q));
/* notify outside the lock */
m = NULL;
error = 0;
}
mbq_unlock(q);
done:
if (m)
m_freem(m);
/* unconditionally wake up listeners */
kring->nm_notify(kring, 0);
/* this is normally netmap_notify(), but for nics
* connected to a bridge it is netmap_bwrap_intr_notify(),
* that possibly forwards the frames through the switch
*/
return (error);
}
/*
* netmap_reset() is called by the driver routines when reinitializing
* a ring. The driver is in charge of locking to protect the kring.
* If native netmap mode is not set just return NULL.
* If native netmap mode is set, in particular, we have to set nr_mode to
* NKR_NETMAP_ON.
*/
struct netmap_slot *
netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n,
u_int new_cur)
{
struct netmap_kring *kring;
int new_hwofs, lim;
if (!nm_native_on(na)) {
nm_prdis("interface not in native netmap mode");
return NULL; /* nothing to reinitialize */
}
/* XXX note- in the new scheme, we are not guaranteed to be
* under lock (e.g. when called on a device reset).
* In this case, we should set a flag and do not trust too
* much the values. In practice: TODO
* - set a RESET flag somewhere in the kring
* - do the processing in a conservative way
* - let the *sync() fixup at the end.
*/
if (tx == NR_TX) {
if (n >= na->num_tx_rings)
return NULL;
kring = na->tx_rings[n];
if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
kring->nr_mode = NKR_NETMAP_OFF;
return NULL;
}
// XXX check whether we should use hwcur or rcur
new_hwofs = kring->nr_hwcur - new_cur;
} else {
if (n >= na->num_rx_rings)
return NULL;
kring = na->rx_rings[n];
if (kring->nr_pending_mode == NKR_NETMAP_OFF) {
kring->nr_mode = NKR_NETMAP_OFF;
return NULL;
}
new_hwofs = kring->nr_hwtail - new_cur;
}
lim = kring->nkr_num_slots - 1;
if (new_hwofs > lim)
new_hwofs -= lim + 1;
/* Always set the new offset value and realign the ring. */
if (netmap_debug & NM_DEBUG_ON)
nm_prinf("%s %s%d hwofs %d -> %d, hwtail %d -> %d",
na->name,
tx == NR_TX ? "TX" : "RX", n,
kring->nkr_hwofs, new_hwofs,
kring->nr_hwtail,
tx == NR_TX ? lim : kring->nr_hwtail);
kring->nkr_hwofs = new_hwofs;
if (tx == NR_TX) {
kring->nr_hwtail = kring->nr_hwcur + lim;
if (kring->nr_hwtail > lim)
kring->nr_hwtail -= lim + 1;
}
/*
* Wakeup on the individual and global selwait
* We do the wakeup here, but the ring is not yet reconfigured.
* However, we are under lock so there are no races.
*/
kring->nr_mode = NKR_NETMAP_ON;
kring->nm_notify(kring, 0);
return kring->ring->slot;
}
/*
* Dispatch rx/tx interrupts to the netmap rings.
*
* "work_done" is non-null on the RX path, NULL for the TX path.
* We rely on the OS to make sure that there is only one active
* instance per queue, and that there is appropriate locking.
*
* The 'notify' routine depends on what the ring is attached to.
* - for a netmap file descriptor, do a selwakeup on the individual
* waitqueue, plus one on the global one if needed
* (see netmap_notify)
* - for a nic connected to a switch, call the proper forwarding routine
* (see netmap_bwrap_intr_notify)
*/
int
netmap_common_irq(struct netmap_adapter *na, u_int q, u_int *work_done)
{
struct netmap_kring *kring;
enum txrx t = (work_done ? NR_RX : NR_TX);
q &= NETMAP_RING_MASK;
if (netmap_debug & (NM_DEBUG_RXINTR|NM_DEBUG_TXINTR)) {
nm_prlim(5, "received %s queue %d", work_done ? "RX" : "TX" , q);
}
if (q >= nma_get_nrings(na, t))
return NM_IRQ_PASS; // not a physical queue
kring = NMR(na, t)[q];
if (kring->nr_mode == NKR_NETMAP_OFF) {
return NM_IRQ_PASS;
}
if (t == NR_RX) {
kring->nr_kflags |= NKR_PENDINTR; // XXX atomic ?
*work_done = 1; /* do not fire napi again */
}
return kring->nm_notify(kring, 0);
}
/*
* Default functions to handle rx/tx interrupts from a physical device.
* "work_done" is non-null on the RX path, NULL for the TX path.
*
* If the card is not in netmap mode, simply return NM_IRQ_PASS,
* so that the caller proceeds with regular processing.
* Otherwise call netmap_common_irq().
*
* If the card is connected to a netmap file descriptor,
* do a selwakeup on the individual queue, plus one on the global one
* if needed (multiqueue card _and_ there are multiqueue listeners),
* and return NM_IRQ_COMPLETED.
*
* Finally, if called on rx from an interface connected to a switch,
* we call the proper forwarding routine.
*/
int
netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done)
{
struct netmap_adapter *na = NA(ifp);
/*
* XXX emulated netmap mode sets NAF_SKIP_INTR so
* we still use the regular driver even though the previous
* check fails. It is unclear whether we should use
* nm_native_on() here.
*/
if (!nm_netmap_on(na))
return NM_IRQ_PASS;
if (na->na_flags & NAF_SKIP_INTR) {
nm_prdis("use regular interrupt");
return NM_IRQ_PASS;
}
return netmap_common_irq(na, q, work_done);
}
/* set/clear native flags and if_transmit/netdev_ops */
void
nm_set_native_flags(struct netmap_adapter *na)
{
struct ifnet *ifp = na->ifp;
/* We do the setup for intercepting packets only if we are the
* first user of this adapter. */
if (na->active_fds > 0) {
return;
}
na->na_flags |= NAF_NETMAP_ON;
nm_os_onenter(ifp);
nm_update_hostrings_mode(na);
}
void
nm_clear_native_flags(struct netmap_adapter *na)
{
struct ifnet *ifp = na->ifp;
/* We undo the setup for intercepting packets only if we are the
* last user of this adapter. */
if (na->active_fds > 0) {
return;
}
nm_update_hostrings_mode(na);
nm_os_onexit(ifp);
na->na_flags &= ~NAF_NETMAP_ON;
}
void
netmap_krings_mode_commit(struct netmap_adapter *na, int onoff)
{
enum txrx t;
for_rx_tx(t) {
int i;
for (i = 0; i < netmap_real_rings(na, t); i++) {
struct netmap_kring *kring = NMR(na, t)[i];
if (onoff && nm_kring_pending_on(kring))
kring->nr_mode = NKR_NETMAP_ON;
else if (!onoff && nm_kring_pending_off(kring))
kring->nr_mode = NKR_NETMAP_OFF;
}
}
}
/*
* Module loader and unloader
*
* netmap_init() creates the /dev/netmap device and initializes
* all global variables. Returns 0 on success, errno on failure
* (although in practice this should not fail)
*
* netmap_fini() destroys everything.
*/
static struct cdev *netmap_dev; /* /dev/netmap character device. */
extern struct cdevsw netmap_cdevsw;
void
netmap_fini(void)
{
if (netmap_dev)
destroy_dev(netmap_dev);
/* we assume that there are no longer netmap users */
nm_os_ifnet_fini();
netmap_uninit_bridges();
netmap_mem_fini();
NMG_LOCK_DESTROY();
nm_prinf("netmap: unloaded module.");
}
int
netmap_init(void)
{
int error;
NMG_LOCK_INIT();
error = netmap_mem_init();
if (error != 0)
goto fail;
/*
* MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls
* when the module is compiled in.
* XXX could use make_dev_credv() to get error number
*/
netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD,
&netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600,
"netmap");
if (!netmap_dev)
goto fail;
error = netmap_init_bridges();
if (error)
goto fail;
#ifdef __FreeBSD__
nm_os_vi_init_index();
#endif
error = nm_os_ifnet_init();
if (error)
goto fail;
nm_prinf("netmap: loaded module");
return (0);
fail:
netmap_fini();
return (EINVAL); /* may be incorrect */
}
Index: projects/clang800-import/sys/dev/pccbb/pccbbdevid.h
===================================================================
--- projects/clang800-import/sys/dev/pccbb/pccbbdevid.h (revision 343955)
+++ projects/clang800-import/sys/dev/pccbb/pccbbdevid.h (revision 343956)
@@ -1,116 +1,116 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2001-2004 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001-2004 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
/* Vendor/Device IDs */
#define PCIC_ID_CLPD6729 0x11001013ul /* 16bit I/O */
#define PCIC_ID_CLPD6832 0x11101013ul
#define PCIC_ID_CLPD6833 0x11131013ul
#define PCIC_ID_CLPD6834 0x11121013ul
#define PCIC_ID_ENE_CB710 0x14111524ul
#define PCIC_ID_ENE_CB720 0x14211524ul /* ??? */
#define PCIC_ID_ENE_CB1211 0x12111524ul /* ??? */
#define PCIC_ID_ENE_CB1225 0x12251524ul /* ??? */
#define PCIC_ID_ENE_CB1410 0x14101524ul
#define PCIC_ID_ENE_CB1420 0x14201524ul
#define PCIC_ID_INTEL_82092AA_0 0x12218086ul /* 16bit I/O */
#define PCIC_ID_INTEL_82092AA_1 0x12228086ul /* 16bit I/O */
#define PCIC_ID_OMEGA_82C094 0x1221119bul /* 16bit I/O */
#define PCIC_ID_OZ6729 0x67291217ul /* 16bit I/O */
#define PCIC_ID_OZ6730 0x673a1217ul /* 16bit I/O */
#define PCIC_ID_OZ6832 0x68321217ul /* Also 6833 */
#define PCIC_ID_OZ6860 0x68361217ul /* Also 6836 */
#define PCIC_ID_OZ6872 0x68721217ul /* Also 6812 */
#define PCIC_ID_OZ6912 0x69721217ul /* Also 6972 */
#define PCIC_ID_OZ6922 0x69251217ul
#define PCIC_ID_OZ6933 0x69331217ul
#define PCIC_ID_OZ711EC1 0x71121217ul /* O2Micro 711EC1/M1 */
#define PCIC_ID_OZ711E1 0x71131217ul /* O2Micro 711E1 */
#define PCIC_ID_OZ711M1 0x71141217ul /* O2Micro 711M1 */
#define PCIC_ID_OZ711E2 0x71e21217ul
#define PCIC_ID_OZ711M2 0x72121217ul
#define PCIC_ID_OZ711M3 0x72231217ul
#define PCIC_ID_RICOH_RL5C465 0x04651180ul
#define PCIC_ID_RICOH_RL5C466 0x04661180ul
#define PCIC_ID_RICOH_RL5C475 0x04751180ul
#define PCIC_ID_RICOH_RL5C476 0x04761180ul
#define PCIC_ID_RICOH_RL5C477 0x04771180ul
#define PCIC_ID_RICOH_RL5C478 0x04781180ul
#define PCIC_ID_SMC_34C90 0xb10610b3ul
#define PCIC_ID_TI1031 0xac13104cul
#define PCIC_ID_TI1130 0xac12104cul
#define PCIC_ID_TI1131 0xac15104cul
#define PCIC_ID_TI1210 0xac1a104cul
#define PCIC_ID_TI1211 0xac1e104cul
#define PCIC_ID_TI1220 0xac17104cul
#define PCIC_ID_TI1221 0xac19104cul /* never sold */
#define PCIC_ID_TI1225 0xac1c104cul
#define PCIC_ID_TI1250 0xac16104cul /* Rare */
#define PCIC_ID_TI1251 0xac1d104cul
#define PCIC_ID_TI1251B 0xac1f104cul
#define PCIC_ID_TI1260 0xac18104cul /* never sold */
#define PCIC_ID_TI1260B 0xac30104cul /* never sold */
#define PCIC_ID_TI1410 0xac50104cul
#define PCIC_ID_TI1420 0xac51104cul
#define PCIC_ID_TI1421 0xac53104cul /* never sold */
#define PCIC_ID_TI1450 0xac1b104cul
#define PCIC_ID_TI1451 0xac52104cul
#define PCIC_ID_TI1510 0xac56104cul
#define PCIC_ID_TI1515 0xac58104cul
#define PCIC_ID_TI1520 0xac55104cul
#define PCIC_ID_TI1530 0xac57104cul
#define PCIC_ID_TI1620 0xac54104cul
#define PCIC_ID_TI4410 0xac41104cul
#define PCIC_ID_TI4450 0xac40104cul
#define PCIC_ID_TI4451 0xac42104cul
#define PCIC_ID_TI4510 0xac44104cul
#define PCIC_ID_TI4520 0xac46104cul
#define PCIC_ID_TI6411 0x8031104cul /* PCI[67]x[12]1 */
#define PCIC_ID_TI6420 0xac8d104cul /* PCI[67]x20 Smartcard dis */
#define PCIC_ID_TI6420SC 0xac8e104cul /* PCI[67]x20 Smartcard en */
#define PCIC_ID_TI7410 0xac49104cul
#define PCIC_ID_TI7510 0xac47104cul
#define PCIC_ID_TI7610 0xac48104cul
#define PCIC_ID_TI7610M 0xac4a104cul
#define PCIC_ID_TI7610SD 0xac4b104cul
#define PCIC_ID_TI7610MS 0xac4c104cul
#define PCIC_ID_TOPIC95 0x06031179ul
#define PCIC_ID_TOPIC95B 0x060a1179ul
#define PCIC_ID_TOPIC97 0x060f1179ul
#define PCIC_ID_TOPIC100 0x06171179ul
/*
* Other ID, from sources too vague to be reliable
* Mfg model PCI ID
* smc/Databook DB87144 0x310610b3
* Omega/Trident 82c194 0x01941023
* Omega/Trident 82c722 0x07221023?
* Opti 82c814 0xc8141045
* Opti 82c824 0xc8241045
* NEC uPD66369 0x003e1033
*/
Index: projects/clang800-import/sys/dev/pci/pci_host_generic_acpi.c
===================================================================
--- projects/clang800-import/sys/dev/pci/pci_host_generic_acpi.c (revision 343955)
+++ projects/clang800-import/sys/dev/pci/pci_host_generic_acpi.c (revision 343956)
@@ -1,450 +1,486 @@
/*-
* Copyright (C) 2018 Cavium Inc.
* Copyright (c) 2015 Ruslan Bukin <br@bsdpad.com>
* Copyright (c) 2014 The FreeBSD Foundation
* All rights reserved.
*
* This software was developed by Semihalf under
* the sponsorship of the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/* Generic ECAM PCIe driver */
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_platform.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/kernel.h>
#include <sys/rman.h>
#include <sys/module.h>
#include <sys/bus.h>
#include <sys/endian.h>
#include <sys/cpuset.h>
#include <sys/rwlock.h>
#include <contrib/dev/acpica/include/acpi.h>
#include <contrib/dev/acpica/include/accommon.h>
#include <dev/acpica/acpivar.h>
#include <dev/acpica/acpi_pcibvar.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pcireg.h>
#include <dev/pci/pcib_private.h>
#include <dev/pci/pci_host_generic.h>
#include <machine/cpu.h>
#include <machine/bus.h>
#include <machine/intr.h>
#include "pcib_if.h"
#include "acpi_bus_if.h"
/* Assembling ECAM Configuration Address */
#define PCIE_BUS_SHIFT 20
#define PCIE_SLOT_SHIFT 15
#define PCIE_FUNC_SHIFT 12
#define PCIE_BUS_MASK 0xFF
#define PCIE_SLOT_MASK 0x1F
#define PCIE_FUNC_MASK 0x07
#define PCIE_REG_MASK 0xFFF
#define PCIE_ADDR_OFFSET(bus, slot, func, reg) \
((((bus) & PCIE_BUS_MASK) << PCIE_BUS_SHIFT) | \
(((slot) & PCIE_SLOT_MASK) << PCIE_SLOT_SHIFT) | \
(((func) & PCIE_FUNC_MASK) << PCIE_FUNC_SHIFT) | \
((reg) & PCIE_REG_MASK))
#define PCI_IO_WINDOW_OFFSET 0x1000
#define SPACE_CODE_SHIFT 24
#define SPACE_CODE_MASK 0x3
#define SPACE_CODE_IO_SPACE 0x1
#define PROPS_CELL_SIZE 1
#define PCI_ADDR_CELL_SIZE 2
struct generic_pcie_acpi_softc {
struct generic_pcie_core_softc base;
ACPI_BUFFER ap_prt; /* interrupt routing table */
};
/* Forward prototypes */
static int generic_pcie_acpi_probe(device_t dev);
static ACPI_STATUS pci_host_generic_acpi_parse_resource(ACPI_RESOURCE *, void *);
static int generic_pcie_acpi_read_ivar(device_t, device_t, int, uintptr_t *);
/*
* generic_pcie_acpi_probe - look for root bridge flag
*/
static int
generic_pcie_acpi_probe(device_t dev)
{
ACPI_DEVICE_INFO *devinfo;
ACPI_HANDLE h;
int root;
if (acpi_disabled("pcib") || (h = acpi_get_handle(dev)) == NULL ||
ACPI_FAILURE(AcpiGetObjectInfo(h, &devinfo)))
return (ENXIO);
root = (devinfo->Flags & ACPI_PCI_ROOT_BRIDGE) != 0;
AcpiOsFree(devinfo);
if (!root)
return (ENXIO);
device_set_desc(dev, "Generic PCI host controller");
return (BUS_PROBE_GENERIC);
}
/*
* pci_host_generic_acpi_parse_resource - parse PCI memory, IO and bus spaces
* 'produced' by this bridge
*/
static ACPI_STATUS
pci_host_generic_acpi_parse_resource(ACPI_RESOURCE *res, void *arg)
{
device_t dev = (device_t)arg;
struct generic_pcie_acpi_softc *sc;
struct rman *rm;
rman_res_t min, max, off;
int r;
rm = NULL;
sc = device_get_softc(dev);
r = sc->base.nranges;
switch (res->Type) {
case ACPI_RESOURCE_TYPE_ADDRESS16:
min = res->Data.Address16.Address.Minimum;
max = res->Data.Address16.Address.Maximum;
off = res->Data.Address16.Address.TranslationOffset;
break;
case ACPI_RESOURCE_TYPE_ADDRESS32:
min = res->Data.Address32.Address.Minimum;
max = res->Data.Address32.Address.Maximum;
off = res->Data.Address32.Address.TranslationOffset;
break;
case ACPI_RESOURCE_TYPE_ADDRESS64:
if (res->Data.Address.ResourceType != ACPI_MEMORY_RANGE)
break;
min = res->Data.Address64.Address.Minimum;
max = res->Data.Address64.Address.Maximum;
off = res->Data.Address64.Address.TranslationOffset;
break;
default:
return (AE_OK);
}
/* Save detected ranges */
if (res->Data.Address.ResourceType == ACPI_MEMORY_RANGE ||
res->Data.Address.ResourceType == ACPI_IO_RANGE) {
sc->base.ranges[r].pci_base = min;
sc->base.ranges[r].phys_base = min + off;
sc->base.ranges[r].size = max - min + 1;
if (res->Data.Address.ResourceType == ACPI_MEMORY_RANGE)
sc->base.ranges[r].flags |= FLAG_MEM;
else if (res->Data.Address.ResourceType == ACPI_IO_RANGE)
sc->base.ranges[r].flags |= FLAG_IO;
sc->base.nranges++;
} else if (res->Data.Address.ResourceType == ACPI_BUS_NUMBER_RANGE) {
sc->base.bus_start = min;
sc->base.bus_end = max;
}
return (AE_OK);
}
static int
pci_host_acpi_get_ecam_resource(device_t dev)
{
struct generic_pcie_acpi_softc *sc;
struct acpi_device *ad;
struct resource_list *rl;
ACPI_TABLE_HEADER *hdr;
ACPI_MCFG_ALLOCATION *mcfg_entry, *mcfg_end;
ACPI_HANDLE handle;
ACPI_STATUS status;
rman_res_t base, start, end;
int found, val;
sc = device_get_softc(dev);
handle = acpi_get_handle(dev);
/* Try MCFG first */
status = AcpiGetTable(ACPI_SIG_MCFG, 1, &hdr);
if (ACPI_SUCCESS(status)) {
found = FALSE;
mcfg_end = (ACPI_MCFG_ALLOCATION *)((char *)hdr + hdr->Length);
mcfg_entry = (ACPI_MCFG_ALLOCATION *)((ACPI_TABLE_MCFG *)hdr + 1);
while (mcfg_entry < mcfg_end && !found) {
if (mcfg_entry->PciSegment == sc->base.ecam &&
mcfg_entry->StartBusNumber <= sc->base.bus_start &&
mcfg_entry->EndBusNumber >= sc->base.bus_start)
found = TRUE;
else
mcfg_entry++;
}
if (found) {
if (mcfg_entry->EndBusNumber < sc->base.bus_end) {
device_printf(dev, "bus end mismatch! expected %d found %d.\n",
sc->base.bus_end, (int)mcfg_entry->EndBusNumber);
sc->base.bus_end = mcfg_entry->EndBusNumber;
}
base = mcfg_entry->Address;
} else {
device_printf(dev, "MCFG exists, but does not have bus %d-%d\n",
sc->base.bus_start, sc->base.bus_end);
return (ENXIO);
}
} else {
status = acpi_GetInteger(handle, "_CBA", &val);
if (ACPI_SUCCESS(status))
base = val;
else
return (ENXIO);
}
/* Add the ECAM window as SYS_RES_MEMORY rid 0 */
ad = device_get_ivars(dev);
rl = &ad->ad_rl;
start = base + (sc->base.bus_start << PCIE_BUS_SHIFT);
end = base + ((sc->base.bus_end + 1) << PCIE_BUS_SHIFT) - 1;
resource_list_add(rl, SYS_RES_MEMORY, 0, start, end, end - start + 1);
if (bootverbose)
device_printf(dev, "ECAM for bus %d-%d at mem %jx-%jx\n",
sc->base.bus_start, sc->base.bus_end, start, end);
return (0);
}
static int
pci_host_generic_acpi_attach(device_t dev)
{
struct generic_pcie_acpi_softc *sc;
ACPI_HANDLE handle;
uint64_t phys_base;
uint64_t pci_base;
uint64_t size;
ACPI_STATUS status;
int error;
int tuple;
sc = device_get_softc(dev);
handle = acpi_get_handle(dev);
/* The start bus number for the PCI host bus comes from the _BBN method */
status = acpi_GetInteger(handle, "_BBN", &sc->base.bus_start);
if (ACPI_FAILURE(status)) {
device_printf(dev, "No _BBN, using start bus 0\n");
sc->base.bus_start = 0;
}
sc->base.bus_end = 255;
/* Get PCI Segment (domain) needed for MCFG lookup */
status = acpi_GetInteger(handle, "_SEG", &sc->base.ecam);
if (ACPI_FAILURE(status)) {
device_printf(dev, "No _SEG for PCI Bus, using segment 0\n");
sc->base.ecam = 0;
}
/* Bus decode ranges */
status = AcpiWalkResources(handle, "_CRS",
pci_host_generic_acpi_parse_resource, (void *)dev);
if (ACPI_FAILURE(status))
return (ENXIO);
/* Coherency attribute */
if (ACPI_FAILURE(acpi_GetInteger(handle, "_CCA", &sc->base.coherent)))
sc->base.coherent = 0;
if (bootverbose)
device_printf(dev, "Bus is%s cache-coherent\n",
sc->base.coherent ? "" : " not");
/* add config space resource */
pci_host_acpi_get_ecam_resource(dev);
acpi_pcib_fetch_prt(dev, &sc->ap_prt);
error = pci_host_generic_core_attach(dev);
if (error != 0)
return (error);
for (tuple = 0; tuple < MAX_RANGES_TUPLES; tuple++) {
phys_base = sc->base.ranges[tuple].phys_base;
pci_base = sc->base.ranges[tuple].pci_base;
size = sc->base.ranges[tuple].size;
if (phys_base == 0 || size == 0)
continue; /* empty range element */
if (sc->base.ranges[tuple].flags & FLAG_MEM) {
error = rman_manage_region(&sc->base.mem_rman,
phys_base, phys_base + size - 1);
} else if (sc->base.ranges[tuple].flags & FLAG_IO) {
error = rman_manage_region(&sc->base.io_rman,
pci_base + PCI_IO_WINDOW_OFFSET,
pci_base + PCI_IO_WINDOW_OFFSET + size - 1);
} else
continue;
if (error) {
device_printf(dev, "rman_manage_region() failed."
"error = %d\n", error);
rman_fini(&sc->base.mem_rman);
return (error);
}
}
device_add_child(dev, "pci", -1);
return (bus_generic_attach(dev));
}
static int
generic_pcie_acpi_read_ivar(device_t dev, device_t child, int index,
uintptr_t *result)
{
struct generic_pcie_acpi_softc *sc;
sc = device_get_softc(dev);
if (index == PCIB_IVAR_BUS) {
*result = sc->base.bus_start;
return (0);
}
if (index == PCIB_IVAR_DOMAIN) {
*result = sc->base.ecam;
return (0);
}
if (bootverbose)
device_printf(dev, "ERROR: Unknown index %d.\n", index);
return (ENOENT);
}
static int
generic_pcie_acpi_route_interrupt(device_t bus, device_t dev, int pin)
{
struct generic_pcie_acpi_softc *sc;
sc = device_get_softc(bus);
return (acpi_pcib_route_interrupt(bus, dev, pin, &sc->ap_prt));
}
+static u_int
+generic_pcie_get_xref(device_t pci, device_t child)
+{
+ struct generic_pcie_acpi_softc *sc;
+ uintptr_t rid;
+ u_int xref, devid;
+ int err;
+
+ sc = device_get_softc(pci);
+ err = pcib_get_id(pci, child, PCI_ID_RID, &rid);
+ if (err != 0)
+ return (ACPI_MSI_XREF);
+ err = acpi_iort_map_pci_msi(sc->base.ecam, rid, &xref, &devid);
+ if (err != 0)
+ return (ACPI_MSI_XREF);
+ return (xref);
+}
+
+static u_int
+generic_pcie_map_id(device_t pci, device_t child, uintptr_t *id)
+{
+ struct generic_pcie_acpi_softc *sc;
+ uintptr_t rid;
+ u_int xref, devid;
+ int err;
+
+ sc = device_get_softc(pci);
+ err = pcib_get_id(pci, child, PCI_ID_RID, &rid);
+ if (err != 0)
+ return (err);
+ err = acpi_iort_map_pci_msi(sc->base.ecam, rid, &xref, &devid);
+ if (err == 0)
+ *id = devid;
+ else
+ *id = rid; /* RID not in IORT, likely FW bug, ignore */
+ return (0);
+}
+
static int
generic_pcie_acpi_alloc_msi(device_t pci, device_t child, int count,
int maxcount, int *irqs)
{
#if defined(INTRNG)
- return (intr_alloc_msi(pci, child, ACPI_MSI_XREF, count, maxcount,
- irqs));
+ return (intr_alloc_msi(pci, child, generic_pcie_get_xref(pci, child),
+ count, maxcount, irqs));
#else
return (ENXIO);
#endif
}
static int
generic_pcie_acpi_release_msi(device_t pci, device_t child, int count,
int *irqs)
{
#if defined(INTRNG)
- return (intr_release_msi(pci, child, ACPI_MSI_XREF, count, irqs));
+ return (intr_release_msi(pci, child, generic_pcie_get_xref(pci, child),
+ count, irqs));
#else
return (ENXIO);
#endif
}
static int
generic_pcie_acpi_map_msi(device_t pci, device_t child, int irq, uint64_t *addr,
uint32_t *data)
{
#if defined(INTRNG)
- return (intr_map_msi(pci, child, ACPI_MSI_XREF, irq, addr, data));
+ return (intr_map_msi(pci, child, generic_pcie_get_xref(pci, child), irq,
+ addr, data));
#else
return (ENXIO);
#endif
}
static int
generic_pcie_acpi_alloc_msix(device_t pci, device_t child, int *irq)
{
#if defined(INTRNG)
- return (intr_alloc_msix(pci, child, ACPI_MSI_XREF, irq));
+ return (intr_alloc_msix(pci, child, generic_pcie_get_xref(pci, child),
+ irq));
#else
return (ENXIO);
#endif
}
static int
generic_pcie_acpi_release_msix(device_t pci, device_t child, int irq)
{
#if defined(INTRNG)
- return (intr_release_msix(pci, child, ACPI_MSI_XREF, irq));
+ return (intr_release_msix(pci, child, generic_pcie_get_xref(pci, child),
+ irq));
#else
return (ENXIO);
#endif
}
static int
generic_pcie_acpi_get_id(device_t pci, device_t child, enum pci_id_type type,
uintptr_t *id)
{
- /*
- * Use the PCI RID to find the MSI ID for now, we support only 1:1
- * mapping
- *
- * On aarch64, more complex mapping would come from IORT table
- */
if (type == PCI_ID_MSI)
- return (pcib_get_id(pci, child, PCI_ID_RID, id));
+ return (generic_pcie_map_id(pci, child, id));
else
return (pcib_get_id(pci, child, type, id));
}
static device_method_t generic_pcie_acpi_methods[] = {
DEVMETHOD(device_probe, generic_pcie_acpi_probe),
DEVMETHOD(device_attach, pci_host_generic_acpi_attach),
DEVMETHOD(bus_read_ivar, generic_pcie_acpi_read_ivar),
/* pcib interface */
DEVMETHOD(pcib_route_interrupt, generic_pcie_acpi_route_interrupt),
DEVMETHOD(pcib_alloc_msi, generic_pcie_acpi_alloc_msi),
DEVMETHOD(pcib_release_msi, generic_pcie_acpi_release_msi),
DEVMETHOD(pcib_alloc_msix, generic_pcie_acpi_alloc_msix),
DEVMETHOD(pcib_release_msix, generic_pcie_acpi_release_msix),
DEVMETHOD(pcib_map_msi, generic_pcie_acpi_map_msi),
DEVMETHOD(pcib_get_id, generic_pcie_acpi_get_id),
DEVMETHOD_END
};
DEFINE_CLASS_1(pcib, generic_pcie_acpi_driver, generic_pcie_acpi_methods,
sizeof(struct generic_pcie_acpi_softc), generic_pcie_core_driver);
static devclass_t generic_pcie_acpi_devclass;
DRIVER_MODULE(pcib, acpi, generic_pcie_acpi_driver, generic_pcie_acpi_devclass,
0, 0);
Index: projects/clang800-import/sys/dev/pms/freebsd/driver/common/lxutil.c
===================================================================
--- projects/clang800-import/sys/dev/pms/freebsd/driver/common/lxutil.c (revision 343955)
+++ projects/clang800-import/sys/dev/pms/freebsd/driver/common/lxutil.c (revision 343956)
@@ -1,786 +1,786 @@
/******************************************************************************
*Copyright (c) 2014 PMC-Sierra, Inc. All rights reserved.
*
*Redistribution and use in source and binary forms, with or without modification, are permitted provided
*that the following conditions are met:
*1. Redistributions of source code must retain the above copyright notice, this list of conditions and the
*following disclaimer.
*2. Redistributions in binary form must reproduce the above copyright notice,
*this list of conditions and the following disclaimer in the documentation and/or other materials provided
*with the distribution.
*
*THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED
*WARRANTIES,INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
*FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
*FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
*NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
*BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
*LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
*SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE
******************************************************************************/
/* $FreeBSD$ */
/******************************************************************************
This program is part of PMC-Sierra initiator/target device driver.
The functions here are commonly used by different types of drivers that support
PMC-Sierra storage network initiator hardware.
******************************************************************************/
MALLOC_DEFINE( M_PMC_MMAL, "agtiapi_MemAlloc malloc",
"allocated from agtiapi_MemAlloc as simple malloc case" );
/*****************************************************************************
agtiapi_DelayMSec()
Purpose:
Busy-wait for the given number of milliseconds
Parameters:
U32 MiliSeconds (IN) Number of milliseconds to delay
Return:
Note:
*****************************************************************************/
STATIC void agtiapi_DelayMSec( U32 MiliSeconds )
{
DELAY(MiliSeconds * 1000); // DELAY takes in usecs
}
/******************************************************************************
agtiapi_typhAlloc()
Purpose:
Preallocation handling
Allocate DMA memory which will be divided among proper pointers in
agtiapi_MemAlloc() later
Parameters:
ag_card_info_t *thisCardInst (IN)
Return:
AGTIAPI_SUCCESS - success
AGTIAPI_FAIL - fail
******************************************************************************/
STATIC agBOOLEAN agtiapi_typhAlloc( ag_card_info_t *thisCardInst )
{
struct agtiapi_softc *pmsc = thisCardInst->pCard;
int wait = 0;
- if( bus_dma_tag_create( agNULL, // parent
+ if( bus_dma_tag_create( bus_get_dma_tag(pmsc->my_dev), // parent
32, // alignment
0, // boundary
BUS_SPACE_MAXADDR, // lowaddr
BUS_SPACE_MAXADDR, // highaddr
NULL, // filter
NULL, // filterarg
pmsc->typhn, // maxsize (size)
1, // number of segments
pmsc->typhn, // maxsegsize
0, // flags
NULL, // lockfunc
NULL, // lockarg
&pmsc->typh_dmat ) ) {
printf( "agtiapi_typhAlloc: Can't create no-cache mem tag\n" );
return AGTIAPI_FAIL;
}
if( bus_dmamem_alloc( pmsc->typh_dmat,
&pmsc->typh_mem,
BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_NOCACHE,
&pmsc->typh_mapp ) ) {
printf( "agtiapi_typhAlloc: Cannot allocate cache mem %d\n",
pmsc->typhn );
return AGTIAPI_FAIL;
}
if ( bus_dmamap_load( pmsc->typh_dmat,
pmsc->typh_mapp,
pmsc->typh_mem,
pmsc->typhn,
agtiapi_MemoryCB, // try reuse of CB for same goal
&pmsc->typh_busaddr,
0 ) || !pmsc->typh_busaddr ) {
for( ; wait < 20; wait++ ) {
if( pmsc->typh_busaddr ) break;
DELAY( 50000 );
}
if( ! pmsc->typh_busaddr ) {
printf( "agtiapi_typhAlloc: cache mem won't load %d\n",
pmsc->typhn );
return AGTIAPI_FAIL;
}
}
pmsc->typhIdx = 0;
pmsc->tyPhsIx = 0;
return AGTIAPI_SUCCESS;
}
/******************************************************************************
agtiapi_InitResource()
Purpose:
Mapping PCI memory space
Allocate and initialize per card based resource
Parameters:
ag_card_info_t *pCardInfo (IN)
Return:
AGTIAPI_SUCCESS - success
AGTIAPI_FAIL - fail
Note:
******************************************************************************/
STATIC agBOOLEAN agtiapi_InitResource( ag_card_info_t *thisCardInst )
{
struct agtiapi_softc *pmsc = thisCardInst->pCard;
device_t devx = thisCardInst->pPCIDev;
//AGTIAPI_PRINTK( "agtiapi_InitResource: begin; pointer values %p / %p \n",
// devx, thisCardInst );
// no IO mapped card implementation, we'll implement memory mapping
if( agtiapi_typhAlloc( thisCardInst ) == AGTIAPI_FAIL ) {
printf( "agtiapi_InitResource: failed call to agtiapi_typhAlloc \n" );
return AGTIAPI_FAIL;
}
AGTIAPI_PRINTK( "agtiapi_InitResource: dma alloc MemSpan %p -- %p\n",
(void*) pmsc->typh_busaddr,
(void*) ( (U32_64)pmsc->typh_busaddr + pmsc->typhn ) );
// logical BARs for SPC:
// bar 0 and 1 - logical BAR0
// bar 2 and 3 - logical BAR1
// bar4 - logical BAR2
// bar5 - logical BAR3
// Skipping the assignments for bar 1 and bar 3 (making bar 0, 2 64-bit):
U32 bar;
U32 lBar = 0; // logicalBar
for (bar = 0; bar < PCI_NUMBER_BARS; bar++) {
if ((bar==1) || (bar==3))
continue;
thisCardInst->pciMemBaseRIDSpc[lBar] = PCIR_BAR(bar);
thisCardInst->pciMemBaseRscSpc[lBar] =
bus_alloc_resource_any( devx,
SYS_RES_MEMORY,
&(thisCardInst->pciMemBaseRIDSpc[lBar]),
RF_ACTIVE );
AGTIAPI_PRINTK( "agtiapi_InitResource: bus_alloc_resource_any rtn %p \n",
thisCardInst->pciMemBaseRscSpc[lBar] );
if ( thisCardInst->pciMemBaseRscSpc[lBar] != NULL ) {
thisCardInst->pciMemVirtAddrSpc[lBar] =
(caddr_t)rman_get_virtual(
thisCardInst->pciMemBaseRscSpc[lBar] );
thisCardInst->pciMemBaseSpc[lBar] =
bus_get_resource_start( devx, SYS_RES_MEMORY,
thisCardInst->pciMemBaseRIDSpc[lBar]);
thisCardInst->pciMemSizeSpc[lBar] =
bus_get_resource_count( devx, SYS_RES_MEMORY,
thisCardInst->pciMemBaseRIDSpc[lBar] );
AGTIAPI_PRINTK( "agtiapi_InitResource: PCI: bar %d, lBar %d "
"VirtAddr=%lx, len=%d\n", bar, lBar,
(long unsigned int)thisCardInst->pciMemVirtAddrSpc[lBar],
thisCardInst->pciMemSizeSpc[lBar] );
}
else {
thisCardInst->pciMemVirtAddrSpc[lBar] = 0;
thisCardInst->pciMemBaseSpc[lBar] = 0;
thisCardInst->pciMemSizeSpc[lBar] = 0;
}
lBar++;
}
thisCardInst->pciMemVirtAddr = thisCardInst->pciMemVirtAddrSpc[0];
thisCardInst->pciMemSize = thisCardInst->pciMemSizeSpc[0];
thisCardInst->pciMemBase = thisCardInst->pciMemBaseSpc[0];
// Allocate all TI data structure required resources.
// tiLoLevelResource
U32 numVal;
ag_resource_info_t *pRscInfo;
pRscInfo = &thisCardInst->tiRscInfo;
pRscInfo->tiLoLevelResource.loLevelOption.pciFunctionNumber =
pci_get_function( devx );
struct timeval tv;
tv.tv_sec = 1;
tv.tv_usec = 0;
int ticksPerSec;
ticksPerSec = tvtohz( &tv );
int uSecPerTick = 1000000 / ticksPerSec; // microseconds per clock tick
if (pRscInfo->tiLoLevelResource.loLevelMem.count != 0) {
//AGTIAPI_INIT("agtiapi_InitResource: loLevelMem count = %d\n",
// pRscInfo->tiLoLevelResource.loLevelMem.count);
// adjust tick value to meet Linux requirement
pRscInfo->tiLoLevelResource.loLevelOption.usecsPerTick = uSecPerTick;
AGTIAPI_PRINTK( "agtiapi_InitResource: "
"pRscInfo->tiLoLevelResource.loLevelOption.usecsPerTick"
" 0x%x\n",
pRscInfo->tiLoLevelResource.loLevelOption.usecsPerTick );
for( numVal = 0; numVal < pRscInfo->tiLoLevelResource.loLevelMem.count;
numVal++ ) {
if( pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength ==
0 ) {
AGTIAPI_PRINTK("agtiapi_InitResource: skip ZERO %d\n", numVal);
continue;
}
// check for 64 bit alignment
if ( pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment <
AGTIAPI_64BIT_ALIGN ) {
AGTIAPI_PRINTK("agtiapi_InitResource: set ALIGN %d\n", numVal);
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment =
AGTIAPI_64BIT_ALIGN;
}
if( ((pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type
& (BIT(0) | BIT(1))) == TI_DMA_MEM) ||
((pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type
& (BIT(0) | BIT(1))) == TI_CACHED_DMA_MEM)) {
if ( thisCardInst->dmaIndex >=
sizeof(thisCardInst->tiDmaMem) /
sizeof(thisCardInst->tiDmaMem[0]) ) {
AGTIAPI_PRINTK( "Invalid dmaIndex %d ERROR\n",
thisCardInst->dmaIndex );
return AGTIAPI_FAIL;
}
thisCardInst->tiDmaMem[thisCardInst->dmaIndex].type =
#ifdef CACHED_DMA
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type
& (BIT(0) | BIT(1));
#else
TI_DMA_MEM;
#endif
if( agtiapi_MemAlloc( thisCardInst,
&thisCardInst->tiDmaMem[thisCardInst->dmaIndex].dmaVirtAddr,
&thisCardInst->tiDmaMem[thisCardInst->dmaIndex].dmaPhysAddr,
&pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].virtPtr,
&pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].
physAddrUpper,
&pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].
physAddrLower,
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength,
thisCardInst->tiDmaMem[thisCardInst->dmaIndex].type,
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment)
!= AGTIAPI_SUCCESS ) {
return AGTIAPI_FAIL;
}
thisCardInst->tiDmaMem[thisCardInst->dmaIndex].memSize =
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength;
//AGTIAPI_INIT("agtiapi_InitResource: LoMem %d dmaIndex=%d DMA virt"
// " %p, phys 0x%x, length %d align %d\n",
// numVal, pCardInfo->dmaIndex,
// pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].virtPtr,
// pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].physAddrLower,
// pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength,
// pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment);
thisCardInst->dmaIndex++;
}
else if ( (pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type &
(BIT(0) | BIT(1))) == TI_CACHED_MEM) {
if (thisCardInst->cacheIndex >=
sizeof(thisCardInst->tiCachedMem) /
sizeof(thisCardInst->tiCachedMem[0])) {
AGTIAPI_PRINTK( "Invalid cacheIndex %d ERROR\n",
thisCardInst->cacheIndex );
return AGTIAPI_FAIL;
}
if ( agtiapi_MemAlloc( thisCardInst,
&thisCardInst->tiCachedMem[thisCardInst->cacheIndex],
(vm_paddr_t *)agNULL,
&pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].virtPtr,
(U32 *)agNULL,
(U32 *)agNULL,
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength,
TI_CACHED_MEM,
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment)
!= AGTIAPI_SUCCESS ) {
return AGTIAPI_FAIL;
}
//AGTIAPI_INIT("agtiapi_InitResource: LoMem %d cacheIndex=%d CACHED "
// "vaddr %p / %p, length %d align %d\n",
// numVal, pCardInfo->cacheIndex,
// pCardInfo->tiCachedMem[pCardInfo->cacheIndex],
// pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].virtPtr,
// pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength,
// pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment);
thisCardInst->cacheIndex++;
}
else if ( ((pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type
& (BIT(0) | BIT(1))) == TI_DMA_MEM_CHIP)) {
// not expecting this case, print warning that should get attention
printf( "RED ALARM: we need a BAR for TI_DMA_MEM_CHIP, ignoring!" );
}
else {
printf( "agtiapi_InitResource: Unknown required memory type %d "
"ERROR!\n",
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type);
return AGTIAPI_FAIL;
}
}
}
// end: TI data structure resources ...
// begin: tiInitiatorResource
if ( pmsc->flags & AGTIAPI_INITIATOR ) {
if ( pRscInfo->tiInitiatorResource.initiatorMem.count != 0 ) {
//AGTIAPI_INIT("agtiapi_InitResource: initiatorMem count = %d\n",
// pRscInfo->tiInitiatorResource.initiatorMem.count);
numVal =
(U32)( pRscInfo->tiInitiatorResource.initiatorOption.usecsPerTick
/ uSecPerTick );
if( pRscInfo->tiInitiatorResource.initiatorOption.usecsPerTick
% uSecPerTick > 0 )
pRscInfo->tiInitiatorResource.initiatorOption.usecsPerTick =
(numVal + 1) * uSecPerTick;
else
pRscInfo->tiInitiatorResource.initiatorOption.usecsPerTick =
numVal * uSecPerTick;
for ( numVal = 0;
numVal < pRscInfo->tiInitiatorResource.initiatorMem.count;
numVal++ ) {
// check for 64 bit alignment
if( pRscInfo->tiInitiatorResource.initiatorMem.tdCachedMem[numVal].
alignment < AGTIAPI_64BIT_ALIGN ) {
pRscInfo->tiInitiatorResource.initiatorMem.tdCachedMem[numVal].
alignment = AGTIAPI_64BIT_ALIGN;
}
if( thisCardInst->cacheIndex >=
sizeof( thisCardInst->tiCachedMem) /
sizeof( thisCardInst->tiCachedMem[0])) {
AGTIAPI_PRINTK( "Invalid cacheIndex %d ERROR\n",
thisCardInst->cacheIndex );
return AGTIAPI_FAIL;
}
// initiator memory is cached, no check is needed
if( agtiapi_MemAlloc( thisCardInst,
(void *)&thisCardInst->tiCachedMem[thisCardInst->cacheIndex],
(vm_paddr_t *)agNULL,
&pRscInfo->tiInitiatorResource.initiatorMem.
tdCachedMem[numVal].virtPtr,
(U32 *)agNULL,
(U32 *)agNULL,
pRscInfo->tiInitiatorResource.initiatorMem.tdCachedMem[numVal].
totalLength,
TI_CACHED_MEM,
pRscInfo->tiInitiatorResource.initiatorMem.tdCachedMem[numVal].
alignment)
!= AGTIAPI_SUCCESS) {
return AGTIAPI_FAIL;
}
// AGTIAPI_INIT("agtiapi_InitResource: IniMem %d cacheIndex=%d CACHED "
// "vaddr %p / %p, length %d align 0x%x\n",
// numVal,
// pCardInfo->cacheIndex,
// pCardInfo->tiCachedMem[pCardInfo->cacheIndex],
// pRscInfo->tiInitiatorResource.initiatorMem.tdCachedMem[numVal].
// virtPtr,
//pRscInfo->tiInitiatorResource.initiatorMem.tdCachedMem[numVal].
// totalLength,
// pRscInfo->tiInitiatorResource.initiatorMem.tdCachedMem[numVal].
// alignment);
thisCardInst->cacheIndex++;
}
}
}
// end: tiInitiatorResource
// begin: tiTdSharedMem
if (pRscInfo->tiSharedMem.tdSharedCachedMem1.totalLength != 0) {
// check for 64 bit alignment
if( pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment <
AGTIAPI_64BIT_ALIGN ) {
pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment = AGTIAPI_64BIT_ALIGN;
}
if( (pRscInfo->tiSharedMem.tdSharedCachedMem1.type & (BIT(0) | BIT(1)))
== TI_DMA_MEM ) {
if( thisCardInst->dmaIndex >=
sizeof(thisCardInst->tiDmaMem) / sizeof(thisCardInst->tiDmaMem[0]) ) {
AGTIAPI_PRINTK( "Invalid dmaIndex %d ERROR\n", thisCardInst->dmaIndex);
return AGTIAPI_FAIL;
}
if( agtiapi_MemAlloc( thisCardInst, (void *)&thisCardInst->
tiDmaMem[thisCardInst->dmaIndex].dmaVirtAddr,
&thisCardInst->tiDmaMem[thisCardInst->dmaIndex].
dmaPhysAddr,
&pRscInfo->tiSharedMem.tdSharedCachedMem1.virtPtr,
&pRscInfo->tiSharedMem.tdSharedCachedMem1.
physAddrUpper,
&pRscInfo->tiSharedMem.tdSharedCachedMem1.
physAddrLower,
pRscInfo->tiSharedMem.tdSharedCachedMem1.
totalLength,
TI_DMA_MEM,
pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment)
!= AGTIAPI_SUCCESS )
return AGTIAPI_FAIL;
thisCardInst->tiDmaMem[thisCardInst->dmaIndex].memSize =
pRscInfo->tiSharedMem.tdSharedCachedMem1.totalLength +
pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment;
// printf( "agtiapi_InitResource: SharedMem DmaIndex=%d DMA "
// "virt %p / %p, phys 0x%x, align %d\n",
// thisCardInst->dmaIndex,
// thisCardInst->tiDmaMem[thisCardInst->dmaIndex].dmaVirtAddr,
// pRscInfo->tiSharedMem.tdSharedCachedMem1.virtPtr,
// pRscInfo->tiSharedMem.tdSharedCachedMem1.physAddrLower,
// pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment);
thisCardInst->dmaIndex++;
}
else if( (pRscInfo->tiSharedMem.tdSharedCachedMem1.type &
(BIT(0) | BIT(1)))
== TI_CACHED_MEM ) {
if( thisCardInst->cacheIndex >=
sizeof(thisCardInst->tiCachedMem) /
sizeof(thisCardInst->tiCachedMem[0]) ) {
AGTIAPI_PRINTK( "Invalid cacheIndex %d ERROR\n", thisCardInst->cacheIndex);
return AGTIAPI_FAIL;
}
if( agtiapi_MemAlloc( thisCardInst, (void *)&thisCardInst->
tiCachedMem[thisCardInst->cacheIndex],
(vm_paddr_t *)agNULL,
&pRscInfo->tiSharedMem.tdSharedCachedMem1.virtPtr,
(U32 *)agNULL,
(U32 *)agNULL,
pRscInfo->
tiSharedMem.tdSharedCachedMem1.totalLength,
TI_CACHED_MEM,
pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment)
!= AGTIAPI_SUCCESS )
return AGTIAPI_FAIL;
// printf( "agtiapi_InitResource: SharedMem cacheIndex=%d CACHED "
// "vaddr %p / %p, length %d align 0x%x\n",
// thisCardInst->cacheIndex,
// thisCardInst->tiCachedMem[thisCardInst->cacheIndex],
// pRscInfo->tiSharedMem.tdSharedCachedMem1.virtPtr,
// pRscInfo->tiSharedMem.tdSharedCachedMem1.totalLength,
// pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment);
AGTIAPI_PRINTK( "agtiapi_InitResource: SharedMem cacheIndex=%d CACHED "
"vaddr %p / %p, length %d align 0x%x\n",
thisCardInst->cacheIndex,
thisCardInst->tiCachedMem[thisCardInst->cacheIndex],
pRscInfo->tiSharedMem.tdSharedCachedMem1.virtPtr,
pRscInfo->tiSharedMem.tdSharedCachedMem1.totalLength,
pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment );
thisCardInst->cacheIndex++;
}
else {
AGTIAPI_PRINTK( "agtiapi_InitResource: "
"Unknown required memory type ERROR!\n" );
return AGTIAPI_FAIL;
}
}
// end: tiTdSharedMem
DELAY( 200000 ); // or use AGTIAPI_INIT_MDELAY(200);
return AGTIAPI_SUCCESS;
} // agtiapi_InitResource() ends here
/******************************************************************************
agtiapi_ScopeDMARes()
Purpose:
Determine the amount of DMA (non-cached) memory which will be
required for a card (and necessarily allocated in agtiapi_InitResource())
Parameters:
ag_card_info_t *thisCardInst (IN)
Return:
size of DMA memory which a call to agtiapi_InitResource() will consume
Note:
this function mirrors the flow of agtiapi_InitResource()
results are stored in agtiapi_softc fields
******************************************************************************/
STATIC int agtiapi_ScopeDMARes( ag_card_info_t *thisCardInst )
{
struct agtiapi_softc *pmsc = thisCardInst->pCard;
U32 lAllMem = 0; // total memory count; typhn
U32 lTmpAlign, lTmpType, lTmpLen;
// tiLoLevelResource
U32 numVal;
ag_resource_info_t *pRscInfo;
pRscInfo = &thisCardInst->tiRscInfo;
if (pRscInfo->tiLoLevelResource.loLevelMem.count != 0) {
for( numVal = 0; numVal < pRscInfo->tiLoLevelResource.loLevelMem.count;
numVal++ ) {
if( pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength ==
0 ) {
printf( "agtiapi_ScopeDMARes: skip ZERO %d\n", numVal );
continue;
}
// check for 64 bit alignment
lTmpAlign = pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment;
if( lTmpAlign < AGTIAPI_64BIT_ALIGN ) {
AGTIAPI_PRINTK("agtiapi_ScopeDMARes: set ALIGN %d\n", numVal);
//pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].alignment =
lTmpAlign = AGTIAPI_64BIT_ALIGN;
}
if( ((pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type
& (BIT(0) | BIT(1))) == TI_DMA_MEM) ||
((pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type
& (BIT(0) | BIT(1))) == TI_CACHED_DMA_MEM)) {
//thisCardInst->tiDmaMem[thisCardInst->dmaIndex].type =
lTmpType =
#ifdef CACHED_DMA
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type
& (BIT(0) | BIT(1));
#else
TI_DMA_MEM;
#endif
if( lTmpType == TI_DMA_MEM ) {
lTmpLen =
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].totalLength;
lAllMem += lTmpLen + lTmpAlign;
}
//printf( "agtiapi_ScopeDMARes: call 1 0x%x\n", lAllMem );
}
else if ( ( pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type &
(BIT(0) | BIT(1)) ) == TI_CACHED_MEM ) {
// these are not the droids we're looking for
if( thisCardInst->cacheIndex >=
sizeof(thisCardInst->tiCachedMem) /
sizeof(thisCardInst->tiCachedMem[0]) ) {
AGTIAPI_PRINTK( "agtiapi_ScopeDMARes: Invalid cacheIndex %d ERROR\n",
thisCardInst->cacheIndex );
return lAllMem;
}
}
else {
printf( "agtiapi_ScopeDMARes: Unknown required memory type %d "
"ERROR!\n",
pRscInfo->tiLoLevelResource.loLevelMem.mem[numVal].type );
return lAllMem;
}
}
}
// end: TI data structure resources ...
// nothing for tiInitiatorResource
// begin: tiTdSharedMem
if (pRscInfo->tiSharedMem.tdSharedCachedMem1.totalLength != 0) {
// check for 64 bit alignment
lTmpAlign = pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment;
if( lTmpAlign < AGTIAPI_64BIT_ALIGN ) {
//pRscInfo->tiSharedMem.tdSharedCachedMem1.alignment=AGTIAPI_64BIT_ALIGN;
lTmpAlign = AGTIAPI_64BIT_ALIGN;
}
if( (pRscInfo->tiSharedMem.tdSharedCachedMem1.type & (BIT(0) | BIT(1)))
== TI_DMA_MEM ) {
lTmpLen = pRscInfo->tiSharedMem.tdSharedCachedMem1.totalLength;
lAllMem += lTmpLen + lTmpAlign;
// printf( "agtiapi_ScopeDMARes: call 4D 0x%x\n", lAllMem );
}
else if( (pRscInfo->tiSharedMem.tdSharedCachedMem1.type &
(BIT(0) | BIT(1)))
!= TI_CACHED_MEM ) {
printf( "agtiapi_ScopeDMARes: Unknown required memory type ERROR!\n" );
}
}
// end: tiTdSharedMem
pmsc->typhn = lAllMem;
return lAllMem;
} // agtiapi_ScopeDMARes() ends here
STATIC void agtiapi_ReleasePCIMem( ag_card_info_t *pCardInfo ) {
U32 bar = 0;
int tmpRid = 0;
struct resource *tmpRsc = NULL;
device_t dev;
dev = pCardInfo->pPCIDev;
for (bar=0; bar < PCI_NUMBER_BARS; bar++) { // clean up PCI resource
tmpRid = pCardInfo->pciMemBaseRIDSpc[bar];
tmpRsc = pCardInfo->pciMemBaseRscSpc[bar];
if (tmpRsc != NULL) { // Release PCI resources
bus_release_resource( dev, SYS_RES_MEMORY, tmpRid, tmpRsc );
}
}
return;
}
/******************************************************************************
agtiapi_MemAlloc()
Purpose:
Handle various memory allocation requests.
Parameters:
ag_card_info_t *pCardInfo (IN) Pointer to card info structure
void **VirtAlloc (OUT) Allocated memory virtual address
dma_addr_t *pDmaAddr (OUT) Allocated dma memory physical address
void **VirtAddr (OUT) Aligned memory virtual address
U32 *pPhysAddrUp (OUT) Allocated memory physical upper 32 bits
U32 *pPhysAddrLow (OUT) Allocated memory physical lower 32 bits
U32 MemSize (IN) Allocated memory size
U32 Type (IN) Type of memory required
U32 Align (IN) Required memory alignment
Return:
AGTIAPI_SUCCESS - success
AGTIAPI_FAIL - fail
******************************************************************************/
STATIC agBOOLEAN agtiapi_MemAlloc( ag_card_info_t *thisCardInst,
void **VirtAlloc,
vm_paddr_t *pDmaAddr,
void **VirtAddr,
U32 *pPhysAddrUp,
U32 *pPhysAddrLow,
U32 MemSize,
U32 Type,
U32 Align )
{
U32_64 alignOffset = 0;
if( Align )
alignOffset = Align - 1;
// printf( "agtiapi_MemAlloc: debug find mem TYPE, %d vs. CACHE %d, DMA %d \n",
// ( Type & ( BIT(0) | BIT(1) ) ), TI_CACHED_MEM, TI_DMA_MEM );
if ((Type & (BIT(0) | BIT(1))) == TI_CACHED_MEM) {
*VirtAlloc = malloc( MemSize + Align, M_PMC_MMAL, M_ZERO | M_NOWAIT );
*VirtAddr = (void *)(((U32_64)*VirtAlloc + alignOffset) & ~alignOffset);
}
else {
struct agtiapi_softc *pmsc = thisCardInst->pCard; // get card reference
U32 residAlign = 0;
// find virt index value
*VirtAlloc = (void*)( (U64)pmsc->typh_mem + pmsc->typhIdx );
*VirtAddr = (void *)( ( (U32_64)*VirtAlloc + alignOffset) & ~alignOffset );
if( *VirtAddr != *VirtAlloc )
residAlign = (U64)*VirtAddr - (U64)*VirtAlloc; // find alignment needed
pmsc->typhIdx += residAlign + MemSize; // update index
residAlign = 0; // reset variable for reuse
// find phys index val
// NB: this overwrites the local pDmaAddr parameter rather than storing
// through it; callers receive the physical address via pPhysAddrUp/Low
pDmaAddr = (vm_paddr_t*)( (U64)pmsc->typh_busaddr + pmsc->tyPhsIx );
vm_paddr_t *lPhysAligned =
(vm_paddr_t*)( ( (U64)pDmaAddr + alignOffset ) & ~alignOffset );
if( lPhysAligned != pDmaAddr )
residAlign = (U64)lPhysAligned - (U64)pDmaAddr; // find alignment needed
pmsc->tyPhsIx += residAlign + MemSize; // update index
*pPhysAddrUp = HIGH_32_BITS( (U64)lPhysAligned );
*pPhysAddrLow = LOW_32_BITS( (U64)lPhysAligned );
//printf( "agtiapi_MemAlloc: physIx 0x%x size 0x%x resid:0x%x "
// "addr:0x%p addrAligned:0x%p Align:0x%x\n",
// pmsc->tyPhsIx, MemSize, residAlign, pDmaAddr, lPhysAligned,
// Align );
}
if ( !*VirtAlloc ) {
AGTIAPI_PRINTK( "agtiapi_MemAlloc memory allocation ERROR x%x\n",
Type & (U32)(BIT(0) | BIT(1)));
return AGTIAPI_FAIL;
}
return AGTIAPI_SUCCESS;
}
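The cached branch above relies on the classic align-up idiom: add `Align - 1`, then mask with `~(Align - 1)`. This only works for power-of-two alignments. A minimal standalone sketch of the same computation (the helper name is hypothetical, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Round addr up to the next multiple of align (align must be a power of
 * two), mirroring the (ptr + alignOffset) & ~alignOffset idiom used in
 * agtiapi_MemAlloc().  align == 0 means "no alignment required". */
static uintptr_t
align_up(uintptr_t addr, uintptr_t align)
{
	uintptr_t off = align ? align - 1 : 0;

	return ((addr + off) & ~off);
}
```

Note this is also why the allocation size is padded by `Align` (and why agtiapi_ScopeDMARes() budgets `lTmpLen + lTmpAlign` per region): rounding the start address up can consume up to `Align - 1` bytes of the block.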
/******************************************************************************
agtiapi_MemFree()
Purpose:
Free agtiapi_MemAlloc() allocated memory
Parameters:
ag_card_info_t *pCardInfo (IN) Pointer to card info structure
Return: none
******************************************************************************/
STATIC void agtiapi_MemFree( ag_card_info_t *pCardInfo )
{
U32 idx;
// release memory allocated by agtiapi_MemAlloc(); cached case
for( idx = 0; idx < pCardInfo->cacheIndex; idx++ ) {
if( pCardInfo->tiCachedMem[idx] ) {
free( pCardInfo->tiCachedMem[idx], M_PMC_MMAL );
AGTIAPI_PRINTK( "agtiapi_MemFree: TI_CACHED_MEM Mem[%d] %p\n",
idx, pCardInfo->tiCachedMem[idx] );
}
}
// release memory allocated by agtiapi_typhAlloc(); used in agtiapi_MemAlloc
struct agtiapi_softc *pmsc = pCardInfo->pCard; // get card reference
if( pmsc->typh_busaddr != 0 ) {
bus_dmamap_unload( pmsc->typh_dmat, pmsc->typh_mapp );
}
if( pmsc->typh_mem != NULL ) {
bus_dmamem_free( pmsc->typh_dmat, pmsc->typh_mem, pmsc->typh_mapp );
}
if( pmsc->typh_dmat != NULL ) {
bus_dma_tag_destroy( pmsc->typh_dmat );
}
//reference values:
// pCardInfo->dmaIndex
// pCardInfo->tiDmaMem[idx].dmaVirtAddr
// pCardInfo->tiDmaMem[idx].memSize
// pCardInfo->tiDmaMem[idx].type == TI_CACHED_DMA_MEM
// pCardInfo->tiDmaMem[idx].type == TI_DMA_MEM
/* This code is redundant. Commented out for now as a placeholder.
The actual freeing takes place in agtiapi_ReleaseHBA() via calls on osti_dmat.
// release possible lower layer dynamic memory
for( idx = 0; idx < AGTIAPI_DYNAMIC_MAX; idx++ ) {
if( pCardInfo->dynamicMem[idx].dmaVirtAddr != NULL ) {
printf( "agtiapi_MemFree: dynMem[%d] virtAddr"
" %p / %lx size: %d\n",
idx, pCardInfo->dynamicMem[idx].dmaVirtAddr,
(long unsigned int)pCardInfo->dynamicMem[idx].dmaPhysAddr,
pCardInfo->dynamicMem[idx].memSize );
if( pCardInfo->dynamicMem[idx].dmaPhysAddr )
some form of free call would go here (
pCardInfo->dynamicMem[idx].dmaVirtAddr,
pCardInfo->dynamicMem[idx].memSize, ... );
else
free case for cacheable memory would go here
}
}
*/
return;
}
/******************************************************************************
agtiapi_ProbeCard()
Purpose:
Sets thisCardInst->cardIdIndex to the ag_card_type[] entry matching this card;
ag_card_type[idx].vendorId has already been determined to be PCI_VENDOR_ID_PMC_SIERRA.
Parameters:
device_t dev,
ag_card_info_t *thisCardInst,
int thisCard
Return:
0 - success
1 - no matching device found
Note:
This implementation is tailored to the FreeBSD probe(9) convention.
******************************************************************************/
STATIC int agtiapi_ProbeCard( device_t dev,
ag_card_info_t *thisCardInst,
int thisCard )
{
int idx;
u_int16_t agtiapi_vendor; // PCI vendor ID
u_int16_t agtiapi_dev; // PCI device ID
AGTIAPI_PRINTK("agtiapi_ProbeCard: start\n");
agtiapi_vendor = pci_get_vendor( dev ); // get PCI vendor ID
agtiapi_dev = pci_get_device( dev ); // get PCI device ID
for( idx = 0; idx < COUNT(ag_card_type); idx++ )
{
if ( ag_card_type[idx].deviceId == agtiapi_dev &&
ag_card_type[idx].vendorId == agtiapi_vendor)
{ // device ID match
memset( (void *)&agCardInfoList[ thisCard ], 0,
sizeof(ag_card_info_t) );
thisCardInst->cardIdIndex = idx;
thisCardInst->pPCIDev = dev;
thisCardInst->cardNameIndex = ag_card_type[idx].cardNameIndex;
thisCardInst->cardID =
pci_read_config( dev, ag_card_type[idx].membar, 4 ); // memAddr
AGTIAPI_PRINTK("agtiapi_ProbeCard: We've got PMC SAS, probe successful %p / %p\n",
thisCardInst->pPCIDev, thisCardInst );
device_set_desc( dev, ag_card_names[ag_card_type[idx].cardNameIndex] );
return 0;
}
}
return 1;
}
Index: projects/clang800-import/sys/dev/puc/puc_pci.c
===================================================================
--- projects/clang800-import/sys/dev/puc/puc_pci.c (revision 343955)
+++ projects/clang800-import/sys/dev/puc/puc_pci.c (revision 343956)
@@ -1,203 +1,203 @@
/* $NetBSD: puc.c,v 1.7 2000/07/29 17:43:38 jlam Exp $ */
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD AND BSD-3-Clause
*
* Copyright (c) 2002 JF Hay. All rights reserved.
- * Copyright (c) 2000 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2000 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*-
* Copyright (c) 1996, 1998, 1999
* Christopher G. Demetriou. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by Christopher G. Demetriou
* for the NetBSD Project.
* 4. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/malloc.h>
#include <sys/sysctl.h>
#include <machine/bus.h>
#include <machine/resource.h>
#include <sys/rman.h>
#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>
#include <dev/puc/puc_cfg.h>
#include <dev/puc/puc_bfe.h>
#include <dev/puc/pucdata.c>
static int puc_msi_disable;
SYSCTL_INT(_hw_puc, OID_AUTO, msi_disable, CTLFLAG_RDTUN,
&puc_msi_disable, 0, "Disable use of MSI interrupts by puc(9)");
static const struct puc_cfg *
puc_pci_match(device_t dev, const struct puc_cfg *desc)
{
uint16_t vendor, device;
uint16_t subvendor, subdevice;
vendor = pci_get_vendor(dev);
device = pci_get_device(dev);
subvendor = pci_get_subvendor(dev);
subdevice = pci_get_subdevice(dev);
while (desc->vendor != 0xffff) {
if (desc->vendor == vendor && desc->device == device) {
/* exact match */
if (desc->subvendor == subvendor &&
desc->subdevice == subdevice)
return (desc);
/* wildcard match */
if (desc->subvendor == 0xffff)
return (desc);
}
desc++;
}
/* no match */
return (NULL);
}
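puc_pci_match() above walks a table terminated by a `vendor == 0xffff` sentinel, preferring an exact subvendor/subdevice match but also accepting an entry whose `subvendor` is the `0xffff` wildcard. A minimal standalone sketch of the same lookup (the struct, table, and function names here are hypothetical, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct dev_id {
	uint16_t vendor, device, subvendor, subdevice;
	const char *name;
};

/* Demo table: one exact entry, one wildcard entry, then the sentinel. */
static const struct dev_id demo_ids[] = {
	{ 0x1415, 0x9521, 0x1415, 0x0001, "exact variant" },
	{ 0x1415, 0x9521, 0xffff, 0xffff, "generic variant" },
	{ 0xffff, 0x0000, 0x0000, 0x0000, NULL }
};

/* Scan tbl until the vendor == 0xffff sentinel.  An entry matches on
 * vendor/device; subvendor == 0xffff acts as a wildcard for the
 * subsystem IDs, as in puc_pci_match(). */
static const struct dev_id *
match_id(const struct dev_id *tbl, uint16_t ven, uint16_t dev,
    uint16_t sven, uint16_t sdev)
{
	for (; tbl->vendor != 0xffff; tbl++) {
		if (tbl->vendor != ven || tbl->device != dev)
			continue;
		if (tbl->subvendor == sven && tbl->subdevice == sdev)
			return (tbl);		/* exact match */
		if (tbl->subvendor == 0xffff)
			return (tbl);		/* wildcard match */
	}
	return (NULL);			/* no match */
}
```

Because exact entries precede the wildcard in the table, a device with a recognized subsystem ID picks up the specific entry, while unrecognized subsystems still fall through to the generic one.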
static int
puc_pci_probe(device_t dev)
{
const struct puc_cfg *desc;
if ((pci_read_config(dev, PCIR_HDRTYPE, 1) & PCIM_HDRTYPE) != 0)
return (ENXIO);
desc = puc_pci_match(dev, puc_pci_devices);
if (desc == NULL)
return (ENXIO);
return (puc_bfe_probe(dev, desc));
}
static int
puc_pci_attach(device_t dev)
{
struct puc_softc *sc;
int error, count;
sc = device_get_softc(dev);
if (!puc_msi_disable) {
count = 1;
if (pci_alloc_msi(dev, &count) == 0) {
sc->sc_msi = 1;
sc->sc_irid = 1;
}
}
error = puc_bfe_attach(dev);
if (error != 0 && sc->sc_msi)
pci_release_msi(dev);
return (error);
}
static int
puc_pci_detach(device_t dev)
{
struct puc_softc *sc;
int error;
sc = device_get_softc(dev);
error = puc_bfe_detach(dev);
if (error != 0)
return (error);
if (sc->sc_msi)
error = pci_release_msi(dev);
return (error);
}
static device_method_t puc_pci_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, puc_pci_probe),
DEVMETHOD(device_attach, puc_pci_attach),
DEVMETHOD(device_detach, puc_pci_detach),
DEVMETHOD(bus_alloc_resource, puc_bus_alloc_resource),
DEVMETHOD(bus_release_resource, puc_bus_release_resource),
DEVMETHOD(bus_get_resource, puc_bus_get_resource),
DEVMETHOD(bus_read_ivar, puc_bus_read_ivar),
DEVMETHOD(bus_setup_intr, puc_bus_setup_intr),
DEVMETHOD(bus_teardown_intr, puc_bus_teardown_intr),
DEVMETHOD(bus_print_child, puc_bus_print_child),
DEVMETHOD(bus_child_pnpinfo_str, puc_bus_child_pnpinfo_str),
DEVMETHOD(bus_child_location_str, puc_bus_child_location_str),
DEVMETHOD_END
};
static driver_t puc_pci_driver = {
puc_driver_name,
puc_pci_methods,
sizeof(struct puc_softc),
};
DRIVER_MODULE(puc, pci, puc_pci_driver, puc_devclass, 0, 0);
MODULE_PNP_INFO("U16:vendor;U16:device;U16:#;U16:#;D:#", pci, puc,
puc_pci_devices, nitems(puc_pci_devices) - 1);
Index: projects/clang800-import/sys/dev/sio/sio_isa.c
===================================================================
--- projects/clang800-import/sys/dev/sio/sio_isa.c (revision 343955)
+++ projects/clang800-import/sys/dev/sio/sio_isa.c (revision 343956)
@@ -1,178 +1,178 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2001 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_sio.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/module.h>
#include <sys/tty.h>
#include <machine/bus.h>
#include <sys/timepps.h>
#include <dev/sio/siovar.h>
#include <isa/isareg.h>
#include <isa/isavar.h>
static int sio_isa_attach(device_t dev);
static int sio_isa_probe(device_t dev);
static device_method_t sio_isa_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, sio_isa_probe),
DEVMETHOD(device_attach, sio_isa_attach),
DEVMETHOD(device_detach, siodetach),
{ 0, 0 }
};
static driver_t sio_isa_driver = {
sio_driver_name,
sio_isa_methods,
0,
};
static struct isa_pnp_id sio_ids[] = {
{0x0005d041, "Standard PC COM port"}, /* PNP0500 */
{0x0105d041, "16550A-compatible COM port"}, /* PNP0501 */
{0x0205d041, "Multiport serial device (non-intelligent 16550)"}, /* PNP0502 */
{0x1005d041, "Generic IRDA-compatible device"}, /* PNP0510 */
{0x1105d041, "Generic IRDA-compatible device"}, /* PNP0511 */
/* Devices that do not have a compatid */
{0x12206804, NULL}, /* ACH2012 - 5634BTS 56K Video Ready Modem */
{0x7602a904, NULL}, /* AEI0276 - 56K v.90 Fax Modem (LKT) */
{0x00007905, NULL}, /* AKY0000 - 56K Plug&Play Modem */
{0x21107905, NULL}, /* AKY1021 - 56K Plug&Play Modem */
{0x01405407, NULL}, /* AZT4001 - AZT3000 PnP SOUND DEVICE, MODEM */
{0x56039008, NULL}, /* BDP0356 - Best Data 56x2 */
{0x56159008, NULL}, /* BDP1556 - B.D. Smart One 56SPS,Voice Modem*/
{0x36339008, NULL}, /* BDP3336 - Best Data Prods. 336F */
{0x0014490a, NULL}, /* BRI1400 - Boca 33.6 PnP */
{0x0015490a, NULL}, /* BRI1500 - Internal Fax Data */
{0x0034490a, NULL}, /* BRI3400 - Internal ACF Modem */
{0x0094490a, NULL}, /* BRI9400 - Boca K56Flex PnP */
{0x00b4490a, NULL}, /* BRIB400 - Boca 56k PnP */
{0x0010320d, NULL}, /* CIR1000 - Cirrus Logic V34 */
{0x0030320d, NULL}, /* CIR3000 - Cirrus Logic V43 */
{0x0100440e, NULL}, /* CRD0001 - Cardinal MVP288IV ? */
{0x01308c0e, NULL}, /* CTL3001 - Creative Labs Phoneblaster */
{0x36033610, NULL}, /* DAV0336 - DAVICOM 336PNP MODEM */
{0x01009416, NULL}, /* ETT0001 - E-Tech Bullet 33k6 PnP */
{0x0000aa1a, NULL}, /* FUJ0000 - FUJITSU Modem 33600 PNP/I2 */
{0x1200c31e, NULL}, /* GVC0012 - VF1128HV-R9 (win modem?) */
{0x0303c31e, NULL}, /* GVC0303 - MaxTech 33.6 PnP D/F/V */
{0x0505c31e, NULL}, /* GVC0505 - GVC 56k Faxmodem */
{0x0116c31e, NULL}, /* GVC1601 - Rockwell V.34 Plug & Play Modem */
{0x0050c31e, NULL}, /* GVC5000 - some GVC modem */
{0x3800f91e, NULL}, /* GWY0038 - Telepath with v.90 */
{0x9062f91e, NULL}, /* GWY6290 - Telepath with x2 Technology */
{0x8100e425, NULL}, /* IOD0081 - I-O DATA DEVICE,INC. IFML-560 */
{0x71004d24, NULL}, /* IBM0071 - IBM ThinkPad 240 IrDA controller*/
{0x21002534, NULL}, /* MAE0021 - Jetstream Int V.90 56k Voice Series 2*/
{0x0000f435, NULL}, /* MOT0000 - Motorola ModemSURFR 33.6 Intern */
{0x5015f435, NULL}, /* MOT1550 - Motorola ModemSURFR 56K Modem */
{0xf015f435, NULL}, /* MOT15F0 - Motorola VoiceSURFR 56K Modem */
{0x6045f435, NULL}, /* MOT4560 - Motorola ? */
{0x61e7a338, NULL}, /* NECE761 - 33.6Modem */
{0x0160633a, NULL}, /* NSC6001 - National Semi's IrDA Controller*/
{0x08804f3f, NULL}, /* OZO8008 - Zoom (33.6k Modem) */
{0x0f804f3f, NULL}, /* OZO800f - Zoom 2812 (56k Modem) */
{0x39804f3f, NULL}, /* OZO8039 - Zoom 56k flex */
{0x00914f3f, NULL}, /* OZO9100 - Zoom 2919 (K56 Faxmodem) */
{0x3024a341, NULL}, /* PMC2430 - Pace 56 Voice Internal Modem */
{0x1000eb49, NULL}, /* ROK0010 - Rockwell ? */
{0x1200b23d, NULL}, /* RSS0012 - OMRON ME5614ISA */
{0x5002734a, NULL}, /* RSS0250 - 5614Jx3(G) Internal Modem */
{0x6202734a, NULL}, /* RSS0262 - 5614Jx3[G] V90+K56Flex Modem */
{0x1010104d, NULL}, /* SHP1010 - Rockwell 33600bps Modem */
{0x10f0a34d, NULL}, /* SMCF010 - SMC IrCC*/
{0xc100ad4d, NULL}, /* SMM00C1 - Leopard 56k PnP */
{0x9012b04e, NULL}, /* SUP1290 - Supra ? */
{0x1013b04e, NULL}, /* SUP1310 - SupraExpress 336i PnP */
{0x8013b04e, NULL}, /* SUP1380 - SupraExpress 288i PnP Voice */
{0x8113b04e, NULL}, /* SUP1381 - SupraExpress 336i PnP Voice */
{0x5016b04e, NULL}, /* SUP1650 - Supra 336i Sp Intl */
{0x7016b04e, NULL}, /* SUP1670 - Supra 336i V+ Intl */
{0x7420b04e, NULL}, /* SUP2070 - Supra ? */
{0x8020b04e, NULL}, /* SUP2080 - Supra ? */
{0x8420b04e, NULL}, /* SUP2084 - SupraExpress 56i PnP */
{0x7121b04e, NULL}, /* SUP2171 - SupraExpress 56i Sp? */
{0x8024b04e, NULL}, /* SUP2480 - Supra ? */
{0x01007256, NULL}, /* USR0001 - U.S. Robotics Inc., Sportster W */
{0x02007256, NULL}, /* USR0002 - U.S. Robotics Inc. Sportster 33. */
{0x04007256, NULL}, /* USR0004 - USR Sportster 14.4k */
{0x06007256, NULL}, /* USR0006 - USR Sportster 33.6k */
{0x11007256, NULL}, /* USR0011 - USR ? */
{0x01017256, NULL}, /* USR0101 - USR ? */
{0x30207256, NULL}, /* USR2030 - U.S.Robotics Inc. Sportster 560 */
{0x50207256, NULL}, /* USR2050 - U.S.Robotics Inc. Sportster 33. */
{0x70207256, NULL}, /* USR2070 - U.S.Robotics Inc. Sportster 560 */
{0x30307256, NULL}, /* USR3030 - U.S. Robotics 56K FAX INT */
{0x31307256, NULL}, /* USR3031 - U.S. Robotics 56K FAX INT */
{0x50307256, NULL}, /* USR3050 - U.S. Robotics 56K FAX INT */
{0x70307256, NULL}, /* USR3070 - U.S. Robotics 56K Voice INT */
{0x90307256, NULL}, /* USR3090 - USR ? */
{0x70917256, NULL}, /* USR9170 - U.S. Robotics 56K FAX INT */
{0x90917256, NULL}, /* USR9190 - USR 56k Voice INT */
{0x04f0235c, NULL}, /* WACF004 - Wacom Tablet PC Screen*/
{0x0300695c, NULL}, /* WCI0003 - Fax/Voice/Modem/Speakphone/Asvd */
{0x01a0896a, NULL}, /* ZTIA001 - Zoom Internal V90 Faxmodem */
{0x61f7896a, NULL}, /* ZTIF761 - Zoom ComStar 33.6 */
{0}
};
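The 32-bit constants in sio_ids[] are EISA/PnP logical IDs with their big-endian byte order stored as a little-endian integer: three 5-bit compressed vendor letters followed by four hex product digits. A small sketch of the decoding (helper name hypothetical; the first table entry, 0x0005d041, comes out as "PNP0500", matching its comment):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Decode a packed 32-bit ISA PnP logical ID (as stored in sio_ids[])
 * into its "AAA####" form.  The three vendor letters are 5-bit codes
 * ('A' == 1) packed into the first two EISA bytes; the last two bytes
 * hold the four hex digits of the product ID. */
static void
pnp_id_decode(uint32_t id, char out[8])
{
	uint8_t b0 = id & 0xff, b1 = (id >> 8) & 0xff;
	uint8_t b2 = (id >> 16) & 0xff, b3 = (id >> 24) & 0xff;

	out[0] = '@' + ((b0 >> 2) & 0x1f);
	out[1] = '@' + (((b0 & 0x3) << 3) | ((b1 >> 5) & 0x7));
	out[2] = '@' + (b1 & 0x1f);
	snprintf(out + 3, 5, "%X%X%X%X", b2 >> 4, b2 & 0xf, b3 >> 4, b3 & 0xf);
}
```

This is why PNP05xx entries cluster around small low words: the "PNP" vendor prefix always packs to the 0xd041 tail of the stored integer.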
static int
sio_isa_probe(device_t dev)
{
/* Check isapnp ids */
if (ISA_PNP_PROBE(device_get_parent(dev), dev, sio_ids) == ENXIO)
return (ENXIO);
return (sioprobe(dev, 0, 0UL, 0));
}
static int
sio_isa_attach(device_t dev)
{
return (sioattach(dev, 0, 0UL));
}
DRIVER_MODULE(sio, isa, sio_isa_driver, sio_devclass, 0, 0);
#ifndef COM_NO_ACPI
DRIVER_MODULE(sio, acpi, sio_isa_driver, sio_devclass, 0, 0);
#endif
ISA_PNP_INFO(sio_ids);
Index: projects/clang800-import/sys/dev/sio/sio_pccard.c
===================================================================
--- projects/clang800-import/sys/dev/sio/sio_pccard.c (revision 343955)
+++ projects/clang800-import/sys/dev/sio/sio_pccard.c (revision 343956)
@@ -1,99 +1,99 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2001 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/module.h>
#include <sys/tty.h>
#include <machine/bus.h>
#include <machine/resource.h>
#include <sys/timepps.h>
#include <dev/pccard/pccard_cis.h>
#include <dev/pccard/pccardvar.h>
#include <dev/sio/siovar.h>
static int sio_pccard_attach(device_t dev);
static int sio_pccard_probe(device_t dev);
static device_method_t sio_pccard_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, sio_pccard_probe),
DEVMETHOD(device_attach, sio_pccard_attach),
DEVMETHOD(device_detach, siodetach),
{ 0, 0 }
};
static driver_t sio_pccard_driver = {
sio_driver_name,
sio_pccard_methods,
0,
};
static int
sio_pccard_probe(device_t dev)
{
int error = 0;
u_int32_t fcn = PCCARD_FUNCTION_UNSPEC;
error = pccard_get_function(dev, &fcn);
if (error != 0)
return (error);
/*
* If a serial card, we are likely the right driver. However,
* some serial cards are better served by other drivers, so
* allow other drivers to claim it, if they want.
*/
if (fcn == PCCARD_FUNCTION_SERIAL)
return (-100);
return (ENXIO);
}
static int
sio_pccard_attach(device_t dev)
{
int err;
/*
 * Do not probe the IRQ - pccard does not turn on the interrupt
 * line until bus_setup_intr().
 */
if ((err = sioprobe(dev, 0, 0UL, 1)) > 0)
return (err);
return (sioattach(dev, 0, 0UL));
}
DRIVER_MODULE(sio, pccard, sio_pccard_driver, sio_devclass, 0, 0);
Index: projects/clang800-import/sys/dev/sio/sio_pci.c
===================================================================
--- projects/clang800-import/sys/dev/sio/sio_pci.c (revision 343955)
+++ projects/clang800-import/sys/dev/sio/sio_pci.c (revision 343956)
@@ -1,127 +1,127 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2001 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/module.h>
#include <sys/tty.h>
#include <machine/bus.h>
#include <sys/timepps.h>
#include <dev/sio/siovar.h>
#include <dev/pci/pcivar.h>
static int sio_pci_attach(device_t dev);
static int sio_pci_probe(device_t dev);
static device_method_t sio_pci_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, sio_pci_probe),
DEVMETHOD(device_attach, sio_pci_attach),
DEVMETHOD(device_detach, siodetach),
{ 0, 0 }
};
static driver_t sio_pci_driver = {
sio_driver_name,
sio_pci_methods,
0,
};
struct pci_ids {
u_int32_t type;
const char *desc;
int rid;
};
static struct pci_ids pci_ids[] = {
{ 0x100812b9, "3COM PCI FaxModem", 0x10 },
{ 0x2000131f, "CyberSerial (1-port) 16550", 0x10 },
{ 0x01101407, "Koutech IOFLEX-2S PCI Dual Port Serial", 0x10 },
{ 0x01111407, "Koutech IOFLEX-2S PCI Dual Port Serial", 0x10 },
{ 0x048011c1, "Lucent kermit based PCI Modem", 0x14 },
{ 0x95211415, "Oxford Semiconductor PCI Dual Port Serial", 0x10 },
{ 0x7101135e, "SeaLevel Ultra 530.PCI Single Port Serial", 0x18 },
{ 0x0000151f, "SmartLink 5634PCV SurfRider", 0x10 },
{ 0x0103115d, "Xircom Cardbus modem", 0x10 },
{ 0x432214e4, "Broadcom 802.11b/GPRS CardBus (Serial)", 0x10 },
{ 0x434414e4, "Broadcom 802.11bg/EDGE/GPRS CardBus (Serial)", 0x10 },
{ 0x01c0135c, "Quatech SSCLP-200/300", 0x18
/*
* NB: You must mount the "SPAD" jumper to correctly detect
* the FIFO on the UART. Set the options on the jumpers,
* we do not support the extra registers on the Quatech.
*/
},
{ 0x00000000, NULL, 0 }
};
static int
sio_pci_attach(device_t dev)
{
u_int32_t type;
struct pci_ids *id;
type = pci_get_devid(dev);
id = pci_ids;
while (id->type && id->type != type)
id++;
if (id->desc == NULL)
return (ENXIO);
return (sioattach(dev, id->rid, 0UL));
}
static int
sio_pci_probe(device_t dev)
{
u_int32_t type;
struct pci_ids *id;
type = pci_get_devid(dev);
id = pci_ids;
while (id->type && id->type != type)
id++;
if (id->desc == NULL)
return (ENXIO);
device_set_desc(dev, id->desc);
return (sioprobe(dev, id->rid, 0UL, 0));
}
DRIVER_MODULE(sio, pci, sio_pci_driver, sio_devclass, 0, 0);
Index: projects/clang800-import/sys/dev/sio/sio_puc.c
===================================================================
--- projects/clang800-import/sys/dev/sio/sio_puc.c (revision 343955)
+++ projects/clang800-import/sys/dev/sio/sio_puc.c (revision 343956)
@@ -1,99 +1,99 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2002 JF Hay. All rights reserved.
- * Copyright (c) 2001 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/module.h>
#include <sys/tty.h>
#include <machine/bus.h>
#include <sys/timepps.h>
#include <dev/puc/puc_bus.h>
#include <dev/sio/siovar.h>
#include <dev/sio/sioreg.h>
static int sio_puc_attach(device_t dev);
static int sio_puc_probe(device_t dev);
static device_method_t sio_puc_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, sio_puc_probe),
DEVMETHOD(device_attach, sio_puc_attach),
DEVMETHOD(device_detach, siodetach),
{ 0, 0 }
};
static driver_t sio_puc_driver = {
sio_driver_name,
sio_puc_methods,
0,
};
static int
sio_puc_attach(device_t dev)
{
uintptr_t rclk;
if (BUS_READ_IVAR(device_get_parent(dev), dev, PUC_IVAR_CLOCK,
&rclk) != 0)
rclk = DEFAULT_RCLK;
return (sioattach(dev, 0, rclk));
}
static int
sio_puc_probe(device_t dev)
{
device_t parent;
uintptr_t rclk, type;
int error;
parent = device_get_parent(dev);
if (BUS_READ_IVAR(parent, dev, PUC_IVAR_TYPE, &type))
return (ENXIO);
if (type != PUC_TYPE_SERIAL)
return (ENXIO);
if (BUS_READ_IVAR(parent, dev, PUC_IVAR_CLOCK, &rclk))
rclk = DEFAULT_RCLK;
error = sioprobe(dev, 0, rclk, 1);
return ((error > 0) ? error : BUS_PROBE_LOW_PRIORITY);
}
DRIVER_MODULE(sio, puc, sio_puc_driver, sio_devclass, 0, 0);
Index: projects/clang800-import/sys/dev/uart/uart_bus_acpi.c
===================================================================
--- projects/clang800-import/sys/dev/uart/uart_bus_acpi.c (revision 343955)
+++ projects/clang800-import/sys/dev/uart/uart_bus_acpi.c (revision 343956)
@@ -1,102 +1,102 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2001 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <machine/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>
#include <dev/uart/uart.h>
#include <dev/uart/uart_bus.h>
#include <dev/uart/uart_cpu_acpi.h>
#include <contrib/dev/acpica/include/acpi.h>
#include <contrib/dev/acpica/include/accommon.h>
#include <dev/acpica/acpivar.h>
static int uart_acpi_probe(device_t dev);
static device_method_t uart_acpi_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, uart_acpi_probe),
DEVMETHOD(device_attach, uart_bus_attach),
DEVMETHOD(device_detach, uart_bus_detach),
DEVMETHOD(device_resume, uart_bus_resume),
{ 0, 0 }
};
static driver_t uart_acpi_driver = {
uart_driver_name,
uart_acpi_methods,
sizeof(struct uart_softc),
};
static struct acpi_uart_compat_data *
uart_acpi_find_device(device_t dev)
{
struct acpi_uart_compat_data **cd, *cd_it;
ACPI_HANDLE h;
if ((h = acpi_get_handle(dev)) == NULL)
return (NULL);
SET_FOREACH(cd, uart_acpi_class_and_device_set) {
for (cd_it = *cd; cd_it->cd_hid != NULL; cd_it++) {
if (acpi_MatchHid(h, cd_it->cd_hid))
return (cd_it);
}
}
return (NULL);
}
static int
uart_acpi_probe(device_t dev)
{
struct uart_softc *sc;
struct acpi_uart_compat_data *cd;
sc = device_get_softc(dev);
if ((cd = uart_acpi_find_device(dev)) != NULL) {
sc->sc_class = cd->cd_class;
if (cd->cd_desc != NULL)
device_set_desc(dev, cd->cd_desc);
return (uart_bus_probe(dev, cd->cd_regshft, cd->cd_regiowidth,
cd->cd_rclk, 0, 0, cd->cd_quirks));
}
return (ENXIO);
}
DRIVER_MODULE(uart, acpi, uart_acpi_driver, uart_devclass, 0, 0);
Index: projects/clang800-import/sys/dev/uart/uart_bus_pccard.c
===================================================================
--- projects/clang800-import/sys/dev/uart/uart_bus_pccard.c (revision 343955)
+++ projects/clang800-import/sys/dev/uart/uart_bus_pccard.c (revision 343956)
@@ -1,106 +1,106 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2001 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001 M. Warner Losh.
* Copyright (c) 2003 Norikatsu Shigemura, Takenori Watanabe All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <machine/bus.h>
#include <machine/resource.h>
#include <dev/pccard/pccard_cis.h>
#include <dev/pccard/pccardvar.h>
#include <dev/uart/uart.h>
#include <dev/uart/uart_bus.h>
#include "pccarddevs.h"
static int uart_pccard_probe(device_t dev);
static int uart_pccard_attach(device_t dev);
static device_method_t uart_pccard_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, uart_pccard_probe),
DEVMETHOD(device_attach, uart_pccard_attach),
DEVMETHOD(device_detach, uart_bus_detach),
{ 0, 0 }
};
static uint32_t uart_pccard_function = PCCARD_FUNCTION_SERIAL;
static driver_t uart_pccard_driver = {
uart_driver_name,
uart_pccard_methods,
sizeof(struct uart_softc),
};
static int
uart_pccard_probe(device_t dev)
{
int error;
uint32_t fcn;
fcn = PCCARD_FUNCTION_UNSPEC;
error = pccard_get_function(dev, &fcn);
if (error != 0)
return (error);
/*
* If a serial card, we are likely the right driver. However,
* some serial cards are better serviced by other drivers, so
* allow other drivers to claim it, if they want.
*/
if (fcn == uart_pccard_function)
return (BUS_PROBE_GENERIC);
return (ENXIO);
}
static int
uart_pccard_attach(device_t dev)
{
struct uart_softc *sc;
int error;
sc = device_get_softc(dev);
sc->sc_class = &uart_ns8250_class;
error = uart_bus_probe(dev, 0, 0, 0, 0, 0, 0);
if (error > 0)
return (error);
return (uart_bus_attach(dev));
}
DRIVER_MODULE(uart, pccard, uart_pccard_driver, uart_devclass, 0, 0);
MODULE_PNP_INFO("U32:function_type;", pccard, uart, &uart_pccard_function,
1);
Index: projects/clang800-import/sys/dev/uart/uart_bus_puc.c
===================================================================
--- projects/clang800-import/sys/dev/uart/uart_bus_puc.c (revision 343955)
+++ projects/clang800-import/sys/dev/uart/uart_bus_puc.c (revision 343956)
@@ -1,89 +1,89 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2006 Marcel Moolenaar. All rights reserved.
* Copyright (c) 2002 JF Hay. All rights reserved.
- * Copyright (c) 2001 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2001 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <machine/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>
#include <dev/puc/puc_bus.h>
#include <dev/uart/uart.h>
#include <dev/uart/uart_bus.h>
static int uart_puc_probe(device_t dev);
static device_method_t uart_puc_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, uart_puc_probe),
DEVMETHOD(device_attach, uart_bus_attach),
DEVMETHOD(device_detach, uart_bus_detach),
/* Serdev interface */
DEVMETHOD(serdev_ihand, uart_bus_ihand),
DEVMETHOD(serdev_ipend, uart_bus_ipend),
{ 0, 0 }
};
static driver_t uart_puc_driver = {
uart_driver_name,
uart_puc_methods,
sizeof(struct uart_softc),
};
static int
uart_puc_probe(device_t dev)
{
device_t parent;
struct uart_softc *sc;
uintptr_t rclk, type;
parent = device_get_parent(dev);
sc = device_get_softc(dev);
if (BUS_READ_IVAR(parent, dev, PUC_IVAR_TYPE, &type))
return (ENXIO);
if (type != PUC_TYPE_SERIAL)
return (ENXIO);
sc->sc_class = &uart_ns8250_class;
if (BUS_READ_IVAR(parent, dev, PUC_IVAR_CLOCK, &rclk))
rclk = 0;
return (uart_bus_probe(dev, 0, 0, rclk, 0, 0, 0));
}
DRIVER_MODULE(uart, puc, uart_puc_driver, uart_devclass, 0, 0);
Index: projects/clang800-import/sys/dev/usb/controller/ohci_s3c24x0.c
===================================================================
--- projects/clang800-import/sys/dev/usb/controller/ohci_s3c24x0.c (revision 343955)
+++ projects/clang800-import/sys/dev/usb/controller/ohci_s3c24x0.c (nonexistent)
@@ -1,214 +0,0 @@
-/*-
- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
- *
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
- * Copyright (c) 2009 Andrew Turner. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <sys/cdefs.h>
-__FBSDID("$FreeBSD$");
-
-#include <sys/stdint.h>
-#include <sys/stddef.h>
-#include <sys/param.h>
-#include <sys/queue.h>
-#include <sys/types.h>
-#include <sys/systm.h>
-#include <sys/kernel.h>
-#include <sys/bus.h>
-#include <sys/module.h>
-#include <sys/lock.h>
-#include <sys/mutex.h>
-#include <sys/condvar.h>
-#include <sys/sysctl.h>
-#include <sys/sx.h>
-#include <sys/unistd.h>
-#include <sys/callout.h>
-#include <sys/malloc.h>
-#include <sys/priv.h>
-
-#include <dev/usb/usb.h>
-#include <dev/usb/usbdi.h>
-
-#include <dev/usb/usb_core.h>
-#include <dev/usb/usb_busdma.h>
-#include <dev/usb/usb_process.h>
-#include <dev/usb/usb_util.h>
-
-#include <dev/usb/usb_controller.h>
-#include <dev/usb/usb_bus.h>
-#include <dev/usb/controller/ohci.h>
-#include <dev/usb/controller/ohcireg.h>
-
-#include <sys/rman.h>
-
-#include <arm/samsung/s3c2xx0/s3c24x0reg.h>
-
-static device_probe_t ohci_s3c24x0_probe;
-static device_attach_t ohci_s3c24x0_attach;
-static device_detach_t ohci_s3c24x0_detach;
-
-static int
-ohci_s3c24x0_probe(device_t dev)
-{
- device_set_desc(dev, "S3C24x0 integrated OHCI controller");
- return (BUS_PROBE_DEFAULT);
-}
-
-static int
-ohci_s3c24x0_attach(device_t dev)
-{
- struct ohci_softc *sc = device_get_softc(dev);
- int err;
- int rid;
-
- /* initialise some bus fields */
- sc->sc_bus.parent = dev;
- sc->sc_bus.devices = sc->sc_devices;
- sc->sc_bus.devices_max = OHCI_MAX_DEVICES;
- sc->sc_bus.dma_bits = 32;
-
- /* get all DMA memory */
- if (usb_bus_mem_alloc_all(&sc->sc_bus, USB_GET_DMA_TAG(dev),
- &ohci_iterate_hw_softc)) {
- return (ENOMEM);
- }
-
- sc->sc_dev = dev;
-
- rid = 0;
- sc->sc_io_res = bus_alloc_resource_any(dev, SYS_RES_IOPORT,
- &rid, RF_ACTIVE);
-
- if (!(sc->sc_io_res)) {
- err = ENOMEM;
- goto error;
- }
- sc->sc_io_tag = rman_get_bustag(sc->sc_io_res);
- sc->sc_io_hdl = rman_get_bushandle(sc->sc_io_res);
- sc->sc_io_size = rman_get_size(sc->sc_io_res);
-
- rid = 0;
- sc->sc_irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
- RF_ACTIVE);
- if (!(sc->sc_irq_res)) {
- goto error;
- }
- sc->sc_bus.bdev = device_add_child(dev, "usbus", -1);
- if (!(sc->sc_bus.bdev)) {
- goto error;
- }
- device_set_ivars(sc->sc_bus.bdev, &sc->sc_bus);
-
- strlcpy(sc->sc_vendor, "Samsung", sizeof(sc->sc_vendor));
-
- err = bus_setup_intr(dev, sc->sc_irq_res, INTR_TYPE_BIO | INTR_MPSAFE,
- NULL, (void *)ohci_interrupt, sc, &sc->sc_intr_hdl);
- if (err) {
- sc->sc_intr_hdl = NULL;
- goto error;
- }
-
- bus_space_write_4(sc->sc_io_tag, sc->sc_io_hdl,
- OHCI_CONTROL, 0);
-
- err = ohci_init(sc);
- if (!err) {
- err = device_probe_and_attach(sc->sc_bus.bdev);
- }
- if (err) {
- goto error;
- }
- return (0);
-
-error:
- ohci_s3c24x0_detach(dev);
- return (ENXIO);
-}
-
-static int
-ohci_s3c24x0_detach(device_t dev)
-{
- struct ohci_softc *sc = device_get_softc(dev);
- int err;
-
- /* during module unload there are lots of children leftover */
- device_delete_children(dev);
-
- /*
- * Put the controller into reset, then disable clocks and do
- * the MI tear down. We have to disable the clocks/hardware
- * after we do the rest of the teardown. We also disable the
- * clocks in the opposite order we acquire them, but that
- * doesn't seem to be absolutely necessary. We free up the
- * clocks after we disable them, so the system could, in
- * theory, reuse them.
- */
- bus_space_write_4(sc->sc_io_tag, sc->sc_io_hdl,
- OHCI_CONTROL, 0);
-
- if (sc->sc_irq_res && sc->sc_intr_hdl) {
- /*
- * only call ohci_detach() after ohci_init()
- */
- ohci_detach(sc);
-
- err = bus_teardown_intr(dev, sc->sc_irq_res, sc->sc_intr_hdl);
- sc->sc_intr_hdl = NULL;
- }
- if (sc->sc_irq_res) {
- bus_release_resource(dev, SYS_RES_IRQ, 0, sc->sc_irq_res);
- sc->sc_irq_res = NULL;
- }
- if (sc->sc_io_res) {
- bus_release_resource(dev, SYS_RES_MEMORY, 0,
- sc->sc_io_res);
- sc->sc_io_res = NULL;
- }
- usb_bus_mem_free_all(&sc->sc_bus, &ohci_iterate_hw_softc);
-
- return (0);
-}
-
-static device_method_t ohci_methods[] = {
- /* Device interface */
- DEVMETHOD(device_probe, ohci_s3c24x0_probe),
- DEVMETHOD(device_attach, ohci_s3c24x0_attach),
- DEVMETHOD(device_detach, ohci_s3c24x0_detach),
- DEVMETHOD(device_suspend, bus_generic_suspend),
- DEVMETHOD(device_resume, bus_generic_resume),
- DEVMETHOD(device_shutdown, bus_generic_shutdown),
-
- DEVMETHOD_END
-};
-
-static driver_t ohci_driver = {
- .name = "ohci",
- .methods = ohci_methods,
- .size = sizeof(struct ohci_softc),
-};
-
-static devclass_t ohci_devclass;
-
-DRIVER_MODULE(ohci, s3c24x0, ohci_driver, ohci_devclass, 0, 0);
-MODULE_DEPEND(ohci, usb, 1, 1, 1);
Property changes on: projects/clang800-import/sys/dev/usb/controller/ohci_s3c24x0.c
___________________________________________________________________
Deleted: svn:eol-style
## -1 +0,0 ##
-native
\ No newline at end of property
Deleted: svn:keywords
## -1 +0,0 ##
-FreeBSD=%H
\ No newline at end of property
Deleted: svn:mime-type
## -1 +0,0 ##
-text/plain
\ No newline at end of property
Index: projects/clang800-import/sys/dev/usb/controller/generic_ohci.c
===================================================================
--- projects/clang800-import/sys/dev/usb/controller/generic_ohci.c (revision 343955)
+++ projects/clang800-import/sys/dev/usb/controller/generic_ohci.c (revision 343956)
@@ -1,320 +1,319 @@
/*-
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
- * Copyright (c) 2016 Emmanuel Vadot <manu@freebsd.org>
- * All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
+ * Copyright (c) 2016 Emmanuel Vadot <manu@freebsd.org> All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
* Generic OHCI driver based on AT91 OHCI
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <sys/condvar.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <machine/bus.h>
#include <dev/ofw/ofw_bus.h>
#include <dev/ofw/ofw_bus_subr.h>
#include <dev/usb/usb.h>
#include <dev/usb/usbdi.h>
#include <dev/usb/usb_core.h>
#include <dev/usb/usb_busdma.h>
#include <dev/usb/usb_process.h>
#include <dev/usb/usb_util.h>
#include <dev/usb/usb_controller.h>
#include <dev/usb/usb_bus.h>
#include <dev/usb/controller/ohci.h>
#include <dev/usb/controller/ohcireg.h>
#ifdef EXT_RESOURCES
#include <dev/extres/clk/clk.h>
#include <dev/extres/hwreset/hwreset.h>
#include <dev/extres/phy/phy.h>
#endif
#include "generic_usb_if.h"
#ifdef EXT_RESOURCES
struct clk_list {
TAILQ_ENTRY(clk_list) next;
clk_t clk;
};
#endif
struct generic_ohci_softc {
ohci_softc_t ohci_sc;
#ifdef EXT_RESOURCES
hwreset_t rst;
phy_t phy;
TAILQ_HEAD(, clk_list) clk_list;
#endif
};
static int generic_ohci_detach(device_t);
static int
generic_ohci_probe(device_t dev)
{
if (!ofw_bus_status_okay(dev))
return (ENXIO);
if (!ofw_bus_is_compatible(dev, "generic-ohci"))
return (ENXIO);
device_set_desc(dev, "Generic OHCI Controller");
return (BUS_PROBE_DEFAULT);
}
static int
generic_ohci_attach(device_t dev)
{
struct generic_ohci_softc *sc = device_get_softc(dev);
int err, rid;
#ifdef EXT_RESOURCES
int off;
struct clk_list *clkp;
clk_t clk;
#endif
sc->ohci_sc.sc_bus.parent = dev;
sc->ohci_sc.sc_bus.devices = sc->ohci_sc.sc_devices;
sc->ohci_sc.sc_bus.devices_max = OHCI_MAX_DEVICES;
sc->ohci_sc.sc_bus.dma_bits = 32;
/* get all DMA memory */
if (usb_bus_mem_alloc_all(&sc->ohci_sc.sc_bus,
USB_GET_DMA_TAG(dev), &ohci_iterate_hw_softc)) {
return (ENOMEM);
}
rid = 0;
sc->ohci_sc.sc_io_res = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
&rid, RF_ACTIVE);
if (sc->ohci_sc.sc_io_res == 0) {
err = ENOMEM;
goto error;
}
sc->ohci_sc.sc_io_tag = rman_get_bustag(sc->ohci_sc.sc_io_res);
sc->ohci_sc.sc_io_hdl = rman_get_bushandle(sc->ohci_sc.sc_io_res);
sc->ohci_sc.sc_io_size = rman_get_size(sc->ohci_sc.sc_io_res);
rid = 0;
sc->ohci_sc.sc_irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
RF_ACTIVE);
if (sc->ohci_sc.sc_irq_res == 0) {
err = ENXIO;
goto error;
}
sc->ohci_sc.sc_bus.bdev = device_add_child(dev, "usbus", -1);
if (sc->ohci_sc.sc_bus.bdev == 0) {
err = ENXIO;
goto error;
}
device_set_ivars(sc->ohci_sc.sc_bus.bdev, &sc->ohci_sc.sc_bus);
strlcpy(sc->ohci_sc.sc_vendor, "Generic",
sizeof(sc->ohci_sc.sc_vendor));
err = bus_setup_intr(dev, sc->ohci_sc.sc_irq_res,
INTR_TYPE_BIO | INTR_MPSAFE, NULL,
(driver_intr_t *)ohci_interrupt, sc, &sc->ohci_sc.sc_intr_hdl);
if (err) {
sc->ohci_sc.sc_intr_hdl = NULL;
goto error;
}
#ifdef EXT_RESOURCES
TAILQ_INIT(&sc->clk_list);
/* Enable clock */
for (off = 0; clk_get_by_ofw_index(dev, 0, off, &clk) == 0; off++) {
err = clk_enable(clk);
if (err != 0) {
device_printf(dev, "Could not enable clock %s\n",
clk_get_name(clk));
goto error;
}
clkp = malloc(sizeof(*clkp), M_DEVBUF, M_WAITOK | M_ZERO);
clkp->clk = clk;
TAILQ_INSERT_TAIL(&sc->clk_list, clkp, next);
}
/* De-assert reset */
if (hwreset_get_by_ofw_idx(dev, 0, 0, &sc->rst) == 0) {
err = hwreset_deassert(sc->rst);
if (err != 0) {
device_printf(dev, "Could not de-assert reset %d\n",
off);
goto error;
}
}
/* Enable phy */
if (phy_get_by_ofw_name(dev, 0, "usb", &sc->phy) == 0) {
err = phy_enable(sc->phy);
if (err != 0) {
device_printf(dev, "Could not enable phy\n");
goto error;
}
}
#endif
if (GENERIC_USB_INIT(dev) != 0) {
err = ENXIO;
goto error;
}
err = ohci_init(&sc->ohci_sc);
if (err == 0)
err = device_probe_and_attach(sc->ohci_sc.sc_bus.bdev);
if (err)
goto error;
return (0);
error:
generic_ohci_detach(dev);
return (err);
}
static int
generic_ohci_detach(device_t dev)
{
struct generic_ohci_softc *sc = device_get_softc(dev);
int err;
#ifdef EXT_RESOURCES
struct clk_list *clk, *clk_tmp;
#endif
/* during module unload there are lots of children leftover */
device_delete_children(dev);
/*
* Put the controller into reset, then disable clocks and do
* the MI tear down. We have to disable the clocks/hardware
* after we do the rest of the teardown. We also disable the
* clocks in the opposite order we acquire them, but that
* doesn't seem to be absolutely necessary. We free up the
* clocks after we disable them, so the system could, in
* theory, reuse them.
*/
bus_space_write_4(sc->ohci_sc.sc_io_tag, sc->ohci_sc.sc_io_hdl,
OHCI_CONTROL, 0);
if (sc->ohci_sc.sc_irq_res && sc->ohci_sc.sc_intr_hdl) {
/*
* only call ohci_detach() after ohci_init()
*/
ohci_detach(&sc->ohci_sc);
err = bus_teardown_intr(dev, sc->ohci_sc.sc_irq_res,
sc->ohci_sc.sc_intr_hdl);
sc->ohci_sc.sc_intr_hdl = NULL;
}
if (sc->ohci_sc.sc_irq_res) {
bus_release_resource(dev, SYS_RES_IRQ, 0,
sc->ohci_sc.sc_irq_res);
sc->ohci_sc.sc_irq_res = NULL;
}
if (sc->ohci_sc.sc_io_res) {
bus_release_resource(dev, SYS_RES_MEMORY, 0,
sc->ohci_sc.sc_io_res);
sc->ohci_sc.sc_io_res = NULL;
}
usb_bus_mem_free_all(&sc->ohci_sc.sc_bus, &ohci_iterate_hw_softc);
#ifdef EXT_RESOURCES
/* Disable phy */
if (sc->phy) {
err = phy_disable(sc->phy);
if (err != 0)
device_printf(dev, "Could not disable phy\n");
phy_release(sc->phy);
}
/* Disable clock */
TAILQ_FOREACH_SAFE(clk, &sc->clk_list, next, clk_tmp) {
err = clk_disable(clk->clk);
if (err != 0)
device_printf(dev, "Could not disable clock %s\n",
clk_get_name(clk->clk));
err = clk_release(clk->clk);
if (err != 0)
device_printf(dev, "Could not release clock %s\n",
clk_get_name(clk->clk));
TAILQ_REMOVE(&sc->clk_list, clk, next);
free(clk, M_DEVBUF);
}
/* De-assert reset */
if (sc->rst) {
err = hwreset_assert(sc->rst);
if (err != 0)
device_printf(dev, "Could not assert reset\n");
hwreset_release(sc->rst);
}
#endif
if (GENERIC_USB_DEINIT(dev) != 0)
return (ENXIO);
return (0);
}
static device_method_t generic_ohci_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, generic_ohci_probe),
DEVMETHOD(device_attach, generic_ohci_attach),
DEVMETHOD(device_detach, generic_ohci_detach),
DEVMETHOD(device_suspend, bus_generic_suspend),
DEVMETHOD(device_resume, bus_generic_resume),
DEVMETHOD(device_shutdown, bus_generic_shutdown),
DEVMETHOD_END
};
driver_t generic_ohci_driver = {
.name = "ohci",
.methods = generic_ohci_methods,
.size = sizeof(struct generic_ohci_softc),
};
static devclass_t generic_ohci_devclass;
DRIVER_MODULE(ohci, simplebus, generic_ohci_driver,
generic_ohci_devclass, 0, 0);
MODULE_DEPEND(ohci, usb, 1, 1, 1);
Index: projects/clang800-import/sys/dev/usb/net/if_ure.c
===================================================================
--- projects/clang800-import/sys/dev/usb/net/if_ure.c (revision 343955)
+++ projects/clang800-import/sys/dev/usb/net/if_ure.c (revision 343956)
@@ -1,1263 +1,1264 @@
/*-
* Copyright (c) 2015-2016 Kevin Lo <kevlo@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/condvar.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <sys/unistd.h>
#include <net/if.h>
#include <net/if_var.h>
#include <dev/usb/usb.h>
#include <dev/usb/usbdi.h>
#include <dev/usb/usbdi_util.h>
#include "usbdevs.h"
#define USB_DEBUG_VAR ure_debug
#include <dev/usb/usb_debug.h>
#include <dev/usb/usb_process.h>
#include <dev/usb/net/usb_ethernet.h>
#include <dev/usb/net/if_urereg.h>
#ifdef USB_DEBUG
static int ure_debug = 0;
static SYSCTL_NODE(_hw_usb, OID_AUTO, ure, CTLFLAG_RW, 0, "USB ure");
SYSCTL_INT(_hw_usb_ure, OID_AUTO, debug, CTLFLAG_RWTUN, &ure_debug, 0,
"Debug level");
#endif
/*
* Various supported device vendors/products.
*/
static const STRUCT_USB_HOST_ID ure_devs[] = {
#define URE_DEV(v,p,i) { USB_VPI(USB_VENDOR_##v, USB_PRODUCT_##v##_##p, i) }
URE_DEV(LENOVO, RTL8153, 0),
URE_DEV(LENOVO, TBT3LAN, 0),
+ URE_DEV(LENOVO, ONELINK, 0),
URE_DEV(LENOVO, USBCLAN, 0),
URE_DEV(NVIDIA, RTL8153, 0),
URE_DEV(REALTEK, RTL8152, URE_FLAG_8152),
URE_DEV(REALTEK, RTL8153, 0),
URE_DEV(TPLINK, RTL8153, 0),
#undef URE_DEV
};
static device_probe_t ure_probe;
static device_attach_t ure_attach;
static device_detach_t ure_detach;
static usb_callback_t ure_bulk_read_callback;
static usb_callback_t ure_bulk_write_callback;
static miibus_readreg_t ure_miibus_readreg;
static miibus_writereg_t ure_miibus_writereg;
static miibus_statchg_t ure_miibus_statchg;
static uether_fn_t ure_attach_post;
static uether_fn_t ure_init;
static uether_fn_t ure_stop;
static uether_fn_t ure_start;
static uether_fn_t ure_tick;
static uether_fn_t ure_rxfilter;
static int ure_ctl(struct ure_softc *, uint8_t, uint16_t, uint16_t,
void *, int);
static int ure_read_mem(struct ure_softc *, uint16_t, uint16_t, void *,
int);
static int ure_write_mem(struct ure_softc *, uint16_t, uint16_t, void *,
int);
static uint8_t ure_read_1(struct ure_softc *, uint16_t, uint16_t);
static uint16_t ure_read_2(struct ure_softc *, uint16_t, uint16_t);
static uint32_t ure_read_4(struct ure_softc *, uint16_t, uint16_t);
static int ure_write_1(struct ure_softc *, uint16_t, uint16_t, uint32_t);
static int ure_write_2(struct ure_softc *, uint16_t, uint16_t, uint32_t);
static int ure_write_4(struct ure_softc *, uint16_t, uint16_t, uint32_t);
static uint16_t ure_ocp_reg_read(struct ure_softc *, uint16_t);
static void ure_ocp_reg_write(struct ure_softc *, uint16_t, uint16_t);
static void ure_read_chipver(struct ure_softc *);
static int ure_attach_post_sub(struct usb_ether *);
static void ure_reset(struct ure_softc *);
static int ure_ifmedia_upd(struct ifnet *);
static void ure_ifmedia_sts(struct ifnet *, struct ifmediareq *);
static int ure_ioctl(struct ifnet *, u_long, caddr_t);
static void ure_rtl8152_init(struct ure_softc *);
static void ure_rtl8153_init(struct ure_softc *);
static void ure_disable_teredo(struct ure_softc *);
static void ure_init_fifo(struct ure_softc *);
static const struct usb_config ure_config[URE_N_TRANSFER] = {
[URE_BULK_DT_WR] = {
.type = UE_BULK,
.endpoint = UE_ADDR_ANY,
.direction = UE_DIR_OUT,
.bufsize = MCLBYTES,
.flags = {.pipe_bof = 1,.force_short_xfer = 1,},
.callback = ure_bulk_write_callback,
.timeout = 10000, /* 10 seconds */
},
[URE_BULK_DT_RD] = {
.type = UE_BULK,
.endpoint = UE_ADDR_ANY,
.direction = UE_DIR_IN,
.bufsize = 16384,
.flags = {.pipe_bof = 1,.short_xfer_ok = 1,},
.callback = ure_bulk_read_callback,
.timeout = 0, /* no timeout */
},
};
static device_method_t ure_methods[] = {
/* Device interface. */
DEVMETHOD(device_probe, ure_probe),
DEVMETHOD(device_attach, ure_attach),
DEVMETHOD(device_detach, ure_detach),
/* MII interface. */
DEVMETHOD(miibus_readreg, ure_miibus_readreg),
DEVMETHOD(miibus_writereg, ure_miibus_writereg),
DEVMETHOD(miibus_statchg, ure_miibus_statchg),
DEVMETHOD_END
};
static driver_t ure_driver = {
.name = "ure",
.methods = ure_methods,
.size = sizeof(struct ure_softc),
};
static devclass_t ure_devclass;
DRIVER_MODULE(ure, uhub, ure_driver, ure_devclass, NULL, NULL);
DRIVER_MODULE(miibus, ure, miibus_driver, miibus_devclass, NULL, NULL);
MODULE_DEPEND(ure, uether, 1, 1, 1);
MODULE_DEPEND(ure, usb, 1, 1, 1);
MODULE_DEPEND(ure, ether, 1, 1, 1);
MODULE_DEPEND(ure, miibus, 1, 1, 1);
MODULE_VERSION(ure, 1);
USB_PNP_HOST_INFO(ure_devs);
static const struct usb_ether_methods ure_ue_methods = {
.ue_attach_post = ure_attach_post,
.ue_attach_post_sub = ure_attach_post_sub,
.ue_start = ure_start,
.ue_init = ure_init,
.ue_stop = ure_stop,
.ue_tick = ure_tick,
.ue_setmulti = ure_rxfilter,
.ue_setpromisc = ure_rxfilter,
.ue_mii_upd = ure_ifmedia_upd,
.ue_mii_sts = ure_ifmedia_sts,
};
static int
ure_ctl(struct ure_softc *sc, uint8_t rw, uint16_t val, uint16_t index,
void *buf, int len)
{
struct usb_device_request req;
URE_LOCK_ASSERT(sc, MA_OWNED);
if (rw == URE_CTL_WRITE)
req.bmRequestType = UT_WRITE_VENDOR_DEVICE;
else
req.bmRequestType = UT_READ_VENDOR_DEVICE;
req.bRequest = UR_SET_ADDRESS;
USETW(req.wValue, val);
USETW(req.wIndex, index);
USETW(req.wLength, len);
return (uether_do_request(&sc->sc_ue, &req, buf, 1000));
}
static int
ure_read_mem(struct ure_softc *sc, uint16_t addr, uint16_t index,
void *buf, int len)
{
return (ure_ctl(sc, URE_CTL_READ, addr, index, buf, len));
}
static int
ure_write_mem(struct ure_softc *sc, uint16_t addr, uint16_t index,
void *buf, int len)
{
return (ure_ctl(sc, URE_CTL_WRITE, addr, index, buf, len));
}
static uint8_t
ure_read_1(struct ure_softc *sc, uint16_t reg, uint16_t index)
{
uint32_t val;
uint8_t temp[4];
uint8_t shift;
shift = (reg & 3) << 3;
reg &= ~3;
ure_read_mem(sc, reg, index, &temp, 4);
val = UGETDW(temp);
val >>= shift;
return (val & 0xff);
}
static uint16_t
ure_read_2(struct ure_softc *sc, uint16_t reg, uint16_t index)
{
uint32_t val;
uint8_t temp[4];
uint8_t shift;
shift = (reg & 2) << 3;
reg &= ~3;
ure_read_mem(sc, reg, index, &temp, 4);
val = UGETDW(temp);
val >>= shift;
return (val & 0xffff);
}
static uint32_t
ure_read_4(struct ure_softc *sc, uint16_t reg, uint16_t index)
{
uint8_t temp[4];
ure_read_mem(sc, reg, index, &temp, 4);
return (UGETDW(temp));
}
static int
ure_write_1(struct ure_softc *sc, uint16_t reg, uint16_t index, uint32_t val)
{
uint16_t byen;
uint8_t temp[4];
uint8_t shift;
byen = URE_BYTE_EN_BYTE;
shift = reg & 3;
val &= 0xff;
if (reg & 3) {
byen <<= shift;
val <<= (shift << 3);
reg &= ~3;
}
USETDW(temp, val);
return (ure_write_mem(sc, reg, index | byen, &temp, 4));
}
static int
ure_write_2(struct ure_softc *sc, uint16_t reg, uint16_t index, uint32_t val)
{
uint16_t byen;
uint8_t temp[4];
uint8_t shift;
byen = URE_BYTE_EN_WORD;
shift = reg & 2;
val &= 0xffff;
if (reg & 2) {
byen <<= shift;
val <<= (shift << 3);
reg &= ~3;
}
USETDW(temp, val);
return (ure_write_mem(sc, reg, index | byen, &temp, 4));
}
static int
ure_write_4(struct ure_softc *sc, uint16_t reg, uint16_t index, uint32_t val)
{
uint8_t temp[4];
USETDW(temp, val);
return (ure_write_mem(sc, reg, index | URE_BYTE_EN_DWORD, &temp, 4));
}
static uint16_t
ure_ocp_reg_read(struct ure_softc *sc, uint16_t addr)
{
uint16_t reg;
ure_write_2(sc, URE_PLA_OCP_GPHY_BASE, URE_MCU_TYPE_PLA, addr & 0xf000);
reg = (addr & 0x0fff) | 0xb000;
return (ure_read_2(sc, reg, URE_MCU_TYPE_PLA));
}
static void
ure_ocp_reg_write(struct ure_softc *sc, uint16_t addr, uint16_t data)
{
uint16_t reg;
ure_write_2(sc, URE_PLA_OCP_GPHY_BASE, URE_MCU_TYPE_PLA, addr & 0xf000);
reg = (addr & 0x0fff) | 0xb000;
ure_write_2(sc, reg, URE_MCU_TYPE_PLA, data);
}
static int
ure_miibus_readreg(device_t dev, int phy, int reg)
{
struct ure_softc *sc;
uint16_t val;
int locked;
sc = device_get_softc(dev);
locked = mtx_owned(&sc->sc_mtx);
if (!locked)
URE_LOCK(sc);
/* Let the rgephy driver read the URE_GMEDIASTAT register. */
if (reg == URE_GMEDIASTAT) {
if (!locked)
URE_UNLOCK(sc);
return (ure_read_1(sc, URE_GMEDIASTAT, URE_MCU_TYPE_PLA));
}
val = ure_ocp_reg_read(sc, URE_OCP_BASE_MII + reg * 2);
if (!locked)
URE_UNLOCK(sc);
return (val);
}
static int
ure_miibus_writereg(device_t dev, int phy, int reg, int val)
{
struct ure_softc *sc;
int locked;
sc = device_get_softc(dev);
if (sc->sc_phyno != phy)
return (0);
locked = mtx_owned(&sc->sc_mtx);
if (!locked)
URE_LOCK(sc);
ure_ocp_reg_write(sc, URE_OCP_BASE_MII + reg * 2, val);
if (!locked)
URE_UNLOCK(sc);
return (0);
}
static void
ure_miibus_statchg(device_t dev)
{
struct ure_softc *sc;
struct mii_data *mii;
struct ifnet *ifp;
int locked;
sc = device_get_softc(dev);
mii = GET_MII(sc);
locked = mtx_owned(&sc->sc_mtx);
if (!locked)
URE_LOCK(sc);
ifp = uether_getifp(&sc->sc_ue);
if (mii == NULL || ifp == NULL ||
(ifp->if_drv_flags & IFF_DRV_RUNNING) == 0)
goto done;
sc->sc_flags &= ~URE_FLAG_LINK;
if ((mii->mii_media_status & (IFM_ACTIVE | IFM_AVALID)) ==
(IFM_ACTIVE | IFM_AVALID)) {
switch (IFM_SUBTYPE(mii->mii_media_active)) {
case IFM_10_T:
case IFM_100_TX:
sc->sc_flags |= URE_FLAG_LINK;
break;
case IFM_1000_T:
if ((sc->sc_flags & URE_FLAG_8152) != 0)
break;
sc->sc_flags |= URE_FLAG_LINK;
break;
default:
break;
}
}
/* Lost link, do nothing. */
if ((sc->sc_flags & URE_FLAG_LINK) == 0)
goto done;
done:
if (!locked)
URE_UNLOCK(sc);
}
/*
 * Probe for an RTL8152/RTL8153 chip.
*/
static int
ure_probe(device_t dev)
{
struct usb_attach_arg *uaa;
uaa = device_get_ivars(dev);
if (uaa->usb_mode != USB_MODE_HOST)
return (ENXIO);
if (uaa->info.bConfigIndex != URE_CONFIG_IDX)
return (ENXIO);
if (uaa->info.bIfaceIndex != URE_IFACE_IDX)
return (ENXIO);
return (usbd_lookup_id_by_uaa(ure_devs, sizeof(ure_devs), uaa));
}
/*
* Attach the interface. Allocate softc structures, do ifmedia
* setup and ethernet/BPF attach.
*/
static int
ure_attach(device_t dev)
{
struct usb_attach_arg *uaa = device_get_ivars(dev);
struct ure_softc *sc = device_get_softc(dev);
struct usb_ether *ue = &sc->sc_ue;
uint8_t iface_index;
int error;
sc->sc_flags = USB_GET_DRIVER_INFO(uaa);
device_set_usb_desc(dev);
mtx_init(&sc->sc_mtx, device_get_nameunit(dev), NULL, MTX_DEF);
iface_index = URE_IFACE_IDX;
error = usbd_transfer_setup(uaa->device, &iface_index, sc->sc_xfer,
ure_config, URE_N_TRANSFER, sc, &sc->sc_mtx);
if (error != 0) {
device_printf(dev, "allocating USB transfers failed\n");
goto detach;
}
ue->ue_sc = sc;
ue->ue_dev = dev;
ue->ue_udev = uaa->device;
ue->ue_mtx = &sc->sc_mtx;
ue->ue_methods = &ure_ue_methods;
error = uether_ifattach(ue);
if (error != 0) {
device_printf(dev, "could not attach interface\n");
goto detach;
}
return (0); /* success */
detach:
ure_detach(dev);
return (ENXIO); /* failure */
}
static int
ure_detach(device_t dev)
{
struct ure_softc *sc = device_get_softc(dev);
struct usb_ether *ue = &sc->sc_ue;
usbd_transfer_unsetup(sc->sc_xfer, URE_N_TRANSFER);
uether_ifdetach(ue);
mtx_destroy(&sc->sc_mtx);
return (0);
}
static void
ure_bulk_read_callback(struct usb_xfer *xfer, usb_error_t error)
{
struct ure_softc *sc = usbd_xfer_softc(xfer);
struct usb_ether *ue = &sc->sc_ue;
struct ifnet *ifp = uether_getifp(ue);
struct usb_page_cache *pc;
struct ure_rxpkt pkt;
int actlen, len;
usbd_xfer_status(xfer, &actlen, NULL, NULL, NULL);
switch (USB_GET_STATE(xfer)) {
case USB_ST_TRANSFERRED:
if (actlen < (int)(sizeof(pkt))) {
if_inc_counter(ifp, IFCOUNTER_IERRORS, 1);
goto tr_setup;
}
pc = usbd_xfer_get_frame(xfer, 0);
usbd_copy_out(pc, 0, &pkt, sizeof(pkt));
len = le32toh(pkt.ure_pktlen) & URE_RXPKT_LEN_MASK;
len -= ETHER_CRC_LEN;
if (actlen < (int)(len + sizeof(pkt))) {
if_inc_counter(ifp, IFCOUNTER_IERRORS, 1);
goto tr_setup;
}
uether_rxbuf(ue, pc, sizeof(pkt), len);
/* FALLTHROUGH */
case USB_ST_SETUP:
tr_setup:
usbd_xfer_set_frame_len(xfer, 0, usbd_xfer_max_len(xfer));
usbd_transfer_submit(xfer);
uether_rxflush(ue);
return;
default: /* Error */
DPRINTF("bulk read error, %s\n",
usbd_errstr(error));
if (error != USB_ERR_CANCELLED) {
/* try to clear stall first */
usbd_xfer_set_stall(xfer);
goto tr_setup;
}
return;
}
}
static void
ure_bulk_write_callback(struct usb_xfer *xfer, usb_error_t error)
{
struct ure_softc *sc = usbd_xfer_softc(xfer);
struct ifnet *ifp = uether_getifp(&sc->sc_ue);
struct usb_page_cache *pc;
struct mbuf *m;
struct ure_txpkt txpkt;
int len, pos;
switch (USB_GET_STATE(xfer)) {
case USB_ST_TRANSFERRED:
DPRINTFN(11, "transfer complete\n");
ifp->if_drv_flags &= ~IFF_DRV_OACTIVE;
/* FALLTHROUGH */
case USB_ST_SETUP:
tr_setup:
if ((sc->sc_flags & URE_FLAG_LINK) == 0 ||
(ifp->if_drv_flags & IFF_DRV_OACTIVE) != 0) {
			/*
			 * Don't send anything if there is no link or a
			 * transfer is already outstanding.
			 */
return;
}
IFQ_DRV_DEQUEUE(&ifp->if_snd, m);
if (m == NULL)
break;
pos = 0;
len = m->m_pkthdr.len;
pc = usbd_xfer_get_frame(xfer, 0);
memset(&txpkt, 0, sizeof(txpkt));
txpkt.ure_pktlen = htole32((len & URE_TXPKT_LEN_MASK) |
URE_TKPKT_TX_FS | URE_TKPKT_TX_LS);
usbd_copy_in(pc, pos, &txpkt, sizeof(txpkt));
pos += sizeof(txpkt);
usbd_m_copy_in(pc, pos, m, 0, m->m_pkthdr.len);
pos += m->m_pkthdr.len;
if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1);
		/*
		 * If there's a BPF listener, bounce a copy
		 * of this frame to it.
		 */
BPF_MTAP(ifp, m);
m_freem(m);
/* Set frame length. */
usbd_xfer_set_frame_len(xfer, 0, pos);
usbd_transfer_submit(xfer);
ifp->if_drv_flags |= IFF_DRV_OACTIVE;
return;
default: /* Error */
DPRINTFN(11, "transfer error, %s\n",
usbd_errstr(error));
if_inc_counter(ifp, IFCOUNTER_OERRORS, 1);
ifp->if_drv_flags &= ~IFF_DRV_OACTIVE;
if (error != USB_ERR_CANCELLED) {
/* try to clear stall first */
usbd_xfer_set_stall(xfer);
goto tr_setup;
}
return;
}
}
static void
ure_read_chipver(struct ure_softc *sc)
{
uint16_t ver;
ver = ure_read_2(sc, URE_PLA_TCR1, URE_MCU_TYPE_PLA) & URE_VERSION_MASK;
switch (ver) {
case 0x4c00:
sc->sc_chip |= URE_CHIP_VER_4C00;
break;
case 0x4c10:
sc->sc_chip |= URE_CHIP_VER_4C10;
break;
case 0x5c00:
sc->sc_chip |= URE_CHIP_VER_5C00;
break;
case 0x5c10:
sc->sc_chip |= URE_CHIP_VER_5C10;
break;
case 0x5c20:
sc->sc_chip |= URE_CHIP_VER_5C20;
break;
case 0x5c30:
sc->sc_chip |= URE_CHIP_VER_5C30;
break;
default:
device_printf(sc->sc_ue.ue_dev,
"unknown version 0x%04x\n", ver);
break;
}
}
static void
ure_attach_post(struct usb_ether *ue)
{
struct ure_softc *sc = uether_getsc(ue);
sc->sc_phyno = 0;
/* Determine the chip version. */
ure_read_chipver(sc);
/* Initialize controller and get station address. */
if (sc->sc_flags & URE_FLAG_8152)
ure_rtl8152_init(sc);
else
ure_rtl8153_init(sc);
if (sc->sc_chip & URE_CHIP_VER_4C00)
ure_read_mem(sc, URE_PLA_IDR, URE_MCU_TYPE_PLA,
ue->ue_eaddr, 8);
else
ure_read_mem(sc, URE_PLA_BACKUP, URE_MCU_TYPE_PLA,
ue->ue_eaddr, 8);
}
static int
ure_attach_post_sub(struct usb_ether *ue)
{
struct ure_softc *sc;
struct ifnet *ifp;
int error;
sc = uether_getsc(ue);
ifp = ue->ue_ifp;
ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
ifp->if_start = uether_start;
ifp->if_ioctl = ure_ioctl;
ifp->if_init = uether_init;
IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen);
ifp->if_snd.ifq_drv_maxlen = ifqmaxlen;
IFQ_SET_READY(&ifp->if_snd);
mtx_lock(&Giant);
error = mii_attach(ue->ue_dev, &ue->ue_miibus, ifp,
uether_ifmedia_upd, ue->ue_methods->ue_mii_sts,
BMSR_DEFCAPMASK, sc->sc_phyno, MII_OFFSET_ANY, 0);
mtx_unlock(&Giant);
return (error);
}
static void
ure_init(struct usb_ether *ue)
{
struct ure_softc *sc = uether_getsc(ue);
struct ifnet *ifp = uether_getifp(ue);
URE_LOCK_ASSERT(sc, MA_OWNED);
if ((ifp->if_drv_flags & IFF_DRV_RUNNING) != 0)
return;
/* Cancel pending I/O. */
ure_stop(ue);
ure_reset(sc);
/* Set MAC address. */
ure_write_mem(sc, URE_PLA_IDR, URE_MCU_TYPE_PLA | URE_BYTE_EN_SIX_BYTES,
IF_LLADDR(ifp), 8);
/* Reset the packet filter. */
ure_write_2(sc, URE_PLA_FMC, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_FMC, URE_MCU_TYPE_PLA) &
~URE_FMC_FCR_MCU_EN);
ure_write_2(sc, URE_PLA_FMC, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_FMC, URE_MCU_TYPE_PLA) |
URE_FMC_FCR_MCU_EN);
/* Enable transmit and receive. */
ure_write_1(sc, URE_PLA_CR, URE_MCU_TYPE_PLA,
ure_read_1(sc, URE_PLA_CR, URE_MCU_TYPE_PLA) | URE_CR_RE |
URE_CR_TE);
ure_write_2(sc, URE_PLA_MISC_1, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_MISC_1, URE_MCU_TYPE_PLA) &
~URE_RXDY_GATED_EN);
/* Configure RX filters. */
ure_rxfilter(ue);
usbd_xfer_set_stall(sc->sc_xfer[URE_BULK_DT_WR]);
/* Indicate we are up and running. */
ifp->if_drv_flags |= IFF_DRV_RUNNING;
/* Switch to selected media. */
ure_ifmedia_upd(ifp);
}
static void
ure_tick(struct usb_ether *ue)
{
struct ure_softc *sc = uether_getsc(ue);
struct mii_data *mii = GET_MII(sc);
URE_LOCK_ASSERT(sc, MA_OWNED);
mii_tick(mii);
if ((sc->sc_flags & URE_FLAG_LINK) == 0
&& mii->mii_media_status & IFM_ACTIVE &&
IFM_SUBTYPE(mii->mii_media_active) != IFM_NONE) {
sc->sc_flags |= URE_FLAG_LINK;
ure_start(ue);
}
}
/*
* Program the 64-bit multicast hash filter.
*/
static void
ure_rxfilter(struct usb_ether *ue)
{
struct ure_softc *sc = uether_getsc(ue);
struct ifnet *ifp = uether_getifp(ue);
struct ifmultiaddr *ifma;
uint32_t h, rxmode;
uint32_t hashes[2] = { 0, 0 };
URE_LOCK_ASSERT(sc, MA_OWNED);
rxmode = URE_RCR_APM;
if (ifp->if_flags & IFF_BROADCAST)
rxmode |= URE_RCR_AB;
if (ifp->if_flags & (IFF_ALLMULTI | IFF_PROMISC)) {
if (ifp->if_flags & IFF_PROMISC)
rxmode |= URE_RCR_AAP;
rxmode |= URE_RCR_AM;
hashes[0] = hashes[1] = 0xffffffff;
goto done;
}
rxmode |= URE_RCR_AM;
if_maddr_rlock(ifp);
CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
if (ifma->ifma_addr->sa_family != AF_LINK)
continue;
h = ether_crc32_be(LLADDR((struct sockaddr_dl *)
ifma->ifma_addr), ETHER_ADDR_LEN) >> 26;
if (h < 32)
hashes[0] |= (1 << h);
else
hashes[1] |= (1 << (h - 32));
}
if_maddr_runlock(ifp);
h = bswap32(hashes[0]);
hashes[0] = bswap32(hashes[1]);
hashes[1] = h;
done:
ure_write_4(sc, URE_PLA_MAR0, URE_MCU_TYPE_PLA, hashes[0]);
ure_write_4(sc, URE_PLA_MAR4, URE_MCU_TYPE_PLA, hashes[1]);
ure_write_4(sc, URE_PLA_RCR, URE_MCU_TYPE_PLA, rxmode);
}
static void
ure_start(struct usb_ether *ue)
{
struct ure_softc *sc = uether_getsc(ue);
/*
* start the USB transfers, if not already started:
*/
usbd_transfer_start(sc->sc_xfer[URE_BULK_DT_RD]);
usbd_transfer_start(sc->sc_xfer[URE_BULK_DT_WR]);
}
static void
ure_reset(struct ure_softc *sc)
{
int i;
ure_write_1(sc, URE_PLA_CR, URE_MCU_TYPE_PLA, URE_CR_RST);
for (i = 0; i < URE_TIMEOUT; i++) {
if (!(ure_read_1(sc, URE_PLA_CR, URE_MCU_TYPE_PLA) &
URE_CR_RST))
break;
uether_pause(&sc->sc_ue, hz / 100);
}
if (i == URE_TIMEOUT)
device_printf(sc->sc_ue.ue_dev, "reset never completed\n");
}
/*
* Set media options.
*/
static int
ure_ifmedia_upd(struct ifnet *ifp)
{
struct ure_softc *sc = ifp->if_softc;
struct mii_data *mii = GET_MII(sc);
struct mii_softc *miisc;
int error;
URE_LOCK_ASSERT(sc, MA_OWNED);
LIST_FOREACH(miisc, &mii->mii_phys, mii_list)
PHY_RESET(miisc);
error = mii_mediachg(mii);
return (error);
}
/*
* Report current media status.
*/
static void
ure_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr)
{
struct ure_softc *sc;
struct mii_data *mii;
sc = ifp->if_softc;
mii = GET_MII(sc);
URE_LOCK(sc);
mii_pollstat(mii);
ifmr->ifm_active = mii->mii_media_active;
ifmr->ifm_status = mii->mii_media_status;
URE_UNLOCK(sc);
}
static int
ure_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data)
{
struct usb_ether *ue = ifp->if_softc;
struct ure_softc *sc;
struct ifreq *ifr;
int error, mask, reinit;
sc = uether_getsc(ue);
ifr = (struct ifreq *)data;
error = 0;
reinit = 0;
if (cmd == SIOCSIFCAP) {
URE_LOCK(sc);
mask = ifr->ifr_reqcap ^ ifp->if_capenable;
if (reinit > 0 && ifp->if_drv_flags & IFF_DRV_RUNNING)
ifp->if_drv_flags &= ~IFF_DRV_RUNNING;
else
reinit = 0;
URE_UNLOCK(sc);
if (reinit > 0)
uether_init(ue);
} else
error = uether_ioctl(ifp, cmd, data);
return (error);
}
static void
ure_rtl8152_init(struct ure_softc *sc)
{
uint32_t pwrctrl;
/* Disable ALDPS. */
ure_ocp_reg_write(sc, URE_OCP_ALDPS_CONFIG, URE_ENPDNPS | URE_LINKENA |
URE_DIS_SDSAVE);
uether_pause(&sc->sc_ue, hz / 50);
if (sc->sc_chip & URE_CHIP_VER_4C00) {
ure_write_2(sc, URE_PLA_LED_FEATURE, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_LED_FEATURE, URE_MCU_TYPE_PLA) &
~URE_LED_MODE_MASK);
}
ure_write_2(sc, URE_USB_UPS_CTRL, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_UPS_CTRL, URE_MCU_TYPE_USB) &
~URE_POWER_CUT);
ure_write_2(sc, URE_USB_PM_CTRL_STATUS, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_PM_CTRL_STATUS, URE_MCU_TYPE_USB) &
~URE_RESUME_INDICATE);
ure_write_2(sc, URE_PLA_PHY_PWR, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_PHY_PWR, URE_MCU_TYPE_PLA) |
URE_TX_10M_IDLE_EN | URE_PFM_PWM_SWITCH);
pwrctrl = ure_read_4(sc, URE_PLA_MAC_PWR_CTRL, URE_MCU_TYPE_PLA);
pwrctrl &= ~URE_MCU_CLK_RATIO_MASK;
pwrctrl |= URE_MCU_CLK_RATIO | URE_D3_CLK_GATED_EN;
ure_write_4(sc, URE_PLA_MAC_PWR_CTRL, URE_MCU_TYPE_PLA, pwrctrl);
ure_write_2(sc, URE_PLA_GPHY_INTR_IMR, URE_MCU_TYPE_PLA,
URE_GPHY_STS_MSK | URE_SPEED_DOWN_MSK | URE_SPDWN_RXDV_MSK |
URE_SPDWN_LINKCHG_MSK);
/* Disable Rx aggregation. */
ure_write_2(sc, URE_USB_USB_CTRL, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_USB_CTRL, URE_MCU_TYPE_USB) |
URE_RX_AGG_DISABLE);
/* Disable ALDPS. */
ure_ocp_reg_write(sc, URE_OCP_ALDPS_CONFIG, URE_ENPDNPS | URE_LINKENA |
URE_DIS_SDSAVE);
uether_pause(&sc->sc_ue, hz / 50);
ure_init_fifo(sc);
ure_write_1(sc, URE_USB_TX_AGG, URE_MCU_TYPE_USB,
URE_TX_AGG_MAX_THRESHOLD);
ure_write_4(sc, URE_USB_RX_BUF_TH, URE_MCU_TYPE_USB, URE_RX_THR_HIGH);
ure_write_4(sc, URE_USB_TX_DMA, URE_MCU_TYPE_USB,
URE_TEST_MODE_DISABLE | URE_TX_SIZE_ADJUST1);
}
static void
ure_rtl8153_init(struct ure_softc *sc)
{
uint16_t val;
uint8_t u1u2[8];
int i;
/* Disable ALDPS. */
ure_ocp_reg_write(sc, URE_OCP_POWER_CFG,
ure_ocp_reg_read(sc, URE_OCP_POWER_CFG) & ~URE_EN_ALDPS);
uether_pause(&sc->sc_ue, hz / 50);
memset(u1u2, 0x00, sizeof(u1u2));
ure_write_mem(sc, URE_USB_TOLERANCE,
URE_MCU_TYPE_USB | URE_BYTE_EN_SIX_BYTES, u1u2, sizeof(u1u2));
for (i = 0; i < URE_TIMEOUT; i++) {
if (ure_read_2(sc, URE_PLA_BOOT_CTRL, URE_MCU_TYPE_PLA) &
URE_AUTOLOAD_DONE)
break;
uether_pause(&sc->sc_ue, hz / 100);
}
if (i == URE_TIMEOUT)
device_printf(sc->sc_ue.ue_dev,
"timeout waiting for chip autoload\n");
for (i = 0; i < URE_TIMEOUT; i++) {
val = ure_ocp_reg_read(sc, URE_OCP_PHY_STATUS) &
URE_PHY_STAT_MASK;
if (val == URE_PHY_STAT_LAN_ON || val == URE_PHY_STAT_PWRDN)
break;
uether_pause(&sc->sc_ue, hz / 100);
}
if (i == URE_TIMEOUT)
device_printf(sc->sc_ue.ue_dev,
"timeout waiting for phy to stabilize\n");
ure_write_2(sc, URE_USB_U2P3_CTRL, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_U2P3_CTRL, URE_MCU_TYPE_USB) &
~URE_U2P3_ENABLE);
if (sc->sc_chip & URE_CHIP_VER_5C10) {
val = ure_read_2(sc, URE_USB_SSPHYLINK2, URE_MCU_TYPE_USB);
val &= ~URE_PWD_DN_SCALE_MASK;
val |= URE_PWD_DN_SCALE(96);
ure_write_2(sc, URE_USB_SSPHYLINK2, URE_MCU_TYPE_USB, val);
ure_write_1(sc, URE_USB_USB2PHY, URE_MCU_TYPE_USB,
ure_read_1(sc, URE_USB_USB2PHY, URE_MCU_TYPE_USB) |
URE_USB2PHY_L1 | URE_USB2PHY_SUSPEND);
} else if (sc->sc_chip & URE_CHIP_VER_5C20) {
ure_write_1(sc, URE_PLA_DMY_REG0, URE_MCU_TYPE_PLA,
ure_read_1(sc, URE_PLA_DMY_REG0, URE_MCU_TYPE_PLA) &
~URE_ECM_ALDPS);
}
if (sc->sc_chip & (URE_CHIP_VER_5C20 | URE_CHIP_VER_5C30)) {
val = ure_read_1(sc, URE_USB_CSR_DUMMY1, URE_MCU_TYPE_USB);
if (ure_read_2(sc, URE_USB_BURST_SIZE, URE_MCU_TYPE_USB) ==
0)
val &= ~URE_DYNAMIC_BURST;
else
val |= URE_DYNAMIC_BURST;
ure_write_1(sc, URE_USB_CSR_DUMMY1, URE_MCU_TYPE_USB, val);
}
ure_write_1(sc, URE_USB_CSR_DUMMY2, URE_MCU_TYPE_USB,
ure_read_1(sc, URE_USB_CSR_DUMMY2, URE_MCU_TYPE_USB) |
URE_EP4_FULL_FC);
ure_write_2(sc, URE_USB_WDT11_CTRL, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_WDT11_CTRL, URE_MCU_TYPE_USB) &
~URE_TIMER11_EN);
ure_write_2(sc, URE_PLA_LED_FEATURE, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_LED_FEATURE, URE_MCU_TYPE_PLA) &
~URE_LED_MODE_MASK);
if ((sc->sc_chip & URE_CHIP_VER_5C10) &&
usbd_get_speed(sc->sc_ue.ue_udev) != USB_SPEED_SUPER)
val = URE_LPM_TIMER_500MS;
else
val = URE_LPM_TIMER_500US;
ure_write_1(sc, URE_USB_LPM_CTRL, URE_MCU_TYPE_USB,
val | URE_FIFO_EMPTY_1FB | URE_ROK_EXIT_LPM);
val = ure_read_2(sc, URE_USB_AFE_CTRL2, URE_MCU_TYPE_USB);
val &= ~URE_SEN_VAL_MASK;
val |= URE_SEN_VAL_NORMAL | URE_SEL_RXIDLE;
ure_write_2(sc, URE_USB_AFE_CTRL2, URE_MCU_TYPE_USB, val);
ure_write_2(sc, URE_USB_CONNECT_TIMER, URE_MCU_TYPE_USB, 0x0001);
ure_write_2(sc, URE_USB_POWER_CUT, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_POWER_CUT, URE_MCU_TYPE_USB) &
~(URE_PWR_EN | URE_PHASE2_EN));
ure_write_2(sc, URE_USB_MISC_0, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_MISC_0, URE_MCU_TYPE_USB) &
~URE_PCUT_STATUS);
memset(u1u2, 0xff, sizeof(u1u2));
ure_write_mem(sc, URE_USB_TOLERANCE,
URE_MCU_TYPE_USB | URE_BYTE_EN_SIX_BYTES, u1u2, sizeof(u1u2));
ure_write_2(sc, URE_PLA_MAC_PWR_CTRL, URE_MCU_TYPE_PLA,
URE_ALDPS_SPDWN_RATIO);
ure_write_2(sc, URE_PLA_MAC_PWR_CTRL2, URE_MCU_TYPE_PLA,
URE_EEE_SPDWN_RATIO);
ure_write_2(sc, URE_PLA_MAC_PWR_CTRL3, URE_MCU_TYPE_PLA,
URE_PKT_AVAIL_SPDWN_EN | URE_SUSPEND_SPDWN_EN |
URE_U1U2_SPDWN_EN | URE_L1_SPDWN_EN);
ure_write_2(sc, URE_PLA_MAC_PWR_CTRL4, URE_MCU_TYPE_PLA,
URE_PWRSAVE_SPDWN_EN | URE_RXDV_SPDWN_EN | URE_TX10MIDLE_EN |
URE_TP100_SPDWN_EN | URE_TP500_SPDWN_EN | URE_TP1000_SPDWN_EN |
URE_EEE_SPDWN_EN);
val = ure_read_2(sc, URE_USB_U2P3_CTRL, URE_MCU_TYPE_USB);
if (!(sc->sc_chip & (URE_CHIP_VER_5C00 | URE_CHIP_VER_5C10)))
val |= URE_U2P3_ENABLE;
else
val &= ~URE_U2P3_ENABLE;
ure_write_2(sc, URE_USB_U2P3_CTRL, URE_MCU_TYPE_USB, val);
memset(u1u2, 0x00, sizeof(u1u2));
ure_write_mem(sc, URE_USB_TOLERANCE,
URE_MCU_TYPE_USB | URE_BYTE_EN_SIX_BYTES, u1u2, sizeof(u1u2));
/* Disable ALDPS. */
ure_ocp_reg_write(sc, URE_OCP_POWER_CFG,
ure_ocp_reg_read(sc, URE_OCP_POWER_CFG) & ~URE_EN_ALDPS);
uether_pause(&sc->sc_ue, hz / 50);
ure_init_fifo(sc);
/* Disable Rx aggregation. */
ure_write_2(sc, URE_USB_USB_CTRL, URE_MCU_TYPE_USB,
ure_read_2(sc, URE_USB_USB_CTRL, URE_MCU_TYPE_USB) |
URE_RX_AGG_DISABLE);
val = ure_read_2(sc, URE_USB_U2P3_CTRL, URE_MCU_TYPE_USB);
if (!(sc->sc_chip & (URE_CHIP_VER_5C00 | URE_CHIP_VER_5C10)))
val |= URE_U2P3_ENABLE;
else
val &= ~URE_U2P3_ENABLE;
ure_write_2(sc, URE_USB_U2P3_CTRL, URE_MCU_TYPE_USB, val);
memset(u1u2, 0xff, sizeof(u1u2));
ure_write_mem(sc, URE_USB_TOLERANCE,
URE_MCU_TYPE_USB | URE_BYTE_EN_SIX_BYTES, u1u2, sizeof(u1u2));
}
static void
ure_stop(struct usb_ether *ue)
{
struct ure_softc *sc = uether_getsc(ue);
struct ifnet *ifp = uether_getifp(ue);
URE_LOCK_ASSERT(sc, MA_OWNED);
ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE);
sc->sc_flags &= ~URE_FLAG_LINK;
/*
* stop all the transfers, if not already stopped:
*/
usbd_transfer_stop(sc->sc_xfer[URE_BULK_DT_WR]);
usbd_transfer_stop(sc->sc_xfer[URE_BULK_DT_RD]);
}
static void
ure_disable_teredo(struct ure_softc *sc)
{
ure_write_4(sc, URE_PLA_TEREDO_CFG, URE_MCU_TYPE_PLA,
ure_read_4(sc, URE_PLA_TEREDO_CFG, URE_MCU_TYPE_PLA) &
~(URE_TEREDO_SEL | URE_TEREDO_RS_EVENT_MASK | URE_OOB_TEREDO_EN));
ure_write_2(sc, URE_PLA_WDT6_CTRL, URE_MCU_TYPE_PLA,
URE_WDT6_SET_MODE);
ure_write_2(sc, URE_PLA_REALWOW_TIMER, URE_MCU_TYPE_PLA, 0);
ure_write_4(sc, URE_PLA_TEREDO_TIMER, URE_MCU_TYPE_PLA, 0);
}
static void
ure_init_fifo(struct ure_softc *sc)
{
uint32_t rx_fifo1, rx_fifo2;
int i;
ure_write_2(sc, URE_PLA_MISC_1, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_MISC_1, URE_MCU_TYPE_PLA) |
URE_RXDY_GATED_EN);
ure_disable_teredo(sc);
ure_write_4(sc, URE_PLA_RCR, URE_MCU_TYPE_PLA,
ure_read_4(sc, URE_PLA_RCR, URE_MCU_TYPE_PLA) &
~URE_RCR_ACPT_ALL);
if (!(sc->sc_flags & URE_FLAG_8152)) {
if (sc->sc_chip & (URE_CHIP_VER_5C00 | URE_CHIP_VER_5C10 |
URE_CHIP_VER_5C20)) {
ure_ocp_reg_write(sc, URE_OCP_ADC_CFG,
URE_CKADSEL_L | URE_ADC_EN | URE_EN_EMI_L);
}
if (sc->sc_chip & URE_CHIP_VER_5C00) {
ure_ocp_reg_write(sc, URE_OCP_EEE_CFG,
ure_ocp_reg_read(sc, URE_OCP_EEE_CFG) &
~URE_CTAP_SHORT_EN);
}
ure_ocp_reg_write(sc, URE_OCP_POWER_CFG,
ure_ocp_reg_read(sc, URE_OCP_POWER_CFG) |
URE_EEE_CLKDIV_EN);
ure_ocp_reg_write(sc, URE_OCP_DOWN_SPEED,
ure_ocp_reg_read(sc, URE_OCP_DOWN_SPEED) |
URE_EN_10M_BGOFF);
ure_ocp_reg_write(sc, URE_OCP_POWER_CFG,
ure_ocp_reg_read(sc, URE_OCP_POWER_CFG) |
URE_EN_10M_PLLOFF);
ure_ocp_reg_write(sc, URE_OCP_SRAM_ADDR, URE_SRAM_IMPEDANCE);
ure_ocp_reg_write(sc, URE_OCP_SRAM_DATA, 0x0b13);
ure_write_2(sc, URE_PLA_PHY_PWR, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_PHY_PWR, URE_MCU_TYPE_PLA) |
URE_PFM_PWM_SWITCH);
/* Enable LPF corner auto tune. */
ure_ocp_reg_write(sc, URE_OCP_SRAM_ADDR, URE_SRAM_LPF_CFG);
ure_ocp_reg_write(sc, URE_OCP_SRAM_DATA, 0xf70f);
/* Adjust 10M amplitude. */
ure_ocp_reg_write(sc, URE_OCP_SRAM_ADDR, URE_SRAM_10M_AMP1);
ure_ocp_reg_write(sc, URE_OCP_SRAM_DATA, 0x00af);
ure_ocp_reg_write(sc, URE_OCP_SRAM_ADDR, URE_SRAM_10M_AMP2);
ure_ocp_reg_write(sc, URE_OCP_SRAM_DATA, 0x0208);
}
ure_reset(sc);
ure_write_1(sc, URE_PLA_CR, URE_MCU_TYPE_PLA, 0);
ure_write_1(sc, URE_PLA_OOB_CTRL, URE_MCU_TYPE_PLA,
ure_read_1(sc, URE_PLA_OOB_CTRL, URE_MCU_TYPE_PLA) &
~URE_NOW_IS_OOB);
ure_write_2(sc, URE_PLA_SFF_STS_7, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_SFF_STS_7, URE_MCU_TYPE_PLA) &
~URE_MCU_BORW_EN);
for (i = 0; i < URE_TIMEOUT; i++) {
if (ure_read_1(sc, URE_PLA_OOB_CTRL, URE_MCU_TYPE_PLA) &
URE_LINK_LIST_READY)
break;
uether_pause(&sc->sc_ue, hz / 100);
}
if (i == URE_TIMEOUT)
device_printf(sc->sc_ue.ue_dev,
"timeout waiting for OOB control\n");
ure_write_2(sc, URE_PLA_SFF_STS_7, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_SFF_STS_7, URE_MCU_TYPE_PLA) |
URE_RE_INIT_LL);
for (i = 0; i < URE_TIMEOUT; i++) {
if (ure_read_1(sc, URE_PLA_OOB_CTRL, URE_MCU_TYPE_PLA) &
URE_LINK_LIST_READY)
break;
uether_pause(&sc->sc_ue, hz / 100);
}
if (i == URE_TIMEOUT)
device_printf(sc->sc_ue.ue_dev,
"timeout waiting for OOB control\n");
ure_write_2(sc, URE_PLA_CPCR, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_CPCR, URE_MCU_TYPE_PLA) &
~URE_CPCR_RX_VLAN);
ure_write_2(sc, URE_PLA_TCR0, URE_MCU_TYPE_PLA,
ure_read_2(sc, URE_PLA_TCR0, URE_MCU_TYPE_PLA) |
URE_TCR0_AUTO_FIFO);
/* Configure Rx FIFO threshold. */
ure_write_4(sc, URE_PLA_RXFIFO_CTRL0, URE_MCU_TYPE_PLA,
URE_RXFIFO_THR1_NORMAL);
if (usbd_get_speed(sc->sc_ue.ue_udev) == USB_SPEED_FULL) {
rx_fifo1 = URE_RXFIFO_THR2_FULL;
rx_fifo2 = URE_RXFIFO_THR3_FULL;
} else {
rx_fifo1 = URE_RXFIFO_THR2_HIGH;
rx_fifo2 = URE_RXFIFO_THR3_HIGH;
}
ure_write_4(sc, URE_PLA_RXFIFO_CTRL1, URE_MCU_TYPE_PLA, rx_fifo1);
ure_write_4(sc, URE_PLA_RXFIFO_CTRL2, URE_MCU_TYPE_PLA, rx_fifo2);
/* Configure Tx FIFO threshold. */
ure_write_4(sc, URE_PLA_TXFIFO_CTRL, URE_MCU_TYPE_PLA,
URE_TXFIFO_THR_NORMAL);
}
Index: projects/clang800-import/sys/dev/usb/usbdevs
===================================================================
--- projects/clang800-import/sys/dev/usb/usbdevs (revision 343955)
+++ projects/clang800-import/sys/dev/usb/usbdevs (revision 343956)
@@ -1,4958 +1,4959 @@
$FreeBSD$
/* $NetBSD: usbdevs,v 1.392 2004/12/29 08:38:44 imp Exp $ */
/*-
* Copyright (c) 1998-2004 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Lennart Augustsson (lennart@augustsson.net) at
* Carlstedt Research & Technology.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
* TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
/*
* List of known USB vendors
*
* USB.org publishes a VID list of USB-IF member companies at
* http://www.usb.org/developers/tools
* Note that it does not show companies that have obtained a Vendor ID
* without becoming full members.
*
* Please note that these IDs do not do anything. Adding an ID here and
* regenerating the usbdevs.h and usbdevs_data.h only makes a symbolic name
* available to the source code and does not change any functionality, nor
* does it make your device available to a specific driver.
* It will however make the descriptive string available if a device does not
* provide the string itself.
*
* After adding a vendor ID VNDR and a product ID PRDCT you will have the
* following extra defines:
* #define USB_VENDOR_VNDR 0x????
* #define USB_PRODUCT_VNDR_PRDCT 0x????
*
* You may have to add these defines to the respective probe routines to
* make the device recognised by the appropriate device driver.
*/
vendor UNKNOWN1 0x0053 Unknown vendor
vendor UNKNOWN2 0x0105 Unknown vendor
vendor EGALAX2 0x0123 eGalax, Inc.
vendor CHIPSBANK 0x0204 Chipsbank Microelectronics Co.
vendor HUMAX 0x02ad HUMAX
vendor QUAN 0x01e1 Quan
vendor LTS 0x0386 LTS
vendor BWCT 0x03da Bernd Walter Computer Technology
vendor AOX 0x03e8 AOX
vendor THESYS 0x03e9 Thesys
vendor DATABROADCAST 0x03ea Data Broadcasting
vendor ATMEL 0x03eb Atmel
vendor IWATSU 0x03ec Iwatsu America
vendor MITSUMI 0x03ee Mitsumi
vendor HP 0x03f0 Hewlett Packard
vendor GENOA 0x03f1 Genoa
vendor OAK 0x03f2 Oak
vendor ADAPTEC 0x03f3 Adaptec
vendor DIEBOLD 0x03f4 Diebold
vendor SIEMENSELECTRO 0x03f5 Siemens Electromechanical
vendor EPSONIMAGING 0x03f8 Epson Imaging
vendor KEYTRONIC 0x03f9 KeyTronic
vendor OPTI 0x03fb OPTi
vendor ELITEGROUP 0x03fc Elitegroup
vendor XILINX 0x03fd Xilinx
vendor FARALLON 0x03fe Farallon Communications
vendor NATIONAL 0x0400 National Semiconductor
vendor NATIONALREG 0x0401 National Registry
vendor ACERLABS 0x0402 Acer Labs
vendor FTDI 0x0403 Future Technology Devices
vendor NCR 0x0404 NCR
vendor SYNOPSYS2 0x0405 Synopsys
vendor FUJITSUICL 0x0406 Fujitsu-ICL
vendor FUJITSU2 0x0407 Fujitsu Personal Systems
vendor QUANTA 0x0408 Quanta
vendor NEC 0x0409 NEC
vendor KODAK 0x040a Eastman Kodak
vendor WELTREND 0x040b Weltrend Semiconductor
vendor VIA 0x040d VIA
vendor MCCI 0x040e MCCI
vendor MELCO 0x0411 Melco
vendor LEADTEK 0x0413 Leadtek
vendor WINBOND 0x0416 Winbond
vendor PHOENIX 0x041a Phoenix
vendor CREATIVE 0x041e Creative Labs
vendor NOKIA 0x0421 Nokia
vendor ADI 0x0422 ADI Systems
vendor CATC 0x0423 Computer Access Technology
vendor SMC2 0x0424 Microchip (Standard Microsystems)
vendor MOTOROLA_HK 0x0425 Motorola HK
vendor GRAVIS 0x0428 Advanced Gravis Computer
vendor CIRRUSLOGIC 0x0429 Cirrus Logic
vendor INNOVATIVE 0x042c Innovative Semiconductors
vendor MOLEX 0x042f Molex
vendor SUN 0x0430 Sun Microsystems
vendor UNISYS 0x0432 Unisys
vendor TAUGA 0x0436 Taugagreining HF
vendor AMD 0x0438 Advanced Micro Devices
vendor LEXMARK 0x043d Lexmark International
vendor LG 0x043e LG Electronics
vendor NANAO 0x0440 NANAO
vendor GATEWAY 0x0443 Gateway 2000
vendor NMB 0x0446 NMB
vendor ALPS 0x044e Alps Electric
vendor THRUST 0x044f Thrustmaster
vendor TI 0x0451 Texas Instruments
vendor ANALOGDEVICES 0x0456 Analog Devices
vendor SIS 0x0457 Silicon Integrated Systems Corp.
vendor KYE 0x0458 KYE Systems
vendor DIAMOND2 0x045a Diamond (Supra)
vendor RENESAS 0x045b Renesas
vendor MICROSOFT 0x045e Microsoft
vendor PRIMAX 0x0461 Primax Electronics
vendor MGE 0x0463 MGE UPS Systems
vendor AMP 0x0464 AMP
vendor CHERRY 0x046a Cherry Mikroschalter
vendor MEGATRENDS 0x046b American Megatrends
vendor LOGITECH 0x046d Logitech
vendor BTC 0x046e Behavior Tech. Computer
vendor PHILIPS 0x0471 Philips
vendor SUN2 0x0472 Sun Microsystems (official)
vendor SANYO 0x0474 Sanyo Electric
vendor SEAGATE 0x0477 Seagate
vendor CONNECTIX 0x0478 Connectix
vendor SEMTECH 0x047a Semtech
vendor DELL2 0x047c Dell
vendor KENSINGTON 0x047d Kensington
vendor LUCENT 0x047e Lucent
vendor PLANTRONICS 0x047f Plantronics
vendor KYOCERA 0x0482 Kyocera Wireless Corp.
vendor STMICRO 0x0483 STMicroelectronics
vendor FOXCONN 0x0489 Foxconn / Hon Hai
vendor MEIZU 0x0492 Meizu Electronics
vendor YAMAHA 0x0499 YAMAHA
vendor COMPAQ 0x049f Compaq
vendor HITACHI 0x04a4 Hitachi
vendor ACERP 0x04a5 Acer Peripherals
vendor DAVICOM 0x04a6 Davicom
vendor VISIONEER 0x04a7 Visioneer
vendor CANON 0x04a9 Canon
vendor NIKON 0x04b0 Nikon
vendor PAN 0x04b1 Pan International
vendor IBM 0x04b3 IBM
vendor CYPRESS 0x04b4 Cypress Semiconductor
vendor ROHM 0x04b5 ROHM
vendor COMPAL 0x04b7 Compal
vendor EPSON 0x04b8 Seiko Epson
vendor RAINBOW 0x04b9 Rainbow Technologies
vendor IODATA 0x04bb I-O Data
vendor TDK 0x04bf TDK
vendor 3COMUSR 0x04c1 U.S. Robotics
vendor METHODE 0x04c2 Methode Electronics Far East
vendor MAXISWITCH 0x04c3 Maxi Switch
vendor LOCKHEEDMER 0x04c4 Lockheed Martin Energy Research
vendor FUJITSU 0x04c5 Fujitsu
vendor TOSHIBAAM 0x04c6 Toshiba America
vendor MICROMACRO 0x04c7 Micro Macro Technologies
vendor KONICA 0x04c8 Konica
vendor LITEON 0x04ca Lite-On Technology
vendor FUJIPHOTO 0x04cb Fuji Photo Film
vendor PHILIPSSEMI 0x04cc Philips Semiconductors
vendor TATUNG 0x04cd Tatung Co. Of America
vendor SCANLOGIC 0x04ce ScanLogic
vendor MYSON 0x04cf Myson Technology
vendor DIGI2 0x04d0 Digi
vendor ITTCANON 0x04d1 ITT Canon
vendor ALTEC 0x04d2 Altec Lansing
vendor LSI 0x04d4 LSI
vendor MENTORGRAPHICS 0x04d6 Mentor Graphics
vendor ITUNERNET 0x04d8 I-Tuner Networks
vendor HOLTEK 0x04d9 Holtek Semiconductor, Inc.
vendor PANASONIC 0x04da Panasonic (Matsushita)
vendor HUANHSIN 0x04dc Huan Hsin
vendor SHARP 0x04dd Sharp
vendor IIYAMA 0x04e1 Iiyama
vendor SHUTTLE 0x04e6 Shuttle Technology
vendor ELO 0x04e7 Elo TouchSystems
vendor SAMSUNG 0x04e8 Samsung Electronics
vendor NORTHSTAR 0x04eb Northstar
vendor TOKYOELECTRON 0x04ec Tokyo Electron
vendor ANNABOOKS 0x04ed Annabooks
vendor JVC 0x04f1 JVC
vendor CHICONY 0x04f2 Chicony Electronics
vendor ELAN 0x04f3 ELAN Microelectronics
vendor NEWNEX 0x04f7 Newnex
vendor BROTHER 0x04f9 Brother Industries
vendor DALLAS 0x04fa Dallas Semiconductor
vendor AIPTEK2 0x04fc AIPTEK International
vendor PFU 0x04fe PFU
vendor FUJIKURA 0x0501 Fujikura/DDK
vendor ACER 0x0502 Acer
vendor 3COM 0x0506 3Com
vendor HOSIDEN 0x0507 Hosiden Corporation
vendor AZTECH 0x0509 Aztech Systems
vendor BELKIN 0x050d Belkin Components
vendor KAWATSU 0x050f Kawatsu Semiconductor
vendor FCI 0x0514 FCI
vendor LONGWELL 0x0516 Longwell
vendor COMPOSITE 0x0518 Composite
vendor STAR 0x0519 Star Micronics
vendor APC 0x051d American Power Conversion
vendor SCIATLANTA 0x051e Scientific Atlanta
vendor TSM 0x0520 TSM
vendor CONNECTEK 0x0522 Advanced Connectek USA
vendor NETCHIP 0x0525 NetChip Technology
vendor ALTRA 0x0527 ALTRA
vendor ATI 0x0528 ATI Technologies
vendor AKS 0x0529 Aladdin Knowledge Systems
vendor TEKOM 0x052b Tekom
vendor CANONDEV 0x052c Canon
vendor WACOMTECH 0x0531 Wacom
vendor INVENTEC 0x0537 Inventec
vendor SHYHSHIUN 0x0539 Shyh Shiun Terminals
vendor PREHWERKE 0x053a Preh Werke Gmbh & Co. KG
vendor SYNOPSYS 0x053f Synopsys
vendor UNIACCESS 0x0540 Universal Access
vendor VIEWSONIC 0x0543 ViewSonic
vendor XIRLINK 0x0545 Xirlink
vendor ANCHOR 0x0547 Anchor Chips
vendor SONY 0x054c Sony
vendor FUJIXEROX 0x0550 Fuji Xerox
vendor VISION 0x0553 VLSI Vision
vendor ASAHIKASEI 0x0556 Asahi Kasei Microsystems
vendor ATEN 0x0557 ATEN International
vendor SAMSUNG2 0x055d Samsung Electronics
vendor MUSTEK 0x055f Mustek Systems
vendor TELEX 0x0562 Telex Communications
vendor CHINON 0x0564 Chinon
vendor PERACOM 0x0565 Peracom Networks
vendor ALCOR2 0x0566 Alcor Micro
vendor XYRATEX 0x0567 Xyratex
vendor WACOM 0x056a WACOM
vendor ETEK 0x056c e-TEK Labs
vendor EIZO 0x056d EIZO
vendor ELECOM 0x056e Elecom
vendor CONEXANT 0x0572 Conexant
vendor HAUPPAUGE 0x0573 Hauppauge Computer Works
vendor BAFO 0x0576 BAFO/Quality Computer Accessories
vendor YEDATA 0x057b Y-E Data
vendor AVM 0x057c AVM
vendor NINTENDO 0x057e Nintendo
vendor QUICKSHOT 0x057f Quickshot
vendor ROLAND 0x0582 Roland
vendor ROCKFIRE 0x0583 Rockfire
vendor RATOC 0x0584 RATOC Systems
vendor ZYXEL 0x0586 ZyXEL Communication
vendor INFINEON 0x058b Infineon
vendor MICREL 0x058d Micrel
vendor ALCOR 0x058f Alcor Micro
vendor OMRON 0x0590 OMRON
vendor ZORAN 0x0595 Zoran Microelectronics
vendor NIIGATA 0x0598 Niigata
vendor IOMEGA 0x059b Iomega
vendor ATREND 0x059c A-Trend Technology
vendor AID 0x059d Advanced Input Devices
vendor LACIE 0x059f LaCie
vendor FUJIFILM 0x05a2 Fuji Film
vendor ARC 0x05a3 ARC
vendor ORTEK 0x05a4 Ortek
vendor CISCOLINKSYS3 0x05a6 Cisco-Linksys
vendor BOSE 0x05a7 Bose
vendor OMNIVISION 0x05a9 OmniVision
vendor INSYSTEM 0x05ab In-System Design
vendor APPLE 0x05ac Apple Computer
vendor YCCABLE 0x05ad Y.C. Cable
vendor DIGITALPERSONA 0x05ba DigitalPersona
vendor 3G 0x05bc 3G Green Green Globe
vendor RAFI 0x05bd RAFI
vendor TYCO 0x05be Tyco
vendor KAWASAKI 0x05c1 Kawasaki
vendor DIGI 0x05c5 Digi International
vendor QUALCOMM2 0x05c6 Qualcomm
vendor QTRONIX 0x05c7 Qtronix
vendor FOXLINK 0x05c8 Foxlink
vendor RICOH 0x05ca Ricoh
vendor ELSA 0x05cc ELSA
vendor SCIWORX 0x05ce sci-worx
vendor BRAINBOXES 0x05d1 Brainboxes Limited
vendor ULTIMA 0x05d8 Ultima
vendor AXIOHM 0x05d9 Axiohm Transaction Solutions
vendor MICROTEK 0x05da Microtek
vendor SUNTAC 0x05db SUN Corporation
vendor LEXAR 0x05dc Lexar Media
vendor ADDTRON 0x05dd Addtron
vendor SYMBOL 0x05e0 Symbol Technologies
vendor SYNTEK 0x05e1 Syntek
vendor GENESYS 0x05e3 Genesys Logic
vendor FUJI 0x05e5 Fuji Electric
vendor KEITHLEY 0x05e6 Keithley Instruments
vendor EIZONANAO 0x05e7 EIZO Nanao
vendor KLSI 0x05e9 Kawasaki LSI
vendor FFC 0x05eb FFC
vendor ANKO 0x05ef Anko Electronic
vendor PIENGINEERING 0x05f3 P.I. Engineering
vendor AOC 0x05f6 AOC International
vendor CHIC 0x05fe Chic Technology
vendor BARCO 0x0600 Barco Display Systems
vendor BRIDGE 0x0607 Bridge Information
vendor SMK 0x0609 SMK
vendor SOLIDYEAR 0x060b Solid Year
vendor BIORAD 0x0614 Bio-Rad Laboratories
vendor MACALLY 0x0618 Macally
vendor ACTLABS 0x061c Act Labs
vendor ALARIS 0x0620 Alaris
vendor APEX 0x0624 Apex
vendor CREATIVE3 0x062a Creative Labs
vendor MICRON 0x0634 Micron Technology
vendor VIVITAR 0x0636 Vivitar
vendor GUNZE 0x0637 Gunze Electronics USA
vendor AVISION 0x0638 Avision
vendor TEAC 0x0644 TEAC
vendor ACTON 0x0647 Acton Research Corp.
vendor OPTO 0x065a Optoelectronics Co., Ltd
vendor SGI 0x065e Silicon Graphics
vendor SANWASUPPLY 0x0663 Sanwa Supply
vendor MEGATEC 0x0665 Megatec
vendor LINKSYS 0x066b Linksys
vendor ACERSA 0x066e Acer Semiconductor America
vendor SIGMATEL 0x066f Sigmatel
vendor DRAYTEK 0x0675 DrayTek
vendor AIWA 0x0677 Aiwa
vendor ACARD 0x0678 ACARD Technology
vendor PROLIFIC 0x067b Prolific Technology
vendor SIEMENS 0x067c Siemens
vendor AVANCELOGIC 0x0680 Avance Logic
vendor SIEMENS2 0x0681 Siemens
vendor MINOLTA 0x0686 Minolta
vendor CHPRODUCTS 0x068e CH Products
vendor HAGIWARA 0x0693 Hagiwara Sys-Com
vendor CTX 0x0698 Chuntex
vendor ASKEY 0x069a Askey Computer
vendor SAITEK 0x06a3 Saitek
vendor ALCATELT 0x06b9 Alcatel Telecom
vendor AGFA 0x06bd AGFA-Gevaert
vendor ASIAMD 0x06be Asia Microelectronic Development
vendor BIZLINK 0x06c4 Bizlink International
vendor KEYSPAN 0x06cd Keyspan / InnoSys Inc.
vendor CONTEC 0x06ce Contec products
vendor AASHIMA 0x06d6 Aashima Technology
vendor LIEBERT 0x06da Liebert
vendor MULTITECH 0x06e0 MultiTech
vendor ADS 0x06e1 ADS Technologies
vendor ALCATELM 0x06e4 Alcatel Microelectronics
vendor SIRIUS 0x06ea Sirius Technologies
vendor GUILLEMOT 0x06f8 Guillemot
vendor BOSTON 0x06fd Boston Acoustics
vendor SMC 0x0707 Standard Microsystems
vendor PUTERCOM 0x0708 Putercom
vendor MCT 0x0711 MCT
vendor IMATION 0x0718 Imation
vendor TECLAST 0x071b Teclast
vendor SONYERICSSON 0x0731 Sony Ericsson
vendor EICON 0x0734 Eicon Networks
vendor MADCATZ 0x0738 Mad Catz, Inc.
vendor SYNTECH 0x0745 Syntech Information
vendor DIGITALSTREAM 0x074e Digital Stream
vendor AUREAL 0x0755 Aureal Semiconductor
vendor MAUDIO 0x0763 M-Audio
vendor CYBERPOWER 0x0764 Cyber Power Systems, Inc.
vendor SURECOM 0x0769 Surecom Technology
vendor HIDGLOBAL 0x076b HID Global
vendor LINKSYS2 0x077b Linksys
vendor GRIFFIN 0x077d Griffin Technology
vendor SANDISK 0x0781 SanDisk
vendor JENOPTIK 0x0784 Jenoptik
vendor LOGITEC 0x0789 Logitec
vendor NOKIA2 0x078b Nokia
vendor BRIMAX 0x078e Brimax
vendor AXIS 0x0792 Axis Communications
vendor ABL 0x0794 ABL Electronics
vendor SAGEM 0x079b Sagem
vendor SUNCOMM 0x079c Sun Communications, Inc.
vendor ALFADATA 0x079d Alfadata Computer
vendor NATIONALTECH 0x07a2 National Technical Systems
vendor ONNTO 0x07a3 Onnto
vendor BE 0x07a4 Be
vendor ADMTEK 0x07a6 ADMtek
vendor COREGA 0x07aa Corega
vendor FREECOM 0x07ab Freecom
vendor MICROTECH 0x07af Microtech
vendor GENERALINSTMNTS 0x07b2 General Instruments (Motorola)
vendor OLYMPUS 0x07b4 Olympus
vendor ABOCOM 0x07b8 AboCom Systems
vendor KINGSUN 0x07c0 KingSun
vendor KEISOKUGIKEN 0x07c1 Keisokugiken
vendor ONSPEC 0x07c4 OnSpec
vendor APG 0x07c5 APG Cash Drawer
vendor BUG 0x07c8 B.U.G.
vendor ALLIEDTELESYN 0x07c9 Allied Telesyn International
vendor AVERMEDIA 0x07ca AVerMedia Technologies
vendor SIIG 0x07cc SIIG
vendor CASIO 0x07cf CASIO
vendor DLINK2 0x07d1 D-Link
vendor APTIO 0x07d2 Aptio Products
vendor ARASAN 0x07da Arasan Chip Systems
vendor ALLIEDCABLE 0x07e6 Allied Cable
vendor STSN 0x07ef STSN
vendor CENTURY 0x07f7 Century Corp
vendor BEWAN 0x07fa Bewan
vendor NEWLINK 0x07ff NEWlink
vendor MAGTEK 0x0801 Mag-Tek
vendor ZOOM 0x0803 Zoom Telephonics
vendor PCS 0x0810 Personal Communication Systems
vendor SYNET 0x0812 Synet Electronics
vendor ALPHASMART 0x081e AlphaSmart, Inc.
vendor BROADLOGIC 0x0827 BroadLogic
vendor HANDSPRING 0x082d Handspring
vendor PALM 0x0830 Palm Computing
vendor SOURCENEXT 0x0833 SOURCENEXT
vendor ACTIONSTAR 0x0835 Action Star Enterprise
vendor SAMSUNG_TECHWIN 0x0839 Samsung Techwin
vendor ACCTON 0x083a Accton Technology
vendor DIAMOND 0x0841 Diamond
vendor NETGEAR 0x0846 BayNETGEAR
vendor TOPRE 0x0853 Topre Corporation
vendor ACTIVEWIRE 0x0854 ActiveWire
vendor BBELECTRONICS 0x0856 B&B Electronics
vendor PORTGEAR 0x085a PortGear
vendor NETGEAR2 0x0864 Netgear
vendor SYSTEMTALKS 0x086e System Talks
vendor METRICOM 0x0870 Metricom
vendor ADESSOKBTEK 0x087c ADESSO/Kbtek America
vendor JATON 0x087d Jaton
vendor APT 0x0880 APT Technologies
vendor BOCARESEARCH 0x0885 Boca Research
vendor ANDREA 0x08a8 Andrea Electronics
vendor BURRBROWN 0x08bb Burr-Brown Japan
vendor 2WIRE 0x08c8 2Wire
vendor AIPTEK 0x08ca AIPTEK International
vendor SMARTBRIDGES 0x08d1 SmartBridges
vendor FUJITSUSIEMENS 0x08d4 Fujitsu-Siemens
vendor BILLIONTON 0x08dd Billionton Systems
vendor GEMALTO 0x08e6 Gemalto SA
vendor EXTENDED 0x08e9 Extended Systems
vendor MSYSTEMS 0x08ec M-Systems
vendor DIGIANSWER 0x08fd Digianswer
vendor AUTHENTEC 0x08ff AuthenTec
vendor AUDIOTECHNICA 0x0909 Audio-Technica
vendor TRUMPION 0x090a Trumpion Microelectronics
vendor FEIYA 0x090c Feiya
vendor ALATION 0x0910 Alation Systems
vendor GLOBESPAN 0x0915 Globespan
vendor CONCORDCAMERA 0x0919 Concord Camera
vendor GARMIN 0x091e Garmin International
vendor GOHUBS 0x0921 GoHubs
vendor DYMO 0x0922 DYMO
vendor XEROX 0x0924 Xerox
vendor BIOMETRIC 0x0929 American Biometric Company
vendor TOSHIBA 0x0930 Toshiba
vendor PLEXTOR 0x093b Plextor
vendor INTREPIDCS 0x093c Intrepid
vendor YANO 0x094f Yano
vendor KINGSTON 0x0951 Kingston Technology
vendor NVIDIA 0x0955 NVIDIA Corporation
vendor BLUEWATER 0x0956 BlueWater Systems
vendor AGILENT 0x0957 Agilent Technologies
vendor GUDE 0x0959 Gude ADS
vendor PORTSMITH 0x095a Portsmith
vendor ACERW 0x0967 Acer
vendor ADIRONDACK 0x0976 Adirondack Wire & Cable
vendor BECKHOFF 0x0978 Beckhoff
vendor MINDSATWORK 0x097a Minds At Work
vendor ZIPPY 0x099a Zippy Technology Corporation
vendor POINTCHIPS 0x09a6 PointChips
vendor INTERSIL 0x09aa Intersil
vendor TRIPPLITE2 0x09ae Tripp Lite
vendor ALTIUS 0x09b3 Altius Solutions
vendor ARRIS 0x09c1 Arris Interactive
vendor ACTIVCARD 0x09c3 ACTIVCARD
vendor ACTISYS 0x09c4 ACTiSYS
vendor NOVATEL2 0x09d7 Novatel Wireless
vendor AFOURTECH 0x09da A-FOUR TECH
vendor AIMEX 0x09dc AIMEX
vendor ADDONICS 0x09df Addonics Technologies
vendor AKAI 0x09e8 AKAI professional M.I.
vendor ARESCOM 0x09f5 ARESCOM
vendor BAY 0x09f9 Bay Associates
vendor ALTERA 0x09fb Altera
vendor CSR 0x0a12 Cambridge Silicon Radio
vendor TREK 0x0a16 Trek Technology
vendor ASAHIOPTICAL 0x0a17 Asahi Optical
vendor BOCASYSTEMS 0x0a43 Boca Systems
vendor SHANTOU 0x0a46 ShanTou
vendor MEDIAGEAR 0x0a48 MediaGear
vendor PLOYTEC 0x0a4a Ploytec GmbH
vendor BROADCOM 0x0a5c Broadcom
vendor MEDELI 0x0a67 Medeli
vendor GREENHOUSE 0x0a6b GREENHOUSE
vendor GEOCAST 0x0a79 Geocast Network Systems
vendor EGO 0x0a92 EGO systems
vendor IDQUANTIQUE 0x0aba ID Quantique
vendor IDTECH 0x0acd ID TECH
vendor ZYDAS 0x0ace Zydas Technology Corporation
vendor NEODIO 0x0aec Neodio
vendor OPTION 0x0af0 Option N.V.
vendor ASUS 0x0b05 ASUSTeK Computer
vendor TODOS 0x0b0c Todos Data System
vendor SIIG2 0x0b39 SIIG
vendor TEKRAM 0x0b3b Tekram Technology
vendor HAL 0x0b41 HAL Corporation
vendor EMS 0x0b43 EMS Production
vendor NEC2 0x0b62 NEC
vendor ADLINK 0x0b63 ADLINK Technology, Inc.
vendor ATI2 0x0b6f ATI Technologies
vendor ZEEVO 0x0b7a Zeevo, Inc.
vendor KURUSUGAWA 0x0b7e Kurusugawa Electronics, Inc.
vendor SMART 0x0b8c Smart Technologies
vendor ASIX 0x0b95 ASIX Electronics
vendor O2MICRO 0x0b97 O2 Micro, Inc.
vendor USR 0x0baf U.S. Robotics
vendor AMBIT 0x0bb2 Ambit Microsystems
vendor HTC 0x0bb4 HTC
vendor REALTEK 0x0bda Realtek
vendor ERICSSON2 0x0bdb Ericsson
vendor MEI 0x0bed MEI
vendor ADDONICS2 0x0bf6 Addonics Technology
vendor FSC 0x0bf8 Fujitsu Siemens Computers
vendor AGATE 0x0c08 Agate Technologies
vendor DMI 0x0c0b DMI
vendor CANYON 0x0c10 Canyon
vendor ICOM 0x0c26 Icom Inc.
vendor GNOTOMETRICS 0x0c33 GN Otometrics
vendor CHICONY2 0x0c45 Chicony / Microdia / Sonix Technology Co., Ltd.
vendor REINERSCT 0x0c4b Reiner-SCT
vendor SEALEVEL 0x0c52 Sealevel System
vendor JETI 0x0c6c Jeti
vendor LUWEN 0x0c76 Luwen
vendor ELEKTOR 0x0c7d ELEKTOR Electronics
vendor KYOCERA2 0x0c88 Kyocera Wireless Corp.
vendor ZCOM 0x0cde Z-Com
vendor ATHEROS2 0x0cf3 Atheros Communications
vendor POSIFLEX 0x0d3a POSIFLEX
vendor TANGTOP 0x0d3d Tangtop
vendor KOBIL 0x0d46 KOBIL
vendor SMC3 0x0d5c Standard Microsystems
vendor ADDON 0x0d7d Add-on Technology
vendor ACDC 0x0d7e American Computer & Digital Components
vendor CMEDIA 0x0d8c CMEDIA
vendor CONCEPTRONIC 0x0d8e Conceptronic
vendor SKANHEX 0x0d96 Skanhex Technology, Inc.
vendor POWERCOM 0x0d9f PowerCOM
vendor MSI 0x0db0 Micro Star International
vendor ELCON 0x0db7 ELCON Systemtechnik
vendor UNKNOWN4 0x0dcd Unknown vendor
vendor NETAC 0x0dd8 Netac
vendor SITECOMEU 0x0df6 Sitecom Europe
vendor MOBILEACTION 0x0df7 Mobile Action
vendor AMIGO 0x0e0b Amigo Technology
vendor SMART2 0x0e39 Smart Modular Technologies
vendor SPEEDDRAGON 0x0e55 Speed Dragon Multimedia
vendor HAWKING 0x0e66 Hawking
vendor FOSSIL 0x0e67 Fossil, Inc
vendor GMATE 0x0e7e G.Mate, Inc
vendor MEDIATEK 0x0e8d MediaTek, Inc.
vendor OTI 0x0ea0 Ours Technology
vendor YISO 0x0eab Yiso Wireless Co.
vendor PILOTECH 0x0eaf Pilotech
vendor NOVATECH 0x0eb0 NovaTech
vendor ITEGNO 0x0eba iTegno
vendor WINMAXGROUP 0x0ed1 WinMaxGroup
vendor TOD 0x0ede TOD
vendor EGALAX 0x0eef eGalax, Inc.
vendor AIRPRIME 0x0f3d AirPrime, Inc.
vendor MICROTUNE 0x0f4d Microtune
vendor VTECH 0x0f88 VTech
vendor FALCOM 0x0f94 Falcom Wireless Communications GmbH
vendor RIM 0x0fca Research In Motion
vendor DYNASTREAM 0x0fcf Dynastream Innovations
vendor LARSENBRUSGAARD 0x0fd8 Larsen and Brusgaard
vendor OWL 0x0fde OWL
vendor KONTRON 0x0fe6 Kontron AG
vendor DVICO 0x0fe9 DViCO
vendor QUALCOMM 0x1004 Qualcomm
vendor APACER 0x1005 Apacer
vendor MOTOROLA4 0x100d Motorola
vendor AIRPLUS 0x1011 Airplus
vendor DESKNOTE 0x1019 Desknote
vendor AMD2 0x1022 Advanced Micro Devices
vendor NEC3 0x1033 NEC
vendor HP3 0x103c Hewlett Packard
vendor TTI 0x103e Thurlby Thandar Instruments
vendor GIGABYTE 0x1044 GIGABYTE
vendor WESTERN 0x1058 Western Digital
vendor MOTOROLA 0x1063 Motorola
vendor CCYU 0x1065 CCYU Technology
vendor CURITEL 0x106c Curitel Communications Inc
vendor SILABS2 0x10a6 SILABS2
vendor USI 0x10ab USI
vendor HONEYWELL 0x10ac Honeywell
vendor LIEBERT2 0x10af Liebert
vendor PLX 0x10b5 PLX
vendor ASANTE 0x10bd Asante
vendor SILABS 0x10c4 Silicon Labs
vendor SILABS3 0x10c5 Silicon Labs
vendor SILABS4 0x10ce Silicon Labs
vendor ACTIONS 0x10d6 Actions
vendor MOXA 0x110a Moxa
vendor ANALOG 0x1110 Analog Devices
vendor TENX 0x1130 Ten X Technology, Inc.
vendor ISSC 0x1131 Integrated System Solution Corp.
vendor JRC 0x1145 Japan Radio Company
vendor SPHAIRON 0x114b Sphairon Access Systems GmbH
vendor DELORME 0x1163 DeLorme
vendor SERVERWORKS 0x1166 ServerWorks
vendor DLINK3 0x1186 D-Link
vendor ACERCM 0x1189 Acer Communications & Multimedia
vendor SIERRA 0x1199 Sierra Wireless
vendor SANWA 0x11ad Sanwa Electric Instrument Co., Ltd.
vendor TOPFIELD 0x11db Topfield Co., Ltd
vendor SIEMENS3 0x11f5 Siemens
vendor NETINDEX 0x11f6 NetIndex
vendor ALCATEL 0x11f7 Alcatel
vendor INTERBIOMETRICS 0x1209 Interbiometrics
vendor FUJITSU3 0x1221 Fujitsu Ltd.
vendor UNKNOWN3 0x1233 Unknown vendor
vendor TSUNAMI 0x1241 Tsunami
vendor PHEENET 0x124a Pheenet
vendor TARGUS 0x1267 Targus
vendor TWINMOS 0x126f TwinMOS
vendor TENDA 0x1286 Tenda
vendor TESTO 0x128d Testo products
vendor CREATIVE2 0x1292 Creative Labs
vendor BELKIN2 0x1293 Belkin Components
vendor CYBERTAN 0x129b CyberTAN Technology
vendor HUAWEI 0x12d1 Huawei Technologies
vendor ARANEUS 0x12d8 Araneus Information Systems
vendor TAPWAVE 0x12ef Tapwave
vendor AINCOMM 0x12fd Aincomm
vendor MOBILITY 0x1342 Mobility
vendor DICKSMITH 0x1371 Dick Smith Electronics
vendor NETGEAR3 0x1385 Netgear
vendor VALIDITY 0x138a Validity Sensors, Inc.
vendor BALTECH 0x13ad Baltech
vendor CISCOLINKSYS 0x13b1 Cisco-Linksys
vendor SHARK 0x13d2 Shark
vendor AZUREWAVE 0x13d3 AzureWave
vendor INITIO 0x13fd Initio Corporation
vendor EMTEC 0x13fe Emtec
vendor NOVATEL 0x1410 Novatel Wireless
vendor OMNIVISION2 0x1415 OmniVision Technologies, Inc.
vendor MERLIN 0x1416 Merlin
vendor REDOCTANE 0x1430 RedOctane
vendor WISTRONNEWEB 0x1435 Wistron NeWeb
vendor RADIOSHACK 0x1453 Radio Shack
vendor FIC 0x1457 FIC / OpenMoko
vendor HUAWEI3COM 0x1472 Huawei-3Com
vendor ABOCOM2 0x1482 AboCom Systems
vendor SILICOM 0x1485 Silicom
vendor RALINK 0x148f Ralink Technology
vendor IMAGINATION 0x149a Imagination Technologies
vendor ATP 0x14af ATP Electronics
vendor CONCEPTRONIC2 0x14b2 Conceptronic
vendor SUPERTOP 0x14cd Super Top
vendor PLANEX3 0x14ea Planex Communications
vendor SILICONPORTALS 0x1527 Silicon Portals
vendor UBIQUAM 0x1529 UBIQUAM Co., Ltd.
vendor JMICRON 0x152d JMicron
vendor UBLOX 0x1546 U-blox
vendor PNY 0x154b PNY
vendor OWEN 0x1555 Owen
vendor OQO 0x1557 OQO
vendor UMEDIA 0x157e U-MEDIA Communications
vendor FIBERLINE 0x1582 Fiberline
vendor FREESCALE 0x15a2 Freescale Semiconductor, Inc.
vendor AFATECH 0x15a4 Afatech Technologies, Inc.
vendor SPARKLAN 0x15a9 SparkLAN
vendor OLIMEX 0x15ba Olimex
vendor SOUNDGRAPH 0x15c2 Soundgraph, Inc.
vendor AMIT2 0x15c5 AMIT
vendor TEXTECH 0x15ca Textech International Ltd.
vendor SOHOWARE 0x15e8 SOHOware
vendor ABIT 0x15eb ABIT Corporation
vendor UMAX 0x1606 UMAX Data Systems
vendor INSIDEOUT 0x1608 Inside Out Networks
vendor AMOI 0x1614 Amoi Electronics
vendor GOODWAY 0x1631 Good Way Technology
vendor ENTREGA 0x1645 Entrega
vendor ACTIONTEC 0x1668 Actiontec Electronics
vendor CLIPSAL 0x166a Clipsal
vendor CISCOLINKSYS2 0x167b Cisco-Linksys
vendor ATHEROS 0x168c Atheros Communications
vendor GIGASET 0x1690 Gigaset
vendor GLOBALSUN 0x16ab Global Sun Technology
vendor ANYDATA 0x16d5 AnyDATA Corporation
vendor JABLOTRON 0x16d6 Jablotron
vendor CMOTECH 0x16d8 C-motech
vendor WIENERPLEINBAUS 0x16dc WIENER Plein & Baus GmbH.
vendor AXESSTEL 0x1726 Axesstel Co., Ltd.
vendor LINKSYS4 0x1737 Linksys
vendor SENAO 0x1740 Senao
vendor ASUS2 0x1761 ASUS
vendor SWEEX2 0x177f Sweex
vendor METAGEEK 0x1781 MetaGeek
vendor MISC 0x1781 Misc Vendors
vendor KAMSTRUP 0x17a8 Kamstrup A/S
vendor DISPLAYLINK 0x17e9 DisplayLink
vendor LENOVO 0x17ef Lenovo
vendor WAVESENSE 0x17f4 WaveSense
vendor VAISALA 0x1843 Vaisala
vendor E3C 0x18b4 E3C Technologies
vendor AMIT 0x18c5 AMIT
vendor GOOGLE 0x18d1 Google
vendor QCOM 0x18e8 Qcom
vendor ELV 0x18ef ELV
vendor LINKSYS3 0x1915 Linksys
vendor MEINBERG 0x1938 Meinberg Funkuhren
vendor BECEEM 0x198f Beceem Communications
vendor ZTE 0x19d2 ZTE
vendor QUALCOMMINC 0x19d2 Qualcomm, Incorporated
vendor QUALCOMM3 0x19f5 Qualcomm, Inc.
vendor QUANTA2 0x1a32 Quanta
vendor TERMINUS 0x1a40 Terminus Technology
vendor ABBOTT 0x1a61 Abbott Diabetics
vendor BAYER 0x1a79 Bayer
vendor WCH2 0x1a86 QinHeng Electronics
vendor STELERA 0x1a8d Stelera Wireless
vendor SEL 0x1adb Schweitzer Engineering Laboratories
vendor CORSAIR 0x1b1c Corsair
vendor ASM 0x1b21 ASMedia Technology
vendor MATRIXORBITAL 0x1b3d Matrix Orbital
vendor OVISLINK 0x1b75 OvisLink
vendor TML 0x1b91 The Mobility Lab
vendor TCTMOBILE 0x1bbb TCT Mobile
vendor TELIT 0x1bc7 Telit
vendor ALTI2 0x1bc9 Alti-2 products
vendor SUNPLUS 0x1bcf Sunplus Innovation Technology Inc.
vendor WAGO 0x1be3 WAGO Kontakttechnik GmbH.
vendor IONICS 0x1c0c Ionics PlugComputer
vendor LONGCHEER 0x1c9e Longcheer Holdings, Ltd.
vendor MPMAN 0x1cae MpMan
vendor DRESDENELEKTRONIK 0x1cf1 dresden elektronik
vendor NEOTEL 0x1d09 Neotel
vendor DREAMLINK 0x1d34 Dream Link
vendor PEGATRON 0x1d4d Pegatron
vendor QISDA 0x1da5 Qisda
vendor METAGEEK2 0x1dd5 MetaGeek
vendor ALINK 0x1e0e Alink
vendor FESTO 0x1e29 Festo
vendor AIRTIES 0x1eda AirTies
vendor LAKESHORE 0x1fb9 Lake Shore Cryotronics, Inc.
vendor VERTEX 0x1fe7 Vertex Wireless Co., Ltd.
vendor DLINK 0x2001 D-Link
vendor PLANEX2 0x2019 Planex Communications
vendor HAUPPAUGE2 0x2040 Hauppauge Computer Works
vendor ENCORE 0x203d Encore
vendor QIHARDWARE 0x20b7 QI-hardware
vendor PARA 0x20b8 PARA Industrial
vendor TLAYTECH 0x20b9 Tlay Tech
vendor SIMTEC 0x20df Simtec Electronics
vendor TRENDNET 0x20f4 TRENDnet
vendor RTSYSTEMS 0x2100 RT Systems
vendor DLINK4 0x2101 D-Link
vendor INTENSO 0x2109 INTENSO
vendor VIALABS 0x2109 VIA Labs
vendor ERICSSON 0x2282 Ericsson
vendor MOTOROLA2 0x22b8 Motorola
vendor WETELECOM 0x22de WeTelecom
vendor PINNACLE 0x2304 Pinnacle Systems
vendor ARDUINO 0x2341 Arduino SA
vendor TPLINK 0x2357 TP-Link
vendor WESTMOUNTAIN 0x2405 West Mountain Radio
vendor TRIPPLITE 0x2478 Tripp-Lite
vendor HIROSE 0x2631 Hirose Electric
vendor NHJ 0x2770 NHJ
vendor THINGM 0x27b8 ThingM
vendor PERASO 0x2932 Peraso Technologies, Inc.
vendor PLANEX 0x2c02 Planex Communications
vendor QUECTEL 0x2c7c Quectel Wireless Solutions
vendor LINKINSTRUMENTS 0x3195 Link Instruments Inc.
vendor VIDZMEDIA 0x3275 VidzMedia Pte Ltd
vendor AEI 0x3334 AEI
vendor HANK 0x3353 Hank Connection
vendor PQI 0x3538 PQI
vendor DAISY 0x3579 Daisy Technology
vendor NI 0x3923 National Instruments
vendor MICRONET 0x3980 Micronet Communications
vendor IODATA2 0x40bb I-O Data
vendor IRIVER 0x4102 iRiver
vendor DELL 0x413c Dell
vendor WCH 0x4348 QinHeng Electronics
vendor ACEECA 0x4766 Aceeca
vendor FEIXUN 0x4855 FeiXun Communication
vendor PAPOUCH 0x5050 Papouch products
vendor AVERATEC 0x50c2 Averatec
vendor SWEEX 0x5173 Sweex
vendor PROLIFIC2 0x5372 Prolific Technologies
vendor ONSPEC2 0x55aa OnSpec Electronic Inc.
vendor ZINWELL 0x5a57 Zinwell
vendor INGENIC 0x601a Ingenic Semiconductor Ltd.
vendor SITECOM 0x6189 Sitecom
vendor SPRINGERDESIGN 0x6400 Springer Design, Inc.
vendor ARKMICRO 0x6547 Arkmicro Technologies Inc.
vendor 3COM2 0x6891 3Com
vendor EDIMAX 0x7392 Edimax
vendor INTEL 0x8086 Intel
vendor INTEL2 0x8087 Intel
vendor ALLWIN 0x8516 ALLWIN Tech
vendor SITECOM2 0x9016 Sitecom
vendor MOSCHIP 0x9710 MosChip Semiconductor
vendor NETGEAR4 0x9846 Netgear
vendor MARVELL 0x9e88 Marvell Technology Group Ltd.
vendor 3COM3 0xa727 3Com
vendor CACE 0xcace CACE Technologies
vendor COMPARE 0xcdab Compare
vendor DATAAPEX 0xdaae DataApex
vendor EVOLUTION 0xdeee Evolution Robotics
vendor EMPIA 0xeb1a eMPIA Technology
vendor HP2 0xf003 Hewlett Packard
vendor LOGILINK 0xfc08 LogiLink
vendor USRP 0xfffe GNU Radio USRP
/*
* List of known products. Grouped by vendor.
*/
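As a minimal sketch (not part of this file): the build converts each `vendor` and `product` line above into `USB_VENDOR_*` and `USB_PRODUCT_*` defines, which drivers then compare against the IDs read from an attached device. The `usb_id_match()` helper and `devs[]` table below are illustrative stand-ins, not the actual FreeBSD driver API; only the numeric IDs come from this file.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Generated from lines like "vendor ABOCOM 0x07b8 AboCom Systems". */
#define USB_VENDOR_CYPRESS 0x04b4
#define USB_VENDOR_ABOCOM 0x07b8
#define USB_PRODUCT_ABOCOM_RT2870 0x2870

/* Hypothetical match-table entry, in the style of a driver's devs[] array. */
struct usb_device_id {
	uint16_t vendor;
	uint16_t product;
};

static const struct usb_device_id devs[] = {
	{ USB_VENDOR_ABOCOM, USB_PRODUCT_ABOCOM_RT2870 },
};

/* Return nonzero when (vendor, product) appears in the table. */
static int
usb_id_match(const struct usb_device_id *tbl, size_t n,
    uint16_t vendor, uint16_t product)
{
	for (size_t i = 0; i < n; i++)
		if (tbl[i].vendor == vendor && tbl[i].product == product)
			return (1);
	return (0);
}
```

A driver's probe routine would walk such a table against the device descriptor's idVendor/idProduct pair and attach on a hit.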
/* 3Com products */
product 3COM HOMECONN 0x009d HomeConnect USB Camera
product 3COM 3CREB96 0x00a0 Bluetooth USB Adapter
product 3COM 3C19250 0x03e8 3C19250 Ethernet Adapter
product 3COM 3CRSHEW696 0x0a01 3CRSHEW696 Wireless Adapter
product 3COM 3C460 0x11f8 HomeConnect 3C460
product 3COM USR56K 0x3021 U.S.Robotics 56000 Voice FaxModem Pro
product 3COM 3C460B 0x4601 HomeConnect 3C460B
product 3COM2 3CRUSB10075 0xa727 3CRUSB10075
product 3COM3 AR5523_1 0x6893 AR5523
product 3COM3 AR5523_2 0x6895 AR5523
product 3COM3 AR5523_3 0x6897 AR5523
product 3COMUSR OFFICECONN 0x0082 3Com OfficeConnect Analog Modem
product 3COMUSR USRISDN 0x008f 3Com U.S. Robotics Pro ISDN TA
product 3COMUSR HOMECONN 0x009d 3Com HomeConnect Camera
product 3COMUSR USR56K 0x3021 U.S. Robotics 56000 Voice FaxModem Pro
/* Abbott Diabetics */
product ABBOTT STEREO_PLUG 0x3410 Abbott Diabetics Stereo Plug
product ABBOTT STRIP_PORT 0x3420 Abbott Diabetics Strip Port
/* ABIT products */
product ABIT AK_020 0x7d0e 3G modem
/* American Computer & Digital Components products */
product ACDC HUB 0x2315 USB Pen Drive HUB
product ACDC SECWRITE 0x2316 USB Pen Drive Secure Write
product ACDC PEN 0x2317 USB Pen Drive with Secure Write
/* AboCom products */
product ABOCOM XX1 0x110c XX1
product ABOCOM XX2 0x200c XX2
product ABOCOM RT2770 0x2770 RT2770
product ABOCOM RT2870 0x2870 RT2870
product ABOCOM RT3070 0x3070 RT3070
product ABOCOM RT3071 0x3071 RT3071
product ABOCOM RT3072 0x3072 RT3072
product ABOCOM2 RT2870_1 0x3c09 RT2870
product ABOCOM URE450 0x4000 URE450 Ethernet Adapter
product ABOCOM UFE1000 0x4002 UFE1000 Fast Ethernet Adapter
product ABOCOM DSB650TX_PNA 0x4003 1/10/100 Ethernet Adapter
product ABOCOM XX4 0x4004 XX4
product ABOCOM XX5 0x4007 XX5
product ABOCOM XX6 0x400b XX6
product ABOCOM XX7 0x400c XX7
product ABOCOM RTL8151 0x401a RTL8151
product ABOCOM XX8 0x4102 XX8
product ABOCOM XX9 0x4104 XX9
product ABOCOM UF200 0x420a UF200 Ethernet
product ABOCOM WL54 0x6001 WL54
product ABOCOM RTL8192CU 0x8178 RTL8192CU
product ABOCOM RTL8188EU 0x8179 RTL8188EU
product ABOCOM RTL8188CU_1 0x8188 RTL8188CU
product ABOCOM RTL8188CU_2 0x8189 RTL8188CU
product ABOCOM XX10 0xabc1 XX10
product ABOCOM BWU613 0xb000 BWU613
product ABOCOM HWU54DM 0xb21b HWU54DM
product ABOCOM RT2573_2 0xb21c RT2573
product ABOCOM RT2573_3 0xb21d RT2573
product ABOCOM RT2573_4 0xb21e RT2573
product ABOCOM WUG2700 0xb21f WUG2700
/* Acton Research Corp. */
product ACTON SPECTRAPRO 0x0100 FTDI compatible adapter
/* Accton products */
product ACCTON USB320_EC 0x1046 USB320-EC Ethernet Adapter
product ACCTON 2664W 0x3501 2664W
product ACCTON 111 0x3503 T-Sinus 111 Wireless Adapter
product ACCTON SMCWUSBG_NF 0x4505 SMCWUSB-G (no firmware)
product ACCTON SMCWUSBG 0x4506 SMCWUSB-G
product ACCTON SMCWUSBTG2_NF 0x4507 SMCWUSBT-G2 (no firmware)
product ACCTON SMCWUSBTG2 0x4508 SMCWUSBT-G2
product ACCTON PRISM_GT 0x4521 PrismGT USB 2.0 WLAN
product ACCTON SS1001 0x5046 SpeedStream Ethernet Adapter
product ACCTON RT2870_2 0x6618 RT2870
product ACCTON RT3070 0x7511 RT3070
product ACCTON RT2770 0x7512 RT2770
product ACCTON RT2870_3 0x7522 RT2870
product ACCTON RT2870_5 0x8522 RT2870
product ACCTON RT3070_4 0xa512 RT3070
product ACCTON RT2870_4 0xa618 RT2870
product ACCTON RT3070_1 0xa701 RT3070
product ACCTON RT3070_2 0xa702 RT3070
product ACCTON RT2870_1 0xb522 RT2870
product ACCTON RTL8192SU 0xc512 RTL8192SU
product ACCTON RT3070_3 0xc522 RT3070
product ACCTON RT3070_5 0xd522 RT3070
product ACCTON ZD1211B 0xe501 ZD1211B
product ACCTON WN7512 0xf522 WN7512
/* Aceeca products */
product ACEECA MEZ1000 0x0001 MEZ1000 RDA
/* Acer Communications & Multimedia (OEMed by Surecom) */
product ACERCM EP1427X2 0x0893 EP-1427X-2 Ethernet Adapter
/* Acer Labs products */
product ACERLABS M5632 0x5632 USB 2.0 Data Link
/* Acer Peripherals, Inc. products */
product ACERP ACERSCAN_C310U 0x12a6 Acerscan C310U
product ACERP ACERSCAN_320U 0x2022 Acerscan 320U
product ACERP ACERSCAN_640U 0x2040 Acerscan 640U
product ACERP ACERSCAN_620U 0x2060 Acerscan 620U
product ACERP ACERSCAN_4300U 0x20b0 Benq 3300U/4300U
product ACERP ACERSCAN_640BT 0x20be Acerscan 640BT
product ACERP ACERSCAN_1240U 0x20c0 Acerscan 1240U
product ACERP S81 0x4027 BenQ S81 phone
product ACERP H10 0x4068 AWL400 Wireless Adapter
product ACERP ATAPI 0x6003 ATA/ATAPI Adapter
product ACERP AWL300 0x9000 AWL300 Wireless Adapter
product ACERP AWL400 0x9001 AWL400 Wireless Adapter
/* Acer Warp products */
product ACERW WARPLINK 0x0204 Warplink
/* Actions products */
product ACTIONS MP4 0x1101 Actions MP4 Player
/* Actiontec, Inc. products */
product ACTIONTEC PRISM_25 0x0408 Prism2.5 Wireless Adapter
product ACTIONTEC PRISM_25A 0x0421 Prism2.5 Wireless Adapter A
product ACTIONTEC FREELAN 0x6106 ROPEX FreeLan 802.11b
product ACTIONTEC UAT1 0x7605 UAT1 Wireless Ethernet Adapter
/* ACTiSYS products */
product ACTISYS IR2000U 0x0011 ACT-IR2000U FIR
/* ActiveWire, Inc. products */
product ACTIVEWIRE IOBOARD 0x0100 I/O Board
product ACTIVEWIRE IOBOARD_FW1 0x0101 I/O Board, rev. 1 firmware
/* Adaptec products */
product ADAPTEC AWN8020 0x0020 AWN-8020 WLAN
/* Addtron products */
product ADDTRON AWU120 0xff31 AWU-120
/* ADLINK Technology products */
product ADLINK ND6530 0x6530 ND-6530 USB-Serial
/* ADMtek products */
product ADMTEK PEGASUSII_4 0x07c2 AN986A Ethernet
product ADMTEK PEGASUS 0x0986 AN986 Ethernet
product ADMTEK PEGASUSII 0x8511 AN8511 Ethernet
product ADMTEK PEGASUSII_2 0x8513 AN8513 Ethernet
product ADMTEK PEGASUSII_3 0x8515 AN8515 Ethernet
/* ADDON products */
/* PNY OEMs these */
product ADDON ATTACHE 0x1300 USB 2.0 Flash Drive
product ADDON A256MB 0x1400 Attache 256MB USB 2.0 Flash Drive
product ADDON DISKPRO512 0x1420 USB 2.0 Flash Drive (DANE-ELEC zMate 512MB USB flash drive)
/* Addonics products */
product ADDONICS2 CABLE_205 0xa001 Cable 205
/* ADS products */
product ADS UBS10BT 0x0008 UBS-10BT Ethernet
product ADS UBS10BTX 0x0009 UBS-10BT Ethernet
/* AEI products */
product AEI FASTETHERNET 0x1701 Fast Ethernet
/* Afatech Technologies, Inc. */
product AFATECH AFATECH1336 0x1336 Flash Card Reader
/* Agate Technologies products */
product AGATE QDRIVE 0x0378 Q-Drive
/* AGFA products */
product AGFA SNAPSCAN1212U 0x0001 SnapScan 1212U
product AGFA SNAPSCAN1236U 0x0002 SnapScan 1236U
product AGFA SNAPSCANTOUCH 0x0100 SnapScan Touch
product AGFA SNAPSCAN1212U2 0x2061 SnapScan 1212U
product AGFA SNAPSCANE40 0x208d SnapScan e40
product AGFA SNAPSCANE50 0x208f SnapScan e50
product AGFA SNAPSCANE20 0x2091 SnapScan e20
product AGFA SNAPSCANE25 0x2095 SnapScan e25
product AGFA SNAPSCANE26 0x2097 SnapScan e26
product AGFA SNAPSCANE52 0x20fd SnapScan e52
/* Ain Communication Technology products */
product AINCOMM AWU2000B 0x1001 AWU2000B Wireless Adapter
/* AIPTEK products */
product AIPTEK POCKETCAM3M 0x2011 PocketCAM 3Mega
product AIPTEK2 PENCAM_MEGA_1_3 0x504a PenCam Mega 1.3
product AIPTEK2 SUNPLUS_TECH 0x0c15 Sunplus Technology Inc.
/* AirPlus products */
product AIRPLUS MCD650 0x3198 MCD650 modem
/* AirPrime products */
product AIRPRIME PC5220 0x0112 CDMA Wireless PC Card
product AIRPRIME USB308 0x68a3 USB308 HSPA+ USB Modem
product AIRPRIME AC313U 0x68aa Sierra Wireless AirCard 313U
/* AirTies products */
product AIRTIES RT3070 0x2310 RT3070
/* AKS products */
product AKS USBHASP 0x0001 USB-HASP 0.06
/* Alcatel products */
product ALCATEL OT535 0x02df One Touch 535/735
/* Alcor Micro, Inc. products */
product ALCOR2 KBD_HUB 0x2802 Kbd Hub
product ALCOR DUMMY 0x0000 Dummy product
product ALCOR SDCR_6335 0x6335 SD/MMC Card Reader
product ALCOR SDCR_6362 0x6362 SD/MMC Card Reader
product ALCOR SDCR_6366 0x6366 SD/MMC Card Reader
product ALCOR TRANSCEND 0x6387 Transcend JetFlash Drive
product ALCOR AU6390 0x6390 AU6390 USB-IDE converter
product ALCOR MA_KBD_HUB 0x9213 MacAlly Kbd Hub
product ALCOR AU9814 0x9215 AU9814 Hub
product ALCOR UMCR_9361 0x9361 USB Multimedia Card Reader
product ALCOR SM_KBD 0x9410 MicroConnectors/StrongMan Keyboard
product ALCOR NEC_KBD_HUB 0x9472 NEC Kbd Hub
product ALCOR AU9720 0x9720 USB2 - RS-232
/* Alink products */
product ALINK 3G 0x9000 3G modem
product ALINK SIM7600E 0x9001 LTE modem
product ALINK 3GU 0x9200 3G modem
product ALINK DWM652U5 0xce16 DWM-652
/* Altec Lansing products */
product ALTEC ADA70 0x0070 ADA70 Speakers
product ALTEC ASC495 0xff05 ASC495 Speakers
/* Alti-2 products */
product ALTI2 N3 0x6001 FTDI compatible adapter
/* Allied Telesyn International products */
product ALLIEDTELESYN ATUSB100 0xb100 AT-USB100
/* ALLWIN Tech products */
product ALLWIN RT2070 0x2070 RT2070
product ALLWIN RT2770 0x2770 RT2770
product ALLWIN RT2870 0x2870 RT2870
product ALLWIN RT3070 0x3070 RT3070
product ALLWIN RT3071 0x3071 RT3071
product ALLWIN RT3072 0x3072 RT3072
product ALLWIN RT3572 0x3572 RT3572
/* AlphaSmart, Inc. products */
product ALPHASMART DANA_KB 0xdbac AlphaSmart Dana Keyboard
product ALPHASMART DANA_SYNC 0xdf00 AlphaSmart Dana HotSync
/* Amoi products */
product AMOI H01 0x0800 H01 3G modem
product AMOI H01A 0x7002 H01A 3G modem
product AMOI H02 0x0802 H02 3G modem
/* American Power Conversion products */
product APC UPS 0x0002 Uninterruptible Power Supply
/* Ambit Microsystems products */
product AMBIT WLAN 0x0302 WLAN
product AMBIT NTL_250 0x6098 NTL 250 cable modem
/* Apacer products */
product APACER HT202 0xb113 USB 2.0 Flash Drive
/* Amigo Technology products */
product AMIGO RT2870_1 0x9031 RT2870
product AMIGO RT2870_2 0x9041 RT2870
/* AMIT products */
product AMIT CGWLUSB2GO 0x0002 CG-WLUSB2GO
product AMIT CGWLUSB2GNR 0x0008 CG-WLUSB2GNR
product AMIT RT2870_1 0x0012 RT2870
/* AMIT(2) products */
product AMIT2 RT2870 0x0008 RT2870
/* Analog Devices products */
product ANALOGDEVICES GNICE 0xf000 FTDI compatible adapter
product ANALOGDEVICES GNICEPLUS 0xf001 FTDI compatible adapter
/* Anchor products */
product ANCHOR SERIAL 0x2008 Serial
product ANCHOR EZUSB 0x2131 EZUSB
product ANCHOR EZLINK 0x2720 EZLINK
/* AnyData products */
product ANYDATA ADU_620UW 0x6202 CDMA 2000 EV-DO USB Modem
product ANYDATA ADU_E100X 0x6501 CDMA 2000 1xRTT/EV-DO USB Modem
product ANYDATA ADU_500A 0x6502 CDMA 2000 EV-DO USB Modem
/* AOX, Inc. products */
product AOX USB101 0x0008 Ethernet
/* Apple Computer products */
product APPLE DUMMY 0x0000 Dummy product
product APPLE IMAC_KBD 0x0201 USB iMac Keyboard
product APPLE KBD 0x0202 USB Keyboard M2452
product APPLE EXT_KBD 0x020c Apple Extended USB Keyboard
/* MacbookAir, aka wellspring */
product APPLE WELLSPRING_ANSI 0x0223 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING_ISO 0x0224 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING_JIS 0x0225 Apple Internal Keyboard/Trackpad
/* MacbookProPenryn, aka wellspring2 */
product APPLE WELLSPRING2_ANSI 0x0230 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING2_ISO 0x0231 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING2_JIS 0x0232 Apple Internal Keyboard/Trackpad
/* Macbook5,1 (unibody), aka wellspring3 */
product APPLE WELLSPRING3_ANSI 0x0236 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING3_ISO 0x0237 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING3_JIS 0x0238 Apple Internal Keyboard/Trackpad
/* MacbookAir3,2 (unibody), aka wellspring4 */
product APPLE WELLSPRING4_ANSI 0x023f Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING4_ISO 0x0240 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING4_JIS 0x0241 Apple Internal Keyboard/Trackpad
/* MacbookAir3,1 (unibody), aka wellspring4 */
product APPLE WELLSPRING4A_ANSI 0x0242 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING4A_ISO 0x0243 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING4A_JIS 0x0244 Apple Internal Keyboard/Trackpad
/* Macbook8 (unibody, March 2011) */
product APPLE WELLSPRING5_ANSI 0x0245 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING5_ISO 0x0246 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING5_JIS 0x0247 Apple Internal Keyboard/Trackpad
/* MacbookAir4,1 (unibody, July 2011) */
product APPLE WELLSPRING6A_ANSI 0x0249 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING6A_ISO 0x024a Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING6A_JIS 0x024b Apple Internal Keyboard/Trackpad
/* MacbookAir4,2 (unibody, July 2011) */
product APPLE WELLSPRING6_ANSI 0x024c Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING6_ISO 0x024d Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING6_JIS 0x024e Apple Internal Keyboard/Trackpad
/* Macbook8,2 (unibody) */
product APPLE WELLSPRING5A_ANSI 0x0252 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING5A_ISO 0x0253 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING5A_JIS 0x0254 Apple Internal Keyboard/Trackpad
/* MacbookPro10,1 (unibody, June 2012) */
product APPLE WELLSPRING7_ANSI 0x0262 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING7_ISO 0x0263 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING7_JIS 0x0264 Apple Internal Keyboard/Trackpad
/* MacbookPro10,2 (unibody, October 2012) */
product APPLE WELLSPRING7A_ANSI 0x0259 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING7A_ISO 0x025a Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING7A_JIS 0x025b Apple Internal Keyboard/Trackpad
/* MacbookAir6,2 (unibody, June 2013) */
product APPLE WELLSPRING8_ANSI 0x0290 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING8_ISO 0x0291 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING8_JIS 0x0292 Apple Internal Keyboard/Trackpad
/* MacbookPro12,1 */
product APPLE WELLSPRING9_ANSI 0x0272 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING9_ISO 0x0273 Apple Internal Keyboard/Trackpad
product APPLE WELLSPRING9_JIS 0x0274 Apple Internal Keyboard/Trackpad
product APPLE MOUSE 0x0301 Mouse M4848
product APPLE OPTMOUSE 0x0302 Optical mouse
product APPLE MIGHTYMOUSE 0x0304 Mighty Mouse
product APPLE KBD_HUB 0x1001 Hub in Apple USB Keyboard
product APPLE EXT_KBD_HUB 0x1003 Hub in Apple Extended USB Keyboard
product APPLE SPEAKERS 0x1101 Speakers
product APPLE IPOD 0x1201 iPod
product APPLE IPOD2G 0x1202 iPod 2G
product APPLE IPOD3G 0x1203 iPod 3G
product APPLE IPOD_04 0x1204 iPod '04'
product APPLE IPODMINI 0x1205 iPod Mini
product APPLE IPOD_06 0x1206 iPod '06'
product APPLE IPOD_07 0x1207 iPod '07'
product APPLE IPOD_08 0x1208 iPod '08'
product APPLE IPODVIDEO 0x1209 iPod Video
product APPLE IPODNANO 0x120a iPod Nano
product APPLE IPHONE 0x1290 iPhone
product APPLE IPOD_TOUCH 0x1291 iPod Touch
product APPLE IPHONE_3G 0x1292 iPhone 3G
product APPLE IPHONE_3GS 0x1294 iPhone 3GS
product APPLE IPHONE_4 0x1297 iPhone 4
product APPLE IPHONE_4S 0x12a0 iPhone 4S
product APPLE IPHONE_5 0x12a8 iPhone 5
product APPLE IPAD 0x129a iPad
product APPLE ETHERNET 0x1402 Ethernet A1277
/* Arkmicro Technologies */
product ARKMICRO ARK3116 0x0232 ARK3116 Serial
/* Asahi Optical products */
product ASAHIOPTICAL OPTIO230 0x0004 Digital camera
product ASAHIOPTICAL OPTIO330 0x0006 Digital camera
/* Asante products */
product ASANTE EA 0x1427 Ethernet
/* ASIX Electronics products */
product ASIX AX88172 0x1720 10/100 Ethernet
product ASIX AX88178 0x1780 AX88178
product ASIX AX88178A 0x178a AX88178A USB 2.0 10/100/1000 Ethernet
product ASIX AX88179 0x1790 AX88179 USB 3.0 10/100/1000 Ethernet
product ASIX AX88772 0x7720 AX88772
product ASIX AX88772A 0x772a AX88772A USB 2.0 10/100 Ethernet
product ASIX AX88772B 0x772b AX88772B USB 2.0 10/100 Ethernet
product ASIX AX88772B_1 0x7e2b AX88772B USB 2.0 10/100 Ethernet
/* ASUS products */
product ASUS2 USBN11 0x0b05 USB-N11
product ASUS RT2570 0x1706 RT2500USB Wireless Adapter
product ASUS WL167G 0x1707 WL-167g Wireless Adapter
product ASUS WL159G 0x170c WL-159g
product ASUS A9T_WIFI 0x171b A9T wireless
product ASUS P5B_WIFI 0x171d P5B wireless
product ASUS RT2573_1 0x1723 RT2573
product ASUS RT2573_2 0x1724 RT2573
product ASUS LCM 0x1726 LCM display
product ASUS RT2870_1 0x1731 RT2870
product ASUS RT2870_2 0x1732 RT2870
product ASUS RT2870_3 0x1742 RT2870
product ASUS RT2870_4 0x1760 RT2870
product ASUS RT2870_5 0x1761 RT2870
product ASUS USBN13 0x1784 USB-N13
product ASUS USBN10 0x1786 USB-N10
product ASUS RT3070_1 0x1790 RT3070
product ASUS RTL8192SU 0x1791 RTL8192SU
product ASUS USB_N53 0x179d ASUS Black Diamond Dual Band USB-N53
product ASUS RTL8192CU 0x17ab RTL8192CU
product ASUS USBN66 0x17ad USB-N66
product ASUS USBN10NANO 0x17ba USB-N10 Nano
product ASUS USBAC51 0x17d1 USB-AC51
product ASUS USBAC56 0x17d2 USB-AC56
product ASUS A730W 0x4202 ASUS MyPal A730W
product ASUS P535 0x420f ASUS P535 PDA
product ASUS GMSC 0x422f ASUS Generic Mass Storage
/* ATen products */
product ATEN UC1284 0x2001 Parallel printer
product ATEN UC10T 0x2002 10Mbps Ethernet
product ATEN UC110T 0x2007 UC-110T Ethernet
product ATEN UC232A 0x2008 Serial
product ATEN UC210T 0x2009 UC-210T Ethernet
product ATEN DSB650C 0x4000 DSB-650C
/* ATP Electronics products */
product ATP EUSB 0xaf01 ATP IG eUSB SSD
/* Atheros Communications products */
product ATHEROS AR5523 0x0001 AR5523
product ATHEROS AR5523_NF 0x0002 AR5523 (no firmware)
product ATHEROS2 AR5523_1 0x0001 AR5523
product ATHEROS2 AR5523_1_NF 0x0002 AR5523 (no firmware)
product ATHEROS2 AR5523_2 0x0003 AR5523
product ATHEROS2 AR5523_2_NF 0x0004 AR5523 (no firmware)
product ATHEROS2 AR5523_3 0x0005 AR5523
product ATHEROS2 AR5523_3_NF 0x0006 AR5523 (no firmware)
product ATHEROS2 TG121N 0x1001 TG121N
product ATHEROS2 WN821NV2 0x1002 WN821NV2
product ATHEROS2 3CRUSBN275 0x1010 3CRUSBN275
product ATHEROS2 WN612 0x1011 WN612
product ATHEROS2 AR9170 0x9170 AR9170
/* Atmel Comp. products */
product ATMEL STK541 0x2109 Zigbee Controller
product ATMEL UHB124 0x3301 AT43301 USB 1.1 Hub
product ATMEL DWL120 0x7603 DWL-120 Wireless Adapter
product ATMEL BW002 0x7605 BW002 Wireless Adapter
product ATMEL WL1130USB 0x7613 WL-1130 USB
product ATMEL AT76C505A 0x7614 AT76c505a Wireless Adapter
/* AuthenTec products */
product AUTHENTEC AES1610 0x1600 AES1610 Fingerprint Sensor
/* Avision products */
product AVISION 1200U 0x0268 1200U scanner
/* AVM products */
product AVM FRITZWLAN 0x8401 FRITZ!WLAN N
/* Axesstel products */
product AXESSTEL DATAMODEM 0x1000 Data Modem
/* AzureWave products */
product AZUREWAVE RT2870_1 0x3247 RT2870
product AZUREWAVE RT2870_2 0x3262 RT2870
product AZUREWAVE RT3070_1 0x3273 RT3070
product AZUREWAVE RT3070_2 0x3284 RT3070
product AZUREWAVE RT3070_3 0x3305 RT3070
product AZUREWAVE RTL8188CU 0x3357 RTL8188CU
product AZUREWAVE RTL8188CE_1 0x3358 RTL8188CE
product AZUREWAVE RTL8188CE_2 0x3359 RTL8188CE
product AZUREWAVE RTL8192SU_1 0x3306 RTL8192SU
product AZUREWAVE RTL8192SU_2 0x3309 RTL8192SU
product AZUREWAVE RTL8192SU_3 0x3310 RTL8192SU
product AZUREWAVE RTL8192SU_4 0x3311 RTL8192SU
product AZUREWAVE RTL8192SU_5 0x3325 RTL8192SU
/* Baltech products */
product BALTECH CARDREADER 0x9999 Card reader
/* Bayer products */
product BAYER CONTOUR_CABLE 0x6001 FTDI compatible adapter
/* B&B Electronics products */
product BBELECTRONICS USOTL4 0xac01 RS-422/485
product BBELECTRONICS 232USB9M 0xac27 FTDI compatible adapter
product BBELECTRONICS 485USB9F_2W 0xac25 FTDI compatible adapter
product BBELECTRONICS 485USB9F_4W 0xac26 FTDI compatible adapter
product BBELECTRONICS 485USBTB_2W 0xac33 FTDI compatible adapter
product BBELECTRONICS 485USBTB_4W 0xac34 FTDI compatible adapter
product BBELECTRONICS TTL3USB9M 0xac50 FTDI compatible adapter
product BBELECTRONICS TTL5USB9M 0xac49 FTDI compatible adapter
product BBELECTRONICS USO9ML2 0xac03 FTDI compatible adapter
product BBELECTRONICS USO9ML2DR 0xac17 FTDI compatible adapter
product BBELECTRONICS USO9ML2DR_2 0xac16 FTDI compatible adapter
product BBELECTRONICS USOPTL4 0xac11 FTDI compatible adapter
product BBELECTRONICS USOPTL4DR 0xac19 FTDI compatible adapter
product BBELECTRONICS USOPTL4DR2 0xac18 FTDI compatible adapter
product BBELECTRONICS USPTL4 0xac12 FTDI compatible adapter
product BBELECTRONICS USTL4 0xac02 FTDI compatible adapter
product BBELECTRONICS ZZ_PROG1_USB 0xba02 FTDI compatible adapter
/* Belkin products */
/*product BELKIN F5U111 0x???? F5U111 Ethernet*/
product BELKIN F5D6050 0x0050 F5D6050 802.11b Wireless Adapter
product BELKIN FBT001V 0x0081 FBT001v2 Bluetooth
product BELKIN FBT003V 0x0084 FBT003v2 Bluetooth
product BELKIN F5U103 0x0103 F5U103 Serial
product BELKIN F5U109 0x0109 F5U109 Serial
product BELKIN USB2SCSI 0x0115 USB to SCSI
product BELKIN F8T012 0x0121 F8T012xx1 Bluetooth USB Adapter
product BELKIN USB2LAN 0x0121 USB to LAN
product BELKIN F5U208 0x0208 F5U208 VideoBus II
product BELKIN F5U237 0x0237 F5U237 USB 2.0 7-Port Hub
product BELKIN F5U257 0x0257 F5U257 Serial
product BELKIN F6H375USB 0x0375 F6H375-USB
product BELKIN F5U409 0x0409 F5U409 Serial
product BELKIN F6C550AVR 0x0551 F6C550-AVR UPS
product BELKIN F6C1250TWRK 0x0750 F6C1250-TW-RK
product BELKIN F6C1500TWRK 0x0751 F6C1500-TW-RK
product BELKIN F6C900UNV 0x0900 F6C900-UNV
product BELKIN F6C100UNV 0x0910 F6C100-UNV
product BELKIN F6C120UNV 0x0912 F6C120-UNV UPS
product BELKIN F6C800UNV 0x0980 F6C800-UNV
product BELKIN F6C1100UNV 0x1100 F6C1100-UNV, F6C1200-UNV
product BELKIN F5U120 0x1203 F5U120-PC Hub
product BELKIN RTL8188CU 0x1102 RTL8188CU Wireless Adapter
product BELKIN F9L1103 0x1103 F9L1103 Wireless Adapter
product BELKIN RTL8192CU 0x2102 RTL8192CU Wireless Adapter
product BELKIN F7D2102 0x2103 F7D2102 Wireless Adapter
product BELKIN F5U258 0x258a F5U258 Host to Host cable
product BELKIN ZD1211B 0x4050 ZD1211B
product BELKIN F5D5055 0x5055 F5D5055
product BELKIN F5D7050 0x7050 F5D7050 Wireless Adapter
product BELKIN F5D7051 0x7051 F5D7051 54g USB Network Adapter
product BELKIN F5D7050A 0x705a F5D7050A Wireless Adapter
/* Also sold as 'Ativa 802.11g wireless card' */
product BELKIN F5D7050_V4000 0x705c F5D7050 v4000 Wireless Adapter
product BELKIN F5D7050E 0x705e F5D7050E Wireless Adapter
product BELKIN RT2870_1 0x8053 RT2870
product BELKIN RT2870_2 0x805c RT2870
product BELKIN F5D8053V3 0x815c F5D8053 v3
product BELKIN RTL8192SU_1 0x815f RTL8192SU
product BELKIN RTL8192SU_2 0x845a RTL8192SU
product BELKIN RTL8192SU_3 0x945a RTL8192SU
product BELKIN F5D8055 0x825a F5D8055
product BELKIN F5D8055V2 0x825b F5D8055 v2
product BELKIN F5D9050V3 0x905b F5D9050 ver 3 Wireless Adapter
product BELKIN2 F5U002 0x0002 F5U002 Parallel printer
product BELKIN F6D4050V1 0x935a F6D4050 v1
product BELKIN F6D4050V2 0x935b F6D4050 v2
/* Billionton products */
product BILLIONTON USB100 0x0986 USB100N 10/100 FastEthernet
product BILLIONTON USBLP100 0x0987 USB100LP
product BILLIONTON USBEL100 0x0988 USB100EL
product BILLIONTON USBE100 0x8511 USBE100
product BILLIONTON USB2AR 0x90ff USB2AR Ethernet
/* Broadcom products */
product BROADCOM BCM2033 0x2033 BCM2033 Bluetooth USB dongle
/* Brother Industries products */
product BROTHER HL1050 0x0002 HL-1050 laser printer
product BROTHER MFC8600_9650 0x0100 MFC8600/9650 multifunction device
/* Behavior Technology Computer products */
product BTC BTC6100 0x5550 6100C Keyboard
product BTC BTC7932 0x6782 Keyboard with mouse port
/* CACE Technologies products */
product CACE AIRPCAPNX 0x0300 AirPcap NX
/* Canon, Inc. products */
product CANON N656U 0x2206 CanoScan N656U
product CANON N1220U 0x2207 CanoScan N1220U
product CANON D660U 0x2208 CanoScan D660U
product CANON N676U 0x220d CanoScan N676U
product CANON N1240U 0x220e CanoScan N1240U
product CANON LIDE25 0x2220 CanoScan LIDE 25
product CANON S10 0x3041 PowerShot S10
product CANON S100 0x3045 PowerShot S100
product CANON S200 0x3065 PowerShot S200
product CANON REBELXT 0x30ef Digital Rebel XT
/* CATC products */
product CATC NETMATE 0x000a Netmate Ethernet
product CATC NETMATE2 0x000c Netmate2 Ethernet
product CATC CHIEF 0x000d USB Chief Bus & Protocol Analyzer
product CATC ANDROMEDA 0x1237 Andromeda hub
/* CASIO products */
product CASIO QV_DIGICAM 0x1001 QV DigiCam
product CASIO EXS880 0x1105 Exilim EX-S880
product CASIO BE300 0x2002 BE-300 PDA
product CASIO NAMELAND 0x4001 CASIO Nameland EZ-USB
/* CCYU products */
product CCYU ED1064 0x2136 EasyDisk ED1064
/* Century products */
product CENTURY EX35QUAT 0x011e Century USB Disk Enclosure
product CENTURY EX35SW4_SB4 0x011f Century USB Disk Enclosure
/* Cherry products */
product CHERRY MY3000KBD 0x0001 My3000 keyboard
product CHERRY MY3000HUB 0x0003 My3000 hub
product CHERRY CYBOARD 0x0004 CyBoard Keyboard
/* Chic Technology products */
product CHIC MOUSE1 0x0001 mouse
product CHIC CYPRESS 0x0003 Cypress USB Mouse
/* Chicony products */
product CHICONY KB8933 0x0001 KB-8933 keyboard
product CHICONY KU0325 0x0116 KU-0325 keyboard
product CHICONY CNF7129 0xb071 Notebook Web Camera
product CHICONY HDUVCCAM 0xb40a HD UVC WebCam
product CHICONY RTL8188CUS_1 0xaff7 RTL8188CUS
product CHICONY RTL8188CUS_2 0xaff8 RTL8188CUS
product CHICONY RTL8188CUS_3 0xaff9 RTL8188CUS
product CHICONY RTL8188CUS_4 0xaffa RTL8188CUS
product CHICONY RTL8188CUS_5 0xaffb RTL8188CUS
product CHICONY2 TWINKLECAM 0x600d TwinkleCam USB camera
/* CH Products */
product CHPRODUCTS PROTHROTTLE 0x00f1 Pro Throttle
product CHPRODUCTS PROPEDALS 0x00f2 Pro Pedals
product CHPRODUCTS FIGHTERSTICK 0x00f3 Fighterstick
product CHPRODUCTS FLIGHTYOKE 0x00ff Flight Sim Yoke
/* Cisco-Linksys products */
product CISCOLINKSYS WUSB54AG 0x000c WUSB54AG Wireless Adapter
product CISCOLINKSYS WUSB54G 0x000d WUSB54G Wireless Adapter
product CISCOLINKSYS WUSB54GP 0x0011 WUSB54GP Wireless Adapter
product CISCOLINKSYS USB200MV2 0x0018 USB200M v2
product CISCOLINKSYS HU200TS 0x001a HU200TS Wireless Adapter
product CISCOLINKSYS WUSB54GC 0x0020 WUSB54GC
product CISCOLINKSYS WUSB54GR 0x0023 WUSB54GR
product CISCOLINKSYS WUSBF54G 0x0024 WUSBF54G
product CISCOLINKSYS AE1000 0x002f AE1000
product CISCOLINKSYS WUSB6300 0x003f WUSB6300
product CISCOLINKSYS USB3GIGV1 0x0041 USB3GIGV1 USB Ethernet Adapter
product CISCOLINKSYS2 RT3070 0x4001 RT3070
product CISCOLINKSYS3 RT3070 0x0101 RT3070
/* Clipsal products */
product CLIPSAL 560884 0x0101 560884 C-Bus Audio Matrix Switch
product CLIPSAL 5500PACA 0x0201 5500PACA C-Bus Pascal Automation Controller
product CLIPSAL 5800PC 0x0301 5800PC C-Bus Wireless Interface
product CLIPSAL 5500PCU 0x0303 5500PCU C-Bus Interface
product CLIPSAL 5000CT2 0x0304 5000CT2 C-Bus Touch Screen
product CLIPSAL C5000CT2 0x0305 C5000CT2 C-Bus Touch Screen
product CLIPSAL L51xx 0x0401 L51xx C-Bus Dimmer
/* C-Media products */
product CMEDIA CM6206 0x0102 CM106 compatible sound device
/* CMOTECH products */
product CMOTECH CNU510 0x5141 CDMA Technologies USB modem
product CMOTECH CNU550 0x5543 CDMA 2000 1xRTT/1xEVDO USB modem
product CMOTECH CGU628 0x6006 CGU-628
product CMOTECH CDMA_MODEM1 0x6280 CDMA Technologies USB modem
product CMOTECH DISK 0xf000 disk mode
/* Compaq products */
product COMPAQ IPAQPOCKETPC 0x0003 iPAQ PocketPC
product COMPAQ PJB100 0x504a Personal Jukebox PJB100
product COMPAQ IPAQLINUX 0x505a iPAQ Linux
/* Composite Corp products (look the same as "TANGTOP") */
product COMPOSITE USBPS2 0x0001 USB to PS2 Adaptor
/* Conceptronic products */
product CONCEPTRONIC PRISM_GT 0x3762 PrismGT USB 2.0 WLAN
product CONCEPTRONIC C11U 0x7100 C11U
product CONCEPTRONIC WL210 0x7110 WL-210
product CONCEPTRONIC AR5523_1 0x7801 AR5523
product CONCEPTRONIC AR5523_1_NF 0x7802 AR5523 (no firmware)
product CONCEPTRONIC AR5523_2 0x7811 AR5523
product CONCEPTRONIC AR5523_2_NF 0x7812 AR5523 (no firmware)
product CONCEPTRONIC2 RTL8192SU_1 0x3300 RTL8192SU
product CONCEPTRONIC2 RTL8192SU_2 0x3301 RTL8192SU
product CONCEPTRONIC2 RTL8192SU_3 0x3302 RTL8192SU
product CONCEPTRONIC2 C54RU 0x3c02 C54RU WLAN
product CONCEPTRONIC2 C54RU2 0x3c22 C54RU
product CONCEPTRONIC2 RT3070_1 0x3c08 RT3070
product CONCEPTRONIC2 RT3070_2 0x3c11 RT3070
product CONCEPTRONIC2 VIGORN61 0x3c25 VIGORN61
product CONCEPTRONIC2 RT2870_1 0x3c06 RT2870
product CONCEPTRONIC2 RT2870_2 0x3c07 RT2870
product CONCEPTRONIC2 RT2870_7 0x3c09 RT2870
product CONCEPTRONIC2 RT2870_8 0x3c12 RT2870
product CONCEPTRONIC2 RT2870_3 0x3c23 RT2870
product CONCEPTRONIC2 RT2870_4 0x3c25 RT2870
product CONCEPTRONIC2 RT2870_5 0x3c27 RT2870
product CONCEPTRONIC2 RT2870_6 0x3c28 RT2870
/* Connectix products */
product CONNECTIX QUICKCAM 0x0001 QuickCam
/* Contec products */
product CONTEC COM1USBH 0x8311 FTDI compatible adapter
/* Corega products */
product COREGA ETHER_USB_T 0x0001 Ether USB-T
product COREGA FETHER_USB_TX 0x0004 FEther USB-TX
product COREGA WLAN_USB_USB_11 0x000c WirelessLAN USB-11
product COREGA FETHER_USB_TXS 0x000d FEther USB-TXS
product COREGA WLANUSB 0x0012 Wireless LAN Stick-11
product COREGA FETHER_USB2_TX 0x0017 FEther USB2-TX
product COREGA WLUSB_11_KEY 0x001a ULUSB-11 Key
product COREGA CGUSBRS232R 0x002a CG-USBRS232R
product COREGA CGWLUSB2GL 0x002d CG-WLUSB2GL
product COREGA CGWLUSB2GPX 0x002e CG-WLUSB2GPX
product COREGA RT2870_1 0x002f RT2870
product COREGA RT2870_2 0x003c RT2870
product COREGA RT2870_3 0x003f RT2870
product COREGA RT3070 0x0041 RT3070
product COREGA CGWLUSB300GNM 0x0042 CG-WLUSB300GNM
product COREGA RTL8192SU 0x0047 RTL8192SU
product COREGA RTL8192CU 0x0056 RTL8192CU
product COREGA WLUSB_11_STICK 0x7613 WLAN USB Stick 11
product COREGA FETHER_USB_TXC 0x9601 FEther USB-TXC
/* Corsair products */
product CORSAIR K60 0x0a60 Corsair Vengeance K60 keyboard
product CORSAIR K68 0x1b3f Corsair Gaming K68 keyboard
product CORSAIR K70 0x1b09 Corsair Vengeance K70 keyboard
product CORSAIR K70_RGB 0x1b13 Corsair K70 RGB Keyboard
product CORSAIR STRAFE 0x1b15 Corsair STRAFE Gaming keyboard
product CORSAIR STRAFE2 0x1b44 Corsair STRAFE Gaming keyboard
/* Creative products */
product CREATIVE NOMAD_II 0x1002 Nomad II MP3 player
product CREATIVE NOMAD_IIMG 0x4004 Nomad II MG
product CREATIVE NOMAD 0x4106 Nomad
product CREATIVE2 VOIP_BLASTER 0x0258 Voip Blaster
product CREATIVE3 OPTICAL_MOUSE 0x0001 Notebook Optical Mouse
/* Cambridge Silicon Radio Ltd. products */
product CSR BT_DONGLE 0x0001 Bluetooth USB dongle
product CSR CSRDFU 0xffff USB Bluetooth Device in DFU State
/* Chipsbank Microelectronics Co., Ltd */
product CHIPSBANK USBMEMSTICK 0x6025 CBM2080 Flash drive controller
product CHIPSBANK USBMEMSTICK1 0x6026 CBM1180 Flash drive controller
/* CTX products */
product CTX EX1300 0x9999 Ex1300 hub
/* Curitel products */
product CURITEL HX550C 0x1101 CDMA 2000 1xRTT USB modem (HX-550C)
product CURITEL HX57XB 0x2101 CDMA 2000 1xRTT USB modem (HX-570/575B/PR-600)
product CURITEL PC5740 0x3701 Broadband Wireless modem
product CURITEL UM150 0x3711 EVDO modem
product CURITEL UM175 0x3714 EVDO modem
/* CyberPower products */
product CYBERPOWER BC900D 0x0005 900AVR/BC900D, CP1200AVR/BC1200D
product CYBERPOWER 1500CAVRLCD 0x0501 1500CAVRLCD
product CYBERPOWER OR2200LCDRM2U 0x0601 OR2200LCDRM2U
/* CyberTAN Technology products */
product CYBERTAN TG54USB 0x1666 TG54USB
product CYBERTAN RT2870 0x1828 RT2870
/* Cypress Semiconductor products */
product CYPRESS MOUSE 0x0001 mouse
product CYPRESS THERMO 0x0002 thermometer
product CYPRESS WISPY1A 0x0bad MetaGeek Wi-Spy
product CYPRESS KBDHUB 0x0101 Keyboard/Hub
product CYPRESS FMRADIO 0x1002 FM Radio
product CYPRESS IKARILASER 0x121f Ikari Laser SteelSeries ApS
product CYPRESS USBRS232 0x5500 USB-RS232 Interface
product CYPRESS SLIM_HUB 0x6560 Slim Hub
product CYPRESS XX6830XX 0x6830 PATA Storage Device
product CYPRESS SILVERSHIELD 0xfd13 Gembird Silver Shield PM
/* Daisy Technology products */
product DAISY DMC 0x6901 USB MultiMedia Reader
/* Dallas Semiconductor products */
product DALLAS J6502 0x4201 J-6502 speakers
/* DataApex products */
product DATAAPEX MULTICOM 0xead6 MultiCom
/* Dell products */
product DELL PORT 0x0058 Port Replicator
product DELL AIO926 0x5115 Photo AIO Printer 926
product DELL BC02 0x8000 BC02 Bluetooth USB Adapter
product DELL PRISM_GT_1 0x8102 PrismGT USB 2.0 WLAN
product DELL TM350 0x8103 TrueMobile 350 Bluetooth USB Adapter
product DELL PRISM_GT_2 0x8104 PrismGT USB 2.0 WLAN
product DELL U5700 0x8114 Dell 5700 3G
product DELL U5500 0x8115 Dell 5500 3G
product DELL U5505 0x8116 Dell 5505 3G
product DELL U5700_2 0x8117 Dell 5700 3G
product DELL U5510 0x8118 Dell 5510 3G
product DELL U5700_3 0x8128 Dell 5700 3G
product DELL U5700_4 0x8129 Dell 5700 3G
product DELL U5720 0x8133 Dell 5720 3G
product DELL U5720_2 0x8134 Dell 5720 3G
product DELL U740 0x8135 Dell U740 CDMA
product DELL U5520 0x8136 Dell 5520 3G
product DELL U5520_2 0x8137 Dell 5520 3G
product DELL U5520_3 0x8138 Dell 5520 3G
product DELL U5730 0x8180 Dell 5730 3G
product DELL U5730_2 0x8181 Dell 5730 3G
product DELL U5730_3 0x8182 Dell 5730 3G
product DELL DW700 0x9500 Dell DW700 GPS
product DELL2 VARIOUS_UPS 0xffff Various UPS Models
/* DeLorme Publishing products */
product DELORME EARTHMATE 0x0100 Earthmate GPS
/* Desknote products */
product DESKNOTE UCR_61S2B 0x0c55 UCR-61S2B
/* Diamond products */
product DIAMOND RIO500USB 0x0001 Rio 500 USB
/* Dick Smith Electronics (really C-Net) products */
product DICKSMITH RT2573 0x9022 RT2573
product DICKSMITH CWD854F 0x9032 C-Net CWD-854 rev F
/* Digi International products */
product DIGI ACCELEPORT2 0x0002 AccelePort USB 2
product DIGI ACCELEPORT4 0x0004 AccelePort USB 4
product DIGI ACCELEPORT8 0x0008 AccelePort USB 8
/* Digianswer A/S products */
product DIGIANSWER ZIGBEE802154 0x000a ZigBee/802.15.4 MAC
/* D-Link products */
/*product DLINK DSBS25 0x0100 DSB-S25 serial*/
product DLINK DUBE100 0x1a00 10/100 Ethernet
product DLINK DUBE100C1 0x1a02 DUB-E100 rev C1
product DLINK DSB650TX4 0x200c 10/100 Ethernet
product DLINK DWL120E 0x3200 DWL-120 rev E
product DLINK RTL8192CU_1 0x3307 RTL8192CU
product DLINK RTL8188CU 0x3308 RTL8188CU
product DLINK RTL8192CU_2 0x3309 RTL8192CU
product DLINK RTL8192CU_3 0x330a RTL8192CU
product DLINK DWA131B 0x330d DWA-131 rev B
product DLINK DWA125D1 0x330f DWA-125 rev D1
product DLINK DWA123D1 0x3310 DWA-123 rev D1
product DLINK DWA171A1 0x3314 DWA-171 rev A1
product DLINK DWA182C1 0x3315 DWA-182 rev C1
product DLINK DWA180A1 0x3316 DWA-180 rev A1
product DLINK DWA172A1 0x3318 DWA-172 rev A1
product DLINK DWA131E1 0x3319 DWA-131 rev E1
product DLINK DWL122 0x3700 DWL-122
product DLINK DWLG120 0x3701 DWL-G120
product DLINK DWL120F 0x3702 DWL-120 rev F
product DLINK DWLAG132 0x3a00 DWL-AG132
product DLINK DWLAG132_NF 0x3a01 DWL-AG132 (no firmware)
product DLINK DWLG132 0x3a02 DWL-G132
product DLINK DWLG132_NF 0x3a03 DWL-G132 (no firmware)
product DLINK DWLAG122 0x3a04 DWL-AG122
product DLINK DWLAG122_NF 0x3a05 DWL-AG122 (no firmware)
product DLINK DWLG122 0x3c00 DWL-G122 b1 Wireless Adapter
product DLINK DUBE100B1 0x3c05 DUB-E100 rev B1
product DLINK RT2870 0x3c09 RT2870
product DLINK RT3072 0x3c0a RT3072
product DLINK DWA140B3 0x3c15 DWA-140 rev B3
product DLINK DWA125A3 0x3c19 DWA-125 rev A3
product DLINK DWA160B2 0x3c1a DWA-160 rev B2
product DLINK DWA127 0x3c1b DWA-127 Wireless Adapter
product DLINK DWA162 0x3c1f DWA-162 Wireless Adapter
product DLINK DWA140D1 0x3c20 DWA-140 rev D1
product DLINK DSB650C 0x4000 10Mbps Ethernet
product DLINK DSB650TX1 0x4001 10/100 Ethernet
product DLINK DSB650TX 0x4002 10/100 Ethernet
product DLINK DSB650TX_PNA 0x4003 1/10/100 Ethernet
product DLINK DSB650TX3 0x400b 10/100 Ethernet
product DLINK DSB650TX2 0x4102 10/100 Ethernet
product DLINK DUB1312 0x4a00 10/100/1000 Ethernet
product DLINK DWM157 0x7d02 DWM-157
product DLINK DWR510 0x7e12 DWR-510
product DLINK DWM222 0x7e35 DWM-222
product DLINK DWM157_CD 0xa707 DWM-157 CD-ROM Mode
product DLINK DWR510_CD 0xa805 DWR-510 CD-ROM Mode
product DLINK DWM222_CD 0xab00 DWM-222 CD-ROM Mode
product DLINK DSB650 0xabc1 10/100 Ethernet
product DLINK DUBH7 0xf103 DUB-H7 USB 2.0 7-Port Hub
product DLINK2 RTL8192SU_1 0x3300 RTL8192SU
product DLINK2 RTL8192SU_2 0x3302 RTL8192SU
product DLINK2 DWA131A1 0x3303 DWA-131 A1
product DLINK2 DWA160A2 0x3a09 DWA-160 A2
product DLINK2 DWA120 0x3a0c DWA-120
product DLINK2 DWA120_NF 0x3a0d DWA-120 (no firmware)
product DLINK2 DWA130D1 0x3a0f DWA-130 D1
product DLINK2 DWLG122C1 0x3c03 DWL-G122 c1
product DLINK2 WUA1340 0x3c04 WUA-1340
product DLINK2 DWA111 0x3c06 DWA-111
product DLINK2 DWA110 0x3c07 DWA-110
product DLINK2 RT2870_1 0x3c09 RT2870
product DLINK2 RT3072 0x3c0a RT3072
product DLINK2 RT3072_1 0x3c0b RT3072
product DLINK2 RT3070_1 0x3c0d RT3070
product DLINK2 RT3070_2 0x3c0e RT3070
product DLINK2 RT3070_3 0x3c0f RT3070
product DLINK2 DWA160A1 0x3c10 DWA-160 A1
product DLINK2 RT2870_2 0x3c11 RT2870
product DLINK2 DWA130 0x3c13 DWA-130
product DLINK2 RT3070_4 0x3c15 RT3070
product DLINK2 RT3070_5 0x3c16 RT3070
product DLINK3 DWM652 0x3e04 DWM-652
/* DisplayLink products */
product DISPLAYLINK LCD4300U 0x01ba LCD-4300U
product DISPLAYLINK LCD8000U 0x01bb LCD-8000U
product DISPLAYLINK LD220 0x0100 Samsung LD220
product DISPLAYLINK GUC2020 0x0059 IOGEAR DVI GUC2020
product DISPLAYLINK VCUD60 0x0136 Rextron DVI
product DISPLAYLINK CONV 0x0138 StarTech CONV-USB2DVI
product DISPLAYLINK DLDVI 0x0141 DisplayLink DVI
product DISPLAYLINK VGA10 0x015a CMP-USBVGA10
product DISPLAYLINK WSDVI 0x0198 WS Tech DVI
product DISPLAYLINK EC008 0x019b EasyCAP008 DVI
product DISPLAYLINK HPDOCK 0x01d4 HP USB Docking
product DISPLAYLINK NL571 0x01d7 HP USB DVI
product DISPLAYLINK M01061 0x01e2 Lenovo DVI
product DISPLAYLINK SWDVI 0x024c SUNWEIT DVI
product DISPLAYLINK NBDOCK 0x0215 VideoHome NBdock1920
product DISPLAYLINK LUM70 0x02a9 Lilliput UM-70
product DISPLAYLINK UM7X0 0x401a nanovision MiMo
product DISPLAYLINK LT1421 0x03e0 Lenovo ThinkVision LT1421
product DISPLAYLINK POLARIS2 0x0117 Polaris2 USB dock
product DISPLAYLINK PLUGABLE 0x0377 Plugable docking station
product DISPLAYLINK ITEC 0x02e9 i-tec USB 2.0 Docking Station
/* DMI products */
product DMI CFSM_RW 0xa109 CF/SM Reader/Writer
product DMI DISK 0x2bcf Generic Disk
/* DrayTek products */
product DRAYTEK VIGOR550 0x0550 Vigor550
/* Dream Link products */
product DREAMLINK DL100B 0x0004 USB Webmail Notifier
/* dresden elektronik products */
product DRESDENELEKTRONIK SENSORTERMINALBOARD 0x0001 SensorTerminalBoard
product DRESDENELEKTRONIK WIRELESSHANDHELDTERMINAL 0x0004 Wireless Handheld Terminal
product DRESDENELEKTRONIK DE_RFNODE 0x001c deRFnode
product DRESDENELEKTRONIK LEVELSHIFTERSTICKLOWCOST 0x0022 Levelshifter Stick Low Cost
/* DYMO */
product DYMO LABELMANAGERPNP 0x1001 DYMO LabelManager PnP
/* Dynastream Innovations */
product DYNASTREAM ANTDEVBOARD 0x1003 ANT dev board
product DYNASTREAM ANT2USB 0x1004 ANT2USB
product DYNASTREAM ANTDEVBOARD2 0x1006 ANT dev board
/* Edimax products */
product EDIMAX EW7318USG 0x7318 USB Wireless dongle
product EDIMAX RTL8192SU_1 0x7611 RTL8192SU
product EDIMAX RTL8192SU_2 0x7612 RTL8192SU
product EDIMAX EW7622UMN 0x7622 EW-7622UMn
product EDIMAX RT2870_1 0x7711 RT2870
product EDIMAX EW7717 0x7717 EW-7717
product EDIMAX EW7718 0x7718 EW-7718
product EDIMAX EW7733UND 0x7733 EW-7733UnD
product EDIMAX EW7811UN 0x7811 EW-7811Un
product EDIMAX RTL8192CU 0x7822 RTL8192CU
product EDIMAX EW7811UTC_1 0xa811 EW-7811UTC
product EDIMAX EW7811UTC_2 0xa812 EW-7811UTC
product EDIMAX EW7822UAC 0xa822 EW-7822UAC
/* eGalax Products */
product EGALAX TPANEL 0x0001 Touch Panel
product EGALAX TPANEL2 0x0002 Touch Panel
product EGALAX2 TPANEL 0x0001 Touch Panel
/* EGO Products */
product EGO DUMMY 0x0000 Dummy Product
product EGO M4U 0x1020 ESI M4U
/* Eicon Networks */
product EICON DIVA852 0x4905 Diva 852 ISDN TA
/* EIZO products */
product EIZO HUB 0x0000 hub
product EIZO MONITOR 0x0001 monitor
/* ELCON Systemtechnik products */
product ELCON PLAN 0x0002 Goldpfeil P-LAN
/* Elecom products */
product ELECOM MOUSE29UO 0x0002 mouse 29UO
product ELECOM LDUSBTX0 0x200c LD-USB/TX
product ELECOM LDUSBTX1 0x4002 LD-USB/TX
product ELECOM LDUSBLTX 0x4005 LD-USBL/TX
product ELECOM WDC150SU2M 0x4008 WDC-150SU2M
product ELECOM LDUSBTX2 0x400b LD-USB/TX
product ELECOM LDUSB20 0x4010 LD-USB20
product ELECOM UCSGT 0x5003 UC-SGT
product ELECOM UCSGT0 0x5004 UC-SGT
product ELECOM LDUSBTX3 0xabc1 LD-USB/TX
/* Elektor products */
product ELEKTOR FT323R 0x0005 FTDI compatible adapter
/* Elsa products */
product ELSA MODEM1 0x2265 ELSA Modem Board
product ELSA USB2ETHERNET 0x3000 Microlink USB2Ethernet
/* ELV products */
product ELV USBI2C 0xe00f USB-I2C interface
/* EMS products */
product EMS DUAL_SHOOTER 0x0003 PSX gun controller converter
/* Emtec products */
product EMTEC RUF2PS 0x2240 Flash Drive
/* Encore products */
product ENCORE RT3070_1 0x1480 RT3070
product ENCORE RT3070_2 0x14a1 RT3070
product ENCORE RT3070_3 0x14a9 RT3070
/* Entrega products */
product ENTREGA 1S 0x0001 1S serial
product ENTREGA 2S 0x0002 2S serial
product ENTREGA 1S25 0x0003 1S25 serial
product ENTREGA 4S 0x0004 4S serial
product ENTREGA E45 0x0005 E45 Ethernet
product ENTREGA CENTRONICS 0x0006 Parallel Port
product ENTREGA XX1 0x0008 Ethernet
product ENTREGA 1S9 0x0093 1S9 serial
product ENTREGA EZUSB 0x8000 EZ-USB
/*product ENTREGA SERIAL 0x8001 DB25 Serial*/
product ENTREGA 2U4S 0x8004 2U4S serial/usb hub
product ENTREGA XX2 0x8005 Ethernet
/*product ENTREGA SERIAL_DB9 0x8093 DB9 Serial*/
/* Epson products */
product EPSON PRINTER1 0x0001 USB Printer
product EPSON PRINTER2 0x0002 ISD USB Smart Cable for Mac
product EPSON PRINTER3 0x0003 ISD USB Smart Cable
product EPSON PRINTER5 0x0005 USB Printer
product EPSON 636 0x0101 Perfection 636U / 636Photo scanner
product EPSON 610 0x0103 Perfection 610 scanner
product EPSON 1200 0x0104 Perfection 1200U / 1200Photo scanner
product EPSON 1600 0x0107 Expression 1600 scanner
product EPSON 1640 0x010a Perfection 1640SU scanner
product EPSON 1240 0x010b Perfection 1240U / 1240Photo scanner
product EPSON 640U 0x010c Perfection 640U scanner
product EPSON 1250 0x010f Perfection 1250U / 1250Photo scanner
product EPSON 1650 0x0110 Perfection 1650 scanner
product EPSON GT9700F 0x0112 GT-9700F scanner
product EPSON GT9300UF 0x011b GT-9300UF scanner
product EPSON 3200 0x011c Perfection 3200 scanner
product EPSON 1260 0x011d Perfection 1260 scanner
product EPSON 1660 0x011e Perfection 1660 scanner
product EPSON 1670 0x011f Perfection 1670 scanner
product EPSON 1270 0x0120 Perfection 1270 scanner
product EPSON 2480 0x0121 Perfection 2480 scanner
product EPSON 3590 0x0122 Perfection 3590 scanner
product EPSON 4990 0x012a Perfection 4990 Photo scanner
product EPSON CRESSI_EDY 0x0521 Cressi Edy diving computer
product EPSON N2ITION3 0x0522 Zeagle N2iTion3 diving computer
product EPSON STYLUS_875DC 0x0601 Stylus Photo 875DC Card Reader
product EPSON STYLUS_895 0x0602 Stylus Photo 895 Card Reader
product EPSON CX5400 0x0808 CX5400 scanner
product EPSON 3500 0x080e CX-3500/3600/3650 MFP
product EPSON RX425 0x080f Stylus Photo RX425 scanner
product EPSON DX3800 0x0818 CX3700/CX3800/DX38x0 MFP scanner
product EPSON 4800 0x0819 CX4700/CX4800/DX48x0 MFP scanner
product EPSON 4200 0x0820 CX4100/CX4200/DX4200 MFP scanner
product EPSON 5000 0x082b CX4900/CX5000/DX50x0 MFP scanner
product EPSON 6000 0x082e CX5900/CX6000/DX60x0 MFP scanner
product EPSON DX4000 0x082f DX4000 MFP scanner
product EPSON DX7400 0x0838 CX7300/CX7400/DX7400 MFP scanner
product EPSON DX8400 0x0839 CX8300/CX8400/DX8400 MFP scanner
product EPSON SX100 0x0841 SX100/NX100 MFP scanner
product EPSON NX300 0x0848 NX300 MFP scanner
product EPSON SX200 0x0849 SX200/SX205 MFP scanner
product EPSON SX400 0x084a SX400/NX400/TX400 MFP scanner
/* e-TEK Labs products */
product ETEK 1COM 0x8007 Serial
/* Evolution products */
product EVOLUTION ER1 0x0300 FTDI compatible adapter
product EVOLUTION HYBRID 0x0302 FTDI compatible adapter
product EVOLUTION RCM4 0x0303 FTDI compatible adapter
/* Extended Systems products */
product EXTENDED XTNDACCESS 0x0100 XTNDAccess IrDA
/* Falcom products */
product FALCOM TWIST 0x0001 USB GSM/GPRS Modem
product FALCOM SAMBA 0x0005 FTDI compatible adapter
/* FEIYA products */
product FEIYA DUMMY 0x0000 Dummy product
product FEIYA 5IN1 0x1132 5-in-1 Card Reader
product FEIYA ELANGO 0x6200 MicroSDHC Card Reader
product FEIYA AC110 0x6300 AC-110 Card Reader
/* FeiXun Communication products */
product FEIXUN RTL8188CU 0x0090 RTL8188CU
product FEIXUN RTL8192CU 0x0091 RTL8192CU
/* Festo */
product FESTO CPX_USB 0x0102 CPX-USB
product FESTO CMSP 0x0501 CMSP
/* Fiberline */
product FIBERLINE WL430U 0x6003 WL-430U
/* FIC / OpenMoko */
product FIC NEO1973_DEBUG 0x5118 FTDI compatible adapter
/* Fossil, Inc products */
product FOSSIL WRISTPDA 0x0002 Wrist PDA
/* Foxconn products */
product FOXCONN TCOM_TC_300 0xe000 T-Com TC 300
product FOXCONN PIRELLI_DP_L10 0xe003 Pirelli DP-L10
/* Freecom products */
product FREECOM DVD 0xfc01 DVD drive
product FREECOM HDD 0xfc05 Classic SL Hard Drive
/* Fujitsu Siemens Computers products */
product FSC E5400 0x1009 PrismGT USB 2.0 WLAN
/* Future Technology Devices products */
product FTDI SCX8_USB_PHOENIX 0x5259 SCx8 USB Phoenix interface
product FTDI SERIAL_8U100AX 0x8372 8U100AX Serial
product FTDI SERIAL_8U232AM 0x6001 8U232AM Serial
product FTDI SERIAL_8U232AM4 0x6004 8U232AM Serial
product FTDI SERIAL_232RL 0x6006 FT232RL Serial
product FTDI SERIAL_2232C 0x6010 FT2232C Dual port Serial
product FTDI 232H 0x6014 FTDI compatible adapter
product FTDI 232EX 0x6015 FTDI compatible adapter
product FTDI SERIAL_2232D 0x9e90 FT2232D Dual port Serial
product FTDI SERIAL_4232H 0x6011 FT4232H Quad port Serial
product FTDI XDS100V2 0xa6d0 TI XDS100V1/V2 and early Beaglebones
product FTDI XDS100V3 0xa6d1 TI XDS100V3
product FTDI KTLINK 0xbbe2 KT-LINK Embedded Hackers Multitool
product FTDI TURTELIZER2 0xbdc8 egnite Turtelizer 2 JTAG/RS232 Adapter
/* Gude Analog- und Digitalsysteme products also use FTDI's id: */
product FTDI TACTRIX_OPENPORT_13M 0xcc48 OpenPort 1.3 Mitsubishi
product FTDI TACTRIX_OPENPORT_13S 0xcc49 OpenPort 1.3 Subaru
product FTDI TACTRIX_OPENPORT_13U 0xcc4a OpenPort 1.3 Universal
product FTDI GAMMASCOUT 0xd678 Gamma-Scout
product FTDI KBS 0xe6c8 Pyramid KBS USB LCD
product FTDI EISCOU 0xe888 Expert ISDN Control USB
product FTDI UOPTBR 0xe889 USB-RS232 OptoBridge
product FTDI EMCU2D 0xe88a Expert mouseCLOCK USB II
product FTDI PCMSFU 0xe88b Precision Clock MSF USB
product FTDI EMCU2H 0xe88c Expert mouseCLOCK USB II HBG
product FTDI MAXSTREAM 0xee18 Maxstream PKG-U
product FTDI USB_UIRT 0xf850 USB-UIRT
product FTDI USBSERIAL 0xfa00 Matrix Orbital USB Serial
product FTDI MX2_3 0xfa01 Matrix Orbital MX2 or MX3
product FTDI MX4_5 0xfa02 Matrix Orbital MX4 or MX5
product FTDI LK202 0xfa03 Matrix Orbital VK/LK202 Family
product FTDI LK204 0xfa04 Matrix Orbital VK/LK204 Family
product FTDI CFA_632 0xfc08 Crystalfontz CFA-632 USB LCD
product FTDI CFA_634 0xfc09 Crystalfontz CFA-634 USB LCD
product FTDI CFA_633 0xfc0b Crystalfontz CFA-633 USB LCD
product FTDI CFA_631 0xfc0c Crystalfontz CFA-631 USB LCD
product FTDI CFA_635 0xfc0d Crystalfontz CFA-635 USB LCD
product FTDI SEMC_DSS20 0xfc82 SEMC DSS-20 SyncStation
/* Commerzielle und Technische Informationssysteme GmbH products */
product FTDI CTI_USB_NANO_485 0xf60b CTI USB-Nano 485
product FTDI CTI_USB_MINI_485 0xf608 CTI USB-Mini 485
/* Other products */
product FTDI 232RL 0xfbfa FTDI compatible adapter
product FTDI 4N_GALAXY_DE_1 0xf3c0 FTDI compatible adapter
product FTDI 4N_GALAXY_DE_2 0xf3c1 FTDI compatible adapter
product FTDI 4N_GALAXY_DE_3 0xf3c2 FTDI compatible adapter
product FTDI 8U232AM_ALT 0x6006 FTDI compatible adapter
product FTDI ACCESSO 0xfad0 FTDI compatible adapter
product FTDI ACG_HFDUAL 0xdd20 FTDI compatible adapter
product FTDI ACTIVE_ROBOTS 0xe548 FTDI compatible adapter
product FTDI ACTZWAVE 0xf2d0 FTDI compatible adapter
product FTDI AMC232 0xff00 FTDI compatible adapter
product FTDI ARTEMIS 0xdf28 FTDI compatible adapter
product FTDI ASK_RDR400 0xc991 FTDI compatible adapter
product FTDI ATIK_ATK16 0xdf30 FTDI compatible adapter
product FTDI ATIK_ATK16C 0xdf32 FTDI compatible adapter
product FTDI ATIK_ATK16HR 0xdf31 FTDI compatible adapter
product FTDI ATIK_ATK16HRC 0xdf33 FTDI compatible adapter
product FTDI ATIK_ATK16IC 0xdf35 FTDI compatible adapter
product FTDI BCS_SE923 0xfb99 FTDI compatible adapter
product FTDI CANDAPTER 0x9f80 FTDI compatible adapter
product FTDI CANUSB 0xffa8 FTDI compatible adapter
product FTDI CCSICDU20_0 0xf9d0 FTDI compatible adapter
product FTDI CCSICDU40_1 0xf9d1 FTDI compatible adapter
product FTDI CCSICDU64_4 0xf9d4 FTDI compatible adapter
product FTDI CCSLOAD_N_GO_3 0xf9d3 FTDI compatible adapter
product FTDI CCSMACHX_2 0xf9d2 FTDI compatible adapter
product FTDI CCSPRIME8_5 0xf9d5 FTDI compatible adapter
product FTDI CHAMSYS_24_MASTER_WING 0xdaf8 FTDI compatible adapter
product FTDI CHAMSYS_MAXI_WING 0xdafd FTDI compatible adapter
product FTDI CHAMSYS_MEDIA_WING 0xdafe FTDI compatible adapter
product FTDI CHAMSYS_MIDI_TIMECODE 0xdafb FTDI compatible adapter
product FTDI CHAMSYS_MINI_WING 0xdafc FTDI compatible adapter
product FTDI CHAMSYS_PC_WING 0xdaf9 FTDI compatible adapter
product FTDI CHAMSYS_USB_DMX 0xdafa FTDI compatible adapter
product FTDI CHAMSYS_WING 0xdaff FTDI compatible adapter
product FTDI COM4SM 0xd578 FTDI compatible adapter
product FTDI CONVERTER_0 0xd388 FTDI compatible adapter
product FTDI CONVERTER_1 0xd389 FTDI compatible adapter
product FTDI CONVERTER_2 0xd38a FTDI compatible adapter
product FTDI CONVERTER_3 0xd38b FTDI compatible adapter
product FTDI CONVERTER_4 0xd38c FTDI compatible adapter
product FTDI CONVERTER_5 0xd38d FTDI compatible adapter
product FTDI CONVERTER_6 0xd38e FTDI compatible adapter
product FTDI CONVERTER_7 0xd38f FTDI compatible adapter
product FTDI DMX4ALL 0xc850 FTDI compatible adapter
product FTDI DOMINTELL_DGQG 0xef50 FTDI compatible adapter
product FTDI DOMINTELL_DUSB 0xef51 FTDI compatible adapter
product FTDI DOTEC 0x9868 FTDI compatible adapter
product FTDI ECLO_COM_1WIRE 0xea90 FTDI compatible adapter
product FTDI ECO_PRO_CDS 0xe520 FTDI compatible adapter
product FTDI ELSTER_UNICOM 0xe700 FTDI compatible adapter
product FTDI ELV_ALC8500 0xf06e FTDI compatible adapter
product FTDI ELV_CLI7000 0xfb59 FTDI compatible adapter
product FTDI ELV_CSI8 0xe0f0 FTDI compatible adapter
product FTDI ELV_EC3000 0xe006 FTDI compatible adapter
product FTDI ELV_EM1000DL 0xe0f1 FTDI compatible adapter
product FTDI ELV_EM1010PC 0xe0ef FTDI compatible adapter
product FTDI ELV_FEM 0xe00a FTDI compatible adapter
product FTDI ELV_FHZ1000PC 0xf06f FTDI compatible adapter
product FTDI ELV_FHZ1300PC 0xe0e8 FTDI compatible adapter
product FTDI ELV_FM3RX 0xe0ed FTDI compatible adapter
product FTDI ELV_FS20SIG 0xe0f4 FTDI compatible adapter
product FTDI ELV_HS485 0xe0ea FTDI compatible adapter
product FTDI ELV_KL100 0xe002 FTDI compatible adapter
product FTDI ELV_MSM1 0xe001 FTDI compatible adapter
product FTDI ELV_PCD200 0xf06c FTDI compatible adapter
product FTDI ELV_PCK100 0xe0f2 FTDI compatible adapter
product FTDI ELV_PPS7330 0xfb5c FTDI compatible adapter
product FTDI ELV_RFP500 0xe0f3 FTDI compatible adapter
product FTDI ELV_T1100 0xf06b FTDI compatible adapter
product FTDI ELV_TFD128 0xe0ec FTDI compatible adapter
product FTDI ELV_TFM100 0xfb5d FTDI compatible adapter
product FTDI ELV_TWS550 0xe009 FTDI compatible adapter
product FTDI ELV_UAD8 0xf068 FTDI compatible adapter
product FTDI ELV_UDA7 0xf069 FTDI compatible adapter
product FTDI ELV_UDF77 0xfb5e FTDI compatible adapter
product FTDI ELV_UIO88 0xfb5f FTDI compatible adapter
product FTDI ELV_ULA200 0xf06d FTDI compatible adapter
product FTDI ELV_UM100 0xfb5a FTDI compatible adapter
product FTDI ELV_UMS100 0xe0eb FTDI compatible adapter
product FTDI ELV_UO100 0xfb5b FTDI compatible adapter
product FTDI ELV_UR100 0xfb58 FTDI compatible adapter
product FTDI ELV_USI2 0xf06a FTDI compatible adapter
product FTDI ELV_USR 0xe000 FTDI compatible adapter
product FTDI ELV_UTP8 0xe0f5 FTDI compatible adapter
product FTDI ELV_WS300PC 0xe0f6 FTDI compatible adapter
product FTDI ELV_WS444PC 0xe0f7 FTDI compatible adapter
product FTDI ELV_WS500 0xe0e9 FTDI compatible adapter
product FTDI ELV_WS550 0xe004 FTDI compatible adapter
product FTDI ELV_WS777 0xe0ee FTDI compatible adapter
product FTDI ELV_WS888 0xe008 FTDI compatible adapter
product FTDI FUTURE_0 0xf44a FTDI compatible adapter
product FTDI FUTURE_1 0xf44b FTDI compatible adapter
product FTDI FUTURE_2 0xf44c FTDI compatible adapter
product FTDI GENERIC 0x9378 FTDI compatible adapter
product FTDI GUDEADS_E808 0xe808 FTDI compatible adapter
product FTDI GUDEADS_E809 0xe809 FTDI compatible adapter
product FTDI GUDEADS_E80A 0xe80a FTDI compatible adapter
product FTDI GUDEADS_E80B 0xe80b FTDI compatible adapter
product FTDI GUDEADS_E80C 0xe80c FTDI compatible adapter
product FTDI GUDEADS_E80D 0xe80d FTDI compatible adapter
product FTDI GUDEADS_E80E 0xe80e FTDI compatible adapter
product FTDI GUDEADS_E80F 0xe80f FTDI compatible adapter
product FTDI GUDEADS_E88D 0xe88d FTDI compatible adapter
product FTDI GUDEADS_E88E 0xe88e FTDI compatible adapter
product FTDI GUDEADS_E88F 0xe88f FTDI compatible adapter
product FTDI HD_RADIO 0x937c FTDI compatible adapter
product FTDI HO720 0xed72 FTDI compatible adapter
product FTDI HO730 0xed73 FTDI compatible adapter
product FTDI HO820 0xed74 FTDI compatible adapter
product FTDI HO870 0xed71 FTDI compatible adapter
product FTDI IBS_APP70 0xff3d FTDI compatible adapter
product FTDI IBS_PCMCIA 0xff3a FTDI compatible adapter
product FTDI IBS_PEDO 0xff3e FTDI compatible adapter
product FTDI IBS_PICPRO 0xff39 FTDI compatible adapter
product FTDI IBS_PK1 0xff3b FTDI compatible adapter
product FTDI IBS_PROD 0xff3f FTDI compatible adapter
product FTDI IBS_RS232MON 0xff3c FTDI compatible adapter
product FTDI IBS_US485 0xff38 FTDI compatible adapter
product FTDI IPLUS 0xd070 FTDI compatible adapter
product FTDI IPLUS2 0xd071 FTDI compatible adapter
product FTDI IRTRANS 0xfc60 FTDI compatible adapter
product FTDI LENZ_LIUSB 0xd780 FTDI compatible adapter
product FTDI LM3S_DEVEL_BOARD 0xbcd8 FTDI compatible adapter
product FTDI LM3S_EVAL_BOARD 0xbcd9 FTDI compatible adapter
product FTDI LM3S_ICDI_B_BOARD 0xbcda FTDI compatible adapter
product FTDI MASTERDEVEL2 0xf449 FTDI compatible adapter
product FTDI MHAM_DB9 0xeeed FTDI compatible adapter
product FTDI MHAM_IC 0xeeec FTDI compatible adapter
product FTDI MHAM_KW 0xeee8 FTDI compatible adapter
product FTDI MHAM_RS232 0xeeee FTDI compatible adapter
product FTDI MHAM_Y6 0xeeea FTDI compatible adapter
product FTDI MHAM_Y8 0xeeeb FTDI compatible adapter
product FTDI MHAM_Y9 0xeeef FTDI compatible adapter
product FTDI MHAM_YS 0xeee9 FTDI compatible adapter
product FTDI MICRO_CHAMELEON 0xcaa0 FTDI compatible adapter
product FTDI MTXORB_5 0xfa05 FTDI compatible adapter
product FTDI MTXORB_6 0xfa06 FTDI compatible adapter
product FTDI NXTCAM 0xabb8 FTDI compatible adapter
product FTDI OCEANIC 0xf460 FTDI compatible adapter
product FTDI OOCDLINK 0xbaf8 FTDI compatible adapter
product FTDI OPENDCC 0xbfd8 FTDI compatible adapter
product FTDI OPENDCC_GATEWAY 0xbfdb FTDI compatible adapter
product FTDI OPENDCC_GBM 0xbfdc FTDI compatible adapter
product FTDI OPENDCC_SNIFFER 0xbfd9 FTDI compatible adapter
product FTDI OPENDCC_THROTTLE 0xbfda FTDI compatible adapter
product FTDI PCDJ_DAC2 0xfa88 FTDI compatible adapter
product FTDI PERLE_ULTRAPORT 0xf0c0 FTDI compatible adapter
product FTDI PHI_FISCO 0xe40b FTDI compatible adapter
product FTDI PIEGROUP 0xf208 FTDI compatible adapter
product FTDI PROPOX_JTAGCABLEII 0xd738 FTDI compatible adapter
product FTDI R2000KU_TRUE_RNG 0xfb80 FTDI compatible adapter
product FTDI R2X0 0xfc71 FTDI compatible adapter
product FTDI RELAIS 0xfa10 FTDI compatible adapter
product FTDI REU_TINY 0xed22 FTDI compatible adapter
product FTDI RMP200 0xe729 FTDI compatible adapter
product FTDI RM_CANVIEW 0xfd60 FTDI compatible adapter
product FTDI RRCIRKITS_LOCOBUFFER 0xc7d0 FTDI compatible adapter
product FTDI SCIENCESCOPE_HS_LOGBOOK 0xff1d FTDI compatible adapter
product FTDI SCIENCESCOPE_LOGBOOKML 0xff18 FTDI compatible adapter
product FTDI SCIENCESCOPE_LS_LOGBOOK 0xff1c FTDI compatible adapter
product FTDI SCS_DEVICE_0 0xd010 FTDI compatible adapter
product FTDI SCS_DEVICE_1 0xd011 FTDI compatible adapter
product FTDI SCS_DEVICE_2 0xd012 FTDI compatible adapter
product FTDI SCS_DEVICE_3 0xd013 FTDI compatible adapter
product FTDI SCS_DEVICE_4 0xd014 FTDI compatible adapter
product FTDI SCS_DEVICE_5 0xd015 FTDI compatible adapter
product FTDI SCS_DEVICE_6 0xd016 FTDI compatible adapter
product FTDI SCS_DEVICE_7 0xd017 FTDI compatible adapter
product FTDI SDMUSBQSS 0xf448 FTDI compatible adapter
product FTDI SIGNALYZER_SH2 0xbca2 FTDI compatible adapter
product FTDI SIGNALYZER_SH4 0xbca4 FTDI compatible adapter
product FTDI SIGNALYZER_SLITE 0xbca1 FTDI compatible adapter
product FTDI SIGNALYZER_ST 0xbca0 FTDI compatible adapter
product FTDI SPECIAL_1 0xfc70 FTDI compatible adapter
product FTDI SPECIAL_3 0xfc72 FTDI compatible adapter
product FTDI SPECIAL_4 0xfc73 FTDI compatible adapter
product FTDI SPROG_II 0xf0c8 FTDI compatible adapter
product FTDI SR_RADIO 0x9379 FTDI compatible adapter
product FTDI SUUNTO_SPORTS 0xf680 FTDI compatible adapter
product FTDI TAVIR_STK500 0xfa33 FTDI compatible adapter
product FTDI TERATRONIK_D2XX 0xec89 FTDI compatible adapter
product FTDI TERATRONIK_VCP 0xec88 FTDI compatible adapter
product FTDI THORLABS 0xfaf0 FTDI compatible adapter
product FTDI TIAO 0x8a98 FTDI compatible adapter
product FTDI TNC_X 0xebe0 FTDI compatible adapter
product FTDI TTUSB 0xff20 FTDI compatible adapter
product FTDI USBX_707 0xf857 FTDI compatible adapter
product FTDI USINT_CAT 0xb810 FTDI compatible adapter
product FTDI USINT_RS232 0xb812 FTDI compatible adapter
product FTDI USINT_WKEY 0xb811 FTDI compatible adapter
product FTDI VARDAAN 0xf070 FTDI compatible adapter
product FTDI VNHCPCUSB_D 0xfe38 FTDI compatible adapter
product FTDI WESTREX_MODEL_777 0xdc00 FTDI compatible adapter
product FTDI WESTREX_MODEL_8900F 0xdc01 FTDI compatible adapter
product FTDI XF_547 0xfc0a FTDI compatible adapter
product FTDI XF_640 0xfc0e FTDI compatible adapter
product FTDI XF_642 0xfc0f FTDI compatible adapter
product FTDI XM_RADIO 0x937a FTDI compatible adapter
product FTDI YEI_SERVOCENTER31 0xe050 FTDI compatible adapter
/* Fuji Photo products */
product FUJIPHOTO MASS0100 0x0100 Mass Storage
/* Fujitsu products */
product FUJITSU AH_F401U 0x105b AH-F401U Air H device
/* Fujitsu-Siemens products */
product FUJITSUSIEMENS SCR 0x0009 Fujitsu-Siemens SCR USB Reader
/* Garmin products */
product GARMIN FORERUNNER230 0x086d ForeRunner 230
product GARMIN IQUE_3600 0x0004 iQue 3600
/* Gemalto products */
product GEMALTO PROXPU 0x5501 Prox-PU/CU RFID Card Reader
/* General Instruments (Motorola) products */
product GENERALINSTMNTS SB5100 0x5100 SURFboard SB5100 Cable modem
/* Genesys Logic products */
product GENESYS GL620USB 0x0501 GL620USB Host-Host interface
product GENESYS GL650 0x0604 GL650 HUB
product GENESYS GL606 0x0606 GL606 USB 2.0 HUB
product GENESYS GL850G 0x0608 GL850G USB 2.0 HUB
product GENESYS GL3520_2 0x0610 GL3520 4-Port USB 2.0 DataPath
product GENESYS GL3520_SS 0x0616 GL3520 4-Port USB 3.0 DataPath
product GENESYS GL641USB 0x0700 GL641USB CompactFlash Card Reader
product GENESYS GL641USB2IDE_2 0x0701 GL641USB USB-IDE Bridge No 2
product GENESYS GL641USB2IDE 0x0702 GL641USB USB-IDE Bridge
product GENESYS GL3233 0x0743 GL3233 USB 3.0 AiO Card Reader
product GENESYS GL641USB_2 0x0760 GL641USB 6-in-1 Card Reader
/* GIGABYTE products */
product GIGABYTE GN54G 0x8001 GN-54G
product GIGABYTE GNBR402W 0x8002 GN-BR402W
product GIGABYTE GNWLBM101 0x8003 GN-WLBM101
product GIGABYTE GNWBKG 0x8007 GN-WBKG
product GIGABYTE GNWB01GS 0x8008 GN-WB01GS
product GIGABYTE GNWI05GS 0x800a GN-WI05GS
/* Gigaset products */
product GIGASET WLAN 0x0701 WLAN
product GIGASET SMCWUSBTG 0x0710 SMCWUSBT-G
product GIGASET SMCWUSBTG_NF 0x0711 SMCWUSBT-G (no firmware)
product GIGASET AR5523 0x0712 AR5523
product GIGASET AR5523_NF 0x0713 AR5523 (no firmware)
product GIGASET RT2573 0x0722 RT2573
product GIGASET RT3070_1 0x0740 RT3070
product GIGASET RT3070_2 0x0744 RT3070
product GIGABYTE RT2870_1 0x800b RT2870
product GIGABYTE GNWB31N 0x800c GN-WB31N
product GIGABYTE GNWB32L 0x800d GN-WB32L
/* Global Sun Technology products */
product GLOBALSUN AR5523_1 0x7801 AR5523
product GLOBALSUN AR5523_1_NF 0x7802 AR5523 (no firmware)
product GLOBALSUN AR5523_2 0x7811 AR5523
product GLOBALSUN AR5523_2_NF 0x7812 AR5523 (no firmware)
/* Globespan products */
product GLOBESPAN MODEM_1 0x1329 USB Modem
product GLOBESPAN PRISM_GT_1 0x2000 PrismGT USB 2.0 WLAN
product GLOBESPAN PRISM_GT_2 0x2002 PrismGT USB 2.0 WLAN
/* G.Mate, Inc products */
product GMATE YP3X00 0x1001 YP3X00 PDA
/* GN Otometrics */
product GNOTOMETRICS USB 0x0010 FTDI compatible adapter
/* GoHubs products */
product GOHUBS GOCOM232 0x1001 GoCOM232 Serial
/* Good Way Technology products */
product GOODWAY GWUSB2E 0x6200 GWUSB2E
product GOODWAY RT2573 0xc019 RT2573
/* Google products */
product GOOGLE NEXUSONE 0x4e11 Nexus One
/* Gravis products */
product GRAVIS GAMEPADPRO 0x4001 GamePad Pro
/* GREENHOUSE products */
product GREENHOUSE KANA21 0x0001 CF-writer with MP3
/* Griffin Technology */
product GRIFFIN IMATE 0x0405 iMate, ADB Adapter
/* Guillemot Corporation */
product GUILLEMOT DALEADER 0xa300 DA Leader
product GUILLEMOT HWGUSB254 0xe000 HWGUSB2-54 WLAN
product GUILLEMOT HWGUSB254LB 0xe010 HWGUSB2-54-LB
product GUILLEMOT HWGUSB254V2AP 0xe020 HWGUSB2-54V2-AP
product GUILLEMOT HWNU300 0xe030 HWNU-300
product GUILLEMOT HWNUM300 0xe031 HWNUm-300
product GUILLEMOT HWGUN54 0xe032 HWGUn-54
product GUILLEMOT HWNUP150 0xe033 HWNUP-150
/* Hagiwara products */
product HAGIWARA FGSM 0x0002 FlashGate SmartMedia Card Reader
product HAGIWARA FGCF 0x0003 FlashGate CompactFlash Card Reader
product HAGIWARA FG 0x0005 FlashGate
/* HAL Corporation products */
product HAL IMR001 0x0011 Crossam2+USB IR commander
/* Handspring, Inc. */
product HANDSPRING VISOR 0x0100 Handspring Visor
product HANDSPRING TREO 0x0200 Handspring Treo
product HANDSPRING TREO600 0x0300 Handspring Treo 600
/* Hauppauge Computer Works */
product HAUPPAUGE WINTV_USB_FM 0x4d12 WinTV USB FM
product HAUPPAUGE2 NOVAT500 0x9580 NovaT 500Stick
/* Hawking Technologies products */
product HAWKING RT2870_1 0x0001 RT2870
product HAWKING RT2870_2 0x0003 RT2870
product HAWKING HWUN2 0x0009 HWUN2
product HAWKING RT3070 0x000b RT3070
product HAWKING RTL8192CU 0x0019 RTL8192CU
product HAWKING UF100 0x400c 10/100 USB Ethernet
product HAWKING RTL8192SU_1 0x0015 RTL8192SU
product HAWKING RTL8192SU_2 0x0016 RTL8192SU
product HAWKING HD65U 0x0023 HD65U
/* HID Global GmbH products */
product HIDGLOBAL CM2020 0x0596 Omnikey Cardman 2020
product HIDGLOBAL CM6020 0x1784 Omnikey Cardman 6020
/* Hitachi, Ltd. products */
product HITACHI DVDCAM_DZ_MV100A 0x0004 DVD-CAM DZ-MV100A Camcorder
product HITACHI DVDCAM_USB 0x001e DVDCAM USB HS Interface
/* Holtek products */
product HOLTEK F85 0xa030 Holtek USB gaming keyboard
/* Honeywell */
product HONEYWELL HGI80 0x0102 Honeywell HGI80 Wireless USB Gateway
/* HP products */
product HP 895C 0x0004 DeskJet 895C
product HP 4100C 0x0101 Scanjet 4100C
product HP S20 0x0102 Photosmart S20
product HP 880C 0x0104 DeskJet 880C
product HP 4200C 0x0105 ScanJet 4200C
product HP CDWRITERPLUS 0x0107 CD-Writer Plus
product HP KBDHUB 0x010c Multimedia Keyboard Hub
product HP G55XI 0x0111 OfficeJet G55xi
product HP HN210W 0x011c HN210W 802.11b WLAN
product HP 49GPLUS 0x0121 49g+ graphing calculator
product HP 6200C 0x0201 ScanJet 6200C
product HP S20b 0x0202 PhotoSmart S20
product HP 815C 0x0204 DeskJet 815C
product HP 3300C 0x0205 ScanJet 3300C
product HP CDW8200 0x0207 CD-Writer Plus 8200e
product HP MMKEYB 0x020c Multimedia keyboard
product HP 1220C 0x0212 DeskJet 1220C
product HP UN2420_QDL 0x241d UN2420 QDL Firmware Loader
product HP UN2420 0x251d UN2420 WWAN/GPS Module
product HP 810C 0x0304 DeskJet 810C/812C
product HP 4300C 0x0305 Scanjet 4300C
product HP CDW4E 0x0307 CD-Writer+ CD-4e
product HP G85XI 0x0311 OfficeJet G85xi
product HP 1200 0x0317 LaserJet 1200
product HP 5200C 0x0401 Scanjet 5200C
product HP 830C 0x0404 DeskJet 830C
product HP 3400CSE 0x0405 ScanJet 3400cse
product HP 6300C 0x0601 Scanjet 6300C
product HP 840C 0x0604 DeskJet 840c
product HP 2200C 0x0605 ScanJet 2200C
product HP 5300C 0x0701 Scanjet 5300C
product HP 4400C 0x0705 Scanjet 4400C
product HP 4470C 0x0805 Scanjet 4470C
product HP 82x0C 0x0b01 Scanjet 82x0C
product HP 2300D 0x0b17 Laserjet 2300d
product HP 970CSE 0x1004 Deskjet 970Cse
product HP 5400C 0x1005 Scanjet 5400C
product HP 2215 0x1016 iPAQ 22xx/Jornada 548
product HP 568J 0x1116 Jornada 568
product HP 930C 0x1204 DeskJet 930c
product HP3 RTL8188CU 0x1629 RTL8188CU
product HP P2000U 0x1801 Inkjet P-2000U
product HP HS2300 0x1e1d HS2300 HSDPA (aka MC8775)
product HP T500 0x1f01 T500
product HP T750 0x1f02 T750
product HP 640C 0x2004 DeskJet 640c
product HP 4670V 0x3005 ScanJet 4670v
product HP P1100 0x3102 Photosmart P1100
product HP LD220 0x3524 LD220 POS Display
product HP OJ4215 0x3d11 OfficeJet 4215
product HP HN210E 0x811c Ethernet HN210E
product HP2 C500 0x6002 PhotoSmart C500
product HP EV2200 0x1b1d ev2200 HSDPA (aka MC5720)
/* HTC products */
product HTC WINMOBILE 0x00ce HTC USB Sync
product HTC PPC6700MODEM 0x00cf PPC6700 Modem
product HTC SMARTPHONE 0x0a51 SmartPhone USB Sync
product HTC WIZARD 0x0bce HTC Wizard USB Sync
product HTC LEGENDSYNC 0x0c97 HTC Legend USB Sync
product HTC LEGEND 0x0ff9 HTC Legend
product HTC LEGENDINTERNET 0x0ffe HTC Legend Internet Sharing
/* HUAWEI products */
product HUAWEI MOBILE 0x1001 Huawei Mobile
product HUAWEI E220 0x1003 HSDPA modem
product HUAWEI E220BIS 0x1004 HSDPA modem
product HUAWEI E1401 0x1401 3G modem
product HUAWEI E1402 0x1402 3G modem
product HUAWEI E1403 0x1403 3G modem
product HUAWEI E1404 0x1404 3G modem
product HUAWEI E1405 0x1405 3G modem
product HUAWEI E1406 0x1406 3G modem
product HUAWEI E1407 0x1407 3G modem
product HUAWEI E1408 0x1408 3G modem
product HUAWEI E1409 0x1409 3G modem
product HUAWEI E140A 0x140a 3G modem
product HUAWEI E140B 0x140b 3G modem
product HUAWEI E180V 0x140c E180V
product HUAWEI E140D 0x140d 3G modem
product HUAWEI E140E 0x140e 3G modem
product HUAWEI E140F 0x140f 3G modem
product HUAWEI E1410 0x1410 3G modem
product HUAWEI E1411 0x1411 3G modem
product HUAWEI E1412 0x1412 3G modem
product HUAWEI E1413 0x1413 3G modem
product HUAWEI E1414 0x1414 3G modem
product HUAWEI E1415 0x1415 3G modem
product HUAWEI E1416 0x1416 3G modem
product HUAWEI E1417 0x1417 3G modem
product HUAWEI E1418 0x1418 3G modem
product HUAWEI E1419 0x1419 3G modem
product HUAWEI E141A 0x141a 3G modem
product HUAWEI E141B 0x141b 3G modem
product HUAWEI E141C 0x141c 3G modem
product HUAWEI E141D 0x141d 3G modem
product HUAWEI E141E 0x141e 3G modem
product HUAWEI E141F 0x141f 3G modem
product HUAWEI E1420 0x1420 3G modem
product HUAWEI E1421 0x1421 3G modem
product HUAWEI E1422 0x1422 3G modem
product HUAWEI E1423 0x1423 3G modem
product HUAWEI E1424 0x1424 3G modem
product HUAWEI E1425 0x1425 3G modem
product HUAWEI E1426 0x1426 3G modem
product HUAWEI E1427 0x1427 3G modem
product HUAWEI E1428 0x1428 3G modem
product HUAWEI E1429 0x1429 3G modem
product HUAWEI E142A 0x142a 3G modem
product HUAWEI E142B 0x142b 3G modem
product HUAWEI E142C 0x142c 3G modem
product HUAWEI E142D 0x142d 3G modem
product HUAWEI E142E 0x142e 3G modem
product HUAWEI E142F 0x142f 3G modem
product HUAWEI E1430 0x1430 3G modem
product HUAWEI E1431 0x1431 3G modem
product HUAWEI E1432 0x1432 3G modem
product HUAWEI E1433 0x1433 3G modem
product HUAWEI E1434 0x1434 3G modem
product HUAWEI E1435 0x1435 3G modem
product HUAWEI E1436 0x1436 3G modem
product HUAWEI E1437 0x1437 3G modem
product HUAWEI E1438 0x1438 3G modem
product HUAWEI E1439 0x1439 3G modem
product HUAWEI E143A 0x143a 3G modem
product HUAWEI E143B 0x143b 3G modem
product HUAWEI E143C 0x143c 3G modem
product HUAWEI E143D 0x143d 3G modem
product HUAWEI E143E 0x143e 3G modem
product HUAWEI E143F 0x143f 3G modem
product HUAWEI E1752 0x1446 3G modem
product HUAWEI K4505 0x1464 3G modem
product HUAWEI K3765 0x1465 3G modem
product HUAWEI E1820 0x14ac E1820 HSPA+ USB Slider
product HUAWEI K3771_INIT 0x14c4 K3771 Initial
product HUAWEI K3770 0x14c9 3G modem
product HUAWEI K3771 0x14ca K3771
product HUAWEI K3772 0x14cf K3772
product HUAWEI K3770_INIT 0x14d1 K3770 Initial
product HUAWEI E3131_INIT 0x14fe 3G modem initial
product HUAWEI E392 0x1505 LTE modem
product HUAWEI E3131 0x1506 3G modem
product HUAWEI K3765_INIT 0x1520 K3765 Initial
product HUAWEI K4505_INIT 0x1521 K4505 Initial
product HUAWEI K3772_INIT 0x1526 K3772 Initial
product HUAWEI E3272_INIT 0x155b LTE modem initial
product HUAWEI ME909U 0x1573 LTE modem
product HUAWEI R215_INIT 0x1582 LTE modem initial
product HUAWEI R215 0x1588 LTE modem
product HUAWEI ME909S 0x15c1 LTE modem
product HUAWEI ETS2055 0x1803 CDMA modem
product HUAWEI E173 0x1c05 3G modem
product HUAWEI E173_INIT 0x1c0b 3G modem initial
product HUAWEI E3272 0x1c1e LTE modem
/* HUAWEI 3com products */
product HUAWEI3COM WUB320G 0x0009 Aolynk WUB320g
/* IBM Corporation */
product IBM USBCDROMDRIVE 0x4427 USB CD-ROM Drive
product IBM USB4543 0x4543 TI IBM USB 4543 Modem
product IBM USB454B 0x454b TI IBM USB 454B Modem
product IBM USB454C 0x454c TI IBM USB 454C Modem
/* Icom products */
product ICOM SP1 0x0004 FTDI compatible adapter
product ICOM OPC_U_UC 0x0018 FTDI compatible adapter
product ICOM RP2C1 0x0009 FTDI compatible adapter
product ICOM RP2C2 0x000a FTDI compatible adapter
product ICOM RP2D 0x000b FTDI compatible adapter
product ICOM RP2KVR 0x0013 FTDI compatible adapter
product ICOM RP2KVT 0x0012 FTDI compatible adapter
product ICOM RP2VR 0x000d FTDI compatible adapter
product ICOM RP2VT 0x000c FTDI compatible adapter
product ICOM RP4KVR 0x0011 FTDI compatible adapter
product ICOM RP4KVT 0x0010 FTDI compatible adapter
/* ID-tech products */
product IDTECH IDT1221U 0x0300 FTDI compatible adapter
/* Imagination Technologies products */
product IMAGINATION DBX1 0x2107 DBX1 DSP core
/* Initio Corporation products */
product INITIO DUMMY 0x0000 Dummy product
product INITIO INIC_1610P 0x1e40 USB to SATA Bridge
/* Inside Out Networks products */
product INSIDEOUT EDGEPORT4 0x0001 EdgePort/4 serial ports
/* In-System products */
product INSYSTEM F5U002 0x0002 Parallel printer
product INSYSTEM ATAPI 0x0031 ATAPI Adapter
product INSYSTEM ISD110 0x0200 IDE Adapter ISD110
product INSYSTEM ISD105 0x0202 IDE Adapter ISD105
product INSYSTEM USBCABLE 0x081a USB cable
product INSYSTEM STORAGE_V2 0x5701 USB Storage Adapter V2
/* Intenso products */
product INTENSO MEMORY_BOX 0x0701 External disk
/* Intel products */
product INTEL EASYPC_CAMERA 0x0110 Easy PC Camera
product INTEL TESTBOARD 0x9890 82930 test board
product INTEL2 IRMH 0x0020 Integrated Rate Matching Hub
product INTEL2 IRMH2 0x0024 Integrated Rate Matching Hub
product INTEL2 IRMH3 0x8000 Integrated Rate Matching Hub
product INTEL2 IRMH4 0x8008 Integrated Rate Matching Hub
product INTEL2 SFP 0x0aa7 Sandy Peak (3168) Bluetooth Module
product INTEL2 JFP 0x0aaa Jefferson Peak (9460/9560) Bluetooth Module
product INTEL2 THP 0x0025 Thunder Peak (9160/9260) Bluetooth Module
product INTEL2 HSP 0x0026 Harrison Peak (22560) Bluetooth Module
/* Interbiometric products */
product INTERBIOMETRICS IOBOARD 0x1002 FTDI compatible adapter
product INTERBIOMETRICS MINI_IOBOARD 0x1006 FTDI compatible adapter
/* Intersil products */
product INTERSIL PRISM_GT 0x1000 PrismGT USB 2.0 WLAN
product INTERSIL PRISM_2X 0x3642 Prism2.x or Atmel WLAN
/* Intrepid Control Systems products */
product INTREPIDCS VALUECAN 0x0601 ValueCAN CAN bus interface
product INTREPIDCS NEOVI 0x0701 NeoVI Blue vehicle bus interface
/* I/O DATA products */
product IODATA IU_CD2 0x0204 DVD Multi-plus unit iU-CD2
product IODATA DVR_UEH8 0x0206 DVD Multi-plus unit DVR-UEH8
product IODATA USBSSMRW 0x0314 USB-SSMRW SD-card
product IODATA USBSDRW 0x031e USB-SDRW SD-card
product IODATA USBETT 0x0901 USB ETT
product IODATA USBETTX 0x0904 USB ETTX
product IODATA USBETTXS 0x0913 USB ETTX
product IODATA USBWNB11A 0x0919 USB WN-B11
product IODATA USBWNB11 0x0922 USB Airport WN-B11
product IODATA ETGUS2 0x0930 ETG-US2
product IODATA WNGDNUS2 0x093f WN-GDN/US2
product IODATA RT3072_1 0x0944 RT3072
product IODATA RT3072_2 0x0945 RT3072
product IODATA RT3072_3 0x0947 RT3072
product IODATA RT3072_4 0x0948 RT3072
product IODATA WNAC867U 0x0952 WN-AC867U
product IODATA USBRSAQ 0x0a03 Serial USB-RSAQ1
product IODATA USBRSAQ5 0x0a0e Serial USB-RSAQ5
product IODATA2 USB2SC 0x0a09 USB2.0-SCSI Bridge USB2-SC
/* Iomega products */
product IOMEGA ZIP100 0x0001 Zip 100
product IOMEGA ZIP250 0x0030 Zip 250
/* Ionic products */
product IONICS PLUGCOMPUTER 0x0102 FTDI compatible adapter
/* Integrated System Solution Corp. products */
product ISSC ISSCBTA 0x1001 Bluetooth USB Adapter
/* iTegno products */
product ITEGNO WM1080A 0x1080 WM1080A GSM/GPRS modem
product ITEGNO WM2080A 0x2080 WM2080A CDMA modem
/* Ituner networks products */
product ITUNERNET USBLCD2X20 0x0002 USB-LCD 2x20
product ITUNERNET USBLCD4X20 0xc001 USB-LCD 4x20
/* Jablotron products */
product JABLOTRON PC60B 0x0001 PC-60B
/* Jaton products */
product JATON EDA 0x5704 Ethernet
/* Jeti products */
product JETI SPC1201 0x04b2 FTDI compatible adapter
/* JMicron products */
product JMICRON JMS566 0x3569 USB to SATA 3.0Gb/s bridge
product JMICRON JMS567 0x0567 USB to SATA 6.0Gb/s bridge
product JMICRON JM20336 0x2336 USB to SATA Bridge
product JMICRON JM20337 0x2338 USB to ATA/ATAPI Bridge
/* JVC products */
product JVC GR_DX95 0x000a GR-DX95
product JVC MP_PRX1 0x3008 MP-PRX1 Ethernet
/* JRC products */
product JRC AH_J3001V_J3002V 0x0001 AirH PHONE AH-J3001V/J3002V
/* Kamstrup products */
product KAMSTRUP OPTICALEYE 0x0001 Optical Eye/3-wire
product KAMSTRUP MBUS_250D 0x0005 M-Bus Master MultiPort 250D
/* Kawatsu products */
product KAWATSU MH4000P 0x0003 MiniHub 4000P
/* Keisokugiken Corp. products */
product KEISOKUGIKEN USBDAQ 0x0068 HKS-0200 USBDAQ
/* Kensington products */
product KENSINGTON ORBIT 0x1003 Orbit USB/PS2 trackball
product KENSINGTON TURBOBALL 0x1005 TurboBall
/* Keyspan products */
product KEYSPAN USA28_NF 0x0101 USA-28 serial Adapter (no firmware)
product KEYSPAN USA28X_NF 0x0102 USA-28X serial Adapter (no firmware)
product KEYSPAN USA19_NF 0x0103 USA-19 serial Adapter (no firmware)
product KEYSPAN USA18_NF 0x0104 USA-18 serial Adapter (no firmware)
product KEYSPAN USA18X_NF 0x0105 USA-18X serial Adapter (no firmware)
product KEYSPAN USA19W_NF 0x0106 USA-19W serial Adapter (no firmware)
product KEYSPAN USA19 0x0107 USA-19 serial Adapter
product KEYSPAN USA19W 0x0108 USA-19W serial Adapter
product KEYSPAN USA49W_NF 0x0109 USA-49W serial Adapter (no firmware)
product KEYSPAN USA49W 0x010a USA-49W serial Adapter
product KEYSPAN USA19QI_NF 0x010b USA-19QI serial Adapter (no firmware)
product KEYSPAN USA19QI 0x010c USA-19QI serial Adapter
product KEYSPAN USA19Q_NF 0x010d USA-19Q serial Adapter (no firmware)
product KEYSPAN USA19Q 0x010e USA-19Q serial Adapter
product KEYSPAN USA28 0x010f USA-28 serial Adapter
product KEYSPAN USA28XXB 0x0110 USA-28X/XB serial Adapter
product KEYSPAN USA18 0x0111 USA-18 serial Adapter
product KEYSPAN USA18X 0x0112 USA-18X serial Adapter
product KEYSPAN USA28XB_NF 0x0113 USA-28XB serial Adapter (no firmware)
product KEYSPAN USA28XA_NF 0x0114 USA-28XA serial Adapter (no firmware)
product KEYSPAN USA28XA 0x0115 USA-28XA serial Adapter
product KEYSPAN USA18XA_NF 0x0116 USA-18XA serial Adapter (no firmware)
product KEYSPAN USA18XA 0x0117 USA-18XA serial Adapter
product KEYSPAN USA19QW_NF 0x0118 USA-19WQ serial Adapter (no firmware)
product KEYSPAN USA19QW 0x0119 USA-19WQ serial Adapter
product KEYSPAN USA19HA 0x0121 USA-19HS serial Adapter
product KEYSPAN UIA10 0x0201 UIA-10 remote control
product KEYSPAN UIA11 0x0202 UIA-11 remote control
/* Kingston products */
product KINGSTON XX1 0x0008 Ethernet
product KINGSTON KNU101TX 0x000a KNU101TX USB Ethernet
product KINGSTON HYPERX3_0 0x162b DT HyperX 3.0
/* Kawasaki products */
product KLSI DUH3E10BT 0x0008 USB Ethernet
product KLSI DUH3E10BTN 0x0009 USB Ethernet
/* Kobil products */
product KOBIL CONV_B1 0x2020 FTDI compatible adapter
product KOBIL CONV_KAAN 0x2021 FTDI compatible adapter
/* Kodak products */
product KODAK DC220 0x0100 Digital Science DC220
product KODAK DC260 0x0110 Digital Science DC260
product KODAK DC265 0x0111 Digital Science DC265
product KODAK DC290 0x0112 Digital Science DC290
product KODAK DC240 0x0120 Digital Science DC240
product KODAK DC280 0x0130 Digital Science DC280
/* Kontron AG products */
product KONTRON DM9601 0x8101 USB Ethernet
product KONTRON JP1082 0x9700 USB Ethernet
/* Konica Corp. products */
product KONICA CAMERA 0x0720 Digital Color Camera
/* KYE products */
product KYE NICHE 0x0001 Niche mouse
product KYE NETSCROLL 0x0003 Genius NetScroll mouse
product KYE FLIGHT2000 0x1004 Flight 2000 joystick
product KYE VIVIDPRO 0x2001 ColorPage Vivid-Pro scanner
/* Kyocera products */
product KYOCERA FINECAM_S3X 0x0100 Finecam S3x
product KYOCERA FINECAM_S4 0x0101 Finecam S4
product KYOCERA FINECAM_S5 0x0103 Finecam S5
product KYOCERA FINECAM_L3 0x0105 Finecam L3
product KYOCERA AHK3001V 0x0203 AH-K3001V
product KYOCERA2 CDMA_MSM_K 0x17da Qualcomm Kyocera CDMA Technologies MSM
product KYOCERA2 KPC680 0x180a Qualcomm Kyocera CDMA Technologies MSM
/* LaCie products */
product LACIE HD 0xa601 Hard Disk
product LACIE CDRW 0xa602 CD R/W
/* Lake Shore Cryotronics products */
product LAKESHORE 121 0x0100 121 Current Source
product LAKESHORE 218A 0x0200 218A Temperature Monitor
product LAKESHORE 219 0x0201 219 Temperature Monitor
product LAKESHORE 233 0x0202 233 Temperature Transmitter
product LAKESHORE 235 0x0203 235 Temperature Transmitter
product LAKESHORE 335 0x0300 335 Temperature Controller
product LAKESHORE 336 0x0301 336 Temperature Controller
product LAKESHORE 350 0x0302 350 Temperature Controller
product LAKESHORE 371 0x0303 371 AC Bridge
product LAKESHORE 411 0x0400 411 Handheld Gaussmeter
product LAKESHORE 425 0x0401 425 Gaussmeter
product LAKESHORE 455A 0x0402 455A DSP Gaussmeter
product LAKESHORE 475A 0x0403 475A DSP Gaussmeter
product LAKESHORE 465 0x0404 465 Gaussmeter
product LAKESHORE 625A 0x0600 625A Magnet PSU
product LAKESHORE 642A 0x0601 642A Magnet PSU
product LAKESHORE 648 0x0602 648 Magnet PSU
product LAKESHORE 737 0x0700 737 VSM Controller
product LAKESHORE 776 0x0701 776 Matrix Switch
/* Larsen and Brusgaard products */
product LARSENBRUSGAARD ALTITRACK 0x0001 FTDI compatible adapter
/* Leadtek products */
product LEADTEK 9531 0x2101 9531 GPS
/* Lenovo products */
product LENOVO GIGALAN 0x304b USB 3.0 Ethernet
product LENOVO ETHERNET 0x7203 USB 2.0 Ethernet
product LENOVO RTL8153 0x7205 USB 3.0 Ethernet
+product LENOVO ONELINK 0x720a USB 3.0 Ethernet
product LENOVO TBT3LAN 0x3069 LAN port in Thinkpad TB3 dock
product LENOVO USBCLAN 0x3062 LAN port in Thinkpad USB-C dock
/* Lexar products */
product LEXAR JUMPSHOT 0x0001 jumpSHOT CompactFlash Reader
product LEXAR CF_READER 0xb002 USB CF Reader
product LEXAR JUMPDRIVE 0xa833 USB Jumpdrive Flash Drive
/* Lexmark products */
product LEXMARK S2450 0x0009 Optra S 2450
/* Liebert products */
product LIEBERT POWERSURE_PXT 0xffff PowerSure Personal XT
product LIEBERT2 POWERSURE_PSA 0x0001 PowerSure PSA UPS
product LIEBERT2 PSI1000 0x0004 UPS PSI 1000 FW:08
/* Link Instruments Inc. products */
product LINKINSTRUMENTS MSO19 0xf190 Link Instruments MSO-19
product LINKINSTRUMENTS MSO28 0xf280 Link Instruments MSO-28
product LINKINSTRUMENTS MSO28_2 0xf281 Link Instruments MSO-28
/* Linksys products */
product LINKSYS MAUSB2 0x0105 Camedia MAUSB-2
product LINKSYS USB10TX1 0x200c USB10TX
product LINKSYS USB10T 0x2202 USB10T Ethernet
product LINKSYS USB100TX 0x2203 USB100TX Ethernet
product LINKSYS USB100H1 0x2204 USB100H1 Ethernet/HPNA
product LINKSYS USB10TA 0x2206 USB10TA Ethernet
product LINKSYS USB10TX2 0x400b USB10TX
product LINKSYS2 WUSB11 0x2219 WUSB11 Wireless Adapter
product LINKSYS2 USB200M 0x2226 USB 2.0 10/100 Ethernet
product LINKSYS3 WUSB11v28 0x2233 WUSB11 v2.8 Wireless Adapter
product LINKSYS4 USB1000 0x0039 USB1000
product LINKSYS4 WUSB100 0x0070 WUSB100
product LINKSYS4 WUSB600N 0x0071 WUSB600N
product LINKSYS4 WUSB54GCV2 0x0073 WUSB54GC v2
product LINKSYS4 WUSB54GCV3 0x0077 WUSB54GC v3
product LINKSYS4 RT3070 0x0078 RT3070
product LINKSYS4 WUSB600NV2 0x0079 WUSB600N v2
/* Logilink products */
product LOGILINK DUMMY 0x0000 Dummy product
product LOGILINK U2M 0x0101 LogiLink USB MIDI Cable
/* Logitech products */
product LOGITECH M2452 0x0203 M2452 keyboard
product LOGITECH M4848 0x0301 M4848 mouse
product LOGITECH PAGESCAN 0x040f PageScan
product LOGITECH QUICKCAMWEB 0x0801 QuickCam Web
product LOGITECH QUICKCAMPRO 0x0810 QuickCam Pro
product LOGITECH WEBCAMC100 0x0817 Webcam C100
product LOGITECH QUICKCAMEXP 0x0840 QuickCam Express
product LOGITECH QUICKCAM 0x0850 QuickCam
product LOGITECH QUICKCAMPRO3 0x0990 QuickCam Pro 9000
product LOGITECH N43 0xc000 N43
product LOGITECH N48 0xc001 N48 mouse
product LOGITECH MBA47 0xc002 M-BA47 mouse
product LOGITECH WMMOUSE 0xc004 WingMan Gaming Mouse
product LOGITECH BD58 0xc00c BD58 mouse
product LOGITECH UN58A 0xc030 iFeel Mouse
product LOGITECH UN53B 0xc032 iFeel MouseMan
product LOGITECH WMPAD 0xc208 WingMan GamePad Extreme
product LOGITECH WMRPAD 0xc20a WingMan RumblePad
product LOGITECH G510S 0xc22d G510s Keyboard
product LOGITECH WMJOY 0xc281 WingMan Force joystick
product LOGITECH BB13 0xc401 USB-PS/2 Trackball
product LOGITECH RK53 0xc501 Cordless mouse
product LOGITECH RB6 0xc503 Cordless keyboard
product LOGITECH MX700 0xc506 Cordless optical mouse
product LOGITECH UNIFYING 0xc52b Logitech Unifying Receiver
product LOGITECH QUICKCAMPRO2 0xd001 QuickCam Pro
/* Logitec Corp. products */
product LOGITEC LDR_H443SU2 0x0033 DVD Multi-plus unit LDR-H443SU2
product LOGITEC LDR_H443U2 0x00b3 DVD Multi-plus unit LDR-H443U2
product LOGITEC LAN_GTJU2A 0x0160 LAN-GTJ/U2A Ethernet
product LOGITEC RT2870_1 0x0162 RT2870
product LOGITEC RT2870_2 0x0163 RT2870
product LOGITEC RT2870_3 0x0164 RT2870
product LOGITEC LANW300NU2 0x0166 LAN-W300N/U2
product LOGITEC LANW150NU2 0x0168 LAN-W150N/U2
product LOGITEC LANW300NU2S 0x0169 LAN-W300N/U2S
/* Longcheer Holdings, Ltd. products */
product LONGCHEER WM66 0x6061 Longcheer WM66 HSDPA
product LONGCHEER W14 0x9603 Mobilcom W14
product LONGCHEER DISK 0xf000 Driver disk
product LONGCHEER XSSTICK 0x9605 4G Systems XSStick P14
/* Lucent products */
product LUCENT EVALKIT 0x1001 USS-720 evaluation kit
/* Luwen products */
product LUWEN EASYDISK 0x0005 EasyDisk
/* Macally products */
product MACALLY MOUSE1 0x0101 mouse
/* Mag-Tek products */
product MAGTEK USBSWIPE 0x0002 USB Mag Stripe Swipe Reader
/* Marvell Technology Group, Ltd. products */
product MARVELL SHEEVAPLUG 0x9e8f SheevaPlug serial interface
/* Matrix Orbital products */
product MATRIXORBITAL FTDI_RANGE_0100 0x0100 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0101 0x0101 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0102 0x0102 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0103 0x0103 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0104 0x0104 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0105 0x0105 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0106 0x0106 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0107 0x0107 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0108 0x0108 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0109 0x0109 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_010A 0x010a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_010B 0x010b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_010C 0x010c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_010D 0x010d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_010E 0x010e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_010F 0x010f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0110 0x0110 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0111 0x0111 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0112 0x0112 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0113 0x0113 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0114 0x0114 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0115 0x0115 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0116 0x0116 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0117 0x0117 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0118 0x0118 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0119 0x0119 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_011A 0x011a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_011B 0x011b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_011C 0x011c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_011D 0x011d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_011E 0x011e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_011F 0x011f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0120 0x0120 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0121 0x0121 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0122 0x0122 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0123 0x0123 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0124 0x0124 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0125 0x0125 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0126 0x0126 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0128 0x0128 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0129 0x0129 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_012A 0x012a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_012B 0x012b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_012D 0x012d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_012E 0x012e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_012F 0x012f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0130 0x0130 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0131 0x0131 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0132 0x0132 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0133 0x0133 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0134 0x0134 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0135 0x0135 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0136 0x0136 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0137 0x0137 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0138 0x0138 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0139 0x0139 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_013A 0x013a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_013B 0x013b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_013C 0x013c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_013D 0x013d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_013E 0x013e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_013F 0x013f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0140 0x0140 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0141 0x0141 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0142 0x0142 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0143 0x0143 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0144 0x0144 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0145 0x0145 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0146 0x0146 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0147 0x0147 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0148 0x0148 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0149 0x0149 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_014A 0x014a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_014B 0x014b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_014C 0x014c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_014D 0x014d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_014E 0x014e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_014F 0x014f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0150 0x0150 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0151 0x0151 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0152 0x0152 FTDI compatible adapter
product MATRIXORBITAL MOUA 0x0153 Matrix Orbital MOU-Axxxx LCD displays
product MATRIXORBITAL FTDI_RANGE_0159 0x0159 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_015A 0x015a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_015B 0x015b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_015C 0x015c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_015D 0x015d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_015E 0x015e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_015F 0x015f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0160 0x0160 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0161 0x0161 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0162 0x0162 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0163 0x0163 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0164 0x0164 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0165 0x0165 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0166 0x0166 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0167 0x0167 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0168 0x0168 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0169 0x0169 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_016A 0x016a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_016B 0x016b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_016C 0x016c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_016D 0x016d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_016E 0x016e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_016F 0x016f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0170 0x0170 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0171 0x0171 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0172 0x0172 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0173 0x0173 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0174 0x0174 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0175 0x0175 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0176 0x0176 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0177 0x0177 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0178 0x0178 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0179 0x0179 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_017A 0x017a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_017B 0x017b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_017C 0x017c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_017D 0x017d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_017E 0x017e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_017F 0x017f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0180 0x0180 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0181 0x0181 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0182 0x0182 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0183 0x0183 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0184 0x0184 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0185 0x0185 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0186 0x0186 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0187 0x0187 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0188 0x0188 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0189 0x0189 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_018A 0x018a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_018B 0x018b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_018C 0x018c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_018D 0x018d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_018E 0x018e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_018F 0x018f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0190 0x0190 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0191 0x0191 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0192 0x0192 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0193 0x0193 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0194 0x0194 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0195 0x0195 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0196 0x0196 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0197 0x0197 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0198 0x0198 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_0199 0x0199 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_019A 0x019a FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_019B 0x019b FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_019C 0x019c FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_019D 0x019d FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_019E 0x019e FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_019F 0x019f FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A0 0x01a0 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A1 0x01a1 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A2 0x01a2 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A3 0x01a3 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A4 0x01a4 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A5 0x01a5 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A6 0x01a6 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A7 0x01a7 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A8 0x01a8 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01A9 0x01a9 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01AA 0x01aa FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01AB 0x01ab FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01AC 0x01ac FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01AD 0x01ad FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01AE 0x01ae FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01AF 0x01af FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B0 0x01b0 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B1 0x01b1 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B2 0x01b2 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B3 0x01b3 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B4 0x01b4 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B5 0x01b5 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B6 0x01b6 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B7 0x01b7 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B8 0x01b8 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01B9 0x01b9 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01BA 0x01ba FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01BB 0x01bb FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01BC 0x01bc FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01BD 0x01bd FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01BE 0x01be FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01BF 0x01bf FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C0 0x01c0 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C1 0x01c1 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C2 0x01c2 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C3 0x01c3 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C4 0x01c4 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C5 0x01c5 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C6 0x01c6 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C7 0x01c7 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C8 0x01c8 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01C9 0x01c9 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01CA 0x01ca FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01CB 0x01cb FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01CC 0x01cc FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01CD 0x01cd FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01CE 0x01ce FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01CF 0x01cf FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D0 0x01d0 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D1 0x01d1 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D2 0x01d2 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D3 0x01d3 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D4 0x01d4 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D5 0x01d5 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D6 0x01d6 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D7 0x01d7 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D8 0x01d8 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01D9 0x01d9 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01DA 0x01da FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01DB 0x01db FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01DC 0x01dc FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01DD 0x01dd FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01DE 0x01de FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01DF 0x01df FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E0 0x01e0 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E1 0x01e1 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E2 0x01e2 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E3 0x01e3 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E4 0x01e4 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E5 0x01e5 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E6 0x01e6 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E7 0x01e7 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E8 0x01e8 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01E9 0x01e9 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01EA 0x01ea FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01EB 0x01eb FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01EC 0x01ec FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01ED 0x01ed FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01EE 0x01ee FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01EF 0x01ef FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F0 0x01f0 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F1 0x01f1 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F2 0x01f2 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F3 0x01f3 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F4 0x01f4 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F5 0x01f5 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F6 0x01f6 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F7 0x01f7 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F8 0x01f8 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01F9 0x01f9 FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01FA 0x01fa FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01FB 0x01fb FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01FC 0x01fc FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01FD 0x01fd FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01FE 0x01fe FTDI compatible adapter
product MATRIXORBITAL FTDI_RANGE_01FF 0x01ff FTDI compatible adapter
/* MCT Corp. */
product MCT HUB0100 0x0100 Hub
product MCT DU_H3SP_USB232 0x0200 D-Link DU-H3SP USB BAY Hub
product MCT USB232 0x0210 USB-232 Interface
product MCT SITECOM_USB232 0x0230 Sitecom USB-232 Products
/* Medeli */
product MEDELI DD305 0x5011 DD305 Digital Drum Set
/* MediaTek, Inc. */
product MEDIATEK MTK3329 0x3329 MTK II GPS Receiver
/* Meizu Electronics */
product MEIZU M6_SL 0x0140 MiniPlayer M6 (SL)
/* Melco, Inc products */
product MELCO LUATX1 0x0001 LUA-TX Ethernet
product MELCO LUATX5 0x0005 LUA-TX Ethernet
product MELCO LUA2TX5 0x0009 LUA2-TX Ethernet
product MELCO LUAKTX 0x0012 LUA-KTX Ethernet
product MELCO DUBPXXG 0x001c DUB-PxxG
product MELCO LUAU2KTX 0x003d LUA-U2-KTX Ethernet
product MELCO KG54YB 0x005e WLI-U2-KG54-YB WLAN
product MELCO KG54 0x0066 WLI-U2-KG54 WLAN
product MELCO KG54AI 0x0067 WLI-U2-KG54-AI WLAN
product MELCO LUA3U2AGT 0x006e LUA3-U2-AGT
product MELCO NINWIFI 0x008b Nintendo Wi-Fi
product MELCO PCOPRS1 0x00b3 PC-OP-RS1 RemoteStation
product MELCO SG54HP 0x00d8 WLI-U2-SG54HP
product MELCO G54HP 0x00d9 WLI-U2-G54HP
product MELCO KG54L 0x00da WLI-U2-KG54L
product MELCO WLIUCG300N 0x00e8 WLI-UC-G300N
product MELCO SG54HG 0x00f4 WLI-U2-SG54HG
product MELCO WLRUCG 0x0116 WLR-UC-G
product MELCO WLRUCGAOSS 0x0119 WLR-UC-G-AOSS
product MELCO WLIUCAG300N 0x012e WLI-UC-AG300N
product MELCO WLIUCG 0x0137 WLI-UC-G
product MELCO WLIUCG300HP 0x0148 WLI-UC-G300HP
product MELCO RT2870_2 0x0150 RT2870
product MELCO WLIUCGN 0x015d WLI-UC-GN
product MELCO WLIUCG301N 0x016f WLI-UC-G301N
product MELCO WLIUCGNM 0x01a2 WLI-UC-GNM
product MELCO WLIUCG300HPV1 0x01a8 WLI-UC-G300HP-V1
product MELCO WLIUCGNM2 0x01ee WLI-UC-GNM2
product MELCO WIU2433DM 0x0242 WI-U2-433DM
product MELCO WIU3866D 0x025d WI-U3-866D
/* Merlin products */
product MERLIN V620 0x1110 Merlin V620
/* MetaGeek products */
product METAGEEK TELLSTICK 0x0c30 FTDI compatible adapter
product METAGEEK WISPY1B 0x083e MetaGeek Wi-Spy
product METAGEEK WISPY24X 0x083f MetaGeek Wi-Spy 2.4x
product METAGEEK2 WISPYDBX 0x5000 MetaGeek Wi-Spy DBx
/* Metricom products */
product METRICOM RICOCHET_GS 0x0001 Ricochet GS
/* MGE UPS Systems */
product MGE UPS1 0x0001 MGE UPS SYSTEMS PROTECTIONCENTER 1
product MGE UPS2 0xffff MGE UPS SYSTEMS PROTECTIONCENTER 2
/* MEI products */
product MEI CASHFLOW_SC 0x1100 Cashflow-SC Cash Acceptor
product MEI S2000 0x1101 Series 2000 Combo Acceptor
/* Microdia / Sonix Technology Co., Ltd. products */
product CHICONY2 YUREX 0x1010 YUREX
product CHICONY2 CAM_1 0x62c0 CAM_1
product CHICONY2 TEMPER 0x7401 TEMPer sensor
/* Micro Star International products */
product MSI BT_DONGLE 0x1967 Bluetooth USB dongle
product MSI RT3070_1 0x3820 RT3070
product MSI RT3070_2 0x3821 RT3070
product MSI RT3070_8 0x3822 RT3070
product MSI RT3070_3 0x3870 RT3070
product MSI RT3070_9 0x3871 RT3070
product MSI UB11B 0x6823 UB11B
product MSI RT2570 0x6861 RT2570
product MSI RT2570_2 0x6865 RT2570
product MSI RT2570_3 0x6869 RT2570
product MSI RT2573_1 0x6874 RT2573
product MSI RT2573_2 0x6877 RT2573
product MSI RT3070_4 0x6899 RT3070
product MSI RT3070_5 0x821a RT3070
product MSI RT3070_10 0x822a RT3070
product MSI RT3070_6 0x870a RT3070
product MSI RT3070_11 0x871a RT3070
product MSI RT3070_7 0x899a RT3070
product MSI RT2573_3 0xa861 RT2573
product MSI RT2573_4 0xa874 RT2573
/* Micron products */
product MICRON REALSSD 0x0655 Real SSD eUSB
/* Microsoft products */
product MICROSOFT SIDEPREC 0x0008 SideWinder Precision Pro
product MICROSOFT INTELLIMOUSE 0x0009 IntelliMouse
product MICROSOFT NATURALKBD 0x000b Natural Keyboard Elite
product MICROSOFT DDS80 0x0014 Digital Sound System 80
product MICROSOFT SIDEWINDER 0x001a Sidewinder Precision Racing Wheel
product MICROSOFT INETPRO 0x001c Internet Keyboard Pro
product MICROSOFT TBEXPLORER 0x0024 Trackball Explorer
product MICROSOFT INTELLIEYE 0x0025 IntelliEye mouse
product MICROSOFT INETPRO2 0x002b Internet Keyboard Pro
product MICROSOFT INTELLIMOUSE5 0x0039 IntelliMouse 1.1 5-Button Mouse
product MICROSOFT WHEELMOUSE 0x0040 Wheel Mouse Optical
product MICROSOFT MN510 0x006e MN510 Wireless
product MICROSOFT 700WX 0x0079 Palm 700WX
product MICROSOFT MN110 0x007a 10/100 USB NIC
product MICROSOFT WLINTELLIMOUSE 0x008c Wireless Optical IntelliMouse
product MICROSOFT WLNOTEBOOK 0x00b9 Wireless Optical Mouse (Model 1023)
product MICROSOFT COMFORT3000 0x00d1 Comfort Optical Mouse 3000 (Model 1043)
product MICROSOFT WLNOTEBOOK3 0x00d2 Wireless Optical Mouse 3000 (Model 1049)
product MICROSOFT NATURAL4000 0x00db Natural Ergonomic Keyboard 4000
product MICROSOFT WLNOTEBOOK2 0x00e1 Wireless Optical Mouse 3000 (Model 1056)
product MICROSOFT XBOX360 0x0292 XBOX 360 WLAN
/* Microtech products */
product MICROTECH SCSIDB25 0x0004 USB-SCSI-DB25
product MICROTECH SCSIHD50 0x0005 USB-SCSI-HD50
product MICROTECH DPCM 0x0006 USB CameraMate
product MICROTECH FREECOM 0xfc01 Freecom USB-IDE
/* Microtek products */
product MICROTEK 336CX 0x0094 Phantom 336CX - C3 scanner
product MICROTEK X6U 0x0099 ScanMaker X6 - X6U
product MICROTEK C6 0x009a Phantom C6 scanner
product MICROTEK 336CX2 0x00a0 Phantom 336CX - C3 scanner
product MICROTEK V6USL 0x00a3 ScanMaker V6USL
product MICROTEK V6USL2 0x80a3 ScanMaker V6USL
product MICROTEK V6UL 0x80ac ScanMaker V6UL
/* Microtune, Inc. products */
product MICROTUNE BT_DONGLE 0x1000 Bluetooth USB dongle
/* Midiman products */
product MAUDIO MIDISPORT2X2 0x1001 Midisport 2x2
product MAUDIO FASTTRACKULTRA 0x2080 Fast Track Ultra
product MAUDIO FASTTRACKULTRA8R 0x2081 Fast Track Ultra 8R
/* MindsAtWork products */
product MINDSATWORK WALLET 0x0001 Digital Wallet
/* Minolta Co., Ltd. */
product MINOLTA 2300 0x4001 Dimage 2300
product MINOLTA S304 0x4007 Dimage S304
product MINOLTA X 0x4009 Dimage X
product MINOLTA 5400 0x400e Dimage 5400
product MINOLTA F300 0x4011 Dimage F300
product MINOLTA E223 0x4017 Dimage E223
/* Mitsumi products */
product MITSUMI CDRRW 0x0000 CD-R/RW Drive
product MITSUMI BT_DONGLE 0x641f Bluetooth USB dongle
product MITSUMI FDD 0x6901 USB FDD
/* Mobile Action products */
product MOBILEACTION MA620 0x0620 MA-620 Infrared Adapter
/* Mobility products */
product MOBILITY USB_SERIAL 0x0202 FTDI compatible adapter
product MOBILITY EA 0x0204 Ethernet
product MOBILITY EASIDOCK 0x0304 EasiDock Ethernet
/* MosChip products */
product MOSCHIP MCS7703 0x7703 MCS7703 Serial Port Adapter
product MOSCHIP MCS7730 0x7730 MCS7730 Ethernet
product MOSCHIP MCS7820 0x7820 MCS7820 Serial Port Adapter
product MOSCHIP MCS7830 0x7830 MCS7830 Ethernet
product MOSCHIP MCS7832 0x7832 MCS7832 Ethernet
product MOSCHIP MCS7840 0x7840 MCS7840 Serial Port Adapter
/* Motorola products */
product MOTOROLA MC141555 0x1555 MC141555 hub controller
product MOTOROLA SB4100 0x4100 SB4100 USB Cable Modem
product MOTOROLA2 T720C 0x2822 T720c
product MOTOROLA2 A41XV32X 0x2a22 A41x/V32x Mobile Phones
product MOTOROLA2 E398 0x4810 E398 Mobile Phone
product MOTOROLA2 USBLAN 0x600c USBLAN
product MOTOROLA2 USBLAN2 0x6027 USBLAN
product MOTOROLA2 MB886 0x710f MB886 Mobile Phone (Atria HD)
product MOTOROLA4 RT2770 0x9031 RT2770
product MOTOROLA4 RT3070 0x9032 RT3070
/* Moxa */
product MOXA MXU1_1110 0x1110 Moxa Uport 1110
product MOXA MXU1_1130 0x1130 Moxa Uport 1130
product MOXA MXU1_1131 0x1131 Moxa Uport 1131
product MOXA MXU1_1150 0x1150 Moxa Uport 1150
product MOXA MXU1_1151 0x1151 Moxa Uport 1151
/* MpMan products */
product MPMAN MPF400_2 0x25a8 MPF400 Music Player 2Go
product MPMAN MPF400_1 0x36d0 MPF400 Music Player 1Go
/* MultiTech products */
product MULTITECH MT9234ZBA_2 0x0319 MT9234ZBA USB modem (alt)
product MULTITECH ATLAS 0xf101 MT5634ZBA-USB modem
product MULTITECH GSM 0xf108 GSM USB Modem
product MULTITECH CDMA 0xf109 CDMA USB Modem
product MULTITECH CDMA_FW 0xf110 CDMA USB Modem firmware running
product MULTITECH GSM_FW 0xf111 GSM USB Modem firmware running
product MULTITECH EDGE 0xf112 Edge USB Modem
product MULTITECH MT9234MU 0xf114 MT9234 MU
product MULTITECH MT9234ZBA 0xf115 MT9234 ZBA
/* Mustek products */
product MUSTEK 1200CU 0x0001 1200 CU scanner
product MUSTEK 600CU 0x0002 600 CU scanner
product MUSTEK 1200USB 0x0003 1200 USB scanner
product MUSTEK 1200UB 0x0006 1200 UB scanner
product MUSTEK 1200USBPLUS 0x0007 1200 USB Plus scanner
product MUSTEK 1200CUPLUS 0x0008 1200 CU Plus scanner
product MUSTEK BEARPAW1200F 0x0010 BearPaw 1200F scanner
product MUSTEK BEARPAW2400TA 0x0218 BearPaw 2400TA scanner
product MUSTEK BEARPAW1200TA 0x021e BearPaw 1200TA scanner
product MUSTEK 600USB 0x0873 600 USB scanner
product MUSTEK MDC800 0xa800 MDC-800 digital camera
/* M-Systems products */
product MSYSTEMS DISKONKEY 0x0010 DiskOnKey
product MSYSTEMS DISKONKEY2 0x0011 DiskOnKey
/* Myson products */
product MYSON HEDEN_8813 0x8813 USB-IDE
product MYSON HEDEN 0x8818 USB-IDE
product MYSON HUBREADER 0x8819 COMBO Card reader with USB HUB
product MYSON STARREADER 0x9920 USB flash card adapter
/* National Semiconductor */
product NATIONAL BEARPAW1200 0x1000 BearPaw 1200
product NATIONAL BEARPAW2400 0x1001 BearPaw 2400
/* NEC products */
product NEC HUB_0050 0x0050 USB 2.0 7-Port Hub
product NEC HUB_005A 0x005a USB 2.0 4-Port Hub
product NEC WL300NUG 0x0249 WL300NU-G
product NEC WL900U 0x0408 Aterm WL900U
product NEC HUB 0x55aa hub
product NEC HUB_B 0x55ab hub
/* NEODIO products */
product NEODIO ND3260 0x3260 8-in-1 Multi-format Flash Controller
product NEODIO ND5010 0x5010 Multi-format Flash Controller
/* Neotel products */
product NEOTEL PRIME 0x4000 Prime USB modem
/* Netac products */
product NETAC CF_CARD 0x1060 USB-CF-Card
product NETAC ONLYDISK 0x0003 OnlyDisk
/* NetChip Technology Products */
product NETCHIP TURBOCONNECT 0x1080 Turbo-Connect
product NETCHIP CLIK_40 0xa140 USB Clik! 40
product NETCHIP GADGETZERO 0xa4a0 Linux Gadget Zero
product NETCHIP ETHERNETGADGET 0xa4a2 Linux Ethernet/RNDIS gadget on pxa210/25x/26x
product NETCHIP POCKETBOOK 0xa4a5 PocketBook
/* Netgear products */
product NETGEAR EA101 0x1001 Ethernet
product NETGEAR EA101X 0x1002 Ethernet
product NETGEAR FA101 0x1020 Ethernet 10/100, USB1.1
product NETGEAR FA120 0x1040 USB 2.0 Ethernet
product NETGEAR M4100 0x1100 M4100/M5300/M7100 series switch
product NETGEAR WG111V1_2 0x4240 PrismGT USB 2.0 WLAN
product NETGEAR WG111V3 0x4260 WG111v3
product NETGEAR WG111U 0x4300 WG111U
product NETGEAR WG111U_NF 0x4301 WG111U (no firmware)
product NETGEAR WG111V2 0x6a00 WG111V2
product NETGEAR WN111V2 0x9001 WN111V2
product NETGEAR WNDA3100 0x9010 WNDA3100
product NETGEAR WNDA4100 0x9012 WNDA4100
product NETGEAR WNDA3200 0x9018 WNDA3200
product NETGEAR RTL8192CU 0x9021 RTL8192CU
product NETGEAR WNA1000 0x9040 WNA1000
product NETGEAR WNA1000M 0x9041 WNA1000M
product NETGEAR A6100 0x9052 A6100
product NETGEAR2 MA101 0x4100 MA101
product NETGEAR2 MA101B 0x4102 MA101 Rev B
product NETGEAR3 WG111T 0x4250 WG111T
product NETGEAR3 WG111T_NF 0x4251 WG111T (no firmware)
product NETGEAR3 WPN111 0x5f00 WPN111
product NETGEAR3 WPN111_NF 0x5f01 WPN111 (no firmware)
product NETGEAR3 WPN111_2 0x5f02 WPN111
product NETGEAR4 RTL8188CU 0x9041 RTL8188CU
/* NetIndex products */
product NETINDEX WS002IN 0x2001 Willcom WS002IN
/* NEWlink */
product NEWLINK USB2IDEBRIDGE 0x00ff USB 2.0 Hard Drive Enclosure
/* Nikon products */
product NIKON E990 0x0102 Digital Camera E990
product NIKON LS40 0x4000 CoolScan LS40 ED
product NIKON D300 0x041a Digital Camera D300
/* NovaTech Products */
product NOVATECH NV902 0x9020 NovaTech NV-902W
product NOVATECH RT2573 0x9021 RT2573
product NOVATECH RTL8188CU 0x9071 RTL8188CU
/* Nokia products */
product NOKIA N958GB 0x0070 Nokia N95 8GBc
product NOKIA2 CA42 0x1234 CA-42 cable
/* Novatel Wireless products */
product NOVATEL V640 0x1100 Merlin V620
product NOVATEL CDMA_MODEM 0x1110 Novatel Wireless Merlin CDMA
product NOVATEL V620 0x1110 Merlin V620
product NOVATEL V740 0x1120 Merlin V740
product NOVATEL V720 0x1130 Merlin V720
product NOVATEL U740 0x1400 Merlin U740
product NOVATEL U740_2 0x1410 Merlin U740
product NOVATEL U870 0x1420 Merlin U870
product NOVATEL XU870 0x1430 Merlin XU870
product NOVATEL X950D 0x1450 Merlin X950D
product NOVATEL ES620 0x2100 Expedite ES620
product NOVATEL E725 0x2120 Expedite E725
product NOVATEL ES620_2 0x2130 Expedite ES620
product NOVATEL U720 0x2110 Merlin U720
product NOVATEL EU730 0x2400 Expedite EU730
product NOVATEL EU740 0x2410 Expedite EU740
product NOVATEL EU870D 0x2420 Expedite EU870D
product NOVATEL U727 0x4100 Merlin U727 CDMA
product NOVATEL MC950D 0x4400 Novatel MC950D HSUPA
product NOVATEL MC990D 0x7001 Novatel MC990D
product NOVATEL ZEROCD 0x5010 Novatel ZeroCD
product NOVATEL MIFI2200V 0x5020 Novatel MiFi 2200 CDMA Virgin Mobile
product NOVATEL ZEROCD2 0x5030 Novatel ZeroCD
product NOVATEL MIFI2200 0x5041 Novatel MiFi 2200 CDMA
product NOVATEL U727_2 0x5100 Merlin U727 CDMA
product NOVATEL U760 0x6000 Novatel U760
product NOVATEL MC760 0x6002 Novatel MC760
product NOVATEL MC547 0x7042 Novatel MC547
product NOVATEL MC679 0x7031 Novatel MC679
product NOVATEL2 FLEXPACKGPS 0x0100 NovAtel FlexPack GPS receiver
/* NVIDIA products */
product NVIDIA RTL8153 0x09ff USB 3.0 Ethernet
/* Merlin products */
product MERLIN V620 0x1110 Merlin V620
/* O2Micro products */
product O2MICRO OZ776_HUB 0x7761 OZ776 hub
product O2MICRO OZ776_CCID_SC 0x7772 OZ776 CCID SC Reader
/* Olimex products */
product OLIMEX ARM_USB_OCD 0x0003 FTDI compatible adapter
product OLIMEX ARM_USB_OCD_H 0x002b FTDI compatible adapter
/* Olympus products */
product OLYMPUS C1 0x0102 C-1 Digital Camera
product OLYMPUS C700 0x0105 C-700 Ultra Zoom
/* OmniVision Technologies, Inc. products */
product OMNIVISION OV511 0x0511 OV511 Camera
product OMNIVISION OV511PLUS 0xa511 OV511+ Camera
/* OnSpec Electronic, Inc. */
product ONSPEC SDS_HOTFIND_D 0x0400 SDS-infrared.com Hotfind-D Infrared Camera
product ONSPEC MDCFE_B_CF_READER 0xa000 MDCFE-B USB CF Reader
product ONSPEC CFMS_RW 0xa001 SIIG/Datafab Memory Stick+CF Reader/Writer
product ONSPEC READER 0xa003 Datafab-based Reader
product ONSPEC CFSM_READER 0xa005 PNY/Datafab CF+SM Reader
product ONSPEC CFSM_READER2 0xa006 Simple Tech/Datafab CF+SM Reader
product ONSPEC MDSM_B_READER 0xa103 MDSM-B reader
product ONSPEC CFSM_COMBO 0xa109 USB to CF + SM Combo (LC1)
product ONSPEC UCF100 0xa400 FlashLink UCF-100 CompactFlash Reader
product ONSPEC2 IMAGEMATE_SDDR55 0xa103 ImageMate SDDR55
/* Option products */
product OPTION VODAFONEMC3G 0x5000 Vodafone Mobile Connect 3G datacard
product OPTION GT3G 0x6000 GlobeTrotter 3G datacard
product OPTION GT3GQUAD 0x6300 GlobeTrotter 3G QUAD datacard
product OPTION GT3GPLUS 0x6600 GlobeTrotter 3G+ datacard
product OPTION GTICON322 0xd033 GlobeTrotter Icon322 storage
product OPTION GTMAX36 0x6701 GlobeTrotter Max 3.6 Modem
product OPTION GTMAX72 0x6711 GlobeTrotter Max 7.2 HSDPA
product OPTION GTHSDPA 0x6971 GlobeTrotter HSDPA
product OPTION GTMAXHSUPA 0x7001 GlobeTrotter HSUPA
product OPTION GTMAXHSUPAE 0x6901 GlobeTrotter HSUPA PCIe
product OPTION GTMAX380HSUPAE 0x7211 GlobeTrotter 380HSUPA PCIe
product OPTION GT3G_1 0x6050 3G modem
product OPTION GT3G_2 0x6100 3G modem
product OPTION GT3G_3 0x6150 3G modem
product OPTION GT3G_4 0x6200 3G modem
product OPTION GT3G_5 0x6250 3G modem
product OPTION GT3G_6 0x6350 3G modem
product OPTION E6500 0x6500 3G modem
product OPTION E6501 0x6501 3G modem
product OPTION E6601 0x6601 3G modem
product OPTION E6721 0x6721 3G modem
product OPTION E6741 0x6741 3G modem
product OPTION E6761 0x6761 3G modem
product OPTION E6800 0x6800 3G modem
product OPTION E7021 0x7021 3G modem
product OPTION E7041 0x7041 3G modem
product OPTION E7061 0x7061 3G modem
product OPTION E7100 0x7100 3G modem
product OPTION GTM380 0x7201 3G modem
product OPTION GE40X 0x7601 Globetrotter HSUPA
product OPTION GSICON72 0x6911 GlobeSurfer iCON
product OPTION GSICONHSUPA 0x7251 Globetrotter HSUPA
product OPTION ICON401 0x7401 GlobeSurfer iCON 401
product OPTION GTHSUPA 0x7011 Globetrotter HSUPA
product OPTION GMT382 0x7501 Globetrotter HSUPA
product OPTION GE40X_1 0x7301 Globetrotter HSUPA
product OPTION GE40X_2 0x7361 Globetrotter HSUPA
product OPTION GE40X_3 0x7381 Globetrotter HSUPA
product OPTION GTM661W 0x9000 GTM661W
product OPTION ICONEDGE 0xc031 GlobeSurfer iCON EDGE
product OPTION MODHSXPA 0xd013 Globetrotter HSUPA
product OPTION ICON321 0xd031 Globetrotter HSUPA
product OPTION ICON505 0xd055 Globetrotter iCON 505
product OPTION ICON452 0x7901 Globetrotter iCON 452
/* Optoelectronics Co., Ltd */
product OPTO BARCODE 0x0001 Barcode Reader
product OPTO OPTICONCODE 0x0009 Opticon Code Reader
product OPTO BARCODE_1 0xa002 Barcode Reader
product OPTO CRD7734 0xc000 USB Cradle CRD-7734-RU
product OPTO CRD7734_1 0xc001 USB Cradle CRD-7734-RU
/* OvisLink product */
product OVISLINK RT3072 0x3072 RT3072
/* OQO */
product OQO WIFI01 0x0002 model 01 WiFi interface
product OQO BT01 0x0003 model 01 Bluetooth interface
product OQO ETHER01PLUS 0x7720 model 01+ Ethernet
product OQO ETHER01 0x8150 model 01 Ethernet interface
/* Ours Technology Inc. */
product OTI DKU5 0x6858 DKU-5 Serial
/* Owen.ru products */
product OWEN AC4 0x0004 AC4 USB-RS485 converter
/* OWL products */
product OWL CM_160 0xca05 OWL CM-160 power monitor
/* Palm Computing, Inc. product */
product PALM SERIAL 0x0080 USB Serial
product PALM M500 0x0001 Palm m500
product PALM M505 0x0002 Palm m505
product PALM M515 0x0003 Palm m515
product PALM I705 0x0020 Palm i705
product PALM TUNGSTEN_Z 0x0031 Palm Tungsten Z
product PALM M125 0x0040 Palm m125
product PALM M130 0x0050 Palm m130
product PALM TUNGSTEN_T 0x0060 Palm Tungsten T
product PALM ZIRE31 0x0061 Palm Zire 31
product PALM ZIRE 0x0070 Palm Zire
/* Panasonic products */
product PANASONIC LS120CAM 0x0901 LS-120 Camera
product PANASONIC KXL840AN 0x0d01 CD-R Drive KXL-840AN
product PANASONIC KXLRW32AN 0x0d09 CD-R Drive KXL-RW32AN
product PANASONIC KXLCB20AN 0x0d0a CD-R Drive KXL-CB20AN
product PANASONIC KXLCB35AN 0x0d0e DVD-ROM & CD-R/RW
product PANASONIC SDCAAE 0x1b00 MultiMediaCard
product PANASONIC TYTP50P6S 0x3900 TY-TP50P6-S 50in Touch Panel
/* Papouch products */
product PAPOUCH AD4USB 0x8003 FTDI compatible adapter
product PAPOUCH AP485 0x0101 FTDI compatible adapter
product PAPOUCH AP485_2 0x0104 FTDI compatible adapter
product PAPOUCH DRAK5 0x0700 FTDI compatible adapter
product PAPOUCH DRAK6 0x1000 FTDI compatible adapter
product PAPOUCH GMSR 0x8005 FTDI compatible adapter
product PAPOUCH GMUX 0x8004 FTDI compatible adapter
product PAPOUCH IRAMP 0x0500 FTDI compatible adapter
product PAPOUCH LEC 0x0300 FTDI compatible adapter
product PAPOUCH MU 0x8001 FTDI compatible adapter
product PAPOUCH QUIDO10X1 0x0b00 FTDI compatible adapter
product PAPOUCH QUIDO2X16 0x0e00 FTDI compatible adapter
product PAPOUCH QUIDO2X2 0x0a00 FTDI compatible adapter
product PAPOUCH QUIDO30X3 0x0c00 FTDI compatible adapter
product PAPOUCH QUIDO3X32 0x0f00 FTDI compatible adapter
product PAPOUCH QUIDO4X4 0x0900 FTDI compatible adapter
product PAPOUCH QUIDO60X3 0x0d00 FTDI compatible adapter
product PAPOUCH QUIDO8X8 0x0800 FTDI compatible adapter
product PAPOUCH SB232 0x0301 FTDI compatible adapter
product PAPOUCH SB422 0x0102 FTDI compatible adapter
product PAPOUCH SB422_2 0x0105 FTDI compatible adapter
product PAPOUCH SB485 0x0100 FTDI compatible adapter
product PAPOUCH SB485C 0x0107 FTDI compatible adapter
product PAPOUCH SB485S 0x0106 FTDI compatible adapter
product PAPOUCH SB485_2 0x0103 FTDI compatible adapter
product PAPOUCH SIMUKEY 0x8002 FTDI compatible adapter
product PAPOUCH TMU 0x0400 FTDI compatible adapter
product PAPOUCH UPSUSB 0x8000 FTDI compatible adapter
/* PARA Industrial products */
product PARA RT3070 0x8888 RT3070
/* Simtec Electronics products */
product SIMTEC ENTROPYKEY 0x0001 Entropy Key
/* Pegatron products */
product PEGATRON RT2870 0x0002 RT2870
product PEGATRON RT3070 0x000c RT3070
product PEGATRON RT3070_2 0x000e RT3070
product PEGATRON RT3070_3 0x0010 RT3070
/* Peracom products */
product PERACOM SERIAL1 0x0001 Serial
product PERACOM ENET 0x0002 Ethernet
product PERACOM ENET3 0x0003 At Home Ethernet
product PERACOM ENET2 0x0005 Ethernet
/* Peraso Technologies, Inc products */
product PERASO PRS4001 0x4001 PRS4001 WLAN
/* Philips products */
product PHILIPS DSS350 0x0101 DSS 350 Digital Speaker System
product PHILIPS DSS 0x0104 DSS XXX Digital Speaker System
product PHILIPS HUB 0x0201 hub
product PHILIPS PCA646VC 0x0303 PCA646VC PC Camera
product PHILIPS PCVC680K 0x0308 PCVC680K Vesta Pro PC Camera
product PHILIPS SPC900NC 0x0329 SPC 900NC CCD PC Camera
product PHILIPS DSS150 0x0471 DSS 150 Digital Speaker System
product PHILIPS ACE1001 0x066a AKTAKOM ACE-1001 cable
product PHILIPS SPE3030CC 0x083a USB 2.0 External Disk
product PHILIPS SNU5600 0x1236 SNU5600
product PHILIPS UM10016 0x1552 ISP 1581 Hi-Speed USB MPEG2 Encoder Reference Kit
product PHILIPS DIVAUSB 0x1801 DIVA USB mp3 player
product PHILIPS RT2870 0x200f RT2870
/* Philips Semiconductor products */
product PHILIPSSEMI HUB1122 0x1122 HUB
/* Megatec */
product MEGATEC UPS 0x5161 Phoenixtec protocol based UPS
/* P.I. Engineering products */
product PIENGINEERING PS2USB 0x020b PS2 to Mac USB Adapter
/* Planex Communications products */
product PLANEX GW_US11H 0x14ea GW-US11H WLAN
product PLANEX2 RTL8188CUS 0x1201 RTL8188CUS
product PLANEX2 GW_US11S 0x3220 GW-US11S WLAN
product PLANEX2 GW_US54GXS 0x5303 GW-US54GXS WLAN
product PLANEX2 GW_US300 0x5304 GW-US300
product PLANEX2 RTL8188CU_1 0xab2a RTL8188CU
product PLANEX2 RTL8188CU_2 0xed17 RTL8188CU
product PLANEX2 RTL8188CU_3 0x4902 RTL8188CU
product PLANEX2 RTL8188CU_4 0xab2e RTL8188CU
product PLANEX2 RTL8192CU 0xab2b RTL8192CU
product PLANEX2 GWUS54HP 0xab01 GW-US54HP
product PLANEX2 GWUS300MINIS 0xab24 GW-US300MiniS
product PLANEX2 RT3070 0xab25 RT3070
product PLANEX2 MZKUE150N 0xab2f MZK-UE150N
product PLANEX2 GW900D 0xab30 GW-900D
product PLANEX2 GWUS54MINI2 0xab50 GW-US54Mini2
product PLANEX2 GWUS54SG 0xc002 GW-US54SG
product PLANEX2 GWUS54GZL 0xc007 GW-US54GZL
product PLANEX2 GWUS54GD 0xed01 GW-US54GD
product PLANEX2 GWUSMM 0xed02 GW-USMM
product PLANEX2 RT2870 0xed06 RT2870
product PLANEX2 GWUSMICRON 0xed14 GW-USMicroN
product PLANEX2 GWUSVALUEEZ 0xed17 GW-USValue-EZ
product PLANEX3 GWUS54GZ 0xab10 GW-US54GZ
product PLANEX3 GU1000T 0xab11 GU-1000T
product PLANEX3 GWUS54MINI 0xab13 GW-US54Mini
product PLANEX2 GWUSNANO 0xab28 GW-USNano
/* Plextor Corp. */
product PLEXTOR 40_12_40U 0x0011 PlexWriter 40/12/40U
/* Ploytec GmbH */
product PLOYTEC SPL_CRIMSON_1 0xc150 SPL Crimson Revision 1
/* PLX products */
product PLX TESTBOARD 0x9060 test board
product PLX CA42 0xac70 CA-42
/* PowerCOM products */
product POWERCOM IMPERIAL_SERIES 0x00a2 IMPERIAL Series
product POWERCOM SMART_KING_PRO 0x00a3 Smart KING Pro
product POWERCOM WOW 0x00a4 WOW
product POWERCOM VANGUARD 0x00a5 Vanguard
product POWERCOM BLACK_KNIGHT_PRO 0x00a6 Black Knight Pro
/* PNY products */
product PNY ATTACHE2 0x0010 USB 2.0 Flash Drive
/* PortGear products */
product PORTGEAR EA8 0x0008 Ethernet
product PORTGEAR EA9 0x0009 Ethernet
/* Portsmith products */
product PORTSMITH EEA 0x3003 Express Ethernet
/* Posiflex products */
product POSIFLEX PP7000 0x0300 FTDI compatible adapter
/* Primax products */
product PRIMAX G2X300 0x0300 G2-200 scanner
product PRIMAX G2E300 0x0301 G2E-300 scanner
product PRIMAX G2300 0x0302 G2-300 scanner
product PRIMAX G2E3002 0x0303 G2E-300 scanner
product PRIMAX 9600 0x0340 Colorado USB 9600 scanner
product PRIMAX 600U 0x0341 Colorado 600u scanner
product PRIMAX 6200 0x0345 Visioneer 6200 scanner
product PRIMAX 19200 0x0360 Colorado USB 19200 scanner
product PRIMAX 1200U 0x0361 Colorado 1200u scanner
product PRIMAX G600 0x0380 G2-600 scanner
product PRIMAX 636I 0x0381 ReadyScan 636i
product PRIMAX G2600 0x0382 G2-600 scanner
product PRIMAX G2E600 0x0383 G2E-600 scanner
product PRIMAX COMFORT 0x4d01 Comfort
product PRIMAX MOUSEINABOX 0x4d02 Mouse-in-a-Box
product PRIMAX PCGAUMS1 0x4d04 Sony PCGA-UMS1
product PRIMAX HP_RH304AA 0x4d17 HP RH304AA mouse
/* Prolific products */
product PROLIFIC PL2301 0x0000 PL2301 Host-Host interface
product PROLIFIC PL2302 0x0001 PL2302 Host-Host interface
product PROLIFIC MOTOROLA 0x0307 Motorola Cable
product PROLIFIC RSAQ2 0x04bb PL2303 Serial (IODATA USB-RSAQ2)
product PROLIFIC ALLTRONIX_GPRS 0x0609 Alltronix ACM003U00 modem
product PROLIFIC ALDIGA_AL11U 0x0611 AlDiga AL-11U modem
product PROLIFIC MICROMAX_610U 0x0612 Micromax 610U
product PROLIFIC DCU11 0x1234 DCU-11 Phone Cable
product PROLIFIC UIC_MSR206 0x206a UIC MSR206 Card Reader
product PROLIFIC PL2303 0x2303 PL2303 Serial (ATEN/IOGEAR UC232A)
product PROLIFIC PL2305 0x2305 Parallel printer
product PROLIFIC ATAPI4 0x2307 ATAPI-4 Controller
product PROLIFIC PL2501 0x2501 PL2501 Host-Host interface
product PROLIFIC PL2506 0x2506 PL2506 USB to IDE Bridge
product PROLIFIC PL27A1 0x27a1 PL27A1 USB 3.0 Host-Host interface
product PROLIFIC HCR331 0x331a HCR331 Hybrid Card Reader
product PROLIFIC PHAROS 0xaaa0 Prolific Pharos
product PROLIFIC RSAQ3 0xaaa2 PL2303 Serial Adapter (IODATA USB-RSAQ3)
product PROLIFIC2 PL2303 0x2303 PL2303 Serial Adapter
/* Putercom products */
product PUTERCOM UPA100 0x047e USB-1284 BRIDGE
/* Qcom products */
product QCOM RT2573 0x6196 RT2573
product QCOM RT2573_2 0x6229 RT2573
product QCOM RT2573_3 0x6238 RT2573
product QCOM RT2870 0x6259 RT2870
/* QI-hardware */
product QIHARDWARE JTAGSERIAL 0x0713 FTDI compatible adapter
/* Qisda products */
product QISDA H21_1 0x4512 3G modem
product QISDA H21_2 0x4523 3G modem
product QISDA H20_1 0x4515 3G modem
product QISDA H20_2 0x4519 3G modem
/* Qualcomm products */
product QUALCOMM CDMA_MSM 0x6000 CDMA Technologies MSM phone
product QUALCOMM NTT_L02C_MODEM 0x618f NTT DOCOMO L-02C
product QUALCOMM NTT_L02C_STORAGE 0x61dd NTT DOCOMO L-02C
product QUALCOMM2 MF330 0x6613 MF330
product QUALCOMM2 RWT_FCT 0x3100 RWT FCT-CDMA 2000 1xRTT modem
product QUALCOMM2 CDMA_MSM 0x3196 CDMA Technologies MSM modem
product QUALCOMM2 AC8700 0x6000 AC8700
product QUALCOMM2 VW110L 0x1000 Vertex Wireless 110L modem
product QUALCOMM2 SIM5218 0x9000 SIM5218
product QUALCOMM2 WM620 0x9002 Neoway WM620
product QUALCOMM2 GOBI2000_QDL 0x9204 Qualcomm Gobi 2000 QDL
product QUALCOMM2 GOBI2000 0x9205 Qualcomm Gobi 2000 modem
product QUALCOMM2 VT80N 0x6500 Venus VT80N
product QUALCOMM3 VFAST2 0x9909 Venus Fast2 modem
product QUALCOMMINC CDMA_MSM 0x0001 CDMA Technologies MSM modem
product QUALCOMMINC E0002 0x0002 3G modem
product QUALCOMMINC E0003 0x0003 3G modem
product QUALCOMMINC E0004 0x0004 3G modem
product QUALCOMMINC E0005 0x0005 3G modem
product QUALCOMMINC E0006 0x0006 3G modem
product QUALCOMMINC E0007 0x0007 3G modem
product QUALCOMMINC E0008 0x0008 3G modem
product QUALCOMMINC E0009 0x0009 3G modem
product QUALCOMMINC E000A 0x000a 3G modem
product QUALCOMMINC E000B 0x000b 3G modem
product QUALCOMMINC E000C 0x000c 3G modem
product QUALCOMMINC E000D 0x000d 3G modem
product QUALCOMMINC E000E 0x000e 3G modem
product QUALCOMMINC E000F 0x000f 3G modem
product QUALCOMMINC E0010 0x0010 3G modem
product QUALCOMMINC E0011 0x0011 3G modem
product QUALCOMMINC E0012 0x0012 3G modem
product QUALCOMMINC E0013 0x0013 3G modem
product QUALCOMMINC E0014 0x0014 3G modem
product QUALCOMMINC MF628 0x0015 3G modem
product QUALCOMMINC MF633R 0x0016 ZTE WCDMA modem
product QUALCOMMINC E0017 0x0017 3G modem
product QUALCOMMINC E0018 0x0018 3G modem
product QUALCOMMINC E0019 0x0019 3G modem
product QUALCOMMINC E0020 0x0020 3G modem
product QUALCOMMINC E0021 0x0021 3G modem
product QUALCOMMINC E0022 0x0022 3G modem
product QUALCOMMINC E0023 0x0023 3G modem
product QUALCOMMINC E0024 0x0024 3G modem
product QUALCOMMINC E0025 0x0025 3G modem
product QUALCOMMINC E0026 0x0026 3G modem
product QUALCOMMINC E0027 0x0027 3G modem
product QUALCOMMINC E0028 0x0028 3G modem
product QUALCOMMINC E0029 0x0029 3G modem
product QUALCOMMINC E0030 0x0030 3G modem
product QUALCOMMINC MF626 0x0031 3G modem
product QUALCOMMINC E0032 0x0032 3G modem
product QUALCOMMINC E0033 0x0033 3G modem
product QUALCOMMINC E0037 0x0037 3G modem
product QUALCOMMINC E0039 0x0039 3G modem
product QUALCOMMINC E0042 0x0042 3G modem
product QUALCOMMINC E0043 0x0043 3G modem
product QUALCOMMINC E0048 0x0048 3G modem
product QUALCOMMINC E0049 0x0049 3G modem
product QUALCOMMINC E0051 0x0051 3G modem
product QUALCOMMINC E0052 0x0052 3G modem
product QUALCOMMINC ZTE_STOR2 0x0053 USB ZTE Storage
product QUALCOMMINC E0054 0x0054 3G modem
product QUALCOMMINC E0055 0x0055 3G modem
product QUALCOMMINC E0057 0x0057 3G modem
product QUALCOMMINC E0058 0x0058 3G modem
product QUALCOMMINC E0059 0x0059 3G modem
product QUALCOMMINC E0060 0x0060 3G modem
product QUALCOMMINC E0061 0x0061 3G modem
product QUALCOMMINC E0062 0x0062 3G modem
product QUALCOMMINC E0063 0x0063 3G modem
product QUALCOMMINC E0064 0x0064 3G modem
product QUALCOMMINC E0066 0x0066 3G modem
product QUALCOMMINC E0069 0x0069 3G modem
product QUALCOMMINC E0070 0x0070 3G modem
product QUALCOMMINC E0073 0x0073 3G modem
product QUALCOMMINC E0076 0x0076 3G modem
product QUALCOMMINC E0078 0x0078 3G modem
product QUALCOMMINC E0082 0x0082 3G modem
product QUALCOMMINC E0086 0x0086 3G modem
product QUALCOMMINC MF112 0x0103 3G modem
product QUALCOMMINC SURFSTICK 0x0117 1&1 Surf Stick
product QUALCOMMINC K3772_Z_INIT 0x1179 K3772-Z Initial
product QUALCOMMINC K3772_Z 0x1181 K3772-Z
product QUALCOMMINC ZTE_MF730M 0x1420 3G modem
product QUALCOMMINC MF195E_INIT 0x1514 MF195E initial
product QUALCOMMINC MF195E 0x1516 MF195E
product QUALCOMMINC ZTE_STOR 0x2000 USB ZTE Storage
product QUALCOMMINC E2002 0x2002 3G modem
product QUALCOMMINC E2003 0x2003 3G modem
product QUALCOMMINC AC682 0xffdd CDMA 1xEVDO USB modem
product QUALCOMMINC AC682_INIT 0xffde CDMA 1xEVDO USB modem (initial)
product QUALCOMMINC AC8710 0xfff1 3G modem
product QUALCOMMINC AC2726 0xfff5 3G modem
product QUALCOMMINC AC8700 0xfffe CDMA 1xEVDO USB modem
/* Quanta products */
product QUANTA RW6815_1 0x00ce HP iPAQ rw6815
product QUANTA RT3070 0x0304 RT3070
product QUANTA Q101_STOR 0x1000 USB Q101 Storage
product QUANTA Q101 0xea02 HSDPA modem
product QUANTA Q111 0xea03 HSDPA modem
product QUANTA GLX 0xea04 HSDPA modem
product QUANTA GKE 0xea05 HSDPA modem
product QUANTA GLE 0xea06 HSDPA modem
product QUANTA RW6815R 0xf003 HP iPAQ rw6815 RNDIS
/* Quectel products */
product QUECTEL EC25 0x0125 LTE modem
/* Quickshot products */
product QUICKSHOT STRIKEPAD 0x6238 USB StrikePad
/* Qtronix products */
product QTRONIX 980N 0x2011 Scorpion-980N keyboard
/* Radio Shack */
product RADIOSHACK USBCABLE 0x4026 USB to Serial Cable
/* Rainbow Technologies products */
product RAINBOW IKEY2000 0x1200 i-Key 2000
/* Ralink Technology products */
product RALINK RT2570 0x1706 RT2500USB Wireless Adapter
product RALINK RT2070 0x2070 RT2070
product RALINK RT2570_2 0x2570 RT2500USB Wireless Adapter
product RALINK RT2573 0x2573 RT2501USB Wireless Adapter
product RALINK RT2671 0x2671 RT2601USB Wireless Adapter
product RALINK RT2770 0x2770 RT2770
product RALINK RT2870 0x2870 RT2870
product RALINK RT_STOR 0x2878 USB Storage
product RALINK RT3070 0x3070 RT3070
product RALINK RT3071 0x3071 RT3071
product RALINK RT3072 0x3072 RT3072
product RALINK RT3370 0x3370 RT3370
product RALINK RT3572 0x3572 RT3572
product RALINK RT3573 0x3573 RT3573
product RALINK RT5370 0x5370 RT5370
product RALINK RT5372 0x5372 RT5372
product RALINK RT5572 0x5572 RT5572
product RALINK RT8070 0x8070 RT8070
product RALINK RT2570_3 0x9020 RT2500USB Wireless Adapter
product RALINK RT2573_2 0x9021 RT2501USB Wireless Adapter
/* RATOC Systems products */
product RATOC REXUSB60 0xb000 USB serial adapter REX-USB60
product RATOC REXUSB60F 0xb020 USB serial adapter REX-USB60F
/* Realtek products */
/* Green House and CompUSA OEM this part */
product REALTEK DUMMY 0x0000 Dummy product
product REALTEK USB20CRW 0x0158 USB20CRW Card Reader
product REALTEK RTL8188ETV 0x0179 RTL8188ETV
product REALTEK RTL8188CTV 0x018a RTL8188CTV
product REALTEK RTL8821AU_2 0x0811 RTL8821AU
product REALTEK RTL8188RU_2 0x317f RTL8188RU
product REALTEK USBKR100 0x8150 USBKR100 USB Ethernet
product REALTEK RTL8152 0x8152 RTL8152 USB Ethernet
product REALTEK RTL8153 0x8153 RTL8153 USB Ethernet
product REALTEK RTL8188CE_0 0x8170 RTL8188CE
product REALTEK RTL8171 0x8171 RTL8171
product REALTEK RTL8172 0x8172 RTL8172
product REALTEK RTL8173 0x8173 RTL8173
product REALTEK RTL8174 0x8174 RTL8174
product REALTEK RTL8188CU_0 0x8176 RTL8188CU
product REALTEK RTL8191CU 0x8177 RTL8191CU
product REALTEK RTL8192CU 0x8178 RTL8192CU
product REALTEK RTL8188EU 0x8179 RTL8188EU
product REALTEK RTL8188CU_1 0x817a RTL8188CU
product REALTEK RTL8188CU_2 0x817b RTL8188CU
product REALTEK RTL8192CE 0x817c RTL8192CE
product REALTEK RTL8188RU_1 0x817d RTL8188RU
product REALTEK RTL8188CE_1 0x817e RTL8188CE
product REALTEK RTL8188RU_3 0x817f RTL8188RU
product REALTEK RTL8187 0x8187 RTL8187 Wireless Adapter
product REALTEK RTL8187B_0 0x8189 RTL8187B Wireless Adapter
product REALTEK RTL8188CUS 0x818a RTL8188CUS
product REALTEK RTL8192EU 0x818b RTL8192EU
product REALTEK RTL8188CU_3 0x8191 RTL8188CU
product REALTEK RTL8196EU 0x8196 RTL8196EU
product REALTEK RTL8187B_1 0x8197 RTL8187B Wireless Adapter
product REALTEK RTL8187B_2 0x8198 RTL8187B Wireless Adapter
product REALTEK RTL8712 0x8712 RTL8712
product REALTEK RTL8713 0x8713 RTL8713
product REALTEK RTL8188CU_COMBO 0x8754 RTL8188CU
product REALTEK RTL8821AU_1 0xa811 RTL8821AU
product REALTEK RTL8723BU 0xb720 RTL8723BU
product REALTEK RTL8192SU 0xc512 RTL8192SU
product REALTEK RTL8812AU 0x8812 RTL8812AU Wireless Adapter
/* RedOctane products */
product REDOCTANE DUMMY 0x0000 Dummy product
product REDOCTANE GHMIDI 0x474b GH MIDI INTERFACE
/* Renesas products */
product RENESAS RX610 0x0053 RX610 RX-Stick
/* Ricoh products */
product RICOH VGPVCC2 0x1830 VGP-VCC2 Camera
product RICOH VGPVCC3 0x1832 VGP-VCC3 Camera
product RICOH VGPVCC2_2 0x1833 VGP-VCC2 Camera
product RICOH VGPVCC2_3 0x1834 VGP-VCC2 Camera
product RICOH VGPVCC7 0x183a VGP-VCC7 Camera
product RICOH VGPVCC8 0x183b VGP-VCC8 Camera
/* Reiner-SCT products */
product REINERSCT CYBERJACK_ECOM 0x0100 e-com cyberJack
/* Roland products */
product ROLAND UA100 0x0000 UA-100 Audio I/F
product ROLAND UM4 0x0002 UM-4 MIDI I/F
product ROLAND SC8850 0x0003 SC-8850 MIDI Synth
product ROLAND U8 0x0004 U-8 Audio I/F
product ROLAND UM2 0x0005 UM-2 MIDI I/F
product ROLAND SC8820 0x0007 SC-8820 MIDI Synth
product ROLAND PC300 0x0008 PC-300 MIDI Keyboard
product ROLAND UM1 0x0009 UM-1 MIDI I/F
product ROLAND SK500 0x000b SK-500 MIDI Keyboard
product ROLAND SCD70 0x000c SC-D70 MIDI Synth
product ROLAND UM880N 0x0014 EDIROL UM-880 MIDI I/F (native)
product ROLAND UM880G 0x0015 EDIROL UM-880 MIDI I/F (generic)
product ROLAND SD90 0x0016 SD-90 MIDI Synth
product ROLAND UM550 0x0023 UM-550 MIDI I/F
product ROLAND SD20 0x0027 SD-20 MIDI Synth
product ROLAND SD80 0x0029 SD-80 MIDI Synth
product ROLAND UA700 0x002b UA-700 Audio I/F
product ROLAND PCR300 0x0033 EDIROL PCR-300 MIDI I/F
product ROLAND UA25EX_AD 0x00e6 EDIROL UA-25EX (Advanced Driver)
product ROLAND UA25EX_CC 0x00e7 EDIROL UA-25EX (Class Compliant)
/* Rockfire products */
product ROCKFIRE GAMEPAD 0x2033 gamepad 203USB
/* RT system products */
product RTSYSTEMS CT29B 0x9e54 FTDI compatible adapter
product RTSYSTEMS SERIAL_VX7 0x9e52 FTDI compatible adapter
/* Sagem products */
product SAGEM USBSERIAL 0x0027 USB-Serial Controller
product SAGEM XG760A 0x004a XG-760A
product SAGEM XG76NA 0x0062 XG-76NA
/* Samsung products */
product SAMSUNG WIS09ABGN 0x2018 WIS09ABGN Wireless LAN adapter
product SAMSUNG ML6060 0x3008 ML-6060 laser printer
product SAMSUNG YP_U2 0x5050 YP-U2 MP3 Player
product SAMSUNG YP_U4 0x5092 YP-U4 MP3 Player
product SAMSUNG I500 0x6601 I500 Palm USB Phone
product SAMSUNG I330 0x8001 I330 phone cradle
product SAMSUNG2 RT2870_1 0x2018 RT2870
/* Samsung Techwin products */
product SAMSUNG_TECHWIN DIGIMAX_410 0x000a Digimax 410
/* SanDisk products */
product SANDISK SDDR05A 0x0001 ImageMate SDDR-05a
product SANDISK SDDR31 0x0002 ImageMate SDDR-31
product SANDISK SDDR05 0x0005 ImageMate SDDR-05
product SANDISK SDDR12 0x0100 ImageMate SDDR-12
product SANDISK SDDR09 0x0200 ImageMate SDDR-09
product SANDISK SDDR75 0x0810 ImageMate SDDR-75
product SANDISK SDCZ2_128 0x7100 Cruzer Mini 128MB
product SANDISK SDCZ2_256 0x7104 Cruzer Mini 256MB
product SANDISK SDCZ4_128 0x7112 Cruzer Micro 128MB
product SANDISK SDCZ4_256 0x7113 Cruzer Micro 256MB
product SANDISK IMAGEMATE_SDDR289 0xb6ba ImageMate SDDR-289
/* Sanwa Electric Instrument Co., Ltd. products */
product SANWA KB_USB2 0x0701 KB-USB2 multimeter cable
/* Sanyo Electric products */
product SANYO SCP4900 0x0701 Sanyo SCP-4900 USB Phone
/* ScanLogic products */
product SCANLOGIC SL11R 0x0002 SL11R IDE Adapter
product SCANLOGIC 336CX 0x0300 Phantom 336CX - C3 scanner
/* Schweitzer Engineering Laboratories products */
product SEL C662 0x0001 C662 Cable
/* Sealevel products */
product SEALEVEL 2101 0x2101 FTDI compatible adapter
product SEALEVEL 2102 0x2102 FTDI compatible adapter
product SEALEVEL 2103 0x2103 FTDI compatible adapter
product SEALEVEL 2104 0x2104 FTDI compatible adapter
product SEALEVEL 2106 0x9020 FTDI compatible adapter
product SEALEVEL 2201_1 0x2211 FTDI compatible adapter
product SEALEVEL 2201_2 0x2221 FTDI compatible adapter
product SEALEVEL 2202_1 0x2212 FTDI compatible adapter
product SEALEVEL 2202_2 0x2222 FTDI compatible adapter
product SEALEVEL 2203_1 0x2213 FTDI compatible adapter
product SEALEVEL 2203_2 0x2223 FTDI compatible adapter
product SEALEVEL 2401_1 0x2411 FTDI compatible adapter
product SEALEVEL 2401_2 0x2421 FTDI compatible adapter
product SEALEVEL 2401_3 0x2431 FTDI compatible adapter
product SEALEVEL 2401_4 0x2441 FTDI compatible adapter
product SEALEVEL 2402_1 0x2412 FTDI compatible adapter
product SEALEVEL 2402_2 0x2422 FTDI compatible adapter
product SEALEVEL 2402_3 0x2432 FTDI compatible adapter
product SEALEVEL 2402_4 0x2442 FTDI compatible adapter
product SEALEVEL 2403_1 0x2413 FTDI compatible adapter
product SEALEVEL 2403_2 0x2423 FTDI compatible adapter
product SEALEVEL 2403_3 0x2433 FTDI compatible adapter
product SEALEVEL 2403_4 0x2443 FTDI compatible adapter
product SEALEVEL 2801_1 0x2811 FTDI compatible adapter
product SEALEVEL 2801_2 0x2821 FTDI compatible adapter
product SEALEVEL 2801_3 0x2831 FTDI compatible adapter
product SEALEVEL 2801_4 0x2841 FTDI compatible adapter
product SEALEVEL 2801_5 0x2851 FTDI compatible adapter
product SEALEVEL 2801_6 0x2861 FTDI compatible adapter
product SEALEVEL 2801_7 0x2871 FTDI compatible adapter
product SEALEVEL 2801_8 0x2881 FTDI compatible adapter
product SEALEVEL 2802_1 0x2812 FTDI compatible adapter
product SEALEVEL 2802_2 0x2822 FTDI compatible adapter
product SEALEVEL 2802_3 0x2832 FTDI compatible adapter
product SEALEVEL 2802_4 0x2842 FTDI compatible adapter
product SEALEVEL 2802_5 0x2852 FTDI compatible adapter
product SEALEVEL 2802_6 0x2862 FTDI compatible adapter
product SEALEVEL 2802_7 0x2872 FTDI compatible adapter
product SEALEVEL 2802_8 0x2882 FTDI compatible adapter
product SEALEVEL 2803_1 0x2813 FTDI compatible adapter
product SEALEVEL 2803_2 0x2823 FTDI compatible adapter
product SEALEVEL 2803_3 0x2833 FTDI compatible adapter
product SEALEVEL 2803_4 0x2843 FTDI compatible adapter
product SEALEVEL 2803_5 0x2853 FTDI compatible adapter
product SEALEVEL 2803_6 0x2863 FTDI compatible adapter
product SEALEVEL 2803_7 0x2873 FTDI compatible adapter
product SEALEVEL 2803_8 0x2883 FTDI compatible adapter
/* Senao products */
product SENAO EUB1200AC 0x0100 EnGenius EUB1200AC
product SENAO RT2870_3 0x0605 RT2870
product SENAO RT2870_4 0x0615 RT2870
product SENAO NUB8301 0x2000 NUB-8301
product SENAO RT2870_1 0x9701 RT2870
product SENAO RT2870_2 0x9702 RT2870
product SENAO RT3070 0x9703 RT3070
product SENAO RT3071 0x9705 RT3071
product SENAO RT3072_1 0x9706 RT3072
product SENAO RT3072_2 0x9707 RT3072
product SENAO RT3072_3 0x9708 RT3072
product SENAO RT3072_4 0x9709 RT3072
product SENAO RT3072_5 0x9801 RT3072
product SENAO RTL8192SU_1 0x9603 RTL8192SU
product SENAO RTL8192SU_2 0x9605 RTL8192SU
/* ShanTou products */
product SHANTOU ST268 0x0268 ST268
product SHANTOU DM9601 0x9601 DM 9601
product SHANTOU ADM8515 0x8515 ADM8515
/* Shark products */
product SHARK PA 0x0400 Pocket Adapter
/* Sharp products */
product SHARP SL5500 0x8004 Zaurus SL-5500 PDA
product SHARP SLA300 0x8005 Zaurus SL-A300 PDA
product SHARP SL5600 0x8006 Zaurus SL-5600 PDA
product SHARP SLC700 0x8007 Zaurus SL-C700 PDA
product SHARP SLC750 0x9031 Zaurus SL-C750 PDA
product SHARP WZERO3ES 0x9123 W-ZERO3 ES Smartphone
product SHARP WZERO3ADES 0x91ac Advanced W-ZERO3 ES Smartphone
product SHARP WILLCOM03 0x9242 WILLCOM03
/* Shuttle Technology products */
product SHUTTLE EUSB 0x0001 E-USB Bridge
product SHUTTLE EUSCSI 0x0002 eUSCSI Bridge
product SHUTTLE SDDR09 0x0003 ImageMate SDDR09
product SHUTTLE EUSBCFSM 0x0005 eUSB SmartMedia / CompactFlash Adapter
product SHUTTLE ZIOMMC 0x0006 eUSB MultiMediaCard Adapter
product SHUTTLE HIFD 0x0007 Sony HiFD
product SHUTTLE EUSBATAPI 0x0009 eUSB ATA/ATAPI Adapter
product SHUTTLE CF 0x000a eUSB CompactFlash Adapter
product SHUTTLE EUSCSI_B 0x000b eUSCSI Bridge
product SHUTTLE EUSCSI_C 0x000c eUSCSI Bridge
product SHUTTLE CDRW 0x0101 CD-RW Device
product SHUTTLE EUSBORCA 0x0325 eUSB ORCA Quad Reader
/* Siemens products */
product SIEMENS SPEEDSTREAM 0x1001 SpeedStream
product SIEMENS SPEEDSTREAM22 0x1022 SpeedStream 1022
product SIEMENS2 WLL013 0x001b WLL013
product SIEMENS2 ES75 0x0034 GSM module MC35
product SIEMENS2 WL54G 0x3c06 54g USB Network Adapter
product SIEMENS3 SX1 0x0001 SX1
product SIEMENS3 X65 0x0003 X65
product SIEMENS3 X75 0x0004 X75
product SIEMENS3 EF81 0x0005 EF81
/* Sierra Wireless products */
product SIERRA EM5625 0x0017 EM5625
product SIERRA MC5720_2 0x0018 MC5720
product SIERRA MC5725 0x0020 MC5725
product SIERRA AIRCARD580 0x0112 Sierra Wireless AirCard 580
product SIERRA AIRCARD595 0x0019 Sierra Wireless AirCard 595
product SIERRA AC595U 0x0120 Sierra Wireless AirCard 595U
product SIERRA AC597E 0x0021 Sierra Wireless AirCard 597E
product SIERRA EM5725 0x0022 EM5725
product SIERRA C597 0x0023 Sierra Wireless Compass 597
product SIERRA MC5727 0x0024 MC5727
product SIERRA T598 0x0025 T598
product SIERRA T11 0x0026 T11
product SIERRA AC402 0x0027 AC402
product SIERRA MC5728 0x0028 MC5728
product SIERRA E0029 0x0029 E0029
product SIERRA MC5720 0x0218 MC5720 Wireless Modem
product SIERRA MINI5725 0x0220 Sierra Wireless miniPCI 5725
product SIERRA MC5727_2 0x0224 MC5727
product SIERRA MC8755_2 0x6802 MC8755
product SIERRA MC8765 0x6803 MC8765
product SIERRA MC8755 0x6804 MC8755
product SIERRA MC8765_2 0x6805 MC8765
product SIERRA MC8755_4 0x6808 MC8755
product SIERRA MC8765_3 0x6809 MC8765
product SIERRA AC875U 0x6812 AC875U HSDPA USB Modem
product SIERRA MC8755_3 0x6813 MC8755 HSDPA
product SIERRA MC8775_2 0x6815 MC8775
product SIERRA MC8775 0x6816 MC8775
product SIERRA AC875 0x6820 Sierra Wireless AirCard 875
product SIERRA AC875U_2 0x6821 AC875U
product SIERRA AC875E 0x6822 AC875E
product SIERRA MC8780 0x6832 MC8780
product SIERRA MC8781 0x6833 MC8781
product SIERRA MC8780_2 0x6834 MC8780
product SIERRA MC8781_2 0x6835 MC8781
product SIERRA MC8780_3 0x6838 MC8780
product SIERRA MC8781_3 0x6839 MC8781
product SIERRA MC8785 0x683A MC8785
product SIERRA MC8785_2 0x683B MC8785
product SIERRA MC8790 0x683C MC8790
product SIERRA MC8791 0x683D MC8791
product SIERRA MC8792 0x683E MC8792
product SIERRA AC880 0x6850 Sierra Wireless AirCard 880
product SIERRA AC881 0x6851 Sierra Wireless AirCard 881
product SIERRA AC880E 0x6852 Sierra Wireless AirCard 880E
product SIERRA AC881E 0x6853 Sierra Wireless AirCard 881E
product SIERRA AC880U 0x6855 Sierra Wireless AirCard 880U
product SIERRA AC881U 0x6856 Sierra Wireless AirCard 881U
product SIERRA AC885E 0x6859 AC885E
product SIERRA AC885E_2 0x685A AC885E
product SIERRA AC885U 0x6880 Sierra Wireless AirCard 885U
product SIERRA C888 0x6890 C888
product SIERRA C22 0x6891 C22
product SIERRA E6892 0x6892 E6892
product SIERRA E6893 0x6893 E6893
product SIERRA MC8700 0x68A3 MC8700
product SIERRA MC7354 0x68C0 MC7354
product SIERRA MC7355 0x9041 MC7355
product SIERRA MC7430 0x9071 Sierra Wireless MC7430 Qualcomm Snapdragon X7 LTE-A
product SIERRA AC313U 0x68aa Sierra Wireless AirCard 313U
product SIERRA TRUINSTALL 0x0fff Aircard Tru Installer
/* Sigmatel products */
product SIGMATEL WBT_3052 0x4200 WBT-3052 IrDA/USB Bridge
product SIGMATEL I_BEAD100 0x8008 i-Bead 100 MP3 Player
/* SIIG products */
/* Also: Omnidirectional Control Technology products */
product SIIG DIGIFILMREADER 0x0004 DigiFilm-Combo Reader
product SIIG WINTERREADER 0x0330 WINTERREADER Reader
product SIIG2 DK201 0x0103 FTDI compatible adapter
product SIIG2 USBTOETHER 0x0109 USB TO Ethernet
product SIIG2 US2308 0x0421 Serial
/* Silicom products */
product SILICOM U2E 0x0001 U2E
product SILICOM GPE 0x0002 Psion Gold Port Ethernet
/* SI Labs */
product SILABS VSTABI 0x0f91 VStabi Controller
product SILABS ARKHAM_DS101_M 0x1101 Arkham DS101 Monitor
product SILABS ARKHAM_DS101_A 0x1601 Arkham DS101 Adapter
product SILABS BSM7DUSB 0x800a SPORTident BSM7-D USB
product SILABS POLOLU 0x803b Pololu Serial
product SILABS CYGNAL_DEBUG 0x8044 Cygnal Debug Adapter
product SILABS SB_PARAMOUNT_ME 0x8043 Software Bisque Paramount ME
product SILABS SAEL 0x8053 SA-EL USB
product SILABS GSM2228 0x8054 Enfora GSM2228 USB
product SILABS ARGUSISP 0x8066 Argussoft ISP
product SILABS IMS_USB_RS422 0x806f IMS USB-RS422
product SILABS CRUMB128 0x807a Crumb128 board
product SILABS OPTRIS_MSPRO 0x80c4 Optris MSpro LT Thermometer
product SILABS DEGREE 0x80ca Degree Controls Inc
product SILABS TRACIENT 0x80dd Tracient RFID
product SILABS TRAQMATE 0x80ed Track Systems Traqmate
product SILABS SUUNTO 0x80f6 Suunto Sports Instrument
product SILABS ARYGON_MIFARE 0x8115 Arygon Mifare RFID reader
product SILABS BURNSIDE 0x813d Burnside Telecom Deskmobile
product SILABS TAMSMASTER 0x813f Tams Master Easy Control
product SILABS WMRBATT 0x814a WMR RIGblaster Plug&Play
product SILABS WMRRIGBLASTER 0x814a WMR RIGblaster Plug&Play
product SILABS WMRRIGTALK 0x814b WMR RIGtalk RT1
product SILABS B_G_H3000 0x8156 B&G H3000 Data Cable
product SILABS HELICOM 0x815e Helicomm IP-Link 1220-DVM
product SILABS HAMLINKUSB 0x815f Timewave HamLinkUSB
product SILABS AVIT_USB_TTL 0x818b AVIT Research USB-TTL
product SILABS MJS_TOSLINK 0x819f MJS USB-TOSLINK
product SILABS WAVIT 0x81a6 ThinkOptics WavIt
product SILABS MULTIPLEX_RC 0x81a9 Multiplex RC adapter
product SILABS MSD_DASHHAWK 0x81ac MSD DashHawk
product SILABS INSYS_MODEM 0x81ad INSYS Modem
product SILABS LIPOWSKY_JTAG 0x81c8 Lipowsky Baby-JTAG
product SILABS LIPOWSKY_LIN 0x81e2 Lipowsky Baby-LIN
product SILABS AEROCOMM 0x81e7 Aerocomm Radio
product SILABS ZEPHYR_BIO 0x81e8 Zephyr Bioharness
product SILABS EMS_C1007 0x81f2 EMS C1007 HF RFID controller
product SILABS LIPOWSKY_HARP 0x8218 Lipowsky HARP-1
product SILABS C2_EDGE_MODEM 0x822b Commander 2 EDGE(GSM) Modem
product SILABS CYGNAL_GPS 0x826b Cygnal Fasttrax GPS
product SILABS TELEGESIS_ETRX2 0x8293 Telegesis ETRX2USB
product SILABS PROCYON_AVS 0x82f9 Procyon AVS
product SILABS MC35PU 0x8341 MC35pu
product SILABS CYGNAL 0x8382 Cygnal
product SILABS AMBER_AMB2560 0x83a8 Amber Wireless AMB2560
product SILABS DEKTEK_DTAPLUS 0x83d8 DekTec DTA Plus VHF/UHF Booster
product SILABS KYOCERA_GPS 0x8411 Kyocera GPS
product SILABS IRZ_SG10 0x8418 IRZ SG-10 GSM/GPRS Modem
product SILABS BEI_VCP 0x846e BEI USB Sensor (VCP)
product SILABS BALLUFF_RFID 0x8477 Balluff RFID reader
product SILABS AC_SERV_IBUS 0x85ea AC-Services IBUS Interface
product SILABS AC_SERV_CIS 0x85eb AC-Services CIS-IBUS
product SILABS V_PREON32 0x85f8 Virtenio Preon32
product SILABS AC_SERV_CAN 0x8664 AC-Services CAN Interface
product SILABS AC_SERV_OBD 0x8665 AC-Services OBD Interface
product SILABS MMB_ZIGBEE 0x88a4 MMB Networks ZigBee
product SILABS INGENI_ZIGBEE 0x88a5 Planet Innovation Ingeni ZigBee
product SILABS HUBZ 0x8a2a HubZ dual ZigBee and Z-Wave
product SILABS CP2102 0xea60 SILABS USB UART
product SILABS CP210X_2 0xea61 CP210x Serial
product SILABS CP210X_3 0xea70 CP210x Serial
product SILABS CP210X_4 0xea80 CP210x Serial
product SILABS INFINITY_MIC 0xea71 Infinity GPS-MIC-1 Radio Monophone
product SILABS USBSCOPE50 0xf001 USBscope50
product SILABS USBWAVE12 0xf002 USBwave12
product SILABS USBPULSE100 0xf003 USBpulse100
product SILABS USBCOUNT50 0xf004 USBcount50
product SILABS2 DCU11CLONE 0xaa26 DCU-11 clone
product SILABS3 GPRS_MODEM 0xea61 GPRS Modem
product SILABS4 100EU_MODEM 0xea61 GPRS Modem 100EU
/* Silicon Portals Inc. */
product SILICONPORTALS YAPPH_NF 0x0200 YAP Phone (no firmware)
product SILICONPORTALS YAPPHONE 0x0201 YAP Phone
/* Sirius Technologies products */
product SIRIUS ROADSTER 0x0001 NetComm Roadster II 56 USB
/* Sitecom products */
product SITECOM LN029 0x182d USB 2.0 Ethernet
product SITECOM SERIAL 0x2068 USB to serial cable (v2)
product SITECOM2 WL022 0x182d WL-022
/* Sitecom Europe products */
product SITECOMEU RT2870_1 0x0017 RT2870
product SITECOMEU WL168V1 0x000d WL-168 v1
product SITECOMEU LN030 0x0021 MCS7830
product SITECOMEU WL168V4 0x0028 WL-168 v4
product SITECOMEU RT2870_2 0x002b RT2870
product SITECOMEU RT2870_3 0x002c RT2870
product SITECOMEU RT2870_4 0x002d RT2870
product SITECOMEU RT2770 0x0039 RT2770
product SITECOMEU RT3070_2 0x003b RT3070
product SITECOMEU RT3070_3 0x003c RT3070
product SITECOMEU RT3070_4 0x003d RT3070
product SITECOMEU RT3070 0x003e RT3070
product SITECOMEU WL608 0x003f WL-608
product SITECOMEU RT3071 0x0040 RT3071
product SITECOMEU RT3072_1 0x0041 RT3072
product SITECOMEU RT3072_2 0x0042 RT3072
product SITECOMEU WL353 0x0045 WL-353
product SITECOMEU RT3072_3 0x0047 RT3072
product SITECOMEU RT3072_4 0x0048 RT3072
product SITECOMEU RT3072_5 0x004a RT3072
product SITECOMEU WL349V1 0x004b WL-349 v1
product SITECOMEU RT3072_6 0x004d RT3072
product SITECOMEU WLA1000 0x005b WLA-1000
product SITECOMEU RTL8188CU_1 0x0052 RTL8188CU
product SITECOMEU RTL8188CU_2 0x005c RTL8188CU
product SITECOMEU RTL8192CU 0x0061 RTL8192CU
product SITECOMEU LN032 0x0072 LN-032
product SITECOMEU WLA7100 0x0074 WLA-7100
product SITECOMEU LN031 0x0056 LN-031
product SITECOMEU LN028 0x061c LN-028
product SITECOMEU WL113 0x9071 WL-113
product SITECOMEU ZD1211B 0x9075 ZD1211B
product SITECOMEU WL172 0x90ac WL-172
product SITECOMEU WL113R2 0x9712 WL-113 rev 2
/* Skanhex Technology products */
product SKANHEX MD_7425 0x410a MD 7425 Camera
product SKANHEX SX_520Z 0x5200 SX 520z Camera
/* Smart Technologies products */
product SMART PL2303 0x2303 Serial adapter
/* Smart Modular Technologies products */
product SMART2 G2MEMKEY 0x1700 G2 Memory Key
/* SmartBridges products */
product SMARTBRIDGES SMARTLINK 0x0001 SmartLink USB Ethernet
product SMARTBRIDGES SMARTNIC 0x0003 smartNIC 2 PnP Ethernet
/* SMC products */
product SMC 2102USB 0x0100 10Mbps Ethernet
product SMC 2202USB 0x0200 10/100 Ethernet
product SMC 2206USB 0x0201 EZ Connect USB Ethernet
product SMC 2862WG 0xee13 EZ Connect Wireless Adapter
product SMC2 2020HUB 0x2020 USB Hub
product SMC2 2514HUB 0x2514 USB Hub
product SMC3 2662WUSB 0xa002 2662W-AR Wireless
product SMC2 LAN7800_ETH 0x7800 USB/Ethernet
product SMC2 LAN7801_ETH 0x7801 USB/Ethernet
product SMC2 LAN7850_ETH 0x7850 USB/Ethernet
product SMC2 LAN9500_ETH 0x9500 USB/Ethernet
product SMC2 LAN9505_ETH 0x9505 USB/Ethernet
product SMC2 LAN9530_ETH 0x9530 USB/Ethernet
product SMC2 LAN9730_ETH 0x9730 USB/Ethernet
product SMC2 LAN9500_SAL10 0x9900 USB/Ethernet
product SMC2 LAN9505_SAL10 0x9901 USB/Ethernet
product SMC2 LAN9500A_SAL10 0x9902 USB/Ethernet
product SMC2 LAN9505A_SAL10 0x9903 USB/Ethernet
product SMC2 LAN9514_SAL10 0x9904 USB/Ethernet
product SMC2 LAN9500A_HAL 0x9905 USB/Ethernet
product SMC2 LAN9505A_HAL 0x9906 USB/Ethernet
product SMC2 LAN9500_ETH_2 0x9907 USB/Ethernet
product SMC2 LAN9500A_ETH_2 0x9908 USB/Ethernet
product SMC2 LAN9514_ETH_2 0x9909 USB/Ethernet
product SMC2 LAN9500A_ETH 0x9e00 USB/Ethernet
product SMC2 LAN9505A_ETH 0x9e01 USB/Ethernet
product SMC2 LAN89530_ETH 0x9e08 USB/Ethernet
product SMC2 LAN9514_ETH 0xec00 USB/Ethernet
/* SOHOware products */
product SOHOWARE NUB100 0x9100 10/100 USB Ethernet
product SOHOWARE NUB110 0x9110 10/100 USB Ethernet
/* SOLID YEAR products */
product SOLIDYEAR KEYBOARD 0x2101 Solid Year USB keyboard
/* SONY products */
product SONY DSC 0x0010 DSC cameras
product SONY MS_NW_MS7 0x0025 Memorystick NW-MS7
product SONY PORTABLE_HDD_V2 0x002b Portable USB Harddrive V2
product SONY MSACUS1 0x002d Memorystick MSAC-US1
product SONY HANDYCAM 0x002e Handycam
product SONY MSC 0x0032 MSC memory stick slot
product SONY CLIE_35 0x0038 Sony Clie v3.5
product SONY MS_PEG_N760C 0x0058 PEG N760c Memorystick
product SONY CLIE_40 0x0066 Sony Clie v4.0
product SONY MS_MSC_U03 0x0069 Memorystick MSC-U03
product SONY CLIE_40_MS 0x006d Sony Clie v4.0 Memory Stick slot
product SONY CLIE_S360 0x0095 Sony Clie s360
product SONY CLIE_41_MS 0x0099 Sony Clie v4.1 Memory Stick slot
product SONY CLIE_41 0x009a Sony Clie v4.1
product SONY CLIE_NX60 0x00da Sony Clie nx60
product SONY CLIE_TH55 0x0144 Sony Clie th55
product SONY CLIE_TJ37 0x0169 Sony Clie tj37
product SONY RF_RECEIVER 0x01db Sony RF mouse/kbd Receiver VGP-WRC1
product SONY QN3 0x0437 Sony QN3 CMD-Jxx phone cable
/* Sony Ericsson products */
product SONYERICSSON DCU10 0x0528 DCU-10 Phone Data Cable
product SONYERICSSON DATAPILOT 0x2003 Datapilot Phone Cable
/* SOURCENEXT products */
product SOURCENEXT KEIKAI8 0x039f KeikaiDenwa 8
product SOURCENEXT KEIKAI8_CHG 0x012e KeikaiDenwa 8 with charger
/* SparkLAN products */
product SPARKLAN RT2573 0x0004 RT2573
product SPARKLAN RT2870_1 0x0006 RT2870
product SPARKLAN RT3070 0x0010 RT3070
/* Soundgraph products */
product SOUNDGRAPH IMON_VFD 0x0044 Antec Veris Elite VFD Panel, Knob, and Remote
product SOUNDGRAPH SSTONE_LC16 0xffdc Silverstone LC16 VFD Panel, Knob, and Remote
/* Speed Dragon Multimedia products */
product SPEEDDRAGON MS3303H 0x110b MS3303H Serial
/* Sphairon Access Systems GmbH products */
product SPHAIRON UB801R 0x0110 UB801R
/* Stelera Wireless products */
product STELERA ZEROCD 0x1000 Zerocd Installer
product STELERA C105 0x1002 Stelera/Bandrich C105 USB
product STELERA E1003 0x1003 3G modem
product STELERA E1004 0x1004 3G modem
product STELERA E1005 0x1005 3G modem
product STELERA E1006 0x1006 3G modem
product STELERA E1007 0x1007 3G modem
product STELERA E1008 0x1008 3G modem
product STELERA E1009 0x1009 3G modem
product STELERA E100A 0x100a 3G modem
product STELERA E100B 0x100b 3G modem
product STELERA E100C 0x100c 3G modem
product STELERA E100D 0x100d 3G modem
product STELERA E100E 0x100e 3G modem
product STELERA E100F 0x100f 3G modem
product STELERA E1010 0x1010 3G modem
product STELERA E1011 0x1011 3G modem
product STELERA E1012 0x1012 3G modem
/* STMicroelectronics products */
product STMICRO BIOCPU 0x2016 Biometric Coprocessor
product STMICRO COMMUNICATOR 0x7554 USB Communicator
product STMICRO ST72682 0xfada USB 2.0 Flash drive controller
/* STSN products */
product STSN STSN0001 0x0001 Internet Access Device
/* SUN Corporation products */
product SUNTAC DS96L 0x0003 SUNTAC U-Cable type D2
product SUNTAC PS64P1 0x0005 SUNTAC U-Cable type P1
product SUNTAC VS10U 0x0009 SUNTAC Slipper U
product SUNTAC IS96U 0x000a SUNTAC Ir-Trinity
product SUNTAC AS64LX 0x000b SUNTAC U-Cable type A3
product SUNTAC AS144L4 0x0011 SUNTAC U-Cable type A4
/* Sun Microsystems products */
product SUN KEYBOARD_TYPE_6 0x0005 Type 6 USB keyboard
product SUN KEYBOARD_TYPE_7 0x00a2 Type 7 USB keyboard
/* XXX The above is a North American PC style keyboard possibly */
product SUN MOUSE 0x0100 Type 6 USB mouse
product SUN KBD_HUB 0x100e Kbd Hub
/* Sunplus Innovation Technology Inc. products */
product SUNPLUS USBMOUSE 0x0007 USB Optical Mouse
/* Super Top products */
product SUPERTOP IDE 0x6600 USB-IDE
product SUPERTOP FLASHDRIVE 0x121c extrememory Snippy
/* Syntech products */
product SYNTECH CPT8001C 0x0001 CPT-8001C Barcode scanner
product SYNTECH CYPHERLAB100 0x1000 CipherLab USB Barcode Scanner
/* Teclast products */
product TECLAST TLC300 0x3203 USB Media Player
/* Testo products */
product TESTO USB_INTERFACE 0x0001 FTDI compatible adapter
/* TexTech products */
product TEXTECH DUMMY 0x0000 Dummy product
product TEXTECH U2M_1 0x0101 Textech USB MIDI cable
product TEXTECH U2M_2 0x1806 Textech USB MIDI cable
/* The Mobility Lab products */
product TML USB_SERIAL 0x0064 FTDI compatible adapter
/* Thurlby Thandar Instrument products */
product TTI QL355P 0x03e8 FTDI compatible adapter
/* Supra products */
product DIAMOND2 SUPRAEXPRESS56K 0x07da Supra Express 56K modem
product DIAMOND2 SUPRA2890 0x0b4a SupraMax 2890 56K Modem
product DIAMOND2 RIO600USB 0x5001 Rio 600 USB
product DIAMOND2 RIO800USB 0x5002 Rio 800 USB
/* Surecom Technology products */
product SURECOM EP9001G2A 0x11f2 EP-9001-G rev 2A
product SURECOM RT2570 0x11f3 RT2570
product SURECOM RT2573 0x31f3 RT2573
/* Sweex products */
product SWEEX ZD1211 0x1809 ZD1211
product SWEEX2 LW153 0x0153 LW153
product SWEEX2 LW154 0x0154 LW154
product SWEEX2 LW303 0x0302 LW303
product SWEEX2 LW313 0x0313 LW313
/* System TALKS, Inc. */
product SYSTEMTALKS SGCX2UL 0x1920 SGC-X2UL
/* Tapwave products */
product TAPWAVE ZODIAC 0x0100 Zodiac
/* Taugagreining products */
product TAUGA CAMERAMATE 0x0005 CameraMate (DPCM_USB)
/* TCTMobile products */
product TCTMOBILE X060S 0x0000 X060S 3G modem
product TCTMOBILE X080S 0xf000 X080S 3G modem
/* TDK products */
product TDK UPA9664 0x0115 USB-PDC Adapter UPA9664
product TDK UCA1464 0x0116 USB-cdmaOne Adapter UCA1464
product TDK UHA6400 0x0117 USB-PHS Adapter UHA6400
product TDK UPA6400 0x0118 USB-PHS Adapter UPA6400
product TDK BT_DONGLE 0x0309 Bluetooth USB dongle
/* TEAC products */
product TEAC FD05PUB 0x0000 FD-05PUB floppy
/* Tekram Technology products */
product TEKRAM QUICKWLAN 0x1630 QuickWLAN
product TEKRAM ZD1211_1 0x5630 ZD1211
product TEKRAM ZD1211_2 0x6630 ZD1211
/* Telex Communications products */
product TELEX MIC1 0x0001 Enhanced USB Microphone
/* Telit products */
product TELIT UC864E 0x1003 UC864E 3G modem
product TELIT UC864G 0x1004 UC864G 3G modem
/* Ten X Technology, Inc. */
product TENX UAUDIO0 0xf211 USB audio headset
/* ThingM products */
product THINGM BLINK1 0x01ed USB notification light
/* Texas Instruments products */
product TI UTUSB41 0x1446 UT-USB41 hub
product TI TUSB2046 0x2046 TUSB2046 hub
product TI USB3410 0x3410 TI USB 3410 Modem
product TI USB5052 0x5052 TI USB 5052 Modem
product TI FRI2 0x5053 TI Fish River Island II
product TI USB5052_EEPROM 0x505a TI USB 5052 Modem w/EEPROM
product TI USB5052_FW 0x505f TI USB 5052 Modem w/Firmware running
product TI USB5152 0x5152 TI USB 5152 Modem
product TI EZ430 0xf430 TI EZ430 development tool
/* Thrustmaster products */
product THRUST FUSION_PAD 0xa0a3 Fusion Digital Gamepad
/* TLayTech products */
product TLAYTECH TEU800 0x1682 TEU800 3G modem
/* Topre Corporation products */
product TOPRE HHKB 0x0100 HHKB Professional
/* Toshiba Corporation products */
product TOSHIBA POCKETPC_E740 0x0706 PocketPC e740
product TOSHIBA RT3070 0x0a07 RT3070
product TOSHIBA G450 0x0d45 G450 modem
product TOSHIBA HSDPA 0x1302 G450 modem
product TOSHIBA TRANSMEMORY 0x6545 USB ThumbDrive
/* TP-Link products */
product TPLINK T4U 0x0101 Archer T4U
product TPLINK WN821NV5 0x0107 TL-WN821N v5
product TPLINK WN822NV4 0x0108 TL-WN822N v4
product TPLINK WN823NV2 0x0109 TL-WN823N v2
product TPLINK WN722NV2 0x010c TL-WN722N v2
product TPLINK T4UV2 0x010d Archer T4U ver 2
product TPLINK T4UHV1 0x0103 Archer T4UH ver 1
product TPLINK T4UHV2 0x010e Archer T4UH ver 2
product TPLINK RTL8153 0x0601 RTL8153 USB 10/100/1000 LAN
/* Trek Technology products */
product TREK THUMBDRIVE 0x1111 ThumbDrive
product TREK MEMKEY 0x8888 IBM USB Memory Key
product TREK THUMBDRIVE_8MB 0x9988 ThumbDrive_8MB
/* TRENDnet products */
product TRENDNET RTL8192CU 0x624d RTL8192CU
product TRENDNET TEW646UBH 0x646b TEW-646UBH
product TRENDNET RTL8188CU 0x648b RTL8188CU
product TRENDNET TEW805UB 0x805b TEW-805UB
/* Tripp-Lite products */
product TRIPPLITE U209 0x2008 Serial
product TRIPPLITE2 OMNIVS1000 0x0001 OMNIVS1000, SMART550USB
product TRIPPLITE2 AVR550U 0x1003 AVR550U
product TRIPPLITE2 AVR750U 0x1007 AVR750U
product TRIPPLITE2 ECO550UPS 0x1008 ECO550UPS
product TRIPPLITE2 T750_INTL 0x1f06 T750 INTL
product TRIPPLITE2 RT_2200_INTL 0x1f0a R/T 2200 INTL
product TRIPPLITE2 OMNI1000LCD 0x2005 OMNI1000LCD
product TRIPPLITE2 OMNI900LCD 0x2007 OMNI900LCD
product TRIPPLITE2 SMART_2200RMXL2U 0x3012 smart2200RMXL2U
product TRIPPLITE2 UPS_3014 0x3014 Unknown UPS
product TRIPPLITE2 SU1500RTXL2UA 0x4001 SmartOnline SU1500RTXL2UA
product TRIPPLITE2 SU6000RT4U 0x4002 SmartOnline SU6000RT4U
product TRIPPLITE2 SU1500RTXL2UA_2 0x4003 SmartOnline SU1500RTXL2UA
/* Trumpion products */
product TRUMPION T33520 0x1001 T33520 USB Flash Card Controller
product TRUMPION C3310 0x1100 Comotron C3310 MP3 player
product TRUMPION MP3 0x1200 MP3 player
/* TwinMOS */
product TWINMOS G240 0xa006 G240
product TWINMOS MDIV 0x1325 Memory Disk IV
/* Ubiquam products */
product UBIQUAM UALL 0x3100 CDMA 1xRTT USB Modem (U-100/105/200/300/520)
/* Ultima products */
product ULTIMA 1200UBPLUS 0x4002 1200 UB Plus scanner
/* UMAX products */
product UMAX ASTRA1236U 0x0002 Astra 1236U Scanner
product UMAX ASTRA1220U 0x0010 Astra 1220U Scanner
product UMAX ASTRA2000U 0x0030 Astra 2000U Scanner
product UMAX ASTRA2100U 0x0130 Astra 2100U Scanner
product UMAX ASTRA2200U 0x0230 Astra 2200U Scanner
product UMAX ASTRA3400 0x0060 Astra 3400 Scanner
/* U-MEDIA Communications products */
product UMEDIA TEW444UBEU 0x3006 TEW-444UB EU
product UMEDIA TEW444UBEU_NF 0x3007 TEW-444UB EU (no firmware)
product UMEDIA TEW429UB_A 0x300a TEW-429UB_A
product UMEDIA TEW429UB 0x300b TEW-429UB
product UMEDIA TEW429UBC1 0x300d TEW-429UB C1
product UMEDIA RT2870_1 0x300e RT2870
product UMEDIA ALL0298V2 0x3204 ALL0298 v2
product UMEDIA AR5523_2 0x3205 AR5523
product UMEDIA AR5523_2_NF 0x3206 AR5523 (no firmware)
/* Universal Access products */
product UNIACCESS PANACHE 0x0101 Panache Surf USB ISDN Adapter
/* Unknown products */
product UNKNOWN4 NF_RIC 0x0001 FTDI compatible adapter
/* USI products */
product USI MC60 0x10c5 MC60 Serial
/* U.S. Robotics products */
product USR USR5422 0x0118 USR5422 WLAN
product USR USR5423 0x0121 USR5423 WLAN
/* VIA Technologies products */
product VIA USB2IDEBRIDGE 0x6204 USB 2.0 IDE Bridge
/* VIA Labs */
product VIALABS USB30SATABRIDGE 0x0700 USB 3.0 SATA Bridge
/* Vaisala products */
product VAISALA CABLE 0x0200 USB Interface cable
/* Vertex products */
product VERTEX VW110L 0x0100 Vertex VW110L modem
/* VidzMedia products */
product VIDZMEDIA MONSTERTV 0x4fb1 MonsterTV P2H
/* Vision products */
product VISION VC6452V002 0x0002 CPiA Camera
/* Visioneer products */
product VISIONEER 7600 0x0211 OneTouch 7600
product VISIONEER 5300 0x0221 OneTouch 5300
product VISIONEER 3000 0x0224 Scanport 3000
product VISIONEER 6100 0x0231 OneTouch 6100
product VISIONEER 6200 0x0311 OneTouch 6200
product VISIONEER 8100 0x0321 OneTouch 8100
product VISIONEER 8600 0x0331 OneTouch 8600
/* Vivitar products */
product VIVITAR 35XX 0x0003 Vivicam 35xx
/* VTech products */
product VTECH RT2570 0x3012 RT2570
product VTECH ZD1211B 0x3014 ZD1211B
/* Wacom products */
product WACOM CT0405U 0x0000 CT-0405-U Tablet
product WACOM GRAPHIRE 0x0010 Graphire
product WACOM GRAPHIRE3_4X5 0x0013 Graphire 3 4x5
product WACOM INTUOSA5 0x0021 Intuos A5
product WACOM GD0912U 0x0022 Intuos 9x12 Graphics Tablet
/* WAGO Kontakttechnik GmbH products */
product WAGO SERVICECABLE 0x07a6 USB Service Cable 750-923
/* WaveSense products */
product WAVESENSE JAZZ 0xaaaa Jazz blood glucose meter
/* WCH products */
product WCH CH341SER 0x5523 CH341/CH340 USB-Serial Bridge
product WCH2 DUMMY 0x0000 Dummy product
product WCH2 CH341SER_2 0x5523 CH341/CH340 USB-Serial Bridge
product WCH2 CH341SER 0x7523 CH341/CH340 USB-Serial Bridge
product WCH2 U2M 0x752d CH345 USB2.0-MIDI
/* West Mountain Radio products */
product WESTMOUNTAIN RIGBLASTER_ADVANTAGE 0x0003 RIGblaster Advantage
/* Western Digital products */
product WESTERN COMBO 0x0200 Firewire USB Combo
product WESTERN EXTHDD 0x0400 External HDD
product WESTERN HUB 0x0500 USB HUB
product WESTERN MYBOOK 0x0901 MyBook External HDD
product WESTERN MYPASSPORT_00 0x0704 MyPassport External HDD
product WESTERN MYPASSPORT_11 0x0741 MyPassport External HDD
product WESTERN MYPASSPORT_01 0x0746 MyPassport External HDD
product WESTERN MYPASSPORT_02 0x0748 MyPassport External HDD
product WESTERN MYPASSPORT_03 0x074A MyPassport External HDD
product WESTERN MYPASSPORT_04 0x074C MyPassport External HDD
product WESTERN MYPASSPORT_05 0x074E MyPassport External HDD
product WESTERN MYPASSPORT_06 0x07A6 MyPassport External HDD
product WESTERN MYPASSPORT_07 0x07A8 MyPassport External HDD
product WESTERN MYPASSPORT_08 0x07AA MyPassport External HDD
product WESTERN MYPASSPORT_09 0x07AC MyPassport External HDD
product WESTERN MYPASSPORT_10 0x07AE MyPassport External HDD
product WESTERN MYPASSPORTES_00 0x070A MyPassport Essential External HDD
product WESTERN MYPASSPORTES_01 0x071A MyPassport Essential External HDD
product WESTERN MYPASSPORTES_02 0x0730 MyPassport Essential External HDD
product WESTERN MYPASSPORTES_03 0x0732 MyPassport Essential External HDD
product WESTERN MYPASSPORTES_04 0x0740 MyPassport Essential External HDD
product WESTERN MYPASSPORTES_05 0x0742 MyPassport Essential External HDD
product WESTERN MYPASSPORTES_06 0x0750 MyPassport Essential External HDD
product WESTERN MYPASSPORTES_07 0x0752 MyPassport Essential External HDD
product WESTERN MYPASSPORTES_08 0x07A0 MyPassport Essential External HDD
product WESTERN MYPASSPORTES_09 0x07A2 MyPassport Essential External HDD
/* WeTelecom products */
product WETELECOM WM_D200 0x6801 WM-D200
/* WIENER Plein & Baus GmbH products */
product WIENERPLEINBAUS PL512 0x0010 PL512 PSU
product WIENERPLEINBAUS RCM 0x0011 RCM Remote Control
product WIENERPLEINBAUS MPOD 0x0012 MPOD PSU
product WIENERPLEINBAUS CML 0x0015 CML Data Logger
/* Winbond Electronics */
product WINBOND UH104 0x5518 4-port USB Hub
/* WinMaxGroup products */
product WINMAXGROUP FLASH64MC 0x6660 USB Flash Disk 64M-C
/* Wistron NeWeb products */
product WISTRONNEWEB WNC0600 0x0326 WNC-0600USB
product WISTRONNEWEB UR045G 0x0427 PrismGT USB 2.0 WLAN
product WISTRONNEWEB UR055G 0x0711 UR055G
product WISTRONNEWEB O8494 0x0804 ORiNOCO 802.11n
product WISTRONNEWEB AR5523_1 0x0826 AR5523
product WISTRONNEWEB AR5523_1_NF 0x0827 AR5523 (no firmware)
product WISTRONNEWEB AR5523_2 0x082a AR5523
product WISTRONNEWEB AR5523_2_NF 0x0829 AR5523 (no firmware)
/* Xerox products */
product XEROX WCM15 0xffef WorkCenter M15
/* Xirlink products */
product XIRLINK PCCAM 0x8080 IBM PC Camera
/* Xyratex products */
product XYRATEX PRISM_GT_1 0x2000 PrismGT USB 2.0 WLAN
product XYRATEX PRISM_GT_2 0x2002 PrismGT USB 2.0 WLAN
/* Yamaha products */
product YAMAHA UX256 0x1000 UX256 MIDI I/F
product YAMAHA MU1000 0x1001 MU1000 MIDI Synth.
product YAMAHA MU2000 0x1002 MU2000 MIDI Synth.
product YAMAHA MU500 0x1003 MU500 MIDI Synth.
product YAMAHA UW500 0x1004 UW500 USB Audio I/F
product YAMAHA MOTIF6 0x1005 MOTIF6 MIDI Synth. Workstation
product YAMAHA MOTIF7 0x1006 MOTIF7 MIDI Synth. Workstation
product YAMAHA MOTIF8 0x1007 MOTIF8 MIDI Synth. Workstation
product YAMAHA UX96 0x1008 UX96 MIDI I/F
product YAMAHA UX16 0x1009 UX16 MIDI I/F
product YAMAHA S08 0x100e S08 MIDI Keyboard
product YAMAHA CLP150 0x100f CLP-150 digital piano
product YAMAHA CLP170 0x1010 CLP-170 digital piano
product YAMAHA RPU200 0x3104 RP-U200
product YAMAHA RTA54I 0x4000 NetVolante RTA54i Broadband&ISDN Router
product YAMAHA RTW65B 0x4001 NetVolante RTW65b Broadband Wireless Router
product YAMAHA RTW65I 0x4002 NetVolante RTW65i Broadband&ISDN Wireless Router
product YAMAHA RTA55I 0x4004 NetVolante RTA55i Broadband VoIP Router
/* Yano products */
product YANO U640MO 0x0101 U640MO-03
product YANO FW800HD 0x05fc METALWEAR-HDD
/* Y.C. Cable products */
product YCCABLE PL2303 0x0fba PL2303 Serial
/* Y-E Data products */
product YEDATA FLASHBUSTERU 0x0000 Flashbuster-U
/* Yiso Wireless Co. products */
product YISO C893 0xc893 CDMA 2000 1xEVDO PC Card
/* Z-Com products */
product ZCOM M4Y750 0x0001 M4Y-750
product ZCOM XI725 0x0002 XI-725/726
product ZCOM XI735 0x0005 XI-735
product ZCOM XG703A 0x0008 PrismGT USB 2.0 WLAN
product ZCOM ZD1211 0x0011 ZD1211
product ZCOM AR5523 0x0012 AR5523
product ZCOM AR5523_NF 0x0013 AR5523 (no firmware)
product ZCOM XM142 0x0015 XM-142
product ZCOM ZD1211B 0x001a ZD1211B
product ZCOM RT2870_1 0x0022 RT2870
product ZCOM UB81 0x0023 UB81
product ZCOM RT2870_2 0x0025 RT2870
product ZCOM UB82 0x0026 UB82
/* Zeevo, Inc. products */
product ZEEVO BLUETOOTH 0x07d0 BT-500 Bluetooth USB Adapter
/* Zinwell products */
product ZINWELL RT2570 0x0260 RT2570
product ZINWELL RT2870_1 0x0280 RT2870
product ZINWELL RT2870_2 0x0282 RT2870
product ZINWELL RT3072_1 0x0283 RT3072
product ZINWELL RT3072_2 0x0284 RT3072
product ZINWELL RT3070 0x5257 RT3070
/* Zoom Telephonics, Inc. products */
product ZOOM 2986L 0x9700 2986L Fax modem
product ZOOM 3095 0x3095 3095 USB Fax modem
/* Zoran Microelectronics products */
product ZORAN EX20DSC 0x4343 Digital Camera EX-20 DSC
/* ZTE products */
product ZTE MF622 0x0001 MF622 modem
product ZTE MF628 0x0015 MF628 modem
product ZTE MF626 0x0031 MF626 modem
product ZTE MF820D_INSTALLER 0x0166 MF820D CD
product ZTE MF820D 0x0167 MF820D modem
product ZTE INSTALLER 0x2000 UMTS CD
product ZTE MC2718 0xffe8 MC2718 modem
product ZTE AC8700 0xfffe CDMA 1xEVDO USB modem
/* Zydas Technology Corporation products */
product ZYDAS ZD1201 0x1201 ZD1201
product ZYDAS ZD1211 0x1211 ZD1211 WLAN abg
product ZYDAS ZD1211B 0x1215 ZD1211B
product ZYDAS ZD1221 0x1221 ZD1221
product ZYDAS ALL0298 0xa211 ALL0298
product ZYDAS ZD1211B_2 0xb215 ZD1211B
/* ZyXEL Communication Co. products */
product ZYXEL OMNI56K 0x1500 Omni 56K Plus
product ZYXEL 980N 0x2011 Scorpion-980N keyboard
product ZYXEL ZYAIRG220 0x3401 ZyAIR G-220
product ZYXEL G200V2 0x3407 G-200 v2
product ZYXEL AG225H 0x3409 AG-225H
product ZYXEL M202 0x340a M-202
product ZYXEL G270S 0x340c G-270S
product ZYXEL G220V2 0x340f G-220 v2
product ZYXEL G202 0x3410 G-202
product ZYXEL RT2573 0x3415 RT2573
product ZYXEL RT2870_1 0x3416 RT2870
product ZYXEL NWD271N 0x3417 NWD-271N
product ZYXEL NWD211AN 0x3418 NWD-211AN
product ZYXEL RT2870_2 0x341a RT2870
product ZYXEL RT3070 0x341e NWD2105
product ZYXEL RTL8192CU 0x341f RTL8192CU
product ZYXEL NWD2705 0x3421 NWD2705
product ZYXEL NWD6605 0x3426 ND6605
product ZYXEL PRESTIGE 0x401a Prestige
Index: projects/clang800-import/sys/dev/wi/if_wivar.h
===================================================================
--- projects/clang800-import/sys/dev/wi/if_wivar.h (revision 343955)
+++ projects/clang800-import/sys/dev/wi/if_wivar.h (revision 343956)
@@ -1,190 +1,189 @@
/*-
* SPDX-License-Identifier: BSD-4-Clause
*
- * Copyright (c) 2002
- * M Warner Losh <imp@freebsd.org>. All rights reserved.
+ * Copyright (c) 2002 M Warner Losh <imp@freebsd.org>.
* Copyright (c) 1997, 1998, 1999
* Bill Paul <wpaul@ctr.columbia.edu>. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by Bill Paul.
* 4. Neither the name of the author nor the names of any co-contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*
* $FreeBSD$
*/
/*
* Encryption controls. We can enable or disable encryption as
* well as specify up to 4 encryption keys. We can also specify
* which of the four keys will be used for transmit encryption.
*/
#define WI_RID_ENCRYPTION 0xFC20
#define WI_RID_AUTHTYPE 0xFC21
#define WI_RID_DEFLT_CRYPT_KEYS 0xFCB0
#define WI_RID_TX_CRYPT_KEY 0xFCB1
#define WI_RID_WEP_AVAIL 0xFD4F
#define WI_RID_P2_TX_CRYPT_KEY 0xFC23
#define WI_RID_P2_CRYPT_KEY0 0xFC24
#define WI_RID_P2_CRYPT_KEY1 0xFC25
#define WI_RID_MICROWAVE_OVEN 0xFC25
#define WI_RID_P2_CRYPT_KEY2 0xFC26
#define WI_RID_P2_CRYPT_KEY3 0xFC27
#define WI_RID_P2_ENCRYPTION 0xFC28
#define WI_RID_ROAMING_MODE 0xFC2D
#define WI_RID_CUR_TX_RATE 0xFD44 /* current TX rate */
#define WI_MAX_AID 256 /* max stations for ap operation */
struct wi_vap {
struct ieee80211vap wv_vap;
void (*wv_recv_mgmt)(struct ieee80211_node *, struct mbuf *,
int, const struct ieee80211_rx_stats *rxs, int, int);
int (*wv_newstate)(struct ieee80211vap *,
enum ieee80211_state, int);
};
#define WI_VAP(vap) ((struct wi_vap *)(vap))
struct wi_softc {
struct ieee80211com sc_ic;
struct mbufq sc_snd;
device_t sc_dev;
struct mtx sc_mtx;
struct callout sc_watchdog;
int sc_unit;
int wi_gone;
int sc_enabled;
int sc_reset;
int sc_firmware_type;
#define WI_NOTYPE 0
#define WI_LUCENT 1
#define WI_INTERSIL 2
#define WI_SYMBOL 3
int sc_pri_firmware_ver; /* Primary firmware */
int sc_sta_firmware_ver; /* Station firmware */
unsigned int sc_nic_id; /* Type of NIC */
char * sc_nic_name;
int wi_bus_type; /* Bus attachment type */
struct resource * local;
int local_rid;
struct resource * iobase;
int iobase_rid;
struct resource * irq;
int irq_rid;
struct resource * mem;
int mem_rid;
bus_space_handle_t wi_localhandle;
bus_space_tag_t wi_localtag;
bus_space_handle_t wi_bhandle;
bus_space_tag_t wi_btag;
bus_space_handle_t wi_bmemhandle;
bus_space_tag_t wi_bmemtag;
void * wi_intrhand;
struct ieee80211_channel *wi_channel;
int wi_io_addr;
int wi_cmd_count;
int sc_flags;
int sc_bap_id;
int sc_bap_off;
int sc_porttype;
u_int16_t sc_portnum;
u_int16_t sc_encryption;
u_int16_t sc_monitor_port;
u_int16_t sc_chanmask;
/* RSSI interpretation */
u_int16_t sc_min_rssi; /* clamp sc_min_rssi < RSSI */
u_int16_t sc_max_rssi; /* clamp RSSI < sc_max_rssi */
u_int16_t sc_dbm_offset; /* dBm ~ RSSI - sc_dbm_offset */
int sc_buflen; /* TX buffer size */
int sc_ntxbuf;
#define WI_NTXBUF 3
struct {
int d_fid;
int d_len;
} sc_txd[WI_NTXBUF]; /* TX buffers */
int sc_txnext; /* index of next TX */
int sc_txcur; /* index of current TX*/
int sc_tx_timer;
struct wi_counters sc_stats;
u_int16_t sc_ibss_port;
struct timeval sc_last_syn;
int sc_false_syns;
u_int16_t sc_txbuf[IEEE80211_MAX_LEN/2];
struct wi_tx_radiotap_header sc_tx_th;
struct wi_rx_radiotap_header sc_rx_th;
};
/* maximum consecutive false change-of-BSSID indications */
#define WI_MAX_FALSE_SYNS 10
#define WI_FLAGS_HAS_ENHSECURITY 0x0001
#define WI_FLAGS_HAS_WPASUPPORT 0x0002
#define WI_FLAGS_HAS_ROAMING 0x0020
#define WI_FLAGS_HAS_FRAGTHR 0x0200
#define WI_FLAGS_HAS_DBMADJUST 0x0400
#define WI_FLAGS_RUNNING 0x0800
#define WI_FLAGS_PROMISC 0x1000
struct wi_card_ident {
u_int16_t card_id;
char *card_name;
u_int8_t firm_type;
};
#define WI_PRISM_MIN_RSSI 0x1b
#define WI_PRISM_MAX_RSSI 0x9a
#define WI_PRISM_DBM_OFFSET 100 /* XXX */
#define WI_LUCENT_MIN_RSSI 47
#define WI_LUCENT_MAX_RSSI 138
#define WI_LUCENT_DBM_OFFSET 149
#define WI_RSSI_TO_DBM(sc, rssi) (MIN((sc)->sc_max_rssi, \
MAX((sc)->sc_min_rssi, (rssi))) - (sc)->sc_dbm_offset)
#define WI_LOCK(_sc) mtx_lock(&(_sc)->sc_mtx)
#define WI_UNLOCK(_sc) mtx_unlock(&(_sc)->sc_mtx)
#define WI_LOCK_ASSERT(_sc) mtx_assert(&(_sc)->sc_mtx, MA_OWNED)
int wi_attach(device_t);
int wi_detach(device_t);
int wi_shutdown(device_t);
int wi_alloc(device_t, int);
void wi_free(device_t);
extern devclass_t wi_devclass;
void wi_intr(void *);
int wi_mgmt_xmit(struct wi_softc *, caddr_t, int);
void wi_stop(struct wi_softc *, int);
void wi_init(struct wi_softc *);
Index: projects/clang800-import/sys/dts/arm/tegra124-jetson-tk1-fbsd.dts
===================================================================
--- projects/clang800-import/sys/dts/arm/tegra124-jetson-tk1-fbsd.dts (revision 343955)
+++ projects/clang800-import/sys/dts/arm/tegra124-jetson-tk1-fbsd.dts (revision 343956)
@@ -1,48 +1,48 @@
/*-
* Copyright (c) 2016 Michal Meloun <mmel@FreeBSD.org>
* All rights reserved.
*
* This software was developed by SRI International and the University of
* Cambridge Computer Laboratory under DARPA/AFRL contract (FA8750-10-C-0237)
* ("CTSRD"), as part of the DARPA CRASH research programme.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#include "tegra124-jetson-tk1.dts"
/ {
chosen {
stdin = &uartd;
stdout = &uartd;
};
- memory {
+ memory@80000000 {
/* reg = <0x0 0x80000000 0x0 0x80000000>; */
reg = <0x0 0x80000000 0x0 0x70000000>;
};
usb@70090000 {
freebsd,clock-xusb-gate = <&tegra_car 143>;
};
};
Index: projects/clang800-import/sys/fs/nullfs/null_vfsops.c
===================================================================
--- projects/clang800-import/sys/fs/nullfs/null_vfsops.c (revision 343955)
+++ projects/clang800-import/sys/fs/nullfs/null_vfsops.c (revision 343956)
@@ -1,461 +1,469 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1992, 1993, 1995
* The Regents of the University of California. All rights reserved.
*
* This code is derived from software donated to Berkeley by
* Jan-Simon Pendry.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)null_vfsops.c 8.2 (Berkeley) 1/21/94
*
* @(#)lofs_vfsops.c 1.2 (Berkeley) 6/18/92
* $FreeBSD$
*/
/*
* Null Layer
* (See null_vnops.c for a description of what this does.)
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/fcntl.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mount.h>
#include <sys/namei.h>
#include <sys/proc.h>
#include <sys/vnode.h>
#include <sys/jail.h>
#include <fs/nullfs/null.h>
static MALLOC_DEFINE(M_NULLFSMNT, "nullfs_mount", "NULLFS mount structure");
static vfs_fhtovp_t nullfs_fhtovp;
static vfs_mount_t nullfs_mount;
static vfs_quotactl_t nullfs_quotactl;
static vfs_root_t nullfs_root;
static vfs_sync_t nullfs_sync;
static vfs_statfs_t nullfs_statfs;
static vfs_unmount_t nullfs_unmount;
static vfs_vget_t nullfs_vget;
static vfs_extattrctl_t nullfs_extattrctl;
/*
* Mount null layer
*/
static int
nullfs_mount(struct mount *mp)
{
- int error = 0;
struct vnode *lowerrootvp, *vp;
struct vnode *nullm_rootvp;
struct null_mount *xmp;
+ struct null_node *nn;
+ struct nameidata nd, *ndp;
char *target;
- int isvnunlocked = 0, len;
- struct nameidata nd, *ndp = &nd;
+ int error, len;
+ bool isvnunlocked;
NULLFSDEBUG("nullfs_mount(mp = %p)\n", (void *)mp);
if (mp->mnt_flag & MNT_ROOTFS)
return (EOPNOTSUPP);
/*
* Update is a no-op
*/
if (mp->mnt_flag & MNT_UPDATE) {
/*
* Only support update mounts for NFS export.
*/
if (vfs_flagopt(mp->mnt_optnew, "export", NULL, 0))
return (0);
else
return (EOPNOTSUPP);
}
/*
* Get argument
*/
error = vfs_getopt(mp->mnt_optnew, "target", (void **)&target, &len);
if (error || target[len - 1] != '\0')
return (EINVAL);
/*
* Unlock lower node to avoid possible deadlock.
*/
- if ((mp->mnt_vnodecovered->v_op == &null_vnodeops) &&
+ if (mp->mnt_vnodecovered->v_op == &null_vnodeops &&
VOP_ISLOCKED(mp->mnt_vnodecovered) == LK_EXCLUSIVE) {
VOP_UNLOCK(mp->mnt_vnodecovered, 0);
- isvnunlocked = 1;
+ isvnunlocked = true;
+ } else {
+ isvnunlocked = false;
}
+
/*
* Find lower node
*/
+ ndp = &nd;
NDINIT(ndp, LOOKUP, FOLLOW|LOCKLEAF, UIO_SYSSPACE, target, curthread);
error = namei(ndp);
/*
* Re-lock vnode.
* XXXKIB This is deadlock-prone as well.
*/
if (isvnunlocked)
vn_lock(mp->mnt_vnodecovered, LK_EXCLUSIVE | LK_RETRY);
if (error)
return (error);
NDFREE(ndp, NDF_ONLY_PNBUF);
/*
* Sanity check on lower vnode
*/
lowerrootvp = ndp->ni_vp;
/*
* Check multi null mount to avoid `lock against myself' panic.
*/
- if (lowerrootvp == VTONULL(mp->mnt_vnodecovered)->null_lowervp) {
- NULLFSDEBUG("nullfs_mount: multi null mount?\n");
- vput(lowerrootvp);
- return (EDEADLK);
+ if (mp->mnt_vnodecovered->v_op == &null_vnodeops) {
+ nn = VTONULL(mp->mnt_vnodecovered);
+ if (nn == NULL || lowerrootvp == nn->null_lowervp) {
+ NULLFSDEBUG("nullfs_mount: multi null mount?\n");
+ vput(lowerrootvp);
+ return (EDEADLK);
+ }
}
xmp = (struct null_mount *) malloc(sizeof(struct null_mount),
M_NULLFSMNT, M_WAITOK | M_ZERO);
/*
* Save reference to underlying FS
*/
xmp->nullm_vfs = lowerrootvp->v_mount;
/*
* Save reference. Each mount also holds
* a reference on the root vnode.
*/
error = null_nodeget(mp, lowerrootvp, &vp);
/*
* Make sure the node alias worked
*/
if (error) {
free(xmp, M_NULLFSMNT);
return (error);
}
/*
* Keep a held reference to the root vnode.
* It is vrele'd in nullfs_unmount.
*/
nullm_rootvp = vp;
nullm_rootvp->v_vflag |= VV_ROOT;
xmp->nullm_rootvp = nullm_rootvp;
/*
* Unlock the node (either the lower or the alias)
*/
VOP_UNLOCK(vp, 0);
if (NULLVPTOLOWERVP(nullm_rootvp)->v_mount->mnt_flag & MNT_LOCAL) {
MNT_ILOCK(mp);
mp->mnt_flag |= MNT_LOCAL;
MNT_IUNLOCK(mp);
}
xmp->nullm_flags |= NULLM_CACHE;
if (vfs_getopt(mp->mnt_optnew, "nocache", NULL, NULL) == 0 ||
(xmp->nullm_vfs->mnt_kern_flag & MNTK_NULL_NOCACHE) != 0)
xmp->nullm_flags &= ~NULLM_CACHE;
MNT_ILOCK(mp);
if ((xmp->nullm_flags & NULLM_CACHE) != 0) {
mp->mnt_kern_flag |= lowerrootvp->v_mount->mnt_kern_flag &
(MNTK_SHARED_WRITES | MNTK_LOOKUP_SHARED |
MNTK_EXTENDED_SHARED);
}
mp->mnt_kern_flag |= MNTK_LOOKUP_EXCL_DOTDOT;
mp->mnt_kern_flag |= lowerrootvp->v_mount->mnt_kern_flag &
(MNTK_USES_BCACHE | MNTK_NO_IOPF | MNTK_UNMAPPED_BUFS);
MNT_IUNLOCK(mp);
mp->mnt_data = xmp;
vfs_getnewfsid(mp);
if ((xmp->nullm_flags & NULLM_CACHE) != 0) {
MNT_ILOCK(xmp->nullm_vfs);
TAILQ_INSERT_TAIL(&xmp->nullm_vfs->mnt_uppers, mp,
mnt_upper_link);
MNT_IUNLOCK(xmp->nullm_vfs);
}
vfs_mountedfrom(mp, target);
NULLFSDEBUG("nullfs_mount: lower %s, alias at %s\n",
mp->mnt_stat.f_mntfromname, mp->mnt_stat.f_mntonname);
return (0);
}
/*
* Free reference to null layer
*/
static int
nullfs_unmount(struct mount *mp, int mntflags)
{
struct null_mount *mntdata;
struct mount *ump;
int error, flags;
NULLFSDEBUG("nullfs_unmount: mp = %p\n", (void *)mp);
if (mntflags & MNT_FORCE)
flags = FORCECLOSE;
else
flags = 0;
/* There is 1 extra root vnode reference (nullm_rootvp). */
error = vflush(mp, 1, flags, curthread);
if (error)
return (error);
/*
* Finally, throw away the null_mount structure
*/
mntdata = mp->mnt_data;
ump = mntdata->nullm_vfs;
if ((mntdata->nullm_flags & NULLM_CACHE) != 0) {
MNT_ILOCK(ump);
while ((ump->mnt_kern_flag & MNTK_VGONE_UPPER) != 0) {
ump->mnt_kern_flag |= MNTK_VGONE_WAITER;
msleep(&ump->mnt_uppers, &ump->mnt_mtx, 0, "vgnupw", 0);
}
TAILQ_REMOVE(&ump->mnt_uppers, mp, mnt_upper_link);
MNT_IUNLOCK(ump);
}
mp->mnt_data = NULL;
free(mntdata, M_NULLFSMNT);
return (0);
}
static int
nullfs_root(struct mount *mp, int flags, struct vnode **vpp)
{
struct vnode *vp;
NULLFSDEBUG("nullfs_root(mp = %p, vp = %p->%p)\n", (void *)mp,
(void *)MOUNTTONULLMOUNT(mp)->nullm_rootvp,
(void *)NULLVPTOLOWERVP(MOUNTTONULLMOUNT(mp)->nullm_rootvp));
/*
* Return locked reference to root.
*/
vp = MOUNTTONULLMOUNT(mp)->nullm_rootvp;
VREF(vp);
ASSERT_VOP_UNLOCKED(vp, "root vnode is locked");
vn_lock(vp, flags | LK_RETRY);
*vpp = vp;
return (0);
}
static int
nullfs_quotactl(struct mount *mp, int cmd, uid_t uid, void *arg)
{
return (VFS_QUOTACTL(MOUNTTONULLMOUNT(mp)->nullm_vfs, cmd, uid, arg));
}
static int
nullfs_statfs(struct mount *mp, struct statfs *sbp)
{
int error;
struct statfs *mstat;
NULLFSDEBUG("nullfs_statfs(mp = %p, vp = %p->%p)\n", (void *)mp,
(void *)MOUNTTONULLMOUNT(mp)->nullm_rootvp,
(void *)NULLVPTOLOWERVP(MOUNTTONULLMOUNT(mp)->nullm_rootvp));
mstat = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK | M_ZERO);
error = VFS_STATFS(MOUNTTONULLMOUNT(mp)->nullm_vfs, mstat);
if (error) {
free(mstat, M_STATFS);
return (error);
}
/* now copy across the "interesting" information and fake the rest */
sbp->f_type = mstat->f_type;
sbp->f_flags = (sbp->f_flags & (MNT_RDONLY | MNT_NOEXEC | MNT_NOSUID |
MNT_UNION | MNT_NOSYMFOLLOW | MNT_AUTOMOUNTED)) |
(mstat->f_flags & ~(MNT_ROOTFS | MNT_AUTOMOUNTED));
sbp->f_bsize = mstat->f_bsize;
sbp->f_iosize = mstat->f_iosize;
sbp->f_blocks = mstat->f_blocks;
sbp->f_bfree = mstat->f_bfree;
sbp->f_bavail = mstat->f_bavail;
sbp->f_files = mstat->f_files;
sbp->f_ffree = mstat->f_ffree;
free(mstat, M_STATFS);
return (0);
}
static int
nullfs_sync(struct mount *mp, int waitfor)
{
/*
* XXX - Assumes no data cached at null layer.
*/
return (0);
}
static int
nullfs_vget(struct mount *mp, ino_t ino, int flags, struct vnode **vpp)
{
int error;
KASSERT((flags & LK_TYPE_MASK) != 0,
("nullfs_vget: no lock requested"));
error = VFS_VGET(MOUNTTONULLMOUNT(mp)->nullm_vfs, ino, flags, vpp);
if (error != 0)
return (error);
return (null_nodeget(mp, *vpp, vpp));
}
static int
nullfs_fhtovp(struct mount *mp, struct fid *fidp, int flags, struct vnode **vpp)
{
int error;
error = VFS_FHTOVP(MOUNTTONULLMOUNT(mp)->nullm_vfs, fidp, flags,
vpp);
if (error != 0)
return (error);
return (null_nodeget(mp, *vpp, vpp));
}
static int
nullfs_extattrctl(struct mount *mp, int cmd, struct vnode *filename_vp,
int namespace, const char *attrname)
{
return (VFS_EXTATTRCTL(MOUNTTONULLMOUNT(mp)->nullm_vfs, cmd,
filename_vp, namespace, attrname));
}
static void
nullfs_reclaim_lowervp(struct mount *mp, struct vnode *lowervp)
{
struct vnode *vp;
vp = null_hashget(mp, lowervp);
if (vp == NULL)
return;
VTONULL(vp)->null_flags |= NULLV_NOUNLOCK;
vgone(vp);
vput(vp);
}
static void
nullfs_unlink_lowervp(struct mount *mp, struct vnode *lowervp)
{
struct vnode *vp;
struct null_node *xp;
vp = null_hashget(mp, lowervp);
if (vp == NULL)
return;
xp = VTONULL(vp);
xp->null_flags |= NULLV_DROP | NULLV_NOUNLOCK;
vhold(vp);
vunref(vp);
if (vp->v_usecount == 0) {
/*
* If vunref() dropped the last use reference on the
* nullfs vnode, it must be reclaimed, and its lock
* was split from the lower vnode lock. Need to do
* extra unlock before allowing the final vdrop() to
* free the vnode.
*/
KASSERT((vp->v_iflag & VI_DOOMED) != 0,
("not reclaimed nullfs vnode %p", vp));
VOP_UNLOCK(vp, 0);
} else {
/*
* Otherwise, the nullfs vnode still shares the lock
* with the lower vnode, and must not be unlocked.
* Also clear the NULLV_NOUNLOCK, the flag is not
* relevant for future reclamations.
*/
ASSERT_VOP_ELOCKED(vp, "unlink_lowervp");
KASSERT((vp->v_iflag & VI_DOOMED) == 0,
("reclaimed nullfs vnode %p", vp));
xp->null_flags &= ~NULLV_NOUNLOCK;
}
vdrop(vp);
}
static struct vfsops null_vfsops = {
.vfs_extattrctl = nullfs_extattrctl,
.vfs_fhtovp = nullfs_fhtovp,
.vfs_init = nullfs_init,
.vfs_mount = nullfs_mount,
.vfs_quotactl = nullfs_quotactl,
.vfs_root = nullfs_root,
.vfs_statfs = nullfs_statfs,
.vfs_sync = nullfs_sync,
.vfs_uninit = nullfs_uninit,
.vfs_unmount = nullfs_unmount,
.vfs_vget = nullfs_vget,
.vfs_reclaim_lowervp = nullfs_reclaim_lowervp,
.vfs_unlink_lowervp = nullfs_unlink_lowervp,
};
VFS_SET(null_vfsops, nullfs, VFCF_LOOPBACK | VFCF_JAIL);
Index: projects/clang800-import/sys/fs/nullfs/null_vnops.c
===================================================================
--- projects/clang800-import/sys/fs/nullfs/null_vnops.c (revision 343955)
+++ projects/clang800-import/sys/fs/nullfs/null_vnops.c (revision 343956)
@@ -1,936 +1,942 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1992, 1993
* The Regents of the University of California. All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* John Heidemann of the UCLA Ficus project.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)null_vnops.c 8.6 (Berkeley) 5/27/95
*
* Ancestors:
* @(#)lofs_vnops.c 1.2 (Berkeley) 6/18/92
* ...and...
* @(#)null_vnodeops.c 1.20 92/07/07 UCLA Ficus project
*
* $FreeBSD$
*/
/*
* Null Layer
*
* (See mount_nullfs(8) for more information.)
*
* The null layer duplicates a portion of the filesystem
* name space under a new name. In this respect, it is
* similar to the loopback filesystem. It differs from
* the loopback fs in two respects: it is implemented using
* a stackable-layers technique, and its null-nodes stack above
* all lower-layer vnodes, not just over directory vnodes.
*
* The null layer has two purposes. First, it serves as a demonstration
* of layering by providing a layer which does nothing. (It actually
* does everything the loopback filesystem does, which is slightly
* more than nothing.) Second, the null layer can serve as a prototype
* layer. Since it provides all necessary layer framework,
* new filesystem layers can be created very easily by starting
* with a null layer.
*
* The remainder of this comment examines the null layer as a basis
* for constructing new layers.
*
*
* INSTANTIATING NEW NULL LAYERS
*
* New null layers are created with mount_nullfs(8).
* Mount_nullfs(8) takes two arguments, the pathname
* of the lower vfs (target-pn) and the pathname where the null
* layer will appear in the namespace (alias-pn). After
* the null layer is put into place, the contents
* of target-pn subtree will be aliased under alias-pn.
*
*
* OPERATION OF A NULL LAYER
*
* The null layer is the minimum filesystem layer,
* simply bypassing all possible operations to the lower layer
* for processing there. The majority of its activity centers
* on the bypass routine, through which nearly all vnode operations
* pass.
*
* The bypass routine accepts arbitrary vnode operations for
* handling by the lower layer. It begins by examining vnode
* operation arguments and replacing any null-nodes by their
* lower-layer equivalents. It then invokes the operation
* on the lower layer. Finally, it restores the null-nodes
* in the arguments and, if a vnode is returned by the operation,
* stacks a null-node on top of the returned vnode.
*
* Although bypass handles most operations, vop_getattr, vop_lock,
* vop_unlock, vop_inactive, vop_reclaim, and vop_print are not
* bypassed. Vop_getattr must change the fsid being returned.
* Vop_lock and vop_unlock must handle any locking for the
* current vnode as well as pass the lock request down.
* Vop_inactive and vop_reclaim are not bypassed so that
* they can handle freeing null-layer specific data. Vop_print
* is not bypassed to avoid excessive debugging information.
* Also, certain vnode operations change the locking state within
* the operation (create, mknod, remove, link, rename, mkdir, rmdir,
* and symlink). Ideally these operations should not change the
* lock state, but should be changed to let the caller of the
* function unlock them. Otherwise all intermediate vnode layers
* (such as union, umapfs, etc) must catch these functions to do
* the necessary locking at their layer.
*
*
* INSTANTIATING VNODE STACKS
*
* Mounting associates the null layer with a lower layer,
* in effect stacking two VFSes. Vnode stacks are instead
* created on demand as files are accessed.
*
* The initial mount creates a single vnode stack for the
* root of the new null layer. All other vnode stacks
* are created as a result of vnode operations on
* this or other null vnode stacks.
*
* New vnode stacks come into existence as a result of
* an operation which returns a vnode.
* The bypass routine stacks a null-node above the new
* vnode before returning it to the caller.
*
* For example, imagine mounting a null layer with
* "mount_nullfs /usr/include /dev/layer/null".
* Changing directory to /dev/layer/null will assign
* the root null-node (which was created when the null layer was mounted).
* Now consider opening "sys". A vop_lookup would be
* done on the root null-node. This operation would bypass through
* to the lower layer which would return a vnode representing
* the UFS "sys". Null_bypass then builds a null-node
* aliasing the UFS "sys" and returns this to the caller.
* Later operations on the null-node "sys" will repeat this
* process when constructing other vnode stacks.
*
*
* CREATING OTHER FILE SYSTEM LAYERS
*
* One of the easiest ways to construct new filesystem layers is to make
* a copy of the null layer, rename all files and variables, and
* then begin modifying the copy. sed(1) can be used to easily rename
* all variables.
*
* The umap layer is an example of a layer descended from the
* null layer.
*
*
* INVOKING OPERATIONS ON LOWER LAYERS
*
* There are two techniques to invoke operations on a lower layer
* when the operation cannot be completely bypassed. Each method
* is appropriate in different situations. In both cases,
* it is the responsibility of the aliasing layer to make
* the operation arguments "correct" for the lower layer
* by mapping vnode arguments to the lower layer.
*
* The first approach is to call the aliasing layer's bypass routine.
* This method is most suitable when you wish to invoke the operation
* currently being handled on the lower layer. It has the advantage
* that the bypass routine already must do argument mapping.
* An example of this is null_getattr() in the null layer.
*
* A second approach is to directly invoke vnode operations on
* the lower layer with the VOP_OPERATIONNAME interface.
* The advantage of this method is that it is easy to invoke
* arbitrary operations on the lower layer. The disadvantage
* is that vnode arguments must be manually mapped.
*
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mount.h>
#include <sys/mutex.h>
#include <sys/namei.h>
#include <sys/sysctl.h>
#include <sys/vnode.h>
#include <fs/nullfs/null.h>
#include <vm/vm.h>
#include <vm/vm_extern.h>
#include <vm/vm_object.h>
#include <vm/vnode_pager.h>
static int null_bug_bypass = 0; /* for debugging: enables bypass printf'ing */
SYSCTL_INT(_debug, OID_AUTO, nullfs_bug_bypass, CTLFLAG_RW,
&null_bug_bypass, 0, "");
/*
* This is the 10-Apr-92 bypass routine.
* This version has been optimized for speed, throwing away some
* safety checks. It should still always work, but it's not as
* robust to programmer errors.
*
* In general, we map all vnodes going down and unmap them on the way back.
* As an exception to this, vnodes can be marked "unmapped" by setting
* the Nth bit in operation's vdesc_flags.
*
* Also, some BSD vnode operations have the side effect of vrele'ing
* their arguments. With stacking, the reference counts are held
* by the upper node, not the lower one, so we must handle these
* side-effects here. This is not of concern in Sun-derived systems
* since there are no such side-effects.
*
* This makes the following assumptions:
* - only one returned vpp
* - no INOUT vpp's (Sun's vop_open has one of these)
* - the vnode operation vector of the first vnode should be used
* to determine what implementation of the op should be invoked
* - all mapped vnodes are of our vnode-type (NEEDSWORK:
* problems on rmdir'ing mount points and renaming?)
*/
int
null_bypass(struct vop_generic_args *ap)
{
struct vnode **this_vp_p;
int error;
struct vnode *old_vps[VDESC_MAX_VPS];
struct vnode **vps_p[VDESC_MAX_VPS];
struct vnode ***vppp;
struct vnodeop_desc *descp = ap->a_desc;
int reles, i;
if (null_bug_bypass)
printf ("null_bypass: %s\n", descp->vdesc_name);
#ifdef DIAGNOSTIC
/*
* We require at least one vp.
*/
if (descp->vdesc_vp_offsets == NULL ||
descp->vdesc_vp_offsets[0] == VDESC_NO_OFFSET)
panic ("null_bypass: no vp's in map");
#endif
/*
* Map the vnodes going in.
* Later, we'll invoke the operation based on
* the first mapped vnode's operation vector.
*/
reles = descp->vdesc_flags;
for (i = 0; i < VDESC_MAX_VPS; reles >>= 1, i++) {
if (descp->vdesc_vp_offsets[i] == VDESC_NO_OFFSET)
break; /* bail out at end of list */
vps_p[i] = this_vp_p =
VOPARG_OFFSETTO(struct vnode**,descp->vdesc_vp_offsets[i],ap);
/*
* We're not guaranteed that any but the first vnode
* are of our type. Check for and don't map any
* that aren't. (We must always map first vp or vclean fails.)
*/
if (i && (*this_vp_p == NULLVP ||
(*this_vp_p)->v_op != &null_vnodeops)) {
old_vps[i] = NULLVP;
} else {
old_vps[i] = *this_vp_p;
*(vps_p[i]) = NULLVPTOLOWERVP(*this_vp_p);
/*
* XXX - Several operations have the side effect
* of vrele'ing their vp's. We must account for
* that. (This should go away in the future.)
*/
if (reles & VDESC_VP0_WILLRELE)
VREF(*this_vp_p);
}
}
/*
* Call the operation on the lower layer
* with the modified argument structure.
*/
if (vps_p[0] && *vps_p[0])
error = VCALL(ap);
else {
printf("null_bypass: no map for %s\n", descp->vdesc_name);
error = EINVAL;
}
/*
* Maintain the illusion of call-by-value
* by restoring vnodes in the argument structure
* to their original value.
*/
reles = descp->vdesc_flags;
for (i = 0; i < VDESC_MAX_VPS; reles >>= 1, i++) {
if (descp->vdesc_vp_offsets[i] == VDESC_NO_OFFSET)
break; /* bail out at end of list */
if (old_vps[i]) {
*(vps_p[i]) = old_vps[i];
#if 0
if (reles & VDESC_VP0_WILLUNLOCK)
VOP_UNLOCK(*(vps_p[i]), 0);
#endif
if (reles & VDESC_VP0_WILLRELE)
vrele(*(vps_p[i]));
}
}
/*
* Map the possible out-going vpp
* (Assumes that the lower layer always returns
* a VREF'ed vpp unless it gets an error.)
*/
if (descp->vdesc_vpp_offset != VDESC_NO_OFFSET &&
!(descp->vdesc_flags & VDESC_NOMAP_VPP) &&
!error) {
/*
* XXX - even though some ops have vpp returned vp's,
* several ops actually vrele this before returning.
* We must avoid these ops.
* (This should go away when these ops are regularized.)
*/
if (descp->vdesc_flags & VDESC_VPP_WILLRELE)
goto out;
vppp = VOPARG_OFFSETTO(struct vnode***,
descp->vdesc_vpp_offset,ap);
if (*vppp)
error = null_nodeget(old_vps[0]->v_mount, **vppp, *vppp);
}
out:
return (error);
}
static int
null_add_writecount(struct vop_add_writecount_args *ap)
{
struct vnode *lvp, *vp;
int error;
vp = ap->a_vp;
lvp = NULLVPTOLOWERVP(vp);
KASSERT(vp->v_writecount + ap->a_inc >= 0, ("wrong writecount inc"));
if (vp->v_writecount > 0 && vp->v_writecount + ap->a_inc == 0)
error = VOP_ADD_WRITECOUNT(lvp, -1);
else if (vp->v_writecount == 0 && vp->v_writecount + ap->a_inc > 0)
error = VOP_ADD_WRITECOUNT(lvp, 1);
else
error = 0;
if (error == 0)
vp->v_writecount += ap->a_inc;
return (error);
}
/*
* We have to carry on the locking protocol on the null layer vnodes
* as we progress through the tree. We also have to enforce read-only
* if this layer is mounted read-only.
*/
static int
null_lookup(struct vop_lookup_args *ap)
{
struct componentname *cnp = ap->a_cnp;
struct vnode *dvp = ap->a_dvp;
int flags = cnp->cn_flags;
struct vnode *vp, *ldvp, *lvp;
struct mount *mp;
int error;
mp = dvp->v_mount;
if ((flags & ISLASTCN) != 0 && (mp->mnt_flag & MNT_RDONLY) != 0 &&
(cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME))
return (EROFS);
/*
* Although it is possible to call null_bypass(), we'll do
* a direct call to reduce overhead
*/
ldvp = NULLVPTOLOWERVP(dvp);
vp = lvp = NULL;
KASSERT((ldvp->v_vflag & VV_ROOT) == 0 ||
((dvp->v_vflag & VV_ROOT) != 0 && (flags & ISDOTDOT) == 0),
("ldvp %p fl %#x dvp %p fl %#x flags %#x", ldvp, ldvp->v_vflag,
dvp, dvp->v_vflag, flags));
/*
* Hold ldvp. The reference on it, owned by dvp, is lost in
* case of dvp reclamation, and we need ldvp to move our lock
* from ldvp to dvp.
*/
vhold(ldvp);
error = VOP_LOOKUP(ldvp, &lvp, cnp);
/*
* VOP_LOOKUP() on lower vnode may unlock ldvp, which allows
* dvp to be reclaimed due to shared v_vnlock. Check for the
* doomed state and return error.
*/
if ((error == 0 || error == EJUSTRETURN) &&
(dvp->v_iflag & VI_DOOMED) != 0) {
error = ENOENT;
if (lvp != NULL)
vput(lvp);
/*
* If vgone() reclaimed dvp before curthread
* relocked ldvp, the locks of dvp and ldvp are no
* longer shared. In this case, relock of ldvp in
* lower fs VOP_LOOKUP() does not restore the locking
* state of dvp. Compensate for this by unlocking
* ldvp and locking dvp, which is also correct if the
* locks are still shared.
*/
VOP_UNLOCK(ldvp, 0);
vn_lock(dvp, LK_EXCLUSIVE | LK_RETRY);
}
vdrop(ldvp);
if (error == EJUSTRETURN && (flags & ISLASTCN) != 0 &&
(mp->mnt_flag & MNT_RDONLY) != 0 &&
(cnp->cn_nameiop == CREATE || cnp->cn_nameiop == RENAME))
error = EROFS;
if ((error == 0 || error == EJUSTRETURN) && lvp != NULL) {
if (ldvp == lvp) {
*ap->a_vpp = dvp;
VREF(dvp);
vrele(lvp);
} else {
error = null_nodeget(mp, lvp, &vp);
if (error == 0)
*ap->a_vpp = vp;
}
}
return (error);
}
static int
null_open(struct vop_open_args *ap)
{
int retval;
struct vnode *vp, *ldvp;
vp = ap->a_vp;
ldvp = NULLVPTOLOWERVP(vp);
retval = null_bypass(&ap->a_gen);
if (retval == 0)
vp->v_object = ldvp->v_object;
return (retval);
}
/*
* Setattr call. Disallow write attempts if the layer is mounted read-only.
*/
static int
null_setattr(struct vop_setattr_args *ap)
{
struct vnode *vp = ap->a_vp;
struct vattr *vap = ap->a_vap;
if ((vap->va_flags != VNOVAL || vap->va_uid != (uid_t)VNOVAL ||
vap->va_gid != (gid_t)VNOVAL || vap->va_atime.tv_sec != VNOVAL ||
vap->va_mtime.tv_sec != VNOVAL || vap->va_mode != (mode_t)VNOVAL) &&
(vp->v_mount->mnt_flag & MNT_RDONLY))
return (EROFS);
if (vap->va_size != VNOVAL) {
switch (vp->v_type) {
case VDIR:
return (EISDIR);
case VCHR:
case VBLK:
case VSOCK:
case VFIFO:
if (vap->va_flags != VNOVAL)
return (EOPNOTSUPP);
return (0);
case VREG:
case VLNK:
default:
/*
* Disallow write attempts if the filesystem is
* mounted read-only.
*/
if (vp->v_mount->mnt_flag & MNT_RDONLY)
return (EROFS);
}
}
return (null_bypass((struct vop_generic_args *)ap));
}
/*
* We handle getattr only to change the fsid.
*/
static int
null_getattr(struct vop_getattr_args *ap)
{
int error;
if ((error = null_bypass((struct vop_generic_args *)ap)) != 0)
return (error);
ap->a_vap->va_fsid = ap->a_vp->v_mount->mnt_stat.f_fsid.val[0];
return (0);
}
/*
* Handle to disallow write access if mounted read-only.
*/
static int
null_access(struct vop_access_args *ap)
{
struct vnode *vp = ap->a_vp;
accmode_t accmode = ap->a_accmode;
/*
* Disallow write attempts on read-only layers;
* unless the file is a socket, fifo, or a block or
* character device resident on the filesystem.
*/
if (accmode & VWRITE) {
switch (vp->v_type) {
case VDIR:
case VLNK:
case VREG:
if (vp->v_mount->mnt_flag & MNT_RDONLY)
return (EROFS);
break;
default:
break;
}
}
return (null_bypass((struct vop_generic_args *)ap));
}
static int
null_accessx(struct vop_accessx_args *ap)
{
struct vnode *vp = ap->a_vp;
accmode_t accmode = ap->a_accmode;
/*
* Disallow write attempts on read-only layers;
* unless the file is a socket, fifo, or a block or
* character device resident on the filesystem.
*/
if (accmode & VWRITE) {
switch (vp->v_type) {
case VDIR:
case VLNK:
case VREG:
if (vp->v_mount->mnt_flag & MNT_RDONLY)
return (EROFS);
break;
default:
break;
}
}
return (null_bypass((struct vop_generic_args *)ap));
}
/*
* Increasing refcount of lower vnode is needed at least for the case
* when lower FS is NFS to do sillyrename if the file is in use.
* Unfortunately v_usecount is incremented in many places in
* the kernel and, as such, there may be races that result in
* the NFS client doing an extraneous silly rename, but that seems
* preferable to not doing a silly rename when it is needed.
*/
static int
null_remove(struct vop_remove_args *ap)
{
int retval, vreleit;
struct vnode *lvp, *vp;
vp = ap->a_vp;
if (vrefcnt(vp) > 1) {
lvp = NULLVPTOLOWERVP(vp);
VREF(lvp);
vreleit = 1;
} else
vreleit = 0;
VTONULL(vp)->null_flags |= NULLV_DROP;
retval = null_bypass(&ap->a_gen);
if (vreleit != 0)
vrele(lvp);
return (retval);
}
/*
* We handle this to eliminate null FS to lower FS
* file moving. Don't know why we don't allow this,
* possibly we should.
*/
static int
null_rename(struct vop_rename_args *ap)
{
struct vnode *tdvp = ap->a_tdvp;
struct vnode *fvp = ap->a_fvp;
struct vnode *fdvp = ap->a_fdvp;
struct vnode *tvp = ap->a_tvp;
struct null_node *tnn;
/* Check for cross-device rename. */
if ((fvp->v_mount != tdvp->v_mount) ||
(tvp && (fvp->v_mount != tvp->v_mount))) {
if (tdvp == tvp)
vrele(tdvp);
else
vput(tdvp);
if (tvp)
vput(tvp);
vrele(fdvp);
vrele(fvp);
return (EXDEV);
}
if (tvp != NULL) {
tnn = VTONULL(tvp);
tnn->null_flags |= NULLV_DROP;
}
return (null_bypass((struct vop_generic_args *)ap));
}
static int
null_rmdir(struct vop_rmdir_args *ap)
{
VTONULL(ap->a_vp)->null_flags |= NULLV_DROP;
return (null_bypass(&ap->a_gen));
}
/*
* We need to process our own vnode lock and then clear the
* interlock flag as it applies only to our vnode, not the
* vnodes below us on the stack.
*/
static int
null_lock(struct vop_lock1_args *ap)
{
struct vnode *vp = ap->a_vp;
int flags = ap->a_flags;
struct null_node *nn;
struct vnode *lvp;
int error;
if ((flags & LK_INTERLOCK) == 0) {
VI_LOCK(vp);
ap->a_flags = flags |= LK_INTERLOCK;
}
nn = VTONULL(vp);
/*
* If we're still active we must ask the lower layer to
* lock as ffs has special lock considerations in its
* vop lock.
*/
if (nn != NULL && (lvp = NULLVPTOLOWERVP(vp)) != NULL) {
VI_LOCK_FLAGS(lvp, MTX_DUPOK);
VI_UNLOCK(vp);
/*
* We have to hold the vnode here to solve a potential
* reclaim race. If we're forcibly vgone'd while we
* still have refs, a thread could be sleeping inside
* the lowervp's vop_lock routine. When we vgone we will
* drop our last ref to the lowervp, which would allow it
* to be reclaimed. The lowervp could then be recycled,
* in which case it is not legal to be sleeping in its VOP.
* We prevent it from being recycled by holding the vnode
* here.
*/
vholdl(lvp);
error = VOP_LOCK(lvp, flags);
/*
* We might have slept to get the lock and someone might have
* cleaned our vnode already, switching vnode lock from one in
* lowervp to v_lock in our own vnode structure. Handle this
* case by reacquiring correct lock in requested mode.
*/
if (VTONULL(vp) == NULL && error == 0) {
ap->a_flags &= ~(LK_TYPE_MASK | LK_INTERLOCK);
switch (flags & LK_TYPE_MASK) {
case LK_SHARED:
ap->a_flags |= LK_SHARED;
break;
case LK_UPGRADE:
case LK_EXCLUSIVE:
ap->a_flags |= LK_EXCLUSIVE;
break;
default:
panic("Unsupported lock request %d\n",
ap->a_flags);
}
VOP_UNLOCK(lvp, 0);
error = vop_stdlock(ap);
}
vdrop(lvp);
} else
error = vop_stdlock(ap);
return (error);
}
/*
* We need to process our own vnode unlock and then clear the
* interlock flag as it applies only to our vnode, not the
* vnodes below us on the stack.
*/
static int
null_unlock(struct vop_unlock_args *ap)
{
struct vnode *vp = ap->a_vp;
int flags = ap->a_flags;
int mtxlkflag = 0;
struct null_node *nn;
struct vnode *lvp;
int error;
if ((flags & LK_INTERLOCK) != 0)
mtxlkflag = 1;
else if (mtx_owned(VI_MTX(vp)) == 0) {
VI_LOCK(vp);
mtxlkflag = 2;
}
nn = VTONULL(vp);
if (nn != NULL && (lvp = NULLVPTOLOWERVP(vp)) != NULL) {
VI_LOCK_FLAGS(lvp, MTX_DUPOK);
flags |= LK_INTERLOCK;
vholdl(lvp);
VI_UNLOCK(vp);
error = VOP_UNLOCK(lvp, flags);
vdrop(lvp);
if (mtxlkflag == 0)
VI_LOCK(vp);
} else {
if (mtxlkflag == 2)
VI_UNLOCK(vp);
error = vop_stdunlock(ap);
}
return (error);
}
/*
* Do not allow the VOP_INACTIVE to be passed to the lower layer,
* since the reference count on the lower vnode is not related to
* ours.
*/
static int
null_inactive(struct vop_inactive_args *ap __unused)
{
struct vnode *vp, *lvp;
struct null_node *xp;
struct mount *mp;
struct null_mount *xmp;
vp = ap->a_vp;
xp = VTONULL(vp);
lvp = NULLVPTOLOWERVP(vp);
mp = vp->v_mount;
xmp = MOUNTTONULLMOUNT(mp);
if ((xmp->nullm_flags & NULLM_CACHE) == 0 ||
(xp->null_flags & NULLV_DROP) != 0 ||
(lvp->v_vflag & VV_NOSYNC) != 0) {
/*
* If this is the last reference and caching of the
* nullfs vnodes is not enabled, or the lower vnode is
* deleted, then free up the vnode so as not to tie up
* the lower vnodes.
*/
vp->v_object = NULL;
vrecycle(vp);
}
return (0);
}
/*
* Now, the nullfs vnode and, due to the sharing lock, the lower
* vnode, are exclusively locked, and we shall destroy the null vnode.
*/
static int
null_reclaim(struct vop_reclaim_args *ap)
{
struct vnode *vp;
struct null_node *xp;
struct vnode *lowervp;
vp = ap->a_vp;
xp = VTONULL(vp);
lowervp = xp->null_lowervp;
KASSERT(lowervp != NULL && vp->v_vnlock != &vp->v_lock,
("Reclaiming incomplete null vnode %p", vp));
null_hashrem(xp);
/*
* Use the interlock to protect the clearing of v_data to
* prevent faults in null_lock().
*/
lockmgr(&vp->v_lock, LK_EXCLUSIVE, NULL);
VI_LOCK(vp);
vp->v_data = NULL;
vp->v_object = NULL;
vp->v_vnlock = &vp->v_lock;
VI_UNLOCK(vp);
/*
* If we were opened for write, we leased one write reference
* to the lower vnode. If this is a reclamation due to the
* forced unmount, undo the reference now.
*/
if (vp->v_writecount > 0)
VOP_ADD_WRITECOUNT(lowervp, -1);
if ((xp->null_flags & NULLV_NOUNLOCK) != 0)
vunref(lowervp);
else
vput(lowervp);
free(xp, M_NULLFSNODE);
return (0);
}
static int
null_print(struct vop_print_args *ap)
{
struct vnode *vp = ap->a_vp;
printf("\tvp=%p, lowervp=%p\n", vp, VTONULL(vp)->null_lowervp);
return (0);
}
/* ARGSUSED */
static int
null_getwritemount(struct vop_getwritemount_args *ap)
{
struct null_node *xp;
struct vnode *lowervp;
struct vnode *vp;
vp = ap->a_vp;
VI_LOCK(vp);
xp = VTONULL(vp);
if (xp && (lowervp = xp->null_lowervp)) {
VI_LOCK_FLAGS(lowervp, MTX_DUPOK);
VI_UNLOCK(vp);
vholdl(lowervp);
VI_UNLOCK(lowervp);
VOP_GETWRITEMOUNT(lowervp, ap->a_mpp);
vdrop(lowervp);
} else {
VI_UNLOCK(vp);
*(ap->a_mpp) = NULL;
}
return (0);
}
static int
null_vptofh(struct vop_vptofh_args *ap)
{
struct vnode *lvp;
lvp = NULLVPTOLOWERVP(ap->a_vp);
return VOP_VPTOFH(lvp, ap->a_fhp);
}
static int
null_vptocnp(struct vop_vptocnp_args *ap)
{
struct vnode *vp = ap->a_vp;
struct vnode **dvp = ap->a_vpp;
struct vnode *lvp, *ldvp;
struct ucred *cred = ap->a_cred;
+ struct mount *mp;
int error, locked;
locked = VOP_ISLOCKED(vp);
lvp = NULLVPTOLOWERVP(vp);
vhold(lvp);
+ mp = vp->v_mount;
+ vfs_ref(mp);
VOP_UNLOCK(vp, 0); /* vp is held by vn_vptocnp_locked that called us */
ldvp = lvp;
vref(lvp);
error = vn_vptocnp(&ldvp, cred, ap->a_buf, ap->a_buflen);
vdrop(lvp);
if (error != 0) {
vn_lock(vp, locked | LK_RETRY);
+ vfs_rel(mp);
return (ENOENT);
}
/*
* Exclusive lock is required by insmntque1 call in
* null_nodeget()
*/
error = vn_lock(ldvp, LK_EXCLUSIVE);
if (error != 0) {
vrele(ldvp);
vn_lock(vp, locked | LK_RETRY);
+ vfs_rel(mp);
return (ENOENT);
}
- error = null_nodeget(vp->v_mount, ldvp, dvp);
+ error = null_nodeget(mp, ldvp, dvp);
if (error == 0) {
#ifdef DIAGNOSTIC
NULLVPTOLOWERVP(*dvp);
#endif
VOP_UNLOCK(*dvp, 0); /* keep reference on *dvp */
}
vn_lock(vp, locked | LK_RETRY);
+ vfs_rel(mp);
return (error);
}
/*
* Global vfs data structures
*/
struct vop_vector null_vnodeops = {
.vop_bypass = null_bypass,
.vop_access = null_access,
.vop_accessx = null_accessx,
.vop_advlockpurge = vop_stdadvlockpurge,
.vop_bmap = VOP_EOPNOTSUPP,
.vop_getattr = null_getattr,
.vop_getwritemount = null_getwritemount,
.vop_inactive = null_inactive,
.vop_islocked = vop_stdislocked,
.vop_lock1 = null_lock,
.vop_lookup = null_lookup,
.vop_open = null_open,
.vop_print = null_print,
.vop_reclaim = null_reclaim,
.vop_remove = null_remove,
.vop_rename = null_rename,
.vop_rmdir = null_rmdir,
.vop_setattr = null_setattr,
.vop_strategy = VOP_EOPNOTSUPP,
.vop_unlock = null_unlock,
.vop_vptocnp = null_vptocnp,
.vop_vptofh = null_vptofh,
.vop_add_writecount = null_add_writecount,
};
Index: projects/clang800-import/sys/i386/acpica/acpi_wakecode.S
===================================================================
--- projects/clang800-import/sys/i386/acpica/acpi_wakecode.S (revision 343955)
+++ projects/clang800-import/sys/i386/acpica/acpi_wakecode.S (revision 343956)
@@ -1,201 +1,214 @@
/*-
* Copyright (c) 2001 Takanori Watanabe <takawata@jp.freebsd.org>
* Copyright (c) 2001-2012 Mitsuru IWASAKI <iwasaki@jp.freebsd.org>
* Copyright (c) 2003 Peter Wemm
* Copyright (c) 2008-2012 Jung-uk Kim <jkim@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#include <machine/asmacros.h>
#include <machine/ppireg.h>
#include <machine/specialreg.h>
#include <machine/timerreg.h>
#include "assym.inc"
/*
* Resume entry point. The BIOS enters here in real mode after POST with
* CS set to the page where we stored this code. It should configure the
* segment registers with a flat 4 GB address space and EFLAGS.IF = 0.
* Depending on the previous sleep state, we may need to initialize more
* of the system (i.e., S3 suspend-to-RAM vs. S4 suspend-to-disk).
*/
.data /* So we can modify it */
ALIGN_TEXT
.code16
wakeup_start:
/*
* Set up segment registers for real mode and a small stack for
* any calls we make. Set up full 32-bit bootstrap kernel flags
* since resumectx() doesn't restore flags. PSL_KERNEL gives
* bootstrap kernel flags (with interrupts disabled), not normal
* kernel flags.
*/
cli /* make sure no interrupts */
mov %cs, %ax /* copy %cs to %ds. Remember these */
mov %ax, %ds /* are offsets rather than selectors */
mov %ax, %ss
movw $PAGE_SIZE, %sp
pushl $PSL_KERNEL
popfl
/* To debug resume hangs, beep the speaker if the user requested. */
testb $~0, resume_beep - wakeup_start
jz 1f
movb $0, resume_beep - wakeup_start
/* Set PIC timer2 to beep. */
movb $(TIMER_SEL2 | TIMER_SQWAVE | TIMER_16BIT), %al
outb %al, $TIMER_MODE
/* Turn on speaker. */
inb $IO_PPI, %al
orb $PIT_SPKR, %al
outb %al, $IO_PPI
/* Set frequency. */
movw $0x4c0, %ax
outb %al, $TIMER_CNTR2
shrw $8, %ax
outb %al, $TIMER_CNTR2
1:
/* Re-initialize video BIOS if the reset_video tunable is set. */
testb $~0, reset_video - wakeup_start
jz 1f
movb $0, reset_video - wakeup_start
lcall $0xc000, $3
/* When we reach here, int 0x10 should be ready. Hide cursor. */
movb $0x01, %ah
movb $0x20, %ch
int $0x10
/* Re-start in case the previous BIOS call clobbers them. */
jmp wakeup_start
1:
/*
* Find relocation base and patch the gdt descriptor and ljmp targets
*/
xorl %ebx, %ebx
mov %cs, %bx
sall $4, %ebx /* %ebx is now our relocation base */
/*
* Load the descriptor table pointer. We'll need it when running
* in 16-bit protected mode.
*/
lgdtl bootgdtdesc - wakeup_start
/* Enable protected mode */
movl $CR0_PE, %eax
mov %eax, %cr0
/*
* Now execute a far jump to turn on protected mode. This
* causes the segment registers to turn into selectors and causes
* %cs to be loaded from the gdt.
*
* The following instruction is:
* ljmpl $bootcode32 - bootgdt, $wakeup_32 - wakeup_start
* but gas cannot assemble that. And besides, we patch the targets
* in early startup and it's a little clearer what we are patching.
*/
wakeup_sw32:
.byte 0x66 /* size override to 32 bits */
.byte 0xea /* opcode for far jump */
.long wakeup_32 - wakeup_start /* offset in segment */
.word bootcode32 - bootgdt /* index in gdt for 32 bit code */
/*
* At this point, we are running in 32 bit legacy protected mode.
*/
ALIGN_TEXT
.code32
wakeup_32:
mov $bootdata32 - bootgdt, %eax
mov %ax, %ds
- /* Get PCB and return address. */
- movl wakeup_pcb - wakeup_start(%ebx), %ecx
- movl wakeup_ret - wakeup_start(%ebx), %edx
-
- /* Restore CR4 and CR3. */
- movl wakeup_cr4 - wakeup_start(%ebx), %eax
+ /* Restore EFER, CR4 and CR3. */
+ movl wakeup_efer - wakeup_start(%ebx), %eax
+ movl wakeup_efer - wakeup_start + 4(%ebx), %edx
+ cmpl $0, %eax
+ jne 1f
+ cmpl $0, %edx
+ je 2f
+1: movl $MSR_EFER, %ecx
+ wrmsr
+2: movl wakeup_cr4 - wakeup_start(%ebx), %eax
+ cmpl $0, %eax
+ je 3f
mov %eax, %cr4
- movl wakeup_cr3 - wakeup_start(%ebx), %eax
+3: movl wakeup_cr3 - wakeup_start(%ebx), %eax
mov %eax, %cr3
+ /* Get PCB and return address. */
+ movl wakeup_ret - wakeup_start(%ebx), %edx
+ movl wakeup_pcb - wakeup_start(%ebx), %ecx
+
/* Enable paging. */
mov %cr0, %eax
orl $CR0_PG, %eax
mov %eax, %cr0
/* Jump to return address. */
jmp *%edx
.data
resume_beep:
.byte 0
reset_video:
.byte 0
ALIGN_DATA
bootgdt:
.long 0x00000000
.long 0x00000000
bootcode32:
.long 0x0000ffff
.long 0x00cf9b00
bootdata32:
.long 0x0000ffff
.long 0x00cf9300
bootgdtend:
bootgdtdesc:
.word bootgdtend - bootgdt /* Length */
.long bootgdt - wakeup_start /* Offset plus %ds << 4 */
ALIGN_DATA
+wakeup_efer:
+ .long 0
+ .long 0
wakeup_cr4:
.long 0
wakeup_cr3:
.long 0
wakeup_pcb:
.long 0
wakeup_ret:
.long 0
wakeup_gdt: /* not used */
.word 0
.long 0
dummy:
Index: projects/clang800-import/sys/i386/conf/NOTES
===================================================================
--- projects/clang800-import/sys/i386/conf/NOTES (revision 343955)
+++ projects/clang800-import/sys/i386/conf/NOTES (revision 343956)
@@ -1,946 +1,943 @@
#
# NOTES -- Lines that can be cut/pasted into kernel and hints configs.
#
# This file contains machine dependent kernel configuration notes. For
# machine independent notes, look in /sys/conf/NOTES.
#
# $FreeBSD$
#
#
# We want LINT to cover profiling as well.
profile 2
#
# Enable the kernel DTrace hooks which are required to load the DTrace
# kernel modules.
#
options KDTRACE_HOOKS
# DTrace core
# NOTE: introduces CDDL-licensed components into the kernel
#device dtrace
# DTrace modules
#device dtrace_profile
#device dtrace_sdt
#device dtrace_fbt
#device dtrace_systrace
#device dtrace_prototype
#device dtnfscl
#device dtmalloc
# Alternatively include all the DTrace modules
#device dtraceall
#####################################################################
# SMP OPTIONS:
#
# The apic device enables the use of the I/O APIC for interrupt delivery.
# The apic device can be used in both UP and SMP kernels, but is required
# for SMP kernels. Thus, the apic device is not strictly an SMP option,
# but it is a prerequisite for SMP.
#
# Notes:
#
# HTT CPUs should only be used if they are enabled in the BIOS. For
# the ACPI case, ACPI only correctly tells us about any HTT CPUs if
# they are enabled. However, most HTT systems do not list HTT CPUs
# in the MP Table if they are enabled, thus we guess at the HTT CPUs
# for the MP Table case. However, we shouldn't try to guess and use
# these CPUs if HTT is disabled. Thus, HTT guessing is only enabled
# for the MP Table if the user explicitly asks for it via the
# MPTABLE_FORCE_HTT option. Do NOT use this option if you have HTT
# disabled in your BIOS.
#
# IPI_PREEMPTION instructs the kernel to preempt threads running on other
# CPUs if needed. Relies on the PREEMPTION option
# Mandatory:
device apic # I/O apic
# Optional:
options MPTABLE_FORCE_HTT # Enable HTT CPUs with the MP Table
options IPI_PREEMPTION
#
# Watchdog routines.
#
options MP_WATCHDOG
# Debugging options.
#
options COUNT_XINVLTLB_HITS # Counters for TLB events
options COUNT_IPIS # Per-CPU IPI interrupt counters
#####################################################################
# CPU OPTIONS
#
# You must specify at least one CPU (the one you intend to run on);
# deleting the specification for CPUs you don't need to use may make
# parts of the system run faster.
#
cpu I486_CPU
cpu I586_CPU # aka Pentium(tm)
cpu I686_CPU # aka Pentium Pro(tm)
#
# Options for CPU features.
#
# CPU_ATHLON_SSE_HACK tries to enable SSE instructions when the BIOS has
# forgotten to enable them.
#
# CPU_BLUELIGHTNING_3X enables triple-clock mode on IBM Blue Lightning
# CPU if CPU supports it. The default is double-clock mode on
# BlueLightning CPU box.
#
# CPU_BLUELIGHTNING_FPU_OP_CACHE enables FPU operand cache on IBM
# BlueLightning CPU. It works only with Cyrix FPU, and this option
# should not be used with Intel FPU.
#
# CPU_BTB_EN enables branch target buffer on Cyrix 5x86 (NOTE 1).
#
# CPU_CYRIX_NO_LOCK enables weak locking for the entire address space
# of Cyrix 6x86 and 6x86MX CPUs by setting the NO_LOCK bit of CCR1.
# Otherwise, the NO_LOCK bit of CCR1 is cleared. (NOTE 3)
#
# CPU_DIRECT_MAPPED_CACHE sets L1 cache of Cyrix 486DLC CPU in direct
# mapped mode. Default is 2-way set associative mode.
#
# CPU_DISABLE_5X86_LSSER disables load store serialize (i.e., enables
# reorder). This option should not be used if you use memory mapped
# I/O device(s).
#
# CPU_ELAN enables support for AMD's ElanSC520 CPU.
# CPU_ELAN_PPS enables precision timestamp code.
# CPU_ELAN_XTAL sets the clock crystal frequency in Hz.
#
# CPU_ENABLE_LONGRUN enables support for Transmeta Crusoe LongRun
# technology which allows restricting power consumption of the CPU by
# using group of hw.crusoe.* sysctls.
#
# CPU_FASTER_5X86_FPU enables faster FPU exception handler.
#
# CPU_GEODE is for the SC1100 Geode embedded processor. This option
# is necessary because the i8254 timecounter is toast.
#
# CPU_I486_ON_386 enables CPU cache on i486 based CPU upgrade products
# for i386 machines.
#
# CPU_IORT defines I/O clock delay time (NOTE 1). Default values of
# I/O clock delay time on Cyrix 5x86 and 6x86 are 0 and 7, respectively
# (no clock delay).
#
# CPU_L2_LATENCY specifies the L2 cache latency value. This option is used
# only when CPU_PPRO2CELERON is defined and Mendocino Celeron is detected.
# The default value is 5.
#
# CPU_LOOP_EN prevents flushing the prefetch buffer if the destination
# of a jump is already present in the prefetch buffer on Cyrix 5x86(NOTE
# 1).
#
# CPU_PPRO2CELERON enables L2 cache of Mendocino Celeron CPUs. This option
# is useful when you use Socket 8 to Socket 370 converter, because most Pentium
# Pro BIOSs do not enable L2 cache of Mendocino Celeron CPUs.
#
# CPU_RSTK_EN enables return stack on Cyrix 5x86 (NOTE 1).
#
# CPU_SOEKRIS enables support for www.soekris.com hardware.
#
# CPU_SUSP_HLT enables suspend on HALT. If this option is set, CPU
# enters suspend mode following execution of HALT instruction.
#
# CPU_UPGRADE_HW_CACHE eliminates unneeded cache flush instruction(s).
#
# CPU_WT_ALLOC enables write allocation on Cyrix 6x86/6x86MX and AMD
# K5/K6/K6-2 CPUs.
#
# CYRIX_CACHE_WORKS enables CPU cache on Cyrix 486 CPUs with cache
# flush at hold state.
#
# CYRIX_CACHE_REALLY_WORKS enables (1) CPU cache on Cyrix 486 CPUs
# without cache flush at hold state, and (2) write-back CPU cache on
# Cyrix 6x86 whose revision < 2.7 (NOTE 2).
#
# NO_F00F_HACK disables the hack that prevents Pentiums (and ONLY
# Pentiums) from locking up when a LOCK CMPXCHG8B instruction is
# executed. This option is only needed if I586_CPU is also defined,
# and should be included for any non-Pentium CPU that defines it.
#
# NO_MEMORY_HOLE is an optimisation for systems with AMD K6 processors
# which indicates that the 15-16MB range is *definitely* not being
# occupied by an ISA memory hole.
#
# NOTE 1: The options CPU_BTB_EN, CPU_LOOP_EN, CPU_IORT,
# and CPU_RSTK_EN should not be used because of CPU bugs.
# These options may crash your system.
#
# NOTE 2: If CYRIX_CACHE_REALLY_WORKS is not set, CPU cache is enabled
# in write-through mode when revision < 2.7. If revision of Cyrix
# 6x86 >= 2.7, CPU cache is always enabled in write-back mode.
#
# NOTE 3: This option may cause failures for software that requires
# locked cycles in order to operate correctly.
#
options CPU_ATHLON_SSE_HACK
options CPU_BLUELIGHTNING_3X
options CPU_BLUELIGHTNING_FPU_OP_CACHE
options CPU_BTB_EN
options CPU_DIRECT_MAPPED_CACHE
options CPU_DISABLE_5X86_LSSER
options CPU_ELAN
options CPU_ELAN_PPS
options CPU_ELAN_XTAL=32768000
options CPU_ENABLE_LONGRUN
options CPU_FASTER_5X86_FPU
options CPU_GEODE
options CPU_I486_ON_386
options CPU_IORT
options CPU_L2_LATENCY=5
options CPU_LOOP_EN
options CPU_PPRO2CELERON
options CPU_RSTK_EN
options CPU_SOEKRIS
options CPU_SUSP_HLT
options CPU_UPGRADE_HW_CACHE
options CPU_WT_ALLOC
options CYRIX_CACHE_WORKS
options CYRIX_CACHE_REALLY_WORKS
#options NO_F00F_HACK
# Debug options
options NPX_DEBUG # enable npx debugging
#
# PERFMON causes the driver for Pentium/Pentium Pro performance counters
# to be compiled. See perfmon(4) for more information.
#
options PERFMON
#####################################################################
# NETWORKING OPTIONS
#
# DEVICE_POLLING adds support for mixed interrupt-polling handling
# of network device drivers, which has significant benefits in terms
# of robustness to overloads and responsiveness, as well as permitting
# accurate scheduling of the CPU time between kernel network processing
# and other activities. The drawback is a moderate (up to 1/HZ seconds)
# potential increase in response times.
# It is strongly recommended to use HZ=1000 or 2000 with DEVICE_POLLING
# to achieve smoother behaviour.
# Additionally, you can enable/disable polling at runtime with help of
# the ifconfig(8) utility, and select the CPU fraction reserved to
# userland with the sysctl variable kern.polling.user_frac
# (default 50, range 0..100).
#
# Not all device drivers support this mode of operation at the time of
# this writing. See polling(4) for more details.
options DEVICE_POLLING
# BPF_JITTER adds support for BPF just-in-time compiler.
options BPF_JITTER
# OpenFabrics Enterprise Distribution (Infiniband).
options OFED
options OFED_DEBUG_INIT
# Sockets Direct Protocol
options SDP
options SDP_DEBUG
# IP over Infiniband
options IPOIB
options IPOIB_DEBUG
options IPOIB_CM
#####################################################################
# CLOCK OPTIONS
# Provide read/write access to the memory in the clock chip.
device nvram # Access to rtc cmos via /dev/nvram
#####################################################################
# MISCELLANEOUS DEVICES AND OPTIONS
device speaker #Play IBM BASIC-style noises out your speaker
hint.speaker.0.at="isa"
hint.speaker.0.port="0x61"
device gzip #Exec gzipped a.out's. REQUIRES COMPAT_AOUT!
device apm_saver # Requires APM
#####################################################################
# HARDWARE BUS CONFIGURATION
#
# ISA bus
#
device isa
#
# Options for `isa':
#
# AUTO_EOI_1 enables the `automatic EOI' feature for the master 8259A
# interrupt controller. This saves about 0.7-1.25 usec for each interrupt.
# This option breaks suspend/resume on some portables.
#
# AUTO_EOI_2 enables the `automatic EOI' feature for the slave 8259A
# interrupt controller. This saves about 0.7-1.25 usec for each interrupt.
# Automatic EOI is documented not to work for the slave with the
# original i8259A, but it works for some clones and some integrated
# versions.
#
# MAXMEM specifies the amount of RAM on the machine; if this is not
# specified, FreeBSD will first read the amount of memory from the CMOS
# RAM, so the amount of memory will initially be limited to 64MB or 16MB
# depending on the BIOS. If the BIOS reports 64MB, a memory probe will
# then attempt to detect the installed amount of RAM. If this probe
# fails to detect >64MB RAM you will have to use the MAXMEM option.
# The amount is in kilobytes, so for a machine with 128MB of RAM, it would
# be 131072 (128 * 1024).
#
# BROKEN_KEYBOARD_RESET disables the use of the keyboard controller to
# reset the CPU for reboot. This is needed on some systems with broken
# keyboard controllers.
options AUTO_EOI_1
#options AUTO_EOI_2
options MAXMEM=(128*1024)
#options BROKEN_KEYBOARD_RESET
#
# AGP GART support
device agp
# AGP debugging.
options AGP_DEBUG
#####################################################################
# HARDWARE DEVICE CONFIGURATION
# To include support for VGA VESA video modes
options VESA
# Turn on extra debugging checks and output for VESA support.
options VESA_DEBUG
device dpms # DPMS suspend & resume via VESA BIOS
# x86 real mode BIOS emulator, required by atkbdc/dpms/vesa
options X86BIOS
#
# Hints for the non-optional Numeric Processing eXtension driver.
hint.npx.0.flags="0x0"
hint.npx.0.irq="13"
#
# `flags' for npx0:
# 0x01 don't use the npx registers to optimize bcopy.
# 0x02 don't use the npx registers to optimize bzero.
# 0x04 don't use the npx registers to optimize copyin or copyout.
# The npx registers are normally used to optimize copying and zeroing when
# all of the following conditions are satisfied:
# I586_CPU is an option
# the cpu is an i586 (perhaps not a Pentium)
# the probe for npx0 succeeds
# INT 16 exception handling works.
# Then copying and zeroing using the npx registers is normally 30-100% faster.
# The flags can be used to control cases where it doesn't work or is slower.
# Setting them at boot time using hints works right (the optimizations
# are not used until later in the bootstrap when npx0 is attached).
# Flag 0x08 automatically disables the i586 optimized routines.
#
#
# Optional devices:
#
# PS/2 mouse
device psm
hint.psm.0.at="atkbdc"
hint.psm.0.irq="12"
# Options for psm:
options PSM_HOOKRESUME #hook the system resume event, useful
#for some laptops
options PSM_RESETAFTERSUSPEND #reset the device at the resume event
# The keyboard controller; it controls the keyboard and the PS/2 mouse.
device atkbdc
hint.atkbdc.0.at="isa"
hint.atkbdc.0.port="0x060"
# The AT keyboard
device atkbd
hint.atkbd.0.at="atkbdc"
hint.atkbd.0.irq="1"
# Options for atkbd:
options ATKBD_DFLT_KEYMAP # specify the built-in keymap
makeoptions ATKBD_DFLT_KEYMAP=fr.dvorak
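#
# A keymap can also be loaded at runtime instead of being compiled in,
# for example via rc.conf(5), assuming the keymap file is installed:
# keymap="fr.dvorak"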
# `flags' for atkbd:
# 0x01 Force detection of keyboard, else we always assume a keyboard
# 0x02 Don't reset keyboard, useful for some newer ThinkPads
# 0x03 Force detection and avoid reset, might help with certain
# docking stations
# 0x04 Old-style (XT) keyboard support, useful for older ThinkPads
# Video card driver for VGA adapters.
device vga
hint.vga.0.at="isa"
# Options for vga:
# Try the following option if the mouse pointer is not drawn correctly
# or font does not seem to be loaded properly. May cause flicker on
# some systems.
options VGA_ALT_SEQACCESS
# If you can dispense with some vga driver features, you may want to
# use the following options to save some memory.
#options VGA_NO_FONT_LOADING # don't save/load font
#options VGA_NO_MODE_CHANGE # don't change video modes
# Older video cards may require this option for proper operation.
options VGA_SLOW_IOACCESS # do byte-wide i/o's to TS and GDC regs
# The following option probably won't work with the LCD displays.
options VGA_WIDTH90 # support 90 column modes
# Debugging.
options VGA_DEBUG
# vt(4) drivers.
device vt_vga
# Linear framebuffer driver for S3 VESA 1.2 cards. Works on top of VESA.
device s3pci
# 3Dfx Voodoo Graphics, Voodoo II /dev/3dfx CDEV support. This will create
# the /dev/3dfx0 device to work with glide implementations. This should get
# linked to /dev/3dfx and /dev/voodoo. Note that this is not the same as
# the tdfx DRI module from XFree86 and is completely unrelated.
#
# To enable Linuxulator support, one must also include COMPAT_LINUX in the
# config. The other option is to load both as modules.
device tdfx # Enable 3Dfx Voodoo support
device tdfx_linux # Enable Linuxulator support
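#
# To load both as modules instead, for example in /boot/loader.conf
# (module names are assumed here to match the device names; verify with
# kldstat(8) after boot):
# tdfx_load="YES"
# tdfx_linux_load="YES"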
#
# ACPI support using the Intel ACPI Component Architecture reference
# implementation.
#
# ACPI_DEBUG enables the use of the debug.acpi.level and debug.acpi.layer
# kernel environment variables to select initial debugging levels for the
# Intel ACPICA code. (Note that the Intel code must also have USE_DEBUGGER
# defined when it is built).
device acpi
options ACPI_DEBUG
options ACPI_DMAR
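#
# Example of selecting initial ACPICA debug output via the kernel
# environment variables described above, e.g. in /boot/loader.conf
# (layer and level names are listed in acpi(4)):
# debug.acpi.layer="ACPI_RESOURCES"
# debug.acpi.level="ACPI_LV_VERBOSE"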
# ACPI WMI Mapping driver
device acpi_wmi
# ACPI Asus Extras (LCD backlight/brightness, video output, etc.)
device acpi_asus
# ACPI Fujitsu Extras (Buttons)
device acpi_fujitsu
# ACPI extras driver for HP laptops
device acpi_hp
# ACPI extras driver for IBM laptops
device acpi_ibm
# ACPI Panasonic Extras (LCD backlight/brightness, video output, etc.)
device acpi_panasonic
# ACPI Sony extra (LCD brightness)
device acpi_sony
# ACPI Toshiba Extras (LCD backlight/brightness, video output, etc.)
device acpi_toshiba
# ACPI Video Extensions (LCD backlight/brightness, video output, etc.)
device acpi_video
# ACPI Docking Station
device acpi_dock
# ACPI ASOC ATK0110 ASUSTeK AI Booster (voltage, temperature and fan sensors)
device aibs
# The cpufreq(4) driver provides support for non-ACPI CPU frequency control
device cpufreq
# Direct Rendering modules for 3D acceleration.
device drm # DRM core module required by DRM drivers
device mach64drm # ATI Rage Pro, Rage Mobility P/M, Rage XL
device mgadrm # AGP Matrox G200, G400, G450, G550
device r128drm # ATI Rage 128
device savagedrm # S3 Savage3D, Savage4
device sisdrm # SiS 300/305, 540, 630
device tdfxdrm # 3dfx Voodoo 3/4/5 and Banshee
device viadrm # VIA
options DRM_DEBUG # Include debug printfs (slow)
#
# Network interfaces:
#
# bxe: Broadcom NetXtreme II (BCM5771X/BCM578XX) PCIe 10Gb Ethernet
# adapters.
# ce: Cronyx Tau-PCI/32 sync single/dual port G.703/E1 serial adaptor
# with 32 HDLC subchannels (requires sppp (default), or NETGRAPH if
# NETGRAPH_CRONYX is configured)
# cp: Cronyx Tau-PCI sync single/dual/four port
# V.35/RS-232/RS-530/RS-449/X.21/G.703/E1/E3/T3/STS-1
# serial adaptor (requires sppp (default), or NETGRAPH if
# NETGRAPH_CRONYX is configured)
# cs: IBM Etherjet and other Crystal Semi CS89x0-based adapters
# ctau: Cronyx Tau sync dual port V.35/RS-232/RS-530/RS-449/X.21/G.703/E1
# serial adaptor (requires sppp (default), or NETGRAPH if
# NETGRAPH_CRONYX is configured)
# ed: Western Digital and SMC 80xx; Novell NE1000 and NE2000; 3Com 3C503
# HP PC Lan+, various PC Card devices
# (requires miibus)
# ipw: Intel PRO/Wireless 2100 IEEE 802.11 adapter
# iwi: Intel PRO/Wireless 2200BG/2225BG/2915ABG IEEE 802.11 adapters
# Requires the iwi firmware module
# iwn: Intel Wireless WiFi Link 1000/105/135/2000/4965/5000/6000/6050 abgn
# 802.11 network adapters
# Requires the iwn firmware module
# mthca: Mellanox HCA InfiniBand
# mlx4ib: Mellanox ConnectX HCA InfiniBand
# mlx4en: Mellanox ConnectX HCA Ethernet
# nfe: nVidia nForce MCP on-board Ethernet Networking (BSD open source)
# sbni: Granch SBNI12-xx ISA and PCI adapters
# vmx: VMware VMXNET3 Ethernet (BSD open source)
# wpi: Intel 3945ABG Wireless LAN controller
# Requires the wpi firmware module
# Order for ISA/EISA devices is important here
device bxe # Broadcom NetXtreme II BCM5771X/BCM578XX 10GbE
device ce
device cp
device cs # Crystal Semiconductor CS89x0 NIC
hint.cs.0.at="isa"
hint.cs.0.port="0x300"
device ctau
hint.ctau.0.at="isa"
hint.ctau.0.port="0x240"
hint.ctau.0.irq="15"
hint.ctau.0.drq="7"
#options NETGRAPH_CRONYX # Enable NETGRAPH support for Cronyx adapter(s)
device ed # NE[12]000, SMC Ultra, 3c503, DS8390 cards
options ED_3C503
options ED_HPP
options ED_SIC
hint.ed.0.at="isa"
hint.ed.0.port="0x280"
hint.ed.0.irq="5"
hint.ed.0.maddr="0xd8000"
device ipw # Intel 2100 wireless NICs.
device iwi # Intel 2200BG/2225BG/2915ABG wireless NICs.
device iwn # Intel 4965/1000/5000/6000 wireless NICs.
# Hint for the i386-only ISA front-end of le(4).
hint.le.0.at="isa"
hint.le.0.port="0x280"
hint.le.0.irq="10"
hint.le.0.drq="0"
device mthca # Mellanox HCA InfiniBand
device mlx4 # Shared code module between IB and Ethernet
device mlx4ib # Mellanox ConnectX HCA InfiniBand
device mlx4en # Mellanox ConnectX HCA Ethernet
device nfe # nVidia nForce MCP on-board Ethernet
device sbni
hint.sbni.0.at="isa"
hint.sbni.0.port="0x210"
hint.sbni.0.irq="0xefdead"
hint.sbni.0.flags="0"
device vmx # VMware VMXNET3 Ethernet
device wpi # Intel 3945ABG wireless NICs.
# IEEE 802.11 adapter firmware modules
# Intel PRO/Wireless 2100 firmware:
# ipwfw: BSS/IBSS/monitor mode firmware
# ipwbssfw: BSS mode firmware
# ipwibssfw: IBSS mode firmware
# ipwmonitorfw: Monitor mode firmware
# Intel PRO/Wireless 2200BG/2225BG/2915ABG firmware:
# iwifw: BSS/IBSS/monitor mode firmware
# iwibssfw: BSS mode firmware
# iwiibssfw: IBSS mode firmware
# iwimonitorfw: Monitor mode firmware
# Intel Wireless WiFi Link 4965/1000/5000/6000 series firmware:
# iwnfw: Single module to support all devices
# iwn1000fw: Specific module for the 1000 only
# iwn105fw: Specific module for the 105 only
# iwn135fw: Specific module for the 135 only
# iwn2000fw: Specific module for the 2000 only
# iwn2030fw: Specific module for the 2030 only
# iwn4965fw: Specific module for the 4965 only
# iwn5000fw: Specific module for the 5000 only
# iwn5150fw: Specific module for the 5150 only
# iwn6000fw: Specific module for the 6000 only
# iwn6000g2afw: Specific module for the 6000g2a only
# iwn6000g2bfw: Specific module for the 6000g2b only
# iwn6050fw: Specific module for the 6050 only
# wpifw: Intel 3945ABG Wireless LAN Controller firmware
device iwifw
device iwibssfw
device iwiibssfw
device iwimonitorfw
device ipwfw
device ipwbssfw
device ipwibssfw
device ipwmonitorfw
device iwnfw
device iwn1000fw
device iwn105fw
device iwn135fw
device iwn2000fw
device iwn2030fw
device iwn4965fw
device iwn5000fw
device iwn5150fw
device iwn6000fw
device iwn6000g2afw
device iwn6000g2bfw
device iwn6050fw
device wpifw
#
# Non-Transparent Bridge (NTB) drivers
#
device if_ntb # Virtual NTB network interface
device ntb_transport # NTB packet transport driver
device ntb # NTB hardware interface
device ntb_hw_intel # Intel NTB hardware driver
device ntb_hw_plx # PLX NTB hardware driver
#
# ATA raid adapters
#
device pst
#
# Areca 11xx and 12xx series of SATA II RAID controllers.
# CAM is required.
#
device arcmsr # Areca SATA II RAID
#
# 3ware 9000 series PATA/SATA RAID controller driver and options.
# The driver is implemented as a SIM and so needs the CAM infrastructure.
#
options TWA_DEBUG # 0-10; 10 prints the most messages.
device twa # 3ware 9000 series PATA/SATA RAID
#
# Adaptec FSA RAID controllers, including integrated DELL controllers,
# the Dell PERC 2/QC and the HP NetRAID-4M
device aac
device aacp # SCSI Passthrough interface (optional, CAM required)
#
# Adaptec by PMC RAID controllers, Series 6/7/8 and upcoming families
device aacraid # Container interface, CAM required
#
# Highpoint RocketRAID 27xx.
device hpt27xx
#
# Highpoint RocketRAID 182x.
device hptmv
#
# Highpoint DC7280 and R750.
device hptnr
#
# Highpoint RocketRAID. Supports RR172x, RR222x, RR2240, RR232x, RR2340,
# RR2210, RR174x, RR2522, RR231x, RR230x.
device hptrr
#
# Highpoint RocketRaid 3xxx series SATA RAID
device hptiop
#
# Intel integrated Memory Controller (iMC) SMBus controller
# Sandybridge-Xeon, Ivybridge-Xeon, Haswell-Xeon, Broadwell-Xeon
device imcsmb
#
# IBM (now Adaptec) ServeRAID controllers
device ips
#
# Intel C600 (Patsburg) integrated SAS controller
device isci
options ISCI_LOGGING # enable debugging in isci HAL
#
# NVM Express (NVMe) support
device nvme # base NVMe driver
device nvd # expose NVMe namespaces as disks, depends on nvme
#
# PMC-Sierra SAS/SATA controller
device pmspcv
#
# SafeNet crypto driver: can be moved to the MI NOTES as soon as
# it's tested on a big-endian machine
#
device safe # SafeNet 1141
options SAFE_DEBUG # enable debugging support: hw.safe.debug
options SAFE_RNDTEST # enable rndtest support
#
# glxiic is an I2C driver for the AMD Geode LX CS5536 System Management Bus
# controller. Requires 'device iicbus'.
#
device glxiic # AMD Geode LX CS5536 System Management Bus
#
# glxsb is a driver for the Security Block in AMD Geode LX processors.
# Requires 'device crypto'.
#
device glxsb # AMD Geode LX Security Block
#
# VirtIO support
#
# The virtio entry provides a generic bus for use by the device drivers.
# It must be combined with an interface that communicates with the host.
# Multiple such interfaces are defined by the VirtIO specification. FreeBSD
# only has support for PCI. Therefore, virtio_pci must be statically
# compiled in or loaded as a module for the device drivers to function.
#
device virtio # Generic VirtIO bus (required)
device virtio_pci # VirtIO PCI Interface
device vtnet # VirtIO Ethernet device
device virtio_blk # VirtIO Block device
device virtio_scsi # VirtIO SCSI device
device virtio_balloon # VirtIO Memory Balloon device
device virtio_random # VirtIO Entropy device
device virtio_console # VirtIO Console device
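#
# Example of the all-modules alternative described above, e.g. in
# /boot/loader.conf (module names follow the usual <name>_load
# convention; the vtnet module is named if_vtnet):
# virtio_load="YES"
# virtio_pci_load="YES"
# virtio_blk_load="YES"
# if_vtnet_load="YES"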
device hyperv # HyperV drivers
#####################################################################
#
# Miscellaneous hardware:
#
# apm: Laptop Advanced Power Management (experimental)
# ipmi: Intelligent Platform Management Interface
# smapi: System Management Application Program Interface driver
# smbios: DMI/SMBIOS entry point
# vpd: Vital Product Data kernel interface
# pbio: Parallel (8255 PPI) basic I/O (mode 0) port (e.g. Advantech PCL-724)
# asmc: Apple System Management Controller
# si: Specialix International SI/XIO or SX intelligent serial card driver
# tpm: Trusted Platform Module
# Notes on APM
# The flags take the following meaning for apm0:
# 0x0020 Statclock is broken.
# Notes on the Specialix SI/XIO driver:
# The host card is memory mapped, not IO mapped.
# The Rev 1 host cards use a 64K chunk, on a 32K boundary.
# The Rev 2 host cards use a 32K chunk, on a 32K boundary.
# The cards can use an IRQ of 11, 12 or 15.
# Notes on the Sony Programmable I/O controller
# This is a temporary driver that should someday be replaced by something
# that hooks into the ACPI layer. The device is hooked to the PIIX4's
# General Device 10 decoder, which means you have to fiddle with PCI
# registers to map it in, even though it is otherwise treated here as
# an ISA device. At the moment, the driver polls, although the device
# is capable of generating interrupts. It is largely undocumented.
# The port location in the hint is where you WANT the device to be
# mapped. 0x10a0 seems to be traditional. At the moment the jogdial
# is the only thing truly supported, but apparently a fair percentage
# of the Vaio extra features are controlled by this device.
device apm
hint.apm.0.flags="0x20"
device ipmi
device smapi
device smbios
device vpd
device pbio
hint.pbio.0.at="isa"
hint.pbio.0.port="0x360"
device asmc
device tpm
device padlock_rng # VIA Padlock RNG
device rdrand_rng # Intel Bull Mountain RNG
device aesni # AES-NI OpenCrypto module
#
# Laptop/Notebook options:
#
# See also:
# apm under `Miscellaneous hardware'
# above.
# For older notebooks that signal a powerfail condition (external
# power supply dropped, or battery state low) by issuing an NMI:
options POWERFAIL_NMI # make it beep instead of panicking
#
# I2C Bus
#
# Philips i2c bus support is provided by the `iicbus' device.
#
# Supported interfaces:
# pcf Philips PCF8584 ISA-bus controller
#
device pcf
hint.pcf.0.at="isa"
hint.pcf.0.port="0x320"
hint.pcf.0.irq="5"
#
# Hardware watchdog timers:
#
# ichwd: Intel ICH watchdog timer
# amdsbwd: AMD SB7xx watchdog timer
# viawd: VIA south bridge watchdog timer
# wbwd: Winbond watchdog timer
#
device ichwd
device amdsbwd
device viawd
device wbwd
#
# Temperature sensors:
#
# coretemp: on-die sensor on Intel Core and newer CPUs
# amdtemp: on-die sensor on AMD K8/K10/K11 CPUs
#
device coretemp
device amdtemp
#
# CPU control pseudo-device. Provides access to MSRs, CPUID info and
# microcode update feature.
#
device cpuctl
#
# System Management Bus (SMB)
#
options ENABLE_ALART # Control alarm on Intel intpm driver
#
# Set the number of PV entries per process. Increasing this can
# stop panics related to heavy use of shared memory. However, that can
# (combined with large amounts of physical memory) cause panics at
# boot time due to the kernel running out of VM space.
#
# If you're tweaking this, you might also want to increase the sysctls
# "vm.v_free_min", "vm.v_free_reserved", and "vm.v_free_target".
#
# The value below is one more than the default.
#
options PMAP_SHPGPERPROC=201
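#
# The same value can usually be set without a rebuild via the loader
# tunable of the same name, e.g. in /boot/loader.conf:
# vm.pmap.shpgperproc="201"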
#
# Number of initial kernel page table pages used for early bootstrap.
# This number should include enough pages to map the kernel, any
# modules or other data loaded with the kernel by the loader, and data
# structures allocated before the VM system is initialized such as the
# vm_page_t array. Each page table page maps 4MB (2MB with PAE).
#
options NKPT=31
#####################################################################
# ABI Emulation
-# Emulate spx device for client side of SVR3 local X interface
-options SPX_HACK
-
# Enable (32-bit) a.out binary support
options COMPAT_AOUT
# Enable 32-bit runtime support for CloudABI binaries.
options COMPAT_CLOUDABI32
# Enable Linux ABI emulation
options COMPAT_LINUX
# Enable the linux-like proc filesystem support (requires COMPAT_LINUX
# and PSEUDOFS)
options LINPROCFS
# Enable the linux-like sys filesystem support (requires COMPAT_LINUX
# and PSEUDOFS)
options LINSYSFS
# Enable NDIS binary driver support
options NDISAPI
device ndis
#####################################################################
# VM OPTIONS
# KSTACK_PAGES is the number of memory pages to assign to the kernel
# stack of each thread.
options KSTACK_PAGES=5
# Enable detailed accounting by the PV entry allocator.
options PV_STATS
#####################################################################
# More undocumented options for linting.
# Note that documenting these is not considered an affront.
options FB_INSTALL_CDEV # install a CDEV entry in /dev
options I586_PMC_GUPROF=0x70000
options KBDIO_DEBUG=2
options KBD_MAXRETRY=4
options KBD_MAXWAIT=6
options KBD_RESETDELAY=201
options PSM_DEBUG=1
options TIMER_FREQ=((14318182+6)/12)
options VM_KMEM_SIZE
options VM_KMEM_SIZE_MAX
options VM_KMEM_SIZE_SCALE
Index: projects/clang800-import/sys/i386/i386/pmap.c
===================================================================
--- projects/clang800-import/sys/i386/i386/pmap.c (revision 343955)
+++ projects/clang800-import/sys/i386/i386/pmap.c (revision 343956)
@@ -1,6200 +1,6200 @@
/*-
* SPDX-License-Identifier: BSD-4-Clause
*
* Copyright (c) 1991 Regents of the University of California.
* All rights reserved.
* Copyright (c) 1994 John S. Dyson
* All rights reserved.
* Copyright (c) 1994 David Greenman
* All rights reserved.
* Copyright (c) 2005-2010 Alan L. Cox <alc@cs.rice.edu>
* All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* the Systems Programming Group of the University of Utah Computer
* Science Department and William Jolitz of UUNET Technologies Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the University of
* California, Berkeley and its contributors.
* 4. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: @(#)pmap.c 7.7 (Berkeley) 5/12/91
*/
/*-
* Copyright (c) 2003 Networks Associates Technology, Inc.
* All rights reserved.
* Copyright (c) 2018 The FreeBSD Foundation
* All rights reserved.
*
* This software was developed for the FreeBSD Project by Jake Burkholder,
* Safeport Network Services, and Network Associates Laboratories, the
* Security Research Division of Network Associates, Inc. under
* DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the DARPA
* CHATS research program.
*
* Portions of this software were developed by
* Konstantin Belousov <kib@FreeBSD.org> under sponsorship from
* the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* Manages physical address maps.
*
* Since the information managed by this module is
* also stored by the logical address mapping module,
* this module may throw away valid virtual-to-physical
* mappings at almost any time. However, invalidations
* of virtual-to-physical mappings must be done as
* requested.
*
* In order to cope with hardware architectures which
* make virtual-to-physical map invalidates expensive,
* this module may delay invalidate or reduced protection
* operations until such time as they are actually
* necessary. This module is given full information as
* to which processors are currently using which maps,
* and to when physical maps must be made correct.
*/
#include "opt_apic.h"
#include "opt_cpu.h"
#include "opt_pmap.h"
#include "opt_smp.h"
#include "opt_vm.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mman.h>
#include <sys/msgbuf.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/rwlock.h>
#include <sys/sf_buf.h>
#include <sys/sx.h>
#include <sys/vmmeter.h>
#include <sys/sched.h>
#include <sys/sysctl.h>
#include <sys/smp.h>
#include <sys/vmem.h>
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_kern.h>
#include <vm/vm_page.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_extern.h>
#include <vm/vm_pageout.h>
#include <vm/vm_pager.h>
#include <vm/vm_phys.h>
#include <vm/vm_radix.h>
#include <vm/vm_reserv.h>
#include <vm/uma.h>
#ifdef DEV_APIC
#include <sys/bus.h>
#include <machine/intr_machdep.h>
#include <x86/apicvar.h>
#endif
#include <x86/ifunc.h>
#include <machine/bootinfo.h>
#include <machine/cpu.h>
#include <machine/cputypes.h>
#include <machine/md_var.h>
#include <machine/pcb.h>
#include <machine/specialreg.h>
#ifdef SMP
#include <machine/smp.h>
#endif
#include <machine/pmap_base.h>
#if !defined(DIAGNOSTIC)
#ifdef __GNUC_GNU_INLINE__
#define PMAP_INLINE __attribute__((__gnu_inline__)) inline
#else
#define PMAP_INLINE extern inline
#endif
#else
#define PMAP_INLINE
#endif
#ifdef PV_STATS
#define PV_STAT(x) do { x ; } while (0)
#else
#define PV_STAT(x) do { } while (0)
#endif
#define pa_index(pa) ((pa) >> PDRSHIFT)
#define pa_to_pvh(pa) (&pv_table[pa_index(pa)])
/*
* PTmap is recursive pagemap at top of virtual address space.
* Within PTmap, the page directory can be found (third indirection).
*/
#define PTmap ((pt_entry_t *)(PTDPTDI << PDRSHIFT))
#define PTD ((pd_entry_t *)((PTDPTDI << PDRSHIFT) + (PTDPTDI * PAGE_SIZE)))
#define PTDpde ((pd_entry_t *)((PTDPTDI << PDRSHIFT) + (PTDPTDI * PAGE_SIZE) + \
(PTDPTDI * PDESIZE)))
/*
* Translate a virtual address to the kernel virtual address of its page table
* entry (PTE). This can be used recursively. If the address of a PTE as
* previously returned by this macro is itself given as the argument, then the
* address of the page directory entry (PDE) that maps the PTE will be
* returned.
*
* This macro may be used before pmap_bootstrap() is called.
*/
#define vtopte(va) (PTmap + i386_btop(va))
/*
* Get PDEs and PTEs for user/kernel address space
*/
#define pmap_pde(m, v) (&((m)->pm_pdir[(vm_offset_t)(v) >> PDRSHIFT]))
#define pdir_pde(m, v) (m[(vm_offset_t)(v) >> PDRSHIFT])
#define pmap_pde_v(pte) ((*(int *)pte & PG_V) != 0)
#define pmap_pte_w(pte) ((*(int *)pte & PG_W) != 0)
#define pmap_pte_m(pte) ((*(int *)pte & PG_M) != 0)
#define pmap_pte_u(pte) ((*(int *)pte & PG_A) != 0)
#define pmap_pte_v(pte) ((*(int *)pte & PG_V) != 0)
#define pmap_pte_set_w(pte, v) ((v) ? atomic_set_int((u_int *)(pte), PG_W) : \
atomic_clear_int((u_int *)(pte), PG_W))
#define pmap_pte_set_prot(pte, v) ((*(int *)pte &= ~PG_PROT), (*(int *)pte |= (v)))
_Static_assert(sizeof(struct pmap) <= sizeof(struct pmap_KBI),
"pmap_KBI");
static int pgeflag = 0; /* PG_G or-in */
static int pseflag = 0; /* PG_PS or-in */
static int nkpt = NKPT;
#ifdef PMAP_PAE_COMP
pt_entry_t pg_nx;
static uma_zone_t pdptzone;
#endif
_Static_assert(VM_MAXUSER_ADDRESS == VADDR(TRPTDI, 0), "VM_MAXUSER_ADDRESS");
_Static_assert(VM_MAX_KERNEL_ADDRESS <= VADDR(PTDPTDI, 0),
"VM_MAX_KERNEL_ADDRESS");
_Static_assert(PMAP_MAP_LOW == VADDR(LOWPTDI, 0), "PMAP_MAP_LOW");
_Static_assert(KERNLOAD == (KERNPTDI << PDRSHIFT), "KERNLOAD");
extern int pat_works;
extern int pg_ps_enabled;
extern int elf32_nxstack;
#define PAT_INDEX_SIZE 8
static int pat_index[PAT_INDEX_SIZE]; /* cache mode to PAT index conversion */
/*
* pmap_mapdev support pre initialization (i.e. console)
*/
#define PMAP_PREINIT_MAPPING_COUNT 8
static struct pmap_preinit_mapping {
vm_paddr_t pa;
vm_offset_t va;
vm_size_t sz;
int mode;
} pmap_preinit_mapping[PMAP_PREINIT_MAPPING_COUNT];
static int pmap_initialized;
static struct rwlock_padalign pvh_global_lock;
/*
* Data for the pv entry allocation mechanism
*/
static TAILQ_HEAD(pch, pv_chunk) pv_chunks = TAILQ_HEAD_INITIALIZER(pv_chunks);
extern int pv_entry_max, pv_entry_count;
static int pv_entry_high_water = 0;
static struct md_page *pv_table;
extern int shpgperproc;
static struct pv_chunk *pv_chunkbase; /* KVA block for pv_chunks */
static int pv_maxchunks; /* How many chunks we have KVA for */
static vm_offset_t pv_vafree; /* freelist stored in the PTE */
/*
* All those kernel PT submaps that BSD is so fond of
*/
static pt_entry_t *CMAP3;
static pd_entry_t *KPTD;
static caddr_t CADDR3;
/*
* Crashdump maps.
*/
static caddr_t crashdumpmap;
static pt_entry_t *PMAP1 = NULL, *PMAP2, *PMAP3;
static pt_entry_t *PADDR1 = NULL, *PADDR2, *PADDR3;
#ifdef SMP
static int PMAP1cpu, PMAP3cpu;
extern int PMAP1changedcpu;
#endif
extern int PMAP1changed;
extern int PMAP1unchanged;
static struct mtx PMAP2mutex;
/*
* Internal flags for pmap_enter()'s helper functions.
*/
#define PMAP_ENTER_NORECLAIM 0x1000000 /* Don't reclaim PV entries. */
#define PMAP_ENTER_NOREPLACE 0x2000000 /* Don't replace mappings. */
static void free_pv_chunk(struct pv_chunk *pc);
static void free_pv_entry(pmap_t pmap, pv_entry_t pv);
static pv_entry_t get_pv_entry(pmap_t pmap, boolean_t try);
static void pmap_pv_demote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa);
static bool pmap_pv_insert_pde(pmap_t pmap, vm_offset_t va, pd_entry_t pde,
u_int flags);
#if VM_NRESERVLEVEL > 0
static void pmap_pv_promote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa);
#endif
static void pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va);
static pv_entry_t pmap_pvh_remove(struct md_page *pvh, pmap_t pmap,
vm_offset_t va);
static int pmap_pvh_wired_mappings(struct md_page *pvh, int count);
static boolean_t pmap_demote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va);
static bool pmap_enter_4mpage(pmap_t pmap, vm_offset_t va, vm_page_t m,
vm_prot_t prot);
static int pmap_enter_pde(pmap_t pmap, vm_offset_t va, pd_entry_t newpde,
u_int flags, vm_page_t m);
static vm_page_t pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va,
vm_page_t m, vm_prot_t prot, vm_page_t mpte);
static int pmap_insert_pt_page(pmap_t pmap, vm_page_t mpte);
static void pmap_invalidate_pde_page(pmap_t pmap, vm_offset_t va,
pd_entry_t pde);
static void pmap_fill_ptp(pt_entry_t *firstpte, pt_entry_t newpte);
static boolean_t pmap_is_modified_pvh(struct md_page *pvh);
static boolean_t pmap_is_referenced_pvh(struct md_page *pvh);
static void pmap_kenter_attr(vm_offset_t va, vm_paddr_t pa, int mode);
static void pmap_kenter_pde(vm_offset_t va, pd_entry_t newpde);
static void pmap_pde_attr(pd_entry_t *pde, int cache_bits);
#if VM_NRESERVLEVEL > 0
static void pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va);
#endif
static boolean_t pmap_protect_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t sva,
vm_prot_t prot);
static void pmap_pte_attr(pt_entry_t *pte, int cache_bits);
static void pmap_remove_pde(pmap_t pmap, pd_entry_t *pdq, vm_offset_t sva,
struct spglist *free);
static int pmap_remove_pte(pmap_t pmap, pt_entry_t *ptq, vm_offset_t sva,
struct spglist *free);
static vm_page_t pmap_remove_pt_page(pmap_t pmap, vm_offset_t va);
static void pmap_remove_page(struct pmap *pmap, vm_offset_t va,
struct spglist *free);
static bool pmap_remove_ptes(pmap_t pmap, vm_offset_t sva, vm_offset_t eva,
struct spglist *free);
static void pmap_remove_entry(struct pmap *pmap, vm_page_t m,
vm_offset_t va);
static void pmap_insert_entry(pmap_t pmap, vm_offset_t va, vm_page_t m);
static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va,
vm_page_t m);
static void pmap_update_pde(pmap_t pmap, vm_offset_t va, pd_entry_t *pde,
pd_entry_t newpde);
static void pmap_update_pde_invalidate(vm_offset_t va, pd_entry_t newpde);
static vm_page_t pmap_allocpte(pmap_t pmap, vm_offset_t va, u_int flags);
static vm_page_t _pmap_allocpte(pmap_t pmap, u_int ptepindex, u_int flags);
static void _pmap_unwire_ptp(pmap_t pmap, vm_page_t m, struct spglist *free);
static pt_entry_t *pmap_pte_quick(pmap_t pmap, vm_offset_t va);
static void pmap_pte_release(pt_entry_t *pte);
static int pmap_unuse_pt(pmap_t, vm_offset_t, struct spglist *);
#ifdef PMAP_PAE_COMP
static void *pmap_pdpt_allocf(uma_zone_t zone, vm_size_t bytes, int domain,
uint8_t *flags, int wait);
#endif
static void pmap_init_trm(void);
static void pmap_invalidate_all_int(pmap_t pmap);
static __inline void pagezero(void *page);
CTASSERT(1 << PDESHIFT == sizeof(pd_entry_t));
CTASSERT(1 << PTESHIFT == sizeof(pt_entry_t));
extern char _end[];
extern u_long physfree; /* phys addr of next free page */
extern u_long vm86phystk; /* PA of vm86/bios stack */
extern u_long vm86paddr; /* address of vm86 region */
extern int vm86pa; /* phys addr of vm86 region */
extern u_long KERNend; /* phys addr end of kernel (just after bss) */
#ifdef PMAP_PAE_COMP
pd_entry_t *IdlePTD_pae; /* phys addr of kernel PTD */
pdpt_entry_t *IdlePDPT; /* phys addr of kernel PDPT */
pt_entry_t *KPTmap_pae; /* address of kernel page tables */
#define IdlePTD IdlePTD_pae
#define KPTmap KPTmap_pae
#else
pd_entry_t *IdlePTD_nopae;
pt_entry_t *KPTmap_nopae;
#define IdlePTD IdlePTD_nopae
#define KPTmap KPTmap_nopae
#endif
extern u_long KPTphys; /* phys addr of kernel page tables */
extern u_long tramp_idleptd;
static u_long
allocpages(u_int cnt, u_long *physfree)
{
u_long res;
res = *physfree;
*physfree += PAGE_SIZE * cnt;
bzero((void *)res, PAGE_SIZE * cnt);
return (res);
}
static void
pmap_cold_map(u_long pa, u_long va, u_long cnt)
{
pt_entry_t *pt;
for (pt = (pt_entry_t *)KPTphys + atop(va); cnt > 0;
cnt--, pt++, va += PAGE_SIZE, pa += PAGE_SIZE)
*pt = pa | PG_V | PG_RW | PG_A | PG_M;
}
static void
pmap_cold_mapident(u_long pa, u_long cnt)
{
pmap_cold_map(pa, pa, cnt);
}
_Static_assert(LOWPTDI * 2 * NBPDR == KERNBASE,
"Broken double-map of zero PTD");
static void
__CONCAT(PMTYPE, remap_lower)(bool enable)
{
int i;
for (i = 0; i < LOWPTDI; i++)
IdlePTD[i] = enable ? IdlePTD[LOWPTDI + i] : 0;
load_cr3(rcr3()); /* invalidate TLB */
}
/*
* Called from locore.s before paging is enabled. Sets up the first
* kernel page table. Since the kernel is mapped with PA == VA, this code
* does not require relocations.
*/
void
__CONCAT(PMTYPE, cold)(void)
{
pt_entry_t *pt;
u_long a;
u_int cr3, ncr4;
physfree = (u_long)&_end;
if (bootinfo.bi_esymtab != 0)
physfree = bootinfo.bi_esymtab;
if (bootinfo.bi_kernend != 0)
physfree = bootinfo.bi_kernend;
physfree = roundup2(physfree, NBPDR);
KERNend = physfree;
/* Allocate Kernel Page Tables */
KPTphys = allocpages(NKPT, &physfree);
KPTmap = (pt_entry_t *)KPTphys;
/* Allocate Page Table Directory */
#ifdef PMAP_PAE_COMP
/* XXX only need 32 bytes (easier for now) */
IdlePDPT = (pdpt_entry_t *)allocpages(1, &physfree);
#endif
IdlePTD = (pd_entry_t *)allocpages(NPGPTD, &physfree);
/*
* Allocate KSTACK. Leave a guard page between IdlePTD and
* proc0kstack, to control stack overflow for thread0 and
* prevent corruption of the page table. We leak the guard
* physical memory due to 1:1 mappings.
*/
allocpages(1, &physfree);
proc0kstack = allocpages(TD0_KSTACK_PAGES, &physfree);
/* vm86/bios stack */
vm86phystk = allocpages(1, &physfree);
/* pgtable + ext + IOPAGES */
vm86paddr = vm86pa = allocpages(3, &physfree);
/* Install page tables into PTD. Page table page 1 is wasted. */
for (a = 0; a < NKPT; a++)
IdlePTD[a] = (KPTphys + ptoa(a)) | PG_V | PG_RW | PG_A | PG_M;
#ifdef PMAP_PAE_COMP
/* PAE install PTD pointers into PDPT */
for (a = 0; a < NPGPTD; a++)
IdlePDPT[a] = ((u_int)IdlePTD + ptoa(a)) | PG_V;
#endif
/*
* Install recursive mapping for kernel page tables into
* itself.
*/
for (a = 0; a < NPGPTD; a++)
IdlePTD[PTDPTDI + a] = ((u_int)IdlePTD + ptoa(a)) | PG_V |
PG_RW;
/*
* Initialize page table pages mapping physical address zero
* through the (physical) end of the kernel. Many of these
* pages must be reserved, and we reserve them all and map
* them linearly for convenience. We do this even if we've
* enabled PSE above; we'll just switch the corresponding
* kernel PDEs before we turn on paging.
*
* This and all other page table entries allow read and write
* access for various reasons. Kernel mappings never have any
* access restrictions.
*/
pmap_cold_mapident(0, atop(NBPDR) * LOWPTDI);
pmap_cold_map(0, NBPDR * LOWPTDI, atop(NBPDR) * LOWPTDI);
pmap_cold_mapident(KERNBASE, atop(KERNend - KERNBASE));
/* Map page table directory */
#ifdef PMAP_PAE_COMP
pmap_cold_mapident((u_long)IdlePDPT, 1);
#endif
pmap_cold_mapident((u_long)IdlePTD, NPGPTD);
/* Map early KPTmap. It is really pmap_cold_mapident. */
pmap_cold_map(KPTphys, (u_long)KPTmap, NKPT);
/* Map proc0kstack */
pmap_cold_mapident(proc0kstack, TD0_KSTACK_PAGES);
/* ISA hole already mapped */
pmap_cold_mapident(vm86phystk, 1);
pmap_cold_mapident(vm86pa, 3);
/* Map page 0 into the vm86 page table */
*(pt_entry_t *)vm86pa = 0 | PG_RW | PG_U | PG_A | PG_M | PG_V;
/* ...likewise for the ISA hole for vm86 */
for (pt = (pt_entry_t *)vm86pa + atop(ISA_HOLE_START), a = 0;
a < atop(ISA_HOLE_LENGTH); a++, pt++)
*pt = (ISA_HOLE_START + ptoa(a)) | PG_RW | PG_U | PG_A |
PG_M | PG_V;
/* Enable PSE, PGE, VME, and PAE if configured. */
ncr4 = 0;
if ((cpu_feature & CPUID_PSE) != 0) {
ncr4 |= CR4_PSE;
pseflag = PG_PS;
/*
* Superpage mapping of the kernel text. Existing 4k
* page table pages are wasted.
*/
for (a = KERNBASE; a < KERNend; a += NBPDR)
IdlePTD[a >> PDRSHIFT] = a | PG_PS | PG_A | PG_M |
PG_RW | PG_V;
}
if ((cpu_feature & CPUID_PGE) != 0) {
ncr4 |= CR4_PGE;
pgeflag = PG_G;
}
ncr4 |= (cpu_feature & CPUID_VME) != 0 ? CR4_VME : 0;
#ifdef PMAP_PAE_COMP
ncr4 |= CR4_PAE;
#endif
if (ncr4 != 0)
load_cr4(rcr4() | ncr4);
/* Now enable paging */
#ifdef PMAP_PAE_COMP
cr3 = (u_int)IdlePDPT;
#else
cr3 = (u_int)IdlePTD;
#endif
tramp_idleptd = cr3;
load_cr3(cr3);
load_cr0(rcr0() | CR0_PG);
/*
* Now running relocated at KERNBASE where the system is
* linked to run.
*/
/*
* Remove the lowest part of the double mapping of low memory
* to get some null pointer checks.
*/
__CONCAT(PMTYPE, remap_lower)(false);
kernel_vm_end = /* 0 + */ NKPT * NBPDR;
#ifdef PMAP_PAE_COMP
i386_pmap_VM_NFREEORDER = VM_NFREEORDER_PAE;
i386_pmap_VM_LEVEL_0_ORDER = VM_LEVEL_0_ORDER_PAE;
i386_pmap_PDRSHIFT = PDRSHIFT_PAE;
#else
i386_pmap_VM_NFREEORDER = VM_NFREEORDER_NOPAE;
i386_pmap_VM_LEVEL_0_ORDER = VM_LEVEL_0_ORDER_NOPAE;
i386_pmap_PDRSHIFT = PDRSHIFT_NOPAE;
#endif
}
static void
__CONCAT(PMTYPE, set_nx)(void)
{
#ifdef PMAP_PAE_COMP
if ((amd_feature & AMDID_NX) == 0)
return;
pg_nx = PG_NX;
elf32_nxstack = 1;
/* EFER.EFER_NXE is set in initializecpu(). */
#endif
}
/*
* Bootstrap the system enough to run with virtual memory.
*
* On the i386 this is called after pmap_cold() has created the
* initial kernel page table and enabled paging, and just syncs the
* pmap module with what has already been done.
*/
static void
__CONCAT(PMTYPE, bootstrap)(vm_paddr_t firstaddr)
{
vm_offset_t va;
pt_entry_t *pte, *unused;
struct pcpu *pc;
u_long res;
int i;
res = atop(firstaddr - (vm_paddr_t)KERNLOAD);
/*
* Add a physical memory segment (vm_phys_seg) corresponding to the
* preallocated kernel page table pages so that vm_page structures
* representing these pages will be created. The vm_page structures
* are required for promotion of the corresponding kernel virtual
* addresses to superpage mappings.
*/
vm_phys_add_seg(KPTphys, KPTphys + ptoa(nkpt));
/*
* Initialize the first available kernel virtual address.
* However, using "firstaddr" may waste a few pages of the
* kernel virtual address space, because pmap_cold() may not
* have mapped every physical page that it allocated.
* Preferably, pmap_cold() would provide a first unused
* virtual address in addition to "firstaddr".
*/
virtual_avail = (vm_offset_t)firstaddr;
virtual_end = VM_MAX_KERNEL_ADDRESS;
/*
* Initialize the kernel pmap (which is statically allocated).
* Count bootstrap data as being resident in case any of this data is
* later unmapped (using pmap_remove()) and freed.
*/
PMAP_LOCK_INIT(kernel_pmap);
kernel_pmap->pm_pdir = IdlePTD;
#ifdef PMAP_PAE_COMP
kernel_pmap->pm_pdpt = IdlePDPT;
#endif
CPU_FILL(&kernel_pmap->pm_active); /* don't allow deactivation */
kernel_pmap->pm_stats.resident_count = res;
TAILQ_INIT(&kernel_pmap->pm_pvchunk);
/*
* Initialize the global pv list lock.
*/
rw_init(&pvh_global_lock, "pmap pv global");
/*
* Reserve some special page table entries/VA space for temporary
* mapping of pages.
*/
#define SYSMAP(c, p, v, n) \
v = (c)va; va += ((n)*PAGE_SIZE); p = pte; pte += (n);
va = virtual_avail;
pte = vtopte(va);
/*
* Initialize temporary map objects on the current CPU for use
* during early boot.
* CMAP1/CMAP2 are used for zeroing and copying pages.
* CMAP3 is used for the boot-time memory test.
*/
pc = get_pcpu();
mtx_init(&pc->pc_cmap_lock, "SYSMAPS", NULL, MTX_DEF);
SYSMAP(caddr_t, pc->pc_cmap_pte1, pc->pc_cmap_addr1, 1)
SYSMAP(caddr_t, pc->pc_cmap_pte2, pc->pc_cmap_addr2, 1)
SYSMAP(vm_offset_t, pte, pc->pc_qmap_addr, 1)
SYSMAP(caddr_t, CMAP3, CADDR3, 1);
/*
* Crashdump maps.
*/
SYSMAP(caddr_t, unused, crashdumpmap, MAXDUMPPGS)
/*
* ptvmmap is used for reading arbitrary physical pages via /dev/mem.
*/
SYSMAP(caddr_t, unused, ptvmmap, 1)
/*
* msgbufp is used to map the system message buffer.
*/
SYSMAP(struct msgbuf *, unused, msgbufp, atop(round_page(msgbufsize)))
/*
* KPTmap is used by pmap_kextract().
*
* KPTmap is first initialized by pmap_cold(). However, that initial
* KPTmap can only support NKPT page table pages. Here, a larger
* KPTmap is created that can support KVA_PAGES page table pages.
*/
SYSMAP(pt_entry_t *, KPTD, KPTmap, KVA_PAGES)
for (i = 0; i < NKPT; i++)
KPTD[i] = (KPTphys + ptoa(i)) | PG_RW | PG_V;
/*
* PADDR1 and PADDR2 are used by pmap_pte_quick() and pmap_pte(),
* respectively.
*/
SYSMAP(pt_entry_t *, PMAP1, PADDR1, 1)
SYSMAP(pt_entry_t *, PMAP2, PADDR2, 1)
SYSMAP(pt_entry_t *, PMAP3, PADDR3, 1)
mtx_init(&PMAP2mutex, "PMAP2", NULL, MTX_DEF);
virtual_avail = va;
/*
* Initialize the PAT MSR if present.
* pmap_init_pat() clears and sets CR4_PGE, which, as a
* side-effect, invalidates stale PG_G TLB entries that might
* have been created in our pre-boot environment. We assume
* that PAT support implies PGE and, conversely, that PGE
* presence implies PAT. Both features were added with the
* Pentium Pro.
*/
pmap_init_pat();
}
static void
pmap_init_reserved_pages(void)
{
struct pcpu *pc;
vm_offset_t pages;
int i;
#ifdef PMAP_PAE_COMP
if (!pae_mode)
return;
#else
if (pae_mode)
return;
#endif
CPU_FOREACH(i) {
pc = pcpu_find(i);
mtx_init(&pc->pc_copyout_mlock, "cpmlk", NULL, MTX_DEF |
MTX_NEW);
pc->pc_copyout_maddr = kva_alloc(ptoa(2));
if (pc->pc_copyout_maddr == 0)
panic("unable to allocate non-sleepable copyout KVA");
sx_init(&pc->pc_copyout_slock, "cpslk");
pc->pc_copyout_saddr = kva_alloc(ptoa(2));
if (pc->pc_copyout_saddr == 0)
panic("unable to allocate sleepable copyout KVA");
pc->pc_pmap_eh_va = kva_alloc(ptoa(1));
if (pc->pc_pmap_eh_va == 0)
panic("unable to allocate pmap_extract_and_hold KVA");
pc->pc_pmap_eh_ptep = (char *)vtopte(pc->pc_pmap_eh_va);
/*
* Skip if the mappings have already been initialized,
* i.e. this is the BSP.
*/
if (pc->pc_cmap_addr1 != 0)
continue;
mtx_init(&pc->pc_cmap_lock, "SYSMAPS", NULL, MTX_DEF);
pages = kva_alloc(PAGE_SIZE * 3);
if (pages == 0)
panic("unable to allocate CMAP KVA");
pc->pc_cmap_pte1 = vtopte(pages);
pc->pc_cmap_pte2 = vtopte(pages + PAGE_SIZE);
pc->pc_cmap_addr1 = (caddr_t)pages;
pc->pc_cmap_addr2 = (caddr_t)(pages + PAGE_SIZE);
pc->pc_qmap_addr = pages + ptoa(2);
}
}
SYSINIT(rpages_init, SI_SUB_CPU, SI_ORDER_ANY, pmap_init_reserved_pages, NULL);
/*
* Setup the PAT MSR.
*/
static void
__CONCAT(PMTYPE, init_pat)(void)
{
int pat_table[PAT_INDEX_SIZE];
uint64_t pat_msr;
u_long cr0, cr4;
int i;
/* Set default PAT index table. */
for (i = 0; i < PAT_INDEX_SIZE; i++)
pat_table[i] = -1;
pat_table[PAT_WRITE_BACK] = 0;
pat_table[PAT_WRITE_THROUGH] = 1;
pat_table[PAT_UNCACHEABLE] = 3;
pat_table[PAT_WRITE_COMBINING] = 3;
pat_table[PAT_WRITE_PROTECTED] = 3;
pat_table[PAT_UNCACHED] = 3;
/*
* Bail if this CPU doesn't implement PAT.
* We assume that PAT support implies PGE.
*/
if ((cpu_feature & CPUID_PAT) == 0) {
for (i = 0; i < PAT_INDEX_SIZE; i++)
pat_index[i] = pat_table[i];
pat_works = 0;
return;
}
/*
* Due to some Intel errata, we can only safely use the lower 4
* PAT entries.
*
* Intel Pentium III Processor Specification Update
* Errata E.27 (Upper Four PAT Entries Not Usable With Mode B
* or Mode C Paging)
*
* Intel Pentium IV Processor Specification Update
* Errata N46 (PAT Index MSB May Be Calculated Incorrectly)
*/
if (cpu_vendor_id == CPU_VENDOR_INTEL &&
!(CPUID_TO_FAMILY(cpu_id) == 6 && CPUID_TO_MODEL(cpu_id) >= 0xe))
pat_works = 0;
/* Initialize default PAT entries. */
pat_msr = PAT_VALUE(0, PAT_WRITE_BACK) |
PAT_VALUE(1, PAT_WRITE_THROUGH) |
PAT_VALUE(2, PAT_UNCACHED) |
PAT_VALUE(3, PAT_UNCACHEABLE) |
PAT_VALUE(4, PAT_WRITE_BACK) |
PAT_VALUE(5, PAT_WRITE_THROUGH) |
PAT_VALUE(6, PAT_UNCACHED) |
PAT_VALUE(7, PAT_UNCACHEABLE);
if (pat_works) {
/*
* Leave the indices 0-3 at the default of WB, WT, UC-, and UC.
* Program 5 and 6 as WP and WC.
* Leave 4 and 7 as WB and UC.
*/
pat_msr &= ~(PAT_MASK(5) | PAT_MASK(6));
pat_msr |= PAT_VALUE(5, PAT_WRITE_PROTECTED) |
PAT_VALUE(6, PAT_WRITE_COMBINING);
pat_table[PAT_UNCACHED] = 2;
pat_table[PAT_WRITE_PROTECTED] = 5;
pat_table[PAT_WRITE_COMBINING] = 6;
} else {
/*
* Just replace PAT Index 2 with WC instead of UC-.
*/
pat_msr &= ~PAT_MASK(2);
pat_msr |= PAT_VALUE(2, PAT_WRITE_COMBINING);
pat_table[PAT_WRITE_COMBINING] = 2;
}
/* Disable PGE. */
cr4 = rcr4();
load_cr4(cr4 & ~CR4_PGE);
/* Disable caches (CD = 1, NW = 0). */
cr0 = rcr0();
load_cr0((cr0 & ~CR0_NW) | CR0_CD);
/* Flushes caches and TLBs. */
wbinvd();
invltlb();
/* Update PAT and index table. */
wrmsr(MSR_PAT, pat_msr);
for (i = 0; i < PAT_INDEX_SIZE; i++)
pat_index[i] = pat_table[i];
/* Flush caches and TLBs again. */
wbinvd();
invltlb();
/* Restore caches and PGE. */
load_cr0(cr0);
load_cr4(cr4);
}
#ifdef PMAP_PAE_COMP
static void *
pmap_pdpt_allocf(uma_zone_t zone, vm_size_t bytes, int domain, uint8_t *flags,
int wait)
{
/* Inform UMA that this allocator uses kernel_map/object. */
*flags = UMA_SLAB_KERNEL;
return ((void *)kmem_alloc_contig_domainset(DOMAINSET_FIXED(domain),
bytes, wait, 0x0ULL, 0xffffffffULL, 1, 0, VM_MEMATTR_DEFAULT));
}
#endif
/*
* Abuse the pte nodes for unmapped kva to thread a kva freelist through.
* Requirements:
* - Must deal with pages in order to ensure that none of the PG_* bits
* are ever set, PG_V in particular.
* - Assumes we can write to ptes without pte_store() atomic ops, even
* on PAE systems. This should be ok.
* - Assumes nothing will ever test these addresses for 0 to indicate
* no mapping instead of correctly checking PG_V.
* - Assumes a vm_offset_t will fit in a pte (true for i386).
* Because PG_V is never set, there can be no mappings to invalidate.
*/
static vm_offset_t
pmap_ptelist_alloc(vm_offset_t *head)
{
pt_entry_t *pte;
vm_offset_t va;
va = *head;
if (va == 0)
panic("pmap_ptelist_alloc: exhausted ptelist KVA");
pte = vtopte(va);
*head = *pte;
if (*head & PG_V)
panic("pmap_ptelist_alloc: va with PG_V set!");
*pte = 0;
return (va);
}
static void
pmap_ptelist_free(vm_offset_t *head, vm_offset_t va)
{
pt_entry_t *pte;
if (va & PG_V)
panic("pmap_ptelist_free: freeing va with PG_V set!");
pte = vtopte(va);
*pte = *head; /* virtual! PG_V is 0 though */
*head = va;
}
static void
pmap_ptelist_init(vm_offset_t *head, void *base, int npages)
{
int i;
vm_offset_t va;
*head = 0;
for (i = npages - 1; i >= 0; i--) {
va = (vm_offset_t)base + i * PAGE_SIZE;
pmap_ptelist_free(head, va);
}
}
/*
* Initialize the pmap module.
* Called by vm_init, to initialize any structures that the pmap
* system needs to map virtual memory.
*/
static void
__CONCAT(PMTYPE, init)(void)
{
struct pmap_preinit_mapping *ppim;
vm_page_t mpte;
vm_size_t s;
int i, pv_npg;
/*
* Initialize the vm page array entries for the kernel pmap's
* page table pages.
*/
PMAP_LOCK(kernel_pmap);
for (i = 0; i < NKPT; i++) {
mpte = PHYS_TO_VM_PAGE(KPTphys + ptoa(i));
KASSERT(mpte >= vm_page_array &&
mpte < &vm_page_array[vm_page_array_size],
("pmap_init: page table page is out of range"));
mpte->pindex = i + KPTDI;
mpte->phys_addr = KPTphys + ptoa(i);
mpte->wire_count = 1;
if (pseflag != 0 &&
KERNBASE <= i << PDRSHIFT && i << PDRSHIFT < KERNend &&
pmap_insert_pt_page(kernel_pmap, mpte))
panic("pmap_init: pmap_insert_pt_page failed");
}
PMAP_UNLOCK(kernel_pmap);
vm_wire_add(NKPT);
/*
* Initialize the address space (zone) for the pv entries. Set a
* high water mark so that the system can recover from excessive
* numbers of pv entries.
*/
TUNABLE_INT_FETCH("vm.pmap.shpgperproc", &shpgperproc);
pv_entry_max = shpgperproc * maxproc + vm_cnt.v_page_count;
TUNABLE_INT_FETCH("vm.pmap.pv_entries", &pv_entry_max);
pv_entry_max = roundup(pv_entry_max, _NPCPV);
pv_entry_high_water = 9 * (pv_entry_max / 10);
/*
* If the kernel is running on a virtual machine, then it must assume
* that MCA is enabled by the hypervisor. Moreover, the kernel must
* be prepared for the hypervisor changing the vendor and family that
* are reported by CPUID. Consequently, the workaround for AMD Family
* 10h Erratum 383 is enabled if the processor's feature set does not
* include at least one feature that is only supported by older Intel
* or newer AMD processors.
*/
if (vm_guest != VM_GUEST_NO && (cpu_feature & CPUID_SS) == 0 &&
(cpu_feature2 & (CPUID2_SSSE3 | CPUID2_SSE41 | CPUID2_AESNI |
CPUID2_AVX | CPUID2_XSAVE)) == 0 && (amd_feature2 & (AMDID2_XOP |
AMDID2_FMA4)) == 0)
workaround_erratum383 = 1;
/*
* Are large page mappings supported and enabled?
*/
TUNABLE_INT_FETCH("vm.pmap.pg_ps_enabled", &pg_ps_enabled);
if (pseflag == 0)
pg_ps_enabled = 0;
else if (pg_ps_enabled) {
KASSERT(MAXPAGESIZES > 1 && pagesizes[1] == 0,
("pmap_init: can't assign to pagesizes[1]"));
pagesizes[1] = NBPDR;
}
/*
* Calculate the size of the pv head table for superpages.
* Handle the possibility that "vm_phys_segs[...].end" is zero.
*/
pv_npg = trunc_4mpage(vm_phys_segs[vm_phys_nsegs - 1].end -
PAGE_SIZE) / NBPDR + 1;
/*
* Allocate memory for the pv head table for superpages.
*/
s = (vm_size_t)(pv_npg * sizeof(struct md_page));
s = round_page(s);
pv_table = (struct md_page *)kmem_malloc(s, M_WAITOK | M_ZERO);
for (i = 0; i < pv_npg; i++)
TAILQ_INIT(&pv_table[i].pv_list);
pv_maxchunks = MAX(pv_entry_max / _NPCPV, maxproc);
pv_chunkbase = (struct pv_chunk *)kva_alloc(PAGE_SIZE * pv_maxchunks);
if (pv_chunkbase == NULL)
panic("pmap_init: not enough kvm for pv chunks");
pmap_ptelist_init(&pv_vafree, pv_chunkbase, pv_maxchunks);
#ifdef PMAP_PAE_COMP
pdptzone = uma_zcreate("PDPT", NPGPTD * sizeof(pdpt_entry_t), NULL,
NULL, NULL, NULL, (NPGPTD * sizeof(pdpt_entry_t)) - 1,
UMA_ZONE_VM | UMA_ZONE_NOFREE);
uma_zone_set_allocf(pdptzone, pmap_pdpt_allocf);
#endif
pmap_initialized = 1;
pmap_init_trm();
if (!bootverbose)
return;
for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) {
ppim = pmap_preinit_mapping + i;
if (ppim->va == 0)
continue;
printf("PPIM %u: PA=%#jx, VA=%#x, size=%#x, mode=%#x\n", i,
(uintmax_t)ppim->pa, ppim->va, ppim->sz, ppim->mode);
}
}
extern u_long pmap_pde_demotions;
extern u_long pmap_pde_mappings;
extern u_long pmap_pde_p_failures;
extern u_long pmap_pde_promotions;
/***************************************************
* Low level helper routines.....
***************************************************/
static boolean_t
__CONCAT(PMTYPE, is_valid_memattr)(pmap_t pmap __unused, vm_memattr_t mode)
{
return (mode >= 0 && mode < PAT_INDEX_SIZE &&
pat_index[(int)mode] >= 0);
}
/*
* Determine the appropriate bits to set in a PTE or PDE for a specified
* caching mode.
*/
static int
__CONCAT(PMTYPE, cache_bits)(pmap_t pmap, int mode, boolean_t is_pde)
{
int cache_bits, pat_flag, pat_idx;
if (!pmap_is_valid_memattr(pmap, mode))
panic("Unknown caching mode %d\n", mode);
/* The PAT bit is different for PTE's and PDE's. */
pat_flag = is_pde ? PG_PDE_PAT : PG_PTE_PAT;
/* Map the caching mode to a PAT index. */
pat_idx = pat_index[mode];
/* Map the 3-bit index value into the PAT, PCD, and PWT bits. */
cache_bits = 0;
if (pat_idx & 0x4)
cache_bits |= pat_flag;
if (pat_idx & 0x2)
cache_bits |= PG_NC_PCD;
if (pat_idx & 0x1)
cache_bits |= PG_NC_PWT;
return (cache_bits);
}
static bool
__CONCAT(PMTYPE, ps_enabled)(pmap_t pmap __unused)
{
return (pg_ps_enabled);
}
/*
* The caller is responsible for maintaining TLB consistency.
*/
static void
pmap_kenter_pde(vm_offset_t va, pd_entry_t newpde)
{
pd_entry_t *pde;
pde = pmap_pde(kernel_pmap, va);
pde_store(pde, newpde);
}
/*
* After changing the page size for the specified virtual address in the page
* table, flush the corresponding entries from the processor's TLB. Only the
* calling processor's TLB is affected.
*
* The calling thread must be pinned to a processor.
*/
static void
pmap_update_pde_invalidate(vm_offset_t va, pd_entry_t newpde)
{
if ((newpde & PG_PS) == 0)
/* Demotion: flush a specific 2- or 4MB page mapping. */
invlpg(va);
else /* if ((newpde & PG_G) == 0) */
/*
* Promotion: flush every 4KB page mapping from the TLB
* because there are too many to flush individually.
*/
invltlb();
}
#ifdef SMP
/*
* For SMP, these functions have to use the IPI mechanism for coherence.
*
* N.B.: Before calling any of the following TLB invalidation functions,
* the calling processor must ensure that all stores updating a non-
* kernel page table are globally performed. Otherwise, another
* processor could cache an old, pre-update entry without being
* invalidated. This can happen one of two ways: (1) The pmap becomes
* active on another processor after its pm_active field is checked by
* one of the following functions but before a store updating the page
* table is globally performed. (2) The pmap becomes active on another
* processor before its pm_active field is checked but due to
* speculative loads one of the following functions still reads the
* pmap as inactive on the other processor.
*
* The kernel page table is exempt because its pm_active field is
* immutable. The kernel page table is always active on every
* processor.
*/
static void
pmap_invalidate_page_int(pmap_t pmap, vm_offset_t va)
{
cpuset_t *mask, other_cpus;
u_int cpuid;
sched_pin();
if (pmap == kernel_pmap) {
invlpg(va);
mask = &all_cpus;
} else if (!CPU_CMP(&pmap->pm_active, &all_cpus)) {
mask = &all_cpus;
} else {
cpuid = PCPU_GET(cpuid);
other_cpus = all_cpus;
CPU_CLR(cpuid, &other_cpus);
CPU_AND(&other_cpus, &pmap->pm_active);
mask = &other_cpus;
}
smp_masked_invlpg(*mask, va, pmap);
sched_unpin();
}
/* 4k PTEs -- Chosen to exceed the total size of Broadwell L2 TLB */
#define PMAP_INVLPG_THRESHOLD (4 * 1024 * PAGE_SIZE)
static void
pmap_invalidate_range_int(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
cpuset_t *mask, other_cpus;
vm_offset_t addr;
u_int cpuid;
if (eva - sva >= PMAP_INVLPG_THRESHOLD) {
pmap_invalidate_all_int(pmap);
return;
}
sched_pin();
if (pmap == kernel_pmap) {
for (addr = sva; addr < eva; addr += PAGE_SIZE)
invlpg(addr);
mask = &all_cpus;
} else if (!CPU_CMP(&pmap->pm_active, &all_cpus)) {
mask = &all_cpus;
} else {
cpuid = PCPU_GET(cpuid);
other_cpus = all_cpus;
CPU_CLR(cpuid, &other_cpus);
CPU_AND(&other_cpus, &pmap->pm_active);
mask = &other_cpus;
}
smp_masked_invlpg_range(*mask, sva, eva, pmap);
sched_unpin();
}
static void
pmap_invalidate_all_int(pmap_t pmap)
{
cpuset_t *mask, other_cpus;
u_int cpuid;
sched_pin();
if (pmap == kernel_pmap) {
invltlb();
mask = &all_cpus;
} else if (!CPU_CMP(&pmap->pm_active, &all_cpus)) {
mask = &all_cpus;
} else {
cpuid = PCPU_GET(cpuid);
other_cpus = all_cpus;
CPU_CLR(cpuid, &other_cpus);
CPU_AND(&other_cpus, &pmap->pm_active);
mask = &other_cpus;
}
smp_masked_invltlb(*mask, pmap);
sched_unpin();
}
static void
__CONCAT(PMTYPE, invalidate_cache)(void)
{
sched_pin();
wbinvd();
smp_cache_flush();
sched_unpin();
}
struct pde_action {
cpuset_t invalidate; /* processors that invalidate their TLB */
vm_offset_t va;
pd_entry_t *pde;
pd_entry_t newpde;
u_int store; /* processor that updates the PDE */
};
static void
pmap_update_pde_kernel(void *arg)
{
struct pde_action *act = arg;
pd_entry_t *pde;
if (act->store == PCPU_GET(cpuid)) {
pde = pmap_pde(kernel_pmap, act->va);
pde_store(pde, act->newpde);
}
}
static void
pmap_update_pde_user(void *arg)
{
struct pde_action *act = arg;
if (act->store == PCPU_GET(cpuid))
pde_store(act->pde, act->newpde);
}
static void
pmap_update_pde_teardown(void *arg)
{
struct pde_action *act = arg;
if (CPU_ISSET(PCPU_GET(cpuid), &act->invalidate))
pmap_update_pde_invalidate(act->va, act->newpde);
}
/*
* Change the page size for the specified virtual address in a way that
* prevents any possibility of the TLB ever having two entries that map the
* same virtual address using different page sizes. This is the recommended
* workaround for Erratum 383 on AMD Family 10h processors. It prevents a
* machine check exception for a TLB state that is improperly diagnosed as a
* hardware error.
*/
static void
pmap_update_pde(pmap_t pmap, vm_offset_t va, pd_entry_t *pde, pd_entry_t newpde)
{
struct pde_action act;
cpuset_t active, other_cpus;
u_int cpuid;
sched_pin();
cpuid = PCPU_GET(cpuid);
other_cpus = all_cpus;
CPU_CLR(cpuid, &other_cpus);
if (pmap == kernel_pmap)
active = all_cpus;
else
active = pmap->pm_active;
if (CPU_OVERLAP(&active, &other_cpus)) {
act.store = cpuid;
act.invalidate = active;
act.va = va;
act.pde = pde;
act.newpde = newpde;
CPU_SET(cpuid, &active);
smp_rendezvous_cpus(active,
smp_no_rendezvous_barrier, pmap == kernel_pmap ?
pmap_update_pde_kernel : pmap_update_pde_user,
pmap_update_pde_teardown, &act);
} else {
if (pmap == kernel_pmap)
pmap_kenter_pde(va, newpde);
else
pde_store(pde, newpde);
if (CPU_ISSET(cpuid, &active))
pmap_update_pde_invalidate(va, newpde);
}
sched_unpin();
}
#else /* !SMP */
/*
* Normal, non-SMP, 486+ invalidation functions.
* We inline these within pmap.c for speed.
*/
static void
pmap_invalidate_page_int(pmap_t pmap, vm_offset_t va)
{
if (pmap == kernel_pmap)
invlpg(va);
}
static void
pmap_invalidate_range_int(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
vm_offset_t addr;
if (pmap == kernel_pmap)
for (addr = sva; addr < eva; addr += PAGE_SIZE)
invlpg(addr);
}
static void
pmap_invalidate_all_int(pmap_t pmap)
{
if (pmap == kernel_pmap)
invltlb();
}
static void
__CONCAT(PMTYPE, invalidate_cache)(void)
{
wbinvd();
}
static void
pmap_update_pde(pmap_t pmap, vm_offset_t va, pd_entry_t *pde, pd_entry_t newpde)
{
if (pmap == kernel_pmap)
pmap_kenter_pde(va, newpde);
else
pde_store(pde, newpde);
if (pmap == kernel_pmap || !CPU_EMPTY(&pmap->pm_active))
pmap_update_pde_invalidate(va, newpde);
}
#endif /* !SMP */
static void
__CONCAT(PMTYPE, invalidate_page)(pmap_t pmap, vm_offset_t va)
{
pmap_invalidate_page_int(pmap, va);
}
static void
__CONCAT(PMTYPE, invalidate_range)(pmap_t pmap, vm_offset_t sva,
vm_offset_t eva)
{
pmap_invalidate_range_int(pmap, sva, eva);
}
static void
__CONCAT(PMTYPE, invalidate_all)(pmap_t pmap)
{
pmap_invalidate_all_int(pmap);
}
static void
pmap_invalidate_pde_page(pmap_t pmap, vm_offset_t va, pd_entry_t pde)
{
/*
* When the PDE has PG_PROMOTED set, the 2- or 4MB page mapping was
* created by a promotion that did not invalidate the 512 or 1024 4KB
* page mappings that might exist in the TLB. Consequently, at this
* point, the TLB may hold both 4KB and 2- or 4MB page mappings for
* the address range [va, va + NBPDR). Therefore, the entire range
* must be invalidated here. In contrast, when PG_PROMOTED is clear,
* the TLB will not hold any 4KB page mappings for the address range
* [va, va + NBPDR), and so a single INVLPG suffices to invalidate the
* 2- or 4MB page mapping from the TLB.
*/
if ((pde & PG_PROMOTED) != 0)
pmap_invalidate_range_int(pmap, va, va + NBPDR - 1);
else
pmap_invalidate_page_int(pmap, va);
}
/*
* Are we current address space or kernel?
*/
static __inline int
pmap_is_current(pmap_t pmap)
{
return (pmap == kernel_pmap);
}
/*
* If the given pmap is not the current or kernel pmap, the returned pte must
* be released by passing it to pmap_pte_release().
*/
static pt_entry_t *
__CONCAT(PMTYPE, pte)(pmap_t pmap, vm_offset_t va)
{
pd_entry_t newpf;
pd_entry_t *pde;
pde = pmap_pde(pmap, va);
if (*pde & PG_PS)
return (pde);
if (*pde != 0) {
/* are we current address space or kernel? */
if (pmap_is_current(pmap))
return (vtopte(va));
mtx_lock(&PMAP2mutex);
newpf = *pde & PG_FRAME;
if ((*PMAP2 & PG_FRAME) != newpf) {
*PMAP2 = newpf | PG_RW | PG_V | PG_A | PG_M;
pmap_invalidate_page_int(kernel_pmap,
(vm_offset_t)PADDR2);
}
return (PADDR2 + (i386_btop(va) & (NPTEPG - 1)));
}
return (NULL);
}
/*
* Releases a pte that was obtained from pmap_pte(). Be prepared for the pte
* being NULL.
*/
static __inline void
pmap_pte_release(pt_entry_t *pte)
{
if ((pt_entry_t *)((vm_offset_t)pte & ~PAGE_MASK) == PADDR2)
mtx_unlock(&PMAP2mutex);
}
/*
* NB: The sequence of updating a page table followed by accesses to the
* corresponding pages is subject to the situation described in the "AMD64
* Architecture Programmer's Manual Volume 2: System Programming" rev. 3.23,
* "7.3.1 Special Coherency Considerations". Therefore, issuing the INVLPG
* right after modifying the PTE bits is crucial.
*/
static __inline void
invlcaddr(void *caddr)
{
invlpg((u_int)caddr);
}
/*
* Super fast pmap_pte routine best used when scanning
* the pv lists. This eliminates many coarse-grained
* invltlb calls. Note that many of the pv list
* scans are across different pmaps. It is very wasteful
* to do an entire invltlb for checking a single mapping.
*
* If the given pmap is not the current pmap, pvh_global_lock
* must be held and curthread pinned to a CPU.
*/
static pt_entry_t *
pmap_pte_quick(pmap_t pmap, vm_offset_t va)
{
pd_entry_t newpf;
pd_entry_t *pde;
pde = pmap_pde(pmap, va);
if (*pde & PG_PS)
return (pde);
if (*pde != 0) {
/* are we current address space or kernel? */
if (pmap_is_current(pmap))
return (vtopte(va));
rw_assert(&pvh_global_lock, RA_WLOCKED);
KASSERT(curthread->td_pinned > 0, ("curthread not pinned"));
newpf = *pde & PG_FRAME;
if ((*PMAP1 & PG_FRAME) != newpf) {
*PMAP1 = newpf | PG_RW | PG_V | PG_A | PG_M;
#ifdef SMP
PMAP1cpu = PCPU_GET(cpuid);
#endif
invlcaddr(PADDR1);
PMAP1changed++;
} else
#ifdef SMP
if (PMAP1cpu != PCPU_GET(cpuid)) {
PMAP1cpu = PCPU_GET(cpuid);
invlcaddr(PADDR1);
PMAP1changedcpu++;
} else
#endif
PMAP1unchanged++;
return (PADDR1 + (i386_btop(va) & (NPTEPG - 1)));
}
return (0);
}
static pt_entry_t *
pmap_pte_quick3(pmap_t pmap, vm_offset_t va)
{
pd_entry_t newpf;
pd_entry_t *pde;
pde = pmap_pde(pmap, va);
if (*pde & PG_PS)
return (pde);
if (*pde != 0) {
rw_assert(&pvh_global_lock, RA_WLOCKED);
KASSERT(curthread->td_pinned > 0, ("curthread not pinned"));
newpf = *pde & PG_FRAME;
if ((*PMAP3 & PG_FRAME) != newpf) {
*PMAP3 = newpf | PG_RW | PG_V | PG_A | PG_M;
#ifdef SMP
PMAP3cpu = PCPU_GET(cpuid);
#endif
invlcaddr(PADDR3);
PMAP1changed++;
} else
#ifdef SMP
if (PMAP3cpu != PCPU_GET(cpuid)) {
PMAP3cpu = PCPU_GET(cpuid);
invlcaddr(PADDR3);
PMAP1changedcpu++;
} else
#endif
PMAP1unchanged++;
return (PADDR3 + (i386_btop(va) & (NPTEPG - 1)));
}
return (0);
}
static pt_entry_t
pmap_pte_ufast(pmap_t pmap, vm_offset_t va, pd_entry_t pde)
{
pt_entry_t *eh_ptep, pte, *ptep;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
pde &= PG_FRAME;
critical_enter();
eh_ptep = (pt_entry_t *)PCPU_GET(pmap_eh_ptep);
if ((*eh_ptep & PG_FRAME) != pde) {
*eh_ptep = pde | PG_RW | PG_V | PG_A | PG_M;
invlcaddr((void *)PCPU_GET(pmap_eh_va));
}
ptep = (pt_entry_t *)PCPU_GET(pmap_eh_va) + (i386_btop(va) &
(NPTEPG - 1));
pte = *ptep;
critical_exit();
return (pte);
}
/*
* Extract from the kernel page table the physical address that is mapped by
* the given virtual address "va".
*
* This function may be used before pmap_bootstrap() is called.
*/
static vm_paddr_t
__CONCAT(PMTYPE, kextract)(vm_offset_t va)
{
vm_paddr_t pa;
if ((pa = pte_load(&PTD[va >> PDRSHIFT])) & PG_PS) {
pa = (pa & PG_PS_FRAME) | (va & PDRMASK);
} else {
/*
* Beware of a concurrent promotion that changes the PDE at
* this point! For example, vtopte() must not be used to
* access the PTE because it would use the new PDE. It is,
* however, safe to use the old PDE because the page table
* page is preserved by the promotion.
*/
pa = KPTmap[i386_btop(va)];
pa = (pa & PG_FRAME) | (va & PAGE_MASK);
}
return (pa);
}
/*
* Routine: pmap_extract
* Function:
* Extract the physical page address associated
* with the given map/virtual_address pair.
*/
static vm_paddr_t
__CONCAT(PMTYPE, extract)(pmap_t pmap, vm_offset_t va)
{
vm_paddr_t rtval;
pt_entry_t pte;
pd_entry_t pde;
rtval = 0;
PMAP_LOCK(pmap);
pde = pmap->pm_pdir[va >> PDRSHIFT];
if (pde != 0) {
if ((pde & PG_PS) != 0)
rtval = (pde & PG_PS_FRAME) | (va & PDRMASK);
else {
pte = pmap_pte_ufast(pmap, va, pde);
rtval = (pte & PG_FRAME) | (va & PAGE_MASK);
}
}
PMAP_UNLOCK(pmap);
return (rtval);
}
/*
* Routine: pmap_extract_and_hold
* Function:
* Atomically extract and hold the physical page
* with the given pmap and virtual address pair
* if that mapping permits the given protection.
*/
static vm_page_t
__CONCAT(PMTYPE, extract_and_hold)(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
{
pd_entry_t pde;
pt_entry_t pte;
vm_page_t m;
vm_paddr_t pa;
pa = 0;
m = NULL;
PMAP_LOCK(pmap);
retry:
pde = *pmap_pde(pmap, va);
if (pde != 0) {
if (pde & PG_PS) {
if ((pde & PG_RW) || (prot & VM_PROT_WRITE) == 0) {
if (vm_page_pa_tryrelock(pmap, (pde &
PG_PS_FRAME) | (va & PDRMASK), &pa))
goto retry;
m = PHYS_TO_VM_PAGE(pa);
}
} else {
pte = pmap_pte_ufast(pmap, va, pde);
if (pte != 0 &&
((pte & PG_RW) || (prot & VM_PROT_WRITE) == 0)) {
if (vm_page_pa_tryrelock(pmap, pte & PG_FRAME,
&pa))
goto retry;
m = PHYS_TO_VM_PAGE(pa);
}
}
if (m != NULL)
vm_page_hold(m);
}
PA_UNLOCK_COND(pa);
PMAP_UNLOCK(pmap);
return (m);
}
/***************************************************
* Low level mapping routines.....
***************************************************/
/*
* Add a wired page to the kva.
* Note: not SMP coherent.
*
* This function may be used before pmap_bootstrap() is called.
*/
static void
__CONCAT(PMTYPE, kenter)(vm_offset_t va, vm_paddr_t pa)
{
pt_entry_t *pte;
pte = vtopte(va);
pte_store(pte, pa | PG_RW | PG_V);
}
static __inline void
pmap_kenter_attr(vm_offset_t va, vm_paddr_t pa, int mode)
{
pt_entry_t *pte;
pte = vtopte(va);
pte_store(pte, pa | PG_RW | PG_V | pmap_cache_bits(kernel_pmap,
mode, 0));
}
/*
* Remove a page from the kernel pagetables.
* Note: not SMP coherent.
*
* This function may be used before pmap_bootstrap() is called.
*/
static void
__CONCAT(PMTYPE, kremove)(vm_offset_t va)
{
pt_entry_t *pte;
pte = vtopte(va);
pte_clear(pte);
}
/*
* Used to map a range of physical addresses into kernel
* virtual address space.
*
* The value passed in '*virt' is a suggested virtual address for
* the mapping. Architectures which can support a direct-mapped
* physical to virtual region can return the appropriate address
* within that region, leaving '*virt' unchanged. Other
* architectures should map the pages starting at '*virt' and
* update '*virt' with the first usable address after the mapped
* region.
*/
static vm_offset_t
__CONCAT(PMTYPE, map)(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end,
int prot)
{
vm_offset_t va, sva;
vm_paddr_t superpage_offset;
pd_entry_t newpde;
va = *virt;
/*
* Does the physical address range's size and alignment permit at
* least one superpage mapping to be created?
*/
superpage_offset = start & PDRMASK;
if ((end - start) - ((NBPDR - superpage_offset) & PDRMASK) >= NBPDR) {
/*
* Increase the starting virtual address so that its alignment
* does not preclude the use of superpage mappings.
*/
if ((va & PDRMASK) < superpage_offset)
va = (va & ~PDRMASK) + superpage_offset;
else if ((va & PDRMASK) > superpage_offset)
va = ((va + PDRMASK) & ~PDRMASK) + superpage_offset;
}
sva = va;
while (start < end) {
if ((start & PDRMASK) == 0 && end - start >= NBPDR &&
pseflag != 0) {
KASSERT((va & PDRMASK) == 0,
("pmap_map: misaligned va %#x", va));
newpde = start | PG_PS | PG_RW | PG_V;
pmap_kenter_pde(va, newpde);
va += NBPDR;
start += NBPDR;
} else {
pmap_kenter(va, start);
va += PAGE_SIZE;
start += PAGE_SIZE;
}
}
pmap_invalidate_range_int(kernel_pmap, sva, va);
*virt = va;
return (sva);
}
/*
* Add a list of wired pages to the kva. This routine is
* only used for temporary kernel mappings that do not need
* to have page modification or references recorded. Note
* that old mappings are simply written over. The page
* *must* be wired.
* Note: SMP coherent. Uses a ranged shootdown IPI.
*/
static void
__CONCAT(PMTYPE, qenter)(vm_offset_t sva, vm_page_t *ma, int count)
{
pt_entry_t *endpte, oldpte, pa, *pte;
vm_page_t m;
oldpte = 0;
pte = vtopte(sva);
endpte = pte + count;
while (pte < endpte) {
m = *ma++;
pa = VM_PAGE_TO_PHYS(m) | pmap_cache_bits(kernel_pmap,
m->md.pat_mode, 0);
if ((*pte & (PG_FRAME | PG_PTE_CACHE)) != pa) {
oldpte |= *pte;
#ifdef PMAP_PAE_COMP
pte_store(pte, pa | pg_nx | PG_RW | PG_V);
#else
pte_store(pte, pa | PG_RW | PG_V);
#endif
}
pte++;
}
if (__predict_false((oldpte & PG_V) != 0))
pmap_invalidate_range_int(kernel_pmap, sva, sva + count *
PAGE_SIZE);
}
/*
* This routine tears out page mappings from the
* kernel -- it is meant only for temporary mappings.
* Note: SMP coherent. Uses a ranged shootdown IPI.
*/
static void
__CONCAT(PMTYPE, qremove)(vm_offset_t sva, int count)
{
vm_offset_t va;
va = sva;
while (count-- > 0) {
pmap_kremove(va);
va += PAGE_SIZE;
}
pmap_invalidate_range_int(kernel_pmap, sva, va);
}
/***************************************************
* Page table page management routines.....
***************************************************/
/*
* Schedule the specified unused page table page to be freed. Specifically,
* add the page to the specified list of pages that will be released to the
* physical memory manager after the TLB has been updated.
*/
static __inline void
pmap_add_delayed_free_list(vm_page_t m, struct spglist *free,
boolean_t set_PG_ZERO)
{
if (set_PG_ZERO)
m->flags |= PG_ZERO;
else
m->flags &= ~PG_ZERO;
SLIST_INSERT_HEAD(free, m, plinks.s.ss);
}
/*
* Inserts the specified page table page into the specified pmap's collection
* of idle page table pages. Each of a pmap's page table pages is responsible
* for mapping a distinct range of virtual addresses. The pmap's collection is
* ordered by this virtual address range.
*/
static __inline int
pmap_insert_pt_page(pmap_t pmap, vm_page_t mpte)
{
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
return (vm_radix_insert(&pmap->pm_root, mpte));
}
/*
* Removes the page table page mapping the specified virtual address from the
* specified pmap's collection of idle page table pages, and returns it.
* Otherwise, returns NULL if there is no page table page corresponding to the
* specified virtual address.
*/
static __inline vm_page_t
pmap_remove_pt_page(pmap_t pmap, vm_offset_t va)
{
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
return (vm_radix_remove(&pmap->pm_root, va >> PDRSHIFT));
}
/*
* Decrements a page table page's wire count, which is used to record the
* number of valid page table entries within the page. If the wire count
* drops to zero, then the page table page is unmapped. Returns TRUE if the
* page table page was unmapped and FALSE otherwise.
*/
static inline boolean_t
pmap_unwire_ptp(pmap_t pmap, vm_page_t m, struct spglist *free)
{
--m->wire_count;
if (m->wire_count == 0) {
_pmap_unwire_ptp(pmap, m, free);
return (TRUE);
} else
return (FALSE);
}
static void
_pmap_unwire_ptp(pmap_t pmap, vm_page_t m, struct spglist *free)
{
/*
* unmap the page table page
*/
pmap->pm_pdir[m->pindex] = 0;
--pmap->pm_stats.resident_count;
/*
* There is no need to invalidate the recursive mapping since
* we never instantiate such mappings for usermode pmaps,
* and we never remove page table pages from the kernel pmap.
* Put the page on a list so that it is released after all TLB
* shootdown is done.
*/
MPASS(pmap != kernel_pmap);
pmap_add_delayed_free_list(m, free, TRUE);
}
/*
* After removing a page table entry, this routine is used to
* conditionally free the page, and manage the hold/wire counts.
*/
static int
pmap_unuse_pt(pmap_t pmap, vm_offset_t va, struct spglist *free)
{
pd_entry_t ptepde;
vm_page_t mpte;
if (pmap == kernel_pmap)
return (0);
ptepde = *pmap_pde(pmap, va);
mpte = PHYS_TO_VM_PAGE(ptepde & PG_FRAME);
return (pmap_unwire_ptp(pmap, mpte, free));
}
/*
* Initialize the pmap for the swapper process.
*/
static void
__CONCAT(PMTYPE, pinit0)(pmap_t pmap)
{
PMAP_LOCK_INIT(pmap);
pmap->pm_pdir = IdlePTD;
#ifdef PMAP_PAE_COMP
pmap->pm_pdpt = IdlePDPT;
#endif
pmap->pm_root.rt_root = 0;
CPU_ZERO(&pmap->pm_active);
TAILQ_INIT(&pmap->pm_pvchunk);
bzero(&pmap->pm_stats, sizeof pmap->pm_stats);
pmap_activate_boot(pmap);
}
/*
* Initialize a preallocated and zeroed pmap structure,
* such as one in a vmspace structure.
*/
static int
__CONCAT(PMTYPE, pinit)(pmap_t pmap)
{
vm_page_t m;
int i;
/*
* No need to allocate page table space yet but we do need a valid
* page directory table.
*/
if (pmap->pm_pdir == NULL) {
pmap->pm_pdir = (pd_entry_t *)kva_alloc(NBPTD);
if (pmap->pm_pdir == NULL)
return (0);
#ifdef PMAP_PAE_COMP
pmap->pm_pdpt = uma_zalloc(pdptzone, M_WAITOK | M_ZERO);
KASSERT(((vm_offset_t)pmap->pm_pdpt &
((NPGPTD * sizeof(pdpt_entry_t)) - 1)) == 0,
("pmap_pinit: pdpt misaligned"));
KASSERT(pmap_kextract((vm_offset_t)pmap->pm_pdpt) < (4ULL<<30),
("pmap_pinit: pdpt above 4g"));
#endif
pmap->pm_root.rt_root = 0;
}
KASSERT(vm_radix_is_empty(&pmap->pm_root),
("pmap_pinit: pmap has reserved page table page(s)"));
/*
* allocate the page directory page(s)
*/
for (i = 0; i < NPGPTD; i++) {
m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ |
VM_ALLOC_WIRED | VM_ALLOC_ZERO | VM_ALLOC_WAITOK);
pmap->pm_ptdpg[i] = m;
#ifdef PMAP_PAE_COMP
pmap->pm_pdpt[i] = VM_PAGE_TO_PHYS(m) | PG_V;
#endif
}
pmap_qenter((vm_offset_t)pmap->pm_pdir, pmap->pm_ptdpg, NPGPTD);
for (i = 0; i < NPGPTD; i++)
if ((pmap->pm_ptdpg[i]->flags & PG_ZERO) == 0)
pagezero(pmap->pm_pdir + (i * NPDEPG));
/* Install the trampoline mapping. */
pmap->pm_pdir[TRPTDI] = PTD[TRPTDI];
CPU_ZERO(&pmap->pm_active);
TAILQ_INIT(&pmap->pm_pvchunk);
bzero(&pmap->pm_stats, sizeof pmap->pm_stats);
return (1);
}
/*
* This routine is called if the page table page is not
* mapped correctly.
*/
static vm_page_t
_pmap_allocpte(pmap_t pmap, u_int ptepindex, u_int flags)
{
vm_paddr_t ptepa;
vm_page_t m;
/*
* Allocate a page table page.
*/
if ((m = vm_page_alloc(NULL, ptepindex, VM_ALLOC_NOOBJ |
VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL) {
if ((flags & PMAP_ENTER_NOSLEEP) == 0) {
PMAP_UNLOCK(pmap);
rw_wunlock(&pvh_global_lock);
vm_wait(NULL);
rw_wlock(&pvh_global_lock);
PMAP_LOCK(pmap);
}
/*
* Indicate the need to retry. While waiting, the page table
* page may have been allocated.
*/
return (NULL);
}
if ((m->flags & PG_ZERO) == 0)
pmap_zero_page(m);
/*
* Map the pagetable page into the process address space, if
* it isn't already there.
*/
pmap->pm_stats.resident_count++;
ptepa = VM_PAGE_TO_PHYS(m);
pmap->pm_pdir[ptepindex] =
(pd_entry_t) (ptepa | PG_U | PG_RW | PG_V | PG_A | PG_M);
return (m);
}
static vm_page_t
pmap_allocpte(pmap_t pmap, vm_offset_t va, u_int flags)
{
u_int ptepindex;
pd_entry_t ptepa;
vm_page_t m;
/*
* Calculate pagetable page index
*/
ptepindex = va >> PDRSHIFT;
retry:
/*
* Get the page directory entry
*/
ptepa = pmap->pm_pdir[ptepindex];
/*
* This supports switching from a 4MB page to a
* normal 4K page.
*/
if (ptepa & PG_PS) {
(void)pmap_demote_pde(pmap, &pmap->pm_pdir[ptepindex], va);
ptepa = pmap->pm_pdir[ptepindex];
}
/*
* If the page table page is mapped, we just increment the
* hold count, and activate it.
*/
if (ptepa) {
m = PHYS_TO_VM_PAGE(ptepa & PG_FRAME);
m->wire_count++;
} else {
/*
* The pte page isn't mapped, or it has been
* deallocated; allocate it.
*/
m = _pmap_allocpte(pmap, ptepindex, flags);
if (m == NULL && (flags & PMAP_ENTER_NOSLEEP) == 0)
goto retry;
}
return (m);
}
/***************************************************
* Pmap allocation/deallocation routines.
***************************************************/
/*
* Release any resources held by the given physical map.
* Called when a pmap initialized by pmap_pinit is being released.
* Should only be called if the map contains no valid mappings.
*/
static void
__CONCAT(PMTYPE, release)(pmap_t pmap)
{
vm_page_t m;
int i;
KASSERT(pmap->pm_stats.resident_count == 0,
("pmap_release: pmap resident count %ld != 0",
pmap->pm_stats.resident_count));
KASSERT(vm_radix_is_empty(&pmap->pm_root),
("pmap_release: pmap has reserved page table page(s)"));
KASSERT(CPU_EMPTY(&pmap->pm_active),
("releasing active pmap %p", pmap));
pmap_qremove((vm_offset_t)pmap->pm_pdir, NPGPTD);
for (i = 0; i < NPGPTD; i++) {
m = pmap->pm_ptdpg[i];
#ifdef PMAP_PAE_COMP
KASSERT(VM_PAGE_TO_PHYS(m) == (pmap->pm_pdpt[i] & PG_FRAME),
("pmap_release: got wrong ptd page"));
#endif
vm_page_unwire_noq(m);
vm_page_free(m);
}
}
/*
* Grow the number of kernel page table entries, if needed.
*/
static void
__CONCAT(PMTYPE, growkernel)(vm_offset_t addr)
{
vm_paddr_t ptppaddr;
vm_page_t nkpg;
pd_entry_t newpdir;
mtx_assert(&kernel_map->system_mtx, MA_OWNED);
addr = roundup2(addr, NBPDR);
if (addr - 1 >= vm_map_max(kernel_map))
addr = vm_map_max(kernel_map);
while (kernel_vm_end < addr) {
if (pdir_pde(PTD, kernel_vm_end)) {
kernel_vm_end = (kernel_vm_end + NBPDR) & ~PDRMASK;
if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) {
kernel_vm_end = vm_map_max(kernel_map);
break;
}
continue;
}
nkpg = vm_page_alloc(NULL, kernel_vm_end >> PDRSHIFT,
VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
VM_ALLOC_ZERO);
if (nkpg == NULL)
panic("pmap_growkernel: no memory to grow kernel");
nkpt++;
if ((nkpg->flags & PG_ZERO) == 0)
pmap_zero_page(nkpg);
ptppaddr = VM_PAGE_TO_PHYS(nkpg);
newpdir = (pd_entry_t) (ptppaddr | PG_V | PG_RW | PG_A | PG_M);
pdir_pde(KPTD, kernel_vm_end) = newpdir;
pmap_kenter_pde(kernel_vm_end, newpdir);
kernel_vm_end = (kernel_vm_end + NBPDR) & ~PDRMASK;
if (kernel_vm_end - 1 >= vm_map_max(kernel_map)) {
kernel_vm_end = vm_map_max(kernel_map);
break;
}
}
}
/***************************************************
* Page management routines.
***************************************************/
CTASSERT(sizeof(struct pv_chunk) == PAGE_SIZE);
CTASSERT(_NPCM == 11);
CTASSERT(_NPCPV == 336);
static __inline struct pv_chunk *
pv_to_chunk(pv_entry_t pv)
{
return ((struct pv_chunk *)((uintptr_t)pv & ~(uintptr_t)PAGE_MASK));
}
#define PV_PMAP(pv) (pv_to_chunk(pv)->pc_pmap)
#define PC_FREE0_9 0xfffffffful /* Free values for index 0 through 9 */
#define PC_FREE10 0x0000fffful /* Free values for index 10 */
static const uint32_t pc_freemask[_NPCM] = {
PC_FREE0_9, PC_FREE0_9, PC_FREE0_9,
PC_FREE0_9, PC_FREE0_9, PC_FREE0_9,
PC_FREE0_9, PC_FREE0_9, PC_FREE0_9,
PC_FREE0_9, PC_FREE10
};
#ifdef PV_STATS
extern int pc_chunk_count, pc_chunk_allocs, pc_chunk_frees, pc_chunk_tryfail;
extern long pv_entry_frees, pv_entry_allocs;
extern int pv_entry_spare;
#endif
/*
* We are in a serious low memory condition. Resort to
* drastic measures to free some pages so we can allocate
* another pv entry chunk.
*/
static vm_page_t
pmap_pv_reclaim(pmap_t locked_pmap)
{
struct pch newtail;
struct pv_chunk *pc;
struct md_page *pvh;
pd_entry_t *pde;
pmap_t pmap;
pt_entry_t *pte, tpte;
pv_entry_t pv;
vm_offset_t va;
vm_page_t m, m_pc;
struct spglist free;
uint32_t inuse;
int bit, field, freed;
PMAP_LOCK_ASSERT(locked_pmap, MA_OWNED);
pmap = NULL;
m_pc = NULL;
SLIST_INIT(&free);
TAILQ_INIT(&newtail);
while ((pc = TAILQ_FIRST(&pv_chunks)) != NULL && (pv_vafree == 0 ||
SLIST_EMPTY(&free))) {
TAILQ_REMOVE(&pv_chunks, pc, pc_lru);
if (pmap != pc->pc_pmap) {
if (pmap != NULL) {
pmap_invalidate_all_int(pmap);
if (pmap != locked_pmap)
PMAP_UNLOCK(pmap);
}
pmap = pc->pc_pmap;
/* Avoid deadlock and lock recursion. */
if (pmap > locked_pmap)
PMAP_LOCK(pmap);
else if (pmap != locked_pmap && !PMAP_TRYLOCK(pmap)) {
pmap = NULL;
TAILQ_INSERT_TAIL(&newtail, pc, pc_lru);
continue;
}
}
/*
* Destroy every non-wired, 4 KB page mapping in the chunk.
*/
freed = 0;
for (field = 0; field < _NPCM; field++) {
for (inuse = ~pc->pc_map[field] & pc_freemask[field];
inuse != 0; inuse &= ~(1UL << bit)) {
bit = bsfl(inuse);
pv = &pc->pc_pventry[field * 32 + bit];
va = pv->pv_va;
pde = pmap_pde(pmap, va);
if ((*pde & PG_PS) != 0)
continue;
pte = __CONCAT(PMTYPE, pte)(pmap, va);
tpte = *pte;
if ((tpte & PG_W) == 0)
tpte = pte_load_clear(pte);
pmap_pte_release(pte);
if ((tpte & PG_W) != 0)
continue;
KASSERT(tpte != 0,
("pmap_pv_reclaim: pmap %p va %x zero pte",
pmap, va));
if ((tpte & PG_G) != 0)
pmap_invalidate_page_int(pmap, va);
m = PHYS_TO_VM_PAGE(tpte & PG_FRAME);
if ((tpte & (PG_M | PG_RW)) == (PG_M | PG_RW))
vm_page_dirty(m);
if ((tpte & PG_A) != 0)
vm_page_aflag_set(m, PGA_REFERENCED);
TAILQ_REMOVE(&m->md.pv_list, pv, pv_next);
if (TAILQ_EMPTY(&m->md.pv_list) &&
(m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
if (TAILQ_EMPTY(&pvh->pv_list)) {
vm_page_aflag_clear(m,
PGA_WRITEABLE);
}
}
pc->pc_map[field] |= 1UL << bit;
pmap_unuse_pt(pmap, va, &free);
freed++;
}
}
if (freed == 0) {
TAILQ_INSERT_TAIL(&newtail, pc, pc_lru);
continue;
}
/* Every freed mapping is for a 4 KB page. */
pmap->pm_stats.resident_count -= freed;
PV_STAT(pv_entry_frees += freed);
PV_STAT(pv_entry_spare += freed);
pv_entry_count -= freed;
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
for (field = 0; field < _NPCM; field++)
if (pc->pc_map[field] != pc_freemask[field]) {
TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc,
pc_list);
TAILQ_INSERT_TAIL(&newtail, pc, pc_lru);
/*
* One freed pv entry in locked_pmap is
* sufficient.
*/
if (pmap == locked_pmap)
goto out;
break;
}
if (field == _NPCM) {
PV_STAT(pv_entry_spare -= _NPCPV);
PV_STAT(pc_chunk_count--);
PV_STAT(pc_chunk_frees++);
/* Entire chunk is free; return it. */
m_pc = PHYS_TO_VM_PAGE(pmap_kextract((vm_offset_t)pc));
pmap_qremove((vm_offset_t)pc, 1);
pmap_ptelist_free(&pv_vafree, (vm_offset_t)pc);
break;
}
}
out:
TAILQ_CONCAT(&pv_chunks, &newtail, pc_lru);
if (pmap != NULL) {
pmap_invalidate_all_int(pmap);
if (pmap != locked_pmap)
PMAP_UNLOCK(pmap);
}
if (m_pc == NULL && pv_vafree != 0 && SLIST_EMPTY(&free)) {
m_pc = SLIST_FIRST(&free);
SLIST_REMOVE_HEAD(&free, plinks.s.ss);
/* Recycle a freed page table page. */
m_pc->wire_count = 1;
}
vm_page_free_pages_toq(&free, true);
return (m_pc);
}
/*
* Free the pv_entry back to the free list.
*/
static void
free_pv_entry(pmap_t pmap, pv_entry_t pv)
{
struct pv_chunk *pc;
int idx, field, bit;
rw_assert(&pvh_global_lock, RA_WLOCKED);
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
PV_STAT(pv_entry_frees++);
PV_STAT(pv_entry_spare++);
pv_entry_count--;
pc = pv_to_chunk(pv);
idx = pv - &pc->pc_pventry[0];
field = idx / 32;
bit = idx % 32;
pc->pc_map[field] |= 1ul << bit;
for (idx = 0; idx < _NPCM; idx++)
if (pc->pc_map[idx] != pc_freemask[idx]) {
/*
* 98% of the time, pc is already at the head of the
* list. If it isn't already, move it to the head.
*/
if (__predict_false(TAILQ_FIRST(&pmap->pm_pvchunk) !=
pc)) {
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc,
pc_list);
}
return;
}
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
free_pv_chunk(pc);
}
static void
free_pv_chunk(struct pv_chunk *pc)
{
vm_page_t m;
TAILQ_REMOVE(&pv_chunks, pc, pc_lru);
PV_STAT(pv_entry_spare -= _NPCPV);
PV_STAT(pc_chunk_count--);
PV_STAT(pc_chunk_frees++);
/* Entire chunk is free; return it. */
m = PHYS_TO_VM_PAGE(pmap_kextract((vm_offset_t)pc));
pmap_qremove((vm_offset_t)pc, 1);
vm_page_unwire(m, PQ_NONE);
vm_page_free(m);
pmap_ptelist_free(&pv_vafree, (vm_offset_t)pc);
}
/*
* Get a new pv_entry, allocating a block from the system
* when needed.
*/
static pv_entry_t
get_pv_entry(pmap_t pmap, boolean_t try)
{
static const struct timeval printinterval = { 60, 0 };
static struct timeval lastprint;
int bit, field;
pv_entry_t pv;
struct pv_chunk *pc;
vm_page_t m;
rw_assert(&pvh_global_lock, RA_WLOCKED);
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
PV_STAT(pv_entry_allocs++);
pv_entry_count++;
if (pv_entry_count > pv_entry_high_water)
if (ratecheck(&lastprint, &printinterval))
printf("Approaching the limit on PV entries, consider "
"increasing either the vm.pmap.shpgperproc or the "
"vm.pmap.pv_entries tunable.\n");
retry:
pc = TAILQ_FIRST(&pmap->pm_pvchunk);
if (pc != NULL) {
for (field = 0; field < _NPCM; field++) {
if (pc->pc_map[field]) {
bit = bsfl(pc->pc_map[field]);
break;
}
}
if (field < _NPCM) {
pv = &pc->pc_pventry[field * 32 + bit];
pc->pc_map[field] &= ~(1ul << bit);
/* If this was the last item, move it to tail */
for (field = 0; field < _NPCM; field++)
if (pc->pc_map[field] != 0) {
PV_STAT(pv_entry_spare--);
return (pv); /* not full, return */
}
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
TAILQ_INSERT_TAIL(&pmap->pm_pvchunk, pc, pc_list);
PV_STAT(pv_entry_spare--);
return (pv);
}
}
/*
* Access to the ptelist "pv_vafree" is synchronized by the pvh
* global lock. If "pv_vafree" is currently non-empty, it will
* remain non-empty until pmap_ptelist_alloc() completes.
*/
if (pv_vafree == 0 || (m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == NULL) {
if (try) {
pv_entry_count--;
PV_STAT(pc_chunk_tryfail++);
return (NULL);
}
m = pmap_pv_reclaim(pmap);
if (m == NULL)
goto retry;
}
PV_STAT(pc_chunk_count++);
PV_STAT(pc_chunk_allocs++);
pc = (struct pv_chunk *)pmap_ptelist_alloc(&pv_vafree);
pmap_qenter((vm_offset_t)pc, &m, 1);
pc->pc_pmap = pmap;
pc->pc_map[0] = pc_freemask[0] & ~1ul; /* preallocated bit 0 */
for (field = 1; field < _NPCM; field++)
pc->pc_map[field] = pc_freemask[field];
TAILQ_INSERT_TAIL(&pv_chunks, pc, pc_lru);
pv = &pc->pc_pventry[0];
TAILQ_INSERT_HEAD(&pmap->pm_pvchunk, pc, pc_list);
PV_STAT(pv_entry_spare += _NPCPV - 1);
return (pv);
}
static __inline pv_entry_t
pmap_pvh_remove(struct md_page *pvh, pmap_t pmap, vm_offset_t va)
{
pv_entry_t pv;
rw_assert(&pvh_global_lock, RA_WLOCKED);
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
if (pmap == PV_PMAP(pv) && va == pv->pv_va) {
TAILQ_REMOVE(&pvh->pv_list, pv, pv_next);
break;
}
}
return (pv);
}
static void
pmap_pv_demote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa)
{
struct md_page *pvh;
pv_entry_t pv;
vm_offset_t va_last;
vm_page_t m;
rw_assert(&pvh_global_lock, RA_WLOCKED);
KASSERT((pa & PDRMASK) == 0,
("pmap_pv_demote_pde: pa is not 4mpage aligned"));
/*
* Transfer the 4mpage's pv entry for this mapping to the first
* page's pv list.
*/
pvh = pa_to_pvh(pa);
va = trunc_4mpage(va);
pv = pmap_pvh_remove(pvh, pmap, va);
KASSERT(pv != NULL, ("pmap_pv_demote_pde: pv not found"));
m = PHYS_TO_VM_PAGE(pa);
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
/* Instantiate the remaining NPTEPG - 1 pv entries. */
va_last = va + NBPDR - PAGE_SIZE;
do {
m++;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_pv_demote_pde: page %p is not managed", m));
va += PAGE_SIZE;
pmap_insert_entry(pmap, va, m);
} while (va < va_last);
}
#if VM_NRESERVLEVEL > 0
static void
pmap_pv_promote_pde(pmap_t pmap, vm_offset_t va, vm_paddr_t pa)
{
struct md_page *pvh;
pv_entry_t pv;
vm_offset_t va_last;
vm_page_t m;
rw_assert(&pvh_global_lock, RA_WLOCKED);
KASSERT((pa & PDRMASK) == 0,
("pmap_pv_promote_pde: pa is not 4mpage aligned"));
/*
* Transfer the first page's pv entry for this mapping to the
* 4mpage's pv list. Aside from avoiding the cost of a call
* to get_pv_entry(), a transfer avoids the possibility that
* get_pv_entry() calls pmap_collect() and that pmap_collect()
* removes one of the mappings that is being promoted.
*/
m = PHYS_TO_VM_PAGE(pa);
va = trunc_4mpage(va);
pv = pmap_pvh_remove(&m->md, pmap, va);
KASSERT(pv != NULL, ("pmap_pv_promote_pde: pv not found"));
pvh = pa_to_pvh(pa);
TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next);
/* Free the remaining NPTEPG - 1 pv entries. */
va_last = va + NBPDR - PAGE_SIZE;
do {
m++;
va += PAGE_SIZE;
pmap_pvh_free(&m->md, pmap, va);
} while (va < va_last);
}
#endif /* VM_NRESERVLEVEL > 0 */
static void
pmap_pvh_free(struct md_page *pvh, pmap_t pmap, vm_offset_t va)
{
pv_entry_t pv;
pv = pmap_pvh_remove(pvh, pmap, va);
KASSERT(pv != NULL, ("pmap_pvh_free: pv not found"));
free_pv_entry(pmap, pv);
}
static void
pmap_remove_entry(pmap_t pmap, vm_page_t m, vm_offset_t va)
{
struct md_page *pvh;
rw_assert(&pvh_global_lock, RA_WLOCKED);
pmap_pvh_free(&m->md, pmap, va);
if (TAILQ_EMPTY(&m->md.pv_list) && (m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
if (TAILQ_EMPTY(&pvh->pv_list))
vm_page_aflag_clear(m, PGA_WRITEABLE);
}
}
/*
* Create a pv entry for page at pa for
* (pmap, va).
*/
static void
pmap_insert_entry(pmap_t pmap, vm_offset_t va, vm_page_t m)
{
pv_entry_t pv;
rw_assert(&pvh_global_lock, RA_WLOCKED);
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
pv = get_pv_entry(pmap, FALSE);
pv->pv_va = va;
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
}
/*
* Conditionally create a pv entry.
*/
static boolean_t
pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va, vm_page_t m)
{
pv_entry_t pv;
rw_assert(&pvh_global_lock, RA_WLOCKED);
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
if (pv_entry_count < pv_entry_high_water &&
(pv = get_pv_entry(pmap, TRUE)) != NULL) {
pv->pv_va = va;
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
return (TRUE);
} else
return (FALSE);
}
/*
* Create the pv entries for each of the pages within a superpage.
*/
static bool
pmap_pv_insert_pde(pmap_t pmap, vm_offset_t va, pd_entry_t pde, u_int flags)
{
struct md_page *pvh;
pv_entry_t pv;
bool noreclaim;
rw_assert(&pvh_global_lock, RA_WLOCKED);
noreclaim = (flags & PMAP_ENTER_NORECLAIM) != 0;
if ((noreclaim && pv_entry_count >= pv_entry_high_water) ||
(pv = get_pv_entry(pmap, noreclaim)) == NULL)
return (false);
pv->pv_va = va;
pvh = pa_to_pvh(pde & PG_PS_FRAME);
TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next);
return (true);
}
/*
* Fills a page table page with mappings to consecutive physical pages.
*/
static void
pmap_fill_ptp(pt_entry_t *firstpte, pt_entry_t newpte)
{
pt_entry_t *pte;
for (pte = firstpte; pte < firstpte + NPTEPG; pte++) {
*pte = newpte;
newpte += PAGE_SIZE;
}
}
/*
* Tries to demote a 2- or 4MB page mapping. If demotion fails, the
* 2- or 4MB page mapping is invalidated.
*/
static boolean_t
pmap_demote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va)
{
pd_entry_t newpde, oldpde;
pt_entry_t *firstpte, newpte;
vm_paddr_t mptepa;
vm_page_t mpte;
struct spglist free;
vm_offset_t sva;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
oldpde = *pde;
KASSERT((oldpde & (PG_PS | PG_V)) == (PG_PS | PG_V),
("pmap_demote_pde: oldpde is missing PG_PS and/or PG_V"));
if ((oldpde & PG_A) == 0 || (mpte = pmap_remove_pt_page(pmap, va)) ==
NULL) {
KASSERT((oldpde & PG_W) == 0,
("pmap_demote_pde: page table page for a wired mapping"
" is missing"));
/*
* Invalidate the 2- or 4MB page mapping and return
* "failure" if the mapping was never accessed or the
* allocation of the new page table page fails.
*/
if ((oldpde & PG_A) == 0 || (mpte = vm_page_alloc(NULL,
va >> PDRSHIFT, VM_ALLOC_NOOBJ | VM_ALLOC_NORMAL |
VM_ALLOC_WIRED)) == NULL) {
SLIST_INIT(&free);
sva = trunc_4mpage(va);
pmap_remove_pde(pmap, pde, sva, &free);
if ((oldpde & PG_G) == 0)
pmap_invalidate_pde_page(pmap, sva, oldpde);
vm_page_free_pages_toq(&free, true);
CTR2(KTR_PMAP, "pmap_demote_pde: failure for va %#x"
" in pmap %p", va, pmap);
return (FALSE);
}
if (pmap != kernel_pmap)
pmap->pm_stats.resident_count++;
}
mptepa = VM_PAGE_TO_PHYS(mpte);
/*
* If the page mapping is in the kernel's address space, then the
* KPTmap can provide access to the page table page. Otherwise,
* temporarily map the page table page (mpte) into the kernel's
* address space at either PADDR1 or PADDR2.
*/
if (pmap == kernel_pmap)
firstpte = &KPTmap[i386_btop(trunc_4mpage(va))];
else if (curthread->td_pinned > 0 && rw_wowned(&pvh_global_lock)) {
if ((*PMAP1 & PG_FRAME) != mptepa) {
*PMAP1 = mptepa | PG_RW | PG_V | PG_A | PG_M;
#ifdef SMP
PMAP1cpu = PCPU_GET(cpuid);
#endif
invlcaddr(PADDR1);
PMAP1changed++;
} else
#ifdef SMP
if (PMAP1cpu != PCPU_GET(cpuid)) {
PMAP1cpu = PCPU_GET(cpuid);
invlcaddr(PADDR1);
PMAP1changedcpu++;
} else
#endif
PMAP1unchanged++;
firstpte = PADDR1;
} else {
mtx_lock(&PMAP2mutex);
if ((*PMAP2 & PG_FRAME) != mptepa) {
*PMAP2 = mptepa | PG_RW | PG_V | PG_A | PG_M;
pmap_invalidate_page_int(kernel_pmap,
(vm_offset_t)PADDR2);
}
firstpte = PADDR2;
}
newpde = mptepa | PG_M | PG_A | (oldpde & PG_U) | PG_RW | PG_V;
KASSERT((oldpde & PG_A) != 0,
("pmap_demote_pde: oldpde is missing PG_A"));
KASSERT((oldpde & (PG_M | PG_RW)) != PG_RW,
("pmap_demote_pde: oldpde is missing PG_M"));
newpte = oldpde & ~PG_PS;
if ((newpte & PG_PDE_PAT) != 0)
newpte ^= PG_PDE_PAT | PG_PTE_PAT;
/*
* If the page table page is new, initialize it.
*/
if (mpte->wire_count == 1) {
mpte->wire_count = NPTEPG;
pmap_fill_ptp(firstpte, newpte);
}
KASSERT((*firstpte & PG_FRAME) == (newpte & PG_FRAME),
("pmap_demote_pde: firstpte and newpte map different physical"
" addresses"));
/*
* If the mapping has changed attributes, update the page table
* entries.
*/
if ((*firstpte & PG_PTE_PROMOTE) != (newpte & PG_PTE_PROMOTE))
pmap_fill_ptp(firstpte, newpte);
/*
* Demote the mapping. This pmap is locked. The old PDE has
* PG_A set. If the old PDE has PG_RW set, it also has PG_M
* set. Thus, there is no danger of a race with another
* processor changing the setting of PG_A and/or PG_M between
* the read above and the store below.
*/
if (workaround_erratum383)
pmap_update_pde(pmap, va, pde, newpde);
else if (pmap == kernel_pmap)
pmap_kenter_pde(va, newpde);
else
pde_store(pde, newpde);
if (firstpte == PADDR2)
mtx_unlock(&PMAP2mutex);
/*
* Invalidate the recursive mapping of the page table page.
*/
pmap_invalidate_page_int(pmap, (vm_offset_t)vtopte(va));
/*
* Demote the pv entry. This depends on the earlier demotion
* of the mapping. Specifically, the (re)creation of a per-
* page pv entry might trigger the execution of pmap_collect(),
* which might reclaim a newly (re)created per-page pv entry
* and destroy the associated mapping. In order to destroy
* the mapping, the PDE must have already changed from mapping
* the 2mpage to referencing the page table page.
*/
if ((oldpde & PG_MANAGED) != 0)
pmap_pv_demote_pde(pmap, va, oldpde & PG_PS_FRAME);
pmap_pde_demotions++;
CTR2(KTR_PMAP, "pmap_demote_pde: success for va %#x"
" in pmap %p", va, pmap);
return (TRUE);
}
/*
* Removes a 2- or 4MB page mapping from the kernel pmap.
*/
static void
pmap_remove_kernel_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va)
{
pd_entry_t newpde;
vm_paddr_t mptepa;
vm_page_t mpte;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
mpte = pmap_remove_pt_page(pmap, va);
if (mpte == NULL)
panic("pmap_remove_kernel_pde: Missing pt page.");
mptepa = VM_PAGE_TO_PHYS(mpte);
newpde = mptepa | PG_M | PG_A | PG_RW | PG_V;
/*
* Initialize the page table page.
*/
pagezero((void *)&KPTmap[i386_btop(trunc_4mpage(va))]);
/*
* Remove the mapping.
*/
if (workaround_erratum383)
pmap_update_pde(pmap, va, pde, newpde);
else
pmap_kenter_pde(va, newpde);
/*
* Invalidate the recursive mapping of the page table page.
*/
pmap_invalidate_page_int(pmap, (vm_offset_t)vtopte(va));
}
/*
* pmap_remove_pde: unmap a 2- or 4MB superpage mapping from a process
*/
static void
pmap_remove_pde(pmap_t pmap, pd_entry_t *pdq, vm_offset_t sva,
struct spglist *free)
{
struct md_page *pvh;
pd_entry_t oldpde;
vm_offset_t eva, va;
vm_page_t m, mpte;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
KASSERT((sva & PDRMASK) == 0,
("pmap_remove_pde: sva is not 4mpage aligned"));
oldpde = pte_load_clear(pdq);
if (oldpde & PG_W)
pmap->pm_stats.wired_count -= NBPDR / PAGE_SIZE;
/*
* Machines that don't support invlpg also don't support
* PG_G.
*/
if ((oldpde & PG_G) != 0)
pmap_invalidate_pde_page(kernel_pmap, sva, oldpde);
pmap->pm_stats.resident_count -= NBPDR / PAGE_SIZE;
if (oldpde & PG_MANAGED) {
pvh = pa_to_pvh(oldpde & PG_PS_FRAME);
pmap_pvh_free(pvh, pmap, sva);
eva = sva + NBPDR;
for (va = sva, m = PHYS_TO_VM_PAGE(oldpde & PG_PS_FRAME);
va < eva; va += PAGE_SIZE, m++) {
if ((oldpde & (PG_M | PG_RW)) == (PG_M | PG_RW))
vm_page_dirty(m);
if (oldpde & PG_A)
vm_page_aflag_set(m, PGA_REFERENCED);
if (TAILQ_EMPTY(&m->md.pv_list) &&
TAILQ_EMPTY(&pvh->pv_list))
vm_page_aflag_clear(m, PGA_WRITEABLE);
}
}
if (pmap == kernel_pmap) {
pmap_remove_kernel_pde(pmap, pdq, sva);
} else {
mpte = pmap_remove_pt_page(pmap, sva);
if (mpte != NULL) {
pmap->pm_stats.resident_count--;
KASSERT(mpte->wire_count == NPTEPG,
("pmap_remove_pde: pte page wire count error"));
mpte->wire_count = 0;
pmap_add_delayed_free_list(mpte, free, FALSE);
}
}
}
/*
* pmap_remove_pte: unmap a single 4KB page from a process
*/
static int
pmap_remove_pte(pmap_t pmap, pt_entry_t *ptq, vm_offset_t va,
struct spglist *free)
{
pt_entry_t oldpte;
vm_page_t m;
rw_assert(&pvh_global_lock, RA_WLOCKED);
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
oldpte = pte_load_clear(ptq);
KASSERT(oldpte != 0,
("pmap_remove_pte: pmap %p va %x zero pte", pmap, va));
if (oldpte & PG_W)
pmap->pm_stats.wired_count -= 1;
/*
* Machines that don't support invlpg also don't support
* PG_G.
*/
if (oldpte & PG_G)
pmap_invalidate_page_int(kernel_pmap, va);
pmap->pm_stats.resident_count -= 1;
if (oldpte & PG_MANAGED) {
m = PHYS_TO_VM_PAGE(oldpte & PG_FRAME);
if ((oldpte & (PG_M | PG_RW)) == (PG_M | PG_RW))
vm_page_dirty(m);
if (oldpte & PG_A)
vm_page_aflag_set(m, PGA_REFERENCED);
pmap_remove_entry(pmap, m, va);
}
return (pmap_unuse_pt(pmap, va, free));
}
/*
* Remove a single page from a process address space
*/
static void
pmap_remove_page(pmap_t pmap, vm_offset_t va, struct spglist *free)
{
pt_entry_t *pte;
rw_assert(&pvh_global_lock, RA_WLOCKED);
KASSERT(curthread->td_pinned > 0, ("curthread not pinned"));
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
if ((pte = pmap_pte_quick(pmap, va)) == NULL || *pte == 0)
return;
pmap_remove_pte(pmap, pte, va, free);
pmap_invalidate_page_int(pmap, va);
}
/*
* Removes the specified range of addresses from the page table page.
*/
static bool
pmap_remove_ptes(pmap_t pmap, vm_offset_t sva, vm_offset_t eva,
struct spglist *free)
{
pt_entry_t *pte;
bool anyvalid;
rw_assert(&pvh_global_lock, RA_WLOCKED);
KASSERT(curthread->td_pinned > 0, ("curthread not pinned"));
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
anyvalid = false;
for (pte = pmap_pte_quick(pmap, sva); sva != eva; pte++,
sva += PAGE_SIZE) {
if (*pte == 0)
continue;
/*
* The TLB entry for a PG_G mapping is invalidated by
* pmap_remove_pte().
*/
if ((*pte & PG_G) == 0)
anyvalid = true;
if (pmap_remove_pte(pmap, pte, sva, free))
break;
}
return (anyvalid);
}
/*
* Remove the given range of addresses from the specified map.
*
* It is assumed that the start and end are properly
* rounded to the page size.
*/
static void
__CONCAT(PMTYPE, remove)(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
vm_offset_t pdnxt;
pd_entry_t ptpaddr;
struct spglist free;
int anyvalid;
/*
* Perform an unsynchronized read. This is, however, safe.
*/
if (pmap->pm_stats.resident_count == 0)
return;
anyvalid = 0;
SLIST_INIT(&free);
rw_wlock(&pvh_global_lock);
sched_pin();
PMAP_LOCK(pmap);
/*
* Special handling for removing a single page: a very
* common operation for which some of the code below can
* be short-circuited.
*/
if ((sva + PAGE_SIZE == eva) &&
((pmap->pm_pdir[(sva >> PDRSHIFT)] & PG_PS) == 0)) {
pmap_remove_page(pmap, sva, &free);
goto out;
}
for (; sva < eva; sva = pdnxt) {
u_int pdirindex;
/*
* Calculate index for next page table.
*/
pdnxt = (sva + NBPDR) & ~PDRMASK;
if (pdnxt < sva)
pdnxt = eva;
if (pmap->pm_stats.resident_count == 0)
break;
pdirindex = sva >> PDRSHIFT;
ptpaddr = pmap->pm_pdir[pdirindex];
/*
* Weed out invalid mappings. Note: we assume that the page
* directory table is always allocated and lives in kernel
* virtual address space.
*/
if (ptpaddr == 0)
continue;
/*
* Check for large page.
*/
if ((ptpaddr & PG_PS) != 0) {
/*
* Are we removing the entire large page? If not,
* demote the mapping and fall through.
*/
if (sva + NBPDR == pdnxt && eva >= pdnxt) {
/*
* The TLB entry for a PG_G mapping is
* invalidated by pmap_remove_pde().
*/
if ((ptpaddr & PG_G) == 0)
anyvalid = 1;
pmap_remove_pde(pmap,
&pmap->pm_pdir[pdirindex], sva, &free);
continue;
} else if (!pmap_demote_pde(pmap,
&pmap->pm_pdir[pdirindex], sva)) {
/* The large page mapping was destroyed. */
continue;
}
}
/*
* Limit our scan to either the end of the va represented
* by the current page table page, or to the end of the
* range being removed.
*/
if (pdnxt > eva)
pdnxt = eva;
if (pmap_remove_ptes(pmap, sva, pdnxt, &free))
anyvalid = 1;
}
out:
sched_unpin();
if (anyvalid)
pmap_invalidate_all_int(pmap);
rw_wunlock(&pvh_global_lock);
PMAP_UNLOCK(pmap);
vm_page_free_pages_toq(&free, true);
}
/*
* Routine: pmap_remove_all
* Function:
* Removes this physical page from
* all physical maps in which it resides.
* Reflects back modify bits to the pager.
*
* Notes:
* Original versions of this routine were very
* inefficient because they iteratively called
* pmap_remove (slow...)
*/
static void
__CONCAT(PMTYPE, remove_all)(vm_page_t m)
{
struct md_page *pvh;
pv_entry_t pv;
pmap_t pmap;
pt_entry_t *pte, tpte;
pd_entry_t *pde;
vm_offset_t va;
struct spglist free;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_remove_all: page %p is not managed", m));
SLIST_INIT(&free);
rw_wlock(&pvh_global_lock);
sched_pin();
if ((m->flags & PG_FICTITIOUS) != 0)
goto small_mappings;
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
while ((pv = TAILQ_FIRST(&pvh->pv_list)) != NULL) {
va = pv->pv_va;
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pde = pmap_pde(pmap, va);
(void)pmap_demote_pde(pmap, pde, va);
PMAP_UNLOCK(pmap);
}
small_mappings:
while ((pv = TAILQ_FIRST(&m->md.pv_list)) != NULL) {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pmap->pm_stats.resident_count--;
pde = pmap_pde(pmap, pv->pv_va);
KASSERT((*pde & PG_PS) == 0, ("pmap_remove_all: found"
" a 4mpage in page %p's pv list", m));
pte = pmap_pte_quick(pmap, pv->pv_va);
tpte = pte_load_clear(pte);
KASSERT(tpte != 0, ("pmap_remove_all: pmap %p va %x zero pte",
pmap, pv->pv_va));
if (tpte & PG_W)
pmap->pm_stats.wired_count--;
if (tpte & PG_A)
vm_page_aflag_set(m, PGA_REFERENCED);
/*
* Update the vm_page_t clean and reference bits.
*/
if ((tpte & (PG_M | PG_RW)) == (PG_M | PG_RW))
vm_page_dirty(m);
pmap_unuse_pt(pmap, pv->pv_va, &free);
pmap_invalidate_page_int(pmap, pv->pv_va);
TAILQ_REMOVE(&m->md.pv_list, pv, pv_next);
free_pv_entry(pmap, pv);
PMAP_UNLOCK(pmap);
}
vm_page_aflag_clear(m, PGA_WRITEABLE);
sched_unpin();
rw_wunlock(&pvh_global_lock);
vm_page_free_pages_toq(&free, true);
}
/*
* pmap_protect_pde: apply a protection change to a 4mpage in a process
*/
static boolean_t
pmap_protect_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t sva, vm_prot_t prot)
{
pd_entry_t newpde, oldpde;
vm_offset_t eva, va;
vm_page_t m;
boolean_t anychanged;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
KASSERT((sva & PDRMASK) == 0,
("pmap_protect_pde: sva is not 4mpage aligned"));
anychanged = FALSE;
retry:
oldpde = newpde = *pde;
if ((oldpde & (PG_MANAGED | PG_M | PG_RW)) ==
(PG_MANAGED | PG_M | PG_RW)) {
eva = sva + NBPDR;
for (va = sva, m = PHYS_TO_VM_PAGE(oldpde & PG_PS_FRAME);
va < eva; va += PAGE_SIZE, m++)
vm_page_dirty(m);
}
if ((prot & VM_PROT_WRITE) == 0)
newpde &= ~(PG_RW | PG_M);
#ifdef PMAP_PAE_COMP
- if ((prot & VM_PROT_EXECUTE) == 0)
+ if ((prot & VM_PROT_EXECUTE) == 0 && !i386_read_exec)
newpde |= pg_nx;
#endif
if (newpde != oldpde) {
/*
* As an optimization to future operations on this PDE, clear
* PG_PROMOTED. The impending invalidation will remove any
* lingering 4KB page mappings from the TLB.
*/
if (!pde_cmpset(pde, oldpde, newpde & ~PG_PROMOTED))
goto retry;
if ((oldpde & PG_G) != 0)
pmap_invalidate_pde_page(kernel_pmap, sva, oldpde);
else
anychanged = TRUE;
}
return (anychanged);
}
/*
* Set the physical protection on the
* specified range of this map as requested.
*/
static void
__CONCAT(PMTYPE, protect)(pmap_t pmap, vm_offset_t sva, vm_offset_t eva,
vm_prot_t prot)
{
vm_offset_t pdnxt;
pd_entry_t ptpaddr;
pt_entry_t *pte;
boolean_t anychanged, pv_lists_locked;
KASSERT((prot & ~VM_PROT_ALL) == 0, ("invalid prot %x", prot));
if (prot == VM_PROT_NONE) {
pmap_remove(pmap, sva, eva);
return;
}
#ifdef PMAP_PAE_COMP
if ((prot & (VM_PROT_WRITE | VM_PROT_EXECUTE)) ==
(VM_PROT_WRITE | VM_PROT_EXECUTE))
return;
#else
if (prot & VM_PROT_WRITE)
return;
#endif
if (pmap_is_current(pmap))
pv_lists_locked = FALSE;
else {
pv_lists_locked = TRUE;
resume:
rw_wlock(&pvh_global_lock);
sched_pin();
}
anychanged = FALSE;
PMAP_LOCK(pmap);
for (; sva < eva; sva = pdnxt) {
pt_entry_t obits, pbits;
u_int pdirindex;
pdnxt = (sva + NBPDR) & ~PDRMASK;
if (pdnxt < sva)
pdnxt = eva;
pdirindex = sva >> PDRSHIFT;
ptpaddr = pmap->pm_pdir[pdirindex];
/*
* Weed out invalid mappings. Note: we assume that the page
* directory table is always allocated and lives in kernel
* virtual address space.
*/
if (ptpaddr == 0)
continue;
/*
* Check for large page.
*/
if ((ptpaddr & PG_PS) != 0) {
/*
* Are we protecting the entire large page? If not,
* demote the mapping and fall through.
*/
if (sva + NBPDR == pdnxt && eva >= pdnxt) {
/*
* The TLB entry for a PG_G mapping is
* invalidated by pmap_protect_pde().
*/
if (pmap_protect_pde(pmap,
&pmap->pm_pdir[pdirindex], sva, prot))
anychanged = TRUE;
continue;
} else {
if (!pv_lists_locked) {
pv_lists_locked = TRUE;
if (!rw_try_wlock(&pvh_global_lock)) {
if (anychanged)
pmap_invalidate_all_int(
pmap);
PMAP_UNLOCK(pmap);
goto resume;
}
sched_pin();
}
if (!pmap_demote_pde(pmap,
&pmap->pm_pdir[pdirindex], sva)) {
/*
* The large page mapping was
* destroyed.
*/
continue;
}
}
}
if (pdnxt > eva)
pdnxt = eva;
for (pte = pmap_pte_quick(pmap, sva); sva != pdnxt; pte++,
sva += PAGE_SIZE) {
vm_page_t m;
retry:
/*
* Regardless of whether a pte is 32 or 64 bits in
* size, PG_RW, PG_A, and PG_M are among the least
* significant 32 bits.
*/
obits = pbits = *pte;
if ((pbits & PG_V) == 0)
continue;
if ((prot & VM_PROT_WRITE) == 0) {
if ((pbits & (PG_MANAGED | PG_M | PG_RW)) ==
(PG_MANAGED | PG_M | PG_RW)) {
m = PHYS_TO_VM_PAGE(pbits & PG_FRAME);
vm_page_dirty(m);
}
pbits &= ~(PG_RW | PG_M);
}
#ifdef PMAP_PAE_COMP
- if ((prot & VM_PROT_EXECUTE) == 0)
+ if ((prot & VM_PROT_EXECUTE) == 0 && !i386_read_exec)
pbits |= pg_nx;
#endif
if (pbits != obits) {
#ifdef PMAP_PAE_COMP
if (!atomic_cmpset_64(pte, obits, pbits))
goto retry;
#else
if (!atomic_cmpset_int((u_int *)pte, obits,
pbits))
goto retry;
#endif
if (obits & PG_G)
pmap_invalidate_page_int(pmap, sva);
else
anychanged = TRUE;
}
}
}
if (anychanged)
pmap_invalidate_all_int(pmap);
if (pv_lists_locked) {
sched_unpin();
rw_wunlock(&pvh_global_lock);
}
PMAP_UNLOCK(pmap);
}
#if VM_NRESERVLEVEL > 0
/*
* Tries to promote the 512 or 1024 contiguous 4KB page mappings that are
* within a single page table page (PTP) to a single 2- or 4MB page mapping.
* For promotion to occur, two conditions must be met: (1) the 4KB page
* mappings must map aligned, contiguous physical memory and (2) the 4KB page
* mappings must have identical characteristics.
*
* Managed (PG_MANAGED) mappings within the kernel address space are not
* promoted. The reason is that kernel PDEs are replicated in each pmap but
* pmap_clear_ptes() and pmap_ts_referenced() only read the PDE from the kernel
* pmap.
*/
static void
pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va)
{
pd_entry_t newpde;
pt_entry_t *firstpte, oldpte, pa, *pte;
vm_offset_t oldpteva;
vm_page_t mpte;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
/*
* Examine the first PTE in the specified PTP. Abort if this PTE is
* either invalid, unused, or does not map the first 4KB physical page
* within a 2- or 4MB page.
*/
firstpte = pmap_pte_quick(pmap, trunc_4mpage(va));
setpde:
newpde = *firstpte;
if ((newpde & ((PG_FRAME & PDRMASK) | PG_A | PG_V)) != (PG_A | PG_V)) {
pmap_pde_p_failures++;
CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#x"
" in pmap %p", va, pmap);
return;
}
if ((*firstpte & PG_MANAGED) != 0 && pmap == kernel_pmap) {
pmap_pde_p_failures++;
CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#x"
" in pmap %p", va, pmap);
return;
}
if ((newpde & (PG_M | PG_RW)) == PG_RW) {
/*
* When PG_M is already clear, PG_RW can be cleared without
* a TLB invalidation.
*/
if (!atomic_cmpset_int((u_int *)firstpte, newpde, newpde &
~PG_RW))
goto setpde;
newpde &= ~PG_RW;
}
/*
* Examine each of the other PTEs in the specified PTP. Abort if this
* PTE maps an unexpected 4KB physical page or does not have identical
* characteristics to the first PTE.
*/
pa = (newpde & (PG_PS_FRAME | PG_A | PG_V)) + NBPDR - PAGE_SIZE;
for (pte = firstpte + NPTEPG - 1; pte > firstpte; pte--) {
setpte:
oldpte = *pte;
if ((oldpte & (PG_FRAME | PG_A | PG_V)) != pa) {
pmap_pde_p_failures++;
CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#x"
" in pmap %p", va, pmap);
return;
}
if ((oldpte & (PG_M | PG_RW)) == PG_RW) {
/*
* When PG_M is already clear, PG_RW can be cleared
* without a TLB invalidation.
*/
if (!atomic_cmpset_int((u_int *)pte, oldpte,
oldpte & ~PG_RW))
goto setpte;
oldpte &= ~PG_RW;
oldpteva = (oldpte & PG_FRAME & PDRMASK) |
(va & ~PDRMASK);
CTR2(KTR_PMAP, "pmap_promote_pde: protect for va %#x"
" in pmap %p", oldpteva, pmap);
}
if ((oldpte & PG_PTE_PROMOTE) != (newpde & PG_PTE_PROMOTE)) {
pmap_pde_p_failures++;
CTR2(KTR_PMAP, "pmap_promote_pde: failure for va %#x"
" in pmap %p", va, pmap);
return;
}
pa -= PAGE_SIZE;
}
/*
* Save the page table page in its current state until the PDE
* mapping the superpage is demoted by pmap_demote_pde() or
* destroyed by pmap_remove_pde().
*/
mpte = PHYS_TO_VM_PAGE(*pde & PG_FRAME);
KASSERT(mpte >= vm_page_array &&
mpte < &vm_page_array[vm_page_array_size],
("pmap_promote_pde: page table page is out of range"));
KASSERT(mpte->pindex == va >> PDRSHIFT,
("pmap_promote_pde: page table page's pindex is wrong"));
if (pmap_insert_pt_page(pmap, mpte)) {
pmap_pde_p_failures++;
CTR2(KTR_PMAP,
"pmap_promote_pde: failure for va %#x in pmap %p", va,
pmap);
return;
}
/*
* Promote the pv entries.
*/
if ((newpde & PG_MANAGED) != 0)
pmap_pv_promote_pde(pmap, va, newpde & PG_PS_FRAME);
/*
* Propagate the PAT index to its proper position.
*/
if ((newpde & PG_PTE_PAT) != 0)
newpde ^= PG_PDE_PAT | PG_PTE_PAT;
/*
* Map the superpage.
*/
if (workaround_erratum383)
pmap_update_pde(pmap, va, pde, PG_PS | newpde);
else if (pmap == kernel_pmap)
pmap_kenter_pde(va, PG_PROMOTED | PG_PS | newpde);
else
pde_store(pde, PG_PROMOTED | PG_PS | newpde);
pmap_pde_promotions++;
CTR2(KTR_PMAP, "pmap_promote_pde: success for va %#x"
" in pmap %p", va, pmap);
}
#endif /* VM_NRESERVLEVEL > 0 */
/*
* Insert the given physical page (p) at
* the specified virtual address (v) in the
* target physical map with the protection requested.
*
* If specified, the page will be wired down, meaning
that the related pte cannot be reclaimed.
*
* NB: This is the only routine which MAY NOT lazy-evaluate
* or lose information. That is, this routine must actually
* insert this page into the given map NOW.
*/
static int
__CONCAT(PMTYPE, enter)(pmap_t pmap, vm_offset_t va, vm_page_t m,
vm_prot_t prot, u_int flags, int8_t psind)
{
pd_entry_t *pde;
pt_entry_t *pte;
pt_entry_t newpte, origpte;
pv_entry_t pv;
vm_paddr_t opa, pa;
vm_page_t mpte, om;
int rv;
va = trunc_page(va);
KASSERT((pmap == kernel_pmap && va < VM_MAX_KERNEL_ADDRESS) ||
(pmap != kernel_pmap && va < VM_MAXUSER_ADDRESS),
("pmap_enter: toobig k%d %#x", pmap == kernel_pmap, va));
KASSERT(va < PMAP_TRM_MIN_ADDRESS,
("pmap_enter: invalid to pmap_enter into trampoline (va: 0x%x)",
va));
KASSERT(pmap != kernel_pmap || (m->oflags & VPO_UNMANAGED) != 0 ||
va < kmi.clean_sva || va >= kmi.clean_eva,
("pmap_enter: managed mapping within the clean submap"));
if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m))
VM_OBJECT_ASSERT_LOCKED(m->object);
KASSERT((flags & PMAP_ENTER_RESERVED) == 0,
("pmap_enter: flags %u has reserved bits set", flags));
pa = VM_PAGE_TO_PHYS(m);
newpte = (pt_entry_t)(pa | PG_A | PG_V);
if ((flags & VM_PROT_WRITE) != 0)
newpte |= PG_M;
if ((prot & VM_PROT_WRITE) != 0)
newpte |= PG_RW;
KASSERT((newpte & (PG_M | PG_RW)) != PG_M,
("pmap_enter: flags includes VM_PROT_WRITE but prot doesn't"));
#ifdef PMAP_PAE_COMP
- if ((prot & VM_PROT_EXECUTE) == 0)
+ if ((prot & VM_PROT_EXECUTE) == 0 && !i386_read_exec)
newpte |= pg_nx;
#endif
if ((flags & PMAP_ENTER_WIRED) != 0)
newpte |= PG_W;
if (pmap != kernel_pmap)
newpte |= PG_U;
newpte |= pmap_cache_bits(pmap, m->md.pat_mode, psind > 0);
if ((m->oflags & VPO_UNMANAGED) == 0)
newpte |= PG_MANAGED;
rw_wlock(&pvh_global_lock);
PMAP_LOCK(pmap);
sched_pin();
if (psind == 1) {
/* Assert the required virtual and physical alignment. */
KASSERT((va & PDRMASK) == 0, ("pmap_enter: va unaligned"));
KASSERT(m->psind > 0, ("pmap_enter: m->psind < psind"));
rv = pmap_enter_pde(pmap, va, newpte | PG_PS, flags, m);
goto out;
}
pde = pmap_pde(pmap, va);
if (pmap != kernel_pmap) {
/*
* va is for UVA.
* In the case that a page table page is not resident,
* we are creating it here. pmap_allocpte() handles
* demotion.
*/
mpte = pmap_allocpte(pmap, va, flags);
if (mpte == NULL) {
KASSERT((flags & PMAP_ENTER_NOSLEEP) != 0,
("pmap_allocpte failed with sleep allowed"));
rv = KERN_RESOURCE_SHORTAGE;
goto out;
}
} else {
/*
* va is for KVA, so pmap_demote_pde() will never fail
* to install a page table page. PG_V is also
* asserted by pmap_demote_pde().
*/
mpte = NULL;
KASSERT(pde != NULL && (*pde & PG_V) != 0,
("KVA %#x invalid pde pdir %#jx", va,
(uintmax_t)pmap->pm_pdir[PTDPTDI]));
if ((*pde & PG_PS) != 0)
pmap_demote_pde(pmap, pde, va);
}
pte = pmap_pte_quick(pmap, va);
/*
* Page Directory table entry is not valid, which should not
* happen. We should have either allocated the page table
* page or demoted the existing mapping above.
*/
if (pte == NULL) {
panic("pmap_enter: invalid page directory pdir=%#jx, va=%#x",
(uintmax_t)pmap->pm_pdir[PTDPTDI], va);
}
origpte = *pte;
pv = NULL;
/*
* Is the specified virtual address already mapped?
*/
if ((origpte & PG_V) != 0) {
/*
* Wiring change, just update stats. We don't worry about
* wiring PT pages as they remain resident as long as there
* are valid mappings in them. Hence, if a user page is wired,
* the PT page will be also.
*/
if ((newpte & PG_W) != 0 && (origpte & PG_W) == 0)
pmap->pm_stats.wired_count++;
else if ((newpte & PG_W) == 0 && (origpte & PG_W) != 0)
pmap->pm_stats.wired_count--;
/*
* Remove the extra PT page reference.
*/
if (mpte != NULL) {
mpte->wire_count--;
KASSERT(mpte->wire_count > 0,
("pmap_enter: missing reference to page table page,"
" va: 0x%x", va));
}
/*
* Has the physical page changed?
*/
opa = origpte & PG_FRAME;
if (opa == pa) {
/*
* No, might be a protection or wiring change.
*/
if ((origpte & PG_MANAGED) != 0 &&
(newpte & PG_RW) != 0)
vm_page_aflag_set(m, PGA_WRITEABLE);
if (((origpte ^ newpte) & ~(PG_M | PG_A)) == 0)
goto unchanged;
goto validate;
}
/*
* The physical page has changed. Temporarily invalidate
* the mapping. This ensures that all threads sharing the
* pmap keep a consistent view of the mapping, which is
* necessary for the correct handling of COW faults. It
* also permits reuse of the old mapping's PV entry,
* avoiding an allocation.
*
* For consistency, handle unmanaged mappings the same way.
*/
origpte = pte_load_clear(pte);
KASSERT((origpte & PG_FRAME) == opa,
("pmap_enter: unexpected pa update for %#x", va));
if ((origpte & PG_MANAGED) != 0) {
om = PHYS_TO_VM_PAGE(opa);
/*
* The pmap lock is sufficient to synchronize with
* concurrent calls to pmap_page_test_mappings() and
* pmap_ts_referenced().
*/
if ((origpte & (PG_M | PG_RW)) == (PG_M | PG_RW))
vm_page_dirty(om);
if ((origpte & PG_A) != 0)
vm_page_aflag_set(om, PGA_REFERENCED);
pv = pmap_pvh_remove(&om->md, pmap, va);
KASSERT(pv != NULL,
("pmap_enter: no PV entry for %#x", va));
if ((newpte & PG_MANAGED) == 0)
free_pv_entry(pmap, pv);
if ((om->aflags & PGA_WRITEABLE) != 0 &&
TAILQ_EMPTY(&om->md.pv_list) &&
((om->flags & PG_FICTITIOUS) != 0 ||
TAILQ_EMPTY(&pa_to_pvh(opa)->pv_list)))
vm_page_aflag_clear(om, PGA_WRITEABLE);
}
if ((origpte & PG_A) != 0)
pmap_invalidate_page_int(pmap, va);
origpte = 0;
} else {
/*
* Increment the counters.
*/
if ((newpte & PG_W) != 0)
pmap->pm_stats.wired_count++;
pmap->pm_stats.resident_count++;
}
/*
* Enter on the PV list if part of our managed memory.
*/
if ((newpte & PG_MANAGED) != 0) {
if (pv == NULL) {
pv = get_pv_entry(pmap, FALSE);
pv->pv_va = va;
}
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
if ((newpte & PG_RW) != 0)
vm_page_aflag_set(m, PGA_WRITEABLE);
}
/*
* Update the PTE.
*/
if ((origpte & PG_V) != 0) {
validate:
origpte = pte_load_store(pte, newpte);
KASSERT((origpte & PG_FRAME) == pa,
("pmap_enter: unexpected pa update for %#x", va));
if ((newpte & PG_M) == 0 && (origpte & (PG_M | PG_RW)) ==
(PG_M | PG_RW)) {
if ((origpte & PG_MANAGED) != 0)
vm_page_dirty(m);
/*
* Although the PTE may still have PG_RW set, TLB
* invalidation may nonetheless be required because
* the PTE no longer has PG_M set.
*/
}
#ifdef PMAP_PAE_COMP
else if ((origpte & PG_NX) != 0 || (newpte & PG_NX) == 0) {
/*
* This PTE change does not require TLB invalidation.
*/
goto unchanged;
}
#endif
if ((origpte & PG_A) != 0)
pmap_invalidate_page_int(pmap, va);
} else
pte_store(pte, newpte);
unchanged:
#if VM_NRESERVLEVEL > 0
/*
* If both the page table page and the reservation are fully
* populated, then attempt promotion.
*/
if ((mpte == NULL || mpte->wire_count == NPTEPG) &&
pg_ps_enabled && (m->flags & PG_FICTITIOUS) == 0 &&
vm_reserv_level_iffullpop(m) == 0)
pmap_promote_pde(pmap, pde, va);
#endif
rv = KERN_SUCCESS;
out:
sched_unpin();
rw_wunlock(&pvh_global_lock);
PMAP_UNLOCK(pmap);
return (rv);
}
/*
* Tries to create a read- and/or execute-only 2 or 4 MB page mapping. Returns
* true if successful. Returns false if (1) a mapping already exists at the
* specified virtual address or (2) a PV entry cannot be allocated without
* reclaiming another PV entry.
*/
static bool
pmap_enter_4mpage(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot)
{
pd_entry_t newpde;
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
newpde = VM_PAGE_TO_PHYS(m) | pmap_cache_bits(pmap, m->md.pat_mode, 1) |
PG_PS | PG_V;
if ((m->oflags & VPO_UNMANAGED) == 0)
newpde |= PG_MANAGED;
#ifdef PMAP_PAE_COMP
- if ((prot & VM_PROT_EXECUTE) == 0)
+ if ((prot & VM_PROT_EXECUTE) == 0 && !i386_read_exec)
newpde |= pg_nx;
#endif
if (pmap != kernel_pmap)
newpde |= PG_U;
return (pmap_enter_pde(pmap, va, newpde, PMAP_ENTER_NOSLEEP |
PMAP_ENTER_NOREPLACE | PMAP_ENTER_NORECLAIM, NULL) ==
KERN_SUCCESS);
}
/*
* Tries to create the specified 2 or 4 MB page mapping. Returns KERN_SUCCESS
* if the mapping was created, and either KERN_FAILURE or
* KERN_RESOURCE_SHORTAGE otherwise. Returns KERN_FAILURE if
* PMAP_ENTER_NOREPLACE was specified and a mapping already exists at the
* specified virtual address. Returns KERN_RESOURCE_SHORTAGE if
* PMAP_ENTER_NORECLAIM was specified and a PV entry allocation failed.
*
* The parameter "m" is only used when creating a managed, writeable mapping.
*/
static int
pmap_enter_pde(pmap_t pmap, vm_offset_t va, pd_entry_t newpde, u_int flags,
vm_page_t m)
{
struct spglist free;
pd_entry_t oldpde, *pde;
vm_page_t mt;
rw_assert(&pvh_global_lock, RA_WLOCKED);
KASSERT((newpde & (PG_M | PG_RW)) != PG_RW,
("pmap_enter_pde: newpde is missing PG_M"));
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
pde = pmap_pde(pmap, va);
oldpde = *pde;
if ((oldpde & PG_V) != 0) {
if ((flags & PMAP_ENTER_NOREPLACE) != 0) {
CTR2(KTR_PMAP, "pmap_enter_pde: failure for va %#lx"
" in pmap %p", va, pmap);
return (KERN_FAILURE);
}
/* Break the existing mapping(s). */
SLIST_INIT(&free);
if ((oldpde & PG_PS) != 0) {
/*
* If the PDE resulted from a promotion, then a
* reserved PT page could be freed.
*/
(void)pmap_remove_pde(pmap, pde, va, &free);
if ((oldpde & PG_G) == 0)
pmap_invalidate_pde_page(pmap, va, oldpde);
} else {
if (pmap_remove_ptes(pmap, va, va + NBPDR, &free))
pmap_invalidate_all_int(pmap);
}
vm_page_free_pages_toq(&free, true);
if (pmap == kernel_pmap) {
mt = PHYS_TO_VM_PAGE(*pde & PG_FRAME);
if (pmap_insert_pt_page(pmap, mt)) {
/*
* XXX Currently, this can't happen because
* we do not perform pmap_enter(psind == 1)
* on the kernel pmap.
*/
panic("pmap_enter_pde: trie insert failed");
}
} else
KASSERT(*pde == 0, ("pmap_enter_pde: non-zero pde %p",
pde));
}
if ((newpde & PG_MANAGED) != 0) {
/*
* Abort this mapping if its PV entry could not be created.
*/
if (!pmap_pv_insert_pde(pmap, va, newpde, flags)) {
CTR2(KTR_PMAP, "pmap_enter_pde: failure for va %#lx"
" in pmap %p", va, pmap);
return (KERN_RESOURCE_SHORTAGE);
}
if ((newpde & PG_RW) != 0) {
for (mt = m; mt < &m[NBPDR / PAGE_SIZE]; mt++)
vm_page_aflag_set(mt, PGA_WRITEABLE);
}
}
/*
* Increment counters.
*/
if ((newpde & PG_W) != 0)
pmap->pm_stats.wired_count += NBPDR / PAGE_SIZE;
pmap->pm_stats.resident_count += NBPDR / PAGE_SIZE;
/*
* Map the superpage. (This is not a promoted mapping; there will not
* be any lingering 4KB page mappings in the TLB.)
*/
pde_store(pde, newpde);
pmap_pde_mappings++;
CTR2(KTR_PMAP, "pmap_enter_pde: success for va %#lx"
" in pmap %p", va, pmap);
return (KERN_SUCCESS);
}
/*
* Maps a sequence of resident pages belonging to the same object.
* The sequence begins with the given page m_start. This page is
* mapped at the given virtual address start. Each subsequent page is
* mapped at a virtual address that is offset from start by the same
* amount as the page is offset from m_start within the object. The
* last page in the sequence is the page with the largest offset from
* m_start that can be mapped at a virtual address less than the given
* virtual address end. Not every virtual page between start and end
* is mapped; only those for which a resident page exists with the
* corresponding offset from m_start are mapped.
*/
static void
__CONCAT(PMTYPE, enter_object)(pmap_t pmap, vm_offset_t start, vm_offset_t end,
vm_page_t m_start, vm_prot_t prot)
{
vm_offset_t va;
vm_page_t m, mpte;
vm_pindex_t diff, psize;
VM_OBJECT_ASSERT_LOCKED(m_start->object);
psize = atop(end - start);
mpte = NULL;
m = m_start;
rw_wlock(&pvh_global_lock);
PMAP_LOCK(pmap);
while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) {
va = start + ptoa(diff);
if ((va & PDRMASK) == 0 && va + NBPDR <= end &&
m->psind == 1 && pg_ps_enabled &&
pmap_enter_4mpage(pmap, va, m, prot))
m = &m[NBPDR / PAGE_SIZE - 1];
else
mpte = pmap_enter_quick_locked(pmap, va, m, prot,
mpte);
m = TAILQ_NEXT(m, listq);
}
rw_wunlock(&pvh_global_lock);
PMAP_UNLOCK(pmap);
}
/*
* This code makes some *MAJOR* assumptions:
* 1. The current pmap and the target pmap exist.
* 2. Not wired.
* 3. Read access.
* 4. No page table pages.
* but is *MUCH* faster than pmap_enter...
*/
static void
__CONCAT(PMTYPE, enter_quick)(pmap_t pmap, vm_offset_t va, vm_page_t m,
vm_prot_t prot)
{
rw_wlock(&pvh_global_lock);
PMAP_LOCK(pmap);
(void)pmap_enter_quick_locked(pmap, va, m, prot, NULL);
rw_wunlock(&pvh_global_lock);
PMAP_UNLOCK(pmap);
}
static vm_page_t
pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m,
vm_prot_t prot, vm_page_t mpte)
{
pt_entry_t newpte, *pte;
struct spglist free;
KASSERT(pmap != kernel_pmap || va < kmi.clean_sva ||
va >= kmi.clean_eva || (m->oflags & VPO_UNMANAGED) != 0,
("pmap_enter_quick_locked: managed mapping within the clean submap"));
rw_assert(&pvh_global_lock, RA_WLOCKED);
PMAP_LOCK_ASSERT(pmap, MA_OWNED);
/*
* In the case that a page table page is not
* resident, we are creating it here.
*/
if (pmap != kernel_pmap) {
u_int ptepindex;
pd_entry_t ptepa;
/*
* Calculate pagetable page index
*/
ptepindex = va >> PDRSHIFT;
if (mpte && (mpte->pindex == ptepindex)) {
mpte->wire_count++;
} else {
/*
* Get the page directory entry
*/
ptepa = pmap->pm_pdir[ptepindex];
/*
* If the page table page is mapped, we just increment
* the hold count, and activate it.
*/
if (ptepa) {
if (ptepa & PG_PS)
return (NULL);
mpte = PHYS_TO_VM_PAGE(ptepa & PG_FRAME);
mpte->wire_count++;
} else {
mpte = _pmap_allocpte(pmap, ptepindex,
PMAP_ENTER_NOSLEEP);
if (mpte == NULL)
return (mpte);
}
}
} else {
mpte = NULL;
}
sched_pin();
pte = pmap_pte_quick(pmap, va);
if (*pte) {
if (mpte != NULL) {
mpte->wire_count--;
mpte = NULL;
}
sched_unpin();
return (mpte);
}
/*
* Enter on the PV list if part of our managed memory.
*/
if ((m->oflags & VPO_UNMANAGED) == 0 &&
!pmap_try_insert_pv_entry(pmap, va, m)) {
if (mpte != NULL) {
SLIST_INIT(&free);
if (pmap_unwire_ptp(pmap, mpte, &free)) {
pmap_invalidate_page_int(pmap, va);
vm_page_free_pages_toq(&free, true);
}
mpte = NULL;
}
sched_unpin();
return (mpte);
}
/*
* Increment counters
*/
pmap->pm_stats.resident_count++;
newpte = VM_PAGE_TO_PHYS(m) | PG_V |
pmap_cache_bits(pmap, m->md.pat_mode, 0);
if ((m->oflags & VPO_UNMANAGED) == 0)
newpte |= PG_MANAGED;
#ifdef PMAP_PAE_COMP
- if ((prot & VM_PROT_EXECUTE) == 0)
+ if ((prot & VM_PROT_EXECUTE) == 0 && !i386_read_exec)
newpte |= pg_nx;
#endif
if (pmap != kernel_pmap)
newpte |= PG_U;
pte_store(pte, newpte);
sched_unpin();
return (mpte);
}
/*
* Make a temporary mapping for a physical address. This is only intended
* to be used for panic dumps.
*/
static void *
__CONCAT(PMTYPE, kenter_temporary)(vm_paddr_t pa, int i)
{
vm_offset_t va;
va = (vm_offset_t)crashdumpmap + (i * PAGE_SIZE);
pmap_kenter(va, pa);
invlpg(va);
return ((void *)crashdumpmap);
}
/*
* This code maps large physical mmap regions into the
* processor address space. Note that some shortcuts
* are taken, but the code works.
*/
static void
__CONCAT(PMTYPE, object_init_pt)(pmap_t pmap, vm_offset_t addr,
vm_object_t object, vm_pindex_t pindex, vm_size_t size)
{
pd_entry_t *pde;
vm_paddr_t pa, ptepa;
vm_page_t p;
int pat_mode;
VM_OBJECT_ASSERT_WLOCKED(object);
KASSERT(object->type == OBJT_DEVICE || object->type == OBJT_SG,
("pmap_object_init_pt: non-device object"));
if (pg_ps_enabled &&
(addr & (NBPDR - 1)) == 0 && (size & (NBPDR - 1)) == 0) {
if (!vm_object_populate(object, pindex, pindex + atop(size)))
return;
p = vm_page_lookup(object, pindex);
KASSERT(p->valid == VM_PAGE_BITS_ALL,
("pmap_object_init_pt: invalid page %p", p));
pat_mode = p->md.pat_mode;
/*
* Abort the mapping if the first page is not physically
* aligned to a 2/4MB page boundary.
*/
ptepa = VM_PAGE_TO_PHYS(p);
if (ptepa & (NBPDR - 1))
return;
/*
* Skip the first page. Abort the mapping if the rest of
* the pages are not physically contiguous or have differing
* memory attributes.
*/
p = TAILQ_NEXT(p, listq);
for (pa = ptepa + PAGE_SIZE; pa < ptepa + size;
pa += PAGE_SIZE) {
KASSERT(p->valid == VM_PAGE_BITS_ALL,
("pmap_object_init_pt: invalid page %p", p));
if (pa != VM_PAGE_TO_PHYS(p) ||
pat_mode != p->md.pat_mode)
return;
p = TAILQ_NEXT(p, listq);
}
/*
* Map using 2/4MB pages. Since "ptepa" is 2/4M aligned and
* "size" is a multiple of 2/4M, adding the PAT setting to
* "pa" will not affect the termination of this loop.
*/
PMAP_LOCK(pmap);
for (pa = ptepa | pmap_cache_bits(pmap, pat_mode, 1);
pa < ptepa + size; pa += NBPDR) {
pde = pmap_pde(pmap, addr);
if (*pde == 0) {
pde_store(pde, pa | PG_PS | PG_M | PG_A |
PG_U | PG_RW | PG_V);
pmap->pm_stats.resident_count += NBPDR /
PAGE_SIZE;
pmap_pde_mappings++;
}
/* Else continue on if the PDE is already valid. */
addr += NBPDR;
}
PMAP_UNLOCK(pmap);
}
}
/*
* Clear the wired attribute from the mappings for the specified range of
* addresses in the given pmap. Every valid mapping within that range
* must have the wired attribute set. In contrast, invalid mappings
* cannot have the wired attribute set, so they are ignored.
*
* The wired attribute of the page table entry is not a hardware feature,
* so there is no need to invalidate any TLB entries.
*/
static void
__CONCAT(PMTYPE, unwire)(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
vm_offset_t pdnxt;
pd_entry_t *pde;
pt_entry_t *pte;
boolean_t pv_lists_locked;
if (pmap_is_current(pmap))
pv_lists_locked = FALSE;
else {
pv_lists_locked = TRUE;
resume:
rw_wlock(&pvh_global_lock);
sched_pin();
}
PMAP_LOCK(pmap);
for (; sva < eva; sva = pdnxt) {
pdnxt = (sva + NBPDR) & ~PDRMASK;
if (pdnxt < sva)
pdnxt = eva;
pde = pmap_pde(pmap, sva);
if ((*pde & PG_V) == 0)
continue;
if ((*pde & PG_PS) != 0) {
if ((*pde & PG_W) == 0)
panic("pmap_unwire: pde %#jx is missing PG_W",
(uintmax_t)*pde);
/*
* Are we unwiring the entire large page? If not,
* demote the mapping and fall through.
*/
if (sva + NBPDR == pdnxt && eva >= pdnxt) {
/*
* Regardless of whether a pde (or pte) is 32
* or 64 bits in size, PG_W is among the least
* significant 32 bits.
*/
atomic_clear_int((u_int *)pde, PG_W);
pmap->pm_stats.wired_count -= NBPDR /
PAGE_SIZE;
continue;
} else {
if (!pv_lists_locked) {
pv_lists_locked = TRUE;
if (!rw_try_wlock(&pvh_global_lock)) {
PMAP_UNLOCK(pmap);
/* Repeat sva. */
goto resume;
}
sched_pin();
}
if (!pmap_demote_pde(pmap, pde, sva))
panic("pmap_unwire: demotion failed");
}
}
if (pdnxt > eva)
pdnxt = eva;
for (pte = pmap_pte_quick(pmap, sva); sva != pdnxt; pte++,
sva += PAGE_SIZE) {
if ((*pte & PG_V) == 0)
continue;
if ((*pte & PG_W) == 0)
panic("pmap_unwire: pte %#jx is missing PG_W",
(uintmax_t)*pte);
/*
* PG_W must be cleared atomically. Although the pmap
* lock synchronizes access to PG_W, another processor
* could be setting PG_M and/or PG_A concurrently.
*
* PG_W is among the least significant 32 bits.
*/
atomic_clear_int((u_int *)pte, PG_W);
pmap->pm_stats.wired_count--;
}
}
if (pv_lists_locked) {
sched_unpin();
rw_wunlock(&pvh_global_lock);
}
PMAP_UNLOCK(pmap);
}
/*
* Copy the range specified by src_addr/len
* from the source map to the range dst_addr/len
* in the destination map.
*
* This routine is only advisory and need not do anything. Since
* current pmap is always the kernel pmap when executing in
* kernel, and we do not copy from the kernel pmap to a user
* pmap, this optimization is not usable in 4/4G full split i386
* world.
*/
static void
__CONCAT(PMTYPE, copy)(pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr,
vm_size_t len, vm_offset_t src_addr)
{
struct spglist free;
pt_entry_t *src_pte, *dst_pte, ptetemp;
pd_entry_t srcptepaddr;
vm_page_t dstmpte, srcmpte;
vm_offset_t addr, end_addr, pdnxt;
u_int ptepindex;
if (dst_addr != src_addr)
return;
end_addr = src_addr + len;
rw_wlock(&pvh_global_lock);
if (dst_pmap < src_pmap) {
PMAP_LOCK(dst_pmap);
PMAP_LOCK(src_pmap);
} else {
PMAP_LOCK(src_pmap);
PMAP_LOCK(dst_pmap);
}
sched_pin();
for (addr = src_addr; addr < end_addr; addr = pdnxt) {
KASSERT(addr < PMAP_TRM_MIN_ADDRESS,
("pmap_copy: invalid to pmap_copy the trampoline"));
pdnxt = (addr + NBPDR) & ~PDRMASK;
if (pdnxt < addr)
pdnxt = end_addr;
ptepindex = addr >> PDRSHIFT;
srcptepaddr = src_pmap->pm_pdir[ptepindex];
if (srcptepaddr == 0)
continue;
if (srcptepaddr & PG_PS) {
if ((addr & PDRMASK) != 0 || addr + NBPDR > end_addr)
continue;
if (dst_pmap->pm_pdir[ptepindex] == 0 &&
((srcptepaddr & PG_MANAGED) == 0 ||
pmap_pv_insert_pde(dst_pmap, addr, srcptepaddr,
PMAP_ENTER_NORECLAIM))) {
dst_pmap->pm_pdir[ptepindex] = srcptepaddr &
~PG_W;
dst_pmap->pm_stats.resident_count +=
NBPDR / PAGE_SIZE;
pmap_pde_mappings++;
}
continue;
}
srcmpte = PHYS_TO_VM_PAGE(srcptepaddr & PG_FRAME);
KASSERT(srcmpte->wire_count > 0,
("pmap_copy: source page table page is unused"));
if (pdnxt > end_addr)
pdnxt = end_addr;
src_pte = pmap_pte_quick3(src_pmap, addr);
while (addr < pdnxt) {
ptetemp = *src_pte;
/*
* We only do a virtual copy of managed pages.
*/
if ((ptetemp & PG_MANAGED) != 0) {
dstmpte = pmap_allocpte(dst_pmap, addr,
PMAP_ENTER_NOSLEEP);
if (dstmpte == NULL)
goto out;
dst_pte = pmap_pte_quick(dst_pmap, addr);
if (*dst_pte == 0 &&
pmap_try_insert_pv_entry(dst_pmap, addr,
PHYS_TO_VM_PAGE(ptetemp & PG_FRAME))) {
/*
* Clear the wired, modified, and
* accessed (referenced) bits
* during the copy.
*/
*dst_pte = ptetemp & ~(PG_W | PG_M |
PG_A);
dst_pmap->pm_stats.resident_count++;
} else {
SLIST_INIT(&free);
if (pmap_unwire_ptp(dst_pmap, dstmpte,
&free)) {
pmap_invalidate_page_int(
dst_pmap, addr);
vm_page_free_pages_toq(&free,
true);
}
goto out;
}
if (dstmpte->wire_count >= srcmpte->wire_count)
break;
}
addr += PAGE_SIZE;
src_pte++;
}
}
out:
sched_unpin();
rw_wunlock(&pvh_global_lock);
PMAP_UNLOCK(src_pmap);
PMAP_UNLOCK(dst_pmap);
}
/*
* Zero 1 page of virtual memory mapped from a hardware page by the caller.
*/
static __inline void
pagezero(void *page)
{
#if defined(I686_CPU)
if (cpu_class == CPUCLASS_686) {
if (cpu_feature & CPUID_SSE2)
sse2_pagezero(page);
else
i686_pagezero(page);
} else
#endif
bzero(page, PAGE_SIZE);
}
/*
* Zero the specified hardware page.
*/
static void
__CONCAT(PMTYPE, zero_page)(vm_page_t m)
{
pt_entry_t *cmap_pte2;
struct pcpu *pc;
sched_pin();
pc = get_pcpu();
cmap_pte2 = pc->pc_cmap_pte2;
mtx_lock(&pc->pc_cmap_lock);
if (*cmap_pte2)
panic("pmap_zero_page: CMAP2 busy");
*cmap_pte2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
pmap_cache_bits(kernel_pmap, m->md.pat_mode, 0);
invlcaddr(pc->pc_cmap_addr2);
pagezero(pc->pc_cmap_addr2);
*cmap_pte2 = 0;
/*
* Unpin the thread before releasing the lock. Otherwise the thread
* could be rescheduled while still bound to the current CPU, only
* to unpin itself immediately upon resuming execution.
*/
sched_unpin();
mtx_unlock(&pc->pc_cmap_lock);
}
/*
* Zero an area within a single hardware page. The off and size arguments
* must not cover an area beyond a single hardware page.
*/
static void
__CONCAT(PMTYPE, zero_page_area)(vm_page_t m, int off, int size)
{
pt_entry_t *cmap_pte2;
struct pcpu *pc;
sched_pin();
pc = get_pcpu();
cmap_pte2 = pc->pc_cmap_pte2;
mtx_lock(&pc->pc_cmap_lock);
if (*cmap_pte2)
panic("pmap_zero_page_area: CMAP2 busy");
*cmap_pte2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
pmap_cache_bits(kernel_pmap, m->md.pat_mode, 0);
invlcaddr(pc->pc_cmap_addr2);
if (off == 0 && size == PAGE_SIZE)
pagezero(pc->pc_cmap_addr2);
else
bzero(pc->pc_cmap_addr2 + off, size);
*cmap_pte2 = 0;
sched_unpin();
mtx_unlock(&pc->pc_cmap_lock);
}
/*
* Copy 1 specified hardware page to another.
*/
static void
__CONCAT(PMTYPE, copy_page)(vm_page_t src, vm_page_t dst)
{
pt_entry_t *cmap_pte1, *cmap_pte2;
struct pcpu *pc;
sched_pin();
pc = get_pcpu();
cmap_pte1 = pc->pc_cmap_pte1;
cmap_pte2 = pc->pc_cmap_pte2;
mtx_lock(&pc->pc_cmap_lock);
if (*cmap_pte1)
panic("pmap_copy_page: CMAP1 busy");
if (*cmap_pte2)
panic("pmap_copy_page: CMAP2 busy");
*cmap_pte1 = PG_V | VM_PAGE_TO_PHYS(src) | PG_A |
pmap_cache_bits(kernel_pmap, src->md.pat_mode, 0);
invlcaddr(pc->pc_cmap_addr1);
*cmap_pte2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(dst) | PG_A | PG_M |
pmap_cache_bits(kernel_pmap, dst->md.pat_mode, 0);
invlcaddr(pc->pc_cmap_addr2);
bcopy(pc->pc_cmap_addr1, pc->pc_cmap_addr2, PAGE_SIZE);
*cmap_pte1 = 0;
*cmap_pte2 = 0;
sched_unpin();
mtx_unlock(&pc->pc_cmap_lock);
}
static void
__CONCAT(PMTYPE, copy_pages)(vm_page_t ma[], vm_offset_t a_offset,
vm_page_t mb[], vm_offset_t b_offset, int xfersize)
{
vm_page_t a_pg, b_pg;
char *a_cp, *b_cp;
vm_offset_t a_pg_offset, b_pg_offset;
pt_entry_t *cmap_pte1, *cmap_pte2;
struct pcpu *pc;
int cnt;
sched_pin();
pc = get_pcpu();
cmap_pte1 = pc->pc_cmap_pte1;
cmap_pte2 = pc->pc_cmap_pte2;
mtx_lock(&pc->pc_cmap_lock);
if (*cmap_pte1 != 0)
panic("pmap_copy_pages: CMAP1 busy");
if (*cmap_pte2 != 0)
panic("pmap_copy_pages: CMAP2 busy");
while (xfersize > 0) {
a_pg = ma[a_offset >> PAGE_SHIFT];
a_pg_offset = a_offset & PAGE_MASK;
cnt = min(xfersize, PAGE_SIZE - a_pg_offset);
b_pg = mb[b_offset >> PAGE_SHIFT];
b_pg_offset = b_offset & PAGE_MASK;
cnt = min(cnt, PAGE_SIZE - b_pg_offset);
*cmap_pte1 = PG_V | VM_PAGE_TO_PHYS(a_pg) | PG_A |
pmap_cache_bits(kernel_pmap, a_pg->md.pat_mode, 0);
invlcaddr(pc->pc_cmap_addr1);
*cmap_pte2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(b_pg) | PG_A |
PG_M | pmap_cache_bits(kernel_pmap, b_pg->md.pat_mode, 0);
invlcaddr(pc->pc_cmap_addr2);
a_cp = pc->pc_cmap_addr1 + a_pg_offset;
b_cp = pc->pc_cmap_addr2 + b_pg_offset;
bcopy(a_cp, b_cp, cnt);
a_offset += cnt;
b_offset += cnt;
xfersize -= cnt;
}
*cmap_pte1 = 0;
*cmap_pte2 = 0;
sched_unpin();
mtx_unlock(&pc->pc_cmap_lock);
}
/*
* Returns true if the pmap's pv is one of the first
* 16 pvs linked to from this page. This count may
* be changed upwards or downwards in the future; it
* is only necessary that true be returned for a small
* subset of pmaps for proper page aging.
*/
static boolean_t
__CONCAT(PMTYPE, page_exists_quick)(pmap_t pmap, vm_page_t m)
{
struct md_page *pvh;
pv_entry_t pv;
int loops = 0;
boolean_t rv;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_page_exists_quick: page %p is not managed", m));
rv = FALSE;
rw_wlock(&pvh_global_lock);
TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) {
if (PV_PMAP(pv) == pmap) {
rv = TRUE;
break;
}
loops++;
if (loops >= 16)
break;
}
if (!rv && loops < 16 && (m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
if (PV_PMAP(pv) == pmap) {
rv = TRUE;
break;
}
loops++;
if (loops >= 16)
break;
}
}
rw_wunlock(&pvh_global_lock);
return (rv);
}
/*
* pmap_page_wired_mappings:
*
* Return the number of managed mappings to the given physical page
* that are wired.
*/
static int
__CONCAT(PMTYPE, page_wired_mappings)(vm_page_t m)
{
int count;
count = 0;
if ((m->oflags & VPO_UNMANAGED) != 0)
return (count);
rw_wlock(&pvh_global_lock);
count = pmap_pvh_wired_mappings(&m->md, count);
if ((m->flags & PG_FICTITIOUS) == 0) {
count = pmap_pvh_wired_mappings(pa_to_pvh(VM_PAGE_TO_PHYS(m)),
count);
}
rw_wunlock(&pvh_global_lock);
return (count);
}
/*
* pmap_pvh_wired_mappings:
*
* Return the updated number "count" of managed mappings that are wired.
*/
static int
pmap_pvh_wired_mappings(struct md_page *pvh, int count)
{
pmap_t pmap;
pt_entry_t *pte;
pv_entry_t pv;
rw_assert(&pvh_global_lock, RA_WLOCKED);
sched_pin();
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pte = pmap_pte_quick(pmap, pv->pv_va);
if ((*pte & PG_W) != 0)
count++;
PMAP_UNLOCK(pmap);
}
sched_unpin();
return (count);
}
/*
* Returns TRUE if the given page is mapped individually or as part of
* a 4mpage. Otherwise, returns FALSE.
*/
static boolean_t
__CONCAT(PMTYPE, page_is_mapped)(vm_page_t m)
{
boolean_t rv;
if ((m->oflags & VPO_UNMANAGED) != 0)
return (FALSE);
rw_wlock(&pvh_global_lock);
rv = !TAILQ_EMPTY(&m->md.pv_list) ||
((m->flags & PG_FICTITIOUS) == 0 &&
!TAILQ_EMPTY(&pa_to_pvh(VM_PAGE_TO_PHYS(m))->pv_list));
rw_wunlock(&pvh_global_lock);
return (rv);
}
/*
* Remove all pages from the specified address space;
* this aids process exit speed. Also, this code is
* special-cased for the current process only, but
* can have the more generic (and slightly slower)
* mode enabled. This is much faster than pmap_remove
* in the case of running down an entire address space.
*/
static void
__CONCAT(PMTYPE, remove_pages)(pmap_t pmap)
{
pt_entry_t *pte, tpte;
vm_page_t m, mpte, mt;
pv_entry_t pv;
struct md_page *pvh;
struct pv_chunk *pc, *npc;
struct spglist free;
int field, idx;
int32_t bit;
uint32_t inuse, bitmask;
int allfree;
if (pmap != PCPU_GET(curpmap)) {
printf("warning: pmap_remove_pages called with non-current pmap\n");
return;
}
SLIST_INIT(&free);
rw_wlock(&pvh_global_lock);
PMAP_LOCK(pmap);
sched_pin();
TAILQ_FOREACH_SAFE(pc, &pmap->pm_pvchunk, pc_list, npc) {
KASSERT(pc->pc_pmap == pmap, ("Wrong pmap %p %p", pmap,
pc->pc_pmap));
allfree = 1;
for (field = 0; field < _NPCM; field++) {
inuse = ~pc->pc_map[field] & pc_freemask[field];
while (inuse != 0) {
bit = bsfl(inuse);
bitmask = 1UL << bit;
idx = field * 32 + bit;
pv = &pc->pc_pventry[idx];
inuse &= ~bitmask;
pte = pmap_pde(pmap, pv->pv_va);
tpte = *pte;
if ((tpte & PG_PS) == 0) {
pte = pmap_pte_quick(pmap, pv->pv_va);
tpte = *pte & ~PG_PTE_PAT;
}
if (tpte == 0) {
printf(
"TPTE at %p IS ZERO @ VA %08x\n",
pte, pv->pv_va);
panic("bad pte");
}
/*
* We cannot remove wired pages from a process' mapping at this time
*/
if (tpte & PG_W) {
allfree = 0;
continue;
}
m = PHYS_TO_VM_PAGE(tpte & PG_FRAME);
KASSERT(m->phys_addr == (tpte & PG_FRAME),
("vm_page_t %p phys_addr mismatch %016jx %016jx",
m, (uintmax_t)m->phys_addr,
(uintmax_t)tpte));
KASSERT((m->flags & PG_FICTITIOUS) != 0 ||
m < &vm_page_array[vm_page_array_size],
("pmap_remove_pages: bad tpte %#jx",
(uintmax_t)tpte));
pte_clear(pte);
/*
* Update the vm_page_t clean/reference bits.
*/
if ((tpte & (PG_M | PG_RW)) == (PG_M | PG_RW)) {
if ((tpte & PG_PS) != 0) {
for (mt = m; mt < &m[NBPDR / PAGE_SIZE]; mt++)
vm_page_dirty(mt);
} else
vm_page_dirty(m);
}
/* Mark free */
PV_STAT(pv_entry_frees++);
PV_STAT(pv_entry_spare++);
pv_entry_count--;
pc->pc_map[field] |= bitmask;
if ((tpte & PG_PS) != 0) {
pmap->pm_stats.resident_count -= NBPDR / PAGE_SIZE;
pvh = pa_to_pvh(tpte & PG_PS_FRAME);
TAILQ_REMOVE(&pvh->pv_list, pv, pv_next);
if (TAILQ_EMPTY(&pvh->pv_list)) {
for (mt = m; mt < &m[NBPDR / PAGE_SIZE]; mt++)
if (TAILQ_EMPTY(&mt->md.pv_list))
vm_page_aflag_clear(mt, PGA_WRITEABLE);
}
mpte = pmap_remove_pt_page(pmap, pv->pv_va);
if (mpte != NULL) {
pmap->pm_stats.resident_count--;
KASSERT(mpte->wire_count == NPTEPG,
("pmap_remove_pages: pte page wire count error"));
mpte->wire_count = 0;
pmap_add_delayed_free_list(mpte, &free, FALSE);
}
} else {
pmap->pm_stats.resident_count--;
TAILQ_REMOVE(&m->md.pv_list, pv, pv_next);
if (TAILQ_EMPTY(&m->md.pv_list) &&
(m->flags & PG_FICTITIOUS) == 0) {
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
if (TAILQ_EMPTY(&pvh->pv_list))
vm_page_aflag_clear(m, PGA_WRITEABLE);
}
pmap_unuse_pt(pmap, pv->pv_va, &free);
}
}
}
if (allfree) {
TAILQ_REMOVE(&pmap->pm_pvchunk, pc, pc_list);
free_pv_chunk(pc);
}
}
sched_unpin();
pmap_invalidate_all_int(pmap);
rw_wunlock(&pvh_global_lock);
PMAP_UNLOCK(pmap);
vm_page_free_pages_toq(&free, true);
}
/*
* pmap_is_modified:
*
* Return whether or not the specified physical page was modified
* in any physical maps.
*/
static boolean_t
__CONCAT(PMTYPE, is_modified)(vm_page_t m)
{
boolean_t rv;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_is_modified: page %p is not managed", m));
/*
* If the page is not exclusive busied, then PGA_WRITEABLE cannot be
* concurrently set while the object is locked. Thus, if PGA_WRITEABLE
* is clear, no PTEs can have PG_M set.
*/
VM_OBJECT_ASSERT_WLOCKED(m->object);
if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0)
return (FALSE);
rw_wlock(&pvh_global_lock);
rv = pmap_is_modified_pvh(&m->md) ||
((m->flags & PG_FICTITIOUS) == 0 &&
pmap_is_modified_pvh(pa_to_pvh(VM_PAGE_TO_PHYS(m))));
rw_wunlock(&pvh_global_lock);
return (rv);
}
/*
* Returns TRUE if any of the given mappings were used to modify
* physical memory. Otherwise, returns FALSE. Both page and 2/4mpage
* mappings are supported.
*/
static boolean_t
pmap_is_modified_pvh(struct md_page *pvh)
{
pv_entry_t pv;
pt_entry_t *pte;
pmap_t pmap;
boolean_t rv;
rw_assert(&pvh_global_lock, RA_WLOCKED);
rv = FALSE;
sched_pin();
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pte = pmap_pte_quick(pmap, pv->pv_va);
rv = (*pte & (PG_M | PG_RW)) == (PG_M | PG_RW);
PMAP_UNLOCK(pmap);
if (rv)
break;
}
sched_unpin();
return (rv);
}
/*
* pmap_is_prefaultable:
*
* Return whether or not the specified virtual address is eligible
* for prefault.
*/
static boolean_t
__CONCAT(PMTYPE, is_prefaultable)(pmap_t pmap, vm_offset_t addr)
{
pd_entry_t pde;
boolean_t rv;
rv = FALSE;
PMAP_LOCK(pmap);
pde = *pmap_pde(pmap, addr);
if (pde != 0 && (pde & PG_PS) == 0)
rv = pmap_pte_ufast(pmap, addr, pde) == 0;
PMAP_UNLOCK(pmap);
return (rv);
}
/*
* pmap_is_referenced:
*
* Return whether or not the specified physical page was referenced
* in any physical maps.
*/
static boolean_t
__CONCAT(PMTYPE, is_referenced)(vm_page_t m)
{
boolean_t rv;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_is_referenced: page %p is not managed", m));
rw_wlock(&pvh_global_lock);
rv = pmap_is_referenced_pvh(&m->md) ||
((m->flags & PG_FICTITIOUS) == 0 &&
pmap_is_referenced_pvh(pa_to_pvh(VM_PAGE_TO_PHYS(m))));
rw_wunlock(&pvh_global_lock);
return (rv);
}
/*
* Returns TRUE if any of the given mappings were referenced and FALSE
* otherwise. Both page and 4mpage mappings are supported.
*/
static boolean_t
pmap_is_referenced_pvh(struct md_page *pvh)
{
pv_entry_t pv;
pt_entry_t *pte;
pmap_t pmap;
boolean_t rv;
rw_assert(&pvh_global_lock, RA_WLOCKED);
rv = FALSE;
sched_pin();
TAILQ_FOREACH(pv, &pvh->pv_list, pv_next) {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pte = pmap_pte_quick(pmap, pv->pv_va);
rv = (*pte & (PG_A | PG_V)) == (PG_A | PG_V);
PMAP_UNLOCK(pmap);
if (rv)
break;
}
sched_unpin();
return (rv);
}
/*
* Clear the write and modified bits in each of the given page's mappings.
*/
static void
__CONCAT(PMTYPE, remove_write)(vm_page_t m)
{
struct md_page *pvh;
pv_entry_t next_pv, pv;
pmap_t pmap;
pd_entry_t *pde;
pt_entry_t oldpte, *pte;
vm_offset_t va;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_remove_write: page %p is not managed", m));
/*
* If the page is not exclusive busied, then PGA_WRITEABLE cannot be
* set by another thread while the object is locked. Thus,
* if PGA_WRITEABLE is clear, no page table entries need updating.
*/
VM_OBJECT_ASSERT_WLOCKED(m->object);
if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0)
return;
rw_wlock(&pvh_global_lock);
sched_pin();
if ((m->flags & PG_FICTITIOUS) != 0)
goto small_mappings;
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
TAILQ_FOREACH_SAFE(pv, &pvh->pv_list, pv_next, next_pv) {
va = pv->pv_va;
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pde = pmap_pde(pmap, va);
if ((*pde & PG_RW) != 0)
(void)pmap_demote_pde(pmap, pde, va);
PMAP_UNLOCK(pmap);
}
small_mappings:
TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pde = pmap_pde(pmap, pv->pv_va);
KASSERT((*pde & PG_PS) == 0, ("pmap_clear_write: found"
" a 4mpage in page %p's pv list", m));
pte = pmap_pte_quick(pmap, pv->pv_va);
retry:
oldpte = *pte;
if ((oldpte & PG_RW) != 0) {
/*
* Regardless of whether a pte is 32 or 64 bits
* in size, PG_RW and PG_M are among the least
* significant 32 bits.
*/
if (!atomic_cmpset_int((u_int *)pte, oldpte,
oldpte & ~(PG_RW | PG_M)))
goto retry;
if ((oldpte & PG_M) != 0)
vm_page_dirty(m);
pmap_invalidate_page_int(pmap, pv->pv_va);
}
PMAP_UNLOCK(pmap);
}
vm_page_aflag_clear(m, PGA_WRITEABLE);
sched_unpin();
rw_wunlock(&pvh_global_lock);
}
/*
* pmap_ts_referenced:
*
* Return a count of reference bits for a page, clearing those bits.
* It is not necessary for every reference bit to be cleared, but it
* is necessary that 0 only be returned when there are truly no
* reference bits set.
*
* As an optimization, update the page's dirty field if a modified bit is
* found while counting reference bits. This opportunistic update can be
* performed at low cost and can eliminate the need for some future calls
* to pmap_is_modified(). However, since this function stops after
* finding PMAP_TS_REFERENCED_MAX reference bits, it may not detect some
* dirty pages. Those dirty pages will only be detected by a future call
* to pmap_is_modified().
*/
static int
__CONCAT(PMTYPE, ts_referenced)(vm_page_t m)
{
struct md_page *pvh;
pv_entry_t pv, pvf;
pmap_t pmap;
pd_entry_t *pde;
pt_entry_t *pte;
vm_paddr_t pa;
int rtval = 0;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_ts_referenced: page %p is not managed", m));
pa = VM_PAGE_TO_PHYS(m);
pvh = pa_to_pvh(pa);
rw_wlock(&pvh_global_lock);
sched_pin();
if ((m->flags & PG_FICTITIOUS) != 0 ||
(pvf = TAILQ_FIRST(&pvh->pv_list)) == NULL)
goto small_mappings;
pv = pvf;
do {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pde = pmap_pde(pmap, pv->pv_va);
if ((*pde & (PG_M | PG_RW)) == (PG_M | PG_RW)) {
/*
* Although "*pde" is mapping a 2/4MB page, because
* this function is called at a 4KB page granularity,
* we only update the 4KB page under test.
*/
vm_page_dirty(m);
}
if ((*pde & PG_A) != 0) {
/*
* Since this reference bit is shared by either 1024
* or 512 4KB pages, it should not be cleared every
* time it is tested. Apply a simple "hash" function
* on the physical page number, the virtual superpage
* number, and the pmap address to select one 4KB page
* out of the 1024 or 512 on which testing the
* reference bit will result in clearing that bit.
* This function is designed to avoid the selection of
* the same 4KB page for every 2- or 4MB page mapping.
*
* On demotion, a mapping that hasn't been referenced
* is simply destroyed. To avoid the possibility of a
* subsequent page fault on a demoted wired mapping,
* always leave its reference bit set. Moreover,
* since the superpage is wired, the current state of
* its reference bit won't affect page replacement.
*/
if ((((pa >> PAGE_SHIFT) ^ (pv->pv_va >> PDRSHIFT) ^
(uintptr_t)pmap) & (NPTEPG - 1)) == 0 &&
(*pde & PG_W) == 0) {
atomic_clear_int((u_int *)pde, PG_A);
pmap_invalidate_page_int(pmap, pv->pv_va);
}
rtval++;
}
PMAP_UNLOCK(pmap);
/* Rotate the PV list if it has more than one entry. */
if (TAILQ_NEXT(pv, pv_next) != NULL) {
TAILQ_REMOVE(&pvh->pv_list, pv, pv_next);
TAILQ_INSERT_TAIL(&pvh->pv_list, pv, pv_next);
}
if (rtval >= PMAP_TS_REFERENCED_MAX)
goto out;
} while ((pv = TAILQ_FIRST(&pvh->pv_list)) != pvf);
small_mappings:
if ((pvf = TAILQ_FIRST(&m->md.pv_list)) == NULL)
goto out;
pv = pvf;
do {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pde = pmap_pde(pmap, pv->pv_va);
KASSERT((*pde & PG_PS) == 0,
("pmap_ts_referenced: found a 4mpage in page %p's pv list",
m));
pte = pmap_pte_quick(pmap, pv->pv_va);
if ((*pte & (PG_M | PG_RW)) == (PG_M | PG_RW))
vm_page_dirty(m);
if ((*pte & PG_A) != 0) {
atomic_clear_int((u_int *)pte, PG_A);
pmap_invalidate_page_int(pmap, pv->pv_va);
rtval++;
}
PMAP_UNLOCK(pmap);
/* Rotate the PV list if it has more than one entry. */
if (TAILQ_NEXT(pv, pv_next) != NULL) {
TAILQ_REMOVE(&m->md.pv_list, pv, pv_next);
TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
}
} while ((pv = TAILQ_FIRST(&m->md.pv_list)) != pvf && rtval <
PMAP_TS_REFERENCED_MAX);
out:
sched_unpin();
rw_wunlock(&pvh_global_lock);
return (rtval);
}
/*
* Apply the given advice to the specified range of addresses within the
* given pmap. Depending on the advice, clear the referenced and/or
* modified flags in each mapping and set the mapped page's dirty field.
*/
static void
__CONCAT(PMTYPE, advise)(pmap_t pmap, vm_offset_t sva, vm_offset_t eva,
int advice)
{
pd_entry_t oldpde, *pde;
pt_entry_t *pte;
vm_offset_t va, pdnxt;
vm_page_t m;
boolean_t anychanged, pv_lists_locked;
if (advice != MADV_DONTNEED && advice != MADV_FREE)
return;
if (pmap_is_current(pmap))
pv_lists_locked = FALSE;
else {
pv_lists_locked = TRUE;
resume:
rw_wlock(&pvh_global_lock);
sched_pin();
}
anychanged = FALSE;
PMAP_LOCK(pmap);
for (; sva < eva; sva = pdnxt) {
pdnxt = (sva + NBPDR) & ~PDRMASK;
if (pdnxt < sva)
pdnxt = eva;
pde = pmap_pde(pmap, sva);
oldpde = *pde;
if ((oldpde & PG_V) == 0)
continue;
else if ((oldpde & PG_PS) != 0) {
if ((oldpde & PG_MANAGED) == 0)
continue;
if (!pv_lists_locked) {
pv_lists_locked = TRUE;
if (!rw_try_wlock(&pvh_global_lock)) {
if (anychanged)
pmap_invalidate_all_int(pmap);
PMAP_UNLOCK(pmap);
goto resume;
}
sched_pin();
}
if (!pmap_demote_pde(pmap, pde, sva)) {
/*
* The large page mapping was destroyed.
*/
continue;
}
/*
* Unless the page mappings are wired, remove the
* mapping to a single page so that a subsequent
* access may repromote. Since the underlying page
* table page is fully populated, this removal never
* frees a page table page.
*/
if ((oldpde & PG_W) == 0) {
pte = pmap_pte_quick(pmap, sva);
KASSERT((*pte & PG_V) != 0,
("pmap_advise: invalid PTE"));
pmap_remove_pte(pmap, pte, sva, NULL);
anychanged = TRUE;
}
}
if (pdnxt > eva)
pdnxt = eva;
va = pdnxt;
for (pte = pmap_pte_quick(pmap, sva); sva != pdnxt; pte++,
sva += PAGE_SIZE) {
if ((*pte & (PG_MANAGED | PG_V)) != (PG_MANAGED | PG_V))
goto maybe_invlrng;
else if ((*pte & (PG_M | PG_RW)) == (PG_M | PG_RW)) {
if (advice == MADV_DONTNEED) {
/*
* Future calls to pmap_is_modified()
* can be avoided by making the page
* dirty now.
*/
m = PHYS_TO_VM_PAGE(*pte & PG_FRAME);
vm_page_dirty(m);
}
atomic_clear_int((u_int *)pte, PG_M | PG_A);
} else if ((*pte & PG_A) != 0)
atomic_clear_int((u_int *)pte, PG_A);
else
goto maybe_invlrng;
if ((*pte & PG_G) != 0) {
if (va == pdnxt)
va = sva;
} else
anychanged = TRUE;
continue;
maybe_invlrng:
if (va != pdnxt) {
pmap_invalidate_range_int(pmap, va, sva);
va = pdnxt;
}
}
if (va != pdnxt)
pmap_invalidate_range_int(pmap, va, sva);
}
if (anychanged)
pmap_invalidate_all_int(pmap);
if (pv_lists_locked) {
sched_unpin();
rw_wunlock(&pvh_global_lock);
}
PMAP_UNLOCK(pmap);
}
/*
* Clear the modify bits on the specified physical page.
*/
static void
__CONCAT(PMTYPE, clear_modify)(vm_page_t m)
{
struct md_page *pvh;
pv_entry_t next_pv, pv;
pmap_t pmap;
pd_entry_t oldpde, *pde;
pt_entry_t oldpte, *pte;
vm_offset_t va;
KASSERT((m->oflags & VPO_UNMANAGED) == 0,
("pmap_clear_modify: page %p is not managed", m));
VM_OBJECT_ASSERT_WLOCKED(m->object);
KASSERT(!vm_page_xbusied(m),
("pmap_clear_modify: page %p is exclusive busied", m));
/*
* If the page is not PGA_WRITEABLE, then no PTEs can have PG_M set.
* If the object containing the page is locked and the page is not
* exclusive busied, then PGA_WRITEABLE cannot be concurrently set.
*/
if ((m->aflags & PGA_WRITEABLE) == 0)
return;
rw_wlock(&pvh_global_lock);
sched_pin();
if ((m->flags & PG_FICTITIOUS) != 0)
goto small_mappings;
pvh = pa_to_pvh(VM_PAGE_TO_PHYS(m));
TAILQ_FOREACH_SAFE(pv, &pvh->pv_list, pv_next, next_pv) {
va = pv->pv_va;
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pde = pmap_pde(pmap, va);
oldpde = *pde;
if ((oldpde & PG_RW) != 0) {
if (pmap_demote_pde(pmap, pde, va)) {
if ((oldpde & PG_W) == 0) {
/*
* Write protect the mapping to a
* single page so that a subsequent
* write access may repromote.
*/
va += VM_PAGE_TO_PHYS(m) - (oldpde &
PG_PS_FRAME);
pte = pmap_pte_quick(pmap, va);
oldpte = *pte;
if ((oldpte & PG_V) != 0) {
/*
* Regardless of whether a pte is 32 or 64 bits
* in size, PG_RW and PG_M are among the least
* significant 32 bits.
*/
while (!atomic_cmpset_int((u_int *)pte,
oldpte,
oldpte & ~(PG_M | PG_RW)))
oldpte = *pte;
vm_page_dirty(m);
pmap_invalidate_page_int(pmap,
va);
}
}
}
}
PMAP_UNLOCK(pmap);
}
small_mappings:
TAILQ_FOREACH(pv, &m->md.pv_list, pv_next) {
pmap = PV_PMAP(pv);
PMAP_LOCK(pmap);
pde = pmap_pde(pmap, pv->pv_va);
KASSERT((*pde & PG_PS) == 0, ("pmap_clear_modify: found"
" a 4mpage in page %p's pv list", m));
pte = pmap_pte_quick(pmap, pv->pv_va);
if ((*pte & (PG_M | PG_RW)) == (PG_M | PG_RW)) {
/*
* Regardless of whether a pte is 32 or 64 bits
* in size, PG_M is among the least significant
* 32 bits.
*/
atomic_clear_int((u_int *)pte, PG_M);
pmap_invalidate_page_int(pmap, pv->pv_va);
}
PMAP_UNLOCK(pmap);
}
sched_unpin();
rw_wunlock(&pvh_global_lock);
}
/*
* Miscellaneous support routines follow
*/
/* Adjust the cache mode for a 4KB page mapped via a PTE. */
static __inline void
pmap_pte_attr(pt_entry_t *pte, int cache_bits)
{
u_int opte, npte;
/*
* The cache mode bits are all in the low 32-bits of the
* PTE, so we can just spin on updating the low 32-bits.
*/
do {
opte = *(u_int *)pte;
npte = opte & ~PG_PTE_CACHE;
npte |= cache_bits;
} while (npte != opte && !atomic_cmpset_int((u_int *)pte, opte, npte));
}
/* Adjust the cache mode for a 2/4MB page mapped via a PDE. */
static __inline void
pmap_pde_attr(pd_entry_t *pde, int cache_bits)
{
u_int opde, npde;
/*
* The cache mode bits are all in the low 32-bits of the
* PDE, so we can just spin on updating the low 32-bits.
*/
do {
opde = *(u_int *)pde;
npde = opde & ~PG_PDE_CACHE;
npde |= cache_bits;
} while (npde != opde && !atomic_cmpset_int((u_int *)pde, opde, npde));
}
/*
* Map a set of physical memory pages into the kernel virtual
* address space. Return a pointer to where it is mapped. This
* routine is intended to be used for mapping device memory,
* NOT real memory.
*/
static void *
__CONCAT(PMTYPE, mapdev_attr)(vm_paddr_t pa, vm_size_t size, int mode)
{
struct pmap_preinit_mapping *ppim;
vm_offset_t va, offset;
vm_size_t tmpsize;
int i;
offset = pa & PAGE_MASK;
size = round_page(offset + size);
pa = pa & PG_FRAME;
if (pa < PMAP_MAP_LOW && pa + size <= PMAP_MAP_LOW)
va = pa + PMAP_MAP_LOW;
else if (!pmap_initialized) {
va = 0;
for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) {
ppim = pmap_preinit_mapping + i;
if (ppim->va == 0) {
ppim->pa = pa;
ppim->sz = size;
ppim->mode = mode;
ppim->va = virtual_avail;
virtual_avail += size;
va = ppim->va;
break;
}
}
if (va == 0)
panic("%s: too many preinit mappings", __func__);
} else {
/*
* If we have a preinit mapping, re-use it.
*/
for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) {
ppim = pmap_preinit_mapping + i;
if (ppim->pa == pa && ppim->sz == size &&
ppim->mode == mode)
return ((void *)(ppim->va + offset));
}
va = kva_alloc(size);
if (va == 0)
panic("%s: Couldn't allocate KVA", __func__);
}
for (tmpsize = 0; tmpsize < size; tmpsize += PAGE_SIZE)
pmap_kenter_attr(va + tmpsize, pa + tmpsize, mode);
pmap_invalidate_range_int(kernel_pmap, va, va + tmpsize);
pmap_invalidate_cache_range(va, va + size);
return ((void *)(va + offset));
}
static void
__CONCAT(PMTYPE, unmapdev)(vm_offset_t va, vm_size_t size)
{
struct pmap_preinit_mapping *ppim;
vm_offset_t offset;
int i;
if (va >= PMAP_MAP_LOW && va <= KERNBASE && va + size <= KERNBASE)
return;
offset = va & PAGE_MASK;
size = round_page(offset + size);
va = trunc_page(va);
for (i = 0; i < PMAP_PREINIT_MAPPING_COUNT; i++) {
ppim = pmap_preinit_mapping + i;
if (ppim->va == va && ppim->sz == size) {
if (pmap_initialized)
return;
ppim->pa = 0;
ppim->va = 0;
ppim->sz = 0;
ppim->mode = 0;
if (va + size == virtual_avail)
virtual_avail = va;
return;
}
}
if (pmap_initialized)
kva_free(va, size);
}
/*
* Sets the memory attribute for the specified page.
*/
static void
__CONCAT(PMTYPE, page_set_memattr)(vm_page_t m, vm_memattr_t ma)
{
m->md.pat_mode = ma;
if ((m->flags & PG_FICTITIOUS) != 0)
return;
/*
* If "m" is a normal page, flush it from the cache.
* See pmap_invalidate_cache_range().
*
* First, try to find an existing mapping of the page by sf
* buffer. sf_buf_invalidate_cache() modifies mapping and
* flushes the cache.
*/
if (sf_buf_invalidate_cache(m))
return;
/*
* If the page is not mapped by an sf buffer and the CPU does not
* support self-snoop, map the page transiently and perform the
* invalidation. In the worst case, the whole cache is flushed by
* pmap_invalidate_cache_range().
*/
if ((cpu_feature & CPUID_SS) == 0)
pmap_flush_page(m);
}
static void
__CONCAT(PMTYPE, flush_page)(vm_page_t m)
{
pt_entry_t *cmap_pte2;
struct pcpu *pc;
vm_offset_t sva, eva;
bool useclflushopt;
useclflushopt = (cpu_stdext_feature & CPUID_STDEXT_CLFLUSHOPT) != 0;
if (useclflushopt || (cpu_feature & CPUID_CLFSH) != 0) {
sched_pin();
pc = get_pcpu();
cmap_pte2 = pc->pc_cmap_pte2;
mtx_lock(&pc->pc_cmap_lock);
if (*cmap_pte2)
panic("pmap_flush_page: CMAP2 busy");
*cmap_pte2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) |
PG_A | PG_M | pmap_cache_bits(kernel_pmap, m->md.pat_mode,
0);
invlcaddr(pc->pc_cmap_addr2);
sva = (vm_offset_t)pc->pc_cmap_addr2;
eva = sva + PAGE_SIZE;
/*
* Use mfence or sfence despite the ordering implied by
* mtx_{un,}lock() because clflush on non-Intel CPUs
* and clflushopt are not guaranteed to be ordered by
* any other instruction.
*/
if (useclflushopt)
sfence();
else if (cpu_vendor_id != CPU_VENDOR_INTEL)
mfence();
for (; sva < eva; sva += cpu_clflush_line_size) {
if (useclflushopt)
clflushopt(sva);
else
clflush(sva);
}
if (useclflushopt)
sfence();
else if (cpu_vendor_id != CPU_VENDOR_INTEL)
mfence();
*cmap_pte2 = 0;
sched_unpin();
mtx_unlock(&pc->pc_cmap_lock);
} else
pmap_invalidate_cache();
}
/*
* Changes the specified virtual address range's memory type to that given by
* the parameter "mode". The specified virtual address range must be
* completely contained within the kernel map.
*
* Returns zero if the change completed successfully, and either EINVAL or
* ENOMEM if the change failed. Specifically, EINVAL is returned if some part
* of the virtual address range was not mapped, and ENOMEM is returned if
* there was insufficient memory available to complete the change.
*/
static int
__CONCAT(PMTYPE, change_attr)(vm_offset_t va, vm_size_t size, int mode)
{
vm_offset_t base, offset, tmpva;
pd_entry_t *pde;
pt_entry_t *pte;
int cache_bits_pte, cache_bits_pde;
boolean_t changed;
base = trunc_page(va);
offset = va & PAGE_MASK;
size = round_page(offset + size);
/*
* Only supported on kernel virtual addresses above the recursive map.
*/
if (base < VM_MIN_KERNEL_ADDRESS)
return (EINVAL);
cache_bits_pde = pmap_cache_bits(kernel_pmap, mode, 1);
cache_bits_pte = pmap_cache_bits(kernel_pmap, mode, 0);
changed = FALSE;
/*
* Pages that aren't mapped aren't supported. Also break down
* 2/4MB pages into 4KB pages if required.
*/
PMAP_LOCK(kernel_pmap);
for (tmpva = base; tmpva < base + size; ) {
pde = pmap_pde(kernel_pmap, tmpva);
if (*pde == 0) {
PMAP_UNLOCK(kernel_pmap);
return (EINVAL);
}
if (*pde & PG_PS) {
/*
* If the current 2/4MB page already has
* the required memory type, then we need not
* demote this page. Just increment tmpva to
* the next 2/4MB page frame.
*/
if ((*pde & PG_PDE_CACHE) == cache_bits_pde) {
tmpva = trunc_4mpage(tmpva) + NBPDR;
continue;
}
/*
* If the current offset aligns with a 2/4MB
* page frame and there is at least 2/4MB left
* within the range, then we need not break
* down this page into 4KB pages.
*/
if ((tmpva & PDRMASK) == 0 &&
tmpva + PDRMASK < base + size) {
tmpva += NBPDR;
continue;
}
if (!pmap_demote_pde(kernel_pmap, pde, tmpva)) {
PMAP_UNLOCK(kernel_pmap);
return (ENOMEM);
}
}
pte = vtopte(tmpva);
if (*pte == 0) {
PMAP_UNLOCK(kernel_pmap);
return (EINVAL);
}
tmpva += PAGE_SIZE;
}
PMAP_UNLOCK(kernel_pmap);
/*
* Ok, all the pages exist, so run through them updating their
* cache mode if required.
*/
for (tmpva = base; tmpva < base + size; ) {
pde = pmap_pde(kernel_pmap, tmpva);
if (*pde & PG_PS) {
if ((*pde & PG_PDE_CACHE) != cache_bits_pde) {
pmap_pde_attr(pde, cache_bits_pde);
changed = TRUE;
}
tmpva = trunc_4mpage(tmpva) + NBPDR;
} else {
pte = vtopte(tmpva);
if ((*pte & PG_PTE_CACHE) != cache_bits_pte) {
pmap_pte_attr(pte, cache_bits_pte);
changed = TRUE;
}
tmpva += PAGE_SIZE;
}
}
/*
* Flush the CPU caches so that no stale data for the changed
* range remains cached.
*/
if (changed) {
pmap_invalidate_range_int(kernel_pmap, base, tmpva);
pmap_invalidate_cache_range(base, tmpva);
}
return (0);
}
/*
* Perform the pmap work for mincore(2).
*/
static int
__CONCAT(PMTYPE, mincore)(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa)
{
pd_entry_t pde;
pt_entry_t pte;
vm_paddr_t pa;
int val;
PMAP_LOCK(pmap);
retry:
pde = *pmap_pde(pmap, addr);
if (pde != 0) {
if ((pde & PG_PS) != 0) {
pte = pde;
/* Compute the physical address of the 4KB page. */
pa = ((pde & PG_PS_FRAME) | (addr & PDRMASK)) &
PG_FRAME;
val = MINCORE_SUPER;
} else {
pte = pmap_pte_ufast(pmap, addr, pde);
pa = pte & PG_FRAME;
val = 0;
}
} else {
pte = 0;
pa = 0;
val = 0;
}
if ((pte & PG_V) != 0) {
val |= MINCORE_INCORE;
if ((pte & (PG_M | PG_RW)) == (PG_M | PG_RW))
val |= MINCORE_MODIFIED | MINCORE_MODIFIED_OTHER;
if ((pte & PG_A) != 0)
val |= MINCORE_REFERENCED | MINCORE_REFERENCED_OTHER;
}
if ((val & (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER)) !=
(MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER) &&
(pte & (PG_MANAGED | PG_V)) == (PG_MANAGED | PG_V)) {
/* Ensure that "PHYS_TO_VM_PAGE(pa)->object" doesn't change. */
if (vm_page_pa_tryrelock(pmap, pa, locked_pa))
goto retry;
} else
PA_UNLOCK_COND(*locked_pa);
PMAP_UNLOCK(pmap);
return (val);
}
static void
__CONCAT(PMTYPE, activate)(struct thread *td)
{
pmap_t pmap, oldpmap;
u_int cpuid;
u_int32_t cr3;
critical_enter();
pmap = vmspace_pmap(td->td_proc->p_vmspace);
oldpmap = PCPU_GET(curpmap);
cpuid = PCPU_GET(cpuid);
#if defined(SMP)
CPU_CLR_ATOMIC(cpuid, &oldpmap->pm_active);
CPU_SET_ATOMIC(cpuid, &pmap->pm_active);
#else
CPU_CLR(cpuid, &oldpmap->pm_active);
CPU_SET(cpuid, &pmap->pm_active);
#endif
#ifdef PMAP_PAE_COMP
cr3 = vtophys(pmap->pm_pdpt);
#else
cr3 = vtophys(pmap->pm_pdir);
#endif
/*
* pmap_activate is for the current thread on the current cpu
*/
td->td_pcb->pcb_cr3 = cr3;
PCPU_SET(curpmap, pmap);
critical_exit();
}
static void
__CONCAT(PMTYPE, activate_boot)(pmap_t pmap)
{
u_int cpuid;
cpuid = PCPU_GET(cpuid);
#if defined(SMP)
CPU_SET_ATOMIC(cpuid, &pmap->pm_active);
#else
CPU_SET(cpuid, &pmap->pm_active);
#endif
PCPU_SET(curpmap, pmap);
}
/*
* Increase the starting virtual address of the given mapping if a
* different alignment might result in more superpage mappings.
*/
static void
__CONCAT(PMTYPE, align_superpage)(vm_object_t object, vm_ooffset_t offset,
vm_offset_t *addr, vm_size_t size)
{
vm_offset_t superpage_offset;
if (size < NBPDR)
return;
if (object != NULL && (object->flags & OBJ_COLORED) != 0)
offset += ptoa(object->pg_color);
superpage_offset = offset & PDRMASK;
if (size - ((NBPDR - superpage_offset) & PDRMASK) < NBPDR ||
(*addr & PDRMASK) == superpage_offset)
return;
if ((*addr & PDRMASK) < superpage_offset)
*addr = (*addr & ~PDRMASK) + superpage_offset;
else
*addr = ((*addr + PDRMASK) & ~PDRMASK) + superpage_offset;
}
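The two branches at the end of `align_superpage` above adjust the mapping start so that it shares the object's color offset within a superpage, rounding up only when rounding down would move the address backwards past the offset. A hedged stand-alone sketch of just that arithmetic (constants are illustrative, matching the 4MB non-PAE case):

```c
#include <stdint.h>

#define NBPDR   (1u << 22)   /* illustrative: 4 MB superpage */
#define PDRMASK (NBPDR - 1)

/* Return addr adjusted so that (addr & PDRMASK) == superpage_offset,
 * moving forward by as little as possible, as align_superpage does. */
static uint32_t align_to_color(uint32_t addr, uint32_t superpage_offset) {
    if ((addr & PDRMASK) < superpage_offset)
        /* Still before the offset in this frame: stay in the frame. */
        return (addr & ~PDRMASK) + superpage_offset;
    /* Past the offset: round up to the next frame, then add it. */
    return ((addr + PDRMASK) & ~PDRMASK) + superpage_offset;
}
```

In both cases the result is never smaller than the frame-relative target, so the caller's hint only ever moves forward.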
static vm_offset_t
__CONCAT(PMTYPE, quick_enter_page)(vm_page_t m)
{
vm_offset_t qaddr;
pt_entry_t *pte;
critical_enter();
qaddr = PCPU_GET(qmap_addr);
pte = vtopte(qaddr);
KASSERT(*pte == 0,
("pmap_quick_enter_page: PTE busy %#jx", (uintmax_t)*pte));
*pte = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
pmap_cache_bits(kernel_pmap, pmap_page_get_memattr(m), 0);
invlpg(qaddr);
return (qaddr);
}
static void
__CONCAT(PMTYPE, quick_remove_page)(vm_offset_t addr)
{
vm_offset_t qaddr;
pt_entry_t *pte;
qaddr = PCPU_GET(qmap_addr);
pte = vtopte(qaddr);
KASSERT(*pte != 0, ("pmap_quick_remove_page: PTE not in use"));
KASSERT(addr == qaddr, ("pmap_quick_remove_page: invalid address"));
*pte = 0;
critical_exit();
}
static vmem_t *pmap_trm_arena;
static vmem_addr_t pmap_trm_arena_last = PMAP_TRM_MIN_ADDRESS;
static int trm_guard = PAGE_SIZE;
static int
pmap_trm_import(void *unused __unused, vmem_size_t size, int flags,
vmem_addr_t *addrp)
{
vm_page_t m;
vmem_addr_t af, addr, prev_addr;
pt_entry_t *trm_pte;
prev_addr = atomic_load_long(&pmap_trm_arena_last);
size = round_page(size) + trm_guard;
for (;;) {
if (prev_addr + size < prev_addr || prev_addr + size < size ||
prev_addr + size > PMAP_TRM_MAX_ADDRESS)
return (ENOMEM);
addr = prev_addr + size;
if (atomic_fcmpset_int(&pmap_trm_arena_last, &prev_addr, addr))
break;
}
prev_addr += trm_guard;
trm_pte = PTmap + atop(prev_addr);
for (af = prev_addr; af < addr; af += PAGE_SIZE) {
m = vm_page_alloc(NULL, 0, VM_ALLOC_NOOBJ | VM_ALLOC_NOBUSY |
VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_WAITOK);
pte_store(&trm_pte[atop(af - prev_addr)], VM_PAGE_TO_PHYS(m) |
PG_M | PG_A | PG_RW | PG_V | pgeflag |
pmap_cache_bits(kernel_pmap, VM_MEMATTR_DEFAULT, FALSE));
}
*addrp = prev_addr;
return (0);
}
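`pmap_trm_import` above reserves address space lock-free: racing callers advance a shared high-water mark with `atomic_fcmpset_int`, and the winner of the compare-and-swap owns the reserved range. A user-space analogue using C11 atomics (the arena bounds and initial value here are illustrative, not the kernel's `PMAP_TRM_*` constants):

```c
#include <stdatomic.h>
#include <stdint.h>

#define MAX_ADDR 0x1000000u             /* illustrative arena ceiling */

static _Atomic uint32_t arena_last = 0x100000u;  /* illustrative start */

/* Reserve "size" bytes; returns the start of [start, start+size),
 * or 0 on overflow/exhaustion. Mirrors the fcmpset retry loop in
 * pmap_trm_import: on CAS failure, prev is reloaded automatically. */
static uint32_t trm_reserve(uint32_t size) {
    uint32_t prev = atomic_load(&arena_last);
    for (;;) {
        if (prev + size < prev || prev + size > MAX_ADDR)
            return 0;
        if (atomic_compare_exchange_weak(&arena_last, &prev, prev + size))
            return prev;
    }
}
```

The overflow check guards both arithmetic wraparound and the arena ceiling before the CAS, so a failed reservation never moves the mark.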
void
pmap_init_trm(void)
{
vm_page_t pd_m;
TUNABLE_INT_FETCH("machdep.trm_guard", &trm_guard);
if ((trm_guard & PAGE_MASK) != 0)
trm_guard = 0;
pmap_trm_arena = vmem_create("i386trampoline", 0, 0, 1, 0, M_WAITOK);
vmem_set_import(pmap_trm_arena, pmap_trm_import, NULL, NULL, PAGE_SIZE);
pd_m = vm_page_alloc(NULL, 0, VM_ALLOC_NOOBJ | VM_ALLOC_NOBUSY |
VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_WAITOK | VM_ALLOC_ZERO);
if ((pd_m->flags & PG_ZERO) == 0)
pmap_zero_page(pd_m);
PTD[TRPTDI] = VM_PAGE_TO_PHYS(pd_m) | PG_M | PG_A | PG_RW | PG_V |
pmap_cache_bits(kernel_pmap, VM_MEMATTR_DEFAULT, TRUE);
}
static void *
__CONCAT(PMTYPE, trm_alloc)(size_t size, int flags)
{
vmem_addr_t res;
int error;
MPASS((flags & ~(M_WAITOK | M_NOWAIT | M_ZERO)) == 0);
error = vmem_xalloc(pmap_trm_arena, roundup2(size, 4), sizeof(int),
0, 0, VMEM_ADDR_MIN, VMEM_ADDR_MAX, flags | M_FIRSTFIT, &res);
if (error != 0)
return (NULL);
if ((flags & M_ZERO) != 0)
bzero((void *)res, size);
return ((void *)res);
}
static void
__CONCAT(PMTYPE, trm_free)(void *addr, size_t size)
{
vmem_free(pmap_trm_arena, (uintptr_t)addr, roundup2(size, 4));
}
#if defined(PMAP_DEBUG)
int
pmap_pid_dump(int pid)
{
pmap_t pmap;
struct proc *p;
int npte = 0;
int index;
sx_slock(&allproc_lock);
FOREACH_PROC_IN_SYSTEM(p) {
if (p->p_pid != pid)
continue;
if (p->p_vmspace) {
int i, j;
index = 0;
pmap = vmspace_pmap(p->p_vmspace);
for (i = 0; i < NPDEPTD; i++) {
pd_entry_t *pde;
pt_entry_t *pte;
vm_offset_t base = i << PDRSHIFT;
pde = &pmap->pm_pdir[i];
if (pde && pmap_pde_v(pde)) {
for (j = 0; j < NPTEPG; j++) {
vm_offset_t va = base + (j << PAGE_SHIFT);
if (va >= (vm_offset_t) VM_MIN_KERNEL_ADDRESS) {
if (index) {
index = 0;
printf("\n");
}
sx_sunlock(&allproc_lock);
return (npte);
}
pte = pmap_pte(pmap, va);
if (pte && pmap_pte_v(pte)) {
pt_entry_t pa;
vm_page_t m;
pa = *pte;
m = PHYS_TO_VM_PAGE(pa & PG_FRAME);
printf("va: 0x%x, pt: 0x%x, h: %d, w: %d, f: 0x%x",
va, pa, m->hold_count, m->wire_count, m->flags);
npte++;
index++;
if (index >= 2) {
index = 0;
printf("\n");
} else {
printf(" ");
}
}
}
}
}
}
}
sx_sunlock(&allproc_lock);
return (npte);
}
#endif
static void
__CONCAT(PMTYPE, ksetrw)(vm_offset_t va)
{
*vtopte(va) |= PG_RW;
}
static void
__CONCAT(PMTYPE, remap_lowptdi)(bool enable)
{
PTD[KPTDI] = enable ? PTD[LOWPTDI] : 0;
invltlb_glob();
}
static vm_offset_t
__CONCAT(PMTYPE, get_map_low)(void)
{
return (PMAP_MAP_LOW);
}
static vm_offset_t
__CONCAT(PMTYPE, get_vm_maxuser_address)(void)
{
return (VM_MAXUSER_ADDRESS);
}
static vm_paddr_t
__CONCAT(PMTYPE, pg_frame)(vm_paddr_t pa)
{
return (pa & PG_FRAME);
}
static void
__CONCAT(PMTYPE, sf_buf_map)(struct sf_buf *sf)
{
pt_entry_t opte, *ptep;
/*
* Update the sf_buf's virtual-to-physical mapping, flushing the
* virtual address from the TLB. Since the reference count for
* the sf_buf's old mapping was zero, that mapping is not
* currently in use. Consequently, there is no need to exchange
* the old and new PTEs atomically, even under PAE.
*/
ptep = vtopte(sf->kva);
opte = *ptep;
*ptep = VM_PAGE_TO_PHYS(sf->m) | PG_RW | PG_V |
pmap_cache_bits(kernel_pmap, sf->m->md.pat_mode, 0);
/*
* Avoid unnecessary TLB invalidations: If the sf_buf's old
* virtual-to-physical mapping was not used, then any processor
* that has invalidated the sf_buf's virtual address from its TLB
* since the last used mapping need not invalidate again.
*/
#ifdef SMP
if ((opte & (PG_V | PG_A)) == (PG_V | PG_A))
CPU_ZERO(&sf->cpumask);
#else
if ((opte & (PG_V | PG_A)) == (PG_V | PG_A))
pmap_invalidate_page_int(kernel_pmap, sf->kva);
#endif
}
static void
__CONCAT(PMTYPE, cp_slow0_map)(vm_offset_t kaddr, int plen, vm_page_t *ma)
{
pt_entry_t *pte;
int i;
for (i = 0, pte = vtopte(kaddr); i < plen; i++, pte++) {
*pte = PG_V | PG_RW | PG_A | PG_M | VM_PAGE_TO_PHYS(ma[i]) |
pmap_cache_bits(kernel_pmap, pmap_page_get_memattr(ma[i]),
FALSE);
invlpg(kaddr + ptoa(i));
}
}
static u_int
__CONCAT(PMTYPE, get_kcr3)(void)
{
#ifdef PMAP_PAE_COMP
return ((u_int)IdlePDPT);
#else
return ((u_int)IdlePTD);
#endif
}
static u_int
__CONCAT(PMTYPE, get_cr3)(pmap_t pmap)
{
#ifdef PMAP_PAE_COMP
return ((u_int)vtophys(pmap->pm_pdpt));
#else
return ((u_int)vtophys(pmap->pm_pdir));
#endif
}
static caddr_t
__CONCAT(PMTYPE, cmap3)(vm_paddr_t pa, u_int pte_bits)
{
pt_entry_t *pte;
pte = CMAP3;
*pte = pa | pte_bits;
invltlb();
return (CADDR3);
}
static void
__CONCAT(PMTYPE, basemem_setup)(u_int basemem)
{
pt_entry_t *pte;
int i;
/*
* Map pages between basemem and ISA_HOLE_START, if any, r/w into
* the vm86 page table so that vm86 can scribble on them using
* the vm86 map too. XXX: why 2 ways for this and only 1 way for
* page 0, at least as initialized here?
*/
pte = (pt_entry_t *)vm86paddr;
for (i = basemem / 4; i < 160; i++)
pte[i] = (i << PAGE_SHIFT) | PG_V | PG_RW | PG_U;
}
struct bios16_pmap_handle {
pt_entry_t *pte;
pd_entry_t *ptd;
pt_entry_t orig_ptd;
};
static void *
__CONCAT(PMTYPE, bios16_enter)(void)
{
struct bios16_pmap_handle *h;
/*
* no page table, so create one and install it.
*/
h = malloc(sizeof(struct bios16_pmap_handle), M_TEMP, M_WAITOK);
h->pte = (pt_entry_t *)malloc(PAGE_SIZE, M_TEMP, M_WAITOK);
h->ptd = IdlePTD;
*h->pte = vm86phystk | PG_RW | PG_V;
h->orig_ptd = *h->ptd;
*h->ptd = vtophys(h->pte) | PG_RW | PG_V;
pmap_invalidate_all_int(kernel_pmap); /* XXX insurance for now */
return (h);
}
static void
__CONCAT(PMTYPE, bios16_leave)(void *arg)
{
struct bios16_pmap_handle *h;
h = arg;
*h->ptd = h->orig_ptd; /* remove page table */
/*
* XXX only needs to be invlpg(0) but that doesn't work on the 386
*/
pmap_invalidate_all_int(kernel_pmap);
free(h->pte, M_TEMP); /* ... and free it */
}
#define PMM(a) \
.pm_##a = __CONCAT(PMTYPE, a),
struct pmap_methods __CONCAT(PMTYPE, methods) = {
PMM(ksetrw)
PMM(remap_lower)
PMM(remap_lowptdi)
PMM(align_superpage)
PMM(quick_enter_page)
PMM(quick_remove_page)
PMM(trm_alloc)
PMM(trm_free)
PMM(get_map_low)
PMM(get_vm_maxuser_address)
PMM(kextract)
PMM(pg_frame)
PMM(sf_buf_map)
PMM(cp_slow0_map)
PMM(get_kcr3)
PMM(get_cr3)
PMM(cmap3)
PMM(basemem_setup)
PMM(set_nx)
PMM(bios16_enter)
PMM(bios16_leave)
PMM(bootstrap)
PMM(is_valid_memattr)
PMM(cache_bits)
PMM(ps_enabled)
PMM(pinit0)
PMM(pinit)
PMM(activate)
PMM(activate_boot)
PMM(advise)
PMM(clear_modify)
PMM(change_attr)
PMM(mincore)
PMM(copy)
PMM(copy_page)
PMM(copy_pages)
PMM(zero_page)
PMM(zero_page_area)
PMM(enter)
PMM(enter_object)
PMM(enter_quick)
PMM(kenter_temporary)
PMM(object_init_pt)
PMM(unwire)
PMM(page_exists_quick)
PMM(page_wired_mappings)
PMM(page_is_mapped)
PMM(remove_pages)
PMM(is_modified)
PMM(is_prefaultable)
PMM(is_referenced)
PMM(remove_write)
PMM(ts_referenced)
PMM(mapdev_attr)
PMM(unmapdev)
PMM(page_set_memattr)
PMM(extract)
PMM(extract_and_hold)
PMM(map)
PMM(qenter)
PMM(qremove)
PMM(release)
PMM(remove)
PMM(protect)
PMM(remove_all)
PMM(init)
PMM(init_pat)
PMM(growkernel)
PMM(invalidate_page)
PMM(invalidate_range)
PMM(invalidate_all)
PMM(invalidate_cache)
PMM(flush_page)
PMM(kenter)
PMM(kremove)
};
Index: projects/clang800-import/sys/i386/i386/pmap_base.c
===================================================================
--- projects/clang800-import/sys/i386/i386/pmap_base.c (revision 343955)
+++ projects/clang800-import/sys/i386/i386/pmap_base.c (revision 343956)
@@ -1,954 +1,958 @@
/*-
* SPDX-License-Identifier: BSD-4-Clause
*
* Copyright (c) 1991 Regents of the University of California.
* All rights reserved.
* Copyright (c) 1994 John S. Dyson
* All rights reserved.
* Copyright (c) 1994 David Greenman
* All rights reserved.
* Copyright (c) 2005-2010 Alan L. Cox <alc@cs.rice.edu>
* All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* the Systems Programming Group of the University of Utah Computer
* Science Department and William Jolitz of UUNET Technologies Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the University of
* California, Berkeley and its contributors.
* 4. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: @(#)pmap.c 7.7 (Berkeley) 5/12/91
*/
/*-
* Copyright (c) 2003 Networks Associates Technology, Inc.
* All rights reserved.
* Copyright (c) 2018 The FreeBSD Foundation
* All rights reserved.
*
* This software was developed for the FreeBSD Project by Jake Burkholder,
* Safeport Network Services, and Network Associates Laboratories, the
* Security Research Division of Network Associates, Inc. under
* DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the DARPA
* CHATS research program.
*
* Portions of this software were developed by
* Konstantin Belousov <kib@FreeBSD.org> under sponsorship from
* the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_apic.h"
#include "opt_cpu.h"
#include "opt_pmap.h"
#include "opt_smp.h"
#include "opt_vm.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/vmmeter.h>
#include <sys/sysctl.h>
+#include <machine/bootinfo.h>
#include <machine/cpu.h>
#include <machine/cputypes.h>
#include <machine/md_var.h>
#ifdef DEV_APIC
#include <sys/bus.h>
#include <machine/intr_machdep.h>
#include <x86/apicvar.h>
#endif
#include <x86/ifunc.h>
static SYSCTL_NODE(_vm, OID_AUTO, pmap, CTLFLAG_RD, 0, "VM/pmap parameters");
#include <machine/vmparam.h>
#include <vm/vm.h>
#include <vm/vm_page.h>
#include <vm/pmap.h>
#include <machine/pmap_base.h>
vm_offset_t virtual_avail; /* VA of first avail page (after kernel bss) */
vm_offset_t virtual_end; /* VA of last avail page (end of kernel AS) */
int unmapped_buf_allowed = 1;
int pti;
u_long physfree; /* phys addr of next free page */
u_long vm86phystk; /* PA of vm86/bios stack */
u_long vm86paddr; /* address of vm86 region */
int vm86pa; /* phys addr of vm86 region */
u_long KERNend; /* phys addr end of kernel (just after bss) */
u_long KPTphys; /* phys addr of kernel page tables */
caddr_t ptvmmap = 0;
vm_offset_t kernel_vm_end;
int i386_pmap_VM_NFREEORDER;
int i386_pmap_VM_LEVEL_0_ORDER;
int i386_pmap_PDRSHIFT;
int pat_works = 1;
SYSCTL_INT(_vm_pmap, OID_AUTO, pat_works, CTLFLAG_RD,
- &pat_works, 1,
+ &pat_works, 0,
"Is page attribute table fully functional?");
int pg_ps_enabled = 1;
SYSCTL_INT(_vm_pmap, OID_AUTO, pg_ps_enabled, CTLFLAG_RDTUN | CTLFLAG_NOFETCH,
&pg_ps_enabled, 0,
"Are large page mappings enabled?");
int pv_entry_max = 0;
SYSCTL_INT(_vm_pmap, OID_AUTO, pv_entry_max, CTLFLAG_RD,
&pv_entry_max, 0,
"Max number of PV entries");
int pv_entry_count = 0;
SYSCTL_INT(_vm_pmap, OID_AUTO, pv_entry_count, CTLFLAG_RD,
&pv_entry_count, 0,
"Current number of pv entries");
#ifndef PMAP_SHPGPERPROC
#define PMAP_SHPGPERPROC 200
#endif
int shpgperproc = PMAP_SHPGPERPROC;
SYSCTL_INT(_vm_pmap, OID_AUTO, shpgperproc, CTLFLAG_RD,
&shpgperproc, 0,
"Page share factor per proc");
static SYSCTL_NODE(_vm_pmap, OID_AUTO, pde, CTLFLAG_RD, 0,
"2/4MB page mapping counters");
u_long pmap_pde_demotions;
SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, demotions, CTLFLAG_RD,
&pmap_pde_demotions, 0,
"2/4MB page demotions");
u_long pmap_pde_mappings;
SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, mappings, CTLFLAG_RD,
&pmap_pde_mappings, 0,
"2/4MB page mappings");
u_long pmap_pde_p_failures;
SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, p_failures, CTLFLAG_RD,
&pmap_pde_p_failures, 0,
"2/4MB page promotion failures");
u_long pmap_pde_promotions;
SYSCTL_ULONG(_vm_pmap_pde, OID_AUTO, promotions, CTLFLAG_RD,
&pmap_pde_promotions, 0,
"2/4MB page promotions");
#ifdef SMP
int PMAP1changedcpu;
SYSCTL_INT(_debug, OID_AUTO, PMAP1changedcpu, CTLFLAG_RD,
&PMAP1changedcpu, 0,
"Number of times pmap_pte_quick changed CPU with same PMAP1");
#endif
int PMAP1changed;
SYSCTL_INT(_debug, OID_AUTO, PMAP1changed, CTLFLAG_RD,
&PMAP1changed, 0,
"Number of times pmap_pte_quick changed PMAP1");
int PMAP1unchanged;
SYSCTL_INT(_debug, OID_AUTO, PMAP1unchanged, CTLFLAG_RD,
&PMAP1unchanged, 0,
"Number of times pmap_pte_quick didn't change PMAP1");
static int
kvm_size(SYSCTL_HANDLER_ARGS)
{
unsigned long ksize;
ksize = VM_MAX_KERNEL_ADDRESS - KERNBASE;
return (sysctl_handle_long(oidp, &ksize, 0, req));
}
SYSCTL_PROC(_vm, OID_AUTO, kvm_size, CTLTYPE_LONG | CTLFLAG_RD | CTLFLAG_MPSAFE,
0, 0, kvm_size, "IU",
"Size of KVM");
static int
kvm_free(SYSCTL_HANDLER_ARGS)
{
unsigned long kfree;
kfree = VM_MAX_KERNEL_ADDRESS - kernel_vm_end;
return (sysctl_handle_long(oidp, &kfree, 0, req));
}
SYSCTL_PROC(_vm, OID_AUTO, kvm_free, CTLTYPE_LONG | CTLFLAG_RD | CTLFLAG_MPSAFE,
0, 0, kvm_free, "IU",
"Amount of KVM free");
#ifdef PV_STATS
int pc_chunk_count, pc_chunk_allocs, pc_chunk_frees, pc_chunk_tryfail;
long pv_entry_frees, pv_entry_allocs;
int pv_entry_spare;
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_count, CTLFLAG_RD,
&pc_chunk_count, 0,
"Current number of pv entry chunks");
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_allocs, CTLFLAG_RD,
&pc_chunk_allocs, 0,
"Current number of pv entry chunks allocated");
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_frees, CTLFLAG_RD,
&pc_chunk_frees, 0,
"Current number of pv entry chunks frees");
SYSCTL_INT(_vm_pmap, OID_AUTO, pc_chunk_tryfail, CTLFLAG_RD,
&pc_chunk_tryfail, 0,
"Number of times tried to get a chunk page but failed.");
SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_frees, CTLFLAG_RD,
&pv_entry_frees, 0,
"Current number of pv entry frees");
SYSCTL_LONG(_vm_pmap, OID_AUTO, pv_entry_allocs, CTLFLAG_RD,
&pv_entry_allocs, 0,
"Current number of pv entry allocs");
SYSCTL_INT(_vm_pmap, OID_AUTO, pv_entry_spare, CTLFLAG_RD,
&pv_entry_spare, 0,
"Current number of spare pv entries");
#endif
struct pmap kernel_pmap_store;
static struct pmap_methods *pmap_methods_ptr;
/*
* Initialize a vm_page's machine-dependent fields.
*/
void
pmap_page_init(vm_page_t m)
{
TAILQ_INIT(&m->md.pv_list);
m->md.pat_mode = PAT_WRITE_BACK;
}
void
invltlb_glob(void)
{
invltlb();
}
static void pmap_invalidate_cache_range_selfsnoop(vm_offset_t sva,
vm_offset_t eva);
static void pmap_invalidate_cache_range_all(vm_offset_t sva,
vm_offset_t eva);
void
pmap_flush_page(vm_page_t m)
{
pmap_methods_ptr->pm_flush_page(m);
}
DEFINE_IFUNC(, void, pmap_invalidate_cache_range, (vm_offset_t, vm_offset_t),
static)
{
if ((cpu_feature & CPUID_SS) != 0)
return (pmap_invalidate_cache_range_selfsnoop);
if ((cpu_feature & CPUID_CLFSH) != 0)
return (pmap_force_invalidate_cache_range);
return (pmap_invalidate_cache_range_all);
}
#define PMAP_CLFLUSH_THRESHOLD (2 * 1024 * 1024)
static void
pmap_invalidate_cache_range_check_align(vm_offset_t sva, vm_offset_t eva)
{
KASSERT((sva & PAGE_MASK) == 0,
("pmap_invalidate_cache_range: sva not page-aligned"));
KASSERT((eva & PAGE_MASK) == 0,
("pmap_invalidate_cache_range: eva not page-aligned"));
}
static void
pmap_invalidate_cache_range_selfsnoop(vm_offset_t sva, vm_offset_t eva)
{
pmap_invalidate_cache_range_check_align(sva, eva);
}
void
pmap_force_invalidate_cache_range(vm_offset_t sva, vm_offset_t eva)
{
sva &= ~(vm_offset_t)(cpu_clflush_line_size - 1);
if (eva - sva >= PMAP_CLFLUSH_THRESHOLD) {
/*
* The supplied range is bigger than 2MB.
* Globally invalidate cache.
*/
pmap_invalidate_cache();
return;
}
#ifdef DEV_APIC
/*
* XXX: Some CPUs fault, hang, or trash the local APIC
* registers if we use CLFLUSH on the local APIC
* range. The local APIC is always uncached, so we
* don't need to flush for that range anyway.
*/
if (pmap_kextract(sva) == lapic_paddr)
return;
#endif
if ((cpu_stdext_feature & CPUID_STDEXT_CLFLUSHOPT) != 0) {
/*
* Do per-cache line flush. Use the sfence
* instruction to ensure that previous stores are
* included in the write-back. The processor
* propagates flush to other processors in the cache
* coherence domain.
*/
sfence();
for (; sva < eva; sva += cpu_clflush_line_size)
clflushopt(sva);
sfence();
} else {
/*
* Writes are ordered by CLFLUSH on Intel CPUs.
*/
if (cpu_vendor_id != CPU_VENDOR_INTEL)
mfence();
for (; sva < eva; sva += cpu_clflush_line_size)
clflush(sva);
if (cpu_vendor_id != CPU_VENDOR_INTEL)
mfence();
}
}
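The flush loop above first rounds `sva` down to a cache-line boundary, then issues one CLFLUSH(OPT) per line until it passes `eva`. CLFLUSH itself cannot run in a portable user-space example, but the line-walking arithmetic can be sketched; the 64-byte line size here is an assumption, whereas the kernel reads `cpu_clflush_line_size` from CPUID:

```c
#include <stdint.h>

#define CLSIZE 64u   /* assumed cache-line size; kernel uses CPUID */

/* Round a virtual address down to its cache-line boundary, as the
 * "sva &= ~(cpu_clflush_line_size - 1)" step does. */
static uint32_t line_trunc(uint32_t va) {
    return va & ~(CLSIZE - 1);
}

/* Count the per-line flush iterations the loop would issue for
 * the half-open range [sva, eva). */
static uint32_t lines_to_flush(uint32_t sva, uint32_t eva) {
    uint32_t n = 0;
    for (uint32_t va = line_trunc(sva); va < eva; va += CLSIZE)
        n++;
    return n;
}
```

Rounding down means a range that starts mid-line still flushes the line containing its first byte.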
static void
pmap_invalidate_cache_range_all(vm_offset_t sva, vm_offset_t eva)
{
pmap_invalidate_cache_range_check_align(sva, eva);
pmap_invalidate_cache();
}
void
pmap_invalidate_cache_pages(vm_page_t *pages, int count)
{
int i;
if (count >= PMAP_CLFLUSH_THRESHOLD / PAGE_SIZE ||
(cpu_feature & CPUID_CLFSH) == 0) {
pmap_invalidate_cache();
} else {
for (i = 0; i < count; i++)
pmap_flush_page(pages[i]);
}
}
void
pmap_ksetrw(vm_offset_t va)
{
pmap_methods_ptr->pm_ksetrw(va);
}
void
pmap_remap_lower(bool enable)
{
pmap_methods_ptr->pm_remap_lower(enable);
}
void
pmap_remap_lowptdi(bool enable)
{
pmap_methods_ptr->pm_remap_lowptdi(enable);
}
void
pmap_align_superpage(vm_object_t object, vm_ooffset_t offset,
vm_offset_t *addr, vm_size_t size)
{
return (pmap_methods_ptr->pm_align_superpage(object, offset,
addr, size));
}
vm_offset_t
pmap_quick_enter_page(vm_page_t m)
{
return (pmap_methods_ptr->pm_quick_enter_page(m));
}
void
pmap_quick_remove_page(vm_offset_t addr)
{
return (pmap_methods_ptr->pm_quick_remove_page(addr));
}
void *
pmap_trm_alloc(size_t size, int flags)
{
return (pmap_methods_ptr->pm_trm_alloc(size, flags));
}
void
pmap_trm_free(void *addr, size_t size)
{
pmap_methods_ptr->pm_trm_free(addr, size);
}
void
pmap_sync_icache(pmap_t pm, vm_offset_t va, vm_size_t sz)
{
}
vm_offset_t
pmap_get_map_low(void)
{
return (pmap_methods_ptr->pm_get_map_low());
}
vm_offset_t
pmap_get_vm_maxuser_address(void)
{
return (pmap_methods_ptr->pm_get_vm_maxuser_address());
}
vm_paddr_t
pmap_kextract(vm_offset_t va)
{
return (pmap_methods_ptr->pm_kextract(va));
}
vm_paddr_t
pmap_pg_frame(vm_paddr_t pa)
{
return (pmap_methods_ptr->pm_pg_frame(pa));
}
void
pmap_sf_buf_map(struct sf_buf *sf)
{
pmap_methods_ptr->pm_sf_buf_map(sf);
}
void
pmap_cp_slow0_map(vm_offset_t kaddr, int plen, vm_page_t *ma)
{
pmap_methods_ptr->pm_cp_slow0_map(kaddr, plen, ma);
}
u_int
pmap_get_kcr3(void)
{
return (pmap_methods_ptr->pm_get_kcr3());
}
u_int
pmap_get_cr3(pmap_t pmap)
{
return (pmap_methods_ptr->pm_get_cr3(pmap));
}
caddr_t
pmap_cmap3(vm_paddr_t pa, u_int pte_flags)
{
return (pmap_methods_ptr->pm_cmap3(pa, pte_flags));
}
void
pmap_basemem_setup(u_int basemem)
{
pmap_methods_ptr->pm_basemem_setup(basemem);
}
void
pmap_set_nx(void)
{
pmap_methods_ptr->pm_set_nx();
}
void *
pmap_bios16_enter(void)
{
return (pmap_methods_ptr->pm_bios16_enter());
}
void
pmap_bios16_leave(void *handle)
{
pmap_methods_ptr->pm_bios16_leave(handle);
}
void
pmap_bootstrap(vm_paddr_t firstaddr)
{
pmap_methods_ptr->pm_bootstrap(firstaddr);
}
boolean_t
pmap_is_valid_memattr(pmap_t pmap, vm_memattr_t mode)
{
return (pmap_methods_ptr->pm_is_valid_memattr(pmap, mode));
}
int
pmap_cache_bits(pmap_t pmap, int mode, boolean_t is_pde)
{
return (pmap_methods_ptr->pm_cache_bits(pmap, mode, is_pde));
}
bool
pmap_ps_enabled(pmap_t pmap)
{
return (pmap_methods_ptr->pm_ps_enabled(pmap));
}
void
pmap_pinit0(pmap_t pmap)
{
pmap_methods_ptr->pm_pinit0(pmap);
}
int
pmap_pinit(pmap_t pmap)
{
return (pmap_methods_ptr->pm_pinit(pmap));
}
void
pmap_activate(struct thread *td)
{
pmap_methods_ptr->pm_activate(td);
}
void
pmap_activate_boot(pmap_t pmap)
{
pmap_methods_ptr->pm_activate_boot(pmap);
}
void
pmap_advise(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, int advice)
{
pmap_methods_ptr->pm_advise(pmap, sva, eva, advice);
}
void
pmap_clear_modify(vm_page_t m)
{
pmap_methods_ptr->pm_clear_modify(m);
}
int
pmap_change_attr(vm_offset_t va, vm_size_t size, int mode)
{
return (pmap_methods_ptr->pm_change_attr(va, size, mode));
}
int
pmap_mincore(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa)
{
return (pmap_methods_ptr->pm_mincore(pmap, addr, locked_pa));
}
void
pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr, vm_size_t len,
vm_offset_t src_addr)
{
pmap_methods_ptr->pm_copy(dst_pmap, src_pmap, dst_addr, len, src_addr);
}
void
pmap_copy_page(vm_page_t src, vm_page_t dst)
{
pmap_methods_ptr->pm_copy_page(src, dst);
}
void
pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[],
vm_offset_t b_offset, int xfersize)
{
pmap_methods_ptr->pm_copy_pages(ma, a_offset, mb, b_offset, xfersize);
}
void
pmap_zero_page(vm_page_t m)
{
pmap_methods_ptr->pm_zero_page(m);
}
void
pmap_zero_page_area(vm_page_t m, int off, int size)
{
pmap_methods_ptr->pm_zero_page_area(m, off, size);
}
int
pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot,
u_int flags, int8_t psind)
{
return (pmap_methods_ptr->pm_enter(pmap, va, m, prot, flags, psind));
}
void
pmap_enter_object(pmap_t pmap, vm_offset_t start, vm_offset_t end,
vm_page_t m_start, vm_prot_t prot)
{
pmap_methods_ptr->pm_enter_object(pmap, start, end, m_start, prot);
}
void
pmap_enter_quick(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot)
{
pmap_methods_ptr->pm_enter_quick(pmap, va, m, prot);
}
void *
pmap_kenter_temporary(vm_paddr_t pa, int i)
{
return (pmap_methods_ptr->pm_kenter_temporary(pa, i));
}
void
pmap_object_init_pt(pmap_t pmap, vm_offset_t addr, vm_object_t object,
vm_pindex_t pindex, vm_size_t size)
{
pmap_methods_ptr->pm_object_init_pt(pmap, addr, object, pindex, size);
}
void
pmap_unwire(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
pmap_methods_ptr->pm_unwire(pmap, sva, eva);
}
boolean_t
pmap_page_exists_quick(pmap_t pmap, vm_page_t m)
{
return (pmap_methods_ptr->pm_page_exists_quick(pmap, m));
}
int
pmap_page_wired_mappings(vm_page_t m)
{
return (pmap_methods_ptr->pm_page_wired_mappings(m));
}
boolean_t
pmap_page_is_mapped(vm_page_t m)
{
return (pmap_methods_ptr->pm_page_is_mapped(m));
}
void
pmap_remove_pages(pmap_t pmap)
{
pmap_methods_ptr->pm_remove_pages(pmap);
}
boolean_t
pmap_is_modified(vm_page_t m)
{
return (pmap_methods_ptr->pm_is_modified(m));
}
boolean_t
pmap_is_prefaultable(pmap_t pmap, vm_offset_t addr)
{
return (pmap_methods_ptr->pm_is_prefaultable(pmap, addr));
}
boolean_t
pmap_is_referenced(vm_page_t m)
{
return (pmap_methods_ptr->pm_is_referenced(m));
}
void
pmap_remove_write(vm_page_t m)
{
pmap_methods_ptr->pm_remove_write(m);
}
int
pmap_ts_referenced(vm_page_t m)
{
return (pmap_methods_ptr->pm_ts_referenced(m));
}
void *
pmap_mapdev_attr(vm_paddr_t pa, vm_size_t size, int mode)
{
return (pmap_methods_ptr->pm_mapdev_attr(pa, size, mode));
}
void *
pmap_mapdev(vm_paddr_t pa, vm_size_t size)
{
return (pmap_methods_ptr->pm_mapdev_attr(pa, size, PAT_UNCACHEABLE));
}
void *
pmap_mapbios(vm_paddr_t pa, vm_size_t size)
{
return (pmap_methods_ptr->pm_mapdev_attr(pa, size, PAT_WRITE_BACK));
}
void
pmap_unmapdev(vm_offset_t va, vm_size_t size)
{
pmap_methods_ptr->pm_unmapdev(va, size);
}
void
pmap_page_set_memattr(vm_page_t m, vm_memattr_t ma)
{
pmap_methods_ptr->pm_page_set_memattr(m, ma);
}
vm_paddr_t
pmap_extract(pmap_t pmap, vm_offset_t va)
{
return (pmap_methods_ptr->pm_extract(pmap, va));
}
vm_page_t
pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
{
return (pmap_methods_ptr->pm_extract_and_hold(pmap, va, prot));
}
vm_offset_t
pmap_map(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end, int prot)
{
return (pmap_methods_ptr->pm_map(virt, start, end, prot));
}
void
pmap_qenter(vm_offset_t sva, vm_page_t *ma, int count)
{
pmap_methods_ptr->pm_qenter(sva, ma, count);
}
void
pmap_qremove(vm_offset_t sva, int count)
{
pmap_methods_ptr->pm_qremove(sva, count);
}
void
pmap_release(pmap_t pmap)
{
pmap_methods_ptr->pm_release(pmap);
}
void
pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
pmap_methods_ptr->pm_remove(pmap, sva, eva);
}
void
pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot)
{
pmap_methods_ptr->pm_protect(pmap, sva, eva, prot);
}
void
pmap_remove_all(vm_page_t m)
{
pmap_methods_ptr->pm_remove_all(m);
}
void
pmap_init(void)
{
pmap_methods_ptr->pm_init();
}
void
pmap_init_pat(void)
{
pmap_methods_ptr->pm_init_pat();
}
void
pmap_growkernel(vm_offset_t addr)
{
pmap_methods_ptr->pm_growkernel(addr);
}
void
pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
{
pmap_methods_ptr->pm_invalidate_page(pmap, va);
}
void
pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
{
pmap_methods_ptr->pm_invalidate_range(pmap, sva, eva);
}
void
pmap_invalidate_all(pmap_t pmap)
{
pmap_methods_ptr->pm_invalidate_all(pmap);
}
void
pmap_invalidate_cache(void)
{
pmap_methods_ptr->pm_invalidate_cache();
}
void
pmap_kenter(vm_offset_t va, vm_paddr_t pa)
{
pmap_methods_ptr->pm_kenter(va, pa);
}
void
pmap_kremove(vm_offset_t va)
{
pmap_methods_ptr->pm_kremove(va);
}
extern struct pmap_methods pmap_pae_methods, pmap_nopae_methods;
int pae_mode;
-SYSCTL_INT(_vm_pmap, OID_AUTO, pae_mode, CTLFLAG_RD,
- &pae_mode, 1,
+SYSCTL_INT(_vm_pmap, OID_AUTO, pae_mode, CTLFLAG_RDTUN | CTLFLAG_NOFETCH,
+ &pae_mode, 0,
"PAE");
void
pmap_cold(void)
{
- if ((cpu_feature & CPUID_PAE) != 0) {
- pae_mode = 1;
+ init_static_kenv((char *)bootinfo.bi_envp, 0);
+ pae_mode = (cpu_feature & CPUID_PAE) != 0;
+ if (pae_mode)
+ TUNABLE_INT_FETCH("vm.pmap.pae_mode", &pae_mode);
+ if (pae_mode) {
pmap_methods_ptr = &pmap_pae_methods;
pmap_pae_cold();
} else {
pmap_methods_ptr = &pmap_nopae_methods;
pmap_nopae_cold();
}
}
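The revised `pmap_cold` above gates PAE on hardware support first, then lets the `vm.pmap.pae_mode` loader tunable veto it via `TUNABLE_INT_FETCH`. A hedged user-space analogue of that two-stage selection, with `getenv`-style input standing in for the loader environment (the helper name and knob handling are illustrative, not kernel API):

```c
#include <stdlib.h>
#include <stdbool.h>

/* Decide whether to use the PAE pmap: the CPU capability gates the
 * feature, and an optional override string (mimicking the tunable)
 * may disable it but can never enable unsupported hardware. */
static bool choose_pae(bool cpu_has_pae, const char *override) {
    bool pae = cpu_has_pae;
    if (pae && override != NULL)
        pae = atoi(override) != 0;
    return pae;
}
```

Note the ordering matters: the tunable is consulted only when the CPU reports PAE, so setting the knob to 1 on non-PAE hardware has no effect, matching the kernel code's `if (pae_mode)` guard before the fetch.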
Index: projects/clang800-import/sys/kern/imgact_elf.c
===================================================================
--- projects/clang800-import/sys/kern/imgact_elf.c (revision 343955)
+++ projects/clang800-import/sys/kern/imgact_elf.c (revision 343956)
@@ -1,2541 +1,2537 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2017 Dell EMC
* Copyright (c) 2000-2001, 2003 David O'Brien
* Copyright (c) 1995-1996 Søren Schmidt
* Copyright (c) 1996 Peter Wemm
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer
* in this position and unchanged.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_capsicum.h"
#include <sys/param.h>
#include <sys/capsicum.h>
#include <sys/compressor.h>
#include <sys/exec.h>
#include <sys/fcntl.h>
#include <sys/imgact.h>
#include <sys/imgact_elf.h>
#include <sys/jail.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mount.h>
#include <sys/mman.h>
#include <sys/namei.h>
#include <sys/pioctl.h>
#include <sys/proc.h>
#include <sys/procfs.h>
#include <sys/ptrace.h>
#include <sys/racct.h>
#include <sys/resourcevar.h>
#include <sys/rwlock.h>
#include <sys/sbuf.h>
#include <sys/sf_buf.h>
#include <sys/smp.h>
#include <sys/systm.h>
#include <sys/signalvar.h>
#include <sys/stat.h>
#include <sys/sx.h>
#include <sys/syscall.h>
#include <sys/sysctl.h>
#include <sys/sysent.h>
#include <sys/vnode.h>
#include <sys/syslog.h>
#include <sys/eventhandler.h>
#include <sys/user.h>
#include <vm/vm.h>
#include <vm/vm_kern.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_extern.h>
#include <machine/elf.h>
#include <machine/md_var.h>
#define ELF_NOTE_ROUNDSIZE 4
#define OLD_EI_BRAND 8
static int __elfN(check_header)(const Elf_Ehdr *hdr);
static Elf_Brandinfo *__elfN(get_brandinfo)(struct image_params *imgp,
const char *interp, int interp_name_len, int32_t *osrel, uint32_t *fctl0);
static int __elfN(load_file)(struct proc *p, const char *file, u_long *addr,
u_long *entry, size_t pagesize);
static int __elfN(load_section)(struct image_params *imgp, vm_ooffset_t offset,
caddr_t vmaddr, size_t memsz, size_t filsz, vm_prot_t prot,
size_t pagesize);
static int __CONCAT(exec_, __elfN(imgact))(struct image_params *imgp);
static bool __elfN(freebsd_trans_osrel)(const Elf_Note *note,
int32_t *osrel);
static bool kfreebsd_trans_osrel(const Elf_Note *note, int32_t *osrel);
static boolean_t __elfN(check_note)(struct image_params *imgp,
Elf_Brandnote *checknote, int32_t *osrel, uint32_t *fctl0);
static vm_prot_t __elfN(trans_prot)(Elf_Word);
static Elf_Word __elfN(untrans_prot)(vm_prot_t);
SYSCTL_NODE(_kern, OID_AUTO, __CONCAT(elf, __ELF_WORD_SIZE), CTLFLAG_RW, 0,
"");
#define CORE_BUF_SIZE (16 * 1024)
int __elfN(fallback_brand) = -1;
SYSCTL_INT(__CONCAT(_kern_elf, __ELF_WORD_SIZE), OID_AUTO,
fallback_brand, CTLFLAG_RWTUN, &__elfN(fallback_brand), 0,
__XSTRING(__CONCAT(ELF, __ELF_WORD_SIZE)) " brand of last resort");
static int elf_legacy_coredump = 0;
SYSCTL_INT(_debug, OID_AUTO, __elfN(legacy_coredump), CTLFLAG_RW,
&elf_legacy_coredump, 0,
"include all and only RW pages in core dumps");
int __elfN(nxstack) =
#if defined(__amd64__) || defined(__powerpc64__) /* both 64 and 32 bit */ || \
(defined(__arm__) && __ARM_ARCH >= 7) || defined(__aarch64__) || \
defined(__riscv)
1;
#else
0;
#endif
SYSCTL_INT(__CONCAT(_kern_elf, __ELF_WORD_SIZE), OID_AUTO,
nxstack, CTLFLAG_RW, &__elfN(nxstack), 0,
__XSTRING(__CONCAT(ELF, __ELF_WORD_SIZE)) ": enable non-executable stack");
-#if __ELF_WORD_SIZE == 32
-#if defined(__amd64__)
+#if __ELF_WORD_SIZE == 32 && (defined(__amd64__) || defined(__i386__))
int i386_read_exec = 0;
SYSCTL_INT(_kern_elf32, OID_AUTO, read_exec, CTLFLAG_RW, &i386_read_exec, 0,
"enable execution from readable segments");
#endif
-#endif
static Elf_Brandinfo *elf_brand_list[MAX_BRANDS];
#define trunc_page_ps(va, ps) rounddown2(va, ps)
#define round_page_ps(va, ps) roundup2(va, ps)
#define aligned(a, t) (trunc_page_ps((u_long)(a), sizeof(t)) == (u_long)(a))
static const char FREEBSD_ABI_VENDOR[] = "FreeBSD";
Elf_Brandnote __elfN(freebsd_brandnote) = {
.hdr.n_namesz = sizeof(FREEBSD_ABI_VENDOR),
.hdr.n_descsz = sizeof(int32_t),
.hdr.n_type = NT_FREEBSD_ABI_TAG,
.vendor = FREEBSD_ABI_VENDOR,
.flags = BN_TRANSLATE_OSREL,
.trans_osrel = __elfN(freebsd_trans_osrel)
};
static bool
__elfN(freebsd_trans_osrel)(const Elf_Note *note, int32_t *osrel)
{
uintptr_t p;
p = (uintptr_t)(note + 1);
p += roundup2(note->n_namesz, ELF_NOTE_ROUNDSIZE);
*osrel = *(const int32_t *)(p);
return (true);
}
static const char GNU_ABI_VENDOR[] = "GNU";
static int GNU_KFREEBSD_ABI_DESC = 3;
Elf_Brandnote __elfN(kfreebsd_brandnote) = {
.hdr.n_namesz = sizeof(GNU_ABI_VENDOR),
.hdr.n_descsz = 16, /* XXX at least 16 */
.hdr.n_type = 1,
.vendor = GNU_ABI_VENDOR,
.flags = BN_TRANSLATE_OSREL,
.trans_osrel = kfreebsd_trans_osrel
};
static bool
kfreebsd_trans_osrel(const Elf_Note *note, int32_t *osrel)
{
const Elf32_Word *desc;
uintptr_t p;
p = (uintptr_t)(note + 1);
p += roundup2(note->n_namesz, ELF_NOTE_ROUNDSIZE);
desc = (const Elf32_Word *)p;
if (desc[0] != GNU_KFREEBSD_ABI_DESC)
return (false);
/*
* Debian GNU/kFreeBSD embeds the earliest compatible kernel version
* (__FreeBSD_version: <major><two digit minor>Rxx) in the LSB way.
*/
*osrel = desc[1] * 100000 + desc[2] * 1000 + desc[3];
return (true);
}
int
__elfN(insert_brand_entry)(Elf_Brandinfo *entry)
{
int i;
for (i = 0; i < MAX_BRANDS; i++) {
if (elf_brand_list[i] == NULL) {
elf_brand_list[i] = entry;
break;
}
}
if (i == MAX_BRANDS) {
printf("WARNING: %s: could not insert brandinfo entry: %p\n",
__func__, entry);
return (-1);
}
return (0);
}
int
__elfN(remove_brand_entry)(Elf_Brandinfo *entry)
{
int i;
for (i = 0; i < MAX_BRANDS; i++) {
if (elf_brand_list[i] == entry) {
elf_brand_list[i] = NULL;
break;
}
}
if (i == MAX_BRANDS)
return (-1);
return (0);
}
int
__elfN(brand_inuse)(Elf_Brandinfo *entry)
{
struct proc *p;
int rval = FALSE;
sx_slock(&allproc_lock);
FOREACH_PROC_IN_SYSTEM(p) {
if (p->p_sysent == entry->sysvec) {
rval = TRUE;
break;
}
}
sx_sunlock(&allproc_lock);
return (rval);
}
static Elf_Brandinfo *
__elfN(get_brandinfo)(struct image_params *imgp, const char *interp,
int interp_name_len, int32_t *osrel, uint32_t *fctl0)
{
const Elf_Ehdr *hdr = (const Elf_Ehdr *)imgp->image_header;
Elf_Brandinfo *bi, *bi_m;
boolean_t ret;
int i;
/*
* We support four types of branding -- (1) the ELF EI_OSABI field
* that SCO added to the ELF spec, (2) FreeBSD 3.x's traditional string
* branding w/in the ELF header, (3) path of the `interp_path'
* field, and (4) the ".note.ABI-tag" ELF section.
*/
/* Look for an ".note.ABI-tag" ELF section */
bi_m = NULL;
for (i = 0; i < MAX_BRANDS; i++) {
bi = elf_brand_list[i];
if (bi == NULL)
continue;
if (interp != NULL && (bi->flags & BI_BRAND_ONLY_STATIC) != 0)
continue;
if (hdr->e_machine == bi->machine && (bi->flags &
(BI_BRAND_NOTE|BI_BRAND_NOTE_MANDATORY)) != 0) {
ret = __elfN(check_note)(imgp, bi->brand_note, osrel,
fctl0);
/* Give brand a chance to veto check_note's guess */
if (ret && bi->header_supported)
ret = bi->header_supported(imgp);
/*
* If note checker claimed the binary, but the
* interpreter path in the image does not
* match default one for the brand, try to
* search for other brands with the same
* interpreter. Either there is better brand
* with the right interpreter, or, failing
* this, we return first brand which accepted
* our note and, optionally, header.
*/
if (ret && bi_m == NULL && interp != NULL &&
(bi->interp_path == NULL ||
(strlen(bi->interp_path) + 1 != interp_name_len ||
strncmp(interp, bi->interp_path, interp_name_len)
!= 0))) {
bi_m = bi;
ret = 0;
}
if (ret)
return (bi);
}
}
if (bi_m != NULL)
return (bi_m);
/* If the executable has a brand, search for it in the brand list. */
for (i = 0; i < MAX_BRANDS; i++) {
bi = elf_brand_list[i];
if (bi == NULL || (bi->flags & BI_BRAND_NOTE_MANDATORY) != 0 ||
(interp != NULL && (bi->flags & BI_BRAND_ONLY_STATIC) != 0))
continue;
if (hdr->e_machine == bi->machine &&
(hdr->e_ident[EI_OSABI] == bi->brand ||
(bi->compat_3_brand != NULL &&
strcmp((const char *)&hdr->e_ident[OLD_EI_BRAND],
bi->compat_3_brand) == 0))) {
/* Looks good, but give brand a chance to veto */
if (bi->header_supported == NULL ||
bi->header_supported(imgp)) {
/*
* Again, prefer strictly matching
* interpreter path.
*/
if (interp_name_len == 0 &&
bi->interp_path == NULL)
return (bi);
if (bi->interp_path != NULL &&
strlen(bi->interp_path) + 1 ==
interp_name_len && strncmp(interp,
bi->interp_path, interp_name_len) == 0)
return (bi);
if (bi_m == NULL)
bi_m = bi;
}
}
}
if (bi_m != NULL)
return (bi_m);
/* No known brand, see if the header is recognized by any brand */
for (i = 0; i < MAX_BRANDS; i++) {
bi = elf_brand_list[i];
if (bi == NULL || bi->flags & BI_BRAND_NOTE_MANDATORY ||
bi->header_supported == NULL)
continue;
if (hdr->e_machine == bi->machine) {
ret = bi->header_supported(imgp);
if (ret)
return (bi);
}
}
/* Lacking a known brand, search for a recognized interpreter. */
if (interp != NULL) {
for (i = 0; i < MAX_BRANDS; i++) {
bi = elf_brand_list[i];
if (bi == NULL || (bi->flags &
(BI_BRAND_NOTE_MANDATORY | BI_BRAND_ONLY_STATIC))
!= 0)
continue;
if (hdr->e_machine == bi->machine &&
bi->interp_path != NULL &&
/* ELF image p_filesz includes terminating zero */
strlen(bi->interp_path) + 1 == interp_name_len &&
strncmp(interp, bi->interp_path, interp_name_len)
== 0 && (bi->header_supported == NULL ||
bi->header_supported(imgp)))
return (bi);
}
}
/* Lacking a recognized interpreter, try the default brand */
for (i = 0; i < MAX_BRANDS; i++) {
bi = elf_brand_list[i];
if (bi == NULL || (bi->flags & BI_BRAND_NOTE_MANDATORY) != 0 ||
(interp != NULL && (bi->flags & BI_BRAND_ONLY_STATIC) != 0))
continue;
if (hdr->e_machine == bi->machine &&
__elfN(fallback_brand) == bi->brand &&
(bi->header_supported == NULL ||
bi->header_supported(imgp)))
return (bi);
}
return (NULL);
}
static int
__elfN(check_header)(const Elf_Ehdr *hdr)
{
Elf_Brandinfo *bi;
int i;
if (!IS_ELF(*hdr) ||
hdr->e_ident[EI_CLASS] != ELF_TARG_CLASS ||
hdr->e_ident[EI_DATA] != ELF_TARG_DATA ||
hdr->e_ident[EI_VERSION] != EV_CURRENT ||
hdr->e_phentsize != sizeof(Elf_Phdr) ||
hdr->e_version != ELF_TARG_VER)
return (ENOEXEC);
/*
* Make sure we have at least one brand for this machine.
*/
for (i = 0; i < MAX_BRANDS; i++) {
bi = elf_brand_list[i];
if (bi != NULL && bi->machine == hdr->e_machine)
break;
}
if (i == MAX_BRANDS)
return (ENOEXEC);
return (0);
}
static int
__elfN(map_partial)(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
vm_offset_t start, vm_offset_t end, vm_prot_t prot)
{
struct sf_buf *sf;
int error;
vm_offset_t off;
/*
* Create the page if it doesn't exist yet. Ignore errors.
*/
vm_map_fixed(map, NULL, 0, trunc_page(start), round_page(end) -
trunc_page(start), VM_PROT_ALL, VM_PROT_ALL, MAP_CHECK_EXCL);
/*
* Find the page from the underlying object.
*/
if (object != NULL) {
sf = vm_imgact_map_page(object, offset);
if (sf == NULL)
return (KERN_FAILURE);
off = offset - trunc_page(offset);
error = copyout((caddr_t)sf_buf_kva(sf) + off, (caddr_t)start,
end - start);
vm_imgact_unmap_page(sf);
if (error != 0)
return (KERN_FAILURE);
}
return (KERN_SUCCESS);
}
static int
__elfN(map_insert)(struct image_params *imgp, vm_map_t map, vm_object_t object,
vm_ooffset_t offset, vm_offset_t start, vm_offset_t end, vm_prot_t prot,
int cow)
{
struct sf_buf *sf;
vm_offset_t off;
vm_size_t sz;
int error, locked, rv;
if (start != trunc_page(start)) {
rv = __elfN(map_partial)(map, object, offset, start,
round_page(start), prot);
if (rv != KERN_SUCCESS)
return (rv);
offset += round_page(start) - start;
start = round_page(start);
}
if (end != round_page(end)) {
rv = __elfN(map_partial)(map, object, offset +
trunc_page(end) - start, trunc_page(end), end, prot);
if (rv != KERN_SUCCESS)
return (rv);
end = trunc_page(end);
}
if (start >= end)
return (KERN_SUCCESS);
if ((offset & PAGE_MASK) != 0) {
/*
* The mapping is not page aligned. This means that we have
* to copy the data.
*/
rv = vm_map_fixed(map, NULL, 0, start, end - start,
prot | VM_PROT_WRITE, VM_PROT_ALL, MAP_CHECK_EXCL);
if (rv != KERN_SUCCESS)
return (rv);
if (object == NULL)
return (KERN_SUCCESS);
for (; start < end; start += sz) {
sf = vm_imgact_map_page(object, offset);
if (sf == NULL)
return (KERN_FAILURE);
off = offset - trunc_page(offset);
sz = end - start;
if (sz > PAGE_SIZE - off)
sz = PAGE_SIZE - off;
error = copyout((caddr_t)sf_buf_kva(sf) + off,
(caddr_t)start, sz);
vm_imgact_unmap_page(sf);
if (error != 0)
return (KERN_FAILURE);
offset += sz;
}
} else {
vm_object_reference(object);
rv = vm_map_fixed(map, object, offset, start, end - start,
prot, VM_PROT_ALL, cow | MAP_CHECK_EXCL);
if (rv != KERN_SUCCESS) {
locked = VOP_ISLOCKED(imgp->vp);
VOP_UNLOCK(imgp->vp, 0);
vm_object_deallocate(object);
vn_lock(imgp->vp, locked | LK_RETRY);
return (rv);
}
}
return (KERN_SUCCESS);
}
static int
__elfN(load_section)(struct image_params *imgp, vm_ooffset_t offset,
caddr_t vmaddr, size_t memsz, size_t filsz, vm_prot_t prot,
size_t pagesize)
{
struct sf_buf *sf;
size_t map_len;
vm_map_t map;
vm_object_t object;
vm_offset_t off, map_addr;
int error, rv, cow;
size_t copy_len;
vm_ooffset_t file_addr;
/*
* It's necessary to fail if the filsz + offset taken from the
* header is greater than the actual file pager object's size.
* If we were to allow this, then the vm_map_find() below would
* walk right off the end of the file object and into the ether.
*
* While I'm here, might as well check for something else that
* is invalid: filsz cannot be greater than memsz.
*/
if ((filsz != 0 && (off_t)filsz + offset > imgp->attr->va_size) ||
filsz > memsz) {
uprintf("elf_load_section: truncated ELF file\n");
return (ENOEXEC);
}
object = imgp->object;
map = &imgp->proc->p_vmspace->vm_map;
map_addr = trunc_page_ps((vm_offset_t)vmaddr, pagesize);
file_addr = trunc_page_ps(offset, pagesize);
/*
* We have two choices. We can either clear the data in the last page
* of an oversized mapping, or we can start the anon mapping a page
* early and copy the initialized data into that first page. We
* choose the second.
*/
if (filsz == 0)
map_len = 0;
else if (memsz > filsz)
map_len = trunc_page_ps(offset + filsz, pagesize) - file_addr;
else
map_len = round_page_ps(offset + filsz, pagesize) - file_addr;
if (map_len != 0) {
/* cow flags: don't dump readonly sections in core */
cow = MAP_COPY_ON_WRITE | MAP_PREFAULT |
(prot & VM_PROT_WRITE ? 0 : MAP_DISABLE_COREDUMP);
rv = __elfN(map_insert)(imgp, map,
object,
file_addr, /* file offset */
map_addr, /* virtual start */
map_addr + map_len,/* virtual end */
prot,
cow);
if (rv != KERN_SUCCESS)
return (EINVAL);
/* we can stop now if we've covered it all */
if (memsz == filsz)
return (0);
}
/*
* We have to get the remaining bit of the file into the first part
* of the oversized map segment. This is normally because the .data
* segment in the file is extended to provide bss. It's a neat idea
* to try and save a page, but it's a pain in the behind to implement.
*/
copy_len = filsz == 0 ? 0 : (offset + filsz) - trunc_page_ps(offset +
filsz, pagesize);
map_addr = trunc_page_ps((vm_offset_t)vmaddr + filsz, pagesize);
map_len = round_page_ps((vm_offset_t)vmaddr + memsz, pagesize) -
map_addr;
/* This had damn well better be true! */
if (map_len != 0) {
rv = __elfN(map_insert)(imgp, map, NULL, 0, map_addr,
map_addr + map_len, prot, 0);
if (rv != KERN_SUCCESS)
return (EINVAL);
}
if (copy_len != 0) {
sf = vm_imgact_map_page(object, offset + filsz);
if (sf == NULL)
return (EIO);
/* send the page fragment to user space */
off = trunc_page_ps(offset + filsz, pagesize) -
trunc_page(offset + filsz);
error = copyout((caddr_t)sf_buf_kva(sf) + off,
(caddr_t)map_addr, copy_len);
vm_imgact_unmap_page(sf);
if (error != 0)
return (error);
}
/*
* Remove write access to the page if it was only granted by map_insert
* to allow copyout.
*/
if ((prot & VM_PROT_WRITE) == 0)
vm_map_protect(map, trunc_page(map_addr), round_page(map_addr +
map_len), prot, FALSE);
return (0);
}
/*
* Load the file "file" into memory. It may be either a shared object
* or an executable.
*
* The "addr" reference parameter is in/out. On entry, it specifies
* the address where a shared object should be loaded. If the file is
* an executable, this value is ignored. On exit, "addr" specifies
* where the file was actually loaded.
*
* The "entry" reference parameter is out only. On exit, it specifies
* the entry point for the loaded file.
*/
static int
__elfN(load_file)(struct proc *p, const char *file, u_long *addr,
u_long *entry, size_t pagesize)
{
struct {
struct nameidata nd;
struct vattr attr;
struct image_params image_params;
} *tempdata;
const Elf_Ehdr *hdr = NULL;
const Elf_Phdr *phdr = NULL;
struct nameidata *nd;
struct vattr *attr;
struct image_params *imgp;
vm_prot_t prot;
u_long rbase;
u_long base_addr = 0;
int error, i, numsegs;
#ifdef CAPABILITY_MODE
/*
* XXXJA: This check can go away once we are sufficiently confident
* that the checks in namei() are correct.
*/
if (IN_CAPABILITY_MODE(curthread))
return (ECAPMODE);
#endif
tempdata = malloc(sizeof(*tempdata), M_TEMP, M_WAITOK);
nd = &tempdata->nd;
attr = &tempdata->attr;
imgp = &tempdata->image_params;
/*
* Initialize part of the common data
*/
imgp->proc = p;
imgp->attr = attr;
imgp->firstpage = NULL;
imgp->image_header = NULL;
imgp->object = NULL;
imgp->execlabel = NULL;
NDINIT(nd, LOOKUP, LOCKLEAF | FOLLOW, UIO_SYSSPACE, file, curthread);
if ((error = namei(nd)) != 0) {
nd->ni_vp = NULL;
goto fail;
}
NDFREE(nd, NDF_ONLY_PNBUF);
imgp->vp = nd->ni_vp;
/*
* Check permissions, modes, uid, etc on the file, and "open" it.
*/
error = exec_check_permissions(imgp);
if (error)
goto fail;
error = exec_map_first_page(imgp);
if (error)
goto fail;
/*
* Also make certain that the interpreter stays the same, so set
* its VV_TEXT flag, too.
*/
VOP_SET_TEXT(nd->ni_vp);
imgp->object = nd->ni_vp->v_object;
hdr = (const Elf_Ehdr *)imgp->image_header;
if ((error = __elfN(check_header)(hdr)) != 0)
goto fail;
if (hdr->e_type == ET_DYN)
rbase = *addr;
else if (hdr->e_type == ET_EXEC)
rbase = 0;
else {
error = ENOEXEC;
goto fail;
}
/* Only support headers that fit within first page for now */
if ((hdr->e_phoff > PAGE_SIZE) ||
(u_int)hdr->e_phentsize * hdr->e_phnum > PAGE_SIZE - hdr->e_phoff) {
error = ENOEXEC;
goto fail;
}
phdr = (const Elf_Phdr *)(imgp->image_header + hdr->e_phoff);
if (!aligned(phdr, Elf_Addr)) {
error = ENOEXEC;
goto fail;
}
for (i = 0, numsegs = 0; i < hdr->e_phnum; i++) {
if (phdr[i].p_type == PT_LOAD && phdr[i].p_memsz != 0) {
/* Loadable segment */
prot = __elfN(trans_prot)(phdr[i].p_flags);
error = __elfN(load_section)(imgp, phdr[i].p_offset,
(caddr_t)(uintptr_t)phdr[i].p_vaddr + rbase,
phdr[i].p_memsz, phdr[i].p_filesz, prot, pagesize);
if (error != 0)
goto fail;
/*
* Establish the base address if this is the
* first segment.
*/
if (numsegs == 0)
base_addr = trunc_page(phdr[i].p_vaddr +
rbase);
numsegs++;
}
}
*addr = base_addr;
*entry = (unsigned long)hdr->e_entry + rbase;
fail:
if (imgp->firstpage)
exec_unmap_first_page(imgp);
if (nd->ni_vp)
vput(nd->ni_vp);
free(tempdata, M_TEMP);
return (error);
}
static int
__CONCAT(exec_, __elfN(imgact))(struct image_params *imgp)
{
struct thread *td;
const Elf_Ehdr *hdr;
const Elf_Phdr *phdr;
Elf_Auxargs *elf_auxargs;
struct vmspace *vmspace;
const char *err_str, *newinterp;
char *interp, *interp_buf, *path;
Elf_Brandinfo *brand_info;
struct sysentvec *sv;
vm_prot_t prot;
u_long text_size, data_size, total_size, text_addr, data_addr;
u_long seg_size, seg_addr, addr, baddr, et_dyn_addr, entry, proghdr;
uint32_t fctl0;
int32_t osrel;
int error, i, n, interp_name_len, have_interp;
hdr = (const Elf_Ehdr *)imgp->image_header;
/*
* Do we have a valid ELF header ?
*
* Only allow ET_EXEC & ET_DYN here, reject ET_DYN later
* if a particular brand doesn't support it.
*/
if (__elfN(check_header)(hdr) != 0 ||
(hdr->e_type != ET_EXEC && hdr->e_type != ET_DYN))
return (-1);
/*
* From here on down, we return an errno, not -1, as we've
* detected an ELF file.
*/
if ((hdr->e_phoff > PAGE_SIZE) ||
(u_int)hdr->e_phentsize * hdr->e_phnum > PAGE_SIZE - hdr->e_phoff) {
/* Only support headers in first page for now */
uprintf("Program headers not in the first page\n");
return (ENOEXEC);
}
phdr = (const Elf_Phdr *)(imgp->image_header + hdr->e_phoff);
if (!aligned(phdr, Elf_Addr)) {
uprintf("Unaligned program headers\n");
return (ENOEXEC);
}
n = error = 0;
baddr = 0;
osrel = 0;
fctl0 = 0;
text_size = data_size = total_size = text_addr = data_addr = 0;
entry = proghdr = 0;
interp_name_len = 0;
err_str = newinterp = NULL;
interp = interp_buf = NULL;
td = curthread;
for (i = 0; i < hdr->e_phnum; i++) {
switch (phdr[i].p_type) {
case PT_LOAD:
if (n == 0)
baddr = phdr[i].p_vaddr;
n++;
break;
case PT_INTERP:
/* Path to interpreter */
if (phdr[i].p_filesz < 2 ||
phdr[i].p_filesz > MAXPATHLEN) {
uprintf("Invalid PT_INTERP\n");
error = ENOEXEC;
goto ret;
}
if (interp != NULL) {
uprintf("Multiple PT_INTERP headers\n");
error = ENOEXEC;
goto ret;
}
interp_name_len = phdr[i].p_filesz;
if (phdr[i].p_offset > PAGE_SIZE ||
interp_name_len > PAGE_SIZE - phdr[i].p_offset) {
VOP_UNLOCK(imgp->vp, 0);
interp_buf = malloc(interp_name_len + 1, M_TEMP,
M_WAITOK);
vn_lock(imgp->vp, LK_EXCLUSIVE | LK_RETRY);
error = vn_rdwr(UIO_READ, imgp->vp, interp_buf,
interp_name_len, phdr[i].p_offset,
UIO_SYSSPACE, IO_NODELOCKED, td->td_ucred,
NOCRED, NULL, td);
if (error != 0) {
uprintf("i/o error PT_INTERP %d\n",
error);
goto ret;
}
interp_buf[interp_name_len] = '\0';
interp = interp_buf;
} else {
interp = __DECONST(char *, imgp->image_header) +
phdr[i].p_offset;
if (interp[interp_name_len - 1] != '\0') {
uprintf("Invalid PT_INTERP\n");
error = ENOEXEC;
goto ret;
}
}
break;
case PT_GNU_STACK:
if (__elfN(nxstack))
imgp->stack_prot =
__elfN(trans_prot)(phdr[i].p_flags);
imgp->stack_sz = phdr[i].p_memsz;
break;
}
}
brand_info = __elfN(get_brandinfo)(imgp, interp, interp_name_len,
&osrel, &fctl0);
if (brand_info == NULL) {
uprintf("ELF binary type \"%u\" not known.\n",
hdr->e_ident[EI_OSABI]);
error = ENOEXEC;
goto ret;
}
et_dyn_addr = 0;
if (hdr->e_type == ET_DYN) {
if ((brand_info->flags & BI_CAN_EXEC_DYN) == 0) {
uprintf("Cannot execute shared object\n");
error = ENOEXEC;
goto ret;
}
/*
* Honour the base load address from the dso if it is
* non-zero for some reason.
*/
if (baddr == 0)
et_dyn_addr = ET_DYN_LOAD_ADDR;
}
sv = brand_info->sysvec;
if (interp != NULL && brand_info->interp_newpath != NULL)
newinterp = brand_info->interp_newpath;
/*
* Avoid a possible deadlock if the current address space is destroyed
* and that address space maps the locked vnode. In the common case,
* the locked vnode's v_usecount is decremented but remains greater
* than zero. Consequently, the vnode lock is not needed by vrele().
* However, in cases where the vnode lock is external, such as nullfs,
* v_usecount may become zero.
*
* The VV_TEXT flag prevents modifications to the executable while
* the vnode is unlocked.
*/
VOP_UNLOCK(imgp->vp, 0);
error = exec_new_vmspace(imgp, sv);
imgp->proc->p_sysent = sv;
vn_lock(imgp->vp, LK_EXCLUSIVE | LK_RETRY);
if (error != 0)
goto ret;
for (i = 0; i < hdr->e_phnum; i++) {
switch (phdr[i].p_type) {
case PT_LOAD: /* Loadable segment */
if (phdr[i].p_memsz == 0)
break;
prot = __elfN(trans_prot)(phdr[i].p_flags);
error = __elfN(load_section)(imgp, phdr[i].p_offset,
(caddr_t)(uintptr_t)phdr[i].p_vaddr + et_dyn_addr,
phdr[i].p_memsz, phdr[i].p_filesz, prot,
sv->sv_pagesize);
if (error != 0)
goto ret;
/*
* If this segment contains the program headers,
* remember their virtual address for the AT_PHDR
* aux entry. Static binaries don't usually include
* a PT_PHDR entry.
*/
if (phdr[i].p_offset == 0 &&
hdr->e_phoff + hdr->e_phnum * hdr->e_phentsize
<= phdr[i].p_filesz)
proghdr = phdr[i].p_vaddr + hdr->e_phoff +
et_dyn_addr;
seg_addr = trunc_page(phdr[i].p_vaddr + et_dyn_addr);
seg_size = round_page(phdr[i].p_memsz +
phdr[i].p_vaddr + et_dyn_addr - seg_addr);
/*
* Make the largest executable segment the official
* text segment and all others data.
*
* Note that obreak() assumes that data_addr +
* data_size == end of data load area, and the ELF
* file format expects segments to be sorted by
* address. If multiple data segments exist, the
* last one will be used.
*/
if (phdr[i].p_flags & PF_X && text_size < seg_size) {
text_size = seg_size;
text_addr = seg_addr;
} else {
data_size = seg_size;
data_addr = seg_addr;
}
total_size += seg_size;
break;
case PT_PHDR: /* Program header table info */
proghdr = phdr[i].p_vaddr + et_dyn_addr;
break;
default:
break;
}
}
if (data_addr == 0 && data_size == 0) {
data_addr = text_addr;
data_size = text_size;
}
entry = (u_long)hdr->e_entry + et_dyn_addr;
/*
* Check limits. It should be safe to check the
* limits after loading the segments since we do
* not actually fault in all the segments pages.
*/
PROC_LOCK(imgp->proc);
if (data_size > lim_cur_proc(imgp->proc, RLIMIT_DATA))
err_str = "Data segment size exceeds process limit";
else if (text_size > maxtsiz)
err_str = "Text segment size exceeds system limit";
else if (total_size > lim_cur_proc(imgp->proc, RLIMIT_VMEM))
err_str = "Total segment size exceeds process limit";
else if (racct_set(imgp->proc, RACCT_DATA, data_size) != 0)
err_str = "Data segment size exceeds resource limit";
else if (racct_set(imgp->proc, RACCT_VMEM, total_size) != 0)
err_str = "Total segment size exceeds resource limit";
if (err_str != NULL) {
PROC_UNLOCK(imgp->proc);
uprintf("%s\n", err_str);
error = ENOMEM;
goto ret;
}
vmspace = imgp->proc->p_vmspace;
vmspace->vm_tsize = text_size >> PAGE_SHIFT;
vmspace->vm_taddr = (caddr_t)(uintptr_t)text_addr;
vmspace->vm_dsize = data_size >> PAGE_SHIFT;
vmspace->vm_daddr = (caddr_t)(uintptr_t)data_addr;
/*
* We load the dynamic linker where a userland call
* to mmap(0, ...) would put it. The rationale behind this
* calculation is that it leaves room for the heap to grow to
* its maximum allowed size.
*/
addr = round_page((vm_offset_t)vmspace->vm_daddr + lim_max(td,
RLIMIT_DATA));
PROC_UNLOCK(imgp->proc);
imgp->entry_addr = entry;
if (interp != NULL) {
have_interp = FALSE;
VOP_UNLOCK(imgp->vp, 0);
if (brand_info->emul_path != NULL &&
brand_info->emul_path[0] != '\0') {
path = malloc(MAXPATHLEN, M_TEMP, M_WAITOK);
snprintf(path, MAXPATHLEN, "%s%s",
brand_info->emul_path, interp);
error = __elfN(load_file)(imgp->proc, path, &addr,
&imgp->entry_addr, sv->sv_pagesize);
free(path, M_TEMP);
if (error == 0)
have_interp = TRUE;
}
if (!have_interp && newinterp != NULL &&
(brand_info->interp_path == NULL ||
strcmp(interp, brand_info->interp_path) == 0)) {
error = __elfN(load_file)(imgp->proc, newinterp, &addr,
&imgp->entry_addr, sv->sv_pagesize);
if (error == 0)
have_interp = TRUE;
}
if (!have_interp) {
error = __elfN(load_file)(imgp->proc, interp, &addr,
&imgp->entry_addr, sv->sv_pagesize);
}
vn_lock(imgp->vp, LK_EXCLUSIVE | LK_RETRY);
if (error != 0) {
uprintf("ELF interpreter %s not found, error %d\n",
interp, error);
goto ret;
}
} else
addr = et_dyn_addr;
/*
* Construct auxargs table (used by the fixup routine)
*/
elf_auxargs = malloc(sizeof(Elf_Auxargs), M_TEMP, M_WAITOK);
elf_auxargs->execfd = -1;
elf_auxargs->phdr = proghdr;
elf_auxargs->phent = hdr->e_phentsize;
elf_auxargs->phnum = hdr->e_phnum;
elf_auxargs->pagesz = PAGE_SIZE;
elf_auxargs->base = addr;
elf_auxargs->flags = 0;
elf_auxargs->entry = entry;
elf_auxargs->hdr_eflags = hdr->e_flags;
imgp->auxargs = elf_auxargs;
imgp->interpreted = 0;
imgp->reloc_base = addr;
imgp->proc->p_osrel = osrel;
imgp->proc->p_fctl0 = fctl0;
imgp->proc->p_elf_machine = hdr->e_machine;
imgp->proc->p_elf_flags = hdr->e_flags;
ret:
free(interp_buf, M_TEMP);
return (error);
}
#define suword __CONCAT(suword, __ELF_WORD_SIZE)
int
__elfN(freebsd_fixup)(register_t **stack_base, struct image_params *imgp)
{
Elf_Auxargs *args = (Elf_Auxargs *)imgp->auxargs;
Elf_Auxinfo *argarray, *pos;
Elf_Addr *base, *auxbase;
int error;
base = (Elf_Addr *)*stack_base;
auxbase = base + imgp->args->argc + 1 + imgp->args->envc + 1;
argarray = pos = malloc(AT_COUNT * sizeof(*pos), M_TEMP,
M_WAITOK | M_ZERO);
if (args->execfd != -1)
AUXARGS_ENTRY(pos, AT_EXECFD, args->execfd);
AUXARGS_ENTRY(pos, AT_PHDR, args->phdr);
AUXARGS_ENTRY(pos, AT_PHENT, args->phent);
AUXARGS_ENTRY(pos, AT_PHNUM, args->phnum);
AUXARGS_ENTRY(pos, AT_PAGESZ, args->pagesz);
AUXARGS_ENTRY(pos, AT_FLAGS, args->flags);
AUXARGS_ENTRY(pos, AT_ENTRY, args->entry);
AUXARGS_ENTRY(pos, AT_BASE, args->base);
AUXARGS_ENTRY(pos, AT_EHDRFLAGS, args->hdr_eflags);
if (imgp->execpathp != 0)
AUXARGS_ENTRY(pos, AT_EXECPATH, imgp->execpathp);
AUXARGS_ENTRY(pos, AT_OSRELDATE,
imgp->proc->p_ucred->cr_prison->pr_osreldate);
if (imgp->canary != 0) {
AUXARGS_ENTRY(pos, AT_CANARY, imgp->canary);
AUXARGS_ENTRY(pos, AT_CANARYLEN, imgp->canarylen);
}
AUXARGS_ENTRY(pos, AT_NCPUS, mp_ncpus);
if (imgp->pagesizes != 0) {
AUXARGS_ENTRY(pos, AT_PAGESIZES, imgp->pagesizes);
AUXARGS_ENTRY(pos, AT_PAGESIZESLEN, imgp->pagesizeslen);
}
if (imgp->sysent->sv_timekeep_base != 0) {
AUXARGS_ENTRY(pos, AT_TIMEKEEP,
imgp->sysent->sv_timekeep_base);
}
AUXARGS_ENTRY(pos, AT_STACKPROT, imgp->sysent->sv_shared_page_obj
!= NULL && imgp->stack_prot != 0 ? imgp->stack_prot :
imgp->sysent->sv_stackprot);
if (imgp->sysent->sv_hwcap != NULL)
AUXARGS_ENTRY(pos, AT_HWCAP, *imgp->sysent->sv_hwcap);
if (imgp->sysent->sv_hwcap2 != NULL)
AUXARGS_ENTRY(pos, AT_HWCAP2, *imgp->sysent->sv_hwcap2);
AUXARGS_ENTRY(pos, AT_NULL, 0);
free(imgp->auxargs, M_TEMP);
imgp->auxargs = NULL;
KASSERT(pos - argarray <= AT_COUNT, ("Too many auxargs"));
error = copyout(argarray, auxbase, sizeof(*argarray) * AT_COUNT);
free(argarray, M_TEMP);
if (error != 0)
return (error);
base--;
if (suword(base, imgp->args->argc) == -1)
return (EFAULT);
*stack_base = (register_t *)base;
return (0);
}
/*
* Code for generating ELF core dumps.
*/
typedef void (*segment_callback)(vm_map_entry_t, void *);
/* Closure for cb_put_phdr(). */
struct phdr_closure {
Elf_Phdr *phdr; /* Program header to fill in */
Elf_Off offset; /* Offset of segment in core file */
};
/* Closure for cb_size_segment(). */
struct sseg_closure {
int count; /* Count of writable segments. */
size_t size; /* Total size of all writable segments. */
};
typedef void (*outfunc_t)(void *, struct sbuf *, size_t *);
struct note_info {
int type; /* Note type. */
outfunc_t outfunc; /* Output function. */
void *outarg; /* Argument for the output function. */
size_t outsize; /* Output size. */
TAILQ_ENTRY(note_info) link; /* Link to the next note info. */
};
TAILQ_HEAD(note_info_list, note_info);
/* Coredump output parameters. */
struct coredump_params {
off_t offset;
struct ucred *active_cred;
struct ucred *file_cred;
struct thread *td;
struct vnode *vp;
struct compressor *comp;
};
extern int compress_user_cores;
extern int compress_user_cores_level;
static void cb_put_phdr(vm_map_entry_t, void *);
static void cb_size_segment(vm_map_entry_t, void *);
static int core_write(struct coredump_params *, const void *, size_t, off_t,
enum uio_seg);
static void each_dumpable_segment(struct thread *, segment_callback, void *);
static int __elfN(corehdr)(struct coredump_params *, int, void *, size_t,
struct note_info_list *, size_t);
static void __elfN(prepare_notes)(struct thread *, struct note_info_list *,
size_t *);
static void __elfN(puthdr)(struct thread *, void *, size_t, int, size_t);
static void __elfN(putnote)(struct note_info *, struct sbuf *);
static size_t register_note(struct note_info_list *, int, outfunc_t, void *);
static int sbuf_drain_core_output(void *, const char *, int);
static int sbuf_drain_count(void *arg, const char *data, int len);
static void __elfN(note_fpregset)(void *, struct sbuf *, size_t *);
static void __elfN(note_prpsinfo)(void *, struct sbuf *, size_t *);
static void __elfN(note_prstatus)(void *, struct sbuf *, size_t *);
static void __elfN(note_threadmd)(void *, struct sbuf *, size_t *);
static void __elfN(note_thrmisc)(void *, struct sbuf *, size_t *);
static void __elfN(note_ptlwpinfo)(void *, struct sbuf *, size_t *);
static void __elfN(note_procstat_auxv)(void *, struct sbuf *, size_t *);
static void __elfN(note_procstat_proc)(void *, struct sbuf *, size_t *);
static void __elfN(note_procstat_psstrings)(void *, struct sbuf *, size_t *);
static void note_procstat_files(void *, struct sbuf *, size_t *);
static void note_procstat_groups(void *, struct sbuf *, size_t *);
static void note_procstat_osrel(void *, struct sbuf *, size_t *);
static void note_procstat_rlimit(void *, struct sbuf *, size_t *);
static void note_procstat_umask(void *, struct sbuf *, size_t *);
static void note_procstat_vmmap(void *, struct sbuf *, size_t *);
/*
* Write out a core segment to the compression stream.
*/
static int
compress_chunk(struct coredump_params *p, char *base, char *buf, u_int len)
{
u_int chunk_len;
int error;
while (len > 0) {
chunk_len = MIN(len, CORE_BUF_SIZE);
/*
* We can get an EFAULT error here.
* In that case zero out the current chunk of the segment.
*/
error = copyin(base, buf, chunk_len);
if (error != 0)
bzero(buf, chunk_len);
error = compressor_write(p->comp, buf, chunk_len);
if (error != 0)
break;
base += chunk_len;
len -= chunk_len;
}
return (error);
}
static int
core_compressed_write(void *base, size_t len, off_t offset, void *arg)
{
return (core_write((struct coredump_params *)arg, base, len, offset,
UIO_SYSSPACE));
}
static int
core_write(struct coredump_params *p, const void *base, size_t len,
off_t offset, enum uio_seg seg)
{
return (vn_rdwr_inchunks(UIO_WRITE, p->vp, __DECONST(void *, base),
len, offset, seg, IO_UNIT | IO_DIRECT | IO_RANGELOCKED,
p->active_cred, p->file_cred, NULL, p->td));
}
static int
core_output(void *base, size_t len, off_t offset, struct coredump_params *p,
void *tmpbuf)
{
int error;
if (p->comp != NULL)
return (compress_chunk(p, base, tmpbuf, len));
/*
* EFAULT is a non-fatal error that we can get, for example,
* if the segment is backed by a file but extends beyond its
* end.
*/
error = core_write(p, base, len, offset, UIO_USERSPACE);
if (error == EFAULT) {
log(LOG_WARNING, "Failed to fully fault in a core file segment "
"at VA %p with size 0x%zx to be written at offset 0x%jx "
"for process %s\n", base, len, offset, curproc->p_comm);
/*
* Write a "real" zero byte at the end of the target region
* in the case this is the last segment.
* The intermediate space will be implicitly zero-filled.
*/
error = core_write(p, zero_region, 1, offset + len - 1,
UIO_SYSSPACE);
}
return (error);
}
/*
* Drain into a core file.
*/
static int
sbuf_drain_core_output(void *arg, const char *data, int len)
{
struct coredump_params *p;
int error, locked;
p = (struct coredump_params *)arg;
/*
* Some kern_proc out routines that print to this sbuf may
* call us with the process lock held. Draining with the
* non-sleepable lock held is unsafe. The lock is needed for
* those routines when dumping a live process. In our case we
* can safely release the lock before draining and reacquire
* it afterwards.
*/
locked = PROC_LOCKED(p->td->td_proc);
if (locked)
PROC_UNLOCK(p->td->td_proc);
if (p->comp != NULL)
error = compressor_write(p->comp, __DECONST(char *, data), len);
else
error = core_write(p, __DECONST(void *, data), len, p->offset,
UIO_SYSSPACE);
if (locked)
PROC_LOCK(p->td->td_proc);
if (error != 0)
return (-error);
p->offset += len;
return (len);
}
/*
* Drain into a counter.
*/
static int
sbuf_drain_count(void *arg, const char *data __unused, int len)
{
size_t *sizep;
sizep = (size_t *)arg;
*sizep += len;
return (len);
}
int
__elfN(coredump)(struct thread *td, struct vnode *vp, off_t limit, int flags)
{
struct ucred *cred = td->td_ucred;
int error = 0;
struct sseg_closure seginfo;
struct note_info_list notelst;
struct coredump_params params;
struct note_info *ninfo;
void *hdr, *tmpbuf;
size_t hdrsize, notesz, coresize;
hdr = NULL;
tmpbuf = NULL;
TAILQ_INIT(&notelst);
/* Size the program segments. */
seginfo.count = 0;
seginfo.size = 0;
each_dumpable_segment(td, cb_size_segment, &seginfo);
/*
* Collect info about the core file header area.
*/
hdrsize = sizeof(Elf_Ehdr) + sizeof(Elf_Phdr) * (1 + seginfo.count);
if (seginfo.count + 1 >= PN_XNUM)
hdrsize += sizeof(Elf_Shdr);
__elfN(prepare_notes)(td, &notelst, &notesz);
coresize = round_page(hdrsize + notesz) + seginfo.size;
/* Set up core dump parameters. */
params.offset = 0;
params.active_cred = cred;
params.file_cred = NOCRED;
params.td = td;
params.vp = vp;
params.comp = NULL;
#ifdef RACCT
if (racct_enable) {
PROC_LOCK(td->td_proc);
error = racct_add(td->td_proc, RACCT_CORE, coresize);
PROC_UNLOCK(td->td_proc);
if (error != 0) {
error = EFAULT;
goto done;
}
}
#endif
if (coresize >= limit) {
error = EFAULT;
goto done;
}
/* Create a compression stream if necessary. */
if (compress_user_cores != 0) {
params.comp = compressor_init(core_compressed_write,
compress_user_cores, CORE_BUF_SIZE,
compress_user_cores_level, &params);
if (params.comp == NULL) {
error = EFAULT;
goto done;
}
tmpbuf = malloc(CORE_BUF_SIZE, M_TEMP, M_WAITOK | M_ZERO);
}
/*
* Allocate memory for building the header, fill it up,
* and write it out following the notes.
*/
hdr = malloc(hdrsize, M_TEMP, M_WAITOK);
error = __elfN(corehdr)(&params, seginfo.count, hdr, hdrsize, &notelst,
notesz);
/* Write the contents of all of the writable segments. */
if (error == 0) {
Elf_Phdr *php;
off_t offset;
int i;
php = (Elf_Phdr *)((char *)hdr + sizeof(Elf_Ehdr)) + 1;
offset = round_page(hdrsize + notesz);
for (i = 0; i < seginfo.count; i++) {
error = core_output((caddr_t)(uintptr_t)php->p_vaddr,
php->p_filesz, offset, &params, tmpbuf);
if (error != 0)
break;
offset += php->p_filesz;
php++;
}
if (error == 0 && params.comp != NULL)
error = compressor_flush(params.comp);
}
if (error) {
log(LOG_WARNING,
"Failed to write core file for process %s (error %d)\n",
curproc->p_comm, error);
}
done:
free(tmpbuf, M_TEMP);
if (params.comp != NULL)
compressor_fini(params.comp);
while ((ninfo = TAILQ_FIRST(&notelst)) != NULL) {
TAILQ_REMOVE(&notelst, ninfo, link);
free(ninfo, M_TEMP);
}
if (hdr != NULL)
free(hdr, M_TEMP);
return (error);
}
/*
* A callback for each_dumpable_segment() to write out the segment's
* program header entry.
*/
static void
cb_put_phdr(vm_map_entry_t entry, void *closure)
{
struct phdr_closure *phc = (struct phdr_closure *)closure;
Elf_Phdr *phdr = phc->phdr;
phc->offset = round_page(phc->offset);
phdr->p_type = PT_LOAD;
phdr->p_offset = phc->offset;
phdr->p_vaddr = entry->start;
phdr->p_paddr = 0;
phdr->p_filesz = phdr->p_memsz = entry->end - entry->start;
phdr->p_align = PAGE_SIZE;
phdr->p_flags = __elfN(untrans_prot)(entry->protection);
phc->offset += phdr->p_filesz;
phc->phdr++;
}
/*
* A callback for each_dumpable_segment() to gather information about
* the number of segments and their total size.
*/
static void
cb_size_segment(vm_map_entry_t entry, void *closure)
{
struct sseg_closure *ssc = (struct sseg_closure *)closure;
ssc->count++;
ssc->size += entry->end - entry->start;
}
/*
* For each dumpable segment in the process's memory map, call the given
* function with a pointer to the map entry and some arbitrary
* caller-supplied data.
*/
static void
each_dumpable_segment(struct thread *td, segment_callback func, void *closure)
{
struct proc *p = td->td_proc;
vm_map_t map = &p->p_vmspace->vm_map;
vm_map_entry_t entry;
vm_object_t backing_object, object;
boolean_t ignore_entry;
vm_map_lock_read(map);
for (entry = map->header.next; entry != &map->header;
entry = entry->next) {
/*
* Don't dump inaccessible mappings; handle the legacy
* coredump mode.
*
* Note that read-only segments related to the ELF binary
* are now marked MAP_ENTRY_NOCOREDUMP, so we no longer
* need to arbitrarily ignore such segments.
*/
if (elf_legacy_coredump) {
if ((entry->protection & VM_PROT_RW) != VM_PROT_RW)
continue;
} else {
if ((entry->protection & VM_PROT_ALL) == 0)
continue;
}
/*
* Don't include a memory segment in the coredump if
* MAP_NOCORE is set in mmap(2) or MADV_NOCORE in
* madvise(2). Do not dump submaps (i.e. parts of the
* kernel map).
*/
if (entry->eflags & (MAP_ENTRY_NOCOREDUMP|MAP_ENTRY_IS_SUB_MAP))
continue;
if ((object = entry->object.vm_object) == NULL)
continue;
/* Ignore memory-mapped devices and such things. */
VM_OBJECT_RLOCK(object);
while ((backing_object = object->backing_object) != NULL) {
VM_OBJECT_RLOCK(backing_object);
VM_OBJECT_RUNLOCK(object);
object = backing_object;
}
ignore_entry = object->type != OBJT_DEFAULT &&
object->type != OBJT_SWAP && object->type != OBJT_VNODE &&
object->type != OBJT_PHYS;
VM_OBJECT_RUNLOCK(object);
if (ignore_entry)
continue;
(*func)(entry, closure);
}
vm_map_unlock_read(map);
}
/*
* Write the core file header to the file, including padding up to
* the page boundary.
*/
static int
__elfN(corehdr)(struct coredump_params *p, int numsegs, void *hdr,
size_t hdrsize, struct note_info_list *notelst, size_t notesz)
{
struct note_info *ninfo;
struct sbuf *sb;
int error;
/* Fill in the header. */
bzero(hdr, hdrsize);
__elfN(puthdr)(p->td, hdr, hdrsize, numsegs, notesz);
sb = sbuf_new(NULL, NULL, CORE_BUF_SIZE, SBUF_FIXEDLEN);
sbuf_set_drain(sb, sbuf_drain_core_output, p);
sbuf_start_section(sb, NULL);
sbuf_bcat(sb, hdr, hdrsize);
TAILQ_FOREACH(ninfo, notelst, link)
__elfN(putnote)(ninfo, sb);
/* Align up to a page boundary for the program segments. */
sbuf_end_section(sb, -1, PAGE_SIZE, 0);
error = sbuf_finish(sb);
sbuf_delete(sb);
return (error);
}
static void
__elfN(prepare_notes)(struct thread *td, struct note_info_list *list,
size_t *sizep)
{
struct proc *p;
struct thread *thr;
size_t size;
p = td->td_proc;
size = 0;
size += register_note(list, NT_PRPSINFO, __elfN(note_prpsinfo), p);
/*
* To have the debugger select the right thread (LWP) as the initial
* thread, we dump the state of the thread passed to us in td first.
* This is the thread that caused the core dump and is thus likely
* to be the thread one wants selected in the debugger.
*/
thr = td;
while (thr != NULL) {
size += register_note(list, NT_PRSTATUS,
__elfN(note_prstatus), thr);
size += register_note(list, NT_FPREGSET,
__elfN(note_fpregset), thr);
size += register_note(list, NT_THRMISC,
__elfN(note_thrmisc), thr);
size += register_note(list, NT_PTLWPINFO,
__elfN(note_ptlwpinfo), thr);
size += register_note(list, -1,
__elfN(note_threadmd), thr);
thr = (thr == td) ? TAILQ_FIRST(&p->p_threads) :
TAILQ_NEXT(thr, td_plist);
if (thr == td)
thr = TAILQ_NEXT(thr, td_plist);
}
size += register_note(list, NT_PROCSTAT_PROC,
__elfN(note_procstat_proc), p);
size += register_note(list, NT_PROCSTAT_FILES,
note_procstat_files, p);
size += register_note(list, NT_PROCSTAT_VMMAP,
note_procstat_vmmap, p);
size += register_note(list, NT_PROCSTAT_GROUPS,
note_procstat_groups, p);
size += register_note(list, NT_PROCSTAT_UMASK,
note_procstat_umask, p);
size += register_note(list, NT_PROCSTAT_RLIMIT,
note_procstat_rlimit, p);
size += register_note(list, NT_PROCSTAT_OSREL,
note_procstat_osrel, p);
size += register_note(list, NT_PROCSTAT_PSSTRINGS,
__elfN(note_procstat_psstrings), p);
size += register_note(list, NT_PROCSTAT_AUXV,
__elfN(note_procstat_auxv), p);
*sizep = size;
}
static void
__elfN(puthdr)(struct thread *td, void *hdr, size_t hdrsize, int numsegs,
size_t notesz)
{
Elf_Ehdr *ehdr;
Elf_Phdr *phdr;
Elf_Shdr *shdr;
struct phdr_closure phc;
ehdr = (Elf_Ehdr *)hdr;
ehdr->e_ident[EI_MAG0] = ELFMAG0;
ehdr->e_ident[EI_MAG1] = ELFMAG1;
ehdr->e_ident[EI_MAG2] = ELFMAG2;
ehdr->e_ident[EI_MAG3] = ELFMAG3;
ehdr->e_ident[EI_CLASS] = ELF_CLASS;
ehdr->e_ident[EI_DATA] = ELF_DATA;
ehdr->e_ident[EI_VERSION] = EV_CURRENT;
ehdr->e_ident[EI_OSABI] = ELFOSABI_FREEBSD;
ehdr->e_ident[EI_ABIVERSION] = 0;
ehdr->e_ident[EI_PAD] = 0;
ehdr->e_type = ET_CORE;
ehdr->e_machine = td->td_proc->p_elf_machine;
ehdr->e_version = EV_CURRENT;
ehdr->e_entry = 0;
ehdr->e_phoff = sizeof(Elf_Ehdr);
ehdr->e_flags = td->td_proc->p_elf_flags;
ehdr->e_ehsize = sizeof(Elf_Ehdr);
ehdr->e_phentsize = sizeof(Elf_Phdr);
ehdr->e_shentsize = sizeof(Elf_Shdr);
ehdr->e_shstrndx = SHN_UNDEF;
if (numsegs + 1 < PN_XNUM) {
ehdr->e_phnum = numsegs + 1;
ehdr->e_shnum = 0;
} else {
ehdr->e_phnum = PN_XNUM;
ehdr->e_shnum = 1;
ehdr->e_shoff = ehdr->e_phoff +
(numsegs + 1) * ehdr->e_phentsize;
KASSERT(ehdr->e_shoff == hdrsize - sizeof(Elf_Shdr),
("e_shoff: %zu, hdrsize - shdr: %zu",
(size_t)ehdr->e_shoff, hdrsize - sizeof(Elf_Shdr)));
shdr = (Elf_Shdr *)((char *)hdr + ehdr->e_shoff);
memset(shdr, 0, sizeof(*shdr));
/*
* A special first section is used to hold large segment and
* section counts. This was proposed by Sun Microsystems in
* Solaris and has been adopted by Linux; the standard ELF
* tools are already familiar with the technique.
*
* See table 7-7 of the Solaris "Linker and Libraries Guide"
* (or 12-7 depending on the version of the document) for more
* details.
*/
shdr->sh_type = SHT_NULL;
shdr->sh_size = ehdr->e_shnum;
shdr->sh_link = ehdr->e_shstrndx;
shdr->sh_info = numsegs + 1;
}
/*
* Fill in the program header entries.
*/
phdr = (Elf_Phdr *)((char *)hdr + ehdr->e_phoff);
/* The note segment. */
phdr->p_type = PT_NOTE;
phdr->p_offset = hdrsize;
phdr->p_vaddr = 0;
phdr->p_paddr = 0;
phdr->p_filesz = notesz;
phdr->p_memsz = 0;
phdr->p_flags = PF_R;
phdr->p_align = ELF_NOTE_ROUNDSIZE;
phdr++;
/* All the writable segments from the program. */
phc.phdr = phdr;
phc.offset = round_page(hdrsize + notesz);
each_dumpable_segment(td, cb_put_phdr, &phc);
}
static size_t
register_note(struct note_info_list *list, int type, outfunc_t out, void *arg)
{
struct note_info *ninfo;
size_t size, notesize;
size = 0;
out(arg, NULL, &size);
ninfo = malloc(sizeof(*ninfo), M_TEMP, M_ZERO | M_WAITOK);
ninfo->type = type;
ninfo->outfunc = out;
ninfo->outarg = arg;
ninfo->outsize = size;
TAILQ_INSERT_TAIL(list, ninfo, link);
if (type == -1)
return (size);
notesize = sizeof(Elf_Note) + /* note header */
roundup2(sizeof(FREEBSD_ABI_VENDOR), ELF_NOTE_ROUNDSIZE) +
/* note name */
roundup2(size, ELF_NOTE_ROUNDSIZE); /* note description */
return (notesize);
}
static size_t
append_note_data(const void *src, void *dst, size_t len)
{
size_t padded_len;
padded_len = roundup2(len, ELF_NOTE_ROUNDSIZE);
if (dst != NULL) {
bcopy(src, dst, len);
bzero((char *)dst + len, padded_len - len);
}
return (padded_len);
}
size_t
__elfN(populate_note)(int type, void *src, void *dst, size_t size, void **descp)
{
Elf_Note *note;
char *buf;
size_t notesize;
buf = dst;
if (buf != NULL) {
note = (Elf_Note *)buf;
note->n_namesz = sizeof(FREEBSD_ABI_VENDOR);
note->n_descsz = size;
note->n_type = type;
buf += sizeof(*note);
buf += append_note_data(FREEBSD_ABI_VENDOR, buf,
sizeof(FREEBSD_ABI_VENDOR));
append_note_data(src, buf, size);
if (descp != NULL)
*descp = buf;
}
notesize = sizeof(Elf_Note) + /* note header */
roundup2(sizeof(FREEBSD_ABI_VENDOR), ELF_NOTE_ROUNDSIZE) +
/* note name */
roundup2(size, ELF_NOTE_ROUNDSIZE); /* note description */
return (notesize);
}
static void
__elfN(putnote)(struct note_info *ninfo, struct sbuf *sb)
{
Elf_Note note;
ssize_t old_len, sect_len;
size_t new_len, descsz, i;
if (ninfo->type == -1) {
ninfo->outfunc(ninfo->outarg, sb, &ninfo->outsize);
return;
}
note.n_namesz = sizeof(FREEBSD_ABI_VENDOR);
note.n_descsz = ninfo->outsize;
note.n_type = ninfo->type;
sbuf_bcat(sb, &note, sizeof(note));
sbuf_start_section(sb, &old_len);
sbuf_bcat(sb, FREEBSD_ABI_VENDOR, sizeof(FREEBSD_ABI_VENDOR));
sbuf_end_section(sb, old_len, ELF_NOTE_ROUNDSIZE, 0);
if (note.n_descsz == 0)
return;
sbuf_start_section(sb, &old_len);
ninfo->outfunc(ninfo->outarg, sb, &ninfo->outsize);
sect_len = sbuf_end_section(sb, old_len, ELF_NOTE_ROUNDSIZE, 0);
if (sect_len < 0)
return;
new_len = (size_t)sect_len;
descsz = roundup(note.n_descsz, ELF_NOTE_ROUNDSIZE);
if (new_len < descsz) {
/*
* It is expected that individual note emitters will correctly
* predict their expected output size and fill up to that size
* themselves, padding in a format-specific way if needed.
* However, in case they don't, just do it here with zeros.
*/
for (i = 0; i < descsz - new_len; i++)
sbuf_putc(sb, 0);
} else if (new_len > descsz) {
/*
* We can't always truncate sb -- we may have drained some
* of it already.
*/
KASSERT(new_len == descsz, ("%s: Note type %u changed as we "
"read it (%zu > %zu). Since it is longer than "
"expected, this coredump's notes are corrupt. THIS "
"IS A BUG in the note_procstat routine for type %u.\n",
__func__, (unsigned)note.n_type, new_len, descsz,
(unsigned)note.n_type));
}
}
/*
* Miscellaneous note out functions.
*/
#if defined(COMPAT_FREEBSD32) && __ELF_WORD_SIZE == 32
#include <compat/freebsd32/freebsd32.h>
#include <compat/freebsd32/freebsd32_signal.h>
typedef struct prstatus32 elf_prstatus_t;
typedef struct prpsinfo32 elf_prpsinfo_t;
typedef struct fpreg32 elf_prfpregset_t;
typedef struct fpreg32 elf_fpregset_t;
typedef struct reg32 elf_gregset_t;
typedef struct thrmisc32 elf_thrmisc_t;
#define ELF_KERN_PROC_MASK KERN_PROC_MASK32
typedef struct kinfo_proc32 elf_kinfo_proc_t;
typedef uint32_t elf_ps_strings_t;
#else
typedef prstatus_t elf_prstatus_t;
typedef prpsinfo_t elf_prpsinfo_t;
typedef prfpregset_t elf_prfpregset_t;
typedef prfpregset_t elf_fpregset_t;
typedef gregset_t elf_gregset_t;
typedef thrmisc_t elf_thrmisc_t;
#define ELF_KERN_PROC_MASK 0
typedef struct kinfo_proc elf_kinfo_proc_t;
typedef vm_offset_t elf_ps_strings_t;
#endif
static void
__elfN(note_prpsinfo)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct sbuf sbarg;
size_t len;
char *cp, *end;
struct proc *p;
elf_prpsinfo_t *psinfo;
int error;
p = (struct proc *)arg;
if (sb != NULL) {
KASSERT(*sizep == sizeof(*psinfo), ("invalid size"));
psinfo = malloc(sizeof(*psinfo), M_TEMP, M_ZERO | M_WAITOK);
psinfo->pr_version = PRPSINFO_VERSION;
psinfo->pr_psinfosz = sizeof(elf_prpsinfo_t);
strlcpy(psinfo->pr_fname, p->p_comm, sizeof(psinfo->pr_fname));
PROC_LOCK(p);
if (p->p_args != NULL) {
len = sizeof(psinfo->pr_psargs) - 1;
if (len > p->p_args->ar_length)
len = p->p_args->ar_length;
memcpy(psinfo->pr_psargs, p->p_args->ar_args, len);
PROC_UNLOCK(p);
error = 0;
} else {
_PHOLD(p);
PROC_UNLOCK(p);
sbuf_new(&sbarg, psinfo->pr_psargs,
sizeof(psinfo->pr_psargs), SBUF_FIXEDLEN);
error = proc_getargv(curthread, p, &sbarg);
PRELE(p);
if (sbuf_finish(&sbarg) == 0)
len = sbuf_len(&sbarg) - 1;
else
len = sizeof(psinfo->pr_psargs) - 1;
sbuf_delete(&sbarg);
}
if (error || len == 0)
strlcpy(psinfo->pr_psargs, p->p_comm,
sizeof(psinfo->pr_psargs));
else {
KASSERT(len < sizeof(psinfo->pr_psargs),
("len is too long: %zu vs %zu", len,
sizeof(psinfo->pr_psargs)));
cp = psinfo->pr_psargs;
end = cp + len - 1;
for (;;) {
cp = memchr(cp, '\0', end - cp);
if (cp == NULL)
break;
*cp = ' ';
}
}
psinfo->pr_pid = p->p_pid;
sbuf_bcat(sb, psinfo, sizeof(*psinfo));
free(psinfo, M_TEMP);
}
*sizep = sizeof(*psinfo);
}
static void
__elfN(note_prstatus)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct thread *td;
elf_prstatus_t *status;
td = (struct thread *)arg;
if (sb != NULL) {
KASSERT(*sizep == sizeof(*status), ("invalid size"));
status = malloc(sizeof(*status), M_TEMP, M_ZERO | M_WAITOK);
status->pr_version = PRSTATUS_VERSION;
status->pr_statussz = sizeof(elf_prstatus_t);
status->pr_gregsetsz = sizeof(elf_gregset_t);
status->pr_fpregsetsz = sizeof(elf_fpregset_t);
status->pr_osreldate = osreldate;
status->pr_cursig = td->td_proc->p_sig;
status->pr_pid = td->td_tid;
#if defined(COMPAT_FREEBSD32) && __ELF_WORD_SIZE == 32
fill_regs32(td, &status->pr_reg);
#else
fill_regs(td, &status->pr_reg);
#endif
sbuf_bcat(sb, status, sizeof(*status));
free(status, M_TEMP);
}
*sizep = sizeof(*status);
}
static void
__elfN(note_fpregset)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct thread *td;
elf_prfpregset_t *fpregset;
td = (struct thread *)arg;
if (sb != NULL) {
KASSERT(*sizep == sizeof(*fpregset), ("invalid size"));
fpregset = malloc(sizeof(*fpregset), M_TEMP, M_ZERO | M_WAITOK);
#if defined(COMPAT_FREEBSD32) && __ELF_WORD_SIZE == 32
fill_fpregs32(td, fpregset);
#else
fill_fpregs(td, fpregset);
#endif
sbuf_bcat(sb, fpregset, sizeof(*fpregset));
free(fpregset, M_TEMP);
}
*sizep = sizeof(*fpregset);
}
static void
__elfN(note_thrmisc)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct thread *td;
elf_thrmisc_t thrmisc;
td = (struct thread *)arg;
if (sb != NULL) {
KASSERT(*sizep == sizeof(thrmisc), ("invalid size"));
bzero(&thrmisc._pad, sizeof(thrmisc._pad));
strcpy(thrmisc.pr_tname, td->td_name);
sbuf_bcat(sb, &thrmisc, sizeof(thrmisc));
}
*sizep = sizeof(thrmisc);
}
static void
__elfN(note_ptlwpinfo)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct thread *td;
size_t size;
int structsize;
#if defined(COMPAT_FREEBSD32) && __ELF_WORD_SIZE == 32
struct ptrace_lwpinfo32 pl;
#else
struct ptrace_lwpinfo pl;
#endif
td = (struct thread *)arg;
size = sizeof(structsize) + sizeof(pl);
if (sb != NULL) {
KASSERT(*sizep == size, ("invalid size"));
structsize = sizeof(pl);
sbuf_bcat(sb, &structsize, sizeof(structsize));
bzero(&pl, sizeof(pl));
pl.pl_lwpid = td->td_tid;
pl.pl_event = PL_EVENT_NONE;
pl.pl_sigmask = td->td_sigmask;
pl.pl_siglist = td->td_siglist;
if (td->td_si.si_signo != 0) {
pl.pl_event = PL_EVENT_SIGNAL;
pl.pl_flags |= PL_FLAG_SI;
#if defined(COMPAT_FREEBSD32) && __ELF_WORD_SIZE == 32
siginfo_to_siginfo32(&td->td_si, &pl.pl_siginfo);
#else
pl.pl_siginfo = td->td_si;
#endif
}
strcpy(pl.pl_tdname, td->td_name);
/* XXX TODO: supply more information in struct ptrace_lwpinfo. */
sbuf_bcat(sb, &pl, sizeof(pl));
}
*sizep = size;
}
/*
* Allow for MD-specific notes, as well as any MD-specific
* preparations for writing MI notes.
*/
static void
__elfN(note_threadmd)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct thread *td;
void *buf;
size_t size;
td = (struct thread *)arg;
size = *sizep;
if (size != 0 && sb != NULL)
buf = malloc(size, M_TEMP, M_ZERO | M_WAITOK);
else
buf = NULL;
size = 0;
__elfN(dump_thread)(td, buf, &size);
KASSERT(sb == NULL || *sizep == size, ("invalid size"));
if (size != 0 && sb != NULL)
sbuf_bcat(sb, buf, size);
free(buf, M_TEMP);
*sizep = size;
}
#ifdef KINFO_PROC_SIZE
CTASSERT(sizeof(struct kinfo_proc) == KINFO_PROC_SIZE);
#endif
static void
__elfN(note_procstat_proc)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
size_t size;
int structsize;
p = (struct proc *)arg;
size = sizeof(structsize) + p->p_numthreads *
sizeof(elf_kinfo_proc_t);
if (sb != NULL) {
KASSERT(*sizep == size, ("invalid size"));
structsize = sizeof(elf_kinfo_proc_t);
sbuf_bcat(sb, &structsize, sizeof(structsize));
PROC_LOCK(p);
kern_proc_out(p, sb, ELF_KERN_PROC_MASK);
}
*sizep = size;
}
#ifdef KINFO_FILE_SIZE
CTASSERT(sizeof(struct kinfo_file) == KINFO_FILE_SIZE);
#endif
static void
note_procstat_files(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
size_t size, sect_sz, i;
ssize_t start_len, sect_len;
int structsize, filedesc_flags;
if (coredump_pack_fileinfo)
filedesc_flags = KERN_FILEDESC_PACK_KINFO;
else
filedesc_flags = 0;
p = (struct proc *)arg;
structsize = sizeof(struct kinfo_file);
if (sb == NULL) {
size = 0;
sb = sbuf_new(NULL, NULL, 128, SBUF_FIXEDLEN);
sbuf_set_drain(sb, sbuf_drain_count, &size);
sbuf_bcat(sb, &structsize, sizeof(structsize));
PROC_LOCK(p);
kern_proc_filedesc_out(p, sb, -1, filedesc_flags);
sbuf_finish(sb);
sbuf_delete(sb);
*sizep = size;
} else {
sbuf_start_section(sb, &start_len);
sbuf_bcat(sb, &structsize, sizeof(structsize));
PROC_LOCK(p);
kern_proc_filedesc_out(p, sb, *sizep - sizeof(structsize),
filedesc_flags);
sect_len = sbuf_end_section(sb, start_len, 0, 0);
if (sect_len < 0)
return;
sect_sz = sect_len;
KASSERT(sect_sz <= *sizep,
("kern_proc_filedesc_out did not respect maxlen; "
"requested %zu, got %zu", *sizep - sizeof(structsize),
sect_sz - sizeof(structsize)));
for (i = 0; i < *sizep - sect_sz && sb->s_error == 0; i++)
sbuf_putc(sb, 0);
}
}
#ifdef KINFO_VMENTRY_SIZE
CTASSERT(sizeof(struct kinfo_vmentry) == KINFO_VMENTRY_SIZE);
#endif
static void
note_procstat_vmmap(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
size_t size;
int structsize, vmmap_flags;
if (coredump_pack_vmmapinfo)
vmmap_flags = KERN_VMMAP_PACK_KINFO;
else
vmmap_flags = 0;
p = (struct proc *)arg;
structsize = sizeof(struct kinfo_vmentry);
if (sb == NULL) {
size = 0;
sb = sbuf_new(NULL, NULL, 128, SBUF_FIXEDLEN);
sbuf_set_drain(sb, sbuf_drain_count, &size);
sbuf_bcat(sb, &structsize, sizeof(structsize));
PROC_LOCK(p);
kern_proc_vmmap_out(p, sb, -1, vmmap_flags);
sbuf_finish(sb);
sbuf_delete(sb);
*sizep = size;
} else {
sbuf_bcat(sb, &structsize, sizeof(structsize));
PROC_LOCK(p);
kern_proc_vmmap_out(p, sb, *sizep - sizeof(structsize),
vmmap_flags);
}
}
static void
note_procstat_groups(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
size_t size;
int structsize;
p = (struct proc *)arg;
size = sizeof(structsize) + p->p_ucred->cr_ngroups * sizeof(gid_t);
if (sb != NULL) {
KASSERT(*sizep == size, ("invalid size"));
structsize = sizeof(gid_t);
sbuf_bcat(sb, &structsize, sizeof(structsize));
sbuf_bcat(sb, p->p_ucred->cr_groups, p->p_ucred->cr_ngroups *
sizeof(gid_t));
}
*sizep = size;
}
static void
note_procstat_umask(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
size_t size;
int structsize;
p = (struct proc *)arg;
size = sizeof(structsize) + sizeof(p->p_fd->fd_cmask);
if (sb != NULL) {
KASSERT(*sizep == size, ("invalid size"));
structsize = sizeof(p->p_fd->fd_cmask);
sbuf_bcat(sb, &structsize, sizeof(structsize));
sbuf_bcat(sb, &p->p_fd->fd_cmask, sizeof(p->p_fd->fd_cmask));
}
*sizep = size;
}
static void
note_procstat_rlimit(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
struct rlimit rlim[RLIM_NLIMITS];
size_t size;
int structsize, i;
p = (struct proc *)arg;
size = sizeof(structsize) + sizeof(rlim);
if (sb != NULL) {
KASSERT(*sizep == size, ("invalid size"));
structsize = sizeof(rlim);
sbuf_bcat(sb, &structsize, sizeof(structsize));
PROC_LOCK(p);
for (i = 0; i < RLIM_NLIMITS; i++)
lim_rlimit_proc(p, i, &rlim[i]);
PROC_UNLOCK(p);
sbuf_bcat(sb, rlim, sizeof(rlim));
}
*sizep = size;
}
static void
note_procstat_osrel(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
size_t size;
int structsize;
p = (struct proc *)arg;
size = sizeof(structsize) + sizeof(p->p_osrel);
if (sb != NULL) {
KASSERT(*sizep == size, ("invalid size"));
structsize = sizeof(p->p_osrel);
sbuf_bcat(sb, &structsize, sizeof(structsize));
sbuf_bcat(sb, &p->p_osrel, sizeof(p->p_osrel));
}
*sizep = size;
}
static void
__elfN(note_procstat_psstrings)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
elf_ps_strings_t ps_strings;
size_t size;
int structsize;
p = (struct proc *)arg;
size = sizeof(structsize) + sizeof(ps_strings);
if (sb != NULL) {
KASSERT(*sizep == size, ("invalid size"));
structsize = sizeof(ps_strings);
#if defined(COMPAT_FREEBSD32) && __ELF_WORD_SIZE == 32
ps_strings = PTROUT(p->p_sysent->sv_psstrings);
#else
ps_strings = p->p_sysent->sv_psstrings;
#endif
sbuf_bcat(sb, &structsize, sizeof(structsize));
sbuf_bcat(sb, &ps_strings, sizeof(ps_strings));
}
*sizep = size;
}
static void
__elfN(note_procstat_auxv)(void *arg, struct sbuf *sb, size_t *sizep)
{
struct proc *p;
size_t size;
int structsize;
p = (struct proc *)arg;
if (sb == NULL) {
size = 0;
sb = sbuf_new(NULL, NULL, 128, SBUF_FIXEDLEN);
sbuf_set_drain(sb, sbuf_drain_count, &size);
sbuf_bcat(sb, &structsize, sizeof(structsize));
PHOLD(p);
proc_getauxv(curthread, p, sb);
PRELE(p);
sbuf_finish(sb);
sbuf_delete(sb);
*sizep = size;
} else {
structsize = sizeof(Elf_Auxinfo);
sbuf_bcat(sb, &structsize, sizeof(structsize));
PHOLD(p);
proc_getauxv(curthread, p, sb);
PRELE(p);
}
}
static boolean_t
__elfN(parse_notes)(struct image_params *imgp, Elf_Note *checknote,
const char *note_vendor, const Elf_Phdr *pnote,
boolean_t (*cb)(const Elf_Note *, void *, boolean_t *), void *cb_arg)
{
const Elf_Note *note, *note0, *note_end;
const char *note_name;
char *buf;
int i, error;
boolean_t res;
/* We need some limit, might as well use PAGE_SIZE. */
if (pnote == NULL || pnote->p_filesz > PAGE_SIZE)
return (FALSE);
ASSERT_VOP_LOCKED(imgp->vp, "parse_notes");
if (pnote->p_offset > PAGE_SIZE ||
pnote->p_filesz > PAGE_SIZE - pnote->p_offset) {
VOP_UNLOCK(imgp->vp, 0);
buf = malloc(pnote->p_filesz, M_TEMP, M_WAITOK);
vn_lock(imgp->vp, LK_EXCLUSIVE | LK_RETRY);
error = vn_rdwr(UIO_READ, imgp->vp, buf, pnote->p_filesz,
pnote->p_offset, UIO_SYSSPACE, IO_NODELOCKED,
curthread->td_ucred, NOCRED, NULL, curthread);
if (error != 0) {
uprintf("i/o error PT_NOTE\n");
goto retf;
}
note = note0 = (const Elf_Note *)buf;
note_end = (const Elf_Note *)(buf + pnote->p_filesz);
} else {
note = note0 = (const Elf_Note *)(imgp->image_header +
pnote->p_offset);
note_end = (const Elf_Note *)(imgp->image_header +
pnote->p_offset + pnote->p_filesz);
buf = NULL;
}
for (i = 0; i < 100 && note >= note0 && note < note_end; i++) {
if (!aligned(note, Elf32_Addr) || (const char *)note_end -
(const char *)note < sizeof(Elf_Note)) {
goto retf;
}
if (note->n_namesz != checknote->n_namesz ||
note->n_descsz != checknote->n_descsz ||
note->n_type != checknote->n_type)
goto nextnote;
note_name = (const char *)(note + 1);
if (note_name + checknote->n_namesz >=
(const char *)note_end || strncmp(note_vendor,
note_name, checknote->n_namesz) != 0)
goto nextnote;
if (cb(note, cb_arg, &res))
goto ret;
nextnote:
note = (const Elf_Note *)((const char *)(note + 1) +
roundup2(note->n_namesz, ELF_NOTE_ROUNDSIZE) +
roundup2(note->n_descsz, ELF_NOTE_ROUNDSIZE));
}
retf:
res = FALSE;
ret:
free(buf, M_TEMP);
return (res);
}
struct brandnote_cb_arg {
Elf_Brandnote *brandnote;
int32_t *osrel;
};
static boolean_t
brandnote_cb(const Elf_Note *note, void *arg0, boolean_t *res)
{
struct brandnote_cb_arg *arg;
arg = arg0;
/*
* Fetch the osreldate for the binary from the ELF OSABI-note
* if necessary.
*/
*res = (arg->brandnote->flags & BN_TRANSLATE_OSREL) != 0 &&
arg->brandnote->trans_osrel != NULL ?
arg->brandnote->trans_osrel(note, arg->osrel) : TRUE;
return (TRUE);
}
static Elf_Note fctl_note = {
.n_namesz = sizeof(FREEBSD_ABI_VENDOR),
.n_descsz = sizeof(uint32_t),
.n_type = NT_FREEBSD_FEATURE_CTL,
};
struct fctl_cb_arg {
uint32_t *fctl0;
};
static boolean_t
note_fctl_cb(const Elf_Note *note, void *arg0, boolean_t *res)
{
struct fctl_cb_arg *arg;
const Elf32_Word *desc;
uintptr_t p;
arg = arg0;
p = (uintptr_t)(note + 1);
p += roundup2(note->n_namesz, ELF_NOTE_ROUNDSIZE);
desc = (const Elf32_Word *)p;
*arg->fctl0 = desc[0];
return (TRUE);
}
/*
* Try to find the appropriate ABI-note section for checknote, fetch
* the osreldate and feature control flags for the binary from the ELF
* OSABI-note. Only the first page of the image is searched, the same
* as for headers.
*/
static boolean_t
__elfN(check_note)(struct image_params *imgp, Elf_Brandnote *brandnote,
int32_t *osrel, uint32_t *fctl0)
{
const Elf_Phdr *phdr;
const Elf_Ehdr *hdr;
struct brandnote_cb_arg b_arg;
struct fctl_cb_arg f_arg;
int i, j;
hdr = (const Elf_Ehdr *)imgp->image_header;
phdr = (const Elf_Phdr *)(imgp->image_header + hdr->e_phoff);
b_arg.brandnote = brandnote;
b_arg.osrel = osrel;
f_arg.fctl0 = fctl0;
for (i = 0; i < hdr->e_phnum; i++) {
if (phdr[i].p_type == PT_NOTE && __elfN(parse_notes)(imgp,
&brandnote->hdr, brandnote->vendor, &phdr[i], brandnote_cb,
&b_arg)) {
for (j = 0; j < hdr->e_phnum; j++) {
if (phdr[j].p_type == PT_NOTE &&
__elfN(parse_notes)(imgp, &fctl_note,
FREEBSD_ABI_VENDOR, &phdr[j],
note_fctl_cb, &f_arg))
break;
}
return (TRUE);
}
}
return (FALSE);
}
/*
* Tell kern_execve.c about it, with a little help from the linker.
*/
static struct execsw __elfN(execsw) = {
.ex_imgact = __CONCAT(exec_, __elfN(imgact)),
.ex_name = __XSTRING(__CONCAT(ELF, __ELF_WORD_SIZE))
};
EXEC_SET(__CONCAT(elf, __ELF_WORD_SIZE), __elfN(execsw));
static vm_prot_t
__elfN(trans_prot)(Elf_Word flags)
{
vm_prot_t prot;
prot = 0;
if (flags & PF_X)
prot |= VM_PROT_EXECUTE;
if (flags & PF_W)
prot |= VM_PROT_WRITE;
if (flags & PF_R)
prot |= VM_PROT_READ;
-#if __ELF_WORD_SIZE == 32
-#if defined(__amd64__)
+#if __ELF_WORD_SIZE == 32 && (defined(__amd64__) || defined(__i386__))
if (i386_read_exec && (flags & PF_R))
prot |= VM_PROT_EXECUTE;
-#endif
#endif
return (prot);
}
static Elf_Word
__elfN(untrans_prot)(vm_prot_t prot)
{
Elf_Word flags;
flags = 0;
if (prot & VM_PROT_EXECUTE)
flags |= PF_X;
if (prot & VM_PROT_READ)
flags |= PF_R;
if (prot & VM_PROT_WRITE)
flags |= PF_W;
return (flags);
}
Index: projects/clang800-import/sys/kern/kern_exec.c
===================================================================
--- projects/clang800-import/sys/kern/kern_exec.c (revision 343955)
+++ projects/clang800-import/sys/kern/kern_exec.c (revision 343956)
@@ -1,1828 +1,1830 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 1993, David Greenman
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_capsicum.h"
#include "opt_hwpmc_hooks.h"
#include "opt_ktrace.h"
#include "opt_vm.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/acct.h>
#include <sys/capsicum.h>
#include <sys/eventhandler.h>
#include <sys/exec.h>
#include <sys/fcntl.h>
#include <sys/filedesc.h>
#include <sys/imgact.h>
#include <sys/imgact_elf.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/mutex.h>
#include <sys/namei.h>
#include <sys/pioctl.h>
#include <sys/priv.h>
#include <sys/proc.h>
#include <sys/ptrace.h>
#include <sys/resourcevar.h>
#include <sys/rwlock.h>
#include <sys/sched.h>
#include <sys/sdt.h>
#include <sys/sf_buf.h>
#include <sys/shm.h>
#include <sys/signalvar.h>
#include <sys/smp.h>
#include <sys/stat.h>
#include <sys/syscallsubr.h>
#include <sys/sysctl.h>
#include <sys/sysent.h>
#include <sys/sysproto.h>
#include <sys/vnode.h>
#include <sys/wait.h>
#ifdef KTRACE
#include <sys/ktrace.h>
#endif
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_page.h>
#include <vm/vm_map.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>
#include <vm/vm_object.h>
#include <vm/vm_pager.h>
#ifdef HWPMC_HOOKS
#include <sys/pmckern.h>
#endif
#include <machine/reg.h>
#include <security/audit/audit.h>
#include <security/mac/mac_framework.h>
#ifdef KDTRACE_HOOKS
#include <sys/dtrace_bsd.h>
dtrace_execexit_func_t dtrace_fasttrap_exec;
#endif
SDT_PROVIDER_DECLARE(proc);
SDT_PROBE_DEFINE1(proc, , , exec, "char *");
SDT_PROBE_DEFINE1(proc, , , exec__failure, "int");
SDT_PROBE_DEFINE1(proc, , , exec__success, "char *");
MALLOC_DEFINE(M_PARGS, "proc-args", "Process arguments");
int coredump_pack_fileinfo = 1;
SYSCTL_INT(_kern, OID_AUTO, coredump_pack_fileinfo, CTLFLAG_RWTUN,
&coredump_pack_fileinfo, 0,
"Enable file path packing in 'procstat -f' coredump notes");
int coredump_pack_vmmapinfo = 1;
SYSCTL_INT(_kern, OID_AUTO, coredump_pack_vmmapinfo, CTLFLAG_RWTUN,
&coredump_pack_vmmapinfo, 0,
"Enable file path packing in 'procstat -v' coredump notes");
static int sysctl_kern_ps_strings(SYSCTL_HANDLER_ARGS);
static int sysctl_kern_usrstack(SYSCTL_HANDLER_ARGS);
static int sysctl_kern_stackprot(SYSCTL_HANDLER_ARGS);
static int do_execve(struct thread *td, struct image_args *args,
struct mac *mac_p);
/* XXX This should be vm_size_t. */
SYSCTL_PROC(_kern, KERN_PS_STRINGS, ps_strings, CTLTYPE_ULONG|CTLFLAG_RD|
CTLFLAG_CAPRD|CTLFLAG_MPSAFE, NULL, 0, sysctl_kern_ps_strings, "LU", "");
/* XXX This should be vm_size_t. */
SYSCTL_PROC(_kern, KERN_USRSTACK, usrstack, CTLTYPE_ULONG|CTLFLAG_RD|
CTLFLAG_CAPRD|CTLFLAG_MPSAFE, NULL, 0, sysctl_kern_usrstack, "LU", "");
SYSCTL_PROC(_kern, OID_AUTO, stackprot, CTLTYPE_INT|CTLFLAG_RD|CTLFLAG_MPSAFE,
NULL, 0, sysctl_kern_stackprot, "I", "");
u_long ps_arg_cache_limit = PAGE_SIZE / 16;
SYSCTL_ULONG(_kern, OID_AUTO, ps_arg_cache_limit, CTLFLAG_RW,
&ps_arg_cache_limit, 0, "");
static int disallow_high_osrel;
SYSCTL_INT(_kern, OID_AUTO, disallow_high_osrel, CTLFLAG_RW,
&disallow_high_osrel, 0,
"Disallow execution of binaries built for higher version of the world");
static int map_at_zero = 0;
SYSCTL_INT(_security_bsd, OID_AUTO, map_at_zero, CTLFLAG_RWTUN, &map_at_zero, 0,
"Permit processes to map an object at virtual address 0.");
EVENTHANDLER_LIST_DECLARE(process_exec);
static int
sysctl_kern_ps_strings(SYSCTL_HANDLER_ARGS)
{
struct proc *p;
int error;
p = curproc;
#ifdef SCTL_MASK32
if (req->flags & SCTL_MASK32) {
unsigned int val;
val = (unsigned int)p->p_sysent->sv_psstrings;
error = SYSCTL_OUT(req, &val, sizeof(val));
} else
#endif
error = SYSCTL_OUT(req, &p->p_sysent->sv_psstrings,
sizeof(p->p_sysent->sv_psstrings));
return error;
}
static int
sysctl_kern_usrstack(SYSCTL_HANDLER_ARGS)
{
struct proc *p;
int error;
p = curproc;
#ifdef SCTL_MASK32
if (req->flags & SCTL_MASK32) {
unsigned int val;
val = (unsigned int)p->p_sysent->sv_usrstack;
error = SYSCTL_OUT(req, &val, sizeof(val));
} else
#endif
error = SYSCTL_OUT(req, &p->p_sysent->sv_usrstack,
sizeof(p->p_sysent->sv_usrstack));
return error;
}
static int
sysctl_kern_stackprot(SYSCTL_HANDLER_ARGS)
{
struct proc *p;
p = curproc;
return (SYSCTL_OUT(req, &p->p_sysent->sv_stackprot,
sizeof(p->p_sysent->sv_stackprot)));
}
/*
* Each of the items is a pointer to a `const struct execsw', hence the
* double pointer here.
*/
static const struct execsw **execsw;
#ifndef _SYS_SYSPROTO_H_
struct execve_args {
char *fname;
char **argv;
char **envv;
};
#endif
int
sys_execve(struct thread *td, struct execve_args *uap)
{
struct image_args args;
struct vmspace *oldvmspace;
int error;
error = pre_execve(td, &oldvmspace);
if (error != 0)
return (error);
error = exec_copyin_args(&args, uap->fname, UIO_USERSPACE,
uap->argv, uap->envv);
if (error == 0)
error = kern_execve(td, &args, NULL);
post_execve(td, error, oldvmspace);
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct fexecve_args {
int fd;
char **argv;
char **envv;
};
#endif
int
sys_fexecve(struct thread *td, struct fexecve_args *uap)
{
struct image_args args;
struct vmspace *oldvmspace;
int error;
error = pre_execve(td, &oldvmspace);
if (error != 0)
return (error);
error = exec_copyin_args(&args, NULL, UIO_SYSSPACE,
uap->argv, uap->envv);
if (error == 0) {
args.fd = uap->fd;
error = kern_execve(td, &args, NULL);
}
post_execve(td, error, oldvmspace);
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct __mac_execve_args {
char *fname;
char **argv;
char **envv;
struct mac *mac_p;
};
#endif
int
sys___mac_execve(struct thread *td, struct __mac_execve_args *uap)
{
#ifdef MAC
struct image_args args;
struct vmspace *oldvmspace;
int error;
error = pre_execve(td, &oldvmspace);
if (error != 0)
return (error);
error = exec_copyin_args(&args, uap->fname, UIO_USERSPACE,
uap->argv, uap->envv);
if (error == 0)
error = kern_execve(td, &args, uap->mac_p);
post_execve(td, error, oldvmspace);
return (error);
#else
return (ENOSYS);
#endif
}
int
pre_execve(struct thread *td, struct vmspace **oldvmspace)
{
struct proc *p;
int error;
KASSERT(td == curthread, ("non-current thread %p", td));
error = 0;
p = td->td_proc;
if ((p->p_flag & P_HADTHREADS) != 0) {
PROC_LOCK(p);
if (thread_single(p, SINGLE_BOUNDARY) != 0)
error = ERESTART;
PROC_UNLOCK(p);
}
KASSERT(error != 0 || (td->td_pflags & TDP_EXECVMSPC) == 0,
("nested execve"));
*oldvmspace = p->p_vmspace;
return (error);
}
void
post_execve(struct thread *td, int error, struct vmspace *oldvmspace)
{
struct proc *p;
KASSERT(td == curthread, ("non-current thread %p", td));
p = td->td_proc;
if ((p->p_flag & P_HADTHREADS) != 0) {
PROC_LOCK(p);
/*
* On success, we upgrade to the SINGLE_EXIT state to
* force the other threads to exit.
*/
if (error == EJUSTRETURN)
thread_single(p, SINGLE_EXIT);
else
thread_single_end(p, SINGLE_BOUNDARY);
PROC_UNLOCK(p);
}
if ((td->td_pflags & TDP_EXECVMSPC) != 0) {
KASSERT(p->p_vmspace != oldvmspace,
("oldvmspace still used"));
vmspace_free(oldvmspace);
td->td_pflags &= ~TDP_EXECVMSPC;
}
}
/*
* XXX: kern_execve has the astonishing property of not always returning to
* the caller. If sufficiently bad things happen during the call to
* do_execve(), it can end up calling exit1(); as a result, callers must
* avoid doing anything which they might need to undo (e.g., allocating
* memory).
*/
int
kern_execve(struct thread *td, struct image_args *args, struct mac *mac_p)
{
AUDIT_ARG_ARGV(args->begin_argv, args->argc,
exec_args_get_begin_envv(args) - args->begin_argv);
AUDIT_ARG_ENVV(exec_args_get_begin_envv(args), args->envc,
args->endp - exec_args_get_begin_envv(args));
return (do_execve(td, args, mac_p));
}
/*
* In-kernel implementation of execve(). All arguments are assumed to be
* userspace pointers from the passed thread.
*/
static int
do_execve(struct thread *td, struct image_args *args, struct mac *mac_p)
{
struct proc *p = td->td_proc;
struct nameidata nd;
struct ucred *oldcred;
struct uidinfo *euip = NULL;
register_t *stack_base;
int error, i;
struct image_params image_params, *imgp;
struct vattr attr;
int (*img_first)(struct image_params *);
struct pargs *oldargs = NULL, *newargs = NULL;
struct sigacts *oldsigacts = NULL, *newsigacts = NULL;
#ifdef KTRACE
struct vnode *tracevp = NULL;
struct ucred *tracecred = NULL;
#endif
struct vnode *oldtextvp = NULL, *newtextvp;
int credential_changing;
int textset;
#ifdef MAC
struct label *interpvplabel = NULL;
int will_transition;
#endif
#ifdef HWPMC_HOOKS
struct pmckern_procexec pe;
#endif
static const char fexecv_proc_title[] = "(fexecv)";
imgp = &image_params;
/*
* Lock the process and set the P_INEXEC flag to indicate that
* it should be left alone until we're done here. This is
* necessary to avoid race conditions - e.g. in ptrace() -
* that might allow a local user to illicitly obtain elevated
* privileges.
*/
PROC_LOCK(p);
KASSERT((p->p_flag & P_INEXEC) == 0,
("%s(): process already has P_INEXEC flag", __func__));
p->p_flag |= P_INEXEC;
PROC_UNLOCK(p);
/*
* Initialize part of the common data
*/
bzero(imgp, sizeof(*imgp));
imgp->proc = p;
imgp->attr = &attr;
imgp->args = args;
oldcred = p->p_ucred;
#ifdef MAC
error = mac_execve_enter(imgp, mac_p);
if (error)
goto exec_fail;
#endif
/*
* Translate the file name. namei() returns a vnode pointer
* in ni_vp among other things.
*
* XXXAUDIT: It would be desirable to also audit the name of the
* interpreter if this is an interpreted binary.
*/
if (args->fname != NULL) {
NDINIT(&nd, LOOKUP, ISOPEN | LOCKLEAF | FOLLOW | SAVENAME
| AUDITVNODE1, UIO_SYSSPACE, args->fname, td);
}
SDT_PROBE1(proc, , , exec, args->fname);
interpret:
if (args->fname != NULL) {
#ifdef CAPABILITY_MODE
/*
* While capability mode can't reach this point via direct
* path arguments to execve(), we also don't allow
* interpreters to be used in capability mode (for now).
* Catch indirect lookups and return a permissions error.
*/
if (IN_CAPABILITY_MODE(td)) {
error = ECAPMODE;
goto exec_fail;
}
#endif
error = namei(&nd);
if (error)
goto exec_fail;
newtextvp = nd.ni_vp;
imgp->vp = newtextvp;
} else {
AUDIT_ARG_FD(args->fd);
/*
* Descriptors opened only with O_EXEC or O_RDONLY are allowed.
*/
error = fgetvp_exec(td, args->fd, &cap_fexecve_rights, &newtextvp);
if (error)
goto exec_fail;
vn_lock(newtextvp, LK_EXCLUSIVE | LK_RETRY);
AUDIT_ARG_VNODE1(newtextvp);
imgp->vp = newtextvp;
}
/*
* Check file permissions (also 'opens' file)
*/
error = exec_check_permissions(imgp);
if (error)
goto exec_fail_dealloc;
imgp->object = imgp->vp->v_object;
if (imgp->object != NULL)
vm_object_reference(imgp->object);
/*
* Set VV_TEXT now so no one can write to the executable while we're
* activating it.
*
* Remember if this was set before and unset it in case this is not
* actually an executable image.
*/
textset = VOP_IS_TEXT(imgp->vp);
VOP_SET_TEXT(imgp->vp);
error = exec_map_first_page(imgp);
if (error)
goto exec_fail_dealloc;
imgp->proc->p_osrel = 0;
imgp->proc->p_fctl0 = 0;
/*
* Implement image setuid/setgid.
*
* Determine new credentials before attempting image activators
* so that it can be used by process_exec handlers to determine
* credential/setid changes.
*
* Don't honor setuid/setgid if the filesystem prohibits it or if
* the process is being traced.
*
* We disable setuid/setgid/etc in capability mode on the basis
* that most setugid applications are not written with that
* environment in mind, and will therefore almost certainly operate
* incorrectly. In principle there's no reason that setugid
* applications might not be useful in capability mode, so we may want
* to reconsider this conservative design choice in the future.
*
* XXXMAC: For the time being, use NOSUID to also prohibit
* transitions on the file system.
*/
credential_changing = 0;
credential_changing |= (attr.va_mode & S_ISUID) &&
oldcred->cr_uid != attr.va_uid;
credential_changing |= (attr.va_mode & S_ISGID) &&
oldcred->cr_gid != attr.va_gid;
#ifdef MAC
will_transition = mac_vnode_execve_will_transition(oldcred, imgp->vp,
interpvplabel, imgp);
credential_changing |= will_transition;
#endif
/* Don't inherit PROC_PDEATHSIG_CTL value if setuid/setgid. */
if (credential_changing)
imgp->proc->p_pdeathsig = 0;
if (credential_changing &&
#ifdef CAPABILITY_MODE
((oldcred->cr_flags & CRED_FLAG_CAPMODE) == 0) &&
#endif
(imgp->vp->v_mount->mnt_flag & MNT_NOSUID) == 0 &&
(p->p_flag & P_TRACED) == 0) {
imgp->credential_setid = true;
VOP_UNLOCK(imgp->vp, 0);
imgp->newcred = crdup(oldcred);
if (attr.va_mode & S_ISUID) {
euip = uifind(attr.va_uid);
change_euid(imgp->newcred, euip);
}
vn_lock(imgp->vp, LK_EXCLUSIVE | LK_RETRY);
if (attr.va_mode & S_ISGID)
change_egid(imgp->newcred, attr.va_gid);
/*
* Implement correct POSIX saved-id behavior.
*
* XXXMAC: Note that the current logic will save the
* uid and gid if a MAC domain transition occurs, even
* though maybe it shouldn't.
*/
change_svuid(imgp->newcred, imgp->newcred->cr_uid);
change_svgid(imgp->newcred, imgp->newcred->cr_gid);
} else {
/*
* Implement correct POSIX saved-id behavior.
*
* XXX: It's not clear that the existing behavior is
* POSIX-compliant. A number of sources indicate that the
* saved uid/gid should only be updated if the new ruid is
* not equal to the old ruid, or the new euid is not equal
* to the old euid and the new euid is not equal to the old
* ruid. The FreeBSD code always updates the saved uid/gid.
* Also, this code uses the new (replaced) euid and egid as
* the source, which may or may not be the right ones to use.
*/
if (oldcred->cr_svuid != oldcred->cr_uid ||
oldcred->cr_svgid != oldcred->cr_gid) {
VOP_UNLOCK(imgp->vp, 0);
imgp->newcred = crdup(oldcred);
vn_lock(imgp->vp, LK_EXCLUSIVE | LK_RETRY);
change_svuid(imgp->newcred, imgp->newcred->cr_uid);
change_svgid(imgp->newcred, imgp->newcred->cr_gid);
}
}
/* The new credentials are installed into the process later. */
/*
* Do the best to calculate the full path to the image file.
*/
if (args->fname != NULL && args->fname[0] == '/')
imgp->execpath = args->fname;
else {
VOP_UNLOCK(imgp->vp, 0);
if (vn_fullpath(td, imgp->vp, &imgp->execpath,
&imgp->freepath) != 0)
imgp->execpath = args->fname;
vn_lock(imgp->vp, LK_EXCLUSIVE | LK_RETRY);
}
/*
* If the current process has a special image activator it
* wants to try first, call it. For example, emulating shell
* scripts differently.
*/
error = -1;
if ((img_first = imgp->proc->p_sysent->sv_imgact_try) != NULL)
error = img_first(imgp);
/*
* Loop through the list of image activators, calling each one.
* An activator returns -1 if there is no match, 0 on success,
* and an error otherwise.
*/
for (i = 0; error == -1 && execsw[i]; ++i) {
if (execsw[i]->ex_imgact == NULL ||
execsw[i]->ex_imgact == img_first) {
continue;
}
error = (*execsw[i]->ex_imgact)(imgp);
}
if (error) {
if (error == -1) {
if (textset == 0)
VOP_UNSET_TEXT(imgp->vp);
error = ENOEXEC;
}
goto exec_fail_dealloc;
}
/*
* Special interpreter operation, cleanup and loop up to try to
* activate the interpreter.
*/
if (imgp->interpreted) {
exec_unmap_first_page(imgp);
/*
* VV_TEXT needs to be unset for scripts. There is a short
* period before we determine that something is a script where
* VV_TEXT will be set. The vnode lock is held over this
* entire period so nothing should illegitimately be blocked.
*/
VOP_UNSET_TEXT(imgp->vp);
/* free name buffer and old vnode */
if (args->fname != NULL)
NDFREE(&nd, NDF_ONLY_PNBUF);
#ifdef MAC
mac_execve_interpreter_enter(newtextvp, &interpvplabel);
#endif
if (imgp->opened) {
VOP_CLOSE(newtextvp, FREAD, td->td_ucred, td);
imgp->opened = 0;
}
vput(newtextvp);
vm_object_deallocate(imgp->object);
imgp->object = NULL;
imgp->credential_setid = false;
if (imgp->newcred != NULL) {
crfree(imgp->newcred);
imgp->newcred = NULL;
}
imgp->execpath = NULL;
free(imgp->freepath, M_TEMP);
imgp->freepath = NULL;
/* set new name to that of the interpreter */
NDINIT(&nd, LOOKUP, LOCKLEAF | FOLLOW | SAVENAME,
UIO_SYSSPACE, imgp->interpreter_name, td);
args->fname = imgp->interpreter_name;
goto interpret;
}
/*
* NB: We unlock the vnode here because it is believed that none
* of the sv_copyout_strings/sv_fixup operations require the vnode.
*/
VOP_UNLOCK(imgp->vp, 0);
if (disallow_high_osrel &&
P_OSREL_MAJOR(p->p_osrel) > P_OSREL_MAJOR(__FreeBSD_version)) {
error = ENOEXEC;
uprintf("Osrel %d for image %s too high\n", p->p_osrel,
imgp->execpath != NULL ? imgp->execpath : "<unresolved>");
vn_lock(imgp->vp, LK_SHARED | LK_RETRY);
goto exec_fail_dealloc;
}
/* ABI enforces the use of Capsicum. Switch into capabilities mode. */
if (SV_PROC_FLAG(p, SV_CAPSICUM))
sys_cap_enter(td, NULL);
/*
* Copy out strings (args and env) and initialize stack base
*/
if (p->p_sysent->sv_copyout_strings)
stack_base = (*p->p_sysent->sv_copyout_strings)(imgp);
else
stack_base = exec_copyout_strings(imgp);
/*
* If a custom stack fixup routine is present for this process,
* let it do the stack setup. Otherwise, push the argument count
* as the first item on the stack.
*/
if (p->p_sysent->sv_fixup != NULL)
error = (*p->p_sysent->sv_fixup)(&stack_base, imgp);
else
error = suword(--stack_base, imgp->args->argc) == 0 ?
0 : EFAULT;
- if (error != 0)
+ if (error != 0) {
+ vn_lock(imgp->vp, LK_SHARED | LK_RETRY);
goto exec_fail_dealloc;
+ }
if (args->fdp != NULL) {
/* Install a brand new file descriptor table. */
fdinstall_remapped(td, args->fdp);
args->fdp = NULL;
} else {
/*
* Keep on using the existing file descriptor table. For
* security and other reasons, the file descriptor table
* cannot be shared after an exec.
*/
fdunshare(td);
/* close files on exec */
fdcloseexec(td);
}
/*
* Malloc things before we need locks.
*/
i = exec_args_get_begin_envv(imgp->args) - imgp->args->begin_argv;
/* Cache arguments if they fit inside our allowance */
if (ps_arg_cache_limit >= i + sizeof(struct pargs)) {
newargs = pargs_alloc(i);
bcopy(imgp->args->begin_argv, newargs->ar_args, i);
}
/*
* For security and other reasons, signal handlers cannot
* be shared after an exec. The new process gets a copy of the old
* handlers. In execsigs(), the new process will have its signals
* reset.
*/
if (sigacts_shared(p->p_sigacts)) {
oldsigacts = p->p_sigacts;
newsigacts = sigacts_alloc();
sigacts_copy(newsigacts, oldsigacts);
}
vn_lock(imgp->vp, LK_SHARED | LK_RETRY);
PROC_LOCK(p);
if (oldsigacts)
p->p_sigacts = newsigacts;
/* Stop profiling */
stopprofclock(p);
/* reset caught signals */
execsigs(p);
/* name this process - nameiexec(p, ndp) */
bzero(p->p_comm, sizeof(p->p_comm));
if (args->fname)
bcopy(nd.ni_cnd.cn_nameptr, p->p_comm,
min(nd.ni_cnd.cn_namelen, MAXCOMLEN));
else if (vn_commname(newtextvp, p->p_comm, sizeof(p->p_comm)) != 0)
bcopy(fexecv_proc_title, p->p_comm, sizeof(fexecv_proc_title));
bcopy(p->p_comm, td->td_name, sizeof(td->td_name));
#ifdef KTR
sched_clear_tdname(td);
#endif
/*
* mark as execed, wakeup the process that vforked (if any) and tell
* it that it now has its own resources back
*/
p->p_flag |= P_EXEC;
if ((p->p_flag2 & P2_NOTRACE_EXEC) == 0)
p->p_flag2 &= ~P2_NOTRACE;
if (p->p_flag & P_PPWAIT) {
p->p_flag &= ~(P_PPWAIT | P_PPTRACE);
cv_broadcast(&p->p_pwait);
/* STOPs are no longer ignored, arrange for AST */
signotify(td);
}
/*
* Implement image setuid/setgid installation.
*/
if (imgp->credential_setid) {
/*
* Turn off syscall tracing for set-id programs, except for
* root. Record any set-id flags first to make sure that
* we do not regain any tracing during a possible block.
*/
setsugid(p);
#ifdef KTRACE
if (p->p_tracecred != NULL &&
priv_check_cred(p->p_tracecred, PRIV_DEBUG_DIFFCRED))
ktrprocexec(p, &tracecred, &tracevp);
#endif
/*
* Close any file descriptors 0..2 that reference procfs,
* then make sure file descriptors 0..2 are in use.
*
* Both fdsetugidsafety() and fdcheckstd() may call functions
* taking sleepable locks, so temporarily drop our locks.
*/
PROC_UNLOCK(p);
VOP_UNLOCK(imgp->vp, 0);
fdsetugidsafety(td);
error = fdcheckstd(td);
vn_lock(imgp->vp, LK_SHARED | LK_RETRY);
if (error != 0)
goto exec_fail_dealloc;
PROC_LOCK(p);
#ifdef MAC
if (will_transition) {
mac_vnode_execve_transition(oldcred, imgp->newcred,
imgp->vp, interpvplabel, imgp);
}
#endif
} else {
if (oldcred->cr_uid == oldcred->cr_ruid &&
oldcred->cr_gid == oldcred->cr_rgid)
p->p_flag &= ~P_SUGID;
}
/*
* Set the new credentials.
*/
if (imgp->newcred != NULL) {
proc_set_cred(p, imgp->newcred);
crfree(oldcred);
oldcred = NULL;
}
/*
* Store the vp for use in procfs. This vnode was referenced by namei
* or fgetvp_exec.
*/
oldtextvp = p->p_textvp;
p->p_textvp = newtextvp;
#ifdef KDTRACE_HOOKS
/*
* Tell the DTrace fasttrap provider about the exec if it
* has declared an interest.
*/
if (dtrace_fasttrap_exec)
dtrace_fasttrap_exec(p);
#endif
/*
* Notify others that we exec'd, and clear the P_INEXEC flag
* as we're now a bona fide freshly-execed process.
*/
KNOTE_LOCKED(p->p_klist, NOTE_EXEC);
p->p_flag &= ~P_INEXEC;
/* clear "fork but no exec" flag, as we _are_ execing */
p->p_acflag &= ~AFORK;
/*
* Free any previous argument cache and replace it with
* the new argument cache, if any.
*/
oldargs = p->p_args;
p->p_args = newargs;
newargs = NULL;
PROC_UNLOCK(p);
#ifdef HWPMC_HOOKS
/*
* Check if system-wide sampling is in effect or if the
* current process is using PMCs. If so, do exec() time
* processing. This processing needs to happen AFTER the
* P_INEXEC flag is cleared.
*/
if (PMC_SYSTEM_SAMPLING_ACTIVE() || PMC_PROC_IS_USING_PMCS(p)) {
VOP_UNLOCK(imgp->vp, 0);
pe.pm_credentialschanged = credential_changing;
pe.pm_entryaddr = imgp->entry_addr;
PMC_CALL_HOOK_X(td, PMC_FN_PROCESS_EXEC, (void *) &pe);
vn_lock(imgp->vp, LK_SHARED | LK_RETRY);
}
#endif
/* Set values passed into the program in registers. */
if (p->p_sysent->sv_setregs)
(*p->p_sysent->sv_setregs)(td, imgp,
(u_long)(uintptr_t)stack_base);
else
exec_setregs(td, imgp, (u_long)(uintptr_t)stack_base);
vfs_mark_atime(imgp->vp, td->td_ucred);
SDT_PROBE1(proc, , , exec__success, args->fname);
exec_fail_dealloc:
if (imgp->firstpage != NULL)
exec_unmap_first_page(imgp);
if (imgp->vp != NULL) {
if (args->fname)
NDFREE(&nd, NDF_ONLY_PNBUF);
if (imgp->opened)
VOP_CLOSE(imgp->vp, FREAD, td->td_ucred, td);
if (error != 0)
vput(imgp->vp);
else
VOP_UNLOCK(imgp->vp, 0);
}
if (imgp->object != NULL)
vm_object_deallocate(imgp->object);
free(imgp->freepath, M_TEMP);
if (error == 0) {
if (p->p_ptevents & PTRACE_EXEC) {
PROC_LOCK(p);
if (p->p_ptevents & PTRACE_EXEC)
td->td_dbgflags |= TDB_EXEC;
PROC_UNLOCK(p);
}
/*
* Stop the process here if its stop event mask has
* the S_EXEC bit set.
*/
STOPEVENT(p, S_EXEC, 0);
} else {
exec_fail:
/* we're done here, clear P_INEXEC */
PROC_LOCK(p);
p->p_flag &= ~P_INEXEC;
PROC_UNLOCK(p);
SDT_PROBE1(proc, , , exec__failure, error);
}
if (imgp->newcred != NULL && oldcred != NULL)
crfree(imgp->newcred);
#ifdef MAC
mac_execve_exit(imgp);
mac_execve_interpreter_exit(interpvplabel);
#endif
exec_free_args(args);
/*
* Handle deferred decrement of ref counts.
*/
if (oldtextvp != NULL)
vrele(oldtextvp);
#ifdef KTRACE
if (tracevp != NULL)
vrele(tracevp);
if (tracecred != NULL)
crfree(tracecred);
#endif
pargs_drop(oldargs);
pargs_drop(newargs);
if (oldsigacts != NULL)
sigacts_free(oldsigacts);
if (euip != NULL)
uifree(euip);
if (error && imgp->vmspace_destroyed) {
/* Sorry, the process no longer exists; exit gracefully. */
exit1(td, 0, SIGABRT);
/* NOT REACHED */
}
#ifdef KTRACE
if (error == 0)
ktrprocctor(p);
#endif
/*
* We don't want cpu_set_syscall_retval() to overwrite any of
* the register values put in place by exec_setregs().
* Implementations of cpu_set_syscall_retval() will leave
* registers unmodified when returning EJUSTRETURN.
*/
return (error == 0 ? EJUSTRETURN : error);
}
int
exec_map_first_page(struct image_params *imgp)
{
int rv, i, after, initial_pagein;
vm_page_t ma[VM_INITIAL_PAGEIN];
vm_object_t object;
if (imgp->firstpage != NULL)
exec_unmap_first_page(imgp);
object = imgp->vp->v_object;
if (object == NULL)
return (EACCES);
VM_OBJECT_WLOCK(object);
#if VM_NRESERVLEVEL > 0
vm_object_color(object, 0);
#endif
ma[0] = vm_page_grab(object, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOBUSY);
if (ma[0]->valid != VM_PAGE_BITS_ALL) {
vm_page_xbusy(ma[0]);
if (!vm_pager_has_page(object, 0, NULL, &after)) {
vm_page_lock(ma[0]);
vm_page_free(ma[0]);
vm_page_unlock(ma[0]);
VM_OBJECT_WUNLOCK(object);
return (EIO);
}
initial_pagein = min(after, VM_INITIAL_PAGEIN);
KASSERT(initial_pagein <= object->size,
("%s: initial_pagein %d object->size %ju",
__func__, initial_pagein, (uintmax_t )object->size));
for (i = 1; i < initial_pagein; i++) {
if ((ma[i] = vm_page_next(ma[i - 1])) != NULL) {
if (ma[i]->valid)
break;
if (!vm_page_tryxbusy(ma[i]))
break;
} else {
ma[i] = vm_page_alloc(object, i,
VM_ALLOC_NORMAL);
if (ma[i] == NULL)
break;
}
}
initial_pagein = i;
rv = vm_pager_get_pages(object, ma, initial_pagein, NULL, NULL);
if (rv != VM_PAGER_OK) {
for (i = 0; i < initial_pagein; i++) {
vm_page_lock(ma[i]);
vm_page_free(ma[i]);
vm_page_unlock(ma[i]);
}
VM_OBJECT_WUNLOCK(object);
return (EIO);
}
vm_page_xunbusy(ma[0]);
for (i = 1; i < initial_pagein; i++)
vm_page_readahead_finish(ma[i]);
}
vm_page_lock(ma[0]);
vm_page_hold(ma[0]);
vm_page_activate(ma[0]);
vm_page_unlock(ma[0]);
VM_OBJECT_WUNLOCK(object);
imgp->firstpage = sf_buf_alloc(ma[0], 0);
imgp->image_header = (char *)sf_buf_kva(imgp->firstpage);
return (0);
}
void
exec_unmap_first_page(struct image_params *imgp)
{
vm_page_t m;
if (imgp->firstpage != NULL) {
m = sf_buf_page(imgp->firstpage);
sf_buf_free(imgp->firstpage);
imgp->firstpage = NULL;
vm_page_lock(m);
vm_page_unhold(m);
vm_page_unlock(m);
}
}
/*
* Destroy old address space, and allocate a new stack.
* The new stack is only sgrowsiz large because it is grown
* automatically on a page fault.
*/
int
exec_new_vmspace(struct image_params *imgp, struct sysentvec *sv)
{
int error;
struct proc *p = imgp->proc;
struct vmspace *vmspace = p->p_vmspace;
vm_object_t obj;
struct rlimit rlim_stack;
vm_offset_t sv_minuser, stack_addr;
vm_map_t map;
u_long ssiz;
imgp->vmspace_destroyed = 1;
imgp->sysent = sv;
/* May be called with Giant held */
EVENTHANDLER_DIRECT_INVOKE(process_exec, p, imgp);
/*
* Blow away the entire process VM if the address space is not
* shared; otherwise, create a new VM space so that other threads
* are not disrupted.
*/
map = &vmspace->vm_map;
if (map_at_zero)
sv_minuser = sv->sv_minuser;
else
sv_minuser = MAX(sv->sv_minuser, PAGE_SIZE);
if (vmspace->vm_refcnt == 1 && vm_map_min(map) == sv_minuser &&
vm_map_max(map) == sv->sv_maxuser) {
shmexit(vmspace);
pmap_remove_pages(vmspace_pmap(vmspace));
vm_map_remove(map, vm_map_min(map), vm_map_max(map));
/* An exec terminates mlockall(MCL_FUTURE). */
vm_map_lock(map);
vm_map_modflags(map, 0, MAP_WIREFUTURE);
vm_map_unlock(map);
} else {
error = vmspace_exec(p, sv_minuser, sv->sv_maxuser);
if (error)
return (error);
vmspace = p->p_vmspace;
map = &vmspace->vm_map;
}
/* Map a shared page */
obj = sv->sv_shared_page_obj;
if (obj != NULL) {
vm_object_reference(obj);
error = vm_map_fixed(map, obj, 0,
sv->sv_shared_page_base, sv->sv_shared_page_len,
VM_PROT_READ | VM_PROT_EXECUTE,
VM_PROT_READ | VM_PROT_EXECUTE,
MAP_INHERIT_SHARE | MAP_ACC_NO_CHARGE);
if (error != KERN_SUCCESS) {
vm_object_deallocate(obj);
return (vm_mmap_to_errno(error));
}
}
/* Allocate a new stack */
if (imgp->stack_sz != 0) {
ssiz = trunc_page(imgp->stack_sz);
PROC_LOCK(p);
lim_rlimit_proc(p, RLIMIT_STACK, &rlim_stack);
PROC_UNLOCK(p);
if (ssiz > rlim_stack.rlim_max)
ssiz = rlim_stack.rlim_max;
if (ssiz > rlim_stack.rlim_cur) {
rlim_stack.rlim_cur = ssiz;
kern_setrlimit(curthread, RLIMIT_STACK, &rlim_stack);
}
} else if (sv->sv_maxssiz != NULL) {
ssiz = *sv->sv_maxssiz;
} else {
ssiz = maxssiz;
}
stack_addr = sv->sv_usrstack - ssiz;
error = vm_map_stack(map, stack_addr, (vm_size_t)ssiz,
obj != NULL && imgp->stack_prot != 0 ? imgp->stack_prot :
sv->sv_stackprot, VM_PROT_ALL, MAP_STACK_GROWS_DOWN);
if (error != KERN_SUCCESS)
return (vm_mmap_to_errno(error));
/*
* vm_ssize and vm_maxsaddr are somewhat antiquated concepts, but they
* are still used to enforce the stack rlimit on the process stack.
*/
vmspace->vm_ssize = sgrowsiz >> PAGE_SHIFT;
vmspace->vm_maxsaddr = (char *)stack_addr;
return (0);
}
/*
* Copy out argument and environment strings from the old process address
* space into the temporary string buffer.
*/
int
exec_copyin_args(struct image_args *args, const char *fname,
enum uio_seg segflg, char **argv, char **envv)
{
u_long arg, env;
int error;
bzero(args, sizeof(*args));
if (argv == NULL)
return (EFAULT);
/*
* Allocate demand-paged memory for the file name, argument, and
* environment strings.
*/
error = exec_alloc_args(args);
if (error != 0)
return (error);
/*
* Copy the file name.
*/
error = exec_args_add_fname(args, fname, segflg);
if (error != 0)
goto err_exit;
/*
* extract arguments first
*/
for (;;) {
error = fueword(argv++, &arg);
if (error == -1) {
error = EFAULT;
goto err_exit;
}
if (arg == 0)
break;
error = exec_args_add_arg(args, (char *)(uintptr_t)arg,
UIO_USERSPACE);
if (error != 0)
goto err_exit;
}
/*
* extract environment strings
*/
if (envv) {
for (;;) {
error = fueword(envv++, &env);
if (error == -1) {
error = EFAULT;
goto err_exit;
}
if (env == 0)
break;
error = exec_args_add_env(args,
(char *)(uintptr_t)env, UIO_USERSPACE);
if (error != 0)
goto err_exit;
}
}
return (0);
err_exit:
exec_free_args(args);
return (error);
}
int
exec_copyin_data_fds(struct thread *td, struct image_args *args,
const void *data, size_t datalen, const int *fds, size_t fdslen)
{
struct filedesc *ofdp;
const char *p;
int *kfds;
int error;
memset(args, '\0', sizeof(*args));
ofdp = td->td_proc->p_fd;
if (datalen >= ARG_MAX || fdslen > ofdp->fd_lastfile + 1)
return (E2BIG);
error = exec_alloc_args(args);
if (error != 0)
return (error);
args->begin_argv = args->buf;
args->stringspace = ARG_MAX;
if (datalen > 0) {
/*
* Argument buffer has been provided. Copy it into the
* kernel as a single string and add a terminating null
* byte.
*/
error = copyin(data, args->begin_argv, datalen);
if (error != 0)
goto err_exit;
args->begin_argv[datalen] = '\0';
args->endp = args->begin_argv + datalen + 1;
args->stringspace -= datalen + 1;
/*
* Traditional argument counting. Count the number of
* null bytes.
*/
for (p = args->begin_argv; p < args->endp; ++p)
if (*p == '\0')
++args->argc;
} else {
/* No argument buffer provided. */
args->endp = args->begin_argv;
}
/* Create new file descriptor table. */
kfds = malloc(fdslen * sizeof(int), M_TEMP, M_WAITOK);
error = copyin(fds, kfds, fdslen * sizeof(int));
if (error != 0) {
free(kfds, M_TEMP);
goto err_exit;
}
error = fdcopy_remapped(ofdp, kfds, fdslen, &args->fdp);
free(kfds, M_TEMP);
if (error != 0)
goto err_exit;
return (0);
err_exit:
exec_free_args(args);
return (error);
}
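The "traditional argument counting" above can be exercised with a small userspace sketch; count_packed_args() is a hypothetical helper that mirrors the kernel loop, appending one terminating NUL past the caller-supplied data and then counting NUL bytes over the whole region.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Count arguments in a packed, NUL-separated argument buffer the way
 * exec_copyin_data_fds() does. 'buf' must have room for datalen + 1
 * bytes; the function appends the terminating null byte itself.
 */
static int
count_packed_args(char *buf, size_t datalen)
{
	const char *p, *endp;
	int argc;

	buf[datalen] = '\0';		/* terminating null byte */
	endp = buf + datalen + 1;
	argc = 0;
	for (p = buf; p < endp; ++p)
		if (*p == '\0')
			++argc;
	return (argc);
}
```

Note one property of this scheme: a caller-supplied trailing NUL produces an extra, empty argument, since every NUL byte in the region is counted.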
struct exec_args_kva {
vm_offset_t addr;
u_int gen;
SLIST_ENTRY(exec_args_kva) next;
};
DPCPU_DEFINE_STATIC(struct exec_args_kva *, exec_args_kva);
static SLIST_HEAD(, exec_args_kva) exec_args_kva_freelist;
static struct mtx exec_args_kva_mtx;
static u_int exec_args_gen;
static void
exec_prealloc_args_kva(void *arg __unused)
{
struct exec_args_kva *argkva;
u_int i;
SLIST_INIT(&exec_args_kva_freelist);
mtx_init(&exec_args_kva_mtx, "exec args kva", NULL, MTX_DEF);
for (i = 0; i < exec_map_entries; i++) {
argkva = malloc(sizeof(*argkva), M_PARGS, M_WAITOK);
argkva->addr = kmap_alloc_wait(exec_map, exec_map_entry_size);
argkva->gen = exec_args_gen;
SLIST_INSERT_HEAD(&exec_args_kva_freelist, argkva, next);
}
}
SYSINIT(exec_args_kva, SI_SUB_EXEC, SI_ORDER_ANY, exec_prealloc_args_kva, NULL);
static vm_offset_t
exec_alloc_args_kva(void **cookie)
{
struct exec_args_kva *argkva;
argkva = (void *)atomic_readandclear_ptr(
(uintptr_t *)DPCPU_PTR(exec_args_kva));
if (argkva == NULL) {
mtx_lock(&exec_args_kva_mtx);
while ((argkva = SLIST_FIRST(&exec_args_kva_freelist)) == NULL)
(void)mtx_sleep(&exec_args_kva_freelist,
&exec_args_kva_mtx, 0, "execkva", 0);
SLIST_REMOVE_HEAD(&exec_args_kva_freelist, next);
mtx_unlock(&exec_args_kva_mtx);
}
*(struct exec_args_kva **)cookie = argkva;
return (argkva->addr);
}
static void
exec_release_args_kva(struct exec_args_kva *argkva, u_int gen)
{
vm_offset_t base;
base = argkva->addr;
if (argkva->gen != gen) {
(void)vm_map_madvise(exec_map, base, base + exec_map_entry_size,
MADV_FREE);
argkva->gen = gen;
}
if (!atomic_cmpset_ptr((uintptr_t *)DPCPU_PTR(exec_args_kva),
(uintptr_t)NULL, (uintptr_t)argkva)) {
mtx_lock(&exec_args_kva_mtx);
SLIST_INSERT_HEAD(&exec_args_kva_freelist, argkva, next);
wakeup_one(&exec_args_kva_freelist);
mtx_unlock(&exec_args_kva_mtx);
}
}
static void
exec_free_args_kva(void *cookie)
{
exec_release_args_kva(cookie, exec_args_gen);
}
static void
exec_args_kva_lowmem(void *arg __unused)
{
SLIST_HEAD(, exec_args_kva) head;
struct exec_args_kva *argkva;
u_int gen;
int i;
gen = atomic_fetchadd_int(&exec_args_gen, 1) + 1;
/*
* Force an madvise of each KVA range. Any currently allocated ranges
* will have MADV_FREE applied once they are freed.
*/
SLIST_INIT(&head);
mtx_lock(&exec_args_kva_mtx);
SLIST_SWAP(&head, &exec_args_kva_freelist, exec_args_kva);
mtx_unlock(&exec_args_kva_mtx);
while ((argkva = SLIST_FIRST(&head)) != NULL) {
SLIST_REMOVE_HEAD(&head, next);
exec_release_args_kva(argkva, gen);
}
CPU_FOREACH(i) {
argkva = (void *)atomic_readandclear_ptr(
(uintptr_t *)DPCPU_ID_PTR(i, exec_args_kva));
if (argkva != NULL)
exec_release_args_kva(argkva, gen);
}
}
EVENTHANDLER_DEFINE(vm_lowmem, exec_args_kva_lowmem, NULL,
EVENTHANDLER_PRI_ANY);
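The lock-free fast path in exec_alloc_args_kva() and exec_release_args_kva() boils down to a one-slot cache built from an atomic swap and a compare-and-set, with a mutex-protected freelist as the slow path. Below is a userspace model using C11 atomics; a single global slot stands in for the per-CPU pointer, and NULL stands in for falling back to the shared freelist.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

static _Atomic(void *) slot;	/* models one DPCPU exec_args_kva slot */

/* Claim the cached item, if any (models atomic_readandclear_ptr()). */
static void *
cache_get(void)
{
	return (atomic_exchange(&slot, NULL));
}

/*
 * Try to reinstall an item; succeeds only when the slot is empty, as
 * with the atomic_cmpset_ptr() call in exec_release_args_kva().
 */
static int
cache_put(void *p)
{
	void *expect = NULL;

	return (atomic_compare_exchange_strong(&slot, &expect, p));
}
```

When cache_put() fails (the slot is already occupied), the kernel version pushes the item onto the shared freelist under the mutex instead.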
/*
* Allocate temporary demand-paged, zero-filled memory for the file name,
* argument, and environment strings.
*/
int
exec_alloc_args(struct image_args *args)
{
args->buf = (char *)exec_alloc_args_kva(&args->bufkva);
return (0);
}
void
exec_free_args(struct image_args *args)
{
if (args->buf != NULL) {
exec_free_args_kva(args->bufkva);
args->buf = NULL;
}
if (args->fname_buf != NULL) {
free(args->fname_buf, M_TEMP);
args->fname_buf = NULL;
}
if (args->fdp != NULL)
fdescfree_remapped(args->fdp);
}
/*
* A set of functions to fill struct image_args.
*
* NOTE: exec_args_add_fname() must be called (possibly with a NULL
* fname) before the other functions. All exec_args_add_arg() calls must
* be made before any exec_args_add_env() calls. exec_args_adjust_args()
* may be called any time after exec_args_add_fname().
*
* exec_args_add_fname() - install path to be executed
* exec_args_add_arg() - append an argument string
* exec_args_add_env() - append an env string
* exec_args_adjust_args() - adjust location of the argument list to
* allow new arguments to be prepended
*/
int
exec_args_add_fname(struct image_args *args, const char *fname,
enum uio_seg segflg)
{
int error;
size_t length;
KASSERT(args->fname == NULL, ("fname already appended"));
KASSERT(args->endp == NULL, ("already appending to args"));
if (fname != NULL) {
args->fname = args->buf;
error = segflg == UIO_SYSSPACE ?
copystr(fname, args->fname, PATH_MAX, &length) :
copyinstr(fname, args->fname, PATH_MAX, &length);
if (error != 0)
return (error == ENAMETOOLONG ? E2BIG : error);
} else
length = 0;
/* Set up for _arg_*()/_env_*() */
args->endp = args->buf + length;
/* begin_argv must be set and kept updated */
args->begin_argv = args->endp;
KASSERT(exec_map_entry_size - length >= ARG_MAX,
("too little space remaining for arguments %zu < %zu",
exec_map_entry_size - length, (size_t)ARG_MAX));
args->stringspace = ARG_MAX;
return (0);
}
static int
exec_args_add_str(struct image_args *args, const char *str,
enum uio_seg segflg, int *countp)
{
int error;
size_t length;
KASSERT(args->endp != NULL, ("endp not initialized"));
KASSERT(args->begin_argv != NULL, ("begin_argp not initialized"));
error = (segflg == UIO_SYSSPACE) ?
copystr(str, args->endp, args->stringspace, &length) :
copyinstr(str, args->endp, args->stringspace, &length);
if (error != 0)
return (error == ENAMETOOLONG ? E2BIG : error);
args->stringspace -= length;
args->endp += length;
(*countp)++;
return (0);
}
int
exec_args_add_arg(struct image_args *args, const char *argp,
enum uio_seg segflg)
{
KASSERT(args->envc == 0, ("appending args after env"));
return (exec_args_add_str(args, argp, segflg, &args->argc));
}
int
exec_args_add_env(struct image_args *args, const char *envp,
enum uio_seg segflg)
{
if (args->envc == 0)
args->begin_envv = args->endp;
return (exec_args_add_str(args, envp, segflg, &args->envc));
}
int
exec_args_adjust_args(struct image_args *args, size_t consume, ssize_t extend)
{
ssize_t offset;
KASSERT(args->endp != NULL, ("endp not initialized"));
KASSERT(args->begin_argv != NULL, ("begin_argp not initialized"));
offset = extend - consume;
if (args->stringspace < offset)
return (E2BIG);
memmove(args->begin_argv + extend, args->begin_argv + consume,
args->endp - args->begin_argv - consume);
if (args->envc > 0)
args->begin_envv += offset;
args->endp += offset;
args->stringspace -= offset;
return (0);
}
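The shift performed by exec_args_adjust_args() can be sketched in userspace. In the model below, adjust_args() is a hypothetical helper where plain longs stand in for the struct image_args fields: it drops 'consume' bytes at the front of the packed string area and opens up 'extend' fresh bytes there, so new arguments can be prepended.

```c
#include <assert.h>
#include <string.h>

/*
 * Model of exec_args_adjust_args(): shift the packed strings so that
 * 'extend' bytes of fresh space open up at the front while the first
 * 'consume' bytes are dropped. Returns the new used length, or -1
 * when the remaining free space would be exhausted.
 */
static long
adjust_args(char *buf, long used, long space, long consume, long extend)
{
	long offset;

	offset = extend - consume;
	if (space < offset)
		return (-1);
	memmove(buf + extend, buf + consume, used - consume);
	return (used + offset);
}
```

Dropping the 2-byte "a\0" and extending by 4 grows a 4-byte area to 6 bytes, with the surviving string now starting at offset 4.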
char *
exec_args_get_begin_envv(struct image_args *args)
{
KASSERT(args->endp != NULL, ("endp not initialized"));
if (args->envc > 0)
return (args->begin_envv);
return (args->endp);
}
/*
* Copy strings out to the new process address space, constructing new arg
* and env vector tables. Return a pointer to the base so that it can be used
* as the initial stack pointer.
*/
register_t *
exec_copyout_strings(struct image_params *imgp)
{
int argc, envc;
char **vectp;
char *stringp;
uintptr_t destp;
register_t *stack_base;
struct ps_strings *arginfo;
struct proc *p;
size_t execpath_len;
int szsigcode, szps;
char canary[sizeof(long) * 8];
szps = sizeof(pagesizes[0]) * MAXPAGESIZES;
/*
* Calculate string base and vector table pointers.
* Also deal with signal trampoline code for this exec type.
*/
if (imgp->execpath != NULL && imgp->auxargs != NULL)
execpath_len = strlen(imgp->execpath) + 1;
else
execpath_len = 0;
p = imgp->proc;
szsigcode = 0;
arginfo = (struct ps_strings *)p->p_sysent->sv_psstrings;
if (p->p_sysent->sv_sigcode_base == 0) {
if (p->p_sysent->sv_szsigcode != NULL)
szsigcode = *(p->p_sysent->sv_szsigcode);
}
destp = (uintptr_t)arginfo;
/*
* install sigcode
*/
if (szsigcode != 0) {
destp -= szsigcode;
destp = rounddown2(destp, sizeof(void *));
copyout(p->p_sysent->sv_sigcode, (void *)destp, szsigcode);
}
/*
* Copy the image path for the rtld.
*/
if (execpath_len != 0) {
destp -= execpath_len;
destp = rounddown2(destp, sizeof(void *));
imgp->execpathp = destp;
copyout(imgp->execpath, (void *)destp, execpath_len);
}
/*
* Prepare the canary for SSP.
*/
arc4rand(canary, sizeof(canary), 0);
destp -= sizeof(canary);
imgp->canary = destp;
copyout(canary, (void *)destp, sizeof(canary));
imgp->canarylen = sizeof(canary);
/*
* Prepare the pagesizes array.
*/
destp -= szps;
destp = rounddown2(destp, sizeof(void *));
imgp->pagesizes = destp;
copyout(pagesizes, (void *)destp, szps);
imgp->pagesizeslen = szps;
destp -= ARG_MAX - imgp->args->stringspace;
destp = rounddown2(destp, sizeof(void *));
vectp = (char **)destp;
if (imgp->auxargs) {
/*
* Allocate room on the stack for the ELF auxargs
* array. It has up to AT_COUNT entries.
*/
vectp -= howmany(AT_COUNT * sizeof(Elf_Auxinfo),
sizeof(*vectp));
}
/*
* Allocate room for the argv[] and env vectors including the
* terminating NULL pointers.
*/
vectp -= imgp->args->argc + 1 + imgp->args->envc + 1;
/*
* vectp also becomes our initial stack base
*/
stack_base = (register_t *)vectp;
stringp = imgp->args->begin_argv;
argc = imgp->args->argc;
envc = imgp->args->envc;
/*
* Copy out strings - arguments and environment.
*/
copyout(stringp, (void *)destp, ARG_MAX - imgp->args->stringspace);
/*
* Fill in "ps_strings" struct for ps, w, etc.
*/
suword(&arginfo->ps_argvstr, (long)(intptr_t)vectp);
suword32(&arginfo->ps_nargvstr, argc);
/*
* Fill in argument portion of vector table.
*/
for (; argc > 0; --argc) {
suword(vectp++, (long)(intptr_t)destp);
while (*stringp++ != 0)
destp++;
destp++;
}
/* a null vector table pointer separates the argp's from the envp's */
suword(vectp++, 0);
suword(&arginfo->ps_envstr, (long)(intptr_t)vectp);
suword32(&arginfo->ps_nenvstr, envc);
/*
* Fill in environment portion of vector table.
*/
for (; envc > 0; --envc) {
suword(vectp++, (long)(intptr_t)destp);
while (*stringp++ != 0)
destp++;
destp++;
}
/* end of vector table is a null pointer */
suword(vectp, 0);
return (stack_base);
}
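The vector-table sizing above can be checked with a small model. stack_vector_slots() below is a hypothetical helper: the auxarg entry size and count are passed in as parameters rather than taken from the real Elf_Auxinfo and AT_COUNT definitions, and HOWMANY reimplements the kernel's howmany() macro.

```c
#include <assert.h>
#include <stddef.h>

#define HOWMANY(x, y)	(((x) + ((y) - 1)) / (y))	/* howmany() */

/*
 * Model of the vector-table sizing in exec_copyout_strings(): below
 * the word-aligned string area, reserve room for the ELF auxargs
 * (when present) and for the argv[] and envv[] vectors including
 * their two NULL terminators. Returns the number of pointer-sized
 * slots reserved.
 */
static size_t
stack_vector_slots(int argc, int envc, int have_auxargs,
    size_t at_count, size_t auxinfo_size, size_t ptr_size)
{
	size_t slots;

	slots = 0;
	if (have_auxargs)
		slots += HOWMANY(at_count * auxinfo_size, ptr_size);
	slots += argc + 1 + envc + 1;	/* vectors + NULL terminators */
	return (slots);
}
```

With argc=2, envc=3 and no auxargs that is 7 slots; an auxargs array of 10 16-byte entries on an 8-byte-pointer machine adds 20 more.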
/*
* Check permissions of file to execute.
* Called with imgp->vp locked.
* Return 0 for success or error code on failure.
*/
int
exec_check_permissions(struct image_params *imgp)
{
struct vnode *vp = imgp->vp;
struct vattr *attr = imgp->attr;
struct thread *td;
int error, writecount;
td = curthread;
/* Get file attributes */
error = VOP_GETATTR(vp, attr, td->td_ucred);
if (error)
return (error);
#ifdef MAC
error = mac_vnode_check_exec(td->td_ucred, imgp->vp, imgp);
if (error)
return (error);
#endif
/*
* 1) Check if file execution is disabled for the filesystem that
* this file resides on.
* 2) Ensure that at least one execute bit is on. Otherwise, a
* privileged user will always succeed, and we don't want this
* to happen unless the file really is executable.
* 3) Ensure that the file is a regular file.
*/
if ((vp->v_mount->mnt_flag & MNT_NOEXEC) ||
(attr->va_mode & (S_IXUSR | S_IXGRP | S_IXOTH)) == 0 ||
(attr->va_type != VREG))
return (EACCES);
/*
* Zero length files can't be exec'd
*/
if (attr->va_size == 0)
return (ENOEXEC);
/*
* Check for execute permission to file based on current credentials.
*/
error = VOP_ACCESS(vp, VEXEC, td->td_ucred, td);
if (error)
return (error);
/*
* Check number of open-for-writes on the file and deny execution
* if there are any.
*/
error = VOP_GET_WRITECOUNT(vp, &writecount);
if (error != 0)
return (error);
if (writecount != 0)
return (ETXTBSY);
/*
* Call filesystem specific open routine (which does nothing in the
* general case).
*/
error = VOP_OPEN(vp, FREAD, td->td_ucred, td, NULL);
if (error == 0)
imgp->opened = 1;
return (error);
}
/*
* Exec handler registration
*/
int
exec_register(const struct execsw *execsw_arg)
{
const struct execsw **es, **xs, **newexecsw;
u_int count = 2; /* New slot and trailing NULL */
if (execsw)
for (es = execsw; *es; es++)
count++;
newexecsw = malloc(count * sizeof(*es), M_TEMP, M_WAITOK);
xs = newexecsw;
if (execsw)
for (es = execsw; *es; es++)
*xs++ = *es;
*xs++ = execsw_arg;
*xs = NULL;
if (execsw)
free(execsw, M_TEMP);
execsw = newexecsw;
return (0);
}
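The grow-by-one, NULL-terminated table pattern used by exec_register() can be reproduced in userspace; register_handler() below is a hypothetical equivalent operating on opaque pointers (error handling for malloc failure is omitted, as this is a sketch).

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Model of exec_register(): the table is a NULL-terminated array of
 * pointers that is reallocated with one extra slot (plus the trailing
 * NULL) on every registration; the old table is freed.
 */
static const void **
register_handler(const void **table, const void *handler)
{
	const void **es, **xs, **newtab;
	unsigned count = 2;	/* new slot and trailing NULL */

	if (table != NULL)
		for (es = table; *es != NULL; es++)
			count++;
	newtab = malloc(count * sizeof(*newtab));
	xs = newtab;
	if (table != NULL)
		for (es = table; *es != NULL; es++)
			*xs++ = *es;
	*xs++ = handler;
	*xs = NULL;
	free(table);
	return (newtab);
}
```

exec_unregister() follows the same shape in reverse: count the survivors, allocate, and copy everything except the entry being removed.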
int
exec_unregister(const struct execsw *execsw_arg)
{
const struct execsw **es, **xs, **newexecsw;
int count = 1;
if (execsw == NULL)
panic("unregister with no handlers left?\n");
for (es = execsw; *es; es++) {
if (*es == execsw_arg)
break;
}
if (*es == NULL)
return (ENOENT);
for (es = execsw; *es; es++)
if (*es != execsw_arg)
count++;
newexecsw = malloc(count * sizeof(*es), M_TEMP, M_WAITOK);
xs = newexecsw;
for (es = execsw; *es; es++)
if (*es != execsw_arg)
*xs++ = *es;
*xs = NULL;
if (execsw)
free(execsw, M_TEMP);
execsw = newexecsw;
return (0);
}
Index: projects/clang800-import/sys/kern/kern_kcov.c
===================================================================
--- projects/clang800-import/sys/kern/kern_kcov.c (revision 343955)
+++ projects/clang800-import/sys/kern/kern_kcov.c (revision 343956)
@@ -1,566 +1,566 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (C) 2018 The FreeBSD Foundation. All rights reserved.
* Copyright (C) 2018, 2019 Andrew Turner
*
* This software was developed by Mitchell Horne under sponsorship of
* the FreeBSD Foundation.
*
* This software was developed by SRI International and the University of
* Cambridge Computer Laboratory under DARPA/AFRL contract FA8750-10-C-0237
* ("CTSRD"), as part of the DARPA CRASH research programme.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/conf.h>
#include <sys/kcov.h>
#include <sys/kernel.h>
#include <sys/limits.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mman.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/rwlock.h>
#include <sys/sysctl.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/vm_extern.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/vm_pager.h>
MALLOC_DEFINE(M_KCOV_INFO, "kcovinfo", "KCOV info type");
#define KCOV_ELEMENT_SIZE sizeof(uint64_t)
/*
* To know what the code can safely perform at any point in time we use a
* state machine. In the normal case the state transitions are:
*
* OPEN -> READY -> RUNNING -> DYING
*   |      | ^        |        ^  ^
*   |      | +--------+        |  |
*   |      +-------------------+  |
*   +-----------------------------+
*
* The states are:
* OPEN: The kcov fd has been opened, but no buffer is available to store
* coverage data.
* READY: The buffer to store coverage data has been allocated. Userspace
* can set this by using ioctl(fd, KIOSETBUFSIZE, entries);. When
* this has been set the buffer can be written to by the kernel,
* and mmaped by userspace.
* RUNNING: The coverage probes are able to store coverage data in the buffer.
* This is entered with ioctl(fd, KIOENABLE, mode);. The RUNNING state
* can be exited by ioctl(fd, KIODISABLE); or by thread exit, either of
* which returns to the READY state to allow tracing to be reused, or by
* closing the kcov fd to enter the DYING state.
* DYING: The fd has been closed. All states can enter into this state when
* userspace closes the kcov fd.
*
* We need to be careful when moving into and out of the RUNNING state.
* Because an interrupt may fire during the transition, the ordering of
* memory operations matters: struct kcov_info must remain valid for the
* tracing functions at every point.
*
* When moving into the RUNNING state, prior stores to struct kcov_info
* must be observed before the state is set. An interrupt that calls into
* one of the coverage functions may then fire at any point during
* enabling and still see a consistent struct kcov_info.
*
* When moving out of the RUNNING state, any later stores to struct
* kcov_info must be observed after the state is set. As with entering,
* this presents a consistent struct kcov_info to interrupts.
*/
typedef enum {
KCOV_STATE_INVALID,
KCOV_STATE_OPEN, /* The device is open, but with no buffer */
KCOV_STATE_READY, /* The buffer has been allocated */
KCOV_STATE_RUNNING, /* Recording trace data */
KCOV_STATE_DYING, /* The fd was closed */
} kcov_state_t;
/*
* (l) Set while holding the kcov_lock mutex and not in the RUNNING state.
* (o) Only set once while in the OPEN state. Cleaned up while in the DYING
* state, and with no thread associated with the struct kcov_info.
* (s) Set atomically to enter or exit the RUNNING state, non-atomically
* otherwise. See above for a description of the other constraints while
* moving into or out of the RUNNING state.
*/
struct kcov_info {
struct thread *thread; /* (l) */
vm_object_t bufobj; /* (o) */
vm_offset_t kvaddr; /* (o) */
size_t entries; /* (o) */
size_t bufsize; /* (o) */
kcov_state_t state; /* (s) */
int mode; /* (l) */
bool mmap;
};
/* Prototypes */
static d_open_t kcov_open;
static d_close_t kcov_close;
static d_mmap_single_t kcov_mmap_single;
static d_ioctl_t kcov_ioctl;
static int kcov_alloc(struct kcov_info *info, size_t entries);
static void kcov_init(const void *unused);
static struct cdevsw kcov_cdevsw = {
.d_version = D_VERSION,
.d_open = kcov_open,
.d_close = kcov_close,
.d_mmap_single = kcov_mmap_single,
.d_ioctl = kcov_ioctl,
.d_name = "kcov",
};
SYSCTL_NODE(_kern, OID_AUTO, kcov, CTLFLAG_RW, 0, "Kernel coverage");
static u_int kcov_max_entries = KCOV_MAXENTRIES;
SYSCTL_UINT(_kern_kcov, OID_AUTO, max_entries, CTLFLAG_RW,
&kcov_max_entries, 0,
"Maximum number of entries in the kcov buffer");
static struct mtx kcov_lock;
static int active_count;
static struct kcov_info *
get_kinfo(struct thread *td)
{
struct kcov_info *info;
/* We might have a NULL thread when releasing the secondary CPUs */
if (td == NULL)
return (NULL);
/*
* We are in an interrupt, stop tracing as it is not explicitly
* part of a syscall.
*/
if (td->td_intr_nesting_level > 0 || td->td_intr_frame != NULL)
return (NULL);
/*
* If info is NULL or the state is not running we are not tracing.
*/
info = td->td_kcov_info;
if (info == NULL ||
atomic_load_acq_int(&info->state) != KCOV_STATE_RUNNING)
return (NULL);
return (info);
}
static void
trace_pc(uintptr_t ret)
{
struct thread *td;
struct kcov_info *info;
uint64_t *buf, index;
td = curthread;
info = get_kinfo(td);
if (info == NULL)
return;
/*
* Check we are in the PC-trace mode.
*/
if (info->mode != KCOV_MODE_TRACE_PC)
return;
KASSERT(info->kvaddr != 0,
("__sanitizer_cov_trace_pc: NULL buf while running"));
buf = (uint64_t *)info->kvaddr;
/* The first entry of the buffer holds the index */
index = buf[0];
if (index + 2 > info->entries)
return;
buf[index + 1] = ret;
buf[0] = index + 1;
}
static bool
trace_cmp(uint64_t type, uint64_t arg1, uint64_t arg2, uint64_t ret)
{
struct thread *td;
struct kcov_info *info;
uint64_t *buf, index;
td = curthread;
info = get_kinfo(td);
if (info == NULL)
return (false);
/*
* Check we are in the comparison-trace mode.
*/
if (info->mode != KCOV_MODE_TRACE_CMP)
return (false);
KASSERT(info->kvaddr != 0,
("__sanitizer_cov_trace_pc: NULL buf while running"));
buf = (uint64_t *)info->kvaddr;
/* The first entry of the buffer holds the index */
index = buf[0];
/* Check we have space to store all elements */
if (index * 4 + 4 + 1 > info->entries)
return (false);
buf[index * 4 + 1] = type;
buf[index * 4 + 2] = arg1;
buf[index * 4 + 3] = arg2;
buf[index * 4 + 4] = ret;
buf[0] = index + 1;
return (true);
}
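The buffer layout assumed by trace_cmp() (word 0 holds the record count; each record is four consecutive words) can be modelled directly in userspace; record_cmp() below is a hypothetical mirror of the kernel logic.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Userspace model of the KCOV_MODE_TRACE_CMP buffer: word 0 holds the
 * record count, and record i occupies words 4*i+1 .. 4*i+4 as
 * (type, arg1, arg2, return address). Returns false once a full
 * record no longer fits in 'entries' words.
 */
static bool
record_cmp(uint64_t *buf, size_t entries, uint64_t type,
    uint64_t arg1, uint64_t arg2, uint64_t ret)
{
	uint64_t index;

	index = buf[0];
	if (index * 4 + 4 + 1 > entries)
		return (false);
	buf[index * 4 + 1] = type;
	buf[index * 4 + 2] = arg1;
	buf[index * 4 + 3] = arg2;
	buf[index * 4 + 4] = ret;
	buf[0] = index + 1;
	return (true);
}
```

A 9-word buffer therefore holds exactly two records: the third insertion fails the bounds check.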
/*
* The fd is being closed, cleanup everything we can.
*/
static void
kcov_mmap_cleanup(void *arg)
{
struct kcov_info *info = arg;
struct thread *thread;
mtx_lock_spin(&kcov_lock);
/*
* Move to KCOV_STATE_DYING to stop adding new entries.
*
* If the thread is running we need to wait until thread exit to
* clean up as it may currently be adding a new entry. If this is
* the case being in KCOV_STATE_DYING will signal that the buffer
* needs to be cleaned up.
*/
atomic_store_int(&info->state, KCOV_STATE_DYING);
atomic_thread_fence_seq_cst();
thread = info->thread;
mtx_unlock_spin(&kcov_lock);
if (thread != NULL)
return;
/*
* We can safely clean up the info struct as it is in the
* KCOV_STATE_DYING state with no thread associated.
*
* The KCOV_STATE_DYING stops new threads from using it.
* The lack of a thread means nothing is currently using the buffers.
*/
if (info->kvaddr != 0) {
pmap_qremove(info->kvaddr, info->bufsize / PAGE_SIZE);
kva_free(info->kvaddr, info->bufsize);
}
if (info->bufobj != NULL && !info->mmap)
vm_object_deallocate(info->bufobj);
free(info, M_KCOV_INFO);
}
static int
kcov_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
{
struct kcov_info *info;
int error;
info = malloc(sizeof(struct kcov_info), M_KCOV_INFO, M_ZERO | M_WAITOK);
info->state = KCOV_STATE_OPEN;
info->thread = NULL;
info->mode = -1;
info->mmap = false;
if ((error = devfs_set_cdevpriv(info, kcov_mmap_cleanup)) != 0)
kcov_mmap_cleanup(info);
return (error);
}
static int
kcov_close(struct cdev *dev, int fflag, int devtype, struct thread *td)
{
struct kcov_info *info;
int error;
if ((error = devfs_get_cdevpriv((void **)&info)) != 0)
return (error);
KASSERT(info != NULL, ("kcov_close with no kcov_info structure"));
/* Trying to close, but haven't disabled */
if (info->state == KCOV_STATE_RUNNING)
return (EBUSY);
return (0);
}
static int
kcov_mmap_single(struct cdev *dev, vm_ooffset_t *offset, vm_size_t size,
struct vm_object **object, int nprot)
{
struct kcov_info *info;
int error;
if ((nprot & (PROT_EXEC | PROT_READ | PROT_WRITE)) !=
(PROT_READ | PROT_WRITE))
return (EINVAL);
if ((error = devfs_get_cdevpriv((void **)&info)) != 0)
return (error);
if (info->kvaddr == 0 || size / KCOV_ELEMENT_SIZE != info->entries ||
info->mmap != false)
return (EINVAL);
info->mmap = true;
*offset = 0;
*object = info->bufobj;
return (0);
}
static int
kcov_alloc(struct kcov_info *info, size_t entries)
{
size_t n, pages;
vm_page_t *m;
KASSERT(info->kvaddr == 0, ("kcov_alloc: Already have a buffer"));
KASSERT(info->state == KCOV_STATE_OPEN,
("kcov_alloc: Not in open state (%x)", info->state));
if (entries < 2 || entries > kcov_max_entries)
return (EINVAL);
/* Align to page size so mmap can't access other kernel memory */
info->bufsize = roundup2(entries * KCOV_ELEMENT_SIZE, PAGE_SIZE);
pages = info->bufsize / PAGE_SIZE;
if ((info->kvaddr = kva_alloc(info->bufsize)) == 0)
return (ENOMEM);
info->bufobj = vm_pager_allocate(OBJT_PHYS, 0, info->bufsize,
PROT_READ | PROT_WRITE, 0, curthread->td_ucred);
m = malloc(sizeof(*m) * pages, M_TEMP, M_WAITOK);
VM_OBJECT_WLOCK(info->bufobj);
for (n = 0; n < pages; n++) {
m[n] = vm_page_grab(info->bufobj, n,
VM_ALLOC_NOBUSY | VM_ALLOC_ZERO | VM_ALLOC_WIRED);
m[n]->valid = VM_PAGE_BITS_ALL;
}
VM_OBJECT_WUNLOCK(info->bufobj);
pmap_qenter(info->kvaddr, m, pages);
free(m, M_TEMP);
info->entries = entries;
return (0);
}
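The page rounding of the buffer size can be checked in isolation; kcov_bufsize() below is a hypothetical helper in which the element and page sizes are parameters rather than the kernel's KCOV_ELEMENT_SIZE and PAGE_SIZE, and ROUNDUP2 reimplements roundup2() for power-of-two alignments.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ROUNDUP2(x, y)	(((x) + ((y) - 1)) & ~((uintptr_t)(y) - 1))

/*
 * Model of the sizing in kcov_alloc(): the total buffer size is
 * rounded up to a whole number of pages so an mmap of the buffer can
 * never expose unrelated kernel memory past the entries.
 */
static size_t
kcov_bufsize(size_t entries, size_t element_size, size_t page_size)
{
	return (ROUNDUP2(entries * element_size, page_size));
}
```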
static int
kcov_ioctl(struct cdev *dev, u_long cmd, caddr_t data, int fflag __unused,
struct thread *td)
{
struct kcov_info *info;
int mode, error;
if ((error = devfs_get_cdevpriv((void **)&info)) != 0)
return (error);
if (cmd == KIOSETBUFSIZE) {
/*
* Set the size of the coverage buffer. Should be called
* before enabling coverage collection for that thread.
*/
if (info->state != KCOV_STATE_OPEN) {
return (EBUSY);
}
error = kcov_alloc(info, *(u_int *)data);
if (error == 0)
info->state = KCOV_STATE_READY;
return (error);
}
mtx_lock_spin(&kcov_lock);
switch (cmd) {
case KIOENABLE:
if (info->state != KCOV_STATE_READY) {
error = EBUSY;
break;
}
if (td->td_kcov_info != NULL) {
error = EINVAL;
break;
}
mode = *(int *)data;
if (mode != KCOV_MODE_TRACE_PC && mode != KCOV_MODE_TRACE_CMP) {
error = EINVAL;
break;
}
/* Let's hope nobody opens this 2 billion times */
KASSERT(active_count < INT_MAX,
("%s: Open too many times", __func__));
active_count++;
if (active_count == 1) {
cov_register_pc(&trace_pc);
cov_register_cmp(&trace_cmp);
}
KASSERT(info->thread == NULL,
("Enabling kcov when already enabled"));
info->thread = td;
info->mode = mode;
/*
* Ensure the mode has been set before starting coverage
* tracing.
*/
atomic_store_rel_int(&info->state, KCOV_STATE_RUNNING);
td->td_kcov_info = info;
break;
case KIODISABLE:
/* Only the currently enabled thread may disable itself */
if (info->state != KCOV_STATE_RUNNING ||
info != td->td_kcov_info) {
error = EINVAL;
break;
}
KASSERT(active_count > 0, ("%s: Open count is zero", __func__));
active_count--;
if (active_count == 0) {
- cov_register_pc(&trace_pc);
- cov_register_cmp(&trace_cmp);
+ cov_unregister_pc();
+ cov_unregister_cmp();
}
td->td_kcov_info = NULL;
atomic_store_int(&info->state, KCOV_STATE_READY);
/*
* Ensure we have exited the READY state before clearing the
* rest of the info struct.
*/
atomic_thread_fence_rel();
info->mode = -1;
info->thread = NULL;
break;
default:
error = EINVAL;
break;
}
mtx_unlock_spin(&kcov_lock);
return (error);
}
static void
kcov_thread_dtor(void *arg __unused, struct thread *td)
{
struct kcov_info *info;
info = td->td_kcov_info;
if (info == NULL)
return;
mtx_lock_spin(&kcov_lock);
KASSERT(active_count > 0, ("%s: Open count is zero", __func__));
active_count--;
if (active_count == 0) {
- cov_register_pc(&trace_pc);
- cov_register_cmp(&trace_cmp);
+ cov_unregister_pc();
+ cov_unregister_cmp();
}
td->td_kcov_info = NULL;
if (info->state != KCOV_STATE_DYING) {
/*
* The kcov file is still open. Mark it as unused and
* wait for it to be closed before cleaning up.
*/
atomic_store_int(&info->state, KCOV_STATE_READY);
atomic_thread_fence_seq_cst();
/* This info struct is unused */
info->thread = NULL;
mtx_unlock_spin(&kcov_lock);
return;
}
mtx_unlock_spin(&kcov_lock);
/*
* We can safely clean up the info struct as it is in the
* KCOV_STATE_DYING state where the info struct is associated with
* the current thread that's about to exit.
*
* The KCOV_STATE_DYING stops new threads from using it.
* It also stops the current thread from trying to use the info struct.
*/
if (info->kvaddr != 0) {
pmap_qremove(info->kvaddr, info->bufsize / PAGE_SIZE);
kva_free(info->kvaddr, info->bufsize);
}
if (info->bufobj != NULL && !info->mmap)
vm_object_deallocate(info->bufobj);
free(info, M_KCOV_INFO);
}
static void
kcov_init(const void *unused)
{
struct make_dev_args args;
struct cdev *dev;
mtx_init(&kcov_lock, "kcov lock", NULL, MTX_SPIN);
make_dev_args_init(&args);
args.mda_devsw = &kcov_cdevsw;
args.mda_uid = UID_ROOT;
args.mda_gid = GID_WHEEL;
args.mda_mode = 0600;
if (make_dev_s(&args, &dev, "kcov") != 0) {
printf("%s", "Failed to create kcov device");
return;
}
EVENTHANDLER_REGISTER(thread_dtor, kcov_thread_dtor, NULL,
EVENTHANDLER_PRI_ANY);
}
SYSINIT(kcovdev, SI_SUB_LAST, SI_ORDER_ANY, kcov_init, NULL);
Index: projects/clang800-import/sys/kern/vfs_lookup.c
===================================================================
--- projects/clang800-import/sys/kern/vfs_lookup.c (revision 343955)
+++ projects/clang800-import/sys/kern/vfs_lookup.c (revision 343956)
@@ -1,1478 +1,1480 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1982, 1986, 1989, 1993
* The Regents of the University of California. All rights reserved.
* (c) UNIX System Laboratories, Inc.
* All or some portions of this file are derived from material licensed
* to the University of California by American Telephone and Telegraph
* Co. or Unix System Laboratories, Inc. and are reproduced herein with
* the permission of UNIX System Laboratories, Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)vfs_lookup.c 8.4 (Berkeley) 2/16/94
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_capsicum.h"
#include "opt_ktrace.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/capsicum.h>
#include <sys/fcntl.h>
#include <sys/jail.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/namei.h>
#include <sys/vnode.h>
#include <sys/mount.h>
#include <sys/filedesc.h>
#include <sys/proc.h>
#include <sys/sdt.h>
#include <sys/syscallsubr.h>
#include <sys/sysctl.h>
#ifdef KTRACE
#include <sys/ktrace.h>
#endif
#include <security/audit/audit.h>
#include <security/mac/mac_framework.h>
#include <vm/uma.h>
#define NAMEI_DIAGNOSTIC 1
#undef NAMEI_DIAGNOSTIC
SDT_PROVIDER_DECLARE(vfs);
SDT_PROBE_DEFINE3(vfs, namei, lookup, entry, "struct vnode *", "char *",
"unsigned long");
SDT_PROBE_DEFINE2(vfs, namei, lookup, return, "int", "struct vnode *");
/* Allocation zone for namei. */
uma_zone_t namei_zone;
/* Placeholder vnode for mp traversal. */
static struct vnode *vp_crossmp;
static int
crossmp_vop_islocked(struct vop_islocked_args *ap)
{
return (LK_SHARED);
}
static int
crossmp_vop_lock1(struct vop_lock1_args *ap)
{
struct vnode *vp;
struct lock *lk __unused;
const char *file __unused;
int flags, line __unused;
vp = ap->a_vp;
lk = vp->v_vnlock;
flags = ap->a_flags;
file = ap->a_file;
line = ap->a_line;
if ((flags & LK_SHARED) == 0)
panic("invalid lock request for crossmp");
WITNESS_CHECKORDER(&lk->lock_object, LOP_NEWORDER, file, line,
flags & LK_INTERLOCK ? &VI_MTX(vp)->lock_object : NULL);
WITNESS_LOCK(&lk->lock_object, 0, file, line);
if ((flags & LK_INTERLOCK) != 0)
VI_UNLOCK(vp);
LOCK_LOG_LOCK("SLOCK", &lk->lock_object, 0, 0, ap->a_file, line);
return (0);
}
static int
crossmp_vop_unlock(struct vop_unlock_args *ap)
{
struct vnode *vp;
struct lock *lk __unused;
int flags;
vp = ap->a_vp;
lk = vp->v_vnlock;
flags = ap->a_flags;
if ((flags & LK_INTERLOCK) != 0)
VI_UNLOCK(vp);
WITNESS_UNLOCK(&lk->lock_object, 0, LOCK_FILE, LOCK_LINE);
LOCK_LOG_LOCK("SUNLOCK", &lk->lock_object, 0, 0, LOCK_FILE,
LOCK_LINE);
return (0);
}
static struct vop_vector crossmp_vnodeops = {
.vop_default = &default_vnodeops,
.vop_islocked = crossmp_vop_islocked,
.vop_lock1 = crossmp_vop_lock1,
.vop_unlock = crossmp_vop_unlock,
};
struct nameicap_tracker {
struct vnode *dp;
TAILQ_ENTRY(nameicap_tracker) nm_link;
};
/* Zone for cap mode tracker elements used for dotdot capability checks. */
static uma_zone_t nt_zone;
static void
nameiinit(void *dummy __unused)
{
namei_zone = uma_zcreate("NAMEI", MAXPATHLEN, NULL, NULL, NULL, NULL,
UMA_ALIGN_PTR, 0);
nt_zone = uma_zcreate("rentr", sizeof(struct nameicap_tracker),
NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
getnewvnode("crossmp", NULL, &crossmp_vnodeops, &vp_crossmp);
}
SYSINIT(vfs, SI_SUB_VFS, SI_ORDER_SECOND, nameiinit, NULL);
static int lookup_cap_dotdot = 1;
SYSCTL_INT(_vfs, OID_AUTO, lookup_cap_dotdot, CTLFLAG_RWTUN,
&lookup_cap_dotdot, 0,
"enables \"..\" components in path lookup in capability mode");
static int lookup_cap_dotdot_nonlocal = 1;
SYSCTL_INT(_vfs, OID_AUTO, lookup_cap_dotdot_nonlocal, CTLFLAG_RWTUN,
&lookup_cap_dotdot_nonlocal, 0,
"enables \"..\" components in path lookup in capability mode "
"on non-local mount");
static void
nameicap_tracker_add(struct nameidata *ndp, struct vnode *dp)
{
struct nameicap_tracker *nt;
if ((ndp->ni_lcf & NI_LCF_CAP_DOTDOT) == 0 || dp->v_type != VDIR)
return;
if ((ndp->ni_lcf & (NI_LCF_BENEATH_ABS | NI_LCF_BENEATH_LATCHED)) ==
NI_LCF_BENEATH_ABS) {
MPASS((ndp->ni_lcf & NI_LCF_LATCH) != 0);
if (dp != ndp->ni_beneath_latch)
return;
ndp->ni_lcf |= NI_LCF_BENEATH_LATCHED;
}
nt = uma_zalloc(nt_zone, M_WAITOK);
vhold(dp);
nt->dp = dp;
TAILQ_INSERT_TAIL(&ndp->ni_cap_tracker, nt, nm_link);
}
static void
nameicap_cleanup(struct nameidata *ndp, bool clean_latch)
{
struct nameicap_tracker *nt, *nt1;
KASSERT(TAILQ_EMPTY(&ndp->ni_cap_tracker) ||
(ndp->ni_lcf & NI_LCF_CAP_DOTDOT) != 0, ("not strictrelative"));
TAILQ_FOREACH_SAFE(nt, &ndp->ni_cap_tracker, nm_link, nt1) {
TAILQ_REMOVE(&ndp->ni_cap_tracker, nt, nm_link);
vdrop(nt->dp);
uma_zfree(nt_zone, nt);
}
if (clean_latch && (ndp->ni_lcf & NI_LCF_LATCH) != 0) {
ndp->ni_lcf &= ~NI_LCF_LATCH;
vrele(ndp->ni_beneath_latch);
}
}
/*
* For dotdot lookups in capability mode, only allow the component
* lookup to succeed if the resulting directory was already traversed
* during the operation. Also fail dotdot lookups for non-local
* filesystems, where external agents might assist local lookups to
* escape the compartment.
*/
static int
nameicap_check_dotdot(struct nameidata *ndp, struct vnode *dp)
{
struct nameicap_tracker *nt;
struct mount *mp;
if ((ndp->ni_lcf & NI_LCF_CAP_DOTDOT) == 0 || dp == NULL ||
dp->v_type != VDIR)
return (0);
mp = dp->v_mount;
if (lookup_cap_dotdot_nonlocal == 0 && mp != NULL &&
(mp->mnt_flag & MNT_LOCAL) == 0)
return (ENOTCAPABLE);
TAILQ_FOREACH_REVERSE(nt, &ndp->ni_cap_tracker, nameicap_tracker_head,
nm_link) {
if (dp == nt->dp)
return (0);
}
if ((ndp->ni_lcf & NI_LCF_BENEATH_ABS) != 0) {
ndp->ni_lcf &= ~NI_LCF_BENEATH_LATCHED;
nameicap_cleanup(ndp, false);
return (0);
}
return (ENOTCAPABLE);
}
static void
namei_cleanup_cnp(struct componentname *cnp)
{
uma_zfree(namei_zone, cnp->cn_pnbuf);
#ifdef DIAGNOSTIC
cnp->cn_pnbuf = NULL;
cnp->cn_nameptr = NULL;
#endif
}
static int
namei_handle_root(struct nameidata *ndp, struct vnode **dpp)
{
struct componentname *cnp;
cnp = &ndp->ni_cnd;
if ((ndp->ni_lcf & NI_LCF_STRICTRELATIVE) != 0) {
#ifdef KTRACE
if (KTRPOINT(curthread, KTR_CAPFAIL))
ktrcapfail(CAPFAIL_LOOKUP, NULL, NULL);
#endif
return (ENOTCAPABLE);
}
if ((cnp->cn_flags & BENEATH) != 0) {
ndp->ni_lcf |= NI_LCF_BENEATH_ABS;
ndp->ni_lcf &= ~NI_LCF_BENEATH_LATCHED;
nameicap_cleanup(ndp, false);
}
while (*(cnp->cn_nameptr) == '/') {
cnp->cn_nameptr++;
ndp->ni_pathlen--;
}
*dpp = ndp->ni_rootdir;
vrefact(*dpp);
return (0);
}
/*
* Convert a pathname into a pointer to a locked vnode.
*
* The FOLLOW flag is set when symbolic links are to be followed
* when they occur at the end of the name translation process.
* Symbolic links are always followed for all pathname
* components other than the last.
*
* The segflg defines whether the name is to be copied from user
* space or kernel space.
*
* Overall outline of namei:
*
* copy in name
* get starting directory
* while (!done && !error) {
* call lookup to search path.
* if symbolic link, massage name in buffer and continue
* }
*/
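/*
 * Typical calling sequence (an illustrative sketch only, not a
 * definitive recipe; the exact flags and error handling vary by
 * caller):
 *
 *	struct nameidata nd;
 *	int error;
 *
 *	NDINIT(&nd, LOOKUP, FOLLOW | LOCKLEAF, UIO_USERSPACE, path, td);
 *	error = namei(&nd);
 *	if (error == 0) {
 *		... use nd.ni_vp, which is returned locked
 *		    because LOCKLEAF was requested ...
 *		NDFREE(&nd, NDF_ONLY_PNBUF);
 *		vput(nd.ni_vp);
 *	}
 */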
int
namei(struct nameidata *ndp)
{
struct filedesc *fdp; /* pointer to file descriptor state */
char *cp; /* pointer into pathname argument */
struct vnode *dp; /* the directory we are searching */
struct iovec aiov; /* uio for reading symbolic links */
struct componentname *cnp;
struct thread *td;
struct proc *p;
cap_rights_t rights;
struct filecaps dirfd_caps;
struct uio auio;
int error, linklen, startdir_used;
cnp = &ndp->ni_cnd;
td = cnp->cn_thread;
p = td->td_proc;
ndp->ni_cnd.cn_cred = ndp->ni_cnd.cn_thread->td_ucred;
KASSERT(cnp->cn_cred && p, ("namei: bad cred/proc"));
KASSERT((cnp->cn_nameiop & (~OPMASK)) == 0,
("namei: nameiop contaminated with flags"));
KASSERT((cnp->cn_flags & OPMASK) == 0,
("namei: flags contaminated with nameiops"));
MPASS(ndp->ni_startdir == NULL || ndp->ni_startdir->v_type == VDIR ||
ndp->ni_startdir->v_type == VBAD);
fdp = p->p_fd;
TAILQ_INIT(&ndp->ni_cap_tracker);
ndp->ni_lcf = 0;
/* We will set this ourselves if we need it. */
cnp->cn_flags &= ~TRAILINGSLASH;
/*
* Get a buffer for the name to be translated, and copy the
* name into the buffer.
*/
if ((cnp->cn_flags & HASBUF) == 0)
cnp->cn_pnbuf = uma_zalloc(namei_zone, M_WAITOK);
if (ndp->ni_segflg == UIO_SYSSPACE)
error = copystr(ndp->ni_dirp, cnp->cn_pnbuf, MAXPATHLEN,
&ndp->ni_pathlen);
else
error = copyinstr(ndp->ni_dirp, cnp->cn_pnbuf, MAXPATHLEN,
&ndp->ni_pathlen);
/*
* Don't allow empty pathnames.
*/
if (error == 0 && *cnp->cn_pnbuf == '\0')
error = ENOENT;
#ifdef CAPABILITY_MODE
/*
* In capability mode, lookups must be restricted to happen in
* the subtree with the root specified by the file descriptor:
* - The root must be a real file descriptor, not the pseudo-descriptor
* AT_FDCWD.
* - The passed path must be relative, not absolute.
* - If lookup_cap_dotdot is disabled, the path must not contain
* '..' components.
* - If lookup_cap_dotdot is enabled, we verify that each '..'
* component lookup results in a directory that was previously
* walked by us, which prevents an escape from the relative root.
*/
if (error == 0 && IN_CAPABILITY_MODE(td) &&
(cnp->cn_flags & NOCAPCHECK) == 0) {
ndp->ni_lcf |= NI_LCF_STRICTRELATIVE;
if (ndp->ni_dirfd == AT_FDCWD) {
#ifdef KTRACE
if (KTRPOINT(td, KTR_CAPFAIL))
ktrcapfail(CAPFAIL_LOOKUP, NULL, NULL);
#endif
error = ECAPMODE;
}
}
#endif
if (error != 0) {
namei_cleanup_cnp(cnp);
ndp->ni_vp = NULL;
return (error);
}
ndp->ni_loopcnt = 0;
#ifdef KTRACE
if (KTRPOINT(td, KTR_NAMEI)) {
KASSERT(cnp->cn_thread == curthread,
("namei not using curthread"));
ktrnamei(cnp->cn_pnbuf);
}
#endif
/*
* Get starting point for the translation.
*/
FILEDESC_SLOCK(fdp);
ndp->ni_rootdir = fdp->fd_rdir;
vrefact(ndp->ni_rootdir);
ndp->ni_topdir = fdp->fd_jdir;
/*
* If we are auditing the kernel pathname, save the user pathname.
*/
if (cnp->cn_flags & AUDITVNODE1)
AUDIT_ARG_UPATH1(td, ndp->ni_dirfd, cnp->cn_pnbuf);
if (cnp->cn_flags & AUDITVNODE2)
AUDIT_ARG_UPATH2(td, ndp->ni_dirfd, cnp->cn_pnbuf);
startdir_used = 0;
dp = NULL;
cnp->cn_nameptr = cnp->cn_pnbuf;
if (cnp->cn_pnbuf[0] == '/') {
+ ndp->ni_resflags |= NIRES_ABS;
error = namei_handle_root(ndp, &dp);
} else {
if (ndp->ni_startdir != NULL) {
dp = ndp->ni_startdir;
startdir_used = 1;
} else if (ndp->ni_dirfd == AT_FDCWD) {
dp = fdp->fd_cdir;
vrefact(dp);
} else {
rights = ndp->ni_rightsneeded;
cap_rights_set(&rights, CAP_LOOKUP);
if (cnp->cn_flags & AUDITVNODE1)
AUDIT_ARG_ATFD1(ndp->ni_dirfd);
if (cnp->cn_flags & AUDITVNODE2)
AUDIT_ARG_ATFD2(ndp->ni_dirfd);
error = fgetvp_rights(td, ndp->ni_dirfd,
&rights, &ndp->ni_filecaps, &dp);
if (error == EINVAL)
error = ENOTDIR;
#ifdef CAPABILITIES
/*
* If file descriptor doesn't have all rights,
* all lookups relative to it must also be
* strictly relative.
*/
CAP_ALL(&rights);
if (!cap_rights_contains(&ndp->ni_filecaps.fc_rights,
&rights) ||
ndp->ni_filecaps.fc_fcntls != CAP_FCNTL_ALL ||
ndp->ni_filecaps.fc_nioctls != -1) {
ndp->ni_lcf |= NI_LCF_STRICTRELATIVE;
}
#endif
}
if (error == 0 && dp->v_type != VDIR)
error = ENOTDIR;
}
if (error == 0 && (cnp->cn_flags & BENEATH) != 0) {
if (ndp->ni_dirfd == AT_FDCWD) {
ndp->ni_beneath_latch = fdp->fd_cdir;
vrefact(ndp->ni_beneath_latch);
} else {
rights = ndp->ni_rightsneeded;
cap_rights_set(&rights, CAP_LOOKUP);
error = fgetvp_rights(td, ndp->ni_dirfd, &rights,
&dirfd_caps, &ndp->ni_beneath_latch);
if (error == 0 && dp->v_type != VDIR) {
vrele(ndp->ni_beneath_latch);
error = ENOTDIR;
}
}
if (error == 0)
ndp->ni_lcf |= NI_LCF_LATCH;
}
FILEDESC_SUNLOCK(fdp);
if (ndp->ni_startdir != NULL && !startdir_used)
vrele(ndp->ni_startdir);
if (error != 0) {
if (dp != NULL)
vrele(dp);
goto out;
}
MPASS((ndp->ni_lcf & (NI_LCF_BENEATH_ABS | NI_LCF_LATCH)) !=
NI_LCF_BENEATH_ABS);
if (((ndp->ni_lcf & NI_LCF_STRICTRELATIVE) != 0 &&
lookup_cap_dotdot != 0) ||
((ndp->ni_lcf & NI_LCF_STRICTRELATIVE) == 0 &&
(cnp->cn_flags & BENEATH) != 0))
ndp->ni_lcf |= NI_LCF_CAP_DOTDOT;
SDT_PROBE3(vfs, namei, lookup, entry, dp, cnp->cn_pnbuf,
cnp->cn_flags);
for (;;) {
ndp->ni_startdir = dp;
error = lookup(ndp);
if (error != 0)
goto out;
/*
* If not a symbolic link, we're done.
*/
if ((cnp->cn_flags & ISSYMLINK) == 0) {
vrele(ndp->ni_rootdir);
if ((cnp->cn_flags & (SAVENAME | SAVESTART)) == 0) {
namei_cleanup_cnp(cnp);
} else
cnp->cn_flags |= HASBUF;
if ((ndp->ni_lcf & (NI_LCF_BENEATH_ABS |
NI_LCF_BENEATH_LATCHED)) == NI_LCF_BENEATH_ABS) {
NDFREE(ndp, 0);
error = ENOTCAPABLE;
}
nameicap_cleanup(ndp, true);
SDT_PROBE2(vfs, namei, lookup, return, error,
(error == 0 ? ndp->ni_vp : NULL));
return (error);
}
if (ndp->ni_loopcnt++ >= MAXSYMLINKS) {
error = ELOOP;
break;
}
#ifdef MAC
if ((cnp->cn_flags & NOMACCHECK) == 0) {
error = mac_vnode_check_readlink(td->td_ucred,
ndp->ni_vp);
if (error != 0)
break;
}
#endif
if (ndp->ni_pathlen > 1)
cp = uma_zalloc(namei_zone, M_WAITOK);
else
cp = cnp->cn_pnbuf;
aiov.iov_base = cp;
aiov.iov_len = MAXPATHLEN;
auio.uio_iov = &aiov;
auio.uio_iovcnt = 1;
auio.uio_offset = 0;
auio.uio_rw = UIO_READ;
auio.uio_segflg = UIO_SYSSPACE;
auio.uio_td = td;
auio.uio_resid = MAXPATHLEN;
error = VOP_READLINK(ndp->ni_vp, &auio, cnp->cn_cred);
if (error != 0) {
if (ndp->ni_pathlen > 1)
uma_zfree(namei_zone, cp);
break;
}
linklen = MAXPATHLEN - auio.uio_resid;
if (linklen == 0) {
if (ndp->ni_pathlen > 1)
uma_zfree(namei_zone, cp);
error = ENOENT;
break;
}
if (linklen + ndp->ni_pathlen > MAXPATHLEN) {
if (ndp->ni_pathlen > 1)
uma_zfree(namei_zone, cp);
error = ENAMETOOLONG;
break;
}
if (ndp->ni_pathlen > 1) {
bcopy(ndp->ni_next, cp + linklen, ndp->ni_pathlen);
uma_zfree(namei_zone, cnp->cn_pnbuf);
cnp->cn_pnbuf = cp;
} else
cnp->cn_pnbuf[linklen] = '\0';
ndp->ni_pathlen += linklen;
vput(ndp->ni_vp);
dp = ndp->ni_dvp;
/*
* Check if root directory should replace current directory.
*/
cnp->cn_nameptr = cnp->cn_pnbuf;
if (*(cnp->cn_nameptr) == '/') {
vrele(dp);
error = namei_handle_root(ndp, &dp);
if (error != 0)
goto out;
}
}
vput(ndp->ni_vp);
ndp->ni_vp = NULL;
vrele(ndp->ni_dvp);
out:
vrele(ndp->ni_rootdir);
MPASS(error != 0);
namei_cleanup_cnp(cnp);
nameicap_cleanup(ndp, true);
SDT_PROBE2(vfs, namei, lookup, return, error, NULL);
return (error);
}
static int
compute_cn_lkflags(struct mount *mp, int lkflags, int cnflags)
{
if (mp == NULL || ((lkflags & LK_SHARED) &&
(!(mp->mnt_kern_flag & MNTK_LOOKUP_SHARED) ||
((cnflags & ISDOTDOT) &&
(mp->mnt_kern_flag & MNTK_LOOKUP_EXCL_DOTDOT))))) {
lkflags &= ~LK_SHARED;
lkflags |= LK_EXCLUSIVE;
}
lkflags |= LK_NODDLKTREAT;
return (lkflags);
}
static __inline int
needs_exclusive_leaf(struct mount *mp, int flags)
{
/*
* Intermediate nodes can use shared locks, we only need to
* force an exclusive lock for leaf nodes.
*/
if ((flags & (ISLASTCN | LOCKLEAF)) != (ISLASTCN | LOCKLEAF))
return (0);
/* Always use exclusive locks if LOCKSHARED isn't set. */
if (!(flags & LOCKSHARED))
return (1);
/*
* For lookups during open(), if the mount point supports
* extended shared operations, then use a shared lock for the
* leaf node, otherwise use an exclusive lock.
*/
if ((flags & ISOPEN) != 0)
return (!MNT_EXTENDED_SHARED(mp));
/*
* Lookup requests outside of open() that specify LOCKSHARED
* only need a shared lock on the leaf vnode.
*/
return (0);
}
/*
* Search a pathname.
* This is a very central and rather complicated routine.
*
* The pathname is pointed to by ni_ptr and is of length ni_pathlen.
* The starting directory is taken from ni_startdir. The pathname is
* descended until done, or a symbolic link is encountered. The variable
* ni_more is clear if the path is completed; it is set to one if a
* symbolic link needing interpretation is encountered.
*
* The flag argument is LOOKUP, CREATE, RENAME, or DELETE depending on
* whether the name is to be looked up, created, renamed, or deleted.
* When CREATE, RENAME, or DELETE is specified, information usable in
* creating, renaming, or deleting a directory entry may be calculated.
* If flag has LOCKPARENT or'ed into it, the parent directory is returned
* locked. If flag has WANTPARENT or'ed into it, the parent directory is
* returned unlocked. Otherwise the parent directory is not returned. If
* the target of the pathname exists and LOCKLEAF is or'ed into the flag
* the target is returned locked, otherwise it is returned unlocked.
* When creating or renaming and LOCKPARENT is specified, the target may not
* be ".". When deleting and LOCKPARENT is specified, the target may be ".".
*
* Overall outline of lookup:
*
* dirloop:
* identify next component of name at ndp->ni_ptr
* handle degenerate case where name is null string
* if .. and crossing mount points and on mounted filesys, find parent
* call VOP_LOOKUP routine for next component name
* directory vnode returned in ni_dvp, unlocked unless LOCKPARENT set
* component vnode returned in ni_vp (if it exists), locked.
* if result vnode is mounted on and crossing mount points,
* find mounted on vnode
* if more components of name, do next level at dirloop
* return the answer in ni_vp, locked if LOCKLEAF set
* if LOCKPARENT set, return locked parent in ni_dvp
* if WANTPARENT set, return unlocked parent in ni_dvp
*/
int
lookup(struct nameidata *ndp)
{
char *cp; /* pointer into pathname argument */
char *prev_ni_next; /* saved ndp->ni_next */
struct vnode *dp = NULL; /* the directory we are searching */
struct vnode *tdp; /* saved dp */
struct mount *mp; /* mount table entry */
struct prison *pr;
size_t prev_ni_pathlen; /* saved ndp->ni_pathlen */
int docache; /* == 0 do not cache last component */
int wantparent; /* 1 => wantparent or lockparent flag */
int rdonly; /* lookup read-only flag bit */
int error = 0;
int dpunlocked = 0; /* dp has already been unlocked */
int relookup = 0; /* do not consume the path component */
struct componentname *cnp = &ndp->ni_cnd;
int lkflags_save;
int ni_dvp_unlocked;
/*
* Setup: break out flag bits into variables.
*/
ni_dvp_unlocked = 0;
wantparent = cnp->cn_flags & (LOCKPARENT | WANTPARENT);
KASSERT(cnp->cn_nameiop == LOOKUP || wantparent,
("CREATE, DELETE, RENAME require LOCKPARENT or WANTPARENT."));
docache = (cnp->cn_flags & NOCACHE) ^ NOCACHE;
if (cnp->cn_nameiop == DELETE ||
(wantparent && cnp->cn_nameiop != CREATE &&
cnp->cn_nameiop != LOOKUP))
docache = 0;
rdonly = cnp->cn_flags & RDONLY;
cnp->cn_flags &= ~ISSYMLINK;
ndp->ni_dvp = NULL;
/*
* We use shared locks until we hit the parent of the last cn then
* we adjust based on the requesting flags.
*/
cnp->cn_lkflags = LK_SHARED;
dp = ndp->ni_startdir;
ndp->ni_startdir = NULLVP;
vn_lock(dp,
compute_cn_lkflags(dp->v_mount, cnp->cn_lkflags | LK_RETRY,
cnp->cn_flags));
dirloop:
/*
* Search a new directory.
*
* The last component of the filename is left accessible via
* cnp->cn_nameptr for callers that need the name. Callers needing
* the name set the SAVENAME flag. When done, they assume
* responsibility for freeing the pathname buffer.
*/
for (cp = cnp->cn_nameptr; *cp != 0 && *cp != '/'; cp++)
continue;
cnp->cn_namelen = cp - cnp->cn_nameptr;
if (cnp->cn_namelen > NAME_MAX) {
error = ENAMETOOLONG;
goto bad;
}
#ifdef NAMEI_DIAGNOSTIC
{ char c = *cp;
*cp = '\0';
printf("{%s}: ", cnp->cn_nameptr);
*cp = c; }
#endif
prev_ni_pathlen = ndp->ni_pathlen;
ndp->ni_pathlen -= cnp->cn_namelen;
KASSERT(ndp->ni_pathlen <= PATH_MAX,
("%s: ni_pathlen underflow to %zd\n", __func__, ndp->ni_pathlen));
prev_ni_next = ndp->ni_next;
ndp->ni_next = cp;
/*
* Replace multiple slashes by a single slash and trailing slashes
* by a null. This must be done before VOP_LOOKUP() because some
* fs's don't know about trailing slashes. Remember if there were
* trailing slashes to handle symlinks, existing non-directories
* and non-existing files that won't be directories specially later.
*/
while (*cp == '/' && (cp[1] == '/' || cp[1] == '\0')) {
cp++;
ndp->ni_pathlen--;
if (*cp == '\0') {
*ndp->ni_next = '\0';
cnp->cn_flags |= TRAILINGSLASH;
}
}
ndp->ni_next = cp;
cnp->cn_flags |= MAKEENTRY;
if (*cp == '\0' && docache == 0)
cnp->cn_flags &= ~MAKEENTRY;
if (cnp->cn_namelen == 2 &&
cnp->cn_nameptr[1] == '.' && cnp->cn_nameptr[0] == '.')
cnp->cn_flags |= ISDOTDOT;
else
cnp->cn_flags &= ~ISDOTDOT;
if (*ndp->ni_next == 0)
cnp->cn_flags |= ISLASTCN;
else
cnp->cn_flags &= ~ISLASTCN;
if ((cnp->cn_flags & ISLASTCN) != 0 &&
cnp->cn_namelen == 1 && cnp->cn_nameptr[0] == '.' &&
(cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME)) {
error = EINVAL;
goto bad;
}
nameicap_tracker_add(ndp, dp);
/*
* Check for degenerate name (e.g. / or "")
* which is a way of talking about a directory,
* e.g. like "/." or ".".
*/
if (cnp->cn_nameptr[0] == '\0') {
if (dp->v_type != VDIR) {
error = ENOTDIR;
goto bad;
}
if (cnp->cn_nameiop != LOOKUP) {
error = EISDIR;
goto bad;
}
if (wantparent) {
ndp->ni_dvp = dp;
VREF(dp);
}
ndp->ni_vp = dp;
if (cnp->cn_flags & AUDITVNODE1)
AUDIT_ARG_VNODE1(dp);
else if (cnp->cn_flags & AUDITVNODE2)
AUDIT_ARG_VNODE2(dp);
if (!(cnp->cn_flags & (LOCKPARENT | LOCKLEAF)))
VOP_UNLOCK(dp, 0);
/* XXX This should probably move to the top of the function. */
if (cnp->cn_flags & SAVESTART)
panic("lookup: SAVESTART");
goto success;
}
/*
* Handle "..": five special cases.
* 0. If doing a capability lookup and lookup_cap_dotdot is
* disabled, return ENOTCAPABLE.
* 1. Return an error if this is the last component of
* the name and the operation is DELETE or RENAME.
* 2. If at root directory (e.g. after chroot)
* or at absolute root directory
* then ignore it so can't get out.
* 3. If this vnode is the root of a mounted
* filesystem, then replace it with the
* vnode which was mounted on so we take the
* .. in the other filesystem.
* 4. If the vnode is the top directory of
* the jail or chroot, don't let them out.
* 5. If doing a capability lookup and lookup_cap_dotdot is
* enabled, return ENOTCAPABLE if the lookup would escape
* from the initial file descriptor directory. Checks are
* done by ensuring that namei() already traversed the
* result of dotdot lookup.
*/
if (cnp->cn_flags & ISDOTDOT) {
if ((ndp->ni_lcf & (NI_LCF_STRICTRELATIVE | NI_LCF_CAP_DOTDOT))
== NI_LCF_STRICTRELATIVE) {
#ifdef KTRACE
if (KTRPOINT(curthread, KTR_CAPFAIL))
ktrcapfail(CAPFAIL_LOOKUP, NULL, NULL);
#endif
error = ENOTCAPABLE;
goto bad;
}
if ((cnp->cn_flags & ISLASTCN) != 0 &&
(cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME)) {
error = EINVAL;
goto bad;
}
for (;;) {
for (pr = cnp->cn_cred->cr_prison; pr != NULL;
pr = pr->pr_parent)
if (dp == pr->pr_root)
break;
if (dp == ndp->ni_rootdir ||
dp == ndp->ni_topdir ||
dp == rootvnode ||
pr != NULL ||
((dp->v_vflag & VV_ROOT) != 0 &&
(cnp->cn_flags & NOCROSSMOUNT) != 0)) {
ndp->ni_dvp = dp;
ndp->ni_vp = dp;
VREF(dp);
goto nextname;
}
if ((dp->v_vflag & VV_ROOT) == 0)
break;
if (dp->v_iflag & VI_DOOMED) { /* forced unmount */
error = ENOENT;
goto bad;
}
tdp = dp;
dp = dp->v_mount->mnt_vnodecovered;
VREF(dp);
vput(tdp);
vn_lock(dp,
compute_cn_lkflags(dp->v_mount, cnp->cn_lkflags |
LK_RETRY, ISDOTDOT));
error = nameicap_check_dotdot(ndp, dp);
if (error != 0) {
#ifdef KTRACE
if (KTRPOINT(curthread, KTR_CAPFAIL))
ktrcapfail(CAPFAIL_LOOKUP, NULL, NULL);
#endif
goto bad;
}
}
}
/*
* We now have a segment name to search for, and a directory to search.
*/
unionlookup:
#ifdef MAC
if ((cnp->cn_flags & NOMACCHECK) == 0) {
error = mac_vnode_check_lookup(cnp->cn_thread->td_ucred, dp,
cnp);
if (error)
goto bad;
}
#endif
ndp->ni_dvp = dp;
ndp->ni_vp = NULL;
ASSERT_VOP_LOCKED(dp, "lookup");
/*
* If we have a shared lock we may need to upgrade the lock for the
* last operation.
*/
if ((cnp->cn_flags & LOCKPARENT) && (cnp->cn_flags & ISLASTCN) &&
dp != vp_crossmp && VOP_ISLOCKED(dp) == LK_SHARED)
vn_lock(dp, LK_UPGRADE|LK_RETRY);
if ((dp->v_iflag & VI_DOOMED) != 0) {
error = ENOENT;
goto bad;
}
/*
* If we're looking up the last component and we need an exclusive
* lock, adjust our lkflags.
*/
if (needs_exclusive_leaf(dp->v_mount, cnp->cn_flags))
cnp->cn_lkflags = LK_EXCLUSIVE;
#ifdef NAMEI_DIAGNOSTIC
vn_printf(dp, "lookup in ");
#endif
lkflags_save = cnp->cn_lkflags;
cnp->cn_lkflags = compute_cn_lkflags(dp->v_mount, cnp->cn_lkflags,
cnp->cn_flags);
error = VOP_LOOKUP(dp, &ndp->ni_vp, cnp);
cnp->cn_lkflags = lkflags_save;
if (error != 0) {
KASSERT(ndp->ni_vp == NULL, ("leaf should be empty"));
#ifdef NAMEI_DIAGNOSTIC
printf("not found\n");
#endif
if ((error == ENOENT) &&
(dp->v_vflag & VV_ROOT) && (dp->v_mount != NULL) &&
(dp->v_mount->mnt_flag & MNT_UNION)) {
tdp = dp;
dp = dp->v_mount->mnt_vnodecovered;
VREF(dp);
vput(tdp);
vn_lock(dp,
compute_cn_lkflags(dp->v_mount, cnp->cn_lkflags |
LK_RETRY, cnp->cn_flags));
nameicap_tracker_add(ndp, dp);
goto unionlookup;
}
if (error == ERELOOKUP) {
vref(dp);
ndp->ni_vp = dp;
error = 0;
relookup = 1;
goto good;
}
if (error != EJUSTRETURN)
goto bad;
/*
* At this point, we know we're at the end of the
* pathname. If creating / renaming, we can consider
* allowing the file or directory to be created / renamed,
* provided we're not on a read-only filesystem.
*/
if (rdonly) {
error = EROFS;
goto bad;
}
/* trailing slash only allowed for directories */
if ((cnp->cn_flags & TRAILINGSLASH) &&
!(cnp->cn_flags & WILLBEDIR)) {
error = ENOENT;
goto bad;
}
if ((cnp->cn_flags & LOCKPARENT) == 0)
VOP_UNLOCK(dp, 0);
/*
* We return with ni_vp NULL to indicate that the entry
* doesn't currently exist, leaving a pointer to the
* (possibly locked) directory vnode in ndp->ni_dvp.
*/
if (cnp->cn_flags & SAVESTART) {
ndp->ni_startdir = ndp->ni_dvp;
VREF(ndp->ni_startdir);
}
goto success;
}
good:
#ifdef NAMEI_DIAGNOSTIC
printf("found\n");
#endif
dp = ndp->ni_vp;
/*
* Check to see if the vnode has been mounted on;
* if so find the root of the mounted filesystem.
*/
while (dp->v_type == VDIR && (mp = dp->v_mountedhere) &&
(cnp->cn_flags & NOCROSSMOUNT) == 0) {
if (vfs_busy(mp, 0))
continue;
vput(dp);
if (dp != ndp->ni_dvp)
vput(ndp->ni_dvp);
else
vrele(ndp->ni_dvp);
vrefact(vp_crossmp);
ndp->ni_dvp = vp_crossmp;
error = VFS_ROOT(mp, compute_cn_lkflags(mp, cnp->cn_lkflags,
cnp->cn_flags), &tdp);
vfs_unbusy(mp);
if (vn_lock(vp_crossmp, LK_SHARED | LK_NOWAIT))
panic("vp_crossmp exclusively locked or reclaimed");
if (error) {
dpunlocked = 1;
goto bad2;
}
ndp->ni_vp = dp = tdp;
}
/*
* Check for symbolic link
*/
if ((dp->v_type == VLNK) &&
((cnp->cn_flags & FOLLOW) || (cnp->cn_flags & TRAILINGSLASH) ||
*ndp->ni_next == '/')) {
cnp->cn_flags |= ISSYMLINK;
if (dp->v_iflag & VI_DOOMED) {
/*
* We can't know whether the directory was mounted with
* NOSYMFOLLOW, so we can't follow safely.
*/
error = ENOENT;
goto bad2;
}
if (dp->v_mount->mnt_flag & MNT_NOSYMFOLLOW) {
error = EACCES;
goto bad2;
}
/*
* Symlink code always expects an unlocked dvp.
*/
if (ndp->ni_dvp != ndp->ni_vp) {
VOP_UNLOCK(ndp->ni_dvp, 0);
ni_dvp_unlocked = 1;
}
goto success;
}
nextname:
/*
* Not a symbolic link that we will follow. Continue with the
* next component if there is any; otherwise, we're done.
*/
KASSERT((cnp->cn_flags & ISLASTCN) || *ndp->ni_next == '/',
("lookup: invalid path state."));
if (relookup) {
relookup = 0;
ndp->ni_pathlen = prev_ni_pathlen;
ndp->ni_next = prev_ni_next;
if (ndp->ni_dvp != dp)
vput(ndp->ni_dvp);
else
vrele(ndp->ni_dvp);
goto dirloop;
}
if (cnp->cn_flags & ISDOTDOT) {
error = nameicap_check_dotdot(ndp, ndp->ni_vp);
if (error != 0) {
#ifdef KTRACE
if (KTRPOINT(curthread, KTR_CAPFAIL))
ktrcapfail(CAPFAIL_LOOKUP, NULL, NULL);
#endif
goto bad2;
}
}
if (*ndp->ni_next == '/') {
cnp->cn_nameptr = ndp->ni_next;
while (*cnp->cn_nameptr == '/') {
cnp->cn_nameptr++;
ndp->ni_pathlen--;
}
if (ndp->ni_dvp != dp)
vput(ndp->ni_dvp);
else
vrele(ndp->ni_dvp);
goto dirloop;
}
/*
* If we're processing a path with a trailing slash,
* check that the end result is a directory.
*/
if ((cnp->cn_flags & TRAILINGSLASH) && dp->v_type != VDIR) {
error = ENOTDIR;
goto bad2;
}
/*
* Disallow directory write attempts on read-only filesystems.
*/
if (rdonly &&
(cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME)) {
error = EROFS;
goto bad2;
}
if (cnp->cn_flags & SAVESTART) {
ndp->ni_startdir = ndp->ni_dvp;
VREF(ndp->ni_startdir);
}
if (!wantparent) {
ni_dvp_unlocked = 2;
if (ndp->ni_dvp != dp)
vput(ndp->ni_dvp);
else
vrele(ndp->ni_dvp);
} else if ((cnp->cn_flags & LOCKPARENT) == 0 && ndp->ni_dvp != dp) {
VOP_UNLOCK(ndp->ni_dvp, 0);
ni_dvp_unlocked = 1;
}
if (cnp->cn_flags & AUDITVNODE1)
AUDIT_ARG_VNODE1(dp);
else if (cnp->cn_flags & AUDITVNODE2)
AUDIT_ARG_VNODE2(dp);
if ((cnp->cn_flags & LOCKLEAF) == 0)
VOP_UNLOCK(dp, 0);
success:
/*
* Because of shared lookup we may have the vnode shared locked, but
* the caller may want it to be exclusively locked.
*/
if (needs_exclusive_leaf(dp->v_mount, cnp->cn_flags) &&
VOP_ISLOCKED(dp) != LK_EXCLUSIVE) {
vn_lock(dp, LK_UPGRADE | LK_RETRY);
if (dp->v_iflag & VI_DOOMED) {
error = ENOENT;
goto bad2;
}
}
return (0);
bad2:
if (ni_dvp_unlocked != 2) {
if (dp != ndp->ni_dvp && !ni_dvp_unlocked)
vput(ndp->ni_dvp);
else
vrele(ndp->ni_dvp);
}
bad:
if (!dpunlocked)
vput(dp);
ndp->ni_vp = NULL;
return (error);
}
/*
* relookup - lookup a path name component
* Used by lookup to re-acquire things.
*/
int
relookup(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp)
{
struct vnode *dp = NULL; /* the directory we are searching */
int wantparent; /* 1 => wantparent or lockparent flag */
int rdonly; /* lookup read-only flag bit */
int error = 0;
KASSERT(cnp->cn_flags & ISLASTCN,
("relookup: Not given last component."));
/*
* Setup: break out flag bits into variables.
*/
wantparent = cnp->cn_flags & (LOCKPARENT|WANTPARENT);
KASSERT(wantparent, ("relookup: parent not wanted."));
rdonly = cnp->cn_flags & RDONLY;
cnp->cn_flags &= ~ISSYMLINK;
dp = dvp;
cnp->cn_lkflags = LK_EXCLUSIVE;
vn_lock(dp, LK_EXCLUSIVE | LK_RETRY);
/*
* Search a new directory.
*
* The last component of the filename is left accessible via
* cnp->cn_nameptr for callers that need the name. Callers needing
* the name set the SAVENAME flag. When done, they assume
* responsibility for freeing the pathname buffer.
*/
#ifdef NAMEI_DIAGNOSTIC
printf("{%s}: ", cnp->cn_nameptr);
#endif
/*
* Check for "" which represents the root directory after slash
* removal.
*/
if (cnp->cn_nameptr[0] == '\0') {
/*
* Support only LOOKUP for "/" because lookup()
* can't succeed for CREATE, DELETE and RENAME.
*/
KASSERT(cnp->cn_nameiop == LOOKUP, ("nameiop must be LOOKUP"));
KASSERT(dp->v_type == VDIR, ("dp is not a directory"));
if (!(cnp->cn_flags & LOCKLEAF))
VOP_UNLOCK(dp, 0);
*vpp = dp;
/* XXX This should probably move to the top of the function. */
if (cnp->cn_flags & SAVESTART)
panic("lookup: SAVESTART");
return (0);
}
if (cnp->cn_flags & ISDOTDOT)
panic("relookup: lookup on dot-dot");
/*
* We now have a segment name to search for, and a directory to search.
*/
#ifdef NAMEI_DIAGNOSTIC
vn_printf(dp, "search in ");
#endif
if ((error = VOP_LOOKUP(dp, vpp, cnp)) != 0) {
KASSERT(*vpp == NULL, ("leaf should be empty"));
if (error != EJUSTRETURN)
goto bad;
/*
* If creating and at end of pathname, then can consider
* allowing file to be created.
*/
if (rdonly) {
error = EROFS;
goto bad;
}
/* ASSERT(dvp == ndp->ni_startdir) */
if (cnp->cn_flags & SAVESTART)
VREF(dvp);
if ((cnp->cn_flags & LOCKPARENT) == 0)
VOP_UNLOCK(dp, 0);
/*
* We return with ni_vp NULL to indicate that the entry
* doesn't currently exist, leaving a pointer to the
* (possibly locked) directory vnode in ndp->ni_dvp.
*/
return (0);
}
dp = *vpp;
/*
* Disallow directory write attempts on read-only filesystems.
*/
if (rdonly &&
(cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME)) {
if (dvp == dp)
vrele(dvp);
else
vput(dvp);
error = EROFS;
goto bad;
}
/*
* Set the parent lock/ref state to the requested state.
*/
if ((cnp->cn_flags & LOCKPARENT) == 0 && dvp != dp) {
if (wantparent)
VOP_UNLOCK(dvp, 0);
else
vput(dvp);
} else if (!wantparent)
vrele(dvp);
/*
* Check for symbolic link
*/
KASSERT(dp->v_type != VLNK || !(cnp->cn_flags & FOLLOW),
("relookup: symlink found.\n"));
/* ASSERT(dvp == ndp->ni_startdir) */
if (cnp->cn_flags & SAVESTART)
VREF(dvp);
if ((cnp->cn_flags & LOCKLEAF) == 0)
VOP_UNLOCK(dp, 0);
return (0);
bad:
vput(dp);
*vpp = NULL;
return (error);
}
void
NDINIT_ALL(struct nameidata *ndp, u_long op, u_long flags, enum uio_seg segflg,
const char *namep, int dirfd, struct vnode *startdir, cap_rights_t *rightsp,
struct thread *td)
{
ndp->ni_cnd.cn_nameiop = op;
ndp->ni_cnd.cn_flags = flags;
ndp->ni_segflg = segflg;
ndp->ni_dirp = namep;
ndp->ni_dirfd = dirfd;
ndp->ni_startdir = startdir;
+ ndp->ni_resflags = 0;
filecaps_init(&ndp->ni_filecaps);
ndp->ni_cnd.cn_thread = td;
if (rightsp != NULL)
ndp->ni_rightsneeded = *rightsp;
else
cap_rights_init(&ndp->ni_rightsneeded);
}
/*
* Free data allocated by namei(); see namei(9) for details.
*/
void
NDFREE(struct nameidata *ndp, const u_int flags)
{
int unlock_dvp;
int unlock_vp;
unlock_dvp = 0;
unlock_vp = 0;
if (!(flags & NDF_NO_FREE_PNBUF) &&
(ndp->ni_cnd.cn_flags & HASBUF)) {
uma_zfree(namei_zone, ndp->ni_cnd.cn_pnbuf);
ndp->ni_cnd.cn_flags &= ~HASBUF;
}
if (!(flags & NDF_NO_VP_UNLOCK) &&
(ndp->ni_cnd.cn_flags & LOCKLEAF) && ndp->ni_vp)
unlock_vp = 1;
if (!(flags & NDF_NO_VP_RELE) && ndp->ni_vp) {
if (unlock_vp) {
vput(ndp->ni_vp);
unlock_vp = 0;
} else
vrele(ndp->ni_vp);
ndp->ni_vp = NULL;
}
if (unlock_vp)
VOP_UNLOCK(ndp->ni_vp, 0);
if (!(flags & NDF_NO_DVP_UNLOCK) &&
(ndp->ni_cnd.cn_flags & LOCKPARENT) &&
ndp->ni_dvp != ndp->ni_vp)
unlock_dvp = 1;
if (!(flags & NDF_NO_DVP_RELE) &&
(ndp->ni_cnd.cn_flags & (LOCKPARENT|WANTPARENT))) {
if (unlock_dvp) {
vput(ndp->ni_dvp);
unlock_dvp = 0;
} else
vrele(ndp->ni_dvp);
ndp->ni_dvp = NULL;
}
if (unlock_dvp)
VOP_UNLOCK(ndp->ni_dvp, 0);
if (!(flags & NDF_NO_STARTDIR_RELE) &&
(ndp->ni_cnd.cn_flags & SAVESTART)) {
vrele(ndp->ni_startdir);
ndp->ni_startdir = NULL;
}
}
/*
* Determine if there is a suitable alternate filename under the specified
* prefix for the specified path. If the create flag is set, then the
* alternate prefix will be used so long as the parent directory exists.
* This is used by the various compatibility ABIs so that, for example,
* Linux binaries prefer files under /compat/linux. The chosen path
* (whether under the prefix or under /) is returned in a kernel
* malloc'd buffer pointed to by pathbuf. The caller is responsible for
* freeing the buffer from the M_TEMP bucket if one is returned.
*/
int
kern_alternate_path(struct thread *td, const char *prefix, const char *path,
enum uio_seg pathseg, char **pathbuf, int create, int dirfd)
{
struct nameidata nd, ndroot;
char *ptr, *buf, *cp;
size_t len, sz;
int error;
buf = (char *) malloc(MAXPATHLEN, M_TEMP, M_WAITOK);
*pathbuf = buf;
/* Copy the prefix into the new pathname as a starting point. */
len = strlcpy(buf, prefix, MAXPATHLEN);
if (len >= MAXPATHLEN) {
*pathbuf = NULL;
free(buf, M_TEMP);
return (EINVAL);
}
sz = MAXPATHLEN - len;
ptr = buf + len;
/* Append the filename to the prefix. */
if (pathseg == UIO_SYSSPACE)
error = copystr(path, ptr, sz, &len);
else
error = copyinstr(path, ptr, sz, &len);
if (error) {
*pathbuf = NULL;
free(buf, M_TEMP);
return (error);
}
/* Only use a prefix with absolute pathnames. */
if (*ptr != '/') {
error = EINVAL;
goto keeporig;
}
if (dirfd != AT_FDCWD) {
/*
* We want the original because the "prefix" is
* included in the already opened dirfd.
*/
bcopy(ptr, buf, len);
return (0);
}
/*
* We know that there is a / somewhere in this pathname.
* Search backwards for it, to find the file's parent dir
* and check whether it exists in the alternate tree. If it
* does, and we want to create a file (the create flag is set),
* we don't need to worry about the root comparison.
*/
if (create) {
for (cp = &ptr[len] - 1; *cp != '/'; cp--);
*cp = '\0';
NDINIT(&nd, LOOKUP, NOFOLLOW, UIO_SYSSPACE, buf, td);
error = namei(&nd);
*cp = '/';
if (error != 0)
goto keeporig;
} else {
NDINIT(&nd, LOOKUP, NOFOLLOW, UIO_SYSSPACE, buf, td);
error = namei(&nd);
if (error != 0)
goto keeporig;
/*
* We now compare the vnode of the prefix to the one of the
* path that was asked for. If they resolve to the same vnode,
* then we ignore the match so that the real root gets used.
* This avoids the problem of traversing "../.." to find the
* root directory and never finding it, because "/" resolves
* to the emulation root directory. This is expensive :-(
*/
NDINIT(&ndroot, LOOKUP, FOLLOW, UIO_SYSSPACE, prefix,
td);
/* We shouldn't ever get an error from this namei(). */
error = namei(&ndroot);
if (error == 0) {
if (nd.ni_vp == ndroot.ni_vp)
error = ENOENT;
NDFREE(&ndroot, NDF_ONLY_PNBUF);
vrele(ndroot.ni_vp);
}
}
NDFREE(&nd, NDF_ONLY_PNBUF);
vrele(nd.ni_vp);
keeporig:
/* If there was an error, use the original path name. */
if (error)
bcopy(ptr, buf, len);
return (error);
}
Index: projects/clang800-import/sys/kern/vfs_syscalls.c
===================================================================
--- projects/clang800-import/sys/kern/vfs_syscalls.c (revision 343955)
+++ projects/clang800-import/sys/kern/vfs_syscalls.c (revision 343956)
@@ -1,4753 +1,4753 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1989, 1993
* The Regents of the University of California. All rights reserved.
* (c) UNIX System Laboratories, Inc.
* All or some portions of this file are derived from material licensed
* to the University of California by American Telephone and Telegraph
* Co. or Unix System Laboratories, Inc. and are reproduced herein with
* the permission of UNIX System Laboratories, Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)vfs_syscalls.c 8.13 (Berkeley) 4/15/94
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_capsicum.h"
#include "opt_ktrace.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/buf.h>
#include <sys/capsicum.h>
#include <sys/disk.h>
#include <sys/sysent.h>
#include <sys/malloc.h>
#include <sys/mount.h>
#include <sys/mutex.h>
#include <sys/sysproto.h>
#include <sys/namei.h>
#include <sys/filedesc.h>
#include <sys/kernel.h>
#include <sys/fcntl.h>
#include <sys/file.h>
#include <sys/filio.h>
#include <sys/limits.h>
#include <sys/linker.h>
#include <sys/rwlock.h>
#include <sys/sdt.h>
#include <sys/stat.h>
#include <sys/sx.h>
#include <sys/unistd.h>
#include <sys/vnode.h>
#include <sys/priv.h>
#include <sys/proc.h>
#include <sys/dirent.h>
#include <sys/jail.h>
#include <sys/syscallsubr.h>
#include <sys/sysctl.h>
#ifdef KTRACE
#include <sys/ktrace.h>
#endif
#include <machine/stdarg.h>
#include <security/audit/audit.h>
#include <security/mac/mac_framework.h>
#include <vm/vm.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/uma.h>
#include <ufs/ufs/quota.h>
MALLOC_DEFINE(M_FADVISE, "fadvise", "posix_fadvise(2) information");
SDT_PROVIDER_DEFINE(vfs);
SDT_PROBE_DEFINE2(vfs, , stat, mode, "char *", "int");
SDT_PROBE_DEFINE2(vfs, , stat, reg, "char *", "int");
static int kern_chflagsat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, u_long flags, int atflag);
static int setfflags(struct thread *td, struct vnode *, u_long);
static int getutimes(const struct timeval *, enum uio_seg, struct timespec *);
static int getutimens(const struct timespec *, enum uio_seg,
struct timespec *, int *);
static int setutimes(struct thread *td, struct vnode *,
const struct timespec *, int, int);
static int vn_access(struct vnode *vp, int user_flags, struct ucred *cred,
struct thread *td);
static int kern_fhlinkat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, fhandle_t *fhp);
static int kern_getfhat(struct thread *td, int flags, int fd,
const char *path, enum uio_seg pathseg, fhandle_t *fhp);
static int kern_readlink_vp(struct vnode *vp, char *buf, enum uio_seg bufseg,
size_t count, struct thread *td);
static int kern_linkat_vp(struct thread *td, struct vnode *vp, int fd,
const char *path, enum uio_seg segflag);
/*
* Sync each mounted filesystem.
*/
#ifndef _SYS_SYSPROTO_H_
struct sync_args {
int dummy;
};
#endif
/* ARGSUSED */
int
sys_sync(struct thread *td, struct sync_args *uap)
{
struct mount *mp, *nmp;
int save;
mtx_lock(&mountlist_mtx);
for (mp = TAILQ_FIRST(&mountlist); mp != NULL; mp = nmp) {
if (vfs_busy(mp, MBF_NOWAIT | MBF_MNTLSTLOCK)) {
nmp = TAILQ_NEXT(mp, mnt_list);
continue;
}
if ((mp->mnt_flag & MNT_RDONLY) == 0 &&
vn_start_write(NULL, &mp, V_NOWAIT) == 0) {
save = curthread_pflags_set(TDP_SYNCIO);
vfs_msync(mp, MNT_NOWAIT);
VFS_SYNC(mp, MNT_NOWAIT);
curthread_pflags_restore(save);
vn_finished_write(mp);
}
mtx_lock(&mountlist_mtx);
nmp = TAILQ_NEXT(mp, mnt_list);
vfs_unbusy(mp);
}
mtx_unlock(&mountlist_mtx);
return (0);
}
/*
* Change filesystem quotas.
*/
#ifndef _SYS_SYSPROTO_H_
struct quotactl_args {
char *path;
int cmd;
int uid;
caddr_t arg;
};
#endif
int
sys_quotactl(struct thread *td, struct quotactl_args *uap)
{
struct mount *mp;
struct nameidata nd;
int error;
AUDIT_ARG_CMD(uap->cmd);
AUDIT_ARG_UID(uap->uid);
if (!prison_allow(td->td_ucred, PR_ALLOW_QUOTAS))
return (EPERM);
NDINIT(&nd, LOOKUP, FOLLOW | LOCKLEAF | AUDITVNODE1, UIO_USERSPACE,
uap->path, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
mp = nd.ni_vp->v_mount;
vfs_ref(mp);
vput(nd.ni_vp);
error = vfs_busy(mp, 0);
vfs_rel(mp);
if (error != 0)
return (error);
error = VFS_QUOTACTL(mp, uap->cmd, uap->uid, uap->arg);
/*
* Since the quota on operation typically needs to open the
* quota file, the Q_QUOTAON handler needs to unbusy the mount
* point before calling into namei. Otherwise, unmount might be
* started between two vfs_busy() invocations (the first is ours,
* the second is from the mount point cross-walk code in
* lookup()), causing a deadlock.
*
* Require that Q_QUOTAON handles the vfs_busy() reference on
* its own, always returning with an unbusied mount point.
*/
if ((uap->cmd >> SUBCMDSHIFT) != Q_QUOTAON &&
(uap->cmd >> SUBCMDSHIFT) != Q_QUOTAOFF)
vfs_unbusy(mp);
return (error);
}
/*
* Used by statfs conversion routines to scale the block size up if
* necessary so that all of the block counts are <= 'max_size'. Note
* that 'max_size' should be a bitmask, i.e. 2^n - 1 for some non-zero
* value of 'n'.
*/
void
statfs_scale_blocks(struct statfs *sf, long max_size)
{
uint64_t count;
int shift;
KASSERT(powerof2(max_size + 1), ("%s: invalid max_size", __func__));
/*
* Attempt to scale the block counts to give a more accurate
* overview to userland of the ratio of free space to used
* space. To do this, find the largest block count and compute
* a divisor that lets it fit into a signed integer <= max_size.
*/
if (sf->f_bavail < 0)
count = -sf->f_bavail;
else
count = sf->f_bavail;
count = MAX(sf->f_blocks, MAX(sf->f_bfree, count));
if (count <= max_size)
return;
count >>= flsl(max_size);
shift = 0;
while (count > 0) {
shift++;
count >>= 1;
}
sf->f_bsize <<= shift;
sf->f_blocks >>= shift;
sf->f_bfree >>= shift;
sf->f_bavail >>= shift;
}
static int
kern_do_statfs(struct thread *td, struct mount *mp, struct statfs *buf)
{
struct statfs *sp;
int error;
if (mp == NULL)
return (EBADF);
error = vfs_busy(mp, 0);
vfs_rel(mp);
if (error != 0)
return (error);
#ifdef MAC
error = mac_mount_check_stat(td->td_ucred, mp);
if (error != 0)
goto out;
#endif
/*
* Set these in case the underlying filesystem fails to do so.
*/
sp = &mp->mnt_stat;
sp->f_version = STATFS_VERSION;
sp->f_namemax = NAME_MAX;
sp->f_flags = mp->mnt_flag & MNT_VISFLAGMASK;
error = VFS_STATFS(mp, sp);
if (error != 0)
goto out;
*buf = *sp;
if (priv_check(td, PRIV_VFS_GENERATION)) {
buf->f_fsid.val[0] = buf->f_fsid.val[1] = 0;
prison_enforce_statfs(td->td_ucred, mp, buf);
}
out:
vfs_unbusy(mp);
return (error);
}
/*
* Get filesystem statistics.
*/
#ifndef _SYS_SYSPROTO_H_
struct statfs_args {
char *path;
struct statfs *buf;
};
#endif
int
sys_statfs(struct thread *td, struct statfs_args *uap)
{
struct statfs *sfp;
int error;
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_statfs(td, uap->path, UIO_USERSPACE, sfp);
if (error == 0)
error = copyout(sfp, uap->buf, sizeof(struct statfs));
free(sfp, M_STATFS);
return (error);
}
int
kern_statfs(struct thread *td, const char *path, enum uio_seg pathseg,
struct statfs *buf)
{
struct mount *mp;
struct nameidata nd;
int error;
NDINIT(&nd, LOOKUP, FOLLOW | LOCKSHARED | LOCKLEAF | AUDITVNODE1,
pathseg, path, td);
error = namei(&nd);
if (error != 0)
return (error);
mp = nd.ni_vp->v_mount;
vfs_ref(mp);
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_vp);
return (kern_do_statfs(td, mp, buf));
}
/*
* Get filesystem statistics.
*/
#ifndef _SYS_SYSPROTO_H_
struct fstatfs_args {
int fd;
struct statfs *buf;
};
#endif
int
sys_fstatfs(struct thread *td, struct fstatfs_args *uap)
{
struct statfs *sfp;
int error;
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fstatfs(td, uap->fd, sfp);
if (error == 0)
error = copyout(sfp, uap->buf, sizeof(struct statfs));
free(sfp, M_STATFS);
return (error);
}
int
kern_fstatfs(struct thread *td, int fd, struct statfs *buf)
{
struct file *fp;
struct mount *mp;
struct vnode *vp;
int error;
AUDIT_ARG_FD(fd);
error = getvnode(td, fd, &cap_fstatfs_rights, &fp);
if (error != 0)
return (error);
vp = fp->f_vnode;
vn_lock(vp, LK_SHARED | LK_RETRY);
#ifdef AUDIT
AUDIT_ARG_VNODE1(vp);
#endif
mp = vp->v_mount;
if (mp != NULL)
vfs_ref(mp);
VOP_UNLOCK(vp, 0);
fdrop(fp, td);
return (kern_do_statfs(td, mp, buf));
}
/*
* Get statistics on all filesystems.
*/
#ifndef _SYS_SYSPROTO_H_
struct getfsstat_args {
struct statfs *buf;
long bufsize;
int mode;
};
#endif
int
sys_getfsstat(struct thread *td, struct getfsstat_args *uap)
{
size_t count;
int error;
if (uap->bufsize < 0 || uap->bufsize > SIZE_MAX)
return (EINVAL);
error = kern_getfsstat(td, &uap->buf, uap->bufsize, &count,
UIO_USERSPACE, uap->mode);
if (error == 0)
td->td_retval[0] = count;
return (error);
}
/*
* If (bufsize > 0 && bufseg == UIO_SYSSPACE)
* The caller is responsible for freeing memory which will be allocated
* in '*buf'.
*/
int
kern_getfsstat(struct thread *td, struct statfs **buf, size_t bufsize,
size_t *countp, enum uio_seg bufseg, int mode)
{
struct mount *mp, *nmp;
struct statfs *sfsp, *sp, *sptmp, *tofree;
size_t count, maxcount;
int error;
switch (mode) {
case MNT_WAIT:
case MNT_NOWAIT:
break;
default:
if (bufseg == UIO_SYSSPACE)
*buf = NULL;
return (EINVAL);
}
restart:
maxcount = bufsize / sizeof(struct statfs);
if (bufsize == 0) {
sfsp = NULL;
tofree = NULL;
} else if (bufseg == UIO_USERSPACE) {
sfsp = *buf;
tofree = NULL;
} else /* if (bufseg == UIO_SYSSPACE) */ {
count = 0;
mtx_lock(&mountlist_mtx);
TAILQ_FOREACH(mp, &mountlist, mnt_list) {
count++;
}
mtx_unlock(&mountlist_mtx);
if (maxcount > count)
maxcount = count;
tofree = sfsp = *buf = malloc(maxcount * sizeof(struct statfs),
M_STATFS, M_WAITOK);
}
count = 0;
mtx_lock(&mountlist_mtx);
for (mp = TAILQ_FIRST(&mountlist); mp != NULL; mp = nmp) {
if (prison_canseemount(td->td_ucred, mp) != 0) {
nmp = TAILQ_NEXT(mp, mnt_list);
continue;
}
#ifdef MAC
if (mac_mount_check_stat(td->td_ucred, mp) != 0) {
nmp = TAILQ_NEXT(mp, mnt_list);
continue;
}
#endif
if (mode == MNT_WAIT) {
if (vfs_busy(mp, MBF_MNTLSTLOCK) != 0) {
/*
* If vfs_busy() failed, and MBF_NOWAIT
* wasn't passed, then the mp is gone.
* Furthermore, because of MBF_MNTLSTLOCK,
* the mountlist_mtx was dropped. We have
* no other choice than to start over.
*/
mtx_unlock(&mountlist_mtx);
free(tofree, M_STATFS);
goto restart;
}
} else {
if (vfs_busy(mp, MBF_NOWAIT | MBF_MNTLSTLOCK) != 0) {
nmp = TAILQ_NEXT(mp, mnt_list);
continue;
}
}
if (sfsp != NULL && count < maxcount) {
sp = &mp->mnt_stat;
/*
* Set these in case the underlying filesystem
* fails to do so.
*/
sp->f_version = STATFS_VERSION;
sp->f_namemax = NAME_MAX;
sp->f_flags = mp->mnt_flag & MNT_VISFLAGMASK;
/*
* If MNT_NOWAIT is specified, do not refresh
* the fsstat cache.
*/
if (mode != MNT_NOWAIT) {
error = VFS_STATFS(mp, sp);
if (error != 0) {
mtx_lock(&mountlist_mtx);
nmp = TAILQ_NEXT(mp, mnt_list);
vfs_unbusy(mp);
continue;
}
}
if (priv_check(td, PRIV_VFS_GENERATION)) {
sptmp = malloc(sizeof(struct statfs), M_STATFS,
M_WAITOK);
*sptmp = *sp;
sptmp->f_fsid.val[0] = sptmp->f_fsid.val[1] = 0;
prison_enforce_statfs(td->td_ucred, mp, sptmp);
sp = sptmp;
} else
sptmp = NULL;
if (bufseg == UIO_SYSSPACE) {
bcopy(sp, sfsp, sizeof(*sp));
free(sptmp, M_STATFS);
} else /* if (bufseg == UIO_USERSPACE) */ {
error = copyout(sp, sfsp, sizeof(*sp));
free(sptmp, M_STATFS);
if (error != 0) {
vfs_unbusy(mp);
return (error);
}
}
sfsp++;
}
count++;
mtx_lock(&mountlist_mtx);
nmp = TAILQ_NEXT(mp, mnt_list);
vfs_unbusy(mp);
}
mtx_unlock(&mountlist_mtx);
if (sfsp != NULL && count > maxcount)
*countp = maxcount;
else
*countp = count;
return (0);
}
#ifdef COMPAT_FREEBSD4
/*
* Get old format filesystem statistics.
*/
static void freebsd4_cvtstatfs(struct statfs *, struct ostatfs *);
#ifndef _SYS_SYSPROTO_H_
struct freebsd4_statfs_args {
char *path;
struct ostatfs *buf;
};
#endif
int
freebsd4_statfs(struct thread *td, struct freebsd4_statfs_args *uap)
{
struct ostatfs osb;
struct statfs *sfp;
int error;
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_statfs(td, uap->path, UIO_USERSPACE, sfp);
if (error == 0) {
freebsd4_cvtstatfs(sfp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
}
free(sfp, M_STATFS);
return (error);
}
/*
* Get filesystem statistics.
*/
#ifndef _SYS_SYSPROTO_H_
struct freebsd4_fstatfs_args {
int fd;
struct ostatfs *buf;
};
#endif
int
freebsd4_fstatfs(struct thread *td, struct freebsd4_fstatfs_args *uap)
{
struct ostatfs osb;
struct statfs *sfp;
int error;
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fstatfs(td, uap->fd, sfp);
if (error == 0) {
freebsd4_cvtstatfs(sfp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
}
free(sfp, M_STATFS);
return (error);
}
/*
* Get statistics on all filesystems.
*/
#ifndef _SYS_SYSPROTO_H_
struct freebsd4_getfsstat_args {
struct ostatfs *buf;
long bufsize;
int mode;
};
#endif
int
freebsd4_getfsstat(struct thread *td, struct freebsd4_getfsstat_args *uap)
{
struct statfs *buf, *sp;
struct ostatfs osb;
size_t count, size;
int error;
if (uap->bufsize < 0)
return (EINVAL);
count = uap->bufsize / sizeof(struct ostatfs);
if (count > SIZE_MAX / sizeof(struct statfs))
return (EINVAL);
size = count * sizeof(struct statfs);
error = kern_getfsstat(td, &buf, size, &count, UIO_SYSSPACE,
uap->mode);
if (error == 0)
td->td_retval[0] = count;
if (size != 0) {
sp = buf;
while (count != 0 && error == 0) {
freebsd4_cvtstatfs(sp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
sp++;
uap->buf++;
count--;
}
free(buf, M_STATFS);
}
return (error);
}
/*
* Implement fstatfs() for (NFS) file handles.
*/
#ifndef _SYS_SYSPROTO_H_
struct freebsd4_fhstatfs_args {
struct fhandle *u_fhp;
struct ostatfs *buf;
};
#endif
int
freebsd4_fhstatfs(struct thread *td, struct freebsd4_fhstatfs_args *uap)
{
struct ostatfs osb;
struct statfs *sfp;
fhandle_t fh;
int error;
error = copyin(uap->u_fhp, &fh, sizeof(fhandle_t));
if (error != 0)
return (error);
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fhstatfs(td, fh, sfp);
if (error == 0) {
freebsd4_cvtstatfs(sfp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
}
free(sfp, M_STATFS);
return (error);
}
/*
* Convert a new format statfs structure to an old format statfs structure.
*/
static void
freebsd4_cvtstatfs(struct statfs *nsp, struct ostatfs *osp)
{
statfs_scale_blocks(nsp, LONG_MAX);
bzero(osp, sizeof(*osp));
osp->f_bsize = nsp->f_bsize;
osp->f_iosize = MIN(nsp->f_iosize, LONG_MAX);
osp->f_blocks = nsp->f_blocks;
osp->f_bfree = nsp->f_bfree;
osp->f_bavail = nsp->f_bavail;
osp->f_files = MIN(nsp->f_files, LONG_MAX);
osp->f_ffree = MIN(nsp->f_ffree, LONG_MAX);
osp->f_owner = nsp->f_owner;
osp->f_type = nsp->f_type;
osp->f_flags = nsp->f_flags;
osp->f_syncwrites = MIN(nsp->f_syncwrites, LONG_MAX);
osp->f_asyncwrites = MIN(nsp->f_asyncwrites, LONG_MAX);
osp->f_syncreads = MIN(nsp->f_syncreads, LONG_MAX);
osp->f_asyncreads = MIN(nsp->f_asyncreads, LONG_MAX);
strlcpy(osp->f_fstypename, nsp->f_fstypename,
MIN(MFSNAMELEN, OMFSNAMELEN));
strlcpy(osp->f_mntonname, nsp->f_mntonname,
MIN(MNAMELEN, OMNAMELEN));
strlcpy(osp->f_mntfromname, nsp->f_mntfromname,
MIN(MNAMELEN, OMNAMELEN));
osp->f_fsid = nsp->f_fsid;
}
#endif /* COMPAT_FREEBSD4 */
#if defined(COMPAT_FREEBSD11)
/*
* Get old format filesystem statistics.
*/
static void freebsd11_cvtstatfs(struct statfs *, struct freebsd11_statfs *);
int
freebsd11_statfs(struct thread *td, struct freebsd11_statfs_args *uap)
{
struct freebsd11_statfs osb;
struct statfs *sfp;
int error;
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_statfs(td, uap->path, UIO_USERSPACE, sfp);
if (error == 0) {
freebsd11_cvtstatfs(sfp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
}
free(sfp, M_STATFS);
return (error);
}
/*
* Get filesystem statistics.
*/
int
freebsd11_fstatfs(struct thread *td, struct freebsd11_fstatfs_args *uap)
{
struct freebsd11_statfs osb;
struct statfs *sfp;
int error;
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fstatfs(td, uap->fd, sfp);
if (error == 0) {
freebsd11_cvtstatfs(sfp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
}
free(sfp, M_STATFS);
return (error);
}
/*
* Get statistics on all filesystems.
*/
int
freebsd11_getfsstat(struct thread *td, struct freebsd11_getfsstat_args *uap)
{
struct freebsd11_statfs osb;
struct statfs *buf, *sp;
size_t count, size;
int error;
count = uap->bufsize / sizeof(struct ostatfs);
size = count * sizeof(struct statfs);
error = kern_getfsstat(td, &buf, size, &count, UIO_SYSSPACE,
uap->mode);
if (error == 0)
td->td_retval[0] = count;
if (size > 0) {
sp = buf;
while (count > 0 && error == 0) {
freebsd11_cvtstatfs(sp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
sp++;
uap->buf++;
count--;
}
free(buf, M_STATFS);
}
return (error);
}
/*
* Implement fstatfs() for (NFS) file handles.
*/
int
freebsd11_fhstatfs(struct thread *td, struct freebsd11_fhstatfs_args *uap)
{
struct freebsd11_statfs osb;
struct statfs *sfp;
fhandle_t fh;
int error;
error = copyin(uap->u_fhp, &fh, sizeof(fhandle_t));
if (error)
return (error);
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fhstatfs(td, fh, sfp);
if (error == 0) {
freebsd11_cvtstatfs(sfp, &osb);
error = copyout(&osb, uap->buf, sizeof(osb));
}
free(sfp, M_STATFS);
return (error);
}
/*
* Convert a new format statfs structure to an old format statfs structure.
*/
static void
freebsd11_cvtstatfs(struct statfs *nsp, struct freebsd11_statfs *osp)
{
bzero(osp, sizeof(*osp));
osp->f_version = FREEBSD11_STATFS_VERSION;
osp->f_type = nsp->f_type;
osp->f_flags = nsp->f_flags;
osp->f_bsize = nsp->f_bsize;
osp->f_iosize = nsp->f_iosize;
osp->f_blocks = nsp->f_blocks;
osp->f_bfree = nsp->f_bfree;
osp->f_bavail = nsp->f_bavail;
osp->f_files = nsp->f_files;
osp->f_ffree = nsp->f_ffree;
osp->f_syncwrites = nsp->f_syncwrites;
osp->f_asyncwrites = nsp->f_asyncwrites;
osp->f_syncreads = nsp->f_syncreads;
osp->f_asyncreads = nsp->f_asyncreads;
osp->f_namemax = nsp->f_namemax;
osp->f_owner = nsp->f_owner;
osp->f_fsid = nsp->f_fsid;
strlcpy(osp->f_fstypename, nsp->f_fstypename,
MIN(MFSNAMELEN, sizeof(osp->f_fstypename)));
strlcpy(osp->f_mntonname, nsp->f_mntonname,
MIN(MNAMELEN, sizeof(osp->f_mntonname)));
strlcpy(osp->f_mntfromname, nsp->f_mntfromname,
MIN(MNAMELEN, sizeof(osp->f_mntfromname)));
}
#endif /* COMPAT_FREEBSD11 */
/*
* Change current working directory to a given file descriptor.
*/
#ifndef _SYS_SYSPROTO_H_
struct fchdir_args {
int fd;
};
#endif
int
sys_fchdir(struct thread *td, struct fchdir_args *uap)
{
struct vnode *vp, *tdp;
struct mount *mp;
struct file *fp;
int error;
AUDIT_ARG_FD(uap->fd);
error = getvnode(td, uap->fd, &cap_fchdir_rights,
&fp);
if (error != 0)
return (error);
vp = fp->f_vnode;
vrefact(vp);
fdrop(fp, td);
vn_lock(vp, LK_SHARED | LK_RETRY);
AUDIT_ARG_VNODE1(vp);
error = change_dir(vp, td);
while (!error && (mp = vp->v_mountedhere) != NULL) {
if (vfs_busy(mp, 0))
continue;
error = VFS_ROOT(mp, LK_SHARED, &tdp);
vfs_unbusy(mp);
if (error != 0)
break;
vput(vp);
vp = tdp;
}
if (error != 0) {
vput(vp);
return (error);
}
VOP_UNLOCK(vp, 0);
pwd_chdir(td, vp);
return (0);
}
/*
* Change current working directory (``.'').
*/
#ifndef _SYS_SYSPROTO_H_
struct chdir_args {
char *path;
};
#endif
int
sys_chdir(struct thread *td, struct chdir_args *uap)
{
return (kern_chdir(td, uap->path, UIO_USERSPACE));
}
int
kern_chdir(struct thread *td, const char *path, enum uio_seg pathseg)
{
struct nameidata nd;
int error;
NDINIT(&nd, LOOKUP, FOLLOW | LOCKSHARED | LOCKLEAF | AUDITVNODE1,
pathseg, path, td);
if ((error = namei(&nd)) != 0)
return (error);
if ((error = change_dir(nd.ni_vp, td)) != 0) {
vput(nd.ni_vp);
NDFREE(&nd, NDF_ONLY_PNBUF);
return (error);
}
VOP_UNLOCK(nd.ni_vp, 0);
NDFREE(&nd, NDF_ONLY_PNBUF);
pwd_chdir(td, nd.ni_vp);
return (0);
}
/*
* Change notion of root (``/'') directory.
*/
#ifndef _SYS_SYSPROTO_H_
struct chroot_args {
char *path;
};
#endif
int
sys_chroot(struct thread *td, struct chroot_args *uap)
{
struct nameidata nd;
int error;
error = priv_check(td, PRIV_VFS_CHROOT);
if (error != 0)
return (error);
NDINIT(&nd, LOOKUP, FOLLOW | LOCKSHARED | LOCKLEAF | AUDITVNODE1,
UIO_USERSPACE, uap->path, td);
error = namei(&nd);
if (error != 0)
goto error;
error = change_dir(nd.ni_vp, td);
if (error != 0)
goto e_vunlock;
#ifdef MAC
error = mac_vnode_check_chroot(td->td_ucred, nd.ni_vp);
if (error != 0)
goto e_vunlock;
#endif
VOP_UNLOCK(nd.ni_vp, 0);
error = pwd_chroot(td, nd.ni_vp);
vrele(nd.ni_vp);
NDFREE(&nd, NDF_ONLY_PNBUF);
return (error);
e_vunlock:
vput(nd.ni_vp);
error:
NDFREE(&nd, NDF_ONLY_PNBUF);
return (error);
}
/*
* Common routine for chroot and chdir. Callers must provide a locked vnode
* instance.
*/
int
change_dir(struct vnode *vp, struct thread *td)
{
#ifdef MAC
int error;
#endif
ASSERT_VOP_LOCKED(vp, "change_dir(): vp not locked");
if (vp->v_type != VDIR)
return (ENOTDIR);
#ifdef MAC
error = mac_vnode_check_chdir(td->td_ucred, vp);
if (error != 0)
return (error);
#endif
return (VOP_ACCESS(vp, VEXEC, td->td_ucred, td));
}
static __inline void
flags_to_rights(int flags, cap_rights_t *rightsp)
{
if (flags & O_EXEC) {
cap_rights_set(rightsp, CAP_FEXECVE);
} else {
switch ((flags & O_ACCMODE)) {
case O_RDONLY:
cap_rights_set(rightsp, CAP_READ);
break;
case O_RDWR:
cap_rights_set(rightsp, CAP_READ);
/* FALLTHROUGH */
case O_WRONLY:
cap_rights_set(rightsp, CAP_WRITE);
if (!(flags & (O_APPEND | O_TRUNC)))
cap_rights_set(rightsp, CAP_SEEK);
break;
}
}
if (flags & O_CREAT)
cap_rights_set(rightsp, CAP_CREATE);
if (flags & O_TRUNC)
cap_rights_set(rightsp, CAP_FTRUNCATE);
if (flags & (O_SYNC | O_FSYNC))
cap_rights_set(rightsp, CAP_FSYNC);
if (flags & (O_EXLOCK | O_SHLOCK))
cap_rights_set(rightsp, CAP_FLOCK);
}
/*
* Check permissions, allocate an open file structure, and call the device
* open routine if any.
*/
#ifndef _SYS_SYSPROTO_H_
struct open_args {
char *path;
int flags;
int mode;
};
#endif
int
sys_open(struct thread *td, struct open_args *uap)
{
return (kern_openat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->flags, uap->mode));
}
#ifndef _SYS_SYSPROTO_H_
struct openat_args {
int fd;
char *path;
int flag;
int mode;
};
#endif
int
sys_openat(struct thread *td, struct openat_args *uap)
{
AUDIT_ARG_FD(uap->fd);
return (kern_openat(td, uap->fd, uap->path, UIO_USERSPACE, uap->flag,
uap->mode));
}
int
kern_openat(struct thread *td, int fd, const char *path, enum uio_seg pathseg,
int flags, int mode)
{
struct proc *p = td->td_proc;
struct filedesc *fdp = p->p_fd;
struct file *fp;
struct vnode *vp;
struct nameidata nd;
cap_rights_t rights;
int cmode, error, indx;
indx = -1;
AUDIT_ARG_FFLAGS(flags);
AUDIT_ARG_MODE(mode);
cap_rights_init(&rights, CAP_LOOKUP);
flags_to_rights(flags, &rights);
/*
* Only one of the O_EXEC, O_RDONLY, O_WRONLY and O_RDWR flags
* may be specified.
*/
if (flags & O_EXEC) {
if (flags & O_ACCMODE)
return (EINVAL);
} else if ((flags & O_ACCMODE) == O_ACCMODE) {
return (EINVAL);
} else {
flags = FFLAGS(flags);
}
/*
* Allocate a file structure. The descriptor to reference it
* is allocated and set by finstall() below.
*/
error = falloc_noinstall(td, &fp);
if (error != 0)
return (error);
/*
* An extra reference on `fp' has been held for us by
* falloc_noinstall().
*/
/* Set the flags early so the finit in devfs can pick them up. */
fp->f_flag = flags & FMASK;
cmode = ((mode & ~fdp->fd_cmask) & ALLPERMS) & ~S_ISTXT;
NDINIT_ATRIGHTS(&nd, LOOKUP, FOLLOW | AUDITVNODE1, pathseg, path, fd,
&rights, td);
td->td_dupfd = -1; /* XXX check for fdopen */
error = vn_open(&nd, &flags, cmode, fp);
if (error != 0) {
/*
* If vn_open replaced the method vector, something
* wondrous happened deep below and we just pass it up
* pretending we know what we do.
*/
if (error == ENXIO && fp->f_ops != &badfileops)
goto success;
/*
* Handle special fdopen() case. bleh.
*
* Don't do this for relative (capability) lookups; we don't
* understand exactly what would happen, and we don't think
* that it ever should.
*/
if ((nd.ni_lcf & NI_LCF_STRICTRELATIVE) == 0 &&
(error == ENODEV || error == ENXIO) &&
td->td_dupfd >= 0) {
error = dupfdopen(td, fdp, td->td_dupfd, flags, error,
&indx);
if (error == 0)
goto success;
}
goto bad;
}
td->td_dupfd = 0;
NDFREE(&nd, NDF_ONLY_PNBUF);
vp = nd.ni_vp;
/*
* Store the vnode, for any f_type. Typically, the vnode use
* count is decremented by direct call to vn_closefile() for
* files that switched type in the cdevsw fdopen() method.
*/
fp->f_vnode = vp;
/*
* If the file wasn't claimed by devfs bind it to the normal
* vnode operations here.
*/
if (fp->f_ops == &badfileops) {
KASSERT(vp->v_type != VFIFO, ("Unexpected fifo."));
fp->f_seqcount = 1;
finit(fp, (flags & FMASK) | (fp->f_flag & FHASLOCK),
DTYPE_VNODE, vp, &vnops);
}
VOP_UNLOCK(vp, 0);
if (flags & O_TRUNC) {
error = fo_truncate(fp, 0, td->td_ucred, td);
if (error != 0)
goto bad;
}
success:
/*
* If we haven't already installed the FD (for dupfdopen), do so now.
*/
if (indx == -1) {
struct filecaps *fcaps;
#ifdef CAPABILITIES
if ((nd.ni_lcf & NI_LCF_STRICTRELATIVE) != 0)
fcaps = &nd.ni_filecaps;
else
#endif
fcaps = NULL;
error = finstall(td, fp, &indx, flags, fcaps);
/* On success finstall() consumes fcaps. */
if (error != 0) {
filecaps_free(&nd.ni_filecaps);
goto bad;
}
} else {
filecaps_free(&nd.ni_filecaps);
}
/*
* Release our private reference, leaving the one associated with
* the descriptor table intact.
*/
fdrop(fp, td);
td->td_retval[0] = indx;
return (0);
bad:
KASSERT(indx == -1, ("indx=%d, should be -1", indx));
fdrop(fp, td);
return (error);
}
#ifdef COMPAT_43
/*
* Create a file.
*/
#ifndef _SYS_SYSPROTO_H_
struct ocreat_args {
char *path;
int mode;
};
#endif
int
ocreat(struct thread *td, struct ocreat_args *uap)
{
return (kern_openat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
O_WRONLY | O_CREAT | O_TRUNC, uap->mode));
}
#endif /* COMPAT_43 */
/*
* Create a special file.
*/
#ifndef _SYS_SYSPROTO_H_
struct mknodat_args {
int fd;
char *path;
mode_t mode;
dev_t dev;
};
#endif
int
sys_mknodat(struct thread *td, struct mknodat_args *uap)
{
return (kern_mknodat(td, uap->fd, uap->path, UIO_USERSPACE, uap->mode,
uap->dev));
}
#if defined(COMPAT_FREEBSD11)
int
freebsd11_mknod(struct thread *td,
struct freebsd11_mknod_args *uap)
{
return (kern_mknodat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->mode, uap->dev));
}
int
freebsd11_mknodat(struct thread *td,
struct freebsd11_mknodat_args *uap)
{
return (kern_mknodat(td, uap->fd, uap->path, UIO_USERSPACE, uap->mode,
uap->dev));
}
#endif /* COMPAT_FREEBSD11 */
int
kern_mknodat(struct thread *td, int fd, const char *path, enum uio_seg pathseg,
int mode, dev_t dev)
{
struct vnode *vp;
struct mount *mp;
struct vattr vattr;
struct nameidata nd;
int error, whiteout = 0;
AUDIT_ARG_MODE(mode);
AUDIT_ARG_DEV(dev);
switch (mode & S_IFMT) {
case S_IFCHR:
case S_IFBLK:
error = priv_check(td, PRIV_VFS_MKNOD_DEV);
if (error == 0 && dev == VNOVAL)
error = EINVAL;
break;
case S_IFWHT:
error = priv_check(td, PRIV_VFS_MKNOD_WHT);
break;
case S_IFIFO:
if (dev == 0)
return (kern_mkfifoat(td, fd, path, pathseg, mode));
/* FALLTHROUGH */
default:
error = EINVAL;
break;
}
if (error != 0)
return (error);
restart:
bwillwrite();
NDINIT_ATRIGHTS(&nd, CREATE, LOCKPARENT | SAVENAME | AUDITVNODE1 |
NOCACHE, pathseg, path, fd, &cap_mknodat_rights,
td);
if ((error = namei(&nd)) != 0)
return (error);
vp = nd.ni_vp;
if (vp != NULL) {
NDFREE(&nd, NDF_ONLY_PNBUF);
if (vp == nd.ni_dvp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
vrele(vp);
return (EEXIST);
} else {
VATTR_NULL(&vattr);
vattr.va_mode = (mode & ALLPERMS) &
~td->td_proc->p_fd->fd_cmask;
vattr.va_rdev = dev;
whiteout = 0;
switch (mode & S_IFMT) {
case S_IFCHR:
vattr.va_type = VCHR;
break;
case S_IFBLK:
vattr.va_type = VBLK;
break;
case S_IFWHT:
whiteout = 1;
break;
default:
panic("kern_mknodat: invalid mode");
}
}
if (vn_start_write(nd.ni_dvp, &mp, V_NOWAIT) != 0) {
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if ((error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH)) != 0)
return (error);
goto restart;
}
#ifdef MAC
if (error == 0 && !whiteout)
error = mac_vnode_check_create(td->td_ucred, nd.ni_dvp,
&nd.ni_cnd, &vattr);
#endif
if (error == 0) {
if (whiteout)
error = VOP_WHITEOUT(nd.ni_dvp, &nd.ni_cnd, CREATE);
else {
error = VOP_MKNOD(nd.ni_dvp, &nd.ni_vp,
&nd.ni_cnd, &vattr);
if (error == 0)
vput(nd.ni_vp);
}
}
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
vn_finished_write(mp);
return (error);
}
/*
* Create a named pipe.
*/
#ifndef _SYS_SYSPROTO_H_
struct mkfifo_args {
char *path;
int mode;
};
#endif
int
sys_mkfifo(struct thread *td, struct mkfifo_args *uap)
{
return (kern_mkfifoat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->mode));
}
#ifndef _SYS_SYSPROTO_H_
struct mkfifoat_args {
int fd;
char *path;
mode_t mode;
};
#endif
int
sys_mkfifoat(struct thread *td, struct mkfifoat_args *uap)
{
return (kern_mkfifoat(td, uap->fd, uap->path, UIO_USERSPACE,
uap->mode));
}
int
kern_mkfifoat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, int mode)
{
struct mount *mp;
struct vattr vattr;
struct nameidata nd;
int error;
AUDIT_ARG_MODE(mode);
restart:
bwillwrite();
NDINIT_ATRIGHTS(&nd, CREATE, LOCKPARENT | SAVENAME | AUDITVNODE1 |
NOCACHE, pathseg, path, fd, &cap_mkfifoat_rights,
td);
if ((error = namei(&nd)) != 0)
return (error);
if (nd.ni_vp != NULL) {
NDFREE(&nd, NDF_ONLY_PNBUF);
if (nd.ni_vp == nd.ni_dvp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
vrele(nd.ni_vp);
return (EEXIST);
}
if (vn_start_write(nd.ni_dvp, &mp, V_NOWAIT) != 0) {
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if ((error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH)) != 0)
return (error);
goto restart;
}
VATTR_NULL(&vattr);
vattr.va_type = VFIFO;
vattr.va_mode = (mode & ALLPERMS) & ~td->td_proc->p_fd->fd_cmask;
#ifdef MAC
error = mac_vnode_check_create(td->td_ucred, nd.ni_dvp, &nd.ni_cnd,
&vattr);
if (error != 0)
goto out;
#endif
error = VOP_MKNOD(nd.ni_dvp, &nd.ni_vp, &nd.ni_cnd, &vattr);
if (error == 0)
vput(nd.ni_vp);
#ifdef MAC
out:
#endif
vput(nd.ni_dvp);
vn_finished_write(mp);
NDFREE(&nd, NDF_ONLY_PNBUF);
return (error);
}
/*
* Make a hard file link.
*/
#ifndef _SYS_SYSPROTO_H_
struct link_args {
char *path;
char *link;
};
#endif
int
sys_link(struct thread *td, struct link_args *uap)
{
return (kern_linkat(td, AT_FDCWD, AT_FDCWD, uap->path, uap->link,
UIO_USERSPACE, FOLLOW));
}
#ifndef _SYS_SYSPROTO_H_
struct linkat_args {
int fd1;
char *path1;
int fd2;
char *path2;
int flag;
};
#endif
int
sys_linkat(struct thread *td, struct linkat_args *uap)
{
int flag;
flag = uap->flag;
if ((flag & ~(AT_SYMLINK_FOLLOW | AT_BENEATH)) != 0)
return (EINVAL);
return (kern_linkat(td, uap->fd1, uap->fd2, uap->path1, uap->path2,
UIO_USERSPACE, ((flag & AT_SYMLINK_FOLLOW) != 0 ? FOLLOW :
NOFOLLOW) | ((flag & AT_BENEATH) != 0 ? BENEATH : 0)));
}
static int hardlink_check_uid = 0;
SYSCTL_INT(_security_bsd, OID_AUTO, hardlink_check_uid, CTLFLAG_RW,
&hardlink_check_uid, 0,
"Unprivileged processes cannot create hard links to files owned by other "
"users");
static int hardlink_check_gid = 0;
SYSCTL_INT(_security_bsd, OID_AUTO, hardlink_check_gid, CTLFLAG_RW,
&hardlink_check_gid, 0,
"Unprivileged processes cannot create hard links to files owned by other "
"groups");
static int
can_hardlink(struct vnode *vp, struct ucred *cred)
{
struct vattr va;
int error;
if (!hardlink_check_uid && !hardlink_check_gid)
return (0);
error = VOP_GETATTR(vp, &va, cred);
if (error != 0)
return (error);
if (hardlink_check_uid && cred->cr_uid != va.va_uid) {
error = priv_check_cred(cred, PRIV_VFS_LINK);
if (error != 0)
return (error);
}
if (hardlink_check_gid && !groupmember(va.va_gid, cred)) {
error = priv_check_cred(cred, PRIV_VFS_LINK);
if (error != 0)
return (error);
}
return (0);
}
int
kern_linkat(struct thread *td, int fd1, int fd2, const char *path1,
const char *path2, enum uio_seg segflag, int follow)
{
struct nameidata nd;
int error;
do {
bwillwrite();
NDINIT_ATRIGHTS(&nd, LOOKUP, follow | AUDITVNODE1, segflag,
path1, fd1, &cap_linkat_source_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = kern_linkat_vp(td, nd.ni_vp, fd2, path2, segflag);
} while (error == EAGAIN);
return (error);
}
static int
kern_linkat_vp(struct thread *td, struct vnode *vp, int fd, const char *path,
enum uio_seg segflag)
{
struct nameidata nd;
struct mount *mp;
int error;
if (vp->v_type == VDIR) {
vrele(vp);
return (EPERM); /* POSIX */
}
NDINIT_ATRIGHTS(&nd, CREATE,
LOCKPARENT | SAVENAME | AUDITVNODE2 | NOCACHE, segflag, path, fd,
&cap_linkat_target_rights, td);
if ((error = namei(&nd)) == 0) {
if (nd.ni_vp != NULL) {
NDFREE(&nd, NDF_ONLY_PNBUF);
if (nd.ni_dvp == nd.ni_vp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
vrele(nd.ni_vp);
vrele(vp);
return (EEXIST);
} else if (nd.ni_dvp->v_mount != vp->v_mount) {
/*
* Cross-device link. No need to recheck
* vp->v_type, since it cannot change, except
* to VBAD.
*/
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
vrele(vp);
return (EXDEV);
} else if ((error = vn_lock(vp, LK_EXCLUSIVE)) == 0) {
error = can_hardlink(vp, td->td_ucred);
#ifdef MAC
if (error == 0)
error = mac_vnode_check_link(td->td_ucred,
nd.ni_dvp, vp, &nd.ni_cnd);
#endif
if (error != 0) {
vput(vp);
vput(nd.ni_dvp);
NDFREE(&nd, NDF_ONLY_PNBUF);
return (error);
}
error = vn_start_write(vp, &mp, V_NOWAIT);
if (error != 0) {
vput(vp);
vput(nd.ni_dvp);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = vn_start_write(NULL, &mp,
V_XSLEEP | PCATCH);
if (error != 0)
return (error);
return (EAGAIN);
}
error = VOP_LINK(nd.ni_dvp, vp, &nd.ni_cnd);
VOP_UNLOCK(vp, 0);
vput(nd.ni_dvp);
vn_finished_write(mp);
NDFREE(&nd, NDF_ONLY_PNBUF);
} else {
vput(nd.ni_dvp);
NDFREE(&nd, NDF_ONLY_PNBUF);
vrele(vp);
return (EAGAIN);
}
}
vrele(vp);
return (error);
}
/*
* Make a symbolic link.
*/
#ifndef _SYS_SYSPROTO_H_
struct symlink_args {
char *path;
char *link;
};
#endif
int
sys_symlink(struct thread *td, struct symlink_args *uap)
{
return (kern_symlinkat(td, uap->path, AT_FDCWD, uap->link,
UIO_USERSPACE));
}
#ifndef _SYS_SYSPROTO_H_
struct symlinkat_args {
char *path1;
int fd;
char *path2;
};
#endif
int
sys_symlinkat(struct thread *td, struct symlinkat_args *uap)
{
return (kern_symlinkat(td, uap->path1, uap->fd, uap->path2,
UIO_USERSPACE));
}
int
kern_symlinkat(struct thread *td, const char *path1, int fd, const char *path2,
enum uio_seg segflg)
{
struct mount *mp;
struct vattr vattr;
const char *syspath;
char *tmppath;
struct nameidata nd;
int error;
if (segflg == UIO_SYSSPACE) {
syspath = path1;
} else {
tmppath = uma_zalloc(namei_zone, M_WAITOK);
if ((error = copyinstr(path1, tmppath, MAXPATHLEN, NULL)) != 0)
goto out;
syspath = tmppath;
}
AUDIT_ARG_TEXT(syspath);
restart:
bwillwrite();
NDINIT_ATRIGHTS(&nd, CREATE, LOCKPARENT | SAVENAME | AUDITVNODE1 |
NOCACHE, segflg, path2, fd, &cap_symlinkat_rights,
td);
if ((error = namei(&nd)) != 0)
goto out;
if (nd.ni_vp) {
NDFREE(&nd, NDF_ONLY_PNBUF);
if (nd.ni_vp == nd.ni_dvp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
vrele(nd.ni_vp);
error = EEXIST;
goto out;
}
if (vn_start_write(nd.ni_dvp, &mp, V_NOWAIT) != 0) {
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if ((error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH)) != 0)
goto out;
goto restart;
}
VATTR_NULL(&vattr);
vattr.va_mode = ACCESSPERMS & ~td->td_proc->p_fd->fd_cmask;
#ifdef MAC
vattr.va_type = VLNK;
error = mac_vnode_check_create(td->td_ucred, nd.ni_dvp, &nd.ni_cnd,
&vattr);
if (error != 0)
goto out2;
#endif
error = VOP_SYMLINK(nd.ni_dvp, &nd.ni_vp, &nd.ni_cnd, &vattr, syspath);
if (error == 0)
vput(nd.ni_vp);
#ifdef MAC
out2:
#endif
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
vn_finished_write(mp);
out:
if (segflg != UIO_SYSSPACE)
uma_zfree(namei_zone, tmppath);
return (error);
}
/*
* Delete a whiteout from the filesystem.
*/
#ifndef _SYS_SYSPROTO_H_
struct undelete_args {
char *path;
};
#endif
int
sys_undelete(struct thread *td, struct undelete_args *uap)
{
struct mount *mp;
struct nameidata nd;
int error;
restart:
bwillwrite();
NDINIT(&nd, DELETE, LOCKPARENT | DOWHITEOUT | AUDITVNODE1,
UIO_USERSPACE, uap->path, td);
error = namei(&nd);
if (error != 0)
return (error);
if (nd.ni_vp != NULLVP || !(nd.ni_cnd.cn_flags & ISWHITEOUT)) {
NDFREE(&nd, NDF_ONLY_PNBUF);
if (nd.ni_vp == nd.ni_dvp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
if (nd.ni_vp)
vrele(nd.ni_vp);
return (EEXIST);
}
if (vn_start_write(nd.ni_dvp, &mp, V_NOWAIT) != 0) {
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if ((error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH)) != 0)
return (error);
goto restart;
}
error = VOP_WHITEOUT(nd.ni_dvp, &nd.ni_cnd, DELETE);
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
vn_finished_write(mp);
return (error);
}
/*
* Delete a name from the filesystem.
*/
#ifndef _SYS_SYSPROTO_H_
struct unlink_args {
char *path;
};
#endif
int
sys_unlink(struct thread *td, struct unlink_args *uap)
{
return (kern_unlinkat(td, AT_FDCWD, uap->path, UIO_USERSPACE, 0, 0));
}
#ifndef _SYS_SYSPROTO_H_
struct unlinkat_args {
int fd;
char *path;
int flag;
};
#endif
int
sys_unlinkat(struct thread *td, struct unlinkat_args *uap)
{
int fd, flag;
const char *path;
flag = uap->flag;
fd = uap->fd;
path = uap->path;
if ((flag & ~(AT_REMOVEDIR | AT_BENEATH)) != 0)
return (EINVAL);
if ((uap->flag & AT_REMOVEDIR) != 0)
return (kern_rmdirat(td, fd, path, UIO_USERSPACE, flag));
else
return (kern_unlinkat(td, fd, path, UIO_USERSPACE, flag, 0));
}
int
kern_unlinkat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, int flag, ino_t oldinum)
{
struct mount *mp;
struct vnode *vp;
struct nameidata nd;
struct stat sb;
int error;
restart:
bwillwrite();
NDINIT_ATRIGHTS(&nd, DELETE, LOCKPARENT | LOCKLEAF | AUDITVNODE1 |
((flag & AT_BENEATH) != 0 ? BENEATH : 0),
pathseg, path, fd, &cap_unlinkat_rights, td);
if ((error = namei(&nd)) != 0)
return (error == EINVAL ? EPERM : error);
vp = nd.ni_vp;
if (vp->v_type == VDIR && oldinum == 0) {
error = EPERM; /* POSIX */
} else if (oldinum != 0 &&
((error = vn_stat(vp, &sb, td->td_ucred, NOCRED, td)) == 0) &&
sb.st_ino != oldinum) {
error = EIDRM; /* Identifier removed */
} else {
/*
* The root of a mounted filesystem cannot be deleted.
*
* XXX: can this only be a VDIR case?
*/
if (vp->v_vflag & VV_ROOT)
error = EBUSY;
}
if (error == 0) {
if (vn_start_write(nd.ni_dvp, &mp, V_NOWAIT) != 0) {
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if (vp == nd.ni_dvp)
vrele(vp);
else
vput(vp);
if ((error = vn_start_write(NULL, &mp,
V_XSLEEP | PCATCH)) != 0)
return (error);
goto restart;
}
#ifdef MAC
error = mac_vnode_check_unlink(td->td_ucred, nd.ni_dvp, vp,
&nd.ni_cnd);
if (error != 0)
goto out;
#endif
vfs_notify_upper(vp, VFS_NOTIFY_UPPER_UNLINK);
error = VOP_REMOVE(nd.ni_dvp, vp, &nd.ni_cnd);
#ifdef MAC
out:
#endif
vn_finished_write(mp);
}
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if (vp == nd.ni_dvp)
vrele(vp);
else
vput(vp);
return (error);
}
/*
* Reposition read/write file offset.
*/
#ifndef _SYS_SYSPROTO_H_
struct lseek_args {
int fd;
int pad;
off_t offset;
int whence;
};
#endif
int
sys_lseek(struct thread *td, struct lseek_args *uap)
{
return (kern_lseek(td, uap->fd, uap->offset, uap->whence));
}
int
kern_lseek(struct thread *td, int fd, off_t offset, int whence)
{
struct file *fp;
int error;
AUDIT_ARG_FD(fd);
error = fget(td, fd, &cap_seek_rights, &fp);
if (error != 0)
return (error);
error = (fp->f_ops->fo_flags & DFLAG_SEEKABLE) != 0 ?
fo_seek(fp, offset, whence, td) : ESPIPE;
fdrop(fp, td);
return (error);
}
#if defined(COMPAT_43)
/*
* Reposition read/write file offset.
*/
#ifndef _SYS_SYSPROTO_H_
struct olseek_args {
int fd;
long offset;
int whence;
};
#endif
int
olseek(struct thread *td, struct olseek_args *uap)
{
return (kern_lseek(td, uap->fd, uap->offset, uap->whence));
}
#endif /* COMPAT_43 */
#if defined(COMPAT_FREEBSD6)
/* Version with the 'pad' argument */
int
freebsd6_lseek(struct thread *td, struct freebsd6_lseek_args *uap)
{
return (kern_lseek(td, uap->fd, uap->offset, uap->whence));
}
#endif
/*
* Check access permissions using passed credentials.
*/
static int
vn_access(struct vnode *vp, int user_flags, struct ucred *cred,
struct thread *td)
{
accmode_t accmode;
int error;
/* Flags == 0 means only check for existence. */
if (user_flags == 0)
return (0);
accmode = 0;
if (user_flags & R_OK)
accmode |= VREAD;
if (user_flags & W_OK)
accmode |= VWRITE;
if (user_flags & X_OK)
accmode |= VEXEC;
#ifdef MAC
error = mac_vnode_check_access(cred, vp, accmode);
if (error != 0)
return (error);
#endif
if ((accmode & VWRITE) == 0 || (error = vn_writechk(vp)) == 0)
error = VOP_ACCESS(vp, accmode, cred, td);
return (error);
}
/*
* Check access permissions using "real" credentials.
*/
#ifndef _SYS_SYSPROTO_H_
struct access_args {
char *path;
int amode;
};
#endif
int
sys_access(struct thread *td, struct access_args *uap)
{
return (kern_accessat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
0, uap->amode));
}
#ifndef _SYS_SYSPROTO_H_
struct faccessat_args {
int fd;
char *path;
int amode;
int flag;
};
#endif
int
sys_faccessat(struct thread *td, struct faccessat_args *uap)
{
return (kern_accessat(td, uap->fd, uap->path, UIO_USERSPACE, uap->flag,
uap->amode));
}
int
kern_accessat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, int flag, int amode)
{
struct ucred *cred, *usecred;
struct vnode *vp;
struct nameidata nd;
int error;
if ((flag & ~(AT_EACCESS | AT_BENEATH)) != 0)
return (EINVAL);
if (amode != F_OK && (amode & ~(R_OK | W_OK | X_OK)) != 0)
return (EINVAL);
/*
* Create and modify a temporary credential instead of one that
* is potentially shared (if we need one).
*/
cred = td->td_ucred;
if ((flag & AT_EACCESS) == 0 &&
((cred->cr_uid != cred->cr_ruid ||
cred->cr_rgid != cred->cr_groups[0]))) {
usecred = crdup(cred);
usecred->cr_uid = cred->cr_ruid;
usecred->cr_groups[0] = cred->cr_rgid;
td->td_ucred = usecred;
} else
usecred = cred;
AUDIT_ARG_VALUE(amode);
NDINIT_ATRIGHTS(&nd, LOOKUP, FOLLOW | LOCKSHARED | LOCKLEAF |
AUDITVNODE1 | ((flag & AT_BENEATH) != 0 ? BENEATH : 0),
pathseg, path, fd, &cap_fstat_rights, td);
if ((error = namei(&nd)) != 0)
goto out;
vp = nd.ni_vp;
error = vn_access(vp, amode, usecred, td);
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(vp);
out:
if (usecred != cred) {
td->td_ucred = cred;
crfree(usecred);
}
return (error);
}
/*
* Check access permissions using "effective" credentials.
*/
#ifndef _SYS_SYSPROTO_H_
struct eaccess_args {
char *path;
int amode;
};
#endif
int
sys_eaccess(struct thread *td, struct eaccess_args *uap)
{
return (kern_accessat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
AT_EACCESS, uap->amode));
}
#if defined(COMPAT_43)
/*
* Get file status; this version follows links.
*/
#ifndef _SYS_SYSPROTO_H_
struct ostat_args {
char *path;
struct ostat *ub;
};
#endif
int
ostat(struct thread *td, struct ostat_args *uap)
{
struct stat sb;
struct ostat osb;
int error;
error = kern_statat(td, 0, AT_FDCWD, uap->path, UIO_USERSPACE,
&sb, NULL);
if (error != 0)
return (error);
cvtstat(&sb, &osb);
return (copyout(&osb, uap->ub, sizeof (osb)));
}
/*
* Get file status; this version does not follow links.
*/
#ifndef _SYS_SYSPROTO_H_
struct olstat_args {
char *path;
struct ostat *ub;
};
#endif
int
olstat(struct thread *td, struct olstat_args *uap)
{
struct stat sb;
struct ostat osb;
int error;
error = kern_statat(td, AT_SYMLINK_NOFOLLOW, AT_FDCWD, uap->path,
UIO_USERSPACE, &sb, NULL);
if (error != 0)
return (error);
cvtstat(&sb, &osb);
return (copyout(&osb, uap->ub, sizeof (osb)));
}
/*
* Convert from a new to an old stat structure.
* XXX: many values are blindly truncated.
*/
void
cvtstat(struct stat *st, struct ostat *ost)
{
bzero(ost, sizeof(*ost));
ost->st_dev = st->st_dev;
ost->st_ino = st->st_ino;
ost->st_mode = st->st_mode;
ost->st_nlink = st->st_nlink;
ost->st_uid = st->st_uid;
ost->st_gid = st->st_gid;
ost->st_rdev = st->st_rdev;
ost->st_size = MIN(st->st_size, INT32_MAX);
ost->st_atim = st->st_atim;
ost->st_mtim = st->st_mtim;
ost->st_ctim = st->st_ctim;
ost->st_blksize = st->st_blksize;
ost->st_blocks = st->st_blocks;
ost->st_flags = st->st_flags;
ost->st_gen = st->st_gen;
}
#endif /* COMPAT_43 */
#if defined(COMPAT_43) || defined(COMPAT_FREEBSD11)
int ino64_trunc_error;
SYSCTL_INT(_vfs, OID_AUTO, ino64_trunc_error, CTLFLAG_RW,
&ino64_trunc_error, 0,
"Error on truncation of device, file or inode number, or link count");
int
freebsd11_cvtstat(struct stat *st, struct freebsd11_stat *ost)
{
ost->st_dev = st->st_dev;
if (ost->st_dev != st->st_dev) {
switch (ino64_trunc_error) {
default:
/*
* Since dev_t is almost raw, don't clamp to the
* maximum for case 2, but ignore the error.
*/
break;
case 1:
return (EOVERFLOW);
}
}
ost->st_ino = st->st_ino;
if (ost->st_ino != st->st_ino) {
switch (ino64_trunc_error) {
default:
case 0:
break;
case 1:
return (EOVERFLOW);
case 2:
ost->st_ino = UINT32_MAX;
break;
}
}
ost->st_mode = st->st_mode;
ost->st_nlink = st->st_nlink;
if (ost->st_nlink != st->st_nlink) {
switch (ino64_trunc_error) {
default:
case 0:
break;
case 1:
return (EOVERFLOW);
case 2:
ost->st_nlink = UINT16_MAX;
break;
}
}
ost->st_uid = st->st_uid;
ost->st_gid = st->st_gid;
ost->st_rdev = st->st_rdev;
if (ost->st_rdev != st->st_rdev) {
switch (ino64_trunc_error) {
default:
break;
case 1:
return (EOVERFLOW);
}
}
ost->st_atim = st->st_atim;
ost->st_mtim = st->st_mtim;
ost->st_ctim = st->st_ctim;
ost->st_size = st->st_size;
ost->st_blocks = st->st_blocks;
ost->st_blksize = st->st_blksize;
ost->st_flags = st->st_flags;
ost->st_gen = st->st_gen;
ost->st_lspare = 0;
ost->st_birthtim = st->st_birthtim;
bzero((char *)&ost->st_birthtim + sizeof(ost->st_birthtim),
sizeof(*ost) - offsetof(struct freebsd11_stat,
st_birthtim) - sizeof(ost->st_birthtim));
return (0);
}
int
freebsd11_stat(struct thread *td, struct freebsd11_stat_args* uap)
{
struct stat sb;
struct freebsd11_stat osb;
int error;
error = kern_statat(td, 0, AT_FDCWD, uap->path, UIO_USERSPACE,
&sb, NULL);
if (error != 0)
return (error);
error = freebsd11_cvtstat(&sb, &osb);
if (error == 0)
error = copyout(&osb, uap->ub, sizeof(osb));
return (error);
}
int
freebsd11_lstat(struct thread *td, struct freebsd11_lstat_args* uap)
{
struct stat sb;
struct freebsd11_stat osb;
int error;
error = kern_statat(td, AT_SYMLINK_NOFOLLOW, AT_FDCWD, uap->path,
UIO_USERSPACE, &sb, NULL);
if (error != 0)
return (error);
error = freebsd11_cvtstat(&sb, &osb);
if (error == 0)
error = copyout(&osb, uap->ub, sizeof(osb));
return (error);
}
int
freebsd11_fhstat(struct thread *td, struct freebsd11_fhstat_args* uap)
{
struct fhandle fh;
struct stat sb;
struct freebsd11_stat osb;
int error;
error = copyin(uap->u_fhp, &fh, sizeof(fhandle_t));
if (error != 0)
return (error);
error = kern_fhstat(td, fh, &sb);
if (error != 0)
return (error);
error = freebsd11_cvtstat(&sb, &osb);
if (error == 0)
error = copyout(&osb, uap->sb, sizeof(osb));
return (error);
}
int
freebsd11_fstatat(struct thread *td, struct freebsd11_fstatat_args* uap)
{
struct stat sb;
struct freebsd11_stat osb;
int error;
error = kern_statat(td, uap->flag, uap->fd, uap->path,
UIO_USERSPACE, &sb, NULL);
if (error != 0)
return (error);
error = freebsd11_cvtstat(&sb, &osb);
if (error == 0)
error = copyout(&osb, uap->buf, sizeof(osb));
return (error);
}
#endif /* COMPAT_43 || COMPAT_FREEBSD11 */
/*
* Get file status
*/
#ifndef _SYS_SYSPROTO_H_
struct fstatat_args {
int fd;
char *path;
struct stat *buf;
int flag;
};
#endif
int
sys_fstatat(struct thread *td, struct fstatat_args *uap)
{
struct stat sb;
int error;
error = kern_statat(td, uap->flag, uap->fd, uap->path,
UIO_USERSPACE, &sb, NULL);
if (error == 0)
error = copyout(&sb, uap->buf, sizeof (sb));
return (error);
}
int
kern_statat(struct thread *td, int flag, int fd, const char *path,
enum uio_seg pathseg, struct stat *sbp,
void (*hook)(struct vnode *vp, struct stat *sbp))
{
struct nameidata nd;
int error;
if ((flag & ~(AT_SYMLINK_NOFOLLOW | AT_BENEATH)) != 0)
return (EINVAL);
NDINIT_ATRIGHTS(&nd, LOOKUP, ((flag & AT_SYMLINK_NOFOLLOW) != 0 ?
NOFOLLOW : FOLLOW) | ((flag & AT_BENEATH) != 0 ? BENEATH : 0) |
LOCKSHARED | LOCKLEAF | AUDITVNODE1, pathseg, path, fd,
&cap_fstat_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
error = vn_stat(nd.ni_vp, sbp, td->td_ucred, NOCRED, td);
if (error == 0) {
SDT_PROBE2(vfs, , stat, mode, path, sbp->st_mode);
if (S_ISREG(sbp->st_mode))
SDT_PROBE2(vfs, , stat, reg, path, pathseg);
if (__predict_false(hook != NULL))
hook(nd.ni_vp, sbp);
}
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_vp);
if (error != 0)
return (error);
#ifdef __STAT_TIME_T_EXT
sbp->st_atim_ext = 0;
sbp->st_mtim_ext = 0;
sbp->st_ctim_ext = 0;
sbp->st_btim_ext = 0;
#endif
#ifdef KTRACE
if (KTRPOINT(td, KTR_STRUCT))
ktrstat(sbp);
#endif
return (0);
}
#if defined(COMPAT_FREEBSD11)
/*
* Implementation of the NetBSD [l]stat() functions.
*/
void
freebsd11_cvtnstat(struct stat *sb, struct nstat *nsb)
{
bzero(nsb, sizeof(*nsb));
nsb->st_dev = sb->st_dev;
nsb->st_ino = sb->st_ino;
nsb->st_mode = sb->st_mode;
nsb->st_nlink = sb->st_nlink;
nsb->st_uid = sb->st_uid;
nsb->st_gid = sb->st_gid;
nsb->st_rdev = sb->st_rdev;
nsb->st_atim = sb->st_atim;
nsb->st_mtim = sb->st_mtim;
nsb->st_ctim = sb->st_ctim;
nsb->st_size = sb->st_size;
nsb->st_blocks = sb->st_blocks;
nsb->st_blksize = sb->st_blksize;
nsb->st_flags = sb->st_flags;
nsb->st_gen = sb->st_gen;
nsb->st_birthtim = sb->st_birthtim;
}
#ifndef _SYS_SYSPROTO_H_
struct freebsd11_nstat_args {
char *path;
struct nstat *ub;
};
#endif
int
freebsd11_nstat(struct thread *td, struct freebsd11_nstat_args *uap)
{
struct stat sb;
struct nstat nsb;
int error;
error = kern_statat(td, 0, AT_FDCWD, uap->path, UIO_USERSPACE,
&sb, NULL);
if (error != 0)
return (error);
freebsd11_cvtnstat(&sb, &nsb);
return (copyout(&nsb, uap->ub, sizeof (nsb)));
}
/*
* NetBSD lstat. Get file status; this version does not follow links.
*/
#ifndef _SYS_SYSPROTO_H_
struct freebsd11_nlstat_args {
char *path;
struct nstat *ub;
};
#endif
int
freebsd11_nlstat(struct thread *td, struct freebsd11_nlstat_args *uap)
{
struct stat sb;
struct nstat nsb;
int error;
error = kern_statat(td, AT_SYMLINK_NOFOLLOW, AT_FDCWD, uap->path,
UIO_USERSPACE, &sb, NULL);
if (error != 0)
return (error);
freebsd11_cvtnstat(&sb, &nsb);
return (copyout(&nsb, uap->ub, sizeof (nsb)));
}
#endif /* COMPAT_FREEBSD11 */
/*
* Get configurable pathname variables.
*/
#ifndef _SYS_SYSPROTO_H_
struct pathconf_args {
char *path;
int name;
};
#endif
int
sys_pathconf(struct thread *td, struct pathconf_args *uap)
{
long value;
int error;
error = kern_pathconf(td, uap->path, UIO_USERSPACE, uap->name, FOLLOW,
&value);
if (error == 0)
td->td_retval[0] = value;
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct lpathconf_args {
char *path;
int name;
};
#endif
int
sys_lpathconf(struct thread *td, struct lpathconf_args *uap)
{
long value;
int error;
error = kern_pathconf(td, uap->path, UIO_USERSPACE, uap->name,
NOFOLLOW, &value);
if (error == 0)
td->td_retval[0] = value;
return (error);
}
int
kern_pathconf(struct thread *td, const char *path, enum uio_seg pathseg,
int name, u_long flags, long *valuep)
{
struct nameidata nd;
int error;
NDINIT(&nd, LOOKUP, LOCKSHARED | LOCKLEAF | AUDITVNODE1 | flags,
pathseg, path, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = VOP_PATHCONF(nd.ni_vp, name, valuep);
vput(nd.ni_vp);
return (error);
}
/*
* Return target name of a symbolic link.
*/
#ifndef _SYS_SYSPROTO_H_
struct readlink_args {
char *path;
char *buf;
size_t count;
};
#endif
int
sys_readlink(struct thread *td, struct readlink_args *uap)
{
return (kern_readlinkat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->buf, UIO_USERSPACE, uap->count));
}
#ifndef _SYS_SYSPROTO_H_
struct readlinkat_args {
int fd;
char *path;
char *buf;
size_t bufsize;
};
#endif
int
sys_readlinkat(struct thread *td, struct readlinkat_args *uap)
{
return (kern_readlinkat(td, uap->fd, uap->path, UIO_USERSPACE,
uap->buf, UIO_USERSPACE, uap->bufsize));
}
int
kern_readlinkat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, char *buf, enum uio_seg bufseg, size_t count)
{
struct vnode *vp;
struct nameidata nd;
int error;
if (count > IOSIZE_MAX)
return (EINVAL);
NDINIT_AT(&nd, LOOKUP, NOFOLLOW | LOCKSHARED | LOCKLEAF | AUDITVNODE1,
pathseg, path, fd, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
vp = nd.ni_vp;
error = kern_readlink_vp(vp, buf, bufseg, count, td);
vput(vp);
return (error);
}
/*
* Helper function to readlink from a vnode
*/
static int
kern_readlink_vp(struct vnode *vp, char *buf, enum uio_seg bufseg, size_t count,
struct thread *td)
{
struct iovec aiov;
struct uio auio;
int error;
ASSERT_VOP_LOCKED(vp, "kern_readlink_vp(): vp not locked");
#ifdef MAC
error = mac_vnode_check_readlink(td->td_ucred, vp);
if (error != 0)
return (error);
#endif
if (vp->v_type != VLNK && (vp->v_vflag & VV_READLINK) == 0)
return (EINVAL);
aiov.iov_base = buf;
aiov.iov_len = count;
auio.uio_iov = &aiov;
auio.uio_iovcnt = 1;
auio.uio_offset = 0;
auio.uio_rw = UIO_READ;
auio.uio_segflg = bufseg;
auio.uio_td = td;
auio.uio_resid = count;
error = VOP_READLINK(vp, &auio, td->td_ucred);
td->td_retval[0] = count - auio.uio_resid;
return (error);
}
/*
* Common implementation code for chflags() and fchflags().
*/
static int
setfflags(struct thread *td, struct vnode *vp, u_long flags)
{
struct mount *mp;
struct vattr vattr;
int error;
/* We can't support the value matching VNOVAL. */
if (flags == VNOVAL)
return (EOPNOTSUPP);
/*
* Prevent non-root users from setting flags on devices. When
* a device is reused, users can retain ownership of the device
* if they are allowed to set flags and programs assume that
* chown can't fail when done as root.
*/
if (vp->v_type == VCHR || vp->v_type == VBLK) {
error = priv_check(td, PRIV_VFS_CHFLAGS_DEV);
if (error != 0)
return (error);
}
if ((error = vn_start_write(vp, &mp, V_WAIT | PCATCH)) != 0)
return (error);
VATTR_NULL(&vattr);
vattr.va_flags = flags;
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
#ifdef MAC
error = mac_vnode_check_setflags(td->td_ucred, vp, vattr.va_flags);
if (error == 0)
#endif
error = VOP_SETATTR(vp, &vattr, td->td_ucred);
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
return (error);
}
/*
* Change flags of a file given a path name.
*/
#ifndef _SYS_SYSPROTO_H_
struct chflags_args {
const char *path;
u_long flags;
};
#endif
int
sys_chflags(struct thread *td, struct chflags_args *uap)
{
return (kern_chflagsat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->flags, 0));
}
#ifndef _SYS_SYSPROTO_H_
struct chflagsat_args {
int fd;
const char *path;
u_long flags;
int atflag;
};
#endif
int
sys_chflagsat(struct thread *td, struct chflagsat_args *uap)
{
if ((uap->atflag & ~(AT_SYMLINK_NOFOLLOW | AT_BENEATH)) != 0)
return (EINVAL);
return (kern_chflagsat(td, uap->fd, uap->path, UIO_USERSPACE,
uap->flags, uap->atflag));
}
/*
* Same as chflags() but doesn't follow symlinks.
*/
#ifndef _SYS_SYSPROTO_H_
struct lchflags_args {
const char *path;
u_long flags;
};
#endif
int
sys_lchflags(struct thread *td, struct lchflags_args *uap)
{
return (kern_chflagsat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->flags, AT_SYMLINK_NOFOLLOW));
}
static int
kern_chflagsat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, u_long flags, int atflag)
{
struct nameidata nd;
int error, follow;
AUDIT_ARG_FFLAGS(flags);
follow = (atflag & AT_SYMLINK_NOFOLLOW) ? NOFOLLOW : FOLLOW;
follow |= (atflag & AT_BENEATH) != 0 ? BENEATH : 0;
NDINIT_ATRIGHTS(&nd, LOOKUP, follow | AUDITVNODE1, pathseg, path, fd,
&cap_fchflags_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = setfflags(td, nd.ni_vp, flags);
vrele(nd.ni_vp);
return (error);
}
/*
* Change flags of a file given a file descriptor.
*/
#ifndef _SYS_SYSPROTO_H_
struct fchflags_args {
int fd;
u_long flags;
};
#endif
int
sys_fchflags(struct thread *td, struct fchflags_args *uap)
{
struct file *fp;
int error;
AUDIT_ARG_FD(uap->fd);
AUDIT_ARG_FFLAGS(uap->flags);
error = getvnode(td, uap->fd, &cap_fchflags_rights,
&fp);
if (error != 0)
return (error);
#ifdef AUDIT
vn_lock(fp->f_vnode, LK_SHARED | LK_RETRY);
AUDIT_ARG_VNODE1(fp->f_vnode);
VOP_UNLOCK(fp->f_vnode, 0);
#endif
error = setfflags(td, fp->f_vnode, uap->flags);
fdrop(fp, td);
return (error);
}
/*
* Common implementation code for chmod(), lchmod() and fchmod().
*/
int
setfmode(struct thread *td, struct ucred *cred, struct vnode *vp, int mode)
{
struct mount *mp;
struct vattr vattr;
int error;
if ((error = vn_start_write(vp, &mp, V_WAIT | PCATCH)) != 0)
return (error);
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
VATTR_NULL(&vattr);
vattr.va_mode = mode & ALLPERMS;
#ifdef MAC
error = mac_vnode_check_setmode(cred, vp, vattr.va_mode);
if (error == 0)
#endif
error = VOP_SETATTR(vp, &vattr, cred);
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
return (error);
}
/*
* Change mode of a file given path name.
*/
#ifndef _SYS_SYSPROTO_H_
struct chmod_args {
char *path;
int mode;
};
#endif
int
sys_chmod(struct thread *td, struct chmod_args *uap)
{
return (kern_fchmodat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->mode, 0));
}
#ifndef _SYS_SYSPROTO_H_
struct fchmodat_args {
int fd;
char *path;
mode_t mode;
int flag;
};
#endif
int
sys_fchmodat(struct thread *td, struct fchmodat_args *uap)
{
if ((uap->flag & ~(AT_SYMLINK_NOFOLLOW | AT_BENEATH)) != 0)
return (EINVAL);
return (kern_fchmodat(td, uap->fd, uap->path, UIO_USERSPACE,
uap->mode, uap->flag));
}
/*
* Change mode of a file given path name (don't follow links).
*/
#ifndef _SYS_SYSPROTO_H_
struct lchmod_args {
char *path;
int mode;
};
#endif
int
sys_lchmod(struct thread *td, struct lchmod_args *uap)
{
return (kern_fchmodat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->mode, AT_SYMLINK_NOFOLLOW));
}
int
kern_fchmodat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, mode_t mode, int flag)
{
struct nameidata nd;
int error, follow;
AUDIT_ARG_MODE(mode);
follow = (flag & AT_SYMLINK_NOFOLLOW) != 0 ? NOFOLLOW : FOLLOW;
follow |= (flag & AT_BENEATH) != 0 ? BENEATH : 0;
NDINIT_ATRIGHTS(&nd, LOOKUP, follow | AUDITVNODE1, pathseg, path, fd,
&cap_fchmod_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = setfmode(td, td->td_ucred, nd.ni_vp, mode);
vrele(nd.ni_vp);
return (error);
}
/*
* Change mode of a file given a file descriptor.
*/
#ifndef _SYS_SYSPROTO_H_
struct fchmod_args {
int fd;
int mode;
};
#endif
int
sys_fchmod(struct thread *td, struct fchmod_args *uap)
{
struct file *fp;
int error;
AUDIT_ARG_FD(uap->fd);
AUDIT_ARG_MODE(uap->mode);
error = fget(td, uap->fd, &cap_fchmod_rights, &fp);
if (error != 0)
return (error);
error = fo_chmod(fp, uap->mode, td->td_ucred, td);
fdrop(fp, td);
return (error);
}
/*
* Common implementation for chown(), lchown(), and fchown()
*/
int
setfown(struct thread *td, struct ucred *cred, struct vnode *vp, uid_t uid,
gid_t gid)
{
struct mount *mp;
struct vattr vattr;
int error;
if ((error = vn_start_write(vp, &mp, V_WAIT | PCATCH)) != 0)
return (error);
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
VATTR_NULL(&vattr);
vattr.va_uid = uid;
vattr.va_gid = gid;
#ifdef MAC
error = mac_vnode_check_setowner(cred, vp, vattr.va_uid,
vattr.va_gid);
if (error == 0)
#endif
error = VOP_SETATTR(vp, &vattr, cred);
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
return (error);
}
/*
* Set ownership given a path name.
*/
#ifndef _SYS_SYSPROTO_H_
struct chown_args {
char *path;
int uid;
int gid;
};
#endif
int
sys_chown(struct thread *td, struct chown_args *uap)
{
return (kern_fchownat(td, AT_FDCWD, uap->path, UIO_USERSPACE, uap->uid,
uap->gid, 0));
}
#ifndef _SYS_SYSPROTO_H_
struct fchownat_args {
int fd;
const char *path;
uid_t uid;
gid_t gid;
int flag;
};
#endif
int
sys_fchownat(struct thread *td, struct fchownat_args *uap)
{
if ((uap->flag & ~(AT_SYMLINK_NOFOLLOW | AT_BENEATH)) != 0)
return (EINVAL);
return (kern_fchownat(td, uap->fd, uap->path, UIO_USERSPACE, uap->uid,
uap->gid, uap->flag));
}
int
kern_fchownat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, int uid, int gid, int flag)
{
struct nameidata nd;
int error, follow;
AUDIT_ARG_OWNER(uid, gid);
follow = (flag & AT_SYMLINK_NOFOLLOW) ? NOFOLLOW : FOLLOW;
follow |= (flag & AT_BENEATH) != 0 ? BENEATH : 0;
NDINIT_ATRIGHTS(&nd, LOOKUP, follow | AUDITVNODE1, pathseg, path, fd,
&cap_fchown_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = setfown(td, td->td_ucred, nd.ni_vp, uid, gid);
vrele(nd.ni_vp);
return (error);
}
/*
* Set ownership given a path name, do not cross symlinks.
*/
#ifndef _SYS_SYSPROTO_H_
struct lchown_args {
char *path;
int uid;
int gid;
};
#endif
int
sys_lchown(struct thread *td, struct lchown_args *uap)
{
return (kern_fchownat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->uid, uap->gid, AT_SYMLINK_NOFOLLOW));
}
/*
* Set ownership given a file descriptor.
*/
#ifndef _SYS_SYSPROTO_H_
struct fchown_args {
int fd;
int uid;
int gid;
};
#endif
int
sys_fchown(struct thread *td, struct fchown_args *uap)
{
struct file *fp;
int error;
AUDIT_ARG_FD(uap->fd);
AUDIT_ARG_OWNER(uap->uid, uap->gid);
error = fget(td, uap->fd, &cap_fchown_rights, &fp);
if (error != 0)
return (error);
error = fo_chown(fp, uap->uid, uap->gid, td->td_ucred, td);
fdrop(fp, td);
return (error);
}
/*
* Common implementation code for utimes(), lutimes(), and futimes().
*/
static int
getutimes(const struct timeval *usrtvp, enum uio_seg tvpseg,
struct timespec *tsp)
{
struct timeval tv[2];
const struct timeval *tvp;
int error;
if (usrtvp == NULL) {
vfs_timestamp(&tsp[0]);
tsp[1] = tsp[0];
} else {
if (tvpseg == UIO_SYSSPACE) {
tvp = usrtvp;
} else {
if ((error = copyin(usrtvp, tv, sizeof(tv))) != 0)
return (error);
tvp = tv;
}
if (tvp[0].tv_usec < 0 || tvp[0].tv_usec >= 1000000 ||
tvp[1].tv_usec < 0 || tvp[1].tv_usec >= 1000000)
return (EINVAL);
TIMEVAL_TO_TIMESPEC(&tvp[0], &tsp[0]);
TIMEVAL_TO_TIMESPEC(&tvp[1], &tsp[1]);
}
return (0);
}
/*
* Common implementation code for futimens(), utimensat().
*/
#define UTIMENS_NULL 0x1
#define UTIMENS_EXIT 0x2
static int
getutimens(const struct timespec *usrtsp, enum uio_seg tspseg,
struct timespec *tsp, int *retflags)
{
struct timespec tsnow;
int error;
vfs_timestamp(&tsnow);
*retflags = 0;
if (usrtsp == NULL) {
tsp[0] = tsnow;
tsp[1] = tsnow;
*retflags |= UTIMENS_NULL;
return (0);
}
if (tspseg == UIO_SYSSPACE) {
tsp[0] = usrtsp[0];
tsp[1] = usrtsp[1];
} else if ((error = copyin(usrtsp, tsp, sizeof(*tsp) * 2)) != 0)
return (error);
if (tsp[0].tv_nsec == UTIME_OMIT && tsp[1].tv_nsec == UTIME_OMIT)
*retflags |= UTIMENS_EXIT;
if (tsp[0].tv_nsec == UTIME_NOW && tsp[1].tv_nsec == UTIME_NOW)
*retflags |= UTIMENS_NULL;
if (tsp[0].tv_nsec == UTIME_OMIT)
tsp[0].tv_sec = VNOVAL;
else if (tsp[0].tv_nsec == UTIME_NOW)
tsp[0] = tsnow;
else if (tsp[0].tv_nsec < 0 || tsp[0].tv_nsec >= 1000000000L)
return (EINVAL);
if (tsp[1].tv_nsec == UTIME_OMIT)
tsp[1].tv_sec = VNOVAL;
else if (tsp[1].tv_nsec == UTIME_NOW)
tsp[1] = tsnow;
else if (tsp[1].tv_nsec < 0 || tsp[1].tv_nsec >= 1000000000L)
return (EINVAL);
return (0);
}
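/*
 * Worked example of the mapping above (illustrative only): for a
 * userland pair { [0].tv_nsec = UTIME_OMIT, [1].tv_nsec = UTIME_NOW },
 * getutimens() sets tsp[0].tv_sec = VNOVAL (access time unchanged) and
 * tsp[1] = tsnow (modification time set to "now"), and leaves
 * *retflags == 0, since neither the both-OMIT (UTIMENS_EXIT) nor the
 * both-NOW (UTIMENS_NULL) case applies.
 */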
/*
* Common implementation code for utimes(), lutimes(), futimes(), futimens(),
* and utimensat().
*/
static int
setutimes(struct thread *td, struct vnode *vp, const struct timespec *ts,
int numtimes, int nullflag)
{
struct mount *mp;
struct vattr vattr;
int error, setbirthtime;
if ((error = vn_start_write(vp, &mp, V_WAIT | PCATCH)) != 0)
return (error);
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
setbirthtime = 0;
if (numtimes < 3 && !VOP_GETATTR(vp, &vattr, td->td_ucred) &&
timespeccmp(&ts[1], &vattr.va_birthtime, < ))
setbirthtime = 1;
VATTR_NULL(&vattr);
vattr.va_atime = ts[0];
vattr.va_mtime = ts[1];
if (setbirthtime)
vattr.va_birthtime = ts[1];
if (numtimes > 2)
vattr.va_birthtime = ts[2];
if (nullflag)
vattr.va_vaflags |= VA_UTIMES_NULL;
#ifdef MAC
error = mac_vnode_check_setutimes(td->td_ucred, vp, vattr.va_atime,
vattr.va_mtime);
#endif
if (error == 0)
error = VOP_SETATTR(vp, &vattr, td->td_ucred);
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
return (error);
}
/*
* Set the access and modification times of a file.
*/
#ifndef _SYS_SYSPROTO_H_
struct utimes_args {
char *path;
struct timeval *tptr;
};
#endif
int
sys_utimes(struct thread *td, struct utimes_args *uap)
{
return (kern_utimesat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->tptr, UIO_USERSPACE));
}
#ifndef _SYS_SYSPROTO_H_
struct futimesat_args {
int fd;
const char * path;
const struct timeval * times;
};
#endif
int
sys_futimesat(struct thread *td, struct futimesat_args *uap)
{
return (kern_utimesat(td, uap->fd, uap->path, UIO_USERSPACE,
uap->times, UIO_USERSPACE));
}
int
kern_utimesat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, struct timeval *tptr, enum uio_seg tptrseg)
{
struct nameidata nd;
struct timespec ts[2];
int error;
if ((error = getutimes(tptr, tptrseg, ts)) != 0)
return (error);
NDINIT_ATRIGHTS(&nd, LOOKUP, FOLLOW | AUDITVNODE1, pathseg, path, fd,
&cap_futimes_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = setutimes(td, nd.ni_vp, ts, 2, tptr == NULL);
vrele(nd.ni_vp);
return (error);
}
/*
* Set the access and modification times of a file.
*/
#ifndef _SYS_SYSPROTO_H_
struct lutimes_args {
char *path;
struct timeval *tptr;
};
#endif
int
sys_lutimes(struct thread *td, struct lutimes_args *uap)
{
return (kern_lutimes(td, uap->path, UIO_USERSPACE, uap->tptr,
UIO_USERSPACE));
}
int
kern_lutimes(struct thread *td, const char *path, enum uio_seg pathseg,
struct timeval *tptr, enum uio_seg tptrseg)
{
struct timespec ts[2];
struct nameidata nd;
int error;
if ((error = getutimes(tptr, tptrseg, ts)) != 0)
return (error);
NDINIT(&nd, LOOKUP, NOFOLLOW | AUDITVNODE1, pathseg, path, td);
if ((error = namei(&nd)) != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
error = setutimes(td, nd.ni_vp, ts, 2, tptr == NULL);
vrele(nd.ni_vp);
return (error);
}
/*
* Set the access and modification times of a file.
*/
#ifndef _SYS_SYSPROTO_H_
struct futimes_args {
int fd;
struct timeval *tptr;
};
#endif
int
sys_futimes(struct thread *td, struct futimes_args *uap)
{
return (kern_futimes(td, uap->fd, uap->tptr, UIO_USERSPACE));
}
int
kern_futimes(struct thread *td, int fd, struct timeval *tptr,
enum uio_seg tptrseg)
{
struct timespec ts[2];
struct file *fp;
int error;
AUDIT_ARG_FD(fd);
error = getutimes(tptr, tptrseg, ts);
if (error != 0)
return (error);
error = getvnode(td, fd, &cap_futimes_rights, &fp);
if (error != 0)
return (error);
#ifdef AUDIT
vn_lock(fp->f_vnode, LK_SHARED | LK_RETRY);
AUDIT_ARG_VNODE1(fp->f_vnode);
VOP_UNLOCK(fp->f_vnode, 0);
#endif
error = setutimes(td, fp->f_vnode, ts, 2, tptr == NULL);
fdrop(fp, td);
return (error);
}
int
sys_futimens(struct thread *td, struct futimens_args *uap)
{
return (kern_futimens(td, uap->fd, uap->times, UIO_USERSPACE));
}
int
kern_futimens(struct thread *td, int fd, struct timespec *tptr,
enum uio_seg tptrseg)
{
struct timespec ts[2];
struct file *fp;
int error, flags;
AUDIT_ARG_FD(fd);
error = getutimens(tptr, tptrseg, ts, &flags);
if (error != 0)
return (error);
if (flags & UTIMENS_EXIT)
return (0);
error = getvnode(td, fd, &cap_futimes_rights, &fp);
if (error != 0)
return (error);
#ifdef AUDIT
vn_lock(fp->f_vnode, LK_SHARED | LK_RETRY);
AUDIT_ARG_VNODE1(fp->f_vnode);
VOP_UNLOCK(fp->f_vnode, 0);
#endif
error = setutimes(td, fp->f_vnode, ts, 2, flags & UTIMENS_NULL);
fdrop(fp, td);
return (error);
}
int
sys_utimensat(struct thread *td, struct utimensat_args *uap)
{
return (kern_utimensat(td, uap->fd, uap->path, UIO_USERSPACE,
uap->times, UIO_USERSPACE, uap->flag));
}
int
kern_utimensat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, struct timespec *tptr, enum uio_seg tptrseg,
int flag)
{
struct nameidata nd;
struct timespec ts[2];
int error, flags;
if ((flag & ~(AT_SYMLINK_NOFOLLOW | AT_BENEATH)) != 0)
return (EINVAL);
if ((error = getutimens(tptr, tptrseg, ts, &flags)) != 0)
return (error);
NDINIT_ATRIGHTS(&nd, LOOKUP, ((flag & AT_SYMLINK_NOFOLLOW) ? NOFOLLOW :
FOLLOW) | ((flag & AT_BENEATH) != 0 ? BENEATH : 0) | AUDITVNODE1,
pathseg, path, fd, &cap_futimes_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
/*
* We are allowed to call namei() regardless of 2xUTIME_OMIT.
* POSIX states:
* "If both tv_nsec fields are UTIME_OMIT... EACCESS may be detected."
* "Search permission is denied by a component of the path prefix."
*/
NDFREE(&nd, NDF_ONLY_PNBUF);
if ((flags & UTIMENS_EXIT) == 0)
error = setutimes(td, nd.ni_vp, ts, 2, flags & UTIMENS_NULL);
vrele(nd.ni_vp);
return (error);
}
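/*
 * Illustrative userland usage (assumes the standard utimensat(2)
 * interface; not part of the original source): update only the
 * modification time of a file, leaving its access time untouched:
 *
 *	struct timespec ts[2];
 *	ts[0].tv_nsec = UTIME_OMIT;
 *	ts[1].tv_nsec = UTIME_NOW;
 *	(void)utimensat(AT_FDCWD, "file", ts, 0);
 */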
/*
* Truncate a file given its path name.
*/
#ifndef _SYS_SYSPROTO_H_
struct truncate_args {
char *path;
int pad;
off_t length;
};
#endif
int
sys_truncate(struct thread *td, struct truncate_args *uap)
{
return (kern_truncate(td, uap->path, UIO_USERSPACE, uap->length));
}
int
kern_truncate(struct thread *td, const char *path, enum uio_seg pathseg,
off_t length)
{
struct mount *mp;
struct vnode *vp;
void *rl_cookie;
struct vattr vattr;
struct nameidata nd;
int error;
if (length < 0)
return (EINVAL);
NDINIT(&nd, LOOKUP, FOLLOW | AUDITVNODE1, pathseg, path, td);
if ((error = namei(&nd)) != 0)
return (error);
vp = nd.ni_vp;
rl_cookie = vn_rangelock_wlock(vp, 0, OFF_MAX);
if ((error = vn_start_write(vp, &mp, V_WAIT | PCATCH)) != 0) {
vn_rangelock_unlock(vp, rl_cookie);
vrele(vp);
return (error);
}
NDFREE(&nd, NDF_ONLY_PNBUF);
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
if (vp->v_type == VDIR)
error = EISDIR;
#ifdef MAC
else if ((error = mac_vnode_check_write(td->td_ucred, NOCRED, vp))) {
}
#endif
else if ((error = vn_writechk(vp)) == 0 &&
(error = VOP_ACCESS(vp, VWRITE, td->td_ucred, td)) == 0) {
VATTR_NULL(&vattr);
vattr.va_size = length;
error = VOP_SETATTR(vp, &vattr, td->td_ucred);
}
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
vn_rangelock_unlock(vp, rl_cookie);
vrele(vp);
return (error);
}
#if defined(COMPAT_43)
/*
* Truncate a file given its path name.
*/
#ifndef _SYS_SYSPROTO_H_
struct otruncate_args {
char *path;
long length;
};
#endif
int
otruncate(struct thread *td, struct otruncate_args *uap)
{
return (kern_truncate(td, uap->path, UIO_USERSPACE, uap->length));
}
#endif /* COMPAT_43 */
#if defined(COMPAT_FREEBSD6)
/* Versions with the pad argument */
int
freebsd6_truncate(struct thread *td, struct freebsd6_truncate_args *uap)
{
return (kern_truncate(td, uap->path, UIO_USERSPACE, uap->length));
}
int
freebsd6_ftruncate(struct thread *td, struct freebsd6_ftruncate_args *uap)
{
return (kern_ftruncate(td, uap->fd, uap->length));
}
#endif
int
kern_fsync(struct thread *td, int fd, bool fullsync)
{
struct vnode *vp;
struct mount *mp;
struct file *fp;
int error, lock_flags;
AUDIT_ARG_FD(fd);
error = getvnode(td, fd, &cap_fsync_rights, &fp);
if (error != 0)
return (error);
vp = fp->f_vnode;
#if 0
if (!fullsync)
/* XXXKIB: complete outstanding aio writes */;
#endif
error = vn_start_write(vp, &mp, V_WAIT | PCATCH);
if (error != 0)
goto drop;
if (MNT_SHARED_WRITES(mp) ||
((mp == NULL) && MNT_SHARED_WRITES(vp->v_mount))) {
lock_flags = LK_SHARED;
} else {
lock_flags = LK_EXCLUSIVE;
}
vn_lock(vp, lock_flags | LK_RETRY);
AUDIT_ARG_VNODE1(vp);
if (vp->v_object != NULL) {
VM_OBJECT_WLOCK(vp->v_object);
vm_object_page_clean(vp->v_object, 0, 0, 0);
VM_OBJECT_WUNLOCK(vp->v_object);
}
error = fullsync ? VOP_FSYNC(vp, MNT_WAIT, td) : VOP_FDATASYNC(vp, td);
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
drop:
fdrop(fp, td);
return (error);
}
/*
* Sync an open file.
*/
#ifndef _SYS_SYSPROTO_H_
struct fsync_args {
int fd;
};
#endif
int
sys_fsync(struct thread *td, struct fsync_args *uap)
{
return (kern_fsync(td, uap->fd, true));
}
int
sys_fdatasync(struct thread *td, struct fdatasync_args *uap)
{
return (kern_fsync(td, uap->fd, false));
}
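/*
 * Illustrative mapping (not part of the original source): fsync(fd)
 * reaches kern_fsync(td, fd, true) and syncs both data and metadata
 * via VOP_FSYNC(..., MNT_WAIT, ...), while fdatasync(fd) reaches
 * kern_fsync(td, fd, false) and uses VOP_FDATASYNC(), which may omit
 * metadata updates that are not needed to read back the data.
 */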
/*
* Rename files. Source and destination must either both be directories, or
* both not be directories. If target is a directory, it must be empty.
*/
#ifndef _SYS_SYSPROTO_H_
struct rename_args {
char *from;
char *to;
};
#endif
int
sys_rename(struct thread *td, struct rename_args *uap)
{
return (kern_renameat(td, AT_FDCWD, uap->from, AT_FDCWD,
uap->to, UIO_USERSPACE));
}
#ifndef _SYS_SYSPROTO_H_
struct renameat_args {
int oldfd;
char *old;
int newfd;
char *new;
};
#endif
int
sys_renameat(struct thread *td, struct renameat_args *uap)
{
return (kern_renameat(td, uap->oldfd, uap->old, uap->newfd, uap->new,
UIO_USERSPACE));
}
int
kern_renameat(struct thread *td, int oldfd, const char *old, int newfd,
const char *new, enum uio_seg pathseg)
{
struct mount *mp = NULL;
struct vnode *tvp, *fvp, *tdvp;
struct nameidata fromnd, tond;
int error;
again:
bwillwrite();
#ifdef MAC
NDINIT_ATRIGHTS(&fromnd, DELETE, LOCKPARENT | LOCKLEAF | SAVESTART |
AUDITVNODE1, pathseg, old, oldfd,
&cap_renameat_source_rights, td);
#else
NDINIT_ATRIGHTS(&fromnd, DELETE, WANTPARENT | SAVESTART | AUDITVNODE1,
pathseg, old, oldfd,
&cap_renameat_source_rights, td);
#endif
if ((error = namei(&fromnd)) != 0)
return (error);
#ifdef MAC
error = mac_vnode_check_rename_from(td->td_ucred, fromnd.ni_dvp,
fromnd.ni_vp, &fromnd.ni_cnd);
VOP_UNLOCK(fromnd.ni_dvp, 0);
if (fromnd.ni_dvp != fromnd.ni_vp)
VOP_UNLOCK(fromnd.ni_vp, 0);
#endif
fvp = fromnd.ni_vp;
NDINIT_ATRIGHTS(&tond, RENAME, LOCKPARENT | LOCKLEAF | NOCACHE |
SAVESTART | AUDITVNODE2, pathseg, new, newfd,
&cap_renameat_target_rights, td);
if (fromnd.ni_vp->v_type == VDIR)
tond.ni_cnd.cn_flags |= WILLBEDIR;
if ((error = namei(&tond)) != 0) {
/* Translate error code for rename("dir1", "dir2/."). */
if (error == EISDIR && fvp->v_type == VDIR)
error = EINVAL;
NDFREE(&fromnd, NDF_ONLY_PNBUF);
vrele(fromnd.ni_dvp);
vrele(fvp);
goto out1;
}
tdvp = tond.ni_dvp;
tvp = tond.ni_vp;
error = vn_start_write(fvp, &mp, V_NOWAIT);
if (error != 0) {
NDFREE(&fromnd, NDF_ONLY_PNBUF);
NDFREE(&tond, NDF_ONLY_PNBUF);
if (tvp != NULL)
vput(tvp);
if (tdvp == tvp)
vrele(tdvp);
else
vput(tdvp);
vrele(fromnd.ni_dvp);
vrele(fvp);
vrele(tond.ni_startdir);
if (fromnd.ni_startdir != NULL)
vrele(fromnd.ni_startdir);
error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH);
if (error != 0)
return (error);
goto again;
}
if (tvp != NULL) {
if (fvp->v_type == VDIR && tvp->v_type != VDIR) {
error = ENOTDIR;
goto out;
} else if (fvp->v_type != VDIR && tvp->v_type == VDIR) {
error = EISDIR;
goto out;
}
#ifdef CAPABILITIES
- if (newfd != AT_FDCWD) {
+ if (newfd != AT_FDCWD && (tond.ni_resflags & NIRES_ABS) == 0) {
/*
* If the target already exists we require CAP_UNLINKAT
- * from 'newfd'.
+ * from 'newfd', when newfd was used for the lookup.
*/
error = cap_check(&tond.ni_filecaps.fc_rights,
&cap_unlinkat_rights);
if (error != 0)
goto out;
}
#endif
}
if (fvp == tdvp) {
error = EINVAL;
goto out;
}
/*
* If the source is the same as the destination (that is, if they
* are links to the same vnode), then there is nothing to do.
*/
if (fvp == tvp)
error = -1;
#ifdef MAC
else
error = mac_vnode_check_rename_to(td->td_ucred, tdvp,
tond.ni_vp, fromnd.ni_dvp == tdvp, &tond.ni_cnd);
#endif
out:
if (error == 0) {
error = VOP_RENAME(fromnd.ni_dvp, fromnd.ni_vp, &fromnd.ni_cnd,
tond.ni_dvp, tond.ni_vp, &tond.ni_cnd);
NDFREE(&fromnd, NDF_ONLY_PNBUF);
NDFREE(&tond, NDF_ONLY_PNBUF);
} else {
NDFREE(&fromnd, NDF_ONLY_PNBUF);
NDFREE(&tond, NDF_ONLY_PNBUF);
if (tvp != NULL)
vput(tvp);
if (tdvp == tvp)
vrele(tdvp);
else
vput(tdvp);
vrele(fromnd.ni_dvp);
vrele(fvp);
}
vrele(tond.ni_startdir);
vn_finished_write(mp);
out1:
if (fromnd.ni_startdir)
vrele(fromnd.ni_startdir);
if (error == -1)
return (0);
return (error);
}
/*
* Make a directory file.
*/
#ifndef _SYS_SYSPROTO_H_
struct mkdir_args {
char *path;
int mode;
};
#endif
int
sys_mkdir(struct thread *td, struct mkdir_args *uap)
{
return (kern_mkdirat(td, AT_FDCWD, uap->path, UIO_USERSPACE,
uap->mode));
}
#ifndef _SYS_SYSPROTO_H_
struct mkdirat_args {
int fd;
char *path;
mode_t mode;
};
#endif
int
sys_mkdirat(struct thread *td, struct mkdirat_args *uap)
{
return (kern_mkdirat(td, uap->fd, uap->path, UIO_USERSPACE, uap->mode));
}
int
kern_mkdirat(struct thread *td, int fd, const char *path, enum uio_seg segflg,
int mode)
{
struct mount *mp;
struct vnode *vp;
struct vattr vattr;
struct nameidata nd;
int error;
AUDIT_ARG_MODE(mode);
restart:
bwillwrite();
NDINIT_ATRIGHTS(&nd, CREATE, LOCKPARENT | SAVENAME | AUDITVNODE1 |
NOCACHE, segflg, path, fd, &cap_mkdirat_rights,
td);
nd.ni_cnd.cn_flags |= WILLBEDIR;
if ((error = namei(&nd)) != 0)
return (error);
vp = nd.ni_vp;
if (vp != NULL) {
NDFREE(&nd, NDF_ONLY_PNBUF);
/*
* XXX namei called with LOCKPARENT but not LOCKLEAF has
* the strange behaviour of leaving the vnode unlocked
* if the target is the same vnode as the parent.
*/
if (vp == nd.ni_dvp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
vrele(vp);
return (EEXIST);
}
if (vn_start_write(nd.ni_dvp, &mp, V_NOWAIT) != 0) {
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if ((error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH)) != 0)
return (error);
goto restart;
}
VATTR_NULL(&vattr);
vattr.va_type = VDIR;
vattr.va_mode = (mode & ACCESSPERMS) &~ td->td_proc->p_fd->fd_cmask;
#ifdef MAC
error = mac_vnode_check_create(td->td_ucred, nd.ni_dvp, &nd.ni_cnd,
&vattr);
if (error != 0)
goto out;
#endif
error = VOP_MKDIR(nd.ni_dvp, &nd.ni_vp, &nd.ni_cnd, &vattr);
#ifdef MAC
out:
#endif
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(nd.ni_dvp);
if (error == 0)
vput(nd.ni_vp);
vn_finished_write(mp);
return (error);
}
/*
* Remove a directory file.
*/
#ifndef _SYS_SYSPROTO_H_
struct rmdir_args {
char *path;
};
#endif
int
sys_rmdir(struct thread *td, struct rmdir_args *uap)
{
return (kern_rmdirat(td, AT_FDCWD, uap->path, UIO_USERSPACE, 0));
}
int
kern_rmdirat(struct thread *td, int fd, const char *path, enum uio_seg pathseg,
int flag)
{
struct mount *mp;
struct vnode *vp;
struct nameidata nd;
int error;
restart:
bwillwrite();
NDINIT_ATRIGHTS(&nd, DELETE, LOCKPARENT | LOCKLEAF | AUDITVNODE1 |
((flag & AT_BENEATH) != 0 ? BENEATH : 0),
pathseg, path, fd, &cap_unlinkat_rights, td);
if ((error = namei(&nd)) != 0)
return (error);
vp = nd.ni_vp;
if (vp->v_type != VDIR) {
error = ENOTDIR;
goto out;
}
/*
* No rmdir "." please.
*/
if (nd.ni_dvp == vp) {
error = EINVAL;
goto out;
}
/*
* The root of a mounted filesystem cannot be deleted.
*/
if (vp->v_vflag & VV_ROOT) {
error = EBUSY;
goto out;
}
#ifdef MAC
error = mac_vnode_check_unlink(td->td_ucred, nd.ni_dvp, vp,
&nd.ni_cnd);
if (error != 0)
goto out;
#endif
if (vn_start_write(nd.ni_dvp, &mp, V_NOWAIT) != 0) {
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(vp);
if (nd.ni_dvp == vp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
if ((error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH)) != 0)
return (error);
goto restart;
}
vfs_notify_upper(vp, VFS_NOTIFY_UPPER_UNLINK);
error = VOP_RMDIR(nd.ni_dvp, nd.ni_vp, &nd.ni_cnd);
vn_finished_write(mp);
out:
NDFREE(&nd, NDF_ONLY_PNBUF);
vput(vp);
if (nd.ni_dvp == vp)
vrele(nd.ni_dvp);
else
vput(nd.ni_dvp);
return (error);
}
#if defined(COMPAT_43) || defined(COMPAT_FREEBSD11)
int
freebsd11_kern_getdirentries(struct thread *td, int fd, char *ubuf, u_int count,
long *basep, void (*func)(struct freebsd11_dirent *))
{
struct freebsd11_dirent dstdp;
struct dirent *dp, *edp;
char *dirbuf;
off_t base;
ssize_t resid, ucount;
int error;
/* XXX arbitrary sanity limit on `count'. */
count = min(count, 64 * 1024);
dirbuf = malloc(count, M_TEMP, M_WAITOK);
error = kern_getdirentries(td, fd, dirbuf, count, &base, &resid,
UIO_SYSSPACE);
if (error != 0)
goto done;
if (basep != NULL)
*basep = base;
ucount = 0;
for (dp = (struct dirent *)dirbuf,
edp = (struct dirent *)&dirbuf[count - resid];
ucount < count && dp < edp; ) {
if (dp->d_reclen == 0)
break;
MPASS(dp->d_reclen >= _GENERIC_DIRLEN(0));
if (dp->d_namlen >= sizeof(dstdp.d_name))
continue;
dstdp.d_type = dp->d_type;
dstdp.d_namlen = dp->d_namlen;
dstdp.d_fileno = dp->d_fileno; /* truncate */
if (dstdp.d_fileno != dp->d_fileno) {
switch (ino64_trunc_error) {
default:
case 0:
break;
case 1:
error = EOVERFLOW;
goto done;
case 2:
dstdp.d_fileno = UINT32_MAX;
break;
}
}
dstdp.d_reclen = sizeof(dstdp) - sizeof(dstdp.d_name) +
((dp->d_namlen + 1 + 3) &~ 3);
bcopy(dp->d_name, dstdp.d_name, dstdp.d_namlen);
bzero(dstdp.d_name + dstdp.d_namlen,
dstdp.d_reclen - offsetof(struct freebsd11_dirent, d_name) -
dstdp.d_namlen);
MPASS(dstdp.d_reclen <= dp->d_reclen);
MPASS(ucount + dstdp.d_reclen <= count);
if (func != NULL)
func(&dstdp);
error = copyout(&dstdp, ubuf + ucount, dstdp.d_reclen);
if (error != 0)
break;
dp = (struct dirent *)((char *)dp + dp->d_reclen);
ucount += dstdp.d_reclen;
}
done:
free(dirbuf, M_TEMP);
if (error == 0)
td->td_retval[0] = ucount;
return (error);
}
#endif /* COMPAT */
#ifdef COMPAT_43
static void
ogetdirentries_cvt(struct freebsd11_dirent *dp)
{
#if (BYTE_ORDER == LITTLE_ENDIAN)
/*
* The expected low byte of dp->d_namlen is our dp->d_type.
* The high MBZ byte of dp->d_namlen is our dp->d_namlen.
*/
dp->d_type = dp->d_namlen;
dp->d_namlen = 0;
#else
/*
* The dp->d_type is the high byte of the expected dp->d_namlen,
* so must be zero'ed.
*/
dp->d_type = 0;
#endif
}
/*
* Read a block of directory entries in a filesystem independent format.
*/
#ifndef _SYS_SYSPROTO_H_
struct ogetdirentries_args {
int fd;
char *buf;
u_int count;
long *basep;
};
#endif
int
ogetdirentries(struct thread *td, struct ogetdirentries_args *uap)
{
long loff;
int error;
error = kern_ogetdirentries(td, uap, &loff);
if (error == 0)
error = copyout(&loff, uap->basep, sizeof(long));
return (error);
}
int
kern_ogetdirentries(struct thread *td, struct ogetdirentries_args *uap,
long *ploff)
{
long base;
int error;
/* XXX arbitrary sanity limit on `count'. */
if (uap->count > 64 * 1024)
return (EINVAL);
error = freebsd11_kern_getdirentries(td, uap->fd, uap->buf, uap->count,
&base, ogetdirentries_cvt);
if (error == 0 && uap->basep != NULL)
error = copyout(&base, uap->basep, sizeof(long));
return (error);
}
#endif /* COMPAT_43 */
#if defined(COMPAT_FREEBSD11)
#ifndef _SYS_SYSPROTO_H_
struct freebsd11_getdirentries_args {
int fd;
char *buf;
u_int count;
long *basep;
};
#endif
int
freebsd11_getdirentries(struct thread *td,
struct freebsd11_getdirentries_args *uap)
{
long base;
int error;
error = freebsd11_kern_getdirentries(td, uap->fd, uap->buf, uap->count,
&base, NULL);
if (error == 0 && uap->basep != NULL)
error = copyout(&base, uap->basep, sizeof(long));
return (error);
}
int
freebsd11_getdents(struct thread *td, struct freebsd11_getdents_args *uap)
{
struct freebsd11_getdirentries_args ap;
ap.fd = uap->fd;
ap.buf = uap->buf;
ap.count = uap->count;
ap.basep = NULL;
return (freebsd11_getdirentries(td, &ap));
}
#endif /* COMPAT_FREEBSD11 */
/*
* Read a block of directory entries in a filesystem independent format.
*/
int
sys_getdirentries(struct thread *td, struct getdirentries_args *uap)
{
off_t base;
int error;
error = kern_getdirentries(td, uap->fd, uap->buf, uap->count, &base,
NULL, UIO_USERSPACE);
if (error != 0)
return (error);
if (uap->basep != NULL)
error = copyout(&base, uap->basep, sizeof(off_t));
return (error);
}
int
kern_getdirentries(struct thread *td, int fd, char *buf, size_t count,
off_t *basep, ssize_t *residp, enum uio_seg bufseg)
{
struct vnode *vp;
struct file *fp;
struct uio auio;
struct iovec aiov;
off_t loff;
int error, eofflag;
off_t foffset;
AUDIT_ARG_FD(fd);
if (count > IOSIZE_MAX)
return (EINVAL);
auio.uio_resid = count;
error = getvnode(td, fd, &cap_read_rights, &fp);
if (error != 0)
return (error);
if ((fp->f_flag & FREAD) == 0) {
fdrop(fp, td);
return (EBADF);
}
vp = fp->f_vnode;
foffset = foffset_lock(fp, 0);
unionread:
if (vp->v_type != VDIR) {
error = EINVAL;
goto fail;
}
aiov.iov_base = buf;
aiov.iov_len = count;
auio.uio_iov = &aiov;
auio.uio_iovcnt = 1;
auio.uio_rw = UIO_READ;
auio.uio_segflg = bufseg;
auio.uio_td = td;
vn_lock(vp, LK_SHARED | LK_RETRY);
AUDIT_ARG_VNODE1(vp);
loff = auio.uio_offset = foffset;
#ifdef MAC
error = mac_vnode_check_readdir(td->td_ucred, vp);
if (error == 0)
#endif
error = VOP_READDIR(vp, &auio, fp->f_cred, &eofflag, NULL,
NULL);
foffset = auio.uio_offset;
if (error != 0) {
VOP_UNLOCK(vp, 0);
goto fail;
}
if (count == auio.uio_resid &&
(vp->v_vflag & VV_ROOT) &&
(vp->v_mount->mnt_flag & MNT_UNION)) {
struct vnode *tvp = vp;
vp = vp->v_mount->mnt_vnodecovered;
VREF(vp);
fp->f_vnode = vp;
fp->f_data = vp;
foffset = 0;
vput(tvp);
goto unionread;
}
VOP_UNLOCK(vp, 0);
*basep = loff;
if (residp != NULL)
*residp = auio.uio_resid;
td->td_retval[0] = count - auio.uio_resid;
fail:
foffset_unlock(fp, foffset, 0);
fdrop(fp, td);
return (error);
}
/*
* Set the mode mask for creation of filesystem nodes.
*/
#ifndef _SYS_SYSPROTO_H_
struct umask_args {
int newmask;
};
#endif
int
sys_umask(struct thread *td, struct umask_args *uap)
{
struct filedesc *fdp;
fdp = td->td_proc->p_fd;
FILEDESC_XLOCK(fdp);
td->td_retval[0] = fdp->fd_cmask;
fdp->fd_cmask = uap->newmask & ALLPERMS;
FILEDESC_XUNLOCK(fdp);
return (0);
}
/*
* Void all references to file by ripping underlying filesystem away from
* vnode.
*/
#ifndef _SYS_SYSPROTO_H_
struct revoke_args {
char *path;
};
#endif
int
sys_revoke(struct thread *td, struct revoke_args *uap)
{
struct vnode *vp;
struct vattr vattr;
struct nameidata nd;
int error;
NDINIT(&nd, LOOKUP, FOLLOW | LOCKLEAF | AUDITVNODE1, UIO_USERSPACE,
uap->path, td);
if ((error = namei(&nd)) != 0)
return (error);
vp = nd.ni_vp;
NDFREE(&nd, NDF_ONLY_PNBUF);
if (vp->v_type != VCHR || vp->v_rdev == NULL) {
error = EINVAL;
goto out;
}
#ifdef MAC
error = mac_vnode_check_revoke(td->td_ucred, vp);
if (error != 0)
goto out;
#endif
error = VOP_GETATTR(vp, &vattr, td->td_ucred);
if (error != 0)
goto out;
if (td->td_ucred->cr_uid != vattr.va_uid) {
error = priv_check(td, PRIV_VFS_ADMIN);
if (error != 0)
goto out;
}
if (vcount(vp) > 1)
VOP_REVOKE(vp, REVOKEALL);
out:
vput(vp);
return (error);
}
/*
* Convert a user file descriptor to a kernel file entry and check that, if it
* is a capability, the correct rights are present. A reference on the file
* entry is held upon returning.
*/
int
getvnode(struct thread *td, int fd, cap_rights_t *rightsp, struct file **fpp)
{
struct file *fp;
int error;
error = fget_unlocked(td->td_proc->p_fd, fd, rightsp, &fp, NULL);
if (error != 0)
return (error);
/*
* The file might not be of the vnode type, or it may not yet
* be fully initialized, in which case the f_vnode pointer
* may be set, but f_ops is still badfileops. E.g.,
* devfs_open() transiently creates such a situation to
* facilitate csw d_fdopen().
*
* Dupfdopen() handling in kern_openat() installs the
* half-baked file into the process descriptor table, allowing
* another thread to dereference it. Guard against the race by
* checking f_ops.
*/
if (fp->f_vnode == NULL || fp->f_ops == &badfileops) {
fdrop(fp, td);
return (EINVAL);
}
*fpp = fp;
return (0);
}
/*
* Get an (NFS) file handle.
*/
#ifndef _SYS_SYSPROTO_H_
struct lgetfh_args {
char *fname;
fhandle_t *fhp;
};
#endif
int
sys_lgetfh(struct thread *td, struct lgetfh_args *uap)
{
return (kern_getfhat(td, AT_SYMLINK_NOFOLLOW, AT_FDCWD, uap->fname,
UIO_USERSPACE, uap->fhp));
}
#ifndef _SYS_SYSPROTO_H_
struct getfh_args {
char *fname;
fhandle_t *fhp;
};
#endif
int
sys_getfh(struct thread *td, struct getfh_args *uap)
{
return (kern_getfhat(td, 0, AT_FDCWD, uap->fname, UIO_USERSPACE,
uap->fhp));
}
/*
* syscall for the rpc.lockd to use to translate an open descriptor into
* an NFS file handle.
*
* warning: do not remove the priv_check() call or this becomes one giant
* security hole.
*/
#ifndef _SYS_SYSPROTO_H_
struct getfhat_args {
int fd;
char *path;
fhandle_t *fhp;
int flags;
};
#endif
int
sys_getfhat(struct thread *td, struct getfhat_args *uap)
{
if ((uap->flags & ~(AT_SYMLINK_NOFOLLOW | AT_BENEATH)) != 0)
return (EINVAL);
return (kern_getfhat(td, uap->flags, uap->fd, uap->path, UIO_USERSPACE,
uap->fhp));
}
static int
kern_getfhat(struct thread *td, int flags, int fd, const char *path,
enum uio_seg pathseg, fhandle_t *fhp)
{
struct nameidata nd;
fhandle_t fh;
struct vnode *vp;
int error;
error = priv_check(td, PRIV_VFS_GETFH);
if (error != 0)
return (error);
NDINIT_AT(&nd, LOOKUP, ((flags & AT_SYMLINK_NOFOLLOW) != 0 ? NOFOLLOW :
FOLLOW) | ((flags & AT_BENEATH) != 0 ? BENEATH : 0) | LOCKLEAF |
AUDITVNODE1, pathseg, path, fd, td);
error = namei(&nd);
if (error != 0)
return (error);
NDFREE(&nd, NDF_ONLY_PNBUF);
vp = nd.ni_vp;
bzero(&fh, sizeof(fh));
fh.fh_fsid = vp->v_mount->mnt_stat.f_fsid;
error = VOP_VPTOFH(vp, &fh.fh_fid);
vput(vp);
if (error == 0)
error = copyout(&fh, fhp, sizeof (fh));
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct fhlink_args {
fhandle_t *fhp;
const char *to;
};
#endif
int
sys_fhlink(struct thread *td, struct fhlink_args *uap)
{
return (kern_fhlinkat(td, AT_FDCWD, uap->to, UIO_USERSPACE, uap->fhp));
}
#ifndef _SYS_SYSPROTO_H_
struct fhlinkat_args {
fhandle_t *fhp;
int tofd;
const char *to;
};
#endif
int
sys_fhlinkat(struct thread *td, struct fhlinkat_args *uap)
{
return (kern_fhlinkat(td, uap->tofd, uap->to, UIO_USERSPACE, uap->fhp));
}
static int
kern_fhlinkat(struct thread *td, int fd, const char *path,
enum uio_seg pathseg, fhandle_t *fhp)
{
fhandle_t fh;
struct mount *mp;
struct vnode *vp;
int error;
error = priv_check(td, PRIV_VFS_GETFH);
if (error != 0)
return (error);
error = copyin(fhp, &fh, sizeof(fh));
if (error != 0)
return (error);
do {
bwillwrite();
if ((mp = vfs_busyfs(&fh.fh_fsid)) == NULL)
return (ESTALE);
error = VFS_FHTOVP(mp, &fh.fh_fid, LK_SHARED, &vp);
vfs_unbusy(mp);
if (error != 0)
return (error);
VOP_UNLOCK(vp, 0);
} while ((error = kern_linkat_vp(td, vp, fd, path, pathseg)) == EAGAIN);
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct fhreadlink_args {
fhandle_t *fhp;
char *buf;
size_t bufsize;
};
#endif
int
sys_fhreadlink(struct thread *td, struct fhreadlink_args *uap)
{
fhandle_t fh;
struct mount *mp;
struct vnode *vp;
int error;
error = priv_check(td, PRIV_VFS_GETFH);
if (error != 0)
return (error);
if (uap->bufsize > IOSIZE_MAX)
return (EINVAL);
error = copyin(uap->fhp, &fh, sizeof(fh));
if (error != 0)
return (error);
if ((mp = vfs_busyfs(&fh.fh_fsid)) == NULL)
return (ESTALE);
error = VFS_FHTOVP(mp, &fh.fh_fid, LK_SHARED, &vp);
vfs_unbusy(mp);
if (error != 0)
return (error);
error = kern_readlink_vp(vp, uap->buf, UIO_USERSPACE, uap->bufsize, td);
vput(vp);
return (error);
}
/*
* syscall for the rpc.lockd to use to translate an NFS file handle into an
* open descriptor.
*
* warning: do not remove the priv_check() call or this becomes one giant
* security hole.
*/
#ifndef _SYS_SYSPROTO_H_
struct fhopen_args {
const struct fhandle *u_fhp;
int flags;
};
#endif
int
sys_fhopen(struct thread *td, struct fhopen_args *uap)
{
struct mount *mp;
struct vnode *vp;
struct fhandle fhp;
struct file *fp;
int fmode, error;
int indx;
error = priv_check(td, PRIV_VFS_FHOPEN);
if (error != 0)
return (error);
indx = -1;
fmode = FFLAGS(uap->flags);
/* why not allow a non-read/write open for our lockd? */
if (((fmode & (FREAD | FWRITE)) == 0) || (fmode & O_CREAT))
return (EINVAL);
error = copyin(uap->u_fhp, &fhp, sizeof(fhp));
if (error != 0)
return (error);
/* find the mount point */
mp = vfs_busyfs(&fhp.fh_fsid);
if (mp == NULL)
return (ESTALE);
/* now give me my vnode, it gets returned to me locked */
error = VFS_FHTOVP(mp, &fhp.fh_fid, LK_EXCLUSIVE, &vp);
vfs_unbusy(mp);
if (error != 0)
return (error);
error = falloc_noinstall(td, &fp);
if (error != 0) {
vput(vp);
return (error);
}
/*
* An extra reference on `fp' has been held for us by
* falloc_noinstall().
*/
#ifdef INVARIANTS
td->td_dupfd = -1;
#endif
error = vn_open_vnode(vp, fmode, td->td_ucred, td, fp);
if (error != 0) {
KASSERT(fp->f_ops == &badfileops,
("VOP_OPEN in fhopen() set f_ops"));
KASSERT(td->td_dupfd < 0,
("fhopen() encountered fdopen()"));
vput(vp);
goto bad;
}
#ifdef INVARIANTS
td->td_dupfd = 0;
#endif
fp->f_vnode = vp;
fp->f_seqcount = 1;
finit(fp, (fmode & FMASK) | (fp->f_flag & FHASLOCK), DTYPE_VNODE, vp,
&vnops);
VOP_UNLOCK(vp, 0);
if ((fmode & O_TRUNC) != 0) {
error = fo_truncate(fp, 0, td->td_ucred, td);
if (error != 0)
goto bad;
}
error = finstall(td, fp, &indx, fmode, NULL);
bad:
fdrop(fp, td);
td->td_retval[0] = indx;
return (error);
}
/*
* Stat an (NFS) file handle.
*/
#ifndef _SYS_SYSPROTO_H_
struct fhstat_args {
struct fhandle *u_fhp;
struct stat *sb;
};
#endif
int
sys_fhstat(struct thread *td, struct fhstat_args *uap)
{
struct stat sb;
struct fhandle fh;
int error;
error = copyin(uap->u_fhp, &fh, sizeof(fh));
if (error != 0)
return (error);
error = kern_fhstat(td, fh, &sb);
if (error == 0)
error = copyout(&sb, uap->sb, sizeof(sb));
return (error);
}
int
kern_fhstat(struct thread *td, struct fhandle fh, struct stat *sb)
{
struct mount *mp;
struct vnode *vp;
int error;
error = priv_check(td, PRIV_VFS_FHSTAT);
if (error != 0)
return (error);
if ((mp = vfs_busyfs(&fh.fh_fsid)) == NULL)
return (ESTALE);
error = VFS_FHTOVP(mp, &fh.fh_fid, LK_EXCLUSIVE, &vp);
vfs_unbusy(mp);
if (error != 0)
return (error);
error = vn_stat(vp, sb, td->td_ucred, NOCRED, td);
vput(vp);
return (error);
}
/*
* Implement fstatfs() for (NFS) file handles.
*/
#ifndef _SYS_SYSPROTO_H_
struct fhstatfs_args {
struct fhandle *u_fhp;
struct statfs *buf;
};
#endif
int
sys_fhstatfs(struct thread *td, struct fhstatfs_args *uap)
{
struct statfs *sfp;
fhandle_t fh;
int error;
error = copyin(uap->u_fhp, &fh, sizeof(fhandle_t));
if (error != 0)
return (error);
sfp = malloc(sizeof(struct statfs), M_STATFS, M_WAITOK);
error = kern_fhstatfs(td, fh, sfp);
if (error == 0)
error = copyout(sfp, uap->buf, sizeof(*sfp));
free(sfp, M_STATFS);
return (error);
}
int
kern_fhstatfs(struct thread *td, fhandle_t fh, struct statfs *buf)
{
struct statfs *sp;
struct mount *mp;
struct vnode *vp;
int error;
error = priv_check(td, PRIV_VFS_FHSTATFS);
if (error != 0)
return (error);
if ((mp = vfs_busyfs(&fh.fh_fsid)) == NULL)
return (ESTALE);
error = VFS_FHTOVP(mp, &fh.fh_fid, LK_EXCLUSIVE, &vp);
if (error != 0) {
vfs_unbusy(mp);
return (error);
}
vput(vp);
error = prison_canseemount(td->td_ucred, mp);
if (error != 0)
goto out;
#ifdef MAC
error = mac_mount_check_stat(td->td_ucred, mp);
if (error != 0)
goto out;
#endif
/*
* Set these in case the underlying filesystem fails to do so.
*/
sp = &mp->mnt_stat;
sp->f_version = STATFS_VERSION;
sp->f_namemax = NAME_MAX;
sp->f_flags = mp->mnt_flag & MNT_VISFLAGMASK;
error = VFS_STATFS(mp, sp);
if (error == 0)
*buf = *sp;
out:
vfs_unbusy(mp);
return (error);
}
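fhstatfs(2) above is FreeBSD-specific, but the filesystem-metadata copyout it performs has a portable analogue in POSIX statvfs(3). A minimal userspace sketch (the helper name is ours) querying the name-length limit that kern_fhstatfs() seeds as f_namemax:

```c
#include <sys/statvfs.h>

/* Ask the filesystem backing `path` for its maximum filename length,
 * the userspace-visible counterpart of the f_namemax field the kernel
 * defaults to NAME_MAX above.  Returns -1 on error. */
long name_max_for(const char *path)
{
	struct statvfs sv;

	if (statvfs(path, &sv) != 0)
		return (-1);
	return ((long)sv.f_namemax);
}
```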
int
kern_posix_fallocate(struct thread *td, int fd, off_t offset, off_t len)
{
struct file *fp;
struct mount *mp;
struct vnode *vp;
off_t olen, ooffset;
int error;
#ifdef AUDIT
int audited_vnode1 = 0;
#endif
AUDIT_ARG_FD(fd);
if (offset < 0 || len <= 0)
return (EINVAL);
/* Check for wrap. */
if (offset > OFF_MAX - len)
return (EFBIG);
error = fget(td, fd, &cap_pwrite_rights, &fp);
if (error != 0)
return (error);
AUDIT_ARG_FILE(td->td_proc, fp);
if ((fp->f_ops->fo_flags & DFLAG_SEEKABLE) == 0) {
error = ESPIPE;
goto out;
}
if ((fp->f_flag & FWRITE) == 0) {
error = EBADF;
goto out;
}
if (fp->f_type != DTYPE_VNODE) {
error = ENODEV;
goto out;
}
vp = fp->f_vnode;
if (vp->v_type != VREG) {
error = ENODEV;
goto out;
}
/* Allocating blocks may take a long time, so iterate. */
for (;;) {
olen = len;
ooffset = offset;
bwillwrite();
mp = NULL;
error = vn_start_write(vp, &mp, V_WAIT | PCATCH);
if (error != 0)
break;
error = vn_lock(vp, LK_EXCLUSIVE);
if (error != 0) {
vn_finished_write(mp);
break;
}
#ifdef AUDIT
if (!audited_vnode1) {
AUDIT_ARG_VNODE1(vp);
audited_vnode1 = 1;
}
#endif
#ifdef MAC
error = mac_vnode_check_write(td->td_ucred, fp->f_cred, vp);
if (error == 0)
#endif
error = VOP_ALLOCATE(vp, &offset, &len);
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
if (olen + ooffset != offset + len) {
panic("offset + len changed from %jx/%jx to %jx/%jx",
ooffset, olen, offset, len);
}
if (error != 0 || len == 0)
break;
KASSERT(olen > len, ("Iteration did not make progress?"));
maybe_yield();
}
out:
fdrop(fp, td);
return (error);
}
int
sys_posix_fallocate(struct thread *td, struct posix_fallocate_args *uap)
{
int error;
error = kern_posix_fallocate(td, uap->fd, uap->offset, uap->len);
return (kern_posix_error(td, error));
}
/*
* Unlike madvise(2), we do not make a best effort to remember every
* possible caching hint. Instead, we remember the last setting with
* the exception that we will allow POSIX_FADV_NORMAL to adjust the
* region of any current setting.
*/
int
kern_posix_fadvise(struct thread *td, int fd, off_t offset, off_t len,
int advice)
{
struct fadvise_info *fa, *new;
struct file *fp;
struct vnode *vp;
off_t end;
int error;
if (offset < 0 || len < 0 || offset > OFF_MAX - len)
return (EINVAL);
AUDIT_ARG_VALUE(advice);
switch (advice) {
case POSIX_FADV_SEQUENTIAL:
case POSIX_FADV_RANDOM:
case POSIX_FADV_NOREUSE:
new = malloc(sizeof(*fa), M_FADVISE, M_WAITOK);
break;
case POSIX_FADV_NORMAL:
case POSIX_FADV_WILLNEED:
case POSIX_FADV_DONTNEED:
new = NULL;
break;
default:
return (EINVAL);
}
/* XXX: CAP_POSIX_FADVISE? */
AUDIT_ARG_FD(fd);
error = fget(td, fd, &cap_no_rights, &fp);
if (error != 0)
goto out;
AUDIT_ARG_FILE(td->td_proc, fp);
if ((fp->f_ops->fo_flags & DFLAG_SEEKABLE) == 0) {
error = ESPIPE;
goto out;
}
if (fp->f_type != DTYPE_VNODE) {
error = ENODEV;
goto out;
}
vp = fp->f_vnode;
if (vp->v_type != VREG) {
error = ENODEV;
goto out;
}
if (len == 0)
end = OFF_MAX;
else
end = offset + len - 1;
switch (advice) {
case POSIX_FADV_SEQUENTIAL:
case POSIX_FADV_RANDOM:
case POSIX_FADV_NOREUSE:
/*
* Try to merge any existing non-standard region with
* this new region if possible, otherwise create a new
* non-standard region for this request.
*/
mtx_pool_lock(mtxpool_sleep, fp);
fa = fp->f_advice;
if (fa != NULL && fa->fa_advice == advice &&
((fa->fa_start <= end && fa->fa_end >= offset) ||
(end != OFF_MAX && fa->fa_start == end + 1) ||
(fa->fa_end != OFF_MAX && fa->fa_end + 1 == offset))) {
if (offset < fa->fa_start)
fa->fa_start = offset;
if (end > fa->fa_end)
fa->fa_end = end;
} else {
new->fa_advice = advice;
new->fa_start = offset;
new->fa_end = end;
fp->f_advice = new;
new = fa;
}
mtx_pool_unlock(mtxpool_sleep, fp);
break;
case POSIX_FADV_NORMAL:
/*
* If the "normal" region overlaps with an existing
* non-standard region, trim or remove the
* non-standard region.
*/
mtx_pool_lock(mtxpool_sleep, fp);
fa = fp->f_advice;
if (fa != NULL) {
if (offset <= fa->fa_start && end >= fa->fa_end) {
new = fa;
fp->f_advice = NULL;
} else if (offset <= fa->fa_start &&
end >= fa->fa_start)
fa->fa_start = end + 1;
else if (offset <= fa->fa_end && end >= fa->fa_end)
fa->fa_end = offset - 1;
else if (offset >= fa->fa_start && end <= fa->fa_end) {
/*
* If the "normal" region is a middle
* portion of the existing
* non-standard region, just remove
* the whole thing rather than picking
* one side or the other to
* preserve.
*/
new = fa;
fp->f_advice = NULL;
}
}
mtx_pool_unlock(mtxpool_sleep, fp);
break;
case POSIX_FADV_WILLNEED:
case POSIX_FADV_DONTNEED:
error = VOP_ADVISE(vp, offset, end, advice);
break;
}
out:
if (fp != NULL)
fdrop(fp, td);
free(new, M_FADVISE);
return (error);
}
int
sys_posix_fadvise(struct thread *td, struct posix_fadvise_args *uap)
{
int error;
error = kern_posix_fadvise(td, uap->fd, uap->offset, uap->len,
uap->advice);
return (kern_posix_error(td, error));
}
Index: projects/clang800-import/sys/mips/include/elf.h
===================================================================
--- projects/clang800-import/sys/mips/include/elf.h (revision 343955)
+++ projects/clang800-import/sys/mips/include/elf.h (revision 343956)
@@ -1,215 +1,215 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD AND BSD-2-Clause-NetBSD
*
- * Copyright (c) 2013 M. Warner Losh. All Rights Reserved.
+ * Copyright (c) 2013 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
/*-
* Copyright (c) 2013 The NetBSD Foundation, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
* TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
* See below starting with the line with $NetBSD...$ for code this applies to.
*/
#ifndef __MIPS_ELF_H
#define __MIPS_ELF_H
/* FreeBSD specific bits - derived from FreeBSD specific files and changes to old elf.h */
/*
* Define __ELF_WORD_SIZE based on the ABI, if not defined yet. This sets
* the proper defaults when we're not trying to do 32-bit on 64-bit systems.
* We include both 32 and 64 bit versions so we can support multiple ABIs.
*/
#ifndef __ELF_WORD_SIZE
#if defined(__mips_n64)
#define __ELF_WORD_SIZE 64
#else
#define __ELF_WORD_SIZE 32
#endif
#endif
#include <sys/elf32.h>
#include <sys/elf64.h>
#include <sys/elf_generic.h>
#define ELF_ARCH EM_MIPS
#define ELF_ARCH32 EM_MIPS
#define ELF_MACHINE_OK(x) ((x) == ELF_ARCH)
/* Define "machine" characteristics */
#if __ELF_WORD_SIZE == 32
#define ELF_TARG_CLASS ELFCLASS32
#else
#define ELF_TARG_CLASS ELFCLASS64
#endif
#ifdef __MIPSEB__
#define ELF_TARG_DATA ELFDATA2MSB
#else
#define ELF_TARG_DATA ELFDATA2LSB
#endif
#define ELF_TARG_MACH EM_MIPS
#define ELF_TARG_VER 1
/*
* Auxiliary vector entries for passing information to the interpreter.
*
* The i386 supplement to the SVR4 ABI specification names this "auxv_t",
* but POSIX lays claim to all symbols ending with "_t".
*/
typedef struct { /* Auxiliary vector entry on initial stack */
int a_type; /* Entry type. */
union {
int a_val; /* Integer value. */
void *a_ptr; /* Address. */
void (*a_fcn)(void); /* Function pointer (not used). */
} a_un;
} Elf32_Auxinfo;
typedef struct { /* Auxiliary vector entry on initial stack */
long a_type; /* Entry type. */
union {
long a_val; /* Integer value. */
void *a_ptr; /* Address. */
void (*a_fcn)(void); /* Function pointer (not used). */
} a_un;
} Elf64_Auxinfo;
__ElfType(Auxinfo);
#define ET_DYN_LOAD_ADDR 0x0120000
/*
* Constant to mark start of symtab/strtab saved by trampoline
*/
#define SYMTAB_MAGIC 0x64656267
/* from NetBSD's sys/mips/include/elf_machdep.h $NetBSD: elf_machdep.h,v 1.18 2013/05/23 21:39:49 christos Exp $ */
/* mips relocs. */
#define R_MIPS_NONE 0
#define R_MIPS_16 1
#define R_MIPS_32 2
#define R_MIPS_REL32 3
#define R_MIPS_REL R_MIPS_REL32
#define R_MIPS_26 4
#define R_MIPS_HI16 5 /* high 16 bits of symbol value */
#define R_MIPS_LO16 6 /* low 16 bits of symbol value */
#define R_MIPS_GPREL16 7 /* GP-relative reference */
#define R_MIPS_LITERAL 8 /* Reference to literal section */
#define R_MIPS_GOT16 9 /* Reference to global offset table */
#define R_MIPS_GOT R_MIPS_GOT16
#define R_MIPS_PC16 10 /* 16 bit PC relative reference */
#define R_MIPS_CALL16 11 /* 16 bit call thru glbl offset tbl */
#define R_MIPS_CALL R_MIPS_CALL16
#define R_MIPS_GPREL32 12
/* 13, 14, 15 are not defined at this point. */
#define R_MIPS_UNUSED1 13
#define R_MIPS_UNUSED2 14
#define R_MIPS_UNUSED3 15
/*
* The remaining relocs are apparently part of the 64-bit Irix ELF ABI.
*/
#define R_MIPS_SHIFT5 16
#define R_MIPS_SHIFT6 17
#define R_MIPS_64 18
#define R_MIPS_GOT_DISP 19
#define R_MIPS_GOT_PAGE 20
#define R_MIPS_GOT_OFST 21
#define R_MIPS_GOT_HI16 22
#define R_MIPS_GOT_LO16 23
#define R_MIPS_SUB 24
#define R_MIPS_INSERT_A 25
#define R_MIPS_INSERT_B 26
#define R_MIPS_DELETE 27
#define R_MIPS_HIGHER 28
#define R_MIPS_HIGHEST 29
#define R_MIPS_CALL_HI16 30
#define R_MIPS_CALL_LO16 31
#define R_MIPS_SCN_DISP 32
#define R_MIPS_REL16 33
#define R_MIPS_ADD_IMMEDIATE 34
#define R_MIPS_PJUMP 35
#define R_MIPS_RELGOT 36
#define R_MIPS_JALR 37
/* TLS relocations */
#define R_MIPS_TLS_DTPMOD32 38 /* Module number 32 bit */
#define R_MIPS_TLS_DTPREL32 39 /* Module-relative offset 32 bit */
#define R_MIPS_TLS_DTPMOD64 40 /* Module number 64 bit */
#define R_MIPS_TLS_DTPREL64 41 /* Module-relative offset 64 bit */
#define R_MIPS_TLS_GD 42 /* 16 bit GOT offset for GD */
#define R_MIPS_TLS_LDM 43 /* 16 bit GOT offset for LDM */
#define R_MIPS_TLS_DTPREL_HI16 44 /* Module-relative offset, high 16 bits */
#define R_MIPS_TLS_DTPREL_LO16 45 /* Module-relative offset, low 16 bits */
#define R_MIPS_TLS_GOTTPREL 46 /* 16 bit GOT offset for IE */
#define R_MIPS_TLS_TPREL32 47 /* TP-relative offset, 32 bit */
#define R_MIPS_TLS_TPREL64 48 /* TP-relative offset, 64 bit */
#define R_MIPS_TLS_TPREL_HI16 49 /* TP-relative offset, high 16 bits */
#define R_MIPS_TLS_TPREL_LO16 50 /* TP-relative offset, low 16 bits */
#define R_MIPS_max 51
#define R_TYPE(name) __CONCAT(R_MIPS_,name)
#define R_MIPS16_min 100
#define R_MIPS16_26 100
#define R_MIPS16_GPREL 101
#define R_MIPS16_GOT16 102
#define R_MIPS16_CALL16 103
#define R_MIPS16_HI16 104
#define R_MIPS16_LO16 105
#define R_MIPS16_max 106
#define R_MIPS_COPY 126
#define R_MIPS_JUMP_SLOT 127
#endif /* __MIPS_ELF_H */
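The R_MIPS_* numbering above matches what glibc's <elf.h> also ships, so the way an ELFCLASS32 relocation entry packs its symbol index and type can be sanity-checked in userspace with the standard ELF32_R_* macros (a sketch, not FreeBSD-specific; function names are ours):

```c
#include <elf.h>

/* An ELFCLASS32 relocation packs the symbol index in the upper 24 bits
 * of r_info and the relocation type (e.g. R_MIPS_HI16 == 5) in the low
 * 8 bits; <elf.h> provides macros for both directions. */
Elf32_Word pack_rel(Elf32_Word sym, unsigned char type)
{
	return (ELF32_R_INFO(sym, type));
}

unsigned char rel_type(Elf32_Word info)
{
	return (ELF32_R_TYPE(info));
}

Elf32_Word rel_sym(Elf32_Word info)
{
	return (ELF32_R_SYM(info));
}
```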
Index: projects/clang800-import/sys/modules/dtb/allwinner/Makefile
===================================================================
--- projects/clang800-import/sys/modules/dtb/allwinner/Makefile (revision 343955)
+++ projects/clang800-import/sys/modules/dtb/allwinner/Makefile (revision 343956)
@@ -1,57 +1,58 @@
# $FreeBSD$
# All the dts files for allwinner systems we support.
.if ${MACHINE_ARCH} == "armv7"
DTS= \
sun4i-a10-cubieboard.dts \
sun4i-a10-olinuxino-lime.dts \
sun6i-a31s-sinovoip-bpi-m2.dts \
sun5i-a13-olinuxino.dts \
sun5i-r8-chip.dts \
sun7i-a20-bananapi.dts \
sun7i-a20-cubieboard2.dts \
sun7i-a20-lamobo-r1.dts \
sun7i-a20-olimex-som-evb.dts \
sun7i-a20-pcduino3.dts \
sun8i-a83t-bananapi-m3.dts \
sun8i-h2-plus-orangepi-r1.dts \
sun8i-h2-plus-orangepi-zero.dts \
sun8i-h3-nanopi-m1.dts \
sun8i-h3-nanopi-m1-plus.dts \
sun8i-h3-nanopi-neo.dts \
sun8i-h3-orangepi-one.dts \
sun8i-h3-orangepi-pc.dts \
sun8i-h3-orangepi-plus2e.dts
DTSO= sun8i-a83t-sid.dtso \
sun8i-h3-sid.dtso
LINKS= \
${DTBDIR}/sun4i-a10-cubieboard.dtb ${DTBDIR}/cubieboard.dtb \
${DTBDIR}/sun4i-a10-olinuxino-lime.dtb ${DTBDIR}/olinuxino-lime.dtb \
${DTBDIR}/sun6i-a31s-sinovoip-bpi-m2.dtb ${DTBDIR}/bananapim2.dtb \
${DTBDIR}/sun7i-a20-bananapi.dtb ${DTBDIR}/bananapi.dtb \
${DTBDIR}/sun7i-a20-cubieboard2.dtb ${DTBDIR}/cubieboard2.dtb \
${DTBDIR}/sun7i-a20-olimex-som-evb.dtb ${DTBDIR}/olimex-a20-som-evb.dtb \
${DTBDIR}/sun7i-a20-pcduino3.dtb ${DTBDIR}/pcduino3.dtb \
${DTBDIR}/sun8i-a83t-bananapi-m3.dtb ${DTBDIR}/sinovoip-bpi-m3.dtb \
${DTBDIR}/sun8i-a83t-bananapi-m3.dtb ${DTBDIR}/sun8i-a83t-sinovoip-bpi-m3.dtb
.elif ${MACHINE_ARCH} == "aarch64"
DTS= \
allwinner/sun50i-a64-nanopi-a64.dts \
allwinner/sun50i-a64-olinuxino.dts \
+ allwinner/sun50i-a64-pine64-lts.dts \
allwinner/sun50i-a64-pine64-plus.dts \
allwinner/sun50i-a64-pine64.dts \
allwinner/sun50i-a64-sopine-baseboard.dts \
allwinner/sun50i-h5-orangepi-pc2.dts
DTSO= sun50i-a64-opp.dtso \
sun50i-a64-pwm.dtso \
sun50i-a64-rpwm.dtso \
sun50i-a64-sid.dtso \
sun50i-a64-ths.dtso \
sun50i-a64-timer.dtso
.endif
.include <bsd.dtb.mk>
Index: projects/clang800-import/sys/net80211/ieee80211_ioctl.c
===================================================================
--- projects/clang800-import/sys/net80211/ieee80211_ioctl.c (revision 343955)
+++ projects/clang800-import/sys/net80211/ieee80211_ioctl.c (revision 343956)
@@ -1,3624 +1,3679 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2001 Atsushi Onoe
* Copyright (c) 2002-2009 Sam Leffler, Errno Consulting
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* IEEE 802.11 ioctl support (FreeBSD-specific)
*/
#include "opt_inet.h"
#include "opt_wlan.h"
#include <sys/endian.h>
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/priv.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <sys/systm.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/if_dl.h>
#include <net/if_media.h>
#include <net/ethernet.h>
#ifdef INET
#include <netinet/in.h>
#include <netinet/if_ether.h>
#endif
#include <net80211/ieee80211_var.h>
#include <net80211/ieee80211_ioctl.h>
#include <net80211/ieee80211_regdomain.h>
#include <net80211/ieee80211_input.h>
#define IS_UP_AUTO(_vap) \
(IFNET_IS_UP_RUNNING((_vap)->iv_ifp) && \
(_vap)->iv_roaming == IEEE80211_ROAMING_AUTO)
static const uint8_t zerobssid[IEEE80211_ADDR_LEN];
static struct ieee80211_channel *findchannel(struct ieee80211com *,
int ieee, int mode);
static int ieee80211_scanreq(struct ieee80211vap *,
struct ieee80211_scan_req *);
static int
ieee80211_ioctl_getkey(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_node *ni;
struct ieee80211req_key ik;
struct ieee80211_key *wk;
const struct ieee80211_cipher *cip;
u_int kid;
int error;
if (ireq->i_len != sizeof(ik))
return EINVAL;
error = copyin(ireq->i_data, &ik, sizeof(ik));
if (error)
return error;
kid = ik.ik_keyix;
if (kid == IEEE80211_KEYIX_NONE) {
ni = ieee80211_find_vap_node(&ic->ic_sta, vap, ik.ik_macaddr);
if (ni == NULL)
return ENOENT;
wk = &ni->ni_ucastkey;
} else {
if (kid >= IEEE80211_WEP_NKID)
return EINVAL;
wk = &vap->iv_nw_keys[kid];
IEEE80211_ADDR_COPY(&ik.ik_macaddr, vap->iv_bss->ni_macaddr);
ni = NULL;
}
cip = wk->wk_cipher;
ik.ik_type = cip->ic_cipher;
ik.ik_keylen = wk->wk_keylen;
ik.ik_flags = wk->wk_flags & (IEEE80211_KEY_XMIT | IEEE80211_KEY_RECV);
if (wk->wk_keyix == vap->iv_def_txkey)
ik.ik_flags |= IEEE80211_KEY_DEFAULT;
if (priv_check(curthread, PRIV_NET80211_GETKEY) == 0) {
/* NB: only root can read key data */
ik.ik_keyrsc = wk->wk_keyrsc[IEEE80211_NONQOS_TID];
ik.ik_keytsc = wk->wk_keytsc;
memcpy(ik.ik_keydata, wk->wk_key, wk->wk_keylen);
if (cip->ic_cipher == IEEE80211_CIPHER_TKIP) {
memcpy(ik.ik_keydata+wk->wk_keylen,
wk->wk_key + IEEE80211_KEYBUF_SIZE,
IEEE80211_MICBUF_SIZE);
ik.ik_keylen += IEEE80211_MICBUF_SIZE;
}
} else {
ik.ik_keyrsc = 0;
ik.ik_keytsc = 0;
memset(ik.ik_keydata, 0, sizeof(ik.ik_keydata));
}
if (ni != NULL)
ieee80211_free_node(ni);
return copyout(&ik, ireq->i_data, sizeof(ik));
}
static int
ieee80211_ioctl_getchanlist(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
if (sizeof(ic->ic_chan_active) < ireq->i_len)
ireq->i_len = sizeof(ic->ic_chan_active);
return copyout(&ic->ic_chan_active, ireq->i_data, ireq->i_len);
}
static int
ieee80211_ioctl_getchaninfo(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
uint32_t space;
space = __offsetof(struct ieee80211req_chaninfo,
ic_chans[ic->ic_nchans]);
if (space > ireq->i_len)
space = ireq->i_len;
/* XXX assumes compatible layout */
return copyout(&ic->ic_nchans, ireq->i_data, space);
}
static int
ieee80211_ioctl_getwpaie(struct ieee80211vap *vap,
struct ieee80211req *ireq, int req)
{
struct ieee80211_node *ni;
struct ieee80211req_wpaie2 *wpaie;
int error;
if (ireq->i_len < IEEE80211_ADDR_LEN)
return EINVAL;
wpaie = IEEE80211_MALLOC(sizeof(*wpaie), M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (wpaie == NULL)
return ENOMEM;
error = copyin(ireq->i_data, wpaie->wpa_macaddr, IEEE80211_ADDR_LEN);
if (error != 0)
goto bad;
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap, wpaie->wpa_macaddr);
if (ni == NULL) {
error = ENOENT;
goto bad;
}
if (ni->ni_ies.wpa_ie != NULL) {
int ielen = ni->ni_ies.wpa_ie[1] + 2;
if (ielen > sizeof(wpaie->wpa_ie))
ielen = sizeof(wpaie->wpa_ie);
memcpy(wpaie->wpa_ie, ni->ni_ies.wpa_ie, ielen);
}
if (req == IEEE80211_IOC_WPAIE2) {
if (ni->ni_ies.rsn_ie != NULL) {
int ielen = ni->ni_ies.rsn_ie[1] + 2;
if (ielen > sizeof(wpaie->rsn_ie))
ielen = sizeof(wpaie->rsn_ie);
memcpy(wpaie->rsn_ie, ni->ni_ies.rsn_ie, ielen);
}
if (ireq->i_len > sizeof(struct ieee80211req_wpaie2))
ireq->i_len = sizeof(struct ieee80211req_wpaie2);
} else {
/* compatibility op, may overwrite wpa ie */
/* XXX check ic_flags? */
if (ni->ni_ies.rsn_ie != NULL) {
int ielen = ni->ni_ies.rsn_ie[1] + 2;
if (ielen > sizeof(wpaie->wpa_ie))
ielen = sizeof(wpaie->wpa_ie);
memcpy(wpaie->wpa_ie, ni->ni_ies.rsn_ie, ielen);
}
if (ireq->i_len > sizeof(struct ieee80211req_wpaie))
ireq->i_len = sizeof(struct ieee80211req_wpaie);
}
ieee80211_free_node(ni);
error = copyout(wpaie, ireq->i_data, ireq->i_len);
bad:
IEEE80211_FREE(wpaie, M_TEMP);
return error;
}
static int
ieee80211_ioctl_getstastats(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211_node *ni;
uint8_t macaddr[IEEE80211_ADDR_LEN];
const size_t off = __offsetof(struct ieee80211req_sta_stats, is_stats);
int error;
if (ireq->i_len < off)
return EINVAL;
error = copyin(ireq->i_data, macaddr, IEEE80211_ADDR_LEN);
if (error != 0)
return error;
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap, macaddr);
if (ni == NULL)
return ENOENT;
if (ireq->i_len > sizeof(struct ieee80211req_sta_stats))
ireq->i_len = sizeof(struct ieee80211req_sta_stats);
/* NB: copy out only the statistics */
error = copyout(&ni->ni_stats, (uint8_t *) ireq->i_data + off,
ireq->i_len - off);
ieee80211_free_node(ni);
return error;
}
struct scanreq {
struct ieee80211req_scan_result *sr;
size_t space;
};
static size_t
scan_space(const struct ieee80211_scan_entry *se, int *ielen)
{
size_t len;
*ielen = se->se_ies.len;
/*
* NB: ie's can be no more than 255 bytes and the max 802.11
* packet is <3Kbytes so we are sure this doesn't overflow
* 16-bits; if this is a concern we can drop the ie's.
*/
len = sizeof(struct ieee80211req_scan_result) + se->se_ssid[1] +
se->se_meshid[1] + *ielen;
return roundup(len, sizeof(uint32_t));
}
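The roundup() above pads each variable-length scan record to a uint32_t boundary so the next ieee80211req_scan_result starts aligned. The same arithmetic in portable C (function name ours, as a sketch):

```c
#include <stddef.h>

/* Round len up to the next multiple of align, as the kernel's
 * roundup(len, sizeof(uint32_t)) does for scan records. */
size_t round_up(size_t len, size_t align)
{
	return (((len + align - 1) / align) * align);
}
```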
static void
get_scan_space(void *arg, const struct ieee80211_scan_entry *se)
{
struct scanreq *req = arg;
int ielen;
req->space += scan_space(se, &ielen);
}
static void
get_scan_result(void *arg, const struct ieee80211_scan_entry *se)
{
struct scanreq *req = arg;
struct ieee80211req_scan_result *sr;
int ielen, len, nr, nxr;
uint8_t *cp;
len = scan_space(se, &ielen);
if (len > req->space)
return;
sr = req->sr;
KASSERT(len <= 65535 && ielen <= 65535,
("len %u ssid %u ie %u", len, se->se_ssid[1], ielen));
sr->isr_len = len;
sr->isr_ie_off = sizeof(struct ieee80211req_scan_result);
sr->isr_ie_len = ielen;
sr->isr_freq = se->se_chan->ic_freq;
sr->isr_flags = se->se_chan->ic_flags;
sr->isr_rssi = se->se_rssi;
sr->isr_noise = se->se_noise;
sr->isr_intval = se->se_intval;
sr->isr_capinfo = se->se_capinfo;
sr->isr_erp = se->se_erp;
IEEE80211_ADDR_COPY(sr->isr_bssid, se->se_bssid);
nr = min(se->se_rates[1], IEEE80211_RATE_MAXSIZE);
memcpy(sr->isr_rates, se->se_rates+2, nr);
nxr = min(se->se_xrates[1], IEEE80211_RATE_MAXSIZE - nr);
memcpy(sr->isr_rates+nr, se->se_xrates+2, nxr);
sr->isr_nrates = nr + nxr;
/* copy SSID */
sr->isr_ssid_len = se->se_ssid[1];
cp = ((uint8_t *)sr) + sr->isr_ie_off;
memcpy(cp, se->se_ssid+2, sr->isr_ssid_len);
/* copy mesh id */
cp += sr->isr_ssid_len;
sr->isr_meshid_len = se->se_meshid[1];
memcpy(cp, se->se_meshid+2, sr->isr_meshid_len);
cp += sr->isr_meshid_len;
if (ielen)
memcpy(cp, se->se_ies.data, ielen);
req->space -= len;
req->sr = (struct ieee80211req_scan_result *)(((uint8_t *)sr) + len);
}
static int
ieee80211_ioctl_getscanresults(struct ieee80211vap *vap,
struct ieee80211req *ireq)
{
struct scanreq req;
int error;
if (ireq->i_len < sizeof(struct scanreq))
return EFAULT;
error = 0;
req.space = 0;
ieee80211_scan_iterate(vap, get_scan_space, &req);
if (req.space > ireq->i_len)
req.space = ireq->i_len;
if (req.space > 0) {
uint32_t space;
void *p;
space = req.space;
/* XXX M_WAITOK after driver lock released */
p = IEEE80211_MALLOC(space, M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (p == NULL)
return ENOMEM;
req.sr = p;
ieee80211_scan_iterate(vap, get_scan_result, &req);
ireq->i_len = space - req.space;
error = copyout(p, ireq->i_data, ireq->i_len);
IEEE80211_FREE(p, M_TEMP);
} else
ireq->i_len = 0;
return error;
}
struct stainforeq {
struct ieee80211req_sta_info *si;
size_t space;
};
static size_t
sta_space(const struct ieee80211_node *ni, size_t *ielen)
{
*ielen = ni->ni_ies.len;
return roundup(sizeof(struct ieee80211req_sta_info) + *ielen,
sizeof(uint32_t));
}
static void
get_sta_space(void *arg, struct ieee80211_node *ni)
{
struct stainforeq *req = arg;
size_t ielen;
if (ni->ni_vap->iv_opmode == IEEE80211_M_HOSTAP &&
ni->ni_associd == 0) /* only associated stations */
return;
req->space += sta_space(ni, &ielen);
}
static void
get_sta_info(void *arg, struct ieee80211_node *ni)
{
struct stainforeq *req = arg;
struct ieee80211vap *vap = ni->ni_vap;
struct ieee80211req_sta_info *si;
size_t ielen, len;
uint8_t *cp;
if (vap->iv_opmode == IEEE80211_M_HOSTAP &&
ni->ni_associd == 0) /* only associated stations */
return;
if (ni->ni_chan == IEEE80211_CHAN_ANYC) /* XXX bogus entry */
return;
len = sta_space(ni, &ielen);
if (len > req->space)
return;
si = req->si;
si->isi_len = len;
si->isi_ie_off = sizeof(struct ieee80211req_sta_info);
si->isi_ie_len = ielen;
si->isi_freq = ni->ni_chan->ic_freq;
si->isi_flags = ni->ni_chan->ic_flags;
si->isi_state = ni->ni_flags;
si->isi_authmode = ni->ni_authmode;
vap->iv_ic->ic_node_getsignal(ni, &si->isi_rssi, &si->isi_noise);
vap->iv_ic->ic_node_getmimoinfo(ni, &si->isi_mimo);
si->isi_capinfo = ni->ni_capinfo;
si->isi_erp = ni->ni_erp;
IEEE80211_ADDR_COPY(si->isi_macaddr, ni->ni_macaddr);
si->isi_nrates = ni->ni_rates.rs_nrates;
if (si->isi_nrates > 15)
si->isi_nrates = 15;
memcpy(si->isi_rates, ni->ni_rates.rs_rates, si->isi_nrates);
si->isi_txrate = ni->ni_txrate;
if (si->isi_txrate & IEEE80211_RATE_MCS) {
const struct ieee80211_mcs_rates *mcs =
&ieee80211_htrates[ni->ni_txrate &~ IEEE80211_RATE_MCS];
if (IEEE80211_IS_CHAN_HT40(ni->ni_chan)) {
if (ni->ni_flags & IEEE80211_NODE_SGI40)
si->isi_txmbps = mcs->ht40_rate_800ns;
else
si->isi_txmbps = mcs->ht40_rate_400ns;
} else {
if (ni->ni_flags & IEEE80211_NODE_SGI20)
si->isi_txmbps = mcs->ht20_rate_800ns;
else
si->isi_txmbps = mcs->ht20_rate_400ns;
}
} else
si->isi_txmbps = si->isi_txrate;
si->isi_associd = ni->ni_associd;
si->isi_txpower = ni->ni_txpower;
si->isi_vlan = ni->ni_vlan;
if (ni->ni_flags & IEEE80211_NODE_QOS) {
memcpy(si->isi_txseqs, ni->ni_txseqs, sizeof(ni->ni_txseqs));
memcpy(si->isi_rxseqs, ni->ni_rxseqs, sizeof(ni->ni_rxseqs));
} else {
si->isi_txseqs[0] = ni->ni_txseqs[IEEE80211_NONQOS_TID];
si->isi_rxseqs[0] = ni->ni_rxseqs[IEEE80211_NONQOS_TID];
}
/* NB: leave all cases in case we relax ni_associd == 0 check */
if (ieee80211_node_is_authorized(ni))
si->isi_inact = vap->iv_inact_run;
else if (ni->ni_associd != 0 ||
(vap->iv_opmode == IEEE80211_M_WDS &&
(vap->iv_flags_ext & IEEE80211_FEXT_WDSLEGACY)))
si->isi_inact = vap->iv_inact_auth;
else
si->isi_inact = vap->iv_inact_init;
si->isi_inact = (si->isi_inact - ni->ni_inact) * IEEE80211_INACT_WAIT;
si->isi_localid = ni->ni_mllid;
si->isi_peerid = ni->ni_mlpid;
si->isi_peerstate = ni->ni_mlstate;
if (ielen) {
cp = ((uint8_t *)si) + si->isi_ie_off;
memcpy(cp, ni->ni_ies.data, ielen);
}
req->si = (struct ieee80211req_sta_info *)(((uint8_t *)si) + len);
req->space -= len;
}
static int
getstainfo_common(struct ieee80211vap *vap, struct ieee80211req *ireq,
struct ieee80211_node *ni, size_t off)
{
struct ieee80211com *ic = vap->iv_ic;
struct stainforeq req;
size_t space;
void *p;
int error;
error = 0;
req.space = 0;
if (ni == NULL) {
ieee80211_iterate_nodes_vap(&ic->ic_sta, vap, get_sta_space,
&req);
} else
get_sta_space(&req, ni);
if (req.space > ireq->i_len)
req.space = ireq->i_len;
if (req.space > 0) {
space = req.space;
/* XXX M_WAITOK after driver lock released */
p = IEEE80211_MALLOC(space, M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (p == NULL) {
error = ENOMEM;
goto bad;
}
req.si = p;
if (ni == NULL) {
ieee80211_iterate_nodes_vap(&ic->ic_sta, vap,
get_sta_info, &req);
} else
get_sta_info(&req, ni);
ireq->i_len = space - req.space;
error = copyout(p, (uint8_t *) ireq->i_data+off, ireq->i_len);
IEEE80211_FREE(p, M_TEMP);
} else
ireq->i_len = 0;
bad:
if (ni != NULL)
ieee80211_free_node(ni);
return error;
}
static int
ieee80211_ioctl_getstainfo(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
uint8_t macaddr[IEEE80211_ADDR_LEN];
const size_t off = __offsetof(struct ieee80211req_sta_req, info);
struct ieee80211_node *ni;
int error;
if (ireq->i_len < sizeof(struct ieee80211req_sta_req))
return EFAULT;
error = copyin(ireq->i_data, macaddr, IEEE80211_ADDR_LEN);
if (error != 0)
return error;
if (IEEE80211_ADDR_EQ(macaddr, vap->iv_ifp->if_broadcastaddr)) {
ni = NULL;
} else {
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap, macaddr);
if (ni == NULL)
return ENOENT;
}
return getstainfo_common(vap, ireq, ni, off);
}
static int
ieee80211_ioctl_getstatxpow(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211_node *ni;
struct ieee80211req_sta_txpow txpow;
int error;
if (ireq->i_len != sizeof(txpow))
return EINVAL;
error = copyin(ireq->i_data, &txpow, sizeof(txpow));
if (error != 0)
return error;
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap, txpow.it_macaddr);
if (ni == NULL)
return ENOENT;
txpow.it_txpow = ni->ni_txpower;
error = copyout(&txpow, ireq->i_data, sizeof(txpow));
ieee80211_free_node(ni);
return error;
}
static int
ieee80211_ioctl_getwmeparam(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_wme_state *wme = &ic->ic_wme;
struct wmeParams *wmep;
int ac;
if ((ic->ic_caps & IEEE80211_C_WME) == 0)
return EINVAL;
ac = (ireq->i_len & IEEE80211_WMEPARAM_VAL);
if (ac >= WME_NUM_AC)
ac = WME_AC_BE;
if (ireq->i_len & IEEE80211_WMEPARAM_BSS)
wmep = &wme->wme_wmeBssChanParams.cap_wmeParams[ac];
else
wmep = &wme->wme_wmeChanParams.cap_wmeParams[ac];
switch (ireq->i_type) {
case IEEE80211_IOC_WME_CWMIN: /* WME: CWmin */
ireq->i_val = wmep->wmep_logcwmin;
break;
case IEEE80211_IOC_WME_CWMAX: /* WME: CWmax */
ireq->i_val = wmep->wmep_logcwmax;
break;
case IEEE80211_IOC_WME_AIFS: /* WME: AIFS */
ireq->i_val = wmep->wmep_aifsn;
break;
case IEEE80211_IOC_WME_TXOPLIMIT: /* WME: txops limit */
ireq->i_val = wmep->wmep_txopLimit;
break;
case IEEE80211_IOC_WME_ACM: /* WME: ACM (bss only) */
wmep = &wme->wme_wmeBssChanParams.cap_wmeParams[ac];
ireq->i_val = wmep->wmep_acm;
break;
case IEEE80211_IOC_WME_ACKPOLICY: /* WME: ACK policy (!bss only) */
wmep = &wme->wme_wmeChanParams.cap_wmeParams[ac];
ireq->i_val = !wmep->wmep_noackPolicy;
break;
}
return 0;
}
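Note how the WME get/set handlers overload `ireq->i_len` as a parameter word rather than a buffer length: the access-category index rides in the low bits and a flag bit selects the BSS parameter set. A small standalone decoder of that encoding; the `WMEPARAM_*` constants below are illustrative stand-ins assuming the conventional `IEEE80211_WMEPARAM_BSS`/`VAL` split, with the BSS flag in the top bit.

```c
#include <assert.h>

#define WMEPARAM_BSS	0x8000	/* stand-in for IEEE80211_WMEPARAM_BSS */
#define WMEPARAM_VAL	0x7fff	/* stand-in for IEEE80211_WMEPARAM_VAL */
#define NUM_AC		4	/* stand-in for WME_NUM_AC */
#define AC_BE		0	/* stand-in for WME_AC_BE (best effort) */

static void
decode_wmeparam(unsigned i_len, int *ac, int *isbss)
{
	*isbss = (i_len & WMEPARAM_BSS) != 0;
	*ac = (int)(i_len & WMEPARAM_VAL);
	if (*ac >= NUM_AC)	/* out-of-range AC falls back to best effort */
		*ac = AC_BE;
}
```

The fallback to best effort for an out-of-range category matches the `ac >= WME_NUM_AC` clamp in the handler above.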
static int
ieee80211_ioctl_getmaccmd(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
const struct ieee80211_aclator *acl = vap->iv_acl;
return (acl == NULL ? EINVAL : acl->iac_getioctl(vap, ireq));
}
static int
ieee80211_ioctl_getcurchan(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_channel *c;
if (ireq->i_len != sizeof(struct ieee80211_channel))
return EINVAL;
/*
* vaps may have different operating channels when HT is

* in use. When in RUN state report the vap-specific channel.
* Otherwise return curchan.
*/
if (vap->iv_state == IEEE80211_S_RUN || vap->iv_state == IEEE80211_S_SLEEP)
c = vap->iv_bss->ni_chan;
else
c = ic->ic_curchan;
return copyout(c, ireq->i_data, sizeof(*c));
}
static int
getappie(const struct ieee80211_appie *aie, struct ieee80211req *ireq)
{
if (aie == NULL)
return EINVAL;
/* NB: truncate, caller can check length */
if (ireq->i_len > aie->ie_len)
ireq->i_len = aie->ie_len;
return copyout(aie->ie_data, ireq->i_data, ireq->i_len);
}
static int
ieee80211_ioctl_getappie(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
uint8_t fc0;
fc0 = ireq->i_val & 0xff;
if ((fc0 & IEEE80211_FC0_TYPE_MASK) != IEEE80211_FC0_TYPE_MGT)
return EINVAL;
/* NB: could check iv_opmode and reject but hardly worth the effort */
switch (fc0 & IEEE80211_FC0_SUBTYPE_MASK) {
case IEEE80211_FC0_SUBTYPE_BEACON:
return getappie(vap->iv_appie_beacon, ireq);
case IEEE80211_FC0_SUBTYPE_PROBE_RESP:
return getappie(vap->iv_appie_proberesp, ireq);
case IEEE80211_FC0_SUBTYPE_ASSOC_RESP:
return getappie(vap->iv_appie_assocresp, ireq);
case IEEE80211_FC0_SUBTYPE_PROBE_REQ:
return getappie(vap->iv_appie_probereq, ireq);
case IEEE80211_FC0_SUBTYPE_ASSOC_REQ:
return getappie(vap->iv_appie_assocreq, ireq);
case IEEE80211_FC0_SUBTYPE_BEACON|IEEE80211_FC0_SUBTYPE_PROBE_RESP:
return getappie(vap->iv_appie_wpa, ireq);
}
return EINVAL;
}
static int
ieee80211_ioctl_getregdomain(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
if (ireq->i_len != sizeof(ic->ic_regdomain))
return EINVAL;
return copyout(&ic->ic_regdomain, ireq->i_data,
sizeof(ic->ic_regdomain));
}
static int
ieee80211_ioctl_getroam(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
size_t len = ireq->i_len;
/* NB: accept short requests for backwards compat */
if (len > sizeof(vap->iv_roamparms))
len = sizeof(vap->iv_roamparms);
return copyout(vap->iv_roamparms, ireq->i_data, len);
}
static int
ieee80211_ioctl_gettxparams(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
size_t len = ireq->i_len;
/* NB: accept short requests for backwards compat */
if (len > sizeof(vap->iv_txparms))
len = sizeof(vap->iv_txparms);
return copyout(vap->iv_txparms, ireq->i_data, len);
}
static int
ieee80211_ioctl_getdevcaps(struct ieee80211com *ic,
const struct ieee80211req *ireq)
{
struct ieee80211_devcaps_req *dc;
struct ieee80211req_chaninfo *ci;
int maxchans, error;
maxchans = 1 + ((ireq->i_len - sizeof(struct ieee80211_devcaps_req)) /
sizeof(struct ieee80211_channel));
/* NB: require 1 so we know ic_nchans is accessible */
if (maxchans < 1)
return EINVAL;
/* constrain max request size, 2K channels is ~24Kbytes */
if (maxchans > 2048)
maxchans = 2048;
dc = (struct ieee80211_devcaps_req *)
IEEE80211_MALLOC(IEEE80211_DEVCAPS_SIZE(maxchans), M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (dc == NULL)
return ENOMEM;
dc->dc_drivercaps = ic->ic_caps;
dc->dc_cryptocaps = ic->ic_cryptocaps;
dc->dc_htcaps = ic->ic_htcaps;
dc->dc_vhtcaps = ic->ic_vhtcaps;
ci = &dc->dc_chaninfo;
ic->ic_getradiocaps(ic, maxchans, &ci->ic_nchans, ci->ic_chans);
KASSERT(ci->ic_nchans <= maxchans,
("nchans %d maxchans %d", ci->ic_nchans, maxchans));
ieee80211_sort_channels(ci->ic_chans, ci->ic_nchans);
error = copyout(dc, ireq->i_data, IEEE80211_DEVCAPS_SPACE(dc));
IEEE80211_FREE(dc, M_TEMP);
return error;
}
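The devcaps handler sizes its allocation from the caller's buffer: one `struct ieee80211_devcaps_req` (which embeds the first channel slot) plus however many whole extra `struct ieee80211_channel` slots fit, hence the `1 +` in the `maxchans` arithmetic, capped at 2048. A sketch of that sizing logic; the two sizes are illustrative stand-ins, not the real struct sizes.

```c
#include <assert.h>
#include <stddef.h>

#define MOCK_DEVCAPS_SIZE 64	/* stand-in for sizeof(struct ieee80211_devcaps_req) */
#define MOCK_CHAN_SIZE    16	/* stand-in for sizeof(struct ieee80211_channel) */

static int
devcaps_maxchans(size_t i_len)
{
	int maxchans;

	if (i_len < MOCK_DEVCAPS_SIZE)
		return 0;	/* too small: the handler returns EINVAL */
	maxchans = 1 + (int)((i_len - MOCK_DEVCAPS_SIZE) / MOCK_CHAN_SIZE);
	if (maxchans > 2048)	/* same cap as the handler */
		maxchans = 2048;
	return maxchans;
}
```

Requiring at least one channel guarantees `ic_nchans` inside the embedded chaninfo is addressable before `ic_getradiocaps` fills it in.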
static int
ieee80211_ioctl_getstavlan(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211_node *ni;
struct ieee80211req_sta_vlan vlan;
int error;
if (ireq->i_len != sizeof(vlan))
return EINVAL;
error = copyin(ireq->i_data, &vlan, sizeof(vlan));
if (error != 0)
return error;
if (!IEEE80211_ADDR_EQ(vlan.sv_macaddr, zerobssid)) {
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap,
vlan.sv_macaddr);
if (ni == NULL)
return ENOENT;
} else
ni = ieee80211_ref_node(vap->iv_bss);
vlan.sv_vlan = ni->ni_vlan;
error = copyout(&vlan, ireq->i_data, sizeof(vlan));
ieee80211_free_node(ni);
return error;
}
/*
* Dummy ioctl get handler so the linker set is defined.
*/
static int
dummy_ioctl_get(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
return ENOSYS;
}
IEEE80211_IOCTL_GET(dummy, dummy_ioctl_get);
static int
ieee80211_ioctl_getdefault(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
ieee80211_ioctl_getfunc * const *get;
int error;
SET_FOREACH(get, ieee80211_ioctl_getset) {
error = (*get)(vap, ireq);
if (error != ENOSYS)
return error;
}
return EINVAL;
}
static int
ieee80211_ioctl_get80211(struct ieee80211vap *vap, u_long cmd,
struct ieee80211req *ireq)
{
#define MS(_v, _f) (((_v) & _f) >> _f##_S)
struct ieee80211com *ic = vap->iv_ic;
u_int kid, len;
uint8_t tmpkey[IEEE80211_KEYBUF_SIZE];
char tmpssid[IEEE80211_NWID_LEN];
int error = 0;
switch (ireq->i_type) {
case IEEE80211_IOC_SSID:
switch (vap->iv_state) {
case IEEE80211_S_INIT:
case IEEE80211_S_SCAN:
ireq->i_len = vap->iv_des_ssid[0].len;
memcpy(tmpssid, vap->iv_des_ssid[0].ssid, ireq->i_len);
break;
default:
ireq->i_len = vap->iv_bss->ni_esslen;
memcpy(tmpssid, vap->iv_bss->ni_essid, ireq->i_len);
break;
}
error = copyout(tmpssid, ireq->i_data, ireq->i_len);
break;
case IEEE80211_IOC_NUMSSIDS:
ireq->i_val = 1;
break;
case IEEE80211_IOC_WEP:
if ((vap->iv_flags & IEEE80211_F_PRIVACY) == 0)
ireq->i_val = IEEE80211_WEP_OFF;
else if (vap->iv_flags & IEEE80211_F_DROPUNENC)
ireq->i_val = IEEE80211_WEP_ON;
else
ireq->i_val = IEEE80211_WEP_MIXED;
break;
case IEEE80211_IOC_WEPKEY:
kid = (u_int) ireq->i_val;
if (kid >= IEEE80211_WEP_NKID)
return EINVAL;
len = (u_int) vap->iv_nw_keys[kid].wk_keylen;
/* NB: only root can read WEP keys */
if (priv_check(curthread, PRIV_NET80211_GETKEY) == 0) {
bcopy(vap->iv_nw_keys[kid].wk_key, tmpkey, len);
} else {
bzero(tmpkey, len);
}
ireq->i_len = len;
error = copyout(tmpkey, ireq->i_data, len);
break;
case IEEE80211_IOC_NUMWEPKEYS:
ireq->i_val = IEEE80211_WEP_NKID;
break;
case IEEE80211_IOC_WEPTXKEY:
ireq->i_val = vap->iv_def_txkey;
break;
case IEEE80211_IOC_AUTHMODE:
if (vap->iv_flags & IEEE80211_F_WPA)
ireq->i_val = IEEE80211_AUTH_WPA;
else
ireq->i_val = vap->iv_bss->ni_authmode;
break;
case IEEE80211_IOC_CHANNEL:
ireq->i_val = ieee80211_chan2ieee(ic, ic->ic_curchan);
break;
case IEEE80211_IOC_POWERSAVE:
if (vap->iv_flags & IEEE80211_F_PMGTON)
ireq->i_val = IEEE80211_POWERSAVE_ON;
else
ireq->i_val = IEEE80211_POWERSAVE_OFF;
break;
case IEEE80211_IOC_POWERSAVESLEEP:
ireq->i_val = ic->ic_lintval;
break;
case IEEE80211_IOC_RTSTHRESHOLD:
ireq->i_val = vap->iv_rtsthreshold;
break;
case IEEE80211_IOC_PROTMODE:
ireq->i_val = ic->ic_protmode;
break;
case IEEE80211_IOC_TXPOWER:
/*
* Tx power limit is the min of max regulatory
* power, any user-set limit, and the max the
* radio can do.
*
* TODO: methodize this
*/
ireq->i_val = 2*ic->ic_curchan->ic_maxregpower;
if (ireq->i_val > ic->ic_txpowlimit)
ireq->i_val = ic->ic_txpowlimit;
if (ireq->i_val > ic->ic_curchan->ic_maxpower)
ireq->i_val = ic->ic_curchan->ic_maxpower;
break;
case IEEE80211_IOC_WPA:
switch (vap->iv_flags & IEEE80211_F_WPA) {
case IEEE80211_F_WPA1:
ireq->i_val = 1;
break;
case IEEE80211_F_WPA2:
ireq->i_val = 2;
break;
case IEEE80211_F_WPA1 | IEEE80211_F_WPA2:
ireq->i_val = 3;
break;
default:
ireq->i_val = 0;
break;
}
break;
case IEEE80211_IOC_CHANLIST:
error = ieee80211_ioctl_getchanlist(vap, ireq);
break;
case IEEE80211_IOC_ROAMING:
ireq->i_val = vap->iv_roaming;
break;
case IEEE80211_IOC_PRIVACY:
ireq->i_val = (vap->iv_flags & IEEE80211_F_PRIVACY) != 0;
break;
case IEEE80211_IOC_DROPUNENCRYPTED:
ireq->i_val = (vap->iv_flags & IEEE80211_F_DROPUNENC) != 0;
break;
case IEEE80211_IOC_COUNTERMEASURES:
ireq->i_val = (vap->iv_flags & IEEE80211_F_COUNTERM) != 0;
break;
case IEEE80211_IOC_WME:
ireq->i_val = (vap->iv_flags & IEEE80211_F_WME) != 0;
break;
case IEEE80211_IOC_HIDESSID:
ireq->i_val = (vap->iv_flags & IEEE80211_F_HIDESSID) != 0;
break;
case IEEE80211_IOC_APBRIDGE:
ireq->i_val = (vap->iv_flags & IEEE80211_F_NOBRIDGE) == 0;
break;
case IEEE80211_IOC_WPAKEY:
error = ieee80211_ioctl_getkey(vap, ireq);
break;
case IEEE80211_IOC_CHANINFO:
error = ieee80211_ioctl_getchaninfo(vap, ireq);
break;
case IEEE80211_IOC_BSSID:
if (ireq->i_len != IEEE80211_ADDR_LEN)
return EINVAL;
if (vap->iv_state == IEEE80211_S_RUN || vap->iv_state == IEEE80211_S_SLEEP) {
error = copyout(vap->iv_opmode == IEEE80211_M_WDS ?
vap->iv_bss->ni_macaddr : vap->iv_bss->ni_bssid,
ireq->i_data, ireq->i_len);
} else
error = copyout(vap->iv_des_bssid, ireq->i_data,
ireq->i_len);
break;
case IEEE80211_IOC_WPAIE:
case IEEE80211_IOC_WPAIE2:
error = ieee80211_ioctl_getwpaie(vap, ireq, ireq->i_type);
break;
case IEEE80211_IOC_SCAN_RESULTS:
error = ieee80211_ioctl_getscanresults(vap, ireq);
break;
case IEEE80211_IOC_STA_STATS:
error = ieee80211_ioctl_getstastats(vap, ireq);
break;
case IEEE80211_IOC_TXPOWMAX:
ireq->i_val = vap->iv_bss->ni_txpower;
break;
case IEEE80211_IOC_STA_TXPOW:
error = ieee80211_ioctl_getstatxpow(vap, ireq);
break;
case IEEE80211_IOC_STA_INFO:
error = ieee80211_ioctl_getstainfo(vap, ireq);
break;
case IEEE80211_IOC_WME_CWMIN: /* WME: CWmin */
case IEEE80211_IOC_WME_CWMAX: /* WME: CWmax */
case IEEE80211_IOC_WME_AIFS: /* WME: AIFS */
case IEEE80211_IOC_WME_TXOPLIMIT: /* WME: txops limit */
case IEEE80211_IOC_WME_ACM: /* WME: ACM (bss only) */
case IEEE80211_IOC_WME_ACKPOLICY: /* WME: ACK policy (!bss only) */
error = ieee80211_ioctl_getwmeparam(vap, ireq);
break;
case IEEE80211_IOC_DTIM_PERIOD:
ireq->i_val = vap->iv_dtim_period;
break;
case IEEE80211_IOC_BEACON_INTERVAL:
/* NB: get from ic_bss for station mode */
ireq->i_val = vap->iv_bss->ni_intval;
break;
case IEEE80211_IOC_PUREG:
ireq->i_val = (vap->iv_flags & IEEE80211_F_PUREG) != 0;
break;
case IEEE80211_IOC_QUIET:
ireq->i_val = vap->iv_quiet;
break;
case IEEE80211_IOC_QUIET_COUNT:
ireq->i_val = vap->iv_quiet_count;
break;
case IEEE80211_IOC_QUIET_PERIOD:
ireq->i_val = vap->iv_quiet_period;
break;
case IEEE80211_IOC_QUIET_DUR:
ireq->i_val = vap->iv_quiet_duration;
break;
case IEEE80211_IOC_QUIET_OFFSET:
ireq->i_val = vap->iv_quiet_offset;
break;
case IEEE80211_IOC_BGSCAN:
ireq->i_val = (vap->iv_flags & IEEE80211_F_BGSCAN) != 0;
break;
case IEEE80211_IOC_BGSCAN_IDLE:
ireq->i_val = vap->iv_bgscanidle*hz/1000; /* ms */
break;
case IEEE80211_IOC_BGSCAN_INTERVAL:
ireq->i_val = vap->iv_bgscanintvl/hz; /* seconds */
break;
case IEEE80211_IOC_SCANVALID:
ireq->i_val = vap->iv_scanvalid/hz; /* seconds */
break;
case IEEE80211_IOC_FRAGTHRESHOLD:
ireq->i_val = vap->iv_fragthreshold;
break;
case IEEE80211_IOC_MACCMD:
error = ieee80211_ioctl_getmaccmd(vap, ireq);
break;
case IEEE80211_IOC_BURST:
ireq->i_val = (vap->iv_flags & IEEE80211_F_BURST) != 0;
break;
case IEEE80211_IOC_BMISSTHRESHOLD:
ireq->i_val = vap->iv_bmissthreshold;
break;
case IEEE80211_IOC_CURCHAN:
error = ieee80211_ioctl_getcurchan(vap, ireq);
break;
case IEEE80211_IOC_SHORTGI:
ireq->i_val = 0;
if (vap->iv_flags_ht & IEEE80211_FHT_SHORTGI20)
ireq->i_val |= IEEE80211_HTCAP_SHORTGI20;
if (vap->iv_flags_ht & IEEE80211_FHT_SHORTGI40)
ireq->i_val |= IEEE80211_HTCAP_SHORTGI40;
break;
case IEEE80211_IOC_AMPDU:
ireq->i_val = 0;
if (vap->iv_flags_ht & IEEE80211_FHT_AMPDU_TX)
ireq->i_val |= 1;
if (vap->iv_flags_ht & IEEE80211_FHT_AMPDU_RX)
ireq->i_val |= 2;
break;
case IEEE80211_IOC_AMPDU_LIMIT:
/* XXX TODO: make this a per-node thing; and leave this as global */
if (vap->iv_opmode == IEEE80211_M_HOSTAP)
ireq->i_val = vap->iv_ampdu_rxmax;
else if (vap->iv_state == IEEE80211_S_RUN || vap->iv_state == IEEE80211_S_SLEEP)
/*
* XXX TODO: this isn't completely correct, as we've
* negotiated the higher of the two.
*/
ireq->i_val = MS(vap->iv_bss->ni_htparam,
IEEE80211_HTCAP_MAXRXAMPDU);
else
ireq->i_val = vap->iv_ampdu_limit;
break;
case IEEE80211_IOC_AMPDU_DENSITY:
/* XXX TODO: make this a per-node thing; and leave this as global */
if (vap->iv_opmode == IEEE80211_M_STA &&
(vap->iv_state == IEEE80211_S_RUN || vap->iv_state == IEEE80211_S_SLEEP))
/*
* XXX TODO: this isn't completely correct, as we've
* negotiated the higher of the two.
*/
ireq->i_val = MS(vap->iv_bss->ni_htparam,
IEEE80211_HTCAP_MPDUDENSITY);
else
ireq->i_val = vap->iv_ampdu_density;
break;
case IEEE80211_IOC_AMSDU:
ireq->i_val = 0;
if (vap->iv_flags_ht & IEEE80211_FHT_AMSDU_TX)
ireq->i_val |= 1;
if (vap->iv_flags_ht & IEEE80211_FHT_AMSDU_RX)
ireq->i_val |= 2;
break;
case IEEE80211_IOC_AMSDU_LIMIT:
ireq->i_val = vap->iv_amsdu_limit; /* XXX truncation? */
break;
case IEEE80211_IOC_PUREN:
ireq->i_val = (vap->iv_flags_ht & IEEE80211_FHT_PUREN) != 0;
break;
case IEEE80211_IOC_DOTH:
ireq->i_val = (vap->iv_flags & IEEE80211_F_DOTH) != 0;
break;
case IEEE80211_IOC_REGDOMAIN:
error = ieee80211_ioctl_getregdomain(vap, ireq);
break;
case IEEE80211_IOC_ROAM:
error = ieee80211_ioctl_getroam(vap, ireq);
break;
case IEEE80211_IOC_TXPARAMS:
error = ieee80211_ioctl_gettxparams(vap, ireq);
break;
case IEEE80211_IOC_HTCOMPAT:
ireq->i_val = (vap->iv_flags_ht & IEEE80211_FHT_HTCOMPAT) != 0;
break;
case IEEE80211_IOC_DWDS:
ireq->i_val = (vap->iv_flags & IEEE80211_F_DWDS) != 0;
break;
case IEEE80211_IOC_INACTIVITY:
ireq->i_val = (vap->iv_flags_ext & IEEE80211_FEXT_INACT) != 0;
break;
case IEEE80211_IOC_APPIE:
error = ieee80211_ioctl_getappie(vap, ireq);
break;
case IEEE80211_IOC_WPS:
ireq->i_val = (vap->iv_flags_ext & IEEE80211_FEXT_WPS) != 0;
break;
case IEEE80211_IOC_TSN:
ireq->i_val = (vap->iv_flags_ext & IEEE80211_FEXT_TSN) != 0;
break;
case IEEE80211_IOC_DFS:
ireq->i_val = (vap->iv_flags_ext & IEEE80211_FEXT_DFS) != 0;
break;
case IEEE80211_IOC_DOTD:
ireq->i_val = (vap->iv_flags_ext & IEEE80211_FEXT_DOTD) != 0;
break;
case IEEE80211_IOC_DEVCAPS:
error = ieee80211_ioctl_getdevcaps(ic, ireq);
break;
case IEEE80211_IOC_HTPROTMODE:
ireq->i_val = ic->ic_htprotmode;
break;
case IEEE80211_IOC_HTCONF:
if (vap->iv_flags_ht & IEEE80211_FHT_HT) {
ireq->i_val = 1;
if (vap->iv_flags_ht & IEEE80211_FHT_USEHT40)
ireq->i_val |= 2;
} else
ireq->i_val = 0;
break;
case IEEE80211_IOC_STA_VLAN:
error = ieee80211_ioctl_getstavlan(vap, ireq);
break;
case IEEE80211_IOC_SMPS:
if (vap->iv_opmode == IEEE80211_M_STA &&
(vap->iv_state == IEEE80211_S_RUN || vap->iv_state == IEEE80211_S_SLEEP)) {
if (vap->iv_bss->ni_flags & IEEE80211_NODE_MIMO_RTS)
ireq->i_val = IEEE80211_HTCAP_SMPS_DYNAMIC;
else if (vap->iv_bss->ni_flags & IEEE80211_NODE_MIMO_PS)
ireq->i_val = IEEE80211_HTCAP_SMPS_ENA;
else
ireq->i_val = IEEE80211_HTCAP_SMPS_OFF;
} else
ireq->i_val = vap->iv_htcaps & IEEE80211_HTCAP_SMPS;
break;
case IEEE80211_IOC_RIFS:
if (vap->iv_opmode == IEEE80211_M_STA &&
(vap->iv_state == IEEE80211_S_RUN || vap->iv_state == IEEE80211_S_SLEEP))
ireq->i_val =
(vap->iv_bss->ni_flags & IEEE80211_NODE_RIFS) != 0;
else
ireq->i_val =
(vap->iv_flags_ht & IEEE80211_FHT_RIFS) != 0;
break;
case IEEE80211_IOC_STBC:
ireq->i_val = 0;
if (vap->iv_flags_ht & IEEE80211_FHT_STBC_TX)
ireq->i_val |= 1;
if (vap->iv_flags_ht & IEEE80211_FHT_STBC_RX)
ireq->i_val |= 2;
break;
case IEEE80211_IOC_LDPC:
ireq->i_val = 0;
if (vap->iv_flags_ht & IEEE80211_FHT_LDPC_TX)
ireq->i_val |= 1;
if (vap->iv_flags_ht & IEEE80211_FHT_LDPC_RX)
ireq->i_val |= 2;
break;
/* VHT */
case IEEE80211_IOC_VHTCONF:
ireq->i_val = 0;
if (vap->iv_flags_vht & IEEE80211_FVHT_VHT)
ireq->i_val |= 1;
if (vap->iv_flags_vht & IEEE80211_FVHT_USEVHT40)
ireq->i_val |= 2;
if (vap->iv_flags_vht & IEEE80211_FVHT_USEVHT80)
ireq->i_val |= 4;
if (vap->iv_flags_vht & IEEE80211_FVHT_USEVHT80P80)
ireq->i_val |= 8;
if (vap->iv_flags_vht & IEEE80211_FVHT_USEVHT160)
ireq->i_val |= 16;
break;
default:
error = ieee80211_ioctl_getdefault(vap, ireq);
break;
}
return error;
#undef MS
}
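The `MS(_v, _f)` helper defined at the top of `ieee80211_ioctl_get80211` is the classic mask-then-shift field extractor: `_f` names a mask constant and `_f##_S` its shift, so pairs like `IEEE80211_HTCAP_MAXRXAMPDU`/`_S` can be pulled out of `ni_htparam` in one expression. A self-contained demo with a made-up 2-bit field; `DEMO_MAXRXAMPDU` and its shift are hypothetical values for illustration only.

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_MAXRXAMPDU   0x000c	/* hypothetical 2-bit field at bits 2..3 */
#define DEMO_MAXRXAMPDU_S 2

/* Same shape as the MS() macro in the ioctl handler. */
#define MS(_v, _f) (((_v) & _f) >> _f##_S)

static int
extract_maxrxampdu(uint16_t htparam)
{
	return MS(htparam, DEMO_MAXRXAMPDU);
}
```

The token-pasting trick keeps mask and shift paired by name, so a field can never be extracted with a mismatched shift.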
static int
ieee80211_ioctl_setkey(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211req_key ik;
struct ieee80211_node *ni;
struct ieee80211_key *wk;
uint16_t kid;
int error, i;
if (ireq->i_len != sizeof(ik))
return EINVAL;
error = copyin(ireq->i_data, &ik, sizeof(ik));
if (error)
return error;
/* NB: cipher support is verified by ieee80211_crypt_newkey */
/* NB: this also checks ik->ik_keylen > sizeof(wk->wk_key) */
if (ik.ik_keylen > sizeof(ik.ik_keydata))
return E2BIG;
kid = ik.ik_keyix;
if (kid == IEEE80211_KEYIX_NONE) {
/* XXX unicast keys currently must be tx/rx */
if (ik.ik_flags != (IEEE80211_KEY_XMIT | IEEE80211_KEY_RECV))
return EINVAL;
if (vap->iv_opmode == IEEE80211_M_STA) {
ni = ieee80211_ref_node(vap->iv_bss);
if (!IEEE80211_ADDR_EQ(ik.ik_macaddr, ni->ni_bssid)) {
ieee80211_free_node(ni);
return EADDRNOTAVAIL;
}
} else {
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap,
ik.ik_macaddr);
if (ni == NULL)
return ENOENT;
}
wk = &ni->ni_ucastkey;
} else {
if (kid >= IEEE80211_WEP_NKID)
return EINVAL;
wk = &vap->iv_nw_keys[kid];
/*
* Global slots start off w/o any assigned key index.
* Force one here for consistency with IEEE80211_IOC_WEPKEY.
*/
if (wk->wk_keyix == IEEE80211_KEYIX_NONE)
wk->wk_keyix = kid;
ni = NULL;
}
error = 0;
ieee80211_key_update_begin(vap);
if (ieee80211_crypto_newkey(vap, ik.ik_type, ik.ik_flags, wk)) {
wk->wk_keylen = ik.ik_keylen;
/* NB: MIC presence is implied by cipher type */
if (wk->wk_keylen > IEEE80211_KEYBUF_SIZE)
wk->wk_keylen = IEEE80211_KEYBUF_SIZE;
for (i = 0; i < IEEE80211_TID_SIZE; i++)
wk->wk_keyrsc[i] = ik.ik_keyrsc;
wk->wk_keytsc = 0; /* new key, reset */
memset(wk->wk_key, 0, sizeof(wk->wk_key));
memcpy(wk->wk_key, ik.ik_keydata, ik.ik_keylen);
IEEE80211_ADDR_COPY(wk->wk_macaddr,
ni != NULL ? ni->ni_macaddr : ik.ik_macaddr);
if (!ieee80211_crypto_setkey(vap, wk))
error = EIO;
else if ((ik.ik_flags & IEEE80211_KEY_DEFAULT))
/*
* Inform the driver that this is the default
* transmit key. Now, ideally we'd just set
* a flag in the key update that would
* say "yes, we're the default key", but
* that currently isn't the way the ioctl ->
* key interface works.
*/
ieee80211_crypto_set_deftxkey(vap, kid);
} else
error = ENXIO;
ieee80211_key_update_end(vap);
if (ni != NULL)
ieee80211_free_node(ni);
return error;
}
static int
ieee80211_ioctl_delkey(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211req_del_key dk;
int kid, error;
if (ireq->i_len != sizeof(dk))
return EINVAL;
error = copyin(ireq->i_data, &dk, sizeof(dk));
if (error)
return error;
kid = dk.idk_keyix;
/* XXX uint8_t -> uint16_t */
if (dk.idk_keyix == (uint8_t) IEEE80211_KEYIX_NONE) {
struct ieee80211_node *ni;
if (vap->iv_opmode == IEEE80211_M_STA) {
ni = ieee80211_ref_node(vap->iv_bss);
if (!IEEE80211_ADDR_EQ(dk.idk_macaddr, ni->ni_bssid)) {
ieee80211_free_node(ni);
return EADDRNOTAVAIL;
}
} else {
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap,
dk.idk_macaddr);
if (ni == NULL)
return ENOENT;
}
/* XXX error return */
ieee80211_node_delucastkey(ni);
ieee80211_free_node(ni);
} else {
if (kid >= IEEE80211_WEP_NKID)
return EINVAL;
/* XXX error return */
ieee80211_crypto_delkey(vap, &vap->iv_nw_keys[kid]);
}
return 0;
}
struct mlmeop {
struct ieee80211vap *vap;
int op;
int reason;
};
static void
mlmedebug(struct ieee80211vap *vap, const uint8_t mac[IEEE80211_ADDR_LEN],
int op, int reason)
{
#ifdef IEEE80211_DEBUG
static const struct {
int mask;
const char *opstr;
} ops[] = {
{ 0, "op#0" },
{ IEEE80211_MSG_IOCTL | IEEE80211_MSG_STATE |
IEEE80211_MSG_ASSOC, "assoc" },
{ IEEE80211_MSG_IOCTL | IEEE80211_MSG_STATE |
IEEE80211_MSG_ASSOC, "disassoc" },
{ IEEE80211_MSG_IOCTL | IEEE80211_MSG_STATE |
IEEE80211_MSG_AUTH, "deauth" },
{ IEEE80211_MSG_IOCTL | IEEE80211_MSG_STATE |
IEEE80211_MSG_AUTH, "authorize" },
{ IEEE80211_MSG_IOCTL | IEEE80211_MSG_STATE |
IEEE80211_MSG_AUTH, "unauthorize" },
};
if (op == IEEE80211_MLME_AUTH) {
IEEE80211_NOTE_MAC(vap, IEEE80211_MSG_IOCTL |
IEEE80211_MSG_STATE | IEEE80211_MSG_AUTH, mac,
"station authenticate %s via MLME (reason: %d (%s))",
reason == IEEE80211_STATUS_SUCCESS ? "ACCEPT" : "REJECT",
reason, ieee80211_reason_to_string(reason));
} else if (!(IEEE80211_MLME_ASSOC <= op && op <= IEEE80211_MLME_AUTH)) {
IEEE80211_NOTE_MAC(vap, IEEE80211_MSG_ANY, mac,
"unknown MLME request %d (reason: %d (%s))", op, reason,
ieee80211_reason_to_string(reason));
} else if (reason == IEEE80211_STATUS_SUCCESS) {
IEEE80211_NOTE_MAC(vap, ops[op].mask, mac,
"station %s via MLME", ops[op].opstr);
} else {
IEEE80211_NOTE_MAC(vap, ops[op].mask, mac,
"station %s via MLME (reason: %d (%s))", ops[op].opstr,
reason, ieee80211_reason_to_string(reason));
}
#endif /* IEEE80211_DEBUG */
}
static void
domlme(void *arg, struct ieee80211_node *ni)
{
struct mlmeop *mop = arg;
struct ieee80211vap *vap = ni->ni_vap;
if (vap != mop->vap)
return;
/*
* NB: if ni_associd is zero then the node is already cleaned
* up and we don't need to do this (we're safely holding a
* reference but should otherwise not modify its state).
*/
if (ni->ni_associd == 0)
return;
mlmedebug(vap, ni->ni_macaddr, mop->op, mop->reason);
if (mop->op == IEEE80211_MLME_DEAUTH) {
IEEE80211_SEND_MGMT(ni, IEEE80211_FC0_SUBTYPE_DEAUTH,
mop->reason);
} else {
IEEE80211_SEND_MGMT(ni, IEEE80211_FC0_SUBTYPE_DISASSOC,
mop->reason);
}
ieee80211_node_leave(ni);
}
static int
setmlme_dropsta(struct ieee80211vap *vap,
const uint8_t mac[IEEE80211_ADDR_LEN], struct mlmeop *mlmeop)
{
struct ieee80211_node_table *nt = &vap->iv_ic->ic_sta;
struct ieee80211_node *ni;
int error = 0;
/* NB: the broadcast address means do 'em all */
if (!IEEE80211_ADDR_EQ(mac, vap->iv_ifp->if_broadcastaddr)) {
IEEE80211_NODE_LOCK(nt);
ni = ieee80211_find_node_locked(nt, mac);
IEEE80211_NODE_UNLOCK(nt);
/*
* Don't do the node update inside the node
* table lock. This unfortunately causes LORs
* with drivers and their TX paths.
*/
if (ni != NULL) {
domlme(mlmeop, ni);
ieee80211_free_node(ni);
} else
error = ENOENT;
} else {
ieee80211_iterate_nodes(nt, domlme, mlmeop);
}
return error;
}
static int
setmlme_common(struct ieee80211vap *vap, int op,
const uint8_t mac[IEEE80211_ADDR_LEN], int reason)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_node_table *nt = &ic->ic_sta;
struct ieee80211_node *ni;
struct mlmeop mlmeop;
int error;
error = 0;
switch (op) {
case IEEE80211_MLME_DISASSOC:
case IEEE80211_MLME_DEAUTH:
switch (vap->iv_opmode) {
case IEEE80211_M_STA:
mlmedebug(vap, vap->iv_bss->ni_macaddr, op, reason);
/* XXX not quite right */
ieee80211_new_state(vap, IEEE80211_S_INIT, reason);
break;
case IEEE80211_M_HOSTAP:
mlmeop.vap = vap;
mlmeop.op = op;
mlmeop.reason = reason;
error = setmlme_dropsta(vap, mac, &mlmeop);
break;
case IEEE80211_M_WDS:
/* XXX user app should send raw frame? */
if (op != IEEE80211_MLME_DEAUTH) {
error = EINVAL;
break;
}
#if 0
/* XXX accept any address, simplifies user code */
if (!IEEE80211_ADDR_EQ(mac, vap->iv_bss->ni_macaddr)) {
error = EINVAL;
break;
}
#endif
mlmedebug(vap, vap->iv_bss->ni_macaddr, op, reason);
ni = ieee80211_ref_node(vap->iv_bss);
IEEE80211_SEND_MGMT(ni,
IEEE80211_FC0_SUBTYPE_DEAUTH, reason);
ieee80211_free_node(ni);
break;
case IEEE80211_M_MBSS:
IEEE80211_NODE_LOCK(nt);
ni = ieee80211_find_node_locked(nt, mac);
/*
* Don't do the node update inside the node
* table lock. This unfortunately causes LORs
* with drivers and their TX paths.
*/
IEEE80211_NODE_UNLOCK(nt);
if (ni != NULL) {
ieee80211_node_leave(ni);
ieee80211_free_node(ni);
} else {
error = ENOENT;
}
break;
default:
error = EINVAL;
break;
}
break;
case IEEE80211_MLME_AUTHORIZE:
case IEEE80211_MLME_UNAUTHORIZE:
if (vap->iv_opmode != IEEE80211_M_HOSTAP &&
vap->iv_opmode != IEEE80211_M_WDS) {
error = EINVAL;
break;
}
IEEE80211_NODE_LOCK(nt);
ni = ieee80211_find_vap_node_locked(nt, vap, mac);
/*
* Don't do the node update inside the node
* table lock. This unfortunately causes LORs
* with drivers and their TX paths.
*/
IEEE80211_NODE_UNLOCK(nt);
if (ni != NULL) {
mlmedebug(vap, mac, op, reason);
if (op == IEEE80211_MLME_AUTHORIZE)
ieee80211_node_authorize(ni);
else
ieee80211_node_unauthorize(ni);
ieee80211_free_node(ni);
} else
error = ENOENT;
break;
case IEEE80211_MLME_AUTH:
if (vap->iv_opmode != IEEE80211_M_HOSTAP) {
error = EINVAL;
break;
}
IEEE80211_NODE_LOCK(nt);
ni = ieee80211_find_vap_node_locked(nt, vap, mac);
/*
* Don't do the node update inside the node
* table lock. This unfortunately causes LORs
* with drivers and their TX paths.
*/
IEEE80211_NODE_UNLOCK(nt);
if (ni != NULL) {
mlmedebug(vap, mac, op, reason);
if (reason == IEEE80211_STATUS_SUCCESS) {
IEEE80211_SEND_MGMT(ni,
IEEE80211_FC0_SUBTYPE_AUTH, 2);
/*
* For shared key auth, just continue the
* exchange. Otherwise when 802.1x is not in
* use mark the port authorized at this point
* so traffic can flow.
*/
if (ni->ni_authmode != IEEE80211_AUTH_8021X &&
ni->ni_challenge == NULL)
ieee80211_node_authorize(ni);
} else {
vap->iv_stats.is_rx_acl++;
ieee80211_send_error(ni, ni->ni_macaddr,
IEEE80211_FC0_SUBTYPE_AUTH, 2|(reason<<16));
ieee80211_node_leave(ni);
}
ieee80211_free_node(ni);
} else
error = ENOENT;
break;
default:
error = EINVAL;
break;
}
return error;
}
struct scanlookup {
const uint8_t *mac;
int esslen;
const uint8_t *essid;
const struct ieee80211_scan_entry *se;
};
/*
* Match mac address and any ssid.
*/
static void
mlmelookup(void *arg, const struct ieee80211_scan_entry *se)
{
struct scanlookup *look = arg;
if (!IEEE80211_ADDR_EQ(look->mac, se->se_macaddr))
return;
if (look->esslen != 0) {
if (se->se_ssid[1] != look->esslen)
return;
if (memcmp(look->essid, se->se_ssid+2, look->esslen))
return;
}
look->se = se;
}
static int
setmlme_assoc_sta(struct ieee80211vap *vap,
const uint8_t mac[IEEE80211_ADDR_LEN], int ssid_len,
const uint8_t ssid[IEEE80211_NWID_LEN])
{
struct scanlookup lookup;
KASSERT(vap->iv_opmode == IEEE80211_M_STA,
("expected opmode STA not %s",
ieee80211_opmode_name[vap->iv_opmode]));
/* NB: this is racy if roaming is !manual */
lookup.se = NULL;
lookup.mac = mac;
lookup.esslen = ssid_len;
lookup.essid = ssid;
ieee80211_scan_iterate(vap, mlmelookup, &lookup);
if (lookup.se == NULL)
return ENOENT;
mlmedebug(vap, mac, IEEE80211_MLME_ASSOC, 0);
if (!ieee80211_sta_join(vap, lookup.se->se_chan, lookup.se))
return EIO; /* XXX unique but could be better */
return 0;
}
static int
setmlme_assoc_adhoc(struct ieee80211vap *vap,
const uint8_t mac[IEEE80211_ADDR_LEN], int ssid_len,
const uint8_t ssid[IEEE80211_NWID_LEN])
{
struct ieee80211_scan_req *sr;
int error;
KASSERT(vap->iv_opmode == IEEE80211_M_IBSS ||
vap->iv_opmode == IEEE80211_M_AHDEMO,
("expected opmode IBSS or AHDEMO not %s",
ieee80211_opmode_name[vap->iv_opmode]));
if (ssid_len == 0)
return EINVAL;
sr = IEEE80211_MALLOC(sizeof(*sr), M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (sr == NULL)
return ENOMEM;
/* NB: IEEE80211_IOC_SSID call missing for ap_scan=2. */
memset(vap->iv_des_ssid[0].ssid, 0, IEEE80211_NWID_LEN);
vap->iv_des_ssid[0].len = ssid_len;
memcpy(vap->iv_des_ssid[0].ssid, ssid, ssid_len);
vap->iv_des_nssid = 1;
sr->sr_flags = IEEE80211_IOC_SCAN_ACTIVE | IEEE80211_IOC_SCAN_ONCE;
sr->sr_duration = IEEE80211_IOC_SCAN_FOREVER;
memcpy(sr->sr_ssid[0].ssid, ssid, ssid_len);
sr->sr_ssid[0].len = ssid_len;
sr->sr_nssid = 1;
error = ieee80211_scanreq(vap, sr);
IEEE80211_FREE(sr, M_TEMP);
return error;
}
static int
ieee80211_ioctl_setmlme(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211req_mlme mlme;
int error;
if (ireq->i_len != sizeof(mlme))
return EINVAL;
error = copyin(ireq->i_data, &mlme, sizeof(mlme));
if (error)
return error;
if (vap->iv_opmode == IEEE80211_M_STA &&
mlme.im_op == IEEE80211_MLME_ASSOC)
return setmlme_assoc_sta(vap, mlme.im_macaddr,
vap->iv_des_ssid[0].len, vap->iv_des_ssid[0].ssid);
else if ((vap->iv_opmode == IEEE80211_M_IBSS ||
vap->iv_opmode == IEEE80211_M_AHDEMO) &&
mlme.im_op == IEEE80211_MLME_ASSOC)
return setmlme_assoc_adhoc(vap, mlme.im_macaddr,
mlme.im_ssid_len, mlme.im_ssid);
else
return setmlme_common(vap, mlme.im_op,
mlme.im_macaddr, mlme.im_reason);
}
static int
ieee80211_ioctl_macmac(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
uint8_t mac[IEEE80211_ADDR_LEN];
const struct ieee80211_aclator *acl = vap->iv_acl;
int error;
if (ireq->i_len != sizeof(mac))
return EINVAL;
error = copyin(ireq->i_data, mac, ireq->i_len);
if (error)
return error;
if (acl == NULL) {
acl = ieee80211_aclator_get("mac");
if (acl == NULL || !acl->iac_attach(vap))
return EINVAL;
vap->iv_acl = acl;
}
if (ireq->i_type == IEEE80211_IOC_ADDMAC)
acl->iac_add(vap, mac);
else
acl->iac_remove(vap, mac);
return 0;
}
static int
ieee80211_ioctl_setmaccmd(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
const struct ieee80211_aclator *acl = vap->iv_acl;
switch (ireq->i_val) {
case IEEE80211_MACCMD_POLICY_OPEN:
case IEEE80211_MACCMD_POLICY_ALLOW:
case IEEE80211_MACCMD_POLICY_DENY:
case IEEE80211_MACCMD_POLICY_RADIUS:
if (acl == NULL) {
acl = ieee80211_aclator_get("mac");
if (acl == NULL || !acl->iac_attach(vap))
return EINVAL;
vap->iv_acl = acl;
}
acl->iac_setpolicy(vap, ireq->i_val);
break;
case IEEE80211_MACCMD_FLUSH:
if (acl != NULL)
acl->iac_flush(vap);
/* NB: silently ignore when not in use */
break;
case IEEE80211_MACCMD_DETACH:
if (acl != NULL) {
vap->iv_acl = NULL;
acl->iac_detach(vap);
}
break;
default:
if (acl == NULL)
return EINVAL;
else
return acl->iac_setioctl(vap, ireq);
}
return 0;
}
static int
ieee80211_ioctl_setchanlist(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
uint8_t *chanlist, *list;
int i, nchan, maxchan, error;
if (ireq->i_len > sizeof(ic->ic_chan_active))
ireq->i_len = sizeof(ic->ic_chan_active);
list = IEEE80211_MALLOC(ireq->i_len + IEEE80211_CHAN_BYTES, M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (list == NULL)
return ENOMEM;
error = copyin(ireq->i_data, list, ireq->i_len);
if (error) {
IEEE80211_FREE(list, M_TEMP);
return error;
}
nchan = 0;
chanlist = list + ireq->i_len; /* NB: zero'd already */
maxchan = ireq->i_len * NBBY;
for (i = 0; i < ic->ic_nchans; i++) {
const struct ieee80211_channel *c = &ic->ic_channels[i];
/*
* Calculate the intersection of the user list and the
* available channels so users can do things like specify
* 1-255 to get all available channels.
*/
if (c->ic_ieee < maxchan && isset(list, c->ic_ieee)) {
setbit(chanlist, c->ic_ieee);
nchan++;
}
}
if (nchan == 0) {
IEEE80211_FREE(list, M_TEMP);
return EINVAL;
}
if (ic->ic_bsschan != IEEE80211_CHAN_ANYC && /* XXX */
isclr(chanlist, ic->ic_bsschan->ic_ieee))
ic->ic_bsschan = IEEE80211_CHAN_ANYC;
memcpy(ic->ic_chan_active, chanlist, IEEE80211_CHAN_BYTES);
ieee80211_scan_flush(vap);
IEEE80211_FREE(list, M_TEMP);
return ENETRESET;
}
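The channel-list intersection above relies on the classic BSD bit-string macros, which is why a user can pass an over-broad bitmap like "channels 1-255" and get back only what the hardware supports. A standalone sketch of that intersection, with `setbit`/`isset` re-derived locally so it compiles outside the kernel (in the kernel they come from `<sys/param.h>`):

```c
#include <assert.h>
#include <string.h>

#define NBBY 8
#define setbit(a, i) (((unsigned char *)(a))[(i) / NBBY] |= 1 << ((i) % NBBY))
#define isset(a, i)  (((unsigned char *)(a))[(i) / NBBY] & (1 << ((i) % NBBY)))

/*
 * Intersect a user-supplied channel bitmap ("list", maxchan bits) with
 * an array of available channel numbers, mirroring the loop in
 * ieee80211_ioctl_setchanlist(); fills "out" and returns the count.
 */
static int
intersect_chanlist(const unsigned char *list, int maxchan,
    const int *avail, int navail, unsigned char *out)
{
	int i, nchan = 0;

	for (i = 0; i < navail; i++) {
		if (avail[i] < maxchan && isset(list, avail[i])) {
			setbit(out, avail[i]);
			nchan++;
		}
	}
	return nchan;
}

/* Demo: user asks for channels 1..11; only 1, 6, 11, 36, 40 exist. */
static int
demo_nchan(void)
{
	unsigned char list[32], out[32];
	static const int avail[] = { 1, 6, 11, 36, 40 };
	int c;

	memset(list, 0, sizeof(list));
	memset(out, 0, sizeof(out));
	for (c = 1; c <= 11; c++)
		setbit(list, c);
	return intersect_chanlist(list, sizeof(list) * NBBY, avail, 5, out);
}
```

As in the handler, channels outside the user bitmap (36 and 40 here) simply drop out of the intersection rather than causing an error.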
static int
ieee80211_ioctl_setstastats(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211_node *ni;
uint8_t macaddr[IEEE80211_ADDR_LEN];
int error;
/*
* NB: we could copyin ieee80211req_sta_stats so apps
* could make selective changes but that's overkill;
* just clear all stats for now.
*/
if (ireq->i_len < IEEE80211_ADDR_LEN)
return EINVAL;
error = copyin(ireq->i_data, macaddr, IEEE80211_ADDR_LEN);
if (error != 0)
return error;
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap, macaddr);
if (ni == NULL)
return ENOENT;
/* XXX require ni_vap == vap? */
memset(&ni->ni_stats, 0, sizeof(ni->ni_stats));
ieee80211_free_node(ni);
return 0;
}
static int
ieee80211_ioctl_setstatxpow(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211_node *ni;
struct ieee80211req_sta_txpow txpow;
int error;
if (ireq->i_len != sizeof(txpow))
return EINVAL;
error = copyin(ireq->i_data, &txpow, sizeof(txpow));
if (error != 0)
return error;
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap, txpow.it_macaddr);
if (ni == NULL)
return ENOENT;
ni->ni_txpower = txpow.it_txpow;
ieee80211_free_node(ni);
return error;
}
static int
ieee80211_ioctl_setwmeparam(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_wme_state *wme = &ic->ic_wme;
struct wmeParams *wmep, *chanp;
int isbss, ac, aggrmode;
if ((ic->ic_caps & IEEE80211_C_WME) == 0)
return EOPNOTSUPP;
isbss = (ireq->i_len & IEEE80211_WMEPARAM_BSS);
ac = (ireq->i_len & IEEE80211_WMEPARAM_VAL);
aggrmode = (wme->wme_flags & WME_F_AGGRMODE);
if (ac >= WME_NUM_AC)
ac = WME_AC_BE;
if (isbss) {
chanp = &wme->wme_bssChanParams.cap_wmeParams[ac];
wmep = &wme->wme_wmeBssChanParams.cap_wmeParams[ac];
} else {
chanp = &wme->wme_chanParams.cap_wmeParams[ac];
wmep = &wme->wme_wmeChanParams.cap_wmeParams[ac];
}
switch (ireq->i_type) {
case IEEE80211_IOC_WME_CWMIN: /* WME: CWmin */
wmep->wmep_logcwmin = ireq->i_val;
if (!isbss || !aggrmode)
chanp->wmep_logcwmin = ireq->i_val;
break;
case IEEE80211_IOC_WME_CWMAX: /* WME: CWmax */
wmep->wmep_logcwmax = ireq->i_val;
if (!isbss || !aggrmode)
chanp->wmep_logcwmax = ireq->i_val;
break;
case IEEE80211_IOC_WME_AIFS: /* WME: AIFS */
wmep->wmep_aifsn = ireq->i_val;
if (!isbss || !aggrmode)
chanp->wmep_aifsn = ireq->i_val;
break;
case IEEE80211_IOC_WME_TXOPLIMIT: /* WME: txops limit */
wmep->wmep_txopLimit = ireq->i_val;
if (!isbss || !aggrmode)
chanp->wmep_txopLimit = ireq->i_val;
break;
case IEEE80211_IOC_WME_ACM: /* WME: ACM (bss only) */
wmep->wmep_acm = ireq->i_val;
if (!aggrmode)
chanp->wmep_acm = ireq->i_val;
break;
case IEEE80211_IOC_WME_ACKPOLICY: /* WME: ACK policy (!bss only)*/
wmep->wmep_noackPolicy = chanp->wmep_noackPolicy =
(ireq->i_val) == 0;
break;
}
ieee80211_wme_updateparams(vap);
return 0;
}
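Note how the WME set handler overloads `ireq->i_len`: one bit selects BSS versus channel parameters and the low bits carry the access category. A minimal sketch of that packing; the mask values (0x8000 for `IEEE80211_WMEPARAM_BSS`, 0x7fff for `IEEE80211_WMEPARAM_VAL`) are assumed from the net80211 headers rather than restated in this file:

```c
#include <assert.h>

#define WMEPARAM_BSS 0x8000	/* assumed IEEE80211_WMEPARAM_BSS */
#define WMEPARAM_VAL 0x7fff	/* assumed IEEE80211_WMEPARAM_VAL */
#define NUM_AC 4		/* WME_NUM_AC: BE, BK, VI, VO */

/* Pack the BSS flag and access category the way callers build i_len. */
static int
wme_encode(int isbss, int ac)
{
	return (isbss ? WMEPARAM_BSS : 0) | (ac & WMEPARAM_VAL);
}

/* Recover the access category, clamping to BE as the handler does. */
static int
wme_decode_ac(int i_len)
{
	int ac = i_len & WMEPARAM_VAL;

	return (ac >= NUM_AC) ? 0 /* WME_AC_BE */ : ac;
}

static int
wme_decode_isbss(int i_len)
{
	return (i_len & WMEPARAM_BSS) != 0;
}
```

Out-of-range categories are silently clamped rather than rejected, matching the `if (ac >= WME_NUM_AC) ac = WME_AC_BE;` fallback above.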
static int
find11gchannel(struct ieee80211com *ic, int start, int freq)
{
const struct ieee80211_channel *c;
int i;
for (i = start+1; i < ic->ic_nchans; i++) {
c = &ic->ic_channels[i];
if (c->ic_freq == freq && IEEE80211_IS_CHAN_ANYG(c))
return 1;
}
/* NB: should not be needed but in case things are mis-sorted */
for (i = 0; i < start; i++) {
c = &ic->ic_channels[i];
if (c->ic_freq == freq && IEEE80211_IS_CHAN_ANYG(c))
return 1;
}
return 0;
}
static struct ieee80211_channel *
findchannel(struct ieee80211com *ic, int ieee, int mode)
{
static const u_int chanflags[IEEE80211_MODE_MAX] = {
[IEEE80211_MODE_AUTO] = 0,
[IEEE80211_MODE_11A] = IEEE80211_CHAN_A,
[IEEE80211_MODE_11B] = IEEE80211_CHAN_B,
[IEEE80211_MODE_11G] = IEEE80211_CHAN_G,
[IEEE80211_MODE_FH] = IEEE80211_CHAN_FHSS,
[IEEE80211_MODE_TURBO_A] = IEEE80211_CHAN_108A,
[IEEE80211_MODE_TURBO_G] = IEEE80211_CHAN_108G,
[IEEE80211_MODE_STURBO_A] = IEEE80211_CHAN_STURBO,
[IEEE80211_MODE_HALF] = IEEE80211_CHAN_HALF,
[IEEE80211_MODE_QUARTER] = IEEE80211_CHAN_QUARTER,
/* NB: handled specially below */
[IEEE80211_MODE_11NA] = IEEE80211_CHAN_A,
[IEEE80211_MODE_11NG] = IEEE80211_CHAN_G,
[IEEE80211_MODE_VHT_5GHZ] = IEEE80211_CHAN_A,
[IEEE80211_MODE_VHT_2GHZ] = IEEE80211_CHAN_G,
};
u_int modeflags;
int i;
modeflags = chanflags[mode];
for (i = 0; i < ic->ic_nchans; i++) {
struct ieee80211_channel *c = &ic->ic_channels[i];
if (c->ic_ieee != ieee)
continue;
if (mode == IEEE80211_MODE_AUTO) {
/* ignore turbo channels for autoselect */
if (IEEE80211_IS_CHAN_TURBO(c))
continue;
/*
* XXX special-case 11b/g channels so we
* always select the g channel if both
* are present.
* XXX prefer HT to non-HT?
*/
if (!IEEE80211_IS_CHAN_B(c) ||
!find11gchannel(ic, i, c->ic_freq))
return c;
} else {
/* must check VHT specifically */
if ((mode == IEEE80211_MODE_VHT_5GHZ ||
mode == IEEE80211_MODE_VHT_2GHZ) &&
!IEEE80211_IS_CHAN_VHT(c))
continue;
/*
* Must check HT specially - only match on HT,
* not HT+VHT channels
*/
if ((mode == IEEE80211_MODE_11NA ||
mode == IEEE80211_MODE_11NG) &&
!IEEE80211_IS_CHAN_HT(c))
continue;
if ((mode == IEEE80211_MODE_11NA ||
mode == IEEE80211_MODE_11NG) &&
IEEE80211_IS_CHAN_VHT(c))
continue;
/* Check that the modeflags above match */
if ((c->ic_flags & modeflags) == modeflags)
return c;
}
}
return NULL;
}
/*
* Check the specified channel against any desired mode (aka netband).
* This is only used (presently) when operating in hostap mode
* to enforce consistency.
*/
static int
check_mode_consistency(const struct ieee80211_channel *c, int mode)
{
KASSERT(c != IEEE80211_CHAN_ANYC, ("oops, no channel"));
switch (mode) {
case IEEE80211_MODE_11B:
return (IEEE80211_IS_CHAN_B(c));
case IEEE80211_MODE_11G:
return (IEEE80211_IS_CHAN_ANYG(c) && !IEEE80211_IS_CHAN_HT(c));
case IEEE80211_MODE_11A:
return (IEEE80211_IS_CHAN_A(c) && !IEEE80211_IS_CHAN_HT(c));
case IEEE80211_MODE_STURBO_A:
return (IEEE80211_IS_CHAN_STURBO(c));
case IEEE80211_MODE_11NA:
return (IEEE80211_IS_CHAN_HTA(c));
case IEEE80211_MODE_11NG:
return (IEEE80211_IS_CHAN_HTG(c));
}
return 1;
}
/*
* Common code to set the current channel. If the device
* is up and running this may result in an immediate channel
* change or a kick of the state machine.
*/
static int
setcurchan(struct ieee80211vap *vap, struct ieee80211_channel *c)
{
struct ieee80211com *ic = vap->iv_ic;
int error;
if (c != IEEE80211_CHAN_ANYC) {
if (IEEE80211_IS_CHAN_RADAR(c))
return EBUSY; /* XXX better code? */
if (vap->iv_opmode == IEEE80211_M_HOSTAP) {
if (IEEE80211_IS_CHAN_NOHOSTAP(c))
return EINVAL;
if (!check_mode_consistency(c, vap->iv_des_mode))
return EINVAL;
} else if (vap->iv_opmode == IEEE80211_M_IBSS) {
if (IEEE80211_IS_CHAN_NOADHOC(c))
return EINVAL;
}
if ((vap->iv_state == IEEE80211_S_RUN || vap->iv_state == IEEE80211_S_SLEEP) &&
vap->iv_bss->ni_chan == c)
return 0; /* NB: nothing to do */
}
vap->iv_des_chan = c;
error = 0;
if (vap->iv_opmode == IEEE80211_M_MONITOR &&
vap->iv_des_chan != IEEE80211_CHAN_ANYC) {
/*
* Monitor mode can switch directly.
*/
if (IFNET_IS_UP_RUNNING(vap->iv_ifp)) {
/* XXX need state machine for other vap's to follow */
ieee80211_setcurchan(ic, vap->iv_des_chan);
vap->iv_bss->ni_chan = ic->ic_curchan;
} else {
ic->ic_curchan = vap->iv_des_chan;
ic->ic_rt = ieee80211_get_ratetable(ic->ic_curchan);
}
} else {
/*
* Need to go through the state machine in case we
* need to reassociate or the like. The state machine
* will pick up the desired channel and avoid scanning.
*/
if (IS_UP_AUTO(vap))
ieee80211_new_state(vap, IEEE80211_S_SCAN, 0);
else if (vap->iv_des_chan != IEEE80211_CHAN_ANYC) {
/*
* When not up+running and a real channel has
* been specified fix the current channel so
* there is immediate feedback; e.g. via ifconfig.
*/
ic->ic_curchan = vap->iv_des_chan;
ic->ic_rt = ieee80211_get_ratetable(ic->ic_curchan);
}
}
return error;
}
/*
* Old api for setting the current channel; this is
* deprecated because channel numbers are ambiguous.
*/
static int
ieee80211_ioctl_setchannel(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_channel *c;
/* XXX 0xffff overflows 16-bit signed */
if (ireq->i_val == 0 ||
ireq->i_val == (int16_t) IEEE80211_CHAN_ANY) {
c = IEEE80211_CHAN_ANYC;
} else {
struct ieee80211_channel *c2;
c = findchannel(ic, ireq->i_val, vap->iv_des_mode);
if (c == NULL) {
c = findchannel(ic, ireq->i_val,
IEEE80211_MODE_AUTO);
if (c == NULL)
return EINVAL;
}
/*
* Fine tune channel selection based on desired mode:
* if 11b is requested, find the 11b version of any
* 11g channel returned,
* if static turbo, find the turbo version of any
* 11a channel returned,
* if 11na is requested, find the ht version of any
* 11a channel returned,
* if 11ng is requested, find the ht version of any
* 11g channel returned,
* if 11ac is requested, find the 11ac version
* of any 11a/11na channel returned,
* (TBD) 11acg (2GHz VHT)
* otherwise we should be ok with what we've got.
*/
switch (vap->iv_des_mode) {
case IEEE80211_MODE_11B:
if (IEEE80211_IS_CHAN_ANYG(c)) {
c2 = findchannel(ic, ireq->i_val,
IEEE80211_MODE_11B);
/* NB: should not happen, =>'s 11g w/o 11b */
if (c2 != NULL)
c = c2;
}
break;
case IEEE80211_MODE_TURBO_A:
if (IEEE80211_IS_CHAN_A(c)) {
c2 = findchannel(ic, ireq->i_val,
IEEE80211_MODE_TURBO_A);
if (c2 != NULL)
c = c2;
}
break;
case IEEE80211_MODE_11NA:
if (IEEE80211_IS_CHAN_A(c)) {
c2 = findchannel(ic, ireq->i_val,
IEEE80211_MODE_11NA);
if (c2 != NULL)
c = c2;
}
break;
case IEEE80211_MODE_11NG:
if (IEEE80211_IS_CHAN_ANYG(c)) {
c2 = findchannel(ic, ireq->i_val,
IEEE80211_MODE_11NG);
if (c2 != NULL)
c = c2;
}
break;
case IEEE80211_MODE_VHT_2GHZ:
printf("%s: TBD\n", __func__);
break;
case IEEE80211_MODE_VHT_5GHZ:
if (IEEE80211_IS_CHAN_A(c)) {
c2 = findchannel(ic, ireq->i_val,
IEEE80211_MODE_VHT_5GHZ);
if (c2 != NULL)
c = c2;
}
break;
default: /* NB: no static turboG */
break;
}
}
return setcurchan(vap, c);
}
/*
* New/current api for setting the current channel; a complete
* channel description is provided so there is no ambiguity in
* identifying the channel.
*/
static int
ieee80211_ioctl_setcurchan(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_channel chan, *c;
int error;
if (ireq->i_len != sizeof(chan))
return EINVAL;
error = copyin(ireq->i_data, &chan, sizeof(chan));
if (error != 0)
return error;
/* XXX 0xffff overflows 16-bit signed */
if (chan.ic_freq == 0 || chan.ic_freq == IEEE80211_CHAN_ANY) {
c = IEEE80211_CHAN_ANYC;
} else {
c = ieee80211_find_channel(ic, chan.ic_freq, chan.ic_flags);
if (c == NULL)
return EINVAL;
}
return setcurchan(vap, c);
}
static int
ieee80211_ioctl_setregdomain(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
struct ieee80211_regdomain_req *reg;
int nchans, error;
nchans = 1 + ((ireq->i_len - sizeof(struct ieee80211_regdomain_req)) /
sizeof(struct ieee80211_channel));
if (!(1 <= nchans && nchans <= IEEE80211_CHAN_MAX)) {
IEEE80211_DPRINTF(vap, IEEE80211_MSG_IOCTL,
"%s: bad # chans, i_len %d nchans %d\n", __func__,
ireq->i_len, nchans);
return EINVAL;
}
reg = (struct ieee80211_regdomain_req *)
IEEE80211_MALLOC(IEEE80211_REGDOMAIN_SIZE(nchans), M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (reg == NULL) {
IEEE80211_DPRINTF(vap, IEEE80211_MSG_IOCTL,
"%s: no memory, nchans %d\n", __func__, nchans);
return ENOMEM;
}
error = copyin(ireq->i_data, reg, IEEE80211_REGDOMAIN_SIZE(nchans));
if (error == 0) {
/* NB: validate inline channel count against storage size */
if (reg->chaninfo.ic_nchans != nchans) {
IEEE80211_DPRINTF(vap, IEEE80211_MSG_IOCTL,
"%s: chan cnt mismatch, %d != %d\n", __func__,
reg->chaninfo.ic_nchans, nchans);
error = EINVAL;
} else
error = ieee80211_setregdomain(vap, reg);
}
IEEE80211_FREE(reg, M_TEMP);
return (error == 0 ? ENETRESET : error);
}
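The regdomain request is variable-length: `struct ieee80211_regdomain_req` embeds one `ieee80211_channel` and callers append the remaining nchans-1, so the handler recovers the count from `ireq->i_len` before allocating. A sketch of that arithmetic with the struct sizes passed in as parameters (the real handler uses `sizeof` on the kernel structs):

```c
#include <assert.h>

/* mirrors: nchans = 1 + (i_len - sizeof(req)) / sizeof(chan) */
static int
nchans_from_len(int i_len, int req_size, int chan_size)
{
	return 1 + (i_len - req_size) / chan_size;
}
```

A request exactly `sizeof(req)` bytes long therefore describes one channel, and each additional `sizeof(chan)` bytes adds one more; the handler then cross-checks the result against the inline `chaninfo.ic_nchans` count after the copyin.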
static int
-ieee80211_ioctl_setroam(struct ieee80211vap *vap,
- const struct ieee80211req *ireq)
-{
- if (ireq->i_len != sizeof(vap->iv_roamparms))
- return EINVAL;
- /* XXX validate params */
- /* XXX? ENETRESET to push to device? */
- return copyin(ireq->i_data, vap->iv_roamparms,
- sizeof(vap->iv_roamparms));
-}
-
-static int
checkrate(const struct ieee80211_rateset *rs, int rate)
{
int i;
if (rate == IEEE80211_FIXED_RATE_NONE)
return 1;
for (i = 0; i < rs->rs_nrates; i++)
if ((rs->rs_rates[i] & IEEE80211_RATE_VAL) == rate)
return 1;
return 0;
}
static int
checkmcs(const struct ieee80211_htrateset *rs, int mcs)
{
int rate_val = IEEE80211_RV(mcs);
int i;
if (mcs == IEEE80211_FIXED_RATE_NONE)
return 1;
if ((mcs & IEEE80211_RATE_MCS) == 0) /* MCS rates always have 0x80 set */
return 0;
for (i = 0; i < rs->rs_nrates; i++)
if (IEEE80211_RV(rs->rs_rates[i]) == rate_val)
return 1;
return 0;
+}
+
+static int
+ieee80211_ioctl_setroam(struct ieee80211vap *vap,
+ const struct ieee80211req *ireq)
+{
+ struct ieee80211com *ic = vap->iv_ic;
+ struct ieee80211_roamparams_req *parms;
+ struct ieee80211_roamparam *src, *dst;
+ const struct ieee80211_htrateset *rs_ht;
+ const struct ieee80211_rateset *rs;
+ int changed, error, mode, is11n, nmodes;
+
+ if (ireq->i_len != sizeof(vap->iv_roamparms))
+ return EINVAL;
+
+ parms = IEEE80211_MALLOC(sizeof(*parms), M_TEMP,
+ IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
+ if (parms == NULL)
+ return ENOMEM;
+
+ error = copyin(ireq->i_data, parms, ireq->i_len);
+ if (error != 0)
+ goto fail;
+
+ changed = 0;
+ nmodes = IEEE80211_MODE_MAX;
+
+ /* validate parameters and check if anything changed */
+ for (mode = IEEE80211_MODE_11A; mode < nmodes; mode++) {
+ if (isclr(ic->ic_modecaps, mode))
+ continue;
+ src = &parms->params[mode];
+ dst = &vap->iv_roamparms[mode];
+ rs = &ic->ic_sup_rates[mode]; /* NB: 11n maps to legacy */
+ rs_ht = &ic->ic_sup_htrates;
+ is11n = (mode == IEEE80211_MODE_11NA ||
+ mode == IEEE80211_MODE_11NG);
+ /* XXX TODO: 11ac */
+ if (src->rate != dst->rate) {
+ if (!checkrate(rs, src->rate) &&
+ (!is11n || !checkmcs(rs_ht, src->rate))) {
+ error = EINVAL;
+ goto fail;
+ }
+ changed++;
+ }
+ if (src->rssi != dst->rssi)
+ changed++;
+ }
+ if (changed) {
+ /*
+ * Copy new parameters in place and notify the
+ * driver so it can push state to the device.
+ */
+ /* XXX locking? */
+ for (mode = IEEE80211_MODE_11A; mode < nmodes; mode++) {
+ if (isset(ic->ic_modecaps, mode))
+ vap->iv_roamparms[mode] = parms->params[mode];
+ }
+
+ if (vap->iv_roaming == IEEE80211_ROAMING_DEVICE)
+ error = ERESTART;
+ }
+
+fail: IEEE80211_FREE(parms, M_TEMP);
+ return error;
}
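The `checkrate()`/`checkmcs()` validation used by the roam and txparams handlers compares legacy rates on the low 7 bits and distinguishes MCS entries by the 0x80 flag. A standalone sketch of the legacy-rate half; the mask values (`IEEE80211_RATE_VAL` 0x7f, `IEEE80211_RATE_MCS` 0x80, `IEEE80211_FIXED_RATE_NONE` -1) are assumed from the net80211 headers:

```c
#include <assert.h>

#define RATE_VAL 0x7f		/* assumed IEEE80211_RATE_VAL */
#define FIXED_RATE_NONE (-1)	/* assumed IEEE80211_FIXED_RATE_NONE */

/*
 * Return nonzero if "rate" (in 500 kb/s units) appears in the rate
 * set; "no fixed rate" is always accepted, as in checkrate() above.
 */
static int
rate_ok(const unsigned char *rates, int nrates, int rate)
{
	int i;

	if (rate == FIXED_RATE_NONE)
		return 1;
	for (i = 0; i < nrates; i++)
		if ((rates[i] & RATE_VAL) == (rate & RATE_VAL))
			return 1;
	return 0;
}

/* Demo set: 1/2/5.5/11 Mb/s, all flagged basic (0x80). */
static int
demo_rate_ok(int rate)
{
	static const unsigned char rates[] = { 0x82, 0x84, 0x8b, 0x96 };

	return rate_ok(rates, 4, rate);
}
```

Masking with `RATE_VAL` is what lets a basic-rate entry like 0x96 match a plain request for 22 (11 Mb/s).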
static int
ieee80211_ioctl_settxparams(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_txparams_req parms; /* XXX stack use? */
struct ieee80211_txparam *src, *dst;
const struct ieee80211_htrateset *rs_ht;
const struct ieee80211_rateset *rs;
int error, mode, changed, is11n, nmodes;
/* NB: accept short requests for backwards compat */
if (ireq->i_len > sizeof(parms))
return EINVAL;
error = copyin(ireq->i_data, &parms, ireq->i_len);
if (error != 0)
return error;
nmodes = ireq->i_len / sizeof(struct ieee80211_txparam);
changed = 0;
/* validate parameters and check if anything changed */
for (mode = IEEE80211_MODE_11A; mode < nmodes; mode++) {
if (isclr(ic->ic_modecaps, mode))
continue;
src = &parms.params[mode];
dst = &vap->iv_txparms[mode];
rs = &ic->ic_sup_rates[mode]; /* NB: 11n maps to legacy */
rs_ht = &ic->ic_sup_htrates;
is11n = (mode == IEEE80211_MODE_11NA ||
mode == IEEE80211_MODE_11NG);
if (src->ucastrate != dst->ucastrate) {
if (!checkrate(rs, src->ucastrate) &&
(!is11n || !checkmcs(rs_ht, src->ucastrate)))
return EINVAL;
changed++;
}
if (src->mcastrate != dst->mcastrate) {
if (!checkrate(rs, src->mcastrate) &&
(!is11n || !checkmcs(rs_ht, src->mcastrate)))
return EINVAL;
changed++;
}
if (src->mgmtrate != dst->mgmtrate) {
if (!checkrate(rs, src->mgmtrate) &&
(!is11n || !checkmcs(rs_ht, src->mgmtrate)))
return EINVAL;
changed++;
}
if (src->maxretry != dst->maxretry) /* NB: no bounds */
changed++;
}
if (changed) {
/*
* Copy new parameters in place and notify the
* driver so it can push state to the device.
*/
for (mode = IEEE80211_MODE_11A; mode < nmodes; mode++) {
if (isset(ic->ic_modecaps, mode))
vap->iv_txparms[mode] = parms.params[mode];
}
/* XXX could be more intelligent,
e.g. don't reset if setting not being used */
return ENETRESET;
}
return 0;
}
/*
* Application Information Element support.
*/
static int
setappie(struct ieee80211_appie **aie, const struct ieee80211req *ireq)
{
struct ieee80211_appie *app = *aie;
struct ieee80211_appie *napp;
int error;
if (ireq->i_len == 0) { /* delete any existing ie */
if (app != NULL) {
*aie = NULL; /* XXX racey */
IEEE80211_FREE(app, M_80211_NODE_IE);
}
return 0;
}
if (!(2 <= ireq->i_len && ireq->i_len <= IEEE80211_MAX_APPIE))
return EINVAL;
/*
* Allocate a new appie structure and copy in the user data.
* When done swap in the new structure. Note that we do not
* guard against users holding a ref to the old structure;
* this must be handled outside this code.
*
* XXX bad bad bad
*/
napp = (struct ieee80211_appie *) IEEE80211_MALLOC(
sizeof(struct ieee80211_appie) + ireq->i_len, M_80211_NODE_IE,
IEEE80211_M_NOWAIT);
if (napp == NULL)
return ENOMEM;
/* XXX holding ic lock */
error = copyin(ireq->i_data, napp->ie_data, ireq->i_len);
if (error) {
IEEE80211_FREE(napp, M_80211_NODE_IE);
return error;
}
napp->ie_len = ireq->i_len;
*aie = napp;
if (app != NULL)
IEEE80211_FREE(app, M_80211_NODE_IE);
return 0;
}
static void
setwparsnie(struct ieee80211vap *vap, uint8_t *ie, int space)
{
/* validate data is present as best we can */
if (space == 0 || 2+ie[1] > space)
return;
if (ie[0] == IEEE80211_ELEMID_VENDOR)
vap->iv_wpa_ie = ie;
else if (ie[0] == IEEE80211_ELEMID_RSN)
vap->iv_rsn_ie = ie;
}
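The `2+ie[1] > space` guard in setwparsnie() is the standard bounds check for a TLV-style information element: 1-byte element ID, 1-byte length, then `ie[1]` payload bytes. A minimal standalone version of that check (slightly stricter than the handler in that it also requires the two header bytes to be present):

```c
#include <assert.h>

/* Return nonzero if the IE's header plus payload fits in "space" bytes. */
static int
ie_fits(const unsigned char *ie, int space)
{
	return space >= 2 && 2 + ie[1] <= space;
}

/* Demo: a 6-byte vendor IE (ID 0xdd, length 4, WPA OUI+type payload). */
static int
demo_ie_fits(int space)
{
	static const unsigned char ie[] = { 0xdd, 4, 0x00, 0x50, 0xf2, 0x01 };

	return ie_fits(ie, space);
}
```

This is why the WPA handler above can walk a blob as `data` then `data + 2 + data[1]`: each IE's length byte tells it where the next one starts, and the same check guards each step.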
static int
ieee80211_ioctl_setappie_locked(struct ieee80211vap *vap,
const struct ieee80211req *ireq, int fc0)
{
int error;
IEEE80211_LOCK_ASSERT(vap->iv_ic);
switch (fc0 & IEEE80211_FC0_SUBTYPE_MASK) {
case IEEE80211_FC0_SUBTYPE_BEACON:
if (vap->iv_opmode != IEEE80211_M_HOSTAP &&
vap->iv_opmode != IEEE80211_M_IBSS) {
error = EINVAL;
break;
}
error = setappie(&vap->iv_appie_beacon, ireq);
if (error == 0)
ieee80211_beacon_notify(vap, IEEE80211_BEACON_APPIE);
break;
case IEEE80211_FC0_SUBTYPE_PROBE_RESP:
error = setappie(&vap->iv_appie_proberesp, ireq);
break;
case IEEE80211_FC0_SUBTYPE_ASSOC_RESP:
if (vap->iv_opmode == IEEE80211_M_HOSTAP)
error = setappie(&vap->iv_appie_assocresp, ireq);
else
error = EINVAL;
break;
case IEEE80211_FC0_SUBTYPE_PROBE_REQ:
error = setappie(&vap->iv_appie_probereq, ireq);
break;
case IEEE80211_FC0_SUBTYPE_ASSOC_REQ:
if (vap->iv_opmode == IEEE80211_M_STA)
error = setappie(&vap->iv_appie_assocreq, ireq);
else
error = EINVAL;
break;
case (IEEE80211_APPIE_WPA & IEEE80211_FC0_SUBTYPE_MASK):
error = setappie(&vap->iv_appie_wpa, ireq);
if (error == 0) {
/*
* Must split single blob of data into separate
* WPA and RSN ie's because they go in different
* locations in the mgt frames.
* XXX use IEEE80211_IOC_WPA2 so user code does split
*/
vap->iv_wpa_ie = NULL;
vap->iv_rsn_ie = NULL;
if (vap->iv_appie_wpa != NULL) {
struct ieee80211_appie *appie =
vap->iv_appie_wpa;
uint8_t *data = appie->ie_data;
/* XXX ie length validate is painful, cheat */
setwparsnie(vap, data, appie->ie_len);
setwparsnie(vap, data + 2 + data[1],
appie->ie_len - (2 + data[1]));
}
if (vap->iv_opmode == IEEE80211_M_HOSTAP ||
vap->iv_opmode == IEEE80211_M_IBSS) {
/*
* Must rebuild beacon frame as the update
* mechanism doesn't handle WPA/RSN ie's.
* Could extend it but it doesn't normally
* change; this is just to deal with hostapd
* plumbing the ie after the interface is up.
*/
error = ENETRESET;
}
}
break;
default:
error = EINVAL;
break;
}
return error;
}
static int
ieee80211_ioctl_setappie(struct ieee80211vap *vap,
const struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
int error;
uint8_t fc0;
fc0 = ireq->i_val & 0xff;
if ((fc0 & IEEE80211_FC0_TYPE_MASK) != IEEE80211_FC0_TYPE_MGT)
return EINVAL;
/* NB: could check iv_opmode and reject but hardly worth the effort */
IEEE80211_LOCK(ic);
error = ieee80211_ioctl_setappie_locked(vap, ireq, fc0);
IEEE80211_UNLOCK(ic);
return error;
}
static int
ieee80211_ioctl_chanswitch(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_chanswitch_req csr;
struct ieee80211_channel *c;
int error;
if (ireq->i_len != sizeof(csr))
return EINVAL;
error = copyin(ireq->i_data, &csr, sizeof(csr));
if (error != 0)
return error;
/* XXX adhoc mode not supported */
if (vap->iv_opmode != IEEE80211_M_HOSTAP ||
(vap->iv_flags & IEEE80211_F_DOTH) == 0)
return EOPNOTSUPP;
c = ieee80211_find_channel(ic,
csr.csa_chan.ic_freq, csr.csa_chan.ic_flags);
if (c == NULL)
return ENOENT;
IEEE80211_LOCK(ic);
if ((ic->ic_flags & IEEE80211_F_CSAPENDING) == 0)
ieee80211_csa_startswitch(ic, c, csr.csa_mode, csr.csa_count);
else if (csr.csa_count == 0)
ieee80211_csa_cancelswitch(ic);
else
error = EBUSY;
IEEE80211_UNLOCK(ic);
return error;
}
static int
ieee80211_scanreq(struct ieee80211vap *vap, struct ieee80211_scan_req *sr)
{
#define IEEE80211_IOC_SCAN_FLAGS \
(IEEE80211_IOC_SCAN_NOPICK | IEEE80211_IOC_SCAN_ACTIVE | \
IEEE80211_IOC_SCAN_PICK1ST | IEEE80211_IOC_SCAN_BGSCAN | \
IEEE80211_IOC_SCAN_ONCE | IEEE80211_IOC_SCAN_NOBCAST | \
IEEE80211_IOC_SCAN_NOJOIN | IEEE80211_IOC_SCAN_FLUSH | \
IEEE80211_IOC_SCAN_CHECK)
struct ieee80211com *ic = vap->iv_ic;
int error, i;
/* convert duration */
if (sr->sr_duration == IEEE80211_IOC_SCAN_FOREVER)
sr->sr_duration = IEEE80211_SCAN_FOREVER;
else {
if (sr->sr_duration < IEEE80211_IOC_SCAN_DURATION_MIN ||
sr->sr_duration > IEEE80211_IOC_SCAN_DURATION_MAX)
return EINVAL;
sr->sr_duration = msecs_to_ticks(sr->sr_duration);
}
/* convert min/max channel dwell */
if (sr->sr_mindwell != 0)
sr->sr_mindwell = msecs_to_ticks(sr->sr_mindwell);
if (sr->sr_maxdwell != 0)
sr->sr_maxdwell = msecs_to_ticks(sr->sr_maxdwell);
/* NB: silently reduce ssid count to what is supported */
if (sr->sr_nssid > IEEE80211_SCAN_MAX_SSID)
sr->sr_nssid = IEEE80211_SCAN_MAX_SSID;
for (i = 0; i < sr->sr_nssid; i++)
if (sr->sr_ssid[i].len > IEEE80211_NWID_LEN)
return EINVAL;
/* cleanse flags just in case, could reject if invalid flags */
sr->sr_flags &= IEEE80211_IOC_SCAN_FLAGS;
/*
* Add an implicit NOPICK if the vap is not marked UP. This
* allows applications to scan without joining a bss (or picking
* a channel and setting up a bss) and without forcing manual
* roaming mode--you just need to mark the parent device UP.
*/
if ((vap->iv_ifp->if_flags & IFF_UP) == 0)
sr->sr_flags |= IEEE80211_IOC_SCAN_NOPICK;
IEEE80211_DPRINTF(vap, IEEE80211_MSG_SCAN,
"%s: flags 0x%x%s duration 0x%x mindwell %u maxdwell %u nssid %d\n",
__func__, sr->sr_flags,
(vap->iv_ifp->if_flags & IFF_UP) == 0 ? " (!IFF_UP)" : "",
sr->sr_duration, sr->sr_mindwell, sr->sr_maxdwell, sr->sr_nssid);
/*
* If we are in INIT state then the driver has never had a chance
* to setup hardware state to do a scan; we must use the state
* machine to get us up to the SCAN state but once we reach SCAN
* state we then want to use the supplied params. Stash the
* parameters in the vap and mark IEEE80211_FEXT_SCANREQ; the
* state machines will recognize this and use the stashed params
* to issue the scan request.
*
* Otherwise just invoke the scan machinery directly.
*/
IEEE80211_LOCK(ic);
if (ic->ic_nrunning == 0) {
IEEE80211_UNLOCK(ic);
return ENXIO;
}
if (vap->iv_state == IEEE80211_S_INIT) {
/* NB: clobbers previous settings */
vap->iv_scanreq_flags = sr->sr_flags;
vap->iv_scanreq_duration = sr->sr_duration;
vap->iv_scanreq_nssid = sr->sr_nssid;
for (i = 0; i < sr->sr_nssid; i++) {
vap->iv_scanreq_ssid[i].len = sr->sr_ssid[i].len;
memcpy(vap->iv_scanreq_ssid[i].ssid,
sr->sr_ssid[i].ssid, sr->sr_ssid[i].len);
}
vap->iv_flags_ext |= IEEE80211_FEXT_SCANREQ;
IEEE80211_UNLOCK(ic);
ieee80211_new_state(vap, IEEE80211_S_SCAN, 0);
} else {
vap->iv_flags_ext &= ~IEEE80211_FEXT_SCANREQ;
IEEE80211_UNLOCK(ic);
if (sr->sr_flags & IEEE80211_IOC_SCAN_CHECK) {
error = ieee80211_check_scan(vap, sr->sr_flags,
sr->sr_duration, sr->sr_mindwell, sr->sr_maxdwell,
sr->sr_nssid,
/* NB: cheat, we assume structures are compatible */
(const struct ieee80211_scan_ssid *) &sr->sr_ssid[0]);
} else {
error = ieee80211_start_scan(vap, sr->sr_flags,
sr->sr_duration, sr->sr_mindwell, sr->sr_maxdwell,
sr->sr_nssid,
/* NB: cheat, we assume structures are compatible */
(const struct ieee80211_scan_ssid *) &sr->sr_ssid[0]);
}
if (error == 0)
return EINPROGRESS;
}
return 0;
#undef IEEE80211_IOC_SCAN_FLAGS
}
static int
ieee80211_ioctl_scanreq(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211_scan_req *sr;
int error;
if (ireq->i_len != sizeof(*sr))
return EINVAL;
sr = IEEE80211_MALLOC(sizeof(*sr), M_TEMP,
IEEE80211_M_NOWAIT | IEEE80211_M_ZERO);
if (sr == NULL)
return ENOMEM;
error = copyin(ireq->i_data, sr, sizeof(*sr));
if (error != 0)
goto bad;
error = ieee80211_scanreq(vap, sr);
bad:
IEEE80211_FREE(sr, M_TEMP);
return error;
}
static int
ieee80211_ioctl_setstavlan(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
struct ieee80211_node *ni;
struct ieee80211req_sta_vlan vlan;
int error;
if (ireq->i_len != sizeof(vlan))
return EINVAL;
error = copyin(ireq->i_data, &vlan, sizeof(vlan));
if (error != 0)
return error;
if (!IEEE80211_ADDR_EQ(vlan.sv_macaddr, zerobssid)) {
ni = ieee80211_find_vap_node(&vap->iv_ic->ic_sta, vap,
vlan.sv_macaddr);
if (ni == NULL)
return ENOENT;
} else
ni = ieee80211_ref_node(vap->iv_bss);
ni->ni_vlan = vlan.sv_vlan;
ieee80211_free_node(ni);
return error;
}
static int
isvap11g(const struct ieee80211vap *vap)
{
const struct ieee80211_node *bss = vap->iv_bss;
return bss->ni_chan != IEEE80211_CHAN_ANYC &&
IEEE80211_IS_CHAN_ANYG(bss->ni_chan);
}
static int
isvapht(const struct ieee80211vap *vap)
{
const struct ieee80211_node *bss = vap->iv_bss;
return bss->ni_chan != IEEE80211_CHAN_ANYC &&
IEEE80211_IS_CHAN_HT(bss->ni_chan);
}
/*
* Dummy ioctl set handler so the linker set is defined.
*/
static int
dummy_ioctl_set(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
return ENOSYS;
}
IEEE80211_IOCTL_SET(dummy, dummy_ioctl_set);
static int
ieee80211_ioctl_setdefault(struct ieee80211vap *vap, struct ieee80211req *ireq)
{
ieee80211_ioctl_setfunc * const *set;
int error;
SET_FOREACH(set, ieee80211_ioctl_setset) {
error = (*set)(vap, ireq);
if (error != ENOSYS)
return error;
}
return EINVAL;
}
static int
ieee80211_ioctl_set80211(struct ieee80211vap *vap, u_long cmd, struct ieee80211req *ireq)
{
struct ieee80211com *ic = vap->iv_ic;
int error;
const struct ieee80211_authenticator *auth;
uint8_t tmpkey[IEEE80211_KEYBUF_SIZE];
char tmpssid[IEEE80211_NWID_LEN];
uint8_t tmpbssid[IEEE80211_ADDR_LEN];
struct ieee80211_key *k;
u_int kid;
uint32_t flags;
error = 0;
switch (ireq->i_type) {
case IEEE80211_IOC_SSID:
if (ireq->i_val != 0 ||
ireq->i_len > IEEE80211_NWID_LEN)
return EINVAL;
error = copyin(ireq->i_data, tmpssid, ireq->i_len);
if (error)
break;
memset(vap->iv_des_ssid[0].ssid, 0, IEEE80211_NWID_LEN);
vap->iv_des_ssid[0].len = ireq->i_len;
memcpy(vap->iv_des_ssid[0].ssid, tmpssid, ireq->i_len);
vap->iv_des_nssid = (ireq->i_len > 0);
error = ENETRESET;
break;
case IEEE80211_IOC_WEP:
switch (ireq->i_val) {
case IEEE80211_WEP_OFF:
vap->iv_flags &= ~IEEE80211_F_PRIVACY;
vap->iv_flags &= ~IEEE80211_F_DROPUNENC;
break;
case IEEE80211_WEP_ON:
vap->iv_flags |= IEEE80211_F_PRIVACY;
vap->iv_flags |= IEEE80211_F_DROPUNENC;
break;
case IEEE80211_WEP_MIXED:
vap->iv_flags |= IEEE80211_F_PRIVACY;
vap->iv_flags &= ~IEEE80211_F_DROPUNENC;
break;
}
error = ENETRESET;
break;
case IEEE80211_IOC_WEPKEY:
kid = (u_int) ireq->i_val;
if (kid >= IEEE80211_WEP_NKID)
return EINVAL;
k = &vap->iv_nw_keys[kid];
if (ireq->i_len == 0) {
/* zero-len =>'s delete any existing key */
(void) ieee80211_crypto_delkey(vap, k);
break;
}
if (ireq->i_len > sizeof(tmpkey))
return EINVAL;
memset(tmpkey, 0, sizeof(tmpkey));
error = copyin(ireq->i_data, tmpkey, ireq->i_len);
if (error)
break;
ieee80211_key_update_begin(vap);
k->wk_keyix = kid; /* NB: force fixed key id */
if (ieee80211_crypto_newkey(vap, IEEE80211_CIPHER_WEP,
IEEE80211_KEY_XMIT | IEEE80211_KEY_RECV, k)) {
k->wk_keylen = ireq->i_len;
memcpy(k->wk_key, tmpkey, sizeof(tmpkey));
IEEE80211_ADDR_COPY(k->wk_macaddr, vap->iv_myaddr);
if (!ieee80211_crypto_setkey(vap, k))
error = EINVAL;
} else
error = EINVAL;
ieee80211_key_update_end(vap);
break;
case IEEE80211_IOC_WEPTXKEY:
kid = (u_int) ireq->i_val;
if (kid >= IEEE80211_WEP_NKID &&
(uint16_t) kid != IEEE80211_KEYIX_NONE)
return EINVAL;
/*
* Firmware devices may need to be told about an explicit
* key index here, versus just inferring it from the
* key set / change. Since we may also need to pause
* things like transmit before the key is updated,
* give the driver a chance to flush things by tying
* into key update begin/end.
*/
ieee80211_key_update_begin(vap);
ieee80211_crypto_set_deftxkey(vap, kid);
ieee80211_key_update_end(vap);
break;
case IEEE80211_IOC_AUTHMODE:
switch (ireq->i_val) {
case IEEE80211_AUTH_WPA:
case IEEE80211_AUTH_8021X: /* 802.1x */
case IEEE80211_AUTH_OPEN: /* open */
case IEEE80211_AUTH_SHARED: /* shared-key */
case IEEE80211_AUTH_AUTO: /* auto */
auth = ieee80211_authenticator_get(ireq->i_val);
if (auth == NULL)
return EINVAL;
break;
default:
return EINVAL;
}
switch (ireq->i_val) {
case IEEE80211_AUTH_WPA: /* WPA w/ 802.1x */
vap->iv_flags |= IEEE80211_F_PRIVACY;
ireq->i_val = IEEE80211_AUTH_8021X;
break;
case IEEE80211_AUTH_OPEN: /* open */
vap->iv_flags &= ~(IEEE80211_F_WPA|IEEE80211_F_PRIVACY);
break;
case IEEE80211_AUTH_SHARED: /* shared-key */
case IEEE80211_AUTH_8021X: /* 802.1x */
vap->iv_flags &= ~IEEE80211_F_WPA;
/* both require a key so mark the PRIVACY capability */
vap->iv_flags |= IEEE80211_F_PRIVACY;
break;
case IEEE80211_AUTH_AUTO: /* auto */
vap->iv_flags &= ~IEEE80211_F_WPA;
/* XXX PRIVACY handling? */
/* XXX what's the right way to do this? */
break;
}
/* NB: authenticator attach/detach happens on state change */
vap->iv_bss->ni_authmode = ireq->i_val;
/* XXX mixed/mode/usage? */
vap->iv_auth = auth;
error = ENETRESET;
break;
case IEEE80211_IOC_CHANNEL:
error = ieee80211_ioctl_setchannel(vap, ireq);
break;
case IEEE80211_IOC_POWERSAVE:
switch (ireq->i_val) {
case IEEE80211_POWERSAVE_OFF:
if (vap->iv_flags & IEEE80211_F_PMGTON) {
ieee80211_syncflag(vap, -IEEE80211_F_PMGTON);
error = ERESTART;
}
break;
case IEEE80211_POWERSAVE_ON:
if ((vap->iv_caps & IEEE80211_C_PMGT) == 0)
error = EOPNOTSUPP;
else if ((vap->iv_flags & IEEE80211_F_PMGTON) == 0) {
ieee80211_syncflag(vap, IEEE80211_F_PMGTON);
error = ERESTART;
}
break;
default:
error = EINVAL;
break;
}
break;
case IEEE80211_IOC_POWERSAVESLEEP:
if (ireq->i_val < 0)
return EINVAL;
ic->ic_lintval = ireq->i_val;
error = ERESTART;
break;
case IEEE80211_IOC_RTSTHRESHOLD:
if (!(IEEE80211_RTS_MIN <= ireq->i_val &&
ireq->i_val <= IEEE80211_RTS_MAX))
return EINVAL;
vap->iv_rtsthreshold = ireq->i_val;
error = ERESTART;
break;
case IEEE80211_IOC_PROTMODE:
if (ireq->i_val > IEEE80211_PROT_RTSCTS)
return EINVAL;
ic->ic_protmode = (enum ieee80211_protmode)ireq->i_val;
/* NB: if not operating in 11g this can wait */
if (ic->ic_bsschan != IEEE80211_CHAN_ANYC &&
IEEE80211_IS_CHAN_ANYG(ic->ic_bsschan))
error = ERESTART;
break;
case IEEE80211_IOC_TXPOWER:
if ((ic->ic_caps & IEEE80211_C_TXPMGT) == 0)
return EOPNOTSUPP;
if (!(IEEE80211_TXPOWER_MIN <= ireq->i_val &&
ireq->i_val <= IEEE80211_TXPOWER_MAX))
return EINVAL;
ic->ic_txpowlimit = ireq->i_val;
error = ERESTART;
break;
case IEEE80211_IOC_ROAMING:
if (!(IEEE80211_ROAMING_DEVICE <= ireq->i_val &&
ireq->i_val <= IEEE80211_ROAMING_MANUAL))
return EINVAL;
vap->iv_roaming = (enum ieee80211_roamingmode)ireq->i_val;
/* XXXX reset? */
break;
case IEEE80211_IOC_PRIVACY:
if (ireq->i_val) {
/* XXX check for key state? */
vap->iv_flags |= IEEE80211_F_PRIVACY;
} else
vap->iv_flags &= ~IEEE80211_F_PRIVACY;
/* XXX ERESTART? */
break;
case IEEE80211_IOC_DROPUNENCRYPTED:
if (ireq->i_val)
vap->iv_flags |= IEEE80211_F_DROPUNENC;
else
vap->iv_flags &= ~IEEE80211_F_DROPUNENC;
/* XXX ERESTART? */
break;
case IEEE80211_IOC_WPAKEY:
error = ieee80211_ioctl_setkey(vap, ireq);
break;
case IEEE80211_IOC_DELKEY:
error = ieee80211_ioctl_delkey(vap, ireq);
break;
case IEEE80211_IOC_MLME:
error = ieee80211_ioctl_setmlme(vap, ireq);
break;
case IEEE80211_IOC_COUNTERMEASURES:
if (ireq->i_val) {
if ((vap->iv_flags & IEEE80211_F_WPA) == 0)
return EOPNOTSUPP;
vap->iv_flags |= IEEE80211_F_COUNTERM;
} else
vap->iv_flags &= ~IEEE80211_F_COUNTERM;
/* XXX ERESTART? */
break;
case IEEE80211_IOC_WPA:
if (ireq->i_val > 3)
return EINVAL;
/* XXX verify ciphers available */
flags = vap->iv_flags & ~IEEE80211_F_WPA;
switch (ireq->i_val) {
case 0:
/* wpa_supplicant calls this to clear the WPA config */
break;
case 1:
if (!(vap->iv_caps & IEEE80211_C_WPA1))
return EOPNOTSUPP;
flags |= IEEE80211_F_WPA1;
break;
case 2:
if (!(vap->iv_caps & IEEE80211_C_WPA2))
return EOPNOTSUPP;
flags |= IEEE80211_F_WPA2;
break;
case 3:
if ((vap->iv_caps & IEEE80211_C_WPA) != IEEE80211_C_WPA)
return EOPNOTSUPP;
flags |= IEEE80211_F_WPA1 | IEEE80211_F_WPA2;
break;
default: /* Can't set any -> error */
return EOPNOTSUPP;
}
vap->iv_flags = flags;
error = ERESTART; /* NB: can change beacon frame */
break;
case IEEE80211_IOC_WME:
if (ireq->i_val) {
if ((vap->iv_caps & IEEE80211_C_WME) == 0)
return EOPNOTSUPP;
ieee80211_syncflag(vap, IEEE80211_F_WME);
} else
ieee80211_syncflag(vap, -IEEE80211_F_WME);
error = ERESTART; /* NB: can change beacon frame */
break;
case IEEE80211_IOC_HIDESSID:
if (ireq->i_val)
vap->iv_flags |= IEEE80211_F_HIDESSID;
else
vap->iv_flags &= ~IEEE80211_F_HIDESSID;
error = ERESTART; /* XXX ENETRESET? */
break;
case IEEE80211_IOC_APBRIDGE:
if (ireq->i_val == 0)
vap->iv_flags |= IEEE80211_F_NOBRIDGE;
else
vap->iv_flags &= ~IEEE80211_F_NOBRIDGE;
break;
case IEEE80211_IOC_BSSID:
if (ireq->i_len != sizeof(tmpbssid))
return EINVAL;
error = copyin(ireq->i_data, tmpbssid, ireq->i_len);
if (error)
break;
IEEE80211_ADDR_COPY(vap->iv_des_bssid, tmpbssid);
if (IEEE80211_ADDR_EQ(vap->iv_des_bssid, zerobssid))
vap->iv_flags &= ~IEEE80211_F_DESBSSID;
else
vap->iv_flags |= IEEE80211_F_DESBSSID;
error = ENETRESET;
break;
case IEEE80211_IOC_CHANLIST:
error = ieee80211_ioctl_setchanlist(vap, ireq);
break;
#define OLD_IEEE80211_IOC_SCAN_REQ 23
#ifdef OLD_IEEE80211_IOC_SCAN_REQ
case OLD_IEEE80211_IOC_SCAN_REQ:
IEEE80211_DPRINTF(vap, IEEE80211_MSG_SCAN,
"%s: active scan request\n", __func__);
/*
* If we are in INIT state then the driver has never
* had a chance to setup hardware state to do a scan;
* use the state machine to get us up to the SCAN state.
* Otherwise just invoke the scan machinery to start
* a one-time scan.
*/
if (vap->iv_state == IEEE80211_S_INIT)
ieee80211_new_state(vap, IEEE80211_S_SCAN, 0);
else
(void) ieee80211_start_scan(vap,
IEEE80211_SCAN_ACTIVE |
IEEE80211_SCAN_NOPICK |
IEEE80211_SCAN_ONCE,
IEEE80211_SCAN_FOREVER, 0, 0,
/* XXX use ioctl params */
vap->iv_des_nssid, vap->iv_des_ssid);
break;
#endif /* OLD_IEEE80211_IOC_SCAN_REQ */
case IEEE80211_IOC_SCAN_REQ:
error = ieee80211_ioctl_scanreq(vap, ireq);
break;
case IEEE80211_IOC_SCAN_CANCEL:
IEEE80211_DPRINTF(vap, IEEE80211_MSG_SCAN,
"%s: cancel scan\n", __func__);
ieee80211_cancel_scan(vap);
break;
case IEEE80211_IOC_HTCONF:
if (ireq->i_val & 1)
ieee80211_syncflag_ht(vap, IEEE80211_FHT_HT);
else
ieee80211_syncflag_ht(vap, -IEEE80211_FHT_HT);
if (ireq->i_val & 2)
ieee80211_syncflag_ht(vap, IEEE80211_FHT_USEHT40);
else
ieee80211_syncflag_ht(vap, -IEEE80211_FHT_USEHT40);
error = ENETRESET;
break;
case IEEE80211_IOC_ADDMAC:
case IEEE80211_IOC_DELMAC:
error = ieee80211_ioctl_macmac(vap, ireq);
break;
case IEEE80211_IOC_MACCMD:
error = ieee80211_ioctl_setmaccmd(vap, ireq);
break;
case IEEE80211_IOC_STA_STATS:
error = ieee80211_ioctl_setstastats(vap, ireq);
break;
case IEEE80211_IOC_STA_TXPOW:
error = ieee80211_ioctl_setstatxpow(vap, ireq);
break;
case IEEE80211_IOC_WME_CWMIN: /* WME: CWmin */
case IEEE80211_IOC_WME_CWMAX: /* WME: CWmax */
case IEEE80211_IOC_WME_AIFS: /* WME: AIFS */
case IEEE80211_IOC_WME_TXOPLIMIT: /* WME: txops limit */
case IEEE80211_IOC_WME_ACM: /* WME: ACM (bss only) */
case IEEE80211_IOC_WME_ACKPOLICY: /* WME: ACK policy (!bss only) */
error = ieee80211_ioctl_setwmeparam(vap, ireq);
break;
case IEEE80211_IOC_DTIM_PERIOD:
if (vap->iv_opmode != IEEE80211_M_HOSTAP &&
vap->iv_opmode != IEEE80211_M_MBSS &&
vap->iv_opmode != IEEE80211_M_IBSS)
return EINVAL;
if (IEEE80211_DTIM_MIN <= ireq->i_val &&
ireq->i_val <= IEEE80211_DTIM_MAX) {
vap->iv_dtim_period = ireq->i_val;
error = ENETRESET; /* requires restart */
} else
error = EINVAL;
break;
case IEEE80211_IOC_BEACON_INTERVAL:
if (vap->iv_opmode != IEEE80211_M_HOSTAP &&
vap->iv_opmode != IEEE80211_M_MBSS &&
vap->iv_opmode != IEEE80211_M_IBSS)
return EINVAL;
if (IEEE80211_BINTVAL_MIN <= ireq->i_val &&
ireq->i_val <= IEEE80211_BINTVAL_MAX) {
ic->ic_bintval = ireq->i_val;
error = ENETRESET; /* requires restart */
} else
error = EINVAL;
break;
case IEEE80211_IOC_PUREG:
if (ireq->i_val)
vap->iv_flags |= IEEE80211_F_PUREG;
else
vap->iv_flags &= ~IEEE80211_F_PUREG;
/* NB: reset only if we're operating on an 11g channel */
if (isvap11g(vap))
error = ENETRESET;
break;
case IEEE80211_IOC_QUIET:
vap->iv_quiet = ireq->i_val;
break;
case IEEE80211_IOC_QUIET_COUNT:
vap->iv_quiet_count = ireq->i_val;
break;
case IEEE80211_IOC_QUIET_PERIOD:
vap->iv_quiet_period = ireq->i_val;
break;
case IEEE80211_IOC_QUIET_OFFSET:
vap->iv_quiet_offset = ireq->i_val;
break;
case IEEE80211_IOC_QUIET_DUR:
if (ireq->i_val < vap->iv_bss->ni_intval)
vap->iv_quiet_duration = ireq->i_val;
else
error = EINVAL;
break;
case IEEE80211_IOC_BGSCAN:
if (ireq->i_val) {
if ((vap->iv_caps & IEEE80211_C_BGSCAN) == 0)
return EOPNOTSUPP;
vap->iv_flags |= IEEE80211_F_BGSCAN;
} else
vap->iv_flags &= ~IEEE80211_F_BGSCAN;
break;
case IEEE80211_IOC_BGSCAN_IDLE:
if (ireq->i_val >= IEEE80211_BGSCAN_IDLE_MIN)
vap->iv_bgscanidle = ireq->i_val*hz/1000;
else
error = EINVAL;
break;
case IEEE80211_IOC_BGSCAN_INTERVAL:
if (ireq->i_val >= IEEE80211_BGSCAN_INTVAL_MIN)
vap->iv_bgscanintvl = ireq->i_val*hz;
else
error = EINVAL;
break;
case IEEE80211_IOC_SCANVALID:
if (ireq->i_val >= IEEE80211_SCAN_VALID_MIN)
vap->iv_scanvalid = ireq->i_val*hz;
else
error = EINVAL;
break;
case IEEE80211_IOC_FRAGTHRESHOLD:
if ((vap->iv_caps & IEEE80211_C_TXFRAG) == 0 &&
ireq->i_val != IEEE80211_FRAG_MAX)
return EOPNOTSUPP;
if (!(IEEE80211_FRAG_MIN <= ireq->i_val &&
ireq->i_val <= IEEE80211_FRAG_MAX))
return EINVAL;
vap->iv_fragthreshold = ireq->i_val;
error = ERESTART;
break;
case IEEE80211_IOC_BURST:
if (ireq->i_val) {
if ((vap->iv_caps & IEEE80211_C_BURST) == 0)
return EOPNOTSUPP;
ieee80211_syncflag(vap, IEEE80211_F_BURST);
} else
ieee80211_syncflag(vap, -IEEE80211_F_BURST);
error = ERESTART;
break;
case IEEE80211_IOC_BMISSTHRESHOLD:
if (!(IEEE80211_HWBMISS_MIN <= ireq->i_val &&
ireq->i_val <= IEEE80211_HWBMISS_MAX))
return EINVAL;
vap->iv_bmissthreshold = ireq->i_val;
error = ERESTART;
break;
case IEEE80211_IOC_CURCHAN:
error = ieee80211_ioctl_setcurchan(vap, ireq);
break;
case IEEE80211_IOC_SHORTGI:
if (ireq->i_val) {
#define IEEE80211_HTCAP_SHORTGI \
(IEEE80211_HTCAP_SHORTGI20 | IEEE80211_HTCAP_SHORTGI40)
if (((ireq->i_val ^ vap->iv_htcaps) & IEEE80211_HTCAP_SHORTGI) != 0)
return EINVAL;
if (ireq->i_val & IEEE80211_HTCAP_SHORTGI20)
vap->iv_flags_ht |= IEEE80211_FHT_SHORTGI20;
if (ireq->i_val & IEEE80211_HTCAP_SHORTGI40)
vap->iv_flags_ht |= IEEE80211_FHT_SHORTGI40;
#undef IEEE80211_HTCAP_SHORTGI
} else
vap->iv_flags_ht &=
~(IEEE80211_FHT_SHORTGI20 | IEEE80211_FHT_SHORTGI40);
error = ERESTART;
break;
case IEEE80211_IOC_AMPDU:
if (ireq->i_val && (vap->iv_htcaps & IEEE80211_HTC_AMPDU) == 0)
return EINVAL;
if (ireq->i_val & 1)
vap->iv_flags_ht |= IEEE80211_FHT_AMPDU_TX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_AMPDU_TX;
if (ireq->i_val & 2)
vap->iv_flags_ht |= IEEE80211_FHT_AMPDU_RX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_AMPDU_RX;
/* NB: reset only if we're operating on an 11n channel */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_AMPDU_LIMIT:
/* XXX TODO: figure out ampdu_limit versus ampdu_rxmax */
if (!(IEEE80211_HTCAP_MAXRXAMPDU_8K <= ireq->i_val &&
ireq->i_val <= IEEE80211_HTCAP_MAXRXAMPDU_64K))
return EINVAL;
if (vap->iv_opmode == IEEE80211_M_HOSTAP)
vap->iv_ampdu_rxmax = ireq->i_val;
else
vap->iv_ampdu_limit = ireq->i_val;
error = ERESTART;
break;
case IEEE80211_IOC_AMPDU_DENSITY:
if (!(IEEE80211_HTCAP_MPDUDENSITY_NA <= ireq->i_val &&
ireq->i_val <= IEEE80211_HTCAP_MPDUDENSITY_16))
return EINVAL;
vap->iv_ampdu_density = ireq->i_val;
error = ERESTART;
break;
case IEEE80211_IOC_AMSDU:
if (ireq->i_val && (vap->iv_htcaps & IEEE80211_HTC_AMSDU) == 0)
return EINVAL;
if (ireq->i_val & 1)
vap->iv_flags_ht |= IEEE80211_FHT_AMSDU_TX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_AMSDU_TX;
if (ireq->i_val & 2)
vap->iv_flags_ht |= IEEE80211_FHT_AMSDU_RX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_AMSDU_RX;
/* NB: reset only if we're operating on an 11n channel */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_AMSDU_LIMIT:
/* XXX validate */
vap->iv_amsdu_limit = ireq->i_val; /* XXX truncation? */
break;
case IEEE80211_IOC_PUREN:
if (ireq->i_val) {
if ((vap->iv_flags_ht & IEEE80211_FHT_HT) == 0)
return EINVAL;
vap->iv_flags_ht |= IEEE80211_FHT_PUREN;
} else
vap->iv_flags_ht &= ~IEEE80211_FHT_PUREN;
/* NB: reset only if we're operating on an 11n channel */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_DOTH:
if (ireq->i_val) {
#if 0
/* XXX no capability */
if ((vap->iv_caps & IEEE80211_C_DOTH) == 0)
return EOPNOTSUPP;
#endif
vap->iv_flags |= IEEE80211_F_DOTH;
} else
vap->iv_flags &= ~IEEE80211_F_DOTH;
error = ENETRESET;
break;
case IEEE80211_IOC_REGDOMAIN:
error = ieee80211_ioctl_setregdomain(vap, ireq);
break;
case IEEE80211_IOC_ROAM:
error = ieee80211_ioctl_setroam(vap, ireq);
break;
case IEEE80211_IOC_TXPARAMS:
error = ieee80211_ioctl_settxparams(vap, ireq);
break;
case IEEE80211_IOC_HTCOMPAT:
if (ireq->i_val) {
if ((vap->iv_flags_ht & IEEE80211_FHT_HT) == 0)
return EOPNOTSUPP;
vap->iv_flags_ht |= IEEE80211_FHT_HTCOMPAT;
} else
vap->iv_flags_ht &= ~IEEE80211_FHT_HTCOMPAT;
/* NB: reset only if we're operating on an 11n channel */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_DWDS:
if (ireq->i_val) {
/* NB: DWDS only makes sense for WDS-capable devices */
if ((ic->ic_caps & IEEE80211_C_WDS) == 0)
return EOPNOTSUPP;
/* NB: DWDS is used only with ap+sta vaps */
if (vap->iv_opmode != IEEE80211_M_HOSTAP &&
vap->iv_opmode != IEEE80211_M_STA)
return EINVAL;
vap->iv_flags |= IEEE80211_F_DWDS;
if (vap->iv_opmode == IEEE80211_M_STA)
vap->iv_flags_ext |= IEEE80211_FEXT_4ADDR;
} else {
vap->iv_flags &= ~IEEE80211_F_DWDS;
if (vap->iv_opmode == IEEE80211_M_STA)
vap->iv_flags_ext &= ~IEEE80211_FEXT_4ADDR;
}
break;
case IEEE80211_IOC_INACTIVITY:
if (ireq->i_val)
vap->iv_flags_ext |= IEEE80211_FEXT_INACT;
else
vap->iv_flags_ext &= ~IEEE80211_FEXT_INACT;
break;
case IEEE80211_IOC_APPIE:
error = ieee80211_ioctl_setappie(vap, ireq);
break;
case IEEE80211_IOC_WPS:
if (ireq->i_val) {
if ((vap->iv_caps & IEEE80211_C_WPA) == 0)
return EOPNOTSUPP;
vap->iv_flags_ext |= IEEE80211_FEXT_WPS;
} else
vap->iv_flags_ext &= ~IEEE80211_FEXT_WPS;
break;
case IEEE80211_IOC_TSN:
if (ireq->i_val) {
if ((vap->iv_caps & IEEE80211_C_WPA) == 0)
return EOPNOTSUPP;
vap->iv_flags_ext |= IEEE80211_FEXT_TSN;
} else
vap->iv_flags_ext &= ~IEEE80211_FEXT_TSN;
break;
case IEEE80211_IOC_CHANSWITCH:
error = ieee80211_ioctl_chanswitch(vap, ireq);
break;
case IEEE80211_IOC_DFS:
if (ireq->i_val) {
if ((vap->iv_caps & IEEE80211_C_DFS) == 0)
return EOPNOTSUPP;
/* NB: DFS requires 11h support */
if ((vap->iv_flags & IEEE80211_F_DOTH) == 0)
return EINVAL;
vap->iv_flags_ext |= IEEE80211_FEXT_DFS;
} else
vap->iv_flags_ext &= ~IEEE80211_FEXT_DFS;
break;
case IEEE80211_IOC_DOTD:
if (ireq->i_val)
vap->iv_flags_ext |= IEEE80211_FEXT_DOTD;
else
vap->iv_flags_ext &= ~IEEE80211_FEXT_DOTD;
if (vap->iv_opmode == IEEE80211_M_STA)
error = ENETRESET;
break;
case IEEE80211_IOC_HTPROTMODE:
if (ireq->i_val > IEEE80211_PROT_RTSCTS)
return EINVAL;
ic->ic_htprotmode = ireq->i_val ?
IEEE80211_PROT_RTSCTS : IEEE80211_PROT_NONE;
/* NB: if not operating in 11n this can wait */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_STA_VLAN:
error = ieee80211_ioctl_setstavlan(vap, ireq);
break;
case IEEE80211_IOC_SMPS:
if ((ireq->i_val &~ IEEE80211_HTCAP_SMPS) != 0 ||
ireq->i_val == 0x0008) /* value of 2 is reserved */
return EINVAL;
if (ireq->i_val != IEEE80211_HTCAP_SMPS_OFF &&
(vap->iv_htcaps & IEEE80211_HTC_SMPS) == 0)
return EOPNOTSUPP;
vap->iv_htcaps = (vap->iv_htcaps &~ IEEE80211_HTCAP_SMPS) |
ireq->i_val;
/* NB: if not operating in 11n this can wait */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_RIFS:
if (ireq->i_val != 0) {
if ((vap->iv_htcaps & IEEE80211_HTC_RIFS) == 0)
return EOPNOTSUPP;
vap->iv_flags_ht |= IEEE80211_FHT_RIFS;
} else
vap->iv_flags_ht &= ~IEEE80211_FHT_RIFS;
/* NB: if not operating in 11n this can wait */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_STBC:
/* Check if we can do STBC TX/RX before changing the setting */
if ((ireq->i_val & 1) &&
((vap->iv_htcaps & IEEE80211_HTCAP_TXSTBC) == 0))
return EOPNOTSUPP;
if ((ireq->i_val & 2) &&
((vap->iv_htcaps & IEEE80211_HTCAP_RXSTBC) == 0))
return EOPNOTSUPP;
/* TX */
if (ireq->i_val & 1)
vap->iv_flags_ht |= IEEE80211_FHT_STBC_TX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_STBC_TX;
/* RX */
if (ireq->i_val & 2)
vap->iv_flags_ht |= IEEE80211_FHT_STBC_RX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_STBC_RX;
/* NB: reset only if we're operating on an 11n channel */
if (isvapht(vap))
error = ERESTART;
break;
case IEEE80211_IOC_LDPC:
/* Check if we can do LDPC TX/RX before changing the setting */
if ((ireq->i_val & 1) &&
(vap->iv_htcaps & IEEE80211_HTC_TXLDPC) == 0)
return EOPNOTSUPP;
if ((ireq->i_val & 2) &&
(vap->iv_htcaps & IEEE80211_HTCAP_LDPC) == 0)
return EOPNOTSUPP;
/* TX */
if (ireq->i_val & 1)
vap->iv_flags_ht |= IEEE80211_FHT_LDPC_TX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_LDPC_TX;
/* RX */
if (ireq->i_val & 2)
vap->iv_flags_ht |= IEEE80211_FHT_LDPC_RX;
else
vap->iv_flags_ht &= ~IEEE80211_FHT_LDPC_RX;
/* NB: reset only if we're operating on an 11n channel */
if (isvapht(vap))
error = ERESTART;
break;
/* VHT */
case IEEE80211_IOC_VHTCONF:
if (ireq->i_val & 1)
ieee80211_syncflag_vht(vap, IEEE80211_FVHT_VHT);
else
ieee80211_syncflag_vht(vap, -IEEE80211_FVHT_VHT);
if (ireq->i_val & 2)
ieee80211_syncflag_vht(vap, IEEE80211_FVHT_USEVHT40);
else
ieee80211_syncflag_vht(vap, -IEEE80211_FVHT_USEVHT40);
if (ireq->i_val & 4)
ieee80211_syncflag_vht(vap, IEEE80211_FVHT_USEVHT80);
else
ieee80211_syncflag_vht(vap, -IEEE80211_FVHT_USEVHT80);
if (ireq->i_val & 8)
ieee80211_syncflag_vht(vap, IEEE80211_FVHT_USEVHT80P80);
else
ieee80211_syncflag_vht(vap, -IEEE80211_FVHT_USEVHT80P80);
if (ireq->i_val & 16)
ieee80211_syncflag_vht(vap, IEEE80211_FVHT_USEVHT160);
else
ieee80211_syncflag_vht(vap, -IEEE80211_FVHT_USEVHT160);
error = ENETRESET;
break;
default:
error = ieee80211_ioctl_setdefault(vap, ireq);
break;
}
/*
* The convention is that ENETRESET means an operation
* requires a complete re-initialization of the device (e.g.
* changing something that affects the association state).
* ERESTART means the request may be handled with only a
* reload of the hardware state. We hand ERESTART requests
* to the iv_reset callback so the driver can decide. If
* a device does not fill in iv_reset then it defaults to one
* that returns ENETRESET. Otherwise a driver may return
* ENETRESET (in which case a full reset will be done) or
* 0 to mean there's no need to do anything (e.g. when the
* change has no effect on the driver/device).
*/
if (error == ERESTART)
error = IFNET_IS_UP_RUNNING(vap->iv_ifp) ?
vap->iv_reset(vap, ireq->i_type) : 0;
if (error == ENETRESET) {
/* XXX need to re-think AUTO handling */
if (IS_UP_AUTO(vap))
ieee80211_init(vap);
error = 0;
}
return error;
}
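The ERESTART/ENETRESET convention documented in the comment above can be modeled in a small standalone sketch. Everything here (the `sk_*` names, the simplified "driver" callback, the counter standing in for `ieee80211_init()`) is hypothetical; the real code hands ERESTART to the driver's `iv_reset` callback and performs a full re-initialization for ENETRESET.

```c
#include <assert.h>

/*
 * Toy model of the ERESTART/ENETRESET disposition described above.
 * All sk_* names are hypothetical stand-ins for net80211 internals.
 */
#define SK_ERESTART	1	/* reload hardware state only */
#define SK_ENETRESET	2	/* full device re-initialization */

struct sk_vap {
	int up_running;			/* interface is up and running */
	int (*reset)(struct sk_vap *);	/* stands in for iv_reset */
	int full_resets;		/* full re-inits performed */
};

/* Default reset escalates to a full re-init, as net80211's default does. */
static int
sk_default_reset(struct sk_vap *vap)
{
	(void)vap;
	return (SK_ENETRESET);
}

/* Apply the disposition rules from the comment above. */
static int
sk_dispatch(struct sk_vap *vap, int error)
{
	if (error == SK_ERESTART)
		error = vap->up_running ? vap->reset(vap) : 0;
	if (error == SK_ENETRESET) {
		if (vap->up_running)
			vap->full_resets++;	/* ieee80211_init() stand-in */
		error = 0;
	}
	return (error);
}

/* Run one ERESTART request and report how many full re-inits it caused. */
static int
sk_demo(int up_running)
{
	struct sk_vap vap = { up_running, sk_default_reset, 0 };

	assert(sk_dispatch(&vap, SK_ERESTART) == 0);
	return (vap.full_resets);
}
```

With the default reset hook, an ERESTART on a running interface escalates to one full re-init, while on a down interface it is a no-op; a driver that fills in its own `reset` can return 0 to suppress the re-init entirely.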
int
ieee80211_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data)
{
struct ieee80211vap *vap = ifp->if_softc;
struct ieee80211com *ic = vap->iv_ic;
int error = 0, wait = 0, ic_used;
struct ifreq *ifr;
struct ifaddr *ifa; /* XXX */
ic_used = (cmd != SIOCSIFMTU && cmd != SIOCG80211STATS);
if (ic_used && (error = ieee80211_com_vincref(vap)) != 0)
return (error);
switch (cmd) {
case SIOCSIFFLAGS:
IEEE80211_LOCK(ic);
if ((ifp->if_flags ^ vap->iv_ifflags) & IFF_PROMISC) {
/*
* Enable promiscuous mode when:
* 1. Interface is not a member of a bridge, or
* 2. Requested by user, or
* 3. In monitor (or adhoc-demo) mode.
*/
if (ifp->if_bridge == NULL ||
(ifp->if_flags & IFF_PPROMISC) != 0 ||
vap->iv_opmode == IEEE80211_M_MONITOR ||
(vap->iv_opmode == IEEE80211_M_AHDEMO &&
(vap->iv_caps & IEEE80211_C_TDMA) == 0)) {
ieee80211_promisc(vap,
ifp->if_flags & IFF_PROMISC);
vap->iv_ifflags ^= IFF_PROMISC;
}
}
if ((ifp->if_flags ^ vap->iv_ifflags) & IFF_ALLMULTI) {
ieee80211_allmulti(vap, ifp->if_flags & IFF_ALLMULTI);
vap->iv_ifflags ^= IFF_ALLMULTI;
}
if (ifp->if_flags & IFF_UP) {
/*
* Bring ourselves up unless we're already operational.
* If we're the first vap and the parent is not up
* then it will automatically be brought up as a
* side-effect of bringing ourself up.
*/
if (vap->iv_state == IEEE80211_S_INIT) {
if (ic->ic_nrunning == 0)
wait = 1;
ieee80211_start_locked(vap);
}
} else if (ifp->if_drv_flags & IFF_DRV_RUNNING) {
/*
* Stop ourselves. If we are the last vap to be
* marked down the parent will also be taken down.
*/
if (ic->ic_nrunning == 1)
wait = 1;
ieee80211_stop_locked(vap);
}
IEEE80211_UNLOCK(ic);
/* Wait for parent ioctl handler if it was queued */
if (wait) {
ieee80211_waitfor_parent(ic);
/*
* Check if the MAC address was changed
* via SIOCSIFLLADDR ioctl.
*
* NB: device may be detached during initialization;
* use if_ioctl for existence check.
*/
if_addr_rlock(ifp);
if (ifp->if_ioctl == ieee80211_ioctl &&
(ifp->if_flags & IFF_UP) == 0 &&
!IEEE80211_ADDR_EQ(vap->iv_myaddr, IF_LLADDR(ifp)))
IEEE80211_ADDR_COPY(vap->iv_myaddr,
IF_LLADDR(ifp));
if_addr_runlock(ifp);
}
break;
case SIOCADDMULTI:
case SIOCDELMULTI:
ieee80211_runtask(ic, &ic->ic_mcast_task);
break;
case SIOCSIFMEDIA:
case SIOCGIFMEDIA:
ifr = (struct ifreq *)data;
error = ifmedia_ioctl(ifp, ifr, &vap->iv_media, cmd);
break;
case SIOCG80211:
error = ieee80211_ioctl_get80211(vap, cmd,
(struct ieee80211req *) data);
break;
case SIOCS80211:
error = priv_check(curthread, PRIV_NET80211_MANAGE);
if (error == 0)
error = ieee80211_ioctl_set80211(vap, cmd,
(struct ieee80211req *) data);
break;
case SIOCG80211STATS:
ifr = (struct ifreq *)data;
copyout(&vap->iv_stats, ifr_data_get_ptr(ifr),
sizeof (vap->iv_stats));
break;
case SIOCSIFMTU:
ifr = (struct ifreq *)data;
if (!(IEEE80211_MTU_MIN <= ifr->ifr_mtu &&
ifr->ifr_mtu <= IEEE80211_MTU_MAX))
error = EINVAL;
else
ifp->if_mtu = ifr->ifr_mtu;
break;
case SIOCSIFADDR:
/*
* XXX Handle this directly so we can suppress if_init calls.
* XXX This should be done in ether_ioctl but for the moment
* XXX there are too many other parts of the system that
* XXX set IFF_UP and so suppress if_init being called when
* XXX it should be.
*/
ifa = (struct ifaddr *) data;
switch (ifa->ifa_addr->sa_family) {
#ifdef INET
case AF_INET:
if ((ifp->if_flags & IFF_UP) == 0) {
ifp->if_flags |= IFF_UP;
ifp->if_init(ifp->if_softc);
}
arp_ifinit(ifp, ifa);
break;
#endif
default:
if ((ifp->if_flags & IFF_UP) == 0) {
ifp->if_flags |= IFF_UP;
ifp->if_init(ifp->if_softc);
}
break;
}
break;
default:
/*
* Pass unknown ioctls first to the driver, and if it
* returns ENOTTY, then to the generic Ethernet handler.
*/
if (ic->ic_ioctl != NULL &&
(error = ic->ic_ioctl(ic, cmd, data)) != ENOTTY)
break;
error = ether_ioctl(ifp, cmd, data);
break;
}
if (ic_used)
ieee80211_com_vdecref(vap);
return (error);
}
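The SIOCSIFFLAGS case above uses a compact idiom worth calling out: XOR the newly requested `if_flags` against a cached copy (`iv_ifflags`) to find exactly which bits changed, act on each, then toggle the cached bit back into sync. The following sketch isolates that pattern; the `DEMO_*` values and struct are illustrative stand-ins, not the real `IFF_*` flags.

```c
#include <assert.h>

/*
 * Sketch of the flag-delta pattern used in the SIOCSIFFLAGS case:
 * XOR requested flags against a cached copy to find changed bits,
 * act on each one, then toggle the cache back in sync.
 */
#define DEMO_PROMISC	0x01
#define DEMO_ALLMULTI	0x02

struct demo_if {
	unsigned int if_flags;		/* flags requested by the stack */
	unsigned int cached;		/* flags the driver last acted on */
	int promisc_events;		/* times promiscuity was programmed */
};

static void
demo_update_flags(struct demo_if *dif)
{
	if ((dif->if_flags ^ dif->cached) & DEMO_PROMISC) {
		dif->promisc_events++;		/* reprogram hardware here */
		dif->cached ^= DEMO_PROMISC;	/* resync, as the code above does */
	}
	if ((dif->if_flags ^ dif->cached) & DEMO_ALLMULTI)
		dif->cached ^= DEMO_ALLMULTI;
}

/* Request promiscuous mode twice; only the first call should act. */
static int
demo_run(void)
{
	struct demo_if dif = { DEMO_PROMISC, 0, 0 };

	demo_update_flags(&dif);
	demo_update_flags(&dif);	/* no delta: nothing reprogrammed */
	assert(dif.cached == DEMO_PROMISC);
	return (dif.promisc_events);
}
```

Because only changed bits trigger work, a redundant SIOCSIFFLAGS with the same flags is a cheap no-op, which matters for an ioctl that userland tools issue frequently.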
Index: projects/clang800-import/sys/netgraph/ng_iface.c
===================================================================
--- projects/clang800-import/sys/netgraph/ng_iface.c (revision 343955)
+++ projects/clang800-import/sys/netgraph/ng_iface.c (revision 343956)
@@ -1,808 +1,817 @@
/*
* ng_iface.c
*/
/*-
* Copyright (c) 1996-1999 Whistle Communications, Inc.
* All rights reserved.
*
* Subject to the following obligations and disclaimer of warranty, use and
* redistribution of this software, in source or object code forms, with or
* without modifications are expressly permitted by Whistle Communications;
* provided, however, that:
* 1. Any and all reproductions of the source or object code must include the
* copyright notice above and the following disclaimer of warranties; and
* 2. No rights are granted, in any manner or form, to use Whistle
* Communications, Inc. trademarks, including the mark "WHISTLE
* COMMUNICATIONS" on advertising, endorsements, or otherwise except as
* such appears in the above copyright notice or in the software.
*
* THIS SOFTWARE IS BEING PROVIDED BY WHISTLE COMMUNICATIONS "AS IS", AND
* TO THE MAXIMUM EXTENT PERMITTED BY LAW, WHISTLE COMMUNICATIONS MAKES NO
* REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, REGARDING THIS SOFTWARE,
* INCLUDING WITHOUT LIMITATION, ANY AND ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
* WHISTLE COMMUNICATIONS DOES NOT WARRANT, GUARANTEE, OR MAKE ANY
* REPRESENTATIONS REGARDING THE USE OF, OR THE RESULTS OF THE USE OF THIS
* SOFTWARE IN TERMS OF ITS CORRECTNESS, ACCURACY, RELIABILITY OR OTHERWISE.
* IN NO EVENT SHALL WHISTLE COMMUNICATIONS BE LIABLE FOR ANY DAMAGES
* RESULTING FROM OR ARISING OUT OF ANY USE OF THIS SOFTWARE, INCLUDING
* WITHOUT LIMITATION, ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
* PUNITIVE, OR CONSEQUENTIAL DAMAGES, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES, LOSS OF USE, DATA OR PROFITS, HOWEVER CAUSED AND UNDER ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF WHISTLE COMMUNICATIONS IS ADVISED OF THE POSSIBILITY
* OF SUCH DAMAGE.
*
* Author: Archie Cobbs <archie@freebsd.org>
*
* $FreeBSD$
* $Whistle: ng_iface.c,v 1.33 1999/11/01 09:24:51 julian Exp $
*/
/*
* This node is also a system networking interface. It has
* a hook for each protocol (IP, AppleTalk, etc). Packets
* are simply relayed between the interface and the hooks.
*
* Interfaces are named ng0, ng1, etc. New nodes take the
* first available interface name.
*
* This node also includes Berkeley packet filter support.
*/
#include "opt_inet.h"
#include "opt_inet6.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>
#include <sys/errno.h>
#include <sys/proc.h>
#include <sys/random.h>
#include <sys/rmlock.h>
#include <sys/sockio.h>
#include <sys/socket.h>
+#include <sys/sysctl.h>
#include <sys/syslog.h>
#include <sys/libkern.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/if_types.h>
#include <net/bpf.h>
#include <net/netisr.h>
#include <net/route.h>
#include <net/vnet.h>
#include <netinet/in.h>
#include <netgraph/ng_message.h>
#include <netgraph/netgraph.h>
#include <netgraph/ng_parse.h>
#include <netgraph/ng_iface.h>
#ifdef NG_SEPARATE_MALLOC
static MALLOC_DEFINE(M_NETGRAPH_IFACE, "netgraph_iface", "netgraph iface node");
#else
#define M_NETGRAPH_IFACE M_NETGRAPH
#endif
+static SYSCTL_NODE(_net_graph, OID_AUTO, iface, CTLFLAG_RW, 0,
+ "Point to point netgraph interface");
+VNET_DEFINE_STATIC(int, ng_iface_max_nest) = 2;
+#define V_ng_iface_max_nest VNET(ng_iface_max_nest)
+SYSCTL_INT(_net_graph_iface, OID_AUTO, max_nesting, CTLFLAG_VNET | CTLFLAG_RW,
+ &VNET_NAME(ng_iface_max_nest), 0, "Max nested tunnels");
+
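The diff above replaces a hardcoded nesting limit of 1 with the `net.graph.iface.max_nesting` sysctl (default 2), bounding how many times one packet may re-enter a tunnel interface before being dropped. A toy model of that guard is sketched below; `demo_pkt` and the counter are stand-ins for the per-cookie mbuf tags that `if_tunnel_check_nesting()` actually counts.

```c
#include <assert.h>

/*
 * Toy model of the tunnel-nesting guard made tunable above.  Each
 * pass through the tunnel output path bumps a per-packet counter;
 * once the limit is hit the packet is rejected, much as
 * if_tunnel_check_nesting() rejects with ELOOP.
 */
#define DEMO_ELOOP	62	/* ELOOP's value on FreeBSD */

struct demo_pkt {
	int nesting;		/* times this packet entered a tunnel */
};

static int
demo_check_nesting(struct demo_pkt *p, int limit)
{
	if (p->nesting >= limit)
		return (DEMO_ELOOP);
	p->nesting++;
	return (0);
}

/* Count how many entries succeed before a given limit trips. */
static int
demo_entries_allowed(int limit)
{
	struct demo_pkt p = { 0 };
	int n = 0;

	while (demo_check_nesting(&p, limit) == 0)
		n++;
	return (n);
}
```

With the old hardcoded limit of 1 a tunnel-over-tunnel setup (e.g. ng_iface carried inside another ng_iface) would drop packets on the second entry; raising the sysctl permits deeper, still-bounded stacking.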
/* This struct describes one address family */
struct iffam {
sa_family_t family; /* Address family */
const char *hookname; /* Name for hook */
};
typedef const struct iffam *iffam_p;
/* List of address families supported by our interface */
const static struct iffam gFamilies[] = {
{ AF_INET, NG_IFACE_HOOK_INET },
{ AF_INET6, NG_IFACE_HOOK_INET6 },
{ AF_ATM, NG_IFACE_HOOK_ATM },
{ AF_NATM, NG_IFACE_HOOK_NATM },
};
#define NUM_FAMILIES nitems(gFamilies)
/* Node private data */
struct ng_iface_private {
struct ifnet *ifp; /* Our interface */
int unit; /* Interface unit number */
node_p node; /* Our netgraph node */
hook_p hooks[NUM_FAMILIES]; /* Hook for each address family */
struct rmlock lock; /* Protect private data changes */
};
typedef struct ng_iface_private *priv_p;
#define PRIV_RLOCK(priv, t) rm_rlock(&priv->lock, t)
#define PRIV_RUNLOCK(priv, t) rm_runlock(&priv->lock, t)
#define PRIV_WLOCK(priv) rm_wlock(&priv->lock)
#define PRIV_WUNLOCK(priv) rm_wunlock(&priv->lock)
/* Interface methods */
static void ng_iface_start(struct ifnet *ifp);
static int ng_iface_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data);
static int ng_iface_output(struct ifnet *ifp, struct mbuf *m0,
const struct sockaddr *dst, struct route *ro);
static void ng_iface_bpftap(struct ifnet *ifp,
struct mbuf *m, sa_family_t family);
static int ng_iface_send(struct ifnet *ifp, struct mbuf *m,
sa_family_t sa);
#ifdef DEBUG
static void ng_iface_print_ioctl(struct ifnet *ifp, int cmd, caddr_t data);
#endif
/* Netgraph methods */
static int ng_iface_mod_event(module_t, int, void *);
static ng_constructor_t ng_iface_constructor;
static ng_rcvmsg_t ng_iface_rcvmsg;
static ng_shutdown_t ng_iface_shutdown;
static ng_newhook_t ng_iface_newhook;
static ng_rcvdata_t ng_iface_rcvdata;
static ng_disconnect_t ng_iface_disconnect;
/* Helper stuff */
static iffam_p get_iffam_from_af(sa_family_t family);
static iffam_p get_iffam_from_hook(priv_p priv, hook_p hook);
static iffam_p get_iffam_from_name(const char *name);
static hook_p *get_hook_from_iffam(priv_p priv, iffam_p iffam);
/* List of commands and how to convert arguments to/from ASCII */
static const struct ng_cmdlist ng_iface_cmds[] = {
{
NGM_IFACE_COOKIE,
NGM_IFACE_GET_IFNAME,
"getifname",
NULL,
&ng_parse_string_type
},
{
NGM_IFACE_COOKIE,
NGM_IFACE_POINT2POINT,
"point2point",
NULL,
NULL
},
{
NGM_IFACE_COOKIE,
NGM_IFACE_BROADCAST,
"broadcast",
NULL,
NULL
},
{
NGM_IFACE_COOKIE,
NGM_IFACE_GET_IFINDEX,
"getifindex",
NULL,
&ng_parse_uint32_type
},
{ 0 }
};
/* Node type descriptor */
static struct ng_type typestruct = {
.version = NG_ABI_VERSION,
.name = NG_IFACE_NODE_TYPE,
.mod_event = ng_iface_mod_event,
.constructor = ng_iface_constructor,
.rcvmsg = ng_iface_rcvmsg,
.shutdown = ng_iface_shutdown,
.newhook = ng_iface_newhook,
.rcvdata = ng_iface_rcvdata,
.disconnect = ng_iface_disconnect,
.cmdlist = ng_iface_cmds,
};
NETGRAPH_INIT(iface, &typestruct);
VNET_DEFINE_STATIC(struct unrhdr *, ng_iface_unit);
#define V_ng_iface_unit VNET(ng_iface_unit)
/************************************************************************
HELPER STUFF
************************************************************************/
/*
* Get the family descriptor from the family ID
*/
static __inline iffam_p
get_iffam_from_af(sa_family_t family)
{
iffam_p iffam;
int k;
for (k = 0; k < NUM_FAMILIES; k++) {
iffam = &gFamilies[k];
if (iffam->family == family)
return (iffam);
}
return (NULL);
}
/*
* Get the family descriptor from the hook
*/
static __inline iffam_p
get_iffam_from_hook(priv_p priv, hook_p hook)
{
int k;
for (k = 0; k < NUM_FAMILIES; k++)
if (priv->hooks[k] == hook)
return (&gFamilies[k]);
return (NULL);
}
/*
* Get the hook from the iffam descriptor
*/
static __inline hook_p *
get_hook_from_iffam(priv_p priv, iffam_p iffam)
{
return (&priv->hooks[iffam - gFamilies]);
}
/*
* Get the iffam descriptor from the name
*/
static __inline iffam_p
get_iffam_from_name(const char *name)
{
iffam_p iffam;
int k;
for (k = 0; k < NUM_FAMILIES; k++) {
iffam = &gFamilies[k];
if (!strcmp(iffam->hookname, name))
return (iffam);
}
return (NULL);
}
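The three helpers above all do a linear scan of the small static `gFamilies` table, keyed by family, hook pointer, or hook name; for four entries a linear scan is idiomatic and cache-friendly. A standalone version of the name lookup is sketched below — the hook names follow ng_iface(4), but the table and hard-coded family numbers are illustrative stand-ins.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Standalone version of the gFamilies linear-lookup pattern.
 * Table contents are illustrative stand-ins.
 */
struct demo_fam {
	int family;
	const char *hookname;
};

static const struct demo_fam demo_fams[] = {
	{ 2, "inet" },		/* AF_INET on FreeBSD */
	{ 28, "inet6" },	/* AF_INET6 on FreeBSD */
};

static const struct demo_fam *
demo_fam_from_name(const char *name)
{
	size_t k;

	for (k = 0; k < sizeof(demo_fams) / sizeof(demo_fams[0]); k++)
		if (strcmp(demo_fams[k].hookname, name) == 0)
			return (&demo_fams[k]);
	return (NULL);
}
```

Returning a pointer into the const table (or NULL on a miss) mirrors the kernel helpers, and lets `get_hook_from_iffam()`-style code recover the index by pointer arithmetic.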
/************************************************************************
INTERFACE STUFF
************************************************************************/
/*
* Process an ioctl for the virtual interface
*/
static int
ng_iface_ioctl(struct ifnet *ifp, u_long command, caddr_t data)
{
struct ifreq *const ifr = (struct ifreq *) data;
int error = 0;
#ifdef DEBUG
ng_iface_print_ioctl(ifp, command, data);
#endif
switch (command) {
/* These two are mostly handled at a higher layer */
case SIOCSIFADDR:
ifp->if_flags |= IFF_UP;
ifp->if_drv_flags |= IFF_DRV_RUNNING;
ifp->if_drv_flags &= ~(IFF_DRV_OACTIVE);
break;
case SIOCGIFADDR:
break;
/* Set flags */
case SIOCSIFFLAGS:
/*
* If the interface is marked up and stopped, then start it.
* If it is marked down and running, then stop it.
*/
if (ifr->ifr_flags & IFF_UP) {
if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) {
ifp->if_drv_flags &= ~(IFF_DRV_OACTIVE);
ifp->if_drv_flags |= IFF_DRV_RUNNING;
}
} else {
if (ifp->if_drv_flags & IFF_DRV_RUNNING)
ifp->if_drv_flags &= ~(IFF_DRV_RUNNING |
IFF_DRV_OACTIVE);
}
break;
/* Set the interface MTU */
case SIOCSIFMTU:
if (ifr->ifr_mtu > NG_IFACE_MTU_MAX
|| ifr->ifr_mtu < NG_IFACE_MTU_MIN)
error = EINVAL;
else
ifp->if_mtu = ifr->ifr_mtu;
break;
/* Stuff that's not supported */
case SIOCADDMULTI:
case SIOCDELMULTI:
error = 0;
break;
case SIOCSIFPHYS:
error = EOPNOTSUPP;
break;
default:
error = EINVAL;
break;
}
return (error);
}
/*
* This routine is called to deliver a packet out the interface.
* We simply look at the address family and relay the packet to
* the corresponding hook, if it exists and is connected.
*/
static int
ng_iface_output(struct ifnet *ifp, struct mbuf *m,
const struct sockaddr *dst, struct route *ro)
{
uint32_t af;
int error;
/* Check interface flags */
if (!((ifp->if_flags & IFF_UP) &&
(ifp->if_drv_flags & IFF_DRV_RUNNING))) {
m_freem(m);
return (ENETDOWN);
}
/* Protect from deadly infinite recursion. */
- error = if_tunnel_check_nesting(ifp, m, NGM_IFACE_COOKIE, 1);
+ error = if_tunnel_check_nesting(ifp, m, NGM_IFACE_COOKIE,
+ V_ng_iface_max_nest);
if (error) {
m_freem(m);
return (error);
}
/* BPF writes need to be handled specially. */
if (dst->sa_family == AF_UNSPEC)
bcopy(dst->sa_data, &af, sizeof(af));
else
af = dst->sa_family;
/* Berkeley packet filter */
ng_iface_bpftap(ifp, m, af);
if (ALTQ_IS_ENABLED(&ifp->if_snd)) {
M_PREPEND(m, sizeof(sa_family_t), M_NOWAIT);
if (m == NULL) {
if_inc_counter(ifp, IFCOUNTER_OQDROPS, 1);
return (ENOBUFS);
}
*(sa_family_t *)m->m_data = af;
error = (ifp->if_transmit)(ifp, m);
} else
error = ng_iface_send(ifp, m, af);
return (error);
}
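The AF_UNSPEC branch in `ng_iface_output()` above implements a BPF-write convention: the injected packet arrives with a sockaddr whose family is AF_UNSPEC and whose first four data bytes carry the real address family, which the code extracts with `bcopy()`. The sketch below isolates that decoding; `demo_sockaddr` is a trimmed, hypothetical stand-in for `struct sockaddr`.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch of the AF_UNSPEC convention handled in ng_iface_output():
 * BPF writes pass a sockaddr whose family is AF_UNSPEC and whose
 * first four data bytes hold the real address family.
 */
#define DEMO_AF_UNSPEC	0

struct demo_sockaddr {
	uint8_t sa_family;
	char sa_data[14];
};

static uint32_t
demo_pick_af(const struct demo_sockaddr *dst)
{
	uint32_t af;

	if (dst->sa_family == DEMO_AF_UNSPEC)
		memcpy(&af, dst->sa_data, sizeof(af));	/* like the bcopy() above */
	else
		af = dst->sa_family;
	return (af);
}

/* Build an AF_UNSPEC sockaddr carrying 'real_af' and decode it back. */
static uint32_t
demo_unspec_roundtrip(uint32_t real_af)
{
	struct demo_sockaddr dst = { DEMO_AF_UNSPEC, { 0 } };

	memcpy(dst.sa_data, &real_af, sizeof(real_af));
	return (demo_pick_af(&dst));
}
```

Using `memcpy` rather than a pointer cast sidesteps alignment concerns, since `sa_data` carries no alignment guarantee for a 32-bit load.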
/*
* Start method is used only when ALTQ is enabled.
*/
static void
ng_iface_start(struct ifnet *ifp)
{
struct mbuf *m;
sa_family_t sa;
KASSERT(ALTQ_IS_ENABLED(&ifp->if_snd), ("%s without ALTQ", __func__));
for (;;) {
IFQ_DRV_DEQUEUE(&ifp->if_snd, m);
if (m == NULL)
break;
sa = *mtod(m, sa_family_t *);
m_adj(m, sizeof(sa_family_t));
ng_iface_send(ifp, m, sa);
}
}
/*
* Pass a packet to BPF (requires prepending a 4-byte AF header).
* Note the phony mbuf; this is OK because BPF treats it read-only.
*/
static void
ng_iface_bpftap(struct ifnet *ifp, struct mbuf *m, sa_family_t family)
{
KASSERT(family != AF_UNSPEC, ("%s: family=AF_UNSPEC", __func__));
if (bpf_peers_present(ifp->if_bpf)) {
int32_t family4 = (int32_t)family;
bpf_mtap2(ifp->if_bpf, &family4, sizeof(family4), m);
}
}
/*
* This routine does the actual delivery of the packet into
* netgraph(4). It is called from ng_iface_start() and
* ng_iface_output().
*/
static int
ng_iface_send(struct ifnet *ifp, struct mbuf *m, sa_family_t sa)
{
struct rm_priotracker priv_tracker;
const priv_p priv = (priv_p) ifp->if_softc;
const iffam_p iffam = get_iffam_from_af(sa);
hook_p hook;
int error;
int len;
/* Check address family to determine hook (if known) */
if (iffam == NULL) {
m_freem(m);
log(LOG_WARNING, "%s: can't handle af%d\n", ifp->if_xname, sa);
return (EAFNOSUPPORT);
}
/* Copy length before the mbuf gets invalidated. */
len = m->m_pkthdr.len;
PRIV_RLOCK(priv, &priv_tracker);
hook = *get_hook_from_iffam(priv, iffam);
if (hook == NULL) {
NG_FREE_M(m);
PRIV_RUNLOCK(priv, &priv_tracker);
return (ENETDOWN);
}
NG_HOOK_REF(hook);
PRIV_RUNLOCK(priv, &priv_tracker);
NG_OUTBOUND_THREAD_REF();
NG_SEND_DATA_ONLY(error, hook, m);
NG_OUTBOUND_THREAD_UNREF();
NG_HOOK_UNREF(hook);
/* Update stats. */
if (error == 0) {
if_inc_counter(ifp, IFCOUNTER_OBYTES, len);
if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1);
}
return (error);
}
#ifdef DEBUG
/*
* Display an ioctl to the virtual interface
*/
static void
ng_iface_print_ioctl(struct ifnet *ifp, int command, caddr_t data)
{
char *str;
switch (command & IOC_DIRMASK) {
case IOC_VOID:
str = "IO";
break;
case IOC_OUT:
str = "IOR";
break;
case IOC_IN:
str = "IOW";
break;
case IOC_INOUT:
str = "IORW";
break;
default:
str = "IO??";
}
log(LOG_DEBUG, "%s: %s('%c', %d, char[%d])\n",
ifp->if_xname,
str,
IOCGROUP(command),
command & 0xff,
IOCPARM_LEN(command));
}
#endif /* DEBUG */
/************************************************************************
NETGRAPH NODE STUFF
************************************************************************/
/*
* Constructor for a node
*/
static int
ng_iface_constructor(node_p node)
{
struct ifnet *ifp;
priv_p priv;
/* Allocate node and interface private structures */
priv = malloc(sizeof(*priv), M_NETGRAPH_IFACE, M_WAITOK | M_ZERO);
ifp = if_alloc(IFT_PROPVIRTUAL);
if (ifp == NULL) {
free(priv, M_NETGRAPH_IFACE);
return (ENOMEM);
}
rm_init(&priv->lock, "ng_iface private rmlock");
/* Link them together */
ifp->if_softc = priv;
priv->ifp = ifp;
/* Get an interface unit number */
priv->unit = alloc_unr(V_ng_iface_unit);
/* Link together node and private info */
NG_NODE_SET_PRIVATE(node, priv);
priv->node = node;
/* Initialize interface structure */
if_initname(ifp, NG_IFACE_IFACE_NAME, priv->unit);
ifp->if_output = ng_iface_output;
ifp->if_start = ng_iface_start;
ifp->if_ioctl = ng_iface_ioctl;
ifp->if_mtu = NG_IFACE_MTU_DEFAULT;
ifp->if_flags = (IFF_SIMPLEX|IFF_POINTOPOINT|IFF_NOARP|IFF_MULTICAST);
ifp->if_type = IFT_PROPVIRTUAL; /* XXX */
ifp->if_addrlen = 0; /* XXX */
ifp->if_hdrlen = 0; /* XXX */
ifp->if_baudrate = 64000; /* XXX */
IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen);
ifp->if_snd.ifq_drv_maxlen = ifqmaxlen;
IFQ_SET_READY(&ifp->if_snd);
/* Give this node the same name as the interface (if possible) */
if (ng_name_node(node, ifp->if_xname) != 0)
log(LOG_WARNING, "%s: can't acquire netgraph name\n",
ifp->if_xname);
/* Attach the interface */
if_attach(ifp);
bpfattach(ifp, DLT_NULL, sizeof(u_int32_t));
/* Done */
return (0);
}
/*
* Give our ok for a hook to be added
*/
static int
ng_iface_newhook(node_p node, hook_p hook, const char *name)
{
const iffam_p iffam = get_iffam_from_name(name);
const priv_p priv = NG_NODE_PRIVATE(node);
hook_p *hookptr;
if (iffam == NULL)
return (EPFNOSUPPORT);
PRIV_WLOCK(priv);
hookptr = get_hook_from_iffam(priv, iffam);
if (*hookptr != NULL) {
PRIV_WUNLOCK(priv);
return (EISCONN);
}
*hookptr = hook;
NG_HOOK_HI_STACK(hook);
NG_HOOK_SET_TO_INBOUND(hook);
PRIV_WUNLOCK(priv);
return (0);
}
/*
* Receive a control message
*/
static int
ng_iface_rcvmsg(node_p node, item_p item, hook_p lasthook)
{
const priv_p priv = NG_NODE_PRIVATE(node);
struct ifnet *const ifp = priv->ifp;
struct ng_mesg *resp = NULL;
int error = 0;
struct ng_mesg *msg;
NGI_GET_MSG(item, msg);
switch (msg->header.typecookie) {
case NGM_IFACE_COOKIE:
switch (msg->header.cmd) {
case NGM_IFACE_GET_IFNAME:
NG_MKRESPONSE(resp, msg, IFNAMSIZ, M_NOWAIT);
if (resp == NULL) {
error = ENOMEM;
break;
}
strlcpy(resp->data, ifp->if_xname, IFNAMSIZ);
break;
case NGM_IFACE_POINT2POINT:
case NGM_IFACE_BROADCAST:
{
/* Deny request if interface is UP */
if ((ifp->if_flags & IFF_UP) != 0)
return (EBUSY);
/* Change flags */
switch (msg->header.cmd) {
case NGM_IFACE_POINT2POINT:
ifp->if_flags |= IFF_POINTOPOINT;
ifp->if_flags &= ~IFF_BROADCAST;
break;
case NGM_IFACE_BROADCAST:
ifp->if_flags &= ~IFF_POINTOPOINT;
ifp->if_flags |= IFF_BROADCAST;
break;
}
break;
}
case NGM_IFACE_GET_IFINDEX:
NG_MKRESPONSE(resp, msg, sizeof(uint32_t), M_NOWAIT);
if (resp == NULL) {
error = ENOMEM;
break;
}
*((uint32_t *)resp->data) = priv->ifp->if_index;
break;
default:
error = EINVAL;
break;
}
break;
case NGM_FLOW_COOKIE:
switch (msg->header.cmd) {
case NGM_LINK_IS_UP:
if_link_state_change(ifp, LINK_STATE_UP);
break;
case NGM_LINK_IS_DOWN:
if_link_state_change(ifp, LINK_STATE_DOWN);
break;
default:
break;
}
break;
default:
error = EINVAL;
break;
}
NG_RESPOND_MSG(error, node, item, resp);
NG_FREE_MSG(msg);
return (error);
}
/*
* Receive data from a hook. Pass the packet to the correct input routine.
*/
static int
ng_iface_rcvdata(hook_p hook, item_p item)
{
const priv_p priv = NG_NODE_PRIVATE(NG_HOOK_NODE(hook));
const iffam_p iffam = get_iffam_from_hook(priv, hook);
struct ifnet *const ifp = priv->ifp;
struct mbuf *m;
int isr;
NGI_GET_M(item, m);
NG_FREE_ITEM(item);
/* Sanity checks */
KASSERT(iffam != NULL, ("%s: iffam", __func__));
M_ASSERTPKTHDR(m);
if ((ifp->if_flags & IFF_UP) == 0) {
NG_FREE_M(m);
return (ENETDOWN);
}
/* Update interface stats */
if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1);
if_inc_counter(ifp, IFCOUNTER_IBYTES, m->m_pkthdr.len);
/* Note receiving interface */
m->m_pkthdr.rcvif = ifp;
/* Berkeley packet filter */
ng_iface_bpftap(ifp, m, iffam->family);
/* Send packet */
switch (iffam->family) {
#ifdef INET
case AF_INET:
isr = NETISR_IP;
break;
#endif
#ifdef INET6
case AF_INET6:
isr = NETISR_IPV6;
break;
#endif
default:
m_freem(m);
return (EAFNOSUPPORT);
}
random_harvest_queue(m, sizeof(*m), RANDOM_NET_NG);
M_SETFIB(m, ifp->if_fib);
netisr_dispatch(isr, m);
return (0);
}
/*
* Shutdown and remove the node and its associated interface.
*/
static int
ng_iface_shutdown(node_p node)
{
const priv_p priv = NG_NODE_PRIVATE(node);
/*
* The ifnet may be in a different vnet than the netgraph node,
* hence we have to change the current vnet context here.
*/
CURVNET_SET_QUIET(priv->ifp->if_vnet);
bpfdetach(priv->ifp);
if_detach(priv->ifp);
if_free(priv->ifp);
CURVNET_RESTORE();
priv->ifp = NULL;
free_unr(V_ng_iface_unit, priv->unit);
rm_destroy(&priv->lock);
free(priv, M_NETGRAPH_IFACE);
NG_NODE_SET_PRIVATE(node, NULL);
NG_NODE_UNREF(node);
return (0);
}
/*
* Hook disconnection. Note that we do *not* shutdown when all
* hooks have been disconnected.
*/
static int
ng_iface_disconnect(hook_p hook)
{
const priv_p priv = NG_NODE_PRIVATE(NG_HOOK_NODE(hook));
const iffam_p iffam = get_iffam_from_hook(priv, hook);
if (iffam == NULL)
panic("%s", __func__);
PRIV_WLOCK(priv);
*get_hook_from_iffam(priv, iffam) = NULL;
PRIV_WUNLOCK(priv);
return (0);
}
/*
* Handle loading and unloading for this node type.
*/
static int
ng_iface_mod_event(module_t mod, int event, void *data)
{
int error = 0;
switch (event) {
case MOD_LOAD:
case MOD_UNLOAD:
break;
default:
error = EOPNOTSUPP;
break;
}
return (error);
}
static void
vnet_ng_iface_init(const void *unused)
{
V_ng_iface_unit = new_unrhdr(0, 0xffff, NULL);
}
VNET_SYSINIT(vnet_ng_iface_init, SI_SUB_PSEUDO, SI_ORDER_ANY,
vnet_ng_iface_init, NULL);
static void
vnet_ng_iface_uninit(const void *unused)
{
delete_unrhdr(V_ng_iface_unit);
}
VNET_SYSUNINIT(vnet_ng_iface_uninit, SI_SUB_INIT_IF, SI_ORDER_ANY,
vnet_ng_iface_uninit, NULL);
Index: projects/clang800-import/sys/netgraph/ng_ipfw.c
===================================================================
--- projects/clang800-import/sys/netgraph/ng_ipfw.c (revision 343955)
+++ projects/clang800-import/sys/netgraph/ng_ipfw.c (revision 343956)
@@ -1,363 +1,360 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright 2005, Gleb Smirnoff <glebius@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#include "opt_inet.h"
#include "opt_inet6.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mbuf.h>
#include <sys/malloc.h>
#include <sys/ctype.h>
#include <sys/errno.h>
#include <sys/rwlock.h>
#include <sys/socket.h>
#include <sys/syslog.h>
#include <net/if.h>
#include <net/if_var.h>
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/in_var.h>
#include <netinet/ip_var.h>
#include <netinet/ip_fw.h>
#include <netinet/ip.h>
#include <netinet/ip6.h>
#include <netinet6/ip6_var.h>
#include <netpfil/ipfw/ip_fw_private.h>
#include <netgraph/ng_message.h>
#include <netgraph/ng_parse.h>
#include <netgraph/ng_ipfw.h>
#include <netgraph/netgraph.h>
static int ng_ipfw_mod_event(module_t mod, int event, void *data);
static ng_constructor_t ng_ipfw_constructor;
static ng_shutdown_t ng_ipfw_shutdown;
static ng_newhook_t ng_ipfw_newhook;
static ng_connect_t ng_ipfw_connect;
static ng_findhook_t ng_ipfw_findhook;
static ng_rcvdata_t ng_ipfw_rcvdata;
static ng_disconnect_t ng_ipfw_disconnect;
static hook_p ng_ipfw_findhook1(node_p, u_int16_t);
static int ng_ipfw_input(struct mbuf **, int, struct ip_fw_args *,
int);
/* We have only one node */
static node_p fw_node;
/* Netgraph node type descriptor */
static struct ng_type ng_ipfw_typestruct = {
.version = NG_ABI_VERSION,
.name = NG_IPFW_NODE_TYPE,
.mod_event = ng_ipfw_mod_event,
.constructor = ng_ipfw_constructor,
.shutdown = ng_ipfw_shutdown,
.newhook = ng_ipfw_newhook,
.connect = ng_ipfw_connect,
.findhook = ng_ipfw_findhook,
.rcvdata = ng_ipfw_rcvdata,
.disconnect = ng_ipfw_disconnect,
};
NETGRAPH_INIT(ipfw, &ng_ipfw_typestruct);
MODULE_DEPEND(ng_ipfw, ipfw, 3, 3, 3);
/* Information we store for each hook */
struct ng_ipfw_hook_priv {
hook_p hook;
u_int16_t rulenum;
};
typedef struct ng_ipfw_hook_priv *hpriv_p;
static int
ng_ipfw_mod_event(module_t mod, int event, void *data)
{
int error = 0;
switch (event) {
case MOD_LOAD:
if (ng_ipfw_input_p != NULL) {
error = EEXIST;
break;
}
/* Setup node without any private data */
if ((error = ng_make_node_common(&ng_ipfw_typestruct, &fw_node))
!= 0) {
log(LOG_ERR, "%s: can't create ng_ipfw node", __func__);
break;
}
/* Try to name node */
if (ng_name_node(fw_node, "ipfw") != 0)
log(LOG_WARNING, "%s: failed to name node \"ipfw\"",
__func__);
/* Register hook */
ng_ipfw_input_p = ng_ipfw_input;
break;
case MOD_UNLOAD:
/*
* This won't happen if a node exists.
* ng_ipfw_input_p is already cleared.
*/
break;
default:
error = EOPNOTSUPP;
break;
}
return (error);
}
static int
ng_ipfw_constructor(node_p node)
{
return (EINVAL); /* Only one node */
}
static int
ng_ipfw_newhook(node_p node, hook_p hook, const char *name)
{
hpriv_p hpriv;
u_int16_t rulenum;
const char *cp;
char *endptr;
/* Protect from leading zero */
if (name[0] == '0' && name[1] != '\0')
return (EINVAL);
/* Check that name contains only digits */
for (cp = name; *cp != '\0'; cp++)
if (!isdigit(*cp))
return (EINVAL);
/* Convert it to integer */
rulenum = (u_int16_t)strtol(name, &endptr, 10);
if (*endptr != '\0')
return (EINVAL);
/* Allocate memory for this hook's private data */
hpriv = malloc(sizeof(*hpriv), M_NETGRAPH, M_NOWAIT | M_ZERO);
if (hpriv == NULL)
return (ENOMEM);
hpriv->hook = hook;
hpriv->rulenum = rulenum;
NG_HOOK_SET_PRIVATE(hook, hpriv);
return (0);
}
/*
* Set hooks into queueing mode, to avoid recursion between
* netgraph layer and ip_{input,output}.
*/
static int
ng_ipfw_connect(hook_p hook)
{
NG_HOOK_FORCE_QUEUE(hook);
return (0);
}
/* Look up hook by name */
static hook_p
ng_ipfw_findhook(node_p node, const char *name)
{
u_int16_t n; /* numeric representation of hook */
char *endptr;
n = (u_int16_t)strtol(name, &endptr, 10);
if (*endptr != '\0')
return NULL;
return ng_ipfw_findhook1(node, n);
}
/* Look up hook by rule number */
static hook_p
ng_ipfw_findhook1(node_p node, u_int16_t rulenum)
{
hook_p hook;
hpriv_p hpriv;
LIST_FOREACH(hook, &node->nd_hooks, hk_hooks) {
hpriv = NG_HOOK_PRIVATE(hook);
if (NG_HOOK_IS_VALID(hook) && (hpriv->rulenum == rulenum))
return (hook);
}
return (NULL);
}
static int
ng_ipfw_rcvdata(hook_p hook, item_p item)
{
struct m_tag *tag;
struct ipfw_rule_ref *r;
struct mbuf *m;
struct ip *ip;
NGI_GET_M(item, m);
NG_FREE_ITEM(item);
tag = m_tag_locate(m, MTAG_IPFW_RULE, 0, NULL);
if (tag == NULL) {
NG_FREE_M(m);
return (EINVAL); /* XXX: find something better */
}
if (m->m_len < sizeof(struct ip) &&
(m = m_pullup(m, sizeof(struct ip))) == NULL)
return (ENOBUFS);
ip = mtod(m, struct ip *);
r = (struct ipfw_rule_ref *)(tag + 1);
if (r->info & IPFW_INFO_IN) {
switch (ip->ip_v) {
#ifdef INET
case IPVERSION:
ip_input(m);
return (0);
#endif
#ifdef INET6
case IPV6_VERSION >> 4:
ip6_input(m);
return (0);
#endif
}
} else {
switch (ip->ip_v) {
#ifdef INET
case IPVERSION:
return (ip_output(m, NULL, NULL, IP_FORWARDING,
NULL, NULL));
#endif
#ifdef INET6
case IPV6_VERSION >> 4:
return (ip6_output(m, NULL, NULL, 0, NULL,
NULL, NULL));
#endif
}
}
/* unknown IP protocol version */
NG_FREE_M(m);
return (EPROTONOSUPPORT);
}
static int
ng_ipfw_input(struct mbuf **m0, int dir, struct ip_fw_args *fwa, int tee)
{
struct mbuf *m;
- struct ip *ip;
hook_p hook;
int error = 0;
/*
* Node must be loaded and corresponding hook must be present.
*/
if (fw_node == NULL ||
(hook = ng_ipfw_findhook1(fw_node, fwa->rule.info)) == NULL)
return (ESRCH); /* no hook associated with this rule */
/*
* We have two modes: in normal mode we add a tag to the packet, which
* is needed to return the packet back to the IP stack. In tee mode we
* make a copy of the packet and forward it into netgraph without a tag.
*/
if (tee == 0) {
struct m_tag *tag;
struct ipfw_rule_ref *r;
m = *m0;
*m0 = NULL; /* it belongs now to netgraph */
tag = m_tag_alloc(MTAG_IPFW_RULE, 0, sizeof(*r),
M_NOWAIT|M_ZERO);
if (tag == NULL) {
m_freem(m);
return (ENOMEM);
}
r = (struct ipfw_rule_ref *)(tag + 1);
*r = fwa->rule;
r->info &= IPFW_ONEPASS; /* keep this info */
r->info |= dir ? IPFW_INFO_IN : IPFW_INFO_OUT;
m_tag_prepend(m, tag);
} else
if ((m = m_dup(*m0, M_NOWAIT)) == NULL)
return (ENOMEM); /* which is ignored */
if (m->m_len < sizeof(struct ip) &&
(m = m_pullup(m, sizeof(struct ip))) == NULL)
return (EINVAL);
-
- ip = mtod(m, struct ip *);
NG_SEND_DATA_ONLY(error, hook, m);
return (error);
}
static int
ng_ipfw_shutdown(node_p node)
{
/*
* After our single node has been removed,
* the only thing that can be done is
* 'kldunload ng_ipfw.ko'
*/
ng_ipfw_input_p = NULL;
NG_NODE_UNREF(node);
return (0);
}
static int
ng_ipfw_disconnect(hook_p hook)
{
const hpriv_p hpriv = NG_HOOK_PRIVATE(hook);
free(hpriv, M_NETGRAPH);
NG_HOOK_SET_PRIVATE(hook, NULL);
return (0);
}
Index: projects/clang800-import/sys/netinet/cc/cc_cdg.c
===================================================================
--- projects/clang800-import/sys/netinet/cc/cc_cdg.c (revision 343955)
+++ projects/clang800-import/sys/netinet/cc/cc_cdg.c (revision 343956)
@@ -1,714 +1,718 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2009-2013
* Swinburne University of Technology, Melbourne, Australia
* All rights reserved.
*
* This software was developed at the Centre for Advanced Internet
* Architectures, Swinburne University of Technology, by David Hayes, made
* possible in part by a gift from The Cisco University Research Program Fund,
* a corporate advised fund of Silicon Valley Community Foundation. Development
* and testing were further assisted by a grant from the FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
* CAIA Delay-Gradient (CDG) congestion control algorithm
*
* An implementation of the delay-gradient congestion control algorithm proposed
* in the following paper:
*
* D. A. Hayes and G. Armitage, "Revisiting TCP Congestion Control using Delay
* Gradients", in IFIP Networking, Valencia, Spain, 9-13 May 2011.
*
* Developed as part of the NewTCP research project at Swinburne University of
* Technology's Centre for Advanced Internet Architectures, Melbourne,
* Australia. More details are available at:
* http://caia.swin.edu.au/urp/newtcp/
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/hhook.h>
#include <sys/kernel.h>
#include <sys/khelp.h>
#include <sys/limits.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/sysctl.h>
#include <sys/systm.h>
#include <net/vnet.h>
#include <netinet/tcp.h>
#include <netinet/tcp_seq.h>
#include <netinet/tcp_timer.h>
#include <netinet/tcp_var.h>
#include <netinet/cc/cc.h>
#include <netinet/cc/cc_module.h>
#include <netinet/khelp/h_ertt.h>
#include <vm/uma.h>
#define CDG_VERSION "0.1"
/* Private delay-gradient induced congestion control signal. */
#define CC_CDG_DELAY 0x01000000
/* NewReno window deflation factor on loss (as a percentage). */
#define RENO_BETA 50
/* Queue states. */
#define CDG_Q_EMPTY 1
#define CDG_Q_RISING 2
#define CDG_Q_FALLING 3
#define CDG_Q_FULL 4
#define CDG_Q_UNKNOWN 9999
/* Number of bit shifts used in probexp lookup table. */
#define EXP_PREC 15
/* Largest gradient represented in probexp lookup table. */
#define MAXGRAD 5
/*
* Delay Precision Enhance - number of bit shifts used for qtrend related
* integer arithmetic precision.
*/
#define D_P_E 7
struct qdiff_sample {
long qdiff;
STAILQ_ENTRY(qdiff_sample) qdiff_lnk;
};
struct cdg {
long max_qtrend;
long min_qtrend;
STAILQ_HEAD(minrtts_head, qdiff_sample) qdiffmin_q;
STAILQ_HEAD(maxrtts_head, qdiff_sample) qdiffmax_q;
long window_incr;
/* rttcount for window increase when in congestion avoidance */
long rtt_count;
/* maximum measured rtt within an rtt period */
int maxrtt_in_rtt;
/* maximum measured rtt within prev rtt period */
int maxrtt_in_prevrtt;
/* minimum measured rtt within an rtt period */
int minrtt_in_rtt;
/* minimum measured rtt within prev rtt period */
int minrtt_in_prevrtt;
/* consecutive congestion episode counter */
uint32_t consec_cong_cnt;
/* when tracking a new reno type loss window */
uint32_t shadow_w;
/* maximum number of samples in the moving average queue */
int sample_q_size;
/* number of samples in the moving average queue */
int num_samples;
/* estimate of the queue state of the path */
int queue_state;
};
/*
* Lookup table for:
* (1 - exp(-x)) << EXP_PREC, where x = [0,MAXGRAD] in 2^-7 increments
*
* Note: probexp[0] is set to 10 (not 0) as a safety for very low increase
* gradients.
*/
static const int probexp[641] = {
10,255,508,759,1008,1255,1501,1744,1985,2225,2463,2698,2932,3165,3395,3624,
3850,4075,4299,4520,4740,4958,5175,5389,5602,5814,6024,6232,6438,6643,6846,
7048,7248,7447,7644,7839,8033,8226,8417,8606,8794,8981,9166,9350,9532,9713,
9892,10070,10247,10422,10596,10769,10940,11110,11278,11445,11611,11776,11939,
12101,12262,12422,12580,12737,12893,13048,13201,13354,13505,13655,13803,13951,
14097,14243,14387,14530,14672,14813,14952,15091,15229,15365,15500,15635,15768,
15900,16032,16162,16291,16419,16547,16673,16798,16922,17046,17168,17289,17410,
17529,17648,17766,17882,17998,18113,18227,18340,18453,18564,18675,18784,18893,
19001,19108,19215,19320,19425,19529,19632,19734,19835,19936,20036,20135,20233,
20331,20427,20523,20619,20713,20807,20900,20993,21084,21175,21265,21355,21444,
21532,21619,21706,21792,21878,21962,22046,22130,22213,22295,22376,22457,22537,
22617,22696,22774,22852,22929,23006,23082,23157,23232,23306,23380,23453,23525,
23597,23669,23739,23810,23879,23949,24017,24085,24153,24220,24286,24352,24418,
24483,24547,24611,24675,24738,24800,24862,24924,24985,25045,25106,25165,25224,
25283,25341,25399,25456,25513,25570,25626,25681,25737,25791,25846,25899,25953,
26006,26059,26111,26163,26214,26265,26316,26366,26416,26465,26514,26563,26611,
26659,26707,26754,26801,26847,26893,26939,26984,27029,27074,27118,27162,27206,
27249,27292,27335,27377,27419,27460,27502,27543,27583,27624,27664,27703,27743,
27782,27821,27859,27897,27935,27973,28010,28047,28084,28121,28157,28193,28228,
28263,28299,28333,28368,28402,28436,28470,28503,28536,28569,28602,28634,28667,
28699,28730,28762,28793,28824,28854,28885,28915,28945,28975,29004,29034,29063,
29092,29120,29149,29177,29205,29232,29260,29287,29314,29341,29368,29394,29421,
29447,29472,29498,29524,29549,29574,29599,29623,29648,29672,29696,29720,29744,
29767,29791,29814,29837,29860,29882,29905,29927,29949,29971,29993,30014,30036,
30057,30078,30099,30120,30141,30161,30181,30201,30221,30241,30261,30280,30300,
30319,30338,30357,30376,30394,30413,30431,30449,30467,30485,30503,30521,30538,
30555,30573,30590,30607,30624,30640,30657,30673,30690,30706,30722,30738,30753,
30769,30785,30800,30815,30831,30846,30861,30876,30890,30905,30919,30934,30948,
30962,30976,30990,31004,31018,31031,31045,31058,31072,31085,31098,31111,31124,
31137,31149,31162,31174,31187,31199,31211,31223,31235,31247,31259,31271,31283,
31294,31306,31317,31328,31339,31351,31362,31373,31383,31394,31405,31416,31426,
31436,31447,31457,31467,31477,31487,31497,31507,31517,31527,31537,31546,31556,
31565,31574,31584,31593,31602,31611,31620,31629,31638,31647,31655,31664,31673,
31681,31690,31698,31706,31715,31723,31731,31739,31747,31755,31763,31771,31778,
31786,31794,31801,31809,31816,31824,31831,31838,31846,31853,31860,31867,31874,
31881,31888,31895,31902,31908,31915,31922,31928,31935,31941,31948,31954,31960,
31967,31973,31979,31985,31991,31997,32003,32009,32015,32021,32027,32033,32038,
32044,32050,32055,32061,32066,32072,32077,32083,32088,32093,32098,32104,32109,
32114,32119,32124,32129,32134,32139,32144,32149,32154,32158,32163,32168,32173,
32177,32182,32186,32191,32195,32200,32204,32209,32213,32217,32222,32226,32230,
32234,32238,32242,32247,32251,32255,32259,32263,32267,32270,32274,32278,32282,
32286,32290,32293,32297,32301,32304,32308,32311,32315,32318,32322,32325,32329,
32332,32336,32339,32342,32346,32349,32352,32356,32359,32362,32365,32368,32371,
32374,32377,32381,32384,32387,32389,32392,32395,32398,32401,32404,32407,32410,
32412,32415,32418,32421,32423,32426,32429,32431,32434,32437,32439,32442,32444,
32447,32449,32452,32454,32457,32459,32461,32464,32466,32469,32471,32473,32476,
32478,32480,32482,32485,32487,32489,32491,32493,32495,32497,32500,32502,32504,
32506,32508,32510,32512,32514,32516,32518,32520,32522,32524,32526,32527,32529,
32531,32533,32535,32537,32538,32540,32542,32544,32545,32547};
static uma_zone_t qdiffsample_zone;
static MALLOC_DEFINE(M_CDG, "cdg data",
"Per connection data required for the CDG congestion control algorithm");
static int ertt_id;
VNET_DEFINE_STATIC(uint32_t, cdg_alpha_inc);
VNET_DEFINE_STATIC(uint32_t, cdg_beta_delay);
VNET_DEFINE_STATIC(uint32_t, cdg_beta_loss);
VNET_DEFINE_STATIC(uint32_t, cdg_smoothing_factor);
VNET_DEFINE_STATIC(uint32_t, cdg_exp_backoff_scale);
VNET_DEFINE_STATIC(uint32_t, cdg_consec_cong);
VNET_DEFINE_STATIC(uint32_t, cdg_hold_backoff);
#define V_cdg_alpha_inc VNET(cdg_alpha_inc)
#define V_cdg_beta_delay VNET(cdg_beta_delay)
#define V_cdg_beta_loss VNET(cdg_beta_loss)
#define V_cdg_smoothing_factor VNET(cdg_smoothing_factor)
#define V_cdg_exp_backoff_scale VNET(cdg_exp_backoff_scale)
#define V_cdg_consec_cong VNET(cdg_consec_cong)
#define V_cdg_hold_backoff VNET(cdg_hold_backoff)
/* Function prototypes. */
static int cdg_mod_init(void);
static int cdg_mod_destroy(void);
static void cdg_conn_init(struct cc_var *ccv);
static int cdg_cb_init(struct cc_var *ccv);
static void cdg_cb_destroy(struct cc_var *ccv);
static void cdg_cong_signal(struct cc_var *ccv, uint32_t signal_type);
static void cdg_ack_received(struct cc_var *ccv, uint16_t ack_type);
struct cc_algo cdg_cc_algo = {
.name = "cdg",
.mod_init = cdg_mod_init,
.ack_received = cdg_ack_received,
.cb_destroy = cdg_cb_destroy,
.cb_init = cdg_cb_init,
.conn_init = cdg_conn_init,
.cong_signal = cdg_cong_signal,
.mod_destroy = cdg_mod_destroy
};
/* Vnet created and being initialised. */
static void
cdg_init_vnet(const void *unused __unused)
{
V_cdg_alpha_inc = 0;
V_cdg_beta_delay = 70;
V_cdg_beta_loss = 50;
V_cdg_smoothing_factor = 8;
V_cdg_exp_backoff_scale = 3;
V_cdg_consec_cong = 5;
V_cdg_hold_backoff = 5;
}
static int
cdg_mod_init(void)
{
VNET_ITERATOR_DECL(v);
ertt_id = khelp_get_id("ertt");
if (ertt_id <= 0)
return (EINVAL);
qdiffsample_zone = uma_zcreate("cdg_qdiffsample",
sizeof(struct qdiff_sample), NULL, NULL, NULL, NULL, 0, 0);
VNET_LIST_RLOCK();
VNET_FOREACH(v) {
CURVNET_SET(v);
cdg_init_vnet(NULL);
CURVNET_RESTORE();
}
VNET_LIST_RUNLOCK();
cdg_cc_algo.post_recovery = newreno_cc_algo.post_recovery;
cdg_cc_algo.after_idle = newreno_cc_algo.after_idle;
return (0);
}
static int
cdg_mod_destroy(void)
{
uma_zdestroy(qdiffsample_zone);
return (0);
}
static int
cdg_cb_init(struct cc_var *ccv)
{
struct cdg *cdg_data;
cdg_data = malloc(sizeof(struct cdg), M_CDG, M_NOWAIT);
if (cdg_data == NULL)
return (ENOMEM);
cdg_data->shadow_w = 0;
cdg_data->max_qtrend = 0;
cdg_data->min_qtrend = 0;
cdg_data->queue_state = CDG_Q_UNKNOWN;
cdg_data->maxrtt_in_rtt = 0;
cdg_data->maxrtt_in_prevrtt = 0;
cdg_data->minrtt_in_rtt = INT_MAX;
cdg_data->minrtt_in_prevrtt = 0;
cdg_data->window_incr = 0;
cdg_data->rtt_count = 0;
cdg_data->consec_cong_cnt = 0;
cdg_data->sample_q_size = V_cdg_smoothing_factor;
cdg_data->num_samples = 0;
STAILQ_INIT(&cdg_data->qdiffmin_q);
STAILQ_INIT(&cdg_data->qdiffmax_q);
ccv->cc_data = cdg_data;
return (0);
}
static void
cdg_conn_init(struct cc_var *ccv)
{
struct cdg *cdg_data = ccv->cc_data;
/*
* Initialise the shadow_cwnd in case we are competing with loss-based
* flows from the start.
*/
cdg_data->shadow_w = CCV(ccv, snd_cwnd);
}
static void
cdg_cb_destroy(struct cc_var *ccv)
{
struct cdg *cdg_data;
struct qdiff_sample *qds, *qds_n;
cdg_data = ccv->cc_data;
qds = STAILQ_FIRST(&cdg_data->qdiffmin_q);
while (qds != NULL) {
qds_n = STAILQ_NEXT(qds, qdiff_lnk);
uma_zfree(qdiffsample_zone, qds);
qds = qds_n;
}
qds = STAILQ_FIRST(&cdg_data->qdiffmax_q);
while (qds != NULL) {
qds_n = STAILQ_NEXT(qds, qdiff_lnk);
uma_zfree(qdiffsample_zone, qds);
qds = qds_n;
}
free(ccv->cc_data, M_CDG);
}
static int
cdg_beta_handler(SYSCTL_HANDLER_ARGS)
{
int error;
uint32_t new;
new = *(uint32_t *)arg1;
error = sysctl_handle_int(oidp, &new, 0, req);
if (error == 0 && req->newptr != NULL) {
if (new == 0 || new > 100)
error = EINVAL;
else
*(uint32_t *)arg1 = new;
}
return (error);
}
static int
cdg_exp_backoff_scale_handler(SYSCTL_HANDLER_ARGS)
{
int error;
uint32_t new;
new = *(uint32_t *)arg1;
error = sysctl_handle_int(oidp, &new, 0, req);
if (error == 0 && req->newptr != NULL) {
if (new < 1)
error = EINVAL;
else
*(uint32_t *)arg1 = new;
}
return (error);
}
static inline uint32_t
cdg_window_decrease(struct cc_var *ccv, unsigned long owin, unsigned int beta)
{
return ((ulmin(CCV(ccv, snd_wnd), owin) * beta) / 100);
}
/*
* Window increase function
* This window increase function is independent of the initial window size
* to ensure small window flows are not discriminated against (i.e. fairness).
* It increases at 1pkt/rtt like Reno for alpha_inc rtts, and then 2pkts/rtt for
* the next alpha_inc rtts, etc.
*/
static void
cdg_window_increase(struct cc_var *ccv, int new_measurement)
{
struct cdg *cdg_data;
int incr, s_w_incr;
cdg_data = ccv->cc_data;
incr = s_w_incr = 0;
if (CCV(ccv, snd_cwnd) <= CCV(ccv, snd_ssthresh)) {
/* Slow start. */
incr = CCV(ccv, t_maxseg);
s_w_incr = incr;
cdg_data->window_incr = cdg_data->rtt_count = 0;
} else {
/* Congestion avoidance. */
if (new_measurement) {
s_w_incr = CCV(ccv, t_maxseg);
if (V_cdg_alpha_inc == 0) {
incr = CCV(ccv, t_maxseg);
} else {
if (++cdg_data->rtt_count >= V_cdg_alpha_inc) {
cdg_data->window_incr++;
cdg_data->rtt_count = 0;
}
incr = CCV(ccv, t_maxseg) *
cdg_data->window_incr;
}
}
}
if (cdg_data->shadow_w > 0)
cdg_data->shadow_w = ulmin(cdg_data->shadow_w + s_w_incr,
TCP_MAXWIN << CCV(ccv, snd_scale));
CCV(ccv, snd_cwnd) = ulmin(CCV(ccv, snd_cwnd) + incr,
TCP_MAXWIN << CCV(ccv, snd_scale));
}
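The alpha_inc schedule described in the comment above can be simulated in isolation. A hedged userland sketch mirroring only the congestion-avoidance branch for alpha_inc >= 1 (the V_cdg_alpha_inc == 0 case simply increases by one segment per RTT; the sk_ names are ours):

```c
/*
 * Sketch of the congestion-avoidance increase schedule: window_incr
 * grows by one every alpha_inc RTTs, and each RTT the window grows
 * by maxseg * window_incr.  Note window_incr only reaches 1 after
 * the first alpha_inc measurements, exactly as in the code above.
 */
static long
sk_cwnd_after_rtts(long cwnd, long maxseg, int alpha_inc, int rtts)
{
	int window_incr = 0, rtt_count = 0, i;

	for (i = 0; i < rtts; i++) {
		if (++rtt_count >= alpha_inc) {
			window_incr++;
			rtt_count = 0;
		}
		cwnd += maxseg * window_incr;
	}
	return (cwnd);
}
```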
static void
cdg_cong_signal(struct cc_var *ccv, uint32_t signal_type)
{
struct cdg *cdg_data = ccv->cc_data;
switch (signal_type) {
case CC_CDG_DELAY:
CCV(ccv, snd_ssthresh) = cdg_window_decrease(ccv,
CCV(ccv, snd_cwnd), V_cdg_beta_delay);
CCV(ccv, snd_cwnd) = CCV(ccv, snd_ssthresh);
CCV(ccv, snd_recover) = CCV(ccv, snd_max);
cdg_data->window_incr = cdg_data->rtt_count = 0;
ENTER_CONGRECOVERY(CCV(ccv, t_flags));
break;
case CC_NDUPACK:
/*
* If already responding to congestion OR we have guessed no
* queue in the path is full.
*/
if (IN_CONGRECOVERY(CCV(ccv, t_flags)) ||
cdg_data->queue_state < CDG_Q_FULL) {
CCV(ccv, snd_ssthresh) = CCV(ccv, snd_cwnd);
CCV(ccv, snd_recover) = CCV(ccv, snd_max);
} else {
/*
* Loss is likely to be congestion related. We have
* inferred a queue full state, so have shadow window
* react to loss as NewReno would.
*/
if (cdg_data->shadow_w > 0)
cdg_data->shadow_w = cdg_window_decrease(ccv,
cdg_data->shadow_w, RENO_BETA);
CCV(ccv, snd_ssthresh) = max(cdg_data->shadow_w,
cdg_window_decrease(ccv, CCV(ccv, snd_cwnd),
V_cdg_beta_loss));
cdg_data->window_incr = cdg_data->rtt_count = 0;
}
ENTER_RECOVERY(CCV(ccv, t_flags));
break;
default:
newreno_cc_algo.cong_signal(ccv, signal_type);
break;
}
}
/*
* Use a negative exponential probabilistic backoff so that sources with
* varying RTTs that share the same link will, on average, have the same
* probability of backoff over time.
*
* Prob_backoff = 1 - exp(-qtrend / V_cdg_exp_backoff_scale), where
* V_cdg_exp_backoff_scale is the average qtrend for the exponential backoff.
*/
static inline int
prob_backoff(long qtrend)
{
int backoff, idx, p;
backoff = (qtrend > ((MAXGRAD * V_cdg_exp_backoff_scale) << D_P_E));
if (!backoff) {
if (V_cdg_exp_backoff_scale > 1)
idx = (qtrend + V_cdg_exp_backoff_scale / 2) /
V_cdg_exp_backoff_scale;
else
idx = qtrend;
/* Backoff probability proportional to rate of queue growth. */
p = (INT_MAX / (1 << EXP_PREC)) * probexp[idx];
backoff = (random() < p);
}
return (backoff);
}
static inline void
calc_moving_average(struct cdg *cdg_data, long qdiff_max, long qdiff_min)
{
struct qdiff_sample *qds;
++cdg_data->num_samples;
if (cdg_data->num_samples > cdg_data->sample_q_size) {
/* Minimum RTT. */
qds = STAILQ_FIRST(&cdg_data->qdiffmin_q);
cdg_data->min_qtrend = cdg_data->min_qtrend +
(qdiff_min - qds->qdiff) / cdg_data->sample_q_size;
STAILQ_REMOVE_HEAD(&cdg_data->qdiffmin_q, qdiff_lnk);
qds->qdiff = qdiff_min;
STAILQ_INSERT_TAIL(&cdg_data->qdiffmin_q, qds, qdiff_lnk);
/* Maximum RTT. */
qds = STAILQ_FIRST(&cdg_data->qdiffmax_q);
cdg_data->max_qtrend = cdg_data->max_qtrend +
(qdiff_max - qds->qdiff) / cdg_data->sample_q_size;
STAILQ_REMOVE_HEAD(&cdg_data->qdiffmax_q, qdiff_lnk);
qds->qdiff = qdiff_max;
STAILQ_INSERT_TAIL(&cdg_data->qdiffmax_q, qds, qdiff_lnk);
--cdg_data->num_samples;
} else {
qds = uma_zalloc(qdiffsample_zone, M_NOWAIT);
if (qds != NULL) {
cdg_data->min_qtrend = cdg_data->min_qtrend +
qdiff_min / cdg_data->sample_q_size;
qds->qdiff = qdiff_min;
STAILQ_INSERT_TAIL(&cdg_data->qdiffmin_q, qds,
qdiff_lnk);
}
qds = uma_zalloc(qdiffsample_zone, M_NOWAIT);
if (qds) {
cdg_data->max_qtrend = cdg_data->max_qtrend +
qdiff_max / cdg_data->sample_q_size;
qds->qdiff = qdiff_max;
STAILQ_INSERT_TAIL(&cdg_data->qdiffmax_q, qds,
qdiff_lnk);
}
}
}
static void
cdg_ack_received(struct cc_var *ccv, uint16_t ack_type)
{
struct cdg *cdg_data;
struct ertt *e_t;
long qdiff_max, qdiff_min;
int congestion, new_measurement, slowstart;
cdg_data = ccv->cc_data;
e_t = (struct ertt *)khelp_get_osd(CCV(ccv, osd), ertt_id);
new_measurement = e_t->flags & ERTT_NEW_MEASUREMENT;
congestion = 0;
cdg_data->maxrtt_in_rtt = imax(e_t->rtt, cdg_data->maxrtt_in_rtt);
cdg_data->minrtt_in_rtt = imin(e_t->rtt, cdg_data->minrtt_in_rtt);
if (new_measurement) {
slowstart = (CCV(ccv, snd_cwnd) <= CCV(ccv, snd_ssthresh));
/*
* Update smoothed gradient measurements. Since we are only
* using one measurement per RTT, use max or min rtt_in_rtt.
* This is also less noisy than a sample RTT measurement. Max
* RTT measurements can have trouble due to OS issues.
*/
if (cdg_data->maxrtt_in_prevrtt) {
qdiff_max = ((long)(cdg_data->maxrtt_in_rtt -
cdg_data->maxrtt_in_prevrtt) << D_P_E );
qdiff_min = ((long)(cdg_data->minrtt_in_rtt -
cdg_data->minrtt_in_prevrtt) << D_P_E );
- calc_moving_average(cdg_data, qdiff_max, qdiff_min);
+ if (cdg_data->sample_q_size == 0) {
+ cdg_data->max_qtrend = qdiff_max;
+ cdg_data->min_qtrend = qdiff_min;
+ } else
+ calc_moving_average(cdg_data, qdiff_max, qdiff_min);
/* Probabilistic backoff with respect to gradient. */
if (slowstart && qdiff_min > 0)
congestion = prob_backoff(qdiff_min);
else if (cdg_data->min_qtrend > 0)
congestion = prob_backoff(cdg_data->min_qtrend);
else if (slowstart && qdiff_max > 0)
congestion = prob_backoff(qdiff_max);
else if (cdg_data->max_qtrend > 0)
congestion = prob_backoff(cdg_data->max_qtrend);
/* Update estimate of queue state. */
if (cdg_data->min_qtrend > 0 &&
cdg_data->max_qtrend <= 0) {
cdg_data->queue_state = CDG_Q_FULL;
} else if (cdg_data->min_qtrend >= 0 &&
cdg_data->max_qtrend < 0) {
cdg_data->queue_state = CDG_Q_EMPTY;
cdg_data->shadow_w = 0;
} else if (cdg_data->min_qtrend > 0 &&
cdg_data->max_qtrend > 0) {
cdg_data->queue_state = CDG_Q_RISING;
} else if (cdg_data->min_qtrend < 0 &&
cdg_data->max_qtrend < 0) {
cdg_data->queue_state = CDG_Q_FALLING;
}
if (cdg_data->min_qtrend < 0 ||
cdg_data->max_qtrend < 0)
cdg_data->consec_cong_cnt = 0;
}
cdg_data->minrtt_in_prevrtt = cdg_data->minrtt_in_rtt;
cdg_data->minrtt_in_rtt = INT_MAX;
cdg_data->maxrtt_in_prevrtt = cdg_data->maxrtt_in_rtt;
cdg_data->maxrtt_in_rtt = 0;
e_t->flags &= ~ERTT_NEW_MEASUREMENT;
}
if (congestion) {
cdg_data->consec_cong_cnt++;
if (!IN_RECOVERY(CCV(ccv, t_flags))) {
if (cdg_data->consec_cong_cnt <= V_cdg_consec_cong)
cdg_cong_signal(ccv, CC_CDG_DELAY);
else
/*
* We have been backing off but the queue is not
* falling. Assume we are competing with
* loss-based flows and don't back off for the
* next V_cdg_hold_backoff RTT periods.
*/
if (cdg_data->consec_cong_cnt >=
V_cdg_consec_cong + V_cdg_hold_backoff)
cdg_data->consec_cong_cnt = 0;
/* Won't see effect until 2nd RTT. */
cdg_data->maxrtt_in_prevrtt = 0;
/*
* Resync shadow window in case we are competing with a
* loss based flow
*/
cdg_data->shadow_w = ulmax(CCV(ccv, snd_cwnd),
cdg_data->shadow_w);
}
} else if (ack_type == CC_ACK)
cdg_window_increase(ccv, new_measurement);
}
/* When a vnet is created and being initialised, init the per-stack CDG vars. */
VNET_SYSINIT(cdg_init_vnet, SI_SUB_PROTO_BEGIN, SI_ORDER_FIRST,
cdg_init_vnet, NULL);
SYSCTL_DECL(_net_inet_tcp_cc_cdg);
SYSCTL_NODE(_net_inet_tcp_cc, OID_AUTO, cdg, CTLFLAG_RW, NULL,
"CAIA delay-gradient congestion control related settings");
SYSCTL_STRING(_net_inet_tcp_cc_cdg, OID_AUTO, version,
CTLFLAG_RD, CDG_VERSION, sizeof(CDG_VERSION) - 1,
"Current algorithm/implementation version number");
SYSCTL_UINT(_net_inet_tcp_cc_cdg, OID_AUTO, alpha_inc,
CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(cdg_alpha_inc), 0,
"Increment the window increase factor alpha by 1 MSS segment every "
"alpha_inc RTTs during congestion avoidance mode.");
SYSCTL_PROC(_net_inet_tcp_cc_cdg, OID_AUTO, beta_delay,
CTLFLAG_VNET | CTLTYPE_UINT | CTLFLAG_RW, &VNET_NAME(cdg_beta_delay), 70,
&cdg_beta_handler, "IU",
"Delay-based window decrease factor as a percentage "
"(on delay-based backoff, w = w * beta_delay / 100)");
SYSCTL_PROC(_net_inet_tcp_cc_cdg, OID_AUTO, beta_loss,
CTLFLAG_VNET | CTLTYPE_UINT | CTLFLAG_RW, &VNET_NAME(cdg_beta_loss), 50,
&cdg_beta_handler, "IU",
"Loss-based window decrease factor as a percentage "
"(on loss-based backoff, w = w * beta_loss / 100)");
SYSCTL_PROC(_net_inet_tcp_cc_cdg, OID_AUTO, exp_backoff_scale,
CTLFLAG_VNET | CTLTYPE_UINT | CTLFLAG_RW,
&VNET_NAME(cdg_exp_backoff_scale), 2, &cdg_exp_backoff_scale_handler, "IU",
"Scaling parameter for the probabilistic exponential backoff");
SYSCTL_UINT(_net_inet_tcp_cc_cdg, OID_AUTO, smoothing_factor,
CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(cdg_smoothing_factor), 8,
"Number of samples used for moving average smoothing (0 = no smoothing)");
SYSCTL_UINT(_net_inet_tcp_cc_cdg, OID_AUTO, loss_compete_consec_cong,
CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(cdg_consec_cong), 5,
"Number of consecutive delay-gradient based congestion episodes which will "
"trigger loss based CC compatibility");
SYSCTL_UINT(_net_inet_tcp_cc_cdg, OID_AUTO, loss_compete_hold_backoff,
CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(cdg_hold_backoff), 5,
"Number of consecutive delay-gradient based congestion episodes to hold "
"the window backoff for loss based CC compatibility");
DECLARE_CC_MODULE(cdg, &cdg_cc_algo);
MODULE_DEPEND(cdg, ertt, 1, 1, 1);
Index: projects/clang800-import/sys/netinet/sctp_usrreq.c
===================================================================
--- projects/clang800-import/sys/netinet/sctp_usrreq.c (revision 343955)
+++ projects/clang800-import/sys/netinet/sctp_usrreq.c (revision 343956)
@@ -1,7482 +1,7489 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2001-2008, by Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2008-2012, by Randall Stewart. All rights reserved.
* Copyright (c) 2008-2012, by Michael Tuexen. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* a) Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* b) Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the distribution.
*
* c) Neither the name of Cisco Systems, Inc. nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <netinet/sctp_os.h>
#include <sys/proc.h>
#include <netinet/sctp_pcb.h>
#include <netinet/sctp_header.h>
#include <netinet/sctp_var.h>
#ifdef INET6
#include <netinet6/sctp6_var.h>
#endif
#include <netinet/sctp_sysctl.h>
#include <netinet/sctp_output.h>
#include <netinet/sctp_uio.h>
#include <netinet/sctp_asconf.h>
#include <netinet/sctputil.h>
#include <netinet/sctp_indata.h>
#include <netinet/sctp_timer.h>
#include <netinet/sctp_auth.h>
#include <netinet/sctp_bsd_addr.h>
#include <netinet/udp.h>
extern const struct sctp_cc_functions sctp_cc_functions[];
extern const struct sctp_ss_functions sctp_ss_functions[];
void
sctp_init(void)
{
u_long sb_max_adj;
/* Initialize and modify the sysctled variables */
sctp_init_sysctls();
if ((nmbclusters / 8) > SCTP_ASOC_MAX_CHUNKS_ON_QUEUE)
SCTP_BASE_SYSCTL(sctp_max_chunks_on_queue) = (nmbclusters / 8);
/*
* Allow a user to take no more than 1/2 the number of clusters or
* the SB_MAX whichever is smaller for the send window.
*/
sb_max_adj = (u_long)((u_quad_t)(SB_MAX) * MCLBYTES / (MSIZE + MCLBYTES));
SCTP_BASE_SYSCTL(sctp_sendspace) = min(sb_max_adj,
(((uint32_t)nmbclusters / 2) * SCTP_DEFAULT_MAXSEGMENT));
/*
* Now for the recv window, should we take the same amount? or
* should I do 1/2 the SB_MAX instead in the SB_MAX min above. For
* now I will just copy.
*/
SCTP_BASE_SYSCTL(sctp_recvspace) = SCTP_BASE_SYSCTL(sctp_sendspace);
SCTP_BASE_VAR(first_time) = 0;
SCTP_BASE_VAR(sctp_pcb_initialized) = 0;
sctp_pcb_init();
#if defined(SCTP_PACKET_LOGGING)
SCTP_BASE_VAR(packet_log_writers) = 0;
SCTP_BASE_VAR(packet_log_end) = 0;
memset(&SCTP_BASE_VAR(packet_log_buffer), 0, SCTP_PACKET_LOG_SIZE);
#endif
}
#ifdef VIMAGE
static void
sctp_finish(void *unused __unused)
{
sctp_pcb_finish();
}
VNET_SYSUNINIT(sctp, SI_SUB_PROTO_DOMAIN, SI_ORDER_FOURTH, sctp_finish, NULL);
#endif
void
sctp_pathmtu_adjustment(struct sctp_tcb *stcb, uint16_t nxtsz)
{
struct sctp_tmit_chunk *chk;
uint16_t overhead;
/* Adjust that too */
stcb->asoc.smallest_mtu = nxtsz;
/* now off to subtract IP_DF flag if needed */
overhead = IP_HDR_SIZE + sizeof(struct sctphdr);
if (sctp_auth_is_required_chunk(SCTP_DATA, stcb->asoc.peer_auth_chunks)) {
overhead += sctp_get_auth_chunk_len(stcb->asoc.peer_hmac_id);
}
TAILQ_FOREACH(chk, &stcb->asoc.send_queue, sctp_next) {
if ((chk->send_size + overhead) > nxtsz) {
chk->flags |= CHUNK_FLAGS_FRAGMENT_OK;
}
}
TAILQ_FOREACH(chk, &stcb->asoc.sent_queue, sctp_next) {
if ((chk->send_size + overhead) > nxtsz) {
/*
* For this guy we also mark for immediate resend
* since we sent too big a chunk
*/
chk->flags |= CHUNK_FLAGS_FRAGMENT_OK;
if (chk->sent < SCTP_DATAGRAM_RESEND) {
sctp_flight_size_decrease(chk);
sctp_total_flight_decrease(stcb, chk);
chk->sent = SCTP_DATAGRAM_RESEND;
sctp_ucount_incr(stcb->asoc.sent_queue_retran_cnt);
chk->rec.data.doing_fast_retransmit = 0;
if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) {
sctp_misc_ints(SCTP_FLIGHT_LOG_DOWN_PMTU,
chk->whoTo->flight_size,
chk->book_size,
(uint32_t)(uintptr_t)chk->whoTo,
chk->rec.data.tsn);
}
/* Clear any time so NO RTT is being done */
chk->do_rtt = 0;
}
}
}
}
#ifdef INET
void
sctp_notify(struct sctp_inpcb *inp,
struct sctp_tcb *stcb,
struct sctp_nets *net,
uint8_t icmp_type,
uint8_t icmp_code,
uint16_t ip_len,
uint32_t next_mtu)
{
#if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING)
struct socket *so;
#endif
int timer_stopped;
if (icmp_type != ICMP_UNREACH) {
/* We only care about unreachable */
SCTP_TCB_UNLOCK(stcb);
return;
}
if ((icmp_code == ICMP_UNREACH_NET) ||
(icmp_code == ICMP_UNREACH_HOST) ||
(icmp_code == ICMP_UNREACH_NET_UNKNOWN) ||
(icmp_code == ICMP_UNREACH_HOST_UNKNOWN) ||
(icmp_code == ICMP_UNREACH_ISOLATED) ||
(icmp_code == ICMP_UNREACH_NET_PROHIB) ||
(icmp_code == ICMP_UNREACH_HOST_PROHIB) ||
(icmp_code == ICMP_UNREACH_FILTER_PROHIB)) {
/* Mark the net unreachable. */
if (net->dest_state & SCTP_ADDR_REACHABLE) {
/* OK, that destination is NOT reachable. */
net->dest_state &= ~SCTP_ADDR_REACHABLE;
net->dest_state &= ~SCTP_ADDR_PF;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_DOWN,
stcb, 0,
(void *)net, SCTP_SO_NOT_LOCKED);
}
SCTP_TCB_UNLOCK(stcb);
} else if ((icmp_code == ICMP_UNREACH_PROTOCOL) ||
(icmp_code == ICMP_UNREACH_PORT)) {
/* Treat it like an ABORT. */
sctp_abort_notification(stcb, 1, 0, NULL, SCTP_SO_NOT_LOCKED);
#if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING)
so = SCTP_INP_SO(inp);
atomic_add_int(&stcb->asoc.refcnt, 1);
SCTP_TCB_UNLOCK(stcb);
SCTP_SOCKET_LOCK(so, 1);
SCTP_TCB_LOCK(stcb);
atomic_subtract_int(&stcb->asoc.refcnt, 1);
#endif
(void)sctp_free_assoc(inp, stcb, SCTP_NORMAL_PROC,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_2);
#if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING)
SCTP_SOCKET_UNLOCK(so, 1);
/* SCTP_TCB_UNLOCK(stcb); MT: I think this is not needed. */
#endif
/* no need to unlock here, since the TCB is gone */
} else if (icmp_code == ICMP_UNREACH_NEEDFRAG) {
if (net->dest_state & SCTP_ADDR_NO_PMTUD) {
SCTP_TCB_UNLOCK(stcb);
return;
}
/* Find the next (smaller) MTU */
if (next_mtu == 0) {
/*
* Old type router that does not tell us what the
* next MTU is. Rats, we will have to guess (in an
* educated fashion of course).
*/
next_mtu = sctp_get_prev_mtu(ip_len);
}
/* Stop the PMTU timer. */
if (SCTP_OS_TIMER_PENDING(&net->pmtu_timer.timer)) {
timer_stopped = 1;
sctp_timer_stop(SCTP_TIMER_TYPE_PATHMTURAISE, inp, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_1);
} else {
timer_stopped = 0;
}
/* Update the path MTU. */
if (net->port) {
next_mtu -= sizeof(struct udphdr);
}
if (net->mtu > next_mtu) {
net->mtu = next_mtu;
if (net->port) {
sctp_hc_set_mtu(&net->ro._l_addr, inp->fibnum, next_mtu + sizeof(struct udphdr));
} else {
sctp_hc_set_mtu(&net->ro._l_addr, inp->fibnum, next_mtu);
}
}
/* Update the association MTU */
if (stcb->asoc.smallest_mtu > next_mtu) {
sctp_pathmtu_adjustment(stcb, next_mtu);
}
/* Finally, start the PMTU timer if it was running before. */
if (timer_stopped) {
sctp_timer_start(SCTP_TIMER_TYPE_PATHMTURAISE, inp, stcb, net);
}
SCTP_TCB_UNLOCK(stcb);
} else {
SCTP_TCB_UNLOCK(stcb);
}
}
void
sctp_ctlinput(int cmd, struct sockaddr *sa, void *vip)
{
struct ip *outer_ip;
struct ip *inner_ip;
struct sctphdr *sh;
struct icmp *icmp;
struct sctp_inpcb *inp;
struct sctp_tcb *stcb;
struct sctp_nets *net;
struct sctp_init_chunk *ch;
struct sockaddr_in src, dst;
if (sa->sa_family != AF_INET ||
((struct sockaddr_in *)sa)->sin_addr.s_addr == INADDR_ANY) {
return;
}
if (PRC_IS_REDIRECT(cmd)) {
vip = NULL;
} else if ((unsigned)cmd >= PRC_NCMDS || inetctlerrmap[cmd] == 0) {
return;
}
if (vip != NULL) {
inner_ip = (struct ip *)vip;
icmp = (struct icmp *)((caddr_t)inner_ip -
(sizeof(struct icmp) - sizeof(struct ip)));
outer_ip = (struct ip *)((caddr_t)icmp - sizeof(struct ip));
sh = (struct sctphdr *)((caddr_t)inner_ip + (inner_ip->ip_hl << 2));
memset(&src, 0, sizeof(struct sockaddr_in));
src.sin_family = AF_INET;
src.sin_len = sizeof(struct sockaddr_in);
src.sin_port = sh->src_port;
src.sin_addr = inner_ip->ip_src;
memset(&dst, 0, sizeof(struct sockaddr_in));
dst.sin_family = AF_INET;
dst.sin_len = sizeof(struct sockaddr_in);
dst.sin_port = sh->dest_port;
dst.sin_addr = inner_ip->ip_dst;
/*
* 'dst' holds the dest of the packet that failed to be
* sent. 'src' holds our local endpoint address. Thus we
* reverse the dst and the src in the lookup.
*/
inp = NULL;
net = NULL;
stcb = sctp_findassociation_addr_sa((struct sockaddr *)&dst,
(struct sockaddr *)&src,
&inp, &net, 1,
SCTP_DEFAULT_VRFID);
if ((stcb != NULL) &&
(net != NULL) &&
(inp != NULL)) {
/* Check the verification tag */
if (ntohl(sh->v_tag) != 0) {
/*
* This must be the verification tag used
* for sending out packets. We don't
* consider packets reflecting the
* verification tag.
*/
if (ntohl(sh->v_tag) != stcb->asoc.peer_vtag) {
SCTP_TCB_UNLOCK(stcb);
return;
}
} else {
if (ntohs(outer_ip->ip_len) >=
sizeof(struct ip) +
8 + (inner_ip->ip_hl << 2) + 20) {
/*
* In this case we can check if we
* got an INIT chunk and if the
* initiate tag matches.
*/
ch = (struct sctp_init_chunk *)(sh + 1);
if ((ch->ch.chunk_type != SCTP_INITIATION) ||
(ntohl(ch->init.initiate_tag) != stcb->asoc.my_vtag)) {
SCTP_TCB_UNLOCK(stcb);
return;
}
} else {
SCTP_TCB_UNLOCK(stcb);
return;
}
}
sctp_notify(inp, stcb, net,
icmp->icmp_type,
icmp->icmp_code,
ntohs(inner_ip->ip_len),
(uint32_t)ntohs(icmp->icmp_nextmtu));
} else {
if ((stcb == NULL) && (inp != NULL)) {
/* reduce ref-count */
SCTP_INP_WLOCK(inp);
SCTP_INP_DECR_REF(inp);
SCTP_INP_WUNLOCK(inp);
}
if (stcb) {
SCTP_TCB_UNLOCK(stcb);
}
}
}
return;
}
#endif
static int
sctp_getcred(SYSCTL_HANDLER_ARGS)
{
struct xucred xuc;
struct sockaddr_in addrs[2];
struct sctp_inpcb *inp;
struct sctp_nets *net;
struct sctp_tcb *stcb;
int error;
uint32_t vrf_id;
/* FIX, for non-bsd is this right? */
vrf_id = SCTP_DEFAULT_VRFID;
error = priv_check(req->td, PRIV_NETINET_GETCRED);
if (error)
return (error);
error = SYSCTL_IN(req, addrs, sizeof(addrs));
if (error)
return (error);
stcb = sctp_findassociation_addr_sa(sintosa(&addrs[1]),
sintosa(&addrs[0]),
&inp, &net, 1, vrf_id);
if (stcb == NULL || inp == NULL || inp->sctp_socket == NULL) {
if ((inp != NULL) && (stcb == NULL)) {
/* reduce ref-count */
SCTP_INP_WLOCK(inp);
SCTP_INP_DECR_REF(inp);
goto cred_can_cont;
}
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
goto out;
}
SCTP_TCB_UNLOCK(stcb);
/*
* We use the write lock here, only since in the error leg we need
* it. If we used RLOCK, then we would have to
* wlock/decr/unlock/rlock. Which in theory could create a hole.
* Better to use higher wlock.
*/
SCTP_INP_WLOCK(inp);
cred_can_cont:
error = cr_canseesocket(req->td->td_ucred, inp->sctp_socket);
if (error) {
SCTP_INP_WUNLOCK(inp);
goto out;
}
cru2x(inp->sctp_socket->so_cred, &xuc);
SCTP_INP_WUNLOCK(inp);
error = SYSCTL_OUT(req, &xuc, sizeof(struct xucred));
out:
return (error);
}
SYSCTL_PROC(_net_inet_sctp, OID_AUTO, getcred, CTLTYPE_OPAQUE | CTLFLAG_RW,
0, 0, sctp_getcred, "S,ucred", "Get the ucred of a SCTP connection");
#ifdef INET
static void
sctp_abort(struct socket *so)
{
struct sctp_inpcb *inp;
uint32_t flags;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
return;
}
sctp_must_try_again:
flags = inp->sctp_flags;
#ifdef SCTP_LOG_CLOSING
sctp_log_closing(inp, NULL, 17);
#endif
if (((flags & SCTP_PCB_FLAGS_SOCKET_GONE) == 0) &&
(atomic_cmpset_int(&inp->sctp_flags, flags, (flags | SCTP_PCB_FLAGS_SOCKET_GONE | SCTP_PCB_FLAGS_CLOSE_IP)))) {
#ifdef SCTP_LOG_CLOSING
sctp_log_closing(inp, NULL, 16);
#endif
sctp_inpcb_free(inp, SCTP_FREE_SHOULD_USE_ABORT,
SCTP_CALLED_AFTER_CMPSET_OFCLOSE);
SOCK_LOCK(so);
SCTP_SB_CLEAR(so->so_snd);
/*
* same for the rcv ones, they are only here for the
* accounting/select.
*/
SCTP_SB_CLEAR(so->so_rcv);
/* Now null out the reference, we are completely detached. */
so->so_pcb = NULL;
SOCK_UNLOCK(so);
} else {
flags = inp->sctp_flags;
if ((flags & SCTP_PCB_FLAGS_SOCKET_GONE) == 0) {
goto sctp_must_try_again;
}
}
return;
}
static int
sctp_attach(struct socket *so, int proto SCTP_UNUSED, struct thread *p SCTP_UNUSED)
{
struct sctp_inpcb *inp;
struct inpcb *ip_inp;
int error;
uint32_t vrf_id = SCTP_DEFAULT_VRFID;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp != NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
if (so->so_snd.sb_hiwat == 0 || so->so_rcv.sb_hiwat == 0) {
error = SCTP_SORESERVE(so, SCTP_BASE_SYSCTL(sctp_sendspace), SCTP_BASE_SYSCTL(sctp_recvspace));
if (error) {
return (error);
}
}
error = sctp_inpcb_alloc(so, vrf_id);
if (error) {
return (error);
}
inp = (struct sctp_inpcb *)so->so_pcb;
SCTP_INP_WLOCK(inp);
inp->sctp_flags &= ~SCTP_PCB_FLAGS_BOUND_V6; /* I'm not v6! */
ip_inp = &inp->ip_inp.inp;
ip_inp->inp_vflag |= INP_IPV4;
ip_inp->inp_ip_ttl = MODULE_GLOBAL(ip_defttl);
SCTP_INP_WUNLOCK(inp);
return (0);
}
static int
sctp_bind(struct socket *so, struct sockaddr *addr, struct thread *p)
{
struct sctp_inpcb *inp;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
if (addr != NULL) {
if ((addr->sa_family != AF_INET) ||
(addr->sa_len != sizeof(struct sockaddr_in))) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
}
return (sctp_inpcb_bind(so, addr, NULL, p));
}
#endif
void
sctp_close(struct socket *so)
{
struct sctp_inpcb *inp;
uint32_t flags;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL)
return;
/*
* Inform all the lower layer assoc that we are done.
*/
sctp_must_try_again:
flags = inp->sctp_flags;
#ifdef SCTP_LOG_CLOSING
sctp_log_closing(inp, NULL, 17);
#endif
if (((flags & SCTP_PCB_FLAGS_SOCKET_GONE) == 0) &&
(atomic_cmpset_int(&inp->sctp_flags, flags, (flags | SCTP_PCB_FLAGS_SOCKET_GONE | SCTP_PCB_FLAGS_CLOSE_IP)))) {
if (((so->so_options & SO_LINGER) && (so->so_linger == 0)) ||
(so->so_rcv.sb_cc > 0)) {
#ifdef SCTP_LOG_CLOSING
sctp_log_closing(inp, NULL, 13);
#endif
sctp_inpcb_free(inp, SCTP_FREE_SHOULD_USE_ABORT,
SCTP_CALLED_AFTER_CMPSET_OFCLOSE);
} else {
#ifdef SCTP_LOG_CLOSING
sctp_log_closing(inp, NULL, 14);
#endif
sctp_inpcb_free(inp, SCTP_FREE_SHOULD_USE_GRACEFUL_CLOSE,
SCTP_CALLED_AFTER_CMPSET_OFCLOSE);
}
/*
* The socket is now detached, no matter what the state of
* the SCTP association.
*/
SOCK_LOCK(so);
SCTP_SB_CLEAR(so->so_snd);
/*
* same for the rcv ones, they are only here for the
* accounting/select.
*/
SCTP_SB_CLEAR(so->so_rcv);
/* Now null out the reference, we are completely detached. */
so->so_pcb = NULL;
SOCK_UNLOCK(so);
} else {
flags = inp->sctp_flags;
if ((flags & SCTP_PCB_FLAGS_SOCKET_GONE) == 0) {
goto sctp_must_try_again;
}
}
return;
}
int
sctp_sendm(struct socket *so, int flags, struct mbuf *m, struct sockaddr *addr,
struct mbuf *control, struct thread *p);
int
sctp_sendm(struct socket *so, int flags, struct mbuf *m, struct sockaddr *addr,
struct mbuf *control, struct thread *p)
{
struct sctp_inpcb *inp;
int error;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
if (control) {
sctp_m_freem(control);
control = NULL;
}
SCTP_LTRACE_ERR_RET_PKT(m, inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
sctp_m_freem(m);
return (EINVAL);
}
/* Got to have a destination address if we are NOT a connected socket */
if ((addr == NULL) &&
((inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE))) {
goto connected_type;
} else if (addr == NULL) {
SCTP_LTRACE_ERR_RET_PKT(m, inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EDESTADDRREQ);
error = EDESTADDRREQ;
sctp_m_freem(m);
if (control) {
sctp_m_freem(control);
control = NULL;
}
return (error);
}
#ifdef INET6
if (addr->sa_family != AF_INET) {
/* must be a v4 address! */
SCTP_LTRACE_ERR_RET_PKT(m, inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EDESTADDRREQ);
sctp_m_freem(m);
if (control) {
sctp_m_freem(control);
control = NULL;
}
error = EDESTADDRREQ;
return (error);
}
#endif /* INET6 */
connected_type:
/* now what about control */
if (control) {
if (inp->control) {
SCTP_PRINTF("huh? control set?\n");
sctp_m_freem(inp->control);
inp->control = NULL;
}
inp->control = control;
}
/* Place the data */
if (inp->pkt) {
SCTP_BUF_NEXT(inp->pkt_last) = m;
inp->pkt_last = m;
} else {
inp->pkt_last = inp->pkt = m;
}
if (
/* FreeBSD uses a flag passed */
((flags & PRUS_MORETOCOME) == 0)
) {
/*
* note with the current version this code will only be used
* by OpenBSD-- NetBSD, FreeBSD, and MacOS have methods for
* re-defining sosend to use the sctp_sosend. One can
* optionally switch back to this code (by changing back the
* definitions) but this is not advisable. This code is used
* by FreeBSD when sending a file with sendfile() though.
*/
int ret;
ret = sctp_output(inp, inp->pkt, addr, inp->control, p, flags);
inp->pkt = NULL;
inp->control = NULL;
return (ret);
} else {
return (0);
}
}
int
sctp_disconnect(struct socket *so)
{
struct sctp_inpcb *inp;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTCONN);
return (ENOTCONN);
}
SCTP_INP_RLOCK(inp);
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL)) {
if (LIST_EMPTY(&inp->sctp_asoc_list)) {
/* No connection */
SCTP_INP_RUNLOCK(inp);
return (0);
} else {
struct sctp_association *asoc;
struct sctp_tcb *stcb;
stcb = LIST_FIRST(&inp->sctp_asoc_list);
if (stcb == NULL) {
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
SCTP_TCB_LOCK(stcb);
asoc = &stcb->asoc;
if (stcb->asoc.state & SCTP_STATE_ABOUT_TO_BE_FREED) {
/* We are about to be freed, out of here */
SCTP_TCB_UNLOCK(stcb);
SCTP_INP_RUNLOCK(inp);
return (0);
}
if (((so->so_options & SO_LINGER) &&
(so->so_linger == 0)) ||
(so->so_rcv.sb_cc > 0)) {
if (SCTP_GET_STATE(stcb) != SCTP_STATE_COOKIE_WAIT) {
/* Left with Data unread */
struct mbuf *op_err;
op_err = sctp_generate_cause(SCTP_CAUSE_USER_INITIATED_ABT, "");
sctp_send_abort_tcb(stcb, op_err, SCTP_SO_LOCKED);
SCTP_STAT_INCR_COUNTER32(sctps_aborted);
}
SCTP_INP_RUNLOCK(inp);
if ((SCTP_GET_STATE(stcb) == SCTP_STATE_OPEN) ||
(SCTP_GET_STATE(stcb) == SCTP_STATE_SHUTDOWN_RECEIVED)) {
SCTP_STAT_DECR_GAUGE32(sctps_currestab);
}
(void)sctp_free_assoc(inp, stcb, SCTP_NORMAL_PROC,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_3);
/* No unlock tcb assoc is gone */
return (0);
}
if (TAILQ_EMPTY(&asoc->send_queue) &&
TAILQ_EMPTY(&asoc->sent_queue) &&
(asoc->stream_queue_cnt == 0)) {
/* there is nothing queued to send, so done */
if ((*asoc->ss_functions.sctp_ss_is_user_msgs_incomplete) (stcb, asoc)) {
goto abort_anyway;
}
if ((SCTP_GET_STATE(stcb) != SCTP_STATE_SHUTDOWN_SENT) &&
(SCTP_GET_STATE(stcb) != SCTP_STATE_SHUTDOWN_ACK_SENT)) {
/* only send SHUTDOWN 1st time thru */
struct sctp_nets *netp;
if ((SCTP_GET_STATE(stcb) == SCTP_STATE_OPEN) ||
(SCTP_GET_STATE(stcb) == SCTP_STATE_SHUTDOWN_RECEIVED)) {
SCTP_STAT_DECR_GAUGE32(sctps_currestab);
}
SCTP_SET_STATE(stcb, SCTP_STATE_SHUTDOWN_SENT);
sctp_stop_timers_for_shutdown(stcb);
if (stcb->asoc.alternate) {
netp = stcb->asoc.alternate;
} else {
netp = stcb->asoc.primary_destination;
}
sctp_send_shutdown(stcb, netp);
sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWN,
stcb->sctp_ep, stcb, netp);
sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWNGUARD,
stcb->sctp_ep, stcb, netp);
sctp_chunk_output(stcb->sctp_ep, stcb, SCTP_OUTPUT_FROM_T3, SCTP_SO_LOCKED);
}
} else {
/*
* we still got (or just got) data to send,
* so set SHUTDOWN_PENDING
*/
/*
* XXX sockets draft says that SCTP_EOF
* should be sent with no data. currently,
* we will allow user data to be sent first
* and move to SHUTDOWN-PENDING
*/
struct sctp_nets *netp;
if (stcb->asoc.alternate) {
netp = stcb->asoc.alternate;
} else {
netp = stcb->asoc.primary_destination;
}
SCTP_ADD_SUBSTATE(stcb, SCTP_STATE_SHUTDOWN_PENDING);
sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWNGUARD, stcb->sctp_ep, stcb,
netp);
if ((*asoc->ss_functions.sctp_ss_is_user_msgs_incomplete) (stcb, asoc)) {
SCTP_ADD_SUBSTATE(stcb, SCTP_STATE_PARTIAL_MSG_LEFT);
}
if (TAILQ_EMPTY(&asoc->send_queue) &&
TAILQ_EMPTY(&asoc->sent_queue) &&
(asoc->state & SCTP_STATE_PARTIAL_MSG_LEFT)) {
struct mbuf *op_err;
abort_anyway:
op_err = sctp_generate_cause(SCTP_CAUSE_USER_INITIATED_ABT, "");
stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_USRREQ + SCTP_LOC_4;
sctp_send_abort_tcb(stcb, op_err, SCTP_SO_LOCKED);
SCTP_STAT_INCR_COUNTER32(sctps_aborted);
if ((SCTP_GET_STATE(stcb) == SCTP_STATE_OPEN) ||
(SCTP_GET_STATE(stcb) == SCTP_STATE_SHUTDOWN_RECEIVED)) {
SCTP_STAT_DECR_GAUGE32(sctps_currestab);
}
SCTP_INP_RUNLOCK(inp);
(void)sctp_free_assoc(inp, stcb, SCTP_NORMAL_PROC,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_5);
return (0);
} else {
sctp_chunk_output(inp, stcb, SCTP_OUTPUT_FROM_CLOSING, SCTP_SO_LOCKED);
}
}
soisdisconnecting(so);
SCTP_TCB_UNLOCK(stcb);
SCTP_INP_RUNLOCK(inp);
return (0);
}
/* not reached */
} else {
/* UDP model does not support this */
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
return (EOPNOTSUPP);
}
}
int
sctp_flush(struct socket *so, int how)
{
/*
* We will just clear out the values and let subsequent close clear
* out the data, if any. Note if the user did a shutdown(SHUT_RD)
* they will not be able to read the data, the socket will block
* that from happening.
*/
struct sctp_inpcb *inp;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
SCTP_INP_RLOCK(inp);
/* For the 1 to many model this does nothing */
if (inp->sctp_flags & SCTP_PCB_FLAGS_UDPTYPE) {
SCTP_INP_RUNLOCK(inp);
return (0);
}
SCTP_INP_RUNLOCK(inp);
if ((how == PRU_FLUSH_RD) || (how == PRU_FLUSH_RDWR)) {
/*
* First make sure the sb will be happy, we don't use these
* except maybe the count
*/
SCTP_INP_WLOCK(inp);
SCTP_INP_READ_LOCK(inp);
inp->sctp_flags |= SCTP_PCB_FLAGS_SOCKET_CANT_READ;
SCTP_INP_READ_UNLOCK(inp);
SCTP_INP_WUNLOCK(inp);
so->so_rcv.sb_cc = 0;
so->so_rcv.sb_mbcnt = 0;
so->so_rcv.sb_mb = NULL;
}
if ((how == PRU_FLUSH_WR) || (how == PRU_FLUSH_RDWR)) {
/*
* First make sure the sb will be happy, we don't use these
* except maybe the count
*/
so->so_snd.sb_cc = 0;
so->so_snd.sb_mbcnt = 0;
so->so_snd.sb_mb = NULL;
}
return (0);
}
int
sctp_shutdown(struct socket *so)
{
struct sctp_inpcb *inp;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
SCTP_INP_RLOCK(inp);
/* For the UDP model this is an invalid call */
if (!((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL))) {
/* Restore the flags that the soshutdown took away. */
SOCKBUF_LOCK(&so->so_rcv);
so->so_rcv.sb_state &= ~SBS_CANTRCVMORE;
SOCKBUF_UNLOCK(&so->so_rcv);
/* This proc will wakeup for read and do nothing (I hope) */
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
return (EOPNOTSUPP);
} else {
/*
* Ok, if we reach here its the TCP model and it is either a
* SHUT_WR or SHUT_RDWR. This means we put the shutdown flag
* against it.
*/
struct sctp_tcb *stcb;
struct sctp_association *asoc;
struct sctp_nets *netp;
if ((so->so_state &
(SS_ISCONNECTED | SS_ISCONNECTING | SS_ISDISCONNECTING)) == 0) {
SCTP_INP_RUNLOCK(inp);
return (ENOTCONN);
}
socantsendmore(so);
stcb = LIST_FIRST(&inp->sctp_asoc_list);
if (stcb == NULL) {
/*
* Ok, we hit the case that the shutdown call was
* made after an abort or something. Nothing to do
* now.
*/
SCTP_INP_RUNLOCK(inp);
return (0);
}
SCTP_TCB_LOCK(stcb);
asoc = &stcb->asoc;
if (asoc->state & SCTP_STATE_ABOUT_TO_BE_FREED) {
SCTP_TCB_UNLOCK(stcb);
SCTP_INP_RUNLOCK(inp);
return (0);
}
if ((SCTP_GET_STATE(stcb) != SCTP_STATE_COOKIE_WAIT) &&
(SCTP_GET_STATE(stcb) != SCTP_STATE_COOKIE_ECHOED) &&
(SCTP_GET_STATE(stcb) != SCTP_STATE_OPEN)) {
/*
* If we are not in or before ESTABLISHED, there is
* no protocol action required.
*/
SCTP_TCB_UNLOCK(stcb);
SCTP_INP_RUNLOCK(inp);
return (0);
}
if (stcb->asoc.alternate) {
netp = stcb->asoc.alternate;
} else {
netp = stcb->asoc.primary_destination;
}
if ((SCTP_GET_STATE(stcb) == SCTP_STATE_OPEN) &&
TAILQ_EMPTY(&asoc->send_queue) &&
TAILQ_EMPTY(&asoc->sent_queue) &&
(asoc->stream_queue_cnt == 0)) {
if ((*asoc->ss_functions.sctp_ss_is_user_msgs_incomplete) (stcb, asoc)) {
goto abort_anyway;
}
/* there is nothing queued to send, so I'm done... */
SCTP_STAT_DECR_GAUGE32(sctps_currestab);
SCTP_SET_STATE(stcb, SCTP_STATE_SHUTDOWN_SENT);
sctp_stop_timers_for_shutdown(stcb);
sctp_send_shutdown(stcb, netp);
sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWN,
stcb->sctp_ep, stcb, netp);
} else {
/*
* We still have (or just got) data to send, so set
* SHUTDOWN_PENDING.
*/
SCTP_ADD_SUBSTATE(stcb, SCTP_STATE_SHUTDOWN_PENDING);
if ((*asoc->ss_functions.sctp_ss_is_user_msgs_incomplete) (stcb, asoc)) {
SCTP_ADD_SUBSTATE(stcb, SCTP_STATE_PARTIAL_MSG_LEFT);
}
if (TAILQ_EMPTY(&asoc->send_queue) &&
TAILQ_EMPTY(&asoc->sent_queue) &&
(asoc->state & SCTP_STATE_PARTIAL_MSG_LEFT)) {
struct mbuf *op_err;
abort_anyway:
op_err = sctp_generate_cause(SCTP_CAUSE_USER_INITIATED_ABT, "");
stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_USRREQ + SCTP_LOC_6;
sctp_abort_an_association(stcb->sctp_ep, stcb,
op_err, SCTP_SO_LOCKED);
SCTP_INP_RUNLOCK(inp);
return (0);
}
}
sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWNGUARD, stcb->sctp_ep, stcb, netp);
/*
* XXX: Why do this in the case where we still have data
* queued?
*/
sctp_chunk_output(inp, stcb, SCTP_OUTPUT_FROM_CLOSING, SCTP_SO_LOCKED);
SCTP_TCB_UNLOCK(stcb);
SCTP_INP_RUNLOCK(inp);
return (0);
}
}
/*
* Copies a "user" presentable address and removes embedded scope, etc.
* Returns 0 on success, 1 on error.
*/
static uint32_t
sctp_fill_user_address(struct sockaddr_storage *ss, struct sockaddr *sa)
{
#ifdef INET6
struct sockaddr_in6 lsa6;
sa = (struct sockaddr *)sctp_recover_scope((struct sockaddr_in6 *)sa,
&lsa6);
#endif
memcpy(ss, sa, sa->sa_len);
return (0);
}
/*
* NOTE: assumes addr lock is held
*/
static size_t
sctp_fill_up_addresses_vrf(struct sctp_inpcb *inp,
struct sctp_tcb *stcb,
size_t limit,
struct sockaddr_storage *sas,
uint32_t vrf_id)
{
struct sctp_ifn *sctp_ifn;
struct sctp_ifa *sctp_ifa;
size_t actual;
int loopback_scope;
#if defined(INET)
int ipv4_local_scope, ipv4_addr_legal;
#endif
#if defined(INET6)
int local_scope, site_scope, ipv6_addr_legal;
#endif
struct sctp_vrf *vrf;
actual = 0;
if (limit <= 0)
return (actual);
if (stcb) {
/* Turn on all the appropriate scope */
loopback_scope = stcb->asoc.scope.loopback_scope;
#if defined(INET)
ipv4_local_scope = stcb->asoc.scope.ipv4_local_scope;
ipv4_addr_legal = stcb->asoc.scope.ipv4_addr_legal;
#endif
#if defined(INET6)
local_scope = stcb->asoc.scope.local_scope;
site_scope = stcb->asoc.scope.site_scope;
ipv6_addr_legal = stcb->asoc.scope.ipv6_addr_legal;
#endif
} else {
/* Use generic values for endpoints. */
loopback_scope = 1;
#if defined(INET)
ipv4_local_scope = 1;
#endif
#if defined(INET6)
local_scope = 1;
site_scope = 1;
#endif
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) {
#if defined(INET6)
ipv6_addr_legal = 1;
#endif
#if defined(INET)
if (SCTP_IPV6_V6ONLY(inp)) {
ipv4_addr_legal = 0;
} else {
ipv4_addr_legal = 1;
}
#endif
} else {
#if defined(INET6)
ipv6_addr_legal = 0;
#endif
#if defined(INET)
ipv4_addr_legal = 1;
#endif
}
}
vrf = sctp_find_vrf(vrf_id);
if (vrf == NULL) {
return (0);
}
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUNDALL) {
LIST_FOREACH(sctp_ifn, &vrf->ifnlist, next_ifn) {
if ((loopback_scope == 0) &&
SCTP_IFN_IS_IFT_LOOP(sctp_ifn)) {
/* Skip loopback if loopback_scope not set */
continue;
}
LIST_FOREACH(sctp_ifa, &sctp_ifn->ifalist, next_ifa) {
if (stcb) {
/*
* For the BOUND-ALL case, the list
* associated with a TCB is always
* considered a reverse list; i.e.
* it lists addresses that are NOT
* part of the association. If this
* is one of those we must skip it.
*/
if (sctp_is_addr_restricted(stcb,
sctp_ifa)) {
continue;
}
}
switch (sctp_ifa->address.sa.sa_family) {
#ifdef INET
case AF_INET:
if (ipv4_addr_legal) {
struct sockaddr_in *sin;
sin = &sctp_ifa->address.sin;
if (sin->sin_addr.s_addr == 0) {
/* Skip unspecified addresses. */
continue;
}
if (prison_check_ip4(inp->ip_inp.inp.inp_cred,
&sin->sin_addr) != 0) {
continue;
}
if ((ipv4_local_scope == 0) &&
(IN4_ISPRIVATE_ADDRESS(&sin->sin_addr))) {
continue;
}
#ifdef INET6
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4)) {
in6_sin_2_v4mapsin6(sin, (struct sockaddr_in6 *)sas);
((struct sockaddr_in6 *)sas)->sin6_port = inp->sctp_lport;
sas = (struct sockaddr_storage *)((caddr_t)sas + sizeof(struct sockaddr_in6));
actual += sizeof(struct sockaddr_in6);
} else {
#endif
memcpy(sas, sin, sizeof(*sin));
((struct sockaddr_in *)sas)->sin_port = inp->sctp_lport;
sas = (struct sockaddr_storage *)((caddr_t)sas + sizeof(*sin));
actual += sizeof(*sin);
#ifdef INET6
}
#endif
if (actual >= limit) {
return (actual);
}
} else {
continue;
}
break;
#endif
#ifdef INET6
case AF_INET6:
if (ipv6_addr_legal) {
struct sockaddr_in6 *sin6;
sin6 = &sctp_ifa->address.sin6;
if (IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)) {
/* Skip unspecified addresses. */
continue;
}
if (prison_check_ip6(inp->ip_inp.inp.inp_cred,
&sin6->sin6_addr) != 0) {
continue;
}
if (IN6_IS_ADDR_LINKLOCAL(&sin6->sin6_addr)) {
if (local_scope == 0)
continue;
if (sin6->sin6_scope_id == 0) {
if (sa6_recoverscope(sin6) != 0)
/* bad link-local address */
continue;
}
}
if ((site_scope == 0) &&
(IN6_IS_ADDR_SITELOCAL(&sin6->sin6_addr))) {
continue;
}
memcpy(sas, sin6, sizeof(*sin6));
((struct sockaddr_in6 *)sas)->sin6_port = inp->sctp_lport;
sas = (struct sockaddr_storage *)((caddr_t)sas + sizeof(*sin6));
actual += sizeof(*sin6);
if (actual >= limit) {
return (actual);
}
} else {
continue;
}
break;
#endif
default:
/* TSNH */
break;
}
}
}
} else {
struct sctp_laddr *laddr;
LIST_FOREACH(laddr, &inp->sctp_addr_list, sctp_nxt_addr) {
if (stcb) {
if (sctp_is_addr_restricted(stcb, laddr->ifa)) {
continue;
}
}
if (sctp_fill_user_address(sas, &laddr->ifa->address.sa))
continue;
switch (laddr->ifa->address.sa.sa_family) {
#ifdef INET
case AF_INET:
((struct sockaddr_in *)sas)->sin_port = inp->sctp_lport;
break;
#endif
#ifdef INET6
case AF_INET6:
((struct sockaddr_in6 *)sas)->sin6_port = inp->sctp_lport;
break;
#endif
default:
/* TSNH */
break;
}
sas = (struct sockaddr_storage *)((caddr_t)sas +
laddr->ifa->address.sa.sa_len);
actual += laddr->ifa->address.sa.sa_len;
if (actual >= limit) {
return (actual);
}
}
}
return (actual);
}
static size_t
sctp_fill_up_addresses(struct sctp_inpcb *inp,
struct sctp_tcb *stcb,
size_t limit,
struct sockaddr_storage *sas)
{
size_t size = 0;
SCTP_IPI_ADDR_RLOCK();
/* fill up addresses for the endpoint's default vrf */
size = sctp_fill_up_addresses_vrf(inp, stcb, limit, sas,
inp->def_vrf_id);
SCTP_IPI_ADDR_RUNLOCK();
return (size);
}
/*
* NOTE: assumes addr lock is held
*/
static int
sctp_count_max_addresses_vrf(struct sctp_inpcb *inp, uint32_t vrf_id)
{
int cnt = 0;
struct sctp_vrf *vrf = NULL;
/*
* In both the sub-set bound and bound_all cases we return the
* MAXIMUM number of addresses that you COULD get. In reality
* the sub-set bound may have an exclusion list for a given TCB
* OR, in the bound-all case, a TCB may NOT include the loopback
* or other addresses as well.
*/
vrf = sctp_find_vrf(vrf_id);
if (vrf == NULL) {
return (0);
}
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUNDALL) {
struct sctp_ifn *sctp_ifn;
struct sctp_ifa *sctp_ifa;
LIST_FOREACH(sctp_ifn, &vrf->ifnlist, next_ifn) {
LIST_FOREACH(sctp_ifa, &sctp_ifn->ifalist, next_ifa) {
/* Count them if they are the right type */
switch (sctp_ifa->address.sa.sa_family) {
#ifdef INET
case AF_INET:
#ifdef INET6
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4))
cnt += sizeof(struct sockaddr_in6);
else
cnt += sizeof(struct sockaddr_in);
#else
cnt += sizeof(struct sockaddr_in);
#endif
break;
#endif
#ifdef INET6
case AF_INET6:
cnt += sizeof(struct sockaddr_in6);
break;
#endif
default:
break;
}
}
}
} else {
struct sctp_laddr *laddr;
LIST_FOREACH(laddr, &inp->sctp_addr_list, sctp_nxt_addr) {
switch (laddr->ifa->address.sa.sa_family) {
#ifdef INET
case AF_INET:
#ifdef INET6
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4))
cnt += sizeof(struct sockaddr_in6);
else
cnt += sizeof(struct sockaddr_in);
#else
cnt += sizeof(struct sockaddr_in);
#endif
break;
#endif
#ifdef INET6
case AF_INET6:
cnt += sizeof(struct sockaddr_in6);
break;
#endif
default:
break;
}
}
}
return (cnt);
}
static int
sctp_count_max_addresses(struct sctp_inpcb *inp)
{
int cnt = 0;
SCTP_IPI_ADDR_RLOCK();
/* count addresses for the endpoint's default VRF */
cnt = sctp_count_max_addresses_vrf(inp, inp->def_vrf_id);
SCTP_IPI_ADDR_RUNLOCK();
return (cnt);
}
static int
sctp_do_connect_x(struct socket *so, struct sctp_inpcb *inp, void *optval,
size_t optsize, void *p, int delay)
{
int error = 0;
int creat_lock_on = 0;
struct sctp_tcb *stcb = NULL;
struct sockaddr *sa;
unsigned int num_v6 = 0, num_v4 = 0, *totaddrp, totaddr;
uint32_t vrf_id;
int bad_addresses = 0;
sctp_assoc_t *a_id;
SCTPDBG(SCTP_DEBUG_PCB1, "Connectx called\n");
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) &&
(inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED)) {
/* We are already connected AND the TCP model */
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, EADDRINUSE);
return (EADDRINUSE);
}
if ((inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) &&
(sctp_is_feature_off(inp, SCTP_PCB_FLAGS_PORTREUSE))) {
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
if (inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED) {
SCTP_INP_RLOCK(inp);
stcb = LIST_FIRST(&inp->sctp_asoc_list);
SCTP_INP_RUNLOCK(inp);
}
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, EALREADY);
return (EALREADY);
}
SCTP_INP_INCR_REF(inp);
SCTP_ASOC_CREATE_LOCK(inp);
creat_lock_on = 1;
if ((inp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_ALLGONE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_GONE)) {
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, EFAULT);
error = EFAULT;
goto out_now;
}
totaddrp = (unsigned int *)optval;
totaddr = *totaddrp;
sa = (struct sockaddr *)(totaddrp + 1);
stcb = sctp_connectx_helper_find(inp, sa, &totaddr, &num_v4, &num_v6, &error, (unsigned int)(optsize - sizeof(int)), &bad_addresses);
if ((stcb != NULL) || bad_addresses) {
/* Already have, or are bringing up, an association */
SCTP_ASOC_CREATE_UNLOCK(inp);
creat_lock_on = 0;
if (stcb)
SCTP_TCB_UNLOCK(stcb);
if (bad_addresses == 0) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EALREADY);
error = EALREADY;
}
goto out_now;
}
#ifdef INET6
if (((inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) == 0) &&
(num_v6 > 0)) {
error = EINVAL;
goto out_now;
}
if ((inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) &&
(num_v4 > 0)) {
struct in6pcb *inp6;
inp6 = (struct in6pcb *)inp;
if (SCTP_IPV6_V6ONLY(inp6)) {
/*
* If the IPV6_V6ONLY flag is set, ignore connections
* destined to a v4 addr or v4-mapped addr.
*/
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_now;
}
}
#endif /* INET6 */
if ((inp->sctp_flags & SCTP_PCB_FLAGS_UNBOUND) ==
SCTP_PCB_FLAGS_UNBOUND) {
/* Bind an ephemeral port */
error = sctp_inpcb_bind(so, NULL, NULL, p);
if (error) {
goto out_now;
}
}
/* FIX ME: do we want to pass in a vrf on the connect call? */
vrf_id = inp->def_vrf_id;
/* We are GOOD to go */
stcb = sctp_aloc_assoc(inp, sa, &error, 0, vrf_id,
inp->sctp_ep.pre_open_stream_count,
inp->sctp_ep.port,
(struct thread *)p
);
if (stcb == NULL) {
/* Gak! no memory */
goto out_now;
}
if (stcb->sctp_ep->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) {
stcb->sctp_ep->sctp_flags |= SCTP_PCB_FLAGS_CONNECTED;
/* Set the connected flag so we can queue data */
soisconnecting(so);
}
SCTP_SET_STATE(stcb, SCTP_STATE_COOKIE_WAIT);
/* move to second address */
switch (sa->sa_family) {
#ifdef INET
case AF_INET:
sa = (struct sockaddr *)((caddr_t)sa + sizeof(struct sockaddr_in));
break;
#endif
#ifdef INET6
case AF_INET6:
sa = (struct sockaddr *)((caddr_t)sa + sizeof(struct sockaddr_in6));
break;
#endif
default:
break;
}
error = 0;
sctp_connectx_helper_add(stcb, sa, (totaddr - 1), &error);
/* Fill in the return id */
if (error) {
(void)sctp_free_assoc(inp, stcb, SCTP_PCBFREE_FORCE,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_7);
goto out_now;
}
a_id = (sctp_assoc_t *)optval;
*a_id = sctp_get_associd(stcb);
/* initialize authentication parameters for the assoc */
sctp_initialize_auth_params(inp, stcb);
if (delay) {
/* doing delayed connection */
stcb->asoc.delayed_connection = 1;
sctp_timer_start(SCTP_TIMER_TYPE_INIT, inp, stcb, stcb->asoc.primary_destination);
} else {
(void)SCTP_GETTIME_TIMEVAL(&stcb->asoc.time_entered);
sctp_send_initiate(inp, stcb, SCTP_SO_LOCKED);
}
SCTP_TCB_UNLOCK(stcb);
out_now:
if (creat_lock_on) {
SCTP_ASOC_CREATE_UNLOCK(inp);
}
SCTP_INP_DECR_REF(inp);
return (error);
}
#define SCTP_FIND_STCB(inp, stcb, assoc_id) { \
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||\
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL)) { \
SCTP_INP_RLOCK(inp); \
stcb = LIST_FIRST(&inp->sctp_asoc_list); \
if (stcb) { \
SCTP_TCB_LOCK(stcb); \
} \
SCTP_INP_RUNLOCK(inp); \
} else if (assoc_id > SCTP_ALL_ASSOC) { \
stcb = sctp_findassociation_ep_asocid(inp, assoc_id, 1); \
if (stcb == NULL) { \
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT); \
error = ENOENT; \
break; \
} \
} else { \
stcb = NULL; \
} \
}
#define SCTP_CHECK_AND_CAST(destp, srcp, type, size) {\
if (size < sizeof(type)) { \
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL); \
error = EINVAL; \
break; \
} else { \
destp = (type *)srcp; \
} \
}
static int
sctp_getopt(struct socket *so, int optname, void *optval, size_t *optsize,
void *p)
{
struct sctp_inpcb *inp = NULL;
int error, val = 0;
struct sctp_tcb *stcb = NULL;
if (optval == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return EINVAL;
}
error = 0;
switch (optname) {
case SCTP_NODELAY:
case SCTP_AUTOCLOSE:
case SCTP_EXPLICIT_EOR:
case SCTP_AUTO_ASCONF:
case SCTP_DISABLE_FRAGMENTS:
case SCTP_I_WANT_MAPPED_V4_ADDR:
case SCTP_USE_EXT_RCVINFO:
SCTP_INP_RLOCK(inp);
switch (optname) {
case SCTP_DISABLE_FRAGMENTS:
val = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NO_FRAGMENT);
break;
case SCTP_I_WANT_MAPPED_V4_ADDR:
val = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4);
break;
case SCTP_AUTO_ASCONF:
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUNDALL) {
/* only valid for bound all sockets */
val = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_AUTO_ASCONF);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto flags_out;
}
break;
case SCTP_EXPLICIT_EOR:
val = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_EXPLICIT_EOR);
break;
case SCTP_NODELAY:
val = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NODELAY);
break;
case SCTP_USE_EXT_RCVINFO:
val = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_EXT_RCVINFO);
break;
case SCTP_AUTOCLOSE:
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_AUTOCLOSE))
val = TICKS_TO_SEC(inp->sctp_ep.auto_close_time);
else
val = 0;
break;
default:
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOPROTOOPT);
error = ENOPROTOOPT;
} /* end switch (optname) */
if (*optsize < sizeof(val)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
flags_out:
SCTP_INP_RUNLOCK(inp);
if (error == 0) {
/* return the option value */
*(int *)optval = val;
*optsize = sizeof(val);
}
break;
case SCTP_GET_PACKET_LOG:
{
#ifdef SCTP_PACKET_LOGGING
uint8_t *target;
int ret;
SCTP_CHECK_AND_CAST(target, optval, uint8_t, *optsize);
ret = sctp_copy_out_packet_log(target, (int)*optsize);
*optsize = ret;
#else
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
#endif
break;
}
case SCTP_REUSE_PORT:
{
uint32_t *value;
if ((inp->sctp_flags & SCTP_PCB_FLAGS_UDPTYPE)) {
/* Can't do this for a 1-m socket */
error = EINVAL;
break;
}
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
*value = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_PORTREUSE);
*optsize = sizeof(uint32_t);
break;
}
case SCTP_PARTIAL_DELIVERY_POINT:
{
uint32_t *value;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
*value = inp->partial_delivery_point;
*optsize = sizeof(uint32_t);
break;
}
case SCTP_FRAGMENT_INTERLEAVE:
{
uint32_t *value;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_FRAG_INTERLEAVE)) {
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_INTERLEAVE_STRMS)) {
*value = SCTP_FRAG_LEVEL_2;
} else {
*value = SCTP_FRAG_LEVEL_1;
}
} else {
*value = SCTP_FRAG_LEVEL_0;
}
*optsize = sizeof(uint32_t);
break;
}
case SCTP_INTERLEAVING_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.idata_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
if (inp->idata_supported) {
av->assoc_value = 1;
} else {
av->assoc_value = 0;
}
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_CMT_ON_OFF:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.sctp_cmt_on_off;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->sctp_cmt_on_off;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_PLUGGABLE_CC:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.congestion_control_module;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->sctp_ep.sctp_default_cc_module;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_CC_OPTION:
{
struct sctp_cc_option *cc_opt;
SCTP_CHECK_AND_CAST(cc_opt, optval, struct sctp_cc_option, *optsize);
SCTP_FIND_STCB(inp, stcb, cc_opt->aid_value.assoc_id);
if (stcb == NULL) {
error = EINVAL;
} else {
if (stcb->asoc.cc_functions.sctp_cwnd_socket_option == NULL) {
error = ENOTSUP;
} else {
error = (*stcb->asoc.cc_functions.sctp_cwnd_socket_option) (stcb, 0, cc_opt);
*optsize = sizeof(struct sctp_cc_option);
}
SCTP_TCB_UNLOCK(stcb);
}
break;
}
case SCTP_PLUGGABLE_SS:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.stream_scheduling_module;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->sctp_ep.sctp_default_ss_module;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_SS_VALUE:
{
struct sctp_stream_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_stream_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
if ((av->stream_id >= stcb->asoc.streamoutcnt) ||
(stcb->asoc.ss_functions.sctp_ss_get_value(stcb, &stcb->asoc, &stcb->asoc.strmout[av->stream_id],
&av->stream_value) < 0)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
} else {
*optsize = sizeof(struct sctp_stream_value);
}
SCTP_TCB_UNLOCK(stcb);
} else {
/*
* Can't get stream value without
* association
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
break;
}
case SCTP_GET_ADDR_LEN:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
error = EINVAL;
#ifdef INET
if (av->assoc_value == AF_INET) {
av->assoc_value = sizeof(struct sockaddr_in);
error = 0;
}
#endif
#ifdef INET6
if (av->assoc_value == AF_INET6) {
av->assoc_value = sizeof(struct sockaddr_in6);
error = 0;
}
#endif
if (error) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
} else {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_GET_ASSOC_NUMBER:
{
uint32_t *value, cnt;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
SCTP_INP_RLOCK(inp);
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL)) {
/* Can't do this for a 1-1 socket */
error = EINVAL;
SCTP_INP_RUNLOCK(inp);
break;
}
cnt = 0;
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
cnt++;
}
SCTP_INP_RUNLOCK(inp);
*value = cnt;
*optsize = sizeof(uint32_t);
break;
}
case SCTP_GET_ASSOC_ID_LIST:
{
struct sctp_assoc_ids *ids;
uint32_t at;
size_t limit;
SCTP_CHECK_AND_CAST(ids, optval, struct sctp_assoc_ids, *optsize);
SCTP_INP_RLOCK(inp);
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL)) {
/* Can't do this for a 1-1 socket */
error = EINVAL;
SCTP_INP_RUNLOCK(inp);
break;
}
at = 0;
limit = (*optsize - sizeof(uint32_t)) / sizeof(sctp_assoc_t);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
if (at < limit) {
ids->gaids_assoc_id[at++] = sctp_get_associd(stcb);
if (at == 0) {
error = EINVAL;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else {
error = EINVAL;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
}
SCTP_INP_RUNLOCK(inp);
if (error == 0) {
ids->gaids_number_of_ids = at;
*optsize = ((at * sizeof(sctp_assoc_t)) + sizeof(uint32_t));
}
break;
}
case SCTP_CONTEXT:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.context;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->sctp_context;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_VRF_ID:
{
uint32_t *default_vrfid;
SCTP_CHECK_AND_CAST(default_vrfid, optval, uint32_t, *optsize);
*default_vrfid = inp->def_vrf_id;
*optsize = sizeof(uint32_t);
break;
}
case SCTP_GET_ASOC_VRF:
{
struct sctp_assoc_value *id;
SCTP_CHECK_AND_CAST(id, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, id->assoc_id);
if (stcb == NULL) {
error = EINVAL;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
} else {
id->assoc_value = stcb->asoc.vrf_id;
SCTP_TCB_UNLOCK(stcb);
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_GET_VRF_IDS:
{
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
break;
}
case SCTP_GET_NONCE_VALUES:
{
struct sctp_get_nonce_values *gnv;
SCTP_CHECK_AND_CAST(gnv, optval, struct sctp_get_nonce_values, *optsize);
SCTP_FIND_STCB(inp, stcb, gnv->gn_assoc_id);
if (stcb) {
gnv->gn_peers_tag = stcb->asoc.peer_vtag;
gnv->gn_local_tag = stcb->asoc.my_vtag;
SCTP_TCB_UNLOCK(stcb);
*optsize = sizeof(struct sctp_get_nonce_values);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTCONN);
error = ENOTCONN;
}
break;
}
case SCTP_DELAYED_SACK:
{
struct sctp_sack_info *sack;
SCTP_CHECK_AND_CAST(sack, optval, struct sctp_sack_info, *optsize);
SCTP_FIND_STCB(inp, stcb, sack->sack_assoc_id);
if (stcb) {
sack->sack_delay = stcb->asoc.delayed_ack;
sack->sack_freq = stcb->asoc.sack_freq;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(sack->sack_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
sack->sack_delay = TICKS_TO_MSEC(inp->sctp_ep.sctp_timeoutticks[SCTP_TIMER_RECV]);
sack->sack_freq = inp->sctp_ep.sctp_sack_freq;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_sack_info);
}
break;
}
case SCTP_GET_SNDBUF_USE:
{
struct sctp_sockstat *ss;
SCTP_CHECK_AND_CAST(ss, optval, struct sctp_sockstat, *optsize);
SCTP_FIND_STCB(inp, stcb, ss->ss_assoc_id);
if (stcb) {
ss->ss_total_sndbuf = stcb->asoc.total_output_queue_size;
ss->ss_total_recv_buf = (stcb->asoc.size_on_reasm_queue +
stcb->asoc.size_on_all_streams);
SCTP_TCB_UNLOCK(stcb);
*optsize = sizeof(struct sctp_sockstat);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTCONN);
error = ENOTCONN;
}
break;
}
case SCTP_MAX_BURST:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.max_burst;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->sctp_ep.max_burst;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_MAXSEG:
{
struct sctp_assoc_value *av;
int ovh;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = sctp_get_frag_point(stcb, &stcb->asoc);
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) {
ovh = SCTP_MED_OVERHEAD;
} else {
ovh = SCTP_MED_V4_OVERHEAD;
}
if (inp->sctp_frag_point >= SCTP_DEFAULT_MAXSEGMENT)
av->assoc_value = 0;
else
av->assoc_value = inp->sctp_frag_point - ovh;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_GET_STAT_LOG:
error = sctp_fill_stat_log(optval, optsize);
break;
case SCTP_EVENTS:
{
struct sctp_event_subscribe *events;
SCTP_CHECK_AND_CAST(events, optval, struct sctp_event_subscribe, *optsize);
memset(events, 0, sizeof(struct sctp_event_subscribe));
SCTP_INP_RLOCK(inp);
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVDATAIOEVNT))
events->sctp_data_io_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVASSOCEVNT))
events->sctp_association_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVPADDREVNT))
events->sctp_address_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVSENDFAILEVNT))
events->sctp_send_failure_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVPEERERR))
events->sctp_peer_error_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVSHUTDOWNEVNT))
events->sctp_shutdown_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_PDAPIEVNT))
events->sctp_partial_delivery_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_ADAPTATIONEVNT))
events->sctp_adaptation_layer_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_AUTHEVNT))
events->sctp_authentication_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_DRYEVNT))
events->sctp_sender_dry_event = 1;
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_STREAM_RESETEVNT))
events->sctp_stream_reset_event = 1;
SCTP_INP_RUNLOCK(inp);
*optsize = sizeof(struct sctp_event_subscribe);
break;
}
case SCTP_ADAPTATION_LAYER:
{
uint32_t *value;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
SCTP_INP_RLOCK(inp);
*value = inp->sctp_ep.adaptation_layer_indicator;
SCTP_INP_RUNLOCK(inp);
*optsize = sizeof(uint32_t);
break;
}
case SCTP_SET_INITIAL_DBG_SEQ:
{
uint32_t *value;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
SCTP_INP_RLOCK(inp);
*value = inp->sctp_ep.initial_sequence_debug;
SCTP_INP_RUNLOCK(inp);
*optsize = sizeof(uint32_t);
break;
}
case SCTP_GET_LOCAL_ADDR_SIZE:
{
uint32_t *value;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
SCTP_INP_RLOCK(inp);
*value = sctp_count_max_addresses(inp);
SCTP_INP_RUNLOCK(inp);
*optsize = sizeof(uint32_t);
break;
}
case SCTP_GET_REMOTE_ADDR_SIZE:
{
uint32_t *value;
size_t size;
struct sctp_nets *net;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, *optsize);
/* FIXME MT: change to sctp_assoc_value? */
SCTP_FIND_STCB(inp, stcb, (sctp_assoc_t)*value);
if (stcb) {
size = 0;
/* Count the sizes */
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
switch (net->ro._l_addr.sa.sa_family) {
#ifdef INET
case AF_INET:
#ifdef INET6
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4)) {
size += sizeof(struct sockaddr_in6);
} else {
size += sizeof(struct sockaddr_in);
}
#else
size += sizeof(struct sockaddr_in);
#endif
break;
#endif
#ifdef INET6
case AF_INET6:
size += sizeof(struct sockaddr_in6);
break;
#endif
default:
break;
}
}
SCTP_TCB_UNLOCK(stcb);
*value = (uint32_t)size;
*optsize = sizeof(uint32_t);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTCONN);
error = ENOTCONN;
}
break;
}
case SCTP_GET_PEER_ADDRESSES:
/*
* Get the address information; an array is passed in
* that we fill up and pack.
*/
{
size_t cpsz, left;
struct sockaddr_storage *sas;
struct sctp_nets *net;
struct sctp_getaddresses *saddr;
SCTP_CHECK_AND_CAST(saddr, optval, struct sctp_getaddresses, *optsize);
SCTP_FIND_STCB(inp, stcb, saddr->sget_assoc_id);
if (stcb) {
left = (*optsize) - sizeof(struct sctp_getaddresses);
*optsize = sizeof(struct sctp_getaddresses);
sas = (struct sockaddr_storage *)&saddr->addr[0];
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
switch (net->ro._l_addr.sa.sa_family) {
#ifdef INET
case AF_INET:
#ifdef INET6
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4)) {
cpsz = sizeof(struct sockaddr_in6);
} else {
cpsz = sizeof(struct sockaddr_in);
}
#else
cpsz = sizeof(struct sockaddr_in);
#endif
break;
#endif
#ifdef INET6
case AF_INET6:
cpsz = sizeof(struct sockaddr_in6);
break;
#endif
default:
cpsz = 0;
break;
}
if (cpsz == 0) {
break;
}
if (left < cpsz) {
/* not enough room. */
break;
}
#if defined(INET) && defined(INET6)
if ((sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4)) &&
(net->ro._l_addr.sa.sa_family == AF_INET)) {
/* Must map the address */
in6_sin_2_v4mapsin6(&net->ro._l_addr.sin,
(struct sockaddr_in6 *)sas);
} else {
memcpy(sas, &net->ro._l_addr, cpsz);
}
#else
memcpy(sas, &net->ro._l_addr, cpsz);
#endif
((struct sockaddr_in *)sas)->sin_port = stcb->rport;
sas = (struct sockaddr_storage *)((caddr_t)sas + cpsz);
left -= cpsz;
*optsize += cpsz;
}
SCTP_TCB_UNLOCK(stcb);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
}
break;
}
case SCTP_GET_LOCAL_ADDRESSES:
{
size_t limit, actual;
struct sockaddr_storage *sas;
struct sctp_getaddresses *saddr;
SCTP_CHECK_AND_CAST(saddr, optval, struct sctp_getaddresses, *optsize);
SCTP_FIND_STCB(inp, stcb, saddr->sget_assoc_id);
sas = (struct sockaddr_storage *)&saddr->addr[0];
limit = *optsize - sizeof(sctp_assoc_t);
actual = sctp_fill_up_addresses(inp, stcb, limit, sas);
if (stcb) {
SCTP_TCB_UNLOCK(stcb);
}
*optsize = sizeof(struct sockaddr_storage) + actual;
break;
}
case SCTP_PEER_ADDR_PARAMS:
{
struct sctp_paddrparams *paddrp;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(paddrp, optval, struct sctp_paddrparams, *optsize);
SCTP_FIND_STCB(inp, stcb, paddrp->spp_assoc_id);
#if defined(INET) && defined(INET6)
if (paddrp->spp_address.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&paddrp->spp_address;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&paddrp->spp_address;
}
} else {
addr = (struct sockaddr *)&paddrp->spp_address;
}
#else
addr = (struct sockaddr *)&paddrp->spp_address;
#endif
if (stcb != NULL) {
net = sctp_findnet(stcb, addr);
} else {
/*
* We increment here since
* sctp_findassociation_ep_addr() will do a
* decrement if it finds the stcb, as long as
* the locked tcb (last argument) is NOT a
* TCB, i.e. is NULL.
*/
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr, &net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net == NULL)) {
#ifdef INET
if (addr->sa_family == AF_INET) {
struct sockaddr_in *sin;
sin = (struct sockaddr_in *)addr;
if (sin->sin_addr.s_addr != INADDR_ANY) {
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
#ifdef INET6
if (addr->sa_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)addr;
if (!IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)) {
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
}
if (stcb != NULL) {
/* Applies to the specific association */
paddrp->spp_flags = 0;
if (net != NULL) {
paddrp->spp_hbinterval = net->heart_beat_delay;
paddrp->spp_pathmaxrxt = net->failure_threshold;
paddrp->spp_pathmtu = net->mtu;
switch (net->ro._l_addr.sa.sa_family) {
#ifdef INET
case AF_INET:
paddrp->spp_pathmtu -= SCTP_MIN_V4_OVERHEAD;
break;
#endif
#ifdef INET6
case AF_INET6:
paddrp->spp_pathmtu -= SCTP_MIN_OVERHEAD;
break;
#endif
default:
break;
}
/* get flags for HB */
if (net->dest_state & SCTP_ADDR_NOHB) {
paddrp->spp_flags |= SPP_HB_DISABLE;
} else {
paddrp->spp_flags |= SPP_HB_ENABLE;
}
/* get flags for PMTU */
if (net->dest_state & SCTP_ADDR_NO_PMTUD) {
paddrp->spp_flags |= SPP_PMTUD_DISABLE;
} else {
paddrp->spp_flags |= SPP_PMTUD_ENABLE;
}
if (net->dscp & 0x01) {
paddrp->spp_dscp = net->dscp & 0xfc;
paddrp->spp_flags |= SPP_DSCP;
}
#ifdef INET6
if ((net->ro._l_addr.sa.sa_family == AF_INET6) &&
(net->flowlabel & 0x80000000)) {
paddrp->spp_ipv6_flowlabel = net->flowlabel & 0x000fffff;
paddrp->spp_flags |= SPP_IPV6_FLOWLABEL;
}
#endif
} else {
/*
* No destination so return default
* value
*/
paddrp->spp_pathmaxrxt = stcb->asoc.def_net_failure;
paddrp->spp_pathmtu = stcb->asoc.default_mtu;
if (stcb->asoc.default_dscp & 0x01) {
paddrp->spp_dscp = stcb->asoc.default_dscp & 0xfc;
paddrp->spp_flags |= SPP_DSCP;
}
#ifdef INET6
if (stcb->asoc.default_flowlabel & 0x80000000) {
paddrp->spp_ipv6_flowlabel = stcb->asoc.default_flowlabel & 0x000fffff;
paddrp->spp_flags |= SPP_IPV6_FLOWLABEL;
}
#endif
/* default settings should be these */
if (sctp_stcb_is_feature_on(inp, stcb, SCTP_PCB_FLAGS_DONOT_HEARTBEAT)) {
paddrp->spp_flags |= SPP_HB_DISABLE;
} else {
paddrp->spp_flags |= SPP_HB_ENABLE;
}
if (sctp_stcb_is_feature_on(inp, stcb, SCTP_PCB_FLAGS_DO_NOT_PMTUD)) {
paddrp->spp_flags |= SPP_PMTUD_DISABLE;
} else {
paddrp->spp_flags |= SPP_PMTUD_ENABLE;
}
paddrp->spp_hbinterval = stcb->asoc.heart_beat_delay;
}
paddrp->spp_assoc_id = sctp_get_associd(stcb);
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(paddrp->spp_assoc_id == SCTP_FUTURE_ASSOC)) {
/* Use endpoint defaults */
SCTP_INP_RLOCK(inp);
paddrp->spp_pathmaxrxt = inp->sctp_ep.def_net_failure;
paddrp->spp_hbinterval = TICKS_TO_MSEC(inp->sctp_ep.sctp_timeoutticks[SCTP_TIMER_HEARTBEAT]);
paddrp->spp_assoc_id = SCTP_FUTURE_ASSOC;
/* get inp's default */
if (inp->sctp_ep.default_dscp & 0x01) {
paddrp->spp_dscp = inp->sctp_ep.default_dscp & 0xfc;
paddrp->spp_flags |= SPP_DSCP;
}
#ifdef INET6
if ((inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) &&
(inp->sctp_ep.default_flowlabel & 0x80000000)) {
paddrp->spp_ipv6_flowlabel = inp->sctp_ep.default_flowlabel & 0x000fffff;
paddrp->spp_flags |= SPP_IPV6_FLOWLABEL;
}
#endif
paddrp->spp_pathmtu = inp->sctp_ep.default_mtu;
if (sctp_is_feature_off(inp, SCTP_PCB_FLAGS_DONOT_HEARTBEAT)) {
paddrp->spp_flags |= SPP_HB_ENABLE;
} else {
paddrp->spp_flags |= SPP_HB_DISABLE;
}
if (sctp_is_feature_off(inp, SCTP_PCB_FLAGS_DO_NOT_PMTUD)) {
paddrp->spp_flags |= SPP_PMTUD_ENABLE;
} else {
paddrp->spp_flags |= SPP_PMTUD_DISABLE;
}
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_paddrparams);
}
break;
}
case SCTP_GET_PEER_ADDR_INFO:
{
struct sctp_paddrinfo *paddri;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(paddri, optval, struct sctp_paddrinfo, *optsize);
SCTP_FIND_STCB(inp, stcb, paddri->spinfo_assoc_id);
#if defined(INET) && defined(INET6)
if (paddri->spinfo_address.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&paddri->spinfo_address;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&paddri->spinfo_address;
}
} else {
addr = (struct sockaddr *)&paddri->spinfo_address;
}
#else
addr = (struct sockaddr *)&paddri->spinfo_address;
#endif
if (stcb != NULL) {
net = sctp_findnet(stcb, addr);
} else {
/*
* We increment here since
* sctp_findassociation_ep_addr() will do a
* decrement if it finds the stcb, as long as
* the locked tcb (last argument) is NOT a
* TCB, i.e. is NULL.
*/
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr, &net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net != NULL)) {
if (net->dest_state & SCTP_ADDR_UNCONFIRMED) {
/* It's unconfirmed */
paddri->spinfo_state = SCTP_UNCONFIRMED;
} else if (net->dest_state & SCTP_ADDR_REACHABLE) {
/* It's active */
paddri->spinfo_state = SCTP_ACTIVE;
} else {
/* It's inactive */
paddri->spinfo_state = SCTP_INACTIVE;
}
paddri->spinfo_cwnd = net->cwnd;
paddri->spinfo_srtt = net->lastsa >> SCTP_RTT_SHIFT;
paddri->spinfo_rto = net->RTO;
paddri->spinfo_assoc_id = sctp_get_associd(stcb);
paddri->spinfo_mtu = net->mtu;
switch (addr->sa_family) {
#if defined(INET)
case AF_INET:
paddri->spinfo_mtu -= SCTP_MIN_V4_OVERHEAD;
break;
#endif
#if defined(INET6)
case AF_INET6:
paddri->spinfo_mtu -= SCTP_MIN_OVERHEAD;
break;
#endif
default:
break;
}
SCTP_TCB_UNLOCK(stcb);
*optsize = sizeof(struct sctp_paddrinfo);
} else {
if (stcb != NULL) {
SCTP_TCB_UNLOCK(stcb);
}
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
}
break;
}
case SCTP_PCB_STATUS:
{
struct sctp_pcbinfo *spcb;
SCTP_CHECK_AND_CAST(spcb, optval, struct sctp_pcbinfo, *optsize);
sctp_fill_pcbinfo(spcb);
*optsize = sizeof(struct sctp_pcbinfo);
break;
}
case SCTP_STATUS:
{
struct sctp_nets *net;
struct sctp_status *sstat;
SCTP_CHECK_AND_CAST(sstat, optval, struct sctp_status, *optsize);
SCTP_FIND_STCB(inp, stcb, sstat->sstat_assoc_id);
if (stcb == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
sstat->sstat_state = sctp_map_assoc_state(stcb->asoc.state);
sstat->sstat_assoc_id = sctp_get_associd(stcb);
sstat->sstat_rwnd = stcb->asoc.peers_rwnd;
sstat->sstat_unackdata = stcb->asoc.sent_queue_cnt;
/*
* We can't include chunks that have been passed to
* the socket layer. Only things in queue.
*/
sstat->sstat_penddata = (stcb->asoc.cnt_on_reasm_queue +
stcb->asoc.cnt_on_all_streams);
sstat->sstat_instrms = stcb->asoc.streamincnt;
sstat->sstat_outstrms = stcb->asoc.streamoutcnt;
sstat->sstat_fragmentation_point = sctp_get_frag_point(stcb, &stcb->asoc);
memcpy(&sstat->sstat_primary.spinfo_address,
&stcb->asoc.primary_destination->ro._l_addr,
((struct sockaddr *)(&stcb->asoc.primary_destination->ro._l_addr))->sa_len);
net = stcb->asoc.primary_destination;
((struct sockaddr_in *)&sstat->sstat_primary.spinfo_address)->sin_port = stcb->rport;
/*
* Again, the user can consult sctp_constants.h
* for the meaning of the network state.
*/
if (net->dest_state & SCTP_ADDR_UNCONFIRMED) {
/* It's unconfirmed */
sstat->sstat_primary.spinfo_state = SCTP_UNCONFIRMED;
} else if (net->dest_state & SCTP_ADDR_REACHABLE) {
/* It's active */
sstat->sstat_primary.spinfo_state = SCTP_ACTIVE;
} else {
/* It's inactive */
sstat->sstat_primary.spinfo_state = SCTP_INACTIVE;
}
sstat->sstat_primary.spinfo_cwnd = net->cwnd;
sstat->sstat_primary.spinfo_srtt = net->lastsa >> SCTP_RTT_SHIFT;
sstat->sstat_primary.spinfo_rto = net->RTO;
sstat->sstat_primary.spinfo_mtu = net->mtu;
switch (stcb->asoc.primary_destination->ro._l_addr.sa.sa_family) {
#if defined(INET)
case AF_INET:
sstat->sstat_primary.spinfo_mtu -= SCTP_MIN_V4_OVERHEAD;
break;
#endif
#if defined(INET6)
case AF_INET6:
sstat->sstat_primary.spinfo_mtu -= SCTP_MIN_OVERHEAD;
break;
#endif
default:
break;
}
sstat->sstat_primary.spinfo_assoc_id = sctp_get_associd(stcb);
SCTP_TCB_UNLOCK(stcb);
*optsize = sizeof(struct sctp_status);
break;
}
case SCTP_RTOINFO:
{
struct sctp_rtoinfo *srto;
SCTP_CHECK_AND_CAST(srto, optval, struct sctp_rtoinfo, *optsize);
SCTP_FIND_STCB(inp, stcb, srto->srto_assoc_id);
if (stcb) {
srto->srto_initial = stcb->asoc.initial_rto;
srto->srto_max = stcb->asoc.maxrto;
srto->srto_min = stcb->asoc.minrto;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(srto->srto_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
srto->srto_initial = inp->sctp_ep.initial_rto;
srto->srto_max = inp->sctp_ep.sctp_maxrto;
srto->srto_min = inp->sctp_ep.sctp_minrto;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_rtoinfo);
}
break;
}
case SCTP_TIMEOUTS:
{
struct sctp_timeouts *stimo;
SCTP_CHECK_AND_CAST(stimo, optval, struct sctp_timeouts, *optsize);
SCTP_FIND_STCB(inp, stcb, stimo->stimo_assoc_id);
if (stcb) {
stimo->stimo_init = stcb->asoc.timoinit;
stimo->stimo_data = stcb->asoc.timodata;
stimo->stimo_sack = stcb->asoc.timosack;
stimo->stimo_shutdown = stcb->asoc.timoshutdown;
stimo->stimo_heartbeat = stcb->asoc.timoheartbeat;
stimo->stimo_cookie = stcb->asoc.timocookie;
stimo->stimo_shutdownack = stcb->asoc.timoshutdownack;
SCTP_TCB_UNLOCK(stcb);
*optsize = sizeof(struct sctp_timeouts);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
break;
}
case SCTP_ASSOCINFO:
{
struct sctp_assocparams *sasoc;
SCTP_CHECK_AND_CAST(sasoc, optval, struct sctp_assocparams, *optsize);
SCTP_FIND_STCB(inp, stcb, sasoc->sasoc_assoc_id);
if (stcb) {
sasoc->sasoc_cookie_life = TICKS_TO_MSEC(stcb->asoc.cookie_life);
sasoc->sasoc_asocmaxrxt = stcb->asoc.max_send_times;
sasoc->sasoc_number_peer_destinations = stcb->asoc.numnets;
sasoc->sasoc_peer_rwnd = stcb->asoc.peers_rwnd;
sasoc->sasoc_local_rwnd = stcb->asoc.my_rwnd;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(sasoc->sasoc_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
sasoc->sasoc_cookie_life = TICKS_TO_MSEC(inp->sctp_ep.def_cookie_life);
sasoc->sasoc_asocmaxrxt = inp->sctp_ep.max_send_times;
sasoc->sasoc_number_peer_destinations = 0;
sasoc->sasoc_peer_rwnd = 0;
sasoc->sasoc_local_rwnd = sbspace(&inp->sctp_socket->so_rcv);
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assocparams);
}
break;
}
case SCTP_DEFAULT_SEND_PARAM:
{
struct sctp_sndrcvinfo *s_info;
SCTP_CHECK_AND_CAST(s_info, optval, struct sctp_sndrcvinfo, *optsize);
SCTP_FIND_STCB(inp, stcb, s_info->sinfo_assoc_id);
if (stcb) {
memcpy(s_info, &stcb->asoc.def_send, sizeof(stcb->asoc.def_send));
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(s_info->sinfo_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
memcpy(s_info, &inp->def_send, sizeof(inp->def_send));
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_sndrcvinfo);
}
break;
}
case SCTP_INITMSG:
{
struct sctp_initmsg *sinit;
SCTP_CHECK_AND_CAST(sinit, optval, struct sctp_initmsg, *optsize);
SCTP_INP_RLOCK(inp);
sinit->sinit_num_ostreams = inp->sctp_ep.pre_open_stream_count;
sinit->sinit_max_instreams = inp->sctp_ep.max_open_streams_intome;
sinit->sinit_max_attempts = inp->sctp_ep.max_init_times;
sinit->sinit_max_init_timeo = inp->sctp_ep.initial_init_rto_max;
SCTP_INP_RUNLOCK(inp);
*optsize = sizeof(struct sctp_initmsg);
break;
}
case SCTP_PRIMARY_ADDR:
/* we allow a "get" operation on this */
{
struct sctp_setprim *ssp;
SCTP_CHECK_AND_CAST(ssp, optval, struct sctp_setprim, *optsize);
SCTP_FIND_STCB(inp, stcb, ssp->ssp_assoc_id);
if (stcb) {
union sctp_sockstore *addr;
addr = &stcb->asoc.primary_destination->ro._l_addr;
switch (addr->sa.sa_family) {
#ifdef INET
case AF_INET:
#ifdef INET6
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_NEEDS_MAPPED_V4)) {
in6_sin_2_v4mapsin6(&addr->sin,
(struct sockaddr_in6 *)&ssp->ssp_addr);
} else {
memcpy(&ssp->ssp_addr, &addr->sin, sizeof(struct sockaddr_in));
}
#else
memcpy(&ssp->ssp_addr, &addr->sin, sizeof(struct sockaddr_in));
#endif
break;
#endif
#ifdef INET6
case AF_INET6:
memcpy(&ssp->ssp_addr, &addr->sin6, sizeof(struct sockaddr_in6));
break;
#endif
default:
break;
}
SCTP_TCB_UNLOCK(stcb);
*optsize = sizeof(struct sctp_setprim);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
break;
}
case SCTP_HMAC_IDENT:
{
struct sctp_hmacalgo *shmac;
sctp_hmaclist_t *hmaclist;
uint32_t size;
int i;
SCTP_CHECK_AND_CAST(shmac, optval, struct sctp_hmacalgo, *optsize);
SCTP_INP_RLOCK(inp);
hmaclist = inp->sctp_ep.local_hmacs;
if (hmaclist == NULL) {
/* no HMACs to return */
*optsize = sizeof(*shmac);
SCTP_INP_RUNLOCK(inp);
break;
}
/* is there room for all of the hmac ids? */
size = sizeof(*shmac) + (hmaclist->num_algo *
sizeof(shmac->shmac_idents[0]));
if ((size_t)(*optsize) < size) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_INP_RUNLOCK(inp);
break;
}
/* copy in the list */
shmac->shmac_number_of_idents = hmaclist->num_algo;
for (i = 0; i < hmaclist->num_algo; i++) {
shmac->shmac_idents[i] = hmaclist->hmac[i];
}
SCTP_INP_RUNLOCK(inp);
*optsize = size;
break;
}
case SCTP_AUTH_ACTIVE_KEY:
{
struct sctp_authkeyid *scact;
SCTP_CHECK_AND_CAST(scact, optval, struct sctp_authkeyid, *optsize);
SCTP_FIND_STCB(inp, stcb, scact->scact_assoc_id);
if (stcb) {
/* get the active key on the assoc */
scact->scact_keynumber = stcb->asoc.authinfo.active_keyid;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(scact->scact_assoc_id == SCTP_FUTURE_ASSOC)) {
/* get the endpoint active key */
SCTP_INP_RLOCK(inp);
scact->scact_keynumber = inp->sctp_ep.default_keyid;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_authkeyid);
}
break;
}
case SCTP_LOCAL_AUTH_CHUNKS:
{
struct sctp_authchunks *sac;
sctp_auth_chklist_t *chklist = NULL;
size_t size = 0;
SCTP_CHECK_AND_CAST(sac, optval, struct sctp_authchunks, *optsize);
SCTP_FIND_STCB(inp, stcb, sac->gauth_assoc_id);
if (stcb) {
/* get off the assoc */
chklist = stcb->asoc.local_auth_chunks;
/* is there enough space? */
size = sctp_auth_get_chklist_size(chklist);
if (*optsize < (sizeof(struct sctp_authchunks) + size)) {
error = EINVAL;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
} else {
/* copy in the chunks */
(void)sctp_serialize_auth_chunks(chklist, sac->gauth_chunks);
sac->gauth_number_of_chunks = (uint32_t)size;
*optsize = sizeof(struct sctp_authchunks) + size;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(sac->gauth_assoc_id == SCTP_FUTURE_ASSOC)) {
/* get off the endpoint */
SCTP_INP_RLOCK(inp);
chklist = inp->sctp_ep.local_auth_chunks;
/* is there enough space? */
size = sctp_auth_get_chklist_size(chklist);
if (*optsize < (sizeof(struct sctp_authchunks) + size)) {
error = EINVAL;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
} else {
/* copy in the chunks */
(void)sctp_serialize_auth_chunks(chklist, sac->gauth_chunks);
sac->gauth_number_of_chunks = (uint32_t)size;
*optsize = sizeof(struct sctp_authchunks) + size;
}
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_PEER_AUTH_CHUNKS:
{
struct sctp_authchunks *sac;
sctp_auth_chklist_t *chklist = NULL;
size_t size = 0;
SCTP_CHECK_AND_CAST(sac, optval, struct sctp_authchunks, *optsize);
SCTP_FIND_STCB(inp, stcb, sac->gauth_assoc_id);
if (stcb) {
/* get off the assoc */
chklist = stcb->asoc.peer_auth_chunks;
/* is there enough space? */
size = sctp_auth_get_chklist_size(chklist);
if (*optsize < (sizeof(struct sctp_authchunks) + size)) {
error = EINVAL;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
} else {
/* copy in the chunks */
(void)sctp_serialize_auth_chunks(chklist, sac->gauth_chunks);
sac->gauth_number_of_chunks = (uint32_t)size;
*optsize = sizeof(struct sctp_authchunks) + size;
}
SCTP_TCB_UNLOCK(stcb);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
}
break;
}
case SCTP_EVENT:
{
struct sctp_event *event;
uint32_t event_type;
SCTP_CHECK_AND_CAST(event, optval, struct sctp_event, *optsize);
SCTP_FIND_STCB(inp, stcb, event->se_assoc_id);
switch (event->se_type) {
case SCTP_ASSOC_CHANGE:
event_type = SCTP_PCB_FLAGS_RECVASSOCEVNT;
break;
case SCTP_PEER_ADDR_CHANGE:
event_type = SCTP_PCB_FLAGS_RECVPADDREVNT;
break;
case SCTP_REMOTE_ERROR:
event_type = SCTP_PCB_FLAGS_RECVPEERERR;
break;
case SCTP_SEND_FAILED:
event_type = SCTP_PCB_FLAGS_RECVSENDFAILEVNT;
break;
case SCTP_SHUTDOWN_EVENT:
event_type = SCTP_PCB_FLAGS_RECVSHUTDOWNEVNT;
break;
case SCTP_ADAPTATION_INDICATION:
event_type = SCTP_PCB_FLAGS_ADAPTATIONEVNT;
break;
case SCTP_PARTIAL_DELIVERY_EVENT:
event_type = SCTP_PCB_FLAGS_PDAPIEVNT;
break;
case SCTP_AUTHENTICATION_EVENT:
event_type = SCTP_PCB_FLAGS_AUTHEVNT;
break;
case SCTP_STREAM_RESET_EVENT:
event_type = SCTP_PCB_FLAGS_STREAM_RESETEVNT;
break;
case SCTP_SENDER_DRY_EVENT:
event_type = SCTP_PCB_FLAGS_DRYEVNT;
break;
case SCTP_NOTIFICATIONS_STOPPED_EVENT:
event_type = 0;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTSUP);
error = ENOTSUP;
break;
case SCTP_ASSOC_RESET_EVENT:
event_type = SCTP_PCB_FLAGS_ASSOC_RESETEVNT;
break;
case SCTP_STREAM_CHANGE_EVENT:
event_type = SCTP_PCB_FLAGS_STREAM_CHANGEEVNT;
break;
case SCTP_SEND_FAILED_EVENT:
event_type = SCTP_PCB_FLAGS_RECVNSENDFAILEVNT;
break;
default:
event_type = 0;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if (event_type > 0) {
if (stcb) {
event->se_on = sctp_stcb_is_feature_on(inp, stcb, event_type);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(event->se_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
event->se_on = sctp_is_feature_on(inp, event_type);
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
}
if (stcb != NULL) {
SCTP_TCB_UNLOCK(stcb);
}
if (error == 0) {
*optsize = sizeof(struct sctp_event);
}
break;
}
case SCTP_RECVRCVINFO:
{
int onoff;
if (*optsize < sizeof(int)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
} else {
SCTP_INP_RLOCK(inp);
onoff = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVRCVINFO);
SCTP_INP_RUNLOCK(inp);
}
if (error == 0) {
/* return the option value */
*(int *)optval = onoff;
*optsize = sizeof(int);
}
break;
}
case SCTP_RECVNXTINFO:
{
int onoff;
if (*optsize < sizeof(int)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
} else {
SCTP_INP_RLOCK(inp);
onoff = sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVNXTINFO);
SCTP_INP_RUNLOCK(inp);
}
if (error == 0) {
/* return the option value */
*(int *)optval = onoff;
*optsize = sizeof(int);
}
break;
}
case SCTP_DEFAULT_SNDINFO:
{
struct sctp_sndinfo *info;
SCTP_CHECK_AND_CAST(info, optval, struct sctp_sndinfo, *optsize);
SCTP_FIND_STCB(inp, stcb, info->snd_assoc_id);
if (stcb) {
info->snd_sid = stcb->asoc.def_send.sinfo_stream;
info->snd_flags = stcb->asoc.def_send.sinfo_flags;
info->snd_flags &= 0xfff0;
info->snd_ppid = stcb->asoc.def_send.sinfo_ppid;
info->snd_context = stcb->asoc.def_send.sinfo_context;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(info->snd_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
info->snd_sid = inp->def_send.sinfo_stream;
info->snd_flags = inp->def_send.sinfo_flags;
info->snd_flags &= 0xfff0;
info->snd_ppid = inp->def_send.sinfo_ppid;
info->snd_context = inp->def_send.sinfo_context;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_sndinfo);
}
break;
}
case SCTP_DEFAULT_PRINFO:
{
struct sctp_default_prinfo *info;
SCTP_CHECK_AND_CAST(info, optval, struct sctp_default_prinfo, *optsize);
SCTP_FIND_STCB(inp, stcb, info->pr_assoc_id);
if (stcb) {
info->pr_policy = PR_SCTP_POLICY(stcb->asoc.def_send.sinfo_flags);
info->pr_value = stcb->asoc.def_send.sinfo_timetolive;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(info->pr_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
info->pr_policy = PR_SCTP_POLICY(inp->def_send.sinfo_flags);
info->pr_value = inp->def_send.sinfo_timetolive;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_default_prinfo);
}
break;
}
case SCTP_PEER_ADDR_THLDS:
{
struct sctp_paddrthlds *thlds;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(thlds, optval, struct sctp_paddrthlds, *optsize);
SCTP_FIND_STCB(inp, stcb, thlds->spt_assoc_id);
#if defined(INET) && defined(INET6)
if (thlds->spt_address.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&thlds->spt_address;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&thlds->spt_address;
}
} else {
addr = (struct sockaddr *)&thlds->spt_address;
}
#else
addr = (struct sockaddr *)&thlds->spt_address;
#endif
if (stcb != NULL) {
net = sctp_findnet(stcb, addr);
} else {
/*
* We increment here since
* sctp_findassociation_ep_addr() will do a
* decrement if it finds the stcb, as long as
* the locked tcb (last argument) is NOT a
* TCB, i.e. is NULL.
*/
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr, &net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net == NULL)) {
#ifdef INET
if (addr->sa_family == AF_INET) {
struct sockaddr_in *sin;
sin = (struct sockaddr_in *)addr;
if (sin->sin_addr.s_addr != INADDR_ANY) {
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
#ifdef INET6
if (addr->sa_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)addr;
if (!IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)) {
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
}
if (stcb != NULL) {
if (net != NULL) {
thlds->spt_pathmaxrxt = net->failure_threshold;
thlds->spt_pathpfthld = net->pf_threshold;
thlds->spt_pathcpthld = 0xffff;
} else {
thlds->spt_pathmaxrxt = stcb->asoc.def_net_failure;
thlds->spt_pathpfthld = stcb->asoc.def_net_pf_threshold;
thlds->spt_pathcpthld = 0xffff;
}
thlds->spt_assoc_id = sctp_get_associd(stcb);
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(thlds->spt_assoc_id == SCTP_FUTURE_ASSOC)) {
/* Use endpoint defaults */
SCTP_INP_RLOCK(inp);
thlds->spt_pathmaxrxt = inp->sctp_ep.def_net_failure;
thlds->spt_pathpfthld = inp->sctp_ep.def_net_pf_threshold;
thlds->spt_pathcpthld = 0xffff;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_paddrthlds);
}
break;
}
case SCTP_REMOTE_UDP_ENCAPS_PORT:
{
struct sctp_udpencaps *encaps;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(encaps, optval, struct sctp_udpencaps, *optsize);
SCTP_FIND_STCB(inp, stcb, encaps->sue_assoc_id);
#if defined(INET) && defined(INET6)
if (encaps->sue_address.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&encaps->sue_address;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&encaps->sue_address;
}
} else {
addr = (struct sockaddr *)&encaps->sue_address;
}
#else
addr = (struct sockaddr *)&encaps->sue_address;
#endif
if (stcb) {
net = sctp_findnet(stcb, addr);
} else {
/*
* We increment here since
* sctp_findassociation_ep_addr() will do a
* decrement if it finds the stcb, as long as
* the locked tcb (last argument) is NOT a
* TCB, i.e. is NULL.
*/
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr, &net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net == NULL)) {
#ifdef INET
if (addr->sa_family == AF_INET) {
struct sockaddr_in *sin;
sin = (struct sockaddr_in *)addr;
if (sin->sin_addr.s_addr != INADDR_ANY) {
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
#ifdef INET6
if (addr->sa_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)addr;
if (!IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)) {
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
}
if (stcb != NULL) {
if (net) {
encaps->sue_port = net->port;
} else {
encaps->sue_port = stcb->asoc.port;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(encaps->sue_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
encaps->sue_port = inp->sctp_ep.port;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_udpencaps);
}
break;
}
case SCTP_ECN_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.ecn_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->ecn_supported;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_PR_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.prsctp_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->prsctp_supported;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_AUTH_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.auth_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->auth_supported;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_ASCONF_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.asconf_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->asconf_supported;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_RECONFIG_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.reconfig_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->reconfig_supported;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_NRSACK_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.nrsack_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->nrsack_supported;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_PKTDROP_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.pktdrop_supported;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->pktdrop_supported;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_ENABLE_STREAM_RESET:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = (uint32_t)stcb->asoc.local_strreset_support;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = (uint32_t)inp->local_strreset_support;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
case SCTP_PR_STREAM_STATUS:
{
struct sctp_prstatus *sprstat;
uint16_t sid;
uint16_t policy;
SCTP_CHECK_AND_CAST(sprstat, optval, struct sctp_prstatus, *optsize);
SCTP_FIND_STCB(inp, stcb, sprstat->sprstat_assoc_id);
sid = sprstat->sprstat_sid;
policy = sprstat->sprstat_policy;
#if defined(SCTP_DETAILED_STR_STATS)
if ((stcb != NULL) &&
(sid < stcb->asoc.streamoutcnt) &&
(policy != SCTP_PR_SCTP_NONE) &&
((policy <= SCTP_PR_SCTP_MAX) ||
(policy == SCTP_PR_SCTP_ALL))) {
if (policy == SCTP_PR_SCTP_ALL) {
sprstat->sprstat_abandoned_unsent = stcb->asoc.strmout[sid].abandoned_unsent[0];
sprstat->sprstat_abandoned_sent = stcb->asoc.strmout[sid].abandoned_sent[0];
} else {
sprstat->sprstat_abandoned_unsent = stcb->asoc.strmout[sid].abandoned_unsent[policy];
sprstat->sprstat_abandoned_sent = stcb->asoc.strmout[sid].abandoned_sent[policy];
}
#else
if ((stcb != NULL) &&
(sid < stcb->asoc.streamoutcnt) &&
(policy == SCTP_PR_SCTP_ALL)) {
sprstat->sprstat_abandoned_unsent = stcb->asoc.strmout[sid].abandoned_unsent[0];
sprstat->sprstat_abandoned_sent = stcb->asoc.strmout[sid].abandoned_sent[0];
#endif
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
if (stcb != NULL) {
SCTP_TCB_UNLOCK(stcb);
}
if (error == 0) {
*optsize = sizeof(struct sctp_prstatus);
}
break;
}
case SCTP_PR_ASSOC_STATUS:
{
struct sctp_prstatus *sprstat;
uint16_t policy;
SCTP_CHECK_AND_CAST(sprstat, optval, struct sctp_prstatus, *optsize);
SCTP_FIND_STCB(inp, stcb, sprstat->sprstat_assoc_id);
policy = sprstat->sprstat_policy;
if ((stcb != NULL) &&
(policy != SCTP_PR_SCTP_NONE) &&
((policy <= SCTP_PR_SCTP_MAX) ||
(policy == SCTP_PR_SCTP_ALL))) {
if (policy == SCTP_PR_SCTP_ALL) {
sprstat->sprstat_abandoned_unsent = stcb->asoc.abandoned_unsent[0];
sprstat->sprstat_abandoned_sent = stcb->asoc.abandoned_sent[0];
} else {
sprstat->sprstat_abandoned_unsent = stcb->asoc.abandoned_unsent[policy];
sprstat->sprstat_abandoned_sent = stcb->asoc.abandoned_sent[policy];
}
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
if (stcb != NULL) {
SCTP_TCB_UNLOCK(stcb);
}
if (error == 0) {
*optsize = sizeof(struct sctp_prstatus);
}
break;
}
case SCTP_MAX_CWND:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, *optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
av->assoc_value = stcb->asoc.max_cwnd;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_RLOCK(inp);
av->assoc_value = inp->max_cwnd;
SCTP_INP_RUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
if (error == 0) {
*optsize = sizeof(struct sctp_assoc_value);
}
break;
}
default:
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOPROTOOPT);
error = ENOPROTOOPT;
break;
} /* end switch (sopt->sopt_name) */
if (error) {
*optsize = 0;
}
return (error);
}
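/*
 * Illustrative sketch (not part of this file): userland reaches the
 * option handlers above and below via getsockopt(2)/setsockopt(2) at
 * level IPPROTO_SCTP.  For example, querying the endpoint default for
 * SCTP_MAX_CWND, which is read out of a struct sctp_assoc_value:
 *
 *	struct sctp_assoc_value av;
 *	socklen_t len = sizeof(av);
 *
 *	av.assoc_id = SCTP_FUTURE_ASSOC;
 *	if (getsockopt(fd, IPPROTO_SCTP, SCTP_MAX_CWND, &av, &len) == 0)
 *		printf("max cwnd: %u\n", av.assoc_value);
 *
 * Passing a specific association id instead of SCTP_FUTURE_ASSOC
 * operates on that association rather than the endpoint default.
 */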
static int
sctp_setopt(struct socket *so, int optname, void *optval, size_t optsize,
void *p)
{
int error, set_opt;
uint32_t *mopt;
struct sctp_tcb *stcb = NULL;
struct sctp_inpcb *inp = NULL;
uint32_t vrf_id;
if (optval == NULL) {
SCTP_PRINTF("optval is NULL\n");
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_PRINTF("inp is NULL?\n");
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
vrf_id = inp->def_vrf_id;
error = 0;
switch (optname) {
case SCTP_NODELAY:
case SCTP_AUTOCLOSE:
case SCTP_AUTO_ASCONF:
case SCTP_EXPLICIT_EOR:
case SCTP_DISABLE_FRAGMENTS:
case SCTP_USE_EXT_RCVINFO:
case SCTP_I_WANT_MAPPED_V4_ADDR:
/* copy in the option value */
SCTP_CHECK_AND_CAST(mopt, optval, uint32_t, optsize);
set_opt = 0;
if (error)
break;
switch (optname) {
case SCTP_DISABLE_FRAGMENTS:
set_opt = SCTP_PCB_FLAGS_NO_FRAGMENT;
break;
case SCTP_AUTO_ASCONF:
/*
* NOTE: we don't really support this flag
*/
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUNDALL) {
/* only valid for bound all sockets */
if ((SCTP_BASE_SYSCTL(sctp_auto_asconf) == 0) &&
(*mopt != 0)) {
/* forbidden by admin */
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EPERM);
return (EPERM);
}
set_opt = SCTP_PCB_FLAGS_AUTO_ASCONF;
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
break;
case SCTP_EXPLICIT_EOR:
set_opt = SCTP_PCB_FLAGS_EXPLICIT_EOR;
break;
case SCTP_USE_EXT_RCVINFO:
set_opt = SCTP_PCB_FLAGS_EXT_RCVINFO;
break;
case SCTP_I_WANT_MAPPED_V4_ADDR:
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) {
set_opt = SCTP_PCB_FLAGS_NEEDS_MAPPED_V4;
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
break;
case SCTP_NODELAY:
set_opt = SCTP_PCB_FLAGS_NODELAY;
break;
case SCTP_AUTOCLOSE:
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
set_opt = SCTP_PCB_FLAGS_AUTOCLOSE;
/*
 * The user-supplied value is in seconds and is
 * stored internally in ticks.  Note this does
 * not affect old associations, only new ones.
 */
inp->sctp_ep.auto_close_time = SEC_TO_TICKS(*mopt);
break;
}
SCTP_INP_WLOCK(inp);
if (*mopt != 0) {
sctp_feature_on(inp, set_opt);
} else {
sctp_feature_off(inp, set_opt);
}
SCTP_INP_WUNLOCK(inp);
break;
case SCTP_REUSE_PORT:
{
SCTP_CHECK_AND_CAST(mopt, optval, uint32_t, optsize);
if ((inp->sctp_flags & SCTP_PCB_FLAGS_UNBOUND) == 0) {
/* Can't set it after we are bound */
error = EINVAL;
break;
}
if ((inp->sctp_flags & SCTP_PCB_FLAGS_UDPTYPE)) {
/* Can't do this for a 1-to-many style socket */
error = EINVAL;
break;
}
if (*mopt != 0)
sctp_feature_on(inp, SCTP_PCB_FLAGS_PORTREUSE);
else
sctp_feature_off(inp, SCTP_PCB_FLAGS_PORTREUSE);
break;
}
case SCTP_PARTIAL_DELIVERY_POINT:
{
uint32_t *value;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, optsize);
if (*value > SCTP_SB_LIMIT_RCV(so)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
inp->partial_delivery_point = *value;
break;
}
case SCTP_FRAGMENT_INTERLEAVE:
/* not fully supported until sctp_recvmsg() is rewritten */
{
uint32_t *level;
SCTP_CHECK_AND_CAST(level, optval, uint32_t, optsize);
if (*level == SCTP_FRAG_LEVEL_2) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_FRAG_INTERLEAVE);
sctp_feature_on(inp, SCTP_PCB_FLAGS_INTERLEAVE_STRMS);
} else if (*level == SCTP_FRAG_LEVEL_1) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_FRAG_INTERLEAVE);
sctp_feature_off(inp, SCTP_PCB_FLAGS_INTERLEAVE_STRMS);
} else if (*level == SCTP_FRAG_LEVEL_0) {
sctp_feature_off(inp, SCTP_PCB_FLAGS_FRAG_INTERLEAVE);
sctp_feature_off(inp, SCTP_PCB_FLAGS_INTERLEAVE_STRMS);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
break;
}
case SCTP_INTERLEAVING_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->idata_supported = 0;
} else {
if ((sctp_is_feature_on(inp, SCTP_PCB_FLAGS_FRAG_INTERLEAVE)) &&
(sctp_is_feature_on(inp, SCTP_PCB_FLAGS_INTERLEAVE_STRMS))) {
inp->idata_supported = 1;
} else {
/*
* Must have Frag
* interleave and
* stream interleave
* on
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_CMT_ON_OFF:
if (SCTP_BASE_SYSCTL(sctp_cmt_on_off)) {
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
if (av->assoc_value > SCTP_CMT_MAX) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
stcb->asoc.sctp_cmt_on_off = av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->sctp_cmt_on_off = av->assoc_value;
SCTP_INP_WUNLOCK(inp);
}
if ((av->assoc_id == SCTP_CURRENT_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
stcb->asoc.sctp_cmt_on_off = av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOPROTOOPT);
error = ENOPROTOOPT;
}
break;
case SCTP_PLUGGABLE_CC:
{
struct sctp_assoc_value *av;
struct sctp_nets *net;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
if ((av->assoc_value != SCTP_CC_RFC2581) &&
(av->assoc_value != SCTP_CC_HSTCP) &&
(av->assoc_value != SCTP_CC_HTCP) &&
(av->assoc_value != SCTP_CC_RTCC)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
stcb->asoc.cc_functions = sctp_cc_functions[av->assoc_value];
stcb->asoc.congestion_control_module = av->assoc_value;
if (stcb->asoc.cc_functions.sctp_set_initial_cc_param != NULL) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
stcb->asoc.cc_functions.sctp_set_initial_cc_param(stcb, net);
}
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->sctp_ep.sctp_default_cc_module = av->assoc_value;
SCTP_INP_WUNLOCK(inp);
}
if ((av->assoc_id == SCTP_CURRENT_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
stcb->asoc.cc_functions = sctp_cc_functions[av->assoc_value];
stcb->asoc.congestion_control_module = av->assoc_value;
if (stcb->asoc.cc_functions.sctp_set_initial_cc_param != NULL) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
stcb->asoc.cc_functions.sctp_set_initial_cc_param(stcb, net);
}
}
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_CC_OPTION:
{
struct sctp_cc_option *cc_opt;
SCTP_CHECK_AND_CAST(cc_opt, optval, struct sctp_cc_option, optsize);
SCTP_FIND_STCB(inp, stcb, cc_opt->aid_value.assoc_id);
if (stcb == NULL) {
if (cc_opt->aid_value.assoc_id == SCTP_CURRENT_ASSOC) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
if (stcb->asoc.cc_functions.sctp_cwnd_socket_option) {
(*stcb->asoc.cc_functions.sctp_cwnd_socket_option) (stcb, 1, cc_opt);
}
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
} else {
error = EINVAL;
}
} else {
if (stcb->asoc.cc_functions.sctp_cwnd_socket_option == NULL) {
error = ENOTSUP;
} else {
error = (*stcb->asoc.cc_functions.sctp_cwnd_socket_option) (stcb, 1,
cc_opt);
}
SCTP_TCB_UNLOCK(stcb);
}
break;
}
case SCTP_PLUGGABLE_SS:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
if ((av->assoc_value != SCTP_SS_DEFAULT) &&
(av->assoc_value != SCTP_SS_ROUND_ROBIN) &&
(av->assoc_value != SCTP_SS_ROUND_ROBIN_PACKET) &&
(av->assoc_value != SCTP_SS_PRIORITY) &&
(av->assoc_value != SCTP_SS_FAIR_BANDWITH) &&
(av->assoc_value != SCTP_SS_FIRST_COME)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
stcb->asoc.ss_functions.sctp_ss_clear(stcb, &stcb->asoc, 1, 1);
stcb->asoc.ss_functions = sctp_ss_functions[av->assoc_value];
stcb->asoc.stream_scheduling_module = av->assoc_value;
stcb->asoc.ss_functions.sctp_ss_init(stcb, &stcb->asoc, 1);
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->sctp_ep.sctp_default_ss_module = av->assoc_value;
SCTP_INP_WUNLOCK(inp);
}
if ((av->assoc_id == SCTP_CURRENT_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
stcb->asoc.ss_functions.sctp_ss_clear(stcb, &stcb->asoc, 1, 1);
stcb->asoc.ss_functions = sctp_ss_functions[av->assoc_value];
stcb->asoc.stream_scheduling_module = av->assoc_value;
stcb->asoc.ss_functions.sctp_ss_init(stcb, &stcb->asoc, 1);
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_SS_VALUE:
{
struct sctp_stream_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_stream_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
if ((av->stream_id >= stcb->asoc.streamoutcnt) ||
(stcb->asoc.ss_functions.sctp_ss_set_value(stcb, &stcb->asoc, &stcb->asoc.strmout[av->stream_id],
av->stream_value) < 0)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if (av->assoc_id == SCTP_CURRENT_ASSOC) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
if (av->stream_id < stcb->asoc.streamoutcnt) {
stcb->asoc.ss_functions.sctp_ss_set_value(stcb,
&stcb->asoc,
&stcb->asoc.strmout[av->stream_id],
av->stream_value);
}
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
} else {
/*
* Can't set stream value without
* association
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_CLR_STAT_LOG:
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
break;
case SCTP_CONTEXT:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
stcb->asoc.context = av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->sctp_context = av->assoc_value;
SCTP_INP_WUNLOCK(inp);
}
if ((av->assoc_id == SCTP_CURRENT_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
stcb->asoc.context = av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_VRF_ID:
{
uint32_t *default_vrfid;
SCTP_CHECK_AND_CAST(default_vrfid, optval, uint32_t, optsize);
if (*default_vrfid > SCTP_MAX_VRF_ID) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
inp->def_vrf_id = *default_vrfid;
break;
}
case SCTP_DEL_VRF_ID:
{
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
break;
}
case SCTP_ADD_VRF_ID:
{
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
break;
}
case SCTP_DELAYED_SACK:
{
struct sctp_sack_info *sack;
SCTP_CHECK_AND_CAST(sack, optval, struct sctp_sack_info, optsize);
SCTP_FIND_STCB(inp, stcb, sack->sack_assoc_id);
if (sack->sack_delay) {
if (sack->sack_delay > SCTP_MAX_SACK_DELAY)
sack->sack_delay = SCTP_MAX_SACK_DELAY;
if (MSEC_TO_TICKS(sack->sack_delay) < 1) {
sack->sack_delay = TICKS_TO_MSEC(1);
}
}
if (stcb) {
if (sack->sack_delay) {
stcb->asoc.delayed_ack = sack->sack_delay;
}
if (sack->sack_freq) {
stcb->asoc.sack_freq = sack->sack_freq;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(sack->sack_assoc_id == SCTP_FUTURE_ASSOC) ||
(sack->sack_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (sack->sack_delay) {
inp->sctp_ep.sctp_timeoutticks[SCTP_TIMER_RECV] = MSEC_TO_TICKS(sack->sack_delay);
}
if (sack->sack_freq) {
inp->sctp_ep.sctp_sack_freq = sack->sack_freq;
}
SCTP_INP_WUNLOCK(inp);
}
if ((sack->sack_assoc_id == SCTP_CURRENT_ASSOC) ||
(sack->sack_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
if (sack->sack_delay) {
stcb->asoc.delayed_ack = sack->sack_delay;
}
if (sack->sack_freq) {
stcb->asoc.sack_freq = sack->sack_freq;
}
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_AUTH_CHUNK:
{
struct sctp_authchunk *sauth;
SCTP_CHECK_AND_CAST(sauth, optval, struct sctp_authchunk, optsize);
SCTP_INP_WLOCK(inp);
if (sctp_auth_add_chunk(sauth->sauth_chunk, inp->sctp_ep.local_auth_chunks)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
} else {
inp->auth_supported = 1;
}
SCTP_INP_WUNLOCK(inp);
break;
}
case SCTP_AUTH_KEY:
{
struct sctp_authkey *sca;
struct sctp_keyhead *shared_keys;
sctp_sharedkey_t *shared_key;
sctp_key_t *key = NULL;
size_t size;
SCTP_CHECK_AND_CAST(sca, optval, struct sctp_authkey, optsize);
if (sca->sca_keylength == 0) {
size = optsize - sizeof(struct sctp_authkey);
} else {
if (sca->sca_keylength + sizeof(struct sctp_authkey) <= optsize) {
size = sca->sca_keylength;
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
}
SCTP_FIND_STCB(inp, stcb, sca->sca_assoc_id);
if (stcb) {
shared_keys = &stcb->asoc.shared_keys;
/* clear the cached keys for this key id */
sctp_clear_cachedkeys(stcb, sca->sca_keynumber);
/*
* create the new shared key and
* insert/replace it
*/
if (size > 0) {
key = sctp_set_key(sca->sca_key, (uint32_t)size);
if (key == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOMEM);
error = ENOMEM;
SCTP_TCB_UNLOCK(stcb);
break;
}
}
shared_key = sctp_alloc_sharedkey();
if (shared_key == NULL) {
sctp_free_key(key);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOMEM);
error = ENOMEM;
SCTP_TCB_UNLOCK(stcb);
break;
}
shared_key->key = key;
shared_key->keyid = sca->sca_keynumber;
error = sctp_insert_sharedkey(shared_keys, shared_key);
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(sca->sca_assoc_id == SCTP_FUTURE_ASSOC) ||
(sca->sca_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
shared_keys = &inp->sctp_ep.shared_keys;
/*
* clear the cached keys on all
* assocs for this key id
*/
sctp_clear_cachedkeys_ep(inp, sca->sca_keynumber);
/*
* create the new shared key and
* insert/replace it
*/
if (size > 0) {
key = sctp_set_key(sca->sca_key, (uint32_t)size);
if (key == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOMEM);
error = ENOMEM;
SCTP_INP_WUNLOCK(inp);
break;
}
}
shared_key = sctp_alloc_sharedkey();
if (shared_key == NULL) {
sctp_free_key(key);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOMEM);
error = ENOMEM;
SCTP_INP_WUNLOCK(inp);
break;
}
shared_key->key = key;
shared_key->keyid = sca->sca_keynumber;
error = sctp_insert_sharedkey(shared_keys, shared_key);
SCTP_INP_WUNLOCK(inp);
}
if ((sca->sca_assoc_id == SCTP_CURRENT_ASSOC) ||
(sca->sca_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
shared_keys = &stcb->asoc.shared_keys;
/*
* clear the cached keys for
* this key id
*/
sctp_clear_cachedkeys(stcb, sca->sca_keynumber);
/*
* create the new shared key
* and insert/replace it
*/
if (size > 0) {
key = sctp_set_key(sca->sca_key, (uint32_t)size);
if (key == NULL) {
SCTP_TCB_UNLOCK(stcb);
continue;
}
}
shared_key = sctp_alloc_sharedkey();
if (shared_key == NULL) {
sctp_free_key(key);
SCTP_TCB_UNLOCK(stcb);
continue;
}
shared_key->key = key;
shared_key->keyid = sca->sca_keynumber;
error = sctp_insert_sharedkey(shared_keys, shared_key);
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_HMAC_IDENT:
{
struct sctp_hmacalgo *shmac;
sctp_hmaclist_t *hmaclist;
uint16_t hmacid;
uint32_t i;
SCTP_CHECK_AND_CAST(shmac, optval, struct sctp_hmacalgo, optsize);
if ((optsize < sizeof(struct sctp_hmacalgo) + shmac->shmac_number_of_idents * sizeof(uint16_t)) ||
(shmac->shmac_number_of_idents > 0xffff)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
hmaclist = sctp_alloc_hmaclist((uint16_t)shmac->shmac_number_of_idents);
if (hmaclist == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOMEM);
error = ENOMEM;
break;
}
for (i = 0; i < shmac->shmac_number_of_idents; i++) {
hmacid = shmac->shmac_idents[i];
if (sctp_auth_add_hmacid(hmaclist, hmacid)) {
/* invalid HMACs were found */
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
sctp_free_hmaclist(hmaclist);
goto sctp_set_hmac_done;
}
}
for (i = 0; i < hmaclist->num_algo; i++) {
if (hmaclist->hmac[i] == SCTP_AUTH_HMAC_ID_SHA1) {
/* already in list */
break;
}
}
if (i == hmaclist->num_algo) {
/* not found in list */
sctp_free_hmaclist(hmaclist);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
/* set it on the endpoint */
SCTP_INP_WLOCK(inp);
if (inp->sctp_ep.local_hmacs)
sctp_free_hmaclist(inp->sctp_ep.local_hmacs);
inp->sctp_ep.local_hmacs = hmaclist;
SCTP_INP_WUNLOCK(inp);
sctp_set_hmac_done:
break;
}
case SCTP_AUTH_ACTIVE_KEY:
{
struct sctp_authkeyid *scact;
SCTP_CHECK_AND_CAST(scact, optval, struct sctp_authkeyid, optsize);
SCTP_FIND_STCB(inp, stcb, scact->scact_assoc_id);
/* set the active key in the right place */
if (stcb) {
/* set the active key on the assoc */
if (sctp_auth_setactivekey(stcb,
scact->scact_keynumber)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL,
SCTP_FROM_SCTP_USRREQ,
EINVAL);
error = EINVAL;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(scact->scact_assoc_id == SCTP_FUTURE_ASSOC) ||
(scact->scact_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (sctp_auth_setactivekey_ep(inp, scact->scact_keynumber)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_INP_WUNLOCK(inp);
}
if ((scact->scact_assoc_id == SCTP_CURRENT_ASSOC) ||
(scact->scact_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
sctp_auth_setactivekey(stcb, scact->scact_keynumber);
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_AUTH_DELETE_KEY:
{
struct sctp_authkeyid *scdel;
SCTP_CHECK_AND_CAST(scdel, optval, struct sctp_authkeyid, optsize);
SCTP_FIND_STCB(inp, stcb, scdel->scact_assoc_id);
/* delete the key from the right place */
if (stcb) {
if (sctp_delete_sharedkey(stcb, scdel->scact_keynumber)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(scdel->scact_assoc_id == SCTP_FUTURE_ASSOC) ||
(scdel->scact_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (sctp_delete_sharedkey_ep(inp, scdel->scact_keynumber)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_INP_WUNLOCK(inp);
}
if ((scdel->scact_assoc_id == SCTP_CURRENT_ASSOC) ||
(scdel->scact_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
sctp_delete_sharedkey(stcb, scdel->scact_keynumber);
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_AUTH_DEACTIVATE_KEY:
{
struct sctp_authkeyid *keyid;
SCTP_CHECK_AND_CAST(keyid, optval, struct sctp_authkeyid, optsize);
SCTP_FIND_STCB(inp, stcb, keyid->scact_assoc_id);
/* deactivate the key in the right place */
if (stcb) {
if (sctp_deact_sharedkey(stcb, keyid->scact_keynumber)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(keyid->scact_assoc_id == SCTP_FUTURE_ASSOC) ||
(keyid->scact_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (sctp_deact_sharedkey_ep(inp, keyid->scact_keynumber)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_INP_WUNLOCK(inp);
}
if ((keyid->scact_assoc_id == SCTP_CURRENT_ASSOC) ||
(keyid->scact_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
sctp_deact_sharedkey(stcb, keyid->scact_keynumber);
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_ENABLE_STREAM_RESET:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
if (av->assoc_value & (~SCTP_ENABLE_VALUE_MASK)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
stcb->asoc.local_strreset_support = (uint8_t)av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->local_strreset_support = (uint8_t)av->assoc_value;
SCTP_INP_WUNLOCK(inp);
}
if ((av->assoc_id == SCTP_CURRENT_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
stcb->asoc.local_strreset_support = (uint8_t)av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_RESET_STREAMS:
{
struct sctp_reset_streams *strrst;
int i, send_out = 0;
int send_in = 0;
SCTP_CHECK_AND_CAST(strrst, optval, struct sctp_reset_streams, optsize);
SCTP_FIND_STCB(inp, stcb, strrst->srs_assoc_id);
if (stcb == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
break;
}
if (stcb->asoc.reconfig_supported == 0) {
/*
* Peer does not support the chunk type.
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
SCTP_TCB_UNLOCK(stcb);
break;
}
if (sizeof(struct sctp_reset_streams) +
strrst->srs_number_streams * sizeof(uint16_t) > optsize) {
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
break;
}
if (strrst->srs_flags & SCTP_STREAM_RESET_INCOMING) {
send_in = 1;
if (stcb->asoc.stream_reset_outstanding) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EALREADY);
error = EALREADY;
SCTP_TCB_UNLOCK(stcb);
break;
}
}
if (strrst->srs_flags & SCTP_STREAM_RESET_OUTGOING) {
send_out = 1;
}
if ((strrst->srs_number_streams > SCTP_MAX_STREAMS_AT_ONCE_RESET) && send_in) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOMEM);
error = ENOMEM;
SCTP_TCB_UNLOCK(stcb);
break;
}
if ((send_in == 0) && (send_out == 0)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
break;
}
for (i = 0; i < strrst->srs_number_streams; i++) {
if ((send_in) &&
(strrst->srs_stream_list[i] >= stcb->asoc.streamincnt)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if ((send_out) &&
(strrst->srs_stream_list[i] >= stcb->asoc.streamoutcnt)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
}
if (error) {
SCTP_TCB_UNLOCK(stcb);
break;
}
if (send_out) {
int cnt;
uint16_t strm;
if (strrst->srs_number_streams) {
for (i = 0, cnt = 0; i < strrst->srs_number_streams; i++) {
strm = strrst->srs_stream_list[i];
if (stcb->asoc.strmout[strm].state == SCTP_STREAM_OPEN) {
stcb->asoc.strmout[strm].state = SCTP_STREAM_RESET_PENDING;
cnt++;
}
}
} else {
/* It's all of them */
for (i = 0, cnt = 0; i < stcb->asoc.streamoutcnt; i++) {
if (stcb->asoc.strmout[i].state == SCTP_STREAM_OPEN) {
stcb->asoc.strmout[i].state = SCTP_STREAM_RESET_PENDING;
cnt++;
}
}
}
}
if (send_in) {
error = sctp_send_str_reset_req(stcb, strrst->srs_number_streams,
strrst->srs_stream_list,
send_in, 0, 0, 0, 0, 0);
} else {
error = sctp_send_stream_reset_out_if_possible(stcb, SCTP_SO_LOCKED);
}
if (error == 0) {
sctp_chunk_output(inp, stcb, SCTP_OUTPUT_FROM_STRRST_REQ, SCTP_SO_LOCKED);
} else {
/*
 * For outgoing streams, don't report any
 * problems in sending the request to the
 * application.  XXX: Double check resetting
 * incoming streams.
 */
error = 0;
}
SCTP_TCB_UNLOCK(stcb);
break;
}
case SCTP_ADD_STREAMS:
{
struct sctp_add_streams *stradd;
uint8_t addstream = 0;
uint16_t add_o_strmcnt = 0;
uint16_t add_i_strmcnt = 0;
SCTP_CHECK_AND_CAST(stradd, optval, struct sctp_add_streams, optsize);
SCTP_FIND_STCB(inp, stcb, stradd->sas_assoc_id);
if (stcb == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
break;
}
if (stcb->asoc.reconfig_supported == 0) {
/*
* Peer does not support the chunk type.
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
SCTP_TCB_UNLOCK(stcb);
break;
}
if (stcb->asoc.stream_reset_outstanding) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EALREADY);
error = EALREADY;
SCTP_TCB_UNLOCK(stcb);
break;
}
if ((stradd->sas_outstrms == 0) &&
(stradd->sas_instrms == 0)) {
error = EINVAL;
goto skip_stuff;
}
if (stradd->sas_outstrms) {
addstream = 1;
/* We allocate here */
add_o_strmcnt = stradd->sas_outstrms;
if ((((int)add_o_strmcnt) + ((int)stcb->asoc.streamoutcnt)) > 0x0000ffff) {
/* You can't have more than 64k */
error = EINVAL;
goto skip_stuff;
}
}
if (stradd->sas_instrms) {
int cnt;
addstream |= 2;
/*
* We allocate inside
* sctp_send_str_reset_req()
*/
add_i_strmcnt = stradd->sas_instrms;
cnt = add_i_strmcnt;
cnt += stcb->asoc.streamincnt;
if (cnt > 0x0000ffff) {
/* You can't have more than 64k */
error = EINVAL;
goto skip_stuff;
}
if (cnt > (int)stcb->asoc.max_inbound_streams) {
/* More than you are allowed */
error = EINVAL;
goto skip_stuff;
}
}
error = sctp_send_str_reset_req(stcb, 0, NULL, 0, 0, addstream, add_o_strmcnt, add_i_strmcnt, 0);
sctp_chunk_output(inp, stcb, SCTP_OUTPUT_FROM_STRRST_REQ, SCTP_SO_LOCKED);
skip_stuff:
SCTP_TCB_UNLOCK(stcb);
break;
}
case SCTP_RESET_ASSOC:
{
int i;
uint32_t *value;
SCTP_CHECK_AND_CAST(value, optval, uint32_t, optsize);
SCTP_FIND_STCB(inp, stcb, (sctp_assoc_t)*value);
if (stcb == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
break;
}
if (stcb->asoc.reconfig_supported == 0) {
/*
* Peer does not support the chunk type.
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
error = EOPNOTSUPP;
SCTP_TCB_UNLOCK(stcb);
break;
}
if (stcb->asoc.stream_reset_outstanding) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EALREADY);
error = EALREADY;
SCTP_TCB_UNLOCK(stcb);
break;
}
/*
* Is there any data pending in the send or sent
* queues?
*/
if (!TAILQ_EMPTY(&stcb->asoc.send_queue) ||
!TAILQ_EMPTY(&stcb->asoc.sent_queue)) {
busy_out:
error = EBUSY;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
SCTP_TCB_UNLOCK(stcb);
break;
}
/* Do any streams have data queued? */
for (i = 0; i < stcb->asoc.streamoutcnt; i++) {
if (!TAILQ_EMPTY(&stcb->asoc.strmout[i].outqueue)) {
goto busy_out;
}
}
error = sctp_send_str_reset_req(stcb, 0, NULL, 0, 1, 0, 0, 0, 0);
sctp_chunk_output(inp, stcb, SCTP_OUTPUT_FROM_STRRST_REQ, SCTP_SO_LOCKED);
SCTP_TCB_UNLOCK(stcb);
break;
}
case SCTP_CONNECT_X:
if (optsize < (sizeof(int) + sizeof(struct sockaddr_in))) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
error = sctp_do_connect_x(so, inp, optval, optsize, p, 0);
break;
case SCTP_CONNECT_X_DELAYED:
if (optsize < (sizeof(int) + sizeof(struct sockaddr_in))) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
error = sctp_do_connect_x(so, inp, optval, optsize, p, 1);
break;
case SCTP_CONNECT_X_COMPLETE:
{
struct sockaddr *sa;
/* FIXME MT: check correct? */
SCTP_CHECK_AND_CAST(sa, optval, struct sockaddr, optsize);
/* find tcb */
if (inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED) {
SCTP_INP_RLOCK(inp);
stcb = LIST_FIRST(&inp->sctp_asoc_list);
if (stcb) {
SCTP_TCB_LOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
} else {
/*
 * We increment here since
 * sctp_findassociation_ep_addr() will do a
 * decrement if it finds the stcb, as long as
 * the locked tcb (last argument) is NOT a
 * TCB, i.e., NULL.
 */
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, sa, NULL, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if (stcb == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
error = ENOENT;
break;
}
if (stcb->asoc.delayed_connection == 1) {
stcb->asoc.delayed_connection = 0;
(void)SCTP_GETTIME_TIMEVAL(&stcb->asoc.time_entered);
sctp_timer_stop(SCTP_TIMER_TYPE_INIT, inp, stcb,
stcb->asoc.primary_destination,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_8);
sctp_send_initiate(inp, stcb, SCTP_SO_LOCKED);
} else {
/*
 * Already expired, or delayed
 * connectx was not used.
 */
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EALREADY);
error = EALREADY;
}
SCTP_TCB_UNLOCK(stcb);
break;
}
case SCTP_MAX_BURST:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
stcb->asoc.max_burst = av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->sctp_ep.max_burst = av->assoc_value;
SCTP_INP_WUNLOCK(inp);
}
if ((av->assoc_id == SCTP_CURRENT_ASSOC) ||
(av->assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
stcb->asoc.max_burst = av->assoc_value;
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_MAXSEG:
{
struct sctp_assoc_value *av;
int ovh;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) {
ovh = SCTP_MED_OVERHEAD;
} else {
ovh = SCTP_MED_V4_OVERHEAD;
}
if (stcb) {
if (av->assoc_value) {
stcb->asoc.sctp_frag_point = (av->assoc_value + ovh);
} else {
stcb->asoc.sctp_frag_point = SCTP_DEFAULT_MAXSEGMENT;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
/*
* FIXME MT: I think this is not in
* tune with the API ID
*/
if (av->assoc_value) {
inp->sctp_frag_point = (av->assoc_value + ovh);
} else {
inp->sctp_frag_point = SCTP_DEFAULT_MAXSEGMENT;
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_EVENTS:
{
struct sctp_event_subscribe *events;
SCTP_CHECK_AND_CAST(events, optval, struct sctp_event_subscribe, optsize);
SCTP_INP_WLOCK(inp);
if (events->sctp_data_io_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVDATAIOEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVDATAIOEVNT);
}
if (events->sctp_association_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVASSOCEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVASSOCEVNT);
}
if (events->sctp_address_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVPADDREVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVPADDREVNT);
}
if (events->sctp_send_failure_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVSENDFAILEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVSENDFAILEVNT);
}
if (events->sctp_peer_error_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVPEERERR);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVPEERERR);
}
if (events->sctp_shutdown_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVSHUTDOWNEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVSHUTDOWNEVNT);
}
if (events->sctp_partial_delivery_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_PDAPIEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_PDAPIEVNT);
}
if (events->sctp_adaptation_layer_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_ADAPTATIONEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_ADAPTATIONEVNT);
}
if (events->sctp_authentication_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_AUTHEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_AUTHEVNT);
}
if (events->sctp_sender_dry_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_DRYEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_DRYEVNT);
}
if (events->sctp_stream_reset_event) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_STREAM_RESETEVNT);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_STREAM_RESETEVNT);
}
SCTP_INP_WUNLOCK(inp);
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
if (events->sctp_association_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_RECVASSOCEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_RECVASSOCEVNT);
}
if (events->sctp_address_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_RECVPADDREVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_RECVPADDREVNT);
}
if (events->sctp_send_failure_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_RECVSENDFAILEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_RECVSENDFAILEVNT);
}
if (events->sctp_peer_error_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_RECVPEERERR);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_RECVPEERERR);
}
if (events->sctp_shutdown_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_RECVSHUTDOWNEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_RECVSHUTDOWNEVNT);
}
if (events->sctp_partial_delivery_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_PDAPIEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_PDAPIEVNT);
}
if (events->sctp_adaptation_layer_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_ADAPTATIONEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_ADAPTATIONEVNT);
}
if (events->sctp_authentication_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_AUTHEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_AUTHEVNT);
}
if (events->sctp_sender_dry_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_DRYEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_DRYEVNT);
}
if (events->sctp_stream_reset_event) {
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_STREAM_RESETEVNT);
} else {
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_STREAM_RESETEVNT);
}
SCTP_TCB_UNLOCK(stcb);
}
/*
* Send up the sender dry event only for 1-to-1
* style sockets.
*/
if (events->sctp_sender_dry_event) {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL)) {
stcb = LIST_FIRST(&inp->sctp_asoc_list);
if (stcb) {
SCTP_TCB_LOCK(stcb);
if (TAILQ_EMPTY(&stcb->asoc.send_queue) &&
TAILQ_EMPTY(&stcb->asoc.sent_queue) &&
(stcb->asoc.stream_queue_cnt == 0)) {
sctp_ulp_notify(SCTP_NOTIFY_SENDER_DRY, stcb, 0, NULL, SCTP_SO_LOCKED);
}
SCTP_TCB_UNLOCK(stcb);
}
}
}
SCTP_INP_RUNLOCK(inp);
break;
}
case SCTP_ADAPTATION_LAYER:
{
struct sctp_setadaptation *adap_bits;
SCTP_CHECK_AND_CAST(adap_bits, optval, struct sctp_setadaptation, optsize);
SCTP_INP_WLOCK(inp);
inp->sctp_ep.adaptation_layer_indicator = adap_bits->ssb_adaptation_ind;
inp->sctp_ep.adaptation_layer_indicator_provided = 1;
SCTP_INP_WUNLOCK(inp);
break;
}
#ifdef SCTP_DEBUG
case SCTP_SET_INITIAL_DBG_SEQ:
{
uint32_t *vvv;
SCTP_CHECK_AND_CAST(vvv, optval, uint32_t, optsize);
SCTP_INP_WLOCK(inp);
inp->sctp_ep.initial_sequence_debug = *vvv;
SCTP_INP_WUNLOCK(inp);
break;
}
#endif
case SCTP_DEFAULT_SEND_PARAM:
{
struct sctp_sndrcvinfo *s_info;
SCTP_CHECK_AND_CAST(s_info, optval, struct sctp_sndrcvinfo, optsize);
SCTP_FIND_STCB(inp, stcb, s_info->sinfo_assoc_id);
if (stcb) {
if (s_info->sinfo_stream < stcb->asoc.streamoutcnt) {
memcpy(&stcb->asoc.def_send, s_info, min(optsize, sizeof(stcb->asoc.def_send)));
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(s_info->sinfo_assoc_id == SCTP_FUTURE_ASSOC) ||
(s_info->sinfo_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
memcpy(&inp->def_send, s_info, min(optsize, sizeof(inp->def_send)));
SCTP_INP_WUNLOCK(inp);
}
if ((s_info->sinfo_assoc_id == SCTP_CURRENT_ASSOC) ||
(s_info->sinfo_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
if (s_info->sinfo_stream < stcb->asoc.streamoutcnt) {
memcpy(&stcb->asoc.def_send, s_info, min(optsize, sizeof(stcb->asoc.def_send)));
}
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_PEER_ADDR_PARAMS:
{
struct sctp_paddrparams *paddrp;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(paddrp, optval, struct sctp_paddrparams, optsize);
SCTP_FIND_STCB(inp, stcb, paddrp->spp_assoc_id);
#if defined(INET) && defined(INET6)
if (paddrp->spp_address.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&paddrp->spp_address;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&paddrp->spp_address;
}
} else {
addr = (struct sockaddr *)&paddrp->spp_address;
}
#else
addr = (struct sockaddr *)&paddrp->spp_address;
#endif
if (stcb != NULL) {
net = sctp_findnet(stcb, addr);
} else {
/*
 * We increment here since
 * sctp_findassociation_ep_addr() will do a
 * decrement if it finds the stcb, as long as
 * the locked tcb (last argument) is NOT a
 * TCB, i.e., NULL.
 */
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr,
&net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net == NULL)) {
#ifdef INET
if (addr->sa_family == AF_INET) {
struct sockaddr_in *sin;
sin = (struct sockaddr_in *)addr;
if (sin->sin_addr.s_addr != INADDR_ANY) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
SCTP_TCB_UNLOCK(stcb);
error = EINVAL;
break;
}
} else
#endif
#ifdef INET6
if (addr->sa_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)addr;
if (!IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
SCTP_TCB_UNLOCK(stcb);
error = EINVAL;
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
}
/* sanity checks */
if ((paddrp->spp_flags & SPP_HB_ENABLE) && (paddrp->spp_flags & SPP_HB_DISABLE)) {
if (stcb)
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
if ((paddrp->spp_flags & SPP_PMTUD_ENABLE) && (paddrp->spp_flags & SPP_PMTUD_DISABLE)) {
if (stcb)
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
if (stcb != NULL) {
/************************TCB SPECIFIC SET ******************/
if (net != NULL) {
/************************NET SPECIFIC SET ******************/
if (paddrp->spp_flags & SPP_HB_DISABLE) {
if (!(net->dest_state & SCTP_ADDR_UNCONFIRMED) &&
!(net->dest_state & SCTP_ADDR_NOHB)) {
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT, inp, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_9);
}
net->dest_state |= SCTP_ADDR_NOHB;
}
if (paddrp->spp_flags & SPP_HB_ENABLE) {
if (paddrp->spp_hbinterval) {
net->heart_beat_delay = paddrp->spp_hbinterval;
} else if (paddrp->spp_flags & SPP_HB_TIME_IS_ZERO) {
net->heart_beat_delay = 0;
}
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT, inp, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_10);
sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, inp, stcb, net);
net->dest_state &= ~SCTP_ADDR_NOHB;
}
if (paddrp->spp_flags & SPP_HB_DEMAND) {
/* on demand HB */
sctp_send_hb(stcb, net, SCTP_SO_LOCKED);
sctp_chunk_output(inp, stcb, SCTP_OUTPUT_FROM_SOCKOPT, SCTP_SO_LOCKED);
sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, inp, stcb, net);
}
if ((paddrp->spp_flags & SPP_PMTUD_DISABLE) && (paddrp->spp_pathmtu >= SCTP_SMALLEST_PMTU)) {
if (SCTP_OS_TIMER_PENDING(&net->pmtu_timer.timer)) {
sctp_timer_stop(SCTP_TIMER_TYPE_PATHMTURAISE, inp, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_11);
}
net->dest_state |= SCTP_ADDR_NO_PMTUD;
net->mtu = paddrp->spp_pathmtu;
switch (net->ro._l_addr.sa.sa_family) {
#ifdef INET
case AF_INET:
net->mtu += SCTP_MIN_V4_OVERHEAD;
break;
#endif
#ifdef INET6
case AF_INET6:
net->mtu += SCTP_MIN_OVERHEAD;
break;
#endif
default:
break;
}
if (net->mtu < stcb->asoc.smallest_mtu) {
sctp_pathmtu_adjustment(stcb, net->mtu);
}
}
if (paddrp->spp_flags & SPP_PMTUD_ENABLE) {
if (!SCTP_OS_TIMER_PENDING(&net->pmtu_timer.timer)) {
sctp_timer_start(SCTP_TIMER_TYPE_PATHMTURAISE, inp, stcb, net);
}
net->dest_state &= ~SCTP_ADDR_NO_PMTUD;
}
if (paddrp->spp_pathmaxrxt) {
if (net->dest_state & SCTP_ADDR_PF) {
if (net->error_count > paddrp->spp_pathmaxrxt) {
net->dest_state &= ~SCTP_ADDR_PF;
}
} else {
if ((net->error_count <= paddrp->spp_pathmaxrxt) &&
(net->error_count > net->pf_threshold)) {
net->dest_state |= SCTP_ADDR_PF;
sctp_send_hb(stcb, net, SCTP_SO_LOCKED);
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT,
stcb->sctp_ep, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_12);
sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net);
}
}
if (net->dest_state & SCTP_ADDR_REACHABLE) {
if (net->error_count > paddrp->spp_pathmaxrxt) {
net->dest_state &= ~SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_DOWN, stcb, 0, net, SCTP_SO_LOCKED);
}
} else {
if (net->error_count <= paddrp->spp_pathmaxrxt) {
net->dest_state |= SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_UP, stcb, 0, net, SCTP_SO_LOCKED);
}
}
net->failure_threshold = paddrp->spp_pathmaxrxt;
}
if (paddrp->spp_flags & SPP_DSCP) {
net->dscp = paddrp->spp_dscp & 0xfc;
net->dscp |= 0x01;
}
#ifdef INET6
if (paddrp->spp_flags & SPP_IPV6_FLOWLABEL) {
if (net->ro._l_addr.sa.sa_family == AF_INET6) {
net->flowlabel = paddrp->spp_ipv6_flowlabel & 0x000fffff;
net->flowlabel |= 0x80000000;
}
}
#endif
} else {
/************************ASSOC ONLY -- NO NET SPECIFIC SET ******************/
if (paddrp->spp_pathmaxrxt != 0) {
stcb->asoc.def_net_failure = paddrp->spp_pathmaxrxt;
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
if (net->dest_state & SCTP_ADDR_PF) {
if (net->error_count > paddrp->spp_pathmaxrxt) {
net->dest_state &= ~SCTP_ADDR_PF;
}
} else {
if ((net->error_count <= paddrp->spp_pathmaxrxt) &&
(net->error_count > net->pf_threshold)) {
net->dest_state |= SCTP_ADDR_PF;
sctp_send_hb(stcb, net, SCTP_SO_LOCKED);
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT,
stcb->sctp_ep, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_13);
sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net);
}
}
if (net->dest_state & SCTP_ADDR_REACHABLE) {
if (net->error_count > paddrp->spp_pathmaxrxt) {
net->dest_state &= ~SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_DOWN, stcb, 0, net, SCTP_SO_LOCKED);
}
} else {
if (net->error_count <= paddrp->spp_pathmaxrxt) {
net->dest_state |= SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_UP, stcb, 0, net, SCTP_SO_LOCKED);
}
}
net->failure_threshold = paddrp->spp_pathmaxrxt;
}
}
if (paddrp->spp_flags & SPP_HB_ENABLE) {
if (paddrp->spp_hbinterval != 0) {
stcb->asoc.heart_beat_delay = paddrp->spp_hbinterval;
} else if (paddrp->spp_flags & SPP_HB_TIME_IS_ZERO) {
stcb->asoc.heart_beat_delay = 0;
}
/* Turn back on the timer */
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
if (paddrp->spp_hbinterval != 0) {
net->heart_beat_delay = paddrp->spp_hbinterval;
} else if (paddrp->spp_flags & SPP_HB_TIME_IS_ZERO) {
net->heart_beat_delay = 0;
}
if (net->dest_state & SCTP_ADDR_NOHB) {
net->dest_state &= ~SCTP_ADDR_NOHB;
}
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT, inp, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_14);
sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, inp, stcb, net);
}
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_DONOT_HEARTBEAT);
}
if (paddrp->spp_flags & SPP_HB_DISABLE) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
if (!(net->dest_state & SCTP_ADDR_NOHB)) {
net->dest_state |= SCTP_ADDR_NOHB;
if (!(net->dest_state & SCTP_ADDR_UNCONFIRMED)) {
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT,
inp, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_15);
}
}
}
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_DONOT_HEARTBEAT);
}
if ((paddrp->spp_flags & SPP_PMTUD_DISABLE) && (paddrp->spp_pathmtu >= SCTP_SMALLEST_PMTU)) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
if (SCTP_OS_TIMER_PENDING(&net->pmtu_timer.timer)) {
sctp_timer_stop(SCTP_TIMER_TYPE_PATHMTURAISE, inp, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_16);
}
net->dest_state |= SCTP_ADDR_NO_PMTUD;
net->mtu = paddrp->spp_pathmtu;
switch (net->ro._l_addr.sa.sa_family) {
#ifdef INET
case AF_INET:
net->mtu += SCTP_MIN_V4_OVERHEAD;
break;
#endif
#ifdef INET6
case AF_INET6:
net->mtu += SCTP_MIN_OVERHEAD;
break;
#endif
default:
break;
}
if (net->mtu < stcb->asoc.smallest_mtu) {
sctp_pathmtu_adjustment(stcb, net->mtu);
}
}
stcb->asoc.default_mtu = paddrp->spp_pathmtu;
sctp_stcb_feature_on(inp, stcb, SCTP_PCB_FLAGS_DO_NOT_PMTUD);
}
if (paddrp->spp_flags & SPP_PMTUD_ENABLE) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
if (!SCTP_OS_TIMER_PENDING(&net->pmtu_timer.timer)) {
sctp_timer_start(SCTP_TIMER_TYPE_PATHMTURAISE, inp, stcb, net);
}
net->dest_state &= ~SCTP_ADDR_NO_PMTUD;
}
stcb->asoc.default_mtu = 0;
sctp_stcb_feature_off(inp, stcb, SCTP_PCB_FLAGS_DO_NOT_PMTUD);
}
if (paddrp->spp_flags & SPP_DSCP) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
net->dscp = paddrp->spp_dscp & 0xfc;
net->dscp |= 0x01;
}
stcb->asoc.default_dscp = paddrp->spp_dscp & 0xfc;
stcb->asoc.default_dscp |= 0x01;
}
#ifdef INET6
if (paddrp->spp_flags & SPP_IPV6_FLOWLABEL) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
if (net->ro._l_addr.sa.sa_family == AF_INET6) {
net->flowlabel = paddrp->spp_ipv6_flowlabel & 0x000fffff;
net->flowlabel |= 0x80000000;
}
}
stcb->asoc.default_flowlabel = paddrp->spp_ipv6_flowlabel & 0x000fffff;
stcb->asoc.default_flowlabel |= 0x80000000;
}
#endif
}
SCTP_TCB_UNLOCK(stcb);
} else {
/************************NO TCB, SET TO default stuff ******************/
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(paddrp->spp_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
/*
 * The TOS/FLOWLABEL values are set via
 * options on the socket.
 */
if (paddrp->spp_pathmaxrxt != 0) {
inp->sctp_ep.def_net_failure = paddrp->spp_pathmaxrxt;
}
if (paddrp->spp_flags & SPP_HB_TIME_IS_ZERO)
inp->sctp_ep.sctp_timeoutticks[SCTP_TIMER_HEARTBEAT] = 0;
else if (paddrp->spp_hbinterval != 0) {
if (paddrp->spp_hbinterval > SCTP_MAX_HB_INTERVAL)
paddrp->spp_hbinterval = SCTP_MAX_HB_INTERVAL;
inp->sctp_ep.sctp_timeoutticks[SCTP_TIMER_HEARTBEAT] = MSEC_TO_TICKS(paddrp->spp_hbinterval);
}
if (paddrp->spp_flags & SPP_HB_ENABLE) {
if (paddrp->spp_flags & SPP_HB_TIME_IS_ZERO) {
inp->sctp_ep.sctp_timeoutticks[SCTP_TIMER_HEARTBEAT] = 0;
} else if (paddrp->spp_hbinterval) {
inp->sctp_ep.sctp_timeoutticks[SCTP_TIMER_HEARTBEAT] = MSEC_TO_TICKS(paddrp->spp_hbinterval);
}
sctp_feature_off(inp, SCTP_PCB_FLAGS_DONOT_HEARTBEAT);
} else if (paddrp->spp_flags & SPP_HB_DISABLE) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_DONOT_HEARTBEAT);
}
if (paddrp->spp_flags & SPP_PMTUD_ENABLE) {
inp->sctp_ep.default_mtu = 0;
sctp_feature_off(inp, SCTP_PCB_FLAGS_DO_NOT_PMTUD);
} else if (paddrp->spp_flags & SPP_PMTUD_DISABLE) {
if (paddrp->spp_pathmtu >= SCTP_SMALLEST_PMTU) {
inp->sctp_ep.default_mtu = paddrp->spp_pathmtu;
}
sctp_feature_on(inp, SCTP_PCB_FLAGS_DO_NOT_PMTUD);
}
if (paddrp->spp_flags & SPP_DSCP) {
inp->sctp_ep.default_dscp = paddrp->spp_dscp & 0xfc;
inp->sctp_ep.default_dscp |= 0x01;
}
#ifdef INET6
if (paddrp->spp_flags & SPP_IPV6_FLOWLABEL) {
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) {
inp->sctp_ep.default_flowlabel = paddrp->spp_ipv6_flowlabel & 0x000fffff;
inp->sctp_ep.default_flowlabel |= 0x80000000;
}
}
#endif
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_RTOINFO:
{
struct sctp_rtoinfo *srto;
uint32_t new_init, new_min, new_max;
SCTP_CHECK_AND_CAST(srto, optval, struct sctp_rtoinfo, optsize);
SCTP_FIND_STCB(inp, stcb, srto->srto_assoc_id);
if (stcb) {
if (srto->srto_initial)
new_init = srto->srto_initial;
else
new_init = stcb->asoc.initial_rto;
if (srto->srto_max)
new_max = srto->srto_max;
else
new_max = stcb->asoc.maxrto;
if (srto->srto_min)
new_min = srto->srto_min;
else
new_min = stcb->asoc.minrto;
if ((new_min <= new_init) && (new_init <= new_max)) {
stcb->asoc.initial_rto = new_init;
stcb->asoc.maxrto = new_max;
stcb->asoc.minrto = new_min;
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(srto->srto_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (srto->srto_initial)
new_init = srto->srto_initial;
else
new_init = inp->sctp_ep.initial_rto;
if (srto->srto_max)
new_max = srto->srto_max;
else
new_max = inp->sctp_ep.sctp_maxrto;
if (srto->srto_min)
new_min = srto->srto_min;
else
new_min = inp->sctp_ep.sctp_minrto;
if ((new_min <= new_init) && (new_init <= new_max)) {
inp->sctp_ep.initial_rto = new_init;
inp->sctp_ep.sctp_maxrto = new_max;
inp->sctp_ep.sctp_minrto = new_min;
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_ASSOCINFO:
{
struct sctp_assocparams *sasoc;
SCTP_CHECK_AND_CAST(sasoc, optval, struct sctp_assocparams, optsize);
SCTP_FIND_STCB(inp, stcb, sasoc->sasoc_assoc_id);
if (sasoc->sasoc_cookie_life) {
/* boundary check the cookie life */
if (sasoc->sasoc_cookie_life < 1000)
sasoc->sasoc_cookie_life = 1000;
if (sasoc->sasoc_cookie_life > SCTP_MAX_COOKIE_LIFE) {
sasoc->sasoc_cookie_life = SCTP_MAX_COOKIE_LIFE;
}
}
if (stcb) {
if (sasoc->sasoc_asocmaxrxt)
stcb->asoc.max_send_times = sasoc->sasoc_asocmaxrxt;
if (sasoc->sasoc_cookie_life) {
stcb->asoc.cookie_life = MSEC_TO_TICKS(sasoc->sasoc_cookie_life);
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(sasoc->sasoc_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (sasoc->sasoc_asocmaxrxt)
inp->sctp_ep.max_send_times = sasoc->sasoc_asocmaxrxt;
if (sasoc->sasoc_cookie_life) {
inp->sctp_ep.def_cookie_life = MSEC_TO_TICKS(sasoc->sasoc_cookie_life);
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_INITMSG:
{
struct sctp_initmsg *sinit;
SCTP_CHECK_AND_CAST(sinit, optval, struct sctp_initmsg, optsize);
SCTP_INP_WLOCK(inp);
if (sinit->sinit_num_ostreams)
inp->sctp_ep.pre_open_stream_count = sinit->sinit_num_ostreams;
if (sinit->sinit_max_instreams)
inp->sctp_ep.max_open_streams_intome = sinit->sinit_max_instreams;
if (sinit->sinit_max_attempts)
inp->sctp_ep.max_init_times = sinit->sinit_max_attempts;
if (sinit->sinit_max_init_timeo)
inp->sctp_ep.initial_init_rto_max = sinit->sinit_max_init_timeo;
SCTP_INP_WUNLOCK(inp);
break;
}
case SCTP_PRIMARY_ADDR:
{
struct sctp_setprim *spa;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(spa, optval, struct sctp_setprim, optsize);
SCTP_FIND_STCB(inp, stcb, spa->ssp_assoc_id);
#if defined(INET) && defined(INET6)
if (spa->ssp_addr.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&spa->ssp_addr;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&spa->ssp_addr;
}
} else {
addr = (struct sockaddr *)&spa->ssp_addr;
}
#else
addr = (struct sockaddr *)&spa->ssp_addr;
#endif
if (stcb != NULL) {
net = sctp_findnet(stcb, addr);
} else {
/*
 * We increment here since
 * sctp_findassociation_ep_addr() will do a
 * decrement if it finds the stcb, as long as
 * the locked tcb (last argument) is NOT a
 * TCB, i.e., NULL.
 */
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr,
&net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net != NULL)) {
if (net != stcb->asoc.primary_destination) {
if (!(net->dest_state & SCTP_ADDR_UNCONFIRMED)) {
/* Ok we need to set it */
if (sctp_set_primary_addr(stcb, (struct sockaddr *)NULL, net) == 0) {
if ((stcb->asoc.alternate) &&
(!(net->dest_state & SCTP_ADDR_PF)) &&
(net->dest_state & SCTP_ADDR_REACHABLE)) {
sctp_free_remote_addr(stcb->asoc.alternate);
stcb->asoc.alternate = NULL;
}
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
if (stcb != NULL) {
SCTP_TCB_UNLOCK(stcb);
}
break;
}
case SCTP_SET_DYNAMIC_PRIMARY:
{
union sctp_sockstore *ss;
error = priv_check(curthread,
PRIV_NETINET_RESERVEDPORT);
if (error)
break;
SCTP_CHECK_AND_CAST(ss, optval, union sctp_sockstore, optsize);
/* SUPER USER CHECK? */
error = sctp_dynamic_set_primary(&ss->sa, vrf_id);
break;
}
case SCTP_SET_PEER_PRIMARY_ADDR:
{
struct sctp_setpeerprim *sspp;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(sspp, optval, struct sctp_setpeerprim, optsize);
SCTP_FIND_STCB(inp, stcb, sspp->sspp_assoc_id);
if (stcb != NULL) {
struct sctp_ifa *ifa;
#if defined(INET) && defined(INET6)
if (sspp->sspp_addr.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&sspp->sspp_addr;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&sspp->sspp_addr;
}
} else {
addr = (struct sockaddr *)&sspp->sspp_addr;
}
#else
addr = (struct sockaddr *)&sspp->sspp_addr;
#endif
ifa = sctp_find_ifa_by_addr(addr, stcb->asoc.vrf_id, SCTP_ADDR_NOT_LOCKED);
if (ifa == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_of_it;
}
if ((inp->sctp_flags & SCTP_PCB_FLAGS_BOUNDALL) == 0) {
/*
 * Must validate that the ifa found is
 * in our ep.
 */
struct sctp_laddr *laddr;
int found = 0;
LIST_FOREACH(laddr, &inp->sctp_addr_list, sctp_nxt_addr) {
if (laddr->ifa == NULL) {
SCTPDBG(SCTP_DEBUG_OUTPUT1, "%s: NULL ifa\n",
__func__);
continue;
}
if ((sctp_is_addr_restricted(stcb, laddr->ifa)) &&
(!sctp_is_addr_pending(stcb, laddr->ifa))) {
continue;
}
if (laddr->ifa == ifa) {
found = 1;
break;
}
}
if (!found) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_of_it;
}
} else {
switch (addr->sa_family) {
#ifdef INET
case AF_INET:
{
struct sockaddr_in *sin;
sin = (struct sockaddr_in *)addr;
if (prison_check_ip4(inp->ip_inp.inp.inp_cred,
&sin->sin_addr) != 0) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_of_it;
}
break;
}
#endif
#ifdef INET6
case AF_INET6:
{
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)addr;
if (prison_check_ip6(inp->ip_inp.inp.inp_cred,
&sin6->sin6_addr) != 0) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_of_it;
}
break;
}
#endif
default:
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_of_it;
}
}
if (sctp_set_primary_ip_address_sa(stcb, addr) != 0) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
sctp_chunk_output(inp, stcb, SCTP_OUTPUT_FROM_SOCKOPT, SCTP_SO_LOCKED);
out_of_it:
SCTP_TCB_UNLOCK(stcb);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
break;
}
case SCTP_BINDX_ADD_ADDR:
{
struct sctp_getaddresses *addrs;
struct thread *td;
td = (struct thread *)p;
SCTP_CHECK_AND_CAST(addrs, optval, struct sctp_getaddresses,
optsize);
#ifdef INET
if (addrs->addr->sa_family == AF_INET) {
if (optsize < sizeof(struct sctp_getaddresses) - sizeof(struct sockaddr) + sizeof(struct sockaddr_in)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if (td != NULL && (error = prison_local_ip4(td->td_ucred, &(((struct sockaddr_in *)(addrs->addr))->sin_addr)))) {
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
#ifdef INET6
if (addrs->addr->sa_family == AF_INET6) {
if (optsize < sizeof(struct sctp_getaddresses) - sizeof(struct sockaddr) + sizeof(struct sockaddr_in6)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if (td != NULL && (error = prison_local_ip6(td->td_ucred, &(((struct sockaddr_in6 *)(addrs->addr))->sin6_addr),
(SCTP_IPV6_V6ONLY(inp) != 0))) != 0) {
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
break;
}
sctp_bindx_add_address(so, inp, addrs->addr,
addrs->sget_assoc_id, vrf_id,
&error, p);
break;
}
case SCTP_BINDX_REM_ADDR:
{
struct sctp_getaddresses *addrs;
struct thread *td;
td = (struct thread *)p;
SCTP_CHECK_AND_CAST(addrs, optval, struct sctp_getaddresses, optsize);
#ifdef INET
if (addrs->addr->sa_family == AF_INET) {
if (optsize < sizeof(struct sctp_getaddresses) - sizeof(struct sockaddr) + sizeof(struct sockaddr_in)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if (td != NULL && (error = prison_local_ip4(td->td_ucred, &(((struct sockaddr_in *)(addrs->addr))->sin_addr)))) {
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
#ifdef INET6
if (addrs->addr->sa_family == AF_INET6) {
if (optsize < sizeof(struct sctp_getaddresses) - sizeof(struct sockaddr) + sizeof(struct sockaddr_in6)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if (td != NULL &&
(error = prison_local_ip6(td->td_ucred,
&(((struct sockaddr_in6 *)(addrs->addr))->sin6_addr),
(SCTP_IPV6_V6ONLY(inp) != 0))) != 0) {
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
break;
}
sctp_bindx_delete_address(inp, addrs->addr,
addrs->sget_assoc_id, vrf_id,
&error);
break;
}
case SCTP_EVENT:
{
struct sctp_event *event;
uint32_t event_type;
SCTP_CHECK_AND_CAST(event, optval, struct sctp_event, optsize);
SCTP_FIND_STCB(inp, stcb, event->se_assoc_id);
switch (event->se_type) {
case SCTP_ASSOC_CHANGE:
event_type = SCTP_PCB_FLAGS_RECVASSOCEVNT;
break;
case SCTP_PEER_ADDR_CHANGE:
event_type = SCTP_PCB_FLAGS_RECVPADDREVNT;
break;
case SCTP_REMOTE_ERROR:
event_type = SCTP_PCB_FLAGS_RECVPEERERR;
break;
case SCTP_SEND_FAILED:
event_type = SCTP_PCB_FLAGS_RECVSENDFAILEVNT;
break;
case SCTP_SHUTDOWN_EVENT:
event_type = SCTP_PCB_FLAGS_RECVSHUTDOWNEVNT;
break;
case SCTP_ADAPTATION_INDICATION:
event_type = SCTP_PCB_FLAGS_ADAPTATIONEVNT;
break;
case SCTP_PARTIAL_DELIVERY_EVENT:
event_type = SCTP_PCB_FLAGS_PDAPIEVNT;
break;
case SCTP_AUTHENTICATION_EVENT:
event_type = SCTP_PCB_FLAGS_AUTHEVNT;
break;
case SCTP_STREAM_RESET_EVENT:
event_type = SCTP_PCB_FLAGS_STREAM_RESETEVNT;
break;
case SCTP_SENDER_DRY_EVENT:
event_type = SCTP_PCB_FLAGS_DRYEVNT;
break;
case SCTP_NOTIFICATIONS_STOPPED_EVENT:
event_type = 0;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTSUP);
error = ENOTSUP;
break;
case SCTP_ASSOC_RESET_EVENT:
event_type = SCTP_PCB_FLAGS_ASSOC_RESETEVNT;
break;
case SCTP_STREAM_CHANGE_EVENT:
event_type = SCTP_PCB_FLAGS_STREAM_CHANGEEVNT;
break;
case SCTP_SEND_FAILED_EVENT:
event_type = SCTP_PCB_FLAGS_RECVNSENDFAILEVNT;
break;
default:
event_type = 0;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if (event_type > 0) {
if (stcb) {
if (event->se_on) {
sctp_stcb_feature_on(inp, stcb, event_type);
if (event_type == SCTP_PCB_FLAGS_DRYEVNT) {
if (TAILQ_EMPTY(&stcb->asoc.send_queue) &&
TAILQ_EMPTY(&stcb->asoc.sent_queue) &&
(stcb->asoc.stream_queue_cnt == 0)) {
sctp_ulp_notify(SCTP_NOTIFY_SENDER_DRY, stcb, 0, NULL, SCTP_SO_LOCKED);
}
}
} else {
sctp_stcb_feature_off(inp, stcb, event_type);
}
SCTP_TCB_UNLOCK(stcb);
} else {
/*
* We don't want to send up a storm
* of events, so return an error for
* sender dry events
*/
if ((event_type == SCTP_PCB_FLAGS_DRYEVNT) &&
((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) == 0) &&
((inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) == 0) &&
((event->se_assoc_id == SCTP_ALL_ASSOC) ||
(event->se_assoc_id == SCTP_CURRENT_ASSOC))) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTSUP);
error = ENOTSUP;
break;
}
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(event->se_assoc_id == SCTP_FUTURE_ASSOC) ||
(event->se_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (event->se_on) {
sctp_feature_on(inp, event_type);
} else {
sctp_feature_off(inp, event_type);
}
SCTP_INP_WUNLOCK(inp);
}
if ((event->se_assoc_id == SCTP_CURRENT_ASSOC) ||
(event->se_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
if (event->se_on) {
sctp_stcb_feature_on(inp, stcb, event_type);
} else {
sctp_stcb_feature_off(inp, stcb, event_type);
}
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
+ } else {
+ if (stcb) {
+ SCTP_TCB_UNLOCK(stcb);
+ }
}
break;
}
case SCTP_RECVRCVINFO:
{
int *onoff;
SCTP_CHECK_AND_CAST(onoff, optval, int, optsize);
SCTP_INP_WLOCK(inp);
if (*onoff != 0) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVRCVINFO);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVRCVINFO);
}
SCTP_INP_WUNLOCK(inp);
break;
}
case SCTP_RECVNXTINFO:
{
int *onoff;
SCTP_CHECK_AND_CAST(onoff, optval, int, optsize);
SCTP_INP_WLOCK(inp);
if (*onoff != 0) {
sctp_feature_on(inp, SCTP_PCB_FLAGS_RECVNXTINFO);
} else {
sctp_feature_off(inp, SCTP_PCB_FLAGS_RECVNXTINFO);
}
SCTP_INP_WUNLOCK(inp);
break;
}
case SCTP_DEFAULT_SNDINFO:
{
struct sctp_sndinfo *info;
uint16_t policy;
SCTP_CHECK_AND_CAST(info, optval, struct sctp_sndinfo, optsize);
SCTP_FIND_STCB(inp, stcb, info->snd_assoc_id);
if (stcb) {
if (info->snd_sid < stcb->asoc.streamoutcnt) {
stcb->asoc.def_send.sinfo_stream = info->snd_sid;
policy = PR_SCTP_POLICY(stcb->asoc.def_send.sinfo_flags);
stcb->asoc.def_send.sinfo_flags = info->snd_flags;
stcb->asoc.def_send.sinfo_flags |= policy;
stcb->asoc.def_send.sinfo_ppid = info->snd_ppid;
stcb->asoc.def_send.sinfo_context = info->snd_context;
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(info->snd_assoc_id == SCTP_FUTURE_ASSOC) ||
(info->snd_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->def_send.sinfo_stream = info->snd_sid;
policy = PR_SCTP_POLICY(inp->def_send.sinfo_flags);
inp->def_send.sinfo_flags = info->snd_flags;
inp->def_send.sinfo_flags |= policy;
inp->def_send.sinfo_ppid = info->snd_ppid;
inp->def_send.sinfo_context = info->snd_context;
SCTP_INP_WUNLOCK(inp);
}
if ((info->snd_assoc_id == SCTP_CURRENT_ASSOC) ||
(info->snd_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
if (info->snd_sid < stcb->asoc.streamoutcnt) {
stcb->asoc.def_send.sinfo_stream = info->snd_sid;
policy = PR_SCTP_POLICY(stcb->asoc.def_send.sinfo_flags);
stcb->asoc.def_send.sinfo_flags = info->snd_flags;
stcb->asoc.def_send.sinfo_flags |= policy;
stcb->asoc.def_send.sinfo_ppid = info->snd_ppid;
stcb->asoc.def_send.sinfo_context = info->snd_context;
}
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_DEFAULT_PRINFO:
{
struct sctp_default_prinfo *info;
SCTP_CHECK_AND_CAST(info, optval, struct sctp_default_prinfo, optsize);
SCTP_FIND_STCB(inp, stcb, info->pr_assoc_id);
if (info->pr_policy > SCTP_PR_SCTP_MAX) {
+ if (stcb) {
+ SCTP_TCB_UNLOCK(stcb);
+ }
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
break;
}
if (stcb) {
stcb->asoc.def_send.sinfo_flags &= 0xfff0;
stcb->asoc.def_send.sinfo_flags |= info->pr_policy;
stcb->asoc.def_send.sinfo_timetolive = info->pr_value;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(info->pr_assoc_id == SCTP_FUTURE_ASSOC) ||
(info->pr_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->def_send.sinfo_flags &= 0xfff0;
inp->def_send.sinfo_flags |= info->pr_policy;
inp->def_send.sinfo_timetolive = info->pr_value;
SCTP_INP_WUNLOCK(inp);
}
if ((info->pr_assoc_id == SCTP_CURRENT_ASSOC) ||
(info->pr_assoc_id == SCTP_ALL_ASSOC)) {
SCTP_INP_RLOCK(inp);
LIST_FOREACH(stcb, &inp->sctp_asoc_list, sctp_tcblist) {
SCTP_TCB_LOCK(stcb);
stcb->asoc.def_send.sinfo_flags &= 0xfff0;
stcb->asoc.def_send.sinfo_flags |= info->pr_policy;
stcb->asoc.def_send.sinfo_timetolive = info->pr_value;
SCTP_TCB_UNLOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
}
}
break;
}
case SCTP_PEER_ADDR_THLDS:
/* Applies to the specific association */
{
struct sctp_paddrthlds *thlds;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(thlds, optval, struct sctp_paddrthlds, optsize);
SCTP_FIND_STCB(inp, stcb, thlds->spt_assoc_id);
#if defined(INET) && defined(INET6)
if (thlds->spt_address.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&thlds->spt_address;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&thlds->spt_address;
}
} else {
addr = (struct sockaddr *)&thlds->spt_address;
}
#else
addr = (struct sockaddr *)&thlds->spt_address;
#endif
if (stcb != NULL) {
net = sctp_findnet(stcb, addr);
} else {
/*
* We increment here since
* sctp_findassociation_ep_addr() will do a
* decrement if it finds the stcb as long as
* the locked tcb (last argument) is NOT a
* TCB.. aka NULL.
*/
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr,
&net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net == NULL)) {
#ifdef INET
if (addr->sa_family == AF_INET) {
struct sockaddr_in *sin;
sin = (struct sockaddr_in *)addr;
if (sin->sin_addr.s_addr != INADDR_ANY) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
SCTP_TCB_UNLOCK(stcb);
error = EINVAL;
break;
}
} else
#endif
#ifdef INET6
if (addr->sa_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)addr;
if (!IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
SCTP_TCB_UNLOCK(stcb);
error = EINVAL;
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
}
if (thlds->spt_pathcpthld != 0xffff) {
error = EINVAL;
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
if (stcb != NULL) {
if (net != NULL) {
net->failure_threshold = thlds->spt_pathmaxrxt;
net->pf_threshold = thlds->spt_pathpfthld;
if (net->dest_state & SCTP_ADDR_PF) {
if ((net->error_count > net->failure_threshold) ||
(net->error_count <= net->pf_threshold)) {
net->dest_state &= ~SCTP_ADDR_PF;
}
} else {
if ((net->error_count > net->pf_threshold) &&
(net->error_count <= net->failure_threshold)) {
net->dest_state |= SCTP_ADDR_PF;
sctp_send_hb(stcb, net, SCTP_SO_LOCKED);
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT,
stcb->sctp_ep, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_17);
sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net);
}
}
if (net->dest_state & SCTP_ADDR_REACHABLE) {
if (net->error_count > net->failure_threshold) {
net->dest_state &= ~SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_DOWN, stcb, 0, net, SCTP_SO_LOCKED);
}
} else {
if (net->error_count <= net->failure_threshold) {
net->dest_state |= SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_UP, stcb, 0, net, SCTP_SO_LOCKED);
}
}
} else {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
net->failure_threshold = thlds->spt_pathmaxrxt;
net->pf_threshold = thlds->spt_pathpfthld;
if (net->dest_state & SCTP_ADDR_PF) {
if ((net->error_count > net->failure_threshold) ||
(net->error_count <= net->pf_threshold)) {
net->dest_state &= ~SCTP_ADDR_PF;
}
} else {
if ((net->error_count > net->pf_threshold) &&
(net->error_count <= net->failure_threshold)) {
net->dest_state |= SCTP_ADDR_PF;
sctp_send_hb(stcb, net, SCTP_SO_LOCKED);
sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT,
stcb->sctp_ep, stcb, net,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_18);
sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net);
}
}
if (net->dest_state & SCTP_ADDR_REACHABLE) {
if (net->error_count > net->failure_threshold) {
net->dest_state &= ~SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_DOWN, stcb, 0, net, SCTP_SO_LOCKED);
}
} else {
if (net->error_count <= net->failure_threshold) {
net->dest_state |= SCTP_ADDR_REACHABLE;
sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_UP, stcb, 0, net, SCTP_SO_LOCKED);
}
}
}
stcb->asoc.def_net_failure = thlds->spt_pathmaxrxt;
stcb->asoc.def_net_pf_threshold = thlds->spt_pathpfthld;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(thlds->spt_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->sctp_ep.def_net_failure = thlds->spt_pathmaxrxt;
inp->sctp_ep.def_net_pf_threshold = thlds->spt_pathpfthld;
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_REMOTE_UDP_ENCAPS_PORT:
{
struct sctp_udpencaps *encaps;
struct sctp_nets *net;
struct sockaddr *addr;
#if defined(INET) && defined(INET6)
struct sockaddr_in sin_store;
#endif
SCTP_CHECK_AND_CAST(encaps, optval, struct sctp_udpencaps, optsize);
SCTP_FIND_STCB(inp, stcb, encaps->sue_assoc_id);
#if defined(INET) && defined(INET6)
if (encaps->sue_address.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)&encaps->sue_address;
if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) {
in6_sin6_2_sin(&sin_store, sin6);
addr = (struct sockaddr *)&sin_store;
} else {
addr = (struct sockaddr *)&encaps->sue_address;
}
} else {
addr = (struct sockaddr *)&encaps->sue_address;
}
#else
addr = (struct sockaddr *)&encaps->sue_address;
#endif
if (stcb != NULL) {
net = sctp_findnet(stcb, addr);
} else {
/*
* We increment here since
* sctp_findassociation_ep_addr() will do a
* decrement if it finds the stcb as long as
* the locked tcb (last argument) is NOT a
* TCB.. aka NULL.
*/
net = NULL;
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr, &net, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
}
}
if ((stcb != NULL) && (net == NULL)) {
#ifdef INET
if (addr->sa_family == AF_INET) {
struct sockaddr_in *sin;
sin = (struct sockaddr_in *)addr;
if (sin->sin_addr.s_addr != INADDR_ANY) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
SCTP_TCB_UNLOCK(stcb);
error = EINVAL;
break;
}
} else
#endif
#ifdef INET6
if (addr->sa_family == AF_INET6) {
struct sockaddr_in6 *sin6;
sin6 = (struct sockaddr_in6 *)addr;
if (!IN6_IS_ADDR_UNSPECIFIED(&sin6->sin6_addr)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
SCTP_TCB_UNLOCK(stcb);
error = EINVAL;
break;
}
} else
#endif
{
error = EAFNOSUPPORT;
SCTP_TCB_UNLOCK(stcb);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
break;
}
}
if (stcb != NULL) {
if (net != NULL) {
net->port = encaps->sue_port;
} else {
stcb->asoc.port = encaps->sue_port;
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(encaps->sue_assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->sctp_ep.port = encaps->sue_port;
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_ECN_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->ecn_supported = 0;
} else {
inp->ecn_supported = 1;
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_PR_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->prsctp_supported = 0;
} else {
inp->prsctp_supported = 1;
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_AUTH_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
if ((av->assoc_value == 0) &&
(inp->asconf_supported == 1)) {
/*
* AUTH is required for
* ASCONF
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
} else {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->auth_supported = 0;
} else {
inp->auth_supported = 1;
}
SCTP_INP_WUNLOCK(inp);
}
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_ASCONF_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
if ((av->assoc_value != 0) &&
(inp->auth_supported == 0)) {
/*
* AUTH is required for
* ASCONF
*/
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
} else {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->asconf_supported = 0;
sctp_auth_delete_chunk(SCTP_ASCONF,
inp->sctp_ep.local_auth_chunks);
sctp_auth_delete_chunk(SCTP_ASCONF_ACK,
inp->sctp_ep.local_auth_chunks);
} else {
inp->asconf_supported = 1;
sctp_auth_add_chunk(SCTP_ASCONF,
inp->sctp_ep.local_auth_chunks);
sctp_auth_add_chunk(SCTP_ASCONF_ACK,
inp->sctp_ep.local_auth_chunks);
}
SCTP_INP_WUNLOCK(inp);
}
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_RECONFIG_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->reconfig_supported = 0;
} else {
inp->reconfig_supported = 1;
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_NRSACK_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->nrsack_supported = 0;
} else {
inp->nrsack_supported = 1;
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_PKTDROP_SUPPORTED:
{
struct sctp_assoc_value *av;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
if (av->assoc_value == 0) {
inp->pktdrop_supported = 0;
} else {
inp->pktdrop_supported = 1;
}
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
case SCTP_MAX_CWND:
{
struct sctp_assoc_value *av;
struct sctp_nets *net;
SCTP_CHECK_AND_CAST(av, optval, struct sctp_assoc_value, optsize);
SCTP_FIND_STCB(inp, stcb, av->assoc_id);
if (stcb) {
stcb->asoc.max_cwnd = av->assoc_value;
if (stcb->asoc.max_cwnd > 0) {
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
if ((net->cwnd > stcb->asoc.max_cwnd) &&
(net->cwnd > (net->mtu - sizeof(struct sctphdr)))) {
net->cwnd = stcb->asoc.max_cwnd;
if (net->cwnd < (net->mtu - sizeof(struct sctphdr))) {
net->cwnd = net->mtu - sizeof(struct sctphdr);
}
}
}
}
SCTP_TCB_UNLOCK(stcb);
} else {
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) ||
(av->assoc_id == SCTP_FUTURE_ASSOC)) {
SCTP_INP_WLOCK(inp);
inp->max_cwnd = av->assoc_value;
SCTP_INP_WUNLOCK(inp);
} else {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
}
break;
}
default:
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOPROTOOPT);
error = ENOPROTOOPT;
break;
} /* end switch (opt) */
return (error);
}
int
sctp_ctloutput(struct socket *so, struct sockopt *sopt)
{
void *optval = NULL;
size_t optsize = 0;
void *p;
int error = 0;
struct sctp_inpcb *inp;
if ((sopt->sopt_level == SOL_SOCKET) &&
(sopt->sopt_name == SO_SETFIB)) {
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_LTRACE_ERR_RET(so->so_pcb, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOBUFS);
return (EINVAL);
}
SCTP_INP_WLOCK(inp);
inp->fibnum = so->so_fibnum;
SCTP_INP_WUNLOCK(inp);
return (0);
}
if (sopt->sopt_level != IPPROTO_SCTP) {
/* wrong proto level... send back up to IP */
#ifdef INET6
if (INP_CHECK_SOCKAF(so, AF_INET6))
error = ip6_ctloutput(so, sopt);
#endif /* INET6 */
#if defined(INET) && defined(INET6)
else
#endif
#ifdef INET
error = ip_ctloutput(so, sopt);
#endif
return (error);
}
optsize = sopt->sopt_valsize;
if (optsize > SCTP_SOCKET_OPTION_LIMIT) {
SCTP_LTRACE_ERR_RET(so->so_pcb, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOBUFS);
return (ENOBUFS);
}
if (optsize) {
SCTP_MALLOC(optval, void *, optsize, SCTP_M_SOCKOPT);
if (optval == NULL) {
SCTP_LTRACE_ERR_RET(so->so_pcb, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOBUFS);
return (ENOBUFS);
}
error = sooptcopyin(sopt, optval, optsize, optsize);
if (error) {
SCTP_FREE(optval, SCTP_M_SOCKOPT);
goto out;
}
}
p = (void *)sopt->sopt_td;
if (sopt->sopt_dir == SOPT_SET) {
error = sctp_setopt(so, sopt->sopt_name, optval, optsize, p);
} else if (sopt->sopt_dir == SOPT_GET) {
error = sctp_getopt(so, sopt->sopt_name, optval, &optsize, p);
} else {
SCTP_LTRACE_ERR_RET(so->so_pcb, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
}
if ((error == 0) && (optval != NULL)) {
error = sooptcopyout(sopt, optval, optsize);
SCTP_FREE(optval, SCTP_M_SOCKOPT);
} else if (optval != NULL) {
SCTP_FREE(optval, SCTP_M_SOCKOPT);
}
out:
return (error);
}
#ifdef INET
static int
sctp_connect(struct socket *so, struct sockaddr *addr, struct thread *p)
{
int error = 0;
int create_lock_on = 0;
uint32_t vrf_id;
struct sctp_inpcb *inp;
struct sctp_tcb *stcb = NULL;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
/* I made the same as TCP since we are not setup? */
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (ECONNRESET);
}
if (addr == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return EINVAL;
}
switch (addr->sa_family) {
#ifdef INET6
case AF_INET6:
{
struct sockaddr_in6 *sin6p;
if (addr->sa_len != sizeof(struct sockaddr_in6)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
sin6p = (struct sockaddr_in6 *)addr;
if (p != NULL && (error = prison_remote_ip6(p->td_ucred, &sin6p->sin6_addr)) != 0) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
return (error);
}
break;
}
#endif
#ifdef INET
case AF_INET:
{
struct sockaddr_in *sinp;
if (addr->sa_len != sizeof(struct sockaddr_in)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (EINVAL);
}
sinp = (struct sockaddr_in *)addr;
if (p != NULL && (error = prison_remote_ip4(p->td_ucred, &sinp->sin_addr)) != 0) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, error);
return (error);
}
break;
}
#endif
default:
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EAFNOSUPPORT);
return (EAFNOSUPPORT);
}
SCTP_INP_INCR_REF(inp);
SCTP_ASOC_CREATE_LOCK(inp);
create_lock_on = 1;
if ((inp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_ALLGONE) ||
(inp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_GONE)) {
/* Should I really unlock ? */
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EFAULT);
error = EFAULT;
goto out_now;
}
#ifdef INET6
if (((inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) == 0) &&
(addr->sa_family == AF_INET6)) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_now;
}
#endif
if ((inp->sctp_flags & SCTP_PCB_FLAGS_UNBOUND) ==
SCTP_PCB_FLAGS_UNBOUND) {
/* Bind an ephemeral port */
error = sctp_inpcb_bind(so, NULL, NULL, p);
if (error) {
goto out_now;
}
}
/* Now do we connect? */
if ((inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL) &&
(sctp_is_feature_off(inp, SCTP_PCB_FLAGS_PORTREUSE))) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
error = EINVAL;
goto out_now;
}
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) &&
(inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED)) {
/* We are already connected AND the TCP model */
SCTP_LTRACE_ERR_RET(inp, stcb, NULL, SCTP_FROM_SCTP_USRREQ, EADDRINUSE);
error = EADDRINUSE;
goto out_now;
}
if (inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED) {
SCTP_INP_RLOCK(inp);
stcb = LIST_FIRST(&inp->sctp_asoc_list);
SCTP_INP_RUNLOCK(inp);
} else {
/*
* We increment here since sctp_findassociation_ep_addr()
* will do a decrement if it finds the stcb as long as the
* locked tcb (last argument) is NOT a TCB.. aka NULL.
*/
SCTP_INP_INCR_REF(inp);
stcb = sctp_findassociation_ep_addr(&inp, addr, NULL, NULL, NULL);
if (stcb == NULL) {
SCTP_INP_DECR_REF(inp);
} else {
SCTP_TCB_UNLOCK(stcb);
}
}
if (stcb != NULL) {
/* Already have or am bringing up an association */
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EALREADY);
error = EALREADY;
goto out_now;
}
vrf_id = inp->def_vrf_id;
/* We are GOOD to go */
stcb = sctp_aloc_assoc(inp, addr, &error, 0, vrf_id,
inp->sctp_ep.pre_open_stream_count,
inp->sctp_ep.port, p);
if (stcb == NULL) {
/* Gak! no memory */
goto out_now;
}
if (stcb->sctp_ep->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) {
stcb->sctp_ep->sctp_flags |= SCTP_PCB_FLAGS_CONNECTED;
/* Set the connected flag so we can queue data */
soisconnecting(so);
}
SCTP_SET_STATE(stcb, SCTP_STATE_COOKIE_WAIT);
(void)SCTP_GETTIME_TIMEVAL(&stcb->asoc.time_entered);
/* initialize authentication parameters for the assoc */
sctp_initialize_auth_params(inp, stcb);
sctp_send_initiate(inp, stcb, SCTP_SO_LOCKED);
SCTP_TCB_UNLOCK(stcb);
out_now:
if (create_lock_on) {
SCTP_ASOC_CREATE_UNLOCK(inp);
}
SCTP_INP_DECR_REF(inp);
return (error);
}
#endif
int
sctp_listen(struct socket *so, int backlog, struct thread *p)
{
/*
* Note this module depends on the protocol processing being called
* AFTER any socket level flags and backlog are applied to the
* socket. The traditional way that the socket flags are applied is
* AFTER protocol processing. We have made a change to the
* sys/kern/uipc_socket.c module to reverse this but this MUST be in
* place if the socket API for SCTP is to work properly.
*/
int error = 0;
struct sctp_inpcb *inp;
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
/* I made the same as TCP since we are not setup? */
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (ECONNRESET);
}
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_PORTREUSE)) {
/* See if we have a listener */
struct sctp_inpcb *tinp;
union sctp_sockstore store;
if ((inp->sctp_flags & SCTP_PCB_FLAGS_BOUNDALL) == 0) {
/* not bound all */
struct sctp_laddr *laddr;
LIST_FOREACH(laddr, &inp->sctp_addr_list, sctp_nxt_addr) {
memcpy(&store, &laddr->ifa->address, sizeof(store));
switch (store.sa.sa_family) {
#ifdef INET
case AF_INET:
store.sin.sin_port = inp->sctp_lport;
break;
#endif
#ifdef INET6
case AF_INET6:
store.sin6.sin6_port = inp->sctp_lport;
break;
#endif
default:
break;
}
tinp = sctp_pcb_findep(&store.sa, 0, 0, inp->def_vrf_id);
if (tinp && (tinp != inp) &&
((tinp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_ALLGONE) == 0) &&
((tinp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_GONE) == 0) &&
(SCTP_IS_LISTENING(tinp))) {
/*
* We have a listener already and
* it's not this inp.
*/
SCTP_INP_DECR_REF(tinp);
return (EADDRINUSE);
} else if (tinp) {
SCTP_INP_DECR_REF(tinp);
}
}
} else {
/* Setup a local addr bound all */
memset(&store, 0, sizeof(store));
#ifdef INET6
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) {
store.sa.sa_family = AF_INET6;
store.sa.sa_len = sizeof(struct sockaddr_in6);
}
#endif
#ifdef INET
if ((inp->sctp_flags & SCTP_PCB_FLAGS_BOUND_V6) == 0) {
store.sa.sa_family = AF_INET;
store.sa.sa_len = sizeof(struct sockaddr_in);
}
#endif
switch (store.sa.sa_family) {
#ifdef INET
case AF_INET:
store.sin.sin_port = inp->sctp_lport;
break;
#endif
#ifdef INET6
case AF_INET6:
store.sin6.sin6_port = inp->sctp_lport;
break;
#endif
default:
break;
}
tinp = sctp_pcb_findep(&store.sa, 0, 0, inp->def_vrf_id);
if (tinp && (tinp != inp) &&
((tinp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_ALLGONE) == 0) &&
((tinp->sctp_flags & SCTP_PCB_FLAGS_SOCKET_GONE) == 0) &&
(SCTP_IS_LISTENING(tinp))) {
/*
* We have a listener already and it's not
* this inp.
*/
SCTP_INP_DECR_REF(tinp);
return (EADDRINUSE);
} else if (tinp) {
SCTP_INP_DECR_REF(tinp);
}
}
}
SCTP_INP_RLOCK(inp);
#ifdef SCTP_LOCK_LOGGING
if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOCK_LOGGING_ENABLE) {
sctp_log_lock(inp, (struct sctp_tcb *)NULL, SCTP_LOG_LOCK_SOCK);
}
#endif
SOCK_LOCK(so);
error = solisten_proto_check(so);
SOCK_UNLOCK(so);
if (error) {
SCTP_INP_RUNLOCK(inp);
return (error);
}
if ((sctp_is_feature_on(inp, SCTP_PCB_FLAGS_PORTREUSE)) &&
(inp->sctp_flags & SCTP_PCB_FLAGS_IN_TCPPOOL)) {
/*
* The unlucky case:
* - We are in the TCP pool with this guy.
* - Someone else is in the main inp slot.
* - We must move this guy (the listener) to the main slot.
* - We must then move the guy that was the listener to the
*   TCP pool.
*/
if (sctp_swap_inpcb_for_listen(inp)) {
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EADDRINUSE);
return (EADDRINUSE);
}
}
if ((inp->sctp_flags & SCTP_PCB_FLAGS_TCPTYPE) &&
(inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED)) {
/* We are already connected AND the TCP model */
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EADDRINUSE);
return (EADDRINUSE);
}
SCTP_INP_RUNLOCK(inp);
if (inp->sctp_flags & SCTP_PCB_FLAGS_UNBOUND) {
/* We must do a bind. */
if ((error = sctp_inpcb_bind(so, NULL, NULL, p))) {
/* bind error, probably perm */
return (error);
}
}
SCTP_INP_WLOCK(inp);
if ((inp->sctp_flags & SCTP_PCB_FLAGS_UDPTYPE) == 0) {
SOCK_LOCK(so);
solisten_proto(so, backlog);
SOCK_UNLOCK(so);
}
if (backlog > 0) {
inp->sctp_flags |= SCTP_PCB_FLAGS_ACCEPTING;
} else {
inp->sctp_flags &= ~SCTP_PCB_FLAGS_ACCEPTING;
}
SCTP_INP_WUNLOCK(inp);
return (error);
}
static int sctp_defered_wakeup_cnt = 0;
int
sctp_accept(struct socket *so, struct sockaddr **addr)
{
struct sctp_tcb *stcb;
struct sctp_inpcb *inp;
union sctp_sockstore store;
#ifdef INET6
int error;
#endif
inp = (struct sctp_inpcb *)so->so_pcb;
if (inp == NULL) {
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (ECONNRESET);
}
SCTP_INP_RLOCK(inp);
if (inp->sctp_flags & SCTP_PCB_FLAGS_UDPTYPE) {
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EOPNOTSUPP);
return (EOPNOTSUPP);
}
if (so->so_state & SS_ISDISCONNECTED) {
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ECONNABORTED);
return (ECONNABORTED);
}
stcb = LIST_FIRST(&inp->sctp_asoc_list);
if (stcb == NULL) {
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (ECONNRESET);
}
SCTP_TCB_LOCK(stcb);
SCTP_INP_RUNLOCK(inp);
store = stcb->asoc.primary_destination->ro._l_addr;
SCTP_CLEAR_SUBSTATE(stcb, SCTP_STATE_IN_ACCEPT_QUEUE);
SCTP_TCB_UNLOCK(stcb);
switch (store.sa.sa_family) {
#ifdef INET
case AF_INET:
{
struct sockaddr_in *sin;
SCTP_MALLOC_SONAME(sin, struct sockaddr_in *, sizeof *sin);
if (sin == NULL)
return (ENOMEM);
sin->sin_family = AF_INET;
sin->sin_len = sizeof(*sin);
sin->sin_port = store.sin.sin_port;
sin->sin_addr = store.sin.sin_addr;
*addr = (struct sockaddr *)sin;
break;
}
#endif
#ifdef INET6
case AF_INET6:
{
struct sockaddr_in6 *sin6;
SCTP_MALLOC_SONAME(sin6, struct sockaddr_in6 *, sizeof *sin6);
if (sin6 == NULL)
return (ENOMEM);
sin6->sin6_family = AF_INET6;
sin6->sin6_len = sizeof(*sin6);
sin6->sin6_port = store.sin6.sin6_port;
sin6->sin6_addr = store.sin6.sin6_addr;
if ((error = sa6_recoverscope(sin6)) != 0) {
SCTP_FREE_SONAME(sin6);
return (error);
}
*addr = (struct sockaddr *)sin6;
break;
}
#endif
default:
/* TSNH (this should not happen) */
break;
}
/* Wake any delayed sleep action */
if (inp->sctp_flags & SCTP_PCB_FLAGS_DONT_WAKE) {
SCTP_INP_WLOCK(inp);
inp->sctp_flags &= ~SCTP_PCB_FLAGS_DONT_WAKE;
if (inp->sctp_flags & SCTP_PCB_FLAGS_WAKEOUTPUT) {
inp->sctp_flags &= ~SCTP_PCB_FLAGS_WAKEOUTPUT;
SCTP_INP_WUNLOCK(inp);
SOCKBUF_LOCK(&inp->sctp_socket->so_snd);
if (sowriteable(inp->sctp_socket)) {
sowwakeup_locked(inp->sctp_socket);
} else {
SOCKBUF_UNLOCK(&inp->sctp_socket->so_snd);
}
SCTP_INP_WLOCK(inp);
}
if (inp->sctp_flags & SCTP_PCB_FLAGS_WAKEINPUT) {
inp->sctp_flags &= ~SCTP_PCB_FLAGS_WAKEINPUT;
SCTP_INP_WUNLOCK(inp);
SOCKBUF_LOCK(&inp->sctp_socket->so_rcv);
if (soreadable(inp->sctp_socket)) {
sctp_defered_wakeup_cnt++;
sorwakeup_locked(inp->sctp_socket);
} else {
SOCKBUF_UNLOCK(&inp->sctp_socket->so_rcv);
}
SCTP_INP_WLOCK(inp);
}
SCTP_INP_WUNLOCK(inp);
}
if (stcb->asoc.state & SCTP_STATE_ABOUT_TO_BE_FREED) {
SCTP_TCB_LOCK(stcb);
sctp_free_assoc(inp, stcb, SCTP_NORMAL_PROC,
SCTP_FROM_SCTP_USRREQ + SCTP_LOC_19);
}
return (0);
}
#ifdef INET
int
sctp_ingetaddr(struct socket *so, struct sockaddr **addr)
{
struct sockaddr_in *sin;
uint32_t vrf_id;
struct sctp_inpcb *inp;
struct sctp_ifa *sctp_ifa;
/*
* Do the malloc first in case it blocks.
*/
SCTP_MALLOC_SONAME(sin, struct sockaddr_in *, sizeof *sin);
if (sin == NULL)
return (ENOMEM);
sin->sin_family = AF_INET;
sin->sin_len = sizeof(*sin);
inp = (struct sctp_inpcb *)so->so_pcb;
if (!inp) {
SCTP_FREE_SONAME(sin);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (ECONNRESET);
}
SCTP_INP_RLOCK(inp);
sin->sin_port = inp->sctp_lport;
if (inp->sctp_flags & SCTP_PCB_FLAGS_BOUNDALL) {
if (inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED) {
struct sctp_tcb *stcb;
struct sockaddr_in *sin_a;
struct sctp_nets *net;
int fnd;
stcb = LIST_FIRST(&inp->sctp_asoc_list);
if (stcb == NULL) {
goto notConn;
}
fnd = 0;
sin_a = NULL;
SCTP_TCB_LOCK(stcb);
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
sin_a = (struct sockaddr_in *)&net->ro._l_addr;
if (sin_a == NULL)
/* this will make coverity happy */
continue;
if (sin_a->sin_family == AF_INET) {
fnd = 1;
break;
}
}
if ((!fnd) || (sin_a == NULL)) {
/* punt */
SCTP_TCB_UNLOCK(stcb);
goto notConn;
}
vrf_id = inp->def_vrf_id;
sctp_ifa = sctp_source_address_selection(inp,
stcb,
(sctp_route_t *)&net->ro,
net, 0, vrf_id);
if (sctp_ifa) {
sin->sin_addr = sctp_ifa->address.sin.sin_addr;
sctp_free_ifa(sctp_ifa);
}
SCTP_TCB_UNLOCK(stcb);
} else {
/* For the bound-all case you get back 0 (INADDR_ANY) */
notConn:
sin->sin_addr.s_addr = 0;
}
} else {
/* Take the first IPv4 address in the list */
struct sctp_laddr *laddr;
int fnd = 0;
LIST_FOREACH(laddr, &inp->sctp_addr_list, sctp_nxt_addr) {
if (laddr->ifa->address.sa.sa_family == AF_INET) {
struct sockaddr_in *sin_a;
sin_a = &laddr->ifa->address.sin;
sin->sin_addr = sin_a->sin_addr;
fnd = 1;
break;
}
}
if (!fnd) {
SCTP_FREE_SONAME(sin);
SCTP_INP_RUNLOCK(inp);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
return (ENOENT);
}
}
SCTP_INP_RUNLOCK(inp);
(*addr) = (struct sockaddr *)sin;
return (0);
}
int
sctp_peeraddr(struct socket *so, struct sockaddr **addr)
{
struct sockaddr_in *sin;
int fnd;
struct sockaddr_in *sin_a;
struct sctp_inpcb *inp;
struct sctp_tcb *stcb;
struct sctp_nets *net;
/* Do the malloc first in case it blocks. */
SCTP_MALLOC_SONAME(sin, struct sockaddr_in *, sizeof *sin);
if (sin == NULL)
return (ENOMEM);
sin->sin_family = AF_INET;
sin->sin_len = sizeof(*sin);
inp = (struct sctp_inpcb *)so->so_pcb;
if ((inp == NULL) ||
((inp->sctp_flags & SCTP_PCB_FLAGS_CONNECTED) == 0)) {
/* UDP type and listeners will drop out here */
SCTP_FREE_SONAME(sin);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOTCONN);
return (ENOTCONN);
}
SCTP_INP_RLOCK(inp);
stcb = LIST_FIRST(&inp->sctp_asoc_list);
if (stcb) {
SCTP_TCB_LOCK(stcb);
}
SCTP_INP_RUNLOCK(inp);
if (stcb == NULL) {
SCTP_FREE_SONAME(sin);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, EINVAL);
return (ECONNRESET);
}
fnd = 0;
TAILQ_FOREACH(net, &stcb->asoc.nets, sctp_next) {
sin_a = (struct sockaddr_in *)&net->ro._l_addr;
if (sin_a->sin_family == AF_INET) {
fnd = 1;
sin->sin_port = stcb->rport;
sin->sin_addr = sin_a->sin_addr;
break;
}
}
SCTP_TCB_UNLOCK(stcb);
if (!fnd) {
/* No IPv4 address */
SCTP_FREE_SONAME(sin);
SCTP_LTRACE_ERR_RET(inp, NULL, NULL, SCTP_FROM_SCTP_USRREQ, ENOENT);
return (ENOENT);
}
(*addr) = (struct sockaddr *)sin;
return (0);
}
struct pr_usrreqs sctp_usrreqs = {
.pru_abort = sctp_abort,
.pru_accept = sctp_accept,
.pru_attach = sctp_attach,
.pru_bind = sctp_bind,
.pru_connect = sctp_connect,
.pru_control = in_control,
.pru_close = sctp_close,
.pru_detach = sctp_close,
.pru_sopoll = sopoll_generic,
.pru_flush = sctp_flush,
.pru_disconnect = sctp_disconnect,
.pru_listen = sctp_listen,
.pru_peeraddr = sctp_peeraddr,
.pru_send = sctp_sendm,
.pru_shutdown = sctp_shutdown,
.pru_sockaddr = sctp_ingetaddr,
.pru_sosend = sctp_sosend,
.pru_soreceive = sctp_soreceive
};
#endif
Index: projects/clang800-import/sys/powerpc/include/openpicvar.h
===================================================================
--- projects/clang800-import/sys/powerpc/include/openpicvar.h (revision 343955)
+++ projects/clang800-import/sys/powerpc/include/openpicvar.h (revision 343956)
@@ -1,90 +1,93 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (C) 2002 Benno Rice.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY Benno Rice ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* $FreeBSD$
*/
#ifndef _POWERPC_OPENPICVAR_H_
#define _POWERPC_OPENPICVAR_H_
#define OPENPIC_DEVSTR "OpenPIC Interrupt Controller"
#define OPENPIC_IRQMAX 256 /* h/w allows more */
+#define OPENPIC_QUIRK_SINGLE_BIND 1 /* Bind interrupts to only 1 CPU */
+
/* Names match the macros in openpicreg.h. */
struct openpic_timer {
uint32_t tcnt;
uint32_t tbase;
uint32_t tvec;
uint32_t tdst;
};
struct openpic_softc {
device_t sc_dev;
struct resource *sc_memr;
struct resource *sc_intr;
bus_space_tag_t sc_bt;
bus_space_handle_t sc_bh;
char *sc_version;
int sc_rid;
int sc_irq;
void *sc_icookie;
u_int sc_ncpu;
u_int sc_nirq;
int sc_psim;
+ u_int sc_quirks;
/* Saved states. */
uint32_t sc_saved_config;
uint32_t sc_saved_ipis[4];
uint32_t sc_saved_prios[4];
struct openpic_timer sc_saved_timers[OPENPIC_TIMERS];
uint32_t sc_saved_vectors[OPENPIC_SRC_VECTOR_COUNT];
};
extern devclass_t openpic_devclass;
/*
* Bus-independent attach i/f
*/
int openpic_common_attach(device_t, uint32_t);
/*
* PIC interface.
*/
void openpic_bind(device_t dev, u_int irq, cpuset_t cpumask, void **);
void openpic_config(device_t, u_int, enum intr_trigger, enum intr_polarity);
void openpic_dispatch(device_t, struct trapframe *);
void openpic_enable(device_t, u_int, u_int, void **);
void openpic_eoi(device_t, u_int, void *);
void openpic_ipi(device_t, u_int);
void openpic_mask(device_t, u_int, void *);
void openpic_unmask(device_t, u_int, void *);
int openpic_suspend(device_t dev);
int openpic_resume(device_t dev);
#endif /* _POWERPC_OPENPICVAR_H_ */
Index: projects/clang800-import/sys/powerpc/ofw/openpic_ofw.c
===================================================================
--- projects/clang800-import/sys/powerpc/ofw/openpic_ofw.c (revision 343955)
+++ projects/clang800-import/sys/powerpc/ofw/openpic_ofw.c (revision 343956)
@@ -1,173 +1,178 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2003 by Peter Grehan. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
* AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/module.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <dev/ofw/ofw_bus.h>
#include <dev/ofw/ofw_bus_subr.h>
#include <dev/ofw/openfirm.h>
#include <machine/bus.h>
#include <machine/intr_machdep.h>
#include <machine/md_var.h>
#include <machine/pio.h>
#include <machine/resource.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <sys/rman.h>
#include <machine/openpicreg.h>
#include <machine/openpicvar.h>
#include "pic_if.h"
/*
* OFW interface
*/
static int openpic_ofw_probe(device_t);
static int openpic_ofw_attach(device_t);
static void openpic_ofw_translate_code(device_t, u_int irq, int code,
enum intr_trigger *trig, enum intr_polarity *pol);
static device_method_t openpic_ofw_methods[] = {
/* Device interface */
DEVMETHOD(device_probe, openpic_ofw_probe),
DEVMETHOD(device_attach, openpic_ofw_attach),
DEVMETHOD(device_suspend, openpic_suspend),
DEVMETHOD(device_resume, openpic_resume),
/* PIC interface */
DEVMETHOD(pic_bind, openpic_bind),
DEVMETHOD(pic_config, openpic_config),
DEVMETHOD(pic_dispatch, openpic_dispatch),
DEVMETHOD(pic_enable, openpic_enable),
DEVMETHOD(pic_eoi, openpic_eoi),
DEVMETHOD(pic_ipi, openpic_ipi),
DEVMETHOD(pic_mask, openpic_mask),
DEVMETHOD(pic_unmask, openpic_unmask),
DEVMETHOD(pic_translate_code, openpic_ofw_translate_code),
DEVMETHOD_END
};
static driver_t openpic_ofw_driver = {
"openpic",
openpic_ofw_methods,
sizeof(struct openpic_softc),
};
EARLY_DRIVER_MODULE(openpic, ofwbus, openpic_ofw_driver, openpic_devclass,
0, 0, BUS_PASS_INTERRUPT);
EARLY_DRIVER_MODULE(openpic, simplebus, openpic_ofw_driver, openpic_devclass,
0, 0, BUS_PASS_INTERRUPT);
EARLY_DRIVER_MODULE(openpic, macio, openpic_ofw_driver, openpic_devclass, 0, 0,
BUS_PASS_INTERRUPT);
static int
openpic_ofw_probe(device_t dev)
{
const char *type = ofw_bus_get_type(dev);
if (type == NULL)
return (ENXIO);
if (!ofw_bus_is_compatible(dev, "chrp,open-pic") &&
strcmp(type, "open-pic") != 0)
return (ENXIO);
/*
* On some U4 systems, there is a phantom MPIC in the mac-io cell.
* The uninorth driver will pick up the real PIC, so ignore it here.
*/
if (OF_finddevice("/u4") != (phandle_t)-1)
return (ENXIO);
device_set_desc(dev, OPENPIC_DEVSTR);
return (0);
}
static int
openpic_ofw_attach(device_t dev)
{
+ struct openpic_softc *sc;
phandle_t xref, node;
node = ofw_bus_get_node(dev);
+ sc = device_get_softc(dev);
if (OF_getencprop(node, "phandle", &xref, sizeof(xref)) == -1 &&
OF_getencprop(node, "ibm,phandle", &xref, sizeof(xref)) == -1 &&
OF_getencprop(node, "linux,phandle", &xref, sizeof(xref)) == -1)
xref = node;
+
+ if (ofw_bus_is_compatible(dev, "fsl,mpic"))
+ sc->sc_quirks = OPENPIC_QUIRK_SINGLE_BIND;
return (openpic_common_attach(dev, xref));
}
static void
openpic_ofw_translate_code(device_t dev, u_int irq, int code,
enum intr_trigger *trig, enum intr_polarity *pol)
{
switch (code) {
case 0:
/* L to H edge */
*trig = INTR_TRIGGER_EDGE;
*pol = INTR_POLARITY_HIGH;
break;
case 1:
/* Active L level */
*trig = INTR_TRIGGER_LEVEL;
*pol = INTR_POLARITY_LOW;
break;
case 2:
/* Active H level */
*trig = INTR_TRIGGER_LEVEL;
*pol = INTR_POLARITY_HIGH;
break;
case 3:
/* H to L edge */
*trig = INTR_TRIGGER_EDGE;
*pol = INTR_POLARITY_LOW;
break;
default:
*trig = INTR_TRIGGER_CONFORM;
*pol = INTR_POLARITY_CONFORM;
}
}
Index: projects/clang800-import/sys/powerpc/powerpc/cpu.c
===================================================================
--- projects/clang800-import/sys/powerpc/powerpc/cpu.c (revision 343955)
+++ projects/clang800-import/sys/powerpc/powerpc/cpu.c (revision 343956)
@@ -1,829 +1,836 @@
/*-
* SPDX-License-Identifier: BSD-4-Clause AND BSD-2-Clause-FreeBSD
*
* Copyright (c) 2001 Matt Thomas.
* Copyright (c) 2001 Tsubai Masanari.
* Copyright (c) 1998, 1999, 2001 Internet Research Institute, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by
* Internet Research Institute, Inc.
* 4. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*-
* Copyright (C) 2003 Benno Rice.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY Benno Rice ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* from $NetBSD: cpu_subr.c,v 1.1 2003/02/03 17:10:09 matt Exp $
* $FreeBSD$
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/cpu.h>
#include <sys/kernel.h>
#include <sys/proc.h>
#include <sys/sysctl.h>
#include <sys/sched.h>
#include <sys/smp.h>
#include <machine/bus.h>
#include <machine/cpu.h>
#include <machine/hid.h>
#include <machine/md_var.h>
#include <machine/smp.h>
#include <machine/spr.h>
#include <dev/ofw/openfirm.h>
static void cpu_6xx_setup(int cpuid, uint16_t vers);
static void cpu_970_setup(int cpuid, uint16_t vers);
static void cpu_booke_setup(int cpuid, uint16_t vers);
static void cpu_powerx_setup(int cpuid, uint16_t vers);
int powerpc_pow_enabled;
void (*cpu_idle_hook)(sbintime_t) = NULL;
static void cpu_idle_60x(sbintime_t);
static void cpu_idle_booke(sbintime_t);
+#ifdef BOOKE_E500
+static void cpu_idle_e500mc(sbintime_t sbt);
+#endif
#if defined(__powerpc64__) && defined(AIM)
static void cpu_idle_powerx(sbintime_t);
static void cpu_idle_power9(sbintime_t);
#endif
struct cputab {
const char *name;
uint16_t version;
uint16_t revfmt;
int features; /* Do not include PPC_FEATURE_32 or
* PPC_FEATURE_HAS_MMU */
int features2;
void (*cpu_setup)(int cpuid, uint16_t vers);
};
#define REVFMT_MAJMIN 1 /* %u.%u */
#define REVFMT_HEX 2 /* 0x%04x */
#define REVFMT_DEC 3 /* %u */
static const struct cputab models[] = {
{ "Motorola PowerPC 601", MPC601, REVFMT_DEC,
PPC_FEATURE_HAS_FPU | PPC_FEATURE_UNIFIED_CACHE, 0, cpu_6xx_setup },
{ "Motorola PowerPC 602", MPC602, REVFMT_DEC,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 603", MPC603, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 603e", MPC603e, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 603ev", MPC603ev, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 604", MPC604, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 604ev", MPC604ev, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 620", MPC620, REVFMT_HEX,
PPC_FEATURE_64 | PPC_FEATURE_HAS_FPU, 0, NULL },
{ "Motorola PowerPC 750", MPC750, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "IBM PowerPC 750FX", IBM750FX, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "IBM PowerPC 970", IBM970, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU,
0, cpu_970_setup },
{ "IBM PowerPC 970FX", IBM970FX, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU,
0, cpu_970_setup },
{ "IBM PowerPC 970GX", IBM970GX, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU,
0, cpu_970_setup },
{ "IBM PowerPC 970MP", IBM970MP, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU,
0, cpu_970_setup },
{ "IBM POWER4", IBMPOWER4, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_FPU | PPC_FEATURE_POWER4, 0, NULL },
{ "IBM POWER4+", IBMPOWER4PLUS, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_FPU | PPC_FEATURE_POWER4, 0, NULL },
{ "IBM POWER5", IBMPOWER5, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_FPU | PPC_FEATURE_POWER4 |
PPC_FEATURE_SMT | PPC_FEATURE_ICACHE_SNOOP, 0, NULL },
{ "IBM POWER5+", IBMPOWER5PLUS, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_FPU | PPC_FEATURE_POWER5_PLUS |
PPC_FEATURE_SMT | PPC_FEATURE_ICACHE_SNOOP, 0, NULL },
{ "IBM POWER6", IBMPOWER6, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_SMT | PPC_FEATURE_ICACHE_SNOOP | PPC_FEATURE_ARCH_2_05 |
PPC_FEATURE_TRUE_LE, 0, NULL },
{ "IBM POWER7", IBMPOWER7, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_SMT | PPC_FEATURE_ARCH_2_05 | PPC_FEATURE_ARCH_2_06 |
PPC_FEATURE_HAS_VSX | PPC_FEATURE_TRUE_LE, PPC_FEATURE2_DSCR, NULL },
{ "IBM POWER7+", IBMPOWER7PLUS, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_SMT | PPC_FEATURE_ARCH_2_05 | PPC_FEATURE_ARCH_2_06 |
PPC_FEATURE_HAS_VSX, PPC_FEATURE2_DSCR, NULL },
{ "IBM POWER8E", IBMPOWER8E, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_SMT | PPC_FEATURE_ICACHE_SNOOP | PPC_FEATURE_ARCH_2_05 |
PPC_FEATURE_ARCH_2_06 | PPC_FEATURE_HAS_VSX | PPC_FEATURE_TRUE_LE,
PPC_FEATURE2_ARCH_2_07 | PPC_FEATURE2_HTM | PPC_FEATURE2_DSCR |
PPC_FEATURE2_ISEL | PPC_FEATURE2_TAR | PPC_FEATURE2_HAS_VEC_CRYPTO |
PPC_FEATURE2_HTM_NOSC, cpu_powerx_setup },
{ "IBM POWER8", IBMPOWER8, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_SMT | PPC_FEATURE_ICACHE_SNOOP | PPC_FEATURE_ARCH_2_05 |
PPC_FEATURE_ARCH_2_06 | PPC_FEATURE_HAS_VSX | PPC_FEATURE_TRUE_LE,
PPC_FEATURE2_ARCH_2_07 | PPC_FEATURE2_HTM | PPC_FEATURE2_DSCR |
PPC_FEATURE2_ISEL | PPC_FEATURE2_TAR | PPC_FEATURE2_HAS_VEC_CRYPTO |
PPC_FEATURE2_HTM_NOSC, cpu_powerx_setup },
{ "IBM POWER9", IBMPOWER9, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_SMT | PPC_FEATURE_ICACHE_SNOOP | PPC_FEATURE_ARCH_2_05 |
PPC_FEATURE_ARCH_2_06 | PPC_FEATURE_HAS_VSX | PPC_FEATURE_TRUE_LE,
PPC_FEATURE2_ARCH_2_07 | PPC_FEATURE2_HTM | PPC_FEATURE2_DSCR |
PPC_FEATURE2_ISEL | PPC_FEATURE2_TAR | PPC_FEATURE2_HAS_VEC_CRYPTO |
PPC_FEATURE2_ARCH_3_00 | PPC_FEATURE2_HAS_IEEE128 |
PPC_FEATURE2_DARN, cpu_powerx_setup },
{ "Motorola PowerPC 7400", MPC7400, REVFMT_MAJMIN,
PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 7410", MPC7410, REVFMT_MAJMIN,
PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 7450", MPC7450, REVFMT_MAJMIN,
PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 7455", MPC7455, REVFMT_MAJMIN,
PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 7457", MPC7457, REVFMT_MAJMIN,
PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 7447A", MPC7447A, REVFMT_MAJMIN,
PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 7448", MPC7448, REVFMT_MAJMIN,
PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 8240", MPC8240, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Motorola PowerPC 8245", MPC8245, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU, 0, cpu_6xx_setup },
{ "Freescale e500v1 core", FSL_E500v1, REVFMT_MAJMIN,
PPC_FEATURE_HAS_SPE | PPC_FEATURE_HAS_EFP_SINGLE | PPC_FEATURE_BOOKE,
PPC_FEATURE2_ISEL, cpu_booke_setup },
{ "Freescale e500v2 core", FSL_E500v2, REVFMT_MAJMIN,
PPC_FEATURE_HAS_SPE | PPC_FEATURE_BOOKE |
PPC_FEATURE_HAS_EFP_SINGLE | PPC_FEATURE_HAS_EFP_DOUBLE,
PPC_FEATURE2_ISEL, cpu_booke_setup },
{ "Freescale e500mc core", FSL_E500mc, REVFMT_MAJMIN,
PPC_FEATURE_HAS_FPU | PPC_FEATURE_BOOKE | PPC_FEATURE_ARCH_2_05 |
PPC_FEATURE_ARCH_2_06, PPC_FEATURE2_ISEL,
cpu_booke_setup },
{ "Freescale e5500 core", FSL_E5500, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_FPU | PPC_FEATURE_BOOKE |
PPC_FEATURE_ARCH_2_05 | PPC_FEATURE_ARCH_2_06,
PPC_FEATURE2_ISEL, cpu_booke_setup },
{ "Freescale e6500 core", FSL_E6500, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_BOOKE | PPC_FEATURE_ARCH_2_05 | PPC_FEATURE_ARCH_2_06,
PPC_FEATURE2_ISEL, cpu_booke_setup },
{ "IBM Cell Broadband Engine", IBMCELLBE, REVFMT_MAJMIN,
PPC_FEATURE_64 | PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_FPU |
PPC_FEATURE_CELL | PPC_FEATURE_SMT, 0, NULL},
{ "Unknown PowerPC CPU", 0, REVFMT_HEX, 0, 0, NULL },
};
static void cpu_6xx_print_cacheinfo(u_int, uint16_t);
static int cpu_feature_bit(SYSCTL_HANDLER_ARGS);
static char model[64];
SYSCTL_STRING(_hw, HW_MODEL, model, CTLFLAG_RD, model, 0, "");
static const struct cputab *cput;
u_long cpu_features = PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU;
u_long cpu_features2 = 0;
SYSCTL_OPAQUE(_hw, OID_AUTO, cpu_features, CTLFLAG_RD,
&cpu_features, sizeof(cpu_features), "LX", "PowerPC CPU features");
SYSCTL_OPAQUE(_hw, OID_AUTO, cpu_features2, CTLFLAG_RD,
&cpu_features2, sizeof(cpu_features2), "LX", "PowerPC CPU features 2");
#ifdef __powerpc64__
register_t lpcr = LPCR_LPES;
#endif
/* Provide some user-friendly aliases for bits in cpu_features */
SYSCTL_PROC(_hw, OID_AUTO, floatingpoint, CTLTYPE_INT | CTLFLAG_RD,
0, PPC_FEATURE_HAS_FPU, cpu_feature_bit, "I",
"Floating point instructions executed in hardware");
SYSCTL_PROC(_hw, OID_AUTO, altivec, CTLTYPE_INT | CTLFLAG_RD,
0, PPC_FEATURE_HAS_ALTIVEC, cpu_feature_bit, "I", "CPU supports Altivec");
/*
* Phase 1 (early) CPU setup. Setup the cpu_features/cpu_features2 variables,
* so they can be used during platform and MMU bringup.
*/
void
cpu_feature_setup()
{
u_int pvr;
uint16_t vers;
const struct cputab *cp;
pvr = mfpvr();
vers = pvr >> 16;
for (cp = models; cp->version != 0; cp++) {
if (cp->version == vers)
break;
}
cput = cp;
cpu_features |= cp->features;
cpu_features2 |= cp->features2;
}
void
cpu_setup(u_int cpuid)
{
uint64_t cps;
const char *name;
u_int maj, min, pvr;
uint16_t rev, revfmt, vers;
pvr = mfpvr();
vers = pvr >> 16;
rev = pvr;
switch (vers) {
case MPC7410:
min = (pvr >> 0) & 0xff;
maj = min <= 4 ? 1 : 2;
break;
case FSL_E500v1:
case FSL_E500v2:
case FSL_E500mc:
case FSL_E5500:
maj = (pvr >> 4) & 0xf;
min = (pvr >> 0) & 0xf;
break;
default:
maj = (pvr >> 8) & 0xf;
min = (pvr >> 0) & 0xf;
}
revfmt = cput->revfmt;
name = cput->name;
if (rev == MPC750 && pvr == 15) {
name = "Motorola MPC755";
revfmt = REVFMT_HEX;
}
strncpy(model, name, sizeof(model) - 1);
printf("cpu%d: %s revision ", cpuid, name);
switch (revfmt) {
case REVFMT_MAJMIN:
printf("%u.%u", maj, min);
break;
case REVFMT_HEX:
printf("0x%04x", rev);
break;
case REVFMT_DEC:
printf("%u", rev);
break;
}
if (cpu_est_clockrate(0, &cps) == 0)
printf(", %jd.%02jd MHz", cps / 1000000, (cps / 10000) % 100);
printf("\n");
printf("cpu%d: Features %b\n", cpuid, (int)cpu_features,
PPC_FEATURE_BITMASK);
if (cpu_features2 != 0)
printf("cpu%d: Features2 %b\n", cpuid, (int)cpu_features2,
PPC_FEATURE2_BITMASK);
/*
* Configure CPU
*/
if (cput->cpu_setup != NULL)
cput->cpu_setup(cpuid, vers);
}
/* Get current clock frequency for the given cpu id. */
int
cpu_est_clockrate(int cpu_id, uint64_t *cps)
{
uint16_t vers;
register_t msr;
phandle_t cpu, dev, root;
int res = 0;
char buf[8];
vers = mfpvr() >> 16;
msr = mfmsr();
mtmsr(msr & ~PSL_EE);
switch (vers) {
case MPC7450:
case MPC7455:
case MPC7457:
case MPC750:
case IBM750FX:
case MPC7400:
case MPC7410:
case MPC7447A:
case MPC7448:
mtspr(SPR_MMCR0, SPR_MMCR0_FC);
mtspr(SPR_PMC1, 0);
mtspr(SPR_MMCR0, SPR_MMCR0_PMC1SEL(PMCN_CYCLES));
DELAY(1000);
*cps = (mfspr(SPR_PMC1) * 1000) + 4999;
mtspr(SPR_MMCR0, SPR_MMCR0_FC);
mtmsr(msr);
return (0);
case IBM970:
case IBM970FX:
case IBM970MP:
isync();
mtspr(SPR_970MMCR0, SPR_MMCR0_FC);
isync();
mtspr(SPR_970MMCR1, 0);
mtspr(SPR_970MMCRA, 0);
mtspr(SPR_970PMC1, 0);
mtspr(SPR_970MMCR0,
SPR_970MMCR0_PMC1SEL(PMC970N_CYCLES));
isync();
DELAY(1000);
powerpc_sync();
mtspr(SPR_970MMCR0, SPR_MMCR0_FC);
*cps = (mfspr(SPR_970PMC1) * 1000) + 4999;
mtmsr(msr);
return (0);
default:
root = OF_peer(0);
if (root == 0)
return (ENXIO);
dev = OF_child(root);
while (dev != 0) {
res = OF_getprop(dev, "name", buf, sizeof(buf));
if (res > 0 && strcmp(buf, "cpus") == 0)
break;
dev = OF_peer(dev);
}
cpu = OF_child(dev);
while (cpu != 0) {
res = OF_getprop(cpu, "device_type", buf,
sizeof(buf));
if (res > 0 && strcmp(buf, "cpu") == 0)
break;
cpu = OF_peer(cpu);
}
if (cpu == 0)
return (ENOENT);
if (OF_getprop(cpu, "ibm,extended-clock-frequency",
cps, sizeof(*cps)) >= 0) {
return (0);
} else if (OF_getprop(cpu, "clock-frequency", cps,
sizeof(cell_t)) >= 0) {
*cps >>= 32;
return (0);
} else {
return (ENOENT);
}
}
}
void
cpu_6xx_setup(int cpuid, uint16_t vers)
{
register_t hid0, pvr;
const char *bitmask;
hid0 = mfspr(SPR_HID0);
pvr = mfpvr();
/*
* Configure power-saving mode.
*/
switch (vers) {
case MPC603:
case MPC603e:
case MPC603ev:
case MPC604ev:
case MPC750:
case IBM750FX:
case MPC7400:
case MPC7410:
case MPC8240:
case MPC8245:
/* Select DOZE mode. */
hid0 &= ~(HID0_DOZE | HID0_NAP | HID0_SLEEP);
hid0 |= HID0_DOZE | HID0_DPM;
powerpc_pow_enabled = 1;
break;
case MPC7448:
case MPC7447A:
case MPC7457:
case MPC7455:
case MPC7450:
/* Enable the 7450 branch caches */
hid0 |= HID0_SGE | HID0_BTIC;
hid0 |= HID0_LRSTK | HID0_FOLD | HID0_BHT;
/* Disable BTIC on 7450 Rev 2.0 or earlier and on 7457 */
if (((pvr >> 16) == MPC7450 && (pvr & 0xFFFF) <= 0x0200)
|| (pvr >> 16) == MPC7457)
hid0 &= ~HID0_BTIC;
/* Select NAP mode. */
hid0 &= ~(HID0_DOZE | HID0_NAP | HID0_SLEEP);
hid0 |= HID0_NAP | HID0_DPM;
powerpc_pow_enabled = 1;
break;
default:
/* No power-saving mode is available. */ ;
}
switch (vers) {
case IBM750FX:
case MPC750:
hid0 &= ~HID0_DBP; /* XXX correct? */
hid0 |= HID0_EMCP | HID0_BTIC | HID0_SGE | HID0_BHT;
break;
case MPC7400:
case MPC7410:
hid0 &= ~HID0_SPD;
hid0 |= HID0_EMCP | HID0_BTIC | HID0_SGE | HID0_BHT;
hid0 |= HID0_EIEC;
break;
}
mtspr(SPR_HID0, hid0);
if (bootverbose)
cpu_6xx_print_cacheinfo(cpuid, vers);
switch (vers) {
case MPC7447A:
case MPC7448:
case MPC7450:
case MPC7455:
case MPC7457:
bitmask = HID0_7450_BITMASK;
break;
default:
bitmask = HID0_BITMASK;
break;
}
printf("cpu%d: HID0 %b\n", cpuid, (int)hid0, bitmask);
if (cpu_idle_hook == NULL)
cpu_idle_hook = cpu_idle_60x;
}
static void
cpu_6xx_print_cacheinfo(u_int cpuid, uint16_t vers)
{
register_t hid;
hid = mfspr(SPR_HID0);
printf("cpu%u: ", cpuid);
printf("L1 I-cache %sabled, ", (hid & HID0_ICE) ? "en" : "dis");
printf("L1 D-cache %sabled\n", (hid & HID0_DCE) ? "en" : "dis");
printf("cpu%u: ", cpuid);
if (mfspr(SPR_L2CR) & L2CR_L2E) {
switch (vers) {
case MPC7450:
case MPC7455:
case MPC7457:
printf("256KB L2 cache, ");
if (mfspr(SPR_L3CR) & L3CR_L3E)
printf("%cMB L3 backside cache",
mfspr(SPR_L3CR) & L3CR_L3SIZ ? '2' : '1');
else
printf("L3 cache disabled");
printf("\n");
break;
case IBM750FX:
printf("512KB L2 cache\n");
break;
default:
switch (mfspr(SPR_L2CR) & L2CR_L2SIZ) {
case L2SIZ_256K:
printf("256KB ");
break;
case L2SIZ_512K:
printf("512KB ");
break;
case L2SIZ_1M:
printf("1MB ");
break;
}
printf("write-%s", (mfspr(SPR_L2CR) & L2CR_L2WT)
? "through" : "back");
if (mfspr(SPR_L2CR) & L2CR_L2PE)
printf(", with parity");
printf(" backside cache\n");
break;
}
} else
printf("L2 cache disabled\n");
}
static void
cpu_booke_setup(int cpuid, uint16_t vers)
{
#ifdef BOOKE_E500
register_t hid0;
const char *bitmask;
hid0 = mfspr(SPR_HID0);
switch (vers) {
case FSL_E500mc:
bitmask = HID0_E500MC_BITMASK;
+ cpu_idle_hook = cpu_idle_e500mc;
break;
case FSL_E5500:
case FSL_E6500:
bitmask = HID0_E5500_BITMASK;
+ cpu_idle_hook = cpu_idle_e500mc;
break;
case FSL_E500v1:
case FSL_E500v2:
/* Only e500v1/v2 support HID0 power management setup. */
/* Program power-management mode. */
hid0 &= ~(HID0_DOZE | HID0_NAP | HID0_SLEEP);
hid0 |= HID0_DOZE;
mtspr(SPR_HID0, hid0);
/* FALLTHROUGH */
default:
bitmask = HID0_E500_BITMASK;
break;
}
printf("cpu%d: HID0 %b\n", cpuid, (int)hid0, bitmask);
#endif
if (cpu_idle_hook == NULL)
cpu_idle_hook = cpu_idle_booke;
}
static void
cpu_970_setup(int cpuid, uint16_t vers)
{
#ifdef AIM
uint32_t hid0_hi, hid0_lo;
__asm __volatile ("mfspr %0,%2; clrldi %1,%0,32; srdi %0,%0,32;"
: "=r" (hid0_hi), "=r" (hid0_lo) : "K" (SPR_HID0));
/* Configure power-saving mode */
switch (vers) {
case IBM970MP:
hid0_hi |= (HID0_DEEPNAP | HID0_NAP | HID0_DPM);
hid0_hi &= ~HID0_DOZE;
break;
default:
hid0_hi |= (HID0_NAP | HID0_DPM);
hid0_hi &= ~(HID0_DOZE | HID0_DEEPNAP);
break;
}
powerpc_pow_enabled = 1;
__asm __volatile (" \
sync; isync; \
sldi %0,%0,32; or %0,%0,%1; \
mtspr %2, %0; \
mfspr %0, %2; mfspr %0, %2; mfspr %0, %2; \
mfspr %0, %2; mfspr %0, %2; mfspr %0, %2; \
sync; isync"
:: "r" (hid0_hi), "r"(hid0_lo), "K" (SPR_HID0));
__asm __volatile ("mfspr %0,%1; srdi %0,%0,32;"
: "=r" (hid0_hi) : "K" (SPR_HID0));
printf("cpu%d: HID0 %b\n", cpuid, (int)(hid0_hi), HID0_970_BITMASK);
#endif
cpu_idle_hook = cpu_idle_60x;
}
static void
cpu_powerx_setup(int cpuid, uint16_t vers)
{
#if defined(__powerpc64__) && defined(AIM)
if ((mfmsr() & PSL_HV) == 0)
return;
/* Configure power-saving */
switch (vers) {
case IBMPOWER8:
case IBMPOWER8E:
cpu_idle_hook = cpu_idle_powerx;
mtspr(SPR_LPCR, mfspr(SPR_LPCR) | LPCR_PECE_WAKESET);
isync();
break;
case IBMPOWER9:
cpu_idle_hook = cpu_idle_power9;
mtspr(SPR_LPCR, mfspr(SPR_LPCR) | LPCR_PECE_WAKESET);
isync();
break;
default:
return;
}
#endif
}
static int
cpu_feature_bit(SYSCTL_HANDLER_ARGS)
{
int result;
result = (cpu_features & arg2) ? 1 : 0;
return (sysctl_handle_int(oidp, &result, 0, req));
}
void
cpu_idle(int busy)
{
sbintime_t sbt = -1;
#ifdef INVARIANTS
if ((mfmsr() & PSL_EE) != PSL_EE) {
struct thread *td = curthread;
printf("td msr %#lx\n", (u_long)td->td_md.md_saved_msr);
panic("ints disabled in idleproc!");
}
#endif
CTR2(KTR_SPARE2, "cpu_idle(%d) at %d",
busy, curcpu);
if (cpu_idle_hook != NULL) {
if (!busy) {
critical_enter();
sbt = cpu_idleclock();
}
cpu_idle_hook(sbt);
if (!busy) {
cpu_activeclock();
critical_exit();
}
}
CTR2(KTR_SPARE2, "cpu_idle(%d) at %d done",
busy, curcpu);
}
static void
cpu_idle_60x(sbintime_t sbt)
{
register_t msr;
uint16_t vers;
if (!powerpc_pow_enabled)
return;
msr = mfmsr();
vers = mfpvr() >> 16;
#ifdef AIM
switch (vers) {
case IBM970:
case IBM970FX:
case IBM970MP:
case MPC7447A:
case MPC7448:
case MPC7450:
case MPC7455:
case MPC7457:
__asm __volatile("\
dssall; sync; mtmsr %0; isync"
:: "r"(msr | PSL_POW));
break;
default:
powerpc_sync();
mtmsr(msr | PSL_POW);
break;
}
#endif
}
+#ifdef BOOKE_E500
static void
+cpu_idle_e500mc(sbintime_t sbt)
+{
+ /*
+ * Base binutils doesn't know what the 'wait' instruction is, so
+ * use the opcode encoding here.
+ */
+ __asm __volatile(".long 0x7c00007c");
+}
+#endif
+
+static void
cpu_idle_booke(sbintime_t sbt)
{
register_t msr;
- uint16_t vers;
msr = mfmsr();
- vers = mfpvr() >> 16;
-#ifdef BOOKE
- switch (vers) {
- case FSL_E500mc:
- case FSL_E5500:
- case FSL_E6500:
- break;
- default:
- powerpc_sync();
- mtmsr(msr | PSL_WE);
- break;
- }
+#ifdef BOOKE_E500
+ powerpc_sync();
+ mtmsr(msr | PSL_WE);
#endif
}
#if defined(__powerpc64__) && defined(AIM)
static void
cpu_idle_powerx(sbintime_t sbt)
{
/* Sleeping when running on one cpu gives no advantages - avoid it */
if (smp_started == 0)
return;
spinlock_enter();
if (sched_runnable()) {
spinlock_exit();
return;
}
if (can_wakeup == 0)
can_wakeup = 1;
mb();
enter_idle_powerx();
spinlock_exit();
}
static void
cpu_idle_power9(sbintime_t sbt)
{
register_t msr;
msr = mfmsr();
/* Suspend external interrupts until stop instruction completes. */
mtmsr(msr & ~PSL_EE);
/* Set the stop state to lowest latency, wake up to next instruction */
mtspr(SPR_PSSCR, 0);
/* "stop" instruction (PowerISA 3.0) */
__asm __volatile (".long 0x4c0002e4");
/*
* Re-enable external interrupts to capture the interrupt that caused
* the wake up.
*/
mtmsr(msr);
}
#endif
int
cpu_idle_wakeup(int cpu)
{
return (0);
}
Index: projects/clang800-import/sys/powerpc/powerpc/mem.c
===================================================================
--- projects/clang800-import/sys/powerpc/powerpc/mem.c (revision 343955)
+++ projects/clang800-import/sys/powerpc/powerpc/mem.c (revision 343956)
@@ -1,319 +1,321 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1988 University of Utah.
* Copyright (c) 1982, 1986, 1990 The Regents of the University of California.
* All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* the Systems Programming Group of the University of Utah Computer
* Science Department, and code derived from software contributed to
* Berkeley by William Jolitz.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: Utah $Hdr: mem.c 1.13 89/10/08$
* from: @(#)mem.c 7.2 (Berkeley) 5/9/91
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/*
* Memory special file
*/
#include <sys/param.h>
#include <sys/conf.h>
#include <sys/fcntl.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/ioccom.h>
#include <sys/malloc.h>
#include <sys/memrange.h>
#include <sys/module.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/msgbuf.h>
#include <sys/systm.h>
#include <sys/signalvar.h>
#include <sys/uio.h>
#include <machine/md_var.h>
#include <machine/vmparam.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/vm_extern.h>
#include <vm/vm_page.h>
#include <machine/memdev.h>
static void ppc_mrinit(struct mem_range_softc *);
static int ppc_mrset(struct mem_range_softc *, struct mem_range_desc *, int *);
MALLOC_DEFINE(M_MEMDESC, "memdesc", "memory range descriptors");
static struct mem_range_ops ppc_mem_range_ops = {
ppc_mrinit,
ppc_mrset,
NULL,
NULL
};
struct mem_range_softc mem_range_softc = {
&ppc_mem_range_ops,
0, 0, NULL
};
/* ARGSUSED */
int
memrw(struct cdev *dev, struct uio *uio, int flags)
{
struct iovec *iov;
int error = 0;
vm_offset_t va, eva, off, v;
vm_prot_t prot;
struct vm_page m;
vm_page_t marr;
vm_size_t cnt;
cnt = 0;
error = 0;
while (uio->uio_resid > 0 && !error) {
iov = uio->uio_iov;
if (iov->iov_len == 0) {
uio->uio_iov++;
uio->uio_iovcnt--;
if (uio->uio_iovcnt < 0)
panic("memrw");
continue;
}
if (dev2unit(dev) == CDEV_MINOR_MEM) {
-kmem_direct_mapped: v = uio->uio_offset;
+ v = uio->uio_offset;
- off = uio->uio_offset & PAGE_MASK;
+kmem_direct_mapped: off = v & PAGE_MASK;
cnt = PAGE_SIZE - ((vm_offset_t)iov->iov_base &
PAGE_MASK);
cnt = min(cnt, PAGE_SIZE - off);
cnt = min(cnt, iov->iov_len);
if (mem_valid(v, cnt)) {
error = EFAULT;
break;
}
if (hw_direct_map && !pmap_dev_direct_mapped(v, cnt)) {
error = uiomove((void *)PHYS_TO_DMAP(v), cnt,
uio);
} else {
m.phys_addr = trunc_page(v);
marr = &m;
error = uiomove_fromphys(&marr, off, cnt, uio);
}
}
else if (dev2unit(dev) == CDEV_MINOR_KMEM) {
va = uio->uio_offset;
- if ((va < VM_MIN_KERNEL_ADDRESS) || (va > virtual_end))
+ if ((va < VM_MIN_KERNEL_ADDRESS) || (va > virtual_end)) {
+ v = DMAP_TO_PHYS(va);
goto kmem_direct_mapped;
+ }
va = trunc_page(uio->uio_offset);
eva = round_page(uio->uio_offset
+ iov->iov_len);
/*
* Make sure that all the pages are currently resident
* so that we don't create any zero-fill pages.
*/
for (; va < eva; va += PAGE_SIZE)
if (pmap_extract(kernel_pmap, va) == 0)
return (EFAULT);
prot = (uio->uio_rw == UIO_READ)
? VM_PROT_READ : VM_PROT_WRITE;
va = uio->uio_offset;
if (kernacc((void *) va, iov->iov_len, prot)
== FALSE)
return (EFAULT);
error = uiomove((void *)va, iov->iov_len, uio);
continue;
}
}
return (error);
}
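The per-iteration transfer size at the top of the `/dev/mem` path in memrw() above clamps each copy so it never crosses a page boundary on either the user-buffer side or the physical side, nor exceeds the iovec length. A stand-alone sketch of that computation (assuming a 4 KB page; not the kernel code itself):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE	4096u
#define PAGE_MASK	(PAGE_SIZE - 1)

/*
 * Sketch of the chunk-size computation in memrw(): the copy length is
 * the minimum of (a) bytes left in the user buffer's current page,
 * (b) bytes left in the target physical page, and (c) the iovec length.
 */
static size_t
chunk_size(uintptr_t user_base, uintptr_t phys_off, size_t iov_len)
{
	size_t cnt;

	cnt = PAGE_SIZE - (user_base & PAGE_MASK);
	if (cnt > PAGE_SIZE - (phys_off & PAGE_MASK))
		cnt = PAGE_SIZE - (phys_off & PAGE_MASK);
	if (cnt > iov_len)
		cnt = iov_len;
	return (cnt);
}
```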
/*
* allow user processes to MMAP some memory sections
* instead of going through read/write
*/
int
memmmap(struct cdev *dev, vm_ooffset_t offset, vm_paddr_t *paddr,
int prot, vm_memattr_t *memattr)
{
int i;
if (dev2unit(dev) == CDEV_MINOR_MEM)
*paddr = offset;
else
return (EFAULT);
for (i = 0; i < mem_range_softc.mr_ndesc; i++) {
if (!(mem_range_softc.mr_desc[i].mr_flags & MDF_ACTIVE))
continue;
if (offset >= mem_range_softc.mr_desc[i].mr_base &&
offset < mem_range_softc.mr_desc[i].mr_base +
mem_range_softc.mr_desc[i].mr_len) {
switch (mem_range_softc.mr_desc[i].mr_flags &
MDF_ATTRMASK) {
case MDF_WRITEBACK:
*memattr = VM_MEMATTR_WRITE_BACK;
break;
case MDF_WRITECOMBINE:
*memattr = VM_MEMATTR_WRITE_COMBINING;
break;
case MDF_UNCACHEABLE:
*memattr = VM_MEMATTR_UNCACHEABLE;
break;
case MDF_WRITETHROUGH:
*memattr = VM_MEMATTR_WRITE_THROUGH;
break;
}
break;
}
}
return (0);
}
static void
ppc_mrinit(struct mem_range_softc *sc)
{
sc->mr_cap = 0;
sc->mr_ndesc = 8; /* XXX: Should be dynamically expandable */
sc->mr_desc = malloc(sc->mr_ndesc * sizeof(struct mem_range_desc),
M_MEMDESC, M_WAITOK | M_ZERO);
}
static int
ppc_mrset(struct mem_range_softc *sc, struct mem_range_desc *desc, int *arg)
{
int i;
switch(*arg) {
case MEMRANGE_SET_UPDATE:
for (i = 0; i < sc->mr_ndesc; i++) {
if (!sc->mr_desc[i].mr_len) {
sc->mr_desc[i] = *desc;
sc->mr_desc[i].mr_flags |= MDF_ACTIVE;
return (0);
}
if (sc->mr_desc[i].mr_base == desc->mr_base &&
sc->mr_desc[i].mr_len == desc->mr_len)
return (EEXIST);
}
return (ENOSPC);
case MEMRANGE_SET_REMOVE:
for (i = 0; i < sc->mr_ndesc; i++)
if (sc->mr_desc[i].mr_base == desc->mr_base &&
sc->mr_desc[i].mr_len == desc->mr_len) {
bzero(&sc->mr_desc[i], sizeof(sc->mr_desc[i]));
return (0);
}
return (ENOENT);
default:
return (EOPNOTSUPP);
}
return (0);
}
/*
* Operations for changing memory attributes.
*
* This is basically just an ioctl shim for mem_range_attr_get
* and mem_range_attr_set.
*/
/* ARGSUSED */
int
memioctl(struct cdev *dev __unused, u_long cmd, caddr_t data, int flags,
struct thread *td)
{
int nd, error = 0;
struct mem_range_op *mo = (struct mem_range_op *)data;
struct mem_range_desc *md;
/* is this for us? */
if ((cmd != MEMRANGE_GET) &&
(cmd != MEMRANGE_SET))
return (ENOTTY);
/* any chance we can handle this? */
if (mem_range_softc.mr_op == NULL)
return (EOPNOTSUPP);
/* do we have any descriptors? */
if (mem_range_softc.mr_ndesc == 0)
return (ENXIO);
switch (cmd) {
case MEMRANGE_GET:
nd = imin(mo->mo_arg[0], mem_range_softc.mr_ndesc);
if (nd > 0) {
md = (struct mem_range_desc *)
malloc(nd * sizeof(struct mem_range_desc),
M_MEMDESC, M_WAITOK);
error = mem_range_attr_get(md, &nd);
if (!error)
error = copyout(md, mo->mo_desc,
nd * sizeof(struct mem_range_desc));
free(md, M_MEMDESC);
}
else
nd = mem_range_softc.mr_ndesc;
mo->mo_arg[0] = nd;
break;
case MEMRANGE_SET:
md = (struct mem_range_desc *)malloc(sizeof(struct mem_range_desc),
M_MEMDESC, M_WAITOK);
error = copyin(mo->mo_desc, md, sizeof(struct mem_range_desc));
/* clamp description string */
md->mr_owner[sizeof(md->mr_owner) - 1] = 0;
if (error == 0)
error = mem_range_attr_set(md, &mo->mo_arg[0]);
free(md, M_MEMDESC);
break;
}
return (error);
}
Index: projects/clang800-import/sys/powerpc/powerpc/openpic.c
===================================================================
--- projects/clang800-import/sys/powerpc/powerpc/openpic.c (revision 343955)
+++ projects/clang800-import/sys/powerpc/powerpc/openpic.c (revision 343956)
@@ -1,445 +1,463 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (C) 2002 Benno Rice.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY Benno Rice ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* $FreeBSD$
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/kernel.h>
#include <sys/proc.h>
#include <sys/rman.h>
#include <sys/sched.h>
+#include <sys/smp.h>
#include <machine/bus.h>
#include <machine/intr_machdep.h>
#include <machine/md_var.h>
#include <machine/pio.h>
#include <machine/resource.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <machine/openpicreg.h>
#include <machine/openpicvar.h>
#include "pic_if.h"
devclass_t openpic_devclass;
/*
* Local routines
*/
static int openpic_intr(void *arg);
static __inline uint32_t
openpic_read(struct openpic_softc *sc, u_int reg)
{
return (bus_space_read_4(sc->sc_bt, sc->sc_bh, reg));
}
static __inline void
openpic_write(struct openpic_softc *sc, u_int reg, uint32_t val)
{
bus_space_write_4(sc->sc_bt, sc->sc_bh, reg, val);
}
static __inline void
openpic_set_priority(struct openpic_softc *sc, int pri)
{
u_int tpr;
uint32_t x;
sched_pin();
tpr = OPENPIC_PCPU_TPR((sc->sc_dev == root_pic) ? PCPU_GET(cpuid) : 0);
x = openpic_read(sc, tpr);
x &= ~OPENPIC_TPR_MASK;
x |= pri;
openpic_write(sc, tpr, x);
sched_unpin();
}
int
openpic_common_attach(device_t dev, uint32_t node)
{
struct openpic_softc *sc;
u_int cpu, ipi, irq;
u_int32_t x;
sc = device_get_softc(dev);
sc->sc_dev = dev;
sc->sc_rid = 0;
sc->sc_memr = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &sc->sc_rid,
RF_ACTIVE);
if (sc->sc_memr == NULL) {
device_printf(dev, "Could not alloc mem resource!\n");
return (ENXIO);
}
sc->sc_bt = rman_get_bustag(sc->sc_memr);
sc->sc_bh = rman_get_bushandle(sc->sc_memr);
/* Reset the PIC */
x = openpic_read(sc, OPENPIC_CONFIG);
x |= OPENPIC_CONFIG_RESET;
openpic_write(sc, OPENPIC_CONFIG, x);
while (openpic_read(sc, OPENPIC_CONFIG) & OPENPIC_CONFIG_RESET) {
powerpc_sync();
DELAY(100);
}
/* Check if this is a cascaded PIC */
sc->sc_irq = 0;
sc->sc_intr = NULL;
do {
struct resource_list *rl;
rl = BUS_GET_RESOURCE_LIST(device_get_parent(dev), dev);
if (rl == NULL)
break;
if (resource_list_find(rl, SYS_RES_IRQ, 0) == NULL)
break;
sc->sc_intr = bus_alloc_resource_any(dev, SYS_RES_IRQ,
&sc->sc_irq, RF_ACTIVE);
/* XXX Cascaded PICs pass NULL trapframes! */
bus_setup_intr(dev, sc->sc_intr, INTR_TYPE_MISC | INTR_MPSAFE,
openpic_intr, NULL, dev, &sc->sc_icookie);
} while (0);
/* Reset the PIC */
x = openpic_read(sc, OPENPIC_CONFIG);
x |= OPENPIC_CONFIG_RESET;
openpic_write(sc, OPENPIC_CONFIG, x);
while (openpic_read(sc, OPENPIC_CONFIG) & OPENPIC_CONFIG_RESET) {
powerpc_sync();
DELAY(100);
}
x = openpic_read(sc, OPENPIC_FEATURE);
switch (x & OPENPIC_FEATURE_VERSION_MASK) {
case 1:
sc->sc_version = "1.0";
break;
case 2:
sc->sc_version = "1.2";
break;
case 3:
sc->sc_version = "1.3";
break;
default:
sc->sc_version = "unknown";
break;
}
sc->sc_ncpu = ((x & OPENPIC_FEATURE_LAST_CPU_MASK) >>
OPENPIC_FEATURE_LAST_CPU_SHIFT) + 1;
sc->sc_nirq = ((x & OPENPIC_FEATURE_LAST_IRQ_MASK) >>
OPENPIC_FEATURE_LAST_IRQ_SHIFT) + 1;
/*
* PSIM seems to report 1 too many IRQs and CPUs
*/
if (sc->sc_psim) {
sc->sc_nirq--;
sc->sc_ncpu--;
}
if (bootverbose)
device_printf(dev,
"Version %s, supports %d CPUs and %d irqs\n",
sc->sc_version, sc->sc_ncpu, sc->sc_nirq);
for (cpu = 0; cpu < sc->sc_ncpu; cpu++)
openpic_write(sc, OPENPIC_PCPU_TPR(cpu), 15);
/* Reset and disable all interrupts. */
for (irq = 0; irq < sc->sc_nirq; irq++) {
x = irq; /* irq == vector. */
x |= OPENPIC_IMASK;
x |= OPENPIC_POLARITY_NEGATIVE;
x |= OPENPIC_SENSE_LEVEL;
x |= 8 << OPENPIC_PRIORITY_SHIFT;
openpic_write(sc, OPENPIC_SRC_VECTOR(irq), x);
}
/* Reset and disable all IPIs. */
for (ipi = 0; ipi < 4; ipi++) {
x = sc->sc_nirq + ipi;
x |= OPENPIC_IMASK;
x |= 15 << OPENPIC_PRIORITY_SHIFT;
openpic_write(sc, OPENPIC_IPI_VECTOR(ipi), x);
}
/* we don't need 8259 passthrough mode */
x = openpic_read(sc, OPENPIC_CONFIG);
x |= OPENPIC_CONFIG_8259_PASSTHRU_DISABLE;
openpic_write(sc, OPENPIC_CONFIG, x);
/* send all interrupts to cpu 0 */
for (irq = 0; irq < sc->sc_nirq; irq++)
openpic_write(sc, OPENPIC_IDEST(irq), 1 << 0);
/* clear all pending interrupts from cpu 0 */
for (irq = 0; irq < sc->sc_nirq; irq++) {
(void)openpic_read(sc, OPENPIC_PCPU_IACK(0));
openpic_write(sc, OPENPIC_PCPU_EOI(0), 0);
}
for (cpu = 0; cpu < sc->sc_ncpu; cpu++)
openpic_write(sc, OPENPIC_PCPU_TPR(cpu), 0);
powerpc_register_pic(dev, node, sc->sc_nirq, 4, FALSE);
/* If this is not a cascaded PIC, it must be the root PIC */
if (sc->sc_intr == NULL)
root_pic = dev;
return (0);
}
/*
* PIC I/F methods
*/
void
openpic_bind(device_t dev, u_int irq, cpuset_t cpumask, void **priv __unused)
{
struct openpic_softc *sc;
+ uint32_t mask;
/* If we aren't directly connected to the CPU, this won't work */
if (dev != root_pic)
return;
sc = device_get_softc(dev);
/*
* XXX: openpic_write() is very special and just needs a 32-bit mask.
* For the moment, just play dirty and use the first word of the cpuset.
*/
- openpic_write(sc, OPENPIC_IDEST(irq), cpumask.__bits[0] & 0xffffffff);
+ mask = cpumask.__bits[0] & 0xffffffff;
+ if (sc->sc_quirks & OPENPIC_QUIRK_SINGLE_BIND) {
+ int i = mftb() % CPU_COUNT(&cpumask);
+ int cpu, ncpu;
+
+ ncpu = 0;
+ CPU_FOREACH(cpu) {
+ if (!(mask & (1 << cpu)))
+ continue;
+ if (ncpu == i)
+ break;
+ ncpu++;
+ }
+ mask &= (1 << cpu);
+ }
+
+ openpic_write(sc, OPENPIC_IDEST(irq), mask);
}
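The OPENPIC_QUIRK_SINGLE_BIND branch added above reduces the CPU mask to exactly one CPU by selecting the i-th set bit, where i is a pseudo-random index (`mftb() % CPU_COUNT`). A small sketch of that selection logic in isolation (illustrative only; `i` stands in for the timebase-derived index):

```c
#include <stdint.h>

/*
 * Reduce a CPU mask to a single CPU by keeping only the i-th set bit
 * (counting set bits from the least significant end).
 */
static uint32_t
pick_single_cpu(uint32_t mask, int i)
{
	int cpu;

	for (cpu = 0; cpu < 32; cpu++) {
		if (!(mask & (1U << cpu)))
			continue;
		if (i-- == 0)
			break;
	}
	if (cpu == 32)
		return (0);	/* i out of range for this mask */
	return (mask & (1U << cpu));
}
```

Because i is always taken modulo the number of set bits in the driver, the out-of-range case cannot occur there; the guard here just keeps the sketch well-defined.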
void
openpic_config(device_t dev, u_int irq, enum intr_trigger trig,
enum intr_polarity pol)
{
struct openpic_softc *sc;
uint32_t x;
sc = device_get_softc(dev);
x = openpic_read(sc, OPENPIC_SRC_VECTOR(irq));
if (pol == INTR_POLARITY_LOW)
x &= ~OPENPIC_POLARITY_POSITIVE;
else
x |= OPENPIC_POLARITY_POSITIVE;
if (trig == INTR_TRIGGER_EDGE)
x &= ~OPENPIC_SENSE_LEVEL;
else
x |= OPENPIC_SENSE_LEVEL;
openpic_write(sc, OPENPIC_SRC_VECTOR(irq), x);
}
static int
openpic_intr(void *arg)
{
device_t dev = (device_t)(arg);
/* XXX Cascaded PICs do not pass non-NULL trapframes! */
openpic_dispatch(dev, NULL);
return (FILTER_HANDLED);
}
void
openpic_dispatch(device_t dev, struct trapframe *tf)
{
struct openpic_softc *sc;
u_int cpuid, vector;
CTR1(KTR_INTR, "%s: got interrupt", __func__);
cpuid = (dev == root_pic) ? PCPU_GET(cpuid) : 0;
sc = device_get_softc(dev);
while (1) {
vector = openpic_read(sc, OPENPIC_PCPU_IACK(cpuid));
vector &= OPENPIC_VECTOR_MASK;
if (vector == 255)
break;
powerpc_dispatch_intr(vector, tf);
}
}
void
openpic_enable(device_t dev, u_int irq, u_int vector, void **priv __unused)
{
struct openpic_softc *sc;
uint32_t x;
sc = device_get_softc(dev);
if (irq < sc->sc_nirq) {
x = openpic_read(sc, OPENPIC_SRC_VECTOR(irq));
x &= ~(OPENPIC_IMASK | OPENPIC_VECTOR_MASK);
x |= vector;
openpic_write(sc, OPENPIC_SRC_VECTOR(irq), x);
} else {
x = openpic_read(sc, OPENPIC_IPI_VECTOR(0));
x &= ~(OPENPIC_IMASK | OPENPIC_VECTOR_MASK);
x |= vector;
openpic_write(sc, OPENPIC_IPI_VECTOR(0), x);
}
}
void
openpic_eoi(device_t dev, u_int irq __unused, void *priv __unused)
{
struct openpic_softc *sc;
u_int cpuid;
cpuid = (dev == root_pic) ? PCPU_GET(cpuid) : 0;
sc = device_get_softc(dev);
openpic_write(sc, OPENPIC_PCPU_EOI(cpuid), 0);
}
void
openpic_ipi(device_t dev, u_int cpu)
{
struct openpic_softc *sc;
KASSERT(dev == root_pic, ("Cannot send IPIs from non-root OpenPIC"));
sc = device_get_softc(dev);
sched_pin();
openpic_write(sc, OPENPIC_PCPU_IPI_DISPATCH(PCPU_GET(cpuid), 0),
1u << cpu);
sched_unpin();
}
void
openpic_mask(device_t dev, u_int irq, void *priv __unused)
{
struct openpic_softc *sc;
uint32_t x;
sc = device_get_softc(dev);
if (irq < sc->sc_nirq) {
x = openpic_read(sc, OPENPIC_SRC_VECTOR(irq));
x |= OPENPIC_IMASK;
openpic_write(sc, OPENPIC_SRC_VECTOR(irq), x);
} else {
x = openpic_read(sc, OPENPIC_IPI_VECTOR(0));
x |= OPENPIC_IMASK;
openpic_write(sc, OPENPIC_IPI_VECTOR(0), x);
}
}
void
openpic_unmask(device_t dev, u_int irq, void *priv __unused)
{
struct openpic_softc *sc;
uint32_t x;
sc = device_get_softc(dev);
if (irq < sc->sc_nirq) {
x = openpic_read(sc, OPENPIC_SRC_VECTOR(irq));
x &= ~OPENPIC_IMASK;
openpic_write(sc, OPENPIC_SRC_VECTOR(irq), x);
} else {
x = openpic_read(sc, OPENPIC_IPI_VECTOR(0));
x &= ~OPENPIC_IMASK;
openpic_write(sc, OPENPIC_IPI_VECTOR(0), x);
}
}
int
openpic_suspend(device_t dev)
{
struct openpic_softc *sc;
int i;
sc = device_get_softc(dev);
sc->sc_saved_config = bus_read_4(sc->sc_memr, OPENPIC_CONFIG);
for (i = 0; i < 4; i++) {
sc->sc_saved_ipis[i] = bus_read_4(sc->sc_memr, OPENPIC_IPI_VECTOR(i));
}
for (i = 0; i < 4; i++) {
sc->sc_saved_prios[i] = bus_read_4(sc->sc_memr, OPENPIC_PCPU_TPR(i));
}
for (i = 0; i < OPENPIC_TIMERS; i++) {
sc->sc_saved_timers[i].tcnt = bus_read_4(sc->sc_memr, OPENPIC_TCNT(i));
sc->sc_saved_timers[i].tbase = bus_read_4(sc->sc_memr, OPENPIC_TBASE(i));
sc->sc_saved_timers[i].tvec = bus_read_4(sc->sc_memr, OPENPIC_TVEC(i));
sc->sc_saved_timers[i].tdst = bus_read_4(sc->sc_memr, OPENPIC_TDST(i));
}
for (i = 0; i < OPENPIC_SRC_VECTOR_COUNT; i++)
sc->sc_saved_vectors[i] =
bus_read_4(sc->sc_memr, OPENPIC_SRC_VECTOR(i)) & ~OPENPIC_ACTIVITY;
return (0);
}
int
openpic_resume(device_t dev)
{
struct openpic_softc *sc;
int i;
sc = device_get_softc(dev);
bus_write_4(sc->sc_memr, OPENPIC_CONFIG, sc->sc_saved_config);
for (i = 0; i < 4; i++) {
bus_write_4(sc->sc_memr, OPENPIC_IPI_VECTOR(i), sc->sc_saved_ipis[i]);
}
for (i = 0; i < 4; i++) {
bus_write_4(sc->sc_memr, OPENPIC_PCPU_TPR(i), sc->sc_saved_prios[i]);
}
for (i = 0; i < OPENPIC_TIMERS; i++) {
bus_write_4(sc->sc_memr, OPENPIC_TCNT(i), sc->sc_saved_timers[i].tcnt);
bus_write_4(sc->sc_memr, OPENPIC_TBASE(i), sc->sc_saved_timers[i].tbase);
bus_write_4(sc->sc_memr, OPENPIC_TVEC(i), sc->sc_saved_timers[i].tvec);
bus_write_4(sc->sc_memr, OPENPIC_TDST(i), sc->sc_saved_timers[i].tdst);
}
for (i = 0; i < OPENPIC_SRC_VECTOR_COUNT; i++)
bus_write_4(sc->sc_memr, OPENPIC_SRC_VECTOR(i), sc->sc_saved_vectors[i]);
return (0);
}
Index: projects/clang800-import/sys/riscv/riscv/elf_machdep.c
===================================================================
--- projects/clang800-import/sys/riscv/riscv/elf_machdep.c (revision 343955)
+++ projects/clang800-import/sys/riscv/riscv/elf_machdep.c (revision 343956)
@@ -1,522 +1,522 @@
/*-
* Copyright 1996-1998 John D. Polstra.
* Copyright (c) 2015 Ruslan Bukin <br@bsdpad.com>
* Copyright (c) 2016 Yukishige Shibata <y-shibat@mtd.biglobe.ne.jp>
* All rights reserved.
*
* Portions of this software were developed by SRI International and the
* University of Cambridge Computer Laboratory under DARPA/AFRL contract
* FA8750-10-C-0237 ("CTSRD"), as part of the DARPA CRASH research programme.
*
* Portions of this software were developed by the University of Cambridge
* Computer Laboratory as part of the CTSRD Project, with support from the
* UK Higher Education Innovation Fund (HEIF).
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/systm.h>
#include <sys/exec.h>
#include <sys/imgact.h>
#include <sys/linker.h>
#include <sys/proc.h>
#include <sys/sysctl.h>
#include <sys/sysent.h>
#include <sys/imgact_elf.h>
#include <sys/syscall.h>
#include <sys/signalvar.h>
#include <sys/vnode.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <vm/vm_param.h>
#include <machine/elf.h>
#include <machine/md_var.h>
struct sysentvec elf64_freebsd_sysvec = {
.sv_size = SYS_MAXSYSCALL,
.sv_table = sysent,
.sv_errsize = 0,
.sv_errtbl = NULL,
.sv_transtrap = NULL,
.sv_fixup = __elfN(freebsd_fixup),
.sv_sendsig = sendsig,
.sv_sigcode = sigcode,
.sv_szsigcode = &szsigcode,
.sv_name = "FreeBSD ELF64",
.sv_coredump = __elfN(coredump),
.sv_imgact_try = NULL,
.sv_minsigstksz = MINSIGSTKSZ,
.sv_pagesize = PAGE_SIZE,
.sv_minuser = VM_MIN_ADDRESS,
.sv_maxuser = VM_MAXUSER_ADDRESS,
.sv_usrstack = USRSTACK,
.sv_psstrings = PS_STRINGS,
- .sv_stackprot = VM_PROT_ALL,
+ .sv_stackprot = VM_PROT_READ | VM_PROT_WRITE,
.sv_copyout_strings = exec_copyout_strings,
.sv_setregs = exec_setregs,
.sv_fixlimit = NULL,
.sv_maxssiz = NULL,
.sv_flags = SV_ABI_FREEBSD | SV_LP64 | SV_SHP,
.sv_set_syscall_retval = cpu_set_syscall_retval,
.sv_fetch_syscall_args = cpu_fetch_syscall_args,
.sv_syscallnames = syscallnames,
.sv_shared_page_base = SHAREDPAGE,
.sv_shared_page_len = PAGE_SIZE,
.sv_schedtail = NULL,
.sv_thread_detach = NULL,
.sv_trap = NULL,
};
INIT_SYSENTVEC(elf64_sysvec, &elf64_freebsd_sysvec);
static Elf64_Brandinfo freebsd_brand_info = {
.brand = ELFOSABI_FREEBSD,
.machine = EM_RISCV,
.compat_3_brand = "FreeBSD",
.emul_path = NULL,
.interp_path = "/libexec/ld-elf.so.1",
.sysvec = &elf64_freebsd_sysvec,
.interp_newpath = NULL,
.brand_note = &elf64_freebsd_brandnote,
.flags = BI_CAN_EXEC_DYN | BI_BRAND_NOTE
};
SYSINIT(elf64, SI_SUB_EXEC, SI_ORDER_FIRST,
(sysinit_cfunc_t) elf64_insert_brand_entry,
&freebsd_brand_info);
static int debug_kld;
SYSCTL_INT(_kern, OID_AUTO, debug_kld,
CTLFLAG_RW, &debug_kld, 0,
"Activate debug prints in elf_reloc_internal()");
struct type2str_ent {
int type;
const char *str;
};
void
elf64_dump_thread(struct thread *td, void *dst, size_t *off)
{
}
/*
* The following four functions are used to manipulate bits in a 32-bit
* integer value.
* FIXME: implemented for ease of understanding rather than for speed.
*/
static uint32_t
gen_bitmask(int msb, int lsb)
{
uint32_t mask;
if (msb == sizeof(mask) * 8 - 1)
mask = ~0;
else
mask = (1U << (msb + 1)) - 1;
if (lsb > 0)
mask &= ~((1U << lsb) - 1);
return (mask);
}
static uint32_t
extract_bits(uint32_t x, int msb, int lsb)
{
uint32_t mask;
mask = gen_bitmask(msb, lsb);
x &= mask;
x >>= lsb;
return (x);
}
static uint32_t
insert_bits(uint32_t d, uint32_t s, int msb, int lsb)
{
uint32_t mask;
mask = gen_bitmask(msb, lsb);
d &= ~mask;
s <<= lsb;
s &= mask;
return (d | s);
}
static uint32_t
insert_imm(uint32_t insn, uint32_t imm, int imm_msb, int imm_lsb,
int insn_lsb)
{
int insn_msb;
uint32_t v;
v = extract_bits(imm, imm_msb, imm_lsb);
insn_msb = (imm_msb - imm_lsb) + insn_lsb;
return (insert_bits(insn, v, insn_msb, insn_lsb));
}
/*
* The RISC-V ISA is designed so that all immediate values are
* sign-extended.
* An immediate value is sometimes generated at runtime by adding a
* 12-bit signed integer and a 20-bit signed integer. This requires the
* 20-bit immediate value to be adjusted if the MSB of the 12-bit
* immediate value is set (the sign-extended value is treated as negative).
*
* For example, 0x123800 can be calculated by adding upper 20 bit of
* 0x124000 and sign-extended 12bit immediate whose bit pattern is
* 0x800 as follows:
* 0x123800
* = 0x123000 + 0x800
* = (0x123000 + 0x1000) + (-0x1000 + 0x800)
* = (0x123000 + 0x1000) + (0xff...ff800)
* = 0x124000 + sign-extension(0x800)
*/
static uint32_t
calc_hi20_imm(uint32_t value)
{
/*
* There is an arithmetic hack that can eliminate the conditional
* statement, but it is implemented here in a straightforward way.
*/
if ((value & 0x800) != 0)
value += 0x1000;
return (value & ~0xfff);
}
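The worked example in the comment above can be checked in isolation: splitting a value into its HI20 part (with the `+ 0x1000` adjustment) and its sign-extended LO12 part must recombine to the original value. A minimal user-space sketch of that invariant:

```c
#include <stdint.h>

/* Same adjustment as calc_hi20_imm() above. */
static uint32_t
hi20(uint32_t value)
{
	if ((value & 0x800) != 0)
		value += 0x1000;
	return (value & ~0xfffU);
}

/*
 * Sign-extend the low 12 bits, as the CPU does for I-type immediates.
 * The invariant is hi20(x) + lo12(x) == x for any 32-bit x.
 */
static int32_t
lo12(uint32_t value)
{
	return ((int32_t)(value << 20) >> 20);
}
```

For the 0x123800 example from the comment, hi20() yields 0x124000 and lo12() yields -0x800, which sum back to 0x123800.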
static const struct type2str_ent t2s[] = {
{ R_RISCV_NONE, "R_RISCV_NONE" },
{ R_RISCV_64, "R_RISCV_64" },
{ R_RISCV_JUMP_SLOT, "R_RISCV_JUMP_SLOT" },
{ R_RISCV_RELATIVE, "R_RISCV_RELATIVE" },
{ R_RISCV_JAL, "R_RISCV_JAL" },
{ R_RISCV_CALL, "R_RISCV_CALL" },
{ R_RISCV_PCREL_HI20, "R_RISCV_PCREL_HI20" },
{ R_RISCV_PCREL_LO12_I, "R_RISCV_PCREL_LO12_I" },
{ R_RISCV_PCREL_LO12_S, "R_RISCV_PCREL_LO12_S" },
{ R_RISCV_HI20, "R_RISCV_HI20" },
{ R_RISCV_LO12_I, "R_RISCV_LO12_I" },
{ R_RISCV_LO12_S, "R_RISCV_LO12_S" },
};
static const char *
reloctype_to_str(int type)
{
int i;
for (i = 0; i < sizeof(t2s) / sizeof(t2s[0]); ++i) {
if (type == t2s[i].type)
return t2s[i].str;
}
return "*unknown*";
}
bool
elf_is_ifunc_reloc(Elf_Size r_info __unused)
{
return (false);
}
/*
* Currently, kernel loadable modules for RISC-V are compiled with the
* -fPIC option.
* (see also additional CFLAGS definition for RISCV in sys/conf/kmod.mk)
* Only R_RISCV_64, R_RISCV_JUMP_SLOT and R_RISCV_RELATIVE are emitted in
* such modules. The other relocations are handled for the case where
* kernel loadable modules are built non-PIC.
*
* FIXME: only RISCV64 is supported.
*/
static int
elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data,
int type, int local, elf_lookup_fn lookup)
{
Elf_Size rtype, symidx;
const Elf_Rela *rela;
Elf_Addr val, addr;
Elf64_Addr *where;
Elf_Addr addend;
uint32_t before32_1;
uint32_t before32;
uint64_t before64;
uint32_t* insn32p;
uint32_t imm20;
int error;
switch (type) {
case ELF_RELOC_RELA:
rela = (const Elf_Rela *)data;
where = (Elf_Addr *)(relocbase + rela->r_offset);
insn32p = (uint32_t*)where;
addend = rela->r_addend;
rtype = ELF_R_TYPE(rela->r_info);
symidx = ELF_R_SYM(rela->r_info);
break;
default:
printf("%s:%d unknown reloc type %d\n",
__FUNCTION__, __LINE__, type);
return -1;
}
switch (rtype) {
case R_RISCV_NONE:
break;
case R_RISCV_64:
case R_RISCV_JUMP_SLOT:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr;
before64 = *where;
if (*where != val)
*where = val;
if (debug_kld)
printf("%p %c %-24s %016lx -> %016lx\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before64, *where);
break;
case R_RISCV_RELATIVE:
before64 = *where;
*where = elf_relocaddr(lf, relocbase + addend);
if (debug_kld)
printf("%p %c %-24s %016lx -> %016lx\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before64, *where);
break;
case R_RISCV_JAL:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr - (Elf_Addr)where;
if ((int64_t)val <= -(1L << 20) || (1L << 20) <= (int64_t)val) {
printf("kldload: huge offset against R_RISCV_JAL\n");
return -1;
}
before32 = *insn32p;
*insn32p = insert_imm(*insn32p, val, 20, 20, 31);
*insn32p = insert_imm(*insn32p, val, 10, 1, 21);
*insn32p = insert_imm(*insn32p, val, 11, 11, 20);
*insn32p = insert_imm(*insn32p, val, 19, 12, 12);
if (debug_kld)
printf("%p %c %-24s %08x -> %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, *insn32p);
break;
case R_RISCV_CALL:
/*
 * R_RISCV_CALL relocates an 8-byte region consisting of an
 * AUIPC/JALR instruction pair.
 */
/* Calculate and check the PC-relative offset. */
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr - (Elf_Addr)where;
if ((int64_t)val <= -(1L << 32) || (1L << 32) <= (int64_t)val) {
printf("kldload: huge offset against R_RISCV_CALL\n");
return -1;
}
/* Relocate AUIPC. */
before32 = insn32p[0];
imm20 = calc_hi20_imm(val);
insn32p[0] = insert_imm(insn32p[0], imm20, 31, 12, 12);
/* Relocate JALR. */
before32_1 = insn32p[1];
insn32p[1] = insert_imm(insn32p[1], val, 11, 0, 20);
if (debug_kld)
printf("%p %c %-24s %08x %08x -> %08x %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, insn32p[0],
before32_1, insn32p[1]);
break;
case R_RISCV_PCREL_HI20:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr - (Elf_Addr)where;
insn32p = (uint32_t*)where;
before32 = *insn32p;
imm20 = calc_hi20_imm(val);
*insn32p = insert_imm(*insn32p, imm20, 31, 12, 12);
if (debug_kld)
printf("%p %c %-24s %08x -> %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, *insn32p);
break;
case R_RISCV_PCREL_LO12_I:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr - (Elf_Addr)where;
insn32p = (uint32_t*)where;
before32 = *insn32p;
/* XXX: per the psABI this should use the offset of the paired HI20. */
*insn32p = insert_imm(*insn32p, addr, 11, 0, 20);
if (debug_kld)
printf("%p %c %-24s %08x -> %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, *insn32p);
break;
case R_RISCV_PCREL_LO12_S:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr - (Elf_Addr)where;
insn32p = (uint32_t*)where;
before32 = *insn32p;
/* XXX: per the psABI this should use the offset of the paired HI20. */
*insn32p = insert_imm(*insn32p, addr, 11, 5, 25);
*insn32p = insert_imm(*insn32p, addr, 4, 0, 7);
if (debug_kld)
printf("%p %c %-24s %08x -> %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, *insn32p);
break;
case R_RISCV_HI20:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr;
insn32p = (uint32_t*)where;
before32 = *insn32p;
imm20 = calc_hi20_imm(val);
*insn32p = insert_imm(*insn32p, imm20, 31, 12, 12);
if (debug_kld)
printf("%p %c %-24s %08x -> %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, *insn32p);
break;
case R_RISCV_LO12_I:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr;
insn32p = (uint32_t*)where;
before32 = *insn32p;
*insn32p = insert_imm(*insn32p, addr, 11, 0, 20);
if (debug_kld)
printf("%p %c %-24s %08x -> %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, *insn32p);
break;
case R_RISCV_LO12_S:
error = lookup(lf, symidx, 1, &addr);
if (error != 0)
return -1;
val = addr;
insn32p = (uint32_t*)where;
before32 = *insn32p;
*insn32p = insert_imm(*insn32p, addr, 11, 5, 25);
*insn32p = insert_imm(*insn32p, addr, 4, 0, 7);
if (debug_kld)
printf("%p %c %-24s %08x -> %08x\n",
where,
(local? 'l': 'g'),
reloctype_to_str(rtype),
before32, *insn32p);
break;
default:
printf("kldload: unexpected relocation type %lu\n", rtype);
return (-1);
}
return (0);
}
int
elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data, int type,
elf_lookup_fn lookup)
{
return (elf_reloc_internal(lf, relocbase, data, type, 0, lookup));
}
int
elf_reloc_local(linker_file_t lf, Elf_Addr relocbase, const void *data,
int type, elf_lookup_fn lookup)
{
return (elf_reloc_internal(lf, relocbase, data, type, 1, lookup));
}
int
elf_cpu_load_file(linker_file_t lf __unused)
{
return (0);
}
int
elf_cpu_unload_file(linker_file_t lf __unused)
{
return (0);
}
Index: projects/clang800-import/sys/sys/namei.h
===================================================================
--- projects/clang800-import/sys/sys/namei.h (revision 343955)
+++ projects/clang800-import/sys/sys/namei.h (revision 343956)
@@ -1,221 +1,230 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1985, 1989, 1991, 1993
* The Regents of the University of California. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)namei.h 8.5 (Berkeley) 1/9/95
* $FreeBSD$
*/
#ifndef _SYS_NAMEI_H_
#define _SYS_NAMEI_H_
#include <sys/caprights.h>
#include <sys/filedesc.h>
#include <sys/queue.h>
#include <sys/_uio.h>
struct componentname {
/*
* Arguments to lookup.
*/
u_long cn_nameiop; /* namei operation */
u_int64_t cn_flags; /* flags to namei */
struct thread *cn_thread;/* thread requesting lookup */
struct ucred *cn_cred; /* credentials */
int cn_lkflags; /* Lock flags LK_EXCLUSIVE or LK_SHARED */
/*
* Shared between lookup and commit routines.
*/
char *cn_pnbuf; /* pathname buffer */
char *cn_nameptr; /* pointer to looked up name */
long cn_namelen; /* length of looked up component */
};
struct nameicap_tracker;
TAILQ_HEAD(nameicap_tracker_head, nameicap_tracker);
/*
* Encapsulation of namei parameters.
*/
struct nameidata {
/*
* Arguments to namei/lookup.
*/
const char *ni_dirp; /* pathname pointer */
enum uio_seg ni_segflg; /* location of pathname */
cap_rights_t ni_rightsneeded; /* rights required to look up vnode */
/*
* Arguments to lookup.
*/
struct vnode *ni_startdir; /* starting directory */
struct vnode *ni_rootdir; /* logical root directory */
struct vnode *ni_topdir; /* logical top directory */
int ni_dirfd; /* starting directory for *at functions */
int ni_lcf; /* local call flags */
/*
* Results: returned from namei
*/
struct filecaps ni_filecaps; /* rights the *at base has */
/*
* Results: returned from/manipulated by lookup
*/
struct vnode *ni_vp; /* vnode of result */
struct vnode *ni_dvp; /* vnode of intermediate directory */
/*
+ * Results: flags returned from namei
+ */
+ u_int ni_resflags;
+ /*
* Shared between namei and lookup/commit routines.
*/
size_t ni_pathlen; /* remaining chars in path */
char *ni_next; /* next location in pathname */
u_int ni_loopcnt; /* count of symlinks encountered */
/*
* Lookup parameters: this structure describes the subset of
* information from the nameidata structure that is passed
* through the VOP interface.
*/
struct componentname ni_cnd;
struct nameicap_tracker_head ni_cap_tracker;
struct vnode *ni_beneath_latch;
};
#ifdef _KERNEL
/*
* namei operations
*/
#define LOOKUP 0 /* perform name lookup only */
#define CREATE 1 /* setup for file creation */
#define DELETE 2 /* setup for file deletion */
#define RENAME 3 /* setup for file renaming */
#define OPMASK 3 /* mask for operation */
/*
* namei operational modifier flags, stored in ni_cnd.flags
*/
#define LOCKLEAF 0x0004 /* lock vnode on return */
#define LOCKPARENT 0x0008 /* want parent vnode returned locked */
#define WANTPARENT 0x0010 /* want parent vnode returned unlocked */
#define NOCACHE 0x0020 /* name must not be left in cache */
#define FOLLOW 0x0040 /* follow symbolic links */
#define BENEATH 0x0080 /* No escape from the start dir */
#define LOCKSHARED 0x0100 /* Shared lock leaf */
#define NOFOLLOW 0x0000 /* do not follow symbolic links (pseudo) */
#define MODMASK 0x01fc /* mask of operational modifiers */
/*
* Namei parameter descriptors.
*
* SAVENAME may be set by either the callers of namei or by VOP_LOOKUP.
* If the caller of namei sets the flag (for example execve wants to
* know the name of the program that is being executed), then it must
* free the buffer. If VOP_LOOKUP sets the flag, then the buffer must
* be freed by either the commit routine or the VOP_ABORT routine.
* SAVESTART is set only by the callers of namei. It implies SAVENAME
* plus the addition of saving the parent directory that contains the
* name in ni_startdir. It allows repeated calls to lookup for the
* name being sought. The caller is responsible for releasing the
* buffer and for vrele'ing ni_startdir.
*/
#define RDONLY 0x00000200 /* lookup with read-only semantics */
#define HASBUF 0x00000400 /* has allocated pathname buffer */
#define SAVENAME 0x00000800 /* save pathname buffer */
#define SAVESTART 0x00001000 /* save starting directory */
#define ISDOTDOT 0x00002000 /* current component name is .. */
#define MAKEENTRY 0x00004000 /* entry is to be added to name cache */
#define ISLASTCN 0x00008000 /* this is last component of pathname */
#define ISSYMLINK 0x00010000 /* symlink needs interpretation */
#define ISWHITEOUT 0x00020000 /* found whiteout */
#define DOWHITEOUT 0x00040000 /* do whiteouts */
#define WILLBEDIR 0x00080000 /* new files will be dirs; allow trailing / */
#define ISUNICODE 0x00100000 /* current component name is unicode */
#define ISOPEN 0x00200000 /* caller is opening; return a real vnode. */
#define NOCROSSMOUNT 0x00400000 /* do not cross mount points */
#define NOMACCHECK 0x00800000 /* do not perform MAC checks */
#define AUDITVNODE1 0x04000000 /* audit the looked up vnode information */
#define AUDITVNODE2 0x08000000 /* audit the looked up vnode information */
#define TRAILINGSLASH 0x10000000 /* path ended in a slash */
#define NOCAPCHECK 0x20000000 /* do not perform capability checks */
#define PARAMASK 0x3ffffe00 /* mask of parameter descriptors */
+
+/*
+ * Namei results flags
+ */
+#define NIRES_ABS 0x00000001 /* Path was absolute */
/*
* Flags in ni_lcf, valid for the duration of the namei call.
*/
#define NI_LCF_STRICTRELATIVE 0x0001 /* relative lookup only */
#define NI_LCF_CAP_DOTDOT 0x0002 /* ".." in strictrelative case */
#define NI_LCF_BENEATH_ABS 0x0004 /* BENEATH with absolute path */
#define NI_LCF_BENEATH_LATCHED 0x0008 /* BENEATH_ABS traversed starting dir */
#define NI_LCF_LATCH 0x0010 /* ni_beneath_latch valid */
/*
* Initialization of a nameidata structure.
*/
#define NDINIT(ndp, op, flags, segflg, namep, td) \
NDINIT_ALL(ndp, op, flags, segflg, namep, AT_FDCWD, NULL, 0, td)
#define NDINIT_AT(ndp, op, flags, segflg, namep, dirfd, td) \
NDINIT_ALL(ndp, op, flags, segflg, namep, dirfd, NULL, 0, td)
#define NDINIT_ATRIGHTS(ndp, op, flags, segflg, namep, dirfd, rightsp, td) \
NDINIT_ALL(ndp, op, flags, segflg, namep, dirfd, NULL, rightsp, td)
#define NDINIT_ATVP(ndp, op, flags, segflg, namep, vp, td) \
NDINIT_ALL(ndp, op, flags, segflg, namep, AT_FDCWD, vp, 0, td)
void NDINIT_ALL(struct nameidata *ndp, u_long op, u_long flags,
enum uio_seg segflg, const char *namep, int dirfd, struct vnode *startdir,
cap_rights_t *rightsp, struct thread *td);
#define NDF_NO_DVP_RELE 0x00000001
#define NDF_NO_DVP_UNLOCK 0x00000002
#define NDF_NO_DVP_PUT 0x00000003
#define NDF_NO_VP_RELE 0x00000004
#define NDF_NO_VP_UNLOCK 0x00000008
#define NDF_NO_VP_PUT 0x0000000c
#define NDF_NO_STARTDIR_RELE 0x00000010
#define NDF_NO_FREE_PNBUF 0x00000020
#define NDF_ONLY_PNBUF (~NDF_NO_FREE_PNBUF)
void NDFREE(struct nameidata *, const u_int);
int namei(struct nameidata *ndp);
int lookup(struct nameidata *ndp);
int relookup(struct vnode *dvp, struct vnode **vpp,
struct componentname *cnp);
#endif
/*
* Stats on usefulness of namei caches.
*/
struct nchstats {
long ncs_goodhits; /* hits that we can really use */
long ncs_neghits; /* negative hits that we can use */
long ncs_badhits; /* hits we must drop */
long ncs_falsehits; /* hits with id mismatch */
long ncs_miss; /* misses */
long ncs_long; /* long names that ignore cache */
long ncs_pass2; /* names found with passes == 2 */
long ncs_2passes; /* number of times we attempt it */
};
extern struct nchstats nchstats;
#endif /* !_SYS_NAMEI_H_ */
Index: projects/clang800-import/sys/sys/sysent.h
===================================================================
--- projects/clang800-import/sys/sys/sysent.h (revision 343955)
+++ projects/clang800-import/sys/sys/sysent.h (revision 343956)
@@ -1,324 +1,320 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1982, 1988, 1991 The Regents of the University of California.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#ifndef _SYS_SYSENT_H_
#define _SYS_SYSENT_H_
#include <bsm/audit.h>
struct rlimit;
struct sysent;
struct thread;
struct ksiginfo;
struct syscall_args;
enum systrace_probe_t {
SYSTRACE_ENTRY,
SYSTRACE_RETURN,
};
typedef int sy_call_t(struct thread *, void *);
typedef void (*systrace_probe_func_t)(struct syscall_args *,
enum systrace_probe_t, int);
typedef void (*systrace_args_func_t)(int, void *, uint64_t *, int *);
#ifdef _KERNEL
extern bool systrace_enabled;
#endif
extern systrace_probe_func_t systrace_probe_func;
struct sysent { /* system call table */
int sy_narg; /* number of arguments */
sy_call_t *sy_call; /* implementing function */
au_event_t sy_auevent; /* audit event associated with syscall */
systrace_args_func_t sy_systrace_args_func;
/* optional argument conversion function. */
u_int32_t sy_entry; /* DTrace entry ID for systrace. */
u_int32_t sy_return; /* DTrace return ID for systrace. */
u_int32_t sy_flags; /* General flags for system calls. */
u_int32_t sy_thrcnt;
};
/*
* A system call is permitted in capability mode.
*/
#define SYF_CAPENABLED 0x00000001
#define SY_THR_FLAGMASK 0x7
#define SY_THR_STATIC 0x1
#define SY_THR_DRAINING 0x2
#define SY_THR_ABSENT 0x4
#define SY_THR_INCR 0x8
#ifdef KLD_MODULE
#define SY_THR_STATIC_KLD 0
#else
#define SY_THR_STATIC_KLD SY_THR_STATIC
#endif
struct image_params;
struct __sigset;
struct trapframe;
struct vnode;
struct sysentvec {
int sv_size; /* number of entries */
struct sysent *sv_table; /* pointer to sysent */
int sv_errsize; /* size of errno translation table */
const int *sv_errtbl; /* errno translation table */
int (*sv_transtrap)(int, int);
/* translate trap-to-signal mapping */
int (*sv_fixup)(register_t **, struct image_params *);
/* stack fixup function */
void (*sv_sendsig)(void (*)(int), struct ksiginfo *, struct __sigset *);
/* send signal */
char *sv_sigcode; /* start of sigtramp code */
int *sv_szsigcode; /* size of sigtramp code */
char *sv_name; /* name of binary type */
int (*sv_coredump)(struct thread *, struct vnode *, off_t, int);
/* function to dump core, or NULL */
int (*sv_imgact_try)(struct image_params *);
int sv_minsigstksz; /* minimum signal stack size */
int sv_pagesize; /* pagesize */
vm_offset_t sv_minuser; /* VM_MIN_ADDRESS */
vm_offset_t sv_maxuser; /* VM_MAXUSER_ADDRESS */
vm_offset_t sv_usrstack; /* USRSTACK */
vm_offset_t sv_psstrings; /* PS_STRINGS */
int sv_stackprot; /* vm protection for stack */
register_t *(*sv_copyout_strings)(struct image_params *);
void (*sv_setregs)(struct thread *, struct image_params *,
u_long);
void (*sv_fixlimit)(struct rlimit *, int);
u_long *sv_maxssiz;
u_int sv_flags;
void (*sv_set_syscall_retval)(struct thread *, int);
int (*sv_fetch_syscall_args)(struct thread *);
const char **sv_syscallnames;
vm_offset_t sv_timekeep_base;
vm_offset_t sv_shared_page_base;
vm_offset_t sv_shared_page_len;
vm_offset_t sv_sigcode_base;
void *sv_shared_page_obj;
void (*sv_schedtail)(struct thread *);
void (*sv_thread_detach)(struct thread *);
int (*sv_trap)(struct thread *);
u_long *sv_hwcap; /* Value passed in AT_HWCAP. */
u_long *sv_hwcap2; /* Value passed in AT_HWCAP2. */
};
#define SV_ILP32 0x000100 /* 32-bit executable. */
#define SV_LP64 0x000200 /* 64-bit executable. */
#define SV_IA32 0x004000 /* Intel 32-bit executable. */
#define SV_AOUT 0x008000 /* a.out executable. */
#define SV_SHP 0x010000 /* Shared page. */
#define SV_CAPSICUM 0x020000 /* Force cap_enter() on startup. */
#define SV_TIMEKEEP 0x040000 /* Shared page timehands. */
#define SV_ABI_MASK 0xff
#define SV_ABI_ERRNO(p, e) ((p)->p_sysent->sv_errsize <= 0 ? e : \
((e) >= (p)->p_sysent->sv_errsize ? -1 : (p)->p_sysent->sv_errtbl[e]))
#define SV_PROC_FLAG(p, x) ((p)->p_sysent->sv_flags & (x))
#define SV_PROC_ABI(p) ((p)->p_sysent->sv_flags & SV_ABI_MASK)
#define SV_CURPROC_FLAG(x) SV_PROC_FLAG(curproc, x)
#define SV_CURPROC_ABI() SV_PROC_ABI(curproc)
/* same as ELFOSABI_XXX, to prevent header pollution */
#define SV_ABI_LINUX 3
#define SV_ABI_FREEBSD 9
#define SV_ABI_CLOUDABI 17
#define SV_ABI_UNDEF 255
#ifdef _KERNEL
extern struct sysentvec aout_sysvec;
extern struct sysent sysent[];
extern const char *syscallnames[];
-#if defined(__amd64__)
-extern int i386_read_exec;
-#endif
-
#define NO_SYSCALL (-1)
struct module;
struct syscall_module_data {
int (*chainevh)(struct module *, int, void *); /* next handler */
void *chainarg; /* arg for next event handler */
int *offset; /* offset into sysent */
struct sysent *new_sysent; /* new sysent */
struct sysent old_sysent; /* old sysent */
int flags; /* flags for syscall_register */
};
/* separate initialization vector so it can be used in a substructure */
#define SYSENT_INIT_VALS(_syscallname) { \
.sy_narg = (sizeof(struct _syscallname ## _args ) \
/ sizeof(register_t)), \
.sy_call = (sy_call_t *)&sys_##_syscallname, \
.sy_auevent = SYS_AUE_##_syscallname, \
.sy_systrace_args_func = NULL, \
.sy_entry = 0, \
.sy_return = 0, \
.sy_flags = 0, \
.sy_thrcnt = 0 \
}
#define MAKE_SYSENT(syscallname) \
static struct sysent syscallname##_sysent = SYSENT_INIT_VALS(syscallname);
#define MAKE_SYSENT_COMPAT(syscallname) \
static struct sysent syscallname##_sysent = { \
(sizeof(struct syscallname ## _args ) \
/ sizeof(register_t)), \
(sy_call_t *)& syscallname, \
SYS_AUE_##syscallname \
}
#define SYSCALL_MODULE(name, offset, new_sysent, evh, arg) \
static struct syscall_module_data name##_syscall_mod = { \
evh, arg, offset, new_sysent, { 0, NULL, AUE_NULL } \
}; \
\
static moduledata_t name##_mod = { \
"sys/" #name, \
syscall_module_handler, \
&name##_syscall_mod \
}; \
DECLARE_MODULE(name, name##_mod, SI_SUB_SYSCALLS, SI_ORDER_MIDDLE)
#define SYSCALL_MODULE_HELPER(syscallname) \
static int syscallname##_syscall = SYS_##syscallname; \
MAKE_SYSENT(syscallname); \
SYSCALL_MODULE(syscallname, \
& syscallname##_syscall, & syscallname##_sysent, \
NULL, NULL)
#define SYSCALL_MODULE_PRESENT(syscallname) \
(sysent[SYS_##syscallname].sy_call != (sy_call_t *)lkmnosys && \
sysent[SYS_##syscallname].sy_call != (sy_call_t *)lkmressys)
/*
* Syscall registration helpers with resource allocation handling.
*/
struct syscall_helper_data {
struct sysent new_sysent;
struct sysent old_sysent;
int syscall_no;
int registered;
};
#define SYSCALL_INIT_HELPER_F(syscallname, flags) { \
.new_sysent = { \
.sy_narg = (sizeof(struct syscallname ## _args ) \
/ sizeof(register_t)), \
.sy_call = (sy_call_t *)& sys_ ## syscallname, \
.sy_auevent = SYS_AUE_##syscallname, \
.sy_flags = (flags) \
}, \
.syscall_no = SYS_##syscallname \
}
#define SYSCALL_INIT_HELPER_COMPAT_F(syscallname, flags) { \
.new_sysent = { \
.sy_narg = (sizeof(struct syscallname ## _args ) \
/ sizeof(register_t)), \
.sy_call = (sy_call_t *)& syscallname, \
.sy_auevent = SYS_AUE_##syscallname, \
.sy_flags = (flags) \
}, \
.syscall_no = SYS_##syscallname \
}
#define SYSCALL_INIT_HELPER(syscallname) \
SYSCALL_INIT_HELPER_F(syscallname, 0)
#define SYSCALL_INIT_HELPER_COMPAT(syscallname) \
SYSCALL_INIT_HELPER_COMPAT_F(syscallname, 0)
#define SYSCALL_INIT_LAST { \
.syscall_no = NO_SYSCALL \
}
int syscall_module_handler(struct module *mod, int what, void *arg);
int syscall_helper_register(struct syscall_helper_data *sd, int flags);
int syscall_helper_unregister(struct syscall_helper_data *sd);
/* Implementation, exposed for COMPAT code */
int kern_syscall_register(struct sysent *sysents, int *offset,
struct sysent *new_sysent, struct sysent *old_sysent, int flags);
int kern_syscall_deregister(struct sysent *sysents, int offset,
const struct sysent *old_sysent);
int kern_syscall_module_handler(struct sysent *sysents,
struct module *mod, int what, void *arg);
int kern_syscall_helper_register(struct sysent *sysents,
struct syscall_helper_data *sd, int flags);
int kern_syscall_helper_unregister(struct sysent *sysents,
struct syscall_helper_data *sd);
struct proc;
const char *syscallname(struct proc *p, u_int code);
/* Special purpose system call functions. */
struct nosys_args;
int lkmnosys(struct thread *, struct nosys_args *);
int lkmressys(struct thread *, struct nosys_args *);
int _syscall_thread_enter(struct thread *td, struct sysent *se);
void _syscall_thread_exit(struct thread *td, struct sysent *se);
static inline int
syscall_thread_enter(struct thread *td, struct sysent *se)
{
if (__predict_true((se->sy_thrcnt & SY_THR_STATIC) != 0))
return (0);
return (_syscall_thread_enter(td, se));
}
static inline void
syscall_thread_exit(struct thread *td, struct sysent *se)
{
if (__predict_true((se->sy_thrcnt & SY_THR_STATIC) != 0))
return;
_syscall_thread_exit(td, se);
}
int shared_page_alloc(int size, int align);
int shared_page_fill(int size, int align, const void *data);
void shared_page_write(int base, int size, const void *data);
void exec_sysvec_init(void *param);
void exec_inittk(void);
#define INIT_SYSENTVEC(name, sv) \
SYSINIT(name, SI_SUB_EXEC, SI_ORDER_ANY, \
(sysinit_cfunc_t)exec_sysvec_init, sv);
#endif /* _KERNEL */
#endif /* !_SYS_SYSENT_H_ */
Index: projects/clang800-import/sys/vm/uma_core.c
===================================================================
--- projects/clang800-import/sys/vm/uma_core.c (revision 343955)
+++ projects/clang800-import/sys/vm/uma_core.c (revision 343956)
@@ -1,4201 +1,4227 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2002-2005, 2009, 2013 Jeffrey Roberson <jeff@FreeBSD.org>
* Copyright (c) 2004, 2005 Bosko Milekic <bmilekic@FreeBSD.org>
* Copyright (c) 2004-2006 Robert N. M. Watson
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice unmodified, this list of conditions, and the following
* disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
* uma_core.c Implementation of the Universal Memory allocator
*
* This allocator is intended to replace the multitude of similar object caches
* in the standard FreeBSD kernel. The intent is to be flexible as well as
* efficient. A primary design goal is to return unused memory to the rest of
* the system. This will make the system as a whole more flexible due to the
* ability to move memory to subsystems which most need it instead of leaving
* pools of reserved memory unused.
*
* The basic ideas stem from similar slab/zone based allocators whose algorithms
* are well known.
*
*/
/*
* TODO:
* - Improve memory usage for large allocations
* - Investigate cache size adjustments
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_ddb.h"
#include "opt_param.h"
#include "opt_vm.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bitset.h>
#include <sys/domainset.h>
#include <sys/eventhandler.h>
#include <sys/kernel.h>
#include <sys/types.h>
#include <sys/limits.h>
#include <sys/queue.h>
#include <sys/malloc.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/sysctl.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/random.h>
#include <sys/rwlock.h>
#include <sys/sbuf.h>
#include <sys/sched.h>
#include <sys/smp.h>
#include <sys/taskqueue.h>
#include <sys/vmmeter.h>
#include <vm/vm.h>
#include <vm/vm_domainset.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/vm_pageout.h>
#include <vm/vm_param.h>
#include <vm/vm_phys.h>
#include <vm/vm_pagequeue.h>
#include <vm/vm_map.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>
#include <vm/uma.h>
#include <vm/uma_int.h>
#include <vm/uma_dbg.h>
#include <ddb/ddb.h>
#ifdef DEBUG_MEMGUARD
#include <vm/memguard.h>
#endif
/*
* This is the zone and keg from which all zones are spawned.
*/
static uma_zone_t kegs;
static uma_zone_t zones;
/* This is the zone from which all offpage uma_slab_ts are allocated. */
static uma_zone_t slabzone;
/*
* The initial hash tables come out of this zone so they can be allocated
* prior to malloc coming up.
*/
static uma_zone_t hashzone;
/* The boot-time adjusted value for cache line alignment. */
int uma_align_cache = 64 - 1;
static MALLOC_DEFINE(M_UMAHASH, "UMAHash", "UMA Hash Buckets");
/*
* Are we allowed to allocate buckets?
*/
static int bucketdisable = 1;
/* Linked list of all kegs in the system */
static LIST_HEAD(,uma_keg) uma_kegs = LIST_HEAD_INITIALIZER(uma_kegs);
/* Linked list of all cache-only zones in the system */
static LIST_HEAD(,uma_zone) uma_cachezones =
LIST_HEAD_INITIALIZER(uma_cachezones);
/* This RW lock protects the keg list */
static struct rwlock_padalign __exclusive_cache_line uma_rwlock;
/*
* Pointer to, and counter for, a pool of pages preallocated at
* startup to bootstrap UMA.
*/
static char *bootmem;
static int boot_pages;
static struct sx uma_drain_lock;
/* kmem soft limit. */
static unsigned long uma_kmem_limit = LONG_MAX;
static volatile unsigned long uma_kmem_total;
/* Is the VM done starting up? */
static enum { BOOT_COLD = 0, BOOT_STRAPPED, BOOT_PAGEALLOC, BOOT_BUCKETS,
BOOT_RUNNING } booted = BOOT_COLD;
/*
* This is the handle used to schedule events that need to happen
* outside of the allocation fast path.
*/
static struct callout uma_callout;
#define UMA_TIMEOUT 20 /* Seconds for callout interval. */
/*
* This structure is passed as the zone ctor arg so that I don't have to create
* a special allocation function just for zones.
*/
struct uma_zctor_args {
const char *name;
size_t size;
uma_ctor ctor;
uma_dtor dtor;
uma_init uminit;
uma_fini fini;
uma_import import;
uma_release release;
void *arg;
uma_keg_t keg;
int align;
uint32_t flags;
};
struct uma_kctor_args {
uma_zone_t zone;
size_t size;
uma_init uminit;
uma_fini fini;
int align;
uint32_t flags;
};
struct uma_bucket_zone {
uma_zone_t ubz_zone;
char *ubz_name;
int ubz_entries; /* Number of items it can hold. */
int ubz_maxsize; /* Maximum allocation size per-item. */
};
/*
* Compute the actual number of bucket entries to pack them in power
* of two sizes for more efficient space utilization.
*/
#define BUCKET_SIZE(n) \
(((sizeof(void *) * (n)) - sizeof(struct uma_bucket)) / sizeof(void *))
#define BUCKET_MAX BUCKET_SIZE(256)
struct uma_bucket_zone bucket_zones[] = {
{ NULL, "4 Bucket", BUCKET_SIZE(4), 4096 },
{ NULL, "6 Bucket", BUCKET_SIZE(6), 3072 },
{ NULL, "8 Bucket", BUCKET_SIZE(8), 2048 },
{ NULL, "12 Bucket", BUCKET_SIZE(12), 1536 },
{ NULL, "16 Bucket", BUCKET_SIZE(16), 1024 },
{ NULL, "32 Bucket", BUCKET_SIZE(32), 512 },
{ NULL, "64 Bucket", BUCKET_SIZE(64), 256 },
{ NULL, "128 Bucket", BUCKET_SIZE(128), 128 },
{ NULL, "256 Bucket", BUCKET_SIZE(256), 64 },
{ NULL, NULL, 0}
};
/*
* Flags and enumerations to be passed to internal functions.
*/
enum zfreeskip {
SKIP_NONE = 0,
SKIP_CNT = 0x00000001,
SKIP_DTOR = 0x00010000,
SKIP_FINI = 0x00020000,
};
#define UMA_ANYDOMAIN -1 /* Special value for domain search. */
/* Prototypes. */
int uma_startup_count(int);
void uma_startup(void *, int);
void uma_startup1(void);
void uma_startup2(void);
static void *noobj_alloc(uma_zone_t, vm_size_t, int, uint8_t *, int);
static void *page_alloc(uma_zone_t, vm_size_t, int, uint8_t *, int);
static void *pcpu_page_alloc(uma_zone_t, vm_size_t, int, uint8_t *, int);
static void *startup_alloc(uma_zone_t, vm_size_t, int, uint8_t *, int);
static void page_free(void *, vm_size_t, uint8_t);
static void pcpu_page_free(void *, vm_size_t, uint8_t);
static uma_slab_t keg_alloc_slab(uma_keg_t, uma_zone_t, int, int, int);
static void cache_drain(uma_zone_t);
static void bucket_drain(uma_zone_t, uma_bucket_t);
static void bucket_cache_drain(uma_zone_t zone);
static int keg_ctor(void *, int, void *, int);
static void keg_dtor(void *, int, void *);
static int zone_ctor(void *, int, void *, int);
static void zone_dtor(void *, int, void *);
static int zero_init(void *, int, int);
static void keg_small_init(uma_keg_t keg);
static void keg_large_init(uma_keg_t keg);
static void zone_foreach(void (*zfunc)(uma_zone_t));
static void zone_timeout(uma_zone_t zone);
static int hash_alloc(struct uma_hash *);
static int hash_expand(struct uma_hash *, struct uma_hash *);
static void hash_free(struct uma_hash *hash);
static void uma_timeout(void *);
static void uma_startup3(void);
static void *zone_alloc_item(uma_zone_t, void *, int, int);
static void *zone_alloc_item_locked(uma_zone_t, void *, int, int);
static void zone_free_item(uma_zone_t, void *, void *, enum zfreeskip);
static void bucket_enable(void);
static void bucket_init(void);
static uma_bucket_t bucket_alloc(uma_zone_t zone, void *, int);
static void bucket_free(uma_zone_t zone, uma_bucket_t, void *);
static void bucket_zone_drain(void);
static uma_bucket_t zone_alloc_bucket(uma_zone_t, void *, int, int, int);
static uma_slab_t zone_fetch_slab(uma_zone_t, uma_keg_t, int, int);
static void *slab_alloc_item(uma_keg_t keg, uma_slab_t slab);
static void slab_free_item(uma_zone_t zone, uma_slab_t slab, void *item);
static uma_keg_t uma_kcreate(uma_zone_t zone, size_t size, uma_init uminit,
uma_fini fini, int align, uint32_t flags);
static int zone_import(uma_zone_t, void **, int, int, int);
static void zone_release(uma_zone_t, void **, int);
static void uma_zero_item(void *, uma_zone_t);
void uma_print_zone(uma_zone_t);
void uma_print_stats(void);
static int sysctl_vm_zone_count(SYSCTL_HANDLER_ARGS);
static int sysctl_vm_zone_stats(SYSCTL_HANDLER_ARGS);
#ifdef INVARIANTS
static bool uma_dbg_kskip(uma_keg_t keg, void *mem);
static bool uma_dbg_zskip(uma_zone_t zone, void *mem);
static void uma_dbg_free(uma_zone_t zone, uma_slab_t slab, void *item);
static void uma_dbg_alloc(uma_zone_t zone, uma_slab_t slab, void *item);
static SYSCTL_NODE(_vm, OID_AUTO, debug, CTLFLAG_RD, 0,
"Memory allocation debugging");
static u_int dbg_divisor = 1;
SYSCTL_UINT(_vm_debug, OID_AUTO, divisor,
CTLFLAG_RDTUN | CTLFLAG_NOFETCH, &dbg_divisor, 0,
"Debug & thrash every nth item in the memory allocator");
static counter_u64_t uma_dbg_cnt = EARLY_COUNTER;
static counter_u64_t uma_skip_cnt = EARLY_COUNTER;
SYSCTL_COUNTER_U64(_vm_debug, OID_AUTO, trashed, CTLFLAG_RD,
&uma_dbg_cnt, "memory items debugged");
SYSCTL_COUNTER_U64(_vm_debug, OID_AUTO, skipped, CTLFLAG_RD,
&uma_skip_cnt, "memory items skipped, not debugged");
#endif
SYSINIT(uma_startup3, SI_SUB_VM_CONF, SI_ORDER_SECOND, uma_startup3, NULL);
SYSCTL_PROC(_vm, OID_AUTO, zone_count, CTLFLAG_RD|CTLTYPE_INT,
0, 0, sysctl_vm_zone_count, "I", "Number of UMA zones");
SYSCTL_PROC(_vm, OID_AUTO, zone_stats, CTLFLAG_RD|CTLTYPE_STRUCT,
0, 0, sysctl_vm_zone_stats, "s,struct uma_type_header", "Zone Stats");
static int zone_warnings = 1;
SYSCTL_INT(_vm, OID_AUTO, zone_warnings, CTLFLAG_RWTUN, &zone_warnings, 0,
"Warn when a UMA zone becomes full");
/* Adjust bytes under management by UMA. */
static inline void
uma_total_dec(unsigned long size)
{
atomic_subtract_long(&uma_kmem_total, size);
}
static inline void
uma_total_inc(unsigned long size)
{
if (atomic_fetchadd_long(&uma_kmem_total, size) > uma_kmem_limit)
uma_reclaim_wakeup();
}
/*
* This routine checks to see whether or not it's safe to enable buckets.
*/
static void
bucket_enable(void)
{
bucketdisable = vm_page_count_min();
}
/*
* Initialize bucket_zones, the array of zones of buckets of various sizes.
*
* For each zone, calculate the memory required for each bucket, consisting
* of the header and an array of pointers.
*/
static void
bucket_init(void)
{
struct uma_bucket_zone *ubz;
int size;
for (ubz = &bucket_zones[0]; ubz->ubz_entries != 0; ubz++) {
size = roundup(sizeof(struct uma_bucket), sizeof(void *));
size += sizeof(void *) * ubz->ubz_entries;
ubz->ubz_zone = uma_zcreate(ubz->ubz_name, size,
NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
UMA_ZONE_MTXCLASS | UMA_ZFLAG_BUCKET | UMA_ZONE_NUMA);
}
}
/*
* Given a desired number of entries for a bucket, return the zone from which
* to allocate the bucket.
*/
static struct uma_bucket_zone *
bucket_zone_lookup(int entries)
{
struct uma_bucket_zone *ubz;
for (ubz = &bucket_zones[0]; ubz->ubz_entries != 0; ubz++)
if (ubz->ubz_entries >= entries)
return (ubz);
ubz--;
return (ubz);
}
static int
bucket_select(int size)
{
struct uma_bucket_zone *ubz;
ubz = &bucket_zones[0];
if (size > ubz->ubz_maxsize)
return MAX((ubz->ubz_maxsize * ubz->ubz_entries) / size, 1);
for (; ubz->ubz_entries != 0; ubz++)
if (ubz->ubz_maxsize < size)
break;
ubz--;
return (ubz->ubz_entries);
}
static uma_bucket_t
bucket_alloc(uma_zone_t zone, void *udata, int flags)
{
struct uma_bucket_zone *ubz;
uma_bucket_t bucket;
/*
* This is to stop us from allocating per-CPU buckets while we're
* running out of vm.boot_pages. Otherwise, we would exhaust the
* boot pages. This also prevents us from allocating buckets in
* low memory situations.
*/
if (bucketdisable)
return (NULL);
/*
* To limit bucket recursion we store the original zone flags
* in a cookie passed via zalloc_arg/zfree_arg. This allows the
* NOVM flag to persist even through deep recursions. We also
* store ZFLAG_BUCKET once we have recursed attempting to allocate
* a bucket for a bucket zone so we do not allow infinite bucket
* recursion. This cookie will even persist to frees of unused
* buckets via the allocation path or bucket allocations in the
* free path.
*/
if ((zone->uz_flags & UMA_ZFLAG_BUCKET) == 0)
udata = (void *)(uintptr_t)zone->uz_flags;
else {
if ((uintptr_t)udata & UMA_ZFLAG_BUCKET)
return (NULL);
udata = (void *)((uintptr_t)udata | UMA_ZFLAG_BUCKET);
}
if ((uintptr_t)udata & UMA_ZFLAG_CACHEONLY)
flags |= M_NOVM;
ubz = bucket_zone_lookup(zone->uz_count);
if (ubz->ubz_zone == zone && (ubz + 1)->ubz_entries != 0)
ubz++;
bucket = uma_zalloc_arg(ubz->ubz_zone, udata, flags);
if (bucket) {
#ifdef INVARIANTS
bzero(bucket->ub_bucket, sizeof(void *) * ubz->ubz_entries);
#endif
bucket->ub_cnt = 0;
bucket->ub_entries = ubz->ubz_entries;
}
return (bucket);
}
static void
bucket_free(uma_zone_t zone, uma_bucket_t bucket, void *udata)
{
struct uma_bucket_zone *ubz;
KASSERT(bucket->ub_cnt == 0,
("bucket_free: Freeing a non free bucket."));
if ((zone->uz_flags & UMA_ZFLAG_BUCKET) == 0)
udata = (void *)(uintptr_t)zone->uz_flags;
ubz = bucket_zone_lookup(bucket->ub_entries);
uma_zfree_arg(ubz->ubz_zone, bucket, udata);
}
static void
bucket_zone_drain(void)
{
struct uma_bucket_zone *ubz;
for (ubz = &bucket_zones[0]; ubz->ubz_entries != 0; ubz++)
zone_drain(ubz->ubz_zone);
}
static uma_bucket_t
zone_try_fetch_bucket(uma_zone_t zone, uma_zone_domain_t zdom, const bool ws)
{
uma_bucket_t bucket;
ZONE_LOCK_ASSERT(zone);
if ((bucket = LIST_FIRST(&zdom->uzd_buckets)) != NULL) {
MPASS(zdom->uzd_nitems >= bucket->ub_cnt);
LIST_REMOVE(bucket, ub_link);
zdom->uzd_nitems -= bucket->ub_cnt;
if (ws && zdom->uzd_imin > zdom->uzd_nitems)
zdom->uzd_imin = zdom->uzd_nitems;
zone->uz_bkt_count -= bucket->ub_cnt;
}
return (bucket);
}
static void
zone_put_bucket(uma_zone_t zone, uma_zone_domain_t zdom, uma_bucket_t bucket,
const bool ws)
{
ZONE_LOCK_ASSERT(zone);
KASSERT(zone->uz_bkt_count < zone->uz_bkt_max, ("%s: zone %p overflow",
__func__, zone));
LIST_INSERT_HEAD(&zdom->uzd_buckets, bucket, ub_link);
zdom->uzd_nitems += bucket->ub_cnt;
if (ws && zdom->uzd_imax < zdom->uzd_nitems)
zdom->uzd_imax = zdom->uzd_nitems;
zone->uz_bkt_count += bucket->ub_cnt;
}
static void
zone_log_warning(uma_zone_t zone)
{
static const struct timeval warninterval = { 300, 0 };
if (!zone_warnings || zone->uz_warning == NULL)
return;
if (ratecheck(&zone->uz_ratecheck, &warninterval))
printf("[zone: %s] %s\n", zone->uz_name, zone->uz_warning);
}
static inline void
zone_maxaction(uma_zone_t zone)
{
if (zone->uz_maxaction.ta_func != NULL)
taskqueue_enqueue(taskqueue_thread, &zone->uz_maxaction);
}
/*
* Routine called by timeout() to fire off time-interval-based
* calculations (stats, hash size, etc.).
*
* Arguments:
* arg Unused
*
* Returns:
* Nothing
*/
static void
uma_timeout(void *unused)
{
bucket_enable();
zone_foreach(zone_timeout);
/* Reschedule this event */
callout_reset(&uma_callout, UMA_TIMEOUT * hz, uma_timeout, NULL);
}
/*
* Update the working set size estimate for the zone's bucket cache.
* The constants chosen here are somewhat arbitrary. With an update period of
* 20s (UMA_TIMEOUT), this estimate is dominated by zone activity over the
* last 100s.
*/
static void
zone_domain_update_wss(uma_zone_domain_t zdom)
{
long wss;
MPASS(zdom->uzd_imax >= zdom->uzd_imin);
wss = zdom->uzd_imax - zdom->uzd_imin;
zdom->uzd_imax = zdom->uzd_imin = zdom->uzd_nitems;
zdom->uzd_wss = (3 * wss + 2 * zdom->uzd_wss) / 5;
}
/*
* Routine to perform timeout driven calculations. This expands the
* hashes and does per cpu statistics aggregation.
*
* Returns nothing.
*/
static void
zone_timeout(uma_zone_t zone)
{
uma_keg_t keg = zone->uz_keg;
KEG_LOCK(keg);
/*
* Expand the keg hash table.
*
* This is done if the number of slabs is larger than the hash size.
* The goal here is to completely eliminate collisions. This may be
* a little aggressive; should we allow up to two collisions instead?
*/
if (keg->uk_flags & UMA_ZONE_HASH &&
keg->uk_pages / keg->uk_ppera >= keg->uk_hash.uh_hashsize) {
struct uma_hash newhash;
struct uma_hash oldhash;
int ret;
/*
* This is so involved because allocating and freeing
* while the keg lock is held will lead to deadlock.
* I have to do everything in stages and check for
* races.
*/
newhash = keg->uk_hash;
KEG_UNLOCK(keg);
ret = hash_alloc(&newhash);
KEG_LOCK(keg);
if (ret) {
if (hash_expand(&keg->uk_hash, &newhash)) {
oldhash = keg->uk_hash;
keg->uk_hash = newhash;
} else
oldhash = newhash;
KEG_UNLOCK(keg);
hash_free(&oldhash);
return;
}
}
for (int i = 0; i < vm_ndomains; i++)
zone_domain_update_wss(&zone->uz_domain[i]);
KEG_UNLOCK(keg);
}
/*
* Allocate and zero fill the next sized hash table from the appropriate
* backing store.
*
* Arguments:
* hash A new hash structure with the old hash size in uh_hashsize
*
* Returns:
* 1 on success and 0 on failure.
*/
static int
hash_alloc(struct uma_hash *hash)
{
int oldsize;
size_t alloc;
oldsize = hash->uh_hashsize;
/* We're just going to go to a power of two greater */
if (oldsize) {
hash->uh_hashsize = oldsize * 2;
alloc = sizeof(hash->uh_slab_hash[0]) * hash->uh_hashsize;
hash->uh_slab_hash = (struct slabhead *)malloc(alloc,
M_UMAHASH, M_NOWAIT);
} else {
alloc = sizeof(hash->uh_slab_hash[0]) * UMA_HASH_SIZE_INIT;
hash->uh_slab_hash = zone_alloc_item(hashzone, NULL,
UMA_ANYDOMAIN, M_WAITOK);
hash->uh_hashsize = UMA_HASH_SIZE_INIT;
}
if (hash->uh_slab_hash) {
bzero(hash->uh_slab_hash, alloc);
hash->uh_hashmask = hash->uh_hashsize - 1;
return (1);
}
return (0);
}
/*
* Expands the hash table for HASH zones. This is done from zone_timeout
* to reduce collisions. This must not be done in the regular allocation
* path; otherwise, we can recurse on the VM while allocating pages.
*
* Arguments:
* oldhash The hash you want to expand
* newhash The hash structure for the new table
*
* Returns:
* 1 on success and 0 on failure.
*/
static int
hash_expand(struct uma_hash *oldhash, struct uma_hash *newhash)
{
uma_slab_t slab;
int hval;
int i;
if (!newhash->uh_slab_hash)
return (0);
if (oldhash->uh_hashsize >= newhash->uh_hashsize)
return (0);
/*
* I need to investigate hash algorithms for resizing without a
* full rehash.
*/
for (i = 0; i < oldhash->uh_hashsize; i++)
while (!SLIST_EMPTY(&oldhash->uh_slab_hash[i])) {
slab = SLIST_FIRST(&oldhash->uh_slab_hash[i]);
SLIST_REMOVE_HEAD(&oldhash->uh_slab_hash[i], us_hlink);
hval = UMA_HASH(newhash, slab->us_data);
SLIST_INSERT_HEAD(&newhash->uh_slab_hash[hval],
slab, us_hlink);
}
return (1);
}
/*
* Free the hash bucket to the appropriate backing store.
*
* Arguments:
* hash The hash whose backing storage we're freeing
*
* Returns:
* Nothing
*/
static void
hash_free(struct uma_hash *hash)
{
if (hash->uh_slab_hash == NULL)
return;
if (hash->uh_hashsize == UMA_HASH_SIZE_INIT)
zone_free_item(hashzone, hash->uh_slab_hash, NULL, SKIP_NONE);
else
free(hash->uh_slab_hash, M_UMAHASH);
}
/*
* Frees all outstanding items in a bucket
*
* Arguments:
* zone The zone to free to, must be unlocked.
* bucket The free/alloc bucket with items, cpu queue must be locked.
*
* Returns:
* Nothing
*/
static void
bucket_drain(uma_zone_t zone, uma_bucket_t bucket)
{
int i;
if (bucket == NULL)
return;
if (zone->uz_fini)
for (i = 0; i < bucket->ub_cnt; i++)
zone->uz_fini(bucket->ub_bucket[i], zone->uz_size);
zone->uz_release(zone->uz_arg, bucket->ub_bucket, bucket->ub_cnt);
if (zone->uz_max_items > 0) {
ZONE_LOCK(zone);
zone->uz_items -= bucket->ub_cnt;
if (zone->uz_sleepers && zone->uz_items < zone->uz_max_items)
wakeup_one(zone);
ZONE_UNLOCK(zone);
}
bucket->ub_cnt = 0;
}
/*
* Drains the per cpu caches for a zone.
*
* NOTE: This may only be called while the zone is being torn down, and not
* during normal operation. This is necessary so that we do not have
* to migrate CPUs to drain the per-CPU caches.
*
* Arguments:
* zone The zone to drain, must be unlocked.
*
* Returns:
* Nothing
*/
static void
cache_drain(uma_zone_t zone)
{
uma_cache_t cache;
int cpu;
/*
* XXX: It is safe to not lock the per-CPU caches, because we're
* tearing down the zone anyway. I.e., there will be no further use
* of the caches at this point.
*
* XXX: It would be good to be able to assert that the zone is being
* torn down to prevent improper use of cache_drain().
*
* XXX: We lock the zone before passing into bucket_cache_drain() as
* it is used elsewhere. Should the tear-down path be made special
* there in some form?
*/
CPU_FOREACH(cpu) {
cache = &zone->uz_cpu[cpu];
bucket_drain(zone, cache->uc_allocbucket);
bucket_drain(zone, cache->uc_freebucket);
if (cache->uc_allocbucket != NULL)
bucket_free(zone, cache->uc_allocbucket, NULL);
if (cache->uc_freebucket != NULL)
bucket_free(zone, cache->uc_freebucket, NULL);
cache->uc_allocbucket = cache->uc_freebucket = NULL;
}
ZONE_LOCK(zone);
bucket_cache_drain(zone);
ZONE_UNLOCK(zone);
}
static void
cache_shrink(uma_zone_t zone)
{
if (zone->uz_flags & UMA_ZFLAG_INTERNAL)
return;
ZONE_LOCK(zone);
zone->uz_count = (zone->uz_count_min + zone->uz_count) / 2;
ZONE_UNLOCK(zone);
}
static void
cache_drain_safe_cpu(uma_zone_t zone)
{
uma_cache_t cache;
uma_bucket_t b1, b2;
int domain;
if (zone->uz_flags & UMA_ZFLAG_INTERNAL)
return;
b1 = b2 = NULL;
ZONE_LOCK(zone);
critical_enter();
if (zone->uz_flags & UMA_ZONE_NUMA)
domain = PCPU_GET(domain);
else
domain = 0;
cache = &zone->uz_cpu[curcpu];
if (cache->uc_allocbucket) {
if (cache->uc_allocbucket->ub_cnt != 0)
zone_put_bucket(zone, &zone->uz_domain[domain],
cache->uc_allocbucket, false);
else
b1 = cache->uc_allocbucket;
cache->uc_allocbucket = NULL;
}
if (cache->uc_freebucket) {
if (cache->uc_freebucket->ub_cnt != 0)
zone_put_bucket(zone, &zone->uz_domain[domain],
cache->uc_freebucket, false);
else
b2 = cache->uc_freebucket;
cache->uc_freebucket = NULL;
}
critical_exit();
ZONE_UNLOCK(zone);
if (b1)
bucket_free(zone, b1, NULL);
if (b2)
bucket_free(zone, b2, NULL);
}
/*
* Safely drain the per-CPU caches of a zone (or of all zones) into the
* per-domain bucket caches.
* This is an expensive call because it needs to bind to each CPU
* one by one and enter a critical section on each of them in order
* to safely access their cache buckets.
* The zone lock must not be held when calling this function.
*/
static void
cache_drain_safe(uma_zone_t zone)
{
int cpu;
/*
* Politely shrinking the bucket sizes was not enough; shrink aggressively.
*/
if (zone)
cache_shrink(zone);
else
zone_foreach(cache_shrink);
CPU_FOREACH(cpu) {
thread_lock(curthread);
sched_bind(curthread, cpu);
thread_unlock(curthread);
if (zone)
cache_drain_safe_cpu(zone);
else
zone_foreach(cache_drain_safe_cpu);
}
thread_lock(curthread);
sched_unbind(curthread);
thread_unlock(curthread);
}
/*
* Drain the cached buckets from a zone. Expects a locked zone on entry.
*/
static void
bucket_cache_drain(uma_zone_t zone)
{
uma_zone_domain_t zdom;
uma_bucket_t bucket;
int i;
/*
* Drain the bucket queues and free the buckets.
*/
for (i = 0; i < vm_ndomains; i++) {
zdom = &zone->uz_domain[i];
while ((bucket = zone_try_fetch_bucket(zone, zdom, false)) !=
NULL) {
ZONE_UNLOCK(zone);
bucket_drain(zone, bucket);
bucket_free(zone, bucket, NULL);
ZONE_LOCK(zone);
}
}
/*
* Shrink the bucket size further. The price of a single zone lock
* collision is probably lower than the price of a global cache drain.
*/
if (zone->uz_count > zone->uz_count_min)
zone->uz_count--;
}
static void
keg_free_slab(uma_keg_t keg, uma_slab_t slab, int start)
{
uint8_t *mem;
int i;
uint8_t flags;
CTR4(KTR_UMA, "keg_free_slab keg %s(%p) slab %p, returning %d bytes",
keg->uk_name, keg, slab, PAGE_SIZE * keg->uk_ppera);
mem = slab->us_data;
flags = slab->us_flags;
i = start;
if (keg->uk_fini != NULL) {
for (i--; i > -1; i--)
#ifdef INVARIANTS
/*
* trash_fini implies that dtor was trash_dtor. trash_fini
* would check that memory hasn't been modified since free,
* which executed trash_dtor.
* That's why we need to run uma_dbg_kskip() check here,
* albeit we don't make skip check for other init/fini
* invocations.
*/
if (!uma_dbg_kskip(keg, slab->us_data + (keg->uk_rsize * i)) ||
keg->uk_fini != trash_fini)
#endif
keg->uk_fini(slab->us_data + (keg->uk_rsize * i),
keg->uk_size);
}
if (keg->uk_flags & UMA_ZONE_OFFPAGE)
zone_free_item(keg->uk_slabzone, slab, NULL, SKIP_NONE);
keg->uk_freef(mem, PAGE_SIZE * keg->uk_ppera, flags);
uma_total_dec(PAGE_SIZE * keg->uk_ppera);
}
/*
* Frees pages from a keg back to the system. This is done on demand from
* the pageout daemon.
*
* Returns nothing.
*/
static void
keg_drain(uma_keg_t keg)
{
struct slabhead freeslabs = { 0 };
uma_domain_t dom;
uma_slab_t slab, tmp;
int i;
/*
* We don't want to take pages from statically allocated kegs at this
* time
*/
if (keg->uk_flags & UMA_ZONE_NOFREE || keg->uk_freef == NULL)
return;
CTR3(KTR_UMA, "keg_drain %s(%p) free items: %u",
keg->uk_name, keg, keg->uk_free);
KEG_LOCK(keg);
if (keg->uk_free == 0)
goto finished;
for (i = 0; i < vm_ndomains; i++) {
dom = &keg->uk_domain[i];
LIST_FOREACH_SAFE(slab, &dom->ud_free_slab, us_link, tmp) {
/* We have nowhere to free these to. */
if (slab->us_flags & UMA_SLAB_BOOT)
continue;
LIST_REMOVE(slab, us_link);
keg->uk_pages -= keg->uk_ppera;
keg->uk_free -= keg->uk_ipers;
if (keg->uk_flags & UMA_ZONE_HASH)
UMA_HASH_REMOVE(&keg->uk_hash, slab,
slab->us_data);
SLIST_INSERT_HEAD(&freeslabs, slab, us_hlink);
}
}
finished:
KEG_UNLOCK(keg);
while ((slab = SLIST_FIRST(&freeslabs)) != NULL) {
SLIST_REMOVE(&freeslabs, slab, uma_slab, us_hlink);
keg_free_slab(keg, slab, keg->uk_ipers);
}
}
static void
zone_drain_wait(uma_zone_t zone, int waitok)
{
/*
* Set draining to interlock with zone_dtor() so we can release our
* locks as we go. Only dtor() should do a WAITOK call since it
* is the only call that knows the structure will still be available
* when it wakes up.
*/
ZONE_LOCK(zone);
while (zone->uz_flags & UMA_ZFLAG_DRAINING) {
if (waitok == M_NOWAIT)
goto out;
msleep(zone, zone->uz_lockptr, PVM, "zonedrain", 1);
}
zone->uz_flags |= UMA_ZFLAG_DRAINING;
bucket_cache_drain(zone);
ZONE_UNLOCK(zone);
/*
* The DRAINING flag protects us from being freed while
* we're running. Normally the uma_rwlock would protect us but we
* must be able to release and acquire the right lock for each keg.
*/
keg_drain(zone->uz_keg);
ZONE_LOCK(zone);
zone->uz_flags &= ~UMA_ZFLAG_DRAINING;
wakeup(zone);
out:
ZONE_UNLOCK(zone);
}
void
zone_drain(uma_zone_t zone)
{
zone_drain_wait(zone, M_NOWAIT);
}
/*
* Allocate a new slab for a keg. This does not insert the slab onto a list.
* If the allocation was successful, the keg lock will be held upon return,
* otherwise the keg will be left unlocked.
*
* Arguments:
* flags Wait flags for the item initialization routine
* aflags Wait flags for the slab allocation
*
* Returns:
* The slab that was allocated or NULL if there is no memory and the
* caller specified M_NOWAIT.
*/
static uma_slab_t
keg_alloc_slab(uma_keg_t keg, uma_zone_t zone, int domain, int flags,
int aflags)
{
uma_alloc allocf;
uma_slab_t slab;
unsigned long size;
uint8_t *mem;
uint8_t sflags;
int i;
KASSERT(domain >= 0 && domain < vm_ndomains,
("keg_alloc_slab: domain %d out of range", domain));
KEG_LOCK_ASSERT(keg);
MPASS(zone->uz_lockptr == &keg->uk_lock);
allocf = keg->uk_allocf;
KEG_UNLOCK(keg);
slab = NULL;
mem = NULL;
if (keg->uk_flags & UMA_ZONE_OFFPAGE) {
slab = zone_alloc_item(keg->uk_slabzone, NULL, domain, aflags);
if (slab == NULL)
goto out;
}
/*
* This reproduces the old vm_zone behavior of zero filling pages the
* first time they are added to a zone.
*
* Malloced items are zeroed in uma_zalloc.
*/
if ((keg->uk_flags & UMA_ZONE_MALLOC) == 0)
aflags |= M_ZERO;
else
aflags &= ~M_ZERO;
if (keg->uk_flags & UMA_ZONE_NODUMP)
aflags |= M_NODUMP;
/* zone is passed for legacy reasons. */
size = keg->uk_ppera * PAGE_SIZE;
mem = allocf(zone, size, domain, &sflags, aflags);
if (mem == NULL) {
if (keg->uk_flags & UMA_ZONE_OFFPAGE)
zone_free_item(keg->uk_slabzone, slab, NULL, SKIP_NONE);
slab = NULL;
goto out;
}
uma_total_inc(size);
/* Point the slab into the allocated memory */
if (!(keg->uk_flags & UMA_ZONE_OFFPAGE))
slab = (uma_slab_t )(mem + keg->uk_pgoff);
if (keg->uk_flags & UMA_ZONE_VTOSLAB)
for (i = 0; i < keg->uk_ppera; i++)
vsetslab((vm_offset_t)mem + (i * PAGE_SIZE), slab);
slab->us_keg = keg;
slab->us_data = mem;
slab->us_freecount = keg->uk_ipers;
slab->us_flags = sflags;
slab->us_domain = domain;
BIT_FILL(SLAB_SETSIZE, &slab->us_free);
#ifdef INVARIANTS
BIT_ZERO(SLAB_SETSIZE, &slab->us_debugfree);
#endif
if (keg->uk_init != NULL) {
for (i = 0; i < keg->uk_ipers; i++)
if (keg->uk_init(slab->us_data + (keg->uk_rsize * i),
keg->uk_size, flags) != 0)
break;
if (i != keg->uk_ipers) {
keg_free_slab(keg, slab, i);
slab = NULL;
goto out;
}
}
KEG_LOCK(keg);
CTR3(KTR_UMA, "keg_alloc_slab: allocated slab %p for %s(%p)",
slab, keg->uk_name, keg);
if (keg->uk_flags & UMA_ZONE_HASH)
UMA_HASH_INSERT(&keg->uk_hash, slab, mem);
keg->uk_pages += keg->uk_ppera;
keg->uk_free += keg->uk_ipers;
out:
return (slab);
}
/*
* This function is intended to be used early on in place of page_alloc() so
* that we may use the boot time page cache to satisfy allocations before
* the VM is ready.
*/
static void *
startup_alloc(uma_zone_t zone, vm_size_t bytes, int domain, uint8_t *pflag,
int wait)
{
uma_keg_t keg;
void *mem;
int pages;
keg = zone->uz_keg;
/*
* If we are in BOOT_BUCKETS or higher, then switch to the real
* allocator. Zones with page sized slabs switch at BOOT_PAGEALLOC.
*/
switch (booted) {
case BOOT_COLD:
case BOOT_STRAPPED:
break;
case BOOT_PAGEALLOC:
if (keg->uk_ppera > 1)
break;
case BOOT_BUCKETS:
case BOOT_RUNNING:
#ifdef UMA_MD_SMALL_ALLOC
keg->uk_allocf = (keg->uk_ppera > 1) ?
page_alloc : uma_small_alloc;
#else
keg->uk_allocf = page_alloc;
#endif
return keg->uk_allocf(zone, bytes, domain, pflag, wait);
}
/*
* Check our small startup cache to see if it has pages remaining.
*/
pages = howmany(bytes, PAGE_SIZE);
KASSERT(pages > 0, ("%s can't reserve 0 pages", __func__));
if (pages > boot_pages)
panic("UMA zone \"%s\": Increase vm.boot_pages", zone->uz_name);
#ifdef DIAGNOSTIC
printf("%s from \"%s\", %d boot pages left\n", __func__, zone->uz_name,
boot_pages);
#endif
mem = bootmem;
boot_pages -= pages;
bootmem += pages * PAGE_SIZE;
*pflag = UMA_SLAB_BOOT;
return (mem);
}
/*
* Allocates a number of pages from the system
*
* Arguments:
* bytes The number of bytes requested
* wait Shall we wait?
*
* Returns:
* A pointer to the allocated memory, or NULL if M_NOWAIT is set.
*/
static void *
page_alloc(uma_zone_t zone, vm_size_t bytes, int domain, uint8_t *pflag,
int wait)
{
void *p; /* Returned page */
*pflag = UMA_SLAB_KERNEL;
p = (void *)kmem_malloc_domainset(DOMAINSET_FIXED(domain), bytes, wait);
return (p);
}
static void *
pcpu_page_alloc(uma_zone_t zone, vm_size_t bytes, int domain, uint8_t *pflag,
int wait)
{
struct pglist alloctail;
vm_offset_t addr, zkva;
int cpu, flags;
vm_page_t p, p_next;
#ifdef NUMA
struct pcpu *pc;
#endif
MPASS(bytes == (mp_maxid + 1) * PAGE_SIZE);
TAILQ_INIT(&alloctail);
flags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED | VM_ALLOC_NOOBJ |
malloc2vm_flags(wait);
*pflag = UMA_SLAB_KERNEL;
for (cpu = 0; cpu <= mp_maxid; cpu++) {
if (CPU_ABSENT(cpu)) {
p = vm_page_alloc(NULL, 0, flags);
} else {
#ifndef NUMA
p = vm_page_alloc(NULL, 0, flags);
#else
pc = pcpu_find(cpu);
p = vm_page_alloc_domain(NULL, 0, pc->pc_domain, flags);
if (__predict_false(p == NULL))
p = vm_page_alloc(NULL, 0, flags);
#endif
}
if (__predict_false(p == NULL))
goto fail;
TAILQ_INSERT_TAIL(&alloctail, p, listq);
}
if ((addr = kva_alloc(bytes)) == 0)
goto fail;
zkva = addr;
TAILQ_FOREACH(p, &alloctail, listq) {
pmap_qenter(zkva, &p, 1);
zkva += PAGE_SIZE;
}
return ((void*)addr);
fail:
TAILQ_FOREACH_SAFE(p, &alloctail, listq, p_next) {
vm_page_unwire(p, PQ_NONE);
vm_page_free(p);
}
return (NULL);
}
/*
* Allocates a number of pages from within an object
*
* Arguments:
* bytes The number of bytes requested
* wait Shall we wait?
*
* Returns:
* A pointer to the allocated memory, or NULL if M_NOWAIT is set.
*/
static void *
noobj_alloc(uma_zone_t zone, vm_size_t bytes, int domain, uint8_t *flags,
int wait)
{
TAILQ_HEAD(, vm_page) alloctail;
u_long npages;
vm_offset_t retkva, zkva;
vm_page_t p, p_next;
uma_keg_t keg;
TAILQ_INIT(&alloctail);
keg = zone->uz_keg;
npages = howmany(bytes, PAGE_SIZE);
while (npages > 0) {
p = vm_page_alloc_domain(NULL, 0, domain, VM_ALLOC_INTERRUPT |
VM_ALLOC_WIRED | VM_ALLOC_NOOBJ |
((wait & M_WAITOK) != 0 ? VM_ALLOC_WAITOK :
VM_ALLOC_NOWAIT));
if (p != NULL) {
/*
* Since the page does not belong to an object, its
* listq is unused.
*/
TAILQ_INSERT_TAIL(&alloctail, p, listq);
npages--;
continue;
}
/*
* Page allocation failed, free intermediate pages and
* exit.
*/
TAILQ_FOREACH_SAFE(p, &alloctail, listq, p_next) {
vm_page_unwire(p, PQ_NONE);
vm_page_free(p);
}
return (NULL);
}
*flags = UMA_SLAB_PRIV;
zkva = keg->uk_kva +
atomic_fetchadd_long(&keg->uk_offset, round_page(bytes));
retkva = zkva;
TAILQ_FOREACH(p, &alloctail, listq) {
pmap_qenter(zkva, &p, 1);
zkva += PAGE_SIZE;
}
return ((void *)retkva);
}
/*
* Frees a number of pages to the system
*
* Arguments:
* mem A pointer to the memory to be freed
* size The size of the memory being freed
* flags The original p->us_flags field
*
* Returns:
* Nothing
*/
static void
page_free(void *mem, vm_size_t size, uint8_t flags)
{
if ((flags & UMA_SLAB_KERNEL) == 0)
panic("UMA: page_free used with invalid flags %x", flags);
kmem_free((vm_offset_t)mem, size);
}
/*
* Frees pcpu zone allocations
*
* Arguments:
* mem A pointer to the memory to be freed
* size The size of the memory being freed
* flags The original p->us_flags field
*
* Returns:
* Nothing
*/
static void
pcpu_page_free(void *mem, vm_size_t size, uint8_t flags)
{
vm_offset_t sva, curva;
vm_paddr_t paddr;
vm_page_t m;
MPASS(size == (mp_maxid+1)*PAGE_SIZE);
sva = (vm_offset_t)mem;
for (curva = sva; curva < sva + size; curva += PAGE_SIZE) {
paddr = pmap_kextract(curva);
m = PHYS_TO_VM_PAGE(paddr);
vm_page_unwire(m, PQ_NONE);
vm_page_free(m);
}
pmap_qremove(sva, size >> PAGE_SHIFT);
kva_free(sva, size);
}
/*
* Zero fill initializer
*
* Arguments/Returns follow uma_init specifications
*/
static int
zero_init(void *mem, int size, int flags)
{
bzero(mem, size);
return (0);
}
/*
* Finish creating a small uma keg. This calculates ipers, and the keg size.
*
* Arguments
* keg The zone we should initialize
*
* Returns
* Nothing
*/
static void
keg_small_init(uma_keg_t keg)
{
u_int rsize;
u_int memused;
u_int wastedspace;
u_int shsize;
u_int slabsize;
if (keg->uk_flags & UMA_ZONE_PCPU) {
u_int ncpus = (mp_maxid + 1) ? (mp_maxid + 1) : MAXCPU;
slabsize = UMA_PCPU_ALLOC_SIZE;
keg->uk_ppera = ncpus;
} else {
slabsize = UMA_SLAB_SIZE;
keg->uk_ppera = 1;
}
/*
* Calculate the size of each allocation (rsize) according to
* alignment. If the requested size is smaller than we have
* allocation bits for we round it up.
*/
rsize = keg->uk_size;
if (rsize < slabsize / SLAB_SETSIZE)
rsize = slabsize / SLAB_SETSIZE;
if (rsize & keg->uk_align)
rsize = (rsize & ~keg->uk_align) + (keg->uk_align + 1);
keg->uk_rsize = rsize;
KASSERT((keg->uk_flags & UMA_ZONE_PCPU) == 0 ||
keg->uk_rsize < UMA_PCPU_ALLOC_SIZE,
("%s: size %u too large", __func__, keg->uk_rsize));
if (keg->uk_flags & UMA_ZONE_OFFPAGE)
shsize = 0;
else
shsize = SIZEOF_UMA_SLAB;
if (rsize <= slabsize - shsize)
keg->uk_ipers = (slabsize - shsize) / rsize;
else {
/* Handle special case when we have 1 item per slab, so
* alignment requirement can be relaxed. */
KASSERT(keg->uk_size <= slabsize - shsize,
("%s: size %u greater than slab", __func__, keg->uk_size));
keg->uk_ipers = 1;
}
KASSERT(keg->uk_ipers > 0 && keg->uk_ipers <= SLAB_SETSIZE,
("%s: keg->uk_ipers %u", __func__, keg->uk_ipers));
memused = keg->uk_ipers * rsize + shsize;
wastedspace = slabsize - memused;
/*
* We can't do OFFPAGE if we're internal or if we've been
* asked to not go to the VM for buckets. If we do this we
* may end up going to the VM for slabs which we do not
* want to do if we're UMA_ZFLAG_CACHEONLY as a result
* of UMA_ZONE_VM, which clearly forbids it.
*/
if ((keg->uk_flags & UMA_ZFLAG_INTERNAL) ||
(keg->uk_flags & UMA_ZFLAG_CACHEONLY))
return;
/*
* See if using an OFFPAGE slab will limit our waste. Only do
* this if it permits more items per-slab.
*
* XXX We could try growing slabsize to limit max waste as well.
* Historically this was not done because the VM could not
* efficiently handle contiguous allocations.
*/
if ((wastedspace >= slabsize / UMA_MAX_WASTE) &&
(keg->uk_ipers < (slabsize / keg->uk_rsize))) {
keg->uk_ipers = slabsize / keg->uk_rsize;
KASSERT(keg->uk_ipers > 0 && keg->uk_ipers <= SLAB_SETSIZE,
("%s: keg->uk_ipers %u", __func__, keg->uk_ipers));
CTR6(KTR_UMA, "UMA decided we need offpage slab headers for "
"keg: %s(%p), calculated wastedspace = %d, "
"maximum wasted space allowed = %d, "
"calculated ipers = %d, "
"new wasted space = %d\n", keg->uk_name, keg, wastedspace,
slabsize / UMA_MAX_WASTE, keg->uk_ipers,
slabsize - keg->uk_ipers * keg->uk_rsize);
keg->uk_flags |= UMA_ZONE_OFFPAGE;
}
if ((keg->uk_flags & UMA_ZONE_OFFPAGE) &&
(keg->uk_flags & UMA_ZONE_VTOSLAB) == 0)
keg->uk_flags |= UMA_ZONE_HASH;
}
/*
* Finish creating a large (> UMA_SLAB_SIZE) uma keg. Just give in and do
* OFFPAGE for now. Once more dynamic slab sizes are allowed this will be
* more complicated.
*
* Arguments
* keg The keg we should initialize
*
* Returns
* Nothing
*/
static void
keg_large_init(uma_keg_t keg)
{
KASSERT(keg != NULL, ("Keg is null in keg_large_init"));
KASSERT((keg->uk_flags & UMA_ZONE_PCPU) == 0,
("%s: Cannot large-init a UMA_ZONE_PCPU keg", __func__));
keg->uk_ppera = howmany(keg->uk_size, PAGE_SIZE);
keg->uk_ipers = 1;
keg->uk_rsize = keg->uk_size;
/* Check whether we have enough space to not do OFFPAGE. */
if ((keg->uk_flags & UMA_ZONE_OFFPAGE) == 0 &&
PAGE_SIZE * keg->uk_ppera - keg->uk_rsize < SIZEOF_UMA_SLAB) {
/*
* We can't do OFFPAGE if we're internal, in which case
* we need an extra page per allocation to contain the
* slab header.
*/
if ((keg->uk_flags & UMA_ZFLAG_INTERNAL) == 0)
keg->uk_flags |= UMA_ZONE_OFFPAGE;
else
keg->uk_ppera++;
}
if ((keg->uk_flags & UMA_ZONE_OFFPAGE) &&
(keg->uk_flags & UMA_ZONE_VTOSLAB) == 0)
keg->uk_flags |= UMA_ZONE_HASH;
}
static void
keg_cachespread_init(uma_keg_t keg)
{
int alignsize;
int trailer;
int pages;
int rsize;
KASSERT((keg->uk_flags & UMA_ZONE_PCPU) == 0,
("%s: Cannot cachespread-init a UMA_ZONE_PCPU keg", __func__));
alignsize = keg->uk_align + 1;
rsize = keg->uk_size;
/*
* We want one item to start on every align boundary in a page. To
* do this we will span pages. We will also extend the item by the
* size of align if it is an even multiple of align. Otherwise, it
* would fall on the same boundary every time.
*/
if (rsize & keg->uk_align)
rsize = (rsize & ~keg->uk_align) + alignsize;
if ((rsize & alignsize) == 0)
rsize += alignsize;
trailer = rsize - keg->uk_size;
pages = (rsize * (PAGE_SIZE / alignsize)) / PAGE_SIZE;
pages = MIN(pages, (128 * 1024) / PAGE_SIZE);
keg->uk_rsize = rsize;
keg->uk_ppera = pages;
keg->uk_ipers = ((pages * PAGE_SIZE) + trailer) / rsize;
keg->uk_flags |= UMA_ZONE_OFFPAGE | UMA_ZONE_VTOSLAB;
KASSERT(keg->uk_ipers <= SLAB_SETSIZE,
("%s: keg->uk_ipers too high (%d), increase max_ipers", __func__,
keg->uk_ipers));
}
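The sizing arithmetic in keg_cachespread_init() can be exercised outside the kernel. Below is a minimal userland sketch, assuming a 4 KB page size and the same 128 KB slab cap; cachespread_sizing() is a hypothetical helper mirroring the computation above, not a kernel API:

```c
#include <assert.h>

#define PAGE_SIZE 4096
#define MIN(a, b) ((a) < (b) ? (a) : (b))

struct sizing { int rsize, pages, ipers; };

/* Mirror of the keg_cachespread_init() math: round the item up to the
 * alignment boundary, skew it by one alignment unit if it would land on
 * the same boundary in every page, then size the multi-page slab. */
static struct sizing
cachespread_sizing(int size, int align_mask)
{
	int alignsize = align_mask + 1;
	int rsize = size;

	if (rsize & align_mask)
		rsize = (rsize & ~align_mask) + alignsize;
	if ((rsize & alignsize) == 0)
		rsize += alignsize;

	int trailer = rsize - size;
	int pages = (rsize * (PAGE_SIZE / alignsize)) / PAGE_SIZE;
	pages = MIN(pages, (128 * 1024) / PAGE_SIZE);

	struct sizing s;
	s.rsize = rsize;
	s.pages = pages;
	s.ipers = ((pages * PAGE_SIZE) + trailer) / rsize;
	return (s);
}
```

With 64-byte cache-line spreading (align_mask 63), for example, a 192-byte item spans a 3-page slab holding 64 items, each starting on a distinct cache-line offset.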
/*
* Keg header ctor. This initializes all fields, locks, etc. And inserts
* the keg onto the global keg list.
*
* Arguments/Returns follow uma_ctor specifications
* udata Actually uma_kctor_args
*/
static int
keg_ctor(void *mem, int size, void *udata, int flags)
{
struct uma_kctor_args *arg = udata;
uma_keg_t keg = mem;
uma_zone_t zone;
bzero(keg, size);
keg->uk_size = arg->size;
keg->uk_init = arg->uminit;
keg->uk_fini = arg->fini;
keg->uk_align = arg->align;
keg->uk_free = 0;
keg->uk_reserve = 0;
keg->uk_pages = 0;
keg->uk_flags = arg->flags;
keg->uk_slabzone = NULL;
/*
* We use a global round-robin policy by default. Zones with
* UMA_ZONE_NUMA set will use first-touch instead, in which case the
* iterator is never run.
*/
keg->uk_dr.dr_policy = DOMAINSET_RR();
keg->uk_dr.dr_iter = 0;
/*
* The master zone is passed to us at keg-creation time.
*/
zone = arg->zone;
keg->uk_name = zone->uz_name;
if (arg->flags & UMA_ZONE_VM)
keg->uk_flags |= UMA_ZFLAG_CACHEONLY;
if (arg->flags & UMA_ZONE_ZINIT)
keg->uk_init = zero_init;
if (arg->flags & UMA_ZONE_MALLOC)
keg->uk_flags |= UMA_ZONE_VTOSLAB;
if (arg->flags & UMA_ZONE_PCPU)
#ifdef SMP
keg->uk_flags |= UMA_ZONE_OFFPAGE;
#else
keg->uk_flags &= ~UMA_ZONE_PCPU;
#endif
if (keg->uk_flags & UMA_ZONE_CACHESPREAD) {
keg_cachespread_init(keg);
} else {
if (keg->uk_size > UMA_SLAB_SPACE)
keg_large_init(keg);
else
keg_small_init(keg);
}
if (keg->uk_flags & UMA_ZONE_OFFPAGE)
keg->uk_slabzone = slabzone;
/*
* If we haven't booted yet we need allocations to go through the
* startup cache until the vm is ready.
*/
if (booted < BOOT_PAGEALLOC)
keg->uk_allocf = startup_alloc;
#ifdef UMA_MD_SMALL_ALLOC
else if (keg->uk_ppera == 1)
keg->uk_allocf = uma_small_alloc;
#endif
else if (keg->uk_flags & UMA_ZONE_PCPU)
keg->uk_allocf = pcpu_page_alloc;
else
keg->uk_allocf = page_alloc;
#ifdef UMA_MD_SMALL_ALLOC
if (keg->uk_ppera == 1)
keg->uk_freef = uma_small_free;
else
#endif
if (keg->uk_flags & UMA_ZONE_PCPU)
keg->uk_freef = pcpu_page_free;
else
keg->uk_freef = page_free;
/*
* Initialize keg's lock
*/
KEG_LOCK_INIT(keg, (arg->flags & UMA_ZONE_MTXCLASS));
/*
* If we're putting the slab header in the actual page we need to
* figure out where in each page it goes. See SIZEOF_UMA_SLAB
* macro definition.
*/
if (!(keg->uk_flags & UMA_ZONE_OFFPAGE)) {
keg->uk_pgoff = (PAGE_SIZE * keg->uk_ppera) - SIZEOF_UMA_SLAB;
/*
* The only way the following is possible is if, with our
* UMA_ALIGN_PTR adjustments, we are now bigger than
* UMA_SLAB_SIZE. I haven't checked whether this is
* mathematically possible for all cases, so we make
* sure here anyway.
*/
KASSERT(keg->uk_pgoff + sizeof(struct uma_slab) <=
PAGE_SIZE * keg->uk_ppera,
("zone %s ipers %d rsize %d size %d slab won't fit",
zone->uz_name, keg->uk_ipers, keg->uk_rsize, keg->uk_size));
}
if (keg->uk_flags & UMA_ZONE_HASH)
hash_alloc(&keg->uk_hash);
CTR5(KTR_UMA, "keg_ctor %p zone %s(%p) out %d free %d\n",
keg, zone->uz_name, zone,
(keg->uk_pages / keg->uk_ppera) * keg->uk_ipers - keg->uk_free,
keg->uk_free);
LIST_INSERT_HEAD(&keg->uk_zones, zone, uz_link);
rw_wlock(&uma_rwlock);
LIST_INSERT_HEAD(&uma_kegs, keg, uk_link);
rw_wunlock(&uma_rwlock);
return (0);
}
static void
zone_alloc_counters(uma_zone_t zone)
{
zone->uz_allocs = counter_u64_alloc(M_WAITOK);
zone->uz_frees = counter_u64_alloc(M_WAITOK);
zone->uz_fails = counter_u64_alloc(M_WAITOK);
}
/*
* Zone header ctor. This initializes all fields, locks, etc.
*
* Arguments/Returns follow uma_ctor specifications
* udata Actually uma_zctor_args
*/
static int
zone_ctor(void *mem, int size, void *udata, int flags)
{
struct uma_zctor_args *arg = udata;
uma_zone_t zone = mem;
uma_zone_t z;
uma_keg_t keg;
bzero(zone, size);
zone->uz_name = arg->name;
zone->uz_ctor = arg->ctor;
zone->uz_dtor = arg->dtor;
- zone->uz_slab = zone_fetch_slab;
zone->uz_init = NULL;
zone->uz_fini = NULL;
zone->uz_sleeps = 0;
zone->uz_count = 0;
zone->uz_count_min = 0;
zone->uz_count_max = BUCKET_MAX;
zone->uz_flags = 0;
zone->uz_warning = NULL;
/* The domain structures follow the cpu structures. */
zone->uz_domain = (struct uma_zone_domain *)&zone->uz_cpu[mp_ncpus];
zone->uz_bkt_max = ULONG_MAX;
timevalclear(&zone->uz_ratecheck);
if (__predict_true(booted == BOOT_RUNNING))
zone_alloc_counters(zone);
else {
zone->uz_allocs = EARLY_COUNTER;
zone->uz_frees = EARLY_COUNTER;
zone->uz_fails = EARLY_COUNTER;
}
/*
* This is a pure cache zone, no kegs.
*/
if (arg->import) {
if (arg->flags & UMA_ZONE_VM)
arg->flags |= UMA_ZFLAG_CACHEONLY;
zone->uz_flags = arg->flags;
zone->uz_size = arg->size;
zone->uz_import = arg->import;
zone->uz_release = arg->release;
zone->uz_arg = arg->arg;
zone->uz_lockptr = &zone->uz_lock;
ZONE_LOCK_INIT(zone, (arg->flags & UMA_ZONE_MTXCLASS));
rw_wlock(&uma_rwlock);
LIST_INSERT_HEAD(&uma_cachezones, zone, uz_link);
rw_wunlock(&uma_rwlock);
goto out;
}
/*
* Use the regular zone/keg/slab allocator.
*/
zone->uz_import = (uma_import)zone_import;
zone->uz_release = (uma_release)zone_release;
zone->uz_arg = zone;
keg = arg->keg;
if (arg->flags & UMA_ZONE_SECONDARY) {
KASSERT(arg->keg != NULL, ("Secondary zone on zero'd keg"));
zone->uz_init = arg->uminit;
zone->uz_fini = arg->fini;
zone->uz_lockptr = &keg->uk_lock;
zone->uz_flags |= UMA_ZONE_SECONDARY;
rw_wlock(&uma_rwlock);
ZONE_LOCK(zone);
LIST_FOREACH(z, &keg->uk_zones, uz_link) {
if (LIST_NEXT(z, uz_link) == NULL) {
LIST_INSERT_AFTER(z, zone, uz_link);
break;
}
}
ZONE_UNLOCK(zone);
rw_wunlock(&uma_rwlock);
} else if (keg == NULL) {
if ((keg = uma_kcreate(zone, arg->size, arg->uminit, arg->fini,
arg->align, arg->flags)) == NULL)
return (ENOMEM);
} else {
struct uma_kctor_args karg;
int error;
/* We should only be here from uma_startup() */
karg.size = arg->size;
karg.uminit = arg->uminit;
karg.fini = arg->fini;
karg.align = arg->align;
karg.flags = arg->flags;
karg.zone = zone;
error = keg_ctor(arg->keg, sizeof(struct uma_keg), &karg,
flags);
if (error)
return (error);
}
zone->uz_keg = keg;
zone->uz_size = keg->uk_size;
zone->uz_flags |= (keg->uk_flags &
(UMA_ZONE_INHERIT | UMA_ZFLAG_INHERIT));
/*
* Some internal zones don't have room allocated for the per-CPU
* caches. If we're internal, bail out here.
*/
if (keg->uk_flags & UMA_ZFLAG_INTERNAL) {
KASSERT((zone->uz_flags & UMA_ZONE_SECONDARY) == 0,
("Secondary zone requested UMA_ZFLAG_INTERNAL"));
return (0);
}
out:
KASSERT((arg->flags & (UMA_ZONE_MAXBUCKET | UMA_ZONE_NOBUCKET)) !=
(UMA_ZONE_MAXBUCKET | UMA_ZONE_NOBUCKET),
("Invalid zone flag combination"));
if ((arg->flags & UMA_ZONE_MAXBUCKET) != 0)
zone->uz_count = BUCKET_MAX;
else if ((arg->flags & UMA_ZONE_NOBUCKET) != 0)
zone->uz_count = 0;
else
zone->uz_count = bucket_select(zone->uz_size);
zone->uz_count_min = zone->uz_count;
return (0);
}
/*
* Keg header dtor. This frees all data, destroys locks, frees the hash
* table and removes the keg from the global list.
*
* Arguments/Returns follow uma_dtor specifications
* udata unused
*/
static void
keg_dtor(void *arg, int size, void *udata)
{
uma_keg_t keg;
keg = (uma_keg_t)arg;
KEG_LOCK(keg);
if (keg->uk_free != 0) {
printf("Freed UMA keg (%s) was not empty (%d items). "
"Lost %d pages of memory.\n",
keg->uk_name ? keg->uk_name : "",
keg->uk_free, keg->uk_pages);
}
KEG_UNLOCK(keg);
hash_free(&keg->uk_hash);
KEG_LOCK_FINI(keg);
}
/*
* Zone header dtor.
*
* Arguments/Returns follow uma_dtor specifications
* udata unused
*/
static void
zone_dtor(void *arg, int size, void *udata)
{
uma_zone_t zone;
uma_keg_t keg;
zone = (uma_zone_t)arg;
if (!(zone->uz_flags & UMA_ZFLAG_INTERNAL))
cache_drain(zone);
rw_wlock(&uma_rwlock);
LIST_REMOVE(zone, uz_link);
rw_wunlock(&uma_rwlock);
/*
* XXX there are some races here where
* the zone can be drained but zone lock
* released and then refilled before we
* remove it... we don't care for now
*/
zone_drain_wait(zone, M_WAITOK);
/*
* We only destroy kegs from non secondary zones.
*/
if ((keg = zone->uz_keg) != NULL &&
(zone->uz_flags & UMA_ZONE_SECONDARY) == 0) {
rw_wlock(&uma_rwlock);
LIST_REMOVE(keg, uk_link);
rw_wunlock(&uma_rwlock);
zone_free_item(kegs, keg, NULL, SKIP_NONE);
}
counter_u64_free(zone->uz_allocs);
counter_u64_free(zone->uz_frees);
counter_u64_free(zone->uz_fails);
if (zone->uz_lockptr == &zone->uz_lock)
ZONE_LOCK_FINI(zone);
}
/*
* Traverses every zone in the system and calls a callback
*
* Arguments:
* zfunc A pointer to a function which accepts a zone
* as an argument.
*
* Returns:
* Nothing
*/
static void
zone_foreach(void (*zfunc)(uma_zone_t))
{
uma_keg_t keg;
uma_zone_t zone;
/*
* Before BOOT_RUNNING we are guaranteed to be single
* threaded, so locking isn't needed. Startup functions
* are allowed to use M_WAITOK.
*/
if (__predict_true(booted == BOOT_RUNNING))
rw_rlock(&uma_rwlock);
LIST_FOREACH(keg, &uma_kegs, uk_link) {
LIST_FOREACH(zone, &keg->uk_zones, uz_link)
zfunc(zone);
}
if (__predict_true(booted == BOOT_RUNNING))
rw_runlock(&uma_rwlock);
}
/*
* Count how many pages we need to bootstrap.  The VM supplies
* its need for early zones in the argument; we add up our zones,
* which consist of: UMA Slabs, UMA Hash and 9 Bucket zones.  The
* zone of zones and zone of kegs are accounted for separately.
*/
#define UMA_BOOT_ZONES 11
/* Zone of zones and zone of kegs have arbitrary alignment. */
#define UMA_BOOT_ALIGN 32
static int zsize, ksize;
int
uma_startup_count(int vm_zones)
{
int zones, pages;
ksize = sizeof(struct uma_keg) +
(sizeof(struct uma_domain) * vm_ndomains);
zsize = sizeof(struct uma_zone) +
(sizeof(struct uma_cache) * (mp_maxid + 1)) +
(sizeof(struct uma_zone_domain) * vm_ndomains);
/*
* Memory for the zone of kegs and its keg,
* and for zone of zones.
*/
pages = howmany(roundup(zsize, CACHE_LINE_SIZE) * 2 +
roundup(ksize, CACHE_LINE_SIZE), PAGE_SIZE);
#ifdef UMA_MD_SMALL_ALLOC
zones = UMA_BOOT_ZONES;
#else
zones = UMA_BOOT_ZONES + vm_zones;
vm_zones = 0;
#endif
/* Memory for the rest of startup zones, UMA and VM, ... */
if (zsize > UMA_SLAB_SPACE) {
/* See keg_large_init(). */
u_int ppera;
ppera = howmany(roundup2(zsize, UMA_BOOT_ALIGN), PAGE_SIZE);
if (PAGE_SIZE * ppera - roundup2(zsize, UMA_BOOT_ALIGN) <
SIZEOF_UMA_SLAB)
ppera++;
pages += (zones + vm_zones) * ppera;
} else if (roundup2(zsize, UMA_BOOT_ALIGN) > UMA_SLAB_SPACE)
/* See keg_small_init() special case for uk_ppera = 1. */
pages += zones;
else
pages += howmany(zones,
UMA_SLAB_SPACE / roundup2(zsize, UMA_BOOT_ALIGN));
/* ... and their kegs. Note that zone of zones allocates a keg! */
pages += howmany(zones + 1,
UMA_SLAB_SPACE / roundup2(ksize, UMA_BOOT_ALIGN));
/*
* Most of the startup zones are not going to be offpage; that's
* why we use UMA_SLAB_SPACE instead of UMA_SLAB_SIZE in all
* calculations.  Some large bucket zones will be offpage, and
* thus will allocate hashes.  We take a conservative approach
* and assume that all zones may allocate a hash.  This may give
* us some positive inaccuracy, usually an extra single page.
*/
pages += howmany(zones, UMA_SLAB_SPACE /
(sizeof(struct slabhead *) * UMA_HASH_SIZE_INIT));
return (pages);
}
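The page accounting in uma_startup_count() leans on the howmany() and roundup2() macros from sys/param.h. A minimal userland sketch of both, with their definitions reproduced for illustration (pages_for() is a hypothetical helper, not part of UMA):

```c
#include <assert.h>

/* Minimal definitions matching sys/param.h, for illustration. */
#define howmany(x, y)  (((x) + ((y) - 1)) / (y))
#define roundup2(x, y) (((x) + ((y) - 1)) & ~((y) - 1)) /* y: power of 2 */

/* Hypothetical helper: pages needed to hold a header of 'bytes' bytes,
 * as the zone/keg header accounting above computes. */
static int
pages_for(int bytes, int page_size)
{
	return (howmany(bytes, page_size));
}
```

For instance, a 9000-byte header needs 3 pages of 4 KB, and a 100-byte zone size rounds up to 128 under the 32-byte UMA_BOOT_ALIGN.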
void
uma_startup(void *mem, int npages)
{
struct uma_zctor_args args;
uma_keg_t masterkeg;
uintptr_t m;
#ifdef DIAGNOSTIC
printf("Entering %s with %d boot pages configured\n", __func__, npages);
#endif
rw_init(&uma_rwlock, "UMA lock");
/* Use bootpages memory for the zone of zones and zone of kegs. */
m = (uintptr_t)mem;
zones = (uma_zone_t)m;
m += roundup(zsize, CACHE_LINE_SIZE);
kegs = (uma_zone_t)m;
m += roundup(zsize, CACHE_LINE_SIZE);
masterkeg = (uma_keg_t)m;
m += roundup(ksize, CACHE_LINE_SIZE);
m = roundup(m, PAGE_SIZE);
npages -= (m - (uintptr_t)mem) / PAGE_SIZE;
mem = (void *)m;
/* "manually" create the initial zone */
memset(&args, 0, sizeof(args));
args.name = "UMA Kegs";
args.size = ksize;
args.ctor = keg_ctor;
args.dtor = keg_dtor;
args.uminit = zero_init;
args.fini = NULL;
args.keg = masterkeg;
args.align = UMA_BOOT_ALIGN - 1;
args.flags = UMA_ZFLAG_INTERNAL;
zone_ctor(kegs, zsize, &args, M_WAITOK);
bootmem = mem;
boot_pages = npages;
args.name = "UMA Zones";
args.size = zsize;
args.ctor = zone_ctor;
args.dtor = zone_dtor;
args.uminit = zero_init;
args.fini = NULL;
args.keg = NULL;
args.align = UMA_BOOT_ALIGN - 1;
args.flags = UMA_ZFLAG_INTERNAL;
zone_ctor(zones, zsize, &args, M_WAITOK);
/* Now make a zone for slab headers */
slabzone = uma_zcreate("UMA Slabs",
sizeof(struct uma_slab),
NULL, NULL, NULL, NULL,
UMA_ALIGN_PTR, UMA_ZFLAG_INTERNAL);
hashzone = uma_zcreate("UMA Hash",
sizeof(struct slabhead *) * UMA_HASH_SIZE_INIT,
NULL, NULL, NULL, NULL,
UMA_ALIGN_PTR, UMA_ZFLAG_INTERNAL);
bucket_init();
booted = BOOT_STRAPPED;
}
void
uma_startup1(void)
{
#ifdef DIAGNOSTIC
printf("Entering %s with %d boot pages left\n", __func__, boot_pages);
#endif
booted = BOOT_PAGEALLOC;
}
void
uma_startup2(void)
{
#ifdef DIAGNOSTIC
printf("Entering %s with %d boot pages left\n", __func__, boot_pages);
#endif
booted = BOOT_BUCKETS;
sx_init(&uma_drain_lock, "umadrain");
bucket_enable();
}
/*
* Initialize our callout handle
*
*/
static void
uma_startup3(void)
{
#ifdef INVARIANTS
TUNABLE_INT_FETCH("vm.debug.divisor", &dbg_divisor);
uma_dbg_cnt = counter_u64_alloc(M_WAITOK);
uma_skip_cnt = counter_u64_alloc(M_WAITOK);
#endif
zone_foreach(zone_alloc_counters);
callout_init(&uma_callout, 1);
callout_reset(&uma_callout, UMA_TIMEOUT * hz, uma_timeout, NULL);
booted = BOOT_RUNNING;
}
static uma_keg_t
uma_kcreate(uma_zone_t zone, size_t size, uma_init uminit, uma_fini fini,
int align, uint32_t flags)
{
struct uma_kctor_args args;
args.size = size;
args.uminit = uminit;
args.fini = fini;
args.align = (align == UMA_ALIGN_CACHE) ? uma_align_cache : align;
args.flags = flags;
args.zone = zone;
return (zone_alloc_item(kegs, &args, UMA_ANYDOMAIN, M_WAITOK));
}
/* Public functions */
/* See uma.h */
void
uma_set_align(int align)
{
if (align != UMA_ALIGN_CACHE)
uma_align_cache = align;
}
/* See uma.h */
uma_zone_t
uma_zcreate(const char *name, size_t size, uma_ctor ctor, uma_dtor dtor,
uma_init uminit, uma_fini fini, int align, uint32_t flags)
{
struct uma_zctor_args args;
uma_zone_t res;
bool locked;
KASSERT(powerof2(align + 1), ("invalid zone alignment %d for \"%s\"",
align, name));
/* This stuff is essential for the zone ctor */
memset(&args, 0, sizeof(args));
args.name = name;
args.size = size;
args.ctor = ctor;
args.dtor = dtor;
args.uminit = uminit;
args.fini = fini;
#ifdef INVARIANTS
/*
* If a zone is being created with an empty constructor and
* destructor, pass UMA constructor/destructor which checks for
* memory use after free.
*/
if ((!(flags & (UMA_ZONE_ZINIT | UMA_ZONE_NOFREE))) &&
ctor == NULL && dtor == NULL && uminit == NULL && fini == NULL) {
args.ctor = trash_ctor;
args.dtor = trash_dtor;
args.uminit = trash_init;
args.fini = trash_fini;
}
#endif
args.align = align;
args.flags = flags;
args.keg = NULL;
if (booted < BOOT_BUCKETS) {
locked = false;
} else {
sx_slock(&uma_drain_lock);
locked = true;
}
res = zone_alloc_item(zones, &args, UMA_ANYDOMAIN, M_WAITOK);
if (locked)
sx_sunlock(&uma_drain_lock);
return (res);
}
/* See uma.h */
uma_zone_t
uma_zsecond_create(char *name, uma_ctor ctor, uma_dtor dtor,
uma_init zinit, uma_fini zfini, uma_zone_t master)
{
struct uma_zctor_args args;
uma_keg_t keg;
uma_zone_t res;
bool locked;
keg = master->uz_keg;
memset(&args, 0, sizeof(args));
args.name = name;
args.size = keg->uk_size;
args.ctor = ctor;
args.dtor = dtor;
args.uminit = zinit;
args.fini = zfini;
args.align = keg->uk_align;
args.flags = keg->uk_flags | UMA_ZONE_SECONDARY;
args.keg = keg;
if (booted < BOOT_BUCKETS) {
locked = false;
} else {
sx_slock(&uma_drain_lock);
locked = true;
}
/* XXX Attaches only one keg of potentially many. */
res = zone_alloc_item(zones, &args, UMA_ANYDOMAIN, M_WAITOK);
if (locked)
sx_sunlock(&uma_drain_lock);
return (res);
}
/* See uma.h */
uma_zone_t
uma_zcache_create(char *name, int size, uma_ctor ctor, uma_dtor dtor,
uma_init zinit, uma_fini zfini, uma_import zimport,
uma_release zrelease, void *arg, int flags)
{
struct uma_zctor_args args;
memset(&args, 0, sizeof(args));
args.name = name;
args.size = size;
args.ctor = ctor;
args.dtor = dtor;
args.uminit = zinit;
args.fini = zfini;
args.import = zimport;
args.release = zrelease;
args.arg = arg;
args.align = 0;
args.flags = flags | UMA_ZFLAG_CACHE;
return (zone_alloc_item(zones, &args, UMA_ANYDOMAIN, M_WAITOK));
}
/* See uma.h */
void
uma_zdestroy(uma_zone_t zone)
{
sx_slock(&uma_drain_lock);
zone_free_item(zones, zone, NULL, SKIP_NONE);
sx_sunlock(&uma_drain_lock);
}
void
uma_zwait(uma_zone_t zone)
{
void *item;
item = uma_zalloc_arg(zone, NULL, M_WAITOK);
uma_zfree(zone, item);
}
void *
uma_zalloc_pcpu_arg(uma_zone_t zone, void *udata, int flags)
{
void *item;
#ifdef SMP
int i;
MPASS(zone->uz_flags & UMA_ZONE_PCPU);
#endif
item = uma_zalloc_arg(zone, udata, flags & ~M_ZERO);
if (item != NULL && (flags & M_ZERO)) {
#ifdef SMP
for (i = 0; i <= mp_maxid; i++)
bzero(zpcpu_get_cpu(item, i), zone->uz_size);
#else
bzero(item, zone->uz_size);
#endif
}
return (item);
}
/*
* A stub while both regular and pcpu cases are identical.
*/
void
uma_zfree_pcpu_arg(uma_zone_t zone, void *item, void *udata)
{
#ifdef SMP
MPASS(zone->uz_flags & UMA_ZONE_PCPU);
#endif
uma_zfree_arg(zone, item, udata);
}
/* See uma.h */
void *
uma_zalloc_arg(uma_zone_t zone, void *udata, int flags)
{
uma_zone_domain_t zdom;
uma_bucket_t bucket;
uma_cache_t cache;
void *item;
int cpu, domain, lockfail, maxbucket;
#ifdef INVARIANTS
bool skipdbg;
#endif
/* Enable entropy collection for RANDOM_ENABLE_UMA kernel option */
random_harvest_fast_uma(&zone, sizeof(zone), RANDOM_UMA);
/* This is the fast path allocation */
CTR4(KTR_UMA, "uma_zalloc_arg thread %x zone %s(%p) flags %d",
curthread, zone->uz_name, zone, flags);
if (flags & M_WAITOK) {
WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
"uma_zalloc_arg: zone \"%s\"", zone->uz_name);
}
KASSERT((flags & M_EXEC) == 0, ("uma_zalloc_arg: called with M_EXEC"));
KASSERT(curthread->td_critnest == 0 || SCHEDULER_STOPPED(),
("uma_zalloc_arg: called with spinlock or critical section held"));
if (zone->uz_flags & UMA_ZONE_PCPU)
KASSERT((flags & M_ZERO) == 0, ("allocating from a pcpu zone "
"with M_ZERO passed"));
#ifdef DEBUG_MEMGUARD
if (memguard_cmp_zone(zone)) {
item = memguard_alloc(zone->uz_size, flags);
if (item != NULL) {
if (zone->uz_init != NULL &&
zone->uz_init(item, zone->uz_size, flags) != 0)
return (NULL);
if (zone->uz_ctor != NULL &&
zone->uz_ctor(item, zone->uz_size, udata,
flags) != 0) {
zone->uz_fini(item, zone->uz_size);
return (NULL);
}
return (item);
}
/* This is unfortunate but should not be fatal. */
}
#endif
/*
* If possible, allocate from the per-CPU cache. There are two
* requirements for safe access to the per-CPU cache: (1) the thread
* accessing the cache must not be preempted or yield during access,
* and (2) the thread must not migrate CPUs without switching which
* cache it accesses. We rely on a critical section to prevent
* preemption and migration. We release the critical section in
* order to acquire the zone mutex if we are unable to allocate from
* the current cache; when we re-acquire the critical section, we
* must detect and handle migration if it has occurred.
*/
zalloc_restart:
critical_enter();
cpu = curcpu;
cache = &zone->uz_cpu[cpu];
zalloc_start:
bucket = cache->uc_allocbucket;
if (bucket != NULL && bucket->ub_cnt > 0) {
bucket->ub_cnt--;
item = bucket->ub_bucket[bucket->ub_cnt];
#ifdef INVARIANTS
bucket->ub_bucket[bucket->ub_cnt] = NULL;
#endif
KASSERT(item != NULL, ("uma_zalloc: Bucket pointer mangled."));
cache->uc_allocs++;
critical_exit();
#ifdef INVARIANTS
skipdbg = uma_dbg_zskip(zone, item);
#endif
if (zone->uz_ctor != NULL &&
#ifdef INVARIANTS
(!skipdbg || zone->uz_ctor != trash_ctor ||
zone->uz_dtor != trash_dtor) &&
#endif
zone->uz_ctor(item, zone->uz_size, udata, flags) != 0) {
counter_u64_add(zone->uz_fails, 1);
zone_free_item(zone, item, udata, SKIP_DTOR | SKIP_CNT);
return (NULL);
}
#ifdef INVARIANTS
if (!skipdbg)
uma_dbg_alloc(zone, NULL, item);
#endif
if (flags & M_ZERO)
uma_zero_item(item, zone);
return (item);
}
/*
* We have run out of items in our alloc bucket.
* See if we can switch with our free bucket.
*/
bucket = cache->uc_freebucket;
if (bucket != NULL && bucket->ub_cnt > 0) {
CTR2(KTR_UMA,
"uma_zalloc: zone %s(%p) swapping empty with alloc",
zone->uz_name, zone);
cache->uc_freebucket = cache->uc_allocbucket;
cache->uc_allocbucket = bucket;
goto zalloc_start;
}
/*
* Discard any empty allocation bucket while we hold no locks.
*/
bucket = cache->uc_allocbucket;
cache->uc_allocbucket = NULL;
critical_exit();
if (bucket != NULL)
bucket_free(zone, bucket, udata);
if (zone->uz_flags & UMA_ZONE_NUMA) {
domain = PCPU_GET(domain);
if (VM_DOMAIN_EMPTY(domain))
domain = UMA_ANYDOMAIN;
} else
domain = UMA_ANYDOMAIN;
/* Short-circuit for zones without buckets and low memory. */
if (zone->uz_count == 0 || bucketdisable) {
ZONE_LOCK(zone);
goto zalloc_item;
}
/*
* The attempt to retrieve the item from the per-CPU cache has failed, so
* we must go back to the zone. This requires the zone lock, so we
* must drop the critical section, then re-acquire it when we go back
* to the cache. Since the critical section is released, we may be
* preempted or migrate. As such, make sure not to maintain any
* thread-local state specific to the cache from prior to releasing
* the critical section.
*/
lockfail = 0;
if (ZONE_TRYLOCK(zone) == 0) {
/* Record contention to size the buckets. */
ZONE_LOCK(zone);
lockfail = 1;
}
critical_enter();
cpu = curcpu;
cache = &zone->uz_cpu[cpu];
/* See if we lost the race to fill the cache. */
if (cache->uc_allocbucket != NULL) {
ZONE_UNLOCK(zone);
goto zalloc_start;
}
/*
* Check the zone's cache of buckets.
*/
if (domain == UMA_ANYDOMAIN)
zdom = &zone->uz_domain[0];
else
zdom = &zone->uz_domain[domain];
if ((bucket = zone_try_fetch_bucket(zone, zdom, true)) != NULL) {
KASSERT(bucket->ub_cnt != 0,
("uma_zalloc_arg: Returning an empty bucket."));
cache->uc_allocbucket = bucket;
ZONE_UNLOCK(zone);
goto zalloc_start;
}
/* We are no longer associated with this CPU. */
critical_exit();
/*
* We bump the uz count when the cache size is insufficient to
* handle the working set.
*/
if (lockfail && zone->uz_count < zone->uz_count_max)
zone->uz_count++;
if (zone->uz_max_items > 0) {
if (zone->uz_items >= zone->uz_max_items)
goto zalloc_item;
maxbucket = MIN(zone->uz_count,
zone->uz_max_items - zone->uz_items);
zone->uz_items += maxbucket;
} else
maxbucket = zone->uz_count;
ZONE_UNLOCK(zone);
/*
* Now let's just fill a bucket and put it on the free list. If that
* works we'll restart the allocation from the beginning and it
* will use the just filled bucket.
*/
bucket = zone_alloc_bucket(zone, udata, domain, flags, maxbucket);
CTR3(KTR_UMA, "uma_zalloc: zone %s(%p) bucket zone returned %p",
zone->uz_name, zone, bucket);
ZONE_LOCK(zone);
if (bucket != NULL) {
if (zone->uz_max_items > 0 && bucket->ub_cnt < maxbucket) {
MPASS(zone->uz_items >= maxbucket - bucket->ub_cnt);
zone->uz_items -= maxbucket - bucket->ub_cnt;
if (zone->uz_sleepers > 0 &&
zone->uz_items < zone->uz_max_items)
wakeup_one(zone);
}
critical_enter();
cpu = curcpu;
cache = &zone->uz_cpu[cpu];
/*
* See if we lost the race or were migrated. Cache the
* initialized bucket to make this less likely or claim
* the memory directly.
*/
if (cache->uc_allocbucket == NULL &&
((zone->uz_flags & UMA_ZONE_NUMA) == 0 ||
domain == PCPU_GET(domain))) {
cache->uc_allocbucket = bucket;
zdom->uzd_imax += bucket->ub_cnt;
} else if (zone->uz_bkt_count >= zone->uz_bkt_max) {
critical_exit();
ZONE_UNLOCK(zone);
bucket_drain(zone, bucket);
bucket_free(zone, bucket, udata);
goto zalloc_restart;
} else
zone_put_bucket(zone, zdom, bucket, false);
ZONE_UNLOCK(zone);
goto zalloc_start;
} else if (zone->uz_max_items > 0) {
zone->uz_items -= maxbucket;
if (zone->uz_sleepers > 0 &&
zone->uz_items + 1 < zone->uz_max_items)
wakeup_one(zone);
}
/*
* We may not be able to get a bucket so return an actual item.
*/
zalloc_item:
item = zone_alloc_item_locked(zone, udata, domain, flags);
return (item);
}
void *
uma_zalloc_domain(uma_zone_t zone, void *udata, int domain, int flags)
{
/* Enable entropy collection for RANDOM_ENABLE_UMA kernel option */
random_harvest_fast_uma(&zone, sizeof(zone), RANDOM_UMA);
/* This is the fast path allocation */
CTR5(KTR_UMA,
"uma_zalloc_domain thread %x zone %s(%p) domain %d flags %d",
curthread, zone->uz_name, zone, domain, flags);
if (flags & M_WAITOK) {
WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
"uma_zalloc_domain: zone \"%s\"", zone->uz_name);
}
KASSERT(curthread->td_critnest == 0 || SCHEDULER_STOPPED(),
("uma_zalloc_domain: called with spinlock or critical section held"));
return (zone_alloc_item(zone, udata, domain, flags));
}
/*
* Find a slab with some space. Prefer slabs that are partially used over those
* that are totally full. This helps to reduce fragmentation.
*
* If 'rr' is 1, search all domains starting from 'domain'. Otherwise check
* only 'domain'.
*/
static uma_slab_t
keg_first_slab(uma_keg_t keg, int domain, bool rr)
{
uma_domain_t dom;
uma_slab_t slab;
int start;
KASSERT(domain >= 0 && domain < vm_ndomains,
("keg_first_slab: domain %d out of range", domain));
KEG_LOCK_ASSERT(keg);
slab = NULL;
start = domain;
do {
dom = &keg->uk_domain[domain];
if (!LIST_EMPTY(&dom->ud_part_slab))
return (LIST_FIRST(&dom->ud_part_slab));
if (!LIST_EMPTY(&dom->ud_free_slab)) {
slab = LIST_FIRST(&dom->ud_free_slab);
LIST_REMOVE(slab, us_link);
LIST_INSERT_HEAD(&dom->ud_part_slab, slab, us_link);
return (slab);
}
if (rr)
domain = (domain + 1) % vm_ndomains;
} while (domain != start);
return (NULL);
}
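The do/while loop in keg_first_slab() walks every domain exactly once, starting at the caller's preferred domain and wrapping modulo vm_ndomains. A userland sketch of just that iteration pattern (visit_all_domains() is a hypothetical illustration):

```c
#include <assert.h>

/* Sketch of the round-robin walk in keg_first_slab(): starting at
 * 'start', visit each of 'ndomains' domains exactly once, recording
 * the visit order.  Returns the number of domains visited. */
static int
visit_all_domains(int start, int ndomains, int *order)
{
	int domain = start;
	int n = 0;

	do {
		order[n++] = domain;
		domain = (domain + 1) % ndomains;
	} while (domain != start);
	return (n);
}
```

Starting at domain 2 of 4, the walk visits 2, 3, 0, 1 and terminates when it wraps back to the starting domain.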
static uma_slab_t
keg_fetch_free_slab(uma_keg_t keg, int domain, bool rr, int flags)
{
uint32_t reserve;
KEG_LOCK_ASSERT(keg);
reserve = (flags & M_USE_RESERVE) != 0 ? 0 : keg->uk_reserve;
if (keg->uk_free <= reserve)
return (NULL);
return (keg_first_slab(keg, domain, rr));
}
static uma_slab_t
keg_fetch_slab(uma_keg_t keg, uma_zone_t zone, int rdomain, const int flags)
{
struct vm_domainset_iter di;
uma_domain_t dom;
uma_slab_t slab;
int aflags, domain;
bool rr;
restart:
KEG_LOCK_ASSERT(keg);
/*
* Use the keg's policy if upper layers haven't already specified a
* domain (as happens with first-touch zones).
*
* To avoid races we run the iterator with the keg lock held, but that
* means that we cannot allow the vm_domainset layer to sleep. Thus,
* clear M_WAITOK and handle low memory conditions locally.
*/
rr = rdomain == UMA_ANYDOMAIN;
if (rr) {
aflags = (flags & ~M_WAITOK) | M_NOWAIT;
vm_domainset_iter_policy_ref_init(&di, &keg->uk_dr, &domain,
&aflags);
} else {
aflags = flags;
domain = rdomain;
}
for (;;) {
slab = keg_fetch_free_slab(keg, domain, rr, flags);
if (slab != NULL) {
MPASS(slab->us_keg == keg);
return (slab);
}
/*
* M_NOVM means don't ask at all!
*/
if (flags & M_NOVM)
break;
KASSERT(zone->uz_max_items == 0 ||
zone->uz_items <= zone->uz_max_items,
("%s: zone %p overflow", __func__, zone));
slab = keg_alloc_slab(keg, zone, domain, flags, aflags);
/*
* If we got a slab here it's safe to mark it partially used
* and return. We assume that the caller is going to remove
* at least one item.
*/
if (slab) {
MPASS(slab->us_keg == keg);
dom = &keg->uk_domain[slab->us_domain];
LIST_INSERT_HEAD(&dom->ud_part_slab, slab, us_link);
return (slab);
}
KEG_LOCK(keg);
if (rr && vm_domainset_iter_policy(&di, &domain) != 0) {
if ((flags & M_WAITOK) != 0) {
KEG_UNLOCK(keg);
vm_wait_doms(&keg->uk_dr.dr_policy->ds_mask);
KEG_LOCK(keg);
goto restart;
}
break;
}
}
/*
* We might not have been able to get a slab but another cpu
* could have while we were unlocked. Check again before we
* fail.
*/
if ((slab = keg_fetch_free_slab(keg, domain, rr, flags)) != NULL) {
MPASS(slab->us_keg == keg);
return (slab);
}
return (NULL);
}
static uma_slab_t
zone_fetch_slab(uma_zone_t zone, uma_keg_t keg, int domain, int flags)
{
uma_slab_t slab;
if (keg == NULL) {
keg = zone->uz_keg;
KEG_LOCK(keg);
}
for (;;) {
slab = keg_fetch_slab(keg, zone, domain, flags);
if (slab)
return (slab);
if (flags & (M_NOWAIT | M_NOVM))
break;
}
KEG_UNLOCK(keg);
return (NULL);
}
static void *
slab_alloc_item(uma_keg_t keg, uma_slab_t slab)
{
uma_domain_t dom;
void *item;
uint8_t freei;
MPASS(keg == slab->us_keg);
KEG_LOCK_ASSERT(keg);
freei = BIT_FFS(SLAB_SETSIZE, &slab->us_free) - 1;
BIT_CLR(SLAB_SETSIZE, freei, &slab->us_free);
item = slab->us_data + (keg->uk_rsize * freei);
slab->us_freecount--;
keg->uk_free--;
/* Move this slab to the full list */
if (slab->us_freecount == 0) {
LIST_REMOVE(slab, us_link);
dom = &keg->uk_domain[slab->us_domain];
LIST_INSERT_HEAD(&dom->ud_full_slab, slab, us_link);
}
return (item);
}
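slab_alloc_item() finds the first free item with BIT_FFS() and marks it allocated with BIT_CLR(). A simplified userland sketch of the same lookup, using a plain unsigned mask and POSIX ffs() in place of the kernel bitset macros (alloc_index() is a hypothetical stand-in, and assumes the mask is nonzero, as the KEG-locked caller guarantees for a non-full slab):

```c
#include <assert.h>
#include <strings.h>	/* ffs() */

/* Simplified free-item lookup: a plain unsigned mask stands in for the
 * slab's us_free bitset.  The lowest set bit is the first free index;
 * clearing it marks the item allocated. */
static int
alloc_index(unsigned *free_mask)
{
	int freei = ffs((int)*free_mask) - 1;	/* lowest set bit */

	*free_mask &= ~(1u << freei);		/* claim the item */
	return (freei);
}
```

Successive calls hand out the lowest free indices first, which matches UMA's tendency to keep allocations packed at the front of each slab.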
static int
zone_import(uma_zone_t zone, void **bucket, int max, int domain, int flags)
{
uma_slab_t slab;
uma_keg_t keg;
#ifdef NUMA
int stripe;
#endif
int i;
slab = NULL;
keg = NULL;
/* Try to keep the buckets totally full */
for (i = 0; i < max; ) {
- if ((slab = zone->uz_slab(zone, keg, domain, flags)) == NULL)
+ if ((slab = zone_fetch_slab(zone, keg, domain, flags)) == NULL)
break;
keg = slab->us_keg;
#ifdef NUMA
stripe = howmany(max, vm_ndomains);
#endif
while (slab->us_freecount && i < max) {
bucket[i++] = slab_alloc_item(keg, slab);
if (keg->uk_free <= keg->uk_reserve)
break;
#ifdef NUMA
/*
* If the zone is striped we pick a new slab for every
* N allocations. Eliminating this conditional will
* instead pick a new domain for each bucket rather
* than stripe within each bucket. The current option
* produces more fragmentation and requires more cpu
* time but yields better distribution.
*/
if ((zone->uz_flags & UMA_ZONE_NUMA) == 0 &&
vm_ndomains > 1 && --stripe == 0)
break;
#endif
}
/* Don't block if we allocated any successfully. */
flags &= ~M_WAITOK;
flags |= M_NOWAIT;
}
if (slab != NULL)
KEG_UNLOCK(keg);
return (i);
}
static uma_bucket_t
zone_alloc_bucket(uma_zone_t zone, void *udata, int domain, int flags, int max)
{
uma_bucket_t bucket;
CTR1(KTR_UMA, "zone_alloc_bucket: domain %d", domain);
/* Don't wait for buckets, preserve caller's NOVM setting. */
bucket = bucket_alloc(zone, udata, M_NOWAIT | (flags & M_NOVM));
if (bucket == NULL)
return (NULL);
bucket->ub_cnt = zone->uz_import(zone->uz_arg, bucket->ub_bucket,
MIN(max, bucket->ub_entries), domain, flags);
/*
* Initialize the memory if necessary.
*/
if (bucket->ub_cnt != 0 && zone->uz_init != NULL) {
int i;
for (i = 0; i < bucket->ub_cnt; i++)
if (zone->uz_init(bucket->ub_bucket[i], zone->uz_size,
flags) != 0)
break;
/*
* If we couldn't initialize the whole bucket, put the
* rest back onto the freelist.
*/
if (i != bucket->ub_cnt) {
zone->uz_release(zone->uz_arg, &bucket->ub_bucket[i],
bucket->ub_cnt - i);
#ifdef INVARIANTS
bzero(&bucket->ub_bucket[i],
sizeof(void *) * (bucket->ub_cnt - i));
#endif
bucket->ub_cnt = i;
}
}
if (bucket->ub_cnt == 0) {
bucket_free(zone, bucket, udata);
counter_u64_add(zone->uz_fails, 1);
return (NULL);
}
return (bucket);
}
/*
* Allocates a single item from a zone.
*
* Arguments
* zone The zone to alloc for.
* udata The data to be passed to the constructor.
* domain The domain to allocate from or UMA_ANYDOMAIN.
* flags M_WAITOK, M_NOWAIT, M_ZERO.
*
* Returns
* NULL if there is no memory and M_NOWAIT is set
* An item if successful
*/
static void *
zone_alloc_item(uma_zone_t zone, void *udata, int domain, int flags)
{
ZONE_LOCK(zone);
return (zone_alloc_item_locked(zone, udata, domain, flags));
}
/*
* Returns with zone unlocked.
*/
static void *
zone_alloc_item_locked(uma_zone_t zone, void *udata, int domain, int flags)
{
void *item;
#ifdef INVARIANTS
bool skipdbg;
#endif
ZONE_LOCK_ASSERT(zone);
if (zone->uz_max_items > 0) {
if (zone->uz_items >= zone->uz_max_items) {
zone_log_warning(zone);
zone_maxaction(zone);
if (flags & M_NOWAIT) {
ZONE_UNLOCK(zone);
return (NULL);
}
zone->uz_sleeps++;
zone->uz_sleepers++;
while (zone->uz_items >= zone->uz_max_items)
mtx_sleep(zone, zone->uz_lockptr, PVM,
"zonelimit", 0);
zone->uz_sleepers--;
if (zone->uz_sleepers > 0 &&
zone->uz_items + 1 < zone->uz_max_items)
wakeup_one(zone);
}
zone->uz_items++;
}
ZONE_UNLOCK(zone);
if (domain != UMA_ANYDOMAIN) {
/* avoid allocs targeting empty domains */
if (VM_DOMAIN_EMPTY(domain))
domain = UMA_ANYDOMAIN;
}
if (zone->uz_import(zone->uz_arg, &item, 1, domain, flags) != 1)
goto fail;
#ifdef INVARIANTS
skipdbg = uma_dbg_zskip(zone, item);
#endif
/*
* We have to call both the zone's init (not the keg's init)
* and the zone's ctor. This is because the item is going from
* a keg slab directly to the user, and the user is expecting it
* to be both zone-init'd as well as zone-ctor'd.
*/
if (zone->uz_init != NULL) {
if (zone->uz_init(item, zone->uz_size, flags) != 0) {
zone_free_item(zone, item, udata, SKIP_FINI | SKIP_CNT);
goto fail;
}
}
if (zone->uz_ctor != NULL &&
#ifdef INVARIANTS
(!skipdbg || zone->uz_ctor != trash_ctor ||
zone->uz_dtor != trash_dtor) &&
#endif
zone->uz_ctor(item, zone->uz_size, udata, flags) != 0) {
zone_free_item(zone, item, udata, SKIP_DTOR | SKIP_CNT);
goto fail;
}
#ifdef INVARIANTS
if (!skipdbg)
uma_dbg_alloc(zone, NULL, item);
#endif
if (flags & M_ZERO)
uma_zero_item(item, zone);
counter_u64_add(zone->uz_allocs, 1);
CTR3(KTR_UMA, "zone_alloc_item item %p from %s(%p)", item,
zone->uz_name, zone);
return (item);
fail:
if (zone->uz_max_items > 0) {
ZONE_LOCK(zone);
zone->uz_items--;
ZONE_UNLOCK(zone);
}
counter_u64_add(zone->uz_fails, 1);
CTR2(KTR_UMA, "zone_alloc_item failed from %s(%p)",
zone->uz_name, zone);
return (NULL);
}
/* See uma.h */
void
uma_zfree_arg(uma_zone_t zone, void *item, void *udata)
{
uma_cache_t cache;
uma_bucket_t bucket;
uma_zone_domain_t zdom;
int cpu, domain;
bool lockfail;
#ifdef INVARIANTS
bool skipdbg;
#endif
/* Enable entropy collection for RANDOM_ENABLE_UMA kernel option */
random_harvest_fast_uma(&zone, sizeof(zone), RANDOM_UMA);
CTR2(KTR_UMA, "uma_zfree_arg thread %x zone %s", curthread,
zone->uz_name);
KASSERT(curthread->td_critnest == 0 || SCHEDULER_STOPPED(),
("uma_zfree_arg: called with spinlock or critical section held"));
/* uma_zfree(..., NULL) does nothing, to match free(9). */
if (item == NULL)
return;
#ifdef DEBUG_MEMGUARD
if (is_memguard_addr(item)) {
if (zone->uz_dtor != NULL)
zone->uz_dtor(item, zone->uz_size, udata);
if (zone->uz_fini != NULL)
zone->uz_fini(item, zone->uz_size);
memguard_free(item);
return;
}
#endif
#ifdef INVARIANTS
skipdbg = uma_dbg_zskip(zone, item);
if (skipdbg == false) {
if (zone->uz_flags & UMA_ZONE_MALLOC)
uma_dbg_free(zone, udata, item);
else
uma_dbg_free(zone, NULL, item);
}
if (zone->uz_dtor != NULL && (!skipdbg ||
zone->uz_dtor != trash_dtor || zone->uz_ctor != trash_ctor))
#else
if (zone->uz_dtor != NULL)
#endif
zone->uz_dtor(item, zone->uz_size, udata);
/*
* The race here is acceptable. If we miss it we'll just have to wait
* a little longer for the limits to be reset.
*/
if (zone->uz_sleepers > 0)
goto zfree_item;
/*
* If possible, free to the per-CPU cache. There are two
* requirements for safe access to the per-CPU cache: (1) the thread
* accessing the cache must not be preempted or yield during access,
* and (2) the thread must not migrate CPUs without switching which
* cache it accesses. We rely on a critical section to prevent
* preemption and migration. We release the critical section in
* order to acquire the zone mutex if we are unable to free to the
* current cache; when we re-acquire the critical section, we must
* detect and handle migration if it has occurred.
*/
zfree_restart:
critical_enter();
cpu = curcpu;
cache = &zone->uz_cpu[cpu];
zfree_start:
/*
* Try to free into the allocbucket first to give LIFO ordering
* for cache-hot datastructures. Spill over into the freebucket
* if necessary. Alloc will swap them if one runs dry.
*/
bucket = cache->uc_allocbucket;
if (bucket == NULL || bucket->ub_cnt >= bucket->ub_entries)
bucket = cache->uc_freebucket;
if (bucket != NULL && bucket->ub_cnt < bucket->ub_entries) {
KASSERT(bucket->ub_bucket[bucket->ub_cnt] == NULL,
("uma_zfree: Freeing to non free bucket index."));
bucket->ub_bucket[bucket->ub_cnt] = item;
bucket->ub_cnt++;
cache->uc_frees++;
critical_exit();
return;
}
/*
* We must go back to the zone, which requires acquiring the zone lock,
* which in turn means we must release and re-acquire the critical
* section. Since the critical section is released, we may be
* preempted or migrate. As such, make sure not to maintain any
* thread-local state specific to the cache from prior to releasing
* the critical section.
*/
critical_exit();
if (zone->uz_count == 0 || bucketdisable)
goto zfree_item;
lockfail = false;
if (ZONE_TRYLOCK(zone) == 0) {
/* Record contention to size the buckets. */
ZONE_LOCK(zone);
lockfail = true;
}
critical_enter();
cpu = curcpu;
cache = &zone->uz_cpu[cpu];
bucket = cache->uc_freebucket;
if (bucket != NULL && bucket->ub_cnt < bucket->ub_entries) {
ZONE_UNLOCK(zone);
goto zfree_start;
}
cache->uc_freebucket = NULL;
/* We are no longer associated with this CPU. */
critical_exit();
if ((zone->uz_flags & UMA_ZONE_NUMA) != 0) {
domain = PCPU_GET(domain);
if (VM_DOMAIN_EMPTY(domain))
domain = UMA_ANYDOMAIN;
} else
domain = 0;
zdom = &zone->uz_domain[0];
/* Can we throw this on the zone full list? */
if (bucket != NULL) {
CTR3(KTR_UMA,
"uma_zfree: zone %s(%p) putting bucket %p on free list",
zone->uz_name, zone, bucket);
/* ub_cnt is pointing to the last free item */
KASSERT(bucket->ub_cnt == bucket->ub_entries,
("uma_zfree: Attempting to insert a non-full bucket onto the full list.\n"));
if (zone->uz_bkt_count >= zone->uz_bkt_max) {
ZONE_UNLOCK(zone);
bucket_drain(zone, bucket);
bucket_free(zone, bucket, udata);
goto zfree_restart;
} else
zone_put_bucket(zone, zdom, bucket, true);
}
/*
* We bump the uz count when the cache size is insufficient to
* handle the working set.
*/
if (lockfail && zone->uz_count < zone->uz_count_max)
zone->uz_count++;
ZONE_UNLOCK(zone);
bucket = bucket_alloc(zone, udata, M_NOWAIT);
CTR3(KTR_UMA, "uma_zfree: zone %s(%p) allocated bucket %p",
zone->uz_name, zone, bucket);
if (bucket) {
critical_enter();
cpu = curcpu;
cache = &zone->uz_cpu[cpu];
if (cache->uc_freebucket == NULL &&
((zone->uz_flags & UMA_ZONE_NUMA) == 0 ||
domain == PCPU_GET(domain))) {
cache->uc_freebucket = bucket;
goto zfree_start;
}
/*
* We lost the race, start over. We have to drop our
* critical section to free the bucket.
*/
critical_exit();
bucket_free(zone, bucket, udata);
goto zfree_restart;
}
/*
* If nothing else caught this, we'll just do an internal free.
*/
zfree_item:
zone_free_item(zone, item, udata, SKIP_DTOR);
}
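The fast path at the top of `uma_zfree_arg()` tries the alloc bucket first, so a just-freed, cache-hot item is the next one allocated, and spills into the free bucket otherwise. A minimal sketch, with illustrative types and sizes rather than the real `uma_cache`/`uma_bucket` structures:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the per-CPU two-bucket free fast path in
 * uma_zfree_arg(): prefer the alloc bucket for LIFO reuse of
 * cache-hot items, spill into the free bucket, and report failure
 * when both are full.  Names and sizes are illustrative only.
 */
#define BUCKET_ENTRIES 4

struct bucket {
	int	cnt;
	void	*items[BUCKET_ENTRIES];
};

struct cpu_cache {
	struct bucket	*allocbucket;
	struct bucket	*freebucket;
};

/* Returns 1 on success, 0 if the caller must fall back to the zone. */
static int
cache_free_fast(struct cpu_cache *cache, void *item)
{
	struct bucket *b;

	b = cache->allocbucket;
	if (b == NULL || b->cnt >= BUCKET_ENTRIES)
		b = cache->freebucket;
	if (b != NULL && b->cnt < BUCKET_ENTRIES) {
		b->items[b->cnt++] = item;
		return (1);
	}
	return (0);
}
```

In the kernel, the failure case is where the slow path begins: the critical section is dropped, the zone lock is taken, and the full free bucket is pushed onto the zone's bucket list.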
void
uma_zfree_domain(uma_zone_t zone, void *item, void *udata)
{
/* Enable entropy collection for RANDOM_ENABLE_UMA kernel option */
random_harvest_fast_uma(&zone, sizeof(zone), RANDOM_UMA);
CTR2(KTR_UMA, "uma_zfree_domain thread %x zone %s", curthread,
zone->uz_name);
KASSERT(curthread->td_critnest == 0 || SCHEDULER_STOPPED(),
("uma_zfree_domain: called with spinlock or critical section held"));
/* uma_zfree(..., NULL) does nothing, to match free(9). */
if (item == NULL)
return;
zone_free_item(zone, item, udata, SKIP_NONE);
}
static void
slab_free_item(uma_zone_t zone, uma_slab_t slab, void *item)
{
uma_keg_t keg;
uma_domain_t dom;
uint8_t freei;
keg = zone->uz_keg;
MPASS(zone->uz_lockptr == &keg->uk_lock);
KEG_LOCK_ASSERT(keg);
MPASS(keg == slab->us_keg);
dom = &keg->uk_domain[slab->us_domain];
/* Do we need to remove from any lists? */
if (slab->us_freecount+1 == keg->uk_ipers) {
LIST_REMOVE(slab, us_link);
LIST_INSERT_HEAD(&dom->ud_free_slab, slab, us_link);
} else if (slab->us_freecount == 0) {
LIST_REMOVE(slab, us_link);
LIST_INSERT_HEAD(&dom->ud_part_slab, slab, us_link);
}
/* Slab management. */
freei = ((uintptr_t)item - (uintptr_t)slab->us_data) / keg->uk_rsize;
BIT_SET(SLAB_SETSIZE, freei, &slab->us_free);
slab->us_freecount++;
/* Keg statistics. */
keg->uk_free++;
}
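The index arithmetic in `slab_free_item()` — byte offset from the slab base divided by the rounded item size, then a bit set in the free mask — is easy to check standalone. A sketch under the assumption of a 32-bit mask; `slab_index` and `mark_free` are hypothetical helpers standing in for the computation and `BIT_SET()`:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the free-index computation in slab_free_item(): an
 * item's index is its byte offset from the slab base divided by
 * the rounded item size (uk_rsize), and that index selects a bit
 * in the slab's free bitmask.  Both helpers are hypothetical; the
 * kernel uses BIT_SET() on a bitset of SLAB_SETSIZE bits.
 */
static unsigned
slab_index(uintptr_t item, uintptr_t slab_data, unsigned rsize)
{
	return ((unsigned)((item - slab_data) / rsize));
}

static uint32_t
mark_free(uint32_t mask, unsigned freei)
{
	return (mask | (1u << freei));	/* like BIT_SET(..., &us_free) */
}
```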
static void
zone_release(uma_zone_t zone, void **bucket, int cnt)
{
void *item;
uma_slab_t slab;
uma_keg_t keg;
uint8_t *mem;
int i;
keg = zone->uz_keg;
KEG_LOCK(keg);
for (i = 0; i < cnt; i++) {
item = bucket[i];
if (!(zone->uz_flags & UMA_ZONE_VTOSLAB)) {
mem = (uint8_t *)((uintptr_t)item & (~UMA_SLAB_MASK));
if (zone->uz_flags & UMA_ZONE_HASH) {
slab = hash_sfind(&keg->uk_hash, mem);
} else {
mem += keg->uk_pgoff;
slab = (uma_slab_t)mem;
}
} else {
slab = vtoslab((vm_offset_t)item);
MPASS(slab->us_keg == keg);
}
slab_free_item(zone, slab, item);
}
KEG_UNLOCK(keg);
}
/*
* Frees a single item to any zone.
*
* Arguments:
* zone The zone to free to
* item The item we're freeing
* udata User supplied data for the dtor
* skip Skip dtors and finis
*/
static void
zone_free_item(uma_zone_t zone, void *item, void *udata, enum zfreeskip skip)
{
#ifdef INVARIANTS
bool skipdbg;
skipdbg = uma_dbg_zskip(zone, item);
if (skip == SKIP_NONE && !skipdbg) {
if (zone->uz_flags & UMA_ZONE_MALLOC)
uma_dbg_free(zone, udata, item);
else
uma_dbg_free(zone, NULL, item);
}
if (skip < SKIP_DTOR && zone->uz_dtor != NULL &&
(!skipdbg || zone->uz_dtor != trash_dtor ||
zone->uz_ctor != trash_ctor))
#else
if (skip < SKIP_DTOR && zone->uz_dtor != NULL)
#endif
zone->uz_dtor(item, zone->uz_size, udata);
if (skip < SKIP_FINI && zone->uz_fini)
zone->uz_fini(item, zone->uz_size);
zone->uz_release(zone->uz_arg, &item, 1);
if (skip & SKIP_CNT)
return;
counter_u64_add(zone->uz_frees, 1);
if (zone->uz_max_items > 0) {
ZONE_LOCK(zone);
zone->uz_items--;
if (zone->uz_sleepers > 0 &&
zone->uz_items < zone->uz_max_items)
wakeup_one(zone);
ZONE_UNLOCK(zone);
}
}
/* See uma.h */
int
uma_zone_set_max(uma_zone_t zone, int nitems)
{
struct uma_bucket_zone *ubz;
/*
* If the limit is very low, we may need to limit how
* many items are allowed in CPU caches.
*/
ubz = &bucket_zones[0];
for (; ubz->ubz_entries != 0; ubz++)
if (ubz->ubz_entries * 2 * mp_ncpus > nitems)
break;
if (ubz == &bucket_zones[0])
nitems = ubz->ubz_entries * 2 * mp_ncpus;
else
ubz--;
ZONE_LOCK(zone);
zone->uz_count_max = zone->uz_count = ubz->ubz_entries;
if (zone->uz_count_min > zone->uz_count_max)
zone->uz_count_min = zone->uz_count_max;
zone->uz_max_items = nitems;
ZONE_UNLOCK(zone);
return (nitems);
}
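The clamp in `uma_zone_set_max()` walks the ascending table of bucket sizes, stops at the first size where a full pair of buckets on every CPU could exceed the requested limit, and steps back one; if even the smallest bucket is too large, the limit itself is raised. A sketch with an illustrative size table (not the real `bucket_zones[]` contents):

```c
#include <assert.h>

/*
 * Sketch of the cache-size clamp in uma_zone_set_max().  Each CPU
 * may hold two full buckets, so a zone limit of nitems constrains
 * the usable bucket size.  The table values are illustrative.
 */
static const int bucket_sizes[] = { 2, 4, 8, 16, 32, 0 };

/* Returns the chosen bucket size; may raise *nitems. */
static int
clamp_bucket(int ncpus, int *nitems)
{
	const int *bz;

	for (bz = bucket_sizes; *bz != 0; bz++)
		if (*bz * 2 * ncpus > *nitems)
			break;
	if (bz == bucket_sizes) {
		/* Even the smallest bucket is too big: raise the limit. */
		*nitems = *bz * 2 * ncpus;
		return (*bz);
	}
	return (bz[-1]);		/* largest size that fits */
}
```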
/* See uma.h */
int
uma_zone_set_maxcache(uma_zone_t zone, int nitems)
{
ZONE_LOCK(zone);
zone->uz_bkt_max = nitems;
ZONE_UNLOCK(zone);
return (nitems);
}
/* See uma.h */
int
uma_zone_get_max(uma_zone_t zone)
{
int nitems;
ZONE_LOCK(zone);
nitems = zone->uz_max_items;
ZONE_UNLOCK(zone);
return (nitems);
}
/* See uma.h */
void
uma_zone_set_warning(uma_zone_t zone, const char *warning)
{
ZONE_LOCK(zone);
zone->uz_warning = warning;
ZONE_UNLOCK(zone);
}
/* See uma.h */
void
uma_zone_set_maxaction(uma_zone_t zone, uma_maxaction_t maxaction)
{
ZONE_LOCK(zone);
TASK_INIT(&zone->uz_maxaction, 0, (task_fn_t *)maxaction, zone);
ZONE_UNLOCK(zone);
}
/* See uma.h */
int
uma_zone_get_cur(uma_zone_t zone)
{
int64_t nitems;
u_int i;
ZONE_LOCK(zone);
nitems = counter_u64_fetch(zone->uz_allocs) -
counter_u64_fetch(zone->uz_frees);
CPU_FOREACH(i) {
/*
* See the comment in sysctl_vm_zone_stats() regarding the
* safety of accessing the per-cpu caches. With the zone lock
* held, it is safe, but can potentially result in stale data.
*/
nitems += zone->uz_cpu[i].uc_allocs -
zone->uz_cpu[i].uc_frees;
}
ZONE_UNLOCK(zone);
return (nitems < 0 ? 0 : nitems);
}
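The accounting in `uma_zone_get_cur()` sums the global alloc/free counters with each CPU cache's not-yet-flushed deltas and clamps at zero, since the per-CPU reads may be stale. A sketch using plain arrays in place of counter(9) and the per-CPU caches:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of uma_zone_get_cur(): current usage is global allocs
 * minus frees plus each CPU's unflushed deltas, clamped at zero
 * because the unlocked per-CPU reads can be stale.  zone_cur() is
 * a hypothetical stand-in; arrays model the per-CPU caches.
 */
static long
zone_cur(uint64_t allocs, uint64_t frees,
    const uint64_t *cpu_allocs, const uint64_t *cpu_frees, int ncpu)
{
	int64_t nitems;
	int i;

	nitems = (int64_t)(allocs - frees);
	for (i = 0; i < ncpu; i++)
		nitems += (int64_t)(cpu_allocs[i] - cpu_frees[i]);
	return (nitems < 0 ? 0 : (long)nitems);
}
```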
/* See uma.h */
void
uma_zone_set_init(uma_zone_t zone, uma_init uminit)
{
uma_keg_t keg;
KEG_GET(zone, keg);
KEG_LOCK(keg);
KASSERT(keg->uk_pages == 0,
("uma_zone_set_init on non-empty keg"));
keg->uk_init = uminit;
KEG_UNLOCK(keg);
}
/* See uma.h */
void
uma_zone_set_fini(uma_zone_t zone, uma_fini fini)
{
uma_keg_t keg;
KEG_GET(zone, keg);
KEG_LOCK(keg);
KASSERT(keg->uk_pages == 0,
("uma_zone_set_fini on non-empty keg"));
keg->uk_fini = fini;
KEG_UNLOCK(keg);
}
/* See uma.h */
void
uma_zone_set_zinit(uma_zone_t zone, uma_init zinit)
{
ZONE_LOCK(zone);
KASSERT(zone->uz_keg->uk_pages == 0,
("uma_zone_set_zinit on non-empty keg"));
zone->uz_init = zinit;
ZONE_UNLOCK(zone);
}
/* See uma.h */
void
uma_zone_set_zfini(uma_zone_t zone, uma_fini zfini)
{
ZONE_LOCK(zone);
KASSERT(zone->uz_keg->uk_pages == 0,
("uma_zone_set_zfini on non-empty keg"));
zone->uz_fini = zfini;
ZONE_UNLOCK(zone);
}
/* See uma.h */
/* XXX uk_freef is not actually used with the zone locked */
void
uma_zone_set_freef(uma_zone_t zone, uma_free freef)
{
uma_keg_t keg;
KEG_GET(zone, keg);
KASSERT(keg != NULL, ("uma_zone_set_freef: Invalid zone type"));
KEG_LOCK(keg);
keg->uk_freef = freef;
KEG_UNLOCK(keg);
}
/* See uma.h */
/* XXX uk_allocf is not actually used with the zone locked */
void
uma_zone_set_allocf(uma_zone_t zone, uma_alloc allocf)
{
uma_keg_t keg;
KEG_GET(zone, keg);
KEG_LOCK(keg);
keg->uk_allocf = allocf;
KEG_UNLOCK(keg);
}
/* See uma.h */
void
uma_zone_reserve(uma_zone_t zone, int items)
{
uma_keg_t keg;
KEG_GET(zone, keg);
KEG_LOCK(keg);
keg->uk_reserve = items;
KEG_UNLOCK(keg);
}
/* See uma.h */
int
uma_zone_reserve_kva(uma_zone_t zone, int count)
{
uma_keg_t keg;
vm_offset_t kva;
u_int pages;
KEG_GET(zone, keg);
pages = count / keg->uk_ipers;
if (pages * keg->uk_ipers < count)
pages++;
pages *= keg->uk_ppera;
#ifdef UMA_MD_SMALL_ALLOC
if (keg->uk_ppera > 1) {
#else
if (1) {
#endif
kva = kva_alloc((vm_size_t)pages * PAGE_SIZE);
if (kva == 0)
return (0);
} else
kva = 0;
ZONE_LOCK(zone);
MPASS(keg->uk_kva == 0);
keg->uk_kva = kva;
keg->uk_offset = 0;
zone->uz_max_items = pages * keg->uk_ipers;
#ifdef UMA_MD_SMALL_ALLOC
keg->uk_allocf = (keg->uk_ppera > 1) ? noobj_alloc : uma_small_alloc;
#else
keg->uk_allocf = noobj_alloc;
#endif
keg->uk_flags |= UMA_ZONE_NOFREE;
ZONE_UNLOCK(zone);
return (1);
}
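The page computation at the top of `uma_zone_reserve_kva()` rounds the item count up to whole slabs (`uk_ipers` items each) and multiplies by pages per slab (`uk_ppera`). A one-function sketch; `kva_pages` is a hypothetical helper, not a UMA API:

```c
#include <assert.h>

/*
 * Sketch of the page computation in uma_zone_reserve_kva(): round
 * the item count up to whole slabs of ipers items, then multiply
 * by ppera pages per slab.  kva_pages() is hypothetical.
 */
static unsigned
kva_pages(unsigned count, unsigned ipers, unsigned ppera)
{
	unsigned pages;

	pages = count / ipers;
	if (pages * ipers < count)
		pages++;		/* round up to a whole slab */
	return (pages * ppera);
}
```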
/* See uma.h */
void
uma_prealloc(uma_zone_t zone, int items)
{
struct vm_domainset_iter di;
uma_domain_t dom;
uma_slab_t slab;
uma_keg_t keg;
int aflags, domain, slabs;
KEG_GET(zone, keg);
KEG_LOCK(keg);
slabs = items / keg->uk_ipers;
if (slabs * keg->uk_ipers < items)
slabs++;
while (slabs-- > 0) {
aflags = M_NOWAIT;
vm_domainset_iter_policy_ref_init(&di, &keg->uk_dr, &domain,
&aflags);
for (;;) {
slab = keg_alloc_slab(keg, zone, domain, M_WAITOK,
aflags);
if (slab != NULL) {
MPASS(slab->us_keg == keg);
dom = &keg->uk_domain[slab->us_domain];
LIST_INSERT_HEAD(&dom->ud_free_slab, slab,
us_link);
break;
}
KEG_LOCK(keg);
if (vm_domainset_iter_policy(&di, &domain) != 0) {
KEG_UNLOCK(keg);
vm_wait_doms(&keg->uk_dr.dr_policy->ds_mask);
KEG_LOCK(keg);
}
}
}
KEG_UNLOCK(keg);
}
/* See uma.h */
static void
uma_reclaim_locked(bool kmem_danger)
{
CTR0(KTR_UMA, "UMA: vm asked us to release pages!");
sx_assert(&uma_drain_lock, SA_XLOCKED);
bucket_enable();
zone_foreach(zone_drain);
if (vm_page_count_min() || kmem_danger) {
cache_drain_safe(NULL);
zone_foreach(zone_drain);
}
/*
* The slab zone is visited early in the pass above, but draining the
* other zones frees slabs into it; visit it again so that its
* now-empty pages can be freed. We have to do the same for buckets.
*/
zone_drain(slabzone);
bucket_zone_drain();
}
void
uma_reclaim(void)
{
sx_xlock(&uma_drain_lock);
uma_reclaim_locked(false);
sx_xunlock(&uma_drain_lock);
}
static volatile int uma_reclaim_needed;
void
uma_reclaim_wakeup(void)
{
if (atomic_fetchadd_int(&uma_reclaim_needed, 1) == 0)
wakeup(uma_reclaim);
}
void
uma_reclaim_worker(void *arg __unused)
{
for (;;) {
sx_xlock(&uma_drain_lock);
while (atomic_load_int(&uma_reclaim_needed) == 0)
sx_sleep(uma_reclaim, &uma_drain_lock, PVM, "umarcl",
hz);
sx_xunlock(&uma_drain_lock);
EVENTHANDLER_INVOKE(vm_lowmem, VM_LOW_KMEM);
sx_xlock(&uma_drain_lock);
uma_reclaim_locked(true);
atomic_store_int(&uma_reclaim_needed, 0);
sx_xunlock(&uma_drain_lock);
/* Don't fire more than once per-second. */
pause("umarclslp", hz);
}
}
/* See uma.h */
int
uma_zone_exhausted(uma_zone_t zone)
{
int full;
ZONE_LOCK(zone);
full = zone->uz_sleepers > 0;
ZONE_UNLOCK(zone);
return (full);
}
int
uma_zone_exhausted_nolock(uma_zone_t zone)
{
return (zone->uz_sleepers > 0);
}
void *
uma_large_malloc_domain(vm_size_t size, int domain, int wait)
{
struct domainset *policy;
vm_offset_t addr;
uma_slab_t slab;
if (domain != UMA_ANYDOMAIN) {
/* avoid allocs targeting empty domains */
if (VM_DOMAIN_EMPTY(domain))
domain = UMA_ANYDOMAIN;
}
slab = zone_alloc_item(slabzone, NULL, domain, wait);
if (slab == NULL)
return (NULL);
policy = (domain == UMA_ANYDOMAIN) ? DOMAINSET_RR() :
DOMAINSET_FIXED(domain);
addr = kmem_malloc_domainset(policy, size, wait);
if (addr != 0) {
vsetslab(addr, slab);
slab->us_data = (void *)addr;
slab->us_flags = UMA_SLAB_KERNEL | UMA_SLAB_MALLOC;
slab->us_size = size;
slab->us_domain = vm_phys_domain(PHYS_TO_VM_PAGE(
pmap_kextract(addr)));
uma_total_inc(size);
} else {
zone_free_item(slabzone, slab, NULL, SKIP_NONE);
}
return ((void *)addr);
}
void *
uma_large_malloc(vm_size_t size, int wait)
{
return uma_large_malloc_domain(size, UMA_ANYDOMAIN, wait);
}
void
uma_large_free(uma_slab_t slab)
{
KASSERT((slab->us_flags & UMA_SLAB_KERNEL) != 0,
("uma_large_free: Memory not allocated with uma_large_malloc."));
kmem_free((vm_offset_t)slab->us_data, slab->us_size);
uma_total_dec(slab->us_size);
zone_free_item(slabzone, slab, NULL, SKIP_NONE);
}
static void
uma_zero_item(void *item, uma_zone_t zone)
{
bzero(item, zone->uz_size);
}
unsigned long
uma_limit(void)
{
return (uma_kmem_limit);
}
void
uma_set_limit(unsigned long limit)
{
uma_kmem_limit = limit;
}
unsigned long
uma_size(void)
{
return (uma_kmem_total);
}
long
uma_avail(void)
{
return (uma_kmem_limit - uma_kmem_total);
}
void
uma_print_stats(void)
{
zone_foreach(uma_print_zone);
}
static void
slab_print(uma_slab_t slab)
{
printf("slab: keg %p, data %p, freecount %d\n",
slab->us_keg, slab->us_data, slab->us_freecount);
}
static void
cache_print(uma_cache_t cache)
{
printf("alloc: %p(%d), free: %p(%d)\n",
cache->uc_allocbucket,
cache->uc_allocbucket?cache->uc_allocbucket->ub_cnt:0,
cache->uc_freebucket,
cache->uc_freebucket?cache->uc_freebucket->ub_cnt:0);
}
static void
uma_print_keg(uma_keg_t keg)
{
uma_domain_t dom;
uma_slab_t slab;
int i;
printf("keg: %s(%p) size %d(%d) flags %#x ipers %d ppera %d "
"out %d free %d\n",
keg->uk_name, keg, keg->uk_size, keg->uk_rsize, keg->uk_flags,
keg->uk_ipers, keg->uk_ppera,
(keg->uk_pages / keg->uk_ppera) * keg->uk_ipers - keg->uk_free,
keg->uk_free);
for (i = 0; i < vm_ndomains; i++) {
dom = &keg->uk_domain[i];
printf("Part slabs:\n");
LIST_FOREACH(slab, &dom->ud_part_slab, us_link)
slab_print(slab);
printf("Free slabs:\n");
LIST_FOREACH(slab, &dom->ud_free_slab, us_link)
slab_print(slab);
printf("Full slabs:\n");
LIST_FOREACH(slab, &dom->ud_full_slab, us_link)
slab_print(slab);
}
}
void
uma_print_zone(uma_zone_t zone)
{
uma_cache_t cache;
int i;
printf("zone: %s(%p) size %d maxitems %ju flags %#x\n",
zone->uz_name, zone, zone->uz_size, (uintmax_t)zone->uz_max_items,
zone->uz_flags);
if (zone->uz_lockptr != &zone->uz_lock)
uma_print_keg(zone->uz_keg);
CPU_FOREACH(i) {
cache = &zone->uz_cpu[i];
printf("CPU %d Cache:\n", i);
cache_print(cache);
}
}
#ifdef DDB
/*
* Generate statistics across both the zone and its per-CPU caches. Return
* desired statistics if the pointer is non-NULL for that statistic.
*
* Note: does not update the zone statistics, as it can't safely clear the
* per-CPU cache statistic.
*
* XXXRW: Following the uc_allocbucket and uc_freebucket pointers here isn't
* safe from off-CPU; we should modify the caches to track this information
* directly so that we don't have to.
*/
static void
uma_zone_sumstat(uma_zone_t z, long *cachefreep, uint64_t *allocsp,
uint64_t *freesp, uint64_t *sleepsp)
{
uma_cache_t cache;
uint64_t allocs, frees, sleeps;
int cachefree, cpu;
allocs = frees = sleeps = 0;
cachefree = 0;
CPU_FOREACH(cpu) {
cache = &z->uz_cpu[cpu];
if (cache->uc_allocbucket != NULL)
cachefree += cache->uc_allocbucket->ub_cnt;
if (cache->uc_freebucket != NULL)
cachefree += cache->uc_freebucket->ub_cnt;
allocs += cache->uc_allocs;
frees += cache->uc_frees;
}
allocs += counter_u64_fetch(z->uz_allocs);
frees += counter_u64_fetch(z->uz_frees);
sleeps += z->uz_sleeps;
if (cachefreep != NULL)
*cachefreep = cachefree;
if (allocsp != NULL)
*allocsp = allocs;
if (freesp != NULL)
*freesp = frees;
if (sleepsp != NULL)
*sleepsp = sleeps;
}
#endif /* DDB */
static int
sysctl_vm_zone_count(SYSCTL_HANDLER_ARGS)
{
uma_keg_t kz;
uma_zone_t z;
int count;
count = 0;
rw_rlock(&uma_rwlock);
LIST_FOREACH(kz, &uma_kegs, uk_link) {
LIST_FOREACH(z, &kz->uk_zones, uz_link)
count++;
}
+ LIST_FOREACH(z, &uma_cachezones, uz_link)
+ count++;
+
rw_runlock(&uma_rwlock);
return (sysctl_handle_int(oidp, &count, 0, req));
}
+static void
+uma_vm_zone_stats(struct uma_type_header *uth, uma_zone_t z, struct sbuf *sbuf,
+ struct uma_percpu_stat *ups, bool internal)
+{
+ uma_zone_domain_t zdom;
+ uma_cache_t cache;
+ int i;
+
+ for (i = 0; i < vm_ndomains; i++) {
+ zdom = &z->uz_domain[i];
+ uth->uth_zone_free += zdom->uzd_nitems;
+ }
+ uth->uth_allocs = counter_u64_fetch(z->uz_allocs);
+ uth->uth_frees = counter_u64_fetch(z->uz_frees);
+ uth->uth_fails = counter_u64_fetch(z->uz_fails);
+ uth->uth_sleeps = z->uz_sleeps;
+ /*
+ * While it is not normally safe to access the cache
+ * bucket pointers while not on the CPU that owns the
+ * cache, we only allow the pointers to be exchanged
+ * without the zone lock held, not invalidated, so
+ * accept the possible race associated with bucket
+ * exchange during monitoring.
+ */
+ for (i = 0; i < mp_maxid + 1; i++) {
+ bzero(&ups[i], sizeof(*ups));
+ if (internal || CPU_ABSENT(i))
+ continue;
+ cache = &z->uz_cpu[i];
+ if (cache->uc_allocbucket != NULL)
+ ups[i].ups_cache_free +=
+ cache->uc_allocbucket->ub_cnt;
+ if (cache->uc_freebucket != NULL)
+ ups[i].ups_cache_free +=
+ cache->uc_freebucket->ub_cnt;
+ ups[i].ups_allocs = cache->uc_allocs;
+ ups[i].ups_frees = cache->uc_frees;
+ }
+}
+
static int
sysctl_vm_zone_stats(SYSCTL_HANDLER_ARGS)
{
struct uma_stream_header ush;
struct uma_type_header uth;
struct uma_percpu_stat *ups;
- uma_zone_domain_t zdom;
struct sbuf sbuf;
- uma_cache_t cache;
uma_keg_t kz;
uma_zone_t z;
int count, error, i;
error = sysctl_wire_old_buffer(req, 0);
if (error != 0)
return (error);
sbuf_new_for_sysctl(&sbuf, NULL, 128, req);
sbuf_clear_flags(&sbuf, SBUF_INCLUDENUL);
ups = malloc((mp_maxid + 1) * sizeof(*ups), M_TEMP, M_WAITOK);
count = 0;
rw_rlock(&uma_rwlock);
LIST_FOREACH(kz, &uma_kegs, uk_link) {
LIST_FOREACH(z, &kz->uk_zones, uz_link)
count++;
}
+ LIST_FOREACH(z, &uma_cachezones, uz_link)
+ count++;
+
/*
* Insert stream header.
*/
bzero(&ush, sizeof(ush));
ush.ush_version = UMA_STREAM_VERSION;
ush.ush_maxcpus = (mp_maxid + 1);
ush.ush_count = count;
(void)sbuf_bcat(&sbuf, &ush, sizeof(ush));
LIST_FOREACH(kz, &uma_kegs, uk_link) {
LIST_FOREACH(z, &kz->uk_zones, uz_link) {
bzero(&uth, sizeof(uth));
ZONE_LOCK(z);
strlcpy(uth.uth_name, z->uz_name, UTH_MAX_NAME);
uth.uth_align = kz->uk_align;
uth.uth_size = kz->uk_size;
uth.uth_rsize = kz->uk_rsize;
if (z->uz_max_items > 0)
uth.uth_pages = (z->uz_items / kz->uk_ipers) *
kz->uk_ppera;
else
uth.uth_pages = kz->uk_pages;
uth.uth_maxpages = (z->uz_max_items / kz->uk_ipers) *
kz->uk_ppera;
uth.uth_limit = z->uz_max_items;
uth.uth_keg_free = z->uz_keg->uk_free;
/*
* A zone is secondary if it is not the first entry
* on the keg's zone list.
*/
if ((z->uz_flags & UMA_ZONE_SECONDARY) &&
(LIST_FIRST(&kz->uk_zones) != z))
uth.uth_zone_flags = UTH_ZONE_SECONDARY;
-
- for (i = 0; i < vm_ndomains; i++) {
- zdom = &z->uz_domain[i];
- uth.uth_zone_free += zdom->uzd_nitems;
- }
- uth.uth_allocs = counter_u64_fetch(z->uz_allocs);
- uth.uth_frees = counter_u64_fetch(z->uz_frees);
- uth.uth_fails = counter_u64_fetch(z->uz_fails);
- uth.uth_sleeps = z->uz_sleeps;
- /*
- * While it is not normally safe to access the cache
- * bucket pointers while not on the CPU that owns the
- * cache, we only allow the pointers to be exchanged
- * without the zone lock held, not invalidated, so
- * accept the possible race associated with bucket
- * exchange during monitoring.
- */
- for (i = 0; i < mp_maxid + 1; i++) {
- bzero(&ups[i], sizeof(*ups));
- if (kz->uk_flags & UMA_ZFLAG_INTERNAL ||
- CPU_ABSENT(i))
- continue;
- cache = &z->uz_cpu[i];
- if (cache->uc_allocbucket != NULL)
- ups[i].ups_cache_free +=
- cache->uc_allocbucket->ub_cnt;
- if (cache->uc_freebucket != NULL)
- ups[i].ups_cache_free +=
- cache->uc_freebucket->ub_cnt;
- ups[i].ups_allocs = cache->uc_allocs;
- ups[i].ups_frees = cache->uc_frees;
- }
+ uma_vm_zone_stats(&uth, z, &sbuf, ups,
+ kz->uk_flags & UMA_ZFLAG_INTERNAL);
ZONE_UNLOCK(z);
(void)sbuf_bcat(&sbuf, &uth, sizeof(uth));
for (i = 0; i < mp_maxid + 1; i++)
(void)sbuf_bcat(&sbuf, &ups[i], sizeof(ups[i]));
}
}
+ LIST_FOREACH(z, &uma_cachezones, uz_link) {
+ bzero(&uth, sizeof(uth));
+ ZONE_LOCK(z);
+ strlcpy(uth.uth_name, z->uz_name, UTH_MAX_NAME);
+ uth.uth_size = z->uz_size;
+ uma_vm_zone_stats(&uth, z, &sbuf, ups, false);
+ ZONE_UNLOCK(z);
+ (void)sbuf_bcat(&sbuf, &uth, sizeof(uth));
+ for (i = 0; i < mp_maxid + 1; i++)
+ (void)sbuf_bcat(&sbuf, &ups[i], sizeof(ups[i]));
+ }
+
rw_runlock(&uma_rwlock);
error = sbuf_finish(&sbuf);
sbuf_delete(&sbuf);
free(ups, M_TEMP);
return (error);
}
int
sysctl_handle_uma_zone_max(SYSCTL_HANDLER_ARGS)
{
uma_zone_t zone = *(uma_zone_t *)arg1;
int error, max;
max = uma_zone_get_max(zone);
error = sysctl_handle_int(oidp, &max, 0, req);
if (error || !req->newptr)
return (error);
uma_zone_set_max(zone, max);
return (0);
}
int
sysctl_handle_uma_zone_cur(SYSCTL_HANDLER_ARGS)
{
uma_zone_t zone = *(uma_zone_t *)arg1;
int cur;
cur = uma_zone_get_cur(zone);
return (sysctl_handle_int(oidp, &cur, 0, req));
}
#ifdef INVARIANTS
static uma_slab_t
uma_dbg_getslab(uma_zone_t zone, void *item)
{
uma_slab_t slab;
uma_keg_t keg;
uint8_t *mem;
mem = (uint8_t *)((uintptr_t)item & (~UMA_SLAB_MASK));
if (zone->uz_flags & UMA_ZONE_VTOSLAB) {
slab = vtoslab((vm_offset_t)mem);
} else {
/*
* It is safe to return the slab here even though the
* zone is unlocked because the item's allocation state
* essentially holds a reference.
*/
if (zone->uz_lockptr == &zone->uz_lock)
return (NULL);
ZONE_LOCK(zone);
keg = zone->uz_keg;
if (keg->uk_flags & UMA_ZONE_HASH)
slab = hash_sfind(&keg->uk_hash, mem);
else
slab = (uma_slab_t)(mem + keg->uk_pgoff);
ZONE_UNLOCK(zone);
}
return (slab);
}
static bool
uma_dbg_zskip(uma_zone_t zone, void *mem)
{
if (zone->uz_lockptr == &zone->uz_lock)
return (true);
return (uma_dbg_kskip(zone->uz_keg, mem));
}
static bool
uma_dbg_kskip(uma_keg_t keg, void *mem)
{
uintptr_t idx;
if (dbg_divisor == 0)
return (true);
if (dbg_divisor == 1)
return (false);
idx = (uintptr_t)mem >> PAGE_SHIFT;
if (keg->uk_ipers > 1) {
idx *= keg->uk_ipers;
idx += ((uintptr_t)mem & PAGE_MASK) / keg->uk_rsize;
}
if ((idx / dbg_divisor) * dbg_divisor != idx) {
counter_u64_add(uma_skip_cnt, 1);
return (true);
}
counter_u64_add(uma_dbg_cnt, 1);
return (false);
}
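The sampling test in `uma_dbg_kskip()` checks only item indices that are exact multiples of `dbg_divisor`; the expression `(idx / d) * d != idx` is a division-based way of writing `idx % d != 0`. A sketch, with `dbg_check` as a hypothetical stand-in returning whether an index would be checked:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the sampling decision in uma_dbg_kskip(): an item is
 * debug-checked only when its index is an exact multiple of the
 * divisor.  A divisor of 0 disables checking entirely, 1 checks
 * every item.  dbg_check() is a hypothetical helper.
 */
static int
dbg_check(uintptr_t idx, uintptr_t divisor)
{
	if (divisor == 0)
		return (0);		/* debugging disabled: skip all */
	if (divisor == 1)
		return (1);		/* check every item */
	/* Equivalent to idx % divisor == 0. */
	return ((idx / divisor) * divisor == idx);
}
```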
/*
* Set up the slab's freei data such that uma_dbg_free can function.
*
*/
static void
uma_dbg_alloc(uma_zone_t zone, uma_slab_t slab, void *item)
{
uma_keg_t keg;
int freei;
if (slab == NULL) {
slab = uma_dbg_getslab(zone, item);
if (slab == NULL)
panic("uma: item %p did not belong to zone %s\n",
item, zone->uz_name);
}
keg = slab->us_keg;
freei = ((uintptr_t)item - (uintptr_t)slab->us_data) / keg->uk_rsize;
if (BIT_ISSET(SLAB_SETSIZE, freei, &slab->us_debugfree))
panic("Duplicate alloc of %p from zone %p(%s) slab %p(%d)\n",
item, zone, zone->uz_name, slab, freei);
BIT_SET_ATOMIC(SLAB_SETSIZE, freei, &slab->us_debugfree);
return;
}
/*
* Verifies freed addresses. Checks for alignment, valid slab membership
* and duplicate frees.
*
*/
static void
uma_dbg_free(uma_zone_t zone, uma_slab_t slab, void *item)
{
uma_keg_t keg;
int freei;
if (slab == NULL) {
slab = uma_dbg_getslab(zone, item);
if (slab == NULL)
panic("uma: Freed item %p did not belong to zone %s\n",
item, zone->uz_name);
}
keg = slab->us_keg;
freei = ((uintptr_t)item - (uintptr_t)slab->us_data) / keg->uk_rsize;
if (freei >= keg->uk_ipers)
panic("Invalid free of %p from zone %p(%s) slab %p(%d)\n",
item, zone, zone->uz_name, slab, freei);
if (((freei * keg->uk_rsize) + slab->us_data) != item)
panic("Unaligned free of %p from zone %p(%s) slab %p(%d)\n",
item, zone, zone->uz_name, slab, freei);
if (!BIT_ISSET(SLAB_SETSIZE, freei, &slab->us_debugfree))
panic("Duplicate free of %p from zone %p(%s) slab %p(%d)\n",
item, zone, zone->uz_name, slab, freei);
BIT_CLR_ATOMIC(SLAB_SETSIZE, freei, &slab->us_debugfree);
}
#endif /* INVARIANTS */
#ifdef DDB
DB_SHOW_COMMAND(uma, db_show_uma)
{
uma_keg_t kz;
uma_zone_t z;
uint64_t allocs, frees, sleeps;
long cachefree;
int i;
db_printf("%18s %8s %8s %8s %12s %8s %8s\n", "Zone", "Size", "Used",
"Free", "Requests", "Sleeps", "Bucket");
LIST_FOREACH(kz, &uma_kegs, uk_link) {
LIST_FOREACH(z, &kz->uk_zones, uz_link) {
if (kz->uk_flags & UMA_ZFLAG_INTERNAL) {
allocs = counter_u64_fetch(z->uz_allocs);
frees = counter_u64_fetch(z->uz_frees);
sleeps = z->uz_sleeps;
cachefree = 0;
} else
uma_zone_sumstat(z, &cachefree, &allocs,
&frees, &sleeps);
if (!((z->uz_flags & UMA_ZONE_SECONDARY) &&
(LIST_FIRST(&kz->uk_zones) != z)))
cachefree += kz->uk_free;
for (i = 0; i < vm_ndomains; i++)
cachefree += z->uz_domain[i].uzd_nitems;
db_printf("%18s %8ju %8jd %8ld %12ju %8ju %8u\n",
z->uz_name, (uintmax_t)kz->uk_size,
(intmax_t)(allocs - frees), cachefree,
(uintmax_t)allocs, sleeps, z->uz_count);
if (db_pager_quit)
return;
}
}
}
DB_SHOW_COMMAND(umacache, db_show_umacache)
{
uma_zone_t z;
uint64_t allocs, frees;
long cachefree;
int i;
db_printf("%18s %8s %8s %8s %12s %8s\n", "Zone", "Size", "Used", "Free",
"Requests", "Bucket");
LIST_FOREACH(z, &uma_cachezones, uz_link) {
uma_zone_sumstat(z, &cachefree, &allocs, &frees, NULL);
for (i = 0; i < vm_ndomains; i++)
cachefree += z->uz_domain[i].uzd_nitems;
db_printf("%18s %8ju %8jd %8ld %12ju %8u\n",
z->uz_name, (uintmax_t)z->uz_size,
(intmax_t)(allocs - frees), cachefree,
(uintmax_t)allocs, z->uz_count);
if (db_pager_quit)
return;
}
}
#endif /* DDB */
Index: projects/clang800-import/sys/vm/uma_int.h
===================================================================
--- projects/clang800-import/sys/vm/uma_int.h (revision 343955)
+++ projects/clang800-import/sys/vm/uma_int.h (revision 343956)
@@ -1,499 +1,498 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2002-2005, 2009, 2013 Jeffrey Roberson <jeff@FreeBSD.org>
* Copyright (c) 2004, 2005 Bosko Milekic <bmilekic@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice unmodified, this list of conditions, and the following
* disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* $FreeBSD$
*
*/
#include <sys/counter.h>
#include <sys/_bitset.h>
#include <sys/_domainset.h>
#include <sys/_task.h>
/*
* This file includes definitions, structures, prototypes, and inlines that
* should not be used outside of the actual implementation of UMA.
*/
/*
* The brief summary: Zones describe unique allocation types. Zones are
* organized into per-CPU caches which are filled by buckets. Buckets are
* organized according to memory domains. Buckets are filled from kegs which
* are also organized according to memory domains. Kegs describe a unique
* allocation type, backend memory provider, and layout. Kegs are associated
* with one or more zones and zones reference one or more kegs. Kegs provide
* slabs which are virtually contiguous collections of pages. Each slab is
* broken down into one or more items that will satisfy an individual allocation.
*
* Allocation is satisfied in the following order:
* 1) Per-CPU cache
* 2) Per-domain cache of buckets
* 3) Slab from any of N kegs
* 4) Backend page provider
*
* More detail on individual objects is contained below:
*
* Kegs contain lists of slabs which are stored in either the full bin, empty
* bin, or partially allocated bin, to reduce fragmentation. They also contain
* the user-supplied value for size, which is adjusted for alignment, with
* the result stored in rsize. The Keg also stores information for
* managing a hash of page addresses that maps pages to uma_slab_t structures
* for pages that don't have embedded uma_slab_t's.
*
* Keg slab lists are organized by memory domain to support NUMA allocation
* policies. By default allocations are spread across domains to reduce the
* potential for hotspots. Special keg creation flags may be specified to
* prefer local allocation. However, there is no strict enforcement as frees
* may happen on any CPU and these are returned to the CPU-local cache
* regardless of the originating domain.
*
* The uma_slab_t may be embedded in a UMA_SLAB_SIZE chunk of memory or it may
* be allocated off the page from a special slab zone. The free list within a
* slab is managed with a bitmask. For item sizes that would yield more than
* 10% memory waste we potentially allocate a separate uma_slab_t if this will
* improve the number of items per slab that will fit.
*
* The only really gross cases, with regard to memory waste, are for those
* items that are just over half the page size. You can get nearly 50% waste,
* so you fall back to the memory footprint of the power of two allocator. I
* have looked at memory allocation sizes on many of the machines available to
* me, and there does not seem to be an abundance of allocations at this range
* so at this time it may not make sense to optimize for it. This can, of
* course, be solved with dynamic slab sizes.
*
* Kegs may serve multiple Zones but by far most of the time they only serve
* one. When a Zone is created, a Keg is allocated and setup for it. While
* the backing Keg stores slabs, the Zone caches Buckets of items allocated
* from the slabs. Each Zone is equipped with an init/fini and ctor/dtor
* pair, as well as with its own set of small per-CPU caches, layered above
* the Zone's general Bucket cache.
*
* The PCPU caches are protected by critical sections, and may be accessed
* safely only from their associated CPU, while the Zones backed by the same
* Keg all share a common Keg lock (to coalesce contention on the backing
* slabs). The backing Keg typically only serves one Zone but in the case of
* multiple Zones, one of the Zones is considered the Master Zone and all
* Zone-related stats from the Keg are done in the Master Zone. For an
* example of a Multi-Zone setup, refer to the Mbuf allocation code.
*/
/*
* This is the representation of a normal (non-OFFPAGE) slab:
*
* i == item
* s == slab pointer
*
* <---------------- Page (UMA_SLAB_SIZE) ------------------>
* ___________________________________________________________
* | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ___________ |
* ||i||i||i||i||i||i||i||i||i||i||i||i||i||i||i| |slab header||
* ||_||_||_||_||_||_||_||_||_||_||_||_||_||_||_| |___________||
* |___________________________________________________________|
*
*
* This is an OFFPAGE slab. These can be larger than UMA_SLAB_SIZE.
*
* ___________________________________________________________
* | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |
* ||i||i||i||i||i||i||i||i||i||i||i||i||i||i||i||i||i||i||i| |
* ||_||_||_||_||_||_||_||_||_||_||_||_||_||_||_||_||_||_||_| |
* |___________________________________________________________|
* ___________ ^
* |slab header| |
* |___________|---*
*
*/
#ifndef VM_UMA_INT_H
#define VM_UMA_INT_H
#define UMA_SLAB_SIZE PAGE_SIZE /* How big are our slabs? */
#define UMA_SLAB_MASK (PAGE_SIZE - 1) /* Mask to get back to the page */
#define UMA_SLAB_SHIFT PAGE_SHIFT /* Number of bits in PAGE_MASK */
/* Max waste percentage before going to off page slab management */
#define UMA_MAX_WASTE 10
/*
* Actual size of a uma_slab when it is placed at the end of a page
* with a pointer-sized alignment requirement.
*/
#define SIZEOF_UMA_SLAB ((sizeof(struct uma_slab) & UMA_ALIGN_PTR) ? \
(sizeof(struct uma_slab) & ~UMA_ALIGN_PTR) + \
(UMA_ALIGN_PTR + 1) : sizeof(struct uma_slab))
/*
* Size of memory available for actual items in a non-offpage, single-page slab.
*/
#define UMA_SLAB_SPACE (PAGE_SIZE - SIZEOF_UMA_SLAB)
/*
* I doubt there will be many cases where this is exceeded. This is the initial
* size of the hash table for uma_slabs that are managed off page. This hash
* does expand by powers of two. Currently it doesn't get smaller.
*/
#define UMA_HASH_SIZE_INIT 32
/*
* I should investigate other hashing algorithms. This should yield a low
* number of collisions if the pages are relatively contiguous.
*/
#define UMA_HASH(h, s) ((((uintptr_t)s) >> UMA_SLAB_SHIFT) & (h)->uh_hashmask)
#define UMA_HASH_INSERT(h, s, mem) \
SLIST_INSERT_HEAD(&(h)->uh_slab_hash[UMA_HASH((h), \
(mem))], (s), us_hlink)
#define UMA_HASH_REMOVE(h, s, mem) \
SLIST_REMOVE(&(h)->uh_slab_hash[UMA_HASH((h), \
(mem))], (s), uma_slab, us_hlink)
/* Hash table for freed address -> slab translation */
SLIST_HEAD(slabhead, uma_slab);
struct uma_hash {
struct slabhead *uh_slab_hash; /* Hash table for slabs */
int uh_hashsize; /* Current size of the hash table */
int uh_hashmask; /* Mask used during hashing */
};
/*
* align field or structure to cache line
*/
#if defined(__amd64__) || defined(__powerpc64__)
#define UMA_ALIGN __aligned(128)
#else
#define UMA_ALIGN
#endif
/*
* Structures for per cpu queues.
*/
struct uma_bucket {
LIST_ENTRY(uma_bucket) ub_link; /* Link into the zone */
int16_t ub_cnt; /* Count of items in bucket. */
int16_t ub_entries; /* Max items. */
void *ub_bucket[]; /* actual allocation storage */
};
typedef struct uma_bucket * uma_bucket_t;
struct uma_cache {
uma_bucket_t uc_freebucket; /* Bucket we're freeing to */
uma_bucket_t uc_allocbucket; /* Bucket to allocate from */
uint64_t uc_allocs; /* Count of allocations */
uint64_t uc_frees; /* Count of frees */
} UMA_ALIGN;
typedef struct uma_cache * uma_cache_t;
/*
* Per-domain memory list. Embedded in the kegs.
*/
struct uma_domain {
LIST_HEAD(,uma_slab) ud_part_slab; /* partially allocated slabs */
LIST_HEAD(,uma_slab) ud_free_slab; /* empty slab list */
LIST_HEAD(,uma_slab) ud_full_slab; /* full slabs */
};
typedef struct uma_domain * uma_domain_t;
/*
* Keg management structure
*
* TODO: Optimize for cache line size
*
*/
struct uma_keg {
struct mtx uk_lock; /* Lock for the keg must be first.
* See shared uz_keg/uz_lockptr
* member of struct uma_zone. */
struct uma_hash uk_hash;
LIST_HEAD(,uma_zone) uk_zones; /* Keg's zones */
struct domainset_ref uk_dr; /* Domain selection policy. */
uint32_t uk_align; /* Alignment mask */
uint32_t uk_pages; /* Total page count */
uint32_t uk_free; /* Count of items free in slabs */
uint32_t uk_reserve; /* Number of reserved items. */
uint32_t uk_size; /* Requested size of each item */
uint32_t uk_rsize; /* Real size of each item */
uma_init uk_init; /* Keg's init routine */
uma_fini uk_fini; /* Keg's fini routine */
uma_alloc uk_allocf; /* Allocation function */
uma_free uk_freef; /* Free routine */
u_long uk_offset; /* Next free offset from base KVA */
vm_offset_t uk_kva; /* Zone base KVA */
uma_zone_t uk_slabzone; /* Slab zone backing us, if OFFPAGE */
uint32_t uk_pgoff; /* Offset to uma_slab struct */
uint16_t uk_ppera; /* pages per allocation from backend */
uint16_t uk_ipers; /* Items per slab */
uint32_t uk_flags; /* Internal flags */
/* Least used fields go to the last cache line. */
const char *uk_name; /* Name of creating zone. */
LIST_ENTRY(uma_keg) uk_link; /* List of all kegs */
/* Must be last, variable sized. */
struct uma_domain uk_domain[]; /* Keg's slab lists. */
};
typedef struct uma_keg * uma_keg_t;
/*
* Free bits per-slab.
*/
#define SLAB_SETSIZE (PAGE_SIZE / UMA_SMALLEST_UNIT)
BITSET_DEFINE(slabbits, SLAB_SETSIZE);
/*
* The slab structure manages a single contiguous allocation from backing
* store and subdivides it into individually allocatable items.
*/
struct uma_slab {
uma_keg_t us_keg; /* Keg we live in */
union {
LIST_ENTRY(uma_slab) _us_link; /* slabs in zone */
unsigned long _us_size; /* Size of allocation */
} us_type;
SLIST_ENTRY(uma_slab) us_hlink; /* Link for hash table */
uint8_t *us_data; /* First item */
struct slabbits us_free; /* Free bitmask. */
#ifdef INVARIANTS
struct slabbits us_debugfree; /* Debug bitmask. */
#endif
uint16_t us_freecount; /* How many are free? */
uint8_t us_flags; /* Page flags see uma.h */
uint8_t us_domain; /* Backing NUMA domain. */
};
#define us_link us_type._us_link
#define us_size us_type._us_size
#if MAXMEMDOM >= 255
#error "Slab domain type insufficient"
#endif
typedef struct uma_slab * uma_slab_t;
-typedef uma_slab_t (*uma_slaballoc)(uma_zone_t, uma_keg_t, int, int);
struct uma_zone_domain {
LIST_HEAD(,uma_bucket) uzd_buckets; /* full buckets */
long uzd_nitems; /* total item count */
long uzd_imax; /* maximum item count this period */
long uzd_imin; /* minimum item count this period */
long uzd_wss; /* working set size estimate */
};
typedef struct uma_zone_domain * uma_zone_domain_t;
/*
* Zone management structure
*
* TODO: Optimize for cache line size
*
*/
struct uma_zone {
/* Offset 0, used in alloc/free fast/medium fast path and const. */
union {
uma_keg_t uz_keg; /* This zone's keg */
struct mtx *uz_lockptr; /* To keg or to self */
};
struct uma_zone_domain *uz_domain; /* per-domain buckets */
uint32_t uz_flags; /* Flags inherited from kegs */
uint32_t uz_size; /* Size inherited from kegs */
uma_ctor uz_ctor; /* Constructor for each allocation */
uma_dtor uz_dtor; /* Destructor */
uint64_t uz_items; /* Total items count */
uint64_t uz_max_items; /* Maximum number of items to alloc */
uint32_t uz_sleepers; /* Number of sleepers on memory */
uint16_t uz_count; /* Amount of items in full bucket */
uint16_t uz_count_max; /* Maximum amount of items there */
/* Offset 64, used in bucket replenish. */
uma_import uz_import; /* Import new memory to cache. */
uma_release uz_release; /* Release memory from cache. */
void *uz_arg; /* Import/release argument. */
uma_init uz_init; /* Initializer for each item */
uma_fini uz_fini; /* Finalizer for each item. */
- uma_slaballoc uz_slab; /* Allocate a slab from the backend. */
+ void *uz_spare;
uint64_t uz_bkt_count; /* Items in bucket cache */
uint64_t uz_bkt_max; /* Maximum bucket cache size */
/* Offset 128 Rare. */
/*
* The lock is placed here to avoid the adjacent-line prefetcher
* in fast paths and to take up space near infrequently accessed
* members to reduce alignment overhead.
*/
struct mtx uz_lock; /* Lock for the zone */
LIST_ENTRY(uma_zone) uz_link; /* List of all zones in keg */
const char *uz_name; /* Text name of the zone */
/* The next two fields are used to print rate-limited warnings. */
const char *uz_warning; /* Warning to print on failure */
struct timeval uz_ratecheck; /* Warnings rate-limiting */
struct task uz_maxaction; /* Task to run when at limit */
uint16_t uz_count_min; /* Minimal amount of items in bucket */
/* Offset 256, stats. */
counter_u64_t uz_allocs; /* Total number of allocations */
counter_u64_t uz_frees; /* Total number of frees */
counter_u64_t uz_fails; /* Total number of alloc failures */
uint64_t uz_sleeps; /* Total number of alloc sleeps */
/*
* This HAS to be the last item because we adjust the zone size
* based on NCPU and then allocate the space for the zones.
*/
struct uma_cache uz_cpu[]; /* Per cpu caches */
/* uz_domain follows here. */
};
/*
* These flags must not overlap with the UMA_ZONE flags specified in uma.h.
*/
#define UMA_ZFLAG_CACHE 0x04000000 /* uma_zcache_create()d it */
#define UMA_ZFLAG_DRAINING 0x08000000 /* Running zone_drain. */
#define UMA_ZFLAG_BUCKET 0x10000000 /* Bucket zone. */
#define UMA_ZFLAG_INTERNAL 0x20000000 /* No offpage no PCPU. */
#define UMA_ZFLAG_CACHEONLY 0x80000000 /* Don't ask VM for buckets. */
#define UMA_ZFLAG_INHERIT \
(UMA_ZFLAG_INTERNAL | UMA_ZFLAG_CACHEONLY | UMA_ZFLAG_BUCKET)
#undef UMA_ALIGN
#ifdef _KERNEL
/* Internal prototypes */
static __inline uma_slab_t hash_sfind(struct uma_hash *hash, uint8_t *data);
void *uma_large_malloc(vm_size_t size, int wait);
void *uma_large_malloc_domain(vm_size_t size, int domain, int wait);
void uma_large_free(uma_slab_t slab);
/* Lock Macros */
#define KEG_LOCK_INIT(k, lc) \
do { \
if ((lc)) \
mtx_init(&(k)->uk_lock, (k)->uk_name, \
(k)->uk_name, MTX_DEF | MTX_DUPOK); \
else \
mtx_init(&(k)->uk_lock, (k)->uk_name, \
"UMA zone", MTX_DEF | MTX_DUPOK); \
} while (0)
#define KEG_LOCK_FINI(k) mtx_destroy(&(k)->uk_lock)
#define KEG_LOCK(k) mtx_lock(&(k)->uk_lock)
#define KEG_UNLOCK(k) mtx_unlock(&(k)->uk_lock)
#define KEG_LOCK_ASSERT(k) mtx_assert(&(k)->uk_lock, MA_OWNED)
#define KEG_GET(zone, keg) do { \
(keg) = (zone)->uz_keg; \
KASSERT((void *)(keg) != (void *)&(zone)->uz_lock, \
("%s: Invalid zone %p type", __func__, (zone))); \
} while (0)
#define ZONE_LOCK_INIT(z, lc) \
do { \
if ((lc)) \
mtx_init(&(z)->uz_lock, (z)->uz_name, \
(z)->uz_name, MTX_DEF | MTX_DUPOK); \
else \
mtx_init(&(z)->uz_lock, (z)->uz_name, \
"UMA zone", MTX_DEF | MTX_DUPOK); \
} while (0)
#define ZONE_LOCK(z) mtx_lock((z)->uz_lockptr)
#define ZONE_TRYLOCK(z) mtx_trylock((z)->uz_lockptr)
#define ZONE_UNLOCK(z) mtx_unlock((z)->uz_lockptr)
#define ZONE_LOCK_FINI(z) mtx_destroy(&(z)->uz_lock)
#define ZONE_LOCK_ASSERT(z) mtx_assert((z)->uz_lockptr, MA_OWNED)
/*
* Find a slab within a hash table. This is used for OFFPAGE zones to lookup
* the slab structure.
*
* Arguments:
* hash The hash table to search.
* data The base page of the item.
*
* Returns:
* A pointer to a slab if successful, else NULL.
*/
static __inline uma_slab_t
hash_sfind(struct uma_hash *hash, uint8_t *data)
{
uma_slab_t slab;
int hval;
hval = UMA_HASH(hash, data);
SLIST_FOREACH(slab, &hash->uh_slab_hash[hval], us_hlink) {
if ((uint8_t *)slab->us_data == data)
return (slab);
}
return (NULL);
}
static __inline uma_slab_t
vtoslab(vm_offset_t va)
{
vm_page_t p;
p = PHYS_TO_VM_PAGE(pmap_kextract(va));
return ((uma_slab_t)p->plinks.s.pv);
}
static __inline void
vsetslab(vm_offset_t va, uma_slab_t slab)
{
vm_page_t p;
p = PHYS_TO_VM_PAGE(pmap_kextract(va));
p->plinks.s.pv = slab;
}
/*
* The following two functions may be defined by architecture specific code
* if they can provide more efficient allocation functions. This is useful
* for using direct mapped addresses.
*/
void *uma_small_alloc(uma_zone_t zone, vm_size_t bytes, int domain,
uint8_t *pflag, int wait);
void uma_small_free(void *mem, vm_size_t size, uint8_t flags);
/* Set a global soft limit on UMA managed memory. */
void uma_set_limit(unsigned long limit);
#endif /* _KERNEL */
#endif /* VM_UMA_INT_H */
Index: projects/clang800-import/sys/vm/vm_kern.c
===================================================================
--- projects/clang800-import/sys/vm/vm_kern.c (revision 343955)
+++ projects/clang800-import/sys/vm/vm_kern.c (revision 343956)
@@ -1,862 +1,864 @@
/*-
* SPDX-License-Identifier: (BSD-3-Clause AND MIT-CMU)
*
* Copyright (c) 1991, 1993
* The Regents of the University of California. All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* The Mach Operating System project at Carnegie-Mellon University.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: @(#)vm_kern.c 8.3 (Berkeley) 1/12/94
*
*
* Copyright (c) 1987, 1990 Carnegie-Mellon University.
* All rights reserved.
*
* Authors: Avadis Tevanian, Jr., Michael Wayne Young
*
* Permission to use, copy, modify and distribute this software and
* its documentation is hereby granted, provided that both the copyright
* notice and this permission notice appear in all copies of the
* software, derivative works or modified versions, and any portions
* thereof, and that both notices appear in supporting documentation.
*
* CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
* CONDITION. CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
* FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
*
* Carnegie Mellon requests users of this software to return to
*
* Software Distribution Coordinator or Software.Distribution@CS.CMU.EDU
* School of Computer Science
* Carnegie Mellon University
* Pittsburgh PA 15213-3890
*
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*/
/*
* Kernel memory management.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_vm.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h> /* for ticks and hz */
#include <sys/domainset.h>
#include <sys/eventhandler.h>
#include <sys/lock.h>
#include <sys/proc.h>
#include <sys/malloc.h>
#include <sys/rwlock.h>
#include <sys/sysctl.h>
#include <sys/vmem.h>
#include <sys/vmmeter.h>
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_domainset.h>
#include <vm/vm_kern.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/vm_pageout.h>
#include <vm/vm_phys.h>
#include <vm/vm_pagequeue.h>
#include <vm/vm_radix.h>
#include <vm/vm_extern.h>
#include <vm/uma.h>
vm_map_t kernel_map;
vm_map_t exec_map;
vm_map_t pipe_map;
const void *zero_region;
CTASSERT((ZERO_REGION_SIZE & PAGE_MASK) == 0);
/* NB: Used by kernel debuggers. */
const u_long vm_maxuser_address = VM_MAXUSER_ADDRESS;
u_int exec_map_entry_size;
u_int exec_map_entries;
SYSCTL_ULONG(_vm, OID_AUTO, min_kernel_address, CTLFLAG_RD,
SYSCTL_NULL_ULONG_PTR, VM_MIN_KERNEL_ADDRESS, "Min kernel address");
SYSCTL_ULONG(_vm, OID_AUTO, max_kernel_address, CTLFLAG_RD,
#if defined(__arm__) || defined(__sparc64__)
&vm_max_kernel_address, 0,
#else
SYSCTL_NULL_ULONG_PTR, VM_MAX_KERNEL_ADDRESS,
#endif
"Max kernel address");
#if VM_NRESERVLEVEL > 0
#define KVA_QUANTUM_SHIFT (VM_LEVEL_0_ORDER + PAGE_SHIFT)
#else
/* On non-superpage architectures we want large import sizes. */
#define KVA_QUANTUM_SHIFT (10 + PAGE_SHIFT)
#endif
#define KVA_QUANTUM (1 << KVA_QUANTUM_SHIFT)
/*
* kva_alloc:
*
* Allocate a virtual address range with no underlying object and
* no initial mapping to physical memory. Any mapping from this
* range to physical memory must be explicitly created prior to
* its use, typically with pmap_qenter(). Any attempt to create
* a mapping on demand through vm_fault() will result in a panic.
*/
vm_offset_t
kva_alloc(vm_size_t size)
{
vm_offset_t addr;
size = round_page(size);
if (vmem_alloc(kernel_arena, size, M_BESTFIT | M_NOWAIT, &addr))
return (0);
return (addr);
}
/*
* kva_free:
*
* Release a region of kernel virtual memory allocated
* with kva_alloc, and return the physical pages
* associated with that region.
*
* This routine may not block on kernel maps.
*/
void
kva_free(vm_offset_t addr, vm_size_t size)
{
size = round_page(size);
vmem_free(kernel_arena, addr, size);
}
/*
* Allocates a region from the kernel address map and physical pages
* within the specified address range to the kernel object. Creates a
* wired mapping from this region to these pages, and returns the
* region's starting virtual address. The allocated pages are not
* necessarily physically contiguous. If M_ZERO is specified through the
* given flags, then the pages are zeroed before they are mapped.
*/
static vm_offset_t
kmem_alloc_attr_domain(int domain, vm_size_t size, int flags, vm_paddr_t low,
vm_paddr_t high, vm_memattr_t memattr)
{
vmem_t *vmem;
vm_object_t object = kernel_object;
vm_offset_t addr, i, offset;
vm_page_t m;
int pflags, tries;
+ vm_prot_t prot;
size = round_page(size);
vmem = vm_dom[domain].vmd_kernel_arena;
if (vmem_alloc(vmem, size, M_BESTFIT | flags, &addr))
return (0);
offset = addr - VM_MIN_KERNEL_ADDRESS;
pflags = malloc2vm_flags(flags) | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED;
pflags &= ~(VM_ALLOC_NOWAIT | VM_ALLOC_WAITOK | VM_ALLOC_WAITFAIL);
pflags |= VM_ALLOC_NOWAIT;
+ prot = (flags & M_EXEC) != 0 ? VM_PROT_ALL : VM_PROT_RW;
VM_OBJECT_WLOCK(object);
for (i = 0; i < size; i += PAGE_SIZE) {
tries = 0;
retry:
m = vm_page_alloc_contig_domain(object, atop(offset + i),
domain, pflags, 1, low, high, PAGE_SIZE, 0, memattr);
if (m == NULL) {
VM_OBJECT_WUNLOCK(object);
if (tries < ((flags & M_NOWAIT) != 0 ? 1 : 3)) {
if (!vm_page_reclaim_contig_domain(domain,
pflags, 1, low, high, PAGE_SIZE, 0) &&
(flags & M_WAITOK) != 0)
vm_wait_domain(domain);
VM_OBJECT_WLOCK(object);
tries++;
goto retry;
}
kmem_unback(object, addr, i);
vmem_free(vmem, addr, size);
return (0);
}
KASSERT(vm_phys_domain(m) == domain,
("kmem_alloc_attr_domain: Domain mismatch %d != %d",
vm_phys_domain(m), domain));
if ((flags & M_ZERO) && (m->flags & PG_ZERO) == 0)
pmap_zero_page(m);
m->valid = VM_PAGE_BITS_ALL;
- pmap_enter(kernel_pmap, addr + i, m, VM_PROT_RW,
- VM_PROT_RW | PMAP_ENTER_WIRED, 0);
+ pmap_enter(kernel_pmap, addr + i, m, prot,
+ prot | PMAP_ENTER_WIRED, 0);
}
VM_OBJECT_WUNLOCK(object);
return (addr);
}
vm_offset_t
kmem_alloc_attr(vm_size_t size, int flags, vm_paddr_t low, vm_paddr_t high,
vm_memattr_t memattr)
{
return (kmem_alloc_attr_domainset(DOMAINSET_RR(), size, flags, low,
high, memattr));
}
vm_offset_t
kmem_alloc_attr_domainset(struct domainset *ds, vm_size_t size, int flags,
vm_paddr_t low, vm_paddr_t high, vm_memattr_t memattr)
{
struct vm_domainset_iter di;
vm_offset_t addr;
int domain;
vm_domainset_iter_policy_init(&di, ds, &domain, &flags);
do {
addr = kmem_alloc_attr_domain(domain, size, flags, low, high,
memattr);
if (addr != 0)
break;
} while (vm_domainset_iter_policy(&di, &domain) == 0);
return (addr);
}
/*
* Allocates a region from the kernel address map and physically
* contiguous pages within the specified address range to the kernel
* object. Creates a wired mapping from this region to these pages, and
* returns the region's starting virtual address. If M_ZERO is specified
* through the given flags, then the pages are zeroed before they are
* mapped.
*/
static vm_offset_t
kmem_alloc_contig_domain(int domain, vm_size_t size, int flags, vm_paddr_t low,
vm_paddr_t high, u_long alignment, vm_paddr_t boundary,
vm_memattr_t memattr)
{
vmem_t *vmem;
vm_object_t object = kernel_object;
vm_offset_t addr, offset, tmp;
vm_page_t end_m, m;
u_long npages;
int pflags, tries;
size = round_page(size);
vmem = vm_dom[domain].vmd_kernel_arena;
if (vmem_alloc(vmem, size, flags | M_BESTFIT, &addr))
return (0);
offset = addr - VM_MIN_KERNEL_ADDRESS;
pflags = malloc2vm_flags(flags) | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED;
pflags &= ~(VM_ALLOC_NOWAIT | VM_ALLOC_WAITOK | VM_ALLOC_WAITFAIL);
pflags |= VM_ALLOC_NOWAIT;
npages = atop(size);
VM_OBJECT_WLOCK(object);
tries = 0;
retry:
m = vm_page_alloc_contig_domain(object, atop(offset), domain, pflags,
npages, low, high, alignment, boundary, memattr);
if (m == NULL) {
VM_OBJECT_WUNLOCK(object);
if (tries < ((flags & M_NOWAIT) != 0 ? 1 : 3)) {
if (!vm_page_reclaim_contig_domain(domain, pflags,
npages, low, high, alignment, boundary) &&
(flags & M_WAITOK) != 0)
vm_wait_domain(domain);
VM_OBJECT_WLOCK(object);
tries++;
goto retry;
}
vmem_free(vmem, addr, size);
return (0);
}
KASSERT(vm_phys_domain(m) == domain,
("kmem_alloc_contig_domain: Domain mismatch %d != %d",
vm_phys_domain(m), domain));
end_m = m + npages;
tmp = addr;
for (; m < end_m; m++) {
if ((flags & M_ZERO) && (m->flags & PG_ZERO) == 0)
pmap_zero_page(m);
m->valid = VM_PAGE_BITS_ALL;
pmap_enter(kernel_pmap, tmp, m, VM_PROT_RW,
VM_PROT_RW | PMAP_ENTER_WIRED, 0);
tmp += PAGE_SIZE;
}
VM_OBJECT_WUNLOCK(object);
return (addr);
}
vm_offset_t
kmem_alloc_contig(vm_size_t size, int flags, vm_paddr_t low, vm_paddr_t high,
u_long alignment, vm_paddr_t boundary, vm_memattr_t memattr)
{
return (kmem_alloc_contig_domainset(DOMAINSET_RR(), size, flags, low,
high, alignment, boundary, memattr));
}
vm_offset_t
kmem_alloc_contig_domainset(struct domainset *ds, vm_size_t size, int flags,
vm_paddr_t low, vm_paddr_t high, u_long alignment, vm_paddr_t boundary,
vm_memattr_t memattr)
{
struct vm_domainset_iter di;
vm_offset_t addr;
int domain;
vm_domainset_iter_policy_init(&di, ds, &domain, &flags);
do {
addr = kmem_alloc_contig_domain(domain, size, flags, low, high,
alignment, boundary, memattr);
if (addr != 0)
break;
} while (vm_domainset_iter_policy(&di, &domain) == 0);
return (addr);
}
/*
* kmem_suballoc:
*
* Allocates a map to manage a subrange
* of the kernel virtual address space.
*
* Arguments are as follows:
*
* parent Map to take range from
* min, max Returned endpoints of map
* size Size of range to find
* superpage_align Request that min is superpage aligned
*/
vm_map_t
kmem_suballoc(vm_map_t parent, vm_offset_t *min, vm_offset_t *max,
vm_size_t size, boolean_t superpage_align)
{
int ret;
vm_map_t result;
size = round_page(size);
*min = vm_map_min(parent);
ret = vm_map_find(parent, NULL, 0, min, size, 0, superpage_align ?
VMFS_SUPER_SPACE : VMFS_ANY_SPACE, VM_PROT_ALL, VM_PROT_ALL,
MAP_ACC_NO_CHARGE);
if (ret != KERN_SUCCESS)
panic("kmem_suballoc: bad status return of %d", ret);
*max = *min + size;
result = vm_map_create(vm_map_pmap(parent), *min, *max);
if (result == NULL)
panic("kmem_suballoc: cannot create submap");
if (vm_map_submap(parent, *min, *max, result) != KERN_SUCCESS)
panic("kmem_suballoc: unable to change range to submap");
return (result);
}
/*
* kmem_malloc_domain:
*
* Allocate wired-down pages in the kernel's address space.
*/
static vm_offset_t
kmem_malloc_domain(int domain, vm_size_t size, int flags)
{
vmem_t *arena;
vm_offset_t addr;
int rv;
#if VM_NRESERVLEVEL > 0
if (__predict_true((flags & M_EXEC) == 0))
arena = vm_dom[domain].vmd_kernel_arena;
else
arena = vm_dom[domain].vmd_kernel_rwx_arena;
#else
arena = vm_dom[domain].vmd_kernel_arena;
#endif
size = round_page(size);
if (vmem_alloc(arena, size, flags | M_BESTFIT, &addr))
return (0);
rv = kmem_back_domain(domain, kernel_object, addr, size, flags);
if (rv != KERN_SUCCESS) {
vmem_free(arena, addr, size);
return (0);
}
return (addr);
}
vm_offset_t
kmem_malloc(vm_size_t size, int flags)
{
return (kmem_malloc_domainset(DOMAINSET_RR(), size, flags));
}
vm_offset_t
kmem_malloc_domainset(struct domainset *ds, vm_size_t size, int flags)
{
struct vm_domainset_iter di;
vm_offset_t addr;
int domain;
vm_domainset_iter_policy_init(&di, ds, &domain, &flags);
do {
addr = kmem_malloc_domain(domain, size, flags);
if (addr != 0)
break;
} while (vm_domainset_iter_policy(&di, &domain) == 0);
return (addr);
}
/*
* kmem_back_domain:
*
* Allocate physical pages from the specified domain for the specified
* virtual address range.
*/
int
kmem_back_domain(int domain, vm_object_t object, vm_offset_t addr,
vm_size_t size, int flags)
{
vm_offset_t offset, i;
vm_page_t m, mpred;
vm_prot_t prot;
int pflags;
KASSERT(object == kernel_object,
("kmem_back_domain: only supports kernel object."));
offset = addr - VM_MIN_KERNEL_ADDRESS;
pflags = malloc2vm_flags(flags) | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED;
pflags &= ~(VM_ALLOC_NOWAIT | VM_ALLOC_WAITOK | VM_ALLOC_WAITFAIL);
if (flags & M_WAITOK)
pflags |= VM_ALLOC_WAITFAIL;
prot = (flags & M_EXEC) != 0 ? VM_PROT_ALL : VM_PROT_RW;
i = 0;
VM_OBJECT_WLOCK(object);
retry:
mpred = vm_radix_lookup_le(&object->rtree, atop(offset + i));
for (; i < size; i += PAGE_SIZE, mpred = m) {
m = vm_page_alloc_domain_after(object, atop(offset + i),
domain, pflags, mpred);
/*
* Ran out of space, free everything up and return. Don't need
* to lock page queues here as we know that the pages we got
* aren't on any queues.
*/
if (m == NULL) {
if ((flags & M_NOWAIT) == 0)
goto retry;
VM_OBJECT_WUNLOCK(object);
kmem_unback(object, addr, i);
return (KERN_NO_SPACE);
}
KASSERT(vm_phys_domain(m) == domain,
("kmem_back_domain: Domain mismatch %d != %d",
vm_phys_domain(m), domain));
if (flags & M_ZERO && (m->flags & PG_ZERO) == 0)
pmap_zero_page(m);
KASSERT((m->oflags & VPO_UNMANAGED) != 0,
("kmem_malloc: page %p is managed", m));
m->valid = VM_PAGE_BITS_ALL;
pmap_enter(kernel_pmap, addr + i, m, prot,
prot | PMAP_ENTER_WIRED, 0);
#if VM_NRESERVLEVEL > 0
if (__predict_false((prot & VM_PROT_EXECUTE) != 0))
m->oflags |= VPO_KMEM_EXEC;
#endif
}
VM_OBJECT_WUNLOCK(object);
return (KERN_SUCCESS);
}
/*
* kmem_back:
*
* Allocate physical pages for the specified virtual address range.
*/
int
kmem_back(vm_object_t object, vm_offset_t addr, vm_size_t size, int flags)
{
vm_offset_t end, next, start;
int domain, rv;
KASSERT(object == kernel_object,
("kmem_back: only supports kernel object."));
for (start = addr, end = addr + size; addr < end; addr = next) {
/*
* We must ensure that pages backing a given large virtual page
* all come from the same physical domain.
*/
if (vm_ndomains > 1) {
domain = (addr >> KVA_QUANTUM_SHIFT) % vm_ndomains;
while (VM_DOMAIN_EMPTY(domain))
domain++;
next = roundup2(addr + 1, KVA_QUANTUM);
if (next > end || next < start)
next = end;
} else {
domain = 0;
next = end;
}
rv = kmem_back_domain(domain, object, addr, next - addr, flags);
if (rv != KERN_SUCCESS) {
kmem_unback(object, start, addr - start);
break;
}
}
return (rv);
}
/*
* kmem_unback:
*
* Unmap and free the physical pages underlying the specified virtual
* address range.
*
* A physical page must exist within the specified object at each index
* that is being unmapped.
*/
static struct vmem *
_kmem_unback(vm_object_t object, vm_offset_t addr, vm_size_t size)
{
struct vmem *arena;
vm_page_t m, next;
vm_offset_t end, offset;
int domain;
KASSERT(object == kernel_object,
("kmem_unback: only supports kernel object."));
if (size == 0)
return (NULL);
pmap_remove(kernel_pmap, addr, addr + size);
offset = addr - VM_MIN_KERNEL_ADDRESS;
end = offset + size;
VM_OBJECT_WLOCK(object);
m = vm_page_lookup(object, atop(offset));
domain = vm_phys_domain(m);
#if VM_NRESERVLEVEL > 0
if (__predict_true((m->oflags & VPO_KMEM_EXEC) == 0))
arena = vm_dom[domain].vmd_kernel_arena;
else
arena = vm_dom[domain].vmd_kernel_rwx_arena;
#else
arena = vm_dom[domain].vmd_kernel_arena;
#endif
for (; offset < end; offset += PAGE_SIZE, m = next) {
next = vm_page_next(m);
vm_page_unwire(m, PQ_NONE);
vm_page_free(m);
}
VM_OBJECT_WUNLOCK(object);
return (arena);
}
void
kmem_unback(vm_object_t object, vm_offset_t addr, vm_size_t size)
{
(void)_kmem_unback(object, addr, size);
}
/*
* kmem_free:
*
* Free memory allocated with kmem_malloc. The size must match the
* original allocation.
*/
void
kmem_free(vm_offset_t addr, vm_size_t size)
{
struct vmem *arena;
size = round_page(size);
arena = _kmem_unback(kernel_object, addr, size);
if (arena != NULL)
vmem_free(arena, addr, size);
}
/*
* kmap_alloc_wait:
*
* Allocates pageable memory from a sub-map of the kernel. If the submap
* has no room, the caller sleeps waiting for more memory in the submap.
*
* This routine may block.
*/
vm_offset_t
kmap_alloc_wait(vm_map_t map, vm_size_t size)
{
vm_offset_t addr;
size = round_page(size);
if (!swap_reserve(size))
return (0);
for (;;) {
/*
* To make this work for more than one map, use the map's lock
* to lock out sleepers/wakers.
*/
vm_map_lock(map);
if (vm_map_findspace(map, vm_map_min(map), size, &addr) == 0)
break;
/* no space now; see if we can ever get space */
if (vm_map_max(map) - vm_map_min(map) < size) {
vm_map_unlock(map);
swap_release(size);
return (0);
}
map->needs_wakeup = TRUE;
vm_map_unlock_and_wait(map, 0);
}
vm_map_insert(map, NULL, 0, addr, addr + size, VM_PROT_RW, VM_PROT_RW,
MAP_ACC_CHARGED);
vm_map_unlock(map);
return (addr);
}
/*
* kmap_free_wakeup:
*
* Returns memory to a submap of the kernel, and wakes up any processes
* waiting for memory in that map.
*/
void
kmap_free_wakeup(vm_map_t map, vm_offset_t addr, vm_size_t size)
{
vm_map_lock(map);
(void) vm_map_delete(map, trunc_page(addr), round_page(addr + size));
if (map->needs_wakeup) {
map->needs_wakeup = FALSE;
vm_map_wakeup(map);
}
vm_map_unlock(map);
}
void
kmem_init_zero_region(void)
{
vm_offset_t addr, i;
vm_page_t m;
/*
* Map a single physical page of zeros to a larger virtual range.
* This requires less looping in places that want large amounts of
* zeros, while using few additional physical resources.
*/
addr = kva_alloc(ZERO_REGION_SIZE);
m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO);
if ((m->flags & PG_ZERO) == 0)
pmap_zero_page(m);
for (i = 0; i < ZERO_REGION_SIZE; i += PAGE_SIZE)
pmap_qenter(addr + i, &m, 1);
pmap_protect(kernel_pmap, addr, addr + ZERO_REGION_SIZE, VM_PROT_READ);
zero_region = (const void *)addr;
}
/*
* Import KVA from the kernel map into the kernel arena.
*/
static int
kva_import(void *unused, vmem_size_t size, int flags, vmem_addr_t *addrp)
{
vm_offset_t addr;
int result;
KASSERT((size % KVA_QUANTUM) == 0,
("kva_import: Size %jd is not a multiple of %d",
(intmax_t)size, (int)KVA_QUANTUM));
addr = vm_map_min(kernel_map);
result = vm_map_find(kernel_map, NULL, 0, &addr, size, 0,
VMFS_SUPER_SPACE, VM_PROT_ALL, VM_PROT_ALL, MAP_NOFAULT);
if (result != KERN_SUCCESS)
return (ENOMEM);
*addrp = addr;
return (0);
}
/*
* Import KVA from a parent arena into a per-domain arena. Imports must be
* KVA_QUANTUM-aligned and a multiple of KVA_QUANTUM in size.
*/
static int
kva_import_domain(void *arena, vmem_size_t size, int flags, vmem_addr_t *addrp)
{
KASSERT((size % KVA_QUANTUM) == 0,
("kva_import_domain: Size %jd is not a multiple of %d",
(intmax_t)size, (int)KVA_QUANTUM));
return (vmem_xalloc(arena, size, KVA_QUANTUM, 0, 0, VMEM_ADDR_MIN,
VMEM_ADDR_MAX, flags, addrp));
}
/*
* kmem_init:
*
* Create the kernel map; insert a mapping covering kernel text,
* data, bss, and all space allocated thus far (`bootstrap' data). The
* new map will thus map the range between VM_MIN_KERNEL_ADDRESS and
* `start' as allocated, and the range between `start' and `end' as free.
* Create the kernel vmem arena and its per-domain children.
*/
void
kmem_init(vm_offset_t start, vm_offset_t end)
{
vm_map_t m;
int domain;
m = vm_map_create(kernel_pmap, VM_MIN_KERNEL_ADDRESS, end);
m->system_map = 1;
vm_map_lock(m);
/* N.B.: cannot use kgdb to debug, starting with this assignment ... */
kernel_map = m;
(void) vm_map_insert(m, NULL, (vm_ooffset_t) 0,
#ifdef __amd64__
KERNBASE,
#else
VM_MIN_KERNEL_ADDRESS,
#endif
start, VM_PROT_ALL, VM_PROT_ALL, MAP_NOFAULT);
/* ... and ending with the completion of the above `insert' */
vm_map_unlock(m);
/*
* Initialize the kernel_arena. This can grow on demand.
*/
vmem_init(kernel_arena, "kernel arena", 0, 0, PAGE_SIZE, 0, 0);
vmem_set_import(kernel_arena, kva_import, NULL, NULL, KVA_QUANTUM);
for (domain = 0; domain < vm_ndomains; domain++) {
/*
* Initialize the per-domain arenas. These are used to color
* the KVA space in a way that ensures that virtual large pages
* are backed by memory from the same physical domain,
* maximizing the potential for superpage promotion.
*/
vm_dom[domain].vmd_kernel_arena = vmem_create(
"kernel arena domain", 0, 0, PAGE_SIZE, 0, M_WAITOK);
vmem_set_import(vm_dom[domain].vmd_kernel_arena,
kva_import_domain, NULL, kernel_arena, KVA_QUANTUM);
/*
* In architectures with superpages, maintain separate arenas
* for allocations with permissions that differ from the
* "standard" read/write permissions used for kernel memory,
* so as not to inhibit superpage promotion.
*/
#if VM_NRESERVLEVEL > 0
vm_dom[domain].vmd_kernel_rwx_arena = vmem_create(
"kernel rwx arena domain", 0, 0, PAGE_SIZE, 0, M_WAITOK);
vmem_set_import(vm_dom[domain].vmd_kernel_rwx_arena,
kva_import_domain, (vmem_release_t *)vmem_xfree,
kernel_arena, KVA_QUANTUM);
#endif
}
}
/*
* kmem_bootstrap_free:
*
* Free pages backing preloaded data (e.g., kernel modules) to the
* system. Currently only supported on platforms that create a
* vm_phys segment for preloaded data.
*/
void
kmem_bootstrap_free(vm_offset_t start, vm_size_t size)
{
#if defined(__i386__) || defined(__amd64__)
struct vm_domain *vmd;
vm_offset_t end, va;
vm_paddr_t pa;
vm_page_t m;
end = trunc_page(start + size);
start = round_page(start);
for (va = start; va < end; va += PAGE_SIZE) {
pa = pmap_kextract(va);
m = PHYS_TO_VM_PAGE(pa);
vmd = vm_pagequeue_domain(m);
vm_domain_free_lock(vmd);
vm_phys_free_pages(m, 0);
vm_domain_free_unlock(vmd);
vm_domain_freecnt_inc(vmd, 1);
vm_cnt.v_page_count++;
}
pmap_remove(kernel_pmap, start, end);
(void)vmem_add(kernel_arena, start, end - start, M_WAITOK);
#endif
}
#ifdef DIAGNOSTIC
/*
* Allow userspace to directly trigger the VM drain routine for testing
* purposes.
*/
static int
debug_vm_lowmem(SYSCTL_HANDLER_ARGS)
{
int error, i;
i = 0;
error = sysctl_handle_int(oidp, &i, 0, req);
if (error)
return (error);
if ((i & ~(VM_LOW_KMEM | VM_LOW_PAGES)) != 0)
return (EINVAL);
if (i != 0)
EVENTHANDLER_INVOKE(vm_lowmem, i);
return (0);
}
SYSCTL_PROC(_debug, OID_AUTO, vm_lowmem, CTLTYPE_INT | CTLFLAG_RW, 0, 0,
debug_vm_lowmem, "I", "set to trigger vm_lowmem event with given flags");
#endif
Index: projects/clang800-import/sys/vm/vm_mmap.c
===================================================================
--- projects/clang800-import/sys/vm/vm_mmap.c (revision 343955)
+++ projects/clang800-import/sys/vm/vm_mmap.c (revision 343956)
@@ -1,1589 +1,1590 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1988 University of Utah.
* Copyright (c) 1991, 1993
* The Regents of the University of California. All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* the Systems Programming Group of the University of Utah Computer
* Science Department.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: Utah $Hdr: vm_mmap.c 1.6 91/10/21$
*
* @(#)vm_mmap.c 8.4 (Berkeley) 1/12/94
*/
/*
* Mapped file (mmap) interface to VM
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_hwpmc_hooks.h"
#include "opt_vm.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/capsicum.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sysproto.h>
#include <sys/filedesc.h>
#include <sys/priv.h>
#include <sys/proc.h>
#include <sys/procctl.h>
#include <sys/racct.h>
#include <sys/resource.h>
#include <sys/resourcevar.h>
#include <sys/rwlock.h>
#include <sys/sysctl.h>
#include <sys/vnode.h>
#include <sys/fcntl.h>
#include <sys/file.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/conf.h>
#include <sys/stat.h>
#include <sys/syscallsubr.h>
#include <sys/sysent.h>
#include <sys/vmmeter.h>
+#if defined(__amd64__) || defined(__i386__) /* for i386_read_exec */
+#include <machine/md_var.h>
+#endif
#include <security/audit/audit.h>
#include <security/mac/mac_framework.h>
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/vm_pager.h>
#include <vm/vm_pageout.h>
#include <vm/vm_extern.h>
#include <vm/vm_page.h>
#include <vm/vnode_pager.h>
#ifdef HWPMC_HOOKS
#include <sys/pmckern.h>
#endif
int old_mlock = 0;
SYSCTL_INT(_vm, OID_AUTO, old_mlock, CTLFLAG_RWTUN, &old_mlock, 0,
"Do not apply RLIMIT_MEMLOCK on mlockall");
static int mincore_mapped = 1;
SYSCTL_INT(_vm, OID_AUTO, mincore_mapped, CTLFLAG_RWTUN, &mincore_mapped, 0,
"mincore reports mappings, not residency");
#ifdef MAP_32BIT
#define MAP_32BIT_MAX_ADDR ((vm_offset_t)1 << 31)
#endif
#ifndef _SYS_SYSPROTO_H_
struct sbrk_args {
int incr;
};
#endif
int
sys_sbrk(struct thread *td, struct sbrk_args *uap)
{
/* Not yet implemented */
return (EOPNOTSUPP);
}
#ifndef _SYS_SYSPROTO_H_
struct sstk_args {
int incr;
};
#endif
int
sys_sstk(struct thread *td, struct sstk_args *uap)
{
/* Not yet implemented */
return (EOPNOTSUPP);
}
#if defined(COMPAT_43)
int
ogetpagesize(struct thread *td, struct ogetpagesize_args *uap)
{
td->td_retval[0] = PAGE_SIZE;
return (0);
}
#endif /* COMPAT_43 */
/*
* Memory Map (mmap) system call. Note that the file offset
* and address are allowed to be NOT page aligned, though if
* the MAP_FIXED flag is set, both must have the same remainder
* modulo the PAGE_SIZE (POSIX 1003.1b). If the address is not
* page-aligned, the actual mapping starts at trunc_page(addr)
* and the return value is adjusted up by the page offset.
*
* Generally speaking, only character devices which are themselves
* memory-based, such as a video framebuffer, can be mmap'd. Otherwise
* there would be no cache coherency between a descriptor and a VM mapping
* both to the same character device.
*/
#ifndef _SYS_SYSPROTO_H_
struct mmap_args {
void *addr;
size_t len;
int prot;
int flags;
int fd;
long pad;
off_t pos;
};
#endif
int
sys_mmap(struct thread *td, struct mmap_args *uap)
{
return (kern_mmap(td, (uintptr_t)uap->addr, uap->len, uap->prot,
uap->flags, uap->fd, uap->pos));
}
int
kern_mmap(struct thread *td, uintptr_t addr0, size_t size, int prot, int flags,
int fd, off_t pos)
{
struct vmspace *vms;
struct file *fp;
vm_offset_t addr;
vm_size_t pageoff;
vm_prot_t cap_maxprot;
int align, error;
cap_rights_t rights;
vms = td->td_proc->p_vmspace;
fp = NULL;
AUDIT_ARG_FD(fd);
addr = addr0;
/*
* Ignore old flags that used to be defined but did not do anything.
*/
flags &= ~(MAP_RESERVED0020 | MAP_RESERVED0040);
/*
* Enforce the constraints.
* Mapping of length 0 is only allowed for old binaries.
* Anonymous mapping shall specify -1 as file descriptor and
* zero position for new code. Be nice to ancient a.out
* binaries and correct pos for anonymous mapping, since old
* ld.so sometimes issues anonymous map requests with non-zero
* pos.
*/
if (!SV_CURPROC_FLAG(SV_AOUT)) {
if ((size == 0 && curproc->p_osrel >= P_OSREL_MAP_ANON) ||
((flags & MAP_ANON) != 0 && (fd != -1 || pos != 0)))
return (EINVAL);
} else {
if ((flags & MAP_ANON) != 0)
pos = 0;
}
if (flags & MAP_STACK) {
if ((fd != -1) ||
((prot & (PROT_READ | PROT_WRITE)) != (PROT_READ | PROT_WRITE)))
return (EINVAL);
flags |= MAP_ANON;
pos = 0;
}
if ((flags & ~(MAP_SHARED | MAP_PRIVATE | MAP_FIXED | MAP_HASSEMAPHORE |
MAP_STACK | MAP_NOSYNC | MAP_ANON | MAP_EXCL | MAP_NOCORE |
MAP_PREFAULT_READ | MAP_GUARD |
#ifdef MAP_32BIT
MAP_32BIT |
#endif
MAP_ALIGNMENT_MASK)) != 0)
return (EINVAL);
if ((flags & (MAP_EXCL | MAP_FIXED)) == MAP_EXCL)
return (EINVAL);
if ((flags & (MAP_SHARED | MAP_PRIVATE)) == (MAP_SHARED | MAP_PRIVATE))
return (EINVAL);
if (prot != PROT_NONE &&
(prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC)) != 0)
return (EINVAL);
if ((flags & MAP_GUARD) != 0 && (prot != PROT_NONE || fd != -1 ||
pos != 0 || (flags & ~(MAP_FIXED | MAP_GUARD | MAP_EXCL |
#ifdef MAP_32BIT
MAP_32BIT |
#endif
MAP_ALIGNMENT_MASK)) != 0))
return (EINVAL);
/*
* Align the file position to a page boundary,
* and save its page offset component.
*/
pageoff = (pos & PAGE_MASK);
pos -= pageoff;
/* Adjust size for rounding (on both ends). */
size += pageoff; /* low end... */
size = (vm_size_t) round_page(size); /* hi end */
/* Ensure alignment is at least a page and fits in a pointer. */
align = flags & MAP_ALIGNMENT_MASK;
if (align != 0 && align != MAP_ALIGNED_SUPER &&
(align >> MAP_ALIGNMENT_SHIFT >= sizeof(void *) * NBBY ||
align >> MAP_ALIGNMENT_SHIFT < PAGE_SHIFT))
return (EINVAL);
/*
* Check for illegal addresses. Watch out for address wrap... Note
* that VM_*_ADDRESS are not constants due to casts (argh).
*/
if (flags & MAP_FIXED) {
/*
* The specified address must have the same remainder
* as the file offset taken modulo PAGE_SIZE, so it
* should be aligned after adjustment by pageoff.
*/
addr -= pageoff;
if (addr & PAGE_MASK)
return (EINVAL);
/* Address range must be all in user VM space. */
if (addr < vm_map_min(&vms->vm_map) ||
addr + size > vm_map_max(&vms->vm_map))
return (EINVAL);
if (addr + size < addr)
return (EINVAL);
#ifdef MAP_32BIT
if (flags & MAP_32BIT && addr + size > MAP_32BIT_MAX_ADDR)
return (EINVAL);
} else if (flags & MAP_32BIT) {
/*
* For MAP_32BIT, override the hint if it is too high and
* do not bother moving the mapping past the heap (since
* the heap is usually above 2GB).
*/
if (addr + size > MAP_32BIT_MAX_ADDR)
addr = 0;
#endif
} else {
/*
* XXX for non-fixed mappings where no hint is provided or
* the hint would fall in the potential heap space,
* place it after the end of the largest possible heap.
*
* There should really be a pmap call to determine a reasonable
* location.
*/
if (addr == 0 ||
(addr >= round_page((vm_offset_t)vms->vm_taddr) &&
addr < round_page((vm_offset_t)vms->vm_daddr +
lim_max(td, RLIMIT_DATA))))
addr = round_page((vm_offset_t)vms->vm_daddr +
lim_max(td, RLIMIT_DATA));
}
if (size == 0) {
/*
* Return success without mapping anything for old
* binaries that request a page-aligned mapping of
* length 0. For modern binaries, this function
* returns an error earlier.
*/
error = 0;
} else if ((flags & MAP_GUARD) != 0) {
error = vm_mmap_object(&vms->vm_map, &addr, size, VM_PROT_NONE,
VM_PROT_NONE, flags, NULL, pos, FALSE, td);
} else if ((flags & MAP_ANON) != 0) {
/*
* Mapping blank space is trivial.
*
* This relies on VM_PROT_* matching PROT_*.
*/
error = vm_mmap_object(&vms->vm_map, &addr, size, prot,
VM_PROT_ALL, flags, NULL, pos, FALSE, td);
} else {
/*
* Mapping file, get fp for validation and don't let the
* descriptor disappear on us if we block. Check capability
* rights, but also return the maximum rights to be combined
* with maxprot later.
*/
cap_rights_init(&rights, CAP_MMAP);
if (prot & PROT_READ)
cap_rights_set(&rights, CAP_MMAP_R);
if ((flags & MAP_SHARED) != 0) {
if (prot & PROT_WRITE)
cap_rights_set(&rights, CAP_MMAP_W);
}
if (prot & PROT_EXEC)
cap_rights_set(&rights, CAP_MMAP_X);
error = fget_mmap(td, fd, &rights, &cap_maxprot, &fp);
if (error != 0)
goto done;
if ((flags & (MAP_SHARED | MAP_PRIVATE)) == 0 &&
td->td_proc->p_osrel >= P_OSREL_MAP_FSTRICT) {
error = EINVAL;
goto done;
}
/* This relies on VM_PROT_* matching PROT_*. */
error = fo_mmap(fp, &vms->vm_map, &addr, size, prot,
cap_maxprot, flags, pos, td);
}
if (error == 0)
td->td_retval[0] = (register_t) (addr + pageoff);
done:
if (fp)
fdrop(fp, td);
return (error);
}
#if defined(COMPAT_FREEBSD6)
int
freebsd6_mmap(struct thread *td, struct freebsd6_mmap_args *uap)
{
return (kern_mmap(td, (uintptr_t)uap->addr, uap->len, uap->prot,
uap->flags, uap->fd, uap->pos));
}
#endif
#ifdef COMPAT_43
#ifndef _SYS_SYSPROTO_H_
struct ommap_args {
caddr_t addr;
int len;
int prot;
int flags;
int fd;
long pos;
};
#endif
int
ommap(struct thread *td, struct ommap_args *uap)
{
static const char cvtbsdprot[8] = {
0,
PROT_EXEC,
PROT_WRITE,
PROT_EXEC | PROT_WRITE,
PROT_READ,
PROT_EXEC | PROT_READ,
PROT_WRITE | PROT_READ,
PROT_EXEC | PROT_WRITE | PROT_READ,
};
int flags, prot;
#define OMAP_ANON 0x0002
#define OMAP_COPY 0x0020
#define OMAP_SHARED 0x0010
#define OMAP_FIXED 0x0100
prot = cvtbsdprot[uap->prot & 0x7];
-#ifdef COMPAT_FREEBSD32
-#if defined(__amd64__)
+#if (defined(COMPAT_FREEBSD32) && defined(__amd64__)) || defined(__i386__)
if (i386_read_exec && SV_PROC_FLAG(td->td_proc, SV_ILP32) &&
prot != 0)
prot |= PROT_EXEC;
-#endif
#endif
flags = 0;
if (uap->flags & OMAP_ANON)
flags |= MAP_ANON;
if (uap->flags & OMAP_COPY)
flags |= MAP_COPY;
if (uap->flags & OMAP_SHARED)
flags |= MAP_SHARED;
else
flags |= MAP_PRIVATE;
if (uap->flags & OMAP_FIXED)
flags |= MAP_FIXED;
return (kern_mmap(td, (uintptr_t)uap->addr, uap->len, prot, flags,
uap->fd, uap->pos));
}
#endif /* COMPAT_43 */
#ifndef _SYS_SYSPROTO_H_
struct msync_args {
void *addr;
size_t len;
int flags;
};
#endif
int
sys_msync(struct thread *td, struct msync_args *uap)
{
return (kern_msync(td, (uintptr_t)uap->addr, uap->len, uap->flags));
}
int
kern_msync(struct thread *td, uintptr_t addr0, size_t size, int flags)
{
vm_offset_t addr;
vm_size_t pageoff;
vm_map_t map;
int rv;
addr = addr0;
pageoff = (addr & PAGE_MASK);
addr -= pageoff;
size += pageoff;
size = (vm_size_t) round_page(size);
if (addr + size < addr)
return (EINVAL);
if ((flags & (MS_ASYNC|MS_INVALIDATE)) == (MS_ASYNC|MS_INVALIDATE))
return (EINVAL);
map = &td->td_proc->p_vmspace->vm_map;
/*
* Clean the pages and interpret the return value.
*/
rv = vm_map_sync(map, addr, addr + size, (flags & MS_ASYNC) == 0,
(flags & MS_INVALIDATE) != 0);
switch (rv) {
case KERN_SUCCESS:
return (0);
case KERN_INVALID_ADDRESS:
return (ENOMEM);
case KERN_INVALID_ARGUMENT:
return (EBUSY);
case KERN_FAILURE:
return (EIO);
default:
return (EINVAL);
}
}
#ifndef _SYS_SYSPROTO_H_
struct munmap_args {
void *addr;
size_t len;
};
#endif
int
sys_munmap(struct thread *td, struct munmap_args *uap)
{
return (kern_munmap(td, (uintptr_t)uap->addr, uap->len));
}
int
kern_munmap(struct thread *td, uintptr_t addr0, size_t size)
{
#ifdef HWPMC_HOOKS
struct pmckern_map_out pkm;
vm_map_entry_t entry;
bool pmc_handled;
#endif
vm_offset_t addr;
vm_size_t pageoff;
vm_map_t map;
if (size == 0)
return (EINVAL);
addr = addr0;
pageoff = (addr & PAGE_MASK);
addr -= pageoff;
size += pageoff;
size = (vm_size_t) round_page(size);
if (addr + size < addr)
return (EINVAL);
/*
* Check for illegal addresses. Watch out for address wrap...
*/
map = &td->td_proc->p_vmspace->vm_map;
if (addr < vm_map_min(map) || addr + size > vm_map_max(map))
return (EINVAL);
vm_map_lock(map);
#ifdef HWPMC_HOOKS
pmc_handled = false;
if (PMC_HOOK_INSTALLED(PMC_FN_MUNMAP)) {
pmc_handled = true;
/*
* Inform hwpmc if the address range being unmapped contains
* an executable region.
*/
pkm.pm_address = (uintptr_t) NULL;
if (vm_map_lookup_entry(map, addr, &entry)) {
for (; entry->start < addr + size;
entry = entry->next) {
if (vm_map_check_protection(map, entry->start,
entry->end, VM_PROT_EXECUTE) == TRUE) {
pkm.pm_address = (uintptr_t) addr;
pkm.pm_size = (size_t) size;
break;
}
}
}
}
#endif
vm_map_delete(map, addr, addr + size);
#ifdef HWPMC_HOOKS
if (__predict_false(pmc_handled)) {
/* downgrade the lock to prevent a LOR with the pmc-sx lock */
vm_map_lock_downgrade(map);
if (pkm.pm_address != (uintptr_t) NULL)
PMC_CALL_HOOK(td, PMC_FN_MUNMAP, (void *) &pkm);
vm_map_unlock_read(map);
} else
#endif
vm_map_unlock(map);
/* vm_map_delete returns nothing but KERN_SUCCESS anyway */
return (0);
}
#ifndef _SYS_SYSPROTO_H_
struct mprotect_args {
const void *addr;
size_t len;
int prot;
};
#endif
int
sys_mprotect(struct thread *td, struct mprotect_args *uap)
{
return (kern_mprotect(td, (uintptr_t)uap->addr, uap->len, uap->prot));
}
int
kern_mprotect(struct thread *td, uintptr_t addr0, size_t size, int prot)
{
vm_offset_t addr;
vm_size_t pageoff;
addr = addr0;
prot = (prot & VM_PROT_ALL);
pageoff = (addr & PAGE_MASK);
addr -= pageoff;
size += pageoff;
size = (vm_size_t) round_page(size);
#ifdef COMPAT_FREEBSD32
if (SV_PROC_FLAG(td->td_proc, SV_ILP32)) {
if (((addr + size) & 0xffffffff) < addr)
return (EINVAL);
} else
#endif
if (addr + size < addr)
return (EINVAL);
switch (vm_map_protect(&td->td_proc->p_vmspace->vm_map, addr,
addr + size, prot, FALSE)) {
case KERN_SUCCESS:
return (0);
case KERN_PROTECTION_FAILURE:
return (EACCES);
case KERN_RESOURCE_SHORTAGE:
return (ENOMEM);
}
return (EINVAL);
}
#ifndef _SYS_SYSPROTO_H_
struct minherit_args {
void *addr;
size_t len;
int inherit;
};
#endif
int
sys_minherit(struct thread *td, struct minherit_args *uap)
{
vm_offset_t addr;
vm_size_t size, pageoff;
vm_inherit_t inherit;
addr = (vm_offset_t)uap->addr;
size = uap->len;
inherit = uap->inherit;
pageoff = (addr & PAGE_MASK);
addr -= pageoff;
size += pageoff;
size = (vm_size_t) round_page(size);
if (addr + size < addr)
return (EINVAL);
switch (vm_map_inherit(&td->td_proc->p_vmspace->vm_map, addr,
addr + size, inherit)) {
case KERN_SUCCESS:
return (0);
case KERN_PROTECTION_FAILURE:
return (EACCES);
}
return (EINVAL);
}
#ifndef _SYS_SYSPROTO_H_
struct madvise_args {
void *addr;
size_t len;
int behav;
};
#endif
int
sys_madvise(struct thread *td, struct madvise_args *uap)
{
return (kern_madvise(td, (uintptr_t)uap->addr, uap->len, uap->behav));
}
int
kern_madvise(struct thread *td, uintptr_t addr0, size_t len, int behav)
{
vm_map_t map;
vm_offset_t addr, end, start;
int flags;
/*
* Check for our special case, advising the swap pager we are
* "immortal."
*/
if (behav == MADV_PROTECT) {
flags = PPROT_SET;
return (kern_procctl(td, P_PID, td->td_proc->p_pid,
PROC_SPROTECT, &flags));
}
/*
* Check for illegal addresses. Watch out for address wrap... Note
* that VM_*_ADDRESS are not constants due to casts (argh).
*/
map = &td->td_proc->p_vmspace->vm_map;
addr = addr0;
if (addr < vm_map_min(map) || addr + len > vm_map_max(map))
return (EINVAL);
if ((addr + len) < addr)
return (EINVAL);
/*
* Since this routine is only advisory, we default to conservative
* behavior.
*/
start = trunc_page(addr);
end = round_page(addr + len);
/*
* vm_map_madvise() checks for illegal values of behav.
*/
return (vm_map_madvise(map, start, end, behav));
}
#ifndef _SYS_SYSPROTO_H_
struct mincore_args {
const void *addr;
size_t len;
char *vec;
};
#endif
int
sys_mincore(struct thread *td, struct mincore_args *uap)
{
return (kern_mincore(td, (uintptr_t)uap->addr, uap->len, uap->vec));
}
int
kern_mincore(struct thread *td, uintptr_t addr0, size_t len, char *vec)
{
vm_offset_t addr, first_addr;
vm_offset_t end, cend;
pmap_t pmap;
vm_map_t map;
int error = 0;
int vecindex, lastvecindex;
vm_map_entry_t current;
vm_map_entry_t entry;
vm_object_t object;
vm_paddr_t locked_pa;
vm_page_t m;
vm_pindex_t pindex;
int mincoreinfo;
unsigned int timestamp;
boolean_t locked;
/*
* Make sure that the addresses presented are valid for user
* mode.
*/
first_addr = addr = trunc_page(addr0);
end = addr + (vm_size_t)round_page(len);
map = &td->td_proc->p_vmspace->vm_map;
if (end > vm_map_max(map) || end < addr)
return (ENOMEM);
pmap = vmspace_pmap(td->td_proc->p_vmspace);
vm_map_lock_read(map);
RestartScan:
timestamp = map->timestamp;
if (!vm_map_lookup_entry(map, addr, &entry)) {
vm_map_unlock_read(map);
return (ENOMEM);
}
/*
* Do this on a map entry basis so that if the pages are not
* in the current process's address space, we can easily look
* up the pages elsewhere.
*/
lastvecindex = -1;
for (current = entry; current->start < end; current = current->next) {
/*
* check for contiguity
*/
if (current->end < end && current->next->start > current->end) {
vm_map_unlock_read(map);
return (ENOMEM);
}
/*
* ignore submaps (for now) or null objects
*/
if ((current->eflags & MAP_ENTRY_IS_SUB_MAP) ||
current->object.vm_object == NULL)
continue;
/*
* limit this scan to the current map entry and the
* limits for the mincore call
*/
if (addr < current->start)
addr = current->start;
cend = current->end;
if (cend > end)
cend = end;
/*
* scan this entry one page at a time
*/
while (addr < cend) {
/*
* Check pmap first, it is likely faster, also
* it can provide info as to whether we are the
* one referencing or modifying the page.
*/
object = NULL;
locked_pa = 0;
retry:
m = NULL;
mincoreinfo = pmap_mincore(pmap, addr, &locked_pa);
if (mincore_mapped) {
/*
* We only care about this pmap's
* mapping of the page, if any.
*/
if (locked_pa != 0) {
vm_page_unlock(PHYS_TO_VM_PAGE(
locked_pa));
}
} else if (locked_pa != 0) {
/*
* The page is mapped by this process but not
* both accessed and modified. It is also
* managed. Acquire the object lock so that
* other mappings might be examined.
*/
m = PHYS_TO_VM_PAGE(locked_pa);
if (m->object != object) {
if (object != NULL)
VM_OBJECT_WUNLOCK(object);
object = m->object;
locked = VM_OBJECT_TRYWLOCK(object);
vm_page_unlock(m);
if (!locked) {
VM_OBJECT_WLOCK(object);
vm_page_lock(m);
goto retry;
}
} else
vm_page_unlock(m);
KASSERT(m->valid == VM_PAGE_BITS_ALL,
("mincore: page %p is mapped but invalid",
m));
} else if (mincoreinfo == 0) {
/*
* The page is not mapped by this process. If
* the object implements managed pages, then
* determine if the page is resident so that
* the mappings might be examined.
*/
if (current->object.vm_object != object) {
if (object != NULL)
VM_OBJECT_WUNLOCK(object);
object = current->object.vm_object;
VM_OBJECT_WLOCK(object);
}
if (object->type == OBJT_DEFAULT ||
object->type == OBJT_SWAP ||
object->type == OBJT_VNODE) {
pindex = OFF_TO_IDX(current->offset +
(addr - current->start));
m = vm_page_lookup(object, pindex);
if (m != NULL && m->valid == 0)
m = NULL;
if (m != NULL)
mincoreinfo = MINCORE_INCORE;
}
}
if (m != NULL) {
/* Examine other mappings to the page. */
if (m->dirty == 0 && pmap_is_modified(m))
vm_page_dirty(m);
if (m->dirty != 0)
mincoreinfo |= MINCORE_MODIFIED_OTHER;
/*
* The first test for PGA_REFERENCED is an
* optimization. The second test is
* required because a concurrent pmap
* operation could clear the last reference
* and set PGA_REFERENCED before the call to
* pmap_is_referenced().
*/
if ((m->aflags & PGA_REFERENCED) != 0 ||
pmap_is_referenced(m) ||
(m->aflags & PGA_REFERENCED) != 0)
mincoreinfo |= MINCORE_REFERENCED_OTHER;
}
if (object != NULL)
VM_OBJECT_WUNLOCK(object);
/*
* subyte may page fault. In case it needs to modify
* the map, we release the lock.
*/
vm_map_unlock_read(map);
/*
* calculate index into user supplied byte vector
*/
vecindex = atop(addr - first_addr);
/*
* If we have skipped map entries, we need to make sure that
* the byte vector is zeroed for those skipped entries.
*/
while ((lastvecindex + 1) < vecindex) {
++lastvecindex;
error = subyte(vec + lastvecindex, 0);
if (error) {
error = EFAULT;
goto done2;
}
}
/*
* Pass the page information to the user
*/
error = subyte(vec + vecindex, mincoreinfo);
if (error) {
error = EFAULT;
goto done2;
}
/*
* If the map has changed, due to the subyte, the previous
* output may be invalid.
*/
vm_map_lock_read(map);
if (timestamp != map->timestamp)
goto RestartScan;
lastvecindex = vecindex;
addr += PAGE_SIZE;
}
}
/*
* subyte may page fault. In case it needs to modify
* the map, we release the lock.
*/
vm_map_unlock_read(map);
/*
* Zero the last entries in the byte vector.
*/
vecindex = atop(end - first_addr);
while ((lastvecindex + 1) < vecindex) {
++lastvecindex;
error = subyte(vec + lastvecindex, 0);
if (error) {
error = EFAULT;
goto done2;
}
}
/*
* If the map has changed, due to the subyte, the previous
* output may be invalid.
*/
vm_map_lock_read(map);
if (timestamp != map->timestamp)
goto RestartScan;
vm_map_unlock_read(map);
done2:
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct mlock_args {
const void *addr;
size_t len;
};
#endif
int
sys_mlock(struct thread *td, struct mlock_args *uap)
{
return (kern_mlock(td->td_proc, td->td_ucred,
__DECONST(uintptr_t, uap->addr), uap->len));
}
int
kern_mlock(struct proc *proc, struct ucred *cred, uintptr_t addr0, size_t len)
{
vm_offset_t addr, end, last, start;
vm_size_t npages, size;
vm_map_t map;
unsigned long nsize;
int error;
error = priv_check_cred(cred, PRIV_VM_MLOCK);
if (error)
return (error);
addr = addr0;
size = len;
last = addr + size;
start = trunc_page(addr);
end = round_page(last);
if (last < addr || end < addr)
return (EINVAL);
npages = atop(end - start);
if (npages > vm_page_max_wired)
return (ENOMEM);
map = &proc->p_vmspace->vm_map;
PROC_LOCK(proc);
nsize = ptoa(npages + pmap_wired_count(map->pmap));
if (nsize > lim_cur_proc(proc, RLIMIT_MEMLOCK)) {
PROC_UNLOCK(proc);
return (ENOMEM);
}
PROC_UNLOCK(proc);
if (npages + vm_wire_count() > vm_page_max_wired)
return (EAGAIN);
#ifdef RACCT
if (racct_enable) {
PROC_LOCK(proc);
error = racct_set(proc, RACCT_MEMLOCK, nsize);
PROC_UNLOCK(proc);
if (error != 0)
return (ENOMEM);
}
#endif
error = vm_map_wire(map, start, end,
VM_MAP_WIRE_USER | VM_MAP_WIRE_NOHOLES);
#ifdef RACCT
if (racct_enable && error != KERN_SUCCESS) {
PROC_LOCK(proc);
racct_set(proc, RACCT_MEMLOCK,
ptoa(pmap_wired_count(map->pmap)));
PROC_UNLOCK(proc);
}
#endif
return (error == KERN_SUCCESS ? 0 : ENOMEM);
}
#ifndef _SYS_SYSPROTO_H_
struct mlockall_args {
int how;
};
#endif
int
sys_mlockall(struct thread *td, struct mlockall_args *uap)
{
vm_map_t map;
int error;
map = &td->td_proc->p_vmspace->vm_map;
error = priv_check(td, PRIV_VM_MLOCK);
if (error)
return (error);
if ((uap->how == 0) || ((uap->how & ~(MCL_CURRENT|MCL_FUTURE)) != 0))
return (EINVAL);
/*
* If wiring all pages in the process would cause it to exceed
* a hard resource limit, return ENOMEM.
*/
if (!old_mlock && uap->how & MCL_CURRENT) {
if (map->size > lim_cur(td, RLIMIT_MEMLOCK))
return (ENOMEM);
}
#ifdef RACCT
if (racct_enable) {
PROC_LOCK(td->td_proc);
error = racct_set(td->td_proc, RACCT_MEMLOCK, map->size);
PROC_UNLOCK(td->td_proc);
if (error != 0)
return (ENOMEM);
}
#endif
if (uap->how & MCL_FUTURE) {
vm_map_lock(map);
vm_map_modflags(map, MAP_WIREFUTURE, 0);
vm_map_unlock(map);
error = 0;
}
if (uap->how & MCL_CURRENT) {
/*
* P1003.1-2001 mandates that all currently mapped pages
* will be memory resident and locked (wired) upon return
* from mlockall(). vm_map_wire() will wire pages, by
* calling vm_fault_wire() for each page in the region.
*/
error = vm_map_wire(map, vm_map_min(map), vm_map_max(map),
VM_MAP_WIRE_USER|VM_MAP_WIRE_HOLESOK);
error = (error == KERN_SUCCESS ? 0 : EAGAIN);
}
#ifdef RACCT
if (racct_enable && error != KERN_SUCCESS) {
PROC_LOCK(td->td_proc);
racct_set(td->td_proc, RACCT_MEMLOCK,
ptoa(pmap_wired_count(map->pmap)));
PROC_UNLOCK(td->td_proc);
}
#endif
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct munlockall_args {
register_t dummy;
};
#endif
int
sys_munlockall(struct thread *td, struct munlockall_args *uap)
{
vm_map_t map;
int error;
map = &td->td_proc->p_vmspace->vm_map;
error = priv_check(td, PRIV_VM_MUNLOCK);
if (error)
return (error);
/* Clear the MAP_WIREFUTURE flag from this vm_map. */
vm_map_lock(map);
vm_map_modflags(map, 0, MAP_WIREFUTURE);
vm_map_unlock(map);
/* Forcibly unwire all pages. */
error = vm_map_unwire(map, vm_map_min(map), vm_map_max(map),
VM_MAP_WIRE_USER|VM_MAP_WIRE_HOLESOK);
#ifdef RACCT
if (racct_enable && error == KERN_SUCCESS) {
PROC_LOCK(td->td_proc);
racct_set(td->td_proc, RACCT_MEMLOCK, 0);
PROC_UNLOCK(td->td_proc);
}
#endif
return (error);
}
#ifndef _SYS_SYSPROTO_H_
struct munlock_args {
const void *addr;
size_t len;
};
#endif
int
sys_munlock(struct thread *td, struct munlock_args *uap)
{
return (kern_munlock(td, (uintptr_t)uap->addr, uap->len));
}
int
kern_munlock(struct thread *td, uintptr_t addr0, size_t size)
{
vm_offset_t addr, end, last, start;
#ifdef RACCT
vm_map_t map;
#endif
int error;
error = priv_check(td, PRIV_VM_MUNLOCK);
if (error)
return (error);
addr = addr0;
last = addr + size;
start = trunc_page(addr);
end = round_page(last);
if (last < addr || end < addr)
return (EINVAL);
error = vm_map_unwire(&td->td_proc->p_vmspace->vm_map, start, end,
VM_MAP_WIRE_USER | VM_MAP_WIRE_NOHOLES);
#ifdef RACCT
if (racct_enable && error == KERN_SUCCESS) {
PROC_LOCK(td->td_proc);
map = &td->td_proc->p_vmspace->vm_map;
racct_set(td->td_proc, RACCT_MEMLOCK,
ptoa(pmap_wired_count(map->pmap)));
PROC_UNLOCK(td->td_proc);
}
#endif
return (error == KERN_SUCCESS ? 0 : ENOMEM);
}
/*
* vm_mmap_vnode()
*
* Helper function for vm_mmap. Perform sanity checks specific to mmap
* operations on vnodes.
*/
int
vm_mmap_vnode(struct thread *td, vm_size_t objsize,
vm_prot_t prot, vm_prot_t *maxprotp, int *flagsp,
struct vnode *vp, vm_ooffset_t *foffp, vm_object_t *objp,
boolean_t *writecounted)
{
struct vattr va;
vm_object_t obj;
vm_ooffset_t foff;
struct ucred *cred;
int error, flags, locktype;
cred = td->td_ucred;
if ((*maxprotp & VM_PROT_WRITE) && (*flagsp & MAP_SHARED))
locktype = LK_EXCLUSIVE;
else
locktype = LK_SHARED;
if ((error = vget(vp, locktype, td)) != 0)
return (error);
AUDIT_ARG_VNODE1(vp);
foff = *foffp;
flags = *flagsp;
obj = vp->v_object;
if (vp->v_type == VREG) {
/*
* Get the proper underlying object
*/
if (obj == NULL) {
error = EINVAL;
goto done;
}
if (obj->type == OBJT_VNODE && obj->handle != vp) {
vput(vp);
vp = (struct vnode *)obj->handle;
/*
* Bypass filesystems obey the mpsafety of the
* underlying fs. Tmpfs never bypasses.
*/
error = vget(vp, locktype, td);
if (error != 0)
return (error);
}
if (locktype == LK_EXCLUSIVE) {
*writecounted = TRUE;
vnode_pager_update_writecount(obj, 0, objsize);
}
} else {
error = EINVAL;
goto done;
}
if ((error = VOP_GETATTR(vp, &va, cred)))
goto done;
#ifdef MAC
/* This relies on VM_PROT_* matching PROT_*. */
error = mac_vnode_check_mmap(cred, vp, (int)prot, flags);
if (error != 0)
goto done;
#endif
if ((flags & MAP_SHARED) != 0) {
if ((va.va_flags & (SF_SNAPSHOT|IMMUTABLE|APPEND)) != 0) {
if (prot & VM_PROT_WRITE) {
error = EPERM;
goto done;
}
*maxprotp &= ~VM_PROT_WRITE;
}
}
/*
* If it is a regular file without any references
* we do not need to sync it.
* Adjust the object size to be the size of the actual file.
*/
objsize = round_page(va.va_size);
if (va.va_nlink == 0)
flags |= MAP_NOSYNC;
if (obj->type == OBJT_VNODE) {
obj = vm_pager_allocate(OBJT_VNODE, vp, objsize, prot, foff,
cred);
if (obj == NULL) {
error = ENOMEM;
goto done;
}
} else {
KASSERT(obj->type == OBJT_DEFAULT || obj->type == OBJT_SWAP,
("wrong object type"));
VM_OBJECT_WLOCK(obj);
vm_object_reference_locked(obj);
#if VM_NRESERVLEVEL > 0
vm_object_color(obj, 0);
#endif
VM_OBJECT_WUNLOCK(obj);
}
*objp = obj;
*flagsp = flags;
vfs_mark_atime(vp, cred);
done:
if (error != 0 && *writecounted) {
*writecounted = FALSE;
vnode_pager_update_writecount(obj, objsize, 0);
}
vput(vp);
return (error);
}
/*
* vm_mmap_cdev()
*
* Helper function for vm_mmap. Perform sanity checks specific to mmap
* operations on cdevs.
*/
int
vm_mmap_cdev(struct thread *td, vm_size_t objsize, vm_prot_t prot,
vm_prot_t *maxprotp, int *flagsp, struct cdev *cdev, struct cdevsw *dsw,
vm_ooffset_t *foff, vm_object_t *objp)
{
vm_object_t obj;
int error, flags;
flags = *flagsp;
if (dsw->d_flags & D_MMAP_ANON) {
*objp = NULL;
*foff = 0;
*maxprotp = VM_PROT_ALL;
*flagsp |= MAP_ANON;
return (0);
}
/*
* cdevs do not provide private mappings of any kind.
*/
if ((*maxprotp & VM_PROT_WRITE) == 0 &&
(prot & VM_PROT_WRITE) != 0)
return (EACCES);
if (flags & (MAP_PRIVATE|MAP_COPY))
return (EINVAL);
/*
* Force device mappings to be shared.
*/
flags |= MAP_SHARED;
#ifdef MAC_XXX
error = mac_cdev_check_mmap(td->td_ucred, cdev, (int)prot);
if (error != 0)
return (error);
#endif
/*
* First, try d_mmap_single(). If that is not implemented
* (returns ENODEV), fall back to using the device pager.
* Note that d_mmap_single() must return a reference to the
* object (it needs to bump the reference count of the object
* it returns somehow).
*
* XXX assumes VM_PROT_* == PROT_*
*/
error = dsw->d_mmap_single(cdev, foff, objsize, objp, (int)prot);
if (error != ENODEV)
return (error);
obj = vm_pager_allocate(OBJT_DEVICE, cdev, objsize, prot, *foff,
td->td_ucred);
if (obj == NULL)
return (EINVAL);
*objp = obj;
*flagsp = flags;
return (0);
}
/*
* vm_mmap()
*
* Internal version of mmap used by exec, sys5 shared memory, and
* various device drivers. Handle is either a vnode pointer, a
* character device, or NULL for MAP_ANON.
*/
int
vm_mmap(vm_map_t map, vm_offset_t *addr, vm_size_t size, vm_prot_t prot,
vm_prot_t maxprot, int flags,
objtype_t handle_type, void *handle,
vm_ooffset_t foff)
{
vm_object_t object;
struct thread *td = curthread;
int error;
boolean_t writecounted;
if (size == 0)
return (EINVAL);
size = round_page(size);
object = NULL;
writecounted = FALSE;
/*
* Lookup/allocate object.
*/
switch (handle_type) {
case OBJT_DEVICE: {
struct cdevsw *dsw;
struct cdev *cdev;
int ref;
cdev = handle;
dsw = dev_refthread(cdev, &ref);
if (dsw == NULL)
return (ENXIO);
error = vm_mmap_cdev(td, size, prot, &maxprot, &flags, cdev,
dsw, &foff, &object);
dev_relthread(cdev, ref);
break;
}
case OBJT_VNODE:
error = vm_mmap_vnode(td, size, prot, &maxprot, &flags,
handle, &foff, &object, &writecounted);
break;
case OBJT_DEFAULT:
if (handle == NULL) {
error = 0;
break;
}
/* FALLTHROUGH */
default:
error = EINVAL;
break;
}
if (error)
return (error);
error = vm_mmap_object(map, addr, size, prot, maxprot, flags, object,
foff, writecounted, td);
if (error != 0 && object != NULL) {
/*
* If this mapping was accounted for in the vnode's
* writecount, then undo that now.
*/
if (writecounted)
vnode_pager_release_writecount(object, 0, size);
vm_object_deallocate(object);
}
return (error);
}
/*
* Internal version of mmap that maps a specific VM object into a
* map. Called by mmap for MAP_ANON, vm_mmap, shm_mmap, and vn_mmap.
*/
int
vm_mmap_object(vm_map_t map, vm_offset_t *addr, vm_size_t size, vm_prot_t prot,
vm_prot_t maxprot, int flags, vm_object_t object, vm_ooffset_t foff,
boolean_t writecounted, struct thread *td)
{
boolean_t curmap, fitit;
vm_offset_t max_addr;
int docow, error, findspace, rv;
curmap = map == &td->td_proc->p_vmspace->vm_map;
if (curmap) {
RACCT_PROC_LOCK(td->td_proc);
if (map->size + size > lim_cur(td, RLIMIT_VMEM)) {
RACCT_PROC_UNLOCK(td->td_proc);
return (ENOMEM);
}
if (racct_set(td->td_proc, RACCT_VMEM, map->size + size)) {
RACCT_PROC_UNLOCK(td->td_proc);
return (ENOMEM);
}
if (!old_mlock && map->flags & MAP_WIREFUTURE) {
if (ptoa(pmap_wired_count(map->pmap)) + size >
lim_cur(td, RLIMIT_MEMLOCK)) {
racct_set_force(td->td_proc, RACCT_VMEM,
map->size);
RACCT_PROC_UNLOCK(td->td_proc);
return (ENOMEM);
}
error = racct_set(td->td_proc, RACCT_MEMLOCK,
ptoa(pmap_wired_count(map->pmap)) + size);
if (error != 0) {
racct_set_force(td->td_proc, RACCT_VMEM,
map->size);
RACCT_PROC_UNLOCK(td->td_proc);
return (error);
}
}
RACCT_PROC_UNLOCK(td->td_proc);
}
/*
* We currently can only deal with page aligned file offsets.
* The mmap() system call already enforces this by subtracting
* the page offset from the file offset, but checking here
* catches errors in device drivers (e.g. d_mmap_single()
* callbacks) and other internal mapping requests (such as in
* exec).
*/
if (foff & PAGE_MASK)
return (EINVAL);
if ((flags & MAP_FIXED) == 0) {
fitit = TRUE;
*addr = round_page(*addr);
} else {
if (*addr != trunc_page(*addr))
return (EINVAL);
fitit = FALSE;
}
if (flags & MAP_ANON) {
if (object != NULL || foff != 0)
return (EINVAL);
docow = 0;
} else if (flags & MAP_PREFAULT_READ)
docow = MAP_PREFAULT;
else
docow = MAP_PREFAULT_PARTIAL;
if ((flags & (MAP_ANON|MAP_SHARED)) == 0)
docow |= MAP_COPY_ON_WRITE;
if (flags & MAP_NOSYNC)
docow |= MAP_DISABLE_SYNCER;
if (flags & MAP_NOCORE)
docow |= MAP_DISABLE_COREDUMP;
/* Shared memory is also shared with children. */
if (flags & MAP_SHARED)
docow |= MAP_INHERIT_SHARE;
if (writecounted)
docow |= MAP_VN_WRITECOUNT;
if (flags & MAP_STACK) {
if (object != NULL)
return (EINVAL);
docow |= MAP_STACK_GROWS_DOWN;
}
if ((flags & MAP_EXCL) != 0)
docow |= MAP_CHECK_EXCL;
if ((flags & MAP_GUARD) != 0)
docow |= MAP_CREATE_GUARD;
if (fitit) {
if ((flags & MAP_ALIGNMENT_MASK) == MAP_ALIGNED_SUPER)
findspace = VMFS_SUPER_SPACE;
else if ((flags & MAP_ALIGNMENT_MASK) != 0)
findspace = VMFS_ALIGNED_SPACE(flags >>
MAP_ALIGNMENT_SHIFT);
else
findspace = VMFS_OPTIMAL_SPACE;
max_addr = 0;
#ifdef MAP_32BIT
if ((flags & MAP_32BIT) != 0)
max_addr = MAP_32BIT_MAX_ADDR;
#endif
if (curmap) {
rv = vm_map_find_min(map, object, foff, addr, size,
round_page((vm_offset_t)td->td_proc->p_vmspace->
vm_daddr + lim_max(td, RLIMIT_DATA)), max_addr,
findspace, prot, maxprot, docow);
} else {
rv = vm_map_find(map, object, foff, addr, size,
max_addr, findspace, prot, maxprot, docow);
}
} else {
rv = vm_map_fixed(map, object, foff, *addr, size,
prot, maxprot, docow);
}
if (rv == KERN_SUCCESS) {
/*
* If the process has requested that all future mappings
* be wired, then heed this.
*/
if (map->flags & MAP_WIREFUTURE) {
vm_map_wire(map, *addr, *addr + size,
VM_MAP_WIRE_USER | ((flags & MAP_STACK) ?
VM_MAP_WIRE_HOLESOK : VM_MAP_WIRE_NOHOLES));
}
}
return (vm_mmap_to_errno(rv));
}
/*
* Translate a Mach VM return code to zero on success or the appropriate errno
* on failure.
*/
int
vm_mmap_to_errno(int rv)
{
switch (rv) {
case KERN_SUCCESS:
return (0);
case KERN_INVALID_ADDRESS:
case KERN_NO_SPACE:
return (ENOMEM);
case KERN_PROTECTION_FAILURE:
return (EACCES);
default:
return (EINVAL);
}
}
Index: projects/clang800-import/sys/vm/vm_unix.c
===================================================================
--- projects/clang800-import/sys/vm/vm_unix.c (revision 343955)
+++ projects/clang800-import/sys/vm/vm_unix.c (revision 343956)
@@ -1,258 +1,259 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1988 University of Utah.
* Copyright (c) 1991, 1993
* The Regents of the University of California. All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* the Systems Programming Group of the University of Utah Computer
* Science Department.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* from: Utah $Hdr: vm_unix.c 1.1 89/11/07$
*
* @(#)vm_unix.c 8.1 (Berkeley) 6/11/93
*/
/*
* Traditional sbrk/grow interface to VM
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/racct.h>
#include <sys/resourcevar.h>
#include <sys/syscallsubr.h>
#include <sys/sysent.h>
#include <sys/sysproto.h>
#include <sys/systm.h>
+#if defined(__amd64__) || defined(__i386__) /* for i386_read_exec */
+#include <machine/md_var.h>
+#endif
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_map.h>
#ifndef _SYS_SYSPROTO_H_
struct break_args {
char *nsize;
};
#endif
int
sys_break(struct thread *td, struct break_args *uap)
{
#if !defined(__aarch64__) && !defined(__riscv)
uintptr_t addr;
int error;
addr = (uintptr_t)uap->nsize;
error = kern_break(td, &addr);
if (error == 0)
td->td_retval[0] = addr;
return (error);
#else /* defined(__aarch64__) || defined(__riscv) */
return (ENOSYS);
#endif /* defined(__aarch64__) || defined(__riscv) */
}
int
kern_break(struct thread *td, uintptr_t *addr)
{
struct vmspace *vm = td->td_proc->p_vmspace;
vm_map_t map = &vm->vm_map;
vm_offset_t new, old, base;
rlim_t datalim, lmemlim, vmemlim;
int prot, rv;
int error = 0;
boolean_t do_map_wirefuture;
datalim = lim_cur(td, RLIMIT_DATA);
lmemlim = lim_cur(td, RLIMIT_MEMLOCK);
vmemlim = lim_cur(td, RLIMIT_VMEM);
do_map_wirefuture = FALSE;
new = round_page(*addr);
vm_map_lock(map);
base = round_page((vm_offset_t) vm->vm_daddr);
old = base + ctob(vm->vm_dsize);
if (new > base) {
/*
* Check the resource limit, but allow a process to reduce
* its usage, even if it remains over the limit.
*/
if (new - base > datalim && new > old) {
error = ENOMEM;
goto done;
}
if (new > vm_map_max(map)) {
error = ENOMEM;
goto done;
}
} else if (new < base) {
/*
* Simply return the current break address without
* modifying any state. This is an ad-hoc interface
* used by libc to determine the initial break address,
* avoiding a dependency on magic features in the system
* linker.
*/
new = old;
goto done;
}
if (new > old) {
if (!old_mlock && map->flags & MAP_WIREFUTURE) {
if (ptoa(pmap_wired_count(map->pmap)) +
(new - old) > lmemlim) {
error = ENOMEM;
goto done;
}
}
if (map->size + (new - old) > vmemlim) {
error = ENOMEM;
goto done;
}
#ifdef RACCT
if (racct_enable) {
PROC_LOCK(td->td_proc);
error = racct_set(td->td_proc, RACCT_DATA, new - base);
if (error != 0) {
PROC_UNLOCK(td->td_proc);
error = ENOMEM;
goto done;
}
error = racct_set(td->td_proc, RACCT_VMEM,
map->size + (new - old));
if (error != 0) {
racct_set_force(td->td_proc, RACCT_DATA,
old - base);
PROC_UNLOCK(td->td_proc);
error = ENOMEM;
goto done;
}
if (!old_mlock && map->flags & MAP_WIREFUTURE) {
error = racct_set(td->td_proc, RACCT_MEMLOCK,
ptoa(pmap_wired_count(map->pmap)) +
(new - old));
if (error != 0) {
racct_set_force(td->td_proc, RACCT_DATA,
old - base);
racct_set_force(td->td_proc, RACCT_VMEM,
map->size);
PROC_UNLOCK(td->td_proc);
error = ENOMEM;
goto done;
}
}
PROC_UNLOCK(td->td_proc);
}
#endif
prot = VM_PROT_RW;
-#ifdef COMPAT_FREEBSD32
-#if defined(__amd64__)
+#if (defined(COMPAT_FREEBSD32) && defined(__amd64__)) || defined(__i386__)
if (i386_read_exec && SV_PROC_FLAG(td->td_proc, SV_ILP32))
prot |= VM_PROT_EXECUTE;
-#endif
#endif
rv = vm_map_insert(map, NULL, 0, old, new, prot, VM_PROT_ALL, 0);
if (rv != KERN_SUCCESS) {
#ifdef RACCT
if (racct_enable) {
PROC_LOCK(td->td_proc);
racct_set_force(td->td_proc,
RACCT_DATA, old - base);
racct_set_force(td->td_proc,
RACCT_VMEM, map->size);
if (!old_mlock && map->flags & MAP_WIREFUTURE) {
racct_set_force(td->td_proc,
RACCT_MEMLOCK,
ptoa(pmap_wired_count(map->pmap)));
}
PROC_UNLOCK(td->td_proc);
}
#endif
error = ENOMEM;
goto done;
}
vm->vm_dsize += btoc(new - old);
/*
* Handle the MAP_WIREFUTURE case for legacy applications,
* by marking the newly mapped range of pages as wired.
* We are not required to perform a corresponding
* vm_map_unwire() before vm_map_delete() below, as
* it will forcibly unwire the pages in the range.
*
* XXX If the pages cannot be wired, no error is returned.
*/
if ((map->flags & MAP_WIREFUTURE) == MAP_WIREFUTURE)
do_map_wirefuture = TRUE;
} else if (new < old) {
rv = vm_map_delete(map, new, old);
if (rv != KERN_SUCCESS) {
error = ENOMEM;
goto done;
}
vm->vm_dsize -= btoc(old - new);
#ifdef RACCT
if (racct_enable) {
PROC_LOCK(td->td_proc);
racct_set_force(td->td_proc, RACCT_DATA, new - base);
racct_set_force(td->td_proc, RACCT_VMEM, map->size);
if (!old_mlock && map->flags & MAP_WIREFUTURE) {
racct_set_force(td->td_proc, RACCT_MEMLOCK,
ptoa(pmap_wired_count(map->pmap)));
}
PROC_UNLOCK(td->td_proc);
}
#endif
}
done:
vm_map_unlock(map);
if (do_map_wirefuture)
(void) vm_map_wire(map, old, new,
VM_MAP_WIRE_USER|VM_MAP_WIRE_NOHOLES);
if (error == 0)
*addr = new;
return (error);
}
#ifdef COMPAT_FREEBSD11
int
freebsd11_vadvise(struct thread *td, struct freebsd11_vadvise_args *uap)
{
return (EINVAL);
}
#endif
Index: projects/clang800-import/sys/x86/acpica/acpi_wakeup.c
===================================================================
--- projects/clang800-import/sys/x86/acpica/acpi_wakeup.c (revision 343955)
+++ projects/clang800-import/sys/x86/acpica/acpi_wakeup.c (revision 343956)
@@ -1,478 +1,484 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2001 Takanori Watanabe <takawata@jp.freebsd.org>
* Copyright (c) 2001-2012 Mitsuru IWASAKI <iwasaki@jp.freebsd.org>
* Copyright (c) 2003 Peter Wemm
* Copyright (c) 2008-2012 Jung-uk Kim <jkim@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#if defined(__amd64__)
#define DEV_APIC
#else
#include "opt_apic.h"
#endif
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/eventhandler.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/memrange.h>
#include <sys/smp.h>
#include <sys/systm.h>
#include <sys/cons.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <machine/clock.h>
#include <machine/cpu.h>
#include <machine/intr_machdep.h>
#include <machine/md_var.h>
#include <x86/mca.h>
#include <machine/pcb.h>
#include <machine/specialreg.h>
#include <x86/ucode.h>
#ifdef DEV_APIC
#include <x86/apicreg.h>
#include <x86/apicvar.h>
#endif
#ifdef SMP
#include <machine/smp.h>
#include <machine/vmparam.h>
#endif
#include <contrib/dev/acpica/include/acpi.h>
#include <dev/acpica/acpivar.h>
#include "acpi_wakecode.h"
#include "acpi_wakedata.h"
/* Make sure the code is less than a page and leave room for the stack. */
CTASSERT(sizeof(wakecode) < PAGE_SIZE - 1024);
extern int acpi_resume_beep;
extern int acpi_reset_video;
extern int acpi_susp_bounce;
#ifdef SMP
extern struct susppcb **susppcbs;
static cpuset_t suspcpus;
#else
static struct susppcb **susppcbs;
#endif
static void *acpi_alloc_wakeup_handler(void **);
static void acpi_stop_beep(void *);
#ifdef SMP
static int acpi_wakeup_ap(struct acpi_softc *, int);
static void acpi_wakeup_cpus(struct acpi_softc *);
#endif
#ifdef __amd64__
#define ACPI_WAKEPAGES 4
#else
#define ACPI_WAKEPAGES 1
#endif
#define WAKECODE_FIXUP(offset, type, val) do { \
type *addr; \
addr = (type *)(sc->acpi_wakeaddr + (offset)); \
*addr = val; \
} while (0)
static void
acpi_stop_beep(void *arg)
{
if (acpi_resume_beep != 0)
timer_spkr_release();
}
#ifdef SMP
static int
acpi_wakeup_ap(struct acpi_softc *sc, int cpu)
{
struct pcb *pcb;
int vector = (sc->acpi_wakephys >> 12) & 0xff;
int apic_id = cpu_apic_ids[cpu];
int ms;
pcb = &susppcbs[cpu]->sp_pcb;
WAKECODE_FIXUP(wakeup_pcb, struct pcb *, pcb);
WAKECODE_FIXUP(wakeup_gdt, uint16_t, pcb->pcb_gdt.rd_limit);
WAKECODE_FIXUP(wakeup_gdt + 2, uint64_t, pcb->pcb_gdt.rd_base);
ipi_startup(apic_id, vector);
/* Wait up to 5 seconds for it to resume. */
for (ms = 0; ms < 5000; ms++) {
if (!CPU_ISSET(cpu, &suspended_cpus))
return (1); /* return SUCCESS */
DELAY(1000);
}
return (0); /* return FAILURE */
}
#define WARMBOOT_TARGET 0
#ifdef __amd64__
#define WARMBOOT_OFF (KERNBASE + 0x0467)
#define WARMBOOT_SEG (KERNBASE + 0x0469)
#else /* __i386__ */
#define WARMBOOT_OFF (PMAP_MAP_LOW + 0x0467)
#define WARMBOOT_SEG (PMAP_MAP_LOW + 0x0469)
#endif
#define CMOS_REG (0x70)
#define CMOS_DATA (0x71)
#define BIOS_RESET (0x0f)
#define BIOS_WARM (0x0a)
static void
acpi_wakeup_cpus(struct acpi_softc *sc)
{
uint32_t mpbioswarmvec;
int cpu;
u_char mpbiosreason;
/* save the current value of the warm-start vector */
mpbioswarmvec = *((uint32_t *)WARMBOOT_OFF);
outb(CMOS_REG, BIOS_RESET);
mpbiosreason = inb(CMOS_DATA);
/* setup a vector to our boot code */
*((volatile u_short *)WARMBOOT_OFF) = WARMBOOT_TARGET;
*((volatile u_short *)WARMBOOT_SEG) = sc->acpi_wakephys >> 4;
outb(CMOS_REG, BIOS_RESET);
outb(CMOS_DATA, BIOS_WARM); /* 'warm-start' */
/* Wake up each AP. */
for (cpu = 1; cpu < mp_ncpus; cpu++) {
if (!CPU_ISSET(cpu, &suspcpus))
continue;
if (acpi_wakeup_ap(sc, cpu) == 0) {
/* restore the warmstart vector */
*(uint32_t *)WARMBOOT_OFF = mpbioswarmvec;
panic("acpi_wakeup: failed to resume AP #%d (PHY #%d)",
cpu, cpu_apic_ids[cpu]);
}
}
#ifdef __i386__
/*
* Remove the identity mapping of low memory for all CPUs and sync
* the TLB for the BSP. The APs are now spinning in
* cpususpend_handler() and we will release them soon. Then each
* will invalidate its TLB.
*/
pmap_remap_lowptdi(false);
#endif
/* restore the warmstart vector */
*(uint32_t *)WARMBOOT_OFF = mpbioswarmvec;
outb(CMOS_REG, BIOS_RESET);
outb(CMOS_DATA, mpbiosreason);
}
#endif
int
acpi_sleep_machdep(struct acpi_softc *sc, int state)
{
ACPI_STATUS status;
struct pcb *pcb;
#ifdef __amd64__
struct pcpu *pc;
int i;
#endif
if (sc->acpi_wakeaddr == 0ul)
return (-1); /* couldn't alloc wake memory */
#ifdef SMP
suspcpus = all_cpus;
CPU_CLR(PCPU_GET(cpuid), &suspcpus);
#endif
if (acpi_resume_beep != 0)
timer_spkr_acquire();
AcpiSetFirmwareWakingVector(sc->acpi_wakephys, 0);
intr_suspend();
pcb = &susppcbs[0]->sp_pcb;
if (savectx(pcb)) {
#ifdef __amd64__
fpususpend(susppcbs[0]->sp_fpususpend);
#else
npxsuspend(susppcbs[0]->sp_fpususpend);
#endif
#ifdef SMP
if (!CPU_EMPTY(&suspcpus) && suspend_cpus(suspcpus) == 0) {
device_printf(sc->acpi_dev, "Failed to suspend APs\n");
return (0); /* couldn't sleep */
}
#endif
#ifdef __amd64__
hw_ibrs_active = 0;
hw_ssb_active = 0;
cpu_stdext_feature3 = 0;
CPU_FOREACH(i) {
pc = pcpu_find(i);
pc->pc_ibpb_set = 0;
}
#endif
WAKECODE_FIXUP(resume_beep, uint8_t, (acpi_resume_beep != 0));
WAKECODE_FIXUP(reset_video, uint8_t, (acpi_reset_video != 0));
#ifdef __amd64__
WAKECODE_FIXUP(wakeup_efer, uint64_t, rdmsr(MSR_EFER) &
~(EFER_LMA));
#else
+ if ((amd_feature & AMDID_NX) != 0)
+ WAKECODE_FIXUP(wakeup_efer, uint64_t, rdmsr(MSR_EFER));
WAKECODE_FIXUP(wakeup_cr4, register_t, pcb->pcb_cr4);
#endif
WAKECODE_FIXUP(wakeup_pcb, struct pcb *, pcb);
WAKECODE_FIXUP(wakeup_gdt, uint16_t, pcb->pcb_gdt.rd_limit);
WAKECODE_FIXUP(wakeup_gdt + 2, uint64_t, pcb->pcb_gdt.rd_base);
#ifdef __i386__
/*
* Map some low memory with virt == phys for ACPI wakecode
* to use to jump to high memory after enabling paging. This
* is the same as for similar jump in locore, except the
* jump is a single instruction, and we know its address
* more precisely, so we only need a single PTD, and we have to
* be careful to use the kernel map (PTD[0] is for curthread
* which may be a user thread in deprecated APIs).
*/
pmap_remap_lowptdi(true);
#endif
/* Call ACPICA to enter the desired sleep state */
if (state == ACPI_STATE_S4 && sc->acpi_s4bios)
status = AcpiEnterSleepStateS4bios();
else
status = AcpiEnterSleepState(state);
if (ACPI_FAILURE(status)) {
device_printf(sc->acpi_dev,
"AcpiEnterSleepState failed - %s\n",
AcpiFormatException(status));
return (0); /* couldn't sleep */
}
if (acpi_susp_bounce)
resumectx(pcb);
for (;;)
ia32_pause();
} else {
/*
* Re-initialize console hardware as soon as possible.
* No console output (e.g. printf) is allowed before
* this point.
*/
cnresume();
#ifdef __amd64__
fpuresume(susppcbs[0]->sp_fpususpend);
#else
npxresume(susppcbs[0]->sp_fpususpend);
#endif
}
return (1); /* woke up successfully */
}
int
acpi_wakeup_machdep(struct acpi_softc *sc, int state, int sleep_result,
int intr_enabled)
{
if (sleep_result == -1)
return (sleep_result);
if (!intr_enabled) {
/* Wakeup MD procedures in interrupt disabled context */
if (sleep_result == 1) {
ucode_reload();
pmap_init_pat();
initializecpu();
PCPU_SET(switchtime, 0);
PCPU_SET(switchticks, ticks);
#ifdef DEV_APIC
lapic_xapic_mode();
#endif
#ifdef SMP
if (!CPU_EMPTY(&suspcpus))
acpi_wakeup_cpus(sc);
#endif
}
#ifdef SMP
if (!CPU_EMPTY(&suspcpus))
resume_cpus(suspcpus);
#endif
mca_resume();
#ifdef __amd64__
if (vmm_resume_p != NULL)
vmm_resume_p();
#endif
intr_resume(/*suspend_cancelled*/false);
AcpiSetFirmwareWakingVector(0, 0);
} else {
/* Wakeup MD procedures in interrupt enabled context */
if (sleep_result == 1 && mem_range_softc.mr_op != NULL &&
mem_range_softc.mr_op->reinit != NULL)
mem_range_softc.mr_op->reinit(&mem_range_softc);
}
return (sleep_result);
}
static void *
acpi_alloc_wakeup_handler(void *wakepages[ACPI_WAKEPAGES])
{
int i;
memset(wakepages, 0, ACPI_WAKEPAGES * sizeof(*wakepages));
/*
* Specify the region for our wakeup code. We want it in the low 1 MB
* region, excluding real mode IVT (0-0x3ff), BDA (0x400-0x4ff), EBDA
* (less than 128KB, below 0xa0000, must be excluded by SMAP and DSDT),
* and ROM area (0xa0000 and above). The temporary page tables must be
* page-aligned.
*/
for (i = 0; i < ACPI_WAKEPAGES; i++) {
- wakepages[i] = contigmalloc(PAGE_SIZE, M_DEVBUF, M_NOWAIT,
- 0x500, 0xa0000, PAGE_SIZE, 0ul);
+ wakepages[i] = contigmalloc(PAGE_SIZE, M_DEVBUF,
+ M_NOWAIT
+#ifdef __i386__
+ | M_EXEC
+#endif
+ , 0x500, 0xa0000, PAGE_SIZE, 0ul);
if (wakepages[i] == NULL) {
printf("%s: can't alloc wake memory\n", __func__);
goto freepages;
}
}
if (EVENTHANDLER_REGISTER(power_resume, acpi_stop_beep, NULL,
EVENTHANDLER_PRI_LAST) == NULL) {
printf("%s: can't register event handler\n", __func__);
goto freepages;
}
susppcbs = malloc(mp_ncpus * sizeof(*susppcbs), M_DEVBUF, M_WAITOK);
for (i = 0; i < mp_ncpus; i++) {
susppcbs[i] = malloc(sizeof(**susppcbs), M_DEVBUF, M_WAITOK);
susppcbs[i]->sp_fpususpend = alloc_fpusave(M_WAITOK);
}
return (wakepages);
freepages:
for (i = 0; i < ACPI_WAKEPAGES; i++)
if (wakepages[i] != NULL)
contigfree(wakepages[i], PAGE_SIZE, M_DEVBUF);
return (NULL);
}
void
acpi_install_wakeup_handler(struct acpi_softc *sc)
{
static void *wakeaddr;
void *wakepages[ACPI_WAKEPAGES];
#ifdef __amd64__
uint64_t *pt4, *pt3, *pt2;
vm_paddr_t pt4pa, pt3pa, pt2pa;
int i;
#endif
if (wakeaddr != NULL)
return;
if (acpi_alloc_wakeup_handler(wakepages) == NULL)
return;
wakeaddr = wakepages[0];
sc->acpi_wakeaddr = (vm_offset_t)wakeaddr;
sc->acpi_wakephys = vtophys(wakeaddr);
#ifdef __amd64__
pt4 = wakepages[1];
pt3 = wakepages[2];
pt2 = wakepages[3];
pt4pa = vtophys(pt4);
pt3pa = vtophys(pt3);
pt2pa = vtophys(pt2);
#endif
bcopy(wakecode, (void *)sc->acpi_wakeaddr, sizeof(wakecode));
/* Patch GDT base address, ljmp targets. */
WAKECODE_FIXUP((bootgdtdesc + 2), uint32_t,
sc->acpi_wakephys + bootgdt);
WAKECODE_FIXUP((wakeup_sw32 + 2), uint32_t,
sc->acpi_wakephys + wakeup_32);
#ifdef __amd64__
WAKECODE_FIXUP((wakeup_sw64 + 1), uint32_t,
sc->acpi_wakephys + wakeup_64);
WAKECODE_FIXUP(wakeup_pagetables, uint32_t, pt4pa);
#endif
/* Save pointers to some global data. */
WAKECODE_FIXUP(wakeup_ret, void *, resumectx);
#ifndef __amd64__
WAKECODE_FIXUP(wakeup_cr3, register_t, pmap_get_kcr3());
#else /* __amd64__ */
/* Create the initial 1GB replicated page tables */
for (i = 0; i < 512; i++) {
/*
* Each slot of the level 4 pages points
* to the same level 3 page
*/
pt4[i] = (uint64_t)pt3pa;
pt4[i] |= PG_V | PG_RW | PG_U;
/*
* Each slot of the level 3 pages points
* to the same level 2 page
*/
pt3[i] = (uint64_t)pt2pa;
pt3[i] |= PG_V | PG_RW | PG_U;
/* The level 2 page slots are mapped with 2MB pages for 1GB. */
pt2[i] = i * (2 * 1024 * 1024);
pt2[i] |= PG_V | PG_RW | PG_PS | PG_U;
}
#endif /* !__amd64__ */
if (bootverbose)
device_printf(sc->acpi_dev, "wakeup code va %#jx pa %#jx\n",
(uintmax_t)sc->acpi_wakeaddr, (uintmax_t)sc->acpi_wakephys);
}
Index: projects/clang800-import/sys/x86/include/x86_var.h
===================================================================
--- projects/clang800-import/sys/x86/include/x86_var.h (revision 343955)
+++ projects/clang800-import/sys/x86/include/x86_var.h (revision 343956)
@@ -1,141 +1,142 @@
/*-
* Copyright (c) 1995 Bruce D. Evans.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the author nor the names of contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#ifndef _X86_X86_VAR_H_
#define _X86_X86_VAR_H_
/*
* Miscellaneous machine-dependent declarations.
*/
extern long Maxmem;
extern u_int basemem;
extern int busdma_swi_pending;
extern u_int cpu_exthigh;
extern u_int cpu_feature;
extern u_int cpu_feature2;
extern u_int amd_feature;
extern u_int amd_feature2;
extern u_int amd_rascap;
extern u_int amd_pminfo;
extern u_int amd_extended_feature_extensions;
extern u_int via_feature_rng;
extern u_int via_feature_xcrypt;
extern u_int cpu_clflush_line_size;
extern u_int cpu_stdext_feature;
extern u_int cpu_stdext_feature2;
extern u_int cpu_stdext_feature3;
extern uint64_t cpu_ia32_arch_caps;
extern u_int cpu_fxsr;
extern u_int cpu_high;
extern u_int cpu_id;
extern u_int cpu_max_ext_state_size;
extern u_int cpu_mxcsr_mask;
extern u_int cpu_procinfo;
extern u_int cpu_procinfo2;
extern char cpu_vendor[];
extern u_int cpu_vendor_id;
extern u_int cpu_mon_mwait_flags;
extern u_int cpu_mon_min_size;
extern u_int cpu_mon_max_size;
extern u_int cpu_maxphyaddr;
extern char ctx_switch_xsave[];
extern u_int hv_high;
extern char hv_vendor[];
extern char kstack[];
extern char sigcode[];
extern int szsigcode;
extern int vm_page_dump_size;
extern int workaround_erratum383;
extern int _udatasel;
extern int _ucodesel;
extern int _ucode32sel;
extern int _ufssel;
extern int _ugssel;
extern int use_xsave;
extern uint64_t xsave_mask;
extern u_int max_apic_id;
+extern int i386_read_exec;
extern int pti;
extern int hw_ibrs_active;
extern int hw_ssb_active;
struct pcb;
struct thread;
struct reg;
struct fpreg;
struct dbreg;
struct dumperinfo;
struct trapframe;
/*
* The interface type of the interrupt handler entry point cannot be
* expressed in C. Use simplest non-variadic function type as an
* approximation.
*/
typedef void alias_for_inthand_t(void);
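/*
 * Usage sketch (illustrative only, not part of this header): C code
 * declares an assembly entry point with this type and passes only its
 * address along, e.g. to an IDT setup routine; the function is never
 * actually called through the approximate type from C:
 *
 *	extern alias_for_inthand_t some_asm_stub;
 *	install_handler(vector, &some_asm_stub);
 *
 * The names some_asm_stub and install_handler are hypothetical.
 */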
bool acpi_get_fadt_bootflags(uint16_t *flagsp);
void *alloc_fpusave(int flags);
void busdma_swi(void);
vm_paddr_t cpu_getmaxphyaddr(void);
bool cpu_mwait_usable(void);
void cpu_probe_amdc1e(void);
void cpu_setregs(void);
bool disable_wp(void);
void restore_wp(bool old_wp);
void dump_add_page(vm_paddr_t);
void dump_drop_page(vm_paddr_t);
void finishidentcpu(void);
void identify_cpu1(void);
void identify_cpu2(void);
void identify_cpu_fixup_bsp(void);
void identify_hypervisor(void);
void initializecpu(void);
void initializecpucache(void);
bool fix_cpuid(void);
void fillw(int /*u_short*/ pat, void *base, size_t cnt);
int is_physical_memory(vm_paddr_t addr);
int isa_nmi(int cd);
void handle_ibrs_entry(void);
void handle_ibrs_exit(void);
void hw_ibrs_recalculate(void);
void hw_ssb_recalculate(bool all_cpus);
void nmi_call_kdb(u_int cpu, u_int type, struct trapframe *frame);
void nmi_call_kdb_smp(u_int type, struct trapframe *frame);
void nmi_handle_intr(u_int type, struct trapframe *frame);
void pagecopy(void *from, void *to);
void printcpuinfo(void);
int pti_get_default(void);
int user_dbreg_trap(register_t dr6);
int minidumpsys(struct dumperinfo *);
struct pcb *get_pcb_td(struct thread *td);
#endif
Index: projects/clang800-import/tools/build/mk/OptionalObsoleteFiles.inc
===================================================================
--- projects/clang800-import/tools/build/mk/OptionalObsoleteFiles.inc (revision 343955)
+++ projects/clang800-import/tools/build/mk/OptionalObsoleteFiles.inc (revision 343956)
@@ -1,10250 +1,10253 @@
#
# $FreeBSD$
#
# This file adds support for the WITHOUT_* and WITH_* knobs in src.conf(5) to
# the check-old and delete-old* targets.
#
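# Each src.conf(5) option follows the same pattern: when MK_<OPTION> is
# "no", the files that option would have installed are appended to
# OLD_FILES (and their directories to OLD_DIRS), so "make check-old"
# reports them and "make delete-old" removes them.  A minimal sketch of
# the pattern (MK_FOO and its paths are hypothetical):
#
#	.if ${MK_FOO} == no
#	OLD_FILES+=usr/bin/foo
#	OLD_FILES+=usr/share/man/man1/foo.1.gz
#	OLD_DIRS+=usr/share/foo
#	.endif
#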
.if ${MK_ACCT} == no
OLD_FILES+=etc/rc.d/accounting
OLD_FILES+=etc/periodic/daily/310.accounting
OLD_FILES+=usr/sbin/accton
OLD_FILES+=usr/sbin/sa
OLD_FILES+=usr/share/man/man8/accton.8.gz
OLD_FILES+=usr/share/man/man8/sa.8.gz
OLD_FILES+=usr/tests/usr.sbin/sa/Kyuafile
OLD_FILES+=usr/tests/usr.sbin/sa/legacy_test
OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-sav.in
OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-sav.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-u.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-usr.in
OLD_FILES+=usr/tests/usr.sbin/sa/v1-amd64-usr.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-sav.in
OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-sav.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-u.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-usr.in
OLD_FILES+=usr/tests/usr.sbin/sa/v1-i386-usr.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-sav.in
OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-sav.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-u.out
OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-usr.in
OLD_FILES+=usr/tests/usr.sbin/sa/v1-sparc64-usr.out
OLD_FILES+=usr/tests/usr.sbin/sa/v2-amd64-sav.in
OLD_FILES+=usr/tests/usr.sbin/sa/v2-amd64-u.out
OLD_FILES+=usr/tests/usr.sbin/sa/v2-amd64-usr.in
OLD_FILES+=usr/tests/usr.sbin/sa/v2-i386-sav.in
OLD_FILES+=usr/tests/usr.sbin/sa/v2-i386-u.out
OLD_FILES+=usr/tests/usr.sbin/sa/v2-i386-usr.in
OLD_FILES+=usr/tests/usr.sbin/sa/v2-sparc64-sav.in
OLD_FILES+=usr/tests/usr.sbin/sa/v2-sparc64-u.out
OLD_FILES+=usr/tests/usr.sbin/sa/v2-sparc64-usr.in
OLD_DIRS+=usr/tests/usr.sbin/sa
.endif
.if ${MK_ACPI} == no
OLD_FILES+=etc/devd/asus.conf
OLD_FILES+=etc/rc.d/power_profile
OLD_FILES+=usr/sbin/acpiconf
OLD_FILES+=usr/sbin/acpidb
OLD_FILES+=usr/sbin/acpidump
OLD_FILES+=usr/sbin/iasl
OLD_FILES+=usr/share/man/man8/acpiconf.8.gz
OLD_FILES+=usr/share/man/man8/acpidb.8.gz
OLD_FILES+=usr/share/man/man8/acpidump.8.gz
OLD_FILES+=usr/share/man/man8/iasl.8.gz
.endif
.if ${MK_AMD} == no
OLD_FILES+=etc/amd.map
OLD_FILES+=etc/newsyslog.conf.d/amd.conf
OLD_FILES+=etc/rc.d/amd
OLD_FILES+=usr/bin/pawd
OLD_FILES+=usr/sbin/amd
OLD_FILES+=usr/sbin/amq
OLD_FILES+=usr/sbin/fixmount
OLD_FILES+=usr/sbin/fsinfo
OLD_FILES+=usr/sbin/hlfsd
OLD_FILES+=usr/sbin/mk-amd-map
OLD_FILES+=usr/sbin/wire-test
OLD_FILES+=usr/share/examples/etc/amd.map
OLD_FILES+=usr/share/man/man1/pawd.1.gz
OLD_FILES+=usr/share/man/man5/amd.conf.5.gz
OLD_FILES+=usr/share/man/man8/amd.8.gz
OLD_FILES+=usr/share/man/man8/amq.8.gz
OLD_FILES+=usr/share/man/man8/fixmount.8.gz
OLD_FILES+=usr/share/man/man8/fsinfo.8.gz
OLD_FILES+=usr/share/man/man8/hlfsd.8.gz
OLD_FILES+=usr/share/man/man8/mk-amd-map.8.gz
OLD_FILES+=usr/share/man/man8/wire-test.8.gz
.endif
.if ${MK_APM} == no
OLD_FILES+=etc/rc.d/apm
OLD_FILES+=etc/rc.d/apmd
OLD_FILES+=etc/apmd.conf
OLD_FILES+=usr/sbin/apm
OLD_FILES+=usr/share/examples/etc/apmd.conf
OLD_FILES+=usr/share/man/man8/amd64/apm.8.gz
OLD_FILES+=usr/share/man/man8/amd64/apmconf.8.gz
.endif
.if ${MK_AT} == no
OLD_FILES+=etc/pam.d/atrun
OLD_FILES+=usr/bin/at
OLD_FILES+=usr/bin/atq
OLD_FILES+=usr/bin/atrm
OLD_FILES+=usr/bin/batch
OLD_FILES+=usr/libexec/atrun
OLD_FILES+=usr/share/man/man1/at.1.gz
OLD_FILES+=usr/share/man/man1/atq.1.gz
OLD_FILES+=usr/share/man/man1/atrm.1.gz
OLD_FILES+=usr/share/man/man1/batch.1.gz
OLD_FILES+=usr/share/man/man8/atrun.8.gz
.endif
.if ${MK_ATM} == no
OLD_FILES+=usr/bin/sscop
OLD_FILES+=usr/include/netnatm/addr.h
OLD_FILES+=usr/include/netnatm/api/atmapi.h
OLD_FILES+=usr/include/netnatm/api/ccatm.h
OLD_FILES+=usr/include/netnatm/api/unisap.h
OLD_DIRS+=usr/include/netnatm/api
OLD_FILES+=usr/include/netnatm/msg/uni_config.h
OLD_FILES+=usr/include/netnatm/msg/uni_hdr.h
OLD_FILES+=usr/include/netnatm/msg/uni_ie.h
OLD_FILES+=usr/include/netnatm/msg/uni_msg.h
OLD_FILES+=usr/include/netnatm/msg/unimsglib.h
OLD_FILES+=usr/include/netnatm/msg/uniprint.h
OLD_FILES+=usr/include/netnatm/msg/unistruct.h
OLD_DIRS+=usr/include/netnatm/msg
OLD_FILES+=usr/include/netnatm/saal/sscfu.h
OLD_FILES+=usr/include/netnatm/saal/sscfudef.h
OLD_FILES+=usr/include/netnatm/saal/sscop.h
OLD_FILES+=usr/include/netnatm/saal/sscopdef.h
OLD_DIRS+=usr/include/netnatm/saal
OLD_FILES+=usr/include/netnatm/sig/uni.h
OLD_FILES+=usr/include/netnatm/sig/unidef.h
OLD_FILES+=usr/include/netnatm/sig/unisig.h
OLD_DIRS+=usr/include/netnatm/sig
OLD_FILES+=usr/include/netnatm/unimsg.h
OLD_FILES+=usr/lib/libngatm.a
OLD_FILES+=usr/lib/libngatm.so
OLD_LIBS+=usr/lib/libngatm.so.4
OLD_FILES+=usr/lib/libngatm_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libngatm.a
OLD_FILES+=usr/lib32/libngatm.so
OLD_LIBS+=usr/lib32/libngatm.so.4
OLD_FILES+=usr/lib32/libngatm_p.a
.endif
OLD_FILES+=usr/share/man/man1/sscop.1.gz
OLD_FILES+=usr/share/man/man3/libngatm.3.gz
OLD_FILES+=usr/share/man/man3/uniaddr.3.gz
OLD_FILES+=usr/share/man/man3/unifunc.3.gz
OLD_FILES+=usr/share/man/man3/unimsg.3.gz
OLD_FILES+=usr/share/man/man3/unisap.3.gz
OLD_FILES+=usr/share/man/man3/unistruct.3.gz
.endif
.if ${MK_AUDIT} == no
OLD_FILES+=etc/rc.d/auditd
OLD_FILES+=etc/rc.d/auditdistd
OLD_FILES+=usr/sbin/audit
OLD_FILES+=usr/sbin/auditd
OLD_FILES+=usr/sbin/auditdistd
OLD_FILES+=usr/sbin/auditreduce
OLD_FILES+=usr/sbin/praudit
OLD_FILES+=usr/share/man/man1/auditreduce.1.gz
OLD_FILES+=usr/share/man/man1/praudit.1.gz
OLD_FILES+=usr/share/man/man5/auditdistd.conf.5.gz
OLD_FILES+=usr/share/man/man8/audit.8.gz
OLD_FILES+=usr/share/man/man8/auditd.8.gz
OLD_FILES+=usr/share/man/man8/auditdistd.8.gz
OLD_FILES+=usr/tests/sys/audit/process-control
OLD_FILES+=usr/tests/sys/audit/open
OLD_FILES+=usr/tests/sys/audit/network
OLD_FILES+=usr/tests/sys/audit/miscellaneous
OLD_FILES+=usr/tests/sys/audit/Kyuafile
OLD_FILES+=usr/tests/sys/audit/ioctl
OLD_FILES+=usr/tests/sys/audit/inter-process
OLD_FILES+=usr/tests/sys/audit/file-write
OLD_FILES+=usr/tests/sys/audit/file-read
OLD_FILES+=usr/tests/sys/audit/file-delete
OLD_FILES+=usr/tests/sys/audit/file-create
OLD_FILES+=usr/tests/sys/audit/file-close
OLD_FILES+=usr/tests/sys/audit/file-attribute-modify
OLD_FILES+=usr/tests/sys/audit/file-attribute-access
OLD_FILES+=usr/tests/sys/audit/administrative
OLD_DIRS+=usr/tests/sys/audit
.endif
.if ${MK_AUTHPF} == no
OLD_FILES+=usr/sbin/authpf
OLD_FILES+=usr/sbin/authpf-noip
OLD_FILES+=usr/share/man/man8/authpf.8.gz
OLD_FILES+=usr/share/man/man8/authpf-noip.8.gz
.endif
.if ${MK_AUTOFS} == no
OLD_FILES+=etc/autofs/include_ldap
OLD_FILES+=etc/autofs/special_hosts
OLD_FILES+=etc/autofs/special_media
OLD_FILES+=etc/autofs/special_noauto
OLD_FILES+=etc/autofs/special_null
OLD_FILES+=etc/auto_master
OLD_FILES+=etc/rc.d/automount
OLD_FILES+=etc/rc.d/automountd
OLD_FILES+=etc/rc.d/autounmountd
OLD_FILES+=usr/sbin/automount
OLD_FILES+=usr/sbin/automountd
OLD_FILES+=usr/sbin/autounmountd
OLD_FILES+=usr/share/man/man5/autofs.5.gz
OLD_FILES+=usr/share/man/man5/auto_master.5.gz
OLD_FILES+=usr/share/man/man8/automount.8.gz
OLD_FILES+=usr/share/man/man8/automountd.8.gz
OLD_FILES+=usr/share/man/man8/autounmountd.8.gz
OLD_DIRS+=etc/autofs
.endif
.if ${MK_BHYVE} == no
OLD_FILES+=usr/lib/libvmmapi.a
OLD_FILES+=usr/lib/libvmmapi.so
OLD_LIBS+=usr/lib/libvmmapi.so.5
OLD_FILES+=usr/include/vmmapi.h
OLD_FILES+=usr/sbin/bhyve
OLD_FILES+=usr/sbin/bhyvectl
OLD_FILES+=usr/sbin/bhyveload
OLD_FILES+=usr/share/examples/bhyve/vmrun.sh
OLD_FILES+=usr/share/man/man8/bhyve.8.gz
OLD_FILES+=usr/share/man/man8/bhyveload.8.gz
OLD_DIRS+=usr/share/examples/bhyve
.endif
.if ${MK_BINUTILS} == no
OLD_FILES+=usr/bin/as
.if ${MK_LLD_IS_LD} == no
OLD_FILES+=usr/bin/ld
OLD_FILES+=usr/share/man/man1/ld.1.gz
.endif
OLD_FILES+=usr/bin/objdump
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/armelf_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/armelfb_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.x
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xbn
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xc
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xd
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xdc
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xdw
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xn
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xr
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xs
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xsc
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xsw
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xu
OLD_FILES+=usr/libdata/ldscripts/elf32_sparc.xw
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf32btsmip_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf32btsmipn32_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmip_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf32ltsmipn32_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf32ppc_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.x
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xbn
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xc
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xd
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xdc
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xdw
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xn
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xr
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xs
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xsc
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xsw
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xu
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc.xw
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf64_sparc_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf64btsmip_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf64ltsmip_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf64ppc_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf_i386_fbsd.xw
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.x
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xbn
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xc
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xd
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xdc
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xdw
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xn
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xr
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xs
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xsc
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xsw
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xu
OLD_FILES+=usr/libdata/ldscripts/elf_x86_64_fbsd.xw
OLD_FILES+=usr/share/man/man1/as.1.gz
OLD_FILES+=usr/share/man/man1/objdump.1.gz
OLD_FILES+=usr/share/man/man7/as.7.gz
OLD_FILES+=usr/share/man/man7/ld.7.gz
OLD_FILES+=usr/share/man/man7/ldint.7.gz
OLD_FILES+=usr/share/man/man7/binutils.7.gz
.endif
.if ${MK_BINUTILS} == no || ${MK_LLD_IS_LD} == yes
OLD_FILES+=usr/bin/ld.bfd
.endif
.if ${MK_BLACKLIST} == no
OLD_FILES+=etc/rc.d/blacklistd
OLD_FILES+=usr/include/blacklist.h
OLD_FILES+=usr/lib/libblacklist.a
OLD_FILES+=usr/lib/libblacklist_p.a
OLD_FILES+=usr/lib/libblacklist.so
OLD_LIBS+=usr/lib/libblacklist.so.0
OLD_FILES+=usr/libexec/blacklistd-helper
OLD_FILES+=usr/sbin/blacklistctl
OLD_FILES+=usr/sbin/blacklistd
OLD_FILES+=usr/share/man/man3/blacklist.3.gz
OLD_FILES+=usr/share/man/man3/blacklist_close.3.gz
OLD_FILES+=usr/share/man/man3/blacklist_open.3.gz
OLD_FILES+=usr/share/man/man3/blacklist_r.3.gz
OLD_FILES+=usr/share/man/man3/blacklist_sa.3.gz
OLD_FILES+=usr/share/man/man3/blacklist_sa_r.3.gz
OLD_FILES+=usr/share/man/man5/blacklistd.conf.5.gz
OLD_FILES+=usr/share/man/man8/blacklistctl.8.gz
OLD_FILES+=usr/share/man/man8/blacklistd.8.gz
.endif
.if ${MK_BLUETOOTH} == no
OLD_FILES+=etc/bluetooth/hcsecd.conf
OLD_FILES+=etc/bluetooth/hosts
OLD_FILES+=etc/bluetooth/protocols
OLD_FILES+=etc/defaults/bluetooth.device.conf
OLD_DIRS+=etc/bluetooth
OLD_FILES+=etc/rc.d/bluetooth
OLD_FILES+=etc/rc.d/bthidd
OLD_FILES+=etc/rc.d/hcsecd
OLD_FILES+=etc/rc.d/rfcomm_pppd_server
OLD_FILES+=etc/rc.d/sdpd
OLD_FILES+=etc/rc.d/ubthidhci
OLD_FILES+=usr/bin/bthost
OLD_FILES+=usr/bin/btsockstat
OLD_FILES+=usr/bin/rfcomm_sppd
OLD_FILES+=usr/include/bluetooth.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_bluetooth.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_bt3c.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_hci_raw.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_l2cap.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_rfcomm.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_btsocket_sco.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_h4.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_hci.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_l2cap.h
OLD_FILES+=usr/include/netgraph/bluetooth/include/ng_ubt.h
OLD_DIRS+=usr/include/netgraph/bluetooth/include
OLD_DIRS+=usr/include/netgraph/bluetooth
OLD_FILES+=usr/include/sdp.h
OLD_FILES+=usr/lib/libbluetooth.a
OLD_FILES+=usr/lib/libbluetooth.so
OLD_LIBS+=usr/lib/libbluetooth.so.4
OLD_FILES+=usr/lib/libbluetooth_p.a
OLD_FILES+=usr/lib/libsdp.a
OLD_FILES+=usr/lib/libsdp.so
OLD_LIBS+=usr/lib/libsdp.so.4
OLD_FILES+=usr/lib/libsdp_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libbluetooth.a
OLD_FILES+=usr/lib32/libbluetooth.so
OLD_LIBS+=usr/lib32/libbluetooth.so.4
OLD_FILES+=usr/lib32/libbluetooth_p.a
OLD_FILES+=usr/lib32/libsdp.a
OLD_FILES+=usr/lib32/libsdp.so
OLD_LIBS+=usr/lib32/libsdp.so.4
OLD_FILES+=usr/lib32/libsdp_p.a
.endif
OLD_FILES+=usr/sbin/ath3kfw
OLD_FILES+=usr/sbin/bcmfw
OLD_FILES+=usr/sbin/bluetooth-config
OLD_FILES+=usr/sbin/bt3cfw
OLD_FILES+=usr/sbin/bthidcontrol
OLD_FILES+=usr/sbin/bthidd
OLD_FILES+=usr/sbin/btpand
OLD_FILES+=usr/sbin/hccontrol
OLD_FILES+=usr/sbin/hcsecd
OLD_FILES+=usr/sbin/hcseriald
OLD_FILES+=usr/sbin/l2control
OLD_FILES+=usr/sbin/l2ping
OLD_FILES+=usr/sbin/rfcomm_pppd
OLD_FILES+=usr/sbin/sdpcontrol
OLD_FILES+=usr/sbin/sdpd
OLD_FILES+=usr/share/examples/etc/defaults/bluetooth.device.conf
OLD_FILES+=usr/share/man/man1/bthost.1.gz
OLD_FILES+=usr/share/man/man1/btsockstat.1.gz
OLD_FILES+=usr/share/man/man1/rfcomm_sppd.1.gz
OLD_FILES+=usr/share/man/man3/SDP_GET128.3.gz
OLD_FILES+=usr/share/man/man3/SDP_GET16.3.gz
OLD_FILES+=usr/share/man/man3/SDP_GET32.3.gz
OLD_FILES+=usr/share/man/man3/SDP_GET64.3.gz
OLD_FILES+=usr/share/man/man3/SDP_GET8.3.gz
OLD_FILES+=usr/share/man/man3/SDP_PUT128.3.gz
OLD_FILES+=usr/share/man/man3/SDP_PUT16.3.gz
OLD_FILES+=usr/share/man/man3/SDP_PUT32.3.gz
OLD_FILES+=usr/share/man/man3/SDP_PUT64.3.gz
OLD_FILES+=usr/share/man/man3/SDP_PUT8.3.gz
OLD_FILES+=usr/share/man/man3/bdaddr_any.3.gz
OLD_FILES+=usr/share/man/man3/bdaddr_copy.3.gz
OLD_FILES+=usr/share/man/man3/bdaddr_same.3.gz
OLD_FILES+=usr/share/man/man3/bluetooth.3.gz
OLD_FILES+=usr/share/man/man3/bt_aton.3.gz
OLD_FILES+=usr/share/man/man3/bt_devaddr.3.gz
OLD_FILES+=usr/share/man/man3/bt_devclose.3.gz
OLD_FILES+=usr/share/man/man3/bt_devenum.3.gz
OLD_FILES+=usr/share/man/man3/bt_devfilter.3.gz
OLD_FILES+=usr/share/man/man3/bt_devfilter_evt_clr.3.gz
OLD_FILES+=usr/share/man/man3/bt_devfilter_evt_set.3.gz
OLD_FILES+=usr/share/man/man3/bt_devfilter_evt_tst.3.gz
OLD_FILES+=usr/share/man/man3/bt_devfilter_pkt_clr.3.gz
OLD_FILES+=usr/share/man/man3/bt_devfilter_pkt_set.3.gz
OLD_FILES+=usr/share/man/man3/bt_devfilter_pkt_tst.3.gz
OLD_FILES+=usr/share/man/man3/bt_devinfo.3.gz
OLD_FILES+=usr/share/man/man3/bt_devinquiry.3.gz
OLD_FILES+=usr/share/man/man3/bt_devname.3.gz
OLD_FILES+=usr/share/man/man3/bt_devopen.3.gz
OLD_FILES+=usr/share/man/man3/bt_devreq.3.gz
OLD_FILES+=usr/share/man/man3/bt_devsend.3.gz
OLD_FILES+=usr/share/man/man3/bt_endhostent.3.gz
OLD_FILES+=usr/share/man/man3/bt_endprotoent.3.gz
OLD_FILES+=usr/share/man/man3/bt_gethostbyaddr.3.gz
OLD_FILES+=usr/share/man/man3/bt_gethostbyname.3.gz
OLD_FILES+=usr/share/man/man3/bt_gethostent.3.gz
OLD_FILES+=usr/share/man/man3/bt_getprotobyname.3.gz
OLD_FILES+=usr/share/man/man3/bt_getprotobynumber.3.gz
OLD_FILES+=usr/share/man/man3/bt_getprotoent.3.gz
OLD_FILES+=usr/share/man/man3/bt_ntoa.3.gz
OLD_FILES+=usr/share/man/man3/bt_sethostent.3.gz
OLD_FILES+=usr/share/man/man3/bt_setprotoent.3.gz
OLD_FILES+=usr/share/man/man3/sdp.3.gz
OLD_FILES+=usr/share/man/man3/sdp_attr2desc.3.gz
OLD_FILES+=usr/share/man/man3/sdp_change_service.3.gz
OLD_FILES+=usr/share/man/man3/sdp_close.3.gz
OLD_FILES+=usr/share/man/man3/sdp_error.3.gz
OLD_FILES+=usr/share/man/man3/sdp_open.3.gz
OLD_FILES+=usr/share/man/man3/sdp_open_local.3.gz
OLD_FILES+=usr/share/man/man3/sdp_register_service.3.gz
OLD_FILES+=usr/share/man/man3/sdp_search.3.gz
OLD_FILES+=usr/share/man/man3/sdp_unregister_service.3.gz
OLD_FILES+=usr/share/man/man3/sdp_uuid2desc.3.gz
OLD_FILES+=usr/share/man/man4/ng_bluetooth.4.gz
OLD_FILES+=usr/share/man/man5/bluetooth.device.conf.5.gz
OLD_FILES+=usr/share/man/man5/bluetooth.hosts.5.gz
OLD_FILES+=usr/share/man/man5/bluetooth.protocols.5.gz
OLD_FILES+=usr/share/man/man5/hcsecd.conf.5.gz
OLD_FILES+=usr/share/man/man8/ath3kfw.8.gz
OLD_FILES+=usr/share/man/man8/bcmfw.8.gz
OLD_FILES+=usr/share/man/man8/bluetooth-config.8.gz
OLD_FILES+=usr/share/man/man8/bt3cfw.8.gz
OLD_FILES+=usr/share/man/man8/bthidcontrol.8.gz
OLD_FILES+=usr/share/man/man8/bthidd.8.gz
OLD_FILES+=usr/share/man/man8/btpand.8.gz
OLD_FILES+=usr/share/man/man8/hccontrol.8.gz
OLD_FILES+=usr/share/man/man8/hcsecd.8.gz
OLD_FILES+=usr/share/man/man8/hcseriald.8.gz
OLD_FILES+=usr/share/man/man8/l2control.8.gz
OLD_FILES+=usr/share/man/man8/l2ping.8.gz
OLD_FILES+=usr/share/man/man8/rfcomm_pppd.8.gz
OLD_FILES+=usr/share/man/man8/sdpcontrol.8.gz
OLD_FILES+=usr/share/man/man8/sdpd.8.gz
.endif
.if ${MK_BOOT} == no
OLD_FILES+=boot/beastie.4th
OLD_FILES+=boot/boot
OLD_FILES+=boot/boot0
OLD_FILES+=boot/boot0sio
OLD_FILES+=boot/boot1
OLD_FILES+=boot/boot1.efi
OLD_FILES+=boot/boot1.efifat
OLD_FILES+=boot/boot2
OLD_FILES+=boot/brand.4th
OLD_FILES+=boot/cdboot
OLD_FILES+=boot/check-password.4th
OLD_FILES+=boot/color.4th
OLD_FILES+=boot/defaults/loader.conf
OLD_FILES+=boot/delay.4th
OLD_FILES+=boot/device.hints
OLD_FILES+=boot/frames.4th
OLD_FILES+=boot/gptboot
OLD_FILES+=boot/gptzfsboot
OLD_FILES+=boot/loader
OLD_FILES+=boot/loader.4th
OLD_FILES+=boot/loader.efi
OLD_FILES+=boot/loader.help
OLD_FILES+=boot/loader.rc
OLD_FILES+=boot/mbr
OLD_FILES+=boot/menu-commands.4th
OLD_FILES+=boot/menu.4th
OLD_FILES+=boot/menu.rc
OLD_FILES+=boot/menusets.4th
OLD_FILES+=boot/pcibios.4th
OLD_FILES+=boot/pmbr
OLD_FILES+=boot/pxeboot
OLD_FILES+=boot/screen.4th
OLD_FILES+=boot/shortcuts.4th
OLD_FILES+=boot/support.4th
OLD_FILES+=boot/userboot.so
OLD_FILES+=boot/version.4th
OLD_FILES+=boot/zfsboot
OLD_FILES+=boot/zfsloader
OLD_FILES+=usr/lib/kgzldr.o
OLD_FILES+=usr/share/man/man5/loader.conf.5.gz
OLD_FILES+=usr/share/man/man8/beastie.4th.8.gz
OLD_FILES+=usr/share/man/man8/brand.4th.8.gz
OLD_FILES+=usr/share/man/man8/check-password.4th.8.gz
OLD_FILES+=usr/share/man/man8/color.4th.8.gz
OLD_FILES+=usr/share/man/man8/delay.4th.8.gz
OLD_FILES+=usr/share/man/man8/gptboot.8.gz
OLD_FILES+=usr/share/man/man8/gptzfsboot.8.gz
OLD_FILES+=usr/share/man/man8/loader.4th.8.gz
OLD_FILES+=usr/share/man/man8/loader.8.gz
OLD_FILES+=usr/share/man/man8/menu.4th.8.gz
OLD_FILES+=usr/share/man/man8/menusets.4th.8.gz
OLD_FILES+=usr/share/man/man8/pxeboot.8.gz
OLD_FILES+=usr/share/man/man8/version.4th.8.gz
OLD_FILES+=usr/share/man/man8/zfsboot.8.gz
OLD_FILES+=usr/share/man/man8/zfsloader.8.gz
.endif
.if ${MK_BOOTPARAMD} == no
OLD_FILES+=usr/sbin/bootparamd
OLD_FILES+=usr/share/man/man5/bootparams.5.gz
OLD_FILES+=usr/share/man/man8/bootparamd.8.gz
OLD_FILES+=usr/sbin/callbootd
.endif
.if ${MK_BOOTPD} == no
OLD_FILES+=usr/libexec/bootpd
OLD_FILES+=usr/share/man/man5/bootptab.5.gz
OLD_FILES+=usr/share/man/man8/bootpd.8.gz
OLD_FILES+=usr/libexec/bootpgw
OLD_FILES+=usr/sbin/bootpef
OLD_FILES+=usr/share/man/man8/bootpef.8.gz
OLD_FILES+=usr/sbin/bootptest
OLD_FILES+=usr/share/man/man8/bootptest.8.gz
.endif
.if ${MK_BSD_CPIO} == no
OLD_FILES+=usr/bin/bsdcpio
OLD_FILES+=usr/bin/cpio
OLD_FILES+=usr/share/man/man1/bsdcpio.1.gz
OLD_FILES+=usr/share/man/man1/cpio.1.gz
.endif
.if ${MK_BSDINSTALL} == no
OLD_FILES+=usr/libexec/bsdinstall/adduser
OLD_FILES+=usr/libexec/bsdinstall/auto
OLD_FILES+=usr/libexec/bsdinstall/autopart
OLD_FILES+=usr/libexec/bsdinstall/bootconfig
OLD_FILES+=usr/libexec/bsdinstall/checksum
OLD_FILES+=usr/libexec/bsdinstall/config
OLD_FILES+=usr/libexec/bsdinstall/distextract
OLD_FILES+=usr/libexec/bsdinstall/distfetch
OLD_FILES+=usr/libexec/bsdinstall/docsinstall
OLD_FILES+=usr/libexec/bsdinstall/entropy
OLD_FILES+=usr/libexec/bsdinstall/hardening
OLD_FILES+=usr/libexec/bsdinstall/hostname
OLD_FILES+=usr/libexec/bsdinstall/jail
OLD_FILES+=usr/libexec/bsdinstall/keymap
OLD_FILES+=usr/libexec/bsdinstall/mirrorselect
OLD_FILES+=usr/libexec/bsdinstall/mount
OLD_FILES+=usr/libexec/bsdinstall/netconfig
OLD_FILES+=usr/libexec/bsdinstall/netconfig_ipv4
OLD_FILES+=usr/libexec/bsdinstall/netconfig_ipv6
OLD_FILES+=usr/libexec/bsdinstall/partedit
OLD_FILES+=usr/libexec/bsdinstall/rootpass
OLD_FILES+=usr/libexec/bsdinstall/script
OLD_FILES+=usr/libexec/bsdinstall/scriptedpart
OLD_FILES+=usr/libexec/bsdinstall/services
OLD_FILES+=usr/libexec/bsdinstall/time
OLD_FILES+=usr/libexec/bsdinstall/umount
OLD_FILES+=usr/libexec/bsdinstall/wlanconfig
OLD_FILES+=usr/libexec/bsdinstall/zfsboot
OLD_FILES+=usr/sbin/bsdinstall
OLD_FILES+=usr/share/man/man8/bsdinstall.8.gz
OLD_FILES+=usr/share/man/man8/sade.8.gz
OLD_DIRS+=usr/libexec/bsdinstall
.endif
.if ${MK_BSNMP} == no
OLD_FILES+=etc/snmpd.config
OLD_FILES+=etc/rc.d/bsnmpd
OLD_FILES+=usr/bin/bsnmpget
OLD_FILES+=usr/bin/bsnmpset
OLD_FILES+=usr/bin/bsnmpwalk
OLD_FILES+=usr/include/bsnmp/asn1.h
OLD_FILES+=usr/include/bsnmp/bridge_snmp.h
OLD_FILES+=usr/include/bsnmp/snmp.h
OLD_FILES+=usr/include/bsnmp/snmp_mibII.h
OLD_FILES+=usr/include/bsnmp/snmp_netgraph.h
OLD_FILES+=usr/include/bsnmp/snmpagent.h
OLD_FILES+=usr/include/bsnmp/snmpclient.h
OLD_FILES+=usr/include/bsnmp/snmpmod.h
OLD_FILES+=usr/lib/libbsnmp.a
OLD_FILES+=usr/lib/libbsnmp.so
OLD_LIBS+=usr/lib/libbsnmp.so.6
OLD_FILES+=usr/lib/libbsnmp_p.a
OLD_FILES+=usr/lib/libbsnmptools.a
OLD_FILES+=usr/lib/libbsnmptools.so
OLD_LIBS+=usr/lib/libbsnmptools.so.0
OLD_FILES+=usr/lib/libbsnmptools_p.a
OLD_FILES+=usr/lib/snmp_bridge.so
OLD_LIBS+=usr/lib/snmp_bridge.so.6
OLD_FILES+=usr/lib/snmp_hast.so
OLD_LIBS+=usr/lib/snmp_hast.so.6
OLD_FILES+=usr/lib/snmp_hostres.so
OLD_LIBS+=usr/lib/snmp_hostres.so.6
OLD_FILES+=usr/lib/snmp_lm75.so
OLD_LIBS+=usr/lib/snmp_lm75.so.6
OLD_FILES+=usr/lib/snmp_mibII.so
OLD_LIBS+=usr/lib/snmp_mibII.so.6
OLD_FILES+=usr/lib/snmp_netgraph.so
OLD_LIBS+=usr/lib/snmp_netgraph.so.6
OLD_FILES+=usr/lib/snmp_pf.so
OLD_LIBS+=usr/lib/snmp_pf.so.6
OLD_FILES+=usr/lib/snmp_target.so
OLD_LIBS+=usr/lib/snmp_target.so.6
OLD_FILES+=usr/lib/snmp_usm.so
OLD_LIBS+=usr/lib/snmp_usm.so.6
OLD_FILES+=usr/lib/snmp_vacm.so
OLD_LIBS+=usr/lib/snmp_vacm.so.6
OLD_FILES+=usr/lib/snmp_wlan.so
OLD_LIBS+=usr/lib/snmp_wlan.so.6
OLD_FILES+=usr/lib32/libbsnmp.a
OLD_FILES+=usr/lib32/libbsnmp.so
OLD_LIBS+=usr/lib32/libbsnmp.so.6
OLD_FILES+=usr/lib32/libbsnmp_p.a
OLD_FILES+=usr/sbin/bsnmpd
OLD_FILES+=usr/sbin/gensnmptree
OLD_FILES+=usr/share/examples/etc/snmpd.config
OLD_FILES+=usr/share/man/man1/bsnmpd.1.gz
OLD_FILES+=usr/share/man/man1/bsnmpget.1.gz
OLD_FILES+=usr/share/man/man1/bsnmpset.1.gz
OLD_FILES+=usr/share/man/man1/bsnmpwalk.1.gz
OLD_FILES+=usr/share/man/man1/gensnmptree.1.gz
# lib/libbsnmp/libbsnmp
OLD_FILES+=usr/share/man/man3/TRUTH_GET.3.gz
OLD_FILES+=usr/share/man/man3/TRUTH_MK.3.gz
OLD_FILES+=usr/share/man/man3/TRUTH_OK.3.gz
OLD_FILES+=usr/share/man/man3/asn1.3.gz
OLD_FILES+=usr/share/man/man3/asn_append_oid.3.gz
OLD_FILES+=usr/share/man/man3/asn_commit_header.3.gz
OLD_FILES+=usr/share/man/man3/asn_compare_oid.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_counter64_raw.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_header.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_integer.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_integer_raw.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_ipaddress.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_ipaddress_raw.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_null.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_null_raw.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_objid.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_objid_raw.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_octetstring.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_octetstring_raw.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_sequence.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_timeticks.3.gz
OLD_FILES+=usr/share/man/man3/asn_get_uint32_raw.3.gz
OLD_FILES+=usr/share/man/man3/asn_is_suboid.3.gz
OLD_FILES+=usr/share/man/man3/asn_oid2str.3.gz
OLD_FILES+=usr/share/man/man3/asn_oid2str_r.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_counter64.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_exception.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_header.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_integer.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_ipaddress.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_null.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_objid.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_octetstring.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_temp_header.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_timeticks.3.gz
OLD_FILES+=usr/share/man/man3/asn_put_uint32.3.gz
OLD_FILES+=usr/share/man/man3/asn_skip.3.gz
OLD_FILES+=usr/share/man/man3/asn_slice_oid.3.gz
OLD_FILES+=usr/share/man/man3/snmp_add_binding.3.gz
OLD_FILES+=usr/share/man/man3/snmp_calc_keychange.3.gz
OLD_FILES+=usr/share/man/man3/snmp_client.3.gz
OLD_FILES+=usr/share/man/man3/snmp_client_init.3.gz
OLD_FILES+=usr/share/man/man3/snmp_client_set_host.3.gz
OLD_FILES+=usr/share/man/man3/snmp_client_set_port.3.gz
OLD_FILES+=usr/share/man/man3/snmp_close.3.gz
OLD_FILES+=usr/share/man/man3/snmp_debug.3.gz
OLD_FILES+=usr/share/man/man3/snmp_dep_commit.3.gz
OLD_FILES+=usr/share/man/man3/snmp_dep_finish.3.gz
OLD_FILES+=usr/share/man/man3/snmp_dep_lookup.3.gz
OLD_FILES+=usr/share/man/man3/snmp_dep_rollback.3.gz
OLD_FILES+=usr/share/man/man3/snmp_depop_t.3.gz
OLD_FILES+=usr/share/man/man3/snmp_dialog.3.gz
OLD_FILES+=usr/share/man/man3/snmp_discover_engine.3.gz
OLD_FILES+=usr/share/man/man3/snmp_get.3.gz
OLD_FILES+=usr/share/man/man3/snmp_get_local_keys.3.gz
OLD_FILES+=usr/share/man/man3/snmp_getbulk.3.gz
OLD_FILES+=usr/share/man/man3/snmp_getnext.3.gz
OLD_FILES+=usr/share/man/man3/snmp_init_context.3.gz
OLD_FILES+=usr/share/man/man3/snmp_make_errresp.3.gz
OLD_FILES+=usr/share/man/man3/snmp_oid_append.3.gz
OLD_FILES+=usr/share/man/man3/snmp_op_t.3.gz
OLD_FILES+=usr/share/man/man3/snmp_open.3.gz
OLD_FILES+=usr/share/man/man3/snmp_parse_server.3.gz
OLD_FILES+=usr/share/man/man3/snmp_passwd_to_keys.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_check.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_create.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_decode.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_decode_header.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_decode_scoped.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_decode_secmode.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_dump.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_encode.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_free.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_init_secparams.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_send.3.gz
OLD_FILES+=usr/share/man/man3/snmp_receive.3.gz
OLD_FILES+=usr/share/man/man3/snmp_send_cb_f.3.gz
OLD_FILES+=usr/share/man/man3/snmp_set.3.gz
OLD_FILES+=usr/share/man/man3/snmp_table_cb_f.3.gz
OLD_FILES+=usr/share/man/man3/snmp_table_fetch.3.gz
OLD_FILES+=usr/share/man/man3/snmp_table_fetch_async.3.gz
OLD_FILES+=usr/share/man/man3/snmp_timeout_cb_f.3.gz
OLD_FILES+=usr/share/man/man3/snmp_timeout_start_f.3.gz
OLD_FILES+=usr/share/man/man3/snmp_timeout_stop_f.3.gz
OLD_FILES+=usr/share/man/man3/snmp_trace.3.gz
OLD_FILES+=usr/share/man/man3/snmp_value_copy.3.gz
OLD_FILES+=usr/share/man/man3/snmp_value_free.3.gz
OLD_FILES+=usr/share/man/man3/snmp_value_parse.3.gz
OLD_FILES+=usr/share/man/man3/tree_size.3.gz
# usr.sbin/bsnmpd/bsnmpd
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_INT.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_INT_LINK.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_INT_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_OID.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_OID_LINK.3.gz
OLD_FILES+=usr/share/man/man3/FIND_OBJECT_OID_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_INT.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_INT_LINK.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_INT_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_OID.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_OID_LINK.3.gz
OLD_FILES+=usr/share/man/man3/INSERT_OBJECT_OID_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_INT.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_INT_LINK.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_INT_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_OID.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_OID_LINK.3.gz
OLD_FILES+=usr/share/man/man3/NEXT_OBJECT_OID_LINK_INDEX.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpagent.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpclient.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpd_get_target_stats.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpd_get_usm_stats.3.gz
OLD_FILES+=usr/share/man/man3/bsnmpd_reset_usm_stats.3.gz
OLD_FILES+=usr/share/man/man3/bsnmplib.3.gz
OLD_FILES+=usr/share/man/man3/buf_alloc.3.gz
OLD_FILES+=usr/share/man/man3/buf_size.3.gz
OLD_FILES+=usr/share/man/man3/comm_define.3.gz
OLD_FILES+=usr/share/man/man3/community.3.gz
OLD_FILES+=usr/share/man/man3/fd_deselect.3.gz
OLD_FILES+=usr/share/man/man3/fd_resume.3.gz
OLD_FILES+=usr/share/man/man3/fd_select.3.gz
OLD_FILES+=usr/share/man/man3/fd_suspend.3.gz
OLD_FILES+=usr/share/man/man3/get_ticks.3.gz
OLD_FILES+=usr/share/man/man3/index_append.3.gz
OLD_FILES+=usr/share/man/man3/index_append_off.3.gz
OLD_FILES+=usr/share/man/man3/index_compare.3.gz
OLD_FILES+=usr/share/man/man3/index_compare_off.3.gz
OLD_FILES+=usr/share/man/man3/index_decode.3.gz
OLD_FILES+=usr/share/man/man3/ip_commit.3.gz
OLD_FILES+=usr/share/man/man3/ip_get.3.gz
OLD_FILES+=usr/share/man/man3/ip_rollback.3.gz
OLD_FILES+=usr/share/man/man3/ip_save.3.gz
OLD_FILES+=usr/share/man/man3/oid_commit.3.gz
OLD_FILES+=usr/share/man/man3/oid_get.3.gz
OLD_FILES+=usr/share/man/man3/oid_rollback.3.gz
OLD_FILES+=usr/share/man/man3/oid_save.3.gz
OLD_FILES+=usr/share/man/man3/oid_usmNotInTimeWindows.3.gz
OLD_FILES+=usr/share/man/man3/oid_usmUnknownEngineIDs.3.gz
OLD_FILES+=usr/share/man/man3/oid_zeroDotZero.3.gz
OLD_FILES+=usr/share/man/man3/or_register.3.gz
OLD_FILES+=usr/share/man/man3/or_unregister.3.gz
OLD_FILES+=usr/share/man/man3/reqid_allocate.3.gz
OLD_FILES+=usr/share/man/man3/reqid_base.3.gz
OLD_FILES+=usr/share/man/man3/reqid_istype.3.gz
OLD_FILES+=usr/share/man/man3/reqid_next.3.gz
OLD_FILES+=usr/share/man/man3/reqid_type.3.gz
OLD_FILES+=usr/share/man/man3/snmp_bridge.3.gz
OLD_FILES+=usr/share/man/man3/snmp_hast.3.gz
OLD_FILES+=usr/share/man/man3/snmp_hostres.3.gz
OLD_FILES+=usr/share/man/man3/snmp_input_finish.3.gz
OLD_FILES+=usr/share/man/man3/snmp_input_start.3.gz
OLD_FILES+=usr/share/man/man3/snmp_lm75.3.gz
OLD_FILES+=usr/share/man/man3/snmp_mibII.3.gz
OLD_FILES+=usr/share/man/man3/snmp_netgraph.3.gz
OLD_FILES+=usr/share/man/man3/snmp_output.3.gz
OLD_FILES+=usr/share/man/man3/snmp_pdu_auth_access.3.gz
OLD_FILES+=usr/share/man/man3/snmp_send_port.3.gz
OLD_FILES+=usr/share/man/man3/snmp_send_trap.3.gz
OLD_FILES+=usr/share/man/man3/snmp_target.3.gz
OLD_FILES+=usr/share/man/man3/snmp_usm.3.gz
OLD_FILES+=usr/share/man/man3/snmp_vacm.3.gz
OLD_FILES+=usr/share/man/man3/snmp_wlan.3.gz
OLD_FILES+=usr/share/man/man3/snmpd_target_stat.3.gz
OLD_FILES+=usr/share/man/man3/snmpd_usmstats.3.gz
OLD_FILES+=usr/share/man/man3/snmpmod.3.gz
OLD_FILES+=usr/share/man/man3/start_tick.3.gz
OLD_FILES+=usr/share/man/man3/string_commit.3.gz
OLD_FILES+=usr/share/man/man3/string_free.3.gz
OLD_FILES+=usr/share/man/man3/string_get.3.gz
OLD_FILES+=usr/share/man/man3/string_get_max.3.gz
OLD_FILES+=usr/share/man/man3/string_rollback.3.gz
OLD_FILES+=usr/share/man/man3/string_save.3.gz
OLD_FILES+=usr/share/man/man3/systemg.3.gz
OLD_FILES+=usr/share/man/man3/target_activate_address.3.gz
OLD_FILES+=usr/share/man/man3/target_address.3.gz
OLD_FILES+=usr/share/man/man3/target_delete_address.3.gz
OLD_FILES+=usr/share/man/man3/target_delete_notify.3.gz
OLD_FILES+=usr/share/man/man3/target_delete_param.3.gz
OLD_FILES+=usr/share/man/man3/target_first_address.3.gz
OLD_FILES+=usr/share/man/man3/target_first_notify.3.gz
OLD_FILES+=usr/share/man/man3/target_first_param.3.gz
OLD_FILES+=usr/share/man/man3/target_flush_all.3.gz
OLD_FILES+=usr/share/man/man3/target_new_address.3.gz
OLD_FILES+=usr/share/man/man3/target_new_notify.3.gz
OLD_FILES+=usr/share/man/man3/target_new_param.3.gz
OLD_FILES+=usr/share/man/man3/target_next_address.3.gz
OLD_FILES+=usr/share/man/man3/target_next_notify.3.gz
OLD_FILES+=usr/share/man/man3/target_next_param.3.gz
OLD_FILES+=usr/share/man/man3/target_notify.3.gz
OLD_FILES+=usr/share/man/man3/target_param.3.gz
OLD_FILES+=usr/share/man/man3/this_tick.3.gz
OLD_FILES+=usr/share/man/man3/timer_start.3.gz
OLD_FILES+=usr/share/man/man3/timer_start_repeat.3.gz
OLD_FILES+=usr/share/man/man3/timer_stop.3.gz
OLD_FILES+=usr/share/man/man3/usm_delete_user.3.gz
OLD_FILES+=usr/share/man/man3/usm_find_user.3.gz
OLD_FILES+=usr/share/man/man3/usm_first_user.3.gz
OLD_FILES+=usr/share/man/man3/usm_flush_users.3.gz
OLD_FILES+=usr/share/man/man3/usm_next_user.3.gz
OLD_FILES+=usr/share/man/man3/usm_new_user.3.gz
OLD_FILES+=usr/share/man/man3/usm_user.3.gz
OLD_FILES+=usr/share/snmp/defs/bridge_tree.def
OLD_FILES+=usr/share/snmp/defs/hast_tree.def
OLD_FILES+=usr/share/snmp/defs/hostres_tree.def
OLD_FILES+=usr/share/snmp/defs/lm75_tree.def
OLD_FILES+=usr/share/snmp/defs/mibII_tree.def
OLD_FILES+=usr/share/snmp/defs/netgraph_tree.def
OLD_FILES+=usr/share/snmp/defs/pf_tree.def
OLD_FILES+=usr/share/snmp/defs/target_tree.def
OLD_FILES+=usr/share/snmp/defs/tree.def
OLD_FILES+=usr/share/snmp/defs/usm_tree.def
OLD_FILES+=usr/share/snmp/defs/vacm_tree.def
OLD_FILES+=usr/share/snmp/defs/wlan_tree.def
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-ATM-FREEBSD-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-ATM.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-BRIDGE-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-HAST-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-HOSTRES-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-IP-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-LM75-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-MIB2-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-NETGRAPH.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-PF-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-SNMPD.txt
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-WIRELESS-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/BRIDGE-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/FOKUS-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/FREEBSD-MIB.txt
OLD_FILES+=usr/share/snmp/mibs/RSTP-MIB.txt
OLD_DIRS+=usr/include/bsnmp
OLD_DIRS+=usr/share/snmp
OLD_DIRS+=usr/share/snmp/defs
OLD_DIRS+=usr/share/snmp/mibs
.endif
.if ${MK_CALENDAR} == no
OLD_FILES+=etc/periodic/daily/300.calendar
OLD_FILES+=usr/bin/calendar
OLD_FILES+=usr/share/calendar/calendar.all
OLD_FILES+=usr/share/calendar/calendar.australia
OLD_FILES+=usr/share/calendar/calendar.birthday
OLD_FILES+=usr/share/calendar/calendar.brazilian
OLD_FILES+=usr/share/calendar/calendar.christian
OLD_FILES+=usr/share/calendar/calendar.computer
OLD_FILES+=usr/share/calendar/calendar.croatian
OLD_FILES+=usr/share/calendar/calendar.dutch
OLD_FILES+=usr/share/calendar/calendar.freebsd
OLD_FILES+=usr/share/calendar/calendar.french
OLD_FILES+=usr/share/calendar/calendar.german
OLD_FILES+=usr/share/calendar/calendar.history
OLD_FILES+=usr/share/calendar/calendar.holiday
OLD_FILES+=usr/share/calendar/calendar.hungarian
OLD_FILES+=usr/share/calendar/calendar.judaic
OLD_FILES+=usr/share/calendar/calendar.lotr
OLD_FILES+=usr/share/calendar/calendar.music
OLD_FILES+=usr/share/calendar/calendar.newzealand
OLD_FILES+=usr/share/calendar/calendar.russian
OLD_FILES+=usr/share/calendar/calendar.southafrica
OLD_FILES+=usr/share/calendar/calendar.ukrainian
OLD_FILES+=usr/share/calendar/calendar.usholiday
OLD_FILES+=usr/share/calendar/calendar.world
OLD_FILES+=usr/share/calendar/de_AT.ISO_8859-15/calendar.feiertag
OLD_DIRS+=usr/share/calendar/de_AT.ISO_8859-15
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.all
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.feiertag
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.geschichte
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.kirche
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.literatur
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.musik
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-1/calendar.wissenschaft
OLD_DIRS+=usr/share/calendar/de_DE.ISO8859-1
OLD_FILES+=usr/share/calendar/de_DE.ISO8859-15
OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.all
OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.fetes
OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.french
OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.jferies
OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-1/calendar.proverbes
OLD_DIRS+=usr/share/calendar/fr_FR.ISO8859-1
OLD_FILES+=usr/share/calendar/fr_FR.ISO8859-15
OLD_FILES+=usr/share/calendar/hr_HR.ISO8859-2/calendar.all
OLD_FILES+=usr/share/calendar/hr_HR.ISO8859-2/calendar.praznici
OLD_DIRS+=usr/share/calendar/hr_HR.ISO8859-2
OLD_FILES+=usr/share/calendar/hu_HU.ISO8859-2/calendar.all
OLD_FILES+=usr/share/calendar/hu_HU.ISO8859-2/calendar.nevnapok
OLD_FILES+=usr/share/calendar/hu_HU.ISO8859-2/calendar.unnepek
OLD_DIRS+=usr/share/calendar/hu_HU.ISO8859-2
OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.all
OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.commemorative
OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.holidays
OLD_FILES+=usr/share/calendar/pt_BR.ISO8859-1/calendar.mcommemorative
OLD_DIRS+=usr/share/calendar/pt_BR.ISO8859-1
OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.all
OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.commemorative
OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.holidays
OLD_FILES+=usr/share/calendar/pt_BR.UTF-8/calendar.mcommemorative
OLD_DIRS+=usr/share/calendar/pt_BR.UTF-8
OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.all
OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.common
OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.holiday
OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.military
OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.orthodox
OLD_FILES+=usr/share/calendar/ru_RU.KOI8-R/calendar.pagan
OLD_DIRS+=usr/share/calendar/ru_RU.KOI8-R
OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.all
OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.common
OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.holiday
OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.military
OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.orthodox
OLD_FILES+=usr/share/calendar/ru_RU.UTF-8/calendar.pagan
OLD_DIRS+=usr/share/calendar/ru_RU.UTF-8
OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.all
OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.holiday
OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.misc
OLD_FILES+=usr/share/calendar/uk_UA.KOI8-U/calendar.orthodox
OLD_DIRS+=usr/share/calendar/uk_UA.KOI8-U
OLD_DIRS+=usr/share/calendar
OLD_FILES+=usr/share/man/man1/calendar.1.gz
OLD_FILES+=usr/tests/usr.bin/calendar/Kyuafile
OLD_FILES+=usr/tests/usr.bin/calendar/calendar.calibrate
OLD_FILES+=usr/tests/usr.bin/calendar/legacy_test
OLD_FILES+=usr/tests/usr.bin/calendar/regress.a1.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.a2.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.a3.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.a4.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.a5.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.b1.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.b2.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.b3.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.b4.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.b5.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.s1.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.s2.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.s3.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.s4.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.sh
OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-1.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-2.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-3.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-4.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-5.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-6.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.w0-7.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-1.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-2.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-3.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-4.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-5.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-6.out
OLD_FILES+=usr/tests/usr.bin/calendar/regress.wn-7.out
OLD_DIRS+=usr/tests/usr.bin/calendar
.endif
.if ${MK_CASPER} == no
OLD_FILES+=etc/casper/system.dns
OLD_FILES+=etc/casper/system.grp
OLD_FILES+=etc/casper/system.pwd
OLD_FILES+=etc/casper/system.random
OLD_FILES+=etc/casper/system.sysctl
OLD_FILES+=etc/rc.d/casperd
OLD_LIBS+=lib/libcapsicum.so.0
OLD_LIBS+=lib/libcasper.so.0
OLD_FILES+=libexec/casper/dns
OLD_FILES+=libexec/casper/grp
OLD_FILES+=libexec/casper/pwd
OLD_FILES+=libexec/casper/random
OLD_FILES+=libexec/casper/sysctl
OLD_FILES+=sbin/casper
OLD_FILES+=sbin/casperd
OLD_FILES+=usr/include/libcapsicum.h
OLD_FILES+=usr/include/libcapsicum_dns.h
OLD_FILES+=usr/include/libcapsicum_grp.h
OLD_FILES+=usr/include/libcapsicum_pwd.h
OLD_FILES+=usr/include/libcapsicum_random.h
OLD_FILES+=usr/include/libcapsicum_service.h
OLD_FILES+=usr/include/libcapsicum_sysctl.h
OLD_FILES+=usr/include/libcasper.h
OLD_FILES+=usr/lib/libcapsicum.a
OLD_FILES+=usr/lib/libcapsicum.so
OLD_FILES+=usr/lib/libcapsicum_p.a
OLD_FILES+=usr/lib/libcasper.a
OLD_FILES+=usr/lib/libcasper.so
OLD_FILES+=usr/lib/libcasper_p.a
OLD_FILES+=usr/lib32/libcapsicum.a
OLD_FILES+=usr/lib32/libcapsicum.so
OLD_LIBS+=usr/lib32/libcapsicum.so.0
OLD_FILES+=usr/lib32/libcapsicum_p.a
OLD_FILES+=usr/lib32/libcasper.a
OLD_FILES+=usr/lib32/libcasper.so
OLD_LIBS+=usr/lib32/libcasper.so.0
OLD_FILES+=usr/lib32/libcasper_p.a
OLD_FILES+=usr/share/man/man3/cap_clone.3.gz
OLD_FILES+=usr/share/man/man3/cap_close.3.gz
OLD_FILES+=usr/share/man/man3/cap_init.3.gz
OLD_FILES+=usr/share/man/man3/cap_limit_get.3.gz
OLD_FILES+=usr/share/man/man3/cap_limit_set.3.gz
OLD_FILES+=usr/share/man/man3/cap_recv_nvlist.3.gz
OLD_FILES+=usr/share/man/man3/cap_send_nvlist.3.gz
OLD_FILES+=usr/share/man/man3/cap_service_open.3.gz
OLD_FILES+=usr/share/man/man3/cap_sock.3.gz
OLD_FILES+=usr/share/man/man3/cap_unwrap.3.gz
OLD_FILES+=usr/share/man/man3/cap_wrap.3.gz
OLD_FILES+=usr/share/man/man3/cap_xfer_nvlist.3.gz
OLD_FILES+=usr/share/man/man3/libcapsicum.3.gz
OLD_FILES+=usr/share/man/man8/casperd.8.gz
.endif
.if ${MK_CCD} == no
OLD_FILES+=etc/rc.d/ccd
OLD_FILES+=rescue/ccdconfig
OLD_FILES+=sbin/ccdconfig
OLD_FILES+=usr/share/man/man4/ccd.4.gz
OLD_FILES+=usr/share/man/man8/ccdconfig.8.gz
.endif
.if ${MK_CDDL} == no
OLD_LIBS+=lib/libavl.so.2
OLD_LIBS+=lib/libctf.so.2
OLD_LIBS+=lib/libdtrace.so.2
OLD_LIBS+=lib/libnvpair.so.2
OLD_LIBS+=lib/libumem.so.2
OLD_LIBS+=lib/libuutil.so.2
OLD_FILES+=usr/bin/ctfconvert
OLD_FILES+=usr/bin/ctfdump
OLD_FILES+=usr/bin/ctfmerge
OLD_FILES+=usr/lib/dtrace/drti.o
OLD_FILES+=usr/lib/dtrace/errno.d
OLD_FILES+=usr/lib/dtrace/io.d
OLD_FILES+=usr/lib/dtrace/ip.d
OLD_FILES+=usr/lib/dtrace/psinfo.d
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "i386"
OLD_FILES+=usr/lib/dtrace/regs_x86.d
.endif
OLD_FILES+=usr/lib/dtrace/signal.d
OLD_FILES+=usr/lib/dtrace/tcp.d
OLD_FILES+=usr/lib/dtrace/udp.d
OLD_FILES+=usr/lib/dtrace/unistd.d
OLD_FILES+=usr/lib/libavl.a
OLD_FILES+=usr/lib/libavl.so
OLD_FILES+=usr/lib/libavl_p.a
OLD_FILES+=usr/lib/libctf.a
OLD_FILES+=usr/lib/libctf.so
OLD_FILES+=usr/lib/libctf_p.a
OLD_FILES+=usr/lib/libdtrace.a
OLD_FILES+=usr/lib/libdtrace.so
OLD_FILES+=usr/lib/libdtrace_p.a
OLD_FILES+=usr/lib/libnvpair.a
OLD_FILES+=usr/lib/libnvpair.so
OLD_FILES+=usr/lib/libnvpair_p.a
OLD_FILES+=usr/lib/libumem.a
OLD_FILES+=usr/lib/libumem.so
OLD_FILES+=usr/lib/libumem_p.a
OLD_FILES+=usr/lib/libuutil.a
OLD_FILES+=usr/lib/libuutil.so
OLD_FILES+=usr/lib/libuutil_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/dtrace/drti.o
OLD_FILES+=usr/lib32/libavl.a
OLD_FILES+=usr/lib32/libavl.so
OLD_LIBS+=usr/lib32/libavl.so.2
OLD_FILES+=usr/lib32/libavl_p.a
OLD_FILES+=usr/lib32/libctf.a
OLD_FILES+=usr/lib32/libctf.so
OLD_LIBS+=usr/lib32/libctf.so.2
OLD_FILES+=usr/lib32/libctf_p.a
OLD_FILES+=usr/lib32/libdtrace.a
OLD_FILES+=usr/lib32/libdtrace.so
OLD_LIBS+=usr/lib32/libdtrace.so.2
OLD_FILES+=usr/lib32/libdtrace_p.a
OLD_FILES+=usr/lib32/libnvpair.a
OLD_FILES+=usr/lib32/libnvpair.so
OLD_LIBS+=usr/lib32/libnvpair.so.2
OLD_FILES+=usr/lib32/libnvpair_p.a
OLD_FILES+=usr/lib32/libumem.a
OLD_FILES+=usr/lib32/libumem.so
OLD_LIBS+=usr/lib32/libumem.so.2
OLD_FILES+=usr/lib32/libumem_p.a
OLD_FILES+=usr/lib32/libuutil.a
OLD_FILES+=usr/lib32/libuutil.so
OLD_LIBS+=usr/lib32/libuutil.so.2
OLD_FILES+=usr/lib32/libuutil_p.a
.endif
OLD_FILES+=usr/sbin/dtrace
OLD_FILES+=usr/sbin/lockstat
OLD_FILES+=usr/sbin/plockstat
OLD_FILES+=usr/share/man/man1/dtrace.1.gz
OLD_FILES+=usr/share/man/man1/dtruss.1.gz
OLD_FILES+=usr/share/man/man1/lockstat.1.gz
OLD_FILES+=usr/share/man/man1/plockstat.1.gz
OLD_FILES+=usr/share/dtrace/disklatency
OLD_FILES+=usr/share/dtrace/disklatencycmd
OLD_FILES+=usr/share/dtrace/hotopen
OLD_FILES+=usr/share/dtrace/nfsclienttime
OLD_FILES+=usr/share/dtrace/toolkit/execsnoop
OLD_FILES+=usr/share/dtrace/toolkit/hotkernel
OLD_FILES+=usr/share/dtrace/toolkit/hotuser
OLD_FILES+=usr/share/dtrace/toolkit/opensnoop
OLD_FILES+=usr/share/dtrace/toolkit/procsystime
OLD_FILES+=usr/share/dtrace/tcpconn
OLD_FILES+=usr/share/dtrace/tcpstate
OLD_FILES+=usr/share/dtrace/tcptrack
OLD_FILES+=usr/share/dtrace/udptrack
OLD_DIRS+=usr/lib/dtrace
OLD_DIRS+=usr/lib32/dtrace
OLD_DIRS+=usr/share/dtrace/toolkit
OLD_DIRS+=usr/share/dtrace
.endif
.if ${MK_ZFS} == no
OLD_FILES+=boot/gptzfsboot
OLD_FILES+=boot/zfsboot
OLD_FILES+=boot/zfsloader
OLD_FILES+=etc/rc.d/zfs
OLD_FILES+=etc/rc.d/zfsd
OLD_FILES+=etc/rc.d/zfsbe
OLD_FILES+=etc/rc.d/zvol
OLD_FILES+=etc/devd/zfs.conf
OLD_FILES+=etc/periodic/daily/404.status-zfs
OLD_FILES+=etc/periodic/daily/800.scrub-zfs
OLD_LIBS+=lib/libzfs.so.2
OLD_LIBS+=lib/libzfs.so.3
OLD_LIBS+=lib/libzfs_core.so.2
OLD_LIBS+=lib/libzpool.so.2
OLD_FILES+=rescue/zdb
OLD_FILES+=rescue/zfs
OLD_FILES+=rescue/zpool
OLD_FILES+=sbin/bectl
OLD_FILES+=sbin/zfs
OLD_FILES+=sbin/zpool
OLD_FILES+=sbin/zfsbootcfg
OLD_FILES+=usr/bin/zinject
OLD_FILES+=usr/bin/zstreamdump
OLD_FILES+=usr/bin/ztest
OLD_FILES+=usr/lib/libbe.a
OLD_FILES+=usr/lib/libbe_p.a
OLD_FILES+=usr/lib/libbe.so
OLD_LIBS+=lib/libbe.so.1
OLD_FILES+=usr/lib/libzfs.a
OLD_FILES+=usr/lib/libzfs.so
OLD_FILES+=usr/lib/libzfs_core.a
OLD_FILES+=usr/lib/libzfs_core.so
OLD_FILES+=usr/lib/libzfs_core_p.a
OLD_FILES+=usr/lib/libzfs_p.a
OLD_FILES+=usr/lib/libzpool.a
OLD_FILES+=usr/lib/libzpool.so
OLD_LIBS+=usr/lib/libzpool.so.2
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libzfs.a
OLD_FILES+=usr/lib32/libzfs.so
OLD_LIBS+=usr/lib32/libzfs.so.2
OLD_LIBS+=usr/lib32/libzfs.so.3
OLD_FILES+=usr/lib32/libzfs_core.a
OLD_FILES+=usr/lib32/libzfs_core.so
OLD_LIBS+=usr/lib32/libzfs_core.so.2
OLD_FILES+=usr/lib32/libzfs_core_p.a
OLD_FILES+=usr/lib32/libzfs_p.a
OLD_FILES+=usr/lib32/libzpool.a
OLD_FILES+=usr/lib32/libzpool.so
OLD_LIBS+=usr/lib32/libzpool.so.2
.endif
OLD_FILES+=usr/sbin/zfsd
OLD_FILES+=usr/sbin/zhack
OLD_FILES+=usr/sbin/zdb
OLD_FILES+=usr/share/man/man3/libbe.3.gz
OLD_FILES+=usr/share/man/man7/zpool-features.7.gz
OLD_FILES+=usr/share/man/man8/bectl.8.gz
OLD_FILES+=usr/share/man/man8/gptzfsboot.8.gz
OLD_FILES+=usr/share/man/man8/zdb.8.gz
OLD_FILES+=usr/share/man/man8/zfs-program.8.gz
OLD_FILES+=usr/share/man/man8/zfs.8.gz
OLD_FILES+=usr/share/man/man8/zfsboot.8.gz
OLD_FILES+=usr/share/man/man8/zfsbootcfg.8.gz
OLD_FILES+=usr/share/man/man8/zfsd.8.gz
OLD_FILES+=usr/share/man/man8/zfsloader.8.gz
OLD_FILES+=usr/share/man/man8/zpool.8.gz
.endif
.if ${MK_CLANG} == no
OLD_FILES+=usr/bin/clang
OLD_FILES+=usr/bin/clang++
OLD_FILES+=usr/bin/clang-cpp
OLD_FILES+=usr/bin/clang-tblgen
OLD_FILES+=usr/bin/llvm-objdump
OLD_FILES+=usr/bin/llvm-tblgen
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/allocator_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/asan_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/common_interface_defs.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/coverage_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/dfsan_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/esan_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/hwasan_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/linux_syscall_hooks.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/lsan_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/msan_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/netbsd_syscall_hooks.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/scudo_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/tsan_interface.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sanitizer/tsan_interface_atomic.h
OLD_DIRS+=usr/lib/clang/8.0.0/include/sanitizer
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_builtin_vars.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_cmath.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_complex_builtins.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_device_functions.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_intrinsics.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_libdevice_declares.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_math_forward_declares.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__clang_cuda_runtime_wrapper.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__stddef_max_align_t.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__wmmintrin_aes.h
OLD_FILES+=usr/lib/clang/8.0.0/include/__wmmintrin_pclmul.h
OLD_FILES+=usr/lib/clang/8.0.0/include/adxintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/altivec.h
OLD_FILES+=usr/lib/clang/8.0.0/include/ammintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/arm64intr.h
OLD_FILES+=usr/lib/clang/8.0.0/include/arm_acle.h
OLD_FILES+=usr/lib/clang/8.0.0/include/arm_fp16.h
OLD_FILES+=usr/lib/clang/8.0.0/include/arm_neon.h
OLD_FILES+=usr/lib/clang/8.0.0/include/armintr.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx2intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512bitalgintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512bwintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512cdintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512dqintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512erintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512fintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512ifmaintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512ifmavlintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512pfintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vbmi2intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vbmiintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vbmivlintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vlbitalgintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vlbwintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vlcdintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vldqintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vlintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vlvbmi2intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vlvnniintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vnniintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vpopcntdqintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avx512vpopcntdqvlintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/avxintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/bmi2intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/bmiintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/cetintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/cldemoteintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/clflushoptintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/clwbintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/clzerointrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/cpuid.h
OLD_FILES+=usr/lib/clang/8.0.0/include/emmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/f16cintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/fma4intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/fmaintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/fxsrintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/gfniintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/htmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/htmxlintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/ia32intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/immintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/invpcidintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/lwpintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/lzcntintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/mm3dnow.h
OLD_FILES+=usr/lib/clang/8.0.0/include/mm_malloc.h
OLD_FILES+=usr/lib/clang/8.0.0/include/mmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/module.modulemap
OLD_FILES+=usr/lib/clang/8.0.0/include/movdirintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/msa.h
OLD_FILES+=usr/lib/clang/8.0.0/include/mwaitxintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/nmmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/opencl-c.h
OLD_FILES+=usr/lib/clang/8.0.0/include/pconfigintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/pkuintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/pmmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/popcntintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/prfchwintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/ptwriteintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/rdseedintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/rtmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/s390intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/sgxintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/shaintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/smmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/tbmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/tmmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/vadefs.h
OLD_FILES+=usr/lib/clang/8.0.0/include/vaesintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/vecintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/vpclmulqdqintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/waitpkgintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/wbnoinvdintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/wmmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/x86intrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/xmmintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/xopintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/xsavecintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/xsaveintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/xsaveoptintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/xsavesintrin.h
OLD_FILES+=usr/lib/clang/8.0.0/include/xtestintrin.h
OLD_DIRS+=usr/lib/clang/8.0.0/include
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan-i386.so
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan-preinit-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan-preinit-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan-x86_64.so
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan_cxx-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.asan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.msan-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.msan-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.msan_cxx-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.msan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.profile-arm.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.profile-armhf.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.profile-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.profile-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.safestack-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.safestack-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.stats-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.stats-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.stats_client-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.stats_client-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.tsan-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.tsan_cxx-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.ubsan_minimal-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.ubsan_minimal-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.ubsan_standalone-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.ubsan_standalone-x86_64.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.ubsan_standalone_cxx-i386.a
OLD_FILES+=usr/lib/clang/8.0.0/lib/freebsd/libclang_rt.ubsan_standalone_cxx-x86_64.a
OLD_DIRS+=usr/lib/clang/8.0.0/lib/freebsd
OLD_DIRS+=usr/lib/clang/8.0.0/lib
OLD_DIRS+=usr/lib/clang/8.0.0
OLD_DIRS+=usr/lib/clang
OLD_FILES+=usr/share/doc/llvm/clang/LICENSE.TXT
OLD_DIRS+=usr/share/doc/llvm/clang
OLD_FILES+=usr/share/doc/llvm/COPYRIGHT.regex
OLD_FILES+=usr/share/doc/llvm/LICENSE.TXT
OLD_DIRS+=usr/share/doc/llvm
OLD_FILES+=usr/share/man/man1/clang.1.gz
OLD_FILES+=usr/share/man/man1/clang++.1.gz
OLD_FILES+=usr/share/man/man1/clang-cpp.1.gz
OLD_FILES+=usr/share/man/man1/llvm-tblgen.1.gz
.endif
.if ${MK_CLANG_EXTRAS} == no
OLD_FILES+=usr/bin/bugpoint
OLD_FILES+=usr/bin/clang-format
OLD_FILES+=usr/bin/llc
OLD_FILES+=usr/bin/lli
OLD_FILES+=usr/bin/llvm-ar
OLD_FILES+=usr/bin/llvm-as
OLD_FILES+=usr/bin/llvm-bcanalyzer
OLD_FILES+=usr/bin/llvm-cxxdump
OLD_FILES+=usr/bin/llvm-cxxfilt
OLD_FILES+=usr/bin/llvm-diff
OLD_FILES+=usr/bin/llvm-dis
OLD_FILES+=usr/bin/llvm-dwarfdump
OLD_FILES+=usr/bin/llvm-extract
OLD_FILES+=usr/bin/llvm-link
OLD_FILES+=usr/bin/llvm-lto
OLD_FILES+=usr/bin/llvm-lto2
OLD_FILES+=usr/bin/llvm-mc
OLD_FILES+=usr/bin/llvm-mca
OLD_FILES+=usr/bin/llvm-modextract
OLD_FILES+=usr/bin/llvm-nm
OLD_FILES+=usr/bin/llvm-objcopy
OLD_FILES+=usr/bin/llvm-pdbutil
OLD_FILES+=usr/bin/llvm-ranlib
OLD_FILES+=usr/bin/llvm-rtdyld
OLD_FILES+=usr/bin/llvm-symbolizer
OLD_FILES+=usr/bin/llvm-xray
OLD_FILES+=usr/bin/opt
OLD_FILES+=usr/share/man/man1/bugpoint.1.gz
OLD_FILES+=usr/share/man/man1/llc.1.gz
OLD_FILES+=usr/share/man/man1/lli.1.gz
OLD_FILES+=usr/share/man/man1/llvm-ar.1.gz
OLD_FILES+=usr/share/man/man1/llvm-as.1.gz
OLD_FILES+=usr/share/man/man1/llvm-bcanalyzer.1.gz
OLD_FILES+=usr/share/man/man1/llvm-diff.1.gz
OLD_FILES+=usr/share/man/man1/llvm-dis.1.gz
OLD_FILES+=usr/share/man/man1/llvm-dwarfdump.1.gz
OLD_FILES+=usr/share/man/man1/llvm-extract.1.gz
OLD_FILES+=usr/share/man/man1/llvm-link.1.gz
OLD_FILES+=usr/share/man/man1/llvm-nm.1.gz
OLD_FILES+=usr/share/man/man1/llvm-pdbutil.1.gz
OLD_FILES+=usr/share/man/man1/llvm-symbolizer.1.gz
OLD_FILES+=usr/share/man/man1/opt.1.gz
.endif
.if ${MK_CPP} == no
OLD_FILES+=usr/bin/cpp
OLD_FILES+=usr/share/man/man1/cpp.1.gz
.endif
.if ${MK_CUSE} == no
OLD_FILES+=usr/include/fs/cuse/cuse_defs.h
OLD_FILES+=usr/include/fs/cuse/cuse_ioctl.h
OLD_FILES+=usr/include/cuse.h
OLD_FILES+=usr/lib/libcuse.a
OLD_LIBS+=usr/lib/libcuse.so.1
OLD_FILES+=usr/lib/libcuse_p.a
OLD_FILES+=usr/share/man/man3/cuse.3.gz
OLD_FILES+=usr/share/man/man3/cuse_alloc_unit_number.3.gz
OLD_FILES+=usr/share/man/man3/cuse_alloc_unit_number_by_id.3.gz
OLD_FILES+=usr/share/man/man3/cuse_copy_in.3.gz
OLD_FILES+=usr/share/man/man3/cuse_copy_out.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_create.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_destroy.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_current.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_per_file_handle.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_priv0.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_get_priv1.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_set_per_file_handle.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_set_priv0.3.gz
OLD_FILES+=usr/share/man/man3/cuse_dev_set_priv1.3.gz
OLD_FILES+=usr/share/man/man3/cuse_free_unit_number.3.gz
OLD_FILES+=usr/share/man/man3/cuse_free_unit_number_by_id.3.gz
OLD_FILES+=usr/share/man/man3/cuse_get_local.3.gz
OLD_FILES+=usr/share/man/man3/cuse_got_peer_signal.3.gz
OLD_FILES+=usr/share/man/man3/cuse_init.3.gz
OLD_FILES+=usr/share/man/man3/cuse_is_vmalloc_addr.3.gz
OLD_FILES+=usr/share/man/man3/cuse_poll_wakeup.3.gz
OLD_FILES+=usr/share/man/man3/cuse_set_local.3.gz
OLD_FILES+=usr/share/man/man3/cuse_uninit.3.gz
OLD_FILES+=usr/share/man/man3/cuse_vmalloc.3.gz
OLD_FILES+=usr/share/man/man3/cuse_vmfree.3.gz
OLD_FILES+=usr/share/man/man3/cuse_vmoffset.3.gz
OLD_FILES+=usr/share/man/man3/cuse_wait_and_process.3.gz
OLD_DIRS+=usr/include/fs/cuse
.endif
# devd(8) not listed here on purpose
.if ${MK_CXX} == no
OLD_FILES+=usr/bin/CC
OLD_FILES+=usr/bin/c++
OLD_FILES+=usr/bin/g++
OLD_FILES+=usr/libexec/cc1plus
.endif
.if ${MK_DEBUG_FILES} == no
.if exists(${DESTDIR}/usr/lib/debug)
DEBUG_DIRS!=find ${DESTDIR}/usr/lib/debug -mindepth 1 \
-type d \! -path "${DESTDIR}/usr/lib/debug/boot/*" \
| sed -e 's,^${DESTDIR}/,,'; echo
DEBUG_FILES!=find ${DESTDIR}/usr/lib/debug \
\! -type d \! -path "${DESTDIR}/usr/lib/debug/boot/*" \! -name "lib*.so*" \
| sed -e 's,^${DESTDIR}/,,'; echo
DEBUG_LIBS!=find ${DESTDIR}/usr/lib/debug \! -type d -name "lib*.so*" \
| sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${DEBUG_DIRS}
OLD_FILES+=${DEBUG_FILES}
OLD_LIBS+=${DEBUG_LIBS}
.endif
.endif
.if ${MK_DIALOG} == no
OLD_FILES+=usr/bin/dialog
OLD_FILES+=usr/bin/dpv
OLD_FILES+=usr/lib/libdialog.a
OLD_FILES+=usr/lib/libdialog.so
OLD_FILES+=usr/lib/libdialog.so.8
OLD_FILES+=usr/lib/libdialog_p.a
OLD_FILES+=usr/lib/libdpv.a
OLD_FILES+=usr/lib/libdpv.so
OLD_FILES+=usr/lib/libdpv.so.1
OLD_FILES+=usr/lib/libdpv_p.a
OLD_FILES+=usr/sbin/bsdconfig
OLD_FILES+=usr/share/man/man1/dialog.1.gz
OLD_FILES+=usr/share/man/man1/dpv.1.gz
OLD_FILES+=usr/share/man/man3/dialog.3.gz
OLD_FILES+=usr/share/man/man3/dpv.3.gz
OLD_FILES+=usr/share/man/man8/bsdconfig.8.gz
OLD_DIRS+=usr/share/bsdconfig
OLD_DIRS+=usr/share/bsdconfig/media
OLD_DIRS+=usr/share/bsdconfig/networking
OLD_DIRS+=usr/share/bsdconfig/packages
OLD_DIRS+=usr/share/bsdconfig/password
OLD_DIRS+=usr/share/bsdconfig/startup
OLD_DIRS+=usr/share/bsdconfig/timezone
OLD_DIRS+=usr/share/bsdconfig/usermgmt
.endif
.if ${MK_EFI} == no
OLD_FILES+=usr/sbin/efibootmgr
OLD_FILES+=usr/sbin/efidp
OLD_FILES+=usr/sbin/efivar
OLD_FILES+=usr/sbin/uefisign
OLD_FILES+=usr/share/examples/uefisign/uefikeys
.endif
.if ${MK_FMTREE} == no
OLD_FILES+=usr/sbin/fmtree
OLD_FILES+=usr/share/man/man8/fmtree.8.gz
.endif
.if ${MK_FTP} == no
OLD_FILES+=etc/ftpusers
OLD_FILES+=etc/newsyslog.conf.d/ftp.conf
OLD_FILES+=etc/pam.d/ftp
OLD_FILES+=etc/pam.d/ftpd
OLD_FILES+=etc/rc.d/ftpd
OLD_FILES+=etc/syslog.d/ftp.conf
OLD_FILES+=usr/bin/ftp
OLD_FILES+=usr/bin/gate-ftp
OLD_FILES+=usr/bin/pftp
OLD_FILES+=usr/libexec/ftpd
OLD_FILES+=usr/share/man/man1/ftp.1.gz
OLD_FILES+=usr/share/man/man1/gate-ftp.1.gz
OLD_FILES+=usr/share/man/man1/pftp.1.gz
OLD_FILES+=usr/share/man/man5/ftpchroot.5.gz
OLD_FILES+=usr/share/man/man8/ftpd.8.gz
.endif
.if ${MK_GNUCXX} == no
OLD_FILES+=usr/bin/g++
OLD_FILES+=usr/include/c++/4.2/algorithm
OLD_FILES+=usr/include/c++/4.2/backward/algo.h
OLD_FILES+=usr/include/c++/4.2/backward/algobase.h
OLD_FILES+=usr/include/c++/4.2/backward/alloc.h
OLD_FILES+=usr/include/c++/4.2/backward/backward_warning.h
OLD_FILES+=usr/include/c++/4.2/backward/bvector.h
OLD_FILES+=usr/include/c++/4.2/backward/complex.h
OLD_FILES+=usr/include/c++/4.2/backward/defalloc.h
OLD_FILES+=usr/include/c++/4.2/backward/deque.h
OLD_FILES+=usr/include/c++/4.2/backward/fstream.h
OLD_FILES+=usr/include/c++/4.2/backward/function.h
OLD_FILES+=usr/include/c++/4.2/backward/hash_map.h
OLD_FILES+=usr/include/c++/4.2/backward/hash_set.h
OLD_FILES+=usr/include/c++/4.2/backward/hashtable.h
OLD_FILES+=usr/include/c++/4.2/backward/heap.h
OLD_FILES+=usr/include/c++/4.2/backward/iomanip.h
OLD_FILES+=usr/include/c++/4.2/backward/iostream.h
OLD_FILES+=usr/include/c++/4.2/backward/istream.h
OLD_FILES+=usr/include/c++/4.2/backward/iterator.h
OLD_FILES+=usr/include/c++/4.2/backward/list.h
OLD_FILES+=usr/include/c++/4.2/backward/map.h
OLD_FILES+=usr/include/c++/4.2/backward/multimap.h
OLD_FILES+=usr/include/c++/4.2/backward/multiset.h
OLD_FILES+=usr/include/c++/4.2/backward/new.h
OLD_FILES+=usr/include/c++/4.2/backward/ostream.h
OLD_FILES+=usr/include/c++/4.2/backward/pair.h
OLD_FILES+=usr/include/c++/4.2/backward/queue.h
OLD_FILES+=usr/include/c++/4.2/backward/rope.h
OLD_FILES+=usr/include/c++/4.2/backward/set.h
OLD_FILES+=usr/include/c++/4.2/backward/slist.h
OLD_FILES+=usr/include/c++/4.2/backward/stack.h
OLD_FILES+=usr/include/c++/4.2/backward/stream.h
OLD_FILES+=usr/include/c++/4.2/backward/streambuf.h
OLD_FILES+=usr/include/c++/4.2/backward/strstream
OLD_FILES+=usr/include/c++/4.2/backward/tempbuf.h
OLD_FILES+=usr/include/c++/4.2/backward/tree.h
OLD_FILES+=usr/include/c++/4.2/backward/vector.h
OLD_FILES+=usr/include/c++/4.2/bits/allocator.h
OLD_FILES+=usr/include/c++/4.2/bits/atomic_word.h
OLD_FILES+=usr/include/c++/4.2/bits/basic_file.h
OLD_FILES+=usr/include/c++/4.2/bits/basic_ios.h
OLD_FILES+=usr/include/c++/4.2/bits/basic_ios.tcc
OLD_FILES+=usr/include/c++/4.2/bits/basic_string.h
OLD_FILES+=usr/include/c++/4.2/bits/basic_string.tcc
OLD_FILES+=usr/include/c++/4.2/bits/boost_concept_check.h
OLD_FILES+=usr/include/c++/4.2/bits/c++allocator.h
OLD_FILES+=usr/include/c++/4.2/bits/c++config.h
OLD_FILES+=usr/include/c++/4.2/bits/c++io.h
OLD_FILES+=usr/include/c++/4.2/bits/c++locale.h
OLD_FILES+=usr/include/c++/4.2/bits/c++locale_internal.h
OLD_FILES+=usr/include/c++/4.2/bits/char_traits.h
OLD_FILES+=usr/include/c++/4.2/bits/cmath.tcc
OLD_FILES+=usr/include/c++/4.2/bits/codecvt.h
OLD_FILES+=usr/include/c++/4.2/bits/compatibility.h
OLD_FILES+=usr/include/c++/4.2/bits/concept_check.h
OLD_FILES+=usr/include/c++/4.2/bits/cpp_type_traits.h
OLD_FILES+=usr/include/c++/4.2/bits/cpu_defines.h
OLD_FILES+=usr/include/c++/4.2/bits/ctype_base.h
OLD_FILES+=usr/include/c++/4.2/bits/ctype_inline.h
OLD_FILES+=usr/include/c++/4.2/bits/ctype_noninline.h
OLD_FILES+=usr/include/c++/4.2/bits/cxxabi_tweaks.h
OLD_FILES+=usr/include/c++/4.2/bits/deque.tcc
OLD_FILES+=usr/include/c++/4.2/bits/fstream.tcc
OLD_FILES+=usr/include/c++/4.2/bits/functexcept.h
OLD_FILES+=usr/include/c++/4.2/bits/gslice.h
OLD_FILES+=usr/include/c++/4.2/bits/gslice_array.h
OLD_FILES+=usr/include/c++/4.2/bits/gthr-default.h
OLD_FILES+=usr/include/c++/4.2/bits/gthr-posix.h
OLD_FILES+=usr/include/c++/4.2/bits/gthr-single.h
OLD_FILES+=usr/include/c++/4.2/bits/gthr-tpf.h
OLD_FILES+=usr/include/c++/4.2/bits/gthr.h
OLD_FILES+=usr/include/c++/4.2/bits/indirect_array.h
OLD_FILES+=usr/include/c++/4.2/bits/ios_base.h
OLD_FILES+=usr/include/c++/4.2/bits/istream.tcc
OLD_FILES+=usr/include/c++/4.2/bits/list.tcc
OLD_FILES+=usr/include/c++/4.2/bits/locale_classes.h
OLD_FILES+=usr/include/c++/4.2/bits/locale_facets.h
OLD_FILES+=usr/include/c++/4.2/bits/locale_facets.tcc
OLD_FILES+=usr/include/c++/4.2/bits/localefwd.h
OLD_FILES+=usr/include/c++/4.2/bits/mask_array.h
OLD_FILES+=usr/include/c++/4.2/bits/messages_members.h
OLD_FILES+=usr/include/c++/4.2/bits/os_defines.h
OLD_FILES+=usr/include/c++/4.2/bits/ostream.tcc
OLD_FILES+=usr/include/c++/4.2/bits/ostream_insert.h
OLD_FILES+=usr/include/c++/4.2/bits/postypes.h
OLD_FILES+=usr/include/c++/4.2/bits/slice_array.h
OLD_FILES+=usr/include/c++/4.2/bits/sstream.tcc
OLD_FILES+=usr/include/c++/4.2/bits/stl_algo.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_algobase.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_bvector.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_construct.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_deque.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_function.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_heap.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_iterator.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_iterator_base_funcs.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_iterator_base_types.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_list.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_map.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_multimap.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_multiset.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_numeric.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_pair.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_queue.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_raw_storage_iter.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_relops.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_set.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_stack.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_tempbuf.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_tree.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_uninitialized.h
OLD_FILES+=usr/include/c++/4.2/bits/stl_vector.h
OLD_FILES+=usr/include/c++/4.2/bits/stream_iterator.h
OLD_FILES+=usr/include/c++/4.2/bits/streambuf.tcc
OLD_FILES+=usr/include/c++/4.2/bits/streambuf_iterator.h
OLD_FILES+=usr/include/c++/4.2/bits/stringfwd.h
OLD_FILES+=usr/include/c++/4.2/bits/time_members.h
OLD_FILES+=usr/include/c++/4.2/bits/valarray_after.h
OLD_FILES+=usr/include/c++/4.2/bits/valarray_array.h
OLD_FILES+=usr/include/c++/4.2/bits/valarray_array.tcc
OLD_FILES+=usr/include/c++/4.2/bits/valarray_before.h
OLD_FILES+=usr/include/c++/4.2/bits/vector.tcc
OLD_FILES+=usr/include/c++/4.2/bitset
OLD_FILES+=usr/include/c++/4.2/cassert
OLD_FILES+=usr/include/c++/4.2/cctype
OLD_FILES+=usr/include/c++/4.2/cerrno
OLD_FILES+=usr/include/c++/4.2/cfloat
OLD_FILES+=usr/include/c++/4.2/ciso646
OLD_FILES+=usr/include/c++/4.2/climits
OLD_FILES+=usr/include/c++/4.2/clocale
OLD_FILES+=usr/include/c++/4.2/cmath
OLD_FILES+=usr/include/c++/4.2/complex
OLD_FILES+=usr/include/c++/4.2/csetjmp
OLD_FILES+=usr/include/c++/4.2/csignal
OLD_FILES+=usr/include/c++/4.2/cstdarg
OLD_FILES+=usr/include/c++/4.2/cstddef
OLD_FILES+=usr/include/c++/4.2/cstdio
OLD_FILES+=usr/include/c++/4.2/cstdlib
OLD_FILES+=usr/include/c++/4.2/cstring
OLD_FILES+=usr/include/c++/4.2/ctime
OLD_FILES+=usr/include/c++/4.2/cwchar
OLD_FILES+=usr/include/c++/4.2/cwctype
OLD_FILES+=usr/include/c++/4.2/cxxabi.h
OLD_FILES+=usr/include/c++/4.2/debug/bitset
OLD_FILES+=usr/include/c++/4.2/debug/debug.h
OLD_FILES+=usr/include/c++/4.2/debug/deque
OLD_FILES+=usr/include/c++/4.2/debug/formatter.h
OLD_FILES+=usr/include/c++/4.2/debug/functions.h
OLD_FILES+=usr/include/c++/4.2/debug/hash_map
OLD_FILES+=usr/include/c++/4.2/debug/hash_map.h
OLD_FILES+=usr/include/c++/4.2/debug/hash_multimap.h
OLD_FILES+=usr/include/c++/4.2/debug/hash_multiset.h
OLD_FILES+=usr/include/c++/4.2/debug/hash_set
OLD_FILES+=usr/include/c++/4.2/debug/hash_set.h
OLD_FILES+=usr/include/c++/4.2/debug/list
OLD_FILES+=usr/include/c++/4.2/debug/macros.h
OLD_FILES+=usr/include/c++/4.2/debug/map
OLD_FILES+=usr/include/c++/4.2/debug/map.h
OLD_FILES+=usr/include/c++/4.2/debug/multimap.h
OLD_FILES+=usr/include/c++/4.2/debug/multiset.h
OLD_FILES+=usr/include/c++/4.2/debug/safe_base.h
OLD_FILES+=usr/include/c++/4.2/debug/safe_iterator.h
OLD_FILES+=usr/include/c++/4.2/debug/safe_iterator.tcc
OLD_FILES+=usr/include/c++/4.2/debug/safe_sequence.h
OLD_FILES+=usr/include/c++/4.2/debug/set
OLD_FILES+=usr/include/c++/4.2/debug/set.h
OLD_FILES+=usr/include/c++/4.2/debug/string
OLD_FILES+=usr/include/c++/4.2/debug/vector
OLD_FILES+=usr/include/c++/4.2/deque
OLD_FILES+=usr/include/c++/4.2/exception
OLD_FILES+=usr/include/c++/4.2/exception_defines.h
OLD_FILES+=usr/include/c++/4.2/ext/algorithm
OLD_FILES+=usr/include/c++/4.2/ext/array_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/atomicity.h
OLD_FILES+=usr/include/c++/4.2/ext/bitmap_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/codecvt_specializations.h
OLD_FILES+=usr/include/c++/4.2/ext/concurrence.h
OLD_FILES+=usr/include/c++/4.2/ext/debug_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/functional
OLD_FILES+=usr/include/c++/4.2/ext/hash_fun.h
OLD_FILES+=usr/include/c++/4.2/ext/hash_map
OLD_FILES+=usr/include/c++/4.2/ext/hash_set
OLD_FILES+=usr/include/c++/4.2/ext/hashtable.h
OLD_FILES+=usr/include/c++/4.2/ext/iterator
OLD_FILES+=usr/include/c++/4.2/ext/malloc_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/memory
OLD_FILES+=usr/include/c++/4.2/ext/mt_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/new_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/numeric
OLD_FILES+=usr/include/c++/4.2/ext/numeric_traits.h
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/assoc_container.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_tree_policy/basic_tree_policy_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_tree_policy/null_node_metadata.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_tree_policy/traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/basic_types.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/bin_search_tree_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/cond_dtor_entry_dealtor.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/cond_key_dtor_entry_dealtor.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/iterators_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/node_iterators.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/point_iterators.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/policy_access_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/r_erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/rotate_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/bin_search_tree_/traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/binary_heap_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/const_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/const_point_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/entry_cmp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/entry_pred.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/iterators_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/policy_access_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/resize_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binary_heap_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_/binomial_heap_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/binomial_heap_base_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/binomial_heap_base_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/cc_ht_map_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/cmp_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/cond_key_dtor_entry_dealtor.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/constructor_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/constructor_destructor_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/constructor_destructor_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/debug_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/debug_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/entry_list_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/erase_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/erase_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/find_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/insert_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/insert_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/iterators_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/policy_access_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/resize_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/resize_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/resize_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/size_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/standard_policies.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cc_hash_table_map_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/cond_dealtor.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/container_base_dispatch.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/eq_fn/eq_by_less.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/eq_fn/hash_eq_fn.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/constructor_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/constructor_destructor_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/constructor_destructor_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/debug_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/debug_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/erase_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/erase_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/find_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/find_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/gp_ht_map_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/insert_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/insert_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/iterator_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/policy_access_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/resize_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/resize_no_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/resize_store_hash_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/standard_policies.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/gp_hash_table_map_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/direct_mask_range_hashing_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/direct_mod_range_hashing_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/linear_probe_fn_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/mask_based_range_hashing.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/mod_based_range_hashing.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/probe_fn_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/quadratic_probe_fn_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/ranged_hash_fn.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/ranged_probe_fn.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_probe_fn.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_range_hashing.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_ranged_hash_fn.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/hash_fn/sample_ranged_probe_fn.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/const_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/const_point_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/iterators_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/left_child_next_sibling_heap_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/node.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/null_metadata.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/policy_access_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/left_child_next_sibling_heap_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/constructor_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/entry_metadata_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/iterators_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/lu_map_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_map_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/counter_lu_metadata.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/counter_lu_policy_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/mtf_lu_policy_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/list_update_policy/sample_update_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/map_debug_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/cond_dtor.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/iterators_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/node_iterators.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/ov_tree_map_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/policy_access_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/ov_tree_map_/traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/pairing_heap_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pairing_heap_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/child_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/cond_dtor_entry_dealtor.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/const_child_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/head.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/insert_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/internal_node.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/iterators_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/leaf.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/node_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/node_iterators.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/node_metadata_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/pat_trie_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/point_iterators.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/policy_access_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/r_erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/rotate_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/split_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/split_join_branch_bag.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/synth_e_access_traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/pat_trie_/update_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/priority_queue_base_dispatch.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/node.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/rb_tree_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rb_tree_map_/traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/rc.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/rc_binomial_heap_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/rc_binomial_heap_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/cc_hash_max_collision_check_resize_trigger_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_exponential_size_policy_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_load_check_resize_trigger_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_load_check_resize_trigger_size_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_prime_size_policy_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/hash_standard_resize_policy_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/sample_resize_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/sample_resize_trigger.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/resize_policy/sample_size_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/info_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/node.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/splay_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/splay_tree_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/splay_tree_/traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/standard_policies.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/constructors_destructor_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/debug_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/erase_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/find_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/insert_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/split_join_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/thin_heap_.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/thin_heap_/trace_fn_imps.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/node_metadata_selector.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/null_node_update_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/order_statistics_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_policy/sample_tree_node_update.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/tree_trace_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/node_metadata_selector.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/null_node_update_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/order_statistics_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/prefix_search_node_update_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/sample_trie_e_access_traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/sample_trie_node_update.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/string_trie_e_access_traits_imp.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/trie_policy/trie_policy_base.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/type_utils.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/types_traits.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/const_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/const_point_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/detail/unordered_iterator/point_iterator.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/exception.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/hash_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/list_update_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/priority_queue.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/tag_and_trait.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/tree_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pb_ds/trie_policy.hpp
OLD_FILES+=usr/include/c++/4.2/ext/pod_char_traits.h
OLD_FILES+=usr/include/c++/4.2/ext/pool_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/rb_tree
OLD_FILES+=usr/include/c++/4.2/ext/rc_string_base.h
OLD_FILES+=usr/include/c++/4.2/ext/rope
OLD_FILES+=usr/include/c++/4.2/ext/ropeimpl.h
OLD_FILES+=usr/include/c++/4.2/ext/slist
OLD_FILES+=usr/include/c++/4.2/ext/sso_string_base.h
OLD_FILES+=usr/include/c++/4.2/ext/stdio_filebuf.h
OLD_FILES+=usr/include/c++/4.2/ext/stdio_sync_filebuf.h
OLD_FILES+=usr/include/c++/4.2/ext/throw_allocator.h
OLD_FILES+=usr/include/c++/4.2/ext/type_traits.h
OLD_FILES+=usr/include/c++/4.2/ext/typelist.h
OLD_FILES+=usr/include/c++/4.2/ext/vstring.h
OLD_FILES+=usr/include/c++/4.2/ext/vstring.tcc
OLD_FILES+=usr/include/c++/4.2/ext/vstring_fwd.h
OLD_FILES+=usr/include/c++/4.2/ext/vstring_util.h
OLD_FILES+=usr/include/c++/4.2/fstream
OLD_FILES+=usr/include/c++/4.2/functional
OLD_FILES+=usr/include/c++/4.2/iomanip
OLD_FILES+=usr/include/c++/4.2/ios
OLD_FILES+=usr/include/c++/4.2/iosfwd
OLD_FILES+=usr/include/c++/4.2/iostream
OLD_FILES+=usr/include/c++/4.2/istream
OLD_FILES+=usr/include/c++/4.2/iterator
OLD_FILES+=usr/include/c++/4.2/limits
OLD_FILES+=usr/include/c++/4.2/list
OLD_FILES+=usr/include/c++/4.2/locale
OLD_FILES+=usr/include/c++/4.2/map
OLD_FILES+=usr/include/c++/4.2/memory
OLD_FILES+=usr/include/c++/4.2/new
OLD_FILES+=usr/include/c++/4.2/numeric
OLD_FILES+=usr/include/c++/4.2/ostream
OLD_FILES+=usr/include/c++/4.2/queue
OLD_FILES+=usr/include/c++/4.2/set
OLD_FILES+=usr/include/c++/4.2/sstream
OLD_FILES+=usr/include/c++/4.2/stack
OLD_FILES+=usr/include/c++/4.2/stdexcept
OLD_FILES+=usr/include/c++/4.2/streambuf
OLD_FILES+=usr/include/c++/4.2/string
OLD_FILES+=usr/include/c++/4.2/tr1/array
OLD_FILES+=usr/include/c++/4.2/tr1/bind_iterate.h
OLD_FILES+=usr/include/c++/4.2/tr1/bind_repeat.h
OLD_FILES+=usr/include/c++/4.2/tr1/boost_shared_ptr.h
OLD_FILES+=usr/include/c++/4.2/tr1/cctype
OLD_FILES+=usr/include/c++/4.2/tr1/cfenv
OLD_FILES+=usr/include/c++/4.2/tr1/cfloat
OLD_FILES+=usr/include/c++/4.2/tr1/cinttypes
OLD_FILES+=usr/include/c++/4.2/tr1/climits
OLD_FILES+=usr/include/c++/4.2/tr1/cmath
OLD_FILES+=usr/include/c++/4.2/tr1/common.h
OLD_FILES+=usr/include/c++/4.2/tr1/complex
OLD_FILES+=usr/include/c++/4.2/tr1/cstdarg
OLD_FILES+=usr/include/c++/4.2/tr1/cstdbool
OLD_FILES+=usr/include/c++/4.2/tr1/cstdint
OLD_FILES+=usr/include/c++/4.2/tr1/cstdio
OLD_FILES+=usr/include/c++/4.2/tr1/cstdlib
OLD_FILES+=usr/include/c++/4.2/tr1/ctgmath
OLD_FILES+=usr/include/c++/4.2/tr1/ctime
OLD_FILES+=usr/include/c++/4.2/tr1/ctype.h
OLD_FILES+=usr/include/c++/4.2/tr1/cwchar
OLD_FILES+=usr/include/c++/4.2/tr1/cwctype
OLD_FILES+=usr/include/c++/4.2/tr1/fenv.h
OLD_FILES+=usr/include/c++/4.2/tr1/float.h
OLD_FILES+=usr/include/c++/4.2/tr1/functional
OLD_FILES+=usr/include/c++/4.2/tr1/functional_hash.h
OLD_FILES+=usr/include/c++/4.2/tr1/functional_iterate.h
OLD_FILES+=usr/include/c++/4.2/tr1/hashtable
OLD_FILES+=usr/include/c++/4.2/tr1/hashtable_policy.h
OLD_FILES+=usr/include/c++/4.2/tr1/inttypes.h
OLD_FILES+=usr/include/c++/4.2/tr1/limits.h
OLD_FILES+=usr/include/c++/4.2/tr1/math.h
OLD_FILES+=usr/include/c++/4.2/tr1/memory
OLD_FILES+=usr/include/c++/4.2/tr1/mu_iterate.h
OLD_FILES+=usr/include/c++/4.2/tr1/random
OLD_FILES+=usr/include/c++/4.2/tr1/random.tcc
OLD_FILES+=usr/include/c++/4.2/tr1/ref_fwd.h
OLD_FILES+=usr/include/c++/4.2/tr1/ref_wrap_iterate.h
OLD_FILES+=usr/include/c++/4.2/tr1/repeat.h
OLD_FILES+=usr/include/c++/4.2/tr1/stdarg.h
OLD_FILES+=usr/include/c++/4.2/tr1/stdbool.h
OLD_FILES+=usr/include/c++/4.2/tr1/stdint.h
OLD_FILES+=usr/include/c++/4.2/tr1/stdio.h
OLD_FILES+=usr/include/c++/4.2/tr1/stdlib.h
OLD_FILES+=usr/include/c++/4.2/tr1/tgmath.h
OLD_FILES+=usr/include/c++/4.2/tr1/tuple
OLD_FILES+=usr/include/c++/4.2/tr1/tuple_defs.h
OLD_FILES+=usr/include/c++/4.2/tr1/tuple_iterate.h
OLD_FILES+=usr/include/c++/4.2/tr1/type_traits
OLD_FILES+=usr/include/c++/4.2/tr1/type_traits_fwd.h
OLD_FILES+=usr/include/c++/4.2/tr1/unordered_map
OLD_FILES+=usr/include/c++/4.2/tr1/unordered_set
OLD_FILES+=usr/include/c++/4.2/tr1/utility
OLD_FILES+=usr/include/c++/4.2/tr1/wchar.h
OLD_FILES+=usr/include/c++/4.2/tr1/wctype.h
OLD_FILES+=usr/include/c++/4.2/typeinfo
OLD_FILES+=usr/include/c++/4.2/utility
OLD_FILES+=usr/include/c++/4.2/valarray
OLD_FILES+=usr/include/c++/4.2/vector
OLD_FILES+=usr/lib/libstdc++.a
OLD_FILES+=usr/lib/libstdc++.so
OLD_LIBS+=usr/lib/libstdc++.so.6
OLD_FILES+=usr/lib/libstdc++_p.a
OLD_FILES+=usr/lib/libsupc++.a
OLD_FILES+=usr/lib/libsupc++.so
OLD_LIBS+=usr/lib/libsupc++.so.1
OLD_FILES+=usr/lib/libsupc++_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libstdc++.a
OLD_FILES+=usr/lib32/libstdc++.so
OLD_LIBS+=usr/lib32/libstdc++.so.6
OLD_FILES+=usr/lib32/libstdc++_p.a
OLD_FILES+=usr/lib32/libsupc++.a
OLD_FILES+=usr/lib32/libsupc++.so
OLD_LIBS+=usr/lib32/libsupc++.so.1
OLD_FILES+=usr/lib32/libsupc++_p.a
.endif
OLD_FILES+=usr/libexec/cc1plus
.endif
.if ${MK_DICT} == no
OLD_FILES+=usr/share/dict/README
OLD_FILES+=usr/share/dict/freebsd
OLD_FILES+=usr/share/dict/propernames
OLD_FILES+=usr/share/dict/web2
OLD_FILES+=usr/share/dict/web2a
OLD_FILES+=usr/share/dict/words
OLD_DIRS+=usr/share/dict
.endif
.if ${MK_DMAGENT} == no
OLD_FILES+=etc/dma/dma.conf
OLD_FILES+=usr/libexec/dma
OLD_FILES+=usr/libexec/dma-mbox-create
OLD_FILES+=usr/share/man/man8/dma.8.gz
OLD_FILES+=usr/share/examples/dma/mailer.conf
.endif
.if ${MK_EE} == no
OLD_FILES+=usr/bin/edit
OLD_FILES+=usr/bin/ee
OLD_FILES+=usr/bin/ree
OLD_FILES+=usr/share/man/man1/edit.1.gz
OLD_FILES+=usr/share/man/man1/ee.1.gz
OLD_FILES+=usr/share/man/man1/ree.1.gz
OLD_FILES+=usr/share/nls/C/ee.cat
OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/ee.cat
OLD_FILES+=usr/share/nls/pl_PL.ISO8859-2/ee.cat
OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/ee.cat
OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/ee.cat
.endif
.if ${MK_EXAMPLES} == no
OLD_FILES+=usr/share/examples/BSD_daemon/FreeBSD.pfa
OLD_FILES+=usr/share/examples/BSD_daemon/README
OLD_FILES+=usr/share/examples/BSD_daemon/beastie.eps
OLD_FILES+=usr/share/examples/BSD_daemon/beastie.fig
OLD_FILES+=usr/share/examples/BSD_daemon/eps.patch
OLD_FILES+=usr/share/examples/BSD_daemon/poster.sh
OLD_FILES+=usr/share/examples/FreeBSD_version/FreeBSD_version.c
OLD_FILES+=usr/share/examples/FreeBSD_version/Makefile
OLD_FILES+=usr/share/examples/FreeBSD_version/README
OLD_FILES+=usr/share/examples/IPv6/USAGE
OLD_FILES+=usr/share/examples/bhyve/vmrun.sh
OLD_FILES+=usr/share/examples/bootforth/README
OLD_FILES+=usr/share/examples/bootforth/boot.4th
OLD_FILES+=usr/share/examples/bootforth/frames.4th
OLD_FILES+=usr/share/examples/bootforth/loader.rc
OLD_FILES+=usr/share/examples/bootforth/menu.4th
OLD_FILES+=usr/share/examples/bootforth/menuconf.4th
OLD_FILES+=usr/share/examples/bootforth/screen.4th
OLD_FILES+=usr/share/examples/bsdconfig/add_some_packages.sh
OLD_FILES+=usr/share/examples/bsdconfig/browse_packages_http.sh
OLD_FILES+=usr/share/examples/bsdconfig/bsdconfigrc
OLD_FILES+=usr/share/examples/csh/dot.cshrc
OLD_FILES+=usr/share/examples/diskless/ME
OLD_FILES+=usr/share/examples/diskless/README.BOOTP
OLD_FILES+=usr/share/examples/diskless/README.TEMPLATING
OLD_FILES+=usr/share/examples/diskless/clone_root
OLD_FILES+=usr/share/examples/dma/mailer.conf
OLD_FILES+=usr/share/examples/drivers/README
OLD_FILES+=usr/share/examples/drivers/make_device_driver.sh
OLD_FILES+=usr/share/examples/drivers/make_pseudo_driver.sh
OLD_FILES+=usr/share/examples/dwatch/profile_template
OLD_FILES+=usr/share/examples/etc/README.examples
OLD_FILES+=usr/share/examples/etc/bsd-style-copyright
OLD_FILES+=usr/share/examples/etc/group
OLD_FILES+=usr/share/examples/etc/login.access
OLD_FILES+=usr/share/examples/etc/make.conf
OLD_FILES+=usr/share/examples/etc/rc.bsdextended
OLD_FILES+=usr/share/examples/etc/rc.firewall
OLD_FILES+=usr/share/examples/etc/rc.sendmail
OLD_FILES+=usr/share/examples/etc/termcap.small
OLD_FILES+=usr/share/examples/etc/wpa_supplicant.conf
OLD_FILES+=usr/share/examples/find_interface/Makefile
OLD_FILES+=usr/share/examples/find_interface/README
OLD_FILES+=usr/share/examples/find_interface/find_interface.c
OLD_FILES+=usr/share/examples/hast/ucarp.sh
OLD_FILES+=usr/share/examples/hast/ucarp_down.sh
OLD_FILES+=usr/share/examples/hast/ucarp_up.sh
OLD_FILES+=usr/share/examples/hast/vip-down.sh
OLD_FILES+=usr/share/examples/hast/vip-up.sh
OLD_FILES+=usr/share/examples/hostapd/hostapd.conf
OLD_FILES+=usr/share/examples/hostapd/hostapd.eap_user
OLD_FILES+=usr/share/examples/hostapd/hostapd.wpa_psk
OLD_FILES+=usr/share/examples/indent/indent.pro
OLD_FILES+=usr/share/examples/ipfilter/BASIC.NAT
OLD_FILES+=usr/share/examples/ipfilter/BASIC_1.FW
OLD_FILES+=usr/share/examples/ipfilter/BASIC_2.FW
OLD_FILES+=usr/share/examples/ipfilter/README
OLD_FILES+=usr/share/examples/ipfilter/example.1
OLD_FILES+=usr/share/examples/ipfilter/example.10
OLD_FILES+=usr/share/examples/ipfilter/example.11
OLD_FILES+=usr/share/examples/ipfilter/example.12
OLD_FILES+=usr/share/examples/ipfilter/example.13
OLD_FILES+=usr/share/examples/ipfilter/example.14
OLD_FILES+=usr/share/examples/ipfilter/example.2
OLD_FILES+=usr/share/examples/ipfilter/example.3
OLD_FILES+=usr/share/examples/ipfilter/example.4
OLD_FILES+=usr/share/examples/ipfilter/example.5
OLD_FILES+=usr/share/examples/ipfilter/example.6
OLD_FILES+=usr/share/examples/ipfilter/example.7
OLD_FILES+=usr/share/examples/ipfilter/example.8
OLD_FILES+=usr/share/examples/ipfilter/example.9
OLD_FILES+=usr/share/examples/ipfilter/example.sr
OLD_FILES+=usr/share/examples/ipfilter/examples.txt
OLD_FILES+=usr/share/examples/ipfilter/firewall
OLD_FILES+=usr/share/examples/ipfilter/firewall.1
OLD_FILES+=usr/share/examples/ipfilter/firewall.2
OLD_FILES+=usr/share/examples/ipfilter/ftp-proxy
OLD_FILES+=usr/share/examples/ipfilter/ftppxy
OLD_FILES+=usr/share/examples/ipfilter/ipf-howto.txt
OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.permissive
OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.restrictive
OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.sample
OLD_FILES+=usr/share/examples/ipfilter/ipnat.conf.sample
OLD_FILES+=usr/share/examples/ipfilter/mkfilters
OLD_FILES+=usr/share/examples/ipfilter/nat-setup
OLD_FILES+=usr/share/examples/ipfilter/nat.eg
OLD_FILES+=usr/share/examples/ipfilter/rules.txt
OLD_FILES+=usr/share/examples/ipfilter/server
OLD_FILES+=usr/share/examples/ipfilter/tcpstate
OLD_FILES+=usr/share/examples/ipfw/change_rules.sh
OLD_FILES+=usr/share/examples/jails/README
OLD_FILES+=usr/share/examples/jails/VIMAGE
OLD_FILES+=usr/share/examples/jails/jail.xxx.conf
OLD_FILES+=usr/share/examples/jails/jib
OLD_FILES+=usr/share/examples/jails/jng
OLD_FILES+=usr/share/examples/jails/rc.conf.jails
OLD_FILES+=usr/share/examples/jails/rcjail.xxx.conf
OLD_FILES+=usr/share/examples/kld/Makefile
OLD_FILES+=usr/share/examples/kld/cdev/Makefile
OLD_FILES+=usr/share/examples/kld/cdev/README
OLD_FILES+=usr/share/examples/kld/cdev/module/Makefile
OLD_FILES+=usr/share/examples/kld/cdev/module/cdev.c
OLD_FILES+=usr/share/examples/kld/cdev/module/cdev.h
OLD_FILES+=usr/share/examples/kld/cdev/module/cdevmod.c
OLD_FILES+=usr/share/examples/kld/cdev/test/Makefile
OLD_FILES+=usr/share/examples/kld/cdev/test/testcdev.c
OLD_FILES+=usr/share/examples/kld/dyn_sysctl/Makefile
OLD_FILES+=usr/share/examples/kld/dyn_sysctl/README
OLD_FILES+=usr/share/examples/kld/dyn_sysctl/dyn_sysctl.c
OLD_FILES+=usr/share/examples/kld/firmware/Makefile
OLD_FILES+=usr/share/examples/kld/firmware/README
OLD_FILES+=usr/share/examples/kld/firmware/fwconsumer/Makefile
OLD_FILES+=usr/share/examples/kld/firmware/fwconsumer/fw_consumer.c
OLD_FILES+=usr/share/examples/kld/firmware/fwimage/Makefile
OLD_FILES+=usr/share/examples/kld/firmware/fwimage/firmware.img.uu
OLD_FILES+=usr/share/examples/kld/khelp/Makefile
OLD_FILES+=usr/share/examples/kld/khelp/README
OLD_FILES+=usr/share/examples/kld/khelp/h_example.c
OLD_FILES+=usr/share/examples/kld/syscall/Makefile
OLD_FILES+=usr/share/examples/kld/syscall/module/Makefile
OLD_FILES+=usr/share/examples/kld/syscall/module/syscall.c
OLD_FILES+=usr/share/examples/kld/syscall/test/Makefile
OLD_FILES+=usr/share/examples/kld/syscall/test/call.c
OLD_FILES+=usr/share/examples/libusb20/Makefile
OLD_FILES+=usr/share/examples/libusb20/README
OLD_FILES+=usr/share/examples/libusb20/bulk.c
OLD_FILES+=usr/share/examples/libusb20/control.c
OLD_FILES+=usr/share/examples/libusb20/util.c
OLD_FILES+=usr/share/examples/libusb20/util.h
OLD_FILES+=usr/share/examples/libvgl/Makefile
OLD_FILES+=usr/share/examples/libvgl/demo.c
OLD_FILES+=usr/share/examples/mdoc/POSIX-copyright
OLD_FILES+=usr/share/examples/mdoc/deshallify.sh
OLD_FILES+=usr/share/examples/mdoc/example.1
OLD_FILES+=usr/share/examples/mdoc/example.3
OLD_FILES+=usr/share/examples/mdoc/example.4
OLD_FILES+=usr/share/examples/mdoc/example.9
OLD_FILES+=usr/share/examples/netgraph/ether.bridge
OLD_FILES+=usr/share/examples/netgraph/frame_relay
OLD_FILES+=usr/share/examples/netgraph/ngctl
OLD_FILES+=usr/share/examples/netgraph/raw
OLD_FILES+=usr/share/examples/netgraph/udp.tunnel
OLD_FILES+=usr/share/examples/netgraph/virtual.chain
OLD_FILES+=usr/share/examples/netgraph/virtual.lan
OLD_FILES+=usr/share/examples/pc-sysinstall/README
OLD_FILES+=usr/share/examples/pc-sysinstall/pc-autoinstall.conf
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.fbsd-netinstall
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.geli
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.gmirror
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.netinstall
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.restore
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.rsync
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.upgrade
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.zfs
OLD_FILES+=usr/share/examples/perfmon/Makefile
OLD_FILES+=usr/share/examples/perfmon/README
OLD_FILES+=usr/share/examples/perfmon/perfmon.c
OLD_FILES+=usr/share/examples/pf/ackpri
OLD_FILES+=usr/share/examples/pf/faq-example1
OLD_FILES+=usr/share/examples/pf/faq-example2
OLD_FILES+=usr/share/examples/pf/faq-example3
OLD_FILES+=usr/share/examples/pf/pf.conf
OLD_FILES+=usr/share/examples/pf/queue1
OLD_FILES+=usr/share/examples/pf/queue2
OLD_FILES+=usr/share/examples/pf/queue3
OLD_FILES+=usr/share/examples/pf/queue4
OLD_FILES+=usr/share/examples/pf/spamd
OLD_FILES+=usr/share/examples/ppi/Makefile
OLD_FILES+=usr/share/examples/ppi/ppilcd.c
OLD_FILES+=usr/share/examples/ppp/chap-auth
OLD_FILES+=usr/share/examples/ppp/login-auth
OLD_FILES+=usr/share/examples/ppp/ppp.conf.sample
OLD_FILES+=usr/share/examples/ppp/ppp.conf.span-isp
OLD_FILES+=usr/share/examples/ppp/ppp.conf.span-isp.working
OLD_FILES+=usr/share/examples/ppp/ppp.linkdown.sample
OLD_FILES+=usr/share/examples/ppp/ppp.linkdown.span-isp
OLD_FILES+=usr/share/examples/ppp/ppp.linkdown.span-isp.working
OLD_FILES+=usr/share/examples/ppp/ppp.linkup.sample
OLD_FILES+=usr/share/examples/ppp/ppp.linkup.span-isp
OLD_FILES+=usr/share/examples/ppp/ppp.linkup.span-isp.working
OLD_FILES+=usr/share/examples/ppp/ppp.secret.sample
OLD_FILES+=usr/share/examples/ppp/ppp.secret.span-isp
OLD_FILES+=usr/share/examples/ppp/ppp.secret.span-isp.working
OLD_FILES+=usr/share/examples/printing/diablo-if-net
OLD_FILES+=usr/share/examples/printing/hpdf
OLD_FILES+=usr/share/examples/printing/hpif
OLD_FILES+=usr/share/examples/printing/hpof
OLD_FILES+=usr/share/examples/printing/hprf
OLD_FILES+=usr/share/examples/printing/hpvf
OLD_FILES+=usr/share/examples/printing/if-simple
OLD_FILES+=usr/share/examples/printing/if-simpleX
OLD_FILES+=usr/share/examples/printing/ifhp
OLD_FILES+=usr/share/examples/printing/make-ps-header
OLD_FILES+=usr/share/examples/printing/netprint
OLD_FILES+=usr/share/examples/printing/psdf
OLD_FILES+=usr/share/examples/printing/psdfX
OLD_FILES+=usr/share/examples/printing/psif
OLD_FILES+=usr/share/examples/printing/pstf
OLD_FILES+=usr/share/examples/printing/pstfX
OLD_FILES+=usr/share/examples/scsi_target/Makefile
OLD_FILES+=usr/share/examples/scsi_target/scsi_cmds.c
OLD_FILES+=usr/share/examples/scsi_target/scsi_target.8
OLD_FILES+=usr/share/examples/scsi_target/scsi_target.c
OLD_FILES+=usr/share/examples/scsi_target/scsi_target.h
OLD_FILES+=usr/share/examples/ses/Makefile
OLD_FILES+=usr/share/examples/ses/Makefile.inc
OLD_FILES+=usr/share/examples/ses/getencstat/Makefile
OLD_FILES+=usr/share/examples/ses/getencstat/getencstat.0
OLD_FILES+=usr/share/examples/ses/sesd/Makefile
OLD_FILES+=usr/share/examples/ses/sesd/sesd.0
OLD_FILES+=usr/share/examples/ses/setencstat/Makefile
OLD_FILES+=usr/share/examples/ses/setencstat/setencstat.0
OLD_FILES+=usr/share/examples/ses/setobjstat/Makefile
OLD_FILES+=usr/share/examples/ses/setobjstat/setobjstat.0
OLD_FILES+=usr/share/examples/ses/srcs/chpmon.c
OLD_FILES+=usr/share/examples/ses/srcs/eltsub.c
OLD_FILES+=usr/share/examples/ses/srcs/eltsub.h
OLD_FILES+=usr/share/examples/ses/srcs/getencstat.c
OLD_FILES+=usr/share/examples/ses/srcs/getnobj.c
OLD_FILES+=usr/share/examples/ses/srcs/getobjmap.c
OLD_FILES+=usr/share/examples/ses/srcs/getobjstat.c
OLD_FILES+=usr/share/examples/ses/srcs/inienc.c
OLD_FILES+=usr/share/examples/ses/srcs/sesd.c
OLD_FILES+=usr/share/examples/ses/srcs/setencstat.c
OLD_FILES+=usr/share/examples/ses/srcs/setobjstat.c
OLD_FILES+=usr/share/examples/smbfs/dot.nsmbrc
OLD_FILES+=usr/share/examples/smbfs/print/lj6l
OLD_FILES+=usr/share/examples/smbfs/print/ljspool
OLD_FILES+=usr/share/examples/smbfs/print/printcap.sample
OLD_FILES+=usr/share/examples/smbfs/print/tolj
OLD_FILES+=usr/share/examples/sunrpc/Makefile
OLD_FILES+=usr/share/examples/sunrpc/dir/Makefile
OLD_FILES+=usr/share/examples/sunrpc/dir/dir.x
OLD_FILES+=usr/share/examples/sunrpc/dir/dir_proc.c
OLD_FILES+=usr/share/examples/sunrpc/dir/rls.c
OLD_FILES+=usr/share/examples/sunrpc/msg/Makefile
OLD_FILES+=usr/share/examples/sunrpc/msg/msg.x
OLD_FILES+=usr/share/examples/sunrpc/msg/msg_proc.c
OLD_FILES+=usr/share/examples/sunrpc/msg/printmsg.c
OLD_FILES+=usr/share/examples/sunrpc/msg/rprintmsg.c
OLD_FILES+=usr/share/examples/sunrpc/sort/Makefile
OLD_FILES+=usr/share/examples/sunrpc/sort/rsort.c
OLD_FILES+=usr/share/examples/sunrpc/sort/sort.x
OLD_FILES+=usr/share/examples/sunrpc/sort/sort_proc.c
OLD_FILES+=usr/share/examples/tcsh/complete.tcsh
OLD_FILES+=usr/share/examples/tcsh/csh-mode.el
OLD_FILES+=usr/share/examples/uefisign/uefikeys
OLD_FILES+=usr/share/examples/ypldap/ypldap.conf
OLD_DIRS+=usr/share/examples
OLD_DIRS+=usr/share/examples/BSD_daemon
OLD_DIRS+=usr/share/examples/FreeBSD_version
OLD_DIRS+=usr/share/examples/IPv6
OLD_DIRS+=usr/share/examples/bhyve
OLD_DIRS+=usr/share/examples/bootforth
OLD_DIRS+=usr/share/examples/bsdconfig
OLD_DIRS+=usr/share/examples/csh
OLD_DIRS+=usr/share/examples/diskless
OLD_DIRS+=usr/share/examples/dma
OLD_DIRS+=usr/share/examples/drivers
OLD_DIRS+=usr/share/examples/dwatch
OLD_DIRS+=usr/share/examples/etc
OLD_DIRS+=usr/share/examples/etc/defaults
OLD_DIRS+=usr/share/examples/find_interface
OLD_DIRS+=usr/share/examples/hast
OLD_DIRS+=usr/share/examples/ibcs2
OLD_DIRS+=usr/share/examples/hostapd
OLD_DIRS+=usr/share/examples/indent
OLD_DIRS+=usr/share/examples/ipfilter
OLD_DIRS+=usr/share/examples/ipfw
OLD_DIRS+=usr/share/examples/jails
OLD_DIRS+=usr/share/examples/kld
OLD_DIRS+=usr/share/examples/kld/cdev
OLD_DIRS+=usr/share/examples/kld/cdev/module
OLD_DIRS+=usr/share/examples/kld/cdev/test
OLD_DIRS+=usr/share/examples/kld/dyn_sysctl
OLD_DIRS+=usr/share/examples/kld/firmware
OLD_DIRS+=usr/share/examples/kld/firmware/fwconsumer
OLD_DIRS+=usr/share/examples/kld/firmware/fwimage
OLD_DIRS+=usr/share/examples/kld/khelp
OLD_DIRS+=usr/share/examples/kld/syscall
OLD_DIRS+=usr/share/examples/kld/syscall/module
OLD_DIRS+=usr/share/examples/kld/syscall/test
OLD_DIRS+=usr/share/examples/libusb20
OLD_DIRS+=usr/share/examples/libvgl
OLD_DIRS+=usr/share/examples/mdoc
OLD_DIRS+=usr/share/examples/netgraph
OLD_DIRS+=usr/share/examples/pc-sysinstall
OLD_DIRS+=usr/share/examples/perfmon
OLD_DIRS+=usr/share/examples/pf
OLD_DIRS+=usr/share/examples/ppi
OLD_DIRS+=usr/share/examples/ppp
OLD_DIRS+=usr/share/examples/printing
OLD_DIRS+=usr/share/examples/scsi_target
OLD_DIRS+=usr/share/examples/ses
OLD_DIRS+=usr/share/examples/ses/getencstat
OLD_DIRS+=usr/share/examples/ses/sesd
OLD_DIRS+=usr/share/examples/ses/setencstat
OLD_DIRS+=usr/share/examples/ses/setobjstat
OLD_DIRS+=usr/share/examples/ses/srcs
OLD_DIRS+=usr/share/examples/smbfs
OLD_DIRS+=usr/share/examples/smbfs/print
OLD_DIRS+=usr/share/examples/sunrpc
OLD_DIRS+=usr/share/examples/sunrpc/dir
OLD_DIRS+=usr/share/examples/sunrpc/msg
OLD_DIRS+=usr/share/examples/sunrpc/sort
OLD_DIRS+=usr/share/examples/tcsh
OLD_DIRS+=usr/share/examples/uefisign
OLD_DIRS+=usr/share/examples/ypldap
.endif
.if ${MK_FINGER} == no
OLD_FILES+=usr/bin/finger
OLD_FILES+=usr/share/man/man1/finger.1.gz
OLD_FILES+=usr/share/man/man5/finger.conf.5.gz
OLD_FILES+=usr/libexec/fingerd
OLD_FILES+=usr/share/man/man8/fingerd.8.gz
.endif
.if ${MK_FLOPPY} == no
OLD_FILES+=usr/sbin/fdcontrol
OLD_FILES+=usr/sbin/fdformat
OLD_FILES+=usr/sbin/fdread
OLD_FILES+=usr/sbin/fdwrite
OLD_FILES+=usr/share/man/man1/fdformat.1.gz
OLD_FILES+=usr/share/man/man1/fdread.1.gz
OLD_FILES+=usr/share/man/man1/fdwrite.1.gz
OLD_FILES+=usr/share/man/man8/fdcontrol.8.gz
.endif
.if ${MK_FORTH} == no
OLD_FILES+=usr/share/man/man8/beastie.4th.8.gz
OLD_FILES+=usr/share/man/man8/brand.4th.8.gz
OLD_FILES+=usr/share/man/man8/check-password.4th.8.gz
OLD_FILES+=usr/share/man/man8/color.4th.8.gz
OLD_FILES+=usr/share/man/man8/delay.4th.8.gz
OLD_FILES+=usr/share/man/man8/loader.4th.8.gz
OLD_FILES+=usr/share/man/man8/menu.4th.8.gz
OLD_FILES+=usr/share/man/man8/menusets.4th.8.gz
OLD_FILES+=usr/share/man/man8/version.4th.8.gz
.endif
.if ${MK_FREEBSD_UPDATE} == no
OLD_FILES+=etc/freebsd-update.conf
OLD_FILES+=usr/sbin/freebsd-update
OLD_FILES+=usr/share/examples/etc/freebsd-update.conf
OLD_FILES+=usr/share/man/man5/freebsd-update.conf.5.gz
OLD_FILES+=usr/share/man/man8/freebsd-update.8.gz
.endif
.if ${MK_GAMES} == no
OLD_FILES+=usr/bin/caesar
OLD_FILES+=usr/bin/factor
OLD_FILES+=usr/bin/fortune
OLD_FILES+=usr/bin/grdc
OLD_FILES+=usr/bin/morse
OLD_FILES+=usr/bin/number
OLD_FILES+=usr/bin/pom
OLD_FILES+=usr/bin/primes
OLD_FILES+=usr/bin/random
OLD_FILES+=usr/bin/rot13
OLD_FILES+=usr/bin/strfile
OLD_FILES+=usr/bin/unstr
OLD_FILES+=usr/share/games/fortune/fortunes
OLD_FILES+=usr/share/games/fortune/fortunes.dat
OLD_FILES+=usr/share/games/fortune/freebsd-tips
OLD_FILES+=usr/share/games/fortune/freebsd-tips.dat
OLD_FILES+=usr/share/games/fortune/gerrold.limerick
OLD_FILES+=usr/share/games/fortune/gerrold.limerick.dat
OLD_FILES+=usr/share/games/fortune/limerick
OLD_FILES+=usr/share/games/fortune/limerick.dat
OLD_FILES+=usr/share/games/fortune/murphy
OLD_FILES+=usr/share/games/fortune/murphy-o
OLD_FILES+=usr/share/games/fortune/murphy-o.dat
OLD_FILES+=usr/share/games/fortune/murphy.dat
OLD_FILES+=usr/share/games/fortune/startrek
OLD_FILES+=usr/share/games/fortune/startrek.dat
OLD_FILES+=usr/share/games/fortune/zippy
OLD_FILES+=usr/share/games/fortune/zippy.dat
OLD_DIRS+=usr/share/games/fortune
OLD_DIRS+=usr/share/games
OLD_FILES+=usr/share/man/man6/caesar.6.gz
OLD_FILES+=usr/share/man/man6/factor.6.gz
OLD_FILES+=usr/share/man/man6/fortune.6.gz
OLD_FILES+=usr/share/man/man6/grdc.6.gz
OLD_FILES+=usr/share/man/man6/morse.6.gz
OLD_FILES+=usr/share/man/man6/number.6.gz
OLD_FILES+=usr/share/man/man6/pom.6.gz
OLD_FILES+=usr/share/man/man6/primes.6.gz
OLD_FILES+=usr/share/man/man6/random.6.gz
OLD_FILES+=usr/share/man/man6/rot13.6.gz
OLD_FILES+=usr/share/man/man8/strfile.8.gz
OLD_FILES+=usr/share/man/man8/unstr.8.gz
.endif
.if ${MK_GCC} == no
OLD_FILES+=usr/bin/g++
OLD_FILES+=usr/bin/gcc
OLD_FILES+=usr/bin/gcpp
OLD_FILES+=usr/bin/gperf
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "i386"
OLD_FILES+=usr/include/gcc/4.2/__wmmintrin_aes.h
OLD_FILES+=usr/include/gcc/4.2/__wmmintrin_pclmul.h
OLD_FILES+=usr/include/gcc/4.2/ammintrin.h
OLD_FILES+=usr/include/gcc/4.2/emmintrin.h
OLD_FILES+=usr/include/gcc/4.2/mm3dnow.h
OLD_FILES+=usr/include/gcc/4.2/mm_malloc.h
OLD_FILES+=usr/include/gcc/4.2/mmintrin.h
OLD_FILES+=usr/include/gcc/4.2/pmmintrin.h
OLD_FILES+=usr/include/gcc/4.2/tmmintrin.h
OLD_FILES+=usr/include/gcc/4.2/wmmintrin.h
OLD_FILES+=usr/include/gcc/4.2/xmmintrin.h
.elif ${TARGET_ARCH} == "arm"
OLD_FILES+=usr/include/gcc/4.2/mmintrin.h
.elif ${TARGET_ARCH} == "powerpc" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/include/gcc/4.2/altivec.h
OLD_FILES+=usr/include/gcc/4.2/ppc-asm.h
OLD_FILES+=usr/include/gcc/4.2/spe.h
.endif
OLD_FILES+=usr/include/omp.h
OLD_FILES+=usr/lib/libgcov.a
OLD_FILES+=usr/lib/libgomp.a
OLD_FILES+=usr/lib/libgomp.so
OLD_LIBS+=usr/lib/libgomp.so.1
OLD_FILES+=usr/lib/libgomp_p.a
OLD_FILES+=usr/lib32/libgcov.a
OLD_FILES+=usr/lib32/libgomp.a
OLD_FILES+=usr/lib32/libgomp.so
OLD_LIBS+=usr/lib32/libgomp.so.1
OLD_FILES+=usr/lib32/libgomp_p.a
OLD_FILES+=usr/libexec/cc1
OLD_FILES+=usr/libexec/cc1plus
OLD_FILES+=usr/share/man/man1/g++.1.gz
OLD_FILES+=usr/share/man/man1/gcc.1.gz
OLD_FILES+=usr/share/man/man1/gcpp.1.gz
OLD_FILES+=usr/share/man/man1/gperf.1.gz
OLD_FILES+=usr/share/man/man1/gperf.7.gz
.endif
.if (${MK_GCOV} == no || ${MK_GCC} == no) && ${MK_LLVM_COV} == no
OLD_FILES+=usr/bin/gcov
OLD_FILES+=usr/share/man/man1/gcov.1.gz
.endif
.if ${MK_LLVM_COV} == no
OLD_FILES+=usr/bin/llvm-cov
OLD_FILES+=usr/bin/llvm-profdata
OLD_FILES+=usr/share/man/man1/llvm-cov.1.gz
.endif
.if ${MK_GDB} == no || ${MK_GDB_LIBEXEC} == yes
OLD_FILES+=usr/bin/gdb
OLD_FILES+=usr/bin/gdbserver
OLD_FILES+=usr/bin/kgdb
OLD_FILES+=usr/share/man/man1/gdb.1.gz
OLD_FILES+=usr/share/man/man1/gdbserver.1.gz
OLD_FILES+=usr/share/man/man1/kgdb.1.gz
.endif
.if ${MK_GDB} == no || ${MK_GDB_LIBEXEC} == no
OLD_FILES+=usr/libexec/gdb
OLD_FILES+=usr/libexec/kgdb
.endif
.if ${MK_GPIO} == no
OLD_FILES+=usr/include/libgpio.h
OLD_FILES+=usr/lib/libgpio.a
OLD_FILES+=usr/lib/libgpio.so
OLD_LIBS+=usr/lib/libgpio.so.0
OLD_FILES+=usr/lib/libgpio_p.a
OLD_FILES+=usr/lib32/libgpio.a
OLD_FILES+=usr/lib32/libgpio.so
OLD_LIBS+=usr/lib32/libgpio.so.0
OLD_FILES+=usr/lib32/libgpio_p.a
OLD_FILES+=usr/sbin/gpioctl
OLD_FILES+=usr/share/man/man3/gpio.3.gz
OLD_FILES+=usr/share/man/man3/gpio_close.3.gz
OLD_FILES+=usr/share/man/man3/gpio_open.3.gz
OLD_FILES+=usr/share/man/man3/gpio_open_device.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_config.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_get.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_high.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_input.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_invin.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_invout.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_list.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_low.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_opendrain.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_output.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_pulldown.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_pullup.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_pulsate.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_pushpull.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_set.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_set_flags.3.gz
OLD_FILES+=usr/share/man/man3/gpio_pin_tristate.3.gz
OLD_FILES+=usr/share/man/man8/gpioctl.8.gz
.endif
.if ${MK_GNU_DIFF} == no
OLD_FILES+=usr/bin/diff3
OLD_FILES+=usr/share/man/man1/diff3.1.gz
.endif
.if ${MK_GNU_GREP} == no
OLD_FILES+=usr/bin/gnugrep
OLD_FILES+=usr/share/man/man1/gnugrep.1.gz
.if ${MK_BSD_GREP} == no
OLD_FILES+=usr/bin/bzgrep
OLD_FILES+=usr/bin/bzegrep
OLD_FILES+=usr/bin/bzfgrep
OLD_FILES+=usr/bin/egrep
OLD_FILES+=usr/bin/fgrep
OLD_FILES+=usr/bin/grep
OLD_FILES+=usr/bin/zegrep
OLD_FILES+=usr/bin/zfgrep
OLD_FILES+=usr/bin/zgrep
OLD_FILES+=usr/share/man/man1/bzegrep.1.gz
OLD_FILES+=usr/share/man/man1/bzfgrep.1.gz
OLD_FILES+=usr/share/man/man1/bzgrep.1.gz
OLD_FILES+=usr/share/man/man1/egrep.1.gz
OLD_FILES+=usr/share/man/man1/fgrep.1.gz
OLD_FILES+=usr/share/man/man1/grep.1.gz
OLD_FILES+=usr/share/man/man1/zegrep.1.gz
OLD_FILES+=usr/share/man/man1/zfgrep.1.gz
OLD_FILES+=usr/share/man/man1/zgrep.1.gz
.endif
.endif
.if ${MK_GSSAPI} == no
OLD_FILES+=usr/include/gssapi/gssapi.h
OLD_DIRS+=usr/include/gssapi
OLD_FILES+=usr/include/gssapi.h
OLD_FILES+=usr/lib/libgssapi.a
OLD_FILES+=usr/lib/libgssapi.so
OLD_LIBS+=usr/lib/libgssapi.so.10
OLD_FILES+=usr/lib/libgssapi_p.a
OLD_FILES+=usr/lib/librpcsec_gss.a
OLD_FILES+=usr/lib/librpcsec_gss.so
OLD_LIBS+=usr/lib/librpcsec_gss.so.1
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libgssapi.a
OLD_FILES+=usr/lib32/libgssapi.so
OLD_LIBS+=usr/lib32/libgssapi.so.10
OLD_FILES+=usr/lib32/libgssapi_p.a
OLD_FILES+=usr/lib32/librpcsec_gss.a
OLD_FILES+=usr/lib32/librpcsec_gss.so
OLD_LIBS+=usr/lib32/librpcsec_gss.so.1
.endif
OLD_FILES+=usr/sbin/gssd
OLD_FILES+=usr/share/man/man3/gss_accept_sec_context.3.gz
OLD_FILES+=usr/share/man/man3/gss_acquire_cred.3.gz
OLD_FILES+=usr/share/man/man3/gss_add_cred.3.gz
OLD_FILES+=usr/share/man/man3/gss_add_oid_set_member.3.gz
OLD_FILES+=usr/share/man/man3/gss_canonicalize_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_compare_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_context_time.3.gz
OLD_FILES+=usr/share/man/man3/gss_create_empty_oid_set.3.gz
OLD_FILES+=usr/share/man/man3/gss_delete_sec_context.3.gz
OLD_FILES+=usr/share/man/man3/gss_display_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_display_status.3.gz
OLD_FILES+=usr/share/man/man3/gss_duplicate_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_export_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_export_sec_context.3.gz
OLD_FILES+=usr/share/man/man3/gss_get_mic.3.gz
OLD_FILES+=usr/share/man/man3/gss_import_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_import_sec_context.3.gz
OLD_FILES+=usr/share/man/man3/gss_indicate_mechs.3.gz
OLD_FILES+=usr/share/man/man3/gss_init_sec_context.3.gz
OLD_FILES+=usr/share/man/man3/gss_inquire_context.3.gz
OLD_FILES+=usr/share/man/man3/gss_inquire_cred.3.gz
OLD_FILES+=usr/share/man/man3/gss_inquire_cred_by_mech.3.gz
OLD_FILES+=usr/share/man/man3/gss_inquire_mechs_for_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_inquire_names_for_mech.3.gz
OLD_FILES+=usr/share/man/man3/gss_process_context_token.3.gz
OLD_FILES+=usr/share/man/man3/gss_release_buffer.3.gz
OLD_FILES+=usr/share/man/man3/gss_release_cred.3.gz
OLD_FILES+=usr/share/man/man3/gss_release_name.3.gz
OLD_FILES+=usr/share/man/man3/gss_release_oid_set.3.gz
OLD_FILES+=usr/share/man/man3/gss_seal.3.gz
OLD_FILES+=usr/share/man/man3/gss_sign.3.gz
OLD_FILES+=usr/share/man/man3/gss_test_oid_set_member.3.gz
OLD_FILES+=usr/share/man/man3/gss_unseal.3.gz
OLD_FILES+=usr/share/man/man3/gss_unwrap.3.gz
OLD_FILES+=usr/share/man/man3/gss_verify.3.gz
OLD_FILES+=usr/share/man/man3/gss_verify_mic.3.gz
OLD_FILES+=usr/share/man/man3/gss_wrap.3.gz
OLD_FILES+=usr/share/man/man3/gss_wrap_size_limit.3.gz
OLD_FILES+=usr/share/man/man3/gssapi.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_get_error.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_get_mech_info.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_get_mechanisms.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_get_principal_name.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_get_versions.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_getcred.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_is_installed.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_max_data_length.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_mech_to_oid.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_oid_to_mech.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_qop_to_num.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_seccreate.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_set_callback.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_set_defaults.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_set_svc_name.3.gz
OLD_FILES+=usr/share/man/man3/rpc_gss_svc_max_data_length.3.gz
OLD_FILES+=usr/share/man/man3/rpcsec_gss.3.gz
OLD_FILES+=usr/share/man/man5/mech.5.gz
OLD_FILES+=usr/share/man/man5/qop.5.gz
OLD_FILES+=usr/share/man/man8/gssd.8.gz
.endif
.if ${MK_HAST} == no
OLD_FILES+=sbin/hastctl
OLD_FILES+=sbin/hastd
OLD_FILES+=usr/share/examples/hast/ucarp.sh
OLD_FILES+=usr/share/examples/hast/ucarp_down.sh
OLD_FILES+=usr/share/examples/hast/ucarp_up.sh
OLD_FILES+=usr/share/examples/hast/vip-down.sh
OLD_FILES+=usr/share/examples/hast/vip-up.sh
OLD_FILES+=usr/share/man/man5/hast.conf.5.gz
OLD_FILES+=usr/share/man/man8/hastctl.8.gz
OLD_FILES+=usr/share/man/man8/hastd.8.gz
OLD_DIRS+=usr/share/examples/hast
# bsnmp
OLD_FILES+=usr/lib/snmp_hast.so
OLD_LIBS+=usr/lib/snmp_hast.so.6
OLD_FILES+=usr/share/man/man3/snmp_hast.3.gz
OLD_FILES+=usr/share/snmp/defs/hast_tree.def
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-HAST-MIB.txt
.endif
.if ${MK_HESIOD} == no
OLD_FILES+=usr/bin/hesinfo
OLD_FILES+=usr/include/hesiod.h
OLD_FILES+=usr/share/man/man1/hesinfo.1.gz
OLD_FILES+=usr/share/man/man3/hesiod.3.gz
OLD_FILES+=usr/share/man/man5/hesiod.conf.5.gz
.endif
.if ${MK_HTML} == no
OLD_FILES+=usr/share/doc/ncurses/hackguide.html
OLD_FILES+=usr/share/doc/ncurses/ncurses-intro.html
OLD_DIRS+=usr/share/doc/ncurses
OLD_FILES+=usr/share/doc/ntp/accopt.html
OLD_FILES+=usr/share/doc/ntp/assoc.html
OLD_FILES+=usr/share/doc/ntp/audio.html
OLD_FILES+=usr/share/doc/ntp/authopt.html
OLD_FILES+=usr/share/doc/ntp/build.html
OLD_FILES+=usr/share/doc/ntp/clockopt.html
OLD_FILES+=usr/share/doc/ntp/config.html
OLD_FILES+=usr/share/doc/ntp/confopt.html
OLD_FILES+=usr/share/doc/ntp/copyright.html
OLD_FILES+=usr/share/doc/ntp/debug.html
OLD_FILES+=usr/share/doc/ntp/driver1.html
OLD_FILES+=usr/share/doc/ntp/driver10.html
OLD_FILES+=usr/share/doc/ntp/driver11.html
OLD_FILES+=usr/share/doc/ntp/driver12.html
OLD_FILES+=usr/share/doc/ntp/driver16.html
OLD_FILES+=usr/share/doc/ntp/driver18.html
OLD_FILES+=usr/share/doc/ntp/driver19.html
OLD_FILES+=usr/share/doc/ntp/driver2.html
OLD_FILES+=usr/share/doc/ntp/driver20.html
OLD_FILES+=usr/share/doc/ntp/driver22.html
OLD_FILES+=usr/share/doc/ntp/driver26.html
OLD_FILES+=usr/share/doc/ntp/driver27.html
OLD_FILES+=usr/share/doc/ntp/driver28.html
OLD_FILES+=usr/share/doc/ntp/driver29.html
OLD_FILES+=usr/share/doc/ntp/driver3.html
OLD_FILES+=usr/share/doc/ntp/driver30.html
OLD_FILES+=usr/share/doc/ntp/driver32.html
OLD_FILES+=usr/share/doc/ntp/driver33.html
OLD_FILES+=usr/share/doc/ntp/driver34.html
OLD_FILES+=usr/share/doc/ntp/driver35.html
OLD_FILES+=usr/share/doc/ntp/driver36.html
OLD_FILES+=usr/share/doc/ntp/driver37.html
OLD_FILES+=usr/share/doc/ntp/driver4.html
OLD_FILES+=usr/share/doc/ntp/driver5.html
OLD_FILES+=usr/share/doc/ntp/driver6.html
OLD_FILES+=usr/share/doc/ntp/driver7.html
OLD_FILES+=usr/share/doc/ntp/driver8.html
OLD_FILES+=usr/share/doc/ntp/driver9.html
OLD_FILES+=usr/share/doc/ntp/extern.html
OLD_FILES+=usr/share/doc/ntp/hints.html
OLD_FILES+=usr/share/doc/ntp/howto.html
OLD_FILES+=usr/share/doc/ntp/index.html
OLD_FILES+=usr/share/doc/ntp/kern.html
OLD_FILES+=usr/share/doc/ntp/ldisc.html
OLD_FILES+=usr/share/doc/ntp/measure.html
OLD_FILES+=usr/share/doc/ntp/miscopt.html
OLD_FILES+=usr/share/doc/ntp/monopt.html
OLD_FILES+=usr/share/doc/ntp/mx4200data.html
OLD_FILES+=usr/share/doc/ntp/notes.html
OLD_FILES+=usr/share/doc/ntp/ntpd.html
OLD_FILES+=usr/share/doc/ntp/ntpdate.html
OLD_FILES+=usr/share/doc/ntp/ntpdc.html
OLD_FILES+=usr/share/doc/ntp/ntpq.html
OLD_FILES+=usr/share/doc/ntp/ntptime.html
OLD_FILES+=usr/share/doc/ntp/ntptrace.html
OLD_FILES+=usr/share/doc/ntp/parsedata.html
OLD_FILES+=usr/share/doc/ntp/parsenew.html
OLD_FILES+=usr/share/doc/ntp/patches.html
OLD_FILES+=usr/share/doc/ntp/porting.html
OLD_FILES+=usr/share/doc/ntp/pps.html
OLD_FILES+=usr/share/doc/ntp/prefer.html
OLD_FILES+=usr/share/doc/ntp/quick.html
OLD_FILES+=usr/share/doc/ntp/rdebug.html
OLD_FILES+=usr/share/doc/ntp/refclock.html
OLD_FILES+=usr/share/doc/ntp/release.html
OLD_FILES+=usr/share/doc/ntp/tickadj.html
.endif
.if ${MK_ICONV} == no
OLD_FILES+=usr/bin/iconv
OLD_FILES+=usr/bin/mkcsmapper
OLD_FILES+=usr/bin/mkesdb
OLD_FILES+=usr/include/_libiconv_compat.h
OLD_FILES+=usr/include/iconv.h
OLD_FILES+=usr/share/man/man1/iconv.1.gz
OLD_FILES+=usr/share/man/man1/mkcsmapper.1.gz
OLD_FILES+=usr/share/man/man1/mkesdb.1.gz
OLD_FILES+=usr/share/man/man3/__iconv.3.gz
OLD_FILES+=usr/share/man/man3/__iconv_free_list.3.gz
OLD_FILES+=usr/share/man/man3/__iconv_get_list.3.gz
OLD_FILES+=usr/share/man/man3/iconv.3.gz
OLD_FILES+=usr/share/man/man3/iconv_canonicalize.3.gz
OLD_FILES+=usr/share/man/man3/iconv_close.3.gz
OLD_FILES+=usr/share/man/man3/iconv_open.3.gz
OLD_FILES+=usr/share/man/man3/iconv_open_into.3.gz
OLD_FILES+=usr/share/man/man3/iconvctl.3.gz
OLD_FILES+=usr/share/man/man3/iconvlist.3.gz
OLD_DIRS+=usr/share/i18n
OLD_DIRS+=usr/share/i18n/esdb
OLD_DIRS+=usr/share/i18n/esdb/ISO-2022
OLD_DIRS+=usr/share/i18n/esdb/BIG5
OLD_DIRS+=usr/share/i18n/esdb/MISC
OLD_DIRS+=usr/share/i18n/esdb/TCVN
OLD_DIRS+=usr/share/i18n/esdb/EBCDIC
OLD_DIRS+=usr/share/i18n/esdb/ISO-8859
OLD_DIRS+=usr/share/i18n/esdb/GEORGIAN
OLD_DIRS+=usr/share/i18n/esdb/AST
OLD_DIRS+=usr/share/i18n/esdb/KAZAKH
OLD_DIRS+=usr/share/i18n/esdb/APPLE
OLD_DIRS+=usr/share/i18n/esdb/EUC
OLD_DIRS+=usr/share/i18n/esdb/CP
OLD_DIRS+=usr/share/i18n/esdb/DEC
OLD_DIRS+=usr/share/i18n/esdb/UTF
OLD_DIRS+=usr/share/i18n/esdb/GB
OLD_DIRS+=usr/share/i18n/esdb/ISO646
OLD_DIRS+=usr/share/i18n/esdb/KOI
OLD_DIRS+=usr/share/i18n/csmapper
OLD_DIRS+=usr/share/i18n/csmapper/KAZAKH
OLD_DIRS+=usr/share/i18n/csmapper/CNS
OLD_DIRS+=usr/share/i18n/csmapper/BIG5
OLD_DIRS+=usr/share/i18n/csmapper/JIS
OLD_DIRS+=usr/share/i18n/csmapper/KOI
OLD_DIRS+=usr/share/i18n/csmapper/TCVN
OLD_DIRS+=usr/share/i18n/csmapper/MISC
OLD_DIRS+=usr/share/i18n/csmapper/EBCDIC
OLD_DIRS+=usr/share/i18n/csmapper/ISO646
OLD_DIRS+=usr/share/i18n/csmapper/CP
OLD_DIRS+=usr/share/i18n/csmapper/GEORGIAN
OLD_DIRS+=usr/share/i18n/csmapper/ISO-8859
OLD_DIRS+=usr/share/i18n/csmapper/AST
OLD_DIRS+=usr/share/i18n/csmapper/APPLE
OLD_DIRS+=usr/share/i18n/csmapper/KS
OLD_DIRS+=usr/share/i18n/csmapper/GB
.endif
.if ${MK_INET6} == no
OLD_FILES+=sbin/ping6
OLD_FILES+=sbin/rtsol
OLD_FILES+=usr/sbin/ip6addrctl
OLD_FILES+=usr/sbin/mld6query
OLD_FILES+=usr/sbin/ndp
OLD_FILES+=usr/sbin/rip6query
OLD_FILES+=usr/sbin/route6d
OLD_FILES+=usr/sbin/rrenumd
OLD_FILES+=usr/sbin/rtadvctl
OLD_FILES+=usr/sbin/rtadvd
OLD_FILES+=usr/sbin/rtsold
OLD_FILES+=usr/sbin/traceroute6
OLD_FILES+=usr/share/doc/IPv6/IMPLEMENTATION
OLD_FILES+=usr/share/man/man5/rrenumd.conf.5.gz
OLD_FILES+=usr/share/man/man5/rtadvd.conf.5.gz
OLD_FILES+=usr/share/man/man8/ip6addrctl.8.gz
OLD_FILES+=usr/share/man/man8/mld6query.8.gz
OLD_FILES+=usr/share/man/man8/ndp.8.gz
OLD_FILES+=usr/share/man/man8/ping6.8.gz
OLD_FILES+=usr/share/man/man8/rip6query.8.gz
OLD_FILES+=usr/share/man/man8/route6d.8.gz
OLD_FILES+=usr/share/man/man8/rrenumd.8.gz
OLD_FILES+=usr/share/man/man8/rtadvctl.8.gz
OLD_FILES+=usr/share/man/man8/rtadvd.8.gz
OLD_FILES+=usr/share/man/man8/rtsol.8.gz
OLD_FILES+=usr/share/man/man8/rtsold.8.gz
OLD_FILES+=usr/share/man/man8/traceroute6.8.gz
.endif
.if ${MK_INET6_SUPPORT} == no
OLD_FILES+=rescue/ping6
OLD_FILES+=rescue/rtsol
.endif
.if ${MK_INETD} == no
OLD_FILES+=etc/rc.d/inetd
OLD_FILES+=usr/sbin/inetd
OLD_FILES+=usr/share/man/man5/inetd.conf.5.gz
OLD_FILES+=usr/share/man/man8/inetd.8.gz
.endif
.if ${MK_IPFILTER} == no
OLD_FILES+=etc/periodic/security/510.ipfdenied
OLD_FILES+=etc/periodic/security/610.ipf6denied
OLD_FILES+=rescue/ipf
OLD_FILES+=sbin/ipf
OLD_FILES+=sbin/ipfs
OLD_FILES+=sbin/ipfstat
OLD_FILES+=sbin/ipftest
OLD_FILES+=sbin/ipmon
OLD_FILES+=sbin/ipnat
OLD_FILES+=sbin/ippool
OLD_FILES+=sbin/ipresend
OLD_FILES+=usr/include/netinet/ip_auth.h
OLD_FILES+=usr/include/netinet/ip_compat.h
OLD_FILES+=usr/include/netinet/ip_fil.h
OLD_FILES+=usr/include/netinet/ip_frag.h
OLD_FILES+=usr/include/netinet/ip_htable.h
OLD_FILES+=usr/include/netinet/ip_lookup.h
OLD_FILES+=usr/include/netinet/ip_nat.h
OLD_FILES+=usr/include/netinet/ip_pool.h
OLD_FILES+=usr/include/netinet/ip_proxy.h
OLD_FILES+=usr/include/netinet/ip_rules.h
OLD_FILES+=usr/include/netinet/ip_scan.h
OLD_FILES+=usr/include/netinet/ip_state.h
OLD_FILES+=usr/include/netinet/ip_sync.h
OLD_FILES+=usr/include/netinet/ipl.h
OLD_FILES+=usr/share/examples/ipfilter/README
OLD_FILES+=usr/share/examples/ipfilter/BASIC.NAT
OLD_FILES+=usr/share/examples/ipfilter/BASIC_1.FW
OLD_FILES+=usr/share/examples/ipfilter/BASIC_2.FW
OLD_FILES+=usr/share/examples/ipfilter/example.1
OLD_FILES+=usr/share/examples/ipfilter/example.2
OLD_FILES+=usr/share/examples/ipfilter/example.3
OLD_FILES+=usr/share/examples/ipfilter/example.4
OLD_FILES+=usr/share/examples/ipfilter/example.5
OLD_FILES+=usr/share/examples/ipfilter/example.6
OLD_FILES+=usr/share/examples/ipfilter/example.7
OLD_FILES+=usr/share/examples/ipfilter/example.8
OLD_FILES+=usr/share/examples/ipfilter/example.9
OLD_FILES+=usr/share/examples/ipfilter/example.10
OLD_FILES+=usr/share/examples/ipfilter/example.11
OLD_FILES+=usr/share/examples/ipfilter/example.12
OLD_FILES+=usr/share/examples/ipfilter/example.13
OLD_FILES+=usr/share/examples/ipfilter/example.sr
OLD_FILES+=usr/share/examples/ipfilter/firewall
OLD_FILES+=usr/share/examples/ipfilter/ftp-proxy
OLD_FILES+=usr/share/examples/ipfilter/ftppxy
OLD_FILES+=usr/share/examples/ipfilter/nat-setup
OLD_FILES+=usr/share/examples/ipfilter/nat.eg
OLD_FILES+=usr/share/examples/ipfilter/server
OLD_FILES+=usr/share/examples/ipfilter/tcpstate
OLD_FILES+=usr/share/examples/ipfilter/example.14
OLD_FILES+=usr/share/examples/ipfilter/firewall.1
OLD_FILES+=usr/share/examples/ipfilter/firewall.2
OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.permissive
OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.restrictive
OLD_FILES+=usr/share/examples/ipfilter/ipf.conf.sample
OLD_FILES+=usr/share/examples/ipfilter/ipnat.conf.sample
OLD_FILES+=usr/share/examples/ipfilter/ipf-howto.txt
OLD_FILES+=usr/share/examples/ipfilter/examples.txt
OLD_FILES+=usr/share/examples/ipfilter/rules.txt
OLD_FILES+=usr/share/examples/ipfilter/mkfilters
OLD_DIRS+=usr/share/examples/ipfilter
OLD_FILES+=usr/share/man/man1/ipftest.1.gz
OLD_FILES+=usr/share/man/man1/ipresend.1.gz
OLD_FILES+=usr/share/man/man4/ipf.4.gz
OLD_FILES+=usr/share/man/man4/ipl.4.gz
OLD_FILES+=usr/share/man/man4/ipfilter.4.gz
OLD_FILES+=usr/share/man/man4/ipnat.4.gz
OLD_FILES+=usr/share/man/man5/ipf.5.gz
OLD_FILES+=usr/share/man/man5/ipf.conf.5.gz
OLD_FILES+=usr/share/man/man5/ipf6.conf.5.gz
OLD_FILES+=usr/share/man/man5/ipnat.5.gz
OLD_FILES+=usr/share/man/man5/ipnat.conf.5.gz
OLD_FILES+=usr/share/man/man5/ippool.5.gz
OLD_FILES+=usr/share/man/man8/ipf.8.gz
OLD_FILES+=usr/share/man/man8/ipfs.8.gz
OLD_FILES+=usr/share/man/man8/ipfstat.8.gz
OLD_FILES+=usr/share/man/man8/ipmon.8.gz
OLD_FILES+=usr/share/man/man8/ipnat.8.gz
OLD_FILES+=usr/share/man/man8/ippool.8.gz
.endif
.if ${MK_IPFW} == no
OLD_FILES+=etc/periodic/security/500.ipfwdenied
OLD_FILES+=etc/periodic/security/550.ipfwlimit
OLD_FILES+=sbin/ipfw
OLD_FILES+=sbin/natd
OLD_FILES+=usr/sbin/ipfwpcap
OLD_FILES+=usr/share/man/man8/ipfw.8.gz
OLD_FILES+=usr/share/man/man8/ipfwpcap.8.gz
OLD_FILES+=usr/share/man/man8/natd.8.gz
.endif
.if ${MK_ISCSI} == no
OLD_FILES+=etc/rc.d/iscsictl
OLD_FILES+=etc/rc.d/iscsid
OLD_FILES+=rescue/iscsictl
OLD_FILES+=rescue/iscsid
OLD_FILES+=sbin/iscontrol
OLD_FILES+=usr/bin/iscsictl
OLD_FILES+=usr/sbin/iscsid
OLD_FILES+=usr/share/man/man4/iscsi.4.gz
OLD_FILES+=usr/share/man/man4/iscsi_initiator.4.gz
OLD_FILES+=usr/share/man/man5/iscsi.conf.5.gz
OLD_FILES+=usr/share/man/man8/iscontrol.8.gz
OLD_FILES+=usr/share/man/man8/iscsictl.8.gz
OLD_FILES+=usr/share/man/man8/iscsid.8.gz
.endif
.if ${MK_JAIL} == no
OLD_FILES+=etc/rc.d/jail
OLD_FILES+=usr/sbin/jail
OLD_FILES+=usr/sbin/jexec
OLD_FILES+=usr/sbin/jls
OLD_FILES+=usr/share/man/man5/jail.conf.5.gz
OLD_FILES+=usr/share/man/man8/jail.8.gz
OLD_FILES+=usr/share/man/man8/jexec.8.gz
OLD_FILES+=usr/share/man/man8/jls.8.gz
.endif
.if ${MK_KDUMP} == no
OLD_FILES+=usr/bin/kdump
OLD_FILES+=usr/bin/truss
OLD_FILES+=usr/share/man/man1/kdump.1.gz
OLD_FILES+=usr/share/man/man1/truss.1.gz
.endif
.if ${MK_KERBEROS} == no
OLD_FILES+=etc/rc.d/ipropd_master
OLD_FILES+=etc/rc.d/ipropd_slave
OLD_FILES+=usr/bin/asn1_compile
OLD_FILES+=usr/bin/compile_et
OLD_FILES+=usr/bin/hxtool
OLD_FILES+=usr/bin/kadmin
OLD_FILES+=usr/bin/kcc
OLD_FILES+=usr/bin/kdestroy
OLD_FILES+=usr/bin/kf
OLD_FILES+=usr/bin/kgetcred
OLD_FILES+=usr/bin/kinit
OLD_FILES+=usr/bin/klist
OLD_FILES+=usr/bin/kpasswd
OLD_FILES+=usr/bin/krb5-config
OLD_FILES+=usr/bin/ksu
OLD_FILES+=usr/bin/kswitch
OLD_FILES+=usr/bin/make-roken
OLD_FILES+=usr/bin/slc
OLD_FILES+=usr/bin/string2key
OLD_FILES+=usr/bin/verify_krb5_conf
OLD_FILES+=usr/include/asn1-common.h
OLD_FILES+=usr/include/asn1_err.h
OLD_FILES+=usr/include/base64.h
OLD_FILES+=usr/include/cms_asn1.h
OLD_FILES+=usr/include/crmf_asn1.h
OLD_FILES+=usr/include/der-private.h
OLD_FILES+=usr/include/der-protos.h
OLD_FILES+=usr/include/der.h
OLD_FILES+=usr/include/digest_asn1.h
OLD_FILES+=usr/include/getarg.h
OLD_FILES+=usr/include/gssapi/gssapi_krb5.h
OLD_FILES+=usr/include/hdb-protos.h
OLD_FILES+=usr/include/hdb.h
OLD_FILES+=usr/include/hdb_asn1.h
OLD_FILES+=usr/include/hdb_err.h
OLD_FILES+=usr/include/heim_asn1.h
OLD_FILES+=usr/include/heim_err.h
OLD_FILES+=usr/include/heim_threads.h
OLD_FILES+=usr/include/heimbase.h
OLD_FILES+=usr/include/heimntlm-protos.h
OLD_FILES+=usr/include/heimntlm.h
OLD_FILES+=usr/include/hex.h
OLD_FILES+=usr/include/hx509-private.h
OLD_FILES+=usr/include/hx509-protos.h
OLD_FILES+=usr/include/hx509.h
OLD_FILES+=usr/include/hx509_err.h
OLD_FILES+=usr/include/k524_err.h
OLD_FILES+=usr/include/kadm5/admin.h
OLD_FILES+=usr/include/kadm5/kadm5-private.h
OLD_FILES+=usr/include/kadm5/kadm5-protos.h
OLD_FILES+=usr/include/kadm5/kadm5-pwcheck.h
OLD_FILES+=usr/include/kadm5/kadm5_err.h
OLD_FILES+=usr/include/kadm5/private.h
OLD_DIRS+=usr/include/kadm5
OLD_FILES+=usr/include/kafs.h
OLD_FILES+=usr/include/kdc-protos.h
OLD_FILES+=usr/include/kdc.h
OLD_FILES+=usr/include/krb5-private.h
OLD_FILES+=usr/include/krb5-protos.h
OLD_FILES+=usr/include/krb5-types.h
OLD_FILES+=usr/include/krb5.h
OLD_FILES+=usr/include/krb5/ccache_plugin.h
OLD_FILES+=usr/include/krb5/locate_plugin.h
OLD_FILES+=usr/include/krb5/send_to_kdc_plugin.h
OLD_FILES+=usr/include/krb5/windc_plugin.h
OLD_DIRS+=usr/include/krb5
OLD_FILES+=usr/include/krb5_asn1.h
OLD_FILES+=usr/include/krb5_ccapi.h
OLD_FILES+=usr/include/krb5_err.h
OLD_FILES+=usr/include/kx509_asn1.h
OLD_FILES+=usr/include/ntlm_err.h
OLD_FILES+=usr/include/ocsp_asn1.h
OLD_FILES+=usr/include/parse_bytes.h
OLD_FILES+=usr/include/parse_time.h
OLD_FILES+=usr/include/parse_units.h
OLD_FILES+=usr/include/pkcs10_asn1.h
OLD_FILES+=usr/include/pkcs12_asn1.h
OLD_FILES+=usr/include/pkcs8_asn1.h
OLD_FILES+=usr/include/pkcs9_asn1.h
OLD_FILES+=usr/include/pkinit_asn1.h
OLD_FILES+=usr/include/resolve.h
OLD_FILES+=usr/include/rfc2459_asn1.h
OLD_FILES+=usr/include/roken-common.h
OLD_FILES+=usr/include/rtbl.h
OLD_FILES+=usr/include/wind.h
OLD_FILES+=usr/include/wind_err.h
OLD_FILES+=usr/include/xdbm.h
OLD_FILES+=usr/lib/libasn1.a
OLD_FILES+=usr/lib/libasn1.so
OLD_LIBS+=usr/lib/libasn1.so.11
OLD_FILES+=usr/lib/libasn1_p.a
OLD_FILES+=usr/lib/libcom_err.a
OLD_FILES+=usr/lib/libcom_err.so
OLD_LIBS+=usr/lib/libcom_err.so.5
OLD_FILES+=usr/lib/libcom_err_p.a
OLD_FILES+=usr/lib/libgssapi_krb5.a
OLD_FILES+=usr/lib/libgssapi_krb5.so
OLD_LIBS+=usr/lib/libgssapi_krb5.so.10
OLD_FILES+=usr/lib/libgssapi_krb5_p.a
OLD_FILES+=usr/lib/libgssapi_ntlm.a
OLD_FILES+=usr/lib/libgssapi_ntlm.so
OLD_LIBS+=usr/lib/libgssapi_ntlm.so.10
OLD_FILES+=usr/lib/libgssapi_ntlm_p.a
OLD_FILES+=usr/lib/libgssapi_spnego.a
OLD_FILES+=usr/lib/libgssapi_spnego.so
OLD_LIBS+=usr/lib/libgssapi_spnego.so.10
OLD_FILES+=usr/lib/libgssapi_spnego_p.a
OLD_FILES+=usr/lib/libhdb.a
OLD_FILES+=usr/lib/libhdb.so
OLD_LIBS+=usr/lib/libhdb.so.11
OLD_FILES+=usr/lib/libhdb_p.a
OLD_FILES+=usr/lib/libheimbase.a
OLD_FILES+=usr/lib/libheimbase.so
OLD_LIBS+=usr/lib/libheimbase.so.11
OLD_FILES+=usr/lib/libheimbase_p.a
OLD_FILES+=usr/lib/libheimntlm.a
OLD_FILES+=usr/lib/libheimntlm.so
OLD_LIBS+=usr/lib/libheimntlm.so.11
OLD_FILES+=usr/lib/libheimntlm_p.a
OLD_FILES+=usr/lib/libheimsqlite.a
OLD_FILES+=usr/lib/libheimsqlite.so
OLD_LIBS+=usr/lib/libheimsqlite.so.11
OLD_FILES+=usr/lib/libheimsqlite_p.a
OLD_FILES+=usr/lib/libhx509.a
OLD_FILES+=usr/lib/libhx509.so
OLD_LIBS+=usr/lib/libhx509.so.11
OLD_FILES+=usr/lib/libhx509_p.a
OLD_FILES+=usr/lib/libkadm5clnt.a
OLD_FILES+=usr/lib/libkadm5clnt.so
OLD_LIBS+=usr/lib/libkadm5clnt.so.11
OLD_FILES+=usr/lib/libkadm5clnt_p.a
OLD_FILES+=usr/lib/libkadm5srv.a
OLD_FILES+=usr/lib/libkadm5srv.so
OLD_LIBS+=usr/lib/libkadm5srv.so.11
OLD_FILES+=usr/lib/libkadm5srv_p.a
OLD_FILES+=usr/lib/libkafs5.a
OLD_FILES+=usr/lib/libkafs5.so
OLD_LIBS+=usr/lib/libkafs5.so.11
OLD_FILES+=usr/lib/libkafs5_p.a
OLD_FILES+=usr/lib/libkdc.a
OLD_FILES+=usr/lib/libkdc.so
OLD_LIBS+=usr/lib/libkdc.so.11
OLD_FILES+=usr/lib/libkdc_p.a
OLD_FILES+=usr/lib/libkrb5.a
OLD_FILES+=usr/lib/libkrb5.so
OLD_LIBS+=usr/lib/libkrb5.so.11
OLD_FILES+=usr/lib/libkrb5_p.a
OLD_FILES+=usr/lib/libroken.a
OLD_FILES+=usr/lib/libroken.so
OLD_LIBS+=usr/lib/libroken.so.11
OLD_FILES+=usr/lib/libroken_p.a
OLD_FILES+=usr/lib/libwind.a
OLD_FILES+=usr/lib/libwind.so
OLD_LIBS+=usr/lib/libwind.so.11
OLD_FILES+=usr/lib/libwind_p.a
OLD_FILES+=usr/lib/pam_krb5.so
OLD_LIBS+=usr/lib/pam_krb5.so.6
OLD_FILES+=usr/lib/pam_ksu.so
OLD_LIBS+=usr/lib/pam_ksu.so.6
OLD_FILES+=usr/lib/private/libheimipcc.a
OLD_FILES+=usr/lib/private/libheimipcc.so
OLD_LIBS+=usr/lib/private/libheimipcc.so.11
OLD_FILES+=usr/lib/private/libheimipcc_p.a
OLD_FILES+=usr/lib/private/libheimipcs.a
OLD_FILES+=usr/lib/private/libheimipcs.so
OLD_LIBS+=usr/lib/private/libheimipcs.so.11
OLD_FILES+=usr/lib/private/libheimipcs_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libasn1.a
OLD_FILES+=usr/lib32/libasn1.so
OLD_LIBS+=usr/lib32/libasn1.so.11
OLD_FILES+=usr/lib32/libasn1_p.a
OLD_FILES+=usr/lib32/libgssapi_krb5.a
OLD_FILES+=usr/lib32/libgssapi_krb5.so
OLD_LIBS+=usr/lib32/libgssapi_krb5.so.10
OLD_FILES+=usr/lib32/libgssapi_krb5_p.a
OLD_FILES+=usr/lib32/libgssapi_ntlm.a
OLD_FILES+=usr/lib32/libgssapi_ntlm.so
OLD_LIBS+=usr/lib32/libgssapi_ntlm.so.10
OLD_FILES+=usr/lib32/libgssapi_ntlm_p.a
OLD_FILES+=usr/lib32/libgssapi_spnego.a
OLD_FILES+=usr/lib32/libgssapi_spnego.so
OLD_LIBS+=usr/lib32/libgssapi_spnego.so.10
OLD_FILES+=usr/lib32/libgssapi_spnego_p.a
OLD_FILES+=usr/lib32/libhdb.a
OLD_FILES+=usr/lib32/libhdb.so
OLD_LIBS+=usr/lib32/libhdb.so.11
OLD_FILES+=usr/lib32/libhdb_p.a
OLD_FILES+=usr/lib32/libheimbase.a
OLD_FILES+=usr/lib32/libheimbase.so
OLD_LIBS+=usr/lib32/libheimbase.so.11
OLD_FILES+=usr/lib32/libheimbase_p.a
OLD_FILES+=usr/lib32/libheimntlm.a
OLD_FILES+=usr/lib32/libheimntlm.so
OLD_LIBS+=usr/lib32/libheimntlm.so.11
OLD_FILES+=usr/lib32/libheimntlm_p.a
OLD_FILES+=usr/lib32/libheimsqlite.a
OLD_FILES+=usr/lib32/libheimsqlite.so
OLD_LIBS+=usr/lib32/libheimsqlite.so.11
OLD_FILES+=usr/lib32/libheimsqlite_p.a
OLD_FILES+=usr/lib32/libhx509.a
OLD_FILES+=usr/lib32/libhx509.so
OLD_LIBS+=usr/lib32/libhx509.so.11
OLD_FILES+=usr/lib32/libhx509_p.a
OLD_FILES+=usr/lib32/libkadm5clnt.a
OLD_FILES+=usr/lib32/libkadm5clnt.so
OLD_LIBS+=usr/lib32/libkadm5clnt.so.11
OLD_FILES+=usr/lib32/libkadm5clnt_p.a
OLD_FILES+=usr/lib32/libkadm5srv.a
OLD_FILES+=usr/lib32/libkadm5srv.so
OLD_LIBS+=usr/lib32/libkadm5srv.so.11
OLD_FILES+=usr/lib32/libkadm5srv_p.a
OLD_FILES+=usr/lib32/libkafs5.a
OLD_FILES+=usr/lib32/libkafs5.so
OLD_LIBS+=usr/lib32/libkafs5.so.11
OLD_FILES+=usr/lib32/libkafs5_p.a
OLD_FILES+=usr/lib32/libkdc.a
OLD_FILES+=usr/lib32/libkdc.so
OLD_LIBS+=usr/lib32/libkdc.so.11
OLD_FILES+=usr/lib32/libkdc_p.a
OLD_FILES+=usr/lib32/libkrb5.a
OLD_FILES+=usr/lib32/libkrb5.so
OLD_LIBS+=usr/lib32/libkrb5.so.11
OLD_FILES+=usr/lib32/libkrb5_p.a
OLD_FILES+=usr/lib32/libroken.a
OLD_FILES+=usr/lib32/libroken.so
OLD_LIBS+=usr/lib32/libroken.so.11
OLD_FILES+=usr/lib32/libroken_p.a
OLD_FILES+=usr/lib32/libwind.a
OLD_FILES+=usr/lib32/libwind.so
OLD_LIBS+=usr/lib32/libwind.so.11
OLD_FILES+=usr/lib32/libwind_p.a
OLD_FILES+=usr/lib32/pam_krb5.so
OLD_LIBS+=usr/lib32/pam_krb5.so.6
OLD_FILES+=usr/lib32/pam_ksu.so
OLD_LIBS+=usr/lib32/pam_ksu.so.6
OLD_FILES+=usr/lib32/private/libheimipcc.a
OLD_FILES+=usr/lib32/private/libheimipcc.so
OLD_LIBS+=usr/lib32/private/libheimipcc.so.11
OLD_FILES+=usr/lib32/private/libheimipcc_p.a
OLD_FILES+=usr/lib32/private/libheimipcs.a
OLD_FILES+=usr/lib32/private/libheimipcs.so
OLD_LIBS+=usr/lib32/private/libheimipcs.so.11
OLD_FILES+=usr/lib32/private/libheimipcs_p.a
.endif
OLD_FILES+=usr/libexec/digest-service
OLD_FILES+=usr/libexec/hprop
OLD_FILES+=usr/libexec/hpropd
OLD_FILES+=usr/libexec/ipropd-master
OLD_FILES+=usr/libexec/ipropd-slave
OLD_FILES+=usr/libexec/kadmind
OLD_FILES+=usr/libexec/kcm
OLD_FILES+=usr/libexec/kdc
OLD_FILES+=usr/libexec/kdigest
OLD_FILES+=usr/libexec/kfd
OLD_FILES+=usr/libexec/kimpersonate
OLD_FILES+=usr/libexec/kpasswdd
OLD_FILES+=usr/sbin/kstash
OLD_FILES+=usr/sbin/ktutil
OLD_FILES+=usr/sbin/iprop-log
OLD_FILES+=usr/share/man/man1/kdestroy.1.gz
OLD_FILES+=usr/share/man/man1/kf.1.gz
OLD_FILES+=usr/share/man/man1/kinit.1.gz
OLD_FILES+=usr/share/man/man1/klist.1.gz
OLD_FILES+=usr/share/man/man1/kpasswd.1.gz
OLD_FILES+=usr/share/man/man1/krb5-config.1.gz
OLD_FILES+=usr/share/man/man1/kswitch.1.gz
OLD_FILES+=usr/share/man/man3/HDB.3.gz
OLD_FILES+=usr/share/man/man3/hdb__del.3.gz
OLD_FILES+=usr/share/man/man3/hdb__get.3.gz
OLD_FILES+=usr/share/man/man3/hdb__put.3.gz
OLD_FILES+=usr/share/man/man3/hdb_auth_status.3.gz
OLD_FILES+=usr/share/man/man3/hdb_check_constrained_delegation.3.gz
OLD_FILES+=usr/share/man/man3/hdb_check_pkinit_ms_upn_match.3.gz
OLD_FILES+=usr/share/man/man3/hdb_check_s4u2self.3.gz
OLD_FILES+=usr/share/man/man3/hdb_close.3.gz
OLD_FILES+=usr/share/man/man3/hdb_destroy.3.gz
OLD_FILES+=usr/share/man/man3/hdb_entry_ex.3.gz
OLD_FILES+=usr/share/man/man3/hdb_fetch_kvno.3.gz
OLD_FILES+=usr/share/man/man3/hdb_firstkey.3.gz
OLD_FILES+=usr/share/man/man3/hdb_free.3.gz
OLD_FILES+=usr/share/man/man3/hdb_get_realms.3.gz
OLD_FILES+=usr/share/man/man3/hdb_lock.3.gz
OLD_FILES+=usr/share/man/man3/hdb_name.3.gz
OLD_FILES+=usr/share/man/man3/hdb_nextkey.3.gz
OLD_FILES+=usr/share/man/man3/hdb_open.3.gz
OLD_FILES+=usr/share/man/man3/hdb_password.3.gz
OLD_FILES+=usr/share/man/man3/hdb_remove.3.gz
OLD_FILES+=usr/share/man/man3/hdb_rename.3.gz
OLD_FILES+=usr/share/man/man3/hdb_store.3.gz
OLD_FILES+=usr/share/man/man3/hdb_unlock.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_build_ntlm1_master.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_build_ntlm2_master.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_calculate_lm2.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_calculate_ntlm1.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_calculate_ntlm2.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_decode_targetinfo.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_targetinfo.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_type1.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_type2.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_encode_type3.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_free_buf.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_free_targetinfo.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_free_type1.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_free_type2.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_free_type3.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_keyex_unwrap.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_nt_key.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_ntlmv2_key.3.gz
OLD_FILES+=usr/share/man/man3/heim_ntlm_verify_ntlm2.3.gz
OLD_FILES+=usr/share/man/man3/hx509.3.gz
OLD_FILES+=usr/share/man/man3/hx509_bitstring_print.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_sign.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_sign_self.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_crl_dp_uri.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_eku.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_hostname.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_jid.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_ms_upn.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_otherName.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_pkinit.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_add_san_rfc822name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_init.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_ca.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_domaincontroller.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_notAfter.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_notAfter_lifetime.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_notBefore.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_proxy.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_serialnumber.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_spki.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_subject.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_template.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_set_unique.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_subject_expand.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ca_tbs_template_units.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_binary.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_check_eku.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_cmp.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_find_subjectAltName_otherName.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_SPKI.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_SPKI_AlgorithmIdentifier.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_attribute.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_base_subject.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_friendly_name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_issuer.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_issuer_unique_id.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_notAfter.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_notBefore.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_serialnumber.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_subject.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_get_subject_unique_id.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_init.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_init_data.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_keyusage_print.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_ref.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cert_set_friendly_name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_add.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_append.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_end_seq.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_filter.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_find.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_info.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_init.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_iter_f.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_merge.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_next_cert.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_start_seq.3.gz
OLD_FILES+=usr/share/man/man3/hx509_certs_store.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ci_print_names.3.gz
OLD_FILES+=usr/share/man/man3/hx509_clear_error_string.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cms.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cms_create_signed_1.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cms_envelope_1.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cms_unenvelope.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cms_unwrap_ContentInfo.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cms_verify_signed.3.gz
OLD_FILES+=usr/share/man/man3/hx509_cms_wrap_ContentInfo.3.gz
OLD_FILES+=usr/share/man/man3/hx509_context_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_context_init.3.gz
OLD_FILES+=usr/share/man/man3/hx509_context_set_missing_revoke.3.gz
OLD_FILES+=usr/share/man/man3/hx509_crl_add_revoked_certs.3.gz
OLD_FILES+=usr/share/man/man3/hx509_crl_alloc.3.gz
OLD_FILES+=usr/share/man/man3/hx509_crl_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_crl_lifetime.3.gz
OLD_FILES+=usr/share/man/man3/hx509_crl_sign.3.gz
OLD_FILES+=usr/share/man/man3/hx509_crypto.3.gz
OLD_FILES+=usr/share/man/man3/hx509_env.3.gz
OLD_FILES+=usr/share/man/man3/hx509_env_add.3.gz
OLD_FILES+=usr/share/man/man3/hx509_env_add_binding.3.gz
OLD_FILES+=usr/share/man/man3/hx509_env_find.3.gz
OLD_FILES+=usr/share/man/man3/hx509_env_find_binding.3.gz
OLD_FILES+=usr/share/man/man3/hx509_env_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_env_lfind.3.gz
OLD_FILES+=usr/share/man/man3/hx509_err.3.gz
OLD_FILES+=usr/share/man/man3/hx509_error.3.gz
OLD_FILES+=usr/share/man/man3/hx509_free_error_string.3.gz
OLD_FILES+=usr/share/man/man3/hx509_free_octet_string_list.3.gz
OLD_FILES+=usr/share/man/man3/hx509_general_name_unparse.3.gz
OLD_FILES+=usr/share/man/man3/hx509_get_error_string.3.gz
OLD_FILES+=usr/share/man/man3/hx509_get_one_cert.3.gz
OLD_FILES+=usr/share/man/man3/hx509_keyset.3.gz
OLD_FILES+=usr/share/man/man3/hx509_lock.3.gz
OLD_FILES+=usr/share/man/man3/hx509_misc.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_binary.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_cmp.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_copy.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_expand.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_is_null_p.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_to_Name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_name_to_string.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ocsp_request.3.gz
OLD_FILES+=usr/share/man/man3/hx509_ocsp_verify.3.gz
OLD_FILES+=usr/share/man/man3/hx509_oid_print.3.gz
OLD_FILES+=usr/share/man/man3/hx509_oid_sprint.3.gz
OLD_FILES+=usr/share/man/man3/hx509_parse_name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_peer.3.gz
OLD_FILES+=usr/share/man/man3/hx509_peer_info_add_cms_alg.3.gz
OLD_FILES+=usr/share/man/man3/hx509_peer_info_alloc.3.gz
OLD_FILES+=usr/share/man/man3/hx509_peer_info_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_peer_info_set_cert.3.gz
OLD_FILES+=usr/share/man/man3/hx509_peer_info_set_cms_algs.3.gz
OLD_FILES+=usr/share/man/man3/hx509_print.3.gz
OLD_FILES+=usr/share/man/man3/hx509_print_cert.3.gz
OLD_FILES+=usr/share/man/man3/hx509_print_stdout.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_alloc.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_match_cmp_func.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_match_eku.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_match_friendly_name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_match_issuer_serial.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_match_option.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_statistic_file.3.gz
OLD_FILES+=usr/share/man/man3/hx509_query_unparse_stats.3.gz
OLD_FILES+=usr/share/man/man3/hx509_revoke.3.gz
OLD_FILES+=usr/share/man/man3/hx509_revoke_add_crl.3.gz
OLD_FILES+=usr/share/man/man3/hx509_revoke_add_ocsp.3.gz
OLD_FILES+=usr/share/man/man3/hx509_revoke_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_revoke_init.3.gz
OLD_FILES+=usr/share/man/man3/hx509_revoke_ocsp_print.3.gz
OLD_FILES+=usr/share/man/man3/hx509_revoke_verify.3.gz
OLD_FILES+=usr/share/man/man3/hx509_set_error_string.3.gz
OLD_FILES+=usr/share/man/man3/hx509_set_error_stringv.3.gz
OLD_FILES+=usr/share/man/man3/hx509_unparse_der_name.3.gz
OLD_FILES+=usr/share/man/man3/hx509_validate_cert.3.gz
OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_add_flags.3.gz
OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_free.3.gz
OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_init.3.gz
OLD_FILES+=usr/share/man/man3/hx509_validate_ctx_set_print.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_attach_anchors.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_attach_revoke.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_ctx_f_allow_default_trustanchors.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_destroy_ctx.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_hostname.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_init_ctx.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_path.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_set_max_depth.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_set_proxy_certificate.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_set_strict_rfc3280_verification.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_set_time.3.gz
OLD_FILES+=usr/share/man/man3/hx509_verify_signature.3.gz
OLD_FILES+=usr/share/man/man3/hx509_xfree.3.gz
OLD_FILES+=usr/share/man/man3/k_afs_cell_of_file.3.gz
OLD_FILES+=usr/share/man/man3/k_hasafs.3.gz
OLD_FILES+=usr/share/man/man3/k_pioctl.3.gz
OLD_FILES+=usr/share/man/man3/k_setpag.3.gz
OLD_FILES+=usr/share/man/man3/k_unlog.3.gz
OLD_FILES+=usr/share/man/man3/kadm5_pwcheck.3.gz
OLD_FILES+=usr/share/man/man3/kafs.3.gz
OLD_FILES+=usr/share/man/man3/kafs5.3.gz
OLD_FILES+=usr/share/man/man3/kafs_set_verbose.3.gz
OLD_FILES+=usr/share/man/man3/kafs_settoken.3.gz
OLD_FILES+=usr/share/man/man3/kafs_settoken5.3.gz
OLD_FILES+=usr/share/man/man3/kafs_settoken_rxkad.3.gz
OLD_FILES+=usr/share/man/man3/krb5.3.gz
OLD_FILES+=usr/share/man/man3/krb524_convert_creds_kdc.3.gz
OLD_FILES+=usr/share/man/man3/krb524_convert_creds_kdc_ccache.3.gz
OLD_FILES+=usr/share/man/man3/krb5_425_conv_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_425_conv_principal_ext.3.gz
OLD_FILES+=usr/share/man/man3/krb5_524_conv_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_acc_ops.3.gz
OLD_FILES+=usr/share/man/man3/krb5_acl_match_file.3.gz
OLD_FILES+=usr/share/man/man3/krb5_acl_match_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_add_et_list.3.gz
OLD_FILES+=usr/share/man/man3/krb5_add_extra_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_add_ignore_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_addlog_dest.3.gz
OLD_FILES+=usr/share/man/man3/krb5_addlog_func.3.gz
OLD_FILES+=usr/share/man/man3/krb5_addr2sockaddr.3.gz
OLD_FILES+=usr/share/man/man3/krb5_address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_address_compare.3.gz
OLD_FILES+=usr/share/man/man3/krb5_address_order.3.gz
OLD_FILES+=usr/share/man/man3/krb5_address_prefixlen_boundary.3.gz
OLD_FILES+=usr/share/man/man3/krb5_address_search.3.gz
OLD_FILES+=usr/share/man/man3/krb5_afslog.3.gz
OLD_FILES+=usr/share/man/man3/krb5_afslog_uid.3.gz
OLD_FILES+=usr/share/man/man3/krb5_allow_weak_crypto.3.gz
OLD_FILES+=usr/share/man/man3/krb5_aname_to_localname.3.gz
OLD_FILES+=usr/share/man/man3/krb5_anyaddr.3.gz
OLD_FILES+=usr/share/man/man3/krb5_appdefault.3.gz
OLD_FILES+=usr/share/man/man3/krb5_appdefault_boolean.3.gz
OLD_FILES+=usr/share/man/man3/krb5_appdefault_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_appdefault_time.3.gz
OLD_FILES+=usr/share/man/man3/krb5_append_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_genaddrs.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_getaddrs.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_getflags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_getkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_getlocalsubkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_getrcache.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_getremotesubkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_getuserkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_init.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_initivector.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setaddrs.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setaddrs_from_fd.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setflags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setivector.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setlocalsubkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setrcache.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setremotesubkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_con_setuserkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_context.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_getauthenticator.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_getcksumtype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_getkeytype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_getlocalseqnumber.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_getremoteseqnumber.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_setcksumtype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_setkeytype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_setlocalseqnumber.3.gz
OLD_FILES+=usr/share/man/man3/krb5_auth_setremoteseqnumber.3.gz
OLD_FILES+=usr/share/man/man3/krb5_build_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_build_principal_ext.3.gz
OLD_FILES+=usr/share/man/man3/krb5_build_principal_va.3.gz
OLD_FILES+=usr/share/man/man3/krb5_build_principal_va_ext.3.gz
OLD_FILES+=usr/share/man/man3/krb5_c_enctype_compare.3.gz
OLD_FILES+=usr/share/man/man3/krb5_c_make_checksum.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_cache_end_seq_get.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_cache_get_first.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_cache_match.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_cache_next.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_clear_mcred.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_close.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_copy_cache.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_copy_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_copy_match_f.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_default_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_destroy.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_end_seq_get.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_gen_new.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_config.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_friendly_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_full_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_kdc_offset.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_lifetime.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_ops.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_prefix_ops.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_type.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_get_version.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_initialize.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_last_change_time.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_move.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_new_unique.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_next_cred.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_register.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_remove_cred.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_resolve.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_retrieve_cred.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_set_config.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_set_default_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_set_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_set_friendly_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_set_kdc_offset.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_start_seq_get.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_store_cred.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_support_switch.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cc_switch.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ccache.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ccache_intro.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cccol_cursor_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cccol_cursor_new.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cccol_cursor_next.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cccol_last_change_time.3.gz
OLD_FILES+=usr/share/man/man3/krb5_change_password.3.gz
OLD_FILES+=usr/share/man/man3/krb5_check_transited.3.gz
OLD_FILES+=usr/share/man/man3/krb5_checksum_is_collision_proof.3.gz
OLD_FILES+=usr/share/man/man3/krb5_checksum_is_keyed.3.gz
OLD_FILES+=usr/share/man/man3/krb5_checksumsize.3.gz
OLD_FILES+=usr/share/man/man3/krb5_cksumtype_to_enctype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_clear_error_message.3.gz
OLD_FILES+=usr/share/man/man3/krb5_clear_error_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_closelog.3.gz
OLD_FILES+=usr/share/man/man3/krb5_compare_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_file_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_free_strings.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_bool.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_bool_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_list.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_string_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_strings.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_time.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_get_time_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_parse_file_multi.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_parse_string_multi.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_bool.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_bool_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_list.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_string_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_strings.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_time.3.gz
OLD_FILES+=usr/share/man/man3/krb5_config_vget_time_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_context.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_creds_contents.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_data.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_host_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_keyblock.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_keyblock_contents.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_copy_ticket.3.gz
OLD_FILES+=usr/share/man/man3/krb5_create_checksum.3.gz
OLD_FILES+=usr/share/man/man3/krb5_create_checksum_iov.3.gz
OLD_FILES+=usr/share/man/man3/krb5_credential.3.gz
OLD_FILES+=usr/share/man/man3/krb5_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_creds_get_ticket_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_destroy.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_fx_cf2.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_getblocksize.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_getconfoundersize.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_getenctype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_getpadsize.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_init.3.gz
OLD_FILES+=usr/share/man/man3/krb5_crypto_iov.3.gz
OLD_FILES+=usr/share/man/man3/krb5_data_alloc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_data_cmp.3.gz
OLD_FILES+=usr/share/man/man3/krb5_data_copy.3.gz
OLD_FILES+=usr/share/man/man3/krb5_data_ct_cmp.3.gz
OLD_FILES+=usr/share/man/man3/krb5_data_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_data_realloc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_data_zero.3.gz
OLD_FILES+=usr/share/man/man3/krb5_decrypt.3.gz
OLD_FILES+=usr/share/man/man3/krb5_decrypt_EncryptedData.3.gz
OLD_FILES+=usr/share/man/man3/krb5_decrypt_iov_ivec.3.gz
OLD_FILES+=usr/share/man/man3/krb5_deprecated.3.gz
OLD_FILES+=usr/share/man/man3/krb5_digest.3.gz
OLD_FILES+=usr/share/man/man3/krb5_digest_probe.3.gz
OLD_FILES+=usr/share/man/man3/krb5_eai_to_heim_errno.3.gz
OLD_FILES+=usr/share/man/man3/krb5_encrypt.3.gz
OLD_FILES+=usr/share/man/man3/krb5_encrypt_EncryptedData.3.gz
OLD_FILES+=usr/share/man/man3/krb5_encrypt_iov_ivec.3.gz
OLD_FILES+=usr/share/man/man3/krb5_enctype_disable.3.gz
OLD_FILES+=usr/share/man/man3/krb5_enctype_enable.3.gz
OLD_FILES+=usr/share/man/man3/krb5_enctype_valid.3.gz
OLD_FILES+=usr/share/man/man3/krb5_enctypes_compatible_keys.3.gz
OLD_FILES+=usr/share/man/man3/krb5_error.3.gz
OLD_FILES+=usr/share/man/man3/krb5_expand_hostname.3.gz
OLD_FILES+=usr/share/man/man3/krb5_expand_hostname_realms.3.gz
OLD_FILES+=usr/share/man/man3/krb5_fcc_ops.3.gz
OLD_FILES+=usr/share/man/man3/krb5_fileformats.3.gz
OLD_FILES+=usr/share/man/man3/krb5_find_padata.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_config_files.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_context.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_cred_contents.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_creds_contents.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_data.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_data_contents.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_error_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_host_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_keyblock.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_keyblock_contents.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_krbhst.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_ticket.3.gz
OLD_FILES+=usr/share/man/man3/krb5_free_unparsed_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_fwd_tgt_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_generate_random_block.3.gz
OLD_FILES+=usr/share/man/man3/krb5_generate_subkey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_generate_subkey_extended.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_all_client_addrs.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_all_server_addrs.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_cred_from_kdc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_cred_from_kdc_opt.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_credentials.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_default_config_files.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_default_in_tkt_etypes.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_default_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_default_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_default_realms.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_dns_canonicalize_hostname.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_extra_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_fcache_version.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_forwarded_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_host_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_ignore_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_in_cred.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_in_tkt_with_keytab.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_in_tkt_with_password.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_in_tkt_with_skey.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_keyblock.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_keytab.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_alloc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_get_error.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_opt_init.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_init_creds_password.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_kdc_sec_offset.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_krb524hst.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_krb_admin_hst.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_krb_changepw_hst.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_krbhst.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_max_time_skew.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_use_admin_kdc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_get_validated_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_getportbyname.3.gz
OLD_FILES+=usr/share/man/man3/krb5_h_addr2addr.3.gz
OLD_FILES+=usr/share/man/man3/krb5_h_addr2sockaddr.3.gz
OLD_FILES+=usr/share/man/man3/krb5_h_errno_to_heim_errno.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_context.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_get.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_get_error.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_init.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_intro.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_set_keytab.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_set_password.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_set_service.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_creds_step.3.gz
OLD_FILES+=usr/share/man/man3/krb5_init_ets.3.gz
OLD_FILES+=usr/share/man/man3/krb5_initlog.3.gz
OLD_FILES+=usr/share/man/man3/krb5_introduction.3.gz
OLD_FILES+=usr/share/man/man3/krb5_is_config_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_is_thread_safe.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kerberos_enctypes.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keyblock_get_enctype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keyblock_init.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keyblock_zero.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keytab.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keytab_intro.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keytab_key_proc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keytype_to_enctypes.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keytype_to_enctypes_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_keytype_to_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_krbhst_format_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_krbhst_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_krbhst_get_addrinfo.3.gz
OLD_FILES+=usr/share/man/man3/krb5_krbhst_init.3.gz
OLD_FILES+=usr/share/man/man3/krb5_krbhst_next.3.gz
OLD_FILES+=usr/share/man/man3/krb5_krbhst_next_as_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_krbhst_reset.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_add_entry.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_close.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_compare.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_copy_entry_contents.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_default_modify_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_default_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_destroy.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_end_seq_get.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_free_entry.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_get_entry.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_get_full_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_get_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_get_type.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_have_content.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_next_entry.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_read_service_key.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_register.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_remove_entry.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_resolve.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kt_start_seq_get.3.gz
OLD_FILES+=usr/share/man/man3/krb5_kuserok.3.gz
OLD_FILES+=usr/share/man/man3/krb5_log.3.gz
OLD_FILES+=usr/share/man/man3/krb5_log_msg.3.gz
OLD_FILES+=usr/share/man/man3/krb5_make_addrport.3.gz
OLD_FILES+=usr/share/man/man3/krb5_make_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_max_sockaddr_size.3.gz
OLD_FILES+=usr/share/man/man3/krb5_mcc_ops.3.gz
OLD_FILES+=usr/share/man/man3/krb5_mk_req.3.gz
OLD_FILES+=usr/share/man/man3/krb5_mk_safe.3.gz
OLD_FILES+=usr/share/man/man3/krb5_openlog.3.gz
OLD_FILES+=usr/share/man/man3/krb5_pac.3.gz
OLD_FILES+=usr/share/man/man3/krb5_pac_get_buffer.3.gz
OLD_FILES+=usr/share/man/man3/krb5_pac_verify.3.gz
OLD_FILES+=usr/share/man/man3/krb5_parse_address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_parse_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_parse_name_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_parse_nametype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_password_key_proc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_plugin_register.3.gz
OLD_FILES+=usr/share/man/man3/krb5_prepend_config_files_default.3.gz
OLD_FILES+=usr/share/man/man3/krb5_princ_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_princ_set_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_compare.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_compare_any_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_get_comp_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_get_num_comp.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_get_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_get_type.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_intro.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_is_krbtgt.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_match.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_set_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_principal_set_type.3.gz
OLD_FILES+=usr/share/man/man3/krb5_print_address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_random_to_key.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rcache.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_error.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_req_ctx.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_req_in_ctx_alloc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_req_in_set_keytab.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_req_in_set_pac_check.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_req_out_ctx_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_req_out_get_server.3.gz
OLD_FILES+=usr/share/man/man3/krb5_rd_safe.3.gz
OLD_FILES+=usr/share/man/man3/krb5_realm_compare.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_addrs.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_authdata.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_creds_tag.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_data.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_int16.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_int32.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_int8.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_keyblock.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_stringz.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_times.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_uint16.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_uint32.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ret_uint8.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_config_files.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_default_in_tkt_etypes.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_default_realm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_dns_canonicalize_hostname.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_error_message.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_error_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_extra_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_fcache_version.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_home_dir_access.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_ignore_addresses.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_kdc_sec_offset.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_max_time_skew.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_password.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_real_time.3.gz
OLD_FILES+=usr/share/man/man3/krb5_set_use_admin_kdc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_sname_to_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_sock_to_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_sockaddr2address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_sockaddr2port.3.gz
OLD_FILES+=usr/share/man/man3/krb5_sockaddr_uninteresting.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_clear_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_emem.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_free.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_from_data.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_from_fd.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_from_mem.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_from_readonly_mem.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_get_byteorder.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_get_eof_code.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_is_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_read.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_seek.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_set_byteorder.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_set_eof_code.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_set_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_set_max_alloc.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_to_data.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_truncate.3.gz
OLD_FILES+=usr/share/man/man3/krb5_storage_write.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_address.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_addrs.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_authdata.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_creds_tag.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_data.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_int16.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_int32.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_int8.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_keyblock.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_principal.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_stringz.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_times.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_uint16.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_uint32.3.gz
OLD_FILES+=usr/share/man/man3/krb5_store_uint8.3.gz
OLD_FILES+=usr/share/man/man3/krb5_string_to_key.3.gz
OLD_FILES+=usr/share/man/man3/krb5_string_to_keytype.3.gz
OLD_FILES+=usr/share/man/man3/krb5_support.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ticket.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ticket_get_authorization_data_type.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ticket_get_client.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ticket_get_endtime.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ticket_get_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_ticket_get_server.3.gz
OLD_FILES+=usr/share/man/man3/krb5_timeofday.3.gz
OLD_FILES+=usr/share/man/man3/krb5_unparse_name.3.gz
OLD_FILES+=usr/share/man/man3/krb5_unparse_name_fixed.3.gz
OLD_FILES+=usr/share/man/man3/krb5_unparse_name_fixed_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_unparse_name_fixed_short.3.gz
OLD_FILES+=usr/share/man/man3/krb5_unparse_name_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_unparse_name_short.3.gz
OLD_FILES+=usr/share/man/man3/krb5_us_timeofday.3.gz
OLD_FILES+=usr/share/man/man3/krb5_v4compat.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_checksum.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_checksum_iov.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_init_creds.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_opt_init.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_flags.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_keytab.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_secure.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_opt_set_service.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_user.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_user_lrealm.3.gz
OLD_FILES+=usr/share/man/man3/krb5_verify_user_opt.3.gz
OLD_FILES+=usr/share/man/man3/krb5_vlog.3.gz
OLD_FILES+=usr/share/man/man3/krb5_vlog_msg.3.gz
OLD_FILES+=usr/share/man/man3/krb5_vset_error_string.3.gz
OLD_FILES+=usr/share/man/man3/krb5_vwarn.3.gz
OLD_FILES+=usr/share/man/man3/krb_afslog.3.gz
OLD_FILES+=usr/share/man/man3/krb_afslog_uid.3.gz
OLD_FILES+=usr/share/man/man3/ntlm_buf.3.gz
OLD_FILES+=usr/share/man/man3/ntlm_core.3.gz
OLD_FILES+=usr/share/man/man3/ntlm_type1.3.gz
OLD_FILES+=usr/share/man/man3/ntlm_type2.3.gz
OLD_FILES+=usr/share/man/man3/ntlm_type3.3.gz
OLD_FILES+=usr/share/man/man5/krb5.conf.5.gz
OLD_FILES+=usr/share/man/man8/hprop.8.gz
OLD_FILES+=usr/share/man/man8/hpropd.8.gz
OLD_FILES+=usr/share/man/man8/iprop-log.8.gz
OLD_FILES+=usr/share/man/man8/iprop.8.gz
OLD_FILES+=usr/share/man/man8/kadmin.8.gz
OLD_FILES+=usr/share/man/man8/kadmind.8.gz
OLD_FILES+=usr/share/man/man8/kcm.8.gz
OLD_FILES+=usr/share/man/man8/kdc.8.gz
OLD_FILES+=usr/share/man/man8/kdigest.8.gz
OLD_FILES+=usr/share/man/man8/kerberos.8.gz
OLD_FILES+=usr/share/man/man8/kimpersonate.8.gz
OLD_FILES+=usr/share/man/man8/kpasswdd.8.gz
OLD_FILES+=usr/share/man/man8/kstash.8.gz
OLD_FILES+=usr/share/man/man8/ktutil.8.gz
OLD_FILES+=usr/share/man/man8/pam_krb5.8.gz
OLD_FILES+=usr/share/man/man8/pam_ksu.8.gz
OLD_FILES+=usr/share/man/man8/string2key.8.gz
OLD_FILES+=usr/share/man/man8/verify_krb5_conf.8.gz
.endif
.if ${MK_KERBEROS_SUPPORT} == no
OLD_FILES+=usr/bin/compile_et
OLD_FILES+=usr/include/com_err.h
OLD_FILES+=usr/include/com_right.h
OLD_FILES+=usr/lib/libcom_err.a
OLD_FILES+=usr/lib/libcom_err.so
OLD_LIBS+=usr/lib/libcom_err.so.5
OLD_FILES+=usr/lib/libcom_err_p.a
OLD_FILES+=usr/lib32/libcom_err.a
OLD_FILES+=usr/lib32/libcom_err.so
OLD_LIBS+=usr/lib32/libcom_err.so.5
OLD_FILES+=usr/lib32/libcom_err_p.a
OLD_FILES+=usr/share/man/man1/compile_et.1.gz
OLD_FILES+=usr/share/man/man3/com_err.3.gz
.endif
.if ${MK_LDNS} == no
OLD_FILES+=usr/lib/private/libldns.a
OLD_FILES+=usr/lib/private/libldns.so
OLD_LIBS+=usr/lib/private/libldns.so.5
OLD_FILES+=usr/lib/private/libldns_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/private/libldns.a
OLD_FILES+=usr/lib32/private/libldns.so
OLD_LIBS+=usr/lib32/private/libldns.so.5
OLD_FILES+=usr/lib32/private/libldns_p.a
.endif
.endif
.if ${MK_LDNS_UTILS} == no
OLD_FILES+=usr/bin/drill
OLD_FILES+=usr/share/man/man1/drill.1.gz
OLD_FILES+=usr/bin/host
OLD_FILES+=usr/share/man/man1/host.1.gz
.endif
.if ${MK_LEGACY_CONSOLE} == no
OLD_FILES+=usr/sbin/kbdcontrol
OLD_FILES+=usr/sbin/kbdmap
OLD_FILES+=usr/sbin/moused
OLD_FILES+=usr/sbin/vidcontrol
OLD_FILES+=usr/sbin/vidfont
OLD_FILES+=usr/share/man/man1/kbdcontrol.1.gz
OLD_FILES+=usr/share/man/man1/kbdmap.1.gz
OLD_FILES+=usr/share/man/man1/vidcontrol.1.gz
OLD_FILES+=usr/share/man/man1/vidfont.1.gz
OLD_FILES+=usr/share/man/man5/kbdmap.5.gz
OLD_FILES+=usr/share/man/man5/keymap.5.gz
OLD_FILES+=usr/share/man/man8/moused.8.gz
.endif
.if ${MK_LIB32} == no
OLD_FILES+=etc/mtree/BSD.lib32.dist
OLD_FILES+=libexec/ld-elf32.so.1
. if exists(${DESTDIR}/usr/lib32)
LIB32_DIRS!=find ${DESTDIR}/usr/lib32 -type d \
| sed -e 's,^${DESTDIR}/,,'; echo
LIB32_FILES!=find ${DESTDIR}/usr/lib32 \! -type d \
\! -name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo
LIB32_LIBS!=find ${DESTDIR}/usr/lib32 \! -type d \
-name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${LIB32_DIRS}
OLD_FILES+=${LIB32_FILES}
OLD_LIBS+=${LIB32_LIBS}
. endif
. if ${MK_DEBUG_FILES} == no
. if exists(${DESTDIR}/usr/lib/debug/usr/lib32)
DEBUG_LIB32_DIRS!=find ${DESTDIR}/usr/lib/debug/usr/lib32 -type d \
| sed -e 's,^${DESTDIR}/,,'; echo
DEBUG_LIB32_FILES!=find ${DESTDIR}/usr/lib/debug/usr/lib32 \! -type d \
\! -name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo
DEBUG_LIB32_LIBS!=find ${DESTDIR}/usr/lib/debug/usr/lib32 \! -type d \
-name "lib*.so*" | sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${DEBUG_LIB32_DIRS}
OLD_FILES+=${DEBUG_LIB32_FILES}
OLD_LIBS+=${DEBUG_LIB32_LIBS}
. endif
. endif
.endif
.if ${MK_LIBCPLUSPLUS} == no
OLD_LIBS+=lib/libcxxrt.so.1
OLD_FILES+=usr/lib/libc++.a
OLD_FILES+=usr/lib/libc++_p.a
OLD_FILES+=usr/lib/libc++experimental.a
OLD_FILES+=usr/lib/libc++fs.a
OLD_FILES+=usr/lib/libc++.so
OLD_LIBS+=usr/lib/libc++.so.1
OLD_FILES+=usr/lib/libcxxrt.a
OLD_FILES+=usr/lib/libcxxrt.so
OLD_FILES+=usr/lib/libcxxrt_p.a
OLD_FILES+=usr/include/c++/v1/__bit_reference
OLD_FILES+=usr/include/c++/v1/__bsd_locale_defaults.h
OLD_FILES+=usr/include/c++/v1/__bsd_locale_fallbacks.h
OLD_FILES+=usr/include/c++/v1/__config
OLD_FILES+=usr/include/c++/v1/__debug
OLD_FILES+=usr/include/c++/v1/__errc
OLD_FILES+=usr/include/c++/v1/__functional_03
OLD_FILES+=usr/include/c++/v1/__functional_base
OLD_FILES+=usr/include/c++/v1/__functional_base_03
OLD_FILES+=usr/include/c++/v1/__hash_table
OLD_FILES+=usr/include/c++/v1/__libcpp_version
OLD_FILES+=usr/include/c++/v1/__locale
OLD_FILES+=usr/include/c++/v1/__mutex_base
OLD_FILES+=usr/include/c++/v1/__node_handle
OLD_FILES+=usr/include/c++/v1/__nullptr
OLD_FILES+=usr/include/c++/v1/__split_buffer
OLD_FILES+=usr/include/c++/v1/__sso_allocator
OLD_FILES+=usr/include/c++/v1/__std_stream
OLD_FILES+=usr/include/c++/v1/__string
OLD_FILES+=usr/include/c++/v1/__threading_support
OLD_FILES+=usr/include/c++/v1/__tree
OLD_FILES+=usr/include/c++/v1/__tuple
OLD_FILES+=usr/include/c++/v1/__undef_macros
OLD_FILES+=usr/include/c++/v1/algorithm
OLD_FILES+=usr/include/c++/v1/any
OLD_FILES+=usr/include/c++/v1/array
OLD_FILES+=usr/include/c++/v1/atomic
OLD_FILES+=usr/include/c++/v1/bit
OLD_FILES+=usr/include/c++/v1/bitset
OLD_FILES+=usr/include/c++/v1/cassert
OLD_FILES+=usr/include/c++/v1/ccomplex
OLD_FILES+=usr/include/c++/v1/cctype
OLD_FILES+=usr/include/c++/v1/cerrno
OLD_FILES+=usr/include/c++/v1/cfenv
OLD_FILES+=usr/include/c++/v1/cfloat
OLD_FILES+=usr/include/c++/v1/charconv
OLD_FILES+=usr/include/c++/v1/chrono
OLD_FILES+=usr/include/c++/v1/cinttypes
OLD_FILES+=usr/include/c++/v1/ciso646
OLD_FILES+=usr/include/c++/v1/climits
OLD_FILES+=usr/include/c++/v1/clocale
OLD_FILES+=usr/include/c++/v1/cmath
OLD_FILES+=usr/include/c++/v1/codecvt
OLD_FILES+=usr/include/c++/v1/compare
OLD_FILES+=usr/include/c++/v1/complex
OLD_FILES+=usr/include/c++/v1/complex.h
OLD_FILES+=usr/include/c++/v1/condition_variable
OLD_FILES+=usr/include/c++/v1/csetjmp
OLD_FILES+=usr/include/c++/v1/csignal
OLD_FILES+=usr/include/c++/v1/cstdarg
OLD_FILES+=usr/include/c++/v1/cstdbool
OLD_FILES+=usr/include/c++/v1/cstddef
OLD_FILES+=usr/include/c++/v1/cstdint
OLD_FILES+=usr/include/c++/v1/cstdio
OLD_FILES+=usr/include/c++/v1/cstdlib
OLD_FILES+=usr/include/c++/v1/cstring
OLD_FILES+=usr/include/c++/v1/ctgmath
OLD_FILES+=usr/include/c++/v1/ctime
OLD_FILES+=usr/include/c++/v1/ctype.h
OLD_FILES+=usr/include/c++/v1/cwchar
OLD_FILES+=usr/include/c++/v1/cwctype
OLD_FILES+=usr/include/c++/v1/cxxabi.h
OLD_FILES+=usr/include/c++/v1/deque
OLD_FILES+=usr/include/c++/v1/errno.h
OLD_FILES+=usr/include/c++/v1/exception
OLD_FILES+=usr/include/c++/v1/experimental/__config
OLD_FILES+=usr/include/c++/v1/experimental/__memory
OLD_FILES+=usr/include/c++/v1/experimental/algorithm
OLD_FILES+=usr/include/c++/v1/experimental/any
OLD_FILES+=usr/include/c++/v1/experimental/chrono
OLD_FILES+=usr/include/c++/v1/experimental/coroutine
OLD_FILES+=usr/include/c++/v1/experimental/deque
OLD_FILES+=usr/include/c++/v1/experimental/dynarray
OLD_FILES+=usr/include/c++/v1/experimental/filesystem
OLD_FILES+=usr/include/c++/v1/experimental/forward_list
OLD_FILES+=usr/include/c++/v1/experimental/functional
OLD_FILES+=usr/include/c++/v1/experimental/iterator
OLD_FILES+=usr/include/c++/v1/experimental/list
OLD_FILES+=usr/include/c++/v1/experimental/map
OLD_FILES+=usr/include/c++/v1/experimental/memory_resource
OLD_FILES+=usr/include/c++/v1/experimental/numeric
OLD_FILES+=usr/include/c++/v1/experimental/optional
OLD_FILES+=usr/include/c++/v1/experimental/propagate_const
OLD_FILES+=usr/include/c++/v1/experimental/ratio
OLD_FILES+=usr/include/c++/v1/experimental/regex
OLD_FILES+=usr/include/c++/v1/experimental/set
OLD_FILES+=usr/include/c++/v1/experimental/simd
OLD_FILES+=usr/include/c++/v1/experimental/string
OLD_FILES+=usr/include/c++/v1/experimental/string_view
OLD_FILES+=usr/include/c++/v1/experimental/system_error
OLD_FILES+=usr/include/c++/v1/experimental/tuple
OLD_FILES+=usr/include/c++/v1/experimental/type_traits
OLD_FILES+=usr/include/c++/v1/experimental/unordered_map
OLD_FILES+=usr/include/c++/v1/experimental/unordered_set
OLD_FILES+=usr/include/c++/v1/experimental/utility
OLD_FILES+=usr/include/c++/v1/experimental/vector
OLD_FILES+=usr/include/c++/v1/ext/__hash
OLD_FILES+=usr/include/c++/v1/ext/hash_map
OLD_FILES+=usr/include/c++/v1/ext/hash_set
OLD_FILES+=usr/include/c++/v1/filesystem
OLD_FILES+=usr/include/c++/v1/float.h
OLD_FILES+=usr/include/c++/v1/forward_list
OLD_FILES+=usr/include/c++/v1/fstream
OLD_FILES+=usr/include/c++/v1/functional
OLD_FILES+=usr/include/c++/v1/future
OLD_FILES+=usr/include/c++/v1/initializer_list
OLD_FILES+=usr/include/c++/v1/inttypes.h
OLD_FILES+=usr/include/c++/v1/iomanip
OLD_FILES+=usr/include/c++/v1/ios
OLD_FILES+=usr/include/c++/v1/iosfwd
OLD_FILES+=usr/include/c++/v1/iostream
OLD_FILES+=usr/include/c++/v1/istream
OLD_FILES+=usr/include/c++/v1/iterator
OLD_FILES+=usr/include/c++/v1/limits
OLD_FILES+=usr/include/c++/v1/limits.h
OLD_FILES+=usr/include/c++/v1/list
OLD_FILES+=usr/include/c++/v1/locale
OLD_FILES+=usr/include/c++/v1/locale.h
OLD_FILES+=usr/include/c++/v1/map
OLD_FILES+=usr/include/c++/v1/math.h
OLD_FILES+=usr/include/c++/v1/memory
OLD_FILES+=usr/include/c++/v1/mutex
OLD_FILES+=usr/include/c++/v1/new
OLD_FILES+=usr/include/c++/v1/numeric
OLD_FILES+=usr/include/c++/v1/optional
OLD_FILES+=usr/include/c++/v1/ostream
OLD_FILES+=usr/include/c++/v1/queue
OLD_FILES+=usr/include/c++/v1/random
OLD_FILES+=usr/include/c++/v1/ratio
OLD_FILES+=usr/include/c++/v1/regex
OLD_FILES+=usr/include/c++/v1/scoped_allocator
OLD_FILES+=usr/include/c++/v1/set
OLD_FILES+=usr/include/c++/v1/setjmp.h
OLD_FILES+=usr/include/c++/v1/shared_mutex
OLD_FILES+=usr/include/c++/v1/span
OLD_FILES+=usr/include/c++/v1/sstream
OLD_FILES+=usr/include/c++/v1/stack
OLD_FILES+=usr/include/c++/v1/stdbool.h
OLD_FILES+=usr/include/c++/v1/stddef.h
OLD_FILES+=usr/include/c++/v1/stdexcept
OLD_FILES+=usr/include/c++/v1/stdint.h
OLD_FILES+=usr/include/c++/v1/stdio.h
OLD_FILES+=usr/include/c++/v1/stdlib.h
OLD_FILES+=usr/include/c++/v1/streambuf
OLD_FILES+=usr/include/c++/v1/string
OLD_FILES+=usr/include/c++/v1/string.h
OLD_FILES+=usr/include/c++/v1/string_view
OLD_FILES+=usr/include/c++/v1/strstream
OLD_FILES+=usr/include/c++/v1/system_error
OLD_FILES+=usr/include/c++/v1/tgmath.h
OLD_FILES+=usr/include/c++/v1/thread
OLD_FILES+=usr/include/c++/v1/version
OLD_FILES+=usr/include/c++/v1/tr1/__bit_reference
OLD_FILES+=usr/include/c++/v1/tr1/__bsd_locale_defaults.h
OLD_FILES+=usr/include/c++/v1/tr1/__bsd_locale_fallbacks.h
OLD_FILES+=usr/include/c++/v1/tr1/__config
OLD_FILES+=usr/include/c++/v1/tr1/__debug
OLD_FILES+=usr/include/c++/v1/tr1/__functional_03
OLD_FILES+=usr/include/c++/v1/tr1/__functional_base
OLD_FILES+=usr/include/c++/v1/tr1/__functional_base_03
OLD_FILES+=usr/include/c++/v1/tr1/__hash_table
OLD_FILES+=usr/include/c++/v1/tr1/__libcpp_version
OLD_FILES+=usr/include/c++/v1/tr1/__locale
OLD_FILES+=usr/include/c++/v1/tr1/__mutex_base
OLD_FILES+=usr/include/c++/v1/tr1/__nullptr
OLD_FILES+=usr/include/c++/v1/tr1/__split_buffer
OLD_FILES+=usr/include/c++/v1/tr1/__sso_allocator
OLD_FILES+=usr/include/c++/v1/tr1/__std_stream
OLD_FILES+=usr/include/c++/v1/tr1/__string
OLD_FILES+=usr/include/c++/v1/tr1/__threading_support
OLD_FILES+=usr/include/c++/v1/tr1/__tree
OLD_FILES+=usr/include/c++/v1/tr1/__tuple
OLD_FILES+=usr/include/c++/v1/tr1/__undef_macros
OLD_FILES+=usr/include/c++/v1/tr1/algorithm
OLD_FILES+=usr/include/c++/v1/tr1/any
OLD_FILES+=usr/include/c++/v1/tr1/array
OLD_FILES+=usr/include/c++/v1/tr1/atomic
OLD_FILES+=usr/include/c++/v1/tr1/bitset
OLD_FILES+=usr/include/c++/v1/tr1/cassert
OLD_FILES+=usr/include/c++/v1/tr1/ccomplex
OLD_FILES+=usr/include/c++/v1/tr1/cctype
OLD_FILES+=usr/include/c++/v1/tr1/cerrno
OLD_FILES+=usr/include/c++/v1/tr1/cfenv
OLD_FILES+=usr/include/c++/v1/tr1/cfloat
OLD_FILES+=usr/include/c++/v1/tr1/chrono
OLD_FILES+=usr/include/c++/v1/tr1/cinttypes
OLD_FILES+=usr/include/c++/v1/tr1/ciso646
OLD_FILES+=usr/include/c++/v1/tr1/climits
OLD_FILES+=usr/include/c++/v1/tr1/clocale
OLD_FILES+=usr/include/c++/v1/tr1/cmath
OLD_FILES+=usr/include/c++/v1/tr1/codecvt
OLD_FILES+=usr/include/c++/v1/tr1/complex
OLD_FILES+=usr/include/c++/v1/tr1/complex.h
OLD_FILES+=usr/include/c++/v1/tr1/condition_variable
OLD_FILES+=usr/include/c++/v1/tr1/csetjmp
OLD_FILES+=usr/include/c++/v1/tr1/csignal
OLD_FILES+=usr/include/c++/v1/tr1/cstdarg
OLD_FILES+=usr/include/c++/v1/tr1/cstdbool
OLD_FILES+=usr/include/c++/v1/tr1/cstddef
OLD_FILES+=usr/include/c++/v1/tr1/cstdint
OLD_FILES+=usr/include/c++/v1/tr1/cstdio
OLD_FILES+=usr/include/c++/v1/tr1/cstdlib
OLD_FILES+=usr/include/c++/v1/tr1/cstring
OLD_FILES+=usr/include/c++/v1/tr1/ctgmath
OLD_FILES+=usr/include/c++/v1/tr1/ctime
OLD_FILES+=usr/include/c++/v1/tr1/ctype.h
OLD_FILES+=usr/include/c++/v1/tr1/cwchar
OLD_FILES+=usr/include/c++/v1/tr1/cwctype
OLD_FILES+=usr/include/c++/v1/tr1/deque
OLD_FILES+=usr/include/c++/v1/tr1/errno.h
OLD_FILES+=usr/include/c++/v1/tr1/exception
OLD_FILES+=usr/include/c++/v1/tr1/float.h
OLD_FILES+=usr/include/c++/v1/tr1/forward_list
OLD_FILES+=usr/include/c++/v1/tr1/fstream
OLD_FILES+=usr/include/c++/v1/tr1/functional
OLD_FILES+=usr/include/c++/v1/tr1/future
OLD_FILES+=usr/include/c++/v1/tr1/initializer_list
OLD_FILES+=usr/include/c++/v1/tr1/inttypes.h
OLD_FILES+=usr/include/c++/v1/tr1/iomanip
OLD_FILES+=usr/include/c++/v1/tr1/ios
OLD_FILES+=usr/include/c++/v1/tr1/iosfwd
OLD_FILES+=usr/include/c++/v1/tr1/iostream
OLD_FILES+=usr/include/c++/v1/tr1/istream
OLD_FILES+=usr/include/c++/v1/tr1/iterator
OLD_FILES+=usr/include/c++/v1/tr1/limits
OLD_FILES+=usr/include/c++/v1/tr1/limits.h
OLD_FILES+=usr/include/c++/v1/tr1/list
OLD_FILES+=usr/include/c++/v1/tr1/locale
OLD_FILES+=usr/include/c++/v1/tr1/locale.h
OLD_FILES+=usr/include/c++/v1/tr1/map
OLD_FILES+=usr/include/c++/v1/tr1/math.h
OLD_FILES+=usr/include/c++/v1/tr1/memory
OLD_FILES+=usr/include/c++/v1/tr1/mutex
OLD_FILES+=usr/include/c++/v1/tr1/new
OLD_FILES+=usr/include/c++/v1/tr1/numeric
OLD_FILES+=usr/include/c++/v1/tr1/optional
OLD_FILES+=usr/include/c++/v1/tr1/ostream
OLD_FILES+=usr/include/c++/v1/tr1/queue
OLD_FILES+=usr/include/c++/v1/tr1/random
OLD_FILES+=usr/include/c++/v1/tr1/ratio
OLD_FILES+=usr/include/c++/v1/tr1/regex
OLD_FILES+=usr/include/c++/v1/tr1/scoped_allocator
OLD_FILES+=usr/include/c++/v1/tr1/set
OLD_FILES+=usr/include/c++/v1/tr1/setjmp.h
OLD_FILES+=usr/include/c++/v1/tr1/shared_mutex
OLD_FILES+=usr/include/c++/v1/tr1/sstream
OLD_FILES+=usr/include/c++/v1/tr1/stack
OLD_FILES+=usr/include/c++/v1/tr1/stdbool.h
OLD_FILES+=usr/include/c++/v1/tr1/stddef.h
OLD_FILES+=usr/include/c++/v1/tr1/stdexcept
OLD_FILES+=usr/include/c++/v1/tr1/stdint.h
OLD_FILES+=usr/include/c++/v1/tr1/stdio.h
OLD_FILES+=usr/include/c++/v1/tr1/stdlib.h
OLD_FILES+=usr/include/c++/v1/tr1/streambuf
OLD_FILES+=usr/include/c++/v1/tr1/string
OLD_FILES+=usr/include/c++/v1/tr1/string.h
OLD_FILES+=usr/include/c++/v1/tr1/string_view
OLD_FILES+=usr/include/c++/v1/tr1/strstream
OLD_FILES+=usr/include/c++/v1/tr1/system_error
OLD_FILES+=usr/include/c++/v1/tr1/tgmath.h
OLD_FILES+=usr/include/c++/v1/tr1/thread
OLD_FILES+=usr/include/c++/v1/tr1/tuple
OLD_FILES+=usr/include/c++/v1/tr1/type_traits
OLD_FILES+=usr/include/c++/v1/tr1/typeindex
OLD_FILES+=usr/include/c++/v1/tr1/typeinfo
OLD_FILES+=usr/include/c++/v1/tr1/unordered_map
OLD_FILES+=usr/include/c++/v1/tr1/unordered_set
OLD_FILES+=usr/include/c++/v1/tr1/utility
OLD_FILES+=usr/include/c++/v1/tr1/valarray
OLD_FILES+=usr/include/c++/v1/tr1/variant
OLD_FILES+=usr/include/c++/v1/tr1/vector
OLD_FILES+=usr/include/c++/v1/tr1/wchar.h
OLD_FILES+=usr/include/c++/v1/tr1/wctype.h
OLD_FILES+=usr/include/c++/v1/tuple
OLD_FILES+=usr/include/c++/v1/type_traits
OLD_FILES+=usr/include/c++/v1/typeindex
OLD_FILES+=usr/include/c++/v1/typeinfo
OLD_FILES+=usr/include/c++/v1/unordered_map
OLD_FILES+=usr/include/c++/v1/unordered_set
OLD_FILES+=usr/include/c++/v1/unwind-arm.h
OLD_FILES+=usr/include/c++/v1/unwind-itanium.h
OLD_FILES+=usr/include/c++/v1/unwind.h
OLD_FILES+=usr/include/c++/v1/utility
OLD_FILES+=usr/include/c++/v1/valarray
OLD_FILES+=usr/include/c++/v1/variant
OLD_FILES+=usr/include/c++/v1/vector
OLD_FILES+=usr/include/c++/v1/wchar.h
OLD_FILES+=usr/include/c++/v1/wctype.h
OLD_FILES+=usr/lib32/libc++.a
OLD_FILES+=usr/lib32/libc++.so
OLD_LIBS+=usr/lib32/libc++.so.1
OLD_FILES+=usr/lib32/libc++_p.a
OLD_FILES+=usr/lib32/libc++experimental.a
OLD_FILES+=usr/lib32/libc++fs.a
OLD_FILES+=usr/lib32/libcxxrt.a
OLD_FILES+=usr/lib32/libcxxrt.so
OLD_LIBS+=usr/lib32/libcxxrt.so.1
OLD_FILES+=usr/lib32/libcxxrt_p.a
OLD_DIRS+=usr/include/c++/v1/tr1
OLD_DIRS+=usr/include/c++/v1/experimental
OLD_DIRS+=usr/include/c++/v1/ext
OLD_DIRS+=usr/include/c++/v1
.endif
.if ${MK_LIBTHR} == no
OLD_LIBS+=lib/libthr.so.3
OLD_FILES+=usr/lib/libthr.a
OLD_FILES+=usr/lib/libthr_p.a
OLD_FILES+=usr/share/man/man3/libthr.3.gz
.endif
.if ${MK_LLD} == no
OLD_FILES+=usr/bin/ld.lld
.endif
.if ${MK_LLDB} == no
OLD_FILES+=usr/bin/lldb
OLD_FILES+=usr/share/man/man1/lldb.1.gz
.endif
.if ${MK_LOCALES} == no
OLD_DIRS+=usr/share/locale
OLD_DIRS+=usr/share/locale/af_ZA.ISO8859-15
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/af_ZA.ISO8859-1
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/af_ZA.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/af_ZA.UTF-8
OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/af_ZA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/am_ET.UTF-8
OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/am_ET.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ar_AE.UTF-8
OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ar_AE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ar_EG.UTF-8
OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ar_EG.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ar_JO.UTF-8
OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ar_JO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ar_MA.UTF-8
OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ar_MA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ar_QA.UTF-8
OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ar_QA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ar_SA.UTF-8
OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ar_SA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/be_BY.CP1131
OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_COLLATE
OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_CTYPE
OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_MESSAGES
OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_MONETARY
OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_NUMERIC
OLD_FILES+=usr/share/locale/be_BY.CP1131/LC_TIME
OLD_DIRS+=usr/share/locale/be_BY.CP1251
OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_COLLATE
OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_CTYPE
OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_MESSAGES
OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_MONETARY
OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_NUMERIC
OLD_FILES+=usr/share/locale/be_BY.CP1251/LC_TIME
OLD_DIRS+=usr/share/locale/be_BY.ISO8859-5
OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_COLLATE
OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_CTYPE
OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_MESSAGES
OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_MONETARY
OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_NUMERIC
OLD_FILES+=usr/share/locale/be_BY.ISO8859-5/LC_TIME
OLD_DIRS+=usr/share/locale/be_BY.UTF-8
OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/be_BY.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/bg_BG.CP1251
OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_COLLATE
OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_CTYPE
OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_MESSAGES
OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_MONETARY
OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_NUMERIC
OLD_FILES+=usr/share/locale/bg_BG.CP1251/LC_TIME
OLD_DIRS+=usr/share/locale/bg_BG.UTF-8
OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/bg_BG.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ca_AD.ISO8859-1
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/ca_AD.ISO8859-15
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_AD.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/ca_AD.UTF-8
OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_AD.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ca_ES.ISO8859-1
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/ca_ES.ISO8859-15
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_ES.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/ca_ES.UTF-8
OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_ES.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ca_FR.ISO8859-1
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/ca_FR.ISO8859-15
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_FR.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/ca_FR.UTF-8
OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_FR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ca_IT.ISO8859-1
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/ca_IT.ISO8859-15
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_IT.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/ca_IT.UTF-8
OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ca_IT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/cs_CZ.ISO8859-2
OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/cs_CZ.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/cs_CZ.UTF-8
OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/cs_CZ.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/da_DK.ISO8859-1
OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/da_DK.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/da_DK.ISO8859-15
OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/da_DK.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/da_DK.UTF-8
OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/da_DK.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/de_AT.ISO8859-1
OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_AT.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/de_AT.ISO8859-15
OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_AT.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/de_AT.UTF-8
OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_AT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/de_CH.ISO8859-1
OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_CH.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/de_CH.ISO8859-15
OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_CH.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/de_CH.UTF-8
OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_CH.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/de_DE.ISO8859-1
OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_DE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/de_DE.ISO8859-15
OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_DE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/de_DE.UTF-8
OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/de_DE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/el_GR.ISO8859-7
OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_COLLATE
OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_CTYPE
OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_MESSAGES
OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_MONETARY
OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_NUMERIC
OLD_FILES+=usr/share/locale/el_GR.ISO8859-7/LC_TIME
OLD_DIRS+=usr/share/locale/el_GR.UTF-8
OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/el_GR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_AU.ISO8859-1
OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_AU.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_AU.ISO8859-15
OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_AU.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/en_AU.US-ASCII
OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_COLLATE
OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_CTYPE
OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_MONETARY
OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_AU.US-ASCII/LC_TIME
OLD_DIRS+=usr/share/locale/en_AU.UTF-8
OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_AU.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_CA.ISO8859-1
OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_CA.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_CA.ISO8859-15
OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_CA.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/en_CA.US-ASCII
OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_COLLATE
OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_CTYPE
OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_MONETARY
OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_CA.US-ASCII/LC_TIME
OLD_DIRS+=usr/share/locale/en_CA.UTF-8
OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_CA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_GB.ISO8859-1
OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_GB.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_GB.ISO8859-15
OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_GB.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/en_GB.US-ASCII
OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_COLLATE
OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_CTYPE
OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_MONETARY
OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_GB.US-ASCII/LC_TIME
OLD_DIRS+=usr/share/locale/en_GB.UTF-8
OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_GB.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_HK.ISO8859-1
OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_HK.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_HK.UTF-8
OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_HK.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_IE.ISO8859-1
OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_IE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_IE.ISO8859-15
OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_IE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/en_IE.UTF-8
OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_IE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_NZ.ISO8859-1
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_NZ.ISO8859-15
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_NZ.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/en_NZ.US-ASCII
OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_COLLATE
OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_CTYPE
OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_MONETARY
OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_NZ.US-ASCII/LC_TIME
OLD_DIRS+=usr/share/locale/en_NZ.UTF-8
OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_NZ.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_PH.UTF-8
OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_PH.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_SG.ISO8859-1
OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_SG.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_SG.UTF-8
OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_SG.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_US.ISO8859-1
OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_US.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_US.ISO8859-15
OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_US.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/en_US.US-ASCII
OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_COLLATE
OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_CTYPE
OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_MONETARY
OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_US.US-ASCII/LC_TIME
OLD_DIRS+=usr/share/locale/en_US.UTF-8
OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_US.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/en_ZA.ISO8859-1
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/en_ZA.ISO8859-15
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_ZA.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/en_ZA.US-ASCII
OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_COLLATE
OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_CTYPE
OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_MONETARY
OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_ZA.US-ASCII/LC_TIME
OLD_DIRS+=usr/share/locale/en_ZA.UTF-8
OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/en_ZA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/es_AR.ISO8859-1
OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_AR.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/es_AR.UTF-8
OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_AR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/es_CR.UTF-8
OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_CR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/es_ES.ISO8859-1
OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_ES.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/es_ES.ISO8859-15
OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_ES.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/es_ES.UTF-8
OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_ES.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/es_MX.ISO8859-1
OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_MX.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/es_MX.UTF-8
OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/es_MX.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/et_EE.ISO8859-1
OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/et_EE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/et_EE.ISO8859-15
OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/et_EE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/et_EE.UTF-8
OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/et_EE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/eu_ES.ISO8859-1
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/eu_ES.ISO8859-15
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/eu_ES.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/eu_ES.UTF-8
OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/eu_ES.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/fi_FI.ISO8859-1
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/fi_FI.ISO8859-15
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/fi_FI.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/fi_FI.UTF-8
OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/fi_FI.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/fr_BE.ISO8859-1
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/fr_BE.ISO8859-15
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_BE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/fr_BE.UTF-8
OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_BE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/fr_CA.ISO8859-1
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/fr_CA.ISO8859-15
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CA.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/fr_CA.UTF-8
OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/fr_CH.ISO8859-1
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/fr_CH.ISO8859-15
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CH.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/fr_CH.UTF-8
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_CH.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/fr_FR.ISO8859-1
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/fr_FR.ISO8859-15
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_FR.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/fr_FR.UTF-8
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/fr_FR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/he_IL.UTF-8
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/he_IL.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hi_IN.ISCII-DEV
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_COLLATE
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_CTYPE
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_MESSAGES
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_MONETARY
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_NUMERIC
OLD_FILES+=usr/share/locale/hi_IN.ISCII-DEV/LC_TIME
OLD_DIRS+=usr/share/locale/hi_IN.UTF-8
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hi_IN.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hr_HR.ISO8859-2
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/hr_HR.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/hr_HR.UTF-8
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hr_HR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hu_HU.ISO8859-2
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/hu_HU.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/hu_HU.UTF-8
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hu_HU.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/hy_AM.ARMSCII-8
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hy_AM.ARMSCII-8/LC_TIME
OLD_DIRS+=usr/share/locale/hy_AM.UTF-8
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/hy_AM.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/is_IS.ISO8859-1
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/is_IS.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/is_IS.ISO8859-15
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/is_IS.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/is_IS.UTF-8
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/is_IS.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/it_CH.ISO8859-1
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_CH.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/it_CH.ISO8859-15
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_CH.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/it_CH.UTF-8
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_CH.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/it_IT.ISO8859-1
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_IT.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/it_IT.ISO8859-15
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_IT.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/it_IT.UTF-8
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/it_IT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ja_JP.eucJP
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_COLLATE
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_CTYPE
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_MESSAGES
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_MONETARY
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_NUMERIC
OLD_FILES+=usr/share/locale/ja_JP.eucJP/LC_TIME
OLD_DIRS+=usr/share/locale/ja_JP.SJIS
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_COLLATE
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_CTYPE
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_MESSAGES
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_MONETARY
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_NUMERIC
OLD_FILES+=usr/share/locale/ja_JP.SJIS/LC_TIME
OLD_DIRS+=usr/share/locale/ja_JP.UTF-8
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ja_JP.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/kk_KZ.UTF-8
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/kk_KZ.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ko_KR.CP949
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_COLLATE
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_CTYPE
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_MESSAGES
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_MONETARY
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_NUMERIC
OLD_FILES+=usr/share/locale/ko_KR.CP949/LC_TIME
OLD_DIRS+=usr/share/locale/ko_KR.eucKR
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_COLLATE
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_CTYPE
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_MESSAGES
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_MONETARY
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_NUMERIC
OLD_FILES+=usr/share/locale/ko_KR.eucKR/LC_TIME
OLD_DIRS+=usr/share/locale/ko_KR.UTF-8
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ko_KR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/lt_LT.ISO8859-13
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_COLLATE
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_CTYPE
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_MESSAGES
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_MONETARY
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_NUMERIC
OLD_FILES+=usr/share/locale/lt_LT.ISO8859-13/LC_TIME
OLD_DIRS+=usr/share/locale/lt_LT.UTF-8
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/lt_LT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/lv_LV.ISO8859-13
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_COLLATE
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_CTYPE
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_MESSAGES
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_MONETARY
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_NUMERIC
OLD_FILES+=usr/share/locale/lv_LV.ISO8859-13/LC_TIME
OLD_DIRS+=usr/share/locale/lv_LV.UTF-8
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/lv_LV.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/mn_MN.UTF-8
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/mn_MN.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nb_NO.ISO8859-1
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nb_NO.ISO8859-15
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nb_NO.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nb_NO.UTF-8
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nb_NO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nl_BE.ISO8859-1
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nl_BE.ISO8859-15
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_BE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nl_BE.UTF-8
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_BE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nl_NL.ISO8859-1
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nl_NL.ISO8859-15
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_NL.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nl_NL.UTF-8
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nl_NL.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/nn_NO.ISO8859-1
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/nn_NO.ISO8859-15
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/nn_NO.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/nn_NO.UTF-8
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/nn_NO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/pl_PL.ISO8859-2
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/pl_PL.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/pl_PL.UTF-8
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/pl_PL.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/pt_BR.ISO8859-1
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_BR.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/pt_BR.UTF-8
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_BR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/pt_PT.ISO8859-1
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/pt_PT.ISO8859-15
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_PT.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/pt_PT.UTF-8
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/pt_PT.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ro_RO.ISO8859-2
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/ro_RO.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/ro_RO.UTF-8
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ro_RO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.CP1251
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.CP1251/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.CP866
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.CP866/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.ISO8859-5
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.ISO8859-5/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.KOI8-R
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.KOI8-R/LC_TIME
OLD_DIRS+=usr/share/locale/ru_RU.UTF-8
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/ru_RU.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/se_FI.UTF-8
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/se_FI.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/se_NO.UTF-8
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/se_NO.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sk_SK.ISO8859-2
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/sk_SK.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/sk_SK.UTF-8
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sk_SK.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sl_SI.ISO8859-2
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/sl_SI.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/sl_SI.UTF-8
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sl_SI.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.ISO8859-5
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-5/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.UTF-8
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.ISO8859-2
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.ISO8859-2/LC_TIME
OLD_DIRS+=usr/share/locale/sr_RS.UTF-8@latin
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_COLLATE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_CTYPE
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_MESSAGES
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_MONETARY
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_NUMERIC
OLD_FILES+=usr/share/locale/sr_RS.UTF-8@latin/LC_TIME
OLD_DIRS+=usr/share/locale/sv_FI.ISO8859-1
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/sv_FI.ISO8859-15
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_FI.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/sv_FI.UTF-8
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_FI.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/sv_SE.ISO8859-1
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-1/LC_TIME
OLD_DIRS+=usr/share/locale/sv_SE.ISO8859-15
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_SE.ISO8859-15/LC_TIME
OLD_DIRS+=usr/share/locale/sv_SE.UTF-8
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/sv_SE.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/tr_TR.ISO8859-9
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_COLLATE
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_CTYPE
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_MESSAGES
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_MONETARY
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_NUMERIC
OLD_FILES+=usr/share/locale/tr_TR.ISO8859-9/LC_TIME
OLD_DIRS+=usr/share/locale/tr_TR.UTF-8
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/tr_TR.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.CP1251
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.CP1251/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.ISO8859-5
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.ISO8859-5/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.KOI8-U
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.KOI8-U/LC_TIME
OLD_DIRS+=usr/share/locale/uk_UA.UTF-8
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/uk_UA.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.eucCN
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.eucCN/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.GB18030
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.GB18030/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.GB2312
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.GB2312/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.GBK
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.GBK/LC_TIME
OLD_DIRS+=usr/share/locale/zh_CN.UTF-8
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_CN.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/zh_HK.UTF-8
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_HK.UTF-8/LC_TIME
OLD_DIRS+=usr/share/locale/zh_TW.Big5
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_TW.Big5/LC_TIME
OLD_DIRS+=usr/share/locale/zh_TW.UTF-8
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_COLLATE
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_CTYPE
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_MESSAGES
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_MONETARY
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_NUMERIC
OLD_FILES+=usr/share/locale/zh_TW.UTF-8/LC_TIME
.endif
.if ${MK_LOCATE} == no
OLD_FILES+=etc/locate.rc
OLD_FILES+=etc/periodic/weekly/310.locate
OLD_FILES+=usr/bin/locate
OLD_FILES+=usr/libexec/locate.bigram
OLD_FILES+=usr/libexec/locate.code
OLD_FILES+=usr/libexec/locate.concatdb
OLD_FILES+=usr/libexec/locate.mklocatedb
OLD_FILES+=usr/libexec/locate.updatedb
OLD_FILES+=usr/share/man/man1/locate.1.gz
OLD_FILES+=usr/share/man/man8/locate.updatedb.8.gz
OLD_FILES+=usr/share/man/man8/updatedb.8.gz
.endif
.if ${MK_LPR} == no
OLD_FILES+=etc/hosts.lpd
OLD_FILES+=etc/printcap
OLD_FILES+=etc/newsyslog.conf.d/lpr.conf
OLD_FILES+=etc/rc.d/lpd
OLD_FILES+=etc/syslog.d/lpr.conf
OLD_FILES+=usr/bin/lp
OLD_FILES+=usr/bin/lpq
OLD_FILES+=usr/bin/lpr
OLD_FILES+=usr/bin/lprm
OLD_FILES+=usr/libexec/lpr/ru/bjc-240.sh.sample
OLD_FILES+=usr/libexec/lpr/ru/koi2alt
OLD_FILES+=usr/libexec/lpr/ru/koi2855
OLD_DIRS+=usr/libexec/lpr/ru
OLD_FILES+=usr/libexec/lpr/lpf
OLD_DIRS+=usr/libexec/lpr
OLD_FILES+=usr/sbin/chkprintcap
OLD_FILES+=usr/sbin/lpc
OLD_FILES+=usr/sbin/lpd
OLD_FILES+=usr/sbin/lptest
OLD_FILES+=usr/sbin/pac
OLD_FILES+=usr/share/doc/smm/07.lpd/paper.ascii.gz
OLD_DIRS+=usr/share/doc/smm/07.lpd
OLD_FILES+=usr/share/examples/etc/hosts.lpd
OLD_FILES+=usr/share/examples/etc/printcap
OLD_FILES+=usr/share/man/man1/lp.1.gz
OLD_FILES+=usr/share/man/man1/lpq.1.gz
OLD_FILES+=usr/share/man/man1/lpr.1.gz
OLD_FILES+=usr/share/man/man1/lprm.1.gz
OLD_FILES+=usr/share/man/man1/lptest.1.gz
OLD_FILES+=usr/share/man/man5/printcap.5.gz
OLD_FILES+=usr/share/man/man8/chkprintcap.8.gz
OLD_FILES+=usr/share/man/man8/lpc.8.gz
OLD_FILES+=usr/share/man/man8/lpd.8.gz
OLD_FILES+=usr/share/man/man8/pac.8.gz
.endif
.if ${MK_MAIL} == no
OLD_FILES+=etc/aliases
OLD_FILES+=etc/mail.rc
OLD_FILES+=etc/mail/aliases
OLD_FILES+=etc/mail/mailer.conf
OLD_FILES+=etc/periodic/daily/130.clean-msgs
OLD_FILES+=usr/bin/Mail
OLD_FILES+=usr/bin/biff
OLD_FILES+=usr/bin/from
OLD_FILES+=usr/bin/mail
OLD_FILES+=usr/bin/mailx
OLD_FILES+=usr/bin/msgs
OLD_FILES+=usr/libexec/comsat
OLD_FILES+=usr/share/examples/etc/mail.rc
OLD_FILES+=usr/share/man/man1/Mail.1.gz
OLD_FILES+=usr/share/man/man1/biff.1.gz
OLD_FILES+=usr/share/man/man1/from.1.gz
OLD_FILES+=usr/share/man/man1/mail.1.gz
OLD_FILES+=usr/share/man/man1/mailx.1.gz
OLD_FILES+=usr/share/man/man1/msgs.1.gz
OLD_FILES+=usr/share/man/man8/comsat.8.gz
OLD_FILES+=usr/share/misc/mail.help
OLD_FILES+=usr/share/misc/mail.tildehelp
.endif
.if ${MK_MAILWRAPPER} == no
OLD_FILES+=etc/mail/mailer.conf
# Don't remove, for no mailwrapper case:
# /usr/sbin/sendmail -> /usr/sbin/mailwrapper
# /usr/sbin/mailwrapper -> /usr/libexec/sendmail/sendmail
#OLD_FILES+=usr/sbin/mailwrapper
OLD_FILES+=usr/share/man/man8/mailwrapper.8.gz
.endif
.if ${MK_MAKE} == no
OLD_FILES+=usr/bin/make
OLD_FILES+=usr/share/man/man1/make.1.gz
OLD_FILES+=usr/share/mk/atf.test.mk
OLD_FILES+=usr/share/mk/bsd.README
OLD_FILES+=usr/share/mk/bsd.arch.inc.mk
OLD_FILES+=usr/share/mk/bsd.compiler.mk
OLD_FILES+=usr/share/mk/bsd.cpu.mk
OLD_FILES+=usr/share/mk/bsd.crunchgen.mk
OLD_FILES+=usr/share/mk/bsd.dep.mk
OLD_FILES+=usr/share/mk/bsd.doc.mk
OLD_FILES+=usr/share/mk/bsd.dtb.mk
OLD_FILES+=usr/share/mk/bsd.endian.mk
OLD_FILES+=usr/share/mk/bsd.files.mk
OLD_FILES+=usr/share/mk/bsd.incs.mk
OLD_FILES+=usr/share/mk/bsd.info.mk
OLD_FILES+=usr/share/mk/bsd.init.mk
OLD_FILES+=usr/share/mk/bsd.kmod.mk
OLD_FILES+=usr/share/mk/bsd.lib.mk
OLD_FILES+=usr/share/mk/bsd.libnames.mk
OLD_FILES+=usr/share/mk/bsd.links.mk
OLD_FILES+=usr/share/mk/bsd.man.mk
OLD_FILES+=usr/share/mk/bsd.mkopt.mk
OLD_FILES+=usr/share/mk/bsd.nls.mk
OLD_FILES+=usr/share/mk/bsd.obj.mk
OLD_FILES+=usr/share/mk/bsd.opts.mk
OLD_FILES+=usr/share/mk/bsd.own.mk
OLD_FILES+=usr/share/mk/bsd.port.mk
OLD_FILES+=usr/share/mk/bsd.port.options.mk
OLD_FILES+=usr/share/mk/bsd.port.post.mk
OLD_FILES+=usr/share/mk/bsd.port.pre.mk
OLD_FILES+=usr/share/mk/bsd.port.subdir.mk
OLD_FILES+=usr/share/mk/bsd.prog.mk
OLD_FILES+=usr/share/mk/bsd.progs.mk
OLD_FILES+=usr/share/mk/bsd.snmpmod.mk
OLD_FILES+=usr/share/mk/bsd.subdir.mk
OLD_FILES+=usr/share/mk/bsd.symver.mk
OLD_FILES+=usr/share/mk/bsd.sys.mk
OLD_FILES+=usr/share/mk/bsd.test.mk
OLD_FILES+=usr/share/mk/plain.test.mk
OLD_FILES+=usr/share/mk/suite.test.mk
OLD_FILES+=usr/share/mk/sys.mk
OLD_FILES+=usr/share/mk/tap.test.mk
OLD_FILES+=usr/share/mk/version_gen.awk
OLD_FILES+=usr/tests/usr.bin/bmake/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.status.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd/libtest.a
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.status.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_44bsd_mod/libtest.a
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.status.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/archives/fmt_oldbsd/libtest.a
OLD_FILES+=usr/tests/usr.bin/bmake/basic/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t0/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t2/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/basic/t3/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/common.sh
OLD_FILES+=usr/tests/usr.bin/bmake/execution/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/ellipsis/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/empty/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/joberr/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/execution/plus/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/builtin/sh
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/meta/sh
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path/sh
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/path_select/shell
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/replace/shell
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/shell/select/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/TEST1.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/basic/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/TEST1.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/TEST2.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/TEST1.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/TEST2.a
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/suffixes/src_wild2/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/directive-t0/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.4
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.status.5
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/enl/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/funny-targets/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/syntax/semi/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t0/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/cleanup
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t1/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/cleanup
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/sysmk/t2/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/bmake/test-new.mk
OLD_FILES+=usr/tests/usr.bin/bmake/variables/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_M/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.status.3
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/bmake/variables/modifier_t/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.status.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/bmake/variables/opt_V/legacy_test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/Makefile.test
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/bmake/variables/t0/legacy_test
.endif
.if ${MK_MAN} == no
MAN_FILES!=find ${DESTDIR}/usr/share/man ${DESTDIR}/usr/share/openssl/man -type f | sed -e 's,^${DESTDIR}/,,'; echo
OLD_FILES+=${MAN_FILES}
MAN_DIRS!=find ${DESTDIR}/usr/share/man ${DESTDIR}/usr/share/openssl/man -type d | sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${MAN_DIRS}
.endif
.if ${MK_MAN_UTILS} == no
OLD_FILES+=etc/periodic/weekly/320.whatis
OLD_FILES+=usr/bin/apropos
OLD_FILES+=usr/bin/makewhatis
OLD_FILES+=usr/bin/man
OLD_FILES+=usr/bin/manpath
OLD_FILES+=usr/bin/whatis
OLD_FILES+=usr/libexec/makewhatis.local
OLD_FILES+=usr/sbin/manctl
OLD_FILES+=usr/share/man/man1/apropos.1.gz
OLD_FILES+=usr/share/man/man1/makewhatis.1.gz
OLD_FILES+=usr/share/man/man1/man.1.gz
OLD_FILES+=usr/share/man/man1/manpath.1.gz
OLD_FILES+=usr/share/man/man1/whatis.1.gz
OLD_FILES+=usr/share/man/man5/man.conf.5.gz
OLD_FILES+=usr/share/man/man8/makewhatis.local.8.gz
OLD_FILES+=usr/share/man/man8/manctl.8.gz
OLD_FILES+=usr/share/man/whatis
OLD_FILES+=usr/share/openssl/man/whatis
.endif
.if ${MK_NDIS} == no
OLD_FILES+=usr/sbin/ndiscvt
OLD_FILES+=usr/sbin/ndisgen
OLD_FILES+=usr/share/man/man8/ndiscvt.8.gz
OLD_FILES+=usr/share/man/man8/ndisgen.8.gz
OLD_FILES+=usr/share/misc/windrv_stub.c
.endif
.if ${MK_NETCAT} == no
OLD_FILES+=rescue/nc
OLD_FILES+=usr/bin/nc
OLD_FILES+=usr/share/man/man1/nc.1.gz
.endif
.if ${MK_NETGRAPH} == no
OLD_FILES+=usr/include/netgraph.h
OLD_FILES+=usr/lib/libnetgraph.a
OLD_FILES+=usr/lib/libnetgraph.so
OLD_LIBS+=usr/lib/libnetgraph.so.4
OLD_FILES+=usr/lib/libnetgraph_p.a
OLD_FILES+=usr/lib32/libnetgraph.a
OLD_FILES+=usr/lib32/libnetgraph.so
OLD_LIBS+=usr/lib32/libnetgraph.so.4
OLD_FILES+=usr/lib32/libnetgraph_p.a
OLD_FILES+=usr/libexec/pppoed
OLD_FILES+=usr/sbin/flowctl
OLD_FILES+=usr/sbin/lmcconfig
OLD_FILES+=usr/sbin/ngctl
OLD_FILES+=usr/sbin/nghook
OLD_FILES+=usr/share/man/man3/NgAllocRecvAsciiMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgAllocRecvData.3.gz
OLD_FILES+=usr/share/man/man3/NgAllocRecvMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgMkSockNode.3.gz
OLD_FILES+=usr/share/man/man3/NgNameNode.3.gz
OLD_FILES+=usr/share/man/man3/NgRecvAsciiMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgRecvData.3.gz
OLD_FILES+=usr/share/man/man3/NgRecvMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgSendAsciiMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgSendData.3.gz
OLD_FILES+=usr/share/man/man3/NgSendMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgSendReplyMsg.3.gz
OLD_FILES+=usr/share/man/man3/NgSetDebug.3.gz
OLD_FILES+=usr/share/man/man3/NgSetErrLog.3.gz
OLD_FILES+=usr/share/man/man3/netgraph.3.gz
OLD_FILES+=usr/share/man/man8/flowctl.8.gz
OLD_FILES+=usr/share/man/man8/lmcconfig.8.gz
OLD_FILES+=usr/share/man/man8/ngctl.8.gz
OLD_FILES+=usr/share/man/man8/nghook.8.gz
OLD_FILES+=usr/share/man/man8/pppoed.8.gz
.endif
.if ${MK_NETGRAPH_SUPPORT} == no
OLD_FILES+=usr/include/bsnmp/snmp_netgraph.h
OLD_FILES+=usr/lib/snmp_netgraph.so
OLD_LIBS+=usr/lib/snmp_netgraph.so.6
OLD_FILES+=usr/share/man/man3/snmp_netgraph.3.gz
OLD_FILES+=usr/share/snmp/defs/netgraph_tree.def
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-NETGRAPH.txt
.endif
.if ${MK_NIS} == no
OLD_FILES+=etc/rc.d/ypbind
OLD_FILES+=etc/rc.d/ypldap
OLD_FILES+=etc/rc.d/yppasswdd
OLD_FILES+=etc/rc.d/ypserv
OLD_FILES+=etc/rc.d/ypset
OLD_FILES+=etc/rc.d/ypupdated
OLD_FILES+=etc/rc.d/ypxfrd
OLD_FILES+=usr/bin/ypcat
OLD_FILES+=usr/bin/ypchfn
OLD_FILES+=usr/bin/ypchpass
OLD_FILES+=usr/bin/ypchsh
-OLD_FILES+=usr/bin/ypldap
OLD_FILES+=usr/bin/ypmatch
OLD_FILES+=usr/bin/yppasswd
OLD_FILES+=usr/bin/ypwhich
OLD_FILES+=usr/include/ypclnt.h
OLD_FILES+=usr/lib/libypclnt.a
OLD_FILES+=usr/lib/libypclnt.so
OLD_LIBS+=usr/lib/libypclnt.so.4
OLD_FILES+=usr/lib/libypclnt_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libypclnt.a
OLD_FILES+=usr/lib32/libypclnt.so
OLD_LIBS+=usr/lib32/libypclnt.so.4
OLD_FILES+=usr/lib32/libypclnt_p.a
.endif
OLD_FILES+=usr/libexec/mknetid
OLD_FILES+=usr/libexec/yppwupdate
OLD_FILES+=usr/libexec/ypxfr
OLD_FILES+=usr/sbin/rpc.yppasswdd
OLD_FILES+=usr/sbin/rpc.ypupdated
OLD_FILES+=usr/sbin/rpc.ypxfrd
OLD_FILES+=usr/sbin/yp_mkdb
OLD_FILES+=usr/sbin/ypbind
OLD_FILES+=usr/sbin/ypinit
+OLD_FILES+=usr/sbin/ypldap
OLD_FILES+=usr/sbin/yppoll
OLD_FILES+=usr/sbin/yppush
OLD_FILES+=usr/sbin/ypserv
OLD_FILES+=usr/sbin/ypset
OLD_FILES+=usr/share/man/man1/ypcat.1.gz
OLD_FILES+=usr/share/man/man1/ypchfn.1.gz
OLD_FILES+=usr/share/man/man1/ypchpass.1.gz
OLD_FILES+=usr/share/man/man1/ypchsh.1.gz
OLD_FILES+=usr/share/man/man1/ypmatch.1.gz
OLD_FILES+=usr/share/man/man1/yppasswd.1.gz
OLD_FILES+=usr/share/man/man1/ypwhich.1.gz
OLD_FILES+=usr/share/man/man5/netid.5.gz
+OLD_FILES+=usr/share/man/man5/ypldap.conf.5.gz
OLD_FILES+=usr/share/man/man8/mknetid.8.gz
OLD_FILES+=usr/share/man/man8/rpc.yppasswdd.8.gz
OLD_FILES+=usr/share/man/man8/rpc.ypxfrd.8.gz
OLD_FILES+=usr/share/man/man8/NIS.8.gz
OLD_FILES+=usr/share/man/man8/YP.8.gz
OLD_FILES+=usr/share/man/man8/yp.8.gz
OLD_FILES+=usr/share/man/man8/nis.8.gz
OLD_FILES+=usr/share/man/man8/yp_mkdb.8.gz
OLD_FILES+=usr/share/man/man8/ypbind.8.gz
OLD_FILES+=usr/share/man/man8/ypinit.8.gz
+OLD_FILES+=usr/share/man/man8/ypldap.8.gz
OLD_FILES+=usr/share/man/man8/yppoll.8.gz
OLD_FILES+=usr/share/man/man8/yppush.8.gz
OLD_FILES+=usr/share/man/man8/ypserv.8.gz
OLD_FILES+=usr/share/man/man8/ypset.8.gz
OLD_FILES+=usr/share/man/man8/ypxfr.8.gz
OLD_FILES+=var/yp/Makefile
OLD_FILES+=var/yp/Makefile.dist
+OLD_DIRS+=var/yp
.endif
.if ${MK_NLS} == no
OLD_DIRS+=usr/share/nls/
OLD_DIRS+=usr/share/nls/C
OLD_FILES+=usr/share/mk/bsd.nls.mk
OLD_FILES+=usr/share/nls/C/ee.cat
OLD_DIRS+=usr/share/nls/af_ZA.ISO8859-1
OLD_DIRS+=usr/share/nls/af_ZA.ISO8859-15
OLD_DIRS+=usr/share/nls/af_ZA.UTF-8
OLD_DIRS+=usr/share/nls/am_ET.UTF-8
OLD_DIRS+=usr/share/nls/be_BY.CP1131
OLD_DIRS+=usr/share/nls/be_BY.CP1251
OLD_DIRS+=usr/share/nls/be_BY.ISO8859-5
OLD_DIRS+=usr/share/nls/be_BY.UTF-8
OLD_FILES+=usr/share/nls/be_BY.UTF-8/libc.cat
OLD_DIRS+=usr/share/nls/bg_BG.CP1251
OLD_DIRS+=usr/share/nls/bg_BG.UTF-8
OLD_DIRS+=usr/share/nls/ca_ES.ISO8859-1
OLD_FILES+=usr/share/nls/ca_ES.ISO8859-1/libc.cat
OLD_DIRS+=usr/share/nls/ca_ES.ISO8859-15
OLD_DIRS+=usr/share/nls/ca_ES.UTF-8
OLD_DIRS+=usr/share/nls/cs_CZ.ISO8859-2
OLD_DIRS+=usr/share/nls/cs_CZ.UTF-8
OLD_DIRS+=usr/share/nls/da_DK.ISO8859-1
OLD_DIRS+=usr/share/nls/da_DK.ISO8859-15
OLD_DIRS+=usr/share/nls/da_DK.UTF-8
OLD_DIRS+=usr/share/nls/de_AT.ISO8859-1
OLD_FILES+=usr/share/nls/de_AT.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/de_AT.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/de_AT.ISO8859-15
OLD_FILES+=usr/share/nls/de_AT.ISO8859-15/ee.cat
OLD_FILES+=usr/share/nls/de_AT.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/de_AT.UTF-8
OLD_FILES+=usr/share/nls/de_AT.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/de_CH.ISO8859-1
OLD_FILES+=usr/share/nls/de_CH.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/de_CH.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/de_CH.ISO8859-15
OLD_FILES+=usr/share/nls/de_CH.ISO8859-15/ee.cat
OLD_FILES+=usr/share/nls/de_CH.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/de_CH.UTF-8
OLD_FILES+=usr/share/nls/de_CH.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/de_DE.ISO8859-1
OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/libc.cat
OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/de_DE.ISO8859-15
OLD_FILES+=usr/share/nls/de_DE.ISO8859-15/ee.cat
OLD_FILES+=usr/share/nls/de_DE.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/de_DE.UTF-8
OLD_FILES+=usr/share/nls/de_DE.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/el_GR.ISO8859-7
OLD_FILES+=usr/share/nls/el_GR.ISO8859-7/libc.cat
OLD_FILES+=usr/share/nls/el_GR.ISO8859-7/tcsh.cat
OLD_DIRS+=usr/share/nls/el_GR.UTF-8
OLD_FILES+=usr/share/nls/el_GR.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/en_AU.ISO8859-1
OLD_DIRS+=usr/share/nls/en_AU.ISO8859-15
OLD_DIRS+=usr/share/nls/en_AU.US-ASCII
OLD_DIRS+=usr/share/nls/en_AU.UTF-8
OLD_DIRS+=usr/share/nls/en_CA.ISO8859-1
OLD_FILES+=usr/share/nls/en_US.ISO8859-1/ee.cat
OLD_DIRS+=usr/share/nls/en_CA.ISO8859-15
OLD_DIRS+=usr/share/nls/en_CA.US-ASCII
OLD_DIRS+=usr/share/nls/en_CA.UTF-8
OLD_DIRS+=usr/share/nls/en_GB.ISO8859-1
OLD_DIRS+=usr/share/nls/en_GB.ISO8859-15
OLD_DIRS+=usr/share/nls/en_GB.US-ASCII
OLD_DIRS+=usr/share/nls/en_GB.UTF-8
OLD_DIRS+=usr/share/nls/en_IE.UTF-8
OLD_DIRS+=usr/share/nls/en_NZ.ISO8859-1
OLD_DIRS+=usr/share/nls/en_NZ.ISO8859-15
OLD_DIRS+=usr/share/nls/en_NZ.US-ASCII
OLD_DIRS+=usr/share/nls/en_NZ.UTF-8
OLD_DIRS+=usr/share/nls/en_US.ISO8859-1
OLD_DIRS+=usr/share/nls/en_US.ISO8859-15
OLD_FILES+=usr/share/nls/en_US.ISO8859-15/ee.cat
OLD_DIRS+=usr/share/nls/en_US.UTF-8
OLD_DIRS+=usr/share/nls/es_ES.UTF-8
OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/grep.cat
OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/libc.cat
OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/es_ES.ISO8859-1
OLD_DIRS+=usr/share/nls/es_ES.ISO8859-15
OLD_FILES+=usr/share/nls/es_ES.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/es_ES.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/et_EE.ISO8859-15
OLD_FILES+=usr/share/nls/et_EE.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/et_EE.UTF-8
OLD_FILES+=usr/share/nls/et_EE.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/fi_FI.ISO8859-1
OLD_FILES+=usr/share/nls/fi_FI.ISO8859-1/libc.cat
OLD_FILES+=usr/share/nls/fi_FI.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/fi_FI.ISO8859-15
OLD_FILES+=usr/share/nls/fi_FI.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/fi_FI.UTF-8
OLD_FILES+=usr/share/nls/fi_FI.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_BE.ISO8859-1
OLD_FILES+=usr/share/nls/fr_BE.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/fr_BE.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_BE.ISO8859-15
OLD_FILES+=usr/share/nls/fr_BE.ISO8859-15/ee.cat
OLD_FILES+=usr/share/nls/fr_BE.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_BE.UTF-8
OLD_FILES+=usr/share/nls/fr_BE.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_CA.ISO8859-1
OLD_FILES+=usr/share/nls/fr_CA.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/fr_CA.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_CA.ISO8859-15
OLD_FILES+=usr/share/nls/fr_CA.ISO8859-15/ee.cat
OLD_FILES+=usr/share/nls/fr_CA.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_CA.UTF-8
OLD_FILES+=usr/share/nls/fr_CA.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_CH.ISO8859-1
OLD_FILES+=usr/share/nls/fr_CH.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/fr_CH.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_CH.ISO8859-15
OLD_FILES+=usr/share/nls/fr_CH.ISO8859-15/ee.cat
OLD_FILES+=usr/share/nls/fr_CH.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_CH.UTF-8
OLD_FILES+=usr/share/nls/fr_CH.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_FR.ISO8859-1
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/libc.cat
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_FR.ISO8859-15
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-15/ee.cat
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/fr_FR.UTF-8
OLD_FILES+=usr/share/nls/fr_FR.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/gl_ES.ISO8859-1
OLD_FILES+=usr/share/nls/gl_ES.ISO8859-1/grep.cat
OLD_FILES+=usr/share/nls/gl_ES.ISO8859-1/libc.cat
OLD_DIRS+=usr/share/nls/he_IL.UTF-8
OLD_DIRS+=usr/share/nls/hi_IN.ISCII-DEV
OLD_DIRS+=usr/share/nls/hr_HR.ISO8859-2
OLD_DIRS+=usr/share/nls/hu_HU.ISO8859-2
OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/ee.cat
OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/grep.cat
OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/libc.cat
OLD_FILES+=usr/share/nls/hu_HU.ISO8859-2/sort.cat
OLD_DIRS+=usr/share/nls/hr_HR.UTF-8
OLD_DIRS+=usr/share/nls/hu_HU.UTF-8
OLD_DIRS+=usr/share/nls/hy_AM.ARMSCII-8
OLD_DIRS+=usr/share/nls/hy_AM.UTF-8
OLD_DIRS+=usr/share/nls/is_IS.ISO8859-1
OLD_DIRS+=usr/share/nls/is_IS.ISO8859-15
OLD_DIRS+=usr/share/nls/is_IS.UTF-8
OLD_DIRS+=usr/share/nls/it_CH.ISO8859-1
OLD_FILES+=usr/share/nls/it_CH.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/it_CH.ISO8859-15
OLD_FILES+=usr/share/nls/it_CH.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/it_CH.UTF-8
OLD_FILES+=usr/share/nls/it_CH.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/it_IT.ISO8859-1
OLD_FILES+=usr/share/nls/it_IT.ISO8859-1/tcsh.cat
OLD_DIRS+=usr/share/nls/it_IT.ISO8859-15
OLD_FILES+=usr/share/nls/it_IT.ISO8859-15/libc.cat
OLD_FILES+=usr/share/nls/it_IT.ISO8859-15/tcsh.cat
OLD_DIRS+=usr/share/nls/it_IT.UTF-8
OLD_FILES+=usr/share/nls/it_IT.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/ja_JP.SJIS
OLD_FILES+=usr/share/nls/ja_JP.SJIS/grep.cat
OLD_FILES+=usr/share/nls/ja_JP.SJIS/tcsh.cat
OLD_DIRS+=usr/share/nls/ja_JP.UTF-8
OLD_FILES+=usr/share/nls/ja_JP.UTF-8/grep.cat
OLD_FILES+=usr/share/nls/ja_JP.UTF-8/libc.cat
OLD_FILES+=usr/share/nls/ja_JP.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/ja_JP.eucJP
OLD_FILES+=usr/share/nls/ja_JP.eucJP/grep.cat
OLD_FILES+=usr/share/nls/ja_JP.eucJP/libc.cat
OLD_FILES+=usr/share/nls/ja_JP.eucJP/tcsh.cat
OLD_DIRS+=usr/share/nls/kk_KZ.PT154
OLD_DIRS+=usr/share/nls/kk_KZ.UTF-8
OLD_DIRS+=usr/share/nls/ko_KR.CP949
OLD_DIRS+=usr/share/nls/ko_KR.UTF-8
OLD_FILES+=usr/share/nls/ko_KR.UTF-8/libc.cat
OLD_DIRS+=usr/share/nls/ko_KR.eucKR
OLD_FILES+=usr/share/nls/ko_KR.eucKR/libc.cat
OLD_DIRS+=usr/share/nls/lt_LT.ISO8859-13
OLD_DIRS+=usr/share/nls/lt_LT.UTF-8
OLD_DIRS+=usr/share/nls/lv_LV.ISO8859-13
OLD_DIRS+=usr/share/nls/lv_LV.UTF-8
OLD_DIRS+=usr/share/nls/mn_MN.UTF-8
OLD_FILES+=usr/share/nls/mn_MN.UTF-8/libc.cat
OLD_DIRS+=usr/share/nls/nl_BE.ISO8859-1
OLD_DIRS+=usr/share/nls/nl_BE.ISO8859-15
OLD_DIRS+=usr/share/nls/nl_BE.UTF-8
OLD_DIRS+=usr/share/nls/nl_NL.ISO8859-1
OLD_FILES+=usr/share/nls/nl_NL.ISO8859-1/libc.cat
OLD_DIRS+=usr/share/nls/nl_NL.ISO8859-15
OLD_DIRS+=usr/share/nls/nl_NL.UTF-8
OLD_DIRS+=usr/share/nls/no_NO.ISO8859-1
OLD_FILES+=usr/share/nls/no_NO.ISO8859-1/libc.cat
OLD_DIRS+=usr/share/nls/no_NO.ISO8859-15
OLD_DIRS+=usr/share/nls/no_NO.UTF-8
OLD_DIRS+=usr/share/nls/pl_PL.ISO8859-2
OLD_FILES+=usr/share/nls/pl_PL.ISO8859-2/ee.cat
OLD_FILES+=usr/share/nls/pl_PL.ISO8859-2/libc.cat
OLD_DIRS+=usr/share/nls/pl_PL.UTF-8
OLD_DIRS+=usr/share/nls/pt_BR.ISO8859-1
OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/ee.cat
OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/grep.cat
OLD_FILES+=usr/share/nls/pt_BR.ISO8859-1/libc.cat
OLD_DIRS+=usr/share/nls/pt_BR.UTF-8
OLD_DIRS+=usr/share/nls/pt_PT.ISO8859-1
OLD_FILES+=usr/share/nls/pt_PT.ISO8859-1/ee.cat
OLD_DIRS+=usr/share/nls/pt_PT.ISO8859-15
OLD_DIRS+=usr/share/nls/pt_PT.UTF-8
OLD_DIRS+=usr/share/nls/ro_RO.ISO8859-2
OLD_DIRS+=usr/share/nls/ro_RO.UTF-8
OLD_DIRS+=usr/share/nls/ru_RU.CP1251
OLD_FILES+=usr/share/nls/ru_RU.CP1251/tcsh.cat
OLD_DIRS+=usr/share/nls/ru_RU.CP866
OLD_FILES+=usr/share/nls/ru_RU.CP866/tcsh.cat
OLD_DIRS+=usr/share/nls/ru_RU.ISO8859-5
OLD_FILES+=usr/share/nls/ru_RU.ISO8859-5/tcsh.cat
OLD_DIRS+=usr/share/nls/ru_RU.KOI8-R
OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/ee.cat
OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/grep.cat
OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/libc.cat
OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/tcsh.cat
OLD_DIRS+=usr/share/nls/ru_RU.UTF-8
OLD_FILES+=usr/share/nls/ru_RU.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/sk_SK.ISO8859-2
OLD_FILES+=usr/share/nls/sk_SK.ISO8859-2/libc.cat
OLD_DIRS+=usr/share/nls/sk_SK.UTF-8
OLD_DIRS+=usr/share/nls/sl_SI.ISO8859-2
OLD_DIRS+=usr/share/nls/sl_SI.UTF-8
OLD_DIRS+=usr/share/nls/sr_YU.ISO8859-2
OLD_DIRS+=usr/share/nls/sr_YU.ISO8859-5
OLD_DIRS+=usr/share/nls/sr_YU.UTF-8
OLD_DIRS+=usr/share/nls/sv_SE.ISO8859-1
OLD_FILES+=usr/share/nls/sv_SE.ISO8859-1/libc.cat
OLD_DIRS+=usr/share/nls/sv_SE.ISO8859-15
OLD_DIRS+=usr/share/nls/sv_SE.UTF-8
OLD_DIRS+=usr/share/nls/tr_TR.ISO8859-9
OLD_DIRS+=usr/share/nls/tr_TR.UTF-8
OLD_DIRS+=usr/share/nls/uk_UA.ISO8859-5
OLD_FILES+=usr/share/nls/uk_UA.ISO8859-5/tcsh.cat
OLD_DIRS+=usr/share/nls/uk_UA.KOI8-U
OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/ee.cat
OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/tcsh.cat
OLD_DIRS+=usr/share/nls/uk_UA.UTF-8
OLD_FILES+=usr/share/nls/uk_UA.UTF-8/grep.cat
OLD_FILES+=usr/share/nls/uk_UA.UTF-8/libc.cat
OLD_FILES+=usr/share/nls/uk_UA.UTF-8/tcsh.cat
OLD_DIRS+=usr/share/nls/zh_CN.GB18030
OLD_FILES+=usr/share/nls/zh_CN.GB18030/libc.cat
OLD_DIRS+=usr/share/nls/zh_CN.GB2312
OLD_FILES+=usr/share/nls/zh_CN.GB2312/libc.cat
OLD_DIRS+=usr/share/nls/zh_CN.GBK
OLD_DIRS+=usr/share/nls/zh_CN.UTF-8
OLD_FILES+=usr/share/nls/zh_CN.UTF-8/grep.cat
OLD_FILES+=usr/share/nls/zh_CN.UTF-8/libc.cat
OLD_DIRS+=usr/share/nls/zh_CN.eucCN
OLD_DIRS+=usr/share/nls/zh_HK.UTF-8
OLD_DIRS+=usr/share/nls/zh_TW.UTF-8
OLD_FILES+=usr/tests/bin/sh/builtins/locale1.0
.endif
.if ${MK_NLS_CATALOGS} == no
OLD_FILES+=usr/share/nls/de_AT.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/de_CH.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/de_DE.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/el_GR.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/es_ES.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/et_EE.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fi_FI.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_BE.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CA.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CH.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_FR.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/it_CH.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/it_IT.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/ja_JP.SJIS/tcsh.cat
OLD_FILES+=usr/share/nls/ja_JP.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.CP1251/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.CP866/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.ISO8859-5/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/uk_UA.ISO8859-5/tcsh.cat
OLD_FILES+=usr/share/nls/uk_UA.UTF-8/tcsh.cat
.endif
.if ${MK_NS_CACHING} == no
OLD_FILES+=etc/nscd.conf
OLD_FILES+=etc/rc.d/nscd
OLD_FILES+=usr/sbin/nscd
OLD_FILES+=usr/share/examples/etc/nscd.conf
OLD_FILES+=usr/share/man/man5/nscd.conf.5.gz
OLD_FILES+=usr/share/man/man8/nscd.8.gz
.endif
.if ${MK_NTP} == no
OLD_FILES+=etc/ntp.conf
OLD_FILES+=etc/periodic/daily/480.status-ntpd
OLD_FILES+=usr/bin/ntpq
OLD_FILES+=usr/sbin/ntp-keygen
OLD_FILES+=usr/sbin/ntpd
OLD_FILES+=usr/sbin/ntpdate
OLD_FILES+=usr/sbin/ntpdc
OLD_FILES+=usr/sbin/ntptime
OLD_FILES+=usr/sbin/sntp
OLD_FILES+=usr/share/doc/ntp/access.html
OLD_FILES+=usr/share/doc/ntp/accopt.html
OLD_FILES+=usr/share/doc/ntp/assoc.html
OLD_FILES+=usr/share/doc/ntp/audio.html
OLD_FILES+=usr/share/doc/ntp/authentic.html
OLD_FILES+=usr/share/doc/ntp/authopt.html
OLD_FILES+=usr/share/doc/ntp/autokey.html
OLD_FILES+=usr/share/doc/ntp/bugs.html
OLD_FILES+=usr/share/doc/ntp/build.html
OLD_FILES+=usr/share/doc/ntp/clock.html
OLD_FILES+=usr/share/doc/ntp/clockopt.html
OLD_FILES+=usr/share/doc/ntp/cluster.html
OLD_FILES+=usr/share/doc/ntp/comdex.html
OLD_FILES+=usr/share/doc/ntp/config.html
OLD_FILES+=usr/share/doc/ntp/confopt.html
OLD_FILES+=usr/share/doc/ntp/copyright.html
OLD_FILES+=usr/share/doc/ntp/debug.html
OLD_FILES+=usr/share/doc/ntp/decode.html
OLD_FILES+=usr/share/doc/ntp/discipline.html
OLD_FILES+=usr/share/doc/ntp/discover.html
OLD_FILES+=usr/share/doc/ntp/driver1.html
OLD_FILES+=usr/share/doc/ntp/driver10.html
OLD_FILES+=usr/share/doc/ntp/driver11.html
OLD_FILES+=usr/share/doc/ntp/driver12.html
OLD_FILES+=usr/share/doc/ntp/driver16.html
OLD_FILES+=usr/share/doc/ntp/driver18.html
OLD_FILES+=usr/share/doc/ntp/driver19.html
OLD_FILES+=usr/share/doc/ntp/driver2.html
OLD_FILES+=usr/share/doc/ntp/driver20.html
OLD_FILES+=usr/share/doc/ntp/driver22.html
OLD_FILES+=usr/share/doc/ntp/driver26.html
OLD_FILES+=usr/share/doc/ntp/driver27.html
OLD_FILES+=usr/share/doc/ntp/driver28.html
OLD_FILES+=usr/share/doc/ntp/driver29.html
OLD_FILES+=usr/share/doc/ntp/driver3.html
OLD_FILES+=usr/share/doc/ntp/driver30.html
OLD_FILES+=usr/share/doc/ntp/driver32.html
OLD_FILES+=usr/share/doc/ntp/driver33.html
OLD_FILES+=usr/share/doc/ntp/driver34.html
OLD_FILES+=usr/share/doc/ntp/driver35.html
OLD_FILES+=usr/share/doc/ntp/driver36.html
OLD_FILES+=usr/share/doc/ntp/driver37.html
OLD_FILES+=usr/share/doc/ntp/driver4.html
OLD_FILES+=usr/share/doc/ntp/driver5.html
OLD_FILES+=usr/share/doc/ntp/driver6.html
OLD_FILES+=usr/share/doc/ntp/driver7.html
OLD_FILES+=usr/share/doc/ntp/driver8.html
OLD_FILES+=usr/share/doc/ntp/driver9.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver1.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver10.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver11.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver12.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver16.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver18.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver19.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver20.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver22.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver26.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver27.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver28.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver29.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver3.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver30.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver31.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver32.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver33.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver34.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver35.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver36.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver37.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver38.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver39.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver4.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver40.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver42.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver43.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver44.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver45.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver46.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver5.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver6.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver7.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver8.html
OLD_FILES+=usr/share/doc/ntp/drivers/driver9.html
OLD_FILES+=usr/share/doc/ntp/drivers/icons/home.gif
OLD_FILES+=usr/share/doc/ntp/drivers/icons/mail2.gif
OLD_FILES+=usr/share/doc/ntp/drivers/mx4200data.html
OLD_FILES+=usr/share/doc/ntp/drivers/oncore-shmem.html
OLD_FILES+=usr/share/doc/ntp/drivers/scripts/footer.txt
OLD_FILES+=usr/share/doc/ntp/drivers/scripts/style.css
OLD_FILES+=usr/share/doc/ntp/drivers/tf582_4.html
OLD_FILES+=usr/share/doc/ntp/extern.html
OLD_FILES+=usr/share/doc/ntp/filter.html
OLD_FILES+=usr/share/doc/ntp/hints.html
OLD_FILES+=usr/share/doc/ntp/hints/a-ux
OLD_FILES+=usr/share/doc/ntp/hints/aix
OLD_FILES+=usr/share/doc/ntp/hints/bsdi
OLD_FILES+=usr/share/doc/ntp/hints/changes
OLD_FILES+=usr/share/doc/ntp/hints/decosf1
OLD_FILES+=usr/share/doc/ntp/hints/decosf2
OLD_FILES+=usr/share/doc/ntp/hints/freebsd
OLD_FILES+=usr/share/doc/ntp/hints/hpux
OLD_FILES+=usr/share/doc/ntp/hints/linux
OLD_FILES+=usr/share/doc/ntp/hints/mpeix
OLD_FILES+=usr/share/doc/ntp/hints/notes-xntp-v3
OLD_FILES+=usr/share/doc/ntp/hints/parse
OLD_FILES+=usr/share/doc/ntp/hints/refclocks
OLD_FILES+=usr/share/doc/ntp/hints/rs6000
OLD_FILES+=usr/share/doc/ntp/hints/sco.html
OLD_FILES+=usr/share/doc/ntp/hints/sgi
OLD_FILES+=usr/share/doc/ntp/hints/solaris-dosynctodr.html
OLD_FILES+=usr/share/doc/ntp/hints/solaris.html
OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.4023118
OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.4095849
OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.S99ntpd
OLD_FILES+=usr/share/doc/ntp/hints/solaris.xtra.patchfreq
OLD_FILES+=usr/share/doc/ntp/hints/sun4
OLD_FILES+=usr/share/doc/ntp/hints/svr4-dell
OLD_FILES+=usr/share/doc/ntp/hints/svr4_package
OLD_FILES+=usr/share/doc/ntp/hints/todo
OLD_FILES+=usr/share/doc/ntp/hints/vxworks.html
OLD_FILES+=usr/share/doc/ntp/hints/winnt.html
OLD_FILES+=usr/share/doc/ntp/history.html
OLD_FILES+=usr/share/doc/ntp/howto.html
OLD_FILES+=usr/share/doc/ntp/huffpuff.html
OLD_FILES+=usr/share/doc/ntp/icons/home.gif
OLD_FILES+=usr/share/doc/ntp/icons/mail2.gif
OLD_FILES+=usr/share/doc/ntp/icons/sitemap.png
OLD_FILES+=usr/share/doc/ntp/index.html
OLD_FILES+=usr/share/doc/ntp/kern.html
OLD_FILES+=usr/share/doc/ntp/kernpps.html
OLD_FILES+=usr/share/doc/ntp/keygen.html
OLD_FILES+=usr/share/doc/ntp/ldisc.html
OLD_FILES+=usr/share/doc/ntp/leap.html
OLD_FILES+=usr/share/doc/ntp/measure.html
OLD_FILES+=usr/share/doc/ntp/miscopt.html
OLD_FILES+=usr/share/doc/ntp/monopt.html
OLD_FILES+=usr/share/doc/ntp/msyslog.html
OLD_FILES+=usr/share/doc/ntp/mx4200data.html
OLD_FILES+=usr/share/doc/ntp/notes.html
OLD_FILES+=usr/share/doc/ntp/ntp-keygen.html
OLD_FILES+=usr/share/doc/ntp/ntp-wait.html
OLD_FILES+=usr/share/doc/ntp/ntp.conf.html
OLD_FILES+=usr/share/doc/ntp/ntp.keys.html
OLD_FILES+=usr/share/doc/ntp/ntp_conf.html
OLD_FILES+=usr/share/doc/ntp/ntpd.html
OLD_FILES+=usr/share/doc/ntp/ntpdate.html
OLD_FILES+=usr/share/doc/ntp/ntpdc.html
OLD_FILES+=usr/share/doc/ntp/ntpdsim.html
OLD_FILES+=usr/share/doc/ntp/ntpdsim_new.html
OLD_FILES+=usr/share/doc/ntp/ntpq.html
OLD_FILES+=usr/share/doc/ntp/ntpsnmpd.html
OLD_FILES+=usr/share/doc/ntp/ntptime.html
OLD_FILES+=usr/share/doc/ntp/ntptrace.html
OLD_FILES+=usr/share/doc/ntp/orphan.html
OLD_FILES+=usr/share/doc/ntp/parsedata.html
OLD_FILES+=usr/share/doc/ntp/parsenew.html
OLD_FILES+=usr/share/doc/ntp/patches.html
OLD_FILES+=usr/share/doc/ntp/pic/9400n.jpg
OLD_FILES+=usr/share/doc/ntp/pic/alice11.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice13.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice15.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice23.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice31.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice32.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice35.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice38.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice44.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice47.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice51.gif
OLD_FILES+=usr/share/doc/ntp/pic/alice61.gif
OLD_FILES+=usr/share/doc/ntp/pic/barnstable.gif
OLD_FILES+=usr/share/doc/ntp/pic/beaver.gif
OLD_FILES+=usr/share/doc/ntp/pic/boom3.gif
OLD_FILES+=usr/share/doc/ntp/pic/boom3a.gif
OLD_FILES+=usr/share/doc/ntp/pic/boom4.gif
OLD_FILES+=usr/share/doc/ntp/pic/broad.gif
OLD_FILES+=usr/share/doc/ntp/pic/bustardfly.gif
OLD_FILES+=usr/share/doc/ntp/pic/c51.jpg
OLD_FILES+=usr/share/doc/ntp/pic/description.jpg
OLD_FILES+=usr/share/doc/ntp/pic/discipline.gif
OLD_FILES+=usr/share/doc/ntp/pic/dogsnake.gif
OLD_FILES+=usr/share/doc/ntp/pic/driver29.gif
OLD_FILES+=usr/share/doc/ntp/pic/driver43_1.gif
OLD_FILES+=usr/share/doc/ntp/pic/driver43_2.jpg
OLD_FILES+=usr/share/doc/ntp/pic/fg6021.gif
OLD_FILES+=usr/share/doc/ntp/pic/fg6039.jpg
OLD_FILES+=usr/share/doc/ntp/pic/fig_3_1.gif
OLD_FILES+=usr/share/doc/ntp/pic/flatheads.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt1.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt2.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt3.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt4.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt5.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt6.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt7.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt8.gif
OLD_FILES+=usr/share/doc/ntp/pic/flt9.gif
OLD_FILES+=usr/share/doc/ntp/pic/freq1211.gif
OLD_FILES+=usr/share/doc/ntp/pic/gadget.jpg
OLD_FILES+=usr/share/doc/ntp/pic/gps167.jpg
OLD_FILES+=usr/share/doc/ntp/pic/group.gif
OLD_FILES+=usr/share/doc/ntp/pic/hornraba.gif
OLD_FILES+=usr/share/doc/ntp/pic/igclock.gif
OLD_FILES+=usr/share/doc/ntp/pic/neoclock4x.gif
OLD_FILES+=usr/share/doc/ntp/pic/offset1211.gif
OLD_FILES+=usr/share/doc/ntp/pic/oncore_evalbig.gif
OLD_FILES+=usr/share/doc/ntp/pic/oncore_remoteant.jpg
OLD_FILES+=usr/share/doc/ntp/pic/oncore_utplusbig.gif
OLD_FILES+=usr/share/doc/ntp/pic/oz2.gif
OLD_FILES+=usr/share/doc/ntp/pic/panda.gif
OLD_FILES+=usr/share/doc/ntp/pic/pd_om006.gif
OLD_FILES+=usr/share/doc/ntp/pic/pd_om011.gif
OLD_FILES+=usr/share/doc/ntp/pic/peer.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo1a.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo3a.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo4.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo5.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo6.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo7.gif
OLD_FILES+=usr/share/doc/ntp/pic/pogo8.gif
OLD_FILES+=usr/share/doc/ntp/pic/pzf509.jpg
OLD_FILES+=usr/share/doc/ntp/pic/pzf511.jpg
OLD_FILES+=usr/share/doc/ntp/pic/rabbit.gif
OLD_FILES+=usr/share/doc/ntp/pic/radio2.jpg
OLD_FILES+=usr/share/doc/ntp/pic/sheepb.jpg
OLD_FILES+=usr/share/doc/ntp/pic/stack1a.jpg
OLD_FILES+=usr/share/doc/ntp/pic/stats.gif
OLD_FILES+=usr/share/doc/ntp/pic/sx5.gif
OLD_FILES+=usr/share/doc/ntp/pic/thunderbolt.jpg
OLD_FILES+=usr/share/doc/ntp/pic/time1.gif
OLD_FILES+=usr/share/doc/ntp/pic/tonea.gif
OLD_FILES+=usr/share/doc/ntp/pic/tribeb.gif
OLD_FILES+=usr/share/doc/ntp/pic/wingdorothy.gif
OLD_FILES+=usr/share/doc/ntp/poll.html
OLD_FILES+=usr/share/doc/ntp/porting.html
OLD_FILES+=usr/share/doc/ntp/pps.html
OLD_FILES+=usr/share/doc/ntp/prefer.html
OLD_FILES+=usr/share/doc/ntp/quick.html
OLD_FILES+=usr/share/doc/ntp/rate.html
OLD_FILES+=usr/share/doc/ntp/rdebug.html
OLD_FILES+=usr/share/doc/ntp/refclock.html
OLD_FILES+=usr/share/doc/ntp/release.html
OLD_FILES+=usr/share/doc/ntp/scripts/accopt.txt
OLD_FILES+=usr/share/doc/ntp/scripts/audio.txt
OLD_FILES+=usr/share/doc/ntp/scripts/authopt.txt
OLD_FILES+=usr/share/doc/ntp/scripts/clockopt.txt
OLD_FILES+=usr/share/doc/ntp/scripts/command.txt
OLD_FILES+=usr/share/doc/ntp/scripts/config.txt
OLD_FILES+=usr/share/doc/ntp/scripts/confopt.txt
OLD_FILES+=usr/share/doc/ntp/scripts/external.txt
OLD_FILES+=usr/share/doc/ntp/scripts/footer.txt
OLD_FILES+=usr/share/doc/ntp/scripts/hand.txt
OLD_FILES+=usr/share/doc/ntp/scripts/install.txt
OLD_FILES+=usr/share/doc/ntp/scripts/manual.txt
OLD_FILES+=usr/share/doc/ntp/scripts/misc.txt
OLD_FILES+=usr/share/doc/ntp/scripts/miscopt.txt
OLD_FILES+=usr/share/doc/ntp/scripts/monopt.txt
OLD_FILES+=usr/share/doc/ntp/scripts/refclock.txt
OLD_FILES+=usr/share/doc/ntp/scripts/special.txt
OLD_FILES+=usr/share/doc/ntp/scripts/style.css
OLD_FILES+=usr/share/doc/ntp/select.html
OLD_FILES+=usr/share/doc/ntp/sitemap.html
OLD_FILES+=usr/share/doc/ntp/sntp.html
OLD_FILES+=usr/share/doc/ntp/stats.html
OLD_FILES+=usr/share/doc/ntp/tickadj.html
OLD_FILES+=usr/share/doc/ntp/warp.html
OLD_FILES+=usr/share/doc/ntp/xleave.html
OLD_DIRS+=usr/share/doc/ntp/drivers
OLD_DIRS+=usr/share/doc/ntp/drivers/scripts
OLD_DIRS+=usr/share/doc/ntp/drivers/icons
OLD_DIRS+=usr/share/doc/ntp/hints
OLD_DIRS+=usr/share/doc/ntp/icons
OLD_DIRS+=usr/share/doc/ntp/pic
OLD_DIRS+=usr/share/doc/ntp/scripts
OLD_DIRS+=usr/share/doc/ntp
OLD_FILES+=usr/share/examples/etc/ntp.conf
OLD_FILES+=usr/share/man/man1/sntp.1.gz
OLD_FILES+=usr/share/man/man5/ntp.conf.5.gz
OLD_FILES+=usr/share/man/man5/ntp.keys.5.gz
OLD_FILES+=usr/share/man/man8/ntp-keygen.8.gz
OLD_FILES+=usr/share/man/man8/ntpd.8.gz
OLD_FILES+=usr/share/man/man8/ntpdate.8.gz
OLD_FILES+=usr/share/man/man8/ntpdc.8.gz
OLD_FILES+=usr/share/man/man8/ntpq.8.gz
OLD_FILES+=usr/share/man/man8/ntptime.8.gz
.endif
.if ${MK_OPENSSH} == no
OLD_FILES+=etc/rc.d/sshd
OLD_FILES+=etc/ssh/moduli
OLD_FILES+=etc/ssh/ssh_config
OLD_FILES+=etc/ssh/sshd_config
OLD_FILES+=usr/bin/scp
OLD_FILES+=usr/bin/sftp
OLD_FILES+=usr/bin/slogin
OLD_FILES+=usr/bin/ssh
OLD_FILES+=usr/bin/ssh-add
OLD_FILES+=usr/bin/ssh-agent
OLD_FILES+=usr/bin/ssh-copy-id
OLD_FILES+=usr/bin/ssh-keygen
OLD_FILES+=usr/bin/ssh-keyscan
OLD_FILES+=usr/lib/pam_ssh.so
OLD_LIBS+=usr/lib/pam_ssh.so.6
OLD_FILES+=usr/lib/private/libssh.a
OLD_FILES+=usr/lib/private/libssh.so
OLD_LIBS+=usr/lib/private/libssh.so.5
OLD_FILES+=usr/lib/private/libssh_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/pam_ssh.so
OLD_LIBS+=usr/lib32/pam_ssh.so.6
OLD_FILES+=usr/lib32/private/libssh.a
OLD_FILES+=usr/lib32/private/libssh.so
OLD_LIBS+=usr/lib32/private/libssh.so.5
OLD_FILES+=usr/lib32/private/libssh_p.a
.endif
OLD_FILES+=usr/libexec/sftp-server
OLD_FILES+=usr/libexec/ssh-keysign
OLD_FILES+=usr/libexec/ssh-pkcs11-helper
OLD_FILES+=usr/sbin/sshd
OLD_FILES+=usr/share/man/man1/scp.1.gz
OLD_FILES+=usr/share/man/man1/sftp.1.gz
OLD_FILES+=usr/share/man/man1/slogin.1.gz
OLD_FILES+=usr/share/man/man1/ssh-add.1.gz
OLD_FILES+=usr/share/man/man1/ssh-agent.1.gz
OLD_FILES+=usr/share/man/man1/ssh-copy-id.1.gz
OLD_FILES+=usr/share/man/man1/ssh-keygen.1.gz
OLD_FILES+=usr/share/man/man1/ssh-keyscan.1.gz
OLD_FILES+=usr/share/man/man1/ssh.1.gz
OLD_FILES+=usr/share/man/man5/ssh_config.5.gz
OLD_FILES+=usr/share/man/man5/sshd_config.5.gz
OLD_FILES+=usr/share/man/man8/pam_ssh.8.gz
OLD_FILES+=usr/share/man/man8/sftp-server.8.gz
OLD_FILES+=usr/share/man/man8/ssh-keysign.8.gz
OLD_FILES+=usr/share/man/man8/ssh-pkcs11-helper.8.gz
OLD_FILES+=usr/share/man/man8/sshd.8.gz
.endif
.if ${MK_OPENSSL} == no
OLD_FILES+=etc/rc.d/keyserv
.endif
.if ${MK_PC_SYSINSTALL} == no
# backend-partmanager
OLD_FILES+=usr/share/pc-sysinstall/backend-partmanager/create-part.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-partmanager/delete-part.sh
# backend-query
OLD_FILES+=usr/share/pc-sysinstall/backend-query/detect-emulation.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/detect-laptop.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/detect-nics.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/disk-info.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/disk-list.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/disk-part.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/enable-net.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/get-packages.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-components.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-config.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-mirrors.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-packages.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-rsync-backups.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/list-tzones.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/query-langs.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/send-logs.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/setup-ssh-keys.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/set-mirror.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/sys-mem.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/test-live.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/test-netup.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/update-part-list.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/xkeyboard-layouts.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/xkeyboard-models.sh
OLD_FILES+=usr/share/pc-sysinstall/backend-query/xkeyboard-variants.sh
# backend
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-bsdlabel.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-cleanup.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-disk.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-extractimage.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-ftp.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-installcomponents.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-installpackages.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-localize.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-mountdisk.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-mountoptical.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-networking.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-newfs.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-parse.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-packages.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-runcommands.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-unmount.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-upgrade.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions-users.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/functions.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/installimage.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/parseconfig.sh
OLD_FILES+=usr/share/pc-sysinstall/backend/startautoinstall.sh
# conf
OLD_FILES+=usr/share/pc-sysinstall/conf/avail-langs
OLD_FILES+=usr/share/pc-sysinstall/conf/exclude-from-upgrade
OLD_FILES+=usr/share/pc-sysinstall/conf/license/bsd-en.txt
OLD_FILES+=usr/share/pc-sysinstall/conf/license/intel-en.txt
OLD_FILES+=usr/share/pc-sysinstall/conf/license/nvidia-en.txt
OLD_FILES+=usr/share/pc-sysinstall/conf/pc-sysinstall.conf
# doc
OLD_FILES+=usr/share/pc-sysinstall/doc/help-disk-list
OLD_FILES+=usr/share/pc-sysinstall/doc/help-disk-size
OLD_FILES+=usr/share/pc-sysinstall/doc/help-index
OLD_FILES+=usr/share/pc-sysinstall/doc/help-start-autoinstall
# examples
OLD_FILES+=usr/share/examples/pc-sysinstall/README
OLD_FILES+=usr/share/examples/pc-sysinstall/pc-autoinstall.conf
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.fbsd-netinstall
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.geli
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.gmirror
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.netinstall
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.restore
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.rsync
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.upgrade
OLD_FILES+=usr/share/examples/pc-sysinstall/pcinstall.cfg.zfs
# pc-sysinstall
OLD_FILES+=usr/sbin/pc-sysinstall
OLD_FILES+=usr/share/man/man8/pc-sysinstall.8.gz
OLD_DIRS+=usr/share/pc-sysinstall/backend
OLD_DIRS+=usr/share/pc-sysinstall/backend-partmanager
OLD_DIRS+=usr/share/pc-sysinstall/backend-query
OLD_DIRS+=usr/share/pc-sysinstall/conf/license
OLD_DIRS+=usr/share/pc-sysinstall/conf
OLD_DIRS+=usr/share/pc-sysinstall/doc
OLD_DIRS+=usr/share/pc-sysinstall
OLD_DIRS+=usr/share/examples/pc-sysinstall
.endif
.if ${MK_PF} == no
OLD_FILES+=etc/newsyslog.conf.d/pf.conf
OLD_FILES+=etc/periodic/security/520.pfdenied
OLD_FILES+=etc/pf.os
OLD_FILES+=etc/rc.d/ftp-proxy
OLD_FILES+=sbin/pfctl
OLD_FILES+=sbin/pflogd
OLD_FILES+=usr/include/netpfil/pf/pf.h
OLD_FILES+=usr/include/netpfil/pf/pf_altq.h
OLD_FILES+=usr/include/netpfil/pf/pf_mtag.h
OLD_FILES+=usr/lib/snmp_pf.so
OLD_LIBS+=usr/lib/snmp_pf.so.6
OLD_FILES+=usr/libexec/tftp-proxy
OLD_FILES+=usr/sbin/ftp-proxy
OLD_FILES+=usr/share/examples/etc/pf.os
OLD_FILES+=usr/share/examples/pf/ackpri
OLD_FILES+=usr/share/examples/pf/faq-example1
OLD_FILES+=usr/share/examples/pf/faq-example2
OLD_FILES+=usr/share/examples/pf/faq-example3
OLD_FILES+=usr/share/examples/pf/pf.conf
OLD_FILES+=usr/share/examples/pf/queue1
OLD_FILES+=usr/share/examples/pf/queue2
OLD_FILES+=usr/share/examples/pf/queue3
OLD_FILES+=usr/share/examples/pf/queue4
OLD_FILES+=usr/share/examples/pf/spamd
OLD_DIRS+=usr/share/examples/pf
OLD_FILES+=usr/share/man/man4/pf.4.gz
OLD_FILES+=usr/share/man/man4/pflog.4.gz
OLD_FILES+=usr/share/man/man4/pfsync.4.gz
OLD_FILES+=usr/share/man/man5/pf.conf.5.gz
OLD_FILES+=usr/share/man/man5/pf.os.5.gz
OLD_FILES+=usr/share/man/man8/ftp-proxy.8.gz
OLD_FILES+=usr/share/man/man8/pfctl.8.gz
OLD_FILES+=usr/share/man/man8/pflogd.8.gz
OLD_FILES+=usr/share/man/man8/tftp-proxy.8.gz
OLD_FILES+=usr/share/snmp/defs/pf_tree.def
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-PF-MIB.txt
.endif
.if ${MK_PKGBOOTSTRAP} == no
OLD_FILES+=usr/sbin/pkg
OLD_FILES+=usr/share/man/man7/pkg.7.gz
.endif
.if ${MK_PMC} == no
OLD_FILES+=usr/bin/pmcstudy
OLD_FILES+=usr/include/pmc.h
OLD_FILES+=usr/include/pmclog.h
OLD_FILES+=usr/lib/libpmc.a
OLD_FILES+=usr/lib/libpmc.so
OLD_LIBS+=usr/lib/libpmc.so.5
OLD_FILES+=usr/lib/libpmc_p.a
OLD_FILES+=usr/lib32/libpmc.a
OLD_FILES+=usr/lib32/libpmc.so
OLD_LIBS+=usr/lib32/libpmc.so.5
OLD_FILES+=usr/lib32/libpmc_p.a
OLD_FILES+=usr/sbin/pmcannotate
OLD_FILES+=usr/sbin/pmccontrol
OLD_FILES+=usr/sbin/pmcstat
OLD_FILES+=usr/share/man/man1/pmcstudy.1.gz
OLD_FILES+=usr/share/man/man3/pmc.3.gz
OLD_FILES+=usr/share/man/man3/pmc.atom.3.gz
OLD_FILES+=usr/share/man/man3/pmc.atomsilvermont.3.gz
OLD_FILES+=usr/share/man/man3/pmc.core.3.gz
OLD_FILES+=usr/share/man/man3/pmc.core2.3.gz
OLD_FILES+=usr/share/man/man3/pmc.corei7.3.gz
OLD_FILES+=usr/share/man/man3/pmc.corei7uc.3.gz
OLD_FILES+=usr/share/man/man3/pmc.haswell.3.gz
OLD_FILES+=usr/share/man/man3/pmc.haswelluc.3.gz
OLD_FILES+=usr/share/man/man3/pmc.iaf.3.gz
OLD_FILES+=usr/share/man/man3/pmc.ivybridge.3.gz
OLD_FILES+=usr/share/man/man3/pmc.ivybridgexeon.3.gz
OLD_FILES+=usr/share/man/man3/pmc.k7.3.gz
OLD_FILES+=usr/share/man/man3/pmc.k8.3.gz
OLD_FILES+=usr/share/man/man3/pmc.mips24k.3.gz
OLD_FILES+=usr/share/man/man3/pmc.octeon.3.gz
OLD_FILES+=usr/share/man/man3/pmc.p4.3.gz
OLD_FILES+=usr/share/man/man3/pmc.p5.3.gz
OLD_FILES+=usr/share/man/man3/pmc.p6.3.gz
OLD_FILES+=usr/share/man/man3/pmc.sandybridge.3.gz
OLD_FILES+=usr/share/man/man3/pmc.sandybridgeuc.3.gz
OLD_FILES+=usr/share/man/man3/pmc.sandybridgexeon.3.gz
OLD_FILES+=usr/share/man/man3/pmc.soft.3.gz
OLD_FILES+=usr/share/man/man3/pmc.tsc.3.gz
OLD_FILES+=usr/share/man/man3/pmc.ucf.3.gz
OLD_FILES+=usr/share/man/man3/pmc.westmere.3.gz
OLD_FILES+=usr/share/man/man3/pmc.westmereuc.3.gz
OLD_FILES+=usr/share/man/man3/pmc.xscale.3.gz
OLD_FILES+=usr/share/man/man3/pmc_allocate.3.gz
OLD_FILES+=usr/share/man/man3/pmc_attach.3.gz
OLD_FILES+=usr/share/man/man3/pmc_capabilities.3.gz
OLD_FILES+=usr/share/man/man3/pmc_configure_logfile.3.gz
OLD_FILES+=usr/share/man/man3/pmc_cpuinfo.3.gz
OLD_FILES+=usr/share/man/man3/pmc_detach.3.gz
OLD_FILES+=usr/share/man/man3/pmc_disable.3.gz
OLD_FILES+=usr/share/man/man3/pmc_enable.3.gz
OLD_FILES+=usr/share/man/man3/pmc_event_names_of_class.3.gz
OLD_FILES+=usr/share/man/man3/pmc_flush_logfile.3.gz
OLD_FILES+=usr/share/man/man3/pmc_get_driver_stats.3.gz
OLD_FILES+=usr/share/man/man3/pmc_get_msr.3.gz
OLD_FILES+=usr/share/man/man3/pmc_init.3.gz
OLD_FILES+=usr/share/man/man3/pmc_name_of_capability.3.gz
OLD_FILES+=usr/share/man/man3/pmc_name_of_class.3.gz
OLD_FILES+=usr/share/man/man3/pmc_name_of_cputype.3.gz
OLD_FILES+=usr/share/man/man3/pmc_name_of_disposition.3.gz
OLD_FILES+=usr/share/man/man3/pmc_name_of_event.3.gz
OLD_FILES+=usr/share/man/man3/pmc_name_of_mode.3.gz
OLD_FILES+=usr/share/man/man3/pmc_name_of_state.3.gz
OLD_FILES+=usr/share/man/man3/pmc_ncpu.3.gz
OLD_FILES+=usr/share/man/man3/pmc_npmc.3.gz
OLD_FILES+=usr/share/man/man3/pmc_pmcinfo.3.gz
OLD_FILES+=usr/share/man/man3/pmc_read.3.gz
OLD_FILES+=usr/share/man/man3/pmc_release.3.gz
OLD_FILES+=usr/share/man/man3/pmc_rw.3.gz
OLD_FILES+=usr/share/man/man3/pmc_set.3.gz
OLD_FILES+=usr/share/man/man3/pmc_start.3.gz
OLD_FILES+=usr/share/man/man3/pmc_stop.3.gz
OLD_FILES+=usr/share/man/man3/pmc_width.3.gz
OLD_FILES+=usr/share/man/man3/pmc_write.3.gz
OLD_FILES+=usr/share/man/man3/pmc_writelog.3.gz
OLD_FILES+=usr/share/man/man3/pmclog.3.gz
OLD_FILES+=usr/share/man/man3/pmclog_close.3.gz
OLD_FILES+=usr/share/man/man3/pmclog_feed.3.gz
OLD_FILES+=usr/share/man/man3/pmclog_open.3.gz
OLD_FILES+=usr/share/man/man3/pmclog_read.3.gz
OLD_FILES+=usr/share/man/man8/pmcannotate.8.gz
OLD_FILES+=usr/share/man/man8/pmccontrol.8.gz
OLD_FILES+=usr/share/man/man8/pmcstat.8.gz
.endif
.if ${MK_PORTSNAP} == no
OLD_FILES+=etc/portsnap.conf
OLD_FILES+=usr/libexec/make_index
OLD_FILES+=usr/libexec/phttpget
OLD_FILES+=usr/sbin/portsnap
OLD_FILES+=usr/share/examples/etc/portsnap.conf
OLD_FILES+=usr/share/man/man8/phttpget.8.gz
OLD_FILES+=usr/share/man/man8/portsnap.8.gz
.endif
.if ${MK_PPP} == no
OLD_FILES+=etc/newsyslog.conf.d/ppp.conf
OLD_FILES+=etc/ppp/ppp.conf
OLD_FILES+=etc/syslog.d/ppp.conf
OLD_DIRS+=etc/ppp
OLD_FILES+=usr/sbin/ppp
OLD_FILES+=usr/sbin/pppctl
OLD_FILES+=usr/share/man/man8/ppp.8.gz
OLD_FILES+=usr/share/man/man8/pppctl.8.gz
.endif
.if ${MK_PROFILE} == no
OLD_FILES+=usr/lib/lib80211_p.a
OLD_FILES+=usr/lib/libBlocksRuntime_p.a
OLD_FILES+=usr/lib/libalias_cuseeme_p.a
OLD_FILES+=usr/lib/libalias_dummy_p.a
OLD_FILES+=usr/lib/libalias_ftp_p.a
OLD_FILES+=usr/lib/libalias_irc_p.a
OLD_FILES+=usr/lib/libalias_nbt_p.a
OLD_FILES+=usr/lib/libalias_p.a
OLD_FILES+=usr/lib/libalias_pptp_p.a
OLD_FILES+=usr/lib/libalias_skinny_p.a
OLD_FILES+=usr/lib/libalias_smedia_p.a
OLD_FILES+=usr/lib/libarchive_p.a
OLD_FILES+=usr/lib/libasn1_p.a
OLD_FILES+=usr/lib/libauditd_p.a
OLD_FILES+=usr/lib/libavl_p.a
OLD_FILES+=usr/lib/libbe_p.a
OLD_FILES+=usr/lib/libbegemot_p.a
OLD_FILES+=usr/lib/libblacklist_p.a
OLD_FILES+=usr/lib/libbluetooth_p.a
OLD_FILES+=usr/lib/libbsdxml_p.a
OLD_FILES+=usr/lib/libbsm_p.a
OLD_FILES+=usr/lib/libbsnmp_p.a
OLD_FILES+=usr/lib/libbz2_p.a
OLD_FILES+=usr/lib/libc++_p.a
OLD_FILES+=usr/lib/libc_p.a
OLD_FILES+=usr/lib/libcalendar_p.a
OLD_FILES+=usr/lib/libcam_p.a
OLD_FILES+=usr/lib/libcom_err_p.a
OLD_FILES+=usr/lib/libcompat_p.a
OLD_FILES+=usr/lib/libcompiler_rt_p.a
OLD_FILES+=usr/lib/libcrypt_p.a
OLD_FILES+=usr/lib/libcrypto_p.a
OLD_FILES+=usr/lib/libctf_p.a
OLD_FILES+=usr/lib/libcurses_p.a
OLD_FILES+=usr/lib/libcursesw_p.a
OLD_FILES+=usr/lib/libcuse_p.a
OLD_FILES+=usr/lib/libcxxrt_p.a
OLD_FILES+=usr/lib/libdevctl_p.a
OLD_FILES+=usr/lib/libdevinfo_p.a
OLD_FILES+=usr/lib/libdevstat_p.a
OLD_FILES+=usr/lib/libdialog_p.a
OLD_FILES+=usr/lib/libdl_p.a
OLD_FILES+=usr/lib/libdpv_p.a
OLD_FILES+=usr/lib/libdtrace_p.a
OLD_FILES+=usr/lib/libdwarf_p.a
OLD_FILES+=usr/lib/libedit_p.a
OLD_FILES+=usr/lib/libefivar_p.a
OLD_FILES+=usr/lib/libelf_p.a
OLD_FILES+=usr/lib/libexecinfo_p.a
OLD_FILES+=usr/lib/libfetch_p.a
OLD_FILES+=usr/lib/libfigpar_p.a
OLD_FILES+=usr/lib/libfl_p.a
OLD_FILES+=usr/lib/libform_p.a
OLD_FILES+=usr/lib/libformw_p.a
OLD_FILES+=usr/lib/libgcc_eh_p.a
OLD_FILES+=usr/lib/libgcc_p.a
OLD_FILES+=usr/lib/libgeom_p.a
OLD_FILES+=usr/lib/libgnuregex_p.a
OLD_FILES+=usr/lib/libgpio_p.a
OLD_FILES+=usr/lib/libgssapi_krb5_p.a
OLD_FILES+=usr/lib/libgssapi_ntlm_p.a
OLD_FILES+=usr/lib/libgssapi_p.a
OLD_FILES+=usr/lib/libgssapi_spnego_p.a
OLD_FILES+=usr/lib/libhdb_p.a
OLD_FILES+=usr/lib/libheimbase_p.a
OLD_FILES+=usr/lib/libheimntlm_p.a
OLD_FILES+=usr/lib/libheimsqlite_p.a
OLD_FILES+=usr/lib/libhistory_p.a
OLD_FILES+=usr/lib/libhx509_p.a
OLD_FILES+=usr/lib/libipsec_p.a
OLD_FILES+=usr/lib/libipt_p.a
OLD_FILES+=usr/lib/libjail_p.a
OLD_FILES+=usr/lib/libkadm5clnt_p.a
OLD_FILES+=usr/lib/libkadm5srv_p.a
OLD_FILES+=usr/lib/libkafs5_p.a
OLD_FILES+=usr/lib/libkdc_p.a
OLD_FILES+=usr/lib/libkiconv_p.a
OLD_FILES+=usr/lib/libkrb5_p.a
OLD_FILES+=usr/lib/libkvm_p.a
OLD_FILES+=usr/lib/libl_p.a
OLD_FILES+=usr/lib/libln_p.a
OLD_FILES+=usr/lib/liblzma_p.a
OLD_FILES+=usr/lib/libm_p.a
OLD_FILES+=usr/lib/libmagic_p.a
OLD_FILES+=usr/lib/libmd_p.a
OLD_FILES+=usr/lib/libmemstat_p.a
OLD_FILES+=usr/lib/libmenu_p.a
OLD_FILES+=usr/lib/libmenuw_p.a
OLD_FILES+=usr/lib/libmilter_p.a
OLD_FILES+=usr/lib/libmp_p.a
OLD_FILES+=usr/lib/libmt_p.a
OLD_FILES+=usr/lib/libncurses_p.a
OLD_FILES+=usr/lib/libncursesw_p.a
OLD_FILES+=usr/lib/libnetgraph_p.a
OLD_FILES+=usr/lib/libngatm_p.a
OLD_FILES+=usr/lib/libnv_p.a
OLD_FILES+=usr/lib/libnvpair_p.a
OLD_FILES+=usr/lib/libopencsd_p.a
OLD_FILES+=usr/lib/libopie_p.a
OLD_FILES+=usr/lib/libpanel_p.a
OLD_FILES+=usr/lib/libpanelw_p.a
OLD_FILES+=usr/lib/libpathconv_p.a
OLD_FILES+=usr/lib/libpcap_p.a
OLD_FILES+=usr/lib/libpjdlog_p.a
OLD_FILES+=usr/lib/libpmc_p.a
OLD_FILES+=usr/lib/libprivatebsdstat_p.a
OLD_FILES+=usr/lib/libprivatedevdctl_p.a
OLD_FILES+=usr/lib/libprivateevent_p.a
OLD_FILES+=usr/lib/libprivateheimipcc_p.a
OLD_FILES+=usr/lib/libprivateheimipcs_p.a
OLD_FILES+=usr/lib/libprivateifconfig_p.a
OLD_FILES+=usr/lib/libprivateldns_p.a
OLD_FILES+=usr/lib/libprivatesqlite3_p.a
OLD_FILES+=usr/lib/libprivatessh_p.a
OLD_FILES+=usr/lib/libprivateucl_p.a
OLD_FILES+=usr/lib/libprivateunbound_p.a
OLD_FILES+=usr/lib/libprivatezstd_p.a
OLD_FILES+=usr/lib/libproc_p.a
OLD_FILES+=usr/lib/libprocstat_p.a
OLD_FILES+=usr/lib/libpthread_p.a
OLD_FILES+=usr/lib/libradius_p.a
OLD_FILES+=usr/lib/libregex_p.a
OLD_FILES+=usr/lib/libroken_p.a
OLD_FILES+=usr/lib/librpcsvc_p.a
OLD_FILES+=usr/lib/librss_p.a
OLD_FILES+=usr/lib/librt_p.a
OLD_FILES+=usr/lib/librtld_db_p.a
OLD_FILES+=usr/lib/libsbuf_p.a
OLD_FILES+=usr/lib/libsdp_p.a
OLD_FILES+=usr/lib/libsmb_p.a
OLD_FILES+=usr/lib/libssl_p.a
OLD_FILES+=usr/lib/libstdbuf_p.a
OLD_FILES+=usr/lib/libstdc++_p.a
OLD_FILES+=usr/lib/libstdthreads_p.a
OLD_FILES+=usr/lib/libsupc++_p.a
OLD_FILES+=usr/lib/libsysdecode_p.a
OLD_FILES+=usr/lib/libtacplus_p.a
OLD_FILES+=usr/lib/libtermcap_p.a
OLD_FILES+=usr/lib/libtermcapw_p.a
OLD_FILES+=usr/lib/libtermlib_p.a
OLD_FILES+=usr/lib/libtermlibw_p.a
OLD_FILES+=usr/lib/libthr_p.a
OLD_FILES+=usr/lib/libthread_db_p.a
OLD_FILES+=usr/lib/libtinfo_p.a
OLD_FILES+=usr/lib/libtinfow_p.a
OLD_FILES+=usr/lib/libufs_p.a
OLD_FILES+=usr/lib/libugidfw_p.a
OLD_FILES+=usr/lib/libulog_p.a
OLD_FILES+=usr/lib/libumem_p.a
OLD_FILES+=usr/lib/libusb_p.a
OLD_FILES+=usr/lib/libusbhid_p.a
OLD_FILES+=usr/lib/libutempter_p.a
OLD_FILES+=usr/lib/libutil_p.a
OLD_FILES+=usr/lib/libuutil_p.a
OLD_FILES+=usr/lib/libvgl_p.a
OLD_FILES+=usr/lib/libvmmapi_p.a
OLD_FILES+=usr/lib/libwind_p.a
OLD_FILES+=usr/lib/libwrap_p.a
OLD_FILES+=usr/lib/libxo_p.a
OLD_FILES+=usr/lib/liby_p.a
OLD_FILES+=usr/lib/libypclnt_p.a
OLD_FILES+=usr/lib/libz_p.a
OLD_FILES+=usr/lib/libzfs_core_p.a
OLD_FILES+=usr/lib/libzfs_p.a
OLD_FILES+=usr/lib/private/libldns_p.a
OLD_FILES+=usr/lib/private/libssh_p.a
.endif
.if ${MK_QUOTAS} == no
OLD_FILES+=sbin/quotacheck
OLD_FILES+=usr/bin/quota
OLD_FILES+=usr/sbin/edquota
OLD_FILES+=usr/sbin/quotaoff
OLD_FILES+=usr/sbin/quotaon
OLD_FILES+=usr/sbin/repquota
OLD_FILES+=usr/share/man/man1/quota.1.gz
OLD_FILES+=usr/share/man/man8/edquota.8.gz
OLD_FILES+=usr/share/man/man8/quotacheck.8.gz
OLD_FILES+=usr/share/man/man8/quotaoff.8.gz
OLD_FILES+=usr/share/man/man8/quotaon.8.gz
OLD_FILES+=usr/share/man/man8/repquota.8.gz
.endif
.if ${MK_RADIUS_SUPPORT} == no
OLD_FILES+=usr/lib/libradius.a
OLD_FILES+=usr/lib/libradius.so
OLD_LIBS+=usr/lib/libradius.so.4
OLD_FILES+=usr/lib/libradius_p.a
OLD_FILES+=usr/lib/pam_radius.so
OLD_LIBS+=usr/lib/pam_radius.so.6
OLD_FILES+=usr/include/radlib.h
OLD_FILES+=usr/include/radlib_vs.h
OLD_FILES+=usr/share/man/man3/libradius.3.gz
OLD_FILES+=usr/share/man/man3/rad_acct_open.3.gz
OLD_FILES+=usr/share/man/man3/rad_add_server.3.gz
OLD_FILES+=usr/share/man/man3/rad_add_server_ex.3.gz
OLD_FILES+=usr/share/man/man3/rad_auth_open.3.gz
OLD_FILES+=usr/share/man/man3/rad_bind_to.3.gz
OLD_FILES+=usr/share/man/man3/rad_close.3.gz
OLD_FILES+=usr/share/man/man3/rad_config.3.gz
OLD_FILES+=usr/share/man/man3/rad_continue_send_request.3.gz
OLD_FILES+=usr/share/man/man3/rad_create_request.3.gz
OLD_FILES+=usr/share/man/man3/rad_create_response.3.gz
OLD_FILES+=usr/share/man/man3/rad_cvt_addr.3.gz
OLD_FILES+=usr/share/man/man3/rad_cvt_int.3.gz
OLD_FILES+=usr/share/man/man3/rad_cvt_string.3.gz
OLD_FILES+=usr/share/man/man3/rad_demangle.3.gz
OLD_FILES+=usr/share/man/man3/rad_demangle_mppe_key.3.gz
OLD_FILES+=usr/share/man/man3/rad_get_attr.3.gz
OLD_FILES+=usr/share/man/man3/rad_get_vendor_attr.3.gz
OLD_FILES+=usr/share/man/man3/rad_init_send_request.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_addr.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_attr.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_int.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_message_authentic.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_string.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_vendor_addr.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_vendor_attr.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_vendor_int.3.gz
OLD_FILES+=usr/share/man/man3/rad_put_vendor_string.3.gz
OLD_FILES+=usr/share/man/man3/rad_receive_request.3.gz
OLD_FILES+=usr/share/man/man3/rad_request_authenticator.3.gz
OLD_FILES+=usr/share/man/man3/rad_send_request.3.gz
OLD_FILES+=usr/share/man/man3/rad_send_response.3.gz
OLD_FILES+=usr/share/man/man3/rad_server_open.3.gz
OLD_FILES+=usr/share/man/man3/rad_server_secret.3.gz
OLD_FILES+=usr/share/man/man3/rad_strerror.3.gz
OLD_FILES+=usr/share/man/man5/radius.conf.5.gz
OLD_FILES+=usr/share/man/man8/pam_radius.8.gz
.endif
.if ${MK_RBOOTD} == no
OLD_FILES+=usr/libexec/rbootd
OLD_FILES+=usr/share/man/man8/rbootd.8.gz
.endif
.if ${MK_RESCUE} == no
. if exists(${DESTDIR}/rescue)
RESCUE_DIRS!=find ${DESTDIR}/rescue -type d 2>/dev/null | sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${RESCUE_DIRS}
RESCUE_FILES!=find ${DESTDIR}/rescue \! -type d 2>/dev/null | sed -e 's,^${DESTDIR}/,,'; echo
OLD_FILES+=${RESCUE_FILES}
. endif
.endif
.if ${MK_ROUTED} == no
OLD_FILES+=rescue/routed
OLD_FILES+=rescue/rtquery
OLD_FILES+=sbin/routed
OLD_FILES+=sbin/rtquery
OLD_FILES+=usr/share/man/man8/routed.8.gz
OLD_FILES+=usr/share/man/man8/rtquery.8.gz
.endif
.if ${MK_SENDMAIL} == no
OLD_FILES+=etc/mtree/BSD.sendmail.dist
OLD_FILES+=etc/newsyslog.conf.d/sendmail.conf
OLD_FILES+=etc/periodic/daily/150.clean-hoststat
OLD_FILES+=etc/periodic/daily/440.status-mailq
OLD_FILES+=etc/periodic/daily/460.status-mail-rejects
OLD_FILES+=etc/periodic/daily/500.queuerun
OLD_FILES+=bin/rmail
OLD_FILES+=usr/bin/vacation
OLD_FILES+=usr/include/libmilter/mfapi.h
OLD_FILES+=usr/include/libmilter/mfdef.h
OLD_DIRS+=usr/include/libmilter
OLD_FILES+=usr/lib/libmilter.a
OLD_FILES+=usr/lib/libmilter.so
OLD_LIBS+=usr/lib/libmilter.so.5
OLD_FILES+=usr/lib/libmilter_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/libmilter.a
OLD_FILES+=usr/lib32/libmilter.so
OLD_LIBS+=usr/lib32/libmilter.so.5
OLD_FILES+=usr/lib32/libmilter_p.a
.endif
OLD_FILES+=usr/libexec/mail.local
OLD_FILES+=usr/libexec/sendmail/sendmail
OLD_FILES+=usr/libexec/smrsh
OLD_FILES+=usr/sbin/editmap
OLD_FILES+=usr/sbin/mailstats
OLD_FILES+=usr/sbin/makemap
OLD_FILES+=usr/sbin/praliases
OLD_FILES+=usr/share/doc/smm/08.sendmailop/paper.ascii.gz
OLD_DIRS+=usr/share/doc/smm/08.sendmailop
OLD_FILES+=usr/share/man/man1/mailq.1.gz
OLD_FILES+=usr/share/man/man1/newaliases.1.gz
OLD_FILES+=usr/share/man/man1/vacation.1.gz
OLD_FILES+=usr/share/man/man5/aliases.5.gz
OLD_FILES+=usr/share/man/man8/editmap.8.gz
OLD_FILES+=usr/share/man/man8/hoststat.8.gz
OLD_FILES+=usr/share/man/man8/mail.local.8.gz
OLD_FILES+=usr/share/man/man8/mailstats.8.gz
OLD_FILES+=usr/share/man/man8/makemap.8.gz
OLD_FILES+=usr/share/man/man8/praliases.8.gz
OLD_FILES+=usr/share/man/man8/purgestat.8.gz
OLD_FILES+=usr/share/man/man8/rmail.8.gz
OLD_FILES+=usr/share/man/man8/sendmail.8.gz
OLD_FILES+=usr/share/man/man8/smrsh.8.gz
OLD_FILES+=usr/share/sendmail/cf/README
OLD_FILES+=usr/share/sendmail/cf/cf/Makefile
OLD_FILES+=usr/share/sendmail/cf/cf/README
OLD_FILES+=usr/share/sendmail/cf/cf/chez.cs.mc
OLD_FILES+=usr/share/sendmail/cf/cf/clientproto.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-hpux10.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-hpux9.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-osf1.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-solaris2.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-sunos4.1.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cs-ultrix4.mc
OLD_FILES+=usr/share/sendmail/cf/cf/cyrusproto.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-bsd4.4.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-hpux10.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-hpux9.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-linux.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-mpeix.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-nextstep3.3.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-osf1.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-solaris.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-sunos4.1.mc
OLD_FILES+=usr/share/sendmail/cf/cf/generic-ultrix4.mc
OLD_FILES+=usr/share/sendmail/cf/cf/huginn.cs.mc
OLD_FILES+=usr/share/sendmail/cf/cf/knecht.mc
OLD_FILES+=usr/share/sendmail/cf/cf/mail.cs.mc
OLD_FILES+=usr/share/sendmail/cf/cf/mail.eecs.mc
OLD_FILES+=usr/share/sendmail/cf/cf/mailspool.cs.mc
OLD_FILES+=usr/share/sendmail/cf/cf/python.cs.mc
OLD_FILES+=usr/share/sendmail/cf/cf/s2k-osf1.mc
OLD_FILES+=usr/share/sendmail/cf/cf/s2k-ultrix4.mc
OLD_FILES+=usr/share/sendmail/cf/cf/submit.cf
OLD_FILES+=usr/share/sendmail/cf/cf/submit.mc
OLD_FILES+=usr/share/sendmail/cf/cf/tcpproto.mc
OLD_FILES+=usr/share/sendmail/cf/cf/ucbarpa.mc
OLD_FILES+=usr/share/sendmail/cf/cf/ucbvax.mc
OLD_FILES+=usr/share/sendmail/cf/cf/uucpproto.mc
OLD_FILES+=usr/share/sendmail/cf/cf/vangogh.cs.mc
OLD_DIRS+=usr/share/sendmail/cf/cf
OLD_FILES+=usr/share/sendmail/cf/domain/Berkeley.EDU.m4
OLD_FILES+=usr/share/sendmail/cf/domain/CS.Berkeley.EDU.m4
OLD_FILES+=usr/share/sendmail/cf/domain/EECS.Berkeley.EDU.m4
OLD_FILES+=usr/share/sendmail/cf/domain/S2K.Berkeley.EDU.m4
OLD_FILES+=usr/share/sendmail/cf/domain/berkeley-only.m4
OLD_FILES+=usr/share/sendmail/cf/domain/generic.m4
OLD_DIRS+=usr/share/sendmail/cf/domain
OLD_FILES+=usr/share/sendmail/cf/feature/accept_unqualified_senders.m4
OLD_FILES+=usr/share/sendmail/cf/feature/accept_unresolvable_domains.m4
OLD_FILES+=usr/share/sendmail/cf/feature/access_db.m4
OLD_FILES+=usr/share/sendmail/cf/feature/allmasquerade.m4
OLD_FILES+=usr/share/sendmail/cf/feature/always_add_domain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/authinfo.m4
OLD_FILES+=usr/share/sendmail/cf/feature/badmx.m4
OLD_FILES+=usr/share/sendmail/cf/feature/bcc.m4
OLD_FILES+=usr/share/sendmail/cf/feature/bestmx_is_local.m4
OLD_FILES+=usr/share/sendmail/cf/feature/bitdomain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/blacklist_recipients.m4
OLD_FILES+=usr/share/sendmail/cf/feature/block_bad_helo.m4
OLD_FILES+=usr/share/sendmail/cf/feature/compat_check.m4
OLD_FILES+=usr/share/sendmail/cf/feature/conncontrol.m4
OLD_FILES+=usr/share/sendmail/cf/feature/delay_checks.m4
OLD_FILES+=usr/share/sendmail/cf/feature/dnsbl.m4
OLD_FILES+=usr/share/sendmail/cf/feature/domaintable.m4
OLD_FILES+=usr/share/sendmail/cf/feature/enhdnsbl.m4
OLD_FILES+=usr/share/sendmail/cf/feature/generics_entire_domain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/genericstable.m4
OLD_FILES+=usr/share/sendmail/cf/feature/greet_pause.m4
OLD_FILES+=usr/share/sendmail/cf/feature/ldap_routing.m4
OLD_FILES+=usr/share/sendmail/cf/feature/limited_masquerade.m4
OLD_FILES+=usr/share/sendmail/cf/feature/local_lmtp.m4
OLD_FILES+=usr/share/sendmail/cf/feature/local_no_masquerade.m4
OLD_FILES+=usr/share/sendmail/cf/feature/local_procmail.m4
OLD_FILES+=usr/share/sendmail/cf/feature/lookupdotdomain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/loose_relay_check.m4
OLD_FILES+=usr/share/sendmail/cf/feature/mailertable.m4
OLD_FILES+=usr/share/sendmail/cf/feature/masquerade_entire_domain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/masquerade_envelope.m4
OLD_FILES+=usr/share/sendmail/cf/feature/msp.m4
OLD_FILES+=usr/share/sendmail/cf/feature/mtamark.m4
OLD_FILES+=usr/share/sendmail/cf/feature/no_default_msa.m4
OLD_FILES+=usr/share/sendmail/cf/feature/nocanonify.m4
OLD_FILES+=usr/share/sendmail/cf/feature/nopercenthack.m4
OLD_FILES+=usr/share/sendmail/cf/feature/notsticky.m4
OLD_FILES+=usr/share/sendmail/cf/feature/nouucp.m4
OLD_FILES+=usr/share/sendmail/cf/feature/nullclient.m4
OLD_FILES+=usr/share/sendmail/cf/feature/prefixmod.m4
OLD_FILES+=usr/share/sendmail/cf/feature/preserve_local_plus_detail.m4
OLD_FILES+=usr/share/sendmail/cf/feature/preserve_luser_host.m4
OLD_FILES+=usr/share/sendmail/cf/feature/promiscuous_relay.m4
OLD_FILES+=usr/share/sendmail/cf/feature/queuegroup.m4
OLD_FILES+=usr/share/sendmail/cf/feature/ratecontrol.m4
OLD_FILES+=usr/share/sendmail/cf/feature/redirect.m4
OLD_FILES+=usr/share/sendmail/cf/feature/relay_based_on_MX.m4
OLD_FILES+=usr/share/sendmail/cf/feature/relay_entire_domain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/relay_hosts_only.m4
OLD_FILES+=usr/share/sendmail/cf/feature/relay_local_from.m4
OLD_FILES+=usr/share/sendmail/cf/feature/relay_mail_from.m4
OLD_FILES+=usr/share/sendmail/cf/feature/require_rdns.m4
OLD_FILES+=usr/share/sendmail/cf/feature/smrsh.m4
OLD_FILES+=usr/share/sendmail/cf/feature/stickyhost.m4
OLD_FILES+=usr/share/sendmail/cf/feature/tls_session_features.m4
OLD_FILES+=usr/share/sendmail/cf/feature/use_client_ptr.m4
OLD_FILES+=usr/share/sendmail/cf/feature/use_ct_file.m4
OLD_FILES+=usr/share/sendmail/cf/feature/use_cw_file.m4
OLD_FILES+=usr/share/sendmail/cf/feature/uucpdomain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/virtuser_entire_domain.m4
OLD_FILES+=usr/share/sendmail/cf/feature/virtusertable.m4
OLD_DIRS+=usr/share/sendmail/cf/feature
OLD_FILES+=usr/share/sendmail/cf/hack/cssubdomain.m4
OLD_FILES+=usr/share/sendmail/cf/hack/xconnect.m4
OLD_DIRS+=usr/share/sendmail/cf/hack
OLD_FILES+=usr/share/sendmail/cf/m4/cf.m4
OLD_FILES+=usr/share/sendmail/cf/m4/cfhead.m4
OLD_FILES+=usr/share/sendmail/cf/m4/proto.m4
OLD_FILES+=usr/share/sendmail/cf/m4/version.m4
OLD_DIRS+=usr/share/sendmail/cf/m4
OLD_FILES+=usr/share/sendmail/cf/mailer/cyrus.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/cyrusv2.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/fax.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/local.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/mail11.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/phquery.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/pop.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/procmail.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/qpage.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/smtp.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/usenet.m4
OLD_FILES+=usr/share/sendmail/cf/mailer/uucp.m4
OLD_DIRS+=usr/share/sendmail/cf/mailer
OLD_FILES+=usr/share/sendmail/cf/ostype/a-ux.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/aix3.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/aix4.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/aix5.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/altos.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/amdahl-uts.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/bsd4.3.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/bsd4.4.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/bsdi.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/bsdi1.0.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/bsdi2.0.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/darwin.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/dgux.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/domainos.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/dragonfly.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/dynix3.2.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/freebsd4.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/freebsd5.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/freebsd6.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/gnu.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/hpux10.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/hpux11.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/hpux9.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/irix4.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/irix5.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/irix6.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/isc4.1.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/linux.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/maxion.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/mklinux.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/mpeix.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/nextstep.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/openbsd.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/osf1.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/powerux.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/ptx2.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/qnx.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/riscos4.5.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/sco-uw-2.1.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/sco3.2.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/sinix.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/solaris11.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/solaris2.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/solaris2.ml.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/solaris2.pre5.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/solaris8.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/sunos3.5.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/sunos4.1.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/svr4.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/ultrix4.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/unicos.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/unicosmk.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/unicosmp.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/unixware7.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/unknown.m4
OLD_FILES+=usr/share/sendmail/cf/ostype/uxpds.m4
OLD_DIRS+=usr/share/sendmail/cf/ostype
OLD_FILES+=usr/share/sendmail/cf/sendmail.schema
OLD_FILES+=usr/share/sendmail/cf/sh/makeinfo.sh
OLD_DIRS+=usr/share/sendmail/cf/sh
OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.cogsci.m4
OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.old.arpa.m4
OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.ucbarpa.m4
OLD_FILES+=usr/share/sendmail/cf/siteconfig/uucp.ucbvax.m4
OLD_DIRS+=usr/share/sendmail/cf/siteconfig
OLD_DIRS+=usr/share/sendmail/cf
OLD_DIRS+=usr/share/sendmail
OLD_DIRS+=var/spool/clientmqueue
.endif
.if ${MK_SERVICESDB} == no
OLD_FILES+=var/db/services.db
.endif
.if ${MK_SHAREDOCS} == no
OLD_FILES+=usr/share/doc/pjdfstest/README
OLD_DIRS+=usr/share/doc/pjdfstest
.endif
.if ${MK_SSP} == no
OLD_LIBS+=lib/libssp.so.0
OLD_FILES+=usr/include/ssp/ssp.h
OLD_FILES+=usr/include/ssp/stdio.h
OLD_FILES+=usr/include/ssp/string.h
OLD_FILES+=usr/include/ssp/unistd.h
OLD_FILES+=usr/lib/libssp.a
OLD_FILES+=usr/lib/libssp.so
OLD_FILES+=usr/lib/libssp_nonshared.a
OLD_FILES+=usr/lib32/libssp.a
OLD_FILES+=usr/lib32/libssp.so
OLD_LIBS+=usr/lib32/libssp.so.0
OLD_FILES+=usr/lib32/libssp_nonshared.a
OLD_FILES+=usr/tests/lib/libc/ssp/Kyuafile
OLD_FILES+=usr/tests/lib/libc/ssp/h_fgets
OLD_FILES+=usr/tests/lib/libc/ssp/h_getcwd
OLD_FILES+=usr/tests/lib/libc/ssp/h_gets
OLD_FILES+=usr/tests/lib/libc/ssp/h_memcpy
OLD_FILES+=usr/tests/lib/libc/ssp/h_memmove
OLD_FILES+=usr/tests/lib/libc/ssp/h_memset
OLD_FILES+=usr/tests/lib/libc/ssp/h_read
OLD_FILES+=usr/tests/lib/libc/ssp/h_readlink
OLD_FILES+=usr/tests/lib/libc/ssp/h_snprintf
OLD_FILES+=usr/tests/lib/libc/ssp/h_sprintf
OLD_FILES+=usr/tests/lib/libc/ssp/h_stpcpy
OLD_FILES+=usr/tests/lib/libc/ssp/h_stpncpy
OLD_FILES+=usr/tests/lib/libc/ssp/h_strcat
OLD_FILES+=usr/tests/lib/libc/ssp/h_strcpy
OLD_FILES+=usr/tests/lib/libc/ssp/h_strncat
OLD_FILES+=usr/tests/lib/libc/ssp/h_strncpy
OLD_FILES+=usr/tests/lib/libc/ssp/h_vsnprintf
OLD_FILES+=usr/tests/lib/libc/ssp/h_vsprintf
OLD_FILES+=usr/tests/lib/libc/ssp/ssp_test
.endif
.if ${MK_SYSCONS} == no
OLD_FILES+=usr/share/syscons/fonts/INDEX.fonts
OLD_FILES+=usr/share/syscons/fonts/armscii8-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/armscii8-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/armscii8-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp1251-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/cp1251-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp1251-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp437-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/cp437-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp437-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp437-thin-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp437-thin-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp850-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/cp850-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp850-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp850-thin-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp850-thin-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp865-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/cp865-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp865-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp865-thin-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp865-thin-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866b-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866c-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866u-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866u-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/cp866u-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/haik8-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/haik8-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/haik8-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso-thin-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso02-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso02-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso02-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-vga9-wide-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso04-wide-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso05-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso05-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso05-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso07-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso07-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso07-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso08-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso08-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso08-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso09-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso15-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/iso15-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/iso15-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/iso15-thin-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-r-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-r-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-r-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-rb-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-rc-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-u-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-u-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/koi8-u-8x8.fnt
OLD_FILES+=usr/share/syscons/fonts/swiss-1131-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/swiss-1251-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/swiss-8x14.fnt
OLD_FILES+=usr/share/syscons/fonts/swiss-8x16.fnt
OLD_FILES+=usr/share/syscons/fonts/swiss-8x8.fnt
OLD_FILES+=usr/share/syscons/keymaps/INDEX.keymaps
OLD_FILES+=usr/share/syscons/keymaps/be.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/be.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/bg.bds.ctrlcaps.kbd
OLD_FILES+=usr/share/syscons/keymaps/bg.phonetic.ctrlcaps.kbd
OLD_FILES+=usr/share/syscons/keymaps/br275.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/br275.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/br275.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/by.cp1131.kbd
OLD_FILES+=usr/share/syscons/keymaps/by.cp1251.kbd
OLD_FILES+=usr/share/syscons/keymaps/by.iso5.kbd
OLD_FILES+=usr/share/syscons/keymaps/ce.iso2.kbd
OLD_FILES+=usr/share/syscons/keymaps/colemak.iso15.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/cs.latin2.qwertz.kbd
OLD_FILES+=usr/share/syscons/keymaps/cz.iso2.kbd
OLD_FILES+=usr/share/syscons/keymaps/danish.cp865.kbd
OLD_FILES+=usr/share/syscons/keymaps/danish.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/danish.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/danish.iso.macbook.kbd
OLD_FILES+=usr/share/syscons/keymaps/dutch.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/eee_nordic.kbd
OLD_FILES+=usr/share/syscons/keymaps/el.iso07.kbd
OLD_FILES+=usr/share/syscons/keymaps/estonian.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/estonian.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/estonian.iso15.kbd
OLD_FILES+=usr/share/syscons/keymaps/finnish.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/finnish.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/fr.dvorak.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/fr.dvorak.kbd
OLD_FILES+=usr/share/syscons/keymaps/fr.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/fr.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/fr.macbook.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/fr_CA.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/german.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/german.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/german.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/gr.elot.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/gr.us101.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/hr.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/hu.iso2.101keys.kbd
OLD_FILES+=usr/share/syscons/keymaps/hu.iso2.102keys.kbd
OLD_FILES+=usr/share/syscons/keymaps/hy.armscii-8.kbd
OLD_FILES+=usr/share/syscons/keymaps/icelandic.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/icelandic.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/it.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/iw.iso8.kbd
OLD_FILES+=usr/share/syscons/keymaps/jp.106.kbd
OLD_FILES+=usr/share/syscons/keymaps/jp.106x.kbd
OLD_FILES+=usr/share/syscons/keymaps/kk.pt154.io.kbd
OLD_FILES+=usr/share/syscons/keymaps/kk.pt154.kst.kbd
OLD_FILES+=usr/share/syscons/keymaps/latinamerican.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/latinamerican.kbd
OLD_FILES+=usr/share/syscons/keymaps/lt.iso4.kbd
OLD_FILES+=usr/share/syscons/keymaps/norwegian.dvorak.kbd
OLD_FILES+=usr/share/syscons/keymaps/norwegian.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/pl_PL.ISO8859-2.kbd
OLD_FILES+=usr/share/syscons/keymaps/pl_PL.dvorak.kbd
OLD_FILES+=usr/share/syscons/keymaps/pt.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/pt.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/ru.cp866.kbd
OLD_FILES+=usr/share/syscons/keymaps/ru.iso5.kbd
OLD_FILES+=usr/share/syscons/keymaps/ru.koi8-r.kbd
OLD_FILES+=usr/share/syscons/keymaps/ru.koi8-r.shift.kbd
OLD_FILES+=usr/share/syscons/keymaps/ru.koi8-r.win.kbd
OLD_FILES+=usr/share/syscons/keymaps/si.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/sk.iso2.kbd
OLD_FILES+=usr/share/syscons/keymaps/spanish.dvorak.kbd
OLD_FILES+=usr/share/syscons/keymaps/spanish.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/spanish.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/spanish.iso15.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/swedish.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/swedish.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/swissfrench.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/swissfrench.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/swissfrench.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/swissgerman.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/swissgerman.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/swissgerman.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/swissgerman.macbook.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/tr.iso9.q.kbd
OLD_FILES+=usr/share/syscons/keymaps/ua.iso5.kbd
OLD_FILES+=usr/share/syscons/keymaps/ua.koi8-u.kbd
OLD_FILES+=usr/share/syscons/keymaps/ua.koi8-u.shift.alt.kbd
OLD_FILES+=usr/share/syscons/keymaps/uk.cp850-ctrl.kbd
OLD_FILES+=usr/share/syscons/keymaps/uk.cp850.kbd
OLD_FILES+=usr/share/syscons/keymaps/uk.dvorak.kbd
OLD_FILES+=usr/share/syscons/keymaps/uk.iso-ctrl.kbd
OLD_FILES+=usr/share/syscons/keymaps/uk.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.dvorak.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.dvorakl.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.dvorakp.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.dvorakr.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.dvorakx.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.emacs.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.iso.acc.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.iso.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.pc-ctrl.kbd
OLD_FILES+=usr/share/syscons/keymaps/us.unix.kbd
OLD_FILES+=usr/share/syscons/scrnmaps/armscii8-2haik8.scm
OLD_FILES+=usr/share/syscons/scrnmaps/iso-8859-1_to_cp437.scm
OLD_FILES+=usr/share/syscons/scrnmaps/iso-8859-4_for_vga9.scm
OLD_FILES+=usr/share/syscons/scrnmaps/iso-8859-7_to_cp437.scm
OLD_FILES+=usr/share/syscons/scrnmaps/koi8-r2cp866.scm
OLD_FILES+=usr/share/syscons/scrnmaps/koi8-u2cp866u.scm
OLD_FILES+=usr/share/syscons/scrnmaps/us-ascii_to_cp437.scm
OLD_DIRS+=usr/share/syscons/fonts
OLD_DIRS+=usr/share/syscons/scrnmaps
OLD_DIRS+=usr/share/syscons/keymaps
OLD_DIRS+=usr/share/syscons
.endif
.if ${MK_TALK} == no
OLD_FILES+=usr/bin/talk
OLD_FILES+=usr/libexec/ntalkd
OLD_FILES+=usr/share/man/man1/talk.1.gz
OLD_FILES+=usr/share/man/man8/talkd.8.gz
.endif
.if ${MK_TCSH} == no
OLD_FILES+=.cshrc
OLD_FILES+=etc/csh.cshrc
OLD_FILES+=etc/csh.login
OLD_FILES+=etc/csh.logout
OLD_FILES+=bin/csh
OLD_FILES+=bin/tcsh
OLD_FILES+=rescue/csh
OLD_FILES+=rescue/tcsh
OLD_FILES+=root/.cshrc
OLD_FILES+=root/.login
OLD_FILES+=usr/share/examples/etc/csh.cshrc
OLD_FILES+=usr/share/examples/etc/csh.login
OLD_FILES+=usr/share/examples/etc/csh.logout
OLD_FILES+=usr/share/examples/tcsh/complete.tcsh
OLD_FILES+=usr/share/examples/tcsh/csh-mode.el
OLD_DIRS+=usr/share/examples/tcsh
OLD_FILES+=usr/share/man/man1/csh.1.gz
OLD_FILES+=usr/share/man/man1/tcsh.1.gz
OLD_FILES+=usr/share/nls/de_AT.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/de_AT.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/de_AT.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/de_CH.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/de_CH.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/de_CH.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/de_DE.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/de_DE.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/de_DE.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/el_GR.ISO8859-7/tcsh.cat
OLD_FILES+=usr/share/nls/el_GR.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/es_ES.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/es_ES.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/es_ES.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/et_EE.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/et_EE.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fi_FI.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/fi_FI.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/fi_FI.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_BE.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/fr_BE.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/fr_BE.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CA.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CA.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CA.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CH.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CH.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/fr_CH.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/fr_FR.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/fr_FR.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/it_CH.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/it_CH.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/it_CH.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/it_IT.ISO8859-1/tcsh.cat
OLD_FILES+=usr/share/nls/it_IT.ISO8859-15/tcsh.cat
OLD_FILES+=usr/share/nls/it_IT.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/ja_JP.SJIS/tcsh.cat
OLD_FILES+=usr/share/nls/ja_JP.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/ja_JP.eucJP/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.CP1251/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.CP866/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.ISO8859-5/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.KOI8-R/tcsh.cat
OLD_FILES+=usr/share/nls/ru_RU.UTF-8/tcsh.cat
OLD_FILES+=usr/share/nls/uk_UA.ISO8859-5/tcsh.cat
OLD_FILES+=usr/share/nls/uk_UA.KOI8-U/tcsh.cat
OLD_FILES+=usr/share/nls/uk_UA.UTF-8/tcsh.cat
.endif
.if ${MK_TELNET} == no
OLD_FILES+=etc/pam.d/telnetd
OLD_FILES+=usr/bin/telnet
OLD_FILES+=usr/libexec/telnetd
OLD_FILES+=usr/share/man/man1/telnet.1.gz
OLD_FILES+=usr/share/man/man8/telnetd.8.gz
.endif
.if ${MK_TESTS} == yes
OLD_FILES+=usr/bin/atf-sh
OLD_FILES+=usr/include/atf-c++/config.hpp
OLD_FILES+=usr/include/atf-c/config.h
OLD_LIBS+=usr/lib/libatf-c++.a
OLD_LIBS+=usr/lib/libatf-c++.so
OLD_LIBS+=usr/lib/libatf-c++.so.1
OLD_LIBS+=usr/lib/libatf-c++.so.2
OLD_LIBS+=usr/lib/libatf-c++_p.a
OLD_LIBS+=usr/lib/libatf-c.a
OLD_LIBS+=usr/lib/libatf-c.so
OLD_LIBS+=usr/lib/libatf-c.so.1
OLD_LIBS+=usr/lib/libatf-c_p.a
OLD_LIBS+=usr/lib/private/libatf-c.so.0
OLD_LIBS+=usr/lib/private/libatf-c++.so.1
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_LIBS+=usr/lib32/libatf-c++.a
OLD_LIBS+=usr/lib32/libatf-c++.so
OLD_LIBS+=usr/lib32/libatf-c++.so.1
OLD_LIBS+=usr/lib32/libatf-c++.so.2
OLD_LIBS+=usr/lib32/libatf-c++_p.a
OLD_LIBS+=usr/lib32/libatf-c.a
OLD_LIBS+=usr/lib32/libatf-c.so
OLD_LIBS+=usr/lib32/libatf-c.so.1
OLD_LIBS+=usr/lib32/libatf-c_p.a
OLD_LIBS+=usr/lib32/private/libatf-c.so.0
OLD_LIBS+=usr/lib32/private/libatf-c++.so.1
.endif
OLD_FILES+=usr/libdata/pkgconfig/atf-c++.pc
OLD_FILES+=usr/libdata/pkgconfig/atf-c.pc
OLD_FILES+=usr/libdata/pkgconfig/atf-sh.pc
OLD_FILES+=usr/share/aclocal/atf-c++.m4
OLD_FILES+=usr/share/aclocal/atf-c.m4
OLD_FILES+=usr/share/aclocal/atf-common.m4
OLD_FILES+=usr/share/aclocal/atf-sh.m4
OLD_DIRS+=usr/share/aclocal
OLD_DIRS+=usr/tests/bin/chown
OLD_FILES+=usr/tests/bin/chown/Kyuafile
OLD_FILES+=usr/tests/bin/chown/chown-f_test
OLD_FILES+=usr/tests/bin/chown/units_basics
OLD_FILES+=usr/tests/bin/date/legacy_test
OLD_FILES+=usr/tests/bin/sh/legacy_test
OLD_FILES+=usr/tests/usr.bin/atf/Kyuafile
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/Kyuafile
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/atf_check_test
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/config_test
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/integration_test
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/misc_helpers
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/normalize_test
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/tc_test
OLD_FILES+=usr/tests/usr.bin/atf/atf-sh/tp_test
OLD_DIRS+=usr/tests/usr.bin/atf/atf-sh
OLD_DIRS+=usr/tests/usr.bin/atf
OLD_FILES+=usr/tests/lib/atf/libatf-c/test_helpers_test
OLD_FILES+=usr/tests/lib/atf/test-programs/fork_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/application_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/config_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/expand_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/parser_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/sanity_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/ui_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/env_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/exceptions_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/expand_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/fs_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/parser_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/process_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/sanity_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/pkg_config_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/text_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/ui_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/config_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/dynstr_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/env_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/fs_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/list_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/map_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/pkg_config_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/process_helpers
OLD_FILES+=usr/tests/lib/atf/libatf-c/process_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/sanity_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/text_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/user_test
.if ${MK_MAKE} == yes
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.status.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd/libtest.a
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.status.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod/libtest.a
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.status.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stderr.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.6
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/expected.stdout.7
OLD_FILES+=usr/tests/usr.bin/make/archives/fmt_oldbsd/libtest.a
OLD_FILES+=usr/tests/usr.bin/make/archives/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/basic/t0/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/basic/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/basic/t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t1/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/basic/t1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/basic/t1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/basic/t1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t2/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/basic/t2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/basic/t2/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/basic/t2/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t2/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t2/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t3/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/basic/t3/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/basic/t3/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t3/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/basic/t3/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/basic/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/execution/ellipsis/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/execution/empty/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/execution/empty/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/execution/empty/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/execution/empty/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/execution/empty/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/execution/empty/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/execution/joberr/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/execution/plus/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/execution/plus/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/execution/plus/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/execution/plus/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/execution/plus/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/execution/plus/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/execution/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/shell/builtin/sh
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/shell/meta/sh
OLD_FILES+=usr/tests/usr.bin/make/shell/path/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/shell/path/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/shell/path/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/shell/path/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/shell/path/sh
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/shell/path_select/shell
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/shell/replace/shell
OLD_FILES+=usr/tests/usr.bin/make/shell/select/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/shell/select/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/shell/select/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/shell/select/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/shell/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/TEST1.a
OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/basic/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/TEST1.a
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/TEST2.a
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/TEST1.a
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/TEST2.a
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/src_wild2/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/suffixes/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/directive-t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.3
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.4
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.status.5
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.4
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stderr.5
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.4
OLD_FILES+=usr/tests/usr.bin/make/syntax/enl/expected.stdout.5
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/funny-targets/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/syntax/semi/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/syntax/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/cleanup
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/cleanup
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/1/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/mk/sys.mk
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/mk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/t2/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/sysmk/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_M/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.status.3
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stderr.3
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/variables/modifier_t/expected.stdout.3
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.status.2
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stderr.2
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/variables/opt_V/expected.stdout.2
OLD_FILES+=usr/tests/usr.bin/make/variables/t0/legacy_test
OLD_FILES+=usr/tests/usr.bin/make/variables/t0/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/variables/t0/Makefile.test
OLD_FILES+=usr/tests/usr.bin/make/variables/t0/expected.status.1
OLD_FILES+=usr/tests/usr.bin/make/variables/t0/expected.stderr.1
OLD_FILES+=usr/tests/usr.bin/make/variables/t0/expected.stdout.1
OLD_FILES+=usr/tests/usr.bin/make/variables/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/Kyuafile
OLD_FILES+=usr/tests/usr.bin/make/common.sh
OLD_FILES+=usr/tests/usr.bin/make/test-new.mk
OLD_DIRS+=usr/tests/usr.bin/make/variables/t0
OLD_DIRS+=usr/tests/usr.bin/make/variables/opt_V
OLD_DIRS+=usr/tests/usr.bin/make/variables/modifier_t
OLD_DIRS+=usr/tests/usr.bin/make/variables/modifier_M
OLD_DIRS+=usr/tests/usr.bin/make/variables
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2/mk
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2/2/1
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2/2
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t2
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1/mk
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1/2/1
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1/2
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t1
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0/mk
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0/2/1
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0/2
OLD_DIRS+=usr/tests/usr.bin/make/sysmk/t0
OLD_DIRS+=usr/tests/usr.bin/make/sysmk
OLD_DIRS+=usr/tests/usr.bin/make/syntax/semi
OLD_DIRS+=usr/tests/usr.bin/make/syntax/funny-targets
OLD_DIRS+=usr/tests/usr.bin/make/syntax/enl
OLD_DIRS+=usr/tests/usr.bin/make/syntax/directive-t0
OLD_DIRS+=usr/tests/usr.bin/make/syntax
OLD_DIRS+=usr/tests/usr.bin/make/suffixes/src_wild2
OLD_DIRS+=usr/tests/usr.bin/make/suffixes/src_wild1
OLD_DIRS+=usr/tests/usr.bin/make/suffixes/basic
OLD_DIRS+=usr/tests/usr.bin/make/suffixes
OLD_DIRS+=usr/tests/usr.bin/make/shell/select
OLD_DIRS+=usr/tests/usr.bin/make/shell/replace
OLD_DIRS+=usr/tests/usr.bin/make/shell/path_select
OLD_DIRS+=usr/tests/usr.bin/make/shell/path
OLD_DIRS+=usr/tests/usr.bin/make/shell/meta
OLD_DIRS+=usr/tests/usr.bin/make/shell/builtin
OLD_DIRS+=usr/tests/usr.bin/make/shell
OLD_DIRS+=usr/tests/usr.bin/make/execution/plus
OLD_DIRS+=usr/tests/usr.bin/make/execution/joberr
OLD_DIRS+=usr/tests/usr.bin/make/execution/empty
OLD_DIRS+=usr/tests/usr.bin/make/execution/ellipsis
OLD_DIRS+=usr/tests/usr.bin/make/execution
OLD_DIRS+=usr/tests/usr.bin/make/basic/t3
OLD_DIRS+=usr/tests/usr.bin/make/basic/t2
OLD_DIRS+=usr/tests/usr.bin/make/basic/t1
OLD_DIRS+=usr/tests/usr.bin/make/basic/t0
OLD_DIRS+=usr/tests/usr.bin/make/basic
OLD_DIRS+=usr/tests/usr.bin/make/archives/fmt_oldbsd
OLD_DIRS+=usr/tests/usr.bin/make/archives/fmt_44bsd_mod
OLD_DIRS+=usr/tests/usr.bin/make/archives/fmt_44bsd
OLD_DIRS+=usr/tests/usr.bin/make/archives
OLD_DIRS+=usr/tests/usr.bin/make
OLD_FILES+=usr/tests/usr.bin/yacc/legacy_test
OLD_FILES+=usr/tests/usr.bin/yacc/regress.00.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.01.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.02.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.03.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.04.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.05.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.06.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.07.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.08.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.09.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.10.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.11.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.12.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.13.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.14.out
OLD_FILES+=usr/tests/usr.bin/yacc/regress.sh
OLD_FILES+=usr/tests/usr.bin/yacc/undefined.y
.endif
.else
# ATF libraries.
OLD_FILES+=etc/mtree/BSD.tests.dist
OLD_FILES+=usr/bin/atf-sh
OLD_DIRS+=usr/include/atf-c
OLD_FILES+=usr/include/atf-c/build.h
OLD_FILES+=usr/include/atf-c/check.h
OLD_FILES+=usr/include/atf-c/config.h
OLD_FILES+=usr/include/atf-c/defs.h
OLD_FILES+=usr/include/atf-c/error.h
OLD_FILES+=usr/include/atf-c/error_fwd.h
OLD_FILES+=usr/include/atf-c/macros.h
OLD_FILES+=usr/include/atf-c/tc.h
OLD_FILES+=usr/include/atf-c/tp.h
OLD_FILES+=usr/include/atf-c/utils.h
OLD_FILES+=usr/include/atf-c.h
OLD_DIRS+=usr/include/atf-c++
OLD_FILES+=usr/include/atf-c++/build.hpp
OLD_FILES+=usr/include/atf-c++/check.hpp
OLD_FILES+=usr/include/atf-c++/config.hpp
OLD_FILES+=usr/include/atf-c++/macros.hpp
OLD_FILES+=usr/include/atf-c++/tests.hpp
OLD_FILES+=usr/include/atf-c++/utils.hpp
OLD_FILES+=usr/include/atf-c++.hpp
OLD_FILES+=usr/lib/libatf-c_p.a
OLD_FILES+=usr/lib/libatf-c.so.1
OLD_FILES+=usr/lib/libatf-c.so
OLD_FILES+=usr/lib/libatf-c++.a
OLD_FILES+=usr/lib/libatf-c++_p.a
OLD_FILES+=usr/lib/libatf-c++.so.1
OLD_FILES+=usr/lib/libatf-c++.so
OLD_FILES+=usr/lib/libatf-c.a
OLD_FILES+=usr/libexec/atf-check
OLD_FILES+=usr/libexec/atf-sh
OLD_DIRS+=usr/share/atf
OLD_FILES+=usr/share/atf/libatf-sh.subr
OLD_DIRS+=usr/share/doc/atf
OLD_FILES+=usr/share/doc/atf/AUTHORS
OLD_FILES+=usr/share/doc/atf/COPYING
OLD_FILES+=usr/share/doc/atf/NEWS
OLD_FILES+=usr/share/doc/atf/README
OLD_FILES+=usr/share/doc/pjdfstest/README
OLD_FILES+=usr/share/man/man1/atf-check.1.gz
OLD_FILES+=usr/share/man/man1/atf-sh.1.gz
OLD_FILES+=usr/share/man/man1/atf-test-program.1.gz
OLD_FILES+=usr/share/man/man3/atf-c-api.3.gz
OLD_FILES+=usr/share/man/man3/atf-c++-api.3.gz
OLD_FILES+=usr/share/man/man3/atf-sh-api.3.gz
OLD_FILES+=usr/share/man/man3/atf-sh.3.gz
OLD_FILES+=usr/share/man/man4/atf-test-case.4.gz
OLD_FILES+=usr/share/man/man7/atf.7.gz
OLD_FILES+=usr/share/mk/atf.test.mk
OLD_FILES+=usr/share/mk/plain.test.mk
OLD_FILES+=usr/share/mk/suite.test.mk
OLD_FILES+=usr/share/mk/tap.test.mk
# Test suite.
. if exists(${DESTDIR}${TESTSBASE})
TESTS_DIRS!=find ${DESTDIR}${TESTSBASE} -type d | sed -e 's,^${DESTDIR}/,,'; echo
OLD_DIRS+=${TESTS_DIRS}
TESTS_FILES!=find ${DESTDIR}${TESTSBASE} \! -type d | sed -e 's,^${DESTDIR}/,,'; echo
OLD_FILES+=${TESTS_FILES}
. endif
.endif # Test suite.
.if ${MK_TESTS_SUPPORT} == no
OLD_FILES+=usr/include/atf-c++.hpp
OLD_FILES+=usr/include/atf-c++/build.hpp
OLD_FILES+=usr/include/atf-c++/check.hpp
OLD_FILES+=usr/include/atf-c++/macros.hpp
OLD_FILES+=usr/include/atf-c++/tests.hpp
OLD_FILES+=usr/include/atf-c++/utils.hpp
OLD_FILES+=usr/include/atf-c.h
OLD_FILES+=usr/include/atf-c/build.h
OLD_FILES+=usr/include/atf-c/check.h
OLD_FILES+=usr/include/atf-c/defs.h
OLD_FILES+=usr/include/atf-c/error.h
OLD_FILES+=usr/include/atf-c/error_fwd.h
OLD_FILES+=usr/include/atf-c/macros.h
OLD_FILES+=usr/include/atf-c/tc.h
OLD_FILES+=usr/include/atf-c/tp.h
OLD_FILES+=usr/include/atf-c/utils.h
OLD_LIBS+=usr/lib/private/libatf-c++.so.2
OLD_LIBS+=usr/lib/private/libatf-c.so.1
OLD_FILES+=usr/share/man/man3/atf-c++.3.gz
OLD_FILES+=usr/share/man/man3/atf-c-api++.3.gz
OLD_FILES+=usr/share/man/man3/atf-c-api.3.gz
OLD_FILES+=usr/share/man/man3/atf-c.3.gz
OLD_FILES+=usr/tests/lib/atf/Kyuafile
OLD_FILES+=usr/tests/lib/atf/libatf-c++/Kyuafile
OLD_FILES+=usr/tests/lib/atf/libatf-c++/atf_c++_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/build_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/check_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/Kyuafile
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/application_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/env_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/exceptions_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/fs_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/process_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/text_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/detail/version_helper
OLD_FILES+=usr/tests/lib/atf/libatf-c++/macros_hpp_test.cpp
OLD_FILES+=usr/tests/lib/atf/libatf-c++/macros_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/tests_test
OLD_FILES+=usr/tests/lib/atf/libatf-c++/unused_test.cpp
OLD_FILES+=usr/tests/lib/atf/libatf-c++/utils_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/Kyuafile
OLD_FILES+=usr/tests/lib/atf/libatf-c/atf_c_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/build_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/check_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/Kyuafile
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/dynstr_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/env_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/fs_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/list_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/map_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/process_helpers
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/process_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/sanity_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/text_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/user_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/detail/version_helper
OLD_FILES+=usr/tests/lib/atf/libatf-c/error_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/macros_h_test.c
OLD_FILES+=usr/tests/lib/atf/libatf-c/macros_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/tc_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/tp_test
OLD_FILES+=usr/tests/lib/atf/libatf-c/unused_test.c
OLD_FILES+=usr/tests/lib/atf/libatf-c/utils_test
OLD_FILES+=usr/tests/lib/atf/test-programs/Kyuafile
OLD_FILES+=usr/tests/lib/atf/test-programs/c_helpers
OLD_FILES+=usr/tests/lib/atf/test-programs/config_test
OLD_FILES+=usr/tests/lib/atf/test-programs/cpp_helpers
OLD_FILES+=usr/tests/lib/atf/test-programs/expect_test
OLD_FILES+=usr/tests/lib/atf/test-programs/meta_data_test
OLD_FILES+=usr/tests/lib/atf/test-programs/result_test
OLD_FILES+=usr/tests/lib/atf/test-programs/sh_helpers
OLD_FILES+=usr/tests/lib/atf/test-programs/srcdir_test
.endif
.if ${MK_TEXTPROC} == no
OLD_FILES+=usr/bin/checknr
OLD_FILES+=usr/bin/colcrt
OLD_FILES+=usr/bin/ul
OLD_FILES+=usr/share/man/man1/checknr.1.gz
OLD_FILES+=usr/share/man/man1/colcrt.1.gz
OLD_FILES+=usr/share/man/man1/ul.1.gz
.endif
.if ${MK_TFTP} == no
OLD_FILES+=usr/bin/tftp
OLD_FILES+=usr/libexec/tftpd
OLD_FILES+=usr/share/man/man1/tftp.1.gz
OLD_FILES+=usr/share/man/man8/tftpd.8.gz
.endif
.if ${MK_TOOLCHAIN} == no
OLD_FILES+=usr/bin/addr2line
OLD_FILES+=usr/bin/as
OLD_FILES+=usr/bin/byacc
OLD_FILES+=usr/bin/cc
OLD_FILES+=usr/bin/c89
OLD_FILES+=usr/bin/c++
OLD_FILES+=usr/bin/c++filt
OLD_FILES+=usr/bin/ld
OLD_FILES+=usr/bin/ld.bfd
OLD_FILES+=usr/bin/nm
OLD_FILES+=usr/bin/objcopy
OLD_FILES+=usr/bin/readelf
OLD_FILES+=usr/bin/size
OLD_FILES+=usr/bin/strip
OLD_FILES+=usr/bin/yacc
OLD_FILES+=usr/share/man/man1/addr2line.1.gz
OLD_FILES+=usr/share/man/man1/c++filt.1.gz
OLD_FILES+=usr/share/man/man1/nm.1.gz
OLD_FILES+=usr/share/man/man1/readelf.1.gz
OLD_FILES+=usr/share/man/man1/size.1.gz
OLD_FILES+=usr/share/man/man1/strip.1.gz
OLD_FILES+=usr/share/man/man1/objcopy.1.gz
# lib/libelf
OLD_FILES+=usr/share/man/man3/elf.3.gz
OLD_FILES+=usr/share/man/man3/elf_begin.3.gz
OLD_FILES+=usr/share/man/man3/elf_cntl.3.gz
OLD_FILES+=usr/share/man/man3/elf_end.3.gz
OLD_FILES+=usr/share/man/man3/elf_errmsg.3.gz
OLD_FILES+=usr/share/man/man3/elf_fill.3.gz
OLD_FILES+=usr/share/man/man3/elf_flagdata.3.gz
OLD_FILES+=usr/share/man/man3/elf_getarhdr.3.gz
OLD_FILES+=usr/share/man/man3/elf_getarsym.3.gz
OLD_FILES+=usr/share/man/man3/elf_getbase.3.gz
OLD_FILES+=usr/share/man/man3/elf_getdata.3.gz
OLD_FILES+=usr/share/man/man3/elf_getident.3.gz
OLD_FILES+=usr/share/man/man3/elf_getscn.3.gz
OLD_FILES+=usr/share/man/man3/elf_getphdrnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getphnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshdrnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshnum.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshdrstrndx.3.gz
OLD_FILES+=usr/share/man/man3/elf_getshstrndx.3.gz
OLD_FILES+=usr/share/man/man3/elf_hash.3.gz
OLD_FILES+=usr/share/man/man3/elf_kind.3.gz
OLD_FILES+=usr/share/man/man3/elf_memory.3.gz
OLD_FILES+=usr/share/man/man3/elf_next.3.gz
OLD_FILES+=usr/share/man/man3/elf_open.3.gz
OLD_FILES+=usr/share/man/man3/elf_rawfile.3.gz
OLD_FILES+=usr/share/man/man3/elf_rand.3.gz
OLD_FILES+=usr/share/man/man3/elf_strptr.3.gz
OLD_FILES+=usr/share/man/man3/elf_update.3.gz
OLD_FILES+=usr/share/man/man3/elf_version.3.gz
OLD_FILES+=usr/share/man/man3/gelf.3.gz
OLD_FILES+=usr/share/man/man3/gelf_checksum.3.gz
OLD_FILES+=usr/share/man/man3/gelf_fsize.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getcap.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getclass.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getdyn.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getehdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getmove.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getphdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getrel.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getrela.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getshdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getsym.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getsyminfo.3.gz
OLD_FILES+=usr/share/man/man3/gelf_getsymshndx.3.gz
OLD_FILES+=usr/share/man/man3/gelf_newehdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_newphdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_update_ehdr.3.gz
OLD_FILES+=usr/share/man/man3/gelf_xlatetof.3.gz
# lib/libelftc
OLD_FILES+=usr/share/man/man3/elftc.3.gz
OLD_FILES+=usr/share/man/man3/elftc_bfd_find_target.3.gz
OLD_FILES+=usr/share/man/man3/elftc_copyfile.3.gz
OLD_FILES+=usr/share/man/man3/elftc_demangle.3.gz
OLD_FILES+=usr/share/man/man3/elftc_reloc_type_str.3.gz
OLD_FILES+=usr/share/man/man3/elftc_set_timestamps.3.gz
OLD_FILES+=usr/share/man/man3/elftc_timestamp.3.gz
OLD_FILES+=usr/share/man/man3/elftc_string_table_create.3.gz
OLD_FILES+=usr/share/man/man3/elftc_version.3.gz
OLD_FILES+=usr/tests/usr.bin/yacc/Kyuafile
OLD_FILES+=usr/tests/usr.bin/yacc/btyacc_calc1.y
OLD_FILES+=usr/tests/usr.bin/yacc/btyacc_demo.y
OLD_FILES+=usr/tests/usr.bin/yacc/calc.y
OLD_FILES+=usr/tests/usr.bin/yacc/calc1.y
OLD_FILES+=usr/tests/usr.bin/yacc/calc2.y
OLD_FILES+=usr/tests/usr.bin/yacc/calc3.y
OLD_FILES+=usr/tests/usr.bin/yacc/code_calc.y
OLD_FILES+=usr/tests/usr.bin/yacc/code_debug.y
OLD_FILES+=usr/tests/usr.bin/yacc/code_error.y
OLD_FILES+=usr/tests/usr.bin/yacc/empty.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit1.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit2.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit3.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit4.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_inherit5.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax1.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax10.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax11.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax12.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax13.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax14.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax15.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax16.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax17.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax18.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax19.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax2.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax20.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax21.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax22.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax23.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax24.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax25.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax26.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax27.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax3.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax4.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax5.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax6.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax7.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax7a.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax7b.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax8.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax8a.y
OLD_FILES+=usr/tests/usr.bin/yacc/err_syntax9.y
OLD_FILES+=usr/tests/usr.bin/yacc/error.y
OLD_FILES+=usr/tests/usr.bin/yacc/grammar.y
OLD_FILES+=usr/tests/usr.bin/yacc/inherit0.y
OLD_FILES+=usr/tests/usr.bin/yacc/inherit1.y
OLD_FILES+=usr/tests/usr.bin/yacc/inherit2.y
OLD_FILES+=usr/tests/usr.bin/yacc/ok_syntax1.y
OLD_FILES+=usr/tests/usr.bin/yacc/pure_calc.y
OLD_FILES+=usr/tests/usr.bin/yacc/pure_error.y
OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc.y
OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc2.y
OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc3.y
OLD_FILES+=usr/tests/usr.bin/yacc/quote_calc4.y
OLD_FILES+=usr/tests/usr.bin/yacc/run_test
OLD_FILES+=usr/tests/usr.bin/yacc/varsyntax_calc1.y
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_b.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_b.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_l.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/big_l.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc1.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc2.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/calc3.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.code.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_calc.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.code.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/code_error.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/empty.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax1.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax10.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax11.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax12.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax13.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax14.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax15.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax16.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax17.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax18.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax19.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax2.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax20.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax21.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax22.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax23.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax24.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax25.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax26.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax27.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax3.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax4.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax5.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax6.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7a.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax7b.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax8a.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/err_syntax9.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/error.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.dot
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/grammar.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/help.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/help.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt1.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_b_opt1.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_code_c.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_code_c.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_defines.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_defines.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_graph.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_graph.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_include.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_include.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_opts.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_opts.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output1.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output1.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output2.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_output2.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt1.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_p_opt1.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_verbose.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/no_verbose.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/nostdin.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/nostdin.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/ok_syntax1.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_calc.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/pure_error.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc-s.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2-s.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc2.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3-s.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc3.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4-s.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/quote_calc4.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.i
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/rename_debug.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.error
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.output
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.tab.c
OLD_FILES+=usr/tests/usr.bin/yacc/yacc/varsyntax_calc1.tab.h
OLD_FILES+=usr/tests/usr.bin/yacc/yacc_tests
OLD_DIRS+=usr/tests/usr.bin/yacc
.endif
.if ${MK_UNBOUND} == no
OLD_FILES+=etc/rc.d/local_unbound
OLD_FILES+=etc/unbound
OLD_FILES+=usr/lib/private/libunbound.a
OLD_FILES+=usr/lib/private/libunbound.so
OLD_LIBS+=usr/lib/private/libunbound.so.5
OLD_FILES+=usr/lib/private/libunbound_p.a
.if ${TARGET_ARCH} == "amd64" || ${TARGET_ARCH} == "powerpc64"
OLD_FILES+=usr/lib32/private/libunbound.a
OLD_FILES+=usr/lib32/private/libunbound.so
OLD_LIBS+=usr/lib32/private/libunbound.so.5
OLD_FILES+=usr/lib32/private/libunbound_p.a
.endif
OLD_FILES+=usr/share/man/man5/local-unbound.conf.5.gz
OLD_FILES+=usr/share/man/man8/local-unbound-anchor.8.gz
OLD_FILES+=usr/share/man/man8/local-unbound-checkconf.8.gz
OLD_FILES+=usr/share/man/man8/local-unbound-control.8.gz
OLD_FILES+=usr/share/man/man8/local-unbound.8.gz
OLD_FILES+=usr/sbin/local-unbound-setup
OLD_FILES+=usr/sbin/local-unbound
OLD_FILES+=usr/sbin/local-unbound-anchor
OLD_FILES+=usr/sbin/local-unbound-checkconf
OLD_FILES+=usr/sbin/local-unbound-control
.endif
.if ${MK_USB} == no
OLD_FILES+=etc/devd/uath.conf
OLD_FILES+=etc/devd/ulpt.conf
OLD_FILES+=etc/devd/usb.conf
OLD_FILES+=usr/bin/usbhidaction
OLD_FILES+=usr/bin/usbhidctl
OLD_FILES+=usr/include/libusb.h
OLD_FILES+=usr/include/libusb20.h
OLD_FILES+=usr/include/libusb20_desc.h
OLD_FILES+=usr/include/usb.h
OLD_FILES+=usr/include/usbhid.h
OLD_FILES+=usr/lib/libusb.a
OLD_FILES+=usr/lib/libusb.so
OLD_LIBS+=usr/lib/libusb.so.3
OLD_FILES+=usr/lib/libusb_p.a
OLD_FILES+=usr/lib/libusbhid.a
OLD_FILES+=usr/lib/libusbhid.so
OLD_LIBS+=usr/lib/libusbhid.so.4
OLD_FILES+=usr/lib/libusbhid_p.a
OLD_FILES+=usr/lib32/libusb.a
OLD_FILES+=usr/lib32/libusb.so
OLD_LIBS+=usr/lib32/libusb.so.3
OLD_FILES+=usr/lib32/libusb_p.a
OLD_FILES+=usr/lib32/libusbhid.a
OLD_FILES+=usr/lib32/libusbhid.so
OLD_LIBS+=usr/lib32/libusbhid.so.4
OLD_FILES+=usr/lib32/libusbhid_p.a
OLD_FILES+=usr/libdata/pkgconfig/libusb-0.1.pc
OLD_FILES+=usr/libdata/pkgconfig/libusb-1.0.pc
OLD_FILES+=usr/libdata/pkgconfig/libusb-2.0.pc
OLD_FILES+=usr/sbin/uathload
OLD_FILES+=usr/sbin/uhsoctl
OLD_FILES+=usr/sbin/usbconfig
OLD_FILES+=usr/sbin/usbdump
OLD_FILES+=usr/share/examples/libusb20/Makefile
OLD_FILES+=usr/share/examples/libusb20/README
OLD_FILES+=usr/share/examples/libusb20/bulk.c
OLD_FILES+=usr/share/examples/libusb20/control.c
OLD_FILES+=usr/share/examples/libusb20/util.c
OLD_FILES+=usr/share/examples/libusb20/util.h
OLD_DIRS+=usr/share/examples/libusb20
OLD_FILES+=usr/share/firmware/ar5523.bin
OLD_FILES+=usr/share/man/man1/uhsoctl.1.gz
OLD_FILES+=usr/share/man/man1/usbhidaction.1.gz
OLD_FILES+=usr/share/man/man1/usbhidctl.1.gz
OLD_FILES+=usr/share/man/man3/hid_dispose_report_desc.3.gz
OLD_FILES+=usr/share/man/man3/hid_end_parse.3.gz
OLD_FILES+=usr/share/man/man3/hid_get_data.3.gz
OLD_FILES+=usr/share/man/man3/hid_get_item.3.gz
OLD_FILES+=usr/share/man/man3/hid_get_report_desc.3.gz
OLD_FILES+=usr/share/man/man3/hid_init.3.gz
OLD_FILES+=usr/share/man/man3/hid_locate.3.gz
OLD_FILES+=usr/share/man/man3/hid_report_size.3.gz
OLD_FILES+=usr/share/man/man3/hid_set_data.3.gz
OLD_FILES+=usr/share/man/man3/hid_start_parse.3.gz
OLD_FILES+=usr/share/man/man3/hid_usage_in_page.3.gz
OLD_FILES+=usr/share/man/man3/hid_usage_page.3.gz
OLD_FILES+=usr/share/man/man3/libusb.3.gz
OLD_FILES+=usr/share/man/man3/libusb20.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_add_dev_quirk.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_alloc_default.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_dequeue_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_device_foreach.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_enqueue_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_free.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_get_dev_quirk.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_get_quirk_name.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_get_template.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_remove_dev_quirk.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_be_set_template.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_desc_foreach.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_alloc.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_alloc_config.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_check_connected.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_close.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_detach_kernel_driver.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_free.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_address.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_backend_name.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_bus_number.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_config_index.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_debug.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_desc.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_device_desc.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_fd.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_iface_desc.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_info.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_mode.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_parent_address.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_parent_port.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_port_path.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_power_mode.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_power_usage.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_get_speed.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_kernel_driver_active.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_open.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_process.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_req_string_simple_sync.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_req_string_sync.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_request_sync.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_reset.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_set_alt_index.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_set_config_index.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_set_debug.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_set_power_mode.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_dev_wait_process.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_error_name.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_me_decode.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_me_encode.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_me_get_1.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_me_get_2.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_strerror.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_bulk_intr_sync.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_callback_wrapper.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_clear_stall_sync.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_close.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_drain.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_actual_frames.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_actual_length.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_length.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_max_frames.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_max_packet_length.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_max_total_length.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_pointer.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_priv_sc0.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_priv_sc1.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_status.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_get_time_complete.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_open.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_pending.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_buffer.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_callback.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_flags.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_length.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_priv_sc0.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_priv_sc1.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_timeout.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_set_total_frames.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_bulk.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_control.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_intr.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_setup_isoc.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_start.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_stop.3.gz
OLD_FILES+=usr/share/man/man3/libusb20_tr_submit.3.gz
OLD_FILES+=usr/share/man/man3/libusb_alloc_transfer.3.gz
OLD_FILES+=usr/share/man/man3/libusb_attach_kernel_driver.3.gz
OLD_FILES+=usr/share/man/man3/libusb_bulk_transfer.3.gz
OLD_FILES+=usr/share/man/man3/libusb_cancel_transfer.3.gz
OLD_FILES+=usr/share/man/man3/libusb_check_connected.3.gz
OLD_FILES+=usr/share/man/man3/libusb_claim_interface.3.gz
OLD_FILES+=usr/share/man/man3/libusb_clear_halt.3.gz
OLD_FILES+=usr/share/man/man3/libusb_close.3.gz
OLD_FILES+=usr/share/man/man3/libusb_control_transfer.3.gz
OLD_FILES+=usr/share/man/man3/libusb_detach_kernel_driver.3.gz
OLD_FILES+=usr/share/man/man3/libusb_detach_kernel_driver_np.3.gz
OLD_FILES+=usr/share/man/man3/libusb_error_name.3.gz
OLD_FILES+=usr/share/man/man3/libusb_event_handler_active.3.gz
OLD_FILES+=usr/share/man/man3/libusb_event_handling_ok.3.gz
OLD_FILES+=usr/share/man/man3/libusb_exit.3.gz
OLD_FILES+=usr/share/man/man3/libusb_free_bos_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/libusb_free_config_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/libusb_free_device_list.3.gz
OLD_FILES+=usr/share/man/man3/libusb_free_ss_endpoint_comp.3.gz
OLD_FILES+=usr/share/man/man3/libusb_free_transfer.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_active_config_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_bus_number.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_config_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_config_descriptor_by_value.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_configuration.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_device_address.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_device_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_device_list.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_device_speed.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_driver.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_driver_np.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_max_iso_packet_size.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_max_packet_size.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_next_timeout.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_pollfds.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_string_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/libusb_get_string_descriptor_ascii.3.gz
OLD_FILES+=usr/share/man/man3/libusb_handle_events.3.gz
OLD_FILES+=usr/share/man/man3/libusb_handle_events_completed.3.gz
OLD_FILES+=usr/share/man/man3/libusb_handle_events_locked.3.gz
OLD_FILES+=usr/share/man/man3/libusb_handle_events_timeout.3.gz
OLD_FILES+=usr/share/man/man3/libusb_handle_events_timeout_completed.3.gz
OLD_FILES+=usr/share/man/man3/libusb_init.3.gz
OLD_FILES+=usr/share/man/man3/libusb_interrupt_transfer.3.gz
OLD_FILES+=usr/share/man/man3/libusb_kernel_driver_active.3.gz
OLD_FILES+=usr/share/man/man3/libusb_lock_event_waiters.3.gz
OLD_FILES+=usr/share/man/man3/libusb_lock_events.3.gz
OLD_FILES+=usr/share/man/man3/libusb_open.3.gz
OLD_FILES+=usr/share/man/man3/libusb_open_device_with_vid_pid.3.gz
OLD_FILES+=usr/share/man/man3/libusb_parse_bos_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/libusb_parse_ss_endpoint_comp.3.gz
OLD_FILES+=usr/share/man/man3/libusb_ref_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb_release_interface.3.gz
OLD_FILES+=usr/share/man/man3/libusb_reset_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb_set_configuration.3.gz
OLD_FILES+=usr/share/man/man3/libusb_set_debug.3.gz
OLD_FILES+=usr/share/man/man3/libusb_set_interface_alt_setting.3.gz
OLD_FILES+=usr/share/man/man3/libusb_set_pollfd_notifiers.3.gz
OLD_FILES+=usr/share/man/man3/libusb_strerror.3.gz
OLD_FILES+=usr/share/man/man3/libusb_submit_transfer.3.gz
OLD_FILES+=usr/share/man/man3/libusb_try_lock_events.3.gz
OLD_FILES+=usr/share/man/man3/libusb_unlock_event_waiters.3.gz
OLD_FILES+=usr/share/man/man3/libusb_unlock_events.3.gz
OLD_FILES+=usr/share/man/man3/libusb_unref_device.3.gz
OLD_FILES+=usr/share/man/man3/libusb_wait_for_event.3.gz
OLD_FILES+=usr/share/man/man3/libusbhid.3.gz
OLD_FILES+=usr/share/man/man3/usb.3.gz
OLD_FILES+=usr/share/man/man3/usb_bulk_read.3.gz
OLD_FILES+=usr/share/man/man3/usb_bulk_write.3.gz
OLD_FILES+=usr/share/man/man3/usb_check_connected.3.gz
OLD_FILES+=usr/share/man/man3/usb_claim_interface.3.gz
OLD_FILES+=usr/share/man/man3/usb_clear_halt.3.gz
OLD_FILES+=usr/share/man/man3/usb_close.3.gz
OLD_FILES+=usr/share/man/man3/usb_control_msg.3.gz
OLD_FILES+=usr/share/man/man3/usb_destroy_configuration.3.gz
OLD_FILES+=usr/share/man/man3/usb_device.3.gz
OLD_FILES+=usr/share/man/man3/usb_fetch_and_parse_descriptors.3.gz
OLD_FILES+=usr/share/man/man3/usb_find_busses.3.gz
OLD_FILES+=usr/share/man/man3/usb_find_devices.3.gz
OLD_FILES+=usr/share/man/man3/usb_get_busses.3.gz
OLD_FILES+=usr/share/man/man3/usb_get_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/usb_get_descriptor_by_endpoint.3.gz
OLD_FILES+=usr/share/man/man3/usb_get_string.3.gz
OLD_FILES+=usr/share/man/man3/usb_get_string_simple.3.gz
OLD_FILES+=usr/share/man/man3/usb_init.3.gz
OLD_FILES+=usr/share/man/man3/usb_interrupt_read.3.gz
OLD_FILES+=usr/share/man/man3/usb_interrupt_write.3.gz
OLD_FILES+=usr/share/man/man3/usb_open.3.gz
OLD_FILES+=usr/share/man/man3/usb_parse_configuration.3.gz
OLD_FILES+=usr/share/man/man3/usb_parse_descriptor.3.gz
OLD_FILES+=usr/share/man/man3/usb_release_interface.3.gz
OLD_FILES+=usr/share/man/man3/usb_reset.3.gz
OLD_FILES+=usr/share/man/man3/usb_resetep.3.gz
OLD_FILES+=usr/share/man/man3/usb_set_altinterface.3.gz
OLD_FILES+=usr/share/man/man3/usb_set_configuration.3.gz
OLD_FILES+=usr/share/man/man3/usb_set_debug.3.gz
OLD_FILES+=usr/share/man/man3/usb_strerror.3.gz
OLD_FILES+=usr/share/man/man3/usbhid.3.gz
OLD_FILES+=usr/share/man/man4/if_otus.4.gz
OLD_FILES+=usr/share/man/man4/if_rsu.4.gz
OLD_FILES+=usr/share/man/man4/if_rtwn_usb.4.gz
OLD_FILES+=usr/share/man/man4/if_rum.4.gz
OLD_FILES+=usr/share/man/man4/if_run.4.gz
OLD_FILES+=usr/share/man/man4/if_zyd.4.gz
OLD_FILES+=usr/share/man/man4/otus.4.gz
OLD_FILES+=usr/share/man/man4/otusfw.4.gz
OLD_FILES+=usr/share/man/man4/rsu.4.gz
OLD_FILES+=usr/share/man/man4/rsufw.4.gz
OLD_FILES+=usr/share/man/man4/rtwn_usb.4.gz
OLD_FILES+=usr/share/man/man4/rum.4.gz
OLD_FILES+=usr/share/man/man4/run.4.gz
OLD_FILES+=usr/share/man/man4/runfw.4.gz
OLD_FILES+=usr/share/man/man4/u3g.4.gz
OLD_FILES+=usr/share/man/man4/u3gstub.4.gz
OLD_FILES+=usr/share/man/man4/uark.4.gz
OLD_FILES+=usr/share/man/man4/uart.4.gz
OLD_FILES+=usr/share/man/man4/uath.4.gz
OLD_FILES+=usr/share/man/man4/ubsa.4.gz
OLD_FILES+=usr/share/man/man4/ubsec.4.gz
OLD_FILES+=usr/share/man/man4/ubser.4.gz
OLD_FILES+=usr/share/man/man4/ubtbcmfw.4.gz
OLD_FILES+=usr/share/man/man4/uchcom.4.gz
OLD_FILES+=usr/share/man/man4/ucom.4.gz
OLD_FILES+=usr/share/man/man4/ucycom.4.gz
OLD_FILES+=usr/share/man/man4/udav.4.gz
OLD_FILES+=usr/share/man/man4/udbp.4.gz
OLD_FILES+=usr/share/man/man4/uep.4.gz
OLD_FILES+=usr/share/man/man4/ufm.4.gz
OLD_FILES+=usr/share/man/man4/ufoma.4.gz
OLD_FILES+=usr/share/man/man4/uftdi.4.gz
OLD_FILES+=usr/share/man/man4/ugen.4.gz
OLD_FILES+=usr/share/man/man4/uhci.4.gz
OLD_FILES+=usr/share/man/man4/uhid.4.gz
OLD_FILES+=usr/share/man/man4/uhso.4.gz
OLD_FILES+=usr/share/man/man4/uipaq.4.gz
OLD_FILES+=usr/share/man/man4/ukbd.4.gz
OLD_FILES+=usr/share/man/man4/uled.4.gz
OLD_FILES+=usr/share/man/man4/ulpt.4.gz
OLD_FILES+=usr/share/man/man4/umass.4.gz
OLD_FILES+=usr/share/man/man4/umcs.4.gz
OLD_FILES+=usr/share/man/man4/umct.4.gz
OLD_FILES+=usr/share/man/man4/umodem.4.gz
OLD_FILES+=usr/share/man/man4/umoscom.4.gz
OLD_FILES+=usr/share/man/man4/ums.4.gz
OLD_FILES+=usr/share/man/man4/unix.4.gz
OLD_FILES+=usr/share/man/man4/upgt.4.gz
OLD_FILES+=usr/share/man/man4/uplcom.4.gz
OLD_FILES+=usr/share/man/man4/ural.4.gz
OLD_FILES+=usr/share/man/man4/urio.4.gz
OLD_FILES+=usr/share/man/man4/urndis.4.gz
OLD_FILES+=usr/share/man/man4/urtw.4.gz
OLD_FILES+=usr/share/man/man4/usb.4.gz
OLD_FILES+=usr/share/man/man4/usb_quirk.4.gz
OLD_FILES+=usr/share/man/man4/usb_template.4.gz
OLD_FILES+=usr/share/man/man4/usfs.4.gz
OLD_FILES+=usr/share/man/man4/uslcom.4.gz
OLD_FILES+=usr/share/man/man4/uvisor.4.gz
OLD_FILES+=usr/share/man/man4/uvscom.4.gz
OLD_FILES+=usr/share/man/man4/zyd.4.gz
OLD_FILES+=usr/share/man/man8/uathload.8.gz
OLD_FILES+=usr/share/man/man8/usbconfig.8.gz
OLD_FILES+=usr/share/man/man8/usbdump.8.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_alloc_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_attach.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_detach.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_free_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data_error.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_get_data_linear.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_bytes_max.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data_buffer.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data_error.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_put_data_linear.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_reset.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_softc.9.gz
OLD_FILES+=usr/share/man/man9/usb_fifo_wakeup.9.gz
OLD_FILES+=usr/share/man/man9/usbd_do_request.9.gz
OLD_FILES+=usr/share/man/man9/usbd_do_request_flags.9.gz
OLD_FILES+=usr/share/man/man9/usbd_errstr.9.gz
OLD_FILES+=usr/share/man/man9/usbd_lookup_id_by_info.9.gz
OLD_FILES+=usr/share/man/man9/usbd_lookup_id_by_uaa.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_clear_stall.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_drain.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_pending.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_poll.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_setup.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_start.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_stop.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_submit.9.gz
OLD_FILES+=usr/share/man/man9/usbd_transfer_unsetup.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_clr_flag.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_frame_data.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_frame_len.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_get_frame.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_get_priv.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_is_stalled.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_max_framelen.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_max_frames.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_max_len.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_flag.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frame_data.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frame_len.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frame_offset.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_frames.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_interval.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_priv.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_stall.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_set_timeout.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_softc.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_state.9.gz
OLD_FILES+=usr/share/man/man9/usbd_xfer_status.9.gz
OLD_FILES+=usr/share/man/man9/usbdi.9.gz
OLD_FILES+=usr/share/misc/usb_hid_usages
OLD_FILES+=usr/share/misc/usbdevs
.endif
.if ${MK_UTMPX} == no
OLD_FILES+=etc/periodic/monthly/200.accounting
OLD_FILES+=usr/bin/last
OLD_FILES+=usr/bin/users
OLD_FILES+=usr/bin/who
OLD_FILES+=usr/sbin/ac
OLD_FILES+=usr/sbin/lastlogin
OLD_FILES+=usr/sbin/utx
OLD_FILES+=usr/share/man/man1/last.1.gz
OLD_FILES+=usr/share/man/man1/users.1.gz
OLD_FILES+=usr/share/man/man1/who.1.gz
OLD_FILES+=usr/share/man/man8/ac.8.gz
OLD_FILES+=usr/share/man/man8/lastlogin.8.gz
OLD_FILES+=usr/share/man/man8/utx.8.gz
.endif
.if ${MK_WIRELESS} == no
OLD_FILES+=etc/regdomain.xml
OLD_FILES+=etc/rc.d/hostapd
OLD_FILES+=etc/rc.d/wpa_supplicant
OLD_FILES+=usr/sbin/ancontrol
OLD_FILES+=usr/sbin/hostapd
OLD_FILES+=usr/sbin/hostapd_cli
OLD_FILES+=usr/sbin/ndis_events
OLD_FILES+=usr/sbin/wlandebug
OLD_FILES+=usr/sbin/wpa_cli
OLD_FILES+=usr/sbin/wpa_passphrase
OLD_FILES+=usr/sbin/wpa_supplicant
OLD_FILES+=usr/share/examples/etc/regdomain.xml
OLD_FILES+=usr/share/examples/etc/wpa_supplicant.conf
OLD_FILES+=usr/share/examples/hostapd/hostapd.conf
OLD_FILES+=usr/share/examples/hostapd/hostapd.eap_user
OLD_FILES+=usr/share/examples/hostapd/hostapd.wpa_psk
OLD_DIRS+=usr/share/examples/hostapd
OLD_FILES+=usr/share/man/man5/hostapd.conf.5.gz
OLD_FILES+=usr/share/man/man5/wpa_supplicant.conf.5.gz
OLD_FILES+=usr/share/man/man8/ancontrol.8.gz
OLD_FILES+=usr/share/man/man8/hostapd.8.gz
OLD_FILES+=usr/share/man/man8/hostapd_cli.8.gz
OLD_FILES+=usr/share/man/man8/ndis_events.8.gz
OLD_FILES+=usr/share/man/man8/wlandebug.8.gz
OLD_FILES+=usr/share/man/man8/wpa_cli.8.gz
OLD_FILES+=usr/share/man/man8/wpa_passphrase.8.gz
OLD_FILES+=usr/share/man/man8/wpa_supplicant.8.gz
# bsnmp module
OLD_FILES+=usr/lib/snmp_wlan.so
OLD_LIBS+=usr/lib/snmp_wlan.so.6
OLD_FILES+=usr/share/man/man3/snmp_wlan.3.gz
OLD_FILES+=usr/share/snmp/defs/wlan_tree.def
OLD_FILES+=usr/share/snmp/mibs/BEGEMOT-WIRELESS-MIB.txt
.endif
.if ${MK_SVNLITE} == no || ${MK_SVN} == yes
OLD_FILES+=usr/bin/svnlite
OLD_FILES+=usr/bin/svnliteadmin
OLD_FILES+=usr/bin/svnlitebench
OLD_FILES+=usr/bin/svnlitedumpfilter
OLD_FILES+=usr/bin/svnlitefsfs
OLD_FILES+=usr/bin/svnlitelook
OLD_FILES+=usr/bin/svnlitemucc
OLD_FILES+=usr/bin/svnliterdump
OLD_FILES+=usr/bin/svnliteserve
OLD_FILES+=usr/bin/svnlitesync
OLD_FILES+=usr/bin/svnliteversion
OLD_FILES+=usr/share/man/man1/svnlite.1.gz
.endif
.if ${MK_SVN} == no
OLD_FILES+=usr/bin/svn
OLD_FILES+=usr/bin/svnadmin
OLD_FILES+=usr/bin/svnbench
OLD_FILES+=usr/bin/svndumpfilter
OLD_FILES+=usr/bin/svnfsfs
OLD_FILES+=usr/bin/svnlook
OLD_FILES+=usr/bin/svnmucc
OLD_FILES+=usr/bin/svnrdump
OLD_FILES+=usr/bin/svnserve
OLD_FILES+=usr/bin/svnsync
OLD_FILES+=usr/bin/svnversion
.endif
.if ${MK_HYPERV} == no
OLD_FILES+=etc/devd/hyperv.conf
OLD_FILES+=usr/libexec/hyperv/hv_set_ifconfig
OLD_FILES+=usr/libexec/hyperv/hv_get_dns_info
OLD_FILES+=usr/libexec/hyperv/hv_get_dhcp_info
OLD_FILES+=usr/sbin/hv_kvp_daemon
OLD_FILES+=usr/sbin/hv_vss_daemon
OLD_FILES+=usr/share/man/man8/hv_kvp_daemon.8.gz
.endif
.if ${MK_ZONEINFO} == no
OLD_FILES+=usr/share/zoneinfo/Africa/Abidjan
OLD_FILES+=usr/share/zoneinfo/Africa/Accra
OLD_FILES+=usr/share/zoneinfo/Africa/Addis_Ababa
OLD_FILES+=usr/share/zoneinfo/Africa/Algiers
OLD_FILES+=usr/share/zoneinfo/Africa/Asmara
OLD_FILES+=usr/share/zoneinfo/Africa/Bamako
OLD_FILES+=usr/share/zoneinfo/Africa/Bangui
OLD_FILES+=usr/share/zoneinfo/Africa/Banjul
OLD_FILES+=usr/share/zoneinfo/Africa/Bissau
OLD_FILES+=usr/share/zoneinfo/Africa/Blantyre
OLD_FILES+=usr/share/zoneinfo/Africa/Brazzaville
OLD_FILES+=usr/share/zoneinfo/Africa/Bujumbura
OLD_FILES+=usr/share/zoneinfo/Africa/Cairo
OLD_FILES+=usr/share/zoneinfo/Africa/Casablanca
OLD_FILES+=usr/share/zoneinfo/Africa/Ceuta
OLD_FILES+=usr/share/zoneinfo/Africa/Conakry
OLD_FILES+=usr/share/zoneinfo/Africa/Dakar
OLD_FILES+=usr/share/zoneinfo/Africa/Dar_es_Salaam
OLD_FILES+=usr/share/zoneinfo/Africa/Djibouti
OLD_FILES+=usr/share/zoneinfo/Africa/Douala
OLD_FILES+=usr/share/zoneinfo/Africa/El_Aaiun
OLD_FILES+=usr/share/zoneinfo/Africa/Freetown
OLD_FILES+=usr/share/zoneinfo/Africa/Gaborone
OLD_FILES+=usr/share/zoneinfo/Africa/Harare
OLD_FILES+=usr/share/zoneinfo/Africa/Johannesburg
OLD_FILES+=usr/share/zoneinfo/Africa/Juba
OLD_FILES+=usr/share/zoneinfo/Africa/Kampala
OLD_FILES+=usr/share/zoneinfo/Africa/Khartoum
OLD_FILES+=usr/share/zoneinfo/Africa/Kigali
OLD_FILES+=usr/share/zoneinfo/Africa/Kinshasa
OLD_FILES+=usr/share/zoneinfo/Africa/Lagos
OLD_FILES+=usr/share/zoneinfo/Africa/Libreville
OLD_FILES+=usr/share/zoneinfo/Africa/Lome
OLD_FILES+=usr/share/zoneinfo/Africa/Luanda
OLD_FILES+=usr/share/zoneinfo/Africa/Lubumbashi
OLD_FILES+=usr/share/zoneinfo/Africa/Lusaka
OLD_FILES+=usr/share/zoneinfo/Africa/Malabo
OLD_FILES+=usr/share/zoneinfo/Africa/Maputo
OLD_FILES+=usr/share/zoneinfo/Africa/Maseru
OLD_FILES+=usr/share/zoneinfo/Africa/Mbabane
OLD_FILES+=usr/share/zoneinfo/Africa/Mogadishu
OLD_FILES+=usr/share/zoneinfo/Africa/Monrovia
OLD_FILES+=usr/share/zoneinfo/Africa/Nairobi
OLD_FILES+=usr/share/zoneinfo/Africa/Ndjamena
OLD_FILES+=usr/share/zoneinfo/Africa/Niamey
OLD_FILES+=usr/share/zoneinfo/Africa/Nouakchott
OLD_FILES+=usr/share/zoneinfo/Africa/Ouagadougou
OLD_FILES+=usr/share/zoneinfo/Africa/Porto-Novo
OLD_FILES+=usr/share/zoneinfo/Africa/Sao_Tome
OLD_FILES+=usr/share/zoneinfo/Africa/Tripoli
OLD_FILES+=usr/share/zoneinfo/Africa/Tunis
OLD_FILES+=usr/share/zoneinfo/Africa/Windhoek
OLD_FILES+=usr/share/zoneinfo/America/Adak
OLD_FILES+=usr/share/zoneinfo/America/Anchorage
OLD_FILES+=usr/share/zoneinfo/America/Anguilla
OLD_FILES+=usr/share/zoneinfo/America/Antigua
OLD_FILES+=usr/share/zoneinfo/America/Araguaina
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Buenos_Aires
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Catamarca
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Cordoba
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Jujuy
OLD_FILES+=usr/share/zoneinfo/America/Argentina/La_Rioja
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Mendoza
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Rio_Gallegos
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Salta
OLD_FILES+=usr/share/zoneinfo/America/Argentina/San_Juan
OLD_FILES+=usr/share/zoneinfo/America/Argentina/San_Luis
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Tucuman
OLD_FILES+=usr/share/zoneinfo/America/Argentina/Ushuaia
OLD_FILES+=usr/share/zoneinfo/America/Aruba
OLD_FILES+=usr/share/zoneinfo/America/Asuncion
OLD_FILES+=usr/share/zoneinfo/America/Atikokan
OLD_FILES+=usr/share/zoneinfo/America/Bahia
OLD_FILES+=usr/share/zoneinfo/America/Bahia_Banderas
OLD_FILES+=usr/share/zoneinfo/America/Barbados
OLD_FILES+=usr/share/zoneinfo/America/Belem
OLD_FILES+=usr/share/zoneinfo/America/Belize
OLD_FILES+=usr/share/zoneinfo/America/Blanc-Sablon
OLD_FILES+=usr/share/zoneinfo/America/Boa_Vista
OLD_FILES+=usr/share/zoneinfo/America/Bogota
OLD_FILES+=usr/share/zoneinfo/America/Boise
OLD_FILES+=usr/share/zoneinfo/America/Cambridge_Bay
OLD_FILES+=usr/share/zoneinfo/America/Campo_Grande
OLD_FILES+=usr/share/zoneinfo/America/Cancun
OLD_FILES+=usr/share/zoneinfo/America/Caracas
OLD_FILES+=usr/share/zoneinfo/America/Cayenne
OLD_FILES+=usr/share/zoneinfo/America/Cayman
OLD_FILES+=usr/share/zoneinfo/America/Chicago
OLD_FILES+=usr/share/zoneinfo/America/Chihuahua
OLD_FILES+=usr/share/zoneinfo/America/Costa_Rica
OLD_FILES+=usr/share/zoneinfo/America/Creston
OLD_FILES+=usr/share/zoneinfo/America/Cuiaba
OLD_FILES+=usr/share/zoneinfo/America/Curacao
OLD_FILES+=usr/share/zoneinfo/America/Danmarkshavn
OLD_FILES+=usr/share/zoneinfo/America/Dawson
OLD_FILES+=usr/share/zoneinfo/America/Dawson_Creek
OLD_FILES+=usr/share/zoneinfo/America/Denver
OLD_FILES+=usr/share/zoneinfo/America/Detroit
OLD_FILES+=usr/share/zoneinfo/America/Dominica
OLD_FILES+=usr/share/zoneinfo/America/Edmonton
OLD_FILES+=usr/share/zoneinfo/America/Eirunepe
OLD_FILES+=usr/share/zoneinfo/America/El_Salvador
OLD_FILES+=usr/share/zoneinfo/America/Fortaleza
OLD_FILES+=usr/share/zoneinfo/America/Glace_Bay
OLD_FILES+=usr/share/zoneinfo/America/Godthab
OLD_FILES+=usr/share/zoneinfo/America/Goose_Bay
OLD_FILES+=usr/share/zoneinfo/America/Grand_Turk
OLD_FILES+=usr/share/zoneinfo/America/Grenada
OLD_FILES+=usr/share/zoneinfo/America/Guadeloupe
OLD_FILES+=usr/share/zoneinfo/America/Guatemala
OLD_FILES+=usr/share/zoneinfo/America/Guayaquil
OLD_FILES+=usr/share/zoneinfo/America/Guyana
OLD_FILES+=usr/share/zoneinfo/America/Halifax
OLD_FILES+=usr/share/zoneinfo/America/Havana
OLD_FILES+=usr/share/zoneinfo/America/Hermosillo
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Indianapolis
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Knox
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Marengo
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Petersburg
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Tell_City
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Vevay
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Vincennes
OLD_FILES+=usr/share/zoneinfo/America/Indiana/Winamac
OLD_FILES+=usr/share/zoneinfo/America/Inuvik
OLD_FILES+=usr/share/zoneinfo/America/Iqaluit
OLD_FILES+=usr/share/zoneinfo/America/Jamaica
OLD_FILES+=usr/share/zoneinfo/America/Juneau
OLD_FILES+=usr/share/zoneinfo/America/Kentucky/Louisville
OLD_FILES+=usr/share/zoneinfo/America/Kentucky/Monticello
OLD_FILES+=usr/share/zoneinfo/America/Kralendijk
OLD_FILES+=usr/share/zoneinfo/America/La_Paz
OLD_FILES+=usr/share/zoneinfo/America/Lima
OLD_FILES+=usr/share/zoneinfo/America/Los_Angeles
OLD_FILES+=usr/share/zoneinfo/America/Lower_Princes
OLD_FILES+=usr/share/zoneinfo/America/Maceio
OLD_FILES+=usr/share/zoneinfo/America/Managua
OLD_FILES+=usr/share/zoneinfo/America/Manaus
OLD_FILES+=usr/share/zoneinfo/America/Marigot
OLD_FILES+=usr/share/zoneinfo/America/Martinique
OLD_FILES+=usr/share/zoneinfo/America/Matamoros
OLD_FILES+=usr/share/zoneinfo/America/Mazatlan
OLD_FILES+=usr/share/zoneinfo/America/Menominee
OLD_FILES+=usr/share/zoneinfo/America/Merida
OLD_FILES+=usr/share/zoneinfo/America/Metlakatla
OLD_FILES+=usr/share/zoneinfo/America/Mexico_City
OLD_FILES+=usr/share/zoneinfo/America/Miquelon
OLD_FILES+=usr/share/zoneinfo/America/Moncton
OLD_FILES+=usr/share/zoneinfo/America/Monterrey
OLD_FILES+=usr/share/zoneinfo/America/Montevideo
OLD_FILES+=usr/share/zoneinfo/America/Montreal
OLD_FILES+=usr/share/zoneinfo/America/Montserrat
OLD_FILES+=usr/share/zoneinfo/America/Nassau
OLD_FILES+=usr/share/zoneinfo/America/New_York
OLD_FILES+=usr/share/zoneinfo/America/Nipigon
OLD_FILES+=usr/share/zoneinfo/America/Nome
OLD_FILES+=usr/share/zoneinfo/America/Noronha
OLD_FILES+=usr/share/zoneinfo/America/North_Dakota/Beulah
OLD_FILES+=usr/share/zoneinfo/America/North_Dakota/Center
OLD_FILES+=usr/share/zoneinfo/America/North_Dakota/New_Salem
OLD_FILES+=usr/share/zoneinfo/America/Ojinaga
OLD_FILES+=usr/share/zoneinfo/America/Panama
OLD_FILES+=usr/share/zoneinfo/America/Pangnirtung
OLD_FILES+=usr/share/zoneinfo/America/Paramaribo
OLD_FILES+=usr/share/zoneinfo/America/Phoenix
OLD_FILES+=usr/share/zoneinfo/America/Port-au-Prince
OLD_FILES+=usr/share/zoneinfo/America/Port_of_Spain
OLD_FILES+=usr/share/zoneinfo/America/Porto_Velho
OLD_FILES+=usr/share/zoneinfo/America/Puerto_Rico
OLD_FILES+=usr/share/zoneinfo/America/Rainy_River
OLD_FILES+=usr/share/zoneinfo/America/Rankin_Inlet
OLD_FILES+=usr/share/zoneinfo/America/Recife
OLD_FILES+=usr/share/zoneinfo/America/Regina
OLD_FILES+=usr/share/zoneinfo/America/Resolute
OLD_FILES+=usr/share/zoneinfo/America/Rio_Branco
OLD_FILES+=usr/share/zoneinfo/America/Santa_Isabel
OLD_FILES+=usr/share/zoneinfo/America/Santarem
OLD_FILES+=usr/share/zoneinfo/America/Santiago
OLD_FILES+=usr/share/zoneinfo/America/Santo_Domingo
OLD_FILES+=usr/share/zoneinfo/America/Sao_Paulo
OLD_FILES+=usr/share/zoneinfo/America/Scoresbysund
OLD_FILES+=usr/share/zoneinfo/America/Sitka
OLD_FILES+=usr/share/zoneinfo/America/St_Barthelemy
OLD_FILES+=usr/share/zoneinfo/America/St_Johns
OLD_FILES+=usr/share/zoneinfo/America/St_Kitts
OLD_FILES+=usr/share/zoneinfo/America/St_Lucia
OLD_FILES+=usr/share/zoneinfo/America/St_Thomas
OLD_FILES+=usr/share/zoneinfo/America/St_Vincent
OLD_FILES+=usr/share/zoneinfo/America/Swift_Current
OLD_FILES+=usr/share/zoneinfo/America/Tegucigalpa
OLD_FILES+=usr/share/zoneinfo/America/Thule
OLD_FILES+=usr/share/zoneinfo/America/Thunder_Bay
OLD_FILES+=usr/share/zoneinfo/America/Tijuana
OLD_FILES+=usr/share/zoneinfo/America/Toronto
OLD_FILES+=usr/share/zoneinfo/America/Tortola
OLD_FILES+=usr/share/zoneinfo/America/Vancouver
OLD_FILES+=usr/share/zoneinfo/America/Whitehorse
OLD_FILES+=usr/share/zoneinfo/America/Winnipeg
OLD_FILES+=usr/share/zoneinfo/America/Yakutat
OLD_FILES+=usr/share/zoneinfo/America/Yellowknife
OLD_FILES+=usr/share/zoneinfo/Antarctica/Casey
OLD_FILES+=usr/share/zoneinfo/Antarctica/Davis
OLD_FILES+=usr/share/zoneinfo/Antarctica/DumontDUrville
OLD_FILES+=usr/share/zoneinfo/Antarctica/Macquarie
OLD_FILES+=usr/share/zoneinfo/Antarctica/Mawson
OLD_FILES+=usr/share/zoneinfo/Antarctica/McMurdo
OLD_FILES+=usr/share/zoneinfo/Antarctica/Palmer
OLD_FILES+=usr/share/zoneinfo/Antarctica/Rothera
OLD_FILES+=usr/share/zoneinfo/Antarctica/Syowa
OLD_FILES+=usr/share/zoneinfo/Antarctica/Troll
OLD_FILES+=usr/share/zoneinfo/Antarctica/Vostok
OLD_FILES+=usr/share/zoneinfo/Arctic/Longyearbyen
OLD_FILES+=usr/share/zoneinfo/Asia/Aden
OLD_FILES+=usr/share/zoneinfo/Asia/Almaty
OLD_FILES+=usr/share/zoneinfo/Asia/Amman
OLD_FILES+=usr/share/zoneinfo/Asia/Anadyr
OLD_FILES+=usr/share/zoneinfo/Asia/Aqtau
OLD_FILES+=usr/share/zoneinfo/Asia/Aqtobe
OLD_FILES+=usr/share/zoneinfo/Asia/Ashgabat
OLD_FILES+=usr/share/zoneinfo/Asia/Baghdad
OLD_FILES+=usr/share/zoneinfo/Asia/Bahrain
OLD_FILES+=usr/share/zoneinfo/Asia/Baku
OLD_FILES+=usr/share/zoneinfo/Asia/Bangkok
OLD_FILES+=usr/share/zoneinfo/Asia/Beirut
OLD_FILES+=usr/share/zoneinfo/Asia/Bishkek
OLD_FILES+=usr/share/zoneinfo/Asia/Brunei
OLD_FILES+=usr/share/zoneinfo/Asia/Chita
OLD_FILES+=usr/share/zoneinfo/Asia/Choibalsan
OLD_FILES+=usr/share/zoneinfo/Asia/Colombo
OLD_FILES+=usr/share/zoneinfo/Asia/Damascus
OLD_FILES+=usr/share/zoneinfo/Asia/Dhaka
OLD_FILES+=usr/share/zoneinfo/Asia/Dili
OLD_FILES+=usr/share/zoneinfo/Asia/Dubai
OLD_FILES+=usr/share/zoneinfo/Asia/Dushanbe
OLD_FILES+=usr/share/zoneinfo/Asia/Gaza
OLD_FILES+=usr/share/zoneinfo/Asia/Hebron
OLD_FILES+=usr/share/zoneinfo/Asia/Ho_Chi_Minh
OLD_FILES+=usr/share/zoneinfo/Asia/Hong_Kong
OLD_FILES+=usr/share/zoneinfo/Asia/Hovd
OLD_FILES+=usr/share/zoneinfo/Asia/Irkutsk
OLD_FILES+=usr/share/zoneinfo/Asia/Istanbul
OLD_FILES+=usr/share/zoneinfo/Asia/Jakarta
OLD_FILES+=usr/share/zoneinfo/Asia/Jayapura
OLD_FILES+=usr/share/zoneinfo/Asia/Jerusalem
OLD_FILES+=usr/share/zoneinfo/Asia/Kabul
OLD_FILES+=usr/share/zoneinfo/Asia/Kamchatka
OLD_FILES+=usr/share/zoneinfo/Asia/Karachi
OLD_FILES+=usr/share/zoneinfo/Asia/Kathmandu
OLD_FILES+=usr/share/zoneinfo/Asia/Khandyga
OLD_FILES+=usr/share/zoneinfo/Asia/Kolkata
OLD_FILES+=usr/share/zoneinfo/Asia/Krasnoyarsk
OLD_FILES+=usr/share/zoneinfo/Asia/Kuala_Lumpur
OLD_FILES+=usr/share/zoneinfo/Asia/Kuching
OLD_FILES+=usr/share/zoneinfo/Asia/Kuwait
OLD_FILES+=usr/share/zoneinfo/Asia/Macau
OLD_FILES+=usr/share/zoneinfo/Asia/Magadan
OLD_FILES+=usr/share/zoneinfo/Asia/Makassar
OLD_FILES+=usr/share/zoneinfo/Asia/Manila
OLD_FILES+=usr/share/zoneinfo/Asia/Muscat
OLD_FILES+=usr/share/zoneinfo/Asia/Nicosia
OLD_FILES+=usr/share/zoneinfo/Asia/Novokuznetsk
OLD_FILES+=usr/share/zoneinfo/Asia/Novosibirsk
OLD_FILES+=usr/share/zoneinfo/Asia/Omsk
OLD_FILES+=usr/share/zoneinfo/Asia/Oral
OLD_FILES+=usr/share/zoneinfo/Asia/Phnom_Penh
OLD_FILES+=usr/share/zoneinfo/Asia/Pontianak
OLD_FILES+=usr/share/zoneinfo/Asia/Pyongyang
OLD_FILES+=usr/share/zoneinfo/Asia/Qatar
OLD_FILES+=usr/share/zoneinfo/Asia/Qyzylorda
OLD_FILES+=usr/share/zoneinfo/Asia/Rangoon
OLD_FILES+=usr/share/zoneinfo/Asia/Riyadh
OLD_FILES+=usr/share/zoneinfo/Asia/Sakhalin
OLD_FILES+=usr/share/zoneinfo/Asia/Samarkand
OLD_FILES+=usr/share/zoneinfo/Asia/Seoul
OLD_FILES+=usr/share/zoneinfo/Asia/Shanghai
OLD_FILES+=usr/share/zoneinfo/Asia/Singapore
OLD_FILES+=usr/share/zoneinfo/Asia/Srednekolymsk
OLD_FILES+=usr/share/zoneinfo/Asia/Taipei
OLD_FILES+=usr/share/zoneinfo/Asia/Tashkent
OLD_FILES+=usr/share/zoneinfo/Asia/Tbilisi
OLD_FILES+=usr/share/zoneinfo/Asia/Tehran
OLD_FILES+=usr/share/zoneinfo/Asia/Thimphu
OLD_FILES+=usr/share/zoneinfo/Asia/Tokyo
OLD_FILES+=usr/share/zoneinfo/Asia/Ulaanbaatar
OLD_FILES+=usr/share/zoneinfo/Asia/Urumqi
OLD_FILES+=usr/share/zoneinfo/Asia/Ust-Nera
OLD_FILES+=usr/share/zoneinfo/Asia/Vientiane
OLD_FILES+=usr/share/zoneinfo/Asia/Vladivostok
OLD_FILES+=usr/share/zoneinfo/Asia/Yakutsk
OLD_FILES+=usr/share/zoneinfo/Asia/Yekaterinburg
OLD_FILES+=usr/share/zoneinfo/Asia/Yerevan
OLD_FILES+=usr/share/zoneinfo/Atlantic/Azores
OLD_FILES+=usr/share/zoneinfo/Atlantic/Bermuda
OLD_FILES+=usr/share/zoneinfo/Atlantic/Canary
OLD_FILES+=usr/share/zoneinfo/Atlantic/Cape_Verde
OLD_FILES+=usr/share/zoneinfo/Atlantic/Faroe
OLD_FILES+=usr/share/zoneinfo/Atlantic/Madeira
OLD_FILES+=usr/share/zoneinfo/Atlantic/Reykjavik
OLD_FILES+=usr/share/zoneinfo/Atlantic/South_Georgia
OLD_FILES+=usr/share/zoneinfo/Atlantic/St_Helena
OLD_FILES+=usr/share/zoneinfo/Atlantic/Stanley
OLD_FILES+=usr/share/zoneinfo/Australia/Adelaide
OLD_FILES+=usr/share/zoneinfo/Australia/Brisbane
OLD_FILES+=usr/share/zoneinfo/Australia/Broken_Hill
OLD_FILES+=usr/share/zoneinfo/Australia/Currie
OLD_FILES+=usr/share/zoneinfo/Australia/Darwin
OLD_FILES+=usr/share/zoneinfo/Australia/Eucla
OLD_FILES+=usr/share/zoneinfo/Australia/Hobart
OLD_FILES+=usr/share/zoneinfo/Australia/Lindeman
OLD_FILES+=usr/share/zoneinfo/Australia/Lord_Howe
OLD_FILES+=usr/share/zoneinfo/Australia/Melbourne
OLD_FILES+=usr/share/zoneinfo/Australia/Perth
OLD_FILES+=usr/share/zoneinfo/Australia/Sydney
OLD_FILES+=usr/share/zoneinfo/CET
OLD_FILES+=usr/share/zoneinfo/CST6CDT
OLD_FILES+=usr/share/zoneinfo/EET
OLD_FILES+=usr/share/zoneinfo/EST
OLD_FILES+=usr/share/zoneinfo/EST5EDT
OLD_FILES+=usr/share/zoneinfo/Etc/GMT
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+0
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+1
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+10
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+11
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+12
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+2
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+3
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+4
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+5
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+6
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+7
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+8
OLD_FILES+=usr/share/zoneinfo/Etc/GMT+9
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-0
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-1
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-10
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-11
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-12
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-13
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-14
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-2
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-3
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-4
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-5
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-6
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-7
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-8
OLD_FILES+=usr/share/zoneinfo/Etc/GMT-9
OLD_FILES+=usr/share/zoneinfo/Etc/GMT0
OLD_FILES+=usr/share/zoneinfo/Etc/Greenwich
OLD_FILES+=usr/share/zoneinfo/Etc/UCT
OLD_FILES+=usr/share/zoneinfo/Etc/UTC
OLD_FILES+=usr/share/zoneinfo/Etc/Universal
OLD_FILES+=usr/share/zoneinfo/Etc/Zulu
OLD_FILES+=usr/share/zoneinfo/Europe/Amsterdam
OLD_FILES+=usr/share/zoneinfo/Europe/Andorra
OLD_FILES+=usr/share/zoneinfo/Europe/Athens
OLD_FILES+=usr/share/zoneinfo/Europe/Belgrade
OLD_FILES+=usr/share/zoneinfo/Europe/Berlin
OLD_FILES+=usr/share/zoneinfo/Europe/Bratislava
OLD_FILES+=usr/share/zoneinfo/Europe/Brussels
OLD_FILES+=usr/share/zoneinfo/Europe/Bucharest
OLD_FILES+=usr/share/zoneinfo/Europe/Budapest
OLD_FILES+=usr/share/zoneinfo/Europe/Busingen
OLD_FILES+=usr/share/zoneinfo/Europe/Chisinau
OLD_FILES+=usr/share/zoneinfo/Europe/Copenhagen
OLD_FILES+=usr/share/zoneinfo/Europe/Dublin
OLD_FILES+=usr/share/zoneinfo/Europe/Gibraltar
OLD_FILES+=usr/share/zoneinfo/Europe/Guernsey
OLD_FILES+=usr/share/zoneinfo/Europe/Helsinki
OLD_FILES+=usr/share/zoneinfo/Europe/Isle_of_Man
OLD_FILES+=usr/share/zoneinfo/Europe/Istanbul
OLD_FILES+=usr/share/zoneinfo/Europe/Jersey
OLD_FILES+=usr/share/zoneinfo/Europe/Kaliningrad
OLD_FILES+=usr/share/zoneinfo/Europe/Kiev
OLD_FILES+=usr/share/zoneinfo/Europe/Lisbon
OLD_FILES+=usr/share/zoneinfo/Europe/Ljubljana
OLD_FILES+=usr/share/zoneinfo/Europe/London
OLD_FILES+=usr/share/zoneinfo/Europe/Luxembourg
OLD_FILES+=usr/share/zoneinfo/Europe/Madrid
OLD_FILES+=usr/share/zoneinfo/Europe/Malta
OLD_FILES+=usr/share/zoneinfo/Europe/Mariehamn
OLD_FILES+=usr/share/zoneinfo/Europe/Minsk
OLD_FILES+=usr/share/zoneinfo/Europe/Monaco
OLD_FILES+=usr/share/zoneinfo/Europe/Moscow
OLD_FILES+=usr/share/zoneinfo/Europe/Nicosia
OLD_FILES+=usr/share/zoneinfo/Europe/Oslo
OLD_FILES+=usr/share/zoneinfo/Europe/Paris
OLD_FILES+=usr/share/zoneinfo/Europe/Podgorica
OLD_FILES+=usr/share/zoneinfo/Europe/Prague
OLD_FILES+=usr/share/zoneinfo/Europe/Riga
OLD_FILES+=usr/share/zoneinfo/Europe/Rome
OLD_FILES+=usr/share/zoneinfo/Europe/Samara
OLD_FILES+=usr/share/zoneinfo/Europe/San_Marino
OLD_FILES+=usr/share/zoneinfo/Europe/Sarajevo
OLD_FILES+=usr/share/zoneinfo/Europe/Simferopol
OLD_FILES+=usr/share/zoneinfo/Europe/Skopje
OLD_FILES+=usr/share/zoneinfo/Europe/Sofia
OLD_FILES+=usr/share/zoneinfo/Europe/Stockholm
OLD_FILES+=usr/share/zoneinfo/Europe/Tallinn
OLD_FILES+=usr/share/zoneinfo/Europe/Tirane
OLD_FILES+=usr/share/zoneinfo/Europe/Uzhgorod
OLD_FILES+=usr/share/zoneinfo/Europe/Vaduz
OLD_FILES+=usr/share/zoneinfo/Europe/Vatican
OLD_FILES+=usr/share/zoneinfo/Europe/Vienna
OLD_FILES+=usr/share/zoneinfo/Europe/Vilnius
OLD_FILES+=usr/share/zoneinfo/Europe/Volgograd
OLD_FILES+=usr/share/zoneinfo/Europe/Warsaw
OLD_FILES+=usr/share/zoneinfo/Europe/Zagreb
OLD_FILES+=usr/share/zoneinfo/Europe/Zaporozhye
OLD_FILES+=usr/share/zoneinfo/Europe/Zurich
OLD_FILES+=usr/share/zoneinfo/Factory
OLD_FILES+=usr/share/zoneinfo/HST
OLD_FILES+=usr/share/zoneinfo/Indian/Antananarivo
OLD_FILES+=usr/share/zoneinfo/Indian/Chagos
OLD_FILES+=usr/share/zoneinfo/Indian/Christmas
OLD_FILES+=usr/share/zoneinfo/Indian/Cocos
OLD_FILES+=usr/share/zoneinfo/Indian/Comoro
OLD_FILES+=usr/share/zoneinfo/Indian/Kerguelen
OLD_FILES+=usr/share/zoneinfo/Indian/Mahe
OLD_FILES+=usr/share/zoneinfo/Indian/Maldives
OLD_FILES+=usr/share/zoneinfo/Indian/Mauritius
OLD_FILES+=usr/share/zoneinfo/Indian/Mayotte
OLD_FILES+=usr/share/zoneinfo/Indian/Reunion
OLD_FILES+=usr/share/zoneinfo/MET
OLD_FILES+=usr/share/zoneinfo/MST
OLD_FILES+=usr/share/zoneinfo/MST7MDT
OLD_FILES+=usr/share/zoneinfo/PST8PDT
OLD_FILES+=usr/share/zoneinfo/Pacific/Apia
OLD_FILES+=usr/share/zoneinfo/Pacific/Auckland
OLD_FILES+=usr/share/zoneinfo/Pacific/Bougainville
OLD_FILES+=usr/share/zoneinfo/Pacific/Chatham
OLD_FILES+=usr/share/zoneinfo/Pacific/Chuuk
OLD_FILES+=usr/share/zoneinfo/Pacific/Easter
OLD_FILES+=usr/share/zoneinfo/Pacific/Efate
OLD_FILES+=usr/share/zoneinfo/Pacific/Enderbury
OLD_FILES+=usr/share/zoneinfo/Pacific/Fakaofo
OLD_FILES+=usr/share/zoneinfo/Pacific/Fiji
OLD_FILES+=usr/share/zoneinfo/Pacific/Funafuti
OLD_FILES+=usr/share/zoneinfo/Pacific/Galapagos
OLD_FILES+=usr/share/zoneinfo/Pacific/Gambier
OLD_FILES+=usr/share/zoneinfo/Pacific/Guadalcanal
OLD_FILES+=usr/share/zoneinfo/Pacific/Guam
OLD_FILES+=usr/share/zoneinfo/Pacific/Honolulu
OLD_FILES+=usr/share/zoneinfo/Pacific/Johnston
OLD_FILES+=usr/share/zoneinfo/Pacific/Kiritimati
OLD_FILES+=usr/share/zoneinfo/Pacific/Kosrae
OLD_FILES+=usr/share/zoneinfo/Pacific/Kwajalein
OLD_FILES+=usr/share/zoneinfo/Pacific/Majuro
OLD_FILES+=usr/share/zoneinfo/Pacific/Marquesas
OLD_FILES+=usr/share/zoneinfo/Pacific/Midway
OLD_FILES+=usr/share/zoneinfo/Pacific/Nauru
OLD_FILES+=usr/share/zoneinfo/Pacific/Niue
OLD_FILES+=usr/share/zoneinfo/Pacific/Norfolk
OLD_FILES+=usr/share/zoneinfo/Pacific/Noumea
OLD_FILES+=usr/share/zoneinfo/Pacific/Pago_Pago
OLD_FILES+=usr/share/zoneinfo/Pacific/Palau
OLD_FILES+=usr/share/zoneinfo/Pacific/Pitcairn
OLD_FILES+=usr/share/zoneinfo/Pacific/Pohnpei
OLD_FILES+=usr/share/zoneinfo/Pacific/Port_Moresby
OLD_FILES+=usr/share/zoneinfo/Pacific/Rarotonga
OLD_FILES+=usr/share/zoneinfo/Pacific/Saipan
OLD_FILES+=usr/share/zoneinfo/Pacific/Tahiti
OLD_FILES+=usr/share/zoneinfo/Pacific/Tarawa
OLD_FILES+=usr/share/zoneinfo/Pacific/Tongatapu
OLD_FILES+=usr/share/zoneinfo/Pacific/Wake
OLD_FILES+=usr/share/zoneinfo/Pacific/Wallis
OLD_FILES+=usr/share/zoneinfo/UTC
OLD_FILES+=usr/share/zoneinfo/WET
OLD_FILES+=usr/share/zoneinfo/posixrules
OLD_FILES+=usr/share/zoneinfo/zone.tab
.endif
Index: projects/clang800-import/tools/build/options/WITH_CLANG_EXTRAS
===================================================================
--- projects/clang800-import/tools/build/options/WITH_CLANG_EXTRAS (revision 343955)
+++ projects/clang800-import/tools/build/options/WITH_CLANG_EXTRAS (revision 343956)
@@ -1,2 +1,3 @@
.\" $FreeBSD$
-Set to build additional clang and llvm tools, such as bugpoint.
+Set to build additional clang and llvm tools, such as bugpoint and
+clang-format.
Index: projects/clang800-import/usr.bin/calendar/calendars/calendar.freebsd
===================================================================
--- projects/clang800-import/usr.bin/calendar/calendars/calendar.freebsd (revision 343955)
+++ projects/clang800-import/usr.bin/calendar/calendars/calendar.freebsd (revision 343956)
@@ -1,471 +1,473 @@
/*
* FreeBSD
*
* $FreeBSD$
*/
#ifndef _calendar_freebsd_
#define _calendar_freebsd_
01/01 Dimitry Andric <dim@FreeBSD.org> born in Utrecht, the Netherlands, 1969
01/01 Lev Serebryakov <lev@FreeBSD.org> born in Leningrad, USSR, 1979
01/01 Alexander Langer <alex@FreeBSD.org> born in Duesseldorf, Nordrhein-Westfalen, Germany, 1981
01/01 Zach Leslie <zleslie@FreeBSD.org> born in Grand Junction, Colorado, United States, 1985
01/02 Ion-Mihai "IOnut" Tetcu <itetcu@FreeBSD.org> born in Bucharest, Romania, 1980
01/02 Patrick Li <pat@FreeBSD.org> born in Beijing, People's Republic of China, 1985
01/03 Tetsurou Okazaki <okazaki@FreeBSD.org> born in Mobara, Chiba, Japan, 1972
01/04 Hiroyuki Hanai <hanai@FreeBSD.org> born in Kagawa pre., Japan, 1969
01/06 Philippe Audeoud <jadawin@FreeBSD.org> born in Bretigny-Sur-Orge, France, 1980
01/08 Michael L. Hostbaek <mich@FreeBSD.org> born in Copenhagen, Denmark, 1977
01/10 Jean-Yves Lefort <jylefort@FreeBSD.org> born in Charleroi, Belgium, 1980
01/10 Guangyuan Yang <ygy@FreeBSD.org> born in Yangzhou, Jiangsu, People's Republic of China, 1997
01/12 Yen-Ming Lee <leeym@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1977
01/12 Ying-Chieh Liao <ijliao@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1979
01/12 Kristof Provost <kp@FreeBSD.org> born in Aalst, Belgium, 1983
01/13 Ruslan Bukin <br@FreeBSD.org> born in Dudinka, Russian Federation, 1985
01/14 Yi-Jheng Lin <yzlin@FreeBSD.org> born in Taichung, Taiwan, Republic of China, 1985
01/15 Anne Dickison <anne@FreeBSD.org> born in Madison, Indiana, United States, 1976
01/16 Ariff Abdullah <ariff@FreeBSD.org> born in Kuala Lumpur, Malaysia, 1978
01/16 Dmitry Sivachenko <demon@FreeBSD.org> born in Moscow, USSR, 1978
01/16 Vanilla I. Shu <vanilla@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1978
01/17 Raphael Kubo da Costa <rakuco@FreeBSD.org> born in Sao Paulo, Sao Paulo, Brazil, 1989
01/18 Dejan Lesjak <lesi@FreeBSD.org> born in Ljubljana, Slovenia, Yugoslavia, 1977
01/19 Marshall Kirk McKusick <mckusick@FreeBSD.org> born in Wilmington, Delaware, United States, 1954
01/19 Ruslan Ermilov <ru@FreeBSD.org> born in Simferopol, USSR, 1974
01/19 Marcelo S. Araujo <araujo@FreeBSD.org> born in Joinville, Santa Catarina, Brazil, 1981
01/20 Poul-Henning Kamp <phk@FreeBSD.org> born in Korsoer, Denmark, 1966
01/21 Mahdi Mokhtari <mmokhi@FreeBSD.org> born in Tehran, Iran, 1995
01/22 Johann Visagie <wjv@FreeBSD.org> born in Cape Town, South Africa, 1970
01/23 Hideyuki KURASHINA <rushani@FreeBSD.org> born in Niigata, Japan, 1982
01/24 Fabien Thomas <fabient@FreeBSD.org> born in Avignon, France, 1971
01/24 Matteo Riondato <matteo@FreeBSD.org> born in Padova, Italy, 1986
01/25 Nick Hibma <n_hibma@FreeBSD.org> born in Groningen, the Netherlands, 1972
01/25 Bernd Walter <ticso@FreeBSD.org> born in Moers, Nordrhein-Westfalen, Germany, 1974
01/26 Andrew Gallatin <gallatin@FreeBSD.org> born in Buffalo, New York, United States, 1970
01/27 Nick Sayer <nsayer@FreeBSD.org> born in San Diego, California, United States, 1968
01/27 Jacques Anthony Vidrine <nectar@FreeBSD.org> born in Baton Rouge, Louisiana, United States, 1971
01/27 Alexandre C. Guimaraes <rigoletto@FreeBSD.org> born in Rio de Janeiro, Rio de Janeiro, Brazil, 1982
01/27 Ngie Cooper <ngie@FreeBSD.org> born in Seattle, Washington, United States, 1984
01/31 Hidetoshi Shimokawa <simokawa@FreeBSD.org> born in Yokohama, Kanagawa, Japan, 1970
02/01 Doug Rabson <dfr@FreeBSD.org> born in London, England, 1966
02/01 Nicola Vitale <nivit@FreeBSD.org> born in Busto Arsizio, Varese, Italy, 1976
02/01 Paul Saab <ps@FreeBSD.org> born in Champaign-Urbana, Illinois, United States, 1978
02/01 Martin Wilke <miwi@FreeBSD.org> born in Ludwigsfelde, Brandenburg, Germany, 1980
02/01 Christian Brueffer <brueffer@FreeBSD.org> born in Gronau, Nordrhein-Westfalen, Germany, 1982
02/01 Steven Kreuzer <skreuzer@FreeBSD.org> born in Oceanside, New York, United States, 1982
02/01 Juli Mallett <jmallett@FreeBSD.org> born in Washington, Pennsylvania, United States, 1985
02/02 Diomidis D. Spinellis <dds@FreeBSD.org> born in Athens, Greece, 1967
02/02 Michael W Lucas <mwlucas@FreeBSD.org> born in Detroit, Michigan, United States, 1967
02/02 Dmitry Chagin <dchagin@FreeBSD.org> born in Stalingrad, USSR, 1976
02/02 Yoichi Nakayama <yoichi@FreeBSD.org> born in Tsu, Mie, Japan, 1976
02/02 Yoshihiro Takahashi <nyan@FreeBSD.org> born in Yokohama, Kanagawa, Japan, 1976
02/03 Jason Helfman <jgh@FreeBSD.org> born in Royal Oak, Michigan, United States, 1972
02/04 Eitan Adler <eadler@FreeBSD.org> born in West Hempstead, New York, United States, 1991
02/05 Frank Laszlo <laszlof@FreeBSD.org> born in Howell, Michigan, United States, 1983
02/06 Julien Charbon <jch@FreeBSD.org> born in Saint Etienne, Loire, France, 1978
02/07 Bjoern Heidotting <bhd@FreeBSD.org> born in Uelsen, Germany, 1980
02/10 David Greenman <dg@FreeBSD.org> born in Portland, Oregon, United States, 1968
02/10 Paul Richards <paul@FreeBSD.org> born in Ammanford, Carmarthenshire, United Kingdom, 1968
02/10 Simon Barner <barner@FreeBSD.org> born in Rosenheim, Bayern, Germany, 1980
02/10 Jason E. Hale <jhale@FreeBSD.org> born in Pittsburgh, Pennsylvania, United States, 1982
02/13 Jesper Skriver <jesper@FreeBSD.org> born in Aarhus, Denmark, 1975
02/13 Steve Wills <swills@FreeBSD.org> born in Lynchburg, Virginia, United States, 1975
02/13 Andrey Slusar <anray@FreeBSD.org> born in Odessa, USSR, 1979
02/13 David W. Chapman Jr. <dwcjr@FreeBSD.org> born in Bethel, Connecticut, United States, 1981
02/14 Manolis Kiagias <manolis@FreeBSD.org> born in Chania, Greece, 1970
02/14 Erwin Lansing <erwin@FreeBSD.org> born in 's-Hertogenbosch, the Netherlands, 1975
02/14 Martin Blapp <mbr@FreeBSD.org> born in Olten, Switzerland, 1976
02/15 Hiren Panchasara <hiren@FreeBSD.org> born in Ahmedabad, Gujarat, India, 1984
02/16 Justin Hibbits <jhibbits@FreeBSD.org> born in Toledo, Ohio, United States, 1983
02/16 Tobias Christian Berner <tcberner@FreeBSD.org> born in Bern, Switzerland, 1985
02/18 Christoph Moench-Tegeder <cmt@FreeBSD.org> born in Hannover, Niedersachsen, Germany, 1980
02/19 Murray Stokely <murray@FreeBSD.org> born in Jacksonville, Florida, United States, 1979
02/20 Anders Nordby <anders@FreeBSD.org> born in Oslo, Norway, 1976
02/21 Alexey Zelkin <phantom@FreeBSD.org> born in Simferopol, Ukraine, 1978
02/22 Brooks Davis <brooks@FreeBSD.org> born in Longview, Washington, United States, 1976
02/22 Jake Burkholder <jake@FreeBSD.org> born in Maynooth, Ontario, Canada, 1979
02/23 Peter Wemm <peter@FreeBSD.org> born in Perth, Western Australia, Australia, 1971
02/23 Mathieu Arnold <mat@FreeBSD.org> born in Champigny sur Marne, Val de Marne, France, 1978
02/23 Vinícius Zavam <egypcio@FreeBSD.org> born in Fortaleza, Ceará, Brazil, 1986
02/24 Johan Karlsson <johan@FreeBSD.org> born in Mariannelund, Sweden, 1974
02/24 Colin Percival <cperciva@FreeBSD.org> born in Burnaby, Canada, 1981
02/24 Kevin Bowling <kbowling@FreeBSD.org> born in Scottsdale, Arizona, United States, 1989
02/26 Pietro Cerutti <gahr@FreeBSD.org> born in Faido, Switzerland, 1984
02/28 Daichi GOTO <daichi@FreeBSD.org> born in Shimizu Suntou, Shizuoka, Japan, 1980
02/28 Ruslan Makhmatkhanov <rm@FreeBSD.org> born in Rostov-on-Don, USSR, 1984
03/01 Hye-Shik Chang <perky@FreeBSD.org> born in Daejeon, Republic of Korea, 1980
03/02 Cy Schubert <cy@FreeBSD.org> born in Edmonton, Alberta, Canada, 1956
03/03 Sergey Matveychuk <sem@FreeBSD.org> born in Moscow, Russian Federation, 1973
03/03 Doug White <dwhite@FreeBSD.org> born in Eugene, Oregon, United States, 1977
03/03 Gordon Tetlow <gordon@FreeBSD.org> born in Reno, Nevada, United States, 1978
03/04 Oleksandr Tymoshenko <gonzo@FreeBSD.org> born in Chernihiv, Ukraine, 1980
03/05 Baptiste Daroussin <bapt@FreeBSD.org> born in Beauvais, France, 1980
03/05 Philip Paeps <philip@FreeBSD.org> born in Leuven, Belgium, 1983
03/05 Ulf Lilleengen <lulf@FreeBSD.org> born in Hamar, Norway, 1985
03/06 Christopher Piazza <cpiazza@FreeBSD.org> born in Kamloops, British Columbia, Canada, 1981
03/07 Michael P. Pritchard <mpp@FreeBSD.org> born in Los Angeles, California, United States, 1964
03/07 Giorgos Keramidas <keramida@FreeBSD.org> born in Athens, Greece, 1976
03/10 Andreas Klemm <andreas@FreeBSD.org> born in Duesseldorf, Nordrhein-Westfalen, Germany, 1963
03/10 Luiz Otavio O Souza <loos@FreeBSD.org> born in Bauru, Sao Paulo, Brazil, 1978
03/10 Nikolai Lifanov <lifanov@FreeBSD.org> born in Moscow, Russian Federation, 1989
03/11 Soeren Straarup <xride@FreeBSD.org> born in Andst, Denmark, 1978
03/12 Greg Lewis <glewis@FreeBSD.org> born in Adelaide, South Australia, Australia, 1969
03/13 Alexander Leidinger <netchild@FreeBSD.org> born in Neunkirchen, Saarland, Germany, 1976
03/13 Will Andrews <will@FreeBSD.org> born in Pontiac, Michigan, United States, 1982
03/14 Bernhard Froehlich <decke@FreeBSD.org> born in Graz, Styria, Austria, 1985
03/14 Eric Turgeon <ericbsd@FreeBSD.org> born in Edmundston, New Brunswick, Canada, 1982
03/15 Paolo Pisati <piso@FreeBSD.org> born in Lodi, Italy, 1977
03/15 Brian Fundakowski Feldman <green@FreeBSD.org> born in Alexandria, Virginia, United States, 1983
03/17 Michael Smith <msmith@FreeBSD.org> born in Bankstown, New South Wales, Australia, 1971
03/17 Alexander Motin <mav@FreeBSD.org> born in Simferopol, Ukraine, 1979
03/18 Koop Mast <kwm@FreeBSD.org> born in Dokkum, the Netherlands, 1981
03/19 Mikhail Teterin <mi@FreeBSD.org> born in Kyiv, Ukraine, 1972
03/20 Joseph S. Atkinson <jsa@FreeBSD.org> born in Batesville, Arkansas, United States, 1977
03/20 Henrik Brix Andersen <brix@FreeBSD.org> born in Aarhus, Denmark, 1978
03/20 MANTANI Nobutaka <nobutaka@FreeBSD.org> born in Hiroshima, Japan, 1978
03/20 Cameron Grant <cg@FreeBSD.org> died in Hemel Hempstead, United Kingdom, 2005
03/22 Brad Davis <brd@FreeBSD.org> born in Farmington, New Mexico, United States, 1983
03/23 Daniel C. Sobral <dcs@FreeBSD.org> born in Brasilia, Distrito Federal, Brazil, 1971
03/23 Benno Rice <benno@FreeBSD.org> born in Adelaide, South Australia, Australia, 1977
03/24 Marcel Moolenaar <marcel@FreeBSD.org> born in Hilversum, the Netherlands, 1968
03/24 Emanuel Haupt <ehaupt@FreeBSD.org> born in Zurich, Switzerland, 1979
03/25 Andrew R. Reiter <arr@FreeBSD.org> born in Springfield, Massachusetts, United States, 1980
03/26 Jonathan Anderson <jonathan@FreeBSD.org> born in Ottawa, Ontario, Canada, 1983
03/27 Josef El-Rayes <josef@FreeBSD.org> born in Linz, Austria, 1982
03/28 Sean C. Farley <scf@FreeBSD.org> born in Indianapolis, Indiana, United States, 1970
03/29 Dave Cottlehuber <dch@FreeBSD.org> born in Christchurch, New Zealand, 1973
03/29 Thierry Thomas <thierry@FreeBSD.org> born in Luxeuil les Bains, France, 1961
03/30 Po-Chuan Hsieh <sunpoet@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1978
03/31 First quarter status reports are due on 04/15
04/01 Matthew Jacob <mjacob@FreeBSD.org> born in San Francisco, California, United States, 1958
04/01 Alexander V. Chernikov <melifaro@FreeBSD.org> born in Moscow, Russian Federation, 1984
04/01 Bill Fenner <fenner@FreeBSD.org> born in Bellefonte, Pennsylvania, United States, 1971
04/01 Peter Edwards <peadar@FreeBSD.org> born in Dublin, Ireland, 1973
04/03 Hellmuth Michaelis <hm@FreeBSD.org> born in Kiel, Schleswig-Holstein, Germany, 1958
04/03 Tong Liu <nemoliu@FreeBSD.org> born in Beijing, People's Republic of China, 1981
04/03 Gabor Pali <pgj@FreeBSD.org> born in Kunhegyes, Hungary, 1982
04/04 Jason Unovitch <junovitch@FreeBSD.org> born in Scranton, Pennsylvania, United States, 1986
04/05 Stacey Son <sson@FreeBSD.org> born in Burley, Idaho, United States, 1967
04/06 Peter Jeremy <peterj@FreeBSD.org> born in Sydney, New South Wales, Australia, 1961
04/07 Edward Tomasz Napierala <trasz@FreeBSD.org> born in Wolsztyn, Poland, 1981
04/08 Jordan K. Hubbard <jkh@FreeBSD.org> born in Honolulu, Hawaii, United States, 1963
04/09 Ceri Davies <ceri@FreeBSD.org> born in Haverfordwest, Pembrokeshire, United Kingdom, 1976
04/11 Bruce A. Mah <bmah@FreeBSD.org> born in Fresno, California, United States, 1969
04/12 Patrick Gardella <patrick@FreeBSD.org> born in Columbus, Ohio, United States, 1967
04/12 Ed Schouten <ed@FreeBSD.org> born in Oss, the Netherlands, 1986
04/12 Ruey-Cherng Yu <rcyu@FreeBSD.org> born in Keelung, Taiwan, 1978
04/13 Oliver Braun <obraun@FreeBSD.org> born in Nuremberg, Bavaria, Germany, 1972
04/14 Crist J. Clark <cjc@FreeBSD.org> born in Milwaukee, Wisconsin, United States, 1970
04/14 Glen J. Barber <gjb@FreeBSD.org> born in Wilkes-Barre, Pennsylvania, United States, 1981
04/15 David Malone <dwmalone@FreeBSD.org> born in Dublin, Ireland, 1973
04/17 Alexey Degtyarev <alexey@FreeBSD.org> born in Ahtubinsk, Russian Federation, 1984
04/17 Dryice Liu <dryice@FreeBSD.org> born in Jinan, Shandong, China, 1975
04/22 Joerg Wunsch <joerg@FreeBSD.org> born in Dresden, Sachsen, Germany, 1962
04/22 Jun Kuriyama <kuriyama@FreeBSD.org> born in Matsue, Shimane, Japan, 1973
04/22 Jakub Klama <jceel@FreeBSD.org> born in Blachownia, Silesia, Poland, 1989
04/25 Richard Gallamore <ultima@FreeBSD.org> born in Kissimmee, Florida, United States, 1987
04/26 Rene Ladan <rene@FreeBSD.org> born in Geldrop, the Netherlands, 1980
04/28 Oleg Bulyzhin <oleg@FreeBSD.org> born in Kharkov, USSR, 1976
04/28 Andriy Voskoboinyk <avos@FreeBSD.org> born in Bila Tserkva, Ukraine, 1992
04/29 Adam Weinberger <adamw@FreeBSD.org> born in Berkeley, California, United States, 1980
04/29 Eric Anholt <anholt@FreeBSD.org> born in Portland, Oregon, United States, 1983
05/01 Randall Stewart <rrs@FreeBSD.org> born in Spokane, Washington, United States, 1959
+05/02 Kai Knoblich <kai@FreeBSD.org> born in Hannover, Niedersachsen, Germany, 1982
05/02 Danilo G. Baio <dbaio@FreeBSD.org> born in Maringa, Parana, Brazil, 1986
05/02 Wojciech A. Koszek <wkoszek@FreeBSD.org> born in Czestochowa, Poland, 1987
05/03 Brian Dean <bsd@FreeBSD.org> born in Elkins, West Virginia, United States, 1966
05/03 Patrick Kelsey <pkelsey@FreeBSD.org> born in Freehold, New Jersey, United States, 1976
05/03 Robert Nicholas Maxwell Watson <rwatson@FreeBSD.org> born in Harrow, Middlesex, United Kingdom, 1977
05/04 Denis Peplin <den@FreeBSD.org> born in Nizhniy Novgorod, Russian Federation, 1977
05/08 Kirill Ponomarew <krion@FreeBSD.org> born in Volgograd, Russian Federation, 1977
05/08 Sean Kelly <smkelly@FreeBSD.org> born in Walnut Creek, California, United States, 1982
05/09 Daniel Eischen <deischen@FreeBSD.org> born in Syracuse, New York, United States, 1963
05/09 Aaron Dalton <aaron@FreeBSD.org> born in Boise, Idaho, United States, 1973
05/09 Jase Thew <jase@FreeBSD.org> born in Abergavenny, Gwent, United Kingdom, 1974
05/09 Leandro Lupori <luporl@FreeBSD.org> born in Sao Paulo, Sao Paulo, Brazil, 1983
05/10 Markus Brueffer <markus@FreeBSD.org> born in Gronau, Nordrhein-Westfalen, Germany, 1977
05/11 Kurt Lidl <lidl@FreeBSD.org> born in Rockville, Maryland, United States, 1968
05/11 Jesus Rodriguez <jesusr@FreeBSD.org> born in Barcelona, Spain, 1972
05/11 Marcin Wojtas <mw@FreeBSD.org> born in Krakow, Poland, 1986
05/11 Roman Kurakin <rik@FreeBSD.org> born in Moscow, USSR, 1979
05/11 Ulrich Spoerlein <uqs@FreeBSD.org> born in Schesslitz, Bayern, Germany, 1981
05/13 Pete Fritchman <petef@FreeBSD.org> born in Lansdale, Pennsylvania, United States, 1983
05/13 Ben Widawsky <bwidawsk@FreeBSD.org> born in New York City, New York, United States, 1982
05/14 Tatsumi Hosokawa <hosokawa@FreeBSD.org> born in Tokyo, Japan, 1968
05/14 Shigeyuku Fukushima <shige@FreeBSD.org> born in Osaka, Japan, 1974
05/14 Rebecca Cran <bcran@FreeBSD.org> born in Cambridge, United Kingdom, 1981
05/15 Hans Petter Selasky <hselasky@FreeBSD.org> born in Flekkefjord, Norway, 1982
05/16 Johann Kois <jkois@FreeBSD.org> born in Wolfsberg, Austria, 1975
05/16 Marcus Alves Grando <mnag@FreeBSD.org> born in Florianopolis, Santa Catarina, Brazil, 1979
05/17 Thomas Abthorpe <tabthorpe@FreeBSD.org> born in Port Arthur, Ontario, Canada, 1968
05/19 Philippe Charnier <charnier@FreeBSD.org> born in Fontainebleau, France, 1966
05/19 Ian Dowse <iedowse@FreeBSD.org> born in Dublin, Ireland, 1975
05/19 Sofian Brabez <sbz@FreeBSD.org> born in Toulouse, France, 1984
05/20 Dan Moschuk <dan@FreeBSD.org> died in Burlington, Ontario, Canada, 2010
05/21 Kris Kennaway <kris@FreeBSD.org> born in Winnipeg, Manitoba, Canada, 1978
05/22 James Gritton <jamie@FreeBSD.org> born in San Francisco, California, United States, 1967
05/22 Clive Tong-I Lin <clive@FreeBSD.org> born in Changhua, Taiwan, Republic of China, 1978
05/22 Michael Bushkov <bushman@FreeBSD.org> born in Rostov-on-Don, Russian Federation, 1985
05/22 Rui Paulo <rpaulo@FreeBSD.org> born in Evora, Portugal, 1986
05/22 David Naylor <dbn@FreeBSD.org> born in Johannesburg, South Africa, 1988
05/23 Munechika Sumikawa <sumikawa@FreeBSD.org> born in Osaka, Osaka, Japan, 1972
05/24 Duncan McLennan Barclay <dmlb@FreeBSD.org> born in London, Middlesex, United Kingdom, 1970
05/24 Oliver Lehmann <oliver@FreeBSD.org> born in Karlsburg, Germany, 1981
05/25 Pawel Pekala <pawel@FreeBSD.org> born in Swidnica, Poland, 1980
05/25 Tom Rhodes <trhodes@FreeBSD.org> born in Ellwood City, Pennsylvania, United States, 1981
05/25 Roman Divacky <rdivacky@FreeBSD.org> born in Brno, Czech Republic, 1983
05/26 Jim Pirzyk <pirzyk@FreeBSD.org> born in Chicago, Illinois, United States, 1968
05/26 Florian Smeets <flo@FreeBSD.org> born in Schwerte, Nordrhein-Westfalen, Germany, 1982
05/27 Ollivier Robert <roberto@FreeBSD.org> born in Paris, France, 1967
05/29 Wilko Bulte <wilko@FreeBSD.org> born in Arnhem, the Netherlands, 1965
05/29 Seigo Tanimura <tanimura@FreeBSD.org> born in Kitakyushu, Fukuoka, Japan, 1976
05/30 Wen Heping <wen@FreeBSD.org> born in Xiangxiang, Hunan, China, 1970
05/31 Ville Skytta <scop@FreeBSD.org> born in Helsinki, Finland, 1974
06/02 Jean-Marc Zucconi <jmz@FreeBSD.org> born in Pontarlier, France, 1954
06/02 Alexander Botero-Lowry <alexbl@FreeBSD.org> born in Austin, Texas, United States, 1986
06/03 CHOI Junho <cjh@FreeBSD.org> born in Seoul, Korea, 1974
06/03 Wesley Shields <wxs@FreeBSD.org> born in Binghamton, New York, United States, 1981
06/04 Julian Elischer <julian@FreeBSD.org> born in Perth, Australia, 1959
06/04 Justin Gibbs <gibbs@FreeBSD.org> born in San Pedro, California, United States, 1973
06/04 Jason Evans <jasone@FreeBSD.org> born in Greeley, Colorado, United States, 1973
06/04 Thomas Moestl <tmm@FreeBSD.org> born in Braunschweig, Niedersachsen, Germany, 1980
06/04 Devin Teske <dteske@FreeBSD.org> born in Arcadia, California, United States, 1982
06/04 Zack Kirsch <zack@FreeBSD.org> born in Memphis, Tennessee, United States, 1982
06/04 Johannes Jost Meixner <xmj@FreeBSD.org> born in Wiesbaden, Germany, 1987
+06/05 Johannes Lundberg <johalun@FreeBSD.org> born in Ornskoldsvik, Sweden, 1975
06/06 Sergei Kolobov <sergei@FreeBSD.org> born in Karpinsk, Russian Federation, 1972
06/06 Ryan Libby <rlibby@FreeBSD.org> born in Kirkland, Washington, United States, 1985
06/06 Alan Eldridge <alane@FreeBSD.org> died in Denver, Colorado, United States, 2003
06/07 Jimmy Olgeni <olgeni@FreeBSD.org> born in Milano, Italy, 1976
06/07 Benjamin Close <benjsc@FreeBSD.org> born in Adelaide, Australia, 1978
06/07 Roger Pau Monne <royger@FreeBSD.org> born in Reus, Catalunya, Spain, 1986
06/08 Ravi Pokala <rpokala@FreeBSD.org> born in Royal Oak, Michigan, United States, 1980
06/09 Stanislav Galabov <sgalabov@FreeBSD.org> born in Sofia, Bulgaria, 1978
06/11 Alonso Cardenas Marquez <acm@FreeBSD.org> born in Arequipa, Peru, 1979
06/14 Josh Paetzel <jpaetzel@FreeBSD.org> born in Minneapolis, Minnesota, United States, 1973
06/17 Tilman Linneweh <arved@FreeBSD.org> born in Weinheim, Baden-Wuerttemberg, Germany, 1978
06/18 Li-Wen Hsu <lwhsu@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1984
06/18 Roman Bogorodskiy <novel@FreeBSD.org> born in Saratov, Russian Federation, 1986
06/19 Charlie Root <root@FreeBSD.org> born in Portland, Oregon, United States, 1993
06/21 Ganbold Tsagaankhuu <ganbold@FreeBSD.org> born in Ulaanbaatar, Mongolia, 1971
06/21 Niels Heinen <niels@FreeBSD.org> born in Markelo, the Netherlands, 1978
06/22 Andreas Tobler <andreast@FreeBSD.org> born in Heiden, Switzerland, 1968
06/24 Chris Faulhaber <jedgar@FreeBSD.org> born in Springfield, Illinois, United States, 1971
06/26 Brian Somers <brian@FreeBSD.org> born in Dundrum, Dublin, Ireland, 1967
06/28 Mark Santcroos <marks@FreeBSD.org> born in Rotterdam, the Netherlands, 1979
06/28 Xin Li <delphij@FreeBSD.org> born in Beijing, People's Republic of China, 1982
06/28 Bradley T. Hughes <bhughes@FreeBSD.org> born in Amarillo, Texas, United States, 1977
06/29 Wilfredo Sanchez Vega <wsanchez@FreeBSD.org> born in Majaguez, Puerto Rico, United States, 1972
06/29 Daniel Harris <dannyboy@FreeBSD.org> born in Lubbock, Texas, United States, 1985
06/29 Andrew Pantyukhin <sat@FreeBSD.org> born in Moscow, Russian Federation, 1985
06/30 Guido van Rooij <guido@FreeBSD.org> born in Best, Noord-Brabant, the Netherlands, 1965
06/30 Second quarter status reports are due on 07/15
07/01 Matthew Dillon <dillon@apollo.backplane.net> born in San Francisco, California, United States, 1966
07/01 Mateusz Guzik <mjg@FreeBSD.org> born in Dołki Górne, Poland, 1986
07/02 Mark Christopher Ovens <marko@FreeBSD.org> born in Preston, Lancashire, United Kingdom, 1958
07/02 Vasil Venelinov Dimov <vd@FreeBSD.org> born in Shumen, Bulgaria, 1982
07/04 Motoyuki Konno <motoyuki@FreeBSD.org> born in Musashino, Tokyo, Japan, 1969
07/04 Florent Thoumie <flz@FreeBSD.org> born in Montmorency, Val d'Oise, France, 1982
07/05 Olivier Cochard-Labbe <olivier@FreeBSD.org> born in Brest, France, 1977
07/05 Sergey Kandaurov <pluknet@FreeBSD.org> born in Gubkin, Russian Federation, 1985
07/07 Andrew Thompson <thompsa@FreeBSD.org> born in Lower Hutt, Wellington, New Zealand, 1979
07/07 Maxime Henrion <mux@FreeBSD.org> born in Metz, France, 1981
07/07 George Reid <greid@FreeBSD.org> born in Frimley, Hampshire, United Kingdom, 1983
07/10 Jung-uk Kim <jkim@FreeBSD.org> born in Seoul, Korea, 1971
07/10 Justin Seger <jseger@FreeBSD.org> born in Harvard, Massachusetts, United States, 1981
07/10 David Schultz <das@FreeBSD.org> born in Oakland, California, United States, 1982
07/10 Ben Woods <woodsb02@FreeBSD.org> born in Perth, Western Australia, Australia, 1984
07/11 Jesus R. Camou <jcamou@FreeBSD.org> born in Hermosillo, Sonora, Mexico, 1983
07/14 Fernando Apesteguia <fernape@FreeBSD.org> born in Madrid, Spain, 1981
07/15 Gary Jennejohn <gj@FreeBSD.org> born in Rochester, New York, United States, 1950
07/16 Suleiman Souhlal <ssouhlal@FreeBSD.org> born in Roma, Italy, 1983
07/16 Davide Italiano <davide@FreeBSD.org> born in Milazzo, Italy, 1989
07/17 Michael Chin-Yuan Wu <keichii@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1980
07/19 Masafumi NAKANE <max@FreeBSD.org> born in Okazaki, Aichi, Japan, 1972
07/19 Simon L. Nielsen <simon@FreeBSD.org> born in Copenhagen, Denmark, 1980
07/19 Gleb Smirnoff <glebius@FreeBSD.org> born in Kharkov, USSR, 1981
07/20 Dru Lavigne <dru@FreeBSD.org> born in Kingston, Ontario, Canada, 1965
07/20 Andrey V. Elsukov <ae@FreeBSD.org> born in Kotelnich, Russian Federation, 1981
07/22 James Housley <jeh@FreeBSD.org> born in Chicago, Illinois, United States, 1965
07/22 Jens Schweikhardt <schweikh@FreeBSD.org> born in Waiblingen, Baden-Wuerttemberg, Germany, 1967
07/22 Lukas Ertl <le@FreeBSD.org> born in Weissenbach/Enns, Steiermark, Austria, 1976
07/23 Sergey A. Osokin <osa@FreeBSD.org> born in Krasnogorsky, Stepnogorsk, Akmolinskaya region, Kazakhstan, 1972
07/23 Andrey Zonov <zont@FreeBSD.org> born in Kirov, Russian Federation, 1985
07/24 Alexander Nedotsukov <bland@FreeBSD.org> born in Ulyanovsk, Russian Federation, 1974
07/24 Alberto Villa <avilla@FreeBSD.org> born in Vercelli, Italy, 1987
07/27 Andriy Gapon <avg@FreeBSD.org> born in Kyrykivka, Sumy region, Ukraine, 1976
07/28 Jim Mock <jim@FreeBSD.org> born in Bethlehem, Pennsylvania, United States, 1974
07/28 Tom Hukins <tom@FreeBSD.org> born in Manchester, United Kingdom, 1976
07/29 Dirk Meyer <dinoex@FreeBSD.org> born in Kassel, Hessen, Germany, 1965
07/29 Felippe M. Motta <lippe@FreeBSD.org> born in Maceio, Alagoas, Brazil, 1988
08/02 Gabor Kovesdan <gabor@FreeBSD.org> born in Budapest, Hungary, 1987
08/03 Peter Holm <pho@FreeBSD.org> born in Copenhagen, Denmark, 1955
08/05 Alfred Perlstein <alfred@FreeBSD.org> born in Brooklyn, New York, United States, 1978
08/06 Anton Berezin <tobez@FreeBSD.org> born in Dnepropetrovsk, Ukraine, 1970
08/06 John-Mark Gurney <jmg@FreeBSD.org> born in Detroit, Michigan, United States, 1978
08/06 Damjan Marion <dmarion@FreeBSD.org> born in Rijeka, Croatia, 1978
08/07 Jonathan Mini <mini@FreeBSD.org> born in San Mateo, California, United States, 1979
08/08 Mikolaj Golub <trociny@FreeBSD.org> born in Kharkov, USSR, 1977
08/08 Juergen Lock <nox@FreeBSD.org> died in Bremen, Germany, 2015
08/09 Stefan Farfeleder <stefanf@FreeBSD.org> died in Wien, Austria, 2015
08/10 Julio Merino <jmmv@FreeBSD.org> born in Barcelona, Spain, 1984
08/10 Peter Pentchev <roam@FreeBSD.org> born in Sofia, Bulgaria, 1977
08/12 Joe Marcus Clarke <marcus@FreeBSD.org> born in Lakeland, Florida, United States, 1976
08/12 Max Brazhnikov <makc@FreeBSD.org> born in Leningradskaya, Russian Federation, 1979
08/14 Stefan Esser <se@FreeBSD.org> born in Cologne, Nordrhein-Westfalen, Germany, 1961
08/16 Andrey Chernov <ache@FreeBSD.org> died in Moscow, Russian Federation, 2017
08/17 Olivier Houchard <cognet@FreeBSD.org> born in Nancy, France, 1980
08/19 Chin-San Huang <chinsan@FreeBSD.org> born in Yi-Lan, Taiwan, Republic of China, 1979
08/19 Pav Lucistnik <pav@FreeBSD.org> born in Kutna Hora, Czech Republic, 1980
08/20 Michael Heffner <mikeh@FreeBSD.org> born in Cleona, Pennsylvania, United States, 1981
08/21 Jason A. Harmening <jah@FreeBSD.org> born in Fort Wayne, Indiana, United States, 1981
08/22 Ilya Bakulin <kibab@FreeBSD.org> born in Tbilisi, USSR, 1986
08/24 Mark Linimon <linimon@FreeBSD.org> born in Houston, Texas, United States, 1955
08/24 Alexander Botero-Lowry <alexbl@FreeBSD.org> died in San Francisco, California, United States, 2012
08/25 Beech Rintoul <beech@FreeBSD.org> born in Oakland, California, United States, 1952
08/25 Jean Milanez Melo <jmelo@FreeBSD.org> born in Divinopolis, Minas Gerais, Brazil, 1982
08/26 Scott Long <scottl@FreeBSD.org> born in Chicago, Illinois, United States, 1974
08/26 Dima Ruban <dima@FreeBSD.org> born in Nalchik, USSR, 1970
08/26 Marc Fonvieille <blackend@FreeBSD.org> born in Avignon, France, 1972
08/26 Herve Quiroz <hq@FreeBSD.org> born in Aix-en-Provence, France, 1977
08/27 Andrey Chernov <ache@FreeBSD.org> born in Moscow, USSR, 1966
08/27 Tony Finch <fanf@FreeBSD.org> born in London, United Kingdom, 1974
08/27 Michael Johnson <ahze@FreeBSD.org> born in Morganton, North Carolina, United States, 1980
08/28 Norikatsu Shigemura <nork@FreeBSD.org> born in Fujisawa, Kanagawa, Japan, 1974
08/29 Thomas Gellekum <tg@FreeBSD.org> born in Moenchengladbach, Nordrhein-Westfalen, Germany, 1967
08/29 Max Laier <mlaier@FreeBSD.org> born in Karlsruhe, Germany, 1981
08/30 Yuri Pankov <yuripv@FreeBSD.org> born in Krasnodar, USSR, 1979
09/01 Pyun YongHyeon <yongari@FreeBSD.org> born in Kimcheon, Korea, 1968
09/01 William Grzybowski <wg@FreeBSD.org> born in Parana, Brazil, 1988
09/03 Max Khon <fjoe@FreeBSD.org> born in Novosibirsk, USSR, 1976
09/03 Allan Jude <allanjude@FreeBSD.org> born in Hamilton, Ontario, Canada, 1984
09/03 Cheng-Lung Sung <clsung@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1977
09/05 Mark Robert Vaughan Murray <markm@FreeBSD.org> born in Harare, Mashonaland, Zimbabwe, 1961
09/05 Adrian Harold Chadd <adrian@FreeBSD.org> born in Perth, Western Australia, Australia, 1979
09/05 Rodrigo Osorio <rodrigo@FreeBSD.org> born in Montevideo, Uruguay, 1975
09/06 Eric Joyner <erj@FreeBSD.org> born in Fairfax, Virginia, United States, 1991
09/07 Tim Bishop <tdb@FreeBSD.org> born in Cornwall, United Kingdom, 1978
09/07 Chris Rees <crees@FreeBSD.org> born in Kettering, United Kingdom, 1987
09/08 Boris Samorodov <bsam@FreeBSD.org> born in Krasnodar, Russian Federation, 1963
09/09 Yoshio Mita <mita@FreeBSD.org> born in Hiroshima, Japan, 1972
09/09 Steven Hartland <smh@FreeBSD.org> born in Wordsley, United Kingdom, 1973
09/10 Wesley R. Peters <wes@FreeBSD.org> born in Hartford, Alabama, United States, 1961
09/12 Weongyo Jeong <weongyo@FreeBSD.org> born in Haman, Korea, 1980
09/12 Benedict Christopher Reuschling <bcr@FreeBSD.org> born in Darmstadt, Germany, 1981
09/12 William C. Fumerola II <billf@FreeBSD.org> born in Detroit, Michigan, United States, 1981
09/14 Matthew Seaman <matthew@FreeBSD.org> born in Bristol, United Kingdom, 1965
09/15 Aleksandr Rybalko <ray@FreeBSD.org> born in Odessa, Ukraine, 1977
09/15 Dima Panov <fluffy@FreeBSD.org> born in Khabarovsk, Russian Federation, 1978
09/16 Maksim Yevmenkin <emax@FreeBSD.org> born in Taganrog, USSR, 1974
09/17 Maxim Bolotin <mb@FreeBSD.org> born in Rostov-on-Don, Russian Federation, 1976
09/18 Matthew Fleming <mdf@FreeBSD.org> born in Cleveland, Ohio, United States, 1975
09/20 Kevin Lo <kevlo@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1972
09/21 Alex Kozlov <ak@FreeBSD.org> born in Bila Tserkva, Ukraine, 1970
09/21 Gleb Kurtsou <gleb@FreeBSD.org> born in Minsk, Belarus, 1984
09/22 Alan Somers <asomers@FreeBSD.org> born in San Antonio, Texas, United States, 1982
09/22 Bryan Drewery <bdrewery@FreeBSD.org> born in San Diego, California, United States, 1984
09/23 Michael Dexter <dexter@FreeBSD.org> born in Los Angeles, California, 1972
09/23 Martin Matuska <mm@FreeBSD.org> born in Bratislava, Slovakia, 1979
09/24 Larry Rosenman <ler@FreeBSD.org> born in Queens, New York, United States, 1957
09/27 Kyle Evans <kevans@FreeBSD.org> born in Oklahoma City, Oklahoma, United States, 1991
09/27 Neil Blakey-Milner <nbm@FreeBSD.org> born in Port Elizabeth, South Africa, 1978
09/27 Renato Botelho <garga@FreeBSD.org> born in Araras, Sao Paulo, Brazil, 1979
09/28 Greg Lehey <grog@FreeBSD.org> born in Melbourne, Victoria, Australia, 1948
09/28 Alex Dupre <ale@FreeBSD.org> born in Milano, Italy, 1980
09/29 Matthew Hunt <mph@FreeBSD.org> born in Johnstown, Pennsylvania, United States, 1976
09/30 Mark Felder <feld@FreeBSD.org> born in Prairie du Chien, Wisconsin, United States, 1985
09/30 Hiten M. Pandya <hmp@FreeBSD.org> born in Dar-es-Salaam, Tanzania, East Africa, 1986
09/30 Third quarter status reports are due on 10/15
10/02 Beat Gaetzi <beat@FreeBSD.org> born in Zurich, Switzerland, 1980
10/02 Grzegorz Blach <gblach@FreeBSD.org> born in Starachowice, Poland, 1985
10/05 Hiroki Sato <hrs@FreeBSD.org> born in Yamagata, Japan, 1977
10/05 Chris Costello <chris@FreeBSD.org> born in Houston, Texas, United States, 1985
10/09 Stefan Walter <stefan@FreeBSD.org> born in Werne, Nordrhein-Westfalen, Germany, 1978
10/11 Rick Macklem <rmacklem@FreeBSD.org> born in Ontario, Canada, 1955
10/12 Pawel Jakub Dawidek <pjd@FreeBSD.org> born in Radzyn Podlaski, Poland, 1980
10/15 Maxim Konovalov <maxim@FreeBSD.org> born in Khabarovsk, USSR, 1973
10/15 Eugene Grosbein <eugen@FreeBSD.org> born in Novokuznetsk, Russian Republic, USSR, 1976
10/16 Remko Lodder <remko@FreeBSD.org> born in Rotterdam, the Netherlands, 1983
10/17 Maho NAKATA <maho@FreeBSD.org> born in Osaka, Japan, 1974
10/18 Sheldon Hearn <sheldonh@FreeBSD.org> born in Cape Town, Western Cape, South Africa, 1974
10/18 Vladimir Kondratyev <wulf@FreeBSD.org> born in Ryazan, USSR, 1975
10/19 Nicholas Souchu <nsouch@FreeBSD.org> born in Suresnes, Hauts-de-Seine, France, 1972
10/19 Nick Barkas <snb@FreeBSD.org> born in Longview, Washington, United States, 1981
10/19 Pedro Giffuni <pfg@FreeBSD.org> born in Bogotá, Colombia, 1968
10/20 Joel Dahl <joel@FreeBSD.org> born in Bitterna, Skaraborg, Sweden, 1983
10/20 Dmitry Marakasov <amdmi3@FreeBSD.org> born in Moscow, Russian Federation, 1984
10/21 Ben Smithurst <ben@FreeBSD.org> born in Sheffield, South Yorkshire, United Kingdom, 1981
10/22 Jean-Sebastien Pedron <dumbbell@FreeBSD.org> born in Redon, Ille-et-Vilaine, France, 1980
10/23 Mario Sergio Fujikawa Ferreira <lioux@FreeBSD.org> born in Brasilia, Distrito Federal, Brazil, 1976
10/23 Romain Tartière <romain@FreeBSD.org> born in Clermont-Ferrand, France, 1984
10/25 Eric Melville <eric@FreeBSD.org> born in Los Gatos, California, United States, 1980
10/25 Julien Laffaye <jlaffaye@FreeBSD.org> born in Toulouse, France, 1988
10/25 Ashish SHUKLA <ashish@FreeBSD.org> born in Kanpur, India, 1985
10/25 Toomas Soome <tsoome@FreeBSD.org> born in Estonia, 1971
10/26 Matthew Ahrens <mahrens@FreeBSD.org> born in United States, 1979
10/26 Philip M. Gollucci <pgollucci@FreeBSD.org> born in Silver Spring, Maryland, United States, 1979
10/27 Takanori Watanabe <takawata@FreeBSD.org> born in Numazu, Shizuoka, Japan, 1972
10/30 Olli Hauer <ohauer@FreeBSD.org> born in Sindelfingen, Germany, 1968
10/31 Taras Korenko <taras@FreeBSD.org> born in Cherkasy region, Ukraine, 1980
11/03 Ryan Stone <rstone@FreeBSD.org> born in Ottawa, Ontario, Canada, 1985
11/04 John Hixson <jhixson@FreeBSD.org> born in Burlingame, California, United States, 1974
11/05 M. Warner Losh <imp@FreeBSD.org> born in Kansas City, Kansas, United States, 1966
11/06 Michael Zhilin <mizhka@FreeBSD.org> born in Stary Oskol, USSR, 1985
11/08 Joseph R. Mingrone <jrm@FreeBSD.org> born in Charlottetown, Prince Edward Island, Canada, 1976
11/09 Coleman Kane <cokane@FreeBSD.org> born in Cincinnati, Ohio, United States, 1980
11/09 Antoine Brodin <antoine@FreeBSD.org> born in Bagnolet, France, 1981
11/10 Gregory Neil Shapiro <gshapiro@FreeBSD.org> born in Providence, Rhode Island, United States, 1970
11/11 Danilo E. Gondolfo <danilo@FreeBSD.org> born in Lobato, Parana, Brazil, 1987
11/13 John Baldwin <jhb@FreeBSD.org> born in Stuart, Virginia, United States, 1977
11/14 Jeremie Le Hen <jlh@FreeBSD.org> born in Nancy, France, 1980
11/15 Lars Engels <lme@FreeBSD.org> born in Hilden, Nordrhein-Westfalen, Germany, 1980
11/15 Tijl Coosemans <tijl@FreeBSD.org> born in Duffel, Belgium, 1983
11/16 Jose Maria Alcaide Salinas <jmas@FreeBSD.org> born in Madrid, Spain, 1962
11/16 Matt Joras <mjoras@FreeBSD.org> born in Evanston, Illinois, United States, 1992
11/17 Ralf S. Engelschall <rse@FreeBSD.org> born in Dachau, Bavaria, Germany, 1972
11/18 Thomas Quinot <thomas@FreeBSD.org> born in Paris, France, 1977
11/19 Konstantin Belousov <kib@FreeBSD.org> born in Kiev, USSR, 1972
11/20 Dmitry Morozovsky <marck@FreeBSD.org> born in Moscow, USSR, 1968
11/20 Gavin Atkinson <gavin@FreeBSD.org> born in Middlesbrough, United Kingdom, 1979
11/21 Shteryana Shopova <syrinx@FreeBSD.org> born in Petrich, Bulgaria, 1982
11/21 Mark Johnston <markj@FreeBSD.org> born in Toronto, Ontario, Canada, 1989
11/22 Frederic Culot <culot@FreeBSD.org> born in Saint-Germain-En-Laye, France, 1976
11/23 Josef Lawrence Karthauser <joe@FreeBSD.org> born in Pembury, Kent, United Kingdom, 1972
11/23 Sepherosa Ziehau <sephe@FreeBSD.org> born in Shanghai, China, 1980
11/23 Luca Pizzamiglio <pizzamig@FreeBSD.org> born in Casalpusterlengo, Italy, 1978
11/24 Andrey Zakhvatov <andy@FreeBSD.org> born in Chelyabinsk, Russian Federation, 1974
11/24 Daniel Gerzo <danger@FreeBSD.org> born in Bratislava, Slovakia, 1986
11/25 Fedor Uporov <fsu@FreeBSD.org> born in Yalta, Crimea, USSR, 1988
11/28 Nik Clayton <nik@FreeBSD.org> born in Peterborough, United Kingdom, 1973
11/28 Stanislav Sedov <stas@FreeBSD.org> born in Chelyabinsk, USSR, 1985
12/01 Hajimu Umemoto <ume@FreeBSD.org> born in Nara, Japan, 1961
12/01 Alexey Dokuchaev <danfe@FreeBSD.org> born in Magadan, USSR, 1980
12/02 Ermal Luçi <eri@FreeBSD.org> born in Tirane, Albania, 1980
12/03 Diane Bruce <db@FreeBSD.org> born in Ottawa, Ontario, Canada, 1952
12/04 Mariusz Zaborski <oshogbo@FreeBSD.org> born in Skierniewice, Poland, 1990
12/05 Ivan Voras <ivoras@FreeBSD.org> born in Slavonski Brod, Croatia, 1981
12/06 Stefan Farfeleder <stefanf@FreeBSD.org> born in Wien, Austria, 1980
12/08 Michael Tuexen <tuexen@FreeBSD.org> born in Oldenburg, Germany, 1966
12/11 Ganael Laplanche <martymac@FreeBSD.org> born in Reims, France, 1980
12/15 James FitzGibbon <jfitz@FreeBSD.org> born in Amersham, Buckinghamshire, United Kingdom, 1974
12/15 Timur I. Bakeyev <timur@FreeBSD.org> born in Kazan, Republic of Tatarstan, USSR, 1974
12/18 Chris Timmons <cwt@FreeBSD.org> born in Ellensburg, Washington, United States, 1964
12/18 Dag-Erling Smorgrav <des@FreeBSD.org> born in Brussels, Belgium, 1977
12/18 Muhammad Moinur Rahman <bofh@FreeBSD.org> born in Dhaka, Bangladesh, 1983
12/18 Semen Ustimenko <semenu@FreeBSD.org> born in Novosibirsk, Russian Federation, 1979
12/19 Stephen Hurd <shurd@FreeBSD.org> born in Estevan, Saskatchewan, Canada, 1975
12/19 Emmanuel Vadot <manu@FreeBSD.org> born in Decines-Charpieu, France, 1983
12/20 Sean Bruno <sbruno@FreeBSD.org> born in Monterey, California, United States, 1974
12/21 Rong-En Fan <rafan@FreeBSD.org> born in Taipei, Taiwan, Republic of China, 1982
12/22 Alan L. Cox <alc@FreeBSD.org> born in Warren, Ohio, United States, 1964
12/22 Maxim Sobolev <sobomax@FreeBSD.org> born in Dnepropetrovsk, Ukraine, 1976
12/23 Sean Chittenden <seanc@FreeBSD.org> born in Seattle, Washington, United States, 1979
12/23 Alejandro Pulver <alepulver@FreeBSD.org> born in Buenos Aires, Argentina, 1989
12/24 Jochen Neumeister <joneum@FreeBSD.org> born in Heidenheim, Germany, 1975
12/24 Guido Falsi <madpilot@FreeBSD.org> born in Firenze, Italy, 1978
12/25 Niclas Zeising <zeising@FreeBSD.org> born in Stockholm, Sweden, 1986
12/28 Soren Schmidt <sos@FreeBSD.org> born in Maribo, Denmark, 1960
12/28 Ade Lovett <ade@FreeBSD.org> born in London, England, 1969
12/28 Marius Strobl <marius@FreeBSD.org> born in Cham, Bavaria, Germany, 1978
12/31 Edwin Groothuis <edwin@FreeBSD.org> born in Geldrop, the Netherlands, 1970
12/31 Fourth quarter status reports are due on 01/15
#endif /* !_calendar_freebsd_ */
Index: projects/clang800-import/usr.bin/ipcs/ipcs.c
===================================================================
--- projects/clang800-import/usr.bin/ipcs/ipcs.c (revision 343955)
+++ projects/clang800-import/usr.bin/ipcs/ipcs.c (revision 343956)
@@ -1,573 +1,558 @@
/*-
* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 1994 SigmaSoft, Th. Lockert <tholo@sigmasoft.com>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
* INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
* AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
* THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/proc.h>
#define _WANT_SYSVMSG_INTERNALS
#include <sys/msg.h>
#define _WANT_SYSVSEM_INTERNALS
#include <sys/sem.h>
#define _WANT_SYSVSHM_INTERNALS
#include <sys/shm.h>
#include <err.h>
#include <fcntl.h>
#include <grp.h>
#include <kvm.h>
#include <limits.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include "ipc.h"
char *fmt_perm(u_short);
void cvt_time(time_t, char *);
void usage(void);
uid_t user2uid(char *username);
void print_kmsqtotal(struct msginfo msginfo);
void print_kmsqheader(int option);
void print_kmsqptr(int i, int option, struct msqid_kernel *kmsqptr);
void print_kshmtotal(struct shminfo shminfo);
void print_kshmheader(int option);
void print_kshmptr(int i, int option, struct shmid_kernel *kshmptr);
void print_ksemtotal(struct seminfo seminfo);
void print_ksemheader(int option);
void print_ksemptr(int i, int option, struct semid_kernel *ksemaptr);
char *
fmt_perm(u_short mode)
{
static char buffer[100];
buffer[0] = '-';
buffer[1] = '-';
buffer[2] = ((mode & 0400) ? 'r' : '-');
buffer[3] = ((mode & 0200) ? 'w' : '-');
buffer[4] = ((mode & 0100) ? 'a' : '-');
buffer[5] = ((mode & 0040) ? 'r' : '-');
buffer[6] = ((mode & 0020) ? 'w' : '-');
buffer[7] = ((mode & 0010) ? 'a' : '-');
buffer[8] = ((mode & 0004) ? 'r' : '-');
buffer[9] = ((mode & 0002) ? 'w' : '-');
buffer[10] = ((mode & 0001) ? 'a' : '-');
buffer[11] = '\0';
return (&buffer[0]);
}
void
cvt_time(time_t t, char *buf)
{
struct tm *tm;
if (t == 0) {
strcpy(buf, "no-entry");
} else {
tm = localtime(&t);
sprintf(buf, "%2d:%02d:%02d",
tm->tm_hour, tm->tm_min, tm->tm_sec);
}
}
#define BIGGEST 1
#define CREATOR 2
#define OUTSTANDING 4
#define PID 8
#define TIME 16
int
main(int argc, char *argv[])
{
int display = SHMINFO | MSGINFO | SEMINFO;
int option = 0;
char *core = NULL, *user = NULL, *namelist = NULL;
char kvmoferr[_POSIX2_LINE_MAX]; /* Error buf for kvm_openfiles. */
int i;
u_long shmidx;
uid_t uid = 0;
while ((i = getopt(argc, argv, "MmQqSsabC:cN:optTu:y")) != -1)
switch (i) {
case 'a':
option |= BIGGEST | CREATOR | OUTSTANDING | PID | TIME;
break;
case 'b':
option |= BIGGEST;
break;
case 'C':
core = optarg;
break;
case 'c':
option |= CREATOR;
break;
case 'M':
display = SHMTOTAL;
break;
case 'm':
display = SHMINFO;
break;
case 'N':
namelist = optarg;
break;
case 'o':
option |= OUTSTANDING;
break;
case 'p':
option |= PID;
break;
case 'Q':
display = MSGTOTAL;
break;
case 'q':
display = MSGINFO;
break;
case 'S':
display = SEMTOTAL;
break;
case 's':
display = SEMINFO;
break;
case 'T':
display = SHMTOTAL | MSGTOTAL | SEMTOTAL;
break;
case 't':
option |= TIME;
break;
case 'u':
user = optarg;
uid = user2uid(user);
break;
case 'y':
use_sysctl = 0;
break;
default:
usage();
}
/*
* If paths to the exec file or core file were specified, we
* aren't operating on the running kernel, so we can't use
* sysctl.
*/
if (namelist != NULL || core != NULL)
use_sysctl = 0;
if (!use_sysctl) {
kd = kvm_openfiles(namelist, core, NULL, O_RDONLY, kvmoferr);
if (kd == NULL)
errx(1, "kvm_openfiles: %s", kvmoferr);
switch (kvm_nlist(kd, symbols)) {
case 0:
break;
case -1:
errx(1, "unable to read kernel symbol table");
default:
break;
}
}
kget(X_MSGINFO, &msginfo, sizeof(msginfo));
- if ((display & (MSGINFO | MSGTOTAL))) {
+ if (display & (MSGINFO | MSGTOTAL)) {
if (display & MSGTOTAL)
print_kmsqtotal(msginfo);
if (display & MSGINFO) {
struct msqid_kernel *kxmsqids;
size_t kxmsqids_len;
kxmsqids_len =
sizeof(struct msqid_kernel) * msginfo.msgmni;
kxmsqids = malloc(kxmsqids_len);
kget(X_MSQIDS, kxmsqids, kxmsqids_len);
print_kmsqheader(option);
for (i = 0; i < msginfo.msgmni; i += 1) {
if (kxmsqids[i].u.msg_qbytes != 0) {
if (user &&
uid != kxmsqids[i].u.msg_perm.uid)
continue;
print_kmsqptr(i, option, &kxmsqids[i]);
}
}
printf("\n");
}
- } else
- if (display & (MSGINFO | MSGTOTAL)) {
- fprintf(stderr,
- "SVID messages facility "
- "not configured in the system\n");
- }
+ }
kget(X_SHMINFO, &shminfo, sizeof(shminfo));
- if ((display & (SHMINFO | SHMTOTAL))) {
+ if (display & (SHMINFO | SHMTOTAL)) {
if (display & SHMTOTAL)
print_kshmtotal(shminfo);
if (display & SHMINFO) {
struct shmid_kernel *kxshmids;
size_t kxshmids_len;
kxshmids_len =
sizeof(struct shmid_kernel) * shminfo.shmmni;
kxshmids = malloc(kxshmids_len);
kget(X_SHMSEGS, kxshmids, kxshmids_len);
print_kshmheader(option);
for (shmidx = 0; shmidx < shminfo.shmmni; shmidx += 1) {
if (kxshmids[shmidx].u.shm_perm.mode & 0x0800) {
if (user &&
uid != kxshmids[shmidx].u.shm_perm.uid)
continue;
print_kshmptr(shmidx, option, &kxshmids[shmidx]);
}
}
printf("\n");
}
- } else
- if (display & (SHMINFO | SHMTOTAL)) {
- fprintf(stderr,
- "SVID shared memory facility "
- "not configured in the system\n");
- }
+ }
kget(X_SEMINFO, &seminfo, sizeof(seminfo));
- if ((display & (SEMINFO | SEMTOTAL))) {
+ if (display & (SEMINFO | SEMTOTAL)) {
struct semid_kernel *kxsema;
size_t kxsema_len;
if (display & SEMTOTAL)
print_ksemtotal(seminfo);
if (display & SEMINFO) {
kxsema_len =
sizeof(struct semid_kernel) * seminfo.semmni;
kxsema = malloc(kxsema_len);
kget(X_SEMA, kxsema, kxsema_len);
print_ksemheader(option);
for (i = 0; i < seminfo.semmni; i += 1) {
if ((kxsema[i].u.sem_perm.mode & SEM_ALLOC)
!= 0) {
if (user &&
uid != kxsema[i].u.sem_perm.uid)
continue;
print_ksemptr(i, option, &kxsema[i]);
}
}
printf("\n");
}
- } else
- if (display & (SEMINFO | SEMTOTAL)) {
- fprintf(stderr,
- "SVID semaphores facility "
- "not configured in the system\n");
- }
+ }
if (!use_sysctl)
kvm_close(kd);
exit(0);
}
void
print_kmsqtotal(struct msginfo local_msginfo)
{
printf("msginfo:\n");
printf("\tmsgmax: %12d\t(max characters in a message)\n",
local_msginfo.msgmax);
printf("\tmsgmni: %12d\t(# of message queues)\n",
local_msginfo.msgmni);
printf("\tmsgmnb: %12d\t(max characters in a message queue)\n",
local_msginfo.msgmnb);
printf("\tmsgtql: %12d\t(max # of messages in system)\n",
local_msginfo.msgtql);
printf("\tmsgssz: %12d\t(size of a message segment)\n",
local_msginfo.msgssz);
printf("\tmsgseg: %12d\t(# of message segments in system)\n\n",
local_msginfo.msgseg);
}
void print_kmsqheader(int option)
{
printf("Message Queues:\n");
printf("T %12s %12s %-11s %-8s %-8s",
"ID", "KEY", "MODE", "OWNER", "GROUP");
if (option & CREATOR)
printf(" %-8s %-8s", "CREATOR", "CGROUP");
if (option & OUTSTANDING)
printf(" %20s %20s", "CBYTES", "QNUM");
if (option & BIGGEST)
printf(" %20s", "QBYTES");
if (option & PID)
printf(" %12s %12s", "LSPID", "LRPID");
if (option & TIME)
printf(" %-8s %-8s %-8s", "STIME", "RTIME", "CTIME");
printf("\n");
}
void
print_kmsqptr(int i, int option, struct msqid_kernel *kmsqptr)
{
char stime_buf[100], rtime_buf[100], ctime_buf[100];
cvt_time(kmsqptr->u.msg_stime, stime_buf);
cvt_time(kmsqptr->u.msg_rtime, rtime_buf);
cvt_time(kmsqptr->u.msg_ctime, ctime_buf);
printf("q %12d %12d %s %-8s %-8s",
IXSEQ_TO_IPCID(i, kmsqptr->u.msg_perm),
(int)kmsqptr->u.msg_perm.key,
fmt_perm(kmsqptr->u.msg_perm.mode),
user_from_uid(kmsqptr->u.msg_perm.uid, 0),
group_from_gid(kmsqptr->u.msg_perm.gid, 0));
if (option & CREATOR)
printf(" %-8s %-8s",
user_from_uid(kmsqptr->u.msg_perm.cuid, 0),
group_from_gid(kmsqptr->u.msg_perm.cgid, 0));
if (option & OUTSTANDING)
printf(" %12lu %12lu",
kmsqptr->u.msg_cbytes,
kmsqptr->u.msg_qnum);
if (option & BIGGEST)
printf(" %20lu", kmsqptr->u.msg_qbytes);
if (option & PID)
printf(" %12d %12d",
kmsqptr->u.msg_lspid,
kmsqptr->u.msg_lrpid);
if (option & TIME)
printf(" %s %s %s",
stime_buf,
rtime_buf,
ctime_buf);
printf("\n");
}
void
print_kshmtotal(struct shminfo local_shminfo)
{
printf("shminfo:\n");
printf("\tshmmax: %12lu\t(max shared memory segment size)\n",
local_shminfo.shmmax);
printf("\tshmmin: %12lu\t(min shared memory segment size)\n",
local_shminfo.shmmin);
printf("\tshmmni: %12lu\t(max number of shared memory identifiers)\n",
local_shminfo.shmmni);
printf("\tshmseg: %12lu\t(max shared memory segments per process)\n",
local_shminfo.shmseg);
printf("\tshmall: %12lu\t(max amount of shared memory in pages)\n\n",
local_shminfo.shmall);
}
void
print_kshmheader(int option)
{
printf("Shared Memory:\n");
printf("T %12s %12s %-11s %-8s %-8s",
"ID", "KEY", "MODE", "OWNER", "GROUP");
if (option & CREATOR)
printf(" %-8s %-8s", "CREATOR", "CGROUP");
if (option & OUTSTANDING)
printf(" %12s", "NATTCH");
if (option & BIGGEST)
printf(" %12s", "SEGSZ");
if (option & PID)
printf(" %12s %12s", "CPID", "LPID");
if (option & TIME)
printf(" %-8s %-8s %-8s", "ATIME", "DTIME", "CTIME");
printf("\n");
}
void
print_kshmptr(int i, int option, struct shmid_kernel *kshmptr)
{
char atime_buf[100], dtime_buf[100], ctime_buf[100];
cvt_time(kshmptr->u.shm_atime, atime_buf);
cvt_time(kshmptr->u.shm_dtime, dtime_buf);
cvt_time(kshmptr->u.shm_ctime, ctime_buf);
printf("m %12d %12d %s %-8s %-8s",
IXSEQ_TO_IPCID(i, kshmptr->u.shm_perm),
(int)kshmptr->u.shm_perm.key,
fmt_perm(kshmptr->u.shm_perm.mode),
user_from_uid(kshmptr->u.shm_perm.uid, 0),
group_from_gid(kshmptr->u.shm_perm.gid, 0));
if (option & CREATOR)
printf(" %-8s %-8s",
user_from_uid(kshmptr->u.shm_perm.cuid, 0),
group_from_gid(kshmptr->u.shm_perm.cgid, 0));
if (option & OUTSTANDING)
printf(" %12d",
kshmptr->u.shm_nattch);
if (option & BIGGEST)
printf(" %12zu",
kshmptr->u.shm_segsz);
if (option & PID)
printf(" %12d %12d",
kshmptr->u.shm_cpid,
kshmptr->u.shm_lpid);
if (option & TIME)
printf(" %s %s %s",
atime_buf,
dtime_buf,
ctime_buf);
printf("\n");
}
void
print_ksemtotal(struct seminfo local_seminfo)
{
printf("seminfo:\n");
printf("\tsemmni: %12d\t(# of semaphore identifiers)\n",
local_seminfo.semmni);
printf("\tsemmns: %12d\t(# of semaphores in system)\n",
local_seminfo.semmns);
printf("\tsemmnu: %12d\t(# of undo structures in system)\n",
local_seminfo.semmnu);
printf("\tsemmsl: %12d\t(max # of semaphores per id)\n",
local_seminfo.semmsl);
printf("\tsemopm: %12d\t(max # of operations per semop call)\n",
local_seminfo.semopm);
printf("\tsemume: %12d\t(max # of undo entries per process)\n",
local_seminfo.semume);
printf("\tsemusz: %12d\t(size in bytes of undo structure)\n",
local_seminfo.semusz);
printf("\tsemvmx: %12d\t(semaphore maximum value)\n",
local_seminfo.semvmx);
printf("\tsemaem: %12d\t(adjust on exit max value)\n\n",
local_seminfo.semaem);
}
void
print_ksemheader(int option)
{
printf("Semaphores:\n");
printf("T %12s %12s %-11s %-8s %-8s",
"ID", "KEY", "MODE", "OWNER", "GROUP");
if (option & CREATOR)
printf(" %-8s %-8s", "CREATOR", "CGROUP");
if (option & BIGGEST)
printf(" %12s", "NSEMS");
if (option & TIME)
printf(" %-8s %-8s", "OTIME", "CTIME");
printf("\n");
}
void
print_ksemptr(int i, int option, struct semid_kernel *ksemaptr)
{
char ctime_buf[100], otime_buf[100];
cvt_time(ksemaptr->u.sem_otime, otime_buf);
cvt_time(ksemaptr->u.sem_ctime, ctime_buf);
printf("s %12d %12d %s %-8s %-8s",
IXSEQ_TO_IPCID(i, ksemaptr->u.sem_perm),
(int)ksemaptr->u.sem_perm.key,
fmt_perm(ksemaptr->u.sem_perm.mode),
user_from_uid(ksemaptr->u.sem_perm.uid, 0),
group_from_gid(ksemaptr->u.sem_perm.gid, 0));
if (option & CREATOR)
printf(" %-8s %-8s",
user_from_uid(ksemaptr->u.sem_perm.cuid, 0),
group_from_gid(ksemaptr->u.sem_perm.cgid, 0));
if (option & BIGGEST)
printf(" %12d",
ksemaptr->u.sem_nsems);
if (option & TIME)
printf(" %s %s",
otime_buf,
ctime_buf);
printf("\n");
}
uid_t
user2uid(char *username)
{
struct passwd *pwd;
uid_t uid;
char *r;
uid = strtoul(username, &r, 0);
if (!*r && r != username)
return (uid);
if ((pwd = getpwnam(username)) == NULL)
errx(1, "getpwnam failed: No such user");
endpwent();
return (pwd->pw_uid);
}
void
usage(void)
{
fprintf(stderr,
"usage: "
"ipcs [-abcmopqstyMQST] [-C corefile] [-N namelist] [-u user]\n");
exit(1);
}
Index: projects/clang800-import/usr.bin/newkey/update.c
===================================================================
--- projects/clang800-import/usr.bin/newkey/update.c (revision 343955)
+++ projects/clang800-import/usr.bin/newkey/update.c (revision 343956)
@@ -1,332 +1,340 @@
/*
* Sun RPC is a product of Sun Microsystems, Inc. and is provided for
* unrestricted use provided that this legend is included on all tape
* media and as a part of the software program in whole or part. Users
* may copy or modify Sun RPC without charge, but are not authorized
* to license or distribute it to anyone else except as part of a product or
* program developed by the user or with the express written consent of
* Sun Microsystems, Inc.
*
* SUN RPC IS PROVIDED AS IS WITH NO WARRANTIES OF ANY KIND INCLUDING THE
* WARRANTIES OF DESIGN, MERCHANTIBILITY AND FITNESS FOR A PARTICULAR
* PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE.
*
* Sun RPC is provided with no support and without any obligation on the
* part of Sun Microsystems, Inc. to assist in its use, correction,
* modification or enhancement.
*
* SUN MICROSYSTEMS, INC. SHALL HAVE NO LIABILITY WITH RESPECT TO THE
* INFRINGEMENT OF COPYRIGHTS, TRADE SECRETS OR ANY PATENTS BY SUN RPC
* OR ANY PART THEREOF.
*
* In no event will Sun Microsystems, Inc. be liable for any lost revenue
* or profits or other special, indirect and consequential damages, even if
* Sun has been advised of the possibility of such damages.
*
* Sun Microsystems, Inc.
* 2550 Garcia Avenue
* Mountain View, California 94043
*/
#ifndef lint
#if 0
static char sccsid[] = "@(#)update.c 1.2 91/03/11 Copyr 1986 Sun Micro";
#endif
#endif
/*
* Copyright (C) 1986, 1989, Sun Microsystems, Inc.
*/
/*
* Administrative tool to add a new user to the publickey database
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <rpc/rpc.h>
#include <rpc/key_prot.h>
#ifdef YP
#include <sys/wait.h>
#include <rpcsvc/yp_prot.h>
#include <rpcsvc/ypclnt.h>
#include <netdb.h>
#endif /* YP */
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include "extern.h"
#ifdef YP
static char SHELL[] = "/bin/sh";
static char YPDBPATH[]="/var/yp"; /* This is defined but not used! */
static char UPDATEFILE[] = "updaters";
static int _openchild(char *, FILE **, FILE **);
static char *basename(char *path);
/*
* Determine if requester is allowed to update the given map,
* and update it if so. Returns the yp status, which is zero
* if there is no access violation.
*/
int
mapupdate(char *requester, char *mapname, u_int op, u_int keylen,
char *key, u_int datalen, char *data)
{
char updater[MAXMAPNAMELEN + 40];
FILE *childargs;
FILE *childrslt;
#ifdef WEXITSTATUS
int status;
#else
union wait status;
#endif
pid_t pid;
u_int yperrno;
#ifdef DEBUG
printf("%s %s\n", key, data);
#endif
(void)sprintf(updater, "make -s -f %s/%s %s", YPDBPATH, /* !!! */
UPDATEFILE, mapname);
pid = _openchild(updater, &childargs, &childrslt);
if (pid < 0) {
return (YPERR_YPERR);
}
/*
* Write to child
*/
(void)fprintf(childargs, "%s\n", requester);
(void)fprintf(childargs, "%u\n", op);
(void)fprintf(childargs, "%u\n", keylen);
(void)fwrite(key, (int)keylen, 1, childargs);
(void)fprintf(childargs, "\n");
(void)fprintf(childargs, "%u\n", datalen);
(void)fwrite(data, (int)datalen, 1, childargs);
(void)fprintf(childargs, "\n");
(void)fclose(childargs);
/*
* Read from child
*/
(void)fscanf(childrslt, "%d", &yperrno);
(void)fclose(childrslt);
(void)wait(&status);
#ifdef WEXITSTATUS
if (WEXITSTATUS(status) != 0) {
#else
if (status.w_retcode != 0) {
#endif
return (YPERR_YPERR);
}
return (yperrno);
}
/*
* returns pid, or -1 for failure
*/
static pid_t
_openchild(char *command, FILE **fto, FILE **ffrom)
{
int i;
pid_t pid;
int pdto[2];
int pdfrom[2];
char *com;
struct rlimit rl;
if (pipe(pdto) < 0) {
goto error1;
}
if (pipe(pdfrom) < 0) {
goto error2;
}
switch (pid = fork()) {
case -1:
goto error3;
case 0:
/*
* child: read from pdto[0], write into pdfrom[1]
*/
(void)close(0);
(void)dup(pdto[0]);
(void)close(1);
(void)dup(pdfrom[1]);
getrlimit(RLIMIT_NOFILE, &rl);
for (i = rl.rlim_max - 1; i >= 3; i--) {
(void) close(i);
}
com = malloc((unsigned) strlen(command) + 6);
if (com == NULL) {
_exit(~0);
}
(void)sprintf(com, "exec %s", command);
execl(SHELL, basename(SHELL), "-c", com, (char *)NULL);
_exit(~0);
default:
/*
* parent: write into pdto[1], read from pdfrom[0]
*/
*fto = fdopen(pdto[1], "w");
(void)close(pdto[0]);
*ffrom = fdopen(pdfrom[0], "r");
(void)close(pdfrom[1]);
break;
}
return (pid);
/*
* error cleanup and return
*/
error3:
(void)close(pdfrom[0]);
(void)close(pdfrom[1]);
error2:
(void)close(pdto[0]);
(void)close(pdto[1]);
error1:
return (-1);
}
static char *
basename(char *path)
{
char *p;
p = strrchr(path, '/');
if (p == NULL) {
return (path);
} else {
return (p + 1);
}
}
#else /* YP */
#define ERR_ACCESS 1
#define ERR_MALLOC 2
#define ERR_READ 3
#define ERR_WRITE 4
#define ERR_DBASE 5
#define ERR_KEY 6
static int match(char *, char *);
/*
* Determine if requester is allowed to update the given map,
* and update it if so. Returns the status, which is zero
* if there is no access violation. This function updates
* the local file and then shuts up.
*/
int
localupdate(char *name, char *filename, u_int op, u_int keylen __unused,
char *key, u_int datalen __unused, char *data)
{
char line[256];
FILE *rf;
FILE *wf;
char *tmpname;
int err;
/*
* Check permission
*/
if (strcmp(name, key) != 0) {
return (ERR_ACCESS);
}
if (strcmp(name, "nobody") == 0) {
/*
* Can't change "nobody"s key.
*/
return (ERR_ACCESS);
}
/*
* Open files
*/
tmpname = malloc(strlen(filename) + 4);
if (tmpname == NULL) {
return (ERR_MALLOC);
}
sprintf(tmpname, "%s.tmp", filename);
rf = fopen(filename, "r");
if (rf == NULL) {
- return (ERR_READ);
+ err = ERR_READ;
+ goto cleanup;
}
wf = fopen(tmpname, "w");
if (wf == NULL) {
- return (ERR_WRITE);
+ fclose(rf);
+ err = ERR_WRITE;
+ goto cleanup;
}
err = -1;
while (fgets(line, sizeof (line), rf)) {
if (err < 0 && match(line, name)) {
switch (op) {
case YPOP_INSERT:
err = ERR_KEY;
break;
case YPOP_STORE:
case YPOP_CHANGE:
fprintf(wf, "%s %s\n", key, data);
err = 0;
break;
case YPOP_DELETE:
/* do nothing */
err = 0;
break;
}
} else {
fputs(line, wf);
}
}
if (err < 0) {
switch (op) {
case YPOP_CHANGE:
case YPOP_DELETE:
err = ERR_KEY;
break;
case YPOP_INSERT:
case YPOP_STORE:
err = 0;
fprintf(wf, "%s %s\n", key, data);
break;
}
}
fclose(wf);
fclose(rf);
if (err == 0) {
if (rename(tmpname, filename) < 0) {
- return (ERR_DBASE);
+ err = ERR_DBASE;
+ goto cleanup;
}
} else {
if (unlink(tmpname) < 0) {
- return (ERR_DBASE);
+ err = ERR_DBASE;
+ goto cleanup;
}
}
+
+cleanup:
+ free(tmpname);
return (err);
}
static int
match(char *line, char *name)
{
int len;
len = strlen(name);
return (strncmp(line, name, len) == 0 &&
(line[len] == ' ' || line[len] == '\t'));
}
#endif /* !YP */
Index: projects/clang800-import/usr.bin/vtfontcvt/vtfontcvt.c
===================================================================
--- projects/clang800-import/usr.bin/vtfontcvt/vtfontcvt.c (revision 343955)
+++ projects/clang800-import/usr.bin/vtfontcvt/vtfontcvt.c (revision 343956)
@@ -1,607 +1,607 @@
/*-
* Copyright (c) 2009, 2014 The FreeBSD Foundation
* All rights reserved.
*
* This software was developed by Ed Schouten under sponsorship from the
* FreeBSD Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/types.h>
#include <sys/fnv_hash.h>
#include <sys/endian.h>
#include <sys/param.h>
#include <sys/queue.h>
#include <assert.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define VFNT_MAPS 4
#define VFNT_MAP_NORMAL 0
#define VFNT_MAP_NORMAL_RH 1
#define VFNT_MAP_BOLD 2
#define VFNT_MAP_BOLD_RH 3
static unsigned int width = 8, wbytes, height = 16;
struct glyph {
TAILQ_ENTRY(glyph) g_list;
SLIST_ENTRY(glyph) g_hash;
uint8_t *g_data;
unsigned int g_index;
};
#define FONTCVT_NHASH 4096
TAILQ_HEAD(glyph_list, glyph);
static SLIST_HEAD(, glyph) glyph_hash[FONTCVT_NHASH];
static struct glyph_list glyphs[VFNT_MAPS] = {
- TAILQ_HEAD_INITIALIZER(glyphs[0]),
- TAILQ_HEAD_INITIALIZER(glyphs[1]),
- TAILQ_HEAD_INITIALIZER(glyphs[2]),
- TAILQ_HEAD_INITIALIZER(glyphs[3]),
+ TAILQ_HEAD_INITIALIZER(glyphs[0]),
+ TAILQ_HEAD_INITIALIZER(glyphs[1]),
+ TAILQ_HEAD_INITIALIZER(glyphs[2]),
+ TAILQ_HEAD_INITIALIZER(glyphs[3]),
};
static unsigned int glyph_total, glyph_count[4], glyph_unique, glyph_dupe;
struct mapping {
TAILQ_ENTRY(mapping) m_list;
unsigned int m_char;
unsigned int m_length;
struct glyph *m_glyph;
};
TAILQ_HEAD(mapping_list, mapping);
static struct mapping_list maps[VFNT_MAPS] = {
- TAILQ_HEAD_INITIALIZER(maps[0]),
- TAILQ_HEAD_INITIALIZER(maps[1]),
- TAILQ_HEAD_INITIALIZER(maps[2]),
- TAILQ_HEAD_INITIALIZER(maps[3]),
+ TAILQ_HEAD_INITIALIZER(maps[0]),
+ TAILQ_HEAD_INITIALIZER(maps[1]),
+ TAILQ_HEAD_INITIALIZER(maps[2]),
+ TAILQ_HEAD_INITIALIZER(maps[3]),
};
static unsigned int mapping_total, map_count[4], map_folded_count[4],
mapping_unique, mapping_dupe;
static void
usage(void)
{
(void)fprintf(stderr,
"usage: vtfontcvt [-w width] [-h height] [-v] normal.bdf [bold.bdf] out.fnt\n");
exit(1);
}
static void *
xmalloc(size_t size)
{
void *m;
if ((m = malloc(size)) == NULL)
errx(1, "memory allocation failure");
return (m);
}
static int
add_mapping(struct glyph *gl, unsigned int c, unsigned int map_idx)
{
struct mapping *mp;
struct mapping_list *ml;
mapping_total++;
mp = xmalloc(sizeof *mp);
mp->m_char = c;
mp->m_glyph = gl;
mp->m_length = 0;
ml = &maps[map_idx];
if (TAILQ_LAST(ml, mapping_list) != NULL &&
TAILQ_LAST(ml, mapping_list)->m_char >= c)
errx(1, "Bad ordering at character %u", c);
TAILQ_INSERT_TAIL(ml, mp, m_list);
map_count[map_idx]++;
mapping_unique++;
return (0);
}
static int
dedup_mapping(unsigned int map_idx)
{
struct mapping *mp_bold, *mp_normal, *mp_temp;
unsigned normal_map_idx = map_idx - VFNT_MAP_BOLD;
assert(map_idx == VFNT_MAP_BOLD || map_idx == VFNT_MAP_BOLD_RH);
mp_normal = TAILQ_FIRST(&maps[normal_map_idx]);
TAILQ_FOREACH_SAFE(mp_bold, &maps[map_idx], m_list, mp_temp) {
while (mp_normal->m_char < mp_bold->m_char)
mp_normal = TAILQ_NEXT(mp_normal, m_list);
if (mp_bold->m_char != mp_normal->m_char)
errx(1, "Character %u not in normal font!",
mp_bold->m_char);
if (mp_bold->m_glyph != mp_normal->m_glyph)
continue;
/* No mapping is needed if it's equal to the normal mapping. */
TAILQ_REMOVE(&maps[map_idx], mp_bold, m_list);
free(mp_bold);
mapping_dupe++;
}
return (0);
}
static struct glyph *
add_glyph(const uint8_t *bytes, unsigned int map_idx, int fallback)
{
struct glyph *gl;
int hash;
glyph_total++;
glyph_count[map_idx]++;
hash = fnv_32_buf(bytes, wbytes * height, FNV1_32_INIT) % FONTCVT_NHASH;
SLIST_FOREACH(gl, &glyph_hash[hash], g_hash) {
if (memcmp(gl->g_data, bytes, wbytes * height) == 0) {
glyph_dupe++;
return (gl);
}
}
gl = xmalloc(sizeof *gl);
gl->g_data = xmalloc(wbytes * height);
memcpy(gl->g_data, bytes, wbytes * height);
if (fallback)
TAILQ_INSERT_HEAD(&glyphs[map_idx], gl, g_list);
else
TAILQ_INSERT_TAIL(&glyphs[map_idx], gl, g_list);
SLIST_INSERT_HEAD(&glyph_hash[hash], gl, g_hash);
glyph_unique++;
return (gl);
}
static int
add_char(unsigned curchar, unsigned map_idx, uint8_t *bytes, uint8_t *bytes_r)
{
struct glyph *gl;
/* Prevent adding two glyphs for 0xFFFD */
if (curchar == 0xFFFD) {
if (map_idx < VFNT_MAP_BOLD)
gl = add_glyph(bytes, 0, 1);
} else if (curchar >= 0x20) {
gl = add_glyph(bytes, map_idx, 0);
if (add_mapping(gl, curchar, map_idx) != 0)
return (1);
if (bytes_r != NULL) {
gl = add_glyph(bytes_r, map_idx + 1, 0);
if (add_mapping(gl, curchar,
map_idx + 1) != 0)
return (1);
}
}
return (0);
}
static int
parse_bitmap_line(uint8_t *left, uint8_t *right, unsigned int line,
unsigned int dwidth)
{
uint8_t *p;
unsigned int i, subline;
if (dwidth != width && dwidth != width * 2)
errx(1, "Bitmap with unsupported width %u!", dwidth);
/* Move pixel data right to simplify splitting double characters. */
line >>= (howmany(dwidth, 8) * 8) - dwidth;
for (i = dwidth / width; i > 0; i--) {
p = (i == 2) ? right : left;
subline = line & ((1 << width) - 1);
subline <<= (howmany(width, 8) * 8) - width;
if (wbytes == 1) {
*p = subline;
} else if (wbytes == 2) {
*p++ = subline >> 8;
*p = subline;
} else {
errx(1, "Unsupported wbytes %u!", wbytes);
}
line >>= width;
}
return (0);
}
static int
parse_bdf(FILE *fp, unsigned int map_idx)
{
char *ln;
size_t length;
uint8_t bytes[wbytes * height], bytes_r[wbytes * height];
unsigned int curchar = 0, dwidth = 0, i, line;
while ((ln = fgetln(fp, &length)) != NULL) {
ln[length - 1] = '\0';
if (strncmp(ln, "ENCODING ", 9) == 0) {
curchar = atoi(ln + 9);
}
if (strncmp(ln, "DWIDTH ", 7) == 0) {
dwidth = atoi(ln + 7);
}
if (strncmp(ln, "BITMAP", 6) == 0 &&
(ln[6] == ' ' || ln[6] == '\0')) {
/*
* Assume that the next _height_ lines are bitmap
* data. ENDCHAR is allowed to terminate the bitmap
* early but is not otherwise checked; any extra data
* is ignored.
*/
for (i = 0; i < height; i++) {
if ((ln = fgetln(fp, &length)) == NULL)
errx(1, "Unexpected EOF!");
ln[length - 1] = '\0';
if (strcmp(ln, "ENDCHAR") == 0) {
memset(bytes + i * wbytes, 0,
(height - i) * wbytes);
memset(bytes_r + i * wbytes, 0,
(height - i) * wbytes);
break;
}
sscanf(ln, "%x", &line);
if (parse_bitmap_line(bytes + i * wbytes,
bytes_r + i * wbytes, line, dwidth) != 0)
return (1);
}
if (add_char(curchar, map_idx, bytes,
dwidth == width * 2 ? bytes_r : NULL) != 0)
return (1);
}
}
return (0);
}
static void
set_width(int w)
{
if (w <= 0 || w > 128)
errx(1, "invalid width %d", w);
width = w;
wbytes = howmany(width, 8);
}
static int
parse_hex(FILE *fp, unsigned int map_idx)
{
char *ln, *p;
char fmt_str[8];
size_t length;
uint8_t *bytes = NULL, *bytes_r = NULL;
unsigned curchar = 0, i, line, chars_per_row, dwidth;
int rv = 0;
while ((ln = fgetln(fp, &length)) != NULL) {
ln[length - 1] = '\0';
if (strncmp(ln, "# Height: ", 10) == 0) {
if (bytes != NULL)
errx(1, "malformed input: Height tag after font data");
height = atoi(ln + 10);
} else if (strncmp(ln, "# Width: ", 9) == 0) {
if (bytes != NULL)
errx(1, "malformed input: Width tag after font data");
set_width(atoi(ln + 9));
} else if (sscanf(ln, "%6x:", &curchar)) {
if (bytes == NULL) {
bytes = xmalloc(wbytes * height);
bytes_r = xmalloc(wbytes * height);
}
/* ln is guaranteed to have a colon here. */
p = strchr(ln, ':') + 1;
chars_per_row = strlen(p) / height;
dwidth = width;
if (chars_per_row / 2 > (width + 7) / 8)
dwidth *= 2; /* Double-width character. */
snprintf(fmt_str, sizeof(fmt_str), "%%%ux",
chars_per_row);
for (i = 0; i < height; i++) {
sscanf(p, fmt_str, &line);
p += chars_per_row;
if (parse_bitmap_line(bytes + i * wbytes,
bytes_r + i * wbytes, line, dwidth) != 0) {
rv = 1;
goto out;
}
}
if (add_char(curchar, map_idx, bytes,
dwidth == width * 2 ? bytes_r : NULL) != 0) {
rv = 1;
goto out;
}
}
}
out:
free(bytes);
free(bytes_r);
return (rv);
}
static int
parse_file(const char *filename, unsigned int map_idx)
{
FILE *fp;
size_t len;
int rv;
fp = fopen(filename, "r");
if (fp == NULL) {
perror(filename);
return (1);
}
len = strlen(filename);
if (len > 4 && strcasecmp(filename + len - 4, ".hex") == 0)
rv = parse_hex(fp, map_idx);
else
rv = parse_bdf(fp, map_idx);
fclose(fp);
return (rv);
}
static void
number_glyphs(void)
{
struct glyph *gl;
unsigned int i, idx = 0;
for (i = 0; i < VFNT_MAPS; i++)
TAILQ_FOREACH(gl, &glyphs[i], g_list)
gl->g_index = idx++;
}
static int
write_glyphs(FILE *fp)
{
struct glyph *gl;
unsigned int i;
for (i = 0; i < VFNT_MAPS; i++) {
TAILQ_FOREACH(gl, &glyphs[i], g_list)
if (fwrite(gl->g_data, wbytes * height, 1, fp) != 1)
return (1);
}
return (0);
}
static void
fold_mappings(unsigned int map_idx)
{
struct mapping_list *ml = &maps[map_idx];
struct mapping *mn, *mp, *mbase;
mp = mbase = TAILQ_FIRST(ml);
for (; mp != NULL; mp = mn) {
mn = TAILQ_NEXT(mp, m_list);
if (mn != NULL && mn->m_char == mp->m_char + 1 &&
mn->m_glyph->g_index == mp->m_glyph->g_index + 1)
continue;
mbase->m_length = mp->m_char - mbase->m_char + 1;
mbase = mp = mn;
map_folded_count[map_idx]++;
}
}
struct file_mapping {
uint32_t source;
uint16_t destination;
uint16_t length;
} __packed;
static int
write_mappings(FILE *fp, unsigned int map_idx)
{
struct mapping_list *ml = &maps[map_idx];
struct mapping *mp;
struct file_mapping fm;
unsigned int i = 0, j = 0;
TAILQ_FOREACH(mp, ml, m_list) {
j++;
if (mp->m_length > 0) {
i += mp->m_length;
fm.source = htobe32(mp->m_char);
fm.destination = htobe16(mp->m_glyph->g_index);
fm.length = htobe16(mp->m_length - 1);
if (fwrite(&fm, sizeof fm, 1, fp) != 1)
return (1);
}
}
assert(i == j);
return (0);
}
struct file_header {
uint8_t magic[8];
uint8_t width;
uint8_t height;
uint16_t pad;
uint32_t glyph_count;
uint32_t map_count[4];
} __packed;
static int
write_fnt(const char *filename)
{
FILE *fp;
struct file_header fh = {
.magic = "VFNT0002",
};
fp = fopen(filename, "wb");
if (fp == NULL) {
perror(filename);
return (1);
}
fh.width = width;
fh.height = height;
fh.glyph_count = htobe32(glyph_unique);
fh.map_count[0] = htobe32(map_folded_count[0]);
fh.map_count[1] = htobe32(map_folded_count[1]);
fh.map_count[2] = htobe32(map_folded_count[2]);
fh.map_count[3] = htobe32(map_folded_count[3]);
if (fwrite(&fh, sizeof fh, 1, fp) != 1) {
perror(filename);
fclose(fp);
return (1);
}
if (write_glyphs(fp) != 0 ||
write_mappings(fp, VFNT_MAP_NORMAL) != 0 ||
write_mappings(fp, 1) != 0 ||
write_mappings(fp, VFNT_MAP_BOLD) != 0 ||
write_mappings(fp, 3) != 0) {
perror(filename);
fclose(fp);
return (1);
}
fclose(fp);
return (0);
}
static void
print_font_info(void)
{
printf(
"Statistics:\n"
"- glyph_total: %6u\n"
"- glyph_normal: %6u\n"
"- glyph_normal_right: %6u\n"
"- glyph_bold: %6u\n"
"- glyph_bold_right: %6u\n"
"- glyph_unique: %6u\n"
"- glyph_dupe: %6u\n"
"- mapping_total: %6u\n"
"- mapping_normal: %6u\n"
"- mapping_normal_folded: %6u\n"
"- mapping_normal_right: %6u\n"
"- mapping_normal_right_folded: %6u\n"
"- mapping_bold: %6u\n"
"- mapping_bold_folded: %6u\n"
"- mapping_bold_right: %6u\n"
"- mapping_bold_right_folded: %6u\n"
"- mapping_unique: %6u\n"
"- mapping_dupe: %6u\n",
glyph_total,
glyph_count[0],
glyph_count[1],
glyph_count[2],
glyph_count[3],
glyph_unique, glyph_dupe,
mapping_total,
map_count[0], map_folded_count[0],
map_count[1], map_folded_count[1],
map_count[2], map_folded_count[2],
map_count[3], map_folded_count[3],
mapping_unique, mapping_dupe);
}
int
main(int argc, char *argv[])
{
int ch, val, verbose = 0;
assert(sizeof(struct file_header) == 32);
assert(sizeof(struct file_mapping) == 8);
while ((ch = getopt(argc, argv, "h:vw:")) != -1) {
switch (ch) {
case 'h':
val = atoi(optarg);
if (val <= 0 || val > 128)
errx(1, "Invalid height %d", val);
height = val;
break;
case 'v':
verbose = 1;
break;
case 'w':
set_width(atoi(optarg));
break;
case '?':
default:
usage();
}
}
argc -= optind;
argv += optind;
if (argc < 2 || argc > 3)
usage();
wbytes = howmany(width, 8);
if (parse_file(argv[0], VFNT_MAP_NORMAL) != 0)
return (1);
argc--;
argv++;
if (argc == 2) {
if (parse_file(argv[0], VFNT_MAP_BOLD) != 0)
return (1);
argc--;
argv++;
}
number_glyphs();
dedup_mapping(VFNT_MAP_BOLD);
dedup_mapping(VFNT_MAP_BOLD_RH);
fold_mappings(0);
fold_mappings(1);
fold_mappings(2);
fold_mappings(3);
if (write_fnt(argv[0]) != 0)
return (1);
if (verbose)
print_font_info();
return (0);
}
Index: projects/clang800-import/usr.sbin/bluetooth/sdpd/ssar.c
===================================================================
--- projects/clang800-import/usr.sbin/bluetooth/sdpd/ssar.c (revision 343955)
+++ projects/clang800-import/usr.sbin/bluetooth/sdpd/ssar.c (revision 343956)
@@ -1,255 +1,381 @@
/*-
* ssar.c
*
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
* Copyright (c) 2004 Maksim Yevmenkin <m_evmenkin@yahoo.com>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $Id: ssar.c,v 1.4 2004/01/12 22:54:31 max Exp $
* $FreeBSD$
*/
#include <sys/queue.h>
#define L2CAP_SOCKET_CHECKED
#include <bluetooth.h>
#include <sdp.h>
#include <string.h>
#include "profile.h"
#include "provider.h"
#include "server.h"
#include "uuid-private.h"
/* from sar.c */
int32_t server_prepare_attr_list(provider_p const provider,
uint8_t const *req, uint8_t const * const req_end,
uint8_t *rsp, uint8_t const * const rsp_end);
/*
+ * Scan an attribute for matching UUID.
+ */
+static int
+server_search_uuid_sub(uint8_t *buf, uint8_t const * const eob, const uint128_t *uuid)
+{
+ int128_t duuid;
+ uint32_t value;
+ uint8_t type;
+
+ while (buf < eob) {
+
+ SDP_GET8(type, buf);
+
+ switch (type) {
+ case SDP_DATA_UUID16:
+ if (buf + 2 > eob)
+ continue;
+ SDP_GET16(value, buf);
+
+ memcpy(&duuid, &uuid_base, sizeof(duuid));
+ duuid.b[2] = value >> 8 & 0xff;
+ duuid.b[3] = value & 0xff;
+
+ if (memcmp(&duuid, uuid, sizeof(duuid)) == 0)
+ return (0);
+ break;
+ case SDP_DATA_UUID32:
+ if (buf + 4 > eob)
+ continue;
+ SDP_GET32(value, buf);
+ memcpy(&duuid, &uuid_base, sizeof(duuid));
+ duuid.b[0] = value >> 24 & 0xff;
+ duuid.b[1] = value >> 16 & 0xff;
+ duuid.b[2] = value >> 8 & 0xff;
+ duuid.b[3] = value & 0xff;
+
+ if (memcmp(&duuid, uuid, sizeof(duuid)) == 0)
+ return (0);
+ break;
+ case SDP_DATA_UUID128:
+ if (buf + 16 > eob)
+ continue;
+ SDP_GET_UUID128(&duuid, buf);
+
+ if (memcmp(&duuid, uuid, sizeof(duuid)) == 0)
+ return (0);
+ break;
+ case SDP_DATA_UINT8:
+ case SDP_DATA_INT8:
+ case SDP_DATA_SEQ8:
+ buf++;
+ break;
+ case SDP_DATA_UINT16:
+ case SDP_DATA_INT16:
+ case SDP_DATA_SEQ16:
+ buf += 2;
+ break;
+ case SDP_DATA_UINT32:
+ case SDP_DATA_INT32:
+ case SDP_DATA_SEQ32:
+ buf += 4;
+ break;
+ case SDP_DATA_UINT64:
+ case SDP_DATA_INT64:
+ buf += 8;
+ break;
+ case SDP_DATA_UINT128:
+ case SDP_DATA_INT128:
+ buf += 16;
+ break;
+ case SDP_DATA_STR8:
+ if (buf + 1 > eob)
+ continue;
+ SDP_GET8(value, buf);
+ buf += value;
+ break;
+ case SDP_DATA_STR16:
+ if (buf + 2 > eob)
+ continue;
+ SDP_GET16(value, buf);
+ if (value > (eob - buf))
+ return (1);
+ buf += value;
+ break;
+ case SDP_DATA_STR32:
+ if (buf + 4 > eob)
+ continue;
+ SDP_GET32(value, buf);
+ if (value > (eob - buf))
+ return (1);
+ buf += value;
+ break;
+ case SDP_DATA_BOOL:
+ buf += 1;
+ break;
+ default:
+ return (1);
+ }
+ }
+ return (1);
+}
+
+/*
+ * Search a provider for matching UUID in its attributes.
+ */
+static int
+server_search_uuid(provider_p const provider, const uint128_t *uuid)
+{
+ uint8_t buffer[256];
+ const attr_t *attr;
+ int len;
+
+ for (attr = provider->profile->attrs; attr->create != NULL; attr++) {
+
+ len = attr->create(buffer, buffer + sizeof(buffer),
+ (const uint8_t *)provider->profile, sizeof(*provider->profile));
+ if (len < 0)
+ continue;
+ if (server_search_uuid_sub(buffer, buffer + len, uuid) == 0)
+ return (0);
+ }
+ return (1);
+}
+
+/*
* Prepare SDP Service Search Attribute Response
*/
int32_t
server_prepare_service_search_attribute_response(server_p srv, int32_t fd)
{
uint8_t const *req = srv->req + sizeof(sdp_pdu_t);
uint8_t const *req_end = req + ((sdp_pdu_p)(srv->req))->len;
uint8_t *rsp = srv->fdidx[fd].rsp;
uint8_t const *rsp_end = rsp + NG_L2CAP_MTU_MAXIMUM;
uint8_t const *sspptr = NULL, *aidptr = NULL;
uint8_t *ptr = NULL;
provider_t *provider = NULL;
int32_t type, rsp_limit, ssplen, aidlen, cslen, cs;
uint128_t uuid, puuid;
/*
* Minimal Service Search Attribute Request request
*
* seq8 len8 - 2 bytes
* uuid16 value16 - 3 bytes ServiceSearchPattern
* value16 - 2 bytes MaximumAttributeByteCount
* seq8 len8 - 2 bytes
* uint16 value16 - 3 bytes AttributeIDList
* value8 - 1 byte ContinuationState
*/
if (req_end - req < 13)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
/* Get size of ServiceSearchPattern */
ssplen = 0;
SDP_GET8(type, req);
switch (type) {
case SDP_DATA_SEQ8:
SDP_GET8(ssplen, req);
break;
case SDP_DATA_SEQ16:
SDP_GET16(ssplen, req);
break;
case SDP_DATA_SEQ32:
SDP_GET32(ssplen, req);
break;
}
if (ssplen <= 0)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
sspptr = req;
req += ssplen;
/* Get MaximumAttributeByteCount */
if (req + 2 > req_end)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
SDP_GET16(rsp_limit, req);
if (rsp_limit <= 0)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
/* Get size of AttributeIDList */
if (req + 1 > req_end)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
aidlen = 0;
SDP_GET8(type, req);
switch (type) {
case SDP_DATA_SEQ8:
if (req + 1 > req_end)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
SDP_GET8(aidlen, req);
break;
case SDP_DATA_SEQ16:
if (req + 2 > req_end)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
SDP_GET16(aidlen, req);
break;
case SDP_DATA_SEQ32:
if (req + 4 > req_end)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
SDP_GET32(aidlen, req);
break;
}
if (aidlen <= 0)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
aidptr = req;
req += aidlen;
/* Get ContinuationState */
if (req + 1 > req_end)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
SDP_GET8(cslen, req);
if (cslen != 0) {
if (cslen != 2 || req_end - req != 2)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
SDP_GET16(cs, req);
} else
cs = 0;
/* Process the request. First, check continuation state */
if (srv->fdidx[fd].rsp_cs != cs)
return (SDP_ERROR_CODE_INVALID_CONTINUATION_STATE);
if (srv->fdidx[fd].rsp_size > 0)
return (0);
/*
* Service Search Attribute Response format
*
* value16 - 2 bytes AttributeListByteCount (not incl.)
* seq8 len16 - 3 bytes
* attr list - 3+ bytes AttributeLists
* [ attr list ]
*/
ptr = rsp + 3;
while (ssplen > 0) {
SDP_GET8(type, sspptr);
ssplen --;
switch (type) {
case SDP_DATA_UUID16:
if (ssplen < 2)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
memcpy(&uuid, &uuid_base, sizeof(uuid));
uuid.b[2] = *sspptr ++;
uuid.b[3] = *sspptr ++;
ssplen -= 2;
break;
case SDP_DATA_UUID32:
if (ssplen < 4)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
memcpy(&uuid, &uuid_base, sizeof(uuid));
uuid.b[0] = *sspptr ++;
uuid.b[1] = *sspptr ++;
uuid.b[2] = *sspptr ++;
uuid.b[3] = *sspptr ++;
ssplen -= 4;
break;
case SDP_DATA_UUID128:
if (ssplen < 16)
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
memcpy(uuid.b, sspptr, 16);
sspptr += 16;
ssplen -= 16;
break;
default:
return (SDP_ERROR_CODE_INVALID_REQUEST_SYNTAX);
/* NOT REACHED */
}
for (provider = provider_get_first();
provider != NULL;
provider = provider_get_next(provider)) {
if (!provider_match_bdaddr(provider, &srv->req_sa.l2cap_bdaddr))
continue;
memcpy(&puuid, &uuid_base, sizeof(puuid));
puuid.b[2] = provider->profile->uuid >> 8;
puuid.b[3] = provider->profile->uuid;
if (memcmp(&uuid, &puuid, sizeof(uuid)) != 0 &&
- memcmp(&uuid, &uuid_public_browse_group, sizeof(uuid)) != 0)
+ memcmp(&uuid, &uuid_public_browse_group, sizeof(uuid)) != 0 &&
+ server_search_uuid(provider, &uuid) != 0)
continue;
cs = server_prepare_attr_list(provider,
aidptr, aidptr + aidlen, ptr, rsp_end);
if (cs < 0)
return (SDP_ERROR_CODE_INSUFFICIENT_RESOURCES);
ptr += cs;
}
}
/* Set reply size (not counting PDU header and continuation state) */
srv->fdidx[fd].rsp_limit = srv->fdidx[fd].omtu - sizeof(sdp_pdu_t) - 2;
if (srv->fdidx[fd].rsp_limit > rsp_limit)
srv->fdidx[fd].rsp_limit = rsp_limit;
srv->fdidx[fd].rsp_size = ptr - rsp;
srv->fdidx[fd].rsp_cs = 0;
/* Fix AttributeLists sequence header */
ptr = rsp;
SDP_PUT8(SDP_DATA_SEQ16, ptr);
SDP_PUT16(srv->fdidx[fd].rsp_size - 3, ptr);
return (0);
}
Index: projects/clang800-import/usr.sbin/dumpcis/main.c
===================================================================
--- projects/clang800-import/usr.sbin/dumpcis/main.c (revision 343955)
+++ projects/clang800-import/usr.sbin/dumpcis/main.c (revision 343956)
@@ -1,60 +1,60 @@
/*-
* SPDX-License-Identifier: BSD-2-Clause-FreeBSD
*
- * Copyright (c) 2006 M. Warner Losh. All rights reserved.
+ * Copyright (c) 2006 M. Warner Losh.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include "readcis.h"
static void
scanfile(char *name)
{
int fd;
struct tuple_list *tl;
fd = open(name, O_RDONLY);
if (fd < 0)
return;
tl = readcis(fd);
if (tl) {
printf("Configuration data for file %s\n",
name);
dumpcis(tl);
freecis(tl);
}
close(fd);
}
int
main(int argc, char **argv)
{
for (argc--, argv++; argc; argc--, argv++)
scanfile(*argv);
return 0;
}
Index: projects/clang800-import/usr.sbin/newsyslog/newsyslog.c
===================================================================
--- projects/clang800-import/usr.sbin/newsyslog/newsyslog.c (revision 343955)
+++ projects/clang800-import/usr.sbin/newsyslog/newsyslog.c (revision 343956)
@@ -1,2780 +1,2780 @@
/*-
* ------+---------+---------+-------- + --------+---------+---------+---------*
* This file includes significant modifications done by:
* Copyright (c) 2003, 2004 - Garance Alistair Drosehn <gad@FreeBSD.org>.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* ------+---------+---------+-------- + --------+---------+---------+---------*
*/
/*
* This file contains changes from the Open Software Foundation.
*/
/*
* Copyright 1988, 1989 by the Massachusetts Institute of Technology
*
* Permission to use, copy, modify, and distribute this software and its
* documentation for any purpose and without fee is hereby granted, provided
* that the above copyright notice appear in all copies and that both that
* copyright notice and this permission notice appear in supporting
* documentation, and that the names of M.I.T. and the M.I.T. S.I.P.B. not be
* used in advertising or publicity pertaining to distribution of the
* software without specific, written prior permission. M.I.T. and the M.I.T.
* S.I.P.B. make no representations about the suitability of this software
* for any purpose. It is provided "as is" without express or implied
* warranty.
*
*/
/*
* newsyslog - roll over selected logs at the appropriate time, keeping the a
* specified number of backup files around.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#define OSF
#include <sys/param.h>
#include <sys/queue.h>
#include <sys/sbuf.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <assert.h>
#include <ctype.h>
#include <err.h>
#include <errno.h>
#include <dirent.h>
#include <fcntl.h>
#include <fnmatch.h>
#include <glob.h>
#include <grp.h>
#include <paths.h>
#include <pwd.h>
#include <signal.h>
#include <stdio.h>
#include <libgen.h>
#include <stdlib.h>
#include <string.h>
#include <syslog.h>
#include <time.h>
#include <unistd.h>
#include "pathnames.h"
#include "extern.h"
/*
* Compression types
*/
#define COMPRESS_TYPES 5 /* Number of supported compression types */
#define COMPRESS_NONE 0
#define COMPRESS_GZIP 1
#define COMPRESS_BZIP2 2
#define COMPRESS_XZ 3
#define COMPRESS_ZSTD 4
/*
* Bit-values for the 'flags' parsed from a config-file entry.
*/
#define CE_BINARY 0x0008 /* Logfile is in binary, do not add status */
/* messages to logfile(s) when rotating. */
#define CE_NOSIGNAL 0x0010 /* There is no process to signal when */
/* trimming this file. */
#define CE_TRIMAT 0x0020 /* trim file at a specific time. */
#define CE_GLOB 0x0040 /* name of the log is file name pattern. */
#define CE_SIGNALGROUP 0x0080 /* Signal a process-group instead of a single */
/* process when trimming this file. */
#define CE_CREATE 0x0100 /* Create the log file if it does not exist. */
#define CE_NODUMP 0x0200 /* Set 'nodump' on newly created log file. */
#define CE_PID2CMD 0x0400 /* Replace PID file with a shell command.*/
#define CE_PLAIN0 0x0800 /* Do not compress zero'th history file */
#define CE_RFC5424 0x1000 /* Use RFC5424 format rotation message */
#define MIN_PID 5 /* Don't touch pids lower than this */
#define MAX_PID 99999 /* was lower, see /usr/include/sys/proc.h */
#define kbytes(size) (((size) + 1023) >> 10)
#define DEFAULT_MARKER "<default>"
#define DEBUG_MARKER "<debug>"
#define INCLUDE_MARKER "<include>"
#define DEFAULT_TIMEFNAME_FMT "%Y%m%dT%H%M%S"
#define MAX_OLDLOGS 65536 /* Default maximum number of old logfiles */
struct compress_types {
const char *flag; /* Flag in configuration file */
const char *suffix; /* Compression suffix */
const char *path; /* Path to compression program */
const char **flags; /* Compression program flags */
int nflags; /* Program flags count */
};
static const char *gzip_flags[] = { "-f" };
#define bzip2_flags gzip_flags
#define xz_flags gzip_flags
static const char *zstd_flags[] = { "-q", "--rm" };
static const struct compress_types compress_type[COMPRESS_TYPES] = {
{ "", "", "", NULL, 0 },
{ "Z", ".gz", _PATH_GZIP, gzip_flags, nitems(gzip_flags) },
{ "J", ".bz2", _PATH_BZIP2, bzip2_flags, nitems(bzip2_flags) },
{ "X", ".xz", _PATH_XZ, xz_flags, nitems(xz_flags) },
{ "Y", ".zst", _PATH_ZSTD, zstd_flags, nitems(zstd_flags) }
};
struct conf_entry {
STAILQ_ENTRY(conf_entry) cf_nextp;
char *log; /* Name of the log */
char *pid_cmd_file; /* PID or command file */
char *r_reason; /* The reason this file is being rotated */
int firstcreate; /* Creating log for the first time (-C). */
int rotate; /* Non-zero if this file should be rotated */
int fsize; /* size found for the log file */
uid_t uid; /* Owner of log */
gid_t gid; /* Group of log */
int numlogs; /* Number of logs to keep */
int trsize; /* Size cutoff to trigger trimming the log */
int hours; /* Hours between log trimming */
struct ptime_data *trim_at; /* Specific time to do trimming */
unsigned int permissions; /* File permissions on the log */
int flags; /* CE_BINARY */
int compress; /* Compression */
int sig; /* Signal to send */
int def_cfg; /* Using the <default> rule for this file */
};
struct sigwork_entry {
SLIST_ENTRY(sigwork_entry) sw_nextp;
int sw_signum; /* the signal to send */
int sw_pidok; /* true if pid value is valid */
pid_t sw_pid; /* the process id from the PID file */
const char *sw_pidtype; /* "daemon" or "process group" */
int sw_runcmd; /* run command or send PID to signal */
char sw_fname[1]; /* file the PID was read from or shell cmd */
};
struct zipwork_entry {
SLIST_ENTRY(zipwork_entry) zw_nextp;
const struct conf_entry *zw_conf; /* for chown/perm/flag info */
const struct sigwork_entry *zw_swork; /* to know success of signal */
int zw_fsize; /* size of the file to compress */
char zw_fname[1]; /* the file to compress */
};
struct include_entry {
STAILQ_ENTRY(include_entry) inc_nextp;
const char *file; /* Name of file to process */
};
struct oldlog_entry {
char *fname; /* Filename of the log file */
time_t t; /* Parsed timestamp of the logfile */
};
typedef enum {
FREE_ENT, KEEP_ENT
} fk_entry;
STAILQ_HEAD(cflist, conf_entry);
static SLIST_HEAD(swlisthead, sigwork_entry) swhead =
SLIST_HEAD_INITIALIZER(swhead);
static SLIST_HEAD(zwlisthead, zipwork_entry) zwhead =
SLIST_HEAD_INITIALIZER(zwhead);
STAILQ_HEAD(ilist, include_entry);
int dbg_at_times; /* -D Show details of 'trim_at' code */
static int archtodir = 0; /* Archive old logfiles to other directory */
static int createlogs; /* Create (non-GLOB) logfiles which do not */
/* already exist. 1=='for entries with */
/* C flag', 2=='for all entries'. */
int verbose = 0; /* Print out what's going on */
static int needroot = 1; /* Root privs are necessary */
int noaction = 0; /* Don't do anything, just show it */
static int norotate = 0; /* Don't rotate */
static int nosignal; /* Do not send any signals */
static int enforcepid = 0; /* If PID file does not exist or empty, do nothing */
static int force = 0; /* Force the trim no matter what */
static int rotatereq = 0; /* -R = Always rotate the file(s) as given */
/* on the command (this also requires */
/* that a list of files *are* given on */
/* the run command). */
static char *requestor; /* The name given on a -R request */
static char *timefnamefmt = NULL;/* Use time based filenames instead of .0 */
static char *archdirname; /* Directory path to old logfiles archive */
static char *destdir = NULL; /* Directory to treat as root for logs */
static const char *conf; /* Configuration file to use */
struct ptime_data *dbg_timenow; /* A "timenow" value set via -D option */
static struct ptime_data *timenow; /* The time to use for checking at-fields */
#define DAYTIME_LEN 16
static char daytime[DAYTIME_LEN];/* The current time in human readable form,
* used for rotation-tracking messages. */
/* Another buffer to hold the current time in RFC5424 format. Fractional
* seconds are allowed by the RFC, but are not included in the
* rotation-tracking messages written by newsyslog and so are not accounted for
* in the length below.
*/
#define DAYTIME_RFC5424_LEN sizeof("YYYY-MM-DDTHH:MM:SS+00:00")
static char daytime_rfc5424[DAYTIME_RFC5424_LEN];
static char hostname[MAXHOSTNAMELEN]; /* hostname */
static size_t hostname_shortlen;
static const char *path_syslogpid = _PATH_SYSLOGPID;
static struct cflist *get_worklist(char **files);
static void parse_file(FILE *cf, struct cflist *work_p, struct cflist *glob_p,
- struct conf_entry *defconf_p, struct ilist *inclist);
+ struct conf_entry **defconf, struct ilist *inclist);
static void add_to_queue(const char *fname, struct ilist *inclist);
static char *sob(char *p);
static char *son(char *p);
static int isnumberstr(const char *);
static int isglobstr(const char *);
static char *missing_field(char *p, char *errline);
static void change_attrs(const char *, const struct conf_entry *);
static const char *get_logfile_suffix(const char *logfile);
static fk_entry do_entry(struct conf_entry *);
static fk_entry do_rotate(const struct conf_entry *);
static void do_sigwork(struct sigwork_entry *);
static void do_zipwork(struct zipwork_entry *);
static struct sigwork_entry *
save_sigwork(const struct conf_entry *);
static struct zipwork_entry *
save_zipwork(const struct conf_entry *, const struct
sigwork_entry *, int, const char *);
static void set_swpid(struct sigwork_entry *, const struct conf_entry *);
static int sizefile(const char *);
static void expand_globs(struct cflist *work_p, struct cflist *glob_p);
static void free_clist(struct cflist *list);
static void free_entry(struct conf_entry *ent);
static struct conf_entry *init_entry(const char *fname,
struct conf_entry *src_entry);
static void parse_args(int argc, char **argv);
static int parse_doption(const char *doption);
static void usage(void);
static int log_trim(const char *logname, const struct conf_entry *log_ent);
static int age_old_log(const char *file);
static void savelog(char *from, char *to);
static void createdir(const struct conf_entry *ent, char *dirpart);
static void createlog(const struct conf_entry *ent);
static int parse_signal(const char *str);
/*
* All the following take a parameter of 'int', but expect values in the
* range of unsigned char. Define wrappers which take values of type 'char',
* whether signed or unsigned, and ensure they end up in the right range.
*/
#define isdigitch(Anychar) isdigit((u_char)(Anychar))
#define isprintch(Anychar) isprint((u_char)(Anychar))
#define isspacech(Anychar) isspace((u_char)(Anychar))
#define tolowerch(Anychar) tolower((u_char)(Anychar))
int
main(int argc, char **argv)
{
struct cflist *worklist;
struct conf_entry *p;
struct sigwork_entry *stmp;
struct zipwork_entry *ztmp;
SLIST_INIT(&swhead);
SLIST_INIT(&zwhead);
parse_args(argc, argv);
argc -= optind;
argv += optind;
if (needroot && getuid() && geteuid())
errx(1, "must have root privs");
worklist = get_worklist(argv);
/*
* Rotate all the files which need to be rotated. Note that
* some users have *hundreds* of entries in newsyslog.conf!
*/
while (!STAILQ_EMPTY(worklist)) {
p = STAILQ_FIRST(worklist);
STAILQ_REMOVE_HEAD(worklist, cf_nextp);
if (do_entry(p) == FREE_ENT)
free_entry(p);
}
/*
* Send signals to any processes which need a signal to tell
* them to close and re-open the log file(s) we have rotated.
* Note that zipwork_entries include pointers to these
* sigwork_entry's, so we can not free the entries here.
*/
if (!SLIST_EMPTY(&swhead)) {
if (noaction || verbose)
printf("Signal all daemon process(es)...\n");
SLIST_FOREACH(stmp, &swhead, sw_nextp)
do_sigwork(stmp);
if (!(rotatereq && nosignal)) {
if (noaction)
printf("\tsleep 10\n");
else {
if (verbose)
printf("Pause 10 seconds to allow "
"daemon(s) to close log file(s)\n");
sleep(10);
}
}
}
/*
* Compress all files that we're expected to compress, now
* that all processes should have closed the files which
* have been rotated.
*/
if (!SLIST_EMPTY(&zwhead)) {
if (noaction || verbose)
printf("Compress all rotated log file(s)...\n");
while (!SLIST_EMPTY(&zwhead)) {
ztmp = SLIST_FIRST(&zwhead);
do_zipwork(ztmp);
SLIST_REMOVE_HEAD(&zwhead, zw_nextp);
free(ztmp);
}
}
/* Now free all the sigwork entries. */
while (!SLIST_EMPTY(&swhead)) {
stmp = SLIST_FIRST(&swhead);
SLIST_REMOVE_HEAD(&swhead, sw_nextp);
free(stmp);
}
while (wait(NULL) > 0 || errno == EINTR)
;
+ free(timefnamefmt);
+ free(requestor);
return (0);
}
static struct conf_entry *
init_entry(const char *fname, struct conf_entry *src_entry)
{
struct conf_entry *tempwork;
if (verbose > 4)
printf("\t--> [creating entry for %s]\n", fname);
tempwork = malloc(sizeof(struct conf_entry));
if (tempwork == NULL)
err(1, "malloc of conf_entry for %s", fname);
if (destdir == NULL || fname[0] != '/')
tempwork->log = strdup(fname);
else
asprintf(&tempwork->log, "%s%s", destdir, fname);
if (tempwork->log == NULL)
err(1, "strdup for %s", fname);
if (src_entry != NULL) {
tempwork->pid_cmd_file = NULL;
if (src_entry->pid_cmd_file)
tempwork->pid_cmd_file = strdup(src_entry->pid_cmd_file);
tempwork->r_reason = NULL;
tempwork->firstcreate = 0;
tempwork->rotate = 0;
tempwork->fsize = -1;
tempwork->uid = src_entry->uid;
tempwork->gid = src_entry->gid;
tempwork->numlogs = src_entry->numlogs;
tempwork->trsize = src_entry->trsize;
tempwork->hours = src_entry->hours;
tempwork->trim_at = NULL;
if (src_entry->trim_at != NULL)
tempwork->trim_at = ptime_init(src_entry->trim_at);
tempwork->permissions = src_entry->permissions;
tempwork->flags = src_entry->flags;
tempwork->compress = src_entry->compress;
tempwork->sig = src_entry->sig;
tempwork->def_cfg = src_entry->def_cfg;
} else {
/* Initialize as a "do-nothing" entry */
tempwork->pid_cmd_file = NULL;
tempwork->r_reason = NULL;
tempwork->firstcreate = 0;
tempwork->rotate = 0;
tempwork->fsize = -1;
tempwork->uid = (uid_t)-1;
tempwork->gid = (gid_t)-1;
tempwork->numlogs = 1;
tempwork->trsize = -1;
tempwork->hours = -1;
tempwork->trim_at = NULL;
tempwork->permissions = 0;
tempwork->flags = 0;
tempwork->compress = COMPRESS_NONE;
tempwork->sig = SIGHUP;
tempwork->def_cfg = 0;
}
return (tempwork);
}
static void
free_entry(struct conf_entry *ent)
{
if (ent == NULL)
return;
if (ent->log != NULL) {
if (verbose > 4)
printf("\t--> [freeing entry for %s]\n", ent->log);
free(ent->log);
ent->log = NULL;
}
if (ent->pid_cmd_file != NULL) {
free(ent->pid_cmd_file);
ent->pid_cmd_file = NULL;
}
if (ent->r_reason != NULL) {
free(ent->r_reason);
ent->r_reason = NULL;
}
if (ent->trim_at != NULL) {
ptime_free(ent->trim_at);
ent->trim_at = NULL;
}
free(ent);
}
static void
free_clist(struct cflist *list)
{
struct conf_entry *ent;
while (!STAILQ_EMPTY(list)) {
ent = STAILQ_FIRST(list);
STAILQ_REMOVE_HEAD(list, cf_nextp);
free_entry(ent);
}
free(list);
list = NULL;
}
static fk_entry
do_entry(struct conf_entry * ent)
{
#define REASON_MAX 80
int modtime;
fk_entry free_or_keep;
double diffsecs;
char temp_reason[REASON_MAX];
int oversized;
free_or_keep = FREE_ENT;
if (verbose)
printf("%s <%d%s>: ", ent->log, ent->numlogs,
compress_type[ent->compress].flag);
ent->fsize = sizefile(ent->log);
oversized = ((ent->trsize > 0) && (ent->fsize >= ent->trsize));
modtime = age_old_log(ent->log);
ent->rotate = 0;
ent->firstcreate = 0;
if (ent->fsize < 0) {
/*
* If either the C flag or the -C option was specified,
* and if we won't be creating the file, then have the
* verbose message include a hint as to why the file
* will not be created.
*/
temp_reason[0] = '\0';
if (createlogs > 1)
ent->firstcreate = 1;
else if ((ent->flags & CE_CREATE) && createlogs)
ent->firstcreate = 1;
else if (ent->flags & CE_CREATE)
strlcpy(temp_reason, " (no -C option)", REASON_MAX);
else if (createlogs)
strlcpy(temp_reason, " (no C flag)", REASON_MAX);
if (ent->firstcreate) {
if (verbose)
printf("does not exist -> will create.\n");
createlog(ent);
} else if (verbose) {
printf("does not exist, skipped%s.\n", temp_reason);
}
} else {
if (ent->flags & CE_TRIMAT && !force && !rotatereq &&
!oversized) {
diffsecs = ptimeget_diff(timenow, ent->trim_at);
if (diffsecs < 0.0) {
/* trim_at is some time in the future. */
if (verbose) {
ptime_adjust4dst(ent->trim_at,
timenow);
printf("--> will trim at %s",
ptimeget_ctime(ent->trim_at));
}
return (free_or_keep);
} else if (diffsecs >= 3600.0) {
/*
* trim_at is more than an hour in the past,
* so find the next valid trim_at time, and
* tell the user what that will be.
*/
if (verbose && dbg_at_times)
printf("\n\t--> prev trim at %s\t",
ptimeget_ctime(ent->trim_at));
if (verbose) {
ptimeset_nxtime(ent->trim_at);
printf("--> will trim at %s",
ptimeget_ctime(ent->trim_at));
}
return (free_or_keep);
} else if (verbose && noaction && dbg_at_times) {
/*
* If we are just debugging at-times, then
* a detailed message is helpful. Also
* skip "doing" any commands, since they
* would all be turned off by no-action.
*/
printf("\n\t--> timematch at %s",
ptimeget_ctime(ent->trim_at));
return (free_or_keep);
} else if (verbose && ent->hours <= 0) {
printf("--> time is up\n");
}
}
if (verbose && (ent->trsize > 0))
printf("size (Kb): %d [%d] ", ent->fsize, ent->trsize);
if (verbose && (ent->hours > 0))
printf(" age (hr): %d [%d] ", modtime, ent->hours);
/*
* Figure out if this logfile needs to be rotated.
*/
temp_reason[0] = '\0';
if (rotatereq) {
ent->rotate = 1;
snprintf(temp_reason, REASON_MAX, " due to -R from %s",
requestor);
} else if (force) {
ent->rotate = 1;
snprintf(temp_reason, REASON_MAX, " due to -F request");
} else if (oversized) {
ent->rotate = 1;
snprintf(temp_reason, REASON_MAX, " due to size>%dK",
ent->trsize);
} else if (ent->hours <= 0 && (ent->flags & CE_TRIMAT)) {
ent->rotate = 1;
} else if ((ent->hours > 0) && ((modtime >= ent->hours) ||
(modtime < 0))) {
ent->rotate = 1;
}
/*
* If the file needs to be rotated, then rotate it.
*/
if (ent->rotate && !norotate) {
if (temp_reason[0] != '\0')
ent->r_reason = strdup(temp_reason);
if (verbose)
printf("--> trimming log....\n");
if (noaction && !verbose)
printf("%s <%d%s>: trimming\n", ent->log,
ent->numlogs,
compress_type[ent->compress].flag);
free_or_keep = do_rotate(ent);
} else {
if (verbose)
printf("--> skipping\n");
}
}
return (free_or_keep);
#undef REASON_MAX
}
static void
parse_args(int argc, char **argv)
{
int ch;
char *p;
timenow = ptime_init(NULL);
ptimeset_time(timenow, time(NULL));
strlcpy(daytime, ptimeget_ctime(timenow) + 4, DAYTIME_LEN);
ptimeget_ctime_rfc5424(timenow, daytime_rfc5424, DAYTIME_RFC5424_LEN);
/* Let's get our hostname */
(void)gethostname(hostname, sizeof(hostname));
hostname_shortlen = strcspn(hostname, ".");
/* Parse command line options. */
while ((ch = getopt(argc, argv, "a:d:f:nrst:vCD:FNPR:S:")) != -1)
switch (ch) {
case 'a':
archtodir++;
archdirname = optarg;
break;
case 'd':
destdir = optarg;
break;
case 'f':
conf = optarg;
break;
case 'n':
noaction++;
/* FALLTHROUGH */
case 'r':
needroot = 0;
break;
case 's':
nosignal = 1;
break;
case 't':
if (optarg[0] == '\0' ||
strcmp(optarg, "DEFAULT") == 0)
timefnamefmt = strdup(DEFAULT_TIMEFNAME_FMT);
else
timefnamefmt = strdup(optarg);
break;
case 'v':
verbose++;
break;
case 'C':
/* Useful for things like rc.diskless... */
createlogs++;
break;
case 'D':
/*
* Set some debugging option. The specific option
* depends on the value of optarg. These options
* may come and go without notice or documentation.
*/
if (parse_doption(optarg))
break;
usage();
/* NOTREACHED */
case 'F':
force++;
break;
case 'N':
norotate++;
break;
case 'P':
enforcepid++;
break;
case 'R':
rotatereq++;
requestor = strdup(optarg);
break;
case 'S':
path_syslogpid = optarg;
break;
case 'm': /* Used by OpenBSD for "monitor mode" */
default:
usage();
/* NOTREACHED */
}
if (force && norotate) {
warnx("Only one of -F and -N may be specified.");
usage();
/* NOTREACHED */
}
if (rotatereq) {
if (optind == argc) {
warnx("At least one filename must be given when -R is specified.");
usage();
/* NOTREACHED */
}
/* Make sure "requestor" value is safe for a syslog message. */
for (p = requestor; *p != '\0'; p++) {
if (!isprintch(*p) && (*p != '\t'))
*p = '.';
}
}
if (dbg_timenow) {
/*
* Note that the 'daytime' variable is not changed.
* That is only used in messages that track when a
* logfile is rotated, and if a file *is* rotated,
* then it will still be rotated at the "real now" time.
*/
ptime_free(timenow);
timenow = dbg_timenow;
fprintf(stderr, "Debug: Running as if TimeNow is %s",
ptimeget_ctime(dbg_timenow));
}
}
/*
* These debugging options are mainly meant for developer use, such
* as writing regression-tests. They would not be needed by users
* during normal operation of newsyslog...
*/
static int
parse_doption(const char *doption)
{
const char TN[] = "TN=";
int res;
if (strncmp(doption, TN, sizeof(TN) - 1) == 0) {
/*
* The "TimeNow" debugging option. This might be off
* by an hour when crossing a timezone change.
*/
dbg_timenow = ptime_init(NULL);
res = ptime_relparse(dbg_timenow, PTM_PARSE_ISO8601,
time(NULL), doption + sizeof(TN) - 1);
if (res == -2) {
warnx("Non-existent time specified on -D %s", doption);
return (0); /* failure */
} else if (res < 0) {
warnx("Malformed time given on -D %s", doption);
return (0); /* failure */
}
return (1); /* successfully parsed */
}
if (strcmp(doption, "ats") == 0) {
dbg_at_times++;
return (1); /* successfully parsed */
}
/* XXX - This check could probably be dropped. */
if ((strcmp(doption, "neworder") == 0) || (strcmp(doption, "oldorder")
== 0)) {
warnx("NOTE: newsyslog always uses 'neworder'.");
return (1); /* successfully parsed */
}
warnx("Unknown -D (debug) option: '%s'", doption);
return (0); /* failure */
}
static void
usage(void)
{
fprintf(stderr,
"usage: newsyslog [-CFNPnrsv] [-a directory] [-d directory] [-f config_file]\n"
" [-S pidfile] [-t timefmt] [[-R tagname] file ...]\n");
exit(1);
}
/*
* Parse a configuration file and return a linked list of all the logs
* which should be processed.
*/
static struct cflist *
get_worklist(char **files)
{
FILE *f;
char **given;
struct cflist *cmdlist, *filelist, *globlist;
struct conf_entry *defconf, *dupent, *ent;
struct ilist inclist;
struct include_entry *inc;
int gmatch, fnres;
defconf = NULL;
STAILQ_INIT(&inclist);
filelist = malloc(sizeof(struct cflist));
if (filelist == NULL)
err(1, "malloc of filelist");
STAILQ_INIT(filelist);
globlist = malloc(sizeof(struct cflist));
if (globlist == NULL)
err(1, "malloc of globlist");
STAILQ_INIT(globlist);
inc = malloc(sizeof(struct include_entry));
if (inc == NULL)
err(1, "malloc of inc");
inc->file = conf;
if (inc->file == NULL)
inc->file = _PATH_CONF;
STAILQ_INSERT_TAIL(&inclist, inc, inc_nextp);
STAILQ_FOREACH(inc, &inclist, inc_nextp) {
if (strcmp(inc->file, "-") != 0)
f = fopen(inc->file, "r");
else {
f = stdin;
inc->file = "<stdin>";
}
if (!f)
err(1, "%s", inc->file);
if (verbose)
printf("Processing %s\n", inc->file);
- parse_file(f, filelist, globlist, defconf, &inclist);
+ parse_file(f, filelist, globlist, &defconf, &inclist);
(void) fclose(f);
}
/*
* All config-file information has been read in and turned into
* a filelist and a globlist. If there were no specific files
* given on the run command, then the only thing left to do is to
* call a routine which finds all files matched by the globlist
* and adds them to the filelist. Then return the worklist.
*/
if (*files == NULL) {
expand_globs(filelist, globlist);
free_clist(globlist);
if (defconf != NULL)
free_entry(defconf);
return (filelist);
- /* NOTREACHED */
}
/*
* If newsyslog was given a specific list of files to process,
* it may be that some of those files were not listed in any
* config file. Those unlisted files should get the default
* rotation action. First, create the default-rotation action
* if none was found in a system config file.
*/
if (defconf == NULL) {
defconf = init_entry(DEFAULT_MARKER, NULL);
defconf->numlogs = 3;
defconf->trsize = 50;
defconf->permissions = S_IRUSR|S_IWUSR;
}
/*
* If newsyslog was run with a list of specific filenames,
* then create a new worklist which has only those files in
* it, picking up the rotation-rules for those files from
* the original filelist.
*
* XXX - Note that this will copy multiple rules for a single
* logfile, if multiple entries are an exact match for
* that file. That matches the historic behavior, but do
* we want to continue to allow it? If so, it should
* probably be handled more intelligently.
*/
cmdlist = malloc(sizeof(struct cflist));
if (cmdlist == NULL)
err(1, "malloc of cmdlist");
STAILQ_INIT(cmdlist);
for (given = files; *given; ++given) {
/*
* First try to find exact-matches for this given file.
*/
gmatch = 0;
STAILQ_FOREACH(ent, filelist, cf_nextp) {
if (strcmp(ent->log, *given) == 0) {
gmatch++;
dupent = init_entry(*given, ent);
STAILQ_INSERT_TAIL(cmdlist, dupent, cf_nextp);
}
}
if (gmatch) {
if (verbose > 2)
printf("\t+ Matched entry %s\n", *given);
continue;
}
/*
* There was no exact-match for this given file, so look
* for a "glob" entry which does match.
*/
gmatch = 0;
- if (verbose > 2 && globlist != NULL)
+ if (verbose > 2)
printf("\t+ Checking globs for %s\n", *given);
STAILQ_FOREACH(ent, globlist, cf_nextp) {
fnres = fnmatch(ent->log, *given, FNM_PATHNAME);
if (verbose > 2)
printf("\t+ = %d for pattern %s\n", fnres,
ent->log);
if (fnres == 0) {
gmatch++;
dupent = init_entry(*given, ent);
/* This new entry is not a glob! */
dupent->flags &= ~CE_GLOB;
STAILQ_INSERT_TAIL(cmdlist, dupent, cf_nextp);
/* Only allow a match to one glob-entry */
break;
}
}
if (gmatch) {
if (verbose > 2)
printf("\t+ Matched %s via %s\n", *given,
ent->log);
continue;
}
/*
* This given file was not found in any config file, so
* add a worklist item based on the default entry.
*/
if (verbose > 2)
printf("\t+ No entry matched %s (will use %s)\n",
*given, DEFAULT_MARKER);
dupent = init_entry(*given, defconf);
/* Mark that it was *not* found in a config file */
dupent->def_cfg = 1;
STAILQ_INSERT_TAIL(cmdlist, dupent, cf_nextp);
}
/*
* Free all the entries in the original work list, the list of
* glob entries, and the default entry.
*/
free_clist(filelist);
free_clist(globlist);
free_entry(defconf);
/* And finally, return a worklist which matches the given files. */
return (cmdlist);
}
/*
* Expand the list of entries with filename patterns, and add all files
* which match those glob-entries onto the worklist.
*/
static void
expand_globs(struct cflist *work_p, struct cflist *glob_p)
{
int gmatch, gres;
size_t i;
char *mfname;
struct conf_entry *dupent, *ent, *globent;
glob_t pglob;
struct stat st_fm;
/*
* The worklist contains all fully-specified (non-GLOB) names.
*
* Now expand the list of filename-pattern (GLOB) entries into
* a second list, which (by definition) will only match files
* that already exist. Do not add a glob-related entry for any
* file which already exists in the fully-specified list.
*/
STAILQ_FOREACH(globent, glob_p, cf_nextp) {
gres = glob(globent->log, GLOB_NOCHECK, NULL, &pglob);
if (gres != 0) {
warn("cannot expand pattern (%d): %s", gres,
globent->log);
continue;
}
if (verbose > 2)
printf("\t+ Expanding pattern %s\n", globent->log);
for (i = 0; i < pglob.gl_matchc; i++) {
mfname = pglob.gl_pathv[i];
/* See if this file already has a specific entry. */
gmatch = 0;
STAILQ_FOREACH(ent, work_p, cf_nextp) {
if (strcmp(mfname, ent->log) == 0) {
gmatch++;
break;
}
}
if (gmatch)
continue;
/* Make sure the matched name is a file. */
gres = lstat(mfname, &st_fm);
if (gres != 0) {
/* Error on a file that glob() matched?!? */
warn("Skipping %s - lstat() error", mfname);
continue;
}
if (!S_ISREG(st_fm.st_mode)) {
/* We only rotate files! */
if (verbose > 2)
printf("\t+ . skipping %s (!file)\n",
mfname);
continue;
}
if (verbose > 2)
printf("\t+ . add file %s\n", mfname);
dupent = init_entry(mfname, globent);
/* This new entry is not a glob! */
dupent->flags &= ~CE_GLOB;
/* Add to the worklist. */
STAILQ_INSERT_TAIL(work_p, dupent, cf_nextp);
}
globfree(&pglob);
if (verbose > 2)
printf("\t+ Done with pattern %s\n", globent->log);
}
}
/*
* Parse a configuration file and update a linked list of all the logs to
* process.
*/
static void
parse_file(FILE *cf, struct cflist *work_p, struct cflist *glob_p,
- struct conf_entry *defconf_p, struct ilist *inclist)
+ struct conf_entry **defconf_p, struct ilist *inclist)
{
char line[BUFSIZ], *parse, *q;
char *cp, *errline, *group;
struct conf_entry *working;
struct passwd *pwd;
struct group *grp;
glob_t pglob;
int eol, ptm_opts, res, special;
size_t i;
errline = NULL;
while (fgets(line, BUFSIZ, cf)) {
if ((line[0] == '\n') || (line[0] == '#') ||
(strlen(line) == 0))
continue;
if (errline != NULL)
free(errline);
errline = strdup(line);
for (cp = line + 1; *cp != '\0'; cp++) {
if (*cp != '#')
continue;
if (*(cp - 1) == '\\') {
strcpy(cp - 1, cp);
cp--;
continue;
}
*cp = '\0';
break;
}
q = parse = missing_field(sob(line), errline);
parse = son(line);
if (!*parse)
errx(1, "malformed line (missing fields):\n%s",
errline);
*parse = '\0';
/*
* Allow people to set debug options via the config file.
* (NOTE: debug options are undocumented, and may disappear
* at any time, etc).
*/
if (strcasecmp(DEBUG_MARKER, q) == 0) {
q = parse = missing_field(sob(parse + 1), errline);
parse = son(parse);
if (!*parse)
warnx("debug line specifies no option:\n%s",
errline);
else {
*parse = '\0';
parse_doption(q);
}
continue;
} else if (strcasecmp(INCLUDE_MARKER, q) == 0) {
if (verbose)
printf("Found: %s", errline);
q = parse = missing_field(sob(parse + 1), errline);
parse = son(parse);
if (!*parse) {
warnx("include line missing argument:\n%s",
errline);
continue;
}
*parse = '\0';
if (isglobstr(q)) {
res = glob(q, GLOB_NOCHECK, NULL, &pglob);
if (res != 0) {
warn("cannot expand pattern (%d): %s",
res, q);
continue;
}
if (verbose > 2)
printf("\t+ Expanding pattern %s\n", q);
for (i = 0; i < pglob.gl_matchc; i++)
add_to_queue(pglob.gl_pathv[i],
inclist);
globfree(&pglob);
} else
add_to_queue(q, inclist);
continue;
}
special = 0;
working = init_entry(q, NULL);
if (strcasecmp(DEFAULT_MARKER, q) == 0) {
special = 1;
- if (defconf_p != NULL) {
+ if (*defconf_p != NULL) {
warnx("Ignoring duplicate entry for %s!", q);
free_entry(working);
continue;
}
- defconf_p = working;
+ *defconf_p = working;
}
q = parse = missing_field(sob(parse + 1), errline);
parse = son(parse);
if (!*parse)
errx(1, "malformed line (missing fields):\n%s",
errline);
*parse = '\0';
if ((group = strchr(q, ':')) != NULL ||
(group = strrchr(q, '.')) != NULL) {
*group++ = '\0';
if (*q) {
if (!(isnumberstr(q))) {
if ((pwd = getpwnam(q)) == NULL)
errx(1,
"error in config file; unknown user:\n%s",
errline);
working->uid = pwd->pw_uid;
} else
working->uid = atoi(q);
} else
working->uid = (uid_t)-1;
q = group;
if (*q) {
if (!(isnumberstr(q))) {
if ((grp = getgrnam(q)) == NULL)
errx(1,
"error in config file; unknown group:\n%s",
errline);
working->gid = grp->gr_gid;
} else
working->gid = atoi(q);
} else
working->gid = (gid_t)-1;
q = parse = missing_field(sob(parse + 1), errline);
parse = son(parse);
if (!*parse)
errx(1, "malformed line (missing fields):\n%s",
errline);
*parse = '\0';
} else {
working->uid = (uid_t)-1;
working->gid = (gid_t)-1;
}
if (!sscanf(q, "%o", &working->permissions))
errx(1, "error in config file; bad permissions:\n%s",
errline);
if ((working->permissions & ~DEFFILEMODE) != 0) {
warnx("File mode bits 0%o changed to 0%o in line:\n%s",
working->permissions,
working->permissions & DEFFILEMODE, errline);
working->permissions &= DEFFILEMODE;
}
q = parse = missing_field(sob(parse + 1), errline);
parse = son(parse);
if (!*parse)
errx(1, "malformed line (missing fields):\n%s",
errline);
*parse = '\0';
if (!sscanf(q, "%d", &working->numlogs) || working->numlogs < 0)
errx(1, "error in config file; bad value for count of logs to save:\n%s",
errline);
q = parse = missing_field(sob(parse + 1), errline);
parse = son(parse);
if (!*parse)
errx(1, "malformed line (missing fields):\n%s",
errline);
*parse = '\0';
if (isdigitch(*q))
working->trsize = atoi(q);
else if (strcmp(q, "*") == 0)
working->trsize = -1;
else {
warnx("Invalid value of '%s' for 'size' in line:\n%s",
q, errline);
working->trsize = -1;
}
working->flags = 0;
working->compress = COMPRESS_NONE;
q = parse = missing_field(sob(parse + 1), errline);
parse = son(parse);
eol = !*parse;
*parse = '\0';
{
char *ep;
u_long ul;
ul = strtoul(q, &ep, 10);
if (ep == q)
working->hours = 0;
else if (*ep == '*')
working->hours = -1;
else if (ul > INT_MAX)
errx(1, "interval is too large:\n%s", errline);
else
working->hours = ul;
if (*ep == '\0' || strcmp(ep, "*") == 0)
goto no_trimat;
if (*ep != '@' && *ep != '$')
errx(1, "malformed interval/at:\n%s", errline);
working->flags |= CE_TRIMAT;
working->trim_at = ptime_init(NULL);
ptm_opts = PTM_PARSE_ISO8601;
if (*ep == '$')
ptm_opts = PTM_PARSE_DWM;
ptm_opts |= PTM_PARSE_MATCHDOM;
res = ptime_relparse(working->trim_at, ptm_opts,
ptimeget_secs(timenow), ep + 1);
if (res == -2)
errx(1, "nonexistent time for 'at' value:\n%s",
errline);
else if (res < 0)
errx(1, "malformed 'at' value:\n%s", errline);
}
no_trimat:
if (eol)
q = NULL;
else {
q = parse = sob(parse + 1); /* Optional field */
parse = son(parse);
if (!*parse)
eol = 1;
*parse = '\0';
}
for (; q && *q && !isspacech(*q); q++) {
switch (tolowerch(*q)) {
case 'b':
working->flags |= CE_BINARY;
break;
case 'c':
working->flags |= CE_CREATE;
break;
case 'd':
working->flags |= CE_NODUMP;
break;
case 'g':
working->flags |= CE_GLOB;
break;
case 'j':
working->compress = COMPRESS_BZIP2;
break;
case 'n':
working->flags |= CE_NOSIGNAL;
break;
case 'p':
working->flags |= CE_PLAIN0;
break;
case 'r':
working->flags |= CE_PID2CMD;
break;
case 't':
working->flags |= CE_RFC5424;
break;
case 'u':
working->flags |= CE_SIGNALGROUP;
break;
case 'w':
/* Deprecated flag - keep for compatibility purposes */
break;
case 'x':
working->compress = COMPRESS_XZ;
break;
case 'y':
working->compress = COMPRESS_ZSTD;
break;
case 'z':
working->compress = COMPRESS_GZIP;
break;
case '-':
break;
case 'f': /* Used by OpenBSD for "CE_FOLLOW" */
case 'm': /* Used by OpenBSD for "CE_MONITOR" */
default:
errx(1, "illegal flag in config file -- %c",
*q);
}
}
if (eol)
q = NULL;
else {
q = parse = sob(parse + 1); /* Optional field */
parse = son(parse);
if (!*parse)
eol = 1;
*parse = '\0';
}
working->pid_cmd_file = NULL;
if (q && *q) {
if (*q == '/')
working->pid_cmd_file = strdup(q);
else if (isalnum(*q))
goto got_sig;
else {
errx(1,
"illegal pid file or signal in config file:\n%s",
errline);
}
}
if (eol)
q = NULL;
else {
q = parse = sob(parse + 1); /* Optional field */
- *(parse = son(parse)) = '\0';
+ parse = son(parse);
+ *parse = '\0';
}
working->sig = SIGHUP;
if (q && *q) {
got_sig:
working->sig = parse_signal(q);
if (working->sig < 1 || working->sig >= sys_nsig) {
errx(1,
"illegal signal in config file:\n%s",
errline);
}
}
/*
* Finish figuring out what pid-file to use (if any) in
* later processing if this logfile needs to be rotated.
*/
if ((working->flags & CE_NOSIGNAL) == CE_NOSIGNAL) {
/*
* This config-entry specified 'n' for nosignal,
* see if it also specified an explicit pid_cmd_file.
* This would be a pretty pointless combination.
*/
if (working->pid_cmd_file != NULL) {
warnx("Ignoring '%s' because flag 'n' was specified in line:\n%s",
working->pid_cmd_file, errline);
free(working->pid_cmd_file);
working->pid_cmd_file = NULL;
}
} else if (working->pid_cmd_file == NULL) {
/*
* This entry did not specify the 'n' flag, which
* means it should signal syslogd unless it had
* specified some other pid-file (and obviously the
* syslog pid-file will not be for a process-group).
* Also, we should only try to notify syslog if we
* are root.
*/
if (working->flags & CE_SIGNALGROUP) {
warnx("Ignoring flag 'U' in line:\n%s",
errline);
working->flags &= ~CE_SIGNALGROUP;
}
if (needroot)
working->pid_cmd_file = strdup(path_syslogpid);
}
/*
* Add this entry to the appropriate list of entries, unless
* it was some kind of special entry (eg: <default>).
*/
if (special) {
; /* Do not add to any list */
} else if (working->flags & CE_GLOB) {
STAILQ_INSERT_TAIL(glob_p, working, cf_nextp);
} else {
STAILQ_INSERT_TAIL(work_p, working, cf_nextp);
}
}
if (errline != NULL)
free(errline);
}
static char *
missing_field(char *p, char *errline)
{
if (!p || !*p)
errx(1, "missing field in config file:\n%s", errline);
return (p);
}
/*
* In our sort we return it in the reverse of what qsort normally
* would do, as we want the newest files first. If we have two
* entries with the same time we don't really care about order.
*
* Support function for qsort() in delete_oldest_timelog().
*/
static int
oldlog_entry_compare(const void *a, const void *b)
{
const struct oldlog_entry *ola = a, *olb = b;
if (ola->t > olb->t)
return (-1);
else if (ola->t < olb->t)
return (1);
else
return (0);
}
/*
* Check whether the file corresponding to dp is an archive of the logfile
* logfname, based on the timefnamefmt format string. Return true and fill out
* tm if this is the case; otherwise return false.
*/
static int
validate_old_timelog(int fd, const struct dirent *dp, const char *logfname,
struct tm *tm)
{
struct stat sb;
size_t logfname_len;
char *s;
int c;
logfname_len = strlen(logfname);
if (dp->d_type != DT_REG) {
/*
* Some filesystems (e.g. NFS) don't fill out the d_type field
* and leave it set to DT_UNKNOWN; in this case we must obtain
* the file type ourselves.
*/
if (dp->d_type != DT_UNKNOWN ||
fstatat(fd, dp->d_name, &sb, AT_SYMLINK_NOFOLLOW) != 0 ||
!S_ISREG(sb.st_mode))
return (0);
}
/* Ignore everything but files with our logfile prefix. */
if (strncmp(dp->d_name, logfname, logfname_len) != 0)
return (0);
/* Ignore the actual non-rotated logfile. */
if (dp->d_namlen == logfname_len)
return (0);
/*
* Make sure we have found a rotated logfile, so the
* postfix is valid, i.e. the format is: '.<time>(.[bgx]z)?'.
*/
if (dp->d_name[logfname_len] != '.') {
if (verbose)
printf("Ignoring %s which has unexpected "
"extension '%s'\n", dp->d_name,
&dp->d_name[logfname_len]);
return (0);
}
memset(tm, 0, sizeof(*tm));
if ((s = strptime(&dp->d_name[logfname_len + 1],
timefnamefmt, tm)) == NULL) {
/*
* We could special case "old" sequentially named logfiles here,
* but we do not as that would require special handling to
* decide which one was the oldest compared to "new" time based
* logfiles.
*/
if (verbose)
printf("Ignoring %s which does not "
"match time format\n", dp->d_name);
return (0);
}
for (c = 0; c < COMPRESS_TYPES; c++)
if (strcmp(s, compress_type[c].suffix) == 0)
/* We're done. */
return (1);
if (verbose)
printf("Ignoring %s which has unexpected extension '%s'\n",
dp->d_name, s);
return (0);
}
/*
* Delete the oldest logfiles, when using time based filenames.
*/
static void
delete_oldest_timelog(const struct conf_entry *ent, const char *archive_dir)
{
char *basebuf, *dirbuf, errbuf[80];
const char *base, *dir;
int dir_fd, i, logcnt, max_logcnt;
struct oldlog_entry *oldlogs;
struct dirent *dp;
struct tm tm;
DIR *dirp;
oldlogs = malloc(MAX_OLDLOGS * sizeof(struct oldlog_entry));
max_logcnt = MAX_OLDLOGS;
logcnt = 0;
if (archive_dir != NULL && archive_dir[0] != '\0') {
dirbuf = NULL;
dir = archive_dir;
} else {
if ((dirbuf = strdup(ent->log)) == NULL)
err(1, "strdup()");
dir = dirname(dirbuf);
}
if ((basebuf = strdup(ent->log)) == NULL)
err(1, "strdup()");
base = basename(basebuf);
if (strcmp(base, "/") == 0)
errx(1, "Invalid log filename - became '/'");
if (verbose > 2)
printf("Searching for old logs in %s\n", dir);
/* First we create a 'list' of all archived logfiles */
if ((dirp = opendir(dir)) == NULL)
err(1, "Cannot open log directory '%s'", dir);
dir_fd = dirfd(dirp);
while ((dp = readdir(dirp)) != NULL) {
if (validate_old_timelog(dir_fd, dp, base, &tm) == 0)
continue;
/*
* We should now have an old rotated logfile, so
* add it to the 'list'.
*/
if ((oldlogs[logcnt].t = timegm(&tm)) == -1)
err(1, "Could not convert time string to time value");
if ((oldlogs[logcnt].fname = strdup(dp->d_name)) == NULL)
err(1, "strdup()");
logcnt++;
/*
* It is very unlikely we ever run out of space in the
* logfile array from the default size, but let's
* handle it anyway...
*/
if (logcnt >= max_logcnt) {
max_logcnt *= 4;
/* Detect integer overflow */
if (max_logcnt < logcnt)
errx(1, "Too many old logfiles found");
oldlogs = realloc(oldlogs,
max_logcnt * sizeof(struct oldlog_entry));
if (oldlogs == NULL)
err(1, "realloc()");
}
}
/* Second, if needed we delete oldest archived logfiles */
if (logcnt > 0 && logcnt >= ent->numlogs && ent->numlogs > 1) {
oldlogs = realloc(oldlogs, logcnt *
sizeof(struct oldlog_entry));
if (oldlogs == NULL)
err(1, "realloc()");
/*
* We now sort the logs in the order of newest to
* oldest. That way we can simply skip over the
* number of records we want to keep.
*/
qsort(oldlogs, logcnt, sizeof(struct oldlog_entry),
oldlog_entry_compare);
for (i = ent->numlogs - 1; i < logcnt; i++) {
if (noaction)
printf("\trm -f %s/%s\n", dir,
oldlogs[i].fname);
else if (unlinkat(dir_fd, oldlogs[i].fname, 0) != 0) {
snprintf(errbuf, sizeof(errbuf),
"Could not delete old logfile '%s'",
oldlogs[i].fname);
perror(errbuf);
}
}
} else if (verbose > 1)
printf("No old logs to delete for logfile %s\n", ent->log);
/* Third, cleanup */
closedir(dirp);
for (i = 0; i < logcnt; i++) {
assert(oldlogs[i].fname != NULL);
free(oldlogs[i].fname);
}
free(oldlogs);
free(dirbuf);
free(basebuf);
}
/*
* Generate a log filename, when using classic filenames.
*/
static void
gen_classiclog_fname(char *fname, size_t fname_sz, const char *archive_dir,
const char *namepart, int numlogs_c)
{
if (archive_dir[0] != '\0')
(void) snprintf(fname, fname_sz, "%s/%s.%d", archive_dir,
namepart, numlogs_c);
else
(void) snprintf(fname, fname_sz, "%s.%d", namepart, numlogs_c);
}
/*
* Delete a rotated logfile, when using classic filenames.
*/
static void
delete_classiclog(const char *archive_dir, const char *namepart, int numlog_c)
{
char file1[MAXPATHLEN], zfile1[MAXPATHLEN];
int c;
gen_classiclog_fname(file1, sizeof(file1), archive_dir, namepart,
numlog_c);
for (c = 0; c < COMPRESS_TYPES; c++) {
(void) snprintf(zfile1, sizeof(zfile1), "%s%s", file1,
compress_type[c].suffix);
if (noaction)
printf("\trm -f %s\n", zfile1);
else
(void) unlink(zfile1);
}
}
/*
* Only add to the queue if the file hasn't already been added. This is
* done to prevent circular include loops.
*/
static void
add_to_queue(const char *fname, struct ilist *inclist)
{
struct include_entry *inc;
STAILQ_FOREACH(inc, inclist, inc_nextp) {
if (strcmp(fname, inc->file) == 0) {
warnx("duplicate include detected: %s", fname);
return;
}
}
inc = malloc(sizeof(struct include_entry));
if (inc == NULL)
err(1, "malloc of inc");
inc->file = strdup(fname);
if (verbose > 2)
printf("\t+ Adding %s to the processing queue.\n", fname);
STAILQ_INSERT_TAIL(inclist, inc, inc_nextp);
}
/*
* Search for logfile and return its compression suffix (if supported)
* The suffix detection is first-match in the order of compress_types
*
* Note: if logfile without suffix exists (uncompressed, COMPRESS_NONE)
* a zero-length string is returned
*/
static const char *
get_logfile_suffix(const char *logfile)
{
struct stat st;
char zfile[MAXPATHLEN];
int c;
for (c = 0; c < COMPRESS_TYPES; c++) {
(void) strlcpy(zfile, logfile, MAXPATHLEN);
(void) strlcat(zfile, compress_type[c].suffix, MAXPATHLEN);
if (lstat(zfile, &st) == 0)
return (compress_type[c].suffix);
}
return (NULL);
}
static fk_entry
do_rotate(const struct conf_entry *ent)
{
char dirpart[MAXPATHLEN], namepart[MAXPATHLEN];
char file1[MAXPATHLEN], file2[MAXPATHLEN];
char zfile1[MAXPATHLEN], zfile2[MAXPATHLEN];
const char *logfile_suffix;
char datetimestr[30];
int flags, numlogs_c;
fk_entry free_or_keep;
struct sigwork_entry *swork;
struct stat st;
struct tm tm;
time_t now;
flags = ent->flags;
free_or_keep = FREE_ENT;
if (archtodir) {
char *p;
/* build complete name of archive directory into dirpart */
if (*archdirname == '/') { /* absolute */
strlcpy(dirpart, archdirname, sizeof(dirpart));
} else { /* relative */
/* get directory part of logfile */
strlcpy(dirpart, ent->log, sizeof(dirpart));
if ((p = strrchr(dirpart, '/')) == NULL)
dirpart[0] = '\0';
else
*(p + 1) = '\0';
strlcat(dirpart, archdirname, sizeof(dirpart));
}
/* check if archive directory exists, if not, create it */
if (lstat(dirpart, &st))
createdir(ent, dirpart);
/* get filename part of logfile */
if ((p = strrchr(ent->log, '/')) == NULL)
strlcpy(namepart, ent->log, sizeof(namepart));
else
strlcpy(namepart, p + 1, sizeof(namepart));
} else {
/*
* Tell utility functions we are not using an archive
* dir.
*/
dirpart[0] = '\0';
strlcpy(namepart, ent->log, sizeof(namepart));
}
/* Delete old logs */
if (timefnamefmt != NULL)
delete_oldest_timelog(ent, dirpart);
else {
/*
* Handle cleaning up after legacy newsyslog where we
* kept ent->numlogs + 1 files. This code can go away
* at some point in the future.
*/
delete_classiclog(dirpart, namepart, ent->numlogs);
if (ent->numlogs > 0)
delete_classiclog(dirpart, namepart, ent->numlogs - 1);
}
if (timefnamefmt != NULL) {
/* If the time functions fail we cannot do anything sensible */
if (time(&now) == (time_t)-1 ||
localtime_r(&now, &tm) == NULL)
bzero(&tm, sizeof(tm));
strftime(datetimestr, sizeof(datetimestr), timefnamefmt, &tm);
if (archtodir)
(void) snprintf(file1, sizeof(file1), "%s/%s.%s",
dirpart, namepart, datetimestr);
else
(void) snprintf(file1, sizeof(file1), "%s.%s",
ent->log, datetimestr);
/* Don't run the code to move down logs */
numlogs_c = -1;
} else {
gen_classiclog_fname(file1, sizeof(file1), dirpart, namepart,
ent->numlogs - 1);
numlogs_c = ent->numlogs - 2; /* copy for countdown */
}
/* Move down log files */
for (; numlogs_c >= 0; numlogs_c--) {
(void) strlcpy(file2, file1, sizeof(file2));
gen_classiclog_fname(file1, sizeof(file1), dirpart, namepart,
numlogs_c);
logfile_suffix = get_logfile_suffix(file1);
if (logfile_suffix == NULL)
continue;
(void) strlcpy(zfile1, file1, MAXPATHLEN);
(void) strlcpy(zfile2, file2, MAXPATHLEN);
(void) strlcat(zfile1, logfile_suffix, MAXPATHLEN);
(void) strlcat(zfile2, logfile_suffix, MAXPATHLEN);
if (noaction)
printf("\tmv %s %s\n", zfile1, zfile2);
else {
/* XXX - Ought to be checking for failure! */
(void)rename(zfile1, zfile2);
change_attrs(zfile2, ent);
if (ent->compress && !strlen(logfile_suffix)) {
/* compress old rotation */
struct zipwork_entry zwork;
memset(&zwork, 0, sizeof(zwork));
zwork.zw_conf = ent;
zwork.zw_fsize = sizefile(zfile2);
strcpy(zwork.zw_fname, zfile2);
do_zipwork(&zwork);
}
}
}
if (ent->numlogs > 0) {
if (noaction) {
/*
* Note that savelog() may succeed by using link()
* for the archtodir case, but there is no good way
* of knowing if it will when doing "noaction", so
* here we claim that it will have to do a copy...
*/
if (archtodir)
printf("\tcp %s %s\n", ent->log, file1);
else
printf("\tln %s %s\n", ent->log, file1);
printf("\ttouch %s\t\t"
"# Update mtime for 'when'-interval processing\n",
file1);
} else {
if (!(flags & CE_BINARY)) {
/* Report the trimming to the old log */
log_trim(ent->log, ent);
}
savelog(ent->log, file1);
/*
* Interval-based rotations are done using the mtime of
* the most recently archived log, so make sure it gets
* updated during a rotation.
*/
utimes(file1, NULL);
}
change_attrs(file1, ent);
}
/* Create the new log file and move it into place */
if (noaction)
printf("Start new log...\n");
createlog(ent);
/*
* Save all signalling and file-compression to be done after log
* files from all entries have been rotated. This way any one
* process will not be sent the same signal multiple times when
* multiple log files had to be rotated.
*/
swork = NULL;
if (ent->pid_cmd_file != NULL)
swork = save_sigwork(ent);
if (ent->numlogs > 0 && ent->compress > COMPRESS_NONE) {
if (!(ent->flags & CE_PLAIN0) ||
strcmp(&file1[strlen(file1) - 2], ".0") != 0) {
/*
* The zipwork_entry will include a pointer to this
* conf_entry, so the conf_entry should not be freed.
*/
free_or_keep = KEEP_ENT;
save_zipwork(ent, swork, ent->fsize, file1);
}
}
return (free_or_keep);
}
static void
do_sigwork(struct sigwork_entry *swork)
{
struct sigwork_entry *nextsig;
int kres, secs;
char *tmp;
if (swork->sw_runcmd == 0 && (!(swork->sw_pidok) || swork->sw_pid == 0))
return; /* no work to do... */
/*
* If nosignal (-s) was specified, then do not signal any process.
* Note that a nosignal request triggers a warning message if the
* rotated logfile needs to be compressed, *unless* -R was also
* specified. We assume that an `-sR' request came from a process
* which writes to the logfile, and as such, we assume that process
* has already made sure the logfile is not presently in use. This
* just sets swork->sw_pidok to a special value, and do_zipwork
* will print any necessary warning(s).
*/
if (nosignal) {
if (!rotatereq)
swork->sw_pidok = -1;
return;
}
/*
* Compute the pause between consecutive signals. Use a longer
* sleep time if we will be sending two signals to the same
* daemon or process-group.
*/
secs = 0;
nextsig = SLIST_NEXT(swork, sw_nextp);
if (nextsig != NULL) {
if (swork->sw_pid == nextsig->sw_pid)
secs = 10;
else
secs = 1;
}
if (noaction) {
if (swork->sw_runcmd)
printf("\tsh -c '%s %d'\n", swork->sw_fname,
swork->sw_signum);
else {
printf("\tkill -%d %d \t\t# %s\n", swork->sw_signum,
(int)swork->sw_pid, swork->sw_fname);
if (secs > 0)
printf("\tsleep %d\n", secs);
}
return;
}
if (swork->sw_runcmd) {
asprintf(&tmp, "%s %d", swork->sw_fname, swork->sw_signum);
if (tmp == NULL) {
warn("can't allocate memory to run %s",
swork->sw_fname);
return;
}
if (verbose)
printf("Run command: %s\n", tmp);
kres = system(tmp);
if (kres) {
warnx("%s: returned non-zero exit code: %d",
tmp, kres);
}
free(tmp);
return;
}
kres = kill(swork->sw_pid, swork->sw_signum);
if (kres != 0) {
/*
* Assume that "no such process" (ESRCH) is something
* to warn about, but is not an error. Presumably the
* process which writes to the rotated log file(s) is
* gone, in which case we should have no problem with
* compressing the rotated log file(s).
*/
if (errno != ESRCH)
swork->sw_pidok = 0;
warn("can't notify %s, pid %d = %s", swork->sw_pidtype,
(int)swork->sw_pid, swork->sw_fname);
} else {
if (verbose)
printf("Notified %s pid %d = %s\n", swork->sw_pidtype,
(int)swork->sw_pid, swork->sw_fname);
if (secs > 0) {
if (verbose)
printf("Pause %d second(s) between signals\n",
secs);
sleep(secs);
}
}
}
static void
do_zipwork(struct zipwork_entry *zwork)
{
const struct compress_types *ct;
struct sbuf *command;
pid_t pidzip, wpid;
int c, errsav, fcount, zstatus;
const char **args, *pgm_name, *pgm_path;
char *zresult;
- command = NULL;
assert(zwork != NULL);
assert(zwork->zw_conf != NULL);
assert(zwork->zw_conf->compress > COMPRESS_NONE);
assert(zwork->zw_conf->compress < COMPRESS_TYPES);
if (zwork->zw_swork != NULL && zwork->zw_swork->sw_runcmd == 0 &&
zwork->zw_swork->sw_pidok <= 0) {
warnx(
"log %s not compressed because daemon(s) not notified",
zwork->zw_fname);
change_attrs(zwork->zw_fname, zwork->zw_conf);
return;
}
ct = &compress_type[zwork->zw_conf->compress];
/*
* execv will be called with the array [ program, flags ... ,
* filename, NULL ] so allocate nflags+3 elements for the array.
*/
args = calloc(ct->nflags + 3, sizeof(*args));
if (args == NULL)
err(1, "calloc");
pgm_path = ct->path;
pgm_name = strrchr(pgm_path, '/');
if (pgm_name == NULL)
pgm_name = pgm_path;
else
pgm_name++;
/* Build the argument array. */
args[0] = pgm_name;
for (c = 0; c < ct->nflags; c++)
args[c + 1] = ct->flags[c];
args[c + 1] = zwork->zw_fname;
/* Also create a space-delimited version if we need to print it. */
if ((command = sbuf_new_auto()) == NULL)
errx(1, "sbuf_new");
sbuf_cpy(command, pgm_path);
for (c = 1; args[c] != NULL; c++) {
sbuf_putc(command, ' ');
sbuf_cat(command, args[c]);
}
if (sbuf_finish(command) == -1)
err(1, "sbuf_finish");
/* Determine the filename of the compressed file. */
asprintf(&zresult, "%s%s", zwork->zw_fname, ct->suffix);
if (zresult == NULL)
errx(1, "asprintf");
if (verbose)
printf("Executing: %s\n", sbuf_data(command));
if (noaction) {
printf("\t%s %s\n", pgm_name, zwork->zw_fname);
change_attrs(zresult, zwork->zw_conf);
goto out;
}
fcount = 1;
pidzip = fork();
while (pidzip < 0) {
/*
* The fork failed. If the failure was due to a temporary
* problem, then wait a short time and try it again.
*/
errsav = errno;
warn("fork() for `%s %s'", pgm_name, zwork->zw_fname);
if (errsav != EAGAIN || fcount > 5)
errx(1, "Exiting...");
sleep(fcount * 12);
fcount++;
pidzip = fork();
}
if (!pidzip) {
/* The child process executes the compression command */
execv(pgm_path, __DECONST(char *const*, args));
err(1, "execv(`%s')", sbuf_data(command));
}
wpid = waitpid(pidzip, &zstatus, 0);
if (wpid == -1) {
/* XXX - should this be a fatal error? */
warn("%s: waitpid(%d)", pgm_path, pidzip);
goto out;
}
if (!WIFEXITED(zstatus)) {
warnx("`%s' did not terminate normally", sbuf_data(command));
goto out;
}
if (WEXITSTATUS(zstatus)) {
warnx("`%s' terminated with a non-zero status (%d)",
sbuf_data(command), WEXITSTATUS(zstatus));
goto out;
}
/* Compression was successful, set file attributes on the result. */
change_attrs(zresult, zwork->zw_conf);
out:
- if (command != NULL)
- sbuf_delete(command);
+ sbuf_delete(command);
free(args);
free(zresult);
}
/*
* Save information on any process we need to signal. Any single
* process may need to be sent different signal-values for different
* log files, but usually a single signal-value will cause the process
* to close and re-open all of its log files.
*/
static struct sigwork_entry *
save_sigwork(const struct conf_entry *ent)
{
struct sigwork_entry *sprev, *stmp;
int ndiff;
size_t tmpsiz;
sprev = NULL;
ndiff = 1;
SLIST_FOREACH(stmp, &swhead, sw_nextp) {
ndiff = strcmp(ent->pid_cmd_file, stmp->sw_fname);
if (ndiff > 0)
break;
if (ndiff == 0) {
if (ent->sig == stmp->sw_signum)
break;
if (ent->sig > stmp->sw_signum) {
ndiff = 1;
break;
}
}
sprev = stmp;
}
if (stmp != NULL && ndiff == 0)
return (stmp);
tmpsiz = sizeof(struct sigwork_entry) + strlen(ent->pid_cmd_file) + 1;
stmp = malloc(tmpsiz);
stmp->sw_runcmd = 0;
/* If this entry is a command to run, just set the flag. */
if (ent->flags & CE_PID2CMD) {
stmp->sw_pid = -1;
stmp->sw_pidok = 0;
stmp->sw_runcmd = 1;
} else {
set_swpid(stmp, ent);
}
stmp->sw_signum = ent->sig;
strcpy(stmp->sw_fname, ent->pid_cmd_file);
if (sprev == NULL)
SLIST_INSERT_HEAD(&swhead, stmp, sw_nextp);
else
SLIST_INSERT_AFTER(sprev, stmp, sw_nextp);
return (stmp);
}
/*
* Save information on any file we need to compress. We may see the same
* file multiple times, so check the full list to avoid duplicates. The
* list itself is sorted smallest-to-largest, because that's the order we
* want to compress the files. If the partition is very low on disk space,
* then the smallest files are the most likely to compress, and compressing
* them first will free up more space for the larger files.
*/
static struct zipwork_entry *
save_zipwork(const struct conf_entry *ent, const struct sigwork_entry *swork,
int zsize, const char *zipfname)
{
struct zipwork_entry *zprev, *ztmp;
int ndiff;
size_t tmpsiz;
/* Compute the size if the caller did not know it. */
if (zsize < 0)
zsize = sizefile(zipfname);
zprev = NULL;
ndiff = 1;
SLIST_FOREACH(ztmp, &zwhead, zw_nextp) {
ndiff = strcmp(zipfname, ztmp->zw_fname);
if (ndiff == 0)
break;
if (zsize > ztmp->zw_fsize)
zprev = ztmp;
}
if (ztmp != NULL && ndiff == 0)
return (ztmp);
tmpsiz = sizeof(struct zipwork_entry) + strlen(zipfname) + 1;
ztmp = malloc(tmpsiz);
ztmp->zw_conf = ent;
ztmp->zw_swork = swork;
ztmp->zw_fsize = zsize;
strcpy(ztmp->zw_fname, zipfname);
if (zprev == NULL)
SLIST_INSERT_HEAD(&zwhead, ztmp, zw_nextp);
else
SLIST_INSERT_AFTER(zprev, ztmp, zw_nextp);
return (ztmp);
}
/* Send a signal to the pid specified by pidfile */
static void
set_swpid(struct sigwork_entry *swork, const struct conf_entry *ent)
{
FILE *f;
long minok, maxok, rval;
char *endp, *linep, line[BUFSIZ];
minok = MIN_PID;
maxok = MAX_PID;
swork->sw_pidok = 0;
swork->sw_pid = 0;
swork->sw_pidtype = "daemon";
if (ent->flags & CE_SIGNALGROUP) {
/*
* If we are expected to signal a process-group when
* rotating this logfile, then the value read in should
* be the negative of a valid process ID.
*/
minok = -MAX_PID;
maxok = -MIN_PID;
swork->sw_pidtype = "process-group";
}
f = fopen(ent->pid_cmd_file, "r");
if (f == NULL) {
if (errno == ENOENT && enforcepid == 0) {
/*
* Warn if the PID file doesn't exist, but do
* not consider it an error. Most likely it
* means the process has been terminated,
* so it should be safe to rotate any log
* files that the process would have been using.
*/
swork->sw_pidok = 1;
warnx("pid file doesn't exist: %s", ent->pid_cmd_file);
} else
warn("can't open pid file: %s", ent->pid_cmd_file);
return;
}
if (fgets(line, BUFSIZ, f) == NULL) {
/*
* Warn if the PID file is empty, but do not consider
* it an error. Most likely it means the process has
* terminated, so it should be safe to rotate any
* log files that the process would have been using.
*/
if (feof(f) && enforcepid == 0) {
swork->sw_pidok = 1;
warnx("pid/cmd file is empty: %s", ent->pid_cmd_file);
} else
warn("can't read from pid file: %s", ent->pid_cmd_file);
(void)fclose(f);
return;
}
(void)fclose(f);
errno = 0;
linep = line;
while (*linep == ' ')
linep++;
rval = strtol(linep, &endp, 10);
if (*endp != '\0' && !isspacech(*endp)) {
warnx("pid file does not start with a valid number: %s",
ent->pid_cmd_file);
} else if (rval < minok || rval > maxok) {
warnx("bad value '%ld' for process number in %s",
rval, ent->pid_cmd_file);
if (verbose)
warnx("\t(expecting value between %ld and %ld)",
minok, maxok);
} else {
swork->sw_pidok = 1;
swork->sw_pid = rval;
}
return;
}
/* Log the fact that the logs were turned over */
static int
log_trim(const char *logname, const struct conf_entry *log_ent)
{
FILE *f;
const char *xtra;
if ((f = fopen(logname, "a")) == NULL)
return (-1);
xtra = "";
if (log_ent->def_cfg)
xtra = " using <default> rule";
if (log_ent->flags & CE_RFC5424) {
if (log_ent->firstcreate) {
fprintf(f, "<%d>1 %s %s newsyslog %d - - %s%s\n",
LOG_MAKEPRI(LOG_USER, LOG_INFO),
daytime_rfc5424, hostname, getpid(),
"logfile first created", xtra);
} else if (log_ent->r_reason != NULL) {
fprintf(f, "<%d>1 %s %s newsyslog %d - - %s%s%s\n",
LOG_MAKEPRI(LOG_USER, LOG_INFO),
daytime_rfc5424, hostname, getpid(),
"logfile turned over", log_ent->r_reason, xtra);
} else {
fprintf(f, "<%d>1 %s %s newsyslog %d - - %s%s\n",
LOG_MAKEPRI(LOG_USER, LOG_INFO),
daytime_rfc5424, hostname, getpid(),
"logfile turned over", xtra);
}
} else {
if (log_ent->firstcreate)
fprintf(f,
"%s %.*s newsyslog[%d]: logfile first created%s\n",
daytime, (int)hostname_shortlen, hostname, getpid(),
xtra);
else if (log_ent->r_reason != NULL)
fprintf(f,
"%s %.*s newsyslog[%d]: logfile turned over%s%s\n",
daytime, (int)hostname_shortlen, hostname, getpid(),
log_ent->r_reason, xtra);
else
fprintf(f,
"%s %.*s newsyslog[%d]: logfile turned over%s\n",
daytime, (int)hostname_shortlen, hostname, getpid(),
xtra);
}
if (fclose(f) == EOF)
err(1, "log_trim: fclose");
return (0);
}
/* Return size in kilobytes of a file */
static int
sizefile(const char *file)
{
struct stat sb;
if (stat(file, &sb) < 0)
return (-1);
return (kbytes(sb.st_size));
}
/*
* Return the mtime of the most recent archive of the logfile, using timestamp
* based filenames.
*/
static time_t
mtime_old_timelog(const char *file)
{
struct stat sb;
struct tm tm;
int dir_fd;
time_t t;
struct dirent *dp;
DIR *dirp;
char *logfname, *logfnamebuf, *dir, *dirbuf;
t = -1;
if ((dirbuf = strdup(file)) == NULL) {
warn("strdup() of '%s'", file);
return (t);
}
dir = dirname(dirbuf);
if ((logfnamebuf = strdup(file)) == NULL) {
warn("strdup() of '%s'", file);
free(dirbuf);
return (t);
}
logfname = basename(logfnamebuf);
if (logfname[0] == '/') {
warnx("Invalid log filename '%s'", logfname);
goto out;
}
if ((dirp = opendir(dir)) == NULL) {
warn("Cannot open log directory '%s'", dir);
goto out;
}
dir_fd = dirfd(dirp);
/* Open the archive dir and find the most recent archive of logfname. */
while ((dp = readdir(dirp)) != NULL) {
if (validate_old_timelog(dir_fd, dp, logfname, &tm) == 0)
continue;
if (fstatat(dir_fd, dp->d_name, &sb, AT_SYMLINK_NOFOLLOW) == -1) {
warn("Cannot stat '%s'", file);
continue;
}
if (t < sb.st_mtime)
t = sb.st_mtime;
}
closedir(dirp);
out:
free(dirbuf);
free(logfnamebuf);
return (t);
}
/* Return the age in hours of the most recent archive of the logfile. */
static int
age_old_log(const char *file)
{
struct stat sb;
const char *logfile_suffix;
static unsigned int suffix_maxlen = 0;
char *tmp;
size_t tmpsiz;
time_t mtime;
int c;
if (suffix_maxlen == 0) {
for (c = 0; c < COMPRESS_TYPES; c++)
suffix_maxlen = MAX(suffix_maxlen,
strlen(compress_type[c].suffix));
}
tmpsiz = MAXPATHLEN + sizeof(".0") + suffix_maxlen + 1;
tmp = alloca(tmpsiz);
if (archtodir) {
char *p;
/* build name of archive directory into tmp */
if (*archdirname == '/') { /* absolute */
strlcpy(tmp, archdirname, tmpsiz);
} else { /* relative */
/* get directory part of logfile */
strlcpy(tmp, file, tmpsiz);
if ((p = strrchr(tmp, '/')) == NULL)
tmp[0] = '\0';
else
*(p + 1) = '\0';
strlcat(tmp, archdirname, tmpsiz);
}
strlcat(tmp, "/", tmpsiz);
/* get filename part of logfile */
if ((p = strrchr(file, '/')) == NULL)
strlcat(tmp, file, tmpsiz);
else
strlcat(tmp, p + 1, tmpsiz);
} else {
(void) strlcpy(tmp, file, tmpsiz);
}
if (timefnamefmt != NULL) {
mtime = mtime_old_timelog(tmp);
if (mtime == -1)
return (-1);
} else {
strlcat(tmp, ".0", tmpsiz);
logfile_suffix = get_logfile_suffix(tmp);
if (logfile_suffix == NULL)
return (-1);
(void) strlcat(tmp, logfile_suffix, tmpsiz);
if (stat(tmp, &sb) < 0)
return (-1);
mtime = sb.st_mtime;
}
return ((int)(ptimeget_secs(timenow) - mtime + 1800) / 3600);
}
/* Skip Over Blanks */
static char *
sob(char *p)
{
while (p && *p && isspace(*p))
p++;
return (p);
}
/* Skip Over Non-Blanks */
static char *
son(char *p)
{
while (p && *p && !isspace(*p))
p++;
return (p);
}
/* Check if string is actually a number */
static int
isnumberstr(const char *string)
{
while (*string) {
if (!isdigitch(*string++))
return (0);
}
return (1);
}
/* Check if string contains a glob */
static int
isglobstr(const char *string)
{
char chr;
while ((chr = *string++)) {
if (chr == '*' || chr == '?' || chr == '[')
return (1);
}
return (0);
}
/*
* Save the active log file under a new name. A link to the new name
* is the quick-and-easy way to do this. If that fails (which it will
* if the destination is on another partition), then make a copy of
* the file to the new location.
*/
static void
savelog(char *from, char *to)
{
FILE *src, *dst;
int c, res;
res = link(from, to);
if (res == 0)
return;
if ((src = fopen(from, "r")) == NULL)
err(1, "can't fopen %s for reading", from);
if ((dst = fopen(to, "w")) == NULL)
err(1, "can't fopen %s for writing", to);
while ((c = getc(src)) != EOF) {
if ((putc(c, dst)) == EOF)
err(1, "error writing to %s", to);
}
if (ferror(src))
err(1, "error reading from %s", from);
if ((fclose(src)) != 0)
err(1, "can't fclose %s", from);
if ((fclose(dst)) != 0)
err(1, "can't fclose %s", to);
}
/* create one or more directory components of a path */
static void
createdir(const struct conf_entry *ent, char *dirpart)
{
int res;
char *s, *d;
char mkdirpath[MAXPATHLEN];
struct stat st;
s = dirpart;
d = mkdirpath;
for (;;) {
*d++ = *s++;
if (*s != '/' && *s != '\0')
continue;
*d = '\0';
res = lstat(mkdirpath, &st);
if (res != 0) {
if (noaction) {
printf("\tmkdir %s\n", mkdirpath);
} else {
res = mkdir(mkdirpath, 0755);
if (res != 0)
err(1, "Error on mkdir(\"%s\") for -a",
mkdirpath);
}
}
if (*s == '\0')
break;
}
if (verbose) {
if (ent->firstcreate)
printf("Created directory '%s' for new %s\n",
dirpart, ent->log);
else
printf("Created directory '%s' for -a\n", dirpart);
}
}
/*
* Create a new log file, destroying any currently-existing version
* of the log file in the process. If the caller wants a backup copy
* of the file to exist, they should call 'link(logfile,logbackup)'
* before calling this routine.
*/
void
createlog(const struct conf_entry *ent)
{
int fd, failed;
struct stat st;
char *realfile, *slash, tempfile[MAXPATHLEN];
fd = -1;
realfile = ent->log;
/*
* If this log file is being created for the first time (-C option),
* then it may also be true that the parent directory does not exist
* yet. Check, and create that directory if it is missing.
*/
if (ent->firstcreate) {
strlcpy(tempfile, realfile, sizeof(tempfile));
slash = strrchr(tempfile, '/');
if (slash != NULL) {
*slash = '\0';
failed = stat(tempfile, &st);
if (failed && errno != ENOENT)
err(1, "Error on stat(%s)", tempfile);
if (failed)
createdir(ent, tempfile);
else if (!S_ISDIR(st.st_mode))
errx(1, "%s exists but is not a directory",
tempfile);
}
}
/*
* First create an unused filename, so it can be chown'ed and
* chmod'ed before it is moved into the real location. mkstemp
* will create the file mode=600 & owned by us. Note that all
* temp files will have a suffix of '.z<something>'.
*/
strlcpy(tempfile, realfile, sizeof(tempfile));
strlcat(tempfile, ".zXXXXXX", sizeof(tempfile));
if (noaction)
printf("\tmktemp %s\n", tempfile);
else {
fd = mkstemp(tempfile);
if (fd < 0)
err(1, "can't mkstemp logfile %s", tempfile);
/*
* Add status message to what will become the new log file.
*/
if (!(ent->flags & CE_BINARY)) {
if (log_trim(tempfile, ent))
err(1, "can't add status message to log");
}
}
/* Change the owner/group, if we are supposed to */
if (ent->uid != (uid_t)-1 || ent->gid != (gid_t)-1) {
if (noaction)
printf("\tchown %u:%u %s\n", ent->uid, ent->gid,
tempfile);
else {
failed = fchown(fd, ent->uid, ent->gid);
if (failed)
err(1, "can't fchown temp file %s", tempfile);
}
}
/* Turn on NODUMP if it was requested in the config-file. */
if (ent->flags & CE_NODUMP) {
if (noaction)
printf("\tchflags nodump %s\n", tempfile);
else {
failed = fchflags(fd, UF_NODUMP);
if (failed) {
warn("createlog: fchflags(NODUMP)");
}
}
}
/*
* Note that if the real logfile still exists, and if the call
* to rename() fails, then "neither the old file nor the new
* file shall be changed or created" (to quote the standard).
* If the call succeeds, then the file will be replaced without
* any window where some other process might find that the file
* did not exist.
* XXX - ? It may be that for some error conditions, we could
* retry by first removing the realfile and then renaming.
*/
if (noaction) {
printf("\tchmod %o %s\n", ent->permissions, tempfile);
printf("\tmv %s %s\n", tempfile, realfile);
} else {
failed = fchmod(fd, ent->permissions);
if (failed)
err(1, "can't fchmod temp file '%s'", tempfile);
failed = rename(tempfile, realfile);
if (failed)
err(1, "can't mv %s to %s", tempfile, realfile);
}
if (fd >= 0)
close(fd);
}
/*
* Change the attributes of a given filename to what was specified in
* the newsyslog.conf entry. This routine is only called for files
* that newsyslog expects that it has created, and thus it is a fatal
* error if this routine finds that the file does not exist.
*/
static void
change_attrs(const char *fname, const struct conf_entry *ent)
{
int failed;
if (noaction) {
printf("\tchmod %o %s\n", ent->permissions, fname);
if (ent->uid != (uid_t)-1 || ent->gid != (gid_t)-1)
printf("\tchown %u:%u %s\n",
ent->uid, ent->gid, fname);
if (ent->flags & CE_NODUMP)
printf("\tchflags nodump %s\n", fname);
return;
}
failed = chmod(fname, ent->permissions);
if (failed) {
if (errno != EPERM)
err(1, "chmod(%s) in change_attrs", fname);
warn("change_attrs couldn't chmod(%s)", fname);
}
if (ent->uid != (uid_t)-1 || ent->gid != (gid_t)-1) {
failed = chown(fname, ent->uid, ent->gid);
if (failed)
warn("can't chown %s", fname);
}
if (ent->flags & CE_NODUMP) {
failed = chflags(fname, UF_NODUMP);
if (failed)
warn("can't chflags %s NODUMP", fname);
}
}
/*
* Parse a signal number or signal name. Returns the signal number parsed or -1
* on failure.
*/
static int
parse_signal(const char *str)
{
int sig, i;
const char *errstr;
sig = strtonum(str, 1, sys_nsig - 1, &errstr);
if (errstr == NULL)
return (sig);
if (strncasecmp(str, "SIG", 3) == 0)
str += 3;
for (i = 1; i < sys_nsig; i++) {
if (strcasecmp(str, sys_signame[i]) == 0)
return (i);
}
return (-1);
}
Index: projects/clang800-import/usr.sbin/pw/pw.8
===================================================================
--- projects/clang800-import/usr.sbin/pw/pw.8 (revision 343955)
+++ projects/clang800-import/usr.sbin/pw/pw.8 (revision 343956)
@@ -1,1072 +1,1078 @@
.\" Copyright (C) 1996
.\" David L. Nugent. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY DAVID L. NUGENT AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL DAVID L. NUGENT OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
-.Dd December 10, 2017
+.Dd February 8, 2019
.Dt PW 8
.Os
.Sh NAME
.Nm pw
.Nd create, remove, modify & display system users and groups
.Sh SYNOPSIS
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar useradd
.Oo Fl n Oc name Oo Fl u Ar uid Oc
.Op Fl C Ar config
.Op Fl q
.Op Fl c Ar comment
.Op Fl d Ar dir
.Op Fl e Ar date
.Op Fl p Ar date
.Op Fl g Ar group
.Op Fl G Ar grouplist
.Op Fl m
.Op Fl M Ar mode
.Op Fl k Ar dir
.Op Fl w Ar method
.Op Fl s Ar shell
.Op Fl o
.Op Fl L Ar class
.Op Fl h Ar fd | Fl H Ar fd
.Op Fl N
.Op Fl P
.Op Fl Y
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar useradd
.Fl D
.Op Fl C Ar config
.Op Fl q
.Op Fl b Ar dir
.Op Fl e Ar days
.Op Fl p Ar days
.Op Fl g Ar group
.Op Fl G Ar grouplist
.Op Fl k Ar dir
.Op Fl M Ar mode
.Op Fl u Ar min , Ns Ar max
.Op Fl i Ar min , Ns Ar max
.Op Fl w Ar method
.Op Fl s Ar shell
.Op Fl y Ar path
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar userdel
.Oo Fl n Oc name|uid | Fl u Ar uid
.Op Fl r
.Op Fl Y
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar usermod
.Oo Fl n Oc name|uid Oo Fl u Ar newuid Oc | Fl u Ar uid
.Op Fl C Ar config
.Op Fl q
.Op Fl c Ar comment
.Op Fl d Ar dir
.Op Fl e Ar date
.Op Fl p Ar date
.Op Fl g Ar group
.Op Fl G Ar grouplist
.Op Fl l Ar newname
.Op Fl m
.Op Fl M Ar mode
.Op Fl k Ar dir
.Op Fl w Ar method
.Op Fl s Ar shell
.Op Fl L Ar class
.Op Fl h Ar fd | Fl H Ar fd
.Op Fl N
.Op Fl P
.Op Fl Y
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar usershow
.Oo Fl n Oc name|uid | Fl u Ar uid
.Op Fl F
.Op Fl P
.Op Fl 7
.Op Fl a
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar usernext
.Op Fl C Ar config
.Op Fl q
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar groupadd
.Oo Fl n Oc name Oo Fl g Ar gid Oc
.Op Fl C Ar config
.Op Fl q
.Op Fl M Ar members
.Op Fl o
.Op Fl h Ar fd | Fl H Ar fd
.Op Fl N
.Op Fl P
.Op Fl Y
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar groupdel
.Oo Fl n Oc name|gid | Fl g Ar gid
.Op Fl Y
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar groupmod
.Oo Fl n Oc name|gid Oo Fl g Ar newgid Oc | Fl g Ar gid
.Op Fl C Ar config
.Op Fl q
.Op Fl l Ar newname
.Op Fl M Ar members
.Op Fl m Ar newmembers
.Op Fl d Ar oldmembers
.Op Fl h Ar fd | Fl H Ar fd
.Op Fl N
.Op Fl P
.Op Fl Y
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar groupshow
.Oo Fl n Oc name|gid | Fl g Ar gid
.Op Fl F
.Op Fl P
.Op Fl a
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar groupnext
.Op Fl C Ar config
.Op Fl q
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar lock
.Oo Fl n Oc name|uid | Fl u Ar uid
.Op Fl C Ar config
.Op Fl q
.Nm
.Op Fl R Ar rootdir
.Op Fl V Ar etcdir
.Ar unlock
.Oo Fl n Oc name|uid | Fl u Ar uid
.Op Fl C Ar config
.Op Fl q
.Sh DESCRIPTION
The
.Nm
utility is a command-line based editor for the system
.Ar user
and
.Ar group
files, allowing the superuser an easy-to-use and standardized way of adding,
modifying and removing users and groups.
Note that
.Nm
only operates on the local user and group files.
.Tn NIS
users and groups must be
maintained on the
.Tn NIS
server.
The
.Nm
utility handles updating the
.Pa passwd ,
.Pa master.passwd ,
.Pa group
and the secure and insecure
password database files, and must be run as root.
.Pp
The first one or two keywords provided to
.Nm
on the command line provide the context for the remainder of the arguments.
The keywords
.Ar user
and
.Ar group
may be combined with
.Ar add ,
.Ar del ,
.Ar mod ,
.Ar show ,
or
.Ar next
in any order.
(For example,
.Ar showuser ,
.Ar usershow ,
.Ar show user ,
and
.Ar user show
all mean the same thing.)
This flexibility is useful for interactive scripts calling
.Nm
for user and group database manipulation.
Following these keywords,
the user or group name or numeric id may be optionally specified as an
alternative to using the
.Fl n Ar name ,
.Fl u Ar uid ,
.Fl g Ar gid
options.
.Pp
The following flags are common to most or all modes of operation:
.Bl -tag -width "-G grouplist"
.It Fl R Ar rootdir
Specifies an alternate root directory within which
.Nm
will operate.
Any paths specified will be relative to
.Va rootdir .
.It Fl V Ar etcdir
Set an alternate location for the password, group, and configuration files.
Can be used to maintain a user/group database in an alternate location.
If this switch is specified, the system
.Pa /etc/pw.conf
will not be sourced for default configuration data,
but the file pw.conf in the specified directory will be used instead
.Pq or none, if it does not exist .
The
.Fl C
flag may be used to override this behaviour.
As an exception to the general rule where options must follow the operation
type, the
.Fl V
flag must be used on the command line before the operation keyword.
.It Fl C Ar config
By default,
.Nm
reads the file
.Pa /etc/pw.conf
to obtain policy information on how new user accounts and groups are to be created.
The
.Fl C
option specifies a different configuration file.
While most of the contents of the configuration file may be overridden via
command-line options, it may be more convenient to keep standard information in a
configuration file.
.It Fl q
Use of this option causes
.Nm
to suppress error messages,
which may be useful in interactive environments where it
is preferable to interpret status codes returned by
.Nm
rather than messing up a carefully formatted display.
.It Fl N
This option is available in
.Ar add
and
.Ar modify
operations, and tells
.Nm
to output the result of the operation without updating the user or group
databases.
You may use the
.Fl P
option to switch between standard passwd and readable formats.
.It Fl Y
Using this option with any of the update modes causes
.Nm
to run
.Xr make 1
after changing to the directory
.Pa /var/yp .
This is intended to allow automatic updating of
.Tn NIS
database files.
If separate passwd and group files are being used by
.Tn NIS ,
then use the
.Fl y Ar path
option to specify the location of the
.Tn NIS
passwd database so that
.Nm
will concurrently update it with the system password
databases.
.El
.Sh USER OPTIONS
The following options apply to the
.Ar useradd
and
.Ar usermod
commands:
.Bl -tag -width "-G grouplist"
.It Oo Fl n Oc Ar name
Required unless
.Fl u Ar uid
is given.
Specify the user/account name.
In the case of
.Ar usermod ,
this may also be a uid.
.It Fl u Ar uid
Required if
.Ar name
is not given.
Specify the user/account numeric id.
In the case of
.Ar usermod
if paired with
.Ar name ,
changes the numeric id of the named user/account.
.Pp
Usually, only one of these options is required,
as the account name will imply the uid, or vice versa.
However, there are times when both are needed.
For example, when changing the uid of an existing user with
.Ar usermod ,
or overriding the default uid when creating a new account with
.Ar useradd .
To automatically allocate the uid to a new user with
.Ar useradd ,
then do
.Em not
use the
.Fl u
option.
Either the account or userid can also be provided immediately after the
.Ar useradd ,
.Ar userdel ,
.Ar usermod
or
.Ar usershow
keywords on the command line without using the
.Fl n
or
.Fl u
options.
.El
.Bl -tag -width "-G grouplist"
.It Fl c Ar comment
This field sets the contents of the passwd GECOS field,
which normally contains up to four comma-separated fields containing the
user's full name, office or location,
and work and home phone numbers.
These sub-fields are used by convention only, however, and are optional.
If this field is to contain spaces,
the comment must be enclosed in double quotes
.Ql \&" .
Avoid using commas in this field as these are used as sub-field separators,
and the colon
.Ql \&:
character also cannot be used as this is the field separator for the passwd
file itself.
.It Fl d Ar dir
This option sets the account's home directory.
Normally,
this is only used if the home directory is to be different from the
default determined from
.Pa /etc/pw.conf
- normally
.Pa /home
with the account name as a subdirectory.
.It Fl e Ar date
Set the account's expiration date.
Format of the date is either a UNIX time in decimal, or a date in
.Ql dd-mmm-yy[yy]
format, where dd is the day,
mmm is the month, either in numeric or alphabetic format
('Jan', 'Feb', etc.), and yy[yy] is a two- or four-digit year.
This option also accepts a relative date in the form
.Ql \&+n[mhdwoy]
where
.Ql \&n
is a decimal,
octal (leading 0) or hexadecimal (leading 0x) digit followed by the
number of Minutes, Hours, Days, Weeks, Months or Years from the current date at
which the expiration date is to be set.
.It Fl p Ar date
Set the account's password expiration date.
This field is similar to the account expiration date option, except that it
applies to forced password changes.
This is set in the same manner as the
.Fl e
option.
.It Fl g Ar group
Set the account's primary group to the given group.
.Ar group
may be defined by either its name or group number.
.It Fl G Ar grouplist
Set secondary group memberships for an account.
.Ar grouplist
is a comma, space, or tab-separated list of group names or group numbers.
The user is added to the groups specified in
.Ar grouplist ,
and removed from all groups not specified.
The current login session is not affected by group membership changes,
which only take effect when the user reconnects.
Note: do not add a user to their primary group with
.Ar grouplist .
.It Fl L Ar class
This option sets the login class for the user being created.
See
.Xr login.conf 5
and
.Xr passwd 5
for more information on user login classes.
.It Fl m
This option instructs
.Nm
to attempt to create the user's home directory.
While primarily useful when adding a new account with
.Ar useradd ,
this may also be of use when moving an existing user's home directory elsewhere
on the file system.
The new home directory is populated with the contents of the
.Ar skeleton
directory, which typically contains a set of shell configuration files that the
user may personalize to taste.
Files in this directory are usually named
.Pa dot . Ns Aq Ar config
where the
.Pa dot
prefix will be stripped.
When
.Fl m
is used on an account with
.Ar usermod ,
existing configuration files in the user's home directory are
.Em not
overwritten from the skeleton files.
.Pp
When a user's home directory is created,
it will by default be a subdirectory of the
.Ar basehome
directory as specified by the
.Fl b
option (see below), bearing the name of the new account.
This can be overridden by the
.Fl d
option on the command line, if desired.
.It Fl M Ar mode
Create the user's home directory with the specified
.Ar mode ,
modified by the current
.Xr umask 2 .
If omitted, it is derived from the parent process'
.Xr umask 2 .
This option is only useful in combination with the
.Fl m
flag.
.It Fl k Ar dir
Set the
.Ar skeleton
directory, from which basic startup and configuration files are copied when
the user's home directory is created.
This option only has meaning when used with the
.Fl d
or
.Fl m
flags.
.It Fl s Ar shell
Set or changes the user's login shell to
.Ar shell .
If the path to the shell program is omitted,
.Nm
searches the
.Ar shellpath
specified in
.Pa /etc/pw.conf
and fills it in as appropriate.
Note that unless you have a specific reason to do so, you should avoid
specifying the path - this will allow
.Nm
to validate that the program exists and is executable.
Specifying a full path (or supplying a blank "" shell) avoids this check
and allows for such entries as
.Pa /nonexistent
that should be set for accounts not intended for interactive login.
.It Fl h Ar fd
This option provides a special interface by which interactive scripts can
set an account password using
.Nm .
Because the command line and environment are fundamentally insecure mechanisms
by which programs can accept information,
.Nm
will only allow setting of account and group passwords via a file descriptor
(usually a pipe between an interactive script and the program).
.Ar sh ,
.Ar bash ,
.Ar ksh
and
.Ar perl
all possess mechanisms by which this can be done.
Alternatively,
.Nm
will prompt for the user's password if
.Fl h Ar 0
is given, nominating
.Em stdin
as the file descriptor on which to read the password.
Note that this password will be read only once and is intended
for use by a script rather than for interactive use.
If you wish to have new password confirmation along the lines of
.Xr passwd 1 ,
this must be implemented as part of an interactive script that calls
.Nm .
.Pp
If a value of
.Ql \&-
is given as the argument
.Ar fd ,
then the password will be set to
.Ql \&* ,
rendering the account inaccessible via password-based login.
.It Fl H Ar fd
Read an encrypted password string from the specified file descriptor.
This is like
.Fl h ,
but the password should be supplied already encrypted in a form
suitable for writing directly to the password database.
.El
.Pp
It is possible to use
.Ar useradd
to create a new account that duplicates an existing user id.
While this is normally considered an error and will be rejected, the
.Fl o
option overrides the check for duplicates and allows the duplication of
the user id.
This may be useful if you allow the same user to login under
different contexts (different group allocations, different home
directory, different shell) while providing basically the same
permissions for access to the user's files in each account.
.Pp
The
.Ar useradd
command also has the ability to set new user and group defaults by using the
.Fl D
option.
Instead of adding a new user,
.Nm
writes a new set of defaults to its configuration file,
.Pa /etc/pw.conf .
When using the
.Fl D
option, you must not use either
.Fl n Ar name
or
.Fl u Ar uid
or an error will result.
Use of
.Fl D
changes the meaning of several command line switches in the
.Ar useradd
command.
These are:
.Bl -tag -width "-G grouplist"
.It Fl D
Set default values in
.Pa /etc/pw.conf
configuration file, or a different named configuration file if the
.Fl C Ar config
option is used.
.It Fl b Ar dir
Set the root directory in which user home directories are created.
The default value for this is
.Pa /home ,
but it may be set elsewhere as desired.
.It Fl e Ar days
Set the default account expiration period in days.
When
.Fl D
is used, the
.Ar days
argument is interpreted differently.
It must be numeric and represents the number of days after creation
that the account expires.
A value of 0 suppresses automatic calculation of the expiry date.
.It Fl p Ar days
Set the default password expiration period in days.
When
.Fl D
is used, the
.Ar days
argument is interpreted differently.
It must be numeric and represents the number of days after creation
that the password expires.
A value of 0 suppresses automatic calculation of the expiry date.
.It Fl g Ar group
Set the default group for new users.
If a blank group is specified using
.Fl g Ar \&"" ,
then new users will be allocated their own private primary group
with the same name as their login name.
If a group is supplied, either its name or uid may be given as an argument.
.It Fl G Ar grouplist
Set the default groups in which new users are granted membership.
This is a separate set of groups from the primary group.
Avoid nominating the same group as both primary and extra groups.
In other words, these extra groups determine membership in groups
.Em other than
the primary group.
.Ar grouplist
is a comma-separated list of group names or ids, and are always
stored in
.Pa /etc/pw.conf
by their symbolic names.
.It Fl L Ar class
This option sets the default login class for new users.
.It Fl k Ar dir
Set the default
.Em skeleton
directory,
from which prototype shell and other initialization files are copied when
.Nm
creates a user's home directory.
See description of
.Fl k
for naming conventions of these files.
.It Xo
.Fl u Ar min , Ns Ar max ,
.Fl i Ar min , Ns Ar max
.Xc
Set the minimum and maximum user and group ids allocated for new
accounts and groups created by
.Nm .
The default values for each are a minimum of 1000 and a maximum of 32000.
.Ar min
and
.Ar max
are both numbers, where max must be greater than min,
and both must be between 0 and 32767.
In general,
user and group ids less than 100 are reserved for use by the system,
and numbers greater than 32000 may also be reserved for special purposes
.Pq used by some system daemons .
.It Fl w Ar method
The
.Fl w
option selects the default method used to set passwords for newly created user
accounts.
.Ar method
is one of:
.Pp
.Bl -tag -width random -offset indent -compact
.It no
disable login on newly created accounts
.It yes
force the password to be the account name
.It none
force a blank password
.It random
generate a random password
.El
.Pp
The
.Ql \&random
or
.Ql \&no
methods are the most secure; in the former case,
.Nm
generates a password and prints it to stdout,
which is suitable when users are issued passwords rather than being allowed
to select their own
.Pq possibly poorly chosen
password.
The
.Ql \&no
method requires that the superuser use
.Xr passwd 1
to render the account accessible with a password.
.It Fl y Ar path
This sets the pathname of the database used by
.Tn NIS
if you are not sharing
the information from
.Pa /etc/master.passwd
directly with
.Tn NIS .
You should only set this option for
.Tn NIS
servers.
.El
.Pp
The
.Ar userdel
command has three distinct options.
The
.Fl n Ar name
and
.Fl u Ar uid
options have already been covered above.
The additional option is:
.Bl -tag -width "-G grouplist"
.It Fl r
This tells
.Nm
to remove the user's home directory and all of its contents.
The
.Nm
utility errs on the side of caution when removing files from the system.
Firstly,
it will not do so if the uid of the account being removed is also used by
another account on the system, and the 'home' directory in the password file is
a valid path that commences with the character
.Ql \&/ .
Secondly, it will only remove files and directories that are actually owned by
the user, or symbolic links owned by anyone under the user's home directory.
Finally, after deleting all contents owned by the user only empty directories
will be removed.
If any additional cleanup work is required, this is left to the administrator.
.El
.Pp
Mail spool files and crontabs are always removed when an account is deleted as
these are unconditionally attached to the user name.
Jobs queued for processing by
.Ar at
are also removed if the user's uid is unique and not also used by another
account on the system.
.Pp
The
.Ar usermod
command adds one additional option:
.Bl -tag -width "-G grouplist"
.It Fl l Ar newname
This option allows changing of an existing account name to
.Ql \&newname .
The new name must not already exist, and any attempt to duplicate an
existing account name will be rejected.
.El
.Pp
The
.Ar usershow
command allows viewing of an account in one of two formats.
By default, the format is identical to the format used in
.Pa /etc/master.passwd
with the password field replaced with a
.Ql \&* .
If the
.Fl P
option is used, then
.Nm
outputs the account details in a more human readable form.
If the
.Fl 7
option is used, the account details are shown in v7 format.
The
.Fl a
option lists all users currently on file.
Using
.Fl F
forces
.Nm
to print the details of an account even if it does not exist.
.Pp
The command
.Ar usernext
returns the next available user and group ids separated by a colon.
This is normally of interest only to interactive scripts or front-ends
that use
.Nm .
.Sh GROUP OPTIONS
The
.Fl C
and
.Fl q
options (explained at the start of the previous section) are available
with the group manipulation commands.
Other common options to all group-related commands are:
.Bl -tag -width "-m newmembers"
.It Oo Fl n Oc Ar name
Required unless
.Fl g Ar gid
is given.
Specify the group name.
In the case of
.Ar groupmod ,
this may also be a gid.
.It Fl g Ar gid
Required if
.Ar name
is not given.
Specify the group numeric id.
In the case of
.Ar groupmod
if paired with
.Ar name ,
changes the numeric id of the named group.
.Pp
As with the account name and id fields, you will usually only need
to supply one of these, as the group name implies the gid and vice
versa.
You will only need to use both when setting a specific group id
against a new group or when changing the gid of an existing group.
.It Fl M Ar memberlist
This option provides an alternative way to add existing users to a
new group (in groupadd) or replace an existing membership list (in
groupmod).
.Ar memberlist
is a comma separated list of valid and existing user names or uids.
.It Fl m Ar newmembers
Similar to
.Fl M ,
this option allows the
.Em addition
of existing users to a group without replacing the existing list of
members.
Login names or user ids may be used, and duplicate users are
silently eliminated.
.It Fl d Ar oldmembers
Similar to
.Fl M ,
this option allows the
.Em deletion
of existing users from a group without replacing the existing list of
members.
Login names or user ids may be used, and duplicate users are
silently eliminated.
.El
.Pp
.Ar groupadd
also has a
.Fl o
option that allows allocation of an existing group id to a new group.
The default action is to reject an attempt to add a group,
and this option overrides the check for duplicate group ids.
There is rarely any need to duplicate a group id.
.Pp
The
.Ar groupmod
command adds one additional option:
.Bl -tag -width "-m newmembers"
.It Fl l Ar newname
This option allows changing of an existing group name to
.Ql \&newname .
The new name must not already exist,
and any attempt to duplicate an existing group
name will be rejected.
.El
.Pp
Options for
.Ar groupshow
are the same as for
.Ar usershow ,
with the
.Fl g Ar gid
replacing
.Fl u Ar uid
to specify the group id.
The
.Fl 7
option does not apply to the
.Ar groupshow
command.
.Pp
The command
.Ar groupnext
returns the next available group id on standard output.
.Sh USER LOCKING
The
.Nm
utility
supports a simple password locking mechanism for users; it works by
prepending the string
.Ql *LOCKED*
to the beginning of the password field in
.Pa master.passwd
to prevent successful authentication.
.Pp
The
.Ar lock
and
.Ar unlock
commands take a user name or uid of the account to lock or unlock,
respectively.
The
.Fl V ,
.Fl C ,
and
.Fl q
options as described above are accepted by these commands.
.Sh NOTES
For a summary of options available with each command, you can use
.Dl pw [command] help
For example,
.Dl pw useradd help
lists all available options for the useradd operation.
.Pp
The
.Nm
utility allows 8-bit characters in the passwd GECOS field (user's full name,
office, work and home phone number subfields), but disallows them in
user login and group names.
Use 8-bit characters with caution, as connection to the Internet will
require that your mail transport program supports 8BITMIME, and will
convert headers containing 8-bit characters to 7-bit quoted-printable
format.
.Xr sendmail 8
does support this.
8-bit characters in the GECOS field should be used in
conjunction with the user's default locale and character set,
and should not be used without them.
Using 8-bit characters may also affect other
programs that transmit the contents of the GECOS field over the
Internet, such as
.Xr fingerd 8 ,
and a small number of TCP/IP clients, such as IRC, where full names
specified in the passwd file may be used by default.
.Pp
The
.Nm
utility writes a log to the
.Pa /var/log/userlog
file when actions such as user or group additions or deletions occur.
The location of this logfile can be changed in
.Xr pw.conf 5 .
.Sh FILES
.Bl -tag -width /etc/master.passwd.new -compact
.It Pa /etc/master.passwd
The user database
.It Pa /etc/passwd
A Version 7 format password file
.It Pa /etc/login.conf
The user capabilities database
.It Pa /etc/group
The group database
.It Pa /etc/pw.conf
Pw default options file
.It Pa /var/log/userlog
User/group modification logfile
.El
.Sh EXAMPLES
Add new user Glurmo Smith (gsmith).
A gsmith login group is created if not already present.
The login shell is set to
.Xr csh 1 .
A new home directory at
.Pa /home/gsmith
is created if it does not already exist.
Finally, a random password is generated and displayed:
.Bd -literal -offset indent
pw useradd -n gsmith -c "Glurmo Smith" -s /bin/csh -m -w random
.Ed
.Pp
Delete the gsmith user and their home directory, including contents.
.Bd -literal -offset indent
pw userdel -n gsmith -r
+.Ed
+.Pp
+Add the existing user jsmith to the wheel group,
+in addition to the other groups jsmith is already a member of.
+.Bd -literal -offset indent
+pw groupmod wheel -m jsmith
.Ed
.Sh EXIT STATUS
The
.Nm
utility returns EXIT_SUCCESS on successful operation.
Otherwise,
.Nm
returns one of the following exit codes defined by
.Xr sysexits 3 :
.Bl -tag -width xxxx
.It EX_USAGE
.Bl -bullet -compact
.It
Command line syntax errors (invalid keyword, unknown option).
.El
.It EX_NOPERM
.Bl -bullet -compact
.It
Attempting to run one of the update modes as non-root.
.El
.It EX_OSERR
.Bl -bullet -compact
.It
Memory allocation error.
.It
Read error from password file descriptor.
.El
.It EX_DATAERR
.Bl -bullet -compact
.It
Bad or invalid data provided or missing on the command line or
via the password file descriptor.
.It
Attempted to remove or rename the root account, or to change its uid.
.El
.It EX_OSFILE
.Bl -bullet -compact
.It
Skeleton directory is invalid or does not exist.
.It
Base home directory is invalid or does not exist.
.It
Invalid or non-existent shell specified.
.El
.It EX_NOUSER
.Bl -bullet -compact
.It
User, user id, group or group id specified does not exist.
.It
User or group recorded, added, or modified unexpectedly disappeared.
.El
.It EX_SOFTWARE
.Bl -bullet -compact
.It
No more group or user ids available within specified range.
.El
.It EX_IOERR
.Bl -bullet -compact
.It
Unable to rewrite configuration file.
.It
Error updating group or user database files.
.It
Update error for passwd or group database files.
.El
.It EX_CONFIG
.Bl -bullet -compact
.It
No base home directory configured.
.El
.El
.Sh SEE ALSO
.Xr chpass 1 ,
.Xr passwd 1 ,
.Xr umask 2 ,
.Xr group 5 ,
.Xr login.conf 5 ,
.Xr passwd 5 ,
.Xr pw.conf 5 ,
.Xr pwd_mkdb 8 ,
.Xr vipw 8
.Sh HISTORY
The
.Nm
utility was written to mimic many of the options used in the SYSV
.Em shadow
support suite, but is modified for passwd and group fields specific to
the
.Bx 4.4
operating system, and combines all of the major elements
into a single command.
Index: projects/clang800-import/usr.sbin/pwm/pwm.8
===================================================================
--- projects/clang800-import/usr.sbin/pwm/pwm.8 (revision 343955)
+++ projects/clang800-import/usr.sbin/pwm/pwm.8 (revision 343956)
@@ -1,100 +1,108 @@
.\" Copyright (c) 2018 Emmanuel Vadot <manu@freebsd.org>
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR
.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
.\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
.\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
.\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
.\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
.\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
-.Dd November 12, 2018
+.Dd January 12, 2019
.Dt PWM 8
.Os
.Sh NAME
.Nm pwm
.Nd configure pwm controller
.Sh SYNOPSIS
.Nm
.Op Fl f Ar device
.Fl c Ar channel
.Fl E
.Nm
.Op Fl f Ar device
.Fl c Ar channel
.Fl D
.Nm
.Op Fl f Ar device
.Fl c Ar channel
.Fl C
.Nm
.Op Fl f Ar device
.Fl c Ar channel
-.Fl p period
+.Fl p Ar period
.Nm
.Op Fl f Ar device
.Fl c Ar channel
-.Fl d duty
+.Fl d Ar duty
.Sh DESCRIPTION
The
.Nm
utility can be used to configure pwm controllers.
.Pp
The options are as follows:
-.Bl -tag -width ".Fl f Ar device"
+.Bl -tag -width "-c channel"
.It Fl c Ar channel
Channel number to operate on.
+.It Fl f Ar device
+Device to operate on.
+If not specified,
+.Pa /dev/pwmc0
+is used.
.It Fl E
Enable the pwm channel.
.It Fl D
Disable the pwm channel.
.It Fl C
Show the configuration of the pwm channel.
.It Fl p Ar period
Configure the period (in nanoseconds) of the pwm channel.
.It Fl d Ar duty
Configure the duty (in nanoseconds or percentage) of the pwm channel.
.El
.Sh EXAMPLES
.Bl -bullet
.It
Show the configuration of the pwm channel:
-.Pp
+.Bd -literal
pwm -f /dev/pwmc0 -C
+.Ed
.It
Configure a 50000 ns period and a 25000 ns duty cycle:
-.Pp
+.Bd -literal
pwm -f /dev/pwmc0 -p 50000 -d 25000
+.Ed
.It
Configure a 50% duty cycle:
-.Pp
+.Bd -literal
pwm -f /dev/pwmc0 -d 50%
+.Ed
.El
.Sh SEE ALSO
.Xr pwm 9 ,
.Xr pwmbus 9
.Sh HISTORY
The
.Nm
utility appeared in
.Fx 13.0 .
.Sh AUTHORS
.An -nosplit
The
.Nm
utility and this manual page were written by
.An Emmanuel Vadot Aq Mt manu@FreeBSD.org .
Index: projects/clang800-import
===================================================================
--- projects/clang800-import (revision 343955)
+++ projects/clang800-import (revision 343956)
Property changes on: projects/clang800-import
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
Merged /head:r343807-343955
