Index: projects/runtime-coverage/lib/libc/gen/syslog.3 =================================================================== --- projects/runtime-coverage/lib/libc/gen/syslog.3 (revision 325444) +++ projects/runtime-coverage/lib/libc/gen/syslog.3 (revision 325445) @@ -1,295 +1,295 @@ .\" Copyright (c) 1985, 1991, 1993 .\" The Regents of the University of California. All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. Neither the name of the University nor the names of its contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" @(#)syslog.3 8.1 (Berkeley) 6/4/93 .\" $FreeBSD$ .\" -.Dd July 21, 2015 +.Dd November 5, 2017 .Dt SYSLOG 3 .Os .Sh NAME .Nm syslog , .Nm vsyslog , .Nm openlog , .Nm closelog , .Nm setlogmask .Nd control system log .Sh LIBRARY .Lb libc .Sh SYNOPSIS .In syslog.h .In stdarg.h .Ft void .Fn syslog "int priority" "const char *message" "..." .Ft void .Fn vsyslog "int priority" "const char *message" "va_list args" .Ft void .Fn openlog "const char *ident" "int logopt" "int facility" .Ft void .Fn closelog void .Ft int .Fn setlogmask "int maskpri" .Sh DESCRIPTION The .Fn syslog function writes .Fa message to the system message logger. The message is then written to the system console, log files, logged-in users, or forwarded to other machines as appropriate. (See .Xr syslogd 8 . ) .Pp The message is identical to a .Xr printf 3 format string, except that .Ql %m is replaced by the current error message. (As denoted by the global variable .Va errno ; see .Xr strerror 3 . ) A trailing newline is added if none is present. .Pp The .Fn vsyslog function is an alternate form in which the arguments have already been captured using the variable-length argument facilities of .Xr stdarg 3 . .Pp The message is tagged with .Fa priority . Priorities are encoded as a .Fa facility and a .Em level . The facility describes the part of the system generating the message. The level is selected from the following .Em ordered (high to low) list: .Bl -tag -width LOG_AUTHPRIV .It Dv LOG_EMERG A panic condition. This is normally broadcast to all users. .It Dv LOG_ALERT A condition that should be corrected immediately, such as a corrupted system database. 
.It Dv LOG_CRIT Critical conditions, e.g., hard device errors. .It Dv LOG_ERR Errors. .It Dv LOG_WARNING Warning messages. .It Dv LOG_NOTICE Conditions that are not error conditions, but should possibly be handled specially. .It Dv LOG_INFO Informational messages. .It Dv LOG_DEBUG Messages that contain information normally of use only when debugging a program. .El .Pp The .Fn openlog function provides for more specialized processing of the messages sent by .Fn syslog and .Fn vsyslog . The .Fa ident argument is a string that will be prepended to every message. The .Fa logopt argument is a bit field specifying logging options, which is formed by .Tn OR Ns 'ing one or more of the following values: .Bl -tag -width LOG_AUTHPRIV .It Dv LOG_CONS If .Fn syslog cannot pass the message to .Xr syslogd 8 , it will attempt to write the message to the console .Pq Dq Pa /dev/console . .It Dv LOG_NDELAY Open the connection to .Xr syslogd 8 immediately. Normally the open is delayed until the first message is logged. Useful for programs that need to manage the order in which file descriptors are allocated. .It Dv LOG_PERROR Write the message to standard error output as well as to the system log. .It Dv LOG_PID Log the process ID with each message: useful for identifying instantiations of daemons. .El .Pp The .Fa facility argument encodes a default facility to be assigned to all messages that do not have an explicit facility encoded: .Bl -tag -width LOG_AUTHPRIV .It Dv LOG_AUTH The authorization system: .Xr login 1 , .Xr su 1 , .Xr getty 8 , etc. .It Dv LOG_AUTHPRIV The same as .Dv LOG_AUTH , but logged to a file readable only by selected individuals. .It Dv LOG_CONSOLE Messages written to .Pa /dev/console by the kernel console output driver. .It Dv LOG_CRON The cron daemon: .Xr cron 8 . .It Dv LOG_DAEMON System daemons, such as .Xr routed 8 , that are not provided for explicitly by other facilities. .It Dv LOG_FTP The file transfer protocol daemons: .Xr ftpd 8 , .Xr tftpd 8 . .It Dv LOG_KERN Messages generated by the kernel. These cannot be generated by any user processes. .It Dv LOG_LPR The line printer spooling system: .Xr lpr 1 , .Xr lpc 8 , .Xr lpd 8 , etc. .It Dv LOG_MAIL The mail system. .It Dv LOG_NEWS The network news system. .It Dv LOG_NTP The network time protocol system. .It Dv LOG_SECURITY Security subsystems, such as .Xr ipfw 4 . .It Dv LOG_SYSLOG Messages generated internally by .Xr syslogd 8 . .It Dv LOG_USER Messages generated by random user processes. This is the default facility identifier if none is specified. .It Dv LOG_UUCP The uucp system. .It Dv LOG_LOCAL0 Reserved for local use. Similarly for .Dv LOG_LOCAL1 through .Dv LOG_LOCAL7 . .El .Pp The .Fn closelog function can be used to close the log file. .Pp The .Fn setlogmask function sets the log priority mask to .Fa maskpri and returns the previous mask. Calls to .Fn syslog with a priority not set in .Fa maskpri are rejected. The mask for an individual priority .Fa pri is calculated by the macro .Fn LOG_MASK pri ; the mask for all priorities up to and including .Fa toppri is given by the macro .Fn LOG_UPTO toppri . The default allows all priorities to be logged. .Sh RETURN VALUES The routines .Fn closelog , .Fn openlog , .Fn syslog , and .Fn vsyslog return no value. .Pp The routine .Fn setlogmask always returns the previous log mask level.
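Taken together, the usual call sequence is openlog(), optionally setlogmask(), any number of syslog() calls, then closelog(). A minimal sketch in C (the tag "mydaemon" and the configuration path are illustrative only):

	#include <syslog.h>

	int
	main(void)
	{
		/* Tag every message, record the PID, default to the daemon facility. */
		openlog("mydaemon", LOG_PID | LOG_NDELAY, LOG_DAEMON);

		/* Allow LOG_NOTICE and more severe; reject LOG_INFO and LOG_DEBUG. */
		(void)setlogmask(LOG_UPTO(LOG_NOTICE));

		syslog(LOG_WARNING, "cannot open %s: %m", "/etc/mydaemon.conf");
		syslog(LOG_DEBUG, "masked by setlogmask(); never delivered");

		closelog();
		return (0);
	}
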
.Sh EXAMPLES .Bd -literal -offset indent -compact syslog(LOG_ALERT, "who: internal error 23"); openlog("ftpd", LOG_PID | LOG_NDELAY, LOG_FTP); setlogmask(LOG_UPTO(LOG_ERR)); syslog(LOG_INFO, "Connection from host %d", CallingHost); -syslog(LOG_INFO|LOG_LOCAL2, "foobar error: %m"); +syslog(LOG_ERR|LOG_LOCAL2, "foobar error: %m"); .Ed .Sh SEE ALSO .Xr logger 1 , .Xr syslogd 8 .Sh HISTORY These functions appeared in .Bx 4.2 . .Sh BUGS Never pass a string with user-supplied data as a format without using .Ql %s . An attacker can put format specifiers in the string to mangle your stack, leading to a possible security hole. This holds true even if the string was built using a function like .Fn snprintf , as the resulting string may still contain user-supplied conversion specifiers for later interpolation by .Fn syslog . .Pp Always use the proper secure idiom: .Pp .Dl syslog(priority, "%s", string); Index: projects/runtime-coverage/lib/libclang_rt/Makefile.inc =================================================================== --- projects/runtime-coverage/lib/libclang_rt/Makefile.inc (revision 325444) +++ projects/runtime-coverage/lib/libclang_rt/Makefile.inc (revision 325445) @@ -1,39 +1,39 @@ # $FreeBSD$ .include # NOTE: based on TARGET_ABI/TARGET_CPUTYPE, set in Makefile.inc1 . .if ${MACHINE} == "arm" .if ${MACHINE_ARCH:Marmv[67]*} != "" && ${CPUTYPE:M*soft*} == "" CRTARCH= armhf .endif .endif CRTARCH?= ${TARGET_CPUARCH:C/amd64/x86_64/} CRTSRC= ${SRCTOP}/contrib/compiler-rt .PATH: ${CRTSRC}/lib CLANGDIR= /usr/lib/clang/5.0.0 LIBDIR= ${CLANGDIR}/lib/freebsd NO_PIC= MK_PROFILE= no WARNS?= 0 SSP_CFLAGS= CFLAGS+= -DNDEBUG CFLAGS+= ${PICFLAG} CFLAGS+= -fno-builtin CFLAGS+= -fno-exceptions CXXFLAGS+= -fno-rtti -.if ${COMPILER_VERSION} >= 30700 +.if ${COMPILER_TYPE} == clang && ${COMPILER_VERSION} >= 30700 CFLAGS.clang+= -fno-sanitize=safe-stack .endif CFLAGS+= -fno-stack-protector CFLAGS+= -funwind-tables CXXFLAGS+= -fvisibility-inlines-hidden CXXFLAGS+= -fvisibility=hidden CFLAGS+= -I${CRTSRC}/lib CXXFLAGS+= -std=c++11 Index: projects/runtime-coverage/share/man/man4/md.4 =================================================================== --- projects/runtime-coverage/share/man/man4/md.4 (revision 325444) +++ projects/runtime-coverage/share/man/man4/md.4 (revision 325445) @@ -1,112 +1,124 @@ .\" ---------------------------------------------------------------------------- .\" "THE BEER-WARE LICENSE" (Revision 42): .\" wrote this file. As long as you retain this notice you .\" can do whatever you want with this stuff. If we meet some day, and you think .\" this stuff is worth it, you can buy me a beer in return. Poul-Henning Kamp .\" ---------------------------------------------------------------------------- .\" .\" $FreeBSD$ .\" -.Dd October 30, 2007 +.Dd November 5, 2017 .Dt MD 4 .Os .Sh NAME .Nm md .Nd memory disk .Sh SYNOPSIS -.Cd device md +To compile this driver into the kernel, +place the following lines in your +kernel configuration file: +.Bd -ragged -offset indent +.Cd "device md" +.Ed +.Pp +Alternatively, to load the driver as a +module at boot time, place the following line in +.Xr loader.conf 5 : +.Bd -literal -offset indent +geom_md_load="YES" +.Ed .Sh DESCRIPTION The .Nm driver provides support for four kinds of memory backed virtual disks: .Bl -tag -width preload .It Cm malloc Backing store is allocated using .Xr malloc 9 . Only one malloc-bucket is used, which means that all .Nm devices with .Cm malloc backing must share the malloc-per-bucket-quota. 
The exact size of this quota varies, in particular with the amount of RAM in the system. The exact value can be determined with .Xr vmstat 8 . .It Cm preload A file loaded by .Xr loader 8 with type .Sq md_image is used for backing store. For backwards compatibility the type .Sq mfs_root is also recognized. If the kernel is created with option .Dv MD_ROOT the first preloaded image found will become the root file system. .It Cm vnode A regular file is used as backing store. This allows for mounting ISO images without the tedious detour over actual physical media. .It Cm swap Backing store is allocated from buffer memory. Pages get pushed out to the swap when the system is under memory pressure, otherwise they stay in the operating memory. Using .Cm swap backing is generally preferable over .Cm malloc backing. .El .Pp For more information, please see .Xr mdconfig 8 . .Sh EXAMPLES To create a kernel with a ramdisk or MD file system, your kernel config needs the following options: .Bd -literal -offset indent options MD_ROOT # MD is a potential root device options MD_ROOT_SIZE=8192 # 8MB ram disk makeoptions MFS_IMAGE=/h/foo/ARM-MD options ROOTDEVNAME=\\"ufs:md0\\" .Ed .Pp The image in .Pa /h/foo/ARM-MD will be loaded as the initial image each boot. To create the image to use, please follow the steps to create a file-backed disk found in the .Xr mdconfig 8 man page. Other tools will also create these images, such as NanoBSD. .Sh SEE ALSO .Xr gpart 8 , .Xr loader 8 , .Xr mdconfig 8 , .Xr mdmfs 8 , .Xr newfs 8 , .Xr vmstat 8 .Sh HISTORY The .Nm driver first appeared in .Fx 4.0 as a cleaner replacement for the MFS functionality previously used in .Tn PicoBSD and in the .Fx installation process. .Pp The .Nm driver did a hostile takeover of the .Xr vn 4 driver in .Fx 5.0 . .Sh AUTHORS The .Nm driver was written by .An Poul-Henning Kamp Aq Mt phk@FreeBSD.org . Index: projects/runtime-coverage/share/mk/bsd.obj.mk =================================================================== --- projects/runtime-coverage/share/mk/bsd.obj.mk (revision 325444) +++ projects/runtime-coverage/share/mk/bsd.obj.mk (revision 325445) @@ -1,248 +1,248 @@ # $FreeBSD$ # # The include file handles creating the 'obj' directory # and cleaning up object files, etc. # # +++ variables +++ # # CLEANDIRS Additional directories to remove for the clean target. # # CLEANFILES Additional files to remove for the clean target. # # MAKEOBJDIR A pathname for the directory where the targets # are built. Note: MAKEOBJDIR is an *environment* variable # and works properly only if set as an environment variable, # not as a global or command line variable! # # E.g. use `env MAKEOBJDIR=temp-obj make' # # MAKEOBJDIRPREFIX Specifies somewhere other than /usr/obj to root the object # tree. Note: MAKEOBJDIRPREFIX is an *environment* variable # and works properly only if set as an environment variable, # not as a global or command line variable! # # E.g. use `env MAKEOBJDIRPREFIX=/somewhere/obj make' # # NO_OBJ Do not create object directories. This should not be set # if anything is built. # # +++ targets +++ # # clean: # remove ${CLEANFILES}; remove ${CLEANDIRS} and all contents. # # cleandir: # remove the build directory (and all its contents) created by obj # # obj: # create build directory. # .if !target(____) ____: .include .if ${MK_AUTO_OBJ} == "yes" # it is done by now objwarn: obj: CANONICALOBJDIR= ${.OBJDIR} # This is also done in bsd.init.mk .if defined(NO_OBJ) # but this makefile does not want it! 
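# Pointing .OBJDIR back at the source directory effectively opts this
# makefile out of the object tree that auto.obj.mk set up.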
.OBJDIR: ${.CURDIR} .endif # Handle special case where SRCS is full-pathed and requires # nested objdirs. This duplicates some auto.obj.mk logic. .if (!empty(SRCS:M*/*) || !empty(DPSRCS:M*/*)) && \ (${.TARGETS} == "" || ${.TARGETS:Nclean*:N*clean:Ndestroy*} != "") _wantdirs= ${SRCS:M*/*:H} ${DPSRCS:M*/*:H} .if !empty(_wantdirs) _wantdirs:= ${_wantdirs:O:u} _needdirs= .for _dir in ${_wantdirs} .if !exists(${.OBJDIR}/${_dir}/) _needdirs+= ${_dir} .endif .endfor .endif .if !empty(_needdirs) #_mkneededdirs!= umask ${OBJDIR_UMASK:U002}; ${Mkdirs} ${_needdirs} __objdir_made != umask ${OBJDIR_UMASK:U002}; ${Mkdirs}; \ for dir in ${_needdirs}; do \ dir=${.OBJDIR}/$${dir}; \ ${ECHO_TRACE} "[Creating nested objdir $${dir}...]" >&2; \ Mkdirs $${dir}; \ done .endif .endif # !empty(SRCS:M*/*) || !empty(DPSRCS:M*/*) .elif !empty(MAKEOBJDIRPREFIX) CANONICALOBJDIR:=${MAKEOBJDIRPREFIX}${.CURDIR} .elif defined(MAKEOBJDIR) && ${MAKEOBJDIR:M/*} != "" CANONICALOBJDIR:=${MAKEOBJDIR} OBJTOP?= ${MAKEOBJDIR} .else CANONICALOBJDIR:=/usr/obj${.CURDIR} .endif -.if defined(SRCTOP) && \ +.if defined(SRCTOP) && defined(RELDIR) && \ (${CANONICALOBJDIR} == /${RELDIR} || ${.OBJDIR} == /${RELDIR}) .error .OBJDIR incorrectly set to /${RELDIR} .endif OBJTOP?= ${.OBJDIR:S,${.CURDIR},,}${SRCTOP} # # Warn of unorthodox object directory. # # The following directories are tried in order for ${.OBJDIR}: # # 1. ${MAKEOBJDIRPREFIX}/`pwd` # 2. ${MAKEOBJDIR} # 3. obj.${MACHINE} # 4. obj # 5. /usr/obj/`pwd` # 6. ${.CURDIR} # # If ${.OBJDIR} is constructed using canonical cases 1 or 5, or # case 2 (using MAKEOBJDIR), don't issue a warning. Otherwise, # issue a warning differentiating between cases 6 and (3 or 4). # objwarn: .PHONY .if !defined(NO_OBJ) && ${.OBJDIR} != ${CANONICALOBJDIR} && \ !(defined(MAKEOBJDIRPREFIX) && exists(${CANONICALOBJDIR}/)) && \ !(defined(MAKEOBJDIR) && exists(${MAKEOBJDIR}/)) .if ${.OBJDIR} == ${.CURDIR} @${ECHO} "Warning: Object directory not changed from original ${.CURDIR}" .elif exists(${.CURDIR}/obj.${MACHINE}/) || exists(${.CURDIR}/obj/) @${ECHO} "Warning: Using ${.OBJDIR} as object directory instead of\ canonical ${CANONICALOBJDIR}" .endif .endif beforebuild: objwarn .if !defined(NO_OBJ) .if !target(obj) obj: .PHONY @if ! test -d ${CANONICALOBJDIR}/; then \ mkdir -p ${CANONICALOBJDIR}; \ if ! test -d ${CANONICALOBJDIR}/; then \ ${ECHO} "Unable to create ${CANONICALOBJDIR}."; \ exit 1; \ fi; \ ${ECHO} "${CANONICALOBJDIR} created for ${.CURDIR}"; \ fi .for dir in ${SRCS:H:O:u} ${DPSRCS:H:O:u} @if ! test -d ${CANONICALOBJDIR}/${dir}/; then \ mkdir -p ${CANONICALOBJDIR}/${dir}; \ if ! test -d ${CANONICALOBJDIR}/${dir}/; then \ ${ECHO} "Unable to create ${CANONICALOBJDIR}/${dir}."; \ exit 1; \ fi; \ ${ECHO} "${CANONICALOBJDIR}/${dir} created for ${.CURDIR}"; \ fi .endfor .endif .if !target(objlink) objlink: @if test -d ${CANONICALOBJDIR}/; then \ rm -f ${.CURDIR}/obj; \ ln -s ${CANONICALOBJDIR} ${.CURDIR}/obj; \ else \ echo "No ${CANONICALOBJDIR} to link to - do a make obj."; \ fi .endif .endif # !defined(NO_OBJ) # # where would that obj directory be? 
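# For example, for .CURDIR /usr/src/bin/ls with the default object root this
# would be /usr/obj/usr/src/bin/ls; `make whereobj' simply prints whatever
# ${.OBJDIR} resolved to.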
# .if !target(whereobj) whereobj: @echo ${.OBJDIR} .endif # Same check in bsd.progs.mk .if ${CANONICALOBJDIR} != ${.CURDIR} && exists(${CANONICALOBJDIR}/) && \ (${MK_AUTO_OBJ} == "no" || ${.TARGETS:Nclean*:N*clean:Ndestroy*} == "") cleanobj: -rm -rf ${CANONICALOBJDIR} .else cleanobj: clean cleandepend .endif @if [ -L ${.CURDIR}/obj ]; then rm -f ${.CURDIR}/obj; fi # Tell bmake not to look for generated files via .PATH NOPATH_FILES+= ${CLEANFILES} .if !empty(NOPATH_FILES) .NOPATH: ${NOPATH_FILES} .endif .if !target(clean) clean: .if defined(CLEANFILES) && !empty(CLEANFILES) rm -f ${CLEANFILES} .endif .if defined(CLEANDIRS) && !empty(CLEANDIRS) -rm -rf ${CLEANDIRS} .endif .endif .ORDER: clean all .if ${MK_AUTO_OBJ} == "yes" .ORDER: cleanobj all .ORDER: cleandir all .endif .include cleandir: .WAIT cleanobj .if make(destroy*) && defined(OBJROOT) # this (rm -rf objdir) is much faster and more reliable than cleaning. # just in case we are playing games with these... _OBJDIR?= ${.OBJDIR} _CURDIR?= ${.CURDIR} # destroy almost everything destroy: destroy-all destroy-all: # just remove our objdir destroy-arch: .NOMETA .if ${_OBJDIR} != ${_CURDIR} cd ${_CURDIR} && rm -rf ${_OBJDIR} .endif .if defined(HOST_OBJTOP) destroy-host: destroy.host destroy.host: .NOMETA cd ${_CURDIR} && rm -rf ${HOST_OBJTOP}/${RELDIR:N.} .endif .if make(destroy-all) && ${RELDIR} == "." destroy-all: destroy-stage .endif # remove the stage tree destroy-stage: .NOMETA .if defined(STAGE_ROOT) cd ${_CURDIR} && rm -rf ${STAGE_ROOT} .endif # allow parallel destruction _destroy_machine_list = common host ${ALL_MACHINE_LIST} .for m in ${_destroy_machine_list:O:u} destroy-all: destroy.$m .if !target(destroy.$m) destroy.$m: .NOMETA .if ${_OBJDIR} != ${_CURDIR} cd ${_CURDIR} && rm -rf ${OBJROOT}$m*/${RELDIR:N.} .endif .endif .endfor .endif .endif # !target(____) Index: projects/runtime-coverage/sys/arm/arm/elf_trampoline.c =================================================================== --- projects/runtime-coverage/sys/arm/arm/elf_trampoline.c (revision 325444) +++ projects/runtime-coverage/sys/arm/arm/elf_trampoline.c (revision 325445) @@ -1,762 +1,738 @@ /*- * Copyright (c) 2005 Olivier Houchard. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * Since we are compiled outside of the normal kernel build process, we * need to include opt_global.h manually. 
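 * (A normal kernel build generates the opt_*.h option headers and puts them
 * on the include path automatically; this standalone trampoline has to pull
 * them in by hand.)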
*/ #include "opt_global.h" #include "opt_kernname.h" #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include +#if __ARM_ARCH >= 6 +#error "elf_trampline is not supported on ARMv6/v7 platforms" +#endif extern char kernel_start[]; extern char kernel_end[]; extern void *_end; void _start(void); void __start(void); void __startC(unsigned r0, unsigned r1, unsigned r2, unsigned r3); extern unsigned int cpu_ident(void); -extern void armv6_idcache_wbinv_all(void); -extern void armv7_idcache_wbinv_all(void); extern void do_call(void *, void *, void *, int); #define GZ_HEAD 0xa #if defined(CPU_ARM9) #define cpu_idcache_wbinv_all arm9_idcache_wbinv_all extern void arm9_idcache_wbinv_all(void); #elif defined(CPU_FA526) #define cpu_idcache_wbinv_all fa526_idcache_wbinv_all extern void fa526_idcache_wbinv_all(void); #elif defined(CPU_ARM9E) #define cpu_idcache_wbinv_all armv5_ec_idcache_wbinv_all extern void armv5_ec_idcache_wbinv_all(void); -#elif defined(CPU_ARM1176) -#define cpu_idcache_wbinv_all armv6_idcache_wbinv_all #elif defined(CPU_XSCALE_PXA2X0) || defined(CPU_XSCALE_IXP425) #define cpu_idcache_wbinv_all xscale_cache_purgeID extern void xscale_cache_purgeID(void); #elif defined(CPU_XSCALE_81342) #define cpu_idcache_wbinv_all xscalec3_cache_purgeID extern void xscalec3_cache_purgeID(void); -#elif defined(CPU_MV_PJ4B) -#if !defined(SOC_MV_ARMADAXP) -#define cpu_idcache_wbinv_all armv6_idcache_wbinv_all -extern void armv6_idcache_wbinv_all(void); -#else -#define cpu_idcache_wbinv_all() armadaxp_idcache_wbinv_all #endif -#endif /* CPU_MV_PJ4B */ #ifdef CPU_XSCALE_81342 #define cpu_l2cache_wbinv_all xscalec3_l2cache_purge extern void xscalec3_l2cache_purge(void); #elif defined(SOC_MV_KIRKWOOD) || defined(SOC_MV_DISCOVERY) #define cpu_l2cache_wbinv_all sheeva_l2cache_wbinv_all extern void sheeva_l2cache_wbinv_all(void); -#elif defined(CPU_CORTEXA8) || defined(CPU_CORTEXA_MP) || defined(CPU_KRAIT) -#define cpu_idcache_wbinv_all armv7_idcache_wbinv_all -#define cpu_l2cache_wbinv_all() #else #define cpu_l2cache_wbinv_all() #endif -static void armadaxp_idcache_wbinv_all(void); int arm_picache_size; int arm_picache_line_size; int arm_picache_ways; int arm_pdcache_size; /* and unified */ int arm_pdcache_line_size = 32; int arm_pdcache_ways; int arm_pcache_type; int arm_pcache_unified; int arm_dcache_align; int arm_dcache_align_mask; int arm_dcache_min_line_size = 32; int arm_icache_min_line_size = 32; int arm_idcache_min_line_size = 32; u_int arm_cache_level; u_int arm_cache_type[14]; u_int arm_cache_loc; /* Additional cache information local to this file. Log2 of some of the above numbers. 
*/ static int arm_dcache_l2_nsets; static int arm_dcache_l2_assoc; static int arm_dcache_l2_linesize; /* * Boot parameters */ static struct arm_boot_params s_boot_params; extern int arm9_dcache_sets_inc; extern int arm9_dcache_sets_max; extern int arm9_dcache_index_max; extern int arm9_dcache_index_inc; static __inline void * memcpy(void *dst, const void *src, int len) { const char *s = src; char *d = dst; while (len) { if (0 && len >= 4 && !((vm_offset_t)d & 3) && !((vm_offset_t)s & 3)) { *(uint32_t *)d = *(uint32_t *)s; s += 4; d += 4; len -= 4; } else { *d++ = *s++; len--; } } return (dst); } static __inline void bzero(void *addr, int count) { char *tmp = (char *)addr; while (count > 0) { if (count >= 4 && !((vm_offset_t)tmp & 3)) { *(uint32_t *)tmp = 0; tmp += 4; count -= 4; } else { *tmp = 0; tmp++; count--; } } } static void arm9_setup(void); void _startC(unsigned r0, unsigned r1, unsigned r2, unsigned r3) { int tmp1; unsigned int sp = ((unsigned int)&_end & ~3) + 4; unsigned int pc, kernphysaddr; s_boot_params.abp_r0 = r0; s_boot_params.abp_r1 = r1; s_boot_params.abp_r2 = r2; s_boot_params.abp_r3 = r3; /* * Figure out the physical address the kernel was loaded at. This * assumes the entry point (this code right here) is in the first page, * which will always be the case for this trampoline code. */ __asm __volatile("mov %0, pc\n" : "=r" (pc)); kernphysaddr = pc & ~PAGE_MASK; #if defined(FLASHADDR) && defined(PHYSADDR) && defined(LOADERRAMADDR) if ((FLASHADDR > LOADERRAMADDR && pc >= FLASHADDR) || (FLASHADDR < LOADERRAMADDR && pc < LOADERRAMADDR)) { /* * We're running from flash, so just copy the whole thing * from flash to memory. * This is far from optimal, we could do the relocation or * the unzipping directly from flash to memory to avoid this * needless copy, but it would require knowing the flash * physical address. */ unsigned int target_addr; unsigned int tmp_sp; uint32_t src_addr = (uint32_t)&_start - PHYSADDR + FLASHADDR + (pc - FLASHADDR - ((uint32_t)&_startC - PHYSADDR)) & 0xfffff000; target_addr = (unsigned int)&_start - PHYSADDR + LOADERRAMADDR; tmp_sp = target_addr + 0x100000 + (unsigned int)&_end - (unsigned int)&_start; memcpy((char *)target_addr, (char *)src_addr, (unsigned int)&_end - (unsigned int)&_start); /* Temporarily set the sp and jump to the new location.
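 * The boot loader's original r0-r3 arguments are re-materialized from
 * s_boot_params before the jump, so the relocated copy enters exactly as
 * the original did.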
*/ __asm __volatile( "mov sp, %1\n" "mov r0, %2\n" "mov r1, %3\n" "mov r2, %4\n" "mov r3, %5\n" "mov pc, %0\n" : : "r" (target_addr), "r" (tmp_sp), "r" (s_boot_params.abp_r0), "r" (s_boot_params.abp_r1), "r" (s_boot_params.abp_r2), "r" (s_boot_params.abp_r3) : "r0", "r1", "r2", "r3"); } #endif #ifdef KZIP sp += KERNSIZE + 0x100; sp &= ~(L1_TABLE_SIZE - 1); sp += 2 * L1_TABLE_SIZE; #endif sp += 1024 * 1024; /* Should be enough for a stack */ __asm __volatile("adr %0, 2f\n" "bic %0, %0, #0xff000000\n" "and %1, %1, #0xff000000\n" "orr %0, %0, %1\n" "mrc p15, 0, %1, c1, c0, 0\n" /* CP15_SCTLR(%1)*/ "bic %1, %1, #1\n" /* Disable MMU */ "orr %1, %1, #(4 | 8)\n" /* Add DC enable, WBUF enable */ "orr %1, %1, #0x1000\n" /* Add IC enable */ "orr %1, %1, #(0x800)\n" /* BPRD enable */ "mcr p15, 0, %1, c1, c0, 0\n" /* CP15_SCTLR(%1)*/ "nop\n" "nop\n" "nop\n" "mov pc, %0\n" "2: nop\n" "mov sp, %2\n" : "=r" (tmp1), "+r" (kernphysaddr), "+r" (sp)); #ifndef KZIP #ifdef CPU_ARM9 /* So that idcache_wbinv works; */ if ((cpu_ident() & 0x0000f000) == 0x00009000) arm9_setup(); #endif #endif __start(); } static void get_cachetype_cp15() { u_int ctype, isize, dsize, cpuid; u_int clevel, csize, i, sel; u_int multiplier; u_char type; __asm __volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctype)); cpuid = cpu_ident(); /* * ...and thus spake the ARM ARM: * * If an value corresponding to an unimplemented or * reserved ID register is encountered, the System Control * processor returns the value of the main ID register. */ if (ctype == cpuid) goto out; if (CPU_CT_FORMAT(ctype) == CPU_CT_ARMV7) { /* Resolve minimal cache line sizes */ arm_dcache_min_line_size = 1 << (CPU_CT_DMINLINE(ctype) + 2); arm_icache_min_line_size = 1 << (CPU_CT_IMINLINE(ctype) + 2); arm_idcache_min_line_size = (arm_dcache_min_line_size > arm_icache_min_line_size ? arm_icache_min_line_size : arm_dcache_min_line_size); __asm __volatile("mrc p15, 1, %0, c0, c0, 1" : "=r" (clevel)); arm_cache_level = clevel; arm_cache_loc = CPU_CLIDR_LOC(arm_cache_level) + 1; i = 0; while ((type = (clevel & 0x7)) && i < 7) { if (type == CACHE_DCACHE || type == CACHE_UNI_CACHE || type == CACHE_SEP_CACHE) { sel = i << 1; __asm __volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (sel)); __asm __volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (csize)); arm_cache_type[sel] = csize; } if (type == CACHE_ICACHE || type == CACHE_SEP_CACHE) { sel = (i << 1) | 1; __asm __volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (sel)); __asm __volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (csize)); arm_cache_type[sel] = csize; } i++; clevel >>= 3; } } else { if ((ctype & CPU_CT_S) == 0) arm_pcache_unified = 1; /* * If you want to know how this code works, go read the ARM ARM. */ arm_pcache_type = CPU_CT_CTYPE(ctype); if (arm_pcache_unified == 0) { isize = CPU_CT_ISIZE(ctype); multiplier = (isize & CPU_CT_xSIZE_M) ? 3 : 2; arm_picache_line_size = 1U << (CPU_CT_xSIZE_LEN(isize) + 3); if (CPU_CT_xSIZE_ASSOC(isize) == 0) { if (isize & CPU_CT_xSIZE_M) arm_picache_line_size = 0; /* not present */ else arm_picache_ways = 1; } else { arm_picache_ways = multiplier << (CPU_CT_xSIZE_ASSOC(isize) - 1); } arm_picache_size = multiplier << (CPU_CT_xSIZE_SIZE(isize) + 8); } dsize = CPU_CT_DSIZE(ctype); multiplier = (dsize & CPU_CT_xSIZE_M) ? 
3 : 2; arm_pdcache_line_size = 1U << (CPU_CT_xSIZE_LEN(dsize) + 3); if (CPU_CT_xSIZE_ASSOC(dsize) == 0) { if (dsize & CPU_CT_xSIZE_M) arm_pdcache_line_size = 0; /* not present */ else arm_pdcache_ways = 1; } else { arm_pdcache_ways = multiplier << (CPU_CT_xSIZE_ASSOC(dsize) - 1); } arm_pdcache_size = multiplier << (CPU_CT_xSIZE_SIZE(dsize) + 8); arm_dcache_align = arm_pdcache_line_size; arm_dcache_l2_assoc = CPU_CT_xSIZE_ASSOC(dsize) + multiplier - 2; arm_dcache_l2_linesize = CPU_CT_xSIZE_LEN(dsize) + 3; arm_dcache_l2_nsets = 6 + CPU_CT_xSIZE_SIZE(dsize) - CPU_CT_xSIZE_ASSOC(dsize) - CPU_CT_xSIZE_LEN(dsize); out: arm_dcache_align_mask = arm_dcache_align - 1; } } static void arm9_setup(void) { get_cachetype_cp15(); arm9_dcache_sets_inc = 1U << arm_dcache_l2_linesize; arm9_dcache_sets_max = (1U << (arm_dcache_l2_linesize + arm_dcache_l2_nsets)) - arm9_dcache_sets_inc; arm9_dcache_index_inc = 1U << (32 - arm_dcache_l2_assoc); arm9_dcache_index_max = 0U - arm9_dcache_index_inc; } -static void -armadaxp_idcache_wbinv_all(void) -{ - uint32_t feat; - - __asm __volatile("mrc p15, 0, %0, c0, c1, 0" : "=r" (feat)); - if (feat & ARM_PFR0_THUMBEE_MASK) - armv7_idcache_wbinv_all(); - else - armv6_idcache_wbinv_all(); - -} #ifdef KZIP static unsigned char *orig_input, *i_input, *i_output; static u_int memcnt; /* Memory allocated: blocks */ static size_t memtot; /* Memory allocated: bytes */ /* * Library functions required by inflate(). */ #define MEMSIZ 0x8000 /* * Allocate memory block. */ unsigned char * kzipmalloc(int size) { void *ptr; static u_char mem[MEMSIZ]; if (memtot + size > MEMSIZ) return NULL; ptr = mem + memtot; memtot += size; memcnt++; return ptr; } /* * Free allocated memory block. */ void kzipfree(void *ptr) { memcnt--; if (!memcnt) memtot = 0; } void putstr(char *dummy) { } static int input(void *dummy) { if ((size_t)(i_input - orig_input) >= KERNCOMPSIZE) { return (GZ_EOF); } return *i_input++; } static int output(void *dummy, unsigned char *ptr, unsigned long len) { memcpy(i_output, ptr, len); i_output += len; return (0); } static void * inflate_kernel(void *kernel, void *startaddr) { struct inflate infl; unsigned char slide[GZ_WSIZE]; orig_input = kernel; memcnt = memtot = 0; i_input = (unsigned char *)kernel + GZ_HEAD; if (((char *)kernel)[3] & 0x18) { while (*i_input) i_input++; i_input++; } i_output = startaddr; bzero(&infl, sizeof(infl)); infl.gz_input = input; infl.gz_output = output; infl.gz_slide = slide; inflate(&infl); return ((char *)(((vm_offset_t)i_output & ~3) + 4)); } #endif void * load_kernel(unsigned int kstart, unsigned int curaddr, unsigned int func_end, int d) { Elf32_Ehdr *eh; Elf32_Phdr phdr[64] /* XXX */, *php; Elf32_Shdr shdr[64] /* XXX */; int i,j; void *entry_point; int symtabindex = -1; int symstrindex = -1; vm_offset_t lastaddr = 0; Elf_Addr ssym = 0; Elf_Dyn *dp; struct arm_boot_params local_boot_params; eh = (Elf32_Ehdr *)kstart; ssym = 0; entry_point = (void*)eh->e_entry; memcpy(phdr, (void *)(kstart + eh->e_phoff ), eh->e_phnum * sizeof(phdr[0])); /* Determine lastaddr. */ for (i = 0; i < eh->e_phnum; i++) { if (lastaddr < (phdr[i].p_vaddr - KERNVIRTADDR + curaddr + phdr[i].p_memsz)) lastaddr = phdr[i].p_vaddr - KERNVIRTADDR + curaddr + phdr[i].p_memsz; } /* Save the symbol tables, as they're about to be scratched.
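 * They are staged just past func_end and later copied to lastaddr in the
 * layout the kernel expects: a size word, the symbol table, another size
 * word, then the string table; MAGIC_TRAMP_NUMBER written at curaddr
 * announces that the ssym/esym pointers which follow it are valid.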
*/ memcpy(shdr, (void *)(kstart + eh->e_shoff), sizeof(*shdr) * eh->e_shnum); if (eh->e_shnum * eh->e_shentsize != 0 && eh->e_shoff != 0) { for (i = 0; i < eh->e_shnum; i++) { if (shdr[i].sh_type == SHT_SYMTAB) { for (j = 0; j < eh->e_phnum; j++) { if (phdr[j].p_type == PT_LOAD && shdr[i].sh_offset >= phdr[j].p_offset && (shdr[i].sh_offset + shdr[i].sh_size <= phdr[j].p_offset + phdr[j].p_filesz)) { shdr[i].sh_offset = 0; shdr[i].sh_size = 0; j = eh->e_phnum; } } if (shdr[i].sh_offset != 0 && shdr[i].sh_size != 0) { symtabindex = i; symstrindex = shdr[i].sh_link; } } } func_end = roundup(func_end, sizeof(long)); if (symtabindex >= 0 && symstrindex >= 0) { ssym = lastaddr; if (d) { memcpy((void *)func_end, (void *)( shdr[symtabindex].sh_offset + kstart), shdr[symtabindex].sh_size); memcpy((void *)(func_end + shdr[symtabindex].sh_size), (void *)(shdr[symstrindex].sh_offset + kstart), shdr[symstrindex].sh_size); } else { lastaddr += shdr[symtabindex].sh_size; lastaddr = roundup(lastaddr, sizeof(shdr[symtabindex].sh_size)); lastaddr += sizeof(shdr[symstrindex].sh_size); lastaddr += shdr[symstrindex].sh_size; lastaddr = roundup(lastaddr, sizeof(shdr[symstrindex].sh_size)); } } } if (!d) return ((void *)lastaddr); /* * Now that the stack is fixed, copy the boot params * before they are overwritten */ memcpy(&local_boot_params, &s_boot_params, sizeof(local_boot_params)); j = eh->e_phnum; for (i = 0; i < j; i++) { volatile char c; if (phdr[i].p_type != PT_LOAD) continue; memcpy((void *)(phdr[i].p_vaddr - KERNVIRTADDR + curaddr), (void*)(kstart + phdr[i].p_offset), phdr[i].p_filesz); /* Clean space from oversized segments, e.g. bss. */ if (phdr[i].p_filesz < phdr[i].p_memsz) bzero((void *)(phdr[i].p_vaddr - KERNVIRTADDR + curaddr + phdr[i].p_filesz), phdr[i].p_memsz - phdr[i].p_filesz); } /* Now grab the symbol tables. */ if (symtabindex >= 0 && symstrindex >= 0) { *(Elf_Size *)lastaddr = shdr[symtabindex].sh_size; lastaddr += sizeof(shdr[symtabindex].sh_size); memcpy((void*)lastaddr, (void *)func_end, shdr[symtabindex].sh_size); lastaddr += shdr[symtabindex].sh_size; lastaddr = roundup(lastaddr, sizeof(shdr[symtabindex].sh_size)); *(Elf_Size *)lastaddr = shdr[symstrindex].sh_size; lastaddr += sizeof(shdr[symstrindex].sh_size); memcpy((void*)lastaddr, (void*)(func_end + shdr[symtabindex].sh_size), shdr[symstrindex].sh_size); lastaddr += shdr[symstrindex].sh_size; lastaddr = roundup(lastaddr, sizeof(shdr[symstrindex].sh_size)); *(Elf_Addr *)curaddr = MAGIC_TRAMP_NUMBER; *((Elf_Addr *)curaddr + 1) = ssym - curaddr + KERNVIRTADDR; *((Elf_Addr *)curaddr + 2) = lastaddr - curaddr + KERNVIRTADDR; } else *(Elf_Addr *)curaddr = 0; /* Invalidate the instruction cache. */ __asm __volatile("mcr p15, 0, %0, c7, c5, 0\n" "mcr p15, 0, %0, c7, c10, 4\n" : : "r" (curaddr)); __asm __volatile("mrc p15, 0, %0, c1, c0, 0\n" /* CP15_SCTLR(%0)*/ "bic %0, %0, #1\n" /* MMU_ENABLE */ "mcr p15, 0, %0, c1, c0, 0\n" /* CP15_SCTLR(%0)*/ : "=r" (ssym)); /* Jump to the entry point. */ ((void(*)(unsigned, unsigned, unsigned, unsigned)) (entry_point - KERNVIRTADDR + curaddr)) (local_boot_params.abp_r0, local_boot_params.abp_r1, local_boot_params.abp_r2, local_boot_params.abp_r3); __asm __volatile(".globl func_end\n" "func_end:"); /* NOTREACHED */ return NULL; } extern char func_end[]; #define PMAP_DOMAIN_KERNEL 0 /* * Just define it instead of including the * whole VM headers set.
*/ int __hack; static __inline void setup_pagetables(unsigned int pt_addr, vm_paddr_t physstart, vm_paddr_t physend, int write_back) { unsigned int *pd = (unsigned int *)pt_addr; vm_paddr_t addr; int domain = (DOMAIN_CLIENT << (PMAP_DOMAIN_KERNEL * 2)) | DOMAIN_CLIENT; int tmp; bzero(pd, L1_TABLE_SIZE); for (addr = physstart; addr < physend; addr += L1_S_SIZE) { pd[addr >> L1_S_SHIFT] = L1_TYPE_S|L1_S_C|L1_S_AP(AP_KRW)| L1_S_DOM(PMAP_DOMAIN_KERNEL) | addr; if (write_back && 0) pd[addr >> L1_S_SHIFT] |= L1_S_B; } /* XXX: See below */ if (0xfff00000 < physstart || 0xfff00000 > physend) pd[0xfff00000 >> L1_S_SHIFT] = L1_TYPE_S|L1_S_AP(AP_KRW)| L1_S_DOM(PMAP_DOMAIN_KERNEL)|physstart; __asm __volatile("mcr p15, 0, %1, c2, c0, 0\n" /* set TTB */ "mcr p15, 0, %1, c8, c7, 0\n" /* Flush TTB */ "mcr p15, 0, %2, c3, c0, 0\n" /* Set DAR */ "mrc p15, 0, %0, c1, c0, 0\n" /* CP15_SCTLR(%0)*/ "orr %0, %0, #1\n" /* MMU_ENABLE */ "mcr p15, 0, %0, c1, c0, 0\n" /* CP15_SCTLR(%0)*/ "mrc p15, 0, %0, c2, c0, 0\n" /* CPWAIT */ "mov r0, r0\n" "sub pc, pc, #4\n" : "=r" (tmp) : "r" (pd), "r" (domain)); /* * XXX: This is the most stupid workaround I've ever written. * For some reason, the KB9202 won't boot the kernel unless * we access an address which is not in the * 0x20000000 - 0x20ffffff range. I hope I'll understand * what's going on later. */ __hack = *(volatile int *)0xfffff21c; } void __start(void) { void *curaddr; void *dst, *altdst; char *kernel = (char *)&kernel_start; int sp; int pt_addr; __asm __volatile("mov %0, pc" : "=r" (curaddr)); curaddr = (void*)((unsigned int)curaddr & 0xfff00000); #ifdef KZIP if (*kernel == 0x1f && kernel[1] == 0x8b) { pt_addr = L1_TABLE_SIZE + rounddown2((int)&_end + KERNSIZE + 0x100, L1_TABLE_SIZE); #ifdef CPU_ARM9 /* So that idcache_wbinv works; */ if ((cpu_ident() & 0x0000f000) == 0x00009000) arm9_setup(); #endif setup_pagetables(pt_addr, (vm_paddr_t)curaddr, (vm_paddr_t)curaddr + 0x10000000, 1); /* Gzipped kernel */ dst = inflate_kernel(kernel, &_end); kernel = (char *)&_end; altdst = 4 + load_kernel((unsigned int)kernel, (unsigned int)curaddr, (unsigned int)&func_end + 800, 0); if (altdst > dst) dst = altdst; /* * Disable MMU. Otherwise, the setup_pagetables call below * might overwrite the L1 table we are currently using.
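 * The sequence below therefore writes back and invalidates both cache
 * levels and clears the MMU enable bit (SCTLR bit 0) before the new
 * tables are built.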
*/ cpu_idcache_wbinv_all(); cpu_l2cache_wbinv_all(); __asm __volatile("mrc p15, 0, %0, c1, c0, 0\n" /* CP15_SCTLR(%0)*/ "bic %0, %0, #1\n" /* MMU_DISABLE */ "mcr p15, 0, %0, c1, c0, 0\n" /* CP15_SCTLR(%0)*/ :"=r" (pt_addr)); } else #endif dst = 4 + load_kernel((unsigned int)&kernel_start, (unsigned int)curaddr, (unsigned int)&func_end, 0); dst = (void *)(((vm_offset_t)dst & ~3)); pt_addr = L1_TABLE_SIZE + rounddown2((unsigned int)dst, L1_TABLE_SIZE); setup_pagetables(pt_addr, (vm_paddr_t)curaddr, (vm_paddr_t)curaddr + 0x10000000, 0); sp = pt_addr + L1_TABLE_SIZE + 8192; sp = sp &~3; dst = (void *)(sp + 4); memcpy((void *)dst, (void *)&load_kernel, (unsigned int)&func_end - (unsigned int)&load_kernel + 800); do_call(dst, kernel, dst + (unsigned int)(&func_end) - (unsigned int)(&load_kernel) + 800, sp); } /* We need to provide these functions but never call them */ void __aeabi_unwind_cpp_pr0(void); void __aeabi_unwind_cpp_pr1(void); void __aeabi_unwind_cpp_pr2(void); __strong_reference(__aeabi_unwind_cpp_pr0, __aeabi_unwind_cpp_pr1); __strong_reference(__aeabi_unwind_cpp_pr0, __aeabi_unwind_cpp_pr2); void __aeabi_unwind_cpp_pr0(void) { } Index: projects/runtime-coverage/sys/arm/include/cpu-v4.h =================================================================== --- projects/runtime-coverage/sys/arm/include/cpu-v4.h (revision 325444) +++ projects/runtime-coverage/sys/arm/include/cpu-v4.h (revision 325445) @@ -1,186 +1,186 @@ /*- * Copyright 2016 Svatopluk Kraus * Copyright 2016 Michal Meloun * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef MACHINE_CPU_V4_H #define MACHINE_CPU_V4_H /* There are no user serviceable parts here, they may change without notice */ #ifndef _KERNEL #error Only include this file in the kernel #endif #include #include #include #include #if __ARM_ARCH >= 6 #error Never include this file for ARMv6 #else #define CPU_ASID_KERNEL 0 /* * Macros to generate CP15 (system control processor) read/write functions. */ #define _FX(s...) #s #define _RF0(fname, aname...) 
\ -static __inline register_t \ +static __inline uint32_t \ fname(void) \ { \ - register_t reg; \ + uint32_t reg; \ __asm __volatile("mrc\t" _FX(aname): "=r" (reg)); \ return(reg); \ } #define _R64F0(fname, aname) \ static __inline uint64_t \ fname(void) \ { \ uint64_t reg; \ __asm __volatile("mrrc\t" _FX(aname): "=r" (reg)); \ return(reg); \ } #define _WF0(fname, aname...) \ static __inline void \ fname(void) \ { \ __asm __volatile("mcr\t" _FX(aname)); \ } #define _WF1(fname, aname...) \ static __inline void \ -fname(register_t reg) \ +fname(uint32_t reg) \ { \ __asm __volatile("mcr\t" _FX(aname):: "r" (reg)); \ } /* * Publicly accessible functions */ /* Various control registers */ _RF0(cp15_cpacr_get, CP15_CPACR(%0)) _WF1(cp15_cpacr_set, CP15_CPACR(%0)) _RF0(cp15_dfsr_get, CP15_DFSR(%0)) _RF0(cp15_ttbr_get, CP15_TTBR0(%0)) _RF0(cp15_dfar_get, CP15_DFAR(%0)) /* XScale */ _RF0(cp15_actlr_get, CP15_ACTLR(%0)) _WF1(cp15_actlr_set, CP15_ACTLR(%0)) /*CPU id registers */ _RF0(cp15_midr_get, CP15_MIDR(%0)) _RF0(cp15_ctr_get, CP15_CTR(%0)) _RF0(cp15_tcmtr_get, CP15_TCMTR(%0)) _RF0(cp15_tlbtr_get, CP15_TLBTR(%0)) _RF0(cp15_sctlr_get, CP15_SCTLR(%0)) #undef _FX #undef _RF0 #undef _WF0 #undef _WF1 /* * armv4/5 compatibility shims. * * These functions provide armv4 cache maintenance using the new armv6 names. * Included here are just the functions actually used now in common code; it may * be necessary to add things here over time. * * The callers of the dcache functions expect these routines to handle address * and size values which are not aligned to cacheline boundaries; the armv4 and * armv5 asm code handles that. */ static __inline void tlb_flush_all(void) { cpu_tlb_flushID(); cpu_cpwait(); } static __inline void icache_sync(vm_offset_t va, vm_size_t size) { cpu_icache_sync_range(va, size); } static __inline void dcache_inv_poc(vm_offset_t va, vm_paddr_t pa, vm_size_t size) { cpu_dcache_inv_range(va, size); #ifdef ARM_L2_PIPT cpu_l2cache_inv_range(pa, size); #else cpu_l2cache_inv_range(va, size); #endif } static __inline void dcache_inv_poc_dma(vm_offset_t va, vm_paddr_t pa, vm_size_t size) { /* See armv6 code, above, for why we do L2 before L1 in this case. */ #ifdef ARM_L2_PIPT cpu_l2cache_inv_range(pa, size); #else cpu_l2cache_inv_range(va, size); #endif cpu_dcache_inv_range(va, size); } static __inline void dcache_wb_poc(vm_offset_t va, vm_paddr_t pa, vm_size_t size) { cpu_dcache_wb_range(va, size); #ifdef ARM_L2_PIPT cpu_l2cache_wb_range(pa, size); #else cpu_l2cache_wb_range(va, size); #endif } static __inline void dcache_wbinv_poc_all(void) { cpu_idcache_wbinv_all(); cpu_l2cache_wbinv_all(); } #endif /* _KERNEL */ #endif /* MACHINE_CPU_V4_H */ Index: projects/runtime-coverage/sys/arm/include/cpu-v6.h =================================================================== --- projects/runtime-coverage/sys/arm/include/cpu-v6.h (revision 325444) +++ projects/runtime-coverage/sys/arm/include/cpu-v6.h (revision 325445) @@ -1,689 +1,689 @@ /*- * Copyright 2014 Svatopluk Kraus * Copyright 2014 Michal Meloun * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef MACHINE_CPU_V6_H #define MACHINE_CPU_V6_H /* There are no user serviceable parts here, they may change without notice */ #ifndef _KERNEL #error Only include this file in the kernel #endif #include #include #include #include #if __ARM_ARCH < 6 #error Only include this file for ARMv6 #endif /* * Some kernel modules (dtrace all for example) are compiled * unconditionally with -DSMP. Although it looks like a bug, * handle this case here and in #elif condition in ARM_SMP_UP macro. */ #if __ARM_ARCH <= 6 && defined(SMP) && !defined(KLD_MODULE) #error SMP option is not supported on ARMv6 #endif #if __ARM_ARCH <= 6 && defined(SMP_ON_UP) #error SMP_ON_UP option is only supported on ARMv7+ CPUs #endif #if !defined(SMP) && defined(SMP_ON_UP) #error SMP option must be defined for SMP_ON_UP option #endif #define CPU_ASID_KERNEL 0 #if defined(SMP_ON_UP) #define ARM_SMP_UP(smp_code, up_code) \ do { \ if (cpuinfo.mp_ext != 0) { \ smp_code; \ } else { \ up_code; \ } \ } while (0) #elif defined(SMP) && __ARM_ARCH > 6 #define ARM_SMP_UP(smp_code, up_code) \ do { \ smp_code; \ } while (0) #else #define ARM_SMP_UP(smp_code, up_code) \ do { \ up_code; \ } while (0) #endif void dcache_wbinv_poc_all(void); /* !!! NOT SMP coherent function !!! */ vm_offset_t dcache_wb_pou_checked(vm_offset_t, vm_size_t); vm_offset_t icache_inv_pou_checked(vm_offset_t, vm_size_t); #ifdef DEV_PMU #include #define PMU_OVSR_C 0x80000000 /* Cycle Counter */ extern uint32_t ccnt_hi[MAXCPU]; extern int pmu_attched; #endif /* DEV_PMU */ #define sev() __asm __volatile("sev" : : : "memory") #define wfe() __asm __volatile("wfe" : : : "memory") /* * Macros to generate CP15 (system control processor) read/write functions. */ #define _FX(s...) #s #define _RF0(fname, aname...) \ -static __inline register_t \ +static __inline uint32_t \ fname(void) \ { \ - register_t reg; \ + uint32_t reg; \ __asm __volatile("mrc\t" _FX(aname): "=r" (reg)); \ return(reg); \ } #define _R64F0(fname, aname) \ static __inline uint64_t \ fname(void) \ { \ uint64_t reg; \ __asm __volatile("mrrc\t" _FX(aname): "=r" (reg)); \ return(reg); \ } #define _WF0(fname, aname...) \ static __inline void \ fname(void) \ { \ __asm __volatile("mcr\t" _FX(aname)); \ } #define _WF1(fname, aname...) \ static __inline void \ -fname(register_t reg) \ +fname(uint32_t reg) \ { \ __asm __volatile("mcr\t" _FX(aname):: "r" (reg)); \ } #define _W64F1(fname, aname...) \ static __inline void \ fname(uint64_t reg) \ { \ __asm __volatile("mcrr\t" _FX(aname):: "r" (reg)); \ } /* * Raw CP15 maintenance operations * !!! 
not for external use !!! */ /* TLB */ _WF0(_CP15_TLBIALL, CP15_TLBIALL) /* Invalidate entire unified TLB */ #if __ARM_ARCH >= 7 && defined(SMP) _WF0(_CP15_TLBIALLIS, CP15_TLBIALLIS) /* Invalidate entire unified TLB IS */ #endif _WF1(_CP15_TLBIASID, CP15_TLBIASID(%0)) /* Invalidate unified TLB by ASID */ #if __ARM_ARCH >= 7 && defined(SMP) _WF1(_CP15_TLBIASIDIS, CP15_TLBIASIDIS(%0)) /* Invalidate unified TLB by ASID IS */ #endif _WF1(_CP15_TLBIMVAA, CP15_TLBIMVAA(%0)) /* Invalidate unified TLB by MVA, all ASID */ #if __ARM_ARCH >= 7 && defined(SMP) _WF1(_CP15_TLBIMVAAIS, CP15_TLBIMVAAIS(%0)) /* Invalidate unified TLB by MVA, all ASID IS */ #endif _WF1(_CP15_TLBIMVA, CP15_TLBIMVA(%0)) /* Invalidate unified TLB by MVA */ _WF1(_CP15_TTB_SET, CP15_TTBR0(%0)) /* Cache and Branch predictor */ _WF0(_CP15_BPIALL, CP15_BPIALL) /* Branch predictor invalidate all */ #if __ARM_ARCH >= 7 && defined(SMP) _WF0(_CP15_BPIALLIS, CP15_BPIALLIS) /* Branch predictor invalidate all IS */ #endif _WF1(_CP15_BPIMVA, CP15_BPIMVA(%0)) /* Branch predictor invalidate by MVA */ _WF1(_CP15_DCCIMVAC, CP15_DCCIMVAC(%0)) /* Data cache clean and invalidate by MVA PoC */ _WF1(_CP15_DCCISW, CP15_DCCISW(%0)) /* Data cache clean and invalidate by set/way */ _WF1(_CP15_DCCMVAC, CP15_DCCMVAC(%0)) /* Data cache clean by MVA PoC */ #if __ARM_ARCH >= 7 _WF1(_CP15_DCCMVAU, CP15_DCCMVAU(%0)) /* Data cache clean by MVA PoU */ #endif _WF1(_CP15_DCCSW, CP15_DCCSW(%0)) /* Data cache clean by set/way */ _WF1(_CP15_DCIMVAC, CP15_DCIMVAC(%0)) /* Data cache invalidate by MVA PoC */ _WF1(_CP15_DCISW, CP15_DCISW(%0)) /* Data cache invalidate by set/way */ _WF0(_CP15_ICIALLU, CP15_ICIALLU) /* Instruction cache invalidate all PoU */ #if __ARM_ARCH >= 7 && defined(SMP) _WF0(_CP15_ICIALLUIS, CP15_ICIALLUIS) /* Instruction cache invalidate all PoU IS */ #endif _WF1(_CP15_ICIMVAU, CP15_ICIMVAU(%0)) /* Instruction cache invalidate */ /* * Publicly accessible functions */ /* CP14 Debug Registers */ _RF0(cp14_dbgdidr_get, CP14_DBGDIDR(%0)) _RF0(cp14_dbgprsr_get, CP14_DBGPRSR(%0)) _RF0(cp14_dbgoslsr_get, CP14_DBGOSLSR(%0)) _RF0(cp14_dbgosdlr_get, CP14_DBGOSDLR(%0)) _RF0(cp14_dbgdscrint_get, CP14_DBGDSCRint(%0)) _WF1(cp14_dbgdscr_v6_set, CP14_DBGDSCRext_V6(%0)) _WF1(cp14_dbgdscr_v7_set, CP14_DBGDSCRext_V7(%0)) _WF1(cp14_dbgvcr_set, CP14_DBGVCR(%0)) _WF1(cp14_dbgoslar_set, CP14_DBGOSLAR(%0)) /* Various control registers */ _RF0(cp15_cpacr_get, CP15_CPACR(%0)) _WF1(cp15_cpacr_set, CP15_CPACR(%0)) _RF0(cp15_dfsr_get, CP15_DFSR(%0)) _RF0(cp15_ifsr_get, CP15_IFSR(%0)) _WF1(cp15_prrr_set, CP15_PRRR(%0)) _WF1(cp15_nmrr_set, CP15_NMRR(%0)) _RF0(cp15_ttbr_get, CP15_TTBR0(%0)) _RF0(cp15_dfar_get, CP15_DFAR(%0)) #if __ARM_ARCH >= 7 _RF0(cp15_ifar_get, CP15_IFAR(%0)) _RF0(cp15_l2ctlr_get, CP15_L2CTLR(%0)) #endif _RF0(cp15_actlr_get, CP15_ACTLR(%0)) _WF1(cp15_actlr_set, CP15_ACTLR(%0)) _WF1(cp15_ats1cpr_set, CP15_ATS1CPR(%0)) _WF1(cp15_ats1cpw_set, CP15_ATS1CPW(%0)) _WF1(cp15_ats1cur_set, CP15_ATS1CUR(%0)) _WF1(cp15_ats1cuw_set, CP15_ATS1CUW(%0)) _RF0(cp15_par_get, CP15_PAR(%0)) _RF0(cp15_sctlr_get, CP15_SCTLR(%0)) /*CPU id registers */ _RF0(cp15_midr_get, CP15_MIDR(%0)) _RF0(cp15_ctr_get, CP15_CTR(%0)) _RF0(cp15_tcmtr_get, CP15_TCMTR(%0)) _RF0(cp15_tlbtr_get, CP15_TLBTR(%0)) _RF0(cp15_mpidr_get, CP15_MPIDR(%0)) _RF0(cp15_revidr_get, CP15_REVIDR(%0)) _RF0(cp15_ccsidr_get, CP15_CCSIDR(%0)) _RF0(cp15_clidr_get, CP15_CLIDR(%0)) _RF0(cp15_aidr_get, CP15_AIDR(%0)) _WF1(cp15_csselr_set, CP15_CSSELR(%0)) _RF0(cp15_id_pfr0_get, CP15_ID_PFR0(%0)) _RF0(cp15_id_pfr1_get, 
CP15_ID_PFR1(%0)) _RF0(cp15_id_dfr0_get, CP15_ID_DFR0(%0)) _RF0(cp15_id_afr0_get, CP15_ID_AFR0(%0)) _RF0(cp15_id_mmfr0_get, CP15_ID_MMFR0(%0)) _RF0(cp15_id_mmfr1_get, CP15_ID_MMFR1(%0)) _RF0(cp15_id_mmfr2_get, CP15_ID_MMFR2(%0)) _RF0(cp15_id_mmfr3_get, CP15_ID_MMFR3(%0)) _RF0(cp15_id_isar0_get, CP15_ID_ISAR0(%0)) _RF0(cp15_id_isar1_get, CP15_ID_ISAR1(%0)) _RF0(cp15_id_isar2_get, CP15_ID_ISAR2(%0)) _RF0(cp15_id_isar3_get, CP15_ID_ISAR3(%0)) _RF0(cp15_id_isar4_get, CP15_ID_ISAR4(%0)) _RF0(cp15_id_isar5_get, CP15_ID_ISAR5(%0)) _RF0(cp15_cbar_get, CP15_CBAR(%0)) /* Performance Monitor registers */ #if __ARM_ARCH == 6 && defined(CPU_ARM1176) _RF0(cp15_pmuserenr_get, CP15_PMUSERENR(%0)) _WF1(cp15_pmuserenr_set, CP15_PMUSERENR(%0)) _RF0(cp15_pmcr_get, CP15_PMCR(%0)) _WF1(cp15_pmcr_set, CP15_PMCR(%0)) _RF0(cp15_pmccntr_get, CP15_PMCCNTR(%0)) _WF1(cp15_pmccntr_set, CP15_PMCCNTR(%0)) #elif __ARM_ARCH > 6 _RF0(cp15_pmcr_get, CP15_PMCR(%0)) _WF1(cp15_pmcr_set, CP15_PMCR(%0)) _RF0(cp15_pmcnten_get, CP15_PMCNTENSET(%0)) _WF1(cp15_pmcnten_set, CP15_PMCNTENSET(%0)) _WF1(cp15_pmcnten_clr, CP15_PMCNTENCLR(%0)) _RF0(cp15_pmovsr_get, CP15_PMOVSR(%0)) _WF1(cp15_pmovsr_set, CP15_PMOVSR(%0)) _WF1(cp15_pmswinc_set, CP15_PMSWINC(%0)) _RF0(cp15_pmselr_get, CP15_PMSELR(%0)) _WF1(cp15_pmselr_set, CP15_PMSELR(%0)) _RF0(cp15_pmccntr_get, CP15_PMCCNTR(%0)) _WF1(cp15_pmccntr_set, CP15_PMCCNTR(%0)) _RF0(cp15_pmxevtyper_get, CP15_PMXEVTYPER(%0)) _WF1(cp15_pmxevtyper_set, CP15_PMXEVTYPER(%0)) _RF0(cp15_pmxevcntr_get, CP15_PMXEVCNTRR(%0)) _WF1(cp15_pmxevcntr_set, CP15_PMXEVCNTRR(%0)) _RF0(cp15_pmuserenr_get, CP15_PMUSERENR(%0)) _WF1(cp15_pmuserenr_set, CP15_PMUSERENR(%0)) _RF0(cp15_pminten_get, CP15_PMINTENSET(%0)) _WF1(cp15_pminten_set, CP15_PMINTENSET(%0)) _WF1(cp15_pminten_clr, CP15_PMINTENCLR(%0)) #endif _RF0(cp15_tpidrurw_get, CP15_TPIDRURW(%0)) _WF1(cp15_tpidrurw_set, CP15_TPIDRURW(%0)) _RF0(cp15_tpidruro_get, CP15_TPIDRURO(%0)) _WF1(cp15_tpidruro_set, CP15_TPIDRURO(%0)) _RF0(cp15_tpidrpwr_get, CP15_TPIDRPRW(%0)) _WF1(cp15_tpidrpwr_set, CP15_TPIDRPRW(%0)) /* Generic Timer registers - only use when you know the hardware is available */ _RF0(cp15_cntfrq_get, CP15_CNTFRQ(%0)) _WF1(cp15_cntfrq_set, CP15_CNTFRQ(%0)) _RF0(cp15_cntkctl_get, CP15_CNTKCTL(%0)) _WF1(cp15_cntkctl_set, CP15_CNTKCTL(%0)) _RF0(cp15_cntp_tval_get, CP15_CNTP_TVAL(%0)) _WF1(cp15_cntp_tval_set, CP15_CNTP_TVAL(%0)) _RF0(cp15_cntp_ctl_get, CP15_CNTP_CTL(%0)) _WF1(cp15_cntp_ctl_set, CP15_CNTP_CTL(%0)) _RF0(cp15_cntv_tval_get, CP15_CNTV_TVAL(%0)) _WF1(cp15_cntv_tval_set, CP15_CNTV_TVAL(%0)) _RF0(cp15_cntv_ctl_get, CP15_CNTV_CTL(%0)) _WF1(cp15_cntv_ctl_set, CP15_CNTV_CTL(%0)) _RF0(cp15_cnthctl_get, CP15_CNTHCTL(%0)) _WF1(cp15_cnthctl_set, CP15_CNTHCTL(%0)) _RF0(cp15_cnthp_tval_get, CP15_CNTHP_TVAL(%0)) _WF1(cp15_cnthp_tval_set, CP15_CNTHP_TVAL(%0)) _RF0(cp15_cnthp_ctl_get, CP15_CNTHP_CTL(%0)) _WF1(cp15_cnthp_ctl_set, CP15_CNTHP_CTL(%0)) _R64F0(cp15_cntpct_get, CP15_CNTPCT(%Q0, %R0)) _R64F0(cp15_cntvct_get, CP15_CNTVCT(%Q0, %R0)) _R64F0(cp15_cntp_cval_get, CP15_CNTP_CVAL(%Q0, %R0)) _W64F1(cp15_cntp_cval_set, CP15_CNTP_CVAL(%Q0, %R0)) _R64F0(cp15_cntv_cval_get, CP15_CNTV_CVAL(%Q0, %R0)) _W64F1(cp15_cntv_cval_set, CP15_CNTV_CVAL(%Q0, %R0)) _R64F0(cp15_cntvoff_get, CP15_CNTVOFF(%Q0, %R0)) _W64F1(cp15_cntvoff_set, CP15_CNTVOFF(%Q0, %R0)) _R64F0(cp15_cnthp_cval_get, CP15_CNTHP_CVAL(%Q0, %R0)) _W64F1(cp15_cnthp_cval_set, CP15_CNTHP_CVAL(%Q0, %R0)) #undef _FX #undef _RF0 #undef _WF0 #undef _WF1 /* * TLB maintenance operations. */ /* Local (i.e. 
not broadcasting ) operations. */ /* Flush all TLB entries (even global). */ static __inline void tlb_flush_all_local(void) { dsb(); _CP15_TLBIALL(); dsb(); } /* Flush all not global TLB entries. */ static __inline void tlb_flush_all_ng_local(void) { dsb(); _CP15_TLBIASID(CPU_ASID_KERNEL); dsb(); } /* Flush single TLB entry (even global). */ static __inline void tlb_flush_local(vm_offset_t va) { KASSERT((va & PAGE_MASK) == 0, ("%s: va %#x not aligned", __func__, va)); dsb(); _CP15_TLBIMVA(va | CPU_ASID_KERNEL); dsb(); } /* Flush range of TLB entries (even global). */ static __inline void tlb_flush_range_local(vm_offset_t va, vm_size_t size) { vm_offset_t eva = va + size; KASSERT((va & PAGE_MASK) == 0, ("%s: va %#x not aligned", __func__, va)); KASSERT((size & PAGE_MASK) == 0, ("%s: size %#x not aligned", __func__, size)); dsb(); for (; va < eva; va += PAGE_SIZE) _CP15_TLBIMVA(va | CPU_ASID_KERNEL); dsb(); } /* Broadcasting operations. */ #if __ARM_ARCH >= 7 && defined(SMP) static __inline void tlb_flush_all(void) { dsb(); ARM_SMP_UP( _CP15_TLBIALLIS(), _CP15_TLBIALL() ); dsb(); } static __inline void tlb_flush_all_ng(void) { dsb(); ARM_SMP_UP( _CP15_TLBIASIDIS(CPU_ASID_KERNEL), _CP15_TLBIASID(CPU_ASID_KERNEL) ); dsb(); } static __inline void tlb_flush(vm_offset_t va) { KASSERT((va & PAGE_MASK) == 0, ("%s: va %#x not aligned", __func__, va)); dsb(); ARM_SMP_UP( _CP15_TLBIMVAAIS(va), _CP15_TLBIMVA(va | CPU_ASID_KERNEL) ); dsb(); } static __inline void tlb_flush_range(vm_offset_t va, vm_size_t size) { vm_offset_t eva = va + size; KASSERT((va & PAGE_MASK) == 0, ("%s: va %#x not aligned", __func__, va)); KASSERT((size & PAGE_MASK) == 0, ("%s: size %#x not aligned", __func__, size)); dsb(); ARM_SMP_UP( { for (; va < eva; va += PAGE_SIZE) _CP15_TLBIMVAAIS(va); }, { for (; va < eva; va += PAGE_SIZE) _CP15_TLBIMVA(va | CPU_ASID_KERNEL); } ); dsb(); } #else /* __ARM_ARCH < 7 */ #define tlb_flush_all() tlb_flush_all_local() #define tlb_flush_all_ng() tlb_flush_all_ng_local() #define tlb_flush(va) tlb_flush_local(va) #define tlb_flush_range(va, size) tlb_flush_range_local(va, size) #endif /* __ARM_ARCH < 7 */ /* * Cache maintenance operations. */ /* Sync I and D caches to PoU */ static __inline void icache_sync(vm_offset_t va, vm_size_t size) { vm_offset_t eva = va + size; dsb(); va &= ~cpuinfo.dcache_line_mask; for ( ; va < eva; va += cpuinfo.dcache_line_size) { #if __ARM_ARCH >= 7 _CP15_DCCMVAU(va); #else _CP15_DCCMVAC(va); #endif } dsb(); ARM_SMP_UP( _CP15_ICIALLUIS(), _CP15_ICIALLU() ); dsb(); isb(); } /* Invalidate I cache */ static __inline void icache_inv_all(void) { ARM_SMP_UP( _CP15_ICIALLUIS(), _CP15_ICIALLU() ); dsb(); isb(); } /* Invalidate branch predictor buffer */ static __inline void bpb_inv_all(void) { ARM_SMP_UP( _CP15_BPIALLIS(), _CP15_BPIALL() ); dsb(); isb(); } /* Write back D-cache to PoU */ static __inline void dcache_wb_pou(vm_offset_t va, vm_size_t size) { vm_offset_t eva = va + size; dsb(); va &= ~cpuinfo.dcache_line_mask; for ( ; va < eva; va += cpuinfo.dcache_line_size) { #if __ARM_ARCH >= 7 _CP15_DCCMVAU(va); #else _CP15_DCCMVAC(va); #endif } dsb(); } /* * Invalidate D-cache to PoC * * Caches are invalidated from outermost to innermost as fresh cachelines * flow in this direction. In given range, if there was no dirty cacheline * in any cache before, no stale cacheline should remain in them after this * operation finishes. 
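 * (If L1 were invalidated first, a line could still migrate from L2 into
 * L1 before the L2 invalidation completed and survive there as stale data;
 * invalidating L2 first closes that window.)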
*/ static __inline void dcache_inv_poc(vm_offset_t va, vm_paddr_t pa, vm_size_t size) { vm_offset_t eva = va + size; dsb(); /* invalidate L2 first */ cpu_l2cache_inv_range(pa, size); /* then L1 */ va &= ~cpuinfo.dcache_line_mask; for ( ; va < eva; va += cpuinfo.dcache_line_size) { _CP15_DCIMVAC(va); } dsb(); } /* * Discard D-cache lines to PoC, prior to overwrite by DMA engine. * * Normal invalidation does L2 then L1 to ensure that stale data from L2 doesn't * flow into L1 while invalidating. This routine is intended to be used only * when invalidating a buffer before a DMA operation loads new data into memory. * The concern in this case is that dirty lines are not evicted to main memory, * overwriting the DMA data. For that reason, the L1 is done first to ensure * that an evicted L1 line doesn't flow to L2 after the L2 has been cleaned. */ static __inline void dcache_inv_poc_dma(vm_offset_t va, vm_paddr_t pa, vm_size_t size) { vm_offset_t eva = va + size; /* invalidate L1 first */ dsb(); va &= ~cpuinfo.dcache_line_mask; for ( ; va < eva; va += cpuinfo.dcache_line_size) { _CP15_DCIMVAC(va); } dsb(); /* then L2 */ cpu_l2cache_inv_range(pa, size); } /* * Write back D-cache to PoC * * Caches are written back from innermost to outermost as dirty cachelines * flow in this direction. In given range, no dirty cacheline should remain * in any cache after this operation finishes. */ static __inline void dcache_wb_poc(vm_offset_t va, vm_paddr_t pa, vm_size_t size) { vm_offset_t eva = va + size; dsb(); va &= ~cpuinfo.dcache_line_mask; for ( ; va < eva; va += cpuinfo.dcache_line_size) { _CP15_DCCMVAC(va); } dsb(); cpu_l2cache_wb_range(pa, size); } /* Write back and invalidate D-cache to PoC */ static __inline void dcache_wbinv_poc(vm_offset_t sva, vm_paddr_t pa, vm_size_t size) { vm_offset_t va; vm_offset_t eva = sva + size; dsb(); /* write back L1 first */ va = sva & ~cpuinfo.dcache_line_mask; for ( ; va < eva; va += cpuinfo.dcache_line_size) { _CP15_DCCMVAC(va); } dsb(); /* then write back and invalidate L2 */ cpu_l2cache_wbinv_range(pa, size); /* then invalidate L1 */ va = sva & ~cpuinfo.dcache_line_mask; for ( ; va < eva; va += cpuinfo.dcache_line_size) { _CP15_DCIMVAC(va); } dsb(); } /* Set TTB0 register */ static __inline void cp15_ttbr_set(uint32_t reg) { dsb(); _CP15_TTB_SET(reg); dsb(); _CP15_BPIALL(); dsb(); isb(); tlb_flush_all_ng_local(); } /* * Functions for address checking: * * cp15_ats1cpr_check() ... check stage 1 privileged (PL1) read access * cp15_ats1cpw_check() ... check stage 1 privileged (PL1) write access * cp15_ats1cur_check() ... check stage 1 unprivileged (PL0) read access * cp15_ats1cuw_check() ... check stage 1 unprivileged (PL0) write access * * They must be called while interrupts are disabled to get consistent result. */ static __inline int cp15_ats1cpr_check(vm_offset_t addr) { cp15_ats1cpr_set(addr); isb(); return (cp15_par_get() & 0x01 ? EFAULT : 0); } static __inline int cp15_ats1cpw_check(vm_offset_t addr) { cp15_ats1cpw_set(addr); isb(); return (cp15_par_get() & 0x01 ? EFAULT : 0); } static __inline int cp15_ats1cur_check(vm_offset_t addr) { cp15_ats1cur_set(addr); isb(); return (cp15_par_get() & 0x01 ? EFAULT : 0); } static __inline int cp15_ats1cuw_check(vm_offset_t addr) { cp15_ats1cuw_set(addr); isb(); return (cp15_par_get() & 0x01 ? 
EFAULT : 0); } #endif /* !MACHINE_CPU_V6_H */ Index: projects/runtime-coverage/sys/dev/ipmi/ipmi.c =================================================================== --- projects/runtime-coverage/sys/dev/ipmi/ipmi.c (revision 325444) +++ projects/runtime-coverage/sys/dev/ipmi/ipmi.c (revision 325445) @@ -1,1100 +1,1100 @@ /*- * Copyright (c) 2006 IronPort Systems Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef LOCAL_MODULE #include #include #else #include #include #endif /* * Driver request structures are allocated on the stack via alloca() to * avoid calling malloc(), especially for the watchdog handler. * To avoid too much stack growth, a previously allocated structure can * be reused via IPMI_INIT_DRIVER_REQUEST(), but the caller should ensure * that there is adequate reply/request space in the original allocation. 
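 *
 * A minimal usage sketch (mirroring what ipmi_startup() below actually
 * does; this block is illustrative, not additional driver code):
 *
 *	struct ipmi_request *req;
 *
 *	IPMI_ALLOC_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0),
 *	    IPMI_GET_DEVICE_ID, 0, 15);
 *	error = ipmi_submit_driver_request(sc, req, MAX_TIMEOUT);
 *
 *	IPMI_INIT_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0),
 *	    IPMI_CLEAR_FLAGS, 1, 0);
 *	ipmi_submit_driver_request(sc, req, 0);
 *
 * The second command safely reuses the first allocation because its
 * request (1 byte) and reply (0 bytes) fit within the 0 + 15 bytes
 * reserved by the original IPMI_ALLOC_DRIVER_REQUEST().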
*/
#define	IPMI_INIT_DRIVER_REQUEST(req, addr, cmd, reqlen, replylen)	\
	bzero((req), sizeof(struct ipmi_request));			\
	ipmi_init_request((req), NULL, 0, (addr), (cmd), (reqlen), (replylen))

#define	IPMI_ALLOC_DRIVER_REQUEST(req, addr, cmd, reqlen, replylen)	\
	(req) = __builtin_alloca(sizeof(struct ipmi_request) +		\
	    (reqlen) + (replylen));					\
	IPMI_INIT_DRIVER_REQUEST((req), (addr), (cmd), (reqlen),	\
	    (replylen))

#ifdef IPMB
static int ipmi_ipmb_checksum(u_char *, int);
static int ipmi_ipmb_send_message(device_t, u_char, u_char, u_char,
    u_char, u_char *, int);
#endif

static d_ioctl_t ipmi_ioctl;
static d_poll_t ipmi_poll;
static d_open_t ipmi_open;
static void ipmi_dtor(void *arg);

int ipmi_attached = 0;

static int on = 1;
static bool wd_in_shutdown = false;
static int wd_timer_actions = IPMI_SET_WD_ACTION_POWER_CYCLE;
-static int wd_shutdown_countdown = 420; /* sec */
+static int wd_shutdown_countdown = 0; /* sec */
static int wd_startup_countdown = 0; /* sec */
static int wd_pretimeout_countdown = 120; /* sec */
static int cycle_wait = 10; /* sec */

static SYSCTL_NODE(_hw, OID_AUTO, ipmi, CTLFLAG_RD, 0,
    "IPMI driver parameters");
SYSCTL_INT(_hw_ipmi, OID_AUTO, on, CTLFLAG_RWTUN, &on, 0, "");
SYSCTL_INT(_hw_ipmi, OID_AUTO, wd_timer_actions, CTLFLAG_RW,
    &wd_timer_actions, 0,
    "IPMI watchdog timer actions (including pre-timeout interrupt)");
SYSCTL_INT(_hw_ipmi, OID_AUTO, wd_shutdown_countdown, CTLFLAG_RW,
    &wd_shutdown_countdown, 0,
    "IPMI watchdog countdown for shutdown (seconds)");
SYSCTL_INT(_hw_ipmi, OID_AUTO, wd_startup_countdown, CTLFLAG_RDTUN,
    &wd_startup_countdown, 0,
    "IPMI watchdog countdown initialized during startup (seconds)");
SYSCTL_INT(_hw_ipmi, OID_AUTO, wd_pretimeout_countdown, CTLFLAG_RW,
    &wd_pretimeout_countdown, 0,
    "IPMI watchdog pre-timeout countdown (seconds)");
SYSCTL_INT(_hw_ipmi, OID_AUTO, cycle_wait, CTLFLAG_RWTUN,
    &cycle_wait, 0, "IPMI power cycle on reboot delay time (seconds)");

static struct cdevsw ipmi_cdevsw = {
	.d_version =	D_VERSION,
	.d_open =	ipmi_open,
	.d_ioctl =	ipmi_ioctl,
	.d_poll =	ipmi_poll,
	.d_name =	"ipmi",
};

static MALLOC_DEFINE(M_IPMI, "ipmi", "ipmi");

static int
ipmi_open(struct cdev *cdev, int flags, int fmt, struct thread *td)
{
	struct ipmi_device *dev;
	struct ipmi_softc *sc;
	int error;

	if (!on)
		return (ENOENT);

	/* Initialize the per file descriptor data.
*/ dev = malloc(sizeof(struct ipmi_device), M_IPMI, M_WAITOK | M_ZERO); error = devfs_set_cdevpriv(dev, ipmi_dtor); if (error) { free(dev, M_IPMI); return (error); } sc = cdev->si_drv1; TAILQ_INIT(&dev->ipmi_completed_requests); dev->ipmi_address = IPMI_BMC_SLAVE_ADDR; dev->ipmi_lun = IPMI_BMC_SMS_LUN; dev->ipmi_softc = sc; IPMI_LOCK(sc); sc->ipmi_opened++; IPMI_UNLOCK(sc); return (0); } static int ipmi_poll(struct cdev *cdev, int poll_events, struct thread *td) { struct ipmi_device *dev; struct ipmi_softc *sc; int revents = 0; if (devfs_get_cdevpriv((void **)&dev)) return (0); sc = cdev->si_drv1; IPMI_LOCK(sc); if (poll_events & (POLLIN | POLLRDNORM)) { if (!TAILQ_EMPTY(&dev->ipmi_completed_requests)) revents |= poll_events & (POLLIN | POLLRDNORM); if (dev->ipmi_requests == 0) revents |= POLLERR; } if (revents == 0) { if (poll_events & (POLLIN | POLLRDNORM)) selrecord(td, &dev->ipmi_select); } IPMI_UNLOCK(sc); return (revents); } static void ipmi_purge_completed_requests(struct ipmi_device *dev) { struct ipmi_request *req; while (!TAILQ_EMPTY(&dev->ipmi_completed_requests)) { req = TAILQ_FIRST(&dev->ipmi_completed_requests); TAILQ_REMOVE(&dev->ipmi_completed_requests, req, ir_link); dev->ipmi_requests--; ipmi_free_request(req); } } static void ipmi_dtor(void *arg) { struct ipmi_request *req, *nreq; struct ipmi_device *dev; struct ipmi_softc *sc; dev = arg; sc = dev->ipmi_softc; IPMI_LOCK(sc); if (dev->ipmi_requests) { /* Throw away any pending requests for this device. */ TAILQ_FOREACH_SAFE(req, &sc->ipmi_pending_requests, ir_link, nreq) { if (req->ir_owner == dev) { TAILQ_REMOVE(&sc->ipmi_pending_requests, req, ir_link); dev->ipmi_requests--; ipmi_free_request(req); } } /* Throw away any pending completed requests for this device. */ ipmi_purge_completed_requests(dev); /* * If we still have outstanding requests, they must be stuck * in an interface driver, so wait for those to drain. */ dev->ipmi_closing = 1; while (dev->ipmi_requests > 0) { msleep(&dev->ipmi_requests, &sc->ipmi_requests_lock, PWAIT, "ipmidrain", 0); ipmi_purge_completed_requests(dev); } } sc->ipmi_opened--; IPMI_UNLOCK(sc); /* Cleanup. 
*/
	free(dev, M_IPMI);
}

#ifdef IPMB
static int
ipmi_ipmb_checksum(u_char *data, int len)
{
	u_char sum = 0;

	for (; len; len--) {
		sum += *data++;
	}
	return (-sum);
}

/* XXX: Needs work */
static int
ipmi_ipmb_send_message(device_t dev, u_char channel, u_char netfn,
    u_char command, u_char seq, u_char *data, int data_len)
{
	struct ipmi_softc *sc = device_get_softc(dev);
	struct ipmi_request *req;
	u_char slave_addr = 0x52;
	int error;

	IPMI_ALLOC_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0),
	    IPMI_SEND_MSG, data_len + 8, 0);
	req->ir_request[0] = channel;
	req->ir_request[1] = slave_addr;
	req->ir_request[2] = IPMI_ADDR(netfn, 0);
	req->ir_request[3] = ipmi_ipmb_checksum(&req->ir_request[1], 2);
	req->ir_request[4] = sc->ipmi_address;
	req->ir_request[5] = IPMI_ADDR(seq, sc->ipmi_lun);
	req->ir_request[6] = command;
	bcopy(data, &req->ir_request[7], data_len);
	req->ir_request[data_len + 7] = ipmi_ipmb_checksum(&req->ir_request[4],
	    data_len + 3);

	ipmi_submit_driver_request(sc, req, 0);

	error = req->ir_error;
	return (error);
}

static int
ipmi_handle_attn(struct ipmi_softc *sc)
{
	struct ipmi_request *req;
	int error;

	device_printf(sc->ipmi_dev, "BMC has a message\n");
	IPMI_ALLOC_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0),
	    IPMI_GET_MSG_FLAGS, 0, 1);

	ipmi_submit_driver_request(sc, req, 0);

	if (req->ir_error == 0 && req->ir_compcode == 0) {
		if (req->ir_reply[0] & IPMI_MSG_BUFFER_FULL) {
			device_printf(sc->ipmi_dev, "message buffer full");
		}
		if (req->ir_reply[0] & IPMI_WDT_PRE_TIMEOUT) {
			device_printf(sc->ipmi_dev,
			    "watchdog about to go off");
		}
		if (req->ir_reply[0] & IPMI_MSG_AVAILABLE) {
			IPMI_ALLOC_DRIVER_REQUEST(req,
			    IPMI_ADDR(IPMI_APP_REQUEST, 0), IPMI_GET_MSG, 0,
			    16);

			ipmi_submit_driver_request(sc, req, 0);

			device_printf(sc->ipmi_dev, "throw out message ");
			dump_buf(req->ir_reply, 16);
		}
	}
	error = req->ir_error;
	return (error);
}
#endif

#ifdef IPMICTL_SEND_COMMAND_32
#define	PTRIN(p)	((void *)(uintptr_t)(p))
#define	PTROUT(p)	((uintptr_t)(p))
#endif

static int
ipmi_ioctl(struct cdev *cdev, u_long cmd, caddr_t data,
    int flags, struct thread *td)
{
	struct ipmi_softc *sc;
	struct ipmi_device *dev;
	struct ipmi_request *kreq;
	struct ipmi_req *req = (struct ipmi_req *)data;
	struct ipmi_recv *recv = (struct ipmi_recv *)data;
	struct ipmi_addr addr;
#ifdef IPMICTL_SEND_COMMAND_32
	struct ipmi_req32 *req32 = (struct ipmi_req32 *)data;
	struct ipmi_recv32 *recv32 = (struct ipmi_recv32 *)data;
	union {
		struct ipmi_req req;
		struct ipmi_recv recv;
	} thunk32;
#endif
	int error, len;

	error = devfs_get_cdevpriv((void **)&dev);
	if (error)
		return (error);

	sc = cdev->si_drv1;

#ifdef IPMICTL_SEND_COMMAND_32
	/* Convert 32-bit structures to native. */
	switch (cmd) {
	case IPMICTL_SEND_COMMAND_32:
		req = &thunk32.req;
		req->addr = PTRIN(req32->addr);
		req->addr_len = req32->addr_len;
		req->msgid = req32->msgid;
		req->msg.netfn = req32->msg.netfn;
		req->msg.cmd = req32->msg.cmd;
		req->msg.data_len = req32->msg.data_len;
		req->msg.data = PTRIN(req32->msg.data);
		break;
	case IPMICTL_RECEIVE_MSG_TRUNC_32:
	case IPMICTL_RECEIVE_MSG_32:
		recv = &thunk32.recv;
		recv->addr = PTRIN(recv32->addr);
		recv->addr_len = recv32->addr_len;
		recv->msg.data_len = recv32->msg.data_len;
		recv->msg.data = PTRIN(recv32->msg.data);
		break;
	}
#endif

	switch (cmd) {
#ifdef IPMICTL_SEND_COMMAND_32
	case IPMICTL_SEND_COMMAND_32:
#endif
	case IPMICTL_SEND_COMMAND:
		/*
		 * XXX: Need to add proper handling of this.
		 */
		error = copyin(req->addr, &addr, sizeof(addr));
		if (error)
			return (error);

		IPMI_LOCK(sc);
		/* clear out old stuff in queue of stuff done */
		/* XXX: This seems odd.
*/ while ((kreq = TAILQ_FIRST(&dev->ipmi_completed_requests))) { TAILQ_REMOVE(&dev->ipmi_completed_requests, kreq, ir_link); dev->ipmi_requests--; ipmi_free_request(kreq); } IPMI_UNLOCK(sc); kreq = ipmi_alloc_request(dev, req->msgid, IPMI_ADDR(req->msg.netfn, 0), req->msg.cmd, req->msg.data_len, IPMI_MAX_RX); error = copyin(req->msg.data, kreq->ir_request, req->msg.data_len); if (error) { ipmi_free_request(kreq); return (error); } IPMI_LOCK(sc); dev->ipmi_requests++; error = sc->ipmi_enqueue_request(sc, kreq); IPMI_UNLOCK(sc); if (error) return (error); break; #ifdef IPMICTL_SEND_COMMAND_32 case IPMICTL_RECEIVE_MSG_TRUNC_32: case IPMICTL_RECEIVE_MSG_32: #endif case IPMICTL_RECEIVE_MSG_TRUNC: case IPMICTL_RECEIVE_MSG: error = copyin(recv->addr, &addr, sizeof(addr)); if (error) return (error); IPMI_LOCK(sc); kreq = TAILQ_FIRST(&dev->ipmi_completed_requests); if (kreq == NULL) { IPMI_UNLOCK(sc); return (EAGAIN); } addr.channel = IPMI_BMC_CHANNEL; /* XXX */ recv->recv_type = IPMI_RESPONSE_RECV_TYPE; recv->msgid = kreq->ir_msgid; recv->msg.netfn = IPMI_REPLY_ADDR(kreq->ir_addr) >> 2; recv->msg.cmd = kreq->ir_command; error = kreq->ir_error; if (error) { TAILQ_REMOVE(&dev->ipmi_completed_requests, kreq, ir_link); dev->ipmi_requests--; IPMI_UNLOCK(sc); ipmi_free_request(kreq); return (error); } len = kreq->ir_replylen + 1; if (recv->msg.data_len < len && (cmd == IPMICTL_RECEIVE_MSG #ifdef IPMICTL_RECEIVE_MSG_32 || cmd == IPMICTL_RECEIVE_MSG_32 #endif )) { IPMI_UNLOCK(sc); return (EMSGSIZE); } TAILQ_REMOVE(&dev->ipmi_completed_requests, kreq, ir_link); dev->ipmi_requests--; IPMI_UNLOCK(sc); len = min(recv->msg.data_len, len); recv->msg.data_len = len; error = copyout(&addr, recv->addr,sizeof(addr)); if (error == 0) error = copyout(&kreq->ir_compcode, recv->msg.data, 1); if (error == 0) error = copyout(kreq->ir_reply, recv->msg.data + 1, len - 1); ipmi_free_request(kreq); if (error) return (error); break; case IPMICTL_SET_MY_ADDRESS_CMD: IPMI_LOCK(sc); dev->ipmi_address = *(int*)data; IPMI_UNLOCK(sc); break; case IPMICTL_GET_MY_ADDRESS_CMD: IPMI_LOCK(sc); *(int*)data = dev->ipmi_address; IPMI_UNLOCK(sc); break; case IPMICTL_SET_MY_LUN_CMD: IPMI_LOCK(sc); dev->ipmi_lun = *(int*)data & 0x3; IPMI_UNLOCK(sc); break; case IPMICTL_GET_MY_LUN_CMD: IPMI_LOCK(sc); *(int*)data = dev->ipmi_lun; IPMI_UNLOCK(sc); break; case IPMICTL_SET_GETS_EVENTS_CMD: /* device_printf(sc->ipmi_dev, "IPMICTL_SET_GETS_EVENTS_CMD NA\n"); */ break; case IPMICTL_REGISTER_FOR_CMD: case IPMICTL_UNREGISTER_FOR_CMD: return (EOPNOTSUPP); default: device_printf(sc->ipmi_dev, "Unknown IOCTL %lX\n", cmd); return (ENOIOCTL); } #ifdef IPMICTL_SEND_COMMAND_32 /* Update changed fields in 32-bit structures. */ switch (cmd) { case IPMICTL_RECEIVE_MSG_TRUNC_32: case IPMICTL_RECEIVE_MSG_32: recv32->recv_type = recv->recv_type; recv32->msgid = recv->msgid; recv32->msg.netfn = recv->msg.netfn; recv32->msg.cmd = recv->msg.cmd; recv32->msg.data_len = recv->msg.data_len; break; } #endif return (0); } /* * Request management. */ static __inline void ipmi_init_request(struct ipmi_request *req, struct ipmi_device *dev, long msgid, uint8_t addr, uint8_t command, size_t requestlen, size_t replylen) { req->ir_owner = dev; req->ir_msgid = msgid; req->ir_addr = addr; req->ir_command = command; if (requestlen) { req->ir_request = (char *)&req[1]; req->ir_requestlen = requestlen; } if (replylen) { req->ir_reply = (char *)&req[1] + requestlen; req->ir_replybuflen = replylen; } } /* Allocate a new request with request and reply buffers. 
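 *
 * (For userland-originated commands the counterpart is ipmi_free_request();
 * ipmi_ioctl() above, for example, allocates with
 *
 *	kreq = ipmi_alloc_request(dev, req->msgid,
 *	    IPMI_ADDR(req->msg.netfn, 0), req->msg.cmd,
 *	    req->msg.data_len, IPMI_MAX_RX);
 *
 * and frees the request once the reply has been copied out.)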
*/ struct ipmi_request * ipmi_alloc_request(struct ipmi_device *dev, long msgid, uint8_t addr, uint8_t command, size_t requestlen, size_t replylen) { struct ipmi_request *req; req = malloc(sizeof(struct ipmi_request) + requestlen + replylen, M_IPMI, M_WAITOK | M_ZERO); ipmi_init_request(req, dev, msgid, addr, command, requestlen, replylen); return (req); } /* Free a request no longer in use. */ void ipmi_free_request(struct ipmi_request *req) { free(req, M_IPMI); } /* Store a processed request on the appropriate completion queue. */ void ipmi_complete_request(struct ipmi_softc *sc, struct ipmi_request *req) { struct ipmi_device *dev; IPMI_LOCK_ASSERT(sc); /* * Anonymous requests (from inside the driver) always have a * waiter that we awaken. */ if (req->ir_owner == NULL) wakeup(req); else { dev = req->ir_owner; TAILQ_INSERT_TAIL(&dev->ipmi_completed_requests, req, ir_link); selwakeup(&dev->ipmi_select); if (dev->ipmi_closing) wakeup(&dev->ipmi_requests); } } /* Perform an internal driver request. */ int ipmi_submit_driver_request(struct ipmi_softc *sc, struct ipmi_request *req, int timo) { return (sc->ipmi_driver_request(sc, req, timo)); } /* * Helper routine for polled system interfaces that use * ipmi_polled_enqueue_request() to queue requests. This request * waits until there is a pending request and then returns the first * request. If the driver is shutting down, it returns NULL. */ struct ipmi_request * ipmi_dequeue_request(struct ipmi_softc *sc) { struct ipmi_request *req; IPMI_LOCK_ASSERT(sc); while (!sc->ipmi_detaching && TAILQ_EMPTY(&sc->ipmi_pending_requests)) cv_wait(&sc->ipmi_request_added, &sc->ipmi_requests_lock); if (sc->ipmi_detaching) return (NULL); req = TAILQ_FIRST(&sc->ipmi_pending_requests); TAILQ_REMOVE(&sc->ipmi_pending_requests, req, ir_link); return (req); } /* Default implementation of ipmi_enqueue_request() for polled interfaces. */ int ipmi_polled_enqueue_request(struct ipmi_softc *sc, struct ipmi_request *req) { IPMI_LOCK_ASSERT(sc); TAILQ_INSERT_TAIL(&sc->ipmi_pending_requests, req, ir_link); cv_signal(&sc->ipmi_request_added); return (0); } /* * Watchdog event handler. */ static int ipmi_reset_watchdog(struct ipmi_softc *sc) { struct ipmi_request *req; int error; IPMI_ALLOC_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0), IPMI_RESET_WDOG, 0, 0); error = ipmi_submit_driver_request(sc, req, 0); if (error) device_printf(sc->ipmi_dev, "Failed to reset watchdog\n"); return (error); } static int ipmi_set_watchdog(struct ipmi_softc *sc, unsigned int sec) { struct ipmi_request *req; int error; if (sec > 0xffff / 10) return (EINVAL); IPMI_ALLOC_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0), IPMI_SET_WDOG, 6, 0); if (sec) { req->ir_request[0] = IPMI_SET_WD_TIMER_DONT_STOP | IPMI_SET_WD_TIMER_SMS_OS; req->ir_request[1] = (wd_timer_actions & 0xff); req->ir_request[2] = (wd_pretimeout_countdown & 0xff); req->ir_request[3] = 0; /* Timer use */ req->ir_request[4] = (sec * 10) & 0xff; req->ir_request[5] = (sec * 10) >> 8; } else { req->ir_request[0] = IPMI_SET_WD_TIMER_SMS_OS; req->ir_request[1] = 0; req->ir_request[2] = 0; req->ir_request[3] = 0; /* Timer use */ req->ir_request[4] = 0; req->ir_request[5] = 0; } error = ipmi_submit_driver_request(sc, req, 0); if (error) device_printf(sc->ipmi_dev, "Failed to set watchdog\n"); return (error); } static void ipmi_wd_event(void *arg, unsigned int cmd, int *error) { struct ipmi_softc *sc = arg; unsigned int timeout; int e; /* Ignore requests while disabled. 
*/
	if (!on)
		return;

	/*
	 * To prevent infinite hangs, we don't let anyone pat or change
	 * the watchdog when we're shutting down.  (See ipmi_shutdown_event().)
	 * However, we do want to keep patting the watchdog while we are doing
	 * a coredump.
	 */
	if (wd_in_shutdown) {
		if (dumping && sc->ipmi_watchdog_active)
			ipmi_reset_watchdog(sc);
		return;
	}

	cmd &= WD_INTERVAL;
	if (cmd > 0 && cmd <= 63) {
		timeout = ((uint64_t)1 << cmd) / 1000000000;
		if (timeout == 0)
			timeout = 1;
		if (timeout != sc->ipmi_watchdog_active ||
		    wd_timer_actions != sc->ipmi_watchdog_actions ||
		    wd_pretimeout_countdown != sc->ipmi_watchdog_pretimeout) {
			e = ipmi_set_watchdog(sc, timeout);
			if (e == 0) {
				sc->ipmi_watchdog_active = timeout;
				sc->ipmi_watchdog_actions = wd_timer_actions;
				sc->ipmi_watchdog_pretimeout =
				    wd_pretimeout_countdown;
			} else {
				(void)ipmi_set_watchdog(sc, 0);
				sc->ipmi_watchdog_active = 0;
				sc->ipmi_watchdog_actions = 0;
				sc->ipmi_watchdog_pretimeout = 0;
			}
		}
		if (sc->ipmi_watchdog_active != 0) {
			e = ipmi_reset_watchdog(sc);
			if (e == 0) {
				*error = 0;
			} else {
				(void)ipmi_set_watchdog(sc, 0);
				sc->ipmi_watchdog_active = 0;
				sc->ipmi_watchdog_actions = 0;
				sc->ipmi_watchdog_pretimeout = 0;
			}
		}
	} else if (atomic_readandclear_int(&sc->ipmi_watchdog_active) != 0) {
		sc->ipmi_watchdog_actions = 0;
		sc->ipmi_watchdog_pretimeout = 0;

		e = ipmi_set_watchdog(sc, 0);
		if (e != 0 && cmd == 0)
			*error = EOPNOTSUPP;
	}
}

static void
ipmi_shutdown_event(void *arg, unsigned int cmd, int *error)
{
	struct ipmi_softc *sc = arg;

	/* Ignore event if disabled. */
	if (!on)
		return;

	/*
	 * Positive wd_shutdown_countdown value will re-arm watchdog;
	 * Zero value in wd_shutdown_countdown will disable watchdog;
	 * Negative value in wd_shutdown_countdown will keep existing state;
	 *
	 * Revert to using a power cycle to ensure that the watchdog will
	 * do something useful here.  Having the watchdog send an NMI
	 * instead is useless during shutdown, and might be ignored if an
	 * NMI already triggered.
	 */
	wd_in_shutdown = true;
	if (wd_shutdown_countdown == 0) {
		/* disable watchdog */
		ipmi_set_watchdog(sc, 0);
		sc->ipmi_watchdog_active = 0;
	} else if (wd_shutdown_countdown > 0) {
		/* set desired action and time, and reset watchdog */
		wd_timer_actions = IPMI_SET_WD_ACTION_POWER_CYCLE;
		ipmi_set_watchdog(sc, wd_shutdown_countdown);
		sc->ipmi_watchdog_active = wd_shutdown_countdown;
		ipmi_reset_watchdog(sc);
	}
}

static void
ipmi_power_cycle(void *arg, int howto)
{
	struct ipmi_softc *sc = arg;
	struct ipmi_request *req;

	/*
	 * Ignore everything except power cycling requests
	 */
	if ((howto & RB_POWERCYCLE) == 0)
		return;

	device_printf(sc->ipmi_dev, "Power cycling using IPMI\n");

	/*
	 * Send a CHASSIS_CONTROL command to the CHASSIS device, subcommand 2
	 * as described in IPMI v2.0 spec section 28.3.
	 */
	IPMI_ALLOC_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_CHASSIS_REQUEST, 0),
	    IPMI_CHASSIS_CONTROL, 1, 0);
	req->ir_request[0] = IPMI_CC_POWER_CYCLE;

	ipmi_submit_driver_request(sc, req, MAX_TIMEOUT);

	if (req->ir_error != 0 || req->ir_compcode != 0) {
		device_printf(sc->ipmi_dev,
		    "Power cycling via IPMI failed code %#x %#x\n",
		    req->ir_error, req->ir_compcode);
		return;
	}

	/*
	 * BMCs are notoriously slow; give the BMC cycle_wait seconds for the
	 * power down leg of the power cycle.  If that fails, fall back to the
	 * next handler in the shutdown_final chain and/or the platform
	 * failsafe.
*/ DELAY(cycle_wait * 1000 * 1000); device_printf(sc->ipmi_dev, "Power cycling via IPMI timed out\n"); } static void ipmi_startup(void *arg) { struct ipmi_softc *sc = arg; struct ipmi_request *req; device_t dev; int error, i; config_intrhook_disestablish(&sc->ipmi_ich); dev = sc->ipmi_dev; /* Initialize interface-independent state. */ mtx_init(&sc->ipmi_requests_lock, "ipmi requests", NULL, MTX_DEF); mtx_init(&sc->ipmi_io_lock, "ipmi io", NULL, MTX_DEF); cv_init(&sc->ipmi_request_added, "ipmireq"); TAILQ_INIT(&sc->ipmi_pending_requests); /* Initialize interface-dependent state. */ error = sc->ipmi_startup(sc); if (error) { device_printf(dev, "Failed to initialize interface: %d\n", error); return; } /* Send a GET_DEVICE_ID request. */ IPMI_ALLOC_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0), IPMI_GET_DEVICE_ID, 0, 15); error = ipmi_submit_driver_request(sc, req, MAX_TIMEOUT); if (error == EWOULDBLOCK) { device_printf(dev, "Timed out waiting for GET_DEVICE_ID\n"); return; } else if (error) { device_printf(dev, "Failed GET_DEVICE_ID: %d\n", error); return; } else if (req->ir_compcode != 0) { device_printf(dev, "Bad completion code for GET_DEVICE_ID: %d\n", req->ir_compcode); return; } else if (req->ir_replylen < 5) { device_printf(dev, "Short reply for GET_DEVICE_ID: %d\n", req->ir_replylen); return; } device_printf(dev, "IPMI device rev. %d, firmware rev. %d.%d%d, " "version %d.%d, device support mask %#x\n", req->ir_reply[1] & 0x0f, req->ir_reply[2] & 0x7f, req->ir_reply[3] >> 4, req->ir_reply[3] & 0x0f, req->ir_reply[4] & 0x0f, req->ir_reply[4] >> 4, req->ir_reply[5]); sc->ipmi_dev_support = req->ir_reply[5]; IPMI_INIT_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0), IPMI_CLEAR_FLAGS, 1, 0); ipmi_submit_driver_request(sc, req, 0); /* XXX: Magic numbers */ if (req->ir_compcode == 0xc0) { device_printf(dev, "Clear flags is busy\n"); } if (req->ir_compcode == 0xc1) { device_printf(dev, "Clear flags illegal\n"); } for (i = 0; i < 8; i++) { IPMI_INIT_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0), IPMI_GET_CHANNEL_INFO, 1, 0); req->ir_request[0] = i; ipmi_submit_driver_request(sc, req, 0); if (req->ir_compcode != 0) break; } device_printf(dev, "Number of channels %d\n", i); /* * Probe for watchdog, but only for backends which support * polled driver requests. */ if (sc->ipmi_driver_requests_polled) { IPMI_INIT_DRIVER_REQUEST(req, IPMI_ADDR(IPMI_APP_REQUEST, 0), IPMI_GET_WDOG, 0, 0); ipmi_submit_driver_request(sc, req, 0); if (req->ir_compcode == 0x00) { device_printf(dev, "Attached watchdog\n"); /* register the watchdog event handler */ sc->ipmi_watchdog_tag = EVENTHANDLER_REGISTER( watchdog_list, ipmi_wd_event, sc, 0); sc->ipmi_shutdown_tag = EVENTHANDLER_REGISTER( shutdown_pre_sync, ipmi_shutdown_event, sc, 0); } } sc->ipmi_cdev = make_dev(&ipmi_cdevsw, device_get_unit(dev), UID_ROOT, GID_OPERATOR, 0660, "ipmi%d", device_get_unit(dev)); if (sc->ipmi_cdev == NULL) { device_printf(dev, "Failed to create cdev\n"); return; } sc->ipmi_cdev->si_drv1 = sc; /* * Set initial watchdog state. If desired, set an initial * watchdog on startup. Or, if the watchdog device is * disabled, clear any existing watchdog. 
*/ if (on && wd_startup_countdown > 0) { wd_timer_actions = IPMI_SET_WD_ACTION_POWER_CYCLE; if (ipmi_set_watchdog(sc, wd_startup_countdown) == 0 && ipmi_reset_watchdog(sc) == 0) { sc->ipmi_watchdog_active = wd_startup_countdown; sc->ipmi_watchdog_actions = wd_timer_actions; sc->ipmi_watchdog_pretimeout = wd_pretimeout_countdown; } else (void)ipmi_set_watchdog(sc, 0); ipmi_reset_watchdog(sc); } else if (!on) (void)ipmi_set_watchdog(sc, 0); /* * Power cycle the system off using IPMI. We use last - 1 since we don't * handle all the other kinds of reboots. We'll let others handle them. * We only try to do this if the BMC supports the Chassis device. */ if (sc->ipmi_dev_support & IPMI_ADS_CHASSIS) { device_printf(dev, "Establishing power cycle handler\n"); sc->ipmi_power_cycle_tag = EVENTHANDLER_REGISTER(shutdown_final, ipmi_power_cycle, sc, SHUTDOWN_PRI_LAST - 1); } } int ipmi_attach(device_t dev) { struct ipmi_softc *sc = device_get_softc(dev); int error; if (sc->ipmi_irq_res != NULL && sc->ipmi_intr != NULL) { error = bus_setup_intr(dev, sc->ipmi_irq_res, INTR_TYPE_MISC, NULL, sc->ipmi_intr, sc, &sc->ipmi_irq); if (error) { device_printf(dev, "can't set up interrupt\n"); return (error); } } bzero(&sc->ipmi_ich, sizeof(struct intr_config_hook)); sc->ipmi_ich.ich_func = ipmi_startup; sc->ipmi_ich.ich_arg = sc; if (config_intrhook_establish(&sc->ipmi_ich) != 0) { device_printf(dev, "can't establish configuration hook\n"); return (ENOMEM); } ipmi_attached = 1; return (0); } int ipmi_detach(device_t dev) { struct ipmi_softc *sc; sc = device_get_softc(dev); /* Fail if there are any open handles. */ IPMI_LOCK(sc); if (sc->ipmi_opened) { IPMI_UNLOCK(sc); return (EBUSY); } IPMI_UNLOCK(sc); if (sc->ipmi_cdev) destroy_dev(sc->ipmi_cdev); /* Detach from watchdog handling and turn off watchdog. */ if (sc->ipmi_shutdown_tag) EVENTHANDLER_DEREGISTER(shutdown_pre_sync, sc->ipmi_shutdown_tag); if (sc->ipmi_watchdog_tag) { EVENTHANDLER_DEREGISTER(watchdog_list, sc->ipmi_watchdog_tag); ipmi_set_watchdog(sc, 0); } /* Detach from shutdown handling for power cycle reboot */ if (sc->ipmi_power_cycle_tag) EVENTHANDLER_DEREGISTER(shutdown_final, sc->ipmi_power_cycle_tag); /* XXX: should use shutdown callout I think. */ /* If the backend uses a kthread, shut it down. */ IPMI_LOCK(sc); sc->ipmi_detaching = 1; if (sc->ipmi_kthread) { cv_broadcast(&sc->ipmi_request_added); msleep(sc->ipmi_kthread, &sc->ipmi_requests_lock, 0, "ipmi_wait", 0); } IPMI_UNLOCK(sc); if (sc->ipmi_irq) bus_teardown_intr(dev, sc->ipmi_irq_res, sc->ipmi_irq); ipmi_release_resources(dev); mtx_destroy(&sc->ipmi_io_lock); mtx_destroy(&sc->ipmi_requests_lock); return (0); } void ipmi_release_resources(device_t dev) { struct ipmi_softc *sc; int i; sc = device_get_softc(dev); if (sc->ipmi_irq) bus_teardown_intr(dev, sc->ipmi_irq_res, sc->ipmi_irq); if (sc->ipmi_irq_res) bus_release_resource(dev, SYS_RES_IRQ, sc->ipmi_irq_rid, sc->ipmi_irq_res); for (i = 0; i < MAX_RES; i++) if (sc->ipmi_io_res[i]) bus_release_resource(dev, sc->ipmi_io_type, sc->ipmi_io_rid + i, sc->ipmi_io_res[i]); } devclass_t ipmi_devclass; /* XXX: Why? 
*/ static void ipmi_unload(void *arg) { device_t * devs; int count; int i; if (devclass_get_devices(ipmi_devclass, &devs, &count) != 0) return; for (i = 0; i < count; i++) device_delete_child(device_get_parent(devs[i]), devs[i]); free(devs, M_TEMP); } SYSUNINIT(ipmi_unload, SI_SUB_DRIVERS, SI_ORDER_FIRST, ipmi_unload, NULL); #ifdef IMPI_DEBUG static void dump_buf(u_char *data, int len) { char buf[20]; char line[1024]; char temp[30]; int count = 0; int i=0; printf("Address %p len %d\n", data, len); if (len > 256) len = 256; line[0] = '\000'; for (; len > 0; len--, data++) { sprintf(temp, "%02x ", *data); strcat(line, temp); if (*data >= ' ' && *data <= '~') buf[count] = *data; else if (*data >= 'A' && *data <= 'Z') buf[count] = *data; else buf[count] = '.'; if (++count == 16) { buf[count] = '\000'; count = 0; printf(" %3x %s %s\n", i, line, buf); i+=16; line[0] = '\000'; } } buf[count] = '\000'; for (; count != 16; count++) { strcat(line, " "); } printf(" %3x %s %s\n", i, line, buf); } #endif Index: projects/runtime-coverage/sys/kern/md4c.c =================================================================== --- projects/runtime-coverage/sys/kern/md4c.c (revision 325444) +++ projects/runtime-coverage/sys/kern/md4c.c (revision 325445) @@ -1,286 +1,280 @@ /* MD4C.C - RSA Data Security, Inc., MD4 message-digest algorithm */ /*- Copyright (C) 1990-2, RSA Data Security, Inc. All rights reserved. License to copy and use this software is granted provided that it is identified as the "RSA Data Security, Inc. MD4 Message-Digest Algorithm" in all material mentioning or referencing this software or this function. License is also granted to make and use derivative works provided that such works are identified as "derived from the RSA Data Security, Inc. MD4 Message-Digest Algorithm" in all material mentioning or referencing the derived work. RSA Data Security, Inc. makes no representations concerning either the merchantability of this software or the suitability of this software for any particular purpose. It is provided "as is" without express or implied warranty of any kind. These notices must be retained in any copies of any part of this documentation and/or software. */ #include __FBSDID("$FreeBSD$"); #include #include #include typedef unsigned char *POINTER; typedef u_int16_t UINT2; typedef u_int32_t UINT4; #define PROTO_LIST(list) list /* Constants for MD4Transform routine. */ #define S11 3 #define S12 7 #define S13 11 #define S14 19 #define S21 3 #define S22 5 #define S23 9 #define S24 13 #define S31 3 #define S32 9 #define S33 11 #define S34 15 static void MD4Transform PROTO_LIST ((UINT4 [4], const unsigned char [64])); static void Encode PROTO_LIST ((unsigned char *, UINT4 *, unsigned int)); static void Decode PROTO_LIST ((UINT4 *, const unsigned char *, unsigned int)); static unsigned char PADDING[64] = { 0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; /* F, G and H are basic MD4 functions. */ #define F(x, y, z) (((x) & (y)) | ((~x) & (z))) #define G(x, y, z) (((x) & (y)) | ((x) & (z)) | ((y) & (z))) #define H(x, y, z) ((x) ^ (y) ^ (z)) /* ROTATE_LEFT rotates x left n bits. 
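 *
 * For example, with a 32-bit UINT4 operand:
 *
 *	ROTATE_LEFT(0x80000001, 1) == 0x00000003
 *
 * since the bit shifted out on the left re-enters on the right.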
*/
#define ROTATE_LEFT(x, n) (((x) << (n)) | ((x) >> (32-(n))))

/* FF, GG and HH are transformations for rounds 1, 2 and 3 */
/* Rotation is separate from addition to prevent recomputation */
#define FF(a, b, c, d, x, s) { \
    (a) += F ((b), (c), (d)) + (x); \
    (a) = ROTATE_LEFT ((a), (s)); \
  }
#define GG(a, b, c, d, x, s) { \
    (a) += G ((b), (c), (d)) + (x) + (UINT4)0x5a827999; \
    (a) = ROTATE_LEFT ((a), (s)); \
  }
#define HH(a, b, c, d, x, s) { \
    (a) += H ((b), (c), (d)) + (x) + (UINT4)0x6ed9eba1; \
    (a) = ROTATE_LEFT ((a), (s)); \
  }

/* MD4 initialization. Begins an MD4 operation, writing a new context. */
-void MD4Init (context)
-MD4_CTX *context; /* context */
+void
+MD4Init(MD4_CTX *context)
{
  context->count[0] = context->count[1] = 0;

  /* Load magic initialization constants. */
  context->state[0] = 0x67452301;
  context->state[1] = 0xefcdab89;
  context->state[2] = 0x98badcfe;
  context->state[3] = 0x10325476;
}

/* MD4 block update operation. Continues an MD4 message-digest
     operation, processing another message block, and updating the
     context.
 */
-void MD4Update (context, input, inputLen)
-MD4_CTX *context; /* context */
-const unsigned char *input; /* input block */
-unsigned int inputLen; /* length of input block */
+void
+MD4Update(MD4_CTX *context, const unsigned char *input,
+    unsigned int inputLen)
{
  unsigned int i, index, partLen;

  /* Compute number of bytes mod 64 */
  index = (unsigned int)((context->count[0] >> 3) & 0x3F);
  /* Update number of bits */
  if ((context->count[0] += ((UINT4)inputLen << 3))
      < ((UINT4)inputLen << 3))
    context->count[1]++;
  context->count[1] += ((UINT4)inputLen >> 29);

  partLen = 64 - index;
  /* Transform as many times as possible. */
  if (inputLen >= partLen) {
    bcopy(input, &context->buffer[index], partLen);
    MD4Transform (context->state, context->buffer);

    for (i = partLen; i + 63 < inputLen; i += 64)
      MD4Transform (context->state, &input[i]);

    index = 0;
  }
  else
    i = 0;

  /* Buffer remaining input */
  bcopy(&input[i], &context->buffer[index], inputLen-i);
}

/* MD4 padding. */
-void MD4Pad (context)
-MD4_CTX *context; /* context */
+void
+MD4Pad(MD4_CTX *context)
{
  unsigned char bits[8];
  unsigned int index, padLen;

  /* Save number of bits */
  Encode (bits, context->count, 8);

  /* Pad out to 56 mod 64. */
  index = (unsigned int)((context->count[0] >> 3) & 0x3f);
  padLen = (index < 56) ? (56 - index) : (120 - index);
  MD4Update (context, PADDING, padLen);

  /* Append length (before padding) */
  MD4Update (context, bits, 8);
}

/* MD4 finalization. Ends an MD4 message-digest operation, writing the
     message digest and zeroizing the context.
 */
-void MD4Final (unsigned char digest[static 16], MD4_CTX *context)
+void
+MD4Final(unsigned char digest[static 16], MD4_CTX *context)
{
  /* Do padding */
  MD4Pad (context);

  /* Store state in digest */
  Encode (digest, context->state, 16);

  /* Zeroize sensitive information. */
  bzero(context, sizeof (*context));
}

/* MD4 basic transformation. Transforms state based on block.
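 *
 * (Callers never invoke MD4Transform() directly.  As an illustrative
 * sketch, with data/datalen standing for any caller-supplied buffer,
 * the public API above is driven as:
 *
 *	MD4_CTX ctx;
 *	unsigned char digest[16];
 *
 *	MD4Init(&ctx);
 *	MD4Update(&ctx, data, datalen);
 *	MD4Final(digest, &ctx);
 *
 * after which digest holds the 16-byte MD4 of the buffer.)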
*/ -static void MD4Transform (state, block) -UINT4 state[4]; -const unsigned char block[64]; +static void +MD4Transform(UINT4 state[4], const unsigned char block[64]) { UINT4 a = state[0], b = state[1], c = state[2], d = state[3], x[16]; Decode (x, block, 64); /* Round 1 */ FF (a, b, c, d, x[ 0], S11); /* 1 */ FF (d, a, b, c, x[ 1], S12); /* 2 */ FF (c, d, a, b, x[ 2], S13); /* 3 */ FF (b, c, d, a, x[ 3], S14); /* 4 */ FF (a, b, c, d, x[ 4], S11); /* 5 */ FF (d, a, b, c, x[ 5], S12); /* 6 */ FF (c, d, a, b, x[ 6], S13); /* 7 */ FF (b, c, d, a, x[ 7], S14); /* 8 */ FF (a, b, c, d, x[ 8], S11); /* 9 */ FF (d, a, b, c, x[ 9], S12); /* 10 */ FF (c, d, a, b, x[10], S13); /* 11 */ FF (b, c, d, a, x[11], S14); /* 12 */ FF (a, b, c, d, x[12], S11); /* 13 */ FF (d, a, b, c, x[13], S12); /* 14 */ FF (c, d, a, b, x[14], S13); /* 15 */ FF (b, c, d, a, x[15], S14); /* 16 */ /* Round 2 */ GG (a, b, c, d, x[ 0], S21); /* 17 */ GG (d, a, b, c, x[ 4], S22); /* 18 */ GG (c, d, a, b, x[ 8], S23); /* 19 */ GG (b, c, d, a, x[12], S24); /* 20 */ GG (a, b, c, d, x[ 1], S21); /* 21 */ GG (d, a, b, c, x[ 5], S22); /* 22 */ GG (c, d, a, b, x[ 9], S23); /* 23 */ GG (b, c, d, a, x[13], S24); /* 24 */ GG (a, b, c, d, x[ 2], S21); /* 25 */ GG (d, a, b, c, x[ 6], S22); /* 26 */ GG (c, d, a, b, x[10], S23); /* 27 */ GG (b, c, d, a, x[14], S24); /* 28 */ GG (a, b, c, d, x[ 3], S21); /* 29 */ GG (d, a, b, c, x[ 7], S22); /* 30 */ GG (c, d, a, b, x[11], S23); /* 31 */ GG (b, c, d, a, x[15], S24); /* 32 */ /* Round 3 */ HH (a, b, c, d, x[ 0], S31); /* 33 */ HH (d, a, b, c, x[ 8], S32); /* 34 */ HH (c, d, a, b, x[ 4], S33); /* 35 */ HH (b, c, d, a, x[12], S34); /* 36 */ HH (a, b, c, d, x[ 2], S31); /* 37 */ HH (d, a, b, c, x[10], S32); /* 38 */ HH (c, d, a, b, x[ 6], S33); /* 39 */ HH (b, c, d, a, x[14], S34); /* 40 */ HH (a, b, c, d, x[ 1], S31); /* 41 */ HH (d, a, b, c, x[ 9], S32); /* 42 */ HH (c, d, a, b, x[ 5], S33); /* 43 */ HH (b, c, d, a, x[13], S34); /* 44 */ HH (a, b, c, d, x[ 3], S31); /* 45 */ HH (d, a, b, c, x[11], S32); /* 46 */ HH (c, d, a, b, x[ 7], S33); /* 47 */ HH (b, c, d, a, x[15], S34); /* 48 */ state[0] += a; state[1] += b; state[2] += c; state[3] += d; /* Zeroize sensitive information. */ bzero((POINTER)x, sizeof (x)); } /* Encodes input (UINT4) into output (unsigned char). Assumes len is a multiple of 4. */ -static void Encode (output, input, len) -unsigned char *output; -UINT4 *input; -unsigned int len; +static void +Encode(unsigned char *output, UINT4 *input, unsigned int len) { unsigned int i, j; for (i = 0, j = 0; j < len; i++, j += 4) { output[j] = (unsigned char)(input[i] & 0xff); output[j+1] = (unsigned char)((input[i] >> 8) & 0xff); output[j+2] = (unsigned char)((input[i] >> 16) & 0xff); output[j+3] = (unsigned char)((input[i] >> 24) & 0xff); } } /* Decodes input (unsigned char) into output (UINT4). Assumes len is a multiple of 4. 
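 *
 * For example, the byte sequence 01 23 45 67 decodes to the single
 * UINT4 value 0x67452301 (least significant byte first).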
*/ -static void Decode (output, input, len) - -UINT4 *output; -const unsigned char *input; -unsigned int len; +static void +Decode(UINT4 *output, const unsigned char *input, unsigned int len) { unsigned int i, j; for (i = 0, j = 0; j < len; i++, j += 4) output[i] = ((UINT4)input[j]) | (((UINT4)input[j+1]) << 8) | (((UINT4)input[j+2]) << 16) | (((UINT4)input[j+3]) << 24); } Index: projects/runtime-coverage/sys/kern/vfs_cache.c =================================================================== --- projects/runtime-coverage/sys/kern/vfs_cache.c (revision 325444) +++ projects/runtime-coverage/sys/kern/vfs_cache.c (revision 325445) @@ -1,2496 +1,2498 @@ /*- * Copyright (c) 1989, 1993, 1995 * The Regents of the University of California. All rights reserved. * * This code is derived from software contributed to Berkeley by * Poul-Henning Kamp of the FreeBSD Project. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * @(#)vfs_cache.c 8.5 (Berkeley) 3/22/95 */ #include __FBSDID("$FreeBSD$"); #include "opt_ktrace.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef KTRACE #include #endif #include SDT_PROVIDER_DECLARE(vfs); SDT_PROBE_DEFINE3(vfs, namecache, enter, done, "struct vnode *", "char *", "struct vnode *"); SDT_PROBE_DEFINE2(vfs, namecache, enter_negative, done, "struct vnode *", "char *"); SDT_PROBE_DEFINE1(vfs, namecache, fullpath, entry, "struct vnode *"); SDT_PROBE_DEFINE3(vfs, namecache, fullpath, hit, "struct vnode *", "char *", "struct vnode *"); SDT_PROBE_DEFINE1(vfs, namecache, fullpath, miss, "struct vnode *"); SDT_PROBE_DEFINE3(vfs, namecache, fullpath, return, "int", "struct vnode *", "char *"); SDT_PROBE_DEFINE3(vfs, namecache, lookup, hit, "struct vnode *", "char *", "struct vnode *"); SDT_PROBE_DEFINE2(vfs, namecache, lookup, hit__negative, "struct vnode *", "char *"); SDT_PROBE_DEFINE2(vfs, namecache, lookup, miss, "struct vnode *", "char *"); SDT_PROBE_DEFINE1(vfs, namecache, purge, done, "struct vnode *"); SDT_PROBE_DEFINE1(vfs, namecache, purge_negative, done, "struct vnode *"); SDT_PROBE_DEFINE1(vfs, namecache, purgevfs, done, "struct mount *"); SDT_PROBE_DEFINE3(vfs, namecache, zap, done, "struct vnode *", "char *", "struct vnode *"); SDT_PROBE_DEFINE3(vfs, namecache, zap_negative, done, "struct vnode *", "char *", "int"); SDT_PROBE_DEFINE3(vfs, namecache, shrink_negative, done, "struct vnode *", "char *", "int"); /* * This structure describes the elements in the cache of recent * names looked up by namei. */ struct namecache { LIST_ENTRY(namecache) nc_hash; /* hash chain */ LIST_ENTRY(namecache) nc_src; /* source vnode list */ TAILQ_ENTRY(namecache) nc_dst; /* destination vnode list */ struct vnode *nc_dvp; /* vnode of parent of name */ union { struct vnode *nu_vp; /* vnode the name refers to */ u_int nu_neghits; /* negative entry hits */ } n_un; u_char nc_flag; /* flag bits */ u_char nc_nlen; /* length of name */ char nc_name[0]; /* segment name + nul */ }; /* * struct namecache_ts repeats struct namecache layout up to the * nc_nlen member. * struct namecache_ts is used in place of struct namecache when time(s) need * to be stored. The nc_dotdottime field is used when a cache entry is mapping * both a non-dotdot directory name plus dotdot for the directory's * parent. */ struct namecache_ts { struct timespec nc_time; /* timespec provided by fs */ struct timespec nc_dotdottime; /* dotdot timespec provided by fs */ int nc_ticks; /* ticks value when entry was added */ struct namecache nc_nc; }; #define nc_vp n_un.nu_vp #define nc_neghits n_un.nu_neghits /* * Flags in namecache.nc_flag */ #define NCF_WHITE 0x01 #define NCF_ISDOTDOT 0x02 #define NCF_TS 0x04 #define NCF_DTS 0x08 #define NCF_DVDROP 0x10 #define NCF_NEGATIVE 0x20 #define NCF_HOTNEGATIVE 0x40 /* * Name caching works as follows: * * Names found by directory scans are retained in a cache * for future reference. It is managed LRU, so frequently * used names will hang around. Cache is indexed by hash value * obtained from (vp, name) where vp refers to the directory * containing name. * * If it is a "negative" entry, (i.e. for a name that is known NOT to * exist) the vnode pointer will be NULL. * * Upon reaching the last segment of a path, if the reference * is for DELETE, or NOCACHE is set (rewrite), and the * name is located in the cache, it will be dropped. 
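 *
 * For example (an illustration, not code from this file), resolving
 * "etc/passwd" relative to the root directory performs two indexed
 * lookups, conceptually:
 *
 *	cache_get_hash("etc", 3, rootvnode)	-> vnode of /etc
 *	cache_get_hash("passwd", 6, etc_vp)	-> vnode of /etc/passwd
 *
 * where etc_vp stands for the vnode found by the first lookup.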
* * These locks are used (in the order in which they can be taken): * NAME TYPE ROLE * vnodelock mtx vnode lists and v_cache_dd field protection * bucketlock rwlock for access to given set of hash buckets * neglist mtx negative entry LRU management * * Additionally, ncneg_shrink_lock mtx is used to have at most one thread * shrinking the LRU list. * * It is legal to take multiple vnodelock and bucketlock locks. The locking * order is lower address first. Both are recursive. * * "." lookups are lockless. * * ".." and vnode -> name lookups require vnodelock. * * name -> vnode lookup requires the relevant bucketlock to be held for reading. * * Insertions and removals of entries require involved vnodes and bucketlocks * to be write-locked to prevent other threads from seeing the entry. * * Some lookups result in removal of the found entry (e.g. getting rid of a * negative entry with the intent to create a positive one), which poses a * problem when multiple threads reach the state. Similarly, two different * threads can purge two different vnodes and try to remove the same name. * * If the already held vnode lock is lower than the second required lock, we * can just take the other lock. However, in the opposite case, this could * deadlock. As such, this is resolved by trylocking and if that fails unlocking * the first node, locking everything in order and revalidating the state. */ /* * Structures associated with name caching. */ #define NCHHASH(hash) \ (&nchashtbl[(hash) & nchash]) static __read_mostly LIST_HEAD(nchashhead, namecache) *nchashtbl;/* Hash Table */ static u_long __read_mostly nchash; /* size of hash table */ SYSCTL_ULONG(_debug, OID_AUTO, nchash, CTLFLAG_RD, &nchash, 0, "Size of namecache hash table"); static u_long __read_mostly ncnegfactor = 12; /* ratio of negative entries */ SYSCTL_ULONG(_vfs, OID_AUTO, ncnegfactor, CTLFLAG_RW, &ncnegfactor, 0, "Ratio of negative namecache entries"); static u_long __exclusive_cache_line numneg; /* number of negative entries allocated */ SYSCTL_ULONG(_debug, OID_AUTO, numneg, CTLFLAG_RD, &numneg, 0, "Number of negative entries in namecache"); static u_long __exclusive_cache_line numcache;/* number of cache entries allocated */ SYSCTL_ULONG(_debug, OID_AUTO, numcache, CTLFLAG_RD, &numcache, 0, "Number of namecache entries"); static u_long __exclusive_cache_line numcachehv;/* number of cache entries with vnodes held */ SYSCTL_ULONG(_debug, OID_AUTO, numcachehv, CTLFLAG_RD, &numcachehv, 0, "Number of namecache entries with vnodes held"); u_int __read_mostly ncsizefactor = 2; SYSCTL_UINT(_vfs, OID_AUTO, ncsizefactor, CTLFLAG_RW, &ncsizefactor, 0, "Size factor for namecache"); static u_int __read_mostly ncpurgeminvnodes; SYSCTL_UINT(_vfs, OID_AUTO, ncpurgeminvnodes, CTLFLAG_RW, &ncpurgeminvnodes, 0, "Number of vnodes below which purgevfs ignores the request"); static u_int __read_mostly ncneghitsrequeue = 8; SYSCTL_UINT(_vfs, OID_AUTO, ncneghitsrequeue, CTLFLAG_RW, &ncneghitsrequeue, 0, "Number of hits to requeue a negative entry in the LRU list"); struct nchstats nchstats; /* cache effectiveness statistics */ static struct mtx ncneg_shrink_lock; static int shrink_list_turn; struct neglist { struct mtx nl_lock; TAILQ_HEAD(, namecache) nl_list; } __aligned(CACHE_LINE_SIZE); static struct neglist __read_mostly *neglists; static struct neglist ncneg_hot; #define numneglists (ncneghash + 1) static u_int __read_mostly ncneghash; static inline struct neglist * NCP2NEGLIST(struct namecache *ncp) { return (&neglists[(((uintptr_t)(ncp) >> 8) & 
ncneghash)]); } #define numbucketlocks (ncbuckethash + 1) static u_int __read_mostly ncbuckethash; static struct rwlock_padalign __read_mostly *bucketlocks; #define HASH2BUCKETLOCK(hash) \ ((struct rwlock *)(&bucketlocks[((hash) & ncbuckethash)])) #define numvnodelocks (ncvnodehash + 1) static u_int __read_mostly ncvnodehash; static struct mtx __read_mostly *vnodelocks; static inline struct mtx * VP2VNODELOCK(struct vnode *vp) { return (&vnodelocks[(((uintptr_t)(vp) >> 8) & ncvnodehash)]); } /* * UMA zones for the VFS cache. * * The small cache is used for entries with short names, which are the * most common. The large cache is used for entries which are too big to * fit in the small cache. */ static uma_zone_t __read_mostly cache_zone_small; static uma_zone_t __read_mostly cache_zone_small_ts; static uma_zone_t __read_mostly cache_zone_large; static uma_zone_t __read_mostly cache_zone_large_ts; #define CACHE_PATH_CUTOFF 35 static struct namecache * cache_alloc(int len, int ts) { struct namecache_ts *ncp_ts; struct namecache *ncp; if (__predict_false(ts)) { if (len <= CACHE_PATH_CUTOFF) ncp_ts = uma_zalloc(cache_zone_small_ts, M_WAITOK); else ncp_ts = uma_zalloc(cache_zone_large_ts, M_WAITOK); ncp = &ncp_ts->nc_nc; } else { if (len <= CACHE_PATH_CUTOFF) ncp = uma_zalloc(cache_zone_small, M_WAITOK); else ncp = uma_zalloc(cache_zone_large, M_WAITOK); } return (ncp); } static void cache_free(struct namecache *ncp) { struct namecache_ts *ncp_ts; if (ncp == NULL) return; if ((ncp->nc_flag & NCF_DVDROP) != 0) vdrop(ncp->nc_dvp); if (__predict_false(ncp->nc_flag & NCF_TS)) { ncp_ts = __containerof(ncp, struct namecache_ts, nc_nc); if (ncp->nc_nlen <= CACHE_PATH_CUTOFF) uma_zfree(cache_zone_small_ts, ncp_ts); else uma_zfree(cache_zone_large_ts, ncp_ts); } else { if (ncp->nc_nlen <= CACHE_PATH_CUTOFF) uma_zfree(cache_zone_small, ncp); else uma_zfree(cache_zone_large, ncp); } } static void cache_out_ts(struct namecache *ncp, struct timespec *tsp, int *ticksp) { struct namecache_ts *ncp_ts; KASSERT((ncp->nc_flag & NCF_TS) != 0 || (tsp == NULL && ticksp == NULL), ("No NCF_TS")); if (tsp == NULL && ticksp == NULL) return; ncp_ts = __containerof(ncp, struct namecache_ts, nc_nc); if (tsp != NULL) *tsp = ncp_ts->nc_time; if (ticksp != NULL) *ticksp = ncp_ts->nc_ticks; } static int __read_mostly doingcache = 1; /* 1 => enable the cache */ SYSCTL_INT(_debug, OID_AUTO, vfscache, CTLFLAG_RW, &doingcache, 0, "VFS namecache enabled"); /* Export size information to userland */ SYSCTL_INT(_debug_sizeof, OID_AUTO, namecache, CTLFLAG_RD, SYSCTL_NULL_INT_PTR, sizeof(struct namecache), "sizeof(struct namecache)"); /* * The new name cache statistics */ static SYSCTL_NODE(_vfs, OID_AUTO, cache, CTLFLAG_RW, 0, "Name cache statistics"); #define STATNODE_ULONG(name, descr) \ SYSCTL_ULONG(_vfs_cache, OID_AUTO, name, CTLFLAG_RD, &name, 0, descr); #define STATNODE_COUNTER(name, descr) \ static counter_u64_t __read_mostly name; \ SYSCTL_COUNTER_U64(_vfs_cache, OID_AUTO, name, CTLFLAG_RD, &name, descr); STATNODE_ULONG(numneg, "Number of negative cache entries"); STATNODE_ULONG(numcache, "Number of cache entries"); STATNODE_COUNTER(numcalls, "Number of cache lookups"); STATNODE_COUNTER(dothits, "Number of '.' hits"); STATNODE_COUNTER(dotdothits, "Number of '..' 
hits"); STATNODE_COUNTER(numchecks, "Number of checks in lookup"); STATNODE_COUNTER(nummiss, "Number of cache misses"); STATNODE_COUNTER(nummisszap, "Number of cache misses we do not want to cache"); STATNODE_COUNTER(numposzaps, "Number of cache hits (positive) we do not want to cache"); STATNODE_COUNTER(numposhits, "Number of cache hits (positive)"); STATNODE_COUNTER(numnegzaps, "Number of cache hits (negative) we do not want to cache"); STATNODE_COUNTER(numneghits, "Number of cache hits (negative)"); /* These count for kern___getcwd(), too. */ STATNODE_COUNTER(numfullpathcalls, "Number of fullpath search calls"); STATNODE_COUNTER(numfullpathfail1, "Number of fullpath search errors (ENOTDIR)"); STATNODE_COUNTER(numfullpathfail2, "Number of fullpath search errors (VOP_VPTOCNP failures)"); STATNODE_COUNTER(numfullpathfail4, "Number of fullpath search errors (ENOMEM)"); STATNODE_COUNTER(numfullpathfound, "Number of successful fullpath calls"); static long zap_and_exit_bucket_fail; STATNODE_ULONG(zap_and_exit_bucket_fail, "Number of times zap_and_exit failed to lock"); static long cache_lock_vnodes_cel_3_failures; STATNODE_ULONG(cache_lock_vnodes_cel_3_failures, "Number of times 3-way vnode locking failed"); static void cache_zap_locked(struct namecache *ncp, bool neg_locked); static int vn_fullpath1(struct thread *td, struct vnode *vp, struct vnode *rdir, char *buf, char **retbuf, u_int buflen); static MALLOC_DEFINE(M_VFSCACHE, "vfscache", "VFS name cache entries"); static int cache_yield; SYSCTL_INT(_vfs_cache, OID_AUTO, yield, CTLFLAG_RD, &cache_yield, 0, "Number of times cache called yield"); static void cache_maybe_yield(void) { if (should_yield()) { cache_yield++; kern_yield(PRI_USER); } } static inline void cache_assert_vlp_locked(struct mtx *vlp) { if (vlp != NULL) mtx_assert(vlp, MA_OWNED); } static inline void cache_assert_vnode_locked(struct vnode *vp) { struct mtx *vlp; vlp = VP2VNODELOCK(vp); cache_assert_vlp_locked(vlp); } static uint32_t cache_get_hash(char *name, u_char len, struct vnode *dvp) { uint32_t hash; hash = fnv_32_buf(name, len, FNV1_32_INIT); hash = fnv_32_buf(&dvp, sizeof(dvp), hash); return (hash); } static inline struct rwlock * NCP2BUCKETLOCK(struct namecache *ncp) { uint32_t hash; hash = cache_get_hash(ncp->nc_name, ncp->nc_nlen, ncp->nc_dvp); return (HASH2BUCKETLOCK(hash)); } #ifdef INVARIANTS static void cache_assert_bucket_locked(struct namecache *ncp, int mode) { struct rwlock *blp; blp = NCP2BUCKETLOCK(ncp); rw_assert(blp, mode); } #else #define cache_assert_bucket_locked(x, y) do { } while (0) #endif #define cache_sort(x, y) _cache_sort((void **)(x), (void **)(y)) static void _cache_sort(void **p1, void **p2) { void *tmp; if (*p1 > *p2) { tmp = *p2; *p2 = *p1; *p1 = tmp; } } static void cache_lock_all_buckets(void) { u_int i; for (i = 0; i < numbucketlocks; i++) rw_wlock(&bucketlocks[i]); } static void cache_unlock_all_buckets(void) { u_int i; for (i = 0; i < numbucketlocks; i++) rw_wunlock(&bucketlocks[i]); } static void cache_lock_all_vnodes(void) { u_int i; for (i = 0; i < numvnodelocks; i++) mtx_lock(&vnodelocks[i]); } static void cache_unlock_all_vnodes(void) { u_int i; for (i = 0; i < numvnodelocks; i++) mtx_unlock(&vnodelocks[i]); } static int cache_trylock_vnodes(struct mtx *vlp1, struct mtx *vlp2) { cache_sort(&vlp1, &vlp2); MPASS(vlp2 != NULL); if (vlp1 != NULL) { if (!mtx_trylock(vlp1)) return (EAGAIN); } if (!mtx_trylock(vlp2)) { if (vlp1 != NULL) mtx_unlock(vlp1); return (EAGAIN); } return (0); } static void cache_unlock_vnodes(struct mtx 
*vlp1, struct mtx *vlp2) { MPASS(vlp1 != NULL || vlp2 != NULL); if (vlp1 != NULL) mtx_unlock(vlp1); if (vlp2 != NULL) mtx_unlock(vlp2); } static int sysctl_nchstats(SYSCTL_HANDLER_ARGS) { struct nchstats snap; if (req->oldptr == NULL) return (SYSCTL_OUT(req, 0, sizeof(snap))); snap = nchstats; snap.ncs_goodhits = counter_u64_fetch(numposhits); snap.ncs_neghits = counter_u64_fetch(numneghits); snap.ncs_badhits = counter_u64_fetch(numposzaps) + counter_u64_fetch(numnegzaps); snap.ncs_miss = counter_u64_fetch(nummisszap) + counter_u64_fetch(nummiss); return (SYSCTL_OUT(req, &snap, sizeof(snap))); } SYSCTL_PROC(_vfs_cache, OID_AUTO, nchstats, CTLTYPE_OPAQUE | CTLFLAG_RD | CTLFLAG_MPSAFE, 0, 0, sysctl_nchstats, "LU", "VFS cache effectiveness statistics"); #ifdef DIAGNOSTIC /* * Grab an atomic snapshot of the name cache hash chain lengths */ static SYSCTL_NODE(_debug, OID_AUTO, hashstat, CTLFLAG_RW, NULL, "hash table stats"); static int sysctl_debug_hashstat_rawnchash(SYSCTL_HANDLER_ARGS) { struct nchashhead *ncpp; struct namecache *ncp; int i, error, n_nchash, *cntbuf; retry: n_nchash = nchash + 1; /* nchash is max index, not count */ if (req->oldptr == NULL) return SYSCTL_OUT(req, 0, n_nchash * sizeof(int)); cntbuf = malloc(n_nchash * sizeof(int), M_TEMP, M_ZERO | M_WAITOK); cache_lock_all_buckets(); if (n_nchash != nchash + 1) { cache_unlock_all_buckets(); free(cntbuf, M_TEMP); goto retry; } /* Scan hash tables counting entries */ for (ncpp = nchashtbl, i = 0; i < n_nchash; ncpp++, i++) LIST_FOREACH(ncp, ncpp, nc_hash) cntbuf[i]++; cache_unlock_all_buckets(); for (error = 0, i = 0; i < n_nchash; i++) if ((error = SYSCTL_OUT(req, &cntbuf[i], sizeof(int))) != 0) break; free(cntbuf, M_TEMP); return (error); } SYSCTL_PROC(_debug_hashstat, OID_AUTO, rawnchash, CTLTYPE_INT|CTLFLAG_RD| CTLFLAG_MPSAFE, 0, 0, sysctl_debug_hashstat_rawnchash, "S,int", "nchash chain lengths"); static int sysctl_debug_hashstat_nchash(SYSCTL_HANDLER_ARGS) { int error; struct nchashhead *ncpp; struct namecache *ncp; int n_nchash; int count, maxlength, used, pct; if (!req->oldptr) return SYSCTL_OUT(req, 0, 4 * sizeof(int)); cache_lock_all_buckets(); n_nchash = nchash + 1; /* nchash is max index, not count */ used = 0; maxlength = 0; /* Scan hash tables for applicable entries */ for (ncpp = nchashtbl; n_nchash > 0; n_nchash--, ncpp++) { count = 0; LIST_FOREACH(ncp, ncpp, nc_hash) { count++; } if (count) used++; if (maxlength < count) maxlength = count; } n_nchash = nchash + 1; cache_unlock_all_buckets(); pct = (used * 100) / (n_nchash / 100); error = SYSCTL_OUT(req, &n_nchash, sizeof(n_nchash)); if (error) return (error); error = SYSCTL_OUT(req, &used, sizeof(used)); if (error) return (error); error = SYSCTL_OUT(req, &maxlength, sizeof(maxlength)); if (error) return (error); error = SYSCTL_OUT(req, &pct, sizeof(pct)); if (error) return (error); return (0); } SYSCTL_PROC(_debug_hashstat, OID_AUTO, nchash, CTLTYPE_INT|CTLFLAG_RD| CTLFLAG_MPSAFE, 0, 0, sysctl_debug_hashstat_nchash, "I", "nchash statistics (number of total/used buckets, maximum chain length, usage percentage)"); #endif /* * Negative entries management * * A variation of LRU scheme is used. New entries are hashed into one of * numneglists cold lists. Entries get promoted to the hot list on first hit. * Partial LRU for the hot list is maintained by requeueing them every * ncneghitsrequeue hits. * * The shrinker will demote hot list head and evict from the cold list in a * round-robin manner. 
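 *
 * For example, with the default ncneghitsrequeue of 8, a hot entry is
 * moved to the tail of the hot list only on every 8th hit (when
 * hits % ncneghitsrequeue == 0), which keeps hot list maintenance cheap.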
*/ static void cache_negative_hit(struct namecache *ncp) { struct neglist *neglist; u_int hits; MPASS(ncp->nc_flag & NCF_NEGATIVE); hits = atomic_fetchadd_int(&ncp->nc_neghits, 1); if (ncp->nc_flag & NCF_HOTNEGATIVE) { if ((hits % ncneghitsrequeue) != 0) return; mtx_lock(&ncneg_hot.nl_lock); if (ncp->nc_flag & NCF_HOTNEGATIVE) { TAILQ_REMOVE(&ncneg_hot.nl_list, ncp, nc_dst); TAILQ_INSERT_TAIL(&ncneg_hot.nl_list, ncp, nc_dst); mtx_unlock(&ncneg_hot.nl_lock); return; } /* * The shrinker cleared the flag and removed the entry from * the hot list. Put it back. */ } else { mtx_lock(&ncneg_hot.nl_lock); } neglist = NCP2NEGLIST(ncp); mtx_lock(&neglist->nl_lock); if (!(ncp->nc_flag & NCF_HOTNEGATIVE)) { TAILQ_REMOVE(&neglist->nl_list, ncp, nc_dst); TAILQ_INSERT_TAIL(&ncneg_hot.nl_list, ncp, nc_dst); ncp->nc_flag |= NCF_HOTNEGATIVE; } mtx_unlock(&neglist->nl_lock); mtx_unlock(&ncneg_hot.nl_lock); } static void cache_negative_insert(struct namecache *ncp, bool neg_locked) { struct neglist *neglist; MPASS(ncp->nc_flag & NCF_NEGATIVE); cache_assert_bucket_locked(ncp, RA_WLOCKED); neglist = NCP2NEGLIST(ncp); if (!neg_locked) { mtx_lock(&neglist->nl_lock); } else { mtx_assert(&neglist->nl_lock, MA_OWNED); } TAILQ_INSERT_TAIL(&neglist->nl_list, ncp, nc_dst); if (!neg_locked) mtx_unlock(&neglist->nl_lock); atomic_add_rel_long(&numneg, 1); } static void cache_negative_remove(struct namecache *ncp, bool neg_locked) { struct neglist *neglist; bool hot_locked = false; bool list_locked = false; MPASS(ncp->nc_flag & NCF_NEGATIVE); cache_assert_bucket_locked(ncp, RA_WLOCKED); neglist = NCP2NEGLIST(ncp); if (!neg_locked) { if (ncp->nc_flag & NCF_HOTNEGATIVE) { hot_locked = true; mtx_lock(&ncneg_hot.nl_lock); if (!(ncp->nc_flag & NCF_HOTNEGATIVE)) { list_locked = true; mtx_lock(&neglist->nl_lock); } } else { list_locked = true; mtx_lock(&neglist->nl_lock); } } if (ncp->nc_flag & NCF_HOTNEGATIVE) { mtx_assert(&ncneg_hot.nl_lock, MA_OWNED); TAILQ_REMOVE(&ncneg_hot.nl_list, ncp, nc_dst); } else { mtx_assert(&neglist->nl_lock, MA_OWNED); TAILQ_REMOVE(&neglist->nl_list, ncp, nc_dst); } if (list_locked) mtx_unlock(&neglist->nl_lock); if (hot_locked) mtx_unlock(&ncneg_hot.nl_lock); atomic_subtract_rel_long(&numneg, 1); } static void cache_negative_shrink_select(int start, struct namecache **ncpp, struct neglist **neglistpp) { struct neglist *neglist; struct namecache *ncp; int i; *ncpp = ncp = NULL; for (i = start; i < numneglists; i++) { neglist = &neglists[i]; if (TAILQ_FIRST(&neglist->nl_list) == NULL) continue; mtx_lock(&neglist->nl_lock); ncp = TAILQ_FIRST(&neglist->nl_list); if (ncp != NULL) break; mtx_unlock(&neglist->nl_lock); } *neglistpp = neglist; *ncpp = ncp; } static void cache_negative_zap_one(void) { struct namecache *ncp, *ncp2; struct neglist *neglist; struct mtx *dvlp; struct rwlock *blp; if (!mtx_trylock(&ncneg_shrink_lock)) return; mtx_lock(&ncneg_hot.nl_lock); ncp = TAILQ_FIRST(&ncneg_hot.nl_list); if (ncp != NULL) { neglist = NCP2NEGLIST(ncp); mtx_lock(&neglist->nl_lock); TAILQ_REMOVE(&ncneg_hot.nl_list, ncp, nc_dst); TAILQ_INSERT_TAIL(&neglist->nl_list, ncp, nc_dst); ncp->nc_flag &= ~NCF_HOTNEGATIVE; mtx_unlock(&neglist->nl_lock); } cache_negative_shrink_select(shrink_list_turn, &ncp, &neglist); shrink_list_turn++; if (shrink_list_turn == numneglists) shrink_list_turn = 0; if (ncp == NULL && shrink_list_turn == 0) cache_negative_shrink_select(shrink_list_turn, &ncp, &neglist); if (ncp == NULL) { mtx_unlock(&ncneg_hot.nl_lock); goto out; } MPASS(ncp->nc_flag & NCF_NEGATIVE); dvlp = 
VP2VNODELOCK(ncp->nc_dvp); blp = NCP2BUCKETLOCK(ncp); mtx_unlock(&neglist->nl_lock); mtx_unlock(&ncneg_hot.nl_lock); mtx_lock(dvlp); rw_wlock(blp); mtx_lock(&neglist->nl_lock); ncp2 = TAILQ_FIRST(&neglist->nl_list); if (ncp != ncp2 || dvlp != VP2VNODELOCK(ncp2->nc_dvp) || blp != NCP2BUCKETLOCK(ncp2) || !(ncp2->nc_flag & NCF_NEGATIVE)) { ncp = NULL; goto out_unlock_all; } SDT_PROBE3(vfs, namecache, shrink_negative, done, ncp->nc_dvp, ncp->nc_name, ncp->nc_neghits); cache_zap_locked(ncp, true); out_unlock_all: mtx_unlock(&neglist->nl_lock); rw_wunlock(blp); mtx_unlock(dvlp); out: mtx_unlock(&ncneg_shrink_lock); cache_free(ncp); } /* * cache_zap_locked(): * * Removes a namecache entry from cache, whether it contains an actual * pointer to a vnode or if it is just a negative cache entry. */ static void cache_zap_locked(struct namecache *ncp, bool neg_locked) { if (!(ncp->nc_flag & NCF_NEGATIVE)) cache_assert_vnode_locked(ncp->nc_vp); cache_assert_vnode_locked(ncp->nc_dvp); cache_assert_bucket_locked(ncp, RA_WLOCKED); CTR2(KTR_VFS, "cache_zap(%p) vp %p", ncp, (ncp->nc_flag & NCF_NEGATIVE) ? NULL : ncp->nc_vp); if (!(ncp->nc_flag & NCF_NEGATIVE)) { SDT_PROBE3(vfs, namecache, zap, done, ncp->nc_dvp, ncp->nc_name, ncp->nc_vp); } else { SDT_PROBE3(vfs, namecache, zap_negative, done, ncp->nc_dvp, ncp->nc_name, ncp->nc_neghits); } LIST_REMOVE(ncp, nc_hash); if (!(ncp->nc_flag & NCF_NEGATIVE)) { TAILQ_REMOVE(&ncp->nc_vp->v_cache_dst, ncp, nc_dst); if (ncp == ncp->nc_vp->v_cache_dd) ncp->nc_vp->v_cache_dd = NULL; } else { cache_negative_remove(ncp, neg_locked); } if (ncp->nc_flag & NCF_ISDOTDOT) { if (ncp == ncp->nc_dvp->v_cache_dd) ncp->nc_dvp->v_cache_dd = NULL; } else { LIST_REMOVE(ncp, nc_src); if (LIST_EMPTY(&ncp->nc_dvp->v_cache_src)) { ncp->nc_flag |= NCF_DVDROP; atomic_subtract_rel_long(&numcachehv, 1); } } atomic_subtract_rel_long(&numcache, 1); } static void cache_zap_negative_locked_vnode_kl(struct namecache *ncp, struct vnode *vp) { struct rwlock *blp; MPASS(ncp->nc_dvp == vp); MPASS(ncp->nc_flag & NCF_NEGATIVE); cache_assert_vnode_locked(vp); blp = NCP2BUCKETLOCK(ncp); rw_wlock(blp); cache_zap_locked(ncp, false); rw_wunlock(blp); } static bool cache_zap_locked_vnode_kl2(struct namecache *ncp, struct vnode *vp, struct mtx **vlpp) { struct mtx *pvlp, *vlp1, *vlp2, *to_unlock; struct rwlock *blp; MPASS(vp == ncp->nc_dvp || vp == ncp->nc_vp); cache_assert_vnode_locked(vp); if (ncp->nc_flag & NCF_NEGATIVE) { if (*vlpp != NULL) { mtx_unlock(*vlpp); *vlpp = NULL; } cache_zap_negative_locked_vnode_kl(ncp, vp); return (true); } pvlp = VP2VNODELOCK(vp); blp = NCP2BUCKETLOCK(ncp); vlp1 = VP2VNODELOCK(ncp->nc_dvp); vlp2 = VP2VNODELOCK(ncp->nc_vp); if (*vlpp == vlp1 || *vlpp == vlp2) { to_unlock = *vlpp; *vlpp = NULL; } else { if (*vlpp != NULL) { mtx_unlock(*vlpp); *vlpp = NULL; } cache_sort(&vlp1, &vlp2); if (vlp1 == pvlp) { mtx_lock(vlp2); to_unlock = vlp2; } else { if (!mtx_trylock(vlp1)) goto out_relock; to_unlock = vlp1; } } rw_wlock(blp); cache_zap_locked(ncp, false); rw_wunlock(blp); if (to_unlock != NULL) mtx_unlock(to_unlock); return (true); out_relock: mtx_unlock(vlp2); mtx_lock(vlp1); mtx_lock(vlp2); MPASS(*vlpp == NULL); *vlpp = vlp1; return (false); } static int cache_zap_locked_vnode(struct namecache *ncp, struct vnode *vp) { struct mtx *pvlp, *vlp1, *vlp2, *to_unlock; struct rwlock *blp; int error = 0; MPASS(vp == ncp->nc_dvp || vp == ncp->nc_vp); cache_assert_vnode_locked(vp); pvlp = VP2VNODELOCK(vp); if (ncp->nc_flag & NCF_NEGATIVE) { cache_zap_negative_locked_vnode_kl(ncp, vp); goto 
out; } blp = NCP2BUCKETLOCK(ncp); vlp1 = VP2VNODELOCK(ncp->nc_dvp); vlp2 = VP2VNODELOCK(ncp->nc_vp); cache_sort(&vlp1, &vlp2); if (vlp1 == pvlp) { mtx_lock(vlp2); to_unlock = vlp2; } else { if (!mtx_trylock(vlp1)) { error = EAGAIN; goto out; } to_unlock = vlp1; } rw_wlock(blp); cache_zap_locked(ncp, false); rw_wunlock(blp); mtx_unlock(to_unlock); out: mtx_unlock(pvlp); return (error); } static int cache_zap_rlocked_bucket(struct namecache *ncp, struct rwlock *blp) { struct mtx *dvlp, *vlp; cache_assert_bucket_locked(ncp, RA_RLOCKED); dvlp = VP2VNODELOCK(ncp->nc_dvp); vlp = NULL; if (!(ncp->nc_flag & NCF_NEGATIVE)) vlp = VP2VNODELOCK(ncp->nc_vp); if (cache_trylock_vnodes(dvlp, vlp) == 0) { rw_runlock(blp); rw_wlock(blp); cache_zap_locked(ncp, false); rw_wunlock(blp); cache_unlock_vnodes(dvlp, vlp); return (0); } rw_runlock(blp); return (EAGAIN); } static int cache_zap_wlocked_bucket_kl(struct namecache *ncp, struct rwlock *blp, struct mtx **vlpp1, struct mtx **vlpp2) { struct mtx *dvlp, *vlp; cache_assert_bucket_locked(ncp, RA_WLOCKED); dvlp = VP2VNODELOCK(ncp->nc_dvp); vlp = NULL; if (!(ncp->nc_flag & NCF_NEGATIVE)) vlp = VP2VNODELOCK(ncp->nc_vp); cache_sort(&dvlp, &vlp); if (*vlpp1 == dvlp && *vlpp2 == vlp) { cache_zap_locked(ncp, false); cache_unlock_vnodes(dvlp, vlp); *vlpp1 = NULL; *vlpp2 = NULL; return (0); } if (*vlpp1 != NULL) mtx_unlock(*vlpp1); if (*vlpp2 != NULL) mtx_unlock(*vlpp2); *vlpp1 = NULL; *vlpp2 = NULL; if (cache_trylock_vnodes(dvlp, vlp) == 0) { cache_zap_locked(ncp, false); cache_unlock_vnodes(dvlp, vlp); return (0); } rw_wunlock(blp); *vlpp1 = dvlp; *vlpp2 = vlp; if (*vlpp1 != NULL) mtx_lock(*vlpp1); mtx_lock(*vlpp2); rw_wlock(blp); return (EAGAIN); } static void cache_lookup_unlock(struct rwlock *blp, struct mtx *vlp) { if (blp != NULL) { rw_runlock(blp); } else { mtx_unlock(vlp); } } static int __noinline cache_lookup_dot(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, struct timespec *tsp, int *ticksp) { int ltype; *vpp = dvp; CTR2(KTR_VFS, "cache_lookup(%p, %s) found via .", dvp, cnp->cn_nameptr); counter_u64_add(dothits, 1); SDT_PROBE3(vfs, namecache, lookup, hit, dvp, ".", *vpp); if (tsp != NULL) timespecclear(tsp); if (ticksp != NULL) *ticksp = ticks; vrefact(*vpp); /* * When we lookup "." we still can be asked to lock it * differently... */ ltype = cnp->cn_lkflags & LK_TYPE_MASK; if (ltype != VOP_ISLOCKED(*vpp)) { if (ltype == LK_EXCLUSIVE) { vn_lock(*vpp, LK_UPGRADE | LK_RETRY); if ((*vpp)->v_iflag & VI_DOOMED) { /* forced unmount */ vrele(*vpp); *vpp = NULL; return (ENOENT); } } else vn_lock(*vpp, LK_DOWNGRADE | LK_RETRY); } return (-1); } /* * Lookup an entry in the cache * * Lookup is called with dvp pointing to the directory to search, * cnp pointing to the name of the entry being sought. If the lookup * succeeds, the vnode is returned in *vpp, and a status of -1 is * returned. If the lookup determines that the name does not exist * (negative caching), a status of ENOENT is returned. If the lookup * fails, a status of zero is returned. If the directory vnode is * recycled out from under us due to a forced unmount, a status of * ENOENT is returned. * * vpp is locked and ref'd on return. If we're looking up DOTDOT, dvp is * unlocked. If we're looking up . an extra ref is taken, but the lock is * not recursively acquired. 
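 *
 * vfs_cache_lookup() later in this file follows that protocol; a
 * reduced sketch of the caller-side dispatch (error handling trimmed):
 *
 *	error = cache_lookup(dvp, vpp, cnp, NULL, NULL);
 *	if (error == 0)			// miss: ask the filesystem
 *		return (VOP_CACHEDLOOKUP(dvp, vpp, cnp));
 *	if (error == -1)		// positive hit: *vpp locked, ref'd
 *		return (0);
 *	return (error);			// ENOENT: cached negative entry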
*/ static __noinline int cache_lookup_nomakeentry(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, struct timespec *tsp, int *ticksp) { struct namecache *ncp; struct rwlock *blp; struct mtx *dvlp, *dvlp2; uint32_t hash; int error; if (cnp->cn_namelen == 2 && cnp->cn_nameptr[0] == '.' && cnp->cn_nameptr[1] == '.') { counter_u64_add(dotdothits, 1); dvlp = VP2VNODELOCK(dvp); dvlp2 = NULL; mtx_lock(dvlp); retry_dotdot: ncp = dvp->v_cache_dd; if (ncp == NULL) { SDT_PROBE3(vfs, namecache, lookup, miss, dvp, "..", NULL); mtx_unlock(dvlp); return (0); } if ((ncp->nc_flag & NCF_ISDOTDOT) != 0) { if (ncp->nc_dvp != dvp) panic("dvp %p v_cache_dd %p\n", dvp, ncp); if (!cache_zap_locked_vnode_kl2(ncp, dvp, &dvlp2)) goto retry_dotdot; MPASS(dvp->v_cache_dd == NULL); mtx_unlock(dvlp); if (dvlp2 != NULL) mtx_unlock(dvlp2); cache_free(ncp); } else { dvp->v_cache_dd = NULL; mtx_unlock(dvlp); if (dvlp2 != NULL) mtx_unlock(dvlp2); } return (0); } hash = cache_get_hash(cnp->cn_nameptr, cnp->cn_namelen, dvp); blp = HASH2BUCKETLOCK(hash); retry: rw_rlock(blp); LIST_FOREACH(ncp, (NCHHASH(hash)), nc_hash) { counter_u64_add(numchecks, 1); if (ncp->nc_dvp == dvp && ncp->nc_nlen == cnp->cn_namelen && !bcmp(ncp->nc_name, cnp->cn_nameptr, ncp->nc_nlen)) break; } /* We failed to find an entry */ if (ncp == NULL) { rw_runlock(blp); SDT_PROBE3(vfs, namecache, lookup, miss, dvp, cnp->cn_nameptr, NULL); counter_u64_add(nummisszap, 1); return (0); } counter_u64_add(numposzaps, 1); error = cache_zap_rlocked_bucket(ncp, blp); if (error != 0) { zap_and_exit_bucket_fail++; cache_maybe_yield(); goto retry; } cache_free(ncp); return (0); } int cache_lookup(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp, struct timespec *tsp, int *ticksp) { struct namecache_ts *ncp_ts; struct namecache *ncp; struct rwlock *blp; struct mtx *dvlp, *dvlp2; uint32_t hash; int error, ltype; if (__predict_false(!doingcache)) { cnp->cn_flags &= ~MAKEENTRY; return (0); } counter_u64_add(numcalls, 1); if (__predict_false(cnp->cn_namelen == 1 && cnp->cn_nameptr[0] == '.')) return (cache_lookup_dot(dvp, vpp, cnp, tsp, ticksp)); if ((cnp->cn_flags & MAKEENTRY) == 0) return (cache_lookup_nomakeentry(dvp, vpp, cnp, tsp, ticksp)); retry: blp = NULL; error = 0; if (cnp->cn_namelen == 2 && cnp->cn_nameptr[0] == '.' && cnp->cn_nameptr[1] == '.') { counter_u64_add(dotdothits, 1); dvlp = VP2VNODELOCK(dvp); dvlp2 = NULL; mtx_lock(dvlp); ncp = dvp->v_cache_dd; if (ncp == NULL) { SDT_PROBE3(vfs, namecache, lookup, miss, dvp, "..", NULL); mtx_unlock(dvlp); return (0); } if ((ncp->nc_flag & NCF_ISDOTDOT) != 0) { if (ncp->nc_flag & NCF_NEGATIVE) *vpp = NULL; else *vpp = ncp->nc_vp; } else *vpp = ncp->nc_dvp; /* Return failure if negative entry was found. 
*/ if (*vpp == NULL) goto negative_success; CTR3(KTR_VFS, "cache_lookup(%p, %s) found %p via ..", dvp, cnp->cn_nameptr, *vpp); SDT_PROBE3(vfs, namecache, lookup, hit, dvp, "..", *vpp); cache_out_ts(ncp, tsp, ticksp); if ((ncp->nc_flag & (NCF_ISDOTDOT | NCF_DTS)) == NCF_DTS && tsp != NULL) { ncp_ts = __containerof(ncp, struct namecache_ts, nc_nc); *tsp = ncp_ts->nc_dotdottime; } goto success; } hash = cache_get_hash(cnp->cn_nameptr, cnp->cn_namelen, dvp); blp = HASH2BUCKETLOCK(hash); rw_rlock(blp); LIST_FOREACH(ncp, (NCHHASH(hash)), nc_hash) { counter_u64_add(numchecks, 1); if (ncp->nc_dvp == dvp && ncp->nc_nlen == cnp->cn_namelen && !bcmp(ncp->nc_name, cnp->cn_nameptr, ncp->nc_nlen)) break; } /* We failed to find an entry */ if (ncp == NULL) { rw_runlock(blp); SDT_PROBE3(vfs, namecache, lookup, miss, dvp, cnp->cn_nameptr, NULL); counter_u64_add(nummiss, 1); return (0); } /* We found a "positive" match, return the vnode */ if (!(ncp->nc_flag & NCF_NEGATIVE)) { counter_u64_add(numposhits, 1); *vpp = ncp->nc_vp; CTR4(KTR_VFS, "cache_lookup(%p, %s) found %p via ncp %p", dvp, cnp->cn_nameptr, *vpp, ncp); SDT_PROBE3(vfs, namecache, lookup, hit, dvp, ncp->nc_name, *vpp); cache_out_ts(ncp, tsp, ticksp); goto success; } negative_success: /* We found a negative match, and want to create it, so purge */ if (cnp->cn_nameiop == CREATE) { counter_u64_add(numnegzaps, 1); goto zap_and_exit; } counter_u64_add(numneghits, 1); cache_negative_hit(ncp); if (ncp->nc_flag & NCF_WHITE) cnp->cn_flags |= ISWHITEOUT; SDT_PROBE2(vfs, namecache, lookup, hit__negative, dvp, ncp->nc_name); cache_out_ts(ncp, tsp, ticksp); cache_lookup_unlock(blp, dvlp); return (ENOENT); success: /* * On success we return a locked and ref'd vnode as per the lookup * protocol. */ MPASS(dvp != *vpp); ltype = 0; /* silence gcc warning */ if (cnp->cn_flags & ISDOTDOT) { ltype = VOP_ISLOCKED(dvp); VOP_UNLOCK(dvp, 0); } vhold(*vpp); cache_lookup_unlock(blp, dvlp); error = vget(*vpp, cnp->cn_lkflags | LK_VNHELD, cnp->cn_thread); if (cnp->cn_flags & ISDOTDOT) { vn_lock(dvp, ltype | LK_RETRY); if (dvp->v_iflag & VI_DOOMED) { if (error == 0) vput(*vpp); *vpp = NULL; return (ENOENT); } } if (error) { *vpp = NULL; goto retry; } if ((cnp->cn_flags & ISLASTCN) && (cnp->cn_lkflags & LK_TYPE_MASK) == LK_EXCLUSIVE) { ASSERT_VOP_ELOCKED(*vpp, "cache_lookup"); } return (-1); zap_and_exit: if (blp != NULL) error = cache_zap_rlocked_bucket(ncp, blp); else error = cache_zap_locked_vnode(ncp, dvp); if (error != 0) { zap_and_exit_bucket_fail++; cache_maybe_yield(); goto retry; } cache_free(ncp); return (0); } struct celockstate { struct mtx *vlp[3]; struct rwlock *blp[2]; }; CTASSERT((nitems(((struct celockstate *)0)->vlp) == 3)); CTASSERT((nitems(((struct celockstate *)0)->blp) == 2)); static inline void cache_celockstate_init(struct celockstate *cel) { bzero(cel, sizeof(*cel)); } static void cache_lock_vnodes_cel(struct celockstate *cel, struct vnode *vp, struct vnode *dvp) { struct mtx *vlp1, *vlp2; MPASS(cel->vlp[0] == NULL); MPASS(cel->vlp[1] == NULL); MPASS(cel->vlp[2] == NULL); MPASS(vp != NULL || dvp != NULL); vlp1 = VP2VNODELOCK(vp); vlp2 = VP2VNODELOCK(dvp); cache_sort(&vlp1, &vlp2); if (vlp1 != NULL) { mtx_lock(vlp1); cel->vlp[0] = vlp1; } mtx_lock(vlp2); cel->vlp[1] = vlp2; } static void cache_unlock_vnodes_cel(struct celockstate *cel) { MPASS(cel->vlp[0] != NULL || cel->vlp[1] != NULL); if (cel->vlp[0] != NULL) mtx_unlock(cel->vlp[0]); if (cel->vlp[1] != NULL) mtx_unlock(cel->vlp[1]); if (cel->vlp[2] != NULL) mtx_unlock(cel->vlp[2]); } static bool 
cache_lock_vnodes_cel_3(struct celockstate *cel, struct vnode *vp) { struct mtx *vlp; bool ret; cache_assert_vlp_locked(cel->vlp[0]); cache_assert_vlp_locked(cel->vlp[1]); MPASS(cel->vlp[2] == NULL); MPASS(vp != NULL); vlp = VP2VNODELOCK(vp); ret = true; if (vlp >= cel->vlp[1]) { mtx_lock(vlp); } else { if (mtx_trylock(vlp)) goto out; cache_lock_vnodes_cel_3_failures++; cache_unlock_vnodes_cel(cel); if (vlp < cel->vlp[0]) { mtx_lock(vlp); mtx_lock(cel->vlp[0]); mtx_lock(cel->vlp[1]); } else { if (cel->vlp[0] != NULL) mtx_lock(cel->vlp[0]); mtx_lock(vlp); mtx_lock(cel->vlp[1]); } ret = false; } out: cel->vlp[2] = vlp; return (ret); } static void cache_lock_buckets_cel(struct celockstate *cel, struct rwlock *blp1, struct rwlock *blp2) { MPASS(cel->blp[0] == NULL); MPASS(cel->blp[1] == NULL); cache_sort(&blp1, &blp2); if (blp1 != NULL) { rw_wlock(blp1); cel->blp[0] = blp1; } rw_wlock(blp2); cel->blp[1] = blp2; } static void cache_unlock_buckets_cel(struct celockstate *cel) { if (cel->blp[0] != NULL) rw_wunlock(cel->blp[0]); rw_wunlock(cel->blp[1]); } /* * Lock part of the cache affected by the insertion. * * This means vnodelocks for dvp, vp and the relevant bucketlock. * However, insertion can result in removal of an old entry. In this * case we have an additional vnode and bucketlock pair to lock. If the * entry is negative, ncelock is locked instead of the vnode. * * That is, in the worst case we have to lock 3 vnodes and 2 bucketlocks, while * preserving the locking order (smaller address first). */ static void cache_enter_lock(struct celockstate *cel, struct vnode *dvp, struct vnode *vp, uint32_t hash) { struct namecache *ncp; struct rwlock *blps[2]; blps[0] = HASH2BUCKETLOCK(hash); for (;;) { blps[1] = NULL; cache_lock_vnodes_cel(cel, dvp, vp); if (vp == NULL || vp->v_type != VDIR) break; ncp = vp->v_cache_dd; if (ncp == NULL) break; if ((ncp->nc_flag & NCF_ISDOTDOT) == 0) break; MPASS(ncp->nc_dvp == vp); blps[1] = NCP2BUCKETLOCK(ncp); if (ncp->nc_flag & NCF_NEGATIVE) break; if (cache_lock_vnodes_cel_3(cel, ncp->nc_vp)) break; /* * All vnodes got re-locked. Re-validate the state and if * nothing changed we are done. Otherwise restart. */ if (ncp == vp->v_cache_dd && (ncp->nc_flag & NCF_ISDOTDOT) != 0 && blps[1] == NCP2BUCKETLOCK(ncp) && VP2VNODELOCK(ncp->nc_vp) == cel->vlp[2]) break; cache_unlock_vnodes_cel(cel); cel->vlp[0] = NULL; cel->vlp[1] = NULL; cel->vlp[2] = NULL; } cache_lock_buckets_cel(cel, blps[0], blps[1]); } static void cache_enter_lock_dd(struct celockstate *cel, struct vnode *dvp, struct vnode *vp, uint32_t hash) { struct namecache *ncp; struct rwlock *blps[2]; blps[0] = HASH2BUCKETLOCK(hash); for (;;) { blps[1] = NULL; cache_lock_vnodes_cel(cel, dvp, vp); ncp = dvp->v_cache_dd; if (ncp == NULL) break; if ((ncp->nc_flag & NCF_ISDOTDOT) == 0) break; MPASS(ncp->nc_dvp == dvp); blps[1] = NCP2BUCKETLOCK(ncp); if (ncp->nc_flag & NCF_NEGATIVE) break; if (cache_lock_vnodes_cel_3(cel, ncp->nc_vp)) break; if (ncp == dvp->v_cache_dd && (ncp->nc_flag & NCF_ISDOTDOT) != 0 && blps[1] == NCP2BUCKETLOCK(ncp) && VP2VNODELOCK(ncp->nc_vp) == cel->vlp[2]) break; cache_unlock_vnodes_cel(cel); cel->vlp[0] = NULL; cel->vlp[1] = NULL; cel->vlp[2] = NULL; } cache_lock_buckets_cel(cel, blps[0], blps[1]); } static void cache_enter_unlock(struct celockstate *cel) { cache_unlock_buckets_cel(cel); cache_unlock_vnodes_cel(cel); } /* * Add an entry to the cache. 
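 *
 * The "smaller address first" rule above is the same ordering that
 * _cache_sort() and cache_trylock_vnodes() use. A minimal userspace
 * sketch of address-ordered acquisition, assuming pthread mutexes and
 * two distinct locks; lock_pair() is illustrative only:
 *
 *	#include <pthread.h>
 *
 *	static void
 *	lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
 *	{
 *		pthread_mutex_t *tmp;
 *
 *		if (a > b) {		// sort by address; NULL sorts first
 *			tmp = a; a = b; b = tmp;
 *		}
 *		if (a != NULL)		// a may be NULL, b never is
 *			pthread_mutex_lock(a);
 *		pthread_mutex_lock(b);
 *	}
 *
 * Two threads taking the same pair always lock in the same order, so
 * they cannot deadlock against each other; that is why the retry paths
 * above only need trylock as a fallback.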
*/ void cache_enter_time(struct vnode *dvp, struct vnode *vp, struct componentname *cnp, struct timespec *tsp, struct timespec *dtsp) { struct celockstate cel; struct namecache *ncp, *n2, *ndd; struct namecache_ts *ncp_ts, *n2_ts; struct nchashhead *ncpp; struct neglist *neglist; uint32_t hash; int flag; int len; bool neg_locked; CTR3(KTR_VFS, "cache_enter(%p, %p, %s)", dvp, vp, cnp->cn_nameptr); VNASSERT(vp == NULL || (vp->v_iflag & VI_DOOMED) == 0, vp, ("cache_enter: Adding a doomed vnode")); VNASSERT(dvp == NULL || (dvp->v_iflag & VI_DOOMED) == 0, dvp, ("cache_enter: Doomed vnode used as src")); if (__predict_false(!doingcache)) return; /* * Avoid blowout in namecache entries. */ if (__predict_false(numcache >= desiredvnodes * ncsizefactor)) return; cache_celockstate_init(&cel); ndd = NULL; flag = 0; if (cnp->cn_nameptr[0] == '.') { if (cnp->cn_namelen == 1) return; if (cnp->cn_namelen == 2 && cnp->cn_nameptr[1] == '.') { len = cnp->cn_namelen; hash = cache_get_hash(cnp->cn_nameptr, len, dvp); cache_enter_lock_dd(&cel, dvp, vp, hash); /* * If dotdot entry already exists, just retarget it * to new parent vnode, otherwise continue with new * namecache entry allocation. */ if ((ncp = dvp->v_cache_dd) != NULL && ncp->nc_flag & NCF_ISDOTDOT) { KASSERT(ncp->nc_dvp == dvp, ("wrong isdotdot parent")); neg_locked = false; if (ncp->nc_flag & NCF_NEGATIVE || vp == NULL) { neglist = NCP2NEGLIST(ncp); mtx_lock(&ncneg_hot.nl_lock); mtx_lock(&neglist->nl_lock); neg_locked = true; } if (!(ncp->nc_flag & NCF_NEGATIVE)) { TAILQ_REMOVE(&ncp->nc_vp->v_cache_dst, ncp, nc_dst); } else { cache_negative_remove(ncp, true); } if (vp != NULL) { TAILQ_INSERT_HEAD(&vp->v_cache_dst, ncp, nc_dst); ncp->nc_flag &= ~(NCF_NEGATIVE|NCF_HOTNEGATIVE); } else { ncp->nc_flag &= ~(NCF_HOTNEGATIVE); ncp->nc_flag |= NCF_NEGATIVE; cache_negative_insert(ncp, true); } if (neg_locked) { mtx_unlock(&neglist->nl_lock); mtx_unlock(&ncneg_hot.nl_lock); } ncp->nc_vp = vp; cache_enter_unlock(&cel); return; } dvp->v_cache_dd = NULL; cache_enter_unlock(&cel); cache_celockstate_init(&cel); SDT_PROBE3(vfs, namecache, enter, done, dvp, "..", vp); flag = NCF_ISDOTDOT; } } /* * Calculate the hash key and setup as much of the new * namecache entry as possible before acquiring the lock. */ ncp = cache_alloc(cnp->cn_namelen, tsp != NULL); ncp->nc_flag = flag; ncp->nc_vp = vp; if (vp == NULL) ncp->nc_flag |= NCF_NEGATIVE; ncp->nc_dvp = dvp; if (tsp != NULL) { ncp_ts = __containerof(ncp, struct namecache_ts, nc_nc); ncp_ts->nc_time = *tsp; ncp_ts->nc_ticks = ticks; ncp_ts->nc_nc.nc_flag |= NCF_TS; if (dtsp != NULL) { ncp_ts->nc_dotdottime = *dtsp; ncp_ts->nc_nc.nc_flag |= NCF_DTS; } } len = ncp->nc_nlen = cnp->cn_namelen; hash = cache_get_hash(cnp->cn_nameptr, len, dvp); strlcpy(ncp->nc_name, cnp->cn_nameptr, len + 1); cache_enter_lock(&cel, dvp, vp, hash); /* * See if this vnode or negative entry is already in the cache * with this name. This can happen with concurrent lookups of * the same path name. 
*/ ncpp = NCHHASH(hash); LIST_FOREACH(n2, ncpp, nc_hash) { if (n2->nc_dvp == dvp && n2->nc_nlen == cnp->cn_namelen && !bcmp(n2->nc_name, cnp->cn_nameptr, n2->nc_nlen)) { if (tsp != NULL) { KASSERT((n2->nc_flag & NCF_TS) != 0, ("no NCF_TS")); n2_ts = __containerof(n2, struct namecache_ts, nc_nc); n2_ts->nc_time = ncp_ts->nc_time; n2_ts->nc_ticks = ncp_ts->nc_ticks; if (dtsp != NULL) { n2_ts->nc_dotdottime = ncp_ts->nc_dotdottime; if (ncp->nc_flag & NCF_NEGATIVE) mtx_lock(&ncneg_hot.nl_lock); n2_ts->nc_nc.nc_flag |= NCF_DTS; if (ncp->nc_flag & NCF_NEGATIVE) mtx_unlock(&ncneg_hot.nl_lock); } } goto out_unlock_free; } } if (flag == NCF_ISDOTDOT) { /* * See if we are trying to add .. entry, but some other lookup * has populated v_cache_dd pointer already. */ if (dvp->v_cache_dd != NULL) goto out_unlock_free; KASSERT(vp == NULL || vp->v_type == VDIR, ("wrong vnode type %p", vp)); dvp->v_cache_dd = ncp; } atomic_add_rel_long(&numcache, 1); if (vp != NULL) { if (vp->v_type == VDIR) { if (flag != NCF_ISDOTDOT) { /* * For this case, the cache entry maps both the * directory name in it and the name ".." for the * directory's parent. */ if ((ndd = vp->v_cache_dd) != NULL) { if ((ndd->nc_flag & NCF_ISDOTDOT) != 0) cache_zap_locked(ndd, false); else ndd = NULL; } vp->v_cache_dd = ncp; } } else { vp->v_cache_dd = NULL; } } if (flag != NCF_ISDOTDOT) { if (LIST_EMPTY(&dvp->v_cache_src)) { vhold(dvp); atomic_add_rel_long(&numcachehv, 1); } LIST_INSERT_HEAD(&dvp->v_cache_src, ncp, nc_src); } /* * Insert the new namecache entry into the appropriate chain * within the cache entries table. */ LIST_INSERT_HEAD(ncpp, ncp, nc_hash); /* * If the entry is "negative", we place it into the * "negative" cache queue, otherwise, we place it into the * destination vnode's cache entries queue. 
*/ if (vp != NULL) { TAILQ_INSERT_HEAD(&vp->v_cache_dst, ncp, nc_dst); SDT_PROBE3(vfs, namecache, enter, done, dvp, ncp->nc_name, vp); } else { if (cnp->cn_flags & ISWHITEOUT) ncp->nc_flag |= NCF_WHITE; cache_negative_insert(ncp, false); SDT_PROBE2(vfs, namecache, enter_negative, done, dvp, ncp->nc_name); } cache_enter_unlock(&cel); if (numneg * ncnegfactor > numcache) cache_negative_zap_one(); cache_free(ndd); return; out_unlock_free: cache_enter_unlock(&cel); cache_free(ncp); return; } static u_int cache_roundup_2(u_int val) { u_int res; for (res = 1; res <= val; res <<= 1) continue; return (res); } /* * Name cache initialization, from vfs_init() when we are booting */ static void nchinit(void *dummy __unused) { u_int i; cache_zone_small = uma_zcreate("S VFS Cache", sizeof(struct namecache) + CACHE_PATH_CUTOFF + 1, NULL, NULL, NULL, NULL, UMA_ALIGNOF(struct namecache), UMA_ZONE_ZINIT); cache_zone_small_ts = uma_zcreate("STS VFS Cache", sizeof(struct namecache_ts) + CACHE_PATH_CUTOFF + 1, NULL, NULL, NULL, NULL, UMA_ALIGNOF(struct namecache_ts), UMA_ZONE_ZINIT); cache_zone_large = uma_zcreate("L VFS Cache", sizeof(struct namecache) + NAME_MAX + 1, NULL, NULL, NULL, NULL, UMA_ALIGNOF(struct namecache), UMA_ZONE_ZINIT); cache_zone_large_ts = uma_zcreate("LTS VFS Cache", sizeof(struct namecache_ts) + NAME_MAX + 1, NULL, NULL, NULL, NULL, UMA_ALIGNOF(struct namecache_ts), UMA_ZONE_ZINIT); nchashtbl = hashinit(desiredvnodes * 2, M_VFSCACHE, &nchash); ncbuckethash = cache_roundup_2(mp_ncpus * 64) - 1; if (ncbuckethash > nchash) ncbuckethash = nchash; bucketlocks = malloc(sizeof(*bucketlocks) * numbucketlocks, M_VFSCACHE, M_WAITOK | M_ZERO); for (i = 0; i < numbucketlocks; i++) rw_init_flags(&bucketlocks[i], "ncbuc", RW_DUPOK | RW_RECURSE); ncvnodehash = cache_roundup_2(mp_ncpus * 64) - 1; vnodelocks = malloc(sizeof(*vnodelocks) * numvnodelocks, M_VFSCACHE, M_WAITOK | M_ZERO); for (i = 0; i < numvnodelocks; i++) mtx_init(&vnodelocks[i], "ncvn", NULL, MTX_DUPOK | MTX_RECURSE); ncpurgeminvnodes = numbucketlocks; ncneghash = 3; neglists = malloc(sizeof(*neglists) * numneglists, M_VFSCACHE, M_WAITOK | M_ZERO); for (i = 0; i < numneglists; i++) { mtx_init(&neglists[i].nl_lock, "ncnegl", NULL, MTX_DEF); TAILQ_INIT(&neglists[i].nl_list); } mtx_init(&ncneg_hot.nl_lock, "ncneglh", NULL, MTX_DEF); TAILQ_INIT(&ncneg_hot.nl_list); mtx_init(&ncneg_shrink_lock, "ncnegs", NULL, MTX_DEF); numcalls = counter_u64_alloc(M_WAITOK); dothits = counter_u64_alloc(M_WAITOK); dotdothits = counter_u64_alloc(M_WAITOK); numchecks = counter_u64_alloc(M_WAITOK); nummiss = counter_u64_alloc(M_WAITOK); nummisszap = counter_u64_alloc(M_WAITOK); numposzaps = counter_u64_alloc(M_WAITOK); numposhits = counter_u64_alloc(M_WAITOK); numnegzaps = counter_u64_alloc(M_WAITOK); numneghits = counter_u64_alloc(M_WAITOK); numfullpathcalls = counter_u64_alloc(M_WAITOK); numfullpathfail1 = counter_u64_alloc(M_WAITOK); numfullpathfail2 = counter_u64_alloc(M_WAITOK); numfullpathfail4 = counter_u64_alloc(M_WAITOK); numfullpathfound = counter_u64_alloc(M_WAITOK); } SYSINIT(vfs, SI_SUB_VFS, SI_ORDER_SECOND, nchinit, NULL); void cache_changesize(int newmaxvnodes) { struct nchashhead *new_nchashtbl, *old_nchashtbl; u_long new_nchash, old_nchash; struct namecache *ncp; uint32_t hash; int i; newmaxvnodes = cache_roundup_2(newmaxvnodes * 2); if (newmaxvnodes < numbucketlocks) newmaxvnodes = numbucketlocks; new_nchashtbl = hashinit(newmaxvnodes, M_VFSCACHE, &new_nchash); /* If same hash table size, nothing to do */ if (nchash == new_nchash) { 
free(new_nchashtbl, M_VFSCACHE); return; } /* * Move everything from the old hash table to the new table. * None of the namecache entries in the table can be removed * because to do so, they have to be removed from the hash table. */ cache_lock_all_vnodes(); cache_lock_all_buckets(); old_nchashtbl = nchashtbl; old_nchash = nchash; nchashtbl = new_nchashtbl; nchash = new_nchash; for (i = 0; i <= old_nchash; i++) { while ((ncp = LIST_FIRST(&old_nchashtbl[i])) != NULL) { hash = cache_get_hash(ncp->nc_name, ncp->nc_nlen, ncp->nc_dvp); LIST_REMOVE(ncp, nc_hash); LIST_INSERT_HEAD(NCHHASH(hash), ncp, nc_hash); } } cache_unlock_all_buckets(); cache_unlock_all_vnodes(); free(old_nchashtbl, M_VFSCACHE); } /* * Invalidate all entries to a particular vnode. */ void cache_purge(struct vnode *vp) { TAILQ_HEAD(, namecache) ncps; struct namecache *ncp, *nnp; struct mtx *vlp, *vlp2; CTR1(KTR_VFS, "cache_purge(%p)", vp); SDT_PROBE1(vfs, namecache, purge, done, vp); if (LIST_EMPTY(&vp->v_cache_src) && TAILQ_EMPTY(&vp->v_cache_dst) && vp->v_cache_dd == NULL) return; TAILQ_INIT(&ncps); vlp = VP2VNODELOCK(vp); vlp2 = NULL; mtx_lock(vlp); retry: while (!LIST_EMPTY(&vp->v_cache_src)) { ncp = LIST_FIRST(&vp->v_cache_src); if (!cache_zap_locked_vnode_kl2(ncp, vp, &vlp2)) goto retry; TAILQ_INSERT_TAIL(&ncps, ncp, nc_dst); } while (!TAILQ_EMPTY(&vp->v_cache_dst)) { ncp = TAILQ_FIRST(&vp->v_cache_dst); if (!cache_zap_locked_vnode_kl2(ncp, vp, &vlp2)) goto retry; TAILQ_INSERT_TAIL(&ncps, ncp, nc_dst); } ncp = vp->v_cache_dd; if (ncp != NULL) { KASSERT(ncp->nc_flag & NCF_ISDOTDOT, ("lost dotdot link")); if (!cache_zap_locked_vnode_kl2(ncp, vp, &vlp2)) goto retry; TAILQ_INSERT_TAIL(&ncps, ncp, nc_dst); } KASSERT(vp->v_cache_dd == NULL, ("incomplete purge")); mtx_unlock(vlp); if (vlp2 != NULL) mtx_unlock(vlp2); TAILQ_FOREACH_SAFE(ncp, &ncps, nc_dst, nnp) { cache_free(ncp); } } /* * Invalidate all negative entries for a particular directory vnode. */ void cache_purge_negative(struct vnode *vp) { TAILQ_HEAD(, namecache) ncps; struct namecache *ncp, *nnp; struct mtx *vlp; CTR1(KTR_VFS, "cache_purge_negative(%p)", vp); SDT_PROBE1(vfs, namecache, purge_negative, done, vp); + if (LIST_EMPTY(&vp->v_cache_src)) + return; TAILQ_INIT(&ncps); vlp = VP2VNODELOCK(vp); mtx_lock(vlp); LIST_FOREACH_SAFE(ncp, &vp->v_cache_src, nc_src, nnp) { if (!(ncp->nc_flag & NCF_NEGATIVE)) continue; cache_zap_negative_locked_vnode_kl(ncp, vp); TAILQ_INSERT_TAIL(&ncps, ncp, nc_dst); } mtx_unlock(vlp); TAILQ_FOREACH_SAFE(ncp, &ncps, nc_dst, nnp) { cache_free(ncp); } } /* * Flush all entries referencing a particular filesystem. 
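 *
 * The rehash loop in cache_changesize() above moves each entry onto
 * the chain its hash selects in the new table. A reduced userspace
 * sketch of the same pattern, assuming mask-sized tables; rehash() and
 * the entry layout are illustrative:
 *
 *	#include <sys/queue.h>
 *
 *	struct ent { LIST_ENTRY(ent) hash; unsigned long key; };
 *	LIST_HEAD(bucket, ent);
 *
 *	static void
 *	rehash(struct bucket *oldtbl, unsigned long oldmask,
 *	    struct bucket *newtbl, unsigned long newmask)
 *	{
 *		struct ent *e;
 *		unsigned long i;
 *
 *		for (i = 0; i <= oldmask; i++)
 *			while ((e = LIST_FIRST(&oldtbl[i])) != NULL) {
 *				LIST_REMOVE(e, hash);
 *				LIST_INSERT_HEAD(&newtbl[e->key & newmask],
 *				    e, hash);
 *			}
 *	}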
*/ void cache_purgevfs(struct mount *mp, bool force) { TAILQ_HEAD(, namecache) ncps; struct mtx *vlp1, *vlp2; struct rwlock *blp; struct nchashhead *bucket; struct namecache *ncp, *nnp; u_long i, j, n_nchash; int error; /* Scan hash tables for applicable entries */ SDT_PROBE1(vfs, namecache, purgevfs, done, mp); if (!force && mp->mnt_nvnodelistsize <= ncpurgeminvnodes) return; TAILQ_INIT(&ncps); n_nchash = nchash + 1; vlp1 = vlp2 = NULL; for (i = 0; i < numbucketlocks; i++) { blp = (struct rwlock *)&bucketlocks[i]; rw_wlock(blp); for (j = i; j < n_nchash; j += numbucketlocks) { retry: bucket = &nchashtbl[j]; LIST_FOREACH_SAFE(ncp, bucket, nc_hash, nnp) { cache_assert_bucket_locked(ncp, RA_WLOCKED); if (ncp->nc_dvp->v_mount != mp) continue; error = cache_zap_wlocked_bucket_kl(ncp, blp, &vlp1, &vlp2); if (error != 0) goto retry; TAILQ_INSERT_HEAD(&ncps, ncp, nc_dst); } } rw_wunlock(blp); if (vlp1 == NULL && vlp2 == NULL) cache_maybe_yield(); } if (vlp1 != NULL) mtx_unlock(vlp1); if (vlp2 != NULL) mtx_unlock(vlp2); TAILQ_FOREACH_SAFE(ncp, &ncps, nc_dst, nnp) { cache_free(ncp); } } /* * Perform canonical checks and cache lookup and pass on to filesystem * through the vop_cachedlookup only if needed. */ int vfs_cache_lookup(struct vop_lookup_args *ap) { struct vnode *dvp; int error; struct vnode **vpp = ap->a_vpp; struct componentname *cnp = ap->a_cnp; struct ucred *cred = cnp->cn_cred; int flags = cnp->cn_flags; struct thread *td = cnp->cn_thread; *vpp = NULL; dvp = ap->a_dvp; if (dvp->v_type != VDIR) return (ENOTDIR); if ((flags & ISLASTCN) && (dvp->v_mount->mnt_flag & MNT_RDONLY) && (cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME)) return (EROFS); error = VOP_ACCESS(dvp, VEXEC, cred, td); if (error) return (error); error = cache_lookup(dvp, vpp, cnp, NULL, NULL); if (error == 0) return (VOP_CACHEDLOOKUP(dvp, vpp, cnp)); if (error == -1) return (0); return (error); } /* * XXX All of these sysctls would probably be more productive dead. */ static int __read_mostly disablecwd; SYSCTL_INT(_debug, OID_AUTO, disablecwd, CTLFLAG_RW, &disablecwd, 0, "Disable the getcwd syscall"); /* Implementation of the getcwd syscall. */ int sys___getcwd(struct thread *td, struct __getcwd_args *uap) { return (kern___getcwd(td, uap->buf, UIO_USERSPACE, uap->buflen, MAXPATHLEN)); } int kern___getcwd(struct thread *td, char *buf, enum uio_seg bufseg, size_t buflen, size_t path_max) { char *bp, *tmpbuf; struct filedesc *fdp; struct vnode *cdir, *rdir; int error; if (__predict_false(disablecwd)) return (ENODEV); if (__predict_false(buflen < 2)) return (EINVAL); if (buflen > path_max) buflen = path_max; tmpbuf = malloc(buflen, M_TEMP, M_WAITOK); fdp = td->td_proc->p_fd; FILEDESC_SLOCK(fdp); cdir = fdp->fd_cdir; vrefact(cdir); rdir = fdp->fd_rdir; vrefact(rdir); FILEDESC_SUNLOCK(fdp); error = vn_fullpath1(td, cdir, rdir, tmpbuf, &bp, buflen); vrele(rdir); vrele(cdir); if (!error) { if (bufseg == UIO_SYSSPACE) bcopy(bp, buf, strlen(bp) + 1); else error = copyout(bp, buf, strlen(bp) + 1); #ifdef KTRACE if (KTRPOINT(curthread, KTR_NAMEI)) ktrnamei(bp); #endif } free(tmpbuf, M_TEMP); return (error); } /* * Thus begins the fullpath magic. 
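 *
 * The functions below assemble the path backwards, from the end of the
 * buffer toward the front, one component at a time. A self-contained
 * sketch of that technique; prepend() is illustrative, not a kernel
 * helper:
 *
 *	#include <string.h>
 *
 *	// Copy "name" and a leading '/' in front of offset *off in buf;
 *	// returns 0 on success, -1 when the buffer is exhausted.
 *	static int
 *	prepend(char *buf, size_t *off, const char *name)
 *	{
 *		size_t len = strlen(name);
 *
 *		if (*off < len + 1)
 *			return (-1);	// the kernel returns ENOMEM here
 *		*off -= len;
 *		memcpy(buf + *off, name, len);
 *		buf[--*off] = '/';
 *		return (0);
 *	}
 *
 * Walking from the vnode up to the root and calling prepend() for each
 * component leaves the finished string at buf + off, which is what
 * vn_fullpath1() hands back via *retbuf.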
*/ static int __read_mostly disablefullpath; SYSCTL_INT(_debug, OID_AUTO, disablefullpath, CTLFLAG_RW, &disablefullpath, 0, "Disable the vn_fullpath function"); /* * Retrieve the full filesystem path that corresponds to a vnode from the name * cache (if available) */ int vn_fullpath(struct thread *td, struct vnode *vn, char **retbuf, char **freebuf) { char *buf; struct filedesc *fdp; struct vnode *rdir; int error; if (__predict_false(disablefullpath)) return (ENODEV); if (__predict_false(vn == NULL)) return (EINVAL); buf = malloc(MAXPATHLEN, M_TEMP, M_WAITOK); fdp = td->td_proc->p_fd; FILEDESC_SLOCK(fdp); rdir = fdp->fd_rdir; vrefact(rdir); FILEDESC_SUNLOCK(fdp); error = vn_fullpath1(td, vn, rdir, buf, retbuf, MAXPATHLEN); vrele(rdir); if (!error) *freebuf = buf; else free(buf, M_TEMP); return (error); } /* * This function is similar to vn_fullpath, but it attempts to look up the * pathname relative to the global root mount point. This is required for the * auditing sub-system, as audited pathnames must be absolute, relative to the * global root mount point. */ int vn_fullpath_global(struct thread *td, struct vnode *vn, char **retbuf, char **freebuf) { char *buf; int error; if (__predict_false(disablefullpath)) return (ENODEV); if (__predict_false(vn == NULL)) return (EINVAL); buf = malloc(MAXPATHLEN, M_TEMP, M_WAITOK); error = vn_fullpath1(td, vn, rootvnode, buf, retbuf, MAXPATHLEN); if (!error) *freebuf = buf; else free(buf, M_TEMP); return (error); } int vn_vptocnp(struct vnode **vp, struct ucred *cred, char *buf, u_int *buflen) { struct vnode *dvp; struct namecache *ncp; struct mtx *vlp; int error; vlp = VP2VNODELOCK(*vp); mtx_lock(vlp); TAILQ_FOREACH(ncp, &((*vp)->v_cache_dst), nc_dst) { if ((ncp->nc_flag & NCF_ISDOTDOT) == 0) break; } if (ncp != NULL) { if (*buflen < ncp->nc_nlen) { mtx_unlock(vlp); vrele(*vp); counter_u64_add(numfullpathfail4, 1); error = ENOMEM; SDT_PROBE3(vfs, namecache, fullpath, return, error, vp, NULL); return (error); } *buflen -= ncp->nc_nlen; memcpy(buf + *buflen, ncp->nc_name, ncp->nc_nlen); SDT_PROBE3(vfs, namecache, fullpath, hit, ncp->nc_dvp, ncp->nc_name, vp); dvp = *vp; *vp = ncp->nc_dvp; vref(*vp); mtx_unlock(vlp); vrele(dvp); return (0); } SDT_PROBE1(vfs, namecache, fullpath, miss, vp); mtx_unlock(vlp); vn_lock(*vp, LK_SHARED | LK_RETRY); error = VOP_VPTOCNP(*vp, &dvp, cred, buf, buflen); vput(*vp); if (error) { counter_u64_add(numfullpathfail2, 1); SDT_PROBE3(vfs, namecache, fullpath, return, error, vp, NULL); return (error); } *vp = dvp; if (dvp->v_iflag & VI_DOOMED) { /* forced unmount */ vrele(dvp); error = ENOENT; SDT_PROBE3(vfs, namecache, fullpath, return, error, vp, NULL); return (error); } /* * *vp has its use count incremented still. */ return (0); } /* * The magic behind kern___getcwd() and vn_fullpath(). */ static int vn_fullpath1(struct thread *td, struct vnode *vp, struct vnode *rdir, char *buf, char **retbuf, u_int buflen) { int error, slash_prefixed; #ifdef KDTRACE_HOOKS struct vnode *startvp = vp; #endif struct vnode *vp1; buflen--; buf[buflen] = '\0'; error = 0; slash_prefixed = 0; SDT_PROBE1(vfs, namecache, fullpath, entry, vp); counter_u64_add(numfullpathcalls, 1); vref(vp); if (vp->v_type != VDIR) { error = vn_vptocnp(&vp, td->td_ucred, buf, &buflen); if (error) return (error); if (buflen == 0) { vrele(vp); return (ENOMEM); } buf[--buflen] = '/'; slash_prefixed = 1; } while (vp != rdir && vp != rootvnode) { /* * The vp vnode must be already fully constructed, * since it is either found in namecache or obtained * from VOP_VPTOCNP().
We may test for VV_ROOT safely * without obtaining the vnode lock. */ if ((vp->v_vflag & VV_ROOT) != 0) { vn_lock(vp, LK_RETRY | LK_SHARED); /* * With the vnode locked, check for races with * unmount, forced or not. Note that we * already verified that vp is not equal to * the root vnode, which means that * mnt_vnodecovered can be NULL only for the * case of unmount. */ if ((vp->v_iflag & VI_DOOMED) != 0 || (vp1 = vp->v_mount->mnt_vnodecovered) == NULL || vp1->v_mountedhere != vp->v_mount) { vput(vp); error = ENOENT; SDT_PROBE3(vfs, namecache, fullpath, return, error, vp, NULL); break; } vref(vp1); vput(vp); vp = vp1; continue; } if (vp->v_type != VDIR) { vrele(vp); counter_u64_add(numfullpathfail1, 1); error = ENOTDIR; SDT_PROBE3(vfs, namecache, fullpath, return, error, vp, NULL); break; } error = vn_vptocnp(&vp, td->td_ucred, buf, &buflen); if (error) break; if (buflen == 0) { vrele(vp); error = ENOMEM; SDT_PROBE3(vfs, namecache, fullpath, return, error, startvp, NULL); break; } buf[--buflen] = '/'; slash_prefixed = 1; } if (error) return (error); if (!slash_prefixed) { if (buflen == 0) { vrele(vp); counter_u64_add(numfullpathfail4, 1); SDT_PROBE3(vfs, namecache, fullpath, return, ENOMEM, startvp, NULL); return (ENOMEM); } buf[--buflen] = '/'; } counter_u64_add(numfullpathfound, 1); vrele(vp); SDT_PROBE3(vfs, namecache, fullpath, return, 0, startvp, buf + buflen); *retbuf = buf + buflen; return (0); } struct vnode * vn_dir_dd_ino(struct vnode *vp) { struct namecache *ncp; struct vnode *ddvp; struct mtx *vlp; ASSERT_VOP_LOCKED(vp, "vn_dir_dd_ino"); vlp = VP2VNODELOCK(vp); mtx_lock(vlp); TAILQ_FOREACH(ncp, &(vp->v_cache_dst), nc_dst) { if ((ncp->nc_flag & NCF_ISDOTDOT) != 0) continue; ddvp = ncp->nc_dvp; vhold(ddvp); mtx_unlock(vlp); if (vget(ddvp, LK_SHARED | LK_NOWAIT | LK_VNHELD, curthread)) return (NULL); return (ddvp); } mtx_unlock(vlp); return (NULL); } int vn_commname(struct vnode *vp, char *buf, u_int buflen) { struct namecache *ncp; struct mtx *vlp; int l; vlp = VP2VNODELOCK(vp); mtx_lock(vlp); TAILQ_FOREACH(ncp, &vp->v_cache_dst, nc_dst) if ((ncp->nc_flag & NCF_ISDOTDOT) == 0) break; if (ncp == NULL) { mtx_unlock(vlp); return (ENOENT); } l = min(ncp->nc_nlen, buflen - 1); memcpy(buf, ncp->nc_name, l); mtx_unlock(vlp); buf[l] = '\0'; return (0); } /* ABI compat shims for old kernel modules. */ #undef cache_enter void cache_enter(struct vnode *dvp, struct vnode *vp, struct componentname *cnp); void cache_enter(struct vnode *dvp, struct vnode *vp, struct componentname *cnp) { cache_enter_time(dvp, vp, cnp, NULL, NULL); } /* * This function updates the path string to the vnode's full global path * and checks the size of the new path string against the pathlen argument. * * Requires a locked, referenced vnode. * The vnode is re-locked on success or ENODEV, and unlocked otherwise. * * If the sysctl debug.disablefullpath is set, ENODEV is returned, the * vnode is left locked and the path remains untouched. * * If vp is a directory, the call to vn_fullpath_global() always succeeds * because it falls back to the ".." lookup if the namecache lookup fails. */ int vn_path_to_global_path(struct thread *td, struct vnode *vp, char *path, u_int pathlen) { struct nameidata nd; struct vnode *vp1; char *rpath, *fbuf; int error; ASSERT_VOP_ELOCKED(vp, __func__); /* Return ENODEV if sysctl debug.disablefullpath==1 */ if (__predict_false(disablefullpath)) return (ENODEV); /* Construct global filesystem path from vp.
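 *
 * A loose userspace analogue of the rename check performed below:
 * resolve the path again and compare file identity. same_file() and
 * its use of inode numbers are illustrative assumptions, not what the
 * kernel code does internally:
 *
 *	#include <sys/stat.h>
 *
 *	static int
 *	same_file(int fd, const char *path)
 *	{
 *		struct stat a, b;
 *
 *		if (fstat(fd, &a) != 0 || stat(path, &b) != 0)
 *			return (0);
 *		return (a.st_dev == b.st_dev && a.st_ino == b.st_ino);
 *	}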
*/ VOP_UNLOCK(vp, 0); error = vn_fullpath_global(td, vp, &rpath, &fbuf); if (error != 0) { vrele(vp); return (error); } if (strlen(rpath) >= pathlen) { vrele(vp); error = ENAMETOOLONG; goto out; } /* * Re-lookup the vnode by path to detect a possible rename. * As a side effect, the vnode is relocked. * If vnode was renamed, return ENOENT. */ NDINIT(&nd, LOOKUP, FOLLOW | LOCKLEAF | AUDITVNODE1, UIO_SYSSPACE, path, td); error = namei(&nd); if (error != 0) { vrele(vp); goto out; } NDFREE(&nd, NDF_ONLY_PNBUF); vp1 = nd.ni_vp; vrele(vp); if (vp1 == vp) strcpy(path, rpath); else { vput(vp1); error = ENOENT; } out: free(fbuf, M_TEMP); return (error); } Index: projects/runtime-coverage/sys/net/if.c =================================================================== --- projects/runtime-coverage/sys/net/if.c (revision 325444) +++ projects/runtime-coverage/sys/net/if.c (revision 325445) @@ -1,4235 +1,4235 @@ /*- * Copyright (c) 1980, 1986, 1993 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * @(#)if.c 8.5 (Berkeley) 1/9/95 * $FreeBSD$ */ #include "opt_compat.h" #include "opt_inet6.h" #include "opt_inet.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #if defined(INET) || defined(INET6) #include #include #include #include #include #ifdef INET #include #endif /* INET */ #ifdef INET6 #include #include #endif /* INET6 */ #endif /* INET || INET6 */ #include #ifdef COMPAT_FREEBSD32 #include #include #endif SYSCTL_NODE(_net, PF_LINK, link, CTLFLAG_RW, 0, "Link layers"); SYSCTL_NODE(_net_link, 0, generic, CTLFLAG_RW, 0, "Generic link-management"); SYSCTL_INT(_net_link, OID_AUTO, ifqmaxlen, CTLFLAG_RDTUN, &ifqmaxlen, 0, "max send queue size"); /* Log link state change events */ static int log_link_state_change = 1; SYSCTL_INT(_net_link, OID_AUTO, log_link_state_change, CTLFLAG_RW, &log_link_state_change, 0, "log interface link state change events"); /* Log promiscuous mode change events */ static int log_promisc_mode_change = 1; SYSCTL_INT(_net_link, OID_AUTO, log_promisc_mode_change, CTLFLAG_RDTUN, &log_promisc_mode_change, 1, "log promiscuous mode change events"); /* Interface description */ static unsigned int ifdescr_maxlen = 1024; SYSCTL_UINT(_net, OID_AUTO, ifdescr_maxlen, CTLFLAG_RW, &ifdescr_maxlen, 0, "administrative maximum length for interface description"); static MALLOC_DEFINE(M_IFDESCR, "ifdescr", "ifnet descriptions"); /* global sx for non-critical path ifdescr */ static struct sx ifdescr_sx; SX_SYSINIT(ifdescr_sx, &ifdescr_sx, "ifnet descr"); void (*bridge_linkstate_p)(struct ifnet *ifp); void (*ng_ether_link_state_p)(struct ifnet *ifp, int state); void (*lagg_linkstate_p)(struct ifnet *ifp, int state); /* These are external hooks for CARP. */ void (*carp_linkstate_p)(struct ifnet *ifp); void (*carp_demote_adj_p)(int, char *); int (*carp_master_p)(struct ifaddr *); #if defined(INET) || defined(INET6) int (*carp_forus_p)(struct ifnet *ifp, u_char *dhost); int (*carp_output_p)(struct ifnet *ifp, struct mbuf *m, const struct sockaddr *sa); int (*carp_ioctl_p)(struct ifreq *, u_long, struct thread *); int (*carp_attach_p)(struct ifaddr *, int); void (*carp_detach_p)(struct ifaddr *, bool); #endif #ifdef INET int (*carp_iamatch_p)(struct ifaddr *, uint8_t **); #endif #ifdef INET6 struct ifaddr *(*carp_iamatch6_p)(struct ifnet *ifp, struct in6_addr *taddr6); caddr_t (*carp_macmatch6_p)(struct ifnet *ifp, struct mbuf *m, const struct in6_addr *taddr); #endif struct mbuf *(*tbr_dequeue_ptr)(struct ifaltq *, int) = NULL; /* * XXX: Style; these should be sorted alphabetically, and unprototyped * static functions should be prototyped. Currently they are sorted by * declaration order. 
static void if_attachdomain(void *); static void if_attachdomain1(struct ifnet *); static int ifconf(u_long, caddr_t); static void if_freemulti(struct ifmultiaddr *); static void if_grow(void); static void if_input_default(struct ifnet *, struct mbuf *); static int if_requestencap_default(struct ifnet *, struct if_encap_req *); static void if_route(struct ifnet *, int flag, int fam); static int if_setflag(struct ifnet *, int, int, int *, int); static int if_transmit(struct ifnet *ifp, struct mbuf *m); static void if_unroute(struct ifnet *, int flag, int fam); static void link_rtrequest(int, struct rtentry *, struct rt_addrinfo *); static int ifhwioctl(u_long, struct ifnet *, caddr_t, struct thread *); static int if_delmulti_locked(struct ifnet *, struct ifmultiaddr *, int); static void do_link_state_change(void *, int); static int if_getgroup(struct ifgroupreq *, struct ifnet *); static int if_getgroupmembers(struct ifgroupreq *); static void if_delgroups(struct ifnet *); static void if_attach_internal(struct ifnet *, int, struct if_clone *); static int if_detach_internal(struct ifnet *, int, struct if_clone **); #ifdef VIMAGE static void if_vmove(struct ifnet *, struct vnet *); #endif #ifdef INET6 /* * XXX: declared here to avoid including many inet6 related files.. * should be more generalized? */ extern void nd6_setmtu(struct ifnet *); #endif /* ipsec helper hooks */ VNET_DEFINE(struct hhook_head *, ipsec_hhh_in[HHOOK_IPSEC_COUNT]); VNET_DEFINE(struct hhook_head *, ipsec_hhh_out[HHOOK_IPSEC_COUNT]); VNET_DEFINE(int, if_index); int ifqmaxlen = IFQ_MAXLEN; VNET_DEFINE(struct ifnethead, ifnet); /* depend on static init XXX */ VNET_DEFINE(struct ifgrouphead, ifg_head); static VNET_DEFINE(int, if_indexlim) = 8; /* Table of ifnet by index. */ VNET_DEFINE(struct ifnet **, ifindex_table); #define V_if_indexlim VNET(if_indexlim) #define V_ifindex_table VNET(ifindex_table) /* * The global network interface list (V_ifnet) and related state (such as * if_index, if_indexlim, and ifindex_table) are protected by an sxlock and * an rwlock. Either may be acquired shared to stabilize the list, but both * must be acquired writable to modify the list. This model allows us both * to stabilize the interface list during interrupt thread processing and * to stabilize it over long-running ioctls, without introducing priority * inversions and deadlocks. */ struct rwlock ifnet_rwlock; RW_SYSINIT_FLAGS(ifnet_rw, &ifnet_rwlock, "ifnet_rw", RW_RECURSE); struct sx ifnet_sxlock; SX_SYSINIT_FLAGS(ifnet_sx, &ifnet_sxlock, "ifnet_sx", SX_RECURSE); /* * The allocation of network interfaces is a rather non-atomic affair; we * need to select an index before we are ready to expose the interface for * use, so we will use this pointer value to indicate reservation.
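 *
 * A hedged userspace sketch of that reserve-then-publish idea, with a
 * sentinel standing in for IFNET_HOLD; reserve_slot()/publish_slot()
 * are illustrative, not kernel interfaces:
 *
 *	#include <stdint.h>
 *	#include <stddef.h>
 *
 *	#define SLOT_HOLD ((void *)(uintptr_t)-1)
 *
 *	// Caller holds the table lock; slot 0 is never handed out.
 *	static size_t
 *	reserve_slot(void **table, size_t lim)
 *	{
 *		size_t i;
 *
 *		for (i = 1; i < lim; i++)
 *			if (table[i] == NULL) {
 *				table[i] = SLOT_HOLD;	// reserved, unusable
 *				return (i);
 *			}
 *		return (0);	// full: grow the table and retry
 *	}
 *
 *	static void
 *	publish_slot(void **table, size_t idx, void *obj)
 *	{
 *		table[idx] = obj;	// now visible to lookups
 *	}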
*/ #define IFNET_HOLD (void *)(uintptr_t)(-1) static if_com_alloc_t *if_com_alloc[256]; static if_com_free_t *if_com_free[256]; static MALLOC_DEFINE(M_IFNET, "ifnet", "interface internals"); MALLOC_DEFINE(M_IFADDR, "ifaddr", "interface address"); MALLOC_DEFINE(M_IFMADDR, "ether_multi", "link-level multicast address"); struct ifnet * ifnet_byindex_locked(u_short idx) { if (idx > V_if_index) return (NULL); if (V_ifindex_table[idx] == IFNET_HOLD) return (NULL); return (V_ifindex_table[idx]); } struct ifnet * ifnet_byindex(u_short idx) { struct ifnet *ifp; IFNET_RLOCK_NOSLEEP(); ifp = ifnet_byindex_locked(idx); IFNET_RUNLOCK_NOSLEEP(); return (ifp); } struct ifnet * ifnet_byindex_ref(u_short idx) { struct ifnet *ifp; IFNET_RLOCK_NOSLEEP(); ifp = ifnet_byindex_locked(idx); if (ifp == NULL || (ifp->if_flags & IFF_DYING)) { IFNET_RUNLOCK_NOSLEEP(); return (NULL); } if_ref(ifp); IFNET_RUNLOCK_NOSLEEP(); return (ifp); } /* * Allocate an ifindex array entry; return 0 on success or an error on * failure. */ static u_short ifindex_alloc(void) { u_short idx; IFNET_WLOCK_ASSERT(); retry: /* * Try to find an empty slot below V_if_index. If we fail, take the * next slot. */ for (idx = 1; idx <= V_if_index; idx++) { if (V_ifindex_table[idx] == NULL) break; } /* Catch if_index overflow. */ if (idx >= V_if_indexlim) { if_grow(); goto retry; } if (idx > V_if_index) V_if_index = idx; return (idx); } static void ifindex_free_locked(u_short idx) { IFNET_WLOCK_ASSERT(); V_ifindex_table[idx] = NULL; while (V_if_index > 0 && V_ifindex_table[V_if_index] == NULL) V_if_index--; } static void ifindex_free(u_short idx) { IFNET_WLOCK(); ifindex_free_locked(idx); IFNET_WUNLOCK(); } static void ifnet_setbyindex_locked(u_short idx, struct ifnet *ifp) { IFNET_WLOCK_ASSERT(); V_ifindex_table[idx] = ifp; } static void ifnet_setbyindex(u_short idx, struct ifnet *ifp) { IFNET_WLOCK(); ifnet_setbyindex_locked(idx, ifp); IFNET_WUNLOCK(); } struct ifaddr * ifaddr_byindex(u_short idx) { struct ifnet *ifp; struct ifaddr *ifa = NULL; IFNET_RLOCK_NOSLEEP(); ifp = ifnet_byindex_locked(idx); if (ifp != NULL && (ifa = ifp->if_addr) != NULL) ifa_ref(ifa); IFNET_RUNLOCK_NOSLEEP(); return (ifa); } /* * Network interface utility routines. * * Routines with ifa_ifwith* names take sockaddr *'s as * parameters. */ static void vnet_if_init(const void *unused __unused) { TAILQ_INIT(&V_ifnet); TAILQ_INIT(&V_ifg_head); IFNET_WLOCK(); if_grow(); /* create initial table */ IFNET_WUNLOCK(); vnet_if_clone_init(); } VNET_SYSINIT(vnet_if_init, SI_SUB_INIT_IF, SI_ORDER_SECOND, vnet_if_init, NULL); #ifdef VIMAGE static void vnet_if_uninit(const void *unused __unused) { VNET_ASSERT(TAILQ_EMPTY(&V_ifnet), ("%s:%d tailq &V_ifnet=%p " "not empty", __func__, __LINE__, &V_ifnet)); VNET_ASSERT(TAILQ_EMPTY(&V_ifg_head), ("%s:%d tailq &V_ifg_head=%p " "not empty", __func__, __LINE__, &V_ifg_head)); free((caddr_t)V_ifindex_table, M_IFNET); } VNET_SYSUNINIT(vnet_if_uninit, SI_SUB_INIT_IF, SI_ORDER_FIRST, vnet_if_uninit, NULL); static void vnet_if_return(const void *unused __unused) { struct ifnet *ifp, *nifp; /* Return all inherited interfaces to their parent vnets. 
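 *
 * if_grow() below doubles the index table with the write lock dropped
 * around the allocation and re-checks the limit afterwards, bailing
 * out if it lost the race. A compressed userspace sketch of that
 * pattern; grow_table() and its globals are illustrative, and error
 * handling is omitted:
 *
 *	#include <pthread.h>
 *	#include <stdlib.h>
 *	#include <string.h>
 *
 *	static void **table;
 *	static size_t lim = 8;
 *	static pthread_mutex_t tlock = PTHREAD_MUTEX_INITIALIZER;
 *
 *	// Called with tlock held; drops and retakes it.
 *	static void
 *	grow_table(void)
 *	{
 *		size_t oldlim = lim;
 *		void **e;
 *
 *		pthread_mutex_unlock(&tlock);
 *		e = calloc(oldlim * 2, sizeof(*e));
 *		pthread_mutex_lock(&tlock);
 *		if (lim != oldlim) {	// someone else grew it first
 *			free(e);
 *			return;
 *		}
 *		memcpy(e, table, oldlim * sizeof(*e));
 *		free(table);
 *		table = e;
 *		lim = oldlim * 2;
 *	}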
*/ TAILQ_FOREACH_SAFE(ifp, &V_ifnet, if_link, nifp) { if (ifp->if_home_vnet != ifp->if_vnet) if_vmove(ifp, ifp->if_home_vnet); } } VNET_SYSUNINIT(vnet_if_return, SI_SUB_VNET_DONE, SI_ORDER_ANY, vnet_if_return, NULL); #endif static void if_grow(void) { int oldlim; u_int n; struct ifnet **e; IFNET_WLOCK_ASSERT(); oldlim = V_if_indexlim; IFNET_WUNLOCK(); n = (oldlim << 1) * sizeof(*e); e = malloc(n, M_IFNET, M_WAITOK | M_ZERO); IFNET_WLOCK(); if (V_if_indexlim != oldlim) { free(e, M_IFNET); return; } if (V_ifindex_table != NULL) { memcpy((caddr_t)e, (caddr_t)V_ifindex_table, n/2); free((caddr_t)V_ifindex_table, M_IFNET); } V_if_indexlim <<= 1; V_ifindex_table = e; } /* * Allocate a struct ifnet and an index for an interface. A layer 2 * common structure will also be allocated if an allocation routine is * registered for the passed type. */ struct ifnet * if_alloc(u_char type) { struct ifnet *ifp; u_short idx; ifp = malloc(sizeof(struct ifnet), M_IFNET, M_WAITOK|M_ZERO); IFNET_WLOCK(); idx = ifindex_alloc(); ifnet_setbyindex_locked(idx, IFNET_HOLD); IFNET_WUNLOCK(); ifp->if_index = idx; ifp->if_type = type; ifp->if_alloctype = type; #ifdef VIMAGE ifp->if_vnet = curvnet; #endif if (if_com_alloc[type] != NULL) { ifp->if_l2com = if_com_alloc[type](type, ifp); if (ifp->if_l2com == NULL) { free(ifp, M_IFNET); ifindex_free(idx); return (NULL); } } IF_ADDR_LOCK_INIT(ifp); TASK_INIT(&ifp->if_linktask, 0, do_link_state_change, ifp); ifp->if_afdata_initialized = 0; IF_AFDATA_LOCK_INIT(ifp); TAILQ_INIT(&ifp->if_addrhead); TAILQ_INIT(&ifp->if_multiaddrs); TAILQ_INIT(&ifp->if_groups); #ifdef MAC mac_ifnet_init(ifp); #endif ifq_init(&ifp->if_snd, ifp); refcount_init(&ifp->if_refcount, 1); /* Index reference. */ for (int i = 0; i < IFCOUNTERS; i++) ifp->if_counters[i] = counter_u64_alloc(M_WAITOK); ifp->if_get_counter = if_get_counter_default; ifnet_setbyindex(ifp->if_index, ifp); return (ifp); } /* * Do the actual work of freeing a struct ifnet, and layer 2 common * structure. This call is made when the last reference to an * interface is released. */ static void if_free_internal(struct ifnet *ifp) { KASSERT((ifp->if_flags & IFF_DYING), ("if_free_internal: interface not dying")); if (if_com_free[ifp->if_alloctype] != NULL) if_com_free[ifp->if_alloctype](ifp->if_l2com, ifp->if_alloctype); #ifdef MAC mac_ifnet_destroy(ifp); #endif /* MAC */ if (ifp->if_description != NULL) free(ifp->if_description, M_IFDESCR); IF_AFDATA_DESTROY(ifp); IF_ADDR_LOCK_DESTROY(ifp); ifq_delete(&ifp->if_snd); for (int i = 0; i < IFCOUNTERS; i++) counter_u64_free(ifp->if_counters[i]); free(ifp, M_IFNET); } /* * Deregister an interface and free the associated storage. */ void if_free(struct ifnet *ifp) { ifp->if_flags |= IFF_DYING; /* XXX: Locking */ CURVNET_SET_QUIET(ifp->if_vnet); IFNET_WLOCK(); KASSERT(ifp == ifnet_byindex_locked(ifp->if_index), ("%s: freeing unallocated ifnet", ifp->if_xname)); ifindex_free_locked(ifp->if_index); IFNET_WUNLOCK(); if (refcount_release(&ifp->if_refcount)) if_free_internal(ifp); CURVNET_RESTORE(); } /* * Interfaces to keep an ifnet type-stable despite the possibility of the * driver calling if_free(). If there are additional references, we defer * freeing the underlying data structure. */ void if_ref(struct ifnet *ifp) { /* We don't assert the ifnet list lock here, but arguably should. 
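 *
 * if_ref() and if_rele() reduce to a conventional atomic reference
 * count that frees on the final release. A minimal C11 sketch;
 * struct obj and obj_free() are hypothetical:
 *
 *	#include <stdatomic.h>
 *
 *	struct obj { atomic_uint refs; };
 *	void obj_free(struct obj *);
 *
 *	static void
 *	obj_ref(struct obj *o)
 *	{
 *		atomic_fetch_add(&o->refs, 1);
 *	}
 *
 *	static void
 *	obj_rele(struct obj *o)
 *	{
 *		if (atomic_fetch_sub(&o->refs, 1) == 1)
 *			obj_free(o);	// last reference is gone
 *	}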
*/ refcount_acquire(&ifp->if_refcount); } void if_rele(struct ifnet *ifp) { if (!refcount_release(&ifp->if_refcount)) return; if_free_internal(ifp); } void ifq_init(struct ifaltq *ifq, struct ifnet *ifp) { mtx_init(&ifq->ifq_mtx, ifp->if_xname, "if send queue", MTX_DEF); if (ifq->ifq_maxlen == 0) ifq->ifq_maxlen = ifqmaxlen; ifq->altq_type = 0; ifq->altq_disc = NULL; ifq->altq_flags &= ALTQF_CANTCHANGE; ifq->altq_tbr = NULL; ifq->altq_ifp = ifp; } void ifq_delete(struct ifaltq *ifq) { mtx_destroy(&ifq->ifq_mtx); } /* * Perform generic interface initialization tasks and attach the interface * to the list of "active" interfaces. If the vmove flag is set on entry * to if_attach_internal(), perform only a limited subset of initialization * tasks, given that we are moving an ifnet that has already been fully * initialized from one vnet to another. * * Note that if_detach_internal() removes group membership unconditionally * even when the vmove flag is set, and if_attach_internal() adds only IFG_ALL. * Thus, when if_vmove() is applied to a cloned interface, group membership * is lost while a cloned one always joins a group whose name is * ifc->ifc_name. To recover this after if_detach_internal() and * if_attach_internal(), the cloner should be specified to * if_attach_internal() via ifc. If it is non-NULL, if_attach_internal() * attempts to join a group whose name is ifc->ifc_name. * * XXX: * - The decision to return void and thus require this function to * succeed is questionable. * - We should probably do more sanity checking. For instance we don't * do anything to ensure if_xname is unique or non-empty. */ void if_attach(struct ifnet *ifp) { if_attach_internal(ifp, 0, NULL); } /* * Compute the least common TSO limit. */ void if_hw_tsomax_common(if_t ifp, struct ifnet_hw_tsomax *pmax) { /* * 1) If there is no limit currently, take the limit from * the network adapter. * * 2) If the network adapter has a limit below the current * limit, apply it. */ if (pmax->tsomaxbytes == 0 || (ifp->if_hw_tsomax != 0 && ifp->if_hw_tsomax < pmax->tsomaxbytes)) { pmax->tsomaxbytes = ifp->if_hw_tsomax; } if (pmax->tsomaxsegcount == 0 || (ifp->if_hw_tsomaxsegcount != 0 && ifp->if_hw_tsomaxsegcount < pmax->tsomaxsegcount)) { pmax->tsomaxsegcount = ifp->if_hw_tsomaxsegcount; } if (pmax->tsomaxsegsize == 0 || (ifp->if_hw_tsomaxsegsize != 0 && ifp->if_hw_tsomaxsegsize < pmax->tsomaxsegsize)) { pmax->tsomaxsegsize = ifp->if_hw_tsomaxsegsize; } } /* * Update TSO limit of a network adapter. * * Returns zero if there was no change; non-zero otherwise. */ int if_hw_tsomax_update(if_t ifp, struct ifnet_hw_tsomax *pmax) { int retval = 0; if (ifp->if_hw_tsomax != pmax->tsomaxbytes) { ifp->if_hw_tsomax = pmax->tsomaxbytes; retval++; } if (ifp->if_hw_tsomaxsegsize != pmax->tsomaxsegsize) { ifp->if_hw_tsomaxsegsize = pmax->tsomaxsegsize; retval++; } if (ifp->if_hw_tsomaxsegcount != pmax->tsomaxsegcount) { ifp->if_hw_tsomaxsegcount = pmax->tsomaxsegcount; retval++; } return (retval); } static void if_attach_internal(struct ifnet *ifp, int vmove, struct if_clone *ifc) { unsigned socksize, ifasize; int namelen, masklen; struct sockaddr_dl *sdl; struct ifaddr *ifa; if (ifp->if_index == 0 || ifp != ifnet_byindex(ifp->if_index)) panic ("%s: BUG: if_attach called without if_alloc'd input()\n", ifp->if_xname); #ifdef VIMAGE ifp->if_vnet = curvnet; if (ifp->if_home_vnet == NULL) ifp->if_home_vnet = curvnet; #endif if_addgroup(ifp, IFG_ALL); /* Restore group membership for cloned interfaces.
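 *
 * Per field, if_hw_tsomax_common() above keeps the smallest non-zero
 * value, treating zero as "no limit known yet". The same rule in
 * isolation; min_nonzero() is illustrative:
 *
 *	static unsigned
 *	min_nonzero(unsigned cur, unsigned dev)
 *	{
 *		if (cur == 0 || (dev != 0 && dev < cur))
 *			return (dev);
 *		return (cur);
 *	}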
	 */
	if (vmove && ifc != NULL)
		if_clone_addgroup(ifp, ifc);

	getmicrotime(&ifp->if_lastchange);
	ifp->if_epoch = time_uptime;

	KASSERT((ifp->if_transmit == NULL && ifp->if_qflush == NULL) ||
	    (ifp->if_transmit != NULL && ifp->if_qflush != NULL),
	    ("transmit and qflush must both either be set or both be NULL"));
	if (ifp->if_transmit == NULL) {
		ifp->if_transmit = if_transmit;
		ifp->if_qflush = if_qflush;
	}
	if (ifp->if_input == NULL)
		ifp->if_input = if_input_default;

	if (ifp->if_requestencap == NULL)
		ifp->if_requestencap = if_requestencap_default;

	if (!vmove) {
#ifdef MAC
		mac_ifnet_create(ifp);
#endif

		/*
		 * Create a Link Level name for this device.
		 */
		namelen = strlen(ifp->if_xname);
		/*
		 * Always save enough space for any possible name so we
		 * can do a rename in place later.
		 */
		masklen = offsetof(struct sockaddr_dl, sdl_data[0]) + IFNAMSIZ;
		socksize = masklen + ifp->if_addrlen;
		if (socksize < sizeof(*sdl))
			socksize = sizeof(*sdl);
		socksize = roundup2(socksize, sizeof(long));
		ifasize = sizeof(*ifa) + 2 * socksize;
		ifa = ifa_alloc(ifasize, M_WAITOK);
		sdl = (struct sockaddr_dl *)(ifa + 1);
		sdl->sdl_len = socksize;
		sdl->sdl_family = AF_LINK;
		bcopy(ifp->if_xname, sdl->sdl_data, namelen);
		sdl->sdl_nlen = namelen;
		sdl->sdl_index = ifp->if_index;
		sdl->sdl_type = ifp->if_type;
		ifp->if_addr = ifa;
		ifa->ifa_ifp = ifp;
		ifa->ifa_rtrequest = link_rtrequest;
		ifa->ifa_addr = (struct sockaddr *)sdl;
		sdl = (struct sockaddr_dl *)(socksize + (caddr_t)sdl);
		ifa->ifa_netmask = (struct sockaddr *)sdl;
		sdl->sdl_len = masklen;
		while (namelen != 0)
			sdl->sdl_data[--namelen] = 0xff;
		TAILQ_INSERT_HEAD(&ifp->if_addrhead, ifa, ifa_link);
		/* Reliably crash if used uninitialized. */
		ifp->if_broadcastaddr = NULL;

		if (ifp->if_type == IFT_ETHER) {
			ifp->if_hw_addr = malloc(ifp->if_addrlen, M_IFADDR,
			    M_WAITOK | M_ZERO);
		}

#if defined(INET) || defined(INET6)
		/* Use defaults for TSO, if nothing is set */
		if (ifp->if_hw_tsomax == 0 &&
		    ifp->if_hw_tsomaxsegcount == 0 &&
		    ifp->if_hw_tsomaxsegsize == 0) {
			/*
			 * The TSO defaults need to be such that an
			 * NFS mbuf list of 35 mbufs totalling just
			 * below 64K works and that a chain of mbufs
			 * can be defragged into at most 32 segments:
			 */
			ifp->if_hw_tsomax = min(IP_MAXPACKET,
			    (32 * MCLBYTES) - (ETHER_HDR_LEN +
			    ETHER_VLAN_ENCAP_LEN));
			ifp->if_hw_tsomaxsegcount = 35;
			ifp->if_hw_tsomaxsegsize = 2048;	/* 2K */

			/* XXX some drivers set IFCAP_TSO after ethernet attach */
			if (ifp->if_capabilities & IFCAP_TSO) {
				if_printf(ifp, "Using defaults for TSO: %u/%u/%u\n",
				    ifp->if_hw_tsomax,
				    ifp->if_hw_tsomaxsegcount,
				    ifp->if_hw_tsomaxsegsize);
			}
		}
#endif
	}
#ifdef VIMAGE
	else {
		/*
		 * Update the interface index in the link layer address
		 * of the interface.
		 */
		for (ifa = ifp->if_addr; ifa != NULL;
		    ifa = TAILQ_NEXT(ifa, ifa_link)) {
			if (ifa->ifa_addr->sa_family == AF_LINK) {
				sdl = (struct sockaddr_dl *)ifa->ifa_addr;
				sdl->sdl_index = ifp->if_index;
			}
		}
	}
#endif

	IFNET_WLOCK();
	TAILQ_INSERT_TAIL(&V_ifnet, ifp, if_link);
#ifdef VIMAGE
	curvnet->vnet_ifcnt++;
#endif
	IFNET_WUNLOCK();

	if (domain_init_status >= 2)
		if_attachdomain1(ifp);

	EVENTHANDLER_INVOKE(ifnet_arrival_event, ifp);
	if (IS_DEFAULT_VNET(curvnet))
		devctl_notify("IFNET", ifp->if_xname, "ATTACH", NULL);

	/* Announce the interface.
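	 *
	 * This emits an RTM_IFANNOUNCE message on routing sockets; e.g.
	 * (illustrative) a daemon listening on an AF_ROUTE socket would
	 * observe:
	 *
	 *	struct if_announcemsghdr ifan;
	 *	read(rtsock, &ifan, sizeof(ifan));
	 *	(ifan.ifan_what == IFAN_ARRIVAL for a new interface)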
	 */
	rt_ifannouncemsg(ifp, IFAN_ARRIVAL);
}

static void
if_attachdomain(void *dummy)
{
	struct ifnet *ifp;

	TAILQ_FOREACH(ifp, &V_ifnet, if_link)
		if_attachdomain1(ifp);
}
SYSINIT(domainifattach, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_SECOND,
    if_attachdomain, NULL);

static void
if_attachdomain1(struct ifnet *ifp)
{
	struct domain *dp;

	/*
	 * Since dp->dom_ifattach calls malloc() with M_WAITOK, we
	 * cannot lock ifp->if_afdata initialization, entirely.
	 */
	IF_AFDATA_LOCK(ifp);
	if (ifp->if_afdata_initialized >= domain_init_status) {
		IF_AFDATA_UNLOCK(ifp);
		log(LOG_WARNING, "%s called more than once on %s\n",
		    __func__, ifp->if_xname);
		return;
	}
	ifp->if_afdata_initialized = domain_init_status;
	IF_AFDATA_UNLOCK(ifp);

	/* address family dependent data region */
	bzero(ifp->if_afdata, sizeof(ifp->if_afdata));
	for (dp = domains; dp; dp = dp->dom_next) {
		if (dp->dom_ifattach)
			ifp->if_afdata[dp->dom_family] =
			    (*dp->dom_ifattach)(ifp);
	}
}

/*
 * Remove any unicast or broadcast network addresses from an interface.
 */
void
if_purgeaddrs(struct ifnet *ifp)
{
	struct ifaddr *ifa, *next;

	/* XXX cannot hold IF_ADDR_WLOCK over called functions. */
	TAILQ_FOREACH_SAFE(ifa, &ifp->if_addrhead, ifa_link, next) {
		if (ifa->ifa_addr->sa_family == AF_LINK)
			continue;
#ifdef INET
		/* XXX: Ugly!! ad hoc just for INET */
		if (ifa->ifa_addr->sa_family == AF_INET) {
			struct ifaliasreq ifr;

			bzero(&ifr, sizeof(ifr));
			ifr.ifra_addr = *ifa->ifa_addr;
			if (ifa->ifa_dstaddr)
				ifr.ifra_broadaddr = *ifa->ifa_dstaddr;
			if (in_control(NULL, SIOCDIFADDR, (caddr_t)&ifr,
			    ifp, NULL) == 0)
				continue;
		}
#endif /* INET */
#ifdef INET6
		if (ifa->ifa_addr->sa_family == AF_INET6) {
			in6_purgeaddr(ifa);
			/* ifp_addrhead is already updated */
			continue;
		}
#endif /* INET6 */
		IF_ADDR_WLOCK(ifp);
		TAILQ_REMOVE(&ifp->if_addrhead, ifa, ifa_link);
		IF_ADDR_WUNLOCK(ifp);
		ifa_free(ifa);
	}
}

/*
 * Remove any multicast network addresses from an interface when an ifnet
 * is going away.
 */
static void
if_purgemaddrs(struct ifnet *ifp)
{
	struct ifmultiaddr *ifma;
	struct ifmultiaddr *next;

	IF_ADDR_WLOCK(ifp);
	TAILQ_FOREACH_SAFE(ifma, &ifp->if_multiaddrs, ifma_link, next)
		if_delmulti_locked(ifp, ifma, 1);
	IF_ADDR_WUNLOCK(ifp);
}

/*
 * Detach an interface, removing it from the list of "active" interfaces.
 * If the vmove flag is set on entry to if_detach_internal(), perform only
 * a limited subset of cleanup tasks, given that we are moving an ifnet
 * from one vnet to another, where it must be fully operational.
 *
 * XXXRW: There are some significant questions about event ordering, and
 * how to prevent things from starting to use the interface during detach.
 */
void
if_detach(struct ifnet *ifp)
{

	CURVNET_SET_QUIET(ifp->if_vnet);
	if_detach_internal(ifp, 0, NULL);
	CURVNET_RESTORE();
}

/*
 * The vmove flag, if set, indicates that we are called from a callpath
 * that is moving an interface to a different vnet instance.
 *
 * The shutdown flag, if set, indicates that we are called in the
 * process of shutting down a vnet instance.  Currently only the
 * vnet_if_return SYSUNINIT function sets it.  Note: we can be called
 * on a vnet instance shutdown without this flag being set, e.g., when
 * the cloned interfaces are destroyed as the first thing of teardown.
 */
static int
if_detach_internal(struct ifnet *ifp, int vmove, struct if_clone **ifcp)
{
	struct ifaddr *ifa;
	int i;
	struct domain *dp;
	struct ifnet *iter;
	int found = 0;
#ifdef VIMAGE
	int shutdown;

	shutdown = (ifp->if_vnet->vnet_state > SI_SUB_VNET &&
	    ifp->if_vnet->vnet_state < SI_SUB_VNET_DONE) ?
	    1 : 0;
#endif
	IFNET_WLOCK();
	TAILQ_FOREACH(iter, &V_ifnet, if_link)
		if (iter == ifp) {
			TAILQ_REMOVE(&V_ifnet, ifp, if_link);
			found = 1;
			break;
		}
	IFNET_WUNLOCK();
	if (!found) {
		/*
		 * While we would want to panic here, we cannot
		 * guarantee that the interface is indeed still on
		 * the list given we don't hold locks all the way.
		 */
		return (ENOENT);
#if 0
		if (vmove)
			panic("%s: ifp=%p not on the ifnet tailq %p",
			    __func__, ifp, &V_ifnet);
		else
			return; /* XXX this should panic as well? */
#endif
	}

	/*
	 * At this point we know the interface still was on the ifnet list
	 * and we removed it so we are in a stable state.
	 */
#ifdef VIMAGE
	curvnet->vnet_ifcnt--;
#endif

	/*
	 * In any case (destroy or vmove) detach us from the groups
	 * and remove/wait for pending events on the taskq.
	 * XXX-BZ in theory an interface could still enqueue a taskq change?
	 */
	if_delgroups(ifp);
	taskqueue_drain(taskqueue_swi, &ifp->if_linktask);

	/*
	 * Check if this is a cloned interface or not.  This must be done
	 * even when shutting down, as an if_vmove_reclaim() would move
	 * the ifp, and if_clone_addgroup() would otherwise be handed a
	 * corrupted string from a gibberish pointer.
	 */
	if (vmove && ifcp != NULL)
		*ifcp = if_clone_findifc(ifp);

	if_down(ifp);

#ifdef VIMAGE
	/*
	 * On VNET shutdown abort here as the stack teardown will do all
	 * the work top-down for us.
	 */
	if (shutdown) {
		/*
		 * In case of a vmove we are done here without error.
		 * If we would signal an error it would lead to the same
		 * abort as if we did not find the ifnet anymore.
		 * if_detach() calls us in void context and does not care
		 * about an early abort notification, so life is splendid :)
		 */
		goto finish_vnet_shutdown;
	}
#endif

	/*
	 * At this point we are not tearing down a VNET and are either
	 * going to destroy or vmove the interface and have to cleanup
	 * accordingly.
	 */

	/*
	 * Remove routes and flush queues.
	 */
#ifdef ALTQ
	if (ALTQ_IS_ENABLED(&ifp->if_snd))
		altq_disable(&ifp->if_snd);
	if (ALTQ_IS_ATTACHED(&ifp->if_snd))
		altq_detach(&ifp->if_snd);
#endif

	if_purgeaddrs(ifp);

#ifdef INET
	in_ifdetach(ifp);
#endif

#ifdef INET6
	/*
	 * Remove all IPv6 kernel structs related to ifp.  This should be done
	 * before removing routing entries below, since IPv6 interface direct
	 * routes are expected to be removed by the IPv6-specific kernel API.
	 * Otherwise, the kernel will detect the inconsistency and complain
	 * about it.
	 */
	in6_ifdetach(ifp);
#endif
	if_purgemaddrs(ifp);

	/* Announce that the interface is gone. */
	rt_ifannouncemsg(ifp, IFAN_DEPARTURE);
	EVENTHANDLER_INVOKE(ifnet_departure_event, ifp);
	if (IS_DEFAULT_VNET(curvnet))
		devctl_notify("IFNET", ifp->if_xname, "DETACH", NULL);

	if (!vmove) {
		/*
		 * Prevent further calls into the device driver via ifnet.
		 */
		if_dead(ifp);

		/*
		 * Remove link ifaddr pointer and maybe decrement if_index.
		 * Clean up all addresses.
		 */
		free(ifp->if_hw_addr, M_IFADDR);
		ifp->if_hw_addr = NULL;
		ifp->if_addr = NULL;

		/* We can now free link ifaddr. */
		IF_ADDR_WLOCK(ifp);
		if (!TAILQ_EMPTY(&ifp->if_addrhead)) {
			ifa = TAILQ_FIRST(&ifp->if_addrhead);
			TAILQ_REMOVE(&ifp->if_addrhead, ifa, ifa_link);
			IF_ADDR_WUNLOCK(ifp);
			ifa_free(ifa);
		} else
			IF_ADDR_WUNLOCK(ifp);
	}

	rt_flushifroutes(ifp);

#ifdef VIMAGE
finish_vnet_shutdown:
#endif
	/*
	 * We cannot hold the lock over dom_ifdetach calls as they might
	 * sleep, for example trying to drain a callout, which opens up a
	 * theoretical race with re-attaching.
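	 *
	 * Sketch of the pattern that follows (illustrative): the
	 * initialization level is snapshotted and cleared under the lock,
	 * then the handlers run unlocked:
	 *
	 *	IF_AFDATA_LOCK(ifp);
	 *	i = ifp->if_afdata_initialized;
	 *	ifp->if_afdata_initialized = 0;
	 *	IF_AFDATA_UNLOCK(ifp);
	 *	(dom_ifdetach() handlers may now sleep)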
	 */
	IF_AFDATA_LOCK(ifp);
	i = ifp->if_afdata_initialized;
	ifp->if_afdata_initialized = 0;
	IF_AFDATA_UNLOCK(ifp);
	for (dp = domains; i > 0 && dp; dp = dp->dom_next) {
		if (dp->dom_ifdetach && ifp->if_afdata[dp->dom_family]) {
			(*dp->dom_ifdetach)(ifp,
			    ifp->if_afdata[dp->dom_family]);
			ifp->if_afdata[dp->dom_family] = NULL;
		}
	}

	return (0);
}

#ifdef VIMAGE
/*
 * if_vmove() performs a limited version of if_detach() in the current
 * vnet and if_attach()es the ifnet to the vnet specified as the 2nd arg.
 * An attempt is made to shrink if_index in the current vnet, find an
 * unused if_index in the target vnet and call if_grow() if necessary,
 * and finally find an unused if_xname for the target vnet.
 */
static void
if_vmove(struct ifnet *ifp, struct vnet *new_vnet)
{
	struct if_clone *ifc;
	u_int bif_dlt, bif_hdrlen;
	int rc;

	/*
	 * if_detach_internal() will call the eventhandler to notify
	 * interface departure.  That will detach if_bpf.  We need to
	 * save the dlt and hdrlen so we can re-attach it later.
	 */
	bpf_get_bp_params(ifp->if_bpf, &bif_dlt, &bif_hdrlen);

	/*
	 * Detach from current vnet, but preserve LLADDR info, do not
	 * mark as dead etc. so that the ifnet can be reattached later.
	 * If we cannot find it, we lost the race to someone else.
	 */
	rc = if_detach_internal(ifp, 1, &ifc);
	if (rc != 0)
		return;

	/*
	 * Unlink the ifnet from ifindex_table[] in current vnet, and shrink
	 * the if_index for that vnet if possible.
	 *
	 * NOTE: IFNET_WLOCK/IFNET_WUNLOCK() are assumed to be unvirtualized,
	 * or we'd lock on one vnet and unlock on another.
	 */
	IFNET_WLOCK();
	ifindex_free_locked(ifp->if_index);
	IFNET_WUNLOCK();

	/*
	 * Perform interface-specific reassignment tasks, if provided by
	 * the driver.
	 */
	if (ifp->if_reassign != NULL)
		ifp->if_reassign(ifp, new_vnet, NULL);

	/*
	 * Switch to the context of the target vnet.
	 */
	CURVNET_SET_QUIET(new_vnet);

	IFNET_WLOCK();
	ifp->if_index = ifindex_alloc();
	ifnet_setbyindex_locked(ifp->if_index, ifp);
	IFNET_WUNLOCK();

	if_attach_internal(ifp, 1, ifc);

	if (ifp->if_bpf == NULL)
		bpfattach(ifp, bif_dlt, bif_hdrlen);

	CURVNET_RESTORE();
}

/*
 * Move an ifnet to or from another child prison/vnet, specified by the
 * jail id.
 */
static int
if_vmove_loan(struct thread *td, struct ifnet *ifp, char *ifname, int jid)
{
	struct prison *pr;
	struct ifnet *difp;
	int shutdown;

	/* Try to find the prison within our visibility. */
	sx_slock(&allprison_lock);
	pr = prison_find_child(td->td_ucred->cr_prison, jid);
	sx_sunlock(&allprison_lock);
	if (pr == NULL)
		return (ENXIO);
	prison_hold_locked(pr);
	mtx_unlock(&pr->pr_mtx);

	/* Do not try to move the iface from and to the same prison. */
	if (pr->pr_vnet == ifp->if_vnet) {
		prison_free(pr);
		return (EEXIST);
	}

	/* Make sure the named iface does not exist in the dst. prison/vnet. */
	/* XXX Lock interfaces to avoid races. */
	CURVNET_SET_QUIET(pr->pr_vnet);
	difp = ifunit(ifname);
	if (difp != NULL) {
		CURVNET_RESTORE();
		prison_free(pr);
		return (EEXIST);
	}

	/* Make sure the VNET is stable. */
	shutdown = (ifp->if_vnet->vnet_state > SI_SUB_VNET &&
	    ifp->if_vnet->vnet_state < SI_SUB_VNET_DONE) ? 1 : 0;
	if (shutdown) {
		CURVNET_RESTORE();
		prison_free(pr);
		return (EBUSY);
	}
	CURVNET_RESTORE();

	/* Move the interface into the child jail/vnet. */
	if_vmove(ifp, pr->pr_vnet);

	/* Report the new if_xname back to userland. */
	sprintf(ifname, "%s", ifp->if_xname);

	prison_free(pr);
	return (0);
}

static int
if_vmove_reclaim(struct thread *td, char *ifname, int jid)
{
	struct prison *pr;
	struct vnet *vnet_dst;
	struct ifnet *ifp;
	int shutdown;

	/* Try to find the prison within our visibility.
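	 *
	 * Userland reaches this path via SIOCSIFRVNET, e.g.
	 * (illustrative; roughly what ifconfig(8)'s -vnet option issues):
	 *
	 *	strlcpy(ifr.ifr_name, "epair0b", sizeof(ifr.ifr_name));
	 *	ifr.ifr_jid = jid;
	 *	ioctl(s, SIOCSIFRVNET, &ifr);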
*/ sx_slock(&allprison_lock); pr = prison_find_child(td->td_ucred->cr_prison, jid); sx_sunlock(&allprison_lock); if (pr == NULL) return (ENXIO); prison_hold_locked(pr); mtx_unlock(&pr->pr_mtx); /* Make sure the named iface exists in the source prison/vnet. */ CURVNET_SET(pr->pr_vnet); ifp = ifunit(ifname); /* XXX Lock to avoid races. */ if (ifp == NULL) { CURVNET_RESTORE(); prison_free(pr); return (ENXIO); } /* Do not try to move the iface from and to the same prison. */ vnet_dst = TD_TO_VNET(td); if (vnet_dst == ifp->if_vnet) { CURVNET_RESTORE(); prison_free(pr); return (EEXIST); } /* Make sure the VNET is stable. */ shutdown = (ifp->if_vnet->vnet_state > SI_SUB_VNET && ifp->if_vnet->vnet_state < SI_SUB_VNET_DONE) ? 1 : 0; if (shutdown) { CURVNET_RESTORE(); prison_free(pr); return (EBUSY); } /* Get interface back from child jail/vnet. */ if_vmove(ifp, vnet_dst); CURVNET_RESTORE(); /* Report the new if_xname back to the userland. */ sprintf(ifname, "%s", ifp->if_xname); prison_free(pr); return (0); } #endif /* VIMAGE */ /* * Add a group to an interface */ int if_addgroup(struct ifnet *ifp, const char *groupname) { struct ifg_list *ifgl; struct ifg_group *ifg = NULL; struct ifg_member *ifgm; int new = 0; if (groupname[0] && groupname[strlen(groupname) - 1] >= '0' && groupname[strlen(groupname) - 1] <= '9') return (EINVAL); IFNET_WLOCK(); TAILQ_FOREACH(ifgl, &ifp->if_groups, ifgl_next) if (!strcmp(ifgl->ifgl_group->ifg_group, groupname)) { IFNET_WUNLOCK(); return (EEXIST); } if ((ifgl = (struct ifg_list *)malloc(sizeof(struct ifg_list), M_TEMP, M_NOWAIT)) == NULL) { IFNET_WUNLOCK(); return (ENOMEM); } if ((ifgm = (struct ifg_member *)malloc(sizeof(struct ifg_member), M_TEMP, M_NOWAIT)) == NULL) { free(ifgl, M_TEMP); IFNET_WUNLOCK(); return (ENOMEM); } TAILQ_FOREACH(ifg, &V_ifg_head, ifg_next) if (!strcmp(ifg->ifg_group, groupname)) break; if (ifg == NULL) { if ((ifg = (struct ifg_group *)malloc(sizeof(struct ifg_group), M_TEMP, M_NOWAIT)) == NULL) { free(ifgl, M_TEMP); free(ifgm, M_TEMP); IFNET_WUNLOCK(); return (ENOMEM); } strlcpy(ifg->ifg_group, groupname, sizeof(ifg->ifg_group)); ifg->ifg_refcnt = 0; TAILQ_INIT(&ifg->ifg_members); TAILQ_INSERT_TAIL(&V_ifg_head, ifg, ifg_next); new = 1; } ifg->ifg_refcnt++; ifgl->ifgl_group = ifg; ifgm->ifgm_ifp = ifp; IF_ADDR_WLOCK(ifp); TAILQ_INSERT_TAIL(&ifg->ifg_members, ifgm, ifgm_next); TAILQ_INSERT_TAIL(&ifp->if_groups, ifgl, ifgl_next); IF_ADDR_WUNLOCK(ifp); IFNET_WUNLOCK(); if (new) EVENTHANDLER_INVOKE(group_attach_event, ifg); EVENTHANDLER_INVOKE(group_change_event, groupname); return (0); } /* * Remove a group from an interface */ int if_delgroup(struct ifnet *ifp, const char *groupname) { struct ifg_list *ifgl; struct ifg_member *ifgm; IFNET_WLOCK(); TAILQ_FOREACH(ifgl, &ifp->if_groups, ifgl_next) if (!strcmp(ifgl->ifgl_group->ifg_group, groupname)) break; if (ifgl == NULL) { IFNET_WUNLOCK(); return (ENOENT); } IF_ADDR_WLOCK(ifp); TAILQ_REMOVE(&ifp->if_groups, ifgl, ifgl_next); IF_ADDR_WUNLOCK(ifp); TAILQ_FOREACH(ifgm, &ifgl->ifgl_group->ifg_members, ifgm_next) if (ifgm->ifgm_ifp == ifp) break; if (ifgm != NULL) { TAILQ_REMOVE(&ifgl->ifgl_group->ifg_members, ifgm, ifgm_next); free(ifgm, M_TEMP); } if (--ifgl->ifgl_group->ifg_refcnt == 0) { TAILQ_REMOVE(&V_ifg_head, ifgl->ifgl_group, ifg_next); IFNET_WUNLOCK(); EVENTHANDLER_INVOKE(group_detach_event, ifgl->ifgl_group); free(ifgl->ifgl_group, M_TEMP); } else IFNET_WUNLOCK(); free(ifgl, M_TEMP); EVENTHANDLER_INVOKE(group_change_event, groupname); return (0); } /* * Remove an interface from all 
groups */ static void if_delgroups(struct ifnet *ifp) { struct ifg_list *ifgl; struct ifg_member *ifgm; char groupname[IFNAMSIZ]; IFNET_WLOCK(); while (!TAILQ_EMPTY(&ifp->if_groups)) { ifgl = TAILQ_FIRST(&ifp->if_groups); strlcpy(groupname, ifgl->ifgl_group->ifg_group, IFNAMSIZ); IF_ADDR_WLOCK(ifp); TAILQ_REMOVE(&ifp->if_groups, ifgl, ifgl_next); IF_ADDR_WUNLOCK(ifp); TAILQ_FOREACH(ifgm, &ifgl->ifgl_group->ifg_members, ifgm_next) if (ifgm->ifgm_ifp == ifp) break; if (ifgm != NULL) { TAILQ_REMOVE(&ifgl->ifgl_group->ifg_members, ifgm, ifgm_next); free(ifgm, M_TEMP); } if (--ifgl->ifgl_group->ifg_refcnt == 0) { TAILQ_REMOVE(&V_ifg_head, ifgl->ifgl_group, ifg_next); IFNET_WUNLOCK(); EVENTHANDLER_INVOKE(group_detach_event, ifgl->ifgl_group); free(ifgl->ifgl_group, M_TEMP); } else IFNET_WUNLOCK(); free(ifgl, M_TEMP); EVENTHANDLER_INVOKE(group_change_event, groupname); IFNET_WLOCK(); } IFNET_WUNLOCK(); } /* * Stores all groups from an interface in memory pointed * to by data */ static int if_getgroup(struct ifgroupreq *data, struct ifnet *ifp) { int len, error; struct ifg_list *ifgl; struct ifg_req ifgrq, *ifgp; struct ifgroupreq *ifgr = data; if (ifgr->ifgr_len == 0) { IF_ADDR_RLOCK(ifp); TAILQ_FOREACH(ifgl, &ifp->if_groups, ifgl_next) ifgr->ifgr_len += sizeof(struct ifg_req); IF_ADDR_RUNLOCK(ifp); return (0); } len = ifgr->ifgr_len; ifgp = ifgr->ifgr_groups; /* XXX: wire */ IF_ADDR_RLOCK(ifp); TAILQ_FOREACH(ifgl, &ifp->if_groups, ifgl_next) { if (len < sizeof(ifgrq)) { IF_ADDR_RUNLOCK(ifp); return (EINVAL); } bzero(&ifgrq, sizeof ifgrq); strlcpy(ifgrq.ifgrq_group, ifgl->ifgl_group->ifg_group, sizeof(ifgrq.ifgrq_group)); if ((error = copyout(&ifgrq, ifgp, sizeof(struct ifg_req)))) { IF_ADDR_RUNLOCK(ifp); return (error); } len -= sizeof(ifgrq); ifgp++; } IF_ADDR_RUNLOCK(ifp); return (0); } /* * Stores all members of a group in memory pointed to by data */ static int if_getgroupmembers(struct ifgroupreq *data) { struct ifgroupreq *ifgr = data; struct ifg_group *ifg; struct ifg_member *ifgm; struct ifg_req ifgrq, *ifgp; int len, error; IFNET_RLOCK(); TAILQ_FOREACH(ifg, &V_ifg_head, ifg_next) if (!strcmp(ifg->ifg_group, ifgr->ifgr_name)) break; if (ifg == NULL) { IFNET_RUNLOCK(); return (ENOENT); } if (ifgr->ifgr_len == 0) { TAILQ_FOREACH(ifgm, &ifg->ifg_members, ifgm_next) ifgr->ifgr_len += sizeof(ifgrq); IFNET_RUNLOCK(); return (0); } len = ifgr->ifgr_len; ifgp = ifgr->ifgr_groups; TAILQ_FOREACH(ifgm, &ifg->ifg_members, ifgm_next) { if (len < sizeof(ifgrq)) { IFNET_RUNLOCK(); return (EINVAL); } bzero(&ifgrq, sizeof ifgrq); strlcpy(ifgrq.ifgrq_member, ifgm->ifgm_ifp->if_xname, sizeof(ifgrq.ifgrq_member)); if ((error = copyout(&ifgrq, ifgp, sizeof(struct ifg_req)))) { IFNET_RUNLOCK(); return (error); } len -= sizeof(ifgrq); ifgp++; } IFNET_RUNLOCK(); return (0); } /* * Return counter values from counter(9)s stored in ifnet. */ uint64_t if_get_counter_default(struct ifnet *ifp, ift_counter cnt) { KASSERT(cnt < IFCOUNTERS, ("%s: invalid cnt %d", __func__, cnt)); return (counter_u64_fetch(ifp->if_counters[cnt])); } /* * Increase an ifnet counter. Usually used for counters shared * between the stack and a driver, but function supports them all. */ void if_inc_counter(struct ifnet *ifp, ift_counter cnt, int64_t inc) { KASSERT(cnt < IFCOUNTERS, ("%s: invalid cnt %d", __func__, cnt)); counter_u64_add(ifp->if_counters[cnt], inc); } /* * Copy data from ifnet to userland API structure if_data. 
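 *
 * The per-packet ifi_* fields below are obtained through the
 * if_get_counter() method, so a driver with hardware statistics can
 * override it; stack-maintained counters are bumped with
 * if_inc_counter(), e.g. (illustrative):
 *
 *	if_inc_counter(ifp, IFCOUNTER_IQDROPS, 1);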
*/ void if_data_copy(struct ifnet *ifp, struct if_data *ifd) { ifd->ifi_type = ifp->if_type; ifd->ifi_physical = 0; ifd->ifi_addrlen = ifp->if_addrlen; ifd->ifi_hdrlen = ifp->if_hdrlen; ifd->ifi_link_state = ifp->if_link_state; ifd->ifi_vhid = 0; ifd->ifi_datalen = sizeof(struct if_data); ifd->ifi_mtu = ifp->if_mtu; ifd->ifi_metric = ifp->if_metric; ifd->ifi_baudrate = ifp->if_baudrate; ifd->ifi_hwassist = ifp->if_hwassist; ifd->ifi_epoch = ifp->if_epoch; ifd->ifi_lastchange = ifp->if_lastchange; ifd->ifi_ipackets = ifp->if_get_counter(ifp, IFCOUNTER_IPACKETS); ifd->ifi_ierrors = ifp->if_get_counter(ifp, IFCOUNTER_IERRORS); ifd->ifi_opackets = ifp->if_get_counter(ifp, IFCOUNTER_OPACKETS); ifd->ifi_oerrors = ifp->if_get_counter(ifp, IFCOUNTER_OERRORS); ifd->ifi_collisions = ifp->if_get_counter(ifp, IFCOUNTER_COLLISIONS); ifd->ifi_ibytes = ifp->if_get_counter(ifp, IFCOUNTER_IBYTES); ifd->ifi_obytes = ifp->if_get_counter(ifp, IFCOUNTER_OBYTES); ifd->ifi_imcasts = ifp->if_get_counter(ifp, IFCOUNTER_IMCASTS); ifd->ifi_omcasts = ifp->if_get_counter(ifp, IFCOUNTER_OMCASTS); ifd->ifi_iqdrops = ifp->if_get_counter(ifp, IFCOUNTER_IQDROPS); ifd->ifi_oqdrops = ifp->if_get_counter(ifp, IFCOUNTER_OQDROPS); ifd->ifi_noproto = ifp->if_get_counter(ifp, IFCOUNTER_NOPROTO); } /* * Wrapper functions for struct ifnet address list locking macros. These are * used by kernel modules to avoid encoding programming interface or binary * interface assumptions that may be violated when kernel-internal locking * approaches change. */ void if_addr_rlock(struct ifnet *ifp) { IF_ADDR_RLOCK(ifp); } void if_addr_runlock(struct ifnet *ifp) { IF_ADDR_RUNLOCK(ifp); } void if_maddr_rlock(if_t ifp) { IF_ADDR_RLOCK((struct ifnet *)ifp); } void if_maddr_runlock(if_t ifp) { IF_ADDR_RUNLOCK((struct ifnet *)ifp); } /* * Initialization, destruction and refcounting functions for ifaddrs. 
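 *
 * Typical usage sketch (illustrative; mirrors how protocols embed a
 * struct ifaddr at the start of their own address structure):
 *
 *	ifa = ifa_alloc(sizeof(struct in_ifaddr), M_WAITOK);
 *	ifa_ref(ifa);		(extra reference, e.g. for a hash table)
 *	...
 *	ifa_free(ifa);		(the last ifa_free() releases the
 *				 counters and the structure itself)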
*/ struct ifaddr * ifa_alloc(size_t size, int flags) { struct ifaddr *ifa; KASSERT(size >= sizeof(struct ifaddr), ("%s: invalid size %zu", __func__, size)); ifa = malloc(size, M_IFADDR, M_ZERO | flags); if (ifa == NULL) return (NULL); if ((ifa->ifa_opackets = counter_u64_alloc(flags)) == NULL) goto fail; if ((ifa->ifa_ipackets = counter_u64_alloc(flags)) == NULL) goto fail; if ((ifa->ifa_obytes = counter_u64_alloc(flags)) == NULL) goto fail; if ((ifa->ifa_ibytes = counter_u64_alloc(flags)) == NULL) goto fail; refcount_init(&ifa->ifa_refcnt, 1); return (ifa); fail: /* free(NULL) is okay */ counter_u64_free(ifa->ifa_opackets); counter_u64_free(ifa->ifa_ipackets); counter_u64_free(ifa->ifa_obytes); counter_u64_free(ifa->ifa_ibytes); free(ifa, M_IFADDR); return (NULL); } void ifa_ref(struct ifaddr *ifa) { refcount_acquire(&ifa->ifa_refcnt); } void ifa_free(struct ifaddr *ifa) { if (refcount_release(&ifa->ifa_refcnt)) { counter_u64_free(ifa->ifa_opackets); counter_u64_free(ifa->ifa_ipackets); counter_u64_free(ifa->ifa_obytes); counter_u64_free(ifa->ifa_ibytes); free(ifa, M_IFADDR); } } static int ifa_maintain_loopback_route(int cmd, const char *otype, struct ifaddr *ifa, struct sockaddr *ia) { int error; struct rt_addrinfo info; struct sockaddr_dl null_sdl; struct ifnet *ifp; ifp = ifa->ifa_ifp; bzero(&info, sizeof(info)); if (cmd != RTM_DELETE) info.rti_ifp = V_loif; - info.rti_flags = ifa->ifa_flags | RTF_HOST | RTF_STATIC; + info.rti_flags = ifa->ifa_flags | RTF_HOST | RTF_STATIC | RTF_PINNED; info.rti_info[RTAX_DST] = ia; info.rti_info[RTAX_GATEWAY] = (struct sockaddr *)&null_sdl; link_init_sdl(ifp, (struct sockaddr *)&null_sdl, ifp->if_type); error = rtrequest1_fib(cmd, &info, NULL, ifp->if_fib); if (error != 0) log(LOG_DEBUG, "%s: %s failed for interface %s: %u\n", __func__, otype, if_name(ifp), error); return (error); } int ifa_add_loopback_route(struct ifaddr *ifa, struct sockaddr *ia) { return (ifa_maintain_loopback_route(RTM_ADD, "insertion", ifa, ia)); } int ifa_del_loopback_route(struct ifaddr *ifa, struct sockaddr *ia) { return (ifa_maintain_loopback_route(RTM_DELETE, "deletion", ifa, ia)); } int ifa_switch_loopback_route(struct ifaddr *ifa, struct sockaddr *ia) { return (ifa_maintain_loopback_route(RTM_CHANGE, "switch", ifa, ia)); } /* * XXX: Because sockaddr_dl has deeper structure than the sockaddr * structs used to represent other address families, it is necessary * to perform a different comparison. */ #define sa_dl_equal(a1, a2) \ ((((const struct sockaddr_dl *)(a1))->sdl_len == \ ((const struct sockaddr_dl *)(a2))->sdl_len) && \ (bcmp(CLLADDR((const struct sockaddr_dl *)(a1)), \ CLLADDR((const struct sockaddr_dl *)(a2)), \ ((const struct sockaddr_dl *)(a1))->sdl_alen) == 0)) /* * Locate an interface based on a complete address. 
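 *
 * E.g. (illustrative) a protocol validating a proposed local address:
 *
 *	if (ifa_ifwithaddr_check(sa) == 0)
 *		return (EADDRNOTAVAIL);
 *
 * ifa_ifwithaddr() instead returns a referenced ifaddr that the caller
 * must release with ifa_free().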
 */
/*ARGSUSED*/
static struct ifaddr *
ifa_ifwithaddr_internal(const struct sockaddr *addr, int getref)
{
	struct ifnet *ifp;
	struct ifaddr *ifa;

	IFNET_RLOCK_NOSLEEP();
	TAILQ_FOREACH(ifp, &V_ifnet, if_link) {
		IF_ADDR_RLOCK(ifp);
		TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
			if (ifa->ifa_addr->sa_family != addr->sa_family)
				continue;
			if (sa_equal(addr, ifa->ifa_addr)) {
				if (getref)
					ifa_ref(ifa);
				IF_ADDR_RUNLOCK(ifp);
				goto done;
			}
			/* IP6 doesn't have broadcast */
			if ((ifp->if_flags & IFF_BROADCAST) &&
			    ifa->ifa_broadaddr &&
			    ifa->ifa_broadaddr->sa_len != 0 &&
			    sa_equal(ifa->ifa_broadaddr, addr)) {
				if (getref)
					ifa_ref(ifa);
				IF_ADDR_RUNLOCK(ifp);
				goto done;
			}
		}
		IF_ADDR_RUNLOCK(ifp);
	}
	ifa = NULL;
done:
	IFNET_RUNLOCK_NOSLEEP();

	return (ifa);
}

struct ifaddr *
ifa_ifwithaddr(const struct sockaddr *addr)
{

	return (ifa_ifwithaddr_internal(addr, 1));
}

int
ifa_ifwithaddr_check(const struct sockaddr *addr)
{

	return (ifa_ifwithaddr_internal(addr, 0) != NULL);
}

/*
 * Locate an interface based on the broadcast address.
 */
/* ARGSUSED */
struct ifaddr *
ifa_ifwithbroadaddr(const struct sockaddr *addr, int fibnum)
{
	struct ifnet *ifp;
	struct ifaddr *ifa;

	IFNET_RLOCK_NOSLEEP();
	TAILQ_FOREACH(ifp, &V_ifnet, if_link) {
		if ((fibnum != RT_ALL_FIBS) && (ifp->if_fib != fibnum))
			continue;
		IF_ADDR_RLOCK(ifp);
		TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
			if (ifa->ifa_addr->sa_family != addr->sa_family)
				continue;
			if ((ifp->if_flags & IFF_BROADCAST) &&
			    ifa->ifa_broadaddr &&
			    ifa->ifa_broadaddr->sa_len != 0 &&
			    sa_equal(ifa->ifa_broadaddr, addr)) {
				ifa_ref(ifa);
				IF_ADDR_RUNLOCK(ifp);
				goto done;
			}
		}
		IF_ADDR_RUNLOCK(ifp);
	}
	ifa = NULL;
done:
	IFNET_RUNLOCK_NOSLEEP();

	return (ifa);
}

/*
 * Locate the point to point interface with a given destination address.
 */
/*ARGSUSED*/
struct ifaddr *
ifa_ifwithdstaddr(const struct sockaddr *addr, int fibnum)
{
	struct ifnet *ifp;
	struct ifaddr *ifa;

	IFNET_RLOCK_NOSLEEP();
	TAILQ_FOREACH(ifp, &V_ifnet, if_link) {
		if ((ifp->if_flags & IFF_POINTOPOINT) == 0)
			continue;
		if ((fibnum != RT_ALL_FIBS) && (ifp->if_fib != fibnum))
			continue;
		IF_ADDR_RLOCK(ifp);
		TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
			if (ifa->ifa_addr->sa_family != addr->sa_family)
				continue;
			if (ifa->ifa_dstaddr != NULL &&
			    sa_equal(addr, ifa->ifa_dstaddr)) {
				ifa_ref(ifa);
				IF_ADDR_RUNLOCK(ifp);
				goto done;
			}
		}
		IF_ADDR_RUNLOCK(ifp);
	}
	ifa = NULL;
done:
	IFNET_RUNLOCK_NOSLEEP();

	return (ifa);
}

/*
 * Find an interface on a specific network.  If many match, the choice
 * is the most specific one found.
 */
struct ifaddr *
ifa_ifwithnet(const struct sockaddr *addr, int ignore_ptp, int fibnum)
{
	struct ifnet *ifp;
	struct ifaddr *ifa;
	struct ifaddr *ifa_maybe = NULL;
	u_int af = addr->sa_family;
	const char *addr_data = addr->sa_data, *cplim;

	/*
	 * AF_LINK addresses can be looked up directly by their index number,
	 * so do that if we can.
	 */
	if (af == AF_LINK) {
		const struct sockaddr_dl *sdl =
		    (const struct sockaddr_dl *)addr;
		if (sdl->sdl_index && sdl->sdl_index <= V_if_index)
			return (ifaddr_byindex(sdl->sdl_index));
	}

	/*
	 * Scan through each interface, looking for ones that have addresses
	 * in this address family and the requested fib.  Maintain a reference
	 * on ifa_maybe once we find one, as we release the IF_ADDR_RLOCK()
	 * that kept it stable when we move onto the next interface.
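	 *
	 * The reference juggling below follows this pattern (illustrative):
	 *
	 *	if (ifa_maybe != NULL)
	 *		ifa_free(ifa_maybe);	(drop the old candidate)
	 *	ifa_maybe = ifa;
	 *	ifa_ref(ifa_maybe);		(pin the new one before the
	 *					 per-if lock is released)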
	 */
	IFNET_RLOCK_NOSLEEP();
	TAILQ_FOREACH(ifp, &V_ifnet, if_link) {
		if ((fibnum != RT_ALL_FIBS) && (ifp->if_fib != fibnum))
			continue;
		IF_ADDR_RLOCK(ifp);
		TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
			const char *cp, *cp2, *cp3;

			if (ifa->ifa_addr->sa_family != af)
next:				continue;
			if (af == AF_INET &&
			    ifp->if_flags & IFF_POINTOPOINT && !ignore_ptp) {
				/*
				 * This is a bit broken as it doesn't
				 * take into account that the remote end may
				 * be a single node in the network we are
				 * looking for.
				 * The trouble is that we don't know the
				 * netmask for the remote end.
				 */
				if (ifa->ifa_dstaddr != NULL &&
				    sa_equal(addr, ifa->ifa_dstaddr)) {
					ifa_ref(ifa);
					IF_ADDR_RUNLOCK(ifp);
					goto done;
				}
			} else {
				/*
				 * Scan all the bits in the ifa's address.
				 * If a bit disagrees with what we are
				 * looking for, mask it with the netmask
				 * to see if it really matters.
				 * (A byte at a time)
				 */
				if (ifa->ifa_netmask == 0)
					continue;
				cp = addr_data;
				cp2 = ifa->ifa_addr->sa_data;
				cp3 = ifa->ifa_netmask->sa_data;
				cplim = ifa->ifa_netmask->sa_len +
				    (char *)ifa->ifa_netmask;
				while (cp3 < cplim)
					if ((*cp++ ^ *cp2++) & *cp3++)
						goto next; /* next address! */
				/*
				 * If the netmask of what we just found
				 * is more specific than what we had before
				 * (if we had one), or if the virtual status
				 * of the new prefix is better than that of
				 * the old one, then remember the new one
				 * before continuing to search for an even
				 * better one.
				 */
				if (ifa_maybe == NULL ||
				    ifa_preferred(ifa_maybe, ifa) ||
				    rn_refines((caddr_t)ifa->ifa_netmask,
				    (caddr_t)ifa_maybe->ifa_netmask)) {
					if (ifa_maybe != NULL)
						ifa_free(ifa_maybe);
					ifa_maybe = ifa;
					ifa_ref(ifa_maybe);
				}
			}
		}
		IF_ADDR_RUNLOCK(ifp);
	}
	ifa = ifa_maybe;
	ifa_maybe = NULL;
done:
	IFNET_RUNLOCK_NOSLEEP();
	if (ifa_maybe != NULL)
		ifa_free(ifa_maybe);

	return (ifa);
}

/*
 * Find an interface address specific to an interface best matching
 * a given address.
 */
struct ifaddr *
ifaof_ifpforaddr(const struct sockaddr *addr, struct ifnet *ifp)
{
	struct ifaddr *ifa;
	const char *cp, *cp2, *cp3;
	char *cplim;
	struct ifaddr *ifa_maybe = NULL;
	u_int af = addr->sa_family;

	if (af >= AF_MAX)
		return (NULL);
	IF_ADDR_RLOCK(ifp);
	TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
		if (ifa->ifa_addr->sa_family != af)
			continue;
		if (ifa_maybe == NULL)
			ifa_maybe = ifa;
		if (ifa->ifa_netmask == 0) {
			if (sa_equal(addr, ifa->ifa_addr) ||
			    (ifa->ifa_dstaddr &&
			    sa_equal(addr, ifa->ifa_dstaddr)))
				goto done;
			continue;
		}
		if (ifp->if_flags & IFF_POINTOPOINT) {
			if (sa_equal(addr, ifa->ifa_dstaddr))
				goto done;
		} else {
			cp = addr->sa_data;
			cp2 = ifa->ifa_addr->sa_data;
			cp3 = ifa->ifa_netmask->sa_data;
			cplim = ifa->ifa_netmask->sa_len +
			    (char *)ifa->ifa_netmask;
			for (; cp3 < cplim; cp3++)
				if ((*cp++ ^ *cp2++) & *cp3)
					break;
			if (cp3 == cplim)
				goto done;
		}
	}
	ifa = ifa_maybe;
done:
	if (ifa != NULL)
		ifa_ref(ifa);
	IF_ADDR_RUNLOCK(ifp);

	return (ifa);
}

/*
 * See whether the new ifa is better than the current one:
 * 1) A non-virtual one is preferred over virtual.
 * 2) A virtual in master state preferred over any other state.
 *
 * Used in several address selecting functions.
 */
int
ifa_preferred(struct ifaddr *cur, struct ifaddr *next)
{

	return (cur->ifa_carp && (!next->ifa_carp ||
	    ((*carp_master_p)(next) && !(*carp_master_p)(cur))));
}

#include <net/if_llatbl.h>

/*
 * Default action when installing a route with a Link Level gateway.
 * Look up an appropriate real ifa to point to.
 * This should be moved to /sys/net/link.c eventually.
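 *
 * Sketch of the effect (illustrative): for a route installed with an
 * AF_LINK gateway, rt_ifa initially points at the interface's
 * link-level ifaddr; the hook below replaces it with the ifaddr best
 * matching the route's destination:
 *
 *	ifa = ifaof_ifpforaddr(rt_key(rt), ifp);
 *	rt->rt_ifa = ifa;	(old reference dropped via ifa_free())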
 */
static void
link_rtrequest(int cmd, struct rtentry *rt, struct rt_addrinfo *info)
{
	struct ifaddr *ifa, *oifa;
	struct sockaddr *dst;
	struct ifnet *ifp;

	if (cmd != RTM_ADD || ((ifa = rt->rt_ifa) == NULL) ||
	    ((ifp = ifa->ifa_ifp) == NULL) || ((dst = rt_key(rt)) == NULL))
		return;
	ifa = ifaof_ifpforaddr(dst, ifp);
	if (ifa) {
		oifa = rt->rt_ifa;
		rt->rt_ifa = ifa;
		ifa_free(oifa);
		if (ifa->ifa_rtrequest && ifa->ifa_rtrequest != link_rtrequest)
			ifa->ifa_rtrequest(cmd, rt, info);
	}
}

struct sockaddr_dl *
link_alloc_sdl(size_t size, int flags)
{

	return (malloc(size, M_TEMP, flags));
}

void
link_free_sdl(struct sockaddr *sa)
{

	free(sa, M_TEMP);
}

/*
 * Fills in given sdl with interface basic info.
 * Returns pointer to filled sdl.
 */
struct sockaddr_dl *
link_init_sdl(struct ifnet *ifp, struct sockaddr *paddr, u_char iftype)
{
	struct sockaddr_dl *sdl;

	sdl = (struct sockaddr_dl *)paddr;
	memset(sdl, 0, sizeof(struct sockaddr_dl));
	sdl->sdl_len = sizeof(struct sockaddr_dl);
	sdl->sdl_family = AF_LINK;
	sdl->sdl_index = ifp->if_index;
	sdl->sdl_type = iftype;

	return (sdl);
}

/*
 * Mark an interface down and notify protocols of
 * the transition.
 */
static void
if_unroute(struct ifnet *ifp, int flag, int fam)
{
	struct ifaddr *ifa;

	KASSERT(flag == IFF_UP, ("if_unroute: flag != IFF_UP"));

	ifp->if_flags &= ~flag;
	getmicrotime(&ifp->if_lastchange);
	TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link)
		if (fam == PF_UNSPEC || (fam == ifa->ifa_addr->sa_family))
			pfctlinput(PRC_IFDOWN, ifa->ifa_addr);
	ifp->if_qflush(ifp);

	if (ifp->if_carp)
		(*carp_linkstate_p)(ifp);
	rt_ifmsg(ifp);
}

/*
 * Mark an interface up and notify protocols of
 * the transition.
 */
static void
if_route(struct ifnet *ifp, int flag, int fam)
{
	struct ifaddr *ifa;

	KASSERT(flag == IFF_UP, ("if_route: flag != IFF_UP"));

	ifp->if_flags |= flag;
	getmicrotime(&ifp->if_lastchange);
	TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link)
		if (fam == PF_UNSPEC || (fam == ifa->ifa_addr->sa_family))
			pfctlinput(PRC_IFUP, ifa->ifa_addr);
	if (ifp->if_carp)
		(*carp_linkstate_p)(ifp);
	rt_ifmsg(ifp);
#ifdef INET6
	in6_if_up(ifp);
#endif
}

void	(*vlan_link_state_p)(struct ifnet *);	/* XXX: private from if_vlan */
void	(*vlan_trunk_cap_p)(struct ifnet *);	/* XXX: private from if_vlan */
struct ifnet *(*vlan_trunkdev_p)(struct ifnet *);
struct ifnet *(*vlan_devat_p)(struct ifnet *, uint16_t);
int	(*vlan_tag_p)(struct ifnet *, uint16_t *);
int	(*vlan_setcookie_p)(struct ifnet *, void *);
void	*(*vlan_cookie_p)(struct ifnet *);

/*
 * Handle a change in the interface link state.  To avoid LORs
 * between the driver lock and upper layer locks, as well as possible
 * recursions, we post the event to a taskqueue, and all the work
 * is done in the static do_link_state_change().
 */
void
if_link_state_change(struct ifnet *ifp, int link_state)
{
	/* Return if state hasn't changed. */
	if (ifp->if_link_state == link_state)
		return;

	ifp->if_link_state = link_state;

	taskqueue_enqueue(taskqueue_swi, &ifp->if_linktask);
}

static void
do_link_state_change(void *arg, int pending)
{
	struct ifnet *ifp = (struct ifnet *)arg;
	int link_state = ifp->if_link_state;
	CURVNET_SET(ifp->if_vnet);

	/* Notify that the link state has changed.
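	 *
	 * Drivers enter this path by calling, e.g. (illustrative, from a
	 * miibus status callback):
	 *
	 *	if_link_state_change(ifp, LINK_STATE_UP);
	 *
	 * which merely enqueues if_linktask; everything below then runs
	 * in taskqueue context, free of driver locks.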
	 */
	rt_ifmsg(ifp);
	if (ifp->if_vlantrunk != NULL)
		(*vlan_link_state_p)(ifp);

	if ((ifp->if_type == IFT_ETHER || ifp->if_type == IFT_L2VLAN) &&
	    ifp->if_l2com != NULL)
		(*ng_ether_link_state_p)(ifp, link_state);
	if (ifp->if_carp)
		(*carp_linkstate_p)(ifp);
	if (ifp->if_bridge)
		(*bridge_linkstate_p)(ifp);
	if (ifp->if_lagg)
		(*lagg_linkstate_p)(ifp, link_state);

	if (IS_DEFAULT_VNET(curvnet))
		devctl_notify("IFNET", ifp->if_xname,
		    (link_state == LINK_STATE_UP) ? "LINK_UP" : "LINK_DOWN",
		    NULL);
	if (pending > 1)
		if_printf(ifp, "%d link states coalesced\n", pending);
	if (log_link_state_change)
		log(LOG_NOTICE, "%s: link state changed to %s\n",
		    ifp->if_xname,
		    (link_state == LINK_STATE_UP) ? "UP" : "DOWN");
	EVENTHANDLER_INVOKE(ifnet_link_event, ifp, link_state);
	CURVNET_RESTORE();
}

/*
 * Mark an interface down and notify protocols of
 * the transition.
 */
void
if_down(struct ifnet *ifp)
{

	EVENTHANDLER_INVOKE(ifnet_event, ifp, IFNET_EVENT_DOWN);
	if_unroute(ifp, IFF_UP, AF_UNSPEC);
}

/*
 * Mark an interface up and notify protocols of
 * the transition.
 */
void
if_up(struct ifnet *ifp)
{

	if_route(ifp, IFF_UP, AF_UNSPEC);
	EVENTHANDLER_INVOKE(ifnet_event, ifp, IFNET_EVENT_UP);
}

/*
 * Flush an interface queue.
 */
void
if_qflush(struct ifnet *ifp)
{
	struct mbuf *m, *n;
	struct ifaltq *ifq;

	ifq = &ifp->if_snd;
	IFQ_LOCK(ifq);
#ifdef ALTQ
	if (ALTQ_IS_ENABLED(ifq))
		ALTQ_PURGE(ifq);
#endif
	n = ifq->ifq_head;
	while ((m = n) != NULL) {
		n = m->m_nextpkt;
		m_freem(m);
	}
	ifq->ifq_head = 0;
	ifq->ifq_tail = 0;
	ifq->ifq_len = 0;
	IFQ_UNLOCK(ifq);
}

/*
 * Map interface name to interface structure pointer, with or without
 * returning a reference.
 */
struct ifnet *
ifunit_ref(const char *name)
{
	struct ifnet *ifp;

	IFNET_RLOCK_NOSLEEP();
	TAILQ_FOREACH(ifp, &V_ifnet, if_link) {
		if (strncmp(name, ifp->if_xname, IFNAMSIZ) == 0 &&
		    !(ifp->if_flags & IFF_DYING))
			break;
	}
	if (ifp != NULL)
		if_ref(ifp);
	IFNET_RUNLOCK_NOSLEEP();
	return (ifp);
}

struct ifnet *
ifunit(const char *name)
{
	struct ifnet *ifp;

	IFNET_RLOCK_NOSLEEP();
	TAILQ_FOREACH(ifp, &V_ifnet, if_link) {
		if (strncmp(name, ifp->if_xname, IFNAMSIZ) == 0)
			break;
	}
	IFNET_RUNLOCK_NOSLEEP();
	return (ifp);
}

/*
 * Hardware specific interface ioctls.
 */
static int
ifhwioctl(u_long cmd, struct ifnet *ifp, caddr_t data, struct thread *td)
{
	struct ifreq *ifr;
	int error = 0, do_ifup = 0;
	int new_flags, temp_flags;
	size_t namelen, onamelen;
	size_t descrlen;
	char *descrbuf, *odescrbuf;
	char new_name[IFNAMSIZ];
	struct ifaddr *ifa;
	struct sockaddr_dl *sdl;

	ifr = (struct ifreq *)data;
	switch (cmd) {
	case SIOCGIFINDEX:
		ifr->ifr_index = ifp->if_index;
		break;

	case SIOCGIFFLAGS:
		temp_flags = ifp->if_flags | ifp->if_drv_flags;
		ifr->ifr_flags = temp_flags & 0xffff;
		ifr->ifr_flagshigh = temp_flags >> 16;
		break;

	case SIOCGIFCAP:
		ifr->ifr_reqcap = ifp->if_capabilities;
		ifr->ifr_curcap = ifp->if_capenable;
		break;

#ifdef MAC
	case SIOCGIFMAC:
		error = mac_ifnet_ioctl_get(td->td_ucred, ifr, ifp);
		break;
#endif

	case SIOCGIFMETRIC:
		ifr->ifr_metric = ifp->if_metric;
		break;

	case SIOCGIFMTU:
		ifr->ifr_mtu = ifp->if_mtu;
		break;

	case SIOCGIFPHYS:
		/* XXXGL: did this ever work?
*/ ifr->ifr_phys = 0; break; case SIOCGIFDESCR: error = 0; sx_slock(&ifdescr_sx); if (ifp->if_description == NULL) error = ENOMSG; else { /* space for terminating nul */ descrlen = strlen(ifp->if_description) + 1; if (ifr->ifr_buffer.length < descrlen) ifr->ifr_buffer.buffer = NULL; else error = copyout(ifp->if_description, ifr->ifr_buffer.buffer, descrlen); ifr->ifr_buffer.length = descrlen; } sx_sunlock(&ifdescr_sx); break; case SIOCSIFDESCR: error = priv_check(td, PRIV_NET_SETIFDESCR); if (error) return (error); /* * Copy only (length-1) bytes to make sure that * if_description is always nul terminated. The * length parameter is supposed to count the * terminating nul in. */ if (ifr->ifr_buffer.length > ifdescr_maxlen) return (ENAMETOOLONG); else if (ifr->ifr_buffer.length == 0) descrbuf = NULL; else { descrbuf = malloc(ifr->ifr_buffer.length, M_IFDESCR, M_WAITOK | M_ZERO); error = copyin(ifr->ifr_buffer.buffer, descrbuf, ifr->ifr_buffer.length - 1); if (error) { free(descrbuf, M_IFDESCR); break; } } sx_xlock(&ifdescr_sx); odescrbuf = ifp->if_description; ifp->if_description = descrbuf; sx_xunlock(&ifdescr_sx); getmicrotime(&ifp->if_lastchange); free(odescrbuf, M_IFDESCR); break; case SIOCGIFFIB: ifr->ifr_fib = ifp->if_fib; break; case SIOCSIFFIB: error = priv_check(td, PRIV_NET_SETIFFIB); if (error) return (error); if (ifr->ifr_fib >= rt_numfibs) return (EINVAL); ifp->if_fib = ifr->ifr_fib; break; case SIOCSIFFLAGS: error = priv_check(td, PRIV_NET_SETIFFLAGS); if (error) return (error); /* * Currently, no driver owned flags pass the IFF_CANTCHANGE * check, so we don't need special handling here yet. */ new_flags = (ifr->ifr_flags & 0xffff) | (ifr->ifr_flagshigh << 16); if (ifp->if_flags & IFF_UP && (new_flags & IFF_UP) == 0) { if_down(ifp); } else if (new_flags & IFF_UP && (ifp->if_flags & IFF_UP) == 0) { do_ifup = 1; } /* See if permanently promiscuous mode bit is about to flip */ if ((ifp->if_flags ^ new_flags) & IFF_PPROMISC) { if (new_flags & IFF_PPROMISC) ifp->if_flags |= IFF_PROMISC; else if (ifp->if_pcount == 0) ifp->if_flags &= ~IFF_PROMISC; if (log_promisc_mode_change) log(LOG_INFO, "%s: permanently promiscuous mode %s\n", ifp->if_xname, ((new_flags & IFF_PPROMISC) ? "enabled" : "disabled")); } ifp->if_flags = (ifp->if_flags & IFF_CANTCHANGE) | (new_flags &~ IFF_CANTCHANGE); if (ifp->if_ioctl) { (void) (*ifp->if_ioctl)(ifp, cmd, data); } if (do_ifup) if_up(ifp); getmicrotime(&ifp->if_lastchange); break; case SIOCSIFCAP: error = priv_check(td, PRIV_NET_SETIFCAP); if (error) return (error); if (ifp->if_ioctl == NULL) return (EOPNOTSUPP); if (ifr->ifr_reqcap & ~ifp->if_capabilities) return (EINVAL); error = (*ifp->if_ioctl)(ifp, cmd, data); if (error == 0) getmicrotime(&ifp->if_lastchange); break; #ifdef MAC case SIOCSIFMAC: error = mac_ifnet_ioctl_set(td->td_ucred, ifr, ifp); break; #endif case SIOCSIFNAME: error = priv_check(td, PRIV_NET_SETIFNAME); if (error) return (error); error = copyinstr(ifr->ifr_data, new_name, IFNAMSIZ, NULL); if (error != 0) return (error); if (new_name[0] == '\0') return (EINVAL); if (new_name[IFNAMSIZ-1] != '\0') { new_name[IFNAMSIZ-1] = '\0'; if (strlen(new_name) == IFNAMSIZ-1) return (EINVAL); } if (ifunit(new_name) != NULL) return (EEXIST); /* * XXX: Locking. Nothing else seems to lock if_flags, * and there are numerous other races with the * ifunit() checks not being atomic with namespace * changes (renames, vmoves, if_attach, etc). */ ifp->if_flags |= IFF_RENAMING; /* Announce the departure of the interface. 
*/ rt_ifannouncemsg(ifp, IFAN_DEPARTURE); EVENTHANDLER_INVOKE(ifnet_departure_event, ifp); log(LOG_INFO, "%s: changing name to '%s'\n", ifp->if_xname, new_name); IF_ADDR_WLOCK(ifp); strlcpy(ifp->if_xname, new_name, sizeof(ifp->if_xname)); ifa = ifp->if_addr; sdl = (struct sockaddr_dl *)ifa->ifa_addr; namelen = strlen(new_name); onamelen = sdl->sdl_nlen; /* * Move the address if needed. This is safe because we * allocate space for a name of length IFNAMSIZ when we * create this in if_attach(). */ if (namelen != onamelen) { bcopy(sdl->sdl_data + onamelen, sdl->sdl_data + namelen, sdl->sdl_alen); } bcopy(new_name, sdl->sdl_data, namelen); sdl->sdl_nlen = namelen; sdl = (struct sockaddr_dl *)ifa->ifa_netmask; bzero(sdl->sdl_data, onamelen); while (namelen != 0) sdl->sdl_data[--namelen] = 0xff; IF_ADDR_WUNLOCK(ifp); EVENTHANDLER_INVOKE(ifnet_arrival_event, ifp); /* Announce the return of the interface. */ rt_ifannouncemsg(ifp, IFAN_ARRIVAL); ifp->if_flags &= ~IFF_RENAMING; break; #ifdef VIMAGE case SIOCSIFVNET: error = priv_check(td, PRIV_NET_SETIFVNET); if (error) return (error); error = if_vmove_loan(td, ifp, ifr->ifr_name, ifr->ifr_jid); break; #endif case SIOCSIFMETRIC: error = priv_check(td, PRIV_NET_SETIFMETRIC); if (error) return (error); ifp->if_metric = ifr->ifr_metric; getmicrotime(&ifp->if_lastchange); break; case SIOCSIFPHYS: error = priv_check(td, PRIV_NET_SETIFPHYS); if (error) return (error); if (ifp->if_ioctl == NULL) return (EOPNOTSUPP); error = (*ifp->if_ioctl)(ifp, cmd, data); if (error == 0) getmicrotime(&ifp->if_lastchange); break; case SIOCSIFMTU: { u_long oldmtu = ifp->if_mtu; error = priv_check(td, PRIV_NET_SETIFMTU); if (error) return (error); if (ifr->ifr_mtu < IF_MINMTU || ifr->ifr_mtu > IF_MAXMTU) return (EINVAL); if (ifp->if_ioctl == NULL) return (EOPNOTSUPP); error = (*ifp->if_ioctl)(ifp, cmd, data); if (error == 0) { getmicrotime(&ifp->if_lastchange); rt_ifmsg(ifp); } /* * If the link MTU changed, do network layer specific procedure. */ if (ifp->if_mtu != oldmtu) { #ifdef INET6 nd6_setmtu(ifp); #endif rt_updatemtu(ifp); } break; } case SIOCADDMULTI: case SIOCDELMULTI: if (cmd == SIOCADDMULTI) error = priv_check(td, PRIV_NET_ADDMULTI); else error = priv_check(td, PRIV_NET_DELMULTI); if (error) return (error); /* Don't allow group membership on non-multicast interfaces. */ if ((ifp->if_flags & IFF_MULTICAST) == 0) return (EOPNOTSUPP); /* Don't let users screw up protocols' entries. */ if (ifr->ifr_addr.sa_family != AF_LINK) return (EINVAL); if (cmd == SIOCADDMULTI) { struct ifmultiaddr *ifma; /* * Userland is only permitted to join groups once * via the if_addmulti() KPI, because it cannot hold * struct ifmultiaddr * between calls. It may also * lose a race while we check if the membership * already exists. 
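			 *
			 * E.g. (illustrative) userland joining a link-layer
			 * multicast group:
			 *
			 *	ifr.ifr_addr.sa_family = AF_LINK;
			 *	(fill in the sdl with the group address)
			 *	ioctl(s, SIOCADDMULTI, &ifr);
			 *	(EADDRINUSE on a repeated join)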
*/ IF_ADDR_RLOCK(ifp); ifma = if_findmulti(ifp, &ifr->ifr_addr); IF_ADDR_RUNLOCK(ifp); if (ifma != NULL) error = EADDRINUSE; else error = if_addmulti(ifp, &ifr->ifr_addr, &ifma); } else { error = if_delmulti(ifp, &ifr->ifr_addr); } if (error == 0) getmicrotime(&ifp->if_lastchange); break; case SIOCSIFPHYADDR: case SIOCDIFPHYADDR: #ifdef INET6 case SIOCSIFPHYADDR_IN6: #endif case SIOCSIFMEDIA: case SIOCSIFGENERIC: error = priv_check(td, PRIV_NET_HWIOCTL); if (error) return (error); if (ifp->if_ioctl == NULL) return (EOPNOTSUPP); error = (*ifp->if_ioctl)(ifp, cmd, data); if (error == 0) getmicrotime(&ifp->if_lastchange); break; case SIOCGIFSTATUS: case SIOCGIFPSRCADDR: case SIOCGIFPDSTADDR: case SIOCGIFMEDIA: case SIOCGIFXMEDIA: case SIOCGIFGENERIC: case SIOCGIFRSSKEY: case SIOCGIFRSSHASH: if (ifp->if_ioctl == NULL) return (EOPNOTSUPP); error = (*ifp->if_ioctl)(ifp, cmd, data); break; case SIOCSIFLLADDR: error = priv_check(td, PRIV_NET_SETLLADDR); if (error) return (error); error = if_setlladdr(ifp, ifr->ifr_addr.sa_data, ifr->ifr_addr.sa_len); break; case SIOCGHWADDR: error = if_gethwaddr(ifp, ifr); break; case SIOCAIFGROUP: { struct ifgroupreq *ifgr = (struct ifgroupreq *)ifr; error = priv_check(td, PRIV_NET_ADDIFGROUP); if (error) return (error); if ((error = if_addgroup(ifp, ifgr->ifgr_group))) return (error); break; } case SIOCGIFGROUP: if ((error = if_getgroup((struct ifgroupreq *)ifr, ifp))) return (error); break; case SIOCDIFGROUP: { struct ifgroupreq *ifgr = (struct ifgroupreq *)ifr; error = priv_check(td, PRIV_NET_DELIFGROUP); if (error) return (error); if ((error = if_delgroup(ifp, ifgr->ifgr_group))) return (error); break; } default: error = ENOIOCTL; break; } return (error); } #ifdef COMPAT_FREEBSD32 struct ifconf32 { int32_t ifc_len; union { uint32_t ifcu_buf; uint32_t ifcu_req; } ifc_ifcu; }; #define SIOCGIFCONF32 _IOWR('i', 36, struct ifconf32) #endif /* * Interface ioctls. */ int ifioctl(struct socket *so, u_long cmd, caddr_t data, struct thread *td) { struct ifnet *ifp; struct ifreq *ifr; int error; int oif_flags; #ifdef VIMAGE int shutdown; #endif CURVNET_SET(so->so_vnet); #ifdef VIMAGE /* Make sure the VNET is stable. */ shutdown = (so->so_vnet->vnet_state > SI_SUB_VNET && so->so_vnet->vnet_state < SI_SUB_VNET_DONE) ? 1 : 0; if (shutdown) { CURVNET_RESTORE(); return (EBUSY); } #endif switch (cmd) { case SIOCGIFCONF: error = ifconf(cmd, data); CURVNET_RESTORE(); return (error); #ifdef COMPAT_FREEBSD32 case SIOCGIFCONF32: { struct ifconf32 *ifc32; struct ifconf ifc; ifc32 = (struct ifconf32 *)data; ifc.ifc_len = ifc32->ifc_len; ifc.ifc_buf = PTRIN(ifc32->ifc_buf); error = ifconf(SIOCGIFCONF, (void *)&ifc); CURVNET_RESTORE(); if (error == 0) ifc32->ifc_len = ifc.ifc_len; return (error); } #endif } ifr = (struct ifreq *)data; switch (cmd) { #ifdef VIMAGE case SIOCSIFRVNET: error = priv_check(td, PRIV_NET_SETIFVNET); if (error == 0) error = if_vmove_reclaim(td, ifr->ifr_name, ifr->ifr_jid); CURVNET_RESTORE(); return (error); #endif case SIOCIFCREATE: case SIOCIFCREATE2: error = priv_check(td, PRIV_NET_IFCREATE); if (error == 0) error = if_clone_create(ifr->ifr_name, sizeof(ifr->ifr_name), cmd == SIOCIFCREATE2 ? 
			    ifr->ifr_data : NULL);
		CURVNET_RESTORE();
		return (error);
	case SIOCIFDESTROY:
		error = priv_check(td, PRIV_NET_IFDESTROY);
		if (error == 0)
			error = if_clone_destroy(ifr->ifr_name);
		CURVNET_RESTORE();
		return (error);

	case SIOCIFGCLONERS:
		error = if_clone_list((struct if_clonereq *)data);
		CURVNET_RESTORE();
		return (error);

	case SIOCGIFGMEMB:
		error = if_getgroupmembers((struct ifgroupreq *)data);
		CURVNET_RESTORE();
		return (error);

#if defined(INET) || defined(INET6)
	case SIOCSVH:
	case SIOCGVH:
		if (carp_ioctl_p == NULL)
			error = EPROTONOSUPPORT;
		else
			error = (*carp_ioctl_p)(ifr, cmd, td);
		CURVNET_RESTORE();
		return (error);
#endif
	}

	ifp = ifunit_ref(ifr->ifr_name);
	if (ifp == NULL) {
		CURVNET_RESTORE();
		return (ENXIO);
	}

	error = ifhwioctl(cmd, ifp, data, td);
	if (error != ENOIOCTL) {
		if_rele(ifp);
		CURVNET_RESTORE();
		return (error);
	}

	oif_flags = ifp->if_flags;
	if (so->so_proto == NULL) {
		if_rele(ifp);
		CURVNET_RESTORE();
		return (EOPNOTSUPP);
	}

	/*
	 * Pass the request on to the socket control method, and if the
	 * latter returns EOPNOTSUPP, directly to the interface.
	 *
	 * Make an exception for the legacy SIOCSIF* requests.  Drivers
	 * trust SIOCSIFADDR et al to come from an already privileged
	 * layer, and do not perform any credentials checks or input
	 * validation.
	 */
	error = ((*so->so_proto->pr_usrreqs->pru_control)(so, cmd, data,
	    ifp, td));
	if (error == EOPNOTSUPP && ifp != NULL && ifp->if_ioctl != NULL &&
	    cmd != SIOCSIFADDR && cmd != SIOCSIFBRDADDR &&
	    cmd != SIOCSIFDSTADDR && cmd != SIOCSIFNETMASK)
		error = (*ifp->if_ioctl)(ifp, cmd, data);

	if ((oif_flags ^ ifp->if_flags) & IFF_UP) {
#ifdef INET6
		if (ifp->if_flags & IFF_UP)
			in6_if_up(ifp);
#endif
	}
	if_rele(ifp);
	CURVNET_RESTORE();
	return (error);
}

/*
 * The code common to handling reference counted flags,
 * e.g., in ifpromisc() and if_allmulti().
 * The "pflag" argument can specify a permanent mode flag to check,
 * such as IFF_PPROMISC for promiscuous mode; should be 0 if none.
 *
 * Only to be used on stack-owned flags, not driver-owned flags.
 */
static int
if_setflag(struct ifnet *ifp, int flag, int pflag, int *refcount, int onswitch)
{
	struct ifreq ifr;
	int error;
	int oldflags, oldcount;

	/* Sanity checks to catch programming errors */
	KASSERT((flag & (IFF_DRV_OACTIVE|IFF_DRV_RUNNING)) == 0,
	    ("%s: setting driver-owned flag %d", __func__, flag));
	if (onswitch)
		KASSERT(*refcount >= 0,
		    ("%s: increment negative refcount %d for flag %d",
		    __func__, *refcount, flag));
	else
		KASSERT(*refcount > 0,
		    ("%s: decrement non-positive refcount %d for flag %d",
		    __func__, *refcount, flag));

	/* In case this mode is permanent, just touch the refcount */
	if (ifp->if_flags & pflag) {
		*refcount += onswitch ? 1 : -1;
		return (0);
	}

	/* Save ifnet parameters, as the if_ioctl() below may fail */
	oldcount = *refcount;
	oldflags = ifp->if_flags;

	/*
	 * See if we are not the only consumer, in which case touching
	 * the refcount is enough.  Actually toggle the interface flag
	 * only if we are the first or the last consumer.
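	 *
	 * E.g. (illustrative) two bpf(4) consumers enabling promiscuous
	 * mode: the first ifpromisc(ifp, 1) flips IFF_PROMISC and calls
	 * the driver; the second merely increments if_pcount.  Only the
	 * matching final ifpromisc(ifp, 0) clears the flag again.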
*/ if (onswitch) { if ((*refcount)++) return (0); ifp->if_flags |= flag; } else { if (--(*refcount)) return (0); ifp->if_flags &= ~flag; } /* Call down the driver since we've changed interface flags */ if (ifp->if_ioctl == NULL) { error = EOPNOTSUPP; goto recover; } ifr.ifr_flags = ifp->if_flags & 0xffff; ifr.ifr_flagshigh = ifp->if_flags >> 16; error = (*ifp->if_ioctl)(ifp, SIOCSIFFLAGS, (caddr_t)&ifr); if (error) goto recover; /* Notify userland that interface flags have changed */ rt_ifmsg(ifp); return (0); recover: /* Recover after driver error */ *refcount = oldcount; ifp->if_flags = oldflags; return (error); } /* * Set/clear promiscuous mode on interface ifp based on the truth value * of pswitch. The calls are reference counted so that only the first * "on" request actually has an effect, as does the final "off" request. * Results are undefined if the "off" and "on" requests are not matched. */ int ifpromisc(struct ifnet *ifp, int pswitch) { int error; int oldflags = ifp->if_flags; error = if_setflag(ifp, IFF_PROMISC, IFF_PPROMISC, &ifp->if_pcount, pswitch); /* If promiscuous mode status has changed, log a message */ if (error == 0 && ((ifp->if_flags ^ oldflags) & IFF_PROMISC) && log_promisc_mode_change) log(LOG_INFO, "%s: promiscuous mode %s\n", ifp->if_xname, (ifp->if_flags & IFF_PROMISC) ? "enabled" : "disabled"); return (error); } /* * Return interface configuration * of system. List may be used * in later ioctl's (above) to get * other information. */ /*ARGSUSED*/ static int ifconf(u_long cmd, caddr_t data) { struct ifconf *ifc = (struct ifconf *)data; struct ifnet *ifp; struct ifaddr *ifa; struct ifreq ifr; struct sbuf *sb; int error, full = 0, valid_len, max_len; /* Limit initial buffer size to MAXPHYS to avoid DoS from userspace. */ max_len = MAXPHYS - 1; /* Prevent hostile input from being able to crash the system */ if (ifc->ifc_len <= 0) return (EINVAL); again: if (ifc->ifc_len <= max_len) { max_len = ifc->ifc_len; full = 1; } sb = sbuf_new(NULL, NULL, max_len + 1, SBUF_FIXEDLEN); max_len = 0; valid_len = 0; IFNET_RLOCK(); TAILQ_FOREACH(ifp, &V_ifnet, if_link) { int addrs; /* * Zero the ifr_name buffer to make sure we don't * disclose the contents of the stack. */ memset(ifr.ifr_name, 0, sizeof(ifr.ifr_name)); if (strlcpy(ifr.ifr_name, ifp->if_xname, sizeof(ifr.ifr_name)) >= sizeof(ifr.ifr_name)) { sbuf_delete(sb); IFNET_RUNLOCK(); return (ENAMETOOLONG); } addrs = 0; IF_ADDR_RLOCK(ifp); TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) { struct sockaddr *sa = ifa->ifa_addr; if (prison_if(curthread->td_ucred, sa) != 0) continue; addrs++; if (sa->sa_len <= sizeof(*sa)) { ifr.ifr_addr = *sa; sbuf_bcat(sb, &ifr, sizeof(ifr)); max_len += sizeof(ifr); } else { sbuf_bcat(sb, &ifr, offsetof(struct ifreq, ifr_addr)); max_len += offsetof(struct ifreq, ifr_addr); sbuf_bcat(sb, sa, sa->sa_len); max_len += sa->sa_len; } if (sbuf_error(sb) == 0) valid_len = sbuf_len(sb); } IF_ADDR_RUNLOCK(ifp); if (addrs == 0) { bzero((caddr_t)&ifr.ifr_addr, sizeof(ifr.ifr_addr)); sbuf_bcat(sb, &ifr, sizeof(ifr)); max_len += sizeof(ifr); if (sbuf_error(sb) == 0) valid_len = sbuf_len(sb); } } IFNET_RUNLOCK(); /* * If we didn't allocate enough space (uncommon), try again. If * we have already allocated as much space as we are allowed, * return what we've got. 
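	 *
	 * Userland commonly sizes its buffer by retrying, e.g.
	 * (illustrative):
	 *
	 *	for (len = 4096; ; len *= 2) {
	 *		buf = realloc(buf, len);
	 *		ifc.ifc_len = len;
	 *		ifc.ifc_buf = buf;
	 *		ioctl(s, SIOCGIFCONF, &ifc);
	 *		if (ifc.ifc_len + sizeof(struct ifreq) < len)
	 *			break;		(everything fit)
	 *	}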
*/ if (valid_len != max_len && !full) { sbuf_delete(sb); goto again; } ifc->ifc_len = valid_len; sbuf_finish(sb); error = copyout(sbuf_data(sb), ifc->ifc_req, ifc->ifc_len); sbuf_delete(sb); return (error); } /* * Just like ifpromisc(), but for all-multicast-reception mode. */ int if_allmulti(struct ifnet *ifp, int onswitch) { return (if_setflag(ifp, IFF_ALLMULTI, 0, &ifp->if_amcount, onswitch)); } struct ifmultiaddr * if_findmulti(struct ifnet *ifp, const struct sockaddr *sa) { struct ifmultiaddr *ifma; IF_ADDR_LOCK_ASSERT(ifp); TAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) { if (sa->sa_family == AF_LINK) { if (sa_dl_equal(ifma->ifma_addr, sa)) break; } else { if (sa_equal(ifma->ifma_addr, sa)) break; } } return ifma; } /* * Allocate a new ifmultiaddr and initialize based on passed arguments. We * make copies of passed sockaddrs. The ifmultiaddr will not be added to * the ifnet multicast address list here, so the caller must do that and * other setup work (such as notifying the device driver). The reference * count is initialized to 1. */ static struct ifmultiaddr * if_allocmulti(struct ifnet *ifp, struct sockaddr *sa, struct sockaddr *llsa, int mflags) { struct ifmultiaddr *ifma; struct sockaddr *dupsa; ifma = malloc(sizeof *ifma, M_IFMADDR, mflags | M_ZERO); if (ifma == NULL) return (NULL); dupsa = malloc(sa->sa_len, M_IFMADDR, mflags); if (dupsa == NULL) { free(ifma, M_IFMADDR); return (NULL); } bcopy(sa, dupsa, sa->sa_len); ifma->ifma_addr = dupsa; ifma->ifma_ifp = ifp; ifma->ifma_refcount = 1; ifma->ifma_protospec = NULL; if (llsa == NULL) { ifma->ifma_lladdr = NULL; return (ifma); } dupsa = malloc(llsa->sa_len, M_IFMADDR, mflags); if (dupsa == NULL) { free(ifma->ifma_addr, M_IFMADDR); free(ifma, M_IFMADDR); return (NULL); } bcopy(llsa, dupsa, llsa->sa_len); ifma->ifma_lladdr = dupsa; return (ifma); } /* * if_freemulti: free ifmultiaddr structure and possibly attached related * addresses. The caller is responsible for implementing reference * counting, notifying the driver, handling routing messages, and releasing * any dependent link layer state. */ static void if_freemulti(struct ifmultiaddr *ifma) { KASSERT(ifma->ifma_refcount == 0, ("if_freemulti: refcount %d", ifma->ifma_refcount)); if (ifma->ifma_lladdr != NULL) free(ifma->ifma_lladdr, M_IFMADDR); free(ifma->ifma_addr, M_IFMADDR); free(ifma, M_IFMADDR); } /* * Register an additional multicast address with a network interface. * * - If the address is already present, bump the reference count on the * address and return. * - If the address is not link-layer, look up a link layer address. * - Allocate address structures for one or both addresses, and attach to the * multicast address list on the interface. If automatically adding a link * layer address, the protocol address will own a reference to the link * layer address, to be freed when it is freed. * - Notify the network device driver of an addition to the multicast address * list. * * 'sa' points to caller-owned memory with the desired multicast address. * * 'retifma' will be used to return a pointer to the resulting multicast * address reference, if desired. */ int if_addmulti(struct ifnet *ifp, struct sockaddr *sa, struct ifmultiaddr **retifma) { struct ifmultiaddr *ifma, *ll_ifma; struct sockaddr *llsa; struct sockaddr_dl sdl; int error; /* * If the address is already present, return a new reference to it; * otherwise, allocate storage and set up a new address. 
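	 *
	 * E.g. (illustrative) a network-layer protocol joining a group
	 * and keeping the returned reference in its own state:
	 *
	 *	error = if_addmulti(ifp, (struct sockaddr *)&gsa, &ifma);
	 *	...
	 *	if_delmulti_ifma(ifma);		(when leaving the group)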
*/
	IF_ADDR_WLOCK(ifp);
	ifma = if_findmulti(ifp, sa);
	if (ifma != NULL) {
		ifma->ifma_refcount++;
		if (retifma != NULL)
			*retifma = ifma;
		IF_ADDR_WUNLOCK(ifp);
		return (0);
	}

	/*
	 * The address isn't already present; resolve the protocol address
	 * into a link layer address, and then look that up, bump its
	 * refcount or allocate an ifma for that also.
	 * Most link layer resolving functions return address data which
	 * fits inside a default sockaddr_dl structure.  However, the
	 * callback can allocate another sockaddr structure, in which case
	 * we need to free it later.
	 */
	llsa = NULL;
	ll_ifma = NULL;
	if (ifp->if_resolvemulti != NULL) {
		/* Provide called function with buffer size information */
		sdl.sdl_len = sizeof(sdl);
		llsa = (struct sockaddr *)&sdl;
		error = ifp->if_resolvemulti(ifp, &llsa, sa);
		if (error)
			goto unlock_out;
	}

	/*
	 * Allocate the new address.  Don't hook it up yet, as we may also
	 * need to allocate a link layer multicast address.
	 */
	ifma = if_allocmulti(ifp, sa, llsa, M_NOWAIT);
	if (ifma == NULL) {
		error = ENOMEM;
		goto free_llsa_out;
	}

	/*
	 * If a link layer address is found, we'll need to see if it's
	 * already present in the address list, or allocate it as well.
	 * When this block finishes, the link layer address will be on the
	 * list.
	 */
	if (llsa != NULL) {
		ll_ifma = if_findmulti(ifp, llsa);
		if (ll_ifma == NULL) {
			ll_ifma = if_allocmulti(ifp, llsa, NULL, M_NOWAIT);
			if (ll_ifma == NULL) {
				--ifma->ifma_refcount;
				if_freemulti(ifma);
				error = ENOMEM;
				goto free_llsa_out;
			}
			TAILQ_INSERT_HEAD(&ifp->if_multiaddrs, ll_ifma,
			    ifma_link);
		} else
			ll_ifma->ifma_refcount++;
		ifma->ifma_llifma = ll_ifma;
	}

	/*
	 * We now have a new multicast address, ifma, and possibly a new or
	 * referenced link layer address.  Add the primary address to the
	 * ifnet address list.
	 */
	TAILQ_INSERT_HEAD(&ifp->if_multiaddrs, ifma, ifma_link);

	if (retifma != NULL)
		*retifma = ifma;

	/*
	 * Must generate the message while holding the lock so that 'ifma'
	 * pointer is still valid.
	 */
	rt_newmaddrmsg(RTM_NEWMADDR, ifma);
	IF_ADDR_WUNLOCK(ifp);

	/*
	 * We are certain we have added something, so call down to the
	 * interface to let them know about it.
	 */
	if (ifp->if_ioctl != NULL) {
		(void) (*ifp->if_ioctl)(ifp, SIOCADDMULTI, 0);
	}

	if ((llsa != NULL) && (llsa != (struct sockaddr *)&sdl))
		link_free_sdl(llsa);

	return (0);

free_llsa_out:
	if ((llsa != NULL) && (llsa != (struct sockaddr *)&sdl))
		link_free_sdl(llsa);

unlock_out:
	IF_ADDR_WUNLOCK(ifp);
	return (error);
}

/*
 * Delete a multicast group membership by network-layer group address.
 *
 * Returns ENOENT if the entry could not be found.  If ifp no longer
 * exists, results are undefined.  This entry point should only be used
 * from subsystems which do appropriate locking to hold ifp for the
 * duration of the call.
 * Network-layer protocol domains must use if_delmulti_ifma().
 */
int
if_delmulti(struct ifnet *ifp, struct sockaddr *sa)
{
	struct ifmultiaddr *ifma;
	int lastref;
#ifdef INVARIANTS
	struct ifnet *oifp;

	IFNET_RLOCK_NOSLEEP();
	TAILQ_FOREACH(oifp, &V_ifnet, if_link)
		if (ifp == oifp)
			break;
	if (ifp != oifp)
		ifp = NULL;
	IFNET_RUNLOCK_NOSLEEP();

	KASSERT(ifp != NULL, ("%s: ifnet went away", __func__));
#endif
	if (ifp == NULL)
		return (ENOENT);

	IF_ADDR_WLOCK(ifp);
	lastref = 0;
	ifma = if_findmulti(ifp, sa);
	if (ifma != NULL)
		lastref = if_delmulti_locked(ifp, ifma, 0);
	IF_ADDR_WUNLOCK(ifp);

	if (ifma == NULL)
		return (ENOENT);

	if (lastref && ifp->if_ioctl != NULL) {
		(void)(*ifp->if_ioctl)(ifp, SIOCDELMULTI, 0);
	}

	return (0);
}

/*
 * Delete all multicast group membership for an interface.
* Should be used to quickly flush all multicast filters. */ void if_delallmulti(struct ifnet *ifp) { struct ifmultiaddr *ifma; struct ifmultiaddr *next; IF_ADDR_WLOCK(ifp); TAILQ_FOREACH_SAFE(ifma, &ifp->if_multiaddrs, ifma_link, next) if_delmulti_locked(ifp, ifma, 0); IF_ADDR_WUNLOCK(ifp); } /* * Delete a multicast group membership by group membership pointer. * Network-layer protocol domains must use this routine. * * It is safe to call this routine if the ifp disappeared. */ void if_delmulti_ifma(struct ifmultiaddr *ifma) { struct ifnet *ifp; int lastref; ifp = ifma->ifma_ifp; #ifdef DIAGNOSTIC if (ifp == NULL) { printf("%s: ifma_ifp seems to be detached\n", __func__); } else { struct ifnet *oifp; IFNET_RLOCK_NOSLEEP(); TAILQ_FOREACH(oifp, &V_ifnet, if_link) if (ifp == oifp) break; if (ifp != oifp) { printf("%s: ifnet %p disappeared\n", __func__, ifp); ifp = NULL; } IFNET_RUNLOCK_NOSLEEP(); } #endif /* * If and only if the ifnet instance exists: Acquire the address lock. */ if (ifp != NULL) IF_ADDR_WLOCK(ifp); lastref = if_delmulti_locked(ifp, ifma, 0); if (ifp != NULL) { /* * If and only if the ifnet instance exists: * Release the address lock. * If the group was left: update the hardware hash filter. */ IF_ADDR_WUNLOCK(ifp); if (lastref && ifp->if_ioctl != NULL) { (void)(*ifp->if_ioctl)(ifp, SIOCDELMULTI, 0); } } } /* * Perform deletion of network-layer and/or link-layer multicast address. * * Return 0 if the reference count was decremented. * Return 1 if the final reference was released, indicating that the * hardware hash filter should be reprogrammed. */ static int if_delmulti_locked(struct ifnet *ifp, struct ifmultiaddr *ifma, int detaching) { struct ifmultiaddr *ll_ifma; if (ifp != NULL && ifma->ifma_ifp != NULL) { KASSERT(ifma->ifma_ifp == ifp, ("%s: inconsistent ifp %p", __func__, ifp)); IF_ADDR_WLOCK_ASSERT(ifp); } ifp = ifma->ifma_ifp; /* * If the ifnet is detaching, null out references to ifnet, * so that upper protocol layers will notice, and not attempt * to obtain locks for an ifnet which no longer exists. The * routing socket announcement must happen before the ifnet * instance is detached from the system. */ if (detaching) { #ifdef DIAGNOSTIC printf("%s: detaching ifnet instance %p\n", __func__, ifp); #endif /* * ifp may already be nulled out if we are being reentered * to delete the ll_ifma. */ if (ifp != NULL) { rt_newmaddrmsg(RTM_DELMADDR, ifma); ifma->ifma_ifp = NULL; } } if (--ifma->ifma_refcount > 0) return 0; /* * If this ifma is a network-layer ifma, a link-layer ifma may * have been associated with it. Release it first if so. */ ll_ifma = ifma->ifma_llifma; if (ll_ifma != NULL) { KASSERT(ifma->ifma_lladdr != NULL, ("%s: llifma w/o lladdr", __func__)); if (detaching) ll_ifma->ifma_ifp = NULL; /* XXX */ if (--ll_ifma->ifma_refcount == 0) { if (ifp != NULL) { TAILQ_REMOVE(&ifp->if_multiaddrs, ll_ifma, ifma_link); } if_freemulti(ll_ifma); } } if (ifp != NULL) TAILQ_REMOVE(&ifp->if_multiaddrs, ifma, ifma_link); if_freemulti(ifma); /* * The last reference to this instance of struct ifmultiaddr * was released; the hardware should be notified of this change. */ return 1; } /* * Set the link layer address on an interface. * * At this time we only support certain types of interfaces, * and we don't allow the length of the address to change. 
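*
* A hypothetical consumer (say, a lagg-style driver cloning the
* address of another port, srcifp -- both names illustrative) would
* call it as simply as:
*
*	error = if_setlladdr(ifp, IF_LLADDR(srcifp), srcifp->if_addrlen);
*
* and rely on the SIOCSIFFLAGS down/up sequence below to make the
* hardware reload its receive filters.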
* * Set noinline to be dtrace-friendly */ __noinline int if_setlladdr(struct ifnet *ifp, const u_char *lladdr, int len) { struct sockaddr_dl *sdl; struct ifaddr *ifa; struct ifreq ifr; IF_ADDR_RLOCK(ifp); ifa = ifp->if_addr; if (ifa == NULL) { IF_ADDR_RUNLOCK(ifp); return (EINVAL); } ifa_ref(ifa); IF_ADDR_RUNLOCK(ifp); sdl = (struct sockaddr_dl *)ifa->ifa_addr; if (sdl == NULL) { ifa_free(ifa); return (EINVAL); } if (len != sdl->sdl_alen) { /* don't allow length to change */ ifa_free(ifa); return (EINVAL); } switch (ifp->if_type) { case IFT_ETHER: case IFT_FDDI: case IFT_XETHER: case IFT_ISO88025: case IFT_L2VLAN: case IFT_BRIDGE: case IFT_ARCNET: case IFT_IEEE8023ADLAG: bcopy(lladdr, LLADDR(sdl), len); ifa_free(ifa); break; default: ifa_free(ifa); return (ENODEV); } /* * If the interface is already up, we need * to re-init it in order to reprogram its * address filter. */ if ((ifp->if_flags & IFF_UP) != 0) { if (ifp->if_ioctl) { ifp->if_flags &= ~IFF_UP; ifr.ifr_flags = ifp->if_flags & 0xffff; ifr.ifr_flagshigh = ifp->if_flags >> 16; (*ifp->if_ioctl)(ifp, SIOCSIFFLAGS, (caddr_t)&ifr); ifp->if_flags |= IFF_UP; ifr.ifr_flags = ifp->if_flags & 0xffff; ifr.ifr_flagshigh = ifp->if_flags >> 16; (*ifp->if_ioctl)(ifp, SIOCSIFFLAGS, (caddr_t)&ifr); } } EVENTHANDLER_INVOKE(iflladdr_event, ifp); return (0); } /* * Compat function for handling basic encapsulation requests. * Not converted stacks (FDDI, IB, ..) supports traditional * output model: ARP (and other similar L2 protocols) are handled * inside output routine, arpresolve/nd6_resolve() returns MAC * address instead of full prepend. * * This function creates calculated header==MAC for IPv4/IPv6 and * returns EAFNOSUPPORT (which is then handled in ARP code) for other * address families. */ static int if_requestencap_default(struct ifnet *ifp, struct if_encap_req *req) { if (req->rtype != IFENCAP_LL) return (EOPNOTSUPP); if (req->bufsize < req->lladdr_len) return (ENOMEM); switch (req->family) { case AF_INET: case AF_INET6: break; default: return (EAFNOSUPPORT); } /* Copy lladdr to storage as is */ memmove(req->buf, req->lladdr, req->lladdr_len); req->bufsize = req->lladdr_len; req->lladdr_off = 0; return (0); } /* * Get the link layer address that was read from the hardware at attach. * * This is only set by Ethernet NICs (IFT_ETHER), but laggX interfaces re-type * their component interfaces as IFT_IEEE8023ADLAG. */ int if_gethwaddr(struct ifnet *ifp, struct ifreq *ifr) { if (ifp->if_hw_addr == NULL) return (ENODEV); switch (ifp->if_type) { case IFT_ETHER: case IFT_IEEE8023ADLAG: bcopy(ifp->if_hw_addr, ifr->ifr_addr.sa_data, ifp->if_addrlen); return (0); default: return (ENODEV); } } /* * The name argument must be a pointer to storage which will last as * long as the interface does. For physical devices, the result of * device_get_name(dev) is a good choice and for pseudo-devices a * static string works well. */ void if_initname(struct ifnet *ifp, const char *name, int unit) { ifp->if_dname = name; ifp->if_dunit = unit; if (unit != IF_DUNIT_NONE) snprintf(ifp->if_xname, IFNAMSIZ, "%s%d", name, unit); else strlcpy(ifp->if_xname, name, IFNAMSIZ); } int if_printf(struct ifnet *ifp, const char * fmt, ...) 
{ va_list ap; int retval; retval = printf("%s: ", ifp->if_xname); va_start(ap, fmt); retval += vprintf(fmt, ap); va_end(ap); return (retval); } void if_start(struct ifnet *ifp) { (*(ifp)->if_start)(ifp); } /* * Backwards compatibility interface for drivers * that have not implemented it */ static int if_transmit(struct ifnet *ifp, struct mbuf *m) { int error; IFQ_HANDOFF(ifp, m, error); return (error); } static void if_input_default(struct ifnet *ifp __unused, struct mbuf *m) { m_freem(m); } int if_handoff(struct ifqueue *ifq, struct mbuf *m, struct ifnet *ifp, int adjust) { int active = 0; IF_LOCK(ifq); if (_IF_QFULL(ifq)) { IF_UNLOCK(ifq); if_inc_counter(ifp, IFCOUNTER_OQDROPS, 1); m_freem(m); return (0); } if (ifp != NULL) { if_inc_counter(ifp, IFCOUNTER_OBYTES, m->m_pkthdr.len + adjust); if (m->m_flags & (M_BCAST|M_MCAST)) if_inc_counter(ifp, IFCOUNTER_OMCASTS, 1); active = ifp->if_drv_flags & IFF_DRV_OACTIVE; } _IF_ENQUEUE(ifq, m); IF_UNLOCK(ifq); if (ifp != NULL && !active) (*(ifp)->if_start)(ifp); return (1); } void if_register_com_alloc(u_char type, if_com_alloc_t *a, if_com_free_t *f) { KASSERT(if_com_alloc[type] == NULL, ("if_register_com_alloc: %d already registered", type)); KASSERT(if_com_free[type] == NULL, ("if_register_com_alloc: %d free already registered", type)); if_com_alloc[type] = a; if_com_free[type] = f; } void if_deregister_com_alloc(u_char type) { KASSERT(if_com_alloc[type] != NULL, ("if_deregister_com_alloc: %d not registered", type)); KASSERT(if_com_free[type] != NULL, ("if_deregister_com_alloc: %d free not registered", type)); if_com_alloc[type] = NULL; if_com_free[type] = NULL; } /* API for driver access to network stack owned ifnet.*/ uint64_t if_setbaudrate(struct ifnet *ifp, uint64_t baudrate) { uint64_t oldbrate; oldbrate = ifp->if_baudrate; ifp->if_baudrate = baudrate; return (oldbrate); } uint64_t if_getbaudrate(if_t ifp) { return (((struct ifnet *)ifp)->if_baudrate); } int if_setcapabilities(if_t ifp, int capabilities) { ((struct ifnet *)ifp)->if_capabilities = capabilities; return (0); } int if_setcapabilitiesbit(if_t ifp, int setbit, int clearbit) { ((struct ifnet *)ifp)->if_capabilities |= setbit; ((struct ifnet *)ifp)->if_capabilities &= ~clearbit; return (0); } int if_getcapabilities(if_t ifp) { return ((struct ifnet *)ifp)->if_capabilities; } int if_setcapenable(if_t ifp, int capabilities) { ((struct ifnet *)ifp)->if_capenable = capabilities; return (0); } int if_setcapenablebit(if_t ifp, int setcap, int clearcap) { if(setcap) ((struct ifnet *)ifp)->if_capenable |= setcap; if(clearcap) ((struct ifnet *)ifp)->if_capenable &= ~clearcap; return (0); } const char * if_getdname(if_t ifp) { return ((struct ifnet *)ifp)->if_dname; } int if_togglecapenable(if_t ifp, int togglecap) { ((struct ifnet *)ifp)->if_capenable ^= togglecap; return (0); } int if_getcapenable(if_t ifp) { return ((struct ifnet *)ifp)->if_capenable; } /* * This is largely undesirable because it ties ifnet to a device, but does * provide flexiblity for an embedded product vendor. Should be used with * the understanding that it violates the interface boundaries, and should be * a last resort only. 
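*
* Most consumers never need this; the typed accessors below are the
* intended interface.  A hypothetical driver attach path, for
* instance, would stick to calls such as:
*
*	if_setsoftc(ifp, sc);
*	if_setflagbits(ifp, IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST, 0);
*	if_settransmitfn(ifp, my_transmit);
*	if_setqflushfn(ifp, my_qflush);
*
* where my_transmit and my_qflush are driver-supplied functions.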
*/ int if_setdev(if_t ifp, void *dev) { return (0); } int if_setdrvflagbits(if_t ifp, int set_flags, int clear_flags) { ((struct ifnet *)ifp)->if_drv_flags |= set_flags; ((struct ifnet *)ifp)->if_drv_flags &= ~clear_flags; return (0); } int if_getdrvflags(if_t ifp) { return ((struct ifnet *)ifp)->if_drv_flags; } int if_setdrvflags(if_t ifp, int flags) { ((struct ifnet *)ifp)->if_drv_flags = flags; return (0); } int if_setflags(if_t ifp, int flags) { ((struct ifnet *)ifp)->if_flags = flags; return (0); } int if_setflagbits(if_t ifp, int set, int clear) { ((struct ifnet *)ifp)->if_flags |= set; ((struct ifnet *)ifp)->if_flags &= ~clear; return (0); } int if_getflags(if_t ifp) { return ((struct ifnet *)ifp)->if_flags; } int if_clearhwassist(if_t ifp) { ((struct ifnet *)ifp)->if_hwassist = 0; return (0); } int if_sethwassistbits(if_t ifp, int toset, int toclear) { ((struct ifnet *)ifp)->if_hwassist |= toset; ((struct ifnet *)ifp)->if_hwassist &= ~toclear; return (0); } int if_sethwassist(if_t ifp, int hwassist_bit) { ((struct ifnet *)ifp)->if_hwassist = hwassist_bit; return (0); } int if_gethwassist(if_t ifp) { return ((struct ifnet *)ifp)->if_hwassist; } int if_setmtu(if_t ifp, int mtu) { ((struct ifnet *)ifp)->if_mtu = mtu; return (0); } int if_getmtu(if_t ifp) { return ((struct ifnet *)ifp)->if_mtu; } int if_getmtu_family(if_t ifp, int family) { struct domain *dp; for (dp = domains; dp; dp = dp->dom_next) { if (dp->dom_family == family && dp->dom_ifmtu != NULL) return (dp->dom_ifmtu((struct ifnet *)ifp)); } return (((struct ifnet *)ifp)->if_mtu); } int if_setsoftc(if_t ifp, void *softc) { ((struct ifnet *)ifp)->if_softc = softc; return (0); } void * if_getsoftc(if_t ifp) { return ((struct ifnet *)ifp)->if_softc; } void if_setrcvif(struct mbuf *m, if_t ifp) { m->m_pkthdr.rcvif = (struct ifnet *)ifp; } void if_setvtag(struct mbuf *m, uint16_t tag) { m->m_pkthdr.ether_vtag = tag; } uint16_t if_getvtag(struct mbuf *m) { return (m->m_pkthdr.ether_vtag); } int if_sendq_empty(if_t ifp) { return IFQ_DRV_IS_EMPTY(&((struct ifnet *)ifp)->if_snd); } struct ifaddr * if_getifaddr(if_t ifp) { return ((struct ifnet *)ifp)->if_addr; } int if_getamcount(if_t ifp) { return ((struct ifnet *)ifp)->if_amcount; } int if_setsendqready(if_t ifp) { IFQ_SET_READY(&((struct ifnet *)ifp)->if_snd); return (0); } int if_setsendqlen(if_t ifp, int tx_desc_count) { IFQ_SET_MAXLEN(&((struct ifnet *)ifp)->if_snd, tx_desc_count); ((struct ifnet *)ifp)->if_snd.ifq_drv_maxlen = tx_desc_count; return (0); } int if_vlantrunkinuse(if_t ifp) { return ((struct ifnet *)ifp)->if_vlantrunk != NULL?1:0; } int if_input(if_t ifp, struct mbuf* sendmp) { (*((struct ifnet *)ifp)->if_input)((struct ifnet *)ifp, sendmp); return (0); } /* XXX */ #ifndef ETH_ADDR_LEN #define ETH_ADDR_LEN 6 #endif int if_setupmultiaddr(if_t ifp, void *mta, int *cnt, int max) { struct ifmultiaddr *ifma; uint8_t *lmta = (uint8_t *)mta; int mcnt = 0; TAILQ_FOREACH(ifma, &((struct ifnet *)ifp)->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; if (mcnt == max) break; bcopy(LLADDR((struct sockaddr_dl *)ifma->ifma_addr), &lmta[mcnt * ETH_ADDR_LEN], ETH_ADDR_LEN); mcnt++; } *cnt = mcnt; return (0); } int if_multiaddr_array(if_t ifp, void *mta, int *cnt, int max) { int error; if_maddr_rlock(ifp); error = if_setupmultiaddr(ifp, mta, cnt, max); if_maddr_runlock(ifp); return (error); } int if_multiaddr_count(if_t ifp, int max) { struct ifmultiaddr *ifma; int count; count = 0; if_maddr_rlock(ifp); TAILQ_FOREACH(ifma, &((struct ifnet 
*)ifp)->if_multiaddrs, ifma_link) { if (ifma->ifma_addr->sa_family != AF_LINK) continue; count++; if (count == max) break; } if_maddr_runlock(ifp); return (count); } int if_multi_apply(struct ifnet *ifp, int (*filter)(void *, struct ifmultiaddr *, int), void *arg) { struct ifmultiaddr *ifma; int cnt = 0; if_maddr_rlock(ifp); TAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) cnt += filter(arg, ifma, cnt); if_maddr_runlock(ifp); return (cnt); } struct mbuf * if_dequeue(if_t ifp) { struct mbuf *m; IFQ_DRV_DEQUEUE(&((struct ifnet *)ifp)->if_snd, m); return (m); } int if_sendq_prepend(if_t ifp, struct mbuf *m) { IFQ_DRV_PREPEND(&((struct ifnet *)ifp)->if_snd, m); return (0); } int if_setifheaderlen(if_t ifp, int len) { ((struct ifnet *)ifp)->if_hdrlen = len; return (0); } caddr_t if_getlladdr(if_t ifp) { return (IF_LLADDR((struct ifnet *)ifp)); } void * if_gethandle(u_char type) { return (if_alloc(type)); } void if_bpfmtap(if_t ifh, struct mbuf *m) { struct ifnet *ifp = (struct ifnet *)ifh; BPF_MTAP(ifp, m); } void if_etherbpfmtap(if_t ifh, struct mbuf *m) { struct ifnet *ifp = (struct ifnet *)ifh; ETHER_BPF_MTAP(ifp, m); } void if_vlancap(if_t ifh) { struct ifnet *ifp = (struct ifnet *)ifh; VLAN_CAPABILITIES(ifp); } int if_sethwtsomax(if_t ifp, u_int if_hw_tsomax) { ((struct ifnet *)ifp)->if_hw_tsomax = if_hw_tsomax; return (0); } int if_sethwtsomaxsegcount(if_t ifp, u_int if_hw_tsomaxsegcount) { ((struct ifnet *)ifp)->if_hw_tsomaxsegcount = if_hw_tsomaxsegcount; return (0); } int if_sethwtsomaxsegsize(if_t ifp, u_int if_hw_tsomaxsegsize) { ((struct ifnet *)ifp)->if_hw_tsomaxsegsize = if_hw_tsomaxsegsize; return (0); } u_int if_gethwtsomax(if_t ifp) { return (((struct ifnet *)ifp)->if_hw_tsomax); } u_int if_gethwtsomaxsegcount(if_t ifp) { return (((struct ifnet *)ifp)->if_hw_tsomaxsegcount); } u_int if_gethwtsomaxsegsize(if_t ifp) { return (((struct ifnet *)ifp)->if_hw_tsomaxsegsize); } void if_setinitfn(if_t ifp, void (*init_fn)(void *)) { ((struct ifnet *)ifp)->if_init = init_fn; } void if_setioctlfn(if_t ifp, int (*ioctl_fn)(if_t, u_long, caddr_t)) { ((struct ifnet *)ifp)->if_ioctl = (void *)ioctl_fn; } void if_setstartfn(if_t ifp, void (*start_fn)(if_t)) { ((struct ifnet *)ifp)->if_start = (void *)start_fn; } void if_settransmitfn(if_t ifp, if_transmit_fn_t start_fn) { ((struct ifnet *)ifp)->if_transmit = start_fn; } void if_setqflushfn(if_t ifp, if_qflush_fn_t flush_fn) { ((struct ifnet *)ifp)->if_qflush = flush_fn; } void if_setgetcounterfn(if_t ifp, if_get_counter_t fn) { ifp->if_get_counter = fn; } /* Revisit these - These are inline functions originally. */ int drbr_inuse_drv(if_t ifh, struct buf_ring *br) { return drbr_inuse(ifh, br); } struct mbuf* drbr_dequeue_drv(if_t ifh, struct buf_ring *br) { return drbr_dequeue(ifh, br); } int drbr_needs_enqueue_drv(if_t ifh, struct buf_ring *br) { return drbr_needs_enqueue(ifh, br); } int drbr_enqueue_drv(if_t ifh, struct buf_ring *br, struct mbuf *m) { return drbr_enqueue(ifh, br, m); } Index: projects/runtime-coverage/sys/netinet/sctp_indata.c =================================================================== --- projects/runtime-coverage/sys/netinet/sctp_indata.c (revision 325444) +++ projects/runtime-coverage/sys/netinet/sctp_indata.c (revision 325445) @@ -1,5747 +1,5754 @@ /*- * Copyright (c) 2001-2007, by Cisco Systems, Inc. All rights reserved. * Copyright (c) 2008-2012, by Randall Stewart. All rights reserved. * Copyright (c) 2008-2012, by Michael Tuexen. All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * a) Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * b) Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the distribution. * * c) Neither the name of Cisco Systems, Inc. nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include /* * NOTES: On the outbound side of things I need to check the sack timer to * see if I should generate a sack into the chunk queue (if I have data to * send that is and will be sending it .. for bundling. * * The callback in sctp_usrreq.c will get called when the socket is read from. * This will cause sctp_service_queues() to get called on the top entry in * the list. */ static uint32_t sctp_add_chk_to_control(struct sctp_queued_to_read *control, struct sctp_stream_in *strm, struct sctp_tcb *stcb, struct sctp_association *asoc, struct sctp_tmit_chunk *chk, int lock_held); void sctp_set_rwnd(struct sctp_tcb *stcb, struct sctp_association *asoc) { asoc->my_rwnd = sctp_calc_rwnd(stcb, asoc); } /* Calculate what the rwnd would be */ uint32_t sctp_calc_rwnd(struct sctp_tcb *stcb, struct sctp_association *asoc) { uint32_t calc = 0; /* * This is really set wrong with respect to a 1-2-m socket. Since * the sb_cc is the count that everyone as put up. When we re-write * sctp_soreceive then we will fix this so that ONLY this * associations data is taken into account. 
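*
* Roughly, the value computed below is (a sketch, ignoring the
* clamping special cases):
*
*	rwnd = sbspace(so_rcv)
*	     - (size_on_reasm_queue + cnt_on_reasm_queue * MSIZE)
*	     - (size_on_all_streams + cnt_on_all_streams * MSIZE)
*	     - my_rwnd_control_len
*
* i.e. the receive buffer space not already committed to data (plus
* per-mbuf overhead) that we hold but have not yet handed to the
* application.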
*/ if (stcb->sctp_socket == NULL) { return (calc); } + KASSERT(asoc->cnt_on_reasm_queue > 0 || asoc->size_on_reasm_queue == 0, + ("size_on_reasm_queue is %u", asoc->size_on_reasm_queue)); + KASSERT(asoc->cnt_on_all_streams > 0 || asoc->size_on_all_streams == 0, + ("size_on_all_streams is %u", asoc->size_on_all_streams)); if (stcb->asoc.sb_cc == 0 && - asoc->size_on_reasm_queue == 0 && - asoc->size_on_all_streams == 0) { + asoc->cnt_on_reasm_queue == 0 && + asoc->cnt_on_all_streams == 0) { /* Full rwnd granted */ - KASSERT(asoc->cnt_on_reasm_queue == 0, ("cnt_on_reasm_queue is %u", asoc->cnt_on_reasm_queue)); - KASSERT(asoc->cnt_on_all_streams == 0, ("cnt_on_all_streams is %u", asoc->cnt_on_all_streams)); calc = max(SCTP_SB_LIMIT_RCV(stcb->sctp_socket), SCTP_MINIMAL_RWND); return (calc); } /* get actual space */ calc = (uint32_t)sctp_sbspace(&stcb->asoc, &stcb->sctp_socket->so_rcv); /* * take out what has NOT been put on socket queue and we yet hold * for putting up. */ calc = sctp_sbspace_sub(calc, (uint32_t)(asoc->size_on_reasm_queue + asoc->cnt_on_reasm_queue * MSIZE)); calc = sctp_sbspace_sub(calc, (uint32_t)(asoc->size_on_all_streams + asoc->cnt_on_all_streams * MSIZE)); if (calc == 0) { /* out of space */ return (calc); } /* what is the overhead of all these rwnd's */ calc = sctp_sbspace_sub(calc, stcb->asoc.my_rwnd_control_len); /* * If the window gets too small due to ctrl-stuff, reduce it to 1, * even it is 0. SWS engaged */ if (calc < stcb->asoc.my_rwnd_control_len) { calc = 1; } return (calc); } /* * Build out our readq entry based on the incoming packet. */ struct sctp_queued_to_read * sctp_build_readq_entry(struct sctp_tcb *stcb, struct sctp_nets *net, uint32_t tsn, uint32_t ppid, uint32_t context, uint16_t sid, uint32_t mid, uint8_t flags, struct mbuf *dm) { struct sctp_queued_to_read *read_queue_e = NULL; sctp_alloc_a_readq(stcb, read_queue_e); if (read_queue_e == NULL) { goto failed_build; } memset(read_queue_e, 0, sizeof(struct sctp_queued_to_read)); read_queue_e->sinfo_stream = sid; read_queue_e->sinfo_flags = (flags << 8); read_queue_e->sinfo_ppid = ppid; read_queue_e->sinfo_context = context; read_queue_e->sinfo_tsn = tsn; read_queue_e->sinfo_cumtsn = tsn; read_queue_e->sinfo_assoc_id = sctp_get_associd(stcb); read_queue_e->mid = mid; read_queue_e->top_fsn = read_queue_e->fsn_included = 0xffffffff; TAILQ_INIT(&read_queue_e->reasm); read_queue_e->whoFrom = net; atomic_add_int(&net->ref_count, 1); read_queue_e->data = dm; read_queue_e->stcb = stcb; read_queue_e->port_from = stcb->rport; failed_build: return (read_queue_e); } struct mbuf * sctp_build_ctl_nchunk(struct sctp_inpcb *inp, struct sctp_sndrcvinfo *sinfo) { struct sctp_extrcvinfo *seinfo; struct sctp_sndrcvinfo *outinfo; struct sctp_rcvinfo *rcvinfo; struct sctp_nxtinfo *nxtinfo; struct cmsghdr *cmh; struct mbuf *ret; int len; int use_extended; int provide_nxt; if (sctp_is_feature_off(inp, SCTP_PCB_FLAGS_RECVDATAIOEVNT) && sctp_is_feature_off(inp, SCTP_PCB_FLAGS_RECVRCVINFO) && sctp_is_feature_off(inp, SCTP_PCB_FLAGS_RECVNXTINFO)) { /* user does not want any ancillary data */ return (NULL); } len = 0; if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVRCVINFO)) { len += CMSG_SPACE(sizeof(struct sctp_rcvinfo)); } seinfo = (struct sctp_extrcvinfo *)sinfo; if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVNXTINFO) && (seinfo->serinfo_next_flags & SCTP_NEXT_MSG_AVAIL)) { provide_nxt = 1; len += CMSG_SPACE(sizeof(struct sctp_nxtinfo)); } else { provide_nxt = 0; } if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVDATAIOEVNT)) { 
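		/*
		 * Sketch of the buffer being assembled here when, for
		 * example, both RCVINFO and DATAIOEVNT are enabled
		 * (layout only; actual sizes come from CMSG_SPACE()):
		 *
		 *   [cmsghdr | sctp_rcvinfo]     cmsg_type = SCTP_RCVINFO
		 *   [cmsghdr | sctp_sndrcvinfo]  cmsg_type = SCTP_SNDRCV
		 *
		 * The extended sctp_extrcvinfo variant is chosen just
		 * below only if SCTP_PCB_FLAGS_EXT_RCVINFO is also set.
		 */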
if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_EXT_RCVINFO)) { use_extended = 1; len += CMSG_SPACE(sizeof(struct sctp_extrcvinfo)); } else { use_extended = 0; len += CMSG_SPACE(sizeof(struct sctp_sndrcvinfo)); } } else { use_extended = 0; } ret = sctp_get_mbuf_for_msg(len, 0, M_NOWAIT, 1, MT_DATA); if (ret == NULL) { /* No space */ return (ret); } SCTP_BUF_LEN(ret) = 0; /* We need a CMSG header followed by the struct */ cmh = mtod(ret, struct cmsghdr *); /* * Make sure that there is no un-initialized padding between the * cmsg header and cmsg data and after the cmsg data. */ memset(cmh, 0, len); if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVRCVINFO)) { cmh->cmsg_level = IPPROTO_SCTP; cmh->cmsg_len = CMSG_LEN(sizeof(struct sctp_rcvinfo)); cmh->cmsg_type = SCTP_RCVINFO; rcvinfo = (struct sctp_rcvinfo *)CMSG_DATA(cmh); rcvinfo->rcv_sid = sinfo->sinfo_stream; rcvinfo->rcv_ssn = sinfo->sinfo_ssn; rcvinfo->rcv_flags = sinfo->sinfo_flags; rcvinfo->rcv_ppid = sinfo->sinfo_ppid; rcvinfo->rcv_tsn = sinfo->sinfo_tsn; rcvinfo->rcv_cumtsn = sinfo->sinfo_cumtsn; rcvinfo->rcv_context = sinfo->sinfo_context; rcvinfo->rcv_assoc_id = sinfo->sinfo_assoc_id; cmh = (struct cmsghdr *)((caddr_t)cmh + CMSG_SPACE(sizeof(struct sctp_rcvinfo))); SCTP_BUF_LEN(ret) += CMSG_SPACE(sizeof(struct sctp_rcvinfo)); } if (provide_nxt) { cmh->cmsg_level = IPPROTO_SCTP; cmh->cmsg_len = CMSG_LEN(sizeof(struct sctp_nxtinfo)); cmh->cmsg_type = SCTP_NXTINFO; nxtinfo = (struct sctp_nxtinfo *)CMSG_DATA(cmh); nxtinfo->nxt_sid = seinfo->serinfo_next_stream; nxtinfo->nxt_flags = 0; if (seinfo->serinfo_next_flags & SCTP_NEXT_MSG_IS_UNORDERED) { nxtinfo->nxt_flags |= SCTP_UNORDERED; } if (seinfo->serinfo_next_flags & SCTP_NEXT_MSG_IS_NOTIFICATION) { nxtinfo->nxt_flags |= SCTP_NOTIFICATION; } if (seinfo->serinfo_next_flags & SCTP_NEXT_MSG_ISCOMPLETE) { nxtinfo->nxt_flags |= SCTP_COMPLETE; } nxtinfo->nxt_ppid = seinfo->serinfo_next_ppid; nxtinfo->nxt_length = seinfo->serinfo_next_length; nxtinfo->nxt_assoc_id = seinfo->serinfo_next_aid; cmh = (struct cmsghdr *)((caddr_t)cmh + CMSG_SPACE(sizeof(struct sctp_nxtinfo))); SCTP_BUF_LEN(ret) += CMSG_SPACE(sizeof(struct sctp_nxtinfo)); } if (sctp_is_feature_on(inp, SCTP_PCB_FLAGS_RECVDATAIOEVNT)) { cmh->cmsg_level = IPPROTO_SCTP; outinfo = (struct sctp_sndrcvinfo *)CMSG_DATA(cmh); if (use_extended) { cmh->cmsg_len = CMSG_LEN(sizeof(struct sctp_extrcvinfo)); cmh->cmsg_type = SCTP_EXTRCV; memcpy(outinfo, sinfo, sizeof(struct sctp_extrcvinfo)); SCTP_BUF_LEN(ret) += CMSG_SPACE(sizeof(struct sctp_extrcvinfo)); } else { cmh->cmsg_len = CMSG_LEN(sizeof(struct sctp_sndrcvinfo)); cmh->cmsg_type = SCTP_SNDRCV; *outinfo = *sinfo; SCTP_BUF_LEN(ret) += CMSG_SPACE(sizeof(struct sctp_sndrcvinfo)); } } return (ret); } static void sctp_mark_non_revokable(struct sctp_association *asoc, uint32_t tsn) { uint32_t gap, i, cumackp1; int fnd = 0; int in_r = 0, in_nr = 0; if (SCTP_BASE_SYSCTL(sctp_do_drain) == 0) { return; } cumackp1 = asoc->cumulative_tsn + 1; if (SCTP_TSN_GT(cumackp1, tsn)) { /* * this tsn is behind the cum ack and thus we don't need to * worry about it being moved from one to the other. 
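*
* For a TSN that is not behind the cum-ack, the code below in effect
* computes
*
*	gap = tsn - mapping_array_base_tsn	(modulo 2^32)
*
* and moves the bit for that TSN from the renegable mapping_array to
* the non-renegable nr_mapping_array, so a later drain cannot revoke
* it.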
*/ return; } SCTP_CALC_TSN_TO_GAP(gap, tsn, asoc->mapping_array_base_tsn); in_r = SCTP_IS_TSN_PRESENT(asoc->mapping_array, gap); in_nr = SCTP_IS_TSN_PRESENT(asoc->nr_mapping_array, gap); if ((in_r == 0) && (in_nr == 0)) { #ifdef INVARIANTS panic("Things are really messed up now"); #else SCTP_PRINTF("gap:%x tsn:%x\n", gap, tsn); sctp_print_mapping_array(asoc); #endif } if (in_nr == 0) SCTP_SET_TSN_PRESENT(asoc->nr_mapping_array, gap); if (in_r) SCTP_UNSET_TSN_PRESENT(asoc->mapping_array, gap); if (SCTP_TSN_GT(tsn, asoc->highest_tsn_inside_nr_map)) { asoc->highest_tsn_inside_nr_map = tsn; } if (tsn == asoc->highest_tsn_inside_map) { /* We must back down to see what the new highest is */ for (i = tsn - 1; SCTP_TSN_GE(i, asoc->mapping_array_base_tsn); i--) { SCTP_CALC_TSN_TO_GAP(gap, i, asoc->mapping_array_base_tsn); if (SCTP_IS_TSN_PRESENT(asoc->mapping_array, gap)) { asoc->highest_tsn_inside_map = i; fnd = 1; break; } } if (!fnd) { asoc->highest_tsn_inside_map = asoc->mapping_array_base_tsn - 1; } } } static int sctp_place_control_in_stream(struct sctp_stream_in *strm, struct sctp_association *asoc, struct sctp_queued_to_read *control) { struct sctp_queued_to_read *at; struct sctp_readhead *q; uint8_t flags, unordered; flags = (control->sinfo_flags >> 8); unordered = flags & SCTP_DATA_UNORDERED; if (unordered) { q = &strm->uno_inqueue; if (asoc->idata_supported == 0) { if (!TAILQ_EMPTY(q)) { /* * Only one stream can be here in old style * -- abort */ return (-1); } TAILQ_INSERT_TAIL(q, control, next_instrm); control->on_strm_q = SCTP_ON_UNORDERED; return (0); } } else { q = &strm->inqueue; } if ((flags & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG) { control->end_added = 1; control->first_frag_seen = 1; control->last_frag_seen = 1; } if (TAILQ_EMPTY(q)) { /* Empty queue */ TAILQ_INSERT_HEAD(q, control, next_instrm); if (unordered) { control->on_strm_q = SCTP_ON_UNORDERED; } else { control->on_strm_q = SCTP_ON_ORDERED; } return (0); } else { TAILQ_FOREACH(at, q, next_instrm) { if (SCTP_MID_GT(asoc->idata_supported, at->mid, control->mid)) { /* * one in queue is bigger than the new one, * insert before this one */ TAILQ_INSERT_BEFORE(at, control, next_instrm); if (unordered) { control->on_strm_q = SCTP_ON_UNORDERED; } else { control->on_strm_q = SCTP_ON_ORDERED; } break; } else if (SCTP_MID_EQ(asoc->idata_supported, at->mid, control->mid)) { /* * Gak, He sent me a duplicate msg id * number?? return -1 to abort. 
*/ return (-1); } else { if (TAILQ_NEXT(at, next_instrm) == NULL) { /* * We are at the end, insert it * after this one */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_STR_LOGGING_ENABLE) { sctp_log_strm_del(control, at, SCTP_STR_LOG_FROM_INSERT_TL); } TAILQ_INSERT_AFTER(q, at, control, next_instrm); if (unordered) { control->on_strm_q = SCTP_ON_UNORDERED; } else { control->on_strm_q = SCTP_ON_ORDERED; } break; } } } } return (0); } static void sctp_abort_in_reasm(struct sctp_tcb *stcb, struct sctp_queued_to_read *control, struct sctp_tmit_chunk *chk, int *abort_flag, int opspot) { char msg[SCTP_DIAG_INFO_LEN]; struct mbuf *oper; if (stcb->asoc.idata_supported) { snprintf(msg, sizeof(msg), "Reass %x,CF:%x,TSN=%8.8x,SID=%4.4x,FSN=%8.8x,MID:%8.8x", opspot, control->fsn_included, chk->rec.data.tsn, chk->rec.data.sid, chk->rec.data.fsn, chk->rec.data.mid); } else { snprintf(msg, sizeof(msg), "Reass %x,CI:%x,TSN=%8.8x,SID=%4.4x,FSN=%4.4x,SSN:%4.4x", opspot, control->fsn_included, chk->rec.data.tsn, chk->rec.data.sid, chk->rec.data.fsn, (uint16_t)chk->rec.data.mid); } oper = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); sctp_m_freem(chk->data); chk->data = NULL; sctp_free_a_chunk(stcb, chk, SCTP_SO_NOT_LOCKED); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_1; sctp_abort_an_association(stcb->sctp_ep, stcb, oper, SCTP_SO_NOT_LOCKED); *abort_flag = 1; } static void sctp_clean_up_control(struct sctp_tcb *stcb, struct sctp_queued_to_read *control) { /* * The control could not be placed and must be cleaned. */ struct sctp_tmit_chunk *chk, *nchk; TAILQ_FOREACH_SAFE(chk, &control->reasm, sctp_next, nchk) { TAILQ_REMOVE(&control->reasm, chk, sctp_next); if (chk->data) sctp_m_freem(chk->data); chk->data = NULL; sctp_free_a_chunk(stcb, chk, SCTP_SO_NOT_LOCKED); } sctp_free_a_readq(stcb, control); } /* * Queue the chunk either right into the socket buffer if it is the next one * to go OR put it in the correct place in the delivery queue. If we do * append to the so_buf, keep doing so until we are out of order as * long as the control's entered are non-fragmented. */ static void sctp_queue_data_to_stream(struct sctp_tcb *stcb, struct sctp_association *asoc, struct sctp_queued_to_read *control, int *abort_flag, int *need_reasm) { /* * FIX-ME maybe? What happens when the ssn wraps? If we are getting * all the data in one stream this could happen quite rapidly. One * could use the TSN to keep track of things, but this scheme breaks * down in the other type of stream usage that could occur. Send a * single msg to stream 0, send 4Billion messages to stream 1, now * send a message to stream 0. You have a situation where the TSN * has wrapped but not in the stream. Is this worth worrying about * or should we just change our queue sort at the bottom to be by * TSN. * * Could it also be legal for a peer to send ssn 1 with TSN 2 and * ssn 2 with TSN 1? If the peer is doing some sort of funky TSN/SSN * assignment this could happen... and I don't see how this would be * a violation. So for now I am undecided an will leave the sort by * SSN alone. 
Maybe a hybred approach is the answer * */ struct sctp_queued_to_read *at; int queue_needed; uint32_t nxt_todel; struct mbuf *op_err; struct sctp_stream_in *strm; char msg[SCTP_DIAG_INFO_LEN]; strm = &asoc->strmin[control->sinfo_stream]; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_STR_LOGGING_ENABLE) { sctp_log_strm_del(control, NULL, SCTP_STR_LOG_FROM_INTO_STRD); } if (SCTP_MID_GT((asoc->idata_supported), strm->last_mid_delivered, control->mid)) { /* The incoming sseq is behind where we last delivered? */ SCTPDBG(SCTP_DEBUG_INDATA1, "Duplicate S-SEQ: %u delivered: %u from peer, Abort association\n", strm->last_mid_delivered, control->mid); /* * throw it in the stream so it gets cleaned up in * association destruction */ TAILQ_INSERT_HEAD(&strm->inqueue, control, next_instrm); if (asoc->idata_supported) { snprintf(msg, sizeof(msg), "Delivered MID=%8.8x, got TSN=%8.8x, SID=%4.4x, MID=%8.8x", strm->last_mid_delivered, control->sinfo_tsn, control->sinfo_stream, control->mid); } else { snprintf(msg, sizeof(msg), "Delivered SSN=%4.4x, got TSN=%8.8x, SID=%4.4x, SSN=%4.4x", (uint16_t)strm->last_mid_delivered, control->sinfo_tsn, control->sinfo_stream, (uint16_t)control->mid); } op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_2; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); *abort_flag = 1; return; } queue_needed = 1; asoc->size_on_all_streams += control->length; sctp_ucount_incr(asoc->cnt_on_all_streams); nxt_todel = strm->last_mid_delivered + 1; if (SCTP_MID_EQ(asoc->idata_supported, nxt_todel, control->mid)) { #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) struct socket *so; so = SCTP_INP_SO(stcb->sctp_ep); atomic_add_int(&stcb->asoc.refcnt, 1); SCTP_TCB_UNLOCK(stcb); SCTP_SOCKET_LOCK(so, 1); SCTP_TCB_LOCK(stcb); atomic_subtract_int(&stcb->asoc.refcnt, 1); if (stcb->sctp_ep->sctp_flags & SCTP_PCB_FLAGS_SOCKET_GONE) { SCTP_SOCKET_UNLOCK(so, 1); return; } #endif /* can be delivered right away? 
*/ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_STR_LOGGING_ENABLE) { sctp_log_strm_del(control, NULL, SCTP_STR_LOG_FROM_IMMED_DEL); } /* EY it wont be queued if it could be delivered directly */ queue_needed = 0; if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); strm->last_mid_delivered++; sctp_mark_non_revokable(asoc, control->sinfo_tsn); sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, 1, SCTP_READ_LOCK_NOT_HELD, SCTP_SO_LOCKED); TAILQ_FOREACH_SAFE(control, &strm->inqueue, next_instrm, at) { /* all delivered */ nxt_todel = strm->last_mid_delivered + 1; if (SCTP_MID_EQ(asoc->idata_supported, nxt_todel, control->mid) && (((control->sinfo_flags >> 8) & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG)) { if (control->on_strm_q == SCTP_ON_ORDERED) { TAILQ_REMOVE(&strm->inqueue, control, next_instrm); if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); #ifdef INVARIANTS } else { panic("Huh control: %p is on_strm_q: %d", control, control->on_strm_q); #endif } control->on_strm_q = 0; strm->last_mid_delivered++; /* * We ignore the return of deliver_data here * since we always can hold the chunk on the * d-queue. And we have a finite number that * can be delivered from the strq. */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_STR_LOGGING_ENABLE) { sctp_log_strm_del(control, NULL, SCTP_STR_LOG_FROM_IMMED_DEL); } sctp_mark_non_revokable(asoc, control->sinfo_tsn); sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, 1, SCTP_READ_LOCK_NOT_HELD, SCTP_SO_LOCKED); continue; } else if (SCTP_MID_EQ(asoc->idata_supported, nxt_todel, control->mid)) { *need_reasm = 1; } break; } #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) SCTP_SOCKET_UNLOCK(so, 1); #endif } if (queue_needed) { /* * Ok, we did not deliver this guy, find the correct place * to put it on the queue. */ if (sctp_place_control_in_stream(strm, asoc, control)) { snprintf(msg, sizeof(msg), "Queue to str MID: %u duplicate", control->mid); sctp_clean_up_control(stcb, control); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_3; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); *abort_flag = 1; } } } static void sctp_setup_tail_pointer(struct sctp_queued_to_read *control) { struct mbuf *m, *prev = NULL; struct sctp_tcb *stcb; stcb = control->stcb; control->held_length = 0; control->length = 0; m = control->data; while (m) { if (SCTP_BUF_LEN(m) == 0) { /* Skip mbufs with NO length */ if (prev == NULL) { /* First one */ control->data = sctp_m_free(m); m = control->data; } else { SCTP_BUF_NEXT(prev) = sctp_m_free(m); m = SCTP_BUF_NEXT(prev); } if (m == NULL) { control->tail_mbuf = prev; } continue; } prev = m; atomic_add_int(&control->length, SCTP_BUF_LEN(m)); if (control->on_read_q) { /* * On read queue so we must increment the SB stuff, * we assume caller has done any locks of SB. 
*/ sctp_sballoc(stcb, &stcb->sctp_socket->so_rcv, m); } m = SCTP_BUF_NEXT(m); } if (prev) { control->tail_mbuf = prev; } } static void sctp_add_to_tail_pointer(struct sctp_queued_to_read *control, struct mbuf *m, uint32_t *added) { struct mbuf *prev = NULL; struct sctp_tcb *stcb; stcb = control->stcb; if (stcb == NULL) { #ifdef INVARIANTS panic("Control broken"); #else return; #endif } if (control->tail_mbuf == NULL) { /* TSNH */ control->data = m; sctp_setup_tail_pointer(control); return; } control->tail_mbuf->m_next = m; while (m) { if (SCTP_BUF_LEN(m) == 0) { /* Skip mbufs with NO length */ if (prev == NULL) { /* First one */ control->tail_mbuf->m_next = sctp_m_free(m); m = control->tail_mbuf->m_next; } else { SCTP_BUF_NEXT(prev) = sctp_m_free(m); m = SCTP_BUF_NEXT(prev); } if (m == NULL) { control->tail_mbuf = prev; } continue; } prev = m; if (control->on_read_q) { /* * On read queue so we must increment the SB stuff, * we assume caller has done any locks of SB. */ sctp_sballoc(stcb, &stcb->sctp_socket->so_rcv, m); } *added += SCTP_BUF_LEN(m); atomic_add_int(&control->length, SCTP_BUF_LEN(m)); m = SCTP_BUF_NEXT(m); } if (prev) { control->tail_mbuf = prev; } } static void sctp_build_readq_entry_from_ctl(struct sctp_queued_to_read *nc, struct sctp_queued_to_read *control) { memset(nc, 0, sizeof(struct sctp_queued_to_read)); nc->sinfo_stream = control->sinfo_stream; nc->mid = control->mid; TAILQ_INIT(&nc->reasm); nc->top_fsn = control->top_fsn; nc->mid = control->mid; nc->sinfo_flags = control->sinfo_flags; nc->sinfo_ppid = control->sinfo_ppid; nc->sinfo_context = control->sinfo_context; nc->fsn_included = 0xffffffff; nc->sinfo_tsn = control->sinfo_tsn; nc->sinfo_cumtsn = control->sinfo_cumtsn; nc->sinfo_assoc_id = control->sinfo_assoc_id; nc->whoFrom = control->whoFrom; atomic_add_int(&nc->whoFrom->ref_count, 1); nc->stcb = control->stcb; nc->port_from = control->port_from; } static void sctp_reset_a_control(struct sctp_queued_to_read *control, struct sctp_inpcb *inp, uint32_t tsn) { control->fsn_included = tsn; if (control->on_read_q) { /* * We have to purge it from there, hopefully this will work * :-) */ TAILQ_REMOVE(&inp->read_queue, control, next); control->on_read_q = 0; } } static int sctp_handle_old_unordered_data(struct sctp_tcb *stcb, struct sctp_association *asoc, struct sctp_stream_in *strm, struct sctp_queued_to_read *control, uint32_t pd_point, int inp_read_lock_held) { /* * Special handling for the old un-ordered data chunk. All the * chunks/TSN's go to mid 0. So we have to do the old style watching * to see if we have it all. If you return one, no other control * entries on the un-ordered queue will be looked at. In theory * there should be no others entries in reality, unless the guy is * sending both unordered NDATA and unordered DATA... */ struct sctp_tmit_chunk *chk, *lchk, *tchk; uint32_t fsn; struct sctp_queued_to_read *nc; int cnt_added; if (control->first_frag_seen == 0) { /* Nothing we can do, we have not seen the first piece yet */ return (1); } /* Collapse any we can */ cnt_added = 0; restart: fsn = control->fsn_included + 1; /* Now what can we add? 
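*
* Walk the reassembly list and merge every chunk whose FSN is exactly
* the next one expected.  For example, with fsn_included = 4 and
* queued fragments {5, 6, 9}, the loop below folds 5 and 6 into the
* control and leaves 9 queued until the gap fills.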
*/ TAILQ_FOREACH_SAFE(chk, &control->reasm, sctp_next, lchk) { if (chk->rec.data.fsn == fsn) { /* Ok lets add it */ sctp_alloc_a_readq(stcb, nc); if (nc == NULL) { break; } memset(nc, 0, sizeof(struct sctp_queued_to_read)); TAILQ_REMOVE(&control->reasm, chk, sctp_next); sctp_add_chk_to_control(control, strm, stcb, asoc, chk, SCTP_READ_LOCK_NOT_HELD); fsn++; cnt_added++; chk = NULL; if (control->end_added) { /* We are done */ if (!TAILQ_EMPTY(&control->reasm)) { /* * Ok we have to move anything left * on the control queue to a new * control. */ sctp_build_readq_entry_from_ctl(nc, control); tchk = TAILQ_FIRST(&control->reasm); if (tchk->rec.data.rcv_flags & SCTP_DATA_FIRST_FRAG) { TAILQ_REMOVE(&control->reasm, tchk, sctp_next); if (asoc->size_on_reasm_queue >= tchk->send_size) { asoc->size_on_reasm_queue -= tchk->send_size; } else { #ifdef INVARIANTS panic("size_on_reasm_queue = %u smaller than chunk length %u", asoc->size_on_reasm_queue, tchk->send_size); #else asoc->size_on_reasm_queue = 0; #endif } sctp_ucount_decr(asoc->cnt_on_reasm_queue); nc->first_frag_seen = 1; nc->fsn_included = tchk->rec.data.fsn; nc->data = tchk->data; nc->sinfo_ppid = tchk->rec.data.ppid; nc->sinfo_tsn = tchk->rec.data.tsn; sctp_mark_non_revokable(asoc, tchk->rec.data.tsn); tchk->data = NULL; sctp_free_a_chunk(stcb, tchk, SCTP_SO_NOT_LOCKED); sctp_setup_tail_pointer(nc); tchk = TAILQ_FIRST(&control->reasm); } /* Spin the rest onto the queue */ while (tchk) { TAILQ_REMOVE(&control->reasm, tchk, sctp_next); TAILQ_INSERT_TAIL(&nc->reasm, tchk, sctp_next); tchk = TAILQ_FIRST(&control->reasm); } /* * Now lets add it to the queue * after removing control */ TAILQ_INSERT_TAIL(&strm->uno_inqueue, nc, next_instrm); nc->on_strm_q = SCTP_ON_UNORDERED; if (control->on_strm_q) { TAILQ_REMOVE(&strm->uno_inqueue, control, next_instrm); control->on_strm_q = 0; } } if (control->pdapi_started) { strm->pd_api_started = 0; control->pdapi_started = 0; } if (control->on_strm_q) { TAILQ_REMOVE(&strm->uno_inqueue, control, next_instrm); control->on_strm_q = 0; SCTP_STAT_INCR_COUNTER64(sctps_reasmusrmsgs); } if (control->on_read_q == 0) { sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, control->end_added, inp_read_lock_held, SCTP_SO_NOT_LOCKED); } sctp_wakeup_the_read_socket(stcb->sctp_ep, stcb, SCTP_SO_NOT_LOCKED); if ((nc->first_frag_seen) && !TAILQ_EMPTY(&nc->reasm)) { /* * Switch to the new guy and * continue */ control = nc; goto restart; } else { if (nc->on_strm_q == 0) { sctp_free_a_readq(stcb, nc); } } return (1); } else { sctp_free_a_readq(stcb, nc); } } else { /* Can't add more */ break; } } if ((control->length > pd_point) && (strm->pd_api_started == 0)) { strm->pd_api_started = 1; control->pdapi_started = 1; sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, control->end_added, inp_read_lock_held, SCTP_SO_NOT_LOCKED); sctp_wakeup_the_read_socket(stcb->sctp_ep, stcb, SCTP_SO_NOT_LOCKED); return (0); } else { return (1); } } static void sctp_inject_old_unordered_data(struct sctp_tcb *stcb, struct sctp_association *asoc, struct sctp_queued_to_read *control, struct sctp_tmit_chunk *chk, int *abort_flag) { struct sctp_tmit_chunk *at; int inserted; /* * Here we need to place the chunk into the control structure sorted * in the correct order. */ if (chk->rec.data.rcv_flags & SCTP_DATA_FIRST_FRAG) { /* Its the very first one. 
*/
	SCTPDBG(SCTP_DEBUG_XXX,
	    "chunk is a first fsn: %u becomes fsn_included\n",
	    chk->rec.data.fsn);
	if (control->first_frag_seen) {
		/*
		 * In old un-ordered mode we can reassemble multiple
		 * messages on one control, as long as the next FIRST is
		 * greater than the old first (TSN, i.e. FSN, wise).
		 */
		struct mbuf *tdata;
		uint32_t tmp;

		if (SCTP_TSN_GT(chk->rec.data.fsn, control->fsn_included)) {
			/*
			 * Easy case: the start of a new message beyond
			 * the lowest one.
			 */
			goto place_chunk;
		}
		if ((chk->rec.data.fsn == control->fsn_included) ||
		    (control->pdapi_started)) {
			/*
			 * Ok, this should not happen; if it does, we
			 * started the pd-api on the higher TSN (since
			 * the equals part is a TSN failure it must be
			 * that).
			 *
			 * We are completely hosed in that case since I
			 * have no way to recover.  This really will only
			 * happen if we can get more TSNs higher before
			 * the pd-api-point.
			 */
			sctp_abort_in_reasm(stcb, control, chk,
			    abort_flag,
			    SCTP_FROM_SCTP_INDATA + SCTP_LOC_4);
			return;
		}
		/*
		 * Ok, we have two firsts and the one we just got is
		 * smaller than the one we previously placed; we must
		 * swap them out.
		 */
		/* swap the mbufs */
		tdata = control->data;
		control->data = chk->data;
		chk->data = tdata;
		/* Save the lengths */
		chk->send_size = control->length;
		/* Recompute length of control and tail pointer */
		sctp_setup_tail_pointer(control);
		/* Fix the FSN included */
		tmp = control->fsn_included;
		control->fsn_included = chk->rec.data.fsn;
		chk->rec.data.fsn = tmp;
		/* Fix the TSN included */
		tmp = control->sinfo_tsn;
		control->sinfo_tsn = chk->rec.data.tsn;
		chk->rec.data.tsn = tmp;
		/* Fix the PPID included */
		tmp = control->sinfo_ppid;
		control->sinfo_ppid = chk->rec.data.ppid;
		chk->rec.data.ppid = tmp;
		/* Fix tail pointer */
		goto place_chunk;
	}
	control->first_frag_seen = 1;
	control->fsn_included = chk->rec.data.fsn;
	control->top_fsn = chk->rec.data.fsn;
	control->sinfo_tsn = chk->rec.data.tsn;
	control->sinfo_ppid = chk->rec.data.ppid;
	control->data = chk->data;
	sctp_mark_non_revokable(asoc, chk->rec.data.tsn);
	chk->data = NULL;
	sctp_free_a_chunk(stcb, chk, SCTP_SO_NOT_LOCKED);
	sctp_setup_tail_pointer(control);
	return;

place_chunk:
	inserted = 0;
	TAILQ_FOREACH(at, &control->reasm, sctp_next) {
		if (SCTP_TSN_GT(at->rec.data.fsn, chk->rec.data.fsn)) {
			/*
			 * This one in queue is bigger than the new one,
			 * insert the new one before at.
			 */
			asoc->size_on_reasm_queue += chk->send_size;
			sctp_ucount_incr(asoc->cnt_on_reasm_queue);
			inserted = 1;
			TAILQ_INSERT_BEFORE(at, chk, sctp_next);
			break;
		} else if (at->rec.data.fsn == chk->rec.data.fsn) {
			/*
			 * They sent a duplicate fsn number.  This really
			 * should not happen since the FSN is a TSN and it
			 * should have been dropped earlier.
			 */
			sctp_abort_in_reasm(stcb, control, chk,
			    abort_flag,
			    SCTP_FROM_SCTP_INDATA + SCTP_LOC_5);
			return;
		}
	}
	if (inserted == 0) {
		/* It's at the end */
		asoc->size_on_reasm_queue += chk->send_size;
		sctp_ucount_incr(asoc->cnt_on_reasm_queue);
		control->top_fsn = chk->rec.data.fsn;
		TAILQ_INSERT_TAIL(&control->reasm, chk, sctp_next);
	}
}

static int
sctp_deliver_reasm_check(struct sctp_tcb *stcb, struct sctp_association *asoc,
    struct sctp_stream_in *strm, int inp_read_lock_held)
{
	/*
	 * Given a stream, strm, see if any of the SSNs on it that are
	 * fragmented are ready to deliver.  If so, go ahead and place them
	 * on the read queue.  In so placing, if we have hit the end, then
	 * we need to remove them from the stream's queue.
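	 *
	 * The partial-delivery threshold used below is, in effect,
	 *
	 *	pd_point = min(SCTP_SB_LIMIT_RCV(so) >> SCTP_PARTIAL_DELIVERY_SHIFT,
	 *		       sctp_ep->partial_delivery_point);
	 *
	 * so the PD-API only kicks in for a message that is already large
	 * relative to the receive buffer, and only one message per stream
	 * can be in partial delivery at a time.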
*/ struct sctp_queued_to_read *control, *nctl = NULL; uint32_t next_to_del; uint32_t pd_point; int ret = 0; if (stcb->sctp_socket) { pd_point = min(SCTP_SB_LIMIT_RCV(stcb->sctp_socket) >> SCTP_PARTIAL_DELIVERY_SHIFT, stcb->sctp_ep->partial_delivery_point); } else { pd_point = stcb->sctp_ep->partial_delivery_point; } control = TAILQ_FIRST(&strm->uno_inqueue); if ((control != NULL) && (asoc->idata_supported == 0)) { /* Special handling needed for "old" data format */ if (sctp_handle_old_unordered_data(stcb, asoc, strm, control, pd_point, inp_read_lock_held)) { goto done_un; } } if (strm->pd_api_started) { /* Can't add more */ return (0); } while (control) { SCTPDBG(SCTP_DEBUG_XXX, "Looking at control: %p e(%d) ssn: %u top_fsn: %u inc_fsn: %u -uo\n", control, control->end_added, control->mid, control->top_fsn, control->fsn_included); nctl = TAILQ_NEXT(control, next_instrm); if (control->end_added) { /* We just put the last bit on */ if (control->on_strm_q) { #ifdef INVARIANTS if (control->on_strm_q != SCTP_ON_UNORDERED) { panic("Huh control: %p on_q: %d -- not unordered?", control, control->on_strm_q); } #endif SCTP_STAT_INCR_COUNTER64(sctps_reasmusrmsgs); TAILQ_REMOVE(&strm->uno_inqueue, control, next_instrm); control->on_strm_q = 0; } if (control->on_read_q == 0) { sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, control->end_added, inp_read_lock_held, SCTP_SO_NOT_LOCKED); } } else { /* Can we do a PD-API for this un-ordered guy? */ if ((control->length >= pd_point) && (strm->pd_api_started == 0)) { strm->pd_api_started = 1; control->pdapi_started = 1; sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, control->end_added, inp_read_lock_held, SCTP_SO_NOT_LOCKED); break; } } control = nctl; } done_un: control = TAILQ_FIRST(&strm->inqueue); if (strm->pd_api_started) { /* Can't add more */ return (0); } if (control == NULL) { return (ret); } if (SCTP_MID_EQ(asoc->idata_supported, strm->last_mid_delivered, control->mid)) { /* * Ok the guy at the top was being partially delivered * completed, so we remove it. Note the pd_api flag was * taken off when the chunk was merged on in * sctp_queue_data_for_reasm below. */ nctl = TAILQ_NEXT(control, next_instrm); SCTPDBG(SCTP_DEBUG_XXX, "Looking at control: %p e(%d) ssn: %u top_fsn: %u inc_fsn: %u (lastdel: %u)- o\n", control, control->end_added, control->mid, control->top_fsn, control->fsn_included, strm->last_mid_delivered); if (control->end_added) { if (control->on_strm_q) { #ifdef INVARIANTS if (control->on_strm_q != SCTP_ON_ORDERED) { panic("Huh control: %p on_q: %d -- not ordered?", control, control->on_strm_q); } #endif SCTP_STAT_INCR_COUNTER64(sctps_reasmusrmsgs); TAILQ_REMOVE(&strm->inqueue, control, next_instrm); if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); control->on_strm_q = 0; } if (strm->pd_api_started && control->pdapi_started) { control->pdapi_started = 0; strm->pd_api_started = 0; } if (control->on_read_q == 0) { sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, control->end_added, inp_read_lock_held, SCTP_SO_NOT_LOCKED); } control = nctl; } } if (strm->pd_api_started) { /* * Can't add more must have gotten an un-ordered above being * partially delivered. 
*/ return (0); } deliver_more: next_to_del = strm->last_mid_delivered + 1; if (control) { SCTPDBG(SCTP_DEBUG_XXX, "Looking at control: %p e(%d) ssn: %u top_fsn: %u inc_fsn: %u (nxtdel: %u)- o\n", control, control->end_added, control->mid, control->top_fsn, control->fsn_included, next_to_del); nctl = TAILQ_NEXT(control, next_instrm); if (SCTP_MID_EQ(asoc->idata_supported, control->mid, next_to_del) && (control->first_frag_seen)) { int done; /* Ok we can deliver it onto the stream. */ if (control->end_added) { /* We are done with it afterwards */ if (control->on_strm_q) { #ifdef INVARIANTS if (control->on_strm_q != SCTP_ON_ORDERED) { panic("Huh control: %p on_q: %d -- not ordered?", control, control->on_strm_q); } #endif SCTP_STAT_INCR_COUNTER64(sctps_reasmusrmsgs); TAILQ_REMOVE(&strm->inqueue, control, next_instrm); if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); control->on_strm_q = 0; } ret++; } if (((control->sinfo_flags >> 8) & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG) { /* * A singleton now slipping through - mark * it non-revokable too */ sctp_mark_non_revokable(asoc, control->sinfo_tsn); } else if (control->end_added == 0) { /* * Check if we can defer adding until its * all there */ if ((control->length < pd_point) || (strm->pd_api_started)) { /* * Don't need it or cannot add more * (one being delivered that way) */ goto out; } } done = (control->end_added) && (control->last_frag_seen); if (control->on_read_q == 0) { + if (!done) { + if (asoc->size_on_all_streams >= control->length) { + asoc->size_on_all_streams -= control->length; + } else { +#ifdef INVARIANTS + panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); +#else + asoc->size_on_all_streams = 0; +#endif + } + strm->pd_api_started = 1; + control->pdapi_started = 1; + } sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, control->end_added, inp_read_lock_held, SCTP_SO_NOT_LOCKED); } strm->last_mid_delivered = next_to_del; if (done) { control = nctl; goto deliver_more; - } else { - /* We are now doing PD API */ - strm->pd_api_started = 1; - control->pdapi_started = 1; } } } out: return (ret); } uint32_t sctp_add_chk_to_control(struct sctp_queued_to_read *control, struct sctp_stream_in *strm, struct sctp_tcb *stcb, struct sctp_association *asoc, struct sctp_tmit_chunk *chk, int hold_rlock) { /* * Given a control and a chunk, merge the data from the chk onto the * control and free up the chunk resources. */ uint32_t added = 0; int i_locked = 0; if (control->on_read_q && (hold_rlock == 0)) { /* * Its being pd-api'd so we must do some locks. 
*/ SCTP_INP_READ_LOCK(stcb->sctp_ep); i_locked = 1; } if (control->data == NULL) { control->data = chk->data; sctp_setup_tail_pointer(control); } else { sctp_add_to_tail_pointer(control, chk->data, &added); } control->fsn_included = chk->rec.data.fsn; asoc->size_on_reasm_queue -= chk->send_size; sctp_ucount_decr(asoc->cnt_on_reasm_queue); sctp_mark_non_revokable(asoc, chk->rec.data.tsn); chk->data = NULL; if (chk->rec.data.rcv_flags & SCTP_DATA_FIRST_FRAG) { control->first_frag_seen = 1; control->sinfo_tsn = chk->rec.data.tsn; control->sinfo_ppid = chk->rec.data.ppid; } if (chk->rec.data.rcv_flags & SCTP_DATA_LAST_FRAG) { /* It's complete */ if ((control->on_strm_q) && (control->on_read_q)) { if (control->pdapi_started) { control->pdapi_started = 0; strm->pd_api_started = 0; } if (control->on_strm_q == SCTP_ON_UNORDERED) { /* Unordered */ TAILQ_REMOVE(&strm->uno_inqueue, control, next_instrm); control->on_strm_q = 0; } else if (control->on_strm_q == SCTP_ON_ORDERED) { /* Ordered */ TAILQ_REMOVE(&strm->inqueue, control, next_instrm); - if (asoc->size_on_all_streams >= control->length) { - asoc->size_on_all_streams -= control->length; - } else { -#ifdef INVARIANTS - panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); -#else - asoc->size_on_all_streams = 0; -#endif - } + /* + * Don't need to decrement + * size_on_all_streams, since control is on + * the read queue. + */ sctp_ucount_decr(asoc->cnt_on_all_streams); control->on_strm_q = 0; #ifdef INVARIANTS } else if (control->on_strm_q) { panic("Unknown state on ctrl: %p on_strm_q: %d", control, control->on_strm_q); #endif } } control->end_added = 1; control->last_frag_seen = 1; } if (i_locked) { SCTP_INP_READ_UNLOCK(stcb->sctp_ep); } sctp_free_a_chunk(stcb, chk, SCTP_SO_NOT_LOCKED); return (added); } /* * Dump onto the re-assembly queue, in its proper place. After dumping on the * queue, see if anything can be delivered. If so pull it off (or as much as * we can). If we run out of space then we must dump what we can and set the * appropriate flag to say we queued what we could. */ static void sctp_queue_data_for_reasm(struct sctp_tcb *stcb, struct sctp_association *asoc, struct sctp_queued_to_read *control, struct sctp_tmit_chunk *chk, int created_control, int *abort_flag, uint32_t tsn) { uint32_t next_fsn; struct sctp_tmit_chunk *at, *nat; struct sctp_stream_in *strm; int do_wakeup, unordered; uint32_t lenadded; strm = &asoc->strmin[control->sinfo_stream]; /* * For old un-ordered data chunks. */ if ((control->sinfo_flags >> 8) & SCTP_DATA_UNORDERED) { unordered = 1; } else { unordered = 0; } /* Must be added to the stream-in queue */ if (created_control) { if (unordered == 0) { sctp_ucount_incr(asoc->cnt_on_all_streams); } if (sctp_place_control_in_stream(strm, asoc, control)) { /* Duplicate SSN? */ sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_6); sctp_clean_up_control(stcb, control); return; } if ((tsn == (asoc->cumulative_tsn + 1) && (asoc->idata_supported == 0))) { /* * Ok we created this control and now let's validate * that it's legal, i.e., there is a B bit set; if not, * and we have up to the cum-ack, then it's invalid.
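* For example: with cumulative_tsn 1000, a chunk carrying TSN 1001 without the B (first fragment) bit can never be completed, since any earlier fragment would have to carry a TSN we have already cum-acked.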
*/ if ((chk->rec.data.rcv_flags & SCTP_DATA_FIRST_FRAG) == 0) { sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_7); return; } } } if ((asoc->idata_supported == 0) && (unordered == 1)) { sctp_inject_old_unordered_data(stcb, asoc, control, chk, abort_flag); return; } /* * Ok we must queue the chunk into the reasembly portion: o if its * the first it goes to the control mbuf. o if its not first but the * next in sequence it goes to the control, and each succeeding one * in order also goes. o if its not in order we place it on the list * in its place. */ if (chk->rec.data.rcv_flags & SCTP_DATA_FIRST_FRAG) { /* Its the very first one. */ SCTPDBG(SCTP_DEBUG_XXX, "chunk is a first fsn: %u becomes fsn_included\n", chk->rec.data.fsn); if (control->first_frag_seen) { /* * Error on senders part, they either sent us two * data chunks with FIRST, or they sent two * un-ordered chunks that were fragmented at the * same time in the same stream. */ sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_8); return; } control->first_frag_seen = 1; control->sinfo_ppid = chk->rec.data.ppid; control->sinfo_tsn = chk->rec.data.tsn; control->fsn_included = chk->rec.data.fsn; control->data = chk->data; sctp_mark_non_revokable(asoc, chk->rec.data.tsn); chk->data = NULL; sctp_free_a_chunk(stcb, chk, SCTP_SO_NOT_LOCKED); sctp_setup_tail_pointer(control); asoc->size_on_all_streams += control->length; } else { /* Place the chunk in our list */ int inserted = 0; if (control->last_frag_seen == 0) { /* Still willing to raise highest FSN seen */ if (SCTP_TSN_GT(chk->rec.data.fsn, control->top_fsn)) { SCTPDBG(SCTP_DEBUG_XXX, "We have a new top_fsn: %u\n", chk->rec.data.fsn); control->top_fsn = chk->rec.data.fsn; } if (chk->rec.data.rcv_flags & SCTP_DATA_LAST_FRAG) { SCTPDBG(SCTP_DEBUG_XXX, "The last fsn is now in place fsn: %u\n", chk->rec.data.fsn); control->last_frag_seen = 1; } if (asoc->idata_supported || control->first_frag_seen) { /* * For IDATA we always check since we know * that the first fragment is 0. For old * DATA we have to receive the first before * we know the first FSN (which is the TSN). */ if (SCTP_TSN_GE(control->fsn_included, chk->rec.data.fsn)) { /* * We have already delivered up to * this so its a dup */ sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_9); return; } } } else { if (chk->rec.data.rcv_flags & SCTP_DATA_LAST_FRAG) { /* Second last? huh? */ SCTPDBG(SCTP_DEBUG_XXX, "Duplicate last fsn: %u (top: %u) -- abort\n", chk->rec.data.fsn, control->top_fsn); sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_10); return; } if (asoc->idata_supported || control->first_frag_seen) { /* * For IDATA we always check since we know * that the first fragment is 0. For old * DATA we have to receive the first before * we know the first FSN (which is the TSN). 
*/ if (SCTP_TSN_GE(control->fsn_included, chk->rec.data.fsn)) { /* * We have already delivered up to * this so its a dup */ SCTPDBG(SCTP_DEBUG_XXX, "New fsn: %u is already seen in included_fsn: %u -- abort\n", chk->rec.data.fsn, control->fsn_included); sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_11); return; } } /* * validate not beyond top FSN if we have seen last * one */ if (SCTP_TSN_GT(chk->rec.data.fsn, control->top_fsn)) { SCTPDBG(SCTP_DEBUG_XXX, "New fsn: %u is beyond or at top_fsn: %u -- abort\n", chk->rec.data.fsn, control->top_fsn); sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_12); return; } } /* * If we reach here, we need to place the new chunk in the * reassembly for this control. */ SCTPDBG(SCTP_DEBUG_XXX, "chunk is a not first fsn: %u needs to be inserted\n", chk->rec.data.fsn); TAILQ_FOREACH(at, &control->reasm, sctp_next) { if (SCTP_TSN_GT(at->rec.data.fsn, chk->rec.data.fsn)) { /* * This one in queue is bigger than the new * one, insert the new one before at. */ SCTPDBG(SCTP_DEBUG_XXX, "Insert it before fsn: %u\n", at->rec.data.fsn); asoc->size_on_reasm_queue += chk->send_size; sctp_ucount_incr(asoc->cnt_on_reasm_queue); TAILQ_INSERT_BEFORE(at, chk, sctp_next); inserted = 1; break; } else if (at->rec.data.fsn == chk->rec.data.fsn) { /* * Gak, He sent me a duplicate str seq * number */ /* * foo bar, I guess I will just free this * new guy, should we abort too? FIX ME * MAYBE? Or it COULD be that the SSN's have * wrapped. Maybe I should compare to TSN * somehow... sigh for now just blow away * the chunk! */ SCTPDBG(SCTP_DEBUG_XXX, "Duplicate to fsn: %u -- abort\n", at->rec.data.fsn); sctp_abort_in_reasm(stcb, control, chk, abort_flag, SCTP_FROM_SCTP_INDATA + SCTP_LOC_13); return; } } if (inserted == 0) { /* Goes on the end */ SCTPDBG(SCTP_DEBUG_XXX, "Inserting at tail of list fsn: %u\n", chk->rec.data.fsn); asoc->size_on_reasm_queue += chk->send_size; sctp_ucount_incr(asoc->cnt_on_reasm_queue); TAILQ_INSERT_TAIL(&control->reasm, chk, sctp_next); } } /* * Ok lets see if we can suck any up into the control structure that * are in seq if it makes sense. */ do_wakeup = 0; /* * If the first fragment has not been seen there is no sense in * looking. */ if (control->first_frag_seen) { next_fsn = control->fsn_included + 1; TAILQ_FOREACH_SAFE(at, &control->reasm, sctp_next, nat) { if (at->rec.data.fsn == next_fsn) { /* We can add this one now to the control */ SCTPDBG(SCTP_DEBUG_XXX, "Adding more to control: %p at: %p fsn: %u next_fsn: %u included: %u\n", control, at, at->rec.data.fsn, next_fsn, control->fsn_included); TAILQ_REMOVE(&control->reasm, at, sctp_next); lenadded = sctp_add_chk_to_control(control, strm, stcb, asoc, at, SCTP_READ_LOCK_NOT_HELD); if (control->on_read_q) { do_wakeup = 1; } else { /* * We only add to the * size-on-all-streams if its not on * the read q. The read q flag will * cause a sballoc so its accounted * for there. 
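* (Adding it in both places would double-charge the receive-side accounting for the same data.)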
*/ asoc->size_on_all_streams += lenadded; } next_fsn++; if (control->end_added && control->pdapi_started) { if (strm->pd_api_started) { strm->pd_api_started = 0; control->pdapi_started = 0; } if (control->on_read_q == 0) { sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, control->end_added, SCTP_READ_LOCK_NOT_HELD, SCTP_SO_NOT_LOCKED); } break; } } else { break; } } } if (do_wakeup) { /* Need to wakeup the reader */ sctp_wakeup_the_read_socket(stcb->sctp_ep, stcb, SCTP_SO_NOT_LOCKED); } } static struct sctp_queued_to_read * sctp_find_reasm_entry(struct sctp_stream_in *strm, uint32_t mid, int ordered, int idata_supported) { struct sctp_queued_to_read *control; if (ordered) { TAILQ_FOREACH(control, &strm->inqueue, next_instrm) { if (SCTP_MID_EQ(idata_supported, control->mid, mid)) { break; } } } else { if (idata_supported) { TAILQ_FOREACH(control, &strm->uno_inqueue, next_instrm) { if (SCTP_MID_EQ(idata_supported, control->mid, mid)) { break; } } } else { control = TAILQ_FIRST(&strm->uno_inqueue); } } return (control); } static int sctp_process_a_data_chunk(struct sctp_tcb *stcb, struct sctp_association *asoc, struct mbuf **m, int offset, int chk_length, struct sctp_nets *net, uint32_t *high_tsn, int *abort_flag, int *break_flag, int last_chunk, uint8_t chk_type) { /* Process a data chunk */ /* struct sctp_tmit_chunk *chk; */ struct sctp_tmit_chunk *chk; uint32_t tsn, fsn, gap, mid; struct mbuf *dmbuf; int the_len; int need_reasm_check = 0; uint16_t sid; struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; struct sctp_queued_to_read *control, *ncontrol; uint32_t ppid; uint8_t chk_flags; struct sctp_stream_reset_list *liste; int ordered; size_t clen; int created_control = 0; if (chk_type == SCTP_IDATA) { struct sctp_idata_chunk *chunk, chunk_buf; chunk = (struct sctp_idata_chunk *)sctp_m_getptr(*m, offset, sizeof(struct sctp_idata_chunk), (uint8_t *)&chunk_buf); chk_flags = chunk->ch.chunk_flags; clen = sizeof(struct sctp_idata_chunk); tsn = ntohl(chunk->dp.tsn); sid = ntohs(chunk->dp.sid); mid = ntohl(chunk->dp.mid); if (chk_flags & SCTP_DATA_FIRST_FRAG) { fsn = 0; ppid = chunk->dp.ppid_fsn.ppid; } else { fsn = ntohl(chunk->dp.ppid_fsn.fsn); ppid = 0xffffffff; /* Use as an invalid value. */ } } else { struct sctp_data_chunk *chunk, chunk_buf; chunk = (struct sctp_data_chunk *)sctp_m_getptr(*m, offset, sizeof(struct sctp_data_chunk), (uint8_t *)&chunk_buf); chk_flags = chunk->ch.chunk_flags; clen = sizeof(struct sctp_data_chunk); tsn = ntohl(chunk->dp.tsn); sid = ntohs(chunk->dp.sid); mid = (uint32_t)(ntohs(chunk->dp.ssn)); fsn = tsn; ppid = chunk->dp.ppid; } if ((size_t)chk_length == clen) { /* * Need to send an abort since we had a empty data chunk. 
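* (RFC 4960 requires that a DATA chunk carrying no user data be answered with an ABORT that includes the "No User Data" error cause; sctp_generate_no_user_data_cause() below builds exactly that cause.)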
*/ op_err = sctp_generate_no_user_data_cause(tsn); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_14; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); *abort_flag = 1; return (0); } if ((chk_flags & SCTP_DATA_SACK_IMMEDIATELY) == SCTP_DATA_SACK_IMMEDIATELY) { asoc->send_sack = 1; } ordered = ((chk_flags & SCTP_DATA_UNORDERED) == 0); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MAP_LOGGING_ENABLE) { sctp_log_map(tsn, asoc->cumulative_tsn, asoc->highest_tsn_inside_map, SCTP_MAP_TSN_ENTERS); } if (stcb == NULL) { return (0); } SCTP_LTRACE_CHK(stcb->sctp_ep, stcb, chk_type, tsn); if (SCTP_TSN_GE(asoc->cumulative_tsn, tsn)) { /* It is a duplicate */ SCTP_STAT_INCR(sctps_recvdupdata); if (asoc->numduptsns < SCTP_MAX_DUP_TSNS) { /* Record a dup for the next outbound sack */ asoc->dup_tsns[asoc->numduptsns] = tsn; asoc->numduptsns++; } asoc->send_sack = 1; return (0); } /* Calculate the number of TSN's between the base and this TSN */ SCTP_CALC_TSN_TO_GAP(gap, tsn, asoc->mapping_array_base_tsn); if (gap >= (SCTP_MAPPING_ARRAY << 3)) { /* Can't hold the bit in the mapping at max array, toss it */ return (0); } if (gap >= (uint32_t)(asoc->mapping_array_size << 3)) { SCTP_TCB_LOCK_ASSERT(stcb); if (sctp_expand_mapping_array(asoc, gap)) { /* Can't expand, drop it */ return (0); } } if (SCTP_TSN_GT(tsn, *high_tsn)) { *high_tsn = tsn; } /* See if we have received this one already */ if (SCTP_IS_TSN_PRESENT(asoc->mapping_array, gap) || SCTP_IS_TSN_PRESENT(asoc->nr_mapping_array, gap)) { SCTP_STAT_INCR(sctps_recvdupdata); if (asoc->numduptsns < SCTP_MAX_DUP_TSNS) { /* Record a dup for the next outbound sack */ asoc->dup_tsns[asoc->numduptsns] = tsn; asoc->numduptsns++; } asoc->send_sack = 1; return (0); } /* * Check to see about the GONE flag, duplicates would cause a sack * to be sent up above */ if (((stcb->sctp_ep->sctp_flags & SCTP_PCB_FLAGS_SOCKET_GONE) || (stcb->sctp_ep->sctp_flags & SCTP_PCB_FLAGS_SOCKET_ALLGONE) || (stcb->asoc.state & SCTP_STATE_CLOSED_SOCKET))) { /* * wait a minute, this guy is gone, there is no longer a * receiver. Send peer an ABORT! */ op_err = sctp_generate_cause(SCTP_CAUSE_OUT_OF_RESC, ""); sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); *abort_flag = 1; return (0); } /* * Now before going further we see if there is room. If NOT then we * MAY let one through only IF this TSN is the one we are waiting * for on a partial delivery API. */ /* Is the stream valid? 
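* An out-of-range SID is not fatal: we queue an "Invalid Stream Identifier" operational error back to the peer but still consume the TSN (marking it in the nr-mapping array) so the cum-ack can advance past it.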
*/ if (sid >= asoc->streamincnt) { struct sctp_error_invalid_stream *cause; op_err = sctp_get_mbuf_for_msg(sizeof(struct sctp_error_invalid_stream), 0, M_NOWAIT, 1, MT_DATA); if (op_err != NULL) { /* add some space up front so prepend will work well */ SCTP_BUF_RESV_UF(op_err, sizeof(struct sctp_chunkhdr)); cause = mtod(op_err, struct sctp_error_invalid_stream *); /* * Error causes are just param's and this one has * two back to back phdr, one with the error type * and size, the other with the streamid and a rsvd */ SCTP_BUF_LEN(op_err) = sizeof(struct sctp_error_invalid_stream); cause->cause.code = htons(SCTP_CAUSE_INVALID_STREAM); cause->cause.length = htons(sizeof(struct sctp_error_invalid_stream)); cause->stream_id = htons(sid); cause->reserved = htons(0); sctp_queue_op_err(stcb, op_err); } SCTP_STAT_INCR(sctps_badsid); SCTP_TCB_LOCK_ASSERT(stcb); SCTP_SET_TSN_PRESENT(asoc->nr_mapping_array, gap); if (SCTP_TSN_GT(tsn, asoc->highest_tsn_inside_nr_map)) { asoc->highest_tsn_inside_nr_map = tsn; } if (tsn == (asoc->cumulative_tsn + 1)) { /* Update cum-ack */ asoc->cumulative_tsn = tsn; } return (0); } /* * If its a fragmented message, lets see if we can find the control * on the reassembly queues. */ if ((chk_type == SCTP_IDATA) && ((chk_flags & SCTP_DATA_FIRST_FRAG) == 0) && (fsn == 0)) { /* * The first *must* be fsn 0, and other (middle/end) pieces * can *not* be fsn 0. XXX: This can happen in case of a * wrap around. Ignore is for now. */ snprintf(msg, sizeof(msg), "FSN zero for MID=%8.8x, but flags=%2.2x", mid, chk_flags); goto err_out; } control = sctp_find_reasm_entry(&asoc->strmin[sid], mid, ordered, asoc->idata_supported); SCTPDBG(SCTP_DEBUG_XXX, "chunk_flags:0x%x look for control on queues %p\n", chk_flags, control); if ((chk_flags & SCTP_DATA_NOT_FRAG) != SCTP_DATA_NOT_FRAG) { /* See if we can find the re-assembly entity */ if (control != NULL) { /* We found something, does it belong? */ if (ordered && (mid != control->mid)) { snprintf(msg, sizeof(msg), "Reassembly problem (MID=%8.8x)", mid); err_out: op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_15; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); *abort_flag = 1; return (0); } if (ordered && ((control->sinfo_flags >> 8) & SCTP_DATA_UNORDERED)) { /* * We can't have a switched order with an * unordered chunk */ snprintf(msg, sizeof(msg), "All fragments of a user message must be ordered or unordered (TSN=%8.8x)", tsn); goto err_out; } if (!ordered && (((control->sinfo_flags >> 8) & SCTP_DATA_UNORDERED) == 0)) { /* * We can't have a switched unordered with a * ordered chunk */ snprintf(msg, sizeof(msg), "All fragments of a user message must be ordered or unordered (TSN=%8.8x)", tsn); goto err_out; } } } else { /* * Its a complete segment. Lets validate we don't have a * re-assembly going on with the same Stream/Seq (for * ordered) or in the same Stream for unordered. 
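* For ordered or I-DATA traffic such an overlap is a protocol violation and we abort; for old unordered DATA it can happen legitimately, and the chunk is treated as a fresh message (unless it lands exactly where an unfinished reassembly expects its next fragment).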
*/ if (control != NULL) { if (ordered || asoc->idata_supported) { SCTPDBG(SCTP_DEBUG_XXX, "chunk_flags: 0x%x dup detected on MID: %u\n", chk_flags, mid); snprintf(msg, sizeof(msg), "Duplicate MID=%8.8x detected.", mid); goto err_out; } else { if ((tsn == control->fsn_included + 1) && (control->end_added == 0)) { snprintf(msg, sizeof(msg), "Illegal message sequence, missing end for MID: %8.8x", control->fsn_included); goto err_out; } else { control = NULL; } } } } /* now do the tests */ if (((asoc->cnt_on_all_streams + asoc->cnt_on_reasm_queue + asoc->cnt_msg_on_sb) >= SCTP_BASE_SYSCTL(sctp_max_chunks_on_queue)) || (((int)asoc->my_rwnd) <= 0)) { /* * When we have NO room in the rwnd we check to make sure * the reader is doing its job... */ if (stcb->sctp_socket->so_rcv.sb_cc) { /* some to read, wake-up */ #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) struct socket *so; so = SCTP_INP_SO(stcb->sctp_ep); atomic_add_int(&stcb->asoc.refcnt, 1); SCTP_TCB_UNLOCK(stcb); SCTP_SOCKET_LOCK(so, 1); SCTP_TCB_LOCK(stcb); atomic_subtract_int(&stcb->asoc.refcnt, 1); if (stcb->asoc.state & SCTP_STATE_CLOSED_SOCKET) { /* assoc was freed while we were unlocked */ SCTP_SOCKET_UNLOCK(so, 1); return (0); } #endif sctp_sorwakeup(stcb->sctp_ep, stcb->sctp_socket); #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) SCTP_SOCKET_UNLOCK(so, 1); #endif } /* now is it in the mapping array of what we have accepted? */ if (chk_type == SCTP_DATA) { if (SCTP_TSN_GT(tsn, asoc->highest_tsn_inside_map) && SCTP_TSN_GT(tsn, asoc->highest_tsn_inside_nr_map)) { /* Nope not in the valid range dump it */ dump_packet: sctp_set_rwnd(stcb, asoc); if ((asoc->cnt_on_all_streams + asoc->cnt_on_reasm_queue + asoc->cnt_msg_on_sb) >= SCTP_BASE_SYSCTL(sctp_max_chunks_on_queue)) { SCTP_STAT_INCR(sctps_datadropchklmt); } else { SCTP_STAT_INCR(sctps_datadroprwnd); } *break_flag = 1; return (0); } } else { if (control == NULL) { goto dump_packet; } if (SCTP_TSN_GT(fsn, control->top_fsn)) { goto dump_packet; } } } #ifdef SCTP_ASOCLOG_OF_TSNS SCTP_TCB_LOCK_ASSERT(stcb); if (asoc->tsn_in_at >= SCTP_TSN_LOG_SIZE) { asoc->tsn_in_at = 0; asoc->tsn_in_wrapped = 1; } asoc->in_tsnlog[asoc->tsn_in_at].tsn = tsn; asoc->in_tsnlog[asoc->tsn_in_at].strm = sid; asoc->in_tsnlog[asoc->tsn_in_at].seq = mid; asoc->in_tsnlog[asoc->tsn_in_at].sz = chk_length; asoc->in_tsnlog[asoc->tsn_in_at].flgs = chunk_flags; asoc->in_tsnlog[asoc->tsn_in_at].stcb = (void *)stcb; asoc->in_tsnlog[asoc->tsn_in_at].in_pos = asoc->tsn_in_at; asoc->in_tsnlog[asoc->tsn_in_at].in_out = 1; asoc->tsn_in_at++; #endif /* * Before we continue lets validate that we are not being fooled by * an evil attacker. We can only have Nk chunks based on our TSN * spread allowed by the mapping array N * 8 bits, so there is no * way our stream sequence numbers could have wrapped. We of course * only validate the FIRST fragment so the bit must be set. */ if ((chk_flags & SCTP_DATA_FIRST_FRAG) && (TAILQ_EMPTY(&asoc->resetHead)) && (chk_flags & SCTP_DATA_UNORDERED) == 0 && SCTP_MID_GE(asoc->idata_supported, asoc->strmin[sid].last_mid_delivered, mid)) { /* The incoming sseq is behind where we last delivered? 
*/ SCTPDBG(SCTP_DEBUG_INDATA1, "EVIL/Broken-Dup S-SEQ: %u delivered: %u from peer, Abort!\n", mid, asoc->strmin[sid].last_mid_delivered); if (asoc->idata_supported) { snprintf(msg, sizeof(msg), "Delivered MID=%8.8x, got TSN=%8.8x, SID=%4.4x, MID=%8.8x", asoc->strmin[sid].last_mid_delivered, tsn, sid, mid); } else { snprintf(msg, sizeof(msg), "Delivered SSN=%4.4x, got TSN=%8.8x, SID=%4.4x, SSN=%4.4x", (uint16_t)asoc->strmin[sid].last_mid_delivered, tsn, sid, (uint16_t)mid); } op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_16; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); *abort_flag = 1; return (0); } if (chk_type == SCTP_IDATA) { the_len = (chk_length - sizeof(struct sctp_idata_chunk)); } else { the_len = (chk_length - sizeof(struct sctp_data_chunk)); } if (last_chunk == 0) { if (chk_type == SCTP_IDATA) { dmbuf = SCTP_M_COPYM(*m, (offset + sizeof(struct sctp_idata_chunk)), the_len, M_NOWAIT); } else { dmbuf = SCTP_M_COPYM(*m, (offset + sizeof(struct sctp_data_chunk)), the_len, M_NOWAIT); } #ifdef SCTP_MBUF_LOGGING if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MBUF_LOGGING_ENABLE) { sctp_log_mbc(dmbuf, SCTP_MBUF_ICOPY); } #endif } else { /* We can steal the last chunk */ int l_len; dmbuf = *m; /* lop off the top part */ if (chk_type == SCTP_IDATA) { m_adj(dmbuf, (offset + sizeof(struct sctp_idata_chunk))); } else { m_adj(dmbuf, (offset + sizeof(struct sctp_data_chunk))); } if (SCTP_BUF_NEXT(dmbuf) == NULL) { l_len = SCTP_BUF_LEN(dmbuf); } else { /* * need to count up the size hopefully does not hit * this to often :-0 */ struct mbuf *lat; l_len = 0; for (lat = dmbuf; lat; lat = SCTP_BUF_NEXT(lat)) { l_len += SCTP_BUF_LEN(lat); } } if (l_len > the_len) { /* Trim the end round bytes off too */ m_adj(dmbuf, -(l_len - the_len)); } } if (dmbuf == NULL) { SCTP_STAT_INCR(sctps_nomem); return (0); } /* * Now no matter what, we need a control, get one if we don't have * one (we may have gotten it above when we found the message was * fragmented */ if (control == NULL) { sctp_alloc_a_readq(stcb, control); sctp_build_readq_entry_mac(control, stcb, asoc->context, net, tsn, ppid, sid, chk_flags, NULL, fsn, mid); if (control == NULL) { SCTP_STAT_INCR(sctps_nomem); return (0); } if ((chk_flags & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG) { struct mbuf *mm; control->data = dmbuf; for (mm = control->data; mm; mm = mm->m_next) { control->length += SCTP_BUF_LEN(mm); } control->tail_mbuf = NULL; control->end_added = 1; control->last_frag_seen = 1; control->first_frag_seen = 1; control->fsn_included = fsn; control->top_fsn = fsn; } created_control = 1; } SCTPDBG(SCTP_DEBUG_XXX, "chunk_flags: 0x%x ordered: %d MID: %u control: %p\n", chk_flags, ordered, mid, control); if ((chk_flags & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG && TAILQ_EMPTY(&asoc->resetHead) && ((ordered == 0) || (SCTP_MID_EQ(asoc->idata_supported, asoc->strmin[sid].last_mid_delivered + 1, mid) && TAILQ_EMPTY(&asoc->strmin[sid].inqueue)))) { /* Candidate for express delivery */ /* * Its not fragmented, No PD-API is up, Nothing in the * delivery queue, Its un-ordered OR ordered and the next to * deliver AND nothing else is stuck on the stream queue, * And there is room for it in the socket buffer. Lets just * stuff it up the buffer.... 
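* This express path bypasses the reassembly and ordering queues entirely and hands the message straight to the socket buffer; sctps_recvexpress counts how often we take it.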
*/ SCTP_SET_TSN_PRESENT(asoc->nr_mapping_array, gap); if (SCTP_TSN_GT(tsn, asoc->highest_tsn_inside_nr_map)) { asoc->highest_tsn_inside_nr_map = tsn; } SCTPDBG(SCTP_DEBUG_XXX, "Injecting control: %p to be read (MID: %u)\n", control, mid); sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, 1, SCTP_READ_LOCK_NOT_HELD, SCTP_SO_NOT_LOCKED); if ((chk_flags & SCTP_DATA_UNORDERED) == 0) { /* for ordered, bump what we delivered */ asoc->strmin[sid].last_mid_delivered++; } SCTP_STAT_INCR(sctps_recvexpress); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_STR_LOGGING_ENABLE) { sctp_log_strm_del_alt(stcb, tsn, mid, sid, SCTP_STR_LOG_FROM_EXPRS_DEL); } control = NULL; goto finish_express_del; } /* Now will we need a chunk too? */ if ((chk_flags & SCTP_DATA_NOT_FRAG) != SCTP_DATA_NOT_FRAG) { sctp_alloc_a_chunk(stcb, chk); if (chk == NULL) { /* No memory so we drop the chunk */ SCTP_STAT_INCR(sctps_nomem); if (last_chunk == 0) { /* we copied it, free the copy */ sctp_m_freem(dmbuf); } return (0); } chk->rec.data.tsn = tsn; chk->no_fr_allowed = 0; chk->rec.data.fsn = fsn; chk->rec.data.mid = mid; chk->rec.data.sid = sid; chk->rec.data.ppid = ppid; chk->rec.data.context = stcb->asoc.context; chk->rec.data.doing_fast_retransmit = 0; chk->rec.data.rcv_flags = chk_flags; chk->asoc = asoc; chk->send_size = the_len; chk->whoTo = net; SCTPDBG(SCTP_DEBUG_XXX, "Building ck: %p for control: %p to be read (MID: %u)\n", chk, control, mid); atomic_add_int(&net->ref_count, 1); chk->data = dmbuf; } /* Set the appropriate TSN mark */ if (SCTP_BASE_SYSCTL(sctp_do_drain) == 0) { SCTP_SET_TSN_PRESENT(asoc->nr_mapping_array, gap); if (SCTP_TSN_GT(tsn, asoc->highest_tsn_inside_nr_map)) { asoc->highest_tsn_inside_nr_map = tsn; } } else { SCTP_SET_TSN_PRESENT(asoc->mapping_array, gap); if (SCTP_TSN_GT(tsn, asoc->highest_tsn_inside_map)) { asoc->highest_tsn_inside_map = tsn; } } /* Now is it complete (i.e. not fragmented)? */ if ((chk_flags & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG) { /* * Special check for when streams are resetting. We could be * more smart about this and check the actual stream to see * if it is not being reset.. that way we would not create a * HOLB when amongst streams being reset and those not being * reset. * */ if (((liste = TAILQ_FIRST(&asoc->resetHead)) != NULL) && SCTP_TSN_GT(tsn, liste->tsn)) { /* * yep its past where we need to reset... go ahead * and queue it. */ if (TAILQ_EMPTY(&asoc->pending_reply_queue)) { /* first one on */ TAILQ_INSERT_TAIL(&asoc->pending_reply_queue, control, next); } else { struct sctp_queued_to_read *lcontrol, *nlcontrol; unsigned char inserted = 0; TAILQ_FOREACH_SAFE(lcontrol, &asoc->pending_reply_queue, next, nlcontrol) { if (SCTP_TSN_GT(control->sinfo_tsn, lcontrol->sinfo_tsn)) { continue; } else { /* found it */ TAILQ_INSERT_BEFORE(lcontrol, control, next); inserted = 1; break; } } if (inserted == 0) { /* * must be put at end, use prevP * (all setup from loop) to setup * nextP. 
*/ TAILQ_INSERT_TAIL(&asoc->pending_reply_queue, control, next); } } goto finish_express_del; } if (chk_flags & SCTP_DATA_UNORDERED) { /* queue directly into socket buffer */ SCTPDBG(SCTP_DEBUG_XXX, "Unordered data to be read control: %p MID: %u\n", control, mid); sctp_mark_non_revokable(asoc, control->sinfo_tsn); sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, 1, SCTP_READ_LOCK_NOT_HELD, SCTP_SO_NOT_LOCKED); } else { SCTPDBG(SCTP_DEBUG_XXX, "Queue control: %p for reordering MID: %u\n", control, mid); sctp_queue_data_to_stream(stcb, asoc, control, abort_flag, &need_reasm_check); if (*abort_flag) { if (last_chunk) { *m = NULL; } return (0); } } goto finish_express_del; } /* If we reach here its a reassembly */ need_reasm_check = 1; SCTPDBG(SCTP_DEBUG_XXX, "Queue data to stream for reasm control: %p MID: %u\n", control, mid); sctp_queue_data_for_reasm(stcb, asoc, control, chk, created_control, abort_flag, tsn); if (*abort_flag) { /* * the assoc is now gone and chk was put onto the reasm * queue, which has all been freed. */ if (last_chunk) { *m = NULL; } return (0); } finish_express_del: /* Here we tidy up things */ if (tsn == (asoc->cumulative_tsn + 1)) { /* Update cum-ack */ asoc->cumulative_tsn = tsn; } if (last_chunk) { *m = NULL; } if (ordered) { SCTP_STAT_INCR_COUNTER64(sctps_inorderchunks); } else { SCTP_STAT_INCR_COUNTER64(sctps_inunorderchunks); } SCTP_STAT_INCR(sctps_recvdata); /* Set it present please */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_STR_LOGGING_ENABLE) { sctp_log_strm_del_alt(stcb, tsn, mid, sid, SCTP_STR_LOG_FROM_MARK_TSN); } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MAP_LOGGING_ENABLE) { sctp_log_map(asoc->mapping_array_base_tsn, asoc->cumulative_tsn, asoc->highest_tsn_inside_map, SCTP_MAP_PREPARE_SLIDE); } if (need_reasm_check) { (void)sctp_deliver_reasm_check(stcb, asoc, &asoc->strmin[sid], SCTP_READ_LOCK_NOT_HELD); need_reasm_check = 0; } /* check the special flag for stream resets */ if (((liste = TAILQ_FIRST(&asoc->resetHead)) != NULL) && SCTP_TSN_GE(asoc->cumulative_tsn, liste->tsn)) { /* * we have finished working through the backlogged TSN's now * time to reset streams. 1: call reset function. 2: free * pending_reply space 3: distribute any chunks in * pending_reply_queue. 
*/ sctp_reset_in_stream(stcb, liste->number_entries, liste->list_of_streams); TAILQ_REMOVE(&asoc->resetHead, liste, next_resp); sctp_send_deferred_reset_response(stcb, liste, SCTP_STREAM_RESET_RESULT_PERFORMED); SCTP_FREE(liste, SCTP_M_STRESET); /* sa_ignore FREED_MEMORY */ liste = TAILQ_FIRST(&asoc->resetHead); if (TAILQ_EMPTY(&asoc->resetHead)) { /* All can be removed */ TAILQ_FOREACH_SAFE(control, &asoc->pending_reply_queue, next, ncontrol) { TAILQ_REMOVE(&asoc->pending_reply_queue, control, next); sctp_queue_data_to_stream(stcb, asoc, control, abort_flag, &need_reasm_check); if (*abort_flag) { return (0); } if (need_reasm_check) { (void)sctp_deliver_reasm_check(stcb, asoc, &asoc->strmin[control->sinfo_stream], SCTP_READ_LOCK_NOT_HELD); need_reasm_check = 0; } } } else { TAILQ_FOREACH_SAFE(control, &asoc->pending_reply_queue, next, ncontrol) { if (SCTP_TSN_GT(control->sinfo_tsn, liste->tsn)) { break; } /* * if control->sinfo_tsn is <= liste->tsn we * can process it which is the NOT of * control->sinfo_tsn > liste->tsn */ TAILQ_REMOVE(&asoc->pending_reply_queue, control, next); sctp_queue_data_to_stream(stcb, asoc, control, abort_flag, &need_reasm_check); if (*abort_flag) { return (0); } if (need_reasm_check) { (void)sctp_deliver_reasm_check(stcb, asoc, &asoc->strmin[control->sinfo_stream], SCTP_READ_LOCK_NOT_HELD); need_reasm_check = 0; } } } } return (1); } static const int8_t sctp_map_lookup_tab[256] = { 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 6, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 7, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 6, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 8 }; void sctp_slide_mapping_arrays(struct sctp_tcb *stcb) { /* * Now we also need to check the mapping array in a couple of ways. * 1) Did we move the cum-ack point? * * When you first glance at this you might think that all entries * that make up the position of the cum-ack would be in the * nr-mapping array only.. i.e. things up to the cum-ack are always * deliverable. Thats true with one exception, when its a fragmented * message we may not deliver the data until some threshold (or all * of it) is in place. So we must OR the nr_mapping_array and * mapping_array to get a true picture of the cum-ack. */ struct sctp_association *asoc; int at; uint8_t val; int slide_from, slide_end, lgap, distance; uint32_t old_cumack, old_base, old_highest, highest_tsn; asoc = &stcb->asoc; old_cumack = asoc->cumulative_tsn; old_base = asoc->mapping_array_base_tsn; old_highest = asoc->highest_tsn_inside_map; /* * We could probably improve this a small bit by calculating the * offset of the current cum-ack as the starting point. 
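* sctp_map_lookup_tab[val] is the number of consecutive 1-bits in val counting up from bit 0, e.g. val 0x17 (binary 00010111) yields 3; an equivalent plain-C sketch (illustrative only) is: for (n = 0; n < 8 && (val & (1 << n)); n++);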
*/ at = 0; for (slide_from = 0; slide_from < stcb->asoc.mapping_array_size; slide_from++) { val = asoc->nr_mapping_array[slide_from] | asoc->mapping_array[slide_from]; if (val == 0xff) { at += 8; } else { /* there is a 0 bit */ at += sctp_map_lookup_tab[val]; break; } } asoc->cumulative_tsn = asoc->mapping_array_base_tsn + (at - 1); if (SCTP_TSN_GT(asoc->cumulative_tsn, asoc->highest_tsn_inside_map) && SCTP_TSN_GT(asoc->cumulative_tsn, asoc->highest_tsn_inside_nr_map)) { #ifdef INVARIANTS panic("huh, cumack 0x%x greater than high-tsn 0x%x in map", asoc->cumulative_tsn, asoc->highest_tsn_inside_map); #else SCTP_PRINTF("huh, cumack 0x%x greater than high-tsn 0x%x in map - should panic?\n", asoc->cumulative_tsn, asoc->highest_tsn_inside_map); sctp_print_mapping_array(asoc); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MAP_LOGGING_ENABLE) { sctp_log_map(0, 6, asoc->highest_tsn_inside_map, SCTP_MAP_SLIDE_RESULT); } asoc->highest_tsn_inside_map = asoc->cumulative_tsn; asoc->highest_tsn_inside_nr_map = asoc->cumulative_tsn; #endif } if (SCTP_TSN_GT(asoc->highest_tsn_inside_nr_map, asoc->highest_tsn_inside_map)) { highest_tsn = asoc->highest_tsn_inside_nr_map; } else { highest_tsn = asoc->highest_tsn_inside_map; } if ((asoc->cumulative_tsn == highest_tsn) && (at >= 8)) { /* The complete array was completed by a single FR */ /* highest becomes the cum-ack */ int clr; #ifdef INVARIANTS unsigned int i; #endif /* clear the array */ clr = ((at + 7) >> 3); if (clr > asoc->mapping_array_size) { clr = asoc->mapping_array_size; } memset(asoc->mapping_array, 0, clr); memset(asoc->nr_mapping_array, 0, clr); #ifdef INVARIANTS for (i = 0; i < asoc->mapping_array_size; i++) { if ((asoc->mapping_array[i]) || (asoc->nr_mapping_array[i])) { SCTP_PRINTF("Error Mapping array's not clean at clear\n"); sctp_print_mapping_array(asoc); } } #endif asoc->mapping_array_base_tsn = asoc->cumulative_tsn + 1; asoc->highest_tsn_inside_nr_map = asoc->highest_tsn_inside_map = asoc->cumulative_tsn; } else if (at >= 8) { /* we can slide the mapping array down */ /* slide_from holds where we hit the first NON 0xff byte */ /* * now calculate the ceiling of the move using our highest * TSN value */ SCTP_CALC_TSN_TO_GAP(lgap, highest_tsn, asoc->mapping_array_base_tsn); slide_end = (lgap >> 3); if (slide_end < slide_from) { sctp_print_mapping_array(asoc); #ifdef INVARIANTS panic("impossible slide"); #else SCTP_PRINTF("impossible slide lgap: %x slide_end: %x slide_from: %x? at: %d\n", lgap, slide_end, slide_from, at); return; #endif } if (slide_end > asoc->mapping_array_size) { #ifdef INVARIANTS panic("would overrun buffer"); #else SCTP_PRINTF("Gak, would have overrun map end: %d slide_end: %d\n", asoc->mapping_array_size, slide_end); slide_end = asoc->mapping_array_size; #endif } distance = (slide_end - slide_from) + 1; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MAP_LOGGING_ENABLE) { sctp_log_map(old_base, old_cumack, old_highest, SCTP_MAP_PREPARE_SLIDE); sctp_log_map((uint32_t)slide_from, (uint32_t)slide_end, (uint32_t)lgap, SCTP_MAP_SLIDE_FROM); } if (distance + slide_from > asoc->mapping_array_size || distance < 0) { /* * Here we do NOT slide forward the array so that * hopefully when more data comes in to fill it up * we will be able to slide it forward. 
Really I * don't think this should happen :-0 */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MAP_LOGGING_ENABLE) { sctp_log_map((uint32_t)distance, (uint32_t)slide_from, (uint32_t)asoc->mapping_array_size, SCTP_MAP_SLIDE_NONE); } } else { int ii; for (ii = 0; ii < distance; ii++) { asoc->mapping_array[ii] = asoc->mapping_array[slide_from + ii]; asoc->nr_mapping_array[ii] = asoc->nr_mapping_array[slide_from + ii]; } for (ii = distance; ii < asoc->mapping_array_size; ii++) { asoc->mapping_array[ii] = 0; asoc->nr_mapping_array[ii] = 0; } if (asoc->highest_tsn_inside_map + 1 == asoc->mapping_array_base_tsn) { asoc->highest_tsn_inside_map += (slide_from << 3); } if (asoc->highest_tsn_inside_nr_map + 1 == asoc->mapping_array_base_tsn) { asoc->highest_tsn_inside_nr_map += (slide_from << 3); } asoc->mapping_array_base_tsn += (slide_from << 3); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MAP_LOGGING_ENABLE) { sctp_log_map(asoc->mapping_array_base_tsn, asoc->cumulative_tsn, asoc->highest_tsn_inside_map, SCTP_MAP_SLIDE_RESULT); } } } } void sctp_sack_check(struct sctp_tcb *stcb, int was_a_gap) { struct sctp_association *asoc; uint32_t highest_tsn; int is_a_gap; sctp_slide_mapping_arrays(stcb); asoc = &stcb->asoc; if (SCTP_TSN_GT(asoc->highest_tsn_inside_nr_map, asoc->highest_tsn_inside_map)) { highest_tsn = asoc->highest_tsn_inside_nr_map; } else { highest_tsn = asoc->highest_tsn_inside_map; } /* Is there a gap now? */ is_a_gap = SCTP_TSN_GT(highest_tsn, stcb->asoc.cumulative_tsn); /* * Now we need to see if we need to queue a sack or just start the * timer (if allowed). */ if (SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_SENT) { /* * Ok special case, in SHUTDOWN-SENT case. here we maker * sure SACK timer is off and instead send a SHUTDOWN and a * SACK */ if (SCTP_OS_TIMER_PENDING(&stcb->asoc.dack_timer.timer)) { sctp_timer_stop(SCTP_TIMER_TYPE_RECV, stcb->sctp_ep, stcb, NULL, SCTP_FROM_SCTP_INDATA + SCTP_LOC_17); } sctp_send_shutdown(stcb, ((stcb->asoc.alternate) ? stcb->asoc.alternate : stcb->asoc.primary_destination)); if (is_a_gap) { sctp_send_sack(stcb, SCTP_SO_NOT_LOCKED); } } else { /* * CMT DAC algorithm: increase number of packets received * since last ack */ stcb->asoc.cmt_dac_pkts_rcvd++; if ((stcb->asoc.send_sack == 1) || /* We need to send a * SACK */ ((was_a_gap) && (is_a_gap == 0)) || /* was a gap, but no * longer is one */ (stcb->asoc.numduptsns) || /* we have dup's */ (is_a_gap) || /* is still a gap */ (stcb->asoc.delayed_ack == 0) || /* Delayed sack disabled */ (stcb->asoc.data_pkts_seen >= stcb->asoc.sack_freq) /* hit limit of pkts */ ) { if ((stcb->asoc.sctp_cmt_on_off > 0) && (SCTP_BASE_SYSCTL(sctp_cmt_use_dac)) && (stcb->asoc.send_sack == 0) && (stcb->asoc.numduptsns == 0) && (stcb->asoc.delayed_ack) && (!SCTP_OS_TIMER_PENDING(&stcb->asoc.dack_timer.timer))) { /* * CMT DAC algorithm: With CMT, delay acks * even in the face of * * reordering. Therefore, if acks that do * not have to be sent because of the above * reasons, will be delayed. That is, acks * that would have been sent due to gap * reports will be delayed with DAC. Start * the delayed ack timer. */ sctp_timer_start(SCTP_TIMER_TYPE_RECV, stcb->sctp_ep, stcb, NULL); } else { /* * Ok we must build a SACK since the timer * is pending, we got our first packet OR * there are gaps or duplicates. 
*/ (void)SCTP_OS_TIMER_STOP(&stcb->asoc.dack_timer.timer); sctp_send_sack(stcb, SCTP_SO_NOT_LOCKED); } } else { if (!SCTP_OS_TIMER_PENDING(&stcb->asoc.dack_timer.timer)) { sctp_timer_start(SCTP_TIMER_TYPE_RECV, stcb->sctp_ep, stcb, NULL); } } } } int sctp_process_data(struct mbuf **mm, int iphlen, int *offset, int length, struct sctp_inpcb *inp, struct sctp_tcb *stcb, struct sctp_nets *net, uint32_t *high_tsn) { struct sctp_chunkhdr *ch, chunk_buf; struct sctp_association *asoc; int num_chunks = 0; /* number of control chunks processed */ int stop_proc = 0; int break_flag, last_chunk; int abort_flag = 0, was_a_gap; struct mbuf *m; uint32_t highest_tsn; uint16_t chk_length; /* set the rwnd */ sctp_set_rwnd(stcb, &stcb->asoc); m = *mm; SCTP_TCB_LOCK_ASSERT(stcb); asoc = &stcb->asoc; if (SCTP_TSN_GT(asoc->highest_tsn_inside_nr_map, asoc->highest_tsn_inside_map)) { highest_tsn = asoc->highest_tsn_inside_nr_map; } else { highest_tsn = asoc->highest_tsn_inside_map; } was_a_gap = SCTP_TSN_GT(highest_tsn, stcb->asoc.cumulative_tsn); /* * setup where we got the last DATA packet from for any SACK that * may need to go out. Don't bump the net. This is done ONLY when a * chunk is assigned. */ asoc->last_data_chunk_from = net; /*- * Now before we proceed we must figure out if this is a wasted * cluster... i.e. it is a small packet sent in and yet the driver * underneath allocated a full cluster for it. If so we must copy it * to a smaller mbuf and free up the cluster mbuf. This will help * with cluster starvation. Note for __Panda__ we don't do this * since it has clusters all the way down to 64 bytes. */ if (SCTP_BUF_LEN(m) < (long)MLEN && SCTP_BUF_NEXT(m) == NULL) { /* we only handle mbufs that are singletons.. not chains */ m = sctp_get_mbuf_for_msg(SCTP_BUF_LEN(m), 0, M_NOWAIT, 1, MT_DATA); if (m) { /* ok lets see if we can copy the data up */ caddr_t *from, *to; /* get the pointers and copy */ to = mtod(m, caddr_t *); from = mtod((*mm), caddr_t *); memcpy(to, from, SCTP_BUF_LEN((*mm))); /* copy the length and free up the old */ SCTP_BUF_LEN(m) = SCTP_BUF_LEN((*mm)); sctp_m_freem(*mm); /* success, back copy */ *mm = m; } else { /* We are in trouble in the mbuf world .. yikes */ m = *mm; } } /* get pointer to the first chunk header */ ch = (struct sctp_chunkhdr *)sctp_m_getptr(m, *offset, sizeof(struct sctp_chunkhdr), (uint8_t *)&chunk_buf); if (ch == NULL) { return (1); } /* * process all DATA chunks... 
*/ *high_tsn = asoc->cumulative_tsn; break_flag = 0; asoc->data_pkts_seen++; while (stop_proc == 0) { /* validate chunk length */ chk_length = ntohs(ch->chunk_length); if (length - *offset < chk_length) { /* all done, mutilated chunk */ stop_proc = 1; continue; } if ((asoc->idata_supported == 1) && (ch->chunk_type == SCTP_DATA)) { struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; snprintf(msg, sizeof(msg), "%s", "I-DATA chunk received when DATA was negotiated"); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_18; sctp_abort_an_association(inp, stcb, op_err, SCTP_SO_NOT_LOCKED); return (2); } if ((asoc->idata_supported == 0) && (ch->chunk_type == SCTP_IDATA)) { struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; snprintf(msg, sizeof(msg), "%s", "DATA chunk received when I-DATA was negotiated"); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_19; sctp_abort_an_association(inp, stcb, op_err, SCTP_SO_NOT_LOCKED); return (2); } if ((ch->chunk_type == SCTP_DATA) || (ch->chunk_type == SCTP_IDATA)) { uint16_t clen; if (ch->chunk_type == SCTP_DATA) { clen = sizeof(struct sctp_data_chunk); } else { clen = sizeof(struct sctp_idata_chunk); } if (chk_length < clen) { /* * Need to send an abort since we had an * invalid data chunk. */ struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; snprintf(msg, sizeof(msg), "%s chunk of length %u", ch->chunk_type == SCTP_DATA ? "DATA" : "I-DATA", chk_length); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_20; sctp_abort_an_association(inp, stcb, op_err, SCTP_SO_NOT_LOCKED); return (2); } #ifdef SCTP_AUDITING_ENABLED sctp_audit_log(0xB1, 0); #endif if (SCTP_SIZE32(chk_length) == (length - *offset)) { last_chunk = 1; } else { last_chunk = 0; } if (sctp_process_a_data_chunk(stcb, asoc, mm, *offset, chk_length, net, high_tsn, &abort_flag, &break_flag, last_chunk, ch->chunk_type)) { num_chunks++; } if (abort_flag) return (2); if (break_flag) { /* * Set because we ran out of rwnd space and * no drop report space is left. */ stop_proc = 1; continue; } } else { /* not a data chunk in the data region */ switch (ch->chunk_type) { case SCTP_INITIATION: case SCTP_INITIATION_ACK: case SCTP_SELECTIVE_ACK: case SCTP_NR_SELECTIVE_ACK: case SCTP_HEARTBEAT_REQUEST: case SCTP_HEARTBEAT_ACK: case SCTP_ABORT_ASSOCIATION: case SCTP_SHUTDOWN: case SCTP_SHUTDOWN_ACK: case SCTP_OPERATION_ERROR: case SCTP_COOKIE_ECHO: case SCTP_COOKIE_ACK: case SCTP_ECN_ECHO: case SCTP_ECN_CWR: case SCTP_SHUTDOWN_COMPLETE: case SCTP_AUTHENTICATION: case SCTP_ASCONF_ACK: case SCTP_PACKET_DROPPED: case SCTP_STREAM_RESET: case SCTP_FORWARD_CUM_TSN: case SCTP_ASCONF: { /* * Now, what do we do with KNOWN * chunks that are NOT in the right * place? * * For now, I do nothing but ignore * them. We may later want to add * sysctl stuff to switch out and do * either an ABORT() or possibly * process them. */ struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; snprintf(msg, sizeof(msg), "DATA chunk followed by chunk of type %2.2x", ch->chunk_type); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); sctp_abort_an_association(inp, stcb, op_err, SCTP_SO_NOT_LOCKED); return (2); } default: /* * Unknown chunk type: use bit rules after * checking length */ if (chk_length < sizeof(struct sctp_chunkhdr)) { /* * Need to send an abort since we * had an invalid chunk.
*/ struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; snprintf(msg, sizeof(msg), "Chunk of length %u", chk_length); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_20; sctp_abort_an_association(inp, stcb, op_err, SCTP_SO_NOT_LOCKED); return (2); } if (ch->chunk_type & 0x40) { /* Add a error report to the queue */ struct mbuf *op_err; struct sctp_gen_error_cause *cause; op_err = sctp_get_mbuf_for_msg(sizeof(struct sctp_gen_error_cause), 0, M_NOWAIT, 1, MT_DATA); if (op_err != NULL) { cause = mtod(op_err, struct sctp_gen_error_cause *); cause->code = htons(SCTP_CAUSE_UNRECOG_CHUNK); cause->length = htons((uint16_t)(chk_length + sizeof(struct sctp_gen_error_cause))); SCTP_BUF_LEN(op_err) = sizeof(struct sctp_gen_error_cause); SCTP_BUF_NEXT(op_err) = SCTP_M_COPYM(m, *offset, chk_length, M_NOWAIT); if (SCTP_BUF_NEXT(op_err) != NULL) { sctp_queue_op_err(stcb, op_err); } else { sctp_m_freem(op_err); } } } if ((ch->chunk_type & 0x80) == 0) { /* discard the rest of this packet */ stop_proc = 1; } /* else skip this bad chunk and * continue... */ break; } /* switch of chunk type */ } *offset += SCTP_SIZE32(chk_length); if ((*offset >= length) || stop_proc) { /* no more data left in the mbuf chain */ stop_proc = 1; continue; } ch = (struct sctp_chunkhdr *)sctp_m_getptr(m, *offset, sizeof(struct sctp_chunkhdr), (uint8_t *)&chunk_buf); if (ch == NULL) { *offset = length; stop_proc = 1; continue; } } if (break_flag) { /* * we need to report rwnd overrun drops. */ sctp_send_packet_dropped(stcb, net, *mm, length, iphlen, 0); } if (num_chunks) { /* * Did we get data, if so update the time for auto-close and * give peer credit for being alive. */ SCTP_STAT_INCR(sctps_recvpktwithdata); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_THRESHOLD_LOGGING) { sctp_misc_ints(SCTP_THRESHOLD_CLEAR, stcb->asoc.overall_error_count, 0, SCTP_FROM_SCTP_INDATA, __LINE__); } stcb->asoc.overall_error_count = 0; (void)SCTP_GETTIME_TIMEVAL(&stcb->asoc.time_last_rcvd); } /* now service all of the reassm queue if needed */ if (SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_SENT) { /* Assure that we ack right away */ stcb->asoc.send_sack = 1; } /* Start a sack timer or QUEUE a SACK for sending */ sctp_sack_check(stcb, was_a_gap); return (0); } static int sctp_process_segment_range(struct sctp_tcb *stcb, struct sctp_tmit_chunk **p_tp1, uint32_t last_tsn, uint16_t frag_strt, uint16_t frag_end, int nr_sacking, int *num_frs, uint32_t *biggest_newly_acked_tsn, uint32_t *this_sack_lowest_newack, int *rto_ok) { struct sctp_tmit_chunk *tp1; unsigned int theTSN; int j, wake_him = 0, circled = 0; /* Recover the tp1 we last saw */ tp1 = *p_tp1; if (tp1 == NULL) { tp1 = TAILQ_FIRST(&stcb->asoc.sent_queue); } for (j = frag_strt; j <= frag_end; j++) { theTSN = j + last_tsn; while (tp1) { if (tp1->rec.data.doing_fast_retransmit) (*num_frs) += 1; /*- * CMT: CUCv2 algorithm. For each TSN being * processed from the sent queue, track the * next expected pseudo-cumack, or * rtx_pseudo_cumack, if required. Separate * cumack trackers for first transmissions, * and retransmissions. 
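* (A pseudo-cumack is, roughly, the earliest outstanding TSN sent to a given destination; when it is newly acked, that destination's cwnd may be advanced even though the association-wide cum-ack has not moved.)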
*/ if ((tp1->sent < SCTP_DATAGRAM_RESEND) && (tp1->whoTo->find_pseudo_cumack == 1) && (tp1->snd_count == 1)) { tp1->whoTo->pseudo_cumack = tp1->rec.data.tsn; tp1->whoTo->find_pseudo_cumack = 0; } if ((tp1->sent < SCTP_DATAGRAM_RESEND) && (tp1->whoTo->find_rtx_pseudo_cumack == 1) && (tp1->snd_count > 1)) { tp1->whoTo->rtx_pseudo_cumack = tp1->rec.data.tsn; tp1->whoTo->find_rtx_pseudo_cumack = 0; } if (tp1->rec.data.tsn == theTSN) { if (tp1->sent != SCTP_DATAGRAM_UNSENT) { /*- * must be held until * cum-ack passes */ if (tp1->sent < SCTP_DATAGRAM_RESEND) { /*- * If it is less than RESEND, it is * now no-longer in flight. * Higher values may already be set * via previous Gap Ack Blocks... * i.e. ACKED or RESEND. */ if (SCTP_TSN_GT(tp1->rec.data.tsn, *biggest_newly_acked_tsn)) { *biggest_newly_acked_tsn = tp1->rec.data.tsn; } /*- * CMT: SFR algo (and HTNA) - set * saw_newack to 1 for dest being * newly acked. update * this_sack_highest_newack if * appropriate. */ if (tp1->rec.data.chunk_was_revoked == 0) tp1->whoTo->saw_newack = 1; if (SCTP_TSN_GT(tp1->rec.data.tsn, tp1->whoTo->this_sack_highest_newack)) { tp1->whoTo->this_sack_highest_newack = tp1->rec.data.tsn; } /*- * CMT DAC algo: also update * this_sack_lowest_newack */ if (*this_sack_lowest_newack == 0) { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_LOGGING_ENABLE) { sctp_log_sack(*this_sack_lowest_newack, last_tsn, tp1->rec.data.tsn, 0, 0, SCTP_LOG_TSN_ACKED); } *this_sack_lowest_newack = tp1->rec.data.tsn; } /*- * CMT: CUCv2 algorithm. If (rtx-)pseudo-cumack for corresp * dest is being acked, then we have a new (rtx-)pseudo-cumack. Set * new_(rtx_)pseudo_cumack to TRUE so that the cwnd for this dest can be * updated. Also trigger search for the next expected (rtx-)pseudo-cumack. * Separate pseudo_cumack trackers for first transmissions and * retransmissions. */ if (tp1->rec.data.tsn == tp1->whoTo->pseudo_cumack) { if (tp1->rec.data.chunk_was_revoked == 0) { tp1->whoTo->new_pseudo_cumack = 1; } tp1->whoTo->find_pseudo_cumack = 1; } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_CWND_LOGGING_ENABLE) { sctp_log_cwnd(stcb, tp1->whoTo, tp1->rec.data.tsn, SCTP_CWND_LOG_FROM_SACK); } if (tp1->rec.data.tsn == tp1->whoTo->rtx_pseudo_cumack) { if (tp1->rec.data.chunk_was_revoked == 0) { tp1->whoTo->new_pseudo_cumack = 1; } tp1->whoTo->find_rtx_pseudo_cumack = 1; } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_LOGGING_ENABLE) { sctp_log_sack(*biggest_newly_acked_tsn, last_tsn, tp1->rec.data.tsn, frag_strt, frag_end, SCTP_LOG_TSN_ACKED); } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) { sctp_misc_ints(SCTP_FLIGHT_LOG_DOWN_GAP, tp1->whoTo->flight_size, tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); } sctp_flight_size_decrease(tp1); if (stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) { (*stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) (tp1->whoTo, tp1); } sctp_total_flight_decrease(stcb, tp1); tp1->whoTo->net_ack += tp1->send_size; if (tp1->snd_count < 2) { /*- * True non-retransmited chunk */ tp1->whoTo->net_ack2 += tp1->send_size; /*- * update RTO too ? 
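* Yes, but only from chunks that were never retransmitted (Karn's rule, hence the snd_count < 2 guard above), and only for the first eligible sample per SACK (*rto_ok is cleared once it is used).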
*/ if (tp1->do_rtt) { if (*rto_ok) { tp1->whoTo->RTO = sctp_calculate_rto(stcb, &stcb->asoc, tp1->whoTo, &tp1->sent_rcv_time, SCTP_RTT_FROM_DATA); *rto_ok = 0; } if (tp1->whoTo->rto_needed == 0) { tp1->whoTo->rto_needed = 1; } tp1->do_rtt = 0; } } } if (tp1->sent <= SCTP_DATAGRAM_RESEND) { if (SCTP_TSN_GT(tp1->rec.data.tsn, stcb->asoc.this_sack_highest_gap)) { stcb->asoc.this_sack_highest_gap = tp1->rec.data.tsn; } if (tp1->sent == SCTP_DATAGRAM_RESEND) { sctp_ucount_decr(stcb->asoc.sent_queue_retran_cnt); #ifdef SCTP_AUDITING_ENABLED sctp_audit_log(0xB2, (stcb->asoc.sent_queue_retran_cnt & 0x000000ff)); #endif } } /*- * All chunks NOT UNSENT fall through here and are marked * (leave PR-SCTP ones that are to skip alone though) */ if ((tp1->sent != SCTP_FORWARD_TSN_SKIP) && (tp1->sent != SCTP_DATAGRAM_NR_ACKED)) { tp1->sent = SCTP_DATAGRAM_MARKED; } if (tp1->rec.data.chunk_was_revoked) { /* deflate the cwnd */ tp1->whoTo->cwnd -= tp1->book_size; tp1->rec.data.chunk_was_revoked = 0; } /* NR Sack code here */ if (nr_sacking && (tp1->sent != SCTP_DATAGRAM_NR_ACKED)) { if (stcb->asoc.strmout[tp1->rec.data.sid].chunks_on_queues > 0) { stcb->asoc.strmout[tp1->rec.data.sid].chunks_on_queues--; #ifdef INVARIANTS } else { panic("No chunks on the queues for sid %u.", tp1->rec.data.sid); #endif } if ((stcb->asoc.strmout[tp1->rec.data.sid].chunks_on_queues == 0) && (stcb->asoc.strmout[tp1->rec.data.sid].state == SCTP_STREAM_RESET_PENDING) && TAILQ_EMPTY(&stcb->asoc.strmout[tp1->rec.data.sid].outqueue)) { stcb->asoc.trigger_reset = 1; } tp1->sent = SCTP_DATAGRAM_NR_ACKED; if (tp1->data) { /* * sa_ignore * NO_NULL_CHK */ sctp_free_bufspace(stcb, &stcb->asoc, tp1, 1); sctp_m_freem(tp1->data); tp1->data = NULL; } wake_him++; } } break; } /* if (tp1->tsn == theTSN) */ if (SCTP_TSN_GT(tp1->rec.data.tsn, theTSN)) { break; } tp1 = TAILQ_NEXT(tp1, sctp_next); if ((tp1 == NULL) && (circled == 0)) { circled++; tp1 = TAILQ_FIRST(&stcb->asoc.sent_queue); } } /* end while (tp1) */ if (tp1 == NULL) { circled = 0; tp1 = TAILQ_FIRST(&stcb->asoc.sent_queue); } /* In case the fragments were not in order we must reset */ } /* end for (j = fragStart */ *p_tp1 = tp1; return (wake_him); /* Return value only used for nr-sack */ } static int sctp_handle_segments(struct mbuf *m, int *offset, struct sctp_tcb *stcb, struct sctp_association *asoc, uint32_t last_tsn, uint32_t *biggest_tsn_acked, uint32_t *biggest_newly_acked_tsn, uint32_t *this_sack_lowest_newack, int num_seg, int num_nr_seg, int *rto_ok) { struct sctp_gap_ack_block *frag, block; struct sctp_tmit_chunk *tp1; int i; int num_frs = 0; int chunk_freed; int non_revocable; uint16_t frag_strt, frag_end, prev_frag_end; tp1 = TAILQ_FIRST(&asoc->sent_queue); prev_frag_end = 0; chunk_freed = 0; for (i = 0; i < (num_seg + num_nr_seg); i++) { if (i == num_seg) { prev_frag_end = 0; tp1 = TAILQ_FIRST(&asoc->sent_queue); } frag = (struct sctp_gap_ack_block *)sctp_m_getptr(m, *offset, sizeof(struct sctp_gap_ack_block), (uint8_t *)&block); *offset += sizeof(block); if (frag == NULL) { return (chunk_freed); } frag_strt = ntohs(frag->start); frag_end = ntohs(frag->end); if (frag_strt > frag_end) { /* This gap report is malformed, skip it. */ continue; } if (frag_strt <= prev_frag_end) { /* This gap report is not in order, so restart. 
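* Gap ack blocks are expected in ascending, non-overlapping order; when one starts at or before the previous block's end we rescan from the head of the sent queue.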
*/ tp1 = TAILQ_FIRST(&asoc->sent_queue); } if (SCTP_TSN_GT((last_tsn + frag_end), *biggest_tsn_acked)) { *biggest_tsn_acked = last_tsn + frag_end; } if (i < num_seg) { non_revocable = 0; } else { non_revocable = 1; } if (sctp_process_segment_range(stcb, &tp1, last_tsn, frag_strt, frag_end, non_revocable, &num_frs, biggest_newly_acked_tsn, this_sack_lowest_newack, rto_ok)) { chunk_freed = 1; } prev_frag_end = frag_end; } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { if (num_frs) sctp_log_fr(*biggest_tsn_acked, *biggest_newly_acked_tsn, last_tsn, SCTP_FR_LOG_BIGGEST_TSNS); } return (chunk_freed); } static void sctp_check_for_revoked(struct sctp_tcb *stcb, struct sctp_association *asoc, uint32_t cumack, uint32_t biggest_tsn_acked) { struct sctp_tmit_chunk *tp1; TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { if (SCTP_TSN_GT(tp1->rec.data.tsn, cumack)) { /* * ok this guy is either ACK or MARKED. If it is * ACKED it has been previously acked but not this * time i.e. revoked. If it is MARKED it was ACK'ed * again. */ if (SCTP_TSN_GT(tp1->rec.data.tsn, biggest_tsn_acked)) { break; } if (tp1->sent == SCTP_DATAGRAM_ACKED) { /* it has been revoked */ tp1->sent = SCTP_DATAGRAM_SENT; tp1->rec.data.chunk_was_revoked = 1; /* * We must add this stuff back in to assure * timers and such get started. */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) { sctp_misc_ints(SCTP_FLIGHT_LOG_UP_REVOKE, tp1->whoTo->flight_size, tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); } sctp_flight_size_increase(tp1); sctp_total_flight_increase(stcb, tp1); /* * We inflate the cwnd to compensate for our * artificial inflation of the flight_size. */ tp1->whoTo->cwnd += tp1->book_size; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_LOGGING_ENABLE) { sctp_log_sack(asoc->last_acked_seq, cumack, tp1->rec.data.tsn, 0, 0, SCTP_LOG_TSN_REVOKED); } } else if (tp1->sent == SCTP_DATAGRAM_MARKED) { /* it has been re-acked in this SACK */ tp1->sent = SCTP_DATAGRAM_ACKED; } } if (tp1->sent == SCTP_DATAGRAM_UNSENT) break; } } static void sctp_strike_gap_ack_chunks(struct sctp_tcb *stcb, struct sctp_association *asoc, uint32_t biggest_tsn_acked, uint32_t biggest_tsn_newly_acked, uint32_t this_sack_lowest_newack, int accum_moved) { struct sctp_tmit_chunk *tp1; int strike_flag = 0; struct timeval now; int tot_retrans = 0; uint32_t sending_seq; struct sctp_nets *net; int num_dests_sacked = 0; /* * select the sending_seq, this is either the next thing ready to be * sent but not transmitted, OR, the next seq we assign. 
*/ tp1 = TAILQ_FIRST(&stcb->asoc.send_queue); if (tp1 == NULL) { sending_seq = asoc->sending_seq; } else { sending_seq = tp1->rec.data.tsn; } /* CMT DAC algo: finding out if SACK is a mixed SACK */ if ((asoc->sctp_cmt_on_off > 0) && SCTP_BASE_SYSCTL(sctp_cmt_use_dac)) { TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (net->saw_newack) num_dests_sacked++; } } if (stcb->asoc.prsctp_supported) { (void)SCTP_GETTIME_TIMEVAL(&now); } TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { strike_flag = 0; if (tp1->no_fr_allowed) { /* this one had a timeout or something */ continue; } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { if (tp1->sent < SCTP_DATAGRAM_RESEND) sctp_log_fr(biggest_tsn_newly_acked, tp1->rec.data.tsn, tp1->sent, SCTP_FR_LOG_CHECK_STRIKE); } if (SCTP_TSN_GT(tp1->rec.data.tsn, biggest_tsn_acked) || tp1->sent == SCTP_DATAGRAM_UNSENT) { /* done */ break; } if (stcb->asoc.prsctp_supported) { if ((PR_SCTP_TTL_ENABLED(tp1->flags)) && tp1->sent < SCTP_DATAGRAM_ACKED) { /* Is it expired? */ if (timevalcmp(&now, &tp1->rec.data.timetodrop, >)) { /* Yes so drop it */ if (tp1->data != NULL) { (void)sctp_release_pr_sctp_chunk(stcb, tp1, 1, SCTP_SO_NOT_LOCKED); } continue; } } } if (SCTP_TSN_GT(tp1->rec.data.tsn, asoc->this_sack_highest_gap)) { /* we are beyond the tsn in the sack */ break; } if (tp1->sent >= SCTP_DATAGRAM_RESEND) { /* either a RESEND, ACKED, or MARKED */ /* skip */ if (tp1->sent == SCTP_FORWARD_TSN_SKIP) { /* Continue striking FWD-TSN chunks */ tp1->rec.data.fwd_tsn_cnt++; } continue; } /* * CMT : SFR algo (covers part of DAC and HTNA as well) */ if (tp1->whoTo && tp1->whoTo->saw_newack == 0) { /* * No new acks were received for data sent to this * dest. Therefore, according to the SFR algo for * CMT, no data sent to this dest can be marked for * FR using this SACK. */ continue; } else if (tp1->whoTo && SCTP_TSN_GT(tp1->rec.data.tsn, tp1->whoTo->this_sack_highest_newack)) { /* * CMT: New acks were received for data sent to * this dest. But no new acks were seen for data * sent after tp1. Therefore, according to the SFR * algo for CMT, tp1 cannot be marked for FR using * this SACK. This step covers part of the DAC algo * and the HTNA algo as well. */ continue; } /* * Here we check to see if we have already done a FR * and if so we see if the biggest TSN we saw in the sack is * smaller than the recovery point. If so we don't strike * the tsn... otherwise we CAN strike the TSN. */ /* * @@@ JRI: Check for CMT if (accum_moved && * asoc->fast_retran_loss_recovery && (sctp_cmt_on_off == * 0)) { */ if (accum_moved && asoc->fast_retran_loss_recovery) { /* * Strike the TSN if in fast-recovery and cum-ack * moved. */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { sctp_log_fr(biggest_tsn_newly_acked, tp1->rec.data.tsn, tp1->sent, SCTP_FR_LOG_STRIKE_CHUNK); } if (tp1->sent < SCTP_DATAGRAM_RESEND) { tp1->sent++; } if ((asoc->sctp_cmt_on_off > 0) && SCTP_BASE_SYSCTL(sctp_cmt_use_dac)) { /* * CMT DAC algorithm: If SACK flag is set to * 0, then lowest_newack test will not pass * because it would have been set to the * cumack earlier. If not already to be * rtx'd, If not a mixed sack and if tp1 is * not between two sacked TSNs, then mark by * one more. NOTE that we are marking by one * additional time since the SACK DAC flag * indicates that two packets have been * received after this missing TSN.
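* The extra mark below is gated on num_dests_sacked == 1 (not a mixed SACK) and on this_sack_lowest_newack lying above tp1, i.e. the missing chunk is not bracketed by two SACKed TSNs.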
*/ if ((tp1->sent < SCTP_DATAGRAM_RESEND) && (num_dests_sacked == 1) && SCTP_TSN_GT(this_sack_lowest_newack, tp1->rec.data.tsn)) { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { sctp_log_fr(16 + num_dests_sacked, tp1->rec.data.tsn, tp1->sent, SCTP_FR_LOG_STRIKE_CHUNK); } tp1->sent++; } } } else if ((tp1->rec.data.doing_fast_retransmit) && (asoc->sctp_cmt_on_off == 0)) { /* * For those that have done a FR we must take * special consideration if we strike. I.e the * biggest_newly_acked must be higher than the * sending_seq at the time we did the FR. */ if ( #ifdef SCTP_FR_TO_ALTERNATE /* * If FR's go to new networks, then we must only do * this for singly homed asoc's. However if the FR's * go to the same network (Armando's work) then its * ok to FR multiple times. */ (asoc->numnets < 2) #else (1) #endif ) { if (SCTP_TSN_GE(biggest_tsn_newly_acked, tp1->rec.data.fast_retran_tsn)) { /* * Strike the TSN, since this ack is * beyond where things were when we * did a FR. */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { sctp_log_fr(biggest_tsn_newly_acked, tp1->rec.data.tsn, tp1->sent, SCTP_FR_LOG_STRIKE_CHUNK); } if (tp1->sent < SCTP_DATAGRAM_RESEND) { tp1->sent++; } strike_flag = 1; if ((asoc->sctp_cmt_on_off > 0) && SCTP_BASE_SYSCTL(sctp_cmt_use_dac)) { /* * CMT DAC algorithm: If * SACK flag is set to 0, * then lowest_newack test * will not pass because it * would have been set to * the cumack earlier. If * not already to be rtx'd, * If not a mixed sack and * if tp1 is not between two * sacked TSNs, then mark by * one more. NOTE that we * are marking by one * additional time since the * SACK DAC flag indicates * that two packets have * been received after this * missing TSN. */ if ((tp1->sent < SCTP_DATAGRAM_RESEND) && (num_dests_sacked == 1) && SCTP_TSN_GT(this_sack_lowest_newack, tp1->rec.data.tsn)) { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { sctp_log_fr(32 + num_dests_sacked, tp1->rec.data.tsn, tp1->sent, SCTP_FR_LOG_STRIKE_CHUNK); } if (tp1->sent < SCTP_DATAGRAM_RESEND) { tp1->sent++; } } } } } /* * JRI: TODO: remove code for HTNA algo. CMT's SFR * algo covers HTNA. */ } else if (SCTP_TSN_GT(tp1->rec.data.tsn, biggest_tsn_newly_acked)) { /* * We don't strike these: This is the HTNA * algorithm i.e. we don't strike If our TSN is * larger than the Highest TSN Newly Acked. */ ; } else { /* Strike the TSN */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { sctp_log_fr(biggest_tsn_newly_acked, tp1->rec.data.tsn, tp1->sent, SCTP_FR_LOG_STRIKE_CHUNK); } if (tp1->sent < SCTP_DATAGRAM_RESEND) { tp1->sent++; } if ((asoc->sctp_cmt_on_off > 0) && SCTP_BASE_SYSCTL(sctp_cmt_use_dac)) { /* * CMT DAC algorithm: If SACK flag is set to * 0, then lowest_newack test will not pass * because it would have been set to the * cumack earlier. If not already to be * rtx'd, If not a mixed sack and if tp1 is * not between two sacked TSNs, then mark by * one more. NOTE that we are marking by one * additional time since the SACK DAC flag * indicates that two packets have been * received after this missing TSN. 
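* (The 16/32/48 offsets added to num_dests_sacked in the sctp_log_fr() calls merely tag, in the FR log, which of the three strike paths performed the extra DAC mark.)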
*/ if ((tp1->sent < SCTP_DATAGRAM_RESEND) && (num_dests_sacked == 1) && SCTP_TSN_GT(this_sack_lowest_newack, tp1->rec.data.tsn)) { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { sctp_log_fr(48 + num_dests_sacked, tp1->rec.data.tsn, tp1->sent, SCTP_FR_LOG_STRIKE_CHUNK); } tp1->sent++; } } } if (tp1->sent == SCTP_DATAGRAM_RESEND) { struct sctp_nets *alt; /* fix counts and things */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) { sctp_misc_ints(SCTP_FLIGHT_LOG_DOWN_RSND, (tp1->whoTo ? (tp1->whoTo->flight_size) : 0), tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); } if (tp1->whoTo) { tp1->whoTo->net_ack++; sctp_flight_size_decrease(tp1); if (stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) { (*stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) (tp1->whoTo, tp1); } } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_RWND_ENABLE) { sctp_log_rwnd(SCTP_INCREASE_PEER_RWND, asoc->peers_rwnd, tp1->send_size, SCTP_BASE_SYSCTL(sctp_peer_chunk_oh)); } /* add back to the rwnd */ asoc->peers_rwnd += (tp1->send_size + SCTP_BASE_SYSCTL(sctp_peer_chunk_oh)); /* remove from the total flight */ sctp_total_flight_decrease(stcb, tp1); if ((stcb->asoc.prsctp_supported) && (PR_SCTP_RTX_ENABLED(tp1->flags))) { /* * Has it been retransmitted tv_sec times? - * we store the retran count there. */ if (tp1->snd_count > tp1->rec.data.timetodrop.tv_sec) { /* Yes, so drop it */ if (tp1->data != NULL) { (void)sctp_release_pr_sctp_chunk(stcb, tp1, 1, SCTP_SO_NOT_LOCKED); } /* Make sure to flag we had a FR */ tp1->whoTo->net_ack++; continue; } } /* * SCTP_PRINTF("OK, we are now ready to FR this * guy\n"); */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE) { sctp_log_fr(tp1->rec.data.tsn, tp1->snd_count, 0, SCTP_FR_MARKED); } if (strike_flag) { /* This is a subsequent FR */ SCTP_STAT_INCR(sctps_sendmultfastretrans); } sctp_ucount_incr(stcb->asoc.sent_queue_retran_cnt); if (asoc->sctp_cmt_on_off > 0) { /* * CMT: Using RTX_SSTHRESH policy for CMT. * If CMT is being used, then pick dest with * largest ssthresh for any retransmission. */ tp1->no_fr_allowed = 1; alt = tp1->whoTo; /* sa_ignore NO_NULL_CHK */ if (asoc->sctp_cmt_pf > 0) { /* * JRS 5/18/07 - If CMT PF is on, * use the PF version of * find_alt_net() */ alt = sctp_find_alternate_net(stcb, alt, 2); } else { /* * JRS 5/18/07 - If only CMT is on, * use the CMT version of * find_alt_net() */ /* sa_ignore NO_NULL_CHK */ alt = sctp_find_alternate_net(stcb, alt, 1); } if (alt == NULL) { alt = tp1->whoTo; } /* * CUCv2: If a different dest is picked for * the retransmission, then new * (rtx-)pseudo_cumack needs to be tracked * for orig dest. Let CUCv2 track new (rtx-) * pseudo-cumack always. */ if (tp1->whoTo) { tp1->whoTo->find_pseudo_cumack = 1; tp1->whoTo->find_rtx_pseudo_cumack = 1; } } else {/* CMT is OFF */ #ifdef SCTP_FR_TO_ALTERNATE /* Can we find an alternate? */ alt = sctp_find_alternate_net(stcb, tp1->whoTo, 0); #else /* * default behavior is to NOT retransmit * FR's to an alternate. Armando Caro's * paper details why. */ alt = tp1->whoTo; #endif } tp1->rec.data.doing_fast_retransmit = 1; tot_retrans++; /* mark the sending seq for possible subsequent FR's */ /* * SCTP_PRINTF("Marking TSN for FR new value %x\n", * (uint32_t)tpi->rec.data.tsn); */ if (TAILQ_EMPTY(&asoc->send_queue)) { /* * If the queue of send is empty then its * the next sequence number that will be * assigned so we subtract one from this to * get the one we last sent. 
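* XXX: the code below stores sending_seq itself (the next TSN to be assigned) without subtracting one, so the later SCTP_TSN_GE() comparison against biggest_tsn_newly_acked appears one TSN stricter than this comment suggests.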
*/ tp1->rec.data.fast_retran_tsn = sending_seq; } else { /* * If there are chunks on the send queue * (unsent data that has made it from the * stream queues but not out the door), we * take the first one (which will have the * lowest TSN) and subtract one to get the * one we last sent. */ struct sctp_tmit_chunk *ttt; ttt = TAILQ_FIRST(&asoc->send_queue); tp1->rec.data.fast_retran_tsn = ttt->rec.data.tsn; } if (tp1->do_rtt) { /* * this guy had a RTO calculation pending on * it, cancel it */ if ((tp1->whoTo != NULL) && (tp1->whoTo->rto_needed == 0)) { tp1->whoTo->rto_needed = 1; } tp1->do_rtt = 0; } if (alt != tp1->whoTo) { /* yes, there is an alternate. */ sctp_free_remote_addr(tp1->whoTo); /* sa_ignore FREED_MEMORY */ tp1->whoTo = alt; atomic_add_int(&alt->ref_count, 1); } } } } struct sctp_tmit_chunk * sctp_try_advance_peer_ack_point(struct sctp_tcb *stcb, struct sctp_association *asoc) { struct sctp_tmit_chunk *tp1, *tp2, *a_adv = NULL; struct timeval now; int now_filled = 0; if (asoc->prsctp_supported == 0) { return (NULL); } TAILQ_FOREACH_SAFE(tp1, &asoc->sent_queue, sctp_next, tp2) { if (tp1->sent != SCTP_FORWARD_TSN_SKIP && tp1->sent != SCTP_DATAGRAM_RESEND && tp1->sent != SCTP_DATAGRAM_NR_ACKED) { /* no chance to advance, out of here */ break; } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_TRY_ADVANCE) { if ((tp1->sent == SCTP_FORWARD_TSN_SKIP) || (tp1->sent == SCTP_DATAGRAM_NR_ACKED)) { sctp_misc_ints(SCTP_FWD_TSN_CHECK, asoc->advanced_peer_ack_point, tp1->rec.data.tsn, 0, 0); } } if (!PR_SCTP_ENABLED(tp1->flags)) { /* * We can't fwd-tsn past any that are reliable aka * retransmitted until the asoc fails. */ break; } if (!now_filled) { (void)SCTP_GETTIME_TIMEVAL(&now); now_filled = 1; } /* * Now we have a chunk which is marked for another * retransmission to a PR-stream but has run out of its * chances already, or has been marked to skip now. Can we * skip it if it is a resend? */ if (tp1->sent == SCTP_DATAGRAM_RESEND && (PR_SCTP_TTL_ENABLED(tp1->flags))) { /* * Now is this one marked for resend and its time is * now up? */ if (timevalcmp(&now, &tp1->rec.data.timetodrop, >)) { /* Yes so drop it */ if (tp1->data) { (void)sctp_release_pr_sctp_chunk(stcb, tp1, 1, SCTP_SO_NOT_LOCKED); } } else { /* * No, we are done when we hit one marked * for resend whose time has not expired. */ break; } } /* * Ok now if this chunk is marked to drop it we can clean up * the chunk, advance our peer ack point and we can check * the next chunk.
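* This is the sender side of PR-SCTP (RFC 3758): abandoned TSNs may be skipped over, and the block below drags asoc->advanced_peer_ack_point across them so that a FORWARD-TSN can be sent to cover them.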
*/ if ((tp1->sent == SCTP_FORWARD_TSN_SKIP) || (tp1->sent == SCTP_DATAGRAM_NR_ACKED)) { /* advance PeerAckPoint goes forward */ if (SCTP_TSN_GT(tp1->rec.data.tsn, asoc->advanced_peer_ack_point)) { asoc->advanced_peer_ack_point = tp1->rec.data.tsn; a_adv = tp1; } else if (tp1->rec.data.tsn == asoc->advanced_peer_ack_point) { /* No update but we do save the chk */ a_adv = tp1; } } else { /* * If it is still in RESEND we can advance no * further */ break; } } return (a_adv); } static int sctp_fs_audit(struct sctp_association *asoc) { struct sctp_tmit_chunk *chk; int inflight = 0, resend = 0, inbetween = 0, acked = 0, above = 0; int ret; #ifndef INVARIANTS int entry_flight, entry_cnt; #endif ret = 0; #ifndef INVARIANTS entry_flight = asoc->total_flight; entry_cnt = asoc->total_flight_count; #endif if (asoc->pr_sctp_cnt >= asoc->sent_queue_cnt) return (0); TAILQ_FOREACH(chk, &asoc->sent_queue, sctp_next) { if (chk->sent < SCTP_DATAGRAM_RESEND) { SCTP_PRINTF("Chk TSN: %u size: %d inflight cnt: %d\n", chk->rec.data.tsn, chk->send_size, chk->snd_count); inflight++; } else if (chk->sent == SCTP_DATAGRAM_RESEND) { resend++; } else if (chk->sent < SCTP_DATAGRAM_ACKED) { inbetween++; } else if (chk->sent > SCTP_DATAGRAM_ACKED) { above++; } else { acked++; } } if ((inflight > 0) || (inbetween > 0)) { #ifdef INVARIANTS panic("Flight size-express incorrect? \n"); #else SCTP_PRINTF("asoc->total_flight: %d cnt: %d\n", entry_flight, entry_cnt); SCTP_PRINTF("Flight size-express incorrect F: %d I: %d R: %d Ab: %d ACK: %d\n", inflight, inbetween, resend, above, acked); ret = 1; #endif } return (ret); } static void sctp_window_probe_recovery(struct sctp_tcb *stcb, struct sctp_association *asoc, struct sctp_tmit_chunk *tp1) { tp1->window_probe = 0; if ((tp1->sent >= SCTP_DATAGRAM_ACKED) || (tp1->data == NULL)) { /* TSN's skipped we do NOT move back. */ sctp_misc_ints(SCTP_FLIGHT_LOG_DWN_WP_FWD, tp1->whoTo ? 
tp1->whoTo->flight_size : 0, tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); return; } /* First setup this by shrinking flight */ if (stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) { (*stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) (tp1->whoTo, tp1); } sctp_flight_size_decrease(tp1); sctp_total_flight_decrease(stcb, tp1); /* Now mark for resend */ tp1->sent = SCTP_DATAGRAM_RESEND; sctp_ucount_incr(asoc->sent_queue_retran_cnt); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) { sctp_misc_ints(SCTP_FLIGHT_LOG_DOWN_WP, tp1->whoTo->flight_size, tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); } } void sctp_express_handle_sack(struct sctp_tcb *stcb, uint32_t cumack, uint32_t rwnd, int *abort_now, int ecne_seen) { struct sctp_nets *net; struct sctp_association *asoc; struct sctp_tmit_chunk *tp1, *tp2; uint32_t old_rwnd; int win_probe_recovery = 0; int win_probe_recovered = 0; int j, done_once = 0; int rto_ok = 1; uint32_t send_s; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_SACK_ARRIVALS_ENABLE) { sctp_misc_ints(SCTP_SACK_LOG_EXPRESS, cumack, rwnd, stcb->asoc.last_acked_seq, stcb->asoc.peers_rwnd); } SCTP_TCB_LOCK_ASSERT(stcb); #ifdef SCTP_ASOCLOG_OF_TSNS stcb->asoc.cumack_log[stcb->asoc.cumack_log_at] = cumack; stcb->asoc.cumack_log_at++; if (stcb->asoc.cumack_log_at > SCTP_TSN_LOG_SIZE) { stcb->asoc.cumack_log_at = 0; } #endif asoc = &stcb->asoc; old_rwnd = asoc->peers_rwnd; if (SCTP_TSN_GT(asoc->last_acked_seq, cumack)) { /* old ack */ return; } else if (asoc->last_acked_seq == cumack) { /* Window update sack */ asoc->peers_rwnd = sctp_sbspace_sub(rwnd, (uint32_t)(asoc->total_flight + (asoc->total_flight_count * SCTP_BASE_SYSCTL(sctp_peer_chunk_oh)))); if (asoc->peers_rwnd < stcb->sctp_ep->sctp_ep.sctp_sws_sender) { /* SWS sender side engages */ asoc->peers_rwnd = 0; } if (asoc->peers_rwnd > old_rwnd) { goto again; } return; } /* First setup for CC stuff */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (SCTP_TSN_GT(cumack, net->cwr_window_tsn)) { /* Drag along the window_tsn for cwr's */ net->cwr_window_tsn = cumack; } net->prev_cwnd = net->cwnd; net->net_ack = 0; net->net_ack2 = 0; /* * CMT: Reset CUC and Fast recovery algo variables before * SACK processing */ net->new_pseudo_cumack = 0; net->will_exit_fast_recovery = 0; if (stcb->asoc.cc_functions.sctp_cwnd_prepare_net_for_sack) { (*stcb->asoc.cc_functions.sctp_cwnd_prepare_net_for_sack) (stcb, net); } } if (!TAILQ_EMPTY(&asoc->sent_queue)) { tp1 = TAILQ_LAST(&asoc->sent_queue, sctpchunk_listhead); send_s = tp1->rec.data.tsn + 1; } else { send_s = asoc->sending_seq; } if (SCTP_TSN_GE(cumack, send_s)) { struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; *abort_now = 1; /* XXX */ snprintf(msg, sizeof(msg), "Cum ack %8.8x greater or equal than TSN %8.8x", cumack, send_s); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_21; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); return; } asoc->this_sack_highest_gap = cumack; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_THRESHOLD_LOGGING) { sctp_misc_ints(SCTP_THRESHOLD_CLEAR, stcb->asoc.overall_error_count, 0, SCTP_FROM_SCTP_INDATA, __LINE__); } stcb->asoc.overall_error_count = 0; if (SCTP_TSN_GT(cumack, asoc->last_acked_seq)) { /* process the new consecutive TSN first */ TAILQ_FOREACH_SAFE(tp1, &asoc->sent_queue, sctp_next, tp2) { if (SCTP_TSN_GE(cumack, tp1->rec.data.tsn)) { if (tp1->sent == 
SCTP_DATAGRAM_UNSENT) { SCTP_PRINTF("Warning, an unsent is now acked?\n"); } if (tp1->sent < SCTP_DATAGRAM_ACKED) { /* * If it is less than ACKED, it is * now no longer in flight. Higher * values may occur during marking */ if (tp1->sent < SCTP_DATAGRAM_RESEND) { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) { sctp_misc_ints(SCTP_FLIGHT_LOG_DOWN_CA, tp1->whoTo->flight_size, tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); } sctp_flight_size_decrease(tp1); if (stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) { (*stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) (tp1->whoTo, tp1); } /* sa_ignore NO_NULL_CHK */ sctp_total_flight_decrease(stcb, tp1); } tp1->whoTo->net_ack += tp1->send_size; if (tp1->snd_count < 2) { /* * True non-retransmitted * chunk */ tp1->whoTo->net_ack2 += tp1->send_size; /* update RTO too? */ if (tp1->do_rtt) { if (rto_ok) { tp1->whoTo->RTO = /* * sa_ignore * NO_NULL_CHK */ sctp_calculate_rto(stcb, asoc, tp1->whoTo, &tp1->sent_rcv_time, SCTP_RTT_FROM_DATA); rto_ok = 0; } if (tp1->whoTo->rto_needed == 0) { tp1->whoTo->rto_needed = 1; } tp1->do_rtt = 0; } } /* * CMT: CUCv2 algorithm. From the * cumack'd TSNs, for each TSN being * acked for the first time, set the * following variables for the * corresp destination. * new_pseudo_cumack will trigger a * cwnd update. * find_(rtx_)pseudo_cumack will * trigger search for the next * expected (rtx-)pseudo-cumack. */ tp1->whoTo->new_pseudo_cumack = 1; tp1->whoTo->find_pseudo_cumack = 1; tp1->whoTo->find_rtx_pseudo_cumack = 1; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_CWND_LOGGING_ENABLE) { /* sa_ignore NO_NULL_CHK */ sctp_log_cwnd(stcb, tp1->whoTo, tp1->rec.data.tsn, SCTP_CWND_LOG_FROM_SACK); } } if (tp1->sent == SCTP_DATAGRAM_RESEND) { sctp_ucount_decr(asoc->sent_queue_retran_cnt); } if (tp1->rec.data.chunk_was_revoked) { /* deflate the cwnd */ tp1->whoTo->cwnd -= tp1->book_size; tp1->rec.data.chunk_was_revoked = 0; } if (tp1->sent != SCTP_DATAGRAM_NR_ACKED) { if (asoc->strmout[tp1->rec.data.sid].chunks_on_queues > 0) { asoc->strmout[tp1->rec.data.sid].chunks_on_queues--; #ifdef INVARIANTS } else { panic("No chunks on the queues for sid %u.", tp1->rec.data.sid); #endif } } if ((asoc->strmout[tp1->rec.data.sid].chunks_on_queues == 0) && (asoc->strmout[tp1->rec.data.sid].state == SCTP_STREAM_RESET_PENDING) && TAILQ_EMPTY(&asoc->strmout[tp1->rec.data.sid].outqueue)) { asoc->trigger_reset = 1; } TAILQ_REMOVE(&asoc->sent_queue, tp1, sctp_next); if (tp1->data) { /* sa_ignore NO_NULL_CHK */ sctp_free_bufspace(stcb, asoc, tp1, 1); sctp_m_freem(tp1->data); tp1->data = NULL; } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_LOGGING_ENABLE) { sctp_log_sack(asoc->last_acked_seq, cumack, tp1->rec.data.tsn, 0, 0, SCTP_LOG_FREE_SENT); } asoc->sent_queue_cnt--; sctp_free_a_chunk(stcb, tp1, SCTP_SO_NOT_LOCKED); } else { break; } } } /* sa_ignore NO_NULL_CHK */ if (stcb->sctp_socket) { #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) struct socket *so; #endif SOCKBUF_LOCK(&stcb->sctp_socket->so_snd); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_WAKE_LOGGING_ENABLE) { /* sa_ignore NO_NULL_CHK */ sctp_wakeup_log(stcb, 1, SCTP_WAKESND_FROM_SACK); } #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) so = SCTP_INP_SO(stcb->sctp_ep); atomic_add_int(&stcb->asoc.refcnt, 1); SCTP_TCB_UNLOCK(stcb); SCTP_SOCKET_LOCK(so, 1); SCTP_TCB_LOCK(stcb); atomic_subtract_int(&stcb->asoc.refcnt, 1); if (stcb->asoc.state & SCTP_STATE_CLOSED_SOCKET) { /* assoc was freed while we were
unlocked */ SCTP_SOCKET_UNLOCK(so, 1); return; } #endif sctp_sowwakeup_locked(stcb->sctp_ep, stcb->sctp_socket); #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) SCTP_SOCKET_UNLOCK(so, 1); #endif } else { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_WAKE_LOGGING_ENABLE) { sctp_wakeup_log(stcb, 1, SCTP_NOWAKE_FROM_SACK); } } /* JRS - Use the congestion control given in the CC module */ if ((asoc->last_acked_seq != cumack) && (ecne_seen == 0)) { TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (net->net_ack2 > 0) { /* * Karn's rule applies to clearing error * count, this is optional. */ net->error_count = 0; if (!(net->dest_state & SCTP_ADDR_REACHABLE)) { /* addr came good */ net->dest_state |= SCTP_ADDR_REACHABLE; sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_UP, stcb, 0, (void *)net, SCTP_SO_NOT_LOCKED); } if (net == stcb->asoc.primary_destination) { if (stcb->asoc.alternate) { /* * release the alternate, * primary is good */ sctp_free_remote_addr(stcb->asoc.alternate); stcb->asoc.alternate = NULL; } } if (net->dest_state & SCTP_ADDR_PF) { net->dest_state &= ~SCTP_ADDR_PF; sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_22); sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net); asoc->cc_functions.sctp_cwnd_update_exit_pf(stcb, net); /* Done with this net */ net->net_ack = 0; } /* restore any doubled timers */ net->RTO = (net->lastsa >> SCTP_RTT_SHIFT) + net->lastsv; if (net->RTO < stcb->asoc.minrto) { net->RTO = stcb->asoc.minrto; } if (net->RTO > stcb->asoc.maxrto) { net->RTO = stcb->asoc.maxrto; } } } asoc->cc_functions.sctp_cwnd_update_after_sack(stcb, asoc, 1, 0, 0); } asoc->last_acked_seq = cumack; if (TAILQ_EMPTY(&asoc->sent_queue)) { /* nothing left in-flight */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { net->flight_size = 0; net->partial_bytes_acked = 0; } asoc->total_flight = 0; asoc->total_flight_count = 0; } /* RWND update */ asoc->peers_rwnd = sctp_sbspace_sub(rwnd, (uint32_t)(asoc->total_flight + (asoc->total_flight_count * SCTP_BASE_SYSCTL(sctp_peer_chunk_oh)))); if (asoc->peers_rwnd < stcb->sctp_ep->sctp_ep.sctp_sws_sender) { /* SWS sender side engages */ asoc->peers_rwnd = 0; } if (asoc->peers_rwnd > old_rwnd) { win_probe_recovery = 1; } /* Now assure a timer where data is queued at */ again: j = 0; TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (win_probe_recovery && (net->window_probe)) { win_probe_recovered = 1; /* * Find first chunk that was used with window probe * and clear the sent */ /* sa_ignore FREED_MEMORY */ TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { if (tp1->window_probe) { /* move back to data send queue */ sctp_window_probe_recovery(stcb, asoc, tp1); break; } } } if (net->flight_size) { j++; sctp_timer_start(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net); if (net->window_probe) { net->window_probe = 0; } } else { if (net->window_probe) { /* * In window probes we must assure a timer * is still running there */ net->window_probe = 0; if (!SCTP_OS_TIMER_PENDING(&net->rxt_timer.timer)) { sctp_timer_start(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net); } } else if (SCTP_OS_TIMER_PENDING(&net->rxt_timer.timer)) { sctp_timer_stop(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_23); } } } if ((j == 0) && (!TAILQ_EMPTY(&asoc->sent_queue)) && (asoc->sent_queue_retran_cnt == 0) && (win_probe_recovered == 0) && (done_once == 0)) { /* * huh, this should not happen unless all packets are * PR-SCTP and marked to skip of course. 
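* If sctp_fs_audit() below reports an inconsistency, we zero the flight-size accounting and rebuild it from the sent_queue; done_once ensures we retake the timer pass at most one more time.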
*/ if (sctp_fs_audit(asoc)) { TAILQ_FOREACH(net, &asoc->nets, sctp_next) { net->flight_size = 0; } asoc->total_flight = 0; asoc->total_flight_count = 0; asoc->sent_queue_retran_cnt = 0; TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { if (tp1->sent < SCTP_DATAGRAM_RESEND) { sctp_flight_size_increase(tp1); sctp_total_flight_increase(stcb, tp1); } else if (tp1->sent == SCTP_DATAGRAM_RESEND) { sctp_ucount_incr(asoc->sent_queue_retran_cnt); } } } done_once = 1; goto again; } /**********************************/ /* Now what about shutdown issues */ /**********************************/ if (TAILQ_EMPTY(&asoc->send_queue) && TAILQ_EMPTY(&asoc->sent_queue)) { /* nothing left on sendqueue.. consider done */ /* clean up */ if ((asoc->stream_queue_cnt == 1) && ((asoc->state & SCTP_STATE_SHUTDOWN_PENDING) || (asoc->state & SCTP_STATE_SHUTDOWN_RECEIVED)) && ((*asoc->ss_functions.sctp_ss_is_user_msgs_incomplete) (stcb, asoc))) { asoc->state |= SCTP_STATE_PARTIAL_MSG_LEFT; } if (((asoc->state & SCTP_STATE_SHUTDOWN_PENDING) || (SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_RECEIVED)) && (asoc->stream_queue_cnt == 1) && (asoc->state & SCTP_STATE_PARTIAL_MSG_LEFT)) { struct mbuf *op_err; *abort_now = 1; /* XXX */ op_err = sctp_generate_cause(SCTP_CAUSE_USER_INITIATED_ABT, ""); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_24; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); return; } if ((asoc->state & SCTP_STATE_SHUTDOWN_PENDING) && (asoc->stream_queue_cnt == 0)) { struct sctp_nets *netp; if ((SCTP_GET_STATE(asoc) == SCTP_STATE_OPEN) || (SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_RECEIVED)) { SCTP_STAT_DECR_GAUGE32(sctps_currestab); } SCTP_SET_STATE(asoc, SCTP_STATE_SHUTDOWN_SENT); SCTP_CLEAR_SUBSTATE(asoc, SCTP_STATE_SHUTDOWN_PENDING); sctp_stop_timers_for_shutdown(stcb); if (asoc->alternate) { netp = asoc->alternate; } else { netp = asoc->primary_destination; } sctp_send_shutdown(stcb, netp); sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWN, stcb->sctp_ep, stcb, netp); sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWNGUARD, stcb->sctp_ep, stcb, netp); } else if ((SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_RECEIVED) && (asoc->stream_queue_cnt == 0)) { struct sctp_nets *netp; SCTP_STAT_DECR_GAUGE32(sctps_currestab); SCTP_SET_STATE(asoc, SCTP_STATE_SHUTDOWN_ACK_SENT); SCTP_CLEAR_SUBSTATE(asoc, SCTP_STATE_SHUTDOWN_PENDING); sctp_stop_timers_for_shutdown(stcb); if (asoc->alternate) { netp = asoc->alternate; } else { netp = asoc->primary_destination; } sctp_send_shutdown_ack(stcb, netp); sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWNACK, stcb->sctp_ep, stcb, netp); } } /*********************************************/ /* Here we perform PR-SCTP procedures */ /* (section 4.2) */ /*********************************************/ /* C1. update advancedPeerAckPoint */ if (SCTP_TSN_GT(cumack, asoc->advanced_peer_ack_point)) { asoc->advanced_peer_ack_point = cumack; } /* PR-Sctp issues need to be addressed too */ if ((asoc->prsctp_supported) && (asoc->pr_sctp_cnt > 0)) { struct sctp_tmit_chunk *lchk; uint32_t old_adv_peer_ack_point; old_adv_peer_ack_point = asoc->advanced_peer_ack_point; lchk = sctp_try_advance_peer_ack_point(stcb, asoc); /* C3. See if we need to send a Fwd-TSN */ if (SCTP_TSN_GT(asoc->advanced_peer_ack_point, cumack)) { /* * ISSUE with ECN, see FWD-TSN processing. 
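* Rule C3: only send a FORWARD-TSN when the advanced peer ack point lies beyond the cumulative ack: a fresh one when it advanced past its old value, and a retransmission (via the fwd_tsn_cnt >= 3 check below) when a previously sent FORWARD-TSN appears lost.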
*/ if (SCTP_TSN_GT(asoc->advanced_peer_ack_point, old_adv_peer_ack_point)) { send_forward_tsn(stcb, asoc); } else if (lchk) { /* try to FR fwd-tsn's that get lost too */ if (lchk->rec.data.fwd_tsn_cnt >= 3) { send_forward_tsn(stcb, asoc); } } } if (lchk) { /* Assure a timer is up */ sctp_timer_start(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, lchk->whoTo); } } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_RWND_LOGGING_ENABLE) { sctp_misc_ints(SCTP_SACK_RWND_UPDATE, rwnd, stcb->asoc.peers_rwnd, stcb->asoc.total_flight, stcb->asoc.total_output_queue_size); } } void sctp_handle_sack(struct mbuf *m, int offset_seg, int offset_dup, struct sctp_tcb *stcb, uint16_t num_seg, uint16_t num_nr_seg, uint16_t num_dup, int *abort_now, uint8_t flags, uint32_t cum_ack, uint32_t rwnd, int ecne_seen) { struct sctp_association *asoc; struct sctp_tmit_chunk *tp1, *tp2; uint32_t last_tsn, biggest_tsn_acked, biggest_tsn_newly_acked, this_sack_lowest_newack; uint16_t wake_him = 0; uint32_t send_s = 0; long j; int accum_moved = 0; int will_exit_fast_recovery = 0; uint32_t a_rwnd, old_rwnd; int win_probe_recovery = 0; int win_probe_recovered = 0; struct sctp_nets *net = NULL; int done_once; int rto_ok = 1; uint8_t reneged_all = 0; uint8_t cmt_dac_flag; /* * we take any chance we can to service our queues since we cannot * get awoken when the socket is read from :< */ /* * Now perform the actual SACK handling: 1) Verify that it is not an * old sack, if so discard. 2) If there is nothing left in the send * queue (cum-ack is equal to last acked) then you have a duplicate * too, update any rwnd change and verify no timers are running, * then return. 3) Process any new consecutive data, i.e. cum-ack * moved; process these first and note that it moved. 4) Process any * sack blocks. 5) Drop any acked from the queue. 6) Check for any * revoked blocks and mark. 7) Update the cwnd. 8) Nothing left, * sync up flightsizes and things, stop all timers and also check * for shutdown_pending state. If so then go ahead and send off the * shutdown. If in shutdown recv, send off the shutdown-ack and * start that timer, and return. 9) Strike any non-acked things and * do FR procedure if needed being sure to set the FR flag. 10) Do * pr-sctp procedures. 11) Apply any FR penalties. 12) Assure we will * SACK * if in shutdown_recv state.
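* Roughly, in the code below: steps 1 and 2 are the early-return paths, steps 3-7 are the cum-ack walk plus sctp_handle_segments() and sctp_check_for_revoked(), step 9 is sctp_strike_gap_ack_chunks(), and step 10 is sctp_try_advance_peer_ack_point() followed by send_forward_tsn().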
*/ SCTP_TCB_LOCK_ASSERT(stcb); /* CMT DAC algo */ this_sack_lowest_newack = 0; SCTP_STAT_INCR(sctps_slowpath_sack); last_tsn = cum_ack; cmt_dac_flag = flags & SCTP_SACK_CMT_DAC; #ifdef SCTP_ASOCLOG_OF_TSNS stcb->asoc.cumack_log[stcb->asoc.cumack_log_at] = cum_ack; stcb->asoc.cumack_log_at++; if (stcb->asoc.cumack_log_at > SCTP_TSN_LOG_SIZE) { stcb->asoc.cumack_log_at = 0; } #endif a_rwnd = rwnd; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_SACK_ARRIVALS_ENABLE) { sctp_misc_ints(SCTP_SACK_LOG_NORMAL, cum_ack, rwnd, stcb->asoc.last_acked_seq, stcb->asoc.peers_rwnd); } old_rwnd = stcb->asoc.peers_rwnd; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_THRESHOLD_LOGGING) { sctp_misc_ints(SCTP_THRESHOLD_CLEAR, stcb->asoc.overall_error_count, 0, SCTP_FROM_SCTP_INDATA, __LINE__); } stcb->asoc.overall_error_count = 0; asoc = &stcb->asoc; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_LOGGING_ENABLE) { sctp_log_sack(asoc->last_acked_seq, cum_ack, 0, num_seg, num_dup, SCTP_LOG_NEW_SACK); } if ((num_dup) && (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FR_LOGGING_ENABLE)) { uint16_t i; uint32_t *dupdata, dblock; for (i = 0; i < num_dup; i++) { dupdata = (uint32_t *)sctp_m_getptr(m, offset_dup + i * sizeof(uint32_t), sizeof(uint32_t), (uint8_t *)&dblock); if (dupdata == NULL) { break; } sctp_log_fr(*dupdata, 0, 0, SCTP_FR_DUPED); } } /* reality check */ if (!TAILQ_EMPTY(&asoc->sent_queue)) { tp1 = TAILQ_LAST(&asoc->sent_queue, sctpchunk_listhead); send_s = tp1->rec.data.tsn + 1; } else { tp1 = NULL; send_s = asoc->sending_seq; } if (SCTP_TSN_GE(cum_ack, send_s)) { struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; /* * no way, we have not even sent this TSN out yet. Peer is * hopelessly messed up with us. */ SCTP_PRINTF("NEW cum_ack:%x send_s:%x is smaller or equal\n", cum_ack, send_s); if (tp1) { SCTP_PRINTF("Got send_s from tsn:%x + 1 of tp1: %p\n", tp1->rec.data.tsn, (void *)tp1); } hopeless_peer: *abort_now = 1; /* XXX */ snprintf(msg, sizeof(msg), "Cum ack %8.8x greater or equal than TSN %8.8x", cum_ack, send_s); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_25; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); return; } /**********************/ /* 1) check the range */ /**********************/ if (SCTP_TSN_GT(asoc->last_acked_seq, last_tsn)) { /* acking something behind */ return; } /* update the Rwnd of the peer */ if (TAILQ_EMPTY(&asoc->sent_queue) && TAILQ_EMPTY(&asoc->send_queue) && (asoc->stream_queue_cnt == 0)) { /* nothing left on send/sent and strmq */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_RWND_ENABLE) { sctp_log_rwnd_set(SCTP_SET_PEER_RWND_VIA_SACK, asoc->peers_rwnd, 0, 0, a_rwnd); } asoc->peers_rwnd = a_rwnd; if (asoc->sent_queue_retran_cnt) { asoc->sent_queue_retran_cnt = 0; } if (asoc->peers_rwnd < stcb->sctp_ep->sctp_ep.sctp_sws_sender) { /* SWS sender side engages */ asoc->peers_rwnd = 0; } /* stop any timers */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { sctp_timer_stop(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_26); net->partial_bytes_acked = 0; net->flight_size = 0; } asoc->total_flight = 0; asoc->total_flight_count = 0; return; } /* * We init net_ack and net_ack2 to 0. These are used to track two * things. The total byte count acked is tracked in net_ack AND * net_ack2 is used to track the total bytes acked that are * unambiguous and were never retransmitted. We track these on a per * destination address basis. */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (SCTP_TSN_GT(cum_ack, net->cwr_window_tsn)) { /* Drag along the window_tsn for cwr's */ net->cwr_window_tsn = cum_ack; } net->prev_cwnd = net->cwnd; net->net_ack = 0; net->net_ack2 = 0; /* * CMT: Reset CUC and Fast recovery algo variables before * SACK processing */ net->new_pseudo_cumack = 0; net->will_exit_fast_recovery = 0; if (stcb->asoc.cc_functions.sctp_cwnd_prepare_net_for_sack) { (*stcb->asoc.cc_functions.sctp_cwnd_prepare_net_for_sack) (stcb, net); } } /* process the new consecutive TSN first */ TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { if (SCTP_TSN_GE(last_tsn, tp1->rec.data.tsn)) { if (tp1->sent != SCTP_DATAGRAM_UNSENT) { accum_moved = 1; if (tp1->sent < SCTP_DATAGRAM_ACKED) { /* * If it is less than ACKED, it is * now no longer in flight. Higher * values may occur during marking */ if ((tp1->whoTo->dest_state & SCTP_ADDR_UNCONFIRMED) && (tp1->snd_count < 2)) { /* * If there was no retran * and the address is * un-confirmed and we sent * there and are now * sacked.. it's confirmed, * mark it so. */ tp1->whoTo->dest_state &= ~SCTP_ADDR_UNCONFIRMED; } if (tp1->sent < SCTP_DATAGRAM_RESEND) { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) { sctp_misc_ints(SCTP_FLIGHT_LOG_DOWN_CA, tp1->whoTo->flight_size, tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); } sctp_flight_size_decrease(tp1); sctp_total_flight_decrease(stcb, tp1); if (stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) { (*stcb->asoc.cc_functions.sctp_cwnd_update_tsn_acknowledged) (tp1->whoTo, tp1); } } tp1->whoTo->net_ack += tp1->send_size; /* CMT SFR and DAC algos */ this_sack_lowest_newack = tp1->rec.data.tsn; tp1->whoTo->saw_newack = 1; if (tp1->snd_count < 2) { /* * True non-retransmitted * chunk */ tp1->whoTo->net_ack2 += tp1->send_size; /* update RTO too? */ if (tp1->do_rtt) { if (rto_ok) { tp1->whoTo->RTO = sctp_calculate_rto(stcb, asoc, tp1->whoTo, &tp1->sent_rcv_time, SCTP_RTT_FROM_DATA); rto_ok = 0; } if (tp1->whoTo->rto_needed == 0) { tp1->whoTo->rto_needed = 1; } tp1->do_rtt = 0; } } /* * CMT: CUCv2 algorithm. From the * cumack'd TSNs, for each TSN being * acked for the first time, set the * following variables for the * corresp destination. * new_pseudo_cumack will trigger a * cwnd update. * find_(rtx_)pseudo_cumack will * trigger search for the next * expected (rtx-)pseudo-cumack.
*/ tp1->whoTo->new_pseudo_cumack = 1; tp1->whoTo->find_pseudo_cumack = 1; tp1->whoTo->find_rtx_pseudo_cumack = 1; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_LOGGING_ENABLE) { sctp_log_sack(asoc->last_acked_seq, cum_ack, tp1->rec.data.tsn, 0, 0, SCTP_LOG_TSN_ACKED); } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_CWND_LOGGING_ENABLE) { sctp_log_cwnd(stcb, tp1->whoTo, tp1->rec.data.tsn, SCTP_CWND_LOG_FROM_SACK); } } if (tp1->sent == SCTP_DATAGRAM_RESEND) { sctp_ucount_decr(asoc->sent_queue_retran_cnt); #ifdef SCTP_AUDITING_ENABLED sctp_audit_log(0xB3, (asoc->sent_queue_retran_cnt & 0x000000ff)); #endif } if (tp1->rec.data.chunk_was_revoked) { /* deflate the cwnd */ tp1->whoTo->cwnd -= tp1->book_size; tp1->rec.data.chunk_was_revoked = 0; } if (tp1->sent != SCTP_DATAGRAM_NR_ACKED) { tp1->sent = SCTP_DATAGRAM_ACKED; } } } else { break; } } biggest_tsn_newly_acked = biggest_tsn_acked = last_tsn; /* always set this up to cum-ack */ asoc->this_sack_highest_gap = last_tsn; if ((num_seg > 0) || (num_nr_seg > 0)) { /* * CMT: SFR algo (and HTNA) - this_sack_highest_newack has * to be greater than the cumack. Also reset saw_newack to 0 * for all dests. */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { net->saw_newack = 0; net->this_sack_highest_newack = last_tsn; } /* * thisSackHighestGap will increase while handling NEW * segments this_sack_highest_newack will increase while * handling NEWLY ACKED chunks. this_sack_lowest_newack is * used for CMT DAC algo. saw_newack will also change. */ if (sctp_handle_segments(m, &offset_seg, stcb, asoc, last_tsn, &biggest_tsn_acked, &biggest_tsn_newly_acked, &this_sack_lowest_newack, num_seg, num_nr_seg, &rto_ok)) { wake_him++; } /* * validate the biggest_tsn_acked in the gap acks if strict * adherence is wanted. */ if (SCTP_TSN_GE(biggest_tsn_acked, send_s)) { /* * peer is either confused or we are under attack. * We must abort. */ SCTP_PRINTF("Hopeless peer! 
biggest_tsn_acked:%x largest seq:%x\n", biggest_tsn_acked, send_s); goto hopeless_peer; } } /*******************************************/ /* cancel ALL T3-send timer if accum moved */ /*******************************************/ if (asoc->sctp_cmt_on_off > 0) { TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (net->new_pseudo_cumack) sctp_timer_stop(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_27); } } else { if (accum_moved) { TAILQ_FOREACH(net, &asoc->nets, sctp_next) { sctp_timer_stop(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_28); } } } /********************************************/ /* drop the acked chunks from the sentqueue */ /********************************************/ asoc->last_acked_seq = cum_ack; TAILQ_FOREACH_SAFE(tp1, &asoc->sent_queue, sctp_next, tp2) { if (SCTP_TSN_GT(tp1->rec.data.tsn, cum_ack)) { break; } if (tp1->sent != SCTP_DATAGRAM_NR_ACKED) { if (asoc->strmout[tp1->rec.data.sid].chunks_on_queues > 0) { asoc->strmout[tp1->rec.data.sid].chunks_on_queues--; #ifdef INVARIANTS } else { panic("No chunks on the queues for sid %u.", tp1->rec.data.sid); #endif } } if ((asoc->strmout[tp1->rec.data.sid].chunks_on_queues == 0) && (asoc->strmout[tp1->rec.data.sid].state == SCTP_STREAM_RESET_PENDING) && TAILQ_EMPTY(&asoc->strmout[tp1->rec.data.sid].outqueue)) { asoc->trigger_reset = 1; } TAILQ_REMOVE(&asoc->sent_queue, tp1, sctp_next); if (PR_SCTP_ENABLED(tp1->flags)) { if (asoc->pr_sctp_cnt != 0) asoc->pr_sctp_cnt--; } asoc->sent_queue_cnt--; if (tp1->data) { /* sa_ignore NO_NULL_CHK */ sctp_free_bufspace(stcb, asoc, tp1, 1); sctp_m_freem(tp1->data); tp1->data = NULL; if (asoc->prsctp_supported && PR_SCTP_BUF_ENABLED(tp1->flags)) { asoc->sent_queue_cnt_removeable--; } } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_LOGGING_ENABLE) { sctp_log_sack(asoc->last_acked_seq, cum_ack, tp1->rec.data.tsn, 0, 0, SCTP_LOG_FREE_SENT); } sctp_free_a_chunk(stcb, tp1, SCTP_SO_NOT_LOCKED); wake_him++; } if (TAILQ_EMPTY(&asoc->sent_queue) && (asoc->total_flight > 0)) { #ifdef INVARIANTS panic("Warning flight size is positive and should be 0"); #else SCTP_PRINTF("Warning flight size incorrect should be 0 is %d\n", asoc->total_flight); #endif asoc->total_flight = 0; } /* sa_ignore NO_NULL_CHK */ if ((wake_him) && (stcb->sctp_socket)) { #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) struct socket *so; #endif SOCKBUF_LOCK(&stcb->sctp_socket->so_snd); if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_WAKE_LOGGING_ENABLE) { sctp_wakeup_log(stcb, wake_him, SCTP_WAKESND_FROM_SACK); } #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) so = SCTP_INP_SO(stcb->sctp_ep); atomic_add_int(&stcb->asoc.refcnt, 1); SCTP_TCB_UNLOCK(stcb); SCTP_SOCKET_LOCK(so, 1); SCTP_TCB_LOCK(stcb); atomic_subtract_int(&stcb->asoc.refcnt, 1); if (stcb->asoc.state & SCTP_STATE_CLOSED_SOCKET) { /* assoc was freed while we were unlocked */ SCTP_SOCKET_UNLOCK(so, 1); return; } #endif sctp_sowwakeup_locked(stcb->sctp_ep, stcb->sctp_socket); #if defined(__APPLE__) || defined(SCTP_SO_LOCK_TESTING) SCTP_SOCKET_UNLOCK(so, 1); #endif } else { if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_WAKE_LOGGING_ENABLE) { sctp_wakeup_log(stcb, wake_him, SCTP_NOWAKE_FROM_SACK); } } if (asoc->fast_retran_loss_recovery && accum_moved) { if (SCTP_TSN_GE(asoc->last_acked_seq, asoc->fast_recovery_tsn)) { /* Setup so we will exit RFC2582 fast recovery */ will_exit_fast_recovery = 1; } } /* * Check for revoked fragments: * * if Previous sack - Had no frags then we 
can't have any revoked if * Previous sack - Had frag's then - If we now have frags aka * num_seg > 0 call sctp_check_for_revoked() to tell if peer revoked * some of them. else - The peer revoked all ACKED fragments, since * we had some before and now we have NONE. */ if (num_seg) { sctp_check_for_revoked(stcb, asoc, cum_ack, biggest_tsn_acked); asoc->saw_sack_with_frags = 1; } else if (asoc->saw_sack_with_frags) { int cnt_revoked = 0; /* Peer revoked all dg's marked or acked */ TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { if (tp1->sent == SCTP_DATAGRAM_ACKED) { tp1->sent = SCTP_DATAGRAM_SENT; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_FLIGHT_LOGGING_ENABLE) { sctp_misc_ints(SCTP_FLIGHT_LOG_UP_REVOKE, tp1->whoTo->flight_size, tp1->book_size, (uint32_t)(uintptr_t)tp1->whoTo, tp1->rec.data.tsn); } sctp_flight_size_increase(tp1); sctp_total_flight_increase(stcb, tp1); tp1->rec.data.chunk_was_revoked = 1; /* * To ensure that this increase in * flightsize, which is artificial, does not * throttle the sender, we also increase the * cwnd artificially. */ tp1->whoTo->cwnd += tp1->book_size; cnt_revoked++; } } if (cnt_revoked) { reneged_all = 1; } asoc->saw_sack_with_frags = 0; } if (num_nr_seg > 0) asoc->saw_sack_with_nr_frags = 1; else asoc->saw_sack_with_nr_frags = 0; /* JRS - Use the congestion control given in the CC module */ if (ecne_seen == 0) { TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (net->net_ack2 > 0) { /* * Karn's rule applies to clearing error * count, this is optional. */ net->error_count = 0; if (!(net->dest_state & SCTP_ADDR_REACHABLE)) { /* addr came good */ net->dest_state |= SCTP_ADDR_REACHABLE; sctp_ulp_notify(SCTP_NOTIFY_INTERFACE_UP, stcb, 0, (void *)net, SCTP_SO_NOT_LOCKED); } if (net == stcb->asoc.primary_destination) { if (stcb->asoc.alternate) { /* * release the alternate, * primary is good */ sctp_free_remote_addr(stcb->asoc.alternate); stcb->asoc.alternate = NULL; } } if (net->dest_state & SCTP_ADDR_PF) { net->dest_state &= ~SCTP_ADDR_PF; sctp_timer_stop(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_29); sctp_timer_start(SCTP_TIMER_TYPE_HEARTBEAT, stcb->sctp_ep, stcb, net); asoc->cc_functions.sctp_cwnd_update_exit_pf(stcb, net); /* Done with this net */ net->net_ack = 0; } /* restore any doubled timers */ net->RTO = (net->lastsa >> SCTP_RTT_SHIFT) + net->lastsv; if (net->RTO < stcb->asoc.minrto) { net->RTO = stcb->asoc.minrto; } if (net->RTO > stcb->asoc.maxrto) { net->RTO = stcb->asoc.maxrto; } } } asoc->cc_functions.sctp_cwnd_update_after_sack(stcb, asoc, accum_moved, reneged_all, will_exit_fast_recovery); } if (TAILQ_EMPTY(&asoc->sent_queue)) { /* nothing left in-flight */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { /* stop all timers */ sctp_timer_stop(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_30); net->flight_size = 0; net->partial_bytes_acked = 0; } asoc->total_flight = 0; asoc->total_flight_count = 0; } /**********************************/ /* Now what about shutdown issues */ /**********************************/ if (TAILQ_EMPTY(&asoc->send_queue) && TAILQ_EMPTY(&asoc->sent_queue)) { /* nothing left on sendqueue.. 
consider done */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_RWND_ENABLE) { sctp_log_rwnd_set(SCTP_SET_PEER_RWND_VIA_SACK, asoc->peers_rwnd, 0, 0, a_rwnd); } asoc->peers_rwnd = a_rwnd; if (asoc->peers_rwnd < stcb->sctp_ep->sctp_ep.sctp_sws_sender) { /* SWS sender side engages */ asoc->peers_rwnd = 0; } /* clean up */ if ((asoc->stream_queue_cnt == 1) && ((asoc->state & SCTP_STATE_SHUTDOWN_PENDING) || (asoc->state & SCTP_STATE_SHUTDOWN_RECEIVED)) && ((*asoc->ss_functions.sctp_ss_is_user_msgs_incomplete) (stcb, asoc))) { asoc->state |= SCTP_STATE_PARTIAL_MSG_LEFT; } if (((asoc->state & SCTP_STATE_SHUTDOWN_PENDING) || (SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_RECEIVED)) && (asoc->stream_queue_cnt == 1) && (asoc->state & SCTP_STATE_PARTIAL_MSG_LEFT)) { struct mbuf *op_err; *abort_now = 1; /* XXX */ op_err = sctp_generate_cause(SCTP_CAUSE_USER_INITIATED_ABT, ""); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_24; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); return; } if ((asoc->state & SCTP_STATE_SHUTDOWN_PENDING) && (asoc->stream_queue_cnt == 0)) { struct sctp_nets *netp; if ((SCTP_GET_STATE(asoc) == SCTP_STATE_OPEN) || (SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_RECEIVED)) { SCTP_STAT_DECR_GAUGE32(sctps_currestab); } SCTP_SET_STATE(asoc, SCTP_STATE_SHUTDOWN_SENT); SCTP_CLEAR_SUBSTATE(asoc, SCTP_STATE_SHUTDOWN_PENDING); sctp_stop_timers_for_shutdown(stcb); if (asoc->alternate) { netp = asoc->alternate; } else { netp = asoc->primary_destination; } sctp_send_shutdown(stcb, netp); sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWN, stcb->sctp_ep, stcb, netp); sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWNGUARD, stcb->sctp_ep, stcb, netp); return; } else if ((SCTP_GET_STATE(asoc) == SCTP_STATE_SHUTDOWN_RECEIVED) && (asoc->stream_queue_cnt == 0)) { struct sctp_nets *netp; SCTP_STAT_DECR_GAUGE32(sctps_currestab); SCTP_SET_STATE(asoc, SCTP_STATE_SHUTDOWN_ACK_SENT); SCTP_CLEAR_SUBSTATE(asoc, SCTP_STATE_SHUTDOWN_PENDING); sctp_stop_timers_for_shutdown(stcb); if (asoc->alternate) { netp = asoc->alternate; } else { netp = asoc->primary_destination; } sctp_send_shutdown_ack(stcb, netp); sctp_timer_start(SCTP_TIMER_TYPE_SHUTDOWNACK, stcb->sctp_ep, stcb, netp); return; } } /* * Now here we are going to recycle net_ack for a different use... * HEADS UP. */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { net->net_ack = 0; } /* * CMT DAC algorithm: If SACK DAC flag was 0, then no extra marking * to be done. Setting this_sack_lowest_newack to the cum_ack will * automatically ensure that. */ if ((asoc->sctp_cmt_on_off > 0) && SCTP_BASE_SYSCTL(sctp_cmt_use_dac) && (cmt_dac_flag == 0)) { this_sack_lowest_newack = cum_ack; } if ((num_seg > 0) || (num_nr_seg > 0)) { sctp_strike_gap_ack_chunks(stcb, asoc, biggest_tsn_acked, biggest_tsn_newly_acked, this_sack_lowest_newack, accum_moved); } /* JRS - Use the congestion control given in the CC module */ asoc->cc_functions.sctp_cwnd_update_after_fr(stcb, asoc); /* Now are we exiting loss recovery ? 
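* will_exit_fast_recovery was latched above once last_acked_seq reached fast_recovery_tsn (the RFC 2582 style exit point); the per-net flags below handle the CMT flavour of fast recovery.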
*/ if (will_exit_fast_recovery) { /* Ok, we must exit fast recovery */ asoc->fast_retran_loss_recovery = 0; } if ((asoc->sat_t3_loss_recovery) && SCTP_TSN_GE(asoc->last_acked_seq, asoc->sat_t3_recovery_tsn)) { /* end satellite t3 loss recovery */ asoc->sat_t3_loss_recovery = 0; } /* * CMT Fast recovery */ TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (net->will_exit_fast_recovery) { /* Ok, we must exit fast recovery */ net->fast_retran_loss_recovery = 0; } } /* Adjust and set the new rwnd value */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_RWND_ENABLE) { sctp_log_rwnd_set(SCTP_SET_PEER_RWND_VIA_SACK, asoc->peers_rwnd, asoc->total_flight, (asoc->total_flight_count * SCTP_BASE_SYSCTL(sctp_peer_chunk_oh)), a_rwnd); } asoc->peers_rwnd = sctp_sbspace_sub(a_rwnd, (uint32_t)(asoc->total_flight + (asoc->total_flight_count * SCTP_BASE_SYSCTL(sctp_peer_chunk_oh)))); if (asoc->peers_rwnd < stcb->sctp_ep->sctp_ep.sctp_sws_sender) { /* SWS sender side engages */ asoc->peers_rwnd = 0; } if (asoc->peers_rwnd > old_rwnd) { win_probe_recovery = 1; } /* * Now we must setup so we have a timer up for anyone with * outstanding data. */ done_once = 0; again: j = 0; TAILQ_FOREACH(net, &asoc->nets, sctp_next) { if (win_probe_recovery && (net->window_probe)) { win_probe_recovered = 1; /*- * Find first chunk that was used with * window probe and clear the event. Put * it back into the send queue as if has * not been sent. */ TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { if (tp1->window_probe) { sctp_window_probe_recovery(stcb, asoc, tp1); break; } } } if (net->flight_size) { j++; if (!SCTP_OS_TIMER_PENDING(&net->rxt_timer.timer)) { sctp_timer_start(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net); } if (net->window_probe) { net->window_probe = 0; } } else { if (net->window_probe) { /* * In window probes we must assure a timer * is still running there */ if (!SCTP_OS_TIMER_PENDING(&net->rxt_timer.timer)) { sctp_timer_start(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net); } } else if (SCTP_OS_TIMER_PENDING(&net->rxt_timer.timer)) { sctp_timer_stop(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, net, SCTP_FROM_SCTP_INDATA + SCTP_LOC_32); } } } if ((j == 0) && (!TAILQ_EMPTY(&asoc->sent_queue)) && (asoc->sent_queue_retran_cnt == 0) && (win_probe_recovered == 0) && (done_once == 0)) { /* * huh, this should not happen unless all packets are * PR-SCTP and marked to skip of course. */ if (sctp_fs_audit(asoc)) { TAILQ_FOREACH(net, &asoc->nets, sctp_next) { net->flight_size = 0; } asoc->total_flight = 0; asoc->total_flight_count = 0; asoc->sent_queue_retran_cnt = 0; TAILQ_FOREACH(tp1, &asoc->sent_queue, sctp_next) { if (tp1->sent < SCTP_DATAGRAM_RESEND) { sctp_flight_size_increase(tp1); sctp_total_flight_increase(stcb, tp1); } else if (tp1->sent == SCTP_DATAGRAM_RESEND) { sctp_ucount_incr(asoc->sent_queue_retran_cnt); } } } done_once = 1; goto again; } /*********************************************/ /* Here we perform PR-SCTP procedures */ /* (section 4.2) */ /*********************************************/ /* C1. update advancedPeerAckPoint */ if (SCTP_TSN_GT(cum_ack, asoc->advanced_peer_ack_point)) { asoc->advanced_peer_ack_point = cum_ack; } /* C2. try to further move advancedPeerAckPoint ahead */ if ((asoc->prsctp_supported) && (asoc->pr_sctp_cnt > 0)) { struct sctp_tmit_chunk *lchk; uint32_t old_adv_peer_ack_point; old_adv_peer_ack_point = asoc->advanced_peer_ack_point; lchk = sctp_try_advance_peer_ack_point(stcb, asoc); /* C3. 
See if we need to send a Fwd-TSN */ if (SCTP_TSN_GT(asoc->advanced_peer_ack_point, cum_ack)) { /* * ISSUE with ECN, see FWD-TSN processing. */ if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_LOG_TRY_ADVANCE) { sctp_misc_ints(SCTP_FWD_TSN_CHECK, 0xee, cum_ack, asoc->advanced_peer_ack_point, old_adv_peer_ack_point); } if (SCTP_TSN_GT(asoc->advanced_peer_ack_point, old_adv_peer_ack_point)) { send_forward_tsn(stcb, asoc); } else if (lchk) { /* try to FR fwd-tsn's that get lost too */ if (lchk->rec.data.fwd_tsn_cnt >= 3) { send_forward_tsn(stcb, asoc); } } } if (lchk) { /* Assure a timer is up */ sctp_timer_start(SCTP_TIMER_TYPE_SEND, stcb->sctp_ep, stcb, lchk->whoTo); } } if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_SACK_RWND_LOGGING_ENABLE) { sctp_misc_ints(SCTP_SACK_RWND_UPDATE, a_rwnd, stcb->asoc.peers_rwnd, stcb->asoc.total_flight, stcb->asoc.total_output_queue_size); } } void sctp_update_acked(struct sctp_tcb *stcb, struct sctp_shutdown_chunk *cp, int *abort_flag) { /* Copy cum-ack */ uint32_t cum_ack, a_rwnd; cum_ack = ntohl(cp->cumulative_tsn_ack); /* Arrange so a_rwnd does NOT change */ a_rwnd = stcb->asoc.peers_rwnd + stcb->asoc.total_flight; /* Now call the express sack handling */ sctp_express_handle_sack(stcb, cum_ack, a_rwnd, abort_flag, 0); } static void sctp_kick_prsctp_reorder_queue(struct sctp_tcb *stcb, struct sctp_stream_in *strmin) { struct sctp_queued_to_read *control, *ncontrol; struct sctp_association *asoc; uint32_t mid; int need_reasm_check = 0; asoc = &stcb->asoc; mid = strmin->last_mid_delivered; /* * First deliver anything prior to and including the stream no that * came in. */ TAILQ_FOREACH_SAFE(control, &strmin->inqueue, next_instrm, ncontrol) { if (SCTP_MID_GE(asoc->idata_supported, mid, control->mid)) { /* this is deliverable now */ if (((control->sinfo_flags >> 8) & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG) { if (control->on_strm_q) { if (control->on_strm_q == SCTP_ON_ORDERED) { TAILQ_REMOVE(&strmin->inqueue, control, next_instrm); } else if (control->on_strm_q == SCTP_ON_UNORDERED) { TAILQ_REMOVE(&strmin->uno_inqueue, control, next_instrm); #ifdef INVARIANTS } else { panic("strmin: %p ctl: %p unknown %d", strmin, control, control->on_strm_q); #endif } control->on_strm_q = 0; } /* subtract pending on streams */ if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); /* deliver it to at least the delivery-q */ if (stcb->sctp_socket) { sctp_mark_non_revokable(asoc, control->sinfo_tsn); sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, 1, SCTP_READ_LOCK_HELD, SCTP_SO_NOT_LOCKED); } } else { /* Its a fragmented message */ if (control->first_frag_seen) { /* * Make it so this is next to * deliver, we restore later */ strmin->last_mid_delivered = control->mid - 1; need_reasm_check = 1; break; } } } else { /* no more delivery now. 
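* The inqueue is kept in MID order, so the first message beyond last_mid_delivered ends this pass; anything after it must wait for a later FWD-TSN or for normal in-order delivery.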
*/ break; } } if (need_reasm_check) { int ret; ret = sctp_deliver_reasm_check(stcb, &stcb->asoc, strmin, SCTP_READ_LOCK_HELD); if (SCTP_MID_GT(asoc->idata_supported, mid, strmin->last_mid_delivered)) { /* Restore the next to deliver unless we are ahead */ strmin->last_mid_delivered = mid; } if (ret == 0) { /* Left the front Partial one on */ return; } need_reasm_check = 0; } /* * now we must deliver things in queue the normal way if any are * now ready. */ mid = strmin->last_mid_delivered + 1; TAILQ_FOREACH_SAFE(control, &strmin->inqueue, next_instrm, ncontrol) { if (SCTP_MID_EQ(asoc->idata_supported, mid, control->mid)) { if (((control->sinfo_flags >> 8) & SCTP_DATA_NOT_FRAG) == SCTP_DATA_NOT_FRAG) { /* this is deliverable now */ if (control->on_strm_q) { if (control->on_strm_q == SCTP_ON_ORDERED) { TAILQ_REMOVE(&strmin->inqueue, control, next_instrm); } else if (control->on_strm_q == SCTP_ON_UNORDERED) { TAILQ_REMOVE(&strmin->uno_inqueue, control, next_instrm); #ifdef INVARIANTS } else { panic("strmin: %p ctl: %p unknown %d", strmin, control, control->on_strm_q); #endif } control->on_strm_q = 0; } /* subtract pending on streams */ if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); /* deliver it to at least the delivery-q */ strmin->last_mid_delivered = control->mid; if (stcb->sctp_socket) { sctp_mark_non_revokable(asoc, control->sinfo_tsn); sctp_add_to_readq(stcb->sctp_ep, stcb, control, &stcb->sctp_socket->so_rcv, 1, SCTP_READ_LOCK_HELD, SCTP_SO_NOT_LOCKED); } mid = strmin->last_mid_delivered + 1; } else { /* Its a fragmented message */ if (control->first_frag_seen) { /* * Make it so this is next to * deliver */ strmin->last_mid_delivered = control->mid - 1; need_reasm_check = 1; break; } } } else { break; } } if (need_reasm_check) { (void)sctp_deliver_reasm_check(stcb, &stcb->asoc, strmin, SCTP_READ_LOCK_HELD); } } static void sctp_flush_reassm_for_str_seq(struct sctp_tcb *stcb, struct sctp_association *asoc, uint16_t stream, uint32_t mid, int ordered, uint32_t cumtsn) { struct sctp_queued_to_read *control; struct sctp_stream_in *strm; struct sctp_tmit_chunk *chk, *nchk; int cnt_removed = 0; /* * For now large messages held on the stream reasm that are complete * will be tossed too. We could in theory do more work to spin * through and stop after dumping one msg aka seeing the start of a * new msg at the head, and call the delivery function... to see if * it can be delivered... But for now we just dump everything on the * queue. 
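* The one guard below: for pre-I-DATA unordered traffic, only chunks whose TSN does not exceed the new cumulative TSN are purged, since later fragments may still complete normally.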
*/ strm = &asoc->strmin[stream]; control = sctp_find_reasm_entry(strm, mid, ordered, asoc->idata_supported); if (control == NULL) { /* Not found */ return; } if (!asoc->idata_supported && !ordered && SCTP_TSN_GT(control->fsn_included, cumtsn)) { return; } TAILQ_FOREACH_SAFE(chk, &control->reasm, sctp_next, nchk) { /* Purge hanging chunks */ if (!asoc->idata_supported && (ordered == 0)) { if (SCTP_TSN_GT(chk->rec.data.tsn, cumtsn)) { break; } } cnt_removed++; TAILQ_REMOVE(&control->reasm, chk, sctp_next); if (asoc->size_on_reasm_queue >= chk->send_size) { asoc->size_on_reasm_queue -= chk->send_size; } else { #ifdef INVARIANTS panic("size_on_reasm_queue = %u smaller than chunk length %u", asoc->size_on_reasm_queue, chk->send_size); #else asoc->size_on_reasm_queue = 0; #endif } sctp_ucount_decr(asoc->cnt_on_reasm_queue); if (chk->data) { sctp_m_freem(chk->data); chk->data = NULL; } sctp_free_a_chunk(stcb, chk, SCTP_SO_NOT_LOCKED); } if (!TAILQ_EMPTY(&control->reasm)) { /* This has to be old data, unordered */ if (control->data) { sctp_m_freem(control->data); control->data = NULL; } sctp_reset_a_control(control, stcb->sctp_ep, cumtsn); chk = TAILQ_FIRST(&control->reasm); if (chk->rec.data.rcv_flags & SCTP_DATA_FIRST_FRAG) { TAILQ_REMOVE(&control->reasm, chk, sctp_next); sctp_add_chk_to_control(control, strm, stcb, asoc, chk, SCTP_READ_LOCK_HELD); } sctp_deliver_reasm_check(stcb, asoc, strm, SCTP_READ_LOCK_HELD); return; } if (control->on_strm_q == SCTP_ON_ORDERED) { TAILQ_REMOVE(&strm->inqueue, control, next_instrm); if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); control->on_strm_q = 0; } else if (control->on_strm_q == SCTP_ON_UNORDERED) { TAILQ_REMOVE(&strm->uno_inqueue, control, next_instrm); control->on_strm_q = 0; #ifdef INVARIANTS } else if (control->on_strm_q) { panic("strm: %p ctl: %p unknown %d", strm, control, control->on_strm_q); #endif } control->on_strm_q = 0; if (control->on_read_q == 0) { sctp_free_remote_addr(control->whoFrom); if (control->data) { sctp_m_freem(control->data); control->data = NULL; } sctp_free_a_readq(stcb, control); } } void sctp_handle_forward_tsn(struct sctp_tcb *stcb, struct sctp_forward_tsn_chunk *fwd, int *abort_flag, struct mbuf *m, int offset) { /* The pr-sctp fwd tsn */ /* * here we will perform all the data receiver side steps for * processing FwdTSN, as required in by pr-sctp draft: * * Assume we get FwdTSN(x): * * 1) update local cumTSN to x 2) try to further advance cumTSN to x * + others we have 3) examine and update re-ordering queue on * pr-in-streams 4) clean up re-assembly queue 5) Send a sack to * report where we are. */ struct sctp_association *asoc; uint32_t new_cum_tsn, gap; unsigned int i, fwd_sz, m_size; uint32_t str_seq; struct sctp_stream_in *strm; struct sctp_queued_to_read *control, *sv; asoc = &stcb->asoc; if ((fwd_sz = ntohs(fwd->ch.chunk_length)) < sizeof(struct sctp_forward_tsn_chunk)) { SCTPDBG(SCTP_DEBUG_INDATA1, "Bad size too small/big fwd-tsn\n"); return; } m_size = (stcb->asoc.mapping_array_size << 3); /*************************************************************/ /* 1. 
Here we update local cumTSN and shift the bitmap array */ /*************************************************************/ new_cum_tsn = ntohl(fwd->new_cumulative_tsn); if (SCTP_TSN_GE(asoc->cumulative_tsn, new_cum_tsn)) { /* Already got there ... */ return; } /* * now we know the new TSN is more advanced, let's find the actual * gap */ SCTP_CALC_TSN_TO_GAP(gap, new_cum_tsn, asoc->mapping_array_base_tsn); asoc->cumulative_tsn = new_cum_tsn; if (gap >= m_size) { if ((long)gap > sctp_sbspace(&stcb->asoc, &stcb->sctp_socket->so_rcv)) { struct mbuf *op_err; char msg[SCTP_DIAG_INFO_LEN]; /* * out of range (of single byte chunks in the rwnd I * give out). This must be an attacker. */ *abort_flag = 1; snprintf(msg, sizeof(msg), "New cum ack %8.8x too high, highest TSN %8.8x", new_cum_tsn, asoc->highest_tsn_inside_map); op_err = sctp_generate_cause(SCTP_CAUSE_PROTOCOL_VIOLATION, msg); stcb->sctp_ep->last_abort_code = SCTP_FROM_SCTP_INDATA + SCTP_LOC_33; sctp_abort_an_association(stcb->sctp_ep, stcb, op_err, SCTP_SO_NOT_LOCKED); return; } SCTP_STAT_INCR(sctps_fwdtsn_map_over); memset(stcb->asoc.mapping_array, 0, stcb->asoc.mapping_array_size); asoc->mapping_array_base_tsn = new_cum_tsn + 1; asoc->highest_tsn_inside_map = new_cum_tsn; memset(stcb->asoc.nr_mapping_array, 0, stcb->asoc.mapping_array_size); asoc->highest_tsn_inside_nr_map = new_cum_tsn; if (SCTP_BASE_SYSCTL(sctp_logging_level) & SCTP_MAP_LOGGING_ENABLE) { sctp_log_map(0, 3, asoc->highest_tsn_inside_map, SCTP_MAP_SLIDE_RESULT); } } else { SCTP_TCB_LOCK_ASSERT(stcb); for (i = 0; i <= gap; i++) { if (!SCTP_IS_TSN_PRESENT(asoc->mapping_array, i) && !SCTP_IS_TSN_PRESENT(asoc->nr_mapping_array, i)) { SCTP_SET_TSN_PRESENT(asoc->nr_mapping_array, i); if (SCTP_TSN_GT(asoc->mapping_array_base_tsn + i, asoc->highest_tsn_inside_nr_map)) { asoc->highest_tsn_inside_nr_map = asoc->mapping_array_base_tsn + i; } } } } /*************************************************************/ /* 2. Clear up re-assembly queue */ /*************************************************************/ /* This is now done as part of clearing up the stream/seq */ if (asoc->idata_supported == 0) { uint16_t sid; /* Flush all the un-ordered data based on cum-tsn */ SCTP_INP_READ_LOCK(stcb->sctp_ep); for (sid = 0; sid < asoc->streamincnt; sid++) { sctp_flush_reassm_for_str_seq(stcb, asoc, sid, 0, 0, new_cum_tsn); } SCTP_INP_READ_UNLOCK(stcb->sctp_ep); } /*******************************************************/ /* 3. Update the PR-stream re-ordering queues and fix */ /* delivery issues as needed. */ /*******************************************************/ fwd_sz -= sizeof(*fwd); if (m && fwd_sz) { /* New method. 
*/ unsigned int num_str; uint32_t mid, cur_mid; uint16_t sid; uint16_t ordered, flags; struct sctp_strseq *stseq, strseqbuf; struct sctp_strseq_mid *stseq_m, strseqbuf_m; offset += sizeof(*fwd); SCTP_INP_READ_LOCK(stcb->sctp_ep); if (asoc->idata_supported) { num_str = fwd_sz / sizeof(struct sctp_strseq_mid); } else { num_str = fwd_sz / sizeof(struct sctp_strseq); } for (i = 0; i < num_str; i++) { if (asoc->idata_supported) { stseq_m = (struct sctp_strseq_mid *)sctp_m_getptr(m, offset, sizeof(struct sctp_strseq_mid), (uint8_t *)&strseqbuf_m); offset += sizeof(struct sctp_strseq_mid); if (stseq_m == NULL) { break; } sid = ntohs(stseq_m->sid); mid = ntohl(stseq_m->mid); flags = ntohs(stseq_m->flags); if (flags & PR_SCTP_UNORDERED_FLAG) { ordered = 0; } else { ordered = 1; } } else { stseq = (struct sctp_strseq *)sctp_m_getptr(m, offset, sizeof(struct sctp_strseq), (uint8_t *)&strseqbuf); offset += sizeof(struct sctp_strseq); if (stseq == NULL) { break; } sid = ntohs(stseq->sid); mid = (uint32_t)ntohs(stseq->ssn); ordered = 1; } /* Convert */ /* now process */ /* * Ok we now look for the stream/seq on the read * queue where its not all delivered. If we find it * we transmute the read entry into a PDI_ABORTED. */ if (sid >= asoc->streamincnt) { /* screwed up streams, stop! */ break; } if ((asoc->str_of_pdapi == sid) && (asoc->ssn_of_pdapi == mid)) { /* * If this is the one we were partially * delivering now then we no longer are. * Note this will change with the reassembly * re-write. */ asoc->fragmented_delivery_inprogress = 0; } strm = &asoc->strmin[sid]; for (cur_mid = strm->last_mid_delivered; SCTP_MID_GE(asoc->idata_supported, mid, cur_mid); cur_mid++) { sctp_flush_reassm_for_str_seq(stcb, asoc, sid, cur_mid, ordered, new_cum_tsn); } TAILQ_FOREACH(control, &stcb->sctp_ep->read_queue, next) { if ((control->sinfo_stream == sid) && (SCTP_MID_EQ(asoc->idata_supported, control->mid, mid))) { str_seq = (sid << 16) | (0x0000ffff & mid); control->pdapi_aborted = 1; sv = stcb->asoc.control_pdapi; control->end_added = 1; if (control->on_strm_q == SCTP_ON_ORDERED) { TAILQ_REMOVE(&strm->inqueue, control, next_instrm); if (asoc->size_on_all_streams >= control->length) { asoc->size_on_all_streams -= control->length; } else { #ifdef INVARIANTS panic("size_on_all_streams = %u smaller than control length %u", asoc->size_on_all_streams, control->length); #else asoc->size_on_all_streams = 0; #endif } sctp_ucount_decr(asoc->cnt_on_all_streams); } else if (control->on_strm_q == SCTP_ON_UNORDERED) { TAILQ_REMOVE(&strm->uno_inqueue, control, next_instrm); #ifdef INVARIANTS } else if (control->on_strm_q) { panic("strm: %p ctl: %p unknown %d", strm, control, control->on_strm_q); #endif } control->on_strm_q = 0; stcb->asoc.control_pdapi = control; sctp_ulp_notify(SCTP_NOTIFY_PARTIAL_DELVIERY_INDICATION, stcb, SCTP_PARTIAL_DELIVERY_ABORTED, (void *)&str_seq, SCTP_SO_NOT_LOCKED); stcb->asoc.control_pdapi = sv; break; } else if ((control->sinfo_stream == sid) && SCTP_MID_GT(asoc->idata_supported, control->mid, mid)) { /* We are past our victim SSN */ break; } } if (SCTP_MID_GT(asoc->idata_supported, mid, strm->last_mid_delivered)) { /* Update the sequence number */ strm->last_mid_delivered = mid; } /* now kick the stream the new way */ /* sa_ignore NO_NULL_CHK */ sctp_kick_prsctp_reorder_queue(stcb, strm); } SCTP_INP_READ_UNLOCK(stcb->sctp_ep); } /* * Now slide thing forward. 
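/*
 * Aside: the parsing loop above walks the variable-length skip list that
 * follows the FORWARD-TSN header.  Per RFC 3758 each entry carries a
 * 16-bit stream id and a 16-bit stream sequence number; the I-DATA
 * variant (RFC 8260) instead carries a 16-bit stream id, 16-bit flags
 * (low bit: unordered), and a 32-bit message id.  A sketch of the two
 * wire layouts, with illustrative struct names mirroring the fields
 * read above:
 */
#include <stdint.h>

struct fwd_skip {		/* classic FORWARD-TSN entry */
	uint16_t sid;		/* stream identifier */
	uint16_t ssn;		/* stream sequence number */
};

struct ifwd_skip {		/* I-FORWARD-TSN entry */
	uint16_t sid;		/* stream identifier */
	uint16_t flags;		/* low bit set: unordered message */
	uint32_t mid;		/* 32-bit message identifier */
};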
*/ sctp_slide_mapping_arrays(stcb); } Index: projects/runtime-coverage/sys/ufs/ffs/ffs_alloc.c =================================================================== --- projects/runtime-coverage/sys/ufs/ffs/ffs_alloc.c (revision 325444) +++ projects/runtime-coverage/sys/ufs/ffs/ffs_alloc.c (revision 325445) @@ -1,3225 +1,3239 @@ /*- * Copyright (c) 2002 Networks Associates Technology, Inc. * All rights reserved. * * This software was developed for the FreeBSD Project by Marshall * Kirk McKusick and Network Associates Laboratories, the Security * Research Division of Network Associates, Inc. under DARPA/SPAWAR * contract N66001-01-C-8035 ("CBOSS"), as part of the DARPA CHATS * research program * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Copyright (c) 1982, 1986, 1989, 1993 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * @(#)ffs_alloc.c 8.18 (Berkeley) 5/26/95 */ #include __FBSDID("$FreeBSD$"); #include "opt_quota.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include typedef ufs2_daddr_t allocfcn_t(struct inode *ip, u_int cg, ufs2_daddr_t bpref, int size, int rsize); static ufs2_daddr_t ffs_alloccg(struct inode *, u_int, ufs2_daddr_t, int, int); static ufs2_daddr_t ffs_alloccgblk(struct inode *, struct buf *, ufs2_daddr_t, int); static void ffs_blkfree_cg(struct ufsmount *, struct fs *, struct vnode *, ufs2_daddr_t, long, ino_t, struct workhead *); static void ffs_blkfree_trim_completed(struct bio *); static void ffs_blkfree_trim_task(void *ctx, int pending __unused); #ifdef INVARIANTS static int ffs_checkblk(struct inode *, ufs2_daddr_t, long); #endif static ufs2_daddr_t ffs_clusteralloc(struct inode *, u_int, ufs2_daddr_t, int); static ino_t ffs_dirpref(struct inode *); static ufs2_daddr_t ffs_fragextend(struct inode *, u_int, ufs2_daddr_t, int, int); static ufs2_daddr_t ffs_hashalloc (struct inode *, u_int, ufs2_daddr_t, int, int, allocfcn_t *); static ufs2_daddr_t ffs_nodealloccg(struct inode *, u_int, ufs2_daddr_t, int, int); static ufs1_daddr_t ffs_mapsearch(struct fs *, struct cg *, ufs2_daddr_t, int); static int ffs_reallocblks_ufs1(struct vop_reallocblks_args *); static int ffs_reallocblks_ufs2(struct vop_reallocblks_args *); static void ffs_ckhash_cg(struct buf *); /* * Allocate a block in the filesystem. * * The size of the requested block is given, which must be some * multiple of fs_fsize and <= fs_bsize. * A preference may be optionally specified. If a preference is given * the following hierarchy is used to allocate a block: * 1) allocate the requested block. * 2) allocate a rotationally optimal block in the same cylinder. * 3) allocate a block in the same cylinder group. * 4) quadradically rehash into other cylinder groups, until an * available block is located. * If no block preference is given the following hierarchy is used * to allocate a block: * 1) allocate a block in the cylinder group that contains the * inode for the file. * 2) quadradically rehash into other cylinder groups, until an * available block is located. 
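/*
 * Aside: the "multiple of fs_fsize and <= fs_bsize" rule stated above is
 * expressed with the standard fs.h fragment macros.  A minimal sketch
 * (illustrative helper, not part of this file); e.g. with 32K blocks and
 * 4K fragments, a 20K request is numfrags(fs, 20480) == 5 fragments:
 */
static int
ffs_size_ok(struct fs *fs, int size)
{
	/* a whole number of fragments, at most one full block */
	return (size > 0 && size <= fs->fs_bsize &&
	    fragoff(fs, size) == 0);
}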
*/ int ffs_alloc(ip, lbn, bpref, size, flags, cred, bnp) struct inode *ip; ufs2_daddr_t lbn, bpref; int size, flags; struct ucred *cred; ufs2_daddr_t *bnp; { struct fs *fs; struct ufsmount *ump; ufs2_daddr_t bno; u_int cg, reclaimed; static struct timeval lastfail; static int curfail; int64_t delta; #ifdef QUOTA int error; #endif *bnp = 0; ump = ITOUMP(ip); fs = ump->um_fs; mtx_assert(UFS_MTX(ump), MA_OWNED); #ifdef INVARIANTS if ((u_int)size > fs->fs_bsize || fragoff(fs, size) != 0) { printf("dev = %s, bsize = %ld, size = %d, fs = %s\n", devtoname(ump->um_dev), (long)fs->fs_bsize, size, fs->fs_fsmnt); panic("ffs_alloc: bad size"); } if (cred == NOCRED) panic("ffs_alloc: missing credential"); #endif /* INVARIANTS */ reclaimed = 0; retry: #ifdef QUOTA UFS_UNLOCK(ump); error = chkdq(ip, btodb(size), cred, 0); if (error) return (error); UFS_LOCK(ump); #endif if (size == fs->fs_bsize && fs->fs_cstotal.cs_nbfree == 0) goto nospace; if (priv_check_cred(cred, PRIV_VFS_BLOCKRESERVE, 0) && freespace(fs, fs->fs_minfree) - numfrags(fs, size) < 0) goto nospace; if (bpref >= fs->fs_size) bpref = 0; if (bpref == 0) cg = ino_to_cg(fs, ip->i_number); else cg = dtog(fs, bpref); bno = ffs_hashalloc(ip, cg, bpref, size, size, ffs_alloccg); if (bno > 0) { delta = btodb(size); DIP_SET(ip, i_blocks, DIP(ip, i_blocks) + delta); if (flags & IO_EXT) ip->i_flag |= IN_CHANGE; else ip->i_flag |= IN_CHANGE | IN_UPDATE; *bnp = bno; return (0); } nospace: #ifdef QUOTA UFS_UNLOCK(ump); /* * Restore user's disk quota because allocation failed. */ (void) chkdq(ip, -btodb(size), cred, FORCE); UFS_LOCK(ump); #endif if (reclaimed == 0 && (flags & IO_BUFLOCKED) == 0) { reclaimed = 1; softdep_request_cleanup(fs, ITOV(ip), cred, FLUSH_BLOCKS_WAIT); goto retry; } UFS_UNLOCK(ump); if (reclaimed > 0 && ppsratecheck(&lastfail, &curfail, 1)) { ffs_fserr(fs, ip->i_number, "filesystem full"); uprintf("\n%s: write failed, filesystem is full\n", fs->fs_fsmnt); } return (ENOSPC); } /* * Reallocate a fragment to a bigger size * * The number and size of the old block is given, and a preference * and new size is also specified. The allocator attempts to extend * the original block. Failing that, the regular block allocator is * invoked to get an appropriate block. */ int ffs_realloccg(ip, lbprev, bprev, bpref, osize, nsize, flags, cred, bpp) struct inode *ip; ufs2_daddr_t lbprev; ufs2_daddr_t bprev; ufs2_daddr_t bpref; int osize, nsize, flags; struct ucred *cred; struct buf **bpp; { struct vnode *vp; struct fs *fs; struct buf *bp; struct ufsmount *ump; u_int cg, request, reclaimed; int error, gbflags; ufs2_daddr_t bno; static struct timeval lastfail; static int curfail; int64_t delta; vp = ITOV(ip); ump = ITOUMP(ip); fs = ump->um_fs; bp = NULL; gbflags = (flags & BA_UNMAPPED) != 0 ? 
GB_UNMAPPED : 0; mtx_assert(UFS_MTX(ump), MA_OWNED); #ifdef INVARIANTS if (vp->v_mount->mnt_kern_flag & MNTK_SUSPENDED) panic("ffs_realloccg: allocation on suspended filesystem"); if ((u_int)osize > fs->fs_bsize || fragoff(fs, osize) != 0 || (u_int)nsize > fs->fs_bsize || fragoff(fs, nsize) != 0) { printf( "dev = %s, bsize = %ld, osize = %d, nsize = %d, fs = %s\n", devtoname(ump->um_dev), (long)fs->fs_bsize, osize, nsize, fs->fs_fsmnt); panic("ffs_realloccg: bad size"); } if (cred == NOCRED) panic("ffs_realloccg: missing credential"); #endif /* INVARIANTS */ reclaimed = 0; retry: if (priv_check_cred(cred, PRIV_VFS_BLOCKRESERVE, 0) && freespace(fs, fs->fs_minfree) - numfrags(fs, nsize - osize) < 0) { goto nospace; } if (bprev == 0) { printf("dev = %s, bsize = %ld, bprev = %jd, fs = %s\n", devtoname(ump->um_dev), (long)fs->fs_bsize, (intmax_t)bprev, fs->fs_fsmnt); panic("ffs_realloccg: bad bprev"); } UFS_UNLOCK(ump); /* * Allocate the extra space in the buffer. */ error = bread_gb(vp, lbprev, osize, NOCRED, gbflags, &bp); if (error) { brelse(bp); return (error); } if (bp->b_blkno == bp->b_lblkno) { if (lbprev >= UFS_NDADDR) panic("ffs_realloccg: lbprev out of range"); bp->b_blkno = fsbtodb(fs, bprev); } #ifdef QUOTA error = chkdq(ip, btodb(nsize - osize), cred, 0); if (error) { brelse(bp); return (error); } #endif /* * Check for extension in the existing location. */ *bpp = NULL; cg = dtog(fs, bprev); UFS_LOCK(ump); bno = ffs_fragextend(ip, cg, bprev, osize, nsize); if (bno) { if (bp->b_blkno != fsbtodb(fs, bno)) panic("ffs_realloccg: bad blockno"); delta = btodb(nsize - osize); DIP_SET(ip, i_blocks, DIP(ip, i_blocks) + delta); if (flags & IO_EXT) ip->i_flag |= IN_CHANGE; else ip->i_flag |= IN_CHANGE | IN_UPDATE; allocbuf(bp, nsize); bp->b_flags |= B_DONE; vfs_bio_bzero_buf(bp, osize, nsize - osize); if ((bp->b_flags & (B_MALLOC | B_VMIO)) == B_VMIO) vfs_bio_set_valid(bp, osize, nsize - osize); *bpp = bp; return (0); } /* * Allocate a new disk location. */ if (bpref >= fs->fs_size) bpref = 0; switch ((int)fs->fs_optim) { case FS_OPTSPACE: /* * Allocate an exact sized fragment. Although this makes * best use of space, we will waste time relocating it if * the file continues to grow. If the fragmentation is * less than half of the minimum free reserve, we choose * to begin optimizing for time. */ request = nsize; if (fs->fs_minfree <= 5 || fs->fs_cstotal.cs_nffree > (off_t)fs->fs_dsize * fs->fs_minfree / (2 * 100)) break; log(LOG_NOTICE, "%s: optimization changed from SPACE to TIME\n", fs->fs_fsmnt); fs->fs_optim = FS_OPTTIME; break; case FS_OPTTIME: /* * At this point we have discovered a file that is trying to * grow a small fragment to a larger fragment. To save time, * we allocate a full sized block, then free the unused portion. * If the file continues to grow, the `ffs_fragextend' call * above will be able to grow it in place without further * copying. If aberrant programs cause disk fragmentation to * grow within 2% of the free reserve, we choose to begin * optimizing for space. 
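/*
 * Aside: the SPACE/TIME switching described above is a hysteresis on the
 * free-fragment count.  A sketch of the two thresholds (illustrative
 * helper; the real switch statement here also chooses `request' as a
 * side effect):
 */
static int
ffs_next_optim(int optim, int64_t nffree, int64_t dsize, int minfree)
{
	if (optim == FS_OPTSPACE && minfree > 5 &&
	    nffree <= dsize * minfree / (2 * 100))
		return (FS_OPTTIME);	/* fragmentation below half the reserve */
	if (optim == FS_OPTTIME &&
	    nffree >= dsize * (minfree - 2) / 100)
		return (FS_OPTSPACE);	/* fragments within 2% of the reserve */
	return (optim);
}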
*/ request = fs->fs_bsize; if (fs->fs_cstotal.cs_nffree < (off_t)fs->fs_dsize * (fs->fs_minfree - 2) / 100) break; log(LOG_NOTICE, "%s: optimization changed from TIME to SPACE\n", fs->fs_fsmnt); fs->fs_optim = FS_OPTSPACE; break; default: printf("dev = %s, optim = %ld, fs = %s\n", devtoname(ump->um_dev), (long)fs->fs_optim, fs->fs_fsmnt); panic("ffs_realloccg: bad optim"); /* NOTREACHED */ } bno = ffs_hashalloc(ip, cg, bpref, request, nsize, ffs_alloccg); if (bno > 0) { bp->b_blkno = fsbtodb(fs, bno); if (!DOINGSOFTDEP(vp)) ffs_blkfree(ump, fs, ump->um_devvp, bprev, (long)osize, ip->i_number, vp->v_type, NULL); delta = btodb(nsize - osize); DIP_SET(ip, i_blocks, DIP(ip, i_blocks) + delta); if (flags & IO_EXT) ip->i_flag |= IN_CHANGE; else ip->i_flag |= IN_CHANGE | IN_UPDATE; allocbuf(bp, nsize); bp->b_flags |= B_DONE; vfs_bio_bzero_buf(bp, osize, nsize - osize); if ((bp->b_flags & (B_MALLOC | B_VMIO)) == B_VMIO) vfs_bio_set_valid(bp, osize, nsize - osize); *bpp = bp; return (0); } #ifdef QUOTA UFS_UNLOCK(ump); /* * Restore user's disk quota because allocation failed. */ (void) chkdq(ip, -btodb(nsize - osize), cred, FORCE); UFS_LOCK(ump); #endif nospace: /* * no space available */ if (reclaimed == 0 && (flags & IO_BUFLOCKED) == 0) { reclaimed = 1; UFS_UNLOCK(ump); if (bp) { brelse(bp); bp = NULL; } UFS_LOCK(ump); softdep_request_cleanup(fs, vp, cred, FLUSH_BLOCKS_WAIT); goto retry; } UFS_UNLOCK(ump); if (bp) brelse(bp); if (reclaimed > 0 && ppsratecheck(&lastfail, &curfail, 1)) { ffs_fserr(fs, ip->i_number, "filesystem full"); uprintf("\n%s: write failed, filesystem is full\n", fs->fs_fsmnt); } return (ENOSPC); } /* * Reallocate a sequence of blocks into a contiguous sequence of blocks. * * The vnode and an array of buffer pointers for a range of sequential * logical blocks to be made contiguous is given. The allocator attempts * to find a range of sequential blocks starting as close as possible * from the end of the allocation for the logical block immediately * preceding the current range. If successful, the physical block numbers * in the buffer pointers and in the inode are changed to reflect the new * allocation. If unsuccessful, the allocation is left unchanged. The * success in doing the reallocation is returned. Note that the error * return is not reflected back to the user. Rather the previous block * allocation will be used. */ SYSCTL_NODE(_vfs, OID_AUTO, ffs, CTLFLAG_RW, 0, "FFS filesystem"); static int doasyncfree = 1; SYSCTL_INT(_vfs_ffs, OID_AUTO, doasyncfree, CTLFLAG_RW, &doasyncfree, 0, "do not force synchronous writes when blocks are reallocated"); static int doreallocblks = 1; SYSCTL_INT(_vfs_ffs, OID_AUTO, doreallocblks, CTLFLAG_RW, &doreallocblks, 0, "enable block reallocation"); static int maxclustersearch = 10; SYSCTL_INT(_vfs_ffs, OID_AUTO, maxclustersearch, CTLFLAG_RW, &maxclustersearch, 0, "max number of cylinder group to search for contigous blocks"); #ifdef DEBUG static volatile int prtrealloc = 0; #endif int ffs_reallocblks(ap) struct vop_reallocblks_args /* { struct vnode *a_vp; struct cluster_save *a_buflist; } */ *ap; { struct ufsmount *ump; /* * If the underlying device can do deletes, then skip reallocating * the blocks of this file into contiguous sequences. Devices that * benefit from BIO_DELETE also benefit from not moving the data. * These devices are flash and therefore work less well with this * optimization. Also skip if reallocblks has been disabled globally. 
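/*
 * Aside: the three knobs defined above live under vfs.ffs and can be read
 * or set from userland.  A minimal userland sketch (not kernel code):
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int val;
	size_t len = sizeof(val);

	if (sysctlbyname("vfs.ffs.doreallocblks", &val, &len, NULL, 0) == 0)
		printf("vfs.ffs.doreallocblks = %d\n", val);
	return (0);
}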
*/ ump = ap->a_vp->v_mount->mnt_data; if (ump->um_candelete || doreallocblks == 0) return (ENOSPC); /* * We can't wait in softdep prealloc as it may fsync and recurse * here. Instead we simply fail to reallocate blocks if this * rare condition arises. */ if (DOINGSOFTDEP(ap->a_vp)) if (softdep_prealloc(ap->a_vp, MNT_NOWAIT) != 0) return (ENOSPC); if (ump->um_fstype == UFS1) return (ffs_reallocblks_ufs1(ap)); return (ffs_reallocblks_ufs2(ap)); } static int ffs_reallocblks_ufs1(ap) struct vop_reallocblks_args /* { struct vnode *a_vp; struct cluster_save *a_buflist; } */ *ap; { struct fs *fs; struct inode *ip; struct vnode *vp; struct buf *sbp, *ebp; ufs1_daddr_t *bap, *sbap, *ebap; struct cluster_save *buflist; struct ufsmount *ump; ufs_lbn_t start_lbn, end_lbn; ufs1_daddr_t soff, newblk, blkno; ufs2_daddr_t pref; struct indir start_ap[UFS_NIADDR + 1], end_ap[UFS_NIADDR + 1], *idp; int i, cg, len, start_lvl, end_lvl, ssize; vp = ap->a_vp; ip = VTOI(vp); ump = ITOUMP(ip); fs = ump->um_fs; /* * If we are not tracking block clusters or if we have less than 4% * free blocks left, then do not attempt to cluster. Running with * less than 5% free block reserve is not recommended and those that * choose to do so do not expect to have good file layout. */ if (fs->fs_contigsumsize <= 0 || freespace(fs, 4) < 0) return (ENOSPC); buflist = ap->a_buflist; len = buflist->bs_nchildren; start_lbn = buflist->bs_children[0]->b_lblkno; end_lbn = start_lbn + len - 1; #ifdef INVARIANTS for (i = 0; i < len; i++) if (!ffs_checkblk(ip, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) panic("ffs_reallocblks: unallocated block 1"); for (i = 1; i < len; i++) if (buflist->bs_children[i]->b_lblkno != start_lbn + i) panic("ffs_reallocblks: non-logical cluster"); blkno = buflist->bs_children[0]->b_blkno; ssize = fsbtodb(fs, fs->fs_frag); for (i = 1; i < len - 1; i++) if (buflist->bs_children[i]->b_blkno != blkno + (i * ssize)) panic("ffs_reallocblks: non-physical cluster %d", i); #endif /* * If the cluster crosses the boundary for the first indirect * block, leave space for the indirect block. Indirect blocks * are initially laid out in a position after the last direct * block. Block reallocation would usually destroy locality by * moving the indirect block out of the way to make room for * data blocks if we didn't compensate here. We should also do * this for other indirect block boundaries, but it is only * important for the first one. */ if (start_lbn < UFS_NDADDR && end_lbn >= UFS_NDADDR) return (ENOSPC); /* * If the latest allocation is in a new cylinder group, assume that * the filesystem has decided to move and do not force it back to * the previous cylinder group. */ if (dtog(fs, dbtofsb(fs, buflist->bs_children[0]->b_blkno)) != dtog(fs, dbtofsb(fs, buflist->bs_children[len - 1]->b_blkno))) return (ENOSPC); if (ufs_getlbns(vp, start_lbn, start_ap, &start_lvl) || ufs_getlbns(vp, end_lbn, end_ap, &end_lvl)) return (ENOSPC); /* * Get the starting offset and block map for the first block. */ if (start_lvl == 0) { sbap = &ip->i_din1->di_db[0]; soff = start_lbn; } else { idp = &start_ap[start_lvl - 1]; if (bread(vp, idp->in_lbn, (int)fs->fs_bsize, NOCRED, &sbp)) { brelse(sbp); return (ENOSPC); } sbap = (ufs1_daddr_t *)sbp->b_data; soff = idp->in_off; } /* * If the block range spans two block maps, get the second map. 
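/*
 * Aside: the checks above juggle three address spaces -- device block
 * numbers (b_blkno), file-system fragment addresses, and cylinder-group
 * coordinates.  A sketch of the conversions using the fs.h macros seen
 * here (variable names illustrative):
 */
ufs2_daddr_t fsb = dbtofsb(fs, bp->b_blkno);	/* device addr -> fragment addr */
u_int cg = dtog(fs, fsb);			/* fragment addr -> cylinder group */
ufs2_daddr_t cgd = dtogd(fs, fsb);		/* fragment offset within that cg */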
*/ ebap = NULL; if (end_lvl == 0 || (idp = &end_ap[end_lvl - 1])->in_off + 1 >= len) { ssize = len; } else { #ifdef INVARIANTS if (start_lvl > 0 && start_ap[start_lvl - 1].in_lbn == idp->in_lbn) panic("ffs_reallocblk: start == end"); #endif ssize = len - (idp->in_off + 1); if (bread(vp, idp->in_lbn, (int)fs->fs_bsize, NOCRED, &ebp)) goto fail; ebap = (ufs1_daddr_t *)ebp->b_data; } /* * Find the preferred location for the cluster. If we have not * previously failed at this endeavor, then follow our standard * preference calculation. If we have failed at it, then pick up * where we last ended our search. */ UFS_LOCK(ump); if (ip->i_nextclustercg == -1) pref = ffs_blkpref_ufs1(ip, start_lbn, soff, sbap); else pref = cgdata(fs, ip->i_nextclustercg); /* * Search the block map looking for an allocation of the desired size. * To avoid wasting too much time, we limit the number of cylinder * groups that we will search. */ cg = dtog(fs, pref); for (i = min(maxclustersearch, fs->fs_ncg); i > 0; i--) { if ((newblk = ffs_clusteralloc(ip, cg, pref, len)) != 0) break; cg += 1; if (cg >= fs->fs_ncg) cg = 0; } /* * If we have failed in our search, record where we gave up for * next time. Otherwise, fall back to our usual search citerion. */ if (newblk == 0) { ip->i_nextclustercg = cg; UFS_UNLOCK(ump); goto fail; } ip->i_nextclustercg = -1; /* * We have found a new contiguous block. * * First we have to replace the old block pointers with the new * block pointers in the inode and indirect blocks associated * with the file. */ #ifdef DEBUG if (prtrealloc) printf("realloc: ino %ju, lbns %jd-%jd\n\told:", (uintmax_t)ip->i_number, (intmax_t)start_lbn, (intmax_t)end_lbn); #endif blkno = newblk; for (bap = &sbap[soff], i = 0; i < len; i++, blkno += fs->fs_frag) { if (i == ssize) { bap = ebap; soff = -i; } #ifdef INVARIANTS if (!ffs_checkblk(ip, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) panic("ffs_reallocblks: unallocated block 2"); if (dbtofsb(fs, buflist->bs_children[i]->b_blkno) != *bap) panic("ffs_reallocblks: alloc mismatch"); #endif #ifdef DEBUG if (prtrealloc) printf(" %d,", *bap); #endif if (DOINGSOFTDEP(vp)) { if (sbap == &ip->i_din1->di_db[0] && i < ssize) softdep_setup_allocdirect(ip, start_lbn + i, blkno, *bap, fs->fs_bsize, fs->fs_bsize, buflist->bs_children[i]); else softdep_setup_allocindir_page(ip, start_lbn + i, i < ssize ? sbp : ebp, soff + i, blkno, *bap, buflist->bs_children[i]); } *bap++ = blkno; } /* * Next we must write out the modified inode and indirect blocks. * For strict correctness, the writes should be synchronous since * the old block values may have been written to disk. In practise * they are almost never written, but if we are concerned about * strict correctness, the `doasyncfree' flag should be set to zero. * * The test on `doasyncfree' should be changed to test a flag * that shows whether the associated buffers and inodes have * been written. The flag should be set when the cluster is * started and cleared whenever the buffer or inode is flushed. * We can then check below to see if it is set, and do the * synchronous write only when it has been cleared. */ if (sbap != &ip->i_din1->di_db[0]) { if (doasyncfree) bdwrite(sbp); else bwrite(sbp); } else { ip->i_flag |= IN_CHANGE | IN_UPDATE; if (!doasyncfree) ffs_update(vp, 1); } if (ssize < len) { if (doasyncfree) bdwrite(ebp); else bwrite(ebp); } /* * Last, free the old blocks and assign the new blocks to the buffers. 
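/*
 * Aside: the cluster hunt above is a bounded, wrapping scan over cylinder
 * groups, with i_nextclustercg remembering where a failed search stopped.
 * A sketch of the control flow (hypothetical try_cg callback; the real
 * loop calls ffs_clusteralloc() directly):
 */
static int
scan_cgs(u_int start, u_int ncg, u_int limit, int (*try_cg)(u_int))
{
	u_int cg, n;

	cg = start;
	for (n = (limit < ncg) ? limit : ncg; n > 0; n--) {
		if (try_cg(cg))
			return ((int)cg);	/* found a cluster here */
		if (++cg >= ncg)
			cg = 0;			/* wrap past the last group */
	}
	return (-1);	/* failed; the real code saves the last cg tried */
}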
*/ #ifdef DEBUG if (prtrealloc) printf("\n\tnew:"); #endif for (blkno = newblk, i = 0; i < len; i++, blkno += fs->fs_frag) { if (!DOINGSOFTDEP(vp)) ffs_blkfree(ump, fs, ump->um_devvp, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize, ip->i_number, vp->v_type, NULL); buflist->bs_children[i]->b_blkno = fsbtodb(fs, blkno); #ifdef INVARIANTS if (!ffs_checkblk(ip, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) panic("ffs_reallocblks: unallocated block 3"); #endif #ifdef DEBUG if (prtrealloc) printf(" %d,", blkno); #endif } #ifdef DEBUG if (prtrealloc) { prtrealloc--; printf("\n"); } #endif return (0); fail: if (ssize < len) brelse(ebp); if (sbap != &ip->i_din1->di_db[0]) brelse(sbp); return (ENOSPC); } static int ffs_reallocblks_ufs2(ap) struct vop_reallocblks_args /* { struct vnode *a_vp; struct cluster_save *a_buflist; } */ *ap; { struct fs *fs; struct inode *ip; struct vnode *vp; struct buf *sbp, *ebp; ufs2_daddr_t *bap, *sbap, *ebap; struct cluster_save *buflist; struct ufsmount *ump; ufs_lbn_t start_lbn, end_lbn; ufs2_daddr_t soff, newblk, blkno, pref; struct indir start_ap[UFS_NIADDR + 1], end_ap[UFS_NIADDR + 1], *idp; int i, cg, len, start_lvl, end_lvl, ssize; vp = ap->a_vp; ip = VTOI(vp); ump = ITOUMP(ip); fs = ump->um_fs; /* * If we are not tracking block clusters or if we have less than 4% * free blocks left, then do not attempt to cluster. Running with * less than 5% free block reserve is not recommended and those that * choose to do so do not expect to have good file layout. */ if (fs->fs_contigsumsize <= 0 || freespace(fs, 4) < 0) return (ENOSPC); buflist = ap->a_buflist; len = buflist->bs_nchildren; start_lbn = buflist->bs_children[0]->b_lblkno; end_lbn = start_lbn + len - 1; #ifdef INVARIANTS for (i = 0; i < len; i++) if (!ffs_checkblk(ip, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) panic("ffs_reallocblks: unallocated block 1"); for (i = 1; i < len; i++) if (buflist->bs_children[i]->b_lblkno != start_lbn + i) panic("ffs_reallocblks: non-logical cluster"); blkno = buflist->bs_children[0]->b_blkno; ssize = fsbtodb(fs, fs->fs_frag); for (i = 1; i < len - 1; i++) if (buflist->bs_children[i]->b_blkno != blkno + (i * ssize)) panic("ffs_reallocblks: non-physical cluster %d", i); #endif /* * If the cluster crosses the boundary for the first indirect * block, do not move anything in it. Indirect blocks are * usually initially laid out in a position between the data * blocks. Block reallocation would usually destroy locality by * moving the indirect block out of the way to make room for * data blocks if we didn't compensate here. We should also do * this for other indirect block boundaries, but it is only * important for the first one. */ if (start_lbn < UFS_NDADDR && end_lbn >= UFS_NDADDR) return (ENOSPC); /* * If the latest allocation is in a new cylinder group, assume that * the filesystem has decided to move and do not force it back to * the previous cylinder group. */ if (dtog(fs, dbtofsb(fs, buflist->bs_children[0]->b_blkno)) != dtog(fs, dbtofsb(fs, buflist->bs_children[len - 1]->b_blkno))) return (ENOSPC); if (ufs_getlbns(vp, start_lbn, start_ap, &start_lvl) || ufs_getlbns(vp, end_lbn, end_ap, &end_lvl)) return (ENOSPC); /* * Get the starting offset and block map for the first block. 
*/ if (start_lvl == 0) { sbap = &ip->i_din2->di_db[0]; soff = start_lbn; } else { idp = &start_ap[start_lvl - 1]; if (bread(vp, idp->in_lbn, (int)fs->fs_bsize, NOCRED, &sbp)) { brelse(sbp); return (ENOSPC); } sbap = (ufs2_daddr_t *)sbp->b_data; soff = idp->in_off; } /* * If the block range spans two block maps, get the second map. */ ebap = NULL; if (end_lvl == 0 || (idp = &end_ap[end_lvl - 1])->in_off + 1 >= len) { ssize = len; } else { #ifdef INVARIANTS if (start_lvl > 0 && start_ap[start_lvl - 1].in_lbn == idp->in_lbn) panic("ffs_reallocblk: start == end"); #endif ssize = len - (idp->in_off + 1); if (bread(vp, idp->in_lbn, (int)fs->fs_bsize, NOCRED, &ebp)) goto fail; ebap = (ufs2_daddr_t *)ebp->b_data; } /* * Find the preferred location for the cluster. If we have not * previously failed at this endeavor, then follow our standard * preference calculation. If we have failed at it, then pick up * where we last ended our search. */ UFS_LOCK(ump); if (ip->i_nextclustercg == -1) pref = ffs_blkpref_ufs2(ip, start_lbn, soff, sbap); else pref = cgdata(fs, ip->i_nextclustercg); /* * Search the block map looking for an allocation of the desired size. * To avoid wasting too much time, we limit the number of cylinder * groups that we will search. */ cg = dtog(fs, pref); for (i = min(maxclustersearch, fs->fs_ncg); i > 0; i--) { if ((newblk = ffs_clusteralloc(ip, cg, pref, len)) != 0) break; cg += 1; if (cg >= fs->fs_ncg) cg = 0; } /* * If we have failed in our search, record where we gave up for * next time. Otherwise, fall back to our usual search citerion. */ if (newblk == 0) { ip->i_nextclustercg = cg; UFS_UNLOCK(ump); goto fail; } ip->i_nextclustercg = -1; /* * We have found a new contiguous block. * * First we have to replace the old block pointers with the new * block pointers in the inode and indirect blocks associated * with the file. */ #ifdef DEBUG if (prtrealloc) printf("realloc: ino %ju, lbns %jd-%jd\n\told:", (uintmax_t)ip->i_number, (intmax_t)start_lbn, (intmax_t)end_lbn); #endif blkno = newblk; for (bap = &sbap[soff], i = 0; i < len; i++, blkno += fs->fs_frag) { if (i == ssize) { bap = ebap; soff = -i; } #ifdef INVARIANTS if (!ffs_checkblk(ip, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) panic("ffs_reallocblks: unallocated block 2"); if (dbtofsb(fs, buflist->bs_children[i]->b_blkno) != *bap) panic("ffs_reallocblks: alloc mismatch"); #endif #ifdef DEBUG if (prtrealloc) printf(" %jd,", (intmax_t)*bap); #endif if (DOINGSOFTDEP(vp)) { if (sbap == &ip->i_din2->di_db[0] && i < ssize) softdep_setup_allocdirect(ip, start_lbn + i, blkno, *bap, fs->fs_bsize, fs->fs_bsize, buflist->bs_children[i]); else softdep_setup_allocindir_page(ip, start_lbn + i, i < ssize ? sbp : ebp, soff + i, blkno, *bap, buflist->bs_children[i]); } *bap++ = blkno; } /* * Next we must write out the modified inode and indirect blocks. * For strict correctness, the writes should be synchronous since * the old block values may have been written to disk. In practise * they are almost never written, but if we are concerned about * strict correctness, the `doasyncfree' flag should be set to zero. * * The test on `doasyncfree' should be changed to test a flag * that shows whether the associated buffers and inodes have * been written. The flag should be set when the cluster is * started and cleared whenever the buffer or inode is flushed. * We can then check below to see if it is set, and do the * synchronous write only when it has been cleared. 
*/ if (sbap != &ip->i_din2->di_db[0]) { if (doasyncfree) bdwrite(sbp); else bwrite(sbp); } else { ip->i_flag |= IN_CHANGE | IN_UPDATE; if (!doasyncfree) ffs_update(vp, 1); } if (ssize < len) { if (doasyncfree) bdwrite(ebp); else bwrite(ebp); } /* * Last, free the old blocks and assign the new blocks to the buffers. */ #ifdef DEBUG if (prtrealloc) printf("\n\tnew:"); #endif for (blkno = newblk, i = 0; i < len; i++, blkno += fs->fs_frag) { if (!DOINGSOFTDEP(vp)) ffs_blkfree(ump, fs, ump->um_devvp, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize, ip->i_number, vp->v_type, NULL); buflist->bs_children[i]->b_blkno = fsbtodb(fs, blkno); #ifdef INVARIANTS if (!ffs_checkblk(ip, dbtofsb(fs, buflist->bs_children[i]->b_blkno), fs->fs_bsize)) panic("ffs_reallocblks: unallocated block 3"); #endif #ifdef DEBUG if (prtrealloc) printf(" %jd,", (intmax_t)blkno); #endif } #ifdef DEBUG if (prtrealloc) { prtrealloc--; printf("\n"); } #endif return (0); fail: if (ssize < len) brelse(ebp); if (sbap != &ip->i_din2->di_db[0]) brelse(sbp); return (ENOSPC); } /* * Allocate an inode in the filesystem. * * If allocating a directory, use ffs_dirpref to select the inode. * If allocating in a directory, the following hierarchy is followed: * 1) allocate the preferred inode. * 2) allocate an inode in the same cylinder group. * 3) quadradically rehash into other cylinder groups, until an * available inode is located. * If no inode preference is given the following hierarchy is used * to allocate an inode: * 1) allocate an inode in cylinder group 0. * 2) quadradically rehash into other cylinder groups, until an * available inode is located. */ int ffs_valloc(pvp, mode, cred, vpp) struct vnode *pvp; int mode; struct ucred *cred; struct vnode **vpp; { struct inode *pip; struct fs *fs; struct inode *ip; struct timespec ts; struct ufsmount *ump; ino_t ino, ipref; u_int cg; int error, error1, reclaimed; static struct timeval lastfail; static int curfail; *vpp = NULL; pip = VTOI(pvp); ump = ITOUMP(pip); fs = ump->um_fs; UFS_LOCK(ump); reclaimed = 0; retry: if (fs->fs_cstotal.cs_nifree == 0) goto noinodes; if ((mode & IFMT) == IFDIR) ipref = ffs_dirpref(pip); else ipref = pip->i_number; if (ipref >= fs->fs_ncg * fs->fs_ipg) ipref = 0; cg = ino_to_cg(fs, ipref); /* * Track number of dirs created one after another * in a same cg without intervening by files. */ if ((mode & IFMT) == IFDIR) { if (fs->fs_contigdirs[cg] < 255) fs->fs_contigdirs[cg]++; } else { if (fs->fs_contigdirs[cg] > 0) fs->fs_contigdirs[cg]--; } ino = (ino_t)ffs_hashalloc(pip, cg, ipref, mode, 0, (allocfcn_t *)ffs_nodealloccg); if (ino == 0) goto noinodes; error = ffs_vget(pvp->v_mount, ino, LK_EXCLUSIVE, vpp); if (error) { error1 = ffs_vgetf(pvp->v_mount, ino, LK_EXCLUSIVE, vpp, FFSV_FORCEINSMQ); ffs_vfree(pvp, ino, mode); if (error1 == 0) { ip = VTOI(*vpp); if (ip->i_mode) goto dup_alloc; ip->i_flag |= IN_MODIFIED; vput(*vpp); } return (error); } ip = VTOI(*vpp); if (ip->i_mode) { dup_alloc: printf("mode = 0%o, inum = %ju, fs = %s\n", ip->i_mode, (uintmax_t)ip->i_number, fs->fs_fsmnt); panic("ffs_valloc: dup alloc"); } if (DIP(ip, i_blocks) && (fs->fs_flags & FS_UNCLEAN) == 0) { /* XXX */ printf("free inode %s/%lu had %ld blocks\n", fs->fs_fsmnt, (u_long)ino, (long)DIP(ip, i_blocks)); DIP_SET(ip, i_blocks, 0); } ip->i_flags = 0; DIP_SET(ip, i_flags, 0); /* * Set up a new generation number for this inode. 
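/*
 * Aside: the loop just below keeps i_gen nonzero across both the initial
 * assignment and increment wraparound; a zero generation would weaken
 * stale NFS file handle detection.  Extracted as a standalone sketch:
 */
#include <stdint.h>
#include <stdlib.h>	/* arc4random() */

static uint32_t
next_gen(uint32_t gen)
{
	while (gen == 0 || ++gen == 0)
		gen = arc4random();
	return (gen);
}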
*/ while (ip->i_gen == 0 || ++ip->i_gen == 0) ip->i_gen = arc4random(); DIP_SET(ip, i_gen, ip->i_gen); if (fs->fs_magic == FS_UFS2_MAGIC) { vfs_timestamp(&ts); ip->i_din2->di_birthtime = ts.tv_sec; ip->i_din2->di_birthnsec = ts.tv_nsec; } ufs_prepare_reclaim(*vpp); ip->i_flag = 0; (*vpp)->v_vflag = 0; (*vpp)->v_type = VNON; if (fs->fs_magic == FS_UFS2_MAGIC) { (*vpp)->v_op = &ffs_vnodeops2; ip->i_flag |= IN_UFS2; } else { (*vpp)->v_op = &ffs_vnodeops1; } return (0); noinodes: if (reclaimed == 0) { reclaimed = 1; softdep_request_cleanup(fs, pvp, cred, FLUSH_INODES_WAIT); goto retry; } UFS_UNLOCK(ump); if (ppsratecheck(&lastfail, &curfail, 1)) { ffs_fserr(fs, pip->i_number, "out of inodes"); uprintf("\n%s: create/symlink failed, no inodes free\n", fs->fs_fsmnt); } return (ENOSPC); } /* * Find a cylinder group to place a directory. * * The policy implemented by this algorithm is to allocate a * directory inode in the same cylinder group as its parent * directory, but also to reserve space for its files inodes * and data. Restrict the number of directories which may be * allocated one after another in the same cylinder group * without intervening allocation of files. * * If we allocate a first level directory then force allocation * in another cylinder group. */ static ino_t ffs_dirpref(pip) struct inode *pip; { struct fs *fs; int cg, prefcg, dirsize, cgsize; u_int avgifree, avgbfree, avgndir, curdirsize; u_int minifree, minbfree, maxndir; u_int mincg, minndir; u_int maxcontigdirs; mtx_assert(UFS_MTX(ITOUMP(pip)), MA_OWNED); fs = ITOFS(pip); avgifree = fs->fs_cstotal.cs_nifree / fs->fs_ncg; avgbfree = fs->fs_cstotal.cs_nbfree / fs->fs_ncg; avgndir = fs->fs_cstotal.cs_ndir / fs->fs_ncg; /* * Force allocation in another cg if creating a first level dir. */ ASSERT_VOP_LOCKED(ITOV(pip), "ffs_dirpref"); if (ITOV(pip)->v_vflag & VV_ROOT) { prefcg = arc4random() % fs->fs_ncg; mincg = prefcg; minndir = fs->fs_ipg; for (cg = prefcg; cg < fs->fs_ncg; cg++) if (fs->fs_cs(fs, cg).cs_ndir < minndir && fs->fs_cs(fs, cg).cs_nifree >= avgifree && fs->fs_cs(fs, cg).cs_nbfree >= avgbfree) { mincg = cg; minndir = fs->fs_cs(fs, cg).cs_ndir; } for (cg = 0; cg < prefcg; cg++) if (fs->fs_cs(fs, cg).cs_ndir < minndir && fs->fs_cs(fs, cg).cs_nifree >= avgifree && fs->fs_cs(fs, cg).cs_nbfree >= avgbfree) { mincg = cg; minndir = fs->fs_cs(fs, cg).cs_ndir; } return ((ino_t)(fs->fs_ipg * mincg)); } /* * Count various limits which used for * optimal allocation of a directory inode. */ maxndir = min(avgndir + fs->fs_ipg / 16, fs->fs_ipg); minifree = avgifree - avgifree / 4; if (minifree < 1) minifree = 1; minbfree = avgbfree - avgbfree / 4; if (minbfree < 1) minbfree = 1; cgsize = fs->fs_fsize * fs->fs_fpg; dirsize = fs->fs_avgfilesize * fs->fs_avgfpdir; curdirsize = avgndir ? (cgsize - avgbfree * fs->fs_bsize) / avgndir : 0; if (dirsize < curdirsize) dirsize = curdirsize; if (dirsize <= 0) maxcontigdirs = 0; /* dirsize overflowed */ else maxcontigdirs = min((avgbfree * fs->fs_bsize) / dirsize, 255); if (fs->fs_avgfpdir > 0) maxcontigdirs = min(maxcontigdirs, fs->fs_ipg / fs->fs_avgfpdir); if (maxcontigdirs == 0) maxcontigdirs = 1; /* * Limit number of dirs in one cg and reserve space for * regular files, but only if we have no deficit in * inodes or space. * * We are trying to find a suitable cylinder group nearby * our preferred cylinder group to place a new directory. * We scan from our preferred cylinder group forward looking * for a cylinder group that meets our criterion. 
If we get * to the final cylinder group and do not find anything, * we start scanning forwards from the beginning of the * filesystem. While it might seem sensible to start scanning * backwards or even to alternate looking forward and backward, * this approach fails badly when the filesystem is nearly full. * Specifically, we first search all the areas that have no space * and finally try the one preceding that. We repeat this on * every request and in the case of the final block end up * searching the entire filesystem. By jumping to the front * of the filesystem, our future forward searches always look * in new cylinder groups so finds every possible block after * one pass over the filesystem. */ prefcg = ino_to_cg(fs, pip->i_number); for (cg = prefcg; cg < fs->fs_ncg; cg++) if (fs->fs_cs(fs, cg).cs_ndir < maxndir && fs->fs_cs(fs, cg).cs_nifree >= minifree && fs->fs_cs(fs, cg).cs_nbfree >= minbfree) { if (fs->fs_contigdirs[cg] < maxcontigdirs) return ((ino_t)(fs->fs_ipg * cg)); } for (cg = 0; cg < prefcg; cg++) if (fs->fs_cs(fs, cg).cs_ndir < maxndir && fs->fs_cs(fs, cg).cs_nifree >= minifree && fs->fs_cs(fs, cg).cs_nbfree >= minbfree) { if (fs->fs_contigdirs[cg] < maxcontigdirs) return ((ino_t)(fs->fs_ipg * cg)); } /* * This is a backstop when we have deficit in space. */ for (cg = prefcg; cg < fs->fs_ncg; cg++) if (fs->fs_cs(fs, cg).cs_nifree >= avgifree) return ((ino_t)(fs->fs_ipg * cg)); for (cg = 0; cg < prefcg; cg++) if (fs->fs_cs(fs, cg).cs_nifree >= avgifree) break; return ((ino_t)(fs->fs_ipg * cg)); } /* * Select the desired position for the next block in a file. The file is * logically divided into sections. The first section is composed of the * direct blocks and the next fs_maxbpg blocks. Each additional section * contains fs_maxbpg blocks. * * If no blocks have been allocated in the first section, the policy is to * request a block in the same cylinder group as the inode that describes * the file. The first indirect is allocated immediately following the last * direct block and the data blocks for the first indirect immediately * follow it. * * If no blocks have been allocated in any other section, the indirect * block(s) are allocated in the same cylinder group as its inode in an * area reserved immediately following the inode blocks. The policy for * the data blocks is to place them in a cylinder group with a greater than * average number of free blocks. An appropriate cylinder group is found * by using a rotor that sweeps the cylinder groups. When a new group of * blocks is needed, the sweep begins in the cylinder group following the * cylinder group from which the previous allocation was made. The sweep * continues until a cylinder group with greater than the average number * of free blocks is found. If the allocation is for the first block in an * indirect block or the previous block is a hole, then the information on * the previous allocation is unavailable; here a best guess is made based * on the logical block number being allocated. * * If a section is already partially allocated, the policy is to * allocate blocks contiguously within the section if possible. 
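/*
 * Aside: callers encode their request in (lbn, indx) for the preference
 * routines that follow: indx >= 0 indexes the previous block pointers in
 * bap, while a negative indx asks for an indirect block (-1 single,
 * -2 double, -3 triple).  Illustrative calls (argument shapes assumed):
 */
pref = ffs_blkpref_ufs2(ip, lbn, -1, NULL);	/* placing a single indirect */
pref = ffs_blkpref_ufs2(ip, lbn, indx, bap);	/* data block; bap[indx - 1]
						 * is the preceding pointer */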
*/ ufs2_daddr_t ffs_blkpref_ufs1(ip, lbn, indx, bap) struct inode *ip; ufs_lbn_t lbn; int indx; ufs1_daddr_t *bap; { struct fs *fs; u_int cg, inocg; u_int avgbfree, startcg; ufs2_daddr_t pref; KASSERT(indx <= 0 || bap != NULL, ("need non-NULL bap")); mtx_assert(UFS_MTX(ITOUMP(ip)), MA_OWNED); fs = ITOFS(ip); /* * Allocation of indirect blocks is indicated by passing negative * values in indx: -1 for single indirect, -2 for double indirect, * -3 for triple indirect. As noted below, we attempt to allocate * the first indirect inline with the file data. For all later * indirect blocks, the data is often allocated in other cylinder * groups. However to speed random file access and to speed up * fsck, the filesystem reserves the first fs_metaspace blocks * (typically half of fs_minfree) of the data area of each cylinder * group to hold these later indirect blocks. */ inocg = ino_to_cg(fs, ip->i_number); if (indx < 0) { /* * Our preference for indirect blocks is the zone at the * beginning of the inode's cylinder group data area that * we try to reserve for indirect blocks. */ pref = cgmeta(fs, inocg); /* * If we are allocating the first indirect block, try to * place it immediately following the last direct block. */ if (indx == -1 && lbn < UFS_NDADDR + NINDIR(fs) && ip->i_din1->di_db[UFS_NDADDR - 1] != 0) pref = ip->i_din1->di_db[UFS_NDADDR - 1] + fs->fs_frag; return (pref); } /* * If we are allocating the first data block in the first indirect * block and the indirect has been allocated in the data block area, * try to place it immediately following the indirect block. */ if (lbn == UFS_NDADDR) { pref = ip->i_din1->di_ib[0]; if (pref != 0 && pref >= cgdata(fs, inocg) && pref < cgbase(fs, inocg + 1)) return (pref + fs->fs_frag); } /* * If we are at the beginning of a file, or we have already allocated * the maximum number of blocks per cylinder group, or we do not * have a block allocated immediately preceding us, then we need * to decide where to start allocating new blocks. */ if (indx % fs->fs_maxbpg == 0 || bap[indx - 1] == 0) { /* * If we are allocating a directory data block, we want * to place it in the metadata area. */ if ((ip->i_mode & IFMT) == IFDIR) return (cgmeta(fs, inocg)); /* * Until we fill all the direct and all the first indirect's * blocks, we try to allocate in the data area of the inode's * cylinder group. */ if (lbn < UFS_NDADDR + NINDIR(fs)) return (cgdata(fs, inocg)); /* * Find a cylinder with greater than average number of * unused data blocks. */ if (indx == 0 || bap[indx - 1] == 0) startcg = inocg + lbn / fs->fs_maxbpg; else startcg = dtog(fs, bap[indx - 1]) + 1; startcg %= fs->fs_ncg; avgbfree = fs->fs_cstotal.cs_nbfree / fs->fs_ncg; for (cg = startcg; cg < fs->fs_ncg; cg++) if (fs->fs_cs(fs, cg).cs_nbfree >= avgbfree) { fs->fs_cgrotor = cg; return (cgdata(fs, cg)); } for (cg = 0; cg <= startcg; cg++) if (fs->fs_cs(fs, cg).cs_nbfree >= avgbfree) { fs->fs_cgrotor = cg; return (cgdata(fs, cg)); } return (0); } /* * Otherwise, we just always try to lay things out contiguously. 
*/ return (bap[indx - 1] + fs->fs_frag); } /* * Same as above, but for UFS2 */ ufs2_daddr_t ffs_blkpref_ufs2(ip, lbn, indx, bap) struct inode *ip; ufs_lbn_t lbn; int indx; ufs2_daddr_t *bap; { struct fs *fs; u_int cg, inocg; u_int avgbfree, startcg; ufs2_daddr_t pref; KASSERT(indx <= 0 || bap != NULL, ("need non-NULL bap")); mtx_assert(UFS_MTX(ITOUMP(ip)), MA_OWNED); fs = ITOFS(ip); /* * Allocation of indirect blocks is indicated by passing negative * values in indx: -1 for single indirect, -2 for double indirect, * -3 for triple indirect. As noted below, we attempt to allocate * the first indirect inline with the file data. For all later * indirect blocks, the data is often allocated in other cylinder * groups. However to speed random file access and to speed up * fsck, the filesystem reserves the first fs_metaspace blocks * (typically half of fs_minfree) of the data area of each cylinder * group to hold these later indirect blocks. */ inocg = ino_to_cg(fs, ip->i_number); if (indx < 0) { /* * Our preference for indirect blocks is the zone at the * beginning of the inode's cylinder group data area that * we try to reserve for indirect blocks. */ pref = cgmeta(fs, inocg); /* * If we are allocating the first indirect block, try to * place it immediately following the last direct block. */ if (indx == -1 && lbn < UFS_NDADDR + NINDIR(fs) && ip->i_din2->di_db[UFS_NDADDR - 1] != 0) pref = ip->i_din2->di_db[UFS_NDADDR - 1] + fs->fs_frag; return (pref); } /* * If we are allocating the first data block in the first indirect * block and the indirect has been allocated in the data block area, * try to place it immediately following the indirect block. */ if (lbn == UFS_NDADDR) { pref = ip->i_din2->di_ib[0]; if (pref != 0 && pref >= cgdata(fs, inocg) && pref < cgbase(fs, inocg + 1)) return (pref + fs->fs_frag); } /* * If we are at the beginning of a file, or we have already allocated * the maximum number of blocks per cylinder group, or we do not * have a block allocated immediately preceding us, then we need * to decide where to start allocating new blocks. */ if (indx % fs->fs_maxbpg == 0 || bap[indx - 1] == 0) { /* * If we are allocating a directory data block, we want * to place it in the metadata area. */ if ((ip->i_mode & IFMT) == IFDIR) return (cgmeta(fs, inocg)); /* * Until we fill all the direct and all the first indirect's * blocks, we try to allocate in the data area of the inode's * cylinder group. */ if (lbn < UFS_NDADDR + NINDIR(fs)) return (cgdata(fs, inocg)); /* * Find a cylinder with greater than average number of * unused data blocks. */ if (indx == 0 || bap[indx - 1] == 0) startcg = inocg + lbn / fs->fs_maxbpg; else startcg = dtog(fs, bap[indx - 1]) + 1; startcg %= fs->fs_ncg; avgbfree = fs->fs_cstotal.cs_nbfree / fs->fs_ncg; for (cg = startcg; cg < fs->fs_ncg; cg++) if (fs->fs_cs(fs, cg).cs_nbfree >= avgbfree) { fs->fs_cgrotor = cg; return (cgdata(fs, cg)); } for (cg = 0; cg <= startcg; cg++) if (fs->fs_cs(fs, cg).cs_nbfree >= avgbfree) { fs->fs_cgrotor = cg; return (cgdata(fs, cg)); } return (0); } /* * Otherwise, we just always try to lay things out contiguously. */ return (bap[indx - 1] + fs->fs_frag); } /* * Implement the cylinder overflow algorithm. * * The policy implemented by this algorithm is: * 1) allocate the block in its requested cylinder group. * 2) quadradically rehash on the cylinder group number. * 3) brute force search for a free block. * * Must be called with the UFS lock held. Will release the lock on success * and return with it held on failure. 
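/*
 * Aside: a compact sketch of the three phases ffs_hashalloc() runs below.
 * The "quadratic" rehash adds offsets 1, 2, 4, 8, ... to the preferred
 * group (landing on icg+1, icg+3, icg+7, ...), and a linear sweep covers
 * the rest (hypothetical try_cg callback):
 */
static ufs2_daddr_t
hashalloc_sketch(u_int icg, u_int ncg, ufs2_daddr_t (*try_cg)(u_int))
{
	ufs2_daddr_t result;
	u_int cg, i;

	if ((result = try_cg(icg)) != 0)	/* 1: preferred group */
		return (result);
	for (cg = icg, i = 1; i < ncg; i *= 2) {
		cg += i;			/* 2: quadratic rehash */
		if (cg >= ncg)
			cg -= ncg;
		if ((result = try_cg(cg)) != 0)
			return (result);
	}
	cg = (icg + 2) % ncg;
	for (i = 2; i < ncg; i++) {		/* 3: brute-force sweep */
		if ((result = try_cg(cg)) != 0)
			return (result);
		if (++cg == ncg)
			cg = 0;
	}
	return (0);
}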
*/ /*VARARGS5*/ static ufs2_daddr_t ffs_hashalloc(ip, cg, pref, size, rsize, allocator) struct inode *ip; u_int cg; ufs2_daddr_t pref; int size; /* Search size for data blocks, mode for inodes */ int rsize; /* Real allocated size. */ allocfcn_t *allocator; { struct fs *fs; ufs2_daddr_t result; u_int i, icg = cg; mtx_assert(UFS_MTX(ITOUMP(ip)), MA_OWNED); #ifdef INVARIANTS if (ITOV(ip)->v_mount->mnt_kern_flag & MNTK_SUSPENDED) panic("ffs_hashalloc: allocation on suspended filesystem"); #endif fs = ITOFS(ip); /* * 1: preferred cylinder group */ result = (*allocator)(ip, cg, pref, size, rsize); if (result) return (result); /* * 2: quadratic rehash */ for (i = 1; i < fs->fs_ncg; i *= 2) { cg += i; if (cg >= fs->fs_ncg) cg -= fs->fs_ncg; result = (*allocator)(ip, cg, 0, size, rsize); if (result) return (result); } /* * 3: brute force search * Note that we start at i == 2, since 0 was checked initially, * and 1 is always checked in the quadratic rehash. */ cg = (icg + 2) % fs->fs_ncg; for (i = 2; i < fs->fs_ncg; i++) { result = (*allocator)(ip, cg, 0, size, rsize); if (result) return (result); cg++; if (cg == fs->fs_ncg) cg = 0; } return (0); } /* * Determine whether a fragment can be extended. * * Check to see if the necessary fragments are available, and * if they are, allocate them. */ static ufs2_daddr_t ffs_fragextend(ip, cg, bprev, osize, nsize) struct inode *ip; u_int cg; ufs2_daddr_t bprev; int osize, nsize; { struct fs *fs; struct cg *cgp; struct buf *bp; struct ufsmount *ump; int nffree; long bno; int frags, bbase; int i, error; u_int8_t *blksfree; ump = ITOUMP(ip); fs = ump->um_fs; if (fs->fs_cs(fs, cg).cs_nffree < numfrags(fs, nsize - osize)) return (0); frags = numfrags(fs, nsize); bbase = fragnum(fs, bprev); if (bbase > fragnum(fs, (bprev + frags - 1))) { /* cannot extend across a block boundary */ return (0); } UFS_UNLOCK(ump); if ((error = ffs_getcg(fs, ump->um_devvp, cg, &bp, &cgp)) != 0) goto fail; bno = dtogd(fs, bprev); blksfree = cg_blksfree(cgp); for (i = numfrags(fs, osize); i < frags; i++) if (isclr(blksfree, bno + i)) goto fail; /* * the current fragment can be extended * deduct the count on fragment being extended into * increase the count on the remaining fragment (if any) * allocate the extended piece */ for (i = frags; i < fs->fs_frag - bbase; i++) if (isclr(blksfree, bno + i)) break; cgp->cg_frsum[i - numfrags(fs, osize)]--; if (i != frags) cgp->cg_frsum[i - frags]++; for (i = numfrags(fs, osize), nffree = 0; i < frags; i++) { clrbit(blksfree, bno + i); cgp->cg_cs.cs_nffree--; nffree++; } UFS_LOCK(ump); fs->fs_cstotal.cs_nffree -= nffree; fs->fs_cs(fs, cg).cs_nffree -= nffree; fs->fs_fmod = 1; ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); if (DOINGSOFTDEP(ITOV(ip))) softdep_setup_blkmapdep(bp, UFSTOVFS(ump), bprev, frags, numfrags(fs, osize)); bdwrite(bp); return (bprev); fail: brelse(bp); UFS_LOCK(ump); return (0); } /* * Determine whether a block can be allocated. * * Check to see if a block of the appropriate size is available, * and if it is, allocate it. 
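/*
 * Aside: cg_frsum[k] counts free runs of exactly k fragments in the
 * group, which is how ffs_fragextend() above (and ffs_alloccg() below)
 * can find a suitable run without rescanning the bitmap.  Carving
 * `frags' fragments out of a free run of `avail' fragments updates the
 * table like this:
 */
cgp->cg_frsum[avail]--;			/* the run of `avail' frags is gone */
if (avail != frags)
	cgp->cg_frsum[avail - frags]++;	/* remainder is a shorter free run */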
*/ static ufs2_daddr_t ffs_alloccg(ip, cg, bpref, size, rsize) struct inode *ip; u_int cg; ufs2_daddr_t bpref; int size; int rsize; { struct fs *fs; struct cg *cgp; struct buf *bp; struct ufsmount *ump; ufs1_daddr_t bno; ufs2_daddr_t blkno; int i, allocsiz, error, frags; u_int8_t *blksfree; ump = ITOUMP(ip); fs = ump->um_fs; if (fs->fs_cs(fs, cg).cs_nbfree == 0 && size == fs->fs_bsize) return (0); UFS_UNLOCK(ump); if ((error = ffs_getcg(fs, ump->um_devvp, cg, &bp, &cgp)) != 0 || (cgp->cg_cs.cs_nbfree == 0 && size == fs->fs_bsize)) goto fail; if (size == fs->fs_bsize) { UFS_LOCK(ump); blkno = ffs_alloccgblk(ip, bp, bpref, rsize); ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); bdwrite(bp); return (blkno); } /* * check to see if any fragments are already available * allocsiz is the size which will be allocated, hacking * it down to a smaller size if necessary */ blksfree = cg_blksfree(cgp); frags = numfrags(fs, size); for (allocsiz = frags; allocsiz < fs->fs_frag; allocsiz++) if (cgp->cg_frsum[allocsiz] != 0) break; if (allocsiz == fs->fs_frag) { /* * no fragments were available, so a block will be * allocated, and hacked up */ if (cgp->cg_cs.cs_nbfree == 0) goto fail; UFS_LOCK(ump); blkno = ffs_alloccgblk(ip, bp, bpref, rsize); ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); bdwrite(bp); return (blkno); } KASSERT(size == rsize, ("ffs_alloccg: size(%d) != rsize(%d)", size, rsize)); bno = ffs_mapsearch(fs, cgp, bpref, allocsiz); if (bno < 0) goto fail; for (i = 0; i < frags; i++) clrbit(blksfree, bno + i); cgp->cg_cs.cs_nffree -= frags; cgp->cg_frsum[allocsiz]--; if (frags != allocsiz) cgp->cg_frsum[allocsiz - frags]++; UFS_LOCK(ump); fs->fs_cstotal.cs_nffree -= frags; fs->fs_cs(fs, cg).cs_nffree -= frags; fs->fs_fmod = 1; blkno = cgbase(fs, cg) + bno; ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); if (DOINGSOFTDEP(ITOV(ip))) softdep_setup_blkmapdep(bp, UFSTOVFS(ump), blkno, frags, 0); bdwrite(bp); return (blkno); fail: brelse(bp); UFS_LOCK(ump); return (0); } /* * Allocate a block in a cylinder group. * * This algorithm implements the following policy: * 1) allocate the requested block. * 2) allocate a rotationally optimal block in the same cylinder. * 3) allocate the next available block on the block rotor for the * specified cylinder group. * Note that this routine only allocates fs_bsize blocks; these * blocks may be fragmented by the routine that allocates them. */ static ufs2_daddr_t ffs_alloccgblk(ip, bp, bpref, size) struct inode *ip; struct buf *bp; ufs2_daddr_t bpref; int size; { struct fs *fs; struct cg *cgp; struct ufsmount *ump; ufs1_daddr_t bno; ufs2_daddr_t blkno; u_int8_t *blksfree; int i, cgbpref; ump = ITOUMP(ip); fs = ump->um_fs; mtx_assert(UFS_MTX(ump), MA_OWNED); cgp = (struct cg *)bp->b_data; blksfree = cg_blksfree(cgp); if (bpref == 0) { bpref = cgbase(fs, cgp->cg_cgx) + cgp->cg_rotor + fs->fs_frag; } else if ((cgbpref = dtog(fs, bpref)) != cgp->cg_cgx) { /* map bpref to correct zone in this cg */ if (bpref < cgdata(fs, cgbpref)) bpref = cgmeta(fs, cgp->cg_cgx); else bpref = cgdata(fs, cgp->cg_cgx); } /* * if the requested block is available, use it */ bno = dtogd(fs, blknum(fs, bpref)); if (ffs_isblock(fs, blksfree, fragstoblks(fs, bno))) goto gotit; /* * Take the next available block in this cylinder group. 
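 *
 * The bpref arithmetic above works because fs_frag is a power of
 * two: rounding a fragment address to its block and converting
 * fragments to blocks are mask and shift operations.  A userland
 * sketch, assuming a hypothetical 8 fragments per block (these
 * mirror the blknum/fragnum/fragstoblks macros but are not the
 * kernel definitions):
 *
 *      enum { FRAG = 8, FRAGSHIFT = 3 };
 *
 *      static long sk_blknum(long fsb)      { return (fsb & ~(long)(FRAG - 1)); }
 *      static long sk_fragnum(long fsb)     { return (fsb & (FRAG - 1)); }
 *      static long sk_fragstoblks(long fsb) { return (fsb >> FRAGSHIFT); }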
*/ bno = ffs_mapsearch(fs, cgp, bpref, (int)fs->fs_frag); if (bno < 0) return (0); /* Update cg_rotor only if allocated from the data zone */ if (bno >= dtogd(fs, cgdata(fs, cgp->cg_cgx))) cgp->cg_rotor = bno; gotit: blkno = fragstoblks(fs, bno); ffs_clrblock(fs, blksfree, (long)blkno); ffs_clusteracct(fs, cgp, blkno, -1); cgp->cg_cs.cs_nbfree--; fs->fs_cstotal.cs_nbfree--; fs->fs_cs(fs, cgp->cg_cgx).cs_nbfree--; fs->fs_fmod = 1; blkno = cgbase(fs, cgp->cg_cgx) + bno; /* * If the caller didn't want the whole block, free the frags here. */ size = numfrags(fs, size); if (size != fs->fs_frag) { bno = dtogd(fs, blkno); for (i = size; i < fs->fs_frag; i++) setbit(blksfree, bno + i); i = fs->fs_frag - size; cgp->cg_cs.cs_nffree += i; fs->fs_cstotal.cs_nffree += i; fs->fs_cs(fs, cgp->cg_cgx).cs_nffree += i; fs->fs_fmod = 1; cgp->cg_frsum[i]++; } /* XXX Fixme. */ UFS_UNLOCK(ump); if (DOINGSOFTDEP(ITOV(ip))) softdep_setup_blkmapdep(bp, UFSTOVFS(ump), blkno, size, 0); UFS_LOCK(ump); return (blkno); } /* * Determine whether a cluster can be allocated. * * We do not currently check for optimal rotational layout if there * are multiple choices in the same cylinder group. Instead we just * take the first one that we find following bpref. */ static ufs2_daddr_t ffs_clusteralloc(ip, cg, bpref, len) struct inode *ip; u_int cg; ufs2_daddr_t bpref; int len; { struct fs *fs; struct cg *cgp; struct buf *bp; struct ufsmount *ump; int i, run, bit, map, got, error; ufs2_daddr_t bno; u_char *mapp; int32_t *lp; u_int8_t *blksfree; ump = ITOUMP(ip); fs = ump->um_fs; if (fs->fs_maxcluster[cg] < len) return (0); UFS_UNLOCK(ump); if ((error = ffs_getcg(fs, ump->um_devvp, cg, &bp, &cgp)) != 0) { UFS_LOCK(ump); return (0); } /* * Check to see if a cluster of the needed size (or bigger) is * available in this cylinder group. */ lp = &cg_clustersum(cgp)[len]; for (i = len; i <= fs->fs_contigsumsize; i++) if (*lp++ > 0) break; if (i > fs->fs_contigsumsize) { /* * This is the first time looking for a cluster in this * cylinder group. Update the cluster summary information * to reflect the true maximum-sized cluster so that * future cluster allocation requests can avoid reading * the cylinder group map only to find no clusters. */ lp = &cg_clustersum(cgp)[len - 1]; for (i = len - 1; i > 0; i--) if (*lp-- > 0) break; UFS_LOCK(ump); fs->fs_maxcluster[cg] = i; brelse(bp); return (0); } /* * Search the cluster map to find a big enough cluster. * We take the first one that we find, even if it is larger * than we need, as we prefer to get one close to the previous * block allocation. We do not search before the current * preference point as we do not want to allocate a block * that is located before the previous one (as we will * then have to wait for another pass of the elevator * algorithm before it will be read). We prefer to fail and * be recalled to try an allocation in the next cylinder group. */ if (dtog(fs, bpref) != cg) bpref = cgdata(fs, cg); else bpref = blknum(fs, bpref); bpref = fragstoblks(fs, dtogd(fs, bpref)); mapp = &cg_clustersfree(cgp)[bpref / NBBY]; map = *mapp++; bit = 1 << (bpref % NBBY); for (run = 0, got = bpref; got < cgp->cg_nclusterblks; got++) { if ((map & bit) == 0) { run = 0; } else { run++; if (run == len) break; } if ((got & (NBBY - 1)) != (NBBY - 1)) { bit <<= 1; } else { map = *mapp++; bit = 1; } } if (got >= cgp->cg_nclusterblks) { UFS_LOCK(ump); brelse(bp); return (0); } /* * Allocate the cluster that we have found.
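 *
 * The bit-run scan just above, reduced to a standalone form: walk
 * the bitmap one bit at a time, counting consecutive set bits, and
 * report where a run of the requested length starts (find_run is a
 * hypothetical name, not part of ffs_alloc.c):
 *
 *      static int
 *      find_run(const unsigned char *map, int nbits, int start, int len)
 *      {
 *              int bit, run;
 *
 *              for (run = 0, bit = start; bit < nbits; bit++) {
 *                      if (map[bit / 8] & (1 << (bit % 8)))
 *                              run++;
 *                      else
 *                              run = 0;
 *                      if (run == len)
 *                              return (bit - len + 1);
 *              }
 *              return (-1);
 *      }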
*/ blksfree = cg_blksfree(cgp); for (i = 1; i <= len; i++) if (!ffs_isblock(fs, blksfree, got - run + i)) panic("ffs_clusteralloc: map mismatch"); bno = cgbase(fs, cg) + blkstofrags(fs, got - run + 1); if (dtog(fs, bno) != cg) panic("ffs_clusteralloc: allocated out of group"); len = blkstofrags(fs, len); UFS_LOCK(ump); for (i = 0; i < len; i += fs->fs_frag) if (ffs_alloccgblk(ip, bp, bno + i, fs->fs_bsize) != bno + i) panic("ffs_clusteralloc: lost block"); ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); bdwrite(bp); return (bno); } static inline struct buf * getinobuf(struct inode *ip, u_int cg, u_int32_t cginoblk, int gbflags) { struct fs *fs; fs = ITOFS(ip); return (getblk(ITODEVVP(ip), fsbtodb(fs, ino_to_fsba(fs, cg * fs->fs_ipg + cginoblk)), (int)fs->fs_bsize, 0, 0, gbflags)); } /* * Determine whether an inode can be allocated. * * Check to see if an inode is available, and if it is, * allocate it using the following policy: * 1) allocate the requested inode. * 2) allocate the next available inode after the requested * inode in the specified cylinder group. */ static ufs2_daddr_t ffs_nodealloccg(ip, cg, ipref, mode, unused) struct inode *ip; u_int cg; ufs2_daddr_t ipref; int mode; int unused; { struct fs *fs; struct cg *cgp; struct buf *bp, *ibp; struct ufsmount *ump; u_int8_t *inosused, *loc; struct ufs2_dinode *dp2; int error, start, len, i; u_int32_t old_initediblk; ump = ITOUMP(ip); fs = ump->um_fs; check_nifree: if (fs->fs_cs(fs, cg).cs_nifree == 0) return (0); UFS_UNLOCK(ump); if ((error = ffs_getcg(fs, ump->um_devvp, cg, &bp, &cgp)) != 0) { UFS_LOCK(ump); return (0); } restart: if (cgp->cg_cs.cs_nifree == 0) { brelse(bp); UFS_LOCK(ump); return (0); } inosused = cg_inosused(cgp); if (ipref) { ipref %= fs->fs_ipg; if (isclr(inosused, ipref)) goto gotit; } start = cgp->cg_irotor / NBBY; len = howmany(fs->fs_ipg - cgp->cg_irotor, NBBY); loc = memcchr(&inosused[start], 0xff, len); if (loc == NULL) { len = start + 1; start = 0; loc = memcchr(&inosused[start], 0xff, len); if (loc == NULL) { printf("cg = %d, irotor = %ld, fs = %s\n", cg, (long)cgp->cg_irotor, fs->fs_fsmnt); panic("ffs_nodealloccg: map corrupted"); /* NOTREACHED */ } } ipref = (loc - inosused) * NBBY + ffs(~*loc) - 1; gotit: /* * Check to see if we need to initialize more inodes. */ if (fs->fs_magic == FS_UFS2_MAGIC && ipref + INOPB(fs) > cgp->cg_initediblk && cgp->cg_initediblk < cgp->cg_niblk) { old_initediblk = cgp->cg_initediblk; /* * Free the cylinder group lock before writing the * initialized inode block. Entering the * babarrierwrite() with the cylinder group lock * causes lock order violation between the lock and * snaplk. * * Another thread can decide to initialize the same * inode block, but whichever thread first gets the * cylinder group lock after writing the newly * allocated inode block will update it and the other * will realize that it has lost and leave the * cylinder group unchanged. */ ibp = getinobuf(ip, cg, old_initediblk, GB_LOCK_NOWAIT); brelse(bp); if (ibp == NULL) { /* * The inode block buffer is already owned by * another thread, which must initialize it. * Wait on the buffer to allow another thread * to finish the updates, with dropped cg * buffer lock, then retry. 
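 *
 * For reference, the irotor search a few lines up finds the first
 * byte of the inode map that is not 0xff and then the first clear
 * bit within it; ffs(~byte) yields the 1-based index of that bit.
 * A portable userland sketch without memcchr (first_free_inode is
 * a hypothetical name):
 *
 *      #include <strings.h>
 *
 *      static int
 *      first_free_inode(const unsigned char *inosused, int nbytes)
 *      {
 *              int i;
 *
 *              for (i = 0; i < nbytes; i++)
 *                      if (inosused[i] != 0xff)
 *                              return (i * 8 +
 *                                  ffs(~inosused[i] & 0xff) - 1);
 *              return (-1);
 *      }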
*/ ibp = getinobuf(ip, cg, old_initediblk, 0); brelse(ibp); UFS_LOCK(ump); goto check_nifree; } bzero(ibp->b_data, (int)fs->fs_bsize); dp2 = (struct ufs2_dinode *)(ibp->b_data); for (i = 0; i < INOPB(fs); i++) { while (dp2->di_gen == 0) dp2->di_gen = arc4random(); dp2++; } /* * Rather than adding a soft updates dependency to ensure * that the new inode block is written before it is claimed * by the cylinder group map, we just do a barrier write * here. The barrier write will ensure that the inode block * gets written before the updated cylinder group map can be * written. The barrier write should only slow down bulk * loading of newly created filesystems. */ babarrierwrite(ibp); /* * After the inode block is written, try to update the * cg initediblk pointer. If another thread beat us * to it, then leave it unchanged as the other thread * has already set it correctly. */ error = ffs_getcg(fs, ump->um_devvp, cg, &bp, &cgp); UFS_LOCK(ump); ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); if (error != 0) return (error); if (cgp->cg_initediblk == old_initediblk) cgp->cg_initediblk += INOPB(fs); goto restart; } cgp->cg_irotor = ipref; UFS_LOCK(ump); ACTIVECLEAR(fs, cg); setbit(inosused, ipref); cgp->cg_cs.cs_nifree--; fs->fs_cstotal.cs_nifree--; fs->fs_cs(fs, cg).cs_nifree--; fs->fs_fmod = 1; if ((mode & IFMT) == IFDIR) { cgp->cg_cs.cs_ndir++; fs->fs_cstotal.cs_ndir++; fs->fs_cs(fs, cg).cs_ndir++; } UFS_UNLOCK(ump); if (DOINGSOFTDEP(ITOV(ip))) softdep_setup_inomapdep(bp, ip, cg * fs->fs_ipg + ipref, mode); bdwrite(bp); return ((ino_t)(cg * fs->fs_ipg + ipref)); } /* * Free a block or fragment. * * The specified block or fragment is placed back in the * free map. If a fragment is deallocated, a possible * block reassembly is checked. */ static void ffs_blkfree_cg(ump, fs, devvp, bno, size, inum, dephd) struct ufsmount *ump; struct fs *fs; struct vnode *devvp; ufs2_daddr_t bno; long size; ino_t inum; struct workhead *dephd; { struct mount *mp; struct cg *cgp; struct buf *bp; ufs1_daddr_t fragno, cgbno; int i, blk, frags, bbase, error; u_int cg; u_int8_t *blksfree; struct cdev *dev; cg = dtog(fs, bno); if (devvp->v_type == VREG) { /* devvp is a snapshot */ MPASS(devvp->v_mount->mnt_data == ump); dev = ump->um_devvp->v_rdev; } else if (devvp->v_type == VCHR) { /* devvp is a normal disk device */ dev = devvp->v_rdev; ASSERT_VOP_LOCKED(devvp, "ffs_blkfree_cg"); } else return; #ifdef INVARIANTS if ((u_int)size > fs->fs_bsize || fragoff(fs, size) != 0 || fragnum(fs, bno) + numfrags(fs, size) > fs->fs_frag) { printf("dev=%s, bno = %jd, bsize = %ld, size = %ld, fs = %s\n", devtoname(dev), (intmax_t)bno, (long)fs->fs_bsize, size, fs->fs_fsmnt); panic("ffs_blkfree_cg: bad size"); } #endif if ((u_int)bno >= fs->fs_size) { printf("bad block %jd, ino %lu\n", (intmax_t)bno, (u_long)inum); ffs_fserr(fs, inum, "bad block"); return; } if ((error = ffs_getcg(fs, devvp, cg, &bp, &cgp)) != 0) return; cgbno = dtogd(fs, bno); blksfree = cg_blksfree(cgp); UFS_LOCK(ump); if (size == fs->fs_bsize) { fragno = fragstoblks(fs, cgbno); if (!ffs_isfreeblock(fs, blksfree, fragno)) { if (devvp->v_type == VREG) { UFS_UNLOCK(ump); /* devvp is a snapshot */ brelse(bp); return; } printf("dev = %s, block = %jd, fs = %s\n", devtoname(dev), (intmax_t)bno, fs->fs_fsmnt); panic("ffs_blkfree_cg: freeing free block"); } ffs_setblock(fs, blksfree, fragno); ffs_clusteracct(fs, cgp, fragno, 1); cgp->cg_cs.cs_nbfree++; fs->fs_cstotal.cs_nbfree++; fs->fs_cs(fs, cg).cs_nbfree++; } else { bbase = cgbno - fragnum(fs, cgbno); /* * decrement the counts 
associated with the old frags */ blk = blkmap(fs, blksfree, bbase); ffs_fragacct(fs, blk, cgp->cg_frsum, -1); /* * deallocate the fragment */ frags = numfrags(fs, size); for (i = 0; i < frags; i++) { if (isset(blksfree, cgbno + i)) { printf("dev = %s, block = %jd, fs = %s\n", devtoname(dev), (intmax_t)(bno + i), fs->fs_fsmnt); panic("ffs_blkfree_cg: freeing free frag"); } setbit(blksfree, cgbno + i); } cgp->cg_cs.cs_nffree += i; fs->fs_cstotal.cs_nffree += i; fs->fs_cs(fs, cg).cs_nffree += i; /* * add back in counts associated with the new frags */ blk = blkmap(fs, blksfree, bbase); ffs_fragacct(fs, blk, cgp->cg_frsum, 1); /* * if a complete block has been reassembled, account for it */ fragno = fragstoblks(fs, bbase); if (ffs_isblock(fs, blksfree, fragno)) { cgp->cg_cs.cs_nffree -= fs->fs_frag; fs->fs_cstotal.cs_nffree -= fs->fs_frag; fs->fs_cs(fs, cg).cs_nffree -= fs->fs_frag; ffs_clusteracct(fs, cgp, fragno, 1); cgp->cg_cs.cs_nbfree++; fs->fs_cstotal.cs_nbfree++; fs->fs_cs(fs, cg).cs_nbfree++; } } fs->fs_fmod = 1; ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); mp = UFSTOVFS(ump); if (MOUNTEDSOFTDEP(mp) && devvp->v_type == VCHR) softdep_setup_blkfree(UFSTOVFS(ump), bp, bno, numfrags(fs, size), dephd); bdwrite(bp); } struct ffs_blkfree_trim_params { struct task task; struct ufsmount *ump; struct vnode *devvp; ufs2_daddr_t bno; long size; ino_t inum; struct workhead *pdephd; struct workhead dephd; }; static void ffs_blkfree_trim_task(ctx, pending) void *ctx; int pending; { struct ffs_blkfree_trim_params *tp; tp = ctx; ffs_blkfree_cg(tp->ump, tp->ump->um_fs, tp->devvp, tp->bno, tp->size, tp->inum, tp->pdephd); vn_finished_secondary_write(UFSTOVFS(tp->ump)); atomic_add_int(&tp->ump->um_trim_inflight, -1); free(tp, M_TEMP); } static void ffs_blkfree_trim_completed(bip) struct bio *bip; { struct ffs_blkfree_trim_params *tp; tp = bip->bio_caller2; g_destroy_bio(bip); TASK_INIT(&tp->task, 0, ffs_blkfree_trim_task, tp); taskqueue_enqueue(tp->ump->um_trim_tq, &tp->task); } void ffs_blkfree(ump, fs, devvp, bno, size, inum, vtype, dephd) struct ufsmount *ump; struct fs *fs; struct vnode *devvp; ufs2_daddr_t bno; long size; ino_t inum; enum vtype vtype; struct workhead *dephd; { struct mount *mp; struct bio *bip; struct ffs_blkfree_trim_params *tp; /* * Check to see if a snapshot wants to claim the block. * Check that devvp is a normal disk device (not a snapshot), * that it has one or more snapshots associated with it, and * that one of the snapshots wants to claim the block. */ if (devvp->v_type == VCHR && (devvp->v_vflag & VV_COPYONWRITE) && ffs_snapblkfree(fs, devvp, bno, size, inum, vtype, dephd)) { return; } /* * Nothing to delay if TRIM is disabled, or the operation is * performed on the snapshot. */ if (!ump->um_candelete || devvp->v_type == VREG) { ffs_blkfree_cg(ump, fs, devvp, bno, size, inum, dephd); return; } /* * Postpone setting the free bit in the cg bitmap until the * BIO_DELETE is completed. Otherwise, due to disk queue * reordering, TRIM might be issued after we reuse the block * and write some new data into it.
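 *
 * The deferral pattern, reduced to its essentials: package the free
 * request in a context, hand it to the asynchronous delete, and run
 * the real free only from the completion callback.  A userland
 * sketch (struct trim_ctx and the names below are hypothetical):
 *
 *      #include <stdlib.h>
 *
 *      struct trim_ctx {
 *              long    bno;
 *              long    size;
 *              void    (*free_cg)(long bno, long size);
 *      };
 *
 *      static void
 *      delete_done(struct trim_ctx *tp)
 *      {
 *              tp->free_cg(tp->bno, tp->size);
 *              free(tp);
 *      }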
*/ atomic_add_int(&ump->um_trim_inflight, 1); tp = malloc(sizeof(struct ffs_blkfree_trim_params), M_TEMP, M_WAITOK); tp->ump = ump; tp->devvp = devvp; tp->bno = bno; tp->size = size; tp->inum = inum; if (dephd != NULL) { LIST_INIT(&tp->dephd); LIST_SWAP(dephd, &tp->dephd, worklist, wk_list); tp->pdephd = &tp->dephd; } else tp->pdephd = NULL; bip = g_alloc_bio(); bip->bio_cmd = BIO_DELETE; bip->bio_offset = dbtob(fsbtodb(fs, bno)); bip->bio_done = ffs_blkfree_trim_completed; bip->bio_length = size; bip->bio_caller2 = tp; mp = UFSTOVFS(ump); vn_start_secondary_write(NULL, &mp, 0); g_io_request(bip, (struct g_consumer *)devvp->v_bufobj.bo_private); } #ifdef INVARIANTS /* * Verify allocation of a block or fragment. Returns true if block or * fragment is allocated, false if it is free. */ static int ffs_checkblk(ip, bno, size) struct inode *ip; ufs2_daddr_t bno; long size; { struct fs *fs; struct cg *cgp; struct buf *bp; ufs1_daddr_t cgbno; int i, error, frags, free; u_int8_t *blksfree; fs = ITOFS(ip); if ((u_int)size > fs->fs_bsize || fragoff(fs, size) != 0) { printf("bsize = %ld, size = %ld, fs = %s\n", (long)fs->fs_bsize, size, fs->fs_fsmnt); panic("ffs_checkblk: bad size"); } if ((u_int)bno >= fs->fs_size) panic("ffs_checkblk: bad block %jd", (intmax_t)bno); error = ffs_getcg(fs, ITODEVVP(ip), dtog(fs, bno), &bp, &cgp); if (error) panic("ffs_checkblk: cylinder group read failed"); blksfree = cg_blksfree(cgp); cgbno = dtogd(fs, bno); if (size == fs->fs_bsize) { free = ffs_isblock(fs, blksfree, fragstoblks(fs, cgbno)); } else { frags = numfrags(fs, size); for (free = 0, i = 0; i < frags; i++) if (isset(blksfree, cgbno + i)) free++; if (free != 0 && free != frags) panic("ffs_checkblk: partially free fragment"); } brelse(bp); return (!free); } #endif /* INVARIANTS */ /* * Free an inode. */ int ffs_vfree(pvp, ino, mode) struct vnode *pvp; ino_t ino; int mode; { struct ufsmount *ump; struct inode *ip; if (DOINGSOFTDEP(pvp)) { softdep_freefile(pvp, ino, mode); return (0); } ip = VTOI(pvp); ump = VFSTOUFS(pvp->v_mount); return (ffs_freefile(ump, ump->um_fs, ump->um_devvp, ino, mode, NULL)); } /* * Do the actual free operation. * The specified inode is placed back in the free map. 
*/ int ffs_freefile(ump, fs, devvp, ino, mode, wkhd) struct ufsmount *ump; struct fs *fs; struct vnode *devvp; ino_t ino; int mode; struct workhead *wkhd; { struct cg *cgp; struct buf *bp; ufs2_daddr_t cgbno; int error; u_int cg; u_int8_t *inosused; struct cdev *dev; cg = ino_to_cg(fs, ino); if (devvp->v_type == VREG) { /* devvp is a snapshot */ MPASS(devvp->v_mount->mnt_data == ump); dev = ump->um_devvp->v_rdev; cgbno = fragstoblks(fs, cgtod(fs, cg)); } else if (devvp->v_type == VCHR) { /* devvp is a normal disk device */ dev = devvp->v_rdev; cgbno = fsbtodb(fs, cgtod(fs, cg)); } else { bp = NULL; return (0); } if (ino >= fs->fs_ipg * fs->fs_ncg) panic("ffs_freefile: range: dev = %s, ino = %ju, fs = %s", devtoname(dev), (uintmax_t)ino, fs->fs_fsmnt); if ((error = ffs_getcg(fs, devvp, cg, &bp, &cgp)) != 0) return (error); inosused = cg_inosused(cgp); ino %= fs->fs_ipg; if (isclr(inosused, ino)) { printf("dev = %s, ino = %ju, fs = %s\n", devtoname(dev), (uintmax_t)(ino + cg * fs->fs_ipg), fs->fs_fsmnt); if (fs->fs_ronly == 0) panic("ffs_freefile: freeing free inode"); } clrbit(inosused, ino); if (ino < cgp->cg_irotor) cgp->cg_irotor = ino; cgp->cg_cs.cs_nifree++; UFS_LOCK(ump); fs->fs_cstotal.cs_nifree++; fs->fs_cs(fs, cg).cs_nifree++; if ((mode & IFMT) == IFDIR) { cgp->cg_cs.cs_ndir--; fs->fs_cstotal.cs_ndir--; fs->fs_cs(fs, cg).cs_ndir--; } fs->fs_fmod = 1; ACTIVECLEAR(fs, cg); UFS_UNLOCK(ump); if (MOUNTEDSOFTDEP(UFSTOVFS(ump)) && devvp->v_type == VCHR) softdep_setup_inofree(UFSTOVFS(ump), bp, ino + cg * fs->fs_ipg, wkhd); bdwrite(bp); return (0); } /* * Check to see if a file is free. * Used to check for allocated files in snapshots. */ int ffs_checkfreefile(fs, devvp, ino) struct fs *fs; struct vnode *devvp; ino_t ino; { struct cg *cgp; struct buf *bp; ufs2_daddr_t cgbno; int ret, error; u_int cg; u_int8_t *inosused; cg = ino_to_cg(fs, ino); if (devvp->v_type == VREG) { /* devvp is a snapshot */ cgbno = fragstoblks(fs, cgtod(fs, cg)); } else if (devvp->v_type == VCHR) { /* devvp is a normal disk device */ cgbno = fsbtodb(fs, cgtod(fs, cg)); } else { return (1); } if (ino >= fs->fs_ipg * fs->fs_ncg) return (1); if ((error = ffs_getcg(fs, devvp, cg, &bp, &cgp)) != 0) return (1); inosused = cg_inosused(cgp); ino %= fs->fs_ipg; ret = isclr(inosused, ino); brelse(bp); return (ret); } /* * Find a block of the specified size in the specified cylinder group. * * It is a panic if a request is made to find a block if none are * available. 
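 *
 * ffs_mapsearch() below leans on scanc(), which scans forward
 * through a byte array and stops at the first byte whose lookup
 * table entry intersects the mask, returning the number of bytes
 * left to scan (0 if no byte matched).  A portable rendering of
 * that helper, named scanc_sketch here to avoid clashing with the
 * libkern routine:
 */
static int
scanc_sketch(unsigned size, const unsigned char *cp,
    const unsigned char table[], int mask)
{
        const unsigned char *end;

        for (end = &cp[size]; cp < end && (table[*cp] & mask) == 0; cp++)
                continue;
        return (end - cp);      /* bytes remaining; 0 means no match */
}
/*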
*/ static ufs1_daddr_t ffs_mapsearch(fs, cgp, bpref, allocsiz) struct fs *fs; struct cg *cgp; ufs2_daddr_t bpref; int allocsiz; { ufs1_daddr_t bno; int start, len, loc, i; int blk, field, subfield, pos; u_int8_t *blksfree; /* * find the fragment by searching through the free block * map for an appropriate bit pattern */ if (bpref) start = dtogd(fs, bpref) / NBBY; else start = cgp->cg_frotor / NBBY; blksfree = cg_blksfree(cgp); len = howmany(fs->fs_fpg, NBBY) - start; loc = scanc((u_int)len, (u_char *)&blksfree[start], fragtbl[fs->fs_frag], (u_char)(1 << (allocsiz - 1 + (fs->fs_frag % NBBY)))); if (loc == 0) { len = start + 1; start = 0; loc = scanc((u_int)len, (u_char *)&blksfree[0], fragtbl[fs->fs_frag], (u_char)(1 << (allocsiz - 1 + (fs->fs_frag % NBBY)))); if (loc == 0) { printf("start = %d, len = %d, fs = %s\n", start, len, fs->fs_fsmnt); panic("ffs_alloccg: map corrupted"); /* NOTREACHED */ } } bno = (start + len - loc) * NBBY; cgp->cg_frotor = bno; /* * found the byte in the map * sift through the bits to find the selected frag */ for (i = bno + NBBY; bno < i; bno += fs->fs_frag) { blk = blkmap(fs, blksfree, bno); blk <<= 1; field = around[allocsiz]; subfield = inside[allocsiz]; for (pos = 0; pos <= fs->fs_frag - allocsiz; pos++) { if ((blk & field) == subfield) return (bno + pos); field <<= 1; subfield <<= 1; } } printf("bno = %lu, fs = %s\n", (u_long)bno, fs->fs_fsmnt); panic("ffs_alloccg: block not in map"); return (-1); } +static const struct statfs * +ffs_getmntstat(struct vnode *devvp) +{ + + if (devvp->v_type == VCHR) + return (&devvp->v_rdev->si_mountpt->mnt_stat); + return (ffs_getmntstat(VFSTOUFS(devvp->v_mount)->um_devvp)); +} + /* * Fetch and verify a cylinder group. */ int ffs_getcg(fs, devvp, cg, bpp, cgpp) struct fs *fs; struct vnode *devvp; u_int cg; struct buf **bpp; struct cg **cgpp; { struct buf *bp; struct cg *cgp; + const struct statfs *sfs; int flags, error; *bpp = NULL; *cgpp = NULL; flags = 0; if ((fs->fs_metackhash & CK_CYLGRP) != 0) flags |= GB_CKHASH; error = breadn_flags(devvp, devvp->v_type == VREG ? fragstoblks(fs, cgtod(fs, cg)) : fsbtodb(fs, cgtod(fs, cg)), (int)fs->fs_cgsize, NULL, NULL, 0, NOCRED, flags, ffs_ckhash_cg, &bp); if (error != 0) return (error); cgp = (struct cg *)bp->b_data; if (((fs->fs_metackhash & CK_CYLGRP) != 0 && (bp->b_flags & B_CKHASH) != 0 && cgp->cg_ckhash != bp->b_ckhash) || !cg_chkmagic(cgp) || cgp->cg_cgx != cg) { - printf("checksum failed: cg %u, cgp: 0x%x != bp: 0x%jx\n", + sfs = ffs_getmntstat(devvp); + printf("UFS %s%s (%s) cylinder checksum failed: cg %u, cgp: " + "0x%x != bp: 0x%jx\n", + devvp->v_type == VCHR ? "" : "snapshot of ", + sfs->f_mntfromname, sfs->f_mntonname, cg, cgp->cg_ckhash, (uintmax_t)bp->b_ckhash); bp->b_flags &= ~B_CKHASH; bp->b_flags |= B_INVAL | B_NOCACHE; brelse(bp); return (EIO); } bp->b_flags &= ~B_CKHASH; bp->b_xflags |= BX_BKGRDWRITE; if ((fs->fs_metackhash & CK_CYLGRP) != 0) bp->b_xflags |= BX_CYLGRP; cgp->cg_old_time = cgp->cg_time = time_second; *bpp = bp; *cgpp = cgp; return (0); } static void ffs_ckhash_cg(bp) struct buf *bp; { uint32_t ckhash; struct cg *cgp; cgp = (struct cg *)bp->b_data; ckhash = cgp->cg_ckhash; cgp->cg_ckhash = 0; bp->b_ckhash = calculate_crc32c(~0L, bp->b_data, bp->b_bcount); cgp->cg_ckhash = ckhash; } /* * Fserr prints the name of a filesystem with an error diagnostic. 
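 *
 * Before that, a note on ffs_ckhash_cg() above: it shows the usual
 * on-disk check-hash idiom, where the hash is computed with the hash
 * field itself zeroed and the field is restored afterwards.  A
 * self-contained userland sketch with a toy checksum standing in for
 * CRC32C (struct rec and sum32 are hypothetical):
 */
#include <stddef.h>
#include <stdint.h>

struct rec {
        uint32_t ckhash;        /* stored check hash */
        char     payload[60];
};

static uint32_t
sum32(const void *buf, size_t len)      /* placeholder, not CRC32C */
{
        const unsigned char *p = buf;
        uint32_t s = 0;

        while (len-- > 0)
                s = s * 31 + *p++;
        return (s);
}

static uint32_t
rec_ckhash(struct rec *r)
{
        uint32_t hash, saved;

        saved = r->ckhash;      /* hash the record with the field zeroed */
        r->ckhash = 0;
        hash = sum32(r, sizeof(*r));
        r->ckhash = saved;
        return (hash);
}
/*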
* * The form of the error message is: * fs: error message */ void ffs_fserr(fs, inum, cp) struct fs *fs; ino_t inum; char *cp; { struct thread *td = curthread; /* XXX */ struct proc *p = td->td_proc; log(LOG_ERR, "pid %d (%s), uid %d inumber %ju on %s: %s\n", p->p_pid, p->p_comm, td->td_ucred->cr_uid, (uintmax_t)inum, fs->fs_fsmnt, cp); } /* * This function provides the capability for the fsck program to * update an active filesystem. Fourteen operations are provided: * * adjrefcnt(inode, amt) - adjusts the reference count on the * specified inode by the specified amount. Under normal * operation the count should always go down. Decrementing * the count to zero will cause the inode to be freed. * adjblkcnt(inode, amt) - adjust the number of blocks used by the * inode by the specified amount. * adjndir, adjbfree, adjifree, adjffree, adjnumclusters(amt) - * adjust the superblock summary. * freedirs(inode, count) - directory inodes [inode..inode + count - 1] * are marked as free. Inodes should never have to be marked * as in use. * freefiles(inode, count) - file inodes [inode..inode + count - 1] * are marked as free. Inodes should never have to be marked * as in use. * freeblks(blockno, size) - blocks [blockno..blockno + size - 1] * are marked as free. Blocks should never have to be marked * as in use. * setflags(flags, set/clear) - the fs_flags field has the specified * flags set (second parameter +1) or cleared (second parameter -1). * setcwd(dirinode) - set the current directory to dirinode in the * filesystem associated with the snapshot. * setdotdot(oldvalue, newvalue) - Verify that the inode number for ".." * in the current directory is oldvalue, then change it to newvalue. * unlink(nameptr, oldvalue) - Verify that the inode number associated * with nameptr in the current directory is oldvalue, then unlink it. * * The following functions may only be used on a quiescent filesystem * by the soft updates journal. They are not safe to be run on an active * filesystem. * * setinode(inode, dip) - the specified disk inode is replaced with the * contents pointed to by dip. * setbufoutput(fd, flags) - output associated with the specified file * descriptor (which must reference the character device supporting * the filesystem) switches from using physio to running through the * buffer cache when flags is set to 1. The descriptor reverts to * physio for output when flags is set to zero.
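 *
 * A hypothetical userland sketch of how fsck drives one of these
 * operations: fill in a struct fsck_cmd (the fields below are the
 * ones the handler reads; see <ufs/ffs/fs.h> for the authoritative
 * definition) and write it to the sysctl with sysctlbyname().
 * adjust_refcnt is an invented name and error handling is trimmed:
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <string.h>
#include <ufs/ffs/fs.h>

static int
adjust_refcnt(int fd, ino_t inum, int64_t amt)
{
        struct fsck_cmd cmd;

        memset(&cmd, 0, sizeof(cmd));
        cmd.version = FFS_CMD_VERSION;
        cmd.handle = fd;        /* descriptor on the target filesystem */
        cmd.value = inum;       /* inode to adjust */
        cmd.size = amt;         /* signed adjustment amount */
        return (sysctlbyname("vfs.ffs.adjrefcnt", NULL, NULL,
            &cmd, sizeof(cmd)));
}
/*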
*/ static int sysctl_ffs_fsck(SYSCTL_HANDLER_ARGS); static SYSCTL_NODE(_vfs_ffs, FFS_ADJ_REFCNT, adjrefcnt, CTLFLAG_WR, sysctl_ffs_fsck, "Adjust Inode Reference Count"); static SYSCTL_NODE(_vfs_ffs, FFS_ADJ_BLKCNT, adjblkcnt, CTLFLAG_WR, sysctl_ffs_fsck, "Adjust Inode Used Blocks Count"); static SYSCTL_NODE(_vfs_ffs, FFS_ADJ_NDIR, adjndir, CTLFLAG_WR, sysctl_ffs_fsck, "Adjust number of directories"); static SYSCTL_NODE(_vfs_ffs, FFS_ADJ_NBFREE, adjnbfree, CTLFLAG_WR, sysctl_ffs_fsck, "Adjust number of free blocks"); static SYSCTL_NODE(_vfs_ffs, FFS_ADJ_NIFREE, adjnifree, CTLFLAG_WR, sysctl_ffs_fsck, "Adjust number of free inodes"); static SYSCTL_NODE(_vfs_ffs, FFS_ADJ_NFFREE, adjnffree, CTLFLAG_WR, sysctl_ffs_fsck, "Adjust number of free frags"); static SYSCTL_NODE(_vfs_ffs, FFS_ADJ_NUMCLUSTERS, adjnumclusters, CTLFLAG_WR, sysctl_ffs_fsck, "Adjust number of free clusters"); static SYSCTL_NODE(_vfs_ffs, FFS_DIR_FREE, freedirs, CTLFLAG_WR, sysctl_ffs_fsck, "Free Range of Directory Inodes"); static SYSCTL_NODE(_vfs_ffs, FFS_FILE_FREE, freefiles, CTLFLAG_WR, sysctl_ffs_fsck, "Free Range of File Inodes"); static SYSCTL_NODE(_vfs_ffs, FFS_BLK_FREE, freeblks, CTLFLAG_WR, sysctl_ffs_fsck, "Free Range of Blocks"); static SYSCTL_NODE(_vfs_ffs, FFS_SET_FLAGS, setflags, CTLFLAG_WR, sysctl_ffs_fsck, "Change Filesystem Flags"); static SYSCTL_NODE(_vfs_ffs, FFS_SET_CWD, setcwd, CTLFLAG_WR, sysctl_ffs_fsck, "Set Current Working Directory"); static SYSCTL_NODE(_vfs_ffs, FFS_SET_DOTDOT, setdotdot, CTLFLAG_WR, sysctl_ffs_fsck, "Change Value of .. Entry"); static SYSCTL_NODE(_vfs_ffs, FFS_UNLINK, unlink, CTLFLAG_WR, sysctl_ffs_fsck, "Unlink a Duplicate Name"); static SYSCTL_NODE(_vfs_ffs, FFS_SET_INODE, setinode, CTLFLAG_WR, sysctl_ffs_fsck, "Update an On-Disk Inode"); static SYSCTL_NODE(_vfs_ffs, FFS_SET_BUFOUTPUT, setbufoutput, CTLFLAG_WR, sysctl_ffs_fsck, "Set Buffered Writing for Descriptor"); #define DEBUG 1 #ifdef DEBUG static int fsckcmds = 0; SYSCTL_INT(_debug, OID_AUTO, fsckcmds, CTLFLAG_RW, &fsckcmds, 0, ""); #endif /* DEBUG */ static int buffered_write(struct file *, struct uio *, struct ucred *, int, struct thread *); static int sysctl_ffs_fsck(SYSCTL_HANDLER_ARGS) { struct thread *td = curthread; struct fsck_cmd cmd; struct ufsmount *ump; struct vnode *vp, *dvp, *fdvp; struct inode *ip, *dp; struct mount *mp; struct fs *fs; ufs2_daddr_t blkno; long blkcnt, blksize; struct file *fp, *vfp; cap_rights_t rights; int filetype, error; static struct fileops *origops, bufferedops; if (req->newlen > sizeof cmd) return (EBADRPC); if ((error = SYSCTL_IN(req, &cmd, sizeof cmd)) != 0) return (error); if (cmd.version != FFS_CMD_VERSION) return (ERPCMISMATCH); if ((error = getvnode(td, cmd.handle, cap_rights_init(&rights, CAP_FSCK), &fp)) != 0) return (error); vp = fp->f_data; if (vp->v_type != VREG && vp->v_type != VDIR) { fdrop(fp, td); return (EINVAL); } vn_start_write(vp, &mp, V_WAIT); if (mp == NULL || strncmp(mp->mnt_stat.f_fstypename, "ufs", MFSNAMELEN)) { vn_finished_write(mp); fdrop(fp, td); return (EINVAL); } ump = VFSTOUFS(mp); if ((mp->mnt_flag & MNT_RDONLY) && ump->um_fsckpid != td->td_proc->p_pid) { vn_finished_write(mp); fdrop(fp, td); return (EROFS); } fs = ump->um_fs; filetype = IFREG; switch (oidp->oid_number) { case FFS_SET_FLAGS: #ifdef DEBUG if (fsckcmds) printf("%s: %s flags\n", mp->mnt_stat.f_mntonname, cmd.size > 0 ?
"set" : "clear"); #endif /* DEBUG */ if (cmd.size > 0) fs->fs_flags |= (long)cmd.value; else fs->fs_flags &= ~(long)cmd.value; break; case FFS_ADJ_REFCNT: #ifdef DEBUG if (fsckcmds) { printf("%s: adjust inode %jd link count by %jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value, (intmax_t)cmd.size); } #endif /* DEBUG */ if ((error = ffs_vget(mp, (ino_t)cmd.value, LK_EXCLUSIVE, &vp))) break; ip = VTOI(vp); ip->i_nlink += cmd.size; DIP_SET(ip, i_nlink, ip->i_nlink); ip->i_effnlink += cmd.size; ip->i_flag |= IN_CHANGE | IN_MODIFIED; error = ffs_update(vp, 1); if (DOINGSOFTDEP(vp)) softdep_change_linkcnt(ip); vput(vp); break; case FFS_ADJ_BLKCNT: #ifdef DEBUG if (fsckcmds) { printf("%s: adjust inode %jd block count by %jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value, (intmax_t)cmd.size); } #endif /* DEBUG */ if ((error = ffs_vget(mp, (ino_t)cmd.value, LK_EXCLUSIVE, &vp))) break; ip = VTOI(vp); DIP_SET(ip, i_blocks, DIP(ip, i_blocks) + cmd.size); ip->i_flag |= IN_CHANGE | IN_MODIFIED; error = ffs_update(vp, 1); vput(vp); break; case FFS_DIR_FREE: filetype = IFDIR; /* fall through */ case FFS_FILE_FREE: #ifdef DEBUG if (fsckcmds) { if (cmd.size == 1) printf("%s: free %s inode %ju\n", mp->mnt_stat.f_mntonname, filetype == IFDIR ? "directory" : "file", (uintmax_t)cmd.value); else printf("%s: free %s inodes %ju-%ju\n", mp->mnt_stat.f_mntonname, filetype == IFDIR ? "directory" : "file", (uintmax_t)cmd.value, (uintmax_t)(cmd.value + cmd.size - 1)); } #endif /* DEBUG */ while (cmd.size > 0) { if ((error = ffs_freefile(ump, fs, ump->um_devvp, cmd.value, filetype, NULL))) break; cmd.size -= 1; cmd.value += 1; } break; case FFS_BLK_FREE: #ifdef DEBUG if (fsckcmds) { if (cmd.size == 1) printf("%s: free block %jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); else printf("%s: free blocks %jd-%jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value, (intmax_t)cmd.value + cmd.size - 1); } #endif /* DEBUG */ blkno = cmd.value; blkcnt = cmd.size; blksize = fs->fs_frag - (blkno % fs->fs_frag); while (blkcnt > 0) { if (blksize > blkcnt) blksize = blkcnt; ffs_blkfree(ump, fs, ump->um_devvp, blkno, blksize * fs->fs_fsize, UFS_ROOTINO, VDIR, NULL); blkno += blksize; blkcnt -= blksize; blksize = fs->fs_frag; } break; /* * Adjust superblock summaries. fsck(8) is expected to * submit deltas when necessary. 
*/ case FFS_ADJ_NDIR: #ifdef DEBUG if (fsckcmds) { printf("%s: adjust number of directories by %jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); } #endif /* DEBUG */ fs->fs_cstotal.cs_ndir += cmd.value; break; case FFS_ADJ_NBFREE: #ifdef DEBUG if (fsckcmds) { printf("%s: adjust number of free blocks by %+jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); } #endif /* DEBUG */ fs->fs_cstotal.cs_nbfree += cmd.value; break; case FFS_ADJ_NIFREE: #ifdef DEBUG if (fsckcmds) { printf("%s: adjust number of free inodes by %+jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); } #endif /* DEBUG */ fs->fs_cstotal.cs_nifree += cmd.value; break; case FFS_ADJ_NFFREE: #ifdef DEBUG if (fsckcmds) { printf("%s: adjust number of free frags by %+jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); } #endif /* DEBUG */ fs->fs_cstotal.cs_nffree += cmd.value; break; case FFS_ADJ_NUMCLUSTERS: #ifdef DEBUG if (fsckcmds) { printf("%s: adjust number of free clusters by %+jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); } #endif /* DEBUG */ fs->fs_cstotal.cs_numclusters += cmd.value; break; case FFS_SET_CWD: #ifdef DEBUG if (fsckcmds) { printf("%s: set current directory to inode %jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); } #endif /* DEBUG */ if ((error = ffs_vget(mp, (ino_t)cmd.value, LK_SHARED, &vp))) break; AUDIT_ARG_VNODE1(vp); if ((error = change_dir(vp, td)) != 0) { vput(vp); break; } VOP_UNLOCK(vp, 0); pwd_chdir(td, vp); break; case FFS_SET_DOTDOT: #ifdef DEBUG if (fsckcmds) { printf("%s: change .. in cwd from %jd to %jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value, (intmax_t)cmd.size); } #endif /* DEBUG */ /* * First we have to get and lock the parent directory * to which ".." points. */ error = ffs_vget(mp, (ino_t)cmd.value, LK_EXCLUSIVE, &fdvp); if (error) break; /* * Now we get and lock the child directory containing "..". */ FILEDESC_SLOCK(td->td_proc->p_fd); dvp = td->td_proc->p_fd->fd_cdir; FILEDESC_SUNLOCK(td->td_proc->p_fd); if ((error = vget(dvp, LK_EXCLUSIVE, td)) != 0) { vput(fdvp); break; } dp = VTOI(dvp); dp->i_offset = 12; /* XXX mastertemplate.dot_reclen */ error = ufs_dirrewrite(dp, VTOI(fdvp), (ino_t)cmd.size, DT_DIR, 0); cache_purge(fdvp); cache_purge(dvp); vput(dvp); vput(fdvp); break; case FFS_UNLINK: #ifdef DEBUG if (fsckcmds) { char buf[32]; if (copyinstr((char *)(intptr_t)cmd.value, buf,32,NULL)) strncpy(buf, "Name_too_long", 32); printf("%s: unlink %s (inode %jd)\n", mp->mnt_stat.f_mntonname, buf, (intmax_t)cmd.size); } #endif /* DEBUG */ /* * kern_unlinkat will do its own start/finish writes and * they do not nest, so drop ours here. Setting mp == NULL * indicates that vn_finished_write is not needed down below. 
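 *
 * As an aside, the FFS_BLK_FREE case above frees an arbitrary range
 * in chunks that never cross a block boundary: the first chunk is
 * trimmed so the walk reaches a fs_frag boundary, after which whole
 * blocks are freed.  The same walk in standalone form (free_range
 * and blkfree are hypothetical names):
 *
 *      static void
 *      free_range(long blkno, long blkcnt, long frag,
 *          void (*blkfree)(long bno, long cnt))
 *      {
 *              long blksize;
 *
 *              blksize = frag - (blkno % frag);
 *              while (blkcnt > 0) {
 *                      if (blksize > blkcnt)
 *                              blksize = blkcnt;
 *                      blkfree(blkno, blksize);
 *                      blkno += blksize;
 *                      blkcnt -= blksize;
 *                      blksize = frag;
 *              }
 *      }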
*/ vn_finished_write(mp); mp = NULL; error = kern_unlinkat(td, AT_FDCWD, (char *)(intptr_t)cmd.value, UIO_USERSPACE, (ino_t)cmd.size); break; case FFS_SET_INODE: if (ump->um_fsckpid != td->td_proc->p_pid) { error = EPERM; break; } #ifdef DEBUG if (fsckcmds) { printf("%s: update inode %jd\n", mp->mnt_stat.f_mntonname, (intmax_t)cmd.value); } #endif /* DEBUG */ if ((error = ffs_vget(mp, (ino_t)cmd.value, LK_EXCLUSIVE, &vp))) break; AUDIT_ARG_VNODE1(vp); ip = VTOI(vp); if (I_IS_UFS1(ip)) error = copyin((void *)(intptr_t)cmd.size, ip->i_din1, sizeof(struct ufs1_dinode)); else error = copyin((void *)(intptr_t)cmd.size, ip->i_din2, sizeof(struct ufs2_dinode)); if (error) { vput(vp); break; } ip->i_flag |= IN_CHANGE | IN_MODIFIED; error = ffs_update(vp, 1); vput(vp); break; case FFS_SET_BUFOUTPUT: if (ump->um_fsckpid != td->td_proc->p_pid) { error = EPERM; break; } if (ITOUMP(VTOI(vp)) != ump) { error = EINVAL; break; } #ifdef DEBUG if (fsckcmds) { printf("%s: %s buffered output for descriptor %jd\n", mp->mnt_stat.f_mntonname, cmd.size == 1 ? "enable" : "disable", (intmax_t)cmd.value); } #endif /* DEBUG */ if ((error = getvnode(td, cmd.value, cap_rights_init(&rights, CAP_FSCK), &vfp)) != 0) break; if (vfp->f_vnode->v_type != VCHR) { fdrop(vfp, td); error = EINVAL; break; } if (origops == NULL) { origops = vfp->f_ops; bcopy((void *)origops, (void *)&bufferedops, sizeof(bufferedops)); bufferedops.fo_write = buffered_write; } if (cmd.size == 1) atomic_store_rel_ptr((volatile uintptr_t *)&vfp->f_ops, (uintptr_t)&bufferedops); else atomic_store_rel_ptr((volatile uintptr_t *)&vfp->f_ops, (uintptr_t)origops); fdrop(vfp, td); break; default: #ifdef DEBUG if (fsckcmds) { printf("Invalid request %d from fsck\n", oidp->oid_number); } #endif /* DEBUG */ error = EINVAL; break; } fdrop(fp, td); vn_finished_write(mp); return (error); } /* * Function to switch a descriptor to use the buffer cache to stage * its I/O. This is needed so that writes to the filesystem device * will give snapshots a chance to copy modified blocks for which it * needs to retain copies. */ static int buffered_write(fp, uio, active_cred, flags, td) struct file *fp; struct uio *uio; struct ucred *active_cred; int flags; struct thread *td; { struct vnode *devvp, *vp; struct inode *ip; struct buf *bp; struct fs *fs; struct filedesc *fdp; int error; daddr_t lbn; /* * The devvp is associated with the /dev filesystem. To discover * the filesystem with which the device is associated, we depend * on the application setting the current directory to a location * within the filesystem being written. Yes, this is an ugly hack. */ devvp = fp->f_vnode; if (!vn_isdisk(devvp, NULL)) return (EINVAL); fdp = td->td_proc->p_fd; FILEDESC_SLOCK(fdp); vp = fdp->fd_cdir; vref(vp); FILEDESC_SUNLOCK(fdp); vn_lock(vp, LK_SHARED | LK_RETRY); /* * Check that the current directory vnode indeed belongs to * UFS before trying to dereference UFS-specific v_data fields. */ if (vp->v_op != &ffs_vnodeops1 && vp->v_op != &ffs_vnodeops2) { vput(vp); return (EINVAL); } ip = VTOI(vp); if (ITODEVVP(ip) != devvp) { vput(vp); return (EINVAL); } fs = ITOFS(ip); vput(vp); foffset_lock_uio(fp, uio, flags); vn_lock(devvp, LK_EXCLUSIVE | LK_RETRY); #ifdef DEBUG if (fsckcmds) { printf("%s: buffered write for block %jd\n", fs->fs_fsmnt, (intmax_t)btodb(uio->uio_offset)); } #endif /* DEBUG */ /* * All I/O must be contained within a filesystem block, start on * a fragment boundary, and be a multiple of fragments in length. 
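 *
 * With power-of-two block and fragment sizes these checks reduce to
 * mask arithmetic.  A standalone sketch of the validation, assuming
 * the default UFS2 geometry of 32768-byte blocks and 4096-byte
 * fragments (io_ok is a hypothetical name):
 *
 *      enum { BSIZE = 32768, FSIZE = 4096 };
 *
 *      static int
 *      io_ok(long long offset, long long resid)
 *      {
 *              if (resid > BSIZE - (offset % BSIZE))
 *                      return (0);
 *              if ((offset & (FSIZE - 1)) != 0 ||
 *                  (resid & (FSIZE - 1)) != 0)
 *                      return (0);
 *              return (1);
 *      }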
*/ if (uio->uio_resid > fs->fs_bsize - (uio->uio_offset % fs->fs_bsize) || fragoff(fs, uio->uio_offset) != 0 || fragoff(fs, uio->uio_resid) != 0) { error = EINVAL; goto out; } lbn = numfrags(fs, uio->uio_offset); bp = getblk(devvp, lbn, uio->uio_resid, 0, 0, 0); bp->b_flags |= B_RELBUF; if ((error = uiomove((char *)bp->b_data, uio->uio_resid, uio)) != 0) { brelse(bp); goto out; } error = bwrite(bp); out: VOP_UNLOCK(devvp, 0); foffset_unlock_uio(fp, uio, flags | FOF_NEXTOFF); return (error); } Index: projects/runtime-coverage =================================================================== --- projects/runtime-coverage (revision 325444) +++ projects/runtime-coverage (revision 325445) Property changes on: projects/runtime-coverage ___________________________________________________________________ Modified: svn:mergeinfo ## -0,0 +0,1 ## Merged /head:r325423-325444