Index: head/share/man/man4/netmap.4 =================================================================== --- head/share/man/man4/netmap.4 (revision 307393) +++ head/share/man/man4/netmap.4 (revision 307394) @@ -1,1098 +1,1144 @@ .\" Copyright (c) 2011-2014 Matteo Landi, Luigi Rizzo, Universita` di Pisa .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" This document is derived in part from the enet man page (enet.4) .\" distributed with 4.3BSD Unix. .\" .\" $FreeBSD$ .\" .Dd December 14, 2015 .Dt NETMAP 4 .Os .Sh NAME .Nm netmap .Nd a framework for fast packet I/O -.Pp +.br .Nm VALE .Nd a fast VirtuAl Local Ethernet using the netmap API -.Pp +.br .Nm netmap pipes .Nd a shared memory packet transport channel .Sh SYNOPSIS .Cd device netmap .Sh DESCRIPTION .Nm is a framework for extremely fast and efficient packet I/O -for both userspace and kernel clients. +for userspace and kernel clients, and for Virtual Machines. It runs on .Fx , -and Linux, and includes -.Nm VALE , -a very fast and modular in-kernel software switch/dataplane, -and -.Nm netmap pipes , -a shared memory packet transport channel. -All these are accessed interchangeably with the same API. +Linux and some versions of Windows, and supports a variety of +.Nm netmap ports , +including +.Bl -tag -width XXXX +.It Nm physical NIC ports +to access individual queues of network interfaces; +.It Nm host ports +to inject packets into the host stack; +.It Nm VALE ports +implementing a very fast and modular in-kernel software switch/dataplane; +.It Nm netmap pipes +a shared memory packet transport channel; +.It Nm netmap monitors +a mechanism similar to +.Xr bpf 4 +to capture traffic. +.El .Pp -.Nm , -.Nm VALE -and -.Nm netmap pipes -are at least one order of magnitude faster than +All these +.Nm netmap ports +are accessed interchangeably with the same API, +and are at least one order of magnitude faster than standard OS mechanisms -(sockets, bpf, tun/tap interfaces, native switches, pipes), -reaching 14.88 million packets per second (Mpps) -with much less than one core on a 10 Gbit NIC, -about 20 Mpps per core for VALE ports, -and over 100 Mpps for netmap pipes. +(sockets, bpf, tun/tap interfaces, native switches, pipes).
+With suitably fast hardware (NICs, PCIe buses, CPUs), +packet I/O using +.Nm +on supported NICs +reaches 14.88 million packets per second (Mpps) +with much less than one core on 10 Gbit/s NICs; +35-40 Mpps on 40 Gbit/s NICs (limited by the hardware); +about 20 Mpps per core for VALE ports; +and over 100 Mpps for +.Nm netmap pipes . +NICs without native +.Nm +support can still use the API in emulated mode, +which uses unmodified device drivers and is 3-5 times faster than +.Xr bpf 4 +or raw sockets. .Pp Userspace clients can dynamically switch NICs into .Nm mode and send and receive raw packets through memory mapped buffers. Similarly, .Nm VALE -switch instances and ports, and +switch instances and ports, .Nm netmap pipes +and +.Nm netmap monitors can be created dynamically, providing high speed packet I/O between processes, virtual machines, NICs and the host stack. .Pp .Nm supports both non-blocking I/O through .Xr ioctl 2 , synchronization and blocking I/O through a file descriptor and standard OS mechanisms such as .Xr select 2 , .Xr poll 2 , .Xr epoll 2 , and .Xr kqueue 2 . -.Nm VALE -and -.Nm netmap pipes +All types of +.Nm netmap ports +and the +.Nm VALE switch are implemented by a single kernel module, which also emulates the .Nm -API over standard drivers for devices without native -.Nm -support. +API over standard drivers. For best performance, .Nm -requires explicit support in device drivers. +requires native support in device drivers. +A list of such devices is at the end of this document. .Pp In the rest of this (long) manual page we document various aspects of the .Nm and .Nm VALE architecture, features and usage. .Sh ARCHITECTURE .Nm supports raw packet I/O through a .Em port , which can be connected to a physical interface .Em ( NIC ) , to the host stack, or to a .Nm VALE -switch). +switch. Ports use preallocated circular queues of buffers .Em ( rings ) residing in an mmapped region. There is one ring for each transmit/receive queue of a NIC or virtual port. An additional ring pair connects to the host stack. .Pp After binding a file descriptor to a port, a .Nm client can send or receive packets in batches through the rings, and possibly implement zero-copy forwarding between ports. .Pp All NICs operating in .Nm mode use the same memory region, accessible to all processes that own .Pa /dev/netmap file descriptors bound to NICs. Independent .Nm VALE and .Nm netmap pipe ports by default use separate memory regions, but can be independently configured to share memory. .Sh ENTERING AND EXITING NETMAP MODE The following section describes the system calls to create and control .Nm netmap ports (including .Nm VALE and .Nm netmap pipe ports). Simpler, higher level functions are described in section .Sx LIBRARIES . .Pp Ports and rings are created and controlled through a file descriptor, created by opening a special device .Dl fd = open("/dev/netmap", O_RDWR); and then bound to a specific port with an .Dl ioctl(fd, NIOCREGIF, (struct nmreq *)arg); .Pp .Nm has multiple modes of operation controlled by the .Vt struct nmreq argument. .Va arg.nr_name -specifies the port name, as follows: +specifies the netmap port name, as follows: .Bl -tag -width XXXX .It Dv OS network interface name (e.g. 'em0', 'eth1', ...
) the data path of the NIC is disconnected from the host stack, and the file descriptor is bound to the NIC (one or all queues), or to the host stack; -.It Dv valeXXX:YYY (arbitrary XXX and YYY) -the file descriptor is bound to port YYY of a VALE switch called XXX, -both dynamically created if necessary. -The string cannot exceed IFNAMSIZ characters, and YYY cannot +.It Dv valeSSS:PPP +the file descriptor is bound to port PPP of VALE switch SSS. +Switch instances and ports are dynamically created if necessary. +.br +Both SSS and PPP have the form [0-9a-zA-Z_]+ ; the full name +cannot exceed IFNAMSIZ characters, and PPP cannot be the name of any existing OS network interface. .El .Pp On return, .Va arg indicates the size of the shared memory region, and the number, size and location of all the .Nm data structures, which can be accessed by mmapping the memory .Dl char *mem = mmap(0, arg.nr_memsize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); .Pp Non-blocking I/O is done with special .Xr ioctl 2 calls; .Xr select 2 and .Xr poll 2 on the file descriptor permit blocking I/O. Support for .Xr epoll 2 and .Xr kqueue 2 on .Nm file descriptors is platform-dependent (see the corresponding section below). .Pp While a NIC is in .Nm mode, the OS will still believe the interface is up and running. OS-generated packets for that NIC end up in a .Nm ring, and another ring is used to send packets into the OS network stack. A .Xr close 2 on the file descriptor removes the binding, and returns the NIC to normal mode (reconnecting the data path to the host stack), or destroys the virtual port. .Sh DATA STRUCTURES The data structures in the mmapped memory region are detailed in .In sys/net/netmap.h , which is the ultimate reference for the .Nm API. The main structures and fields are indicated below: .Bl -tag -width XXX .It Dv struct netmap_if (one per interface) .Bd -literal struct netmap_if { ... const uint32_t ni_flags; /* properties */ ... const uint32_t ni_tx_rings; /* NIC tx rings */ const uint32_t ni_rx_rings; /* NIC rx rings */ uint32_t ni_bufs_head; /* head of extra bufs list */ ... }; .Ed .Pp Indicates the number of available rings .Pa ( struct netmap_ring ) and their position in the mmapped region. The number of tx and rx rings .Pa ( ni_tx_rings , ni_rx_rings ) normally depends on the hardware. NICs also have an extra tx/rx ring pair connected to the host stack. .Em NIOCREGIF can also request additional unbound buffers in the same memory space, to be used as temporary storage for packets. .Pa ni_bufs_head contains the index of the first of these free buffers, which are connected in a list (the first uint32_t of each buffer being the index of the next buffer in the list). A .Dv 0 indicates the end of the list. .It Dv struct netmap_ring (one per ring) .Bd -literal struct netmap_ring { ... const uint32_t num_slots; /* slots in each ring */ const uint32_t nr_buf_size; /* size of each buffer */ ... uint32_t head; /* (u) first buf owned by user */ uint32_t cur; /* (u) wakeup position */ const uint32_t tail; /* (k) first buf owned by kernel */ ... uint32_t flags; struct timeval ts; /* (k) time of last rxsync() */ ... struct netmap_slot slot[0]; /* array of slots */ } .Ed .Pp Implements transmit and receive rings, with read/write pointers, metadata and an array of .Em slots describing the buffers. .It Dv struct netmap_slot (one per buffer) .Bd -literal struct netmap_slot { uint32_t buf_idx; /* buffer index */ uint16_t len; /* packet length */ uint16_t flags; /* buf changed, etc.
*/ uint64_t ptr; /* address for indirect buffers */ }; .Ed .Pp Describes a packet buffer, which normally is identified by an index and resides in the mmapped region. .It Dv packet buffers Fixed size (normally 2 KB) packet buffers allocated by the kernel. .El .Pp The offset of the .Pa struct netmap_if in the mmapped region is indicated by the .Pa nr_offset field in the structure returned by .Dv NIOCREGIF . From there, all other objects are reachable through relative references (offsets or indexes). Macros and functions in .In net/netmap_user.h help convert them into actual pointers: .Pp .Dl struct netmap_if *nifp = NETMAP_IF(mem, arg.nr_offset); .Dl struct netmap_ring *txr = NETMAP_TXRING(nifp, ring_index); .Dl struct netmap_ring *rxr = NETMAP_RXRING(nifp, ring_index); .Pp .Dl char *buf = NETMAP_BUF(ring, buffer_index); .Sh RINGS, BUFFERS AND DATA I/O .Va Rings are circular queues of packets with three indexes/pointers .Va ( head , cur , tail ) ; one slot is always kept empty. The ring size .Va ( num_slots ) should not be assumed to be a power of two. -.br -(NOTE: older versions of netmap used head/count format to indicate -the content of a ring). .Pp .Va head is the first slot available to userspace; .br .Va cur is the wakeup point: select/poll will unblock when .Va tail passes .Va cur ; .br .Va tail is the first slot reserved to the kernel. .Pp Slot indexes .Em must only move forward; for convenience, the function .Dl nm_ring_next(ring, index) returns the next index modulo the ring size. .Pp .Va head and .Va cur are only modified by the user program; .Va tail is only modified by the kernel. The kernel only reads/writes the .Vt struct netmap_ring slots and buffers during the execution of a netmap-related system call. The only exceptions are the slots (and buffers) in the range .Va tail\ . . . head-1 , which are explicitly assigned to the kernel. .Pp .Ss TRANSMIT RINGS On transmit rings, after a .Nm system call, slots in the range .Va head\ . . . tail-1 are available for transmission. User code should fill the slots sequentially and advance .Va head and .Va cur past slots ready to transmit. .Va cur may be moved further ahead if the user code needs more slots before further transmissions (see .Sx SCATTER GATHER I/O ) . .Pp At the next NIOCTXSYNC/select()/poll(), slots up to .Va head-1 are pushed to the port, and .Va tail may advance if further slots have become available. Below is an example of the evolution of a TX ring: .Bd -literal after the syscall, slots between cur and tail are (a)vailable head=cur tail | | v v TX [.....aaaaaaaaaaa.............] user creates new packets to (T)ransmit head=cur tail | | v v TX [.....TTTTTaaaaaa.............] NIOCTXSYNC/poll()/select() sends packets and reports new slots head=cur tail | | v v TX [..........aaaaaaaaaaa........] .Ed .Pp .Fn select and .Fn poll will block if there is no space in the ring, i.e. .Dl ring->cur == ring->tail and return when new slots have become available. .Pp High speed applications may want to amortize the cost of system calls by preparing as many packets as possible before issuing them. .Pp A transmit ring with pending transmissions has .Dl ring->head != ring->tail + 1 (modulo the ring size). The function .Va int nm_tx_pending(ring) implements this test. .Ss RECEIVE RINGS On receive rings, after a .Nm system call, the slots in the range .Va head\& . . . tail-1 contain received packets. User code should process them and advance .Va head and .Va cur past slots it wants to return to the kernel.
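.Pp
For example, a receiver may drain all newly arrived slots in one pass
(a minimal sketch, assuming
.Va nifp
was obtained as shown above and ring 0 is used):
.Bd -literal -compact
	struct netmap_ring *rxr = NETMAP_RXRING(nifp, 0);

	while (!nm_ring_empty(rxr)) {
		uint32_t i = rxr->cur;
		char *buf = NETMAP_BUF(rxr, rxr->slot[i].buf_idx);

		/* process rxr->slot[i].len bytes at buf */
		rxr->head = rxr->cur = nm_ring_next(rxr, i);
	}
.Ed
.Pp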
.Va cur may be moved further ahead if the user code wants to wait for more packets without returning all the previous slots to the kernel. .Pp At the next NIOCRXSYNC/select()/poll(), slots up to .Va head-1 are returned to the kernel for further receives, and .Va tail may advance to report new incoming packets. .br Below is an example of the evolution of an RX ring: .Bd -literal after the syscall, there are some (h)eld and some (R)eceived slots head cur tail | | | v v v RX [..hhhhhhRRRRRRRR..........] user advances head and cur, releasing some slots and holding others head cur tail | | | v v v RX [..*****hhhRRRRRR...........] NIOCRXSYNC/poll()/select() recovers slots and reports new packets head cur tail | | | v v v RX [.......hhhRRRRRRRRRRRR....] .Ed .Sh SLOTS AND PACKET BUFFERS Normally, packets should be stored in the netmap-allocated buffers assigned to slots when ports are bound to a file descriptor. One packet is fully contained in a single buffer. .Pp The following flags affect slot and buffer processing: .Bl -tag -width XXX .It NS_BUF_CHANGED .Em must be used when the .Va buf_idx in the slot is changed. This can be used to implement zero-copy forwarding, see .Sx ZERO-COPY FORWARDING . .It NS_REPORT reports when this buffer has been transmitted. Normally, .Nm notifies transmit completions in batches, hence signals can be delayed indefinitely. This flag helps detect when packets have been sent and a file descriptor can be closed. .It NS_FORWARD When a ring is in 'transparent' mode (see .Sx TRANSPARENT MODE ) , packets marked with this flag are forwarded to the other endpoint at the next system call, thus restoring (in a selective way) the connection between a NIC and the host stack. .It NS_NO_LEARN tells the forwarding code that the source MAC address for this packet must not be used in the learning bridge code. .It NS_INDIRECT indicates that the packet's payload is in a user-supplied buffer whose user virtual address is in the 'ptr' field of the slot. The size can reach 65535 bytes. .br This is only supported on the transmit ring of .Nm VALE ports, and it helps reduce data copies when interconnecting virtual machines. .It NS_MOREFRAG indicates that the packet continues with subsequent buffers; the last buffer in a packet must have the flag clear. .El .Sh SCATTER GATHER I/O Packets can span multiple slots if the .Va NS_MOREFRAG flag is set in all but the last slot. The maximum length of a chain is 64 buffers. This is normally used with .Nm VALE ports when connecting virtual machines, as they generate large TSO segments that are not split unless they reach a physical device. .Pp NOTE: The length field always refers to the individual fragment; there is no field carrying the total length of a packet. .Pp On receive rings the macro .Va NS_RFRAGS(slot) indicates the remaining number of slots for this packet, including the current one. Slots with a value greater than 1 also have NS_MOREFRAG set. .Sh IOCTLS .Nm uses two ioctls (NIOCTXSYNC, NIOCRXSYNC) for non-blocking I/O. They take no argument.
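.Pp
For example, a busy-waiting sender can flush a transmit ring without
ever blocking (a minimal sketch, assuming
.Va fd
is bound to a port and
.Va ring
points to one of its transmit rings):
.Bd -literal -compact
	for (;;) {
		while (!nm_ring_empty(ring)) {
			uint32_t i = ring->cur;

			/* fill NETMAP_BUF(ring, ring->slot[i].buf_idx)
			   and set ring->slot[i].len accordingly */
			ring->head = ring->cur = nm_ring_next(ring, i);
		}
		ioctl(fd, NIOCTXSYNC, NULL);	/* push pending packets */
	}
.Ed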
Two more ioctls (NIOCGINFO, NIOCREGIF) are used to query and configure ports, with the following argument: .Bd -literal struct nmreq { char nr_name[IFNAMSIZ]; /* (i) port name */ uint32_t nr_version; /* (i) API version */ uint32_t nr_offset; /* (o) nifp offset in mmap region */ uint32_t nr_memsize; /* (o) size of the mmap region */ uint32_t nr_tx_slots; /* (i/o) slots in tx rings */ uint32_t nr_rx_slots; /* (i/o) slots in rx rings */ uint16_t nr_tx_rings; /* (i/o) number of tx rings */ uint16_t nr_rx_rings; /* (i/o) number of rx rings */ uint16_t nr_ringid; /* (i/o) ring(s) we care about */ uint16_t nr_cmd; /* (i) special command */ uint16_t nr_arg1; /* (i/o) extra arguments */ uint16_t nr_arg2; /* (i/o) extra arguments */ uint32_t nr_arg3; /* (i/o) extra arguments */ uint32_t nr_flags; /* (i/o) open mode */ ... }; .Ed .Pp A file descriptor obtained through .Pa /dev/netmap also supports the ioctls supported by network devices; see .Xr netintro 4 . .Bl -tag -width XXXX .It Dv NIOCGINFO returns EINVAL if the named port does not support netmap. Otherwise, it returns 0 and (advisory) information about the port. Note that all the information below can change before the interface is actually put in netmap mode. .Bl -tag -width XX .It Pa nr_memsize indicates the size of the .Nm memory region. NICs in .Nm mode all share the same memory region, whereas .Nm VALE ports have independent regions for each port. .It Pa nr_tx_slots , nr_rx_slots indicate the size of transmit and receive rings. .It Pa nr_tx_rings , nr_rx_rings indicate the number of transmit and receive rings. Both the number of rings and their sizes may be configured at runtime using interface-specific functions (e.g. .Xr ethtool 8 ). .El .It Dv NIOCREGIF binds the port named in .Va nr_name to the file descriptor. For a physical device this also switches it into .Nm mode, disconnecting it from the host stack. Multiple file descriptors can be bound to the same port, with proper synchronization left to the user. .Pp +The recommended way to bind a file descriptor to a port is +to use the function +.Va nm_open(..) +(see +.Sx LIBRARIES ) , +which parses names to access specific port types and +enable features. +In the following we document the main features. +.Pp .Dv NIOCREGIF can also bind a file descriptor to one endpoint of a .Em netmap pipe , consisting of two netmap ports with a crossover connection. A netmap pipe shares the memory space of the parent port, and is meant to enable configurations where a master process acts as a dispatcher towards slave processes. .Pp To enable this function, the .Pa nr_arg1 field of the structure can be used as a hint to the kernel to indicate how many pipes we expect to use, and reserve extra space in the memory region. .Pp On return, it gives the same info as NIOCGINFO, with .Pa nr_ringid and .Pa nr_flags indicating the identity of the rings controlled through the file descriptor. .Pp .Va nr_flags and .Va nr_ringid select which rings are controlled through this file descriptor. Possible values of .Pa nr_flags are indicated below, together with the naming schemes that application libraries (such as the .Va nm_open function described below) can use to indicate the specific set of rings. In the examples below, "netmap:foo" is any valid netmap port name. .Bl -tag -width XXXXX .It NR_REG_ALL_NIC "netmap:foo" (default) all hardware ring pairs .It NR_REG_SW "netmap:foo^" the ``host rings'', connecting to the host stack.
.It NR_REG_NIC_SW "netmap:foo+" all hardware rings and the host rings .It NR_REG_ONE_NIC "netmap:foo-i" only the i-th hardware ring pair, where the number is in .Pa nr_ringid ; .It NR_REG_PIPE_MASTER "netmap:foo{i" the master side of the netmap pipe whose identifier (i) is in .Pa nr_ringid ; .It NR_REG_PIPE_SLAVE "netmap:foo}i" the slave side of the netmap pipe whose identifier (i) is in .Pa nr_ringid . .Pp The identifier of a pipe must be thought of as part of the pipe name, and does not need to be sequential. On return the pipe will only have a single ring pair with index 0, irrespective of the value of .Va i . .El .Pp By default, a .Xr poll 2 or .Xr select 2 call pushes out any pending packets on the transmit ring, even if no write events are specified. The feature can be disabled by or-ing .Va NETMAP_NO_TX_POLL into the value written to .Va nr_ringid . When this feature is used, packets are transmitted only when .Va ioctl(NIOCTXSYNC) or select()/poll() are called with a write event (POLLOUT/wfdset) or with a full ring. .Pp When registering a virtual interface that is dynamically created on a .Xr vale 4 switch, we can specify the desired number of rings (1 by default, and currently up to 16) on it using the nr_tx_rings and nr_rx_rings fields. .It Dv NIOCTXSYNC tells the hardware of new packets to transmit, and updates the number of slots available for transmission. .It Dv NIOCRXSYNC tells the hardware of consumed packets, and asks for newly available packets. .El .Sh SELECT, POLL, EPOLL, KQUEUE. .Xr select 2 and .Xr poll 2 on a .Nm file descriptor process rings as indicated in .Sx TRANSMIT RINGS and .Sx RECEIVE RINGS , respectively when write (POLLOUT) and read (POLLIN) events are requested. Both block if no slots are available in the ring .Va ( ring->cur == ring->tail ) . Depending on the platform, .Xr epoll 2 and .Xr kqueue 2 are supported too. .Pp Packets in transmit rings are normally pushed out (and buffers reclaimed) even without requesting write events. Passing the .Dv NETMAP_NO_TX_POLL flag to .Em NIOCREGIF disables this feature. By default, receive rings are processed only if read events are requested. Passing the .Dv NETMAP_DO_RX_POLL flag to .Em NIOCREGIF updates receive rings even without read events. Note that on epoll and kqueue, .Dv NETMAP_NO_TX_POLL and .Dv NETMAP_DO_RX_POLL only have an effect when some event is posted for the file descriptor. .Sh LIBRARIES The .Nm API is supposed to be used directly, both because of its simplicity and for efficient integration with applications. .Pp For convenience, the .In net/netmap_user.h header provides a few macros and functions to ease creating a file descriptor and doing I/O with a .Nm port. These are loosely modeled after the .Xr pcap 3 API, to ease porting of libpcap-based applications to .Nm . To use these extra functions, programs should .Dl #define NETMAP_WITH_LIBS before .Dl #include <net/netmap_user.h> .Pp The following functions are available: .Bl -tag -width XXXXX .It Va struct nm_desc * nm_open(const char *ifname, const struct nmreq *req, uint64_t flags, const struct nm_desc *arg) similar to .Xr pcap_open , binds a file descriptor to a port. .Bl -tag -width XX .It Va ifname is a port name, in the form "netmap:PPP" for a NIC and "valeSSS:PPP" for a .Nm VALE port. .It Va req provides the initial values for the argument to the NIOCREGIF ioctl.
The nr_flags and nr_ringid values are overwritten by parsing ifname and flags, and other fields can be overridden through the other two arguments. .It Va arg points to a struct nm_desc containing arguments (e.g. from a previously open file descriptor) that should override the defaults. The fields are used as described below. .It Va flags can be set to a combination of the following flags: .Va NETMAP_NO_TX_POLL , .Va NETMAP_DO_RX_POLL (copied into nr_ringid); .Va NM_OPEN_NO_MMAP (if arg points to the same memory region, avoids the mmap and uses the values from it); .Va NM_OPEN_IFNAME (ignores ifname and uses the values in arg); .Va NM_OPEN_ARG1 , .Va NM_OPEN_ARG2 , .Va NM_OPEN_ARG3 (uses the fields from arg); .Va NM_OPEN_RING_CFG (uses the ring number and sizes from arg). .El .It Va int nm_close(struct nm_desc *d) closes the file descriptor, unmaps memory, frees resources. .It Va int nm_inject(struct nm_desc *d, const void *buf, size_t size) similar to pcap_inject(), pushes a packet to a ring, returns the size of the packet if successful, or 0 on error; .It Va int nm_dispatch(struct nm_desc *d, int cnt, nm_cb_t cb, u_char *arg) similar to pcap_dispatch(), applies a callback to incoming packets .It Va u_char * nm_nextpkt(struct nm_desc *d, struct nm_pkthdr *hdr) similar to pcap_next(), fetches the next packet .El .Sh SUPPORTED DEVICES .Nm natively supports the following devices: .Pp On FreeBSD: +.Xr cxgbe 4 , .Xr em 4 , .Xr igb 4 , .Xr ixgbe 4 , +.Xr ixl 4 , .Xr lem 4 , .Xr re 4 . .Pp On Linux: .Xr e1000 4 , .Xr e1000e 4 , +.Xr i40e 4 , .Xr igb 4 , .Xr ixgbe 4 , -.Xr mlx4 4 , -.Xr forcedeth 4 , .Xr r8169 4 . .Pp NICs without native support can still be used in .Nm mode through emulation. Performance is inferior to native netmap -mode but still significantly higher than sockets, and approaching -that of in-kernel solutions such as Linux's -.Xr pktgen . +mode but still significantly higher than various raw socket types +(bpf, PF_PACKET, etc.). +Note that for slow devices (such as 1 Gbit/s and slower NICs, +or several 10 Gbit/s NICs whose hardware is unable +to sustain line rate with small packets), emulated and +native mode may achieve similar throughput. +When emulation is in use, packet sniffer programs such as tcpdump +could see received packets before they are diverted by netmap. This behaviour +is not intentional, being just an artifact of the implementation of emulation. +Note that in case the netmap application subsequently moves packets received +from the emulated adapter onto the host RX ring, the sniffer will intercept +those packets again, since the packets are injected to the host stack as if they +were received by the network interface. .Pp Emulation is also available for devices with native netmap support, which can be used for testing or performance comparison. The sysctl variable .Va dev.netmap.admode globally controls how netmap mode is implemented. .Sh SYSCTL VARIABLES AND MODULE PARAMETERS Some aspects of the operation of .Nm are controlled through sysctl variables on FreeBSD .Em ( dev.netmap.* ) and module parameters on Linux .Em ( /sys/module/netmap_lin/parameters/* ) : .Bl -tag -width indent .It Va dev.netmap.admode: 0 Controls the use of native or emulated adapter mode. -0 uses the best available option, 1 forces native and -fails if not available, 2 forces emulated hence never fails. +.br +0 uses the best available option; +.br +1 forces native mode and fails if not available; +.br +2 forces emulated mode and hence never fails.
.It Va dev.netmap.generic_ringsize: 1024 Ring size used for emulated netmap mode .It Va dev.netmap.generic_mit: 100000 Controls interrupt moderation for emulated mode .It Va dev.netmap.mmap_unreg: 0 .It Va dev.netmap.fwd: 0 Forces NS_FORWARD mode .It Va dev.netmap.flags: 0 .It Va dev.netmap.txsync_retry: 2 .It Va dev.netmap.no_pendintr: 1 Forces recovery of transmit buffers on system calls .It Va dev.netmap.mitigate: 1 Propagates interrupt mitigation to user processes .It Va dev.netmap.no_timestamp: 0 Disables the update of the timestamp in the netmap ring .It Va dev.netmap.verbose: 0 Verbose kernel messages .It Va dev.netmap.buf_num: 163840 .It Va dev.netmap.buf_size: 2048 .It Va dev.netmap.ring_num: 200 .It Va dev.netmap.ring_size: 36864 .It Va dev.netmap.if_num: 100 .It Va dev.netmap.if_size: 1024 Sizes and number of objects (netmap_if, netmap_ring, buffers) for the global memory region. The only parameter worth modifying is .Va dev.netmap.buf_num as it impacts the total amount of memory used by netmap. .It Va dev.netmap.buf_curr_num: 0 .It Va dev.netmap.buf_curr_size: 0 .It Va dev.netmap.ring_curr_num: 0 .It Va dev.netmap.ring_curr_size: 0 .It Va dev.netmap.if_curr_num: 0 .It Va dev.netmap.if_curr_size: 0 Actual values in use. .It Va dev.netmap.bridge_batch: 1024 Batch size used when moving packets across a .Nm VALE switch. Values above 64 generally guarantee good performance. .El .Sh SYSTEM CALLS .Nm uses .Xr select 2 , .Xr poll 2 , -.Xr epoll +.Xr epoll 2 and -.Xr kqueue +.Xr kqueue 2 to wake up processes when significant events occur, and .Xr mmap 2 to map memory. .Xr ioctl 2 is used to configure ports and .Nm VALE switches . .Pp Applications may need to create threads and bind them to specific cores to improve performance, using standard OS primitives; see .Xr pthread 3 . In particular, .Xr pthread_setaffinity_np 3 may be of use. .Sh EXAMPLES .Ss TEST PROGRAMS .Nm comes with a few programs that can be used for testing or simple applications. See the .Pa examples/ directory in .Nm distributions, or the .Pa tools/tools/netmap/ directory in .Fx distributions. .Pp .Xr pkt-gen is a general purpose traffic source/sink. .Pp As an example .Dl pkt-gen -i ix0 -f tx -l 60 can generate an infinite stream of minimum size packets, and .Dl pkt-gen -i ix0 -f rx is a traffic sink. Both print traffic statistics, to help monitor how the system performs. .Pp .Xr pkt-gen has many options that can be used to set packet sizes, addresses, rates, and to use multiple send/receive threads and cores. .Pp .Xr bridge is another test program which interconnects two .Nm ports. It can be used for transparent forwarding between interfaces, as in .Dl bridge -i ix0 -i ix1 or even connect the NIC to the host stack using netmap .Dl bridge -i ix0 -i ix0 .Ss USING THE NATIVE API The following code implements a traffic generator: .Pp .Bd -literal -compact #include <net/netmap_user.h> \&... void sender(void) { struct netmap_if *nifp; struct netmap_ring *ring; struct nmreq nmr; struct pollfd fds; int fd, i; void *p; char *buf; fd = open("/dev/netmap", O_RDWR); bzero(&nmr, sizeof(nmr)); strcpy(nmr.nr_name, "ix0"); nmr.nr_version = NETMAP_API; ioctl(fd, NIOCREGIF, &nmr); p = mmap(0, nmr.nr_memsize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); nifp = NETMAP_IF(p, nmr.nr_offset); ring = NETMAP_TXRING(nifp, 0); fds.fd = fd; fds.events = POLLOUT; for (;;) { poll(&fds, 1, -1); while (!nm_ring_empty(ring)) { i = ring->cur; buf = NETMAP_BUF(ring, ring->slot[i].buf_idx); ... prepare packet in buf ... ring->slot[i].len = ... packet length ...
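/*
 * Advancing both head and cur releases the slot to the
 * kernel; the next poll()/NIOCTXSYNC pushes out all
 * slots up to head-1.
 */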
ring->head = ring->cur = nm_ring_next(ring, i); } } } .Ed .Ss HELPER FUNCTIONS A simple receiver can be implemented using the helper functions: .Bd -literal -compact #define NETMAP_WITH_LIBS #include <net/netmap_user.h> \&... void receiver(void) { struct nm_desc *d; struct pollfd fds; u_char *buf; struct nm_pkthdr h; ... d = nm_open("netmap:ix0", NULL, 0, 0); fds.fd = NETMAP_FD(d); fds.events = POLLIN; for (;;) { poll(&fds, 1, -1); while ( (buf = nm_nextpkt(d, &h)) ) consume_pkt(buf, h.len); } nm_close(d); } .Ed .Ss ZERO-COPY FORWARDING Since physical interfaces share the same memory region, it is possible to do packet forwarding between ports by swapping buffers. The buffer from the transmit ring is used to replenish the receive ring: .Bd -literal -compact uint32_t tmp; struct netmap_slot *src, *dst; ... src = &rxr->slot[rxr->cur]; dst = &txr->slot[txr->cur]; tmp = dst->buf_idx; dst->buf_idx = src->buf_idx; dst->len = src->len; dst->flags = NS_BUF_CHANGED; src->buf_idx = tmp; src->flags = NS_BUF_CHANGED; rxr->head = rxr->cur = nm_ring_next(rxr, rxr->cur); txr->head = txr->cur = nm_ring_next(txr, txr->cur); ... .Ed .Ss ACCESSING THE HOST STACK The host stack is for all practical purposes just a regular ring pair, which you can access with the netmap API (e.g. with .Dl nm_open("netmap:eth0^", ... ) ; All packets that the host would send to an interface in .Nm mode end up in the RX ring, whereas all packets queued to the TX ring are sent up to the host stack. .Ss VALE SWITCH A simple way to test the performance of a .Nm VALE switch is to attach a sender and a receiver to it, e.g. running the following in two different terminals: .Dl pkt-gen -i vale1:a -f rx # receiver .Dl pkt-gen -i vale1:b -f tx # sender The same example can be used to test netmap pipes, by simply changing port names, e.g. -.Dl pkt-gen -i vale:x{3 -f rx # receiver on the master side -.Dl pkt-gen -i vale:x}3 -f tx # sender on the slave side +.Dl pkt-gen -i vale2:x{3 -f rx # receiver on the master side +.Dl pkt-gen -i vale2:x}3 -f tx # sender on the slave side .Pp The following command attaches an interface and the host stack to a switch: .Dl vale-ctl -h vale2:em0 Other .Nm clients attached to the same switch can now communicate with the network card or the host. .Sh SEE ALSO .Pa http://info.iet.unipi.it/~luigi/netmap/ .Pp Luigi Rizzo, Revisiting network I/O APIs: the netmap framework, Communications of the ACM, 55 (3), pp.45-51, March 2012 .Pp Luigi Rizzo, netmap: a novel framework for fast packet I/O, Usenix ATC'12, June 2012, Boston .Pp Luigi Rizzo, Giuseppe Lettieri, VALE, a switched ethernet for virtual machines, ACM CoNEXT'12, December 2012, Nice .Pp Luigi Rizzo, Giuseppe Lettieri, Vincenzo Maffione, Speeding up packet I/O in virtual machines, ACM/IEEE ANCS'13, October 2013, San Jose .Sh AUTHORS .An -nosplit The .Nm framework was originally designed and implemented at the Universita` di Pisa in 2011 by .An Luigi Rizzo , and further extended with help from .An Matteo Landi , .An Gaetano Catalli , .An Giuseppe Lettieri , and .An Vincenzo Maffione . .Pp .Nm and .Nm VALE have been funded by the European Commission within FP7 Projects CHANGE (257422) and OPENLAB (287581). .Sh CAVEATS No matter how fast the CPU and OS are, achieving line rate on 10G and faster interfaces requires hardware with sufficient performance. Several NICs are unable to sustain line rate with small packet sizes. Insufficient PCIe or memory bandwidth can also cause reduced performance.
.Pp Another frequent reason for low performance is the use of flow control on the link: a slow receiver can limit the transmit speed. Be sure to disable flow control when running high speed experiments. .Ss SPECIAL NIC FEATURES .Nm is orthogonal to some NIC features such as multiqueue, schedulers, packet filters. .Pp Multiple transmit and receive rings are supported natively and can be configured with ordinary OS tools, such as .Xr ethtool or device-specific sysctl variables. The same goes for Receive Packet Steering (RPS) and filtering of incoming traffic. .Pp .Nm .Em does not use features such as .Em checksum offloading , TCP segmentation offloading , .Em encryption , VLAN encapsulation/decapsulation , etc. When using netmap to exchange packets with the host stack, make sure to disable these features. Index: head/sys/conf/files =================================================================== --- head/sys/conf/files (revision 307393) +++ head/sys/conf/files (revision 307394) @@ -1,4362 +1,4364 @@ # $FreeBSD$ # # The long compile-with and dependency lines are required because of # limitations in config: backslash-newline doesn't work in strings, and # dependency lines other than the first are silently ignored. # acpi_quirks.h optional acpi \ dependency "$S/tools/acpi_quirks2h.awk $S/dev/acpica/acpi_quirks" \ compile-with "${AWK} -f $S/tools/acpi_quirks2h.awk $S/dev/acpica/acpi_quirks" \ no-obj no-implicit-rule before-depend \ clean "acpi_quirks.h" bhnd_nvram_map.h optional bhnd \ dependency "$S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/tools/nvram_map_gen.awk $S/dev/bhnd/nvram/nvram_map" \ compile-with "sh $S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/nvram/nvram_map -h" \ no-obj no-implicit-rule before-depend \ clean "bhnd_nvram_map.h" bhnd_nvram_map_data.h optional bhnd \ dependency "$S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/tools/nvram_map_gen.awk $S/dev/bhnd/nvram/nvram_map" \ compile-with "sh $S/dev/bhnd/tools/nvram_map_gen.sh $S/dev/bhnd/nvram/nvram_map -d" \ no-obj no-implicit-rule before-depend \ clean "bhnd_nvram_map_data.h" # # The 'fdt_dtb_file' target covers an actual DTB file name, which is derived # from the specified source (DTS) file: .dts -> .dtb # fdt_dtb_file optional fdt fdt_dtb_static \ compile-with "sh -c 'MACHINE=${MACHINE} $S/tools/fdt/make_dtb.sh $S ${FDT_DTS_FILE} ${.CURDIR}'" \ no-obj no-implicit-rule before-depend \ clean "${FDT_DTS_FILE:R}.dtb" fdt_static_dtb.h optional fdt fdt_dtb_static \ compile-with "sh -c 'MACHINE=${MACHINE} $S/tools/fdt/make_dtbh.sh ${FDT_DTS_FILE} ${.CURDIR}'" \ dependency "fdt_dtb_file" \ no-obj no-implicit-rule before-depend \ clean "fdt_static_dtb.h" feeder_eq_gen.h optional sound \ dependency "$S/tools/sound/feeder_eq_mkfilter.awk" \ compile-with "${AWK} -f $S/tools/sound/feeder_eq_mkfilter.awk -- ${FEEDER_EQ_PRESETS} > feeder_eq_gen.h" \ no-obj no-implicit-rule before-depend \ clean "feeder_eq_gen.h" feeder_rate_gen.h optional sound \ dependency "$S/tools/sound/feeder_rate_mkfilter.awk" \ compile-with "${AWK} -f $S/tools/sound/feeder_rate_mkfilter.awk -- ${FEEDER_RATE_PRESETS} > feeder_rate_gen.h" \ no-obj no-implicit-rule before-depend \ clean "feeder_rate_gen.h" snd_fxdiv_gen.h optional sound \ dependency "$S/tools/sound/snd_fxdiv_gen.awk" \ compile-with "${AWK} -f $S/tools/sound/snd_fxdiv_gen.awk -- > snd_fxdiv_gen.h" \ no-obj no-implicit-rule before-depend \ clean "snd_fxdiv_gen.h" miidevs.h optional miibus | mii \ dependency "$S/tools/miidevs2h.awk $S/dev/mii/miidevs" \ compile-with "${AWK} -f $S/tools/miidevs2h.awk 
$S/dev/mii/miidevs" \ no-obj no-implicit-rule before-depend \ clean "miidevs.h" pccarddevs.h standard \ dependency "$S/tools/pccarddevs2h.awk $S/dev/pccard/pccarddevs" \ compile-with "${AWK} -f $S/tools/pccarddevs2h.awk $S/dev/pccard/pccarddevs" \ no-obj no-implicit-rule before-depend \ clean "pccarddevs.h" kbdmuxmap.h optional kbdmux_dflt_keymap \ compile-with "kbdcontrol -P ${S:S/sys$/share/}/vt/keymaps -P ${S:S/sys$/share/}/syscons/keymaps -L ${KBDMUX_DFLT_KEYMAP} | sed -e 's/^static keymap_t.* = /static keymap_t key_map = /' -e 's/^static accentmap_t.* = /static accentmap_t accent_map = /' > kbdmuxmap.h" \ no-obj no-implicit-rule before-depend \ clean "kbdmuxmap.h" teken_state.h optional sc | vt \ dependency "$S/teken/gensequences $S/teken/sequences" \ compile-with "${AWK} -f $S/teken/gensequences $S/teken/sequences > teken_state.h" \ no-obj no-implicit-rule before-depend \ clean "teken_state.h" usbdevs.h optional usb \ dependency "$S/tools/usbdevs2h.awk $S/dev/usb/usbdevs" \ compile-with "${AWK} -f $S/tools/usbdevs2h.awk $S/dev/usb/usbdevs -h" \ no-obj no-implicit-rule before-depend \ clean "usbdevs.h" usbdevs_data.h optional usb \ dependency "$S/tools/usbdevs2h.awk $S/dev/usb/usbdevs" \ compile-with "${AWK} -f $S/tools/usbdevs2h.awk $S/dev/usb/usbdevs -d" \ no-obj no-implicit-rule before-depend \ clean "usbdevs_data.h" cam/cam.c optional scbus cam/cam_compat.c optional scbus cam/cam_iosched.c optional scbus cam/cam_periph.c optional scbus cam/cam_queue.c optional scbus cam/cam_sim.c optional scbus cam/cam_xpt.c optional scbus cam/ata/ata_all.c optional scbus cam/ata/ata_xpt.c optional scbus cam/ata/ata_pmp.c optional scbus cam/nvme/nvme_all.c optional scbus nvme !nvd cam/nvme/nvme_da.c optional scbus nvme da !nvd cam/nvme/nvme_xpt.c optional scbus nvme !nvd cam/scsi/scsi_xpt.c optional scbus cam/scsi/scsi_all.c optional scbus cam/scsi/scsi_cd.c optional cd cam/scsi/scsi_ch.c optional ch cam/ata/ata_da.c optional ada | da cam/ctl/ctl.c optional ctl cam/ctl/ctl_backend.c optional ctl cam/ctl/ctl_backend_block.c optional ctl cam/ctl/ctl_backend_ramdisk.c optional ctl cam/ctl/ctl_cmd_table.c optional ctl cam/ctl/ctl_frontend.c optional ctl cam/ctl/ctl_frontend_cam_sim.c optional ctl cam/ctl/ctl_frontend_ioctl.c optional ctl cam/ctl/ctl_frontend_iscsi.c optional ctl cam/ctl/ctl_ha.c optional ctl cam/ctl/ctl_scsi_all.c optional ctl cam/ctl/ctl_tpc.c optional ctl cam/ctl/ctl_tpc_local.c optional ctl cam/ctl/ctl_error.c optional ctl cam/ctl/ctl_util.c optional ctl cam/ctl/scsi_ctl.c optional ctl cam/scsi/scsi_da.c optional da cam/scsi/scsi_low.c optional ct | ncv | nsp | stg cam/scsi/scsi_pass.c optional pass cam/scsi/scsi_pt.c optional pt cam/scsi/scsi_sa.c optional sa cam/scsi/scsi_enc.c optional ses cam/scsi/scsi_enc_ses.c optional ses cam/scsi/scsi_enc_safte.c optional ses cam/scsi/scsi_sg.c optional sg cam/scsi/scsi_targ_bh.c optional targbh cam/scsi/scsi_target.c optional targ cam/scsi/smp_all.c optional scbus # shared between zfs and dtrace cddl/compat/opensolaris/kern/opensolaris.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_cmn_err.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_kmem.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_misc.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_proc.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_sunddi.c optional zfs | dtrace 
compile-with "${CDDL_C}" cddl/compat/opensolaris/kern/opensolaris_taskq.c optional zfs | dtrace compile-with "${CDDL_C}" # zfs specific cddl/compat/opensolaris/kern/opensolaris_acl.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_dtrace.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_kobj.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_kstat.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_lookup.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_policy.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_string.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_sysevent.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_uio.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_vfs.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_vm.c optional zfs compile-with "${ZFS_C}" cddl/compat/opensolaris/kern/opensolaris_zone.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/acl/acl_common.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/avl/avl.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/nvpair/opensolaris_fnvpair.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/nvpair/opensolaris_nvpair.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/nvpair/opensolaris_nvpair_alloc_fixed.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/unicode/u8_textprep.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfeature_common.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_comutil.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_deleg.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_fletcher.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_ioctl_compat.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_namecheck.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zfs_prop.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zpool_prop.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/common/zfs/zprop_common.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/gfs.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/vnode.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/blkptr.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bplist.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bpobj.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bptree.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/bqueue.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/ddt.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/ddt_zap.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c optional zfs compile-with 
"${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_diff.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_object.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_zfetch.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dnode.c optional zfs compile-with "${ZFS_C}" \ warning "kernel contains CDDL licensed ZFS filesystem" cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_bookmark.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_deadlist.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_deleg.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_destroy.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dir.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_prop.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_scan.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_userhold.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_synctask.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/gzip.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lz4.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/lzjb.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/multilist.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/refcount.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/rrwlock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/sha256.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/skein_zfs.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_config.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_errlog.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_history.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/space_reftree.c optional zfs compile-with "${ZFS_C}" 
cddl/contrib/opensolaris/uts/common/fs/zfs/trim_map.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/uberblock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/unique.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_cache.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_file.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_label.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_missing.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_raidz.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_root.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zap.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zap_leaf.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zap_micro.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfeature.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_acl.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_byteswap.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_debug.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fm.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fuid.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_log.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_onexit.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_replay.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_rlock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_sa.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_znode.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zil.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zio_checksum.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zio_compress.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zio_inject.c optional zfs compile-with 
"${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zle.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zrlock.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/callb.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/fm.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/list.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/os/nvpair_alloc_system.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/adler32.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/deflate.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/inffast.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/inflate.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/inftrees.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/opensolaris_crc32.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/trees.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/zmod.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/zmod_subr.c optional zfs compile-with "${ZFS_C}" cddl/contrib/opensolaris/uts/common/zmod/zutil.c optional zfs compile-with "${ZFS_C}" # dtrace specific cddl/contrib/opensolaris/uts/common/dtrace/dtrace.c optional dtrace compile-with "${DTRACE_C}" \ warning "kernel contains CDDL licensed DTRACE" cddl/dev/dtmalloc/dtmalloc.c optional dtmalloc | dtraceall compile-with "${CDDL_C}" cddl/dev/profile/profile.c optional dtrace_profile | dtraceall compile-with "${CDDL_C}" cddl/dev/sdt/sdt.c optional dtrace_sdt | dtraceall compile-with "${CDDL_C}" cddl/dev/fbt/fbt.c optional dtrace_fbt | dtraceall compile-with "${FBT_C}" cddl/dev/systrace/systrace.c optional dtrace_systrace | dtraceall compile-with "${CDDL_C}" cddl/dev/prototype.c optional dtrace_prototype | dtraceall compile-with "${CDDL_C}" fs/nfsclient/nfs_clkdtrace.c optional dtnfscl nfscl | dtraceall nfscl compile-with "${CDDL_C}" compat/cloudabi/cloudabi_clock.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_errno.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_fd.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_file.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_futex.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_mem.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_proc.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_random.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_sock.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_thread.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi/cloudabi_vdso.c optional compat_cloudabi32 | compat_cloudabi64 compat/cloudabi32/cloudabi32_fd.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_module.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_poll.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_sock.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_syscalls.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_sysent.c optional compat_cloudabi32 compat/cloudabi32/cloudabi32_thread.c 
optional compat_cloudabi32 compat/cloudabi64/cloudabi64_fd.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_module.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_poll.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_sock.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_syscalls.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_sysent.c optional compat_cloudabi64 compat/cloudabi64/cloudabi64_thread.c optional compat_cloudabi64 compat/freebsd32/freebsd32_capability.c optional compat_freebsd32 compat/freebsd32/freebsd32_ioctl.c optional compat_freebsd32 compat/freebsd32/freebsd32_misc.c optional compat_freebsd32 compat/freebsd32/freebsd32_syscalls.c optional compat_freebsd32 compat/freebsd32/freebsd32_sysent.c optional compat_freebsd32 contrib/dev/acpica/common/ahids.c optional acpi acpi_debug contrib/dev/acpica/common/ahuuids.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbcmds.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbconvert.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbdisply.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbexec.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbhistry.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbinput.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbmethod.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbnames.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbobject.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbstats.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbtest.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbutils.c optional acpi acpi_debug contrib/dev/acpica/components/debugger/dbxface.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmbuffer.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmcstyle.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmdeferred.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmnames.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmopcode.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmresrc.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmresrcl.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmresrcl2.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmresrcs.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmutils.c optional acpi acpi_debug contrib/dev/acpica/components/disassembler/dmwalk.c optional acpi acpi_debug contrib/dev/acpica/components/dispatcher/dsargs.c optional acpi contrib/dev/acpica/components/dispatcher/dscontrol.c optional acpi contrib/dev/acpica/components/dispatcher/dsdebug.c optional acpi contrib/dev/acpica/components/dispatcher/dsfield.c optional acpi contrib/dev/acpica/components/dispatcher/dsinit.c optional acpi contrib/dev/acpica/components/dispatcher/dsmethod.c optional acpi contrib/dev/acpica/components/dispatcher/dsmthdat.c optional acpi contrib/dev/acpica/components/dispatcher/dsobject.c optional acpi contrib/dev/acpica/components/dispatcher/dsopcode.c optional acpi contrib/dev/acpica/components/dispatcher/dsutils.c optional acpi contrib/dev/acpica/components/dispatcher/dswexec.c optional acpi contrib/dev/acpica/components/dispatcher/dswload.c optional acpi 
contrib/dev/acpica/components/dispatcher/dswload2.c optional acpi
contrib/dev/acpica/components/dispatcher/dswscope.c optional acpi
contrib/dev/acpica/components/dispatcher/dswstate.c optional acpi
contrib/dev/acpica/components/events/evevent.c optional acpi
contrib/dev/acpica/components/events/evglock.c optional acpi
contrib/dev/acpica/components/events/evgpe.c optional acpi
contrib/dev/acpica/components/events/evgpeblk.c optional acpi
contrib/dev/acpica/components/events/evgpeinit.c optional acpi
contrib/dev/acpica/components/events/evgpeutil.c optional acpi
contrib/dev/acpica/components/events/evhandler.c optional acpi
contrib/dev/acpica/components/events/evmisc.c optional acpi
contrib/dev/acpica/components/events/evregion.c optional acpi
contrib/dev/acpica/components/events/evrgnini.c optional acpi
contrib/dev/acpica/components/events/evsci.c optional acpi
contrib/dev/acpica/components/events/evxface.c optional acpi
contrib/dev/acpica/components/events/evxfevnt.c optional acpi
contrib/dev/acpica/components/events/evxfgpe.c optional acpi
contrib/dev/acpica/components/events/evxfregn.c optional acpi
contrib/dev/acpica/components/executer/exconcat.c optional acpi
contrib/dev/acpica/components/executer/exconfig.c optional acpi
contrib/dev/acpica/components/executer/exconvrt.c optional acpi
contrib/dev/acpica/components/executer/excreate.c optional acpi
contrib/dev/acpica/components/executer/exdebug.c optional acpi
contrib/dev/acpica/components/executer/exdump.c optional acpi
contrib/dev/acpica/components/executer/exfield.c optional acpi
contrib/dev/acpica/components/executer/exfldio.c optional acpi
contrib/dev/acpica/components/executer/exmisc.c optional acpi
contrib/dev/acpica/components/executer/exmutex.c optional acpi
contrib/dev/acpica/components/executer/exnames.c optional acpi
contrib/dev/acpica/components/executer/exoparg1.c optional acpi
contrib/dev/acpica/components/executer/exoparg2.c optional acpi
contrib/dev/acpica/components/executer/exoparg3.c optional acpi
contrib/dev/acpica/components/executer/exoparg6.c optional acpi
contrib/dev/acpica/components/executer/exprep.c optional acpi
contrib/dev/acpica/components/executer/exregion.c optional acpi
contrib/dev/acpica/components/executer/exresnte.c optional acpi
contrib/dev/acpica/components/executer/exresolv.c optional acpi
contrib/dev/acpica/components/executer/exresop.c optional acpi
contrib/dev/acpica/components/executer/exstore.c optional acpi
contrib/dev/acpica/components/executer/exstoren.c optional acpi
contrib/dev/acpica/components/executer/exstorob.c optional acpi
contrib/dev/acpica/components/executer/exsystem.c optional acpi
contrib/dev/acpica/components/executer/extrace.c optional acpi
contrib/dev/acpica/components/executer/exutils.c optional acpi
contrib/dev/acpica/components/hardware/hwacpi.c optional acpi
contrib/dev/acpica/components/hardware/hwesleep.c optional acpi
contrib/dev/acpica/components/hardware/hwgpe.c optional acpi
contrib/dev/acpica/components/hardware/hwpci.c optional acpi
contrib/dev/acpica/components/hardware/hwregs.c optional acpi
contrib/dev/acpica/components/hardware/hwsleep.c optional acpi
contrib/dev/acpica/components/hardware/hwtimer.c optional acpi
contrib/dev/acpica/components/hardware/hwvalid.c optional acpi
contrib/dev/acpica/components/hardware/hwxface.c optional acpi
contrib/dev/acpica/components/hardware/hwxfsleep.c optional acpi
contrib/dev/acpica/components/namespace/nsaccess.c optional acpi
contrib/dev/acpica/components/namespace/nsalloc.c optional acpi
contrib/dev/acpica/components/namespace/nsarguments.c optional acpi
contrib/dev/acpica/components/namespace/nsconvert.c optional acpi
contrib/dev/acpica/components/namespace/nsdump.c optional acpi
contrib/dev/acpica/components/namespace/nseval.c optional acpi
contrib/dev/acpica/components/namespace/nsinit.c optional acpi
contrib/dev/acpica/components/namespace/nsload.c optional acpi
contrib/dev/acpica/components/namespace/nsnames.c optional acpi
contrib/dev/acpica/components/namespace/nsobject.c optional acpi
contrib/dev/acpica/components/namespace/nsparse.c optional acpi
contrib/dev/acpica/components/namespace/nspredef.c optional acpi
contrib/dev/acpica/components/namespace/nsprepkg.c optional acpi
contrib/dev/acpica/components/namespace/nsrepair.c optional acpi
contrib/dev/acpica/components/namespace/nsrepair2.c optional acpi
contrib/dev/acpica/components/namespace/nssearch.c optional acpi
contrib/dev/acpica/components/namespace/nsutils.c optional acpi
contrib/dev/acpica/components/namespace/nswalk.c optional acpi
contrib/dev/acpica/components/namespace/nsxfeval.c optional acpi
contrib/dev/acpica/components/namespace/nsxfname.c optional acpi
contrib/dev/acpica/components/namespace/nsxfobj.c optional acpi
contrib/dev/acpica/components/parser/psargs.c optional acpi
contrib/dev/acpica/components/parser/psloop.c optional acpi
contrib/dev/acpica/components/parser/psobject.c optional acpi
contrib/dev/acpica/components/parser/psopcode.c optional acpi
contrib/dev/acpica/components/parser/psopinfo.c optional acpi
contrib/dev/acpica/components/parser/psparse.c optional acpi
contrib/dev/acpica/components/parser/psscope.c optional acpi
contrib/dev/acpica/components/parser/pstree.c optional acpi
contrib/dev/acpica/components/parser/psutils.c optional acpi
contrib/dev/acpica/components/parser/pswalk.c optional acpi
contrib/dev/acpica/components/parser/psxface.c optional acpi
contrib/dev/acpica/components/resources/rsaddr.c optional acpi
contrib/dev/acpica/components/resources/rscalc.c optional acpi
contrib/dev/acpica/components/resources/rscreate.c optional acpi
contrib/dev/acpica/components/resources/rsdump.c optional acpi acpi_debug
contrib/dev/acpica/components/resources/rsdumpinfo.c optional acpi
contrib/dev/acpica/components/resources/rsinfo.c optional acpi
contrib/dev/acpica/components/resources/rsio.c optional acpi
contrib/dev/acpica/components/resources/rsirq.c optional acpi
contrib/dev/acpica/components/resources/rslist.c optional acpi
contrib/dev/acpica/components/resources/rsmemory.c optional acpi
contrib/dev/acpica/components/resources/rsmisc.c optional acpi
contrib/dev/acpica/components/resources/rsserial.c optional acpi
contrib/dev/acpica/components/resources/rsutils.c optional acpi
contrib/dev/acpica/components/resources/rsxface.c optional acpi
contrib/dev/acpica/components/tables/tbdata.c optional acpi
contrib/dev/acpica/components/tables/tbfadt.c optional acpi
contrib/dev/acpica/components/tables/tbfind.c optional acpi
contrib/dev/acpica/components/tables/tbinstal.c optional acpi
contrib/dev/acpica/components/tables/tbprint.c optional acpi
contrib/dev/acpica/components/tables/tbutils.c optional acpi
contrib/dev/acpica/components/tables/tbxface.c optional acpi
contrib/dev/acpica/components/tables/tbxfload.c optional acpi
contrib/dev/acpica/components/tables/tbxfroot.c optional acpi
contrib/dev/acpica/components/utilities/utaddress.c optional acpi
contrib/dev/acpica/components/utilities/utalloc.c optional acpi
contrib/dev/acpica/components/utilities/utascii.c optional acpi
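# Within an "optional" clause, space-separated tokens are ANDed, while "|"
# introduces an alternative: rsdump.c above needs both acpi and acpi_debug,
# whereas an entry such as the earlier dtmalloc one is built for either of
# two options. Hypothetical illustrations (foo and bar are not real options):
#
# dev/foo/foo.c	optional foo bar	# built only when foo AND bar are configured
# dev/foo/foo.c	optional foo | bar	# built when foo OR bar is configured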
contrib/dev/acpica/components/utilities/utbuffer.c optional acpi contrib/dev/acpica/components/utilities/utcache.c optional acpi contrib/dev/acpica/components/utilities/utcopy.c optional acpi contrib/dev/acpica/components/utilities/utdebug.c optional acpi contrib/dev/acpica/components/utilities/utdecode.c optional acpi contrib/dev/acpica/components/utilities/utdelete.c optional acpi contrib/dev/acpica/components/utilities/uterror.c optional acpi contrib/dev/acpica/components/utilities/uteval.c optional acpi contrib/dev/acpica/components/utilities/utexcep.c optional acpi contrib/dev/acpica/components/utilities/utglobal.c optional acpi contrib/dev/acpica/components/utilities/uthex.c optional acpi contrib/dev/acpica/components/utilities/utids.c optional acpi contrib/dev/acpica/components/utilities/utinit.c optional acpi contrib/dev/acpica/components/utilities/utlock.c optional acpi contrib/dev/acpica/components/utilities/utmath.c optional acpi contrib/dev/acpica/components/utilities/utmisc.c optional acpi contrib/dev/acpica/components/utilities/utmutex.c optional acpi contrib/dev/acpica/components/utilities/utnonansi.c optional acpi contrib/dev/acpica/components/utilities/utobject.c optional acpi contrib/dev/acpica/components/utilities/utosi.c optional acpi contrib/dev/acpica/components/utilities/utownerid.c optional acpi contrib/dev/acpica/components/utilities/utpredef.c optional acpi contrib/dev/acpica/components/utilities/utresrc.c optional acpi contrib/dev/acpica/components/utilities/utstate.c optional acpi contrib/dev/acpica/components/utilities/utstring.c optional acpi contrib/dev/acpica/components/utilities/utstrtoul64.c optional acpi contrib/dev/acpica/components/utilities/utuuid.c optional acpi acpi_debug contrib/dev/acpica/components/utilities/utxface.c optional acpi contrib/dev/acpica/components/utilities/utxferror.c optional acpi contrib/dev/acpica/components/utilities/utxfinit.c optional acpi #contrib/dev/acpica/components/utilities/utxfmutex.c optional acpi contrib/ipfilter/netinet/fil.c optional ipfilter inet \ compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_auth.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_fil_freebsd.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_frag.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_log.c optional ipfilter inet \ compile-with "${NORMAL_C} -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_nat.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_proxy.c optional ipfilter inet \ compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_state.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_lookup.c optional ipfilter inet \ compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -Wno-error -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_pool.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_htable.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_sync.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/mlfk_ipl.c 
optional ipfilter inet \ compile-with "${NORMAL_C} -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_nat6.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_rules.c optional ipfilter inet \ compile-with "${NORMAL_C} -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_scan.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/ip_dstlist.c optional ipfilter inet \ compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter" contrib/ipfilter/netinet/radix_ipf.c optional ipfilter inet \ compile-with "${NORMAL_C} -I$S/contrib/ipfilter" contrib/libfdt/fdt.c optional fdt contrib/libfdt/fdt_ro.c optional fdt contrib/libfdt/fdt_rw.c optional fdt contrib/libfdt/fdt_strerror.c optional fdt contrib/libfdt/fdt_sw.c optional fdt contrib/libfdt/fdt_wip.c optional fdt contrib/libnv/cnvlist.c standard contrib/libnv/dnvlist.c standard contrib/libnv/nvlist.c standard contrib/libnv/nvpair.c standard contrib/ngatm/netnatm/api/cc_conn.c optional ngatm_ccatm \ compile-with "${NORMAL_C_NOWERROR} -I$S/contrib/ngatm" contrib/ngatm/netnatm/api/cc_data.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/api/cc_dump.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/api/cc_port.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/api/cc_sig.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/api/cc_user.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/api/unisap.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/misc/straddr.c optional ngatm_atmbase \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/misc/unimsg_common.c optional ngatm_atmbase \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/msg/traffic.c optional ngatm_atmbase \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/msg/uni_ie.c optional ngatm_atmbase \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/msg/uni_msg.c optional ngatm_atmbase \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/saal/saal_sscfu.c optional ngatm_sscfu \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/saal/saal_sscop.c optional ngatm_sscop \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_call.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_coord.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_party.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_print.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_reset.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_uni.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_unimsgcpy.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" contrib/ngatm/netnatm/sig/sig_verify.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" crypto/blowfish/bf_ecb.c optional ipsec crypto/blowfish/bf_skey.c optional crypto | ipsec crypto/camellia/camellia.c optional crypto | ipsec crypto/camellia/camellia-api.c optional crypto | 
ipsec crypto/des/des_ecb.c optional crypto | ipsec | netsmb crypto/des/des_setkey.c optional crypto | ipsec | netsmb crypto/rc4/rc4.c optional netgraph_mppc_encryption | kgssapi crypto/rijndael/rijndael-alg-fst.c optional crypto | geom_bde | \ ipsec | random !random_loadable | wlan_ccmp crypto/rijndael/rijndael-api-fst.c optional geom_bde | random !random_loadable crypto/rijndael/rijndael-api.c optional crypto | ipsec | wlan_ccmp crypto/sha1.c optional carp | crypto | ipsec | \ netgraph_mppc_encryption | sctp crypto/sha2/sha256c.c optional crypto | geom_bde | ipsec | random !random_loadable | \ sctp | zfs crypto/sha2/sha512c.c optional crypto | geom_bde | ipsec | zfs crypto/skein/skein.c optional crypto | zfs crypto/skein/skein_block.c optional crypto | zfs crypto/siphash/siphash.c optional inet | inet6 crypto/siphash/siphash_test.c optional inet | inet6 ddb/db_access.c optional ddb ddb/db_break.c optional ddb ddb/db_capture.c optional ddb ddb/db_command.c optional ddb ddb/db_examine.c optional ddb ddb/db_expr.c optional ddb ddb/db_input.c optional ddb ddb/db_lex.c optional ddb ddb/db_main.c optional ddb ddb/db_output.c optional ddb ddb/db_print.c optional ddb ddb/db_ps.c optional ddb ddb/db_run.c optional ddb ddb/db_script.c optional ddb ddb/db_sym.c optional ddb ddb/db_thread.c optional ddb ddb/db_textdump.c optional ddb ddb/db_variables.c optional ddb ddb/db_watch.c optional ddb ddb/db_write_cmd.c optional ddb dev/aac/aac.c optional aac dev/aac/aac_cam.c optional aacp aac dev/aac/aac_debug.c optional aac dev/aac/aac_disk.c optional aac dev/aac/aac_linux.c optional aac compat_linux dev/aac/aac_pci.c optional aac pci dev/aacraid/aacraid.c optional aacraid dev/aacraid/aacraid_cam.c optional aacraid scbus dev/aacraid/aacraid_debug.c optional aacraid dev/aacraid/aacraid_linux.c optional aacraid compat_linux dev/aacraid/aacraid_pci.c optional aacraid pci dev/acpi_support/acpi_wmi.c optional acpi_wmi acpi dev/acpi_support/acpi_asus.c optional acpi_asus acpi dev/acpi_support/acpi_asus_wmi.c optional acpi_asus_wmi acpi dev/acpi_support/acpi_fujitsu.c optional acpi_fujitsu acpi dev/acpi_support/acpi_hp.c optional acpi_hp acpi dev/acpi_support/acpi_ibm.c optional acpi_ibm acpi dev/acpi_support/acpi_panasonic.c optional acpi_panasonic acpi dev/acpi_support/acpi_sony.c optional acpi_sony acpi dev/acpi_support/acpi_toshiba.c optional acpi_toshiba acpi dev/acpi_support/atk0110.c optional aibs acpi dev/acpica/Osd/OsdDebug.c optional acpi dev/acpica/Osd/OsdHardware.c optional acpi dev/acpica/Osd/OsdInterrupt.c optional acpi dev/acpica/Osd/OsdMemory.c optional acpi dev/acpica/Osd/OsdSchedule.c optional acpi dev/acpica/Osd/OsdStream.c optional acpi dev/acpica/Osd/OsdSynch.c optional acpi dev/acpica/Osd/OsdTable.c optional acpi dev/acpica/acpi.c optional acpi dev/acpica/acpi_acad.c optional acpi dev/acpica/acpi_battery.c optional acpi dev/acpica/acpi_button.c optional acpi dev/acpica/acpi_cmbat.c optional acpi dev/acpica/acpi_cpu.c optional acpi dev/acpica/acpi_ec.c optional acpi dev/acpica/acpi_isab.c optional acpi isa dev/acpica/acpi_lid.c optional acpi dev/acpica/acpi_package.c optional acpi dev/acpica/acpi_pci.c optional acpi pci dev/acpica/acpi_pci_link.c optional acpi pci dev/acpica/acpi_pcib.c optional acpi pci dev/acpica/acpi_pcib_acpi.c optional acpi pci dev/acpica/acpi_pcib_pci.c optional acpi pci dev/acpica/acpi_perf.c optional acpi dev/acpica/acpi_powerres.c optional acpi dev/acpica/acpi_quirk.c optional acpi dev/acpica/acpi_resource.c optional acpi dev/acpica/acpi_smbat.c optional acpi 
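# A "!" prefix negates an option: in the rijndael and sha2 entries above,
# "random !random_loadable" selects the file only when random(4) is compiled
# statically, i.e. when random is configured and random_loadable is not set.
# A hypothetical sketch of the same idiom (foo/foo_loadable are illustrative):
#
# crypto/foo/foo.c	optional foo !foo_loadable	# only for static foo builds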
dev/acpica/acpi_thermal.c optional acpi dev/acpica/acpi_throttle.c optional acpi dev/acpica/acpi_timer.c optional acpi dev/acpica/acpi_video.c optional acpi_video acpi dev/acpica/acpi_dock.c optional acpi_dock acpi dev/adlink/adlink.c optional adlink dev/advansys/adv_eisa.c optional adv eisa dev/advansys/adv_pci.c optional adv pci dev/advansys/advansys.c optional adv dev/advansys/advlib.c optional adv dev/advansys/advmcode.c optional adv dev/advansys/adw_pci.c optional adw pci dev/advansys/adwcam.c optional adw dev/advansys/adwlib.c optional adw dev/advansys/adwmcode.c optional adw dev/ae/if_ae.c optional ae pci dev/age/if_age.c optional age pci dev/agp/agp.c optional agp pci dev/agp/agp_if.m optional agp pci dev/aha/aha.c optional aha dev/aha/aha_isa.c optional aha isa dev/aha/aha_mca.c optional aha mca dev/ahb/ahb.c optional ahb eisa dev/ahci/ahci.c optional ahci dev/ahci/ahciem.c optional ahci dev/ahci/ahci_pci.c optional ahci pci dev/aic/aic.c optional aic dev/aic/aic_pccard.c optional aic pccard dev/aic7xxx/ahc_eisa.c optional ahc eisa dev/aic7xxx/ahc_isa.c optional ahc isa dev/aic7xxx/ahc_pci.c optional ahc pci \ compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" dev/aic7xxx/ahd_pci.c optional ahd pci \ compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" dev/aic7xxx/aic7770.c optional ahc dev/aic7xxx/aic79xx.c optional ahd pci dev/aic7xxx/aic79xx_osm.c optional ahd pci dev/aic7xxx/aic79xx_pci.c optional ahd pci dev/aic7xxx/aic79xx_reg_print.c optional ahd pci ahd_reg_pretty_print dev/aic7xxx/aic7xxx.c optional ahc dev/aic7xxx/aic7xxx_93cx6.c optional ahc dev/aic7xxx/aic7xxx_osm.c optional ahc dev/aic7xxx/aic7xxx_pci.c optional ahc pci dev/aic7xxx/aic7xxx_reg_print.c optional ahc ahc_reg_pretty_print dev/alc/if_alc.c optional alc pci dev/ale/if_ale.c optional ale pci dev/alpm/alpm.c optional alpm pci dev/altera/avgen/altera_avgen.c optional altera_avgen dev/altera/avgen/altera_avgen_fdt.c optional altera_avgen fdt dev/altera/avgen/altera_avgen_nexus.c optional altera_avgen dev/altera/sdcard/altera_sdcard.c optional altera_sdcard dev/altera/sdcard/altera_sdcard_disk.c optional altera_sdcard dev/altera/sdcard/altera_sdcard_io.c optional altera_sdcard dev/altera/sdcard/altera_sdcard_fdt.c optional altera_sdcard fdt dev/altera/sdcard/altera_sdcard_nexus.c optional altera_sdcard dev/altera/pio/pio.c optional altera_pio dev/altera/pio/pio_if.m optional altera_pio dev/amdpm/amdpm.c optional amdpm pci | nfpm pci dev/amdsmb/amdsmb.c optional amdsmb pci dev/amr/amr.c optional amr dev/amr/amr_cam.c optional amrp amr dev/amr/amr_disk.c optional amr dev/amr/amr_linux.c optional amr compat_linux dev/amr/amr_pci.c optional amr pci dev/an/if_an.c optional an dev/an/if_an_isa.c optional an isa dev/an/if_an_pccard.c optional an pccard dev/an/if_an_pci.c optional an pci # dev/ata/ata_if.m optional ata | atacore dev/ata/ata-all.c optional ata | atacore dev/ata/ata-dma.c optional ata | atacore dev/ata/ata-lowlevel.c optional ata | atacore dev/ata/ata-sata.c optional ata | atacore dev/ata/ata-card.c optional ata pccard | atapccard dev/ata/ata-cbus.c optional ata pc98 | atapc98 dev/ata/ata-isa.c optional ata isa | ataisa dev/ata/ata-pci.c optional ata pci | atapci dev/ata/chipsets/ata-acard.c optional ata pci | ataacard dev/ata/chipsets/ata-acerlabs.c optional ata pci | ataacerlabs dev/ata/chipsets/ata-amd.c optional ata pci | ataamd dev/ata/chipsets/ata-ati.c optional ata pci | ataati dev/ata/chipsets/ata-cenatek.c optional ata pci | atacenatek dev/ata/chipsets/ata-cypress.c optional ata pci | 
atacypress dev/ata/chipsets/ata-cyrix.c optional ata pci | atacyrix dev/ata/chipsets/ata-highpoint.c optional ata pci | atahighpoint dev/ata/chipsets/ata-intel.c optional ata pci | ataintel dev/ata/chipsets/ata-ite.c optional ata pci | ataite dev/ata/chipsets/ata-jmicron.c optional ata pci | atajmicron dev/ata/chipsets/ata-marvell.c optional ata pci | atamarvell dev/ata/chipsets/ata-micron.c optional ata pci | atamicron dev/ata/chipsets/ata-national.c optional ata pci | atanational dev/ata/chipsets/ata-netcell.c optional ata pci | atanetcell dev/ata/chipsets/ata-nvidia.c optional ata pci | atanvidia dev/ata/chipsets/ata-promise.c optional ata pci | atapromise dev/ata/chipsets/ata-serverworks.c optional ata pci | ataserverworks dev/ata/chipsets/ata-siliconimage.c optional ata pci | atasiliconimage | ataati dev/ata/chipsets/ata-sis.c optional ata pci | atasis dev/ata/chipsets/ata-via.c optional ata pci | atavia # dev/ath/if_ath_pci.c optional ath_pci pci \ compile-with "${NORMAL_C} -I$S/dev/ath" # dev/ath/if_ath_ahb.c optional ath_ahb \ compile-with "${NORMAL_C} -I$S/dev/ath" # dev/ath/if_ath.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_alq.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_beacon.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_btcoex.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_btcoex_mci.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_debug.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_descdma.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_keycache.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_ioctl.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_led.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_lna_div.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_tx.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_tx_edma.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_tx_ht.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_tdma.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_sysctl.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_rx.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_rx_edma.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/if_ath_spectral.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ah_osdep.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" # dev/ath/ath_hal/ah.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_hal/ah_eeprom_v1.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_hal/ah_eeprom_v3.c optional ath_hal | ath_ar5211 | ath_ar5212 \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_hal/ah_eeprom_v14.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_hal/ah_eeprom_v4k.c \ optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_hal/ah_eeprom_9287.c \ optional ath_hal | ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_hal/ah_regdomain.c optional ath \ compile-with "${NORMAL_C} ${NO_WSHIFT_COUNT_NEGATIVE} ${NO_WSHIFT_COUNT_OVERFLOW} -I$S/dev/ath" # ar5210 dev/ath/ath_hal/ar5210/ar5210_attach.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} 
-I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_beacon.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_interrupts.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_keycache.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_misc.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_phy.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_power.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_recv.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_reset.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5210/ar5210_xmit.c optional ath_hal | ath_ar5210 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar5211 dev/ath/ath_hal/ar5211/ar5211_attach.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_beacon.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_interrupts.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_keycache.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_misc.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_phy.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_power.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_recv.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_reset.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5211/ar5211_xmit.c optional ath_hal | ath_ar5211 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar5212 dev/ath/ath_hal/ar5212/ar5212_ani.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_attach.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_beacon.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_eeprom.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_gpio.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" 
dev/ath/ath_hal/ar5212/ar5212_interrupts.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_keycache.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_misc.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_phy.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_power.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_recv.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_reset.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_rfgain.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5212_xmit.c \ optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ ath_ar9285 ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar5416 (depends on ar5212) dev/ath/ath_hal/ar5416/ar5416_ani.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_attach.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_beacon.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_btcoex.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_cal.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_cal_iq.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_cal_adcgain.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_cal_adcdc.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_eeprom.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_gpio.c \ 
optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_interrupts.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_keycache.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_misc.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_phy.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_power.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_radar.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_recv.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_reset.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_spectral.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar5416_xmit.c \ optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar9130 (depends upon ar5416) - also requires AH_SUPPORT_AR9130 # # Since this is an embedded MAC SoC, there's no need to compile it into the # default HAL. 
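# Per the note above, AR9130 support is opt-in; a kernel configuration would
# select it roughly as follows (a hypothetical sketch -- consult the
# architecture's NOTES file for the exact spelling):
#
# device	ath			# Atheros 802.11 driver
# device	ath_ar9130		# AR9130 chip support (embedded SoC only)
# options 	AH_SUPPORT_AR9130	# HAL support code required by AR9130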
dev/ath/ath_hal/ar9001/ar9130_attach.c optional ath_ar9130 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9001/ar9130_phy.c optional ath_ar9130 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9001/ar9130_eeprom.c optional ath_ar9130 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar9160 (depends on ar5416) dev/ath/ath_hal/ar9001/ar9160_attach.c optional ath_hal | ath_ar9160 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar9280 (depends on ar5416) dev/ath/ath_hal/ar9002/ar9280_attach.c optional ath_hal | ath_ar9280 | \ ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9280_olc.c optional ath_hal | ath_ar9280 | \ ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar9285 (depends on ar5416 and ar9280) dev/ath/ath_hal/ar9002/ar9285_attach.c optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9285_btcoex.c optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9285_reset.c optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9285_cal.c optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9285_phy.c optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9285_diversity.c optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar9287 (depends on ar5416) dev/ath/ath_hal/ar9002/ar9287_attach.c optional ath_hal | ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9287_reset.c optional ath_hal | ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9287_cal.c optional ath_hal | ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9287_olc.c optional ath_hal | ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ar9300 contrib/dev/ath/ath_hal/ar9300/ar9300_ani.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_attach.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_beacon.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_eeprom.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal ${NO_WCONSTANT_CONVERSION}" contrib/dev/ath/ath_hal/ar9300/ar9300_freebsd.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_gpio.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_interrupts.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_keycache.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} 
-I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_mci.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_misc.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_paprd.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_phy.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_power.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_radar.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_radio.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_recv.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_recv_ds.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_reset.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal ${NO_WSOMETIMES_UNINITIALIZED} -Wno-unused-function" contrib/dev/ath/ath_hal/ar9300/ar9300_stub.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_stub_funcs.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_spectral.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_timer.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_xmit.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" contrib/dev/ath/ath_hal/ar9300/ar9300_xmit_ds.c optional ath_hal | ath_ar9300 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" # rf backends dev/ath/ath_hal/ar5212/ar2316.c optional ath_rf2316 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar2317.c optional ath_rf2317 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar2413.c optional ath_hal | ath_rf2413 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar2425.c optional ath_hal | ath_rf2425 | ath_rf2417 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5111.c optional ath_hal | ath_rf5111 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5112.c optional ath_hal | ath_rf5112 \ compile-with "${NORMAL_C} -I$S/dev/ath 
-I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5212/ar5413.c optional ath_hal | ath_rf5413 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar5416/ar2133.c optional ath_hal | ath_ar5416 | \ ath_ar9130 | ath_ar9160 | ath_ar9280 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9280.c optional ath_hal | ath_ar9280 | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9285.c optional ath_hal | ath_ar9285 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" dev/ath/ath_hal/ar9002/ar9287.c optional ath_hal | ath_ar9287 \ compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" # ath rate control algorithms dev/ath/ath_rate/amrr/amrr.c optional ath_rate_amrr \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_rate/onoe/onoe.c optional ath_rate_onoe \ compile-with "${NORMAL_C} -I$S/dev/ath" dev/ath/ath_rate/sample/sample.c optional ath_rate_sample \ compile-with "${NORMAL_C} -I$S/dev/ath" # ath DFS modules dev/ath/ath_dfs/null/dfs_null.c optional ath \ compile-with "${NORMAL_C} -I$S/dev/ath" # dev/bce/if_bce.c optional bce dev/bfe/if_bfe.c optional bfe dev/bge/if_bge.c optional bge dev/bhnd/bhnd.c optional bhnd dev/bhnd/bhnd_erom.c optional bhnd dev/bhnd/bhnd_erom_if.m optional bhnd dev/bhnd/bhnd_nexus.c optional bhnd siba_nexus | \ bhnd bcma_nexus dev/bhnd/bhnd_subr.c optional bhnd dev/bhnd/bhnd_bus_if.m optional bhnd dev/bhnd/bhndb/bhnd_bhndb.c optional bhndb bhnd dev/bhnd/bhndb/bhndb.c optional bhndb bhnd dev/bhnd/bhndb/bhndb_bus_if.m optional bhndb bhnd dev/bhnd/bhndb/bhndb_hwdata.c optional bhndb bhnd dev/bhnd/bhndb/bhndb_if.m optional bhndb bhnd dev/bhnd/bhndb/bhndb_pci.c optional bhndb bhnd pci dev/bhnd/bhndb/bhndb_pci_hwdata.c optional bhndb bhnd pci dev/bhnd/bhndb/bhndb_pci_sprom.c optional bhndb bhnd pci dev/bhnd/bhndb/bhndb_subr.c optional bhndb bhnd dev/bhnd/bcma/bcma.c optional bcma bhnd dev/bhnd/bcma/bcma_bhndb.c optional bcma bhnd bhndb dev/bhnd/bcma/bcma_erom.c optional bcma bhnd dev/bhnd/bcma/bcma_nexus.c optional bcma_nexus bcma bhnd dev/bhnd/bcma/bcma_subr.c optional bcma bhnd dev/bhnd/cores/chipc/bhnd_chipc_if.m optional bhnd dev/bhnd/cores/chipc/bhnd_sprom_chipc.c optional bhnd dev/bhnd/cores/chipc/bhnd_pmu_chipc.c optional bhnd dev/bhnd/cores/chipc/chipc.c optional bhnd dev/bhnd/cores/chipc/chipc_cfi.c optional bhnd cfi dev/bhnd/cores/chipc/chipc_slicer.c optional bhnd cfi | bhnd spibus dev/bhnd/cores/chipc/chipc_spi.c optional bhnd spibus dev/bhnd/cores/chipc/chipc_subr.c optional bhnd dev/bhnd/cores/chipc/pwrctl/bhnd_pwrctl.c optional bhnd dev/bhnd/cores/chipc/pwrctl/bhnd_pwrctl_subr.c optional bhnd dev/bhnd/cores/pci/bhnd_pci.c optional bhnd pci dev/bhnd/cores/pci/bhnd_pci_hostb.c optional bhndb bhnd pci dev/bhnd/cores/pci/bhnd_pcib.c optional bhnd_pcib bhnd pci dev/bhnd/cores/pcie2/bhnd_pcie2.c optional bhnd pci dev/bhnd/cores/pcie2/bhnd_pcie2_hostb.c optional bhndb bhnd pci dev/bhnd/cores/pcie2/bhnd_pcie2b.c optional bhnd_pcie2b bhnd pci dev/bhnd/cores/pmu/bhnd_pmu.c optional bhnd dev/bhnd/cores/pmu/bhnd_pmu_core.c optional bhnd dev/bhnd/cores/pmu/bhnd_pmu_if.m optional bhnd dev/bhnd/cores/pmu/bhnd_pmu_subr.c optional bhnd dev/bhnd/nvram/bhnd_nvram.c optional bhnd dev/bhnd/nvram/bhnd_nvram_common.c optional bhnd dev/bhnd/nvram/bhnd_nvram_cfe.c optional bhnd siba_nexus cfe | \ bhnd bcma_nexus cfe dev/bhnd/nvram/bhnd_nvram_if.m optional bhnd dev/bhnd/nvram/bhnd_nvram_parser.c optional bhnd dev/bhnd/nvram/bhnd_sprom.c optional bhnd 
dev/bhnd/nvram/bhnd_sprom_parser.c optional bhnd dev/bhnd/siba/siba.c optional siba bhnd dev/bhnd/siba/siba_bhndb.c optional siba bhnd bhndb dev/bhnd/siba/siba_erom.c optional siba bhnd dev/bhnd/siba/siba_nexus.c optional siba_nexus siba bhnd dev/bhnd/siba/siba_subr.c optional siba bhnd # dev/bktr/bktr_audio.c optional bktr pci dev/bktr/bktr_card.c optional bktr pci dev/bktr/bktr_core.c optional bktr pci dev/bktr/bktr_i2c.c optional bktr pci smbus dev/bktr/bktr_os.c optional bktr pci dev/bktr/bktr_tuner.c optional bktr pci dev/bktr/msp34xx.c optional bktr pci dev/buslogic/bt.c optional bt dev/buslogic/bt_eisa.c optional bt eisa dev/buslogic/bt_isa.c optional bt isa dev/buslogic/bt_mca.c optional bt mca dev/buslogic/bt_pci.c optional bt pci dev/bwi/bwimac.c optional bwi dev/bwi/bwiphy.c optional bwi dev/bwi/bwirf.c optional bwi dev/bwi/if_bwi.c optional bwi dev/bwi/if_bwi_pci.c optional bwi pci # XXX Work around clang warnings, until maintainer approves fix. dev/bwn/if_bwn.c optional bwn siba_bwn \ compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}" dev/bwn/if_bwn_pci.c optional bwn pci bhnd dev/bwn/if_bwn_phy_common.c optional bwn siba_bwn dev/bwn/if_bwn_phy_g.c optional bwn siba_bwn \ compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED} ${NO_WCONSTANT_CONVERSION}" dev/bwn/if_bwn_phy_lp.c optional bwn siba_bwn \ compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}" dev/bwn/if_bwn_phy_n.c optional bwn siba_bwn dev/bwn/if_bwn_util.c optional bwn siba_bwn dev/bwn/bwn_mac.c optional bwn bhnd dev/cardbus/cardbus.c optional cardbus dev/cardbus/cardbus_cis.c optional cardbus dev/cardbus/cardbus_device.c optional cardbus dev/cas/if_cas.c optional cas dev/cfi/cfi_bus_fdt.c optional cfi fdt dev/cfi/cfi_bus_nexus.c optional cfi dev/cfi/cfi_core.c optional cfi dev/cfi/cfi_dev.c optional cfi dev/cfi/cfi_disk.c optional cfid dev/ciss/ciss.c optional ciss dev/cm/smc90cx6.c optional cm dev/cmx/cmx.c optional cmx dev/cmx/cmx_pccard.c optional cmx pccard dev/cpufreq/ichss.c optional cpufreq pci dev/cs/if_cs.c optional cs dev/cs/if_cs_isa.c optional cs isa dev/cs/if_cs_pccard.c optional cs pccard dev/cxgb/cxgb_main.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/cxgb_sge.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_mc5.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_vsc7323.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_vsc8211.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_ael1002.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_aq100x.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_mv88e1xxx.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_xgmac.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_t3_hw.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/common/cxgb_tn1010.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/sys/uipc_mvec.c optional cxgb pci \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgb/cxgb_t3fw.c optional cxgb cxgb_t3fw \ compile-with "${NORMAL_C} -I$S/dev/cxgb" dev/cxgbe/t4_if.m optional cxgbe pci dev/cxgbe/t4_iov.c optional cxgbe pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/t4_mp_ring.c optional cxgbe pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/t4_main.c optional cxgbe pci \ compile-with 
"${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/t4_netmap.c optional cxgbe pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/t4_sge.c optional cxgbe pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/t4_l2t.c optional cxgbe pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/t4_tracer.c optional cxgbe pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/t4_vf.c optional cxgbev pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/common/t4_hw.c optional cxgbe pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" dev/cxgbe/common/t4vf_hw.c optional cxgbev pci \ compile-with "${NORMAL_C} -I$S/dev/cxgbe" t4fw_cfg.c optional cxgbe \ compile-with "${AWK} -f $S/tools/fw_stub.awk t4fw_cfg.fw:t4fw_cfg t4fw_cfg_uwire.fw:t4fw_cfg_uwire t4fw.fw:t4fw -mt4fw_cfg -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "t4fw_cfg.c" t4fw_cfg.fwo optional cxgbe \ dependency "t4fw_cfg.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t4fw_cfg.fwo" t4fw_cfg.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t4fw_cfg.txt" \ compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ no-obj no-implicit-rule \ clean "t4fw_cfg.fw" t4fw_cfg_uwire.fwo optional cxgbe \ dependency "t4fw_cfg_uwire.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t4fw_cfg_uwire.fwo" t4fw_cfg_uwire.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t4fw_cfg_uwire.txt" \ compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ no-obj no-implicit-rule \ clean "t4fw_cfg_uwire.fw" t4fw.fwo optional cxgbe \ dependency "t4fw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t4fw.fwo" t4fw.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t4fw-1.15.37.0.bin.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "t4fw.fw" t5fw_cfg.c optional cxgbe \ compile-with "${AWK} -f $S/tools/fw_stub.awk t5fw_cfg.fw:t5fw_cfg t5fw.fw:t5fw -mt5fw_cfg -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "t5fw_cfg.c" t5fw_cfg.fwo optional cxgbe \ dependency "t5fw_cfg.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t5fw_cfg.fwo" t5fw_cfg.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t5fw_cfg.txt" \ compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ no-obj no-implicit-rule \ clean "t5fw_cfg.fw" t5fw.fwo optional cxgbe \ dependency "t5fw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "t5fw.fwo" t5fw.fw optional cxgbe \ dependency "$S/dev/cxgbe/firmware/t5fw-1.15.37.0.bin.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "t5fw.fw" dev/cy/cy.c optional cy dev/cy/cy_isa.c optional cy isa dev/cy/cy_pci.c optional cy pci dev/cyapa/cyapa.c optional cyapa smbus dev/dc/if_dc.c optional dc pci dev/dc/dcphy.c optional dc pci dev/dc/pnphy.c optional dc pci dev/dcons/dcons.c optional dcons dev/dcons/dcons_crom.c optional dcons_crom dev/dcons/dcons_os.c optional dcons dev/de/if_de.c optional de pci dev/dpt/dpt_eisa.c optional dpt eisa dev/dpt/dpt_pci.c optional dpt pci dev/dpt/dpt_scsi.c optional dpt dev/drm/ati_pcigart.c optional drm dev/drm/drm_agpsupport.c optional drm dev/drm/drm_auth.c optional drm dev/drm/drm_bufs.c optional drm dev/drm/drm_context.c optional drm dev/drm/drm_dma.c optional drm dev/drm/drm_drawable.c optional drm dev/drm/drm_drv.c optional drm dev/drm/drm_fops.c optional drm dev/drm/drm_hashtab.c optional drm dev/drm/drm_ioctl.c optional drm dev/drm/drm_irq.c optional drm dev/drm/drm_lock.c optional drm dev/drm/drm_memory.c optional drm dev/drm/drm_mm.c optional drm dev/drm/drm_pci.c optional drm dev/drm/drm_scatter.c optional drm 
dev/drm/drm_sman.c optional drm dev/drm/drm_sysctl.c optional drm dev/drm/drm_vm.c optional drm dev/drm/i915_dma.c optional i915drm dev/drm/i915_drv.c optional i915drm dev/drm/i915_irq.c optional i915drm dev/drm/i915_mem.c optional i915drm dev/drm/i915_suspend.c optional i915drm dev/drm/mach64_dma.c optional mach64drm dev/drm/mach64_drv.c optional mach64drm dev/drm/mach64_irq.c optional mach64drm dev/drm/mach64_state.c optional mach64drm dev/drm/mga_dma.c optional mgadrm dev/drm/mga_drv.c optional mgadrm dev/drm/mga_irq.c optional mgadrm dev/drm/mga_state.c optional mgadrm dev/drm/mga_warp.c optional mgadrm dev/drm/r128_cce.c optional r128drm \ compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" dev/drm/r128_drv.c optional r128drm dev/drm/r128_irq.c optional r128drm dev/drm/r128_state.c optional r128drm dev/drm/r300_cmdbuf.c optional radeondrm dev/drm/r600_blit.c optional radeondrm dev/drm/r600_cp.c optional radeondrm \ compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" dev/drm/radeon_cp.c optional radeondrm \ compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" dev/drm/radeon_cs.c optional radeondrm dev/drm/radeon_drv.c optional radeondrm dev/drm/radeon_irq.c optional radeondrm dev/drm/radeon_mem.c optional radeondrm dev/drm/radeon_state.c optional radeondrm dev/drm/savage_bci.c optional savagedrm dev/drm/savage_drv.c optional savagedrm dev/drm/savage_state.c optional savagedrm dev/drm/sis_drv.c optional sisdrm dev/drm/sis_ds.c optional sisdrm dev/drm/sis_mm.c optional sisdrm dev/drm/tdfx_drv.c optional tdfxdrm dev/drm/via_dma.c optional viadrm dev/drm/via_dmablit.c optional viadrm dev/drm/via_drv.c optional viadrm dev/drm/via_irq.c optional viadrm dev/drm/via_map.c optional viadrm dev/drm/via_mm.c optional viadrm dev/drm/via_verifier.c optional viadrm dev/drm/via_video.c optional viadrm dev/ed/if_ed.c optional ed dev/ed/if_ed_novell.c optional ed dev/ed/if_ed_rtl80x9.c optional ed dev/ed/if_ed_pccard.c optional ed pccard dev/ed/if_ed_pci.c optional ed pci dev/efidev/efidev.c optional efirt dev/eisa/eisa_if.m standard dev/eisa/eisaconf.c optional eisa dev/e1000/if_em.c optional em \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/if_lem.c optional em \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/if_igb.c optional igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_80003es2lan.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_82540.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_82541.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_82542.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_82543.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_82571.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_82575.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_ich8lan.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_i210.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_api.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_mac.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_manage.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_nvm.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_phy.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_vf.c optional 
em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_mbx.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/e1000/e1000_osdep.c optional em | igb \ compile-with "${NORMAL_C} -I$S/dev/e1000" dev/et/if_et.c optional et dev/en/if_en_pci.c optional en pci dev/en/midway.c optional en dev/ep/if_ep.c optional ep dev/ep/if_ep_eisa.c optional ep eisa dev/ep/if_ep_isa.c optional ep isa dev/ep/if_ep_mca.c optional ep mca dev/ep/if_ep_pccard.c optional ep pccard dev/esp/esp_pci.c optional esp pci dev/esp/ncr53c9x.c optional esp dev/etherswitch/arswitch/arswitch.c optional arswitch dev/etherswitch/arswitch/arswitch_reg.c optional arswitch dev/etherswitch/arswitch/arswitch_phy.c optional arswitch dev/etherswitch/arswitch/arswitch_8216.c optional arswitch dev/etherswitch/arswitch/arswitch_8226.c optional arswitch dev/etherswitch/arswitch/arswitch_8316.c optional arswitch dev/etherswitch/arswitch/arswitch_8327.c optional arswitch dev/etherswitch/arswitch/arswitch_7240.c optional arswitch dev/etherswitch/arswitch/arswitch_9340.c optional arswitch dev/etherswitch/arswitch/arswitch_vlans.c optional arswitch dev/etherswitch/etherswitch.c optional etherswitch dev/etherswitch/etherswitch_if.m optional etherswitch dev/etherswitch/ip17x/ip17x.c optional ip17x dev/etherswitch/ip17x/ip175c.c optional ip17x dev/etherswitch/ip17x/ip175d.c optional ip17x dev/etherswitch/ip17x/ip17x_phy.c optional ip17x dev/etherswitch/ip17x/ip17x_vlans.c optional ip17x dev/etherswitch/miiproxy.c optional miiproxy dev/etherswitch/rtl8366/rtl8366rb.c optional rtl8366rb dev/etherswitch/ukswitch/ukswitch.c optional ukswitch dev/evdev/cdev.c optional evdev dev/evdev/evdev.c optional evdev dev/evdev/evdev_mt.c optional evdev dev/evdev/evdev_utils.c optional evdev dev/evdev/uinput.c optional evdev uinput dev/ex/if_ex.c optional ex dev/ex/if_ex_isa.c optional ex isa dev/ex/if_ex_pccard.c optional ex pccard dev/exca/exca.c optional cbb dev/extres/clk/clk.c optional ext_resources clk dev/extres/clk/clkdev_if.m optional ext_resources clk dev/extres/clk/clknode_if.m optional ext_resources clk dev/extres/clk/clk_bus.c optional ext_resources clk fdt dev/extres/clk/clk_div.c optional ext_resources clk dev/extres/clk/clk_fixed.c optional ext_resources clk dev/extres/clk/clk_gate.c optional ext_resources clk dev/extres/clk/clk_mux.c optional ext_resources clk dev/extres/phy/phy.c optional ext_resources phy dev/extres/phy/phy_if.m optional ext_resources phy dev/extres/hwreset/hwreset.c optional ext_resources hwreset dev/extres/hwreset/hwreset_if.m optional ext_resources hwreset dev/extres/regulator/regdev_if.m optional ext_resources regulator dev/extres/regulator/regnode_if.m optional ext_resources regulator dev/extres/regulator/regulator.c optional ext_resources regulator dev/extres/regulator/regulator_bus.c optional ext_resources regulator fdt dev/extres/regulator/regulator_fixed.c optional ext_resources regulator dev/fatm/if_fatm.c optional fatm pci dev/fb/fbd.c optional fbd | vt dev/fb/fb_if.m standard dev/fb/splash.c optional sc splash dev/fdt/fdt_clock.c optional fdt fdt_clock dev/fdt/fdt_clock_if.m optional fdt fdt_clock dev/fdt/fdt_common.c optional fdt dev/fdt/fdt_pinctrl.c optional fdt fdt_pinctrl dev/fdt/fdt_pinctrl_if.m optional fdt fdt_pinctrl dev/fdt/fdt_slicer.c optional fdt cfi | fdt nand | fdt mx25l dev/fdt/fdt_static_dtb.S optional fdt fdt_dtb_static \ dependency "fdt_dtb_file" dev/fdt/simplebus.c optional fdt dev/fe/if_fe.c optional fe dev/fe/if_fe_pccard.c optional fe pccard dev/filemon/filemon.c 
optional filemon dev/firewire/firewire.c optional firewire dev/firewire/fwcrom.c optional firewire dev/firewire/fwdev.c optional firewire dev/firewire/fwdma.c optional firewire dev/firewire/fwmem.c optional firewire dev/firewire/fwohci.c optional firewire dev/firewire/fwohci_pci.c optional firewire pci dev/firewire/if_fwe.c optional fwe dev/firewire/if_fwip.c optional fwip dev/firewire/sbp.c optional sbp dev/firewire/sbp_targ.c optional sbp_targ dev/flash/at45d.c optional at45d dev/flash/mx25l.c optional mx25l dev/fxp/if_fxp.c optional fxp dev/fxp/inphy.c optional fxp dev/gem/if_gem.c optional gem dev/gem/if_gem_pci.c optional gem pci dev/gem/if_gem_sbus.c optional gem sbus dev/gpio/gpiobacklight.c optional gpiobacklight fdt dev/gpio/gpiokeys.c optional gpiokeys fdt dev/gpio/gpiokeys_codes.c optional gpiokeys fdt dev/gpio/gpiobus.c optional gpio \ dependency "gpiobus_if.h" dev/gpio/gpioc.c optional gpio \ dependency "gpio_if.h" dev/gpio/gpioiic.c optional gpioiic dev/gpio/gpioled.c optional gpioled dev/gpio/gpioregulator.c optional gpioregulator fdt ext_resources dev/gpio/gpiospi.c optional gpiospi dev/gpio/gpio_if.m optional gpio dev/gpio/gpiobus_if.m optional gpio dev/gpio/gpiopps.c optional gpiopps dev/gpio/ofw_gpiobus.c optional fdt gpio dev/hatm/if_hatm.c optional hatm pci dev/hatm/if_hatm_intr.c optional hatm pci dev/hatm/if_hatm_ioctl.c optional hatm pci dev/hatm/if_hatm_rx.c optional hatm pci dev/hatm/if_hatm_tx.c optional hatm pci dev/hifn/hifn7751.c optional hifn dev/hme/if_hme.c optional hme dev/hme/if_hme_pci.c optional hme pci dev/hme/if_hme_sbus.c optional hme sbus dev/hptiop/hptiop.c optional hptiop scbus dev/hwpmc/hwpmc_logging.c optional hwpmc dev/hwpmc/hwpmc_mod.c optional hwpmc dev/hwpmc/hwpmc_soft.c optional hwpmc dev/ichiic/ig4_iic.c optional ig4 smbus dev/ichiic/ig4_pci.c optional ig4 pci smbus dev/ichsmb/ichsmb.c optional ichsmb dev/ichsmb/ichsmb_pci.c optional ichsmb pci dev/ida/ida.c optional ida dev/ida/ida_disk.c optional ida dev/ida/ida_eisa.c optional ida eisa dev/ida/ida_pci.c optional ida pci dev/iicbus/ad7418.c optional ad7418 dev/iicbus/ds1307.c optional ds1307 dev/iicbus/ds133x.c optional ds133x dev/iicbus/ds1374.c optional ds1374 dev/iicbus/ds1672.c optional ds1672 dev/iicbus/ds3231.c optional ds3231 dev/iicbus/icee.c optional icee dev/iicbus/if_ic.c optional ic dev/iicbus/iic.c optional iic dev/iicbus/iicbb.c optional iicbb dev/iicbus/iicbb_if.m optional iicbb dev/iicbus/iicbus.c optional iicbus dev/iicbus/iicbus_if.m optional iicbus dev/iicbus/iiconf.c optional iicbus dev/iicbus/iicsmb.c optional iicsmb \ dependency "iicbus_if.h" dev/iicbus/iicoc.c optional iicoc dev/iicbus/lm75.c optional lm75 dev/iicbus/ofw_iicbus.c optional fdt iicbus dev/iicbus/pcf8563.c optional pcf8563 dev/iicbus/s35390a.c optional s35390a dev/iir/iir.c optional iir dev/iir/iir_ctrl.c optional iir dev/iir/iir_pci.c optional iir pci dev/intpm/intpm.c optional intpm pci # XXX Work around clang warning, until maintainer approves fix. 
dev/ips/ips.c optional ips \ compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}" dev/ips/ips_commands.c optional ips dev/ips/ips_disk.c optional ips dev/ips/ips_ioctl.c optional ips dev/ips/ips_pci.c optional ips pci dev/ipw/if_ipw.c optional ipw ipwbssfw.c optional ipwbssfw | ipwfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_bss.fw:ipw_bss:130 -lintel_ipw -mipw_bss -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "ipwbssfw.c" ipw_bss.fwo optional ipwbssfw | ipwfw \ dependency "ipw_bss.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "ipw_bss.fwo" ipw_bss.fw optional ipwbssfw | ipwfw \ dependency "$S/contrib/dev/ipw/ipw2100-1.3.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "ipw_bss.fw" ipwibssfw.c optional ipwibssfw | ipwfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_ibss.fw:ipw_ibss:130 -lintel_ipw -mipw_ibss -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "ipwibssfw.c" ipw_ibss.fwo optional ipwibssfw | ipwfw \ dependency "ipw_ibss.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "ipw_ibss.fwo" ipw_ibss.fw optional ipwibssfw | ipwfw \ dependency "$S/contrib/dev/ipw/ipw2100-1.3-i.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "ipw_ibss.fw" ipwmonitorfw.c optional ipwmonitorfw | ipwfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_monitor.fw:ipw_monitor:130 -lintel_ipw -mipw_monitor -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "ipwmonitorfw.c" ipw_monitor.fwo optional ipwmonitorfw | ipwfw \ dependency "ipw_monitor.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "ipw_monitor.fwo" ipw_monitor.fw optional ipwmonitorfw | ipwfw \ dependency "$S/contrib/dev/ipw/ipw2100-1.3-p.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "ipw_monitor.fw" dev/iscsi/icl.c optional iscsi | ctl dev/iscsi/icl_conn_if.m optional iscsi | ctl dev/iscsi/icl_soft.c optional iscsi | ctl dev/iscsi/icl_soft_proxy.c optional iscsi | ctl dev/iscsi/iscsi.c optional iscsi scbus dev/iscsi_initiator/iscsi.c optional iscsi_initiator scbus dev/iscsi_initiator/iscsi_subr.c optional iscsi_initiator scbus dev/iscsi_initiator/isc_cam.c optional iscsi_initiator scbus dev/iscsi_initiator/isc_soc.c optional iscsi_initiator scbus dev/iscsi_initiator/isc_sm.c optional iscsi_initiator scbus dev/iscsi_initiator/isc_subr.c optional iscsi_initiator scbus dev/ismt/ismt.c optional ismt dev/isl/isl.c optional isl smbus dev/isp/isp.c optional isp dev/isp/isp_freebsd.c optional isp dev/isp/isp_library.c optional isp dev/isp/isp_pci.c optional isp pci dev/isp/isp_sbus.c optional isp sbus dev/isp/isp_target.c optional isp dev/ispfw/ispfw.c optional ispfw dev/iwi/if_iwi.c optional iwi iwibssfw.c optional iwibssfw | iwifw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_bss.fw:iwi_bss:300 -lintel_iwi -miwi_bss -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwibssfw.c" iwi_bss.fwo optional iwibssfw | iwifw \ dependency "iwi_bss.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwi_bss.fwo" iwi_bss.fw optional iwibssfw | iwifw \ dependency "$S/contrib/dev/iwi/ipw2200-bss.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwi_bss.fw" iwiibssfw.c optional iwiibssfw | iwifw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_ibss.fw:iwi_ibss:300 -lintel_iwi -miwi_ibss -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwiibssfw.c" iwi_ibss.fwo optional iwiibssfw | iwifw \ dependency "iwi_ibss.fw" \ compile-with 
"${NORMAL_FWO}" \ no-implicit-rule \ clean "iwi_ibss.fwo" iwi_ibss.fw optional iwiibssfw | iwifw \ dependency "$S/contrib/dev/iwi/ipw2200-ibss.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwi_ibss.fw" iwimonitorfw.c optional iwimonitorfw | iwifw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_monitor.fw:iwi_monitor:300 -lintel_iwi -miwi_monitor -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwimonitorfw.c" iwi_monitor.fwo optional iwimonitorfw | iwifw \ dependency "iwi_monitor.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwi_monitor.fwo" iwi_monitor.fw optional iwimonitorfw | iwifw \ dependency "$S/contrib/dev/iwi/ipw2200-sniffer.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwi_monitor.fw" dev/iwm/if_iwm.c optional iwm dev/iwm/if_iwm_binding.c optional iwm dev/iwm/if_iwm_led.c optional iwm dev/iwm/if_iwm_mac_ctxt.c optional iwm dev/iwm/if_iwm_pcie_trans.c optional iwm dev/iwm/if_iwm_phy_ctxt.c optional iwm dev/iwm/if_iwm_phy_db.c optional iwm dev/iwm/if_iwm_power.c optional iwm dev/iwm/if_iwm_scan.c optional iwm dev/iwm/if_iwm_time_event.c optional iwm dev/iwm/if_iwm_util.c optional iwm iwm3160fw.c optional iwm3160fw | iwmfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwm3160.fw:iwm3160fw -miwm3160fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwm3160fw.c" iwm3160fw.fwo optional iwm3160fw | iwmfw \ dependency "iwm3160.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwm3160fw.fwo" iwm3160.fw optional iwm3160fw | iwmfw \ dependency "$S/contrib/dev/iwm/iwm-3160-16.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwm3160.fw" iwm7260fw.c optional iwm7260fw | iwmfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwm7260.fw:iwm7260fw -miwm7260fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwm7260fw.c" iwm7260fw.fwo optional iwm7260fw | iwmfw \ dependency "iwm7260.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwm7260fw.fwo" iwm7260.fw optional iwm7260fw | iwmfw \ dependency "$S/contrib/dev/iwm/iwm-7260-16.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwm7260.fw" iwm7265fw.c optional iwm7265fw | iwmfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwm7265.fw:iwm7265fw -miwm7265fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwm7265fw.c" iwm7265fw.fwo optional iwm7265fw | iwmfw \ dependency "iwm7265.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwm7265fw.fwo" iwm7265.fw optional iwm7265fw | iwmfw \ dependency "$S/contrib/dev/iwm/iwm-7265-16.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwm7265.fw" iwm8000Cfw.c optional iwm8000Cfw | iwmfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwm8000C.fw:iwm8000Cfw -miwm8000Cfw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwm8000Cfw.c" iwm8000Cfw.fwo optional iwm8000Cfw | iwmfw \ dependency "iwm8000C.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwm8000Cfw.fwo" iwm8000C.fw optional iwm8000Cfw | iwmfw \ dependency "$S/contrib/dev/iwm/iwm-8000C-16.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwm8000C.fw" dev/iwn/if_iwn.c optional iwn iwn1000fw.c optional iwn1000fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn1000.fw:iwn1000fw -miwn1000fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn1000fw.c" iwn1000fw.fwo optional iwn1000fw | iwnfw \ dependency "iwn1000.fw" \ compile-with "${NORMAL_FWO}" \ 
no-implicit-rule \ clean "iwn1000fw.fwo" iwn1000.fw optional iwn1000fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-1000-39.31.5.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn1000.fw" iwn100fw.c optional iwn100fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn100.fw:iwn100fw -miwn100fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn100fw.c" iwn100fw.fwo optional iwn100fw | iwnfw \ dependency "iwn100.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn100fw.fwo" iwn100.fw optional iwn100fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-100-39.31.5.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn100.fw" iwn105fw.c optional iwn105fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn105.fw:iwn105fw -miwn105fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn105fw.c" iwn105fw.fwo optional iwn105fw | iwnfw \ dependency "iwn105.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn105fw.fwo" iwn105.fw optional iwn105fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-105-6-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn105.fw" iwn135fw.c optional iwn135fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn135.fw:iwn135fw -miwn135fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn135fw.c" iwn135fw.fwo optional iwn135fw | iwnfw \ dependency "iwn135.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn135fw.fwo" iwn135.fw optional iwn135fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-135-6-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn135.fw" iwn2000fw.c optional iwn2000fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn2000.fw:iwn2000fw -miwn2000fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn2000fw.c" iwn2000fw.fwo optional iwn2000fw | iwnfw \ dependency "iwn2000.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn2000fw.fwo" iwn2000.fw optional iwn2000fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-2000-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn2000.fw" iwn2030fw.c optional iwn2030fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn2030.fw:iwn2030fw -miwn2030fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn2030fw.c" iwn2030fw.fwo optional iwn2030fw | iwnfw \ dependency "iwn2030.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn2030fw.fwo" iwn2030.fw optional iwn2030fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwnwifi-2030-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn2030.fw" iwn4965fw.c optional iwn4965fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn4965.fw:iwn4965fw -miwn4965fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn4965fw.c" iwn4965fw.fwo optional iwn4965fw | iwnfw \ dependency "iwn4965.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn4965fw.fwo" iwn4965.fw optional iwn4965fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-4965-228.61.2.24.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn4965.fw" iwn5000fw.c optional iwn5000fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn5000.fw:iwn5000fw -miwn5000fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn5000fw.c" iwn5000fw.fwo optional iwn5000fw | iwnfw \ dependency "iwn5000.fw" \ compile-with 
"${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn5000fw.fwo" iwn5000.fw optional iwn5000fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-5000-8.83.5.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn5000.fw" iwn5150fw.c optional iwn5150fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn5150.fw:iwn5150fw -miwn5150fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn5150fw.c" iwn5150fw.fwo optional iwn5150fw | iwnfw \ dependency "iwn5150.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn5150fw.fwo" iwn5150.fw optional iwn5150fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-5150-8.24.2.2.fw.uu"\ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn5150.fw" iwn6000fw.c optional iwn6000fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000.fw:iwn6000fw -miwn6000fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6000fw.c" iwn6000fw.fwo optional iwn6000fw | iwnfw \ dependency "iwn6000.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6000fw.fwo" iwn6000.fw optional iwn6000fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-6000-9.221.4.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6000.fw" iwn6000g2afw.c optional iwn6000g2afw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000g2a.fw:iwn6000g2afw -miwn6000g2afw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6000g2afw.c" iwn6000g2afw.fwo optional iwn6000g2afw | iwnfw \ dependency "iwn6000g2a.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6000g2afw.fwo" iwn6000g2a.fw optional iwn6000g2afw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-6000g2a-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6000g2a.fw" iwn6000g2bfw.c optional iwn6000g2bfw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000g2b.fw:iwn6000g2bfw -miwn6000g2bfw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6000g2bfw.c" iwn6000g2bfw.fwo optional iwn6000g2bfw | iwnfw \ dependency "iwn6000g2b.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6000g2bfw.fwo" iwn6000g2b.fw optional iwn6000g2bfw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-6000g2b-18.168.6.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6000g2b.fw" iwn6050fw.c optional iwn6050fw | iwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6050.fw:iwn6050fw -miwn6050fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "iwn6050fw.c" iwn6050fw.fwo optional iwn6050fw | iwnfw \ dependency "iwn6050.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "iwn6050fw.fwo" iwn6050.fw optional iwn6050fw | iwnfw \ dependency "$S/contrib/dev/iwn/iwlwifi-6050-41.28.5.1.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "iwn6050.fw" dev/ixgb/if_ixgb.c optional ixgb dev/ixgb/ixgb_ee.c optional ixgb dev/ixgb/ixgb_hw.c optional ixgb dev/ixgbe/if_ix.c optional ix inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe -DSMP" dev/ixgbe/if_ixv.c optional ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe -DSMP" dev/ixgbe/ix_txrx.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_osdep.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_phy.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_api.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" 
dev/ixgbe/ixgbe_common.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_mbx.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_vf.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_82598.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_82599.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_x540.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_x550.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_dcb.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_dcb_82598.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/ixgbe/ixgbe_dcb_82599.c optional ix inet | ixv inet \ compile-with "${NORMAL_C} -I$S/dev/ixgbe" dev/jme/if_jme.c optional jme pci dev/joy/joy.c optional joy dev/joy/joy_isa.c optional joy isa dev/kbd/kbd.c optional atkbd | pckbd | sc | ukbd | vt dev/kbdmux/kbdmux.c optional kbdmux dev/ksyms/ksyms.c optional ksyms dev/le/am7990.c optional le dev/le/am79900.c optional le dev/le/if_le_pci.c optional le pci dev/le/lance.c optional le dev/led/led.c standard dev/lge/if_lge.c optional lge dev/lmc/if_lmc.c optional lmc dev/malo/if_malo.c optional malo dev/malo/if_malohal.c optional malo dev/malo/if_malo_pci.c optional malo pci dev/mc146818/mc146818.c optional mc146818 dev/mca/mca_bus.c optional mca dev/md/md.c optional md dev/mdio/mdio_if.m optional miiproxy | mdio dev/mdio/mdio.c optional miiproxy | mdio dev/mem/memdev.c optional mem dev/mem/memutil.c optional mem dev/mfi/mfi.c optional mfi dev/mfi/mfi_debug.c optional mfi dev/mfi/mfi_pci.c optional mfi pci dev/mfi/mfi_disk.c optional mfi dev/mfi/mfi_syspd.c optional mfi dev/mfi/mfi_tbolt.c optional mfi dev/mfi/mfi_linux.c optional mfi compat_linux dev/mfi/mfi_cam.c optional mfip scbus dev/mii/acphy.c optional miibus | acphy dev/mii/amphy.c optional miibus | amphy dev/mii/atphy.c optional miibus | atphy dev/mii/axphy.c optional miibus | axphy dev/mii/bmtphy.c optional miibus | bmtphy dev/mii/brgphy.c optional miibus | brgphy dev/mii/ciphy.c optional miibus | ciphy dev/mii/e1000phy.c optional miibus | e1000phy dev/mii/gentbi.c optional miibus | gentbi dev/mii/icsphy.c optional miibus | icsphy dev/mii/ip1000phy.c optional miibus | ip1000phy dev/mii/jmphy.c optional miibus | jmphy dev/mii/lxtphy.c optional miibus | lxtphy dev/mii/micphy.c optional miibus fdt | micphy fdt dev/mii/mii.c optional miibus | mii dev/mii/mii_bitbang.c optional miibus | mii_bitbang dev/mii/mii_physubr.c optional miibus | mii dev/mii/miibus_if.m optional miibus | mii dev/mii/mlphy.c optional miibus | mlphy dev/mii/nsgphy.c optional miibus | nsgphy dev/mii/nsphy.c optional miibus | nsphy dev/mii/nsphyter.c optional miibus | nsphyter dev/mii/pnaphy.c optional miibus | pnaphy dev/mii/qsphy.c optional miibus | qsphy dev/mii/rdcphy.c optional miibus | rdcphy dev/mii/rgephy.c optional miibus | rgephy dev/mii/rlphy.c optional miibus | rlphy dev/mii/rlswitch.c optional rlswitch dev/mii/smcphy.c optional miibus | smcphy dev/mii/smscphy.c optional miibus | smscphy dev/mii/tdkphy.c optional miibus | tdkphy dev/mii/tlphy.c optional miibus | tlphy dev/mii/truephy.c optional miibus | truephy dev/mii/ukphy.c optional miibus | mii dev/mii/ukphy_subr.c optional miibus | mii dev/mii/xmphy.c optional miibus | xmphy dev/mk48txx/mk48txx.c optional 
mk48txx
dev/mlx/mlx.c	optional mlx
dev/mlx/mlx_disk.c	optional mlx
dev/mlx/mlx_pci.c	optional mlx pci
dev/mly/mly.c	optional mly
dev/mmc/mmc.c	optional mmc
dev/mmc/mmcbr_if.m	standard
dev/mmc/mmcbus_if.m	standard
dev/mmc/mmcsd.c	optional mmcsd
dev/mn/if_mn.c	optional mn pci
dev/mpr/mpr.c	optional mpr
dev/mpr/mpr_config.c	optional mpr
# XXX Work around clang warning, until maintainer approves fix.
dev/mpr/mpr_mapping.c	optional mpr \
	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
dev/mpr/mpr_pci.c	optional mpr pci
dev/mpr/mpr_sas.c	optional mpr \
	compile-with "${NORMAL_C} ${NO_WUNNEEDED_INTERNAL_DECL}"
dev/mpr/mpr_sas_lsi.c	optional mpr
dev/mpr/mpr_table.c	optional mpr
dev/mpr/mpr_user.c	optional mpr
dev/mps/mps.c	optional mps
dev/mps/mps_config.c	optional mps
# XXX Work around clang warning, until maintainer approves fix.
dev/mps/mps_mapping.c	optional mps \
	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
dev/mps/mps_pci.c	optional mps pci
dev/mps/mps_sas.c	optional mps \
	compile-with "${NORMAL_C} ${NO_WUNNEEDED_INTERNAL_DECL}"
dev/mps/mps_sas_lsi.c	optional mps
dev/mps/mps_table.c	optional mps
dev/mps/mps_user.c	optional mps
dev/mpt/mpt.c	optional mpt
dev/mpt/mpt_cam.c	optional mpt
dev/mpt/mpt_debug.c	optional mpt
dev/mpt/mpt_pci.c	optional mpt pci
dev/mpt/mpt_raid.c	optional mpt
dev/mpt/mpt_user.c	optional mpt
dev/mrsas/mrsas.c	optional mrsas
dev/mrsas/mrsas_cam.c	optional mrsas
dev/mrsas/mrsas_ioctl.c	optional mrsas
dev/mrsas/mrsas_fp.c	optional mrsas
dev/msk/if_msk.c	optional msk
dev/mvs/mvs.c	optional mvs
dev/mvs/mvs_if.m	optional mvs
dev/mvs/mvs_pci.c	optional mvs pci
dev/mwl/if_mwl.c	optional mwl
dev/mwl/if_mwl_pci.c	optional mwl pci
dev/mwl/mwlhal.c	optional mwl
mwlfw.c	optional mwlfw \
	compile-with "${AWK} -f $S/tools/fw_stub.awk mw88W8363.fw:mw88W8363fw mwlboot.fw:mwlboot -mmwl -c${.TARGET}" \
	no-implicit-rule before-depend local \
	clean "mwlfw.c"
mw88W8363.fwo	optional mwlfw \
	dependency "mw88W8363.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "mw88W8363.fwo"
mw88W8363.fw	optional mwlfw \
	dependency "$S/contrib/dev/mwl/mw88W8363.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "mw88W8363.fw"
mwlboot.fwo	optional mwlfw \
	dependency "mwlboot.fw" \
	compile-with "${NORMAL_FWO}" \
	no-implicit-rule \
	clean "mwlboot.fwo"
mwlboot.fw	optional mwlfw \
	dependency "$S/contrib/dev/mwl/mwlboot.fw.uu" \
	compile-with "${NORMAL_FW}" \
	no-obj no-implicit-rule \
	clean "mwlboot.fw"
dev/mxge/if_mxge.c	optional mxge pci
dev/mxge/mxge_eth_z8e.c	optional mxge pci
dev/mxge/mxge_ethp_z8e.c	optional mxge pci
dev/mxge/mxge_rss_eth_z8e.c	optional mxge pci
dev/mxge/mxge_rss_ethp_z8e.c	optional mxge pci
dev/my/if_my.c	optional my
dev/nand/nand.c	optional nand
dev/nand/nand_bbt.c	optional nand
dev/nand/nand_cdev.c	optional nand
dev/nand/nand_generic.c	optional nand
dev/nand/nand_geom.c	optional nand
dev/nand/nand_id.c	optional nand
dev/nand/nandbus.c	optional nand
dev/nand/nandbus_if.m	optional nand
dev/nand/nand_if.m	optional nand
dev/nand/nandsim.c	optional nandsim nand
dev/nand/nandsim_chip.c	optional nandsim nand
dev/nand/nandsim_ctrl.c	optional nandsim nand
dev/nand/nandsim_log.c	optional nandsim nand
dev/nand/nandsim_swap.c	optional nandsim nand
dev/nand/nfc_if.m	optional nand
dev/ncr/ncr.c	optional ncr pci
dev/ncv/ncr53c500.c	optional ncv
dev/ncv/ncr53c500_pccard.c	optional ncv pccard
+dev/netmap/if_ptnet.c	optional netmap
dev/netmap/netmap.c	optional netmap
dev/netmap/netmap_freebsd.c	optional netmap
dev/netmap/netmap_generic.c	optional netmap
dev/netmap/netmap_mbq.c	optional netmap
dev/netmap/netmap_mem2.c	optional netmap
dev/netmap/netmap_monitor.c	optional netmap
dev/netmap/netmap_offloadings.c	optional netmap
dev/netmap/netmap_pipe.c	optional netmap
+dev/netmap/netmap_pt.c	optional netmap
dev/netmap/netmap_vale.c	optional netmap
# compile-with "${NORMAL_C} -Wconversion -Wextra"
dev/nfsmb/nfsmb.c	optional nfsmb pci
dev/nge/if_nge.c	optional nge
dev/nxge/if_nxge.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nxge/xgehal/xgehal-device.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nxge/xgehal/xgehal-mm.c	optional nxge
dev/nxge/xgehal/xge-queue.c	optional nxge
dev/nxge/xgehal/xgehal-driver.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nxge/xgehal/xgehal-ring.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nxge/xgehal/xgehal-channel.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nxge/xgehal/xgehal-fifo.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nxge/xgehal/xgehal-stats.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nxge/xgehal/xgehal-config.c	optional nxge
dev/nxge/xgehal/xgehal-mgmt.c	optional nxge \
	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
dev/nmdm/nmdm.c	optional nmdm
dev/nsp/nsp.c	optional nsp
dev/nsp/nsp_pccard.c	optional nsp pccard
dev/null/null.c	standard
dev/oce/oce_hw.c	optional oce pci
dev/oce/oce_if.c	optional oce pci
dev/oce/oce_mbox.c	optional oce pci
dev/oce/oce_queue.c	optional oce pci
dev/oce/oce_sysctl.c	optional oce pci
dev/oce/oce_util.c	optional oce pci
dev/ofw/ofw_bus_if.m	optional fdt
dev/ofw/ofw_bus_subr.c	optional fdt
dev/ofw/ofw_cpu.c	optional fdt
dev/ofw/ofw_fdt.c	optional fdt
dev/ofw/ofw_if.m	optional fdt
dev/ofw/ofw_subr.c	optional fdt
dev/ofw/ofwbus.c	optional fdt
dev/ofw/openfirm.c	optional fdt
dev/ofw/openfirmio.c	optional fdt
dev/ow/ow.c	optional ow \
	dependency "owll_if.h" \
	dependency "own_if.h"
dev/ow/owll_if.m	optional ow
dev/ow/own_if.m	optional ow
dev/ow/ow_temp.c	optional ow_temp
dev/ow/owc_gpiobus.c	optional owc gpio
dev/patm/if_patm.c	optional patm pci
dev/patm/if_patm_attach.c	optional patm pci
dev/patm/if_patm_intr.c	optional patm pci
dev/patm/if_patm_ioctl.c	optional patm pci
dev/patm/if_patm_rtables.c	optional patm pci
dev/patm/if_patm_rx.c	optional patm pci
dev/patm/if_patm_tx.c	optional patm pci
dev/pbio/pbio.c	optional pbio isa
dev/pccard/card_if.m	standard
dev/pccard/pccard.c	optional pccard
dev/pccard/pccard_cis.c	optional pccard
dev/pccard/pccard_cis_quirks.c	optional pccard
dev/pccard/pccard_device.c	optional pccard
dev/pccard/power_if.m	standard
dev/pccbb/pccbb.c	optional cbb
dev/pccbb/pccbb_isa.c	optional cbb isa
dev/pccbb/pccbb_pci.c	optional cbb pci
dev/pcf/pcf.c	optional pcf
dev/pci/eisa_pci.c	optional pci eisa
dev/pci/fixup_pci.c	optional pci
dev/pci/hostb_pci.c	optional pci
dev/pci/ignore_pci.c	optional pci
dev/pci/isa_pci.c	optional pci isa
dev/pci/pci.c	optional pci
dev/pci/pci_if.m	standard
dev/pci/pci_iov.c	optional pci pci_iov
dev/pci/pci_iov_if.m	standard
dev/pci/pci_iov_schema.c	optional pci pci_iov
dev/pci/pci_pci.c	optional pci
dev/pci/pci_subr.c	optional pci
dev/pci/pci_user.c	optional pci
dev/pci/pcib_if.m	standard
dev/pci/pcib_support.c	standard
dev/pci/vga_pci.c	optional pci
dev/pcn/if_pcn.c	optional pcn pci
dev/pdq/if_fea.c	optional fea eisa
dev/pdq/if_fpa.c	optional fpa pci
dev/pdq/pdq.c	optional nowerror fea eisa | fpa pci
dev/pdq/pdq_ifsubr.c	optional nowerror fea eisa | fpa pci
dev/pms/freebsd/driver/ini/src/agtiapi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sadisc.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/mpi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/saframe.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sahw.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sainit.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/saint.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sampicmd.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sampirsp.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/saphy.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/saport.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sasata.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sasmp.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sassp.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/satimer.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/sautil.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/saioctlcmd.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sallsdk/spc/mpidebug.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/discovery/dm/dminit.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/discovery/dm/dmsmp.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/discovery/dm/dmdisc.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/discovery/dm/dmport.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/discovery/dm/dmtimer.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/discovery/dm/dmmisc.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sat/src/sminit.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sat/src/smmisc.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sat/src/smsat.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sat/src/smsatcb.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable 
-Woverflow -Wparentheses -w" dev/pms/RefTisa/sat/src/smsathw.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/sat/src/smtimer.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdinit.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdmisc.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdesgl.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdport.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdint.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdioctl.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdhw.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/ossacmnapi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tddmcmnapi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdsmcmnapi.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/common/tdtimers.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itdio.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itdcb.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itdinit.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sas/ini/itddisc.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sata/host/sat.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sata/host/ossasat.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/pms/RefTisa/tisa/sassata/sata/host/sathw.c optional pmspcv \ compile-with "${NORMAL_C} -Wunused-variable -Woverflow -Wparentheses -w" dev/ppbus/if_plip.c optional plip dev/ppbus/immio.c optional vpo dev/ppbus/lpbb.c optional lpbb dev/ppbus/lpt.c optional lpt dev/ppbus/pcfclock.c optional pcfclock dev/ppbus/ppb_1284.c optional ppbus dev/ppbus/ppb_base.c optional ppbus dev/ppbus/ppb_msq.c optional ppbus dev/ppbus/ppbconf.c optional ppbus dev/ppbus/ppbus_if.m optional ppbus dev/ppbus/ppi.c optional ppi dev/ppbus/pps.c optional pps dev/ppbus/vpo.c optional vpo dev/ppbus/vpoio.c optional vpo dev/ppc/ppc.c optional ppc dev/ppc/ppc_acpi.c optional ppc acpi dev/ppc/ppc_isa.c optional ppc isa dev/ppc/ppc_pci.c optional ppc pci dev/ppc/ppc_puc.c optional ppc puc dev/proto/proto_bus_isa.c optional proto acpi | proto isa dev/proto/proto_bus_pci.c optional proto pci dev/proto/proto_busdma.c optional proto dev/proto/proto_core.c optional 
proto dev/pst/pst-iop.c optional pst dev/pst/pst-pci.c optional pst pci dev/pst/pst-raid.c optional pst dev/pty/pty.c optional pty dev/puc/puc.c optional puc dev/puc/puc_cfg.c optional puc dev/puc/puc_pccard.c optional puc pccard dev/puc/puc_pci.c optional puc pci dev/puc/pucdata.c optional puc pci dev/quicc/quicc_core.c optional quicc dev/ral/rt2560.c optional ral dev/ral/rt2661.c optional ral dev/ral/rt2860.c optional ral dev/ral/if_ral_pci.c optional ral pci rt2561fw.c optional rt2561fw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2561.fw:rt2561fw -mrt2561 -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2561fw.c" rt2561fw.fwo optional rt2561fw | ralfw \ dependency "rt2561.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rt2561fw.fwo" rt2561.fw optional rt2561fw | ralfw \ dependency "$S/contrib/dev/ral/rt2561.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2561.fw" rt2561sfw.c optional rt2561sfw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2561s.fw:rt2561sfw -mrt2561s -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2561sfw.c" rt2561sfw.fwo optional rt2561sfw | ralfw \ dependency "rt2561s.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rt2561sfw.fwo" rt2561s.fw optional rt2561sfw | ralfw \ dependency "$S/contrib/dev/ral/rt2561s.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2561s.fw" rt2661fw.c optional rt2661fw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2661.fw:rt2661fw -mrt2661 -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2661fw.c" rt2661fw.fwo optional rt2661fw | ralfw \ dependency "rt2661.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rt2661fw.fwo" rt2661.fw optional rt2661fw | ralfw \ dependency "$S/contrib/dev/ral/rt2661.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2661.fw" rt2860fw.c optional rt2860fw | ralfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rt2860.fw:rt2860fw -mrt2860 -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rt2860fw.c" rt2860fw.fwo optional rt2860fw | ralfw \ dependency "rt2860.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rt2860fw.fwo" rt2860.fw optional rt2860fw | ralfw \ dependency "$S/contrib/dev/ral/rt2860.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rt2860.fw" dev/random/random_infra.c optional random dev/random/random_harvestq.c optional random dev/random/randomdev.c optional random random_yarrow | \ random !random_yarrow !random_loadable dev/random/yarrow.c optional random random_yarrow dev/random/fortuna.c optional random !random_yarrow !random_loadable dev/random/hash.c optional random random_yarrow | \ random !random_yarrow !random_loadable dev/rc/rc.c optional rc dev/rccgpio/rccgpio.c optional rccgpio gpio dev/re/if_re.c optional re dev/rl/if_rl.c optional rl pci dev/rndtest/rndtest.c optional rndtest dev/rp/rp.c optional rp dev/rp/rp_isa.c optional rp isa dev/rp/rp_pci.c optional rp pci dev/rtwn/if_rtwn.c optional rtwn rtwn-rtl8192cfwU.c optional rtwn-rtl8192cfwU | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8192cfwU.fw:rtwn-rtl8192cfwU:111 -mrtwn-rtl8192cfwU -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8192cfwU.c" rtwn-rtl8192cfwU.fwo optional rtwn-rtl8192cfwU | rtwnfw \ dependency "rtwn-rtl8192cfwU.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8192cfwU.fwo" rtwn-rtl8192cfwU.fw optional rtwn-rtl8192cfwU | 
rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8192cfwU.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8192cfwU.fw" rtwn-rtl8192cfwU_B.c optional rtwn-rtl8192cfwU_B | rtwnfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rtwn-rtl8192cfwU_B.fw:rtwn-rtl8192cfwU_B:111 -mrtwn-rtl8192cfwU_B -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rtwn-rtl8192cfwU_B.c" rtwn-rtl8192cfwU_B.fwo optional rtwn-rtl8192cfwU_B | rtwnfw \ dependency "rtwn-rtl8192cfwU_B.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rtwn-rtl8192cfwU_B.fwo" rtwn-rtl8192cfwU_B.fw optional rtwn-rtl8192cfwU_B | rtwnfw \ dependency "$S/contrib/dev/rtwn/rtwn-rtl8192cfwU_B.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rtwn-rtl8192cfwU_B.fw" dev/safe/safe.c optional safe dev/scc/scc_if.m optional scc dev/scc/scc_bfe_ebus.c optional scc ebus dev/scc/scc_bfe_quicc.c optional scc quicc dev/scc/scc_bfe_sbus.c optional scc fhc | scc sbus dev/scc/scc_core.c optional scc dev/scc/scc_dev_quicc.c optional scc quicc dev/scc/scc_dev_sab82532.c optional scc dev/scc/scc_dev_z8530.c optional scc dev/sdhci/sdhci.c optional sdhci dev/sdhci/sdhci_if.m optional sdhci dev/sdhci/sdhci_pci.c optional sdhci pci dev/sf/if_sf.c optional sf pci dev/sge/if_sge.c optional sge pci dev/siba/siba_bwn.c optional siba_bwn pci dev/siba/siba_core.c optional siba_bwn pci dev/siis/siis.c optional siis pci dev/sis/if_sis.c optional sis pci dev/sk/if_sk.c optional sk pci dev/smbus/smb.c optional smb dev/smbus/smbconf.c optional smbus dev/smbus/smbus.c optional smbus dev/smbus/smbus_if.m optional smbus dev/smc/if_smc.c optional smc dev/smc/if_smc_fdt.c optional smc fdt dev/sn/if_sn.c optional sn dev/sn/if_sn_isa.c optional sn isa dev/sn/if_sn_pccard.c optional sn pccard dev/snp/snp.c optional snp dev/sound/clone.c optional sound dev/sound/unit.c optional sound dev/sound/isa/ad1816.c optional snd_ad1816 isa dev/sound/isa/ess.c optional snd_ess isa dev/sound/isa/gusc.c optional snd_gusc isa dev/sound/isa/mss.c optional snd_mss isa dev/sound/isa/sb16.c optional snd_sb16 isa dev/sound/isa/sb8.c optional snd_sb8 isa dev/sound/isa/sbc.c optional snd_sbc isa dev/sound/isa/sndbuf_dma.c optional sound isa dev/sound/pci/als4000.c optional snd_als4000 pci dev/sound/pci/atiixp.c optional snd_atiixp pci dev/sound/pci/cmi.c optional snd_cmi pci dev/sound/pci/cs4281.c optional snd_cs4281 pci dev/sound/pci/csa.c optional snd_csa pci dev/sound/pci/csapcm.c optional snd_csa pci dev/sound/pci/ds1.c optional snd_ds1 pci dev/sound/pci/emu10k1.c optional snd_emu10k1 pci dev/sound/pci/emu10kx.c optional snd_emu10kx pci dev/sound/pci/emu10kx-pcm.c optional snd_emu10kx pci dev/sound/pci/emu10kx-midi.c optional snd_emu10kx pci dev/sound/pci/envy24.c optional snd_envy24 pci dev/sound/pci/envy24ht.c optional snd_envy24ht pci dev/sound/pci/es137x.c optional snd_es137x pci dev/sound/pci/fm801.c optional snd_fm801 pci dev/sound/pci/ich.c optional snd_ich pci dev/sound/pci/maestro.c optional snd_maestro pci dev/sound/pci/maestro3.c optional snd_maestro3 pci dev/sound/pci/neomagic.c optional snd_neomagic pci dev/sound/pci/solo.c optional snd_solo pci dev/sound/pci/spicds.c optional snd_spicds pci dev/sound/pci/t4dwave.c optional snd_t4dwave pci dev/sound/pci/via8233.c optional snd_via8233 pci dev/sound/pci/via82c686.c optional snd_via82c686 pci dev/sound/pci/vibes.c optional snd_vibes pci dev/sound/pci/hda/hdaa.c optional snd_hda pci dev/sound/pci/hda/hdaa_patches.c optional snd_hda pci dev/sound/pci/hda/hdac.c optional 
snd_hda pci dev/sound/pci/hda/hdac_if.m optional snd_hda pci dev/sound/pci/hda/hdacc.c optional snd_hda pci dev/sound/pci/hdspe.c optional snd_hdspe pci dev/sound/pci/hdspe-pcm.c optional snd_hdspe pci dev/sound/pcm/ac97.c optional sound dev/sound/pcm/ac97_if.m optional sound dev/sound/pcm/ac97_patch.c optional sound dev/sound/pcm/buffer.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/channel.c optional sound dev/sound/pcm/channel_if.m optional sound dev/sound/pcm/dsp.c optional sound dev/sound/pcm/feeder.c optional sound dev/sound/pcm/feeder_chain.c optional sound dev/sound/pcm/feeder_eq.c optional sound \ dependency "feeder_eq_gen.h" \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_if.m optional sound dev/sound/pcm/feeder_format.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_matrix.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_mixer.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_rate.c optional sound \ dependency "feeder_rate_gen.h" \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/feeder_volume.c optional sound \ dependency "snd_fxdiv_gen.h" dev/sound/pcm/mixer.c optional sound dev/sound/pcm/mixer_if.m optional sound dev/sound/pcm/sndstat.c optional sound dev/sound/pcm/sound.c optional sound dev/sound/pcm/vchan.c optional sound dev/sound/usb/uaudio.c optional snd_uaudio usb dev/sound/usb/uaudio_pcm.c optional snd_uaudio usb dev/sound/midi/midi.c optional sound dev/sound/midi/mpu401.c optional sound dev/sound/midi/mpu_if.m optional sound dev/sound/midi/mpufoi_if.m optional sound dev/sound/midi/sequencer.c optional sound dev/sound/midi/synth_if.m optional sound dev/spibus/ofw_spibus.c optional fdt spibus dev/spibus/spibus.c optional spibus \ dependency "spibus_if.h" dev/spibus/spigen.c optional spigen dev/spibus/spibus_if.m optional spibus dev/ste/if_ste.c optional ste pci dev/stg/tmc18c30.c optional stg dev/stg/tmc18c30_isa.c optional stg isa dev/stg/tmc18c30_pccard.c optional stg pccard dev/stg/tmc18c30_pci.c optional stg pci dev/stg/tmc18c30_subr.c optional stg dev/stge/if_stge.c optional stge dev/streams/streams.c optional streams dev/sym/sym_hipd.c optional sym \ dependency "$S/dev/sym/sym_{conf,defs}.h" dev/syscons/blank/blank_saver.c optional blank_saver dev/syscons/daemon/daemon_saver.c optional daemon_saver dev/syscons/dragon/dragon_saver.c optional dragon_saver dev/syscons/fade/fade_saver.c optional fade_saver dev/syscons/fire/fire_saver.c optional fire_saver dev/syscons/green/green_saver.c optional green_saver dev/syscons/logo/logo.c optional logo_saver dev/syscons/logo/logo_saver.c optional logo_saver dev/syscons/rain/rain_saver.c optional rain_saver dev/syscons/schistory.c optional sc dev/syscons/scmouse.c optional sc dev/syscons/scterm.c optional sc dev/syscons/scvidctl.c optional sc dev/syscons/snake/snake_saver.c optional snake_saver dev/syscons/star/star_saver.c optional star_saver dev/syscons/syscons.c optional sc dev/syscons/sysmouse.c optional sc dev/syscons/warp/warp_saver.c optional warp_saver dev/tdfx/tdfx_linux.c optional tdfx_linux tdfx compat_linux dev/tdfx/tdfx_pci.c optional tdfx pci dev/ti/if_ti.c optional ti pci dev/tl/if_tl.c optional tl pci dev/trm/trm.c optional trm dev/twa/tw_cl_init.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_cl_intr.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_cl_io.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_cl_misc.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" 
dev/twa/tw_osl_cam.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twa/tw_osl_freebsd.c optional twa \ compile-with "${NORMAL_C} -I$S/dev/twa" dev/twe/twe.c optional twe dev/twe/twe_freebsd.c optional twe dev/tws/tws.c optional tws dev/tws/tws_cam.c optional tws dev/tws/tws_hdm.c optional tws dev/tws/tws_services.c optional tws dev/tws/tws_user.c optional tws dev/tx/if_tx.c optional tx dev/txp/if_txp.c optional txp dev/uart/uart_bus_acpi.c optional uart acpi dev/uart/uart_bus_ebus.c optional uart ebus dev/uart/uart_bus_fdt.c optional uart fdt dev/uart/uart_bus_isa.c optional uart isa dev/uart/uart_bus_pccard.c optional uart pccard dev/uart/uart_bus_pci.c optional uart pci dev/uart/uart_bus_puc.c optional uart puc dev/uart/uart_bus_scc.c optional uart scc dev/uart/uart_core.c optional uart dev/uart/uart_dbg.c optional uart gdb dev/uart/uart_dev_ns8250.c optional uart uart_ns8250 | uart uart_snps dev/uart/uart_dev_pl011.c optional uart pl011 dev/uart/uart_dev_quicc.c optional uart quicc dev/uart/uart_dev_sab82532.c optional uart uart_sab82532 dev/uart/uart_dev_sab82532.c optional uart scc dev/uart/uart_dev_snps.c optional uart uart_snps dev/uart/uart_dev_z8530.c optional uart uart_z8530 dev/uart/uart_dev_z8530.c optional uart scc dev/uart/uart_if.m optional uart dev/uart/uart_subr.c optional uart dev/uart/uart_tty.c optional uart dev/ubsec/ubsec.c optional ubsec # # USB controller drivers # dev/usb/controller/at91dci.c optional at91dci dev/usb/controller/at91dci_atmelarm.c optional at91dci at91rm9200 dev/usb/controller/musb_otg.c optional musb dev/usb/controller/musb_otg_atmelarm.c optional musb at91rm9200 dev/usb/controller/dwc_otg.c optional dwcotg dev/usb/controller/dwc_otg_fdt.c optional dwcotg fdt dev/usb/controller/ehci.c optional ehci dev/usb/controller/ehci_pci.c optional ehci pci dev/usb/controller/ohci.c optional ohci dev/usb/controller/ohci_pci.c optional ohci pci dev/usb/controller/uhci.c optional uhci dev/usb/controller/uhci_pci.c optional uhci pci dev/usb/controller/xhci.c optional xhci dev/usb/controller/xhci_pci.c optional xhci pci dev/usb/controller/saf1761_otg.c optional saf1761otg dev/usb/controller/saf1761_otg_fdt.c optional saf1761otg fdt dev/usb/controller/uss820dci.c optional uss820dci dev/usb/controller/uss820dci_atmelarm.c optional uss820dci at91rm9200 dev/usb/controller/usb_controller.c optional usb # # USB storage drivers # dev/usb/storage/umass.c optional umass dev/usb/storage/urio.c optional urio dev/usb/storage/ustorage_fs.c optional usfs # # USB core # dev/usb/usb_busdma.c optional usb dev/usb/usb_core.c optional usb dev/usb/usb_debug.c optional usb dev/usb/usb_dev.c optional usb dev/usb/usb_device.c optional usb dev/usb/usb_dynamic.c optional usb dev/usb/usb_error.c optional usb dev/usb/usb_generic.c optional usb dev/usb/usb_handle_request.c optional usb dev/usb/usb_hid.c optional usb dev/usb/usb_hub.c optional usb dev/usb/usb_if.m optional usb dev/usb/usb_lookup.c optional usb dev/usb/usb_mbuf.c optional usb dev/usb/usb_msctest.c optional usb dev/usb/usb_parse.c optional usb dev/usb/usb_pf.c optional usb dev/usb/usb_process.c optional usb dev/usb/usb_request.c optional usb dev/usb/usb_transfer.c optional usb dev/usb/usb_util.c optional usb # # USB network drivers # dev/usb/net/if_aue.c optional aue dev/usb/net/if_axe.c optional axe dev/usb/net/if_axge.c optional axge dev/usb/net/if_cdce.c optional cdce dev/usb/net/if_cue.c optional cue dev/usb/net/if_ipheth.c optional ipheth dev/usb/net/if_kue.c optional kue dev/usb/net/if_mos.c optional mos 
dev/usb/net/if_rue.c optional rue dev/usb/net/if_smsc.c optional smsc dev/usb/net/if_udav.c optional udav dev/usb/net/if_ure.c optional ure dev/usb/net/if_usie.c optional usie dev/usb/net/if_urndis.c optional urndis dev/usb/net/ruephy.c optional rue dev/usb/net/usb_ethernet.c optional uether | aue | axe | axge | cdce | \ cue | ipheth | kue | mos | rue | \ smsc | udav | ure | urndis dev/usb/net/uhso.c optional uhso # # USB WLAN drivers # dev/usb/wlan/if_rsu.c optional rsu rsu-rtl8712fw.c optional rsu-rtl8712fw | rsufw \ compile-with "${AWK} -f $S/tools/fw_stub.awk rsu-rtl8712fw.fw:rsu-rtl8712fw:120 -mrsu-rtl8712fw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "rsu-rtl8712fw.c" rsu-rtl8712fw.fwo optional rsu-rtl8712fw | rsufw \ dependency "rsu-rtl8712fw.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "rsu-rtl8712fw.fwo" rsu-rtl8712fw.fw optional rsu-rtl8712.fw | rsufw \ dependency "$S/contrib/dev/rsu/rsu-rtl8712fw.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "rsu-rtl8712fw.fw" dev/usb/wlan/if_rum.c optional rum dev/usb/wlan/if_run.c optional run runfw.c optional runfw \ compile-with "${AWK} -f $S/tools/fw_stub.awk run.fw:runfw -mrunfw -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "runfw.c" runfw.fwo optional runfw \ dependency "run.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "runfw.fwo" run.fw optional runfw \ dependency "$S/contrib/dev/run/rt2870.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "run.fw" dev/usb/wlan/if_uath.c optional uath dev/usb/wlan/if_upgt.c optional upgt dev/usb/wlan/if_ural.c optional ural dev/usb/wlan/if_urtw.c optional urtw dev/usb/wlan/if_zyd.c optional zyd # # USB serial and parallel port drivers # dev/usb/serial/u3g.c optional u3g dev/usb/serial/uark.c optional uark dev/usb/serial/ubsa.c optional ubsa dev/usb/serial/ubser.c optional ubser dev/usb/serial/uchcom.c optional uchcom dev/usb/serial/ucycom.c optional ucycom dev/usb/serial/ufoma.c optional ufoma dev/usb/serial/uftdi.c optional uftdi dev/usb/serial/ugensa.c optional ugensa dev/usb/serial/uipaq.c optional uipaq dev/usb/serial/ulpt.c optional ulpt dev/usb/serial/umcs.c optional umcs dev/usb/serial/umct.c optional umct dev/usb/serial/umodem.c optional umodem dev/usb/serial/umoscom.c optional umoscom dev/usb/serial/uplcom.c optional uplcom dev/usb/serial/uslcom.c optional uslcom dev/usb/serial/uvisor.c optional uvisor dev/usb/serial/uvscom.c optional uvscom dev/usb/serial/usb_serial.c optional ucom | u3g | uark | ubsa | ubser | \ uchcom | ucycom | ufoma | uftdi | \ ugensa | uipaq | umcs | umct | \ umodem | umoscom | uplcom | usie | \ uslcom | uvisor | uvscom # # USB misc drivers # dev/usb/misc/ufm.c optional ufm dev/usb/misc/udbp.c optional udbp dev/usb/misc/ugold.c optional ugold dev/usb/misc/uled.c optional uled # # USB input drivers # dev/usb/input/atp.c optional atp dev/usb/input/uep.c optional uep dev/usb/input/uhid.c optional uhid dev/usb/input/ukbd.c optional ukbd dev/usb/input/ums.c optional ums dev/usb/input/wsp.c optional wsp # # USB quirks # dev/usb/quirk/usb_quirk.c optional usb # # USB templates # dev/usb/template/usb_template.c optional usb_template dev/usb/template/usb_template_audio.c optional usb_template dev/usb/template/usb_template_cdce.c optional usb_template dev/usb/template/usb_template_kbd.c optional usb_template dev/usb/template/usb_template_modem.c optional usb_template dev/usb/template/usb_template_mouse.c optional usb_template dev/usb/template/usb_template_msc.c optional 
usb_template dev/usb/template/usb_template_mtp.c optional usb_template dev/usb/template/usb_template_phone.c optional usb_template dev/usb/template/usb_template_serialnet.c optional usb_template dev/usb/template/usb_template_midi.c optional usb_template # # USB video drivers # dev/usb/video/udl.c optional udl # # USB END # dev/videomode/videomode.c optional videomode dev/videomode/edid.c optional videomode dev/videomode/pickmode.c optional videomode dev/videomode/vesagtf.c optional videomode dev/utopia/idtphy.c optional utopia dev/utopia/suni.c optional utopia dev/utopia/utopia.c optional utopia dev/vge/if_vge.c optional vge dev/viapm/viapm.c optional viapm pci dev/virtio/virtio.c optional virtio dev/virtio/virtqueue.c optional virtio dev/virtio/virtio_bus_if.m optional virtio dev/virtio/virtio_if.m optional virtio dev/virtio/pci/virtio_pci.c optional virtio_pci dev/virtio/mmio/virtio_mmio.c optional virtio_mmio dev/virtio/mmio/virtio_mmio_if.m optional virtio_mmio dev/virtio/network/if_vtnet.c optional vtnet dev/virtio/block/virtio_blk.c optional virtio_blk dev/virtio/balloon/virtio_balloon.c optional virtio_balloon dev/virtio/scsi/virtio_scsi.c optional virtio_scsi dev/virtio/random/virtio_random.c optional virtio_random dev/virtio/console/virtio_console.c optional virtio_console dev/vkbd/vkbd.c optional vkbd dev/vr/if_vr.c optional vr pci dev/vt/colors/vt_termcolors.c optional vt dev/vt/font/vt_font_default.c optional vt dev/vt/font/vt_mouse_cursor.c optional vt dev/vt/hw/efifb/efifb.c optional vt_efifb dev/vt/hw/fb/vt_fb.c optional vt dev/vt/hw/vga/vt_vga.c optional vt vt_vga dev/vt/logo/logo_freebsd.c optional vt splash dev/vt/logo/logo_beastie.c optional vt splash dev/vt/vt_buf.c optional vt dev/vt/vt_consolectl.c optional vt dev/vt/vt_core.c optional vt dev/vt/vt_cpulogos.c optional vt splash dev/vt/vt_font.c optional vt dev/vt/vt_sysmouse.c optional vt dev/vte/if_vte.c optional vte pci dev/vx/if_vx.c optional vx dev/vx/if_vx_eisa.c optional vx eisa dev/vx/if_vx_pci.c optional vx pci dev/vxge/vxge.c optional vxge dev/vxge/vxgehal/vxgehal-ifmsg.c optional vxge dev/vxge/vxgehal/vxgehal-mrpcim.c optional vxge dev/vxge/vxgehal/vxge-queue.c optional vxge dev/vxge/vxgehal/vxgehal-ring.c optional vxge dev/vxge/vxgehal/vxgehal-swapper.c optional vxge dev/vxge/vxgehal/vxgehal-mgmt.c optional vxge dev/vxge/vxgehal/vxgehal-srpcim.c optional vxge dev/vxge/vxgehal/vxgehal-config.c optional vxge dev/vxge/vxgehal/vxgehal-blockpool.c optional vxge dev/vxge/vxgehal/vxgehal-doorbells.c optional vxge dev/vxge/vxgehal/vxgehal-mgmtaux.c optional vxge dev/vxge/vxgehal/vxgehal-device.c optional vxge dev/vxge/vxgehal/vxgehal-mm.c optional vxge dev/vxge/vxgehal/vxgehal-driver.c optional vxge dev/vxge/vxgehal/vxgehal-virtualpath.c optional vxge dev/vxge/vxgehal/vxgehal-channel.c optional vxge dev/vxge/vxgehal/vxgehal-fifo.c optional vxge dev/watchdog/watchdog.c standard dev/wb/if_wb.c optional wb pci dev/wi/if_wi.c optional wi dev/wi/if_wi_pccard.c optional wi pccard dev/wi/if_wi_pci.c optional wi pci dev/wpi/if_wpi.c optional wpi pci wpifw.c optional wpifw \ compile-with "${AWK} -f $S/tools/fw_stub.awk wpi.fw:wpifw:153229 -mwpi -c${.TARGET}" \ no-implicit-rule before-depend local \ clean "wpifw.c" wpifw.fwo optional wpifw \ dependency "wpi.fw" \ compile-with "${NORMAL_FWO}" \ no-implicit-rule \ clean "wpifw.fwo" wpi.fw optional wpifw \ dependency "$S/contrib/dev/wpi/iwlwifi-3945-15.32.2.9.fw.uu" \ compile-with "${NORMAL_FW}" \ no-obj no-implicit-rule \ clean "wpi.fw" dev/xe/if_xe.c optional xe 
dev/xe/if_xe_pccard.c optional xe pccard dev/xen/balloon/balloon.c optional xenhvm dev/xen/blkfront/blkfront.c optional xenhvm dev/xen/blkback/blkback.c optional xenhvm dev/xen/console/xen_console.c optional xenhvm dev/xen/control/control.c optional xenhvm dev/xen/grant_table/grant_table.c optional xenhvm dev/xen/netback/netback.c optional xenhvm dev/xen/netfront/netfront.c optional xenhvm dev/xen/xenpci/xenpci.c optional xenpci dev/xen/timer/timer.c optional xenhvm dev/xen/pvcpu/pvcpu.c optional xenhvm dev/xen/xenstore/xenstore.c optional xenhvm dev/xen/xenstore/xenstore_dev.c optional xenhvm dev/xen/xenstore/xenstored_dev.c optional xenhvm dev/xen/evtchn/evtchn_dev.c optional xenhvm dev/xen/privcmd/privcmd.c optional xenhvm dev/xen/debug/debug.c optional xenhvm dev/xl/if_xl.c optional xl pci dev/xl/xlphy.c optional xl pci fs/autofs/autofs.c optional autofs fs/autofs/autofs_vfsops.c optional autofs fs/autofs/autofs_vnops.c optional autofs fs/deadfs/dead_vnops.c standard fs/devfs/devfs_devs.c standard fs/devfs/devfs_dir.c standard fs/devfs/devfs_rule.c standard fs/devfs/devfs_vfsops.c standard fs/devfs/devfs_vnops.c standard fs/fdescfs/fdesc_vfsops.c optional fdescfs fs/fdescfs/fdesc_vnops.c optional fdescfs fs/fifofs/fifo_vnops.c standard fs/cuse/cuse.c optional cuse fs/fuse/fuse_device.c optional fuse fs/fuse/fuse_file.c optional fuse fs/fuse/fuse_internal.c optional fuse fs/fuse/fuse_io.c optional fuse fs/fuse/fuse_ipc.c optional fuse fs/fuse/fuse_main.c optional fuse fs/fuse/fuse_node.c optional fuse fs/fuse/fuse_vfsops.c optional fuse fs/fuse/fuse_vnops.c optional fuse fs/msdosfs/msdosfs_conv.c optional msdosfs fs/msdosfs/msdosfs_denode.c optional msdosfs fs/msdosfs/msdosfs_fat.c optional msdosfs fs/msdosfs/msdosfs_fileno.c optional msdosfs fs/msdosfs/msdosfs_iconv.c optional msdosfs_iconv fs/msdosfs/msdosfs_lookup.c optional msdosfs fs/msdosfs/msdosfs_vfsops.c optional msdosfs fs/msdosfs/msdosfs_vnops.c optional msdosfs fs/nandfs/bmap.c optional nandfs fs/nandfs/nandfs_alloc.c optional nandfs fs/nandfs/nandfs_bmap.c optional nandfs fs/nandfs/nandfs_buffer.c optional nandfs fs/nandfs/nandfs_cleaner.c optional nandfs fs/nandfs/nandfs_cpfile.c optional nandfs fs/nandfs/nandfs_dat.c optional nandfs fs/nandfs/nandfs_dir.c optional nandfs fs/nandfs/nandfs_ifile.c optional nandfs fs/nandfs/nandfs_segment.c optional nandfs fs/nandfs/nandfs_subr.c optional nandfs fs/nandfs/nandfs_sufile.c optional nandfs fs/nandfs/nandfs_vfsops.c optional nandfs fs/nandfs/nandfs_vnops.c optional nandfs fs/nfs/nfs_commonkrpc.c optional nfscl | nfsd fs/nfs/nfs_commonsubs.c optional nfscl | nfsd fs/nfs/nfs_commonport.c optional nfscl | nfsd fs/nfs/nfs_commonacl.c optional nfscl | nfsd fs/nfsclient/nfs_clcomsubs.c optional nfscl fs/nfsclient/nfs_clsubs.c optional nfscl fs/nfsclient/nfs_clstate.c optional nfscl fs/nfsclient/nfs_clkrpc.c optional nfscl fs/nfsclient/nfs_clrpcops.c optional nfscl fs/nfsclient/nfs_clvnops.c optional nfscl fs/nfsclient/nfs_clnode.c optional nfscl fs/nfsclient/nfs_clvfsops.c optional nfscl fs/nfsclient/nfs_clport.c optional nfscl fs/nfsclient/nfs_clbio.c optional nfscl fs/nfsclient/nfs_clnfsiod.c optional nfscl fs/nfsserver/nfs_fha_new.c optional nfsd inet fs/nfsserver/nfs_nfsdsocket.c optional nfsd inet fs/nfsserver/nfs_nfsdsubs.c optional nfsd inet fs/nfsserver/nfs_nfsdstate.c optional nfsd inet fs/nfsserver/nfs_nfsdkrpc.c optional nfsd inet fs/nfsserver/nfs_nfsdserv.c optional nfsd inet fs/nfsserver/nfs_nfsdport.c optional nfsd inet fs/nfsserver/nfs_nfsdcache.c optional nfsd inet 
fs/nullfs/null_subr.c optional nullfs fs/nullfs/null_vfsops.c optional nullfs fs/nullfs/null_vnops.c optional nullfs fs/procfs/procfs.c optional procfs fs/procfs/procfs_ctl.c optional procfs fs/procfs/procfs_dbregs.c optional procfs fs/procfs/procfs_fpregs.c optional procfs fs/procfs/procfs_ioctl.c optional procfs fs/procfs/procfs_map.c optional procfs fs/procfs/procfs_mem.c optional procfs fs/procfs/procfs_note.c optional procfs fs/procfs/procfs_osrel.c optional procfs fs/procfs/procfs_regs.c optional procfs fs/procfs/procfs_rlimit.c optional procfs fs/procfs/procfs_status.c optional procfs fs/procfs/procfs_type.c optional procfs fs/pseudofs/pseudofs.c optional pseudofs fs/pseudofs/pseudofs_fileno.c optional pseudofs fs/pseudofs/pseudofs_vncache.c optional pseudofs fs/pseudofs/pseudofs_vnops.c optional pseudofs fs/smbfs/smbfs_io.c optional smbfs fs/smbfs/smbfs_node.c optional smbfs fs/smbfs/smbfs_smb.c optional smbfs fs/smbfs/smbfs_subr.c optional smbfs fs/smbfs/smbfs_vfsops.c optional smbfs fs/smbfs/smbfs_vnops.c optional smbfs fs/udf/osta.c optional udf fs/udf/udf_iconv.c optional udf_iconv fs/udf/udf_vfsops.c optional udf fs/udf/udf_vnops.c optional udf fs/unionfs/union_subr.c optional unionfs fs/unionfs/union_vfsops.c optional unionfs fs/unionfs/union_vnops.c optional unionfs fs/tmpfs/tmpfs_vnops.c optional tmpfs fs/tmpfs/tmpfs_fifoops.c optional tmpfs fs/tmpfs/tmpfs_vfsops.c optional tmpfs fs/tmpfs/tmpfs_subr.c optional tmpfs gdb/gdb_cons.c optional gdb gdb/gdb_main.c optional gdb gdb/gdb_packet.c optional gdb geom/bde/g_bde.c optional geom_bde geom/bde/g_bde_crypt.c optional geom_bde geom/bde/g_bde_lock.c optional geom_bde geom/bde/g_bde_work.c optional geom_bde geom/cache/g_cache.c optional geom_cache geom/concat/g_concat.c optional geom_concat geom/eli/g_eli.c optional geom_eli geom/eli/g_eli_crypto.c optional geom_eli geom/eli/g_eli_ctl.c optional geom_eli geom/eli/g_eli_hmac.c optional geom_eli geom/eli/g_eli_integrity.c optional geom_eli geom/eli/g_eli_key.c optional geom_eli geom/eli/g_eli_key_cache.c optional geom_eli geom/eli/g_eli_privacy.c optional geom_eli geom/eli/pkcs5v2.c optional geom_eli geom/gate/g_gate.c optional geom_gate geom/geom_aes.c optional geom_aes geom/geom_bsd.c optional geom_bsd geom/geom_bsd_enc.c optional geom_bsd | geom_part_bsd geom/geom_ccd.c optional ccd | geom_ccd geom/geom_ctl.c standard geom/geom_dev.c standard geom/geom_disk.c standard geom/geom_dump.c standard geom/geom_event.c standard geom/geom_fox.c optional geom_fox geom/geom_flashmap.c optional fdt cfi | fdt nand | fdt mx25l geom/geom_io.c standard geom/geom_kern.c standard geom/geom_map.c optional geom_map geom/geom_mbr.c optional geom_mbr geom/geom_mbr_enc.c optional geom_mbr geom/geom_pc98.c optional geom_pc98 geom/geom_pc98_enc.c optional geom_pc98 geom/geom_redboot.c optional geom_redboot geom/geom_slice.c standard geom/geom_subr.c standard geom/geom_sunlabel.c optional geom_sunlabel geom/geom_sunlabel_enc.c optional geom_sunlabel geom/geom_vfs.c standard geom/geom_vol_ffs.c optional geom_vol geom/journal/g_journal.c optional geom_journal geom/journal/g_journal_ufs.c optional geom_journal geom/label/g_label.c optional geom_label | geom_label_gpt geom/label/g_label_ext2fs.c optional geom_label geom/label/g_label_iso9660.c optional geom_label geom/label/g_label_msdosfs.c optional geom_label geom/label/g_label_ntfs.c optional geom_label geom/label/g_label_reiserfs.c optional geom_label geom/label/g_label_ufs.c optional geom_label geom/label/g_label_gpt.c optional geom_label | 
geom_label_gpt geom/label/g_label_disk_ident.c optional geom_label geom/linux_lvm/g_linux_lvm.c optional geom_linux_lvm geom/mirror/g_mirror.c optional geom_mirror geom/mirror/g_mirror_ctl.c optional geom_mirror geom/mountver/g_mountver.c optional geom_mountver geom/multipath/g_multipath.c optional geom_multipath geom/nop/g_nop.c optional geom_nop geom/part/g_part.c standard geom/part/g_part_if.m standard geom/part/g_part_apm.c optional geom_part_apm geom/part/g_part_bsd.c optional geom_part_bsd geom/part/g_part_bsd64.c optional geom_part_bsd64 geom/part/g_part_ebr.c optional geom_part_ebr geom/part/g_part_gpt.c optional geom_part_gpt geom/part/g_part_ldm.c optional geom_part_ldm geom/part/g_part_mbr.c optional geom_part_mbr geom/part/g_part_pc98.c optional geom_part_pc98 geom/part/g_part_vtoc8.c optional geom_part_vtoc8 geom/raid/g_raid.c optional geom_raid geom/raid/g_raid_ctl.c optional geom_raid geom/raid/g_raid_md_if.m optional geom_raid geom/raid/g_raid_tr_if.m optional geom_raid geom/raid/md_ddf.c optional geom_raid geom/raid/md_intel.c optional geom_raid geom/raid/md_jmicron.c optional geom_raid geom/raid/md_nvidia.c optional geom_raid geom/raid/md_promise.c optional geom_raid geom/raid/md_sii.c optional geom_raid geom/raid/tr_concat.c optional geom_raid geom/raid/tr_raid0.c optional geom_raid geom/raid/tr_raid1.c optional geom_raid geom/raid/tr_raid1e.c optional geom_raid geom/raid/tr_raid5.c optional geom_raid geom/raid3/g_raid3.c optional geom_raid3 geom/raid3/g_raid3_ctl.c optional geom_raid3 geom/shsec/g_shsec.c optional geom_shsec geom/stripe/g_stripe.c optional geom_stripe contrib/xz-embedded/freebsd/xz_malloc.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_crc32.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_dec_bcj.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_dec_lzma2.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" contrib/xz-embedded/linux/lib/xz/xz_dec_stream.c \ optional xz_embedded | geom_uzip \ compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" geom/uzip/g_uzip.c optional geom_uzip geom/uzip/g_uzip_lzma.c optional geom_uzip geom/uzip/g_uzip_wrkthr.c optional geom_uzip geom/uzip/g_uzip_zlib.c optional geom_uzip geom/vinum/geom_vinum.c optional geom_vinum geom/vinum/geom_vinum_create.c optional geom_vinum geom/vinum/geom_vinum_drive.c optional geom_vinum geom/vinum/geom_vinum_plex.c optional geom_vinum geom/vinum/geom_vinum_volume.c optional geom_vinum geom/vinum/geom_vinum_subr.c optional geom_vinum geom/vinum/geom_vinum_raid5.c optional geom_vinum geom/vinum/geom_vinum_share.c optional geom_vinum geom/vinum/geom_vinum_list.c optional geom_vinum geom/vinum/geom_vinum_rm.c optional geom_vinum geom/vinum/geom_vinum_init.c optional geom_vinum geom/vinum/geom_vinum_state.c optional geom_vinum geom/vinum/geom_vinum_rename.c optional 
geom_vinum geom/vinum/geom_vinum_move.c optional geom_vinum geom/vinum/geom_vinum_events.c optional geom_vinum geom/virstor/binstream.c optional geom_virstor geom/virstor/g_virstor.c optional geom_virstor geom/virstor/g_virstor_md.c optional geom_virstor geom/zero/g_zero.c optional geom_zero fs/ext2fs/ext2_alloc.c optional ext2fs fs/ext2fs/ext2_balloc.c optional ext2fs fs/ext2fs/ext2_bmap.c optional ext2fs fs/ext2fs/ext2_extents.c optional ext2fs fs/ext2fs/ext2_inode.c optional ext2fs fs/ext2fs/ext2_inode_cnv.c optional ext2fs fs/ext2fs/ext2_hash.c optional ext2fs fs/ext2fs/ext2_htree.c optional ext2fs fs/ext2fs/ext2_lookup.c optional ext2fs fs/ext2fs/ext2_subr.c optional ext2fs fs/ext2fs/ext2_vfsops.c optional ext2fs fs/ext2fs/ext2_vnops.c optional ext2fs # isa/isa_if.m standard isa/isa_common.c optional isa isa/isahint.c optional isa isa/pnp.c optional isa isapnp isa/pnpparse.c optional isa isapnp fs/cd9660/cd9660_bmap.c optional cd9660 fs/cd9660/cd9660_lookup.c optional cd9660 fs/cd9660/cd9660_node.c optional cd9660 fs/cd9660/cd9660_rrip.c optional cd9660 fs/cd9660/cd9660_util.c optional cd9660 fs/cd9660/cd9660_vfsops.c optional cd9660 fs/cd9660/cd9660_vnops.c optional cd9660 fs/cd9660/cd9660_iconv.c optional cd9660_iconv kern/bus_if.m standard kern/clock_if.m standard kern/cpufreq_if.m standard kern/device_if.m standard kern/imgact_binmisc.c optional imagact_binmisc kern/imgact_elf.c standard kern/imgact_elf32.c optional compat_freebsd32 kern/imgact_shell.c standard kern/inflate.c optional gzip kern/init_main.c standard kern/init_sysent.c standard kern/ksched.c optional _kposix_priority_scheduling kern/kern_acct.c standard kern/kern_alq.c optional alq kern/kern_clock.c standard kern/kern_condvar.c standard kern/kern_conf.c standard kern/kern_cons.c standard kern/kern_cpu.c standard kern/kern_cpuset.c standard kern/kern_context.c standard kern/kern_descrip.c standard kern/kern_dtrace.c optional kdtrace_hooks kern/kern_dump.c standard kern/kern_environment.c standard kern/kern_et.c standard kern/kern_event.c standard kern/kern_exec.c standard kern/kern_exit.c standard kern/kern_fail.c standard kern/kern_ffclock.c standard kern/kern_fork.c standard kern/kern_gzio.c optional gzio kern/kern_hhook.c standard kern/kern_idle.c standard kern/kern_intr.c standard kern/kern_jail.c standard kern/kern_khelp.c standard kern/kern_kthread.c standard kern/kern_ktr.c optional ktr kern/kern_ktrace.c standard kern/kern_linker.c standard kern/kern_lock.c standard kern/kern_lockf.c standard kern/kern_lockstat.c optional kdtrace_hooks kern/kern_loginclass.c standard kern/kern_malloc.c standard kern/kern_mbuf.c standard kern/kern_mib.c standard kern/kern_module.c standard kern/kern_mtxpool.c standard kern/kern_mutex.c standard kern/kern_ntptime.c standard kern/kern_numa.c standard kern/kern_osd.c standard kern/kern_physio.c standard kern/kern_pmc.c standard kern/kern_poll.c optional device_polling kern/kern_priv.c standard kern/kern_proc.c standard kern/kern_procctl.c standard kern/kern_prot.c standard kern/kern_racct.c standard kern/kern_rangelock.c standard kern/kern_rctl.c standard kern/kern_resource.c standard kern/kern_rmlock.c standard kern/kern_rwlock.c standard kern/kern_sdt.c optional kdtrace_hooks kern/kern_sema.c standard kern/kern_sendfile.c standard kern/kern_sharedpage.c standard kern/kern_shutdown.c standard kern/kern_sig.c standard kern/kern_switch.c standard kern/kern_sx.c standard kern/kern_synch.c standard kern/kern_syscalls.c standard kern/kern_sysctl.c standard kern/kern_tc.c standard 
kern/kern_thr.c standard kern/kern_thread.c standard kern/kern_time.c standard kern/kern_timeout.c standard kern/kern_umtx.c standard kern/kern_uuid.c standard kern/kern_xxx.c standard kern/link_elf.c standard kern/linker_if.m standard kern/md4c.c optional netsmb kern/md5c.c standard kern/p1003_1b.c standard kern/posix4_mib.c standard kern/sched_4bsd.c optional sched_4bsd kern/sched_ule.c optional sched_ule kern/serdev_if.m standard kern/stack_protector.c standard \ compile-with "${NORMAL_C:N-fstack-protector*}" kern/subr_acl_nfs4.c optional ufs_acl | zfs kern/subr_acl_posix1e.c optional ufs_acl kern/subr_autoconf.c standard kern/subr_blist.c standard kern/subr_bus.c standard kern/subr_bus_dma.c standard kern/subr_bufring.c standard kern/subr_capability.c standard kern/subr_clock.c standard kern/subr_counter.c standard kern/subr_devstat.c standard kern/subr_disk.c standard kern/subr_eventhandler.c standard kern/subr_fattime.c standard kern/subr_firmware.c optional firmware kern/subr_gtaskqueue.c standard kern/subr_hash.c standard kern/subr_hints.c standard kern/subr_kdb.c standard kern/subr_kobj.c standard kern/subr_lock.c standard kern/subr_log.c standard kern/subr_mbpool.c optional libmbpool kern/subr_mchain.c optional libmchain kern/subr_module.c standard kern/subr_msgbuf.c standard kern/subr_param.c standard kern/subr_pcpu.c standard kern/subr_pctrie.c standard kern/subr_power.c standard kern/subr_prf.c standard kern/subr_prof.c standard kern/subr_rman.c standard kern/subr_rtc.c standard kern/subr_sbuf.c standard kern/subr_scanf.c standard kern/subr_sglist.c standard kern/subr_sleepqueue.c standard kern/subr_smp.c standard kern/subr_stack.c optional ddb | stack | ktr kern/subr_taskqueue.c standard kern/subr_terminal.c optional vt kern/subr_trap.c standard kern/subr_turnstile.c standard kern/subr_uio.c standard kern/subr_unit.c standard kern/subr_vmem.c standard kern/subr_witness.c optional witness kern/sys_capability.c standard kern/sys_generic.c standard kern/sys_pipe.c standard kern/sys_procdesc.c standard kern/sys_process.c standard kern/sys_socket.c standard kern/syscalls.c standard kern/sysv_ipc.c standard kern/sysv_msg.c optional sysvmsg kern/sysv_sem.c optional sysvsem kern/sysv_shm.c optional sysvshm kern/tty.c standard kern/tty_compat.c optional compat_43tty kern/tty_info.c standard kern/tty_inq.c standard kern/tty_outq.c standard kern/tty_pts.c standard kern/tty_tty.c standard kern/tty_ttydisc.c standard kern/uipc_accf.c standard kern/uipc_debug.c optional ddb kern/uipc_domain.c standard kern/uipc_mbuf.c standard kern/uipc_mbuf2.c standard kern/uipc_mbufhash.c standard kern/uipc_mqueue.c optional p1003_1b_mqueue kern/uipc_sem.c optional p1003_1b_semaphores kern/uipc_shm.c standard kern/uipc_sockbuf.c standard kern/uipc_socket.c standard kern/uipc_syscalls.c standard kern/uipc_usrreq.c standard kern/vfs_acl.c standard kern/vfs_aio.c standard kern/vfs_bio.c standard kern/vfs_cache.c standard kern/vfs_cluster.c standard kern/vfs_default.c standard kern/vfs_export.c standard kern/vfs_extattr.c standard kern/vfs_hash.c standard kern/vfs_init.c standard kern/vfs_lookup.c standard kern/vfs_mount.c standard kern/vfs_mountroot.c standard kern/vfs_subr.c standard kern/vfs_syscalls.c standard kern/vfs_vnops.c standard # # Kernel GSS-API # gssd.h optional kgssapi \ dependency "$S/kgssapi/gssd.x" \ compile-with "RPCGEN_CPP='${CPP}' rpcgen -hM $S/kgssapi/gssd.x | grep -v pthread.h > gssd.h" \ no-obj no-implicit-rule before-depend local \ clean "gssd.h" gssd_xdr.c optional kgssapi \ 
dependency "$S/kgssapi/gssd.x gssd.h" \ compile-with "RPCGEN_CPP='${CPP}' rpcgen -c $S/kgssapi/gssd.x -o gssd_xdr.c" \ no-implicit-rule before-depend local \ clean "gssd_xdr.c" gssd_clnt.c optional kgssapi \ dependency "$S/kgssapi/gssd.x gssd.h" \ compile-with "RPCGEN_CPP='${CPP}' rpcgen -lM $S/kgssapi/gssd.x | grep -v string.h > gssd_clnt.c" \ no-implicit-rule before-depend local \ clean "gssd_clnt.c" kgssapi/gss_accept_sec_context.c optional kgssapi kgssapi/gss_add_oid_set_member.c optional kgssapi kgssapi/gss_acquire_cred.c optional kgssapi kgssapi/gss_canonicalize_name.c optional kgssapi kgssapi/gss_create_empty_oid_set.c optional kgssapi kgssapi/gss_delete_sec_context.c optional kgssapi kgssapi/gss_display_status.c optional kgssapi kgssapi/gss_export_name.c optional kgssapi kgssapi/gss_get_mic.c optional kgssapi kgssapi/gss_init_sec_context.c optional kgssapi kgssapi/gss_impl.c optional kgssapi kgssapi/gss_import_name.c optional kgssapi kgssapi/gss_names.c optional kgssapi kgssapi/gss_pname_to_uid.c optional kgssapi kgssapi/gss_release_buffer.c optional kgssapi kgssapi/gss_release_cred.c optional kgssapi kgssapi/gss_release_name.c optional kgssapi kgssapi/gss_release_oid_set.c optional kgssapi kgssapi/gss_set_cred_option.c optional kgssapi kgssapi/gss_test_oid_set_member.c optional kgssapi kgssapi/gss_unwrap.c optional kgssapi kgssapi/gss_verify_mic.c optional kgssapi kgssapi/gss_wrap.c optional kgssapi kgssapi/gss_wrap_size_limit.c optional kgssapi kgssapi/gssd_prot.c optional kgssapi kgssapi/krb5/krb5_mech.c optional kgssapi kgssapi/krb5/kcrypto.c optional kgssapi kgssapi/krb5/kcrypto_aes.c optional kgssapi kgssapi/krb5/kcrypto_arcfour.c optional kgssapi kgssapi/krb5/kcrypto_des.c optional kgssapi kgssapi/krb5/kcrypto_des3.c optional kgssapi kgssapi/kgss_if.m optional kgssapi kgssapi/gsstest.c optional kgssapi_debug # These files in libkern/ are those needed by all architectures. Some # of the files in libkern/ are only needed on some architectures, e.g., # libkern/divdi3.c is needed by i386 but not alpha. Also, some of these # routines may be optimized for a particular platform. In either case, # the file should be moved to conf/files. from here. 
# libkern/arc4random.c standard libkern/asprintf.c standard libkern/bcd.c standard libkern/bsearch.c standard libkern/crc32.c standard libkern/explicit_bzero.c standard libkern/fnmatch.c standard libkern/iconv.c optional libiconv libkern/iconv_converter_if.m optional libiconv libkern/iconv_ucs.c optional libiconv libkern/iconv_xlat.c optional libiconv libkern/iconv_xlat16.c optional libiconv libkern/inet_aton.c standard libkern/inet_ntoa.c standard libkern/inet_ntop.c standard libkern/inet_pton.c standard libkern/jenkins_hash.c standard libkern/murmur3_32.c standard libkern/mcount.c optional profiling-routine libkern/memcchr.c standard libkern/memchr.c standard libkern/memcmp.c standard libkern/memmem.c optional gdb libkern/qsort.c standard libkern/qsort_r.c standard libkern/random.c standard libkern/scanc.c standard libkern/strcasecmp.c standard libkern/strcat.c standard libkern/strchr.c standard libkern/strcmp.c standard libkern/strcpy.c standard libkern/strcspn.c standard libkern/strdup.c standard libkern/strndup.c standard libkern/strlcat.c standard libkern/strlcpy.c standard libkern/strlen.c standard libkern/strncat.c standard libkern/strncmp.c standard libkern/strncpy.c standard libkern/strnlen.c standard libkern/strrchr.c standard libkern/strsep.c standard libkern/strspn.c standard libkern/strstr.c standard libkern/strtol.c standard libkern/strtoq.c standard libkern/strtoul.c standard libkern/strtouq.c standard libkern/strvalid.c standard libkern/timingsafe_bcmp.c standard libkern/zlib.c optional crypto | geom_uzip | ipsec | \ mxge | netgraph_deflate | \ ddb_ctf | gzio net/altq/altq_cbq.c optional altq net/altq/altq_cdnr.c optional altq net/altq/altq_codel.c optional altq net/altq/altq_hfsc.c optional altq net/altq/altq_fairq.c optional altq net/altq/altq_priq.c optional altq net/altq/altq_red.c optional altq net/altq/altq_rio.c optional altq net/altq/altq_rmclass.c optional altq net/altq/altq_subr.c optional altq net/bpf.c standard net/bpf_buffer.c optional bpf net/bpf_jitter.c optional bpf_jitter net/bpf_filter.c optional bpf | netgraph_bpf net/bpf_zerocopy.c optional bpf net/bridgestp.c optional bridge | if_bridge net/flowtable.c optional flowtable inet | flowtable inet6 net/ieee8023ad_lacp.c optional lagg net/if.c standard net/if_arcsubr.c optional arcnet net/if_atmsubr.c optional atm net/if_bridge.c optional bridge inet | if_bridge inet net/if_clone.c standard net/if_dead.c standard net/if_debug.c optional ddb net/if_disc.c optional disc net/if_edsc.c optional edsc net/if_enc.c optional enc inet | enc inet6 net/if_epair.c optional epair net/if_ethersubr.c optional ether net/if_fddisubr.c optional fddi net/if_fwsubr.c optional fwip net/if_gif.c optional gif inet | gif inet6 | \ netgraph_gif inet | netgraph_gif inet6 net/if_gre.c optional gre inet | gre inet6 net/if_iso88025subr.c optional token net/if_lagg.c optional lagg net/if_loop.c optional loop net/if_llatbl.c standard net/if_me.c optional me inet net/if_media.c standard net/if_mib.c standard net/if_spppfr.c optional sppp | netgraph_sppp net/if_spppsubr.c optional sppp | netgraph_sppp net/if_stf.c optional stf inet inet6 net/if_tun.c optional tun net/if_tap.c optional tap net/if_vlan.c optional vlan net/if_vxlan.c optional vxlan inet | vxlan inet6 net/ifdi_if.m optional ether pci net/iflib.c optional ether pci net/mp_ring.c optional ether net/mppcc.c optional netgraph_mppc_compression net/mppcd.c optional netgraph_mppc_compression net/netisr.c standard net/pfil.c optional ether | inet net/radix.c standard net/radix_mpath.c 
standard net/raw_cb.c standard net/raw_usrreq.c standard net/route.c standard net/rss_config.c optional inet rss | inet6 rss net/rtsock.c standard net/slcompress.c optional netgraph_vjc | sppp | \ netgraph_sppp net/toeplitz.c optional inet rss | inet6 rss net/vnet.c optional vimage net80211/ieee80211.c optional wlan net80211/ieee80211_acl.c optional wlan wlan_acl net80211/ieee80211_action.c optional wlan net80211/ieee80211_ageq.c optional wlan net80211/ieee80211_adhoc.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_ageq.c optional wlan net80211/ieee80211_amrr.c optional wlan | wlan_amrr net80211/ieee80211_crypto.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_crypto_ccmp.c optional wlan wlan_ccmp net80211/ieee80211_crypto_none.c optional wlan net80211/ieee80211_crypto_tkip.c optional wlan wlan_tkip net80211/ieee80211_crypto_wep.c optional wlan wlan_wep net80211/ieee80211_ddb.c optional wlan ddb net80211/ieee80211_dfs.c optional wlan net80211/ieee80211_freebsd.c optional wlan net80211/ieee80211_hostap.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_ht.c optional wlan net80211/ieee80211_hwmp.c optional wlan ieee80211_support_mesh net80211/ieee80211_input.c optional wlan net80211/ieee80211_ioctl.c optional wlan net80211/ieee80211_mesh.c optional wlan ieee80211_support_mesh \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_monitor.c optional wlan net80211/ieee80211_node.c optional wlan net80211/ieee80211_output.c optional wlan net80211/ieee80211_phy.c optional wlan net80211/ieee80211_power.c optional wlan net80211/ieee80211_proto.c optional wlan net80211/ieee80211_radiotap.c optional wlan net80211/ieee80211_ratectl.c optional wlan net80211/ieee80211_ratectl_none.c optional wlan net80211/ieee80211_regdomain.c optional wlan net80211/ieee80211_rssadapt.c optional wlan wlan_rssadapt net80211/ieee80211_scan.c optional wlan net80211/ieee80211_scan_sta.c optional wlan net80211/ieee80211_sta.c optional wlan \ compile-with "${NORMAL_C} -Wno-unused-function" net80211/ieee80211_superg.c optional wlan ieee80211_support_superg net80211/ieee80211_scan_sw.c optional wlan net80211/ieee80211_tdma.c optional wlan ieee80211_support_tdma net80211/ieee80211_wds.c optional wlan net80211/ieee80211_xauth.c optional wlan wlan_xauth net80211/ieee80211_alq.c optional wlan ieee80211_alq netgraph/atm/ccatm/ng_ccatm.c optional ngatm_ccatm \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/ng_atm.c optional ngatm_atm netgraph/atm/ngatmbase.c optional ngatm_atmbase \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/sscfu/ng_sscfu.c optional ngatm_sscfu \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/sscop/ng_sscop.c optional ngatm_sscop \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/atm/uni/ng_uni.c optional ngatm_uni \ compile-with "${NORMAL_C} -I$S/contrib/ngatm" netgraph/bluetooth/common/ng_bluetooth.c optional netgraph_bluetooth netgraph/bluetooth/drivers/bt3c/ng_bt3c_pccard.c optional netgraph_bluetooth_bt3c netgraph/bluetooth/drivers/h4/ng_h4.c optional netgraph_bluetooth_h4 netgraph/bluetooth/drivers/ubt/ng_ubt.c optional netgraph_bluetooth_ubt usb netgraph/bluetooth/drivers/ubtbcmfw/ubtbcmfw.c optional netgraph_bluetooth_ubtbcmfw usb netgraph/bluetooth/hci/ng_hci_cmds.c optional netgraph_bluetooth_hci netgraph/bluetooth/hci/ng_hci_evnt.c optional netgraph_bluetooth_hci netgraph/bluetooth/hci/ng_hci_main.c optional netgraph_bluetooth_hci 
netgraph/bluetooth/hci/ng_hci_misc.c optional netgraph_bluetooth_hci netgraph/bluetooth/hci/ng_hci_ulpi.c optional netgraph_bluetooth_hci netgraph/bluetooth/l2cap/ng_l2cap_cmds.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_evnt.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_llpi.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_main.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_misc.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/l2cap/ng_l2cap_ulpi.c optional netgraph_bluetooth_l2cap netgraph/bluetooth/socket/ng_btsocket.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_hci_raw.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_l2cap.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_l2cap_raw.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_rfcomm.c optional netgraph_bluetooth_socket netgraph/bluetooth/socket/ng_btsocket_sco.c optional netgraph_bluetooth_socket netgraph/netflow/netflow.c optional netgraph_netflow netgraph/netflow/netflow_v9.c optional netgraph_netflow netgraph/netflow/ng_netflow.c optional netgraph_netflow netgraph/ng_UI.c optional netgraph_UI netgraph/ng_async.c optional netgraph_async netgraph/ng_atmllc.c optional netgraph_atmllc netgraph/ng_base.c optional netgraph netgraph/ng_bpf.c optional netgraph_bpf netgraph/ng_bridge.c optional netgraph_bridge netgraph/ng_car.c optional netgraph_car netgraph/ng_cisco.c optional netgraph_cisco netgraph/ng_deflate.c optional netgraph_deflate netgraph/ng_device.c optional netgraph_device netgraph/ng_echo.c optional netgraph_echo netgraph/ng_eiface.c optional netgraph_eiface netgraph/ng_ether.c optional netgraph_ether netgraph/ng_ether_echo.c optional netgraph_ether_echo netgraph/ng_frame_relay.c optional netgraph_frame_relay netgraph/ng_gif.c optional netgraph_gif inet6 | netgraph_gif inet netgraph/ng_gif_demux.c optional netgraph_gif_demux netgraph/ng_hole.c optional netgraph_hole netgraph/ng_iface.c optional netgraph_iface netgraph/ng_ip_input.c optional netgraph_ip_input netgraph/ng_ipfw.c optional netgraph_ipfw inet ipfirewall netgraph/ng_ksocket.c optional netgraph_ksocket netgraph/ng_l2tp.c optional netgraph_l2tp netgraph/ng_lmi.c optional netgraph_lmi netgraph/ng_mppc.c optional netgraph_mppc_compression | \ netgraph_mppc_encryption netgraph/ng_nat.c optional netgraph_nat inet libalias netgraph/ng_one2many.c optional netgraph_one2many netgraph/ng_parse.c optional netgraph netgraph/ng_patch.c optional netgraph_patch netgraph/ng_pipe.c optional netgraph_pipe netgraph/ng_ppp.c optional netgraph_ppp netgraph/ng_pppoe.c optional netgraph_pppoe netgraph/ng_pptpgre.c optional netgraph_pptpgre netgraph/ng_pred1.c optional netgraph_pred1 netgraph/ng_rfc1490.c optional netgraph_rfc1490 netgraph/ng_socket.c optional netgraph_socket netgraph/ng_split.c optional netgraph_split netgraph/ng_sppp.c optional netgraph_sppp netgraph/ng_tag.c optional netgraph_tag netgraph/ng_tcpmss.c optional netgraph_tcpmss netgraph/ng_tee.c optional netgraph_tee netgraph/ng_tty.c optional netgraph_tty netgraph/ng_vjc.c optional netgraph_vjc netgraph/ng_vlan.c optional netgraph_vlan netinet/accf_data.c optional accept_filter_data inet netinet/accf_dns.c optional accept_filter_dns inet netinet/accf_http.c optional accept_filter_http inet netinet/if_atm.c optional atm netinet/if_ether.c optional inet ether netinet/igmp.c optional inet netinet/in.c optional inet 
netinet/in_debug.c optional inet ddb netinet/in_kdtrace.c optional inet | inet6 netinet/ip_carp.c optional inet carp | inet6 carp netinet/in_fib.c optional inet netinet/in_gif.c optional gif inet | netgraph_gif inet netinet/ip_gre.c optional gre inet netinet/ip_id.c optional inet netinet/in_jail.c optional inet netinet/in_mcast.c optional inet netinet/in_pcb.c optional inet | inet6 netinet/in_pcbgroup.c optional inet pcbgroup | inet6 pcbgroup netinet/in_prot.c optional inet | inet6 netinet/in_proto.c optional inet | inet6 netinet/in_rmx.c optional inet netinet/in_rss.c optional inet rss netinet/ip_divert.c optional inet ipdivert ipfirewall netinet/ip_ecn.c optional inet | inet6 netinet/ip_encap.c optional inet | inet6 netinet/ip_fastfwd.c optional inet netinet/ip_icmp.c optional inet | inet6 netinet/ip_input.c optional inet netinet/ip_ipsec.c optional inet ipsec netinet/ip_mroute.c optional mrouting inet netinet/ip_options.c optional inet netinet/ip_output.c optional inet netinet/ip_reass.c optional inet netinet/raw_ip.c optional inet | inet6 netinet/cc/cc.c optional inet | inet6 netinet/cc/cc_newreno.c optional inet | inet6 netinet/sctp_asconf.c optional inet sctp | inet6 sctp netinet/sctp_auth.c optional inet sctp | inet6 sctp netinet/sctp_bsd_addr.c optional inet sctp | inet6 sctp netinet/sctp_cc_functions.c optional inet sctp | inet6 sctp netinet/sctp_crc32.c optional inet sctp | inet6 sctp netinet/sctp_indata.c optional inet sctp | inet6 sctp netinet/sctp_input.c optional inet sctp | inet6 sctp netinet/sctp_output.c optional inet sctp | inet6 sctp netinet/sctp_pcb.c optional inet sctp | inet6 sctp netinet/sctp_peeloff.c optional inet sctp | inet6 sctp netinet/sctp_ss_functions.c optional inet sctp | inet6 sctp netinet/sctp_syscalls.c optional inet sctp | inet6 sctp netinet/sctp_sysctl.c optional inet sctp | inet6 sctp netinet/sctp_timer.c optional inet sctp | inet6 sctp netinet/sctp_usrreq.c optional inet sctp | inet6 sctp netinet/sctputil.c optional inet sctp | inet6 sctp netinet/siftr.c optional inet siftr alq | inet6 siftr alq netinet/tcp_debug.c optional tcpdebug netinet/tcp_fastopen.c optional inet tcp_rfc7413 | inet6 tcp_rfc7413 netinet/tcp_hostcache.c optional inet | inet6 netinet/tcp_input.c optional inet | inet6 netinet/tcp_lro.c optional inet | inet6 netinet/tcp_output.c optional inet | inet6 netinet/tcp_offload.c optional tcp_offload inet | tcp_offload inet6 netinet/tcp_pcap.c optional inet tcppcap | inet6 tcppcap netinet/tcp_reass.c optional inet | inet6 netinet/tcp_sack.c optional inet | inet6 netinet/tcp_subr.c optional inet | inet6 netinet/tcp_syncache.c optional inet | inet6 netinet/tcp_timer.c optional inet | inet6 netinet/tcp_timewait.c optional inet | inet6 netinet/tcp_usrreq.c optional inet | inet6 netinet/udp_usrreq.c optional inet | inet6 netinet/libalias/alias.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_db.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_mod.c optional libalias | netgraph_nat netinet/libalias/alias_proxy.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_util.c optional libalias inet | netgraph_nat inet netinet/libalias/alias_sctp.c optional libalias inet | netgraph_nat inet netinet6/dest6.c optional inet6 netinet6/frag6.c optional inet6 netinet6/icmp6.c optional inet6 netinet6/in6.c optional inet6 netinet6/in6_cksum.c optional inet6 netinet6/in6_fib.c optional inet6 netinet6/in6_gif.c optional gif inet6 | netgraph_gif inet6 netinet6/in6_ifattach.c optional inet6 netinet6/in6_jail.c 
optional inet6 netinet6/in6_mcast.c optional inet6 netinet6/in6_pcb.c optional inet6 netinet6/in6_pcbgroup.c optional inet6 pcbgroup netinet6/in6_proto.c optional inet6 netinet6/in6_rmx.c optional inet6 netinet6/in6_rss.c optional inet6 rss netinet6/in6_src.c optional inet6 netinet6/ip6_forward.c optional inet6 netinet6/ip6_gre.c optional gre inet6 netinet6/ip6_id.c optional inet6 netinet6/ip6_input.c optional inet6 netinet6/ip6_mroute.c optional mrouting inet6 netinet6/ip6_output.c optional inet6 netinet6/ip6_ipsec.c optional inet6 ipsec netinet6/mld6.c optional inet6 netinet6/nd6.c optional inet6 netinet6/nd6_nbr.c optional inet6 netinet6/nd6_rtr.c optional inet6 netinet6/raw_ip6.c optional inet6 netinet6/route6.c optional inet6 netinet6/scope6.c optional inet6 netinet6/sctp6_usrreq.c optional inet6 sctp netinet6/udp6_usrreq.c optional inet6 netipsec/ipsec.c optional ipsec inet | ipsec inet6 netipsec/ipsec_input.c optional ipsec inet | ipsec inet6 netipsec/ipsec_mbuf.c optional ipsec inet | ipsec inet6 netipsec/ipsec_output.c optional ipsec inet | ipsec inet6 netipsec/key.c optional ipsec inet | ipsec inet6 netipsec/key_debug.c optional ipsec inet | ipsec inet6 netipsec/keysock.c optional ipsec inet | ipsec inet6 netipsec/xform_ah.c optional ipsec inet | ipsec inet6 netipsec/xform_esp.c optional ipsec inet | ipsec inet6 netipsec/xform_ipcomp.c optional ipsec inet | ipsec inet6 netipsec/xform_tcp.c optional ipsec inet tcp_signature | \ ipsec inet6 tcp_signature netnatm/natm.c optional natm netnatm/natm_pcb.c optional natm netnatm/natm_proto.c optional natm netpfil/ipfw/dn_aqm_codel.c optional inet dummynet netpfil/ipfw/dn_aqm_pie.c optional inet dummynet netpfil/ipfw/dn_heap.c optional inet dummynet netpfil/ipfw/dn_sched_fifo.c optional inet dummynet netpfil/ipfw/dn_sched_fq_codel.c optional inet dummynet netpfil/ipfw/dn_sched_fq_pie.c optional inet dummynet netpfil/ipfw/dn_sched_prio.c optional inet dummynet netpfil/ipfw/dn_sched_qfq.c optional inet dummynet netpfil/ipfw/dn_sched_rr.c optional inet dummynet netpfil/ipfw/dn_sched_wf2q.c optional inet dummynet netpfil/ipfw/ip_dummynet.c optional inet dummynet netpfil/ipfw/ip_dn_io.c optional inet dummynet netpfil/ipfw/ip_dn_glue.c optional inet dummynet netpfil/ipfw/ip_fw2.c optional inet ipfirewall netpfil/ipfw/ip_fw_bpf.c optional inet ipfirewall netpfil/ipfw/ip_fw_dynamic.c optional inet ipfirewall netpfil/ipfw/ip_fw_eaction.c optional inet ipfirewall netpfil/ipfw/ip_fw_log.c optional inet ipfirewall netpfil/ipfw/ip_fw_pfil.c optional inet ipfirewall netpfil/ipfw/ip_fw_sockopt.c optional inet ipfirewall netpfil/ipfw/ip_fw_table.c optional inet ipfirewall netpfil/ipfw/ip_fw_table_algo.c optional inet ipfirewall netpfil/ipfw/ip_fw_table_value.c optional inet ipfirewall netpfil/ipfw/ip_fw_iface.c optional inet ipfirewall netpfil/ipfw/ip_fw_nat.c optional inet ipfirewall_nat netpfil/ipfw/nat64/ip_fw_nat64.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64lsn.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64lsn_control.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64stl.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64stl_control.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nat64/nat64_translate.c optional inet inet6 ipfirewall \ ipfirewall_nat64 netpfil/ipfw/nptv6/ip_fw_nptv6.c optional inet inet6 ipfirewall \ ipfirewall_nptv6 netpfil/ipfw/nptv6/nptv6.c optional inet inet6 ipfirewall \ ipfirewall_nptv6 
netpfil/pf/if_pflog.c optional pflog pf inet netpfil/pf/if_pfsync.c optional pfsync pf inet netpfil/pf/pf.c optional pf inet netpfil/pf/pf_if.c optional pf inet netpfil/pf/pf_ioctl.c optional pf inet netpfil/pf/pf_lb.c optional pf inet netpfil/pf/pf_norm.c optional pf inet netpfil/pf/pf_osfp.c optional pf inet netpfil/pf/pf_ruleset.c optional pf inet netpfil/pf/pf_table.c optional pf inet netpfil/pf/in4_cksum.c optional pf inet netsmb/smb_conn.c optional netsmb netsmb/smb_crypt.c optional netsmb netsmb/smb_dev.c optional netsmb netsmb/smb_iod.c optional netsmb netsmb/smb_rq.c optional netsmb netsmb/smb_smb.c optional netsmb netsmb/smb_subr.c optional netsmb netsmb/smb_trantcp.c optional netsmb netsmb/smb_usr.c optional netsmb nfs/bootp_subr.c optional bootp nfscl nfs/krpc_subr.c optional bootp nfscl nfs/nfs_diskless.c optional nfscl nfs_root nfs/nfs_fha.c optional nfsd nfs/nfs_lock.c optional nfscl | nfslockd | nfsd nfs/nfs_nfssvc.c optional nfscl | nfsd nlm/nlm_advlock.c optional nfslockd | nfsd nlm/nlm_prot_clnt.c optional nfslockd | nfsd nlm/nlm_prot_impl.c optional nfslockd | nfsd nlm/nlm_prot_server.c optional nfslockd | nfsd nlm/nlm_prot_svc.c optional nfslockd | nfsd nlm/nlm_prot_xdr.c optional nfslockd | nfsd nlm/sm_inter_xdr.c optional nfslockd | nfsd # Linux Kernel Programming Interface compat/linuxkpi/common/src/linux_kmod.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_compat.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_pci.c optional compat_linuxkpi pci \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_idr.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_radix.c optional compat_linuxkpi \ compile-with "${LINUXKPI_C}" compat/linuxkpi/common/src/linux_usb.c optional compat_linuxkpi usb \ compile-with "${LINUXKPI_C}" # OpenFabrics Enterprise Distribution (Infiniband) ofed/drivers/infiniband/core/addr.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/agent.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/cache.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" # XXX Mad.c must be ordered before cm.c for sysinit sets to occur in # the correct order. 
ofed/drivers/infiniband/core/mad.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/cm.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/ -Wno-unused-function" ofed/drivers/infiniband/core/cma.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/device.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/fmr_pool.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/iwcm.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/mad_rmpp.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/multicast.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/packer.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/peer_mem.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/sa_query.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/smi.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/sysfs.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/ucm.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/ucma.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/ud_header.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/umem.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/user_mad.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/uverbs_cmd.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/uverbs_main.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/uverbs_marshall.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/core/verbs.c optional ofed \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" ofed/drivers/infiniband/ulp/ipoib/ipoib_cm.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" #ofed/drivers/infiniband/ulp/ipoib/ipoib_fs.c optional ipoib \ # compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_ib.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_main.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_multicast.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/ipoib/ipoib_verbs.c optional ipoib \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" #ofed/drivers/infiniband/ulp/ipoib/ipoib_vlan.c optional ipoib \ # compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" ofed/drivers/infiniband/ulp/sdp/sdp_bcopy.c optional sdp inet \ compile-with "${OFED_C} 
-I$S/ofed/drivers/infiniband/ulp/sdp/" ofed/drivers/infiniband/ulp/sdp/sdp_main.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" ofed/drivers/infiniband/ulp/sdp/sdp_rx.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" ofed/drivers/infiniband/ulp/sdp/sdp_cma.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" ofed/drivers/infiniband/ulp/sdp/sdp_tx.c optional sdp inet \ compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" dev/mlx4/mlx4_ib/mlx4_ib_alias_GUID.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_mcg.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_sysfs.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_cm.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_ah.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_cq.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_doorbell.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_mad.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_main.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_exp.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_mr.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_qp.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_srq.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_ib/mlx4_ib_wc.c optional mlx4ib pci ofed \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_alloc.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_catas.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_cmd.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_cq.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_eq.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_fw.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_icm.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_intf.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_main.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_mcg.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_mr.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_pd.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_port.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_profile.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_qp.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_reset.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_sense.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_srq.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_resource_tracker.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_core/mlx4_sys_tune.c optional mlx4 pci \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_cq.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_main.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_netdev.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_port.c optional mlx4en 
pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_resources.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_rx.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx4/mlx4_en/mlx4_en_tx.c optional mlx4en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_alloc.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_cmd.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_cq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_eq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_flow_table.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_fw.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_health.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_mad.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_main.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_mcg.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_mr.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_pagealloc.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_pd.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_port.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_qp.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_srq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_transobj.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_uar.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_vport.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_core/mlx5_wq.c optional mlx5 pci \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_ethtool.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_main.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_tx.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_flow_table.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_rx.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" dev/mlx5/mlx5_en/mlx5_en_txrx.c optional mlx5en pci inet inet6 \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_allocator.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_av.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_catas.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_cmd.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_cq.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_eq.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_mad.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_main.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_mcg.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_memfree.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_mr.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_pd.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_profile.c optional mthca \ compile-with "${OFED_C}" 
ofed/drivers/infiniband/hw/mthca/mthca_provider.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_qp.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_reset.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_srq.c optional mthca \ compile-with "${OFED_C}" ofed/drivers/infiniband/hw/mthca/mthca_uar.c optional mthca \ compile-with "${OFED_C}" # crypto support opencrypto/cast.c optional crypto | ipsec opencrypto/criov.c optional crypto | ipsec opencrypto/crypto.c optional crypto | ipsec opencrypto/cryptodev.c optional cryptodev opencrypto/cryptodev_if.m optional crypto | ipsec opencrypto/cryptosoft.c optional crypto | ipsec opencrypto/cryptodeflate.c optional crypto | ipsec opencrypto/gmac.c optional crypto | ipsec opencrypto/gfmult.c optional crypto | ipsec opencrypto/rmd160.c optional crypto | ipsec opencrypto/skipjack.c optional crypto | ipsec opencrypto/xform.c optional crypto | ipsec rpc/auth_none.c optional krpc | nfslockd | nfscl | nfsd rpc/auth_unix.c optional krpc | nfslockd | nfscl | nfsd rpc/authunix_prot.c optional krpc | nfslockd | nfscl | nfsd rpc/clnt_bck.c optional krpc | nfslockd | nfscl | nfsd rpc/clnt_dg.c optional krpc | nfslockd | nfscl | nfsd rpc/clnt_rc.c optional krpc | nfslockd | nfscl | nfsd rpc/clnt_vc.c optional krpc | nfslockd | nfscl | nfsd rpc/getnetconfig.c optional krpc | nfslockd | nfscl | nfsd rpc/replay.c optional krpc | nfslockd | nfscl | nfsd rpc/rpc_callmsg.c optional krpc | nfslockd | nfscl | nfsd rpc/rpc_generic.c optional krpc | nfslockd | nfscl | nfsd rpc/rpc_prot.c optional krpc | nfslockd | nfscl | nfsd rpc/rpcb_clnt.c optional krpc | nfslockd | nfscl | nfsd rpc/rpcb_prot.c optional krpc | nfslockd | nfscl | nfsd rpc/svc.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_auth.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_auth_unix.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_dg.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_generic.c optional krpc | nfslockd | nfscl | nfsd rpc/svc_vc.c optional krpc | nfslockd | nfscl | nfsd rpc/rpcsec_gss/rpcsec_gss.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/rpcsec_gss_conf.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/rpcsec_gss_misc.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/rpcsec_gss_prot.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi rpc/rpcsec_gss/svc_rpcsec_gss.c optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi security/audit/audit.c optional audit security/audit/audit_arg.c optional audit security/audit/audit_bsm.c optional audit security/audit/audit_bsm_klib.c optional audit security/audit/audit_pipe.c optional audit security/audit/audit_syscalls.c standard security/audit/audit_trigger.c optional audit security/audit/audit_worker.c optional audit security/audit/bsm_domain.c optional audit security/audit/bsm_errno.c optional audit security/audit/bsm_fcntl.c optional audit security/audit/bsm_socket_type.c optional audit security/audit/bsm_token.c optional audit security/mac/mac_audit.c optional mac audit security/mac/mac_cred.c optional mac security/mac/mac_framework.c optional mac security/mac/mac_inet.c optional mac inet | mac inet6 security/mac/mac_inet6.c optional mac inet6 security/mac/mac_label.c optional mac security/mac/mac_net.c optional mac security/mac/mac_pipe.c optional mac security/mac/mac_posix_sem.c 
optional mac security/mac/mac_posix_shm.c optional mac security/mac/mac_priv.c optional mac security/mac/mac_process.c optional mac security/mac/mac_socket.c optional mac security/mac/mac_syscalls.c standard security/mac/mac_system.c optional mac security/mac/mac_sysv_msg.c optional mac security/mac/mac_sysv_sem.c optional mac security/mac/mac_sysv_shm.c optional mac security/mac/mac_vfs.c optional mac security/mac_biba/mac_biba.c optional mac_biba security/mac_bsdextended/mac_bsdextended.c optional mac_bsdextended security/mac_bsdextended/ugidfw_system.c optional mac_bsdextended security/mac_bsdextended/ugidfw_vnode.c optional mac_bsdextended security/mac_ifoff/mac_ifoff.c optional mac_ifoff security/mac_lomac/mac_lomac.c optional mac_lomac security/mac_mls/mac_mls.c optional mac_mls security/mac_none/mac_none.c optional mac_none security/mac_partition/mac_partition.c optional mac_partition security/mac_portacl/mac_portacl.c optional mac_portacl security/mac_seeotheruids/mac_seeotheruids.c optional mac_seeotheruids security/mac_stub/mac_stub.c optional mac_stub security/mac_test/mac_test.c optional mac_test teken/teken.c optional sc | vt ufs/ffs/ffs_alloc.c optional ffs ufs/ffs/ffs_balloc.c optional ffs ufs/ffs/ffs_inode.c optional ffs ufs/ffs/ffs_snapshot.c optional ffs ufs/ffs/ffs_softdep.c optional ffs ufs/ffs/ffs_subr.c optional ffs ufs/ffs/ffs_tables.c optional ffs ufs/ffs/ffs_vfsops.c optional ffs ufs/ffs/ffs_vnops.c optional ffs ufs/ffs/ffs_rawread.c optional ffs directio ufs/ffs/ffs_suspend.c optional ffs ufs/ufs/ufs_acl.c optional ffs ufs/ufs/ufs_bmap.c optional ffs ufs/ufs/ufs_dirhash.c optional ffs ufs/ufs/ufs_extattr.c optional ffs ufs/ufs/ufs_gjournal.c optional ffs UFS_GJOURNAL ufs/ufs/ufs_inode.c optional ffs ufs/ufs/ufs_lookup.c optional ffs ufs/ufs/ufs_quota.c optional ffs ufs/ufs/ufs_vfsops.c optional ffs ufs/ufs/ufs_vnops.c optional ffs vm/default_pager.c standard vm/device_pager.c standard vm/phys_pager.c standard vm/redzone.c optional DEBUG_REDZONE vm/sg_pager.c standard vm/swap_pager.c standard vm/uma_core.c standard vm/uma_dbg.c standard vm/memguard.c optional DEBUG_MEMGUARD vm/vm_fault.c standard vm/vm_glue.c standard vm/vm_init.c standard vm/vm_kern.c standard vm/vm_map.c standard vm/vm_meter.c standard vm/vm_mmap.c standard vm/vm_object.c standard vm/vm_page.c standard vm/vm_pageout.c standard vm/vm_pager.c standard vm/vm_phys.c standard vm/vm_radix.c standard vm/vm_reserv.c standard vm/vm_domain.c standard vm/vm_unix.c standard vm/vnode_pager.c standard xen/features.c optional xenhvm xen/xenbus/xenbus_if.m optional xenhvm xen/xenbus/xenbus.c optional xenhvm xen/xenbus/xenbusb_if.m optional xenhvm xen/xenbus/xenbusb.c optional xenhvm xen/xenbus/xenbusb_front.c optional xenhvm xen/xenbus/xenbusb_back.c optional xenhvm xen/xenmem/xenmem_if.m optional xenhvm xdr/xdr.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_array.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_mbuf.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_mem.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_reference.c optional krpc | nfslockd | nfscl | nfsd xdr/xdr_sizeof.c optional krpc | nfslockd | nfscl | nfsd Index: head/sys/dev/netmap/if_ixl_netmap.h =================================================================== --- head/sys/dev/netmap/if_ixl_netmap.h (revision 307393) +++ head/sys/dev/netmap/if_ixl_netmap.h (revision 307394) @@ -1,421 +1,421 @@ /* * Copyright (C) 2015, Luigi Rizzo. All rights reserved. 
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

/*
 * $FreeBSD$
 *
 * netmap support for: ixl
 *
 * derived from the ixgbe version.
 *
 * This is netmap support for a network driver.
 * This file contains code (only static or inline functions) used
 * by a single driver.  To avoid code replication we simply #include
 * it near the beginning of the standard driver.
 * For ixl the file is imported in two places, hence the conditional
 * at the beginning.
 */

#include <net/netmap.h>
#include <sys/selinfo.h>

/*
 * Some drivers may need the following headers. Others
 * already include them by default

#include <vm/vm.h>
#include <vm/pmap.h>

 */
#include <dev/netmap/netmap_kern.h>

int ixl_netmap_txsync(struct netmap_kring *kring, int flags);
int ixl_netmap_rxsync(struct netmap_kring *kring, int flags);

extern int ixl_rx_miss, ixl_rx_miss_bufs, ixl_crcstrip;

#ifdef NETMAP_IXL_MAIN
/*
 * device-specific sysctl variables:
 *
- * ixl_crcstrip: 0: keep CRC in rx frames (default), 1: strip it.
+ * ixl_crcstrip: 0: NIC keeps CRC in rx frames, 1: NIC strips it (default).
 * During regular operations the CRC is stripped, but on some
 * hardware reception of frames not multiple of 64 is slower,
 * so using crcstrip=0 helps in benchmarks.
 *
 * ixl_rx_miss, ixl_rx_miss_bufs:
 *	count packets that might be missed due to lost interrupts.
 */
SYSCTL_DECL(_dev_netmap);
/*
 * The ixl driver by default strips CRCs and we do not override it.
 */
#if 0
SYSCTL_INT(_dev_netmap, OID_AUTO, ixl_crcstrip,
-    CTLFLAG_RW, &ixl_crcstrip, 1, "strip CRC on rx frames");
+    CTLFLAG_RW, &ixl_crcstrip, 1, "NIC strips CRC on rx frames");
#endif
SYSCTL_INT(_dev_netmap, OID_AUTO, ixl_rx_miss,
    CTLFLAG_RW, &ixl_rx_miss, 0, "potentially missed rx intr");
SYSCTL_INT(_dev_netmap, OID_AUTO, ixl_rx_miss_bufs,
    CTLFLAG_RW, &ixl_rx_miss_bufs, 0, "potentially missed rx intr bufs");

/*
 * Register/unregister. We are already under netmap lock.
 * Only called on the first register or the last unregister.
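 *
 * (onoff != 0 means the port is entering native netmap mode,
 * onoff == 0 that it is returning to standard operation; in both
 * cases the NIC is quiesced, the native flags and callbacks are
 * updated, and the interface is reinitialized.)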
*/ static int ixl_netmap_reg(struct netmap_adapter *na, int onoff) { struct ifnet *ifp = na->ifp; struct ixl_vsi *vsi = ifp->if_softc; struct ixl_pf *pf = (struct ixl_pf *)vsi->back; IXL_PF_LOCK(pf); ixl_disable_intr(vsi); /* Tell the stack that the interface is no longer active */ ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); //set_crcstrip(&adapter->hw, onoff); /* enable or disable flags and callbacks in na and ifp */ if (onoff) { nm_set_native_flags(na); } else { nm_clear_native_flags(na); } ixl_init_locked(pf); /* also enables intr */ //set_crcstrip(&adapter->hw, onoff); // XXX why twice ? IXL_PF_UNLOCK(pf); return (ifp->if_drv_flags & IFF_DRV_RUNNING ? 0 : 1); } /* * The attach routine, called near the end of ixl_attach(), * fills the parameters for netmap_attach() and calls it. * It cannot fail; in the worst case (such as no memory), * netmap mode will be disabled and the driver will only * operate in standard mode. */ static void ixl_netmap_attach(struct ixl_vsi *vsi) { struct netmap_adapter na; bzero(&na, sizeof(na)); na.ifp = vsi->ifp; na.na_flags = NAF_BDG_MAYSLEEP; // XXX check that queues is set. printf("queues is %p\n", vsi->queues); if (vsi->queues) { na.num_tx_desc = vsi->queues[0].num_desc; na.num_rx_desc = vsi->queues[0].num_desc; } na.nm_txsync = ixl_netmap_txsync; na.nm_rxsync = ixl_netmap_rxsync; na.nm_register = ixl_netmap_reg; na.num_tx_rings = na.num_rx_rings = vsi->num_queues; netmap_attach(&na); } #else /* !NETMAP_IXL_MAIN, code for ixl_txrx.c */ /* * Reconcile kernel and user view of the transmit ring. * * All information is in the kring. * Userspace wants to send packets up to the one before kring->rhead, * kernel knows kring->nr_hwcur is the first unsent packet. * * Here we push packets out (as many as possible), and possibly * reclaim buffers from previously completed transmissions. * * The caller (netmap) guarantees that there is only one instance * running at any time. Any interference with other driver * methods should be handled by the individual drivers. */ int ixl_netmap_txsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap ring */ u_int nic_i; /* index into the NIC ring */ u_int n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; /* * interrupts on every tx packet are expensive so request * them every half ring, or where NS_REPORT is set */ u_int report_frequency = kring->nkr_num_slots >> 1; /* device-specific */ struct ixl_vsi *vsi = ifp->if_softc; struct ixl_queue *que = &vsi->queues[kring->ring_id]; struct tx_ring *txr = &que->txr; bus_dmamap_sync(txr->dma.tag, txr->dma.map, BUS_DMASYNC_POSTREAD); /* * First part: process new packets to send. * nm_i is the current index in the netmap ring, * nic_i is the corresponding index in the NIC ring. * * If we have packets to send (nm_i != head) * iterate over the netmap ring, fetch length and update * the corresponding slot in the NIC ring. Some drivers also * need to update the buffer's physical address in the NIC slot * even if NS_BUF_CHANGED is not set (PNMB computes the addresses). * * The netmap_reload_map() call is especially expensive, * even when (as in this case) the tag is 0, so do it only * when the buffer has actually changed. * * If possible do not set the report/intr bit on all slots, * but only a few times per ring or when NS_REPORT is set.
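 * (In this driver report_frequency is nkr_num_slots / 2, so e.g. on a
 * 1024-slot ring the RS bit is requested roughly once every 512
 * descriptors, plus wherever the application has set NS_REPORT.)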
* * Finally, on 10G and faster drivers, it might be useful * to prefetch the next slot and txr entry. */ nm_i = kring->nr_hwcur; if (nm_i != head) { /* we have new packets to send */ nic_i = netmap_idx_k2n(kring, nm_i); __builtin_prefetch(&ring->slot[nm_i]); __builtin_prefetch(&txr->buffers[nic_i]); for (n = 0; nm_i != head; n++) { struct netmap_slot *slot = &ring->slot[nm_i]; u_int len = slot->len; uint64_t paddr; void *addr = PNMB(na, slot, &paddr); /* device-specific */ struct i40e_tx_desc *curr = &txr->base[nic_i]; struct ixl_tx_buf *txbuf = &txr->buffers[nic_i]; u64 flags = (slot->flags & NS_REPORT || nic_i == 0 || nic_i == report_frequency) ? ((u64)I40E_TX_DESC_CMD_RS << I40E_TXD_QW1_CMD_SHIFT) : 0; /* prefetch for next round */ __builtin_prefetch(&ring->slot[nm_i + 1]); __builtin_prefetch(&txr->buffers[nic_i + 1]); NM_CHECK_ADDR_LEN(na, addr, len); if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ netmap_reload_map(na, txr->dma.tag, txbuf->map, addr); } slot->flags &= ~(NS_REPORT | NS_BUF_CHANGED); /* Fill the slot in the NIC ring. */ curr->buffer_addr = htole64(paddr); curr->cmd_type_offset_bsz = htole64( ((u64)len << I40E_TXD_QW1_TX_BUF_SZ_SHIFT) | flags | ((u64)I40E_TX_DESC_CMD_EOP << I40E_TXD_QW1_CMD_SHIFT) ); // XXX more ? /* make sure changes to the buffer are synced */ bus_dmamap_sync(txr->dma.tag, txbuf->map, BUS_DMASYNC_PREWRITE); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } kring->nr_hwcur = head; /* synchronize the NIC ring */ bus_dmamap_sync(txr->dma.tag, txr->dma.map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* (re)start the tx unit up to slot nic_i (excluded) */ wr32(vsi->hw, txr->tail, nic_i); } /* * Second part: reclaim buffers for completed transmissions. */ nic_i = LE32_TO_CPU(*(volatile __le32 *)&txr->base[que->num_desc]); if (nic_i != txr->next_to_clean) { /* some tx completed, increment avail */ txr->next_to_clean = nic_i; kring->nr_hwtail = nm_prev(netmap_idx_n2k(kring, nic_i), lim); } return 0; } /* * Reconcile kernel and user view of the receive ring. * Same as for the txsync, this routine must be efficient. * The caller guarantees a single invocation, but races against * the rest of the driver should be handled here. * * On call, kring->rhead is the first packet that userspace wants * to keep, and kring->rcur is the wakeup point. * The kernel has previously reported packets up to kring->rtail. * * If (flags & NAF_FORCE_READ) also check for incoming packets irrespective * of whether or not we received an interrupt. */ int ixl_netmap_rxsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap ring */ u_int nic_i; /* index into the NIC ring */ u_int n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; int force_update = (flags & NAF_FORCE_READ) || kring->nr_kflags & NKR_PENDINTR; /* device-specific */ struct ixl_vsi *vsi = ifp->if_softc; struct ixl_queue *que = &vsi->queues[kring->ring_id]; struct rx_ring *rxr = &que->rxr; if (head > lim) return netmap_ring_reinit(kring); /* XXX check sync modes */ bus_dmamap_sync(rxr->dma.tag, rxr->dma.map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); /* * First part: import newly received packets. * * nm_i is the index of the next free slot in the netmap ring, * nic_i is the index of the next received packet in the NIC ring, * and they may differ in case if_init() has been called while * in netmap mode.
For the receive ring we have * * nic_i = rxr->next_check; * nm_i = kring->nr_hwtail (previous) * and * nm_i == (nic_i + kring->nkr_hwofs) % ring_size * * rxr->next_check is set to 0 on a ring reinit */ if (netmap_no_pendintr || force_update) { int crclen = ixl_crcstrip ? 0 : 4; uint16_t slot_flags = kring->nkr_slot_flags; nic_i = rxr->next_check; // or also k2n(kring->nr_hwtail) nm_i = netmap_idx_n2k(kring, nic_i); for (n = 0; ; n++) { union i40e_32byte_rx_desc *curr = &rxr->base[nic_i]; uint64_t qword = le64toh(curr->wb.qword1.status_error_len); uint32_t staterr = (qword & I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT; if ((staterr & (1<<I40E_RX_DESC_STATUS_DD_SHIFT)) == 0) break; ring->slot[nm_i].len = ((qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - crclen; ring->slot[nm_i].flags = slot_flags; bus_dmamap_sync(rxr->ptag, rxr->buffers[nic_i].pmap, BUS_DMASYNC_POSTREAD); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } if (n) { /* update the state variables */ if (netmap_no_pendintr && !force_update) { /* diagnostics */ ixl_rx_miss ++; ixl_rx_miss_bufs += n; } rxr->next_check = nic_i; kring->nr_hwtail = nm_i; } kring->nr_kflags &= ~NKR_PENDINTR; } /* * Second part: skip past packets that userspace has released. * (kring->nr_hwcur to head excluded), * and make the buffers available for reception. * As usual nm_i is the index in the netmap ring, * nic_i is the index in the NIC ring, and * nm_i == (nic_i + kring->nkr_hwofs) % ring_size */ nm_i = kring->nr_hwcur; if (nm_i != head) { nic_i = netmap_idx_k2n(kring, nm_i); for (n = 0; nm_i != head; n++) { struct netmap_slot *slot = &ring->slot[nm_i]; uint64_t paddr; void *addr = PNMB(na, slot, &paddr); union i40e_32byte_rx_desc *curr = &rxr->base[nic_i]; struct ixl_rx_buf *rxbuf = &rxr->buffers[nic_i]; if (addr == NETMAP_BUF_BASE(na)) /* bad buf */ goto ring_reset; if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ netmap_reload_map(na, rxr->ptag, rxbuf->pmap, addr); slot->flags &= ~NS_BUF_CHANGED; } curr->read.pkt_addr = htole64(paddr); curr->read.hdr_addr = 0; // XXX needed bus_dmamap_sync(rxr->ptag, rxbuf->pmap, BUS_DMASYNC_PREREAD); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } kring->nr_hwcur = head; bus_dmamap_sync(rxr->dma.tag, rxr->dma.map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* * IMPORTANT: we must leave one free slot in the ring, * so move nic_i back by one unit */ nic_i = nm_prev(nic_i, lim); wr32(vsi->hw, rxr->tail, nic_i); } return 0; ring_reset: return netmap_ring_reinit(kring); } #endif /* !NETMAP_IXL_MAIN */ /* end of file */ Index: head/sys/dev/netmap/if_lem_netmap.h =================================================================== --- head/sys/dev/netmap/if_lem_netmap.h (revision 307393) +++ head/sys/dev/netmap/if_lem_netmap.h (revision 307394) @@ -1,492 +1,325 @@ /* * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution.
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * netmap support for: lem * * For details on netmap support please see ixgbe_netmap.h */ #include <net/netmap.h> #include <sys/selinfo.h> #include <vm/vm.h> #include <vm/pmap.h> /* vtophys ? */ #include <dev/netmap/netmap_kern.h> extern int netmap_adaptive_io; /* * Register/unregister. We are already under netmap lock. */ static int lem_netmap_reg(struct netmap_adapter *na, int onoff) { struct ifnet *ifp = na->ifp; struct adapter *adapter = ifp->if_softc; EM_CORE_LOCK(adapter); lem_disable_intr(adapter); /* Tell the stack that the interface is no longer active */ ifp->if_drv_flags &= ~(IFF_DRV_RUNNING | IFF_DRV_OACTIVE); #ifndef EM_LEGACY_IRQ // XXX do we need this ? taskqueue_block(adapter->tq); taskqueue_drain(adapter->tq, &adapter->rxtx_task); taskqueue_drain(adapter->tq, &adapter->link_task); #endif /* !EM_LEGACY_IRQ */ /* enable or disable flags and callbacks in na and ifp */ if (onoff) { nm_set_native_flags(na); } else { nm_clear_native_flags(na); } lem_init_locked(adapter); /* also enables intr */ #ifndef EM_LEGACY_IRQ taskqueue_unblock(adapter->tq); // XXX do we need this ? #endif /* !EM_LEGACY_IRQ */ EM_CORE_UNLOCK(adapter); return (ifp->if_drv_flags & IFF_DRV_RUNNING ? 0 : 1); } +static void +lem_netmap_intr(struct netmap_adapter *na, int onoff) +{ + struct ifnet *ifp = na->ifp; + struct adapter *adapter = ifp->if_softc; + + EM_CORE_LOCK(adapter); + if (onoff) { + lem_enable_intr(adapter); + } else { + lem_disable_intr(adapter); + } + EM_CORE_UNLOCK(adapter); +} + + /* * Reconcile kernel and user view of the transmit ring. */ static int lem_netmap_txsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap ring */ u_int nic_i; /* index into the NIC ring */ u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; /* generate an interrupt approximately every half ring */ u_int report_frequency = kring->nkr_num_slots >> 1; /* device-specific */ struct adapter *adapter = ifp->if_softc; -#ifdef NIC_PARAVIRT - struct paravirt_csb *csb = adapter->csb; - uint64_t *csbd = (uint64_t *)(csb + 1); -#endif /* NIC_PARAVIRT */ bus_dmamap_sync(adapter->txdma.dma_tag, adapter->txdma.dma_map, BUS_DMASYNC_POSTREAD); /* * First part: process new packets to send.
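 * (nm_i scans the netmap ring from kring->nr_hwcur up to head, nic_i is
 * the matching NIC index and, as in the other drivers in this directory,
 * nm_i == (nic_i + kring->nkr_hwofs) % ring_size.)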
*/ nm_i = kring->nr_hwcur; if (nm_i != head) { /* we have new packets to send */ -#ifdef NIC_PARAVIRT - int do_kick = 0; - uint64_t t = 0; // timestamp - int n = head - nm_i; - if (n < 0) - n += lim + 1; - if (csb) { - t = rdtsc(); /* last timestamp */ - csbd[16] += t - csbd[0]; /* total Wg */ - csbd[17] += n; /* Wg count */ - csbd[0] = t; - } -#endif /* NIC_PARAVIRT */ nic_i = netmap_idx_k2n(kring, nm_i); while (nm_i != head) { struct netmap_slot *slot = &ring->slot[nm_i]; u_int len = slot->len; uint64_t paddr; void *addr = PNMB(na, slot, &paddr); /* device-specific */ struct e1000_tx_desc *curr = &adapter->tx_desc_base[nic_i]; struct em_buffer *txbuf = &adapter->tx_buffer_area[nic_i]; int flags = (slot->flags & NS_REPORT || nic_i == 0 || nic_i == report_frequency) ? E1000_TXD_CMD_RS : 0; NM_CHECK_ADDR_LEN(na, addr, len); if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ curr->buffer_addr = htole64(paddr); netmap_reload_map(na, adapter->txtag, txbuf->map, addr); } slot->flags &= ~(NS_REPORT | NS_BUF_CHANGED); /* Fill the slot in the NIC ring. */ curr->upper.data = 0; curr->lower.data = htole32(adapter->txd_cmd | len | (E1000_TXD_CMD_EOP | flags) ); bus_dmamap_sync(adapter->txtag, txbuf->map, BUS_DMASYNC_PREWRITE); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); // XXX might try an early kick } kring->nr_hwcur = head; /* synchronize the NIC ring */ bus_dmamap_sync(adapter->txdma.dma_tag, adapter->txdma.dma_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); -#ifdef NIC_PARAVIRT - /* set unconditionally, then also kick if needed */ - if (csb) { - t = rdtsc(); - if (csb->host_need_txkick == 2) { - /* can compute an update of delta */ - int64_t delta = t - csbd[3]; - if (delta < 0) - delta = -delta; - if (csbd[8] == 0 || delta < csbd[8]) { - csbd[8] = delta; - csbd[9]++; - } - csbd[10]++; - } - csb->guest_tdt = nic_i; - csbd[18] += t - csbd[0]; // total wp - csbd[19] += n; - } - if (!csb || !csb->guest_csb_on || (csb->host_need_txkick & 1)) - do_kick = 1; - if (do_kick) -#endif /* NIC_PARAVIRT */ /* (re)start the tx unit up to slot nic_i (excluded) */ E1000_WRITE_REG(&adapter->hw, E1000_TDT(0), nic_i); -#ifdef NIC_PARAVIRT - if (do_kick) { - uint64_t t1 = rdtsc(); - csbd[20] += t1 - t; // total Np - csbd[21]++; - } -#endif /* NIC_PARAVIRT */ } /* * Second part: reclaim buffers for completed transmissions. */ if (ticks != kring->last_reclaim || flags & NAF_FORCE_RECLAIM || nm_kr_txempty(kring)) { kring->last_reclaim = ticks; /* record completed transmissions using TDH */ -#ifdef NIC_PARAVIRT - /* host updates tdh unconditionally, and we have - * no side effects on reads, so we can read from there - * instead of exiting. - */ - if (csb) { - static int drain = 0, nodrain=0, good = 0, bad = 0, fail = 0; - u_int x = adapter->next_tx_to_clean; - csbd[19]++; // XXX count reclaims - nic_i = csb->host_tdh; - if (csb->guest_csb_on) { - if (nic_i == x) { - bad++; - csbd[24]++; // failed reclaims - /* no progress, request kick and retry */ - csb->guest_need_txkick = 1; - mb(); // XXX barrier - nic_i = csb->host_tdh; - } else { - good++; - } - if (nic_i != x) { - csb->guest_need_txkick = 2; - if (nic_i == csb->guest_tdt) - drain++; - else - nodrain++; -#if 1 - if (netmap_adaptive_io) { - /* new mechanism: last half ring (or so) - * released one slot at a time. - * This effectively makes the system spin. - * - * Take next_to_clean + 1 as a reference. - * tdh must be ahead or equal - * On entry, the logical order is - * x < tdh = nic_i - * We first push tdh up to avoid wraps. 
- * The limit is tdh-ll (half ring). - * if tdh-256 < x we report x; - * else we report tdh-256 - */ - u_int tdh = nic_i; - u_int ll = csbd[15]; - u_int delta = lim/8; - if (netmap_adaptive_io == 2 || ll > delta) - csbd[15] = ll = delta; - else if (netmap_adaptive_io == 1 && ll > 1) { - csbd[15]--; - } - - if (nic_i >= kring->nkr_num_slots) { - RD(5, "bad nic_i %d on input", nic_i); - } - x = nm_next(x, lim); - if (tdh < x) - tdh += lim + 1; - if (tdh <= x + ll) { - nic_i = x; - csbd[25]++; //report n + 1; - } else { - tdh = nic_i; - if (tdh < ll) - tdh += lim + 1; - nic_i = tdh - ll; - csbd[26]++; // report tdh - ll - } - } -#endif - } else { - /* we stop, count whether we are idle or not */ - int bh_active = csb->host_need_txkick & 2 ? 4 : 0; - csbd[27+ csb->host_need_txkick]++; - if (netmap_adaptive_io == 1) { - if (bh_active && csbd[15] > 1) - csbd[15]--; - else if (!bh_active && csbd[15] < lim/2) - csbd[15]++; - } - bad--; - fail++; - } - } - RD(1, "drain %d nodrain %d good %d retry %d fail %d", - drain, nodrain, good, bad, fail); - } else -#endif /* !NIC_PARAVIRT */ nic_i = E1000_READ_REG(&adapter->hw, E1000_TDH(0)); if (nic_i >= kring->nkr_num_slots) { /* XXX can it happen ? */ D("TDH wrap %d", nic_i); nic_i -= kring->nkr_num_slots; } adapter->next_tx_to_clean = nic_i; kring->nr_hwtail = nm_prev(netmap_idx_n2k(kring, nic_i), lim); } return 0; } /* * Reconcile kernel and user view of the receive ring. */ static int lem_netmap_rxsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap ring */ u_int nic_i; /* index into the NIC ring */ u_int n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; int force_update = (flags & NAF_FORCE_READ) || kring->nr_kflags & NKR_PENDINTR; /* device-specific */ struct adapter *adapter = ifp->if_softc; -#ifdef NIC_PARAVIRT - struct paravirt_csb *csb = adapter->csb; - uint32_t csb_mode = csb && csb->guest_csb_on; - uint32_t do_host_rxkick = 0; -#endif /* NIC_PARAVIRT */ if (head > lim) return netmap_ring_reinit(kring); -#ifdef NIC_PARAVIRT - if (csb_mode) { - force_update = 1; - csb->guest_need_rxkick = 0; - } -#endif /* NIC_PARAVIRT */ /* XXX check sync modes */ bus_dmamap_sync(adapter->rxdma.dma_tag, adapter->rxdma.dma_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); /* * First part: import newly received packets. 
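 * (Slots are claimed up to the first descriptor that does not have
 * E1000_RXD_STAT_DD set; the length reported to userspace is the NIC
 * length minus the 4-byte CRC, see the loop below.)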
*/ if (netmap_no_pendintr || force_update) { uint16_t slot_flags = kring->nkr_slot_flags; nic_i = adapter->next_rx_desc_to_check; nm_i = netmap_idx_n2k(kring, nic_i); for (n = 0; ; n++) { struct e1000_rx_desc *curr = &adapter->rx_desc_base[nic_i]; uint32_t staterr = le32toh(curr->status); int len; -#ifdef NIC_PARAVIRT - if (csb_mode) { - if ((staterr & E1000_RXD_STAT_DD) == 0) { - /* don't bother to retry if more than 1 pkt */ - if (n > 1) - break; - csb->guest_need_rxkick = 1; - wmb(); - staterr = le32toh(curr->status); - if ((staterr & E1000_RXD_STAT_DD) == 0) { - break; - } else { /* we are good */ - csb->guest_need_rxkick = 0; - } - } - } else -#endif /* NIC_PARAVIRT */ if ((staterr & E1000_RXD_STAT_DD) == 0) break; len = le16toh(curr->length) - 4; // CRC if (len < 0) { RD(5, "bogus pkt (%d) size %d nic idx %d", n, len, nic_i); len = 0; } ring->slot[nm_i].len = len; ring->slot[nm_i].flags = slot_flags; bus_dmamap_sync(adapter->rxtag, adapter->rx_buffer_area[nic_i].map, BUS_DMASYNC_POSTREAD); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } if (n) { /* update the state variables */ -#ifdef NIC_PARAVIRT - if (csb_mode) { - if (n > 1) { - /* leave one spare buffer so we avoid rxkicks */ - nm_i = nm_prev(nm_i, lim); - nic_i = nm_prev(nic_i, lim); - n--; - } else { - csb->guest_need_rxkick = 1; - } - } -#endif /* NIC_PARAVIRT */ ND("%d new packets at nic %d nm %d tail %d", n, adapter->next_rx_desc_to_check, netmap_idx_n2k(kring, adapter->next_rx_desc_to_check), kring->nr_hwtail); adapter->next_rx_desc_to_check = nic_i; // if_inc_counter(ifp, IFCOUNTER_IPACKETS, n); kring->nr_hwtail = nm_i; } kring->nr_kflags &= ~NKR_PENDINTR; } /* * Second part: skip past packets that userspace has released. */ nm_i = kring->nr_hwcur; if (nm_i != head) { nic_i = netmap_idx_k2n(kring, nm_i); for (n = 0; nm_i != head; n++) { struct netmap_slot *slot = &ring->slot[nm_i]; uint64_t paddr; void *addr = PNMB(na, slot, &paddr); struct e1000_rx_desc *curr = &adapter->rx_desc_base[nic_i]; struct em_buffer *rxbuf = &adapter->rx_buffer_area[nic_i]; if (addr == NETMAP_BUF_BASE(na)) /* bad buf */ goto ring_reset; if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ curr->buffer_addr = htole64(paddr); netmap_reload_map(na, adapter->rxtag, rxbuf->map, addr); slot->flags &= ~NS_BUF_CHANGED; } curr->status = 0; bus_dmamap_sync(adapter->rxtag, rxbuf->map, BUS_DMASYNC_PREREAD); -#ifdef NIC_PARAVIRT - if (csb_mode && csb->host_rxkick_at == nic_i) - do_host_rxkick = 1; -#endif /* NIC_PARAVIRT */ nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } kring->nr_hwcur = head; bus_dmamap_sync(adapter->rxdma.dma_tag, adapter->rxdma.dma_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* * IMPORTANT: we must leave one free slot in the ring, * so move nic_i back by one unit */ nic_i = nm_prev(nic_i, lim); -#ifdef NIC_PARAVIRT - /* set unconditionally, then also kick if needed */ - if (csb) - csb->guest_rdt = nic_i; - if (!csb_mode || do_host_rxkick) -#endif /* NIC_PARAVIRT */ E1000_WRITE_REG(&adapter->hw, E1000_RDT(0), nic_i); } return 0; ring_reset: return netmap_ring_reinit(kring); } static void lem_netmap_attach(struct adapter *adapter) { struct netmap_adapter na; bzero(&na, sizeof(na)); na.ifp = adapter->ifp; na.na_flags = NAF_BDG_MAYSLEEP; na.num_tx_desc = adapter->num_tx_desc; na.num_rx_desc = adapter->num_rx_desc; na.nm_txsync = lem_netmap_txsync; na.nm_rxsync = lem_netmap_rxsync; na.nm_register = lem_netmap_reg; na.num_tx_rings = na.num_rx_rings = 1; + na.nm_intr = lem_netmap_intr; 
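+	/* nm_intr lets netmap enable or disable the device interrupts
+	 * (see lem_netmap_intr() above) without going through a full
+	 * reinit of the adapter. */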
netmap_attach(&na); } /* end of file */ Index: head/sys/dev/netmap/ixgbe_netmap.h =================================================================== --- head/sys/dev/netmap/ixgbe_netmap.h (revision 307393) +++ head/sys/dev/netmap/ixgbe_netmap.h (revision 307394) @@ -1,492 +1,507 @@ /* * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * netmap support for: ixgbe (both ix and ixv) * * This file is meant to be a reference on how to implement * netmap support for a network driver. * This file contains code, but only static or inline functions, used * by a single driver. To avoid replication of code we just #include * it near the beginning of the standard driver. */ #include <net/netmap.h> #include <sys/selinfo.h> /* * Some drivers may need the following headers. Others * already include them by default #include <vm/vm.h> #include <vm/pmap.h> */ #include <dev/netmap/netmap_kern.h> void ixgbe_netmap_attach(struct adapter *adapter); /* * device-specific sysctl variables: * - * ix_crcstrip: 0: keep CRC in rx frames (default), 1: strip it. + * ix_crcstrip: 0: NIC keeps CRC in rx frames (default), 1: NIC strips it. * During regular operations the CRC is stripped, but on some * hardware reception of frames whose length is not a multiple of 64 is slower, * so using crcstrip=0 helps in benchmarks. * * ix_rx_miss, ix_rx_miss_bufs: * count packets that might be missed due to lost interrupts. */ SYSCTL_DECL(_dev_netmap); static int ix_rx_miss, ix_rx_miss_bufs; int ix_crcstrip; SYSCTL_INT(_dev_netmap, OID_AUTO, ix_crcstrip, - CTLFLAG_RW, &ix_crcstrip, 0, "strip CRC on rx frames"); + CTLFLAG_RW, &ix_crcstrip, 0, "NIC strips CRC on rx frames"); SYSCTL_INT(_dev_netmap, OID_AUTO, ix_rx_miss, CTLFLAG_RW, &ix_rx_miss, 0, "potentially missed rx intr"); SYSCTL_INT(_dev_netmap, OID_AUTO, ix_rx_miss_bufs, CTLFLAG_RW, &ix_rx_miss_bufs, 0, "potentially missed rx intr bufs"); static void set_crcstrip(struct ixgbe_hw *hw, int onoff) { /* crc stripping is set in two places: * IXGBE_HLREG0 (modified on init_locked and hw reset) * IXGBE_RDRXCTL (set by the original driver in * ixgbe_setup_hw_rsc() called in init_locked. * We disable the setting when netmap is compiled in). * We update the values here, but also in ixgbe.c because * init_locked sometimes is called outside our control.
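 * (The two CRC-strip bits, HLREG0.RXCRCSTRP and RDRXCTL.CRCSTRIP, are
 * always written together below because the hardware expects them to be
 * consistent, cf. the 'hw requirements' note in the code.)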
*/ uint32_t hl, rxc; hl = IXGBE_READ_REG(hw, IXGBE_HLREG0); rxc = IXGBE_READ_REG(hw, IXGBE_RDRXCTL); if (netmap_verbose) D("%s read HLREG 0x%x rxc 0x%x", onoff ? "enter" : "exit", hl, rxc); /* hw requirements ... */ rxc &= ~IXGBE_RDRXCTL_RSCFRSTSIZE; rxc |= IXGBE_RDRXCTL_RSCACKC; if (onoff && !ix_crcstrip) { /* keep the crc. Fast rx */ hl &= ~IXGBE_HLREG0_RXCRCSTRP; rxc &= ~IXGBE_RDRXCTL_CRCSTRIP; } else { /* reset default mode */ hl |= IXGBE_HLREG0_RXCRCSTRP; rxc |= IXGBE_RDRXCTL_CRCSTRIP; } if (netmap_verbose) D("%s write HLREG 0x%x rxc 0x%x", onoff ? "enter" : "exit", hl, rxc); IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hl); IXGBE_WRITE_REG(hw, IXGBE_RDRXCTL, rxc); } +static void +ixgbe_netmap_intr(struct netmap_adapter *na, int onoff) +{ + struct ifnet *ifp = na->ifp; + struct adapter *adapter = ifp->if_softc; + IXGBE_CORE_LOCK(adapter); + if (onoff) { + ixgbe_enable_intr(adapter); // XXX maybe ixgbe_stop ? + } else { + ixgbe_disable_intr(adapter); // XXX maybe ixgbe_stop ? + } + IXGBE_CORE_UNLOCK(adapter); +} + /* * Register/unregister. We are already under netmap lock. * Only called on the first register or the last unregister. */ static int ixgbe_netmap_reg(struct netmap_adapter *na, int onoff) { struct ifnet *ifp = na->ifp; struct adapter *adapter = ifp->if_softc; IXGBE_CORE_LOCK(adapter); adapter->stop_locked(adapter); if (!IXGBE_IS_VF(adapter)) set_crcstrip(&adapter->hw, onoff); /* enable or disable flags and callbacks in na and ifp */ if (onoff) { nm_set_native_flags(na); } else { nm_clear_native_flags(na); } adapter->init_locked(adapter); /* also enables intr */ if (!IXGBE_IS_VF(adapter)) set_crcstrip(&adapter->hw, onoff); // XXX why twice ? IXGBE_CORE_UNLOCK(adapter); return (ifp->if_drv_flags & IFF_DRV_RUNNING ? 0 : 1); } /* * Reconcile kernel and user view of the transmit ring. * * All information is in the kring. * Userspace wants to send packets up to the one before kring->rhead, * kernel knows kring->nr_hwcur is the first unsent packet. * * Here we push packets out (as many as possible), and possibly * reclaim buffers from previously completed transmission. * * The caller (netmap) guarantees that there is only one instance * running at any time. Any interference with other driver * methods should be handled by the individual drivers. */ static int ixgbe_netmap_txsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap ring */ u_int nic_i; /* index into the NIC ring */ u_int n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; /* * interrupts on every tx packet are expensive so request * them every half ring, or where NS_REPORT is set */ u_int report_frequency = kring->nkr_num_slots >> 1; /* device-specific */ struct adapter *adapter = ifp->if_softc; struct tx_ring *txr = &adapter->tx_rings[kring->ring_id]; int reclaim_tx; bus_dmamap_sync(txr->txdma.dma_tag, txr->txdma.dma_map, BUS_DMASYNC_POSTREAD); /* * First part: process new packets to send. * nm_i is the current index in the netmap ring, * nic_i is the corresponding index in the NIC ring. * The two numbers differ because upon a *_init() we reset * the NIC ring but leave the netmap ring unchanged. * For the transmit ring, we have * * nm_i = kring->nr_hwcur * nic_i = IXGBE_TDT (not tracked in the driver) * and * nm_i == (nic_i + kring->nkr_hwofs) % ring_size * * In this driver kring->nkr_hwofs >= 0, but for other * drivers it might be negative as well. 
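 * For example, with 512 slots and nkr_hwofs == 3, the slot at nm_i == 10
 * maps to nic_i == (10 - 3 + 512) % 512 == 7, and indeed
 * (7 + 3) % 512 == 10; netmap_idx_k2n()/netmap_idx_n2k() perform exactly
 * this conversion.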
*/ /* * If we have packets to send (kring->nr_hwcur != kring->rhead) * iterate over the netmap ring, fetch length and update * the corresponding slot in the NIC ring. Some drivers also * need to update the buffer's physical address in the NIC slot * even if NS_BUF_CHANGED is not set (PNMB computes the addresses). * * The netmap_reload_map() call is especially expensive, * even when (as in this case) the tag is 0, so do it only * when the buffer has actually changed. * * If possible do not set the report/intr bit on all slots, * but only a few times per ring or when NS_REPORT is set. * * Finally, on 10G and faster drivers, it might be useful * to prefetch the next slot and txr entry. */ nm_i = kring->nr_hwcur; if (nm_i != head) { /* we have new packets to send */ nic_i = netmap_idx_k2n(kring, nm_i); __builtin_prefetch(&ring->slot[nm_i]); __builtin_prefetch(&txr->tx_buffers[nic_i]); for (n = 0; nm_i != head; n++) { struct netmap_slot *slot = &ring->slot[nm_i]; u_int len = slot->len; uint64_t paddr; void *addr = PNMB(na, slot, &paddr); /* device-specific */ union ixgbe_adv_tx_desc *curr = &txr->tx_base[nic_i]; struct ixgbe_tx_buf *txbuf = &txr->tx_buffers[nic_i]; int flags = (slot->flags & NS_REPORT || nic_i == 0 || nic_i == report_frequency) ? IXGBE_TXD_CMD_RS : 0; /* prefetch for next round */ __builtin_prefetch(&ring->slot[nm_i + 1]); __builtin_prefetch(&txr->tx_buffers[nic_i + 1]); NM_CHECK_ADDR_LEN(na, addr, len); if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ netmap_reload_map(na, txr->txtag, txbuf->map, addr); } slot->flags &= ~(NS_REPORT | NS_BUF_CHANGED); /* Fill the slot in the NIC ring. */ /* Use legacy descriptors; they are faster? */ curr->read.buffer_addr = htole64(paddr); curr->read.olinfo_status = 0; curr->read.cmd_type_len = htole32(len | flags | IXGBE_ADVTXD_DCMD_IFCS | IXGBE_TXD_CMD_EOP); /* make sure changes to the buffer are synced */ bus_dmamap_sync(txr->txtag, txbuf->map, BUS_DMASYNC_PREWRITE); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } kring->nr_hwcur = head; /* synchronize the NIC ring */ bus_dmamap_sync(txr->txdma.dma_tag, txr->txdma.dma_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* (re)start the tx unit up to slot nic_i (excluded) */ IXGBE_WRITE_REG(&adapter->hw, txr->tail, nic_i); } /* * Second part: reclaim buffers for completed transmissions. * Because this is expensive (we read a NIC register etc.) * we only do it in specific cases (see below). */ if (flags & NAF_FORCE_RECLAIM) { reclaim_tx = 1; /* forced reclaim */ } else if (!nm_kr_txempty(kring)) { reclaim_tx = 0; /* have buffers, no reclaim */ } else { /* * No buffers available. Locate previous slot with * REPORT_STATUS set. * If the slot has DD set, we can reclaim space, * otherwise wait for the next interrupt. * This enables interrupt moderation on the tx * side though it might reduce throughput. */ struct ixgbe_legacy_tx_desc *txd = (struct ixgbe_legacy_tx_desc *)txr->tx_base; nic_i = txr->next_to_clean + report_frequency; if (nic_i > lim) nic_i -= lim + 1; // round to the closest with dd set nic_i = (nic_i < kring->nkr_num_slots / 4 || nic_i >= kring->nkr_num_slots*3/4) ? 0 : report_frequency; reclaim_tx = txd[nic_i].upper.fields.status & IXGBE_TXD_STAT_DD; // XXX cpu_to_le32 ? } if (reclaim_tx) { /* * Record completed transmissions. * We (re)use the driver's txr->next_to_clean to keep * track of the most recently completed transmission.
* * The datasheet discourages the use of TDH to find * out the number of sent packets, but we only set * REPORT_STATUS in a few slots so TDH is the only * good way. */ nic_i = IXGBE_READ_REG(&adapter->hw, IXGBE_IS_VF(adapter) ? - IXGBE_VFTDH(kring->ring_id) : IXGBE_TDH(kring->ring_id)); + IXGBE_VFTDH(kring->ring_id) : IXGBE_TDH(kring->ring_id)); if (nic_i >= kring->nkr_num_slots) { /* XXX can it happen ? */ D("TDH wrap %d", nic_i); nic_i -= kring->nkr_num_slots; } if (nic_i != txr->next_to_clean) { /* some tx completed, increment avail */ txr->next_to_clean = nic_i; kring->nr_hwtail = nm_prev(netmap_idx_n2k(kring, nic_i), lim); } } return 0; } /* * Reconcile kernel and user view of the receive ring. * Same as for the txsync, this routine must be efficient. * The caller guarantees a single invocation, but races against * the rest of the driver should be handled here. * * On call, kring->rhead is the first packet that userspace wants * to keep, and kring->rcur is the wakeup point. * The kernel has previously reported packets up to kring->rtail. * * If (flags & NAF_FORCE_READ) also check for incoming packets irrespective * of whether or not we received an interrupt. */ static int ixgbe_netmap_rxsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap ring */ u_int nic_i; /* index into the NIC ring */ u_int n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; int force_update = (flags & NAF_FORCE_READ) || kring->nr_kflags & NKR_PENDINTR; /* device-specific */ struct adapter *adapter = ifp->if_softc; struct rx_ring *rxr = &adapter->rx_rings[kring->ring_id]; if (head > lim) return netmap_ring_reinit(kring); /* XXX check sync modes */ bus_dmamap_sync(rxr->rxdma.dma_tag, rxr->rxdma.dma_map, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); /* * First part: import newly received packets. * * nm_i is the index of the next free slot in the netmap ring, * nic_i is the index of the next received packet in the NIC ring, * and they may differ in case if_init() has been called while * in netmap mode. For the receive ring we have * * nic_i = rxr->next_to_check; * nm_i = kring->nr_hwtail (previous) * and * nm_i == (nic_i + kring->nkr_hwofs) % ring_size * * rxr->next_to_check is set to 0 on a ring reinit */ if (netmap_no_pendintr || force_update) { int crclen = (ix_crcstrip || IXGBE_IS_VF(adapter) ) ? 0 : 4; uint16_t slot_flags = kring->nkr_slot_flags; nic_i = rxr->next_to_check; // or also k2n(kring->nr_hwtail) nm_i = netmap_idx_n2k(kring, nic_i); for (n = 0; ; n++) { union ixgbe_adv_rx_desc *curr = &rxr->rx_base[nic_i]; uint32_t staterr = le32toh(curr->wb.upper.status_error); if ((staterr & IXGBE_RXD_STAT_DD) == 0) break; ring->slot[nm_i].len = le16toh(curr->wb.upper.length) - crclen; ring->slot[nm_i].flags = slot_flags; bus_dmamap_sync(rxr->ptag, rxr->rx_buffers[nic_i].pmap, BUS_DMASYNC_POSTREAD); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } if (n) { /* update the state variables */ if (netmap_no_pendintr && !force_update) { /* diagnostics */ ix_rx_miss ++; ix_rx_miss_bufs += n; } rxr->next_to_check = nic_i; kring->nr_hwtail = nm_i; } kring->nr_kflags &= ~NKR_PENDINTR; } /* * Second part: skip past packets that userspace has released. * (kring->nr_hwcur to kring->rhead excluded), * and make the buffers available for reception.
* As usual nm_i is the index in the netmap ring, * nic_i is the index in the NIC ring, and * nm_i == (nic_i + kring->nkr_hwofs) % ring_size */ nm_i = kring->nr_hwcur; if (nm_i != head) { nic_i = netmap_idx_k2n(kring, nm_i); for (n = 0; nm_i != head; n++) { struct netmap_slot *slot = &ring->slot[nm_i]; uint64_t paddr; void *addr = PNMB(na, slot, &paddr); union ixgbe_adv_rx_desc *curr = &rxr->rx_base[nic_i]; struct ixgbe_rx_buf *rxbuf = &rxr->rx_buffers[nic_i]; if (addr == NETMAP_BUF_BASE(na)) /* bad buf */ goto ring_reset; if (slot->flags & NS_BUF_CHANGED) { /* buffer has changed, reload map */ netmap_reload_map(na, rxr->ptag, rxbuf->pmap, addr); slot->flags &= ~NS_BUF_CHANGED; } curr->wb.upper.status_error = 0; curr->read.pkt_addr = htole64(paddr); bus_dmamap_sync(rxr->ptag, rxbuf->pmap, BUS_DMASYNC_PREREAD); nm_i = nm_next(nm_i, lim); nic_i = nm_next(nic_i, lim); } kring->nr_hwcur = head; bus_dmamap_sync(rxr->rxdma.dma_tag, rxr->rxdma.dma_map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); /* * IMPORTANT: we must leave one free slot in the ring, * so move nic_i back by one unit */ nic_i = nm_prev(nic_i, lim); IXGBE_WRITE_REG(&adapter->hw, rxr->tail, nic_i); } return 0; ring_reset: return netmap_ring_reinit(kring); } /* * The attach routine, called near the end of ixgbe_attach(), * fills the parameters for netmap_attach() and calls it. * It cannot fail; in the worst case (such as no memory), * netmap mode will be disabled and the driver will only * operate in standard mode. */ void ixgbe_netmap_attach(struct adapter *adapter) { struct netmap_adapter na; bzero(&na, sizeof(na)); na.ifp = adapter->ifp; na.na_flags = NAF_BDG_MAYSLEEP; na.num_tx_desc = adapter->num_tx_desc; na.num_rx_desc = adapter->num_rx_desc; na.nm_txsync = ixgbe_netmap_txsync; na.nm_rxsync = ixgbe_netmap_rxsync; na.nm_register = ixgbe_netmap_reg; na.num_tx_rings = na.num_rx_rings = adapter->num_queues; + na.nm_intr = ixgbe_netmap_intr; netmap_attach(&na); } /* end of file */ Index: head/sys/dev/netmap/netmap.c =================================================================== --- head/sys/dev/netmap/netmap.c (revision 307393) +++ head/sys/dev/netmap/netmap.c (revision 307394) @@ -1,3166 +1,3332 @@ /* - * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved. + * Copyright (C) 2011-2014 Matteo Landi + * Copyright (C) 2011-2016 Luigi Rizzo + * Copyright (C) 2011-2016 Giuseppe Lettieri + * Copyright (C) 2011-2016 Vincenzo Maffione + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * This module supports memory mapped access to network devices, * see netmap(4). * * The module uses a large memory pool allocated by the kernel * and accessible as mmapped memory by multiple userspace threads/processes. * The memory pool contains packet buffers and "netmap rings", * i.e. user-accessible copies of the interface's queues. * * Access to the network card works like this: * 1. a process/thread issues one or more open() on /dev/netmap, to create * a select()able file descriptor on which events are reported. * 2. on each descriptor, the process issues an ioctl() to identify * the interface that should report events to the file descriptor. * 3. on each descriptor, the process issues an mmap() request to * map the shared memory region within the process' address space. * The list of interesting queues is indicated by a location in * the shared memory region. * 4. using the functions in the netmap(4) userspace API, a process * can look up the occupation state of a queue, access memory buffers, * and retrieve received packets or enqueue packets to transmit. * 5. using some ioctl()s the process can synchronize the userspace view * of the queue with the actual status in the kernel. This includes both * receiving the notification of new packets, and transmitting new * packets on the output interface. * 6. select() or poll() can be used to wait for events on individual * transmit or receive queues (or all queues for a given interface). * SYNCHRONIZATION (USER) The netmap rings and data structures may be shared among multiple user threads or even independent processes. Any synchronization among those threads/processes is delegated to the threads themselves. Only one thread at a time can be in a system call on the same netmap ring. The OS does not enforce this and only guarantees against system crashes in case of invalid usage. LOCKING (INTERNAL) Within the kernel, access to the netmap rings is protected as follows: - a spinlock on each ring, to handle producer/consumer races on RX rings attached to the host stack (against multiple host threads writing from the host stack to the same ring), and on 'destination' rings attached to a VALE switch (i.e. RX rings in VALE ports, and TX rings in NIC/host ports), protecting multiple active senders for the same destination - an atomic variable to guarantee that there is at most one instance of *_*xsync() on the ring at any time. For rings connected to user file descriptors, an atomic_test_and_set() protects this, and the lock on the ring is not actually used. For NIC RX rings connected to a VALE switch, an atomic_test_and_set() is also used to prevent multiple executions (the driver might indeed already guarantee this). For NIC TX rings connected to a VALE switch, the lock arbitrates access to the queue (both when allocating buffers and when pushing them out). - *xsync() should be protected against initializations of the card.
On FreeBSD most devices have the reset routine protected by a RING lock (ixgbe, igb, em) or core lock (re). lem is missing the RING protection on rx_reset(); this should be added. On linux there is an external lock on the tx path, which probably also arbitrates access to the reset routine. XXX to be revised - a per-interface core_lock protecting access from the host stack while interfaces may be detached from netmap mode. XXX there should be no need for this lock if we detach the interfaces only while they are down. --- VALE SWITCH --- NMG_LOCK() serializes all modifications to switches and ports. A switch cannot be deleted until all ports are gone. For each switch, an SX lock (RWlock on linux) protects deletion of ports. When configuring or deleting a port, the lock is acquired in exclusive mode (after holding NMG_LOCK). When forwarding, the lock is acquired in shared mode (without NMG_LOCK). The lock is held throughout the entire forwarding cycle, during which the thread may incur a page fault. Hence it is important that sleepable shared locks are used. On the rx ring, the per-port lock is grabbed initially to reserve a number of slots in the ring, then the lock is released, packets are copied from source to destination, and then the lock is acquired again and the receive ring is updated. (A similar thing is done on the tx ring for NIC and host stack ports attached to the switch) */ /* --- internals ---- * * Roadmap to the code that implements the above. * * > 1. a process/thread issues one or more open() on /dev/netmap, to create * > a select()able file descriptor on which events are reported. * * Internally, we allocate a netmap_priv_d structure, which will be - * initialized on ioctl(NIOCREGIF). + * initialized on ioctl(NIOCREGIF). There is one netmap_priv_d + * structure for each open(). * * os-specific: - * FreeBSD: netmap_open (netmap_freebsd.c). The priv is - * per-thread. - * linux: linux_netmap_open (netmap_linux.c). The priv is - * per-open. + * FreeBSD: see netmap_open() (netmap_freebsd.c) + * linux: see linux_netmap_open() (netmap_linux.c) * * > 2. on each descriptor, the process issues an ioctl() to identify * > the interface that should report events to the file descriptor. * * Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0. * Most important things happen in netmap_get_na() and * netmap_do_regif(), called from there. Additional details can be * found in the comments above those functions. * * In all cases, this action creates/takes-a-reference-to a * netmap_*_adapter describing the port, and allocates a netmap_if * and all necessary netmap rings, filling them with netmap buffers. * * In this phase, the sync callbacks for each ring are set (these are used * in steps 5 and 6 below). The callbacks depend on the type of adapter. * The adapter creation/initialization code puts them in the * netmap_adapter (fields na->nm_txsync and na->nm_rxsync). Then, they * are copied from there to the netmap_kring's during netmap_do_regif(), by * the nm_krings_create() callback. All the nm_krings_create callbacks * actually call netmap_krings_create() to perform this and the other * common stuff. netmap_krings_create() also takes care of the host rings, * if needed, by setting their sync callbacks appropriately. * * Additional actions depend on the kind of netmap_adapter that has been * registered: * * - netmap_hw_adapter: [netmap.c] * This is a system netdev/ifp with native netmap support.
* The ifp is detached from the host stack by redirecting: * - transmissions (from the network stack) to netmap_transmit() * - receive notifications to the nm_notify() callback for * this adapter. The callback is normally netmap_notify(), unless * the ifp is attached to a bridge using bwrap, in which case it * is netmap_bwrap_intr_notify(). * * - netmap_generic_adapter: [netmap_generic.c] * A system netdev/ifp without native netmap support. * * (the decision about native/non-native support is taken in * netmap_get_hw_na(), called by netmap_get_na()) * * - netmap_vp_adapter [netmap_vale.c] * Returned by netmap_get_bdg_na(). * This is a persistent or ephemeral VALE port. Ephemeral ports * are created on the fly if they don't already exist, and are * always attached to a bridge. * Persistent VALE ports must be created separately, and * then attached like normal NICs. The NIOCREGIF we are examining * will find them only if they had previously been created and * attached (see VALE_CTL below). * * - netmap_pipe_adapter [netmap_pipe.c] * Returned by netmap_get_pipe_na(). * Both pipe ends are created, if they didn't already exist. * * - netmap_monitor_adapter [netmap_monitor.c] * Returned by netmap_get_monitor_na(). * If successful, the nm_sync callbacks of the monitored adapter * will be intercepted by the returned monitor. * * - netmap_bwrap_adapter [netmap_vale.c] * Cannot be obtained in this way, see VALE_CTL below * * * os-specific: * linux: we first go through linux_netmap_ioctl() to * adapt the FreeBSD interface to the linux one. * * * > 3. on each descriptor, the process issues an mmap() request to * > map the shared memory region within the process' address space. * > The list of interesting queues is indicated by a location in * > the shared memory region. * * os-specific: * FreeBSD: netmap_mmap_single (netmap_freebsd.c). * linux: linux_netmap_mmap (netmap_linux.c). * * > 4. using the functions in the netmap(4) userspace API, a process * > can look up the occupation state of a queue, access memory buffers, * > and retrieve received packets or enqueue packets to transmit. * * these actions do not involve the kernel. * * > 5. using some ioctl()s the process can synchronize the userspace view * > of the queue with the actual status in the kernel. This includes both * > receiving the notification of new packets, and transmitting new * > packets on the output interface. * * These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC * cases. They invoke the nm_sync callbacks on the netmap_kring * structures, as initialized in step 2 and maybe later modified * by a monitor. Monitors, however, will always call the original * callback before doing anything else. * * * > 6. select() or poll() can be used to wait for events on individual * > transmit or receive queues (or all queues for a given interface). * * Implemented in netmap_poll(). This will call the same nm_sync() * callbacks as in step 5 above. * * os-specific: * linux: we first go through linux_netmap_poll() to adapt * the FreeBSD interface to the linux one. * * * ---- VALE_CTL ----- * * VALE switches are controlled by issuing a NIOCREGIF with a non-null * nr_cmd in the nmreq structure. These subcommands are handled by * netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created * and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF * subcommands, respectively.
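 * For example, a minimal userspace sketch (error handling omitted, and
 * the port name is just an example) that creates a persistent VALE port:
 *
 *	struct nmreq nmr;
 *	int fd = open("/dev/netmap", O_RDWR);
 *
 *	bzero(&nmr, sizeof(nmr));
 *	nmr.nr_version = NETMAP_API;
 *	strncpy(nmr.nr_name, "valeX:port0", sizeof(nmr.nr_name) - 1);
 *	nmr.nr_cmd = NETMAP_BDG_NEWIF;
 *	ioctl(fd, NIOCREGIF, &nmr);
 *
 * The same request with nr_cmd = NETMAP_BDG_DELIF destroys the port.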
* * Any network interface known to the system (including a persistent VALE * port) can be attached to a VALE switch by issuing the * NETMAP_BDG_ATTACH subcommand. After the attachment, persistent VALE ports * look exactly like ephemeral VALE ports (as created in step 2 above). The * attachment of other interfaces, instead, requires the creation of a * netmap_bwrap_adapter. Moreover, the attached interface must be put in * netmap mode. This may require the creation of a netmap_generic_adapter if * we have no native support for the interface, or if generic adapters have * been forced by sysctl. * * Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(), * called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach() * callback. In the case of the bwrap, the callback creates the * netmap_bwrap_adapter. The initialization of the bwrap is then * completed by calling netmap_do_regif() on it, in the nm_bdg_ctl() * callback (netmap_bwrap_bdg_ctl in netmap_vale.c). * A generic adapter for the wrapped ifp will be created if needed, when * netmap_get_bdg_na() calls netmap_get_hw_na(). * * * ---- DATAPATHS ----- * * -= SYSTEM DEVICE WITH NATIVE SUPPORT =- * * na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach() * * - tx from netmap userspace: * concurrently: * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context * kring->nm_sync() == DEVICE_netmap_txsync() * 2) device interrupt handler * na->nm_notify() == netmap_notify() * - rx from netmap userspace: * concurrently: * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context * kring->nm_sync() == DEVICE_netmap_rxsync() * 2) device interrupt handler * na->nm_notify() == netmap_notify() * - rx from host stack * concurrently: * 1) host stack * netmap_transmit() * na->nm_notify == netmap_notify() * 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context - * kring->nm_sync() == netmap_rxsync_from_host_compat + * kring->nm_sync() == netmap_rxsync_from_host * netmap_rxsync_from_host(na, NULL, NULL) * - tx to host stack * ioctl(NIOCTXSYNC)/netmap_poll() in process context - * kring->nm_sync() == netmap_txsync_to_host_compat + * kring->nm_sync() == netmap_txsync_to_host * netmap_txsync_to_host(na) - * NM_SEND_UP() - * FreeBSD: na->if_input() == ?? XXX + * nm_os_send_up() + * FreeBSD: na->if_input() == ether_input() * linux: netif_rx() with NM_MAGIC_PRIORITY_RX * * - * * -= SYSTEM DEVICE WITH GENERIC SUPPORT =- * * na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach() * * - tx from netmap userspace: * concurrently: * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context * kring->nm_sync() == generic_netmap_txsync() - * linux: dev_queue_xmit() with NM_MAGIC_PRIORITY_TX - * generic_ndo_start_xmit() - * orig. dev. start_xmit - * FreeBSD: na->if_transmit() == orig. dev if_transmit + * nm_os_generic_xmit_frame() + * linux: dev_queue_xmit() with NM_MAGIC_PRIORITY_TX + * ifp->ndo_start_xmit == generic_ndo_start_xmit() + * gna->save_start_xmit == orig. dev. start_xmit + * FreeBSD: na->if_transmit() == orig. 
dev if_transmit * 2) generic_mbuf_destructor() * na->nm_notify() == netmap_notify() * - rx from netmap userspace: * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context * kring->nm_sync() == generic_netmap_rxsync() * mbq_safe_dequeue() * 2) device driver * generic_rx_handler() * mbq_safe_enqueue() * na->nm_notify() == netmap_notify() - * - rx from host stack: - * concurrently: + * - rx from host stack + * FreeBSD: same as native + * Linux: same as native except: * 1) host stack - * linux: generic_ndo_start_xmit() - * netmap_transmit() - * FreeBSD: ifp->if_input() == netmap_transmit - * both: - * na->nm_notify() == netmap_notify() - * 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context - * kring->nm_sync() == netmap_rxsync_from_host_compat - * netmap_rxsync_from_host(na, NULL, NULL) - * - tx to host stack: - * ioctl(NIOCTXSYNC)/netmap_poll() in process context - * kring->nm_sync() == netmap_txsync_to_host_compat - * netmap_txsync_to_host(na) - * NM_SEND_UP() - * FreeBSD: na->if_input() == ??? XXX - * linux: netif_rx() with NM_MAGIC_PRIORITY_RX + * dev_queue_xmit() without NM_MAGIC_PRIORITY_TX + * ifp->ndo_start_xmit == generic_ndo_start_xmit() + * netmap_transmit() + * na->nm_notify() == netmap_notify() + * - tx to host stack (same as native): * * * -= VALE =- * * INCOMING: * * - VALE ports: * ioctl(NIOCTXSYNC)/netmap_poll() in process context * kring->nm_sync() == netmap_vp_txsync() * * - system device with native support: * from cable: * interrupt * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring) * kring->nm_sync() == DEVICE_netmap_rxsync() * netmap_vp_txsync() * kring->nm_sync() == DEVICE_netmap_rxsync() * from host stack: * netmap_transmit() * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring) - * kring->nm_sync() == netmap_rxsync_from_host_compat() + * kring->nm_sync() == netmap_rxsync_from_host() * netmap_vp_txsync() * * - system device with generic support: * from device driver: * generic_rx_handler() * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring) * kring->nm_sync() == generic_netmap_rxsync() * netmap_vp_txsync() * kring->nm_sync() == generic_netmap_rxsync() * from host stack: * netmap_transmit() * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring) - * kring->nm_sync() == netmap_rxsync_from_host_compat() + * kring->nm_sync() == netmap_rxsync_from_host() * netmap_vp_txsync() * * (all cases) --> nm_bdg_flush() * dest_na->nm_notify() == (see below) * * OUTGOING: * * - VALE ports: * concurrently: * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context * kring->nm_sync() == netmap_vp_rxsync() * 2) from nm_bdg_flush() * na->nm_notify() == netmap_notify() * * - system device with native support: * to cable: * na->nm_notify() == netmap_bwrap_notify() * netmap_vp_rxsync() * kring->nm_sync() == DEVICE_netmap_txsync() * netmap_vp_rxsync() * to host stack: * netmap_vp_rxsync() - * kring->nm_sync() == netmap_txsync_to_host_compat + * kring->nm_sync() == netmap_txsync_to_host * netmap_vp_rxsync_locked() * * - system device with generic adapter: * to device driver: * na->nm_notify() == netmap_bwrap_notify() * netmap_vp_rxsync() * kring->nm_sync() == generic_netmap_txsync() * netmap_vp_rxsync() * to host stack: * netmap_vp_rxsync() - * kring->nm_sync() == netmap_txsync_to_host_compat + * kring->nm_sync() == netmap_txsync_to_host * netmap_vp_rxsync() * */ /* * OS-specific code that is used only within this file.
/*
 * OS-specific code that is used only within this file.
 * Other OS-specific code that must be accessed by drivers
 * is present in netmap_kern.h
 */

#if defined(__FreeBSD__)
#include <sys/cdefs.h> /* prerequisite */
#include <sys/types.h>
#include <sys/errno.h>
#include <sys/param.h>	/* defines used in kernel.h */
#include <sys/kernel.h>	/* types used in module initialization */
#include <sys/conf.h>	/* cdevsw struct, UID, GID */
#include <sys/filio.h>	/* FIONBIO */
#include <sys/sockio.h>
#include <sys/socketvar.h>	/* struct socket */
#include <sys/malloc.h>
#include <sys/poll.h>
#include <sys/rwlock.h>
#include <sys/socket.h> /* sockaddrs */
#include <sys/selinfo.h>
#include <sys/sysctl.h>
#include <sys/jail.h>
#include <net/vnet.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/bpf.h>		/* BIOCIMMEDIATE */
#include <machine/bus.h>	/* bus_dmamap_* */
#include <sys/endian.h>
#include <sys/refcount.h>

-/* reduce conditional code */
-// linux API, use for the knlist in FreeBSD
-/* use a private mutex for the knlist */
-#define init_waitqueue_head(x) do {			\
-	struct mtx *m = &(x)->m;			\
-	mtx_init(m, "nm_kn_lock", NULL, MTX_DEF);	\
-	knlist_init_mtx(&(x)->si.si_note, m);		\
-    } while (0)
-
-#define OS_selrecord(a, b)	selrecord(a, &((b)->si))
-#define OS_selwakeup(a, b)	freebsd_selwakeup(a, b)
-
#elif defined(linux)

#include "bsd_glue.h"

#elif defined(__APPLE__)

#warning OSX support is only partial
#include "osx_glue.h"

+#elif defined (_WIN32)
+
+#include "win_glue.h"
+
#else

#error	Unsupported platform

#endif /* unsupported */

/*
 * common headers
 */
#include <net/netmap.h>
#include <dev/netmap/netmap_kern.h>
#include <dev/netmap/netmap_mem2.h>

-MALLOC_DEFINE(M_NETMAP, "netmap", "Network memory map");
-
/* user-controlled variables */
int netmap_verbose;

static int netmap_no_timestamp; /* don't timestamp on rxsync */
-
-SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
-SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
-    CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
-SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
-    CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
int netmap_mitigate = 1;
-SYSCTL_INT(_dev_netmap, OID_AUTO, mitigate, CTLFLAG_RW, &netmap_mitigate, 0, "");
int netmap_no_pendintr = 1;
-SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr,
-    CTLFLAG_RW, &netmap_no_pendintr, 0, "Always look for new received packets.");
int netmap_txsync_retry = 2;
-SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
-    &netmap_txsync_retry, 0 , "Number of txsync loops in bridge's flush.");
-
int netmap_adaptive_io = 0;
-SYSCTL_INT(_dev_netmap, OID_AUTO, adaptive_io, CTLFLAG_RW,
-    &netmap_adaptive_io, 0 , "Adaptive I/O on paravirt");
-
int netmap_flags = 0;	/* debug flags */
-int netmap_fwd = 0;	/* force transparent mode */
+static int netmap_fwd = 0;	/* force transparent mode */

/*
 * netmap_admode selects the netmap mode to use.
 * Invalid values are reset to NETMAP_ADMODE_BEST
 */
-enum { NETMAP_ADMODE_BEST = 0,	/* use native, fallback to generic */
+enum { NETMAP_ADMODE_BEST = 0,	/* use native, fallback to generic */
	NETMAP_ADMODE_NATIVE,	/* either native or none */
	NETMAP_ADMODE_GENERIC,	/* force generic */
	NETMAP_ADMODE_LAST };
static int netmap_admode = NETMAP_ADMODE_BEST;

-int netmap_generic_mit = 100*1000;   /* Generic mitigation interval in nanoseconds. */
-int netmap_generic_ringsize = 1024;   /* Generic ringsize. */
-int netmap_generic_rings = 1;   /* number of queues in generic. */
+/* netmap_generic_mit controls mitigation of RX notifications for
+ * the generic netmap adapter. The value is a time interval in
+ * nanoseconds. */
+int netmap_generic_mit = 100*1000;
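/*
 * Editor's note: an illustrative sketch of how these knobs can be tuned
 * from userspace on FreeBSD. They are exported as dev.netmap.* sysctls
 * by the SYSCTL_INT() calls below; the helper name here is made up.
 */
#if 0
#include <sys/types.h>
#include <sys/sysctl.h>

static int
example_set_generic_ringsize(int slots)
{
	/* equivalent to: sysctl dev.netmap.generic_ringsize=<slots> */
	return (sysctlbyname("dev.netmap.generic_ringsize",
	    NULL, NULL, &slots, sizeof(slots)));
}
#endif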
+/* We use by default netmap-aware qdiscs with generic netmap adapters,
+ * even if there can be a little performance hit with hardware NICs.
+ * However, using the qdisc is the safer approach, for two reasons:
+ * 1) it prevents non-fifo qdiscs from breaking the TX notification
+ *    scheme, which is based on mbuf destructors when txqdisc is
+ *    not used.
+ * 2) it makes it possible to transmit over software devices that
+ *    change skb->dev, like bridge, veth, ...
+ *
+ * Anyway, users looking for the best performance should
+ * use native adapters.
+ */
+int netmap_generic_txqdisc = 1;
+
+/* Default number of slots and queues for generic adapters. */
+int netmap_generic_ringsize = 1024;
+int netmap_generic_rings = 1;
+
+/* Non-zero if ptnet devices are allowed to use virtio-net headers. */
+int ptnet_vnet_hdr = 1;
+
+/*
+ * SYSCTL calls are grouped between SYSBEGIN and SYSEND to be emulated
+ * in some other operating systems
+ */
+SYSBEGIN(main_init);
+
+SYSCTL_DECL(_dev_netmap);
+SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
+SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
+    CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
+SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
+    CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
+SYSCTL_INT(_dev_netmap, OID_AUTO, mitigate, CTLFLAG_RW, &netmap_mitigate, 0, "");
+SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr,
+    CTLFLAG_RW, &netmap_no_pendintr, 0, "Always look for new received packets.");
+SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW,
+    &netmap_txsync_retry, 0 , "Number of txsync loops in bridge's flush.");
+SYSCTL_INT(_dev_netmap, OID_AUTO, adaptive_io, CTLFLAG_RW,
+    &netmap_adaptive_io, 0 , "Adaptive I/O on paravirt");
+
SYSCTL_INT(_dev_netmap, OID_AUTO, flags, CTLFLAG_RW, &netmap_flags, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW, &netmap_generic_ringsize, 0 , "");
SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW, &netmap_generic_rings, 0 , "");
+SYSCTL_INT(_dev_netmap, OID_AUTO, generic_txqdisc, CTLFLAG_RW, &netmap_generic_txqdisc, 0 , "");
+SYSCTL_INT(_dev_netmap, OID_AUTO, ptnet_vnet_hdr, CTLFLAG_RW, &ptnet_vnet_hdr, 0 , "");
+SYSEND;
+
NMG_LOCK_T	netmap_global_lock;
-int netmap_use_count = 0; /* number of active netmap instances */

/*
 * mark the ring as stopped, and run through the locks
 * to make sure other users get to see it.
+ * stopped must be either NM_KR_STOPPED (for unbounded stop)
+ * or NM_KR_LOCKED (brief stop for mutual exclusion purposes)
 */
static void
-netmap_disable_ring(struct netmap_kring *kr)
+netmap_disable_ring(struct netmap_kring *kr, int stopped)
{
-	kr->nkr_stopped = 1;
-	nm_kr_get(kr);
+	nm_kr_stop(kr, stopped);
+	// XXX check if nm_kr_stop is sufficient
	mtx_lock(&kr->q_lock);
	mtx_unlock(&kr->q_lock);
	nm_kr_put(kr);
}

/* stop or enable a single ring */
void
netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped)
{
	if (stopped)
-		netmap_disable_ring(NMR(na, t) + ring_id);
+		netmap_disable_ring(NMR(na, t) + ring_id, stopped);
	else
		NMR(na, t)[ring_id].nkr_stopped = 0;
}

/* stop or enable all the rings of na */
void
netmap_set_all_rings(struct netmap_adapter *na, int stopped)
{
	int i;
	enum txrx t;

	if (!nm_netmap_on(na))
		return;

	for_rx_tx(t) {
		for (i = 0; i < netmap_real_rings(na, t); i++) {
			netmap_set_ring(na, i, t, stopped);
		}
	}
}
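/*
 * Editor's note: an illustrative sketch of how a native driver might
 * use netmap_set_ring() above to quiesce a single tx ring around a
 * per-queue reset. DRIVER_reset_txq() is a placeholder, not a real
 * function in this file.
 */
#if 0
static void
DRIVER_reinit_txq(struct netmap_adapter *na, u_int q)
{
	netmap_set_ring(na, q, NR_TX, NM_KR_LOCKED);	/* brief stop */
	DRIVER_reset_txq(na->ifp, q);			/* hw-specific reset */
	netmap_set_ring(na, q, NR_TX, 0);		/* restart the ring */
}
#endif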
As a rule of thumb for linux drivers, this should be placed near - * each napi_disable(). + * onload). */ void netmap_disable_all_rings(struct ifnet *ifp) { - netmap_set_all_rings(NA(ifp), 1 /* stopped */); + if (NM_NA_VALID(ifp)) { + netmap_set_all_rings(NA(ifp), NM_KR_STOPPED); + } } /* * Convenience function used in drivers. Re-enables rxsync and txsync on the * adapter's rings In linux drivers, this should be placed near each * napi_enable(). */ void netmap_enable_all_rings(struct ifnet *ifp) { - netmap_set_all_rings(NA(ifp), 0 /* enabled */); + if (NM_NA_VALID(ifp)) { + netmap_set_all_rings(NA(ifp), 0 /* enabled */); + } } +void +netmap_make_zombie(struct ifnet *ifp) +{ + if (NM_NA_VALID(ifp)) { + struct netmap_adapter *na = NA(ifp); + netmap_set_all_rings(na, NM_KR_LOCKED); + na->na_flags |= NAF_ZOMBIE; + netmap_set_all_rings(na, 0); + } +} +void +netmap_undo_zombie(struct ifnet *ifp) +{ + if (NM_NA_VALID(ifp)) { + struct netmap_adapter *na = NA(ifp); + if (na->na_flags & NAF_ZOMBIE) { + netmap_set_all_rings(na, NM_KR_LOCKED); + na->na_flags &= ~NAF_ZOMBIE; + netmap_set_all_rings(na, 0); + } + } +} + /* * generic bound_checking function */ u_int nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg) { u_int oldv = *v; const char *op = NULL; if (dflt < lo) dflt = lo; if (dflt > hi) dflt = hi; if (oldv < lo) { *v = dflt; op = "Bump"; } else if (oldv > hi) { *v = hi; op = "Clamp"; } if (op && msg) printf("%s %s to %d (was %d)\n", op, msg, *v, oldv); return *v; } /* * packet-dump function, user-supplied or static buffer. * The destination buffer must be at least 30+4*len */ const char * nm_dump_buf(char *p, int len, int lim, char *dst) { static char _dst[8192]; int i, j, i0; static char hex[] ="0123456789abcdef"; char *o; /* output position */ #define P_HI(x) hex[((x) & 0xf0)>>4] #define P_LO(x) hex[((x) & 0xf)] #define P_C(x) ((x) >= 0x20 && (x) <= 0x7e ? (x) : '.') if (!dst) dst = _dst; if (lim <= 0 || lim > len) lim = len; o = dst; sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim); o += strlen(o); /* hexdump routine */ for (i = 0; i < lim; ) { sprintf(o, "%5d: ", i); o += strlen(o); memset(o, ' ', 48); i0 = i; for (j=0; j < 16 && i < lim; i++, j++) { o[j*3] = P_HI(p[i]); o[j*3+1] = P_LO(p[i]); } i = i0; for (j=0; j < 16 && i < lim; i++, j++) o[j + 48] = P_C(p[i]); o[j+48] = '\n'; o += j+49; } *o = '\0'; #undef P_HI #undef P_LO #undef P_C return dst; } /* * Fetch configuration from the device, to cope with dynamic * reconfigurations after loading the module. 
/*
 * Fetch configuration from the device, to cope with dynamic
 * reconfigurations after loading the module.
 */
/* call with NMG_LOCK held */
int
netmap_update_config(struct netmap_adapter *na)
{
	u_int txr, txd, rxr, rxd;

	txr = txd = rxr = rxd = 0;
	if (na->nm_config == NULL ||
	    na->nm_config(na, &txr, &txd, &rxr, &rxd))
	{
		/* take whatever we had at init time */
		txr = na->num_tx_rings;
		txd = na->num_tx_desc;
		rxr = na->num_rx_rings;
		rxd = na->num_rx_desc;
	}

	if (na->num_tx_rings == txr && na->num_tx_desc == txd &&
	    na->num_rx_rings == rxr && na->num_rx_desc == rxd)
		return 0; /* nothing changed */
	if (netmap_verbose || na->active_fds > 0) {
		D("stored config %s: txring %d x %d, rxring %d x %d",
			na->name,
			na->num_tx_rings, na->num_tx_desc,
			na->num_rx_rings, na->num_rx_desc);
		D("new config %s: txring %d x %d, rxring %d x %d",
			na->name, txr, txd, rxr, rxd);
	}
	if (na->active_fds == 0) {
		D("configuration changed (but fine)");
		na->num_tx_rings = txr;
		na->num_tx_desc = txd;
		na->num_rx_rings = rxr;
		na->num_rx_desc = rxd;
		return 0;
	}
	D("configuration changed while active, this is bad...");
	return 1;
}

-static void netmap_txsync_to_host(struct netmap_adapter *na);
-static int netmap_rxsync_from_host(struct netmap_adapter *na, struct thread *td, void *pwait);
+/* nm_sync callbacks for the host rings */
+static int netmap_txsync_to_host(struct netmap_kring *kring, int flags);
+static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags);

-/* kring->nm_sync callback for the host tx ring */
-static int
-netmap_txsync_to_host_compat(struct netmap_kring *kring, int flags)
-{
-	(void)flags; /* unused */
-	netmap_txsync_to_host(kring->na);
-	return 0;
-}
-
-/* kring->nm_sync callback for the host rx ring */
-static int
-netmap_rxsync_from_host_compat(struct netmap_kring *kring, int flags)
-{
-	(void)flags; /* unused */
-	netmap_rxsync_from_host(kring->na, NULL, NULL);
-	return 0;
-}
-
-
-
/* create the krings array and initialize the fields common to all adapters.
 * The array layout is this:
 *
 *                    +----------+
 * na->tx_rings ----->|          | \
 *                    |          |  } na->num_tx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host tx kring
 * na->rx_rings ----> +----------+
 *                    |          | \
 *                    |          |  } na->num_rx_rings
 *                    |          | /
 *                    +----------+
 *                    |          |    host rx kring
 *                    +----------+
 * na->tailroom ----->|          | \
 *                    |          |  } tailroom bytes
 *                    |          | /
 *                    +----------+
 *
 * Note: for compatibility, host krings are created even when not needed.
 * The tailroom space is currently used by vale ports for allocating leases.
 */
/* call with NMG_LOCK held */
int
netmap_krings_create(struct netmap_adapter *na, u_int tailroom)
{
	u_int i, len, ndesc;
	struct netmap_kring *kring;
	u_int n[NR_TXRX];
	enum txrx t;

	/* account for the (possibly fake) host rings */
	n[NR_TX] = na->num_tx_rings + 1;
	n[NR_RX] = na->num_rx_rings + 1;

	len = (n[NR_TX] + n[NR_RX]) * sizeof(struct netmap_kring) + tailroom;

	na->tx_rings = malloc((size_t)len, M_DEVBUF, M_NOWAIT | M_ZERO);
	if (na->tx_rings == NULL) {
		D("Cannot allocate krings");
		return ENOMEM;
	}
	na->rx_rings = na->tx_rings + n[NR_TX];

	/*
	 * All fields in krings are 0 except the ones initialized below,
	 * but better be explicit on important kring fields.
	 */
	for_rx_tx(t) {
		ndesc = nma_get_ndesc(na, t);
		for (i = 0; i < n[t]; i++) {
			kring = &NMR(na, t)[i];
			bzero(kring, sizeof(*kring));
			kring->na = na;
			kring->ring_id = i;
			kring->tx = t;
			kring->nkr_num_slots = ndesc;
+			kring->nr_mode = NKR_NETMAP_OFF;
+			kring->nr_pending_mode = NKR_NETMAP_OFF;
			if (i < nma_get_nrings(na, t)) {
				kring->nm_sync = (t == NR_TX ?
					na->nm_txsync : na->nm_rxsync);
-			} else if (i == na->num_tx_rings) {
+			} else {
				kring->nm_sync = (t == NR_TX ?
- netmap_txsync_to_host_compat : - netmap_rxsync_from_host_compat); + netmap_txsync_to_host: + netmap_rxsync_from_host); } kring->nm_notify = na->nm_notify; kring->rhead = kring->rcur = kring->nr_hwcur = 0; /* * IMPORTANT: Always keep one slot empty. */ kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0); - snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name, + snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name, nm_txrx2str(t), i); ND("ktx %s h %d c %d t %d", kring->name, kring->rhead, kring->rcur, kring->rtail); mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF); - init_waitqueue_head(&kring->si); + nm_os_selinfo_init(&kring->si); } - init_waitqueue_head(&na->si[t]); + nm_os_selinfo_init(&na->si[t]); } na->tailroom = na->rx_rings + n[NR_RX]; return 0; } -#ifdef __FreeBSD__ -static void -netmap_knlist_destroy(NM_SELINFO_T *si) -{ - /* XXX kqueue(9) needed; these will mirror knlist_init. */ - knlist_delete(&si->si.si_note, curthread, 0 /* not locked */ ); - knlist_destroy(&si->si.si_note); - /* now we don't need the mutex anymore */ - mtx_destroy(&si->m); -} -#endif /* __FreeBSD__ */ - - /* undo the actions performed by netmap_krings_create */ /* call with NMG_LOCK held */ void netmap_krings_delete(struct netmap_adapter *na) { struct netmap_kring *kring = na->tx_rings; enum txrx t; for_rx_tx(t) - netmap_knlist_destroy(&na->si[t]); + nm_os_selinfo_uninit(&na->si[t]); /* we rely on the krings layout described above */ for ( ; kring != na->tailroom; kring++) { mtx_destroy(&kring->q_lock); - netmap_knlist_destroy(&kring->si); + nm_os_selinfo_uninit(&kring->si); } free(na->tx_rings, M_DEVBUF); na->tx_rings = na->rx_rings = na->tailroom = NULL; } /* * Destructor for NIC ports. They also have an mbuf queue * on the rings connected to the host so we need to purge * them first. */ /* call with NMG_LOCK held */ -static void +void netmap_hw_krings_delete(struct netmap_adapter *na) { struct mbq *q = &na->rx_rings[na->num_rx_rings].rx_queue; ND("destroy sw mbq with len %d", mbq_len(q)); mbq_purge(q); - mbq_safe_destroy(q); + mbq_safe_fini(q); netmap_krings_delete(na); } /* * Undo everything that was done in netmap_do_regif(). In particular, * call nm_register(ifp,0) to stop netmap mode on the interface and * revert to normal operation. */ /* call with NMG_LOCK held */ static void netmap_unset_ringid(struct netmap_priv_d *); -static void netmap_rel_exclusive(struct netmap_priv_d *); -static void +static void netmap_krings_put(struct netmap_priv_d *); +void netmap_do_unregif(struct netmap_priv_d *priv) { struct netmap_adapter *na = priv->np_na; NMG_LOCK_ASSERT(); na->active_fds--; - /* release exclusive use if it was requested on regif */ - netmap_rel_exclusive(priv); - if (na->active_fds <= 0) { /* last instance */ + /* unset nr_pending_mode and possibly release exclusive mode */ + netmap_krings_put(priv); - if (netmap_verbose) - D("deleting last instance for %s", na->name); - #ifdef WITH_MONITOR + /* XXX check whether we have to do something with monitor + * when rings change nr_mode. 
	 */
+	if (na->active_fds <= 0) {
		/* walk through all the rings and tell any monitor
		 * that the port is going to exit netmap mode
		 */
		netmap_monitor_stop(na);
+	}
#endif
+
+	if (na->active_fds <= 0 || nm_kring_pending(priv)) {
+		na->nm_register(na, 0);
+	}
+
+	/* delete rings and buffers that are no longer needed */
+	netmap_mem_rings_delete(na);
+
+	if (na->active_fds <= 0) {	/* last instance */
		/*
-		 * (TO CHECK) This function is only called
+		 * (TO CHECK) We enter here
		 * when the last reference to this file descriptor goes
		 * away. This means we cannot have any pending poll()
		 * or interrupt routine operating on the structure.
		 * XXX The file may be closed in a thread while
		 * another thread is using it.
		 * Linux keeps the file opened until the last reference
		 * by any outstanding ioctl/poll or mmap is gone.
		 * FreeBSD does not track mmap()s (but we do) and
		 * wakes up any sleeping poll(). Need to check what
		 * happens if the close() occurs while a concurrent
		 * syscall is running.
		 */
-		na->nm_register(na, 0); /* off, clear flags */
-		/* Wake up any sleeping threads. netmap_poll will
-		 * then return POLLERR
-		 * XXX The wake up now must happen during *_down(), when
-		 * we order all activities to stop. -gl
-		 */
-		/* delete rings and buffers */
-		netmap_mem_rings_delete(na);
+		if (netmap_verbose)
+			D("deleting last instance for %s", na->name);
+
+		if (nm_netmap_on(na)) {
+			D("BUG: netmap on while going to delete the krings");
+		}
+		na->nm_krings_delete(na);
	}
+
	/* possibly decrement counter of tx_si/rx_si users */
	netmap_unset_ringid(priv);
	/* delete the nifp */
	netmap_mem_if_delete(na, priv->np_nifp);
	/* drop the allocator */
	netmap_mem_deref(na->nm_mem, na);
	/* mark the priv as unregistered */
	priv->np_na = NULL;
	priv->np_nifp = NULL;
}

/* call with NMG_LOCK held */
static __inline int
nm_si_user(struct netmap_priv_d *priv, enum txrx t)
{
	return (priv->np_na != NULL &&
		(priv->np_qlast[t] - priv->np_qfirst[t] > 1));
}

+struct netmap_priv_d*
+netmap_priv_new(void)
+{
+	struct netmap_priv_d *priv;
+
+	priv = malloc(sizeof(struct netmap_priv_d), M_DEVBUF,
+			      M_NOWAIT | M_ZERO);
+	if (priv == NULL)
+		return NULL;
+	priv->np_refs = 1;
+	nm_os_get_module();
+	return priv;
+}
+
/*
 * Destructor of the netmap_priv_d, called when the fd is closed
 * Action: undo all the things done by NIOCREGIF,
 * On FreeBSD we need to track whether there are active mmap()s,
 * and we use np_active_mmaps for that. On linux, the field is always 0.
 * The priv itself is freed here (this function used to return 1 if the
 * caller could free it, 0 otherwise).
 *
 */
/* call with NMG_LOCK held */
-int
-netmap_dtor_locked(struct netmap_priv_d *priv)
+void
+netmap_priv_delete(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;

	/* number of active references to this fd */
	if (--priv->np_refs > 0) {
-		return 0;
+		return;
	}
-	netmap_use_count--;
-	if (!na) {
-		return 1; //XXX is it correct?
+	nm_os_put_module();
+	if (na) {
+		netmap_do_unregif(priv);
	}
-	netmap_do_unregif(priv);
-	netmap_adapter_put(na);
-	return 1;
+	netmap_unget_na(na, priv->np_ifp);
+	bzero(priv, sizeof(*priv));	/* for safety */
+	free(priv, M_DEVBUF);
}

/* call with NMG_LOCK *not* held */
void
netmap_dtor(void *data)
{
	struct netmap_priv_d *priv = data;
-	int last_instance;

	NMG_LOCK();
-	last_instance = netmap_dtor_locked(priv);
+	netmap_priv_delete(priv);
	NMG_UNLOCK();
-	if (last_instance) {
-		bzero(priv, sizeof(*priv)); /* for safety */
-		free(priv, M_DEVBUF);
-	}
}


/*
 * Handlers for synchronization of the queues from/to the host.
 * Netmap has two operating modes:
 * - in the default mode, the rings connected to the host stack are
 *   just another ring pair managed by userspace;
 * - in transparent mode (XXX to be defined) incoming packets
 *   (from the host or the NIC) are marked as NS_FORWARD upon
 *   arrival, and the user application has a chance to reset the
 *   flag for packets that should be dropped.
 *   On the RXSYNC or poll(), packets in RX rings between
 *   kring->nr_hwcur and ring->cur with NS_FORWARD still set are moved
 *   to the other side.
 * The transfer NIC --> host is relatively easy, just encapsulate
 * into mbufs and we are done. The host --> NIC side is slightly
 * harder because there might not be room in the tx ring so it
 * might take a while before releasing the buffer.
 */
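/*
 * Editor's note: a hedged userspace sketch of driving the transparent
 * mode path implemented below. netmap_sw_to_nic() only moves slots that
 * carry NS_FORWARD (unless the dev.netmap.fwd sysctl forces it), so the
 * application enables NR_FORWARD on the host rx ring and marks the
 * slots it wants passed through. packet_allowed() is a placeholder.
 */
#if 0
	struct netmap_slot *slot;

	ring->flags |= NR_FORWARD;	/* enable transparent mode here */
	for (; !nm_ring_empty(ring);
	     ring->head = ring->cur = nm_ring_next(ring, ring->cur)) {
		slot = &ring->slot[ring->cur];
		if (packet_allowed(NETMAP_BUF(ring, slot->buf_idx), slot->len))
			slot->flags |= NS_FORWARD;	/* pass this one through */
	}
	ioctl(fd, NIOCRXSYNC, NULL);	/* slots still marked are forwarded */
#endif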
/*
 * pass a chain of buffers to the host stack as coming from 'dst'
 * We do not need to lock because the queue is private.
 */
static void
netmap_send_up(struct ifnet *dst, struct mbq *q)
{
	struct mbuf *m;
+	struct mbuf *head = NULL, *prev = NULL;

	/* send packets up, outside the lock */
	while ((m = mbq_dequeue(q)) != NULL) {
		if (netmap_verbose & NM_VERB_HOST)
			D("sending up pkt %p size %d", m, MBUF_LEN(m));
-		NM_SEND_UP(dst, m);
+		prev = nm_os_send_up(dst, m, prev);
+		if (head == NULL)
+			head = prev;
	}
-	mbq_destroy(q);
+	if (head)
+		nm_os_send_up(dst, NULL, head);
+	mbq_fini(q);
}

/*
 * put a copy of the buffers marked NS_FORWARD into an mbuf chain.
 * Take packets from hwcur to ring->head marked NS_FORWARD (or forced)
 * and pass them up. Drop remaining packets in the unlikely event
 * of an mbuf shortage.
 */
static void
netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
{
	u_int const lim = kring->nkr_num_slots - 1;
	u_int const head = kring->rhead;
	u_int n;
	struct netmap_adapter *na = kring->na;

	for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) {
		struct mbuf *m;
		struct netmap_slot *slot = &kring->ring->slot[n];

		if ((slot->flags & NS_FORWARD) == 0 && !force)
			continue;
		if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) {
			RD(5, "bad pkt at %d len %d", n, slot->len);
			continue;
		}
		slot->flags &= ~NS_FORWARD; // XXX needed ?
		/* XXX TODO: adapt to the case of a multisegment packet */
		m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL);

		if (m == NULL)
			break;
		mbq_enqueue(q, m);
	}
}

+static inline int
+_nm_may_forward(struct netmap_kring *kring)
+{
+	return ((netmap_fwd || kring->ring->flags & NR_FORWARD) &&
+		 kring->na->na_flags & NAF_HOST_RINGS &&
+		 kring->tx == NR_RX);
+}
+
+static inline int
+nm_may_forward_up(struct netmap_kring *kring)
+{
+	return	_nm_may_forward(kring) &&
+		 kring->ring_id != kring->na->num_rx_rings;
+}
+
+static inline int
+nm_may_forward_down(struct netmap_kring *kring)
+{
+	return	_nm_may_forward(kring) &&
+		 kring->ring_id == kring->na->num_rx_rings;
+}
+
/*
 * Send to the NIC rings packets marked NS_FORWARD between
 * kring->nr_hwcur and kring->rhead
 * Called under kring->rx_queue.lock on the sw rx ring,
 */
static u_int
netmap_sw_to_nic(struct netmap_adapter *na)
{
	struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
	struct netmap_slot *rxslot = kring->ring->slot;
	u_int i, rxcur = kring->nr_hwcur;
	u_int const head = kring->rhead;
	u_int const src_lim = kring->nkr_num_slots - 1;
	u_int sent = 0;

	/* scan rings to find space, then fill as much as possible */
	for (i = 0; i < na->num_tx_rings; i++) {
		struct netmap_kring *kdst = &na->tx_rings[i];
		struct netmap_ring *rdst = kdst->ring;
		u_int const dst_lim = kdst->nkr_num_slots - 1;

		/* XXX do we trust ring or kring->rcur,rtail ?
*/ for (; rxcur != head && !nm_ring_empty(rdst); rxcur = nm_next(rxcur, src_lim) ) { struct netmap_slot *src, *dst, tmp; - u_int dst_cur = rdst->cur; + u_int dst_head = rdst->head; src = &rxslot[rxcur]; if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd) continue; sent++; - dst = &rdst->slot[dst_cur]; + dst = &rdst->slot[dst_head]; tmp = *src; src->buf_idx = dst->buf_idx; src->flags = NS_BUF_CHANGED; dst->buf_idx = tmp.buf_idx; dst->len = tmp.len; dst->flags = NS_BUF_CHANGED; - rdst->cur = nm_next(dst_cur, dst_lim); + rdst->head = rdst->cur = nm_next(dst_head, dst_lim); } /* if (sent) XXX txsync ? */ } return sent; } /* * netmap_txsync_to_host() passes packets up. We are called from a * system call in user process context, and the only contention * can be among multiple user threads erroneously calling * this routine concurrently. */ -static void -netmap_txsync_to_host(struct netmap_adapter *na) +static int +netmap_txsync_to_host(struct netmap_kring *kring, int flags) { - struct netmap_kring *kring = &na->tx_rings[na->num_tx_rings]; + struct netmap_adapter *na = kring->na; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; struct mbq q; /* Take packets from hwcur to head and pass them up. * force head = cur since netmap_grab_packets() stops at head * In case of no buffers we give up. At the end of the loop, * the queue is drained in all cases. */ mbq_init(&q); netmap_grab_packets(kring, &q, 1 /* force */); ND("have %d pkts in queue", mbq_len(&q)); kring->nr_hwcur = head; kring->nr_hwtail = head + lim; if (kring->nr_hwtail > lim) kring->nr_hwtail -= lim + 1; netmap_send_up(na->ifp, &q); + return 0; } /* * rxsync backend for packets coming from the host stack. * They have been put in kring->rx_queue by netmap_transmit(). * We protect access to the kring using kring->rx_queue.lock * * This routine also does the selrecord if called from the poll handler - * (we know because td != NULL). + * (we know because sr != NULL). * - * NOTE: on linux, selrecord() is defined as a macro and uses pwait - * as an additional hidden argument. * returns the number of packets delivered to tx queues in * transparent mode, or a negative value if error */ static int -netmap_rxsync_from_host(struct netmap_adapter *na, struct thread *td, void *pwait) +netmap_rxsync_from_host(struct netmap_kring *kring, int flags) { - struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings]; + struct netmap_adapter *na = kring->na; struct netmap_ring *ring = kring->ring; u_int nm_i, n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; int ret = 0; struct mbq *q = &kring->rx_queue, fq; - (void)pwait; /* disable unused warnings */ - (void)td; - mbq_init(&fq); /* fq holds packets to be freed */ mbq_lock(q); /* First part: import newly received packets */ n = mbq_len(q); if (n) { /* grab packets from the queue */ struct mbuf *m; uint32_t stop_i; nm_i = kring->nr_hwtail; stop_i = nm_prev(nm_i, lim); while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) { int len = MBUF_LEN(m); struct netmap_slot *slot = &ring->slot[nm_i]; m_copydata(m, 0, len, NMB(na, slot)); ND("nm %d len %d", nm_i, len); if (netmap_verbose) D("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL)); slot->len = len; slot->flags = kring->nkr_slot_flags; nm_i = nm_next(nm_i, lim); mbq_enqueue(&fq, m); } kring->nr_hwtail = nm_i; } /* * Second part: skip past packets that userspace has released. 
*/ nm_i = kring->nr_hwcur; if (nm_i != head) { /* something was released */ - if (netmap_fwd || kring->ring->flags & NR_FORWARD) + if (nm_may_forward_down(kring)) { ret = netmap_sw_to_nic(na); + if (ret > 0) { + kring->nr_kflags |= NR_FORWARD; + ret = 0; + } + } kring->nr_hwcur = head; } - /* access copies of cur,tail in the kring */ - if (kring->rcur == kring->rtail && td) /* no bufs available */ - OS_selrecord(td, &kring->si); - mbq_unlock(q); mbq_purge(&fq); - mbq_destroy(&fq); + mbq_fini(&fq); return ret; } /* Get a netmap adapter for the port. * * If it is possible to satisfy the request, return 0 * with *na containing the netmap adapter found. * Otherwise return an error code, with *na containing NULL. * * When the port is attached to a bridge, we always return * EBUSY. * Otherwise, if the port is already bound to a file descriptor, * then we unconditionally return the existing adapter into *na. * In all the other cases, we return (into *na) either native, * generic or NULL, according to the following table: * * native_support * active_fds dev.netmap.admode YES NO * ------------------------------------------------------- * >0 * NA(ifp) NA(ifp) * * 0 NETMAP_ADMODE_BEST NATIVE GENERIC * 0 NETMAP_ADMODE_NATIVE NATIVE NULL * 0 NETMAP_ADMODE_GENERIC GENERIC GENERIC * */ - +static void netmap_hw_dtor(struct netmap_adapter *); /* needed by NM_IS_NATIVE() */ int netmap_get_hw_na(struct ifnet *ifp, struct netmap_adapter **na) { /* generic support */ int i = netmap_admode; /* Take a snapshot. */ struct netmap_adapter *prev_na; -#ifdef WITH_GENERIC - struct netmap_generic_adapter *gna; int error = 0; -#endif *na = NULL; /* default */ /* reset in case of invalid value */ if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST) i = netmap_admode = NETMAP_ADMODE_BEST; - if (NETMAP_CAPABLE(ifp)) { + if (NM_NA_VALID(ifp)) { prev_na = NA(ifp); /* If an adapter already exists, return it if * there are active file descriptors or if * netmap is not forced to use generic * adapters. */ if (NETMAP_OWNED_BY_ANY(prev_na) || i != NETMAP_ADMODE_GENERIC || prev_na->na_flags & NAF_FORCE_NATIVE #ifdef WITH_PIPES /* ugly, but we cannot allow an adapter switch * if some pipe is referring to this one */ || prev_na->na_next_pipe > 0 #endif ) { *na = prev_na; return 0; } } /* If there isn't native support and netmap is not allowed * to use generic adapters, we cannot satisfy the request. */ - if (!NETMAP_CAPABLE(ifp) && i == NETMAP_ADMODE_NATIVE) + if (!NM_IS_NATIVE(ifp) && i == NETMAP_ADMODE_NATIVE) return EOPNOTSUPP; -#ifdef WITH_GENERIC /* Otherwise, create a generic adapter and return it, * saving the previously used netmap adapter, if any. * * Note that here 'prev_na', if not NULL, MUST be a * native adapter, and CANNOT be a generic one. This is * true because generic adapters are created on demand, and * destroyed when not used anymore. Therefore, if the adapter * currently attached to an interface 'ifp' is generic, it * must be that * (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))). * Consequently, if NA(ifp) is generic, we will enter one of * the branches above. This ensures that we never override * a generic adapter with another generic adapter. */ - prev_na = NA(ifp); error = generic_netmap_attach(ifp); if (error) return error; *na = NA(ifp); - gna = (struct netmap_generic_adapter*)NA(ifp); - gna->prev = prev_na; /* save old na */ - if (prev_na != NULL) { - ifunit_ref(ifp->if_xname); - // XXX add a refcount ? 
- netmap_adapter_get(prev_na); - } - ND("Created generic NA %p (prev %p)", gna, gna->prev); - return 0; -#else /* !WITH_GENERIC */ - return EOPNOTSUPP; -#endif } /* * MUST BE CALLED UNDER NMG_LOCK() * * Get a refcounted reference to a netmap adapter attached * to the interface specified by nmr. * This is always called in the execution of an ioctl(). * * Return ENXIO if the interface specified by the request does * not exist, ENOTSUP if netmap is not supported by the interface, * EBUSY if the interface is already attached to a bridge, * EINVAL if parameters are invalid, ENOMEM if needed resources * could not be allocated. * If successful, hold a reference to the netmap adapter. * - * No reference is kept on the real interface, which may then - * disappear at any time. + * If the interface specified by nmr is a system one, also keep + * a reference to it and return a valid *ifp. */ int -netmap_get_na(struct nmreq *nmr, struct netmap_adapter **na, int create) +netmap_get_na(struct nmreq *nmr, struct netmap_adapter **na, + struct ifnet **ifp, int create) { - struct ifnet *ifp = NULL; int error = 0; struct netmap_adapter *ret = NULL; *na = NULL; /* default return value */ + *ifp = NULL; NMG_LOCK_ASSERT(); - /* we cascade through all possible types of netmap adapter. + /* We cascade through all possible types of netmap adapter. * All netmap_get_*_na() functions return an error and an na, * with the following combinations: * * error na * 0 NULL type doesn't match * !0 NULL type matches, but na creation/lookup failed * 0 !NULL type matches and na created/found * !0 !NULL impossible */ + /* try to see if this is a ptnetmap port */ + error = netmap_get_pt_host_na(nmr, na, create); + if (error || *na != NULL) + return error; + /* try to see if this is a monitor port */ error = netmap_get_monitor_na(nmr, na, create); if (error || *na != NULL) return error; /* try to see if this is a pipe port */ error = netmap_get_pipe_na(nmr, na, create); if (error || *na != NULL) return error; /* try to see if this is a bridge port */ error = netmap_get_bdg_na(nmr, na, create); if (error) return error; if (*na != NULL) /* valid match in netmap_get_bdg_na() */ goto out; /* * This must be a hardware na, lookup the name in the system. * Note that by hardware we actually mean "it shows up in ifconfig". * This may still be a tap, a veth/epair, or even a * persistent VALE port. */ - ifp = ifunit_ref(nmr->nr_name); - if (ifp == NULL) { + *ifp = ifunit_ref(nmr->nr_name); + if (*ifp == NULL) { return ENXIO; } - error = netmap_get_hw_na(ifp, &ret); + error = netmap_get_hw_na(*ifp, &ret); if (error) goto out; *na = ret; netmap_adapter_get(ret); out: - if (error && ret != NULL) - netmap_adapter_put(ret); + if (error) { + if (ret) + netmap_adapter_put(ret); + if (*ifp) { + if_rele(*ifp); + *ifp = NULL; + } + } - if (ifp) - if_rele(ifp); /* allow live unloading of drivers modules */ - return error; } +/* undo netmap_get_na() */ +void +netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp) +{ + if (ifp) + if_rele(ifp); + if (na) + netmap_adapter_put(na); +} + +#define NM_FAIL_ON(t) do { \ + if (unlikely(t)) { \ + RD(5, "%s: fail '" #t "' " \ + "h %d c %d t %d " \ + "rh %d rc %d rt %d " \ + "hc %d ht %d", \ + kring->name, \ + head, cur, ring->tail, \ + kring->rhead, kring->rcur, kring->rtail, \ + kring->nr_hwcur, kring->nr_hwtail); \ + return kring->nkr_num_slots; \ + } \ +} while (0) + /* * validate parameters on entry for *_txsync() * Returns ring->cur if ok, or something >= kring->nkr_num_slots * in case of error. 
* * rhead, rcur and rtail=hwtail are stored from previous round. * hwcur is the next packet to send to the ring. * * We want * hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail * * hwcur, rhead, rtail and hwtail are reliable */ -static u_int -nm_txsync_prologue(struct netmap_kring *kring) +u_int +nm_txsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring) { -#define NM_ASSERT(t) if (t) { D("fail " #t); goto error; } - struct netmap_ring *ring = kring->ring; u_int head = ring->head; /* read only once */ u_int cur = ring->cur; /* read only once */ u_int n = kring->nkr_num_slots; ND(5, "%s kcur %d ktail %d head %d cur %d tail %d", kring->name, kring->nr_hwcur, kring->nr_hwtail, ring->head, ring->cur, ring->tail); #if 1 /* kernel sanity checks; but we can trust the kring. */ - if (kring->nr_hwcur >= n || kring->rhead >= n || - kring->rtail >= n || kring->nr_hwtail >= n) - goto error; + NM_FAIL_ON(kring->nr_hwcur >= n || kring->rhead >= n || + kring->rtail >= n || kring->nr_hwtail >= n); #endif /* kernel sanity checks */ /* - * user sanity checks. We only use 'cur', - * A, B, ... are possible positions for cur: + * user sanity checks. We only use head, + * A, B, ... are possible positions for head: * - * 0 A cur B tail C n-1 - * 0 D tail E cur F n-1 + * 0 A rhead B rtail C n-1 + * 0 D rtail E rhead F n-1 * * B, F, D are valid. A, C, E are wrong */ if (kring->rtail >= kring->rhead) { /* want rhead <= head <= rtail */ - NM_ASSERT(head < kring->rhead || head > kring->rtail); + NM_FAIL_ON(head < kring->rhead || head > kring->rtail); /* and also head <= cur <= rtail */ - NM_ASSERT(cur < head || cur > kring->rtail); + NM_FAIL_ON(cur < head || cur > kring->rtail); } else { /* here rtail < rhead */ /* we need head outside rtail .. rhead */ - NM_ASSERT(head > kring->rtail && head < kring->rhead); + NM_FAIL_ON(head > kring->rtail && head < kring->rhead); /* two cases now: head <= rtail or head >= rhead */ if (head <= kring->rtail) { /* want head <= cur <= rtail */ - NM_ASSERT(cur < head || cur > kring->rtail); + NM_FAIL_ON(cur < head || cur > kring->rtail); } else { /* head >= rhead */ /* cur must be outside rtail..head */ - NM_ASSERT(cur > kring->rtail && cur < head); + NM_FAIL_ON(cur > kring->rtail && cur < head); } } if (ring->tail != kring->rtail) { RD(5, "tail overwritten was %d need %d", ring->tail, kring->rtail); ring->tail = kring->rtail; } kring->rhead = head; kring->rcur = cur; return head; - -error: - RD(5, "%s kring error: head %d cur %d tail %d rhead %d rcur %d rtail %d hwcur %d hwtail %d", - kring->name, - head, cur, ring->tail, - kring->rhead, kring->rcur, kring->rtail, - kring->nr_hwcur, kring->nr_hwtail); - return n; -#undef NM_ASSERT } /* * validate parameters on entry for *_rxsync() * Returns ring->head if ok, kring->nkr_num_slots on error. * * For a valid configuration, * hwcur <= head <= cur <= tail <= hwtail * * We only consider head and cur. * hwcur and hwtail are reliable. * */ -static u_int -nm_rxsync_prologue(struct netmap_kring *kring) +u_int +nm_rxsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring) { - struct netmap_ring *ring = kring->ring; uint32_t const n = kring->nkr_num_slots; uint32_t head, cur; ND(5,"%s kc %d kt %d h %d c %d t %d", kring->name, kring->nr_hwcur, kring->nr_hwtail, ring->head, ring->cur, ring->tail); /* * Before storing the new values, we should check they do not * move backwards. 
However:
	 * - head is not an issue because the previous value is hwcur;
	 * - cur could in principle go back, however it does not matter
	 *   because we are processing a brand new rxsync()
	 */
	cur = kring->rcur = ring->cur;	/* read only once */
	head = kring->rhead = ring->head;	/* read only once */
#if 1 /* kernel sanity checks */
-	if (kring->nr_hwcur >= n || kring->nr_hwtail >= n)
-		goto error;
+	NM_FAIL_ON(kring->nr_hwcur >= n || kring->nr_hwtail >= n);
#endif /* kernel sanity checks */
	/* user sanity checks */
	if (kring->nr_hwtail >= kring->nr_hwcur) {
		/* want hwcur <= rhead <= hwtail */
-		if (head < kring->nr_hwcur || head > kring->nr_hwtail)
-			goto error;
+		NM_FAIL_ON(head < kring->nr_hwcur || head > kring->nr_hwtail);
		/* and also rhead <= rcur <= hwtail */
-		if (cur < head || cur > kring->nr_hwtail)
-			goto error;
+		NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
	} else {
		/* we need rhead outside hwtail..hwcur */
-		if (head < kring->nr_hwcur && head > kring->nr_hwtail)
-			goto error;
+		NM_FAIL_ON(head < kring->nr_hwcur && head > kring->nr_hwtail);
		/* two cases now: head <= hwtail or head >= hwcur */
		if (head <= kring->nr_hwtail) {
			/* want head <= cur <= hwtail */
-			if (cur < head || cur > kring->nr_hwtail)
-				goto error;
+			NM_FAIL_ON(cur < head || cur > kring->nr_hwtail);
		} else {
			/* cur must be outside hwtail..head */
-			if (cur < head && cur > kring->nr_hwtail)
-				goto error;
+			NM_FAIL_ON(cur < head && cur > kring->nr_hwtail);
		}
	}
	if (ring->tail != kring->rtail) {
		RD(5, "%s tail overwritten was %d need %d", kring->name,
			ring->tail, kring->rtail);
		ring->tail = kring->rtail;
	}
	return head;
-
-error:
-	RD(5, "kring error: hwcur %d rcur %d hwtail %d head %d cur %d tail %d",
-		kring->nr_hwcur,
-		kring->rcur, kring->nr_hwtail,
-		kring->rhead, kring->rcur, ring->tail);
-	return n;
}

/*
 * Error routine called when txsync/rxsync detects an error.
 * Can't do much more than resetting head = cur = hwcur, tail = hwtail
 * Return 1 on reinit.
 *
 * This routine is only called by the upper half of the kernel.
 * It only reads hwcur (which is changed only by the upper half, too)
 * and hwtail (which may be changed by the lower half, but only on
 * a tx ring and only to increase it, so any error will be recovered
 * on the next call). For the above, we don't strictly need to call
 * it under lock.
 */
int
netmap_ring_reinit(struct netmap_kring *kring)
{
	struct netmap_ring *ring = kring->ring;
	u_int i, lim = kring->nkr_num_slots - 1;
	int errors = 0;

	// XXX KASSERT nm_kr_tryget
	RD(10, "called for %s", kring->name);
	// XXX probably wrong to trust userspace
	kring->rhead = ring->head;
	kring->rcur  = ring->cur;
	kring->rtail = ring->tail;

	if (ring->cur > lim)
		errors++;
	if (ring->head > lim)
		errors++;
	if (ring->tail > lim)
		errors++;
	for (i = 0; i <= lim; i++) {
		u_int idx = ring->slot[i].buf_idx;
		u_int len = ring->slot[i].len;
		if (idx < 2 || idx >= kring->na->na_lut.objtotal) {
			RD(5, "bad index at slot %d idx %d len %d ", i, idx, len);
			ring->slot[i].buf_idx = 0;
			ring->slot[i].len = 0;
		} else if (len > NETMAP_BUF_SIZE(kring->na)) {
			ring->slot[i].len = 0;
			RD(5, "bad len at slot %d idx %d len %d", i, idx, len);
		}
	}
	if (errors) {
		RD(10, "total %d errors", errors);
		RD(10, "%s reinit, cur %d -> %d tail %d -> %d",
			kring->name,
			ring->cur, kring->nr_hwcur,
			ring->tail, kring->nr_hwtail);
		ring->head = kring->rhead = kring->nr_hwcur;
		ring->cur  = kring->rcur  = kring->nr_hwcur;
		ring->tail = kring->rtail = kring->nr_hwtail;
	}
	return (errors ?
1 : 0); } /* interpret the ringid and flags fields of an nmreq, by translating them * into a pair of intervals of ring indices: * * [priv->np_txqfirst, priv->np_txqlast) and * [priv->np_rxqfirst, priv->np_rxqlast) * */ int netmap_interp_ringid(struct netmap_priv_d *priv, uint16_t ringid, uint32_t flags) { struct netmap_adapter *na = priv->np_na; u_int j, i = ringid & NETMAP_RING_MASK; u_int reg = flags & NR_REG_MASK; + int excluded_direction[] = { NR_TX_RINGS_ONLY, NR_RX_RINGS_ONLY }; enum txrx t; if (reg == NR_REG_DEFAULT) { /* convert from old ringid to flags */ if (ringid & NETMAP_SW_RING) { reg = NR_REG_SW; } else if (ringid & NETMAP_HW_RING) { reg = NR_REG_ONE_NIC; } else { reg = NR_REG_ALL_NIC; } D("deprecated API, old ringid 0x%x -> ringid %x reg %d", ringid, i, reg); } - switch (reg) { - case NR_REG_ALL_NIC: - case NR_REG_PIPE_MASTER: - case NR_REG_PIPE_SLAVE: - for_rx_tx(t) { + + if ((flags & NR_PTNETMAP_HOST) && (reg != NR_REG_ALL_NIC || + flags & (NR_RX_RINGS_ONLY|NR_TX_RINGS_ONLY))) { + D("Error: only NR_REG_ALL_NIC supported with netmap passthrough"); + return EINVAL; + } + + for_rx_tx(t) { + if (flags & excluded_direction[t]) { + priv->np_qfirst[t] = priv->np_qlast[t] = 0; + continue; + } + switch (reg) { + case NR_REG_ALL_NIC: + case NR_REG_PIPE_MASTER: + case NR_REG_PIPE_SLAVE: priv->np_qfirst[t] = 0; priv->np_qlast[t] = nma_get_nrings(na, t); - } - ND("%s %d %d", "ALL/PIPE", - priv->np_qfirst[NR_RX], priv->np_qlast[NR_RX]); - break; - case NR_REG_SW: - case NR_REG_NIC_SW: - if (!(na->na_flags & NAF_HOST_RINGS)) { - D("host rings not supported"); - return EINVAL; - } - for_rx_tx(t) { + ND("ALL/PIPE: %s %d %d", nm_txrx2str(t), + priv->np_qfirst[t], priv->np_qlast[t]); + break; + case NR_REG_SW: + case NR_REG_NIC_SW: + if (!(na->na_flags & NAF_HOST_RINGS)) { + D("host rings not supported"); + return EINVAL; + } priv->np_qfirst[t] = (reg == NR_REG_SW ? nma_get_nrings(na, t) : 0); priv->np_qlast[t] = nma_get_nrings(na, t) + 1; - } - ND("%s %d %d", reg == NR_REG_SW ? "SW" : "NIC+SW", - priv->np_qfirst[NR_RX], priv->np_qlast[NR_RX]); - break; - case NR_REG_ONE_NIC: - if (i >= na->num_tx_rings && i >= na->num_rx_rings) { - D("invalid ring id %d", i); - return EINVAL; - } - for_rx_tx(t) { + ND("%s: %s %d %d", reg == NR_REG_SW ? "SW" : "NIC+SW", + nm_txrx2str(t), + priv->np_qfirst[t], priv->np_qlast[t]); + break; + case NR_REG_ONE_NIC: + if (i >= na->num_tx_rings && i >= na->num_rx_rings) { + D("invalid ring id %d", i); + return EINVAL; + } /* if not enough rings, use the first one */ j = i; if (j >= nma_get_nrings(na, t)) j = 0; priv->np_qfirst[t] = j; priv->np_qlast[t] = j + 1; + ND("ONE_NIC: %s %d %d", nm_txrx2str(t), + priv->np_qfirst[t], priv->np_qlast[t]); + break; + default: + D("invalid regif type %d", reg); + return EINVAL; } - break; - default: - D("invalid regif type %d", reg); - return EINVAL; } priv->np_flags = (flags & ~NR_REG_MASK) | reg; if (netmap_verbose) { D("%s: tx [%d,%d) rx [%d,%d) id %d", na->name, priv->np_qfirst[NR_TX], priv->np_qlast[NR_TX], priv->np_qfirst[NR_RX], priv->np_qlast[NR_RX], i); } return 0; } /* * Set the ring ID. For devices with a single queue, a request * for all rings is the same as a single ring. */ static int netmap_set_ringid(struct netmap_priv_d *priv, uint16_t ringid, uint32_t flags) { struct netmap_adapter *na = priv->np_na; int error; enum txrx t; error = netmap_interp_ringid(priv, ringid, flags); if (error) { return error; } priv->np_txpoll = (ringid & NETMAP_NO_TX_POLL) ? 
0 : 1;

	/* optimization: count the users registered for more than
	 * one ring, which are the ones sleeping on the global queue.
	 * The default netmap_notify() callback will then
	 * avoid signaling the global queue if nobody is using it
	 */
	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]++;
	}
	return 0;
}

static void
netmap_unset_ringid(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	enum txrx t;

	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]--;
		priv->np_qfirst[t] = priv->np_qlast[t] = 0;
	}
	priv->np_flags = 0;
	priv->np_txpoll = 0;
}

-/* check that the rings we want to bind are not exclusively owned by a previous
- * bind.  If exclusive ownership has been requested, we also mark the rings.
+/* Set the nr_pending_mode for the requested rings.
+ * If requested, also try to get exclusive access to the rings, provided
+ * the rings we want to bind are not exclusively owned by a previous bind.
 */
static int
-netmap_get_exclusive(struct netmap_priv_d *priv)
+netmap_krings_get(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	u_int i;
	struct netmap_kring *kring;
	int excl = (priv->np_flags & NR_EXCLUSIVE);
	enum txrx t;

	ND("%s: grabbing tx [%d, %d) rx [%d, %d)",
			na->name,
			priv->np_qfirst[NR_TX],
			priv->np_qlast[NR_TX],
			priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX]);

	/* first round: check that all the requested rings
	 * are neither already exclusively owned, nor we
	 * want exclusive ownership when they are already in use
	 */
	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = &NMR(na, t)[i];
			if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
			    (kring->users && excl))
			{
				ND("ring %s busy", kring->name);
				return EBUSY;
			}
		}
	}

-	/* second round: increment usage cound and possibly
-	 * mark as exclusive
+	/* second round: increment usage count (possibly marking them
+	 * as exclusive) and set the nr_pending_mode
	 */
-
	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = &NMR(na, t)[i];
			kring->users++;
			if (excl)
				kring->nr_kflags |= NKR_EXCLUSIVE;
+			kring->nr_pending_mode = NKR_NETMAP_ON;
		}
	}
	return 0;
}

-/* undo netmap_get_ownership() */
+/* Undo netmap_krings_get(). This is done by clearing the exclusive mode
+ * if it was asked on regif, and unsetting the nr_pending_mode if we are
+ * the last users of the involved rings. */
static void
-netmap_rel_exclusive(struct netmap_priv_d *priv)
+netmap_krings_put(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	u_int i;
	struct netmap_kring *kring;
	int excl = (priv->np_flags & NR_EXCLUSIVE);
	enum txrx t;

	ND("%s: releasing tx [%d, %d) rx [%d, %d)",
			na->name,
			priv->np_qfirst[NR_TX],
			priv->np_qlast[NR_TX],
			priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX]);

	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = &NMR(na, t)[i];
			if (excl)
				kring->nr_kflags &= ~NKR_EXCLUSIVE;
			kring->users--;
+			if (kring->users == 0)
+				kring->nr_pending_mode = NKR_NETMAP_OFF;
		}
	}
}
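/*
 * Editor's note: an illustrative userspace sketch of how exclusive ring
 * ownership is requested, which netmap_krings_get() above then enforces
 * (a second binder gets EBUSY). "em0" is just an example interface;
 * error handling is omitted.
 */
#if 0
	struct nmreq req;

	bzero(&req, sizeof(req));
	strlcpy(req.nr_name, "em0", sizeof(req.nr_name));
	req.nr_version = NETMAP_API;
	req.nr_flags = NR_REG_ONE_NIC | NR_EXCLUSIVE;
	req.nr_ringid = 0;		/* bind hw ring pair 0 only */
	ioctl(fd, NIOCREGIF, &req);
#endif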
/*
 * possibly move the interface to netmap-mode.
 * On success it returns a pointer to netmap_if, otherwise NULL.
 * This must be called with NMG_LOCK held.
 *
 * The following na callbacks are called in the process:
 *
 * na->nm_config()			[by netmap_update_config]
 * 	(get current number and size of rings)
 *
 *  	We have a generic one for linux (netmap_linux_config).
 *  	The bwrap has to override this, since it has to forward
 *  	the request to the wrapped adapter (netmap_bwrap_config).
 *
 *
 * na->nm_krings_create()
 * 	(create and init the krings array)
 *
 * 	One of the following:
 *
 *	* netmap_hw_krings_create			(hw ports)
 *		creates the standard layout for the krings
 * 		and adds the mbq (used for the host rings).
 *
 * 	* netmap_vp_krings_create			(VALE ports)
 * 		add leases and scratchpads
 *
 * 	* netmap_pipe_krings_create			(pipes)
 * 		create the krings and rings of both ends and
 * 		cross-link them
 *
 * 	* netmap_monitor_krings_create			(monitors)
 * 		avoid allocating the mbq
 *
 * 	* netmap_bwrap_krings_create			(bwraps)
 * 		create both the bwrap krings array,
 * 		the krings array of the wrapped adapter, and
 * 		(if needed) the fake array for the host adapter
 *
 * na->nm_register(, 1)
 * 	(put the adapter in netmap mode)
 *
 * 	This may be one of the following:
- * 	(XXX these should be either all *_register or all *_reg 2014-03-15)
 *
- * 	* netmap_hw_register				(hw ports)
+ * 	* netmap_hw_reg					(hw ports)
 * 		checks that the ifp is still there, then calls
 * 		the hardware specific callback;
 *
 * 	* netmap_vp_reg					(VALE ports)
 *		If the port is connected to a bridge,
 *		set the NAF_NETMAP_ON flag under the
 *		bridge write lock.
 *
 *	* netmap_pipe_reg				(pipes)
 *		inform the other pipe end that it is no
 *		longer responsible for the lifetime of this
 *		pipe end
 *
 *	* netmap_monitor_reg				(monitors)
 *		intercept the sync callbacks of the monitored
 *		rings
 *
- *	* netmap_bwrap_register				(bwraps)
+ *	* netmap_bwrap_reg				(bwraps)
 *		cross-link the bwrap and hwna rings,
 *		forward the request to the hwna, override
 *		the hwna notify callback (to get the frames
 *		coming from outside go through the bridge).
 *
 *
 */
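/*
 * Editor's note: a hedged sketch of the attach-time setup that gives
 * netmap the nm_* callbacks listed above. All DRIVER_* names are
 * placeholders for a real native driver, not functions in this file.
 */
#if 0
static int
DRIVER_netmap_attach_example(struct ifnet *ifp)
{
	struct netmap_adapter na;

	bzero(&na, sizeof(na));
	na.ifp = ifp;
	na.num_tx_desc = 1024;			/* ring sizes (example) */
	na.num_rx_desc = 1024;
	na.num_tx_rings = na.num_rx_rings = 4;	/* hw queue pairs */
	na.nm_txsync = DRIVER_netmap_txsync;	/* kring->nm_sync (tx) */
	na.nm_rxsync = DRIVER_netmap_rxsync;	/* kring->nm_sync (rx) */
	na.nm_register = DRIVER_netmap_reg;	/* nm_register callback */
	return netmap_attach(&na);
}
#endif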
int
netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na,
	uint16_t ringid, uint32_t flags)
{
	struct netmap_if *nifp = NULL;
	int error;

	NMG_LOCK_ASSERT();
	/* ring configuration may have changed, fetch from the card */
	netmap_update_config(na);
	priv->np_na = na;     /* store the reference */
	error = netmap_set_ringid(priv, ringid, flags);
	if (error)
		goto err;
	error = netmap_mem_finalize(na->nm_mem, na);
	if (error)
		goto err;

	if (na->active_fds == 0) {
		/*
		 * If this is the first registration of the adapter,
-		 * also create the netmap rings and their in-kernel view,
+		 * create the in-kernel view of the netmap rings,
		 * the netmap krings.
		 */

		/*
		 * Depending on the adapter, this may also create
		 * the netmap rings themselves
		 */
		error = na->nm_krings_create(na);
		if (error)
			goto err_drop_mem;

-		/* create all missing netmap rings */
-		error = netmap_mem_rings_create(na);
-		if (error)
-			goto err_del_krings;
	}

-	/* now the kring must exist and we can check whether some
-	 * previous bind has exclusive ownership on them
+	/* now the krings must exist and we can check whether some
+	 * previous bind has exclusive ownership on them, and set
+	 * nr_pending_mode
	 */
-	error = netmap_get_exclusive(priv);
+	error = netmap_krings_get(priv);
	if (error)
-		goto err_del_rings;
+		goto err_del_krings;

+	/* create all needed missing netmap rings */
+	error = netmap_mem_rings_create(na);
+	if (error)
+		goto err_rel_excl;
+
	/* in all cases, create a new netmap if */
	nifp = netmap_mem_if_new(na);
	if (nifp == NULL) {
		error = ENOMEM;
-		goto err_rel_excl;
+		goto err_del_rings;
	}

-	na->active_fds++;
-	if (!nm_netmap_on(na)) {
-		/* Netmap not active, set the card in netmap mode
-		 * and make it use the shared buffers.
-		 */
+	if (na->active_fds == 0) {
		/* cache the allocator info in the na */
-		netmap_mem_get_lut(na->nm_mem, &na->na_lut);
-		ND("%p->na_lut == %p", na, na->na_lut.lut);
-		error = na->nm_register(na, 1); /* mode on */
-		if (error)
+		error = netmap_mem_get_lut(na->nm_mem, &na->na_lut);
+		if (error)
			goto err_del_if;
+		ND("lut %p bufs %u size %u", na->na_lut.lut, na->na_lut.objtotal,
+				na->na_lut.objsize);
	}

+	if (nm_kring_pending(priv)) {
+		/* Some kring is switching mode, tell the adapter to
+		 * react on this. */
+		error = na->nm_register(na, 1);
+		if (error)
+			goto err_put_lut;
+	}
+
+	/* Commit the reference. */
+	na->active_fds++;
+
	/*
	 * advertise that the interface is ready by setting np_nifp.
	 * The barrier is needed because readers (poll, *SYNC and mmap)
	 * check for priv->np_nifp != NULL without locking
	 */
	mb(); /* make sure previous writes are visible to all CPUs */
	priv->np_nifp = nifp;

	return 0;

+err_put_lut:
+	if (na->active_fds == 0)
+		memset(&na->na_lut, 0, sizeof(na->na_lut));
err_del_if:
-	memset(&na->na_lut, 0, sizeof(na->na_lut));
-	na->active_fds--;
	netmap_mem_if_delete(na, nifp);
err_rel_excl:
-	netmap_rel_exclusive(priv);
+	netmap_krings_put(priv);
err_del_rings:
-	if (na->active_fds == 0)
-		netmap_mem_rings_delete(na);
+	netmap_mem_rings_delete(na);
err_del_krings:
	if (na->active_fds == 0)
		na->nm_krings_delete(na);
err_drop_mem:
	netmap_mem_deref(na->nm_mem, na);
err:
	priv->np_na = NULL;
	return error;
}


/*
- * update kring and ring at the end of txsync.
+ * update kring and ring at the end of rxsync/txsync.
 */
static inline void
-nm_txsync_finalize(struct netmap_kring *kring)
+nm_sync_finalize(struct netmap_kring *kring)
{
-	/* update ring tail to what the kernel knows */
+	/*
+	 * Update ring tail to what the kernel knows
+	 * After txsync: head/rhead/hwcur might be behind cur/rcur
+	 * if no carrier.
+	 */
	kring->ring->tail = kring->rtail = kring->nr_hwtail;
-	/* note, head/rhead/hwcur might be behind cur/rcur
-	 * if no carrier
-	 */

	ND(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d",
		kring->name, kring->nr_hwcur, kring->nr_hwtail,
		kring->rhead, kring->rcur, kring->rtail);
}

-
-/*
- * update kring and ring at the end of rxsync
- */
-static inline void
-nm_rxsync_finalize(struct netmap_kring *kring)
-{
-	/* tell userspace that there might be new packets */
-	//struct netmap_ring *ring = kring->ring;
-	ND("head %d cur %d tail %d -> %d", ring->head, ring->cur, ring->tail,
-		kring->nr_hwtail);
-	kring->ring->tail = kring->rtail = kring->nr_hwtail;
-	/* make a copy of the state for next round */
-	kring->rhead = kring->ring->head;
-	kring->rcur = kring->ring->cur;
-}
-
-
-
/*
 * ioctl(2) support for the "netmap" device.
 *
 * Following is a list of accepted commands:
 * - NIOCGINFO
 * - SIOCGIFADDR	just for convenience
 * - NIOCREGIF
 * - NIOCTXSYNC
 * - NIOCRXSYNC
 *
 * Return 0 on success, errno otherwise.
 */
*/ int -netmap_ioctl(struct cdev *dev, u_long cmd, caddr_t data, - int fflag, struct thread *td) +netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data, struct thread *td) { - struct netmap_priv_d *priv = NULL; struct nmreq *nmr = (struct nmreq *) data; struct netmap_adapter *na = NULL; - int error; + struct ifnet *ifp = NULL; + int error = 0; u_int i, qfirst, qlast; struct netmap_if *nifp; struct netmap_kring *krings; enum txrx t; - (void)dev; /* UNUSED */ - (void)fflag; /* UNUSED */ - if (cmd == NIOCGINFO || cmd == NIOCREGIF) { /* truncate name */ nmr->nr_name[sizeof(nmr->nr_name) - 1] = '\0'; if (nmr->nr_version != NETMAP_API) { D("API mismatch for %s got %d need %d", nmr->nr_name, nmr->nr_version, NETMAP_API); nmr->nr_version = NETMAP_API; } if (nmr->nr_version < NETMAP_MIN_API || nmr->nr_version > NETMAP_MAX_API) { return EINVAL; } } - CURVNET_SET(TD_TO_VNET(td)); - error = devfs_get_cdevpriv((void **)&priv); - if (error) { - CURVNET_RESTORE(); - /* XXX ENOENT should be impossible, since the priv - * is now created in the open */ - return (error == ENOENT ? ENXIO : error); - } - switch (cmd) { case NIOCGINFO: /* return capabilities etc */ if (nmr->nr_cmd == NETMAP_BDG_LIST) { error = netmap_bdg_ctl(nmr, NULL); break; } NMG_LOCK(); do { /* memsize is always valid */ struct netmap_mem_d *nmd = &nm_mem; u_int memflags; if (nmr->nr_name[0] != '\0') { + /* get a refcount */ - error = netmap_get_na(nmr, &na, 1 /* create */); - if (error) + error = netmap_get_na(nmr, &na, &ifp, 1 /* create */); + if (error) { + na = NULL; + ifp = NULL; break; + } nmd = na->nm_mem; /* get memory allocator */ } error = netmap_mem_get_info(nmd, &nmr->nr_memsize, &memflags, &nmr->nr_arg2); if (error) break; if (na == NULL) /* only memory info */ break; nmr->nr_offset = 0; nmr->nr_rx_slots = nmr->nr_tx_slots = 0; netmap_update_config(na); nmr->nr_rx_rings = na->num_rx_rings; nmr->nr_tx_rings = na->num_tx_rings; nmr->nr_rx_slots = na->num_rx_desc; nmr->nr_tx_slots = na->num_tx_desc; - netmap_adapter_put(na); } while (0); + netmap_unget_na(na, ifp); NMG_UNLOCK(); break; case NIOCREGIF: /* possibly attach/detach NIC and VALE switch */ i = nmr->nr_cmd; if (i == NETMAP_BDG_ATTACH || i == NETMAP_BDG_DETACH || i == NETMAP_BDG_VNET_HDR || i == NETMAP_BDG_NEWIF - || i == NETMAP_BDG_DELIF) { + || i == NETMAP_BDG_DELIF + || i == NETMAP_BDG_POLLING_ON + || i == NETMAP_BDG_POLLING_OFF) { error = netmap_bdg_ctl(nmr, NULL); break; + } else if (i == NETMAP_PT_HOST_CREATE || i == NETMAP_PT_HOST_DELETE) { + error = ptnetmap_ctl(nmr, priv->np_na); + break; + } else if (i == NETMAP_VNET_HDR_GET) { + struct ifnet *ifp; + + NMG_LOCK(); + error = netmap_get_na(nmr, &na, &ifp, 0); + if (na && !error) { + nmr->nr_arg1 = na->virt_hdr_len; + } + netmap_unget_na(na, ifp); + NMG_UNLOCK(); + break; } else if (i != 0) { D("nr_cmd must be 0 not %d", i); error = EINVAL; break; } /* protect access to priv from concurrent NIOCREGIF */ NMG_LOCK(); do { u_int memflags; + struct ifnet *ifp; if (priv->np_nifp != NULL) { /* thread already registered */ error = EBUSY; break; } /* find the interface and a reference */ - error = netmap_get_na(nmr, &na, 1 /* create */); /* keep reference */ + error = netmap_get_na(nmr, &na, &ifp, + 1 /* create */); /* keep reference */ if (error) break; if (NETMAP_OWNED_BY_KERN(na)) { - netmap_adapter_put(na); + netmap_unget_na(na, ifp); error = EBUSY; break; } + + if (na->virt_hdr_len && !(nmr->nr_flags & NR_ACCEPT_VNET_HDR)) { + netmap_unget_na(na, ifp); + error = EIO; + break; + } + error = 
netmap_do_regif(priv, na, nmr->nr_ringid, nmr->nr_flags); if (error) { /* reg. failed, release priv and ref */ - netmap_adapter_put(na); + netmap_unget_na(na, ifp); break; } nifp = priv->np_nifp; priv->np_td = td; // XXX kqueue, debugging only /* return the offset of the netmap_if object */ nmr->nr_rx_rings = na->num_rx_rings; nmr->nr_tx_rings = na->num_tx_rings; nmr->nr_rx_slots = na->num_rx_desc; nmr->nr_tx_slots = na->num_tx_desc; error = netmap_mem_get_info(na->nm_mem, &nmr->nr_memsize, &memflags, &nmr->nr_arg2); if (error) { netmap_do_unregif(priv); - netmap_adapter_put(na); + netmap_unget_na(na, ifp); break; } if (memflags & NETMAP_MEM_PRIVATE) { *(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM; } for_rx_tx(t) { priv->np_si[t] = nm_si_user(priv, t) ? &na->si[t] : &NMR(na, t)[priv->np_qfirst[t]].si; } if (nmr->nr_arg3) { - D("requested %d extra buffers", nmr->nr_arg3); + if (netmap_verbose) + D("requested %d extra buffers", nmr->nr_arg3); nmr->nr_arg3 = netmap_extra_alloc(na, &nifp->ni_bufs_head, nmr->nr_arg3); - D("got %d extra buffers", nmr->nr_arg3); + if (netmap_verbose) + D("got %d extra buffers", nmr->nr_arg3); } nmr->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp); + + /* store ifp reference so that priv destructor may release it */ + priv->np_ifp = ifp; } while (0); NMG_UNLOCK(); break; case NIOCTXSYNC: case NIOCRXSYNC: nifp = priv->np_nifp; if (nifp == NULL) { error = ENXIO; break; } mb(); /* make sure following reads are not from cache */ na = priv->np_na; /* we have a reference */ if (na == NULL) { D("Internal error: nifp != NULL && na == NULL"); error = ENXIO; break; } - if (!nm_netmap_on(na)) { - error = ENXIO; - break; - } - t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX); krings = NMR(na, t); qfirst = priv->np_qfirst[t]; qlast = priv->np_qlast[t]; for (i = qfirst; i < qlast; i++) { struct netmap_kring *kring = krings + i; - if (nm_kr_tryget(kring)) { - error = EBUSY; - goto out; + struct netmap_ring *ring = kring->ring; + + if (unlikely(nm_kr_tryget(kring, 1, &error))) { + error = (error ? 
EIO : 0); + continue; } + if (cmd == NIOCTXSYNC) { if (netmap_verbose & NM_VERB_TXSYNC) D("pre txsync ring %d cur %d hwcur %d", - i, kring->ring->cur, + i, ring->cur, kring->nr_hwcur); - if (nm_txsync_prologue(kring) >= kring->nkr_num_slots) { + if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); } else if (kring->nm_sync(kring, NAF_FORCE_RECLAIM) == 0) { - nm_txsync_finalize(kring); + nm_sync_finalize(kring); } if (netmap_verbose & NM_VERB_TXSYNC) D("post txsync ring %d cur %d hwcur %d", - i, kring->ring->cur, + i, ring->cur, kring->nr_hwcur); } else { - if (nm_rxsync_prologue(kring) >= kring->nkr_num_slots) { + if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); } else if (kring->nm_sync(kring, NAF_FORCE_READ) == 0) { - nm_rxsync_finalize(kring); + nm_sync_finalize(kring); } - microtime(&na->rx_rings[i].ring->ts); + microtime(&ring->ts); } nm_kr_put(kring); } break; #ifdef WITH_VALE case NIOCCONFIG: error = netmap_bdg_config(nmr); break; #endif #ifdef __FreeBSD__ case FIONBIO: case FIOASYNC: ND("FIONBIO/FIOASYNC are no-ops"); break; case BIOCIMMEDIATE: case BIOCGHDRCMPLT: case BIOCSHDRCMPLT: case BIOCSSEESENT: D("ignore BIOCIMMEDIATE/BIOCGHDRCMPLT/BIOCSHDRCMPLT/BIOCSSEESENT"); break; default: /* allow device-specific ioctls */ { struct ifnet *ifp = ifunit_ref(nmr->nr_name); if (ifp == NULL) { error = ENXIO; } else { struct socket so; bzero(&so, sizeof(so)); so.so_vnet = ifp->if_vnet; // so->so_proto not null. error = ifioctl(&so, cmd, data, td); if_rele(ifp); } break; } #else /* linux */ default: error = EOPNOTSUPP; #endif /* linux */ } -out: - CURVNET_RESTORE(); return (error); } /* * select(2) and poll(2) handlers for the "netmap" device. * * Can be called for one or more queues. * Return the event mask corresponding to ready events. * If there are no ready events, do a selrecord on either individual * selinfo or on the global one. * Device-dependent parts (locking and sync of tx/rx rings) * are done through callbacks. * * On linux, arguments are really pwait, the poll table, and 'td' is struct file * * The first one is remapped to pwait as selrecord() uses the name as a * hidden argument. */ int -netmap_poll(struct cdev *dev, int events, struct thread *td) +netmap_poll(struct netmap_priv_d *priv, int events, NM_SELRECORD_T *sr) { - struct netmap_priv_d *priv = NULL; struct netmap_adapter *na; struct netmap_kring *kring; + struct netmap_ring *ring; u_int i, check_all_tx, check_all_rx, want[NR_TXRX], revents = 0; #define want_tx want[NR_TX] #define want_rx want[NR_RX] struct mbq q; /* packets from hw queues to host stack */ - void *pwait = dev; /* linux compatibility */ - int is_kevent = 0; enum txrx t; /* * In order to avoid nested locks, we need to "double check" * txsync and rxsync if we decide to do a selrecord(). * retry_tx (and retry_rx, later) prevent looping forever. */ int retry_tx = 1, retry_rx = 1; - (void)pwait; - mbq_init(&q); - - /* - * XXX kevent has curthread->tp_fop == NULL, - * so devfs_get_cdevpriv() fails. We circumvent this by passing - * priv as the first argument, which is also useful to avoid - * the selrecord() which are not necessary in that case.
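 *
 * [Editorial illustration] Seen from userspace, the NIOCTXSYNC branch
 * above is usually driven by a loop of this shape (a sketch only;
 * fill_frame() is a hypothetical helper, while nm_ring_space(),
 * nm_ring_next(), NETMAP_TXRING and NETMAP_BUF are the usual
 * net/netmap_user.h helpers):
 *
 *	struct netmap_ring *txring = NETMAP_TXRING(nifp, 0);
 *	while (nm_ring_space(txring)) {
 *		struct netmap_slot *slot = &txring->slot[txring->cur];
 *		slot->len = fill_frame(NETMAP_BUF(txring, slot->buf_idx));
 *		txring->head = txring->cur = nm_ring_next(txring, txring->cur);
 *	}
 *	ioctl(fd, NIOCTXSYNC, NULL);	// runs prologue/nm_sync/finalize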
+ /* transparent mode: send_down is 1 if we have found some + * packets to forward during the rx scan and we have not + * sent them down to the nic yet */ - if (devfs_get_cdevpriv((void **)&priv) != 0) { - is_kevent = 1; - if (netmap_verbose) - D("called from kevent"); - priv = (struct netmap_priv_d *)dev; - } - if (priv == NULL) - return POLLERR; + int send_down = 0; + mbq_init(&q); + if (priv->np_nifp == NULL) { D("No if registered"); return POLLERR; } mb(); /* make sure following reads are not from cache */ na = priv->np_na; if (!nm_netmap_on(na)) return POLLERR; if (netmap_verbose & 0x8000) D("device %s events 0x%x", na->name, events); want_tx = events & (POLLOUT | POLLWRNORM); want_rx = events & (POLLIN | POLLRDNORM); - /* * check_all_{tx|rx} are set if the card has more than one queue AND * the file descriptor is bound to all of them. If so, we sleep on * the "global" selinfo, otherwise we sleep on individual selinfo * (FreeBSD only allows two selinfo's per file descriptor). * The interrupt routine in the driver wakes one or the other * (or both) depending on which clients are active. * * rxsync() is only called if we run out of buffers on a POLLIN. * txsync() is called if we run out of buffers on POLLOUT, or * there are pending packets to send. The latter can be disabled * passing NETMAP_NO_TX_POLL in the NIOCREG call. */ check_all_tx = nm_si_user(priv, NR_TX); check_all_rx = nm_si_user(priv, NR_RX); /* * We start with a lock free round which is cheap if we have * slots available. If this fails, then lock and call the sync * routines. */ +#if 1 /* new code - call rx if any of the rings needs to release or read buffers */ + if (want_tx) { + t = NR_TX; + for (i = priv->np_qfirst[t]; want[t] && i < priv->np_qlast[t]; i++) { + kring = &NMR(na, t)[i]; + /* XXX compare ring->cur and kring->tail */ + if (!nm_ring_empty(kring->ring)) { + revents |= want[t]; + want[t] = 0; /* also breaks the loop */ + } + } + } + if (want_rx) { + want_rx = 0; /* look for a reason to run the handlers */ + t = NR_RX; + for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) { + kring = &NMR(na, t)[i]; + if (kring->ring->cur == kring->ring->tail /* try to fetch new buffers */ + || kring->rhead != kring->ring->head /* release buffers */) { + want_rx = 1; + } + } + if (!want_rx) + revents |= events & (POLLIN | POLLRDNORM); /* we have data */ + } +#else /* old code */ for_rx_tx(t) { for (i = priv->np_qfirst[t]; want[t] && i < priv->np_qlast[t]; i++) { kring = &NMR(na, t)[i]; /* XXX compare ring->cur and kring->tail */ if (!nm_ring_empty(kring->ring)) { revents |= want[t]; want[t] = 0; /* also breaks the loop */ } } } +#endif /* old code */ /* * If we want to push packets out (priv->np_txpoll) or * want_tx is still set, we must issue txsync calls * (on all rings, to avoid stalling the tx rings). * XXX should also check cur != hwcur on the tx rings. * Fortunately, normal tx mode has np_txpoll set. */ if (priv->np_txpoll || want_tx) { /* * The first round checks if anyone is ready, if not * do a selrecord and another round to handle races. * want_tx goes to 0 if any space is found, and is * used to skip rings with no pending transmissions.
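 *
 * [Editorial illustration] The poll()-based receive loop seen from
 * userspace, again only a sketch (consume() is a hypothetical
 * callback):
 *
 *	struct pollfd pfd = { .fd = fd, .events = POLLIN };
 *	for (;;) {
 *		poll(&pfd, 1, -1);
 *		for (i = 0; i < nmr.nr_rx_rings; i++) {
 *			struct netmap_ring *rxr = NETMAP_RXRING(nifp, i);
 *			while (!nm_ring_empty(rxr)) {
 *				struct netmap_slot *slot = &rxr->slot[rxr->cur];
 *				consume(NETMAP_BUF(rxr, slot->buf_idx), slot->len);
 *				rxr->head = rxr->cur = nm_ring_next(rxr, rxr->cur);
 *			}
 *		}
 *	}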
*/ flush_tx: - for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_RX]; i++) { + for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) { int found = 0; kring = &na->tx_rings[i]; - if (!want_tx && kring->ring->cur == kring->nr_hwcur) + ring = kring->ring; + + if (!send_down && !want_tx && ring->cur == kring->nr_hwcur) continue; - /* only one thread does txsync */ - if (nm_kr_tryget(kring)) { - /* either busy or stopped - * XXX if the ring is stopped, sleeping would - * be better. In current code, however, we only - * stop the rings for brief intervals (2014-03-14) - */ - if (netmap_verbose) - RD(2, "%p lost race on txring %d, ok", - priv, i); + + if (nm_kr_tryget(kring, 1, &revents)) continue; - } - if (nm_txsync_prologue(kring) >= kring->nkr_num_slots) { + + if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); revents |= POLLERR; } else { if (kring->nm_sync(kring, 0)) revents |= POLLERR; else - nm_txsync_finalize(kring); + nm_sync_finalize(kring); } /* * If we found new slots, notify potential * listeners on the same ring. * Since we just did a txsync, look at the copies * of cur,tail in the kring. */ found = kring->rcur != kring->rtail; nm_kr_put(kring); if (found) { /* notify other listeners */ revents |= want_tx; want_tx = 0; kring->nm_notify(kring, 0); } } - if (want_tx && retry_tx && !is_kevent) { - OS_selrecord(td, check_all_tx ? + /* if there were any packets to forward, we must have handled them by now */ + send_down = 0; + if (want_tx && retry_tx && sr) { + nm_os_selrecord(sr, check_all_tx ? &na->si[NR_TX] : &na->tx_rings[priv->np_qfirst[NR_TX]].si); retry_tx = 0; goto flush_tx; } } /* * If want_rx is still set, scan the receive rings. * Do it on all rings because otherwise we starve. */ if (want_rx) { - int send_down = 0; /* transparent mode */ /* two rounds here for race avoidance */ do_retry_rx: for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) { int found = 0; kring = &na->rx_rings[i]; + ring = kring->ring; - if (nm_kr_tryget(kring)) { - if (netmap_verbose) - RD(2, "%p lost race on rxring %d, ok", - priv, i); + if (unlikely(nm_kr_tryget(kring, 1, &revents))) continue; - } - if (nm_rxsync_prologue(kring) >= kring->nkr_num_slots) { + if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); revents |= POLLERR; } /* now we can use kring->rcur, rtail */ /* * transparent mode support: collect packets * from the rxring(s). - * XXX NR_FORWARD should only be read on - * physical or NIC ports */ - if (netmap_fwd ||kring->ring->flags & NR_FORWARD) { + if (nm_may_forward_up(kring)) { ND(10, "forwarding some buffers up %d to %d", - kring->nr_hwcur, kring->ring->cur); + kring->nr_hwcur, ring->cur); netmap_grab_packets(kring, &q, netmap_fwd); } + kring->nr_kflags &= ~NR_FORWARD; if (kring->nm_sync(kring, 0)) revents |= POLLERR; else - nm_rxsync_finalize(kring); + nm_sync_finalize(kring); + send_down |= (kring->nr_kflags & NR_FORWARD); /* host ring only */ if (netmap_no_timestamp == 0 || - kring->ring->flags & NR_TIMESTAMP) { - microtime(&kring->ring->ts); + ring->flags & NR_TIMESTAMP) { + microtime(&ring->ts); } found = kring->rcur != kring->rtail; nm_kr_put(kring); if (found) { revents |= want_rx; retry_rx = 0; kring->nm_notify(kring, 0); } } - /* transparent mode XXX only during first pass ?
*/ - if (na->na_flags & NAF_HOST_RINGS) { - kring = &na->rx_rings[na->num_rx_rings]; - if (check_all_rx - && (netmap_fwd || kring->ring->flags & NR_FORWARD)) { - /* XXX fix to use kring fields */ - if (nm_ring_empty(kring->ring)) - send_down = netmap_rxsync_from_host(na, td, dev); - if (!nm_ring_empty(kring->ring)) - revents |= want_rx; - } - } - - if (retry_rx && !is_kevent) - OS_selrecord(td, check_all_rx ? + if (retry_rx && sr) { + nm_os_selrecord(sr, check_all_rx ? &na->si[NR_RX] : &na->rx_rings[priv->np_qfirst[NR_RX]].si); + } if (send_down > 0 || retry_rx) { retry_rx = 0; if (send_down) goto flush_tx; /* and retry_rx */ else goto do_retry_rx; } } /* * Transparent mode: marked bufs on rx rings between * kring->nr_hwcur and ring->head * are passed to the other endpoint. * - * In this mode we also scan the sw rxring, which in - * turn passes packets up. - * - * XXX Transparent mode at the moment requires to bind all + * Transparent mode requires binding all * rings to a single file descriptor. */ - if (q.head && na->ifp != NULL) + if (q.head && !nm_kr_tryget(&na->tx_rings[na->num_tx_rings], 1, &revents)) { netmap_send_up(na->ifp, &q); + nm_kr_put(&na->tx_rings[na->num_tx_rings]); + } return (revents); #undef want_tx #undef want_rx } /*-------------------- driver support routines -------------------*/ -static int netmap_hw_krings_create(struct netmap_adapter *); - /* default notify callback */ static int netmap_notify(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; enum txrx t = kring->tx; - OS_selwakeup(&kring->si, PI_NET); + nm_os_selwakeup(&kring->si); /* optimization: avoid a wake up on the global * queue if nobody has registered for more * than one ring */ if (na->si_users[t] > 0) - OS_selwakeup(&na->si[t], PI_NET); + nm_os_selwakeup(&na->si[t]); + return NM_IRQ_COMPLETED; +} + +#if 0 +static int +netmap_notify(struct netmap_adapter *na, u_int n_ring, +enum txrx tx, int flags) +{ + if (tx == NR_TX) { + KeSetEvent(notes->TX_EVENT, 0, FALSE); + } + else + { + KeSetEvent(notes->RX_EVENT, 0, FALSE); + } return 0; } +#endif - /* called by all routines that create netmap_adapters. - * Attach na to the ifp (if any) and provide defaults - * for optional callbacks. Defaults assume that we - * are creating an hardware netmap_adapter. + * provide some defaults and get a reference to the + * memory allocator */ int netmap_attach_common(struct netmap_adapter *na) { - struct ifnet *ifp = na->ifp; - if (na->num_tx_rings == 0 || na->num_rx_rings == 0) { D("%s: invalid rings tx %d rx %d", na->name, na->num_tx_rings, na->num_rx_rings); return EINVAL; } - /* ifp is NULL for virtual adapters (bwrap, non-persistent VALE ports, - * pipes, monitors). For bwrap we actually have a non-null ifp for - * use by the external modules, but that is set after this - * function has been called. - * XXX this is ugly, maybe split this function in two (2014-03-14) - */ - if (ifp != NULL) { - WNA(ifp) = na; - /* the following is only needed for na that use the host port. - * XXX do we have something similar for linux ?
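 *
 * [Editorial illustration] Drivers normally reach this function through
 * netmap_attach(), after filling a template adapter on the stack. A
 * sketch, where sc and the foo_netmap_* callbacks are hypothetical:
 *
 *	struct netmap_adapter na;
 *	bzero(&na, sizeof(na));
 *	na.ifp = sc->ifp;
 *	na.num_tx_desc = sc->num_tx_desc;
 *	na.num_rx_desc = sc->num_rx_desc;
 *	na.num_tx_rings = na.num_rx_rings = sc->num_queues;
 *	na.nm_register = foo_netmap_reg;
 *	na.nm_txsync = foo_netmap_txsync;
 *	na.nm_rxsync = foo_netmap_rxsync;
 *	netmap_attach(&na);	// copied into a new netmap_hw_adapter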
- */ #ifdef __FreeBSD__ - na->if_input = ifp->if_input; /* for netmap_send_up */ -#endif /* __FreeBSD__ */ - - NETMAP_SET_CAPABLE(ifp); + if (na->na_flags & NAF_HOST_RINGS && na->ifp) { + na->if_input = na->ifp->if_input; /* for netmap_send_up */ } +#endif /* __FreeBSD__ */ if (na->nm_krings_create == NULL) { /* we assume that we have been called by a driver, * since other port types all provide their own * nm_krings_create */ na->nm_krings_create = netmap_hw_krings_create; na->nm_krings_delete = netmap_hw_krings_delete; } if (na->nm_notify == NULL) na->nm_notify = netmap_notify; na->active_fds = 0; if (na->nm_mem == NULL) /* use the global allocator */ na->nm_mem = &nm_mem; netmap_mem_get(na->nm_mem); #ifdef WITH_VALE if (na->nm_bdg_attach == NULL) /* no special nm_bdg_attach callback. On VALE * attach, we need to interpose a bwrap */ na->nm_bdg_attach = netmap_bwrap_attach; #endif + return 0; } /* standard cleanup, called by all destructors */ void netmap_detach_common(struct netmap_adapter *na) { - if (na->ifp != NULL) - WNA(na->ifp) = NULL; /* XXX do we need this? */ - if (na->tx_rings) { /* XXX should not happen */ D("freeing leftover tx_rings"); na->nm_krings_delete(na); } netmap_pipe_dealloc(na); if (na->nm_mem) netmap_mem_put(na->nm_mem); bzero(na, sizeof(*na)); free(na, M_DEVBUF); } -/* Wrapper for the register callback provided hardware drivers. - * na->ifp == NULL means the driver module has been +/* Wrapper for the register callback provided netmap-enabled + * hardware drivers. + * nm_iszombie(na) means that the driver module has been * unloaded, so we cannot call into it. - * Note that module unloading, in our patched linux drivers, - * happens under NMG_LOCK and after having stopped all the - * nic rings (see netmap_detach). This provides sufficient - * protection for the other driver-provied callbacks - * (i.e., nm_config and nm_*xsync), that therefore don't need - * to wrapped. + * nm_os_ifnet_lock() must guarantee mutual exclusion with + * module unloading. */ static int -netmap_hw_register(struct netmap_adapter *na, int onoff) +netmap_hw_reg(struct netmap_adapter *na, int onoff) { struct netmap_hw_adapter *hwna = (struct netmap_hw_adapter*)na; + int error = 0; - if (na->ifp == NULL) - return onoff ? ENXIO : 0; + nm_os_ifnet_lock(); - return hwna->nm_hw_register(na, onoff); + if (nm_iszombie(na)) { + if (onoff) { + error = ENXIO; + } else if (na != NULL) { + na->na_flags &= ~NAF_NETMAP_ON; + } + goto out; + } + + error = hwna->nm_hw_register(na, onoff); + +out: + nm_os_ifnet_unlock(); + + return error; } +static void +netmap_hw_dtor(struct netmap_adapter *na) +{ + if (nm_iszombie(na) || na->ifp == NULL) + return; + WNA(na->ifp) = NULL; +} + + /* - * Initialize a ``netmap_adapter`` object created by driver on attach. + * Allocate a ``netmap_adapter`` object, and initialize it from the + * 'arg' passed by the driver on attach. * We allocate a block of memory with room for a struct netmap_adapter * plus two sets of N+2 struct netmap_kring (where N is the number * of hardware rings): * krings 0..N-1 are for the hardware queues. * kring N is for the host stack queue * kring N+1 is only used for the selinfo for all queues. // XXX still true ? * Return 0 on success, ENOMEM otherwise. */ -int -netmap_attach(struct netmap_adapter *arg) +static int +_netmap_attach(struct netmap_adapter *arg, size_t size) { struct netmap_hw_adapter *hwna = NULL; - // XXX when is arg == NULL ? - struct ifnet *ifp = arg ? 
arg->ifp : NULL; + struct ifnet *ifp = NULL; - if (arg == NULL || ifp == NULL) + if (arg == NULL || arg->ifp == NULL) goto fail; - hwna = malloc(sizeof(*hwna), M_DEVBUF, M_NOWAIT | M_ZERO); + ifp = arg->ifp; + hwna = malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO); if (hwna == NULL) goto fail; hwna->up = *arg; hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE; strncpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name)); hwna->nm_hw_register = hwna->up.nm_register; - hwna->up.nm_register = netmap_hw_register; + hwna->up.nm_register = netmap_hw_reg; if (netmap_attach_common(&hwna->up)) { free(hwna, M_DEVBUF); goto fail; } netmap_adapter_get(&hwna->up); + NM_ATTACH_NA(ifp, &hwna->up); + #ifdef linux if (ifp->netdev_ops) { /* prepare a clone of the netdev ops */ #ifndef NETMAP_LINUX_HAVE_NETDEV_OPS hwna->nm_ndo.ndo_start_xmit = ifp->netdev_ops; #else hwna->nm_ndo = *ifp->netdev_ops; -#endif +#endif /* NETMAP_LINUX_HAVE_NETDEV_OPS */ } hwna->nm_ndo.ndo_start_xmit = linux_netmap_start_xmit; if (ifp->ethtool_ops) { hwna->nm_eto = *ifp->ethtool_ops; } hwna->nm_eto.set_ringparam = linux_netmap_set_ringparam; #ifdef NETMAP_LINUX_HAVE_SET_CHANNELS hwna->nm_eto.set_channels = linux_netmap_set_channels; -#endif +#endif /* NETMAP_LINUX_HAVE_SET_CHANNELS */ if (arg->nm_config == NULL) { hwna->up.nm_config = netmap_linux_config; } #endif /* linux */ + if (arg->nm_dtor == NULL) { + hwna->up.nm_dtor = netmap_hw_dtor; + } if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n", hwna->up.num_tx_rings, hwna->up.num_tx_desc, hwna->up.num_rx_rings, hwna->up.num_rx_desc); return 0; fail: D("fail, arg %p ifp %p na %p", arg, ifp, hwna); - if (ifp) - netmap_detach(ifp); return (hwna ? EINVAL : ENOMEM); } +int +netmap_attach(struct netmap_adapter *arg) +{ + return _netmap_attach(arg, sizeof(struct netmap_hw_adapter)); +} + + +#ifdef WITH_PTNETMAP_GUEST +int +netmap_pt_guest_attach(struct netmap_adapter *arg, + void *csb, + unsigned int nifp_offset, + nm_pt_guest_ptctl_t ptctl) +{ + struct netmap_pt_guest_adapter *ptna; + struct ifnet *ifp = arg ? arg->ifp : NULL; + int error; + + /* get allocator */ + arg->nm_mem = netmap_mem_pt_guest_new(ifp, nifp_offset, ptctl); + if (arg->nm_mem == NULL) + return ENOMEM; + arg->na_flags |= NAF_MEM_OWNER; + error = _netmap_attach(arg, sizeof(struct netmap_pt_guest_adapter)); + if (error) + return error; + + /* get the netmap_pt_guest_adapter */ + ptna = (struct netmap_pt_guest_adapter *) NA(ifp); + ptna->csb = csb; + + /* Initialize a separate pass-through netmap adapter that is going to + * be used by the ptnet driver only, and so never exposed to netmap + * applications. We only need a subset of the available fields. 
*/ + memset(&ptna->dr, 0, sizeof(ptna->dr)); + ptna->dr.up.ifp = ifp; + ptna->dr.up.nm_mem = ptna->hwup.up.nm_mem; + netmap_mem_get(ptna->dr.up.nm_mem); + ptna->dr.up.nm_config = ptna->hwup.up.nm_config; + + ptna->backend_regifs = 0; + + return 0; +} +#endif /* WITH_PTNETMAP_GUEST */ + + void NM_DBG(netmap_adapter_get)(struct netmap_adapter *na) { if (!na) { return; } refcount_acquire(&na->na_refcount); } /* returns 1 iff the netmap_adapter is destroyed */ int NM_DBG(netmap_adapter_put)(struct netmap_adapter *na) { if (!na) return 1; if (!refcount_release(&na->na_refcount)) return 0; if (na->nm_dtor) na->nm_dtor(na); netmap_detach_common(na); return 1; } /* nm_krings_create callback for all hardware native adapters */ int netmap_hw_krings_create(struct netmap_adapter *na) { int ret = netmap_krings_create(na, 0); if (ret == 0) { /* initialize the mbq for the sw rx ring */ mbq_safe_init(&na->rx_rings[na->num_rx_rings].rx_queue); ND("initialized sw rx queue %d", na->num_rx_rings); } return ret; } /* * Called on module unload by the netmap-enabled drivers */ void netmap_detach(struct ifnet *ifp) { struct netmap_adapter *na = NA(ifp); - int skip; if (!na) return; - skip = 0; NMG_LOCK(); - netmap_disable_all_rings(ifp); - na->ifp = NULL; - na->na_flags &= ~NAF_NETMAP_ON; + netmap_set_all_rings(na, NM_KR_LOCKED); + na->na_flags |= NAF_ZOMBIE; /* * if the netmap adapter is not native, somebody * changed it, so we can not release it here. - * The NULL na->ifp will notify the new owner that + * The NAF_ZOMBIE flag will notify the new owner that * the driver is gone. */ if (na->na_flags & NAF_NATIVE) { - skip = netmap_adapter_put(na); + netmap_adapter_put(na); } - /* give them a chance to notice */ - if (skip == 0) - netmap_enable_all_rings(ifp); + /* give active users a chance to notice that NAF_ZOMBIE has been + * turned on, so that they can stop and return an error to userspace. + * Note that this becomes a NOP if there are no active users and, + * therefore, the put() above has deleted the na, since now NA(ifp) is + * NULL. + */ + netmap_enable_all_rings(ifp); NMG_UNLOCK(); } /* * Intercept packets from the network stack and pass them * to netmap as incoming packets on the 'software' ring. * * We only store packets in a bounded mbq and then copy them * in the relevant rxsync routine. * * We rely on the OS to make sure that the ifp and na do not go * away (typically the caller checks for IFF_DRV_RUNNING or the like). * In nm_register() or whenever there is a reinitialization, * we make sure to make the mode change visible here. 
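 *
 * [Editorial illustration] The queue-space test used below is the usual
 * circular-buffer arithmetic; restated as a stand-alone helper:
 *
 *	static inline u_int
 *	ring_space(u_int hwtail, u_int hwcur, u_int num_slots)
 *	{
 *		int space = (int)hwtail - (int)hwcur;
 *
 *		if (space < 0)
 *			space += num_slots;	// wrap around
 *		return space;
 *	}
 *
 * e.g. hwtail = 3, hwcur = 500, num_slots = 512 gives space = 15.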
*/ int netmap_transmit(struct ifnet *ifp, struct mbuf *m) { struct netmap_adapter *na = NA(ifp); - struct netmap_kring *kring; + struct netmap_kring *kring, *tx_kring; u_int len = MBUF_LEN(m); u_int error = ENOBUFS; + unsigned int txr; struct mbq *q; int space; kring = &na->rx_rings[na->num_rx_rings]; // XXX [Linux] we do not need this lock // if we follow the down/configure/up protocol -gl // mtx_lock(&na->core_lock); if (!nm_netmap_on(na)) { D("%s not in netmap mode anymore", na->name); error = ENXIO; goto done; } + txr = MBUF_TXQ(m); + if (txr >= na->num_tx_rings) { + txr %= na->num_tx_rings; + } + tx_kring = &NMR(na, NR_TX)[txr]; + + if (tx_kring->nr_mode == NKR_NETMAP_OFF) { + return MBUF_TRANSMIT(na, ifp, m); + } + q = &kring->rx_queue; // XXX reconsider long packets if we handle fragments if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */ D("%s from_host, drop packet size %d > %d", na->name, len, NETMAP_BUF_SIZE(na)); goto done; } + if (nm_os_mbuf_has_offld(m)) { + RD(1, "%s drop mbuf requiring offloadings", na->name); + goto done; + } + /* protect against rxsync_from_host(), netmap_sw_to_nic() * and maybe other instances of netmap_transmit (the latter * not possible on Linux). * Also avoid overflowing the queue. */ mbq_lock(q); space = kring->nr_hwtail - kring->nr_hwcur; if (space < 0) space += kring->nkr_num_slots; if (space + mbq_len(q) >= kring->nkr_num_slots - 1) { // XXX RD(10, "%s full hwcur %d hwtail %d qlen %d len %d m %p", na->name, kring->nr_hwcur, kring->nr_hwtail, mbq_len(q), len, m); } else { mbq_enqueue(q, m); ND(10, "%s %d bufs in queue len %d m %p", na->name, mbq_len(q), len, m); /* notify outside the lock */ m = NULL; error = 0; } mbq_unlock(q); done: if (m) m_freem(m); /* unconditionally wake up listeners */ kring->nm_notify(kring, 0); /* this is normally netmap_notify(), but for nics * connected to a bridge it is netmap_bwrap_intr_notify(), * that possibly forwards the frames through the switch */ return (error); } /* * netmap_reset() is called by the driver routines when reinitializing * a ring. The driver is in charge of locking to protect the kring. * If native netmap mode is not set just return NULL. + * If native netmap mode is set, in particular, we have to set nr_mode to + * NKR_NETMAP_ON. */ struct netmap_slot * netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n, u_int new_cur) { struct netmap_kring *kring; int new_hwofs, lim; if (!nm_native_on(na)) { ND("interface not in native netmap mode"); return NULL; /* nothing to reinitialize */ } /* XXX note- in the new scheme, we are not guaranteed to be * under lock (e.g. when called on a device reset). * In this case, we should set a flag and do not trust too * much the values. In practice: TODO * - set a RESET flag somewhere in the kring * - do the processing in a conservative way * - let the *sync() fixup at the end. */ if (tx == NR_TX) { if (n >= na->num_tx_rings) return NULL; + kring = na->tx_rings + n; + + if (kring->nr_pending_mode == NKR_NETMAP_OFF) { + kring->nr_mode = NKR_NETMAP_OFF; + return NULL; + } + // XXX check whether we should use hwcur or rcur new_hwofs = kring->nr_hwcur - new_cur; } else { if (n >= na->num_rx_rings) return NULL; kring = na->rx_rings + n; + + if (kring->nr_pending_mode == NKR_NETMAP_OFF) { + kring->nr_mode = NKR_NETMAP_OFF; + return NULL; + } + new_hwofs = kring->nr_hwtail - new_cur; } lim = kring->nkr_num_slots - 1; if (new_hwofs > lim) new_hwofs -= lim + 1; /* Always set the new offset value and realign the ring. 
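 * [Editorial worked example] With lim = 511 (a 512-slot TX ring),
 * nr_hwcur = 2 and new_cur = 0 give new_hwofs = 2; nr_hwtail is then
 * recomputed below as nr_hwcur + lim = 513, which wraps to 1.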
*/ if (netmap_verbose) D("%s %s%d hwofs %d -> %d, hwtail %d -> %d", na->name, tx == NR_TX ? "TX" : "RX", n, kring->nkr_hwofs, new_hwofs, kring->nr_hwtail, tx == NR_TX ? lim : kring->nr_hwtail); kring->nkr_hwofs = new_hwofs; if (tx == NR_TX) { kring->nr_hwtail = kring->nr_hwcur + lim; if (kring->nr_hwtail > lim) kring->nr_hwtail -= lim + 1; } #if 0 // def linux /* XXX check that the mappings are correct */ /* need ring_nr, adapter->pdev, direction */ buffer_info->dma = dma_map_single(&pdev->dev, addr, adapter->rx_buffer_len, DMA_FROM_DEVICE); if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) { D("error mapping rx netmap buffer %d", i); // XXX fix error handling } #endif /* linux */ /* * Wakeup on the individual and global selwait * We do the wakeup here, but the ring is not yet reconfigured. * However, we are under lock so there are no races. */ + kring->nr_mode = NKR_NETMAP_ON; kring->nm_notify(kring, 0); return kring->ring->slot; } /* * Dispatch rx/tx interrupts to the netmap rings. * * "work_done" is non-null on the RX path, NULL for the TX path. * We rely on the OS to make sure that there is only one active * instance per queue, and that there is appropriate locking. * * The 'notify' routine depends on what the ring is attached to. * - for a netmap file descriptor, do a selwakeup on the individual * waitqueue, plus one on the global one if needed * (see netmap_notify) * - for a nic connected to a switch, call the proper forwarding routine * (see netmap_bwrap_intr_notify) */ -void -netmap_common_irq(struct ifnet *ifp, u_int q, u_int *work_done) +int +netmap_common_irq(struct netmap_adapter *na, u_int q, u_int *work_done) { - struct netmap_adapter *na = NA(ifp); struct netmap_kring *kring; enum txrx t = (work_done ? NR_RX : NR_TX); q &= NETMAP_RING_MASK; if (netmap_verbose) { RD(5, "received %s queue %d", work_done ? "RX" : "TX" , q); } if (q >= nma_get_nrings(na, t)) - return; // not a physical queue + return NM_IRQ_PASS; // not a physical queue kring = NMR(na, t) + q; + if (kring->nr_mode == NKR_NETMAP_OFF) { + return NM_IRQ_PASS; + } + if (t == NR_RX) { kring->nr_kflags |= NKR_PENDINTR; // XXX atomic ? *work_done = 1; /* do not fire napi again */ } - kring->nm_notify(kring, 0); + + return kring->nm_notify(kring, 0); } /* * Default functions to handle rx/tx interrupts from a physical device. * "work_done" is non-null on the RX path, NULL for the TX path. * - * If the card is not in netmap mode, simply return 0, + * If the card is not in netmap mode, simply return NM_IRQ_PASS, * so that the caller proceeds with regular processing. - * Otherwise call netmap_common_irq() and return 1. + * Otherwise call netmap_common_irq(). * * If the card is connected to a netmap file descriptor, * do a selwakeup on the individual queue, plus one on the global one * if needed (multiqueue card _and_ there are multiqueue listeners), - * and return 1. + * and return NR_IRQ_COMPLETED. * * Finally, if called on rx from an interface connected to a switch, - * calls the proper forwarding routine, and return 1. + * calls the proper forwarding routine. */ int netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done) { struct netmap_adapter *na = NA(ifp); /* * XXX emulated netmap mode sets NAF_SKIP_INTR so * we still use the regular driver even though the previous * check fails. It is unclear whether we should use * nm_native_on() here. 
*/ if (!nm_netmap_on(na)) - return 0; + return NM_IRQ_PASS; if (na->na_flags & NAF_SKIP_INTR) { ND("use regular interrupt"); - return 0; + return NM_IRQ_PASS; } - netmap_common_irq(ifp, q, work_done); - return 1; + return netmap_common_irq(na, q, work_done); } /* * Module loader and unloader * * netmap_init() creates the /dev/netmap device and initializes * all global variables. Returns 0 on success, errno on failure * (but there is no chance) * * netmap_fini() destroys everything. */ static struct cdev *netmap_dev; /* /dev/netmap character device. */ extern struct cdevsw netmap_cdevsw; void netmap_fini(void) { - netmap_uninit_bridges(); if (netmap_dev) destroy_dev(netmap_dev); + /* we assume that there are no longer netmap users */ + nm_os_ifnet_fini(); + netmap_uninit_bridges(); netmap_mem_fini(); NMG_LOCK_DESTROY(); printf("netmap: unloaded module.\n"); } int netmap_init(void) { int error; NMG_LOCK_INIT(); error = netmap_mem_init(); if (error != 0) goto fail; /* * MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls * when the module is compiled in. * XXX could use make_dev_credv() to get error number */ netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD, &netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600, "netmap"); if (!netmap_dev) goto fail; error = netmap_init_bridges(); if (error) goto fail; #ifdef __FreeBSD__ - nm_vi_init_index(); + nm_os_vi_init_index(); #endif + + error = nm_os_ifnet_init(); + if (error) + goto fail; printf("netmap: loaded module\n"); return (0); fail: netmap_fini(); return (EINVAL); /* may be incorrect */ } Index: head/sys/dev/netmap/netmap_freebsd.c =================================================================== --- head/sys/dev/netmap/netmap_freebsd.c (revision 307393) +++ head/sys/dev/netmap/netmap_freebsd.c (revision 307394) @@ -1,857 +1,1497 @@ /* * Copyright (C) 2013-2014 Universita` di Pisa. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ /* $FreeBSD$ */ #include "opt_inet.h" #include "opt_inet6.h" #include <sys/types.h> #include <sys/module.h> #include <sys/errno.h> #include <sys/param.h> /* defines used in kernel.h */ #include <sys/poll.h> /* POLLIN, POLLOUT */ #include <sys/kernel.h> /* types used in module initialization */ -#include <sys/conf.h> /* DEV_MODULE */ +#include <sys/conf.h> /* DEV_MODULE_ORDERED */ #include <sys/endian.h> +#include <sys/syscallsubr.h> /* kern_ioctl() */ #include <sys/rwlock.h> #include <vm/vm.h> /* vtophys */ #include <vm/pmap.h> /* vtophys */ #include <vm/vm_param.h> #include <vm/vm_object.h> #include <vm/vm_page.h> #include <vm/vm_pager.h> #include <vm/uma.h> #include <sys/malloc.h> #include <sys/socket.h> /* sockaddrs */ #include <sys/selinfo.h> +#include <sys/kthread.h> /* kthread_add() */ +#include <sys/proc.h> /* PROC_LOCK() */ +#include <sys/unistd.h> /* RFNOWAIT */ +#include <sys/sched.h> /* sched_bind() */ +#include <sys/smp.h> /* mp_maxid */ #include <net/if.h> #include <net/if_var.h> #include <net/if_types.h> /* IFT_ETHER */ #include <net/ethernet.h> /* ether_ifdetach */ #include <net/if_dl.h> /* LLADDR */ #include <machine/bus.h> /* bus_dmamap_* */ #include <netinet/in.h> /* in6_cksum_pseudo() */ #include <machine/in_cksum.h> /* in_pseudo(), in_cksum_hdr() */ #include <net/netmap.h> #include <dev/netmap/netmap_kern.h> +#include <net/netmap_virt.h> #include <dev/netmap/netmap_mem2.h> /* ======================== FREEBSD-SPECIFIC ROUTINES ================== */ +void nm_os_selinfo_init(NM_SELINFO_T *si) { + struct mtx *m = &si->m; + mtx_init(m, "nm_kn_lock", NULL, MTX_DEF); + knlist_init_mtx(&si->si.si_note, m); +} + +void +nm_os_selinfo_uninit(NM_SELINFO_T *si) +{ + /* XXX kqueue(9) needed; these will mirror knlist_init. */ + knlist_delete(&si->si.si_note, curthread, 0 /* not locked */ ); + knlist_destroy(&si->si.si_note); + /* now we don't need the mutex anymore */ + mtx_destroy(&si->m); +} + +void +nm_os_ifnet_lock(void) +{ + IFNET_WLOCK(); +} + +void +nm_os_ifnet_unlock(void) +{ + IFNET_WUNLOCK(); +} + +static int netmap_use_count = 0; + +void +nm_os_get_module(void) +{ + netmap_use_count++; +} + +void +nm_os_put_module(void) +{ + netmap_use_count--; +} + +static void +netmap_ifnet_arrival_handler(void *arg __unused, struct ifnet *ifp) +{ + netmap_undo_zombie(ifp); +} + +static void +netmap_ifnet_departure_handler(void *arg __unused, struct ifnet *ifp) +{ + netmap_make_zombie(ifp); +} + +static eventhandler_tag nm_ifnet_ah_tag; +static eventhandler_tag nm_ifnet_dh_tag; + +int +nm_os_ifnet_init(void) +{ + nm_ifnet_ah_tag = + EVENTHANDLER_REGISTER(ifnet_arrival_event, + netmap_ifnet_arrival_handler, + NULL, EVENTHANDLER_PRI_ANY); + nm_ifnet_dh_tag = + EVENTHANDLER_REGISTER(ifnet_departure_event, + netmap_ifnet_departure_handler, + NULL, EVENTHANDLER_PRI_ANY); + return 0; +} + +void +nm_os_ifnet_fini(void) +{ + EVENTHANDLER_DEREGISTER(ifnet_arrival_event, + nm_ifnet_ah_tag); + EVENTHANDLER_DEREGISTER(ifnet_departure_event, + nm_ifnet_dh_tag); +} + rawsum_t -nm_csum_raw(uint8_t *data, size_t len, rawsum_t cur_sum) +nm_os_csum_raw(uint8_t *data, size_t len, rawsum_t cur_sum) { /* TODO XXX please use the FreeBSD implementation for this. */ uint16_t *words = (uint16_t *)data; int nw = len / 2; int i; for (i = 0; i < nw; i++) cur_sum += be16toh(words[i]); if (len & 1) cur_sum += (data[len-1] << 8); return cur_sum; } /* Fold a raw checksum: 'cur_sum' is in host byte order, while the * return value is in network byte order. */ uint16_t -nm_csum_fold(rawsum_t cur_sum) +nm_os_csum_fold(rawsum_t cur_sum) { /* TODO XXX please use the FreeBSD implementation for this.
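 * [Editorial worked example] Folding cur_sum = 0x12345: one pass gives
 * 0x2345 + 0x1 = 0x2346; no carry bits remain, so the function returns
 * htobe16(~0x2346 & 0xFFFF) = htobe16(0xdcb9).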
*/ while (cur_sum >> 16) cur_sum = (cur_sum & 0xFFFF) + (cur_sum >> 16); return htobe16((~cur_sum) & 0xFFFF); } -uint16_t nm_csum_ipv4(struct nm_iphdr *iph) +uint16_t nm_os_csum_ipv4(struct nm_iphdr *iph) { #if 0 return in_cksum_hdr((void *)iph); #else - return nm_csum_fold(nm_csum_raw((uint8_t*)iph, sizeof(struct nm_iphdr), 0)); + return nm_os_csum_fold(nm_os_csum_raw((uint8_t*)iph, sizeof(struct nm_iphdr), 0)); #endif } void -nm_csum_tcpudp_ipv4(struct nm_iphdr *iph, void *data, +nm_os_csum_tcpudp_ipv4(struct nm_iphdr *iph, void *data, size_t datalen, uint16_t *check) { #ifdef INET uint16_t pseudolen = datalen + iph->protocol; /* Compute and insert the pseudo-header cheksum. */ *check = in_pseudo(iph->saddr, iph->daddr, htobe16(pseudolen)); /* Compute the checksum on TCP/UDP header + payload * (includes the pseudo-header). */ - *check = nm_csum_fold(nm_csum_raw(data, datalen, 0)); + *check = nm_os_csum_fold(nm_os_csum_raw(data, datalen, 0)); #else static int notsupported = 0; if (!notsupported) { notsupported = 1; D("inet4 segmentation not supported"); } #endif } void -nm_csum_tcpudp_ipv6(struct nm_ipv6hdr *ip6h, void *data, +nm_os_csum_tcpudp_ipv6(struct nm_ipv6hdr *ip6h, void *data, size_t datalen, uint16_t *check) { #ifdef INET6 *check = in6_cksum_pseudo((void*)ip6h, datalen, ip6h->nexthdr, 0); - *check = nm_csum_fold(nm_csum_raw(data, datalen, 0)); + *check = nm_os_csum_fold(nm_os_csum_raw(data, datalen, 0)); #else static int notsupported = 0; if (!notsupported) { notsupported = 1; D("inet6 segmentation not supported"); } #endif } +/* on FreeBSD we send up one packet at a time */ +void * +nm_os_send_up(struct ifnet *ifp, struct mbuf *m, struct mbuf *prev) +{ + NA(ifp)->if_input(ifp, m); + return NULL; +} + +int +nm_os_mbuf_has_offld(struct mbuf *m) +{ + return m->m_pkthdr.csum_flags & (CSUM_TCP | CSUM_UDP | CSUM_SCTP | + CSUM_TCP_IPV6 | CSUM_UDP_IPV6 | + CSUM_SCTP_IPV6 | CSUM_TSO); +} + +static void +freebsd_generic_rx_handler(struct ifnet *ifp, struct mbuf *m) +{ + struct netmap_generic_adapter *gna = + (struct netmap_generic_adapter *)NA(ifp); + int stolen = generic_rx_handler(ifp, m); + + if (!stolen) { + gna->save_if_input(ifp, m); + } +} + /* * Intercept the rx routine in the standard device driver. * Second argument is non-zero to intercept, 0 to restore */ int -netmap_catch_rx(struct netmap_generic_adapter *gna, int intercept) +nm_os_catch_rx(struct netmap_generic_adapter *gna, int intercept) { struct netmap_adapter *na = &gna->up.up; struct ifnet *ifp = na->ifp; if (intercept) { if (gna->save_if_input) { D("cannot intercept again"); return EINVAL; /* already set */ } gna->save_if_input = ifp->if_input; - ifp->if_input = generic_rx_handler; + ifp->if_input = freebsd_generic_rx_handler; } else { if (!gna->save_if_input){ D("cannot restore"); return EINVAL; /* not saved */ } ifp->if_input = gna->save_if_input; gna->save_if_input = NULL; } return 0; } /* * Intercept the packet steering routine in the tx path, * so that we can decide which queue is used for an mbuf. * Second argument is non-zero to intercept, 0 to restore. * On freebsd we just intercept if_transmit. 
*/ -void -netmap_catch_tx(struct netmap_generic_adapter *gna, int enable) +int +nm_os_catch_tx(struct netmap_generic_adapter *gna, int intercept) { struct netmap_adapter *na = &gna->up.up; struct ifnet *ifp = netmap_generic_getifp(gna); - if (enable) { + if (intercept) { na->if_transmit = ifp->if_transmit; ifp->if_transmit = netmap_transmit; } else { ifp->if_transmit = na->if_transmit; } + + return 0; } /* * Transmit routine used by generic_netmap_txsync(). Returns 0 on success * and non-zero on error (which may be packet drops or other errors). * addr and len identify the netmap buffer, m is the (preallocated) * mbuf to use for transmissions. * * We should add a reference to the mbuf so the m_freem() at the end * of the transmission does not consume resources. * * On FreeBSD, and on multiqueue cards, we can force the queue using * if (M_HASHTYPE_GET(m) != M_HASHTYPE_NONE) * i = m->m_pkthdr.flowid % adapter->num_queues; * else * i = curcpu % adapter->num_queues; * */ int -generic_xmit_frame(struct ifnet *ifp, struct mbuf *m, - void *addr, u_int len, u_int ring_nr) +nm_os_generic_xmit_frame(struct nm_os_gen_arg *a) { int ret; + u_int len = a->len; + struct ifnet *ifp = a->ifp; + struct mbuf *m = a->m; +#if __FreeBSD_version < 1100000 /* - * The mbuf should be a cluster from our special pool, - * so we do not need to do an m_copyback but just copy - * (and eventually, just reference the netmap buffer) + * Old FreeBSD versions. The mbuf has a cluster attached, + * we need to copy from the cluster to the netmap buffer. */ - - if (GET_MBUF_REFCNT(m) != 1) { - D("invalid refcnt %d for %p", - GET_MBUF_REFCNT(m), m); + if (MBUF_REFCNT(m) != 1) { + D("invalid refcnt %d for %p", MBUF_REFCNT(m), m); panic("in generic_xmit_frame"); } - // XXX the ext_size check is unnecessary if we link the netmap buf if (m->m_ext.ext_size < len) { RD(5, "size %d < len %d", m->m_ext.ext_size, len); len = m->m_ext.ext_size; } - if (0) { /* XXX seems to have negligible benefits */ - m->m_ext.ext_buf = m->m_data = addr; - } else { - bcopy(addr, m->m_data, len); - } + bcopy(a->addr, m->m_data, len); +#else /* __FreeBSD_version >= 1100000 */ + /* New FreeBSD versions. Link the external storage to + * the netmap buffer, so that no copy is necessary. */ + m->m_ext.ext_buf = m->m_data = a->addr; + m->m_ext.ext_size = len; +#endif /* __FreeBSD_version >= 1100000 */ + m->m_len = m->m_pkthdr.len = len; - // inc refcount. All ours, we could skip the atomic - atomic_fetchadd_int(PNT_MBUF_REFCNT(m), 1); + + /* mbuf refcnt is not contended, no need to use atomic + * (a memory barrier is enough). */ + SET_MBUF_REFCNT(m, 2); M_HASHTYPE_SET(m, M_HASHTYPE_OPAQUE); - m->m_pkthdr.flowid = ring_nr; + m->m_pkthdr.flowid = a->ring_nr; m->m_pkthdr.rcvif = ifp; /* used for tx notification */ ret = NA(ifp)->if_transmit(ifp, m); - return ret; + return ret ? 
-1 : 0; } #if __FreeBSD_version >= 1100005 struct netmap_adapter * netmap_getna(if_t ifp) { return (NA((struct ifnet *)ifp)); } #endif /* __FreeBSD_version >= 1100005 */ /* * The following two functions are empty until we have a generic * way to extract the info from the ifp */ int -generic_find_num_desc(struct ifnet *ifp, unsigned int *tx, unsigned int *rx) +nm_os_generic_find_num_desc(struct ifnet *ifp, unsigned int *tx, unsigned int *rx) { D("called, in tx %d rx %d", *tx, *rx); return 0; } void -generic_find_num_queues(struct ifnet *ifp, u_int *txq, u_int *rxq) +nm_os_generic_find_num_queues(struct ifnet *ifp, u_int *txq, u_int *rxq) { D("called, in txq %d rxq %d", *txq, *rxq); *txq = netmap_generic_rings; *rxq = netmap_generic_rings; } +void +nm_os_generic_set_features(struct netmap_generic_adapter *gna) +{ + gna->rxsg = 1; /* Supported through m_copydata. */ + gna->txqdisc = 0; /* Not supported. */ +} + void -netmap_mitigation_init(struct nm_generic_mit *mit, int idx, struct netmap_adapter *na) +nm_os_mitigation_init(struct nm_generic_mit *mit, int idx, struct netmap_adapter *na) { ND("called"); mit->mit_pending = 0; mit->mit_ring_idx = idx; mit->mit_na = na; } void -netmap_mitigation_start(struct nm_generic_mit *mit) +nm_os_mitigation_start(struct nm_generic_mit *mit) { ND("called"); } void -netmap_mitigation_restart(struct nm_generic_mit *mit) +nm_os_mitigation_restart(struct nm_generic_mit *mit) { ND("called"); } int -netmap_mitigation_active(struct nm_generic_mit *mit) +nm_os_mitigation_active(struct nm_generic_mit *mit) { ND("called"); return 0; } void -netmap_mitigation_cleanup(struct nm_generic_mit *mit) +nm_os_mitigation_cleanup(struct nm_generic_mit *mit) { ND("called"); } static int nm_vi_dummy(struct ifnet *ifp, u_long cmd, caddr_t addr) { return EINVAL; } static void nm_vi_start(struct ifnet *ifp) { panic("nm_vi_start() must not be called"); } /* * Index manager of persistent virtual interfaces. * It is used to decide the lowest byte of the MAC address. * We use the same algorithm as for the management of bridge port indexes. */ #define NM_VI_MAX 255 static struct { uint8_t index[NM_VI_MAX]; /* XXX just for a reasonable number */ uint8_t active; struct mtx lock; } nm_vi_indices; void -nm_vi_init_index(void) +nm_os_vi_init_index(void) { int i; for (i = 0; i < NM_VI_MAX; i++) nm_vi_indices.index[i] = i; nm_vi_indices.active = 0; mtx_init(&nm_vi_indices.lock, "nm_vi_indices_lock", NULL, MTX_DEF); } /* return -1 if no index available */ static int nm_vi_get_index(void) { int ret; mtx_lock(&nm_vi_indices.lock); ret = nm_vi_indices.active == NM_VI_MAX ? -1 : nm_vi_indices.index[nm_vi_indices.active++]; mtx_unlock(&nm_vi_indices.lock); return ret; } static void nm_vi_free_index(uint8_t val) { int i, lim; mtx_lock(&nm_vi_indices.lock); lim = nm_vi_indices.active; for (i = 0; i < lim; i++) { if (nm_vi_indices.index[i] == val) { /* swap index[lim-1] and index[i] */ int tmp = nm_vi_indices.index[lim-1]; nm_vi_indices.index[lim-1] = val; nm_vi_indices.index[i] = tmp; nm_vi_indices.active--; break; } } if (lim == nm_vi_indices.active) D("funny, index %u not found", val); mtx_unlock(&nm_vi_indices.lock); } #undef NM_VI_MAX /* * Implementation of a netmap-capable virtual interface that is * registered with the system. * It is based on if_tap.c and ip_fw_log.c in FreeBSD 9. * * Note: Linux sets refcount to 0 on allocation of net_device, * then increments it on registration to the system. * FreeBSD sets refcount to 1 on if_alloc(), and does not * increment this refcount on if_attach().
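 *
 * [Editorial illustration] Persistent virtual interfaces are created
 * from userspace through the NETMAP_BDG_NEWIF branch of netmap_ioctl()
 * shown earlier; a sketch, where "myport0" is only an example name:
 *
 *	struct nmreq nmr;
 *	bzero(&nmr, sizeof(nmr));
 *	nmr.nr_version = NETMAP_API;
 *	nmr.nr_cmd = NETMAP_BDG_NEWIF;
 *	strlcpy(nmr.nr_name, "myport0", sizeof(nmr.nr_name));
 *	ioctl(fd, NIOCREGIF, &nmr);	// ends up in nm_os_vi_persist()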
*/ int -nm_vi_persist(const char *name, struct ifnet **ret) +nm_os_vi_persist(const char *name, struct ifnet **ret) { struct ifnet *ifp; u_short macaddr_hi; uint32_t macaddr_mid; u_char eaddr[6]; int unit = nm_vi_get_index(); /* just to decide MAC address */ if (unit < 0) return EBUSY; /* * We use the same MAC address generation method as tap, * except that the highest octet is 00:be instead of 00:bd */ macaddr_hi = htons(0x00be); /* XXX tap + 1 */ macaddr_mid = (uint32_t) ticks; bcopy(&macaddr_hi, eaddr, sizeof(short)); bcopy(&macaddr_mid, &eaddr[2], sizeof(uint32_t)); eaddr[5] = (uint8_t)unit; ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { D("if_alloc failed"); return ENOMEM; } if_initname(ifp, name, IF_DUNIT_NONE); ifp->if_mtu = 65536; ifp->if_flags = IFF_UP | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_init = (void *)nm_vi_dummy; ifp->if_ioctl = nm_vi_dummy; ifp->if_start = nm_vi_start; ifp->if_mtu = ETHERMTU; IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen); ifp->if_capabilities |= IFCAP_LINKSTATE; ifp->if_capenable |= IFCAP_LINKSTATE; ether_ifattach(ifp, eaddr); *ret = ifp; return 0; } + /* unregister from the system and drop the final refcount */ void -nm_vi_detach(struct ifnet *ifp) +nm_os_vi_detach(struct ifnet *ifp) { nm_vi_free_index(((char *)IF_LLADDR(ifp))[5]); ether_ifdetach(ifp); if_free(ifp); } +/* ======================== PTNETMAP SUPPORT ========================== */ + +#ifdef WITH_PTNETMAP_GUEST +#include <sys/bus.h> +#include <sys/rman.h> +#include <machine/bus.h> /* bus_dmamap_* */ +#include <machine/resource.h> +#include <dev/pci/pcivar.h> +#include <dev/pci/pcireg.h> /* + * ptnetmap memory device (memdev) for freebsd guest, + * used to expose host netmap memory to the guest through a PCI BAR. + */ + +/* + * ptnetmap memdev private data structure + */ +struct ptnetmap_memdev { + device_t dev; + struct resource *pci_io; + struct resource *pci_mem; + struct netmap_mem_d *nm_mem; +}; + +static int ptn_memdev_probe(device_t); +static int ptn_memdev_attach(device_t); +static int ptn_memdev_detach(device_t); +static int ptn_memdev_shutdown(device_t); + +static device_method_t ptn_memdev_methods[] = { + DEVMETHOD(device_probe, ptn_memdev_probe), + DEVMETHOD(device_attach, ptn_memdev_attach), + DEVMETHOD(device_detach, ptn_memdev_detach), + DEVMETHOD(device_shutdown, ptn_memdev_shutdown), + DEVMETHOD_END +}; + +static driver_t ptn_memdev_driver = { + PTNETMAP_MEMDEV_NAME, + ptn_memdev_methods, + sizeof(struct ptnetmap_memdev), +}; + +/* We use (SI_ORDER_MIDDLE+1) here, see DEV_MODULE_ORDERED() invocation + * below. */ +static devclass_t ptnetmap_devclass; +DRIVER_MODULE_ORDERED(ptn_memdev, pci, ptn_memdev_driver, ptnetmap_devclass, + NULL, NULL, SI_ORDER_MIDDLE + 1); + +/* + * I/O port read/write wrappers. + * Some are not used, so we keep them commented out until needed. + */ +#define ptn_ioread16(ptn_dev, reg) bus_read_2((ptn_dev)->pci_io, (reg)) +#define ptn_ioread32(ptn_dev, reg) bus_read_4((ptn_dev)->pci_io, (reg)) +#if 0 +#define ptn_ioread8(ptn_dev, reg) bus_read_1((ptn_dev)->pci_io, (reg)) +#define ptn_iowrite8(ptn_dev, reg, val) bus_write_1((ptn_dev)->pci_io, (reg), (val)) +#define ptn_iowrite16(ptn_dev, reg, val) bus_write_2((ptn_dev)->pci_io, (reg), (val)) +#define ptn_iowrite32(ptn_dev, reg, val) bus_write_4((ptn_dev)->pci_io, (reg), (val)) +#endif /* unused */ + +/* + * Map host netmap memory through PCI-BAR in the guest OS, + * returning physical (nm_paddr) and virtual (nm_addr) addresses + * of the netmap memory mapped in the guest.
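 *
 * [Editorial note] The handshake with the host, as implemented below,
 * is: read the size of the shared region from the
 * PTNETMAP_IO_PCI_MEMSIZE register on the I/O BAR, map
 * PTNETMAP_MEM_PCI_BAR for that size, and hand the resource's start
 * address and KVA back to the caller as nm_paddr and nm_addr.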
+ */ +int +nm_os_pt_memdev_iomap(struct ptnetmap_memdev *ptn_dev, vm_paddr_t *nm_paddr, void **nm_addr) +{ + uint32_t mem_size; + int rid; + + D("ptn_memdev_driver iomap"); + + rid = PCIR_BAR(PTNETMAP_MEM_PCI_BAR); + mem_size = ptn_ioread32(ptn_dev, PTNETMAP_IO_PCI_MEMSIZE); + + /* map memory allocator */ + ptn_dev->pci_mem = bus_alloc_resource(ptn_dev->dev, SYS_RES_MEMORY, + &rid, 0, ~0, mem_size, RF_ACTIVE); + if (ptn_dev->pci_mem == NULL) { + *nm_paddr = 0; + *nm_addr = 0; + return ENOMEM; + } + + *nm_paddr = rman_get_start(ptn_dev->pci_mem); + *nm_addr = rman_get_virtual(ptn_dev->pci_mem); + + D("=== BAR %d start %lx len %lx mem_size %x ===", + PTNETMAP_MEM_PCI_BAR, + *nm_paddr, + rman_get_size(ptn_dev->pci_mem), + mem_size); + return (0); +} + +/* Unmap host netmap memory. */ +void +nm_os_pt_memdev_iounmap(struct ptnetmap_memdev *ptn_dev) +{ + D("ptn_memdev_driver iounmap"); + + if (ptn_dev->pci_mem) { + bus_release_resource(ptn_dev->dev, SYS_RES_MEMORY, + PCIR_BAR(PTNETMAP_MEM_PCI_BAR), ptn_dev->pci_mem); + ptn_dev->pci_mem = NULL; + } +} + +/* Device identification routine, return BUS_PROBE_DEFAULT on success, + * positive on failure */ +static int +ptn_memdev_probe(device_t dev) +{ + char desc[256]; + + if (pci_get_vendor(dev) != PTNETMAP_PCI_VENDOR_ID) + return (ENXIO); + if (pci_get_device(dev) != PTNETMAP_PCI_DEVICE_ID) + return (ENXIO); + + snprintf(desc, sizeof(desc), "%s PCI adapter", + PTNETMAP_MEMDEV_NAME); + device_set_desc_copy(dev, desc); + + return (BUS_PROBE_DEFAULT); +} + +/* Device initialization routine. */ +static int +ptn_memdev_attach(device_t dev) +{ + struct ptnetmap_memdev *ptn_dev; + int rid; + uint16_t mem_id; + + D("ptn_memdev_driver attach"); + + ptn_dev = device_get_softc(dev); + ptn_dev->dev = dev; + + pci_enable_busmaster(dev); + + rid = PCIR_BAR(PTNETMAP_IO_PCI_BAR); + ptn_dev->pci_io = bus_alloc_resource_any(dev, SYS_RES_IOPORT, &rid, + RF_ACTIVE); + if (ptn_dev->pci_io == NULL) { + device_printf(dev, "cannot map I/O space\n"); + return (ENXIO); + } + + mem_id = ptn_ioread16(ptn_dev, PTNETMAP_IO_PCI_HOSTID); + + /* create guest allocator */ + ptn_dev->nm_mem = netmap_mem_pt_guest_attach(ptn_dev, mem_id); + if (ptn_dev->nm_mem == NULL) { + ptn_memdev_detach(dev); + return (ENOMEM); + } + netmap_mem_get(ptn_dev->nm_mem); + + D("ptn_memdev_driver probe OK - host_id: %d", mem_id); + + return (0); +} + +/* Device removal routine. */ +static int +ptn_memdev_detach(device_t dev) +{ + struct ptnetmap_memdev *ptn_dev; + + D("ptn_memdev_driver detach"); + ptn_dev = device_get_softc(dev); + + if (ptn_dev->nm_mem) { + netmap_mem_put(ptn_dev->nm_mem); + ptn_dev->nm_mem = NULL; + } + if (ptn_dev->pci_mem) { + bus_release_resource(dev, SYS_RES_MEMORY, + PCIR_BAR(PTNETMAP_MEM_PCI_BAR), ptn_dev->pci_mem); + ptn_dev->pci_mem = NULL; + } + if (ptn_dev->pci_io) { + bus_release_resource(dev, SYS_RES_IOPORT, + PCIR_BAR(PTNETMAP_IO_PCI_BAR), ptn_dev->pci_io); + ptn_dev->pci_io = NULL; + } + + return (0); +} + +static int +ptn_memdev_shutdown(device_t dev) +{ + D("ptn_memdev_driver shutdown"); + return bus_generic_shutdown(dev); +} + +#endif /* WITH_PTNETMAP_GUEST */ + +/* * In order to track whether pages are still mapped, we hook into * the standard cdev_pager and intercept the constructor and * destructor. 
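 *
 * [Editorial note] The net effect is that every mapping takes a
 * reference on the owning priv (np_refs, see netmap_mmap_single()
 * below) and on the cdev, so netmap_dtor() can only run once the pager
 * destructor has seen the last mapping go away.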
*/ struct netmap_vm_handle_t { struct cdev *dev; struct netmap_priv_d *priv; }; static int netmap_dev_pager_ctor(void *handle, vm_ooffset_t size, vm_prot_t prot, vm_ooffset_t foff, struct ucred *cred, u_short *color) { struct netmap_vm_handle_t *vmh = handle; if (netmap_verbose) D("handle %p size %jd prot %d foff %jd", handle, (intmax_t)size, prot, (intmax_t)foff); if (color) *color = 0; dev_ref(vmh->dev); return 0; } static void netmap_dev_pager_dtor(void *handle) { struct netmap_vm_handle_t *vmh = handle; struct cdev *dev = vmh->dev; struct netmap_priv_d *priv = vmh->priv; if (netmap_verbose) D("handle %p", handle); netmap_dtor(priv); free(vmh, M_DEVBUF); dev_rel(dev); } static int netmap_dev_pager_fault(vm_object_t object, vm_ooffset_t offset, int prot, vm_page_t *mres) { struct netmap_vm_handle_t *vmh = object->handle; struct netmap_priv_d *priv = vmh->priv; struct netmap_adapter *na = priv->np_na; vm_paddr_t paddr; vm_page_t page; vm_memattr_t memattr; vm_pindex_t pidx; ND("object %p offset %jd prot %d mres %p", object, (intmax_t)offset, prot, mres); memattr = object->memattr; pidx = OFF_TO_IDX(offset); paddr = netmap_mem_ofstophys(na->nm_mem, offset); if (paddr == 0) return VM_PAGER_FAIL; if (((*mres)->flags & PG_FICTITIOUS) != 0) { /* * If the passed in result page is a fake page, update it with * the new physical address. */ page = *mres; vm_page_updatefake(page, paddr, memattr); } else { /* * Replace the passed-in reqpage page with our own fake page and * free up all of the original pages. */ #ifndef VM_OBJECT_WUNLOCK /* FreeBSD < 10.x */ #define VM_OBJECT_WUNLOCK VM_OBJECT_UNLOCK #define VM_OBJECT_WLOCK VM_OBJECT_LOCK #endif /* VM_OBJECT_WUNLOCK */ VM_OBJECT_WUNLOCK(object); page = vm_page_getfake(paddr, memattr); VM_OBJECT_WLOCK(object); vm_page_lock(*mres); vm_page_free(*mres); vm_page_unlock(*mres); *mres = page; vm_page_insert(page, object, pidx); } page->valid = VM_PAGE_BITS_ALL; return (VM_PAGER_OK); } static struct cdev_pager_ops netmap_cdev_pager_ops = { .cdev_pg_ctor = netmap_dev_pager_ctor, .cdev_pg_dtor = netmap_dev_pager_dtor, .cdev_pg_fault = netmap_dev_pager_fault, }; static int netmap_mmap_single(struct cdev *cdev, vm_ooffset_t *foff, vm_size_t objsize, vm_object_t *objp, int prot) { int error; struct netmap_vm_handle_t *vmh; struct netmap_priv_d *priv; vm_object_t obj; if (netmap_verbose) D("cdev %p foff %jd size %jd objp %p prot %d", cdev, (intmax_t )*foff, (intmax_t )objsize, objp, prot); vmh = malloc(sizeof(struct netmap_vm_handle_t), M_DEVBUF, M_NOWAIT | M_ZERO); if (vmh == NULL) return ENOMEM; vmh->dev = cdev; NMG_LOCK(); error = devfs_get_cdevpriv((void**)&priv); if (error) goto err_unlock; if (priv->np_nifp == NULL) { error = EINVAL; goto err_unlock; } vmh->priv = priv; priv->np_refs++; NMG_UNLOCK(); obj = cdev_pager_allocate(vmh, OBJT_DEVICE, &netmap_cdev_pager_ops, objsize, prot, *foff, NULL); if (obj == NULL) { D("cdev_pager_allocate failed"); error = EINVAL; goto err_deref; } *objp = obj; return 0; err_deref: NMG_LOCK(); priv->np_refs--; err_unlock: NMG_UNLOCK(); // err: free(vmh, M_DEVBUF); return error; } /* * On FreeBSD the close routine is only called on the last close on * the device (/dev/netmap) so we cannot do anything useful. * To track close() on individual file descriptors we pass netmap_dtor() to * devfs_set_cdevpriv() on open(). The FreeBSD kernel will call the destructor - * when the last fd pointing to the device is closed. + * when the last fd pointing to the device is closed.
* * Note that FreeBSD does not even munmap() on close() so we also have * to track mmap() ourselves, and postpone the call to * netmap_dtor() until the process has no open fds and no active * memory maps on /dev/netmap, as in linux. */ static int netmap_close(struct cdev *dev, int fflag, int devtype, struct thread *td) { if (netmap_verbose) D("dev %p fflag 0x%x devtype %d td %p", dev, fflag, devtype, td); return 0; } static int netmap_open(struct cdev *dev, int oflags, int devtype, struct thread *td) { struct netmap_priv_d *priv; int error; (void)dev; (void)oflags; (void)devtype; (void)td; - priv = malloc(sizeof(struct netmap_priv_d), M_DEVBUF, - M_NOWAIT | M_ZERO); - if (priv == NULL) - return ENOMEM; - priv->np_refs = 1; + NMG_LOCK(); + priv = netmap_priv_new(); + if (priv == NULL) { + error = ENOMEM; + goto out; + } error = devfs_set_cdevpriv(priv, netmap_dtor); if (error) { - free(priv, M_DEVBUF); - } else { - NMG_LOCK(); - netmap_use_count++; - NMG_UNLOCK(); + netmap_priv_delete(priv); } +out: + NMG_UNLOCK(); return error; } +/******************** kthread wrapper ****************/ +#include +u_int +nm_os_ncpus(void) +{ + return mp_maxid + 1; +} + +struct nm_kthread_ctx { + struct thread *user_td; /* user-space thread (the kthread creator), used to send the ioctl */ + /* notification to guest (interrupt) */ + int irq_fd; /* ioctl fd */ + struct nm_kth_ioctl irq_ioctl; /* ioctl arguments */ + + /* notification from guest */ + void *ioevent_file; /* tsleep() argument */ + + /* worker function and parameter */ + nm_kthread_worker_fn_t worker_fn; + void *worker_private; + + struct nm_kthread *nmk; + + /* integer to manage multiple worker contexts (e.g., RX or TX on ptnetmap) */ + long type; +}; + +struct nm_kthread { + struct thread *worker; + struct mtx worker_lock; + uint64_t scheduled; /* pending wake_up request */ + struct nm_kthread_ctx worker_ctx; + int run; /* used to stop kthread */ + int attach_user; /* kthread attached to user_process */ + int affinity; +}; + +void inline +nm_os_kthread_wakeup_worker(struct nm_kthread *nmk) +{ + /* + * There may be a race between the FE and BE, which both + * call this function, and the worker kthread, which reads + * nmk->scheduled. + * + * The counter value itself is not important: what matters + * is that it has changed since the last + * time the kthread saw it.
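 *
 * [Editorial worked example] Say the worker last observed
 * scheduled == 5 and is now asleep on ioevent_file. This function takes
 * the lock, bumps scheduled to 6 and calls wakeup(); the worker then
 * reacquires the lock, sees 6 != 5, records 6 as its new baseline and
 * runs worker_fn().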
+ */ + mtx_lock(&nmk->worker_lock); + nmk->scheduled++; + if (nmk->worker_ctx.ioevent_file) { + wakeup(nmk->worker_ctx.ioevent_file); + } + mtx_unlock(&nmk->worker_lock); +} + +void inline +nm_os_kthread_send_irq(struct nm_kthread *nmk) +{ + struct nm_kthread_ctx *ctx = &nmk->worker_ctx; + int err; + + if (ctx->user_td && ctx->irq_fd > 0) { + err = kern_ioctl(ctx->user_td, ctx->irq_fd, ctx->irq_ioctl.com, (caddr_t)&ctx->irq_ioctl.data.msix); + if (err) { + D("kern_ioctl error: %d ioctl parameters: fd %d com %lu data %p", + err, ctx->irq_fd, ctx->irq_ioctl.com, &ctx->irq_ioctl.data); + } + } +} + +static void +nm_kthread_worker(void *data) +{ + struct nm_kthread *nmk = data; + struct nm_kthread_ctx *ctx = &nmk->worker_ctx; + uint64_t old_scheduled = nmk->scheduled; + + if (nmk->affinity >= 0) { + thread_lock(curthread); + sched_bind(curthread, nmk->affinity); + thread_unlock(curthread); + } + + while (nmk->run) { + /* + * check whether the parent process has died + * (when the kthread is attached to a user process) + */ + if (ctx->user_td) { + PROC_LOCK(curproc); + thread_suspend_check(0); + PROC_UNLOCK(curproc); + } else { + kthread_suspend_check(); + } + + /* + * if ioevent_file is not defined, we have no notification + * mechanism, so we continually execute worker_fn() + */ + if (!ctx->ioevent_file) { + ctx->worker_fn(ctx->worker_private); /* worker body */ + } else { + /* check whether there is a pending notification */ + mtx_lock(&nmk->worker_lock); + if (likely(nmk->scheduled != old_scheduled)) { + old_scheduled = nmk->scheduled; + mtx_unlock(&nmk->worker_lock); + + ctx->worker_fn(ctx->worker_private); /* worker body */ + + continue; + } else if (nmk->run) { + /* wait on the event with a one second timeout */ + msleep_spin(ctx->ioevent_file, &nmk->worker_lock, + "nmk_ev", hz); + nmk->scheduled++; + } + mtx_unlock(&nmk->worker_lock); + } + } + + kthread_exit(); +} + +static int +nm_kthread_open_files(struct nm_kthread *nmk, struct nm_kthread_cfg *cfg) +{ + /* send irq through ioctl to bhyve (vmm.ko) */ + if (cfg->event.irqfd) { + nmk->worker_ctx.irq_fd = cfg->event.irqfd; + nmk->worker_ctx.irq_ioctl = cfg->event.ioctl; + } + /* ring.ioeventfd contains the channel on which we tsleep() to wait for events */ + if (cfg->event.ioeventfd) { + nmk->worker_ctx.ioevent_file = (void *)cfg->event.ioeventfd; + } + + return 0; +} + +static void +nm_kthread_close_files(struct nm_kthread *nmk) +{ + nmk->worker_ctx.irq_fd = 0; + nmk->worker_ctx.ioevent_file = NULL; +} + +void +nm_os_kthread_set_affinity(struct nm_kthread *nmk, int affinity) +{ + nmk->affinity = affinity; +} + +struct nm_kthread * +nm_os_kthread_create(struct nm_kthread_cfg *cfg) +{ + struct nm_kthread *nmk = NULL; + int error; + + nmk = malloc(sizeof(*nmk), M_DEVBUF, M_NOWAIT | M_ZERO); + if (!nmk) + return NULL; + + mtx_init(&nmk->worker_lock, "nm_kthread lock", NULL, MTX_SPIN); + nmk->worker_ctx.worker_fn = cfg->worker_fn; + nmk->worker_ctx.worker_private = cfg->worker_private; + nmk->worker_ctx.type = cfg->type; + nmk->affinity = -1; + + /* attach kthread to user process (ptnetmap) */ + nmk->attach_user = cfg->attach_user; + + /* open event fd */ + error = nm_kthread_open_files(nmk, cfg); + if (error) + goto err; + + return nmk; +err: + free(nmk, M_DEVBUF); + return NULL; +} + +int +nm_os_kthread_start(struct nm_kthread *nmk) +{ + struct proc *p = NULL; + int error = 0; + + if (nmk->worker) { + return EBUSY; + } + + /* check if we want to attach the kthread to a user process */ + if (nmk->attach_user) { + nmk->worker_ctx.user_td = curthread; + p = curthread->td_proc; + }
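+ /* Passing p to kthread_add() below ties the worker thread to the + * user process, so the thread_suspend_check() call in + * nm_kthread_worker() can notice when that process exits. */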
+ + /* enable kthread main loop */ + nmk->run = 1; + /* create kthread */ + if ((error = kthread_add(nm_kthread_worker, nmk, p, + &nmk->worker, RFNOWAIT /* to be checked */, 0, "nm-kthread-%ld", + nmk->worker_ctx.type))) { + goto err; + } + + D("nm_kthread started td 0x%p", nmk->worker); + + return 0; +err: + D("nm_kthread start failed err %d", error); + nmk->worker = NULL; + return error; +} + +void +nm_os_kthread_stop(struct nm_kthread *nmk) +{ + if (!nmk->worker) { + return; + } + /* tell the kthread to exit the main loop */ + nmk->run = 0; + + /* wake up the kthread if it is sleeping */ + kthread_resume(nmk->worker); + nm_os_kthread_wakeup_worker(nmk); + + nmk->worker = NULL; +} + +void +nm_os_kthread_delete(struct nm_kthread *nmk) +{ + if (!nmk) + return; + if (nmk->worker) { + nm_os_kthread_stop(nmk); + } + + nm_kthread_close_files(nmk); + + free(nmk, M_DEVBUF); +} + /******************** kqueue support ****************/ /* - * The OS_selwakeup also needs to issue a KNOTE_UNLOCKED. + * nm_os_selwakeup also needs to issue a KNOTE_UNLOCKED. * We use a non-zero argument to distinguish the call from the one * in kevent_scan() which instead also needs to run netmap_poll(). * The knote uses a global mutex for the time being. We might * try to reuse the one in the si, but it is not allocated * permanently so it might be a bit tricky. * * The *kqfilter function registers one or another f_event * depending on read or write mode. * In the call to f_event() td_fpop is NULL so any child function * calling devfs_get_cdevpriv() would fail - and we need it in * netmap_poll(). As a workaround we store priv into kn->kn_hook * and pass it as first argument to netmap_poll(), which then * uses the failure to tell that we are called from f_event() * and do not need the selrecord(). */ void -freebsd_selwakeup(struct nm_selinfo *si, int pri) +nm_os_selwakeup(struct nm_selinfo *si) { if (netmap_verbose) D("on knote %p", &si->si.si_note); - selwakeuppri(&si->si, pri); + selwakeuppri(&si->si, PI_NET); /* use a non-zero hint to tell the notification from the * call done in kqueue_scan() which uses 0 */ KNOTE_UNLOCKED(&si->si.si_note, 0x100 /* notification */); } +void +nm_os_selrecord(struct thread *td, struct nm_selinfo *si) +{ + selrecord(td, &si->si); +} + static void netmap_knrdetach(struct knote *kn) { struct netmap_priv_d *priv = (struct netmap_priv_d *)kn->kn_hook; struct selinfo *si = &priv->np_si[NR_RX]->si; D("remove selinfo %p", si); knlist_remove(&si->si_note, kn, 0); } static void netmap_knwdetach(struct knote *kn) { struct netmap_priv_d *priv = (struct netmap_priv_d *)kn->kn_hook; struct selinfo *si = &priv->np_si[NR_TX]->si; D("remove selinfo %p", si); knlist_remove(&si->si_note, kn, 0); } /* * callback from notifications (generated externally) and from our * calls to kevent(). For the former we just return 1 (ready), * since we do not know better. * In the latter we call netmap_poll and return 0/1 accordingly. */ static int netmap_knrw(struct knote *kn, long hint, int events) { struct netmap_priv_d *priv; int revents; if (hint != 0) { ND(5, "call from notify"); return 1; /* assume we are ready */ } priv = kn->kn_hook; /* the notification may come from an external thread, * in which case we do not want to run netmap_poll. * This should be filtered above, but check just in case.
*/ if (curthread != priv->np_td) { /* should not happen */ RD(5, "curthread changed %p %p", curthread, priv->np_td); return 1; } else { - revents = netmap_poll((void *)priv, events, curthread); + revents = netmap_poll(priv, events, NULL); return (events & revents) ? 1 : 0; } } static int netmap_knread(struct knote *kn, long hint) { return netmap_knrw(kn, hint, POLLIN); } static int netmap_knwrite(struct knote *kn, long hint) { return netmap_knrw(kn, hint, POLLOUT); } static struct filterops netmap_rfiltops = { .f_isfd = 1, .f_detach = netmap_knrdetach, .f_event = netmap_knread, }; static struct filterops netmap_wfiltops = { .f_isfd = 1, .f_detach = netmap_knwdetach, .f_event = netmap_knwrite, }; /* * This is called when a thread invokes kevent() to record * a change in the configuration of the kqueue(). * The 'priv' should be the same as in the netmap device. */ static int netmap_kqfilter(struct cdev *dev, struct knote *kn) { struct netmap_priv_d *priv; int error; struct netmap_adapter *na; struct nm_selinfo *si; int ev = kn->kn_filter; if (ev != EVFILT_READ && ev != EVFILT_WRITE) { D("bad filter request %d", ev); return 1; } error = devfs_get_cdevpriv((void**)&priv); if (error) { D("device not yet setup"); return 1; } na = priv->np_na; if (na == NULL) { D("no netmap adapter for this file descriptor"); return 1; } /* the si is indicated in the priv */ si = priv->np_si[(ev == EVFILT_WRITE) ? NR_TX : NR_RX]; // XXX lock(priv) ? kn->kn_fop = (ev == EVFILT_WRITE) ? &netmap_wfiltops : &netmap_rfiltops; kn->kn_hook = priv; knlist_add(&si->si.si_note, kn, 1); // XXX unlock(priv) ND("register %p %s td %p priv %p kn %p np_nifp %p kn_fp/fpop %s", na, na->ifp->if_xname, curthread, priv, kn, priv->np_nifp, kn->kn_fp == curthread->td_fpop ? "match" : "MISMATCH"); return 0; } +static int +freebsd_netmap_poll(struct cdev *cdevi __unused, int events, struct thread *td) +{ + struct netmap_priv_d *priv; + if (devfs_get_cdevpriv((void **)&priv)) { + return POLLERR; + } + return netmap_poll(priv, events, td); +} + +static int +freebsd_netmap_ioctl(struct cdev *dev __unused, u_long cmd, caddr_t data, + int ffla __unused, struct thread *td) +{ + int error; + struct netmap_priv_d *priv; + + CURVNET_SET(TD_TO_VNET(td)); + error = devfs_get_cdevpriv((void **)&priv); + if (error) { + /* XXX ENOENT should be impossible, since the priv + * is now created in the open */ + if (error == ENOENT) + error = ENXIO; + goto out; + } + error = netmap_ioctl(priv, cmd, data, td); +out: + CURVNET_RESTORE(); + + return error; +} + +extern struct cdevsw netmap_cdevsw; /* XXX used in netmap.c, should go elsewhere */ struct cdevsw netmap_cdevsw = { .d_version = D_VERSION, .d_name = "netmap", .d_open = netmap_open, .d_mmap_single = netmap_mmap_single, - .d_ioctl = netmap_ioctl, - .d_poll = netmap_poll, + .d_ioctl = freebsd_netmap_ioctl, + .d_poll = freebsd_netmap_poll, .d_kqfilter = netmap_kqfilter, .d_close = netmap_close, }; /*--- end of kqueue support ----*/ /* * Kernel entry point. * * Initialize/finalize the module and return. * * Return 0 on success, errno on failure. */ static int netmap_loader(__unused struct module *module, int event, __unused void *arg) { int error = 0; switch (event) { case MOD_LOAD: error = netmap_init(); break; case MOD_UNLOAD: /* * if someone is still using netmap, * then the module cannot be unloaded.
*/ if (netmap_use_count) { D("netmap module can not be unloaded - netmap_use_count: %d", netmap_use_count); error = EBUSY; break; } netmap_fini(); break; default: error = EOPNOTSUPP; break; } return (error); } - +#ifdef DEV_MODULE_ORDERED +/* + * The netmap module contains three drivers: (i) the netmap character device + * driver; (ii) the ptnetmap memdev PCI device driver, (iii) the ptnet PCI + * device driver. The attach() routines of both (ii) and (iii) need the + * lock of the global allocator, and such lock is initialized in netmap_init(), + * which is part of (i). + * Therefore, we make sure that (i) is loaded before (ii) and (iii), using + * the 'order' parameter of driver declaration macros. For (i), we specify + * SI_ORDER_MIDDLE, while higher orders are used with the DRIVER_MODULE_ORDERED + * macros for (ii) and (iii). + */ +DEV_MODULE_ORDERED(netmap, netmap_loader, NULL, SI_ORDER_MIDDLE); +#else /* !DEV_MODULE_ORDERED */ DEV_MODULE(netmap, netmap_loader, NULL); +#endif /* DEV_MODULE_ORDERED */ +MODULE_DEPEND(netmap, pci, 1, 1, 1); MODULE_VERSION(netmap, 1); +/* reduce conditional code */ +// linux API, use for the knlist in FreeBSD +/* use a private mutex for the knlist */ Index: head/sys/dev/netmap/netmap_generic.c =================================================================== --- head/sys/dev/netmap/netmap_generic.c (revision 307393) +++ head/sys/dev/netmap/netmap_generic.c (revision 307394) @@ -1,866 +1,1268 @@ /* - * Copyright (C) 2013-2014 Universita` di Pisa. All rights reserved. + * Copyright (C) 2013-2016 Vincenzo Maffione + * Copyright (C) 2013-2016 Luigi Rizzo + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * This module implements netmap support on top of standard, * unmodified device drivers. * * A NIOCREGIF request is handled here if the device does not * have native support. TX and RX rings are emulated as follows: * * NIOCREGIF * We preallocate a block of TX mbufs (roughly as many as * tx descriptors; the number is not critical) to speed up * operation during transmissions. The refcount on most of * these buffers is artificially bumped up so we can recycle * them more easily. Also, the destructor is intercepted * so we use it as an interrupt notification to wake up * processes blocked on a poll(). 
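 * (The extra reference keeps an mbuf alive after the driver drops * its own, so the same mbuf can be recycled for a later transmission * without a new allocation.)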
* * For each receive ring we allocate one "struct mbq" * (an mbuf tailq plus a spinlock). We intercept packets * (through if_input) * on the receive path and put them in the mbq from which * netmap receive routines can grab them. * * TX: * in the generic_txsync() routine, netmap buffers are copied * (or linked, in the future) to the preallocated mbufs * and pushed to the transmit queue. Some of these mbufs * (those with NS_REPORT, or otherwise every half ring) * have the refcount=1, others have refcount=2. * When the destructor is invoked, we take that as * a notification that all mbufs up to that one in * the specific ring have been completed, and generate * the equivalent of a transmit interrupt. * * RX: * */ #ifdef __FreeBSD__ #include <sys/cdefs.h> /* prerequisite */ __FBSDID("$FreeBSD$"); #include <sys/types.h> #include <sys/errno.h> #include <sys/malloc.h> #include <sys/lock.h> /* PROT_EXEC */ #include <sys/rwlock.h> #include <sys/socket.h> /* sockaddrs */ #include <sys/selinfo.h> #include <net/if.h> #include <net/if_var.h> #include <machine/bus.h> /* bus_dmamap_* in netmap_kern.h */ // XXX temporary - D() defined here #include <net/netmap.h> #include <dev/netmap/netmap_kern.h> #include <dev/netmap/netmap_mem2.h> #define rtnl_lock() ND("rtnl_lock called") #define rtnl_unlock() ND("rtnl_unlock called") -#define MBUF_TXQ(m) ((m)->m_pkthdr.flowid) #define MBUF_RXQ(m) ((m)->m_pkthdr.flowid) #define smp_mb() /* * FreeBSD mbuf allocator/deallocator in emulation mode: - * + */ +#if __FreeBSD_version < 1100000 + +/* + * For older versions of FreeBSD: + * * We allocate EXT_PACKET mbuf+clusters, but need to set M_NOFREE * so that the destructor, if invoked, will not free the packet. * In principle we should set the destructor only on demand, * but since there might be a race we better do it on allocation. * As a consequence, we also need to set the destructor or we * would leak buffers. */ -/* - * mbuf wrappers - */ - /* mbuf destructor, also need to change the type to EXT_EXTREF, * add an M_NOFREE flag, and then clear the flag and * chain into uma_zfree(zone_pack, mf) * (or reinstall the buffer ?) */ #define SET_MBUF_DESTRUCTOR(m, fn) do { \ (m)->m_ext.ext_free = (void *)fn; \ (m)->m_ext.ext_type = EXT_EXTREF; \ } while (0) -static void -netmap_default_mbuf_destructor(struct mbuf *m) +static int +void_mbuf_dtor(struct mbuf *m, void *arg1, void *arg2) { /* restore original mbuf */ m->m_ext.ext_buf = m->m_data = m->m_ext.ext_arg1; m->m_ext.ext_arg1 = NULL; m->m_ext.ext_type = EXT_PACKET; m->m_ext.ext_free = NULL; - if (GET_MBUF_REFCNT(m) == 0) + if (MBUF_REFCNT(m) == 0) SET_MBUF_REFCNT(m, 1); uma_zfree(zone_pack, m); + + return 0; } static inline struct mbuf * -netmap_get_mbuf(int len) +nm_os_get_mbuf(struct ifnet *ifp, int len) { struct mbuf *m; + + (void)ifp; m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (m) { - m->m_flags |= M_NOFREE; /* XXXNP: Almost certainly incorrect. */ + /* m_getcl() (mb_ctor_mbuf) has an assert that checks that + * the M_NOFREE flag is not specified as the third argument, + * so we have to set M_NOFREE after m_getcl(). */ + m->m_flags |= M_NOFREE; m->m_ext.ext_arg1 = m->m_ext.ext_buf; // XXX save - m->m_ext.ext_free = (void *)netmap_default_mbuf_destructor; + m->m_ext.ext_free = (void *)void_mbuf_dtor; m->m_ext.ext_type = EXT_EXTREF; - ND(5, "create m %p refcnt %d", m, GET_MBUF_REFCNT(m)); + ND(5, "create m %p refcnt %d", m, MBUF_REFCNT(m)); } return m; } +#else /* __FreeBSD_version >= 1100000 */ +/* + * Newer versions of FreeBSD, using a straightforward scheme. + * + * We allocate mbufs with m_gethdr(), since the mbuf header is needed + * by the driver.
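+ * (m_gethdr() returns a header-only mbuf, with no data storage of + * its own.)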
We also attach a custom external storage, + * which in this case is a netmap buffer. When calling m_extadd(), however + * we pass a NULL address, since the real address (and length) will be + * filled in by nm_os_generic_xmit_frame() right before calling + * if_transmit(). + * + * The dtor function does nothing; however, we need it since mb_free_ext() + * has a KASSERT() checking that the mbuf dtor function is not NULL. + */ +#define SET_MBUF_DESTRUCTOR(m, fn) do { \ + (m)->m_ext.ext_free = (void *)fn; \ +} while (0) + +static void void_mbuf_dtor(struct mbuf *m, void *arg1, void *arg2) { } + +static inline struct mbuf * +nm_os_get_mbuf(struct ifnet *ifp, int len) +{ + struct mbuf *m; + + (void)ifp; + (void)len; + + m = m_gethdr(M_NOWAIT, MT_DATA); + if (m == NULL) { + return m; + } + + m_extadd(m, NULL /* buf */, 0 /* size */, void_mbuf_dtor, + NULL, NULL, 0, EXT_NET_DRV); + + return m; +} + +#endif /* __FreeBSD_version >= 1100000 */ + +#elif defined _WIN32 + +#include "win_glue.h" + +#define rtnl_lock() ND("rtnl_lock called") +#define rtnl_unlock() ND("rtnl_unlock called") +#define MBUF_TXQ(m) 0//((m)->m_pkthdr.flowid) +#define MBUF_RXQ(m) 0//((m)->m_pkthdr.flowid) +#define smp_mb() //XXX: to be correctly defined + #else /* linux */ #include "bsd_glue.h" #include <linux/rtnetlink.h> /* rtnl_[un]lock() */ #include <linux/ethtool.h> /* struct ethtool_ops, get_ringparam */ #include <linux/hrtimer.h> -//#define REG_RESET +static inline struct mbuf * +nm_os_get_mbuf(struct ifnet *ifp, int len) +{ + return alloc_skb(ifp->needed_headroom + len + + ifp->needed_tailroom, GFP_ATOMIC); +} #endif /* linux */ /* Common headers. */ #include <net/netmap.h> #include <dev/netmap/netmap_kern.h> #include <dev/netmap/netmap_mem2.h> +#define for_each_kring_n(_i, _k, _karr, _n) \ + for (_k=_karr, _i = 0; _i < _n; (_k)++, (_i)++) -/* ======================== usage stats =========================== */ +#define for_each_tx_kring(_i, _k, _na) \ + for_each_kring_n(_i, _k, (_na)->tx_rings, (_na)->num_tx_rings) +#define for_each_tx_kring_h(_i, _k, _na) \ + for_each_kring_n(_i, _k, (_na)->tx_rings, (_na)->num_tx_rings + 1) +#define for_each_rx_kring(_i, _k, _na) \ + for_each_kring_n(_i, _k, (_na)->rx_rings, (_na)->num_rx_rings) +#define for_each_rx_kring_h(_i, _k, _na) \ + for_each_kring_n(_i, _k, (_na)->rx_rings, (_na)->num_rx_rings + 1) + + +/* ======================== PERFORMANCE STATISTICS =========================== */ + #ifdef RATE_GENERIC #define IFRATE(x) x struct rate_stats { unsigned long txpkt; unsigned long txsync; unsigned long txirq; + unsigned long txrepl; + unsigned long txdrop; unsigned long rxpkt; unsigned long rxirq; unsigned long rxsync; }; struct rate_context { unsigned refcount; struct timer_list timer; struct rate_stats new; struct rate_stats old; }; #define RATE_PRINTK(_NAME_) \ printk( #_NAME_ " = %lu Hz\n", (cur._NAME_ - ctx->old._NAME_)/RATE_PERIOD); #define RATE_PERIOD 2 static void rate_callback(unsigned long arg) { struct rate_context * ctx = (struct rate_context *)arg; struct rate_stats cur = ctx->new; int r; RATE_PRINTK(txpkt); RATE_PRINTK(txsync); RATE_PRINTK(txirq); + RATE_PRINTK(txrepl); + RATE_PRINTK(txdrop); RATE_PRINTK(rxpkt); RATE_PRINTK(rxsync); RATE_PRINTK(rxirq); printk("\n"); ctx->old = cur; r = mod_timer(&ctx->timer, jiffies + msecs_to_jiffies(RATE_PERIOD * 1000)); if (unlikely(r)) D("[v1000] Error: mod_timer()"); } static struct rate_context rate_ctx; void generic_rate(int txp, int txs, int txi, int rxp, int rxs, int rxi) { if (txp) rate_ctx.new.txpkt++; if (txs) rate_ctx.new.txsync++; if (txi) rate_ctx.new.txirq++; if (rxp) rate_ctx.new.rxpkt++; if (rxs) rate_ctx.new.rxsync++; if
(rxi) rate_ctx.new.rxirq++; } #else /* !RATE */ #define IFRATE(x) #endif /* !RATE */ /* =============== GENERIC NETMAP ADAPTER SUPPORT ================= */ /* * Wrapper used by the generic adapter layer to notify * the poller threads. Unlike netmap_rx_irq(), we check * only NAF_NETMAP_ON instead of NAF_NATIVE_ON to enable the irq. */ -static void -netmap_generic_irq(struct ifnet *ifp, u_int q, u_int *work_done) +void +netmap_generic_irq(struct netmap_adapter *na, u_int q, u_int *work_done) { - struct netmap_adapter *na = NA(ifp); if (unlikely(!nm_netmap_on(na))) return; - netmap_common_irq(ifp, q, work_done); + netmap_common_irq(na, q, work_done); +#ifdef RATE_GENERIC + if (work_done) + rate_ctx.new.rxirq++; + else + rate_ctx.new.txirq++; +#endif /* RATE_GENERIC */ } +static int +generic_netmap_unregister(struct netmap_adapter *na) +{ + struct netmap_generic_adapter *gna = (struct netmap_generic_adapter *)na; + struct netmap_kring *kring = NULL; + int i, r; + if (na->active_fds == 0) { + D("Generic adapter %p goes off", na); + rtnl_lock(); + + na->na_flags &= ~NAF_NETMAP_ON; + + /* Release packet steering control. */ + nm_os_catch_tx(gna, 0); + + /* Stop intercepting packets on the RX path. */ + nm_os_catch_rx(gna, 0); + + rtnl_unlock(); + } + + for_each_rx_kring_h(r, kring, na) { + if (nm_kring_pending_off(kring)) { + D("RX ring %d of generic adapter %p goes off", r, na); + kring->nr_mode = NKR_NETMAP_OFF; + } + } + for_each_tx_kring_h(r, kring, na) { + if (nm_kring_pending_off(kring)) { + kring->nr_mode = NKR_NETMAP_OFF; + D("TX ring %d of generic adapter %p goes off", r, na); + } + } + + for_each_rx_kring(r, kring, na) { + /* Free the mbufs still pending in the RX queues + * that did not end up in the corresponding netmap + * RX rings. */ + mbq_safe_purge(&kring->rx_queue); + nm_os_mitigation_cleanup(&gna->mit[r]); + } + + /* Decrement the reference counter for the mbufs in the + * TX pools. These mbufs can still be pending in drivers + * (e.g. this happens with the virtio-net driver, which + * does lazy reclaiming of transmitted mbufs). */ + for_each_tx_kring(r, kring, na) { + /* We must remove the destructor on the TX event, + * because the destructor invokes netmap code, and + * the netmap module may disappear before the + * TX event is consumed. */ + mtx_lock_spin(&kring->tx_event_lock); + if (kring->tx_event) { + SET_MBUF_DESTRUCTOR(kring->tx_event, NULL); + } + kring->tx_event = NULL; + mtx_unlock_spin(&kring->tx_event_lock); + } + + if (na->active_fds == 0) { + free(gna->mit, M_DEVBUF); + + for_each_rx_kring(r, kring, na) { + mbq_safe_fini(&kring->rx_queue); + } + + for_each_tx_kring(r, kring, na) { + mtx_destroy(&kring->tx_event_lock); + if (kring->tx_pool == NULL) { + continue; + } + + for (i=0; i<na->num_tx_desc; i++) { + if (kring->tx_pool[i]) { + m_freem(kring->tx_pool[i]); + } + } + free(kring->tx_pool, M_DEVBUF); + kring->tx_pool = NULL; + } + +#ifdef RATE_GENERIC + if (--rate_ctx.refcount == 0) { + D("del_timer()"); + del_timer(&rate_ctx.timer); + } +#endif + } + + return 0; +} + /* Enable/disable netmap mode for a generic network interface. */ static int generic_netmap_register(struct netmap_adapter *na, int enable) { struct netmap_generic_adapter *gna = (struct netmap_generic_adapter *)na; - struct mbuf *m; + struct netmap_kring *kring = NULL; int error; int i, r; - if (!na) + if (!na) { return EINVAL; + } -#ifdef REG_RESET - error = ifp->netdev_ops->ndo_stop(ifp); - if (error) { - return error; + if (!enable) { + /* This is actually an unregif.
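+ * A single nm_register callback handles both directions: + * enable == 0 selects the teardown path.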
*/ + return generic_netmap_unregister(na); } -#endif /* REG_RESET */ - if (enable) { /* Enable netmap mode. */ - /* Init the mitigation support on all the rx queues. */ + if (na->active_fds == 0) { + D("Generic adapter %p goes on", na); + /* Do all memory allocations when (na->active_fds == 0), to + * simplify error management. */ + + /* Allocate memory for mitigation support on all the rx queues. */ gna->mit = malloc(na->num_rx_rings * sizeof(struct nm_generic_mit), - M_DEVBUF, M_NOWAIT | M_ZERO); + M_DEVBUF, M_NOWAIT | M_ZERO); if (!gna->mit) { D("mitigation allocation failed"); error = ENOMEM; goto out; } - for (r=0; r<na->num_rx_rings; r++) - netmap_mitigation_init(&gna->mit[r], r, na); - /* Initialize the rx queue, as generic_rx_handler() can - * be called as soon as netmap_catch_rx() returns. - */ - for (r=0; r<na->num_rx_rings; r++) { - mbq_safe_init(&na->rx_rings[r].rx_queue); + for_each_rx_kring(r, kring, na) { + /* Init mitigation support. */ + nm_os_mitigation_init(&gna->mit[r], r, na); + + /* Initialize the rx queue, as generic_rx_handler() can + * be called as soon as nm_os_catch_rx() returns. + */ + mbq_safe_init(&kring->rx_queue); } /* - * Preallocate packet buffers for the tx rings. + * Prepare mbuf pools (parallel to the tx rings), for packet + * transmission. Don't preallocate the mbufs here; it's simpler + * to leave this task to txsync. */ - for (r=0; r<na->num_tx_rings; r++) - na->tx_rings[r].tx_pool = NULL; - for (r=0; r<na->num_tx_rings; r++) { - na->tx_rings[r].tx_pool = malloc(na->num_tx_desc * sizeof(struct mbuf *), - M_DEVBUF, M_NOWAIT | M_ZERO); - if (!na->tx_rings[r].tx_pool) { + for_each_tx_kring(r, kring, na) { + kring->tx_pool = NULL; + } + for_each_tx_kring(r, kring, na) { + kring->tx_pool = + malloc(na->num_tx_desc * sizeof(struct mbuf *), + M_DEVBUF, M_NOWAIT | M_ZERO); + if (!kring->tx_pool) { D("tx_pool allocation failed"); error = ENOMEM; goto free_tx_pools; } - for (i=0; i<na->num_tx_desc; i++) - na->tx_rings[r].tx_pool[i] = NULL; - for (i=0; i<na->num_tx_desc; i++) { - m = netmap_get_mbuf(NETMAP_BUF_SIZE(na)); - if (!m) { - D("tx_pool[%d] allocation failed", i); - error = ENOMEM; - goto free_tx_pools; - } - na->tx_rings[r].tx_pool[i] = m; - } + mtx_init(&kring->tx_event_lock, "tx_event_lock", + NULL, MTX_SPIN); } + } + + for_each_rx_kring_h(r, kring, na) { + if (nm_kring_pending_on(kring)) { + D("RX ring %d of generic adapter %p goes on", r, na); + kring->nr_mode = NKR_NETMAP_ON; + } + + } + for_each_tx_kring_h(r, kring, na) { + if (nm_kring_pending_on(kring)) { + D("TX ring %d of generic adapter %p goes on", r, na); + kring->nr_mode = NKR_NETMAP_ON; + } + } + + for_each_tx_kring(r, kring, na) { + /* Initialize tx_pool and tx_event. */ + for (i=0; i<na->num_tx_desc; i++) { + kring->tx_pool[i] = NULL; + } + + kring->tx_event = NULL; + } + + if (na->active_fds == 0) { rtnl_lock(); + /* Prepare to intercept incoming traffic. */ - error = netmap_catch_rx(gna, 1); + error = nm_os_catch_rx(gna, 1); if (error) { - D("netdev_rx_handler_register() failed (%d)", error); + D("nm_os_catch_rx(1) failed (%d)", error); goto register_handler; } - na->na_flags |= NAF_NETMAP_ON; /* Make netmap control the packet steering.
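 * (The interface transmit routine is intercepted here, the TX-side * counterpart of the RX hook installed just above.)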
*/ - netmap_catch_tx(gna, 1); + error = nm_os_catch_tx(gna, 1); + if (error) { + D("nm_os_catch_tx(1) failed (%d)", error); + goto catch_rx; + } rtnl_unlock(); + na->na_flags |= NAF_NETMAP_ON; + #ifdef RATE_GENERIC if (rate_ctx.refcount == 0) { D("setup_timer()"); memset(&rate_ctx, 0, sizeof(rate_ctx)); setup_timer(&rate_ctx.timer, &rate_callback, (unsigned long)&rate_ctx); if (mod_timer(&rate_ctx.timer, jiffies + msecs_to_jiffies(1500))) { D("Error: mod_timer()"); } } rate_ctx.refcount++; #endif /* RATE */ - - } else if (na->tx_rings[0].tx_pool) { - /* Disable netmap mode. We enter here only if the previous - generic_netmap_register(na, 1) was successful. - If it was not, na->tx_rings[0].tx_pool was set to NULL by the - error handling code below. */ - rtnl_lock(); - - na->na_flags &= ~NAF_NETMAP_ON; - - /* Release packet steering control. */ - netmap_catch_tx(gna, 0); - - /* Do not intercept packets on the rx path. */ - netmap_catch_rx(gna, 0); - - rtnl_unlock(); - - /* Free the mbufs going to the netmap rings */ - for (r=0; r<na->num_rx_rings; r++) { - mbq_safe_purge(&na->rx_rings[r].rx_queue); - mbq_safe_destroy(&na->rx_rings[r].rx_queue); - } - - for (r=0; r<na->num_rx_rings; r++) - netmap_mitigation_cleanup(&gna->mit[r]); - free(gna->mit, M_DEVBUF); - - for (r=0; r<na->num_tx_rings; r++) { - for (i=0; i<na->num_tx_desc; i++) { - m_freem(na->tx_rings[r].tx_pool[i]); - } - free(na->tx_rings[r].tx_pool, M_DEVBUF); - } - -#ifdef RATE_GENERIC - if (--rate_ctx.refcount == 0) { - D("del_timer()"); - del_timer(&rate_ctx.timer); - } -#endif } -#ifdef REG_RESET - error = ifp->netdev_ops->ndo_open(ifp); - if (error) { - goto free_tx_pools; - } -#endif - return 0; + /* Here (na->active_fds == 0) holds. */ +catch_rx: + nm_os_catch_rx(gna, 0); register_handler: rtnl_unlock(); free_tx_pools: - for (r=0; r<na->num_tx_rings; r++) { - if (na->tx_rings[r].tx_pool == NULL) + for_each_tx_kring(r, kring, na) { + mtx_destroy(&kring->tx_event_lock); + if (kring->tx_pool == NULL) { continue; - for (i=0; i<na->num_tx_desc; i++) - if (na->tx_rings[r].tx_pool[i]) - m_freem(na->tx_rings[r].tx_pool[i]); - free(na->tx_rings[r].tx_pool, M_DEVBUF); - na->tx_rings[r].tx_pool = NULL; + } + free(kring->tx_pool, M_DEVBUF); + kring->tx_pool = NULL; } - for (r=0; r<na->num_rx_rings; r++) { - netmap_mitigation_cleanup(&gna->mit[r]); - mbq_safe_destroy(&na->rx_rings[r].rx_queue); + for_each_rx_kring(r, kring, na) { + mbq_safe_fini(&kring->rx_queue); } free(gna->mit, M_DEVBUF); out: return error; } /* * Callback invoked when the device driver frees an mbuf used * by netmap to transmit a packet. This usually happens when * the NIC notifies the driver that transmission is completed. */ static void generic_mbuf_destructor(struct mbuf *m) { - netmap_generic_irq(MBUF_IFP(m), MBUF_TXQ(m), NULL); + struct netmap_adapter *na = NA(GEN_TX_MBUF_IFP(m)); + struct netmap_kring *kring; + unsigned int r = MBUF_TXQ(m); + unsigned int r_orig = r; + + if (unlikely(!nm_netmap_on(na) || r >= na->num_tx_rings)) { + D("Error: no netmap adapter on device %p", + GEN_TX_MBUF_IFP(m)); + return; + } + + /* + * First, clear the event mbuf. + * In principle, the event 'm' should match the one stored + * on ring 'r'. However we check it explicitly to stay + * safe against lower layers (qdisc, driver, etc.) changing + * MBUF_TXQ(m) under our feet. If the match is not found + * on 'r', we try to see if it belongs to some other ring.
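+ * A full sweep that finds no match simply gives up and returns + * without generating any notification.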
+ */ + for (;;) { + bool match = false; + + kring = &na->tx_rings[r]; + mtx_lock_spin(&kring->tx_event_lock); + if (kring->tx_event == m) { + kring->tx_event = NULL; + match = true; + } + mtx_unlock_spin(&kring->tx_event_lock); + + if (match) { + if (r != r_orig) { + RD(1, "event %p migrated: ring %u --> %u", + m, r_orig, r); + } + break; + } + + if (++r == na->num_tx_rings) r = 0; + + if (r == r_orig) { + RD(1, "Cannot match event %p", m); + return; + } + } + + /* Second, wake up clients. They will reclaim the event through + * txsync. */ + netmap_generic_irq(na, r, NULL); #ifdef __FreeBSD__ - if (netmap_verbose) - RD(5, "Tx irq (%p) queue %d index %d" , m, MBUF_TXQ(m), (int)(uintptr_t)m->m_ext.ext_arg1); - netmap_default_mbuf_destructor(m); -#endif /* __FreeBSD__ */ - IFRATE(rate_ctx.new.txirq++); + void_mbuf_dtor(m, NULL, NULL); +#endif } extern int netmap_adaptive_io; /* Record completed transmissions and update hwtail. * * The oldest tx buffer not yet completed is at nr_hwtail + 1, * nr_hwcur is the first unsent buffer. */ static u_int -generic_netmap_tx_clean(struct netmap_kring *kring) +generic_netmap_tx_clean(struct netmap_kring *kring, int txqdisc) { u_int const lim = kring->nkr_num_slots - 1; u_int nm_i = nm_next(kring->nr_hwtail, lim); u_int hwcur = kring->nr_hwcur; u_int n = 0; struct mbuf **tx_pool = kring->tx_pool; + ND("hwcur = %d, hwtail = %d", kring->nr_hwcur, kring->nr_hwtail); + while (nm_i != hwcur) { /* buffers not completed */ struct mbuf *m = tx_pool[nm_i]; - if (unlikely(m == NULL)) { - /* this is done, try to replenish the entry */ - tx_pool[nm_i] = m = netmap_get_mbuf(NETMAP_BUF_SIZE(kring->na)); + if (txqdisc) { + if (m == NULL) { + /* Nothing to do, this is going + * to be replenished. */ + RD(3, "Is this happening?"); + + } else if (MBUF_QUEUED(m)) { + break; /* Not dequeued yet. */ + + } else if (MBUF_REFCNT(m) != 1) { + /* This mbuf has been dequeued but is still busy + * (refcount is 2). + * Leave it to the driver and replenish. */ + m_freem(m); + tx_pool[nm_i] = NULL; + } + + } else { if (unlikely(m == NULL)) { - D("mbuf allocation failed, XXX error"); - // XXX how do we proceed ? break ? - return -ENOMEM; + int event_consumed; + + /* This slot was used to place an event. */ + mtx_lock_spin(&kring->tx_event_lock); + event_consumed = (kring->tx_event == NULL); + mtx_unlock_spin(&kring->tx_event_lock); + if (!event_consumed) { + /* The event has not been consumed yet, + * still busy in the driver. */ + break; + } + /* The event has been consumed, we can go + * ahead. */ + + } else if (MBUF_REFCNT(m) != 1) { + /* This mbuf is still busy: its refcnt is 2. */ + break; } - } else if (GET_MBUF_REFCNT(m) != 1) { - break; /* This mbuf is still busy: its refcnt is 2. */ } + n++; nm_i = nm_next(nm_i, lim); #if 0 /* rate adaptation */ if (netmap_adaptive_io > 1) { if (n >= netmap_adaptive_io) break; } else if (netmap_adaptive_io) { /* if hwcur - nm_i < lim/8 do an early break * so we prevent the sender from stalling. See CVT. */ if (hwcur >= nm_i) { if (hwcur - nm_i < lim/2) break; } else { if (hwcur + lim + 1 - nm_i < lim/2) break; } } #endif } kring->nr_hwtail = nm_prev(nm_i, lim); ND("tx completed [%d] -> hwtail %d", n, kring->nr_hwtail); return n; } - -/* - * We have pending packets in the driver between nr_hwtail +1 and hwcur. - * Compute a position in the middle, to be used to generate - * a notification. - */ +/* Compute a slot index in the middle between inf and sup. 
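+ * The interval may wrap around the ring, in which case the ring + * size is added before halving; e.g. with lim = 7, inf = 6 and + * sup = 2 the slots involved are 6,7,0,1,2 and the middle is 0.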
*/ static inline u_int -generic_tx_event_middle(struct netmap_kring *kring, u_int hwcur) +ring_middle(u_int inf, u_int sup, u_int lim) { - u_int n = kring->nkr_num_slots; - u_int ntc = nm_next(kring->nr_hwtail, n-1); + u_int n = lim + 1; u_int e; - if (hwcur >= ntc) { - e = (hwcur + ntc) / 2; + if (sup >= inf) { + e = (sup + inf) / 2; } else { /* wrap around */ - e = (hwcur + n + ntc) / 2; + e = (sup + n + inf) / 2; if (e >= n) { e -= n; } } if (unlikely(e >= n)) { D("This cannot happen"); e = 0; } return e; } -/* - * We have pending packets in the driver between nr_hwtail+1 and hwcur. - * Schedule a notification approximately in the middle of the two. - * There is a race but this is only called within txsync which does - * a double check. - */ static void generic_set_tx_event(struct netmap_kring *kring, u_int hwcur) { + u_int lim = kring->nkr_num_slots - 1; struct mbuf *m; u_int e; + u_int ntc = nm_next(kring->nr_hwtail, lim); /* next to clean */ - if (nm_next(kring->nr_hwtail, kring->nkr_num_slots -1) == hwcur) { + if (ntc == hwcur) { return; /* all buffers are free */ } - e = generic_tx_event_middle(kring, hwcur); + /* + * We have pending packets in the driver between hwtail+1 + * and hwcur, and we have to choose one of these slots to + * generate a notification. + * There is a race but this is only called within txsync which + * does a double check. + */ +#if 0 + /* Choose a slot in the middle, so that we don't risk ending + * up in a situation where the client continuously wakes up, + * fills one or a few TX slots and goes to sleep again. */ + e = ring_middle(ntc, hwcur, lim); +#else + /* Choose the first pending slot, to be safe against driver + * reordering mbuf transmissions. */ + e = ntc; +#endif + m = kring->tx_pool[e]; - ND(5, "Request Event at %d mbuf %p refcnt %d", e, m, m ? GET_MBUF_REFCNT(m) : -2 ); if (m == NULL) { - /* This can happen if there is already an event on the netmap - slot 'e': There is nothing to do. */ + /* An event is already in place. */ return; } - kring->tx_pool[e] = NULL; + + mtx_lock_spin(&kring->tx_event_lock); + if (kring->tx_event) { + /* An event is already in place. */ + mtx_unlock_spin(&kring->tx_event_lock); + return; + } + SET_MBUF_DESTRUCTOR(m, generic_mbuf_destructor); + kring->tx_event = m; + mtx_unlock_spin(&kring->tx_event_lock); - // XXX wmb() ? - /* Decrement the refcount an free it if we have the last one. */ + kring->tx_pool[e] = NULL; + + ND(5, "Request Event at %d mbuf %p refcnt %d", e, m, m ? MBUF_REFCNT(m) : -2 ); + + /* Decrement the refcount. This will free it if we lose the race + * with the driver. */ m_freem(m); smp_mb(); } /* * generic_netmap_txsync() transforms netmap buffers into mbufs * and passes them to the standard device driver * (ndo_start_xmit() or ifp->if_transmit() ). * On linux this is not done directly, but using dev_queue_xmit(), * since it implements the TX flow control (and takes some locks). */ static int generic_netmap_txsync(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; + struct netmap_generic_adapter *gna = (struct netmap_generic_adapter *)na; struct ifnet *ifp = na->ifp; struct netmap_ring *ring = kring->ring; u_int nm_i; /* index into the netmap ring */ // j u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; u_int ring_nr = kring->ring_id; IFRATE(rate_ctx.new.txsync++); - // TODO: handle the case of mbuf allocation failure - rmb(); /* * First part: process new packets to send.
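 * Slots from nr_hwcur up to rhead have been filled by the * application but not yet passed to the device.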
*/ nm_i = kring->nr_hwcur; if (nm_i != head) { /* we have new packets to send */ + struct nm_os_gen_arg a; + u_int event = -1; + + if (gna->txqdisc && nm_kr_txempty(kring)) { + /* In txqdisc mode, we ask for a delayed notification, + * but only when cur == hwtail, which means that the + * client is going to block. */ + event = ring_middle(nm_i, head, lim); + ND(3, "Place txqdisc event (hwcur=%u,event=%u," + "head=%u,hwtail=%u)", nm_i, event, head, + kring->nr_hwtail); + } + + a.ifp = ifp; + a.ring_nr = ring_nr; + a.head = a.tail = NULL; + while (nm_i != head) { struct netmap_slot *slot = &ring->slot[nm_i]; u_int len = slot->len; void *addr = NMB(na, slot); - /* device-specific */ struct mbuf *m; int tx_ret; NM_CHECK_ADDR_LEN(na, addr, len); - /* Tale a mbuf from the tx pool and copy in the user packet. */ + /* Take an mbuf from the tx pool (replenishing the pool + * entry if necessary) and copy in the user packet. */ m = kring->tx_pool[nm_i]; - if (unlikely(!m)) { - RD(5, "This should never happen"); - kring->tx_pool[nm_i] = m = netmap_get_mbuf(NETMAP_BUF_SIZE(na)); - if (unlikely(m == NULL)) { - D("mbuf allocation failed"); + if (unlikely(m == NULL)) { + kring->tx_pool[nm_i] = m = + nm_os_get_mbuf(ifp, NETMAP_BUF_SIZE(na)); + if (m == NULL) { + RD(2, "Failed to replenish mbuf"); + /* Here we could schedule a timer which + * retries to replenish after a while, + * and notifies the client when it + * manages to replenish some slots. In + * any case we break early to avoid + * crashes. */ break; } + IFRATE(rate_ctx.new.txrepl++); } - /* XXX we should ask notifications when NS_REPORT is set, - * or roughly every half frame. We can optimize this - * by lazily requesting notifications only when a - * transmission fails. Probably the best way is to - * break on failures and set notifications when - * ring->cur == ring->tail || nm_i != cur + + a.m = m; + a.addr = addr; + a.len = len; + a.qevent = (nm_i == event); + /* When not in txqdisc mode, we should ask for + * notifications when NS_REPORT is set, or roughly + * every half ring. To optimize this, we set a + * notification event when the client runs out of + * TX ring space, or when transmission fails. In + * the latter case we also break early. */ - tx_ret = generic_xmit_frame(ifp, m, addr, len, ring_nr); + tx_ret = nm_os_generic_xmit_frame(&a); if (unlikely(tx_ret)) { - ND(5, "start_xmit failed: err %d [nm_i %u, head %u, hwtail %u]", - tx_ret, nm_i, head, kring->nr_hwtail); - /* - * No room for this mbuf in the device driver. - * Request a notification FOR A PREVIOUS MBUF, - * then call generic_netmap_tx_clean(kring) to do the - * double check and see if we can free more buffers. - * If there is space continue, else break; - * NOTE: the double check is necessary if the problem - * occurs in the txsync call after selrecord(). - * Also, we need some way to tell the caller that not - * all buffers were queued onto the device (this was - * not a problem with native netmap driver where space - * is preallocated). The bridge has a similar problem - * and we solve it there by dropping the excess packets. - */ - generic_set_tx_event(kring, nm_i); - if (generic_netmap_tx_clean(kring)) { /* space now available */ - continue; - } else { - break; + if (!gna->txqdisc) { + /* + * No room for this mbuf in the device driver. + * Request a notification FOR A PREVIOUS MBUF, + * then call generic_netmap_tx_clean(kring) to do the + * double check and see if we can free more buffers.
+ * If there is space continue, else break; + * NOTE: the double check is necessary if the problem + * occurs in the txsync call after selrecord(). + * Also, we need some way to tell the caller that not + * all buffers were queued onto the device (this was + * not a problem with native netmap driver where space + * is preallocated). The bridge has a similar problem + * and we solve it there by dropping the excess packets. + */ + generic_set_tx_event(kring, nm_i); + if (generic_netmap_tx_clean(kring, gna->txqdisc)) { + /* space now available */ + continue; + } else { + break; + } } + + /* In txqdisc mode, the netmap-aware qdisc + * queue has the same length as the number of + * netmap slots (N). Since tail is advanced + * only when packets are dequeued, qdisc + * queue overrun cannot happen, so + * nm_os_generic_xmit_frame() did not fail + * because of that. + * However, packets can be dropped because + * carrier is off, or because our qdisc is + * being deactivated, or possibly for other + * reasons. In these cases, we just let the + * packet be dropped. */ + IFRATE(rate_ctx.new.txdrop++); } + slot->flags &= ~(NS_REPORT | NS_BUF_CHANGED); nm_i = nm_next(nm_i, lim); - IFRATE(rate_ctx.new.txpkt ++); + IFRATE(rate_ctx.new.txpkt++); } - - /* Update hwcur to the next slot to transmit. */ - kring->nr_hwcur = nm_i; /* not head, we could break early */ + if (a.head != NULL) { + a.addr = NULL; + nm_os_generic_xmit_frame(&a); + } + /* Update hwcur to the next slot to transmit. Here nm_i + * is not necessarily head, we could break early. */ + kring->nr_hwcur = nm_i; } /* * Second, reclaim completed buffers */ - if (flags & NAF_FORCE_RECLAIM || nm_kr_txempty(kring)) { + if (!gna->txqdisc && (flags & NAF_FORCE_RECLAIM || nm_kr_txempty(kring))) { /* No more available slots? Set a notification event * on a netmap slot that will be cleaned in the future. * No doublecheck is performed, since txsync() will be * called twice by netmap_poll(). */ generic_set_tx_event(kring, nm_i); } - ND("tx #%d, hwtail = %d", n, kring->nr_hwtail); - generic_netmap_tx_clean(kring); + generic_netmap_tx_clean(kring, gna->txqdisc); return 0; } /* - * This handler is registered (through netmap_catch_rx()) + * This handler is registered (through nm_os_catch_rx()) * within the attached network interface * in the RX subsystem, so that every mbuf passed up by * the driver can be stolen before it reaches the network stack. * Stolen packets are put in a queue where the * generic_netmap_rxsync() callback can extract them. + * Returns 1 if the packet was stolen, 0 otherwise. */ -void +int generic_rx_handler(struct ifnet *ifp, struct mbuf *m) { struct netmap_adapter *na = NA(ifp); struct netmap_generic_adapter *gna = (struct netmap_generic_adapter *)na; + struct netmap_kring *kring; u_int work_done; - u_int rr = MBUF_RXQ(m); // receive ring number + u_int r = MBUF_RXQ(m); /* receive ring number */ - if (rr >= na->num_rx_rings) { - rr = rr % na->num_rx_rings; // XXX expensive... + if (r >= na->num_rx_rings) { + r = r % na->num_rx_rings; } + kring = &na->rx_rings[r]; + + if (kring->nr_mode == NKR_NETMAP_OFF) { + /* We must not intercept this mbuf. */ + return 0; + } + /* limit the size of the queue */ - if (unlikely(mbq_len(&na->rx_rings[rr].rx_queue) > 1024)) { + if (unlikely(!gna->rxsg && MBUF_LEN(m) > NETMAP_BUF_SIZE(na))) { + /* This may happen when GRO/LRO features are enabled for + * the NIC driver, when the generic adapter does not + * support RX scatter-gather.
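+ * (With rxsg set, a large frame is instead spread across + * consecutive netmap slots chained with NS_MOREFRAG, as done + * by the rxsync routine below.)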
*/ + RD(2, "Warning: driver pushed up big packet " + "(size=%d)", (int)MBUF_LEN(m)); m_freem(m); + } else if (unlikely(mbq_len(&kring->rx_queue) > 1024)) { + m_freem(m); } else { - mbq_safe_enqueue(&na->rx_rings[rr].rx_queue, m); + mbq_safe_enqueue(&kring->rx_queue, m); } if (netmap_generic_mit < 32768) { /* no rx mitigation, pass notification up */ - netmap_generic_irq(na->ifp, rr, &work_done); - IFRATE(rate_ctx.new.rxirq++); + netmap_generic_irq(na, r, &work_done); } else { /* same as send combining, filter notification if there is a * pending timer, otherwise pass it up and start a timer. */ - if (likely(netmap_mitigation_active(&gna->mit[rr]))) { + if (likely(nm_os_mitigation_active(&gna->mit[r]))) { /* Record that there is some pending work. */ - gna->mit[rr].mit_pending = 1; + gna->mit[r].mit_pending = 1; } else { - netmap_generic_irq(na->ifp, rr, &work_done); - IFRATE(rate_ctx.new.rxirq++); - netmap_mitigation_start(&gna->mit[rr]); + netmap_generic_irq(na, r, &work_done); + nm_os_mitigation_start(&gna->mit[r]); } } + + /* We have intercepted the mbuf. */ + return 1; } /* * generic_netmap_rxsync() extracts mbufs from the queue filled by * generic_rx_handler() and puts their content in the netmap * receive ring. * Access must be protected because the rx handler is asynchronous. */ static int generic_netmap_rxsync(struct netmap_kring *kring, int flags) { struct netmap_ring *ring = kring->ring; struct netmap_adapter *na = kring->na; u_int nm_i; /* index into the netmap ring */ //j, u_int n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; int force_update = (flags & NAF_FORCE_READ) || kring->nr_kflags & NKR_PENDINTR; + /* Adapter-specific variables. */ + uint16_t slot_flags = kring->nkr_slot_flags; + u_int nm_buf_len = NETMAP_BUF_SIZE(na); + struct mbq tmpq; + struct mbuf *m; + int avail; /* in bytes */ + int mlen; + int copy; + if (head > lim) return netmap_ring_reinit(kring); - /* - * First part: import newly received packets. - */ - if (netmap_no_pendintr || force_update) { - /* extract buffers from the rx queue, stop at most one - * slot before nr_hwcur (stop_i) - */ - uint16_t slot_flags = kring->nkr_slot_flags; - u_int stop_i = nm_prev(kring->nr_hwcur, lim); + IFRATE(rate_ctx.new.rxsync++); - nm_i = kring->nr_hwtail; /* first empty slot in the receive ring */ - for (n = 0; nm_i != stop_i; n++) { - int len; - void *addr = NMB(na, &ring->slot[nm_i]); - struct mbuf *m; - - /* we only check the address here on generic rx rings */ - if (addr == NETMAP_BUF_BASE(na)) { /* Bad buffer */ - return netmap_ring_reinit(kring); - } - /* - * Call the locked version of the function. - * XXX Ideally we could grab a batch of mbufs at once - * and save some locking overhead. - */ - m = mbq_safe_dequeue(&kring->rx_queue); - if (!m) /* no more data */ - break; - len = MBUF_LEN(m); - m_copydata(m, 0, len, addr); - ring->slot[nm_i].len = len; - ring->slot[nm_i].flags = slot_flags; - m_freem(m); - nm_i = nm_next(nm_i, lim); - } - if (n) { - kring->nr_hwtail = nm_i; - IFRATE(rate_ctx.new.rxpkt += n); - } - kring->nr_kflags &= ~NKR_PENDINTR; - } - - // XXX should we invert the order ? /* - * Second part: skip past packets that userspace has released. + * First part: skip past packets that userspace has released. + * This can possibly make room for the second part. */ nm_i = kring->nr_hwcur; if (nm_i != head) { /* Userspace has released some packets.
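 * Advancing nr_hwcur over them returns their slots to the kernel, * making room for the import phase below.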
*/ for (n = 0; nm_i != head; n++) { struct netmap_slot *slot = &ring->slot[nm_i]; slot->flags &= ~NS_BUF_CHANGED; nm_i = nm_next(nm_i, lim); } kring->nr_hwcur = head; } - IFRATE(rate_ctx.new.rxsync++); + /* + * Second part: import newly received packets. + */ + if (!netmap_no_pendintr && !force_update) { + return 0; + } + + nm_i = kring->nr_hwtail; /* First empty slot in the receive ring. */ + + /* Compute the available space (in bytes) in this netmap ring. + * The first slot that is not considered is the one before + * nr_hwcur. */ + + avail = nm_prev(kring->nr_hwcur, lim) - nm_i; + if (avail < 0) + avail += lim + 1; + avail *= nm_buf_len; + + /* First pass: While holding the lock on the RX mbuf queue, + * extract as many mbufs as fit in the available space, + * and put them in a temporary queue. + * To avoid performing a per-mbuf division (mlen / nm_buf_len) + * to update avail, we do the update in a while loop that we + * also use to set the RX slots, but without performing the copy. */ + mbq_init(&tmpq); + mbq_lock(&kring->rx_queue); + for (n = 0;; n++) { + m = mbq_peek(&kring->rx_queue); + if (!m) { + /* No more packets from the driver. */ + break; + } + + mlen = MBUF_LEN(m); + if (mlen > avail) { + /* No more space in the ring. */ + break; + } + + mbq_dequeue(&kring->rx_queue); + + while (mlen) { + copy = nm_buf_len; + if (mlen < copy) { + copy = mlen; + } + mlen -= copy; + avail -= nm_buf_len; + + ring->slot[nm_i].len = copy; + ring->slot[nm_i].flags = slot_flags | (mlen ? NS_MOREFRAG : 0); + nm_i = nm_next(nm_i, lim); + } + + mbq_enqueue(&tmpq, m); + } + mbq_unlock(&kring->rx_queue); + + /* Second pass: Drain the temporary queue, going over the used RX slots, + * and perform the copy outside of the RX queue lock. */ + nm_i = kring->nr_hwtail; + + for (;;) { + void *nmaddr; + int ofs = 0; + int morefrag; + + m = mbq_dequeue(&tmpq); + if (!m) { + break; + } + + do { + nmaddr = NMB(na, &ring->slot[nm_i]); + /* We only check the address here on generic rx rings. */ + if (nmaddr == NETMAP_BUF_BASE(na)) { /* Bad buffer */ + m_freem(m); + mbq_purge(&tmpq); + mbq_fini(&tmpq); + return netmap_ring_reinit(kring); + } + + copy = ring->slot[nm_i].len; + m_copydata(m, ofs, copy, nmaddr); + ofs += copy; + morefrag = ring->slot[nm_i].flags & NS_MOREFRAG; + nm_i = nm_next(nm_i, lim); + } while (morefrag); + + m_freem(m); + } + + mbq_fini(&tmpq); + + if (n) { + kring->nr_hwtail = nm_i; + IFRATE(rate_ctx.new.rxpkt += n); + } + kring->nr_kflags &= ~NKR_PENDINTR; + return 0; } static void generic_netmap_dtor(struct netmap_adapter *na) { struct netmap_generic_adapter *gna = (struct netmap_generic_adapter*)na; struct ifnet *ifp = netmap_generic_getifp(gna); struct netmap_adapter *prev_na = gna->prev; if (prev_na != NULL) { D("Released generic NA %p", gna); - if_rele(ifp); netmap_adapter_put(prev_na); - if (na->ifp == NULL) { + if (nm_iszombie(na)) { /* * The driver has been removed without releasing * the reference so we need to do it here. */ netmap_adapter_put(prev_na); } } - WNA(ifp) = prev_na; - D("Restored native NA %p", prev_na); + NM_ATTACH_NA(ifp, prev_na); + /* + * netmap_detach_common(), which is called after this function, + * overrides WNA(ifp) if na->ifp is not NULL. + */ na->ifp = NULL; + D("Restored native NA %p", prev_na); } /* * generic_netmap_attach() makes it possible to use netmap on * a device without native netmap support. * This is less performant than native support but potentially * faster than raw sockets or similar schemes.
* * In this "emulated" mode, netmap rings do not necessarily * have the same size as those in the NIC. We use a default * value and possibly override it if the OS has ways to fetch the * actual configuration. */ int generic_netmap_attach(struct ifnet *ifp) { struct netmap_adapter *na; struct netmap_generic_adapter *gna; int retval; u_int num_tx_desc, num_rx_desc; num_tx_desc = num_rx_desc = netmap_generic_ringsize; /* starting point */ - generic_find_num_desc(ifp, &num_tx_desc, &num_rx_desc); /* ignore errors */ + nm_os_generic_find_num_desc(ifp, &num_tx_desc, &num_rx_desc); /* ignore errors */ ND("Netmap ring size: TX = %d, RX = %d", num_tx_desc, num_rx_desc); if (num_tx_desc == 0 || num_rx_desc == 0) { D("Device has no hw slots (tx %u, rx %u)", num_tx_desc, num_rx_desc); return EINVAL; } gna = malloc(sizeof(*gna), M_DEVBUF, M_NOWAIT | M_ZERO); if (gna == NULL) { D("no memory on attach, give up"); return ENOMEM; } na = (struct netmap_adapter *)gna; strncpy(na->name, ifp->if_xname, sizeof(na->name)); na->ifp = ifp; na->num_tx_desc = num_tx_desc; na->num_rx_desc = num_rx_desc; na->nm_register = &generic_netmap_register; na->nm_txsync = &generic_netmap_txsync; na->nm_rxsync = &generic_netmap_rxsync; na->nm_dtor = &generic_netmap_dtor; /* when using generic, NAF_NETMAP_ON is set so we force * NAF_SKIP_INTR to use the regular interrupt handler */ na->na_flags = NAF_SKIP_INTR | NAF_HOST_RINGS; ND("[GNA] num_tx_queues(%d), real_num_tx_queues(%d), len(%lu)", ifp->num_tx_queues, ifp->real_num_tx_queues, ifp->tx_queue_len); ND("[GNA] num_rx_queues(%d), real_num_rx_queues(%d)", ifp->num_rx_queues, ifp->real_num_rx_queues); - generic_find_num_queues(ifp, &na->num_tx_rings, &na->num_rx_rings); + nm_os_generic_find_num_queues(ifp, &na->num_tx_rings, &na->num_rx_rings); retval = netmap_attach_common(na); if (retval) { free(gna, M_DEVBUF); + return retval; } + + gna->prev = NA(ifp); /* save old na */ + if (gna->prev != NULL) { + netmap_adapter_get(gna->prev); + } + NM_ATTACH_NA(ifp, na); + + nm_os_generic_set_features(gna); + + D("Created generic NA %p (prev %p)", gna, gna->prev); return retval; } Index: head/sys/dev/netmap/netmap_kern.h =================================================================== --- head/sys/dev/netmap/netmap_kern.h (revision 307393) +++ head/sys/dev/netmap/netmap_kern.h (revision 307394) @@ -1,1677 +1,2091 @@ /* - * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved. - * Copyright (C) 2013-2014 Universita` di Pisa. All rights reserved. + * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo + * Copyright (C) 2013-2016 Universita` di Pisa + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * The header contains the definitions of constants and function * prototypes used only in kernelspace. */ #ifndef _NET_NETMAP_KERN_H_ #define _NET_NETMAP_KERN_H_ #if defined(linux) #if defined(CONFIG_NETMAP_VALE) #define WITH_VALE #endif #if defined(CONFIG_NETMAP_PIPE) #define WITH_PIPES #endif #if defined(CONFIG_NETMAP_MONITOR) #define WITH_MONITOR #endif #if defined(CONFIG_NETMAP_GENERIC) #define WITH_GENERIC #endif -#if defined(CONFIG_NETMAP_V1000) -#define WITH_V1000 +#if defined(CONFIG_NETMAP_PTNETMAP_GUEST) +#define WITH_PTNETMAP_GUEST #endif +#if defined(CONFIG_NETMAP_PTNETMAP_HOST) +#define WITH_PTNETMAP_HOST +#endif -#else /* not linux */ +#elif defined (_WIN32) +#define WITH_VALE // comment out to disable VALE support +#define WITH_PIPES +#define WITH_MONITOR +#define WITH_GENERIC +#else /* neither linux nor windows */ #define WITH_VALE // comment out to disable VALE support #define WITH_PIPES #define WITH_MONITOR #define WITH_GENERIC +#define WITH_PTNETMAP_HOST /* ptnetmap host support */ +#define WITH_PTNETMAP_GUEST /* ptnetmap guest support */ #endif #if defined(__FreeBSD__) -#include <sys/selinfo.h> #define likely(x) __builtin_expect((long)!!(x), 1L) #define unlikely(x) __builtin_expect((long)!!(x), 0L) +#define __user #define NM_LOCK_T struct mtx /* low level spinlock, used to protect queues */ #define NM_MTX_T struct sx /* OS-specific mutex (sleepable) */ #define NM_MTX_INIT(m) sx_init(&(m), #m) #define NM_MTX_DESTROY(m) sx_destroy(&(m)) #define NM_MTX_LOCK(m) sx_xlock(&(m)) #define NM_MTX_UNLOCK(m) sx_xunlock(&(m)) #define NM_MTX_ASSERT(m) sx_assert(&(m), SA_XLOCKED) #define NM_SELINFO_T struct nm_selinfo +#define NM_SELRECORD_T struct thread #define MBUF_LEN(m) ((m)->m_pkthdr.len) -#define MBUF_IFP(m) ((m)->m_pkthdr.rcvif) -#define NM_SEND_UP(ifp, m) ((NA(ifp))->if_input)(ifp, m) +#define MBUF_TXQ(m) ((m)->m_pkthdr.flowid) +#define MBUF_TRANSMIT(na, ifp, m) ((na)->if_transmit(ifp, m)) +#define GEN_TX_MBUF_IFP(m) ((m)->m_pkthdr.rcvif) #define NM_ATOMIC_T volatile int // XXX ? /* atomic operations */ #include <machine/atomic.h> #define NM_ATOMIC_TEST_AND_SET(p) (!atomic_cmpset_acq_int((p), 0, 1)) #define NM_ATOMIC_CLEAR(p) atomic_store_rel_int((p), 0) #if __FreeBSD_version >= 1100030 #define WNA(_ifp) (_ifp)->if_netmap #else /* older FreeBSD */ #define WNA(_ifp) (_ifp)->if_pspare[0] #endif /* older FreeBSD */ #if __FreeBSD_version >= 1100005 struct netmap_adapter *netmap_getna(if_t ifp); #endif #if __FreeBSD_version >= 1100027 -#define GET_MBUF_REFCNT(m) ((m)->m_ext.ext_cnt ? *((m)->m_ext.ext_cnt) : -1) -#define SET_MBUF_REFCNT(m, x) *((m)->m_ext.ext_cnt) = x -#define PNT_MBUF_REFCNT(m) ((m)->m_ext.ext_cnt) +#define MBUF_REFCNT(m) ((m)->m_ext.ext_count) +#define SET_MBUF_REFCNT(m, x) (m)->m_ext.ext_count = x #else -#define GET_MBUF_REFCNT(m) ((m)->m_ext.ref_cnt ? *((m)->m_ext.ref_cnt) : -1) +#define MBUF_REFCNT(m) ((m)->m_ext.ref_cnt ?
*((m)->m_ext.ref_cnt) : -1) #define SET_MBUF_REFCNT(m, x) *((m)->m_ext.ref_cnt) = x -#define PNT_MBUF_REFCNT(m) ((m)->m_ext.ref_cnt) #endif -MALLOC_DECLARE(M_NETMAP); +#define MBUF_QUEUED(m) 1 struct nm_selinfo { struct selinfo si; struct mtx m; }; -void freebsd_selwakeup(struct nm_selinfo *si, int pri); // XXX linux struct, not used in FreeBSD struct net_device_ops { }; struct ethtool_ops { }; struct hrtimer { }; #define NM_BNS_GET(b) #define NM_BNS_PUT(b) #elif defined (linux) #define NM_LOCK_T safe_spinlock_t // see bsd_glue.h #define NM_SELINFO_T wait_queue_head_t #define MBUF_LEN(m) ((m)->len) -#define MBUF_IFP(m) ((m)->dev) -#define NM_SEND_UP(ifp, m) \ - do { \ - m->priority = NM_MAGIC_PRIORITY_RX; \ - netif_rx(m); \ - } while (0) +#define MBUF_TRANSMIT(na, ifp, m) \ + ({ \ + /* Avoid infinite recursion with generic. */ \ + m->priority = NM_MAGIC_PRIORITY_TX; \ + (((struct net_device_ops *)(na)->if_transmit)->ndo_start_xmit(m, ifp)); \ + 0; \ + }) +/* See explanation in nm_os_generic_xmit_frame. */ +#define GEN_TX_MBUF_IFP(m) ((struct ifnet *)skb_shinfo(m)->destructor_arg) + #define NM_ATOMIC_T volatile long unsigned int #define NM_MTX_T struct mutex /* OS-specific sleepable lock */ #define NM_MTX_INIT(m) mutex_init(&(m)) #define NM_MTX_DESTROY(m) do { (void)(m); } while (0) #define NM_MTX_LOCK(m) mutex_lock(&(m)) #define NM_MTX_UNLOCK(m) mutex_unlock(&(m)) #define NM_MTX_ASSERT(m) mutex_is_locked(&(m)) #ifndef DEV_NETMAP #define DEV_NETMAP #endif /* DEV_NETMAP */ #elif defined (__APPLE__) #warning apple support is incomplete. #define likely(x) __builtin_expect(!!(x), 1) #define unlikely(x) __builtin_expect(!!(x), 0) #define NM_LOCK_T IOLock * #define NM_SELINFO_T struct selinfo #define MBUF_LEN(m) ((m)->m_pkthdr.len) -#define NM_SEND_UP(ifp, m) ((ifp)->if_input)(ifp, m) +#elif defined (_WIN32) +#include "../../../WINDOWS/win_glue.h" + +#define NM_SELRECORD_T IO_STACK_LOCATION +#define NM_SELINFO_T win_SELINFO // see win_glue.h +#define NM_LOCK_T win_spinlock_t // see win_glue.h +#define NM_MTX_T KGUARDED_MUTEX /* OS-specific mutex (sleepable) */ + +#define NM_MTX_INIT(m) KeInitializeGuardedMutex(&m); +#define NM_MTX_DESTROY(m) do { (void)(m); } while (0) +#define NM_MTX_LOCK(m) KeAcquireGuardedMutex(&(m)) +#define NM_MTX_UNLOCK(m) KeReleaseGuardedMutex(&(m)) +#define NM_MTX_ASSERT(m) assert(&m.Count>0) + +//These linknames are for the NDIS driver +#define NETMAP_NDIS_LINKNAME_STRING L"\\DosDevices\\NMAPNDIS" +#define NETMAP_NDIS_NTDEVICE_STRING L"\\Device\\NMAPNDIS" + +//Definition of internal driver-to-driver ioctl codes +#define NETMAP_KERNEL_XCHANGE_POINTERS _IO('i', 180) +#define NETMAP_KERNEL_SEND_SHUTDOWN_SIGNAL _IO_direct('i', 195) + +//Empty data structures are not permitted by MSVC compiler +//XXX_ale, try to solve this problem +struct net_device_ops{ + char data[1]; +}; +typedef struct ethtool_ops{ + char data[1]; +}; +typedef struct hrtimer{ + KTIMER timer; + BOOLEAN active; + KDPC deferred_proc; +}; + +/* MSVC does not have likely/unlikely support */ +#ifdef _MSC_VER +#define likely(x) (x) +#define unlikely(x) (x) #else +#define likely(x) __builtin_expect((long)!!(x), 1L) +#define unlikely(x) __builtin_expect((long)!!(x), 0L) +#endif //_MSC_VER +#else + #error unsupported platform #endif /* end - platform-specific code */ +#ifndef _WIN32 /* support for emulated sysctl */ +#define SYSBEGIN(x) +#define SYSEND +#endif /* _WIN32 */ + +#define NM_ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x)) + #define NMG_LOCK_T NM_MTX_T #define NMG_LOCK_INIT() 
NM_MTX_INIT(netmap_global_lock) #define NMG_LOCK_DESTROY() NM_MTX_DESTROY(netmap_global_lock) #define NMG_LOCK() NM_MTX_LOCK(netmap_global_lock) #define NMG_UNLOCK() NM_MTX_UNLOCK(netmap_global_lock) #define NMG_LOCK_ASSERT() NM_MTX_ASSERT(netmap_global_lock) #define ND(format, ...) #define D(format, ...) \ do { \ struct timeval __xxts; \ microtime(&__xxts); \ printf("%03d.%06d [%4d] %-25s " format "\n", \ (int)__xxts.tv_sec % 1000, (int)__xxts.tv_usec, \ __LINE__, __FUNCTION__, ##__VA_ARGS__); \ } while (0) /* rate limited, lps indicates how many per second */ #define RD(lps, format, ...) \ do { \ static int t0, __cnt; \ if (t0 != time_second) { \ t0 = time_second; \ __cnt = 0; \ } \ if (__cnt++ < lps) \ D(format, ##__VA_ARGS__); \ } while (0) struct netmap_adapter; struct nm_bdg_fwd; struct nm_bridge; struct netmap_priv_d; +/* os-specific NM_SELINFO_T initialization/destruction functions */ +void nm_os_selinfo_init(NM_SELINFO_T *); +void nm_os_selinfo_uninit(NM_SELINFO_T *); + const char *nm_dump_buf(char *p, int len, int lim, char *dst); +void nm_os_selwakeup(NM_SELINFO_T *si); +void nm_os_selrecord(NM_SELRECORD_T *sr, NM_SELINFO_T *si); + +int nm_os_ifnet_init(void); +void nm_os_ifnet_fini(void); +void nm_os_ifnet_lock(void); +void nm_os_ifnet_unlock(void); + +void nm_os_get_module(void); +void nm_os_put_module(void); + +void netmap_make_zombie(struct ifnet *); +void netmap_undo_zombie(struct ifnet *); + +/* passes a packet up to the host stack. + * If the packet is sent (or dropped) immediately it returns NULL, + * otherwise it links the packet to prev and returns m. + * In this case, a final call with m=NULL and prev != NULL will send up + * the entire chain to the host stack. + */ +void *nm_os_send_up(struct ifnet *, struct mbuf *m, struct mbuf *prev); + +int nm_os_mbuf_has_offld(struct mbuf *m); + #include "netmap_mbq.h" extern NMG_LOCK_T netmap_global_lock; enum txrx { NR_RX = 0, NR_TX = 1, NR_TXRX }; static __inline const char* nm_txrx2str(enum txrx t) { return (t== NR_RX ? "RX" : "TX"); } static __inline enum txrx nm_txrx_swap(enum txrx t) { return (t== NR_RX ? NR_TX : NR_RX); } #define for_rx_tx(t) for ((t) = 0; (t) < NR_TXRX; (t)++) /* * private, kernel view of a ring. Keeps track of the status of * a ring across system calls. * * nr_hwcur index of the next buffer to refill. * It corresponds to ring->head * at the time the system call returns. * * nr_hwtail index of the first buffer owned by the kernel. * On RX, hwcur->hwtail are receive buffers * not yet released. hwcur is advanced following * ring->head, hwtail is advanced on incoming packets, * and a wakeup is generated when hwtail passes ring->cur * On TX, hwcur->rcur have been filled by the sender * but not sent yet to the NIC; rcur->hwtail are available * for new transmissions, and hwtail->hwcur-1 are pending * transmissions not yet acknowledged. * * The indexes in the NIC and netmap rings are offset by nkr_hwofs slots. * This is so that, on a reset, buffers owned by userspace are not * modified by the kernel. In particular: * RX rings: the next empty buffer (hwtail + hwofs) coincides with * the next empty buffer as known by the hardware (next_to_check or so). * TX rings: hwcur + hwofs coincides with next_to_send * * For received packets, slot->flags is set to nkr_slot_flags * so we can provide a proper initial value (e.g. set NS_FORWARD * when operating in 'transparent' mode).
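 *
 * A concrete example of the above, with hypothetical values: on an RX
 * ring with nkr_num_slots = 8, nr_hwcur = 2 and nr_hwtail = 6, slots
 * 2..5 hold packets received by the kernel but not yet released by
 * userspace, while slots 6, 7, 0 and 1 are owned by the kernel and are
 * refilled as ring->head advances past them.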
* * The following fields are used to implement lock-free copy of packets * from input to output ports in VALE switch: * nkr_hwlease buffer after the last one being copied. * A writer in nm_bdg_flush reserves N buffers * from nr_hwlease, advances it, then does the * copy outside the lock. * In RX rings (used for VALE ports), * nkr_hwtail <= nkr_hwlease < nkr_hwcur+N-1 * In TX rings (used for NIC or host stack ports) * nkr_hwcur <= nkr_hwlease < nkr_hwtail * nkr_leases array of nkr_num_slots where writers can report * completion of their block. NR_NOSLOT (~0) indicates * that the writer has not finished yet * nkr_lease_idx index of next free slot in nr_leases, to be assigned * * The kring is manipulated by txsync/rxsync and generic netmap functions. * * Concurrent rxsync or txsync on the same ring are prevented * by nm_kr_(try)lock() which in turn uses nr_busy. This is all we need * for NIC rings, and for TX rings attached to the host stack. * * RX rings attached to the host stack use an mbq (rx_queue) on both * rxsync_from_host() and netmap_transmit(). The mbq is protected * by its internal lock. * * RX rings attached to the VALE switch are accessed by both senders * and receiver. They are protected through the q_lock on the RX ring. */ struct netmap_kring { struct netmap_ring *ring; uint32_t nr_hwcur; uint32_t nr_hwtail; /* * Copies of values in user rings, so we do not need to look * at the ring (which could be modified). These are set in the * *sync_prologue()/finalize() routines. */ uint32_t rhead; uint32_t rcur; uint32_t rtail; uint32_t nr_kflags; /* private driver flags */ #define NKR_PENDINTR 0x1 // Pending interrupt. #define NKR_EXCLUSIVE 0x2 /* exclusive binding */ +#define NKR_FORWARD 0x4 /* (host ring only) there are + packets to forward + */ +#define NKR_NEEDRING 0x8 /* ring needed even if users==0 + * (used internally by pipes and + * by ptnetmap host ports) + */ + + uint32_t nr_mode; + uint32_t nr_pending_mode; +#define NKR_NETMAP_OFF 0x0 +#define NKR_NETMAP_ON 0x1 + uint32_t nkr_num_slots; /* * On a NIC reset, the NIC ring indexes may be reset but the * indexes in the netmap rings remain the same. nkr_hwofs * keeps track of the offset between the two. */ int32_t nkr_hwofs; uint16_t nkr_slot_flags; /* initial value for flags */ /* last_reclaim is an opaque marker to help reduce the frequency * of operations such as reclaiming tx buffers. A possible use * is to set it to ticks and do the reclaim only once per tick. */ uint64_t last_reclaim; NM_SELINFO_T si; /* poll/select wait queue */ NM_LOCK_T q_lock; /* protects kring and ring. */ NM_ATOMIC_T nr_busy; /* prevent concurrent syscalls */ struct netmap_adapter *na; /* The following fields are for VALE switch support */ struct nm_bdg_fwd *nkr_ft; uint32_t *nkr_leases; #define NR_NOSLOT ((uint32_t)~0) /* used in nkr_*lease* */ uint32_t nkr_hwlease; uint32_t nkr_lease_idx; /* while nkr_stopped is set, no new [tr]xsync operations can * be started on this kring. * This is used by netmap_disable_all_rings() * to find a synchronization point where critical data * structures pointed to by the kring can be added or removed */ volatile int nkr_stopped; /* Support for adapters without native netmap support. * On tx rings we preallocate an array of tx buffers * (same size as the netmap ring), on rx rings we * store incoming mbufs in a queue that is drained by * a rxsync. */ - struct mbuf **tx_pool; - // u_int nr_ntc; /* Emulation of a next-to-clean RX ring pointer. */ - struct mbq rx_queue; /* intercepted rx mbufs.
*/ + struct mbuf **tx_pool; + struct mbuf *tx_event; /* TX event used as a notification */ + NM_LOCK_T tx_event_lock; /* protects the tx_event mbuf */ + struct mbq rx_queue; /* intercepted rx mbufs. */ uint32_t users; /* existing bindings for this ring */ - uint32_t ring_id; /* debugging */ + uint32_t ring_id; /* kring identifier */ enum txrx tx; /* kind of ring (tx or rx) */ char name[64]; /* diagnostic */ /* [tx]sync callback for this kring. * The default nm_kring_create callback (netmap_krings_create) * sets the nm_sync callback of each hardware tx(rx) kring to * the corresponding nm_txsync(nm_rxsync) taken from the * netmap_adapter; moreover, it sets the sync callback * of the host tx(rx) ring to netmap_txsync_to_host * (netmap_rxsync_from_host). * * Overrides: the above configuration is not changed by * any of the nm_krings_create callbacks. */ int (*nm_sync)(struct netmap_kring *kring, int flags); int (*nm_notify)(struct netmap_kring *kring, int flags); #ifdef WITH_PIPES struct netmap_kring *pipe; /* if this is a pipe ring, * pointer to the other end */ - struct netmap_ring *save_ring; /* pointer to hidden rings - * (see netmap_pipe.c for details) - */ #endif /* WITH_PIPES */ #ifdef WITH_VALE int (*save_notify)(struct netmap_kring *kring, int flags); #endif #ifdef WITH_MONITOR /* array of krings that are monitoring this kring */ struct netmap_kring **monitors; uint32_t max_monitors; /* current size of the monitors array */ uint32_t n_monitors; /* next unused entry in the monitor array */ /* * Monitors work by intercepting the sync and notify callbacks of the * monitored krings. This is implemented by replacing the pointers * above and saving the previous ones in mon_* pointers below */ int (*mon_sync)(struct netmap_kring *kring, int flags); int (*mon_notify)(struct netmap_kring *kring, int flags); uint32_t mon_tail; /* last seen slot on rx */ uint32_t mon_pos; /* index of this ring in the monitored ring array */ #endif -} __attribute__((__aligned__(64))); +} +#ifdef _WIN32 +__declspec(align(64)); +#else +__attribute__((__aligned__(64))); +#endif +/* return 1 iff the kring needs to be turned on */ +static inline int +nm_kring_pending_on(struct netmap_kring *kring) +{ + return kring->nr_pending_mode == NKR_NETMAP_ON && + kring->nr_mode == NKR_NETMAP_OFF; +} +/* return 1 iff the kring needs to be turned off */ +static inline int +nm_kring_pending_off(struct netmap_kring *kring) +{ + return kring->nr_pending_mode == NKR_NETMAP_OFF && + kring->nr_mode == NKR_NETMAP_ON; +} + /* return the next index, with wraparound */ static inline uint32_t nm_next(uint32_t i, uint32_t lim) { return unlikely (i == lim) ? 0 : i + 1; } /* return the previous index, with wraparound */ static inline uint32_t nm_prev(uint32_t i, uint32_t lim) { return unlikely (i == 0) ? lim : i - 1; } /* * * Here is the layout for the Rx and Tx rings. RxRING TxRING +-----------------+ +-----------------+ | | | | |XXX free slot XXX| |XXX free slot XXX| +-----------------+ +-----------------+ head->| owned by user |<-hwcur | not sent to nic |<-hwcur | | | yet | +-----------------+ | | cur->| available to | | | | user, not read | +-----------------+ | yet | cur->| (being | | | | prepared) | | | | | +-----------------+ + ------ + tail->| |<-hwtail | |<-hwlease | (being | ... | | ... | prepared) | ... | | ... +-----------------+ ... | | ... 
| |<-hwlease +-----------------+ | | tail->| |<-hwtail | | | | | | | | | | | | +-----------------+ +-----------------+ * The cur/tail (user view) and hwcur/hwtail (kernel view) * are used in the normal operation of the card. * * When a ring is the output of a switch port (Rx ring for * a VALE port, Tx ring for the host stack or NIC), slots * are reserved in blocks through 'hwlease' which points * to the next unused slot. * On an Rx ring, hwlease is always after hwtail, * and completions cause hwtail to advance. * On a Tx ring, hwlease is always between cur and hwtail, * and completions cause cur to advance. * * nm_kr_space() returns the maximum number of slots that * can be assigned. * nm_kr_lease() reserves the required number of buffers, * advances nkr_hwlease and also returns an entry in * a circular array where completions should be reported. */ struct netmap_lut { struct lut_entry *lut; uint32_t objtotal; /* max buffer index */ uint32_t objsize; /* buffer size */ }; struct netmap_vp_adapter; // forward /* * The "struct netmap_adapter" extends the "struct adapter" * (or equivalent) device descriptor. * It contains all base fields needed to support netmap operation. * There are in fact different types of netmap adapters * (native, generic, VALE switch...) so a netmap_adapter is * just the first field in the derived type. */ struct netmap_adapter { /* * On linux we do not have a good way to tell if an interface * is netmap-capable. So we always use the following trick: * NA(ifp) points here, and the first entry (which hopefully * always exists and is at least 32 bits) contains a magic * value which we can use to detect that the interface is good. */ uint32_t magic; uint32_t na_flags; /* enabled, and other flags */ #define NAF_SKIP_INTR 1 /* use the regular interrupt handler. * useful during initialization */ #define NAF_SW_ONLY 2 /* forward packets only to sw adapter */ #define NAF_BDG_MAYSLEEP 4 /* the bridge is allowed to sleep when * forwarding packets coming from this * interface */ #define NAF_MEM_OWNER 8 /* the adapter uses its own memory area * that cannot be changed */ #define NAF_NATIVE 16 /* the adapter is native. * Virtual ports (non persistent vale ports, * pipes, monitors...) should never use * this flag. */ #define NAF_NETMAP_ON 32 /* netmap is active (either native or * emulated). Where possible (e.g. FreeBSD) * IFCAP_NETMAP also mirrors this flag. */ #define NAF_HOST_RINGS 64 /* the adapter supports the host rings */ #define NAF_FORCE_NATIVE 128 /* the adapter is always NATIVE */ +#define NAF_PTNETMAP_HOST 256 /* the adapter supports ptnetmap in the host */ +#define NAF_ZOMBIE (1U<<30) /* the nic driver has been unloaded */ #define NAF_BUSY (1U<<31) /* the adapter is used internally and * cannot be registered from userspace */ int active_fds; /* number of user-space descriptors using this interface, which is equal to the number of struct netmap_if objs in the mapped region. */ u_int num_rx_rings; /* number of adapter receive rings */ u_int num_tx_rings; /* number of adapter transmit rings */ u_int num_tx_desc; /* number of descriptor in each queue */ u_int num_rx_desc; /* tx_rings and rx_rings are private but allocated * as a contiguous chunk of memory. Each array has * N+1 entries, for the adapter queues and for the host queue. */ struct netmap_kring *tx_rings; /* array of TX rings. */ struct netmap_kring *rx_rings; /* array of RX rings. 
*/ void *tailroom; /* space below the rings array */ /* (used for leases) */ NM_SELINFO_T si[NR_TXRX]; /* global wait queues */ /* count users of the global wait queues */ int si_users[NR_TXRX]; void *pdev; /* used to store pci device */ /* copy of if_qflush and if_transmit pointers, to intercept * packets from the network stack when netmap is active. */ int (*if_transmit)(struct ifnet *, struct mbuf *); /* copy of if_input for netmap_send_up() */ void (*if_input)(struct ifnet *, struct mbuf *); /* references to the ifnet and device routines, used by * the generic netmap functions. */ struct ifnet *ifp; /* adapter is ifp->if_softc */ /*---- callbacks for this netmap adapter -----*/ /* * nm_dtor() is the cleanup routine called when destroying * the adapter. * Called with NMG_LOCK held. * * nm_register() is called on NIOCREGIF and close() to enter * or exit netmap mode on the NIC * Called with NMG_LOCK held. * * nm_txsync() pushes packets to the underlying hw/switch * * nm_rxsync() collects packets from the underlying hw/switch * * nm_config() returns configuration information from the OS * Called with NMG_LOCK held. * * nm_krings_create() creates and inits the tx_rings and * rx_rings arrays of kring structures. In particular, * it sets the nm_sync callbacks for each ring. * There is no need to also allocate the corresponding * netmap_rings, since netmap_mem_rings_create() will always * be called to provide the missing ones. * Called with NMG_LOCK held. * * nm_krings_delete() cleans up and deletes the tx_rings and rx_rings * arrays * Called with NMG_LOCK held. * * nm_notify() is used to act after data have become available * (or the stopped state of the ring has changed) * For hw devices this is typically a selwakeup(), * but for NIC/host ports attached to a switch (or vice-versa) * we also need to invoke the 'txsync' code downstream. + * This callback pointer is actually used only to initialize + * kring->nm_notify. + * Return values are the same as for netmap_rx_irq(). */ void (*nm_dtor)(struct netmap_adapter *); int (*nm_register)(struct netmap_adapter *, int onoff); + void (*nm_intr)(struct netmap_adapter *, int onoff); int (*nm_txsync)(struct netmap_kring *kring, int flags); int (*nm_rxsync)(struct netmap_kring *kring, int flags); int (*nm_notify)(struct netmap_kring *kring, int flags); #define NAF_FORCE_READ 1 #define NAF_FORCE_RECLAIM 2 /* return configuration information */ int (*nm_config)(struct netmap_adapter *, u_int *txr, u_int *txd, u_int *rxr, u_int *rxd); int (*nm_krings_create)(struct netmap_adapter *); void (*nm_krings_delete)(struct netmap_adapter *); #ifdef WITH_VALE /* * nm_bdg_attach() initializes the na_vp field to point * to an adapter that can be attached to a VALE switch. If the * current adapter is already a VALE port, na_vp is simply a cast; * otherwise, na_vp points to a netmap_bwrap_adapter. * If applicable, this callback also initializes na_hostvp, * that can be used to connect the adapter host rings to the * switch. * Called with NMG_LOCK held. * * nm_bdg_ctl() is called on the actual attach/detach * to/from the switch, to perform adapter-specific * initializations * Called with NMG_LOCK held.
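 *
 * A sketch of how these two callbacks are reached (a summary, not a
 * complete call graph): a request naming a VALE port such as
 * "valeX:Y" is resolved by netmap_get_bdg_na(), declared further
 * below, which invokes nm_bdg_attach(); nm_bdg_ctl() is then driven
 * by the NETMAP_BDG_ATTACH/NETMAP_BDG_DETACH requests.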
*/ int (*nm_bdg_attach)(const char *bdg_name, struct netmap_adapter *); int (*nm_bdg_ctl)(struct netmap_adapter *, struct nmreq *, int); /* adapter used to attach this adapter to a VALE switch (if any) */ struct netmap_vp_adapter *na_vp; /* adapter used to attach the host rings of this adapter * to a VALE switch (if any) */ struct netmap_vp_adapter *na_hostvp; #endif /* standard refcount to control the lifetime of the adapter * (it should be equal to the lifetime of the corresponding ifp) */ int na_refcount; /* memory allocator (opaque) * We also cache a pointer to the lut_entry for translating - * buffer addresses, and the total number of buffers. + * buffer addresses, the total number of buffers and the buffer size. */ struct netmap_mem_d *nm_mem; struct netmap_lut na_lut; /* additional information attached to this adapter * by other netmap subsystems. Currently used by - * bwrap and LINUX/v1000. + * bwrap, LINUX/v1000 and ptnetmap */ void *na_private; /* array of pipes that have this adapter as a parent */ struct netmap_pipe_adapter **na_pipes; int na_next_pipe; /* next free slot in the array */ int na_max_pipes; /* size of the array */ + /* Offset of ethernet header for each packet. */ + u_int virt_hdr_len; + char name[64]; }; static __inline u_int nma_get_ndesc(struct netmap_adapter *na, enum txrx t) { return (t == NR_TX ? na->num_tx_desc : na->num_rx_desc); } static __inline void nma_set_ndesc(struct netmap_adapter *na, enum txrx t, u_int v) { if (t == NR_TX) na->num_tx_desc = v; else na->num_rx_desc = v; } static __inline u_int nma_get_nrings(struct netmap_adapter *na, enum txrx t) { return (t == NR_TX ? na->num_tx_rings : na->num_rx_rings); } static __inline void nma_set_nrings(struct netmap_adapter *na, enum txrx t, u_int v) { if (t == NR_TX) na->num_tx_rings = v; else na->num_rx_rings = v; } static __inline struct netmap_kring* NMR(struct netmap_adapter *na, enum txrx t) { return (t == NR_TX ? na->tx_rings : na->rx_rings); } /* * If the NIC is owned by the kernel * (i.e., bridge), neither another bridge nor user can use it; * if the NIC is owned by a user, only users can share it. * Evaluation must be done under NMG_LOCK(). */ #define NETMAP_OWNED_BY_KERN(na) ((na)->na_flags & NAF_BUSY) #define NETMAP_OWNED_BY_ANY(na) \ (NETMAP_OWNED_BY_KERN(na) || ((na)->active_fds > 0)) /* * derived netmap adapters for various types of ports */ struct netmap_vp_adapter { /* VALE software port */ struct netmap_adapter up; /* * Bridge support: * * bdg_port is the port number used in the bridge; * na_bdg points to the bridge this NA is attached to. */ int bdg_port; struct nm_bridge *na_bdg; int retry; - /* Offset of ethernet header for each packet. */ - u_int virt_hdr_len; /* Maximum Frame Size, used in bdg_mismatch_datapath() */ u_int mfs; /* Last source MAC on this port */ uint64_t last_smac; }; struct netmap_hw_adapter { /* physical device */ struct netmap_adapter up; struct net_device_ops nm_ndo; // XXX linux only struct ethtool_ops nm_eto; // XXX linux only const struct ethtool_ops* save_ethtool; int (*nm_hw_register)(struct netmap_adapter *, int onoff); }; #ifdef WITH_GENERIC /* Mitigation support. */ struct nm_generic_mit { struct hrtimer mit_timer; int mit_pending; int mit_ring_idx; /* index of the ring being mitigated */ struct netmap_adapter *mit_na; /* backpointer */ }; struct netmap_generic_adapter { /* emulated device */ struct netmap_hw_adapter up; /* Pointer to a previously used netmap adapter. 
*/ struct netmap_adapter *prev; /* generic netmap adapters support: * a net_device_ops struct overrides ndo_select_queue(), * save_if_input saves the if_input hook (FreeBSD), * mit implements rx interrupt mitigation, */ struct net_device_ops generic_ndo; void (*save_if_input)(struct ifnet *, struct mbuf *); struct nm_generic_mit *mit; #ifdef linux netdev_tx_t (*save_start_xmit)(struct mbuf *, struct ifnet *); #endif + /* Is the adapter able to use multiple RX slots to scatter + * each packet pushed up by the driver? */ + int rxsg; + + /* Is the transmission path controlled by a netmap-aware + * device queue (i.e. qdisc on linux)? */ + int txqdisc; }; #endif /* WITH_GENERIC */ static __inline int netmap_real_rings(struct netmap_adapter *na, enum txrx t) { return nma_get_nrings(na, t) + !!(na->na_flags & NAF_HOST_RINGS); } #ifdef WITH_VALE - +struct nm_bdg_polling_state; /* * Bridge wrapper for non VALE ports attached to a VALE switch. * * The real device must already have its own netmap adapter (hwna). * The bridge wrapper and the hwna adapter share the same set of * netmap rings and buffers, but they have two separate sets of * krings descriptors, with tx/rx meanings swapped: * * netmap * bwrap krings rings krings hwna * +------+ +------+ +-----+ +------+ +------+ * |tx_rings->| |\ /| |----| |<-tx_rings| * | | +------+ \ / +-----+ +------+ | | * | | X | | * | | / \ | | * | | +------+/ \+-----+ +------+ | | * |rx_rings->| | | |----| |<-rx_rings| * | | +------+ +-----+ +------+ | | * +------+ +------+ * * - packets coming from the bridge go to the bwrap rx rings, * which are also the hwna tx rings. The bwrap notify callback * will then complete the hwna tx (see netmap_bwrap_notify). * * - packets coming from the outside go to the hwna rx rings, * which are also the bwrap tx rings. The (overwritten) hwna * notify method will then complete the bridge tx * (see netmap_bwrap_intr_notify). * * The bridge wrapper may optionally connect the hwna 'host' rings * to the bridge. This is done by using a second port in the * bridge and connecting it to the 'host' netmap_vp_adapter * contained in the netmap_bwrap_adapter. The bwrap host adapter * cross-links the hwna host rings in the same way as shown above. * * - packets coming from the bridge and directed to the host stack * are handled by the bwrap host notify callback * (see netmap_bwrap_host_notify) * * - packets coming from the host stack are still handled by the * overwritten hwna notify callback (netmap_bwrap_intr_notify), * but are diverted to the host adapter depending on the ring number. * */ struct netmap_bwrap_adapter { struct netmap_vp_adapter up; struct netmap_vp_adapter host; /* for host rings */ struct netmap_adapter *hwna; /* the underlying device */ - /* backup of the hwna memory allocator */ - struct netmap_mem_d *save_nmd; - /* * When we attach a physical interface to the bridge, we * allow the controlling process to terminate, so we need * a place to store the netmap_priv_d data structure. * This is only done when physical interfaces * are attached to a bridge.
*/ struct netmap_priv_d *na_kpriv; + struct nm_bdg_polling_state *na_polling_state; }; int netmap_bwrap_attach(const char *name, struct netmap_adapter *); - #endif /* WITH_VALE */ #ifdef WITH_PIPES #define NM_MAXPIPES 64 /* max number of pipes per adapter */ struct netmap_pipe_adapter { struct netmap_adapter up; u_int id; /* pipe identifier */ int role; /* either NR_REG_PIPE_MASTER or NR_REG_PIPE_SLAVE */ struct netmap_adapter *parent; /* adapter that owns the memory */ struct netmap_pipe_adapter *peer; /* the other end of the pipe */ int peer_ref; /* 1 iff we are holding a ref to the peer */ u_int parent_slot; /* index in the parent pipe array */ }; #endif /* WITH_PIPES */ /* return slots reserved to rx clients; used in drivers */ static inline uint32_t nm_kr_rxspace(struct netmap_kring *k) { int space = k->nr_hwtail - k->nr_hwcur; if (space < 0) space += k->nkr_num_slots; ND("preserving %d rx slots %d -> %d", space, k->nr_hwcur, k->nr_hwtail); return space; } +/* return slots reserved to tx clients */ +#define nm_kr_txspace(_k) nm_kr_rxspace(_k) -/* True if no space in the tx ring. only valid after txsync_prologue */ + +/* True if no space in the tx ring, only valid after txsync_prologue */ static inline int nm_kr_txempty(struct netmap_kring *kring) { return kring->rcur == kring->nr_hwtail; } +/* True if no more completed slots in the rx ring, only valid after + * rxsync_prologue */ +#define nm_kr_rxempty(_k) nm_kr_txempty(_k) /* * protect against multiple threads using the same ring. - * also check that the ring has not been stopped. - * We only care for 0 or !=0 as a return code. + * also check that the ring has not been stopped or locked */ -#define NM_KR_BUSY 1 -#define NM_KR_STOPPED 2 +#define NM_KR_BUSY 1 /* some other thread is syncing the ring */ +#define NM_KR_STOPPED 2 /* unbounded stop (ifconfig down or driver unload) */ +#define NM_KR_LOCKED 3 /* bounded, brief stop for mutual exclusion */ +/* release the previously acquired right to use the *sync() methods of the ring */ static __inline void nm_kr_put(struct netmap_kring *kr) { NM_ATOMIC_CLEAR(&kr->nr_busy); } -static __inline int nm_kr_tryget(struct netmap_kring *kr) +/* true if the ifp that backed the adapter has disappeared (e.g., the + * driver has been unloaded) + */ +static inline int nm_iszombie(struct netmap_adapter *na); + +/* try to obtain exclusive right to issue the *sync() operations on the ring. + * The right is obtained and must be later relinquished via nm_kr_put() if and + * only if nm_kr_tryget() returns 0. + * If can_sleep is 1 there are only two other possible outcomes: + * - the function returns NM_KR_BUSY + * - the function returns NM_KR_STOPPED and sets the POLLERR bit in *perr + * (if non-null) + * In both cases the caller will typically skip the ring, possibly collecting + * errors along the way. + * If the calling context does not allow sleeping, the caller must pass 0 in can_sleep. + * In the latter case, the function may also return NM_KR_LOCKED and leave *perr + * untouched: ideally, the caller should try again at a later time. 
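+ *
+ * A minimal usage sketch (the caller code and variable names are
+ * hypothetical, shown only to illustrate the protocol):
+ *
+ *	int err = 0;
+ *
+ *	if (nm_kr_tryget(kring, 1, &err) == 0) {
+ *		kring->nm_sync(kring, 0);	// do the work
+ *		nm_kr_put(kring);		// then release the right
+ *	}
+ *	// on NM_KR_BUSY or NM_KR_STOPPED the ring is simply skipped,
+ *	// and err may have had POLLERR set, as described above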
+ */ +static __inline int nm_kr_tryget(struct netmap_kring *kr, int can_sleep, int *perr) { + int busy = 1, stopped; /* check a first time without taking the lock * to avoid starvation for nm_kr_get() */ - if (unlikely(kr->nkr_stopped)) { - ND("ring %p stopped (%d)", kr, kr->nkr_stopped); - return NM_KR_STOPPED; +retry: + stopped = kr->nkr_stopped; + if (unlikely(stopped)) { + goto stop; } - if (unlikely(NM_ATOMIC_TEST_AND_SET(&kr->nr_busy))) - return NM_KR_BUSY; - /* check a second time with lock held */ - if (unlikely(kr->nkr_stopped)) { - ND("ring %p stopped (%d)", kr, kr->nkr_stopped); + busy = NM_ATOMIC_TEST_AND_SET(&kr->nr_busy); + /* we should not return NM_KR_BUSY if the ring was + * actually stopped, so check another time after + * the barrier provided by the atomic operation + */ + stopped = kr->nkr_stopped; + if (unlikely(stopped)) { + goto stop; + } + + if (unlikely(nm_iszombie(kr->na))) { + stopped = NM_KR_STOPPED; + goto stop; + } + + return unlikely(busy) ? NM_KR_BUSY : 0; + +stop: + if (!busy) nm_kr_put(kr); - return NM_KR_STOPPED; + if (stopped == NM_KR_STOPPED) { +/* if POLLERR is defined we want to use it to simplify netmap_poll(). + * Otherwise, any non-zero value will do. + */ +#ifdef POLLERR +#define NM_POLLERR POLLERR +#else +#define NM_POLLERR 1 +#endif /* POLLERR */ + if (perr) + *perr |= NM_POLLERR; +#undef NM_POLLERR + } else if (can_sleep) { + tsleep(kr, 0, "NM_KR_TRYGET", 4); + goto retry; } - return 0; + return stopped; } -static __inline void nm_kr_get(struct netmap_kring *kr) +/* put the ring in the 'stopped' state and wait for the current user (if any) to + * notice. stopped must be either NM_KR_STOPPED or NM_KR_LOCKED + */ +static __inline void nm_kr_stop(struct netmap_kring *kr, int stopped) { + kr->nkr_stopped = stopped; while (NM_ATOMIC_TEST_AND_SET(&kr->nr_busy)) tsleep(kr, 0, "NM_KR_GET", 4); } +/* restart a ring after a stop */ +static __inline void nm_kr_start(struct netmap_kring *kr) +{ + kr->nkr_stopped = 0; + nm_kr_put(kr); +} + /* * The following functions are used by individual drivers to * support netmap operation. * * netmap_attach() initializes a struct netmap_adapter, allocating the * struct netmap_ring's and the struct selinfo. * * netmap_detach() frees the memory allocated by netmap_attach(). * * netmap_transmit() replaces the if_transmit routine of the interface, * and is used to intercept packets coming from the stack. * * netmap_load_map/netmap_reload_map are helper routines to set/reset * the dmamap for a packet buffer * * netmap_reset() is a helper routine to be called in the hw driver * when reinitializing a ring. It should not be called by * virtual ports (vale, pipes, monitor) */ int netmap_attach(struct netmap_adapter *); void netmap_detach(struct ifnet *); int netmap_transmit(struct ifnet *, struct mbuf *); struct netmap_slot *netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n, u_int new_cur); int netmap_ring_reinit(struct netmap_kring *); +/* Return codes for netmap_*x_irq. */ +enum { + /* Driver should do normal interrupt processing, e.g. because + * the interface is not in netmap mode. */ + NM_IRQ_PASS = 0, + /* Port is in netmap mode, and the interrupt work has been + * completed. The driver does not have to notify netmap + * again before the next interrupt. */ + NM_IRQ_COMPLETED = -1, + /* Port is in netmap mode, but the interrupt work has not been + * completed. The driver has to make sure netmap will be + * notified again soon, even if no more interrupts come (e.g. 
+ * on Linux the driver should not call napi_complete()). */ + NM_IRQ_RESCHED = -2, +}; + /* default functions to handle rx/tx interrupts */ int netmap_rx_irq(struct ifnet *, u_int, u_int *); #define netmap_tx_irq(_n, _q) netmap_rx_irq(_n, _q, NULL) -void netmap_common_irq(struct ifnet *, u_int, u_int *work_done); +int netmap_common_irq(struct netmap_adapter *, u_int, u_int *work_done); #ifdef WITH_VALE /* functions used by external modules to interface with VALE */ #define netmap_vp_to_ifp(_vp) ((_vp)->up.ifp) #define netmap_ifp_to_vp(_ifp) (NA(_ifp)->na_vp) #define netmap_ifp_to_host_vp(_ifp) (NA(_ifp)->na_hostvp) #define netmap_bdg_idx(_vp) ((_vp)->bdg_port) const char *netmap_bdg_name(struct netmap_vp_adapter *); #else /* !WITH_VALE */ #define netmap_vp_to_ifp(_vp) NULL #define netmap_ifp_to_vp(_ifp) NULL #define netmap_ifp_to_host_vp(_ifp) NULL #define netmap_bdg_idx(_vp) -1 #define netmap_bdg_name(_vp) NULL #endif /* WITH_VALE */ static inline int nm_netmap_on(struct netmap_adapter *na) { return na && na->na_flags & NAF_NETMAP_ON; } static inline int nm_native_on(struct netmap_adapter *na) { return nm_netmap_on(na) && (na->na_flags & NAF_NATIVE); } +static inline int +nm_iszombie(struct netmap_adapter *na) +{ + return na == NULL || (na->na_flags & NAF_ZOMBIE); +} + +static inline void +nm_update_hostrings_mode(struct netmap_adapter *na) +{ + /* Process nr_mode and nr_pending_mode for host rings. */ + na->tx_rings[na->num_tx_rings].nr_mode = + na->tx_rings[na->num_tx_rings].nr_pending_mode; + na->rx_rings[na->num_rx_rings].nr_mode = + na->rx_rings[na->num_rx_rings].nr_pending_mode; +} + /* set/clear native flags and if_transmit/netdev_ops */ static inline void nm_set_native_flags(struct netmap_adapter *na) { struct ifnet *ifp = na->ifp; + /* We do the setup for intercepting packets only if we are the + * first user of this adapter. */ + if (na->active_fds > 0) { + return; + } + na->na_flags |= NAF_NETMAP_ON; #ifdef IFCAP_NETMAP /* or FreeBSD ? */ ifp->if_capenable |= IFCAP_NETMAP; #endif -#ifdef __FreeBSD__ +#if defined (__FreeBSD__) na->if_transmit = ifp->if_transmit; ifp->if_transmit = netmap_transmit; +#elif defined (_WIN32) + (void)ifp; /* prevent a warning */ + //XXX_ale can we just comment those? + //na->if_transmit = ifp->if_transmit; + //ifp->if_transmit = netmap_transmit; #else na->if_transmit = (void *)ifp->netdev_ops; ifp->netdev_ops = &((struct netmap_hw_adapter *)na)->nm_ndo; ((struct netmap_hw_adapter *)na)->save_ethtool = ifp->ethtool_ops; ifp->ethtool_ops = &((struct netmap_hw_adapter*)na)->nm_eto; #endif + nm_update_hostrings_mode(na); } - static inline void nm_clear_native_flags(struct netmap_adapter *na) { struct ifnet *ifp = na->ifp; -#ifdef __FreeBSD__ + /* We undo the setup for intercepting packets only if we are the + * last user of this adapter. */ + if (na->active_fds > 0) { + return; + } + + nm_update_hostrings_mode(na); + +#if defined(__FreeBSD__) ifp->if_transmit = na->if_transmit; +#elif defined(_WIN32) + (void)ifp; /* prevent a warning */ + //XXX_ale can we just comment those? + //ifp->if_transmit = na->if_transmit; #else ifp->netdev_ops = (void *)na->if_transmit; ifp->ethtool_ops = ((struct netmap_hw_adapter*)na)->save_ethtool; #endif na->na_flags &= ~NAF_NETMAP_ON; #ifdef IFCAP_NETMAP /* or FreeBSD ? */ ifp->if_capenable &= ~IFCAP_NETMAP; #endif } +/* + * nm_*sync_prologue() functions are used in ioctl/poll and ptnetmap + * kthreads. + * We need the netmap_ring* parameter, because in ptnetmap it is decoupled + * from host kring.
+ * The user-space ring pointers (head/cur/tail) are shared through + * CSB between host and guest. + */ +/* + * validates parameters in the ring/kring, returns a value for head + * If any error, returns ring_size to force a reinit. + */ +uint32_t nm_txsync_prologue(struct netmap_kring *, struct netmap_ring *); + + +/* + * validates parameters in the ring/kring, returns a value for head + * If any error, returns ring_size lim to force a reinit. + */ +uint32_t nm_rxsync_prologue(struct netmap_kring *, struct netmap_ring *); + + /* check/fix address and len in tx rings */ #if 1 /* debug version */ #define NM_CHECK_ADDR_LEN(_na, _a, _l) do { \ if (_a == NETMAP_BUF_BASE(_na) || _l > NETMAP_BUF_SIZE(_na)) { \ RD(5, "bad addr/len ring %d slot %d idx %d len %d", \ kring->ring_id, nm_i, slot->buf_idx, len); \ if (_l > NETMAP_BUF_SIZE(_na)) \ _l = NETMAP_BUF_SIZE(_na); \ } } while (0) #else /* no debug version */ #define NM_CHECK_ADDR_LEN(_na, _a, _l) do { \ if (_l > NETMAP_BUF_SIZE(_na)) \ _l = NETMAP_BUF_SIZE(_na); \ } while (0) #endif /*---------------------------------------------------------------*/ /* * Support routines used by netmap subsystems * (native drivers, VALE, generic, pipes, monitors, ...) */ /* common routine for all functions that create a netmap adapter. It performs * two main tasks: * - if the na points to an ifp, mark the ifp as netmap capable * using na as its native adapter; * - provide defaults for the setup callbacks and the memory allocator */ int netmap_attach_common(struct netmap_adapter *); /* common actions to be performed on netmap adapter destruction */ void netmap_detach_common(struct netmap_adapter *); /* fill priv->np_[tr]xq{first,last} using the ringid and flags information * coming from a struct nmreq */ int netmap_interp_ringid(struct netmap_priv_d *priv, uint16_t ringid, uint32_t flags); /* update the ring parameters (number and size of tx and rx rings). * It calls the nm_config callback, if available. */ int netmap_update_config(struct netmap_adapter *na); /* create and initialize the common fields of the krings array. * using the information that must be already available in the na. * tailroom can be used to request the allocation of additional * tailroom bytes after the krings array. This is used by * netmap_vp_adapter's (i.e., VALE ports) to make room for * leasing-related data structures */ int netmap_krings_create(struct netmap_adapter *na, u_int tailroom); /* deletes the kring array of the adapter. The array must have * been created using netmap_krings_create */ void netmap_krings_delete(struct netmap_adapter *na); +int netmap_hw_krings_create(struct netmap_adapter *na); +void netmap_hw_krings_delete(struct netmap_adapter *na); + /* set the stopped/enabled status of ring * When stopping, they also wait for all current activity on the ring to * terminate. The status change is then notified using the na nm_notify * callback. */ void netmap_set_ring(struct netmap_adapter *, u_int ring_id, enum txrx, int stopped); /* set the stopped/enabled status of all rings of the adapter. 
*/ void netmap_set_all_rings(struct netmap_adapter *, int stopped); -/* convenience wrappers for netmap_set_all_rings, used in drivers */ +/* convenience wrappers for netmap_set_all_rings */ void netmap_disable_all_rings(struct ifnet *); void netmap_enable_all_rings(struct ifnet *); int netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na, uint16_t ringid, uint32_t flags); +void netmap_do_unregif(struct netmap_priv_d *priv); - u_int nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg); -int netmap_get_na(struct nmreq *nmr, struct netmap_adapter **na, int create); +int netmap_get_na(struct nmreq *nmr, struct netmap_adapter **na, + struct ifnet **ifp, int create); +void netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp); int netmap_get_hw_na(struct ifnet *ifp, struct netmap_adapter **na); #ifdef WITH_VALE /* * The following bridge-related functions are used by other * kernel modules. * * VALE only supports unicast or broadcast. The lookup * function can return 0 .. NM_BDG_MAXPORTS-1 for regular ports, * NM_BDG_MAXPORTS for broadcast, NM_BDG_MAXPORTS+1 for unknown. * XXX in practice "unknown" might be handled same as broadcast. */ typedef u_int (*bdg_lookup_fn_t)(struct nm_bdg_fwd *ft, uint8_t *ring_nr, struct netmap_vp_adapter *); typedef int (*bdg_config_fn_t)(struct nm_ifreq *); typedef void (*bdg_dtor_fn_t)(const struct netmap_vp_adapter *); struct netmap_bdg_ops { bdg_lookup_fn_t lookup; bdg_config_fn_t config; bdg_dtor_fn_t dtor; }; u_int netmap_bdg_learning(struct nm_bdg_fwd *ft, uint8_t *dst_ring, struct netmap_vp_adapter *); +#define NM_BRIDGES 8 /* number of bridges */ #define NM_BDG_MAXPORTS 254 /* up to 254 */ #define NM_BDG_BROADCAST NM_BDG_MAXPORTS #define NM_BDG_NOPORT (NM_BDG_MAXPORTS+1) -#define NM_NAME "vale" /* prefix for bridge port name */ - /* these are redefined in case of no VALE support */ int netmap_get_bdg_na(struct nmreq *nmr, struct netmap_adapter **na, int create); struct nm_bridge *netmap_init_bridges2(u_int); void netmap_uninit_bridges2(struct nm_bridge *, u_int); int netmap_init_bridges(void); void netmap_uninit_bridges(void); int netmap_bdg_ctl(struct nmreq *nmr, struct netmap_bdg_ops *bdg_ops); int netmap_bdg_config(struct nmreq *nmr); #else /* !WITH_VALE */ #define netmap_get_bdg_na(_1, _2, _3) 0 #define netmap_init_bridges(_1) 0 #define netmap_uninit_bridges() #define netmap_bdg_ctl(_1, _2) EINVAL #endif /* !WITH_VALE */ #ifdef WITH_PIPES /* max number of pipes per device */ #define NM_MAXPIPES 64 /* XXX how many? */ void netmap_pipe_dealloc(struct netmap_adapter *); int netmap_get_pipe_na(struct nmreq *nmr, struct netmap_adapter **na, int create); #else /* !WITH_PIPES */ #define NM_MAXPIPES 0 #define netmap_pipe_alloc(_1, _2) 0 #define netmap_pipe_dealloc(_1) #define netmap_get_pipe_na(nmr, _2, _3) \ ({ int role__ = (nmr)->nr_flags & NR_REG_MASK; \ (role__ == NR_REG_PIPE_MASTER || \ role__ == NR_REG_PIPE_SLAVE) ? EOPNOTSUPP : 0; }) #endif #ifdef WITH_MONITOR int netmap_get_monitor_na(struct nmreq *nmr, struct netmap_adapter **na, int create); void netmap_monitor_stop(struct netmap_adapter *na); #else #define netmap_get_monitor_na(nmr, _2, _3) \ ((nmr)->nr_flags & (NR_MONITOR_TX | NR_MONITOR_RX) ? 
EOPNOTSUPP : 0) #endif #ifdef CONFIG_NET_NS struct net *netmap_bns_get(void); void netmap_bns_put(struct net *); void netmap_bns_getbridges(struct nm_bridge **, u_int *); #else #define netmap_bns_get() #define netmap_bns_put(_1) #define netmap_bns_getbridges(b, n) \ do { *b = nm_bridges; *n = NM_BRIDGES; } while (0) #endif /* Various prototypes */ -int netmap_poll(struct cdev *dev, int events, struct thread *td); +int netmap_poll(struct netmap_priv_d *, int events, NM_SELRECORD_T *td); int netmap_init(void); void netmap_fini(void); int netmap_get_memory(struct netmap_priv_d* p); void netmap_dtor(void *data); -int netmap_dtor_locked(struct netmap_priv_d *priv); -int netmap_ioctl(struct cdev *dev, u_long cmd, caddr_t data, int fflag, struct thread *td); +int netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data, struct thread *); /* netmap_adapter creation/destruction */ // #define NM_DEBUG_PUTGET 1 #ifdef NM_DEBUG_PUTGET #define NM_DBG(f) __##f void __netmap_adapter_get(struct netmap_adapter *na); #define netmap_adapter_get(na) \ do { \ struct netmap_adapter *__na = na; \ D("getting %p:%s (%d)", __na, (__na)->name, (__na)->na_refcount); \ __netmap_adapter_get(__na); \ } while (0) int __netmap_adapter_put(struct netmap_adapter *na); #define netmap_adapter_put(na) \ ({ \ struct netmap_adapter *__na = na; \ D("putting %p:%s (%d)", __na, (__na)->name, (__na)->na_refcount); \ __netmap_adapter_put(__na); \ }) #else /* !NM_DEBUG_PUTGET */ #define NM_DBG(f) f void netmap_adapter_get(struct netmap_adapter *na); int netmap_adapter_put(struct netmap_adapter *na); #endif /* !NM_DEBUG_PUTGET */ /* * module variables */ -#define NETMAP_BUF_BASE(na) ((na)->na_lut.lut[0].vaddr) -#define NETMAP_BUF_SIZE(na) ((na)->na_lut.objsize) +#define NETMAP_BUF_BASE(_na) ((_na)->na_lut.lut[0].vaddr) +#define NETMAP_BUF_SIZE(_na) ((_na)->na_lut.objsize) extern int netmap_mitigate; // XXX not really used extern int netmap_no_pendintr; extern int netmap_verbose; // XXX debugging enum { /* verbose flags */ NM_VERB_ON = 1, /* generic verbose */ NM_VERB_HOST = 0x2, /* verbose host stack */ NM_VERB_RXSYNC = 0x10, /* verbose on rxsync/txsync */ NM_VERB_TXSYNC = 0x20, NM_VERB_RXINTR = 0x100, /* verbose on rx/tx intr (driver) */ NM_VERB_TXINTR = 0x200, NM_VERB_NIC_RXSYNC = 0x1000, /* verbose on rx/tx intr (driver) */ NM_VERB_NIC_TXSYNC = 0x2000, }; extern int netmap_txsync_retry; +extern int netmap_adaptive_io; +extern int netmap_flags; extern int netmap_generic_mit; extern int netmap_generic_ringsize; extern int netmap_generic_rings; -extern int netmap_use_count; +extern int netmap_generic_txqdisc; /* * NA returns a pointer to the struct netmap adapter from the ifp, * WNA is used to write it. */ #define NA(_ifp) ((struct netmap_adapter *)WNA(_ifp)) /* - * Macros to determine if an interface is netmap capable or netmap enabled. - * See the magic field in struct netmap_adapter. - */ -#ifdef __FreeBSD__ -/* - * on FreeBSD just use if_capabilities and if_capenable. - */ -#define NETMAP_CAPABLE(ifp) (NA(ifp) && \ - (ifp)->if_capabilities & IFCAP_NETMAP ) - -#define NETMAP_SET_CAPABLE(ifp) \ - (ifp)->if_capabilities |= IFCAP_NETMAP - -#else /* linux */ - -/* - * on linux: - * we check if NA(ifp) is set and its first element has a related + * On old versions of FreeBSD, NA(ifp) is a pspare. On linux we + * overload another pointer in the netdev. + * + * We check if NA(ifp) is set and its first element has a related * magic value. The capenable is within the struct netmap_adapter. 
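 *
 * The resulting invariant, spelled out (this restates the macros
 * below, it is not additional API): after NM_ATTACH_NA(ifp, na),
 *
 *	((uint32_t)(uintptr_t)NA(ifp) ^ NA(ifp)->magic) == NETMAP_MAGIC
 *
 * holds, so a stale or unrelated pointer stored in the same field
 * fails NM_NA_VALID() with high probability.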
*/ #define NETMAP_MAGIC 0x52697a7a -#define NETMAP_CAPABLE(ifp) (NA(ifp) && \ +#define NM_NA_VALID(ifp) (NA(ifp) && \ ((uint32_t)(uintptr_t)NA(ifp) ^ NA(ifp)->magic) == NETMAP_MAGIC ) -#define NETMAP_SET_CAPABLE(ifp) \ - NA(ifp)->magic = ((uint32_t)(uintptr_t)NA(ifp)) ^ NETMAP_MAGIC +#define NM_ATTACH_NA(ifp, na) do { \ + WNA(ifp) = na; \ + if (NA(ifp)) \ + NA(ifp)->magic = \ + ((uint32_t)(uintptr_t)NA(ifp)) ^ NETMAP_MAGIC; \ +} while(0) -#endif /* linux */ +#define NM_IS_NATIVE(ifp) (NM_NA_VALID(ifp) && NA(ifp)->nm_dtor == netmap_hw_dtor) -#ifdef __FreeBSD__ +#if defined(__FreeBSD__) /* Assigns the device IOMMU domain to an allocator. * Returns -ENOMEM in case the domain is different */ #define nm_iommu_group_id(dev) (0) /* Callback invoked by the dma machinery after a successful dmamap_load */ static void netmap_dmamap_cb(__unused void *arg, __unused bus_dma_segment_t * segs, __unused int nseg, __unused int error) { } /* bus_dmamap_load wrapper: call aforementioned function if map != NULL. * XXX can we do it without a callback ? */ static inline void netmap_load_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf) { if (map) bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE(na), netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT); } static inline void netmap_unload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map) { if (map) bus_dmamap_unload(tag, map); } /* update the map when a buffer changes. */ static inline void netmap_reload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf) { if (map) { bus_dmamap_unload(tag, map); bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE(na), netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT); } } +#elif defined(_WIN32) + #else /* linux */ int nm_iommu_group_id(bus_dma_tag_t dev); #include static inline void netmap_load_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf) { if (0 && map) { - *map = dma_map_single(na->pdev, buf, na->na_lut.objsize, - DMA_BIDIRECTIONAL); + *map = dma_map_single(na->pdev, buf, NETMAP_BUF_SIZE(na), + DMA_BIDIRECTIONAL); } } static inline void netmap_unload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map) { - u_int sz = na->na_lut.objsize; + u_int sz = NETMAP_BUF_SIZE(na); if (*map) { dma_unmap_single(na->pdev, *map, sz, - DMA_BIDIRECTIONAL); + DMA_BIDIRECTIONAL); } } static inline void netmap_reload_map(struct netmap_adapter *na, bus_dma_tag_t tag, bus_dmamap_t map, void *buf) { - u_int sz = na->na_lut.objsize; + u_int sz = NETMAP_BUF_SIZE(na); if (*map) { dma_unmap_single(na->pdev, *map, sz, DMA_BIDIRECTIONAL); } *map = dma_map_single(na->pdev, buf, sz, DMA_BIDIRECTIONAL); } /* * XXX How do we redefine these functions: * * on linux we need * dma_map_single(&pdev->dev, virt_addr, len, direction) * dma_unmap_single(&adapter->pdev->dev, phys_addr, len, direction * The len can be implicit (on netmap it is NETMAP_BUF_SIZE) * unfortunately the direction is not, so we need to change * something to have a cross API */ #if 0 struct e1000_buffer *buffer_info = &tx_ring->buffer_info[l]; /* set time_stamp *before* dma to help avoid a possible race */ buffer_info->time_stamp = jiffies; buffer_info->mapped_as_page = false; buffer_info->length = len; //buffer_info->next_to_watch = l; /* reload dma map */ dma_unmap_single(&adapter->pdev->dev, buffer_info->dma, NETMAP_BUF_SIZE, DMA_TO_DEVICE); buffer_info->dma = dma_map_single(&adapter->pdev->dev, addr, NETMAP_BUF_SIZE, DMA_TO_DEVICE); if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) { 
D("dma mapping error"); /* goto dma_error; See e1000_put_txbuf() */ /* XXX reset */ } tx_desc->buffer_addr = htole64(buffer_info->dma); //XXX #endif /* * The bus_dmamap_sync() can be one of wmb() or rmb() depending on direction. */ #define bus_dmamap_sync(_a, _b, _c) #endif /* linux */ /* * functions to map NIC to KRING indexes (n2k) and vice versa (k2n) */ static inline int netmap_idx_n2k(struct netmap_kring *kr, int idx) { int n = kr->nkr_num_slots; idx += kr->nkr_hwofs; if (idx < 0) return idx + n; else if (idx < n) return idx; else return idx - n; } static inline int netmap_idx_k2n(struct netmap_kring *kr, int idx) { int n = kr->nkr_num_slots; idx -= kr->nkr_hwofs; if (idx < 0) return idx + n; else if (idx < n) return idx; else return idx - n; } /* Entries of the look-up table. */ struct lut_entry { void *vaddr; /* virtual address. */ vm_paddr_t paddr; /* physical address. */ }; struct netmap_obj_pool; /* * NMB return the virtual address of a buffer (buffer 0 on bad index) * PNMB also fills the physical address */ static inline void * NMB(struct netmap_adapter *na, struct netmap_slot *slot) { struct lut_entry *lut = na->na_lut.lut; uint32_t i = slot->buf_idx; return (unlikely(i >= na->na_lut.objtotal)) ? lut[0].vaddr : lut[i].vaddr; } static inline void * PNMB(struct netmap_adapter *na, struct netmap_slot *slot, uint64_t *pp) { uint32_t i = slot->buf_idx; struct lut_entry *lut = na->na_lut.lut; void *ret = (i >= na->na_lut.objtotal) ? lut[0].vaddr : lut[i].vaddr; +#ifndef _WIN32 *pp = (i >= na->na_lut.objtotal) ? lut[0].paddr : lut[i].paddr; +#else + *pp = (i >= na->na_lut.objtotal) ? (uint64_t)lut[0].paddr.QuadPart : (uint64_t)lut[i].paddr.QuadPart; +#endif return ret; } /* * Structure associated to each netmap file descriptor. * It is created on open and left unbound (np_nifp == NULL). * A successful NIOCREGIF will set np_nifp and the first few fields; * this is protected by a global lock (NMG_LOCK) due to low contention. * * np_refs counts the number of references to the structure: one for the fd, * plus (on FreeBSD) one for each active mmap which we track ourselves * (linux automatically tracks them, but FreeBSD does not). * np_refs is protected by NMG_LOCK. * * Read access to the structure is lock free, because ni_nifp once set * can only go to 0 when nobody is using the entry anymore. Readers * must check that np_nifp != NULL before using the other fields. */ struct netmap_priv_d { struct netmap_if * volatile np_nifp; /* netmap if descriptor. */ struct netmap_adapter *np_na; + struct ifnet *np_ifp; uint32_t np_flags; /* from the ioctl */ - u_int np_qfirst[NR_TXRX], + u_int np_qfirst[NR_TXRX], np_qlast[NR_TXRX]; /* range of tx/rx rings to scan */ uint16_t np_txpoll; /* XXX and also np_rxpoll ? */ int np_refs; /* use with NMG_LOCK held */ /* pointers to the selinfo to be used for selrecord. * Either the local or the global one depending on the * number of rings. 
*/ NM_SELINFO_T *np_si[NR_TXRX]; struct thread *np_td; /* kqueue, just debugging */ }; +struct netmap_priv_d *netmap_priv_new(void); +void netmap_priv_delete(struct netmap_priv_d *); + +static inline int nm_kring_pending(struct netmap_priv_d *np) +{ + struct netmap_adapter *na = np->np_na; + enum txrx t; + int i; + + for_rx_tx(t) { + for (i = np->np_qfirst[t]; i < np->np_qlast[t]; i++) { + struct netmap_kring *kring = &NMR(na, t)[i]; + if (kring->nr_mode != kring->nr_pending_mode) { + return 1; + } + } + } + return 0; +} + #ifdef WITH_MONITOR struct netmap_monitor_adapter { struct netmap_adapter up; struct netmap_priv_d priv; uint32_t flags; }; #endif /* WITH_MONITOR */ #ifdef WITH_GENERIC /* * generic netmap emulation for devices that do not have * native netmap support. */ int generic_netmap_attach(struct ifnet *ifp); +int generic_rx_handler(struct ifnet *ifp, struct mbuf *m); -int netmap_catch_rx(struct netmap_generic_adapter *na, int intercept); -void generic_rx_handler(struct ifnet *ifp, struct mbuf *m);; -void netmap_catch_tx(struct netmap_generic_adapter *na, int enable); -int generic_xmit_frame(struct ifnet *ifp, struct mbuf *m, void *addr, u_int len, u_int ring_nr); -int generic_find_num_desc(struct ifnet *ifp, u_int *tx, u_int *rx); -void generic_find_num_queues(struct ifnet *ifp, u_int *txq, u_int *rxq); +int nm_os_catch_rx(struct netmap_generic_adapter *gna, int intercept); +int nm_os_catch_tx(struct netmap_generic_adapter *gna, int intercept); + +/* + * the generic transmit routine is passed a structure to optionally + * build a queue of descriptors, in an OS-specific way. + * The payload is at addr, if non-null, and the routine should send or queue + * the packet, returning 0 if successful, 1 on failure. + * + * At the end, if head is non-null, there will be an additional call + * to the function with addr = NULL; this should tell the OS-specific + * routine to send the queue and free any resources. Failure is ignored. + */ +struct nm_os_gen_arg { + struct ifnet *ifp; + void *m; /* os-specific mbuf-like object */ + void *head, *tail; /* tailq, if the OS-specific routine needs to build one */ + void *addr; /* payload of current packet */ + u_int len; /* packet length */ + u_int ring_nr; /* ring number */ + u_int qevent; /* in txqdisc mode, place an event on this mbuf */ +}; + +int nm_os_generic_xmit_frame(struct nm_os_gen_arg *); +int nm_os_generic_find_num_desc(struct ifnet *ifp, u_int *tx, u_int *rx); +void nm_os_generic_find_num_queues(struct ifnet *ifp, u_int *txq, u_int *rxq); +void nm_os_generic_set_features(struct netmap_generic_adapter *gna); + static inline struct ifnet* netmap_generic_getifp(struct netmap_generic_adapter *gna) { if (gna->prev) return gna->prev->ifp; return gna->up.up.ifp; } +void netmap_generic_irq(struct netmap_adapter *na, u_int q, u_int *work_done); + //#define RATE_GENERIC /* Enables communication statistics for generic. */ #ifdef RATE_GENERIC void generic_rate(int txp, int txs, int txi, int rxp, int rxs, int rxi); #else #define generic_rate(txp, txs, txi, rxp, rxs, rxi) #endif /* * netmap_mitigation API. This is used by the generic adapter * to reduce the number of interrupt requests/selwakeup * to clients on incoming packets.
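 *
 * A sketch of the intended call pattern on packet arrival (driver-side
 * pseudo-code assuming one mitigation state per RX ring; not copied
 * from the generic code itself):
 *
 *	if (!nm_os_mitigation_active(&gna->mit[q])) {
 *		netmap_generic_irq(na, q, &work_done);
 *		nm_os_mitigation_start(&gna->mit[q]);
 *	} else {
 *		gna->mit[q].mit_pending = 1;
 *	}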
*/ -void netmap_mitigation_init(struct nm_generic_mit *mit, int idx, +void nm_os_mitigation_init(struct nm_generic_mit *mit, int idx, struct netmap_adapter *na); -void netmap_mitigation_start(struct nm_generic_mit *mit); -void netmap_mitigation_restart(struct nm_generic_mit *mit); -int netmap_mitigation_active(struct nm_generic_mit *mit); -void netmap_mitigation_cleanup(struct nm_generic_mit *mit); +void nm_os_mitigation_start(struct nm_generic_mit *mit); +void nm_os_mitigation_restart(struct nm_generic_mit *mit); +int nm_os_mitigation_active(struct nm_generic_mit *mit); +void nm_os_mitigation_cleanup(struct nm_generic_mit *mit); +#else /* !WITH_GENERIC */ +#define generic_netmap_attach(ifp) (EOPNOTSUPP) #endif /* WITH_GENERIC */ - - /* Shared declarations for the VALE switch. */ /* * Each transmit queue accumulates a batch of packets into * a structure before forwarding. Packets to the same * destination are put in a list using ft_next as a link field. * ft_frags and ft_next are valid only on the first fragment. */ struct nm_bdg_fwd { /* forwarding entry for a bridge */ void *ft_buf; /* netmap or indirect buffer */ uint8_t ft_frags; /* how many fragments (only on 1st frag) */ uint8_t _ft_port; /* dst port (unused) */ uint16_t ft_flags; /* flags, e.g. indirect */ uint16_t ft_len; /* src fragment len */ uint16_t ft_next; /* next packet to same destination */ }; /* struct 'virtio_net_hdr' from linux. */ struct nm_vnet_hdr { #define VIRTIO_NET_HDR_F_NEEDS_CSUM 1 /* Use csum_start, csum_offset */ #define VIRTIO_NET_HDR_F_DATA_VALID 2 /* Csum is valid */ uint8_t flags; #define VIRTIO_NET_HDR_GSO_NONE 0 /* Not a GSO frame */ #define VIRTIO_NET_HDR_GSO_TCPV4 1 /* GSO frame, IPv4 TCP (TSO) */ #define VIRTIO_NET_HDR_GSO_UDP 3 /* GSO frame, IPv4 UDP (UFO) */ #define VIRTIO_NET_HDR_GSO_TCPV6 4 /* GSO frame, IPv6 TCP */ #define VIRTIO_NET_HDR_GSO_ECN 0x80 /* TCP has ECN set */ uint8_t gso_type; uint16_t hdr_len; uint16_t gso_size; uint16_t csum_start; uint16_t csum_offset; }; #define WORST_CASE_GSO_HEADER (14+40+60) /* IPv6 + TCP */ /* Private definitions for IPv4, IPv6, UDP and TCP headers. */ struct nm_iphdr { uint8_t version_ihl; uint8_t tos; uint16_t tot_len; uint16_t id; uint16_t frag_off; uint8_t ttl; uint8_t protocol; uint16_t check; uint32_t saddr; uint32_t daddr; /*The options start here. */ }; struct nm_tcphdr { uint16_t source; uint16_t dest; uint32_t seq; uint32_t ack_seq; uint8_t doff; /* Data offset + Reserved */ uint8_t flags; uint16_t window; uint16_t check; uint16_t urg_ptr; }; struct nm_udphdr { uint16_t source; uint16_t dest; uint16_t len; uint16_t check; }; struct nm_ipv6hdr { uint8_t priority_version; uint8_t flow_lbl[3]; uint16_t payload_len; uint8_t nexthdr; uint8_t hop_limit; uint8_t saddr[16]; uint8_t daddr[16]; }; /* Type used to store a checksum (in host byte order) that hasn't been * folded yet. 
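 *
 * Intended usage, as a sketch (the buffer and length variables are
 * hypothetical):
 *
 *	rawsum_t sum = nm_os_csum_raw(buf, len, 0);	// accumulate
 *	uint16_t csum = nm_os_csum_fold(sum);		// fold at the end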
*/ #define rawsum_t uint32_t -rawsum_t nm_csum_raw(uint8_t *data, size_t len, rawsum_t cur_sum); -uint16_t nm_csum_ipv4(struct nm_iphdr *iph); -void nm_csum_tcpudp_ipv4(struct nm_iphdr *iph, void *data, +rawsum_t nm_os_csum_raw(uint8_t *data, size_t len, rawsum_t cur_sum); +uint16_t nm_os_csum_ipv4(struct nm_iphdr *iph); +void nm_os_csum_tcpudp_ipv4(struct nm_iphdr *iph, void *data, size_t datalen, uint16_t *check); -void nm_csum_tcpudp_ipv6(struct nm_ipv6hdr *ip6h, void *data, +void nm_os_csum_tcpudp_ipv6(struct nm_ipv6hdr *ip6h, void *data, size_t datalen, uint16_t *check); -uint16_t nm_csum_fold(rawsum_t cur_sum); +uint16_t nm_os_csum_fold(rawsum_t cur_sum); void bdg_mismatch_datapath(struct netmap_vp_adapter *na, struct netmap_vp_adapter *dst_na, - struct nm_bdg_fwd *ft_p, struct netmap_ring *ring, + const struct nm_bdg_fwd *ft_p, + struct netmap_ring *dst_ring, u_int *j, u_int lim, u_int *howmany); /* persistent virtual port routines */ -int nm_vi_persist(const char *, struct ifnet **); -void nm_vi_detach(struct ifnet *); -void nm_vi_init_index(void); +int nm_os_vi_persist(const char *, struct ifnet **); +void nm_os_vi_detach(struct ifnet *); +void nm_os_vi_init_index(void); + +/* + * kernel thread routines + */ +struct nm_kthread; /* OS-specific kthread - opaque */ +typedef void (*nm_kthread_worker_fn_t)(void *data); + +/* kthread configuration */ +struct nm_kthread_cfg { + long type; /* kthread type/identifier */ + struct ptnet_ring_cfg event; /* event/ioctl fd */ + nm_kthread_worker_fn_t worker_fn; /* worker function */ + void *worker_private; /* worker parameter */ + int attach_user; /* attach kthread to user process */ +}; +/* kthread lifecycle routines */ +struct nm_kthread *nm_os_kthread_create(struct nm_kthread_cfg *cfg); +int nm_os_kthread_start(struct nm_kthread *); +void nm_os_kthread_stop(struct nm_kthread *); +void nm_os_kthread_delete(struct nm_kthread *); +void nm_os_kthread_wakeup_worker(struct nm_kthread *nmk); +void nm_os_kthread_send_irq(struct nm_kthread *); +void nm_os_kthread_set_affinity(struct nm_kthread *, int); +u_int nm_os_ncpus(void); + +#ifdef WITH_PTNETMAP_HOST +/* + * netmap adapter for host ptnetmap ports + */ +struct netmap_pt_host_adapter { + struct netmap_adapter up; + + struct netmap_adapter *parent; + int (*parent_nm_notify)(struct netmap_kring *kring, int flags); + void *ptns; +}; +/* ptnetmap HOST routines */ +int netmap_get_pt_host_na(struct nmreq *nmr, struct netmap_adapter **na, int create); +int ptnetmap_ctl(struct nmreq *nmr, struct netmap_adapter *na); +static inline int +nm_ptnetmap_host_on(struct netmap_adapter *na) +{ + return na && na->na_flags & NAF_PTNETMAP_HOST; +} +#else /* !WITH_PTNETMAP_HOST */ +#define netmap_get_pt_host_na(nmr, _2, _3) \ + ((nmr)->nr_flags & (NR_PTNETMAP_HOST) ? EOPNOTSUPP : 0) +#define ptnetmap_ctl(_1, _2) EINVAL +#define nm_ptnetmap_host_on(_1) EINVAL +#endif /* !WITH_PTNETMAP_HOST */ + +#ifdef WITH_PTNETMAP_GUEST +/* ptnetmap GUEST routines */ + +typedef uint32_t (*nm_pt_guest_ptctl_t)(struct ifnet *, uint32_t); + +/* + * netmap adapter for guest ptnetmap ports + */ +struct netmap_pt_guest_adapter { + /* The netmap adapter to be used by netmap applications. + * This field must be the first, to allow upcast. */ + struct netmap_hw_adapter hwup; + + /* The netmap adapter to be used by the driver. */ + struct netmap_hw_adapter dr; + + void *csb; + + /* Reference counter to track users of backend netmap port: the + * network stack and netmap clients.
+ * Used to decide when we need to (de)allocate krings/rings and + * start (stop) ptnetmap kthreads. */ + int backend_regifs; + +}; + +int netmap_pt_guest_attach(struct netmap_adapter *, void *, + unsigned int, nm_pt_guest_ptctl_t); +struct ptnet_ring; +bool netmap_pt_guest_txsync(struct ptnet_ring *ptring, struct netmap_kring *kring, + int flags); +bool netmap_pt_guest_rxsync(struct ptnet_ring *ptring, struct netmap_kring *kring, + int flags); +int ptnet_nm_krings_create(struct netmap_adapter *na); +void ptnet_nm_krings_delete(struct netmap_adapter *na); +void ptnet_nm_dtor(struct netmap_adapter *na); +#endif /* WITH_PTNETMAP_GUEST */ #endif /* _NET_NETMAP_KERN_H_ */ Index: head/sys/dev/netmap/netmap_mbq.c =================================================================== --- head/sys/dev/netmap/netmap_mbq.c (revision 307393) +++ head/sys/dev/netmap/netmap_mbq.c (revision 307394) @@ -1,163 +1,166 @@ /* - * Copyright (C) 2013-2014 Vincenzo Maffione. All rights reserved. + * Copyright (C) 2013-2014 Vincenzo Maffione + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE.
*/ /* * $FreeBSD$ */ #ifdef linux #include "bsd_glue.h" +#elif defined (_WIN32) +#include "win_glue.h" #else /* __FreeBSD__ */ #include #include #include #include #include #endif /* __FreeBSD__ */ #include "netmap_mbq.h" static inline void __mbq_init(struct mbq *q) { q->head = q->tail = NULL; q->count = 0; } void mbq_safe_init(struct mbq *q) { mtx_init(&q->lock, "mbq", NULL, MTX_SPIN); __mbq_init(q); } void mbq_init(struct mbq *q) { __mbq_init(q); } static inline void __mbq_enqueue(struct mbq *q, struct mbuf *m) { m->m_nextpkt = NULL; if (q->tail) { q->tail->m_nextpkt = m; q->tail = m; } else { q->head = q->tail = m; } q->count++; } void mbq_safe_enqueue(struct mbq *q, struct mbuf *m) { mbq_lock(q); __mbq_enqueue(q, m); mbq_unlock(q); } void mbq_enqueue(struct mbq *q, struct mbuf *m) { __mbq_enqueue(q, m); } static inline struct mbuf *__mbq_dequeue(struct mbq *q) { struct mbuf *ret = NULL; if (q->head) { ret = q->head; q->head = ret->m_nextpkt; if (q->head == NULL) { q->tail = NULL; } q->count--; ret->m_nextpkt = NULL; } return ret; } struct mbuf *mbq_safe_dequeue(struct mbq *q) { struct mbuf *ret; mbq_lock(q); ret = __mbq_dequeue(q); mbq_unlock(q); return ret; } struct mbuf *mbq_dequeue(struct mbq *q) { return __mbq_dequeue(q); } /* XXX seems pointless to have a generic purge */ static void __mbq_purge(struct mbq *q, int safe) { struct mbuf *m; for (;;) { m = safe ? mbq_safe_dequeue(q) : mbq_dequeue(q); if (m) { m_freem(m); } else { break; } } } void mbq_purge(struct mbq *q) { __mbq_purge(q, 0); } void mbq_safe_purge(struct mbq *q) { __mbq_purge(q, 1); } -void mbq_safe_destroy(struct mbq *q) +void mbq_safe_fini(struct mbq *q) { mtx_destroy(&q->lock); } -void mbq_destroy(struct mbq *q) +void mbq_fini(struct mbq *q) { } Index: head/sys/dev/netmap/netmap_mbq.h =================================================================== --- head/sys/dev/netmap/netmap_mbq.h (revision 307393) +++ head/sys/dev/netmap/netmap_mbq.h (revision 307394) @@ -1,89 +1,97 @@ /* - * Copyright (C) 2013-2014 Vincenzo Maffione. All rights reserved. + * Copyright (C) 2013-2014 Vincenzo Maffione + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ */ #ifndef __NETMAP_MBQ_H__ #define __NETMAP_MBQ_H__ /* * These functions implement an mbuf tailq with an optional lock.
* The base functions act ONLY ON THE QUEUE, whereas the "safe" * variants (mbq_safe_*) also handle the lock. */ /* XXX probably rely on a previous definition of SPINLOCK_T */ #ifdef linux #define SPINLOCK_T safe_spinlock_t +#elif defined (_WIN32) +#define SPINLOCK_T win_spinlock_t #else #define SPINLOCK_T struct mtx #endif /* A FIFO queue of mbufs with an optional lock. */ struct mbq { struct mbuf *head; struct mbuf *tail; int count; SPINLOCK_T lock; }; -/* XXX "destroy" does not match "init" as a name. - * We should also clarify whether init can be used while - * holding a lock, and whether mbq_safe_destroy() is a NOP. */ +/* We should clarify whether init can be used while + * holding a lock, and whether mbq_safe_fini() is a NOP. */ void mbq_init(struct mbq *q); -void mbq_destroy(struct mbq *q); +void mbq_fini(struct mbq *q); void mbq_enqueue(struct mbq *q, struct mbuf *m); struct mbuf *mbq_dequeue(struct mbq *q); void mbq_purge(struct mbq *q); +static inline struct mbuf * +mbq_peek(struct mbq *q) +{ + return q->head; +} + static inline void mbq_lock(struct mbq *q) { mtx_lock_spin(&q->lock); } static inline void mbq_unlock(struct mbq *q) { mtx_unlock_spin(&q->lock); } void mbq_safe_init(struct mbq *q); -void mbq_safe_destroy(struct mbq *q); +void mbq_safe_fini(struct mbq *q); void mbq_safe_enqueue(struct mbq *q, struct mbuf *m); struct mbuf *mbq_safe_dequeue(struct mbq *q); void mbq_safe_purge(struct mbq *q); static inline unsigned int mbq_len(struct mbq *q) { return q->count; } #endif /* __NETMAP_MBQ_H__ */ Index: head/sys/dev/netmap/netmap_mem2.c =================================================================== --- head/sys/dev/netmap/netmap_mem2.c (revision 307393) +++ head/sys/dev/netmap/netmap_mem2.c (revision 307394) @@ -1,1638 +1,2458 @@ /* - * Copyright (C) 2012-2014 Matteo Landi, Luigi Rizzo, Giuseppe Lettieri. All rights reserved. + * Copyright (C) 2012-2014 Matteo Landi + * Copyright (C) 2012-2016 Luigi Rizzo + * Copyright (C) 2012-2016 Giuseppe Lettieri + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE.
*/ #ifdef linux #include "bsd_glue.h" #endif /* linux */ #ifdef __APPLE__ #include "osx_glue.h" #endif /* __APPLE__ */ #ifdef __FreeBSD__ #include /* prerequisite */ __FBSDID("$FreeBSD$"); #include #include +#include /* MALLOC_DEFINE */ #include #include /* vtophys */ #include /* vtophys */ #include /* sockaddrs */ #include #include #include #include #include #include /* bus_dmamap_* */ +/* M_NETMAP only used in here */ +MALLOC_DECLARE(M_NETMAP); +MALLOC_DEFINE(M_NETMAP, "netmap", "Network memory map"); + #endif /* __FreeBSD__ */ +#ifdef _WIN32 +#include +#endif + #include #include +#include #include "netmap_mem2.h" -#define NETMAP_BUF_MAX_NUM 20*4096*2 /* large machine */ +#ifdef _WIN32_USE_SMALL_GENERIC_DEVICES_MEMORY +#define NETMAP_BUF_MAX_NUM 8*4096 /* if too big takes too much time to allocate */ +#else +#define NETMAP_BUF_MAX_NUM 20*4096*2 /* large machine */ +#endif #define NETMAP_POOL_MAX_NAMSZ 32 enum { NETMAP_IF_POOL = 0, NETMAP_RING_POOL, NETMAP_BUF_POOL, NETMAP_POOLS_NR }; struct netmap_obj_params { u_int size; u_int num; }; struct netmap_obj_pool { char name[NETMAP_POOL_MAX_NAMSZ]; /* name of the allocator */ /* ---------------------------------------------------*/ /* these are only meaningful if the pool is finalized */ /* (see 'finalized' field in netmap_mem_d) */ u_int objtotal; /* actual total number of objects. */ u_int memtotal; /* actual total memory space */ u_int numclusters; /* actual number of clusters */ u_int objfree; /* number of free objects. */ struct lut_entry *lut; /* virt,phys addresses, objtotal entries */ uint32_t *bitmap; /* one bit per buffer, 1 means free */ uint32_t bitmap_slots; /* number of uint32 entries in bitmap */ /* ---------------------------------------------------*/ /* limits */ u_int objminsize; /* minimum object size */ u_int objmaxsize; /* maximum object size */ u_int nummin; /* minimum number of objects */ u_int nummax; /* maximum number of objects */ /* these are changed only by config */ u_int _objtotal; /* total number of objects */ u_int _objsize; /* object size */ u_int _clustsize; /* cluster size */ u_int _clustentries; /* objects per cluster */ u_int _numclusters; /* number of clusters */ /* requested values */ u_int r_objtotal; u_int r_objsize; }; #define NMA_LOCK_T NM_MTX_T struct netmap_mem_ops { - void (*nmd_get_lut)(struct netmap_mem_d *, struct netmap_lut*); + int (*nmd_get_lut)(struct netmap_mem_d *, struct netmap_lut*); int (*nmd_get_info)(struct netmap_mem_d *, u_int *size, u_int *memflags, uint16_t *id); vm_paddr_t (*nmd_ofstophys)(struct netmap_mem_d *, vm_ooffset_t); int (*nmd_config)(struct netmap_mem_d *); int (*nmd_finalize)(struct netmap_mem_d *); void (*nmd_deref)(struct netmap_mem_d *); ssize_t (*nmd_if_offset)(struct netmap_mem_d *, const void *vaddr); void (*nmd_delete)(struct netmap_mem_d *); struct netmap_if * (*nmd_if_new)(struct netmap_adapter *); void (*nmd_if_delete)(struct netmap_adapter *, struct netmap_if *); int (*nmd_rings_create)(struct netmap_adapter *); void (*nmd_rings_delete)(struct netmap_adapter *); }; typedef uint16_t nm_memid_t; +/* + * Shared info for netmap allocator + * + * Each allocator contains this structure as its first netmap_if. + * In this way, we can share the same details about the allocator + * with the VM. + * Used in ptnetmap.
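+ * + * Guest-side sketch (illustration only; the actual consumer is + * netmap_mem_pt_guest_read_shared_info() below): with the host memory mapped into the guest at 'base' through the ptnetmap PCI BAR, + * + * struct netmap_mem_shared_info *nsi = (void *)base; + * if (!strncmp(nsi->up.ni_name, NMS_NAME, sizeof(NMS_NAME))) + * ... buffers start at base + nsi->buf_pool_offset ...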
+ */ +struct netmap_mem_shared_info { +#ifndef _WIN32 + struct netmap_if up; /* ends with a 0-sized array, which VSC does not like */ +#else /* !_WIN32 */ + char up[sizeof(struct netmap_if)]; +#endif /* !_WIN32 */ + uint64_t features; +#define NMS_FEAT_BUF_POOL 0x0001 +#define NMS_FEAT_MEMSIZE 0x0002 + + uint32_t buf_pool_offset; + uint32_t buf_pool_objtotal; + uint32_t buf_pool_objsize; + uint32_t totalsize; +}; + +#define NMS_NAME "nms_info" +#define NMS_VERSION 1 +static const struct netmap_if nms_if_blueprint = { + .ni_name = NMS_NAME, + .ni_version = NMS_VERSION, + .ni_tx_rings = 0, + .ni_rx_rings = 0 +}; + struct netmap_mem_d { NMA_LOCK_T nm_mtx; /* protect the allocator */ u_int nm_totalsize; /* shorthand */ u_int flags; #define NETMAP_MEM_FINALIZED 0x1 /* preallocation done */ int lasterr; /* last error for curr config */ int active; /* active users */ int refcount; /* the three allocators */ struct netmap_obj_pool pools[NETMAP_POOLS_NR]; nm_memid_t nm_id; /* allocator identifier */ int nm_grp; /* iommu group id */ /* list of all existing allocators, sorted by nm_id */ struct netmap_mem_d *prev, *next; struct netmap_mem_ops *ops; }; +/* + * XXX need to fix the case of t0 == void + */ #define NMD_DEFCB(t0, name) \ t0 \ netmap_mem_##name(struct netmap_mem_d *nmd) \ { \ return nmd->ops->nmd_##name(nmd); \ } #define NMD_DEFCB1(t0, name, t1) \ t0 \ netmap_mem_##name(struct netmap_mem_d *nmd, t1 a1) \ { \ return nmd->ops->nmd_##name(nmd, a1); \ } #define NMD_DEFCB3(t0, name, t1, t2, t3) \ t0 \ netmap_mem_##name(struct netmap_mem_d *nmd, t1 a1, t2 a2, t3 a3) \ { \ return nmd->ops->nmd_##name(nmd, a1, a2, a3); \ } #define NMD_DEFNACB(t0, name) \ t0 \ netmap_mem_##name(struct netmap_adapter *na) \ { \ return na->nm_mem->ops->nmd_##name(na); \ } #define NMD_DEFNACB1(t0, name, t1) \ t0 \ netmap_mem_##name(struct netmap_adapter *na, t1 a1) \ { \ return na->nm_mem->ops->nmd_##name(na, a1); \ } -NMD_DEFCB1(void, get_lut, struct netmap_lut *); +NMD_DEFCB1(int, get_lut, struct netmap_lut *); NMD_DEFCB3(int, get_info, u_int *, u_int *, uint16_t *); NMD_DEFCB1(vm_paddr_t, ofstophys, vm_ooffset_t); static int netmap_mem_config(struct netmap_mem_d *); NMD_DEFCB(int, config); NMD_DEFCB1(ssize_t, if_offset, const void *); NMD_DEFCB(void, delete); NMD_DEFNACB(struct netmap_if *, if_new); NMD_DEFNACB1(void, if_delete, struct netmap_if *); NMD_DEFNACB(int, rings_create); NMD_DEFNACB(void, rings_delete); static int netmap_mem_map(struct netmap_obj_pool *, struct netmap_adapter *); static int netmap_mem_unmap(struct netmap_obj_pool *, struct netmap_adapter *); -static int nm_mem_assign_group(struct netmap_mem_d *, device_t); +static int nm_mem_assign_group(struct netmap_mem_d *, struct device *); #define NMA_LOCK_INIT(n) NM_MTX_INIT((n)->nm_mtx) #define NMA_LOCK_DESTROY(n) NM_MTX_DESTROY((n)->nm_mtx) #define NMA_LOCK(n) NM_MTX_LOCK((n)->nm_mtx) #define NMA_UNLOCK(n) NM_MTX_UNLOCK((n)->nm_mtx) #ifdef NM_DEBUG_MEM_PUTGET #define NM_DBG_REFC(nmd, func, line) \ printf("%s:%d mem[%d] -> %d\n", func, line, (nmd)->nm_id, (nmd)->refcount); #else #define NM_DBG_REFC(nmd, func, line) #endif #ifdef NM_DEBUG_MEM_PUTGET void __netmap_mem_get(struct netmap_mem_d *nmd, const char *func, int line) #else void netmap_mem_get(struct netmap_mem_d *nmd) #endif { NMA_LOCK(nmd); nmd->refcount++; NM_DBG_REFC(nmd, func, line); NMA_UNLOCK(nmd); } #ifdef NM_DEBUG_MEM_PUTGET void __netmap_mem_put(struct netmap_mem_d *nmd, const char *func, int line) #else void netmap_mem_put(struct netmap_mem_d *nmd) #endif { int last; NMA_LOCK(nmd);
last = (--nmd->refcount == 0); NM_DBG_REFC(nmd, func, line); NMA_UNLOCK(nmd); if (last) netmap_mem_delete(nmd); } int netmap_mem_finalize(struct netmap_mem_d *nmd, struct netmap_adapter *na) { if (nm_mem_assign_group(nmd, na->pdev) < 0) { return ENOMEM; } else { - nmd->ops->nmd_finalize(nmd); + NMA_LOCK(nmd); + nmd->lasterr = nmd->ops->nmd_finalize(nmd); + NMA_UNLOCK(nmd); } if (!nmd->lasterr && na->pdev) netmap_mem_map(&nmd->pools[NETMAP_BUF_POOL], na); return nmd->lasterr; } +static int netmap_mem_init_shared_info(struct netmap_mem_d *nmd); + void netmap_mem_deref(struct netmap_mem_d *nmd, struct netmap_adapter *na) { NMA_LOCK(nmd); netmap_mem_unmap(&nmd->pools[NETMAP_BUF_POOL], na); + if (nmd->active == 1) { + u_int i; + + /* + * Reset the allocator when it falls out of use so that any + * pool resources leaked by unclean application exits are + * reclaimed. + */ + for (i = 0; i < NETMAP_POOLS_NR; i++) { + struct netmap_obj_pool *p; + u_int j; + + p = &nmd->pools[i]; + p->objfree = p->objtotal; + /* + * Reproduce the net effect of the M_ZERO malloc() + * and marking of free entries in the bitmap that + * occur in finalize_obj_allocator() + */ + memset(p->bitmap, + '\0', + sizeof(uint32_t) * ((p->objtotal + 31) / 32)); + + /* + * Set all the bits in the bitmap that have + * corresponding buffers to 1 to indicate they are + * free. + */ + for (j = 0; j < p->objtotal; j++) { + if (p->lut[j].vaddr != NULL) { + p->bitmap[ (j>>5) ] |= ( 1 << (j & 31) ); + } + } + } + + /* + * Per netmap_mem_finalize_all(), + * buffers 0 and 1 are reserved + */ + nmd->pools[NETMAP_BUF_POOL].objfree -= 2; + if (nmd->pools[NETMAP_BUF_POOL].bitmap) { + /* XXX This check is a workaround that prevents a + * NULL pointer crash which currently happens only + * with ptnetmap guests. Also, + * netmap_mem_init_shared_info must not be called + * by ptnetmap guest. */ + nmd->pools[NETMAP_BUF_POOL].bitmap[0] = ~3; + + /* expose info to the ptnetmap guest */ + netmap_mem_init_shared_info(nmd); + } + } + nmd->ops->nmd_deref(nmd); + NMA_UNLOCK(nmd); - return nmd->ops->nmd_deref(nmd); } /* accessor functions */ -static void +static int netmap_mem2_get_lut(struct netmap_mem_d *nmd, struct netmap_lut *lut) { lut->lut = nmd->pools[NETMAP_BUF_POOL].lut; lut->objtotal = nmd->pools[NETMAP_BUF_POOL].objtotal; lut->objsize = nmd->pools[NETMAP_BUF_POOL]._objsize; + + return 0; } -struct netmap_obj_params netmap_params[NETMAP_POOLS_NR] = { +static struct netmap_obj_params netmap_params[NETMAP_POOLS_NR] = { [NETMAP_IF_POOL] = { .size = 1024, .num = 100, }, [NETMAP_RING_POOL] = { .size = 9*PAGE_SIZE, .num = 200, }, [NETMAP_BUF_POOL] = { .size = 2048, .num = NETMAP_BUF_MAX_NUM, }, }; -struct netmap_obj_params netmap_min_priv_params[NETMAP_POOLS_NR] = { +static struct netmap_obj_params netmap_min_priv_params[NETMAP_POOLS_NR] = { [NETMAP_IF_POOL] = { .size = 1024, - .num = 1, + .num = 2, }, [NETMAP_RING_POOL] = { .size = 5*PAGE_SIZE, .num = 4, }, [NETMAP_BUF_POOL] = { .size = 2048, .num = 4098, }, }; /* * nm_mem is the memory allocator used for all physical interfaces * running in netmap mode. * Virtual (VALE) ports will have each its own allocator. */ extern struct netmap_mem_ops netmap_mem_global_ops; /* forward */ struct netmap_mem_d nm_mem = { /* Our memory allocator. 
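 * Physical ports share this instance; private allocators for VALE ports and pipes are cloned from nm_blueprint below by netmap_mem_private_new().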
*/ .pools = { [NETMAP_IF_POOL] = { .name = "netmap_if", .objminsize = sizeof(struct netmap_if), .objmaxsize = 4096, .nummin = 10, /* don't be stingy */ .nummax = 10000, /* XXX very large */ }, [NETMAP_RING_POOL] = { .name = "netmap_ring", .objminsize = sizeof(struct netmap_ring), .objmaxsize = 32*PAGE_SIZE, .nummin = 2, .nummax = 1024, }, [NETMAP_BUF_POOL] = { .name = "netmap_buf", .objminsize = 64, .objmaxsize = 65536, .nummin = 4, .nummax = 1000000, /* one million! */ }, }, .nm_id = 1, .nm_grp = -1, .prev = &nm_mem, .next = &nm_mem, .ops = &netmap_mem_global_ops }; -struct netmap_mem_d *netmap_last_mem_d = &nm_mem; +static struct netmap_mem_d *netmap_last_mem_d = &nm_mem; /* blueprint for the private memory allocators */ extern struct netmap_mem_ops netmap_mem_private_ops; /* forward */ -const struct netmap_mem_d nm_blueprint = { +/* XXX clang is not happy about using name as a print format */ +static const struct netmap_mem_d nm_blueprint = { .pools = { [NETMAP_IF_POOL] = { .name = "%s_if", .objminsize = sizeof(struct netmap_if), .objmaxsize = 4096, .nummin = 1, .nummax = 100, }, [NETMAP_RING_POOL] = { .name = "%s_ring", .objminsize = sizeof(struct netmap_ring), .objmaxsize = 32*PAGE_SIZE, .nummin = 2, .nummax = 1024, }, [NETMAP_BUF_POOL] = { .name = "%s_buf", .objminsize = 64, .objmaxsize = 65536, .nummin = 4, .nummax = 1000000, /* one million! */ }, }, .flags = NETMAP_MEM_PRIVATE, .ops = &netmap_mem_private_ops }; /* memory allocator related sysctls */ #define STRINGIFY(x) #x #define DECLARE_SYSCTLS(id, name) \ + SYSBEGIN(mem2_ ## name); \ + SYSCTL_DECL(_dev_netmap); /* leave it here, easier for porting */ \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_size, \ CTLFLAG_RW, &netmap_params[id].size, 0, "Requested size of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_size, \ CTLFLAG_RD, &nm_mem.pools[id]._objsize, 0, "Current size of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_num, \ CTLFLAG_RW, &netmap_params[id].num, 0, "Requested number of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_num, \ CTLFLAG_RD, &nm_mem.pools[id].objtotal, 0, "Current number of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, priv_##name##_size, \ CTLFLAG_RW, &netmap_min_priv_params[id].size, 0, \ "Default size of private netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, priv_##name##_num, \ CTLFLAG_RW, &netmap_min_priv_params[id].num, 0, \ - "Default number of private netmap " STRINGIFY(name) "s") + "Default number of private netmap " STRINGIFY(name) "s"); \ + SYSEND -SYSCTL_DECL(_dev_netmap); DECLARE_SYSCTLS(NETMAP_IF_POOL, if); DECLARE_SYSCTLS(NETMAP_RING_POOL, ring); DECLARE_SYSCTLS(NETMAP_BUF_POOL, buf); +/* call with NMA_LOCK(&nm_mem) held */ static int -nm_mem_assign_id(struct netmap_mem_d *nmd) +nm_mem_assign_id_locked(struct netmap_mem_d *nmd) { nm_memid_t id; struct netmap_mem_d *scan = netmap_last_mem_d; int error = ENOMEM; - NMA_LOCK(&nm_mem); - do { /* we rely on unsigned wrap around */ id = scan->nm_id + 1; if (id == 0) /* reserve 0 as error value */ id = 1; scan = scan->next; if (id != scan->nm_id) { nmd->nm_id = id; nmd->prev = scan->prev; nmd->next = scan; scan->prev->next = nmd; scan->prev = nmd; netmap_last_mem_d = nmd; error = 0; break; } } while (scan != netmap_last_mem_d); - NMA_UNLOCK(&nm_mem); return error; } +/* call with NMA_LOCK(&nm_mem) *not* held */ +static int +nm_mem_assign_id(struct netmap_mem_d *nmd) +{ + int ret; + + NMA_LOCK(&nm_mem); + ret = 
nm_mem_assign_id_locked(nmd); + NMA_UNLOCK(&nm_mem); + + return ret; +} + static void nm_mem_release_id(struct netmap_mem_d *nmd) { NMA_LOCK(&nm_mem); nmd->prev->next = nmd->next; nmd->next->prev = nmd->prev; if (netmap_last_mem_d == nmd) netmap_last_mem_d = nmd->prev; nmd->prev = nmd->next = NULL; NMA_UNLOCK(&nm_mem); } static int -nm_mem_assign_group(struct netmap_mem_d *nmd, device_t dev) +nm_mem_assign_group(struct netmap_mem_d *nmd, struct device *dev) { int err = 0, id; id = nm_iommu_group_id(dev); if (netmap_verbose) D("iommu_group %d", id); NMA_LOCK(nmd); if (nmd->nm_grp < 0) nmd->nm_grp = id; if (nmd->nm_grp != id) nmd->lasterr = err = ENOMEM; NMA_UNLOCK(nmd); return err; } /* * First, find the allocator that contains the requested offset, * then locate the cluster through a lookup table. */ static vm_paddr_t netmap_mem2_ofstophys(struct netmap_mem_d* nmd, vm_ooffset_t offset) { int i; vm_ooffset_t o = offset; vm_paddr_t pa; struct netmap_obj_pool *p; NMA_LOCK(nmd); p = nmd->pools; for (i = 0; i < NETMAP_POOLS_NR; offset -= p[i].memtotal, i++) { if (offset >= p[i].memtotal) continue; // now lookup the cluster's address +#ifndef _WIN32 pa = vtophys(p[i].lut[offset / p[i]._objsize].vaddr) + offset % p[i]._objsize; +#else + pa = vtophys(p[i].lut[offset / p[i]._objsize].vaddr); + pa.QuadPart += offset % p[i]._objsize; +#endif NMA_UNLOCK(nmd); return pa; } /* this is only in case of errors */ D("invalid ofs 0x%x out of 0x%x 0x%x 0x%x", (u_int)o, p[NETMAP_IF_POOL].memtotal, p[NETMAP_IF_POOL].memtotal + p[NETMAP_RING_POOL].memtotal, p[NETMAP_IF_POOL].memtotal + p[NETMAP_RING_POOL].memtotal + p[NETMAP_BUF_POOL].memtotal); NMA_UNLOCK(nmd); +#ifndef _WIN32 return 0; // XXX bad address +#else + vm_paddr_t res; + res.QuadPart = 0; + return res; +#endif } +#ifdef _WIN32 + +/* + * win32_build_user_vm_map + * + * This function takes all the objects that make up the pools and maps + * a contiguous virtual memory space for userspace. + * It works this way: + * 1 - allocate a Memory Descriptor List as wide as the sum + * of the memory needed for the pools + * 2 - cycle over every pool and, for each of its clusters: + * + * 2a - get the list + * of the physical address descriptors + * 2b - calculate the offset in the array of page descriptors in the + * main MDL + * 2c - copy the descriptors of the cluster into the main MDL + * + * 3 - return the resulting MDL, which still needs to be mapped in userland + * + * In this way we end up with a single MDL that describes all the memory + * for the objects +*/ + +PMDL +win32_build_user_vm_map(struct netmap_mem_d* nmd) +{ + int i, j; + u_int memsize, memflags, ofs = 0; + PMDL mainMdl, tempMdl; + + if (netmap_mem_get_info(nmd, &memsize, &memflags, NULL)) { + D("memory not finalized yet"); + return NULL; + } + + mainMdl = IoAllocateMdl(NULL, memsize, FALSE, FALSE, NULL); + if (mainMdl == NULL) { + D("failed to allocate mdl"); + return NULL; + } + + NMA_LOCK(nmd); + for (i = 0; i < NETMAP_POOLS_NR; i++) { + struct netmap_obj_pool *p = &nmd->pools[i]; + int clsz = p->_clustsize; + int clobjs = p->_clustentries; /* objects per cluster */ + int mdl_len = sizeof(PFN_NUMBER) * BYTES_TO_PAGES(clsz); + PPFN_NUMBER pSrc, pDst; + + /* each pool has a different cluster size so we need to reallocate */ + tempMdl = IoAllocateMdl(p->lut[0].vaddr, clsz, FALSE, FALSE, NULL); + if (tempMdl == NULL) { + NMA_UNLOCK(nmd); + D("failed to allocate tempMdl"); + IoFreeMdl(mainMdl); + return NULL; + } + pSrc =
MmGetMdlPfnArray(tempMdl); + /* create one entry per cluster, the lut[] has one entry per object */ + for (j = 0; j < p->numclusters; j++, ofs += clsz) { + pDst = &MmGetMdlPfnArray(mainMdl)[BYTES_TO_PAGES(ofs)]; + MmInitializeMdl(tempMdl, p->lut[j*clobjs].vaddr, clsz); + MmBuildMdlForNonPagedPool(tempMdl); /* compute physical page addresses */ + RtlCopyMemory(pDst, pSrc, mdl_len); /* copy the page descriptors */ + mainMdl->MdlFlags = tempMdl->MdlFlags; /* XXX what is in here ? */ + } + IoFreeMdl(tempMdl); + } + NMA_UNLOCK(nmd); + return mainMdl; +} + +#endif /* _WIN32 */ + +/* + * helper function for OS-specific mmap routines (currently only windows). + * Given an nmd and a pool index, returns the cluster size and number of clusters. + * Returns 0 if memory is finalised and the pool is valid, otherwise 1. + * It should be called under NMA_LOCK(nmd) otherwise the underlying info can change. + */ + +int +netmap_mem2_get_pool_info(struct netmap_mem_d* nmd, u_int pool, u_int *clustsize, u_int *numclusters) +{ + if (!nmd || !clustsize || !numclusters || pool >= NETMAP_POOLS_NR) + return 1; /* invalid arguments */ + // NMA_LOCK_ASSERT(nmd); + if (!(nmd->flags & NETMAP_MEM_FINALIZED)) { + *clustsize = *numclusters = 0; + return 1; /* not ready yet */ + } + *clustsize = nmd->pools[pool]._clustsize; + *numclusters = nmd->pools[pool].numclusters; + return 0; /* success */ +} + static int netmap_mem2_get_info(struct netmap_mem_d* nmd, u_int* size, u_int *memflags, nm_memid_t *id) { int error = 0; NMA_LOCK(nmd); error = netmap_mem_config(nmd); if (error) goto out; if (size) { if (nmd->flags & NETMAP_MEM_FINALIZED) { *size = nmd->nm_totalsize; } else { int i; *size = 0; for (i = 0; i < NETMAP_POOLS_NR; i++) { struct netmap_obj_pool *p = nmd->pools + i; *size += (p->_numclusters * p->_clustsize); } } } if (memflags) *memflags = nmd->flags; if (id) *id = nmd->nm_id; out: NMA_UNLOCK(nmd); return error; } /* * we store objects by kernel address, need to find the offset * within the pool to export the value to userspace. * Algorithm: scan until we find the cluster, then add the * actual offset in the cluster */ static ssize_t netmap_obj_offset(struct netmap_obj_pool *p, const void *vaddr) { int i, k = p->_clustentries, n = p->objtotal; ssize_t ofs = 0; for (i = 0; i < n; i += k, ofs += p->_clustsize) { const char *base = p->lut[i].vaddr; ssize_t relofs = (const char *) vaddr - base; if (relofs < 0 || relofs >= p->_clustsize) continue; ofs = ofs + relofs; ND("%s: return offset %d (cluster %d) for pointer %p", p->name, ofs, i, vaddr); return ofs; } D("address %p is not contained inside any cluster (%s)", vaddr, p->name); return 0; /* An error occurred */ } /* Helper functions which convert virtual addresses to offsets */ #define netmap_if_offset(n, v) \ netmap_obj_offset(&(n)->pools[NETMAP_IF_POOL], (v)) #define netmap_ring_offset(n, v) \ ((n)->pools[NETMAP_IF_POOL].memtotal + \ netmap_obj_offset(&(n)->pools[NETMAP_RING_POOL], (v))) -#define netmap_buf_offset(n, v) \ - ((n)->pools[NETMAP_IF_POOL].memtotal + \ - (n)->pools[NETMAP_RING_POOL].memtotal + \ - netmap_obj_offset(&(n)->pools[NETMAP_BUF_POOL], (v))) - - static ssize_t netmap_mem2_if_offset(struct netmap_mem_d *nmd, const void *addr) { ssize_t v; NMA_LOCK(nmd); v = netmap_if_offset(nmd, addr); NMA_UNLOCK(nmd); return v; } /* * report the index, and use start position as a hint, * otherwise buffer allocation becomes terribly expensive. 
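 * * Sketch of the hint protocol (illustration only): a caller doing repeated allocations keeps the scan position across calls, so every search resumes where the previous one stopped: * * uint32_t pos = 0, index; * void *buf = netmap_buf_malloc(nmd, &pos, &index); // as in netmap_extra_alloc() and netmap_new_bufs() below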
*/ static void * netmap_obj_malloc(struct netmap_obj_pool *p, u_int len, uint32_t *start, uint32_t *index) { uint32_t i = 0; /* index in the bitmap */ - uint32_t mask, j; /* slot counter */ + uint32_t mask, j = 0; /* slot counter */ void *vaddr = NULL; if (len > p->_objsize) { D("%s request size %d too large", p->name, len); // XXX cannot reduce the size return NULL; } if (p->objfree == 0) { D("no more %s objects", p->name); return NULL; } if (start) i = *start; /* termination is guaranteed by p->free, but better check bounds on i */ while (vaddr == NULL && i < p->bitmap_slots) { uint32_t cur = p->bitmap[i]; if (cur == 0) { /* bitmask is fully used */ i++; continue; } /* locate a slot */ for (j = 0, mask = 1; (cur & mask) == 0; j++, mask <<= 1) ; p->bitmap[i] &= ~mask; /* mark object as in use */ p->objfree--; vaddr = p->lut[i * 32 + j].vaddr; if (index) *index = i * 32 + j; } - ND("%s allocator: allocated object @ [%d][%d]: vaddr %p", i, j, vaddr); + ND("%s allocator: allocated object @ [%d][%d]: vaddr %p",p->name, i, j, vaddr); if (start) *start = i; return vaddr; } /* * free by index, not by address. * XXX should we also cleanup the content ? */ static int netmap_obj_free(struct netmap_obj_pool *p, uint32_t j) { uint32_t *ptr, mask; if (j >= p->objtotal) { D("invalid index %u, max %u", j, p->objtotal); return 1; } ptr = &p->bitmap[j / 32]; mask = (1 << (j % 32)); if (*ptr & mask) { D("ouch, double free on buffer %d", j); return 1; } else { *ptr |= mask; p->objfree++; return 0; } } /* * free by address. This is slow but is only used for a few * objects (rings, nifp) */ static void netmap_obj_free_va(struct netmap_obj_pool *p, void *vaddr) { u_int i, j, n = p->numclusters; for (i = 0, j = 0; i < n; i++, j += p->_clustentries) { void *base = p->lut[i * p->_clustentries].vaddr; ssize_t relofs = (ssize_t) vaddr - (ssize_t) base; /* Given address, is out of the scope of the current cluster.*/ if (vaddr < base || relofs >= p->_clustsize) continue; j = j + relofs / p->_objsize; /* KASSERT(j != 0, ("Cannot free object 0")); */ netmap_obj_free(p, j); return; } D("address %p is not contained inside any cluster (%s)", vaddr, p->name); } #define netmap_mem_bufsize(n) \ ((n)->pools[NETMAP_BUF_POOL]._objsize) #define netmap_if_malloc(n, len) netmap_obj_malloc(&(n)->pools[NETMAP_IF_POOL], len, NULL, NULL) #define netmap_if_free(n, v) netmap_obj_free_va(&(n)->pools[NETMAP_IF_POOL], (v)) #define netmap_ring_malloc(n, len) netmap_obj_malloc(&(n)->pools[NETMAP_RING_POOL], len, NULL, NULL) #define netmap_ring_free(n, v) netmap_obj_free_va(&(n)->pools[NETMAP_RING_POOL], (v)) #define netmap_buf_malloc(n, _pos, _index) \ netmap_obj_malloc(&(n)->pools[NETMAP_BUF_POOL], netmap_mem_bufsize(n), _pos, _index) #if 0 // XXX unused /* Return the index associated to the given packet buffer */ #define netmap_buf_index(n, v) \ (netmap_obj_offset(&(n)->pools[NETMAP_BUF_POOL], (v)) / NETMAP_BDG_BUF_SIZE(n)) #endif /* * allocate extra buffers in a linked list. * returns the actual number. 
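 * * The list is threaded through the buffers themselves: the first 4 bytes of each buffer hold the index of the next one, and index 0 terminates the chain (buffers 0 and 1 are reserved). A sketch of a walk, mirroring netmap_extra_free() below: * * for (i = head; i >= 2 && i < objtotal; ) { * uint32_t *buf = lut[i].vaddr; // first word links the chain * i = *buf; * }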
*/ uint32_t netmap_extra_alloc(struct netmap_adapter *na, uint32_t *head, uint32_t n) { struct netmap_mem_d *nmd = na->nm_mem; uint32_t i, pos = 0; /* opaque, scan position in the bitmap */ NMA_LOCK(nmd); *head = 0; /* default, 'null' index ie empty list */ for (i = 0 ; i < n; i++) { uint32_t cur = *head; /* save current head */ uint32_t *p = netmap_buf_malloc(nmd, &pos, head); if (p == NULL) { D("no more buffers after %d of %d", i, n); *head = cur; /* restore */ break; } - RD(5, "allocate buffer %d -> %d", *head, cur); + ND(5, "allocate buffer %d -> %d", *head, cur); *p = cur; /* link to previous head */ } NMA_UNLOCK(nmd); return i; } static void netmap_extra_free(struct netmap_adapter *na, uint32_t head) { struct lut_entry *lut = na->na_lut.lut; struct netmap_mem_d *nmd = na->nm_mem; struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; uint32_t i, cur, *buf; - D("freeing the extra list"); + ND("freeing the extra list"); for (i = 0; head >=2 && head < p->objtotal; i++) { cur = head; buf = lut[head].vaddr; head = *buf; *buf = 0; if (netmap_obj_free(p, cur)) break; } if (head != 0) D("breaking with head %d", head); - D("freed %d buffers", i); + if (netmap_verbose) + D("freed %d buffers", i); } /* Return nonzero on error */ static int netmap_new_bufs(struct netmap_mem_d *nmd, struct netmap_slot *slot, u_int n) { struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; u_int i = 0; /* slot counter */ uint32_t pos = 0; /* slot in p->bitmap */ uint32_t index = 0; /* buffer index */ for (i = 0; i < n; i++) { void *vaddr = netmap_buf_malloc(nmd, &pos, &index); if (vaddr == NULL) { D("no more buffers after %d of %d", i, n); goto cleanup; } slot[i].buf_idx = index; slot[i].len = p->_objsize; slot[i].flags = 0; } ND("allocated %d buffers, %d available, first at %d", n, p->objfree, pos); return (0); cleanup: while (i > 0) { i--; netmap_obj_free(p, slot[i].buf_idx); } bzero(slot, n * sizeof(slot[0])); return (ENOMEM); } static void netmap_mem_set_ring(struct netmap_mem_d *nmd, struct netmap_slot *slot, u_int n, uint32_t index) { struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; u_int i; for (i = 0; i < n; i++) { slot[i].buf_idx = index; slot[i].len = p->_objsize; slot[i].flags = 0; } } static void netmap_free_buf(struct netmap_mem_d *nmd, uint32_t i) { struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; if (i < 2 || i >= p->objtotal) { D("Cannot free buf#%d: should be in [2, %d[", i, p->objtotal); return; } netmap_obj_free(p, i); } static void netmap_free_bufs(struct netmap_mem_d *nmd, struct netmap_slot *slot, u_int n) { u_int i; for (i = 0; i < n; i++) { if (slot[i].buf_idx > 2) netmap_free_buf(nmd, slot[i].buf_idx); } } static void netmap_reset_obj_allocator(struct netmap_obj_pool *p) { if (p == NULL) return; if (p->bitmap) free(p->bitmap, M_NETMAP); p->bitmap = NULL; if (p->lut) { u_int i; - size_t sz = p->_clustsize; /* * Free each cluster allocated in * netmap_finalize_obj_allocator(). The cluster start * addresses are stored at multiples of p->_clusterentries * in the lut. */ for (i = 0; i < p->objtotal; i += p->_clustentries) { if (p->lut[i].vaddr) - contigfree(p->lut[i].vaddr, sz, M_NETMAP); + contigfree(p->lut[i].vaddr, p->_clustsize, M_NETMAP); } bzero(p->lut, sizeof(struct lut_entry) * p->objtotal); #ifdef linux vfree(p->lut); #else free(p->lut, M_NETMAP); #endif } p->lut = NULL; p->objtotal = 0; p->memtotal = 0; p->numclusters = 0; p->objfree = 0; } /* * Free all resources related to an allocator. 
*/ static void netmap_destroy_obj_allocator(struct netmap_obj_pool *p) { if (p == NULL) return; netmap_reset_obj_allocator(p); } /* * We receive a request for objtotal objects, of size objsize each. * Internally we may round up both numbers, as we allocate objects * in small clusters multiple of the page size. * We need to keep track of objtotal and clustentries, * as they are needed when freeing memory. * * XXX note -- userspace needs the buffers to be contiguous, * so we cannot afford gaps at the end of a cluster. */ /* call with NMA_LOCK held */ static int netmap_config_obj_allocator(struct netmap_obj_pool *p, u_int objtotal, u_int objsize) { int i; u_int clustsize; /* the cluster size, multiple of page size */ u_int clustentries; /* how many objects per entry */ /* we store the current request, so we can * detect configuration changes later */ p->r_objtotal = objtotal; p->r_objsize = objsize; #define MAX_CLUSTSIZE (1<<22) // 4 MB #define LINE_ROUND NM_CACHE_ALIGN // 64 if (objsize >= MAX_CLUSTSIZE) { /* we could do it but there is no point */ D("unsupported allocation for %d bytes", objsize); return EINVAL; } /* make sure objsize is a multiple of LINE_ROUND */ i = (objsize & (LINE_ROUND - 1)); if (i) { D("XXX aligning object by %d bytes", LINE_ROUND - i); objsize += LINE_ROUND - i; } if (objsize < p->objminsize || objsize > p->objmaxsize) { D("requested objsize %d out of range [%d, %d]", objsize, p->objminsize, p->objmaxsize); return EINVAL; } if (objtotal < p->nummin || objtotal > p->nummax) { D("requested objtotal %d out of range [%d, %d]", objtotal, p->nummin, p->nummax); return EINVAL; } /* * Compute number of objects using a brute-force approach: * given a max cluster size, * we try to fill it with objects keeping track of the * wasted space to the next page boundary. 
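 * * Worked example, assuming PAGE_SIZE is 4096: for objsize 2048, i = 2 already gives used = 4096 and delta = 0, so a cluster holds two objects in a single page; for objsize 1536 the first exact fit is i = 8, i.e. eight objects filling three whole pages (12288 bytes).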
*/ for (clustentries = 0, i = 1;; i++) { u_int delta, used = i * objsize; if (used > MAX_CLUSTSIZE) break; delta = used % PAGE_SIZE; if (delta == 0) { // exact solution clustentries = i; break; } } /* exact solution not found */ if (clustentries == 0) { D("unsupported allocation for %d bytes", objsize); return EINVAL; } /* compute clustsize */ clustsize = clustentries * objsize; if (netmap_verbose) D("objsize %d clustsize %d objects %d", objsize, clustsize, clustentries); /* * The number of clusters is n = ceil(objtotal/clustentries) * objtotal' = n * clustentries */ p->_clustentries = clustentries; p->_clustsize = clustsize; p->_numclusters = (objtotal + clustentries - 1) / clustentries; /* actual values (may be larger than requested) */ p->_objsize = objsize; p->_objtotal = p->_numclusters * clustentries; return 0; } +static struct lut_entry * +nm_alloc_lut(u_int nobj) +{ + size_t n = sizeof(struct lut_entry) * nobj; + struct lut_entry *lut; +#ifdef linux + lut = vmalloc(n); +#else + lut = malloc(n, M_NETMAP, M_NOWAIT | M_ZERO); +#endif + return lut; +} /* call with NMA_LOCK held */ static int netmap_finalize_obj_allocator(struct netmap_obj_pool *p) { int i; /* must be signed */ size_t n; /* optimistically assume we have enough memory */ p->numclusters = p->_numclusters; p->objtotal = p->_objtotal; - n = sizeof(struct lut_entry) * p->objtotal; -#ifdef linux - p->lut = vmalloc(n); -#else - p->lut = malloc(n, M_NETMAP, M_NOWAIT | M_ZERO); -#endif + p->lut = nm_alloc_lut(p->objtotal); if (p->lut == NULL) { - D("Unable to create lookup table (%d bytes) for '%s'", (int)n, p->name); + D("Unable to create lookup table for '%s'", p->name); goto clean; } /* Allocate the bitmap */ n = (p->objtotal + 31) / 32; p->bitmap = malloc(sizeof(uint32_t) * n, M_NETMAP, M_NOWAIT | M_ZERO); if (p->bitmap == NULL) { D("Unable to create bitmap (%d entries) for allocator '%s'", (int)n, p->name); goto clean; } p->bitmap_slots = n; /* * Allocate clusters, init pointers and bitmap */ n = p->_clustsize; for (i = 0; i < (int)p->objtotal;) { int lim = i + p->_clustentries; char *clust; + /* + * XXX Note, we only need contigmalloc() for buffers attached + * to native interfaces. In all other cases (nifp, netmap rings + * and even buffers for VALE ports or emulated interfaces) we + * can live with standard malloc, because the hardware will not + * access the pages directly. + */ clust = contigmalloc(n, M_NETMAP, M_NOWAIT | M_ZERO, (size_t)0, -1UL, PAGE_SIZE, 0); if (clust == NULL) { /* * If we get here, there is a severe memory shortage, * so halve the allocated memory to reclaim some. */ D("Unable to create cluster at %d for '%s' allocator", i, p->name); if (i < 2) /* nothing to halve */ goto out; lim = i / 2; for (i--; i >= lim; i--) { p->bitmap[ (i>>5) ] &= ~( 1 << (i & 31) ); if (i % p->_clustentries == 0 && p->lut[i].vaddr) contigfree(p->lut[i].vaddr, n, M_NETMAP); p->lut[i].vaddr = NULL; } out: p->objtotal = i; /* we may have stopped in the middle of a cluster */ p->numclusters = (i + p->_clustentries - 1) / p->_clustentries; break; } /* * Set bitmap and lut state for all buffers in the current * cluster. * * [i, lim) is the set of buffer indexes that cover the * current cluster. * * 'clust' is really the address of the current buffer in * the current cluster as we index through it with a stride * of p->_objsize. 
*/ for (; i < lim; i++, clust += p->_objsize) { p->bitmap[ (i>>5) ] |= ( 1 << (i & 31) ); p->lut[i].vaddr = clust; p->lut[i].paddr = vtophys(clust); } } p->objfree = p->objtotal; p->memtotal = p->numclusters * p->_clustsize; if (p->objfree == 0) goto clean; if (netmap_verbose) D("Pre-allocated %d clusters (%d/%dKB) for '%s'", p->numclusters, p->_clustsize >> 10, p->memtotal >> 10, p->name); return 0; clean: netmap_reset_obj_allocator(p); return ENOMEM; } /* call with lock held */ static int netmap_memory_config_changed(struct netmap_mem_d *nmd) { int i; for (i = 0; i < NETMAP_POOLS_NR; i++) { if (nmd->pools[i].r_objsize != netmap_params[i].size || nmd->pools[i].r_objtotal != netmap_params[i].num) return 1; } return 0; } static void netmap_mem_reset_all(struct netmap_mem_d *nmd) { int i; if (netmap_verbose) D("resetting %p", nmd); for (i = 0; i < NETMAP_POOLS_NR; i++) { netmap_reset_obj_allocator(&nmd->pools[i]); } nmd->flags &= ~NETMAP_MEM_FINALIZED; } static int netmap_mem_unmap(struct netmap_obj_pool *p, struct netmap_adapter *na) { int i, lim = p->_objtotal; if (na->pdev == NULL) return 0; -#ifdef __FreeBSD__ +#if defined(__FreeBSD__) (void)i; (void)lim; D("unsupported on FreeBSD"); + +#elif defined(_WIN32) + (void)i; + (void)lim; + D("unsupported on Windows"); //XXX_ale, really? #else /* linux */ for (i = 2; i < lim; i++) { netmap_unload_map(na, (bus_dma_tag_t) na->pdev, &p->lut[i].paddr); } #endif /* linux */ return 0; } static int netmap_mem_map(struct netmap_obj_pool *p, struct netmap_adapter *na) { -#ifdef __FreeBSD__ +#if defined(__FreeBSD__) D("unsupported on FreeBSD"); +#elif defined(_WIN32) + D("unsupported on Windows"); //XXX_ale, really? #else /* linux */ int i, lim = p->_objtotal; if (na->pdev == NULL) return 0; for (i = 2; i < lim; i++) { netmap_load_map(na, (bus_dma_tag_t) na->pdev, &p->lut[i].paddr, p->lut[i].vaddr); } #endif /* linux */ return 0; } static int +netmap_mem_init_shared_info(struct netmap_mem_d *nmd) +{ + struct netmap_mem_shared_info *nms_info; + ssize_t base; + + /* Use the first slot in IF_POOL */ + nms_info = netmap_if_malloc(nmd, sizeof(*nms_info)); + if (nms_info == NULL) { + return ENOMEM; + } + + base = netmap_if_offset(nmd, nms_info); + + memcpy(&nms_info->up, &nms_if_blueprint, sizeof(nms_if_blueprint)); + nms_info->buf_pool_offset = nmd->pools[NETMAP_IF_POOL].memtotal + nmd->pools[NETMAP_RING_POOL].memtotal; + nms_info->buf_pool_objtotal = nmd->pools[NETMAP_BUF_POOL].objtotal; + nms_info->buf_pool_objsize = nmd->pools[NETMAP_BUF_POOL]._objsize; + nms_info->totalsize = nmd->nm_totalsize; + nms_info->features = NMS_FEAT_BUF_POOL | NMS_FEAT_MEMSIZE; + + return 0; +} + +static int netmap_mem_finalize_all(struct netmap_mem_d *nmd) { int i; if (nmd->flags & NETMAP_MEM_FINALIZED) return 0; nmd->lasterr = 0; nmd->nm_totalsize = 0; for (i = 0; i < NETMAP_POOLS_NR; i++) { nmd->lasterr = netmap_finalize_obj_allocator(&nmd->pools[i]); if (nmd->lasterr) goto error; nmd->nm_totalsize += nmd->pools[i].memtotal; } /* buffers 0 and 1 are reserved */ nmd->pools[NETMAP_BUF_POOL].objfree -= 2; nmd->pools[NETMAP_BUF_POOL].bitmap[0] = ~3; nmd->flags |= NETMAP_MEM_FINALIZED; + /* expose info to the ptnetmap guest */ + nmd->lasterr = netmap_mem_init_shared_info(nmd); + if (nmd->lasterr) + goto error; + if (netmap_verbose) D("interfaces %d KB, rings %d KB, buffers %d MB", nmd->pools[NETMAP_IF_POOL].memtotal >> 10, nmd->pools[NETMAP_RING_POOL].memtotal >> 10, nmd->pools[NETMAP_BUF_POOL].memtotal >> 20); if (netmap_verbose) D("Free buffers: %d", 
nmd->pools[NETMAP_BUF_POOL].objfree); return 0; error: netmap_mem_reset_all(nmd); return nmd->lasterr; } static void netmap_mem_private_delete(struct netmap_mem_d *nmd) { if (nmd == NULL) return; if (netmap_verbose) D("deleting %p", nmd); if (nmd->active > 0) D("bug: deleting mem allocator with active=%d!", nmd->active); nm_mem_release_id(nmd); if (netmap_verbose) D("done deleting %p", nmd); NMA_LOCK_DESTROY(nmd); free(nmd, M_DEVBUF); } static int netmap_mem_private_config(struct netmap_mem_d *nmd) { /* nothing to do, we are configured on creation * and configuration never changes thereafter */ return 0; } static int netmap_mem_private_finalize(struct netmap_mem_d *nmd) { int err; - NMA_LOCK(nmd); - nmd->active++; err = netmap_mem_finalize_all(nmd); - NMA_UNLOCK(nmd); + if (!err) + nmd->active++; return err; } static void netmap_mem_private_deref(struct netmap_mem_d *nmd) { - NMA_LOCK(nmd); if (--nmd->active <= 0) netmap_mem_reset_all(nmd); - NMA_UNLOCK(nmd); } /* * allocator for private memory */ struct netmap_mem_d * netmap_mem_private_new(const char *name, u_int txr, u_int txd, u_int rxr, u_int rxd, u_int extra_bufs, u_int npipes, int *perr) { struct netmap_mem_d *d = NULL; struct netmap_obj_params p[NETMAP_POOLS_NR]; int i, err; u_int v, maxd; d = malloc(sizeof(struct netmap_mem_d), - M_DEVBUF, M_NOWAIT | M_ZERO); + M_DEVBUF, M_NOWAIT | M_ZERO); if (d == NULL) { err = ENOMEM; goto error; } *d = nm_blueprint; err = nm_mem_assign_id(d); if (err) goto error; /* account for the fake host rings */ txr++; rxr++; /* copy the min values */ for (i = 0; i < NETMAP_POOLS_NR; i++) { p[i] = netmap_min_priv_params[i]; } /* possibly increase them to fit user request */ v = sizeof(struct netmap_if) + sizeof(ssize_t) * (txr + rxr); if (p[NETMAP_IF_POOL].size < v) p[NETMAP_IF_POOL].size = v; v = 2 + 4 * npipes; if (p[NETMAP_IF_POOL].num < v) p[NETMAP_IF_POOL].num = v; maxd = (txd > rxd) ? txd : rxd; v = sizeof(struct netmap_ring) + sizeof(struct netmap_slot) * maxd; if (p[NETMAP_RING_POOL].size < v) p[NETMAP_RING_POOL].size = v; /* each pipe endpoint needs two tx rings (1 normal + 1 host, fake) * and two rx rings (again, 1 normal and 1 fake host) */ v = txr + rxr + 8 * npipes; if (p[NETMAP_RING_POOL].num < v) p[NETMAP_RING_POOL].num = v; /* for each pipe we only need the buffers for the 4 "real" rings. * On the other end, the pipe ring dimension may be different from * the parent port ring dimension. 
As a compromise, we allocate twice the * space actually needed if the pipe rings were the same size as the parent rings */ v = (4 * npipes + rxr) * rxd + (4 * npipes + txr) * txd + 2 + extra_bufs; /* the +2 is for the tx and rx fake buffers (indices 0 and 1) */ if (p[NETMAP_BUF_POOL].num < v) p[NETMAP_BUF_POOL].num = v; if (netmap_verbose) D("req if %d*%d ring %d*%d buf %d*%d", p[NETMAP_IF_POOL].num, p[NETMAP_IF_POOL].size, p[NETMAP_RING_POOL].num, p[NETMAP_RING_POOL].size, p[NETMAP_BUF_POOL].num, p[NETMAP_BUF_POOL].size); for (i = 0; i < NETMAP_POOLS_NR; i++) { snprintf(d->pools[i].name, NETMAP_POOL_MAX_NAMSZ, nm_blueprint.pools[i].name, name); err = netmap_config_obj_allocator(&d->pools[i], p[i].num, p[i].size); if (err) goto error; } d->flags &= ~NETMAP_MEM_FINALIZED; NMA_LOCK_INIT(d); return d; error: netmap_mem_private_delete(d); if (perr) *perr = err; return NULL; } /* call with lock held */ static int netmap_mem_global_config(struct netmap_mem_d *nmd) { int i; if (nmd->active) /* already in use, we cannot change the configuration */ goto out; if (!netmap_memory_config_changed(nmd)) goto out; ND("reconfiguring"); if (nmd->flags & NETMAP_MEM_FINALIZED) { /* reset previous allocation */ for (i = 0; i < NETMAP_POOLS_NR; i++) { netmap_reset_obj_allocator(&nmd->pools[i]); } nmd->flags &= ~NETMAP_MEM_FINALIZED; } for (i = 0; i < NETMAP_POOLS_NR; i++) { nmd->lasterr = netmap_config_obj_allocator(&nmd->pools[i], netmap_params[i].num, netmap_params[i].size); if (nmd->lasterr) goto out; } out: return nmd->lasterr; } static int netmap_mem_global_finalize(struct netmap_mem_d *nmd) { int err; - + /* update configuration if changed */ if (netmap_mem_global_config(nmd)) - goto out; + return nmd->lasterr; nmd->active++; if (nmd->flags & NETMAP_MEM_FINALIZED) { /* may happen if config is not changed */ ND("nothing to do"); goto out; } if (netmap_mem_finalize_all(nmd)) goto out; nmd->lasterr = 0; out: if (nmd->lasterr) nmd->active--; err = nmd->lasterr; return err; } static void netmap_mem_global_delete(struct netmap_mem_d *nmd) { int i; for (i = 0; i < NETMAP_POOLS_NR; i++) { netmap_destroy_obj_allocator(&nm_mem.pools[i]); } NMA_LOCK_DESTROY(&nm_mem); } int netmap_mem_init(void) { NMA_LOCK_INIT(&nm_mem); netmap_mem_get(&nm_mem); return (0); } void netmap_mem_fini(void) { netmap_mem_put(&nm_mem); } static void netmap_free_rings(struct netmap_adapter *na) { enum txrx t; for_rx_tx(t) { u_int i; - for (i = 0; i < netmap_real_rings(na, t); i++) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { struct netmap_kring *kring = &NMR(na, t)[i]; struct netmap_ring *ring = kring->ring; - if (ring == NULL) + if (ring == NULL || kring->users > 0 || (kring->nr_kflags & NKR_NEEDRING)) { + ND("skipping ring %s (ring %p, users %d)", + kring->name, ring, kring->users); continue; - netmap_free_bufs(na->nm_mem, ring->slot, kring->nkr_num_slots); + } + if (i != nma_get_nrings(na, t) || na->na_flags & NAF_HOST_RINGS) + netmap_free_bufs(na->nm_mem, ring->slot, kring->nkr_num_slots); netmap_ring_free(na->nm_mem, ring); kring->ring = NULL; } } } /* call with NMA_LOCK held * * * Allocate netmap rings and buffers for this card * The rings are contiguous, but have variable size. * The kring array must follow the layout described * in netmap_krings_create(). 
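 * That layout puts, for each direction, the hardware rings first and the (possibly fake) host ring last, e.g. with two hardware rings per direction: * * tx[0] tx[1] tx_host rx[0] rx[1] rx_host * * which is why the loops here cover nma_get_nrings(na, t) + 1 krings per direction.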
*/ static int netmap_mem2_rings_create(struct netmap_adapter *na) { enum txrx t; NMA_LOCK(na->nm_mem); for_rx_tx(t) { u_int i; for (i = 0; i <= nma_get_nrings(na, t); i++) { struct netmap_kring *kring = &NMR(na, t)[i]; struct netmap_ring *ring = kring->ring; u_int len, ndesc; - if (ring) { - ND("%s already created", kring->name); - continue; /* already created by somebody else */ + if (ring || (!kring->users && !(kring->nr_kflags & NKR_NEEDRING))) { + /* unneeded, or already created by somebody else */ + ND("skipping ring %s", kring->name); + continue; } ndesc = kring->nkr_num_slots; len = sizeof(struct netmap_ring) + ndesc * sizeof(struct netmap_slot); ring = netmap_ring_malloc(na->nm_mem, len); if (ring == NULL) { D("Cannot allocate %s_ring", nm_txrx2str(t)); goto cleanup; } ND("txring at %p", ring); kring->ring = ring; *(uint32_t *)(uintptr_t)&ring->num_slots = ndesc; *(int64_t *)(uintptr_t)&ring->buf_ofs = (na->nm_mem->pools[NETMAP_IF_POOL].memtotal + na->nm_mem->pools[NETMAP_RING_POOL].memtotal) - netmap_ring_offset(na->nm_mem, ring); /* copy values from kring */ ring->head = kring->rhead; ring->cur = kring->rcur; ring->tail = kring->rtail; *(uint16_t *)(uintptr_t)&ring->nr_buf_size = netmap_mem_bufsize(na->nm_mem); ND("%s h %d c %d t %d", kring->name, ring->head, ring->cur, ring->tail); ND("initializing slots for %s_ring", nm_txrx2str(t)); if (i != nma_get_nrings(na, t) || (na->na_flags & NAF_HOST_RINGS)) { /* this is a real ring */ if (netmap_new_bufs(na->nm_mem, ring->slot, ndesc)) { D("Cannot allocate buffers for %s_ring", nm_txrx2str(t)); goto cleanup; } } else { /* this is a fake ring, set all indices to 0 */ netmap_mem_set_ring(na->nm_mem, ring->slot, ndesc, 0); } /* ring info */ *(uint16_t *)(uintptr_t)&ring->ringid = kring->ring_id; *(uint16_t *)(uintptr_t)&ring->dir = kring->tx; } } NMA_UNLOCK(na->nm_mem); return 0; cleanup: netmap_free_rings(na); NMA_UNLOCK(na->nm_mem); return ENOMEM; } static void netmap_mem2_rings_delete(struct netmap_adapter *na) { /* last instance, release bufs and rings */ NMA_LOCK(na->nm_mem); netmap_free_rings(na); NMA_UNLOCK(na->nm_mem); } /* call with NMA_LOCK held */ /* * Allocate the per-fd structure netmap_if. * * We assume that the configuration stored in na * (number of tx/rx rings and descs) does not change while * the interface is in netmap mode. */ static struct netmap_if * netmap_mem2_if_new(struct netmap_adapter *na) { struct netmap_if *nifp; ssize_t base; /* handy for relative offsets between rings and nifp */ u_int i, len, n[NR_TXRX], ntot; enum txrx t; ntot = 0; for_rx_tx(t) { /* account for the (possibly fake) host rings */ n[t] = nma_get_nrings(na, t) + 1; ntot += n[t]; } /* * the descriptor is followed inline by an array of offsets * to the tx and rx rings in the shared memory region. */ NMA_LOCK(na->nm_mem); len = sizeof(struct netmap_if) + (ntot * sizeof(ssize_t)); nifp = netmap_if_malloc(na->nm_mem, len); if (nifp == NULL) { NMA_UNLOCK(na->nm_mem); return NULL; } /* initialize base fields -- override const */ *(u_int *)(uintptr_t)&nifp->ni_tx_rings = na->num_tx_rings; *(u_int *)(uintptr_t)&nifp->ni_rx_rings = na->num_rx_rings; strncpy(nifp->ni_name, na->name, (size_t)IFNAMSIZ); /* * fill the slots for the rx and tx rings. They contain the offset * between the ring and nifp, so the information is usable in * userspace to reach the ring from the nifp.
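 * * Userspace counterpart (sketch): ring i is reached as * * ring = (struct netmap_ring *)((char *)nifp + nifp->ring_ofs[i]); * * which is essentially what the NETMAP_TXRING()/NETMAP_RXRING() macros in netmap_user.h do (the RX variant also skips past the ni_tx_rings + 1 tx entries).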
*/ base = netmap_if_offset(na->nm_mem, nifp); for (i = 0; i < n[NR_TX]; i++) { + if (na->tx_rings[i].ring == NULL) { + // XXX maybe use the offset of an error ring, + // like we do for buffers? + *(ssize_t *)(uintptr_t)&nifp->ring_ofs[i] = 0; + continue; + } *(ssize_t *)(uintptr_t)&nifp->ring_ofs[i] = netmap_ring_offset(na->nm_mem, na->tx_rings[i].ring) - base; } for (i = 0; i < n[NR_RX]; i++) { + if (na->rx_rings[i].ring == NULL) { + // XXX maybe use the offset of an error ring, + // like we do for buffers? + *(ssize_t *)(uintptr_t)&nifp->ring_ofs[i+n[NR_TX]] = 0; + continue; + } *(ssize_t *)(uintptr_t)&nifp->ring_ofs[i+n[NR_TX]] = netmap_ring_offset(na->nm_mem, na->rx_rings[i].ring) - base; } NMA_UNLOCK(na->nm_mem); return (nifp); } static void netmap_mem2_if_delete(struct netmap_adapter *na, struct netmap_if *nifp) { if (nifp == NULL) /* nothing to do */ return; NMA_LOCK(na->nm_mem); if (nifp->ni_bufs_head) netmap_extra_free(na, nifp->ni_bufs_head); netmap_if_free(na->nm_mem, nifp); NMA_UNLOCK(na->nm_mem); } static void netmap_mem_global_deref(struct netmap_mem_d *nmd) { nmd->active--; if (!nmd->active) nmd->nm_grp = -1; if (netmap_verbose) D("active = %d", nmd->active); } struct netmap_mem_ops netmap_mem_global_ops = { .nmd_get_lut = netmap_mem2_get_lut, .nmd_get_info = netmap_mem2_get_info, .nmd_ofstophys = netmap_mem2_ofstophys, .nmd_config = netmap_mem_global_config, .nmd_finalize = netmap_mem_global_finalize, .nmd_deref = netmap_mem_global_deref, .nmd_delete = netmap_mem_global_delete, .nmd_if_offset = netmap_mem2_if_offset, .nmd_if_new = netmap_mem2_if_new, .nmd_if_delete = netmap_mem2_if_delete, .nmd_rings_create = netmap_mem2_rings_create, .nmd_rings_delete = netmap_mem2_rings_delete }; struct netmap_mem_ops netmap_mem_private_ops = { .nmd_get_lut = netmap_mem2_get_lut, .nmd_get_info = netmap_mem2_get_info, .nmd_ofstophys = netmap_mem2_ofstophys, .nmd_config = netmap_mem_private_config, .nmd_finalize = netmap_mem_private_finalize, .nmd_deref = netmap_mem_private_deref, .nmd_if_offset = netmap_mem2_if_offset, .nmd_delete = netmap_mem_private_delete, .nmd_if_new = netmap_mem2_if_new, .nmd_if_delete = netmap_mem2_if_delete, .nmd_rings_create = netmap_mem2_rings_create, .nmd_rings_delete = netmap_mem2_rings_delete }; + +#ifdef WITH_PTNETMAP_GUEST +struct mem_pt_if { + struct mem_pt_if *next; + struct ifnet *ifp; + unsigned int nifp_offset; + nm_pt_guest_ptctl_t ptctl; +}; + +/* Netmap allocator for ptnetmap guests. */ +struct netmap_mem_ptg { + struct netmap_mem_d up; + + vm_paddr_t nm_paddr; /* physical address in the guest */ + void *nm_addr; /* virtual address in the guest */ + struct netmap_lut buf_lut; /* lookup table for BUF pool in the guest */ + nm_memid_t nm_host_id; /* allocator identifier in the host */ + struct ptnetmap_memdev *ptn_dev; + struct mem_pt_if *pt_ifs; /* list of interfaces in passthrough */ +}; + +/* Link a passthrough interface to a passthrough netmap allocator. 
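+ * A sketch of the resulting structure (names from the declarations
+ * above): pt_ifs points to the most recently added mem_pt_if, each
+ * mem_pt_if->next points to the previously added one, and insertions
+ * happen at the head, under NMA_LOCK, as in the function below.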
*/ +static int +netmap_mem_pt_guest_ifp_add(struct netmap_mem_d *nmd, struct ifnet *ifp, + unsigned int nifp_offset, + nm_pt_guest_ptctl_t ptctl) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + struct mem_pt_if *ptif = malloc(sizeof(*ptif), M_NETMAP, + M_NOWAIT | M_ZERO); + + if (!ptif) { + return ENOMEM; + } + + NMA_LOCK(nmd); + + ptif->ifp = ifp; + ptif->nifp_offset = nifp_offset; + ptif->ptctl = ptctl; + + if (ptnmd->pt_ifs) { + ptif->next = ptnmd->pt_ifs; + } + ptnmd->pt_ifs = ptif; + + NMA_UNLOCK(nmd); + + D("added (ifp=%p,nifp_offset=%u)", ptif->ifp, ptif->nifp_offset); + + return 0; +} + +/* Called with NMA_LOCK(nmd) held. */ +static struct mem_pt_if * +netmap_mem_pt_guest_ifp_lookup(struct netmap_mem_d *nmd, struct ifnet *ifp) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + struct mem_pt_if *curr; + + for (curr = ptnmd->pt_ifs; curr; curr = curr->next) { + if (curr->ifp == ifp) { + return curr; + } + } + + return NULL; +} + +/* Unlink a passthrough interface from a passthrough netmap allocator. */ +int +netmap_mem_pt_guest_ifp_del(struct netmap_mem_d *nmd, struct ifnet *ifp) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + struct mem_pt_if *prev = NULL; + struct mem_pt_if *curr; + int ret = -1; + + NMA_LOCK(nmd); + + for (curr = ptnmd->pt_ifs; curr; curr = curr->next) { + if (curr->ifp == ifp) { + if (prev) { + prev->next = curr->next; + } else { + ptnmd->pt_ifs = curr->next; + } + D("removed (ifp=%p,nifp_offset=%u)", + curr->ifp, curr->nifp_offset); + free(curr, M_NETMAP); + ret = 0; + break; + } + prev = curr; + } + + NMA_UNLOCK(nmd); + + return ret; +} + +/* Read allocator info from the first netmap_if (only on finalize) */ +static int +netmap_mem_pt_guest_read_shared_info(struct netmap_mem_d *nmd) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + struct netmap_mem_shared_info *nms_info; + uint32_t bufsize; + uint32_t nbuffers; + char *vaddr; + vm_paddr_t paddr; + int i; + + nms_info = (struct netmap_mem_shared_info *)ptnmd->nm_addr; + if (strncmp(nms_info->up.ni_name, NMS_NAME, sizeof(NMS_NAME)) != 0) { + D("error, the first slot does not contain shared info"); + return EINVAL; + } + /* check features mem_shared info */ + if ((nms_info->features & (NMS_FEAT_BUF_POOL | NMS_FEAT_MEMSIZE)) != + (NMS_FEAT_BUF_POOL | NMS_FEAT_MEMSIZE)) { + D("error, the shared info does not contain BUF_POOL and MEMSIZE"); + return EINVAL; + } + + bufsize = nms_info->buf_pool_objsize; + nbuffers = nms_info->buf_pool_objtotal; + + /* allocate the lut */ + if (ptnmd->buf_lut.lut == NULL) { + D("allocating lut"); + ptnmd->buf_lut.lut = nm_alloc_lut(nbuffers); + if (ptnmd->buf_lut.lut == NULL) { + D("lut allocation failed"); + return ENOMEM; + } + } + + /* we have physically contiguous memory mapped through PCI BAR */ + vaddr = (char *)(ptnmd->nm_addr) + nms_info->buf_pool_offset; + paddr = ptnmd->nm_paddr + nms_info->buf_pool_offset; + + for (i = 0; i < nbuffers; i++) { + ptnmd->buf_lut.lut[i].vaddr = vaddr; + ptnmd->buf_lut.lut[i].paddr = paddr; + vaddr += bufsize; + paddr += bufsize; + } + + ptnmd->buf_lut.objtotal = nbuffers; + ptnmd->buf_lut.objsize = bufsize; + + nmd->nm_totalsize = nms_info->totalsize; + + return 0; +} + +static int +netmap_mem_pt_guest_get_lut(struct netmap_mem_d *nmd, struct netmap_lut *lut) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + + if (!(nmd->flags & NETMAP_MEM_FINALIZED)) { + return EINVAL; + } + + *lut = ptnmd->buf_lut; + return 0; +} + +static int 
+netmap_mem_pt_guest_get_info(struct netmap_mem_d *nmd, u_int *size, + u_int *memflags, uint16_t *id) +{ + int error = 0; + + NMA_LOCK(nmd); + + error = nmd->ops->nmd_config(nmd); + if (error) + goto out; + + if (size) + *size = nmd->nm_totalsize; + if (memflags) + *memflags = nmd->flags; + if (id) + *id = nmd->nm_id; + +out: + NMA_UNLOCK(nmd); + + return error; +} + +static vm_paddr_t +netmap_mem_pt_guest_ofstophys(struct netmap_mem_d *nmd, vm_ooffset_t off) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + vm_paddr_t paddr; + /* if the offset is valid, just return csb->base_addr + off */ + paddr = (vm_paddr_t)(ptnmd->nm_paddr + off); + ND("off %lx paddr %lx", off, (unsigned long)paddr); + return paddr; +} + +static int +netmap_mem_pt_guest_config(struct netmap_mem_d *nmd) +{ + /* nothing to do, we are configured on creation + * and configuration never changes thereafter + */ + return 0; +} + +static int +netmap_mem_pt_guest_finalize(struct netmap_mem_d *nmd) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + int error = 0; + + nmd->active++; + + if (nmd->flags & NETMAP_MEM_FINALIZED) + goto out; + + if (ptnmd->ptn_dev == NULL) { + D("ptnetmap memdev not attached"); + error = ENOMEM; + goto err; + } + /* map memory through ptnetmap-memdev BAR */ + error = nm_os_pt_memdev_iomap(ptnmd->ptn_dev, &ptnmd->nm_paddr, + &ptnmd->nm_addr); + if (error) + goto err; + + /* read allocator info and create lut */ + error = netmap_mem_pt_guest_read_shared_info(nmd); + if (error) + goto err; + + nmd->flags |= NETMAP_MEM_FINALIZED; +out: + return 0; +err: + nmd->active--; + return error; +} + +static void +netmap_mem_pt_guest_deref(struct netmap_mem_d *nmd) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + + nmd->active--; + if (nmd->active <= 0 && + (nmd->flags & NETMAP_MEM_FINALIZED)) { + nmd->flags &= ~NETMAP_MEM_FINALIZED; + /* unmap ptnetmap-memdev memory */ + if (ptnmd->ptn_dev) { + nm_os_pt_memdev_iounmap(ptnmd->ptn_dev); + } + ptnmd->nm_addr = 0; + ptnmd->nm_paddr = 0; + } +} + +static ssize_t +netmap_mem_pt_guest_if_offset(struct netmap_mem_d *nmd, const void *vaddr) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; + + return (const char *)(vaddr) - (char *)(ptnmd->nm_addr); +} + +static void +netmap_mem_pt_guest_delete(struct netmap_mem_d *nmd) +{ + if (nmd == NULL) + return; + if (netmap_verbose) + D("deleting %p", nmd); + if (nmd->active > 0) + D("bug: deleting mem allocator with active=%d!", nmd->active); + nm_mem_release_id(nmd); + if (netmap_verbose) + D("done deleting %p", nmd); + NMA_LOCK_DESTROY(nmd); + free(nmd, M_DEVBUF); +} + +static struct netmap_if * +netmap_mem_pt_guest_if_new(struct netmap_adapter *na) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)na->nm_mem; + struct mem_pt_if *ptif; + struct netmap_if *nifp = NULL; + + NMA_LOCK(na->nm_mem); + + ptif = netmap_mem_pt_guest_ifp_lookup(na->nm_mem, na->ifp); + if (ptif == NULL) { + D("Error: interface %p is not in passthrough", na->ifp); + goto out; + } + + nifp = (struct netmap_if *)((char *)(ptnmd->nm_addr) + + ptif->nifp_offset); +out: + NMA_UNLOCK(na->nm_mem); + return nifp; +} + +static void +netmap_mem_pt_guest_if_delete(struct netmap_adapter *na, struct netmap_if *nifp) +{ + struct mem_pt_if *ptif; + + NMA_LOCK(na->nm_mem); + + ptif = netmap_mem_pt_guest_ifp_lookup(na->nm_mem, na->ifp); + if (ptif == NULL) { + D("Error: interface %p is not in passthrough", na->ifp); + goto out; + } + + ptif->ptctl(na->ifp, PTNETMAP_PTCTL_IFDELETE); +out: + 
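+	/* the allocator lock is dropped on both the lookup-failure
+	 * and the normal path */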
NMA_UNLOCK(na->nm_mem); +} + +static int +netmap_mem_pt_guest_rings_create(struct netmap_adapter *na) +{ + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)na->nm_mem; + struct mem_pt_if *ptif; + struct netmap_if *nifp; + int i, error = -1; + + NMA_LOCK(na->nm_mem); + + ptif = netmap_mem_pt_guest_ifp_lookup(na->nm_mem, na->ifp); + if (ptif == NULL) { + D("Error: interface %p is not in passthrough", na->ifp); + goto out; + } + + + /* point each kring to the corresponding backend ring */ + nifp = (struct netmap_if *)((char *)ptnmd->nm_addr + ptif->nifp_offset); + for (i = 0; i <= na->num_tx_rings; i++) { + struct netmap_kring *kring = na->tx_rings + i; + if (kring->ring) + continue; + kring->ring = (struct netmap_ring *) + ((char *)nifp + nifp->ring_ofs[i]); + } + for (i = 0; i <= na->num_rx_rings; i++) { + struct netmap_kring *kring = na->rx_rings + i; + if (kring->ring) + continue; + kring->ring = (struct netmap_ring *) + ((char *)nifp + + nifp->ring_ofs[i + na->num_tx_rings + 1]); + } + + //error = ptif->ptctl->nm_ptctl(ifp, PTNETMAP_PTCTL_RINGSCREATE); + error = 0; +out: + NMA_UNLOCK(na->nm_mem); + + return error; +} + +static void +netmap_mem_pt_guest_rings_delete(struct netmap_adapter *na) +{ + /* TODO: remove?? */ +#if 0 + struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)na->nm_mem; + struct mem_pt_if *ptif = netmap_mem_pt_guest_ifp_lookup(na->nm_mem, + na->ifp); +#endif +} + +static struct netmap_mem_ops netmap_mem_pt_guest_ops = { + .nmd_get_lut = netmap_mem_pt_guest_get_lut, + .nmd_get_info = netmap_mem_pt_guest_get_info, + .nmd_ofstophys = netmap_mem_pt_guest_ofstophys, + .nmd_config = netmap_mem_pt_guest_config, + .nmd_finalize = netmap_mem_pt_guest_finalize, + .nmd_deref = netmap_mem_pt_guest_deref, + .nmd_if_offset = netmap_mem_pt_guest_if_offset, + .nmd_delete = netmap_mem_pt_guest_delete, + .nmd_if_new = netmap_mem_pt_guest_if_new, + .nmd_if_delete = netmap_mem_pt_guest_if_delete, + .nmd_rings_create = netmap_mem_pt_guest_rings_create, + .nmd_rings_delete = netmap_mem_pt_guest_rings_delete +}; + +/* Called with NMA_LOCK(&nm_mem) held. */ +static struct netmap_mem_d * +netmap_mem_pt_guest_find_hostid(nm_memid_t host_id) +{ + struct netmap_mem_d *mem = NULL; + struct netmap_mem_d *scan = netmap_last_mem_d; + + do { + /* find ptnetmap allocator through host ID */ + if (scan->ops->nmd_deref == netmap_mem_pt_guest_deref && + ((struct netmap_mem_ptg *)(scan))->nm_host_id == host_id) { + mem = scan; + break; + } + scan = scan->next; + } while (scan != netmap_last_mem_d); + + return mem; +} + +/* Called with NMA_LOCK(&nm_mem) held. 
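+ * (the list scan in netmap_mem_pt_guest_find_hostid() above and the
+ * id assignment in nm_mem_assign_id_locked() below both rely on it)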
*/ +static struct netmap_mem_d * +netmap_mem_pt_guest_create(nm_memid_t host_id) +{ + struct netmap_mem_ptg *ptnmd; + int err = 0; + + ptnmd = malloc(sizeof(struct netmap_mem_ptg), + M_DEVBUF, M_NOWAIT | M_ZERO); + if (ptnmd == NULL) { + err = ENOMEM; + goto error; + } + + ptnmd->up.ops = &netmap_mem_pt_guest_ops; + ptnmd->nm_host_id = host_id; + ptnmd->pt_ifs = NULL; + + /* Assign a new id in the guest (we have the lock) */ + err = nm_mem_assign_id_locked(&ptnmd->up); + if (err) + goto error; + + ptnmd->up.flags &= ~NETMAP_MEM_FINALIZED; + ptnmd->up.flags |= NETMAP_MEM_IO; + + NMA_LOCK_INIT(&ptnmd->up); + + return &ptnmd->up; +error: + netmap_mem_pt_guest_delete(&ptnmd->up); + return NULL; +} + +/* + * find the host id among the guest allocators and create the guest + * allocator if it is not there + */ +static struct netmap_mem_d * +netmap_mem_pt_guest_get(nm_memid_t host_id) +{ + struct netmap_mem_d *nmd; + + NMA_LOCK(&nm_mem); + nmd = netmap_mem_pt_guest_find_hostid(host_id); + if (nmd == NULL) { + nmd = netmap_mem_pt_guest_create(host_id); + } + NMA_UNLOCK(&nm_mem); + + return nmd; +} + +/* + * The guest allocator can be created by ptnetmap_memdev (during the device + * attach) or by the ptnetmap device (e1000/virtio), during netmap_attach. + * + * The order is not important (we have different orders on LINUX and FreeBSD). + * The first one creates the device, and the second one simply attaches it. + */ + +/* Called when ptnetmap_memdev is attaching, to attach a new allocator in + * the guest */ +struct netmap_mem_d * +netmap_mem_pt_guest_attach(struct ptnetmap_memdev *ptn_dev, nm_memid_t host_id) +{ + struct netmap_mem_d *nmd; + struct netmap_mem_ptg *ptnmd; + + nmd = netmap_mem_pt_guest_get(host_id); + + /* assign this device to the guest allocator */ + if (nmd) { + ptnmd = (struct netmap_mem_ptg *)nmd; + ptnmd->ptn_dev = ptn_dev; + } + + return nmd; +} + +/* Called when the ptnetmap device (virtio/e1000) is attaching */ +struct netmap_mem_d * +netmap_mem_pt_guest_new(struct ifnet *ifp, + unsigned int nifp_offset, + nm_pt_guest_ptctl_t ptctl) +{ + struct netmap_mem_d *nmd; + nm_memid_t host_id; + + if (ifp == NULL || ptctl == NULL) { + return NULL; + } + + /* Get the host allocator id. */ + host_id = ptctl(ifp, PTNETMAP_PTCTL_HOSTMEMID); + + nmd = netmap_mem_pt_guest_get(host_id); + + if (nmd) { + netmap_mem_pt_guest_ifp_add(nmd, ifp, nifp_offset, + ptctl); + } + + return nmd; +} + +#endif /* WITH_PTNETMAP_GUEST */ Index: head/sys/dev/netmap/netmap_mem2.h =================================================================== --- head/sys/dev/netmap/netmap_mem2.h (revision 307393) +++ head/sys/dev/netmap/netmap_mem2.h (revision 307394) @@ -1,165 +1,181 @@ /* - * Copyright (C) 2012-2014 Matteo Landi, Luigi Rizzo, Giuseppe Lettieri. All rights reserved. + * Copyright (C) 2012-2014 Matteo Landi + * Copyright (C) 2012-2016 Luigi Rizzo + * Copyright (C) 2012-2016 Giuseppe Lettieri + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution.
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * (New) memory allocator for netmap */ /* * This allocator creates three memory pools: * nm_if_pool for the struct netmap_if * nm_ring_pool for the struct netmap_ring * nm_buf_pool for the packet buffers. * * that contain netmap objects. Each pool is made of a number of clusters, * multiple of a page size, each containing an integer number of objects. * The clusters are contiguous in user space but not in the kernel. * Only nm_buf_pool needs to be dma-able, * but for convenience use the same type of allocator for all. * * Once mapped, the three pools are exported to userspace * as a contiguous block, starting from nm_if_pool. Each * cluster (and pool) is an integral number of pages. * [ . . . ][ . . . . . .][ . . . . . . . . . .] * nm_if nm_ring nm_buf * * The userspace areas contain offsets of the objects in userspace. * When (at init time) we write these offsets, we find out the index * of the object, and from there locate the offset from the beginning * of the region. * * The individual allocators manage a pool of memory for objects of * the same size. * The pool is split into smaller clusters, whose size is a * multiple of the page size. The cluster size is chosen * to minimize the waste for a given max cluster size * (we do it by brute force, as we have relatively few objects * per cluster). * * Objects are aligned to the cache line (64 bytes) rounding up object * sizes when needed. A bitmap contains the state of each object. * Allocation scans the bitmap; this is done only on attach, so we are not * too worried about performance. * * For each allocator we can define (through sysctl) the size and * number of each object. Memory is allocated at the first use of a * netmap file descriptor, and can be freed when all such descriptors * have been released (including unmapping the memory). * If memory is scarce, the system tries to get as much as possible * and the sysctl values reflect the actual allocation. * Together with the desired values, the sysctls also export the absolute * minimum and maximum values that cannot be overridden. * * struct netmap_if: * variable size, max 16 bytes per ring pair plus some fixed amount. * 1024 bytes should be large enough in practice. * * In the worst case we have one netmap_if per ring in the system. * * struct netmap_ring * variable size, 8 bytes per slot plus some fixed amount. * Rings can be large (e.g. 4k slots, or >32Kbytes). * We default to 36 KB (9 pages), and a few hundred rings. * * struct netmap_buffer * The more the better, both because fast interfaces tend to have * many slots, and because we may want to use buffers to store * packets in userspace avoiding copies. * Must contain a full frame (eg 1518, or more for vlans, jumbo * frames etc.)
plus be nicely aligned, plus some NICs restrict * the size to multiple of 1K or so. Default to 2K */ #ifndef _NET_NETMAP_MEM2_H_ #define _NET_NETMAP_MEM2_H_ /* We implement two kinds of netmap_mem_d structures: * * - global: used by hardware NICS; * * - private: used by VALE ports. * * In both cases, the netmap_mem_d structure has the same lifetime as the * netmap_adapter of the corresponding NIC or port. It is the responsibility of * the client code to delete the private allocator when the associated * netmap_adapter is freed (this is implemented by the NAF_MEM_OWNER flag in * netmap.c). The 'refcount' field counts the number of active users of the * structure. The global allocator uses this information to prevent/allow * reconfiguration. The private allocators release all their memory when there * are no active users. By 'active user' we mean an existing netmap_priv * structure holding a reference to the allocator. */ extern struct netmap_mem_d nm_mem; -void netmap_mem_get_lut(struct netmap_mem_d *, struct netmap_lut *); +int netmap_mem_get_lut(struct netmap_mem_d *, struct netmap_lut *); vm_paddr_t netmap_mem_ofstophys(struct netmap_mem_d *, vm_ooffset_t); +#ifdef _WIN32 +PMDL win32_build_user_vm_map(struct netmap_mem_d* nmd); +#endif int netmap_mem_finalize(struct netmap_mem_d *, struct netmap_adapter *); int netmap_mem_init(void); void netmap_mem_fini(void); struct netmap_if * netmap_mem_if_new(struct netmap_adapter *); void netmap_mem_if_delete(struct netmap_adapter *, struct netmap_if *); int netmap_mem_rings_create(struct netmap_adapter *); void netmap_mem_rings_delete(struct netmap_adapter *); void netmap_mem_deref(struct netmap_mem_d *, struct netmap_adapter *); +int netmap_mem2_get_pool_info(struct netmap_mem_d *, u_int, u_int *, u_int *); int netmap_mem_get_info(struct netmap_mem_d *, u_int *size, u_int *memflags, uint16_t *id); ssize_t netmap_mem_if_offset(struct netmap_mem_d *, const void *vaddr); struct netmap_mem_d* netmap_mem_private_new(const char *name, u_int txr, u_int txd, u_int rxr, u_int rxd, u_int extra_bufs, u_int npipes, int* error); void netmap_mem_delete(struct netmap_mem_d *); //#define NM_DEBUG_MEM_PUTGET 1 #ifdef NM_DEBUG_MEM_PUTGET #define netmap_mem_get(nmd) \ do { \ __netmap_mem_get(nmd, __FUNCTION__, __LINE__); \ } while (0) #define netmap_mem_put(nmd) \ do { \ __netmap_mem_put(nmd, __FUNCTION__, __LINE__); \ } while (0) void __netmap_mem_get(struct netmap_mem_d *, const char *, int); void __netmap_mem_put(struct netmap_mem_d *, const char *, int); #else /* !NM_DEBUG_MEM_PUTGET */ void netmap_mem_get(struct netmap_mem_d *); void netmap_mem_put(struct netmap_mem_d *); #endif /* !NM_DEBUG_PUTGET */ + +#ifdef WITH_PTNETMAP_GUEST +struct netmap_mem_d* netmap_mem_pt_guest_new(struct ifnet *, + unsigned int nifp_offset, + nm_pt_guest_ptctl_t); +struct ptnetmap_memdev; +struct netmap_mem_d* netmap_mem_pt_guest_attach(struct ptnetmap_memdev *, uint16_t); +int netmap_mem_pt_guest_ifp_del(struct netmap_mem_d *, struct ifnet *); +#endif /* WITH_PTNETMAP_GUEST */ #define NETMAP_MEM_PRIVATE 0x2 /* allocator uses private address space */ #define NETMAP_MEM_IO 0x4 /* the underlying memory is mmapped I/O */ uint32_t netmap_extra_alloc(struct netmap_adapter *, uint32_t *, uint32_t n); #endif Index: head/sys/dev/netmap/netmap_monitor.c =================================================================== --- head/sys/dev/netmap/netmap_monitor.c (revision 307393) +++ head/sys/dev/netmap/netmap_monitor.c (revision 307394) @@ -1,850 +1,886 @@ /* - * Copyright (C) 2014 Giuseppe 
Lettieri. All rights reserved. + * Copyright (C) 2014-2016 Giuseppe Lettieri + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * Monitors * * netmap monitors can be used to monitor network traffic * on another adapter, when the latter adapter is working in netmap mode. * * Monitors offer to userspace the same interface as any other netmap port, * with as many pairs of netmap rings as the monitored adapter. * However, only the rx rings are actually used. Each monitor rx ring receives * the traffic transiting on both the corresponding tx and rx rings in the * monitored adapter. During registration, the user can choose if she wants * to intercept tx only, rx only, or both tx and rx traffic. * * If the monitor is not able to cope with the stream of frames, excess traffic * will be dropped. * * If the monitored adapter leaves netmap mode, the monitor has to be restarted. * * Monitors can be either zero-copy or copy-based. * * Copy monitors see the frames before they are consumed: * * - For tx traffic, this is when the application sends them, before they are * passed down to the adapter. * * - For rx traffic, this is when they are received by the adapter, before * they are sent up to the application, if any (note that, if no * application is reading from a monitored ring, the ring will eventually * fill up and traffic will stop). * * Zero-copy monitors only see the frames after they have been consumed: * * - For tx traffic, this is after the slots containing the frames have been * marked as free. Note that this may happen at a considerable delay after * frame transmission, since freeing of slots is often done lazily. * * - For rx traffic, this is after the consumer on the monitored adapter * has released them. In most cases, the consumer is a userspace * application which may have modified the frame contents. * * Several copy monitors may be active on any ring. Zero-copy monitors, * instead, need exclusive access to each of the monitored rings. This may * change in the future, if we implement zero-copy monitor chaining.
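 *
 * As a minimal usage sketch (illustrative only; error handling and the
 * open of /dev/netmap are omitted), a monitor for both directions of
 * port "em0" would be requested roughly as follows:
 *
 *   struct nmreq req;
 *   bzero(&req, sizeof(req));
 *   req.nr_version = NETMAP_API;
 *   strncpy(req.nr_name, "em0", sizeof(req.nr_name));
 *   req.nr_flags = NR_MONITOR_TX | NR_MONITOR_RX;
 *   ioctl(fd, NIOCREGIF, &req);
 *
 * (add NR_ZCOPY_MON to nr_flags for a zero-copy monitor); see
 * netmap_get_monitor_na() below for how these flags are parsed.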
* */ #if defined(__FreeBSD__) #include /* prerequisite */ #include #include #include /* defines used in kernel.h */ #include /* types used in module initialization */ #include #include #include #include #include #include #include /* sockaddrs */ #include #include #include /* bus_dmamap_* */ #include #elif defined(linux) #include "bsd_glue.h" #elif defined(__APPLE__) #warning OSX support is only partial #include "osx_glue.h" +#elif defined(_WIN32) +#include "win_glue.h" #else #error Unsupported platform #endif /* unsupported */ /* * common headers */ #include #include #include #ifdef WITH_MONITOR #define NM_MONITOR_MAXSLOTS 4096 /* ******************************************************************** * functions common to both kinds of monitors ******************************************************************** */ /* nm_sync callback for the monitor's own tx rings. * This makes no sense and always returns an error */ static int netmap_monitor_txsync(struct netmap_kring *kring, int flags) { RD(1, "%s %x", kring->name, flags); return EIO; } /* nm_sync callback for the monitor's own rx rings. * Note that the lock in netmap_zmon_parent_sync only protects * writers among themselves. Synchronization between writers * (i.e., netmap_zmon_parent_txsync and netmap_zmon_parent_rxsync) * and readers (i.e., netmap_zmon_rxsync) relies on memory barriers. */ static int netmap_monitor_rxsync(struct netmap_kring *kring, int flags) { ND("%s %x", kring->name, flags); kring->nr_hwcur = kring->rcur; mb(); return 0; } /* nm_krings_create callbacks for monitors. - * We could use the default netmap_hw_krings_zmon, but - * we don't need the mbq. */ static int netmap_monitor_krings_create(struct netmap_adapter *na) { - return netmap_krings_create(na, 0); + int error = netmap_krings_create(na, 0); + if (error) + return error; + /* override the host rings callbacks */ + na->tx_rings[na->num_tx_rings].nm_sync = netmap_monitor_txsync; + na->rx_rings[na->num_rx_rings].nm_sync = netmap_monitor_rxsync; + return 0; } /* nm_krings_delete callback for monitors */ static void netmap_monitor_krings_delete(struct netmap_adapter *na) { netmap_krings_delete(na); } static u_int nm_txrx2flag(enum txrx t) { return (t == NR_RX ? NR_MONITOR_RX : NR_MONITOR_TX); } /* allocate the monitors array in the monitored kring */ static int nm_monitor_alloc(struct netmap_kring *kring, u_int n) { size_t len; struct netmap_kring **nm; if (n <= kring->max_monitors) /* we already have more entries than requested */ return 0; len = sizeof(struct netmap_kring *) * n; +#ifndef _WIN32 nm = realloc(kring->monitors, len, M_DEVBUF, M_NOWAIT | M_ZERO); +#else + nm = realloc(kring->monitors, len, sizeof(struct netmap_kring *)*kring->max_monitors); +#endif if (nm == NULL) return ENOMEM; kring->monitors = nm; kring->max_monitors = n; return 0; } /* deallocate the monitors array in the monitored kring */ static void nm_monitor_dealloc(struct netmap_kring *kring) { if (kring->monitors) { if (kring->n_monitors > 0) { D("freeing not empty monitor array for %s (%d dangling monitors)!", kring->name, kring->n_monitors); } free(kring->monitors, M_DEVBUF); kring->monitors = NULL; kring->max_monitors = 0; kring->n_monitors = 0; } } /* * monitors work by replacing the nm_sync() and possibly the * nm_notify() callbacks in the monitored rings.
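 * The original callbacks are saved in kring->mon_sync and
 * kring->mon_notify when the first monitor is added, and restored
 * when the last one is removed; see netmap_monitor_add() and
 * netmap_monitor_del() below.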
*/ static int netmap_zmon_parent_txsync(struct netmap_kring *, int); static int netmap_zmon_parent_rxsync(struct netmap_kring *, int); static int netmap_monitor_parent_txsync(struct netmap_kring *, int); static int netmap_monitor_parent_rxsync(struct netmap_kring *, int); static int netmap_monitor_parent_notify(struct netmap_kring *, int); /* add the monitor mkring to the list of monitors of kring. * If this is the first monitor, intercept the callbacks */ static int netmap_monitor_add(struct netmap_kring *mkring, struct netmap_kring *kring, int zcopy) { - int error = 0; + int error = NM_IRQ_COMPLETED; /* synchronize with concurrently running nm_sync()s */ - nm_kr_get(kring); + nm_kr_stop(kring, NM_KR_LOCKED); /* make sure the monitor array exists and is big enough */ error = nm_monitor_alloc(kring, kring->n_monitors + 1); if (error) goto out; kring->monitors[kring->n_monitors] = mkring; mkring->mon_pos = kring->n_monitors; kring->n_monitors++; if (kring->n_monitors == 1) { /* this is the first monitor, intercept callbacks */ - D("%s: intercept callbacks on %s", mkring->name, kring->name); + ND("%s: intercept callbacks on %s", mkring->name, kring->name); kring->mon_sync = kring->nm_sync; /* zcopy monitors do not override nm_notify(), but * we save the original one regardless, so that * netmap_monitor_del() does not need to know the * monitor type */ kring->mon_notify = kring->nm_notify; if (kring->tx == NR_TX) { kring->nm_sync = (zcopy ? netmap_zmon_parent_txsync : netmap_monitor_parent_txsync); } else { kring->nm_sync = (zcopy ? netmap_zmon_parent_rxsync : netmap_monitor_parent_rxsync); if (!zcopy) { /* also intercept notify */ kring->nm_notify = netmap_monitor_parent_notify; kring->mon_tail = kring->nr_hwtail; } } } out: - nm_kr_put(kring); + nm_kr_start(kring); return error; } /* remove the monitor mkring from the list of monitors of kring. * If this is the last monitor, restore the original callbacks */ static void netmap_monitor_del(struct netmap_kring *mkring, struct netmap_kring *kring) { /* synchronize with concurrently running nm_sync()s */ - nm_kr_get(kring); + nm_kr_stop(kring, NM_KR_LOCKED); kring->n_monitors--; if (mkring->mon_pos != kring->n_monitors) { kring->monitors[mkring->mon_pos] = kring->monitors[kring->n_monitors]; kring->monitors[mkring->mon_pos]->mon_pos = mkring->mon_pos; } kring->monitors[kring->n_monitors] = NULL; if (kring->n_monitors == 0) { /* this was the last monitor, restore callbacks and delete monitor array */ - D("%s: restoring sync on %s: %p", mkring->name, kring->name, kring->mon_sync); + ND("%s: restoring sync on %s: %p", mkring->name, kring->name, kring->mon_sync); kring->nm_sync = kring->mon_sync; kring->mon_sync = NULL; if (kring->tx == NR_RX) { - D("%s: restoring notify on %s: %p", + ND("%s: restoring notify on %s: %p", mkring->name, kring->name, kring->mon_notify); kring->nm_notify = kring->mon_notify; kring->mon_notify = NULL; } nm_monitor_dealloc(kring); } - nm_kr_put(kring); + nm_kr_start(kring); } /* This is called when the monitored adapter leaves netmap mode * (see netmap_do_unregif). * We need to notify the monitors that the monitored rings are gone. * We do this by setting their mna->priv.np_na to NULL. * Note that the rings are already stopped when this happens, so * no monitor ring callback can be active.
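 * The NULL np_na is then seen by netmap_monitor_reg_common() when the
 * monitor itself is unregistered, which skips the (now gone) parent
 * krings.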
*/ void netmap_monitor_stop(struct netmap_adapter *na) { enum txrx t; for_rx_tx(t) { u_int i; - for (i = 0; i < nma_get_nrings(na, t); i++) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { struct netmap_kring *kring = &NMR(na, t)[i]; u_int j; for (j = 0; j < kring->n_monitors; j++) { struct netmap_kring *mkring = kring->monitors[j]; struct netmap_monitor_adapter *mna = (struct netmap_monitor_adapter *)mkring->na; /* forget about this adapter */ netmap_adapter_put(mna->priv.np_na); mna->priv.np_na = NULL; } } } } /* common functions for the nm_register() callbacks of both kinds of * monitors. */ static int netmap_monitor_reg_common(struct netmap_adapter *na, int onoff, int zmon) { struct netmap_monitor_adapter *mna = (struct netmap_monitor_adapter *)na; struct netmap_priv_d *priv = &mna->priv; struct netmap_adapter *pna = priv->np_na; struct netmap_kring *kring, *mkring; int i; enum txrx t; ND("%p: onoff %d", na, onoff); if (onoff) { if (pna == NULL) { /* parent left netmap mode, fatal */ D("%s: internal error", na->name); return ENXIO; } for_rx_tx(t) { if (mna->flags & nm_txrx2flag(t)) { for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) { kring = &NMR(pna, t)[i]; mkring = &na->rx_rings[i]; - netmap_monitor_add(mkring, kring, zmon); + if (nm_kring_pending_on(mkring)) { + netmap_monitor_add(mkring, kring, zmon); + mkring->nr_mode = NKR_NETMAP_ON; + } } } } na->na_flags |= NAF_NETMAP_ON; } else { - if (pna == NULL) { - D("%s: parent left netmap mode, nothing to restore", na->name); - return 0; - } - na->na_flags &= ~NAF_NETMAP_ON; + if (na->active_fds == 0) + na->na_flags &= ~NAF_NETMAP_ON; for_rx_tx(t) { if (mna->flags & nm_txrx2flag(t)) { for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) { - kring = &NMR(pna, t)[i]; mkring = &na->rx_rings[i]; - netmap_monitor_del(mkring, kring); + if (nm_kring_pending_off(mkring)) { + mkring->nr_mode = NKR_NETMAP_OFF; + /* we cannot access the parent krings if the parent + * has left netmap mode.
This is signaled by a NULL + * pna pointer + */ + if (pna) { + kring = &NMR(pna, t)[i]; + netmap_monitor_del(mkring, kring); + } + } } } } } return 0; } /* **************************************************************** * functions specific for zero-copy monitors **************************************************************** */ /* * Common function for both zero-copy tx and rx nm_sync() * callbacks */ static int netmap_zmon_parent_sync(struct netmap_kring *kring, int flags, enum txrx tx) { struct netmap_kring *mkring = kring->monitors[0]; struct netmap_ring *ring = kring->ring, *mring; int error = 0; int rel_slots, free_slots, busy, sent = 0; u_int beg, end, i; u_int lim = kring->nkr_num_slots - 1, mlim; // = mkring->nkr_num_slots - 1; if (mkring == NULL) { RD(5, "NULL monitor on %s", kring->name); return 0; } mring = mkring->ring; mlim = mkring->nkr_num_slots - 1; /* get the released slots (rel_slots) */ if (tx == NR_TX) { beg = kring->nr_hwtail; error = kring->mon_sync(kring, flags); if (error) return error; end = kring->nr_hwtail; } else { /* NR_RX */ beg = kring->nr_hwcur; end = kring->rhead; } rel_slots = end - beg; if (rel_slots < 0) rel_slots += kring->nkr_num_slots; if (!rel_slots) { /* no released slots, but we still need * to call rxsync if this is an rx ring */ goto out_rxsync; } /* we need to lock the monitor receive ring, since it * is the target of both tx and rx traffic from the monitored * adapter */ mtx_lock(&mkring->q_lock); /* get the free slots available on the monitor ring */ i = mkring->nr_hwtail; busy = i - mkring->nr_hwcur; if (busy < 0) busy += mkring->nkr_num_slots; free_slots = mlim - busy; if (!free_slots) goto out; /* swap min(free_slots, rel_slots) slots */ if (free_slots < rel_slots) { beg += (rel_slots - free_slots); if (beg >= kring->nkr_num_slots) beg -= kring->nkr_num_slots; rel_slots = free_slots; } sent = rel_slots; for ( ; rel_slots; rel_slots--) { struct netmap_slot *s = &ring->slot[beg]; struct netmap_slot *ms = &mring->slot[i]; uint32_t tmp; tmp = ms->buf_idx; ms->buf_idx = s->buf_idx; s->buf_idx = tmp; ND(5, "beg %d buf_idx %d", beg, tmp); tmp = ms->len; ms->len = s->len; s->len = tmp; s->flags |= NS_BUF_CHANGED; beg = nm_next(beg, lim); i = nm_next(i, mlim); } mb(); mkring->nr_hwtail = i; out: mtx_unlock(&mkring->q_lock); if (sent) { /* notify the new frames to the monitor */ mkring->nm_notify(mkring, 0); } out_rxsync: if (tx == NR_RX) error = kring->mon_sync(kring, flags); return error; } /* callback used to replace the nm_sync callback in the monitored tx rings */ static int netmap_zmon_parent_txsync(struct netmap_kring *kring, int flags) { ND("%s %x", kring->name, flags); return netmap_zmon_parent_sync(kring, flags, NR_TX); } /* callback used to replace the nm_sync callback in the monitored rx rings */ static int netmap_zmon_parent_rxsync(struct netmap_kring *kring, int flags) { ND("%s %x", kring->name, flags); return netmap_zmon_parent_sync(kring, flags, NR_RX); } static int netmap_zmon_reg(struct netmap_adapter *na, int onoff) { return netmap_monitor_reg_common(na, onoff, 1 /* zcopy */); } /* nm_dtor callback for monitors */ static void netmap_zmon_dtor(struct netmap_adapter *na) { struct netmap_monitor_adapter *mna = (struct netmap_monitor_adapter *)na; struct netmap_priv_d *priv = &mna->priv; struct netmap_adapter *pna = priv->np_na; netmap_adapter_put(pna); } /* **************************************************************** * functions specific for copy monitors **************************************************************** */ static
void netmap_monitor_parent_sync(struct netmap_kring *kring, u_int first_new, int new_slots) { u_int j; for (j = 0; j < kring->n_monitors; j++) { struct netmap_kring *mkring = kring->monitors[j]; u_int i, mlim, beg; int free_slots, busy, sent = 0, m; u_int lim = kring->nkr_num_slots - 1; struct netmap_ring *ring = kring->ring, *mring = mkring->ring; u_int max_len = NETMAP_BUF_SIZE(mkring->na); mlim = mkring->nkr_num_slots - 1; /* we need to lock the monitor receive ring, since it * is the target of both tx and rx traffic from the monitored * adapter */ mtx_lock(&mkring->q_lock); /* get the free slots available on the monitor ring */ i = mkring->nr_hwtail; busy = i - mkring->nr_hwcur; if (busy < 0) busy += mkring->nkr_num_slots; free_slots = mlim - busy; if (!free_slots) goto out; /* copy min(free_slots, new_slots) slots */ m = new_slots; beg = first_new; if (free_slots < m) { beg += (m - free_slots); if (beg >= kring->nkr_num_slots) beg -= kring->nkr_num_slots; m = free_slots; } for ( ; m; m--) { struct netmap_slot *s = &ring->slot[beg]; struct netmap_slot *ms = &mring->slot[i]; u_int copy_len = s->len; char *src = NMB(kring->na, s), *dst = NMB(mkring->na, ms); if (unlikely(copy_len > max_len)) { RD(5, "%s->%s: truncating %d to %d", kring->name, mkring->name, copy_len, max_len); copy_len = max_len; } memcpy(dst, src, copy_len); ms->len = copy_len; sent++; beg = nm_next(beg, lim); i = nm_next(i, mlim); } mb(); mkring->nr_hwtail = i; out: mtx_unlock(&mkring->q_lock); if (sent) { /* notify the new frames to the monitor */ mkring->nm_notify(mkring, 0); } } } /* callback used to replace the nm_sync callback in the monitored tx rings */ static int netmap_monitor_parent_txsync(struct netmap_kring *kring, int flags) { u_int first_new; int new_slots; /* get the new slots */ first_new = kring->nr_hwcur; new_slots = kring->rhead - first_new; if (new_slots < 0) new_slots += kring->nkr_num_slots; if (new_slots) netmap_monitor_parent_sync(kring, first_new, new_slots); return kring->mon_sync(kring, flags); } /* callback used to replace the nm_sync callback in the monitored rx rings */ static int netmap_monitor_parent_rxsync(struct netmap_kring *kring, int flags) { u_int first_new; int new_slots, error; /* get the new slots */ error = kring->mon_sync(kring, flags); if (error) return error; first_new = kring->mon_tail; new_slots = kring->nr_hwtail - first_new; if (new_slots < 0) new_slots += kring->nkr_num_slots; if (new_slots) netmap_monitor_parent_sync(kring, first_new, new_slots); kring->mon_tail = kring->nr_hwtail; return 0; } /* callback used to replace the nm_notify() callback in the monitored rx rings */ static int netmap_monitor_parent_notify(struct netmap_kring *kring, int flags) { + int (*notify)(struct netmap_kring*, int); ND(5, "%s %x", kring->name, flags); /* ?xsync callbacks have tryget called by their callers * (NIOCREGIF and poll()), but here we have to call it * by ourselves */ - if (nm_kr_tryget(kring)) - goto out; - netmap_monitor_parent_rxsync(kring, NAF_FORCE_READ); + if (nm_kr_tryget(kring, 0, NULL)) { + /* in all cases, just skip the sync */ + return NM_IRQ_COMPLETED; + } + if (kring->n_monitors > 0) { + netmap_monitor_parent_rxsync(kring, NAF_FORCE_READ); + notify = kring->mon_notify; + } else { + /* we are no longer monitoring this ring, so both + * mon_sync and mon_notify are NULL + */ + notify = kring->nm_notify; + } nm_kr_put(kring); -out: - return kring->mon_notify(kring, flags); + return notify(kring, flags); } static int netmap_monitor_reg(struct netmap_adapter *na, int onoff) {
return netmap_monitor_reg_common(na, onoff, 0 /* no zcopy */); } static void netmap_monitor_dtor(struct netmap_adapter *na) { struct netmap_monitor_adapter *mna = (struct netmap_monitor_adapter *)na; struct netmap_priv_d *priv = &mna->priv; struct netmap_adapter *pna = priv->np_na; netmap_adapter_put(pna); } /* check if nmr is a request for a monitor adapter that we can satisfy */ int netmap_get_monitor_na(struct nmreq *nmr, struct netmap_adapter **na, int create) { struct nmreq pnmr; struct netmap_adapter *pna; /* parent adapter */ struct netmap_monitor_adapter *mna; + struct ifnet *ifp = NULL; int i, error; enum txrx t; int zcopy = (nmr->nr_flags & NR_ZCOPY_MON); char monsuff[10] = ""; if ((nmr->nr_flags & (NR_MONITOR_TX | NR_MONITOR_RX)) == 0) { + if (nmr->nr_flags & NR_ZCOPY_MON) { + /* the flag makes no sense unless you are + * creating a monitor + */ + return EINVAL; + } ND("not a monitor"); return 0; } /* this is a request for a monitor adapter */ - D("flags %x", nmr->nr_flags); + ND("flags %x", nmr->nr_flags); mna = malloc(sizeof(*mna), M_DEVBUF, M_NOWAIT | M_ZERO); if (mna == NULL) { D("memory error"); return ENOMEM; } /* first, try to find the adapter that we want to monitor * We use the same nmr, after we have turned off the monitor flags. * In this way we can potentially monitor everything netmap understands, * except other monitors. */ memcpy(&pnmr, nmr, sizeof(pnmr)); - pnmr.nr_flags &= ~(NR_MONITOR_TX | NR_MONITOR_RX); - error = netmap_get_na(&pnmr, &pna, create); + pnmr.nr_flags &= ~(NR_MONITOR_TX | NR_MONITOR_RX | NR_ZCOPY_MON); + error = netmap_get_na(&pnmr, &pna, &ifp, create); if (error) { D("parent lookup failed: %d", error); + free(mna, M_DEVBUF); return error; } - D("found parent: %s", pna->name); + ND("found parent: %s", pna->name); if (!nm_netmap_on(pna)) { /* parent not in netmap mode */ /* XXX we can wait for the parent to enter netmap mode, * by intercepting its nm_register callback (2014-03-16) */ D("%s not in netmap mode", pna->name); error = EINVAL; goto put_out; } /* grab all the rings we need in the parent */ mna->priv.np_na = pna; error = netmap_interp_ringid(&mna->priv, nmr->nr_ringid, nmr->nr_flags); if (error) { D("ringid error"); goto put_out; } if (mna->priv.np_qlast[NR_TX] - mna->priv.np_qfirst[NR_TX] == 1) { snprintf(monsuff, 10, "-%d", mna->priv.np_qfirst[NR_TX]); } snprintf(mna->up.name, sizeof(mna->up.name), "%s%s/%s%s%s", pna->name, monsuff, zcopy ? "z" : "", (nmr->nr_flags & NR_MONITOR_RX) ? "r" : "", (nmr->nr_flags & NR_MONITOR_TX) ? "t" : ""); if (zcopy) { /* zero copy monitors need exclusive access to the monitored rings */ for_rx_tx(t) { if (! (nmr->nr_flags & nm_txrx2flag(t))) continue; for (i = mna->priv.np_qfirst[t]; i < mna->priv.np_qlast[t]; i++) { struct netmap_kring *kring = &NMR(pna, t)[i]; if (kring->n_monitors > 0) { error = EBUSY; D("ring %s already monitored by %s", kring->name, kring->monitors[0]->name); goto put_out; } } } mna->up.nm_register = netmap_zmon_reg; mna->up.nm_dtor = netmap_zmon_dtor; /* to have zero copy, we need to use the same memory allocator * as the monitored port */ mna->up.nm_mem = pna->nm_mem; mna->up.na_lut = pna->na_lut; } else { /* normal monitors are incompatible with zero copy ones */ for_rx_tx(t) { if (! 
(nmr->nr_flags & nm_txrx2flag(t))) continue; for (i = mna->priv.np_qfirst[t]; i < mna->priv.np_qlast[t]; i++) { struct netmap_kring *kring = &NMR(pna, t)[i]; if (kring->n_monitors > 0 && kring->monitors[0]->na->nm_register == netmap_zmon_reg) { error = EBUSY; D("ring busy"); goto put_out; } } } mna->up.nm_rxsync = netmap_monitor_rxsync; mna->up.nm_register = netmap_monitor_reg; mna->up.nm_dtor = netmap_monitor_dtor; } /* the monitor supports the host rings iff the parent does */ mna->up.na_flags = (pna->na_flags & NAF_HOST_RINGS); /* a do-nothing txsync: monitors cannot be used to inject packets */ mna->up.nm_txsync = netmap_monitor_txsync; mna->up.nm_rxsync = netmap_monitor_rxsync; mna->up.nm_krings_create = netmap_monitor_krings_create; mna->up.nm_krings_delete = netmap_monitor_krings_delete; mna->up.num_tx_rings = 1; // XXX we don't need it, but field can't be zero /* we set the number of our rx_rings to be max(num_tx_rings, num_rx_rings) * in the parent */ mna->up.num_rx_rings = pna->num_rx_rings; if (pna->num_tx_rings > pna->num_rx_rings) mna->up.num_rx_rings = pna->num_tx_rings; /* by default, the number of slots is the same as in * the parent rings, but the user may ask for a different * number */ mna->up.num_tx_desc = nmr->nr_tx_slots; nm_bound_var(&mna->up.num_tx_desc, pna->num_tx_desc, 1, NM_MONITOR_MAXSLOTS, NULL); mna->up.num_rx_desc = nmr->nr_rx_slots; nm_bound_var(&mna->up.num_rx_desc, pna->num_rx_desc, 1, NM_MONITOR_MAXSLOTS, NULL); error = netmap_attach_common(&mna->up); if (error) { D("attach_common error"); goto put_out; } /* remember the traffic directions we have to monitor */ mna->flags = (nmr->nr_flags & (NR_MONITOR_TX | NR_MONITOR_RX)); *na = &mna->up; netmap_adapter_get(*na); - /* write the configuration back */ - nmr->nr_tx_rings = mna->up.num_tx_rings; - nmr->nr_rx_rings = mna->up.num_rx_rings; - nmr->nr_tx_slots = mna->up.num_tx_desc; - nmr->nr_rx_slots = mna->up.num_rx_desc; - /* keep the reference to the parent */ - D("monitor ok"); + ND("monitor ok"); + /* drop the reference to the ifp, if any */ + if (ifp) + if_rele(ifp); + return 0; put_out: - netmap_adapter_put(pna); + netmap_unget_na(pna, ifp); free(mna, M_DEVBUF); return error; } #endif /* WITH_MONITOR */ Index: head/sys/dev/netmap/netmap_offloadings.c =================================================================== --- head/sys/dev/netmap/netmap_offloadings.c (revision 307393) +++ head/sys/dev/netmap/netmap_offloadings.c (revision 307394) @@ -1,402 +1,490 @@ /* - * Copyright (C) 2014 Vincenzo Maffione. All rights reserved. + * Copyright (C) 2014-2015 Vincenzo Maffione + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* $FreeBSD$ */ #if defined(__FreeBSD__) #include /* prerequisite */ #include #include #include /* defines used in kernel.h */ -#include /* types used in module initialization */ #include /* types used in module initialization */ #include +#include #include /* struct socket */ #include /* sockaddrs */ #include #include #include /* bus_dmamap_* */ #include #elif defined(linux) #include "bsd_glue.h" #elif defined(__APPLE__) #warning OSX support is only partial #include "osx_glue.h" #else #error Unsupported platform #endif /* unsupported */ #include #include /* This routine is called by bdg_mismatch_datapath() when it finishes * accumulating bytes for a segment, in order to fix some fields in the * segment headers (which still contain the same content as the header - * of the original GSO packet). 'buf' points to the beginning (e.g. - * the ethernet header) of the segment, and 'len' is its length. + * of the original GSO packet). 'pkt' points to the beginning of the IP + * header of the segment, while 'len' is the length of the IP packet. */ -static void gso_fix_segment(uint8_t *buf, size_t len, u_int idx, - u_int segmented_bytes, u_int last_segment, - u_int tcp, u_int iphlen) +static void +gso_fix_segment(uint8_t *pkt, size_t len, u_int ipv4, u_int iphlen, u_int tcp, + u_int idx, u_int segmented_bytes, u_int last_segment) { - struct nm_iphdr *iph = (struct nm_iphdr *)(buf + 14); - struct nm_ipv6hdr *ip6h = (struct nm_ipv6hdr *)(buf + 14); + struct nm_iphdr *iph = (struct nm_iphdr *)(pkt); + struct nm_ipv6hdr *ip6h = (struct nm_ipv6hdr *)(pkt); uint16_t *check = NULL; uint8_t *check_data = NULL; - if (iphlen == 20) { + if (ipv4) { /* Set the IPv4 "Total Length" field. */ - iph->tot_len = htobe16(len-14); + iph->tot_len = htobe16(len); ND("ip total length %u", be16toh(ip->tot_len)); /* Set the IPv4 "Identification" field. */ iph->id = htobe16(be16toh(iph->id) + idx); ND("ip identification %u", be16toh(iph->id)); /* Compute and insert the IPv4 header checksum. */ iph->check = 0; - iph->check = nm_csum_ipv4(iph); + iph->check = nm_os_csum_ipv4(iph); ND("IP csum %x", be16toh(iph->check)); - } else {/* if (iphlen == 40) */ + } else { /* Set the IPv6 "Payload Len" field. */ - ip6h->payload_len = htobe16(len-14-iphlen); + ip6h->payload_len = htobe16(len-iphlen); } if (tcp) { - struct nm_tcphdr *tcph = (struct nm_tcphdr *)(buf + 14 + iphlen); + struct nm_tcphdr *tcph = (struct nm_tcphdr *)(pkt + iphlen); /* Set the TCP sequence number. */ tcph->seq = htobe32(be32toh(tcph->seq) + segmented_bytes); ND("tcp seq %u", be32toh(tcph->seq)); /* Zero the PSH and FIN TCP flags if this is not the last segment. */ if (!last_segment) tcph->flags &= ~(0x8 | 0x1); ND("last_segment %u", last_segment); check = &tcph->check; check_data = (uint8_t *)tcph; } else { /* UDP */ - struct nm_udphdr *udph = (struct nm_udphdr *)(buf + 14 + iphlen); + struct nm_udphdr *udph = (struct nm_udphdr *)(pkt + iphlen); /* Set the UDP 'Length' field. 
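 * It covers the UDP header plus the payload, which is why only the
 * IP header length is subtracted from 'len' below.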
*/ - udph->len = htobe16(len-14-iphlen); + udph->len = htobe16(len-iphlen); check = &udph->check; check_data = (uint8_t *)udph; } /* Compute and insert TCP/UDP checksum. */ *check = 0; - if (iphlen == 20) - nm_csum_tcpudp_ipv4(iph, check_data, len-14-iphlen, check); + if (ipv4) + nm_os_csum_tcpudp_ipv4(iph, check_data, len-iphlen, check); else - nm_csum_tcpudp_ipv6(ip6h, check_data, len-14-iphlen, check); + nm_os_csum_tcpudp_ipv6(ip6h, check_data, len-iphlen, check); ND("TCP/UDP csum %x", be16toh(*check)); } +static int +vnet_hdr_is_bad(struct nm_vnet_hdr *vh) +{ + uint8_t gso_type = vh->gso_type & ~VIRTIO_NET_HDR_GSO_ECN; + return ( + (gso_type != VIRTIO_NET_HDR_GSO_NONE && + gso_type != VIRTIO_NET_HDR_GSO_TCPV4 && + gso_type != VIRTIO_NET_HDR_GSO_UDP && + gso_type != VIRTIO_NET_HDR_GSO_TCPV6) + || + (vh->flags & ~(VIRTIO_NET_HDR_F_NEEDS_CSUM + | VIRTIO_NET_HDR_F_DATA_VALID)) + ); +} + /* The VALE mismatch datapath implementation. */ -void bdg_mismatch_datapath(struct netmap_vp_adapter *na, - struct netmap_vp_adapter *dst_na, - struct nm_bdg_fwd *ft_p, struct netmap_ring *ring, - u_int *j, u_int lim, u_int *howmany) +void +bdg_mismatch_datapath(struct netmap_vp_adapter *na, + struct netmap_vp_adapter *dst_na, + const struct nm_bdg_fwd *ft_p, + struct netmap_ring *dst_ring, + u_int *j, u_int lim, u_int *howmany) { - struct netmap_slot *slot = NULL; + struct netmap_slot *dst_slot = NULL; struct nm_vnet_hdr *vh = NULL; - /* Number of source slots to process. */ - u_int frags = ft_p->ft_frags; - struct nm_bdg_fwd *ft_end = ft_p + frags; + const struct nm_bdg_fwd *ft_end = ft_p + ft_p->ft_frags; /* Source and destination pointers. */ uint8_t *dst, *src; size_t src_len, dst_len; + /* Indices and counters for the destination ring. */ u_int j_start = *j; + u_int j_cur = j_start; u_int dst_slots = 0; - /* If the source port uses the offloadings, while destination doesn't, - * we grab the source virtio-net header and do the offloadings here. - */ - if (na->virt_hdr_len && !dst_na->virt_hdr_len) { - vh = (struct nm_vnet_hdr *)ft_p->ft_buf; + if (unlikely(ft_p == ft_end)) { + RD(3, "No source slots to process"); + return; } /* Init source and dest pointers. */ src = ft_p->ft_buf; src_len = ft_p->ft_len; - slot = &ring->slot[*j]; - dst = NMB(&dst_na->up, slot); + dst_slot = &dst_ring->slot[j_cur]; + dst = NMB(&dst_na->up, dst_slot); dst_len = src_len; + /* If the source port uses the offloadings, while destination doesn't, + * we grab the source virtio-net header and do the offloadings here. + */ + if (na->up.virt_hdr_len && !dst_na->up.virt_hdr_len) { + vh = (struct nm_vnet_hdr *)src; + /* Initial sanity check on the source virtio-net header. If + * something seems wrong, just drop the packet. */ + if (src_len < na->up.virt_hdr_len) { + RD(3, "Short src vnet header, dropping"); + return; + } + if (vnet_hdr_is_bad(vh)) { + RD(3, "Bad src vnet header, dropping"); + return; + } + } + /* We are processing the first input slot and there is a mismatch * between source and destination virt_hdr_len (SHL and DHL). 
* When a client is using virtio-net headers, the header length * can be: * - 10: the header corresponds to the struct nm_vnet_hdr * - 12: the first 10 bytes correspond to the struct * virtio_net_hdr, and the last 2 bytes store the * "mergeable buffers" info, which is an optional * hint that can be zeroed for compatibility * * The destination header is therefore built according to the * following table: * * SHL | DHL | destination header * ----------------------------- * 0 | 10 | zero * 0 | 12 | zero * 10 | 0 | doesn't exist * 10 | 12 | first 10 bytes are copied from source header, last 2 are zero * 12 | 0 | doesn't exist * 12 | 10 | copied from the first 10 bytes of source header */ - bzero(dst, dst_na->virt_hdr_len); - if (na->virt_hdr_len && dst_na->virt_hdr_len) + bzero(dst, dst_na->up.virt_hdr_len); + if (na->up.virt_hdr_len && dst_na->up.virt_hdr_len) memcpy(dst, src, sizeof(struct nm_vnet_hdr)); /* Skip the virtio-net headers. */ - src += na->virt_hdr_len; - src_len -= na->virt_hdr_len; - dst += dst_na->virt_hdr_len; - dst_len = dst_na->virt_hdr_len + src_len; + src += na->up.virt_hdr_len; + src_len -= na->up.virt_hdr_len; + dst += dst_na->up.virt_hdr_len; + dst_len = dst_na->up.virt_hdr_len + src_len; /* Here it could be dst_len == 0 (which implies src_len == 0), * so we avoid passing a zero length fragment. */ if (dst_len == 0) { ft_p++; src = ft_p->ft_buf; src_len = ft_p->ft_len; dst_len = src_len; } if (vh && vh->gso_type != VIRTIO_NET_HDR_GSO_NONE) { u_int gso_bytes = 0; /* Length of the GSO packet header. */ u_int gso_hdr_len = 0; /* Pointer to the GSO packet header. Assume it is in a single fragment. */ uint8_t *gso_hdr = NULL; /* Index of the current segment. */ u_int gso_idx = 0; /* Payload data bytes segmented so far (e.g. TCP data bytes). */ u_int segmented_bytes = 0; + /* Is this an IPv4 or IPv6 GSO packet? */ + u_int ipv4 = 0; /* Length of the IP header (20 if IPv4, 40 if IPv6). */ u_int iphlen = 0; + /* Length of the Ethernet header (18 if 802.1q, otherwise 14). */ + u_int ethhlen = 14; /* Is this a TCP or a UDP GSO packet? */ u_int tcp = ((vh->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) == VIRTIO_NET_HDR_GSO_UDP) ? 0 : 1; /* Segment the GSO packet contained in the input slots (frags). */ - while (ft_p != ft_end) { + for (;;) { size_t copy; + if (dst_slots >= *howmany) { + /* We still have work to do, but we've run out of + * dst slots, so we have to drop the packet. */ + RD(3, "Not enough slots, dropping GSO packet"); + return; + } + /* Grab the GSO header if we don't have it. */ if (!gso_hdr) { uint16_t ethertype; gso_hdr = src; /* Look at the 'Ethertype' field to see if this packet - * is IPv4 or IPv6. - */ - ethertype = be16toh(*((uint16_t *)(gso_hdr + 12))); - if (ethertype == 0x0800) - iphlen = 20; - else /* if (ethertype == 0x86DD) */ - iphlen = 40; + * is IPv4 or IPv6, taking into account VLAN + * encapsulation.
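+	 * Each 802.1q tag (ethertype 0x8100) inserts 4 extra
+	 * bytes before the encapsulated ethertype, so the loop
+	 * below keeps advancing ethhlen by 4 until a non-VLAN
+	 * ethertype is found.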
*/ + for (;;) { + if (src_len < ethhlen) { + RD(3, "Short GSO fragment [eth], dropping"); + return; + } + ethertype = be16toh(*((uint16_t *) + (gso_hdr + ethhlen - 2))); + if (ethertype != 0x8100) /* not 802.1q */ + break; + ethhlen += 4; + } + switch (ethertype) { + case 0x0800: /* IPv4 */ + { + struct nm_iphdr *iph = (struct nm_iphdr *) + (gso_hdr + ethhlen); + + if (src_len < ethhlen + 20) { + RD(3, "Short GSO fragment " + "[IPv4], dropping"); + return; + } + ipv4 = 1; + iphlen = 4 * (iph->version_ihl & 0x0F); + break; + } + case 0x86DD: /* IPv6 */ + ipv4 = 0; + iphlen = 40; + break; + default: + RD(3, "Unsupported ethertype, " + "dropping GSO packet"); + return; + } ND(3, "type=%04x", ethertype); + if (src_len < ethhlen + iphlen) { + RD(3, "Short GSO fragment [IP], dropping"); + return; + } + /* Compute gso_hdr_len. For TCP we need to read the * content of the 'Data Offset' field. */ if (tcp) { - struct nm_tcphdr *tcph = - (struct nm_tcphdr *)&gso_hdr[14+iphlen]; + struct nm_tcphdr *tcph = (struct nm_tcphdr *) + (gso_hdr + ethhlen + iphlen); - gso_hdr_len = 14 + iphlen + 4*(tcph->doff >> 4); - } else - gso_hdr_len = 14 + iphlen + 8; /* UDP */ + if (src_len < ethhlen + iphlen + 20) { + RD(3, "Short GSO fragment " + "[TCP], dropping"); + return; + } + gso_hdr_len = ethhlen + iphlen + + 4 * (tcph->doff >> 4); + } else { + gso_hdr_len = ethhlen + iphlen + 8; /* UDP */ + } + if (src_len < gso_hdr_len) { + RD(3, "Short GSO fragment [TCP/UDP], dropping"); + return; + } + ND(3, "gso_hdr_len %u gso_mtu %d", gso_hdr_len, - dst_na->mfs); + dst_na->mfs); /* Advance source pointers. */ src += gso_hdr_len; src_len -= gso_hdr_len; if (src_len == 0) { ft_p++; if (ft_p == ft_end) break; src = ft_p->ft_buf; src_len = ft_p->ft_len; - continue; } } /* Fill in the header of the current segment. */ if (gso_bytes == 0) { memcpy(dst, gso_hdr, gso_hdr_len); gso_bytes = gso_hdr_len; } /* Fill in data and update source and dest pointers. */ copy = src_len; if (gso_bytes + copy > dst_na->mfs) copy = dst_na->mfs - gso_bytes; memcpy(dst + gso_bytes, src, copy); gso_bytes += copy; src += copy; src_len -= copy; /* A segment is complete or we have processed all the GSO payload bytes. */ if (gso_bytes >= dst_na->mfs || (src_len == 0 && ft_p + 1 == ft_end)) { /* After raw segmentation, we must fix some header * fields and compute checksums, in a protocol dependent * way. */ - gso_fix_segment(dst, gso_bytes, gso_idx, - segmented_bytes, - src_len == 0 && ft_p + 1 == ft_end, - tcp, iphlen); + gso_fix_segment(dst + ethhlen, gso_bytes - ethhlen, + ipv4, iphlen, tcp, + gso_idx, segmented_bytes, + src_len == 0 && ft_p + 1 == ft_end); ND("frame %u completed with %d bytes", gso_idx, (int)gso_bytes); - slot->len = gso_bytes; - slot->flags = 0; + dst_slot->len = gso_bytes; + dst_slot->flags = 0; + dst_slots++; segmented_bytes += gso_bytes - gso_hdr_len; - dst_slots++; - - /* Next destination slot. */ - *j = nm_next(*j, lim); - slot = &ring->slot[*j]; - dst = NMB(&dst_na->up, slot); - gso_bytes = 0; gso_idx++; + + /* Next destination slot. */ + j_cur = nm_next(j_cur, lim); + dst_slot = &dst_ring->slot[j_cur]; + dst = NMB(&dst_na->up, dst_slot); } /* Next input slot. */ if (src_len == 0) { ft_p++; if (ft_p == ft_end) break; src = ft_p->ft_buf; src_len = ft_p->ft_len; } } ND(3, "%d bytes segmented", segmented_bytes); } else { /* Address of a checksum field in a destination slot. */ uint16_t *check = NULL; /* Accumulator for an unfolded checksum. */ rawsum_t csum = 0; /* Process a non-GSO packet. */ /* Init 'check' if necessary.
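*
* The nm_os_csum_raw()/nm_os_csum_fold() helpers used below compute the
* usual unfolded Internet checksum: sum 16-bit big-endian words into a
* wide accumulator, then fold the carries and complement. A portable
* sketch of what they are expected to do (the real helpers are
* OS-specific and may use hardware assists):
*
*	static rawsum_t
*	csum_raw_sketch(const uint8_t *d, size_t len, rawsum_t sum)
*	{
*		size_t i;
*
*		for (i = 0; i < len; i++)
*			sum += (i & 1) ? d[i] : ((rawsum_t)d[i]) << 8;
*		return sum;
*	}
*
*	static uint16_t
*	csum_fold_sketch(rawsum_t sum)
*	{
*		while (sum >> 16)
*			sum = (sum & 0xffff) + (sum >> 16);
*		return (uint16_t)~sum;
*	}
*
* Carrying the accumulator across calls is what lets the non-GSO loop
* below checksum a packet one fragment at a time.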
*/ if (vh && (vh->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM)) { if (unlikely(vh->csum_offset + vh->csum_start > src_len)) D("invalid checksum request"); else check = (uint16_t *)(dst + vh->csum_start + vh->csum_offset); } while (ft_p != ft_end) { /* Init/update the packet checksum if needed. */ if (vh && (vh->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM)) { if (!dst_slots) - csum = nm_csum_raw(src + vh->csum_start, + csum = nm_os_csum_raw(src + vh->csum_start, src_len - vh->csum_start, 0); else - csum = nm_csum_raw(src, src_len, csum); + csum = nm_os_csum_raw(src, src_len, csum); } /* Round to a multiple of 64 */ src_len = (src_len + 63) & ~63; if (ft_p->ft_flags & NS_INDIRECT) { if (copyin(src, dst, src_len)) { /* Invalid user pointer, pretend len is 0. */ dst_len = 0; } } else { memcpy(dst, src, (int)src_len); } - slot->len = dst_len; - + dst_slot->len = dst_len; dst_slots++; /* Next destination slot. */ - *j = nm_next(*j, lim); - slot = &ring->slot[*j]; - dst = NMB(&dst_na->up, slot); + j_cur = nm_next(j_cur, lim); + dst_slot = &dst_ring->slot[j_cur]; + dst = NMB(&dst_na->up, dst_slot); /* Next source slot. */ ft_p++; src = ft_p->ft_buf; dst_len = src_len = ft_p->ft_len; - } /* Finalize (fold) the checksum if needed. */ if (check && vh && (vh->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM)) { - *check = nm_csum_fold(csum); + *check = nm_os_csum_fold(csum); } ND(3, "using %u dst_slots", dst_slots); - /* A second pass on the desitations slots to set the slot flags, + /* A second pass on the destination slots to set the slot flags, * using the right number of destination slots. */ - while (j_start != *j) { - slot = &ring->slot[j_start]; - slot->flags = (dst_slots << 8)| NS_MOREFRAG; + while (j_start != j_cur) { + dst_slot = &dst_ring->slot[j_start]; + dst_slot->flags = (dst_slots << 8)| NS_MOREFRAG; j_start = nm_next(j_start, lim); } /* Clear NS_MOREFRAG flag on last entry. */ - slot->flags = (dst_slots << 8); + dst_slot->flags = (dst_slots << 8); } - /* Update howmany. */ + /* Update howmany and j. This is to commit the use of + * those slots in the destination ring. */ if (unlikely(dst_slots > *howmany)) { - dst_slots = *howmany; - D("Slot allocation error: Should never happen"); + D("Slot allocation error: This is a bug"); } + *j = j_cur; *howmany -= dst_slots; } Index: head/sys/dev/netmap/netmap_pipe.c =================================================================== --- head/sys/dev/netmap/netmap_pipe.c (revision 307393) +++ head/sys/dev/netmap/netmap_pipe.c (revision 307394) @@ -1,679 +1,689 @@ /* - * Copyright (C) 2014 Giuseppe Lettieri. All rights reserved. + * Copyright (C) 2014-2016 Giuseppe Lettieri + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* $FreeBSD$ */ #if defined(__FreeBSD__) #include <sys/cdefs.h> /* prerequisite */ #include #include #include /* defines used in kernel.h */ #include /* types used in module initialization */ #include #include #include #include #include #include #include /* sockaddrs */ #include #include #include /* bus_dmamap_* */ #include #elif defined(linux) #include "bsd_glue.h" #elif defined(__APPLE__) #warning OSX support is only partial #include "osx_glue.h" +#elif defined(_WIN32) +#include "win_glue.h" + #else #error Unsupported platform #endif /* unsupported */ /* * common headers */ #include <net/netmap.h> #include <dev/netmap/netmap_kern.h> #include <dev/netmap/netmap_mem2.h> #ifdef WITH_PIPES #define NM_PIPE_MAXSLOTS 4096 -int netmap_default_pipes = 0; /* ignored, kept for compatibility */ +static int netmap_default_pipes = 0; /* ignored, kept for compatibility */ +SYSBEGIN(vars_pipes); SYSCTL_DECL(_dev_netmap); SYSCTL_INT(_dev_netmap, OID_AUTO, default_pipes, CTLFLAG_RW, &netmap_default_pipes, 0 , ""); +SYSEND; /* allocate the pipe array in the parent adapter */ static int nm_pipe_alloc(struct netmap_adapter *na, u_int npipes) { size_t len; struct netmap_pipe_adapter **npa; if (npipes <= na->na_max_pipes) /* we already have more entries than requested */ return 0; if (npipes < na->na_next_pipe || npipes > NM_MAXPIPES) return EINVAL; len = sizeof(struct netmap_pipe_adapter *) * npipes; +#ifndef _WIN32 npa = realloc(na->na_pipes, len, M_DEVBUF, M_NOWAIT | M_ZERO); +#else + npa = realloc(na->na_pipes, len, sizeof(struct netmap_pipe_adapter *)*na->na_max_pipes); +#endif if (npa == NULL) return ENOMEM; na->na_pipes = npa; na->na_max_pipes = npipes; return 0; } /* deallocate the pipe array in the parent adapter */ void netmap_pipe_dealloc(struct netmap_adapter *na) { if (na->na_pipes) { if (na->na_next_pipe > 0) { D("freeing not empty pipe array for %s (%d dangling pipes)!", na->name, na->na_next_pipe); } free(na->na_pipes, M_DEVBUF); na->na_pipes = NULL; na->na_max_pipes = 0; na->na_next_pipe = 0; } } /* find a pipe endpoint with the given id among the parent's pipes */ static struct netmap_pipe_adapter * netmap_pipe_find(struct netmap_adapter *parent, u_int pipe_id) { int i; struct netmap_pipe_adapter *na; for (i = 0; i < parent->na_next_pipe; i++) { na = parent->na_pipes[i]; if (na->id == pipe_id) { return na; } } return NULL; } /* add a new pipe endpoint to the parent array */ static int netmap_pipe_add(struct netmap_adapter *parent, struct netmap_pipe_adapter *na) { if (parent->na_next_pipe >= parent->na_max_pipes) { u_int npipes = parent->na_max_pipes ?
2*parent->na_max_pipes : 2; int error = nm_pipe_alloc(parent, npipes); if (error) return error; } parent->na_pipes[parent->na_next_pipe] = na; na->parent_slot = parent->na_next_pipe; parent->na_next_pipe++; return 0; } /* remove the given pipe endpoint from the parent array */ static void netmap_pipe_remove(struct netmap_adapter *parent, struct netmap_pipe_adapter *na) { u_int n; n = --parent->na_next_pipe; if (n != na->parent_slot) { struct netmap_pipe_adapter **p = &parent->na_pipes[na->parent_slot]; *p = parent->na_pipes[n]; (*p)->parent_slot = na->parent_slot; } parent->na_pipes[n] = NULL; } static int netmap_pipe_txsync(struct netmap_kring *txkring, int flags) { struct netmap_kring *rxkring = txkring->pipe; u_int limit; /* slots to transfer */ u_int j, k, lim_tx = txkring->nkr_num_slots - 1, lim_rx = rxkring->nkr_num_slots - 1; int m, busy; ND("%p: %s %x -> %s", txkring, txkring->name, flags, rxkring->name); ND(2, "before: hwcur %d hwtail %d cur %d head %d tail %d", txkring->nr_hwcur, txkring->nr_hwtail, txkring->rcur, txkring->rhead, txkring->rtail); j = rxkring->nr_hwtail; /* RX */ k = txkring->nr_hwcur; /* TX */ m = txkring->rhead - txkring->nr_hwcur; /* new slots */ if (m < 0) m += txkring->nkr_num_slots; limit = m; m = lim_rx; /* max avail space on destination */ busy = j - rxkring->nr_hwcur; /* busy slots */ if (busy < 0) busy += rxkring->nkr_num_slots; m -= busy; /* subtract busy slots */ ND(2, "m %d limit %d", m, limit); if (m < limit) limit = m; if (limit == 0) { /* either the rxring is full, or nothing to send */ return 0; } while (limit-- > 0) { - struct netmap_slot *rs = &rxkring->save_ring->slot[j]; + struct netmap_slot *rs = &rxkring->ring->slot[j]; struct netmap_slot *ts = &txkring->ring->slot[k]; struct netmap_slot tmp; /* swap the slots */ tmp = *rs; *rs = *ts; *ts = tmp; /* report the buffer change */ ts->flags |= NS_BUF_CHANGED; rs->flags |= NS_BUF_CHANGED; j = nm_next(j, lim_rx); k = nm_next(k, lim_tx); } mb(); /* make sure the slots are updated before publishing them */ rxkring->nr_hwtail = j; txkring->nr_hwcur = k; txkring->nr_hwtail = nm_prev(k, lim_tx); ND(2, "after: hwcur %d hwtail %d cur %d head %d tail %d j %d", txkring->nr_hwcur, txkring->nr_hwtail, txkring->rcur, txkring->rhead, txkring->rtail, j); mb(); /* make sure rxkring->nr_hwtail is updated before notifying */ rxkring->nm_notify(rxkring, 0); return 0; } static int netmap_pipe_rxsync(struct netmap_kring *rxkring, int flags) { struct netmap_kring *txkring = rxkring->pipe; uint32_t oldhwcur = rxkring->nr_hwcur; ND("%s %x <- %s", rxkring->name, flags, txkring->name); rxkring->nr_hwcur = rxkring->rhead; /* recover user-released slots */ ND(5, "hwcur %d hwtail %d cur %d head %d tail %d", rxkring->nr_hwcur, rxkring->nr_hwtail, rxkring->rcur, rxkring->rhead, rxkring->rtail); mb(); /* paired with the first mb() in txsync */ if (oldhwcur != rxkring->nr_hwcur) { /* we have released some slots, notify the other end */ mb(); /* make sure nr_hwcur is updated before notifying */ txkring->nm_notify(txkring, 0); } return 0; } /* Pipe endpoints are created and destroyed together, so that endpoints do not * have to check for the existence of their peer at each ?xsync. * * To play well with the existing netmap infrastructure (refcounts etc.), we * adopt the following strategy: * * 1) The first endpoint that is created also creates the other endpoint and * grabs a reference to it.
* * state A) user1 --> endpoint1 --> endpoint2 * * 2) If, starting from state A, endpoint2 is then registered, endpoint1 gives * its reference to the user: * * state B) user1 --> endpoint1 endpoint2 <--- user2 * * 3) Assume that, starting from state B endpoint2 is closed. In the unregister * callback endpoint2 notes that endpoint1 is still active and adds a reference * from endpoint1 to itself. When user2 then releases her own reference, * endpoint2 is not destroyed and we are back to state A. A symmetrical state * would be reached if endpoint1 were released instead. * * 4) If, starting from state A, endpoint1 is closed, the destructor notes that * it owns a reference to endpoint2 and releases it. * * Something similar goes on for the creation and destruction of the krings. */ /* netmap_pipe_krings_create. * * There are two cases: * * 1) state is * * usr1 --> e1 --> e2 * * and we are e1. We have to create both sets * of krings. * * 2) state is * * usr1 --> e1 --> e2 * * and we are e2. e1 is certainly registered and our - * krings already exist, but they may be hidden. + * krings already exist. Nothing to do. */ static int netmap_pipe_krings_create(struct netmap_adapter *na) { struct netmap_pipe_adapter *pna = (struct netmap_pipe_adapter *)na; struct netmap_adapter *ona = &pna->peer->up; int error = 0; enum txrx t; if (pna->peer_ref) { int i; /* case 1) above */ - ND("%p: case 1, create everything", na); + D("%p: case 1, create both ends", na); error = netmap_krings_create(na, 0); if (error) goto err; - /* we also create all the rings, since we need to - * update the save_ring pointers. - * netmap_mem_rings_create (called by our caller) - * will not create the rings again - */ - - error = netmap_mem_rings_create(na); + /* create the krings of the other end */ + error = netmap_krings_create(ona, 0); if (error) goto del_krings1; - /* update our hidden ring pointers */ - for_rx_tx(t) { - for (i = 0; i < nma_get_nrings(na, t) + 1; i++) - NMR(na, t)[i].save_ring = NMR(na, t)[i].ring; - } - - /* now, create krings and rings of the other end */ - error = netmap_krings_create(ona, 0); - if (error) - goto del_rings1; - - error = netmap_mem_rings_create(ona); - if (error) - goto del_krings2; - - for_rx_tx(t) { - for (i = 0; i < nma_get_nrings(ona, t) + 1; i++) - NMR(ona, t)[i].save_ring = NMR(ona, t)[i].ring; - } - /* cross link the krings */ for_rx_tx(t) { - enum txrx r= nm_txrx_swap(t); /* swap NR_TX <-> NR_RX */ + enum txrx r = nm_txrx_swap(t); /* swap NR_TX <-> NR_RX */ for (i = 0; i < nma_get_nrings(na, t); i++) { NMR(na, t)[i].pipe = NMR(&pna->peer->up, r) + i; NMR(&pna->peer->up, r)[i].pipe = NMR(na, t) + i; } } - } else { - int i; - /* case 2) above */ - /* recover the hidden rings */ - ND("%p: case 2, hidden rings", na); - for_rx_tx(t) { - for (i = 0; i < nma_get_nrings(na, t) + 1; i++) - NMR(na, t)[i].ring = NMR(na, t)[i].save_ring; - } + } return 0; -del_krings2: - netmap_krings_delete(ona); -del_rings1: - netmap_mem_rings_delete(na); del_krings1: netmap_krings_delete(na); err: return error; } /* netmap_pipe_reg. * * There are two cases on registration (onoff==1) * * 1.a) state is * * usr1 --> e1 --> e2 * - * and we are e1. Nothing special to do. + * and we are e1. Create the needed rings of the + * other end. * * 1.b) state is * * usr1 --> e1 --> e2 <-- usr2 * * and we are e2. Drop the ref e1 is holding. * * There are two additional cases on unregister (onoff==0) * * 2.a) state is * * usr1 --> e1 --> e2 * * and we are e1.
Nothing special to do, e2 will * be cleaned up by the destructor of e1. * * 2.b) state is * * usr1 --> e1 e2 <-- usr2 * * and we are either e1 or e2. Add a ref from the * other end and hide our rings. */ static int netmap_pipe_reg(struct netmap_adapter *na, int onoff) { struct netmap_pipe_adapter *pna = (struct netmap_pipe_adapter *)na; + struct netmap_adapter *ona = &pna->peer->up; + int i, error = 0; enum txrx t; ND("%p: onoff %d", na, onoff); if (onoff) { - na->na_flags |= NAF_NETMAP_ON; + for_rx_tx(t) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { + struct netmap_kring *kring = &NMR(na, t)[i]; + + if (nm_kring_pending_on(kring)) { + /* mark the partner ring as needed */ + kring->pipe->nr_kflags |= NKR_NEEDRING; + } + } + } + + /* create all missing needed rings on the other end */ + error = netmap_mem_rings_create(ona); + if (error) + return error; + + /* In case of no error we put our rings in netmap mode */ + for_rx_tx(t) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { + struct netmap_kring *kring = &NMR(na, t)[i]; + + if (nm_kring_pending_on(kring)) { + kring->nr_mode = NKR_NETMAP_ON; + } + } + } + if (na->active_fds == 0) + na->na_flags |= NAF_NETMAP_ON; } else { - na->na_flags &= ~NAF_NETMAP_ON; + if (na->active_fds == 0) + na->na_flags &= ~NAF_NETMAP_ON; + for_rx_tx(t) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { + struct netmap_kring *kring = &NMR(na, t)[i]; + + if (nm_kring_pending_off(kring)) { + kring->nr_mode = NKR_NETMAP_OFF; + /* mark the peer ring as no longer needed by us + * (it may still be kept if somebody else is using it) + */ + kring->pipe->nr_kflags &= ~NKR_NEEDRING; + } + } + } + /* delete all the peer rings that are no longer needed */ + netmap_mem_rings_delete(ona); } + + if (na->active_fds) { + D("active_fds %d", na->active_fds); + return 0; + } + if (pna->peer_ref) { ND("%p: case 1.a or 2.a, nothing to do", na); return 0; } if (onoff) { ND("%p: case 1.b, drop peer", na); pna->peer->peer_ref = 0; netmap_adapter_put(na); } else { - int i; ND("%p: case 2.b, grab peer", na); netmap_adapter_get(na); pna->peer->peer_ref = 1; - /* hide our rings from netmap_mem_rings_delete */ - for_rx_tx(t) { - for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { - NMR(na, t)[i].ring = NULL; - } - } } - return 0; + return error; } /* netmap_pipe_krings_delete. * * There are two cases: * * 1) state is * * usr1 --> e1 --> e2 * * and we are e1 (e2 is not registered, so krings_delete cannot be * called on it); * * 2) state is * * usr1 --> e1 e2 <-- usr2 * * and we are either e1 or e2. * * In the former case we have to also delete the krings of e2; * in the latter case we do nothing (note that our krings * have already been hidden in the unregister callback). */ static void netmap_pipe_krings_delete(struct netmap_adapter *na) { struct netmap_pipe_adapter *pna = (struct netmap_pipe_adapter *)na; struct netmap_adapter *ona; /* na of the other end */ - int i; - enum txrx t; if (!pna->peer_ref) { ND("%p: case 2, kept alive by peer", na); return; } /* case 1) above */ ND("%p: case 1, deleting everything", na); netmap_krings_delete(na); /* also zeroes tx_rings etc.
*/ - /* restore the ring to be deleted on the peer */ ona = &pna->peer->up; if (ona->tx_rings == NULL) { /* already deleted, we must be on a * cleanup-after-error path */ return; } - for_rx_tx(t) { - for (i = 0; i < nma_get_nrings(ona, t) + 1; i++) - NMR(ona, t)[i].ring = NMR(ona, t)[i].save_ring; - } - netmap_mem_rings_delete(ona); netmap_krings_delete(ona); } static void netmap_pipe_dtor(struct netmap_adapter *na) { struct netmap_pipe_adapter *pna = (struct netmap_pipe_adapter *)na; ND("%p", na); if (pna->peer_ref) { ND("%p: clean up peer", na); pna->peer_ref = 0; netmap_adapter_put(&pna->peer->up); } if (pna->role == NR_REG_PIPE_MASTER) netmap_pipe_remove(pna->parent, pna); netmap_adapter_put(pna->parent); pna->parent = NULL; } int netmap_get_pipe_na(struct nmreq *nmr, struct netmap_adapter **na, int create) { struct nmreq pnmr; struct netmap_adapter *pna; /* parent adapter */ struct netmap_pipe_adapter *mna, *sna, *req; + struct ifnet *ifp = NULL; u_int pipe_id; int role = nmr->nr_flags & NR_REG_MASK; int error; ND("flags %x", nmr->nr_flags); if (role != NR_REG_PIPE_MASTER && role != NR_REG_PIPE_SLAVE) { ND("not a pipe"); return 0; } role = nmr->nr_flags & NR_REG_MASK; /* first, try to find the parent adapter */ bzero(&pnmr, sizeof(pnmr)); memcpy(&pnmr.nr_name, nmr->nr_name, IFNAMSIZ); /* pass to parent the requested number of pipes */ pnmr.nr_arg1 = nmr->nr_arg1; - error = netmap_get_na(&pnmr, &pna, create); + error = netmap_get_na(&pnmr, &pna, &ifp, create); if (error) { ND("parent lookup failed: %d", error); return error; } ND("found parent: %s", pna->name); if (NETMAP_OWNED_BY_KERN(pna)) { ND("parent busy"); error = EBUSY; goto put_out; } /* next, look up the pipe id in the parent list */ req = NULL; pipe_id = nmr->nr_ringid & NETMAP_RING_MASK; mna = netmap_pipe_find(pna, pipe_id); if (mna) { if (mna->role == role) { ND("found %d directly at %d", pipe_id, mna->parent_slot); req = mna; } else { ND("found %d indirectly at %d", pipe_id, mna->parent_slot); req = mna->peer; } /* the pipe we have found already holds a ref to the parent, * so we need to drop the one we got from netmap_get_na() */ netmap_adapter_put(pna); goto found; } ND("pipe %d not found, create %d", pipe_id, create); if (!create) { error = ENODEV; goto put_out; } /* we create both master and slave. * The endpoint we were asked for holds a reference to * the other one.
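*
* The endpoint names are built just below with the "%s{%d" and "%s}%d"
* formats: for parent "vale0:x" and pipe id 3 the master is "vale0:x{3"
* and the slave is "vale0:x}3". A minimal userspace sketch, assuming the
* nm_open() helper from net/netmap_user.h (error handling elided):
*
*	#define NETMAP_WITH_LIBS
*	#include <net/netmap_user.h>
*
*	struct nm_desc *m = nm_open("vale0:x{3", NULL, 0, NULL);
*	struct nm_desc *s = nm_open("vale0:x}3", NULL, 0, NULL);
*
* Opening either end first creates both adapters, as described by the
* state machine above.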
*/ mna = malloc(sizeof(*mna), M_DEVBUF, M_NOWAIT | M_ZERO); if (mna == NULL) { error = ENOMEM; goto put_out; } snprintf(mna->up.name, sizeof(mna->up.name), "%s{%d", pna->name, pipe_id); mna->id = pipe_id; mna->role = NR_REG_PIPE_MASTER; mna->parent = pna; mna->up.nm_txsync = netmap_pipe_txsync; mna->up.nm_rxsync = netmap_pipe_rxsync; mna->up.nm_register = netmap_pipe_reg; mna->up.nm_dtor = netmap_pipe_dtor; mna->up.nm_krings_create = netmap_pipe_krings_create; mna->up.nm_krings_delete = netmap_pipe_krings_delete; mna->up.nm_mem = pna->nm_mem; mna->up.na_lut = pna->na_lut; mna->up.num_tx_rings = 1; mna->up.num_rx_rings = 1; mna->up.num_tx_desc = nmr->nr_tx_slots; nm_bound_var(&mna->up.num_tx_desc, pna->num_tx_desc, 1, NM_PIPE_MAXSLOTS, NULL); mna->up.num_rx_desc = nmr->nr_rx_slots; nm_bound_var(&mna->up.num_rx_desc, pna->num_rx_desc, 1, NM_PIPE_MAXSLOTS, NULL); error = netmap_attach_common(&mna->up); if (error) goto free_mna; /* register the master with the parent */ error = netmap_pipe_add(pna, mna); if (error) goto free_mna; /* create the slave */ sna = malloc(sizeof(*mna), M_DEVBUF, M_NOWAIT | M_ZERO); if (sna == NULL) { error = ENOMEM; goto unregister_mna; } /* most fields are the same, copy from master and then fix */ *sna = *mna; snprintf(sna->up.name, sizeof(sna->up.name), "%s}%d", pna->name, pipe_id); sna->role = NR_REG_PIPE_SLAVE; error = netmap_attach_common(&sna->up); if (error) goto free_sna; /* join the two endpoints */ mna->peer = sna; sna->peer = mna; /* we already have a reference to the parent, but we * need another one for the other endpoint we created */ netmap_adapter_get(pna); if (role == NR_REG_PIPE_MASTER) { req = mna; mna->peer_ref = 1; netmap_adapter_get(&sna->up); } else { req = sna; sna->peer_ref = 1; netmap_adapter_get(&mna->up); } ND("created master %p and slave %p", mna, sna); found: ND("pipe %d %s at %p", pipe_id, (req->role == NR_REG_PIPE_MASTER ? "master" : "slave"), req); *na = &req->up; netmap_adapter_get(*na); - /* write the configuration back */ - nmr->nr_tx_rings = req->up.num_tx_rings; - nmr->nr_rx_rings = req->up.num_rx_rings; - nmr->nr_tx_slots = req->up.num_tx_desc; - nmr->nr_rx_slots = req->up.num_rx_desc; - /* keep the reference to the parent. * It will be released by the req destructor */ + /* drop the ifp reference, if any */ + if (ifp) { + if_rele(ifp); + } + return 0; free_sna: free(sna, M_DEVBUF); unregister_mna: netmap_pipe_remove(pna, mna); free_mna: free(mna, M_DEVBUF); put_out: - netmap_adapter_put(pna); + netmap_unget_na(pna, ifp); return error; } #endif /* WITH_PIPES */ Index: head/sys/dev/netmap/netmap_vale.c =================================================================== --- head/sys/dev/netmap/netmap_vale.c (revision 307393) +++ head/sys/dev/netmap/netmap_vale.c (revision 307394) @@ -1,2427 +1,2778 @@ /* - * Copyright (C) 2013-2014 Universita` di Pisa. All rights reserved. + * Copyright (C) 2013-2016 Universita` di Pisa + * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * This module implements the VALE switch for netmap --- VALE SWITCH --- NMG_LOCK() serializes all modifications to switches and ports. A switch cannot be deleted until all ports are gone. For each switch, an SX lock (RWlock on linux) protects deletion of ports. When configuring or deleting a port, the lock is acquired in exclusive mode (after holding NMG_LOCK). When forwarding, the lock is acquired in shared mode (without NMG_LOCK). The lock is held throughout the entire forwarding cycle, during which the thread may incur a page fault. Hence it is important that sleepable shared locks are used. On the rx ring, the per-port lock is grabbed initially to reserve a number of slots in the ring, then the lock is released, packets are copied from source to destination, and then the lock is acquired again and the receive ring is updated. (A similar thing is done on the tx ring for NIC and host stack ports attached to the switch) */ /* * OS-specific code that is used only within this file. * Other OS-specific code that must be accessed by drivers * is present in netmap_kern.h */ #if defined(__FreeBSD__) #include <sys/cdefs.h> /* prerequisite */ __FBSDID("$FreeBSD$"); #include #include #include /* defines used in kernel.h */ #include /* types used in module initialization */ #include /* cdevsw struct, UID, GID */ #include #include /* struct socket */ #include #include #include #include /* sockaddrs */ #include #include #include #include #include /* BIOCIMMEDIATE */ #include /* bus_dmamap_* */ #include #include #define BDG_RWLOCK_T struct rwlock // struct rwlock #define BDG_RWINIT(b) \ rw_init_flags(&(b)->bdg_lock, "bdg lock", RW_NOWITNESS) #define BDG_WLOCK(b) rw_wlock(&(b)->bdg_lock) #define BDG_WUNLOCK(b) rw_wunlock(&(b)->bdg_lock) #define BDG_RLOCK(b) rw_rlock(&(b)->bdg_lock) #define BDG_RTRYLOCK(b) rw_try_rlock(&(b)->bdg_lock) #define BDG_RUNLOCK(b) rw_runlock(&(b)->bdg_lock) #define BDG_RWDESTROY(b) rw_destroy(&(b)->bdg_lock) #elif defined(linux) #include "bsd_glue.h" #elif defined(__APPLE__) #warning OSX support is only partial #include "osx_glue.h" +#elif defined(_WIN32) +#include "win_glue.h" + #else #error Unsupported platform #endif /* unsupported */ /* * common headers */ #include <net/netmap.h> #include <dev/netmap/netmap_kern.h> #include <dev/netmap/netmap_mem2.h> #ifdef WITH_VALE /* * system parameters (most of them in netmap_kern.h) - * NM_NAME prefix for switch port names, default "vale" + * NM_BDG_NAME prefix for switch port names, default "vale" * NM_BDG_MAXPORTS number of ports * NM_BRIDGES max number of switches in the system. * XXX should become a sysctl or tunable * * Switch ports are named valeX:Y where X is the switch name and Y * is the port. If Y matches a physical interface name, the port is * connected to a physical device.
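*
* The forwarding path therefore has this shape (a simplified sketch of
* the reserve/copy/publish cycle, not the literal nm_bdg_flush() code):
*
*	BDG_RLOCK(b);		/* shared: topology cannot change */
*	/* lock the dst ring, reserve N slots, unlock */
*	/* copy packets into the reserved slots; this step may
*	 * page-fault, which is why the shared lock must be sleepable */
*	/* re-lock the dst ring, advance pointers, notify, unlock */
*	BDG_RUNLOCK(b);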
* * Unlike physical interfaces, switch ports use their own memory region * for rings and buffers. * The virtual interfaces use a per-queue lock instead of the core lock. * In the tx loop, we aggregate traffic in batches to make all operations * faster. The batch size is bridge_batch. */ #define NM_BDG_MAXRINGS 16 /* XXX unclear how many. */ #define NM_BDG_MAXSLOTS 4096 /* XXX same as above */ #define NM_BRIDGE_RINGSIZE 1024 /* in the device */ #define NM_BDG_HASH 1024 /* forwarding table entries */ #define NM_BDG_BATCH 1024 /* entries in the forwarding buffer */ #define NM_MULTISEG 64 /* max size of a chain of bufs */ /* actual size of the tables */ #define NM_BDG_BATCH_MAX (NM_BDG_BATCH + NM_MULTISEG) /* NM_FT_NULL terminates a list of slots in the ft */ #define NM_FT_NULL NM_BDG_BATCH_MAX -#define NM_BRIDGES 8 /* number of bridges */ /* * bridge_batch is set via sysctl to the max batch size to be * used in the bridge. The actual value may be larger as the * last packet in the block may overflow the size. */ -int bridge_batch = NM_BDG_BATCH; /* bridge batch size */ +static int bridge_batch = NM_BDG_BATCH; /* bridge batch size */ +SYSBEGIN(vars_vale); SYSCTL_DECL(_dev_netmap); SYSCTL_INT(_dev_netmap, OID_AUTO, bridge_batch, CTLFLAG_RW, &bridge_batch, 0 , ""); +SYSEND; - static int netmap_vp_create(struct nmreq *, struct ifnet *, struct netmap_vp_adapter **); static int netmap_vp_reg(struct netmap_adapter *na, int onoff); -static int netmap_bwrap_register(struct netmap_adapter *, int onoff); +static int netmap_bwrap_reg(struct netmap_adapter *, int onoff); /* * For each output interface, nm_bdg_q is used to construct a list. * bq_len is the number of output buffers (we can have coalescing * during the copy). */ struct nm_bdg_q { uint16_t bq_head; uint16_t bq_tail; uint32_t bq_len; /* number of buffers */ }; /* XXX revise this */ struct nm_hash_ent { uint64_t mac; /* the top 2 bytes are the epoch */ uint64_t ports; }; /* * nm_bridge is a descriptor for a VALE switch. * Interfaces for a bridge are all in bdg_ports[]. * The array has fixed size, an empty entry does not terminate * the search, but lookups only occur on attach/detach so we * don't mind if they are slow. * * The bridge is non-blocking on the transmit ports: excess * packets are dropped if there is no room on the output port. * * bdg_lock protects accesses to the bdg_ports array. * This is a rw lock (or equivalent). */ struct nm_bridge { /* XXX what is the proper alignment/layout ? */ BDG_RWLOCK_T bdg_lock; /* protects bdg_ports */ int bdg_namelen; uint32_t bdg_active_ports; /* 0 means free */ char bdg_basename[IFNAMSIZ]; /* Indexes of active ports (up to active_ports) * and all other remaining ports. */ uint8_t bdg_port_index[NM_BDG_MAXPORTS]; struct netmap_vp_adapter *bdg_ports[NM_BDG_MAXPORTS]; /* * The function to decide the destination port. * It returns either the index of the destination port, * NM_BDG_BROADCAST to broadcast this packet, or NM_BDG_NOPORT not to * forward this packet. ring_nr is the source ring index, and the * function may overwrite this value to forward this packet to a * different ring index. - * This function must be set by netmap_bdgctl(). + * This function must be set by netmap_bdg_ctl(). */ struct netmap_bdg_ops bdg_ops; /* the forwarding table, MAC+ports.
* XXX should be changed to an argument to be passed to * the lookup function, and allocated on attach */ struct nm_hash_ent ht[NM_BDG_HASH]; #ifdef CONFIG_NET_NS struct net *ns; #endif /* CONFIG_NET_NS */ }; const char* netmap_bdg_name(struct netmap_vp_adapter *vp) { struct nm_bridge *b = vp->na_bdg; if (b == NULL) return NULL; return b->bdg_basename; } #ifndef CONFIG_NET_NS /* * XXX in principle nm_bridges could be created dynamically * Right now we have a static array and deletions are protected * by an exclusive lock. */ -struct nm_bridge *nm_bridges; +static struct nm_bridge *nm_bridges; #endif /* !CONFIG_NET_NS */ /* * this is a slightly optimized copy routine which rounds * to multiple of 64 bytes and is often faster than dealing * with other odd sizes. We assume there is enough room * in the source and destination buffers. * * XXX only for multiples of 64 bytes, non overlapped. */ static inline void pkt_copy(void *_src, void *_dst, int l) { uint64_t *src = _src; uint64_t *dst = _dst; if (unlikely(l >= 1024)) { memcpy(dst, src, l); return; } for (; likely(l > 0); l-=64) { *dst++ = *src++; *dst++ = *src++; *dst++ = *src++; *dst++ = *src++; *dst++ = *src++; *dst++ = *src++; *dst++ = *src++; *dst++ = *src++; } } +static int +nm_is_id_char(const char c) +{ + return (c >= 'a' && c <= 'z') || + (c >= 'A' && c <= 'Z') || + (c >= '0' && c <= '9') || + (c == '_'); +} + +/* Validate the name of a VALE bridge port and return the + * position of the ":" character. */ +static int +nm_vale_name_validate(const char *name) +{ + int colon_pos = -1; + int i; + + if (!name || strlen(name) < strlen(NM_BDG_NAME)) { + return -1; + } + + for (i = 0; name[i]; i++) { + if (name[i] == ':') { + if (colon_pos != -1) { + return -1; + } + colon_pos = i; + } else if (!nm_is_id_char(name[i])) { + return -1; + } + } + + if (i >= IFNAMSIZ) { + return -1; + } + + return colon_pos; +} + /* * locate a bridge among the existing ones. * MUST BE CALLED WITH NMG_LOCK() * * a ':' in the name terminates the bridge name. Otherwise, just NM_NAME. * We assume that this is called with a name of at least NM_NAME chars. */ static struct nm_bridge * nm_find_bridge(const char *name, int create) { - int i, l, namelen; + int i, namelen; struct nm_bridge *b = NULL, *bridges; u_int num_bridges; NMG_LOCK_ASSERT(); netmap_bns_getbridges(&bridges, &num_bridges); - namelen = strlen(NM_NAME); /* base length */ - l = name ? strlen(name) : 0; /* actual length */ - if (l < namelen) { + namelen = nm_vale_name_validate(name); + if (namelen < 0) { D("invalid bridge name %s", name ? 
name : NULL); return NULL; } - for (i = namelen + 1; i < l; i++) { - if (name[i] == ':') { - namelen = i; - break; - } - } - if (namelen >= IFNAMSIZ) - namelen = IFNAMSIZ; - ND("--- prefix is '%.*s' ---", namelen, name); /* lookup the name, remember empty slot if there is one */ for (i = 0; i < num_bridges; i++) { struct nm_bridge *x = bridges + i; if (x->bdg_active_ports == 0) { if (create && b == NULL) b = x; /* record empty slot */ } else if (x->bdg_namelen != namelen) { continue; } else if (strncmp(name, x->bdg_basename, namelen) == 0) { ND("found '%.*s' at %d", namelen, name, i); b = x; break; } } if (i == num_bridges && b) { /* name not found, can create entry */ /* initialize the bridge */ strncpy(b->bdg_basename, name, namelen); ND("create new bridge %s with ports %d", b->bdg_basename, b->bdg_active_ports); b->bdg_namelen = namelen; b->bdg_active_ports = 0; for (i = 0; i < NM_BDG_MAXPORTS; i++) b->bdg_port_index[i] = i; /* set the default function */ b->bdg_ops.lookup = netmap_bdg_learning; /* reset the MAC address table */ bzero(b->ht, sizeof(struct nm_hash_ent) * NM_BDG_HASH); NM_BNS_GET(b); } return b; } /* * Free the forwarding tables for rings attached to switch ports. */ static void nm_free_bdgfwd(struct netmap_adapter *na) { int nrings, i; struct netmap_kring *kring; NMG_LOCK_ASSERT(); nrings = na->num_tx_rings; kring = na->tx_rings; for (i = 0; i < nrings; i++) { if (kring[i].nkr_ft) { free(kring[i].nkr_ft, M_DEVBUF); kring[i].nkr_ft = NULL; /* protect from freeing twice */ } } } /* * Allocate the forwarding tables for the rings attached to the bridge ports. */ static int nm_alloc_bdgfwd(struct netmap_adapter *na) { int nrings, l, i, num_dstq; struct netmap_kring *kring; NMG_LOCK_ASSERT(); /* all port:rings + broadcast */ num_dstq = NM_BDG_MAXPORTS * NM_BDG_MAXRINGS + 1; l = sizeof(struct nm_bdg_fwd) * NM_BDG_BATCH_MAX; l += sizeof(struct nm_bdg_q) * num_dstq; l += sizeof(uint16_t) * NM_BDG_BATCH_MAX; nrings = netmap_real_rings(na, NR_TX); kring = na->tx_rings; for (i = 0; i < nrings; i++) { struct nm_bdg_fwd *ft; struct nm_bdg_q *dstq; int j; ft = malloc(l, M_DEVBUF, M_NOWAIT | M_ZERO); if (!ft) { nm_free_bdgfwd(na); return ENOMEM; } dstq = (struct nm_bdg_q *)(ft + NM_BDG_BATCH_MAX); for (j = 0; j < num_dstq; j++) { dstq[j].bq_head = dstq[j].bq_tail = NM_FT_NULL; dstq[j].bq_len = 0; } kring[i].nkr_ft = ft; } return 0; } /* remove from bridge b the ports in slots hw and sw * (sw can be -1 if not needed) */ static void netmap_bdg_detach_common(struct nm_bridge *b, int hw, int sw) { int s_hw = hw, s_sw = sw; int i, lim =b->bdg_active_ports; uint8_t tmp[NM_BDG_MAXPORTS]; /* New algorithm: make a copy of bdg_port_index; lookup NA(ifp)->bdg_port and SWNA(ifp)->bdg_port in the array of bdg_port_index, replacing them with entries from the bottom of the array; decrement bdg_active_ports; acquire BDG_WLOCK() and copy back the array. */ if (netmap_verbose) D("detach %d and %d (lim %d)", hw, sw, lim); /* make a copy of the list of active ports, update it, * and then copy back within BDG_WLOCK(). 
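*
* In isolation, the swap-with-last idiom used here looks as follows (a
* minimal sketch, with idx and lim standing for bdg_port_index and
* bdg_active_ports):
*
*	static int
*	swap_remove(uint8_t *idx, int lim, uint8_t val)
*	{
*		int i;
*
*		for (i = 0; i < lim; i++) {
*			if (idx[i] != val)
*				continue;
*			lim--;			/* one fewer active port */
*			idx[i] = idx[lim];	/* last active fills the hole */
*			idx[lim] = val;		/* removed entry parked at end */
*			break;
*		}
*		return lim;		/* new number of active ports */
*	}
*
* Parking removed entries after the active region is what allows the
* attach path to hand them out again as 'cand' slots.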
*/ memcpy(tmp, b->bdg_port_index, sizeof(tmp)); for (i = 0; (hw >= 0 || sw >= 0) && i < lim; ) { if (hw >= 0 && tmp[i] == hw) { ND("detach hw %d at %d", hw, i); lim--; /* point to last active port */ tmp[i] = tmp[lim]; /* swap with i */ tmp[lim] = hw; /* now this is inactive */ hw = -1; } else if (sw >= 0 && tmp[i] == sw) { ND("detach sw %d at %d", sw, i); lim--; tmp[i] = tmp[lim]; tmp[lim] = sw; sw = -1; } else { i++; } } if (hw >= 0 || sw >= 0) { D("XXX delete failed hw %d sw %d, should panic...", hw, sw); } BDG_WLOCK(b); if (b->bdg_ops.dtor) b->bdg_ops.dtor(b->bdg_ports[s_hw]); b->bdg_ports[s_hw] = NULL; if (s_sw >= 0) { b->bdg_ports[s_sw] = NULL; } memcpy(b->bdg_port_index, tmp, sizeof(tmp)); b->bdg_active_ports = lim; BDG_WUNLOCK(b); ND("now %d active ports", lim); if (lim == 0) { ND("marking bridge %s as free", b->bdg_basename); bzero(&b->bdg_ops, sizeof(b->bdg_ops)); NM_BNS_PUT(b); } } /* nm_bdg_ctl callback for VALE ports */ static int netmap_vp_bdg_ctl(struct netmap_adapter *na, struct nmreq *nmr, int attach) { struct netmap_vp_adapter *vpna = (struct netmap_vp_adapter *)na; struct nm_bridge *b = vpna->na_bdg; + (void)nmr; // XXX merge ? if (attach) return 0; /* nothing to do */ if (b) { netmap_set_all_rings(na, 0 /* disable */); netmap_bdg_detach_common(b, vpna->bdg_port, -1); vpna->na_bdg = NULL; netmap_set_all_rings(na, 1 /* enable */); } /* I have taken a reference just for the attach */ netmap_adapter_put(na); return 0; } /* nm_dtor callback for ephemeral VALE ports */ static void netmap_vp_dtor(struct netmap_adapter *na) { struct netmap_vp_adapter *vpna = (struct netmap_vp_adapter*)na; struct nm_bridge *b = vpna->na_bdg; ND("%s has %d references", na->name, na->na_refcount); if (b) { netmap_bdg_detach_common(b, vpna->bdg_port, -1); } } /* remove a persistent VALE port from the system */ static int nm_vi_destroy(const char *name) { struct ifnet *ifp; int error; ifp = ifunit_ref(name); if (!ifp) return ENXIO; NMG_LOCK(); /* make sure this is actually a VALE port */ - if (!NETMAP_CAPABLE(ifp) || NA(ifp)->nm_register != netmap_vp_reg) { + if (!NM_NA_VALID(ifp) || NA(ifp)->nm_register != netmap_vp_reg) { error = EINVAL; goto err; } if (NA(ifp)->na_refcount > 1) { error = EBUSY; goto err; } NMG_UNLOCK(); D("destroying a persistent vale interface %s", ifp->if_xname); /* Linux requires all the references to be released * before unregistering */ if_rele(ifp); netmap_detach(ifp); - nm_vi_detach(ifp); + nm_os_vi_detach(ifp); return 0; err: NMG_UNLOCK(); if_rele(ifp); return error; } /* * Create a virtual interface registered to the system. * The interface will be attached to a bridge later.
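*
* From userspace both operations go through an nmreq with nr_cmd set,
* submitted via NIOCREGIF (the route taken by the vale-ctl tool). A
* minimal sketch, assuming port name "vi0" and switch "vale0" (error
* handling elided):
*
*	struct nmreq nmr;
*	int fd = open("/dev/netmap", O_RDWR);
*
*	bzero(&nmr, sizeof(nmr));
*	nmr.nr_version = NETMAP_API;
*	strncpy(nmr.nr_name, "vi0", sizeof(nmr.nr_name));
*	nmr.nr_cmd = NETMAP_BDG_NEWIF;		/* nm_vi_create() below */
*	ioctl(fd, NIOCREGIF, &nmr);
*
*	strncpy(nmr.nr_name, "vale0:vi0", sizeof(nmr.nr_name));
*	nmr.nr_cmd = NETMAP_BDG_ATTACH;		/* nm_bdg_ctl_attach() */
*	ioctl(fd, NIOCREGIF, &nmr);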
*/ static int nm_vi_create(struct nmreq *nmr) { struct ifnet *ifp; struct netmap_vp_adapter *vpna; int error; /* don't include VALE prefix */ - if (!strncmp(nmr->nr_name, NM_NAME, strlen(NM_NAME))) + if (!strncmp(nmr->nr_name, NM_BDG_NAME, strlen(NM_BDG_NAME))) return EINVAL; ifp = ifunit_ref(nmr->nr_name); if (ifp) { /* already exists, cannot create a new one */ if_rele(ifp); return EEXIST; } - error = nm_vi_persist(nmr->nr_name, &ifp); + error = nm_os_vi_persist(nmr->nr_name, &ifp); if (error) return error; NMG_LOCK(); /* netmap_vp_create creates a struct netmap_vp_adapter */ error = netmap_vp_create(nmr, ifp, &vpna); if (error) { D("error %d", error); - nm_vi_detach(ifp); + nm_os_vi_detach(ifp); return error; } /* persist-specific routines */ vpna->up.nm_bdg_ctl = netmap_vp_bdg_ctl; netmap_adapter_get(&vpna->up); + NM_ATTACH_NA(ifp, &vpna->up); NMG_UNLOCK(); D("created %s", ifp->if_xname); return 0; } /* Try to get a reference to a netmap adapter attached to a VALE switch. * If the adapter is found (or is created), this function returns 0, a * non-NULL pointer is returned into *na, and the caller holds a * reference to the adapter. * If an adapter is not found, then no reference is grabbed and the * function returns an error code, or 0 if there is just a VALE prefix * mismatch. Therefore the caller holds a reference when * (*na != NULL && return == 0). */ int netmap_get_bdg_na(struct nmreq *nmr, struct netmap_adapter **na, int create) { char *nr_name = nmr->nr_name; const char *ifname; struct ifnet *ifp; int error = 0; struct netmap_vp_adapter *vpna, *hostna = NULL; struct nm_bridge *b; int i, j, cand = -1, cand2 = -1; int needed; *na = NULL; /* default return value */ /* first try to see if this is a bridge port. */ NMG_LOCK_ASSERT(); - if (strncmp(nr_name, NM_NAME, sizeof(NM_NAME) - 1)) { + if (strncmp(nr_name, NM_BDG_NAME, sizeof(NM_BDG_NAME) - 1)) { return 0; /* no error, but no VALE prefix */ } b = nm_find_bridge(nr_name, create); if (b == NULL) { D("no bridges available for '%s'", nr_name); return (create ? ENOMEM : ENXIO); } if (strlen(nr_name) < b->bdg_namelen) /* impossible */ panic("x"); /* Now we are sure that name starts with the bridge's name, * look up the port in the bridge. We need to scan the entire * list. It is not important to hold a WLOCK on the bridge * during the search because NMG_LOCK already guarantees * that there are no other possible writers. */ /* lookup in the local list of ports */ for (j = 0; j < b->bdg_active_ports; j++) { i = b->bdg_port_index[j]; vpna = b->bdg_ports[i]; // KASSERT(na != NULL); ND("checking %s", vpna->up.name); if (!strcmp(vpna->up.name, nr_name)) { netmap_adapter_get(&vpna->up); ND("found existing if %s refs %d", nr_name); *na = &vpna->up; return 0; } } /* not found, should we create it?
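*
* (To make the name handling concrete: for nr_name "vale0:em1",
* nm_find_bridge() matched the basename "vale0", so bdg_namelen == 5,
* the loop above compared the full "vale0:em1" against every attached
* port, and on a miss the code below derives the candidate NIC name as
* ifname = nr_name + bdg_namelen + 1, i.e. "em1".)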
*/ if (!create) return ENXIO; /* yes we should, see if we have space to attach entries */ needed = 2; /* in some cases we only need 1 */ if (b->bdg_active_ports + needed >= NM_BDG_MAXPORTS) { D("bridge full %d, cannot create new port", b->bdg_active_ports); return ENOMEM; } /* record the next two ports available, but do not allocate yet */ cand = b->bdg_port_index[b->bdg_active_ports]; cand2 = b->bdg_port_index[b->bdg_active_ports + 1]; ND("+++ bridge %s port %s used %d avail %d %d", b->bdg_basename, ifname, b->bdg_active_ports, cand, cand2); /* * try to see if there is a matching NIC with this name * (after the bridge's name) */ ifname = nr_name + b->bdg_namelen + 1; ifp = ifunit_ref(ifname); if (!ifp) { /* Create an ephemeral virtual port * This block contains all the ephemeral-specific logic */ if (nmr->nr_cmd) { /* nr_cmd must be 0 for a virtual port */ return EINVAL; } /* bdg_netmap_attach creates a struct netmap_adapter */ error = netmap_vp_create(nmr, NULL, &vpna); if (error) { D("error %d", error); free(ifp, M_DEVBUF); return error; } /* shortcut - we can skip get_hw_na(), * ownership check and nm_bdg_attach() */ } else { struct netmap_adapter *hw; error = netmap_get_hw_na(ifp, &hw); if (error || hw == NULL) goto out; /* host adapter might not be created */ error = hw->nm_bdg_attach(nr_name, hw); if (error) goto out; vpna = hw->na_vp; hostna = hw->na_hostvp; - if_rele(ifp); if (nmr->nr_arg1 != NETMAP_BDG_HOST) hostna = NULL; } BDG_WLOCK(b); vpna->bdg_port = cand; ND("NIC %p to bridge port %d", vpna, cand); /* bind the port to the bridge (virtual ports are not active) */ b->bdg_ports[cand] = vpna; vpna->na_bdg = b; b->bdg_active_ports++; if (hostna != NULL) { /* also bind the host stack to the bridge */ b->bdg_ports[cand2] = hostna; hostna->bdg_port = cand2; hostna->na_bdg = b; b->bdg_active_ports++; ND("host %p to bridge port %d", hostna, cand2); } ND("if %s refs %d", ifname, vpna->up.na_refcount); BDG_WUNLOCK(b); *na = &vpna->up; netmap_adapter_get(*na); return 0; out: if_rele(ifp); return error; } /* Process NETMAP_BDG_ATTACH */ static int nm_bdg_ctl_attach(struct nmreq *nmr) { struct netmap_adapter *na; int error; NMG_LOCK(); error = netmap_get_bdg_na(nmr, &na, 1 /* create if not exists */); if (error) /* no device */ goto unlock_exit; if (na == NULL) { /* VALE prefix missing */ error = EINVAL; goto unlock_exit; } if (NETMAP_OWNED_BY_ANY(na)) { error = EBUSY; goto unref_exit; } if (na->nm_bdg_ctl) { /* nop for VALE ports. The bwrap needs to put the hwna * in netmap mode (see netmap_bwrap_bdg_ctl) */ error = na->nm_bdg_ctl(na, nmr, 1); if (error) goto unref_exit; ND("registered %s to netmap-mode", na->name); } NMG_UNLOCK(); return 0; unref_exit: netmap_adapter_put(na); unlock_exit: NMG_UNLOCK(); return error; } +static inline int +nm_is_bwrap(struct netmap_adapter *na) +{ + return na->nm_register == netmap_bwrap_reg; +} /* process NETMAP_BDG_DETACH */ static int nm_bdg_ctl_detach(struct nmreq *nmr) { struct netmap_adapter *na; int error; NMG_LOCK(); error = netmap_get_bdg_na(nmr, &na, 0 /* don't create */); if (error) { /* no device, or another bridge or user owns the device */ goto unlock_exit; } if (na == NULL) { /* VALE prefix missing */ error = EINVAL; goto unlock_exit; + } else if (nm_is_bwrap(na) && + ((struct netmap_bwrap_adapter *)na)->na_polling_state) { + /* Don't detach a NIC with polling */ + error = EBUSY; + netmap_adapter_put(na); + goto unlock_exit; } - if (na->nm_bdg_ctl) { /* remove the port from bridge.
The bwrap * also needs to put the hwna in normal mode */ error = na->nm_bdg_ctl(na, nmr, 0); } netmap_adapter_put(na); unlock_exit: NMG_UNLOCK(); return error; } +struct nm_bdg_polling_state; +struct +nm_bdg_kthread { + struct nm_kthread *nmk; + u_int qfirst; + u_int qlast; + struct nm_bdg_polling_state *bps; +}; +struct nm_bdg_polling_state { + bool configured; + bool stopped; + struct netmap_bwrap_adapter *bna; + u_int reg; + u_int qfirst; + u_int qlast; + u_int cpu_from; + u_int ncpus; + struct nm_bdg_kthread *kthreads; +}; + +static void +netmap_bwrap_polling(void *data) +{ + struct nm_bdg_kthread *nbk = data; + struct netmap_bwrap_adapter *bna; + u_int qfirst, qlast, i; + struct netmap_kring *kring0, *kring; + + if (!nbk) + return; + qfirst = nbk->qfirst; + qlast = nbk->qlast; + bna = nbk->bps->bna; + kring0 = NMR(bna->hwna, NR_RX); + + for (i = qfirst; i < qlast; i++) { + kring = kring0 + i; + kring->nm_notify(kring, 0); + } +} + +static int +nm_bdg_create_kthreads(struct nm_bdg_polling_state *bps) +{ + struct nm_kthread_cfg kcfg; + int i, j; + + bps->kthreads = malloc(sizeof(struct nm_bdg_kthread) * bps->ncpus, + M_DEVBUF, M_NOWAIT | M_ZERO); + if (bps->kthreads == NULL) + return ENOMEM; + + bzero(&kcfg, sizeof(kcfg)); + kcfg.worker_fn = netmap_bwrap_polling; + for (i = 0; i < bps->ncpus; i++) { + struct nm_bdg_kthread *t = bps->kthreads + i; + int all = (bps->ncpus == 1 && bps->reg == NR_REG_ALL_NIC); + int affinity = bps->cpu_from + i; + + t->bps = bps; + t->qfirst = all ? bps->qfirst /* must be 0 */: affinity; + t->qlast = all ? bps->qlast : t->qfirst + 1; + D("kthread %d a:%u qf:%u ql:%u", i, affinity, t->qfirst, + t->qlast); + + kcfg.type = i; + kcfg.worker_private = t; + t->nmk = nm_os_kthread_create(&kcfg); + if (t->nmk == NULL) { + goto cleanup; + } + nm_os_kthread_set_affinity(t->nmk, affinity); + } + return 0; + +cleanup: + for (j = 0; j < i; j++) { + struct nm_bdg_kthread *t = bps->kthreads + j; + nm_os_kthread_delete(t->nmk); + } + free(bps->kthreads, M_DEVBUF); + return EFAULT; +} + +/* a version of ptnetmap_start_kthreads() */ +static int +nm_bdg_polling_start_kthreads(struct nm_bdg_polling_state *bps) +{ + int error, i, j; + + if (!bps) { + D("polling is not configured"); + return EFAULT; + } + bps->stopped = false; + + for (i = 0; i < bps->ncpus; i++) { + struct nm_bdg_kthread *t = bps->kthreads + i; + error = nm_os_kthread_start(t->nmk); + if (error) { + D("error in nm_kthread_start()"); + goto cleanup; + } + } + return 0; + +cleanup: + for (j = 0; j < i; j++) { + struct nm_bdg_kthread *t = bps->kthreads + j; + nm_os_kthread_stop(t->nmk); + } + bps->stopped = true; + return error; +} + +static void +nm_bdg_polling_stop_delete_kthreads(struct nm_bdg_polling_state *bps) +{ + int i; + + if (!bps) + return; + + for (i = 0; i < bps->ncpus; i++) { + struct nm_bdg_kthread *t = bps->kthreads + i; + nm_os_kthread_stop(t->nmk); + nm_os_kthread_delete(t->nmk); + } + bps->stopped = true; +} + +static int +get_polling_cfg(struct nmreq *nmr, struct netmap_adapter *na, + struct nm_bdg_polling_state *bps) +{ + int req_cpus, avail_cpus, core_from; + u_int reg, i, qfirst, qlast; + + avail_cpus = nm_os_ncpus(); + req_cpus = nmr->nr_arg1; + + if (req_cpus == 0) { + D("req_cpus must be > 0"); + return EINVAL; + } else if (req_cpus >= avail_cpus) { + D("for safety, we need at least one core left in the system"); + return EINVAL; + } + reg = nmr->nr_flags & NR_REG_MASK; + i = nmr->nr_ringid & NETMAP_RING_MASK; + /* + * ONE_NIC: dedicate one core to one ring.
If multiple cores + * are specified, consecutive rings are also polled. + * For example, if ringid=2 and 2 cores are given, + * ring 2 and 3 are polled by core 2 and 3, respectively. + * ALL_NIC: poll all the rings using a core specified by ringid. + * the number of cores must be 1. + */ + if (reg == NR_REG_ONE_NIC) { + if (i + req_cpus > nma_get_nrings(na, NR_RX)) { + D("only %d rings exist (ring %u-%u is given)", + nma_get_nrings(na, NR_RX), i, i+req_cpus); + return EINVAL; + } + qfirst = i; + qlast = qfirst + req_cpus; + core_from = qfirst; + } else if (reg == NR_REG_ALL_NIC) { + if (req_cpus != 1) { + D("ncpus must be 1 not %d for REG_ALL_NIC", req_cpus); + return EINVAL; + } + qfirst = 0; + qlast = nma_get_nrings(na, NR_RX); + core_from = i; + } else { + D("reg must be ALL_NIC or ONE_NIC"); + return EINVAL; + } + + bps->reg = reg; + bps->qfirst = qfirst; + bps->qlast = qlast; + bps->cpu_from = core_from; + bps->ncpus = req_cpus; + D("%s qfirst %u qlast %u cpu_from %u ncpus %u", + reg == NR_REG_ALL_NIC ? "REG_ALL_NIC" : "REG_ONE_NIC", + qfirst, qlast, core_from, req_cpus); + return 0; +} + +static int +nm_bdg_ctl_polling_start(struct nmreq *nmr, struct netmap_adapter *na) +{ + struct nm_bdg_polling_state *bps; + struct netmap_bwrap_adapter *bna; + int error; + + bna = (struct netmap_bwrap_adapter *)na; + if (bna->na_polling_state) { + D("ERROR adapter already in polling mode"); + return EFAULT; + } + + bps = malloc(sizeof(*bps), M_DEVBUF, M_NOWAIT | M_ZERO); + if (!bps) + return ENOMEM; + bps->configured = false; + bps->stopped = true; + + if (get_polling_cfg(nmr, na, bps)) { + free(bps, M_DEVBUF); + return EINVAL; + } + + if (nm_bdg_create_kthreads(bps)) { + free(bps, M_DEVBUF); + return EFAULT; + } + + bps->configured = true; + bna->na_polling_state = bps; + bps->bna = bna; + + /* disable interrupt if possible */ + if (bna->hwna->nm_intr) + bna->hwna->nm_intr(bna->hwna, 0); + /* start kthread now */ + error = nm_bdg_polling_start_kthreads(bps); + if (error) { + D("ERROR nm_bdg_polling_start_kthread()"); + free(bps->kthreads, M_DEVBUF); + free(bps, M_DEVBUF); + bna->na_polling_state = NULL; + if (bna->hwna->nm_intr) + bna->hwna->nm_intr(bna->hwna, 1); + } + return error; +} + +static int +nm_bdg_ctl_polling_stop(struct nmreq *nmr, struct netmap_adapter *na) +{ + struct netmap_bwrap_adapter *bna = (struct netmap_bwrap_adapter *)na; + struct nm_bdg_polling_state *bps; + + if (!bna->na_polling_state) { + D("ERROR adapter is not in polling mode"); + return EFAULT; + } + bps = bna->na_polling_state; + nm_bdg_polling_stop_delete_kthreads(bna->na_polling_state); + bps->configured = false; + free(bps, M_DEVBUF); + bna->na_polling_state = NULL; + /* reenable interrupt */ + if (bna->hwna->nm_intr) + bna->hwna->nm_intr(bna->hwna, 1); + return 0; +} + /* Called by either user's context (netmap_ioctl()) * or external kernel modules (e.g., Openvswitch). * Operation is indicated in nmr->nr_cmd. * NETMAP_BDG_OPS that sets configure/lookup/dtor functions to the bridge * requires bdg_ops argument; the other commands ignore this argument. * * Called without NMG_LOCK. 
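*
* For the polling commands, the caller fills the same nmreq used by the
* other NETMAP_BDG_* operations. A sketch of the NR_REG_ONE_NIC case
* described above, polling rings 2 and 3 of the NIC attached as
* "vale0:em1" with two dedicated cores (error handling elided):
*
*	struct nmreq nmr;
*
*	bzero(&nmr, sizeof(nmr));
*	nmr.nr_version = NETMAP_API;
*	strncpy(nmr.nr_name, "vale0:em1", sizeof(nmr.nr_name));
*	nmr.nr_cmd = NETMAP_BDG_POLLING_ON;
*	nmr.nr_flags = NR_REG_ONE_NIC;
*	nmr.nr_ringid = 2;	/* first ring (and first core) */
*	nmr.nr_arg1 = 2;	/* two cores, hence rings 2..3 */
*	ioctl(fd, NIOCREGIF, &nmr);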
*/ int netmap_bdg_ctl(struct nmreq *nmr, struct netmap_bdg_ops *bdg_ops) { struct nm_bridge *b, *bridges; struct netmap_adapter *na; struct netmap_vp_adapter *vpna; char *name = nmr->nr_name; int cmd = nmr->nr_cmd, namelen = strlen(name); int error = 0, i, j; u_int num_bridges; netmap_bns_getbridges(&bridges, &num_bridges); switch (cmd) { case NETMAP_BDG_NEWIF: error = nm_vi_create(nmr); break; case NETMAP_BDG_DELIF: error = nm_vi_destroy(nmr->nr_name); break; case NETMAP_BDG_ATTACH: error = nm_bdg_ctl_attach(nmr); break; case NETMAP_BDG_DETACH: error = nm_bdg_ctl_detach(nmr); break; case NETMAP_BDG_LIST: /* this is used to enumerate bridges and ports */ if (namelen) { /* look up indexes of bridge and port */ - if (strncmp(name, NM_NAME, strlen(NM_NAME))) { + if (strncmp(name, NM_BDG_NAME, strlen(NM_BDG_NAME))) { error = EINVAL; break; } NMG_LOCK(); b = nm_find_bridge(name, 0 /* don't create */); if (!b) { error = ENOENT; NMG_UNLOCK(); break; } - error = ENOENT; + error = 0; + nmr->nr_arg1 = b - bridges; /* bridge index */ + nmr->nr_arg2 = NM_BDG_NOPORT; for (j = 0; j < b->bdg_active_ports; j++) { i = b->bdg_port_index[j]; vpna = b->bdg_ports[i]; if (vpna == NULL) { D("---AAAAAAAAARGH-------"); continue; } /* the former and the latter identify a * virtual port and a NIC, respectively */ if (!strcmp(vpna->up.name, name)) { - /* bridge index */ - nmr->nr_arg1 = b - bridges; nmr->nr_arg2 = i; /* port index */ - error = 0; break; } } NMG_UNLOCK(); } else { /* return the first non-empty entry starting from * bridge nr_arg1 and port nr_arg2. * * Users can detect the end of the same bridge by * seeing the new and old value of nr_arg1, and can * detect the end of all the bridges by error != 0 */ i = nmr->nr_arg1; j = nmr->nr_arg2; NMG_LOCK(); for (error = ENOENT; i < NM_BRIDGES; i++) { b = bridges + i; if (j >= b->bdg_active_ports) { j = 0; /* following bridges scan from 0 */ continue; } nmr->nr_arg1 = i; nmr->nr_arg2 = j; j = b->bdg_port_index[j]; vpna = b->bdg_ports[j]; strncpy(name, vpna->up.name, (size_t)IFNAMSIZ); error = 0; break; } NMG_UNLOCK(); } break; case NETMAP_BDG_REGOPS: /* XXX this should not be available from userspace */ /* register callbacks to the given bridge. * nmr->nr_name may be just the bridge's name (including ':' * if it is not just NM_NAME). */ if (!bdg_ops) { error = EINVAL; break; } NMG_LOCK(); b = nm_find_bridge(name, 0 /* don't create */); if (!b) { error = EINVAL; } else { b->bdg_ops = *bdg_ops; } NMG_UNLOCK(); break; case NETMAP_BDG_VNET_HDR: /* Valid lengths for the virtio-net header are 0 (no header), 10 and 12.
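*
* A sketch of how userspace would set this, assuming the request is
* submitted through NIOCREGIF like the other NETMAP_BDG_* commands
* (error handling elided):
*
*	bzero(&nmr, sizeof(nmr));
*	nmr.nr_version = NETMAP_API;
*	strncpy(nmr.nr_name, "vale0:1", sizeof(nmr.nr_name));
*	nmr.nr_cmd = NETMAP_BDG_VNET_HDR;
*	nmr.nr_arg1 = 10;	/* 0 disables, 10 or 12 enable */
*	ioctl(fd, NIOCREGIF, &nmr);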
*/ if (nmr->nr_arg1 != 0 && nmr->nr_arg1 != sizeof(struct nm_vnet_hdr) && nmr->nr_arg1 != 12) { error = EINVAL; break; } NMG_LOCK(); error = netmap_get_bdg_na(nmr, &na, 0); if (na && !error) { vpna = (struct netmap_vp_adapter *)na; - vpna->virt_hdr_len = nmr->nr_arg1; - if (vpna->virt_hdr_len) + na->virt_hdr_len = nmr->nr_arg1; + if (na->virt_hdr_len) { vpna->mfs = NETMAP_BUF_SIZE(na); - D("Using vnet_hdr_len %d for %p", vpna->virt_hdr_len, vpna); + } + D("Using vnet_hdr_len %d for %p", na->virt_hdr_len, na); netmap_adapter_put(na); + } else if (!na) { + error = ENXIO; } NMG_UNLOCK(); break; + case NETMAP_BDG_POLLING_ON: + case NETMAP_BDG_POLLING_OFF: + NMG_LOCK(); + error = netmap_get_bdg_na(nmr, &na, 0); + if (na && !error) { + if (!nm_is_bwrap(na)) { + error = EOPNOTSUPP; + } else if (cmd == NETMAP_BDG_POLLING_ON) { + error = nm_bdg_ctl_polling_start(nmr, na); + if (!error) + netmap_adapter_get(na); + } else { + error = nm_bdg_ctl_polling_stop(nmr, na); + if (!error) + netmap_adapter_put(na); + } + netmap_adapter_put(na); + } + NMG_UNLOCK(); + break; + default: D("invalid cmd (nmr->nr_cmd) (0x%x)", cmd); error = EINVAL; break; } return error; } int netmap_bdg_config(struct nmreq *nmr) { struct nm_bridge *b; int error = EINVAL; NMG_LOCK(); b = nm_find_bridge(nmr->nr_name, 0); if (!b) { NMG_UNLOCK(); return error; } NMG_UNLOCK(); /* Don't call config() with NMG_LOCK() held */ BDG_RLOCK(b); if (b->bdg_ops.config != NULL) error = b->bdg_ops.config((struct nm_ifreq *)nmr); BDG_RUNLOCK(b); return error; } /* nm_krings_create callback for VALE ports. * Calls the standard netmap_krings_create, then adds leases on rx * rings and bdgfwd on tx rings. */ static int netmap_vp_krings_create(struct netmap_adapter *na) { u_int tailroom; int error, i; uint32_t *leases; u_int nrx = netmap_real_rings(na, NR_RX); /* * Leases are attached to RX rings on vale ports */ tailroom = sizeof(uint32_t) * na->num_rx_desc * nrx; error = netmap_krings_create(na, tailroom); if (error) return error; leases = na->tailroom; for (i = 0; i < nrx; i++) { /* Receive rings */ na->rx_rings[i].nkr_leases = leases; leases += na->num_rx_desc; } error = nm_alloc_bdgfwd(na); if (error) { netmap_krings_delete(na); return error; } return 0; } /* nm_krings_delete callback for VALE ports. */ static void netmap_vp_krings_delete(struct netmap_adapter *na) { nm_free_bdgfwd(na); netmap_krings_delete(na); } static int nm_bdg_flush(struct nm_bdg_fwd *ft, u_int n, struct netmap_vp_adapter *na, u_int ring_nr); /* * main dispatch routine for the bridge. * Grab packets from a kring, move them into the ft structure * associated to the tx (input) port. Max one instance per port, * filtered on input (ioctl, poll or XXX). * Returns the next position in the ring. */ static int nm_bdg_preflush(struct netmap_kring *kring, u_int end) { struct netmap_vp_adapter *na = (struct netmap_vp_adapter*)kring->na; struct netmap_ring *ring = kring->ring; struct nm_bdg_fwd *ft; u_int ring_nr = kring->ring_id; u_int j = kring->nr_hwcur, lim = kring->nkr_num_slots - 1; u_int ft_i = 0; /* start from 0 */ u_int frags = 1; /* how many frags ? */ struct nm_bridge *b = na->na_bdg; /* To protect against modifications to the bridge we acquire a * shared lock, waiting if we can sleep (if the source port is * attached to a user process) or with a trylock otherwise (NICs). */ ND("wait rlock for %d packets", ((j > end ? 
lim+1 : 0) + end) - j); if (na->up.na_flags & NAF_BDG_MAYSLEEP) BDG_RLOCK(b); else if (!BDG_RTRYLOCK(b)) return 0; ND(5, "rlock acquired for %d packets", ((j > end ? lim+1 : 0) + end) - j); ft = kring->nkr_ft; for (; likely(j != end); j = nm_next(j, lim)) { struct netmap_slot *slot = &ring->slot[j]; char *buf; ft[ft_i].ft_len = slot->len; ft[ft_i].ft_flags = slot->flags; ND("flags is 0x%x", slot->flags); /* we do not use the buf changed flag, but we still need to reset it */ slot->flags &= ~NS_BUF_CHANGED; /* this slot goes into a list so initialize the link field */ ft[ft_i].ft_next = NM_FT_NULL; buf = ft[ft_i].ft_buf = (slot->flags & NS_INDIRECT) ? (void *)(uintptr_t)slot->ptr : NMB(&na->up, slot); if (unlikely(buf == NULL)) { RD(5, "NULL %s buffer pointer from %s slot %d len %d", (slot->flags & NS_INDIRECT) ? "INDIRECT" : "DIRECT", kring->name, j, ft[ft_i].ft_len); buf = ft[ft_i].ft_buf = NETMAP_BUF_BASE(&na->up); ft[ft_i].ft_len = 0; ft[ft_i].ft_flags = 0; } __builtin_prefetch(buf); ++ft_i; if (slot->flags & NS_MOREFRAG) { frags++; continue; } if (unlikely(netmap_verbose && frags > 1)) RD(5, "%d frags at %d", frags, ft_i - frags); ft[ft_i - frags].ft_frags = frags; frags = 1; if (unlikely((int)ft_i >= bridge_batch)) ft_i = nm_bdg_flush(ft, ft_i, na, ring_nr); } if (frags > 1) { - D("truncate incomplete fragment at %d (%d frags)", ft_i, frags); - // ft_i > 0, ft[ft_i-1].flags has NS_MOREFRAG - ft[ft_i - 1].ft_frags &= ~NS_MOREFRAG; - ft[ft_i - frags].ft_frags = frags - 1; + /* Here ft_i > 0, ft[ft_i-1].flags has NS_MOREFRAG, and we + * have to fix frags count. */ + frags--; + ft[ft_i - 1].ft_flags &= ~NS_MOREFRAG; + ft[ft_i - frags].ft_frags = frags; + D("Truncate incomplete fragment at %d (%d frags)", ft_i, frags); } if (ft_i) ft_i = nm_bdg_flush(ft, ft_i, na, ring_nr); BDG_RUNLOCK(b); return j; } /* ----- FreeBSD if_bridge hash function ------- */ /* * The following hash function is adapted from "Hash Functions" by Bob Jenkins * ("Algorithm Alley", Dr. Dobbs Journal, September 1997). 
* * http://www.burtleburtle.net/bob/hash/spooky.html */ #define mix(a, b, c) \ do { \ a -= b; a -= c; a ^= (c >> 13); \ b -= c; b -= a; b ^= (a << 8); \ c -= a; c -= b; c ^= (b >> 13); \ a -= b; a -= c; a ^= (c >> 12); \ b -= c; b -= a; b ^= (a << 16); \ c -= a; c -= b; c ^= (b >> 5); \ a -= b; a -= c; a ^= (c >> 3); \ b -= c; b -= a; b ^= (a << 10); \ c -= a; c -= b; c ^= (b >> 15); \ } while (/*CONSTCOND*/0) static __inline uint32_t nm_bridge_rthash(const uint8_t *addr) { uint32_t a = 0x9e3779b9, b = 0x9e3779b9, c = 0; // hash key b += addr[5] << 8; b += addr[4]; a += addr[3] << 24; a += addr[2] << 16; a += addr[1] << 8; a += addr[0]; mix(a, b, c); #define BRIDGE_RTHASH_MASK (NM_BDG_HASH-1) return (c & BRIDGE_RTHASH_MASK); } #undef mix /* nm_register callback for VALE ports */ static int netmap_vp_reg(struct netmap_adapter *na, int onoff) { struct netmap_vp_adapter *vpna = (struct netmap_vp_adapter*)na; + enum txrx t; + int i; /* persistent ports may be put in netmap mode * before being attached to a bridge */ if (vpna->na_bdg) BDG_WLOCK(vpna->na_bdg); if (onoff) { - na->na_flags |= NAF_NETMAP_ON; + for_rx_tx(t) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { + struct netmap_kring *kring = &NMR(na, t)[i]; + + if (nm_kring_pending_on(kring)) + kring->nr_mode = NKR_NETMAP_ON; + } + } + if (na->active_fds == 0) + na->na_flags |= NAF_NETMAP_ON; /* XXX on FreeBSD, persistent VALE ports should also * toggle IFCAP_NETMAP in na->ifp (2014-03-16) */ } else { - na->na_flags &= ~NAF_NETMAP_ON; + if (na->active_fds == 0) + na->na_flags &= ~NAF_NETMAP_ON; + for_rx_tx(t) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { + struct netmap_kring *kring = &NMR(na, t)[i]; + + if (nm_kring_pending_off(kring)) + kring->nr_mode = NKR_NETMAP_OFF; + } + } } if (vpna->na_bdg) BDG_WUNLOCK(vpna->na_bdg); return 0; } /* * Lookup function for a learning bridge. * Updates the hash table with the source address, * and then returns the destination port index, and the * ring in *dst_ring (at the moment, always use ring 0) */ u_int netmap_bdg_learning(struct nm_bdg_fwd *ft, uint8_t *dst_ring, struct netmap_vp_adapter *na) { uint8_t *buf = ft->ft_buf; u_int buf_len = ft->ft_len; struct nm_hash_ent *ht = na->na_bdg->ht; uint32_t sh, dh; u_int dst, mysrc = na->bdg_port; uint64_t smac, dmac; + uint8_t indbuf[12]; /* safety check, unfortunately we have many cases */ - if (buf_len >= 14 + na->virt_hdr_len) { + if (buf_len >= 14 + na->up.virt_hdr_len) { /* virthdr + mac_hdr in the same slot */ - buf += na->virt_hdr_len; - buf_len -= na->virt_hdr_len; + buf += na->up.virt_hdr_len; + buf_len -= na->up.virt_hdr_len; - } else if (buf_len == na->virt_hdr_len && ft->ft_flags & NS_MOREFRAG) { + } else if (buf_len == na->up.virt_hdr_len && ft->ft_flags & NS_MOREFRAG) { /* only header in first fragment */ ft++; buf = ft->ft_buf; buf_len = ft->ft_len; } else { RD(5, "invalid buf format, length %d", buf_len); return NM_BDG_NOPORT; } + + if (ft->ft_flags & NS_INDIRECT) { + if (copyin(buf, indbuf, sizeof(indbuf))) { + return NM_BDG_NOPORT; + } + buf = indbuf; + } + dmac = le64toh(*(uint64_t *)(buf)) & 0xffffffffffff; smac = le64toh(*(uint64_t *)(buf + 4)); smac >>= 16; /* * The hash is somewhat expensive, there might be some * worthwhile optimizations here. */ if (((buf[6] & 1) == 0) && (na->last_smac != smac)) { /* valid src */ uint8_t *s = buf+6; sh = nm_bridge_rthash(s); // XXX hash of source /* update source port forwarding entry */ na->last_smac = ht[sh].mac = smac; /* XXX expire ?
*/ ht[sh].ports = mysrc; if (netmap_verbose) D("src %02x:%02x:%02x:%02x:%02x:%02x on port %d", s[0], s[1], s[2], s[3], s[4], s[5], mysrc); } dst = NM_BDG_BROADCAST; if ((buf[0] & 1) == 0) { /* unicast */ dh = nm_bridge_rthash(buf); // XXX hash of dst if (ht[dh].mac == dmac) { /* found dst */ dst = ht[dh].ports; } /* XXX otherwise return NM_BDG_UNKNOWN ? */ } return dst; } /* * Available space in the ring. Only used in VALE code * and only with is_rx = 1 */ static inline uint32_t nm_kr_space(struct netmap_kring *k, int is_rx) { int space; if (is_rx) { int busy = k->nkr_hwlease - k->nr_hwcur; if (busy < 0) busy += k->nkr_num_slots; space = k->nkr_num_slots - 1 - busy; } else { /* XXX never used in this branch */ space = k->nr_hwtail - k->nkr_hwlease; if (space < 0) space += k->nkr_num_slots; } #if 0 // sanity check if (k->nkr_hwlease >= k->nkr_num_slots || k->nr_hwcur >= k->nkr_num_slots || k->nr_tail >= k->nkr_num_slots || busy < 0 || busy >= k->nkr_num_slots) { D("invalid kring, cur %d tail %d lease %d lease_idx %d lim %d", k->nr_hwcur, k->nr_hwtail, k->nkr_hwlease, k->nkr_lease_idx, k->nkr_num_slots); } #endif return space; } /* make a lease on the kring for N positions. Return the * lease index * XXX only used in VALE code and with is_rx = 1 */ static inline uint32_t nm_kr_lease(struct netmap_kring *k, u_int n, int is_rx) { uint32_t lim = k->nkr_num_slots - 1; uint32_t lease_idx = k->nkr_lease_idx; k->nkr_leases[lease_idx] = NR_NOSLOT; k->nkr_lease_idx = nm_next(lease_idx, lim); if (n > nm_kr_space(k, is_rx)) { D("invalid request for %d slots", n); panic("x"); } /* XXX verify that there are n slots */ k->nkr_hwlease += n; if (k->nkr_hwlease > lim) k->nkr_hwlease -= lim + 1; if (k->nkr_hwlease >= k->nkr_num_slots || k->nr_hwcur >= k->nkr_num_slots || k->nr_hwtail >= k->nkr_num_slots || k->nkr_lease_idx >= k->nkr_num_slots) { D("invalid kring %s, cur %d tail %d lease %d lease_idx %d lim %d", k->na->name, k->nr_hwcur, k->nr_hwtail, k->nkr_hwlease, k->nkr_lease_idx, k->nkr_num_slots); } return lease_idx; } /* * * This flush routine supports only unicast and broadcast but a large * number of ports, and lets us replace the learn and dispatch functions. */ int nm_bdg_flush(struct nm_bdg_fwd *ft, u_int n, struct netmap_vp_adapter *na, u_int ring_nr) { struct nm_bdg_q *dst_ents, *brddst; uint16_t num_dsts = 0, *dsts; struct nm_bridge *b = na->na_bdg; - u_int i, j, me = na->bdg_port; + u_int i, me = na->bdg_port; /* * The work area (pointed to by ft) is followed by an array of * pointers to queues, dst_ents; there are NM_BDG_MAXRINGS * queues per port plus one for the broadcast traffic. * Then we have an array of destination indexes. */ dst_ents = (struct nm_bdg_q *)(ft + NM_BDG_BATCH_MAX); dsts = (uint16_t *)(dst_ents + NM_BDG_MAXPORTS * NM_BDG_MAXRINGS + 1); /* first pass: find a destination for each packet in the batch */ for (i = 0; likely(i < n); i += ft[i].ft_frags) { uint8_t dst_ring = ring_nr; /* default, same ring as origin */ uint16_t dst_port, d_i; struct nm_bdg_q *d; ND("slot %d frags %d", i, ft[i].ft_frags); /* Drop the packet if the virtio-net header is neither contained in the first fragment nor at the very beginning of the second.
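(The lookup functions, e.g. netmap_bdg_learning() above, locate the Ethernet addresses by skipping virt_hdr_len bytes, so the header must sit in one of these two predictable places.)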
*/ - if (unlikely(na->virt_hdr_len > ft[i].ft_len)) + if (unlikely(na->up.virt_hdr_len > ft[i].ft_len)) continue; dst_port = b->bdg_ops.lookup(&ft[i], &dst_ring, na); if (netmap_verbose > 255) RD(5, "slot %d port %d -> %d", i, me, dst_port); if (dst_port == NM_BDG_NOPORT) continue; /* this packet is identified to be dropped */ else if (unlikely(dst_port > NM_BDG_MAXPORTS)) continue; else if (dst_port == NM_BDG_BROADCAST) dst_ring = 0; /* broadcasts always go to ring 0 */ else if (unlikely(dst_port == me || !b->bdg_ports[dst_port])) continue; /* get a position in the scratch pad */ d_i = dst_port * NM_BDG_MAXRINGS + dst_ring; d = dst_ents + d_i; /* append the first fragment to the list */ if (d->bq_head == NM_FT_NULL) { /* new destination */ d->bq_head = d->bq_tail = i; /* remember this position to be scanned later */ if (dst_port != NM_BDG_BROADCAST) dsts[num_dsts++] = d_i; } else { ft[d->bq_tail].ft_next = i; d->bq_tail = i; } d->bq_len += ft[i].ft_frags; } /* * Broadcast traffic goes to ring 0 on all destinations. * So we need to add these rings to the list of ports to scan. * XXX at the moment we scan all NM_BDG_MAXPORTS ports, which is * expensive. We should keep a compact list of active destinations * so we could shorten this loop. */ brddst = dst_ents + NM_BDG_BROADCAST * NM_BDG_MAXRINGS; if (brddst->bq_head != NM_FT_NULL) { + u_int j; for (j = 0; likely(j < b->bdg_active_ports); j++) { uint16_t d_i; i = b->bdg_port_index[j]; if (unlikely(i == me)) continue; d_i = i * NM_BDG_MAXRINGS; if (dst_ents[d_i].bq_head == NM_FT_NULL) dsts[num_dsts++] = d_i; } } ND(5, "pass 1 done %d pkts %d dsts", n, num_dsts); /* second pass: scan destinations */ for (i = 0; i < num_dsts; i++) { struct netmap_vp_adapter *dst_na; struct netmap_kring *kring; struct netmap_ring *ring; u_int dst_nr, lim, j, d_i, next, brd_next; u_int needed, howmany; int retry = netmap_txsync_retry; struct nm_bdg_q *d; uint32_t my_start = 0, lease_idx = 0; int nrings; int virt_hdr_mismatch = 0; d_i = dsts[i]; ND("second pass %d port %d", i, d_i); d = dst_ents + d_i; // XXX fix the division dst_na = b->bdg_ports[d_i/NM_BDG_MAXRINGS]; /* protect from the lookup function returning an inactive * destination port */ if (unlikely(dst_na == NULL)) goto cleanup; if (dst_na->up.na_flags & NAF_SW_ONLY) goto cleanup; /* * The interface may be in !netmap mode in two cases: * - when na is attached but not activated yet; * - when na is being deactivated but is still attached. */ if (unlikely(!nm_netmap_on(&dst_na->up))) { ND("not in netmap mode!"); goto cleanup; } /* there is at least one packet, either unicast or broadcast */ brd_next = brddst->bq_head; next = d->bq_head; /* we need to reserve this many slots. If fewer are * available, some packets will be dropped. * Packets may have multiple fragments, so there is a chance * that we may not use all of the slots we have claimed, and * we will need to handle the leftover ones when we regain * the lock. */ needed = d->bq_len + brddst->bq_len; - if (unlikely(dst_na->virt_hdr_len != na->virt_hdr_len)) { - RD(3, "virt_hdr_mismatch, src %d dst %d", na->virt_hdr_len, dst_na->virt_hdr_len); + if (unlikely(dst_na->up.virt_hdr_len != na->up.virt_hdr_len)) { + RD(3, "virt_hdr_mismatch, src %d dst %d", na->up.virt_hdr_len, + dst_na->up.virt_hdr_len); /* There is a virtio-net header/offloadings mismatch between * source and destination. The slower mismatch datapath will * be used to cope with all the mismatches.
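(That datapath is bdg_mismatch_datapath(), from netmap_offloadings.c, invoked from the copy loop below.)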
*/ virt_hdr_mismatch = 1; if (dst_na->mfs < na->mfs) { /* We may need to do segmentation offloadings, and so * we may need a number of destination slots greater * than the number of input slots ('needed'). * We look for the smallest integer 'x' which satisfies: * needed * na->mfs + x * H <= x * dst_na->mfs * where 'H' is the length of the longest header that may * be replicated in the segmentation process (e.g. for * TCPv4 we must account for ethernet header, IP header * and TCPv4 header). */ needed = (needed * na->mfs) / (dst_na->mfs - WORST_CASE_GSO_HEADER) + 1; ND(3, "srcmtu=%u, dstmtu=%u, x=%u", na->mfs, dst_na->mfs, needed); } } ND(5, "pass 2 dst %d is %x %s", i, d_i, is_vp ? "virtual" : "nic/host"); dst_nr = d_i & (NM_BDG_MAXRINGS-1); nrings = dst_na->up.num_rx_rings; if (dst_nr >= nrings) dst_nr = dst_nr % nrings; kring = &dst_na->up.rx_rings[dst_nr]; ring = kring->ring; lim = kring->nkr_num_slots - 1; retry: if (dst_na->retry && retry) { /* try to get some free slot from the previous run */ kring->nm_notify(kring, 0); /* actually useful only for bwraps, since there * the notify will trigger a txsync on the hwna. VALE ports * have dst_na->retry == 0 */ } /* reserve the buffers in the queue and an entry * to report completion, and drop the lock. * XXX this might become a helper function. */ mtx_lock(&kring->q_lock); if (kring->nkr_stopped) { mtx_unlock(&kring->q_lock); goto cleanup; } my_start = j = kring->nkr_hwlease; howmany = nm_kr_space(kring, 1); if (needed < howmany) howmany = needed; lease_idx = nm_kr_lease(kring, howmany, 1); mtx_unlock(&kring->q_lock); /* only retry if we need more than available slots */ if (retry && needed <= howmany) retry = 0; /* copy to the destination queue */ while (howmany > 0) { struct netmap_slot *slot; struct nm_bdg_fwd *ft_p, *ft_end; u_int cnt; /* find the queue from which we pick the next packet. * NM_FT_NULL is always higher than valid indexes * so we never dereference it if the other list * has packets (and if both are empty we never * get here). */ if (next < brd_next) { ft_p = ft + next; next = ft_p->ft_next; } else { /* insert broadcast */ ft_p = ft + brd_next; brd_next = ft_p->ft_next; } cnt = ft_p->ft_frags; // cnt > 0 if (unlikely(cnt > howmany)) break; /* no more space */ if (netmap_verbose && cnt > 1) RD(5, "rx %d frags to %d", cnt, j); ft_end = ft_p + cnt; if (unlikely(virt_hdr_mismatch)) { bdg_mismatch_datapath(na, dst_na, ft_p, ring, &j, lim, &howmany); } else { howmany -= cnt; do { char *dst, *src = ft_p->ft_buf; size_t copy_len = ft_p->ft_len, dst_len = copy_len; slot = &ring->slot[j]; dst = NMB(&dst_na->up, slot); ND("send [%d] %d(%d) bytes at %s:%d", i, (int)copy_len, (int)dst_len, NM_IFPNAME(dst_ifp), j); /* round to a multiple of 64 */ copy_len = (copy_len + 63) & ~63; if (unlikely(copy_len > NETMAP_BUF_SIZE(&dst_na->up) || copy_len > NETMAP_BUF_SIZE(&na->up))) { RD(5, "invalid len %d, down to 64", (int)copy_len); copy_len = dst_len = 64; // XXX } if (ft_p->ft_flags & NS_INDIRECT) { if (copyin(src, dst, copy_len)) { // invalid user pointer, pretend len is 0 dst_len = 0; } } else { //memcpy(dst, src, copy_len); pkt_copy(src, dst, (int)copy_len); } slot->len = dst_len; slot->flags = (cnt << 8)| NS_MOREFRAG; j = nm_next(j, lim); needed--; ft_p++; } while (ft_p != ft_end); slot->flags = (cnt << 8); /* clear flag on last entry */ } /* are we done ?
*/ if (next == NM_FT_NULL && brd_next == NM_FT_NULL) break; } { /* current position */ uint32_t *p = kring->nkr_leases; /* shorthand */ uint32_t update_pos; int still_locked = 1; mtx_lock(&kring->q_lock); if (unlikely(howmany > 0)) { /* we have not used all the bufs. If I am the last one * I can recover the slots, otherwise I must * fill them with 0 to mark empty packets. */ ND("leftover %d bufs", howmany); if (nm_next(lease_idx, lim) == kring->nkr_lease_idx) { /* yes, I am the last one */ ND("roll back nkr_hwlease to %d", j); kring->nkr_hwlease = j; } else { while (howmany-- > 0) { ring->slot[j].len = 0; ring->slot[j].flags = 0; j = nm_next(j, lim); } } } p[lease_idx] = j; /* report I am done */ update_pos = kring->nr_hwtail; if (my_start == update_pos) { /* all slots before my_start have been reported, * so scan subsequent leases to see if other ranges * have been completed, and do a selwakeup or txsync. */ while (lease_idx != kring->nkr_lease_idx && p[lease_idx] != NR_NOSLOT) { j = p[lease_idx]; p[lease_idx] = NR_NOSLOT; lease_idx = nm_next(lease_idx, lim); } /* j is the new 'write' position. j != my_start * means there are new buffers to report */ if (likely(j != my_start)) { kring->nr_hwtail = j; still_locked = 0; mtx_unlock(&kring->q_lock); kring->nm_notify(kring, 0); /* this is netmap_notify for VALE ports and * netmap_bwrap_notify for bwrap. The latter will * trigger a txsync on the underlying hwna */ if (dst_na->retry && retry--) { /* XXX this is going to call nm_notify again. * Only useful for bwrap in virtual machines */ goto retry; } } } if (still_locked) mtx_unlock(&kring->q_lock); } cleanup: d->bq_head = d->bq_tail = NM_FT_NULL; /* cleanup */ d->bq_len = 0; } brddst->bq_head = brddst->bq_tail = NM_FT_NULL; /* cleanup */ brddst->bq_len = 0; return 0; } /* nm_txsync callback for VALE ports */ static int netmap_vp_txsync(struct netmap_kring *kring, int flags) { struct netmap_vp_adapter *na = (struct netmap_vp_adapter *)kring->na; u_int done; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; if (bridge_batch <= 0) { /* testing only */ done = head; // used all goto done; } if (!na->na_bdg) { done = head; goto done; } if (bridge_batch > NM_BDG_BATCH) bridge_batch = NM_BDG_BATCH; done = nm_bdg_preflush(kring, head); done: if (done != head) D("early break at %d/ %d, tail %d", done, head, kring->nr_hwtail); /* * packets between 'done' and 'cur' are left unsent. */ kring->nr_hwcur = done; kring->nr_hwtail = nm_prev(done, lim); if (netmap_verbose) D("%s ring %d flags %d", na->up.name, kring->ring_id, flags); return 0; } /* rxsync code used by the VALE ports' nm_rxsync callback and also * internally by the bwrap */ static int netmap_vp_rxsync_locked(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct netmap_ring *ring = kring->ring; u_int nm_i, lim = kring->nkr_num_slots - 1; u_int head = kring->rhead; int n; if (head > lim) { D("ouch dangerous reset!!!"); n = netmap_ring_reinit(kring); goto done; } /* First part, import newly received packets. */ /* actually nothing to do here, they are already in the kring */ /* Second part, skip past packets that userspace has released.
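For example, with nr_hwcur == 10 and rhead == 20 the user has returned slots 10..19; the loop below walks them to clear NS_BUF_CHANGED before they are made available to writers again.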
*/ nm_i = kring->nr_hwcur; if (nm_i != head) { /* consistency check, but nothing really important here */ for (n = 0; likely(nm_i != head); n++) { struct netmap_slot *slot = &ring->slot[nm_i]; void *addr = NMB(na, slot); if (addr == NETMAP_BUF_BASE(kring->na)) { /* bad buf */ D("bad buffer index %d, ignore ?", slot->buf_idx); } slot->flags &= ~NS_BUF_CHANGED; nm_i = nm_next(nm_i, lim); } kring->nr_hwcur = head; } n = 0; done: return n; } /* * nm_rxsync callback for VALE ports * user process reading from a VALE switch. * Already protected against concurrent calls from userspace, * but we must acquire the queue's lock to protect against * writers on the same queue. */ static int netmap_vp_rxsync(struct netmap_kring *kring, int flags) { int n; mtx_lock(&kring->q_lock); n = netmap_vp_rxsync_locked(kring, flags); mtx_unlock(&kring->q_lock); return n; } /* nm_bdg_attach callback for VALE ports * The na_vp port is this same netmap_adapter. There is no host port. */ static int netmap_vp_bdg_attach(const char *name, struct netmap_adapter *na) { struct netmap_vp_adapter *vpna = (struct netmap_vp_adapter *)na; if (vpna->na_bdg) return EBUSY; na->na_vp = vpna; strncpy(na->name, name, sizeof(na->name)); na->na_hostvp = NULL; return 0; } /* create a netmap_vp_adapter that describes a VALE port. * Only persistent VALE ports have a non-null ifp. */ static int netmap_vp_create(struct nmreq *nmr, struct ifnet *ifp, struct netmap_vp_adapter **ret) { struct netmap_vp_adapter *vpna; struct netmap_adapter *na; int error; u_int npipes = 0; vpna = malloc(sizeof(*vpna), M_DEVBUF, M_NOWAIT | M_ZERO); if (vpna == NULL) return ENOMEM; na = &vpna->up; na->ifp = ifp; strncpy(na->name, nmr->nr_name, sizeof(na->name)); /* bound checking */ na->num_tx_rings = nmr->nr_tx_rings; nm_bound_var(&na->num_tx_rings, 1, 1, NM_BDG_MAXRINGS, NULL); nmr->nr_tx_rings = na->num_tx_rings; // write back na->num_rx_rings = nmr->nr_rx_rings; nm_bound_var(&na->num_rx_rings, 1, 1, NM_BDG_MAXRINGS, NULL); nmr->nr_rx_rings = na->num_rx_rings; // write back nm_bound_var(&nmr->nr_tx_slots, NM_BRIDGE_RINGSIZE, 1, NM_BDG_MAXSLOTS, NULL); na->num_tx_desc = nmr->nr_tx_slots; nm_bound_var(&nmr->nr_rx_slots, NM_BRIDGE_RINGSIZE, 1, NM_BDG_MAXSLOTS, NULL); /* validate number of pipes. We want at least 1, * but probably can do with some more. * So let's use 2 as default (when 0 is supplied) */ npipes = nmr->nr_arg1; nm_bound_var(&npipes, 2, 1, NM_MAXPIPES, NULL); nmr->nr_arg1 = npipes; /* write back */ /* validate extra bufs */ nm_bound_var(&nmr->nr_arg3, 0, 0, 128*NM_BDG_MAXSLOTS, NULL); na->num_rx_desc = nmr->nr_rx_slots; - vpna->virt_hdr_len = 0; vpna->mfs = 1514; vpna->last_smac = ~0llu; /*if (vpna->mfs > netmap_buf_size) TODO netmap_buf_size is zero?? 
vpna->mfs = netmap_buf_size; */ if (netmap_verbose) D("max frame size %u", vpna->mfs); na->na_flags |= NAF_BDG_MAYSLEEP; /* persistent VALE ports look like hw devices * with a native netmap adapter */ if (ifp) na->na_flags |= NAF_NATIVE; na->nm_txsync = netmap_vp_txsync; na->nm_rxsync = netmap_vp_rxsync; na->nm_register = netmap_vp_reg; na->nm_krings_create = netmap_vp_krings_create; na->nm_krings_delete = netmap_vp_krings_delete; na->nm_dtor = netmap_vp_dtor; na->nm_mem = netmap_mem_private_new(na->name, na->num_tx_rings, na->num_tx_desc, na->num_rx_rings, na->num_rx_desc, nmr->nr_arg3, npipes, &error); if (na->nm_mem == NULL) goto err; na->nm_bdg_attach = netmap_vp_bdg_attach; /* other nmd fields are set in the common routine */ error = netmap_attach_common(na); if (error) goto err; *ret = vpna; return 0; err: if (na->nm_mem != NULL) netmap_mem_delete(na->nm_mem); free(vpna, M_DEVBUF); return error; } /* Bridge wrapper code (bwrap). * This is used to connect a non-VALE-port netmap_adapter (hwna) to a * VALE switch. * The main task is to swap the meaning of tx and rx rings to match the * expectations of the VALE switch code (see nm_bdg_flush). * * The bwrap works by interposing a netmap_bwrap_adapter between the * rest of the system and the hwna. The netmap_bwrap_adapter looks like * a netmap_vp_adapter to the rest of the system, but, internally, it * translates all callbacks to what the hwna expects. * * Note that we have to intercept callbacks coming from two sides: * * - callbacks coming from the netmap module are intercepted by * passing around the netmap_bwrap_adapter instead of the hwna * * - callbacks coming from outside of the netmap module only know * about the hwna. This, however, only happens in interrupt * handlers, where only the hwna->nm_notify callback is called. * What the bwrap does is to overwrite the hwna->nm_notify callback * with its own netmap_bwrap_intr_notify. * XXX This assumes that the hwna->nm_notify callback was the * standard netmap_notify(), as is the case for nic adapters. * Any additional action performed by hwna->nm_notify will not be * performed by netmap_bwrap_intr_notify. * * Additionally, the bwrap can optionally attach the host ring pair * of the wrapped adapter to a different port of the switch. */ static void netmap_bwrap_dtor(struct netmap_adapter *na) { struct netmap_bwrap_adapter *bna = (struct netmap_bwrap_adapter*)na; struct netmap_adapter *hwna = bna->hwna; + struct nm_bridge *b = bna->up.na_bdg, + *bh = bna->host.na_bdg; + if (b) { + netmap_bdg_detach_common(b, bna->up.bdg_port, + (bh ? bna->host.bdg_port : -1)); + } + ND("na %p", na); - /* drop reference to hwna->ifp. - * If we don't do this, netmap_detach_common(na) - * will think it has set NA(na->ifp) to NULL - */ na->ifp = NULL; - /* for safety, also drop the possible reference - * in the hostna - */ bna->host.up.ifp = NULL; - - hwna->nm_mem = bna->save_nmd; hwna->na_private = NULL; hwna->na_vp = hwna->na_hostvp = NULL; hwna->na_flags &= ~NAF_BUSY; netmap_adapter_put(hwna); } /* * Intr callback for NICs connected to a bridge. * Simply ignore tx interrupts (maybe we could try to recover space ?) * and pass received packets from nic to the bridge. * * XXX TODO check locking: this is called from the interrupt * handler so we should make sure that the interface is not * disconnected while passing down an interrupt. * * Note, no user process can access this NIC or the host stack.
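* (While attached to a switch the hwna is marked NAF_BUSY, so no user NIOCREGIF can grab it.)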
* The only significant parts of the ring are the slots, * and head/cur/tail are set from the kring as needed * (part as a receive ring, part as a transmit ring). * * This is the callback that overwrites the hwna notify callback. - * Packets come from the outside or from the host stack and are put on an hwna rx ring. + * Packets come from the outside or from the host stack and are put on an + * hwna rx ring. * The bridge wrapper then sends the packets through the bridge. */ static int netmap_bwrap_intr_notify(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct netmap_bwrap_adapter *bna = na->na_private; struct netmap_kring *bkring; struct netmap_vp_adapter *vpna = &bna->up; u_int ring_nr = kring->ring_id; - int error = 0; + int ret = NM_IRQ_COMPLETED; + int error; if (netmap_verbose) D("%s %s 0x%x", na->name, kring->name, flags); - if (!nm_netmap_on(na)) - return 0; - bkring = &vpna->up.tx_rings[ring_nr]; /* make sure the ring is not disabled */ - if (nm_kr_tryget(kring)) - return 0; + if (nm_kr_tryget(kring, 0 /* can't sleep */, NULL)) { + return EIO; + } if (netmap_verbose) D("%s head %d cur %d tail %d", na->name, kring->rhead, kring->rcur, kring->rtail); /* simulate a user wakeup on the rx ring * fetch packets that have arrived. */ error = kring->nm_sync(kring, 0); if (error) goto put_out; - if (kring->nr_hwcur == kring->nr_hwtail && netmap_verbose) { - D("how strange, interrupt with no packets on %s", - na->name); + if (kring->nr_hwcur == kring->nr_hwtail) { + if (netmap_verbose) + D("how strange, interrupt with no packets on %s", + na->name); goto put_out; } /* new packets are kring->rcur to kring->nr_hwtail, and the bkring * had hwcur == bkring->rhead. So advance bkring->rhead to kring->nr_hwtail * to push all packets out. */ bkring->rhead = bkring->rcur = kring->nr_hwtail; netmap_vp_txsync(bkring, flags); /* mark all buffers as released on this ring */ kring->rhead = kring->rcur = kring->rtail = kring->nr_hwtail; /* another call to actually release the buffers */ error = kring->nm_sync(kring, 0); + /* The second rxsync may have further advanced hwtail. If this happens, + * return NM_IRQ_RESCHED, otherwise just return NM_IRQ_COMPLETED. */ + if (kring->rcur != kring->nr_hwtail) { + ret = NM_IRQ_RESCHED; + } put_out: nm_kr_put(kring); - return error; + + return error ? error : ret; } /* nm_register callback for bwrap */ static int -netmap_bwrap_register(struct netmap_adapter *na, int onoff) +netmap_bwrap_reg(struct netmap_adapter *na, int onoff) { struct netmap_bwrap_adapter *bna = (struct netmap_bwrap_adapter *)na; struct netmap_adapter *hwna = bna->hwna; struct netmap_vp_adapter *hostna = &bna->host; - int error; + int error, i; enum txrx t; ND("%s %s", na->name, onoff ? "on" : "off"); if (onoff) { - int i; - /* netmap_do_regif has been called on the bwrap na. * We need to pass the information about the * memory allocator down to the hwna before * putting it in netmap mode */ hwna->na_lut = na->na_lut; if (hostna->na_bdg) { /* if the host rings have been attached to the switch, * we need to copy the memory allocator information * in the hostna also */ hostna->up.na_lut = na->na_lut; } /* cross-link the netmap rings * The original number of rings comes from hwna, * rx rings on one side equals tx rings on the other.
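* (Concretely, after registration each bwrap tx kring shares its netmap_ring with the corresponding hwna rx kring, and vice versa; see the for_rx_tx() loop below.)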
- * We need to do this now, after the initialization - * of the kring->ring pointers */ for_rx_tx(t) { - enum txrx r= nm_txrx_swap(t); /* swap NR_TX <-> NR_RX */ - for (i = 0; i < nma_get_nrings(na, r) + 1; i++) { - NMR(hwna, t)[i].nkr_num_slots = NMR(na, r)[i].nkr_num_slots; - NMR(hwna, t)[i].ring = NMR(na, r)[i].ring; + enum txrx r = nm_txrx_swap(t); /* swap NR_TX <-> NR_RX */ + for (i = 0; i < nma_get_nrings(hwna, r) + 1; i++) { + NMR(hwna, r)[i].ring = NMR(na, t)[i].ring; } } + + if (na->na_flags & NAF_HOST_RINGS) { + struct netmap_adapter *hna = &hostna->up; + /* the hostna rings are the host rings of the bwrap. + * The corresponding krings must point back to the + * hostna + */ + hna->tx_rings = &na->tx_rings[na->num_tx_rings]; + hna->tx_rings[0].na = hna; + hna->rx_rings = &na->rx_rings[na->num_rx_rings]; + hna->rx_rings[0].na = hna; + } } + /* pass down the pending ring state information */ + for_rx_tx(t) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) + NMR(hwna, t)[i].nr_pending_mode = + NMR(na, t)[i].nr_pending_mode; + } + /* forward the request to the hwna */ error = hwna->nm_register(hwna, onoff); if (error) return error; + /* copy up the current ring state information */ + for_rx_tx(t) { + for (i = 0; i < nma_get_nrings(na, t) + 1; i++) + NMR(na, t)[i].nr_mode = + NMR(hwna, t)[i].nr_mode; + } + /* impersonate a netmap_vp_adapter */ netmap_vp_reg(na, onoff); if (hostna->na_bdg) netmap_vp_reg(&hostna->up, onoff); if (onoff) { u_int i; /* intercept the hwna nm_notify callback on the hw rings */ for (i = 0; i < hwna->num_rx_rings; i++) { hwna->rx_rings[i].save_notify = hwna->rx_rings[i].nm_notify; hwna->rx_rings[i].nm_notify = netmap_bwrap_intr_notify; } i = hwna->num_rx_rings; /* for safety */ /* save the host ring notify unconditionally */ hwna->rx_rings[i].save_notify = hwna->rx_rings[i].nm_notify; if (hostna->na_bdg) { /* also intercept the host ring notify */ hwna->rx_rings[i].nm_notify = netmap_bwrap_intr_notify; } + if (na->active_fds == 0) + na->na_flags |= NAF_NETMAP_ON; } else { u_int i; + + if (na->active_fds == 0) + na->na_flags &= ~NAF_NETMAP_ON; + /* reset all notify callbacks (including host ring) */ for (i = 0; i <= hwna->num_rx_rings; i++) { hwna->rx_rings[i].nm_notify = hwna->rx_rings[i].save_notify; hwna->rx_rings[i].save_notify = NULL; } hwna->na_lut.lut = NULL; hwna->na_lut.objtotal = 0; hwna->na_lut.objsize = 0; } return 0; } /* nm_config callback for bwrap */ static int netmap_bwrap_config(struct netmap_adapter *na, u_int *txr, u_int *txd, u_int *rxr, u_int *rxd) { struct netmap_bwrap_adapter *bna = (struct netmap_bwrap_adapter *)na; struct netmap_adapter *hwna = bna->hwna; /* forward the request */ netmap_update_config(hwna); /* swap the results */ *txr = hwna->num_rx_rings; *txd = hwna->num_rx_desc; *rxr = hwna->num_tx_rings; *rxd = hwna->num_tx_desc; return 0; } /* nm_krings_create callback for bwrap */ static int netmap_bwrap_krings_create(struct netmap_adapter *na) { struct netmap_bwrap_adapter *bna = (struct netmap_bwrap_adapter *)na; struct netmap_adapter *hwna = bna->hwna; - struct netmap_adapter *hostna = &bna->host.up; - int error; + int i, error = 0; + enum txrx t; ND("%s", na->name); /* impersonate a netmap_vp_adapter */ error = netmap_vp_krings_create(na); if (error) return error; /* also create the hwna krings */ error = hwna->nm_krings_create(hwna); if (error) { - netmap_vp_krings_delete(na); - return error; + goto err_del_vp_rings; } - /* the connection between the bwrap krings and the hwna krings - * will be perfomed later, in the
nm_register callback, since - now the kring->ring pointers have not been initialized yet - */ - if (na->na_flags & NAF_HOST_RINGS) { - /* the hostna rings are the host rings of the bwrap. - * The corresponding krings must point back to the - * hostna - */ - hostna->tx_rings = &na->tx_rings[na->num_tx_rings]; - hostna->tx_rings[0].na = hostna; - hostna->rx_rings = &na->rx_rings[na->num_rx_rings]; - hostna->rx_rings[0].na = hostna; + /* get each ring slot number from the corresponding hwna ring */ + for_rx_tx(t) { + enum txrx r = nm_txrx_swap(t); /* swap NR_TX <-> NR_RX */ + for (i = 0; i < nma_get_nrings(hwna, r) + 1; i++) { + NMR(na, t)[i].nkr_num_slots = NMR(hwna, r)[i].nkr_num_slots; + } } return 0; + +err_del_vp_rings: + netmap_vp_krings_delete(na); + + return error; } static void netmap_bwrap_krings_delete(struct netmap_adapter *na) { struct netmap_bwrap_adapter *bna = (struct netmap_bwrap_adapter *)na; struct netmap_adapter *hwna = bna->hwna; ND("%s", na->name); hwna->nm_krings_delete(hwna); netmap_vp_krings_delete(na); } /* notify method for the bridge-->hwna direction */ static int netmap_bwrap_notify(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct netmap_bwrap_adapter *bna = na->na_private; struct netmap_adapter *hwna = bna->hwna; u_int ring_n = kring->ring_id; u_int lim = kring->nkr_num_slots - 1; struct netmap_kring *hw_kring; - int error = 0; + int error; ND("%s: na %s hwna %s", (kring ? kring->name : "NULL!"), (na ? na->name : "NULL!"), (hwna ? hwna->name : "NULL!")); hw_kring = &hwna->tx_rings[ring_n]; - if (nm_kr_tryget(hw_kring)) - return 0; + if (nm_kr_tryget(hw_kring, 0, NULL)) { + return ENXIO; + } - if (!nm_netmap_on(hwna)) - return 0; /* first step: simulate a user wakeup on the rx ring */ netmap_vp_rxsync(kring, flags); ND("%s[%d] PRE rx(c%3d t%3d l%3d) ring(h%3d c%3d t%3d) tx(c%3d ht%3d t%3d)", na->name, ring_n, kring->nr_hwcur, kring->nr_hwtail, kring->nkr_hwlease, ring->head, ring->cur, ring->tail, hw_kring->nr_hwcur, hw_kring->nr_hwtail, hw_kring->rtail); /* second step: the new packets are sent on the tx ring * (which is actually the same ring) */ hw_kring->rhead = hw_kring->rcur = kring->nr_hwtail; error = hw_kring->nm_sync(hw_kring, flags); if (error) - goto out; + goto put_out; /* third step: now we are back to the rx ring */ /* claim ownership on all hw owned bufs */ kring->rhead = kring->rcur = nm_next(hw_kring->nr_hwtail, lim); /* skip past reserved slot */ /* fourth step: the user goes to sleep again, causing another rxsync */ netmap_vp_rxsync(kring, flags); ND("%s[%d] PST rx(c%3d t%3d l%3d) ring(h%3d c%3d t%3d) tx(c%3d ht%3d t%3d)", na->name, ring_n, kring->nr_hwcur, kring->nr_hwtail, kring->nkr_hwlease, ring->head, ring->cur, ring->tail, hw_kring->nr_hwcur, hw_kring->nr_hwtail, hw_kring->rtail); -out: +put_out: nm_kr_put(hw_kring); - return error; + + return error ? error : NM_IRQ_COMPLETED; } /* nm_bdg_ctl callback for the bwrap. * Called on bridge-attach and detach, as an effect of vale-ctl -[ahd]. * On attach, it needs to provide a fake netmap_priv_d structure and * perform a netmap_do_regif() on the bwrap. This will put both the * bwrap and the hwna in netmap mode, with the netmap rings shared * and cross linked. Moreover, it will start intercepting interrupts * directed to hwna.
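 * For reference, a hypothetical userspace sketch of the request that
 * vale-ctl -a sends to reach the attach path below (port names are
 * invented, error handling omitted; fd is open on /dev/netmap):
 *
 *	struct nmreq req;
 *	bzero(&req, sizeof(req));
 *	strncpy(req.nr_name, "vale0:em0", sizeof(req.nr_name));
 *	req.nr_version = NETMAP_API;
 *	req.nr_cmd = NETMAP_BDG_ATTACH;
 *	ioctl(fd, NIOCREGIF, &req);
 *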
*/ static int netmap_bwrap_bdg_ctl(struct netmap_adapter *na, struct nmreq *nmr, int attach) { struct netmap_priv_d *npriv; struct netmap_bwrap_adapter *bna = (struct netmap_bwrap_adapter*)na; int error = 0; if (attach) { if (NETMAP_OWNED_BY_ANY(na)) { return EBUSY; } if (bna->na_kpriv) { /* nothing to do */ return 0; } - npriv = malloc(sizeof(*npriv), M_DEVBUF, M_NOWAIT|M_ZERO); + npriv = netmap_priv_new(); if (npriv == NULL) return ENOMEM; - error = netmap_do_regif(npriv, na, nmr->nr_ringid, nmr->nr_flags); + npriv->np_ifp = na->ifp; /* let the priv destructor release the ref */ + error = netmap_do_regif(npriv, na, 0, NR_REG_NIC_SW); if (error) { - bzero(npriv, sizeof(*npriv)); - free(npriv, M_DEVBUF); + netmap_priv_delete(npriv); return error; } bna->na_kpriv = npriv; na->na_flags |= NAF_BUSY; } else { - int last_instance; - if (na->active_fds == 0) /* not registered */ return EINVAL; - last_instance = netmap_dtor_locked(bna->na_kpriv); - if (!last_instance) { - D("--- error, trying to detach an entry with active mmaps"); - error = EINVAL; - } else { - struct nm_bridge *b = bna->up.na_bdg, - *bh = bna->host.na_bdg; - npriv = bna->na_kpriv; - bna->na_kpriv = NULL; - D("deleting priv"); - - bzero(npriv, sizeof(*npriv)); - free(npriv, M_DEVBUF); - if (b) { - /* XXX the bwrap dtor should take care - * of this (2014-06-16) - */ - netmap_bdg_detach_common(b, bna->up.bdg_port, - (bh ? bna->host.bdg_port : -1)); - } - na->na_flags &= ~NAF_BUSY; - } + netmap_priv_delete(bna->na_kpriv); + bna->na_kpriv = NULL; + na->na_flags &= ~NAF_BUSY; } return error; } /* attach a bridge wrapper to the 'real' device */ int netmap_bwrap_attach(const char *nr_name, struct netmap_adapter *hwna) { struct netmap_bwrap_adapter *bna; struct netmap_adapter *na = NULL; struct netmap_adapter *hostna = NULL; int error = 0; enum txrx t; /* make sure the NIC is not already in use */ if (NETMAP_OWNED_BY_ANY(hwna)) { D("NIC %s busy, cannot attach to bridge", hwna->name); return EBUSY; } bna = malloc(sizeof(*bna), M_DEVBUF, M_NOWAIT | M_ZERO); if (bna == NULL) { return ENOMEM; } na = &bna->up.up; + /* make bwrap ifp point to the real ifp */ + na->ifp = hwna->ifp; na->na_private = bna; strncpy(na->name, nr_name, sizeof(na->name)); /* fill the ring data for the bwrap adapter with rx/tx meanings * swapped. The real cross-linking will be done during register, * when all the krings will have been created. 
*/ for_rx_tx(t) { enum txrx r = nm_txrx_swap(t); /* swap NR_TX <-> NR_RX */ nma_set_nrings(na, t, nma_get_nrings(hwna, r)); nma_set_ndesc(na, t, nma_get_ndesc(hwna, r)); } na->nm_dtor = netmap_bwrap_dtor; - na->nm_register = netmap_bwrap_register; + na->nm_register = netmap_bwrap_reg; // na->nm_txsync = netmap_bwrap_txsync; // na->nm_rxsync = netmap_bwrap_rxsync; na->nm_config = netmap_bwrap_config; na->nm_krings_create = netmap_bwrap_krings_create; na->nm_krings_delete = netmap_bwrap_krings_delete; na->nm_notify = netmap_bwrap_notify; na->nm_bdg_ctl = netmap_bwrap_bdg_ctl; na->pdev = hwna->pdev; - na->nm_mem = netmap_mem_private_new(na->name, - na->num_tx_rings, na->num_tx_desc, - na->num_rx_rings, na->num_rx_desc, - 0, 0, &error); - na->na_flags |= NAF_MEM_OWNER; - if (na->nm_mem == NULL) - goto err_put; + na->nm_mem = hwna->nm_mem; + na->virt_hdr_len = hwna->virt_hdr_len; bna->up.retry = 1; /* XXX maybe this should depend on the hwna */ bna->hwna = hwna; netmap_adapter_get(hwna); hwna->na_private = bna; /* weak reference */ hwna->na_vp = &bna->up; if (hwna->na_flags & NAF_HOST_RINGS) { if (hwna->na_flags & NAF_SW_ONLY) na->na_flags |= NAF_SW_ONLY; na->na_flags |= NAF_HOST_RINGS; hostna = &bna->host.up; snprintf(hostna->name, sizeof(hostna->name), "%s^", nr_name); hostna->ifp = hwna->ifp; for_rx_tx(t) { enum txrx r = nm_txrx_swap(t); nma_set_nrings(hostna, t, 1); nma_set_ndesc(hostna, t, nma_get_ndesc(hwna, r)); } // hostna->nm_txsync = netmap_bwrap_host_txsync; // hostna->nm_rxsync = netmap_bwrap_host_rxsync; hostna->nm_notify = netmap_bwrap_notify; hostna->nm_mem = na->nm_mem; hostna->na_private = bna; hostna->na_vp = &bna->up; na->na_hostvp = hwna->na_hostvp = hostna->na_hostvp = &bna->host; hostna->na_flags = NAF_BUSY; /* prevent NIOCREGIF */ } ND("%s<->%s txr %d txd %d rxr %d rxd %d", na->name, ifp->if_xname, na->num_tx_rings, na->num_tx_desc, na->num_rx_rings, na->num_rx_desc); error = netmap_attach_common(na); if (error) { goto err_free; } - /* make bwrap ifp point to the real ifp - * NOTE: netmap_attach_common() interprets a non-NULL na->ifp - * as a request to make the ifp point to the na. 
Since we - * do not want to change the na already pointed to by hwna->ifp, - * the following assignment has to be delayed until now - */ - na->ifp = hwna->ifp; hwna->na_flags |= NAF_BUSY; - /* make hwna point to the allocator we are actually using, - * so that monitors will be able to find it - */ - bna->save_nmd = hwna->nm_mem; - hwna->nm_mem = na->nm_mem; return 0; err_free: - netmap_mem_delete(na->nm_mem); -err_put: hwna->na_vp = hwna->na_hostvp = NULL; netmap_adapter_put(hwna); free(bna, M_DEVBUF); return error; } struct nm_bridge * netmap_init_bridges2(u_int n) { int i; struct nm_bridge *b; b = malloc(sizeof(struct nm_bridge) * n, M_DEVBUF, M_NOWAIT | M_ZERO); if (b == NULL) return NULL; for (i = 0; i < n; i++) BDG_RWINIT(&b[i]); return b; } void netmap_uninit_bridges2(struct nm_bridge *b, u_int n) { int i; if (b == NULL) return; for (i = 0; i < n; i++) BDG_RWDESTROY(&b[i]); free(b, M_DEVBUF); } int netmap_init_bridges(void) { #ifdef CONFIG_NET_NS return netmap_bns_register(); #else nm_bridges = netmap_init_bridges2(NM_BRIDGES); if (nm_bridges == NULL) return ENOMEM; return 0; #endif } void netmap_uninit_bridges(void) { #ifdef CONFIG_NET_NS netmap_bns_unregister(); #else netmap_uninit_bridges2(nm_bridges, NM_BRIDGES); #endif } #endif /* WITH_VALE */ Index: head/sys/modules/netmap/Makefile =================================================================== --- head/sys/modules/netmap/Makefile (revision 307393) +++ head/sys/modules/netmap/Makefile (revision 307394) @@ -1,21 +1,27 @@ # $FreeBSD$ # # Compile netmap as a module, useful if you want a netmap bridge # or loadable drivers. +.include # FreeBSD 10 and earlier +# .include "${SYSDIR}/conf/kern.opts.mk" + .PATH: ${.CURDIR}/../../dev/netmap .PATH.h: ${.CURDIR}/../../net -CFLAGS += -I${.CURDIR}/../../ +CFLAGS += -I${.CURDIR}/../../ -D INET KMOD = netmap -SRCS = device_if.h bus_if.h opt_netmap.h +SRCS = device_if.h bus_if.h pci_if.h opt_netmap.h SRCS += netmap.c netmap.h netmap_kern.h SRCS += netmap_mem2.c netmap_mem2.h SRCS += netmap_generic.c SRCS += netmap_mbq.c netmap_mbq.h SRCS += netmap_vale.c SRCS += netmap_freebsd.c SRCS += netmap_offloadings.c SRCS += netmap_pipe.c SRCS += netmap_monitor.c +SRCS += netmap_pt.c +SRCS += if_ptnet.c +SRCS += opt_inet.h opt_inet6.h .include Index: head/sys/net/netmap.h =================================================================== --- head/sys/net/netmap.h (revision 307393) +++ head/sys/net/netmap.h (revision 307394) @@ -1,564 +1,671 @@ /* * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``S IS''AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * Definitions of constants and the structures used by the netmap * framework, for the part visible to both kernel and userspace. * Detailed info on netmap is available with "man netmap" or at * * http://info.iet.unipi.it/~luigi/netmap/ * * This API is also used to communicate with the VALE software switch */ #ifndef _NET_NETMAP_H_ #define _NET_NETMAP_H_ #define NETMAP_API 11 /* current API version */ #define NETMAP_MIN_API 11 /* min and max versions accepted */ #define NETMAP_MAX_API 15 /* * Some fields should be cache-aligned to reduce contention. * The alignment is architecture and OS dependent, but rather than * digging into OS headers to find the exact value we use an estimate * that should cover most architectures. */ #define NM_CACHE_ALIGN 128 /* * --- Netmap data structures --- * * The userspace data structures used by netmap are shown below. * They are allocated by the kernel and mmap()ed by userspace threads. * Pointers are implemented as memory offsets or indexes, * so that they can be easily dereferenced in kernel and userspace. KERNEL (opaque, obviously) ==================================================================== | USERSPACE | struct netmap_ring +---->+---------------+ / | head,cur,tail | struct netmap_if (nifp, 1 per fd) / | buf_ofs | +---------------+ / | other fields | | ni_tx_rings | / +===============+ | ni_rx_rings | / | buf_idx, len | slot[0] | | / | flags, ptr | | | / +---------------+ +===============+ / | buf_idx, len | slot[1] | txring_ofs[0] | (rel.to nifp)--' | flags, ptr | | txring_ofs[1] | +---------------+ (tx+1 entries) (num_slots entries) | txring_ofs[t] | | buf_idx, len | slot[n-1] +---------------+ | flags, ptr | | rxring_ofs[0] | +---------------+ | rxring_ofs[1] | (rx+1 entries) | rxring_ofs[r] | +---------------+ * For each "interface" (NIC, host stack, PIPE, VALE switch port) bound to * a file descriptor, the mmap()ed region contains a (logically readonly) * struct netmap_if pointing to struct netmap_ring's. * * There is one netmap_ring per physical NIC ring, plus one tx/rx ring * pair attached to the host stack (this pair is unused for non-NIC ports). * * All physical/host stack ports share the same memory region, * so that zero-copy can be implemented between them. * VALE switch ports instead have separate memory regions. * * The netmap_ring is the userspace-visible replica of the NIC ring. * Each slot has the index of a buffer (MTU-sized and residing in the * mmapped region), its length and some flags. An extra 64-bit pointer * is provided for user-supplied buffers in the tx path. * * In user space, the buffer address is computed as * (char *)ring + buf_ofs + index * NETMAP_BUF_SIZE * * Added in NETMAP_API 11: * * + NIOCREGIF can request the allocation of extra spare buffers from * the same memory pool. The desired number of buffers must be in * nr_arg3. The ioctl may return fewer buffers, depending on memory * availability. 
nr_arg3 will return the actual value, and, once * mapped, nifp->ni_bufs_head will be the index of the first buffer. * * The buffers are linked to each other using the first uint32_t * as the index. On close, ni_bufs_head must point to the list of * buffers to be released. * * + NIOCREGIF can request space for extra rings (and buffers) * allocated in the same memory space. The number of extra rings * is in nr_arg1, and is advisory. This is a no-op on NICs where * the size of the memory space is fixed. * * + NIOCREGIF can attach to PIPE rings sharing the same memory * space with a parent device. The ifname indicates the parent device, * which must already exist. Flags in nr_flags indicate if we want to * bind the master or slave side, the index (from nr_ringid) * is just a cookie and does not need to be sequential. * * + NIOCREGIF can also attach to 'monitor' rings that replicate * the content of specific rings, also from the same memory space. * * Extra flags in nr_flags support the above functions. * Application libraries may use the following naming scheme: * netmap:foo all NIC ring pairs * netmap:foo^ only host ring pair * netmap:foo+ all NIC ring + host ring pairs * netmap:foo-k the k-th NIC ring pair * netmap:foo{k PIPE ring pair k, master side * netmap:foo}k PIPE ring pair k, slave side + * + * Some notes about host rings: + * + * + The RX host ring is used to store those packets that the host network + * stack is trying to transmit through a NIC queue, but only if that queue + * is currently in netmap mode. Netmap will not intercept host stack mbufs + * designated to NIC queues that are not in netmap mode. As a consequence, + * registering a netmap port with netmap:foo^ is not enough to intercept + * mbufs in the RX host ring; the netmap port should be registered with + * netmap:foo*, or another registration should be done to open at least a + * NIC TX queue in netmap mode. + * + * + Netmap is not currently able to deal with intercepted transmit mbufs which + * require offloadings like TSO, UFO, checksumming offloadings, etc. It is + * the responsibility of the user to disable those offloadings (e.g. using + * ifconfig on FreeBSD or ethtool -K on Linux) for an interface that is being + * used in netmap mode. If the offloadings are not disabled, GSO and/or + * unchecksummed packets may be dropped immediately or end up in the host RX + * ring, and will be dropped as soon as the packet reaches another netmap + * adapter. */ /* * struct netmap_slot is a buffer descriptor */ struct netmap_slot { uint32_t buf_idx; /* buffer index */ uint16_t len; /* length for this slot */ uint16_t flags; /* buf changed, etc. */ uint64_t ptr; /* pointer for indirect buffers */ }; /* * The following flags control how the slot is used */ #define NS_BUF_CHANGED 0x0001 /* buf_idx changed */ /* * must be set whenever buf_idx is changed (as it might be * necessary to recompute the physical address and mapping) * * It is also set by the kernel whenever the buf_idx is * changed internally (e.g., by pipes). Applications may * use this information to know when they can reuse the * contents of previously prepared buffers. */ #define NS_REPORT 0x0002 /* ask the hardware to report results */ /* * Request notification when slot is used by the hardware. * Normally transmit completions are handled lazily and * may be unreported. This flag lets us know when a slot * has been sent (e.g. to terminate the sender). */ #define NS_FORWARD 0x0004 /* pass packet 'forward' */ /* * (Only for physical ports, rx rings with NR_FORWARD set).
* Slot released to the kernel (i.e. before ring->head) with * this flag set are passed to the peer ring (host/NIC), * thus restoring the host-NIC connection for these slots. * This supports efficient traffic monitoring or firewalling. */ #define NS_NO_LEARN 0x0008 /* disable bridge learning */ /* * On a VALE switch, do not 'learn' the source port for * this buffer. */ #define NS_INDIRECT 0x0010 /* userspace buffer */ /* * (VALE tx rings only) data is in a userspace buffer, * whose address is in the 'ptr' field in the slot. */ #define NS_MOREFRAG 0x0020 /* packet has more fragments */ /* * (VALE ports only) * Set on all but the last slot of a multi-segment packet. * The 'len' field refers to the individual fragment. */ #define NS_PORT_SHIFT 8 #define NS_PORT_MASK (0xff << NS_PORT_SHIFT) /* * The high 8 bits of the flag, if not zero, indicate the * destination port for the VALE switch, overriding * the lookup table. */ #define NS_RFRAGS(_slot) ( ((_slot)->flags >> 8) & 0xff) /* * (VALE rx rings only) the high 8 bits * are the number of fragments. */ /* * struct netmap_ring * * Netmap representation of a TX or RX ring (also known as "queue"). * This is a queue implemented as a fixed-size circular array. * At the software level the important fields are: head, cur, tail. * * In TX rings: * * head first slot available for transmission. * cur wakeup point. select() and poll() will unblock * when 'tail' moves past 'cur' * tail (readonly) first slot reserved to the kernel * * [head .. tail-1] can be used for new packets to send; * 'head' and 'cur' must be incremented as slots are filled * with new packets to be sent; * 'cur' can be moved further ahead if we need more space * for new transmissions. XXX todo (2014-03-12) * * In RX rings: * * head first valid received packet * cur wakeup point. select() and poll() will unblock * when 'tail' moves past 'cur' * tail (readonly) first slot reserved to the kernel * * [head .. tail-1] contain received packets; * 'head' and 'cur' must be incremented as slots are consumed * and can be returned to the kernel; * 'cur' can be moved further ahead if we want to wait for * new packets without returning the previous ones. * * DATA OWNERSHIP/LOCKING: * The netmap_ring, and all slots and buffers in the range * [head .. tail-1] are owned by the user program; * the kernel only accesses them during a netmap system call * and in the user thread context. * * Other slots and buffers are reserved for use by the kernel */ struct netmap_ring { /* * buf_ofs is meant to be used through macros. * It contains the offset of the buffer region from this * descriptor. */ const int64_t buf_ofs; const uint32_t num_slots; /* number of slots in the ring. */ const uint32_t nr_buf_size; const uint16_t ringid; const uint16_t dir; /* 0: tx, 1: rx */ uint32_t head; /* (u) first user slot */ uint32_t cur; /* (u) wakeup point */ uint32_t tail; /* (k) first kernel slot */ uint32_t flags; struct timeval ts; /* (k) time of last *sync() */ /* opaque room for a mutex or similar object */ - uint8_t sem[128] __attribute__((__aligned__(NM_CACHE_ALIGN))); +#if !defined(_WIN32) || defined(__CYGWIN__) + uint8_t __attribute__((__aligned__(NM_CACHE_ALIGN))) sem[128]; +#else + uint8_t __declspec(align(NM_CACHE_ALIGN)) sem[128]; +#endif /* the slots follow. This struct has variable size */ struct netmap_slot slot[0]; /* array of slots. */ }; /* * RING FLAGS */ #define NR_TIMESTAMP 0x0002 /* set timestamp on *sync() */ /* * updates the 'ts' field on each netmap syscall. 
This saves * a separate gettimeofday(), and is not much worse than * software timestamps generated in the interrupt handler. */ #define NR_FORWARD 0x0004 /* enable NS_FORWARD for ring */ /* * Enables the NS_FORWARD slot flag for the ring. */ /* * Netmap representation of an interface and its queue(s). * This is initialized by the kernel when binding a file * descriptor to a port, and should be considered as readonly * by user programs. The kernel never uses it. * * There is one netmap_if for each file descriptor on which we want * to select/poll. * select/poll operates on one or all pairs depending on the value of * nmr_queueid passed on the ioctl. */ struct netmap_if { char ni_name[IFNAMSIZ]; /* name of the interface. */ const uint32_t ni_version; /* API version, currently unused */ const uint32_t ni_flags; /* properties */ #define NI_PRIV_MEM 0x1 /* private memory region */ /* * The number of packet rings available in netmap mode. * Physical NICs can have different numbers of tx and rx rings. * Physical NICs also have a 'host' ring pair. * Additionally, clients can request additional ring pairs to * be used for internal communication. */ const uint32_t ni_tx_rings; /* number of HW tx rings */ const uint32_t ni_rx_rings; /* number of HW rx rings */ uint32_t ni_bufs_head; /* head index for extra bufs */ uint32_t ni_spare1[5]; /* * The following array contains the offset of each netmap ring * from this structure, in the following order: * NIC tx rings (ni_tx_rings); host tx ring (1); extra tx rings; * NIC rx rings (ni_rx_rings); host rx ring (1); extra rx rings. * * The area is filled up by the kernel on NIOCREGIF, * and then only read by userspace code. */ const ssize_t ring_ofs[0]; }; #ifndef NIOCREGIF /* * ioctl names and related fields * * NIOCTXSYNC, NIOCRXSYNC synchronize tx or rx queues, * whose identity is set in NIOCREGIF through nr_ringid. * These are non blocking and take no argument. * * NIOCGINFO takes a struct ifreq, the interface name is the input, * the outputs are number of queues and number of descriptors * for each queue (useful to set number of threads etc.). * The info returned is only advisory and may change before * the interface is bound to a file descriptor. * * NIOCREGIF takes an interface name within a struct nmreq, * and activates netmap mode on the interface (if possible). * * The argument to NIOCGINFO/NIOCREGIF overlays struct ifreq so we * can pass it down to other NIC-related ioctls. * * The actual argument (struct nmreq) has a number of options to request * different functions. * The following are used in NIOCREGIF when nr_cmd == 0: * * nr_name (in) * The name of the port (em0, valeXXX:YYY, etc.) * limited to IFNAMSIZ for backward compatibility. * * nr_version (in/out) * Must match NETMAP_API as used in the kernel, error otherwise. * Always returns the desired value on output. * * nr_tx_slots, nr_rx_slots, nr_tx_rings, nr_rx_rings (in/out) * On input, non-zero values may be used to reconfigure the port * according to the requested values, but this is not guaranteed. * On output the actual values in use are reported. * * nr_ringid (in) * Indicates how rings should be bound to the file descriptors. * If nr_flags != 0, then the low bits (in NETMAP_RING_MASK) * are used to indicate the ring number, and nr_flags specifies * the actual rings to bind. NETMAP_NO_TX_POLL is unaffected.
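 * For illustration, a minimal sketch (hypothetical port name, error
 * handling omitted) that binds only ring pair 2 of em0 through the
 * nr_flags interface:
 *
 *	struct nmreq req;
 *	int fd = open("/dev/netmap", O_RDWR);
 *	bzero(&req, sizeof(req));
 *	strncpy(req.nr_name, "em0", sizeof(req.nr_name));
 *	req.nr_version = NETMAP_API;
 *	req.nr_flags = NR_REG_ONE_NIC;
 *	req.nr_ringid = 2 | NETMAP_NO_TX_POLL;
 *	ioctl(fd, NIOCREGIF, &req);
 *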
* * NOTE: THE FOLLOWING (nr_flags == 0) IS DEPRECATED: * If nr_flags == 0, NETMAP_HW_RING and NETMAP_SW_RING control * the binding as follows: * 0 (default) binds all physical rings * NETMAP_HW_RING | ring number binds a single ring pair * NETMAP_SW_RING binds only the host tx/rx rings * * NETMAP_NO_TX_POLL can be OR-ed to make select()/poll() push * packets on tx rings only if POLLOUT is set. * The default is to push any pending packet. * * NETMAP_DO_RX_POLL can be OR-ed to make select()/poll() release * packets on rx rings also when POLLIN is NOT set. * The default is to touch the rx ring only with POLLIN. * Note that this is the opposite of TX because it * reflects the common usage. * * NOTE: NETMAP_PRIV_MEM IS DEPRECATED, use nr_arg2 instead. * NETMAP_PRIV_MEM is set on return for ports that do not use * the global memory allocator. * This information is not significant and applications * should look at the region id in nr_arg2 * * nr_flags is the recommended mode to indicate which rings should * be bound to a file descriptor. Values are NR_REG_* * * nr_arg1 (in) The number of extra rings to be reserved. * Especially when allocating a VALE port the system only * allocates the amount of memory needed for the port. * If more shared memory rings are desired (e.g. for pipes), * the first invocation for the same basename/allocator * should specify a suitable number. Memory cannot be * extended after the first allocation without closing * all ports on the same region. * * nr_arg2 (in/out) The identity of the memory region used. * On input, 0 means the system decides autonomously, * other values may try to select a specific region. * On return the actual value is reported. * Region '1' is the global allocator, normally shared * by all interfaces. Other values are private regions. * If two ports use the same region, zero-copy is possible. * * nr_arg3 (in/out) number of extra buffers to be allocated. * * * * nr_cmd (in) if non-zero indicates a special command: * NETMAP_BDG_ATTACH and nr_name = vale*:ifname * attaches the NIC to the switch; nr_ringid specifies * which rings to use. Used by vale-ctl -a ... * nr_arg1 = NETMAP_BDG_HOST also attaches the host port * as in vale-ctl -h ... * * NETMAP_BDG_DETACH and nr_name = vale*:ifname * disconnects a previously attached NIC. * Used by vale-ctl -d ... * * NETMAP_BDG_LIST * list the configuration of VALE switches. * * NETMAP_BDG_VNET_HDR * Set the virtio-net header length used by the client * of a VALE switch port. * * NETMAP_BDG_NEWIF * create a persistent VALE port with name nr_name. * Used by vale-ctl -n ... * * NETMAP_BDG_DELIF * delete a persistent VALE port. Used by vale-ctl -d ...
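 * For instance, a hypothetical sketch of what vale-ctl -n vi0 sends
 * to create a persistent port named vi0 (error handling omitted,
 * fd open on /dev/netmap):
 *
 *	struct nmreq req;
 *	bzero(&req, sizeof(req));
 *	strncpy(req.nr_name, "vi0", sizeof(req.nr_name));
 *	req.nr_version = NETMAP_API;
 *	req.nr_cmd = NETMAP_BDG_NEWIF;
 *	ioctl(fd, NIOCREGIF, &req);
 *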
 *
 * nr_arg1, nr_arg2, nr_arg3  (in/out)		command specific
 *
 */

/*
 * struct nmreq overlays a struct ifreq (just the name)
 */
struct nmreq {
	char		nr_name[IFNAMSIZ];
	uint32_t	nr_version;	/* API version */
	uint32_t	nr_offset;	/* nifp offset in the shared region */
	uint32_t	nr_memsize;	/* size of the shared region */
	uint32_t	nr_tx_slots;	/* slots in tx rings */
	uint32_t	nr_rx_slots;	/* slots in rx rings */
	uint16_t	nr_tx_rings;	/* number of tx rings */
	uint16_t	nr_rx_rings;	/* number of rx rings */

	uint16_t	nr_ringid;	/* ring(s) we care about */
#define NETMAP_HW_RING		0x4000	/* single NIC ring pair */
#define NETMAP_SW_RING		0x2000	/* only host ring pair */
#define NETMAP_RING_MASK	0x0fff	/* the ring number */
#define NETMAP_NO_TX_POLL	0x1000	/* no automatic txsync on poll */
#define NETMAP_DO_RX_POLL	0x8000	/* DO automatic rxsync on poll */

	uint16_t	nr_cmd;
#define NETMAP_BDG_ATTACH	1	/* attach the NIC */
#define NETMAP_BDG_DETACH	2	/* detach the NIC */
#define NETMAP_BDG_REGOPS	3	/* register bridge callbacks */
#define NETMAP_BDG_LIST		4	/* get bridge's info */
#define NETMAP_BDG_VNET_HDR	5	/* set the port virtio-net-hdr length */
#define NETMAP_BDG_OFFSET	NETMAP_BDG_VNET_HDR	/* deprecated alias */
#define NETMAP_BDG_NEWIF	6	/* create a virtual port */
#define NETMAP_BDG_DELIF	7	/* destroy a virtual port */
+#define NETMAP_PT_HOST_CREATE	8	/* create ptnetmap kthreads */
+#define NETMAP_PT_HOST_DELETE	9	/* delete ptnetmap kthreads */
+#define NETMAP_BDG_POLLING_ON	10	/* start polling kthread */
+#define NETMAP_BDG_POLLING_OFF	11	/* stop polling kthread */
+#define NETMAP_VNET_HDR_GET	12	/* get the port virtio-net-hdr length */
	uint16_t	nr_arg1;	/* reserve extra rings in NIOCREGIF */
#define NETMAP_BDG_HOST		1	/* attach the host stack on ATTACH */

	uint16_t	nr_arg2;
	uint32_t	nr_arg3;	/* req. extra buffers in NIOCREGIF */
	uint32_t	nr_flags;	/* various modes, extends nr_ringid */
	uint32_t	spare2[1];
};

#define NR_REG_MASK		0xf /* values for nr_flags */
enum {	NR_REG_DEFAULT	= 0,	/* backward compat, should not be used. */
	NR_REG_ALL_NIC	= 1,
	NR_REG_SW	= 2,
	NR_REG_NIC_SW	= 3,
	NR_REG_ONE_NIC	= 4,
	NR_REG_PIPE_MASTER = 5,
	NR_REG_PIPE_SLAVE = 6,
};
/* monitors use the NR_REG_* value to select the rings to monitor */
#define NR_MONITOR_TX	0x100
#define NR_MONITOR_RX	0x200
#define NR_ZCOPY_MON	0x400
/* request exclusive access to the selected rings */
#define NR_EXCLUSIVE	0x800
+/* request ptnetmap host support */
+#define NR_PASSTHROUGH_HOST	NR_PTNETMAP_HOST /* deprecated */
+#define NR_PTNETMAP_HOST	0x1000
+#define NR_RX_RINGS_ONLY	0x2000
+#define NR_TX_RINGS_ONLY	0x4000
+/* Applications set this flag if they are able to deal with virtio-net headers,
+ * that is send/receive frames that start with a virtio-net header.
+ * If not set, NIOCREGIF will fail with netmap ports that require applications
+ * to use those headers. If the flag is set, the application can use the
+ * NETMAP_VNET_HDR_GET command to figure out the header length. */
+#define NR_ACCEPT_VNET_HDR	0x8000
+#define NM_BDG_NAME		"vale"	/* prefix for bridge port name */

/*
+ * Windows does not have _IOWR(). _IO(), _IOW() and _IOR() are defined
+ * in ws2def.h, but it is not clear whether they are in the form we need.
+ * XXX so we redefine them
+ * in a convenient way to use for DeviceIoControl signatures
+ */
+#ifdef _WIN32
+#undef _IO	// ws2def.h
+#define _WIN_NM_IOCTL_TYPE 40000
+#define _IO(_c, _n)	CTL_CODE(_WIN_NM_IOCTL_TYPE, ((_n) + 0x800) , \
+		METHOD_BUFFERED, FILE_ANY_ACCESS )
+#define _IO_direct(_c, _n)	CTL_CODE(_WIN_NM_IOCTL_TYPE, ((_n) + 0x800) , \
+		METHOD_OUT_DIRECT, FILE_ANY_ACCESS )
+
+#define _IOWR(_c, _n, _s)	_IO(_c, _n)
+
+/* We have some internal sysctls in addition to the externally visible ones */
+#define NETMAP_MMAP _IO_direct('i', 160)	// note METHOD_OUT_DIRECT
+#define NETMAP_POLL _IO('i', 162)
+
+/* and also two setsockopt calls for sysctl emulation */
+#define NETMAP_SETSOCKOPT _IO('i', 140)
+#define NETMAP_GETSOCKOPT _IO('i', 141)
+
+
+// These link names are for the Netmap Core Driver
+#define NETMAP_NT_DEVICE_NAME	L"\\Device\\NETMAP"
+#define NETMAP_DOS_DEVICE_NAME	L"\\DosDevices\\netmap"
+
+// Definition of a structure used to pass a virtual address within an IOCTL
+typedef struct _MEMORY_ENTRY {
+	PVOID pUsermodeVirtualAddress;
+} MEMORY_ENTRY, *PMEMORY_ENTRY;
+
+typedef struct _POLL_REQUEST_DATA {
+	int events;
+	int timeout;
+	int revents;
+} POLL_REQUEST_DATA;
+
+#endif /* _WIN32 */
+
+/*
 * FreeBSD uses the size value embedded in the _IOWR to determine
 * how much to copy in/out. So we need it to match the actual
 * data structure we pass. We put some spares in the structure
 * to ease compatibility with other versions
 */
#define NIOCGINFO	_IOWR('i', 145, struct nmreq) /* return IF info */
#define NIOCREGIF	_IOWR('i', 146, struct nmreq) /* interface register */
#define NIOCTXSYNC	_IO('i', 148) /* sync tx queues */
#define NIOCRXSYNC	_IO('i', 149) /* sync rx queues */
#define NIOCCONFIG	_IOWR('i',150, struct nm_ifreq) /* for ext. modules */
#endif /* !NIOCREGIF */


/*
 * Helper functions for kernel and userspace
 */

/*
 * check if space is available in the ring.
 */
static inline int
nm_ring_empty(struct netmap_ring *ring)
{
	return (ring->cur == ring->tail);
}

/*
 * Opaque structure that is passed to an external kernel
 * module via ioctl(fd, NIOCCONFIG, req) for a user-owned
 * bridge port (at this point ephemeral VALE interface).
 */
#define NM_IFRDATA_LEN 256
struct nm_ifreq {
	char nifr_name[IFNAMSIZ];
	char data[NM_IFRDATA_LEN];
};

+/*
+ * netmap kernel thread configuration
+ */
+/* bhyve/vmm.ko MSIX parameters for IOCTL */
+struct ptn_vmm_ioctl_msix {
+	uint64_t	msg;
+	uint64_t	addr;
+};
+
+/* IOCTL parameters */
+struct nm_kth_ioctl {
+	u_long com;
+	/* TODO: use union */
+	union {
+		struct ptn_vmm_ioctl_msix msix;
+	} data;
+};
+
+/* Configuration of a ptnetmap ring */
+struct ptnet_ring_cfg {
+	uint64_t ioeventfd;	/* eventfd in linux, tsleep() parameter in FreeBSD */
+	uint64_t irqfd;		/* eventfd in linux, ioctl fd in FreeBSD */
+	struct nm_kth_ioctl ioctl;	/* ioctl parameter to send irq (only used in bhyve/FreeBSD) */
+};
#endif /* _NET_NETMAP_H_ */

Index: head/sys/net/netmap_user.h
===================================================================
--- head/sys/net/netmap_user.h	(revision 307393)
+++ head/sys/net/netmap_user.h	(revision 307394)
@@ -1,744 +1,1084 @@
/*
- * Copyright (C) 2011-2014 Universita` di Pisa. All rights reserved.
+ * Copyright (C) 2011-2016 Universita` di Pisa
+ * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *   1. Redistributions of source code must retain the above copyright
 *      notice, this list of conditions and the following disclaimer.
 *   2. Redistributions in binary form must reproduce the above copyright
 *      notice, this list of conditions and the following disclaimer in the
 *      documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

/*
 * $FreeBSD$
 *
 * Functions and macros to manipulate netmap structures and packets
 * in userspace. See netmap(4) for more information.
 *
 * The address of the struct netmap_if, say nifp, is computed from the
 * value returned from ioctl(.., NIOCREGIF, ...) and the mmap region:
 *	ioctl(fd, NIOCREGIF, &req);
 *	mem = mmap(0, ... );
 *	nifp = NETMAP_IF(mem, req.nr_offset);
 * (so simple, we could just do it manually)
 *
 * From there:
 *	struct netmap_ring *NETMAP_TXRING(nifp, index)
 *	struct netmap_ring *NETMAP_RXRING(nifp, index)
 *		we can access ring->cur, ring->head, ring->tail, etc.
 *
 *	ring->slot[i] gives us the i-th slot (we can access
 *		directly len, flags, buf_idx)
 *
 *	char *buf = NETMAP_BUF(ring, x) returns a pointer to
 *		the buffer numbered x
 *
 * All ring indexes (head, cur, tail) should always move forward.
 * To compute the next index in a circular ring you can use
 *	i = nm_ring_next(ring, i);
 *
 * To ease porting apps from pcap to netmap we supply a few functions
 * that can be called to open, close, read and write on netmap in a way
 * similar to libpcap. Note that the read/write functions depend on
 * an ioctl()/select()/poll() being issued to refill rings or push
 * packets out.
 *
 * In order to use these, include #define NETMAP_WITH_LIBS
 * in the source file that invokes these functions.
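 *
 * As a concrete illustration (an editor's sketch, not part of the
 * original header; 'pfd' is a struct pollfd on the netmap fd and
 * consume() is a hypothetical user function), a minimal receive
 * loop on ring 0 using the macros above would be:
 *
 *	struct netmap_ring *rxring = NETMAP_RXRING(nifp, 0);
 *
 *	for (;;) {
 *		poll(&pfd, 1, -1);
 *		while (!nm_ring_empty(rxring)) {
 *			uint32_t i = rxring->cur;
 *			char *buf = NETMAP_BUF(rxring, rxring->slot[i].buf_idx);
 *			consume(buf, rxring->slot[i].len);
 *			rxring->head = rxring->cur = nm_ring_next(rxring, i);
 *		}
 *	}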
 */

#ifndef _NET_NETMAP_USER_H_
#define _NET_NETMAP_USER_H_

+#define NETMAP_DEVICE_NAME "/dev/netmap"
+
+#ifdef __CYGWIN__
+/*
+ * we can compile userspace apps with either cygwin or msvc,
+ * and we use _WIN32 to identify windows specific code
+ */
+#ifndef _WIN32
+#define _WIN32
+#endif	/* _WIN32 */
+
+#endif	/* __CYGWIN__ */
+
+#ifdef _WIN32
+#undef NETMAP_DEVICE_NAME
+#define NETMAP_DEVICE_NAME "/proc/sys/DosDevices/Global/netmap"
+#include <windows.h>
+#include <WinDef.h>
+#include <sys/cygwin.h>
+#endif /* _WIN32 */
+
#include <stdint.h>
#include <sys/socket.h>		/* apple needs sockaddr */
#include <net/if.h>		/* IFNAMSIZ */
+#include <ctype.h>

#ifndef likely
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)
#endif /* likely and unlikely */

#include <net/netmap.h>

/* helper macro */
#define _NETMAP_OFFSET(type, ptr, offset) \
	((type)(void *)((char *)(ptr) + (offset)))

#define NETMAP_IF(_base, _ofs)	_NETMAP_OFFSET(struct netmap_if *, _base, _ofs)

#define NETMAP_TXRING(nifp, index) _NETMAP_OFFSET(struct netmap_ring *, \
	nifp, (nifp)->ring_ofs[index] )

#define NETMAP_RXRING(nifp, index) _NETMAP_OFFSET(struct netmap_ring *,	\
	nifp, (nifp)->ring_ofs[index + (nifp)->ni_tx_rings + 1] )

#define NETMAP_BUF(ring, index)				\
	((char *)(ring) + (ring)->buf_ofs + ((index)*(ring)->nr_buf_size))

#define NETMAP_BUF_IDX(ring, buf)			\
	( ((char *)(buf) - ((char *)(ring) + (ring)->buf_ofs) ) / \
		(ring)->nr_buf_size )


static inline uint32_t
nm_ring_next(struct netmap_ring *r, uint32_t i)
{
	return ( unlikely(i + 1 == r->num_slots) ? 0 : i + 1);
}


/*
 * Return 1 if we have pending transmissions in the tx ring.
 * When everything is complete ring->head = ring->tail + 1 (modulo ring size)
 */
static inline int
nm_tx_pending(struct netmap_ring *r)
{
	return nm_ring_next(r, r->tail) != r->head;
}


static inline uint32_t
nm_ring_space(struct netmap_ring *ring)
{
	int ret = ring->tail - ring->cur;
	if (ret < 0)
		ret += ring->num_slots;
	return ret;
}


#ifdef NETMAP_WITH_LIBS
/*
 * Support for simple I/O libraries.
 * Include other system headers required for compiling this.
 */

#ifndef HAVE_NETMAP_WITH_LIBS
#define HAVE_NETMAP_WITH_LIBS

#include <stdio.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <string.h>	/* memset */
#include <sys/ioctl.h>
#include <sys/errno.h>	/* EINVAL */
#include <fcntl.h>	/* O_RDWR */
#include <unistd.h>	/* close() */
#include <signal.h>
#include <stdlib.h>

#ifndef ND /* debug macros */
/* debug support */
#define ND(_fmt, ...) do {} while(0)
#define D(_fmt, ...)						\
	do {							\
		struct timeval _t0;				\
		gettimeofday(&_t0, NULL);			\
		fprintf(stderr, "%03d.%06d %s [%d] " _fmt "\n",	\
		    (int)(_t0.tv_sec % 1000), (int)_t0.tv_usec,	\
		    __FUNCTION__, __LINE__, ##__VA_ARGS__);	\
	} while (0)

/* Rate limited version of "D", lps indicates how many per second */
#define RD(lps, format, ...)					\
	do {							\
		static int __t0, __cnt;				\
		struct timeval __xxts;				\
		gettimeofday(&__xxts, NULL);			\
		if (__t0 != __xxts.tv_sec) {			\
			__t0 = __xxts.tv_sec;			\
			__cnt = 0;				\
		}						\
		if (__cnt++ < lps) {				\
			D(format, ##__VA_ARGS__);		\
		}						\
	} while (0)
#endif

-struct nm_pkthdr {	/* same as pcap_pkthdr */
+struct nm_pkthdr {	/* first part is the same as pcap_pkthdr */
	struct timeval	ts;
	uint32_t	caplen;
	uint32_t	len;
+
+	uint64_t flags;	/* NM_MORE_PKTS etc */
+#define NM_MORE_PKTS	1
+	struct nm_desc *d;
+	struct netmap_slot *slot;
+	uint8_t *buf;
};

struct nm_stat {	/* same as pcap_stat	*/
	u_int	ps_recv;
	u_int	ps_drop;
	u_int	ps_ifdrop;
-#ifdef WIN32
+#ifdef WIN32 /* XXX or _WIN32 ? */
	u_int	bs_capt;
#endif /* WIN32 */
};

#define NM_ERRBUF_SIZE	512

struct nm_desc {
	struct nm_desc *self; /* point to self if netmap. */
	int fd;
	void *mem;
	uint32_t memsize;
	int done_mmap;	/* set if mem is the result of mmap */
	struct netmap_if * const nifp;
	uint16_t first_tx_ring, last_tx_ring, cur_tx_ring;
	uint16_t first_rx_ring, last_rx_ring, cur_rx_ring;
	struct nmreq req;	/* also contains the nr_name = ifname */
	struct nm_pkthdr hdr;

	/*
	 * The memory contains netmap_if, rings and then buffers.
	 * Given a pointer (e.g. to nm_inject) we can compare with
	 * mem/buf_start/buf_end to tell if it is a buffer or
	 * some other descriptor in our region.
	 * We also store a pointer to some ring as it helps in the
	 * translation from buffer indexes to addresses.
	 */
	struct netmap_ring * const some_ring;
	void * const buf_start;
	void * const buf_end;
	/* parameters from pcap_open_live */
	int snaplen;
	int promisc;
	int to_ms;
	char *errbuf;

	/* save flags so we can restore them on close */
	uint32_t if_flags;
	uint32_t if_reqcap;
	uint32_t if_curcap;

	struct nm_stat st;
	char msg[NM_ERRBUF_SIZE];
};

/*
 * when the descriptor is open correctly, d->self == d
 * Eventually we should also use some magic number.
 */
#define P2NMD(p)		((struct nm_desc *)(p))
#define IS_NETMAP_DESC(d)	((d) && P2NMD(d)->self == P2NMD(d))
#define NETMAP_FD(d)		(P2NMD(d)->fd)


/*
 * this is a slightly optimized copy routine which rounds
 * to multiple of 64 bytes and is often faster than dealing
 * with other odd sizes. We assume there is enough room
 * in the source and destination buffers.
 *
 * XXX only for multiples of 64 bytes, non overlapped.
 */
static inline void
nm_pkt_copy(const void *_src, void *_dst, int l)
{
	const uint64_t *src = (const uint64_t *)_src;
	uint64_t *dst = (uint64_t *)_dst;

	if (unlikely(l >= 1024)) {
		memcpy(dst, src, l);
		return;
	}
	for (; likely(l > 0); l-=64) {
		*dst++ = *src++;
		*dst++ = *src++;
		*dst++ = *src++;
		*dst++ = *src++;
		*dst++ = *src++;
		*dst++ = *src++;
		*dst++ = *src++;
		*dst++ = *src++;
	}
}


/*
 * The callback, invoked on each received packet. Same as libpcap
 */
typedef void (*nm_cb_t)(u_char *, const struct nm_pkthdr *, const u_char *d);

/*
 *--- the pcap-like API ---
 *
 * nm_open() opens a file descriptor, binds to a port and maps memory.
 *
 * ifname	(netmap:foo or vale:foo) is the port name
 *		a suffix can indicate the following:
 *		^		bind the host (sw) ring pair
 *		*		bind host and NIC ring pairs (transparent)
 *		-NN		bind individual NIC ring pair
 *		{NN		bind master side of pipe NN
 *		}NN		bind slave side of pipe NN
- *		a suffix starting with + and the following flags,
+ *		a suffix starting with / and the following flags,
 *		in any order:
 *		x		exclusive access
 *		z		zero copy monitor
 *		t		monitor tx side
 *		r		monitor rx side
+ *		R		bind only RX ring(s)
+ *		T		bind only TX ring(s)
 *
 * req		provides the initial values of nmreq before parsing ifname.
 *		Remember that the ifname parsing will override the ring
 *		number in nr_ringid, and part of nr_flags;
 * flags	special functions, normally 0
 *		indicates which fields of *arg are significant
 * arg		special functions, normally NULL
 *		if passed a netmap_desc with mem != NULL,
 *		use that memory instead of mmap.
 */

static struct nm_desc *nm_open(const char *ifname, const struct nmreq *req,
	uint64_t flags, const struct nm_desc *arg);

/*
 * nm_open can import some fields from the parent descriptor.
 * These flags control which ones.
 * Also in flags you can specify NETMAP_NO_TX_POLL and NETMAP_DO_RX_POLL,
 * which set the initial value for these flags.
 * Note that the 16 low bits of the flags are reserved for data
 * that may go into the nmreq.
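 *
 * Examples of port names accepted by nm_open() (an editor's sketch,
 * not part of the original header; interface names are hypothetical):
 *
 *	d = nm_open("netmap:em0", NULL, 0, NULL);	// all NIC rings
 *	d = nm_open("netmap:em0-2", NULL, 0, NULL);	// only ring pair 2
 *	d = nm_open("netmap:em0^", NULL, 0, NULL);	// host stack rings
 *	d = nm_open("netmap:em0/R", NULL, 0, NULL);	// only RX rings
 *
 * A second descriptor can also reuse the memory mapped by a parent
 * one, as the bridge utility later in this patch does:
 *
 *	pb = nm_open("netmap:em1", NULL, NM_OPEN_NO_MMAP, pa);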
 */
enum {
	NM_OPEN_NO_MMAP =	0x040000, /* reuse mmap from parent */
	NM_OPEN_IFNAME =	0x080000, /* nr_name, nr_ringid, nr_flags */
	NM_OPEN_ARG1 =		0x100000,
	NM_OPEN_ARG2 =		0x200000,
	NM_OPEN_ARG3 =		0x400000,
	NM_OPEN_RING_CFG =	0x800000, /* tx|rx rings|slots */
};


/*
 * nm_close() closes and restores the port to its previous state
 */

static int nm_close(struct nm_desc *);

/*
+ * nm_mmap() does the mmap, or inherits from the parent if the nr_arg2
+ * (memory block) matches.
+ */
+
+static int nm_mmap(struct nm_desc *, const struct nm_desc *);
+
+/*
 * nm_inject() is the same as pcap_inject()
 * nm_dispatch() is the same as pcap_dispatch()
 * nm_nextpkt() is the same as pcap_next()
 */

static int nm_inject(struct nm_desc *, const void *, size_t);
static int nm_dispatch(struct nm_desc *, int, nm_cb_t, u_char *);
static u_char *nm_nextpkt(struct nm_desc *, struct nm_pkthdr *);

+#ifdef _WIN32
+intptr_t _get_osfhandle(int); /* defined in io.h in windows */
+
+/*
+ * In windows we do not yet have native poll support, so we keep track
+ * of file descriptors associated with netmap ports to emulate poll on
+ * them and fall back on regular poll on other file descriptors.
+ */
+struct win_netmap_fd_list {
+	struct win_netmap_fd_list *next;
+	int win_netmap_fd;
+	HANDLE win_netmap_handle;
+};
+
+/*
+ * list head containing all the netmap opened fd and their
+ * windows HANDLE counterparts
+ */
+static struct win_netmap_fd_list *win_netmap_fd_list_head;
+
+static void
+win_insert_fd_record(int fd)
+{
+	struct win_netmap_fd_list *curr;
+
+	for (curr = win_netmap_fd_list_head; curr; curr = curr->next) {
+		if (fd == curr->win_netmap_fd) {
+			return;
+		}
+	}
+	curr = calloc(1, sizeof(*curr));
+	curr->next = win_netmap_fd_list_head;
+	curr->win_netmap_fd = fd;
+	curr->win_netmap_handle = IntToPtr(_get_osfhandle(fd));
+	win_netmap_fd_list_head = curr;
+}
+
+void
+win_remove_fd_record(int fd)
+{
+	struct win_netmap_fd_list *curr = win_netmap_fd_list_head;
+	struct win_netmap_fd_list *prev = NULL;
+	for (; curr ; prev = curr, curr = curr->next) {
+		if (fd != curr->win_netmap_fd)
+			continue;
+		/* found the entry */
+		if (prev == NULL) { /* we are freeing the first entry */
+			win_netmap_fd_list_head = curr->next;
+		} else {
+			prev->next = curr->next;
+		}
+		free(curr);
+		break;
+	}
+}
+
+
+HANDLE
+win_get_netmap_handle(int fd)
+{
+	struct win_netmap_fd_list *curr;
+
+	for (curr = win_netmap_fd_list_head; curr; curr = curr->next) {
+		if (fd == curr->win_netmap_fd) {
+			return curr->win_netmap_handle;
+		}
+	}
+	return NULL;
+}
+
+/*
+ * we need to wrap ioctl and mmap, at least for the netmap file descriptors
+ */
+
+/*
+ * use this function only from netmap_user.h internal functions
+ * same as ioctl, returns 0 on success and -1 on error
+ */
+static int
+win_nm_ioctl_internal(HANDLE h, int32_t ctlCode, void *arg)
+{
+	DWORD bReturn = 0, szIn, szOut;
+	BOOL ioctlReturnStatus;
+	void *inParam = arg, *outParam = arg;
+
+	switch (ctlCode) {
+	case NETMAP_POLL:
+		szIn = sizeof(POLL_REQUEST_DATA);
+		szOut = sizeof(POLL_REQUEST_DATA);
+		break;
+	case NETMAP_MMAP:
+		szIn = 0;
+		szOut = sizeof(void*);
+		inParam = NULL; /* nothing on input */
+		break;
+	case NIOCTXSYNC:
+	case NIOCRXSYNC:
+		szIn = 0;
+		szOut = 0;
+		break;
+	case NIOCREGIF:
+		szIn = sizeof(struct nmreq);
+		szOut = sizeof(struct nmreq);
+		break;
+	case NIOCCONFIG:
+		D("unsupported NIOCCONFIG!");
+		return -1;
+
+	default: /* a regular ioctl */
+		D("invalid ioctl %x on netmap fd", ctlCode);
+		return -1;
+	}
+
+	ioctlReturnStatus = DeviceIoControl(h,
+				ctlCode, inParam, szIn,
+				outParam, szOut,
+				&bReturn, NULL);
+	// XXX note windows returns 0 on error or async call, 1 on success
+	// we could call GetLastError() to figure out what happened
+	return ioctlReturnStatus ? 0 : -1;
+}
+
+/*
+ * this function is what must be called from user-space programs
+ * same as ioctl, returns 0 on success and -1 on error
+ */
+static int
+win_nm_ioctl(int fd, int32_t ctlCode, void *arg)
+{
+	HANDLE h = win_get_netmap_handle(fd);
+
+	if (h == NULL) {
+		return ioctl(fd, ctlCode, arg);
+	} else {
+		return win_nm_ioctl_internal(h, ctlCode, arg);
+	}
+}
+
+#define ioctl win_nm_ioctl /* from now on, within this file ... */
+
+/*
+ * We cannot use the native mmap on windows
+ * The only parameter used is "fd", the other ones are just declared to
+ * make this signature comparable to the FreeBSD/Linux one
+ */
+static void *
+win32_mmap_emulated(void *addr, size_t length, int prot, int flags, int fd, int32_t offset)
+{
+	HANDLE h = win_get_netmap_handle(fd);
+
+	if (h == NULL) {
+		return mmap(addr, length, prot, flags, fd, offset);
+	} else {
+		MEMORY_ENTRY ret;
+
+		return win_nm_ioctl_internal(h, NETMAP_MMAP, &ret) ?
+			NULL : ret.pUsermodeVirtualAddress;
+	}
+}
+
+#define mmap win32_mmap_emulated
+
+#include <poll.h> /* XXX needed to use the structure pollfd */
+
+static int
+win_nm_poll(struct pollfd *fds, int nfds, int timeout)
+{
+	HANDLE h;
+
+	if (nfds != 1 || fds == NULL || (h = win_get_netmap_handle(fds->fd)) == NULL) {
+		return poll(fds, nfds, timeout);
+	} else {
+		POLL_REQUEST_DATA prd;
+
+		prd.timeout = timeout;
+		prd.events = fds->events;
+
+		win_nm_ioctl_internal(h, NETMAP_POLL, &prd);
+		if ((prd.revents == POLLERR) || (prd.revents == STATUS_TIMEOUT)) {
+			return -1;
+		}
+		return 1;
+	}
+}
+
+#define poll win_nm_poll
+
+static int
+win_nm_open(char* pathname, int flags)
+{
+	if (strcmp(pathname, NETMAP_DEVICE_NAME) == 0) {
+		int fd = open(NETMAP_DEVICE_NAME, O_RDWR);
+		if (fd < 0) {
+			return -1;
+		}
+
+		win_insert_fd_record(fd);
+		return fd;
+	} else {
+		return open(pathname, flags);
+	}
+}
+
+#define open win_nm_open
+
+static int
+win_nm_close(int fd)
+{
+	if (fd != -1) {
+		close(fd);
+		if (win_get_netmap_handle(fd) != NULL) {
+			win_remove_fd_record(fd);
+		}
+	}
+	return 0;
+}
+
+#define close win_nm_close
+
+#endif /* _WIN32 */
+
+static int
+nm_is_identifier(const char *s, const char *e)
+{
+	for (; s != e; s++) {
+		if (!isalnum(*s) && *s != '_') {
+			return 0;
+		}
+	}
+
+	return 1;
+}
+
+/*
 * Try to open, return descriptor if successful, NULL otherwise.
 * An invalid netmap name will return errno = 0;
 * You can pass a pointer to a pre-filled nm_desc to add special
 * parameters. Flags is used as follows
- * NM_OPEN_NO_MMAP	use the memory from arg, only
+ * NM_OPEN_NO_MMAP	use the memory from arg, only XXX avoid mmap
 *			if the nr_arg2 (memory block) matches.
 * NM_OPEN_ARG1		use req.nr_arg1 from arg
 * NM_OPEN_ARG2		use req.nr_arg2 from arg
 * NM_OPEN_RING_CFG	use ring config from arg
 */
static struct nm_desc *
nm_open(const char *ifname, const struct nmreq *req,
	uint64_t new_flags, const struct nm_desc *arg)
{
	struct nm_desc *d = NULL;
	const struct nm_desc *parent = arg;
	u_int namelen;
	uint32_t nr_ringid = 0, nr_flags, nr_reg;
	const char *port = NULL;
+	const char *vpname = NULL;
#define MAXERRMSG 80
	char errmsg[MAXERRMSG] = "";
	enum { P_START, P_RNGSFXOK, P_GETNUM, P_FLAGS, P_FLAGSOK } p_state;
+	int is_vale;
	long num;

-	if (strncmp(ifname, "netmap:", 7) && strncmp(ifname, "vale", 4)) {
+	if (strncmp(ifname, "netmap:", 7) &&
+			strncmp(ifname, NM_BDG_NAME, strlen(NM_BDG_NAME))) {
		errno = 0; /* name not recognised, not an error */
		return NULL;
	}
-	if (ifname[0] == 'n')
+
+	is_vale = (ifname[0] == 'v');
+	if (is_vale) {
+		port = index(ifname, ':');
+		if (port == NULL) {
+			snprintf(errmsg, MAXERRMSG,
+				 "missing ':' in vale name");
+			goto fail;
+		}
+
+		if (!nm_is_identifier(ifname + 4, port)) {
+			snprintf(errmsg, MAXERRMSG, "invalid bridge name");
+			goto fail;
+		}
+
+		vpname = ++port;
+	} else {
		ifname += 7;
+		port = ifname;
+	}
+
	/* scan for a separator */
-	for (port = ifname; *port && !index("-*^{}/", *port); port++)
+	for (; *port && !index("-*^{}/", *port); port++)
		;
+
+	if (is_vale && !nm_is_identifier(vpname, port)) {
+		snprintf(errmsg, MAXERRMSG, "invalid bridge port name");
+		goto fail;
+	}
+
	namelen = port - ifname;
	if (namelen >= sizeof(d->req.nr_name)) {
		snprintf(errmsg, MAXERRMSG, "name too long");
		goto fail;
	}
	p_state = P_START;
	nr_flags = NR_REG_ALL_NIC; /* default for no suffix */
	while (*port) {
		switch (p_state) {
		case P_START:
			switch (*port) {
			case '^': /* only SW ring */
				nr_flags = NR_REG_SW;
				p_state = P_RNGSFXOK;
				break;
			case '*': /* NIC and SW */
				nr_flags = NR_REG_NIC_SW;
				p_state = P_RNGSFXOK;
				break;
			case '-': /* one NIC ring pair */
				nr_flags = NR_REG_ONE_NIC;
				p_state = P_GETNUM;
				break;
			case '{': /* pipe (master endpoint) */
				nr_flags = NR_REG_PIPE_MASTER;
				p_state = P_GETNUM;
				break;
			case '}': /* pipe (slave endpoint) */
				nr_flags = NR_REG_PIPE_SLAVE;
				p_state = P_GETNUM;
				break;
			case '/': /* start of flags */
				p_state = P_FLAGS;
				break;
			default:
				snprintf(errmsg, MAXERRMSG, "unknown modifier: '%c'", *port);
				goto fail;
			}
			port++;
			break;
		case P_RNGSFXOK:
			switch (*port) {
			case '/':
				p_state = P_FLAGS;
				break;
			default:
				snprintf(errmsg, MAXERRMSG, "unexpected character: '%c'", *port);
				goto fail;
			}
			port++;
			break;
		case P_GETNUM:
			num = strtol(port, (char **)&port, 10);
			if (num < 0 || num >= NETMAP_RING_MASK) {
				snprintf(errmsg, MAXERRMSG, "'%ld' out of range [0, %d)",
					num, NETMAP_RING_MASK);
				goto fail;
			}
			nr_ringid = num & NETMAP_RING_MASK;
			p_state = P_RNGSFXOK;
			break;
		case P_FLAGS:
		case P_FLAGSOK:
			switch (*port) {
			case 'x':
				nr_flags |= NR_EXCLUSIVE;
				break;
			case 'z':
				nr_flags |= NR_ZCOPY_MON;
				break;
			case 't':
				nr_flags |= NR_MONITOR_TX;
				break;
			case 'r':
				nr_flags |= NR_MONITOR_RX;
				break;
+			case 'R':
+				nr_flags |= NR_RX_RINGS_ONLY;
+				break;
+			case 'T':
+				nr_flags |= NR_TX_RINGS_ONLY;
+				break;
			default:
				snprintf(errmsg, MAXERRMSG, "unrecognized flag: '%c'", *port);
				goto fail;
			}
			port++;
			p_state = P_FLAGSOK;
			break;
		}
	}
	if (p_state != P_START && p_state != P_RNGSFXOK && p_state != P_FLAGSOK) {
		snprintf(errmsg, MAXERRMSG, "unexpected end of port name");
		goto fail;
	}
+	if ((nr_flags & NR_ZCOPY_MON) &&
+	   !(nr_flags & (NR_MONITOR_TX|NR_MONITOR_RX))) {
+		snprintf(errmsg, MAXERRMSG, "'z' used but neither 'r', nor 't' found");
+		goto fail;
+	}
	ND("flags: %s %s %s %s",
		(nr_flags & NR_EXCLUSIVE) ? "EXCLUSIVE" : "",
		(nr_flags & NR_ZCOPY_MON) ? "ZCOPY_MON" : "",
		(nr_flags & NR_MONITOR_TX) ? "MONITOR_TX" : "",
		(nr_flags & NR_MONITOR_RX) ? "MONITOR_RX" : "");

	d = (struct nm_desc *)calloc(1, sizeof(*d));
	if (d == NULL) {
		snprintf(errmsg, MAXERRMSG, "nm_desc alloc failure");
		errno = ENOMEM;
		return NULL;
	}
	d->self = d;	/* set this early so nm_close() works */
-	d->fd = open("/dev/netmap", O_RDWR);
+	d->fd = open(NETMAP_DEVICE_NAME, O_RDWR);
	if (d->fd < 0) {
		snprintf(errmsg, MAXERRMSG, "cannot open /dev/netmap: %s", strerror(errno));
		goto fail;
	}

	if (req)
		d->req = *req;
	d->req.nr_version = NETMAP_API;
	d->req.nr_ringid &= ~NETMAP_RING_MASK;

	/* these fields are overridden by ifname and flags processing */
	d->req.nr_ringid |= nr_ringid;
-	d->req.nr_flags = nr_flags;
+	d->req.nr_flags |= nr_flags;
	memcpy(d->req.nr_name, ifname, namelen);
	d->req.nr_name[namelen] = '\0';
	/* optionally import info from parent */
	if (IS_NETMAP_DESC(parent) && new_flags) {
		if (new_flags & NM_OPEN_ARG1)
			D("overriding ARG1 %d", parent->req.nr_arg1);
		d->req.nr_arg1 = new_flags & NM_OPEN_ARG1 ?
			parent->req.nr_arg1 : 4;
		if (new_flags & NM_OPEN_ARG2)
			D("overriding ARG2 %d", parent->req.nr_arg2);
		d->req.nr_arg2 = new_flags & NM_OPEN_ARG2 ?
			parent->req.nr_arg2 : 0;
		if (new_flags & NM_OPEN_ARG3)
			D("overriding ARG3 %d", parent->req.nr_arg3);
		d->req.nr_arg3 = new_flags & NM_OPEN_ARG3 ?
			parent->req.nr_arg3 : 0;
		if (new_flags & NM_OPEN_RING_CFG) {
			D("overriding RING_CFG");
			d->req.nr_tx_slots = parent->req.nr_tx_slots;
			d->req.nr_rx_slots = parent->req.nr_rx_slots;
			d->req.nr_tx_rings = parent->req.nr_tx_rings;
			d->req.nr_rx_rings = parent->req.nr_rx_rings;
		}
		if (new_flags & NM_OPEN_IFNAME) {
			D("overriding ifname %s ringid 0x%x flags 0x%x",
				parent->req.nr_name, parent->req.nr_ringid,
				parent->req.nr_flags);
			memcpy(d->req.nr_name, parent->req.nr_name,
				sizeof(d->req.nr_name));
			d->req.nr_ringid = parent->req.nr_ringid;
			d->req.nr_flags = parent->req.nr_flags;
		}
	}
	/* add the *XPOLL flags */
	d->req.nr_ringid |= new_flags & (NETMAP_NO_TX_POLL | NETMAP_DO_RX_POLL);

	if (ioctl(d->fd, NIOCREGIF, &d->req)) {
		snprintf(errmsg, MAXERRMSG, "NIOCREGIF failed: %s", strerror(errno));
		goto fail;
	}

-	if (IS_NETMAP_DESC(parent) && parent->mem &&
-	    parent->req.nr_arg2 == d->req.nr_arg2) {
-		/* do not mmap, inherit from parent */
-		d->memsize = parent->memsize;
-		d->mem = parent->mem;
-	} else {
-		/* XXX TODO: check if memsize is too large (or there is overflow) */
-		d->memsize = d->req.nr_memsize;
-		d->mem = mmap(0, d->memsize, PROT_WRITE | PROT_READ, MAP_SHARED,
-				d->fd, 0);
-		if (d->mem == MAP_FAILED) {
-			snprintf(errmsg, MAXERRMSG, "mmap failed: %s", strerror(errno));
-			goto fail;
-		}
-		d->done_mmap = 1;
+	/* if parent is defined, do nm_mmap() even if NM_OPEN_NO_MMAP is set */
+	if ((!(new_flags & NM_OPEN_NO_MMAP) || parent) && nm_mmap(d, parent)) {
+		snprintf(errmsg, MAXERRMSG, "mmap failed: %s", strerror(errno));
+		goto fail;
	}
-	{
-		struct netmap_if *nifp = NETMAP_IF(d->mem, d->req.nr_offset);
-		struct netmap_ring *r = NETMAP_RXRING(nifp, 0);
-
-		*(struct netmap_if **)(uintptr_t)&(d->nifp) = nifp;
-		*(struct netmap_ring **)(uintptr_t)&d->some_ring = r;
-		*(void **)(uintptr_t)&d->buf_start = NETMAP_BUF(r, 0);
-		*(void **)(uintptr_t)&d->buf_end =
-			(char *)d->mem + d->memsize;
-	}
-
	nr_reg = d->req.nr_flags & NR_REG_MASK;

	if (nr_reg == NR_REG_SW) { /* host stack */
		d->first_tx_ring = d->last_tx_ring = d->req.nr_tx_rings;
		d->first_rx_ring = d->last_rx_ring = d->req.nr_rx_rings;
	} else if (nr_reg == NR_REG_ALL_NIC) { /* only nic */
		d->first_tx_ring = 0;
		d->first_rx_ring = 0;
		d->last_tx_ring = d->req.nr_tx_rings - 1;
		d->last_rx_ring = d->req.nr_rx_rings - 1;
	} else if (nr_reg == NR_REG_NIC_SW) {
		d->first_tx_ring = 0;
		d->first_rx_ring = 0;
		d->last_tx_ring = d->req.nr_tx_rings;
		d->last_rx_ring = d->req.nr_rx_rings;
	} else if (nr_reg == NR_REG_ONE_NIC) {
		/* XXX check validity */
		d->first_tx_ring = d->last_tx_ring =
		d->first_rx_ring = d->last_rx_ring = d->req.nr_ringid & NETMAP_RING_MASK;
	} else { /* pipes */
		d->first_tx_ring = d->last_tx_ring = 0;
		d->first_rx_ring = d->last_rx_ring = 0;
	}

#ifdef DEBUG_NETMAP_USER
    { /* debugging code */
	int i;

	D("%s tx %d .. %d %d rx %d .. %d %d", ifname,
		d->first_tx_ring, d->last_tx_ring, d->req.nr_tx_rings,
		d->first_rx_ring, d->last_rx_ring, d->req.nr_rx_rings);
	for (i = 0; i <= d->req.nr_tx_rings; i++) {
		struct netmap_ring *r = NETMAP_TXRING(d->nifp, i);
		D("TX%d %p h %d c %d t %d", i, r, r->head, r->cur, r->tail);
	}
	for (i = 0; i <= d->req.nr_rx_rings; i++) {
		struct netmap_ring *r = NETMAP_RXRING(d->nifp, i);
		D("RX%d %p h %d c %d t %d", i, r, r->head, r->cur, r->tail);
	}
    }
#endif /* debugging */

	d->cur_tx_ring = d->first_tx_ring;
	d->cur_rx_ring = d->first_rx_ring;
	return d;

fail:
	nm_close(d);
	if (errmsg[0])
		D("%s %s", errmsg, ifname);
	if (errno == 0)
		errno = EINVAL;
	return NULL;
}


static int
nm_close(struct nm_desc *d)
{
	/*
	 * ugly trick to avoid unused warnings
	 */
	static void *__xxzt[] __attribute__ ((unused))  =
		{ (void *)nm_open, (void *)nm_inject,
		  (void *)nm_dispatch, (void *)nm_nextpkt } ;

	if (d == NULL || d->self != d)
		return EINVAL;
	if (d->done_mmap && d->mem)
		munmap(d->mem, d->memsize);
-	if (d->fd != -1)
+	if (d->fd != -1) {
		close(d->fd);
+	}
+
	bzero(d, sizeof(*d));
	free(d);
	return 0;
}


+static int
+nm_mmap(struct nm_desc *d, const struct nm_desc *parent)
+{
+	// XXX TODO: check if mmap is already done
+
+	if (IS_NETMAP_DESC(parent) && parent->mem &&
+	    parent->req.nr_arg2 == d->req.nr_arg2) {
+		/* do not mmap, inherit from parent */
+		D("do not mmap, inherit from parent");
+		d->memsize = parent->memsize;
+		d->mem = parent->mem;
+	} else {
+		/* XXX TODO: check if memsize is too large (or there is overflow) */
+		d->memsize = d->req.nr_memsize;
+		d->mem = mmap(0, d->memsize, PROT_WRITE | PROT_READ, MAP_SHARED,
+				d->fd, 0);
+		if (d->mem == MAP_FAILED) {
+			goto fail;
+		}
+		d->done_mmap = 1;
+	}
+	{
+		struct netmap_if *nifp = NETMAP_IF(d->mem, d->req.nr_offset);
+		struct netmap_ring *r = NETMAP_RXRING(nifp, 0);
+
+		*(struct netmap_if **)(uintptr_t)&(d->nifp) = nifp;
+		*(struct netmap_ring **)(uintptr_t)&d->some_ring = r;
+		*(void **)(uintptr_t)&d->buf_start = NETMAP_BUF(r, 0);
+		*(void **)(uintptr_t)&d->buf_end =
+			(char *)d->mem + d->memsize;
+	}
+
+	return 0;
+
+fail:
+	return EINVAL;
+}
+
/*
 * Same prototype as pcap_inject(), only need to cast.
 */
static int
nm_inject(struct nm_desc *d, const void *buf, size_t size)
{
	u_int c, n = d->last_tx_ring - d->first_tx_ring + 1;

	for (c = 0; c < n ; c++) {
		/* compute current ring to use */
		struct netmap_ring *ring;
		uint32_t i, idx;
		uint32_t ri = d->cur_tx_ring + c;

		if (ri > d->last_tx_ring)
			ri = d->first_tx_ring;
		ring = NETMAP_TXRING(d->nifp, ri);
		if (nm_ring_empty(ring)) {
			continue;
		}
		i = ring->cur;
		idx = ring->slot[i].buf_idx;
		ring->slot[i].len = size;
		nm_pkt_copy(buf, NETMAP_BUF(ring, idx), size);
		d->cur_tx_ring = ri;
		ring->head = ring->cur = nm_ring_next(ring, i);
		return size;
	}
	return 0; /* fail */
}


/*
 * Same prototype as pcap_dispatch(), only need to cast.
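 * For instance (an editor's sketch, not part of the original file;
 * 'd' is a descriptor returned by nm_open()), a callback counting
 * received packets can be driven as follows:
 *
 *	static void
 *	count_cb(u_char *arg, const struct nm_pkthdr *h, const u_char *buf)
 *	{
 *		(void)h; (void)buf;	// unused here
 *		(*(unsigned long *)arg)++;
 *	}
 *
 *	unsigned long cnt = 0;
 *	struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
 *	poll(&pfd, 1, -1);		// wait for packets
 *	nm_dispatch(d, -1, count_cb, (u_char *)&cnt);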
 */
static int
nm_dispatch(struct nm_desc *d, int cnt, nm_cb_t cb, u_char *arg)
{
	int n = d->last_rx_ring - d->first_rx_ring + 1;
	int c, got = 0, ri = d->cur_rx_ring;
+	d->hdr.buf = NULL;
+	d->hdr.flags = NM_MORE_PKTS;
+	d->hdr.d = d;

	if (cnt == 0)
		cnt = -1;
	/* cnt == -1 means infinite, but rings have a finite amount
	 * of buffers and the int is large enough that we never wrap,
	 * so we can omit checking for -1
	 */
	for (c=0; c < n && cnt != got; c++) {
		/* compute current ring to use */
		struct netmap_ring *ring;

		ri = d->cur_rx_ring + c;
		if (ri > d->last_rx_ring)
			ri = d->first_rx_ring;
		ring = NETMAP_RXRING(d->nifp, ri);
		for ( ; !nm_ring_empty(ring) && cnt != got; got++) {
-			u_int i = ring->cur;
-			u_int idx = ring->slot[i].buf_idx;
-			u_char *buf = (u_char *)NETMAP_BUF(ring, idx);
-
+			u_int idx, i;
+			if (d->hdr.buf) { /* from previous round */
+				cb(arg, &d->hdr, d->hdr.buf);
+			}
+			i = ring->cur;
+			idx = ring->slot[i].buf_idx;
+			d->hdr.slot = &ring->slot[i];
+			d->hdr.buf = (u_char *)NETMAP_BUF(ring, idx);
			// __builtin_prefetch(buf);
			d->hdr.len = d->hdr.caplen = ring->slot[i].len;
			d->hdr.ts = ring->ts;
-			cb(arg, &d->hdr, buf);
			ring->head = ring->cur = nm_ring_next(ring, i);
		}
+	}
+	if (d->hdr.buf) { /* from previous round */
+		d->hdr.flags = 0;
+		cb(arg, &d->hdr, d->hdr.buf);
	}
	d->cur_rx_ring = ri;
	return got;
}

static u_char *
nm_nextpkt(struct nm_desc *d, struct nm_pkthdr *hdr)
{
	int ri = d->cur_rx_ring;

	do {
		/* compute current ring to use */
		struct netmap_ring *ring = NETMAP_RXRING(d->nifp, ri);
		if (!nm_ring_empty(ring)) {
			u_int i = ring->cur;
			u_int idx = ring->slot[i].buf_idx;
			u_char *buf = (u_char *)NETMAP_BUF(ring, idx);

			// __builtin_prefetch(buf);
			hdr->ts = ring->ts;
			hdr->len = hdr->caplen = ring->slot[i].len;
			ring->cur = nm_ring_next(ring, i);
			/* we could postpone advancing head if we want
			 * to hold the buffer. This can be supported in
			 * the future.
			 */
			ring->head = ring->cur;
			d->cur_rx_ring = ri;
			return buf;
		}
		ri++;
		if (ri > d->last_rx_ring)
			ri = d->first_rx_ring;
	} while (ri != d->cur_rx_ring);

	return NULL; /* nothing found */
}

#endif /* !HAVE_NETMAP_WITH_LIBS */

#endif /* NETMAP_WITH_LIBS */

#endif /* _NET_NETMAP_USER_H_ */

Index: head/tools/tools/netmap/Makefile
===================================================================
--- head/tools/tools/netmap/Makefile	(revision 307393)
+++ head/tools/tools/netmap/Makefile	(revision 307394)
@@ -1,32 +1,37 @@
#
# $FreeBSD$
#
# For multiple programs using a single source file each,
# we can just define 'progs' and create custom targets.
-PROGS	=	pkt-gen bridge vale-ctl
+PROGS	=	pkt-gen nmreplay bridge vale-ctl

CLEANFILES = $(PROGS) *.o
MAN=
-CFLAGS += -Werror -Wall # -nostdinc -I/usr/include -I../../../sys
+CFLAGS += -Werror -Wall
+CFLAGS += -nostdinc -I ../../../sys -I/usr/include
CFLAGS += -Wextra

LDFLAGS += -lpthread
.ifdef WITHOUT_PCAP
CFLAGS += -DNO_PCAP
.else
LDFLAGS += -lpcap
.endif
+LDFLAGS += -lm # used by nmreplay

.include
.include

all: $(PROGS)

pkt-gen: pkt-gen.o
	$(CC) $(CFLAGS) -o pkt-gen pkt-gen.o $(LDFLAGS)

bridge: bridge.o
	$(CC) $(CFLAGS) -o bridge bridge.o
+
+nmreplay: nmreplay.o
+	$(CC) $(CFLAGS) -o nmreplay nmreplay.o $(LDFLAGS)

vale-ctl: vale-ctl.o
	$(CC) $(CFLAGS) -o vale-ctl vale-ctl.o

Index: head/tools/tools/netmap/bridge.c
===================================================================
--- head/tools/tools/netmap/bridge.c	(revision 307393)
+++ head/tools/tools/netmap/bridge.c	(revision 307394)
@@ -1,317 +1,335 @@
/*
 * (C) 2011-2014 Luigi Rizzo, Matteo Landi
 *
 * BSD license
 *
 * A netmap client to bridge two network interfaces
 * (or one interface and the host stack).
 *
 * $FreeBSD$
 */

#include <stdio.h>
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <sys/poll.h>

int verbose = 0;

static int do_abort = 0;
static int zerocopy = 1; /* enable zerocopy if possible */

static void
sigint_h(int sig)
{
	(void)sig;	/* UNUSED */
	do_abort = 1;
	signal(SIGINT, SIG_DFL);
}


/*
 * how many packets on this set of queues ?
 */
int
pkt_queued(struct nm_desc *d, int tx)
{
	u_int i, tot = 0;

	if (tx) {
		for (i = d->first_tx_ring; i <= d->last_tx_ring; i++) {
			tot += nm_ring_space(NETMAP_TXRING(d->nifp, i));
		}
	} else {
		for (i = d->first_rx_ring; i <= d->last_rx_ring; i++) {
			tot += nm_ring_space(NETMAP_RXRING(d->nifp, i));
		}
	}
	return tot;
}

/*
 * move up to 'limit' pkts from rxring to txring swapping buffers.
 */
static int
process_rings(struct netmap_ring *rxring, struct netmap_ring *txring,
	      u_int limit, const char *msg)
{
	u_int j, k, m = 0;

	/* print a warning if any of the ring flags is set (e.g. NM_REINIT) */
	if (rxring->flags || txring->flags)
		D("%s rxflags %x txflags %x",
			msg, rxring->flags, txring->flags);
	j = rxring->cur; /* RX */
	k = txring->cur; /* TX */
	m = nm_ring_space(rxring);
	if (m < limit)
		limit = m;
	m = nm_ring_space(txring);
	if (m < limit)
		limit = m;
	m = limit;
	while (limit-- > 0) {
		struct netmap_slot *rs = &rxring->slot[j];
		struct netmap_slot *ts = &txring->slot[k];

		/* swap packets */
		if (ts->buf_idx < 2 || rs->buf_idx < 2) {
			D("wrong index rx[%d] = %d  -> tx[%d] = %d",
				j, rs->buf_idx, k, ts->buf_idx);
			sleep(2);
		}
		/* copy the packet length. */
		if (rs->len > 2048) {
			D("wrong len %d rx[%d] -> tx[%d]", rs->len, j, k);
			rs->len = 0;
		} else if (verbose > 1) {
			D("%s send len %d rx[%d] -> tx[%d]", msg, rs->len, j, k);
		}
		ts->len = rs->len;
		if (zerocopy) {
			uint32_t pkt = ts->buf_idx;
			ts->buf_idx = rs->buf_idx;
			rs->buf_idx = pkt;
			/* report the buffer change. */
			ts->flags |= NS_BUF_CHANGED;
			rs->flags |= NS_BUF_CHANGED;
		} else {
			char *rxbuf = NETMAP_BUF(rxring, rs->buf_idx);
			char *txbuf = NETMAP_BUF(txring, ts->buf_idx);
			nm_pkt_copy(rxbuf, txbuf, ts->len);
		}
		j = nm_ring_next(rxring, j);
		k = nm_ring_next(txring, k);
	}
	rxring->head = rxring->cur = j;
	txring->head = txring->cur = k;
	if (verbose && m > 0)
		D("%s sent %d packets to %p", msg, m, txring);

	return (m);
}

/* move packets from src to destination */
static int
move(struct nm_desc *src, struct nm_desc *dst, u_int limit)
{
	struct netmap_ring *txring, *rxring;
	u_int m = 0, si = src->first_rx_ring, di = dst->first_tx_ring;
	const char *msg = (src->req.nr_ringid & NETMAP_SW_RING) ?
		"host->net" : "net->host";

	while (si <= src->last_rx_ring && di <= dst->last_tx_ring) {
		rxring = NETMAP_RXRING(src->nifp, si);
		txring = NETMAP_TXRING(dst->nifp, di);
		ND("txring %p rxring %p", txring, rxring);
		if (nm_ring_empty(rxring)) {
			si++;
			continue;
		}
		if (nm_ring_empty(txring)) {
			di++;
			continue;
		}
		m += process_rings(rxring, txring, limit, msg);
	}

	return (m);
}


static void
usage(void)
{
	fprintf(stderr,
-	    "usage: bridge [-v] [-i ifa] [-i ifb] [-b burst] [-w wait_time] [iface]\n");
+	    "usage: bridge [-v] [-i ifa] [-i ifb] [-b burst] [-w wait_time] [ifa [ifb [burst]]]\n");
	exit(1);
}

/*
 * bridge [-v] if1 [if2]
 *
 * If only one name, or the two interfaces are the same,
 * bridges userland and the adapter. Otherwise bridges
 * two interfaces.
 */
int
main(int argc, char **argv)
{
	struct pollfd pollfd[2];
	int ch;
	u_int burst = 1024, wait_link = 4;
	struct nm_desc *pa = NULL, *pb = NULL;
	char *ifa = NULL, *ifb = NULL;
	char ifabuf[64] = { 0 };

	fprintf(stderr, "%s built %s %s\n",
		argv[0], __DATE__, __TIME__);

	while ( (ch = getopt(argc, argv, "b:ci:vw:")) != -1) {
		switch (ch) {
		default:
			D("bad option %c %s", ch, optarg);
			usage();
			break;
		case 'b':	/* burst */
			burst = atoi(optarg);
			break;
		case 'i':	/* interface */
			if (ifa == NULL)
				ifa = optarg;
			else if (ifb == NULL)
				ifb = optarg;
			else
				D("%s ignored, already have 2 interfaces",
					optarg);
			break;
		case 'c':
			zerocopy = 0; /* do not zerocopy */
			break;
		case 'v':
			verbose++;
			break;
		case 'w':
			wait_link = atoi(optarg);
			break;
		}

	}

	argc -= optind;
	argv += optind;

+	if (argc > 0)
+		ifa = argv[0];
	if (argc > 1)
-		ifa = argv[1];
+		ifb = argv[1];
	if (argc > 2)
-		ifb = argv[2];
-	if (argc > 3)
-		burst = atoi(argv[3]);
+		burst = atoi(argv[2]);
	if (!ifb)
		ifb = ifa;
	if (!ifa) {
		D("missing interface");
		usage();
	}
	if (burst < 1 || burst > 8192) {
		D("invalid burst %d, set to 1024", burst);
		burst = 1024;
	}
	if (wait_link > 100) {
		D("invalid wait_link %d, set to 4", wait_link);
		wait_link = 4;
	}
	if (!strcmp(ifa, ifb)) {
		D("same interface, endpoint 0 goes to host");
		snprintf(ifabuf, sizeof(ifabuf) - 1, "%s^", ifa);
		ifa = ifabuf;
	} else {
		/* two different interfaces. Take all rings on if1 */
	}
	pa = nm_open(ifa, NULL, 0, NULL);
	if (pa == NULL) {
		D("cannot open %s", ifa);
		return (1);
	}
-	// XXX use a single mmap ?
+	/* try to reuse the mmap() of the first interface, if possible */
	pb = nm_open(ifb, NULL, NM_OPEN_NO_MMAP, pa);
	if (pb == NULL) {
		D("cannot open %s", ifb);
		nm_close(pa);
		return (1);
	}
	zerocopy = zerocopy && (pa->mem == pb->mem);
	D("------- zerocopy %ssupported", zerocopy ? "" : "NOT ");

	/* setup poll(2) variables.
	 */
	memset(pollfd, 0, sizeof(pollfd));
	pollfd[0].fd = pa->fd;
	pollfd[1].fd = pb->fd;

	D("Wait %d secs for link to come up...", wait_link);
	sleep(wait_link);
	D("Ready to go, %s 0x%x/%d <-> %s 0x%x/%d.",
		pa->req.nr_name, pa->first_rx_ring, pa->req.nr_rx_rings,
		pb->req.nr_name, pb->first_rx_ring, pb->req.nr_rx_rings);

	/* main loop */
	signal(SIGINT, sigint_h);
	while (!do_abort) {
		int n0, n1, ret;

		pollfd[0].events = pollfd[1].events = 0;
		pollfd[0].revents = pollfd[1].revents = 0;
		n0 = pkt_queued(pa, 0);
		n1 = pkt_queued(pb, 0);
+#if defined(_WIN32) || defined(BUSYWAIT)
+		if (n0) {
+			ioctl(pollfd[1].fd, NIOCTXSYNC, NULL);
+			pollfd[1].revents = POLLOUT;
+		} else {
+			ioctl(pollfd[0].fd, NIOCRXSYNC, NULL);
+		}
+		if (n1) {
+			ioctl(pollfd[0].fd, NIOCTXSYNC, NULL);
+			pollfd[0].revents = POLLOUT;
+		} else {
+			ioctl(pollfd[1].fd, NIOCRXSYNC, NULL);
+		}
+		ret = 1;
+#else
		if (n0)
			pollfd[1].events |= POLLOUT;
		else
			pollfd[0].events |= POLLIN;
		if (n1)
			pollfd[0].events |= POLLOUT;
		else
			pollfd[1].events |= POLLIN;
		ret = poll(pollfd, 2, 2500);
+#endif // defined(_WIN32) || defined(BUSYWAIT)
		if (ret <= 0 || verbose)
		    D("poll %s [0] ev %x %x rx %d@%d tx %d,"
			     " [1] ev %x %x rx %d@%d tx %d",
				ret <= 0 ? "timeout" : "ok",
				pollfd[0].events,
				pollfd[0].revents,
				pkt_queued(pa, 0),
				NETMAP_RXRING(pa->nifp, pa->cur_rx_ring)->cur,
				pkt_queued(pa, 1),
				pollfd[1].events,
				pollfd[1].revents,
				pkt_queued(pb, 0),
				NETMAP_RXRING(pb->nifp, pb->cur_rx_ring)->cur,
				pkt_queued(pb, 1)
			);
		if (ret < 0)
			continue;
		if (pollfd[0].revents & POLLERR) {
			struct netmap_ring *rx = NETMAP_RXRING(pa->nifp, pa->cur_rx_ring);
			D("error on fd0, rx [%d,%d,%d)",
				rx->head, rx->cur, rx->tail);
		}
		if (pollfd[1].revents & POLLERR) {
			struct netmap_ring *rx = NETMAP_RXRING(pb->nifp, pb->cur_rx_ring);
			D("error on fd1, rx [%d,%d,%d)",
				rx->head, rx->cur, rx->tail);
		}
		if (pollfd[0].revents & POLLOUT) {
			move(pb, pa, burst);
			// XXX we don't need the ioctl
			// ioctl(me[0].fd, NIOCTXSYNC, NULL);
		}
		if (pollfd[1].revents & POLLOUT) {
			move(pa, pb, burst);
			// XXX we don't need the ioctl
			// ioctl(me[1].fd, NIOCTXSYNC, NULL);
		}
	}
	D("exiting");
	nm_close(pb);
	nm_close(pa);

	return (0);
}

Index: head/tools/tools/netmap/ctrs.h
===================================================================
--- head/tools/tools/netmap/ctrs.h	(nonexistent)
+++ head/tools/tools/netmap/ctrs.h	(revision 307394)
@@ -0,0 +1,108 @@
+#ifndef CTRS_H_
+#define CTRS_H_
+
+/* $FreeBSD$ */
+
+#include <sys/time.h>
+
+/* counters to accumulate statistics */
+struct my_ctrs {
+	uint64_t pkts, bytes, events, drop;
+	uint64_t min_space;
+	struct timeval t;
+};
+
+/* very crude code to print a number in normalized form.
+ * Caller has to make sure that the buffer is large enough.
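+ *
+ * Usage sketch (an editor's illustration, not part of the original file):
+ *
+ *	char buf[64];
+ *	printf("%s pps\n", norm(buf, 14.88e6));	// prints "14.880 M pps"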
+ */
+static const char *
+norm2(char *buf, double val, char *fmt)
+{
+	char *units[] = { "", "K", "M", "G", "T" };
+	u_int i;
+
+	for (i = 0; val >= 1000 && i < sizeof(units)/sizeof(char *) - 1; i++)
+		val /= 1000;
+	sprintf(buf, fmt, val, units[i]);
+	return buf;
+}

+static __inline const char *
+norm(char *buf, double val)
+{
+	return norm2(buf, val, "%.3f %s");
+}
+
+static __inline int
+timespec_ge(const struct timespec *a, const struct timespec *b)
+{
+
+	if (a->tv_sec > b->tv_sec)
+		return (1);
+	if (a->tv_sec < b->tv_sec)
+		return (0);
+	if (a->tv_nsec >= b->tv_nsec)
+		return (1);
+	return (0);
+}
+
+static __inline struct timespec
+timeval2spec(const struct timeval *a)
+{
+	struct timespec ts = {
+		.tv_sec = a->tv_sec,
+		.tv_nsec = a->tv_usec * 1000
+	};
+	return ts;
+}
+
+static __inline struct timeval
+timespec2val(const struct timespec *a)
+{
+	struct timeval tv = {
+		.tv_sec = a->tv_sec,
+		.tv_usec = a->tv_nsec / 1000
+	};
+	return tv;
+}
+
+
+static __inline struct timespec
+timespec_add(struct timespec a, struct timespec b)
+{
+	struct timespec ret = { a.tv_sec + b.tv_sec, a.tv_nsec + b.tv_nsec };
+	if (ret.tv_nsec >= 1000000000) {
+		ret.tv_sec++;
+		ret.tv_nsec -= 1000000000;
+	}
+	return ret;
+}
+
+static __inline struct timespec
+timespec_sub(struct timespec a, struct timespec b)
+{
+	struct timespec ret = { a.tv_sec - b.tv_sec, a.tv_nsec - b.tv_nsec };
+	if (ret.tv_nsec < 0) {
+		ret.tv_sec--;
+		ret.tv_nsec += 1000000000;
+	}
+	return ret;
+}
+
+static uint64_t
+wait_for_next_report(struct timeval *prev, struct timeval *cur,
+		int report_interval)
+{
+	struct timeval delta;
+
+	delta.tv_sec = report_interval/1000;
+	delta.tv_usec = (report_interval%1000)*1000;
+	if (select(0, NULL, NULL, NULL, &delta) < 0 && errno != EINTR) {
+		perror("select");
+		abort();
+	}
+	gettimeofday(cur, NULL);
+	timersub(cur, prev, &delta);
+	return delta.tv_sec * 1000000 + delta.tv_usec;
+}
+#endif /* CTRS_H_ */

Property changes on: head/tools/tools/netmap/ctrs.h
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: svn:keywords
## -0,0 +1 ##
+FreeBSD=%H
\ No newline at end of property
Added: svn:mime-type
## -0,0 +1 ##
+text/plain
\ No newline at end of property

Index: head/tools/tools/netmap/nmreplay.8
===================================================================
--- head/tools/tools/netmap/nmreplay.8	(nonexistent)
+++ head/tools/tools/netmap/nmreplay.8	(revision 307394)
@@ -0,0 +1,129 @@
+.\" Copyright (c) 2016 Luigi Rizzo, Universita` di Pisa
+.\" All rights reserved.
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions
+.\" are met:
+.\" 1. Redistributions of source code must retain the above copyright
+.\"    notice, this list of conditions and the following disclaimer.
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\"    notice, this list of conditions and the following disclaimer in the
+.\"    documentation and/or other materials provided with the distribution.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+.\" SUCH DAMAGE.
+.\"
+.\" $FreeBSD$
+.\"
+.Dd February 16, 2016
+.Dt NMREPLAY 8
+.Os
+.Sh NAME
+.Nm nmreplay
+.Nd playback a pcap file through a netmap interface
+.Sh SYNOPSIS
+.Bk -words
+.Bl -tag -width "nmreplay"
+.It Nm
+.Op Fl f Ar pcap-file
+.Op Fl i Ar netmap-interface
+.Op Fl B Ar bandwidth
+.Op Fl D Ar delay
+.Op Fl L Ar loss
+.Op Fl b Ar batch-size
+.Op Fl w Ar wait-link
+.Op Fl v
+.Op Fl C Ar cpu-placement
+.El
+.Ek
+.Sh DESCRIPTION
+.Nm
+works like
+.Nm tcpreplay
+to replay a pcap file through a netmap interface,
+with programmable rates and possibly delays, losses
+and packet alterations.
+.Nm
+is designed to run at high speed, so the transmit schedule
+is computed ahead of time, and the thread in charge of transmission
+only has to pump data through the interface.
+.Nm
+can connect to any type of netmap port.
+.Pp
+Command line options are as follows:
+.Bl -tag -width Ds
+.It Fl f Ar pcap-file
+Name of the pcap file to replay.
+.It Fl i Ar interface
+Name of the netmap interface to use as output.
+.It Fl v
+Enable verbose mode.
+.It Fl b Ar batch-size
+Maximum batch size to use during transmissions.
+.Nm
+normally transmits packets one at a time, but it may use
+larger batches, up to the value specified with this option,
+when running at high rates.
+.It Fl B Ar bps | Cm constant, Ns Ar bps | Cm ether, Ns Ar bps | Cm real Ns Op , Ns Ar speedup
+Bandwidth to be used for transmission.
+.Ar bps
+is a floating point number optionally followed by a character
+(k, K, m, M, g, G) that multiplies the value by 10^3, 10^6 and 10^9,
+respectively.
+.Cm constant
+(can be omitted) means that the bandwidth will be computed
+with reference to the actual packet size (excluding CRC and framing).
+.Cm ether
+indicates that the ethernet framing (160 bits) and CRC (32 bits)
+will be included in the computation of the packet size.
+.Cm real
+means transmission will occur according to the timestamps
+recorded in the trace.
+The optional
+.Ar speedup
+multiplier (defaults to 1) indicates how much faster
+or slower than real time the trace should be replayed.
+.It Fl D Ar dt | Cm constant, Ns Ar dt | Cm uniform, Ns Ar dmin,dmax | Cm exp, Ar dmin,davg
+Adds additional delay to the packet transmission, whose distribution
+can be constant, uniform or exponential.
+.Ar dt, dmin, dmax, davg
+are times expressed as floating point numbers optionally followed
+by a character (s, m, u, n) to indicate seconds, milliseconds,
+microseconds, nanoseconds.
+The delay is added to the transmit time and adjusted so that there is
+never packet reordering.
+.It Fl L Ar x | Cm plr, Ns Ar x | Cm ber, Ns Ar x
+Simulates packet or bit errors, causing offending packets to be dropped.
+.Ar x
+is a floating point number indicating the packet or bit error rate.
+.It Fl w Ar wait-link
+indicates the number of seconds to wait before transmitting.
+It defaults to 2, and may be useful when talking to physical
+ports to let link negotiation complete before starting transmission.
+.El
+.Sh OPERATION
+.Nm
+creates an in-memory schedule with all packets to be transmitted,
+and then launches a separate thread to take care of transmissions
+while the main thread reports statistics every second.
+.Sh SEE ALSO
+.Pa http://info.iet.unipi.it/~luigi/netmap/
+.Pp
+Luigi Rizzo, Revisiting network I/O APIs: the netmap framework,
+Communications of the ACM, 55 (3), pp.45-51, March 2012
+.Pp
+Luigi Rizzo, Giuseppe Lettieri,
+VALE, a switched ethernet for virtual machines,
+ACM CoNEXT'12, December 2012, Nice
+.Sh AUTHORS
+.An -nosplit
+.Nm
+has been written by
+.An Luigi Rizzo, Andrea Beconcini, Francesco Mola and Lorenzo Biagini
+at the Universita` di Pisa, Italy.

Property changes on: head/tools/tools/netmap/nmreplay.8
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: svn:keywords
## -0,0 +1 ##
+FreeBSD=%H
\ No newline at end of property
Added: svn:mime-type
## -0,0 +1 ##
+text/plain
\ No newline at end of property

Index: head/tools/tools/netmap/nmreplay.c
===================================================================
--- head/tools/tools/netmap/nmreplay.c	(nonexistent)
+++ head/tools/tools/netmap/nmreplay.c	(revision 307394)
@@ -0,0 +1,1820 @@
+/*
+ * Copyright (C) 2016 Universita` di Pisa. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *   1. Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *   2. Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in the
+ *      documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * $FreeBSD$
+ */
+
+
+#if 0 /* COMMENT */
+
+This program implements NMREPLAY, a program to replay a pcap file
+enforcing the output rate and possibly random losses and delay
+distributions.
+It is meant to be run from the command line and implemented with a main
+control thread for monitoring, plus a thread to push packets out.
+
+The control thread parses command line arguments, prepares a
+schedule for transmission in a memory buffer and then sits
+in a loop where it periodically reads traffic statistics from
+the other threads and prints them out on the console.
+
+The transmit buffer contains headers and packets. Each header
+includes a timestamp that determines when the packet should be sent out.
+A "consumer" thread cons() reads from the queue and transmits packets
+on the output netmap port when their time has come.
+
+The program does CPU pinning and sets the scheduler and priority
+for the "cons" threads. Externally one should do the
+assignment of other threads (e.g. interrupt handlers) and
+make sure that network interfaces are configured properly.
+
+--- Main functions of the program ---
+within each function, q is used as a pointer to the queue holding
+packets and parameters.
+
+pcap_prod()
+
+    reads from the pcap file and prepares packets to transmit.
+    After reading a packet from the pcap file, the following information
+    is extracted, which can be used to determine the schedule:
+
+    q->cur_pkt	  points to the buffer containing the packet
+    q->cur_len	  packet length, excluding CRC
+    q->cur_caplen available packet length (may be shorter than cur_len)
+    q->cur_tt	  transmission time for the packet, computed from the trace.
+
+    The following functions are then called in sequence:
+
+    q->c_loss (set with the -L command line option) decides
+    whether the packet should be dropped before even queuing.
+    This is generally useful to emulate random loss.
+    The function is supposed to set q->c_drop = 1 if the
+    packet should be dropped, or leave it to 0 otherwise.
+
+    q->c_bw (set with the -B command line option) is used to
+    enforce the transmit bandwidth. The function must store
+    in q->cur_tt the transmission time (in nanoseconds) of
+    the packet, which is typically proportional to the length
+    of the packet, i.e. q->cur_tt = q->cur_len / bandwidth.
+    Variants are possible, e.g. to account for constant framing
+    bits as on the ethernet, or variable channel acquisition times,
+    etc.
+    This mechanism can also be used to simulate variable queueing
+    delay e.g. due to the presence of cross traffic.
+
+    q->c_delay (set with the -D option) implements delay emulation.
+    The function should set q->cur_delay to the additional
+    delay the packet is subject to. The framework will take care of
+    computing the actual exit time of a packet so that there is no
+    reordering.
+
+
+#endif /* COMMENT */
+
+// debugging macros
+#define NED(_fmt, ...)	do {} while (0)
+#define ED(_fmt, ...)						\
+	do {							\
+		struct timeval _t0;				\
+		gettimeofday(&_t0, NULL);			\
+		fprintf(stderr, "%03d.%03d %-10.10s [%5d] \t" _fmt "\n", \
+		(int)(_t0.tv_sec % 1000), (int)_t0.tv_usec/1000, \
+		__FUNCTION__, __LINE__, ##__VA_ARGS__);		\
+	} while (0)
+
+/* WWW is for warnings, EEE is for errors */
+#define WWW(_fmt, ...)	ED("--WWW-- " _fmt, ##__VA_ARGS__)
+#define EEE(_fmt, ...)	ED("--EEE-- " _fmt, ##__VA_ARGS__)
+#define DDD(_fmt, ...)	ED("--DDD-- " _fmt, ##__VA_ARGS__)
+
+#define _GNU_SOURCE	// for CPU_SET() etc
+#include <stdio.h>
+#define NETMAP_WITH_LIBS
+#include <net/netmap_user.h>
+#include <sys/poll.h>
+
+
+/*
+ *
+A packet in the queue is q_pkt plus the payload.
+
+For the packet descriptor we need the following:
+
+    - position of next packet in the queue (can go backwards).
+      We can reduce to 32 bits if we consider alignments,
+      or we just store the length to be added to the current
+      value and assume 0 as a special index.
+    - actual packet length (16 bits may be ok)
+    - queue output time, in nanoseconds (64 bits)
+    - delay line output time, in nanoseconds
+      One of the two can be packed to a 32bit value
+
+A convenient coding uses 32 bytes per packet.
+ + */ + +struct q_pkt { + uint64_t next; /* buffer index for next packet */ + uint64_t pktlen; /* actual packet len */ + uint64_t pt_qout; /* time of output from queue */ + uint64_t pt_tx; /* transmit time */ +}; + + +/* + * The header for a pcap file + */ +struct pcap_file_header { + uint32_t magic; + /*used to detect the file format itself and the byte + ordering. The writing application writes 0xa1b2c3d4 with it's native byte + ordering format into this field. The reading application will read either + 0xa1b2c3d4 (identical) or 0xd4c3b2a1 (swapped). If the reading application + reads the swapped 0xd4c3b2a1 value, it knows that all the following fields + will have to be swapped too. For nanosecond-resolution files, the writing + application writes 0xa1b23c4d, with the two nibbles of the two lower-order + bytes swapped, and the reading application will read either 0xa1b23c4d + (identical) or 0x4d3cb2a1 (swapped)*/ + uint16_t version_major; + uint16_t version_minor; /*the version number of this file format */ + int32_t thiszone; + /*the correction time in seconds between GMT (UTC) and the + local timezone of the following packet header timestamps. Examples: If the + timestamps are in GMT (UTC), thiszone is simply 0. If the timestamps are in + Central European time (Amsterdam, Berlin, ...) which is GMT + 1:00, thiszone + must be -3600*/ + uint32_t stampacc; /*the accuracy of time stamps in the capture*/ + uint32_t snaplen; + /*the "snapshot length" for the capture (typically 65535 + or even more, but might be limited by the user)*/ + uint32_t network; + /*link-layer header type, specifying the type of headers + at the beginning of the packet (e.g. 1 for Ethernet); this can be various + types such as 802.11, 802.11 with various radio information, PPP, Token + Ring, FDDI, etc.*/ +}; + +#if 0 /* from pcap.h */ +struct pcap_file_header { + bpf_u_int32 magic; + u_short version_major; + u_short version_minor; + bpf_int32 thiszone; /* gmt to local correction */ + bpf_u_int32 sigfigs; /* accuracy of timestamps */ + bpf_u_int32 snaplen; /* max length saved portion of each pkt */ + bpf_u_int32 linktype; /* data link type (LINKTYPE_*) */ +}; + +struct pcap_pkthdr { + struct timeval ts; /* time stamp */ + bpf_u_int32 caplen; /* length of portion present */ + bpf_u_int32 len; /* length this packet (off wire) */ +}; +#endif /* from pcap.h */ + +struct pcap_pkthdr { + uint32_t ts_sec; /* seconds from epoch */ + uint32_t ts_frac; /* microseconds or nanoseconds depending on sigfigs */ + uint32_t caplen; + /*the number of bytes of packet data actually captured + and saved in the file. This value should never become larger than orig_len + or the snaplen value of the global header*/ + uint32_t len; /* wire length */ +}; + + +#define PKT_PAD (32) /* padding on packets */ + +static inline int pad(int x) +{ + return ((x) + PKT_PAD - 1) & ~(PKT_PAD - 1) ; +} + + + +/* + * wrapper around the pcap file. + * We mmap the file so it is easy to do multiple passes through it. + */ +struct nm_pcap_file { + int fd; + uint64_t filesize; + const char *data; /* mmapped file */ + + uint64_t tot_pkt; + uint64_t tot_bytes; + uint64_t tot_bytes_rounded; /* need hdr + pad(len) */ + uint32_t resolution; /* 1000 for us, 1 for ns */ + int swap; /* need to swap fields ? 
*/
+
+	uint64_t	first_ts;
+	uint64_t	total_tx_time;
+	/*
+	 * total_tx_time is computed as last_ts - first_ts, plus the
+	 * transmission time for the first packet which in turn is
+	 * computed according to the average bandwidth
+	 */
+
+	uint64_t	file_len;
+	const char *	cur;	/* running pointer */
+	const char *	lim;	/* data + file_len */
+	int		err;
+};
+
+static struct nm_pcap_file *readpcap(const char *fn);
+static void destroy_pcap(struct nm_pcap_file *file);
+
+
+#include <fcntl.h>	/* open */
+#include <unistd.h>	/* lseek, close */
+#include <stdlib.h>	/* calloc, free, exit */
+#include <strings.h>	/* bzero */
+#include <sys/mman.h>	/* mmap, munmap */
+#include <string.h>	/* memcpy */
+
+#include <stdint.h>
+
+#define NS_SCALE 1000000000UL	/* nanoseconds in 1s */
+
+static void destroy_pcap(struct nm_pcap_file *pf)
+{
+	if (!pf)
+		return;
+
+	munmap((void *)(uintptr_t)pf->data, pf->filesize);
+	close(pf->fd);
+	bzero(pf, sizeof(*pf));
+	free(pf);
+	return;
+}
+
+// convert a field of given size if swap is needed.
+static uint32_t
+cvt(const void *src, int size, char swap)
+{
+	uint32_t ret = 0;
+
+	if (size != 2 && size != 4) {
+		EEE("Invalid size %d\n", size);
+		exit(1);
+	}
+	memcpy(&ret, src, size);
+	if (swap) {
+		unsigned char tmp, *data = (unsigned char *)&ret;
+		int i;
+
+		for (i = 0; i < size / 2; i++) {
+			tmp = data[i];
+			data[i] = data[size - (1 + i)];
+			data[size - (1 + i)] = tmp;
+		}
+	}
+	return ret;
+}
+
+static uint32_t
+read_next_info(struct nm_pcap_file *pf, int size)
+{
+	const char *end = pf->cur + size;
+	uint32_t ret;
+
+	if (end > pf->lim) {
+		pf->err = 1;
+		ret = 0;
+	} else {
+		ret = cvt(pf->cur, size, pf->swap);
+		pf->cur = end;
+	}
+	return ret;
+}
+
+/*
+ * mmap the file, make sure timestamps are sorted, and count
+ * packets and sizes
+ * Timestamps represent the receive time of the packets.
+ * We also need to compute 'first_ts', which refers to a hypothetical
+ * packet right before the first one, see the code for details.
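+ * For example, if the packets after the first one carry 999000 bytes
+ * over 1 s of trace time, a 1000-byte first packet is assigned
+ * roughly 1 ms of transmission time at the same average bandwidth.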
+ */
+static struct nm_pcap_file *
+readpcap(const char *fn)
+{
+	struct nm_pcap_file _f, *pf = &_f;
+	uint64_t prev_ts, first_pkt_time;
+	uint32_t magic, first_len = 0;
+
+	bzero(pf, sizeof(*pf));
+	pf->fd = open(fn, O_RDONLY);
+	if (pf->fd < 0) {
+		EEE("cannot open file %s", fn);
+		return NULL;
+	}
+	/* compute length */
+	pf->filesize = lseek(pf->fd, 0, SEEK_END);
+	lseek(pf->fd, 0, SEEK_SET);
+	ED("filesize is %lu", (u_long)(pf->filesize));
+	if (pf->filesize < sizeof(struct pcap_file_header)) {
+		EEE("file too short %s", fn);
+		close(pf->fd);
+		return NULL;
+	}
+	pf->data = mmap(NULL, pf->filesize, PROT_READ, MAP_SHARED, pf->fd, 0);
+	if (pf->data == MAP_FAILED) {
+		EEE("cannot mmap file %s", fn);
+		close(pf->fd);
+		return NULL;
+	}
+	pf->cur = pf->data;
+	pf->lim = pf->data + pf->filesize;
+	pf->err = 0;
+	pf->swap = 0; /* default, same endianness when read magic */
+
+	magic = read_next_info(pf, 4);
+	ED("magic is 0x%x", magic);
+	switch (magic) {
+	case 0xa1b2c3d4: /* native, us resolution */
+		pf->swap = 0;
+		pf->resolution = 1000;
+		break;
+	case 0xd4c3b2a1: /* swapped, us resolution */
+		pf->swap = 1;
+		pf->resolution = 1000;
+		break;
+	case 0xa1b23c4d: /* native, ns resolution */
+		pf->swap = 0;
+		pf->resolution = 1; /* nanoseconds */
+		break;
+	case 0x4d3cb2a1: /* swapped, ns resolution */
+		pf->swap = 1;
+		pf->resolution = 1; /* nanoseconds */
+		break;
+	default:
+		EEE("unknown magic 0x%x", magic);
+		munmap((void *)(uintptr_t)pf->data, pf->filesize);
+		close(pf->fd);
+		return NULL;
+	}
+
+	ED("swap %d res %d\n", pf->swap, pf->resolution);
+	pf->cur = pf->data + sizeof(struct pcap_file_header);
+	pf->lim = pf->data + pf->filesize;
+	pf->err = 0;
+	prev_ts = 0;
+	while (pf->cur < pf->lim && pf->err == 0) {
+		uint32_t base = pf->cur - pf->data;
+		uint64_t cur_ts = read_next_info(pf, 4) * NS_SCALE +
+			read_next_info(pf, 4) * pf->resolution;
+		uint32_t caplen = read_next_info(pf, 4);
+		uint32_t len = read_next_info(pf, 4);
+
+		if (pf->err) {
+			WWW("end of pcap file after %d packets\n",
+				(int)pf->tot_pkt);
+			break;
+		}
+		if (cur_ts < prev_ts) {
+			WWW("reordered packet %d\n",
+				(int)pf->tot_pkt);
+		}
+		prev_ts = cur_ts;
+		(void)base;
+		if (pf->tot_pkt == 0) {
+			pf->first_ts = cur_ts;
+			first_len = len;
+		}
+		pf->tot_pkt++;
+		pf->tot_bytes += len;
+		pf->tot_bytes_rounded += pad(len) + sizeof(struct q_pkt);
+		pf->cur += caplen;
+	}
+	pf->total_tx_time = prev_ts - pf->first_ts; /* excluding first packet */
+	ED("tot_pkt %lu tot_bytes %lu tx_time %.6f s first_len %lu",
+		(u_long)pf->tot_pkt, (u_long)pf->tot_bytes,
+		1e-9*pf->total_tx_time, (u_long)first_len);
+	/*
+	 * We estimate the transmission time of the first packet based
+	 * on the average bandwidth of the trace, as follows
+	 *	first_pkt_ts = p[0].len / avg_bw
+	 * In turn avg_bw = (total_len - p[0].len)/(p[n-1].ts - p[0].ts)
+	 * so
+	 *	first_ts = p[0].ts - p[0].len * (p[n-1].ts - p[0].ts) / (total_len - p[0].len)
+	 */
+	if (pf->tot_bytes == first_len) {
+		/* cannot estimate bandwidth, so force 1 Gbit */
+		first_pkt_time = first_len * 8; /* * 10^9 / bw */
+	} else {
+		first_pkt_time = pf->total_tx_time * first_len / (pf->tot_bytes - first_len);
+	}
+	ED("first_pkt_time %.6f s", 1e-9*first_pkt_time);
+	pf->total_tx_time += first_pkt_time;
+	pf->first_ts -= first_pkt_time;
+
+	/* all correct, allocate a record and copy */
+	pf = calloc(1, sizeof(*pf));
+	*pf = _f;
+	/* reset pointer to start */
+	pf->cur = pf->data + sizeof(struct pcap_file_header);
+	pf->err = 0;
+	return pf;
+}
+
+enum my_pcap_mode { PM_NONE, PM_FAST, PM_FIXED, PM_REAL };
+
+int verbose = 0;
+
+static int do_abort = 0;
+
+#include <pthread.h>	/* pthread_create, pthread_self */
+#include <signal.h>	/* signal, SIGINT */
+#include <sys/ioctl.h>	/* ioctl */
+#include <sched.h>	/* sched_setscheduler */
+
+#include <sys/resource.h>	// setpriority
+
+#ifdef __FreeBSD__
+#include <pthread_np.h>	/* pthread w/ affinity */
+#include <sys/cpuset.h>	/* cpu_set */
+#endif /* __FreeBSD__ */
+
+#ifdef linux
+#define cpuset_t        cpu_set_t
+#endif
+
+#ifdef __APPLE__
+#define cpuset_t        uint64_t        // XXX
+static inline void CPU_ZERO(cpuset_t *p)
+{
+	*p = 0;
+}
+
+static inline void CPU_SET(uint32_t i, cpuset_t *p)
+{
+	*p |= 1<< (i & 0x3f);
+}
+
+#define pthread_setaffinity_np(a, b, c)	((void)a, 0)
+#define sched_setscheduler(a, b, c)	(1) /* error */
+#define clock_gettime(a,b)	\
+	do {struct timespec t0 = {0,0}; *(b) = t0; } while (0)
+
+#define	_P64	unsigned long
+#endif
+
+#ifndef _P64
+
+/* we use uint64_t widely, but printf gives trouble on different
+ * platforms so we use _P64 as a cast
+ */
+#define	_P64	uint64_t
+#endif /* print stuff */
+
+
+struct _qs;	/* forward */
+/*
+ * descriptor of a configuration entry.
+ * Each handler has a parse function which takes ac/av[] and returns
+ * 0 if successful. Any allocated space is stored into the struct _cfg *
+ * that is passed as argument.
+ * arg and arg_len are included for convenience.
+ */
+struct _cfg {
+    int (*parse)(struct _qs *, struct _cfg *, int ac, char *av[]);	/* 0 ok, 1 on error */
+    int (*run)(struct _qs *, struct _cfg *arg);	/* 0 Ok, 1 on error */
+    // int close(struct _qs *, void *arg);	/* 0 Ok, 1 on error */
+
+    const char *optarg;	/* command line argument. Initial value is the error message */
+    /* placeholders for common values */
+    void *arg;		/* allocated memory if any */
+    int arg_len;	/* size of *arg in case a realloc is needed */
+    uint64_t d[16];	/* static storage for simple cases */
+    double f[4];	/* static storage for simple cases */
+};
+
+
+/*
+ * communication occurs through this data structure, with fields
+ * cache-aligned according to who reads or writes them.
+ *
+
+The queue is an array of memory (buf) of size buflen (does not change).
+
+The producer uses 'tail' as an index in the queue to indicate
+the first empty location (i.e. after the last byte of data),
+the consumer uses head to indicate the next byte to consume.
+
+For best performance we should align buffers and packets
+to multiples of cacheline, but this would explode memory too much.
+Worst case memory explosion is with 65 byte packets.
+Memory usage as shown below:
+
+		qpkt-pad
+	size	32-16	32-32	32-64	64-64
+
+	64	96	96	96	128
+	65	112	128	160	192
+
+
+An empty queue has head == tail, a full queue will have free space
+below a threshold. In our case the queue is large enough and we
+are non blocking so we can simply drop traffic when the queue
+approaches a full state.
+
+To simulate bandwidth limitations efficiently, the producer has a second
+pointer, prod_tail_1, used to check for expired packets. This is done lazily.
+
+ */
+/*
+ * When sizing the buffer, we must assume some value for the bandwidth.
+ * INFINITE_BW is supposed to be faster than what we support + */ +#define INFINITE_BW (200ULL*1000000*1000) +#define MY_CACHELINE (128ULL) +#define MAX_PKT (9200) /* max packet size */ + +#define ALIGN_CACHE __attribute__ ((aligned (MY_CACHELINE))) + +struct _qs { /* shared queue */ + uint64_t t0; /* start of times */ + + uint64_t buflen; /* queue length */ + char *buf; + + /* handlers for various options */ + struct _cfg c_delay; + struct _cfg c_bw; + struct _cfg c_loss; + + /* producer's fields */ + uint64_t tx ALIGN_CACHE; /* tx counter */ + uint64_t prod_tail_1; /* head of queue */ + uint64_t prod_head; /* cached copy */ + uint64_t prod_tail; /* cached copy */ + uint64_t prod_drop; /* drop packet count */ + uint64_t prod_max_gap; /* rx round duration */ + + struct nm_pcap_file *pcap; /* the pcap struct */ + + /* parameters for reading from the netmap port */ + struct nm_desc *src_port; /* netmap descriptor */ + const char * prod_ifname; /* interface name or pcap file */ + struct netmap_ring *rxring; /* current ring being handled */ + uint32_t si; /* ring index */ + int burst; + uint32_t rx_qmax; /* stats on max queued */ + + uint64_t qt_qout; /* queue exit time for last packet */ + /* + * when doing shaping, the software computes and stores here + * the time when the most recently queued packet will exit from + * the queue. + */ + + uint64_t qt_tx; /* delay line exit time for last packet */ + /* + * The software computes the time at which the most recently + * queued packet exits from the queue. + * To avoid reordering, the next packet should exit at least + * at qt_tx + cur_tt + */ + + /* producer's fields controlling the queueing */ + const char * cur_pkt; /* current packet being analysed */ + uint32_t cur_len; /* length of current packet */ + uint32_t cur_caplen; /* captured length of current packet */ + + int cur_drop; /* 1 if current packet should be dropped. */ + /* + * cur_drop can be set as a result of the loss emulation, + * and may need to use the packet size, current time, etc. + */ + + uint64_t cur_tt; /* transmission time (ns) for current packet */ + /* + * The transmission time is how much link time the packet will consume. + * should be set by the function that does the bandwidth emulation, + * but could also be the result of a function that emulates the + * presence of competing traffic, MAC protocols etc. + * cur_tt is 0 for links with infinite bandwidth. + */ + + uint64_t cur_delay; /* delay (ns) for current packet from c_delay.run() */ + /* + * this should be set by the function that computes the extra delay + * applied to the packet. + * The code makes sure that there is no reordering and possibly + * bumps the output time as needed. 
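+	 * For example, if qt_tx is 100 us, the new packet's cur_tt is
+	 * 10 us and its nominal exit time is 105 us, the packet is
+	 * scheduled to leave at 110 us (qt_tx + cur_tt) rather than
+	 * at 105 us, so it cannot overtake the previous packet.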
+	 */
+
+
+	/* consumer's fields */
+	const char *		cons_ifname;
+	uint64_t rx ALIGN_CACHE;	/* rx counter */
+	uint64_t	cons_head;	/* cached copy */
+	uint64_t	cons_tail;	/* cached copy */
+	uint64_t	cons_now;	/* most recent producer timestamp */
+	uint64_t	rx_wait;	/* stats */
+
+	/* shared fields */
+	volatile uint64_t _tail ALIGN_CACHE ;	/* producer writes here */
+	volatile uint64_t _head ALIGN_CACHE ;	/* consumer reads from here */
+};
+
+struct pipe_args {
+	int		wait_link;
+
+	pthread_t	cons_tid;	/* main thread */
+	pthread_t	prod_tid;	/* producer thread */
+
+	/* Affinity: */
+	int	cons_core;	/* core for cons() */
+	int	prod_core;	/* core for prod() */
+
+	struct nm_desc *pa;	/* netmap descriptor */
+	struct nm_desc *pb;
+
+	struct _qs	q;
+};
+
+#define	NS_IN_S	(1000000000ULL)	// nanoseconds
+#define	TIME_UNITS	NS_IN_S
+/* set the thread affinity. */
+static int
+setaffinity(int i)
+{
+	cpuset_t cpumask;
+	struct sched_param p;
+
+	if (i == -1)
+		return 0;
+
+	/* Set thread affinity. */
+	CPU_ZERO(&cpumask);
+	CPU_SET(i, &cpumask);
+
+	if (pthread_setaffinity_np(pthread_self(), sizeof(cpuset_t), &cpumask) != 0) {
+		WWW("Unable to set affinity: %s", strerror(errno));
+	}
+	if (setpriority(PRIO_PROCESS, 0, -10)) { // XXX not meaningful
+		WWW("Unable to set priority: %s", strerror(errno));
+	}
+	bzero(&p, sizeof(p));
+	p.sched_priority = 10; // 99 on linux ?
+	// use SCHED_RR or SCHED_FIFO
+	if (sched_setscheduler(0, SCHED_RR, &p)) {
+		WWW("Unable to set scheduler: %s", strerror(errno));
+	}
+	return 0;
+}
+
+
+/*
+ * set the timestamp from the clock, subtract t0
+ */
+static inline void
+set_tns_now(uint64_t *now, uint64_t t0)
+{
+	struct timespec t;
+
+	clock_gettime(CLOCK_REALTIME, &t); // XXX precise on FreeBSD ?
+	*now = (uint64_t)(t.tv_nsec + NS_IN_S * t.tv_sec);
+	*now -= t0;
+}
+
+
+
+/* compare two timestamps (as a signed difference, so the result
+ * is robust to counter wraparound)
+ */
+static inline int64_t
+ts_cmp(uint64_t a, uint64_t b)
+{
+	return (int64_t)(a - b);
+}
+
+/* return the packet descriptor at offset ofs in the queue */
+static inline struct q_pkt *
+pkt_at(struct _qs *q, uint64_t ofs)
+{
+	return (struct q_pkt *)(q->buf + ofs);
+}
+
+
+/*
+ * we have already checked for room and prepared p->next
+ */
+static inline int
+enq(struct _qs *q)
+{
+	struct q_pkt *p = pkt_at(q, q->prod_tail);
+
+	/* hopefully prefetch has been done ahead */
+	nm_pkt_copy(q->cur_pkt, (char *)(p+1), q->cur_caplen);
+	p->pktlen = q->cur_len;
+	p->pt_qout = q->qt_qout;
+	p->pt_tx = q->qt_tx;
+	p->next = q->prod_tail + pad(q->cur_len) + sizeof(struct q_pkt);
+	ND("enqueue len %d at %d new tail %ld qout %.6f tx %.6f",
+		q->cur_len, (int)q->prod_tail, p->next,
+		1e-9*p->pt_qout, 1e-9*p->pt_tx);
+	q->prod_tail = p->next;
+	q->tx++;
+	return 0;
+}
+
+/*
+ * simple handler for parameters not supplied
+ */
+static int
+null_run_fn(struct _qs *q, struct _cfg *cfg)
+{
+	(void)q;
+	(void)cfg;
+	return 0;
+}
+
+
+
+/*
+ * put packet data into the buffer.
+ * We read from the mmapped pcap file, construct the header, copy
+ * the captured length of the packet and pad with zeroes.
+ */
+static void *
+pcap_prod(void *_pa)
+{
+	struct pipe_args *pa = _pa;
+	struct _qs *q = &pa->q;
+	struct nm_pcap_file *pf = q->pcap;	/* already opened by readpcap */
+	uint32_t loops, i, tot_pkts;
+
+	/* data plus the loop record */
+	uint64_t need;
+	uint64_t t_tx, tt, last_ts; /* last timestamp from trace */
+
+	/*
+	 * For speed we make sure the trace is at least some 10000 packets,
+	 * so we may need to loop the trace more than once (for short traces)
+	 */
+	loops = (1 + 10000 / pf->tot_pkt);
+	tot_pkts = loops * pf->tot_pkt;
+	need = loops * pf->tot_bytes_rounded + sizeof(struct q_pkt);
+	q->buf = calloc(1, need);
+	if (q->buf == NULL) {
+		D("alloc %ld bytes for queue failed, exiting", (_P64)need);
+		goto fail;
+	}
+	q->prod_head = q->prod_tail = 0;
+	q->buflen = need;
+
+	pf->cur = pf->data + sizeof(struct pcap_file_header);
+	pf->err = 0;
+
+	ED("--- start create %lu packets at tail %d",
+		(u_long)tot_pkts, (int)q->prod_tail);
+	last_ts = pf->first_ts; /* beginning of the trace */
+
+	q->qt_qout = 0; /* first packet out of the queue */
+
+	for (loops = 0, i = 0; i < tot_pkts && !do_abort; i++) {
+		const char *next_pkt; /* in the pcap buffer */
+		uint64_t cur_ts;
+
+		/* read values from the pcap buffer */
+		cur_ts = read_next_info(pf, 4) * NS_SCALE +
+			read_next_info(pf, 4) * pf->resolution;
+		q->cur_caplen = read_next_info(pf, 4);
+		q->cur_len = read_next_info(pf, 4);
+		next_pkt = pf->cur + q->cur_caplen;
+
+		/* prepare fields in q for the generator */
+		q->cur_pkt = pf->cur;
+		/* initial estimate of tx time */
+		q->cur_tt = cur_ts - last_ts;
+			// -pf->first_ts + loops * pf->total_tx_time - last_ts;
+
+		if ((i % pf->tot_pkt) == 0)
+			ED("insert %5d len %lu cur_tt %.6f",
+				i, (u_long)q->cur_len, 1e-9*q->cur_tt);
+
+		/* prepare for next iteration */
+		pf->cur = next_pkt;
+		last_ts = cur_ts;
+		if (next_pkt == pf->lim) {	//last pkt
+			pf->cur = pf->data + sizeof(struct pcap_file_header);
+			last_ts = pf->first_ts; /* beginning of the trace */
+			loops++;
+		}
+
+		q->c_loss.run(q, &q->c_loss);
+		if (q->cur_drop)
+			continue;
+		q->c_bw.run(q, &q->c_bw);
+		tt = q->cur_tt;
+		q->qt_qout += tt;
+#if 0
+		if (drop_after(q))
+			continue;
+#endif
+		q->c_delay.run(q, &q->c_delay); /* compute delay */
+		t_tx = q->qt_qout + q->cur_delay;
+		ND(5, "tt %ld qout %ld tx %ld qt_tx %ld", tt, q->qt_qout, t_tx, q->qt_tx);
+		/* ensure no reordering and spacing by transmission time */
+		q->qt_tx = (t_tx >= q->qt_tx + tt) ? t_tx : q->qt_tx + tt;
+		enq(q);	/* also increments q->tx */
+
+		ND("ins %d q->prod_tail = %lu", (int)i, (unsigned long)q->prod_tail);
+	}
+	/* loop marker ? */
+	ED("done q->prod_tail:%d",(int)q->prod_tail);
+	q->_tail = q->prod_tail; /* publish */
+
+	return NULL;
+fail:
+	if (q->buf != NULL) {
+		free(q->buf);
+	}
+	nm_close(pa->pb);
+	return (NULL);
+}
+
+
+/*
+ * the consumer reads from the queue using head,
+ * advances it every now and then.
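+ * Packets whose transmit time (pt_tx) is still in the future make
+ * the consumer sleep for a few tens of microseconds, so the output
+ * precision is of that order of magnitude.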
+ */
+static void *
+cons(void *_pa)
+{
+	struct pipe_args *pa = _pa;
+	struct _qs *q = &pa->q;
+	int pending = 0;
+	uint64_t last_ts = 0;
+
+	/* read the start of times in q->t0 */
+	set_tns_now(&q->t0, 0);
+	/* set the time (cons_now) to clock - q->t0 */
+	set_tns_now(&q->cons_now, q->t0);
+	q->cons_head = q->_head;
+	q->cons_tail = q->_tail;
+	while (!do_abort) { /* consumer, infinite */
+		struct q_pkt *p = pkt_at(q, q->cons_head);
+
+		__builtin_prefetch (q->buf + p->next);
+
+		if (q->cons_head == q->cons_tail) {	//reset record
+			ND("Transmission restarted");
+			/*
+			 * add to q->t0 the time for the last packet
+			 */
+			q->t0 += last_ts;
+			q->cons_head = 0;	//restart from beginning of the queue
+			continue;
+		}
+		last_ts = p->pt_tx;
+		if (ts_cmp(p->pt_tx, q->cons_now) > 0) {
+			// packet not ready
+			q->rx_wait++;
+			/* the ioctl should be conditional */
+			ioctl(pa->pb->fd, NIOCTXSYNC, 0); // XXX just in case
+			pending = 0;
+			usleep(20);
+			set_tns_now(&q->cons_now, q->t0);
+			continue;
+		}
+		/* XXX copy is inefficient but simple */
+		pending++;
+		if (nm_inject(pa->pb, (char *)(p + 1), p->pktlen) == 0 ||
+		    pending > q->burst) {
+			RD(1, "inject failed len %d now %ld tx %ld h %ld t %ld next %ld",
+				(int)p->pktlen, (u_long)q->cons_now, (u_long)p->pt_tx,
+				(u_long)q->_head, (u_long)q->_tail, (u_long)p->next);
+			ioctl(pa->pb->fd, NIOCTXSYNC, 0);
+			pending = 0;
+			continue;
+		}
+		q->cons_head = p->next;
+		/* drain packets from the queue */
+		q->rx++;
+	}
+	D("exiting on abort");
+	return NULL;
+}
+
+/*
+ * In case of pcap file as input, the program acts in two phases.
+ * It first fills the queue and then starts the cons()
+ */
+static void *
+nmreplay_main(void *_a)
+{
+	struct pipe_args *a = _a;
+	struct _qs *q = &a->q;
+	const char *cap_fname = q->prod_ifname;
+
+	setaffinity(a->cons_core);
+	set_tns_now(&q->t0, 0); /* starting reference */
+	if (cap_fname == NULL) {
+		goto fail;
+	}
+	q->pcap = readpcap(cap_fname);
+	if (q->pcap == NULL) {
+		EEE("unable to read file %s", cap_fname);
+		goto fail;
+	}
+	pcap_prod((void*)a);
+	destroy_pcap(q->pcap);
+	q->pcap = NULL;
+	a->pb = nm_open(q->cons_ifname, NULL, 0, NULL);
+	if (a->pb == NULL) {
+		EEE("cannot open netmap on %s", q->cons_ifname);
+		do_abort = 1; // XXX any better way ?
+		return NULL;
+	}
+	/* continue as cons() */
+	WWW("prepare to send packets");
+	usleep(1000);
+	cons((void*)a);
+	EEE("exiting on abort");
+fail:
+	if (q->pcap != NULL) {
+		destroy_pcap(q->pcap);
+	}
+	do_abort = 1;
+	return NULL;
+}
+
+
+static void
+sigint_h(int sig)
+{
+	(void)sig;	/* UNUSED */
+	do_abort = 1;
+	signal(SIGINT, SIG_DFL);
+}
+
+
+
+static void
+usage(void)
+{
+	fprintf(stderr,
+	    "usage: nmreplay [-v] [-D delay] [-B {[constant,]bps|ether,bps|real,speedup}]\n"
+	    "\t[-L loss] [-b burst] [-C cores] [-w wait-link]\n"
+	    "\t-f pcap-file -i interface\n");
+	exit(1);
+}
+
+
+/*---- configuration handling ---- */
+/*
+ * support routine: split argument, returns ac and *av.
+ * av contains two extra entries, a NULL and a pointer
+ * to the entire string.
+ */
+static char **
+split_arg(const char *src, int *_ac)
+{
+	char *my = NULL, **av = NULL, *seps = " \t\r\n,";
+	int l, i, ac; /* number of entries */
+
+	if (!src)
+		return NULL;
+	l = strlen(src);
+	/* in the first pass we count fields, in the second pass
+	 * we allocate the av[] array and a copy of the string
+	 * and fill av[]. av[ac] = NULL, av[ac+1]
+	 */
+	for (;;) {
+		i = ac = 0;
+		ND("start pass %d: <%s>", av ? 1 : 0, my);
+		while (i < l) {
+			/* trim leading separators */
+			while (i < l && strchr(seps, src[i]))
+				i++;
+			if (i >= l)
+				break;
+			ND(" pass %d arg %d: <%s>", av ? 1 : 0, ac, src+i);
+			if (av) /* in the second pass, set the result */
+				av[ac] = my+i;
+			ac++;
+			/* skip string */
+			while (i < l && !strchr(seps, src[i]))
+				i++;
+			if (av)
+				my[i] = '\0'; /* terminate the field */
+		}
+		if (av) /* end of the second pass */
+			break;
+		/* end of the first pass: allocate the av[] array
+		 * followed by a private copy of the string
+		 */
+		av = calloc(1, (ac + 2) * sizeof(char *) + l + 1);
+		my = (char *)&av[ac + 2];
+		strcpy(my, src);
+	}
+	for (i = 0; i < ac; i++) {
+		ND("%d: <%s>", i, av[i]);
+	}
+	av[i++] = NULL;
+	av[i++] = my;
+	*_ac = ac;
+	return av;
+}
+
+
+/*
+ * apply a command against a set of functions,
+ * install a handler in *dst
+ */
+static int
+cmd_apply(const struct _cfg *a, const char *arg, struct _qs *q, struct _cfg *dst)
+{
+	int ac = 0;
+	char **av;
+	int i;
+
+	if (arg == NULL || *arg == '\0')
+		return 1; /* no argument may be ok */
+	if (a == NULL || dst == NULL) {
+		ED("program error - invalid arguments");
+		exit(1);
+	}
+	av = split_arg(arg, &ac);
+	if (av == NULL)
+		return 1; /* error */
+	for (i = 0; a[i].parse; i++) {
+		struct _cfg x = a[i];
+		const char *errmsg = x.optarg;
+		int ret;
+
+		x.arg = NULL;
+		x.arg_len = 0;
+		bzero(&x.d, sizeof(x.d));
+		ND("apply %s to %s", av[0], errmsg);
+		ret = x.parse(q, &x, ac, av);
+		if (ret == 2) /* not recognised */
+			continue;
+		if (ret == 1) {
+			ED("invalid arguments: need '%s' have '%s'",
+				errmsg, arg);
+			break;
+		}
+		x.optarg = arg;
+		*dst = x;
+		free(av);
+		return 0;
+	}
+	ED("arguments %s not recognised", arg);
+	free(av);
+	return 1;
+}
+
+static struct _cfg delay_cfg[];
+static struct _cfg bw_cfg[];
+static struct _cfg loss_cfg[];
+
+static uint64_t parse_bw(const char *arg);
+
+/*
+ * nmreplay [options]
+ * the code is structured to accept separate sets of arguments
+ * for multiple queues, although the current program uses just one.
+ */
+
+static void
+add_to(const char ** v, int l, const char *arg, const char *msg)
+{
+	for (; l > 0 && *v != NULL ; l--, v++);
+	if (l == 0) {
+		ED("%s %s", msg, arg);
+		exit(1);
+	}
+	*v = arg;
+}
+
+int
+main(int argc, char **argv)
+{
+	int ch, i, err=0;
+
+#define	N_OPTS	1
+	struct pipe_args bp[N_OPTS];
+	const char *d[N_OPTS], *b[N_OPTS], *l[N_OPTS], *q[N_OPTS], *ifname[N_OPTS], *m[N_OPTS];
+	const char *pcap_file[N_OPTS];
+	int cores[4] = { 2, 8, 4, 10 }; /* default values */
+
+	bzero(&bp, sizeof(bp));	/* all data initially go here */
+	bzero(d, sizeof(d));
+	bzero(b, sizeof(b));
+	bzero(l, sizeof(l));
+	bzero(q, sizeof(q));
+	bzero(m, sizeof(m));
+	bzero(ifname, sizeof(ifname));
+	bzero(pcap_file, sizeof(pcap_file));
+
+
+	/* set default values */
+	for (i = 0; i < N_OPTS; i++) {
+		struct _qs *q = &bp[i].q;
+
+		q->burst = 128;
+		q->c_delay.optarg = "0";
+		q->c_delay.run = null_run_fn;
+		q->c_loss.optarg = "0";
+		q->c_loss.run = null_run_fn;
+		q->c_bw.optarg = "0";
+		q->c_bw.run = null_run_fn;
+	}
+
+	// Options:
+	// B	bandwidth in bps
+	// D	delay in seconds
+	// L	loss probability
+	// f	pcap file
+	// i	interface name
+	// w	wait link
+	// b	batch size
+	// v	verbose
+	// C	cpu placement
+
+	while ( (ch = getopt(argc, argv, "B:C:D:L:b:f:i:vw:")) != -1) {
+		switch (ch) {
+		default:
+			D("bad option %c %s", ch, optarg);
+			usage();
+			break;
+
+		case 'C': /* CPU placement, up to 4 arguments */
+			{
+				int ac = 0;
+				char **av = split_arg(optarg, &ac);
+				if (ac == 1) { /* sequential after the first */
+					cores[0] = atoi(av[0]);
+					cores[1] = cores[0] + 1;
+					cores[2] = cores[1] + 1;
+					cores[3] = cores[2] + 1;
+				} else if (ac == 2) { /* two sequential pairs */
+					cores[0] = atoi(av[0]);
+					cores[1] = cores[0] + 1;
+					cores[2] = atoi(av[1]);
+					cores[3] = cores[2] + 1;
+				} else if (ac == 4) { /* four values */
+					cores[0] = atoi(av[0]);
+					cores[1] = atoi(av[1]);
+					cores[2] = atoi(av[2]);
+					cores[3] = atoi(av[3]);
+				} else {
+					ED(" -C accepts 1, 2 or 4 comma separated arguments");
+					usage();
+				}
+				if (av)
+					free(av);
+			}
+			break;
+
+		case 'B': /* bandwidth in bps */
+			add_to(b, N_OPTS, optarg, "-B too many times");
+
break; + + case 'D': /* delay in seconds (float) */ + add_to(d, N_OPTS, optarg, "-D too many times"); + break; + + case 'L': /* loss probability */ + add_to(l, N_OPTS, optarg, "-L too many times"); + break; + + case 'b': /* burst */ + bp[0].q.burst = atoi(optarg); + break; + + case 'f': /* pcap_file */ + add_to(pcap_file, N_OPTS, optarg, "-f too many times"); + break; + case 'i': /* interface */ + add_to(ifname, N_OPTS, optarg, "-i too many times"); + break; + case 'v': + verbose++; + break; + case 'w': + bp[0].wait_link = atoi(optarg); + break; + } + + } + + argc -= optind; + argv += optind; + + /* + * consistency checks for common arguments + * if pcap file has been provided we need just one interface, two otherwise + */ + if (!pcap_file[0]) { + ED("missing pcap file"); + usage(); + } + if (!ifname[0]) { + ED("missing interface"); + usage(); + } + if (bp[0].q.burst < 1 || bp[0].q.burst > 8192) { + WWW("invalid burst %d, set to 1024", bp[0].q.burst); + bp[0].q.burst = 1024; // XXX 128 is probably better + } + if (bp[0].wait_link > 100) { + ED("invalid wait_link %d, set to 4", bp[0].wait_link); + bp[0].wait_link = 4; + } + + bp[0].q.prod_ifname = pcap_file[0]; + bp[0].q.cons_ifname = ifname[0]; + + /* assign cores. prod and cons work better if on the same HT */ + bp[0].cons_core = cores[0]; + bp[0].prod_core = cores[1]; + ED("running on cores %d %d %d %d", cores[0], cores[1], cores[2], cores[3]); + + /* apply commands */ + for (i = 0; i < N_OPTS; i++) { /* once per queue */ + struct _qs *q = &bp[i].q; + err += cmd_apply(delay_cfg, d[i], q, &q->c_delay); + err += cmd_apply(bw_cfg, b[i], q, &q->c_bw); + err += cmd_apply(loss_cfg, l[i], q, &q->c_loss); + } + + pthread_create(&bp[0].cons_tid, NULL, nmreplay_main, (void*)&bp[0]); + signal(SIGINT, sigint_h); + sleep(1); + while (!do_abort) { + struct _qs olda = bp[0].q; + struct _qs *q0 = &bp[0].q; + + sleep(1); + ED("%ld -> %ld maxq %d round %ld", + (_P64)(q0->rx - olda.rx), (_P64)(q0->tx - olda.tx), + q0->rx_qmax, (_P64)q0->prod_max_gap + ); + ED("plr nominal %le actual %le", + (double)(q0->c_loss.d[0])/(1<<24), + q0->c_loss.d[1] == 0 ? 0 : + (double)(q0->c_loss.d[2])/q0->c_loss.d[1]); + bp[0].q.rx_qmax = (bp[0].q.rx_qmax * 7)/8; // ewma + bp[0].q.prod_max_gap = (bp[0].q.prod_max_gap * 7)/8; // ewma + } + D("exiting on abort"); + sleep(1); + + return (0); +} + +/* conversion factor for numbers. + * Each entry has a set of characters and conversion factor, + * the first entry should have an empty string and default factor, + * the final entry has s = NULL. 
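+ * For example, with the tables defined below, parse_time("20m")
+ * returns 20000000 (20 ms expressed in nanoseconds) and
+ * parse_bw("2.5G") returns 2500000000 (bits per second).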
+ */
+struct _sm {	/* string and multiplier */
+	char *s;
+	double m;
+};
+
+/*
+ * parse a generic value
+ */
+static double
+parse_gen(const char *arg, const struct _sm *conv, int *err)
+{
+	double d;
+	char *ep;
+	int dummy;
+
+	if (err == NULL)
+		err = &dummy;
+	*err = 0;
+	if (arg == NULL)
+		goto error;
+	d = strtod(arg, &ep);
+	if (ep == arg) { /* no value */
+		ED("bad argument %s", arg);
+		goto error;
+	}
+	/* special case, no conversion */
+	if (conv == NULL && *ep == '\0')
+		goto done;
+	ND("checking %s [%s]", arg, ep);
+	for (;conv->s; conv++) {
+		if (strchr(conv->s, *ep))
+			goto done;
+	}
+error:
+	*err = 1;	/* unrecognised */
+	return 0;
+
+done:
+	if (conv) {
+		ND("scale is %s %lf", conv->s, conv->m);
+		d *= conv->m; /* apply default conversion */
+	}
+	ND("returning %lf", d);
+	return d;
+}
+
+#define U_PARSE_ERR ~(0ULL)
+
+/* returns a value in nanoseconds */
+static uint64_t
+parse_time(const char *arg)
+{
+	struct _sm a[] = {
+		{"", 1000000000 /* seconds */},
+		{"n", 1 /* nanoseconds */}, {"u", 1000 /* microseconds */},
+		{"m", 1000000 /* milliseconds */}, {"s", 1000000000 /* seconds */},
+		{NULL, 0 /* seconds */}
+	};
+	int err;
+	uint64_t ret = (uint64_t)parse_gen(arg, a, &err);
+	return err ? U_PARSE_ERR : ret;
+}
+
+
+/*
+ * parse a bandwidth, returns value in bps or U_PARSE_ERR if error.
+ */
+static uint64_t
+parse_bw(const char *arg)
+{
+	struct _sm a[] = {
+		{"", 1}, {"kK", 1000}, {"mM", 1000000}, {"gG", 1000000000}, {NULL, 0}
+	};
+	int err;
+	uint64_t ret = (uint64_t)parse_gen(arg, a, &err);
+	return err ? U_PARSE_ERR : ret;
+}
+
+
+/*
+ * For some functions we need random bits.
+ * This is a wrapper to whatever function you want that returns
+ * 24 useful random bits.
+ */
+
+#include <math.h> /* log, exp etc. */
+static inline uint64_t
+my_random24(void)	/* 24 useful bits */
+{
+	return random() & ((1<<24) - 1);
+}
+
+
+/*-------------- user-configuration -----------------*/
+
+#if 0 /* start of comment block */
+
+Here we place the functions to implement the various features
+of the system. For each feature one should define a struct _cfg
+(see at the beginning for definition) that refers a *_parse() function
+to extract values from the command line, and a *_run() function
+that is invoked on each packet to implement the desired function.
+
+Examples of the two functions are below. In general:
+
+- the *_parse() function takes argc/argv[], matches the function
+  name in argv[0], extracts the operating parameters, allocates memory
+  if needed, and stores them in the struct _cfg.
+  Return value is 2 if argv[0] is not recognised, 1 if there is an
+  error in the arguments, 0 if all ok.
+
+  On the command line, argv[] is a single, comma separated argument
+  that follows the specific option, e.g. -D constant,20ms
+
+  struct _cfg has some preallocated space (e.g. an array of uint64_t)
+  so simple functions can use that without having to allocate memory.
+
+- the *_run() function takes struct _q *q and struct _cfg *cfg as arguments.
+  *q contains all the information that may possibly be needed, including
+  those on the packet currently under processing.
+  The basic values are the following:
+
+	char *	 cur_pkt	points to the current packet (linear buffer)
+	uint32_t cur_len;	length of the current packet
+		the functions are not supposed to modify these values
+
+	int cur_drop;		true if current packet must be dropped.
+ Must be set to non-zero by the loss emulation function + + uint64_t cur_delay; delay in nanoseconds for the current packet + Must be set by the delay emulation function + + More sophisticated functions may need to access other fields in *q, + see the structure description for that. + +When implementing a new function for a feature (e.g. for delay, +bandwidth, loss...) the struct _cfg should be added to the array +that contains all possible options. + + --- Specific notes --- + +DELAY emulation -D option_arguments + + If the option is not supplied, the system applies 0 extra delay + + The resolution for times is 1ns, the precision is load dependent and + generally in the order of 20-50us. + Times are in nanoseconds, can be followed by a character specifying + a different unit e.g. + + n nanoseconds + u microseconds + m milliseconds + s seconds + + Currently implemented options: + + constant,t constant delay equal to t + + uniform,tmin,tmax uniform delay between tmin and tmax + + exp,tavg,tmin exponential distribution with average tavg + and minimum tmin (corresponds to an exponential + distribution with argument 1/(tavg-tmin) ) + + +LOSS emulation -L option_arguments + + Loss is expressed as packet or bit error rate, which is an absolute + number between 0 and 1 (typically small). + + Currently implemented options + + plr,p uniform packet loss rate p, independent + of packet size + + burst,p,lmin,lmax burst loss with burst probability p and + burst length uniformly distributed between + lmin and lmax + + ber,p uniformly distributed bit error rate p, + so actual loss prob. depends on size. + +BANDWIDTH emulation -B option_arguments + + Bandwidths are expressed in bits per second, can be followed by a + character specifying a different unit e.g. + + b/B bits per second + k/K kbits/s (10^3 bits/s) + m/M mbits/s (10^6 bits/s) + g/G gbits/s (10^9 bits/s) + + Currently implemented options + + const,b constant bw, excluding mac framing + ether,b constant bw, including ethernet framing + (20 bytes framing + 4 bytes crc) + real,[scale] use real time, optionally with a scaling factor + +#endif /* end of comment block */ + +/* + * Configuration options for delay + */ + +/* constant delay, also accepts just a number */ +static int +const_delay_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[]) +{ + uint64_t delay; + + (void)q; + if (strncmp(av[0], "const", 5) != 0 && ac > 1) + return 2; /* unrecognised */ + if (ac > 2) + return 1; /* error */ + delay = parse_time(av[ac - 1]); + if (delay == U_PARSE_ERR) + return 1; /* error */ + dst->d[0] = delay; + return 0; /* success */ +} + +/* runtime function, store the delay into q->cur_delay */ +static int +const_delay_run(struct _qs *q, struct _cfg *arg) +{ + q->cur_delay = arg->d[0]; /* the delay */ + return 0; +} + +static int +uniform_delay_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[]) +{ + uint64_t dmin, dmax; + + (void)q; + if (strcmp(av[0], "uniform") != 0) + return 2; /* not recognised */ + if (ac != 3) + return 1; /* error */ + dmin = parse_time(av[1]); + dmax = parse_time(av[2]); + if (dmin == U_PARSE_ERR || dmax == U_PARSE_ERR || dmin > dmax) + return 1; + D("dmin %ld dmax %ld", (_P64)dmin, (_P64)dmax); + dst->d[0] = dmin; + dst->d[1] = dmax; + dst->d[2] = dmax - dmin; + return 0; +} + +static int +uniform_delay_run(struct _qs *q, struct _cfg *arg) +{ + uint64_t x = my_random24(); + q->cur_delay = arg->d[0] + ((arg->d[2] * x) >> 24); +#if 0 /* COMPUTE_STATS */ +#endif /* COMPUTE_STATS */ + return 0; +} + +/* + * exponential 
delay: Prob(delay > x) = exp(-x/d_av)
+ * gives a delay between 0 and infinity with average d_av
+ * The cumulative distribution function is 1 - exp(-x/d_av)
+ *
+ * The inverse method generates a uniform random number p in 0..1
+ * and computes delay = (d_av - d_min) * -ln(1-p) + d_min
+ *
+ * To speed up behaviour at runtime we tabulate the values
+ */
+
+static int
+exp_delay_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[])
+{
+#define	PTS_D_EXP	512
+	uint64_t i, d_av, d_min, *t;	/*table of values */
+
+	(void)q;
+	if (strcmp(av[0], "exp") != 0)
+		return 2; /* not recognised */
+	if (ac != 3)
+		return 1; /* error */
+	d_av = parse_time(av[1]);
+	d_min = parse_time(av[2]);
+	if (d_av == U_PARSE_ERR || d_min == U_PARSE_ERR || d_av < d_min)
+		return 1; /* error */
+	d_av -= d_min;
+	dst->arg_len = PTS_D_EXP * sizeof(uint64_t);
+	dst->arg = calloc(1, dst->arg_len);
+	if (dst->arg == NULL)
+		return 1; /* no memory */
+	t = (uint64_t *)dst->arg;
+	/* tabulate -ln(1-n)*delay for n in 0..1 (natural log, so that
+	 * the average of the tabulated values is d_av as documented)
+	 */
+	for (i = 0; i < PTS_D_EXP; i++) {
+		double d = -log ((double)(PTS_D_EXP - i) / PTS_D_EXP) * d_av + d_min;
+		t[i] = (uint64_t)d;
+		ND(5, "%ld: %le", i, d);
+	}
+	return 0;
+}
+
+static int
+exp_delay_run(struct _qs *q, struct _cfg *arg)
+{
+	uint64_t *t = (uint64_t *)arg->arg;
+	q->cur_delay = t[my_random24() & (PTS_D_EXP - 1)];
+	RD(5, "delay %lu", (_P64)q->cur_delay);
+	return 0;
+}
+
+
+/* unused arguments in configuration */
+#define	_CFG_END	NULL, 0, {0}, {0}
+
+static struct _cfg delay_cfg[] = {
+	{ const_delay_parse, const_delay_run,
+		"constant,delay", _CFG_END },
+	{ uniform_delay_parse, uniform_delay_run,
+		"uniform,dmin,dmax # dmin <= dmax", _CFG_END },
+	{ exp_delay_parse, exp_delay_run,
+		"exp,davg,dmin # dmin <= davg", _CFG_END },
+	{ NULL, NULL, NULL, _CFG_END }
+};
+
+/* standard bandwidth, also accepts just a number */
+static int
+const_bw_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[])
+{
+	uint64_t bw;
+
+	(void)q;
+	if (strncmp(av[0], "const", 5) != 0 && ac > 1)
+		return 2; /* unrecognised */
+	if (ac > 2)
+		return 1; /* error */
+	bw = parse_bw(av[ac - 1]);
+	if (bw == U_PARSE_ERR) {
+		return (ac == 2) ? 1 /* error */ : 2 /* unrecognised */;
+	}
+	dst->d[0] = bw;
+	return 0;	/* success */
+}
+
+
+/* runtime function, compute the transmission time into q->cur_tt */
+static int
+const_bw_run(struct _qs *q, struct _cfg *arg)
+{
+	uint64_t bps = arg->d[0];
+	q->cur_tt = bps ? 8ULL* TIME_UNITS * q->cur_len / bps : 0 ;
+	return 0;
+}
+
+/* ethernet bandwidth, add 192 bits (24 bytes) per packet */
+static int
+ether_bw_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[])
+{
+	uint64_t bw;
+
+	(void)q;
+	if (strcmp(av[0], "ether") != 0)
+		return 2; /* unrecognised */
+	if (ac != 2)
+		return 1; /* error */
+	bw = parse_bw(av[ac - 1]);
+	if (bw == U_PARSE_ERR)
+		return 1; /* error */
+	dst->d[0] = bw;
+	return 0;	/* success */
+}
+
+
+/* runtime function, add 20 bytes (framing) + 4 bytes (crc) */
+static int
+ether_bw_run(struct _qs *q, struct _cfg *arg)
+{
+	uint64_t bps = arg->d[0];
+	q->cur_tt = bps ?
8ULL * TIME_UNITS * (q->cur_len + 24) / bps : 0 ; + return 0; +} + +/* real bandwidth, plus scaling factor */ +static int +real_bw_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[]) +{ + double scale; + + (void)q; + if (strcmp(av[0], "real") != 0) + return 2; /* unrecognised */ + if (ac > 2) { /* second argument is optional */ + return 1; /* error */ + } else if (ac == 1) { + scale = 1; + } else { + int err = 0; + scale = parse_gen(av[ac-1], NULL, &err); + if (err || scale <= 0 || scale > 1000) + return 1; + } + ED("real -> scale is %.6f", scale); + dst->f[0] = scale; + return 0; /* success */ +} + +static int +real_bw_run(struct _qs *q, struct _cfg *arg) +{ + q->cur_tt /= arg->f[0]; + return 0; +} + +static struct _cfg bw_cfg[] = { + { const_bw_parse, const_bw_run, + "constant,bps", _CFG_END }, + { ether_bw_parse, ether_bw_run, + "ether,bps", _CFG_END }, + { real_bw_parse, real_bw_run, + "real,scale", _CFG_END }, + { NULL, NULL, NULL, _CFG_END } +}; + +/* + * loss patterns + */ +static int +const_plr_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[]) +{ + double plr; + int err; + + (void)q; + if (strcmp(av[0], "plr") != 0 && ac > 1) + return 2; /* unrecognised */ + if (ac > 2) + return 1; /* error */ + // XXX to be completed + plr = parse_gen(av[ac-1], NULL, &err); + if (err || plr < 0 || plr > 1) + return 1; + dst->d[0] = plr * (1<<24); /* scale is 16m */ + if (plr != 0 && dst->d[0] == 0) + ED("WWW warning, rounding %le down to 0", plr); + return 0; /* success */ +} + +static int +const_plr_run(struct _qs *q, struct _cfg *arg) +{ + (void)arg; + uint64_t r = my_random24(); + q->cur_drop = r < arg->d[0]; +#if 1 /* keep stats */ + arg->d[1]++; + arg->d[2] += q->cur_drop; +#endif + return 0; +} + + +/* + * For BER the loss is 1- (1-ber)**bit_len + * The linear approximation is only good for small values, so we + * tabulate (1-ber)**len for various sizes in bytes + */ +static int +const_ber_parse(struct _qs *q, struct _cfg *dst, int ac, char *av[]) +{ + double ber, ber8, cur; + int i, err; + uint32_t *plr; + const uint32_t mask = (1<<24) - 1; + + (void)q; + if (strcmp(av[0], "ber") != 0) + return 2; /* unrecognised */ + if (ac != 2) + return 1; /* error */ + ber = parse_gen(av[ac-1], NULL, &err); + if (err || ber < 0 || ber > 1) + return 1; + dst->arg_len = MAX_PKT * sizeof(uint32_t); + plr = calloc(1, dst->arg_len); + if (plr == NULL) + return 1; /* no memory */ + dst->arg = plr; + ber8 = 1 - ber; + ber8 *= ber8; /* **2 */ + ber8 *= ber8; /* **4 */ + ber8 *= ber8; /* **8 */ + cur = 1; + for (i=0; i < MAX_PKT; i++, cur *= ber8) { + plr[i] = (mask + 1)*(1 - cur); + if (plr[i] > mask) + plr[i] = mask; +#if 0 + if (i>= 60) // && plr[i] < mask/2) + RD(50,"%4d: %le %ld", i, 1.0 - cur, (_P64)plr[i]); +#endif + } + dst->d[0] = ber * (mask + 1); + return 0; /* success */ +} + +static int +const_ber_run(struct _qs *q, struct _cfg *arg) +{ + int l = q->cur_len; + uint64_t r = my_random24(); + uint32_t *plr = arg->arg; + + if (l >= MAX_PKT) { + RD(5, "pkt len %d too large, trim to %d", l, MAX_PKT-1); + l = MAX_PKT-1; + } + q->cur_drop = r < plr[l]; +#if 1 /* keep stats */ + arg->d[1] += l * 8; + arg->d[2] += q->cur_drop; +#endif + return 0; +} + +static struct _cfg loss_cfg[] = { + { const_plr_parse, const_plr_run, + "plr,prob # 0 <= prob <= 1", _CFG_END }, + { const_ber_parse, const_ber_run, + "ber,prob # 0 <= prob <= 1", _CFG_END }, + { NULL, NULL, NULL, _CFG_END } +}; Property changes on: head/tools/tools/netmap/nmreplay.c 
___________________________________________________________________
Added: svn:eol-style
## -0,0 +1 ##
+native
\ No newline at end of property
Added: svn:keywords
## -0,0 +1 ##
+FreeBSD=%H
\ No newline at end of property
Added: svn:mime-type
## -0,0 +1 ##
+text/plain
\ No newline at end of property
Index: head/tools/tools/netmap/pkt-gen.c
===================================================================
--- head/tools/tools/netmap/pkt-gen.c	(revision 307393)
+++ head/tools/tools/netmap/pkt-gen.c	(revision 307394)
@@ -1,2030 +1,2624 @@
 /*
  * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved.
- * Copyright (C) 2013-2014 Universita` di Pisa. All rights reserved.
+ * Copyright (C) 2013-2015 Universita` di Pisa. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  *   1. Redistributions of source code must retain the above copyright
  *      notice, this list of conditions and the following disclaimer.
  *   2. Redistributions in binary form must reproduce the above copyright
  *      notice, this list of conditions and the following disclaimer in the
  *      documentation and/or other materials provided with the distribution.
  *
  * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
  * SUCH DAMAGE.
  */
 
 /*
  * $FreeBSD$
  * $Id: pkt-gen.c 12346 2013-06-12 17:36:25Z luigi $
  *
  * Example program to show how to build a multithreaded packet
  * source/sink using the netmap device.
  *
  * In this example we create a programmable number of threads
  * to take care of all the queues of the interface used to
  * send or receive traffic.
  *
  */
 
-// #define TRASH_VHOST_HDR
-
 #define _GNU_SOURCE	/* for CPU_SET() */
 #include <sched.h>	/* CPU_SET */
 #define NETMAP_WITH_LIBS
 #include <net/netmap_user.h>
 
 #include <ctype.h>	// isprint()
 #include <unistd.h>	// sysconf()
 #include <sys/poll.h>
 #include <arpa/inet.h>	/* ntohs */
+#ifndef _WIN32
 #include <sys/sysctl.h>	/* sysctl */
+#endif
 #include <ifaddrs.h>	/* getifaddrs */
 #include <net/ethernet.h>
 #include <netinet/in.h>
 #include <netinet/ip.h>
 #include <netinet/udp.h>
+#include <assert.h>
+#include <math.h>
 #include <pthread.h>
 #ifndef NO_PCAP
 #include <pcap/pcap.h>
 #endif
+#include "ctrs.h"
+
+#ifdef _WIN32
+#define cpuset_t        DWORD_PTR   //uint64_t
+static inline void CPU_ZERO(cpuset_t *p)
+{
+	*p = 0;
+}
+
+static inline void CPU_SET(uint32_t i, cpuset_t *p)
+{
+	*p |= 1<< (i & 0x3f);
+}
+
+#define pthread_setaffinity_np(a, b, c) !SetThreadAffinityMask(a, *c)    //((void)a, 0)
+#define TAP_CLONEDEV	"/dev/tap"
+#define AF_LINK	18	//defined in winsocks.h
+#define CLOCK_REALTIME_PRECISE CLOCK_REALTIME
+#include <windows.h>	/* SetThreadAffinityMask */
+
+/*
+ * Convert an ASCII representation of an ethernet address to
+ * binary form.
+ */
+struct ether_addr *
+ether_aton(const char *a)
+{
+	int i;
+	static struct ether_addr o;
+	unsigned int o0, o1, o2, o3, o4, o5;
+
+	i = sscanf(a, "%x:%x:%x:%x:%x:%x", &o0, &o1, &o2, &o3, &o4, &o5);
+
+	if (i != 6)
+		return (NULL);
+
+	o.octet[0]=o0;
+	o.octet[1]=o1;
+	o.octet[2]=o2;
+	o.octet[3]=o3;
+	o.octet[4]=o4;
+	o.octet[5]=o5;
+
+	return ((struct ether_addr *)&o);
+}
+
+/*
+ * Convert a binary representation of an ethernet address to
+ * an ASCII string.
+ */
+char *
+ether_ntoa(const struct ether_addr *n)
+{
+	int i;
+	static char a[18];
+
+	i = sprintf(a, "%02x:%02x:%02x:%02x:%02x:%02x",
+		n->octet[0], n->octet[1], n->octet[2],
+		n->octet[3], n->octet[4], n->octet[5]);
+	return (i < 17 ? NULL : (char *)&a);
+}
+#endif /* _WIN32 */
+
 #ifdef linux
 #define cpuset_t        cpu_set_t
 #define ifr_flagshigh  ifr_flags        /* only the low 16 bits here */
 #define IFF_PPROMISC   IFF_PROMISC      /* IFF_PPROMISC does not exist */
 #include <linux/ethtool.h>
 #include <linux/if_packet.h>
 #define CLOCK_REALTIME_PRECISE CLOCK_REALTIME
 #include <netinet/ether.h>	/* ether_aton */
 #include <linux/if_packet.h>	/* sockaddr_ll */
 #endif  /* linux */
 
 #ifdef __FreeBSD__
 #include <sys/endian.h>	/* le64toh */
 #include <machine/param.h>
 #include <pthread_np.h>	/* pthread w/ affinity */
 #include <sys/cpuset.h>	/* cpu_set */
 #include <net/if_dl.h>	/* LLADDR */
 #endif	/* __FreeBSD__ */
 
 #ifdef __APPLE__
 #define cpuset_t        uint64_t        // XXX
 static inline void CPU_ZERO(cpuset_t *p)
 {
 	*p = 0;
 }
 
 static inline void CPU_SET(uint32_t i, cpuset_t *p)
 {
 	*p |= 1<< (i & 0x3f);
 }
 
 #define pthread_setaffinity_np(a, b, c) ((void)a, 0)
 
 #define ifr_flagshigh  ifr_flags        // XXX
 #define IFF_PPROMISC   IFF_PROMISC
 #include <net/if_dl.h>	/* LLADDR */
 #define clock_gettime(a,b)      \
 	do {struct timespec t0 = {0,0}; *(b) = t0; } while (0)
 #endif	/* __APPLE__ */
 
 const char *default_payload="netmap pkt-gen DIRECT payload\n"
 	"http://info.iet.unipi.it/~luigi/netmap/ ";
 
 const char *indirect_payload="netmap pkt-gen indirect payload\n"
 	"http://info.iet.unipi.it/~luigi/netmap/ ";
 
 int verbose = 0;
 
 #define SKIP_PAYLOAD 1 /* do not check payload. XXX unused */
 
 #define VIRT_HDR_1	10	/* length of a base vnet-hdr */
 #define VIRT_HDR_2	12	/* length of the extended vnet-hdr */
 #define VIRT_HDR_MAX	VIRT_HDR_2
 struct virt_header {
 	uint8_t fields[VIRT_HDR_MAX];
 };
 
 #define MAX_BODYSIZE	16384
 
 struct pkt {
 	struct virt_header vh;
 	struct ether_header eh;
 	struct ip ip;
 	struct udphdr udp;
 	uint8_t body[MAX_BODYSIZE];	// XXX hardwired
 } __attribute__((__packed__));
 
 struct ip_range {
 	char *name;
 	uint32_t start, end; /* same as struct in_addr */
 	uint16_t port0, port1;
 };
 
 struct mac_range {
 	char *name;
 	struct ether_addr start, end;
 };
 
 /* ifname can be netmap:foo-xxxx */
 #define MAX_IFNAMELEN	64	/* our buffer for ifname */
 //#define MAX_PKTSIZE	1536
 #define MAX_PKTSIZE	MAX_BODYSIZE	/* XXX: + IP_HDR + ETH_HDR */
 
 /* compact timestamp to fit into 60 byte packet.
(enough to obtain RTT) */ struct tstamp { uint32_t sec; uint32_t nsec; }; /* * global arguments for all threads */ struct glob_arg { struct ip_range src_ip; struct ip_range dst_ip; struct mac_range dst_mac; struct mac_range src_mac; int pkt_size; int burst; int forever; - int npackets; /* total packets to send */ + uint64_t npackets; /* total packets to send */ int frags; /* fragments per packet */ int nthreads; - int cpus; + int cpus; /* cpus used for running */ + int system_cpus; /* cpus on the system */ + int options; /* testing */ #define OPT_PREFETCH 1 #define OPT_ACCESS 2 #define OPT_COPY 4 #define OPT_MEMCPY 8 #define OPT_TS 16 /* add a timestamp */ #define OPT_INDIRECT 32 /* use indirect buffers, tx only */ #define OPT_DUMP 64 /* dump rx/tx traffic */ -#define OPT_MONITOR_TX 128 -#define OPT_MONITOR_RX 256 +#define OPT_RUBBISH 256 /* send wathever the buffers contain */ #define OPT_RANDOM_SRC 512 #define OPT_RANDOM_DST 1024 +#define OPT_PPS_STATS 2048 int dev_type; #ifndef NO_PCAP pcap_t *p; #endif int tx_rate; struct timespec tx_period; int affinity; int main_fd; struct nm_desc *nmd; int report_interval; /* milliseconds between prints */ void *(*td_body)(void *); + int td_type; void *mmap_addr; char ifname[MAX_IFNAMELEN]; char *nmr_config; int dummy_send; int virt_header; /* send also the virt_header */ int extra_bufs; /* goes in nr_arg3 */ + int extra_pipes; /* goes in nr_arg1 */ char *packet_file; /* -P option */ +#define STATS_WIN 15 + int win_idx; + int64_t win[STATS_WIN]; }; enum dev_type { DEV_NONE, DEV_NETMAP, DEV_PCAP, DEV_TAP }; /* * Arguments for a new thread. The same structure is used by * the source and the sink */ struct targ { struct glob_arg *g; int used; int completed; int cancel; int fd; struct nm_desc *nmd; - volatile uint64_t count; + /* these ought to be volatile, but they are + * only sampled and errors should not accumulate + */ + struct my_ctrs ctr; + struct timespec tic, toc; int me; pthread_t thread; int affinity; struct pkt pkt; void *frame; }; /* * extract the extremes from a range of ipv4 addresses. * addr_lo[-addr_hi][:port_lo[-port_hi]] */ static void extract_ip_range(struct ip_range *r) { char *ap, *pp; struct in_addr a; if (verbose) D("extract IP range from %s", r->name); r->port0 = r->port1 = 0; r->start = r->end = 0; /* the first - splits start/end of range */ ap = index(r->name, '-'); /* do we have ports ? 
*/ if (ap) { *ap++ = '\0'; } /* grab the initial values (mandatory) */ pp = index(r->name, ':'); if (pp) { *pp++ = '\0'; r->port0 = r->port1 = strtol(pp, NULL, 0); }; inet_aton(r->name, &a); r->start = r->end = ntohl(a.s_addr); if (ap) { pp = index(ap, ':'); if (pp) { *pp++ = '\0'; if (*pp) r->port1 = strtol(pp, NULL, 0); } if (*ap) { inet_aton(ap, &a); r->end = ntohl(a.s_addr); } } if (r->port0 > r->port1) { uint16_t tmp = r->port0; r->port0 = r->port1; r->port1 = tmp; } if (r->start > r->end) { uint32_t tmp = r->start; r->start = r->end; r->end = tmp; } { struct in_addr a; char buf1[16]; // one ip address a.s_addr = htonl(r->end); strncpy(buf1, inet_ntoa(a), sizeof(buf1)); a.s_addr = htonl(r->start); if (1) D("range is %s:%d to %s:%d", inet_ntoa(a), r->port0, buf1, r->port1); } } static void extract_mac_range(struct mac_range *r) { if (verbose) D("extract MAC range from %s", r->name); bcopy(ether_aton(r->name), &r->start, 6); bcopy(ether_aton(r->name), &r->end, 6); #if 0 bcopy(targ->src_mac, eh->ether_shost, 6); p = index(targ->g->src_mac, '-'); if (p) targ->src_mac_range = atoi(p+1); bcopy(ether_aton(targ->g->dst_mac), targ->dst_mac, 6); bcopy(targ->dst_mac, eh->ether_dhost, 6); p = index(targ->g->dst_mac, '-'); if (p) targ->dst_mac_range = atoi(p+1); #endif if (verbose) D("%s starts at %s", r->name, ether_ntoa(&r->start)); } static struct targ *targs; static int global_nthreads; /* control-C handler */ static void sigint_h(int sig) { int i; (void)sig; /* UNUSED */ - D("received control-C on thread %p", pthread_self()); + D("received control-C on thread %p", (void *)pthread_self()); for (i = 0; i < global_nthreads; i++) { targs[i].cancel = 1; } - signal(SIGINT, SIG_DFL); } /* sysctl wrapper to return the number of active CPUs */ static int system_ncpus(void) { int ncpus; #if defined (__FreeBSD__) int mib[2] = { CTL_HW, HW_NCPU }; size_t len = sizeof(mib); sysctl(mib, 2, &ncpus, &len, NULL, 0); #elif defined(linux) ncpus = sysconf(_SC_NPROCESSORS_ONLN); +#elif defined(_WIN32) + { + SYSTEM_INFO sysinfo; + GetSystemInfo(&sysinfo); + ncpus = sysinfo.dwNumberOfProcessors; + } #else /* others */ ncpus = 1; #endif /* others */ return (ncpus); } #ifdef __linux__ #define sockaddr_dl sockaddr_ll #define sdl_family sll_family #define AF_LINK AF_PACKET #define LLADDR(s) s->sll_addr; #include #define TAP_CLONEDEV "/dev/net/tun" #endif /* __linux__ */ #ifdef __FreeBSD__ #include #define TAP_CLONEDEV "/dev/tap" #endif /* __FreeBSD */ #ifdef __APPLE__ // #warning TAP not supported on apple ? #include #define TAP_CLONEDEV "/dev/tap" #endif /* __APPLE__ */ /* * parse the vale configuration in conf and put it in nmr. * Return the flag set if necessary. * The configuration may consist of 0 to 4 numbers separated * by commas: #tx-slots,#rx-slots,#tx-rings,#rx-rings. * Missing numbers or zeroes stand for default values. * As an additional convenience, if exactly one number * is specified, then this is assigned to both #tx-slots and #rx-slots. * If there is no 4th number, then the 3rd is assigned to both #tx-rings * and #rx-rings. */ int parse_nmr_config(const char* conf, struct nmreq *nmr) { char *w, *tok; int i, v; nmr->nr_tx_rings = nmr->nr_rx_rings = 0; nmr->nr_tx_slots = nmr->nr_rx_slots = 0; if (conf == NULL || ! 
*conf)
 		return 0;
 	w = strdup(conf);
 	for (i = 0, tok = strtok(w, ","); tok; i++, tok = strtok(NULL, ",")) {
 		v = atoi(tok);
 		switch (i) {
 		case 0:
 			nmr->nr_tx_slots = nmr->nr_rx_slots = v;
 			break;
 		case 1:
 			nmr->nr_rx_slots = v;
 			break;
 		case 2:
 			nmr->nr_tx_rings = nmr->nr_rx_rings = v;
 			break;
 		case 3:
 			nmr->nr_rx_rings = v;
 			break;
 		default:
 			D("ignored config: %s", tok);
 			break;
 		}
 	}
 	D("txr %d txd %d rxr %d rxd %d",
 		nmr->nr_tx_rings, nmr->nr_tx_slots,
 		nmr->nr_rx_rings, nmr->nr_rx_slots);
 	free(w);
 	return (nmr->nr_tx_rings || nmr->nr_tx_slots ||
 		nmr->nr_rx_rings || nmr->nr_rx_slots) ?
 		NM_OPEN_RING_CFG : 0;
 }
 
 
 /*
  * locate the src mac address for our interface, put it
  * into the user-supplied buffer. return 0 if ok, nonzero on error.
  */
 static int
 source_hwaddr(const char *ifname, char *buf)
 {
 	struct ifaddrs *ifaphead, *ifap;
 	int l = sizeof(ifap->ifa_name);
 
 	if (getifaddrs(&ifaphead) != 0) {
 		D("getifaddrs %s failed", ifname);
 		return (-1);
 	}
 
 	for (ifap = ifaphead; ifap; ifap = ifap->ifa_next) {
 		struct sockaddr_dl *sdl =
 			(struct sockaddr_dl *)ifap->ifa_addr;
 		uint8_t *mac;
 
 		if (!sdl || sdl->sdl_family != AF_LINK)
 			continue;
 		if (strncmp(ifap->ifa_name, ifname, l) != 0)
 			continue;
 		mac = (uint8_t *)LLADDR(sdl);
 		sprintf(buf, "%02x:%02x:%02x:%02x:%02x:%02x",
 			mac[0], mac[1], mac[2],
 			mac[3], mac[4], mac[5]);
 		if (verbose)
 			D("source hwaddr %s", buf);
 		break;
 	}
 	freeifaddrs(ifaphead);
 	return ifap ? 0 : 1;
 }
 
 
 /* set the thread affinity. */
 static int
 setaffinity(pthread_t me, int i)
 {
 	cpuset_t cpumask;
 
 	if (i == -1)
 		return 0;
 
 	/* Set thread affinity. */
 	CPU_ZERO(&cpumask);
 	CPU_SET(i, &cpumask);
 
 	if (pthread_setaffinity_np(me, sizeof(cpuset_t), &cpumask) != 0) {
 		D("Unable to set affinity: %s", strerror(errno));
 		return 1;
 	}
 	return 0;
 }
 
 
 /* Compute the checksum of the given ip header. */
 static uint16_t
 checksum(const void *data, uint16_t len, uint32_t sum)
 {
 	const uint8_t *addr = data;
 	uint32_t i;
 
 	/* Checksum all the pairs of bytes first... */
 	for (i = 0; i < (len & ~1U); i += 2) {
 		sum += (u_int16_t)ntohs(*((u_int16_t *)(addr + i)));
 		if (sum > 0xFFFF)
 			sum -= 0xFFFF;
 	}
 	/*
 	 * If there's a single byte left over, checksum it, too.
 	 * Network byte order is big-endian, so the remaining byte is
 	 * the high byte.
 	 */
 	if (i < len) {
 		sum += addr[i] << 8;
 		if (sum > 0xFFFF)
 			sum -= 0xFFFF;
 	}
 	return sum;
 }
 
 static u_int16_t
 wrapsum(u_int32_t sum)
 {
 	sum = ~sum & 0xFFFF;
 	return (htons(sum));
 }
 
 /* Check the payload of the packet for errors (use it for debug).
  * Look for consecutive ascii representations of the size of the packet.
  */
 static void
-dump_payload(char *p, int len, struct netmap_ring *ring, int cur)
+dump_payload(const char *_p, int len, struct netmap_ring *ring, int cur)
 {
 	char buf[128];
 	int i, j, i0;
+	const unsigned char *p = (const unsigned char *)_p;
 
 	/* get the length in ASCII of the length of the packet. */
 
 	printf("ring %p cur %5d [buf %6d flags 0x%04x len %5d]\n",
 		ring, cur, ring->slot[cur].buf_idx,
 		ring->slot[cur].flags, len);
 	/* hexdump routine */
 	for (i = 0; i < len; ) {
 		memset(buf, ' ', sizeof(buf));	/* blank-fill the line buffer */
 		sprintf(buf, "%5d: ", i);
 		i0 = i;
 		for (j=0; j < 16 && i < len; i++, j++)
 			sprintf(buf+7+j*3, "%02x ", (uint8_t)(p[i]));
 		i = i0;
 		for (j=0; j < 16 && i < len; i++, j++)
 			sprintf(buf+7+j + 48, "%c",
 				isprint(p[i]) ? p[i] : '.');
 		printf("%s\n", buf);
 	}
 }
 
 /*
  * Fill a packet with some payload.
  * We create a UDP packet so the payload starts at
  * 14+20+8 = 42 bytes (Ethernet + IP + UDP headers).
*/ #ifdef __linux__ #define uh_sport source #define uh_dport dest #define uh_ulen len #define uh_sum check #endif /* linux */
/* * increment the addresses in the packet, * starting from the least significant field. * DST_IP DST_PORT SRC_IP SRC_PORT */ static void update_addresses(struct pkt *pkt, struct glob_arg *g) { uint32_t a; uint16_t p; struct ip *ip = &pkt->ip; struct udphdr *udp = &pkt->udp;
do { /* XXX for now it doesn't handle non-random src, random dst */ if (g->options & OPT_RANDOM_SRC) { udp->uh_sport = random(); ip->ip_src.s_addr = random(); } else { p = ntohs(udp->uh_sport); if (p < g->src_ip.port1) { /* just inc, no wrap */ udp->uh_sport = htons(p + 1); break; } udp->uh_sport = htons(g->src_ip.port0); a = ntohl(ip->ip_src.s_addr); if (a < g->src_ip.end) { /* just inc, no wrap */ ip->ip_src.s_addr = htonl(a + 1); break; } ip->ip_src.s_addr = htonl(g->src_ip.start); udp->uh_sport = htons(g->src_ip.port0); }
if (g->options & OPT_RANDOM_DST) { udp->uh_dport = random(); ip->ip_dst.s_addr = random(); } else { p = ntohs(udp->uh_dport); if (p < g->dst_ip.port1) { /* just inc, no wrap */ udp->uh_dport = htons(p + 1); break; } udp->uh_dport = htons(g->dst_ip.port0); a = ntohl(ip->ip_dst.s_addr); if (a < g->dst_ip.end) { /* just inc, no wrap */ ip->ip_dst.s_addr = htonl(a + 1); break; } } ip->ip_dst.s_addr = htonl(g->dst_ip.start); } while (0); // update checksum }
/* * initialize one packet and prepare for the next one. * The copy could be done more efficiently instead of * being repeated each time. */ static void initialize_packet(struct targ *targ) { struct pkt *pkt = &targ->pkt; struct ether_header *eh; struct ip *ip; struct udphdr *udp; uint16_t paylen = targ->g->pkt_size - sizeof(*eh) - sizeof(struct ip); const char *payload = targ->g->options & OPT_INDIRECT ? indirect_payload : default_payload; int i, l0 = strlen(payload);
+#ifndef NO_PCAP char errbuf[PCAP_ERRBUF_SIZE]; pcap_t *file; struct pcap_pkthdr *header; const unsigned char *packet; /* Read a packet from a PCAP file if asked.
*/ if (targ->g->packet_file != NULL) { if ((file = pcap_open_offline(targ->g->packet_file, errbuf)) == NULL) D("failed to open pcap file %s", targ->g->packet_file); if (pcap_next_ex(file, &header, &packet) < 0) D("failed to read packet from %s", targ->g->packet_file); if ((targ->frame = malloc(header->caplen)) == NULL) D("out of memory"); bcopy(packet, (unsigned char *)targ->frame, header->caplen); targ->g->pkt_size = header->caplen; pcap_close(file); return; } +#endif /* create a nice NUL-terminated string */ for (i = 0; i < paylen; i += l0) { if (l0 > paylen - i) l0 = paylen - i; // last round bcopy(payload, pkt->body + i, l0); } pkt->body[i-1] = '\0'; ip = &pkt->ip; /* prepare the headers */ ip->ip_v = IPVERSION; ip->ip_hl = 5; ip->ip_id = 0; ip->ip_tos = IPTOS_LOWDELAY; ip->ip_len = ntohs(targ->g->pkt_size - sizeof(*eh)); ip->ip_id = 0; ip->ip_off = htons(IP_DF); /* Don't fragment */ ip->ip_ttl = IPDEFTTL; ip->ip_p = IPPROTO_UDP; ip->ip_dst.s_addr = htonl(targ->g->dst_ip.start); ip->ip_src.s_addr = htonl(targ->g->src_ip.start); ip->ip_sum = wrapsum(checksum(ip, sizeof(*ip), 0)); udp = &pkt->udp; udp->uh_sport = htons(targ->g->src_ip.port0); udp->uh_dport = htons(targ->g->dst_ip.port0); udp->uh_ulen = htons(paylen); /* Magic: taken from sbin/dhclient/packet.c */ udp->uh_sum = wrapsum(checksum(udp, sizeof(*udp), checksum(pkt->body, paylen - sizeof(*udp), checksum(&ip->ip_src, 2 * sizeof(ip->ip_src), IPPROTO_UDP + (u_int32_t)ntohs(udp->uh_ulen) ) ) )); eh = &pkt->eh; bcopy(&targ->g->src_mac.start, eh->ether_shost, 6); bcopy(&targ->g->dst_mac.start, eh->ether_dhost, 6); eh->ether_type = htons(ETHERTYPE_IP); bzero(&pkt->vh, sizeof(pkt->vh)); -#ifdef TRASH_VHOST_HDR - /* set bogus content */ - pkt->vh.fields[0] = 0xff; - pkt->vh.fields[1] = 0xff; - pkt->vh.fields[2] = 0xff; - pkt->vh.fields[3] = 0xff; - pkt->vh.fields[4] = 0xff; - pkt->vh.fields[5] = 0xff; -#endif /* TRASH_VHOST_HDR */ // dump_payload((void *)pkt, targ->g->pkt_size, NULL, 0); } static void -set_vnet_hdr_len(struct targ *t) +get_vnet_hdr_len(struct glob_arg *g) { - int err, l = t->g->virt_header; struct nmreq req; + int err; + memset(&req, 0, sizeof(req)); + bcopy(g->nmd->req.nr_name, req.nr_name, sizeof(req.nr_name)); + req.nr_version = NETMAP_API; + req.nr_cmd = NETMAP_VNET_HDR_GET; + err = ioctl(g->main_fd, NIOCREGIF, &req); + if (err) { + D("Unable to get virtio-net header length"); + return; + } + + g->virt_header = req.nr_arg1; + if (g->virt_header) { + D("Port requires virtio-net header, length = %d", + g->virt_header); + } +} + +static void +set_vnet_hdr_len(struct glob_arg *g) +{ + int err, l = g->virt_header; + struct nmreq req; + if (l == 0) return; memset(&req, 0, sizeof(req)); - bcopy(t->nmd->req.nr_name, req.nr_name, sizeof(req.nr_name)); + bcopy(g->nmd->req.nr_name, req.nr_name, sizeof(req.nr_name)); req.nr_version = NETMAP_API; req.nr_cmd = NETMAP_BDG_VNET_HDR; req.nr_arg1 = l; - err = ioctl(t->fd, NIOCREGIF, &req); + err = ioctl(g->main_fd, NIOCREGIF, &req); if (err) { - D("Unable to set vnet header length %d", l); + D("Unable to set virtio-net header length %d", l); } } /* * create and enqueue a batch of packets on a ring. * On the last one set NS_REPORT to tell the driver to generate * an interrupt when done. 
*/ static int send_packets(struct netmap_ring *ring, struct pkt *pkt, void *frame, int size, struct glob_arg *g, u_int count, int options, u_int nfrags) { u_int n, sent, cur = ring->cur; u_int fcnt; n = nm_ring_space(ring); if (n < count) count = n; if (count < nfrags) { D("truncating packet, no room for frags %d %d", count, nfrags); } #if 0 if (options & (OPT_COPY | OPT_PREFETCH) ) { for (sent = 0; sent < count; sent++) { struct netmap_slot *slot = &ring->slot[cur]; char *p = NETMAP_BUF(ring, slot->buf_idx); __builtin_prefetch(p); cur = nm_ring_next(ring, cur); } cur = ring->cur; } #endif for (fcnt = nfrags, sent = 0; sent < count; sent++) { struct netmap_slot *slot = &ring->slot[cur]; char *p = NETMAP_BUF(ring, slot->buf_idx); + int buf_changed = slot->flags & NS_BUF_CHANGED; slot->flags = 0; - if (options & OPT_INDIRECT) { + if (options & OPT_RUBBISH) { + /* do nothing */ + } else if (options & OPT_INDIRECT) { slot->flags |= NS_INDIRECT; - slot->ptr = (uint64_t)frame; - } else if (options & OPT_COPY) { + slot->ptr = (uint64_t)((uintptr_t)frame); + } else if ((options & OPT_COPY) || buf_changed) { nm_pkt_copy(frame, p, size); if (fcnt == nfrags) update_addresses(pkt, g); } else if (options & OPT_MEMCPY) { memcpy(p, frame, size); if (fcnt == nfrags) update_addresses(pkt, g); } else if (options & OPT_PREFETCH) { __builtin_prefetch(p); } if (options & OPT_DUMP) dump_payload(p, size, ring, cur); slot->len = size; if (--fcnt > 0) slot->flags |= NS_MOREFRAG; else fcnt = nfrags; if (sent == count - 1) { slot->flags &= ~NS_MOREFRAG; slot->flags |= NS_REPORT; } cur = nm_ring_next(ring, cur); } ring->head = ring->cur = cur; return (sent); } /* + * Index of the highest bit set + */ +uint32_t +msb64(uint64_t x) +{ + uint64_t m = 1ULL << 63; + int i; + + for (i = 63; i >= 0; i--, m >>=1) + if (m & x) + return i; + return 0; +} + +/* * Send a packet, and wait for a response. * The payload (after UDP header, ofs 42) has a 4-byte sequence * followed by a struct timeval (or bintime?) */ #define PAY_OFS 42 /* where in the pkt... */ static void * pinger_body(void *data) { struct targ *targ = (struct targ *) data; struct pollfd pfd = { .fd = targ->fd, .events = POLLIN }; struct netmap_if *nifp = targ->nmd->nifp; - int i, rx = 0, n = targ->g->npackets; + int i, rx = 0; void *frame; int size; - uint32_t sent = 0; struct timespec ts, now, last_print; - uint32_t count = 0, min = 1000000000, av = 0; + uint64_t sent = 0, n = targ->g->npackets; + uint64_t count = 0, t_cur, t_min = ~0, av = 0; + uint64_t buckets[64]; /* bins for delays, ns */ frame = &targ->pkt; frame += sizeof(targ->pkt.vh) - targ->g->virt_header; size = targ->g->pkt_size + targ->g->virt_header; + if (targ->g->nthreads > 1) { D("can only ping with 1 thread"); return NULL; } + bzero(&buckets, sizeof(buckets)); clock_gettime(CLOCK_REALTIME_PRECISE, &last_print); now = last_print; - while (n == 0 || (int)sent < n) { + while (!targ->cancel && (n == 0 || sent < n)) { struct netmap_ring *ring = NETMAP_TXRING(nifp, 0); struct netmap_slot *slot; char *p; for (i = 0; i < 1; i++) { /* XXX why the loop for 1 pkt ? 
*/ slot = &ring->slot[ring->cur]; slot->len = size; p = NETMAP_BUF(ring, slot->buf_idx); if (nm_ring_empty(ring)) { D("-- ouch, cannot send"); } else { struct tstamp *tp; nm_pkt_copy(frame, p, size); clock_gettime(CLOCK_REALTIME_PRECISE, &ts); bcopy(&sent, p+42, sizeof(sent)); tp = (struct tstamp *)(p+46); tp->sec = (uint32_t)ts.tv_sec; tp->nsec = (uint32_t)ts.tv_nsec; sent++; ring->head = ring->cur = nm_ring_next(ring, ring->cur); } } /* should use a parameter to decide how often to send */ if (poll(&pfd, 1, 3000) <= 0) { D("poll error/timeout on queue %d: %s", targ->me, strerror(errno)); continue; } /* see what we got back */ for (i = targ->nmd->first_tx_ring; i <= targ->nmd->last_tx_ring; i++) { ring = NETMAP_RXRING(nifp, i); while (!nm_ring_empty(ring)) { uint32_t seq; struct tstamp *tp; + int pos; + slot = &ring->slot[ring->cur]; p = NETMAP_BUF(ring, slot->buf_idx); clock_gettime(CLOCK_REALTIME_PRECISE, &now); bcopy(p+42, &seq, sizeof(seq)); tp = (struct tstamp *)(p+46); ts.tv_sec = (time_t)tp->sec; ts.tv_nsec = (long)tp->nsec; ts.tv_sec = now.tv_sec - ts.tv_sec; ts.tv_nsec = now.tv_nsec - ts.tv_nsec; if (ts.tv_nsec < 0) { ts.tv_nsec += 1000000000; ts.tv_sec--; } - if (1) D("seq %d/%d delta %d.%09d", seq, sent, + if (0) D("seq %d/%lu delta %d.%09d", seq, sent, (int)ts.tv_sec, (int)ts.tv_nsec); - if (ts.tv_nsec < (int)min) - min = ts.tv_nsec; + t_cur = ts.tv_sec * 1000000000UL + ts.tv_nsec; + if (t_cur < t_min) + t_min = t_cur; count ++; - av += ts.tv_nsec; + av += t_cur; + pos = msb64(t_cur); + buckets[pos]++; + /* now store it in a bucket */ ring->head = ring->cur = nm_ring_next(ring, ring->cur); rx++; } } //D("tx %d rx %d", sent, rx); //usleep(100000); ts.tv_sec = now.tv_sec - last_print.tv_sec; ts.tv_nsec = now.tv_nsec - last_print.tv_nsec; if (ts.tv_nsec < 0) { ts.tv_nsec += 1000000000; ts.tv_sec--; } if (ts.tv_sec >= 1) { - D("count %d min %d av %d", - count, min, av/count); + D("count %d RTT: min %d av %d ns", + (int)count, (int)t_min, (int)(av/count)); + int k, j, kmin; + char buf[512]; + + for (kmin = 0; kmin < 64; kmin ++) + if (buckets[kmin]) + break; + for (k = 63; k >= kmin; k--) + if (buckets[k]) + break; + buf[0] = '\0'; + for (j = kmin; j <= k; j++) + sprintf(buf, "%s %5d", buf, (int)buckets[j]); + D("k: %d .. 
%d\n\t%s", 1<<kmin, 1<<k, buf); + bzero(&buckets, sizeof(buckets)); count = 0; av = 0; + t_min = ~0; last_print = now; } } + + /* reset the ``used`` flag. */ + targ->used = 0; + return NULL; }
/* * reply to ping requests */ static void * ponger_body(void *data) { struct targ *targ = (struct targ *) data; struct pollfd pfd = { .fd = targ->fd, .events = POLLIN }; struct netmap_if *nifp = targ->nmd->nifp; struct netmap_ring *txring, *rxring; - int i, rx = 0, sent = 0, n = targ->g->npackets; + int i, rx = 0; + uint64_t sent = 0, n = targ->g->npackets;
if (targ->g->nthreads > 1) { D("can only reply to ping with 1 thread"); return NULL; } - D("understood ponger %d but don't know how to do it", n); - while (n == 0 || sent < n) { + D("understood ponger %lu but don't know how to do it", n); + while (!targ->cancel && (n == 0 || sent < n)) { uint32_t txcur, txavail; //#define BUSYWAIT #ifdef BUSYWAIT ioctl(pfd.fd, NIOCRXSYNC, NULL); #else if (poll(&pfd, 1, 1000) <= 0) { D("poll error/timeout on queue %d: %s", targ->me, strerror(errno)); continue; } #endif txring = NETMAP_TXRING(nifp, 0); txcur = txring->cur; txavail = nm_ring_space(txring);
/* see what we got back */ for (i = targ->nmd->first_rx_ring; i <= targ->nmd->last_rx_ring; i++) { rxring = NETMAP_RXRING(nifp, i); while (!nm_ring_empty(rxring)) { uint16_t *spkt, *dpkt; uint32_t cur = rxring->cur; struct netmap_slot *slot = &rxring->slot[cur]; char *src, *dst; src = NETMAP_BUF(rxring, slot->buf_idx); //D("got pkt %p of size %d", src, slot->len); rxring->head = rxring->cur = nm_ring_next(rxring, cur); rx++; if (txavail == 0) continue; dst = NETMAP_BUF(txring, txring->slot[txcur].buf_idx);
/* copy... */ dpkt = (uint16_t *)dst; spkt = (uint16_t *)src; nm_pkt_copy(src, dst, slot->len); dpkt[0] = spkt[3]; dpkt[1] = spkt[4]; dpkt[2] = spkt[5]; dpkt[3] = spkt[0]; dpkt[4] = spkt[1]; dpkt[5] = spkt[2]; txring->slot[txcur].len = slot->len; /* XXX swap src dst mac */ txcur = nm_ring_next(txring, txcur); txavail--; sent++; } } txring->head = txring->cur = txcur; - targ->count = sent; + targ->ctr.pkts = sent; #ifdef BUSYWAIT ioctl(pfd.fd, NIOCTXSYNC, NULL); #endif //D("tx %d rx %d", sent, rx); }
- return NULL; -} -static __inline int -timespec_ge(const struct timespec *a, const struct timespec *b) -{ + /* reset the ``used`` flag. */ + targ->used = 0; - if (a->tv_sec > b->tv_sec) - return (1); - if (a->tv_sec < b->tv_sec) - return (0); - if (a->tv_nsec >= b->tv_nsec) - return (1); - return (0); + return NULL; }
-static __inline struct timespec -timeval2spec(const struct timeval *a) -{ - struct timespec ts = { - .tv_sec = a->tv_sec, - .tv_nsec = a->tv_usec * 1000 - }; - return ts; -} -static __inline struct timeval -timespec2val(const struct timespec *a) -{ - struct timeval tv = { - .tv_sec = a->tv_sec, - .tv_usec = a->tv_nsec / 1000 - }; - return tv; -} - - -static __inline struct timespec -timespec_add(struct timespec a, struct timespec b) -{ - struct timespec ret = { a.tv_sec + b.tv_sec, a.tv_nsec + b.tv_nsec }; - if (ret.tv_nsec >= 1000000000) { - ret.tv_sec++; - ret.tv_nsec -= 1000000000; - } - return ret; -} - -static __inline struct timespec -timespec_sub(struct timespec a, struct timespec b) -{ - struct timespec ret = { a.tv_sec - b.tv_sec, a.tv_nsec - b.tv_nsec }; - if (ret.tv_nsec < 0) { - ret.tv_sec--; - ret.tv_nsec += 1000000000; - } - return ret; -} - - /* * wait until ts, either busy or sleeping if more than 1ms. * Return wakeup time.
*/ static struct timespec wait_time(struct timespec ts) { for (;;) { struct timespec w, cur; clock_gettime(CLOCK_REALTIME_PRECISE, &cur); w = timespec_sub(ts, cur); if (w.tv_sec < 0) return cur; else if (w.tv_sec > 0 || w.tv_nsec > 1000000) poll(NULL, 0, 1); } } static void * sender_body(void *data) { struct targ *targ = (struct targ *) data; struct pollfd pfd = { .fd = targ->fd, .events = POLLOUT }; struct netmap_if *nifp; - struct netmap_ring *txring; - int i, n = targ->g->npackets / targ->g->nthreads; - int64_t sent = 0; + struct netmap_ring *txring = NULL; + int i; + uint64_t n = targ->g->npackets / targ->g->nthreads; + uint64_t sent = 0; + uint64_t event = 0; int options = targ->g->options | OPT_COPY; struct timespec nexttime = { 0, 0}; // XXX silence compiler int rate_limit = targ->g->tx_rate; struct pkt *pkt = &targ->pkt; void *frame; int size; if (targ->frame == NULL) { frame = pkt; frame += sizeof(pkt->vh) - targ->g->virt_header; size = targ->g->pkt_size + targ->g->virt_header; } else { frame = targ->frame; size = targ->g->pkt_size; } D("start, fd %d main_fd %d", targ->fd, targ->g->main_fd); if (setaffinity(targ->thread, targ->affinity)) goto quit; /* main loop.*/ clock_gettime(CLOCK_REALTIME_PRECISE, &targ->tic); if (rate_limit) { targ->tic = timespec_add(targ->tic, (struct timespec){2,0}); targ->tic.tv_nsec = 0; wait_time(targ->tic); nexttime = targ->tic; } if (targ->g->dev_type == DEV_TAP) { D("writing to file desc %d", targ->g->main_fd); for (i = 0; !targ->cancel && (n == 0 || sent < n); i++) { if (write(targ->g->main_fd, frame, size) != -1) sent++; update_addresses(pkt, targ->g); if (i > 10000) { - targ->count = sent; + targ->ctr.pkts = sent; + targ->ctr.bytes = sent*size; + targ->ctr.events = sent; i = 0; } } #ifndef NO_PCAP } else if (targ->g->dev_type == DEV_PCAP) { pcap_t *p = targ->g->p; for (i = 0; !targ->cancel && (n == 0 || sent < n); i++) { if (pcap_inject(p, frame, size) != -1) sent++; update_addresses(pkt, targ->g); if (i > 10000) { - targ->count = sent; + targ->ctr.pkts = sent; + targ->ctr.bytes = sent*size; + targ->ctr.events = sent; i = 0; } } #endif /* NO_PCAP */ } else { int tosend = 0; int frags = targ->g->frags; - nifp = targ->nmd->nifp; + nifp = targ->nmd->nifp; while (!targ->cancel && (n == 0 || sent < n)) { if (rate_limit && tosend <= 0) { tosend = targ->g->burst; nexttime = timespec_add(nexttime, targ->g->tx_period); wait_time(nexttime); } /* * wait for available room in the send queue(s) */ +#ifdef BUSYWAIT + if (ioctl(pfd.fd, NIOCTXSYNC, NULL) < 0) { + D("ioctl error on queue %d: %s", targ->me, + strerror(errno)); + goto quit; + } +#else /* !BUSYWAIT */ if (poll(&pfd, 1, 2000) <= 0) { if (targ->cancel) break; D("poll error/timeout on queue %d: %s", targ->me, strerror(errno)); // goto quit; } if (pfd.revents & POLLERR) { - D("poll error"); + D("poll error on %d ring %d-%d", pfd.fd, + targ->nmd->first_tx_ring, targ->nmd->last_tx_ring); goto quit; } +#endif /* !BUSYWAIT */ /* * scan our queues and send on those with room */ if (options & OPT_COPY && sent > 100000 && !(targ->g->options & OPT_COPY) ) { D("drop copy"); options &= ~OPT_COPY; } for (i = targ->nmd->first_tx_ring; i <= targ->nmd->last_tx_ring; i++) { - int m, limit = rate_limit ? tosend : targ->g->burst; + int m; + uint64_t limit = rate_limit ? 
tosend : targ->g->burst; if (n > 0 && n - sent < limit) limit = n - sent; txring = NETMAP_TXRING(nifp, i); if (nm_ring_empty(txring)) continue; if (frags > 1) limit = ((limit + frags - 1) / frags) * frags; m = send_packets(txring, pkt, frame, size, targ->g, limit, options, frags); ND("limit %d tail %d frags %d m %d", limit, txring->tail, frags, m); sent += m; - targ->count = sent; + if (m > 0) //XXX-ste: can m be 0? + event++; + targ->ctr.pkts = sent; + targ->ctr.bytes = sent*size; + targ->ctr.events = event; if (rate_limit) { tosend -= m; if (tosend <= 0) break; } } } /* flush any remaining packets */ D("flush tail %d head %d on thread %p", txring->tail, txring->head, - pthread_self()); + (void *)pthread_self()); ioctl(pfd.fd, NIOCTXSYNC, NULL); /* final part: wait all the TX queues to be empty. */ for (i = targ->nmd->first_tx_ring; i <= targ->nmd->last_tx_ring; i++) { txring = NETMAP_TXRING(nifp, i); - while (nm_tx_pending(txring)) { + while (!targ->cancel && nm_tx_pending(txring)) { RD(5, "pending tx tail %d head %d on ring %d", txring->tail, txring->head, i); ioctl(pfd.fd, NIOCTXSYNC, NULL); usleep(1); /* wait 1 tick */ } } } /* end DEV_NETMAP */ clock_gettime(CLOCK_REALTIME_PRECISE, &targ->toc); targ->completed = 1; - targ->count = sent; - + targ->ctr.pkts = sent; + targ->ctr.bytes = sent*size; + targ->ctr.events = event; quit: /* reset the ``used`` flag. */ targ->used = 0; return (NULL); } #ifndef NO_PCAP static void receive_pcap(u_char *user, const struct pcap_pkthdr * h, const u_char * bytes) { - int *count = (int *)user; - (void)h; /* UNUSED */ + struct my_ctrs *ctr = (struct my_ctrs *)user; (void)bytes; /* UNUSED */ - (*count)++; + ctr->bytes += h->len; + ctr->pkts++; } #endif /* !NO_PCAP */ + static int -receive_packets(struct netmap_ring *ring, u_int limit, int dump) +receive_packets(struct netmap_ring *ring, u_int limit, int dump, uint64_t *bytes) { u_int cur, rx, n; + uint64_t b = 0; + if (bytes == NULL) + bytes = &b; + cur = ring->cur; n = nm_ring_space(ring); if (n < limit) limit = n; for (rx = 0; rx < limit; rx++) { struct netmap_slot *slot = &ring->slot[cur]; char *p = NETMAP_BUF(ring, slot->buf_idx); + *bytes += slot->len; if (dump) dump_payload(p, slot->len, ring, cur); cur = nm_ring_next(ring, cur); } ring->head = ring->cur = cur; return (rx); } static void * receiver_body(void *data) { struct targ *targ = (struct targ *) data; struct pollfd pfd = { .fd = targ->fd, .events = POLLIN }; struct netmap_if *nifp; struct netmap_ring *rxring; int i; - uint64_t received = 0; + struct my_ctrs cur; + cur.pkts = cur.bytes = cur.events = cur.min_space = 0; + cur.t.tv_usec = cur.t.tv_sec = 0; // unused, just silence the compiler + if (setaffinity(targ->thread, targ->affinity)) goto quit; D("reading from %s fd %d main_fd %d", targ->g->ifname, targ->fd, targ->g->main_fd); /* unbounded wait for the first packet. */ for (;!targ->cancel;) { i = poll(&pfd, 1, 1000); if (i > 0 && !(pfd.revents & POLLERR)) break; RD(1, "waiting for initial packets, poll returns %d %d", i, pfd.revents); } /* main loop, exit after 1s silence */ clock_gettime(CLOCK_REALTIME_PRECISE, &targ->tic); if (targ->g->dev_type == DEV_TAP) { while (!targ->cancel) { char buf[MAX_BODYSIZE]; /* XXX should we poll ? 
*/ - if (read(targ->g->main_fd, buf, sizeof(buf)) > 0) - targ->count++; + i = read(targ->g->main_fd, buf, sizeof(buf)); + if (i > 0) { + targ->ctr.pkts++; + targ->ctr.bytes += i; + targ->ctr.events++; + } } #ifndef NO_PCAP } else if (targ->g->dev_type == DEV_PCAP) { while (!targ->cancel) { /* XXX should we poll ? */ pcap_dispatch(targ->g->p, targ->g->burst, receive_pcap, - (u_char *)&targ->count); + (u_char *)&targ->ctr); + targ->ctr.events++; } #endif /* !NO_PCAP */ } else { int dump = targ->g->options & OPT_DUMP; - nifp = targ->nmd->nifp; + nifp = targ->nmd->nifp; while (!targ->cancel) { /* Once we started to receive packets, wait at most 1 seconds before quitting. */ +#ifdef BUSYWAIT + if (ioctl(pfd.fd, NIOCRXSYNC, NULL) < 0) { + D("ioctl error on queue %d: %s", targ->me, + strerror(errno)); + goto quit; + } +#else /* !BUSYWAIT */ if (poll(&pfd, 1, 1 * 1000) <= 0 && !targ->g->forever) { clock_gettime(CLOCK_REALTIME_PRECISE, &targ->toc); targ->toc.tv_sec -= 1; /* Subtract timeout time. */ goto out; } if (pfd.revents & POLLERR) { D("poll err"); goto quit; } - +#endif /* !BUSYWAIT */ + uint64_t cur_space = 0; for (i = targ->nmd->first_rx_ring; i <= targ->nmd->last_rx_ring; i++) { int m; rxring = NETMAP_RXRING(nifp, i); + /* compute free space in the ring */ + m = rxring->head + rxring->num_slots - rxring->tail; + if (m >= (int) rxring->num_slots) + m -= rxring->num_slots; + cur_space += m; if (nm_ring_empty(rxring)) continue; - m = receive_packets(rxring, targ->g->burst, dump); - received += m; + m = receive_packets(rxring, targ->g->burst, dump, &cur.bytes); + cur.pkts += m; + if (m > 0) //XXX-ste: can m be 0? + cur.events++; } - targ->count = received; + cur.min_space = targ->ctr.min_space; + if (cur_space < cur.min_space) + cur.min_space = cur_space; + targ->ctr = cur; } } clock_gettime(CLOCK_REALTIME_PRECISE, &targ->toc); +#if !defined(BUSYWAIT) out: +#endif targ->completed = 1; - targ->count = received; + targ->ctr = cur; quit: /* reset the ``used`` flag. */ targ->used = 0; return (NULL); } -/* very crude code to print a number in normalized form. - * Caller has to make sure that the buffer is large enough. - */ -static const char * -norm(char *buf, double val) +static void * +txseq_body(void *data) { - char *units[] = { "", "K", "M", "G", "T" }; - u_int i; + struct targ *targ = (struct targ *) data; + struct pollfd pfd = { .fd = targ->fd, .events = POLLOUT }; + struct netmap_ring *ring; + int64_t sent = 0; + uint64_t event = 0; + int options = targ->g->options | OPT_COPY; + struct timespec nexttime = {0, 0}; + int rate_limit = targ->g->tx_rate; + struct pkt *pkt = &targ->pkt; + int frags = targ->g->frags; + uint32_t sequence = 0; + int budget = 0; + void *frame; + int size; - for (i = 0; val >=1000 && i < sizeof(units)/sizeof(char *) - 1; i++) - val /= 1000; - sprintf(buf, "%.2f %s", val, units[i]); - return buf; + if (targ->g->nthreads > 1) { + D("can only txseq ping with 1 thread"); + return NULL; + } + + if (targ->g->npackets > 0) { + D("Ignoring -n argument"); + } + + frame = pkt; + frame += sizeof(pkt->vh) - targ->g->virt_header; + size = targ->g->pkt_size + targ->g->virt_header; + + D("start, fd %d main_fd %d", targ->fd, targ->g->main_fd); + if (setaffinity(targ->thread, targ->affinity)) + goto quit; + + clock_gettime(CLOCK_REALTIME_PRECISE, &targ->tic); + if (rate_limit) { + targ->tic = timespec_add(targ->tic, (struct timespec){2,0}); + targ->tic.tv_nsec = 0; + wait_time(targ->tic); + nexttime = targ->tic; + } + + /* Only use the first queue. 
*/ + ring = NETMAP_TXRING(targ->nmd->nifp, targ->nmd->first_tx_ring); + + while (!targ->cancel) { + int64_t limit; + unsigned int space; + unsigned int head; + int fcnt; + + if (!rate_limit) { + budget = targ->g->burst; + + } else if (budget <= 0) { + budget = targ->g->burst; + nexttime = timespec_add(nexttime, targ->g->tx_period); + wait_time(nexttime); + } + + /* wait for available room in the send queue */ + if (poll(&pfd, 1, 2000) <= 0) { + if (targ->cancel) + break; + D("poll error/timeout on queue %d: %s", targ->me, + strerror(errno)); + } + if (pfd.revents & POLLERR) { + D("poll error on %d ring %d-%d", pfd.fd, + targ->nmd->first_tx_ring, targ->nmd->last_tx_ring); + goto quit; + } + + /* If no room poll() again. */ + space = nm_ring_space(ring); + if (!space) { + continue; + } + + limit = budget; + + if (space < limit) { + limit = space; + } + + /* Cut off ``limit`` to make sure is multiple of ``frags``. */ + if (frags > 1) { + limit = (limit / frags) * frags; + } + + limit = sent + limit; /* Convert to absolute. */ + + for (fcnt = frags, head = ring->head; + sent < limit; sent++, sequence++) { + struct netmap_slot *slot = &ring->slot[head]; + char *p = NETMAP_BUF(ring, slot->buf_idx); + + slot->flags = 0; + pkt->body[0] = sequence >> 24; + pkt->body[1] = (sequence >> 16) & 0xff; + pkt->body[2] = (sequence >> 8) & 0xff; + pkt->body[3] = sequence & 0xff; + nm_pkt_copy(frame, p, size); + if (fcnt == frags) { + update_addresses(pkt, targ->g); + } + + if (options & OPT_DUMP) { + dump_payload(p, size, ring, head); + } + + slot->len = size; + + if (--fcnt > 0) { + slot->flags |= NS_MOREFRAG; + } else { + fcnt = frags; + } + + if (sent == limit - 1) { + /* Make sure we don't push an incomplete + * packet. */ + assert(!(slot->flags & NS_MOREFRAG)); + slot->flags |= NS_REPORT; + } + + head = nm_ring_next(ring, head); + if (rate_limit) { + budget--; + } + } + + ring->cur = ring->head = head; + + event ++; + targ->ctr.pkts = sent; + targ->ctr.bytes = sent * size; + targ->ctr.events = event; + } + + /* flush any remaining packets */ + D("flush tail %d head %d on thread %p", + ring->tail, ring->head, + (void *)pthread_self()); + ioctl(pfd.fd, NIOCTXSYNC, NULL); + + /* final part: wait the TX queues to become empty. */ + while (!targ->cancel && nm_tx_pending(ring)) { + RD(5, "pending tx tail %d head %d on ring %d", + ring->tail, ring->head, targ->nmd->first_tx_ring); + ioctl(pfd.fd, NIOCTXSYNC, NULL); + usleep(1); /* wait 1 tick */ + } + + clock_gettime(CLOCK_REALTIME_PRECISE, &targ->toc); + targ->completed = 1; + targ->ctr.pkts = sent; + targ->ctr.bytes = sent * size; + targ->ctr.events = event; +quit: + /* reset the ``used`` flag. 
*/ + targ->used = 0; + + return (NULL); } -static void -tx_output(uint64_t sent, int size, double delta) + +static char * +multi_slot_to_string(struct netmap_ring *ring, unsigned int head, + unsigned int nfrags, char *strbuf, size_t strbuflen) { - double bw, raw_bw, pps; - char b1[40], b2[80], b3[80]; + unsigned int f; + char *ret = strbuf; - printf("Sent %llu packets, %d bytes each, in %.2f seconds.\n", - (unsigned long long)sent, size, delta); - if (delta == 0) - delta = 1e-6; - if (size < 60) /* correct for min packet size */ - size = 60; - pps = sent / delta; - bw = (8.0 * size * sent) / delta; - /* raw packets have4 bytes crc + 20 bytes framing */ - raw_bw = (8.0 * (size + 24) * sent) / delta; + for (f = 0; f < nfrags; f++) { + struct netmap_slot *slot = &ring->slot[head]; + int m = snprintf(strbuf, strbuflen, "|%u,%x|", slot->len, + slot->flags); + if (m >= (int)strbuflen) { + break; + } + strbuf += m; + strbuflen -= m; - printf("Speed: %spps Bandwidth: %sbps (raw %sbps)\n", - norm(b1, pps), norm(b2, bw), norm(b3, raw_bw) ); + head = nm_ring_next(ring, head); + } + + return ret; } +static void * +rxseq_body(void *data) +{ + struct targ *targ = (struct targ *) data; + struct pollfd pfd = { .fd = targ->fd, .events = POLLIN }; + int dump = targ->g->options & OPT_DUMP; + struct netmap_ring *ring; + unsigned int frags_exp = 1; + uint32_t seq_exp = 0; + struct my_ctrs cur; + unsigned int frags = 0; + int first_packet = 1; + int first_slot = 1; + int i; + cur.pkts = cur.bytes = cur.events = cur.min_space = 0; + cur.t.tv_usec = cur.t.tv_sec = 0; // unused, just silence the compiler + + if (setaffinity(targ->thread, targ->affinity)) + goto quit; + + D("reading from %s fd %d main_fd %d", + targ->g->ifname, targ->fd, targ->g->main_fd); + /* unbounded wait for the first packet. */ + for (;!targ->cancel;) { + i = poll(&pfd, 1, 1000); + if (i > 0 && !(pfd.revents & POLLERR)) + break; + RD(1, "waiting for initial packets, poll returns %d %d", + i, pfd.revents); + } + + clock_gettime(CLOCK_REALTIME_PRECISE, &targ->tic); + + ring = NETMAP_RXRING(targ->nmd->nifp, targ->nmd->first_rx_ring); + + while (!targ->cancel) { + unsigned int head; + uint32_t seq; + int limit; + + /* Once we started to receive packets, wait at most 1 seconds + before quitting. */ + if (poll(&pfd, 1, 1 * 1000) <= 0 && !targ->g->forever) { + clock_gettime(CLOCK_REALTIME_PRECISE, &targ->toc); + targ->toc.tv_sec -= 1; /* Subtract timeout time. */ + goto out; + } + + if (pfd.revents & POLLERR) { + D("poll err"); + goto quit; + } + + if (nm_ring_empty(ring)) + continue; + + limit = nm_ring_space(ring); + if (limit > targ->g->burst) + limit = targ->g->burst; + +#if 0 + /* Enable this if + * 1) we remove the early-return optimization from + * the netmap poll implementation, or + * 2) pipes get NS_MOREFRAG support. + * With the current netmap implementation, an experiment like + * pkt-gen -i vale:1{1 -f txseq -F 9 + * pkt-gen -i vale:1}1 -f rxseq + * would get stuck as soon as we find nm_ring_space(ring) < 9, + * since here limit is rounded to 0 and + * pipe rxsync is not called anymore by the poll() of this loop. + */ + if (frags_exp > 1) { + int o = limit; + /* Cut off to the closest smaller multiple. 
*/ + limit = (limit / frags_exp) * frags_exp; + RD(2, "LIMIT %d --> %d", o, limit); + } +#endif + + for (head = ring->head, i = 0; i < limit; i++) { + struct netmap_slot *slot = &ring->slot[head]; + char *p = NETMAP_BUF(ring, slot->buf_idx); + int len = slot->len; + struct pkt *pkt; + + if (dump) { + dump_payload(p, slot->len, ring, head); + } + + frags++; + if (!(slot->flags & NS_MOREFRAG)) { + if (first_packet) { + first_packet = 0; + } else if (frags != frags_exp) { + char prbuf[512]; + RD(1, "Received packets with %u frags, " + "expected %u, '%s'", frags, frags_exp, + multi_slot_to_string(ring, head-frags+1, frags, + prbuf, sizeof(prbuf))); + } + first_packet = 0; + frags_exp = frags; + frags = 0; + } + + p -= sizeof(pkt->vh) - targ->g->virt_header; + len += sizeof(pkt->vh) - targ->g->virt_header; + pkt = (struct pkt *)p; + + if ((char *)pkt + len < ((char *)pkt->body) + sizeof(seq)) { + RD(1, "%s: packet too small (len=%u)", __func__, + slot->len); + } else { + seq = (pkt->body[0] << 24) | (pkt->body[1] << 16) + | (pkt->body[2] << 8) | pkt->body[3]; + if (first_slot) { + /* Grab the first one, whatever it + is. */ + seq_exp = seq; + first_slot = 0; + } else if (seq != seq_exp) { + uint32_t delta = seq - seq_exp; + + if (delta < (0xFFFFFFFF >> 1)) { + RD(2, "Sequence GAP: exp %u found %u", + seq_exp, seq); + } else { + RD(2, "Sequence OUT OF ORDER: " + "exp %u found %u", seq_exp, seq); + } + seq_exp = seq; + } + seq_exp++; + } + + cur.bytes += slot->len; + head = nm_ring_next(ring, head); + cur.pkts++; + } + + ring->cur = ring->head = head; + + cur.events++; + targ->ctr = cur; + } + + clock_gettime(CLOCK_REALTIME_PRECISE, &targ->toc); + +out: + targ->completed = 1; + targ->ctr = cur; + +quit: + /* reset the ``used`` flag. */ + targ->used = 0; + + return (NULL); +} + + static void -rx_output(uint64_t received, double delta) +tx_output(struct my_ctrs *cur, double delta, const char *msg) { - double pps; - char b1[40]; + double bw, raw_bw, pps, abs; + char b1[40], b2[80], b3[80]; + int size; - printf("Received %llu packets, in %.2f seconds.\n", - (unsigned long long) received, delta); + if (cur->pkts == 0) { + printf("%s nothing.\n", msg); + return; + } + size = (int)(cur->bytes / cur->pkts); + + printf("%s %llu packets %llu bytes %llu events %d bytes each in %.2f seconds.\n", + msg, + (unsigned long long)cur->pkts, + (unsigned long long)cur->bytes, + (unsigned long long)cur->events, size, delta); if (delta == 0) delta = 1e-6; - pps = received / delta; - printf("Speed: %spps\n", norm(b1, pps)); + if (size < 60) /* correct for min packet size */ + size = 60; + pps = cur->pkts / delta; + bw = (8.0 * cur->bytes) / delta; + /* raw packets have4 bytes crc + 20 bytes framing */ + raw_bw = (8.0 * (cur->pkts * 24 + cur->bytes)) / delta; + abs = cur->pkts / (double)(cur->events); + + printf("Speed: %spps Bandwidth: %sbps (raw %sbps). 
Average batch: %.2f pkts\n", + norm(b1, pps), norm(b2, bw), norm(b3, raw_bw), abs); } static void usage(void) { const char *cmd = "pkt-gen"; fprintf(stderr, "Usage:\n" "%s arguments\n" "\t-i interface interface name\n" - "\t-f function tx rx ping pong\n" + "\t-f function tx rx ping pong txseq rxseq\n" "\t-n count number of iterations (can be 0)\n" - "\t-t pkts_to_send also forces tx mode\n" + "\t-t pkts_to_send also forces tx mode\n" "\t-r pkts_to_receive also forces rx mode\n" "\t-l pkt_size in bytes excluding CRC\n" "\t-d dst_ip[:port[-dst_ip:port]] single or range\n" "\t-s src_ip[:port[-src_ip:port]] single or range\n" "\t-D dst-mac\n" "\t-S src-mac\n" "\t-a cpu_id use setaffinity\n" "\t-b burst size testing, mostly\n" "\t-c cores cores to use\n" "\t-p threads processes/threads to use\n" "\t-T report_ms milliseconds between reports\n" - "\t-P use libpcap instead of netmap\n" "\t-w wait_for_link_time in seconds\n" "\t-R rate in packets per second\n" "\t-X dump payload\n" "\t-H len add empty virtio-net-header with size 'len'\n" + "\t-E pipes allocate extra space for a number of pipes\n" + "\t-r do not touch the buffers (send rubbish)\n" "\t-P file load packet from pcap file\n" "\t-z use random IPv4 src address/port\n" "\t-Z use random IPv4 dst address/port\n" + "\t-F num_frags send multi-slot packets\n" + "\t-A activate pps stats on receiver\n" "", cmd); exit(0); } +enum { + TD_TYPE_SENDER = 1, + TD_TYPE_RECEIVER, + TD_TYPE_OTHER, +}; + static void start_threads(struct glob_arg *g) { int i; targs = calloc(g->nthreads, sizeof(*targs)); /* * Now create the desired number of threads, each one * using a single descriptor. */ for (i = 0; i < g->nthreads; i++) { struct targ *t = &targs[i]; bzero(t, sizeof(*t)); t->fd = -1; /* default, with pcap */ t->g = g; if (g->dev_type == DEV_NETMAP) { struct nm_desc nmd = *g->nmd; /* copy, we overwrite ringid */ uint64_t nmd_flags = 0; nmd.self = &nmd; - if (g->nthreads > 1) { - if (nmd.req.nr_flags != NR_REG_ALL_NIC) { - D("invalid nthreads mode %d", nmd.req.nr_flags); + if (i > 0) { + /* the first thread uses the fd opened by the main + * thread, the other threads re-open /dev/netmap + */ + if (g->nthreads > 1) { + nmd.req.nr_flags = + g->nmd->req.nr_flags & ~NR_REG_MASK; + nmd.req.nr_flags |= NR_REG_ONE_NIC; + nmd.req.nr_ringid = i; + } + /* Only touch one of the rings (rx is already ok) */ + if (g->td_type == TD_TYPE_RECEIVER) + nmd_flags |= NETMAP_NO_TX_POLL; + + /* register interface. Override ifname and ringid etc. */ + t->nmd = nm_open(t->g->ifname, NULL, nmd_flags | + NM_OPEN_IFNAME | NM_OPEN_NO_MMAP, &nmd); + if (t->nmd == NULL) { + D("Unable to open %s: %s", + t->g->ifname, strerror(errno)); continue; } - nmd.req.nr_flags = NR_REG_ONE_NIC; - nmd.req.nr_ringid = i; + } else { + t->nmd = g->nmd; } - /* Only touch one of the rings (rx is already ok) */ - if (g->td_body == receiver_body) - nmd_flags |= NETMAP_NO_TX_POLL; - - /* register interface. Override ifname and ringid etc. 
*/ - if (g->options & OPT_MONITOR_TX) - nmd.req.nr_flags |= NR_MONITOR_TX; - if (g->options & OPT_MONITOR_RX) - nmd.req.nr_flags |= NR_MONITOR_RX; - - t->nmd = nm_open(t->g->ifname, NULL, nmd_flags | - NM_OPEN_IFNAME | NM_OPEN_NO_MMAP, &nmd); - if (t->nmd == NULL) { - D("Unable to open %s: %s", - t->g->ifname, strerror(errno)); - continue; - } t->fd = t->nmd->fd; - set_vnet_hdr_len(t); } else { targs[i].fd = g->main_fd; } t->used = 1; t->me = i; if (g->affinity >= 0) { - if (g->affinity < g->cpus) - t->affinity = g->affinity; - else - t->affinity = i % g->cpus; + t->affinity = (g->affinity + i) % g->system_cpus; } else { t->affinity = -1; } /* default, init packets */ initialize_packet(t); if (pthread_create(&t->thread, NULL, g->td_body, t) == -1) { D("Unable to create thread %d: %s", i, strerror(errno)); t->used = 0; } } } static void main_thread(struct glob_arg *g) { int i; - uint64_t prev = 0; - uint64_t count = 0; + struct my_ctrs prev, cur; double delta_t; struct timeval tic, toc; - gettimeofday(&toc, NULL); + prev.pkts = prev.bytes = prev.events = 0; + gettimeofday(&prev.t, NULL); for (;;) { - struct timeval now, delta; - uint64_t pps, usec, my_count, npkts; + char b1[40], b2[40], b3[40], b4[70]; + uint64_t pps, usec; + struct my_ctrs x; + double abs; int done = 0; - delta.tv_sec = g->report_interval/1000; - delta.tv_usec = (g->report_interval%1000)*1000; - select(0, NULL, NULL, NULL, &delta); - gettimeofday(&now, NULL); - timersub(&now, &toc, &toc); - my_count = 0; + usec = wait_for_next_report(&prev.t, &cur.t, + g->report_interval); + + cur.pkts = cur.bytes = cur.events = 0; + cur.min_space = 0; + if (usec < 10000) /* too short to be meaningful */ + continue; + /* accumulate counts for all threads */ for (i = 0; i < g->nthreads; i++) { - my_count += targs[i].count; + cur.pkts += targs[i].ctr.pkts; + cur.bytes += targs[i].ctr.bytes; + cur.events += targs[i].ctr.events; + cur.min_space += targs[i].ctr.min_space; + targs[i].ctr.min_space = 99999; if (targs[i].used == 0) done++; } - usec = toc.tv_sec* 1000000 + toc.tv_usec; - if (usec < 10000) - continue; - npkts = my_count - prev; - pps = (npkts*1000000 + usec/2) / usec; - D("%llu pps (%llu pkts in %llu usec)", - (unsigned long long)pps, - (unsigned long long)npkts, - (unsigned long long)usec); - prev = my_count; - toc = now; + x.pkts = cur.pkts - prev.pkts; + x.bytes = cur.bytes - prev.bytes; + x.events = cur.events - prev.events; + pps = (x.pkts*1000000 + usec/2) / usec; + abs = (x.events > 0) ? (x.pkts / (double) x.events) : 0; + + if (!(g->options & OPT_PPS_STATS)) { + strcpy(b4, ""); + } else { + /* Compute some pps stats using a sliding window. 
*/ + double ppsavg = 0.0, ppsdev = 0.0; + int nsamples = 0; + + g->win[g->win_idx] = pps; + g->win_idx = (g->win_idx + 1) % STATS_WIN; + + for (i = 0; i < STATS_WIN; i++) { + ppsavg += g->win[i]; + if (g->win[i]) { + nsamples ++; + } + } + ppsavg /= nsamples; + + for (i = 0; i < STATS_WIN; i++) { + if (g->win[i] == 0) { + continue; + } + ppsdev += (g->win[i] - ppsavg) * (g->win[i] - ppsavg); + } + ppsdev /= nsamples; + ppsdev = sqrt(ppsdev); + + snprintf(b4, sizeof(b4), "[avg/std %s/%s pps]", + norm(b1, ppsavg), norm(b2, ppsdev)); + } + + D("%spps %s(%spkts %sbps in %llu usec) %.2f avg_batch %d min_space", + norm(b1, pps), b4, + norm(b2, (double)x.pkts), + norm(b3, (double)x.bytes*8), + (unsigned long long)usec, + abs, (int)cur.min_space); + prev = cur; + if (done == g->nthreads) break; } timerclear(&tic); timerclear(&toc); + cur.pkts = cur.bytes = cur.events = 0; + /* final round */ for (i = 0; i < g->nthreads; i++) { struct timespec t_tic, t_toc; /* * Join active threads, unregister interfaces and close * file descriptors. */ if (targs[i].used) - pthread_join(targs[i].thread, NULL); - close(targs[i].fd); + pthread_join(targs[i].thread, NULL); /* blocking */ + if (g->dev_type == DEV_NETMAP) { + nm_close(targs[i].nmd); + targs[i].nmd = NULL; + } else { + close(targs[i].fd); + } if (targs[i].completed == 0) D("ouch, thread %d exited with error", i); /* * Collect threads output and extract information about * how long it took to send all the packets. */ - count += targs[i].count; + cur.pkts += targs[i].ctr.pkts; + cur.bytes += targs[i].ctr.bytes; + cur.events += targs[i].ctr.events; + /* collect the largest start (tic) and end (toc) times, + * XXX maybe we should do the earliest tic, or do a weighted + * average ? + */ t_tic = timeval2spec(&tic); t_toc = timeval2spec(&toc); if (!timerisset(&tic) || timespec_ge(&targs[i].tic, &t_tic)) tic = timespec2val(&targs[i].tic); if (!timerisset(&toc) || timespec_ge(&targs[i].toc, &t_toc)) toc = timespec2val(&targs[i].toc); } /* print output. */ timersub(&toc, &tic, &toc); delta_t = toc.tv_sec + 1e-6* toc.tv_usec; - if (g->td_body == sender_body) - tx_output(count, g->pkt_size, delta_t); + if (g->td_type == TD_TYPE_SENDER) + tx_output(&cur, delta_t, "Sent"); else - rx_output(count, delta_t); - - if (g->dev_type == DEV_NETMAP) { - munmap(g->nmd->mem, g->nmd->req.nr_memsize); - close(g->main_fd); - } + tx_output(&cur, delta_t, "Received"); } - -struct sf { +struct td_desc { + int ty; char *key; void *f; }; -static struct sf func[] = { - { "tx", sender_body }, - { "rx", receiver_body }, - { "ping", pinger_body }, - { "pong", ponger_body }, - { NULL, NULL } +static struct td_desc func[] = { + { TD_TYPE_SENDER, "tx", sender_body }, + { TD_TYPE_RECEIVER, "rx", receiver_body }, + { TD_TYPE_OTHER, "ping", pinger_body }, + { TD_TYPE_OTHER, "pong", ponger_body }, + { TD_TYPE_SENDER, "txseq", txseq_body }, + { TD_TYPE_RECEIVER, "rxseq", rxseq_body }, + { 0, NULL, NULL } }; static int tap_alloc(char *dev) { struct ifreq ifr; int fd, err; char *clonedev = TAP_CLONEDEV; (void)err; (void)dev; /* Arguments taken by the function: * * char *dev: the name of an interface (or '\0'). MUST have enough * space to hold the interface name if '\0' is passed * int flags: interface flags (eg, IFF_TUN etc.) 
*/ #ifdef __FreeBSD__ if (dev[3]) { /* tapSomething */ static char buf[128]; snprintf(buf, sizeof(buf), "/dev/%s", dev); clonedev = buf; } #endif /* open the device */ if( (fd = open(clonedev, O_RDWR)) < 0 ) { return fd; } D("%s open successful", clonedev); /* preparation of the struct ifr, of type "struct ifreq" */ memset(&ifr, 0, sizeof(ifr)); #ifdef linux ifr.ifr_flags = IFF_TAP | IFF_NO_PI; if (*dev) { /* if a device name was specified, put it in the structure; otherwise, * the kernel will try to allocate the "next" device of the * specified type */ strncpy(ifr.ifr_name, dev, IFNAMSIZ); } /* try to create the device */ if( (err = ioctl(fd, TUNSETIFF, (void *) &ifr)) < 0 ) { D("failed to to a TUNSETIFF: %s", strerror(errno)); close(fd); return err; } /* if the operation was successful, write back the name of the * interface to the variable "dev", so the caller can know * it. Note that the caller MUST reserve space in *dev (see calling * code below) */ strcpy(dev, ifr.ifr_name); D("new name is %s", dev); #endif /* linux */ /* this is the special file descriptor that the caller will use to talk * with the virtual interface */ return fd; } int main(int arc, char **argv) { int i; + struct sigaction sa; + sigset_t ss; struct glob_arg g; int ch; int wait_link = 2; int devqueues = 1; /* how many device queues */ bzero(&g, sizeof(g)); g.main_fd = -1; g.td_body = receiver_body; + g.td_type = TD_TYPE_RECEIVER; g.report_interval = 1000; /* report interval */ g.affinity = -1; /* ip addresses can also be a range x.x.x.x-x.x.x.y */ g.src_ip.name = "10.0.0.1"; g.dst_ip.name = "10.1.0.1"; g.dst_mac.name = "ff:ff:ff:ff:ff:ff"; g.src_mac.name = NULL; g.pkt_size = 60; g.burst = 512; // default g.nthreads = 1; - g.cpus = 1; + g.cpus = 1; // default g.forever = 1; g.tx_rate = 0; g.frags = 1; g.nmr_config = ""; g.virt_header = 0; while ( (ch = getopt(arc, argv, - "a:f:F:n:i:Il:d:s:D:S:b:c:o:p:T:w:WvR:XC:H:e:m:P:zZ")) != -1) { - struct sf *fn; + "a:f:F:n:i:Il:d:s:D:S:b:c:o:p:T:w:WvR:XC:H:e:E:m:rP:zZA")) != -1) { + struct td_desc *fn; switch(ch) { default: D("bad option %c %s", ch, optarg); usage(); break; case 'n': - g.npackets = atoi(optarg); + g.npackets = strtoull(optarg, NULL, 10); break; case 'F': i = atoi(optarg); if (i < 1 || i > 63) { D("invalid frags %d [1..63], ignore", i); break; } g.frags = i; break; case 'f': for (fn = func; fn->key; fn++) { if (!strcmp(fn->key, optarg)) break; } - if (fn->key) + if (fn->key) { g.td_body = fn->f; - else + g.td_type = fn->ty; + } else { D("unrecognised function %s", optarg); + } break; case 'o': /* data generation options */ g.options = atoi(optarg); break; case 'a': /* force affinity */ g.affinity = atoi(optarg); break; case 'i': /* interface */ /* a prefix of tap: netmap: or pcap: forces the mode. 
* otherwise we guess */ D("interface is %s", optarg); if (strlen(optarg) > MAX_IFNAMELEN - 8) { D("ifname too long %s", optarg); break; } strcpy(g.ifname, optarg); if (!strcmp(optarg, "null")) { g.dev_type = DEV_NETMAP; g.dummy_send = 1; } else if (!strncmp(optarg, "tap:", 4)) { g.dev_type = DEV_TAP; strcpy(g.ifname, optarg + 4); } else if (!strncmp(optarg, "pcap:", 5)) { g.dev_type = DEV_PCAP; strcpy(g.ifname, optarg + 5); } else if (!strncmp(optarg, "netmap:", 7) || !strncmp(optarg, "vale", 4)) { g.dev_type = DEV_NETMAP; } else if (!strncmp(optarg, "tap", 3)) { g.dev_type = DEV_TAP; } else { /* prepend netmap: */ g.dev_type = DEV_NETMAP; sprintf(g.ifname, "netmap:%s", optarg); } break; case 'I': g.options |= OPT_INDIRECT; /* XXX use indirect buffer */ break; case 'l': /* pkt_size */ g.pkt_size = atoi(optarg); break; case 'd': g.dst_ip.name = optarg; break; case 's': g.src_ip.name = optarg; break; case 'T': /* report interval */ g.report_interval = atoi(optarg); break; case 'w': wait_link = atoi(optarg); break; case 'W': /* XXX changed default */ g.forever = 0; /* do not exit rx even with no traffic */ break; case 'b': /* burst */ g.burst = atoi(optarg); break; case 'c': g.cpus = atoi(optarg); break; case 'p': g.nthreads = atoi(optarg); break; case 'D': /* destination mac */ g.dst_mac.name = optarg; break; case 'S': /* source mac */ g.src_mac.name = optarg; break; case 'v': verbose++; break; case 'R': g.tx_rate = atoi(optarg); break; case 'X': g.options |= OPT_DUMP; break; case 'C': g.nmr_config = strdup(optarg); break; case 'H': g.virt_header = atoi(optarg); break; case 'e': /* extra bufs */ g.extra_bufs = atoi(optarg); break; - case 'm': - if (strcmp(optarg, "tx") == 0) { - g.options |= OPT_MONITOR_TX; - } else if (strcmp(optarg, "rx") == 0) { - g.options |= OPT_MONITOR_RX; - } else { - D("unrecognized monitor mode %s", optarg); - } + case 'E': + g.extra_pipes = atoi(optarg); break; case 'P': g.packet_file = strdup(optarg); break; + case 'm': + /* ignored */ + break; + case 'r': + g.options |= OPT_RUBBISH; + break; case 'z': g.options |= OPT_RANDOM_SRC; break; case 'Z': g.options |= OPT_RANDOM_DST; break; + case 'A': + g.options |= OPT_PPS_STATS; + break; } } if (strlen(g.ifname) <=0 ) { D("missing ifname"); usage(); } - i = system_ncpus(); + g.system_cpus = i = system_ncpus(); if (g.cpus < 0 || g.cpus > i) { D("%d cpus is too high, have only %d cpus", g.cpus, i); usage(); } +D("running on %d cpus (have %d)", g.cpus, i); if (g.cpus == 0) g.cpus = i; if (g.pkt_size < 16 || g.pkt_size > MAX_PKTSIZE) { D("bad pktsize %d [16..%d]\n", g.pkt_size, MAX_PKTSIZE); usage(); } if (g.src_mac.name == NULL) { static char mybuf[20] = "00:00:00:00:00:00"; /* retrieve source mac address. 
*/ if (source_hwaddr(g.ifname, mybuf) == -1) { D("Unable to retrieve source mac"); // continue, fail later } g.src_mac.name = mybuf; } /* extract address ranges */ extract_ip_range(&g.src_ip); extract_ip_range(&g.dst_ip); extract_mac_range(&g.src_mac); extract_mac_range(&g.dst_mac); if (g.src_ip.start != g.src_ip.end || g.src_ip.port0 != g.src_ip.port1 || g.dst_ip.start != g.dst_ip.end || g.dst_ip.port0 != g.dst_ip.port1) g.options |= OPT_COPY; if (g.virt_header != 0 && g.virt_header != VIRT_HDR_1 && g.virt_header != VIRT_HDR_2) { D("bad virtio-net-header length"); usage(); } if (g.dev_type == DEV_TAP) { D("want to use tap %s", g.ifname); g.main_fd = tap_alloc(g.ifname); if (g.main_fd < 0) { D("cannot open tap %s", g.ifname); usage(); } #ifndef NO_PCAP } else if (g.dev_type == DEV_PCAP) { char pcap_errbuf[PCAP_ERRBUF_SIZE]; pcap_errbuf[0] = '\0'; // init the buffer g.p = pcap_open_live(g.ifname, 256 /* XXX */, 1, 100, pcap_errbuf); if (g.p == NULL) { D("cannot open pcap on %s", g.ifname); usage(); } g.main_fd = pcap_fileno(g.p); D("using pcap on %s fileno %d", g.ifname, g.main_fd); #endif /* !NO_PCAP */ } else if (g.dummy_send) { /* but DEV_NETMAP */ D("using a dummy send routine"); } else { struct nmreq base_nmd; bzero(&base_nmd, sizeof(base_nmd)); parse_nmr_config(g.nmr_config, &base_nmd); if (g.extra_bufs) { base_nmd.nr_arg3 = g.extra_bufs; } + if (g.extra_pipes) { + base_nmd.nr_arg1 = g.extra_pipes; + } + base_nmd.nr_flags |= NR_ACCEPT_VNET_HDR; + /* * Open the netmap device using nm_open(). * * protocol stack and may cause a reset of the card, * which in turn may take some time for the PHY to * reconfigure. We do the open here to have time to reset. */ g.nmd = nm_open(g.ifname, &base_nmd, 0, NULL); if (g.nmd == NULL) { D("Unable to open %s: %s", g.ifname, strerror(errno)); goto out; } + + if (g.nthreads > 1) { + struct nm_desc saved_desc = *g.nmd; + saved_desc.self = &saved_desc; + saved_desc.mem = NULL; + nm_close(g.nmd); + saved_desc.req.nr_flags &= ~NR_REG_MASK; + saved_desc.req.nr_flags |= NR_REG_ONE_NIC; + saved_desc.req.nr_ringid = 0; + g.nmd = nm_open(g.ifname, &base_nmd, NM_OPEN_IFNAME, &saved_desc); + if (g.nmd == NULL) { + D("Unable to open %s: %s", g.ifname, strerror(errno)); + goto out; + } + } g.main_fd = g.nmd->fd; D("mapped %dKB at %p", g.nmd->req.nr_memsize>>10, g.nmd->mem); - /* get num of queues in tx or rx */ - if (g.td_body == sender_body) + if (g.virt_header) { + /* Set the virtio-net header length, since the user asked + * for it explicitely. */ + set_vnet_hdr_len(&g); + } else { + /* Check whether the netmap port we opened requires us to send + * and receive frames with virtio-net header. */ + get_vnet_hdr_len(&g); + } + + /* get num of queues in tx or rx */ + if (g.td_type == TD_TYPE_SENDER) devqueues = g.nmd->req.nr_tx_rings; - else + else devqueues = g.nmd->req.nr_rx_rings; /* validate provided nthreads. 
*/ if (g.nthreads < 1 || g.nthreads > devqueues) { D("bad nthreads %d, have %d queues", g.nthreads, devqueues); // continue, fail later } if (verbose) { struct netmap_if *nifp = g.nmd->nifp; struct nmreq *req = &g.nmd->req; D("nifp at offset %d, %d tx %d rx region %d", req->nr_offset, req->nr_tx_rings, req->nr_rx_rings, req->nr_arg2); for (i = 0; i <= req->nr_tx_rings; i++) { struct netmap_ring *ring = NETMAP_TXRING(nifp, i); - D(" TX%d at 0x%lx slots %d", i, - (char *)ring - (char *)nifp, ring->num_slots); + D(" TX%d at 0x%p slots %d", i, + (void *)((char *)ring - (char *)nifp), ring->num_slots); } for (i = 0; i <= req->nr_rx_rings; i++) { struct netmap_ring *ring = NETMAP_RXRING(nifp, i); - D(" RX%d at 0x%lx slots %d", i, - (char *)ring - (char *)nifp, ring->num_slots); + D(" RX%d at 0x%p slots %d", i, + (void *)((char *)ring - (char *)nifp), ring->num_slots); } } /* Print some debug information. */ fprintf(stdout, "%s %s: %d queues, %d threads and %d cpus.\n", - (g.td_body == sender_body) ? "Sending on" : "Receiving from", + (g.td_type == TD_TYPE_SENDER) ? "Sending on" : + ((g.td_type == TD_TYPE_RECEIVER) ? "Receiving from" : + "Working on"), g.ifname, devqueues, g.nthreads, g.cpus); - if (g.td_body == sender_body) { + if (g.td_type == TD_TYPE_SENDER) { fprintf(stdout, "%s -> %s (%s -> %s)\n", g.src_ip.name, g.dst_ip.name, g.src_mac.name, g.dst_mac.name); } out: /* Exit if something went wrong. */ if (g.main_fd < 0) { D("aborting"); usage(); } } if (g.options) { - D("--- SPECIAL OPTIONS:%s%s%s%s%s\n", + D("--- SPECIAL OPTIONS:%s%s%s%s%s%s\n", g.options & OPT_PREFETCH ? " prefetch" : "", g.options & OPT_ACCESS ? " access" : "", g.options & OPT_MEMCPY ? " memcpy" : "", g.options & OPT_INDIRECT ? " indirect" : "", - g.options & OPT_COPY ? " copy" : ""); + g.options & OPT_COPY ? " copy" : "", + g.options & OPT_RUBBISH ? " rubbish " : ""); } g.tx_period.tv_sec = g.tx_period.tv_nsec = 0; if (g.tx_rate > 0) { /* try to have at least something every second, * reducing the burst size to some 0.01s worth of data * (but no less than one full set of fragments) */ uint64_t x; int lim = (g.tx_rate)/300; if (g.burst > lim) g.burst = lim; if (g.burst < g.frags) g.burst = g.frags; x = ((uint64_t)1000000000 * (uint64_t)g.burst) / (uint64_t) g.tx_rate; g.tx_period.tv_nsec = x; g.tx_period.tv_sec = g.tx_period.tv_nsec / 1000000000; g.tx_period.tv_nsec = g.tx_period.tv_nsec % 1000000000; } - if (g.td_body == sender_body) + if (g.td_type == TD_TYPE_SENDER) D("Sending %d packets every %ld.%09ld s", g.burst, g.tx_period.tv_sec, g.tx_period.tv_nsec); /* Wait for PHY reset. */ D("Wait %d secs for phy reset", wait_link); sleep(wait_link); D("Ready..."); /* Install ^C handler. 
*/ global_nthreads = g.nthreads; - signal(SIGINT, sigint_h); - + sigemptyset(&ss); + sigaddset(&ss, SIGINT); + /* block SIGINT now, so that all created threads will inherit the mask */ + if (pthread_sigmask(SIG_BLOCK, &ss, NULL) < 0) { + D("failed to block SIGINT: %s", strerror(errno)); + } start_threads(&g);
+ /* Install the handler and re-enable SIGINT for the main thread */ + bzero(&sa, sizeof(sa)); + sa.sa_handler = sigint_h; + if (sigaction(SIGINT, &sa, NULL) < 0) { + D("failed to install ^C handler: %s", strerror(errno)); + } + + if (pthread_sigmask(SIG_UNBLOCK, &ss, NULL) < 0) { + D("failed to re-enable SIGINT: %s", strerror(errno)); + } main_thread(&g); + free(targs); return 0; } /* end of file */
Index: head/tools/tools/netmap/vale-ctl.c =================================================================== --- head/tools/tools/netmap/vale-ctl.c (revision 307393) +++ head/tools/tools/netmap/vale-ctl.c (revision 307394) @@ -1,235 +1,276 @@
/* * Copyright (C) 2013-2014 Michio Honda. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* $FreeBSD$ */
+#define NETMAP_WITH_LIBS +#include <net/netmap.h> +#include <net/netmap_user.h> + #include <errno.h> #include <stdio.h> #include <inttypes.h> /* PRI* macros */ #include <string.h> /* strcmp */ #include <fcntl.h> /* open */ #include <unistd.h> /* close */ #include <sys/ioctl.h> /* ioctl */ #include <sys/param.h> #include <sys/socket.h> /* apple needs sockaddr */ #include <net/if.h> /* ifreq */ -#include <net/netmap.h> -#include <net/netmap_user.h> #include <libgen.h> /* basename */ #include <stdlib.h> /* atoi, free */
-/* debug support */ -#define ND(format, ...) do {} while(0) -#define D(format, ...) \ - fprintf(stderr, "%s [%d] " format "\n", \ - __FUNCTION__, __LINE__, ##__VA_ARGS__)
/* XXX cut and paste from pkt-gen.c because I'm not sure whether this * program may include nm_util.h */ void parse_nmr_config(const char* conf, struct nmreq *nmr) { char *w, *tok; int i, v; nmr->nr_tx_rings = nmr->nr_rx_rings = 0; nmr->nr_tx_slots = nmr->nr_rx_slots = 0; if (conf == NULL || !
*conf) return; w = strdup(conf); for (i = 0, tok = strtok(w, ","); tok; i++, tok = strtok(NULL, ",")) { v = atoi(tok); switch (i) { case 0: nmr->nr_tx_slots = nmr->nr_rx_slots = v; break; case 1: nmr->nr_rx_slots = v; break; case 2: nmr->nr_tx_rings = nmr->nr_rx_rings = v; break; case 3: nmr->nr_rx_rings = v; break; default: D("ignored config: %s", tok); break; } } D("txr %d txd %d rxr %d rxd %d", nmr->nr_tx_rings, nmr->nr_tx_slots, nmr->nr_rx_rings, nmr->nr_rx_slots); free(w); } static int bdg_ctl(const char *name, int nr_cmd, int nr_arg, char *nmr_config) { struct nmreq nmr; int error = 0; int fd = open("/dev/netmap", O_RDWR); if (fd == -1) { D("Unable to open /dev/netmap"); return -1; } bzero(&nmr, sizeof(nmr)); nmr.nr_version = NETMAP_API; if (name != NULL) /* might be NULL */ strncpy(nmr.nr_name, name, sizeof(nmr.nr_name)); nmr.nr_cmd = nr_cmd; parse_nmr_config(nmr_config, &nmr); switch (nr_cmd) { case NETMAP_BDG_DELIF: case NETMAP_BDG_NEWIF: error = ioctl(fd, NIOCREGIF, &nmr); if (error == -1) { ND("Unable to %s %s", nr_cmd == NETMAP_BDG_DELIF ? "delete":"create", name); perror(name); } else { ND("Success to %s %s", nr_cmd == NETMAP_BDG_DELIF ? "delete":"create", name); } break; case NETMAP_BDG_ATTACH: case NETMAP_BDG_DETACH: - if (nr_arg && nr_arg != NETMAP_BDG_HOST) + nmr.nr_flags = NR_REG_ALL_NIC; + if (nr_arg && nr_arg != NETMAP_BDG_HOST) { + nmr.nr_flags = NR_REG_NIC_SW; nr_arg = 0; + } nmr.nr_arg1 = nr_arg; error = ioctl(fd, NIOCREGIF, &nmr); if (error == -1) { ND("Unable to %s %s to the bridge", nr_cmd == NETMAP_BDG_DETACH?"detach":"attach", name); perror(name); } else ND("Success to %s %s to the bridge", nr_cmd == NETMAP_BDG_DETACH?"detach":"attach", name); break; case NETMAP_BDG_LIST: if (strlen(nmr.nr_name)) { /* name to bridge/port info */ error = ioctl(fd, NIOCGINFO, &nmr); if (error) { ND("Unable to obtain info for %s", name); perror(name); } else D("%s at bridge:%d port:%d", name, nmr.nr_arg1, nmr.nr_arg2); break; } /* scan all the bridges and ports */ nmr.nr_arg1 = nmr.nr_arg2 = 0; for (; !ioctl(fd, NIOCGINFO, &nmr); nmr.nr_arg2++) { D("bridge:%d port:%d %s", nmr.nr_arg1, nmr.nr_arg2, nmr.nr_name); nmr.nr_name[0] = '\0'; } break; + case NETMAP_BDG_POLLING_ON: + case NETMAP_BDG_POLLING_OFF: + /* We reuse nmreq fields as follows: + * nr_tx_slots: 0 and non-zero indicate REG_ALL_NIC + * REG_ONE_NIC, respectively. + * nr_rx_slots: CPU core index. This also indicates the + * first queue in the case of REG_ONE_NIC + * nr_tx_rings: (REG_ONE_NIC only) indicates the + * number of CPU cores or the last queue + */ + nmr.nr_flags |= nmr.nr_tx_slots ? + NR_REG_ONE_NIC : NR_REG_ALL_NIC; + nmr.nr_ringid = nmr.nr_rx_slots; + /* number of cores/rings */ + if (nmr.nr_flags == NR_REG_ALL_NIC) + nmr.nr_arg1 = 1; + else + nmr.nr_arg1 = nmr.nr_tx_rings; + + error = ioctl(fd, NIOCREGIF, &nmr); + if (!error) + D("polling on %s %s", nmr.nr_name, + nr_cmd == NETMAP_BDG_POLLING_ON ? + "started" : "stopped"); + else + D("polling on %s %s (err %d)", nmr.nr_name, + nr_cmd == NETMAP_BDG_POLLING_ON ? 
+ "couldn't start" : "couldn't stop", error); + break; + default: /* GINFO */ nmr.nr_cmd = nmr.nr_arg1 = nmr.nr_arg2 = 0; error = ioctl(fd, NIOCGINFO, &nmr); if (error) { ND("Unable to get if info for %s", name); perror(name); } else D("%s: %d queues.", name, nmr.nr_rx_rings); break; } close(fd); return error; } int main(int argc, char *argv[]) { int ch, nr_cmd = 0, nr_arg = 0; const char *command = basename(argv[0]); char *name = NULL, *nmr_config = NULL; - if (argc > 3) { + if (argc > 5) { usage: fprintf(stderr, "Usage:\n" "%s arguments\n" "\t-g interface interface name to get info\n" "\t-d interface interface name to be detached\n" "\t-a interface interface name to be attached\n" "\t-h interface interface name to be attached with the host stack\n" "\t-n interface interface name to be created\n" "\t-r interface interface name to be deleted\n" "\t-l list all or specified bridge's interfaces (default)\n" "\t-C string ring/slot setting of an interface creating by -n\n" + "\t-p interface start polling. Additional -C x,y,z configures\n" + "\t\t x: 0 (REG_ALL_NIC) or 1 (REG_ONE_NIC),\n" + "\t\t y: CPU core id for ALL_NIC and core/ring for ONE_NIC\n" + "\t\t z: (ONE_NIC only) num of total cores/rings\n" + "\t-P interface stop polling\n" "", command); return 0; } - while ((ch = getopt(argc, argv, "d:a:h:g:l:n:r:C:")) != -1) { - name = optarg; /* default */ + while ((ch = getopt(argc, argv, "d:a:h:g:l:n:r:C:p:P:")) != -1) { + if (ch != 'C') + name = optarg; /* default */ switch (ch) { default: fprintf(stderr, "bad option %c %s", ch, optarg); goto usage; case 'd': nr_cmd = NETMAP_BDG_DETACH; break; case 'a': nr_cmd = NETMAP_BDG_ATTACH; break; case 'h': nr_cmd = NETMAP_BDG_ATTACH; nr_arg = NETMAP_BDG_HOST; break; case 'n': nr_cmd = NETMAP_BDG_NEWIF; break; case 'r': nr_cmd = NETMAP_BDG_DELIF; break; case 'g': nr_cmd = 0; break; case 'l': nr_cmd = NETMAP_BDG_LIST; if (optind < argc && argv[optind][0] == '-') name = NULL; break; case 'C': nmr_config = strdup(optarg); break; + case 'p': + nr_cmd = NETMAP_BDG_POLLING_ON; + break; + case 'P': + nr_cmd = NETMAP_BDG_POLLING_OFF; + break; } - if (optind != argc) { - // fprintf(stderr, "optind %d argc %d\n", optind, argc); - goto usage; - } + } + if (optind != argc) { + // fprintf(stderr, "optind %d argc %d\n", optind, argc); + goto usage; } if (argc == 1) nr_cmd = NETMAP_BDG_LIST; return bdg_ctl(name, nr_cmd, nr_arg, nmr_config) ? 1 : 0; }