Index: head/sys/dev/netmap/netmap.c =================================================================== --- head/sys/dev/netmap/netmap.c (revision 345268) +++ head/sys/dev/netmap/netmap.c (revision 345269) @@ -1,4217 +1,4259 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2011-2014 Matteo Landi * Copyright (C) 2011-2016 Luigi Rizzo * Copyright (C) 2011-2016 Giuseppe Lettieri * Copyright (C) 2011-2016 Vincenzo Maffione * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * This module supports memory mapped access to network devices, * see netmap(4). * * The module uses a large, memory pool allocated by the kernel * and accessible as mmapped memory by multiple userspace threads/processes. * The memory pool contains packet buffers and "netmap rings", * i.e. user-accessible copies of the interface's queues. * * Access to the network card works like this: * 1. a process/thread issues one or more open() on /dev/netmap, to create * select()able file descriptor on which events are reported. * 2. on each descriptor, the process issues an ioctl() to identify * the interface that should report events to the file descriptor. * 3. on each descriptor, the process issues an mmap() request to * map the shared memory region within the process' address space. * The list of interesting queues is indicated by a location in * the shared memory region. * 4. using the functions in the netmap(4) userspace API, a process * can look up the occupation state of a queue, access memory buffers, * and retrieve received packets or enqueue packets to transmit. * 5. using some ioctl()s the process can synchronize the userspace view * of the queue with the actual status in the kernel. This includes both * receiving the notification of new packets, and transmitting new * packets on the output interface. * 6. select() or poll() can be used to wait for events on individual * transmit or receive queues (or all queues for a given interface). * SYNCHRONIZATION (USER) The netmap rings and data structures may be shared among multiple user threads or even independent processes. Any synchronization among those threads/processes is delegated to the threads themselves. Only one thread at a time can be in a system call on the same netmap ring. 
The OS does not enforce this and only guarantees against system crashes in case of invalid usage. LOCKING (INTERNAL) Within the kernel, access to the netmap rings is protected as follows: - a spinlock on each ring, to handle producer/consumer races on RX rings attached to the host stack (against multiple host threads writing from the host stack to the same ring), and on 'destination' rings attached to a VALE switch (i.e. RX rings in VALE ports, and TX rings in NIC/host ports) protecting multiple active senders for the same destination) - an atomic variable to guarantee that there is at most one instance of *_*xsync() on the ring at any time. For rings connected to user file descriptors, an atomic_test_and_set() protects this, and the lock on the ring is not actually used. For NIC RX rings connected to a VALE switch, an atomic_test_and_set() is also used to prevent multiple executions (the driver might indeed already guarantee this). For NIC TX rings connected to a VALE switch, the lock arbitrates access to the queue (both when allocating buffers and when pushing them out). - *xsync() should be protected against initializations of the card. On FreeBSD most devices have the reset routine protected by a RING lock (ixgbe, igb, em) or core lock (re). lem is missing the RING protection on rx_reset(), this should be added. On linux there is an external lock on the tx path, which probably also arbitrates access to the reset routine. XXX to be revised - a per-interface core_lock protecting access from the host stack while interfaces may be detached from netmap mode. XXX there should be no need for this lock if we detach the interfaces only while they are down. --- VALE SWITCH --- NMG_LOCK() serializes all modifications to switches and ports. A switch cannot be deleted until all ports are gone. For each switch, an SX lock (RWlock on linux) protects deletion of ports. When configuring or deleting a new port, the lock is acquired in exclusive mode (after holding NMG_LOCK). When forwarding, the lock is acquired in shared mode (without NMG_LOCK). The lock is held throughout the entire forwarding cycle, during which the thread may incur in a page fault. Hence it is important that sleepable shared locks are used. On the rx ring, the per-port lock is grabbed initially to reserve a number of slot in the ring, then the lock is released, packets are copied from source to destination, and then the lock is acquired again and the receive ring is updated. (A similar thing is done on the tx ring for NIC and host stack ports attached to the switch) */ /* --- internals ---- * * Roadmap to the code that implements the above. * * > 1. a process/thread issues one or more open() on /dev/netmap, to create * > select()able file descriptor on which events are reported. * * Internally, we allocate a netmap_priv_d structure, that will be * initialized on ioctl(NIOCREGIF). There is one netmap_priv_d * structure for each open(). * * os-specific: * FreeBSD: see netmap_open() (netmap_freebsd.c) * linux: see linux_netmap_open() (netmap_linux.c) * * > 2. on each descriptor, the process issues an ioctl() to identify * > the interface that should report events to the file descriptor. * * Implemented by netmap_ioctl(), NIOCREGIF case, with nmr->nr_cmd==0. * Most important things happen in netmap_get_na() and * netmap_do_regif(), called from there. Additional details can be * found in the comments above those functions. 
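 *
 *	For reference, the userspace side of this NIOCREGIF path (steps 1-3
 *	above) looks roughly as follows.  This is only an illustrative sketch
 *	with error handling omitted; the authoritative interface is described
 *	in netmap(4) and net/netmap_user.h.
 *
 *		struct nmreq req;
 *		int fd = open("/dev/netmap", O_RDWR);		// step 1
 *		bzero(&req, sizeof(req));
 *		req.nr_version = NETMAP_API;
 *		strlcpy(req.nr_name, "em0", sizeof(req.nr_name));
 *		req.nr_flags = NR_REG_ALL_NIC;
 *		ioctl(fd, NIOCREGIF, &req);			// step 2
 *		void *mem = mmap(NULL, req.nr_memsize,		// step 3
 *		    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 *		struct netmap_if *nifp = NETMAP_IF(mem, req.nr_offset);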
 *
 * In all cases, the NIOCREGIF processing creates/takes-a-reference-to a
 * netmap_*_adapter describing the port, and allocates a netmap_if
 * and all necessary netmap rings, filling them with netmap buffers.
 *
 * In this phase, the sync callbacks for each ring are set (these are used
 * in steps 5 and 6 below). The callbacks depend on the type of adapter.
 * The adapter creation/initialization code puts them in the
 * netmap_adapter (fields na->nm_txsync and na->nm_rxsync). Then, they
 * are copied from there to the netmap_kring's during netmap_do_regif(), by
 * the nm_krings_create() callback. All the nm_krings_create callbacks
 * actually call netmap_krings_create() to perform this and the other
 * common stuff. netmap_krings_create() also takes care of the host rings,
 * if needed, by setting their sync callbacks appropriately.
 *
 * Additional actions depend on the kind of netmap_adapter that has been
 * registered:
 *
 * - netmap_hw_adapter:		[netmap.c]
 *	This is a system netdev/ifp with native netmap support.
 *	The ifp is detached from the host stack by redirecting:
 *	- transmissions (from the network stack) to netmap_transmit()
 *	- receive notifications to the nm_notify() callback for
 *	  this adapter. The callback is normally netmap_notify(), unless
 *	  the ifp is attached to a bridge using bwrap, in which case it
 *	  is netmap_bwrap_intr_notify().
 *
 * - netmap_generic_adapter:	[netmap_generic.c]
 *	A system netdev/ifp without native netmap support.
 *
 *	(the decision about native/non-native support is taken in
 *	 netmap_get_hw_na(), called by netmap_get_na())
 *
 * - netmap_vp_adapter		[netmap_vale.c]
 *	Returned by netmap_get_bdg_na().
 *	This is a persistent or ephemeral VALE port. Ephemeral ports
 *	are created on the fly if they don't already exist, and are
 *	always attached to a bridge.
 *	Persistent VALE ports must be created separately, and then
 *	attached like normal NICs. The NIOCREGIF we are examining
 *	will find them only if they had previously been created and
 *	attached (see VALE_CTL below).
 *
 * - netmap_pipe_adapter	[netmap_pipe.c]
 *	Returned by netmap_get_pipe_na().
 *	Both pipe ends are created, if they didn't already exist.
 *
 * - netmap_monitor_adapter	[netmap_monitor.c]
 *	Returned by netmap_get_monitor_na().
 *	If successful, the nm_sync callbacks of the monitored adapter
 *	will be intercepted by the returned monitor.
 *
 * - netmap_bwrap_adapter	[netmap_vale.c]
 *	Cannot be obtained in this way, see VALE_CTL below.
 *
 *
 * os-specific:
 *	linux: we first go through linux_netmap_ioctl() to
 *	       adapt the FreeBSD interface to the linux one.
 *
 *
 * > 3. on each descriptor, the process issues an mmap() request to
 * >    map the shared memory region within the process' address space.
 * >    The list of interesting queues is indicated by a location in
 * >    the shared memory region.
 *
 * os-specific:
 *	FreeBSD: netmap_mmap_single (netmap_freebsd.c).
 *	linux:   linux_netmap_mmap (netmap_linux.c).
 *
 * > 4. using the functions in the netmap(4) userspace API, a process
 * >    can look up the occupation state of a queue, access memory buffers,
 * >    and retrieve received packets or enqueue packets to transmit.
 *
 *	These actions do not involve the kernel.
 *
 * > 5. using some ioctl()s the process can synchronize the userspace view
 * >    of the queue with the actual status in the kernel. This includes both
 * >    receiving the notification of new packets, and transmitting new
 * >    packets on the output interface.
 *
 * These are implemented in netmap_ioctl(), NIOCTXSYNC and NIOCRXSYNC
They invoke the nm_sync callbacks on the netmap_kring * structures, as initialized in step 2 and maybe later modified * by a monitor. Monitors, however, will always call the original * callback before doing anything else. * * * > 6. select() or poll() can be used to wait for events on individual * > transmit or receive queues (or all queues for a given interface). * * Implemented in netmap_poll(). This will call the same nm_sync() * callbacks as in step 5 above. * * os-specific: * linux: we first go through linux_netmap_poll() to adapt * the FreeBSD interface to the linux one. * * * ---- VALE_CTL ----- * * VALE switches are controlled by issuing a NIOCREGIF with a non-null * nr_cmd in the nmreq structure. These subcommands are handled by * netmap_bdg_ctl() in netmap_vale.c. Persistent VALE ports are created * and destroyed by issuing the NETMAP_BDG_NEWIF and NETMAP_BDG_DELIF * subcommands, respectively. * * Any network interface known to the system (including a persistent VALE * port) can be attached to a VALE switch by issuing the * NETMAP_REQ_VALE_ATTACH command. After the attachment, persistent VALE ports * look exactly like ephemeral VALE ports (as created in step 2 above). The * attachment of other interfaces, instead, requires the creation of a * netmap_bwrap_adapter. Moreover, the attached interface must be put in * netmap mode. This may require the creation of a netmap_generic_adapter if * we have no native support for the interface, or if generic adapters have * been forced by sysctl. * * Both persistent VALE ports and bwraps are handled by netmap_get_bdg_na(), * called by nm_bdg_ctl_attach(), and discriminated by the nm_bdg_attach() * callback. In the case of the bwrap, the callback creates the * netmap_bwrap_adapter. The initialization of the bwrap is then * completed by calling netmap_do_regif() on it, in the nm_bdg_ctl() * callback (netmap_bwrap_bdg_ctl in netmap_vale.c). * A generic adapter for the wrapped ifp will be created if needed, when * netmap_get_bdg_na() calls netmap_get_hw_na(). 
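 *
 * For illustration only (check net/netmap.h for the exact structure
 * layouts), attaching an interface to a VALE switch through the control
 * API sketched above looks roughly like this:
 *
 *	struct nmreq_header hdr;
 *	struct nmreq_vale_attach req;
 *
 *	bzero(&hdr, sizeof(hdr));
 *	bzero(&req, sizeof(req));
 *	hdr.nr_version = NETMAP_API;
 *	hdr.nr_reqtype = NETMAP_REQ_VALE_ATTACH;
 *	strlcpy(hdr.nr_name, "vale0:em0", sizeof(hdr.nr_name));
 *	hdr.nr_body = (uintptr_t)&req;
 *	req.reg.nr_mode = NR_REG_ALL_NIC;
 *	ioctl(fd, NIOCCTRL, &hdr);	// fd is an open /dev/netmap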
* * * ---- DATAPATHS ----- * * -= SYSTEM DEVICE WITH NATIVE SUPPORT =- * * na == NA(ifp) == netmap_hw_adapter created in DEVICE_netmap_attach() * * - tx from netmap userspace: * concurrently: * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context * kring->nm_sync() == DEVICE_netmap_txsync() * 2) device interrupt handler * na->nm_notify() == netmap_notify() * - rx from netmap userspace: * concurrently: * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context * kring->nm_sync() == DEVICE_netmap_rxsync() * 2) device interrupt handler * na->nm_notify() == netmap_notify() * - rx from host stack * concurrently: * 1) host stack * netmap_transmit() * na->nm_notify == netmap_notify() * 2) ioctl(NIOCRXSYNC)/netmap_poll() in process context * kring->nm_sync() == netmap_rxsync_from_host * netmap_rxsync_from_host(na, NULL, NULL) * - tx to host stack * ioctl(NIOCTXSYNC)/netmap_poll() in process context * kring->nm_sync() == netmap_txsync_to_host * netmap_txsync_to_host(na) * nm_os_send_up() * FreeBSD: na->if_input() == ether_input() * linux: netif_rx() with NM_MAGIC_PRIORITY_RX * * * -= SYSTEM DEVICE WITH GENERIC SUPPORT =- * * na == NA(ifp) == generic_netmap_adapter created in generic_netmap_attach() * * - tx from netmap userspace: * concurrently: * 1) ioctl(NIOCTXSYNC)/netmap_poll() in process context * kring->nm_sync() == generic_netmap_txsync() * nm_os_generic_xmit_frame() * linux: dev_queue_xmit() with NM_MAGIC_PRIORITY_TX * ifp->ndo_start_xmit == generic_ndo_start_xmit() * gna->save_start_xmit == orig. dev. start_xmit * FreeBSD: na->if_transmit() == orig. dev if_transmit * 2) generic_mbuf_destructor() * na->nm_notify() == netmap_notify() * - rx from netmap userspace: * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context * kring->nm_sync() == generic_netmap_rxsync() * mbq_safe_dequeue() * 2) device driver * generic_rx_handler() * mbq_safe_enqueue() * na->nm_notify() == netmap_notify() * - rx from host stack * FreeBSD: same as native * Linux: same as native except: * 1) host stack * dev_queue_xmit() without NM_MAGIC_PRIORITY_TX * ifp->ndo_start_xmit == generic_ndo_start_xmit() * netmap_transmit() * na->nm_notify() == netmap_notify() * - tx to host stack (same as native): * * * -= VALE =- * * INCOMING: * * - VALE ports: * ioctl(NIOCTXSYNC)/netmap_poll() in process context * kring->nm_sync() == netmap_vp_txsync() * * - system device with native support: * from cable: * interrupt * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring) * kring->nm_sync() == DEVICE_netmap_rxsync() * netmap_vp_txsync() * kring->nm_sync() == DEVICE_netmap_rxsync() * from host stack: * netmap_transmit() * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring) * kring->nm_sync() == netmap_rxsync_from_host() * netmap_vp_txsync() * * - system device with generic support: * from device driver: * generic_rx_handler() * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr != host ring) * kring->nm_sync() == generic_netmap_rxsync() * netmap_vp_txsync() * kring->nm_sync() == generic_netmap_rxsync() * from host stack: * netmap_transmit() * na->nm_notify() == netmap_bwrap_intr_notify(ring_nr == host ring) * kring->nm_sync() == netmap_rxsync_from_host() * netmap_vp_txsync() * * (all cases) --> nm_bdg_flush() * dest_na->nm_notify() == (see below) * * OUTGOING: * * - VALE ports: * concurrently: * 1) ioctl(NIOCRXSYNC)/netmap_poll() in process context * kring->nm_sync() == netmap_vp_rxsync() * 2) from nm_bdg_flush() * na->nm_notify() == netmap_notify() * * - system device with native support: * to cable: * 
na->nm_notify() == netmap_bwrap_notify() * netmap_vp_rxsync() * kring->nm_sync() == DEVICE_netmap_txsync() * netmap_vp_rxsync() * to host stack: * netmap_vp_rxsync() * kring->nm_sync() == netmap_txsync_to_host * netmap_vp_rxsync_locked() * * - system device with generic adapter: * to device driver: * na->nm_notify() == netmap_bwrap_notify() * netmap_vp_rxsync() * kring->nm_sync() == generic_netmap_txsync() * netmap_vp_rxsync() * to host stack: * netmap_vp_rxsync() * kring->nm_sync() == netmap_txsync_to_host * netmap_vp_rxsync() * */ /* * OS-specific code that is used only within this file. * Other OS-specific code that must be accessed by drivers * is present in netmap_kern.h */ #if defined(__FreeBSD__) #include /* prerequisite */ #include #include #include /* defines used in kernel.h */ #include /* types used in module initialization */ #include /* cdevsw struct, UID, GID */ #include /* FIONBIO */ #include #include /* struct socket */ #include #include #include #include /* sockaddrs */ #include #include #include #include #include #include #include /* BIOCIMMEDIATE */ #include /* bus_dmamap_* */ #include #include #include /* ETHER_BPF_MTAP */ #elif defined(linux) #include "bsd_glue.h" #elif defined(__APPLE__) #warning OSX support is only partial #include "osx_glue.h" #elif defined (_WIN32) #include "win_glue.h" #else #error Unsupported platform #endif /* unsupported */ /* * common headers */ #include #include #include /* user-controlled variables */ int netmap_verbose; #ifdef CONFIG_NETMAP_DEBUG int netmap_debug; #endif /* CONFIG_NETMAP_DEBUG */ static int netmap_no_timestamp; /* don't timestamp on rxsync */ int netmap_no_pendintr = 1; int netmap_txsync_retry = 2; static int netmap_fwd = 0; /* force transparent forwarding */ /* * netmap_admode selects the netmap mode to use. * Invalid values are reset to NETMAP_ADMODE_BEST */ enum { NETMAP_ADMODE_BEST = 0, /* use native, fallback to generic */ NETMAP_ADMODE_NATIVE, /* either native or none */ NETMAP_ADMODE_GENERIC, /* force generic */ NETMAP_ADMODE_LAST }; static int netmap_admode = NETMAP_ADMODE_BEST; /* netmap_generic_mit controls mitigation of RX notifications for * the generic netmap adapter. The value is a time interval in * nanoseconds. */ int netmap_generic_mit = 100*1000; /* We use by default netmap-aware qdiscs with generic netmap adapters, * even if there can be a little performance hit with hardware NICs. * However, using the qdisc is the safer approach, for two reasons: * 1) it prevents non-fifo qdiscs to break the TX notification * scheme, which is based on mbuf destructors when txqdisc is * not used. * 2) it makes it possible to transmit over software devices that * change skb->dev, like bridge, veth, ... * * Anyway users looking for the best performance should * use native adapters. */ #ifdef linux int netmap_generic_txqdisc = 1; #endif /* Default number of slots and queues for generic adapters. */ int netmap_generic_ringsize = 1024; int netmap_generic_rings = 1; /* Non-zero to enable checksum offloading in NIC drivers */ int netmap_generic_hwcsum = 0; /* Non-zero if ptnet devices are allowed to use virtio-net headers. 
*/ int ptnet_vnet_hdr = 1; /* * SYSCTL calls are grouped between SYSBEGIN and SYSEND to be emulated * in some other operating systems */ SYSBEGIN(main_init); SYSCTL_DECL(_dev_netmap); SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args"); SYSCTL_INT(_dev_netmap, OID_AUTO, verbose, CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode"); #ifdef CONFIG_NETMAP_DEBUG SYSCTL_INT(_dev_netmap, OID_AUTO, debug, CTLFLAG_RW, &netmap_debug, 0, "Debug messages"); #endif /* CONFIG_NETMAP_DEBUG */ SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp, CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp"); SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr, CTLFLAG_RW, &netmap_no_pendintr, 0, "Always look for new received packets."); SYSCTL_INT(_dev_netmap, OID_AUTO, txsync_retry, CTLFLAG_RW, &netmap_txsync_retry, 0, "Number of txsync loops in bridge's flush."); SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0, "Force NR_FORWARD mode"); SYSCTL_INT(_dev_netmap, OID_AUTO, admode, CTLFLAG_RW, &netmap_admode, 0, "Adapter mode. 0 selects the best option available," "1 forces native adapter, 2 forces emulated adapter"); SYSCTL_INT(_dev_netmap, OID_AUTO, generic_hwcsum, CTLFLAG_RW, &netmap_generic_hwcsum, 0, "Hardware checksums. 0 to disable checksum generation by the NIC (default)," "1 to enable checksum generation by the NIC"); SYSCTL_INT(_dev_netmap, OID_AUTO, generic_mit, CTLFLAG_RW, &netmap_generic_mit, 0, "RX notification interval in nanoseconds"); SYSCTL_INT(_dev_netmap, OID_AUTO, generic_ringsize, CTLFLAG_RW, &netmap_generic_ringsize, 0, "Number of per-ring slots for emulated netmap mode"); SYSCTL_INT(_dev_netmap, OID_AUTO, generic_rings, CTLFLAG_RW, &netmap_generic_rings, 0, "Number of TX/RX queues for emulated netmap adapters"); #ifdef linux SYSCTL_INT(_dev_netmap, OID_AUTO, generic_txqdisc, CTLFLAG_RW, &netmap_generic_txqdisc, 0, "Use qdisc for generic adapters"); #endif SYSCTL_INT(_dev_netmap, OID_AUTO, ptnet_vnet_hdr, CTLFLAG_RW, &ptnet_vnet_hdr, 0, "Allow ptnet devices to use virtio-net headers"); SYSEND; NMG_LOCK_T netmap_global_lock; /* * mark the ring as stopped, and run through the locks * to make sure other users get to see it. * stopped must be either NR_KR_STOPPED (for unbounded stop) * of NR_KR_LOCKED (brief stop for mutual exclusion purposes) */ static void netmap_disable_ring(struct netmap_kring *kr, int stopped) { nm_kr_stop(kr, stopped); // XXX check if nm_kr_stop is sufficient mtx_lock(&kr->q_lock); mtx_unlock(&kr->q_lock); nm_kr_put(kr); } /* stop or enable a single ring */ void netmap_set_ring(struct netmap_adapter *na, u_int ring_id, enum txrx t, int stopped) { if (stopped) netmap_disable_ring(NMR(na, t)[ring_id], stopped); else NMR(na, t)[ring_id]->nkr_stopped = 0; } /* stop or enable all the rings of na */ void netmap_set_all_rings(struct netmap_adapter *na, int stopped) { int i; enum txrx t; if (!nm_netmap_on(na)) return; for_rx_tx(t) { for (i = 0; i < netmap_real_rings(na, t); i++) { netmap_set_ring(na, i, t, stopped); } } } /* * Convenience function used in drivers. Waits for current txsync()s/rxsync()s * to finish and prevents any new one from starting. Call this before turning * netmap mode off, or before removing the hardware rings (e.g., on module * onload). */ void netmap_disable_all_rings(struct ifnet *ifp) { if (NM_NA_VALID(ifp)) { netmap_set_all_rings(NA(ifp), NM_KR_STOPPED); } } /* * Convenience function used in drivers. Re-enables rxsync and txsync on the * adapter's rings In linux drivers, this should be placed near each * napi_enable(). 
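 *
 * A hypothetical driver reset path (names invented, sketch only) would
 * bracket the hardware reinitialization with the two helpers:
 *
 *	drv_reinit(struct drv_softc *sc)
 *	{
 *		netmap_disable_all_rings(sc->ifp);
 *		// ... stop, reprogram and restart the hardware rings ...
 *		netmap_enable_all_rings(sc->ifp);
 *	}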
*/ void netmap_enable_all_rings(struct ifnet *ifp) { if (NM_NA_VALID(ifp)) { netmap_set_all_rings(NA(ifp), 0 /* enabled */); } } void netmap_make_zombie(struct ifnet *ifp) { if (NM_NA_VALID(ifp)) { struct netmap_adapter *na = NA(ifp); netmap_set_all_rings(na, NM_KR_LOCKED); na->na_flags |= NAF_ZOMBIE; netmap_set_all_rings(na, 0); } } void netmap_undo_zombie(struct ifnet *ifp) { if (NM_NA_VALID(ifp)) { struct netmap_adapter *na = NA(ifp); if (na->na_flags & NAF_ZOMBIE) { netmap_set_all_rings(na, NM_KR_LOCKED); na->na_flags &= ~NAF_ZOMBIE; netmap_set_all_rings(na, 0); } } } /* * generic bound_checking function */ u_int nm_bound_var(u_int *v, u_int dflt, u_int lo, u_int hi, const char *msg) { u_int oldv = *v; const char *op = NULL; if (dflt < lo) dflt = lo; if (dflt > hi) dflt = hi; if (oldv < lo) { *v = dflt; op = "Bump"; } else if (oldv > hi) { *v = hi; op = "Clamp"; } if (op && msg) nm_prinf("%s %s to %d (was %d)", op, msg, *v, oldv); return *v; } /* * packet-dump function, user-supplied or static buffer. * The destination buffer must be at least 30+4*len */ const char * nm_dump_buf(char *p, int len, int lim, char *dst) { static char _dst[8192]; int i, j, i0; static char hex[] ="0123456789abcdef"; char *o; /* output position */ #define P_HI(x) hex[((x) & 0xf0)>>4] #define P_LO(x) hex[((x) & 0xf)] #define P_C(x) ((x) >= 0x20 && (x) <= 0x7e ? (x) : '.') if (!dst) dst = _dst; if (lim <= 0 || lim > len) lim = len; o = dst; sprintf(o, "buf 0x%p len %d lim %d\n", p, len, lim); o += strlen(o); /* hexdump routine */ for (i = 0; i < lim; ) { sprintf(o, "%5d: ", i); o += strlen(o); memset(o, ' ', 48); i0 = i; for (j=0; j < 16 && i < lim; i++, j++) { o[j*3] = P_HI(p[i]); o[j*3+1] = P_LO(p[i]); } i = i0; for (j=0; j < 16 && i < lim; i++, j++) o[j + 48] = P_C(p[i]); o[j+48] = '\n'; o += j+49; } *o = '\0'; #undef P_HI #undef P_LO #undef P_C return dst; } /* * Fetch configuration from the device, to cope with dynamic * reconfigurations after loading the module. 
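 *
 * For illustration, a driver implementing the nm_config callback would
 * fill in a struct nm_config_info along these lines (the softc layout
 * below is invented):
 *
 *	static int
 *	drv_netmap_config(struct netmap_adapter *na, struct nm_config_info *info)
 *	{
 *		struct drv_softc *sc = if_getsoftc(na->ifp);
 *
 *		info->num_tx_rings = sc->num_tx_queues;
 *		info->num_rx_rings = sc->num_rx_queues;
 *		info->num_tx_descs = sc->num_tx_desc;
 *		info->num_rx_descs = sc->num_rx_desc;
 *		info->rx_buf_maxsize = sc->rx_buf_size;
 *		return 0;
 *	}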
*/ /* call with NMG_LOCK held */ int netmap_update_config(struct netmap_adapter *na) { struct nm_config_info info; bzero(&info, sizeof(info)); if (na->nm_config == NULL || na->nm_config(na, &info)) { /* take whatever we had at init time */ info.num_tx_rings = na->num_tx_rings; info.num_tx_descs = na->num_tx_desc; info.num_rx_rings = na->num_rx_rings; info.num_rx_descs = na->num_rx_desc; info.rx_buf_maxsize = na->rx_buf_maxsize; } if (na->num_tx_rings == info.num_tx_rings && na->num_tx_desc == info.num_tx_descs && na->num_rx_rings == info.num_rx_rings && na->num_rx_desc == info.num_rx_descs && na->rx_buf_maxsize == info.rx_buf_maxsize) return 0; /* nothing changed */ if (na->active_fds == 0) { na->num_tx_rings = info.num_tx_rings; na->num_tx_desc = info.num_tx_descs; na->num_rx_rings = info.num_rx_rings; na->num_rx_desc = info.num_rx_descs; na->rx_buf_maxsize = info.rx_buf_maxsize; if (netmap_verbose) nm_prinf("configuration changed for %s: txring %d x %d, " "rxring %d x %d, rxbufsz %d", na->name, na->num_tx_rings, na->num_tx_desc, na->num_rx_rings, na->num_rx_desc, na->rx_buf_maxsize); return 0; } nm_prerr("WARNING: configuration changed for %s while active: " "txring %d x %d, rxring %d x %d, rxbufsz %d", na->name, info.num_tx_rings, info.num_tx_descs, info.num_rx_rings, info.num_rx_descs, info.rx_buf_maxsize); return 1; } /* nm_sync callbacks for the host rings */ static int netmap_txsync_to_host(struct netmap_kring *kring, int flags); static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags); /* create the krings array and initialize the fields common to all adapters. * The array layout is this: * * +----------+ * na->tx_rings ----->| | \ * | | } na->num_tx_ring * | | / * +----------+ * | | host tx kring * na->rx_rings ----> +----------+ * | | \ * | | } na->num_rx_rings * | | / * +----------+ * | | host rx kring * +----------+ * na->tailroom ----->| | \ * | | } tailroom bytes * | | / * +----------+ * * Note: for compatibility, host krings are created even when not needed. * The tailroom space is currently used by vale ports for allocating leases. */ /* call with NMG_LOCK held */ int netmap_krings_create(struct netmap_adapter *na, u_int tailroom) { u_int i, len, ndesc; struct netmap_kring *kring; u_int n[NR_TXRX]; enum txrx t; int err = 0; if (na->tx_rings != NULL) { if (netmap_debug & NM_DEBUG_ON) nm_prerr("warning: krings were already created"); return 0; } /* account for the (possibly fake) host rings */ n[NR_TX] = netmap_all_rings(na, NR_TX); n[NR_RX] = netmap_all_rings(na, NR_RX); len = (n[NR_TX] + n[NR_RX]) * (sizeof(struct netmap_kring) + sizeof(struct netmap_kring *)) + tailroom; na->tx_rings = nm_os_malloc((size_t)len); if (na->tx_rings == NULL) { nm_prerr("Cannot allocate krings"); return ENOMEM; } na->rx_rings = na->tx_rings + n[NR_TX]; na->tailroom = na->rx_rings + n[NR_RX]; /* link the krings in the krings array */ kring = (struct netmap_kring *)((char *)na->tailroom + tailroom); for (i = 0; i < n[NR_TX] + n[NR_RX]; i++) { na->tx_rings[i] = kring; kring++; } /* * All fields in krings are 0 except the one initialized below. * but better be explicit on important kring fields. */ for_rx_tx(t) { ndesc = nma_get_ndesc(na, t); for (i = 0; i < n[t]; i++) { kring = NMR(na, t)[i]; bzero(kring, sizeof(*kring)); kring->notify_na = na; kring->ring_id = i; kring->tx = t; kring->nkr_num_slots = ndesc; kring->nr_mode = NKR_NETMAP_OFF; kring->nr_pending_mode = NKR_NETMAP_OFF; if (i < nma_get_nrings(na, t)) { kring->nm_sync = (t == NR_TX ? 
na->nm_txsync : na->nm_rxsync); } else { if (!(na->na_flags & NAF_HOST_RINGS)) kring->nr_kflags |= NKR_FAKERING; kring->nm_sync = (t == NR_TX ? netmap_txsync_to_host: netmap_rxsync_from_host); } kring->nm_notify = na->nm_notify; kring->rhead = kring->rcur = kring->nr_hwcur = 0; /* * IMPORTANT: Always keep one slot empty. */ kring->rtail = kring->nr_hwtail = (t == NR_TX ? ndesc - 1 : 0); snprintf(kring->name, sizeof(kring->name) - 1, "%s %s%d", na->name, nm_txrx2str(t), i); nm_prdis("ktx %s h %d c %d t %d", kring->name, kring->rhead, kring->rcur, kring->rtail); err = nm_os_selinfo_init(&kring->si, kring->name); if (err) { netmap_krings_delete(na); return err; } mtx_init(&kring->q_lock, (t == NR_TX ? "nm_txq_lock" : "nm_rxq_lock"), NULL, MTX_DEF); kring->na = na; /* setting this field marks the mutex as initialized */ } err = nm_os_selinfo_init(&na->si[t], na->name); if (err) { netmap_krings_delete(na); return err; } } return 0; } /* undo the actions performed by netmap_krings_create */ /* call with NMG_LOCK held */ void netmap_krings_delete(struct netmap_adapter *na) { struct netmap_kring **kring = na->tx_rings; enum txrx t; if (na->tx_rings == NULL) { if (netmap_debug & NM_DEBUG_ON) nm_prerr("warning: krings were already deleted"); return; } for_rx_tx(t) nm_os_selinfo_uninit(&na->si[t]); /* we rely on the krings layout described above */ for ( ; kring != na->tailroom; kring++) { if ((*kring)->na != NULL) mtx_destroy(&(*kring)->q_lock); nm_os_selinfo_uninit(&(*kring)->si); } nm_os_free(na->tx_rings); na->tx_rings = na->rx_rings = na->tailroom = NULL; } /* * Destructor for NIC ports. They also have an mbuf queue * on the rings connected to the host so we need to purge * them first. */ /* call with NMG_LOCK held */ void netmap_hw_krings_delete(struct netmap_adapter *na) { u_int lim = netmap_real_rings(na, NR_RX), i; for (i = nma_get_nrings(na, NR_RX); i < lim; i++) { struct mbq *q = &NMR(na, NR_RX)[i]->rx_queue; nm_prdis("destroy sw mbq with len %d", mbq_len(q)); mbq_purge(q); mbq_safe_fini(q); } netmap_krings_delete(na); } static void netmap_mem_drop(struct netmap_adapter *na) { int last = netmap_mem_deref(na->nm_mem, na); /* if the native allocator had been overrided on regif, * restore it now and drop the temporary one */ if (last && na->nm_mem_prev) { netmap_mem_put(na->nm_mem); na->nm_mem = na->nm_mem_prev; na->nm_mem_prev = NULL; } } /* * Undo everything that was done in netmap_do_regif(). In particular, * call nm_register(ifp,0) to stop netmap mode on the interface and * revert to normal operation. */ /* call with NMG_LOCK held */ static void netmap_unset_ringid(struct netmap_priv_d *); static void netmap_krings_put(struct netmap_priv_d *); void netmap_do_unregif(struct netmap_priv_d *priv) { struct netmap_adapter *na = priv->np_na; NMG_LOCK_ASSERT(); na->active_fds--; /* unset nr_pending_mode and possibly release exclusive mode */ netmap_krings_put(priv); #ifdef WITH_MONITOR /* XXX check whether we have to do something with monitor * when rings change nr_mode. */ if (na->active_fds <= 0) { /* walk through all the rings and tell any monitor * that the port is going to exit netmap mode */ netmap_monitor_stop(na); } #endif if (na->active_fds <= 0 || nm_kring_pending(priv)) { na->nm_register(na, 0); } /* delete rings and buffers that are no longer needed */ netmap_mem_rings_delete(na); if (na->active_fds <= 0) { /* last instance */ /* * (TO CHECK) We enter here * when the last reference to this file descriptor goes * away. 
 * This means we cannot have any pending poll() or interrupt routine
 * operating on the structure.
 * XXX The file may be closed in a thread while another thread is
 * using it.
 * Linux keeps the file opened until the last reference by any
 * outstanding ioctl/poll or mmap is gone.
 * FreeBSD does not track mmap()s (but we do) and wakes up any
 * sleeping poll(). Need to check what happens if the close() occurs
 * while a concurrent syscall is running.
 */
		if (netmap_debug & NM_DEBUG_ON)
			nm_prinf("deleting last instance for %s", na->name);

		if (nm_netmap_on(na)) {
			nm_prerr("BUG: netmap on while going to delete the krings");
		}

		na->nm_krings_delete(na);
+
+		/* restore the default number of host tx and rx rings */
+		na->num_host_tx_rings = 1;
+		na->num_host_rx_rings = 1;
	}

	/* possibly decrement counter of tx_si/rx_si users */
	netmap_unset_ringid(priv);
	/* delete the nifp */
	netmap_mem_if_delete(na, priv->np_nifp);
	/* drop the allocator */
	netmap_mem_drop(na);
	/* mark the priv as unregistered */
	priv->np_na = NULL;
	priv->np_nifp = NULL;
}

struct netmap_priv_d*
netmap_priv_new(void)
{
	struct netmap_priv_d *priv;

	priv = nm_os_malloc(sizeof(struct netmap_priv_d));
	if (priv == NULL)
		return NULL;
	priv->np_refs = 1;
	nm_os_get_module();
	return priv;
}

/*
 * Destructor of the netmap_priv_d, called when the fd is closed.
 * Action: undo all the things done by NIOCREGIF.
 * On FreeBSD we need to track whether there are active mmap()s,
 * and we use np_active_mmaps for that. On linux, the field is always 0.
 * Return: 1 if we can free priv, 0 otherwise.
 *
 */
/* call with NMG_LOCK held */
void
netmap_priv_delete(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;

	/* number of active references to this fd */
	if (--priv->np_refs > 0) {
		return;
	}
	nm_os_put_module();
	if (na) {
		netmap_do_unregif(priv);
	}
	netmap_unget_na(na, priv->np_ifp);
	bzero(priv, sizeof(*priv));	/* for safety */
	nm_os_free(priv);
}

/* call with NMG_LOCK *not* held */
void
netmap_dtor(void *data)
{
	struct netmap_priv_d *priv = data;

	NMG_LOCK();
	netmap_priv_delete(priv);
	NMG_UNLOCK();
}

/*
 * Handlers for synchronization of the rings from/to the host stack.
 * These are associated with a network interface and are just another
 * ring pair managed by userspace.
 *
 * Netmap also supports transparent forwarding (NS_FORWARD and NR_FORWARD
 * flags):
 *
 * - Before releasing buffers on hw RX rings, the application can mark
 *   them with the NS_FORWARD flag. During the next RXSYNC or poll(), they
 *   will be forwarded to the host stack, similarly to what would happen
 *   if the application had moved them to the host TX ring.
 *
 * - Before releasing buffers on the host RX ring, the application can
 *   mark them with the NS_FORWARD flag. During the next RXSYNC or poll(),
 *   they will be forwarded to the hw TX rings, saving the application
 *   from doing the same task in user-space.
 *
 * Transparent forwarding can be enabled per-ring, by setting the NR_FORWARD
 * flag, or globally with the netmap_fwd sysctl.
 *
 * The transfer NIC --> host is relatively easy, just encapsulate
 * into mbufs and we are done. The host --> NIC side is slightly
 * harder because there might not be room in the tx ring so it
 * might take a while before releasing the buffer.
 */

/*
 * Pass a whole queue of mbufs to the host stack as coming from 'dst'.
 * We do not need to lock because the queue is private.
 * After this call the queue is empty.
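 *
 * For reference, the application side of the transparent forwarding
 * described above is just (illustrative, assuming NR_FORWARD is set in
 * ring->flags or the dev.netmap.fwd sysctl is enabled):
 *
 *	struct netmap_ring *ring = NETMAP_RXRING(nifp, i);
 *	ring->slot[ring->cur].flags |= NS_FORWARD;
 *	ring->head = ring->cur = nm_ring_next(ring, ring->cur);
 *	// the next RXSYNC/poll() pushes the buffer to the other side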
*/ static void netmap_send_up(struct ifnet *dst, struct mbq *q) { struct mbuf *m; struct mbuf *head = NULL, *prev = NULL; /* Send packets up, outside the lock; head/prev machinery * is only useful for Windows. */ while ((m = mbq_dequeue(q)) != NULL) { if (netmap_debug & NM_DEBUG_HOST) nm_prinf("sending up pkt %p size %d", m, MBUF_LEN(m)); prev = nm_os_send_up(dst, m, prev); if (head == NULL) head = prev; } if (head) nm_os_send_up(dst, NULL, head); mbq_fini(q); } /* * Scan the buffers from hwcur to ring->head, and put a copy of those * marked NS_FORWARD (or all of them if forced) into a queue of mbufs. * Drop remaining packets in the unlikely event * of an mbuf shortage. */ static void netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force) { u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; u_int n; struct netmap_adapter *na = kring->na; for (n = kring->nr_hwcur; n != head; n = nm_next(n, lim)) { struct mbuf *m; struct netmap_slot *slot = &kring->ring->slot[n]; if ((slot->flags & NS_FORWARD) == 0 && !force) continue; if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE(na)) { nm_prlim(5, "bad pkt at %d len %d", n, slot->len); continue; } slot->flags &= ~NS_FORWARD; // XXX needed ? /* XXX TODO: adapt to the case of a multisegment packet */ m = m_devget(NMB(na, slot), slot->len, 0, na->ifp, NULL); if (m == NULL) break; mbq_enqueue(q, m); } } static inline int _nm_may_forward(struct netmap_kring *kring) { return ((netmap_fwd || kring->ring->flags & NR_FORWARD) && kring->na->na_flags & NAF_HOST_RINGS && kring->tx == NR_RX); } static inline int nm_may_forward_up(struct netmap_kring *kring) { return _nm_may_forward(kring) && kring->ring_id != kring->na->num_rx_rings; } static inline int nm_may_forward_down(struct netmap_kring *kring, int sync_flags) { return _nm_may_forward(kring) && (sync_flags & NAF_CAN_FORWARD_DOWN) && kring->ring_id == kring->na->num_rx_rings; } /* * Send to the NIC rings packets marked NS_FORWARD between * kring->nr_hwcur and kring->rhead. * Called under kring->rx_queue.lock on the sw rx ring. * * It can only be called if the user opened all the TX hw rings, * see NAF_CAN_FORWARD_DOWN flag. * We can touch the TX netmap rings (slots, head and cur) since * we are in poll/ioctl system call context, and the application * is not supposed to touch the ring (using a different thread) * during the execution of the system call. */ static u_int netmap_sw_to_nic(struct netmap_adapter *na) { struct netmap_kring *kring = na->rx_rings[na->num_rx_rings]; struct netmap_slot *rxslot = kring->ring->slot; u_int i, rxcur = kring->nr_hwcur; u_int const head = kring->rhead; u_int const src_lim = kring->nkr_num_slots - 1; u_int sent = 0; /* scan rings to find space, then fill as much as possible */ for (i = 0; i < na->num_tx_rings; i++) { struct netmap_kring *kdst = na->tx_rings[i]; struct netmap_ring *rdst = kdst->ring; u_int const dst_lim = kdst->nkr_num_slots - 1; /* XXX do we trust ring or kring->rcur,rtail ? */ for (; rxcur != head && !nm_ring_empty(rdst); rxcur = nm_next(rxcur, src_lim) ) { struct netmap_slot *src, *dst, tmp; u_int dst_head = rdst->head; src = &rxslot[rxcur]; if ((src->flags & NS_FORWARD) == 0 && !netmap_fwd) continue; sent++; dst = &rdst->slot[dst_head]; tmp = *src; src->buf_idx = dst->buf_idx; src->flags = NS_BUF_CHANGED; dst->buf_idx = tmp.buf_idx; dst->len = tmp.len; dst->flags = NS_BUF_CHANGED; rdst->head = rdst->cur = nm_next(dst_head, dst_lim); } /* if (sent) XXX txsync ? 
it would be just an optimization */ } return sent; } /* * netmap_txsync_to_host() passes packets up. We are called from a * system call in user process context, and the only contention * can be among multiple user threads erroneously calling * this routine concurrently. */ static int netmap_txsync_to_host(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; struct mbq q; /* Take packets from hwcur to head and pass them up. * Force hwcur = head since netmap_grab_packets() stops at head */ mbq_init(&q); netmap_grab_packets(kring, &q, 1 /* force */); nm_prdis("have %d pkts in queue", mbq_len(&q)); kring->nr_hwcur = head; kring->nr_hwtail = head + lim; if (kring->nr_hwtail > lim) kring->nr_hwtail -= lim + 1; netmap_send_up(na->ifp, &q); return 0; } /* * rxsync backend for packets coming from the host stack. * They have been put in kring->rx_queue by netmap_transmit(). * We protect access to the kring using kring->rx_queue.lock * * also moves to the nic hw rings any packet the user has marked * for transparent-mode forwarding, then sets the NR_FORWARD * flag in the kring to let the caller push them out */ static int netmap_rxsync_from_host(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->na; struct netmap_ring *ring = kring->ring; u_int nm_i, n; u_int const lim = kring->nkr_num_slots - 1; u_int const head = kring->rhead; int ret = 0; struct mbq *q = &kring->rx_queue, fq; mbq_init(&fq); /* fq holds packets to be freed */ mbq_lock(q); /* First part: import newly received packets */ n = mbq_len(q); if (n) { /* grab packets from the queue */ struct mbuf *m; uint32_t stop_i; nm_i = kring->nr_hwtail; stop_i = nm_prev(kring->nr_hwcur, lim); while ( nm_i != stop_i && (m = mbq_dequeue(q)) != NULL ) { int len = MBUF_LEN(m); struct netmap_slot *slot = &ring->slot[nm_i]; m_copydata(m, 0, len, NMB(na, slot)); nm_prdis("nm %d len %d", nm_i, len); if (netmap_debug & NM_DEBUG_HOST) nm_prinf("%s", nm_dump_buf(NMB(na, slot),len, 128, NULL)); slot->len = len; slot->flags = 0; nm_i = nm_next(nm_i, lim); mbq_enqueue(&fq, m); } kring->nr_hwtail = nm_i; } /* * Second part: skip past packets that userspace has released. */ nm_i = kring->nr_hwcur; if (nm_i != head) { /* something was released */ if (nm_may_forward_down(kring, flags)) { ret = netmap_sw_to_nic(na); if (ret > 0) { kring->nr_kflags |= NR_FORWARD; ret = 0; } } kring->nr_hwcur = head; } mbq_unlock(q); mbq_purge(&fq); mbq_fini(&fq); return ret; } /* Get a netmap adapter for the port. * * If it is possible to satisfy the request, return 0 * with *na containing the netmap adapter found. * Otherwise return an error code, with *na containing NULL. * * When the port is attached to a bridge, we always return * EBUSY. * Otherwise, if the port is already bound to a file descriptor, * then we unconditionally return the existing adapter into *na. 
* In all the other cases, we return (into *na) either native, * generic or NULL, according to the following table: * * native_support * active_fds dev.netmap.admode YES NO * ------------------------------------------------------- * >0 * NA(ifp) NA(ifp) * * 0 NETMAP_ADMODE_BEST NATIVE GENERIC * 0 NETMAP_ADMODE_NATIVE NATIVE NULL * 0 NETMAP_ADMODE_GENERIC GENERIC GENERIC * */ static void netmap_hw_dtor(struct netmap_adapter *); /* needed by NM_IS_NATIVE() */ int netmap_get_hw_na(struct ifnet *ifp, struct netmap_mem_d *nmd, struct netmap_adapter **na) { /* generic support */ int i = netmap_admode; /* Take a snapshot. */ struct netmap_adapter *prev_na; int error = 0; *na = NULL; /* default */ /* reset in case of invalid value */ if (i < NETMAP_ADMODE_BEST || i >= NETMAP_ADMODE_LAST) i = netmap_admode = NETMAP_ADMODE_BEST; if (NM_NA_VALID(ifp)) { prev_na = NA(ifp); /* If an adapter already exists, return it if * there are active file descriptors or if * netmap is not forced to use generic * adapters. */ if (NETMAP_OWNED_BY_ANY(prev_na) || i != NETMAP_ADMODE_GENERIC || prev_na->na_flags & NAF_FORCE_NATIVE #ifdef WITH_PIPES /* ugly, but we cannot allow an adapter switch * if some pipe is referring to this one */ || prev_na->na_next_pipe > 0 #endif ) { *na = prev_na; goto assign_mem; } } /* If there isn't native support and netmap is not allowed * to use generic adapters, we cannot satisfy the request. */ if (!NM_IS_NATIVE(ifp) && i == NETMAP_ADMODE_NATIVE) return EOPNOTSUPP; /* Otherwise, create a generic adapter and return it, * saving the previously used netmap adapter, if any. * * Note that here 'prev_na', if not NULL, MUST be a * native adapter, and CANNOT be a generic one. This is * true because generic adapters are created on demand, and * destroyed when not used anymore. Therefore, if the adapter * currently attached to an interface 'ifp' is generic, it * must be that * (NA(ifp)->active_fds > 0 || NETMAP_OWNED_BY_KERN(NA(ifp))). * Consequently, if NA(ifp) is generic, we will enter one of * the branches above. This ensures that we never override * a generic adapter with another generic adapter. */ error = generic_netmap_attach(ifp); if (error) return error; *na = NA(ifp); assign_mem: if (nmd != NULL && !((*na)->na_flags & NAF_MEM_OWNER) && (*na)->active_fds == 0 && ((*na)->nm_mem != nmd)) { (*na)->nm_mem_prev = (*na)->nm_mem; (*na)->nm_mem = netmap_mem_get(nmd); } return 0; } /* * MUST BE CALLED UNDER NMG_LOCK() * * Get a refcounted reference to a netmap adapter attached * to the interface specified by req. * This is always called in the execution of an ioctl(). * * Return ENXIO if the interface specified by the request does * not exist, ENOTSUP if netmap is not supported by the interface, * EBUSY if the interface is already attached to a bridge, * EINVAL if parameters are invalid, ENOMEM if needed resources * could not be allocated. * If successful, hold a reference to the netmap adapter. * * If the interface specified by req is a system one, also keep * a reference to it and return a valid *ifp. 
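 *
 * The expected calling pattern, for illustration, is:
 *
 *	NMG_LOCK();
 *	error = netmap_get_na(hdr, &na, &ifp, NULL, 1);	// 1 = create
 *	if (error == 0) {
 *		// ... use na (and ifp, if not NULL) ...
 *		netmap_unget_na(na, ifp);	// drop both references
 *	}
 *	NMG_UNLOCK();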
*/ int netmap_get_na(struct nmreq_header *hdr, struct netmap_adapter **na, struct ifnet **ifp, struct netmap_mem_d *nmd, int create) { struct nmreq_register *req = (struct nmreq_register *)(uintptr_t)hdr->nr_body; int error = 0; struct netmap_adapter *ret = NULL; int nmd_ref = 0; *na = NULL; /* default return value */ *ifp = NULL; if (hdr->nr_reqtype != NETMAP_REQ_REGISTER) { return EINVAL; } if (req->nr_mode == NR_REG_PIPE_MASTER || req->nr_mode == NR_REG_PIPE_SLAVE) { /* Do not accept deprecated pipe modes. */ nm_prerr("Deprecated pipe nr_mode, use xx{yy or xx}yy syntax"); return EINVAL; } NMG_LOCK_ASSERT(); /* if the request contain a memid, try to find the * corresponding memory region */ if (nmd == NULL && req->nr_mem_id) { nmd = netmap_mem_find(req->nr_mem_id); if (nmd == NULL) return EINVAL; /* keep the rereference */ nmd_ref = 1; } /* We cascade through all possible types of netmap adapter. * All netmap_get_*_na() functions return an error and an na, * with the following combinations: * * error na * 0 NULL type doesn't match * !0 NULL type matches, but na creation/lookup failed * 0 !NULL type matches and na created/found * !0 !NULL impossible */ error = netmap_get_null_na(hdr, na, nmd, create); if (error || *na != NULL) goto out; /* try to see if this is a monitor port */ error = netmap_get_monitor_na(hdr, na, nmd, create); if (error || *na != NULL) goto out; /* try to see if this is a pipe port */ error = netmap_get_pipe_na(hdr, na, nmd, create); if (error || *na != NULL) goto out; /* try to see if this is a bridge port */ error = netmap_get_vale_na(hdr, na, nmd, create); if (error) goto out; if (*na != NULL) /* valid match in netmap_get_bdg_na() */ goto out; /* * This must be a hardware na, lookup the name in the system. * Note that by hardware we actually mean "it shows up in ifconfig". * This may still be a tap, a veth/epair, or even a * persistent VALE port. */ *ifp = ifunit_ref(hdr->nr_name); if (*ifp == NULL) { error = ENXIO; goto out; } error = netmap_get_hw_na(*ifp, nmd, &ret); if (error) goto out; *na = ret; netmap_adapter_get(ret); + /* + * if the adapter supports the host rings and it is not alread open, + * try to set the number of host rings as requested by the user + */ + if (((*na)->na_flags & NAF_HOST_RINGS) && (*na)->active_fds == 0) { + if (req->nr_host_tx_rings) + (*na)->num_host_tx_rings = req->nr_host_tx_rings; + if (req->nr_host_rx_rings) + (*na)->num_host_rx_rings = req->nr_host_rx_rings; + } + nm_prdis("%s: host tx %d rx %u", (*na)->name, (*na)->num_host_tx_rings, + (*na)->num_host_rx_rings); + out: if (error) { if (ret) netmap_adapter_put(ret); if (*ifp) { if_rele(*ifp); *ifp = NULL; } } if (nmd_ref) netmap_mem_put(nmd); return error; } /* undo netmap_get_na() */ void netmap_unget_na(struct netmap_adapter *na, struct ifnet *ifp) { if (ifp) if_rele(ifp); if (na) netmap_adapter_put(na); } #define NM_FAIL_ON(t) do { \ if (unlikely(t)) { \ nm_prlim(5, "%s: fail '" #t "' " \ "h %d c %d t %d " \ "rh %d rc %d rt %d " \ "hc %d ht %d", \ kring->name, \ head, cur, ring->tail, \ kring->rhead, kring->rcur, kring->rtail, \ kring->nr_hwcur, kring->nr_hwtail); \ return kring->nkr_num_slots; \ } \ } while (0) /* * validate parameters on entry for *_txsync() * Returns ring->cur if ok, or something >= kring->nkr_num_slots * in case of error. * * rhead, rcur and rtail=hwtail are stored from previous round. * hwcur is the next packet to send to the ring. 
* * We want * hwcur <= *rhead <= head <= cur <= tail = *rtail <= hwtail * * hwcur, rhead, rtail and hwtail are reliable */ u_int nm_txsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring) { u_int head = ring->head; /* read only once */ u_int cur = ring->cur; /* read only once */ u_int n = kring->nkr_num_slots; nm_prdis(5, "%s kcur %d ktail %d head %d cur %d tail %d", kring->name, kring->nr_hwcur, kring->nr_hwtail, ring->head, ring->cur, ring->tail); #if 1 /* kernel sanity checks; but we can trust the kring. */ NM_FAIL_ON(kring->nr_hwcur >= n || kring->rhead >= n || kring->rtail >= n || kring->nr_hwtail >= n); #endif /* kernel sanity checks */ /* * user sanity checks. We only use head, * A, B, ... are possible positions for head: * * 0 A rhead B rtail C n-1 * 0 D rtail E rhead F n-1 * * B, F, D are valid. A, C, E are wrong */ if (kring->rtail >= kring->rhead) { /* want rhead <= head <= rtail */ NM_FAIL_ON(head < kring->rhead || head > kring->rtail); /* and also head <= cur <= rtail */ NM_FAIL_ON(cur < head || cur > kring->rtail); } else { /* here rtail < rhead */ /* we need head outside rtail .. rhead */ NM_FAIL_ON(head > kring->rtail && head < kring->rhead); /* two cases now: head <= rtail or head >= rhead */ if (head <= kring->rtail) { /* want head <= cur <= rtail */ NM_FAIL_ON(cur < head || cur > kring->rtail); } else { /* head >= rhead */ /* cur must be outside rtail..head */ NM_FAIL_ON(cur > kring->rtail && cur < head); } } if (ring->tail != kring->rtail) { nm_prlim(5, "%s tail overwritten was %d need %d", kring->name, ring->tail, kring->rtail); ring->tail = kring->rtail; } kring->rhead = head; kring->rcur = cur; return head; } /* * validate parameters on entry for *_rxsync() * Returns ring->head if ok, kring->nkr_num_slots on error. * * For a valid configuration, * hwcur <= head <= cur <= tail <= hwtail * * We only consider head and cur. * hwcur and hwtail are reliable. * */ u_int nm_rxsync_prologue(struct netmap_kring *kring, struct netmap_ring *ring) { uint32_t const n = kring->nkr_num_slots; uint32_t head, cur; nm_prdis(5,"%s kc %d kt %d h %d c %d t %d", kring->name, kring->nr_hwcur, kring->nr_hwtail, ring->head, ring->cur, ring->tail); /* * Before storing the new values, we should check they do not * move backwards. 
However: * - head is not an issue because the previous value is hwcur; * - cur could in principle go back, however it does not matter * because we are processing a brand new rxsync() */ cur = kring->rcur = ring->cur; /* read only once */ head = kring->rhead = ring->head; /* read only once */ #if 1 /* kernel sanity checks */ NM_FAIL_ON(kring->nr_hwcur >= n || kring->nr_hwtail >= n); #endif /* kernel sanity checks */ /* user sanity checks */ if (kring->nr_hwtail >= kring->nr_hwcur) { /* want hwcur <= rhead <= hwtail */ NM_FAIL_ON(head < kring->nr_hwcur || head > kring->nr_hwtail); /* and also rhead <= rcur <= hwtail */ NM_FAIL_ON(cur < head || cur > kring->nr_hwtail); } else { /* we need rhead outside hwtail..hwcur */ NM_FAIL_ON(head < kring->nr_hwcur && head > kring->nr_hwtail); /* two cases now: head <= hwtail or head >= hwcur */ if (head <= kring->nr_hwtail) { /* want head <= cur <= hwtail */ NM_FAIL_ON(cur < head || cur > kring->nr_hwtail); } else { /* cur must be outside hwtail..head */ NM_FAIL_ON(cur < head && cur > kring->nr_hwtail); } } if (ring->tail != kring->rtail) { nm_prlim(5, "%s tail overwritten was %d need %d", kring->name, ring->tail, kring->rtail); ring->tail = kring->rtail; } return head; } /* * Error routine called when txsync/rxsync detects an error. * Can't do much more than resetting head = cur = hwcur, tail = hwtail * Return 1 on reinit. * * This routine is only called by the upper half of the kernel. * It only reads hwcur (which is changed only by the upper half, too) * and hwtail (which may be changed by the lower half, but only on * a tx ring and only to increase it, so any error will be recovered * on the next call). For the above, we don't strictly need to call * it under lock. */ int netmap_ring_reinit(struct netmap_kring *kring) { struct netmap_ring *ring = kring->ring; u_int i, lim = kring->nkr_num_slots - 1; int errors = 0; // XXX KASSERT nm_kr_tryget nm_prlim(10, "called for %s", kring->name); // XXX probably wrong to trust userspace kring->rhead = ring->head; kring->rcur = ring->cur; kring->rtail = ring->tail; if (ring->cur > lim) errors++; if (ring->head > lim) errors++; if (ring->tail > lim) errors++; for (i = 0; i <= lim; i++) { u_int idx = ring->slot[i].buf_idx; u_int len = ring->slot[i].len; if (idx < 2 || idx >= kring->na->na_lut.objtotal) { nm_prlim(5, "bad index at slot %d idx %d len %d ", i, idx, len); ring->slot[i].buf_idx = 0; ring->slot[i].len = 0; } else if (len > NETMAP_BUF_SIZE(kring->na)) { ring->slot[i].len = 0; nm_prlim(5, "bad len at slot %d idx %d len %d", i, idx, len); } } if (errors) { nm_prlim(10, "total %d errors", errors); nm_prlim(10, "%s reinit, cur %d -> %d tail %d -> %d", kring->name, ring->cur, kring->nr_hwcur, ring->tail, kring->nr_hwtail); ring->head = kring->rhead = kring->nr_hwcur; ring->cur = kring->rcur = kring->nr_hwcur; ring->tail = kring->rtail = kring->nr_hwtail; } return (errors ? 
1 : 0); } /* interpret the ringid and flags fields of an nmreq, by translating them * into a pair of intervals of ring indices: * * [priv->np_txqfirst, priv->np_txqlast) and * [priv->np_rxqfirst, priv->np_rxqlast) * */ int netmap_interp_ringid(struct netmap_priv_d *priv, uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags) { struct netmap_adapter *na = priv->np_na; int excluded_direction[] = { NR_TX_RINGS_ONLY, NR_RX_RINGS_ONLY }; enum txrx t; u_int j; for_rx_tx(t) { if (nr_flags & excluded_direction[t]) { priv->np_qfirst[t] = priv->np_qlast[t] = 0; continue; } switch (nr_mode) { case NR_REG_ALL_NIC: case NR_REG_NULL: priv->np_qfirst[t] = 0; priv->np_qlast[t] = nma_get_nrings(na, t); nm_prdis("ALL/PIPE: %s %d %d", nm_txrx2str(t), priv->np_qfirst[t], priv->np_qlast[t]); break; case NR_REG_SW: case NR_REG_NIC_SW: if (!(na->na_flags & NAF_HOST_RINGS)) { nm_prerr("host rings not supported"); return EINVAL; } priv->np_qfirst[t] = (nr_mode == NR_REG_SW ? nma_get_nrings(na, t) : 0); priv->np_qlast[t] = netmap_all_rings(na, t); nm_prdis("%s: %s %d %d", nr_mode == NR_REG_SW ? "SW" : "NIC+SW", nm_txrx2str(t), priv->np_qfirst[t], priv->np_qlast[t]); break; case NR_REG_ONE_NIC: if (nr_ringid >= na->num_tx_rings && nr_ringid >= na->num_rx_rings) { nm_prerr("invalid ring id %d", nr_ringid); return EINVAL; } /* if not enough rings, use the first one */ j = nr_ringid; if (j >= nma_get_nrings(na, t)) j = 0; priv->np_qfirst[t] = j; priv->np_qlast[t] = j + 1; nm_prdis("ONE_NIC: %s %d %d", nm_txrx2str(t), priv->np_qfirst[t], priv->np_qlast[t]); break; + case NR_REG_ONE_SW: + if (!(na->na_flags & NAF_HOST_RINGS)) { + nm_prerr("host rings not supported"); + return EINVAL; + } + if (nr_ringid >= na->num_host_tx_rings && + nr_ringid >= na->num_host_rx_rings) { + nm_prerr("invalid ring id %d", nr_ringid); + return EINVAL; + } + /* if not enough rings, use the first one */ + j = nr_ringid; + if (j >= nma_get_host_nrings(na, t)) + j = 0; + priv->np_qfirst[t] = nma_get_nrings(na, t) + j; + priv->np_qlast[t] = nma_get_nrings(na, t) + j + 1; + nm_prdis("ONE_SW: %s %d %d", nm_txrx2str(t), + priv->np_qfirst[t], priv->np_qlast[t]); + break; default: nm_prerr("invalid regif type %d", nr_mode); return EINVAL; } } priv->np_flags = nr_flags; /* Allow transparent forwarding mode in the host --> nic * direction only if all the TX hw rings have been opened. */ if (priv->np_qfirst[NR_TX] == 0 && priv->np_qlast[NR_TX] >= na->num_tx_rings) { priv->np_sync_flags |= NAF_CAN_FORWARD_DOWN; } if (netmap_verbose) { nm_prinf("%s: tx [%d,%d) rx [%d,%d) id %d", na->name, priv->np_qfirst[NR_TX], priv->np_qlast[NR_TX], priv->np_qfirst[NR_RX], priv->np_qlast[NR_RX], nr_ringid); } return 0; } /* * Set the ring ID. For devices with a single queue, a request * for all rings is the same as a single ring. */ static int netmap_set_ringid(struct netmap_priv_d *priv, uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags) { struct netmap_adapter *na = priv->np_na; int error; enum txrx t; error = netmap_interp_ringid(priv, nr_mode, nr_ringid, nr_flags); if (error) { return error; } priv->np_txpoll = (nr_flags & NR_NO_TX_POLL) ? 0 : 1; /* optimization: count the users registered for more than * one ring, which are the ones sleeping on the global queue. 
	 * The default netmap_notify() callback will then
	 * avoid signaling the global queue if nobody is using it
	 */
	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]++;
	}
	return 0;
}

static void
netmap_unset_ringid(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	enum txrx t;

	for_rx_tx(t) {
		if (nm_si_user(priv, t))
			na->si_users[t]--;
		priv->np_qfirst[t] = priv->np_qlast[t] = 0;
	}
	priv->np_flags = 0;
	priv->np_txpoll = 0;
	priv->np_kloop_state = 0;
}

/* Set the nr_pending_mode for the requested rings.
 * If requested, also try to get exclusive access to the rings, provided
 * the rings we want to bind are not exclusively owned by a previous bind.
 */
static int
netmap_krings_get(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	u_int i;
	struct netmap_kring *kring;
	int excl = (priv->np_flags & NR_EXCLUSIVE);
	enum txrx t;

	if (netmap_debug & NM_DEBUG_ON)
		nm_prinf("%s: grabbing tx [%d, %d) rx [%d, %d)",
			na->name,
			priv->np_qfirst[NR_TX],
			priv->np_qlast[NR_TX],
			priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX]);

	/* first round: check that all the requested rings
	 * are neither already exclusively owned, nor already in use
	 * when we want exclusive ownership
	 */
	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = NMR(na, t)[i];
			if ((kring->nr_kflags & NKR_EXCLUSIVE) ||
			    (kring->users && excl)) {
				nm_prdis("ring %s busy", kring->name);
				return EBUSY;
			}
		}
	}

	/* second round: increment usage count (possibly marking them
	 * as exclusive) and set the nr_pending_mode
	 */
	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = NMR(na, t)[i];
			kring->users++;
			if (excl)
				kring->nr_kflags |= NKR_EXCLUSIVE;
			kring->nr_pending_mode = NKR_NETMAP_ON;
		}
	}

	return 0;
}

/* Undo netmap_krings_get(). This is done by clearing the exclusive mode
 * if it was asked on regif, and unsetting the nr_pending_mode if we are
 * the last users of the involved rings.
 */
static void
netmap_krings_put(struct netmap_priv_d *priv)
{
	struct netmap_adapter *na = priv->np_na;
	u_int i;
	struct netmap_kring *kring;
	int excl = (priv->np_flags & NR_EXCLUSIVE);
	enum txrx t;

	nm_prdis("%s: releasing tx [%d, %d) rx [%d, %d)",
			na->name,
			priv->np_qfirst[NR_TX],
			priv->np_qlast[NR_TX],
			priv->np_qfirst[NR_RX],
			priv->np_qlast[NR_RX]);

	for_rx_tx(t) {
		for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) {
			kring = NMR(na, t)[i];
			if (excl)
				kring->nr_kflags &= ~NKR_EXCLUSIVE;
			kring->users--;
			if (kring->users == 0)
				kring->nr_pending_mode = NKR_NETMAP_OFF;
		}
	}
}

static int
nm_priv_rx_enabled(struct netmap_priv_d *priv)
{
	return (priv->np_qfirst[NR_RX] != priv->np_qlast[NR_RX]);
}

/* Validate the CSB entries for both directions (atok and ktoa).
 * To be called under NMG_LOCK().
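 *
 * For illustration, an application enables CSB mode by chaining a
 * struct nmreq_opt_csb option to its register request; the arrays must
 * have one entry per bound ring (field names as in net/netmap.h, check
 * the header for the exact layout):
 *
 *	struct nmreq_opt_csb csbo;
 *
 *	bzero(&csbo, sizeof(csbo));
 *	csbo.nro_opt.nro_reqtype = NETMAP_REQ_OPT_CSB;
 *	csbo.csb_atok = (uintptr_t)atok_array;	// application --> kernel
 *	csbo.csb_ktoa = (uintptr_t)ktoa_array;	// kernel --> application
 *	hdr.nr_options = (uintptr_t)&csbo;
 *	// then issue the NIOCCTRL register request as usual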
*/ static int netmap_csb_validate(struct netmap_priv_d *priv, struct nmreq_opt_csb *csbo) { struct nm_csb_atok *csb_atok_base = (struct nm_csb_atok *)(uintptr_t)csbo->csb_atok; struct nm_csb_ktoa *csb_ktoa_base = (struct nm_csb_ktoa *)(uintptr_t)csbo->csb_ktoa; enum txrx t; int num_rings[NR_TXRX], tot_rings; size_t entry_size[2]; void *csb_start[2]; int i; if (priv->np_kloop_state & NM_SYNC_KLOOP_RUNNING) { nm_prerr("Cannot update CSB while kloop is running"); return EBUSY; } tot_rings = 0; for_rx_tx(t) { num_rings[t] = priv->np_qlast[t] - priv->np_qfirst[t]; tot_rings += num_rings[t]; } if (tot_rings <= 0) return 0; if (!(priv->np_flags & NR_EXCLUSIVE)) { nm_prerr("CSB mode requires NR_EXCLUSIVE"); return EINVAL; } entry_size[0] = sizeof(*csb_atok_base); entry_size[1] = sizeof(*csb_ktoa_base); csb_start[0] = (void *)csb_atok_base; csb_start[1] = (void *)csb_ktoa_base; for (i = 0; i < 2; i++) { /* On Linux we could use access_ok() to simplify * the validation. However, the advantage of * this approach is that it works also on * FreeBSD. */ size_t csb_size = tot_rings * entry_size[i]; void *tmp; int err; if ((uintptr_t)csb_start[i] & (entry_size[i]-1)) { nm_prerr("Unaligned CSB address"); return EINVAL; } tmp = nm_os_malloc(csb_size); if (!tmp) return ENOMEM; if (i == 0) { /* Application --> kernel direction. */ err = copyin(csb_start[i], tmp, csb_size); } else { /* Kernel --> application direction. */ memset(tmp, 0, csb_size); err = copyout(tmp, csb_start[i], csb_size); } nm_os_free(tmp); if (err) { nm_prerr("Invalid CSB address"); return err; } } priv->np_csb_atok_base = csb_atok_base; priv->np_csb_ktoa_base = csb_ktoa_base; /* Initialize the CSB. */ for_rx_tx(t) { for (i = 0; i < num_rings[t]; i++) { struct netmap_kring *kring = NMR(priv->np_na, t)[i + priv->np_qfirst[t]]; struct nm_csb_atok *csb_atok = csb_atok_base + i; struct nm_csb_ktoa *csb_ktoa = csb_ktoa_base + i; if (t == NR_RX) { csb_atok += num_rings[NR_TX]; csb_ktoa += num_rings[NR_TX]; } CSB_WRITE(csb_atok, head, kring->rhead); CSB_WRITE(csb_atok, cur, kring->rcur); CSB_WRITE(csb_atok, appl_need_kick, 1); CSB_WRITE(csb_atok, sync_flags, 1); CSB_WRITE(csb_ktoa, hwcur, kring->nr_hwcur); CSB_WRITE(csb_ktoa, hwtail, kring->nr_hwtail); CSB_WRITE(csb_ktoa, kern_need_kick, 1); nm_prinf("csb_init for kring %s: head %u, cur %u, " "hwcur %u, hwtail %u", kring->name, kring->rhead, kring->rcur, kring->nr_hwcur, kring->nr_hwtail); } } return 0; } /* Ensure that the netmap adapter can support the given MTU. * @return EINVAL if the na cannot be set to mtu, 0 otherwise. */ int netmap_buf_size_validate(const struct netmap_adapter *na, unsigned mtu) { unsigned nbs = NETMAP_BUF_SIZE(na); if (mtu <= na->rx_buf_maxsize) { /* The MTU fits a single NIC slot. We only * Need to check that netmap buffers are * large enough to hold an MTU. NS_MOREFRAG * cannot be used in this case. */ if (nbs < mtu) { nm_prerr("error: netmap buf size (%u) " "< device MTU (%u)", nbs, mtu); return EINVAL; } } else { /* More NIC slots may be needed to receive * or transmit a single packet. Check that * the adapter supports NS_MOREFRAG and that * netmap buffers are large enough to hold * the maximum per-slot size. 
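netmap_csb_validate() above expects one nm_csb_atok and one nm_csb_ktoa entry per bound ring, TX entries first, with each array aligned to the entry size, and it refuses CSB mode unless the binding is NR_EXCLUSIVE. Below is a userspace sketch of how such an option could be prepared; it assumes the nmreq_opt_csb layout from net/netmap.h (an embedded nmreq_option named nro_opt followed by the two user addresses) and leaves error handling to the caller.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <net/netmap.h>

/* Chain a NETMAP_REQ_OPT_CSB option onto a request header, e.g. a
 * NETMAP_REQ_REGISTER or NETMAP_REQ_CSB_ENABLE request. The option is
 * static so it is still valid when the caller issues the ioctl. */
static int
attach_csb_option(struct nmreq_header *hdr, unsigned tot_rings)
{
	static struct nmreq_opt_csb opt;
	struct nm_csb_atok *atok;
	struct nm_csb_ktoa *ktoa;

	/* One entry per bound ring: TX entries first, then RX entries.
	 * The kernel rejects base addresses that are not aligned to the
	 * entry size, so allocate with that alignment. */
	if (posix_memalign((void **)&atok, sizeof(*atok),
	    tot_rings * sizeof(*atok)) != 0 ||
	    posix_memalign((void **)&ktoa, sizeof(*ktoa),
	    tot_rings * sizeof(*ktoa)) != 0)
		return -1;
	memset(atok, 0, tot_rings * sizeof(*atok));
	memset(ktoa, 0, tot_rings * sizeof(*ktoa));

	memset(&opt, 0, sizeof(opt));
	opt.nro_opt.nro_reqtype = NETMAP_REQ_OPT_CSB;
	opt.csb_atok = (uintptr_t)atok;
	opt.csb_ktoa = (uintptr_t)ktoa;

	/* Prepend to whatever options are already on the header. */
	opt.nro_opt.nro_next = hdr->nr_options;
	hdr->nr_options = (uintptr_t)&opt;
	return 0;
}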
*/ if (!(na->na_flags & NAF_MOREFRAG)) { nm_prerr("error: large MTU (%d) needed " "but %s does not support " "NS_MOREFRAG", mtu, na->ifp->if_xname); return EINVAL; } else if (nbs < na->rx_buf_maxsize) { nm_prerr("error: using NS_MOREFRAG on " "%s requires netmap buf size " ">= %u", na->ifp->if_xname, na->rx_buf_maxsize); return EINVAL; } else { nm_prinf("info: netmap application on " "%s needs to support " "NS_MOREFRAG " "(MTU=%u,netmap_buf_size=%u)", na->ifp->if_xname, mtu, nbs); } } return 0; } /* * possibly move the interface to netmap-mode. * If success it returns a pointer to netmap_if, otherwise NULL. * This must be called with NMG_LOCK held. * * The following na callbacks are called in the process: * * na->nm_config() [by netmap_update_config] * (get current number and size of rings) * * We have a generic one for linux (netmap_linux_config). * The bwrap has to override this, since it has to forward * the request to the wrapped adapter (netmap_bwrap_config). * * * na->nm_krings_create() * (create and init the krings array) * * One of the following: * * * netmap_hw_krings_create, (hw ports) * creates the standard layout for the krings * and adds the mbq (used for the host rings). * * * netmap_vp_krings_create (VALE ports) * add leases and scratchpads * * * netmap_pipe_krings_create (pipes) * create the krings and rings of both ends and * cross-link them * * * netmap_monitor_krings_create (monitors) * avoid allocating the mbq * * * netmap_bwrap_krings_create (bwraps) * create both the brap krings array, * the krings array of the wrapped adapter, and * (if needed) the fake array for the host adapter * * na->nm_register(, 1) * (put the adapter in netmap mode) * * This may be one of the following: * * * netmap_hw_reg (hw ports) * checks that the ifp is still there, then calls * the hardware specific callback; * * * netmap_vp_reg (VALE ports) * If the port is connected to a bridge, * set the NAF_NETMAP_ON flag under the * bridge write lock. * * * netmap_pipe_reg (pipes) * inform the other pipe end that it is no * longer responsible for the lifetime of this * pipe end * * * netmap_monitor_reg (monitors) * intercept the sync callbacks of the monitored * rings * * * netmap_bwrap_reg (bwraps) * cross-link the bwrap and hwna rings, * forward the request to the hwna, override * the hwna notify callback (to get the frames * coming from outside go through the bridge). * * */ int netmap_do_regif(struct netmap_priv_d *priv, struct netmap_adapter *na, uint32_t nr_mode, uint16_t nr_ringid, uint64_t nr_flags) { struct netmap_if *nifp = NULL; int error; NMG_LOCK_ASSERT(); priv->np_na = na; /* store the reference */ error = netmap_mem_finalize(na->nm_mem, na); if (error) goto err; if (na->active_fds == 0) { /* cache the allocator info in the na */ error = netmap_mem_get_lut(na->nm_mem, &na->na_lut); if (error) goto err_drop_mem; nm_prdis("lut %p bufs %u size %u", na->na_lut.lut, na->na_lut.objtotal, na->na_lut.objsize); /* ring configuration may have changed, fetch from the card */ netmap_update_config(na); } /* compute the range of tx and rx rings to monitor */ error = netmap_set_ringid(priv, nr_mode, nr_ringid, nr_flags); if (error) goto err_put_lut; if (na->active_fds == 0) { /* * If this is the first registration of the adapter, * perform sanity checks and create the in-kernel view * of the netmap rings (the netmap krings). */ if (na->ifp && nm_priv_rx_enabled(priv)) { /* This netmap adapter is attached to an ifnet. 
*/ unsigned mtu = nm_os_ifnet_mtu(na->ifp); nm_prdis("%s: mtu %d rx_buf_maxsize %d netmap_buf_size %d", na->name, mtu, na->rx_buf_maxsize, NETMAP_BUF_SIZE(na)); if (na->rx_buf_maxsize == 0) { nm_prerr("%s: error: rx_buf_maxsize == 0", na->name); error = EIO; goto err_drop_mem; } error = netmap_buf_size_validate(na, mtu); if (error) goto err_drop_mem; } /* * Depending on the adapter, this may also create * the netmap rings themselves */ error = na->nm_krings_create(na); if (error) goto err_put_lut; } /* now the krings must exist and we can check whether some * previous bind has exclusive ownership on them, and set * nr_pending_mode */ error = netmap_krings_get(priv); if (error) goto err_del_krings; /* create all needed missing netmap rings */ error = netmap_mem_rings_create(na); if (error) goto err_rel_excl; /* in all cases, create a new netmap if */ nifp = netmap_mem_if_new(na, priv); if (nifp == NULL) { error = ENOMEM; goto err_rel_excl; } if (nm_kring_pending(priv)) { /* Some kring is switching mode, tell the adapter to * react on this. */ error = na->nm_register(na, 1); if (error) goto err_del_if; } /* Commit the reference. */ na->active_fds++; /* * advertise that the interface is ready by setting np_nifp. * The barrier is needed because readers (poll, *SYNC and mmap) * check for priv->np_nifp != NULL without locking */ mb(); /* make sure previous writes are visible to all CPUs */ priv->np_nifp = nifp; return 0; err_del_if: netmap_mem_if_delete(na, nifp); err_rel_excl: netmap_krings_put(priv); netmap_mem_rings_delete(na); err_del_krings: if (na->active_fds == 0) na->nm_krings_delete(na); err_put_lut: if (na->active_fds == 0) memset(&na->na_lut, 0, sizeof(na->na_lut)); err_drop_mem: netmap_mem_drop(na); err: priv->np_na = NULL; return error; } /* * update kring and ring at the end of rxsync/txsync. */ static inline void nm_sync_finalize(struct netmap_kring *kring) { /* * Update ring tail to what the kernel knows * After txsync: head/rhead/hwcur might be behind cur/rcur * if no carrier. */ kring->ring->tail = kring->rtail = kring->nr_hwtail; nm_prdis(5, "%s now hwcur %d hwtail %d head %d cur %d tail %d", kring->name, kring->nr_hwcur, kring->nr_hwtail, kring->rhead, kring->rcur, kring->rtail); } /* set ring timestamp */ static inline void ring_timestamp_set(struct netmap_ring *ring) { if (netmap_no_timestamp == 0 || ring->flags & NR_TIMESTAMP) { microtime(&ring->ts); } } static int nmreq_copyin(struct nmreq_header *, int); static int nmreq_copyout(struct nmreq_header *, int); static int nmreq_checkoptions(struct nmreq_header *); /* * ioctl(2) support for the "netmap" device. * * Following a list of accepted commands: * - NIOCCTRL device control API * - NIOCTXSYNC sync TX rings * - NIOCRXSYNC sync RX rings * - SIOCGIFADDR just for convenience * - NIOCGINFO deprecated (legacy API) * - NIOCREGIF deprecated (legacy API) * * Return 0 on success, errno otherwise. 
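From userspace, the registration path above is driven by a single NIOCCTRL ioctl carrying a NETMAP_REQ_REGISTER body, followed by an mmap() of the region whose size and offset the kernel writes back. A minimal sketch follows; "em0" is a placeholder interface name, error paths are abbreviated, and the structures and macros come from net/netmap.h and net/netmap_user.h.

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <net/netmap.h>
#include <net/netmap_user.h>

/* Register all hardware rings of "em0" and map the shared memory. */
static int
open_port(void)
{
	struct nmreq_header hdr;
	struct nmreq_register req;
	struct netmap_if *nifp;
	void *mem;
	int fd;

	fd = open("/dev/netmap", O_RDWR);
	if (fd < 0)
		return -1;

	memset(&hdr, 0, sizeof(hdr));
	memset(&req, 0, sizeof(req));
	hdr.nr_version = NETMAP_API;
	hdr.nr_reqtype = NETMAP_REQ_REGISTER;
	strlcpy(hdr.nr_name, "em0", sizeof(hdr.nr_name));
	hdr.nr_body = (uintptr_t)&req;
	req.nr_mode = NR_REG_ALL_NIC;

	if (ioctl(fd, NIOCCTRL, &hdr) < 0)
		return -1;

	/* nr_memsize and nr_offset are filled in by the kernel. */
	mem = mmap(NULL, req.nr_memsize, PROT_READ | PROT_WRITE,
	    MAP_SHARED, fd, 0);
	if (mem == MAP_FAILED)
		return -1;
	nifp = NETMAP_IF(mem, req.nr_offset);
	(void)nifp;		/* the rings are reached through nifp */
	return fd;
}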
*/ int netmap_ioctl(struct netmap_priv_d *priv, u_long cmd, caddr_t data, struct thread *td, int nr_body_is_user) { struct mbq q; /* packets from RX hw queues to host stack */ struct netmap_adapter *na = NULL; struct netmap_mem_d *nmd = NULL; struct ifnet *ifp = NULL; int error = 0; u_int i, qfirst, qlast; struct netmap_kring **krings; int sync_flags; enum txrx t; switch (cmd) { case NIOCCTRL: { struct nmreq_header *hdr = (struct nmreq_header *)data; if (hdr->nr_version < NETMAP_MIN_API || hdr->nr_version > NETMAP_MAX_API) { nm_prerr("API mismatch: got %d need %d", hdr->nr_version, NETMAP_API); return EINVAL; } /* Make a kernel-space copy of the user-space nr_body. * For convenince, the nr_body pointer and the pointers * in the options list will be replaced with their * kernel-space counterparts. The original pointers are * saved internally and later restored by nmreq_copyout */ error = nmreq_copyin(hdr, nr_body_is_user); if (error) { return error; } /* Sanitize hdr->nr_name. */ hdr->nr_name[sizeof(hdr->nr_name) - 1] = '\0'; switch (hdr->nr_reqtype) { case NETMAP_REQ_REGISTER: { struct nmreq_register *req = (struct nmreq_register *)(uintptr_t)hdr->nr_body; struct netmap_if *nifp; /* Protect access to priv from concurrent requests. */ NMG_LOCK(); do { struct nmreq_option *opt; u_int memflags; if (priv->np_nifp != NULL) { /* thread already registered */ error = EBUSY; break; } #ifdef WITH_EXTMEM opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options, NETMAP_REQ_OPT_EXTMEM); if (opt != NULL) { struct nmreq_opt_extmem *e = (struct nmreq_opt_extmem *)opt; error = nmreq_checkduplicate(opt); if (error) { opt->nro_status = error; break; } nmd = netmap_mem_ext_create(e->nro_usrptr, &e->nro_info, &error); opt->nro_status = error; if (nmd == NULL) break; } #endif /* WITH_EXTMEM */ if (nmd == NULL && req->nr_mem_id) { /* find the allocator and get a reference */ nmd = netmap_mem_find(req->nr_mem_id); if (nmd == NULL) { if (netmap_verbose) { nm_prerr("%s: failed to find mem_id %u", hdr->nr_name, req->nr_mem_id); } error = EINVAL; break; } } /* find the interface and a reference */ error = netmap_get_na(hdr, &na, &ifp, nmd, 1 /* create */); /* keep reference */ if (error) break; if (NETMAP_OWNED_BY_KERN(na)) { error = EBUSY; break; } if (na->virt_hdr_len && !(req->nr_flags & NR_ACCEPT_VNET_HDR)) { nm_prerr("virt_hdr_len=%d, but application does " "not accept it", na->virt_hdr_len); error = EIO; break; } error = netmap_do_regif(priv, na, req->nr_mode, req->nr_ringid, req->nr_flags); if (error) { /* reg. failed, release priv and ref */ break; } opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options, NETMAP_REQ_OPT_CSB); if (opt != NULL) { struct nmreq_opt_csb *csbo = (struct nmreq_opt_csb *)opt; error = nmreq_checkduplicate(opt); if (!error) { error = netmap_csb_validate(priv, csbo); } opt->nro_status = error; if (error) { netmap_do_unregif(priv); break; } } nifp = priv->np_nifp; /* return the offset of the netmap_if object */ req->nr_rx_rings = na->num_rx_rings; req->nr_tx_rings = na->num_tx_rings; req->nr_rx_slots = na->num_rx_desc; req->nr_tx_slots = na->num_tx_desc; + req->nr_host_tx_rings = na->num_host_tx_rings; + req->nr_host_rx_rings = na->num_host_rx_rings; error = netmap_mem_get_info(na->nm_mem, &req->nr_memsize, &memflags, &req->nr_mem_id); if (error) { netmap_do_unregif(priv); break; } if (memflags & NETMAP_MEM_PRIVATE) { *(uint32_t *)(uintptr_t)&nifp->ni_flags |= NI_PRIV_MEM; } for_rx_tx(t) { priv->np_si[t] = nm_si_user(priv, t) ? 
&na->si[t] : &NMR(na, t)[priv->np_qfirst[t]]->si; } if (req->nr_extra_bufs) { if (netmap_verbose) nm_prinf("requested %d extra buffers", req->nr_extra_bufs); req->nr_extra_bufs = netmap_extra_alloc(na, &nifp->ni_bufs_head, req->nr_extra_bufs); if (netmap_verbose) nm_prinf("got %d extra buffers", req->nr_extra_bufs); } req->nr_offset = netmap_mem_if_offset(na->nm_mem, nifp); error = nmreq_checkoptions(hdr); if (error) { netmap_do_unregif(priv); break; } /* store ifp reference so that priv destructor may release it */ priv->np_ifp = ifp; } while (0); if (error) { netmap_unget_na(na, ifp); } /* release the reference from netmap_mem_find() or * netmap_mem_ext_create() */ if (nmd) netmap_mem_put(nmd); NMG_UNLOCK(); break; } case NETMAP_REQ_PORT_INFO_GET: { struct nmreq_port_info_get *req = (struct nmreq_port_info_get *)(uintptr_t)hdr->nr_body; NMG_LOCK(); do { u_int memflags; if (hdr->nr_name[0] != '\0') { /* Build a nmreq_register out of the nmreq_port_info_get, * so that we can call netmap_get_na(). */ struct nmreq_register regreq; bzero(®req, sizeof(regreq)); regreq.nr_mode = NR_REG_ALL_NIC; regreq.nr_tx_slots = req->nr_tx_slots; regreq.nr_rx_slots = req->nr_rx_slots; regreq.nr_tx_rings = req->nr_tx_rings; regreq.nr_rx_rings = req->nr_rx_rings; + regreq.nr_host_tx_rings = req->nr_host_tx_rings; + regreq.nr_host_rx_rings = req->nr_host_rx_rings; regreq.nr_mem_id = req->nr_mem_id; /* get a refcount */ hdr->nr_reqtype = NETMAP_REQ_REGISTER; hdr->nr_body = (uintptr_t)®req; error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */); hdr->nr_reqtype = NETMAP_REQ_PORT_INFO_GET; /* reset type */ hdr->nr_body = (uintptr_t)req; /* reset nr_body */ if (error) { na = NULL; ifp = NULL; break; } nmd = na->nm_mem; /* get memory allocator */ } else { nmd = netmap_mem_find(req->nr_mem_id ? req->nr_mem_id : 1); if (nmd == NULL) { if (netmap_verbose) nm_prerr("%s: failed to find mem_id %u", hdr->nr_name, req->nr_mem_id ? req->nr_mem_id : 1); error = EINVAL; break; } } error = netmap_mem_get_info(nmd, &req->nr_memsize, &memflags, &req->nr_mem_id); if (error) break; if (na == NULL) /* only memory info */ break; netmap_update_config(na); req->nr_rx_rings = na->num_rx_rings; req->nr_tx_rings = na->num_tx_rings; req->nr_rx_slots = na->num_rx_desc; req->nr_tx_slots = na->num_tx_desc; + req->nr_host_tx_rings = na->num_host_tx_rings; + req->nr_host_rx_rings = na->num_host_rx_rings; } while (0); netmap_unget_na(na, ifp); NMG_UNLOCK(); break; } #ifdef WITH_VALE case NETMAP_REQ_VALE_ATTACH: { error = netmap_vale_attach(hdr, NULL /* userspace request */); break; } case NETMAP_REQ_VALE_DETACH: { error = netmap_vale_detach(hdr, NULL /* userspace request */); break; } case NETMAP_REQ_VALE_LIST: { error = netmap_vale_list(hdr); break; } case NETMAP_REQ_PORT_HDR_SET: { struct nmreq_port_hdr *req = (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body; /* Build a nmreq_register out of the nmreq_port_hdr, * so that we can call netmap_get_bdg_na(). */ struct nmreq_register regreq; bzero(®req, sizeof(regreq)); regreq.nr_mode = NR_REG_ALL_NIC; /* For now we only support virtio-net headers, and only for * VALE ports, but this may change in future. Valid lengths * for the virtio-net header are 0 (no header), 10 and 12. 
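The write-back of nr_host_tx_rings and nr_host_rx_rings above means an application can now discover the host ring geometry before binding. A query sketch is shown below; the field names are the ones used by the handler above, "em0" is a placeholder name, and fd is any open /dev/netmap descriptor.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <net/netmap.h>

/* Print the ring geometry of a port via NETMAP_REQ_PORT_INFO_GET. */
static int
print_port_info(int fd)
{
	struct nmreq_header hdr;
	struct nmreq_port_info_get req;

	memset(&hdr, 0, sizeof(hdr));
	memset(&req, 0, sizeof(req));
	hdr.nr_version = NETMAP_API;
	hdr.nr_reqtype = NETMAP_REQ_PORT_INFO_GET;
	strlcpy(hdr.nr_name, "em0", sizeof(hdr.nr_name));
	hdr.nr_body = (uintptr_t)&req;

	if (ioctl(fd, NIOCCTRL, &hdr) < 0)
		return -1;

	printf("hw rings: %u tx, %u rx; host rings: %u tx, %u rx\n",
	    (unsigned)req.nr_tx_rings, (unsigned)req.nr_rx_rings,
	    (unsigned)req.nr_host_tx_rings, (unsigned)req.nr_host_rx_rings);
	printf("slots: %u tx, %u rx; memsize %llu\n",
	    (unsigned)req.nr_tx_slots, (unsigned)req.nr_rx_slots,
	    (unsigned long long)req.nr_memsize);
	return 0;
}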
*/ if (req->nr_hdr_len != 0 && req->nr_hdr_len != sizeof(struct nm_vnet_hdr) && req->nr_hdr_len != 12) { if (netmap_verbose) nm_prerr("invalid hdr_len %u", req->nr_hdr_len); error = EINVAL; break; } NMG_LOCK(); hdr->nr_reqtype = NETMAP_REQ_REGISTER; hdr->nr_body = (uintptr_t)®req; error = netmap_get_vale_na(hdr, &na, NULL, 0); hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_SET; hdr->nr_body = (uintptr_t)req; if (na && !error) { struct netmap_vp_adapter *vpna = (struct netmap_vp_adapter *)na; na->virt_hdr_len = req->nr_hdr_len; if (na->virt_hdr_len) { vpna->mfs = NETMAP_BUF_SIZE(na); } if (netmap_verbose) nm_prinf("Using vnet_hdr_len %d for %p", na->virt_hdr_len, na); netmap_adapter_put(na); } else if (!na) { error = ENXIO; } NMG_UNLOCK(); break; } case NETMAP_REQ_PORT_HDR_GET: { /* Get vnet-header length for this netmap port */ struct nmreq_port_hdr *req = (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body; /* Build a nmreq_register out of the nmreq_port_hdr, * so that we can call netmap_get_bdg_na(). */ struct nmreq_register regreq; struct ifnet *ifp; bzero(®req, sizeof(regreq)); regreq.nr_mode = NR_REG_ALL_NIC; NMG_LOCK(); hdr->nr_reqtype = NETMAP_REQ_REGISTER; hdr->nr_body = (uintptr_t)®req; error = netmap_get_na(hdr, &na, &ifp, NULL, 0); hdr->nr_reqtype = NETMAP_REQ_PORT_HDR_GET; hdr->nr_body = (uintptr_t)req; if (na && !error) { req->nr_hdr_len = na->virt_hdr_len; } netmap_unget_na(na, ifp); NMG_UNLOCK(); break; } case NETMAP_REQ_VALE_NEWIF: { error = nm_vi_create(hdr); break; } case NETMAP_REQ_VALE_DELIF: { error = nm_vi_destroy(hdr->nr_name); break; } case NETMAP_REQ_VALE_POLLING_ENABLE: case NETMAP_REQ_VALE_POLLING_DISABLE: { error = nm_bdg_polling(hdr); break; } #endif /* WITH_VALE */ case NETMAP_REQ_POOLS_INFO_GET: { /* Get information from the memory allocator used for * hdr->nr_name. */ struct nmreq_pools_info *req = (struct nmreq_pools_info *)(uintptr_t)hdr->nr_body; NMG_LOCK(); do { /* Build a nmreq_register out of the nmreq_pools_info, * so that we can call netmap_get_na(). */ struct nmreq_register regreq; bzero(®req, sizeof(regreq)); regreq.nr_mem_id = req->nr_mem_id; regreq.nr_mode = NR_REG_ALL_NIC; hdr->nr_reqtype = NETMAP_REQ_REGISTER; hdr->nr_body = (uintptr_t)®req; error = netmap_get_na(hdr, &na, &ifp, NULL, 1 /* create */); hdr->nr_reqtype = NETMAP_REQ_POOLS_INFO_GET; /* reset type */ hdr->nr_body = (uintptr_t)req; /* reset nr_body */ if (error) { na = NULL; ifp = NULL; break; } nmd = na->nm_mem; /* grab the memory allocator */ if (nmd == NULL) { error = EINVAL; break; } /* Finalize the memory allocator, get the pools * information and release the allocator. */ error = netmap_mem_finalize(nmd, na); if (error) { break; } error = netmap_mem_pools_info_get(req, nmd); netmap_mem_drop(na); } while (0); netmap_unget_na(na, ifp); NMG_UNLOCK(); break; } case NETMAP_REQ_CSB_ENABLE: { struct nmreq_option *opt; opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)hdr->nr_options, NETMAP_REQ_OPT_CSB); if (opt == NULL) { error = EINVAL; } else { struct nmreq_opt_csb *csbo = (struct nmreq_opt_csb *)opt; error = nmreq_checkduplicate(opt); if (!error) { NMG_LOCK(); error = netmap_csb_validate(priv, csbo); NMG_UNLOCK(); } opt->nro_status = error; } break; } case NETMAP_REQ_SYNC_KLOOP_START: { error = netmap_sync_kloop(priv, hdr); break; } case NETMAP_REQ_SYNC_KLOOP_STOP: { error = netmap_sync_kloop_stop(priv); break; } default: { error = EINVAL; break; } } /* Write back request body to userspace and reset the * user-space pointer. 
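The NETMAP_REQ_PORT_HDR_SET handler above only accepts virtio-net header lengths of 0, 10 or 12 bytes. A userspace sketch of setting it on a VALE port follows; "vale0:1" is a placeholder port name, fd is an open /dev/netmap descriptor, and the structure names match the handler above.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <net/netmap.h>

/* Tell a VALE port to expect a virtio-net header of 'len' bytes. */
static int
set_vnet_hdr_len(int fd, uint32_t len)
{
	struct nmreq_header hdr;
	struct nmreq_port_hdr req;

	memset(&hdr, 0, sizeof(hdr));
	memset(&req, 0, sizeof(req));
	hdr.nr_version = NETMAP_API;
	hdr.nr_reqtype = NETMAP_REQ_PORT_HDR_SET;
	strlcpy(hdr.nr_name, "vale0:1", sizeof(hdr.nr_name));
	hdr.nr_body = (uintptr_t)&req;
	req.nr_hdr_len = len;		/* only 0, 10 and 12 are accepted */

	return ioctl(fd, NIOCCTRL, &hdr);
}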
*/ error = nmreq_copyout(hdr, error); break; } case NIOCTXSYNC: case NIOCRXSYNC: { if (unlikely(priv->np_nifp == NULL)) { error = ENXIO; break; } mb(); /* make sure following reads are not from cache */ if (unlikely(priv->np_csb_atok_base)) { nm_prerr("Invalid sync in CSB mode"); error = EBUSY; break; } na = priv->np_na; /* we have a reference */ mbq_init(&q); t = (cmd == NIOCTXSYNC ? NR_TX : NR_RX); krings = NMR(na, t); qfirst = priv->np_qfirst[t]; qlast = priv->np_qlast[t]; sync_flags = priv->np_sync_flags; for (i = qfirst; i < qlast; i++) { struct netmap_kring *kring = krings[i]; struct netmap_ring *ring = kring->ring; if (unlikely(nm_kr_tryget(kring, 1, &error))) { error = (error ? EIO : 0); continue; } if (cmd == NIOCTXSYNC) { if (netmap_debug & NM_DEBUG_TXSYNC) nm_prinf("pre txsync ring %d cur %d hwcur %d", i, ring->cur, kring->nr_hwcur); if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); } else if (kring->nm_sync(kring, sync_flags | NAF_FORCE_RECLAIM) == 0) { nm_sync_finalize(kring); } if (netmap_debug & NM_DEBUG_TXSYNC) nm_prinf("post txsync ring %d cur %d hwcur %d", i, ring->cur, kring->nr_hwcur); } else { if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); } if (nm_may_forward_up(kring)) { /* transparent forwarding, see netmap_poll() */ netmap_grab_packets(kring, &q, netmap_fwd); } if (kring->nm_sync(kring, sync_flags | NAF_FORCE_READ) == 0) { nm_sync_finalize(kring); } ring_timestamp_set(ring); } nm_kr_put(kring); } if (mbq_peek(&q)) { netmap_send_up(na->ifp, &q); } break; } default: { return netmap_ioctl_legacy(priv, cmd, data, td); break; } } return (error); } size_t nmreq_size_by_type(uint16_t nr_reqtype) { switch (nr_reqtype) { case NETMAP_REQ_REGISTER: return sizeof(struct nmreq_register); case NETMAP_REQ_PORT_INFO_GET: return sizeof(struct nmreq_port_info_get); case NETMAP_REQ_VALE_ATTACH: return sizeof(struct nmreq_vale_attach); case NETMAP_REQ_VALE_DETACH: return sizeof(struct nmreq_vale_detach); case NETMAP_REQ_VALE_LIST: return sizeof(struct nmreq_vale_list); case NETMAP_REQ_PORT_HDR_SET: case NETMAP_REQ_PORT_HDR_GET: return sizeof(struct nmreq_port_hdr); case NETMAP_REQ_VALE_NEWIF: return sizeof(struct nmreq_vale_newif); case NETMAP_REQ_VALE_DELIF: case NETMAP_REQ_SYNC_KLOOP_STOP: case NETMAP_REQ_CSB_ENABLE: return 0; case NETMAP_REQ_VALE_POLLING_ENABLE: case NETMAP_REQ_VALE_POLLING_DISABLE: return sizeof(struct nmreq_vale_polling); case NETMAP_REQ_POOLS_INFO_GET: return sizeof(struct nmreq_pools_info); case NETMAP_REQ_SYNC_KLOOP_START: return sizeof(struct nmreq_sync_kloop_start); } return 0; } static size_t nmreq_opt_size_by_type(uint32_t nro_reqtype, uint64_t nro_size) { size_t rv = sizeof(struct nmreq_option); #ifdef NETMAP_REQ_OPT_DEBUG if (nro_reqtype & NETMAP_REQ_OPT_DEBUG) return (nro_reqtype & ~NETMAP_REQ_OPT_DEBUG); #endif /* NETMAP_REQ_OPT_DEBUG */ switch (nro_reqtype) { #ifdef WITH_EXTMEM case NETMAP_REQ_OPT_EXTMEM: rv = sizeof(struct nmreq_opt_extmem); break; #endif /* WITH_EXTMEM */ case NETMAP_REQ_OPT_SYNC_KLOOP_EVENTFDS: if (nro_size >= rv) rv = nro_size; break; case NETMAP_REQ_OPT_CSB: rv = sizeof(struct nmreq_opt_csb); break; case NETMAP_REQ_OPT_SYNC_KLOOP_MODE: rv = sizeof(struct nmreq_opt_sync_kloop_mode); break; } /* subtract the common header */ return rv - sizeof(struct nmreq_option); } int nmreq_copyin(struct nmreq_header *hdr, int nr_body_is_user) { size_t rqsz, optsz, bufsz; int error; char *ker = NULL, *p; struct nmreq_option **next, *src; struct nmreq_option buf; 
uint64_t *ptrs; if (hdr->nr_reserved) { if (netmap_verbose) nm_prerr("nr_reserved must be zero"); return EINVAL; } if (!nr_body_is_user) return 0; hdr->nr_reserved = nr_body_is_user; /* compute the total size of the buffer */ rqsz = nmreq_size_by_type(hdr->nr_reqtype); if (rqsz > NETMAP_REQ_MAXSIZE) { error = EMSGSIZE; goto out_err; } if ((rqsz && hdr->nr_body == (uintptr_t)NULL) || (!rqsz && hdr->nr_body != (uintptr_t)NULL)) { /* Request body expected, but not found; or * request body found but unexpected. */ if (netmap_verbose) nm_prerr("nr_body expected but not found, or vice versa"); error = EINVAL; goto out_err; } bufsz = 2 * sizeof(void *) + rqsz; optsz = 0; for (src = (struct nmreq_option *)(uintptr_t)hdr->nr_options; src; src = (struct nmreq_option *)(uintptr_t)buf.nro_next) { error = copyin(src, &buf, sizeof(*src)); if (error) goto out_err; optsz += sizeof(*src); optsz += nmreq_opt_size_by_type(buf.nro_reqtype, buf.nro_size); if (rqsz + optsz > NETMAP_REQ_MAXSIZE) { error = EMSGSIZE; goto out_err; } bufsz += optsz + sizeof(void *); } ker = nm_os_malloc(bufsz); if (ker == NULL) { error = ENOMEM; goto out_err; } p = ker; /* make a copy of the user pointers */ ptrs = (uint64_t*)p; *ptrs++ = hdr->nr_body; *ptrs++ = hdr->nr_options; p = (char *)ptrs; /* copy the body */ error = copyin((void *)(uintptr_t)hdr->nr_body, p, rqsz); if (error) goto out_restore; /* overwrite the user pointer with the in-kernel one */ hdr->nr_body = (uintptr_t)p; p += rqsz; /* copy the options */ next = (struct nmreq_option **)&hdr->nr_options; src = *next; while (src) { struct nmreq_option *opt; /* copy the option header */ ptrs = (uint64_t *)p; opt = (struct nmreq_option *)(ptrs + 1); error = copyin(src, opt, sizeof(*src)); if (error) goto out_restore; /* make a copy of the user next pointer */ *ptrs = opt->nro_next; /* overwrite the user pointer with the in-kernel one */ *next = opt; /* initialize the option as not supported. * Recognized options will update this field. 
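Because every option is initialized to EOPNOTSUPP here and nmreq_copyout() later writes the option headers back, an application can walk its own option list after the ioctl returns to see which options were honoured. A small sketch, assuming the options were chained on hdr->nr_options as in the CSB example above:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <net/netmap.h>

/* Report the per-option outcome of a NIOCCTRL request: options the
 * kernel did not recognize keep nro_status == EOPNOTSUPP, recognized
 * ones carry 0 or the error they failed with. */
static void
check_options(const struct nmreq_header *hdr)
{
	const struct nmreq_option *opt;

	for (opt = (const struct nmreq_option *)(uintptr_t)hdr->nr_options;
	    opt != NULL;
	    opt = (const struct nmreq_option *)(uintptr_t)opt->nro_next) {
		if (opt->nro_status == EOPNOTSUPP)
			fprintf(stderr, "option %u not supported\n",
			    (unsigned)opt->nro_reqtype);
		else if (opt->nro_status != 0)
			fprintf(stderr, "option %u failed: %d\n",
			    (unsigned)opt->nro_reqtype, (int)opt->nro_status);
	}
}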
*/ opt->nro_status = EOPNOTSUPP; p = (char *)(opt + 1); /* copy the option body */ optsz = nmreq_opt_size_by_type(opt->nro_reqtype, opt->nro_size); if (optsz) { /* the option body follows the option header */ error = copyin(src + 1, p, optsz); if (error) goto out_restore; p += optsz; } /* move to next option */ next = (struct nmreq_option **)&opt->nro_next; src = *next; } return 0; out_restore: ptrs = (uint64_t *)ker; hdr->nr_body = *ptrs++; hdr->nr_options = *ptrs++; hdr->nr_reserved = 0; nm_os_free(ker); out_err: return error; } static int nmreq_copyout(struct nmreq_header *hdr, int rerror) { struct nmreq_option *src, *dst; void *ker = (void *)(uintptr_t)hdr->nr_body, *bufstart; uint64_t *ptrs; size_t bodysz; int error; if (!hdr->nr_reserved) return rerror; /* restore the user pointers in the header */ ptrs = (uint64_t *)ker - 2; bufstart = ptrs; hdr->nr_body = *ptrs++; src = (struct nmreq_option *)(uintptr_t)hdr->nr_options; hdr->nr_options = *ptrs; if (!rerror) { /* copy the body */ bodysz = nmreq_size_by_type(hdr->nr_reqtype); error = copyout(ker, (void *)(uintptr_t)hdr->nr_body, bodysz); if (error) { rerror = error; goto out; } } /* copy the options */ dst = (struct nmreq_option *)(uintptr_t)hdr->nr_options; while (src) { size_t optsz; uint64_t next; /* restore the user pointer */ next = src->nro_next; ptrs = (uint64_t *)src - 1; src->nro_next = *ptrs; /* always copy the option header */ error = copyout(src, dst, sizeof(*src)); if (error) { rerror = error; goto out; } /* copy the option body only if there was no error */ if (!rerror && !src->nro_status) { optsz = nmreq_opt_size_by_type(src->nro_reqtype, src->nro_size); if (optsz) { error = copyout(src + 1, dst + 1, optsz); if (error) { rerror = error; goto out; } } } src = (struct nmreq_option *)(uintptr_t)next; dst = (struct nmreq_option *)(uintptr_t)*ptrs; } out: hdr->nr_reserved = 0; nm_os_free(bufstart); return rerror; } struct nmreq_option * nmreq_findoption(struct nmreq_option *opt, uint16_t reqtype) { for ( ; opt; opt = (struct nmreq_option *)(uintptr_t)opt->nro_next) if (opt->nro_reqtype == reqtype) return opt; return NULL; } int nmreq_checkduplicate(struct nmreq_option *opt) { uint16_t type = opt->nro_reqtype; int dup = 0; while ((opt = nmreq_findoption((struct nmreq_option *)(uintptr_t)opt->nro_next, type))) { dup++; opt->nro_status = EINVAL; } return (dup ? EINVAL : 0); } static int nmreq_checkoptions(struct nmreq_header *hdr) { struct nmreq_option *opt; /* return error if there is still any option * marked as not supported */ for (opt = (struct nmreq_option *)(uintptr_t)hdr->nr_options; opt; opt = (struct nmreq_option *)(uintptr_t)opt->nro_next) if (opt->nro_status == EOPNOTSUPP) return EOPNOTSUPP; return 0; } /* * select(2) and poll(2) handlers for the "netmap" device. * * Can be called for one or more queues. * Return true the event mask corresponding to ready events. * If there are no ready events (and 'sr' is not NULL), do a * selrecord on either individual selinfo or on the global one. * Device-dependent parts (locking and sync of tx/rx rings) * are done through callbacks. * * On linux, arguments are really pwait, the poll table, and 'td' is struct file * * The first one is remapped to pwait as selrecord() uses the name as an * hidden argument. 
*/ int netmap_poll(struct netmap_priv_d *priv, int events, NM_SELRECORD_T *sr) { struct netmap_adapter *na; struct netmap_kring *kring; struct netmap_ring *ring; u_int i, want[NR_TXRX], revents = 0; NM_SELINFO_T *si[NR_TXRX]; #define want_tx want[NR_TX] #define want_rx want[NR_RX] struct mbq q; /* packets from RX hw queues to host stack */ /* * In order to avoid nested locks, we need to "double check" * txsync and rxsync if we decide to do a selrecord(). * retry_tx (and retry_rx, later) prevent looping forever. */ int retry_tx = 1, retry_rx = 1; /* Transparent mode: send_down is 1 if we have found some * packets to forward (host RX ring --> NIC) during the rx * scan and we have not sent them down to the NIC yet. * Transparent mode requires to bind all rings to a single * file descriptor. */ int send_down = 0; int sync_flags = priv->np_sync_flags; mbq_init(&q); if (unlikely(priv->np_nifp == NULL)) { return POLLERR; } mb(); /* make sure following reads are not from cache */ na = priv->np_na; if (unlikely(!nm_netmap_on(na))) return POLLERR; if (unlikely(priv->np_csb_atok_base)) { nm_prerr("Invalid poll in CSB mode"); return POLLERR; } if (netmap_debug & NM_DEBUG_ON) nm_prinf("device %s events 0x%x", na->name, events); want_tx = events & (POLLOUT | POLLWRNORM); want_rx = events & (POLLIN | POLLRDNORM); /* * If the card has more than one queue AND the file descriptor is * bound to all of them, we sleep on the "global" selinfo, otherwise * we sleep on individual selinfo (FreeBSD only allows two selinfo's * per file descriptor). * The interrupt routine in the driver wake one or the other * (or both) depending on which clients are active. * * rxsync() is only called if we run out of buffers on a POLLIN. * txsync() is called if we run out of buffers on POLLOUT, or * there are pending packets to send. The latter can be disabled * passing NETMAP_NO_TX_POLL in the NIOCREG call. */ si[NR_RX] = priv->np_si[NR_RX]; si[NR_TX] = priv->np_si[NR_TX]; #ifdef __FreeBSD__ /* * We start with a lock free round which is cheap if we have * slots available. If this fails, then lock and call the sync * routines. We can't do this on Linux, as the contract says * that we must call nm_os_selrecord() unconditionally. */ if (want_tx) { const enum txrx t = NR_TX; for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) { kring = NMR(na, t)[i]; if (kring->ring->cur != kring->ring->tail) { /* Some unseen TX space is available, so what * we don't need to run txsync. */ revents |= want[t]; want[t] = 0; break; } } } if (want_rx) { const enum txrx t = NR_RX; int rxsync_needed = 0; for (i = priv->np_qfirst[t]; i < priv->np_qlast[t]; i++) { kring = NMR(na, t)[i]; if (kring->ring->cur == kring->ring->tail || kring->rhead != kring->ring->head) { /* There are no unseen packets on this ring, * or there are some buffers to be returned * to the netmap port. We therefore go ahead * and run rxsync. */ rxsync_needed = 1; break; } } if (!rxsync_needed) { revents |= want_rx; want_rx = 0; } } #endif #ifdef linux /* The selrecord must be unconditional on linux. */ nm_os_selrecord(sr, si[NR_RX]); nm_os_selrecord(sr, si[NR_TX]); #endif /* linux */ /* * If we want to push packets out (priv->np_txpoll) or * want_tx is still set, we must issue txsync calls * (on all rings, to avoid that the tx rings stall). * Fortunately, normal tx mode has np_txpoll set. */ if (priv->np_txpoll || want_tx) { /* * The first round checks if anyone is ready, if not * do a selrecord and another round to handle races. 
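The userspace counterpart of netmap_poll() is a plain poll(2) loop on the netmap file descriptor. The sketch below drains the hardware RX rings of a port registered as in the earlier example; it relies on the helper macros from net/netmap_user.h and reduces packet processing to a counter.

#include <poll.h>
#include <stdio.h>
#include <net/netmap.h>
#include <net/netmap_user.h>

/* Wait for POLLIN and drain all hardware RX rings of 'nifp'. */
static void
rx_loop(int fd, struct netmap_if *nifp)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	unsigned long received = 0;
	unsigned int r;

	for (;;) {
		if (poll(&pfd, 1, 1000) <= 0)
			continue;	/* timeout or error: retry */
		for (r = 0; r < nifp->ni_rx_rings; r++) {
			struct netmap_ring *ring = NETMAP_RXRING(nifp, r);

			while (!nm_ring_empty(ring)) {
				struct netmap_slot *slot = &ring->slot[ring->cur];
				char *buf = NETMAP_BUF(ring, slot->buf_idx);

				(void)buf;	/* process slot->len bytes here */
				received++;
				ring->cur = nm_ring_next(ring, ring->cur);
			}
			/* Return the consumed buffers to the kernel. */
			ring->head = ring->cur;
		}
		printf("packets so far: %lu\n", received);
	}
}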
* want_tx goes to 0 if any space is found, and is * used to skip rings with no pending transmissions. */ flush_tx: for (i = priv->np_qfirst[NR_TX]; i < priv->np_qlast[NR_TX]; i++) { int found = 0; kring = na->tx_rings[i]; ring = kring->ring; /* * Don't try to txsync this TX ring if we already found some * space in some of the TX rings (want_tx == 0) and there are no * TX slots in this ring that need to be flushed to the NIC * (head == hwcur). */ if (!send_down && !want_tx && ring->head == kring->nr_hwcur) continue; if (nm_kr_tryget(kring, 1, &revents)) continue; if (nm_txsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); revents |= POLLERR; } else { if (kring->nm_sync(kring, sync_flags)) revents |= POLLERR; else nm_sync_finalize(kring); } /* * If we found new slots, notify potential * listeners on the same ring. * Since we just did a txsync, look at the copies * of cur,tail in the kring. */ found = kring->rcur != kring->rtail; nm_kr_put(kring); if (found) { /* notify other listeners */ revents |= want_tx; want_tx = 0; #ifndef linux kring->nm_notify(kring, 0); #endif /* linux */ } } /* if there were any packet to forward we must have handled them by now */ send_down = 0; if (want_tx && retry_tx && sr) { #ifndef linux nm_os_selrecord(sr, si[NR_TX]); #endif /* !linux */ retry_tx = 0; goto flush_tx; } } /* * If want_rx is still set scan receive rings. * Do it on all rings because otherwise we starve. */ if (want_rx) { /* two rounds here for race avoidance */ do_retry_rx: for (i = priv->np_qfirst[NR_RX]; i < priv->np_qlast[NR_RX]; i++) { int found = 0; kring = na->rx_rings[i]; ring = kring->ring; if (unlikely(nm_kr_tryget(kring, 1, &revents))) continue; if (nm_rxsync_prologue(kring, ring) >= kring->nkr_num_slots) { netmap_ring_reinit(kring); revents |= POLLERR; } /* now we can use kring->rcur, rtail */ /* * transparent mode support: collect packets from * hw rxring(s) that have been released by the user */ if (nm_may_forward_up(kring)) { netmap_grab_packets(kring, &q, netmap_fwd); } /* Clear the NR_FORWARD flag anyway, it may be set by * the nm_sync() below only on for the host RX ring (see * netmap_rxsync_from_host()). */ kring->nr_kflags &= ~NR_FORWARD; if (kring->nm_sync(kring, sync_flags)) revents |= POLLERR; else nm_sync_finalize(kring); send_down |= (kring->nr_kflags & NR_FORWARD); ring_timestamp_set(ring); found = kring->rcur != kring->rtail; nm_kr_put(kring); if (found) { revents |= want_rx; retry_rx = 0; #ifndef linux kring->nm_notify(kring, 0); #endif /* linux */ } } #ifndef linux if (retry_rx && sr) { nm_os_selrecord(sr, si[NR_RX]); } #endif /* !linux */ if (send_down || retry_rx) { retry_rx = 0; if (send_down) goto flush_tx; /* and retry_rx */ else goto do_retry_rx; } } /* * Transparent mode: released bufs (i.e. between kring->nr_hwcur and * ring->head) marked with NS_FORWARD on hw rx rings are passed up * to the host stack. */ if (mbq_peek(&q)) { netmap_send_up(na->ifp, &q); } return (revents); #undef want_tx #undef want_rx } int nma_intr_enable(struct netmap_adapter *na, int onoff) { bool changed = false; enum txrx t; int i; for_rx_tx(t) { for (i = 0; i < nma_get_nrings(na, t); i++) { struct netmap_kring *kring = NMR(na, t)[i]; int on = !(kring->nr_kflags & NKR_NOINTR); if (!!onoff != !!on) { changed = true; } if (onoff) { kring->nr_kflags &= ~NKR_NOINTR; } else { kring->nr_kflags |= NKR_NOINTR; } } } if (!changed) { return 0; /* nothing to do */ } if (!na->nm_intr) { nm_prerr("Cannot %s interrupts for %s", onoff ? 
"enable" : "disable", na->name); return -1; } na->nm_intr(na, onoff); return 0; } /*-------------------- driver support routines -------------------*/ /* default notify callback */ static int netmap_notify(struct netmap_kring *kring, int flags) { struct netmap_adapter *na = kring->notify_na; enum txrx t = kring->tx; nm_os_selwakeup(&kring->si); /* optimization: avoid a wake up on the global * queue if nobody has registered for more * than one ring */ if (na->si_users[t] > 0) nm_os_selwakeup(&na->si[t]); return NM_IRQ_COMPLETED; } /* called by all routines that create netmap_adapters. * provide some defaults and get a reference to the * memory allocator */ int netmap_attach_common(struct netmap_adapter *na) { if (!na->rx_buf_maxsize) { /* Set a conservative default (larger is safer). */ na->rx_buf_maxsize = PAGE_SIZE; } #ifdef __FreeBSD__ if (na->na_flags & NAF_HOST_RINGS && na->ifp) { na->if_input = na->ifp->if_input; /* for netmap_send_up */ } na->pdev = na; /* make sure netmap_mem_map() is called */ #endif /* __FreeBSD__ */ if (na->na_flags & NAF_HOST_RINGS) { if (na->num_host_rx_rings == 0) na->num_host_rx_rings = 1; if (na->num_host_tx_rings == 0) na->num_host_tx_rings = 1; } if (na->nm_krings_create == NULL) { /* we assume that we have been called by a driver, * since other port types all provide their own * nm_krings_create */ na->nm_krings_create = netmap_hw_krings_create; na->nm_krings_delete = netmap_hw_krings_delete; } if (na->nm_notify == NULL) na->nm_notify = netmap_notify; na->active_fds = 0; if (na->nm_mem == NULL) { /* use the global allocator */ na->nm_mem = netmap_mem_get(&nm_mem); } #ifdef WITH_VALE if (na->nm_bdg_attach == NULL) /* no special nm_bdg_attach callback. On VALE * attach, we need to interpose a bwrap */ na->nm_bdg_attach = netmap_default_bdg_attach; #endif return 0; } /* Wrapper for the register callback provided netmap-enabled * hardware drivers. * nm_iszombie(na) means that the driver module has been * unloaded, so we cannot call into it. * nm_os_ifnet_lock() must guarantee mutual exclusion with * module unloading. */ static int netmap_hw_reg(struct netmap_adapter *na, int onoff) { struct netmap_hw_adapter *hwna = (struct netmap_hw_adapter*)na; int error = 0; nm_os_ifnet_lock(); if (nm_iszombie(na)) { if (onoff) { error = ENXIO; } else if (na != NULL) { na->na_flags &= ~NAF_NETMAP_ON; } goto out; } error = hwna->nm_hw_register(na, onoff); out: nm_os_ifnet_unlock(); return error; } static void netmap_hw_dtor(struct netmap_adapter *na) { if (na->ifp == NULL) return; NM_DETACH_NA(na->ifp); } /* * Allocate a netmap_adapter object, and initialize it from the * 'arg' passed by the driver on attach. * We allocate a block of memory of 'size' bytes, which has room * for struct netmap_adapter plus additional room private to * the caller. * Return 0 on success, ENOMEM otherwise. 
*/ int netmap_attach_ext(struct netmap_adapter *arg, size_t size, int override_reg) { struct netmap_hw_adapter *hwna = NULL; struct ifnet *ifp = NULL; if (size < sizeof(struct netmap_hw_adapter)) { if (netmap_debug & NM_DEBUG_ON) nm_prerr("Invalid netmap adapter size %d", (int)size); return EINVAL; } if (arg == NULL || arg->ifp == NULL) { if (netmap_debug & NM_DEBUG_ON) nm_prerr("either arg or arg->ifp is NULL"); return EINVAL; } if (arg->num_tx_rings == 0 || arg->num_rx_rings == 0) { if (netmap_debug & NM_DEBUG_ON) nm_prerr("%s: invalid rings tx %d rx %d", arg->name, arg->num_tx_rings, arg->num_rx_rings); return EINVAL; } ifp = arg->ifp; if (NM_NA_CLASH(ifp)) { /* If NA(ifp) is not null but there is no valid netmap * adapter it means that someone else is using the same * pointer (e.g. ax25_ptr on linux). This happens for * instance when also PF_RING is in use. */ nm_prerr("Error: netmap adapter hook is busy"); return EBUSY; } hwna = nm_os_malloc(size); if (hwna == NULL) goto fail; hwna->up = *arg; hwna->up.na_flags |= NAF_HOST_RINGS | NAF_NATIVE; strlcpy(hwna->up.name, ifp->if_xname, sizeof(hwna->up.name)); if (override_reg) { hwna->nm_hw_register = hwna->up.nm_register; hwna->up.nm_register = netmap_hw_reg; } if (netmap_attach_common(&hwna->up)) { nm_os_free(hwna); goto fail; } netmap_adapter_get(&hwna->up); NM_ATTACH_NA(ifp, &hwna->up); nm_os_onattach(ifp); if (arg->nm_dtor == NULL) { hwna->up.nm_dtor = netmap_hw_dtor; } if_printf(ifp, "netmap queues/slots: TX %d/%d, RX %d/%d\n", hwna->up.num_tx_rings, hwna->up.num_tx_desc, hwna->up.num_rx_rings, hwna->up.num_rx_desc); return 0; fail: nm_prerr("fail, arg %p ifp %p na %p", arg, ifp, hwna); return (hwna ? EINVAL : ENOMEM); } int netmap_attach(struct netmap_adapter *arg) { return netmap_attach_ext(arg, sizeof(struct netmap_hw_adapter), 1 /* override nm_reg */); } void NM_DBG(netmap_adapter_get)(struct netmap_adapter *na) { if (!na) { return; } refcount_acquire(&na->na_refcount); } /* returns 1 iff the netmap_adapter is destroyed */ int NM_DBG(netmap_adapter_put)(struct netmap_adapter *na) { if (!na) return 1; if (!refcount_release(&na->na_refcount)) return 0; if (na->nm_dtor) na->nm_dtor(na); if (na->tx_rings) { /* XXX should not happen */ if (netmap_debug & NM_DEBUG_ON) nm_prerr("freeing leftover tx_rings"); na->nm_krings_delete(na); } netmap_pipe_dealloc(na); if (na->nm_mem) netmap_mem_put(na->nm_mem); bzero(na, sizeof(*na)); nm_os_free(na); return 1; } /* nm_krings_create callback for all hardware native adapters */ int netmap_hw_krings_create(struct netmap_adapter *na) { int ret = netmap_krings_create(na, 0); if (ret == 0) { /* initialize the mbq for the sw rx ring */ u_int lim = netmap_real_rings(na, NR_RX), i; for (i = na->num_rx_rings; i < lim; i++) { mbq_safe_init(&NMR(na, NR_RX)[i]->rx_queue); } nm_prdis("initialized sw rx queue %d", na->num_rx_rings); } return ret; } /* * Called on module unload by the netmap-enabled drivers */ void netmap_detach(struct ifnet *ifp) { struct netmap_adapter *na = NA(ifp); if (!na) return; NMG_LOCK(); netmap_set_all_rings(na, NM_KR_LOCKED); /* * if the netmap adapter is not native, somebody * changed it, so we can not release it here. * The NAF_ZOMBIE flag will notify the new owner that * the driver is gone. */ if (!(na->na_flags & NAF_NATIVE) || !netmap_adapter_put(na)) { na->na_flags |= NAF_ZOMBIE; } /* give active users a chance to notice that NAF_ZOMBIE has been * turned on, so that they can stop and return an error to userspace. 
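For a hardware driver, the attach side of this machinery boils down to filling a stack netmap_adapter with the ring geometry and the three callbacks and handing it to netmap_attach(). The sketch below is structural only: the foo_* names, the stub callback bodies and the argument list are invented for illustration, and the usual driver prerequisite includes are omitted.

#include <dev/netmap/netmap_kern.h>	/* after the driver's own includes */

/* Placeholder callbacks; a real driver programs the NIC here. */
static int
foo_netmap_reg(struct netmap_adapter *na, int onoff)
{
	return (0);	/* enable/disable netmap mode on the hardware */
}

static int
foo_netmap_txsync(struct netmap_kring *kring, int flags)
{
	return (0);	/* reclaim completed slots, push new ones out */
}

static int
foo_netmap_rxsync(struct netmap_kring *kring, int flags)
{
	return (0);	/* hand received slots to userspace */
}

static void
foo_netmap_attach(struct ifnet *ifp, int ntxq, int nrxq, int ndesc)
{
	struct netmap_adapter na;

	bzero(&na, sizeof(na));
	na.ifp = ifp;
	na.num_tx_rings = ntxq;
	na.num_rx_rings = nrxq;
	na.num_tx_desc = ndesc;
	na.num_rx_desc = ndesc;
	na.nm_register = foo_netmap_reg;	/* wrapped by netmap_hw_reg() */
	na.nm_txsync = foo_netmap_txsync;
	na.nm_rxsync = foo_netmap_rxsync;
	/* netmap_attach() copies 'na' into a new netmap_hw_adapter and
	 * hooks it onto the ifnet (see netmap_attach_ext() above). */
	netmap_attach(&na);
}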
* Note that this becomes a NOP if there are no active users and, * therefore, the put() above has deleted the na, since now NA(ifp) is * NULL. */ netmap_enable_all_rings(ifp); NMG_UNLOCK(); } /* * Intercept packets from the network stack and pass them * to netmap as incoming packets on the 'software' ring. * * We only store packets in a bounded mbq and then copy them * in the relevant rxsync routine. * * We rely on the OS to make sure that the ifp and na do not go * away (typically the caller checks for IFF_DRV_RUNNING or the like). * In nm_register() or whenever there is a reinitialization, * we make sure to make the mode change visible here. */ int netmap_transmit(struct ifnet *ifp, struct mbuf *m) { struct netmap_adapter *na = NA(ifp); struct netmap_kring *kring, *tx_kring; u_int len = MBUF_LEN(m); u_int error = ENOBUFS; unsigned int txr; struct mbq *q; int busy; u_int i; i = MBUF_TXQ(m); if (i >= na->num_host_rx_rings) { i = i % na->num_host_rx_rings; } kring = NMR(na, NR_RX)[nma_get_nrings(na, NR_RX) + i]; // XXX [Linux] we do not need this lock // if we follow the down/configure/up protocol -gl // mtx_lock(&na->core_lock); if (!nm_netmap_on(na)) { nm_prerr("%s not in netmap mode anymore", na->name); error = ENXIO; goto done; } txr = MBUF_TXQ(m); if (txr >= na->num_tx_rings) { txr %= na->num_tx_rings; } tx_kring = NMR(na, NR_TX)[txr]; if (tx_kring->nr_mode == NKR_NETMAP_OFF) { return MBUF_TRANSMIT(na, ifp, m); } q = &kring->rx_queue; // XXX reconsider long packets if we handle fragments if (len > NETMAP_BUF_SIZE(na)) { /* too long for us */ nm_prerr("%s from_host, drop packet size %d > %d", na->name, len, NETMAP_BUF_SIZE(na)); goto done; } if (!netmap_generic_hwcsum) { if (nm_os_mbuf_has_csum_offld(m)) { nm_prlim(1, "%s drop mbuf that needs checksum offload", na->name); goto done; } } if (nm_os_mbuf_has_seg_offld(m)) { nm_prlim(1, "%s drop mbuf that needs generic segmentation offload", na->name); goto done; } #ifdef __FreeBSD__ ETHER_BPF_MTAP(ifp, m); #endif /* __FreeBSD__ */ /* protect against netmap_rxsync_from_host(), netmap_sw_to_nic() * and maybe other instances of netmap_transmit (the latter * not possible on Linux). * We enqueue the mbuf only if we are sure there is going to be * enough room in the host RX ring, otherwise we drop it. */ mbq_lock(q); busy = kring->nr_hwtail - kring->nr_hwcur; if (busy < 0) busy += kring->nkr_num_slots; if (busy + mbq_len(q) >= kring->nkr_num_slots - 1) { nm_prlim(2, "%s full hwcur %d hwtail %d qlen %d", na->name, kring->nr_hwcur, kring->nr_hwtail, mbq_len(q)); } else { mbq_enqueue(q, m); nm_prdis(2, "%s %d bufs in queue", na->name, mbq_len(q)); /* notify outside the lock */ m = NULL; error = 0; } mbq_unlock(q); done: if (m) m_freem(m); /* unconditionally wake up listeners */ kring->nm_notify(kring, 0); /* this is normally netmap_notify(), but for nics * connected to a bridge it is netmap_bwrap_intr_notify(), * that possibly forwards the frames through the switch */ return (error); } /* * netmap_reset() is called by the driver routines when reinitializing * a ring. The driver is in charge of locking to protect the kring. * If native netmap mode is not set just return NULL. * If native netmap mode is set, in particular, we have to set nr_mode to * NKR_NETMAP_ON. 
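The drop test in netmap_transmit() above is circular-buffer occupancy arithmetic: slots between nr_hwcur and nr_hwtail are busy, modulo the ring size, and the mbuf is queued only if that count plus the mbufs already queued leaves at least one slot free. A standalone model of that check:

#include <assert.h>
#include <stdio.h>

/* Model of the occupancy test used by netmap_transmit(): returns 1 if
 * there is still room to enqueue one more mbuf, 0 otherwise. */
static int
host_ring_has_room(unsigned hwcur, unsigned hwtail, unsigned num_slots,
    unsigned already_queued)
{
	int busy = (int)hwtail - (int)hwcur;

	if (busy < 0)
		busy += num_slots;	/* wrap-around */
	return (unsigned)busy + already_queued < num_slots - 1;
}

int
main(void)
{
	/* 1024-slot ring, 1000 slots busy, 30 mbufs already queued. */
	assert(host_ring_has_room(100, 76, 1024, 30) == 0);
	/* Same ring with only 10 busy slots. */
	assert(host_ring_has_room(100, 110, 1024, 30) == 1);
	printf("occupancy model ok\n");
	return 0;
}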
*/ struct netmap_slot * netmap_reset(struct netmap_adapter *na, enum txrx tx, u_int n, u_int new_cur) { struct netmap_kring *kring; int new_hwofs, lim; if (!nm_native_on(na)) { nm_prdis("interface not in native netmap mode"); return NULL; /* nothing to reinitialize */ } /* XXX note- in the new scheme, we are not guaranteed to be * under lock (e.g. when called on a device reset). * In this case, we should set a flag and do not trust too * much the values. In practice: TODO * - set a RESET flag somewhere in the kring * - do the processing in a conservative way * - let the *sync() fixup at the end. */ if (tx == NR_TX) { if (n >= na->num_tx_rings) return NULL; kring = na->tx_rings[n]; if (kring->nr_pending_mode == NKR_NETMAP_OFF) { kring->nr_mode = NKR_NETMAP_OFF; return NULL; } // XXX check whether we should use hwcur or rcur new_hwofs = kring->nr_hwcur - new_cur; } else { if (n >= na->num_rx_rings) return NULL; kring = na->rx_rings[n]; if (kring->nr_pending_mode == NKR_NETMAP_OFF) { kring->nr_mode = NKR_NETMAP_OFF; return NULL; } new_hwofs = kring->nr_hwtail - new_cur; } lim = kring->nkr_num_slots - 1; if (new_hwofs > lim) new_hwofs -= lim + 1; /* Always set the new offset value and realign the ring. */ if (netmap_debug & NM_DEBUG_ON) nm_prinf("%s %s%d hwofs %d -> %d, hwtail %d -> %d", na->name, tx == NR_TX ? "TX" : "RX", n, kring->nkr_hwofs, new_hwofs, kring->nr_hwtail, tx == NR_TX ? lim : kring->nr_hwtail); kring->nkr_hwofs = new_hwofs; if (tx == NR_TX) { kring->nr_hwtail = kring->nr_hwcur + lim; if (kring->nr_hwtail > lim) kring->nr_hwtail -= lim + 1; } /* * Wakeup on the individual and global selwait * We do the wakeup here, but the ring is not yet reconfigured. * However, we are under lock so there are no races. */ kring->nr_mode = NKR_NETMAP_ON; kring->nm_notify(kring, 0); return kring->ring->slot; } /* * Dispatch rx/tx interrupts to the netmap rings. * * "work_done" is non-null on the RX path, NULL for the TX path. * We rely on the OS to make sure that there is only one active * instance per queue, and that there is appropriate locking. * * The 'notify' routine depends on what the ring is attached to. * - for a netmap file descriptor, do a selwakeup on the individual * waitqueue, plus one on the global one if needed * (see netmap_notify) * - for a nic connected to a switch, call the proper forwarding routine * (see netmap_bwrap_intr_notify) */ int netmap_common_irq(struct netmap_adapter *na, u_int q, u_int *work_done) { struct netmap_kring *kring; enum txrx t = (work_done ? NR_RX : NR_TX); q &= NETMAP_RING_MASK; if (netmap_debug & (NM_DEBUG_RXINTR|NM_DEBUG_TXINTR)) { nm_prlim(5, "received %s queue %d", work_done ? "RX" : "TX" , q); } if (q >= nma_get_nrings(na, t)) return NM_IRQ_PASS; // not a physical queue kring = NMR(na, t)[q]; if (kring->nr_mode == NKR_NETMAP_OFF) { return NM_IRQ_PASS; } if (t == NR_RX) { kring->nr_kflags |= NKR_PENDINTR; // XXX atomic ? *work_done = 1; /* do not fire napi again */ } return kring->nm_notify(kring, 0); } /* * Default functions to handle rx/tx interrupts from a physical device. * "work_done" is non-null on the RX path, NULL for the TX path. * * If the card is not in netmap mode, simply return NM_IRQ_PASS, * so that the caller proceeds with regular processing. * Otherwise call netmap_common_irq(). * * If the card is connected to a netmap file descriptor, * do a selwakeup on the individual queue, plus one on the global one * if needed (multiqueue card _and_ there are multiqueue listeners), * and return NR_IRQ_COMPLETED. 
* * Finally, if called on rx from an interface connected to a switch, * calls the proper forwarding routine. */ int netmap_rx_irq(struct ifnet *ifp, u_int q, u_int *work_done) { struct netmap_adapter *na = NA(ifp); /* * XXX emulated netmap mode sets NAF_SKIP_INTR so * we still use the regular driver even though the previous * check fails. It is unclear whether we should use * nm_native_on() here. */ if (!nm_netmap_on(na)) return NM_IRQ_PASS; if (na->na_flags & NAF_SKIP_INTR) { nm_prdis("use regular interrupt"); return NM_IRQ_PASS; } return netmap_common_irq(na, q, work_done); } /* set/clear native flags and if_transmit/netdev_ops */ void nm_set_native_flags(struct netmap_adapter *na) { struct ifnet *ifp = na->ifp; /* We do the setup for intercepting packets only if we are the * first user of this adapapter. */ if (na->active_fds > 0) { return; } na->na_flags |= NAF_NETMAP_ON; nm_os_onenter(ifp); nm_update_hostrings_mode(na); } void nm_clear_native_flags(struct netmap_adapter *na) { struct ifnet *ifp = na->ifp; /* We undo the setup for intercepting packets only if we are the * last user of this adapter. */ if (na->active_fds > 0) { return; } nm_update_hostrings_mode(na); nm_os_onexit(ifp); na->na_flags &= ~NAF_NETMAP_ON; } void netmap_krings_mode_commit(struct netmap_adapter *na, int onoff) { enum txrx t; for_rx_tx(t) { int i; for (i = 0; i < netmap_real_rings(na, t); i++) { struct netmap_kring *kring = NMR(na, t)[i]; if (onoff && nm_kring_pending_on(kring)) kring->nr_mode = NKR_NETMAP_ON; else if (!onoff && nm_kring_pending_off(kring)) kring->nr_mode = NKR_NETMAP_OFF; } } } /* * Module loader and unloader * * netmap_init() creates the /dev/netmap device and initializes * all global variables. Returns 0 on success, errno on failure * (but there is no chance) * * netmap_fini() destroys everything. */ static struct cdev *netmap_dev; /* /dev/netmap character device. */ extern struct cdevsw netmap_cdevsw; void netmap_fini(void) { if (netmap_dev) destroy_dev(netmap_dev); /* we assume that there are no longer netmap users */ nm_os_ifnet_fini(); netmap_uninit_bridges(); netmap_mem_fini(); NMG_LOCK_DESTROY(); nm_prinf("netmap: unloaded module."); } int netmap_init(void) { int error; NMG_LOCK_INIT(); error = netmap_mem_init(); if (error != 0) goto fail; /* * MAKEDEV_ETERNAL_KLD avoids an expensive check on syscalls * when the module is compiled in. * XXX could use make_dev_credv() to get error number */ netmap_dev = make_dev_credf(MAKEDEV_ETERNAL_KLD, &netmap_cdevsw, 0, NULL, UID_ROOT, GID_WHEEL, 0600, "netmap"); if (!netmap_dev) goto fail; error = netmap_init_bridges(); if (error) goto fail; #ifdef __FreeBSD__ nm_os_vi_init_index(); #endif error = nm_os_ifnet_init(); if (error) goto fail; nm_prinf("netmap: loaded module"); return (0); fail: netmap_fini(); return (EINVAL); /* may be incorrect */ } Index: head/sys/dev/netmap/netmap_legacy.c =================================================================== --- head/sys/dev/netmap/netmap_legacy.c (revision 345268) +++ head/sys/dev/netmap/netmap_legacy.c (revision 345269) @@ -1,435 +1,439 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2018 Vincenzo Maffione * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* $FreeBSD$ */ #if defined(__FreeBSD__) #include /* prerequisite */ #include #include /* defines used in kernel.h */ #include /* FIONBIO */ #include #include /* struct socket */ #include /* sockaddrs */ #include #include #include #include /* BIOCIMMEDIATE */ #include /* bus_dmamap_* */ #include #elif defined(linux) #include "bsd_glue.h" #elif defined(__APPLE__) #warning OSX support is only partial #include "osx_glue.h" #elif defined (_WIN32) #include "win_glue.h" #endif /* * common headers */ #include #include #include static int nmreq_register_from_legacy(struct nmreq *nmr, struct nmreq_header *hdr, struct nmreq_register *req) { req->nr_offset = nmr->nr_offset; req->nr_memsize = nmr->nr_memsize; req->nr_tx_slots = nmr->nr_tx_slots; req->nr_rx_slots = nmr->nr_rx_slots; req->nr_tx_rings = nmr->nr_tx_rings; req->nr_rx_rings = nmr->nr_rx_rings; + req->nr_host_tx_rings = 0; + req->nr_host_rx_rings = 0; req->nr_mem_id = nmr->nr_arg2; req->nr_ringid = nmr->nr_ringid & NETMAP_RING_MASK; if ((nmr->nr_flags & NR_REG_MASK) == NR_REG_DEFAULT) { /* Convert the older nmr->nr_ringid (original * netmap control API) to nmr->nr_flags. */ u_int regmode = NR_REG_DEFAULT; if (req->nr_ringid & NETMAP_SW_RING) { regmode = NR_REG_SW; } else if (req->nr_ringid & NETMAP_HW_RING) { regmode = NR_REG_ONE_NIC; } else { regmode = NR_REG_ALL_NIC; } req->nr_mode = regmode; } else { req->nr_mode = nmr->nr_flags & NR_REG_MASK; } /* Fix nr_name, nr_mode and nr_ringid to handle pipe requests. */ if (req->nr_mode == NR_REG_PIPE_MASTER || req->nr_mode == NR_REG_PIPE_SLAVE) { char suffix[10]; snprintf(suffix, sizeof(suffix), "%c%d", (req->nr_mode == NR_REG_PIPE_MASTER ? '{' : '}'), req->nr_ringid); if (strlen(hdr->nr_name) + strlen(suffix) >= sizeof(hdr->nr_name)) { /* No space for the pipe suffix. */ return ENOBUFS; } strncat(hdr->nr_name, suffix, strlen(suffix)); req->nr_mode = NR_REG_ALL_NIC; req->nr_ringid = 0; } req->nr_flags = nmr->nr_flags & (~NR_REG_MASK); if (nmr->nr_ringid & NETMAP_NO_TX_POLL) { req->nr_flags |= NR_NO_TX_POLL; } if (nmr->nr_ringid & NETMAP_DO_RX_POLL) { req->nr_flags |= NR_DO_RX_POLL; } /* nmr->nr_arg1 (nr_pipes) ignored */ req->nr_extra_bufs = nmr->nr_arg3; return 0; } /* Convert the legacy 'nmr' struct into one of the nmreq_xyz structs * (new API). The new struct is dynamically allocated. */ static struct nmreq_header * nmreq_from_legacy(struct nmreq *nmr, u_long ioctl_cmd) { struct nmreq_header *hdr = nm_os_malloc(sizeof(*hdr)); if (hdr == NULL) { goto oom; } /* Sanitize nmr->nr_name by adding the string terminator. 
*/ if (ioctl_cmd == NIOCGINFO || ioctl_cmd == NIOCREGIF) { nmr->nr_name[sizeof(nmr->nr_name) - 1] = '\0'; } /* First prepare the request header. */ hdr->nr_version = NETMAP_API; /* new API */ strlcpy(hdr->nr_name, nmr->nr_name, sizeof(nmr->nr_name)); hdr->nr_options = (uintptr_t)NULL; hdr->nr_body = (uintptr_t)NULL; switch (ioctl_cmd) { case NIOCREGIF: { switch (nmr->nr_cmd) { case 0: { /* Regular NIOCREGIF operation. */ struct nmreq_register *req = nm_os_malloc(sizeof(*req)); if (!req) { goto oom; } hdr->nr_body = (uintptr_t)req; hdr->nr_reqtype = NETMAP_REQ_REGISTER; if (nmreq_register_from_legacy(nmr, hdr, req)) { goto oom; } break; } case NETMAP_BDG_ATTACH: { struct nmreq_vale_attach *req = nm_os_malloc(sizeof(*req)); if (!req) { goto oom; } hdr->nr_body = (uintptr_t)req; hdr->nr_reqtype = NETMAP_REQ_VALE_ATTACH; if (nmreq_register_from_legacy(nmr, hdr, &req->reg)) { goto oom; } /* Fix nr_mode, starting from nr_arg1. */ if (nmr->nr_arg1 & NETMAP_BDG_HOST) { req->reg.nr_mode = NR_REG_NIC_SW; } else { req->reg.nr_mode = NR_REG_ALL_NIC; } break; } case NETMAP_BDG_DETACH: { hdr->nr_reqtype = NETMAP_REQ_VALE_DETACH; hdr->nr_body = (uintptr_t)nm_os_malloc(sizeof(struct nmreq_vale_detach)); break; } case NETMAP_BDG_VNET_HDR: case NETMAP_VNET_HDR_GET: { struct nmreq_port_hdr *req = nm_os_malloc(sizeof(*req)); if (!req) { goto oom; } hdr->nr_body = (uintptr_t)req; hdr->nr_reqtype = (nmr->nr_cmd == NETMAP_BDG_VNET_HDR) ? NETMAP_REQ_PORT_HDR_SET : NETMAP_REQ_PORT_HDR_GET; req->nr_hdr_len = nmr->nr_arg1; break; } case NETMAP_BDG_NEWIF : { struct nmreq_vale_newif *req = nm_os_malloc(sizeof(*req)); if (!req) { goto oom; } hdr->nr_body = (uintptr_t)req; hdr->nr_reqtype = NETMAP_REQ_VALE_NEWIF; req->nr_tx_slots = nmr->nr_tx_slots; req->nr_rx_slots = nmr->nr_rx_slots; req->nr_tx_rings = nmr->nr_tx_rings; req->nr_rx_rings = nmr->nr_rx_rings; req->nr_mem_id = nmr->nr_arg2; break; } case NETMAP_BDG_DELIF: { hdr->nr_reqtype = NETMAP_REQ_VALE_DELIF; break; } case NETMAP_BDG_POLLING_ON: case NETMAP_BDG_POLLING_OFF: { struct nmreq_vale_polling *req = nm_os_malloc(sizeof(*req)); if (!req) { goto oom; } hdr->nr_body = (uintptr_t)req; hdr->nr_reqtype = (nmr->nr_cmd == NETMAP_BDG_POLLING_ON) ? NETMAP_REQ_VALE_POLLING_ENABLE : NETMAP_REQ_VALE_POLLING_DISABLE; switch (nmr->nr_flags & NR_REG_MASK) { default: req->nr_mode = 0; /* invalid */ break; case NR_REG_ONE_NIC: req->nr_mode = NETMAP_POLLING_MODE_MULTI_CPU; break; case NR_REG_ALL_NIC: req->nr_mode = NETMAP_POLLING_MODE_SINGLE_CPU; break; } req->nr_first_cpu_id = nmr->nr_ringid & NETMAP_RING_MASK; req->nr_num_polling_cpus = nmr->nr_arg1; break; } case NETMAP_PT_HOST_CREATE: case NETMAP_PT_HOST_DELETE: { nm_prerr("Netmap passthrough not supported yet"); return NULL; break; } } break; } case NIOCGINFO: { if (nmr->nr_cmd == NETMAP_BDG_LIST) { struct nmreq_vale_list *req = nm_os_malloc(sizeof(*req)); if (!req) { goto oom; } hdr->nr_body = (uintptr_t)req; hdr->nr_reqtype = NETMAP_REQ_VALE_LIST; req->nr_bridge_idx = nmr->nr_arg1; req->nr_port_idx = nmr->nr_arg2; } else { /* Regular NIOCGINFO. 
*/ struct nmreq_port_info_get *req = nm_os_malloc(sizeof(*req)); if (!req) { goto oom; } hdr->nr_body = (uintptr_t)req; hdr->nr_reqtype = NETMAP_REQ_PORT_INFO_GET; req->nr_memsize = nmr->nr_memsize; req->nr_tx_slots = nmr->nr_tx_slots; req->nr_rx_slots = nmr->nr_rx_slots; req->nr_tx_rings = nmr->nr_tx_rings; req->nr_rx_rings = nmr->nr_rx_rings; + req->nr_host_tx_rings = 0; + req->nr_host_rx_rings = 0; req->nr_mem_id = nmr->nr_arg2; } break; } } return hdr; oom: if (hdr) { if (hdr->nr_body) { nm_os_free((void *)(uintptr_t)hdr->nr_body); } nm_os_free(hdr); } nm_prerr("Failed to allocate memory for nmreq_xyz struct"); return NULL; } static void nmreq_register_to_legacy(const struct nmreq_register *req, struct nmreq *nmr) { nmr->nr_offset = req->nr_offset; nmr->nr_memsize = req->nr_memsize; nmr->nr_tx_slots = req->nr_tx_slots; nmr->nr_rx_slots = req->nr_rx_slots; nmr->nr_tx_rings = req->nr_tx_rings; nmr->nr_rx_rings = req->nr_rx_rings; nmr->nr_arg2 = req->nr_mem_id; nmr->nr_arg3 = req->nr_extra_bufs; } /* Convert a nmreq_xyz struct (new API) to the legacy 'nmr' struct. * It also frees the nmreq_xyz struct, as it was allocated by * nmreq_from_legacy(). */ static int nmreq_to_legacy(struct nmreq_header *hdr, struct nmreq *nmr) { int ret = 0; /* We only write-back the fields that the user expects to be * written back. */ switch (hdr->nr_reqtype) { case NETMAP_REQ_REGISTER: { struct nmreq_register *req = (struct nmreq_register *)(uintptr_t)hdr->nr_body; nmreq_register_to_legacy(req, nmr); break; } case NETMAP_REQ_PORT_INFO_GET: { struct nmreq_port_info_get *req = (struct nmreq_port_info_get *)(uintptr_t)hdr->nr_body; nmr->nr_memsize = req->nr_memsize; nmr->nr_tx_slots = req->nr_tx_slots; nmr->nr_rx_slots = req->nr_rx_slots; nmr->nr_tx_rings = req->nr_tx_rings; nmr->nr_rx_rings = req->nr_rx_rings; nmr->nr_arg2 = req->nr_mem_id; break; } case NETMAP_REQ_VALE_ATTACH: { struct nmreq_vale_attach *req = (struct nmreq_vale_attach *)(uintptr_t)hdr->nr_body; nmreq_register_to_legacy(&req->reg, nmr); break; } case NETMAP_REQ_VALE_DETACH: { break; } case NETMAP_REQ_VALE_LIST: { struct nmreq_vale_list *req = (struct nmreq_vale_list *)(uintptr_t)hdr->nr_body; strlcpy(nmr->nr_name, hdr->nr_name, sizeof(nmr->nr_name)); nmr->nr_arg1 = req->nr_bridge_idx; nmr->nr_arg2 = req->nr_port_idx; break; } case NETMAP_REQ_PORT_HDR_SET: case NETMAP_REQ_PORT_HDR_GET: { struct nmreq_port_hdr *req = (struct nmreq_port_hdr *)(uintptr_t)hdr->nr_body; nmr->nr_arg1 = req->nr_hdr_len; break; } case NETMAP_REQ_VALE_NEWIF: { struct nmreq_vale_newif *req = (struct nmreq_vale_newif *)(uintptr_t)hdr->nr_body; nmr->nr_tx_slots = req->nr_tx_slots; nmr->nr_rx_slots = req->nr_rx_slots; nmr->nr_tx_rings = req->nr_tx_rings; nmr->nr_rx_rings = req->nr_rx_rings; nmr->nr_arg2 = req->nr_mem_id; break; } case NETMAP_REQ_VALE_DELIF: case NETMAP_REQ_VALE_POLLING_ENABLE: case NETMAP_REQ_VALE_POLLING_DISABLE: { break; } } return ret; } int netmap_ioctl_legacy(struct netmap_priv_d *priv, u_long cmd, caddr_t data, struct thread *td) { int error = 0; switch (cmd) { case NIOCGINFO: case NIOCREGIF: { /* Request for the legacy control API. Convert it to a * NIOCCTRL request. 
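 * The conversion happens in four steps: nmreq_from_legacy() builds a
 * struct nmreq_header plus request body in kernel memory,
 * netmap_ioctl() serves it as a NIOCCTRL request (with
 * nr_body_is_user=0, since the body does not live in user space),
 * nmreq_to_legacy() copies the results back into the caller's
 * struct nmreq, and finally the temporary header and body are freed.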
*/ struct nmreq *nmr = (struct nmreq *) data; struct nmreq_header *hdr; - if (nmr->nr_version < 11) { - nm_prerr("Minimum supported API is 11 (requested %u)", + if (nmr->nr_version < 14) { + nm_prerr("Minimum supported API is 14 (requested %u)", nmr->nr_version); return EINVAL; } hdr = nmreq_from_legacy(nmr, cmd); if (hdr == NULL) { /* out of memory */ return ENOMEM; } error = netmap_ioctl(priv, NIOCCTRL, (caddr_t)hdr, td, /*nr_body_is_user=*/0); if (error == 0) { nmreq_to_legacy(hdr, nmr); } if (hdr->nr_body) { nm_os_free((void *)(uintptr_t)hdr->nr_body); } nm_os_free(hdr); break; } #ifdef WITH_VALE case NIOCCONFIG: { struct nm_ifreq *nr = (struct nm_ifreq *)data; error = netmap_bdg_config(nr); break; } #endif #ifdef __FreeBSD__ case FIONBIO: case FIOASYNC: /* FIONBIO/FIOASYNC are no-ops. */ break; case BIOCIMMEDIATE: case BIOCGHDRCMPLT: case BIOCSHDRCMPLT: case BIOCSSEESENT: /* Ignore these commands. */ break; default: /* allow device-specific ioctls */ { struct nmreq *nmr = (struct nmreq *)data; struct ifnet *ifp = ifunit_ref(nmr->nr_name); if (ifp == NULL) { error = ENXIO; } else { struct socket so; bzero(&so, sizeof(so)); so.so_vnet = ifp->if_vnet; // so->so_proto not null. error = ifioctl(&so, cmd, data, td); if_rele(ifp); } break; } #else /* linux */ default: error = EOPNOTSUPP; #endif /* linux */ } return error; } Index: head/sys/dev/netmap/netmap_mem2.c =================================================================== --- head/sys/dev/netmap/netmap_mem2.c (revision 345268) +++ head/sys/dev/netmap/netmap_mem2.c (revision 345269) @@ -1,2852 +1,2856 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2012-2014 Matteo Landi * Copyright (C) 2012-2016 Luigi Rizzo * Copyright (C) 2012-2016 Giuseppe Lettieri * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #ifdef linux #include "bsd_glue.h" #endif /* linux */ #ifdef __APPLE__ #include "osx_glue.h" #endif /* __APPLE__ */ #ifdef __FreeBSD__ #include /* prerequisite */ __FBSDID("$FreeBSD$"); #include #include #include /* MALLOC_DEFINE */ #include #include /* vtophys */ #include /* vtophys */ #include /* sockaddrs */ #include #include #include #include #include #include /* bus_dmamap_* */ /* M_NETMAP only used in here */ MALLOC_DECLARE(M_NETMAP); MALLOC_DEFINE(M_NETMAP, "netmap", "Network memory map"); #endif /* __FreeBSD__ */ #ifdef _WIN32 #include #endif #include #include #include #include "netmap_mem2.h" #ifdef _WIN32_USE_SMALL_GENERIC_DEVICES_MEMORY #define NETMAP_BUF_MAX_NUM 8*4096 /* if too big takes too much time to allocate */ #else #define NETMAP_BUF_MAX_NUM 20*4096*2 /* large machine */ #endif #define NETMAP_POOL_MAX_NAMSZ 32 enum { NETMAP_IF_POOL = 0, NETMAP_RING_POOL, NETMAP_BUF_POOL, NETMAP_POOLS_NR }; struct netmap_obj_params { u_int size; u_int num; u_int last_size; u_int last_num; }; struct netmap_obj_pool { char name[NETMAP_POOL_MAX_NAMSZ]; /* name of the allocator */ /* ---------------------------------------------------*/ /* these are only meaningful if the pool is finalized */ /* (see 'finalized' field in netmap_mem_d) */ u_int objtotal; /* actual total number of objects. */ u_int memtotal; /* actual total memory space */ u_int numclusters; /* actual number of clusters */ u_int objfree; /* number of free objects. */ struct lut_entry *lut; /* virt,phys addresses, objtotal entries */ uint32_t *bitmap; /* one bit per buffer, 1 means free */ uint32_t *invalid_bitmap;/* one bit per buffer, 1 means invalid */ uint32_t bitmap_slots; /* number of uint32 entries in bitmap */ int alloc_done; /* we have allocated the memory */ /* ---------------------------------------------------*/ /* limits */ u_int objminsize; /* minimum object size */ u_int objmaxsize; /* maximum object size */ u_int nummin; /* minimum number of objects */ u_int nummax; /* maximum number of objects */ /* these are changed only by config */ u_int _objtotal; /* total number of objects */ u_int _objsize; /* object size */ u_int _clustsize; /* cluster size */ u_int _clustentries; /* objects per cluster */ u_int _numclusters; /* number of clusters */ /* requested values */ u_int r_objtotal; u_int r_objsize; }; #define NMA_LOCK_T NM_MTX_T #define NMA_LOCK_INIT(n) NM_MTX_INIT((n)->nm_mtx) #define NMA_LOCK_DESTROY(n) NM_MTX_DESTROY((n)->nm_mtx) #define NMA_LOCK(n) NM_MTX_LOCK((n)->nm_mtx) #define NMA_SPINLOCK(n) NM_MTX_SPINLOCK((n)->nm_mtx) #define NMA_UNLOCK(n) NM_MTX_UNLOCK((n)->nm_mtx) struct netmap_mem_ops { int (*nmd_get_lut)(struct netmap_mem_d *, struct netmap_lut*); int (*nmd_get_info)(struct netmap_mem_d *, uint64_t *size, u_int *memflags, uint16_t *id); vm_paddr_t (*nmd_ofstophys)(struct netmap_mem_d *, vm_ooffset_t); int (*nmd_config)(struct netmap_mem_d *); int (*nmd_finalize)(struct netmap_mem_d *); void (*nmd_deref)(struct netmap_mem_d *); ssize_t (*nmd_if_offset)(struct netmap_mem_d *, const void *vaddr); void (*nmd_delete)(struct netmap_mem_d *); struct netmap_if * (*nmd_if_new)(struct netmap_adapter *, struct netmap_priv_d *); void (*nmd_if_delete)(struct netmap_adapter *, struct netmap_if *); int (*nmd_rings_create)(struct netmap_adapter *); void (*nmd_rings_delete)(struct netmap_adapter *); }; struct netmap_mem_d { NMA_LOCK_T nm_mtx; /* protect the allocator */ u_int nm_totalsize; /* shorthand */ u_int flags; #define NETMAP_MEM_FINALIZED 0x1 /* preallocation done */ #define NETMAP_MEM_HIDDEN 0x8 /* 
beeing prepared */ int lasterr; /* last error for curr config */ int active; /* active users */ int refcount; /* the three allocators */ struct netmap_obj_pool pools[NETMAP_POOLS_NR]; nm_memid_t nm_id; /* allocator identifier */ int nm_grp; /* iommu groupd id */ /* list of all existing allocators, sorted by nm_id */ struct netmap_mem_d *prev, *next; struct netmap_mem_ops *ops; struct netmap_obj_params params[NETMAP_POOLS_NR]; #define NM_MEM_NAMESZ 16 char name[NM_MEM_NAMESZ]; }; int netmap_mem_get_lut(struct netmap_mem_d *nmd, struct netmap_lut *lut) { int rv; NMA_LOCK(nmd); rv = nmd->ops->nmd_get_lut(nmd, lut); NMA_UNLOCK(nmd); return rv; } int netmap_mem_get_info(struct netmap_mem_d *nmd, uint64_t *size, u_int *memflags, nm_memid_t *memid) { int rv; NMA_LOCK(nmd); rv = nmd->ops->nmd_get_info(nmd, size, memflags, memid); NMA_UNLOCK(nmd); return rv; } vm_paddr_t netmap_mem_ofstophys(struct netmap_mem_d *nmd, vm_ooffset_t off) { vm_paddr_t pa; #if defined(__FreeBSD__) /* This function is called by netmap_dev_pager_fault(), which holds a * non-sleepable lock since FreeBSD 12. Since we cannot sleep, we * spin on the trylock. */ NMA_SPINLOCK(nmd); #else NMA_LOCK(nmd); #endif pa = nmd->ops->nmd_ofstophys(nmd, off); NMA_UNLOCK(nmd); return pa; } static int netmap_mem_config(struct netmap_mem_d *nmd) { if (nmd->active) { /* already in use. Not fatal, but we * cannot change the configuration */ return 0; } return nmd->ops->nmd_config(nmd); } ssize_t netmap_mem_if_offset(struct netmap_mem_d *nmd, const void *off) { ssize_t rv; NMA_LOCK(nmd); rv = nmd->ops->nmd_if_offset(nmd, off); NMA_UNLOCK(nmd); return rv; } static void netmap_mem_delete(struct netmap_mem_d *nmd) { nmd->ops->nmd_delete(nmd); } struct netmap_if * netmap_mem_if_new(struct netmap_adapter *na, struct netmap_priv_d *priv) { struct netmap_if *nifp; struct netmap_mem_d *nmd = na->nm_mem; NMA_LOCK(nmd); nifp = nmd->ops->nmd_if_new(na, priv); NMA_UNLOCK(nmd); return nifp; } void netmap_mem_if_delete(struct netmap_adapter *na, struct netmap_if *nif) { struct netmap_mem_d *nmd = na->nm_mem; NMA_LOCK(nmd); nmd->ops->nmd_if_delete(na, nif); NMA_UNLOCK(nmd); } int netmap_mem_rings_create(struct netmap_adapter *na) { int rv; struct netmap_mem_d *nmd = na->nm_mem; NMA_LOCK(nmd); rv = nmd->ops->nmd_rings_create(na); NMA_UNLOCK(nmd); return rv; } void netmap_mem_rings_delete(struct netmap_adapter *na) { struct netmap_mem_d *nmd = na->nm_mem; NMA_LOCK(nmd); nmd->ops->nmd_rings_delete(na); NMA_UNLOCK(nmd); } static int netmap_mem_map(struct netmap_obj_pool *, struct netmap_adapter *); static int netmap_mem_unmap(struct netmap_obj_pool *, struct netmap_adapter *); static int nm_mem_assign_group(struct netmap_mem_d *, struct device *); static void nm_mem_release_id(struct netmap_mem_d *); nm_memid_t netmap_mem_get_id(struct netmap_mem_d *nmd) { return nmd->nm_id; } #ifdef NM_DEBUG_MEM_PUTGET #define NM_DBG_REFC(nmd, func, line) \ nm_prinf("%d mem[%d] -> %d", line, (nmd)->nm_id, (nmd)->refcount); #else #define NM_DBG_REFC(nmd, func, line) #endif /* circular list of all existing allocators */ static struct netmap_mem_d *netmap_last_mem_d = &nm_mem; NM_MTX_T nm_mem_list_lock; struct netmap_mem_d * __netmap_mem_get(struct netmap_mem_d *nmd, const char *func, int line) { NM_MTX_LOCK(nm_mem_list_lock); nmd->refcount++; NM_DBG_REFC(nmd, func, line); NM_MTX_UNLOCK(nm_mem_list_lock); return nmd; } void __netmap_mem_put(struct netmap_mem_d *nmd, const char *func, int line) { int last; NM_MTX_LOCK(nm_mem_list_lock); last = (--nmd->refcount == 0); if (last) 
nm_mem_release_id(nmd); NM_DBG_REFC(nmd, func, line); NM_MTX_UNLOCK(nm_mem_list_lock); if (last) netmap_mem_delete(nmd); } int netmap_mem_finalize(struct netmap_mem_d *nmd, struct netmap_adapter *na) { int lasterr = 0; if (nm_mem_assign_group(nmd, na->pdev) < 0) { return ENOMEM; } NMA_LOCK(nmd); if (netmap_mem_config(nmd)) goto out; nmd->active++; nmd->lasterr = nmd->ops->nmd_finalize(nmd); if (!nmd->lasterr && na->pdev) { nmd->lasterr = netmap_mem_map(&nmd->pools[NETMAP_BUF_POOL], na); } out: lasterr = nmd->lasterr; NMA_UNLOCK(nmd); if (lasterr) netmap_mem_deref(nmd, na); return lasterr; } static int nm_isset(uint32_t *bitmap, u_int i) { return bitmap[ (i>>5) ] & ( 1U << (i & 31U) ); } static int netmap_init_obj_allocator_bitmap(struct netmap_obj_pool *p) { u_int n, j; if (p->bitmap == NULL) { /* Allocate the bitmap */ n = (p->objtotal + 31) / 32; p->bitmap = nm_os_malloc(sizeof(p->bitmap[0]) * n); if (p->bitmap == NULL) { nm_prerr("Unable to create bitmap (%d entries) for allocator '%s'", (int)n, p->name); return ENOMEM; } p->bitmap_slots = n; } else { memset(p->bitmap, 0, p->bitmap_slots * sizeof(p->bitmap[0])); } p->objfree = 0; /* * Set all the bits in the bitmap that have * corresponding buffers to 1 to indicate they are * free. */ for (j = 0; j < p->objtotal; j++) { if (p->invalid_bitmap && nm_isset(p->invalid_bitmap, j)) { if (netmap_debug & NM_DEBUG_MEM) nm_prinf("skipping %s %d", p->name, j); continue; } p->bitmap[ (j>>5) ] |= ( 1U << (j & 31U) ); p->objfree++; } if (netmap_verbose) nm_prinf("%s free %u", p->name, p->objfree); if (p->objfree == 0) { if (netmap_verbose) nm_prerr("%s: no objects available", p->name); return ENOMEM; } return 0; } static int netmap_mem_init_bitmaps(struct netmap_mem_d *nmd) { int i, error = 0; for (i = 0; i < NETMAP_POOLS_NR; i++) { struct netmap_obj_pool *p = &nmd->pools[i]; error = netmap_init_obj_allocator_bitmap(p); if (error) return error; } /* * buffers 0 and 1 are reserved */ if (nmd->pools[NETMAP_BUF_POOL].objfree < 2) { nm_prerr("%s: not enough buffers", nmd->pools[NETMAP_BUF_POOL].name); return ENOMEM; } nmd->pools[NETMAP_BUF_POOL].objfree -= 2; if (nmd->pools[NETMAP_BUF_POOL].bitmap) { /* XXX This check is a workaround that prevents a * NULL pointer crash which currently happens only * with ptnetmap guests. * Removed shared-info --> is the bug still there? */ nmd->pools[NETMAP_BUF_POOL].bitmap[0] = ~3U; } return 0; } int netmap_mem_deref(struct netmap_mem_d *nmd, struct netmap_adapter *na) { int last_user = 0; NMA_LOCK(nmd); if (na->active_fds <= 0) netmap_mem_unmap(&nmd->pools[NETMAP_BUF_POOL], na); if (nmd->active == 1) { last_user = 1; /* * Reset the allocator when it falls out of use so that any * pool resources leaked by unclean application exits are * reclaimed. 
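 * Re-initializing the bitmaps below marks every object as free again
 * (with buffers 0 and 1 re-reserved by netmap_mem_init_bitmaps())
 * without releasing the underlying clusters.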
*/ netmap_mem_init_bitmaps(nmd); } nmd->ops->nmd_deref(nmd); nmd->active--; if (last_user) { nmd->nm_grp = -1; nmd->lasterr = 0; } NMA_UNLOCK(nmd); return last_user; } /* accessor functions */ static int netmap_mem2_get_lut(struct netmap_mem_d *nmd, struct netmap_lut *lut) { lut->lut = nmd->pools[NETMAP_BUF_POOL].lut; #ifdef __FreeBSD__ lut->plut = lut->lut; #endif lut->objtotal = nmd->pools[NETMAP_BUF_POOL].objtotal; lut->objsize = nmd->pools[NETMAP_BUF_POOL]._objsize; return 0; } static struct netmap_obj_params netmap_min_priv_params[NETMAP_POOLS_NR] = { [NETMAP_IF_POOL] = { .size = 1024, .num = 2, }, [NETMAP_RING_POOL] = { .size = 5*PAGE_SIZE, .num = 4, }, [NETMAP_BUF_POOL] = { .size = 2048, .num = 4098, }, }; /* * nm_mem is the memory allocator used for all physical interfaces * running in netmap mode. * Virtual (VALE) ports will have each its own allocator. */ extern struct netmap_mem_ops netmap_mem_global_ops; /* forward */ struct netmap_mem_d nm_mem = { /* Our memory allocator. */ .pools = { [NETMAP_IF_POOL] = { .name = "netmap_if", .objminsize = sizeof(struct netmap_if), .objmaxsize = 4096, .nummin = 10, /* don't be stingy */ .nummax = 10000, /* XXX very large */ }, [NETMAP_RING_POOL] = { .name = "netmap_ring", .objminsize = sizeof(struct netmap_ring), .objmaxsize = 32*PAGE_SIZE, .nummin = 2, .nummax = 1024, }, [NETMAP_BUF_POOL] = { .name = "netmap_buf", .objminsize = 64, .objmaxsize = 65536, .nummin = 4, .nummax = 1000000, /* one million! */ }, }, .params = { [NETMAP_IF_POOL] = { .size = 1024, .num = 100, }, [NETMAP_RING_POOL] = { .size = 9*PAGE_SIZE, .num = 200, }, [NETMAP_BUF_POOL] = { .size = 2048, .num = NETMAP_BUF_MAX_NUM, }, }, .nm_id = 1, .nm_grp = -1, .prev = &nm_mem, .next = &nm_mem, .ops = &netmap_mem_global_ops, .name = "1" }; /* blueprint for the private memory allocators */ /* XXX clang is not happy about using name as a print format */ static const struct netmap_mem_d nm_blueprint = { .pools = { [NETMAP_IF_POOL] = { .name = "%s_if", .objminsize = sizeof(struct netmap_if), .objmaxsize = 4096, .nummin = 1, .nummax = 100, }, [NETMAP_RING_POOL] = { .name = "%s_ring", .objminsize = sizeof(struct netmap_ring), .objmaxsize = 32*PAGE_SIZE, .nummin = 2, .nummax = 1024, }, [NETMAP_BUF_POOL] = { .name = "%s_buf", .objminsize = 64, .objmaxsize = 65536, .nummin = 4, .nummax = 1000000, /* one million! 
*/ }, }, .nm_grp = -1, .flags = NETMAP_MEM_PRIVATE, .ops = &netmap_mem_global_ops, }; /* memory allocator related sysctls */ #define STRINGIFY(x) #x #define DECLARE_SYSCTLS(id, name) \ SYSBEGIN(mem2_ ## name); \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_size, \ CTLFLAG_RW, &nm_mem.params[id].size, 0, "Requested size of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_size, \ CTLFLAG_RD, &nm_mem.pools[id]._objsize, 0, "Current size of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_num, \ CTLFLAG_RW, &nm_mem.params[id].num, 0, "Requested number of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_num, \ CTLFLAG_RD, &nm_mem.pools[id].objtotal, 0, "Current number of netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, priv_##name##_size, \ CTLFLAG_RW, &netmap_min_priv_params[id].size, 0, \ "Default size of private netmap " STRINGIFY(name) "s"); \ SYSCTL_INT(_dev_netmap, OID_AUTO, priv_##name##_num, \ CTLFLAG_RW, &netmap_min_priv_params[id].num, 0, \ "Default number of private netmap " STRINGIFY(name) "s"); \ SYSEND SYSCTL_DECL(_dev_netmap); DECLARE_SYSCTLS(NETMAP_IF_POOL, if); DECLARE_SYSCTLS(NETMAP_RING_POOL, ring); DECLARE_SYSCTLS(NETMAP_BUF_POOL, buf); /* call with nm_mem_list_lock held */ static int nm_mem_assign_id_locked(struct netmap_mem_d *nmd) { nm_memid_t id; struct netmap_mem_d *scan = netmap_last_mem_d; int error = ENOMEM; do { /* we rely on unsigned wrap around */ id = scan->nm_id + 1; if (id == 0) /* reserve 0 as error value */ id = 1; scan = scan->next; if (id != scan->nm_id) { nmd->nm_id = id; nmd->prev = scan->prev; nmd->next = scan; scan->prev->next = nmd; scan->prev = nmd; netmap_last_mem_d = nmd; nmd->refcount = 1; NM_DBG_REFC(nmd, __FUNCTION__, __LINE__); error = 0; break; } } while (scan != netmap_last_mem_d); return error; } /* call with nm_mem_list_lock *not* held */ static int nm_mem_assign_id(struct netmap_mem_d *nmd) { int ret; NM_MTX_LOCK(nm_mem_list_lock); ret = nm_mem_assign_id_locked(nmd); NM_MTX_UNLOCK(nm_mem_list_lock); return ret; } /* call with nm_mem_list_lock held */ static void nm_mem_release_id(struct netmap_mem_d *nmd) { nmd->prev->next = nmd->next; nmd->next->prev = nmd->prev; if (netmap_last_mem_d == nmd) netmap_last_mem_d = nmd->prev; nmd->prev = nmd->next = NULL; } struct netmap_mem_d * netmap_mem_find(nm_memid_t id) { struct netmap_mem_d *nmd; NM_MTX_LOCK(nm_mem_list_lock); nmd = netmap_last_mem_d; do { if (!(nmd->flags & NETMAP_MEM_HIDDEN) && nmd->nm_id == id) { nmd->refcount++; NM_DBG_REFC(nmd, __FUNCTION__, __LINE__); NM_MTX_UNLOCK(nm_mem_list_lock); return nmd; } nmd = nmd->next; } while (nmd != netmap_last_mem_d); NM_MTX_UNLOCK(nm_mem_list_lock); return NULL; } static int nm_mem_assign_group(struct netmap_mem_d *nmd, struct device *dev) { int err = 0, id; id = nm_iommu_group_id(dev); if (netmap_debug & NM_DEBUG_MEM) nm_prinf("iommu_group %d", id); NMA_LOCK(nmd); if (nmd->nm_grp < 0) nmd->nm_grp = id; if (nmd->nm_grp != id) { if (netmap_verbose) nm_prerr("iommu group mismatch: %u vs %u", nmd->nm_grp, id); nmd->lasterr = err = ENOMEM; } NMA_UNLOCK(nmd); return err; } static struct lut_entry * nm_alloc_lut(u_int nobj) { size_t n = sizeof(struct lut_entry) * nobj; struct lut_entry *lut; #ifdef linux lut = vmalloc(n); #else lut = nm_os_malloc(n); #endif return lut; } static void nm_free_lut(struct lut_entry *lut, u_int objtotal) { bzero(lut, sizeof(struct lut_entry) * objtotal); #ifdef linux vfree(lut); #else nm_os_free(lut); #endif } #if 
defined(linux) || defined(_WIN32) static struct plut_entry * nm_alloc_plut(u_int nobj) { size_t n = sizeof(struct plut_entry) * nobj; struct plut_entry *lut; lut = vmalloc(n); return lut; } static void nm_free_plut(struct plut_entry * lut) { vfree(lut); } #endif /* linux or _WIN32 */ /* * First, find the allocator that contains the requested offset, * then locate the cluster through a lookup table. */ static vm_paddr_t netmap_mem2_ofstophys(struct netmap_mem_d* nmd, vm_ooffset_t offset) { int i; vm_ooffset_t o = offset; vm_paddr_t pa; struct netmap_obj_pool *p; p = nmd->pools; for (i = 0; i < NETMAP_POOLS_NR; offset -= p[i].memtotal, i++) { if (offset >= p[i].memtotal) continue; // now lookup the cluster's address #ifndef _WIN32 pa = vtophys(p[i].lut[offset / p[i]._objsize].vaddr) + offset % p[i]._objsize; #else pa = vtophys(p[i].lut[offset / p[i]._objsize].vaddr); pa.QuadPart += offset % p[i]._objsize; #endif return pa; } /* this is only in case of errors */ nm_prerr("invalid ofs 0x%x out of 0x%x 0x%x 0x%x", (u_int)o, p[NETMAP_IF_POOL].memtotal, p[NETMAP_IF_POOL].memtotal + p[NETMAP_RING_POOL].memtotal, p[NETMAP_IF_POOL].memtotal + p[NETMAP_RING_POOL].memtotal + p[NETMAP_BUF_POOL].memtotal); #ifndef _WIN32 return 0; /* bad address */ #else vm_paddr_t res; res.QuadPart = 0; return res; #endif } #ifdef _WIN32 /* * win32_build_virtual_memory_for_userspace * * This function get all the object making part of the pools and maps * a contiguous virtual memory space for the userspace * It works this way * 1 - allocate a Memory Descriptor List wide as the sum * of the memory needed for the pools * 2 - cycle all the objects in every pool and for every object do * * 2a - cycle all the objects in every pool, get the list * of the physical address descriptors * 2b - calculate the offset in the array of pages desciptor in the * main MDL * 2c - copy the descriptors of the object in the main MDL * * 3 - return the resulting MDL that needs to be mapped in userland * * In this way we will have an MDL that describes all the memory for the * objects in a single object */ PMDL win32_build_user_vm_map(struct netmap_mem_d* nmd) { u_int memflags, ofs = 0; PMDL mainMdl, tempMdl; uint64_t memsize; int i, j; if (netmap_mem_get_info(nmd, &memsize, &memflags, NULL)) { nm_prerr("memory not finalised yet"); return NULL; } mainMdl = IoAllocateMdl(NULL, memsize, FALSE, FALSE, NULL); if (mainMdl == NULL) { nm_prerr("failed to allocate mdl"); return NULL; } NMA_LOCK(nmd); for (i = 0; i < NETMAP_POOLS_NR; i++) { struct netmap_obj_pool *p = &nmd->pools[i]; int clsz = p->_clustsize; int clobjs = p->_clustentries; /* objects per cluster */ int mdl_len = sizeof(PFN_NUMBER) * BYTES_TO_PAGES(clsz); PPFN_NUMBER pSrc, pDst; /* each pool has a different cluster size so we need to reallocate */ tempMdl = IoAllocateMdl(p->lut[0].vaddr, clsz, FALSE, FALSE, NULL); if (tempMdl == NULL) { NMA_UNLOCK(nmd); nm_prerr("fail to allocate tempMdl"); IoFreeMdl(mainMdl); return NULL; } pSrc = MmGetMdlPfnArray(tempMdl); /* create one entry per cluster, the lut[] has one entry per object */ for (j = 0; j < p->numclusters; j++, ofs += clsz) { pDst = &MmGetMdlPfnArray(mainMdl)[BYTES_TO_PAGES(ofs)]; MmInitializeMdl(tempMdl, p->lut[j*clobjs].vaddr, clsz); MmBuildMdlForNonPagedPool(tempMdl); /* compute physical page addresses */ RtlCopyMemory(pDst, pSrc, mdl_len); /* copy the page descriptors */ mainMdl->MdlFlags = tempMdl->MdlFlags; /* XXX what is in here ? 
*/ } IoFreeMdl(tempMdl); } NMA_UNLOCK(nmd); return mainMdl; } #endif /* _WIN32 */ /* * helper function for OS-specific mmap routines (currently only windows). * Given an nmd and a pool index, returns the cluster size and number of clusters. * Returns 0 if memory is finalised and the pool is valid, otherwise 1. * It should be called under NMA_LOCK(nmd) otherwise the underlying info can change. */ int netmap_mem2_get_pool_info(struct netmap_mem_d* nmd, u_int pool, u_int *clustsize, u_int *numclusters) { if (!nmd || !clustsize || !numclusters || pool >= NETMAP_POOLS_NR) return 1; /* invalid arguments */ // NMA_LOCK_ASSERT(nmd); if (!(nmd->flags & NETMAP_MEM_FINALIZED)) { *clustsize = *numclusters = 0; return 1; /* not ready yet */ } *clustsize = nmd->pools[pool]._clustsize; *numclusters = nmd->pools[pool].numclusters; return 0; /* success */ } static int netmap_mem2_get_info(struct netmap_mem_d* nmd, uint64_t* size, u_int *memflags, nm_memid_t *id) { int error = 0; error = netmap_mem_config(nmd); if (error) goto out; if (size) { if (nmd->flags & NETMAP_MEM_FINALIZED) { *size = nmd->nm_totalsize; } else { int i; *size = 0; for (i = 0; i < NETMAP_POOLS_NR; i++) { struct netmap_obj_pool *p = nmd->pools + i; *size += (p->_numclusters * p->_clustsize); } } } if (memflags) *memflags = nmd->flags; if (id) *id = nmd->nm_id; out: return error; } /* * we store objects by kernel address, need to find the offset * within the pool to export the value to userspace. * Algorithm: scan until we find the cluster, then add the * actual offset in the cluster */ static ssize_t netmap_obj_offset(struct netmap_obj_pool *p, const void *vaddr) { int i, k = p->_clustentries, n = p->objtotal; ssize_t ofs = 0; for (i = 0; i < n; i += k, ofs += p->_clustsize) { const char *base = p->lut[i].vaddr; ssize_t relofs = (const char *) vaddr - base; if (relofs < 0 || relofs >= p->_clustsize) continue; ofs = ofs + relofs; nm_prdis("%s: return offset %d (cluster %d) for pointer %p", p->name, ofs, i, vaddr); return ofs; } nm_prerr("address %p is not contained inside any cluster (%s)", vaddr, p->name); return 0; /* An error occurred */ } /* Helper functions which convert virtual addresses to offsets */ #define netmap_if_offset(n, v) \ netmap_obj_offset(&(n)->pools[NETMAP_IF_POOL], (v)) #define netmap_ring_offset(n, v) \ ((n)->pools[NETMAP_IF_POOL].memtotal + \ netmap_obj_offset(&(n)->pools[NETMAP_RING_POOL], (v))) static ssize_t netmap_mem2_if_offset(struct netmap_mem_d *nmd, const void *addr) { return netmap_if_offset(nmd, addr); } /* * report the index, and use start position as a hint, * otherwise buffer allocation becomes terribly expensive. 
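 * Each uint32_t word of p->bitmap covers 32 objects, with a set bit
 * meaning "free".  The scan starts from the word index passed in
 * *start, picks the lowest set bit j of the first non-zero word i,
 * clears it, and returns the object whose index is:
 *
 *	index = i * 32 + j	(so word = index / 32, bit = index % 32)
 *
 * netmap_obj_free() performs the inverse computation when the object
 * is returned to the pool.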
*/ static void * netmap_obj_malloc(struct netmap_obj_pool *p, u_int len, uint32_t *start, uint32_t *index) { uint32_t i = 0; /* index in the bitmap */ uint32_t mask, j = 0; /* slot counter */ void *vaddr = NULL; if (len > p->_objsize) { nm_prerr("%s request size %d too large", p->name, len); return NULL; } if (p->objfree == 0) { nm_prerr("no more %s objects", p->name); return NULL; } if (start) i = *start; /* termination is guaranteed by p->free, but better check bounds on i */ while (vaddr == NULL && i < p->bitmap_slots) { uint32_t cur = p->bitmap[i]; if (cur == 0) { /* bitmask is fully used */ i++; continue; } /* locate a slot */ for (j = 0, mask = 1; (cur & mask) == 0; j++, mask <<= 1) ; p->bitmap[i] &= ~mask; /* mark object as in use */ p->objfree--; vaddr = p->lut[i * 32 + j].vaddr; if (index) *index = i * 32 + j; } nm_prdis("%s allocator: allocated object @ [%d][%d]: vaddr %p",p->name, i, j, vaddr); if (start) *start = i; return vaddr; } /* * free by index, not by address. * XXX should we also cleanup the content ? */ static int netmap_obj_free(struct netmap_obj_pool *p, uint32_t j) { uint32_t *ptr, mask; if (j >= p->objtotal) { nm_prerr("invalid index %u, max %u", j, p->objtotal); return 1; } ptr = &p->bitmap[j / 32]; mask = (1 << (j % 32)); if (*ptr & mask) { nm_prerr("ouch, double free on buffer %d", j); return 1; } else { *ptr |= mask; p->objfree++; return 0; } } /* * free by address. This is slow but is only used for a few * objects (rings, nifp) */ static void netmap_obj_free_va(struct netmap_obj_pool *p, void *vaddr) { u_int i, j, n = p->numclusters; for (i = 0, j = 0; i < n; i++, j += p->_clustentries) { void *base = p->lut[i * p->_clustentries].vaddr; ssize_t relofs = (ssize_t) vaddr - (ssize_t) base; /* Given address, is out of the scope of the current cluster.*/ if (base == NULL || vaddr < base || relofs >= p->_clustsize) continue; j = j + relofs / p->_objsize; /* KASSERT(j != 0, ("Cannot free object 0")); */ netmap_obj_free(p, j); return; } nm_prerr("address %p is not contained inside any cluster (%s)", vaddr, p->name); } unsigned netmap_mem_bufsize(struct netmap_mem_d *nmd) { return nmd->pools[NETMAP_BUF_POOL]._objsize; } #define netmap_if_malloc(n, len) netmap_obj_malloc(&(n)->pools[NETMAP_IF_POOL], len, NULL, NULL) #define netmap_if_free(n, v) netmap_obj_free_va(&(n)->pools[NETMAP_IF_POOL], (v)) #define netmap_ring_malloc(n, len) netmap_obj_malloc(&(n)->pools[NETMAP_RING_POOL], len, NULL, NULL) #define netmap_ring_free(n, v) netmap_obj_free_va(&(n)->pools[NETMAP_RING_POOL], (v)) #define netmap_buf_malloc(n, _pos, _index) \ netmap_obj_malloc(&(n)->pools[NETMAP_BUF_POOL], netmap_mem_bufsize(n), _pos, _index) #if 0 /* currently unused */ /* Return the index associated to the given packet buffer */ #define netmap_buf_index(n, v) \ (netmap_obj_offset(&(n)->pools[NETMAP_BUF_POOL], (v)) / NETMAP_BDG_BUF_SIZE(n)) #endif /* * allocate extra buffers in a linked list. * returns the actual number. 
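 * The list is threaded through the buffers themselves: the first
 * uint32_t of each allocated buffer holds the index of the previously
 * allocated one, index 0 terminates the list, and *head ends up with
 * the index of the last buffer obtained.  From userspace the list can
 * be walked roughly as follows (illustrative sketch, using the
 * NETMAP_BUF() helper from netmap_user.h; 'ring' and 'nifp' are the
 * application's own pointers):
 *
 *	uint32_t idx;
 *
 *	for (idx = nifp->ni_bufs_head; idx != 0;
 *	     idx = *(uint32_t *)NETMAP_BUF(ring, idx)) {
 *		... use extra buffer 'idx' ...
 *	}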
*/ uint32_t netmap_extra_alloc(struct netmap_adapter *na, uint32_t *head, uint32_t n) { struct netmap_mem_d *nmd = na->nm_mem; uint32_t i, pos = 0; /* opaque, scan position in the bitmap */ NMA_LOCK(nmd); *head = 0; /* default, 'null' index ie empty list */ for (i = 0 ; i < n; i++) { uint32_t cur = *head; /* save current head */ uint32_t *p = netmap_buf_malloc(nmd, &pos, head); if (p == NULL) { nm_prerr("no more buffers after %d of %d", i, n); *head = cur; /* restore */ break; } nm_prdis(5, "allocate buffer %d -> %d", *head, cur); *p = cur; /* link to previous head */ } NMA_UNLOCK(nmd); return i; } static void netmap_extra_free(struct netmap_adapter *na, uint32_t head) { struct lut_entry *lut = na->na_lut.lut; struct netmap_mem_d *nmd = na->nm_mem; struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; uint32_t i, cur, *buf; nm_prdis("freeing the extra list"); for (i = 0; head >=2 && head < p->objtotal; i++) { cur = head; buf = lut[head].vaddr; head = *buf; *buf = 0; if (netmap_obj_free(p, cur)) break; } if (head != 0) nm_prerr("breaking with head %d", head); if (netmap_debug & NM_DEBUG_MEM) nm_prinf("freed %d buffers", i); } /* Return nonzero on error */ static int netmap_new_bufs(struct netmap_mem_d *nmd, struct netmap_slot *slot, u_int n) { struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; u_int i = 0; /* slot counter */ uint32_t pos = 0; /* slot in p->bitmap */ uint32_t index = 0; /* buffer index */ for (i = 0; i < n; i++) { void *vaddr = netmap_buf_malloc(nmd, &pos, &index); if (vaddr == NULL) { nm_prerr("no more buffers after %d of %d", i, n); goto cleanup; } slot[i].buf_idx = index; slot[i].len = p->_objsize; slot[i].flags = 0; slot[i].ptr = 0; } nm_prdis("%s: allocated %d buffers, %d available, first at %d", p->name, n, p->objfree, pos); return (0); cleanup: while (i > 0) { i--; netmap_obj_free(p, slot[i].buf_idx); } bzero(slot, n * sizeof(slot[0])); return (ENOMEM); } static void netmap_mem_set_ring(struct netmap_mem_d *nmd, struct netmap_slot *slot, u_int n, uint32_t index) { struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; u_int i; for (i = 0; i < n; i++) { slot[i].buf_idx = index; slot[i].len = p->_objsize; slot[i].flags = 0; } } static void netmap_free_buf(struct netmap_mem_d *nmd, uint32_t i) { struct netmap_obj_pool *p = &nmd->pools[NETMAP_BUF_POOL]; if (i < 2 || i >= p->objtotal) { nm_prerr("Cannot free buf#%d: should be in [2, %d[", i, p->objtotal); return; } netmap_obj_free(p, i); } static void netmap_free_bufs(struct netmap_mem_d *nmd, struct netmap_slot *slot, u_int n) { u_int i; for (i = 0; i < n; i++) { if (slot[i].buf_idx > 1) netmap_free_buf(nmd, slot[i].buf_idx); } nm_prdis("%s: released some buffers, available: %u", p->name, p->objfree); } static void netmap_reset_obj_allocator(struct netmap_obj_pool *p) { if (p == NULL) return; if (p->bitmap) nm_os_free(p->bitmap); p->bitmap = NULL; if (p->invalid_bitmap) nm_os_free(p->invalid_bitmap); p->invalid_bitmap = NULL; if (!p->alloc_done) { /* allocation was done by somebody else. * Let them clean up after themselves. */ return; } if (p->lut) { u_int i; /* * Free each cluster allocated in * netmap_finalize_obj_allocator(). The cluster start * addresses are stored at multiples of p->_clusterentries * in the lut. 
*/ for (i = 0; i < p->objtotal; i += p->_clustentries) { contigfree(p->lut[i].vaddr, p->_clustsize, M_NETMAP); } nm_free_lut(p->lut, p->objtotal); } p->lut = NULL; p->objtotal = 0; p->memtotal = 0; p->numclusters = 0; p->objfree = 0; p->alloc_done = 0; } /* * Free all resources related to an allocator. */ static void netmap_destroy_obj_allocator(struct netmap_obj_pool *p) { if (p == NULL) return; netmap_reset_obj_allocator(p); } /* * We receive a request for objtotal objects, of size objsize each. * Internally we may round up both numbers, as we allocate objects * in small clusters multiple of the page size. * We need to keep track of objtotal and clustentries, * as they are needed when freeing memory. * * XXX note -- userspace needs the buffers to be contiguous, * so we cannot afford gaps at the end of a cluster. */ /* call with NMA_LOCK held */ static int netmap_config_obj_allocator(struct netmap_obj_pool *p, u_int objtotal, u_int objsize) { int i; u_int clustsize; /* the cluster size, multiple of page size */ u_int clustentries; /* how many objects per entry */ /* we store the current request, so we can * detect configuration changes later */ p->r_objtotal = objtotal; p->r_objsize = objsize; #define MAX_CLUSTSIZE (1<<22) // 4 MB #define LINE_ROUND NM_CACHE_ALIGN // 64 if (objsize >= MAX_CLUSTSIZE) { /* we could do it but there is no point */ nm_prerr("unsupported allocation for %d bytes", objsize); return EINVAL; } /* make sure objsize is a multiple of LINE_ROUND */ i = (objsize & (LINE_ROUND - 1)); if (i) { nm_prinf("aligning object by %d bytes", LINE_ROUND - i); objsize += LINE_ROUND - i; } if (objsize < p->objminsize || objsize > p->objmaxsize) { nm_prerr("requested objsize %d out of range [%d, %d]", objsize, p->objminsize, p->objmaxsize); return EINVAL; } if (objtotal < p->nummin || objtotal > p->nummax) { nm_prerr("requested objtotal %d out of range [%d, %d]", objtotal, p->nummin, p->nummax); return EINVAL; } /* * Compute number of objects using a brute-force approach: * given a max cluster size, * we try to fill it with objects keeping track of the * wasted space to the next page boundary. */ for (clustentries = 0, i = 1;; i++) { u_int delta, used = i * objsize; if (used > MAX_CLUSTSIZE) break; delta = used % PAGE_SIZE; if (delta == 0) { // exact solution clustentries = i; break; } } /* exact solution not found */ if (clustentries == 0) { nm_prerr("unsupported allocation for %d bytes", objsize); return EINVAL; } /* compute clustsize */ clustsize = clustentries * objsize; if (netmap_debug & NM_DEBUG_MEM) nm_prinf("objsize %d clustsize %d objects %d", objsize, clustsize, clustentries); /* * The number of clusters is n = ceil(objtotal/clustentries) * objtotal' = n * clustentries */ p->_clustentries = clustentries; p->_clustsize = clustsize; p->_numclusters = (objtotal + clustentries - 1) / clustentries; /* actual values (may be larger than requested) */ p->_objsize = objsize; p->_objtotal = p->_numclusters * clustentries; return 0; } /* call with NMA_LOCK held */ static int netmap_finalize_obj_allocator(struct netmap_obj_pool *p) { int i; /* must be signed */ size_t n; if (p->lut) { /* if the lut is already there we assume that also all the * clusters have already been allocated, possibily by somebody * else (e.g., extmem). In the latter case, the alloc_done flag * will remain at zero, so that we will not attempt to * deallocate the clusters by ourselves in * netmap_reset_obj_allocator. 
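 * Conversely, when the clusters are allocated here, alloc_done is set
 * to 1 below, so that netmap_reset_obj_allocator() knows this
 * allocator owns them and may contigfree() them.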
*/ return 0; } /* optimistically assume we have enough memory */ p->numclusters = p->_numclusters; p->objtotal = p->_objtotal; p->alloc_done = 1; p->lut = nm_alloc_lut(p->objtotal); if (p->lut == NULL) { nm_prerr("Unable to create lookup table for '%s'", p->name); goto clean; } /* * Allocate clusters, init pointers */ n = p->_clustsize; for (i = 0; i < (int)p->objtotal;) { int lim = i + p->_clustentries; char *clust; /* * XXX Note, we only need contigmalloc() for buffers attached * to native interfaces. In all other cases (nifp, netmap rings * and even buffers for VALE ports or emulated interfaces) we * can live with standard malloc, because the hardware will not * access the pages directly. */ clust = contigmalloc(n, M_NETMAP, M_NOWAIT | M_ZERO, (size_t)0, -1UL, PAGE_SIZE, 0); if (clust == NULL) { /* * If we get here, there is a severe memory shortage, * so halve the allocated memory to reclaim some. */ nm_prerr("Unable to create cluster at %d for '%s' allocator", i, p->name); if (i < 2) /* nothing to halve */ goto out; lim = i / 2; for (i--; i >= lim; i--) { if (i % p->_clustentries == 0 && p->lut[i].vaddr) contigfree(p->lut[i].vaddr, n, M_NETMAP); p->lut[i].vaddr = NULL; } out: p->objtotal = i; /* we may have stopped in the middle of a cluster */ p->numclusters = (i + p->_clustentries - 1) / p->_clustentries; break; } /* * Set lut state for all buffers in the current cluster. * * [i, lim) is the set of buffer indexes that cover the * current cluster. * * 'clust' is really the address of the current buffer in * the current cluster as we index through it with a stride * of p->_objsize. */ for (; i < lim; i++, clust += p->_objsize) { p->lut[i].vaddr = clust; #if !defined(linux) && !defined(_WIN32) p->lut[i].paddr = vtophys(clust); #endif } } p->memtotal = p->numclusters * p->_clustsize; if (netmap_verbose) nm_prinf("Pre-allocated %d clusters (%d/%dKB) for '%s'", p->numclusters, p->_clustsize >> 10, p->memtotal >> 10, p->name); return 0; clean: netmap_reset_obj_allocator(p); return ENOMEM; } /* call with lock held */ static int netmap_mem_params_changed(struct netmap_obj_params* p) { int i, rv = 0; for (i = 0; i < NETMAP_POOLS_NR; i++) { if (p[i].last_size != p[i].size || p[i].last_num != p[i].num) { p[i].last_size = p[i].size; p[i].last_num = p[i].num; rv = 1; } } return rv; } static void netmap_mem_reset_all(struct netmap_mem_d *nmd) { int i; if (netmap_debug & NM_DEBUG_MEM) nm_prinf("resetting %p", nmd); for (i = 0; i < NETMAP_POOLS_NR; i++) { netmap_reset_obj_allocator(&nmd->pools[i]); } nmd->flags &= ~NETMAP_MEM_FINALIZED; } static int netmap_mem_unmap(struct netmap_obj_pool *p, struct netmap_adapter *na) { int i, lim = p->objtotal; struct netmap_lut *lut = &na->na_lut; if (na == NULL || na->pdev == NULL) return 0; #if defined(__FreeBSD__) /* On FreeBSD mapping and unmapping is performed by the txsync * and rxsync routine, packet by packet. 
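 * On Linux, instead, a per-adapter physical lookup table (na_lut.plut)
 * is used: netmap_mem_map() allocates it and DMA-maps each cluster
 * through netmap_load_map(), recording the resulting physical
 * addresses; this routine undoes that work by unloading the mappings
 * and freeing the plut.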
*/ (void)i; (void)lim; (void)lut; #elif defined(_WIN32) (void)i; (void)lim; (void)lut; nm_prerr("unsupported on Windows"); #else /* linux */ nm_prdis("unmapping and freeing plut for %s", na->name); if (lut->plut == NULL) return 0; for (i = 0; i < lim; i += p->_clustentries) { if (lut->plut[i].paddr) netmap_unload_map(na, (bus_dma_tag_t) na->pdev, &lut->plut[i].paddr, p->_clustsize); } nm_free_plut(lut->plut); lut->plut = NULL; #endif /* linux */ return 0; } static int netmap_mem_map(struct netmap_obj_pool *p, struct netmap_adapter *na) { int error = 0; int i, lim = p->objtotal; struct netmap_lut *lut = &na->na_lut; if (na->pdev == NULL) return 0; #if defined(__FreeBSD__) /* On FreeBSD mapping and unmapping is performed by the txsync * and rxsync routine, packet by packet. */ (void)i; (void)lim; (void)lut; #elif defined(_WIN32) (void)i; (void)lim; (void)lut; nm_prerr("unsupported on Windows"); #else /* linux */ if (lut->plut != NULL) { nm_prdis("plut already allocated for %s", na->name); return 0; } nm_prdis("allocating physical lut for %s", na->name); lut->plut = nm_alloc_plut(lim); if (lut->plut == NULL) { nm_prerr("Failed to allocate physical lut for %s", na->name); return ENOMEM; } for (i = 0; i < lim; i += p->_clustentries) { lut->plut[i].paddr = 0; } for (i = 0; i < lim; i += p->_clustentries) { int j; if (p->lut[i].vaddr == NULL) continue; error = netmap_load_map(na, (bus_dma_tag_t) na->pdev, &lut->plut[i].paddr, p->lut[i].vaddr, p->_clustsize); if (error) { nm_prerr("Failed to map cluster #%d from the %s pool", i, p->name); break; } for (j = 1; j < p->_clustentries; j++) { lut->plut[i + j].paddr = lut->plut[i + j - 1].paddr + p->_objsize; } } if (error) netmap_mem_unmap(p, na); #endif /* linux */ return error; } static int netmap_mem_finalize_all(struct netmap_mem_d *nmd) { int i; if (nmd->flags & NETMAP_MEM_FINALIZED) return 0; nmd->lasterr = 0; nmd->nm_totalsize = 0; for (i = 0; i < NETMAP_POOLS_NR; i++) { nmd->lasterr = netmap_finalize_obj_allocator(&nmd->pools[i]); if (nmd->lasterr) goto error; nmd->nm_totalsize += nmd->pools[i].memtotal; } nmd->lasterr = netmap_mem_init_bitmaps(nmd); if (nmd->lasterr) goto error; nmd->flags |= NETMAP_MEM_FINALIZED; if (netmap_verbose) nm_prinf("interfaces %d KB, rings %d KB, buffers %d MB", nmd->pools[NETMAP_IF_POOL].memtotal >> 10, nmd->pools[NETMAP_RING_POOL].memtotal >> 10, nmd->pools[NETMAP_BUF_POOL].memtotal >> 20); if (netmap_verbose) nm_prinf("Free buffers: %d", nmd->pools[NETMAP_BUF_POOL].objfree); return 0; error: netmap_mem_reset_all(nmd); return nmd->lasterr; } /* * allocator for private memory */ static void * _netmap_mem_private_new(size_t size, struct netmap_obj_params *p, struct netmap_mem_ops *ops, int *perr) { struct netmap_mem_d *d = NULL; int i, err = 0; d = nm_os_malloc(size); if (d == NULL) { err = ENOMEM; goto error; } *d = nm_blueprint; d->ops = ops; err = nm_mem_assign_id(d); if (err) goto error_free; snprintf(d->name, NM_MEM_NAMESZ, "%d", d->nm_id); for (i = 0; i < NETMAP_POOLS_NR; i++) { snprintf(d->pools[i].name, NETMAP_POOL_MAX_NAMSZ, nm_blueprint.pools[i].name, d->name); d->params[i].num = p[i].num; d->params[i].size = p[i].size; } NMA_LOCK_INIT(d); err = netmap_mem_config(d); if (err) goto error_rel_id; d->flags &= ~NETMAP_MEM_FINALIZED; return d; error_rel_id: NMA_LOCK_DESTROY(d); nm_mem_release_id(d); error_free: nm_os_free(d); error: if (perr) *perr = err; return NULL; } struct netmap_mem_d * netmap_mem_private_new(u_int txr, u_int txd, u_int rxr, u_int rxd, u_int extra_bufs, u_int npipes, int *perr) { struct 
netmap_mem_d *d = NULL; struct netmap_obj_params p[NETMAP_POOLS_NR]; int i; u_int v, maxd; /* account for the fake host rings */ txr++; rxr++; /* copy the min values */ for (i = 0; i < NETMAP_POOLS_NR; i++) { p[i] = netmap_min_priv_params[i]; } /* possibly increase them to fit user request */ v = sizeof(struct netmap_if) + sizeof(ssize_t) * (txr + rxr); if (p[NETMAP_IF_POOL].size < v) p[NETMAP_IF_POOL].size = v; v = 2 + 4 * npipes; if (p[NETMAP_IF_POOL].num < v) p[NETMAP_IF_POOL].num = v; maxd = (txd > rxd) ? txd : rxd; v = sizeof(struct netmap_ring) + sizeof(struct netmap_slot) * maxd; if (p[NETMAP_RING_POOL].size < v) p[NETMAP_RING_POOL].size = v; /* each pipe endpoint needs two tx rings (1 normal + 1 host, fake) * and two rx rings (again, 1 normal and 1 fake host) */ v = txr + rxr + 8 * npipes; if (p[NETMAP_RING_POOL].num < v) p[NETMAP_RING_POOL].num = v; /* for each pipe we only need the buffers for the 4 "real" rings. * On the other end, the pipe ring dimension may be different from * the parent port ring dimension. As a compromise, we allocate twice the * space actually needed if the pipe rings were the same size as the parent rings */ v = (4 * npipes + rxr) * rxd + (4 * npipes + txr) * txd + 2 + extra_bufs; /* the +2 is for the tx and rx fake buffers (indices 0 and 1) */ if (p[NETMAP_BUF_POOL].num < v) p[NETMAP_BUF_POOL].num = v; if (netmap_verbose) nm_prinf("req if %d*%d ring %d*%d buf %d*%d", p[NETMAP_IF_POOL].num, p[NETMAP_IF_POOL].size, p[NETMAP_RING_POOL].num, p[NETMAP_RING_POOL].size, p[NETMAP_BUF_POOL].num, p[NETMAP_BUF_POOL].size); d = _netmap_mem_private_new(sizeof(*d), p, &netmap_mem_global_ops, perr); return d; } /* call with lock held */ static int netmap_mem2_config(struct netmap_mem_d *nmd) { int i; if (!netmap_mem_params_changed(nmd->params)) goto out; nm_prdis("reconfiguring"); if (nmd->flags & NETMAP_MEM_FINALIZED) { /* reset previous allocation */ for (i = 0; i < NETMAP_POOLS_NR; i++) { netmap_reset_obj_allocator(&nmd->pools[i]); } nmd->flags &= ~NETMAP_MEM_FINALIZED; } for (i = 0; i < NETMAP_POOLS_NR; i++) { nmd->lasterr = netmap_config_obj_allocator(&nmd->pools[i], nmd->params[i].num, nmd->params[i].size); if (nmd->lasterr) goto out; } out: return nmd->lasterr; } static int netmap_mem2_finalize(struct netmap_mem_d *nmd) { if (nmd->flags & NETMAP_MEM_FINALIZED) goto out; if (netmap_mem_finalize_all(nmd)) goto out; nmd->lasterr = 0; out: return nmd->lasterr; } static void netmap_mem2_delete(struct netmap_mem_d *nmd) { int i; for (i = 0; i < NETMAP_POOLS_NR; i++) { netmap_destroy_obj_allocator(&nmd->pools[i]); } NMA_LOCK_DESTROY(nmd); if (nmd != &nm_mem) nm_os_free(nmd); } #ifdef WITH_EXTMEM /* doubly linekd list of all existing external allocators */ static struct netmap_mem_ext *netmap_mem_ext_list = NULL; NM_MTX_T nm_mem_ext_list_lock; #endif /* WITH_EXTMEM */ int netmap_mem_init(void) { NM_MTX_INIT(nm_mem_list_lock); NMA_LOCK_INIT(&nm_mem); netmap_mem_get(&nm_mem); #ifdef WITH_EXTMEM NM_MTX_INIT(nm_mem_ext_list_lock); #endif /* WITH_EXTMEM */ return (0); } void netmap_mem_fini(void) { netmap_mem_put(&nm_mem); } static void netmap_free_rings(struct netmap_adapter *na) { enum txrx t; for_rx_tx(t) { u_int i; for (i = 0; i < netmap_all_rings(na, t); i++) { struct netmap_kring *kring = NMR(na, t)[i]; struct netmap_ring *ring = kring->ring; if (ring == NULL || kring->users > 0 || (kring->nr_kflags & NKR_NEEDRING)) { if (netmap_debug & NM_DEBUG_MEM) nm_prinf("NOT deleting ring %s (ring %p, users %d neekring %d)", kring->name, ring, kring->users, kring->nr_kflags & 
NKR_NEEDRING); continue; } if (netmap_debug & NM_DEBUG_MEM) nm_prinf("deleting ring %s", kring->name); if (!(kring->nr_kflags & NKR_FAKERING)) { nm_prdis("freeing bufs for %s", kring->name); netmap_free_bufs(na->nm_mem, ring->slot, kring->nkr_num_slots); } else { nm_prdis("NOT freeing bufs for %s", kring->name); } netmap_ring_free(na->nm_mem, ring); kring->ring = NULL; } } } /* call with NMA_LOCK held * * * Allocate netmap rings and buffers for this card * The rings are contiguous, but have variable size. * The kring array must follow the layout described * in netmap_krings_create(). */ static int netmap_mem2_rings_create(struct netmap_adapter *na) { enum txrx t; for_rx_tx(t) { u_int i; for (i = 0; i < netmap_all_rings(na, t); i++) { struct netmap_kring *kring = NMR(na, t)[i]; struct netmap_ring *ring = kring->ring; u_int len, ndesc; if (ring || (!kring->users && !(kring->nr_kflags & NKR_NEEDRING))) { /* uneeded, or already created by somebody else */ if (netmap_debug & NM_DEBUG_MEM) nm_prinf("NOT creating ring %s (ring %p, users %d neekring %d)", kring->name, ring, kring->users, kring->nr_kflags & NKR_NEEDRING); continue; } if (netmap_debug & NM_DEBUG_MEM) nm_prinf("creating %s", kring->name); ndesc = kring->nkr_num_slots; len = sizeof(struct netmap_ring) + ndesc * sizeof(struct netmap_slot); ring = netmap_ring_malloc(na->nm_mem, len); if (ring == NULL) { nm_prerr("Cannot allocate %s_ring", nm_txrx2str(t)); goto cleanup; } nm_prdis("txring at %p", ring); kring->ring = ring; *(uint32_t *)(uintptr_t)&ring->num_slots = ndesc; *(int64_t *)(uintptr_t)&ring->buf_ofs = (na->nm_mem->pools[NETMAP_IF_POOL].memtotal + na->nm_mem->pools[NETMAP_RING_POOL].memtotal) - netmap_ring_offset(na->nm_mem, ring); /* copy values from kring */ ring->head = kring->rhead; ring->cur = kring->rcur; ring->tail = kring->rtail; *(uint32_t *)(uintptr_t)&ring->nr_buf_size = netmap_mem_bufsize(na->nm_mem); nm_prdis("%s h %d c %d t %d", kring->name, ring->head, ring->cur, ring->tail); nm_prdis("initializing slots for %s_ring", nm_txrx2str(t)); if (!(kring->nr_kflags & NKR_FAKERING)) { /* this is a real ring */ if (netmap_debug & NM_DEBUG_MEM) nm_prinf("allocating buffers for %s", kring->name); if (netmap_new_bufs(na->nm_mem, ring->slot, ndesc)) { nm_prerr("Cannot allocate buffers for %s_ring", nm_txrx2str(t)); goto cleanup; } } else { /* this is a fake ring, set all indices to 0 */ if (netmap_debug & NM_DEBUG_MEM) nm_prinf("NOT allocating buffers for %s", kring->name); netmap_mem_set_ring(na->nm_mem, ring->slot, ndesc, 0); } /* ring info */ *(uint16_t *)(uintptr_t)&ring->ringid = kring->ring_id; *(uint16_t *)(uintptr_t)&ring->dir = kring->tx; } } return 0; cleanup: /* we cannot actually cleanup here, since we don't own kring->users * and kring->nr_klags & NKR_NEEDRING. The caller must decrement * the first or zero-out the second, then call netmap_free_rings() * to do the cleanup */ return ENOMEM; } static void netmap_mem2_rings_delete(struct netmap_adapter *na) { /* last instance, release bufs and rings */ netmap_free_rings(na); } /* call with NMA_LOCK held */ /* * Allocate the per-fd structure netmap_if. * * We assume that the configuration stored in na * (number of tx/rx rings and descs) does not change while * the interface is in netmap mode. 
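 * The ring_ofs[] array filled below stores, for each ring, the offset
 * of that ring from the nifp itself.  Userspace turns an offset back
 * into a pointer along the lines of what the netmap_user.h macros do
 * (illustrative sketch, roughly what NETMAP_TXRING() expands to):
 *
 *	struct netmap_ring *txr = (struct netmap_ring *)
 *		((char *)nifp + nifp->ring_ofs[i]);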
*/ static struct netmap_if * netmap_mem2_if_new(struct netmap_adapter *na, struct netmap_priv_d *priv) { struct netmap_if *nifp; ssize_t base; /* handy for relative offsets between rings and nifp */ u_int i, len, n[NR_TXRX], ntot; enum txrx t; ntot = 0; for_rx_tx(t) { /* account for the (eventually fake) host rings */ n[t] = netmap_all_rings(na, t); ntot += n[t]; } /* * the descriptor is followed inline by an array of offsets * to the tx and rx rings in the shared memory region. */ len = sizeof(struct netmap_if) + (ntot * sizeof(ssize_t)); nifp = netmap_if_malloc(na->nm_mem, len); if (nifp == NULL) { NMA_UNLOCK(na->nm_mem); return NULL; } /* initialize base fields -- override const */ *(u_int *)(uintptr_t)&nifp->ni_tx_rings = na->num_tx_rings; *(u_int *)(uintptr_t)&nifp->ni_rx_rings = na->num_rx_rings; + *(u_int *)(uintptr_t)&nifp->ni_host_tx_rings = + (na->num_host_tx_rings ? na->num_host_tx_rings : 1); + *(u_int *)(uintptr_t)&nifp->ni_host_rx_rings = + (na->num_host_rx_rings ? na->num_host_rx_rings : 1); strlcpy(nifp->ni_name, na->name, sizeof(nifp->ni_name)); /* * fill the slots for the rx and tx rings. They contain the offset * between the ring and nifp, so the information is usable in * userspace to reach the ring from the nifp. */ base = netmap_if_offset(na->nm_mem, nifp); for (i = 0; i < n[NR_TX]; i++) { /* XXX instead of ofs == 0 maybe use the offset of an error * ring, like we do for buffers? */ ssize_t ofs = 0; if (na->tx_rings[i]->ring != NULL && i >= priv->np_qfirst[NR_TX] && i < priv->np_qlast[NR_TX]) { ofs = netmap_ring_offset(na->nm_mem, na->tx_rings[i]->ring) - base; } *(ssize_t *)(uintptr_t)&nifp->ring_ofs[i] = ofs; } for (i = 0; i < n[NR_RX]; i++) { /* XXX instead of ofs == 0 maybe use the offset of an error * ring, like we do for buffers? 
*/ ssize_t ofs = 0; if (na->rx_rings[i]->ring != NULL && i >= priv->np_qfirst[NR_RX] && i < priv->np_qlast[NR_RX]) { ofs = netmap_ring_offset(na->nm_mem, na->rx_rings[i]->ring) - base; } *(ssize_t *)(uintptr_t)&nifp->ring_ofs[i+n[NR_TX]] = ofs; } return (nifp); } static void netmap_mem2_if_delete(struct netmap_adapter *na, struct netmap_if *nifp) { if (nifp == NULL) /* nothing to do */ return; if (nifp->ni_bufs_head) netmap_extra_free(na, nifp->ni_bufs_head); netmap_if_free(na->nm_mem, nifp); } static void netmap_mem2_deref(struct netmap_mem_d *nmd) { if (netmap_debug & NM_DEBUG_MEM) nm_prinf("active = %d", nmd->active); } struct netmap_mem_ops netmap_mem_global_ops = { .nmd_get_lut = netmap_mem2_get_lut, .nmd_get_info = netmap_mem2_get_info, .nmd_ofstophys = netmap_mem2_ofstophys, .nmd_config = netmap_mem2_config, .nmd_finalize = netmap_mem2_finalize, .nmd_deref = netmap_mem2_deref, .nmd_delete = netmap_mem2_delete, .nmd_if_offset = netmap_mem2_if_offset, .nmd_if_new = netmap_mem2_if_new, .nmd_if_delete = netmap_mem2_if_delete, .nmd_rings_create = netmap_mem2_rings_create, .nmd_rings_delete = netmap_mem2_rings_delete }; int netmap_mem_pools_info_get(struct nmreq_pools_info *req, struct netmap_mem_d *nmd) { int ret; ret = netmap_mem_get_info(nmd, &req->nr_memsize, NULL, &req->nr_mem_id); if (ret) { return ret; } NMA_LOCK(nmd); req->nr_if_pool_offset = 0; req->nr_if_pool_objtotal = nmd->pools[NETMAP_IF_POOL].objtotal; req->nr_if_pool_objsize = nmd->pools[NETMAP_IF_POOL]._objsize; req->nr_ring_pool_offset = nmd->pools[NETMAP_IF_POOL].memtotal; req->nr_ring_pool_objtotal = nmd->pools[NETMAP_RING_POOL].objtotal; req->nr_ring_pool_objsize = nmd->pools[NETMAP_RING_POOL]._objsize; req->nr_buf_pool_offset = nmd->pools[NETMAP_IF_POOL].memtotal + nmd->pools[NETMAP_RING_POOL].memtotal; req->nr_buf_pool_objtotal = nmd->pools[NETMAP_BUF_POOL].objtotal; req->nr_buf_pool_objsize = nmd->pools[NETMAP_BUF_POOL]._objsize; NMA_UNLOCK(nmd); return 0; } #ifdef WITH_EXTMEM struct netmap_mem_ext { struct netmap_mem_d up; struct nm_os_extmem *os; struct netmap_mem_ext *next, *prev; }; /* call with nm_mem_list_lock held */ static void netmap_mem_ext_register(struct netmap_mem_ext *e) { NM_MTX_LOCK(nm_mem_ext_list_lock); if (netmap_mem_ext_list) netmap_mem_ext_list->prev = e; e->next = netmap_mem_ext_list; netmap_mem_ext_list = e; e->prev = NULL; NM_MTX_UNLOCK(nm_mem_ext_list_lock); } /* call with nm_mem_list_lock held */ static void netmap_mem_ext_unregister(struct netmap_mem_ext *e) { if (e->prev) e->prev->next = e->next; else netmap_mem_ext_list = e->next; if (e->next) e->next->prev = e->prev; e->prev = e->next = NULL; } static struct netmap_mem_ext * netmap_mem_ext_search(struct nm_os_extmem *os) { struct netmap_mem_ext *e; NM_MTX_LOCK(nm_mem_ext_list_lock); for (e = netmap_mem_ext_list; e; e = e->next) { if (nm_os_extmem_isequal(e->os, os)) { netmap_mem_get(&e->up); break; } } NM_MTX_UNLOCK(nm_mem_ext_list_lock); return e; } static void netmap_mem_ext_delete(struct netmap_mem_d *d) { int i; struct netmap_mem_ext *e = (struct netmap_mem_ext *)d; netmap_mem_ext_unregister(e); for (i = 0; i < NETMAP_POOLS_NR; i++) { struct netmap_obj_pool *p = &d->pools[i]; if (p->lut) { nm_free_lut(p->lut, p->objtotal); p->lut = NULL; } } if (e->os) nm_os_extmem_delete(e->os); netmap_mem2_delete(d); } static int netmap_mem_ext_config(struct netmap_mem_d *nmd) { return 0; } struct netmap_mem_ops netmap_mem_ext_ops = { .nmd_get_lut = netmap_mem2_get_lut, .nmd_get_info = netmap_mem2_get_info, .nmd_ofstophys = netmap_mem2_ofstophys, 
.nmd_config = netmap_mem_ext_config, .nmd_finalize = netmap_mem2_finalize, .nmd_deref = netmap_mem2_deref, .nmd_delete = netmap_mem_ext_delete, .nmd_if_offset = netmap_mem2_if_offset, .nmd_if_new = netmap_mem2_if_new, .nmd_if_delete = netmap_mem2_if_delete, .nmd_rings_create = netmap_mem2_rings_create, .nmd_rings_delete = netmap_mem2_rings_delete }; struct netmap_mem_d * netmap_mem_ext_create(uint64_t usrptr, struct nmreq_pools_info *pi, int *perror) { int error = 0; int i, j; struct netmap_mem_ext *nme; char *clust; size_t off; struct nm_os_extmem *os = NULL; int nr_pages; // XXX sanity checks if (pi->nr_if_pool_objtotal == 0) pi->nr_if_pool_objtotal = netmap_min_priv_params[NETMAP_IF_POOL].num; if (pi->nr_if_pool_objsize == 0) pi->nr_if_pool_objsize = netmap_min_priv_params[NETMAP_IF_POOL].size; if (pi->nr_ring_pool_objtotal == 0) pi->nr_ring_pool_objtotal = netmap_min_priv_params[NETMAP_RING_POOL].num; if (pi->nr_ring_pool_objsize == 0) pi->nr_ring_pool_objsize = netmap_min_priv_params[NETMAP_RING_POOL].size; if (pi->nr_buf_pool_objtotal == 0) pi->nr_buf_pool_objtotal = netmap_min_priv_params[NETMAP_BUF_POOL].num; if (pi->nr_buf_pool_objsize == 0) pi->nr_buf_pool_objsize = netmap_min_priv_params[NETMAP_BUF_POOL].size; if (netmap_verbose & NM_DEBUG_MEM) nm_prinf("if %d %d ring %d %d buf %d %d", pi->nr_if_pool_objtotal, pi->nr_if_pool_objsize, pi->nr_ring_pool_objtotal, pi->nr_ring_pool_objsize, pi->nr_buf_pool_objtotal, pi->nr_buf_pool_objsize); os = nm_os_extmem_create(usrptr, pi, &error); if (os == NULL) { nm_prerr("os extmem creation failed"); goto out; } nme = netmap_mem_ext_search(os); if (nme) { nm_os_extmem_delete(os); return &nme->up; } if (netmap_verbose & NM_DEBUG_MEM) nm_prinf("not found, creating new"); nme = _netmap_mem_private_new(sizeof(*nme), (struct netmap_obj_params[]){ { pi->nr_if_pool_objsize, pi->nr_if_pool_objtotal }, { pi->nr_ring_pool_objsize, pi->nr_ring_pool_objtotal }, { pi->nr_buf_pool_objsize, pi->nr_buf_pool_objtotal }}, &netmap_mem_ext_ops, &error); if (nme == NULL) goto out_unmap; nr_pages = nm_os_extmem_nr_pages(os); /* from now on pages will be released by nme destructor; * we let res = 0 to prevent release in out_unmap below */ nme->os = os; os = NULL; /* pass ownership */ clust = nm_os_extmem_nextpage(nme->os); off = 0; for (i = 0; i < NETMAP_POOLS_NR; i++) { struct netmap_obj_pool *p = &nme->up.pools[i]; struct netmap_obj_params *o = &nme->up.params[i]; p->_objsize = o->size; p->_clustsize = o->size; p->_clustentries = 1; p->lut = nm_alloc_lut(o->num); if (p->lut == NULL) { error = ENOMEM; goto out_delete; } p->bitmap_slots = (o->num + sizeof(uint32_t) - 1) / sizeof(uint32_t); p->invalid_bitmap = nm_os_malloc(sizeof(uint32_t) * p->bitmap_slots); if (p->invalid_bitmap == NULL) { error = ENOMEM; goto out_delete; } if (nr_pages == 0) { p->objtotal = 0; p->memtotal = 0; p->objfree = 0; continue; } for (j = 0; j < o->num && nr_pages > 0; j++) { size_t noff; p->lut[j].vaddr = clust + off; #if !defined(linux) && !defined(_WIN32) p->lut[j].paddr = vtophys(p->lut[j].vaddr); #endif nm_prdis("%s %d at %p", p->name, j, p->lut[j].vaddr); noff = off + p->_objsize; if (noff < PAGE_SIZE) { off = noff; continue; } nm_prdis("too big, recomputing offset..."); while (noff >= PAGE_SIZE) { char *old_clust = clust; noff -= PAGE_SIZE; clust = nm_os_extmem_nextpage(nme->os); nr_pages--; nm_prdis("noff %zu page %p nr_pages %d", noff, page_to_virt(*pages), nr_pages); if (noff > 0 && !nm_isset(p->invalid_bitmap, j) && (nr_pages == 0 || old_clust + PAGE_SIZE != clust)) { /* out 
of space or non contiguous, * drop this object * */ p->invalid_bitmap[ (j>>5) ] |= 1U << (j & 31U); nm_prdis("non contiguous at off %zu, drop", noff); } if (nr_pages == 0) break; } off = noff; } p->objtotal = j; p->numclusters = p->objtotal; p->memtotal = j * p->_objsize; nm_prdis("%d memtotal %u", j, p->memtotal); } netmap_mem_ext_register(nme); return &nme->up; out_delete: netmap_mem_put(&nme->up); out_unmap: if (os) nm_os_extmem_delete(os); out: if (perror) *perror = error; return NULL; } #endif /* WITH_EXTMEM */ #ifdef WITH_PTNETMAP struct mem_pt_if { struct mem_pt_if *next; struct ifnet *ifp; unsigned int nifp_offset; }; /* Netmap allocator for ptnetmap guests. */ struct netmap_mem_ptg { struct netmap_mem_d up; vm_paddr_t nm_paddr; /* physical address in the guest */ void *nm_addr; /* virtual address in the guest */ struct netmap_lut buf_lut; /* lookup table for BUF pool in the guest */ nm_memid_t host_mem_id; /* allocator identifier in the host */ struct ptnetmap_memdev *ptn_dev;/* ptnetmap memdev */ struct mem_pt_if *pt_ifs; /* list of interfaces in passthrough */ }; /* Link a passthrough interface to a passthrough netmap allocator. */ static int netmap_mem_pt_guest_ifp_add(struct netmap_mem_d *nmd, struct ifnet *ifp, unsigned int nifp_offset) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; struct mem_pt_if *ptif = nm_os_malloc(sizeof(*ptif)); if (!ptif) { return ENOMEM; } NMA_LOCK(nmd); ptif->ifp = ifp; ptif->nifp_offset = nifp_offset; if (ptnmd->pt_ifs) { ptif->next = ptnmd->pt_ifs; } ptnmd->pt_ifs = ptif; NMA_UNLOCK(nmd); nm_prinf("ifp=%s,nifp_offset=%u", ptif->ifp->if_xname, ptif->nifp_offset); return 0; } /* Called with NMA_LOCK(nmd) held. */ static struct mem_pt_if * netmap_mem_pt_guest_ifp_lookup(struct netmap_mem_d *nmd, struct ifnet *ifp) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; struct mem_pt_if *curr; for (curr = ptnmd->pt_ifs; curr; curr = curr->next) { if (curr->ifp == ifp) { return curr; } } return NULL; } /* Unlink a passthrough interface from a passthrough netmap allocator. 
*/ int netmap_mem_pt_guest_ifp_del(struct netmap_mem_d *nmd, struct ifnet *ifp) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; struct mem_pt_if *prev = NULL; struct mem_pt_if *curr; int ret = -1; NMA_LOCK(nmd); for (curr = ptnmd->pt_ifs; curr; curr = curr->next) { if (curr->ifp == ifp) { if (prev) { prev->next = curr->next; } else { ptnmd->pt_ifs = curr->next; } nm_prinf("removed (ifp=%p,nifp_offset=%u)", curr->ifp, curr->nifp_offset); nm_os_free(curr); ret = 0; break; } prev = curr; } NMA_UNLOCK(nmd); return ret; } static int netmap_mem_pt_guest_get_lut(struct netmap_mem_d *nmd, struct netmap_lut *lut) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; if (!(nmd->flags & NETMAP_MEM_FINALIZED)) { return EINVAL; } *lut = ptnmd->buf_lut; return 0; } static int netmap_mem_pt_guest_get_info(struct netmap_mem_d *nmd, uint64_t *size, u_int *memflags, uint16_t *id) { int error = 0; error = nmd->ops->nmd_config(nmd); if (error) goto out; if (size) *size = nmd->nm_totalsize; if (memflags) *memflags = nmd->flags; if (id) *id = nmd->nm_id; out: return error; } static vm_paddr_t netmap_mem_pt_guest_ofstophys(struct netmap_mem_d *nmd, vm_ooffset_t off) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; vm_paddr_t paddr; /* if the offset is valid, just return csb->base_addr + off */ paddr = (vm_paddr_t)(ptnmd->nm_paddr + off); nm_prdis("off %lx padr %lx", off, (unsigned long)paddr); return paddr; } static int netmap_mem_pt_guest_config(struct netmap_mem_d *nmd) { /* nothing to do, we are configured on creation * and configuration never changes thereafter */ return 0; } static int netmap_mem_pt_guest_finalize(struct netmap_mem_d *nmd) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; uint64_t mem_size; uint32_t bufsize; uint32_t nbuffers; uint32_t poolofs; vm_paddr_t paddr; char *vaddr; int i; int error = 0; if (nmd->flags & NETMAP_MEM_FINALIZED) goto out; if (ptnmd->ptn_dev == NULL) { nm_prerr("ptnetmap memdev not attached"); error = ENOMEM; goto out; } /* Map memory through ptnetmap-memdev BAR. */ error = nm_os_pt_memdev_iomap(ptnmd->ptn_dev, &ptnmd->nm_paddr, &ptnmd->nm_addr, &mem_size); if (error) goto out; /* Initialize the lut using the information contained in the * ptnetmap memory device. */ bufsize = nm_os_pt_memdev_ioread(ptnmd->ptn_dev, PTNET_MDEV_IO_BUF_POOL_OBJSZ); nbuffers = nm_os_pt_memdev_ioread(ptnmd->ptn_dev, PTNET_MDEV_IO_BUF_POOL_OBJNUM); /* allocate the lut */ if (ptnmd->buf_lut.lut == NULL) { nm_prinf("allocating lut"); ptnmd->buf_lut.lut = nm_alloc_lut(nbuffers); if (ptnmd->buf_lut.lut == NULL) { nm_prerr("lut allocation failed"); return ENOMEM; } } /* we have physically contiguous memory mapped through PCI BAR */ poolofs = nm_os_pt_memdev_ioread(ptnmd->ptn_dev, PTNET_MDEV_IO_BUF_POOL_OFS); vaddr = (char *)(ptnmd->nm_addr) + poolofs; paddr = ptnmd->nm_paddr + poolofs; for (i = 0; i < nbuffers; i++) { ptnmd->buf_lut.lut[i].vaddr = vaddr; vaddr += bufsize; paddr += bufsize; } ptnmd->buf_lut.objtotal = nbuffers; ptnmd->buf_lut.objsize = bufsize; nmd->nm_totalsize = (unsigned int)mem_size; /* Initialize these fields as are needed by * netmap_mem_bufsize(). * XXX please improve this, why do we need this * replication? maybe we nmd->pools[] should no be * there for the guest allocator? 
*/ nmd->pools[NETMAP_BUF_POOL]._objsize = bufsize; nmd->pools[NETMAP_BUF_POOL]._objtotal = nbuffers; nmd->flags |= NETMAP_MEM_FINALIZED; out: return error; } static void netmap_mem_pt_guest_deref(struct netmap_mem_d *nmd) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; if (nmd->active == 1 && (nmd->flags & NETMAP_MEM_FINALIZED)) { nmd->flags &= ~NETMAP_MEM_FINALIZED; /* unmap ptnetmap-memdev memory */ if (ptnmd->ptn_dev) { nm_os_pt_memdev_iounmap(ptnmd->ptn_dev); } ptnmd->nm_addr = NULL; ptnmd->nm_paddr = 0; } } static ssize_t netmap_mem_pt_guest_if_offset(struct netmap_mem_d *nmd, const void *vaddr) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)nmd; return (const char *)(vaddr) - (char *)(ptnmd->nm_addr); } static void netmap_mem_pt_guest_delete(struct netmap_mem_d *nmd) { if (nmd == NULL) return; if (netmap_verbose) nm_prinf("deleting %p", nmd); if (nmd->active > 0) nm_prerr("bug: deleting mem allocator with active=%d!", nmd->active); if (netmap_verbose) nm_prinf("done deleting %p", nmd); NMA_LOCK_DESTROY(nmd); nm_os_free(nmd); } static struct netmap_if * netmap_mem_pt_guest_if_new(struct netmap_adapter *na, struct netmap_priv_d *priv) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)na->nm_mem; struct mem_pt_if *ptif; struct netmap_if *nifp = NULL; ptif = netmap_mem_pt_guest_ifp_lookup(na->nm_mem, na->ifp); if (ptif == NULL) { nm_prerr("interface %s is not in passthrough", na->name); goto out; } nifp = (struct netmap_if *)((char *)(ptnmd->nm_addr) + ptif->nifp_offset); out: return nifp; } static void netmap_mem_pt_guest_if_delete(struct netmap_adapter *na, struct netmap_if *nifp) { struct mem_pt_if *ptif; ptif = netmap_mem_pt_guest_ifp_lookup(na->nm_mem, na->ifp); if (ptif == NULL) { nm_prerr("interface %s is not in passthrough", na->name); } } static int netmap_mem_pt_guest_rings_create(struct netmap_adapter *na) { struct netmap_mem_ptg *ptnmd = (struct netmap_mem_ptg *)na->nm_mem; struct mem_pt_if *ptif; struct netmap_if *nifp; int i, error = -1; ptif = netmap_mem_pt_guest_ifp_lookup(na->nm_mem, na->ifp); if (ptif == NULL) { nm_prerr("interface %s is not in passthrough", na->name); goto out; } /* point each kring to the corresponding backend ring */ nifp = (struct netmap_if *)((char *)ptnmd->nm_addr + ptif->nifp_offset); for (i = 0; i < netmap_all_rings(na, NR_TX); i++) { struct netmap_kring *kring = na->tx_rings[i]; if (kring->ring) continue; kring->ring = (struct netmap_ring *) ((char *)nifp + nifp->ring_ofs[i]); } for (i = 0; i < netmap_all_rings(na, NR_RX); i++) { struct netmap_kring *kring = na->rx_rings[i]; if (kring->ring) continue; kring->ring = (struct netmap_ring *) ((char *)nifp + nifp->ring_ofs[netmap_all_rings(na, NR_TX) + i]); } error = 0; out: return error; } static void netmap_mem_pt_guest_rings_delete(struct netmap_adapter *na) { #if 0 enum txrx t; for_rx_tx(t) { u_int i; for (i = 0; i < nma_get_nrings(na, t) + 1; i++) { struct netmap_kring *kring = &NMR(na, t)[i]; kring->ring = NULL; } } #endif } static struct netmap_mem_ops netmap_mem_pt_guest_ops = { .nmd_get_lut = netmap_mem_pt_guest_get_lut, .nmd_get_info = netmap_mem_pt_guest_get_info, .nmd_ofstophys = netmap_mem_pt_guest_ofstophys, .nmd_config = netmap_mem_pt_guest_config, .nmd_finalize = netmap_mem_pt_guest_finalize, .nmd_deref = netmap_mem_pt_guest_deref, .nmd_if_offset = netmap_mem_pt_guest_if_offset, .nmd_delete = netmap_mem_pt_guest_delete, .nmd_if_new = netmap_mem_pt_guest_if_new, .nmd_if_delete = netmap_mem_pt_guest_if_delete, .nmd_rings_create = 
netmap_mem_pt_guest_rings_create, .nmd_rings_delete = netmap_mem_pt_guest_rings_delete }; /* Called with nm_mem_list_lock held. */ static struct netmap_mem_d * netmap_mem_pt_guest_find_memid(nm_memid_t mem_id) { struct netmap_mem_d *mem = NULL; struct netmap_mem_d *scan = netmap_last_mem_d; do { /* find ptnetmap allocator through host ID */ if (scan->ops->nmd_deref == netmap_mem_pt_guest_deref && ((struct netmap_mem_ptg *)(scan))->host_mem_id == mem_id) { mem = scan; mem->refcount++; NM_DBG_REFC(mem, __FUNCTION__, __LINE__); break; } scan = scan->next; } while (scan != netmap_last_mem_d); return mem; } /* Called with nm_mem_list_lock held. */ static struct netmap_mem_d * netmap_mem_pt_guest_create(nm_memid_t mem_id) { struct netmap_mem_ptg *ptnmd; int err = 0; ptnmd = nm_os_malloc(sizeof(struct netmap_mem_ptg)); if (ptnmd == NULL) { err = ENOMEM; goto error; } ptnmd->up.ops = &netmap_mem_pt_guest_ops; ptnmd->host_mem_id = mem_id; ptnmd->pt_ifs = NULL; /* Assign new id in the guest (We have the lock) */ err = nm_mem_assign_id_locked(&ptnmd->up); if (err) goto error; ptnmd->up.flags &= ~NETMAP_MEM_FINALIZED; ptnmd->up.flags |= NETMAP_MEM_IO; NMA_LOCK_INIT(&ptnmd->up); snprintf(ptnmd->up.name, NM_MEM_NAMESZ, "%d", ptnmd->up.nm_id); return &ptnmd->up; error: netmap_mem_pt_guest_delete(&ptnmd->up); return NULL; } /* * find host id in guest allocators and create guest allocator * if it is not there */ static struct netmap_mem_d * netmap_mem_pt_guest_get(nm_memid_t mem_id) { struct netmap_mem_d *nmd; NM_MTX_LOCK(nm_mem_list_lock); nmd = netmap_mem_pt_guest_find_memid(mem_id); if (nmd == NULL) { nmd = netmap_mem_pt_guest_create(mem_id); } NM_MTX_UNLOCK(nm_mem_list_lock); return nmd; } /* * The guest allocator can be created by ptnetmap_memdev (during the device * attach) or by ptnetmap device (ptnet), during the netmap_attach. * * The order is not important (we have different order in LINUX and FreeBSD). * The first one, creates the device, and the second one simply attaches it. */ /* Called when ptnetmap_memdev is attaching, to attach a new allocator in * the guest */ struct netmap_mem_d * netmap_mem_pt_guest_attach(struct ptnetmap_memdev *ptn_dev, nm_memid_t mem_id) { struct netmap_mem_d *nmd; struct netmap_mem_ptg *ptnmd; nmd = netmap_mem_pt_guest_get(mem_id); /* assign this device to the guest allocator */ if (nmd) { ptnmd = (struct netmap_mem_ptg *)nmd; ptnmd->ptn_dev = ptn_dev; } return nmd; } /* Called when ptnet device is attaching */ struct netmap_mem_d * netmap_mem_pt_guest_new(struct ifnet *ifp, unsigned int nifp_offset, unsigned int memid) { struct netmap_mem_d *nmd; if (ifp == NULL) { return NULL; } nmd = netmap_mem_pt_guest_get((nm_memid_t)memid); if (nmd) { netmap_mem_pt_guest_ifp_add(nmd, ifp, nifp_offset); } return nmd; } #endif /* WITH_PTNETMAP */ Index: head/sys/net/netmap.h =================================================================== --- head/sys/net/netmap.h (revision 345268) +++ head/sys/net/netmap.h (revision 345269) @@ -1,928 +1,934 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``S IS''AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * Definitions of constants and the structures used by the netmap * framework, for the part visible to both kernel and userspace. * Detailed info on netmap is available with "man netmap" or at * * http://info.iet.unipi.it/~luigi/netmap/ * * This API is also used to communicate with the VALE software switch */ #ifndef _NET_NETMAP_H_ #define _NET_NETMAP_H_ -#define NETMAP_API 13 /* current API version */ +#define NETMAP_API 14 /* current API version */ -#define NETMAP_MIN_API 13 /* min and max versions accepted */ +#define NETMAP_MIN_API 14 /* min and max versions accepted */ #define NETMAP_MAX_API 15 /* * Some fields should be cache-aligned to reduce contention. * The alignment is architecture and OS dependent, but rather than * digging into OS headers to find the exact value we use an estimate * that should cover most architectures. */ #define NM_CACHE_ALIGN 128 /* * --- Netmap data structures --- * * The userspace data structures used by netmap are shown below. * They are allocated by the kernel and mmap()ed by userspace threads. * Pointers are implemented as memory offsets or indexes, * so that they can be easily dereferenced in kernel and userspace. 
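As a concrete illustration of the offset-based addressing described above (an editorial sketch, not part of this commit): assuming `mem' is the base address returned by mmap() and `off' is the netmap_if offset reported by the kernel at registration time, a userspace process can reach its rings and buffers with plain pointer arithmetic. The rx ring index below also reflects the layout introduced by this change (NIC tx rings first, then ni_host_tx_rings host tx rings, then the rx rings):

    struct netmap_if *nifp = (struct netmap_if *)((char *)mem + off);
    struct netmap_ring *txr =              /* first NIC tx ring */
        (struct netmap_ring *)((char *)nifp + nifp->ring_ofs[0]);
    struct netmap_ring *rxr =              /* first NIC rx ring */
        (struct netmap_ring *)((char *)nifp +
            nifp->ring_ofs[nifp->ni_tx_rings + nifp->ni_host_tx_rings]);
    /* buffer attached to the current slot of the rx ring */
    char *buf = (char *)rxr + rxr->buf_ofs +
        (size_t)rxr->slot[rxr->cur].buf_idx * rxr->nr_buf_size;
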
KERNEL (opaque, obviously) ==================================================================== - | - USERSPACE | struct netmap_ring - +---->+---------------+ - / | head,cur,tail | - struct netmap_if (nifp, 1 per fd) / | buf_ofs | - +---------------+ / | other fields | - | ni_tx_rings | / +===============+ - | ni_rx_rings | / | buf_idx, len | slot[0] - | | / | flags, ptr | - | | / +---------------+ - +===============+ / | buf_idx, len | slot[1] - | txring_ofs[0] | (rel.to nifp)--' | flags, ptr | - | txring_ofs[1] | +---------------+ - (tx+1 entries) (num_slots entries) - | txring_ofs[t] | | buf_idx, len | slot[n-1] - +---------------+ | flags, ptr | - | rxring_ofs[0] | +---------------+ - | rxring_ofs[1] | - (rx+1 entries) - | rxring_ofs[r] | - +---------------+ + | + USERSPACE | struct netmap_ring + +---->+---------------+ + / | head,cur,tail | + struct netmap_if (nifp, 1 per fd) / | buf_ofs | + +----------------+ / | other fields | + | ni_tx_rings | / +===============+ + | ni_rx_rings | / | buf_idx, len | slot[0] + | | / | flags, ptr | + | | / +---------------+ + +================+ / | buf_idx, len | slot[1] + | txring_ofs[0] | (rel.to nifp)--' | flags, ptr | + | txring_ofs[1] | +---------------+ + (tx+htx entries) (num_slots entries) + | txring_ofs[t] | | buf_idx, len | slot[n-1] + +----------------+ | flags, ptr | + | rxring_ofs[0] | +---------------+ + | rxring_ofs[1] | + (rx+hrx entries) + | rxring_ofs[r] | + +----------------+ * For each "interface" (NIC, host stack, PIPE, VALE switch port) bound to * a file descriptor, the mmap()ed region contains a (logically readonly) * struct netmap_if pointing to struct netmap_ring's. * - * There is one netmap_ring per physical NIC ring, plus one tx/rx ring - * pair attached to the host stack (this pair is unused for non-NIC ports). + * There is one netmap_ring per physical NIC ring, plus at least one tx/rx ring + * pair attached to the host stack (these pairs are unused for non-NIC ports). * * All physical/host stack ports share the same memory region, * so that zero-copy can be implemented between them. * VALE switch ports instead have separate memory regions. * * The netmap_ring is the userspace-visible replica of the NIC ring. * Each slot has the index of a buffer (MTU-sized and residing in the * mmapped region), its length and some flags. An extra 64-bit pointer * is provided for user-supplied buffers in the tx path. * * In user space, the buffer address is computed as * (char *)ring + buf_ofs + index * NETMAP_BUF_SIZE * * Added in NETMAP_API 11: * * + NIOCREGIF can request the allocation of extra spare buffers from * the same memory pool. The desired number of buffers must be in * nr_arg3. The ioctl may return fewer buffers, depending on memory * availability. nr_arg3 will return the actual value, and, once * mapped, nifp->ni_bufs_head will be the index of the first buffer. * * The buffers are linked to each other using the first uint32_t * as the index. On close, ni_bufs_head must point to the list of * buffers to be released. * - * + NIOCREGIF can request space for extra rings (and buffers) - * allocated in the same memory space. The number of extra rings - * is in nr_arg1, and is advisory. This is a no-op on NICs where - * the size of the memory space is fixed. - * * + NIOCREGIF can attach to PIPE rings sharing the same memory * space with a parent device. The ifname indicates the parent device, * which must already exist. 
Flags in nr_flags indicate if we want to * bind the master or slave side, the index (from nr_ringid) * is just a cookie and does not need to be sequential. * * + NIOCREGIF can also attach to 'monitor' rings that replicate * the content of specific rings, also from the same memory space. * * Extra flags in nr_flags support the above functions. * Application libraries may use the following naming scheme: - * netmap:foo all NIC ring pairs - * netmap:foo^ only host ring pair - * netmap:foo+ all NIC ring + host ring pairs - * netmap:foo-k the k-th NIC ring pair - * netmap:foo{k PIPE ring pair k, master side - * netmap:foo}k PIPE ring pair k, slave side + * netmap:foo all NIC rings pairs + * netmap:foo^ only host rings pairs + * netmap:foo^k the k-th host rings pair + * netmap:foo+ all NIC rings + host rings pairs + * netmap:foo-k the k-th NIC rings pair + * netmap:foo{k PIPE rings pair k, master side + * netmap:foo}k PIPE rings pair k, slave side * * Some notes about host rings: * - * + The RX host ring is used to store those packets that the host network + * + The RX host rings are used to store those packets that the host network * stack is trying to transmit through a NIC queue, but only if that queue * is currently in netmap mode. Netmap will not intercept host stack mbufs * designated to NIC queues that are not in netmap mode. As a consequence, * registering a netmap port with netmap:foo^ is not enough to intercept - * mbufs in the RX host ring; the netmap port should be registered with + * mbufs in the RX host rings; the netmap port should be registered with * netmap:foo*, or another registration should be done to open at least a * NIC TX queue in netmap mode. * * + Netmap is not currently able to deal with intercepted trasmit mbufs which * require offloadings like TSO, UFO, checksumming offloadings, etc. It is * responsibility of the user to disable those offloadings (e.g. using * ifconfig on FreeBSD or ethtool -K on Linux) for an interface that is being * used in netmap mode. If the offloadings are not disabled, GSO and/or * unchecksummed packets may be dropped immediately or end up in the host RX - * ring, and will be dropped as soon as the packet reaches another netmap + * rings, and will be dropped as soon as the packet reaches another netmap * adapter. */ /* * struct netmap_slot is a buffer descriptor */ struct netmap_slot { uint32_t buf_idx; /* buffer index */ uint16_t len; /* length for this slot */ uint16_t flags; /* buf changed, etc. */ uint64_t ptr; /* pointer for indirect buffers */ }; /* * The following flags control how the slot is used */ #define NS_BUF_CHANGED 0x0001 /* buf_idx changed */ /* * must be set whenever buf_idx is changed (as it might be * necessary to recompute the physical address and mapping) * * It is also set by the kernel whenever the buf_idx is * changed internally (e.g., by pipes). Applications may * use this information to know when they can reuse the * contents of previously prepared buffers. */ #define NS_REPORT 0x0002 /* ask the hardware to report results */ /* * Request notification when slot is used by the hardware. * Normally transmit completions are handled lazily and * may be unreported. This flag lets us know when a slot * has been sent (e.g. to terminate the sender). */ #define NS_FORWARD 0x0004 /* pass packet 'forward' */ /* * (Only for physical ports, rx rings with NR_FORWARD set). * Slot released to the kernel (i.e. 
before ring->head) with * this flag set are passed to the peer ring (host/NIC), * thus restoring the host-NIC connection for these slots. * This supports efficient traffic monitoring or firewalling. */ #define NS_NO_LEARN 0x0008 /* disable bridge learning */ /* * On a VALE switch, do not 'learn' the source port for * this buffer. */ #define NS_INDIRECT 0x0010 /* userspace buffer */ /* * (VALE tx rings only) data is in a userspace buffer, * whose address is in the 'ptr' field in the slot. */ #define NS_MOREFRAG 0x0020 /* packet has more fragments */ /* * (VALE ports, ptnetmap ports and some NIC ports, e.g. * ixgbe and i40e on Linux) * Set on all but the last slot of a multi-segment packet. * The 'len' field refers to the individual fragment. */ #define NS_PORT_SHIFT 8 #define NS_PORT_MASK (0xff << NS_PORT_SHIFT) /* * The high 8 bits of the flag, if not zero, indicate the * destination port for the VALE switch, overriding * the lookup table. */ #define NS_RFRAGS(_slot) ( ((_slot)->flags >> 8) & 0xff) /* * (VALE rx rings only) the high 8 bits * are the number of fragments. */ #define NETMAP_MAX_FRAGS 64 /* max number of fragments */ /* * struct netmap_ring * * Netmap representation of a TX or RX ring (also known as "queue"). * This is a queue implemented as a fixed-size circular array. * At the software level the important fields are: head, cur, tail. * * In TX rings: * * head first slot available for transmission. * cur wakeup point. select() and poll() will unblock * when 'tail' moves past 'cur' * tail (readonly) first slot reserved to the kernel * * [head .. tail-1] can be used for new packets to send; * 'head' and 'cur' must be incremented as slots are filled * with new packets to be sent; * 'cur' can be moved further ahead if we need more space * for new transmissions. XXX todo (2014-03-12) * * In RX rings: * * head first valid received packet * cur wakeup point. select() and poll() will unblock * when 'tail' moves past 'cur' * tail (readonly) first slot reserved to the kernel * * [head .. tail-1] contain received packets; * 'head' and 'cur' must be incremented as slots are consumed * and can be returned to the kernel; * 'cur' can be moved further ahead if we want to wait for * new packets without returning the previous ones. * * DATA OWNERSHIP/LOCKING: * The netmap_ring, and all slots and buffers in the range * [head .. tail-1] are owned by the user program; * the kernel only accesses them during a netmap system call * and in the user thread context. * * Other slots and buffers are reserved for use by the kernel */ struct netmap_ring { /* * buf_ofs is meant to be used through macros. * It contains the offset of the buffer region from this * descriptor. */ const int64_t buf_ofs; const uint32_t num_slots; /* number of slots in the ring. */ const uint32_t nr_buf_size; const uint16_t ringid; const uint16_t dir; /* 0: tx, 1: rx */ uint32_t head; /* (u) first user slot */ uint32_t cur; /* (u) wakeup point */ uint32_t tail; /* (k) first kernel slot */ uint32_t flags; struct timeval ts; /* (k) time of last *sync() */ /* opaque room for a mutex or similar object */ #if !defined(_WIN32) || defined(__CYGWIN__) uint8_t __attribute__((__aligned__(NM_CACHE_ALIGN))) sem[128]; #else uint8_t __declspec(align(NM_CACHE_ALIGN)) sem[128]; #endif /* the slots follow. This struct has variable size */ struct netmap_slot slot[0]; /* array of slots. */ }; /* * RING FLAGS */ #define NR_TIMESTAMP 0x0002 /* set timestamp on *sync() */ /* * updates the 'ts' field on each netmap syscall. 
This saves * saves a separate gettimeofday(), and is not much worse than * software timestamps generated in the interrupt handler. */ #define NR_FORWARD 0x0004 /* enable NS_FORWARD for ring */ /* * Enables the NS_FORWARD slot flag for the ring. */ /* * Helper functions for kernel and userspace */ /* * Check if space is available in the ring. We use ring->head, which * points to the next netmap slot to be published to netmap. It is * possible that the applications moves ring->cur ahead of ring->tail * (e.g., by setting ring->cur <== ring->tail), if it wants more slots * than the ones currently available, and it wants to be notified when * more arrive. See netmap(4) for more details and examples. */ static inline int nm_ring_empty(struct netmap_ring *ring) { return (ring->head == ring->tail); } /* * Netmap representation of an interface and its queue(s). * This is initialized by the kernel when binding a file * descriptor to a port, and should be considered as readonly * by user programs. The kernel never uses it. * * There is one netmap_if for each file descriptor on which we want * to select/poll. * select/poll operates on one or all pairs depending on the value of * nmr_queueid passed on the ioctl. */ struct netmap_if { char ni_name[IFNAMSIZ]; /* name of the interface. */ const uint32_t ni_version; /* API version, currently unused */ const uint32_t ni_flags; /* properties */ #define NI_PRIV_MEM 0x1 /* private memory region */ /* * The number of packet rings available in netmap mode. * Physical NICs can have different numbers of tx and rx rings. - * Physical NICs also have a 'host' ring pair. + * Physical NICs also have at least a 'host' rings pair. * Additionally, clients can request additional ring pairs to * be used for internal communication. */ const uint32_t ni_tx_rings; /* number of HW tx rings */ const uint32_t ni_rx_rings; /* number of HW rx rings */ uint32_t ni_bufs_head; /* head index for extra bufs */ - uint32_t ni_spare1[5]; + const uint32_t ni_host_tx_rings; /* number of SW tx rings */ + const uint32_t ni_host_rx_rings; /* number of SW rx rings */ + uint32_t ni_spare1[3]; /* * The following array contains the offset of each netmap ring * from this structure, in the following order: - * NIC tx rings (ni_tx_rings); host tx ring (1); extra tx rings; - * NIC rx rings (ni_rx_rings); host tx ring (1); extra rx rings. + * - NIC tx rings (ni_tx_rings); + * - host tx rings (ni_host_tx_rings); + * - NIC rx rings (ni_rx_rings); + * - host rx ring (ni_host_rx_rings); * - * The area is filled up by the kernel on NIOCREGIF, + * The area is filled up by the kernel on NETMAP_REQ_REGISTER, * and then only read by userspace code. */ const ssize_t ring_ofs[0]; }; /* Legacy interface to interact with a netmap control device. * Included for backward compatibility. The user should not include this * file directly. */ #include "netmap_legacy.h" /* * New API to control netmap control devices. New applications should only use * nmreq_xyz structs with the NIOCCTRL ioctl() command. * * NIOCCTRL takes a nmreq_header struct, which contains the required * API version, the name of a netmap port, a command type, and pointers * to request body and options. * * nr_name (in) * The name of the port (em0, valeXXX:YYY, eth0{pn1 etc.) * * nr_version (in/out) * Must match NETMAP_API as used in the kernel, error otherwise. * Always returns the desired value on output. 
* * nr_reqtype (in) * One of the NETMAP_REQ_* command types below * * nr_body (in) * Pointer to a command-specific struct, described by one * of the struct nmreq_xyz below. * * nr_options (in) * Command specific options, if any. * * A NETMAP_REQ_REGISTER command activates netmap mode on the netmap * port (e.g. physical interface) specified by nmreq_header.nr_name. * The request body (struct nmreq_register) has several arguments to * specify how the port is to be registered. * - * nr_tx_slots, nr_tx_slots, nr_tx_rings, nr_rx_rings (in/out) + * nr_tx_slots, nr_tx_slots, nr_tx_rings, nr_rx_rings, + * nr_host_tx_rings, nr_host_rx_rings (in/out) * On input, non-zero values may be used to reconfigure the port * according to the requested values, but this is not guaranteed. * On output the actual values in use are reported. * * nr_mode (in) * Indicate what set of rings must be bound to the netmap * device (e.g. all NIC rings, host rings only, NIC and * host rings, ...). Values are in NR_REG_*. * * nr_ringid (in) * If nr_mode == NR_REG_ONE_NIC (only a single couple of TX/RX * rings), indicate which NIC TX and/or RX ring is to be bound * (0..nr_*x_rings-1). * * nr_flags (in) * Indicate special options for how to open the port. * * NR_NO_TX_POLL can be OR-ed to make select()/poll() push * packets on tx rings only if POLLOUT is set. * The default is to push any pending packet. * * NR_DO_RX_POLL can be OR-ed to make select()/poll() release * packets on rx rings also when POLLIN is NOT set. * The default is to touch the rx ring only with POLLIN. * Note that this is the opposite of TX because it * reflects the common usage. * * Other options are NR_MONITOR_TX, NR_MONITOR_RX, NR_ZCOPY_MON, * NR_EXCLUSIVE, NR_RX_RINGS_ONLY, NR_TX_RINGS_ONLY and * NR_ACCEPT_VNET_HDR. * * nr_mem_id (in/out) * The identity of the memory region used. * On input, 0 means the system decides autonomously, * other values may try to select a specific region. * On return the actual value is reported. * Region '1' is the global allocator, normally shared * by all interfaces. Other values are private regions. * If two ports the same region zero-copy is possible. * * nr_extra_bufs (in/out) * Number of extra buffers to be allocated. * * The other NETMAP_REQ_* commands are described below. * */ /* maximum size of a request, including all options */ #define NETMAP_REQ_MAXSIZE 4096 /* Header common to all request options. */ struct nmreq_option { /* Pointer ot the next option. */ uint64_t nro_next; /* Option type. */ uint32_t nro_reqtype; /* (out) status of the option: * 0: recognized and processed * !=0: errno value */ uint32_t nro_status; /* Option size, used only for options that can have variable size * (e.g. because they contain arrays). For fixed-size options this * field should be set to zero. */ uint64_t nro_size; }; /* Header common to all requests. Do not reorder these fields, as we need * the second one (nr_reqtype) to know how much to copy from/to userspace. */ struct nmreq_header { uint16_t nr_version; /* API version */ uint16_t nr_reqtype; /* nmreq type (NETMAP_REQ_*) */ uint32_t nr_reserved; /* must be zero */ #define NETMAP_REQ_IFNAMSIZ 64 char nr_name[NETMAP_REQ_IFNAMSIZ]; /* port name */ uint64_t nr_options; /* command-specific options */ uint64_t nr_body; /* ptr to nmreq_xyz struct */ }; enum { /* Register a netmap port with the device. */ NETMAP_REQ_REGISTER = 1, /* Get information from a netmap port. */ NETMAP_REQ_PORT_INFO_GET, /* Attach a netmap port to a VALE switch. 
*/ NETMAP_REQ_VALE_ATTACH, /* Detach a netmap port from a VALE switch. */ NETMAP_REQ_VALE_DETACH, /* List the ports attached to a VALE switch. */ NETMAP_REQ_VALE_LIST, /* Set the port header length (was virtio-net header length). */ NETMAP_REQ_PORT_HDR_SET, /* Get the port header length (was virtio-net header length). */ NETMAP_REQ_PORT_HDR_GET, /* Create a new persistent VALE port. */ NETMAP_REQ_VALE_NEWIF, /* Delete a persistent VALE port. */ NETMAP_REQ_VALE_DELIF, /* Enable polling kernel thread(s) on an attached VALE port. */ NETMAP_REQ_VALE_POLLING_ENABLE, /* Disable polling kernel thread(s) on an attached VALE port. */ NETMAP_REQ_VALE_POLLING_DISABLE, /* Get info about the pools of a memory allocator. */ NETMAP_REQ_POOLS_INFO_GET, /* Start an in-kernel loop that syncs the rings periodically or * on notifications. The loop runs in the context of the ioctl * syscall, and only stops on NETMAP_REQ_SYNC_KLOOP_STOP. */ NETMAP_REQ_SYNC_KLOOP_START, /* Stops the thread executing the in-kernel loop. The thread * returns from the ioctl syscall. */ NETMAP_REQ_SYNC_KLOOP_STOP, /* Enable CSB mode on a registered netmap control device. */ NETMAP_REQ_CSB_ENABLE, }; enum { /* On NETMAP_REQ_REGISTER, ask netmap to use memory allocated * from user-space allocated memory pools (e.g. hugepages). */ NETMAP_REQ_OPT_EXTMEM = 1, /* ON NETMAP_REQ_SYNC_KLOOP_START, ask netmap to use eventfd-based * notifications to synchronize the kernel loop with the application. */ NETMAP_REQ_OPT_SYNC_KLOOP_EVENTFDS, /* On NETMAP_REQ_REGISTER, ask netmap to work in CSB mode, where * head, cur and tail pointers are not exchanged through the * struct netmap_ring header, but rather using an user-provided * memory area (see struct nm_csb_atok and struct nm_csb_ktoa). */ NETMAP_REQ_OPT_CSB, /* An extension to NETMAP_REQ_OPT_SYNC_KLOOP_EVENTFDS, which specifies * if the TX and/or RX rings are synced in the context of the VM exit. * This requires the 'ioeventfd' fields to be valid (cannot be < 0). */ NETMAP_REQ_OPT_SYNC_KLOOP_MODE, }; /* * nr_reqtype: NETMAP_REQ_REGISTER * Bind (register) a netmap port to this control device. */ struct nmreq_register { uint64_t nr_offset; /* nifp offset in the shared region */ uint64_t nr_memsize; /* size of the shared region */ uint32_t nr_tx_slots; /* slots in tx rings */ uint32_t nr_rx_slots; /* slots in rx rings */ uint16_t nr_tx_rings; /* number of tx rings */ uint16_t nr_rx_rings; /* number of rx rings */ + uint16_t nr_host_tx_rings; /* number of host tx rings */ + uint16_t nr_host_rx_rings; /* number of host rx rings */ uint16_t nr_mem_id; /* id of the memory allocator */ uint16_t nr_ringid; /* ring(s) we care about */ uint32_t nr_mode; /* specify NR_REG_* modes */ uint32_t nr_extra_bufs; /* number of requested extra buffers */ uint64_t nr_flags; /* additional flags (see below) */ /* monitors use nr_ringid and nr_mode to select the rings to monitor */ #define NR_MONITOR_TX 0x100 #define NR_MONITOR_RX 0x200 #define NR_ZCOPY_MON 0x400 /* request exclusive access to the selected rings */ #define NR_EXCLUSIVE 0x800 /* 0x1000 unused */ #define NR_RX_RINGS_ONLY 0x2000 #define NR_TX_RINGS_ONLY 0x4000 /* Applications set this flag if they are able to deal with virtio-net headers, * that is send/receive frames that start with a virtio-net header. - * If not set, NIOCREGIF will fail with netmap ports that require applications - * to use those headers. If the flag is set, the application can use the - * NETMAP_VNET_HDR_GET command to figure out the header length. 
*/ + * If not set, NETMAP_REQ_REGISTER will fail with netmap ports that require + * applications to use those headers. If the flag is set, the application can + * use the NETMAP_VNET_HDR_GET command to figure out the header length. */ #define NR_ACCEPT_VNET_HDR 0x8000 /* The following two have the same meaning of NETMAP_NO_TX_POLL and * NETMAP_DO_RX_POLL. */ #define NR_DO_RX_POLL 0x10000 #define NR_NO_TX_POLL 0x20000 }; /* Valid values for nmreq_register.nr_mode (see above). */ enum { NR_REG_DEFAULT = 0, /* backward compat, should not be used. */ NR_REG_ALL_NIC = 1, NR_REG_SW = 2, NR_REG_NIC_SW = 3, NR_REG_ONE_NIC = 4, NR_REG_PIPE_MASTER = 5, /* deprecated, use "x{y" port name syntax */ NR_REG_PIPE_SLAVE = 6, /* deprecated, use "x}y" port name syntax */ NR_REG_NULL = 7, + NR_REG_ONE_SW = 8, }; /* A single ioctl number is shared by all the new API command. * Demultiplexing is done using the hdr.nr_reqtype field. * FreeBSD uses the size value embedded in the _IOWR to determine * how much to copy in/out, so we define the ioctl() command * specifying only nmreq_header, and copyin/copyout the rest. */ #define NIOCCTRL _IOWR('i', 151, struct nmreq_header) /* The ioctl commands to sync TX/RX netmap rings. * NIOCTXSYNC, NIOCRXSYNC synchronize tx or rx queues, - * whose identity is set in NIOCREGIF through nr_ringid. + * whose identity is set in NETMAP_REQ_REGISTER through nr_ringid. * These are non blocking and take no argument. */ #define NIOCTXSYNC _IO('i', 148) /* sync tx queues */ #define NIOCRXSYNC _IO('i', 149) /* sync rx queues */ /* * nr_reqtype: NETMAP_REQ_PORT_INFO_GET * Get information about a netmap port, including number of rings. * slots per ring, id of the memory allocator, etc. The netmap * control device used for this operation does not need to be bound * to a netmap port. */ struct nmreq_port_info_get { uint64_t nr_memsize; /* size of the shared region */ uint32_t nr_tx_slots; /* slots in tx rings */ uint32_t nr_rx_slots; /* slots in rx rings */ uint16_t nr_tx_rings; /* number of tx rings */ uint16_t nr_rx_rings; /* number of rx rings */ + uint16_t nr_host_tx_rings; /* number of host tx rings */ + uint16_t nr_host_rx_rings; /* number of host rx rings */ uint16_t nr_mem_id; /* memory allocator id (in/out) */ - uint16_t pad1; + uint16_t pad[3]; }; #define NM_BDG_NAME "vale" /* prefix for bridge port name */ /* * nr_reqtype: NETMAP_REQ_VALE_ATTACH * Attach a netmap port to a VALE switch. Both the name of the netmap * port and the VALE switch are specified through the nr_name argument. * The attach operation could need to register a port, so at least * the same arguments are available. * port_index will contain the index where the port has been attached. */ struct nmreq_vale_attach { struct nmreq_register reg; uint32_t port_index; uint32_t pad1; }; /* * nr_reqtype: NETMAP_REQ_VALE_DETACH * Detach a netmap port from a VALE switch. Both the name of the netmap * port and the VALE switch are specified through the nr_name argument. * port_index will contain the index where the port was attached. */ struct nmreq_vale_detach { uint32_t port_index; uint32_t pad1; }; /* * nr_reqtype: NETMAP_REQ_VALE_LIST * List the ports of a VALE switch. */ struct nmreq_vale_list { /* Name of the VALE port (valeXXX:YYY) or empty. */ uint16_t nr_bridge_idx; uint16_t pad1; uint32_t nr_port_idx; }; /* * nr_reqtype: NETMAP_REQ_PORT_HDR_SET or NETMAP_REQ_PORT_HDR_GET * Set or get the port header length of the port identified by hdr.nr_name. * The control device does not need to be bound to a netmap port. 
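Before moving on, a minimal editorial sketch (not part of this commit) of the NETMAP_REQ_REGISTER flow described earlier in this header. The port name "em0", the helper name and the reduced error handling are illustrative assumptions only:

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <net/netmap.h>

    static int
    example_register(const char *ifname)      /* e.g. "em0" */
    {
        struct nmreq_header hdr;
        struct nmreq_register reg;
        int fd = open("/dev/netmap", O_RDWR);

        if (fd < 0)
            return (-1);
        memset(&hdr, 0, sizeof(hdr));
        memset(&reg, 0, sizeof(reg));
        hdr.nr_version = NETMAP_API;
        hdr.nr_reqtype = NETMAP_REQ_REGISTER;
        strncpy(hdr.nr_name, ifname, sizeof(hdr.nr_name) - 1);
        hdr.nr_body = (uintptr_t)&reg;
        reg.nr_mode = NR_REG_ALL_NIC;          /* bind all NIC rings */
        if (ioctl(fd, NIOCCTRL, &hdr) < 0) {
            close(fd);
            return (-1);
        }
        /* On success reg.nr_offset, nr_memsize, nr_tx_rings,
         * nr_host_tx_rings, etc. report the values actually in use. */
        return (fd);
    }
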
*/ struct nmreq_port_hdr { uint32_t nr_hdr_len; uint32_t pad1; }; /* * nr_reqtype: NETMAP_REQ_VALE_NEWIF * Create a new persistent VALE port. */ struct nmreq_vale_newif { uint32_t nr_tx_slots; /* slots in tx rings */ uint32_t nr_rx_slots; /* slots in rx rings */ uint16_t nr_tx_rings; /* number of tx rings */ uint16_t nr_rx_rings; /* number of rx rings */ uint16_t nr_mem_id; /* id of the memory allocator */ uint16_t pad1; }; /* * nr_reqtype: NETMAP_REQ_VALE_POLLING_ENABLE or NETMAP_REQ_VALE_POLLING_DISABLE * Enable or disable polling kthreads on a VALE port. */ struct nmreq_vale_polling { uint32_t nr_mode; #define NETMAP_POLLING_MODE_SINGLE_CPU 1 #define NETMAP_POLLING_MODE_MULTI_CPU 2 uint32_t nr_first_cpu_id; uint32_t nr_num_polling_cpus; uint32_t pad1; }; /* * nr_reqtype: NETMAP_REQ_POOLS_INFO_GET * Get info about the pools of the memory allocator of the netmap * port specified by hdr.nr_name and nr_mem_id. The netmap control * device used for this operation does not need to be bound to a netmap * port. */ struct nmreq_pools_info { uint64_t nr_memsize; uint16_t nr_mem_id; /* in/out argument */ uint16_t pad1[3]; uint64_t nr_if_pool_offset; uint32_t nr_if_pool_objtotal; uint32_t nr_if_pool_objsize; uint64_t nr_ring_pool_offset; uint32_t nr_ring_pool_objtotal; uint32_t nr_ring_pool_objsize; uint64_t nr_buf_pool_offset; uint32_t nr_buf_pool_objtotal; uint32_t nr_buf_pool_objsize; }; /* * nr_reqtype: NETMAP_REQ_SYNC_KLOOP_START * Start an in-kernel loop that syncs the rings periodically or on * notifications. The loop runs in the context of the ioctl syscall, * and only stops on NETMAP_REQ_SYNC_KLOOP_STOP. * The registered netmap port must be open in CSB mode. */ struct nmreq_sync_kloop_start { /* Sleeping is the default synchronization method for the kloop. * The 'sleep_us' field specifies how many microsconds to sleep for * when there is no work to do, before doing another kloop iteration. */ uint32_t sleep_us; uint32_t pad1; }; /* A CSB entry for the application --> kernel direction. */ struct nm_csb_atok { uint32_t head; /* AW+ KR+ the head of the appl netmap_ring */ uint32_t cur; /* AW+ KR+ the cur of the appl netmap_ring */ uint32_t appl_need_kick; /* AW+ KR+ kern --> appl notification enable */ uint32_t sync_flags; /* AW+ KR+ the flags of the appl [tx|rx]sync() */ uint32_t pad[12]; /* pad to a 64 bytes cacheline */ }; /* A CSB entry for the application <-- kernel direction. */ struct nm_csb_ktoa { uint32_t hwcur; /* AR+ KW+ the hwcur of the kern netmap_kring */ uint32_t hwtail; /* AR+ KW+ the hwtail of the kern netmap_kring */ uint32_t kern_need_kick; /* AR+ KW+ appl-->kern notification enable */ uint32_t pad[13]; }; #ifdef __linux__ #ifdef __KERNEL__ #define nm_stst_barrier smp_wmb #define nm_ldld_barrier smp_rmb #define nm_stld_barrier smp_mb #else /* !__KERNEL__ */ static inline void nm_stst_barrier(void) { /* A memory barrier with release semantic has the combined * effect of a store-store barrier and a load-store barrier, * which is fine for us. */ __atomic_thread_fence(__ATOMIC_RELEASE); } static inline void nm_ldld_barrier(void) { /* A memory barrier with acquire semantic has the combined * effect of a load-load barrier and a store-load barrier, * which is fine for us. 
*/ __atomic_thread_fence(__ATOMIC_ACQUIRE); } #endif /* !__KERNEL__ */ #elif defined(__FreeBSD__) #ifdef _KERNEL #define nm_stst_barrier atomic_thread_fence_rel #define nm_ldld_barrier atomic_thread_fence_acq #define nm_stld_barrier atomic_thread_fence_seq_cst #else /* !_KERNEL */ #include static inline void nm_stst_barrier(void) { atomic_thread_fence(memory_order_release); } static inline void nm_ldld_barrier(void) { atomic_thread_fence(memory_order_acquire); } #endif /* !_KERNEL */ #else /* !__linux__ && !__FreeBSD__ */ #error "OS not supported" #endif /* !__linux__ && !__FreeBSD__ */ /* Application side of sync-kloop: Write ring pointers (cur, head) to the CSB. * This routine is coupled with sync_kloop_kernel_read(). */ static inline void nm_sync_kloop_appl_write(struct nm_csb_atok *atok, uint32_t cur, uint32_t head) { /* Issue a first store-store barrier to make sure writes to the * netmap ring do not overcome updates on atok->cur and atok->head. */ nm_stst_barrier(); /* * We need to write cur and head to the CSB but we cannot do it atomically. * There is no way we can prevent the host from reading the updated value * of one of the two and the old value of the other. However, if we make * sure that the host never reads a value of head more recent than the * value of cur we are safe. We can allow the host to read a value of cur * more recent than the value of head, since in the netmap ring cur can be * ahead of head and cur cannot wrap around head because it must be behind * tail. Inverting the order of writes below could instead result into the * host to think head went ahead of cur, which would cause the sync * prologue to fail. * * The following memory barrier scheme is used to make this happen: * * Guest Host * * STORE(cur) LOAD(head) * wmb() <-----------> rmb() * STORE(head) LOAD(cur) * */ atok->cur = cur; nm_stst_barrier(); atok->head = head; } /* Application side of sync-kloop: Read kring pointers (hwcur, hwtail) from * the CSB. This routine is coupled with sync_kloop_kernel_write(). */ static inline void nm_sync_kloop_appl_read(struct nm_csb_ktoa *ktoa, uint32_t *hwtail, uint32_t *hwcur) { /* * We place a memory barrier to make sure that the update of hwtail never * overtakes the update of hwcur. * (see explanation in sync_kloop_kernel_write). */ *hwtail = ktoa->hwtail; nm_ldld_barrier(); *hwcur = ktoa->hwcur; /* Make sure that loads from ktoa->hwtail and ktoa->hwcur are not delayed * after the loads from the netmap ring. */ nm_ldld_barrier(); } /* * data for NETMAP_REQ_OPT_* options */ struct nmreq_opt_sync_kloop_eventfds { struct nmreq_option nro_opt; /* common header */ /* An array of N entries for bidirectional notifications between * the kernel loop and the application. The number of entries and * their order must agree with the CSB arrays passed in the * NETMAP_REQ_OPT_CSB option. Each entry contains a file descriptor * backed by an eventfd. * * If any of the 'ioeventfd' entries is < 0, the event loop uses * the sleeping synchronization strategy (according to sleep_us), * and keeps kern_need_kick always disabled. * Each 'irqfd' can be < 0, and in that case the corresponding queue * is never notified. */ struct { /* Notifier for the application --> kernel loop direction. */ int32_t ioeventfd; /* Notifier for the kernel loop --> application direction. 
*/ int32_t irqfd; } eventfds[0]; }; struct nmreq_opt_sync_kloop_mode { struct nmreq_option nro_opt; /* common header */ #define NM_OPT_SYNC_KLOOP_DIRECT_TX (1 << 0) #define NM_OPT_SYNC_KLOOP_DIRECT_RX (1 << 1) uint32_t mode; }; struct nmreq_opt_extmem { struct nmreq_option nro_opt; /* common header */ uint64_t nro_usrptr; /* (in) ptr to usr memory */ struct nmreq_pools_info nro_info; /* (in/out) */ }; struct nmreq_opt_csb { struct nmreq_option nro_opt; /* Array of CSB entries for application --> kernel communication * (N entries). */ uint64_t csb_atok; /* Array of CSB entries for kernel --> application communication * (N entries). */ uint64_t csb_ktoa; }; #endif /* _NET_NETMAP_H_ */ Index: head/sys/net/netmap_legacy.h =================================================================== --- head/sys/net/netmap_legacy.h (revision 345268) +++ head/sys/net/netmap_legacy.h (revision 345269) @@ -1,264 +1,257 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2011-2014 Matteo Landi, Luigi Rizzo. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``S IS''AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifndef _NET_NETMAP_LEGACY_H_ #define _NET_NETMAP_LEGACY_H_ /* * $FreeBSD$ * * ioctl names and related fields * * NIOCTXSYNC, NIOCRXSYNC synchronize tx or rx queues, * whose identity is set in NIOCREGIF through nr_ringid. * These are non blocking and take no argument. * * NIOCGINFO takes a struct ifreq, the interface name is the input, * the outputs are number of queues and number of descriptor * for each queue (useful to set number of threads etc.). * The info returned is only advisory and may change before * the interface is bound to a file descriptor. * * NIOCREGIF takes an interface name within a struct nmre, * and activates netmap mode on the interface (if possible). * * The argument to NIOCGINFO/NIOCREGIF overlays struct ifreq so we * can pass it down to other NIC-related ioctls. * * The actual argument (struct nmreq) has a number of options to request * different functions. * The following are used in NIOCREGIF when nr_cmd == 0: * * nr_name (in) * The name of the port (em0, valeXXX:YYY, etc.) * limited to IFNAMSIZ for backward compatibility. * * nr_version (in/out) * Must match NETMAP_API as used in the kernel, error otherwise. * Always returns the desired value on output. 
* * nr_tx_slots, nr_tx_slots, nr_tx_rings, nr_rx_rings (in/out) * On input, non-zero values may be used to reconfigure the port * according to the requested values, but this is not guaranteed. * On output the actual values in use are reported. * * nr_ringid (in) * Indicates how rings should be bound to the file descriptors. * If nr_flags != 0, then the low bits (in NETMAP_RING_MASK) * are used to indicate the ring number, and nr_flags specifies * the actual rings to bind. NETMAP_NO_TX_POLL is unaffected. * * NOTE: THE FOLLOWING (nr_flags == 0) IS DEPRECATED: * If nr_flags == 0, NETMAP_HW_RING and NETMAP_SW_RING control * the binding as follows: * 0 (default) binds all physical rings * NETMAP_HW_RING | ring number binds a single ring pair * NETMAP_SW_RING binds only the host tx/rx rings * * NETMAP_NO_TX_POLL can be OR-ed to make select()/poll() push * packets on tx rings only if POLLOUT is set. * The default is to push any pending packet. * * NETMAP_DO_RX_POLL can be OR-ed to make select()/poll() release * packets on rx rings also when POLLIN is NOT set. * The default is to touch the rx ring only with POLLIN. * Note that this is the opposite of TX because it * reflects the common usage. * * NOTE: NETMAP_PRIV_MEM IS DEPRECATED, use nr_arg2 instead. * NETMAP_PRIV_MEM is set on return for ports that do not use * the global memory allocator. * This information is not significant and applications * should look at the region id in nr_arg2 * * nr_flags is the recommended mode to indicate which rings should * be bound to a file descriptor. Values are NR_REG_* * - * nr_arg1 (in) The number of extra rings to be reserved. - * Especially when allocating a VALE port the system only - * allocates the amount of memory needed for the port. - * If more shared memory rings are desired (e.g. for pipes), - * the first invocation for the same basename/allocator - * should specify a suitable number. Memory cannot be - * extended after the first allocation without closing - * all ports on the same region. + * nr_arg1 (in) Reserved. * * nr_arg2 (in/out) The identity of the memory region used. * On input, 0 means the system decides autonomously, * other values may try to select a specific region. * On return the actual value is reported. * Region '1' is the global allocator, normally shared * by all interfaces. Other values are private regions. * If two ports the same region zero-copy is possible. * * nr_arg3 (in/out) number of extra buffers to be allocated. * * * * nr_cmd (in) if non-zero indicates a special command: * NETMAP_BDG_ATTACH and nr_name = vale*:ifname * attaches the NIC to the switch; nr_ringid specifies * which rings to use. Used by vale-ctl -a ... * nr_arg1 = NETMAP_BDG_HOST also attaches the host port * as in vale-ctl -h ... * * NETMAP_BDG_DETACH and nr_name = vale*:ifname * disconnects a previously attached NIC. * Used by vale-ctl -d ... * * NETMAP_BDG_LIST * list the configuration of VALE switches. * * NETMAP_BDG_VNET_HDR * Set the virtio-net header length used by the client * of a VALE switch port. * * NETMAP_BDG_NEWIF * create a persistent VALE port with name nr_name. * Used by vale-ctl -n ... * * NETMAP_BDG_DELIF * delete a persistent VALE port. Used by vale-ctl -d ... 
* * nr_arg1, nr_arg2, nr_arg3 (in/out) command specific * * * */ /* * struct nmreq overlays a struct ifreq (just the name) */ struct nmreq { char nr_name[IFNAMSIZ]; uint32_t nr_version; /* API version */ uint32_t nr_offset; /* nifp offset in the shared region */ uint32_t nr_memsize; /* size of the shared region */ uint32_t nr_tx_slots; /* slots in tx rings */ uint32_t nr_rx_slots; /* slots in rx rings */ uint16_t nr_tx_rings; /* number of tx rings */ uint16_t nr_rx_rings; /* number of rx rings */ uint16_t nr_ringid; /* ring(s) we care about */ #define NETMAP_HW_RING 0x4000 /* single NIC ring pair */ #define NETMAP_SW_RING 0x2000 /* only host ring pair */ #define NETMAP_RING_MASK 0x0fff /* the ring number */ #define NETMAP_NO_TX_POLL 0x1000 /* no automatic txsync on poll */ #define NETMAP_DO_RX_POLL 0x8000 /* DO automatic rxsync on poll */ uint16_t nr_cmd; #define NETMAP_BDG_ATTACH 1 /* attach the NIC */ #define NETMAP_BDG_DETACH 2 /* detach the NIC */ #define NETMAP_BDG_REGOPS 3 /* register bridge callbacks */ #define NETMAP_BDG_LIST 4 /* get bridge's info */ #define NETMAP_BDG_VNET_HDR 5 /* set the port virtio-net-hdr length */ #define NETMAP_BDG_NEWIF 6 /* create a virtual port */ #define NETMAP_BDG_DELIF 7 /* destroy a virtual port */ #define NETMAP_PT_HOST_CREATE 8 /* create ptnetmap kthreads */ #define NETMAP_PT_HOST_DELETE 9 /* delete ptnetmap kthreads */ #define NETMAP_BDG_POLLING_ON 10 /* delete polling kthread */ #define NETMAP_BDG_POLLING_OFF 11 /* delete polling kthread */ #define NETMAP_VNET_HDR_GET 12 /* get the port virtio-net-hdr length */ - uint16_t nr_arg1; /* reserve extra rings in NIOCREGIF */ + uint16_t nr_arg1; /* extra arguments */ #define NETMAP_BDG_HOST 1 /* nr_arg1 value for NETMAP_BDG_ATTACH */ uint16_t nr_arg2; /* id of the memory allocator */ uint32_t nr_arg3; /* req. extra buffers in NIOCREGIF */ uint32_t nr_flags; /* specify NR_REG_* mode and other flags */ #define NR_REG_MASK 0xf /* to extract NR_REG_* mode from nr_flags */ /* various modes, extends nr_ringid */ uint32_t spare2[1]; }; #ifdef _WIN32 /* * Windows does not have _IOWR(). _IO(), _IOW() and _IOR() are defined * in ws2def.h but not sure if they are in the form we need. * We therefore redefine them in a convenient way to use for DeviceIoControl * signatures. */ #undef _IO // ws2def.h #define _WIN_NM_IOCTL_TYPE 40000 #define _IO(_c, _n) CTL_CODE(_WIN_NM_IOCTL_TYPE, ((_n) + 0x800) , \ METHOD_BUFFERED, FILE_ANY_ACCESS ) #define _IO_direct(_c, _n) CTL_CODE(_WIN_NM_IOCTL_TYPE, ((_n) + 0x800) , \ METHOD_OUT_DIRECT, FILE_ANY_ACCESS ) #define _IOWR(_c, _n, _s) _IO(_c, _n) /* We havesome internal sysctl in addition to the externally visible ones */ #define NETMAP_MMAP _IO_direct('i', 160) // note METHOD_OUT_DIRECT #define NETMAP_POLL _IO('i', 162) /* and also two setsockopt for sysctl emulation */ #define NETMAP_SETSOCKOPT _IO('i', 140) #define NETMAP_GETSOCKOPT _IO('i', 141) /* These linknames are for the Netmap Core Driver */ #define NETMAP_NT_DEVICE_NAME L"\\Device\\NETMAP" #define NETMAP_DOS_DEVICE_NAME L"\\DosDevices\\netmap" /* Definition of a structure used to pass a virtual address within an IOCTL */ typedef struct _MEMORY_ENTRY { PVOID pUsermodeVirtualAddress; } MEMORY_ENTRY, *PMEMORY_ENTRY; typedef struct _POLL_REQUEST_DATA { int events; int timeout; int revents; } POLL_REQUEST_DATA; #endif /* _WIN32 */ /* * Opaque structure that is passed to an external kernel * module via ioctl(fd, NIOCCONFIG, req) for a user-owned * bridge port (at this point ephemeral VALE interface). 
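For comparison with the new control API, a minimal editorial sketch (not part of this commit) of the legacy registration path described in the comment above. `fd' is assumed to be an open /dev/netmap descriptor and "em0" is an illustrative port name:

    struct nmreq req;

    memset(&req, 0, sizeof(req));
    req.nr_version = NETMAP_API;
    strncpy(req.nr_name, "em0", sizeof(req.nr_name) - 1);
    req.nr_flags = NR_REG_ALL_NIC;             /* all physical rings */
    if (ioctl(fd, NIOCREGIF, &req) == 0) {
        /* req.nr_offset and req.nr_memsize can now be used to mmap()
         * the shared region; req.nr_arg2 is the memory allocator id. */
    }
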
*/ #define NM_IFRDATA_LEN 256 struct nm_ifreq { char nifr_name[IFNAMSIZ]; char data[NM_IFRDATA_LEN]; }; /* * FreeBSD uses the size value embedded in the _IOWR to determine * how much to copy in/out. So we need it to match the actual * data structure we pass. We put some spares in the structure * to ease compatibility with other versions */ #define NIOCGINFO _IOWR('i', 145, struct nmreq) /* return IF info */ #define NIOCREGIF _IOWR('i', 146, struct nmreq) /* interface register */ #define NIOCCONFIG _IOWR('i',150, struct nm_ifreq) /* for ext. modules */ #endif /* _NET_NETMAP_LEGACY_H_ */ Index: head/sys/net/netmap_user.h =================================================================== --- head/sys/net/netmap_user.h (revision 345268) +++ head/sys/net/netmap_user.h (revision 345269) @@ -1,1172 +1,1174 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (C) 2011-2016 Universita` di Pisa * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* * $FreeBSD$ * * Functions and macros to manipulate netmap structures and packets * in userspace. See netmap(4) for more information. * * The address of the struct netmap_if, say nifp, is computed from the * value returned from ioctl(.., NIOCREG, ...) and the mmap region: * ioctl(fd, NIOCREG, &req); * mem = mmap(0, ... ); * nifp = NETMAP_IF(mem, req.nr_nifp); * (so simple, we could just do it manually) * * From there: * struct netmap_ring *NETMAP_TXRING(nifp, index) * struct netmap_ring *NETMAP_RXRING(nifp, index) * we can access ring->cur, ring->head, ring->tail, etc. * * ring->slot[i] gives us the i-th slot (we can access * directly len, flags, buf_idx) * * char *buf = NETMAP_BUF(ring, x) returns a pointer to * the buffer numbered x * * All ring indexes (head, cur, tail) should always move forward. * To compute the next index in a circular ring you can use * i = nm_ring_next(ring, i); * * To ease porting apps from pcap to netmap we supply a few fuctions * that can be called to open, close, read and write on netmap in a way * similar to libpcap. Note that the read/write function depend on * an ioctl()/select()/poll() being issued to refill rings or push * packets out. * * In order to use these, include #define NETMAP_WITH_LIBS * in the source file that invokes these functions. 
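A short editorial sketch (not part of this commit) of the usage pattern described above: a blocking receive loop over the first NIC rx ring. It assumes `fd' has already been bound to a port, `mem' and `off' come from mmap() and the registration reply, and that <poll.h> and <net/netmap_user.h> are included:

    struct netmap_if *nifp = NETMAP_IF(mem, off);
    struct netmap_ring *ring = NETMAP_RXRING(nifp, 0);
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    for (;;) {
        poll(&pfd, 1, -1);                     /* blocks; also runs rxsync */
        while (!nm_ring_empty(ring)) {
            uint32_t i = ring->cur;
            char *buf = NETMAP_BUF(ring, ring->slot[i].buf_idx);

            /* consume ring->slot[i].len bytes at buf here */
            ring->head = ring->cur = nm_ring_next(ring, i);
        }
    }
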
*/ #ifndef _NET_NETMAP_USER_H_ #define _NET_NETMAP_USER_H_ #define NETMAP_DEVICE_NAME "/dev/netmap" #ifdef __CYGWIN__ /* * we can compile userspace apps with either cygwin or msvc, * and we use _WIN32 to identify windows specific code */ #ifndef _WIN32 #define _WIN32 #endif /* _WIN32 */ #endif /* __CYGWIN__ */ #ifdef _WIN32 #undef NETMAP_DEVICE_NAME #define NETMAP_DEVICE_NAME "/proc/sys/DosDevices/Global/netmap" #include #include #include #endif /* _WIN32 */ #include #include /* apple needs sockaddr */ #include /* IFNAMSIZ */ #include +#include /* memset */ +#include /* gettimeofday */ #ifndef likely #define likely(x) __builtin_expect(!!(x), 1) #define unlikely(x) __builtin_expect(!!(x), 0) #endif /* likely and unlikely */ #include /* helper macro */ #define _NETMAP_OFFSET(type, ptr, offset) \ ((type)(void *)((char *)(ptr) + (offset))) #define NETMAP_IF(_base, _ofs) _NETMAP_OFFSET(struct netmap_if *, _base, _ofs) #define NETMAP_TXRING(nifp, index) _NETMAP_OFFSET(struct netmap_ring *, \ nifp, (nifp)->ring_ofs[index] ) #define NETMAP_RXRING(nifp, index) _NETMAP_OFFSET(struct netmap_ring *, \ - nifp, (nifp)->ring_ofs[index + (nifp)->ni_tx_rings + 1] ) + nifp, (nifp)->ring_ofs[index + (nifp)->ni_tx_rings + \ + (nifp)->ni_host_tx_rings] ) #define NETMAP_BUF(ring, index) \ ((char *)(ring) + (ring)->buf_ofs + ((index)*(ring)->nr_buf_size)) #define NETMAP_BUF_IDX(ring, buf) \ ( ((char *)(buf) - ((char *)(ring) + (ring)->buf_ofs) ) / \ (ring)->nr_buf_size ) static inline uint32_t nm_ring_next(struct netmap_ring *r, uint32_t i) { return ( unlikely(i + 1 == r->num_slots) ? 0 : i + 1); } /* * Return 1 if we have pending transmissions in the tx ring. * When everything is complete ring->head = ring->tail + 1 (modulo ring size) */ static inline int nm_tx_pending(struct netmap_ring *r) { return nm_ring_next(r, r->tail) != r->head; } /* Compute the number of slots available in the netmap ring. We use * ring->head as explained in the comment above nm_ring_empty(). */ static inline uint32_t nm_ring_space(struct netmap_ring *ring) { int ret = ring->tail - ring->head; if (ret < 0) ret += ring->num_slots; return ret; } - -#ifdef NETMAP_WITH_LIBS -/* - * Support for simple I/O libraries. - * Include other system headers required for compiling this. - */ - -#ifndef HAVE_NETMAP_WITH_LIBS -#define HAVE_NETMAP_WITH_LIBS - -#include -#include -#include -#include /* memset */ -#include -#include /* EINVAL */ -#include /* O_RDWR */ -#include /* close() */ -#include -#include - #ifndef ND /* debug macros */ /* debug support */ #define ND(_fmt, ...) do {} while(0) #define D(_fmt, ...) \ do { \ struct timeval _t0; \ gettimeofday(&_t0, NULL); \ fprintf(stderr, "%03d.%06d %s [%d] " _fmt "\n", \ (int)(_t0.tv_sec % 1000), (int)_t0.tv_usec, \ __FUNCTION__, __LINE__, ##__VA_ARGS__); \ } while (0) /* Rate limited version of "D", lps indicates how many per second */ #define RD(lps, format, ...) \ do { \ static int __t0, __cnt; \ struct timeval __xxts; \ gettimeofday(&__xxts, NULL); \ if (__t0 != __xxts.tv_sec) { \ __t0 = __xxts.tv_sec; \ __cnt = 0; \ } \ if (__cnt++ < lps) { \ D(format, ##__VA_ARGS__); \ } \ } while (0) #endif +/* + * this is a slightly optimized copy routine which rounds + * to multiple of 64 bytes and is often faster than dealing + * with other odd sizes. We assume there is enough room + * in the source and destination buffers. 
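 *
 * A minimal usage sketch (illustrative only; ring, i, frame and len are
 * placeholders), filling the buffer attached to TX slot i with a frame
 * of len bytes, assuming len <= ring->nr_buf_size:
 *
 *	struct netmap_slot *slot = &ring->slot[i];
 *
 *	nm_pkt_copy(frame, NETMAP_BUF(ring, slot->buf_idx), len);
 *	slot->len = len;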
+ */ +static inline void +nm_pkt_copy(const void *_src, void *_dst, int l) +{ + const uint64_t *src = (const uint64_t *)_src; + uint64_t *dst = (uint64_t *)_dst; + + if (unlikely(l >= 1024 || l % 64)) { + memcpy(dst, src, l); + return; + } + for (; likely(l > 0); l-=64) { + *dst++ = *src++; + *dst++ = *src++; + *dst++ = *src++; + *dst++ = *src++; + *dst++ = *src++; + *dst++ = *src++; + *dst++ = *src++; + *dst++ = *src++; + } +} + +#ifdef NETMAP_WITH_LIBS +/* + * Support for simple I/O libraries. + * Include other system headers required for compiling this. + */ + +#ifndef HAVE_NETMAP_WITH_LIBS +#define HAVE_NETMAP_WITH_LIBS + +#include +#include +#include +#include +#include /* EINVAL */ +#include /* O_RDWR */ +#include /* close() */ +#include +#include + struct nm_pkthdr { /* first part is the same as pcap_pkthdr */ struct timeval ts; uint32_t caplen; uint32_t len; uint64_t flags; /* NM_MORE_PKTS etc */ #define NM_MORE_PKTS 1 struct nm_desc *d; struct netmap_slot *slot; uint8_t *buf; }; struct nm_stat { /* same as pcap_stat */ u_int ps_recv; u_int ps_drop; u_int ps_ifdrop; #ifdef WIN32 /* XXX or _WIN32 ? */ u_int bs_capt; #endif /* WIN32 */ }; #define NM_ERRBUF_SIZE 512 struct nm_desc { struct nm_desc *self; /* point to self if netmap. */ int fd; void *mem; uint32_t memsize; int done_mmap; /* set if mem is the result of mmap */ struct netmap_if * const nifp; uint16_t first_tx_ring, last_tx_ring, cur_tx_ring; uint16_t first_rx_ring, last_rx_ring, cur_rx_ring; struct nmreq req; /* also contains the nr_name = ifname */ struct nm_pkthdr hdr; /* * The memory contains netmap_if, rings and then buffers. * Given a pointer (e.g. to nm_inject) we can compare with * mem/buf_start/buf_end to tell if it is a buffer or * some other descriptor in our region. * We also store a pointer to some ring as it helps in the * translation from buffer indexes to addresses. */ struct netmap_ring * const some_ring; void * const buf_start; void * const buf_end; /* parameters from pcap_open_live */ int snaplen; int promisc; int to_ms; char *errbuf; /* save flags so we can restore them on close */ uint32_t if_flags; uint32_t if_reqcap; uint32_t if_curcap; struct nm_stat st; char msg[NM_ERRBUF_SIZE]; }; /* * when the descriptor is open correctly, d->self == d * Eventually we should also use some magic number. */ #define P2NMD(p) ((struct nm_desc *)(p)) #define IS_NETMAP_DESC(d) ((d) && P2NMD(d)->self == P2NMD(d)) #define NETMAP_FD(d) (P2NMD(d)->fd) -/* - * this is a slightly optimized copy routine which rounds - * to multiple of 64 bytes and is often faster than dealing - * with other odd sizes. We assume there is enough room - * in the source and destination buffers. - */ -static inline void -nm_pkt_copy(const void *_src, void *_dst, int l) -{ - const uint64_t *src = (const uint64_t *)_src; - uint64_t *dst = (uint64_t *)_dst; - - if (unlikely(l >= 1024 || l % 64)) { - memcpy(dst, src, l); - return; - } - for (; likely(l > 0); l-=64) { - *dst++ = *src++; - *dst++ = *src++; - *dst++ = *src++; - *dst++ = *src++; - *dst++ = *src++; - *dst++ = *src++; - *dst++ = *src++; - *dst++ = *src++; - } -} /* * The callback, invoked on each received packet. Same as libpcap */ typedef void (*nm_cb_t)(u_char *, const struct nm_pkthdr *, const u_char *d); /* *--- the pcap-like API --- * * nm_open() opens a file descriptor, binds to a port and maps memory. 
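 *
 * A few illustrative invocations (port names are placeholders; the
 * suffix syntax is detailed right below):
 *
 *	nm_open("netmap:em0", NULL, 0, NULL)	 all hw ring pairs of em0
 *	nm_open("netmap:em0^", NULL, 0, NULL)	 only the host (sw) rings
 *	nm_open("netmap:em0-1/R", NULL, 0, NULL) only RX of hw ring pair 1
 *	nm_open("vale0:p0", NULL, 0, NULL)	 port p0 of VALE switch vale0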
* * ifname (netmap:foo or vale:foo) is the port name * a suffix can indicate the following: * ^ bind the host (sw) ring pair * * bind host and NIC ring pairs * -NN bind individual NIC ring pair * {NN bind master side of pipe NN * }NN bind slave side of pipe NN * a suffix starting with / and the following flags, * in any order: * x exclusive access * z zero copy monitor (both tx and rx) * t monitor tx side (copy monitor) * r monitor rx side (copy monitor) * R bind only RX ring(s) * T bind only TX ring(s) * an additional suffix @NN selects the memory region (memid) NN * * req provides the initial values of nmreq before parsing ifname. * Remember that the ifname parsing will override the ring * number in nr_ringid, and part of nr_flags; * flags special functions, normally 0 * indicates which fields of *arg are significant * arg special functions, normally NULL * if passed an nm_desc with mem != NULL, * use that memory instead of mmap. */ static struct nm_desc *nm_open(const char *ifname, const struct nmreq *req, uint64_t flags, const struct nm_desc *arg); /* * nm_open can import some fields from the parent descriptor. * These flags control which ones. * Also in flags you can specify NETMAP_NO_TX_POLL and NETMAP_DO_RX_POLL, * which set the initial value for these flags. * Note that the 16 low bits of the flags are reserved for data * that may go into the nmreq. */ enum { NM_OPEN_NO_MMAP = 0x040000, /* reuse mmap from parent */ NM_OPEN_IFNAME = 0x080000, /* nr_name, nr_ringid, nr_flags */ NM_OPEN_ARG1 = 0x100000, NM_OPEN_ARG2 = 0x200000, NM_OPEN_ARG3 = 0x400000, NM_OPEN_RING_CFG = 0x800000, /* tx|rx rings|slots */ }; /* * nm_close() closes and restores the port to its previous state */ static int nm_close(struct nm_desc *); /* * nm_mmap() does mmap or inherits from parent if the nr_arg2 * (memory block) matches. */ static int nm_mmap(struct nm_desc *, const struct nm_desc *); /* * nm_inject() is the same as pcap_inject() * nm_dispatch() is the same as pcap_dispatch() * nm_nextpkt() is the same as pcap_next() */ static int nm_inject(struct nm_desc *, const void *, size_t); static int nm_dispatch(struct nm_desc *, int, nm_cb_t, u_char *); static u_char *nm_nextpkt(struct nm_desc *, struct nm_pkthdr *); #ifdef _WIN32 intptr_t _get_osfhandle(int); /* defined in io.h in windows */ /* * In windows we do not yet have native poll support, so we keep track * of file descriptors associated to netmap ports to emulate poll on * them and fall back on regular poll on other file descriptors.
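 *
 * Note (illustration only): because open(), ioctl(), mmap(), poll() and
 * close() are re-#defined below to the win_nm_*() and win32_mmap_emulated()
 * wrappers, code written against the pcap-like API above can typically be
 * compiled unchanged; e.g. a plain
 *	poll(&pfd, 1, 1000);
 * on a netmap file descriptor ends up in win_nm_poll(), which issues
 * NETMAP_POLL through DeviceIoControl().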
*/ struct win_netmap_fd_list { struct win_netmap_fd_list *next; int win_netmap_fd; HANDLE win_netmap_handle; }; /* * list head containing all the netmap opened fd and their * windows HANDLE counterparts */ static struct win_netmap_fd_list *win_netmap_fd_list_head; static void win_insert_fd_record(int fd) { struct win_netmap_fd_list *curr; for (curr = win_netmap_fd_list_head; curr; curr = curr->next) { if (fd == curr->win_netmap_fd) { return; } } curr = calloc(1, sizeof(*curr)); curr->next = win_netmap_fd_list_head; curr->win_netmap_fd = fd; curr->win_netmap_handle = IntToPtr(_get_osfhandle(fd)); win_netmap_fd_list_head = curr; } void win_remove_fd_record(int fd) { struct win_netmap_fd_list *curr = win_netmap_fd_list_head; struct win_netmap_fd_list *prev = NULL; for (; curr ; prev = curr, curr = curr->next) { if (fd != curr->win_netmap_fd) continue; /* found the entry */ if (prev == NULL) { /* we are freeing the first entry */ win_netmap_fd_list_head = curr->next; } else { prev->next = curr->next; } free(curr); break; } } HANDLE win_get_netmap_handle(int fd) { struct win_netmap_fd_list *curr; for (curr = win_netmap_fd_list_head; curr; curr = curr->next) { if (fd == curr->win_netmap_fd) { return curr->win_netmap_handle; } } return NULL; } /* * we need to wrap ioctl and mmap, at least for the netmap file descriptors */ /* * use this function only from netmap_user.h internal functions * same as ioctl, returns 0 on success and -1 on error */ static int win_nm_ioctl_internal(HANDLE h, int32_t ctlCode, void *arg) { DWORD bReturn = 0, szIn, szOut; BOOL ioctlReturnStatus; void *inParam = arg, *outParam = arg; switch (ctlCode) { case NETMAP_POLL: szIn = sizeof(POLL_REQUEST_DATA); szOut = sizeof(POLL_REQUEST_DATA); break; case NETMAP_MMAP: szIn = 0; szOut = sizeof(void*); inParam = NULL; /* nothing on input */ break; case NIOCTXSYNC: case NIOCRXSYNC: szIn = 0; szOut = 0; break; case NIOCREGIF: szIn = sizeof(struct nmreq); szOut = sizeof(struct nmreq); break; case NIOCCONFIG: D("unsupported NIOCCONFIG!"); return -1; default: /* a regular ioctl */ D("invalid ioctl %x on netmap fd", ctlCode); return -1; } ioctlReturnStatus = DeviceIoControl(h, ctlCode, inParam, szIn, outParam, szOut, &bReturn, NULL); // XXX note windows returns 0 on error or async call, 1 on success // we could call GetLastError() to figure out what happened return ioctlReturnStatus ? 0 : -1; } /* * this function is what must be called from user-space programs * same as ioctl, returns 0 on success and -1 on error */ static int win_nm_ioctl(int fd, int32_t ctlCode, void *arg) { HANDLE h = win_get_netmap_handle(fd); if (h == NULL) { return ioctl(fd, ctlCode, arg); } else { return win_nm_ioctl_internal(h, ctlCode, arg); } } #define ioctl win_nm_ioctl /* from now on, within this file ... */ /* * We cannot use the native mmap on windows * The only parameter used is "fd", the other ones are just declared to * make this signature comparable to the FreeBSD/Linux one */ static void * win32_mmap_emulated(void *addr, size_t length, int prot, int flags, int fd, int32_t offset) { HANDLE h = win_get_netmap_handle(fd); if (h == NULL) { return mmap(addr, length, prot, flags, fd, offset); } else { MEMORY_ENTRY ret; return win_nm_ioctl_internal(h, NETMAP_MMAP, &ret) ? 
NULL : ret.pUsermodeVirtualAddress; } } #define mmap win32_mmap_emulated #include /* XXX needed to use the structure pollfd */ static int win_nm_poll(struct pollfd *fds, int nfds, int timeout) { HANDLE h; if (nfds != 1 || fds == NULL || (h = win_get_netmap_handle(fds->fd)) == NULL) {; return poll(fds, nfds, timeout); } else { POLL_REQUEST_DATA prd; prd.timeout = timeout; prd.events = fds->events; win_nm_ioctl_internal(h, NETMAP_POLL, &prd); if ((prd.revents == POLLERR) || (prd.revents == STATUS_TIMEOUT)) { return -1; } return 1; } } #define poll win_nm_poll static int win_nm_open(char* pathname, int flags) { if (strcmp(pathname, NETMAP_DEVICE_NAME) == 0) { int fd = open(NETMAP_DEVICE_NAME, O_RDWR); if (fd < 0) { return -1; } win_insert_fd_record(fd); return fd; } else { return open(pathname, flags); } } #define open win_nm_open static int win_nm_close(int fd) { if (fd != -1) { close(fd); if (win_get_netmap_handle(fd) != NULL) { win_remove_fd_record(fd); } } return 0; } #define close win_nm_close #endif /* _WIN32 */ static int nm_is_identifier(const char *s, const char *e) { for (; s != e; s++) { if (!isalnum(*s) && *s != '_') { return 0; } } return 1; } #define MAXERRMSG 80 static int nm_parse(const char *ifname, struct nm_desc *d, char *err) { int is_vale; const char *port = NULL; const char *vpname = NULL; u_int namelen; uint32_t nr_ringid = 0, nr_flags; char errmsg[MAXERRMSG] = ""; long num; uint16_t nr_arg2 = 0; enum { P_START, P_RNGSFXOK, P_GETNUM, P_FLAGS, P_FLAGSOK, P_MEMID } p_state; errno = 0; is_vale = (ifname[0] == 'v'); if (is_vale) { port = index(ifname, ':'); if (port == NULL) { snprintf(errmsg, MAXERRMSG, "missing ':' in vale name"); goto fail; } if (!nm_is_identifier(ifname + 4, port)) { snprintf(errmsg, MAXERRMSG, "invalid bridge name"); goto fail; } vpname = ++port; } else { ifname += 7; port = ifname; } /* scan for a separator */ for (; *port && !index("-*^{}/@", *port); port++) ; if (is_vale && !nm_is_identifier(vpname, port)) { snprintf(errmsg, MAXERRMSG, "invalid bridge port name"); goto fail; } namelen = port - ifname; if (namelen >= sizeof(d->req.nr_name)) { snprintf(errmsg, MAXERRMSG, "name too long"); goto fail; } memcpy(d->req.nr_name, ifname, namelen); d->req.nr_name[namelen] = '\0'; p_state = P_START; nr_flags = NR_REG_ALL_NIC; /* default for no suffix */ while (*port) { switch (p_state) { case P_START: switch (*port) { case '^': /* only SW ring */ nr_flags = NR_REG_SW; p_state = P_RNGSFXOK; break; case '*': /* NIC and SW */ nr_flags = NR_REG_NIC_SW; p_state = P_RNGSFXOK; break; case '-': /* one NIC ring pair */ nr_flags = NR_REG_ONE_NIC; p_state = P_GETNUM; break; case '{': /* pipe (master endpoint) */ nr_flags = NR_REG_PIPE_MASTER; p_state = P_GETNUM; break; case '}': /* pipe (slave endoint) */ nr_flags = NR_REG_PIPE_SLAVE; p_state = P_GETNUM; break; case '/': /* start of flags */ p_state = P_FLAGS; break; case '@': /* start of memid */ p_state = P_MEMID; break; default: snprintf(errmsg, MAXERRMSG, "unknown modifier: '%c'", *port); goto fail; } port++; break; case P_RNGSFXOK: switch (*port) { case '/': p_state = P_FLAGS; break; case '@': p_state = P_MEMID; break; default: snprintf(errmsg, MAXERRMSG, "unexpected character: '%c'", *port); goto fail; } port++; break; case P_GETNUM: num = strtol(port, (char **)&port, 10); if (num < 0 || num >= NETMAP_RING_MASK) { snprintf(errmsg, MAXERRMSG, "'%ld' out of range [0, %d)", num, NETMAP_RING_MASK); goto fail; } nr_ringid = num & NETMAP_RING_MASK; p_state = P_RNGSFXOK; break; case P_FLAGS: case P_FLAGSOK: if (*port == 
'@') { port++; p_state = P_MEMID; break; } switch (*port) { case 'x': nr_flags |= NR_EXCLUSIVE; break; case 'z': nr_flags |= NR_ZCOPY_MON; break; case 't': nr_flags |= NR_MONITOR_TX; break; case 'r': nr_flags |= NR_MONITOR_RX; break; case 'R': nr_flags |= NR_RX_RINGS_ONLY; break; case 'T': nr_flags |= NR_TX_RINGS_ONLY; break; default: snprintf(errmsg, MAXERRMSG, "unrecognized flag: '%c'", *port); goto fail; } port++; p_state = P_FLAGSOK; break; case P_MEMID: if (nr_arg2 != 0) { snprintf(errmsg, MAXERRMSG, "double setting of memid"); goto fail; } num = strtol(port, (char **)&port, 10); if (num <= 0) { snprintf(errmsg, MAXERRMSG, "invalid memid %ld, must be >0", num); goto fail; } nr_arg2 = num; p_state = P_RNGSFXOK; break; } } if (p_state != P_START && p_state != P_RNGSFXOK && p_state != P_FLAGSOK) { snprintf(errmsg, MAXERRMSG, "unexpected end of port name"); goto fail; } ND("flags: %s %s %s %s", (nr_flags & NR_EXCLUSIVE) ? "EXCLUSIVE" : "", (nr_flags & NR_ZCOPY_MON) ? "ZCOPY_MON" : "", (nr_flags & NR_MONITOR_TX) ? "MONITOR_TX" : "", (nr_flags & NR_MONITOR_RX) ? "MONITOR_RX" : ""); d->req.nr_flags |= nr_flags; d->req.nr_ringid |= nr_ringid; d->req.nr_arg2 = nr_arg2; d->self = d; return 0; fail: if (!errno) errno = EINVAL; if (err) strncpy(err, errmsg, MAXERRMSG); return -1; } /* * Try to open, return descriptor if successful, NULL otherwise. * An invalid netmap name will return errno = 0; * You can pass a pointer to a pre-filled nm_desc to add special * parameters. Flags is used as follows * NM_OPEN_NO_MMAP use the memory from arg, only XXX avoid mmap * if the nr_arg2 (memory block) matches. * NM_OPEN_ARG1 use req.nr_arg1 from arg * NM_OPEN_ARG2 use req.nr_arg2 from arg * NM_OPEN_RING_CFG user ring config from arg */ static struct nm_desc * nm_open(const char *ifname, const struct nmreq *req, uint64_t new_flags, const struct nm_desc *arg) { struct nm_desc *d = NULL; const struct nm_desc *parent = arg; char errmsg[MAXERRMSG] = ""; uint32_t nr_reg; if (strncmp(ifname, "netmap:", 7) && strncmp(ifname, NM_BDG_NAME, strlen(NM_BDG_NAME))) { errno = 0; /* name not recognised, not an error */ return NULL; } d = (struct nm_desc *)calloc(1, sizeof(*d)); if (d == NULL) { snprintf(errmsg, MAXERRMSG, "nm_desc alloc failure"); errno = ENOMEM; return NULL; } d->self = d; /* set this early so nm_close() works */ d->fd = open(NETMAP_DEVICE_NAME, O_RDWR); if (d->fd < 0) { snprintf(errmsg, MAXERRMSG, "cannot open /dev/netmap: %s", strerror(errno)); goto fail; } if (req) d->req = *req; if (!(new_flags & NM_OPEN_IFNAME)) { if (nm_parse(ifname, d, errmsg) < 0) goto fail; } d->req.nr_version = NETMAP_API; d->req.nr_ringid &= NETMAP_RING_MASK; /* optionally import info from parent */ if (IS_NETMAP_DESC(parent) && new_flags) { if (new_flags & NM_OPEN_ARG1) D("overriding ARG1 %d", parent->req.nr_arg1); d->req.nr_arg1 = new_flags & NM_OPEN_ARG1 ? parent->req.nr_arg1 : 4; if (new_flags & NM_OPEN_ARG2) { D("overriding ARG2 %d", parent->req.nr_arg2); d->req.nr_arg2 = parent->req.nr_arg2; } if (new_flags & NM_OPEN_ARG3) D("overriding ARG3 %d", parent->req.nr_arg3); d->req.nr_arg3 = new_flags & NM_OPEN_ARG3 ? 
parent->req.nr_arg3 : 0; if (new_flags & NM_OPEN_RING_CFG) { D("overriding RING_CFG"); d->req.nr_tx_slots = parent->req.nr_tx_slots; d->req.nr_rx_slots = parent->req.nr_rx_slots; d->req.nr_tx_rings = parent->req.nr_tx_rings; d->req.nr_rx_rings = parent->req.nr_rx_rings; } if (new_flags & NM_OPEN_IFNAME) { D("overriding ifname %s ringid 0x%x flags 0x%x", parent->req.nr_name, parent->req.nr_ringid, parent->req.nr_flags); memcpy(d->req.nr_name, parent->req.nr_name, sizeof(d->req.nr_name)); d->req.nr_ringid = parent->req.nr_ringid; d->req.nr_flags = parent->req.nr_flags; } } /* add the *XPOLL flags */ d->req.nr_ringid |= new_flags & (NETMAP_NO_TX_POLL | NETMAP_DO_RX_POLL); if (ioctl(d->fd, NIOCREGIF, &d->req)) { snprintf(errmsg, MAXERRMSG, "NIOCREGIF failed: %s", strerror(errno)); goto fail; } nr_reg = d->req.nr_flags & NR_REG_MASK; if (nr_reg == NR_REG_SW) { /* host stack */ d->first_tx_ring = d->last_tx_ring = d->req.nr_tx_rings; d->first_rx_ring = d->last_rx_ring = d->req.nr_rx_rings; } else if (nr_reg == NR_REG_ALL_NIC) { /* only nic */ d->first_tx_ring = 0; d->first_rx_ring = 0; d->last_tx_ring = d->req.nr_tx_rings - 1; d->last_rx_ring = d->req.nr_rx_rings - 1; } else if (nr_reg == NR_REG_NIC_SW) { d->first_tx_ring = 0; d->first_rx_ring = 0; d->last_tx_ring = d->req.nr_tx_rings; d->last_rx_ring = d->req.nr_rx_rings; } else if (nr_reg == NR_REG_ONE_NIC) { /* XXX check validity */ d->first_tx_ring = d->last_tx_ring = d->first_rx_ring = d->last_rx_ring = d->req.nr_ringid & NETMAP_RING_MASK; } else { /* pipes */ d->first_tx_ring = d->last_tx_ring = 0; d->first_rx_ring = d->last_rx_ring = 0; } /* if parent is defined, do nm_mmap() even if NM_OPEN_NO_MMAP is set */ if ((!(new_flags & NM_OPEN_NO_MMAP) || parent) && nm_mmap(d, parent)) { snprintf(errmsg, MAXERRMSG, "mmap failed: %s", strerror(errno)); goto fail; } #ifdef DEBUG_NETMAP_USER { /* debugging code */ int i; D("%s tx %d .. %d %d rx %d .. 
%d %d", ifname, d->first_tx_ring, d->last_tx_ring, d->req.nr_tx_rings, d->first_rx_ring, d->last_rx_ring, d->req.nr_rx_rings); for (i = 0; i <= d->req.nr_tx_rings; i++) { struct netmap_ring *r = NETMAP_TXRING(d->nifp, i); D("TX%d %p h %d c %d t %d", i, r, r->head, r->cur, r->tail); } for (i = 0; i <= d->req.nr_rx_rings; i++) { struct netmap_ring *r = NETMAP_RXRING(d->nifp, i); D("RX%d %p h %d c %d t %d", i, r, r->head, r->cur, r->tail); } } #endif /* debugging */ d->cur_tx_ring = d->first_tx_ring; d->cur_rx_ring = d->first_rx_ring; return d; fail: nm_close(d); if (errmsg[0]) D("%s %s", errmsg, ifname); if (errno == 0) errno = EINVAL; return NULL; } static int nm_close(struct nm_desc *d) { /* * ugly trick to avoid unused warnings */ static void *__xxzt[] __attribute__ ((unused)) = { (void *)nm_open, (void *)nm_inject, (void *)nm_dispatch, (void *)nm_nextpkt } ; if (d == NULL || d->self != d) return EINVAL; if (d->done_mmap && d->mem) munmap(d->mem, d->memsize); if (d->fd != -1) { close(d->fd); } bzero(d, sizeof(*d)); free(d); return 0; } static int nm_mmap(struct nm_desc *d, const struct nm_desc *parent) { //XXX TODO: check if mmap is already done if (IS_NETMAP_DESC(parent) && parent->mem && parent->req.nr_arg2 == d->req.nr_arg2) { /* do not mmap, inherit from parent */ D("do not mmap, inherit from parent"); d->memsize = parent->memsize; d->mem = parent->mem; } else { /* XXX TODO: check if memsize is too large (or there is overflow) */ d->memsize = d->req.nr_memsize; d->mem = mmap(0, d->memsize, PROT_WRITE | PROT_READ, MAP_SHARED, d->fd, 0); if (d->mem == MAP_FAILED) { goto fail; } d->done_mmap = 1; } { struct netmap_if *nifp = NETMAP_IF(d->mem, d->req.nr_offset); struct netmap_ring *r = NETMAP_RXRING(nifp, d->first_rx_ring); if ((void *)r == (void *)nifp) { /* the descriptor is open for TX only */ r = NETMAP_TXRING(nifp, d->first_tx_ring); } *(struct netmap_if **)(uintptr_t)&(d->nifp) = nifp; *(struct netmap_ring **)(uintptr_t)&d->some_ring = r; *(void **)(uintptr_t)&d->buf_start = NETMAP_BUF(r, 0); *(void **)(uintptr_t)&d->buf_end = (char *)d->mem + d->memsize; } return 0; fail: return EINVAL; } /* * Same prototype as pcap_inject(), only need to cast. */ static int nm_inject(struct nm_desc *d, const void *buf, size_t size) { u_int c, n = d->last_tx_ring - d->first_tx_ring + 1, ri = d->cur_tx_ring; for (c = 0; c < n ; c++, ri++) { /* compute current ring to use */ struct netmap_ring *ring; uint32_t i, j, idx; size_t rem; if (ri > d->last_tx_ring) ri = d->first_tx_ring; ring = NETMAP_TXRING(d->nifp, ri); rem = size; j = ring->cur; while (rem > ring->nr_buf_size && j != ring->tail) { rem -= ring->nr_buf_size; j = nm_ring_next(ring, j); } if (j == ring->tail && rem > 0) continue; i = ring->cur; while (i != j) { idx = ring->slot[i].buf_idx; ring->slot[i].len = ring->nr_buf_size; ring->slot[i].flags = NS_MOREFRAG; nm_pkt_copy(buf, NETMAP_BUF(ring, idx), ring->nr_buf_size); i = nm_ring_next(ring, i); buf = (char *)buf + ring->nr_buf_size; } idx = ring->slot[i].buf_idx; ring->slot[i].len = rem; ring->slot[i].flags = 0; nm_pkt_copy(buf, NETMAP_BUF(ring, idx), rem); ring->head = ring->cur = nm_ring_next(ring, i); d->cur_tx_ring = ri; return size; } return 0; /* fail */ } /* * Same prototype as pcap_dispatch(), only need to cast. 
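 *
 * A callback sketch (names are placeholders): NM_MORE_PKTS is set in
 * hdr->flags for every packet of a batch except the last one, and
 * hdr->caplen may be smaller than hdr->len when the fragments of an
 * NS_MOREFRAG packet are not stored in contiguous buffers:
 *
 *	static void
 *	rx_cb(u_char *arg, const struct nm_pkthdr *hdr, const u_char *buf)
 *	{
 *		printf("%u bytes%s\n", hdr->len,
 *		    (hdr->flags & NM_MORE_PKTS) ? " (more queued)" : "");
 *	}
 *
 *	nm_dispatch(d, 0, rx_cb, NULL);	 0 processes all available packets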
*/ static int nm_dispatch(struct nm_desc *d, int cnt, nm_cb_t cb, u_char *arg) { int n = d->last_rx_ring - d->first_rx_ring + 1; int c, got = 0, ri = d->cur_rx_ring; d->hdr.buf = NULL; d->hdr.flags = NM_MORE_PKTS; d->hdr.d = d; if (cnt == 0) cnt = -1; /* cnt == -1 means infinite, but rings have a finite amount * of buffers and the int is large enough that we never wrap, * so we can omit checking for -1 */ for (c=0; c < n && cnt != got; c++, ri++) { /* compute current ring to use */ struct netmap_ring *ring; if (ri > d->last_rx_ring) ri = d->first_rx_ring; ring = NETMAP_RXRING(d->nifp, ri); for ( ; !nm_ring_empty(ring) && cnt != got; got++) { u_int idx, i; u_char *oldbuf; struct netmap_slot *slot; if (d->hdr.buf) { /* from previous round */ cb(arg, &d->hdr, d->hdr.buf); } i = ring->cur; slot = &ring->slot[i]; idx = slot->buf_idx; /* d->cur_rx_ring doesn't change inside this loop, but * set it here, so it reflects d->hdr.buf's ring */ d->cur_rx_ring = ri; d->hdr.slot = slot; oldbuf = d->hdr.buf = (u_char *)NETMAP_BUF(ring, idx); // __builtin_prefetch(buf); d->hdr.len = d->hdr.caplen = slot->len; while (slot->flags & NS_MOREFRAG) { u_char *nbuf; u_int oldlen = slot->len; i = nm_ring_next(ring, i); slot = &ring->slot[i]; d->hdr.len += slot->len; nbuf = (u_char *)NETMAP_BUF(ring, slot->buf_idx); if (oldbuf != NULL && nbuf - oldbuf == ring->nr_buf_size && oldlen == ring->nr_buf_size) { d->hdr.caplen += slot->len; oldbuf = nbuf; } else { oldbuf = NULL; } } d->hdr.ts = ring->ts; ring->head = ring->cur = nm_ring_next(ring, i); } } if (d->hdr.buf) { /* from previous round */ d->hdr.flags = 0; cb(arg, &d->hdr, d->hdr.buf); } return got; } static u_char * nm_nextpkt(struct nm_desc *d, struct nm_pkthdr *hdr) { int ri = d->cur_rx_ring; do { /* compute current ring to use */ struct netmap_ring *ring = NETMAP_RXRING(d->nifp, ri); if (!nm_ring_empty(ring)) { u_int i = ring->cur; u_int idx = ring->slot[i].buf_idx; u_char *buf = (u_char *)NETMAP_BUF(ring, idx); // __builtin_prefetch(buf); hdr->ts = ring->ts; hdr->len = hdr->caplen = ring->slot[i].len; ring->cur = nm_ring_next(ring, i); /* we could postpone advancing head if we want * to hold the buffer. This can be supported in * the future. */ ring->head = ring->cur; d->cur_rx_ring = ri; return buf; } ri++; if (ri > d->last_rx_ring) ri = d->first_rx_ring; } while (ri != d->cur_rx_ring); return NULL; /* nothing found */ } #endif /* !HAVE_NETMAP_WITH_LIBS */ #endif /* NETMAP_WITH_LIBS */ #endif /* _NET_NETMAP_USER_H_ */
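A minimal sender sketched against the API above (the interface name
"netmap:em0" is a placeholder and error handling is mostly omitted):

	#define NETMAP_WITH_LIBS
	#include <net/netmap_user.h>

	int
	main(void)
	{
		struct nm_desc *d;
		char frame[60] = { 0 };		/* dummy minimum-size frame */
		int i;

		d = nm_open("netmap:em0", NULL, 0, NULL);
		if (d == NULL)
			return (1);
		for (i = 0; i < 1000; i++) {
			/* nm_inject() returns 0 when the TX rings are full */
			while (nm_inject(d, frame, sizeof(frame)) == 0)
				ioctl(NETMAP_FD(d), NIOCTXSYNC, NULL);
		}
		/* flush whatever is still pending before closing */
		while (nm_tx_pending(NETMAP_TXRING(d->nifp, d->first_tx_ring)))
			ioctl(NETMAP_FD(d), NIOCTXSYNC, NULL);
		nm_close(d);
		return (0);
	}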