FreeBSD review D16429 — D16429.id46521.diff (212 KB)
Index: sys/amd64/conf/GENERIC
===================================================================
--- sys/amd64/conf/GENERIC
+++ sys/amd64/conf/GENERIC
@@ -242,7 +242,7 @@
device ixv # Intel PRO/10GbE PCIE VF Ethernet
device ixl # Intel XL710 40Gbe PCIE Ethernet
#options IXL_IW # Enable iWARP Client Interface in ixl(4)
-#device ixlv # Intel XL710 40Gbe VF PCIE Ethernet
+device ixlv # Intel XL710 40Gbe VF PCIE Ethernet
device le # AMD Am7900 LANCE and Am79C9xx PCnet
device ti # Alteon Networks Tigon I/II gigabit Ethernet
device txp # 3Com 3cR990 (``Typhoon'')
Index: sys/amd64/conf/NOTES
===================================================================
--- sys/amd64/conf/NOTES
+++ sys/amd64/conf/NOTES
@@ -334,7 +334,7 @@
device iwn # Intel 4965/1000/5000/6000 wireless NICs.
device ixl # Intel XL710 40Gbe PCIE Ethernet
#options IXL_IW # Enable iWARP Client Interface in ixl(4)
-#device ixlv # Intel XL710 40Gbe VF PCIE Ethernet
+device ixlv # Intel XL710 40Gbe VF PCIE Ethernet
device mthca # Mellanox HCA InfiniBand
device mlx4 # Shared code module between IB and Ethernet
device mlx4ib # Mellanox ConnectX HCA InfiniBand
Index: sys/conf/files.amd64
===================================================================
--- sys/conf/files.amd64
+++ sys/conf/files.amd64
@@ -272,10 +272,10 @@
compile-with "${NORMAL_C} -I$S/dev/ixl"
#dev/ixl/ixl_iw.c optional ixl pci \
# compile-with "${NORMAL_C} -I$S/dev/ixl"
-#dev/ixl/if_ixlv.c optional ixlv pci \
-# compile-with "${NORMAL_C} -I$S/dev/ixl"
-#dev/ixl/ixlvc.c optional ixlv pci \
-# compile-with "${NORMAL_C} -I$S/dev/ixl"
+dev/ixl/if_ixlv.c optional ixlv pci \
+ compile-with "${NORMAL_C} -I$S/dev/ixl"
+dev/ixl/ixlvc.c optional ixlv pci \
+ compile-with "${NORMAL_C} -I$S/dev/ixl"
dev/ixl/ixl_txrx.c optional ixl pci | ixlv pci \
compile-with "${NORMAL_C} -I$S/dev/ixl"
dev/ixl/i40e_osdep.c optional ixl pci | ixlv pci \
Index: sys/dev/ixl/README
===================================================================
--- sys/dev/ixl/README
+++ /dev/null
@@ -1,410 +0,0 @@
- ixl FreeBSD* Base Driver and ixlv VF Driver for the
- Intel XL710 Ethernet Controller Family
-
-/*$FreeBSD$*/
-================================================================
-
-August 26, 2014
-
-
-Contents
-========
-
-- Overview
-- Supported Adapters
-- The VF Driver
-- Building and Installation
-- Additional Configurations
-- Known Limitations
-
-
-Overview
-========
-
-This file describes the IXL FreeBSD* Base driver and the IXLV VF Driver
-for the XL710 Ethernet Family of Adapters. The Driver has been developed
-for use with FreeBSD 10.0 or later, but should be compatible with any
-supported release.
-
-For questions related to hardware requirements, refer to the documentation
-supplied with your Intel XL710 adapter. All hardware requirements listed
-apply for use with FreeBSD.
-
-
-Supported Adapters
-==================
-
-The drivers in this release are compatible with XL710 and X710-based
-Intel Ethernet Network Connections.
-
-
-SFP+ Devices with Pluggable Optics
-----------------------------------
-
-SR Modules
-----------
- Intel DUAL RATE 1G/10G SFP+ SR (bailed) FTLX8571D3BCV-IT
- Intel DUAL RATE 1G/10G SFP+ SR (bailed) AFBR-703SDZ-IN2
-
-LR Modules
-----------
- Intel DUAL RATE 1G/10G SFP+ LR (bailed) FTLX1471D3BCV-IT
- Intel DUAL RATE 1G/10G SFP+ LR (bailed) AFCT-701SDZ-IN2
-
-QSFP+ Modules
--------------
- Intel TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed) E40GQSFPSR
- Intel TRIPLE RATE 1G/10G/40G QSFP+ LR (bailed) E40GQSFPLR
- QSFP+ 1G speed is not supported on XL710 based devices.
-
-X710/XL710 Based SFP+ adapters support all passive and active limiting direct
-attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
-
-The VF Driver
-==================
-The VF driver is normally used in a virtualized environment where a host
-driver manages SRIOV, and provides a VF device to the guest. With this
-first release the only host environment tested was using Linux QEMU/KVM.
-Support is planned for Xen and VMWare hosts at a later time.
-
-In the FreeBSD guest the IXLV driver would be loaded and will function
-using the VF device assigned to it.
-
-The VF driver provides most of the same functionality as the CORE driver,
-but is actually a slave to the Host, access to many controls are actually
-accomplished by a request to the Host via what is called the "Admin queue".
-These are startup and initialization events however, once in operation
-the device is self-contained and should achieve near native performance.
-
-Some notable limitations of the VF environment: for security reasons
-the driver is never permitted to be promiscuous, therefore a tcpdump
-will not behave the same with the interface. Second, media info is not
-available from the PF, so it will always appear as auto.
-
-Tarball Building and Installation
-=========================
-
-NOTE: You must have kernel sources installed to compile the driver tarball.
-
-These instructions assume a standalone driver tarball, building the driver
-already in the kernel source is simply a matter of adding the device entry
-to the kernel config file, or building in the ixl or ixlv module directory.
-
-In the instructions below, x.x.x is the driver version
-as indicated in the name of the driver tarball. The example is
-for ixl, the same procedure applies for ixlv.
-
-1. Move the base driver tar file to the directory of your choice.
- For example, use /home/username/ixl or /usr/local/src/ixl.
-
-2. Untar/unzip the archive:
- tar xfz ixl-x.x.x.tar.gz
-
-3. To install man page:
- cd ixl-x.x.x
- gzip -c ixl.4 > /usr/share/man/man4/ixl.4.gz
-
-4. To load the driver onto a running system:
- cd ixl-x.x.x/src
- make load
-
-5. To assign an IP address to the interface, enter the following:
- ifconfig ixl<interface_num> <IP_address>
-
-6. Verify that the interface works. Enter the following, where <IP_address>
- is the IP address for another machine on the same subnet as the interface
- that is being tested:
-
- ping <IP_address>
-
-7. If you want the driver to load automatically when the system is booted:
-
- cd ixl-x.x.x/src
- make
- make install
-
- Edit /boot/loader.conf, and add the following line:
- if_ixl_load="YES"
-
- Edit /etc/rc.conf, and create the appropriate
- ifconfig_ixl<interface_num> entry:
-
- ifconfig_ixl<interface_num>="<ifconfig_settings>"
-
- Example usage:
-
- ifconfig_ixl0="inet 192.168.10.1 netmask 255.255.255.0"
-
- NOTE: For assistance, see the ifconfig man page.
-
-
-
-Configuration and Tuning
-=========================
-
-Both drivers supports Transmit/Receive Checksum Offload for IPv4 and IPv6,
-TSO forIPv4 and IPv6, LRO, and Jumbo Frames on all 40 Gigabit adapters.
-
- Jumbo Frames
- ------------
- To enable Jumbo Frames, use the ifconfig utility to increase
- the MTU beyond 1500 bytes.
-
- - The Jumbo Frames setting on the switch must be set to at least
- 22 byteslarger than that of the adapter.
-
- - The maximum MTU setting for Jumbo Frames is 9706. This value
- coincides with the maximum jumbo frames size of 9728.
- To modify the setting, enter the following:
-
- ifconfig ixl<interface_num> <hostname or IP address> mtu 9000
-
- - To confirm an interface's MTU value, use the ifconfig command.
- To confirm the MTU used between two specific devices, use:
-
- route get <destination_IP_address>
-
- VLANs
- -----
- To create a new VLAN pseudo-interface:
-
- ifconfig <vlan_name> create
-
- To associate the VLAN pseudo-interface with a physical interface
- and assign a VLAN ID, IP address, and netmask:
-
- ifconfig <vlan_name> <ip_address> netmask <subnet_mask> vlan
- <vlan_id> vlandev <physical_interface>
-
- Example:
-
- ifconfig vlan10 10.0.0.1 netmask 255.255.255.0 vlan 10 vlandev ixl0
-
- In this example, all packets will be marked on egress with
- 802.1Q VLAN tags, specifying a VLAN ID of 10.
-
- To remove a VLAN pseudo-interface:
-
- ifconfig <vlan_name> destroy
-
-
- Checksum Offload
- ----------------
-
- Checksum offloading supports IPv4 and IPv6 with TCP and UDP packets
- and is supported for both transmit and receive. Checksum offloading
- for transmit and recieve is enabled by default for both IPv4 and IPv6.
-
- Checksum offloading can be enabled or disabled using ifconfig.
- Transmit and receive offloading for IPv4 and Ipv6 are enabled
- and disabled seperately.
-
- NOTE: TSO requires Tx checksum, so when Tx checksum
- is disabled, TSO will also be disabled.
-
- To enable Tx checksum offloading for ipv4:
-
- ifconfig ixl<interface_num> txcsum4
-
- To disable Tx checksum offloading for ipv4:
-
- ifconfig ixl<interface_num> -txcsum4
- (NOTE: This will disable TSO4)
-
- To enable Rx checksum offloading for ipv6:
-
- ifconfig ixl<interface_num> rxcsum6
-
- To disable Rx checksum offloading for ipv6:
-
- ifconfig ixl<interface_num> -rxcsum6
- (NOTE: This will disable TSO6)
-
-
- To confirm the current settings:
-
- ifconfig ixl<interface_num>
-
-
- TSO
- ---
-
- TSO supports both IPv4 and IPv6 and is enabled by default. TSO can
- be disabled and enabled using the ifconfig utility.
-
- NOTE: TSO requires Tx checksum, so when Tx checksum is
- disabled, TSO will also be disabled.
-
- To disable TSO IPv4:
-
- ifconfig ixl<interface_num> -tso4
-
- To enable TSO IPv4:
-
- ifconfig ixl<interface_num> tso4
-
- To disable TSO IPv6:
-
- ifconfig ixl<interface_num> -tso6
-
- To enable TSO IPv6:
-
- ifconfig ixl<interface_num> tso6
-
- To disable BOTH TSO IPv4 and IPv6:
-
- ifconfig ixl<interface_num> -tso
-
- To enable BOTH TSO IPv4 and IPv6:
-
- ifconfig ixl<interface_num> tso
-
-
- LRO
- ---
-
- Large Receive Offload is enabled by default. It can be enabled
- or disabled by using the ifconfig utility.
-
- NOTE: LRO should be disabled when forwarding packets.
-
- To disable LRO:
-
- ifconfig ixl<interface_num> -lro
-
- To enable LRO:
-
- ifconfig ixl<interface_num> lro
-
-
-Flow Control (IXL only)
-------------
-Flow control is disabled by default. To change flow control settings use sysctl.
-
-To enable flow control to Rx pause frames:
-
- sysctl dev.ixl.<interface_num>.fc=1
-
-To enable flow control to Tx pause frames:
-
- sysctl dev.ixl.<interface_num>.fc=2
-
-To enable flow control to Rx and Tx pause frames:
-
- sysctl dev.ixl.<interface_num>.fc=3
-
-To disable flow control:
-
- sysctl dev.ixl.<interface_num>.fc=0
-
-
-NOTE: You must have a flow control capable link partner.
-
-NOTE: The VF driver does not have access to flow control, it must be
- managed from the host side.
-
-
- Important system configuration changes:
- =======================================
-
--Change the file /etc/sysctl.conf, and add the line:
-
- hw.intr_storm_threshold: 0 (the default is 1000)
-
--Best throughput results are seen with a large MTU; use 9706 if possible.
-
--The default number of descriptors per ring is 1024, increasing this may
-improve performance depending on the use case.
-
--The VF driver uses a relatively large buf ring, this was found to eliminate
- UDP transmit errors, it is a tuneable, and if no UDP traffic is used it can
- be reduced. It is memory used per queue.
-
-
-Known Limitations
-=================
-
-Network Memory Buffer allocation
---------------------------------
- FreeBSD may have a low number of network memory buffers (mbufs) by default.
-If your mbuf value is too low, it may cause the driver to fail to initialize
-and/or cause the system to become unresponsive. You can check to see if the
-system is mbuf-starved by running 'netstat -m'. Increase the number of mbufs
-by editing the lines below in /etc/sysctl.conf:
-
- kern.ipc.nmbclusters
- kern.ipc.nmbjumbop
- kern.ipc.nmbjumbo9
- kern.ipc.nmbjumbo16
- kern.ipc.nmbufs
-
-The amount of memory that you allocate is system specific, and may
-require some trial and error.
-
-Also, increasing the follwing in /etc/sysctl.conf could help increase
-network performance:
-
- kern.ipc.maxsockbuf
- net.inet.tcp.sendspace
- net.inet.tcp.recvspace
- net.inet.udp.maxdgram
- net.inet.udp.recvspace
-
-
-UDP Stress Test Dropped Packet Issue
-------------------------------------
-Under small packet UDP stress test with the ixl driver, the FreeBSD system
-may drop UDP packets due to the fullness of socket buffers. You may want to
-change the driver's Flow Control variables to the minimum value for
-controlling packet reception.
-
-
-Disable LRO when routing/bridging
----------------------------------
-LRO must be turned off when forwarding traffic.
-
-
-Lower than expected performance
--------------------------------
-Some PCIe x8 slots are actually configured as x4 slots. These slots have
-insufficient bandwidth for full line rate with dual port and quad port
-devices.
-
-In addition, if you put a PCIe Generation 3-capable adapter into a PCIe
-Generation 2 slot, you cannot get full bandwidth. The driver detects this
-situation and writes the following message in the system log:
-
- "PCI-Express bandwidth available for this card is not sufficient for
- optimal performance. For optimal performance a x8 PCI-Express slot
- is required."
-
-If this error occurs, moving your adapter to a true PCIe Generation 3 x8
-slot will resolve the issue.
-
-
-Support
-=======
-
-For general information and support, go to the Intel support website at:
-
- http://support.intel.com
-
-If an issue is identified with the released source code on the supported kernel
-with a supported adapter, email the specific information related to the issue
-to freebsdnic@mailbox.intel.com.
-
-
-License
-=======
-
-This software program is released under the terms of a license agreement
-between you ('Licensee') and Intel. Do not use or load this software or any
-associated materials (collectively, the 'Software') until you have carefully
-read the full terms and conditions of the LICENSE located in this software
-package. By loadingor using the Software, you agree to the terms of this
-Agreement. If you do not agree with the terms of this Agreement, do not
-install or use the Software.
-
-* Other names and brands may be claimed as the property of others.
-
-
Index: sys/dev/ixl/i40e_osdep.c
===================================================================
--- sys/dev/ixl/i40e_osdep.c
+++ sys/dev/ixl/i40e_osdep.c
@@ -161,27 +161,25 @@
mtx_destroy(&lock->mutex);
}
+static inline int
+ixl_ms_scale(int x)
+{
+ if (hz == 1000)
+ return (x);
+ else if (hz > 1000)
+ return (x*(hz/1000));
+ else
+ return (max(1, x/(1000/hz)));
+}
+
void
i40e_msec_pause(int msecs)
{
- int ticks_to_pause = (msecs * hz) / 1000;
- int start_ticks = ticks;
-
- if (cold || SCHEDULER_STOPPED()) {
+ if (cold || SCHEDULER_STOPPED())
i40e_msec_delay(msecs);
- return;
- }
-
- while (1) {
- kern_yield(PRI_USER);
- int yielded_ticks = ticks - start_ticks;
- if (yielded_ticks > ticks_to_pause)
- break;
- else if (yielded_ticks < 0
- && (yielded_ticks + INT_MAX + 1 > ticks_to_pause)) {
- break;
- }
- }
+ else
+ // ERJ: (msecs * hz) could overflow
+ pause("ixl", ixl_ms_scale(msecs));
}
/*
@@ -272,7 +270,5 @@
{
pci_write_config(((struct i40e_osdep *)hw->back)->dev,
reg, value, 2);
-
- return;
}
Index: sys/dev/ixl/if_ixl.c
===================================================================
--- sys/dev/ixl/if_ixl.c
+++ sys/dev/ixl/if_ixl.c
@@ -115,10 +115,11 @@
static void ixl_if_vlan_register(if_ctx_t ctx, u16 vtag);
static void ixl_if_vlan_unregister(if_ctx_t ctx, u16 vtag);
static uint64_t ixl_if_get_counter(if_ctx_t ctx, ift_counter cnt);
-static void ixl_if_vflr_handle(if_ctx_t ctx);
-// static void ixl_if_link_intr_enable(if_ctx_t ctx);
static int ixl_if_i2c_req(if_ctx_t ctx, struct ifi2creq *req);
static int ixl_if_priv_ioctl(if_ctx_t ctx, u_long command, caddr_t data);
+#ifdef PCI_IOV
+static void ixl_if_vflr_handle(if_ctx_t ctx);
+#endif
/*** Other ***/
static int ixl_mc_filter_apply(void *arg, struct ifmultiaddr *ifma, int);
@@ -137,9 +138,9 @@
DEVMETHOD(device_detach, iflib_device_detach),
DEVMETHOD(device_shutdown, iflib_device_shutdown),
#ifdef PCI_IOV
- DEVMETHOD(pci_iov_init, ixl_iov_init),
- DEVMETHOD(pci_iov_uninit, ixl_iov_uninit),
- DEVMETHOD(pci_iov_add_vf, ixl_add_vf),
+ DEVMETHOD(pci_iov_init, iflib_device_iov_init),
+ DEVMETHOD(pci_iov_uninit, iflib_device_iov_uninit),
+ DEVMETHOD(pci_iov_add_vf, iflib_device_iov_add_vf),
#endif
DEVMETHOD_END
};
@@ -168,7 +169,6 @@
DEVMETHOD(ifdi_msix_intr_assign, ixl_if_msix_intr_assign),
DEVMETHOD(ifdi_intr_enable, ixl_if_enable_intr),
DEVMETHOD(ifdi_intr_disable, ixl_if_disable_intr),
- //DEVMETHOD(ifdi_link_intr_enable, ixl_if_link_intr_enable),
DEVMETHOD(ifdi_rx_queue_intr_enable, ixl_if_rx_queue_intr_enable),
DEVMETHOD(ifdi_tx_queue_intr_enable, ixl_if_tx_queue_intr_enable),
DEVMETHOD(ifdi_tx_queues_alloc, ixl_if_tx_queues_alloc),
@@ -184,9 +184,14 @@
DEVMETHOD(ifdi_vlan_register, ixl_if_vlan_register),
DEVMETHOD(ifdi_vlan_unregister, ixl_if_vlan_unregister),
DEVMETHOD(ifdi_get_counter, ixl_if_get_counter),
- DEVMETHOD(ifdi_vflr_handle, ixl_if_vflr_handle),
DEVMETHOD(ifdi_i2c_req, ixl_if_i2c_req),
DEVMETHOD(ifdi_priv_ioctl, ixl_if_priv_ioctl),
+#ifdef PCI_IOV
+ DEVMETHOD(ifdi_iov_init, ixl_if_iov_init),
+ DEVMETHOD(ifdi_iov_uninit, ixl_if_iov_uninit),
+ DEVMETHOD(ifdi_iov_vf_add, ixl_if_iov_vf_add),
+ DEVMETHOD(ifdi_vflr_handle, ixl_if_vflr_handle),
+#endif
// ifdi_led_func
// ifdi_debug
DEVMETHOD_END
@@ -201,7 +206,7 @@
*/
static SYSCTL_NODE(_hw, OID_AUTO, ixl, CTLFLAG_RD, 0,
- "IXL driver parameters");
+ "ixl driver parameters");
/*
* Leave this on unless you need to send flow control
@@ -221,6 +226,13 @@
&ixl_i2c_access_method, 0,
IXL_SYSCTL_HELP_I2C_METHOD);
+static int ixl_enable_vf_loopback = 1;
+TUNABLE_INT("hw.ixl.enable_vf_loopback",
+ &ixl_enable_vf_loopback);
+SYSCTL_INT(_hw_ixl, OID_AUTO, enable_vf_loopback, CTLFLAG_RDTUN,
+ &ixl_enable_vf_loopback, 0,
+ IXL_SYSCTL_HELP_VF_LOOPBACK);
+
/*
* Different method for processing TX descriptor
* completion.
@@ -332,9 +344,9 @@
static int
ixl_allocate_pci_resources(struct ixl_pf *pf)
{
- int rid;
- struct i40e_hw *hw = &pf->hw;
device_t dev = iflib_get_dev(pf->vsi.ctx);
+ struct i40e_hw *hw = &pf->hw;
+ int rid;
/* Map BAR0 */
rid = PCIR_BAR(0);
@@ -385,21 +397,17 @@
enum i40e_status_code status;
int error = 0;
- INIT_DEBUGOUT("ixl_if_attach_pre: begin");
+ INIT_DBG_DEV(dev, "begin");
- /* Allocate, clear, and link in our primary soft structure */
dev = iflib_get_dev(ctx);
pf = iflib_get_softc(ctx);
+
vsi = &pf->vsi;
vsi->back = pf;
pf->dev = dev;
hw = &pf->hw;
- /*
- ** Note this assumes we have a single embedded VSI,
- ** this could be enhanced later to allocate multiple
- */
- //vsi->dev = pf->dev;
+ vsi->dev = dev;
vsi->hw = &pf->hw;
vsi->id = 0;
vsi->num_vlans = 0;
@@ -544,6 +552,7 @@
* sizeof(struct i40e_tx_desc), DBA_ALIGN);
scctx->isc_txrx = &ixl_txrx_dwb;
}
+ scctx->isc_txrx->ift_legacy_intr = ixl_intr;
scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0]
* sizeof(union i40e_32byte_rx_desc), DBA_ALIGN);
scctx->isc_msix_bar = PCIR_BAR(IXL_MSIX_BAR);
@@ -555,7 +564,7 @@
scctx->isc_tx_csum_flags = CSUM_OFFLOAD;
scctx->isc_capabilities = scctx->isc_capenable = IXL_CAPS;
- INIT_DEBUGOUT("ixl_if_attach_pre: end");
+ INIT_DBG_DEV(dev, "end");
return (0);
err_mac_hmc:
@@ -578,7 +587,7 @@
int error = 0;
enum i40e_status_code status;
- INIT_DEBUGOUT("ixl_if_attach_post: begin");
+ INIT_DBG_DEV(dev, "begin");
dev = iflib_get_dev(ctx);
pf = iflib_get_softc(ctx);
@@ -586,6 +595,10 @@
vsi->ifp = iflib_get_ifp(ctx);
hw = &pf->hw;
+ /* Save off determined number of queues for interface */
+ vsi->num_rx_queues = vsi->shared->isc_nrxqsets;
+ vsi->num_tx_queues = vsi->shared->isc_ntxqsets;
+
/* Setup OS network interface / ifnet */
if (ixl_setup_interface(dev, pf)) {
device_printf(dev, "interface setup failed!\n");
@@ -693,6 +706,10 @@
return (error);
}
+/**
+ * XXX: iflib always ignores the return value of detach()
+ * -> This means that this isn't allowed to fail
+ */
static int
ixl_if_detach(if_ctx_t ctx)
{
@@ -701,7 +718,7 @@
struct i40e_hw *hw = &pf->hw;
device_t dev = pf->dev;
enum i40e_status_code status;
-#if defined(PCI_IOV) || defined(IXL_IW)
+#ifdef IXL_IW
int error;
#endif
@@ -712,16 +729,9 @@
error = ixl_iw_pf_detach(pf);
if (error == EBUSY) {
device_printf(dev, "iwarp in use; stop it first.\n");
- return (error);
+ //return (error);
}
}
-#endif
-#ifdef PCI_IOV
- error = pci_iov_detach(dev);
- if (error != 0) {
- device_printf(dev, "SR-IOV in use; detach first.\n");
- return (error);
- }
#endif
/* Remove all previously allocated media types */
ifmedia_removeall(vsi->media);
@@ -750,7 +760,6 @@
return (0);
}
-/* TODO: Do shutdown-specific stuff here */
static int
ixl_if_shutdown(if_ctx_t ctx)
{
@@ -795,37 +804,6 @@
return (0);
}
-/* Set Report Status queue fields to 0 */
-static void
-ixl_init_tx_rsqs(struct ixl_vsi *vsi)
-{
- if_softc_ctx_t scctx = vsi->shared;
- struct ixl_tx_queue *tx_que;
- int i, j;
-
- for (i = 0, tx_que = vsi->tx_queues; i < vsi->num_tx_queues; i++, tx_que++) {
- struct tx_ring *txr = &tx_que->txr;
-
- txr->tx_rs_cidx = txr->tx_rs_pidx = txr->tx_cidx_processed = 0;
-
- for (j = 0; j < scctx->isc_ntxd[0]; j++)
- txr->tx_rsq[j] = QIDX_INVALID;
- }
-}
-
-static void
-ixl_init_tx_cidx(struct ixl_vsi *vsi)
-{
- struct ixl_tx_queue *tx_que;
- int i;
-
- for (i = 0, tx_que = vsi->tx_queues; i < vsi->num_tx_queues; i++, tx_que++) {
- struct tx_ring *txr = &tx_que->txr;
-
- txr->tx_cidx_processed = 0;
- }
-}
-
void
ixl_if_init(if_ctx_t ctx)
{
@@ -871,8 +849,6 @@
return;
}
- // TODO: Call iflib setup multicast filters here?
- // It's called in ixgbe in D5213
ixl_if_multi_set(ctx);
/* Set up RSS */
@@ -922,7 +898,7 @@
#endif
ixl_disable_rings_intr(vsi);
- ixl_disable_rings(vsi);
+ ixl_disable_rings(pf, vsi, &pf->qtag);
}
static int
@@ -935,6 +911,9 @@
int err, i, rid, vector = 0;
char buf[16];
+ MPASS(vsi->shared->isc_nrxqsets > 0);
+ MPASS(vsi->shared->isc_ntxqsets > 0);
+
/* Admin Que must use vector 0*/
rid = vector + 1;
err = iflib_irq_alloc_generic(ctx, &vsi->irq, rid, IFLIB_INTR_ADMIN,
@@ -942,14 +921,14 @@
if (err) {
iflib_irq_free(ctx, &vsi->irq);
device_printf(iflib_get_dev(ctx),
- "Failed to register Admin que handler");
+ "Failed to register Admin Que handler");
return (err);
}
- // TODO: Re-enable this at some point
- // iflib_softirq_alloc_generic(ctx, rid, IFLIB_INTR_IOV, pf, 0, "ixl_iov");
+ /* Create soft IRQ for handling VFLRs */
+ iflib_softirq_alloc_generic(ctx, &pf->iov_irq, IFLIB_INTR_IOV, pf, 0, "iov");
/* Now set up the stations */
- for (i = 0, vector = 1; i < vsi->num_rx_queues; i++, vector++, rx_que++) {
+ for (i = 0, vector = 1; i < vsi->shared->isc_nrxqsets; i++, vector++, rx_que++) {
rid = vector + 1;
snprintf(buf, sizeof(buf), "rxq%d", i);
@@ -959,7 +938,7 @@
* what's expected in the iflib context? */
if (err) {
device_printf(iflib_get_dev(ctx),
- "Failed to allocate q int %d err: %d", i, err);
+ "Failed to allocate queue RX int vector %d, err: %d\n", i, err);
vsi->num_rx_queues = i + 1;
goto fail;
}
@@ -968,16 +947,16 @@
bzero(buf, sizeof(buf));
- for (i = 0; i < vsi->num_tx_queues; i++, tx_que++) {
+ for (i = 0; i < vsi->shared->isc_ntxqsets; i++, tx_que++) {
snprintf(buf, sizeof(buf), "txq%d", i);
iflib_softirq_alloc_generic(ctx,
- &vsi->rx_queues[i % vsi->num_rx_queues].que_irq,
+ &vsi->rx_queues[i % vsi->shared->isc_nrxqsets].que_irq,
IFLIB_INTR_TX, tx_que, tx_que->txr.me, buf);
/* TODO: Maybe call a strategy function for this to figure out which
* interrupts to map Tx queues to. I don't know if there's an immediately
* better way than this other than a user-supplied map, though. */
- tx_que->msix = (i % vsi->num_rx_queues) + 1;
+ tx_que->msix = (i % vsi->shared->isc_nrxqsets) + 1;
}
return (0);
@@ -1050,11 +1029,10 @@
{
struct ixl_pf *pf = iflib_get_softc(ctx);
struct ixl_vsi *vsi = &pf->vsi;
- struct i40e_hw *hw = vsi->hw;
- struct ixl_tx_queue *tx_que = &vsi->tx_queues[txqid];
+ struct i40e_hw *hw = vsi->hw;
+ struct ixl_tx_queue *tx_que = &vsi->tx_queues[txqid];
ixl_enable_queue(hw, tx_que->msix - 1);
-
return (0);
}
@@ -1065,12 +1043,11 @@
struct ixl_vsi *vsi = &pf->vsi;
if_softc_ctx_t scctx = vsi->shared;
struct ixl_tx_queue *que;
- // int i;
int i, j, error = 0;
- MPASS(vsi->num_tx_queues > 0);
+ MPASS(scctx->isc_ntxqsets > 0);
MPASS(ntxqs == 1);
- MPASS(vsi->num_tx_queues == ntxqsets);
+ MPASS(scctx->isc_ntxqsets == ntxqsets);
/* Allocate queue structure memory */
if (!(vsi->tx_queues =
@@ -1117,9 +1094,12 @@
struct ixl_rx_queue *que;
int i, error = 0;
- MPASS(vsi->num_rx_queues > 0);
+#ifdef INVARIANTS
+ if_softc_ctx_t scctx = vsi->shared;
+ MPASS(scctx->isc_nrxqsets > 0);
MPASS(nrxqs == 1);
- MPASS(vsi->num_rx_queues == nrxqsets);
+ MPASS(scctx->isc_nrxqsets == nrxqsets);
+#endif
/* Allocate queue structure memory */
if (!(vsi->rx_queues =
@@ -1277,13 +1257,9 @@
if (pf->state & IXL_PF_STATE_MDD_PENDING)
ixl_handle_mdd_event(pf);
-#ifdef PCI_IOV
- if (pf->state & IXL_PF_STATE_VF_RESET_REQ)
- iflib_iov_intr_deferred(ctx);
-#endif
-
ixl_process_adminq(pf, &pending);
ixl_update_link_status(pf);
+ ixl_update_stats_counters(pf);
/*
* If there are still messages to process, reschedule ourselves.
@@ -1517,32 +1493,11 @@
static void
ixl_if_timer(if_ctx_t ctx, uint16_t qid)
{
- struct ixl_pf *pf = iflib_get_softc(ctx);
- //struct i40e_hw *hw = &pf->hw;
- //struct ixl_tx_queue *que = &vsi->tx_queues[qid];
- #if 0
- u32 mask;
-
- /*
- ** Check status of the queues
- */
- mask = (I40E_PFINT_DYN_CTLN_INTENA_MASK |
- I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK);
-
- /* If queue param has outstanding work, trigger sw irq */
- // TODO: TX queues in iflib don't use HW interrupts; does this do anything?
- if (que->busy)
- wr32(hw, I40E_PFINT_DYN_CTLN(que->txr.me), mask);
-#endif
-
if (qid != 0)
return;
/* Fire off the adminq task */
iflib_admin_intr_deferred(ctx);
-
- /* Update stats */
- ixl_update_stats_counters(pf);
}
static void
@@ -1611,13 +1566,15 @@
}
}
+#ifdef PCI_IOV
static void
ixl_if_vflr_handle(if_ctx_t ctx)
{
- IXL_DEV_ERR(iflib_get_dev(ctx), "");
+ struct ixl_pf *pf = iflib_get_softc(ctx);
- // TODO: call ixl_handle_vflr()
+ ixl_handle_vflr(pf);
}
+#endif
static int
ixl_if_i2c_req(if_ctx_t ctx, struct ifi2creq *req)
@@ -1675,6 +1632,7 @@
pf->dbg_mask = ixl_core_debug_mask;
pf->hw.debug_mask = ixl_shared_debug_mask;
pf->vsi.enable_head_writeback = !!(ixl_enable_head_writeback);
+ pf->enable_vf_loopback = !!(ixl_enable_vf_loopback);
#if 0
pf->dynamic_rx_itr = ixl_dynamic_rx_itr;
pf->dynamic_tx_itr = ixl_dynamic_tx_itr;
Index: sys/dev/ixl/if_ixlv.c
===================================================================
--- sys/dev/ixl/if_ixlv.c
+++ sys/dev/ixl/if_ixlv.c
@@ -32,7 +32,6 @@
******************************************************************************/
/*$FreeBSD$*/
-#include "ixl.h"
#include "ixlv.h"
/*********************************************************************
@@ -42,9 +41,10 @@
#define IXLV_DRIVER_VERSION_MINOR 5
#define IXLV_DRIVER_VERSION_BUILD 4
-char ixlv_driver_version[] = __XSTRING(IXLV_DRIVER_VERSION_MAJOR) "."
- __XSTRING(IXLV_DRIVER_VERSION_MINOR) "."
- __XSTRING(IXLV_DRIVER_VERSION_BUILD) "-iflib-k";
+#define IXLV_DRIVER_VERSION_STRING \
+ __XSTRING(IXLV_DRIVER_VERSION_MAJOR) "." \
+ __XSTRING(IXLV_DRIVER_VERSION_MINOR) "." \
+ __XSTRING(IXLV_DRIVER_VERSION_BUILD) "-iflib-k"
/*********************************************************************
* PCI Device ID Table
@@ -56,9 +56,9 @@
static pci_vendor_info_t ixlv_vendor_info_array[] =
{
- {I40E_INTEL_VENDOR_ID, I40E_DEV_ID_VF, 0, 0, 0},
- {I40E_INTEL_VENDOR_ID, I40E_DEV_ID_X722_VF, 0, 0, 0},
- {I40E_INTEL_VENDOR_ID, I40E_DEV_ID_ADAPTIVE_VF, 0, 0, 0},
+ PVID(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_VF, "Intel(R) Ethernet Virtual Function 700 Series"),
+ PVID(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_X722_VF, "Intel(R) Ethernet Virtual Function 700 Series (X722)"),
+ PVID(I40E_INTEL_VENDOR_ID, I40E_DEV_ID_ADAPTIVE_VF, "Intel(R) Ethernet Adaptive Virtual Function"),
/* required last entry */
PVID_END
};
@@ -66,7 +66,7 @@
/*********************************************************************
* Function prototypes
*********************************************************************/
-static void *ixlv_register(device_t dev);
+static void *ixlv_register(device_t dev);
static int ixlv_if_attach_pre(if_ctx_t ctx);
static int ixlv_if_attach_post(if_ctx_t ctx);
static int ixlv_if_detach(if_ctx_t ctx);
@@ -76,7 +76,8 @@
static int ixlv_if_msix_intr_assign(if_ctx_t ctx, int msix);
static void ixlv_if_enable_intr(if_ctx_t ctx);
static void ixlv_if_disable_intr(if_ctx_t ctx);
-static int ixlv_if_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid);
+static int ixlv_if_rx_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid);
+static int ixlv_if_tx_queue_intr_enable(if_ctx_t ctx, uint16_t txqid);
static int ixlv_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxqs, int ntxqsets);
static int ixlv_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nqs, int nqsets);
static void ixlv_if_queues_free(if_ctx_t ctx);
@@ -100,8 +101,8 @@
static void ixlv_init_filters(struct ixlv_sc *);
static void ixlv_free_pci_resources(struct ixlv_sc *);
static void ixlv_free_filters(struct ixlv_sc *);
-static void ixlv_setup_interface(device_t, struct ixl_vsi *);
-static void ixlv_add_sysctls(struct ixlv_sc *);
+static void ixlv_setup_interface(device_t, struct ixlv_sc *);
+static void ixlv_add_device_sysctls(struct ixlv_sc *);
static void ixlv_enable_adminq_irq(struct i40e_hw *);
static void ixlv_disable_adminq_irq(struct i40e_hw *);
static void ixlv_enable_queue_irq(struct i40e_hw *, int);
@@ -113,21 +114,20 @@
static int ixlv_del_mac_filter(struct ixlv_sc *sc, u8 *macaddr);
static int ixlv_msix_que(void *);
static int ixlv_msix_adminq(void *);
-static void ixlv_do_adminq_locked(struct ixlv_sc *sc);
-static void ixl_init_cmd_complete(struct ixl_vc_cmd *, void *,
- enum i40e_status_code);
-static void ixlv_configure_itr(struct ixlv_sc *);
+//static void ixlv_del_multi(struct ixlv_sc *sc);
+static void ixlv_init_multi(struct ixlv_sc *sc);
+static void ixlv_configure_itr(struct ixlv_sc *sc);
-static void ixlv_setup_vlan_filters(struct ixlv_sc *);
+static int ixlv_sysctl_rx_itr(SYSCTL_HANDLER_ARGS);
+static int ixlv_sysctl_tx_itr(SYSCTL_HANDLER_ARGS);
+static int ixlv_sysctl_current_speed(SYSCTL_HANDLER_ARGS);
+static int ixlv_sysctl_sw_filter_list(SYSCTL_HANDLER_ARGS);
+static int ixlv_sysctl_queue_interrupt_table(SYSCTL_HANDLER_ARGS);
+static int ixlv_sysctl_vf_reset(SYSCTL_HANDLER_ARGS);
+static int ixlv_sysctl_vflr_reset(SYSCTL_HANDLER_ARGS);
-static char *ixlv_vc_speed_to_string(enum virtchnl_link_speed link_speed);
-static int ixlv_sysctl_current_speed(SYSCTL_HANDLER_ARGS);
-
-// static void ixlv_add_sysctls(struct ixlv_sc *);
-#ifdef IXL_DEBUG
-static int ixlv_sysctl_qtx_tail_handler(SYSCTL_HANDLER_ARGS);
-static int ixlv_sysctl_qrx_tail_handler(SYSCTL_HANDLER_ARGS);
-#endif
+char *ixlv_vc_speed_to_string(enum virtchnl_link_speed link_speed);
+static void ixlv_save_tunables(struct ixlv_sc *);
/*********************************************************************
* FreeBSD Device Interface Entry Points
@@ -149,6 +149,7 @@
devclass_t ixlv_devclass;
DRIVER_MODULE(ixlv, pci, ixlv_driver, ixlv_devclass, 0, 0);
+MODULE_VERSION(ixlv, 3);
MODULE_DEPEND(ixlv, pci, 1, 1, 1);
MODULE_DEPEND(ixlv, ether, 1, 1, 1);
@@ -166,14 +167,14 @@
DEVMETHOD(ifdi_msix_intr_assign, ixlv_if_msix_intr_assign),
DEVMETHOD(ifdi_intr_enable, ixlv_if_enable_intr),
DEVMETHOD(ifdi_intr_disable, ixlv_if_disable_intr),
- DEVMETHOD(ifdi_queue_intr_enable, ixlv_if_queue_intr_enable),
+ DEVMETHOD(ifdi_rx_queue_intr_enable, ixlv_if_rx_queue_intr_enable),
+ DEVMETHOD(ifdi_tx_queue_intr_enable, ixlv_if_tx_queue_intr_enable),
DEVMETHOD(ifdi_tx_queues_alloc, ixlv_if_tx_queues_alloc),
DEVMETHOD(ifdi_rx_queues_alloc, ixlv_if_rx_queues_alloc),
DEVMETHOD(ifdi_queues_free, ixlv_if_queues_free),
DEVMETHOD(ifdi_update_admin_status, ixlv_if_update_admin_status),
DEVMETHOD(ifdi_multi_set, ixlv_if_multi_set),
DEVMETHOD(ifdi_mtu_set, ixlv_if_mtu_set),
- // DEVMETHOD(ifdi_crcstrip_set, ixlv_if_crcstrip_set),
DEVMETHOD(ifdi_media_status, ixlv_if_media_status),
DEVMETHOD(ifdi_media_change, ixlv_if_media_change),
DEVMETHOD(ifdi_promisc_set, ixlv_if_promisc_set),
@@ -193,27 +194,7 @@
*/
static SYSCTL_NODE(_hw, OID_AUTO, ixlv, CTLFLAG_RD, 0,
- "IXLV driver parameters");
-
-/*
-** Number of descriptors per ring:
-** - TX and RX sizes are independently configurable
-*/
-static int ixlv_tx_ring_size = IXL_DEFAULT_RING;
-TUNABLE_INT("hw.ixlv.tx_ring_size", &ixlv_tx_ring_size);
-SYSCTL_INT(_hw_ixlv, OID_AUTO, tx_ring_size, CTLFLAG_RDTUN,
- &ixlv_tx_ring_size, 0, "TX Descriptor Ring Size");
-
-static int ixlv_rx_ring_size = IXL_DEFAULT_RING;
-TUNABLE_INT("hw.ixlv.rx_ring_size", &ixlv_rx_ring_size);
-SYSCTL_INT(_hw_ixlv, OID_AUTO, rx_ring_size, CTLFLAG_RDTUN,
- &ixlv_rx_ring_size, 0, "TX Descriptor Ring Size");
-
-/* Set to zero to auto calculate */
-int ixlv_max_queues = 0;
-TUNABLE_INT("hw.ixlv.max_queues", &ixlv_max_queues);
-SYSCTL_INT(_hw_ixlv, OID_AUTO, max_queues, CTLFLAG_RDTUN,
- &ixlv_max_queues, 0, "Number of Queues");
+ "ixlv driver parameters");
/*
* Different method for processing TX descriptor
@@ -226,6 +207,21 @@
&ixlv_enable_head_writeback, 0,
"For detecting last completed TX descriptor by hardware, use value written by HW instead of checking descriptors");
+static int ixlv_core_debug_mask = 0;
+TUNABLE_INT("hw.ixlv.core_debug_mask",
+ &ixlv_core_debug_mask);
+SYSCTL_INT(_hw_ixlv, OID_AUTO, core_debug_mask, CTLFLAG_RDTUN,
+ &ixlv_core_debug_mask, 0,
+ "Display debug statements that are printed in non-shared code");
+
+static int ixlv_shared_debug_mask = 0;
+TUNABLE_INT("hw.ixlv.shared_debug_mask",
+ &ixlv_shared_debug_mask);
+SYSCTL_INT(_hw_ixlv, OID_AUTO, shared_debug_mask, CTLFLAG_RDTUN,
+ &ixlv_shared_debug_mask, 0,
+ "Display debug statements that are printed in shared code");
+
+#if 0
/*
** Controls for Interrupt Throttling
** - true/false for dynamic adjustment
@@ -240,6 +236,7 @@
TUNABLE_INT("hw.ixlv.dynamic_tx_itr", &ixlv_dynamic_tx_itr);
SYSCTL_INT(_hw_ixlv, OID_AUTO, dynamic_tx_itr, CTLFLAG_RDTUN,
&ixlv_dynamic_tx_itr, 0, "Dynamic TX Interrupt Rate");
+#endif
int ixlv_rx_itr = IXL_ITR_8K;
TUNABLE_INT("hw.ixlv.rx_itr", &ixlv_rx_itr);
@@ -251,29 +248,28 @@
SYSCTL_INT(_hw_ixlv, OID_AUTO, tx_itr, CTLFLAG_RDTUN,
&ixlv_tx_itr, 0, "TX Interrupt Rate");
-extern struct if_txrx ixl_txrx;
+extern struct if_txrx ixl_txrx_hwb;
+extern struct if_txrx ixl_txrx_dwb;
static struct if_shared_ctx ixlv_sctx_init = {
.isc_magic = IFLIB_MAGIC,
.isc_q_align = PAGE_SIZE,/* max(DBA_ALIGN, PAGE_SIZE) */
.isc_tx_maxsize = IXL_TSO_SIZE + sizeof(struct ether_vlan_header),
- .isc_tx_maxsegsize = PAGE_SIZE,
+ .isc_tx_maxsegsize = IXL_MAX_DMA_SEG_SIZE,
.isc_tso_maxsize = IXL_TSO_SIZE + sizeof(struct ether_vlan_header),
- .isc_tso_maxsegsize = PAGE_SIZE,
- // TODO: Review the rx_maxsize and rx_maxsegsize params
- // Where are they used in iflib?
+ .isc_tso_maxsegsize = IXL_MAX_DMA_SEG_SIZE,
.isc_rx_maxsize = 16384,
- .isc_rx_nsegments = 1,
- .isc_rx_maxsegsize = 16384,
- // TODO: What is isc_nfl for?
+ .isc_rx_nsegments = IXL_MAX_RX_SEGS,
+ .isc_rx_maxsegsize = IXL_MAX_DMA_SEG_SIZE,
.isc_nfl = 1,
.isc_ntxqs = 1,
.isc_nrxqs = 1,
.isc_admin_intrcnt = 1,
.isc_vendor_info = ixlv_vendor_info_array,
- .isc_driver_version = ixlv_driver_version,
+ .isc_driver_version = IXLV_DRIVER_VERSION_STRING,
.isc_driver = &ixlv_if_driver,
+ .isc_flags = IFLIB_NEED_SCRATCH | IFLIB_NEED_ZERO_CSUM | IFLIB_IS_VF,
.isc_nrxd_min = {IXL_MIN_RING},
.isc_ntxd_min = {IXL_MIN_RING},
@@ -286,64 +282,82 @@
if_shared_ctx_t ixlv_sctx = &ixlv_sctx_init;
/*** Functions ***/
-
static void *
ixlv_register(device_t dev)
{
return (ixlv_sctx);
- }
+}
+
+static int
+ixlv_allocate_pci_resources(struct ixlv_sc *sc)
+{
+ struct i40e_hw *hw = &sc->hw;
+ device_t dev = iflib_get_dev(sc->vsi.ctx);
+ int rid;
+
+ /* Map BAR0 */
+ rid = PCIR_BAR(0);
+ sc->pci_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
+ &rid, RF_ACTIVE);
+
+ if (!(sc->pci_mem)) {
+ device_printf(dev, "Unable to allocate bus resource: PCI memory\n");
+ return (ENXIO);
+ }
+
+ /* Save off the PCI information */
+ hw->vendor_id = pci_get_vendor(dev);
+ hw->device_id = pci_get_device(dev);
+ hw->revision_id = pci_read_config(dev, PCIR_REVID, 1);
+ hw->subsystem_vendor_id =
+ pci_read_config(dev, PCIR_SUBVEND_0, 2);
+ hw->subsystem_device_id =
+ pci_read_config(dev, PCIR_SUBDEV_0, 2);
+
+ hw->bus.device = pci_get_slot(dev);
+ hw->bus.func = pci_get_function(dev);
+
+ /* Save off register access information */
+ sc->osdep.mem_bus_space_tag =
+ rman_get_bustag(sc->pci_mem);
+ sc->osdep.mem_bus_space_handle =
+ rman_get_bushandle(sc->pci_mem);
+ sc->osdep.mem_bus_space_size = rman_get_size(sc->pci_mem);
+ sc->osdep.flush_reg = I40E_VFGEN_RSTAT;
+ sc->osdep.dev = dev;
+
+ sc->hw.hw_addr = (u8 *) &sc->osdep.mem_bus_space_handle;
+ sc->hw.back = &sc->osdep;
+
+ return (0);
+}
static int
ixlv_if_attach_pre(if_ctx_t ctx)
{
device_t dev;
- struct ixlv_sc *sc;
- struct i40e_hw *hw;
- struct ixl_vsi *vsi;
+ struct ixlv_sc *sc;
+ struct i40e_hw *hw;
+ struct ixl_vsi *vsi;
if_softc_ctx_t scctx;
int error = 0;
- INIT_DBG_DEV(dev, "begin");
-
dev = iflib_get_dev(ctx);
sc = iflib_get_softc(ctx);
- hw = &sc->hw;
- /*
- ** Note this assumes we have a single embedded VSI,
- ** this could be enhanced later to allocate multiple
- */
+
vsi = &sc->vsi;
- vsi->dev = dev;
vsi->back = sc;
+ sc->dev = dev;
+ hw = &sc->hw;
+
+ vsi->dev = dev;
vsi->hw = &sc->hw;
- // vsi->id = 0;
vsi->num_vlans = 0;
vsi->ctx = ctx;
vsi->media = iflib_get_media(ctx);
vsi->shared = scctx = iflib_get_softc_ctx(ctx);
- sc->dev = dev;
- /* Initialize hw struct */
- ixlv_init_hw(sc);
- /*
- * These are the same across all current ixl models
- */
- vsi->shared->isc_tx_nsegments = IXL_MAX_TX_SEGS;
- vsi->shared->isc_msix_bar = PCIR_BAR(IXL_MSIX_BAR);
- vsi->shared->isc_tx_tso_segments_max = IXL_MAX_TSO_SEGS;
- vsi->shared->isc_tx_tso_size_max = IXL_TSO_SIZE;
- vsi->shared->isc_tx_tso_segsize_max = PAGE_SIZE;
-
- /* Save this tunable */
- vsi->enable_head_writeback = ixlv_enable_head_writeback;
-
- scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0]
- * sizeof(struct i40e_tx_desc) + sizeof(u32), DBA_ALIGN);
- scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0]
- * sizeof(union i40e_32byte_rx_desc), DBA_ALIGN);
- /* XXX: No idea what this does */
- /* TODO: This value may depend on resources received */
- scctx->isc_max_txqsets = scctx->isc_max_rxqsets = 16;
+ ixlv_save_tunables(sc);
/* Do PCI setup - map BAR0, etc */
if (ixlv_allocate_pci_resources(sc)) {
@@ -353,9 +367,12 @@
goto err_early;
}
- INIT_DBG_DEV(dev, "Allocated PCI resources and MSIX vectors");
+ ixlv_dbg_init(sc, "Allocated PCI resources and MSIX vectors\n");
- /* XXX: This is called by init_shared_code in the PF driver */
+ /*
+ * XXX: This is called by init_shared_code in the PF driver,
+ * but the rest of that function does not support VFs.
+ */
error = i40e_set_mac_type(hw);
if (error) {
device_printf(dev, "%s: set_mac_type failed: %d\n",
@@ -370,7 +387,7 @@
goto err_pci_res;
}
- INIT_DBG_DEV(dev, "VF Device is ready for configuration");
+ ixlv_dbg_init(sc, "VF Device is ready for configuration\n");
/* Sets up Admin Queue */
error = ixlv_setup_vc(sc);
@@ -380,7 +397,7 @@
goto err_pci_res;
}
- INIT_DBG_DEV(dev, "PF API version verified");
+ ixlv_dbg_init(sc, "PF API version verified\n");
/* Need API version before sending reset message */
error = ixlv_reset(sc);
@@ -389,7 +406,7 @@
goto err_aq;
}
- INIT_DBG_DEV(dev, "VF reset complete");
+ ixlv_dbg_init(sc, "VF reset complete\n");
/* Ask for VF config from PF */
error = ixlv_vf_config(sc);
@@ -405,10 +422,8 @@
sc->vf_res->max_vectors,
sc->vf_res->rss_key_size,
sc->vf_res->rss_lut_size);
-#ifdef IXL_DEBUG
- device_printf(dev, "Offload flags: 0x%b\n",
- sc->vf_res->vf_offload_flags, IXLV_PRINTF_VF_OFFLOAD_FLAGS);
-#endif
+ ixlv_dbg_info(sc, "Received offload flags: 0x%b\n",
+ sc->vf_res->vf_cap_flags, IXLV_PRINTF_VF_OFFLOAD_FLAGS);
/* got VF config message back from PF, now we can parse it */
for (int i = 0; i < sc->vf_res->num_vsis; i++) {
@@ -422,9 +437,10 @@
}
vsi->id = sc->vsi_res->vsi_id;
- INIT_DBG_DEV(dev, "Resource Acquisition complete");
+ ixlv_dbg_init(sc, "Resource Acquisition complete\n");
/* If no mac address was assigned just make a random one */
+ // TODO: What if the PF doesn't allow us to set our own MAC?
if (!ixlv_check_ether_addr(hw->mac.addr)) {
u8 addr[ETHER_ADDR_LEN];
arc4rand(&addr, sizeof(addr), 0);
@@ -435,21 +451,35 @@
bcopy(hw->mac.addr, hw->mac.perm_addr, ETHER_ADDR_LEN);
iflib_set_mac(ctx, hw->mac.addr);
- // TODO: Is this still safe to call?
- // ixl_vsi_setup_rings_size(vsi, ixlv_tx_ring_size, ixlv_rx_ring_size);
-
/* Allocate filter lists */
ixlv_init_filters(sc);
/* Fill out more iflib parameters */
- scctx->isc_txrx = &ixl_txrx;
- // TODO: Probably needs changing
- vsi->shared->isc_rss_table_size = sc->hw.func_caps.rss_table_size;
+ // TODO: This needs to be set to configured "num-queues" value
+ // from iovctl.conf
+ scctx->isc_ntxqsets_max = scctx->isc_nrxqsets_max = 4;
+ if (vsi->enable_head_writeback) {
+ scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0]
+ * sizeof(struct i40e_tx_desc) + sizeof(u32), DBA_ALIGN);
+ scctx->isc_txrx = &ixl_txrx_hwb;
+ } else {
+ scctx->isc_txqsizes[0] = roundup2(scctx->isc_ntxd[0]
+ * sizeof(struct i40e_tx_desc), DBA_ALIGN);
+ scctx->isc_txrx = &ixl_txrx_dwb;
+ }
+ scctx->isc_rxqsizes[0] = roundup2(scctx->isc_nrxd[0]
+ * sizeof(union i40e_32byte_rx_desc), DBA_ALIGN);
+ scctx->isc_msix_bar = PCIR_BAR(IXL_MSIX_BAR);
+ scctx->isc_tx_nsegments = IXL_MAX_TX_SEGS;
+ scctx->isc_tx_tso_segments_max = IXL_MAX_TSO_SEGS;
+ scctx->isc_tx_tso_size_max = IXL_TSO_SIZE;
+ scctx->isc_tx_tso_segsize_max = IXL_MAX_DMA_SEG_SIZE;
+ scctx->isc_rss_table_size = IXL_RSS_VSI_LUT_SIZE;
scctx->isc_tx_csum_flags = CSUM_OFFLOAD;
scctx->isc_capabilities = scctx->isc_capenable = IXL_CAPS;
- INIT_DBG_DEV(dev, "end");
return (0);
+
err_res_buf:
free(sc->vf_res, M_DEVBUF);
err_aq:
@@ -457,8 +487,6 @@
err_pci_res:
ixlv_free_pci_resources(sc);
err_early:
- ixlv_free_filters(sc);
- INIT_DBG_DEV(dev, "end: error %d", error);
return (error);
}
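The TX queue sizing above differs between the two write-back modes: head write-back reserves an extra u32 at the end of the descriptor ring for the hardware to record its completed-head pointer. A userspace sketch of that arithmetic follows; the 16-byte i40e_tx_desc size and 128-byte DBA_ALIGN are assumptions mirroring the driver headers, not taken from them:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed values (not authoritative): i40e_tx_desc is 16 bytes,
 * DBA_ALIGN is 128. */
#define TX_DESC_SIZE	16u
#define DBA_ALIGN	128u

/* roundup2() as in sys/param.h: round x up to power-of-2 boundary y */
#define roundup2(x, y)	(((x) + ((y) - 1)) & ~((size_t)(y) - 1))

static size_t
txq_size(uint32_t ntxd, int head_writeback)
{
	size_t sz = (size_t)ntxd * TX_DESC_SIZE;

	/* Head write-back appends a u32 the HW writes its head into */
	if (head_writeback)
		sz += sizeof(uint32_t);
	return (roundup2(sz, DBA_ALIGN));
}
```

With 1024 descriptors the plain ring is already aligned at 16384 bytes; the head write-back variant grows by 4 bytes and rounds up to the next 128-byte boundary, 16512.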
@@ -474,27 +502,29 @@
INIT_DBG_DEV(dev, "begin");
dev = iflib_get_dev(ctx);
- vsi = iflib_get_softc(ctx);
+ sc = iflib_get_softc(ctx);
+ vsi = &sc->vsi;
vsi->ifp = iflib_get_ifp(ctx);
- sc = (struct ixlv_sc *)vsi->back;
hw = &sc->hw;
+ /* Save off determined number of queues for interface */
+ vsi->num_rx_queues = vsi->shared->isc_nrxqsets;
+ vsi->num_tx_queues = vsi->shared->isc_ntxqsets;
+
/* Setup the stack interface */
- if (ixlv_setup_interface(dev, sc) != 0) {
- device_printf(dev, "%s: setup interface failed!\n",
- __func__);
- error = EIO;
- goto out;
- }
+ ixlv_setup_interface(dev, sc);
INIT_DBG_DEV(dev, "Interface setup complete");
/* Initialize statistics & add sysctls */
bzero(&sc->vsi.eth_stats, sizeof(struct i40e_eth_stats));
- ixlv_add_sysctls(sc);
+ ixlv_add_device_sysctls(sc);
+
+ sc->init_state = IXLV_INIT_READY;
- /* We want AQ enabled early */
+ /* We want AQ enabled early for init */
ixlv_enable_adminq_irq(hw);
+
INIT_DBG_DEV(dev, "end");
return (error);
// TODO: Check if any failures can happen above
@@ -509,22 +539,23 @@
#endif
}
+/**
+ * XXX: iflib always ignores the return value of detach(),
+ * so this routine is not allowed to fail.
+ */
static int
ixlv_if_detach(if_ctx_t ctx)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- struct ixlv_sc *sc = vsi->back;
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
struct i40e_hw *hw = &sc->hw;
- device_t dev = sc->dev;
+ device_t dev = sc->dev;
enum i40e_status_code status;
INIT_DBG_DEV(dev, "begin");
/* Remove all the media and link information */
- ifmedia_removeall(&sc->media);
-
- /* Drain VC mgr */
- callout_drain(&sc->vc_mgr.callout);
+ ifmedia_removeall(vsi->media);
ixlv_disable_adminq_irq(hw);
status = i40e_shutdown_adminq(&sc->hw);
@@ -542,16 +573,11 @@
return (0);
}
-/* TODO: Do shutdown-specific stuff here */
static int
ixlv_if_shutdown(if_ctx_t ctx)
{
int error = 0;
- INIT_DBG_DEV(dev, "begin");
-
- /* TODO: Call ixl_if_stop()? */
-
return (error);
}
@@ -561,164 +587,18 @@
{
int error = 0;
- INIT_DBG_DEV(dev, "begin");
-
- /* TODO: Call ixl_if_stop()? */
-
return (error);
}
static int
ixlv_if_resume(if_ctx_t ctx)
{
- struct ifnet *ifp = iflib_get_ifp(ctx);
-
- INIT_DBG_DEV(dev, "begin");
-
/* Read & clear wake-up registers */
- /* Required after D3->D0 transition */
- if (ifp->if_flags & IFF_UP)
- ixlv_if_init(ctx);
-
return (0);
}
#if 0
-static int
-ixlv_ioctl(struct ifnet *ifp, u_long command, caddr_t data)
-{
- struct ixl_vsi *vsi = ifp->if_softc;
- struct ixlv_sc *sc = vsi->back;
- struct ifreq *ifr = (struct ifreq *)data;
-#if defined(INET) || defined(INET6)
- struct ifaddr *ifa = (struct ifaddr *)data;
- bool avoid_reset = FALSE;
-#endif
- int error = 0;
-
-
- switch (command) {
-
- case SIOCSIFADDR:
-#ifdef INET
- if (ifa->ifa_addr->sa_family == AF_INET)
- avoid_reset = TRUE;
-#endif
-#ifdef INET6
- if (ifa->ifa_addr->sa_family == AF_INET6)
- avoid_reset = TRUE;
-#endif
-#if defined(INET) || defined(INET6)
- /*
- ** Calling init results in link renegotiation,
- ** so we avoid doing it when possible.
- */
- if (avoid_reset) {
- ifp->if_flags |= IFF_UP;
- if (!(ifp->if_drv_flags & IFF_DRV_RUNNING))
- ixlv_init(vsi);
-#ifdef INET
- if (!(ifp->if_flags & IFF_NOARP))
- arp_ifinit(ifp, ifa);
-#endif
- } else
- error = ether_ioctl(ifp, command, data);
- break;
-#endif
- case SIOCSIFMTU:
- IOCTL_DBG_IF2(ifp, "SIOCSIFMTU (Set Interface MTU)");
- mtx_lock(&sc->mtx);
- if (ifr->ifr_mtu > IXL_MAX_FRAME -
- ETHER_HDR_LEN - ETHER_CRC_LEN - ETHER_VLAN_ENCAP_LEN) {
- error = EINVAL;
- IOCTL_DBG_IF(ifp, "mtu too large");
- } else {
- IOCTL_DBG_IF2(ifp, "mtu: %lu -> %d", (u_long)ifp->if_mtu, ifr->ifr_mtu);
- // ERJ: Interestingly enough, these types don't match
- ifp->if_mtu = (u_long)ifr->ifr_mtu;
- vsi->max_frame_size =
- ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
- + ETHER_VLAN_ENCAP_LEN;
- if (ifp->if_drv_flags & IFF_DRV_RUNNING)
- ixlv_init_locked(sc);
- }
- mtx_unlock(&sc->mtx);
- break;
- case SIOCSIFFLAGS:
- IOCTL_DBG_IF2(ifp, "SIOCSIFFLAGS (Set Interface Flags)");
- mtx_lock(&sc->mtx);
- if (ifp->if_flags & IFF_UP) {
- if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0)
- ixlv_init_locked(sc);
- } else
- if (ifp->if_drv_flags & IFF_DRV_RUNNING)
- ixlv_stop(sc);
- sc->if_flags = ifp->if_flags;
- mtx_unlock(&sc->mtx);
- break;
- case SIOCADDMULTI:
- IOCTL_DBG_IF2(ifp, "SIOCADDMULTI");
- if (ifp->if_drv_flags & IFF_DRV_RUNNING) {
- mtx_lock(&sc->mtx);
- ixlv_disable_intr(vsi);
- ixlv_add_multi(vsi);
- ixlv_enable_intr(vsi);
- mtx_unlock(&sc->mtx);
- }
- break;
- case SIOCDELMULTI:
- IOCTL_DBG_IF2(ifp, "SIOCDELMULTI");
- if (sc->init_state == IXLV_RUNNING) {
- mtx_lock(&sc->mtx);
- ixlv_disable_intr(vsi);
- ixlv_del_multi(vsi);
- ixlv_enable_intr(vsi);
- mtx_unlock(&sc->mtx);
- }
- break;
- case SIOCSIFMEDIA:
- case SIOCGIFMEDIA:
- IOCTL_DBG_IF2(ifp, "SIOCxIFMEDIA (Get/Set Interface Media)");
- error = ifmedia_ioctl(ifp, ifr, &sc->media, command);
- break;
- case SIOCSIFCAP:
- {
- int mask = ifr->ifr_reqcap ^ ifp->if_capenable;
- IOCTL_DBG_IF2(ifp, "SIOCSIFCAP (Set Capabilities)");
-
- ixlv_cap_txcsum_tso(vsi, ifp, mask);
-
- if (mask & IFCAP_RXCSUM)
- ifp->if_capenable ^= IFCAP_RXCSUM;
- if (mask & IFCAP_RXCSUM_IPV6)
- ifp->if_capenable ^= IFCAP_RXCSUM_IPV6;
- if (mask & IFCAP_LRO)
- ifp->if_capenable ^= IFCAP_LRO;
- if (mask & IFCAP_VLAN_HWTAGGING)
- ifp->if_capenable ^= IFCAP_VLAN_HWTAGGING;
- if (mask & IFCAP_VLAN_HWFILTER)
- ifp->if_capenable ^= IFCAP_VLAN_HWFILTER;
- if (mask & IFCAP_VLAN_HWTSO)
- ifp->if_capenable ^= IFCAP_VLAN_HWTSO;
- if (ifp->if_drv_flags & IFF_DRV_RUNNING) {
- ixlv_init(vsi);
- }
- VLAN_CAPABILITIES(ifp);
-
- break;
- }
-
- default:
- IOCTL_DBG_IF2(ifp, "UNKNOWN (0x%X)", (int)command);
- error = ether_ioctl(ifp, command, data);
- break;
- }
-
- return (error);
-}
-#endif
-
/*
** To do a reinit on the VF is unfortunately more complicated
** than a physical device, we must have the PF more or less
@@ -744,7 +624,7 @@
error = ixlv_reset(sc);
- INIT_DBG_IF(ifp, "VF was reset");
+ ixlv_dbg_info(sc, "VF was reset\n");
/* set the state in case we went thru RESET */
sc->init_state = IXLV_RUNNING;
@@ -773,180 +653,140 @@
}
ixlv_enable_adminq_irq(hw);
- ixl_vc_flush(&sc->vc_mgr);
+ return (error);
+}
+
+int
+ixlv_send_vc_msg(struct ixlv_sc *sc, u32 op)
+{
+ int error = 0;
+
+ error = ixl_vc_send_cmd(sc, op);
+ if (error != 0)
+ ixlv_dbg_vc(sc, "Error sending %b: %d\n", op, IXLV_FLAGS, error);
+
+ return (error);
+}
+#endif
+
+int
+ixlv_send_vc_msg(struct ixlv_sc *sc, u32 op)
+{
+ int error = 0;
+
+ error = ixl_vc_send_cmd(sc, op);
+ if (error != 0)
+ ixlv_dbg_vc(sc, "Error sending %b: %d\n", op, IXLV_FLAGS, error);
- INIT_DBG_IF(ifp, "end");
return (error);
}
static void
-ixl_init_cmd_complete(struct ixl_vc_cmd *cmd, void *arg,
- enum i40e_status_code code)
+ixlv_init_queues(struct ixl_vsi *vsi)
{
- struct ixlv_sc *sc;
+ if_softc_ctx_t scctx = vsi->shared;
+ struct ixl_tx_queue *tx_que = vsi->tx_queues;
+ struct ixl_rx_queue *rx_que = vsi->rx_queues;
+ struct rx_ring *rxr;
+
+ for (int i = 0; i < vsi->num_tx_queues; i++, tx_que++)
+ ixl_init_tx_ring(vsi, tx_que);
- sc = arg;
+ for (int i = 0; i < vsi->num_rx_queues; i++, rx_que++) {
+ rxr = &rx_que->rxr;
- /*
- * Ignore "Adapter Stopped" message as that happens if an ifconfig down
- * happens while a command is in progress, so we don't print an error
- * in that case.
- */
- if (code != I40E_SUCCESS && code != I40E_ERR_ADAPTER_STOPPED) {
- if_printf(sc->vsi.ifp,
- "Error %s waiting for PF to complete operation %d\n",
- i40e_stat_str(&sc->hw, code), cmd->request);
+ if (scctx->isc_max_frame_size <= MCLBYTES)
+ rxr->mbuf_sz = MCLBYTES;
+ else
+ rxr->mbuf_sz = MJUMPAGESIZE;
+
+ wr32(vsi->hw, rxr->tail, 0);
}
}
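The new ixlv_init_queues() picks the RX mbuf cluster size from the configured maximum frame size: standard 2 KB clusters when the frame fits, page-sized jumbo clusters otherwise. A minimal sketch, with MCLBYTES and MJUMPAGESIZE hard-coded to their usual amd64 values as an assumption (the driver takes them from the kernel headers):

```c
#include <assert.h>
#include <stdint.h>

/* Usual amd64 values; assumptions for this sketch only. */
#define MCLBYTES	2048
#define MJUMPAGESIZE	4096

static int
rx_mbuf_size(uint32_t max_frame_size)
{
	/* Frames larger than a page-size cluster still use MJUMPAGESIZE;
	 * iflib spreads them across multiple RX segments. */
	return (max_frame_size <= MCLBYTES ? MCLBYTES : MJUMPAGESIZE);
}
```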
void
ixlv_if_init(if_ctx_t ctx)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- if_softc_ctx_t scctx = vsi->shared;
- struct ixlv_sc *sc = vsi->back;
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
struct i40e_hw *hw = &sc->hw;
struct ifnet *ifp = iflib_get_ifp(ctx);
- struct ixl_tx_queue *tx_que = vsi->tx_queues;
- struct ixl_rx_queue *rx_que = vsi->rx_queues;
-
+ u8 tmpaddr[ETHER_ADDR_LEN];
int error = 0;
INIT_DBG_IF(ifp, "begin");
- IXLV_CORE_LOCK_ASSERT(sc);
+ MPASS(sx_xlocked(iflib_ctx_lock_get(ctx)));
+
+ error = ixlv_reset_complete(hw);
+ if (error) {
+ device_printf(sc->dev, "%s: VF reset failed\n",
+ __func__);
+ }
+
+ if (!i40e_check_asq_alive(hw)) {
+ ixlv_dbg_info(sc, "ASQ is not alive, re-initializing AQ\n");
+ pci_enable_busmaster(sc->dev);
+ i40e_shutdown_adminq(hw);
+ i40e_init_adminq(hw);
+ }
- /* Do a reinit first if an init has already been done */
+#if 0
if ((sc->init_state == IXLV_RUNNING) ||
(sc->init_state == IXLV_RESET_REQUIRED) ||
(sc->init_state == IXLV_RESET_PENDING))
error = ixlv_reinit_locked(sc);
/* Don't bother with init if we failed reinit */
if (error)
- goto init_done;
+ return;
+#endif
- /* Remove existing MAC filter if new MAC addr is set */
- if (bcmp(IF_LLADDR(ifp), hw->mac.addr, ETHER_ADDR_LEN) != 0) {
+ bcopy(IF_LLADDR(ifp), tmpaddr, ETHER_ADDR_LEN);
+ if (!cmp_etheraddr(hw->mac.addr, tmpaddr) &&
+ (i40e_validate_mac_addr(tmpaddr) == I40E_SUCCESS)) {
error = ixlv_del_mac_filter(sc, hw->mac.addr);
if (error == 0)
- ixl_vc_enqueue(&sc->vc_mgr, &sc->del_mac_cmd,
- IXLV_FLAG_AQ_DEL_MAC_FILTER, ixl_init_cmd_complete,
- sc);
- }
-
- /* Check for an LAA mac address... */
- bcopy(IF_LLADDR(ifp), hw->mac.addr, ETHER_ADDR_LEN);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_DEL_MAC_FILTER);
- /* Add mac filter for this VF to PF */
- if (i40e_validate_mac_addr(hw->mac.addr) == I40E_SUCCESS) {
- error = ixlv_add_mac_filter(sc, hw->mac.addr, 0);
- if (!error || error == EEXIST)
- ixl_vc_enqueue(&sc->vc_mgr, &sc->add_mac_cmd,
- IXLV_FLAG_AQ_ADD_MAC_FILTER, ixl_init_cmd_complete,
- sc);
+ bcopy(tmpaddr, hw->mac.addr, ETH_ALEN);
}
+ error = ixlv_add_mac_filter(sc, hw->mac.addr, 0);
+ if (!error || error == EEXIST)
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_ADD_MAC_FILTER);
+ iflib_set_mac(ctx, hw->mac.addr);
+
/* Setup vlan's if needed */
- ixlv_setup_vlan_filters(sc);
+ // ixlv_setup_vlan_filters(sc);
- // TODO: Functionize
/* Prepare the queues for operation */
- for (int i = 0; i < vsi->num_tx_queues; i++, tx_que++) {
- // TODO: Necessary? Correct?
- ixl_init_tx_ring(vsi, tx_que);
- }
- for (int i = 0; i < vsi->num_rx_queues; i++, rx_que++) {
- struct rx_ring *rxr = &rx_que->rxr;
-
- if (scctx->isc_max_frame_size <= MCLBYTES)
- rxr->mbuf_sz = MCLBYTES;
- else
- rxr->mbuf_sz = MJUMPAGESIZE;
- }
+ ixlv_init_queues(vsi);
/* Set initial ITR values */
ixlv_configure_itr(sc);
- /* Configure queues */
- ixl_vc_enqueue(&sc->vc_mgr, &sc->config_queues_cmd,
- IXLV_FLAG_AQ_CONFIGURE_QUEUES, ixl_init_cmd_complete, sc);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_CONFIGURE_QUEUES);
/* Set up RSS */
ixlv_config_rss(sc);
/* Map vectors */
- ixl_vc_enqueue(&sc->vc_mgr, &sc->map_vectors_cmd,
- IXLV_FLAG_AQ_MAP_VECTORS, ixl_init_cmd_complete, sc);
-
- /* Enable queues */
- ixl_vc_enqueue(&sc->vc_mgr, &sc->enable_queues_cmd,
- IXLV_FLAG_AQ_ENABLE_QUEUES, ixl_init_cmd_complete, sc);
-
- sc->init_state = IXLV_RUNNING;
-
-init_done:
- INIT_DBG_IF(ifp, "end");
- return;
-}
-
-#if 0
-void
-ixlv_init(void *arg)
-{
- struct ixl_vsi *vsi = (struct ixl_vsi *)arg;
- struct ixlv_sc *sc = vsi->back;
- int retries = 0;
-
- /* Prevent init from running again while waiting for AQ calls
- * made in init_locked() to complete. */
- mtx_lock(&sc->mtx);
- if (sc->init_in_progress) {
- mtx_unlock(&sc->mtx);
- return;
- } else
- sc->init_in_progress = true;
-
- ixlv_init_locked(sc);
- mtx_unlock(&sc->mtx);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_MAP_VECTORS);
- /* Wait for init_locked to finish */
- while (!(vsi->ifp->if_drv_flags & IFF_DRV_RUNNING)
- && ++retries < IXLV_MAX_INIT_WAIT) {
- i40e_msec_pause(25);
- }
- if (retries >= IXLV_MAX_INIT_WAIT) {
- if_printf(vsi->ifp,
- "Init failed to complete in allotted time!\n");
- }
+ /* Init SW TX ring indices */
+ if (vsi->enable_head_writeback)
+ ixl_init_tx_cidx(vsi);
+ else
+ ixl_init_tx_rsqs(vsi);
- mtx_lock(&sc->mtx);
- sc->init_in_progress = false;
- mtx_unlock(&sc->mtx);
-}
+ /* Configure promiscuous mode */
+ ixlv_if_promisc_set(ctx, if_getflags(ifp));
-/*
- * ixlv_attach() helper function; gathers information about
- * the (virtual) hardware for use elsewhere in the driver.
- */
-static void
-ixlv_init_hw(struct ixlv_sc *sc)
-{
- struct i40e_hw *hw = &sc->hw;
- device_t dev = sc->dev;
-
- /* Save off the information about this board */
- hw->vendor_id = pci_get_vendor(dev);
- hw->device_id = pci_get_device(dev);
- hw->revision_id = pci_read_config(dev, PCIR_REVID, 1);
- hw->subsystem_vendor_id =
- pci_read_config(dev, PCIR_SUBVEND_0, 2);
- hw->subsystem_device_id =
- pci_read_config(dev, PCIR_SUBDEV_0, 2);
+ /* Enable queues */
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_ENABLE_QUEUES);
- hw->bus.device = pci_get_slot(dev);
- hw->bus.func = pci_get_function(dev);
+ sc->init_state = IXLV_RUNNING;
}
-#endif
/*
 * ixlv_attach() helper function; initializes the admin queue
@@ -978,7 +818,7 @@
continue;
}
- INIT_DBG_DEV(dev, "Initialized Admin Queue; starting"
+ ixlv_dbg_init(sc, "Initialized Admin Queue; starting"
" send_api_ver attempt %d", i+1);
retry_send:
@@ -1007,7 +847,7 @@
if (asq_retries > IXLV_AQ_MAX_ERR)
continue;
- INIT_DBG_DEV(dev, "Sent API version message to PF");
+ ixlv_dbg_init(sc, "Sent API version message to PF\n");
/* Verify that the VF accepts the PF's API version */
error = ixlv_verify_api_ver(sc);
@@ -1074,7 +914,7 @@
i40e_msec_pause(10);
}
- INIT_DBG_DEV(dev, "Sent VF config message to PF, attempt %d",
+ ixlv_dbg_init(sc, "Sent VF config message to PF, attempt %d\n",
retried + 1);
if (!sc->vf_res) {
@@ -1119,53 +959,65 @@
static int
ixlv_if_msix_intr_assign(if_ctx_t ctx, int msix)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- struct ixlv_sc *sc = vsi->back;
- struct ixl_rx_queue *que = vsi->rx_queues;
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+ struct ixl_rx_queue *rx_que = vsi->rx_queues;
struct ixl_tx_queue *tx_que = vsi->tx_queues;
int err, i, rid, vector = 0;
char buf[16];
+ MPASS(vsi->shared->isc_nrxqsets > 0);
+ MPASS(vsi->shared->isc_ntxqsets > 0);
+
/* Admin Que is vector 0*/
rid = vector + 1;
-
err = iflib_irq_alloc_generic(ctx, &vsi->irq, rid, IFLIB_INTR_ADMIN,
- ixlv_msix_adminq, sc, 0, "aq");
+ ixlv_msix_adminq, sc, 0, "aq");
if (err) {
iflib_irq_free(ctx, &vsi->irq);
- device_printf(iflib_get_dev(ctx), "Failed to register Admin que handler");
+ device_printf(iflib_get_dev(ctx),
+ "Failed to register Admin Queue handler\n");
return (err);
}
- sc->admvec = vector;
- ++vector;
/* Now set up the stations */
- for (i = 0; i < vsi->num_rx_queues; i++, vector++, que++) {
+ for (i = 0, vector = 1; i < vsi->shared->isc_nrxqsets; i++, vector++, rx_que++) {
rid = vector + 1;
snprintf(buf, sizeof(buf), "rxq%d", i);
- err = iflib_irq_alloc_generic(ctx, &que->que_irq, rid, IFLIB_INTR_RX,
- ixlv_msix_que, que, que->rxr.me, buf);
+ err = iflib_irq_alloc_generic(ctx, &rx_que->que_irq, rid,
+ IFLIB_INTR_RX, ixlv_msix_que, rx_que, rx_que->rxr.me, buf);
+ /* XXX: Does the driver work as expected if there are fewer num_rx_queues than
+ * what's expected in the iflib context? */
if (err) {
- device_printf(iflib_get_dev(ctx), "Failed to allocate q int %d err: %d", i, err);
+ device_printf(iflib_get_dev(ctx),
+ "Failed to allocate queue RX int vector %d, err: %d\n", i, err);
vsi->num_rx_queues = i + 1;
goto fail;
}
- que->msix = vector;
+ rx_que->msix = vector;
}
- for (i = 0, tx_que = vsi->tx_queues; i < vsi->num_tx_queues; i++, tx_que++) {
+ bzero(buf, sizeof(buf));
+
+ for (i = 0; i < vsi->shared->isc_ntxqsets; i++, tx_que++) {
snprintf(buf, sizeof(buf), "txq%d", i);
- rid = que->msix + 1;
- iflib_softirq_alloc_generic(ctx, rid, IFLIB_INTR_TX, tx_que, tx_que->txr.me, buf);
+ iflib_softirq_alloc_generic(ctx,
+ &vsi->rx_queues[i % vsi->shared->isc_nrxqsets].que_irq,
+ IFLIB_INTR_TX, tx_que, tx_que->txr.me, buf);
+
+ /* TODO: Maybe call a strategy function for this to figure out which
+ * interrupts to map Tx queues to. I don't know if there's an immediately
+ * better way than this other than a user-supplied map, though. */
+ tx_que->msix = (i % vsi->shared->isc_nrxqsets) + 1;
}
return (0);
fail:
iflib_irq_free(ctx, &vsi->irq);
- que = vsi->rx_queues;
- for (int i = 0; i < vsi->num_rx_queues; i++, que++)
- iflib_irq_free(ctx, &que->que_irq);
+ rx_que = vsi->rx_queues;
+ for (int i = 0; i < vsi->num_rx_queues; i++, rx_que++)
+ iflib_irq_free(ctx, &rx_que->que_irq);
return (err);
}
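Because a VF may have fewer MSI-X vectors than TX plus RX queues, the assignment above round-robins TX queues onto the RX queue interrupts; vector 0 is reserved for the admin queue, so queue vectors start at 1. The mapping reduces to:

```c
#include <assert.h>

/* TX queue i shares the MSI-X vector of RX queue (i % nrxqsets);
 * the "+ 1" skips vector 0, which the admin queue owns. */
static int
tx_queue_msix(int txq, int nrxqsets)
{
	return ((txq % nrxqsets) + 1);
}
```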
@@ -1173,7 +1025,8 @@
static void
ixlv_if_enable_intr(if_ctx_t ctx)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
ixlv_enable_intr(vsi);
}
@@ -1182,34 +1035,48 @@
static void
ixlv_if_disable_intr(if_ctx_t ctx)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
ixlv_disable_intr(vsi);
}
-/* Enable queue interrupt */
static int
-ixlv_if_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid)
+ixlv_if_rx_queue_intr_enable(if_ctx_t ctx, uint16_t rxqid)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- struct i40e_hw *hw = vsi->hw;
- struct ixl_rx_queue *que = &vsi->rx_queues[rxqid];
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+ struct i40e_hw *hw = vsi->hw;
+ struct ixl_rx_queue *rx_que = &vsi->rx_queues[rxqid];
- ixlv_enable_queue_irq(hw, que->rxr.me);
+ ixlv_enable_queue_irq(hw, rx_que->msix - 1);
+ return (0);
+}
+static int
+ixlv_if_tx_queue_intr_enable(if_ctx_t ctx, uint16_t txqid)
+{
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+ struct i40e_hw *hw = vsi->hw;
+ struct ixl_tx_queue *tx_que = &vsi->tx_queues[txqid];
+
+ ixlv_enable_queue_irq(hw, tx_que->msix - 1);
return (0);
}
static int
ixlv_if_tx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int ntxqs, int ntxqsets)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+ if_softc_ctx_t scctx = vsi->shared;
struct ixl_tx_queue *que;
- int i;
+ int i, j, error = 0;
- MPASS(vsi->num_tx_queues > 0);
+ MPASS(scctx->isc_ntxqsets > 0);
MPASS(ntxqs == 1);
- MPASS(vsi->num_tx_queues == ntxqsets);
+ MPASS(scctx->isc_ntxqsets == ntxqsets);
/* Allocate queue structure memory */
if (!(vsi->tx_queues =
@@ -1220,40 +1087,56 @@
for (i = 0, que = vsi->tx_queues; i < ntxqsets; i++, que++) {
struct tx_ring *txr = &que->txr;
+
txr->me = i;
que->vsi = vsi;
+ if (!vsi->enable_head_writeback) {
+ /* Allocate report status array */
+ if (!(txr->tx_rsq = malloc(sizeof(qidx_t) * scctx->isc_ntxd[0], M_IXLV, M_NOWAIT))) {
+ device_printf(iflib_get_dev(ctx), "failed to allocate tx_rsq memory\n");
+ error = ENOMEM;
+ goto fail;
+ }
+ /* Init report status array */
+ for (j = 0; j < scctx->isc_ntxd[0]; j++)
+ txr->tx_rsq[j] = QIDX_INVALID;
+ }
/* get the virtual and physical address of the hardware queues */
txr->tail = I40E_QTX_TAIL1(txr->me);
- txr->tx_base = (struct i40e_tx_desc *)vaddrs[i];
- txr->tx_paddr = paddrs[i];
+ txr->tx_base = (struct i40e_tx_desc *)vaddrs[i * ntxqs];
+ txr->tx_paddr = paddrs[i * ntxqs];
txr->que = que;
}
-
- // TODO: Do a config_gtask_init for admin queue here?
- // iflib_config_gtask_init(ctx, &adapter->mod_task, ixgbe_handle_mod, "mod_task");
- device_printf(iflib_get_dev(ctx), "%s: allocated for %d txqs\n", __func__, vsi->num_tx_queues);
return (0);
+fail:
+ ixlv_if_queues_free(ctx);
+ return (error);
}
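In descriptor write-back mode the allocator above also builds a per-ring "report status" array (tx_rsq) of descriptor indices to poll for completion, seeded with the QIDX_INVALID sentinel. A hedged userspace sketch; qidx_t and QIDX_INVALID here are assumptions mirroring iflib's definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint16_t qidx_t;
#define QIDX_INVALID	((qidx_t)0xFFFF)

/* Allocate and initialize a report-status ring for ntxd descriptors;
 * returns NULL on allocation failure, as the driver path does. */
static qidx_t *
alloc_tx_rsq(int ntxd)
{
	qidx_t *rsq = malloc(sizeof(qidx_t) * ntxd);

	if (rsq != NULL)
		for (int j = 0; j < ntxd; j++)
			rsq[j] = QIDX_INVALID;
	return (rsq);
}
```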
static int
ixlv_if_rx_queues_alloc(if_ctx_t ctx, caddr_t *vaddrs, uint64_t *paddrs, int nrxqs, int nrxqsets)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
struct ixl_rx_queue *que;
- int i;
+ int i, error = 0;
- MPASS(vsi->num_rx_queues > 0);
+#ifdef INVARIANTS
+ if_softc_ctx_t scctx = vsi->shared;
+ MPASS(scctx->isc_nrxqsets > 0);
MPASS(nrxqs == 1);
- MPASS(vsi->num_rx_queues == nrxqsets);
+ MPASS(scctx->isc_nrxqsets == nrxqsets);
+#endif
/* Allocate queue structure memory */
if (!(vsi->rx_queues =
(struct ixl_rx_queue *) malloc(sizeof(struct ixl_rx_queue) *
nrxqsets, M_IXLV, M_NOWAIT | M_ZERO))) {
device_printf(iflib_get_dev(ctx), "Unable to allocate RX ring memory\n");
- return (ENOMEM);
+ error = ENOMEM;
+ goto fail;
}
for (i = 0, que = vsi->rx_queues; i < nrxqsets; i++, que++) {
@@ -1264,19 +1147,35 @@
/* get the virtual and physical address of the hardware queues */
rxr->tail = I40E_QRX_TAIL1(rxr->me);
- rxr->rx_base = (union i40e_rx_desc *)vaddrs[i];
- rxr->rx_paddr = paddrs[i];
+ rxr->rx_base = (union i40e_rx_desc *)vaddrs[i * nrxqs];
+ rxr->rx_paddr = paddrs[i * nrxqs];
rxr->que = que;
}
- device_printf(iflib_get_dev(ctx), "%s: allocated for %d rxqs\n", __func__, vsi->num_rx_queues);
return (0);
+fail:
+ ixlv_if_queues_free(ctx);
+ return (error);
}
static void
ixlv_if_queues_free(if_ctx_t ctx)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+
+ if (!vsi->enable_head_writeback) {
+ struct ixl_tx_queue *que;
+ int i = 0;
+
+ for (i = 0, que = vsi->tx_queues; i < vsi->num_tx_queues; i++, que++) {
+ struct tx_ring *txr = &que->txr;
+ if (txr->tx_rsq != NULL) {
+ free(txr->tx_rsq, M_IXLV);
+ txr->tx_rsq = NULL;
+ }
+ }
+ }
if (vsi->tx_queues != NULL) {
free(vsi->tx_queues, M_IXLV);
@@ -1288,274 +1187,232 @@
}
}
-// TODO: Implement
-static void
-ixlv_if_update_admin_status(if_ctx_t ctx)
+static int
+ixlv_check_aq_errors(struct ixlv_sc *sc)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- //struct ixlv_sc *sc = vsi->back;
- //struct i40e_hw *hw = &sc->hw;
- //struct i40e_arq_event_info event;
- //i40e_status ret;
- //u32 loop = 0;
- //u16 opcode
- u16 result = 0;
- //u64 baudrate;
-
- /* TODO: Split up
- * - Update admin queue stuff
- * - Update link status
- * - Enqueue aq task
- * - Re-enable admin intr
- */
+ struct i40e_hw *hw = &sc->hw;
+ device_t dev = sc->dev;
+ u32 reg, oldreg;
+ u8 aq_error = false;
-/* TODO: Does VF reset need to be handled here? */
-#if 0
- if (pf->state & IXL_PF_STATE_EMPR_RESETTING) {
- /* Flag cleared at end of this function */
- ixl_handle_empr_reset(pf);
- return;
+ /* check for Admin queue errors */
+ oldreg = reg = rd32(hw, hw->aq.arq.len);
+ if (reg & I40E_VF_ARQLEN1_ARQVFE_MASK) {
+ device_printf(dev, "ARQ VF Error detected\n");
+ reg &= ~I40E_VF_ARQLEN1_ARQVFE_MASK;
+ aq_error = true;
}
-#endif
+ if (reg & I40E_VF_ARQLEN1_ARQOVFL_MASK) {
+ device_printf(dev, "ARQ Overflow Error detected\n");
+ reg &= ~I40E_VF_ARQLEN1_ARQOVFL_MASK;
+ aq_error = true;
+ }
+ if (reg & I40E_VF_ARQLEN1_ARQCRIT_MASK) {
+ device_printf(dev, "ARQ Critical Error detected\n");
+ reg &= ~I40E_VF_ARQLEN1_ARQCRIT_MASK;
+ aq_error = true;
+ }
+ if (oldreg != reg)
+ wr32(hw, hw->aq.arq.len, reg);
-#if 0
- event.buf_len = IXL_AQ_BUF_SZ;
- event.msg_buf = malloc(event.buf_len,
- M_IXLV, M_NOWAIT | M_ZERO);
- if (!event.msg_buf) {
- device_printf(pf->dev, "%s: Unable to allocate memory for Admin"
- " Queue event!\n", __func__);
- return;
+ oldreg = reg = rd32(hw, hw->aq.asq.len);
+ if (reg & I40E_VF_ATQLEN1_ATQVFE_MASK) {
+ device_printf(dev, "ASQ VF Error detected\n");
+ reg &= ~I40E_VF_ATQLEN1_ATQVFE_MASK;
+ aq_error = true;
+ }
+ if (reg & I40E_VF_ATQLEN1_ATQOVFL_MASK) {
+ device_printf(dev, "ASQ Overflow Error detected\n");
+ reg &= ~I40E_VF_ATQLEN1_ATQOVFL_MASK;
+ aq_error = true;
+ }
+ if (reg & I40E_VF_ATQLEN1_ATQCRIT_MASK) {
+ device_printf(dev, "ASQ Critical Error detected\n");
+ reg &= ~I40E_VF_ATQLEN1_ATQCRIT_MASK;
+ aq_error = true;
}
+ if (oldreg != reg)
+ wr32(hw, hw->aq.asq.len, reg);
+
+ if (aq_error) {
+ device_printf(dev, "WARNING: Stopping VF!\n");
+ /*
+ * A VF reset might not be enough to fix a problem here;
+ * a PF reset could be required.
+ */
+ sc->init_state = IXLV_RESET_REQUIRED;
+ ixlv_stop(sc);
+ ixlv_request_reset(sc);
+ }
+
+ return (aq_error ? EIO : 0);
+}
+
+static enum i40e_status_code
+ixlv_process_adminq(struct ixlv_sc *sc, u16 *pending)
+{
+ enum i40e_status_code status = I40E_SUCCESS;
+ struct i40e_arq_event_info event;
+ struct i40e_hw *hw = &sc->hw;
+ struct virtchnl_msg *v_msg;
+ int error = 0, loop = 0;
+ u32 reg;
+
+ error = ixlv_check_aq_errors(sc);
+ if (error)
+ return (I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR);
+
+ event.buf_len = IXL_AQ_BUF_SZ;
+ event.msg_buf = sc->aq_buffer;
+ bzero(event.msg_buf, IXL_AQ_BUF_SZ);
+ v_msg = (struct virtchnl_msg *)&event.desc;
/* clean and process any events */
do {
- ret = i40e_clean_arq_element(hw, &event, &result);
- if (ret)
- break;
- opcode = LE16_TO_CPU(event.desc.opcode);
- ixl_dbg(pf, IXL_DBG_AQ,
- "Admin Queue event: %#06x\n", opcode);
- switch (opcode) {
- case i40e_aqc_opc_get_link_status:
- ixl_link_event(pf, &event);
- break;
- case i40e_aqc_opc_send_msg_to_pf:
-#ifdef PCI_IOV
- ixl_handle_vf_msg(pf, &event);
-#endif
- break;
- case i40e_aqc_opc_event_lan_overflow:
+ status = i40e_clean_arq_element(hw, &event, pending);
+ if (status)
break;
- default:
-#ifdef IXL_DEBUG
- printf("AdminQ unknown event %x\n", opcode);
-#endif
- break;
- }
+ ixlv_vc_completion(sc, v_msg->v_opcode,
+ v_msg->v_retval, event.msg_buf, event.msg_len);
+ bzero(event.msg_buf, IXL_AQ_BUF_SZ);
+ } while (*pending && (loop++ < IXL_ADM_LIMIT));
- } while (result && (loop++ < IXL_ADM_LIMIT));
+ /* Re-enable admin queue interrupt cause */
+ reg = rd32(hw, I40E_VFINT_ICR0_ENA1);
+ reg |= I40E_VFINT_ICR0_ENA1_ADMINQ_MASK;
+ wr32(hw, I40E_VFINT_ICR0_ENA1, reg);
- free(event.msg_buf, M_IXLV);
-#endif
+ return (status);
+}
-#if 0
- /* XXX: This updates the link status */
- if (pf->link_up) {
- if (vsi->link_active == FALSE) {
- vsi->link_active = TRUE;
- baudrate = ixl_max_aq_speed_to_value(pf->link_speed);
- iflib_link_state_change(ctx, LINK_STATE_UP, baudrate);
- ixl_link_up_msg(pf);
- // ixl_ping_all_vfs(adapter);
- }
- } else { /* Link down */
- if (vsi->link_active == TRUE) {
- vsi->link_active = FALSE;
- iflib_link_state_change(ctx, LINK_STATE_DOWN, 0);
- // ixl_ping_all_vfs(adapter);
- }
- }
-#endif
+static void
+ixlv_if_update_admin_status(if_ctx_t ctx)
+{
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct i40e_hw *hw = &sc->hw;
+ u16 pending;
+
+ ixlv_process_adminq(sc, &pending);
+ ixlv_update_link_status(sc);
/*
- * If there are still messages to process, reschedule ourselves.
- * Otherwise, re-enable our interrupt and go to sleep.
+ * If there are still messages to process, reschedule.
+ * Otherwise, re-enable the Admin Queue interrupt.
*/
- if (result > 0)
+ if (pending > 0)
iflib_admin_intr_deferred(ctx);
else
- /* TODO: Link/adminq interrupt should be re-enabled in IFDI_LINK_INTR_ENABLE */
- ixlv_enable_intr(vsi);
+ ixlv_enable_adminq_irq(hw);
+}
+
+static int
+ixlv_mc_filter_apply(void *arg, struct ifmultiaddr *ifma, int count __unused)
+{
+ struct ixlv_sc *sc = arg;
+
+ if (ifma->ifma_addr->sa_family != AF_LINK)
+ return (0);
+ ixlv_add_mac_filter(sc,
+ (u8*)LLADDR((struct sockaddr_dl *) ifma->ifma_addr),
+ IXL_FILTER_MC);
+ return (1);
}
static void
ixlv_if_multi_set(if_ctx_t ctx)
{
- // struct ixl_vsi *vsi = iflib_get_softc(ctx);
- // struct i40e_hw *hw = vsi->hw;
- // struct ixlv_sc *sc = vsi->back;
- // int mcnt = 0, flags;
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ int mcnt = 0;
- IOCTL_DEBUGOUT("ixl_if_multi_set: begin");
+ IOCTL_DEBUGOUT("ixlv_if_multi_set: begin");
- // TODO: Implement
-#if 0
mcnt = if_multiaddr_count(iflib_get_ifp(ctx), MAX_MULTICAST_ADDR);
- /* delete existing MC filters */
- ixlv_del_multi(vsi);
-
if (__predict_false(mcnt == MAX_MULTICAST_ADDR)) {
- // Set promiscuous mode (multicast)
- // TODO: This needs to get handled somehow
-#if 0
- ixl_vc_enqueue(&sc->vc_mgr, &sc->add_vlan_cmd,
- IXLV_FLAG_AQ_CONFIGURE_PROMISC, ixl_init_cmd_complete, sc);
-#endif
+ /* Delete MC filters and enable multicast promisc instead */
+ ixlv_init_multi(sc);
+ sc->promisc_flags |= FLAG_VF_MULTICAST_PROMISC;
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_CONFIGURE_PROMISC);
+ device_printf(sc->dev, "%s: Not yet\n", __func__);
return;
}
- /* (re-)install filters for all mcast addresses */
- mcnt = if_multi_apply(iflib_get_ifp(ctx), ixl_mc_filter_apply, vsi);
+
+ /* If there aren't too many filters, delete existing MC filters */
+ ixlv_init_multi(sc);
+
+ /* And (re-)install filters for all mcast addresses */
+ mcnt = if_multi_apply(iflib_get_ifp(ctx), ixlv_mc_filter_apply, sc);
- if (mcnt > 0) {
- flags = (IXL_FILTER_ADD | IXL_FILTER_USED | IXL_FILTER_MC);
- ixlv_add_hw_filters(vsi, flags, mcnt);
- }
-#endif
+ if (mcnt > 0)
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_ADD_MAC_FILTER);
+}
+
+static int
+ixlv_if_mtu_set(if_ctx_t ctx, uint32_t mtu)
+{
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+
+ IOCTL_DEBUGOUT("ioctl: SIOCSIFMTU (Set Interface MTU)");
+ if (mtu > IXL_MAX_FRAME - ETHER_HDR_LEN - ETHER_CRC_LEN -
+ ETHER_VLAN_ENCAP_LEN)
+ return (EINVAL);
- IOCTL_DEBUGOUT("ixl_if_multi_set: end");
+ vsi->shared->isc_max_frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN +
+ ETHER_VLAN_ENCAP_LEN;
+
+ return (0);
}
static void
ixlv_if_media_status(if_ctx_t ctx, struct ifmediareq *ifmr)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- struct ixlv_sc *sc = (struct ixlv_sc *)vsi->back;
- struct i40e_hw *hw = &sc->hw;
+#ifdef IXL_DEBUG
+ struct ifnet *ifp = iflib_get_ifp(ctx);
+#endif
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
- INIT_DEBUGOUT("ixl_media_status: begin");
+ INIT_DBG_IF(ifp, "begin");
- hw->phy.get_link_info = TRUE;
- i40e_get_link_status(hw, &sc->link_up);
+ ixlv_update_link_status(sc);
ifmr->ifm_status = IFM_AVALID;
ifmr->ifm_active = IFM_ETHER;
- if (!sc->link_up) {
+ if (!sc->link_up)
return;
- }
ifmr->ifm_status |= IFM_ACTIVE;
/* Hardware is always full-duplex */
ifmr->ifm_active |= IFM_FDX;
- // TODO: Check another variable to get link speed
-#if 0
- switch (hw->phy.link_info.phy_type) {
- /* 100 M */
- case I40E_PHY_TYPE_100BASE_TX:
- ifmr->ifm_active |= IFM_100_TX;
- break;
- /* 1 G */
- case I40E_PHY_TYPE_1000BASE_T:
- ifmr->ifm_active |= IFM_1000_T;
- break;
- case I40E_PHY_TYPE_1000BASE_SX:
- ifmr->ifm_active |= IFM_1000_SX;
- break;
- case I40E_PHY_TYPE_1000BASE_LX:
- ifmr->ifm_active |= IFM_1000_LX;
- break;
- case I40E_PHY_TYPE_1000BASE_T_OPTICAL:
- ifmr->ifm_active |= IFM_OTHER;
- break;
- /* 10 G */
- case I40E_PHY_TYPE_10GBASE_SFPP_CU:
- ifmr->ifm_active |= IFM_10G_TWINAX;
- break;
- case I40E_PHY_TYPE_10GBASE_SR:
- ifmr->ifm_active |= IFM_10G_SR;
- break;
- case I40E_PHY_TYPE_10GBASE_LR:
- ifmr->ifm_active |= IFM_10G_LR;
- break;
- case I40E_PHY_TYPE_10GBASE_T:
- ifmr->ifm_active |= IFM_10G_T;
- break;
- case I40E_PHY_TYPE_XAUI:
- case I40E_PHY_TYPE_XFI:
- case I40E_PHY_TYPE_10GBASE_AOC:
- ifmr->ifm_active |= IFM_OTHER;
- break;
- /* 25 G */
- case I40E_PHY_TYPE_25GBASE_KR:
- ifmr->ifm_active |= IFM_25G_KR;
- break;
- case I40E_PHY_TYPE_25GBASE_CR:
- ifmr->ifm_active |= IFM_25G_CR;
- break;
- case I40E_PHY_TYPE_25GBASE_SR:
- ifmr->ifm_active |= IFM_25G_SR;
- break;
- case I40E_PHY_TYPE_25GBASE_LR:
- ifmr->ifm_active |= IFM_UNKNOWN;
- break;
- /* 40 G */
- case I40E_PHY_TYPE_40GBASE_CR4:
- case I40E_PHY_TYPE_40GBASE_CR4_CU:
- ifmr->ifm_active |= IFM_40G_CR4;
- break;
- case I40E_PHY_TYPE_40GBASE_SR4:
- ifmr->ifm_active |= IFM_40G_SR4;
- break;
- case I40E_PHY_TYPE_40GBASE_LR4:
- ifmr->ifm_active |= IFM_40G_LR4;
- break;
- case I40E_PHY_TYPE_XLAUI:
- ifmr->ifm_active |= IFM_OTHER;
- break;
- case I40E_PHY_TYPE_1000BASE_KX:
- ifmr->ifm_active |= IFM_1000_KX;
- break;
- case I40E_PHY_TYPE_SGMII:
- ifmr->ifm_active |= IFM_1000_SGMII;
- break;
- /* ERJ: What's the difference between these? */
- case I40E_PHY_TYPE_10GBASE_CR1_CU:
- case I40E_PHY_TYPE_10GBASE_CR1:
- ifmr->ifm_active |= IFM_10G_CR1;
- break;
- case I40E_PHY_TYPE_10GBASE_KX4:
- ifmr->ifm_active |= IFM_10G_KX4;
- break;
- case I40E_PHY_TYPE_10GBASE_KR:
- ifmr->ifm_active |= IFM_10G_KR;
- break;
- case I40E_PHY_TYPE_SFI:
- ifmr->ifm_active |= IFM_10G_SFI;
- break;
- /* Our single 20G media type */
- case I40E_PHY_TYPE_20GBASE_KR2:
- ifmr->ifm_active |= IFM_20G_KR2;
- break;
- case I40E_PHY_TYPE_40GBASE_KR4:
- ifmr->ifm_active |= IFM_40G_KR4;
- break;
- case I40E_PHY_TYPE_XLPPI:
- case I40E_PHY_TYPE_40GBASE_AOC:
- ifmr->ifm_active |= IFM_40G_XLPPI;
- break;
- /* Unknown to driver */
- default:
- ifmr->ifm_active |= IFM_UNKNOWN;
- break;
+ /* Based on the link speed reported by the PF over the AdminQ, choose a
+ * PHY type to report. This isn't 100% correct since we don't really
+ * know the underlying PHY type of the PF, but at least we can report
+ * a valid link speed...
+ */
+ switch (sc->link_speed) {
+ case VIRTCHNL_LINK_SPEED_100MB:
+ ifmr->ifm_active |= IFM_100_TX;
+ break;
+ case VIRTCHNL_LINK_SPEED_1GB:
+ ifmr->ifm_active |= IFM_1000_T;
+ break;
+ case VIRTCHNL_LINK_SPEED_10GB:
+ ifmr->ifm_active |= IFM_10G_SR;
+ break;
+ case VIRTCHNL_LINK_SPEED_20GB:
+ case VIRTCHNL_LINK_SPEED_25GB:
+ ifmr->ifm_active |= IFM_25G_SR;
+ break;
+ case VIRTCHNL_LINK_SPEED_40GB:
+ ifmr->ifm_active |= IFM_40G_SR4;
+ break;
+ default:
+ ifmr->ifm_active |= IFM_UNKNOWN;
+ break;
}
- /* Report flow control status as well */
- if (hw->phy.link_info.an_info & I40E_AQ_LINK_PAUSE_TX)
- ifmr->ifm_active |= IFM_ETH_TXPAUSE;
- if (hw->phy.link_info.an_info & I40E_AQ_LINK_PAUSE_RX)
- ifmr->ifm_active |= IFM_ETH_RXPAUSE;
- #endif
+
+ INIT_DBG_IF(ifp, "end");
}
static int
@@ -1572,57 +1429,44 @@
return (ENODEV);
}
-// TODO: Rework
static int
ixlv_if_promisc_set(if_ctx_t ctx, int flags)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
struct ifnet *ifp = iflib_get_ifp(ctx);
- struct i40e_hw *hw = vsi->hw;
- int err;
- bool uni = FALSE, multi = FALSE;
+
+ sc->promisc_flags = 0;
if (flags & IFF_ALLMULTI ||
if_multiaddr_count(ifp, MAX_MULTICAST_ADDR) == MAX_MULTICAST_ADDR)
- multi = TRUE;
+ sc->promisc_flags |= FLAG_VF_MULTICAST_PROMISC;
if (flags & IFF_PROMISC)
- uni = TRUE;
+ sc->promisc_flags |= FLAG_VF_UNICAST_PROMISC;
- err = i40e_aq_set_vsi_unicast_promiscuous(hw,
- vsi->seid, uni, NULL, false);
- if (err)
- return (err);
- err = i40e_aq_set_vsi_multicast_promiscuous(hw,
- vsi->seid, multi, NULL);
- return (err);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_CONFIGURE_PROMISC);
+
+ return (0);
}
static void
ixlv_if_timer(if_ctx_t ctx, uint16_t qid)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- struct ixlv_sc *sc = vsi->back;
- //struct i40e_hw *hw = &sc->hw;
- //struct ixl_tx_queue *que = &vsi->tx_queues[qid];
- //u32 mask;
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct i40e_hw *hw = &sc->hw;
+ u32 val;
-#if 0
- /*
- ** Check status of the queues
- */
- mask = (I40E_PFINT_DYN_CTLN_INTENA_MASK |
- I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK);
-
- /* If queue param has outstanding work, trigger sw irq */
- // TODO: TX queues in iflib don't use HW interrupts; does this do anything?
- if (que->busy)
- wr32(hw, I40E_PFINT_DYN_CTLN(que->txr.me), mask);
- #endif
-
- // XXX: Is this timer per-queue?
if (qid != 0)
return;
+ /* Check whether the PF has triggered a VF reset */
+ val = rd32(hw, I40E_VFGEN_RSTAT) &
+ I40E_VFGEN_RSTAT_VFR_STATE_MASK;
+ if (val != VIRTCHNL_VFR_VFACTIVE
+ && val != VIRTCHNL_VFR_COMPLETED) {
+ ixlv_dbg_info(sc, "reset in progress! (%d)\n", val);
+ return;
+ }
+
/* Fire off the adminq task */
iflib_admin_intr_deferred(ctx);
@@ -1633,35 +1477,49 @@
static void
ixlv_if_vlan_register(if_ctx_t ctx, u16 vtag)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- //struct i40e_hw *hw = vsi->hw;
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+ struct ixlv_vlan_filter *v;
if ((vtag == 0) || (vtag > 4095)) /* Invalid */
return;
++vsi->num_vlans;
- // TODO: Redo
- // ixlv_add_filter(vsi, hw->mac.addr, vtag);
+ v = malloc(sizeof(struct ixlv_vlan_filter), M_DEVBUF, M_WAITOK | M_ZERO);
+ SLIST_INSERT_HEAD(sc->vlan_filters, v, next);
+ v->vlan = vtag;
+ v->flags = IXL_FILTER_ADD;
+
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_ADD_VLAN_FILTER);
}
static void
ixlv_if_vlan_unregister(if_ctx_t ctx, u16 vtag)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
- //struct i40e_hw *hw = vsi->hw;
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
+ struct ixlv_vlan_filter *v;
+ int i = 0;
if ((vtag == 0) || (vtag > 4095)) /* Invalid */
return;
- --vsi->num_vlans;
- // TODO: Redo
- // ixlv_del_filter(vsi, hw->mac.addr, vtag);
+ SLIST_FOREACH(v, sc->vlan_filters, next) {
+ if (v->vlan == vtag) {
+ v->flags = IXL_FILTER_DEL;
+ ++i;
+ --vsi->num_vlans;
+ }
+ }
+ if (i)
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_DEL_VLAN_FILTER);
}
static uint64_t
ixlv_if_get_counter(if_ctx_t ctx, ift_counter cnt)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
+ struct ixl_vsi *vsi = &sc->vsi;
if_t ifp = iflib_get_ifp(ctx);
switch (cnt) {
@@ -1695,53 +1553,6 @@
}
}
-static int
-ixlv_allocate_pci_resources(struct ixlv_sc *sc)
-{
- struct i40e_hw *hw = &sc->hw;
- device_t dev = iflib_get_dev(sc->vsi.ctx);
- int rid;
-
- /* Map BAR0 */
- rid = PCIR_BAR(0);
- sc->pci_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
- &rid, RF_ACTIVE);
-
- if (!(sc->pci_mem)) {
- device_printf(dev, "Unable to allocate bus resource: PCI memory\n");
- return (ENXIO);
- }
-
- /* Save off the PCI information */
- hw->vendor_id = pci_get_vendor(dev);
- hw->device_id = pci_get_device(dev);
- hw->revision_id = pci_read_config(dev, PCIR_REVID, 1);
- hw->subsystem_vendor_id =
- pci_read_config(dev, PCIR_SUBVEND_0, 2);
- hw->subsystem_device_id =
- pci_read_config(dev, PCIR_SUBDEV_0, 2);
-
- hw->bus.device = pci_get_slot(dev);
- hw->bus.func = pci_get_function(dev);
-
- /* Save off register access information */
- sc->osdep.mem_bus_space_tag =
- rman_get_bustag(sc->pci_mem);
- sc->osdep.mem_bus_space_handle =
- rman_get_bushandle(sc->pci_mem);
- sc->osdep.mem_bus_space_size = rman_get_size(sc->pci_mem);
- sc->osdep.flush_reg = I40E_VFGEN_RSTAT;
- sc->osdep.dev = dev;
-
- sc->hw.hw_addr = (u8 *) &sc->osdep.mem_bus_space_handle;
- sc->hw.back = &sc->osdep;
-
- /* Disable adminq interrupts (just in case) */
- /* TODO: Probably not necessary */
- // ixlv_disable_adminq_irq(&sc->hw);
-
- return (0);
- }
static void
ixlv_free_pci_resources(struct ixlv_sc *sc)
@@ -1751,13 +1562,10 @@
device_t dev = sc->dev;
/* We may get here before stations are setup */
- // TODO: Check if we can still check against sc->msix
- if ((sc->msix > 0) || (rx_que == NULL))
+ if (rx_que == NULL)
goto early;
- /*
- ** Release all msix VSI resources:
- */
+ /* Release all interrupts */
iflib_irq_free(vsi->ctx, &vsi->irq);
for (int i = 0; i < vsi->num_rx_queues; i++, rx_que++)
@@ -1793,6 +1601,7 @@
__func__);
return (error);
}
+ pci_enable_busmaster(dev);
error = i40e_shutdown_adminq(hw);
if (error) {
@@ -1801,306 +1610,65 @@
return (error);
}
- error = i40e_init_adminq(hw);
- if (error) {
- device_printf(dev, "%s: init_adminq failed: %d\n",
- __func__, error);
- return(error);
- }
-
- return (0);
-}
-
-static int
-ixlv_reset_complete(struct i40e_hw *hw)
-{
- u32 reg;
-
- /* Wait up to ~10 seconds */
- for (int i = 0; i < 100; i++) {
- reg = rd32(hw, I40E_VFGEN_RSTAT) &
- I40E_VFGEN_RSTAT_VFR_STATE_MASK;
-
- if ((reg == VIRTCHNL_VFR_VFACTIVE) ||
- (reg == VIRTCHNL_VFR_COMPLETED))
- return (0);
- i40e_msec_pause(100);
- }
-
- return (EBUSY);
-}
-
-static void
-ixlv_setup_interface(device_t dev, struct ixl_vsi *vsi)
-{
- if_ctx_t ctx = vsi->ctx;
- struct ixlv_sc *sc = vsi->back;
- struct ifnet *ifp = iflib_get_ifp(ctx);
- uint64_t cap;
- //struct ixl_queue *que = vsi->queues;
-
- INIT_DBG_DEV(dev, "begin");
-
- /* TODO: Remove VLAN_ENCAP_LEN? */
- vsi->shared->isc_max_frame_size =
- ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
- + ETHER_VLAN_ENCAP_LEN;
-#if __FreeBSD_version >= 1100000
- if_setbaudrate(ifp, IF_Gbps(40));
-#else
- if_initbaudrate(ifp, IF_Gbps(40));
-#endif
-
- /* Media types based on reported link speed over AdminQ */
- ifmedia_add(&sc->media, IFM_ETHER | IFM_100_TX, 0, NULL);
- ifmedia_add(&sc->media, IFM_ETHER | IFM_1000_T, 0, NULL);
- ifmedia_add(&sc->media, IFM_ETHER | IFM_10G_SR, 0, NULL);
- ifmedia_add(&sc->media, IFM_ETHER | IFM_25G_SR, 0, NULL);
- ifmedia_add(&sc->media, IFM_ETHER | IFM_40G_SR4, 0, NULL);
-
- ifmedia_add(&sc->media, IFM_ETHER | IFM_AUTO, 0, NULL);
- ifmedia_set(&sc->media, IFM_ETHER | IFM_AUTO);
-
- INIT_DBG_DEV(dev, "end");
- return (0);
-}
-#if 0
-
-/*
-** Allocate and setup a single queue
-*/
-static int
-ixlv_setup_queue(struct ixlv_sc *sc, struct ixl_queue *que)
-{
- device_t dev = sc->dev;
- struct tx_ring *txr;
- struct rx_ring *rxr;
- int rsize, tsize;
- int error = I40E_SUCCESS;
-
- txr = &que->txr;
- txr->que = que;
- txr->tail = I40E_QTX_TAIL1(que->me);
- /* Initialize the TX lock */
- snprintf(txr->mtx_name, sizeof(txr->mtx_name), "%s:tx(%d)",
- device_get_nameunit(dev), que->me);
- mtx_init(&txr->mtx, txr->mtx_name, NULL, MTX_DEF);
- /*
- * Create the TX descriptor ring
- *
- * In Head Writeback mode, the descriptor ring is one bigger
- * than the number of descriptors for space for the HW to
- * write back index of last completed descriptor.
- */
- if (sc->vsi.enable_head_writeback) {
- tsize = roundup2((que->num_tx_desc *
- sizeof(struct i40e_tx_desc)) +
- sizeof(u32), DBA_ALIGN);
- } else {
- tsize = roundup2((que->num_tx_desc *
- sizeof(struct i40e_tx_desc)), DBA_ALIGN);
- }
- if (i40e_allocate_dma_mem(&sc->hw,
- &txr->dma, i40e_mem_reserved, tsize, DBA_ALIGN)) {
- device_printf(dev,
- "Unable to allocate TX Descriptor memory\n");
- error = ENOMEM;
- goto err_destroy_tx_mtx;
- }
- txr->base = (struct i40e_tx_desc *)txr->dma.va;
- bzero((void *)txr->base, tsize);
- /* Now allocate transmit soft structs for the ring */
- if (ixl_allocate_tx_data(que)) {
- device_printf(dev,
- "Critical Failure setting up TX structures\n");
- error = ENOMEM;
- goto err_free_tx_dma;
- }
- /* Allocate a buf ring */
- txr->br = buf_ring_alloc(ixlv_txbrsz, M_DEVBUF,
- M_WAITOK, &txr->mtx);
- if (txr->br == NULL) {
- device_printf(dev,
- "Critical Failure setting up TX buf ring\n");
- error = ENOMEM;
- goto err_free_tx_data;
- }
-
- /*
- * Next the RX queues...
- */
- rsize = roundup2(que->num_rx_desc *
- sizeof(union i40e_rx_desc), DBA_ALIGN);
- rxr = &que->rxr;
- rxr->que = que;
- rxr->tail = I40E_QRX_TAIL1(que->me);
-
- /* Initialize the RX side lock */
- snprintf(rxr->mtx_name, sizeof(rxr->mtx_name), "%s:rx(%d)",
- device_get_nameunit(dev), que->me);
- mtx_init(&rxr->mtx, rxr->mtx_name, NULL, MTX_DEF);
-
- if (i40e_allocate_dma_mem(&sc->hw,
- &rxr->dma, i40e_mem_reserved, rsize, 4096)) { //JFV - should this be DBA?
- device_printf(dev,
- "Unable to allocate RX Descriptor memory\n");
- error = ENOMEM;
- goto err_destroy_rx_mtx;
- }
- rxr->base = (union i40e_rx_desc *)rxr->dma.va;
- bzero((void *)rxr->base, rsize);
-
- /* Allocate receive soft structs for the ring */
- if (ixl_allocate_rx_data(que)) {
- device_printf(dev,
- "Critical Failure setting up receive structs\n");
- error = ENOMEM;
- goto err_free_rx_dma;
- }
-
- return (0);
-
-err_free_rx_dma:
- i40e_free_dma_mem(&sc->hw, &rxr->dma);
-err_destroy_rx_mtx:
- mtx_destroy(&rxr->mtx);
- /* err_free_tx_buf_ring */
- buf_ring_free(txr->br, M_DEVBUF);
-err_free_tx_data:
- ixl_free_que_tx(que);
-err_free_tx_dma:
- i40e_free_dma_mem(&sc->hw, &txr->dma);
-err_destroy_tx_mtx:
- mtx_destroy(&txr->mtx);
-
- return (error);
-}
-#endif
-
-/*
-** Allocate and setup the interface queues
-*/
-static int
-ixlv_setup_queues(struct ixlv_sc *sc)
-{
- device_t dev = sc->dev;
- struct ixl_vsi *vsi;
- struct ixl_queue *que;
- int i;
- int error = I40E_SUCCESS;
-
- vsi = &sc->vsi;
- vsi->back = (void *)sc;
- vsi->hw = &sc->hw;
- vsi->num_vlans = 0;
-
- /* Get memory for the station queues */
- if (!(vsi->queues =
- (struct ixl_queue *) malloc(sizeof(struct ixl_queue) *
- vsi->num_queues, M_DEVBUF, M_NOWAIT | M_ZERO))) {
- device_printf(dev, "Unable to allocate queue memory\n");
- return ENOMEM;
- }
-
- for (i = 0; i < vsi->num_queues; i++) {
- que = &vsi->queues[i];
- que->num_tx_desc = vsi->num_tx_desc;
- que->num_rx_desc = vsi->num_rx_desc;
- que->me = i;
- que->vsi = vsi;
-
- if (ixlv_setup_queue(sc, que)) {
- error = ENOMEM;
- goto err_free_queues;
- }
+ error = i40e_init_adminq(hw);
+ if (error) {
+ device_printf(dev, "%s: init_adminq failed: %d\n",
+ __func__, error);
+ return (error);
}
+ ixlv_enable_adminq_irq(hw);
return (0);
-
-err_free_queues:
- while (i--)
- ixlv_free_queue(sc, &vsi->queues[i]);
-
- free(vsi->queues, M_DEVBUF);
-
- return (error);
}
-#if 0
-/*
-** This routine is run via an vlan config EVENT,
-** it enables us to use the HW Filter table since
-** we can get the vlan id. This just creates the
-** entry in the soft version of the VFTA, init will
-** repopulate the real table.
-*/
-static void
-ixlv_register_vlan(void *arg, struct ifnet *ifp, u16 vtag)
+static int
+ixlv_reset_complete(struct i40e_hw *hw)
{
- struct ixl_vsi *vsi = arg;
- struct ixlv_sc *sc = vsi->back;
- struct ixlv_vlan_filter *v;
-
-
- if (ifp->if_softc != arg) /* Not our event */
- return;
+ u32 reg;
- if ((vtag == 0) || (vtag > 4095)) /* Invalid */
- return;
+ /* Wait up to ~10 seconds */
+ for (int i = 0; i < 100; i++) {
+ reg = rd32(hw, I40E_VFGEN_RSTAT) &
+ I40E_VFGEN_RSTAT_VFR_STATE_MASK;
- /* Sanity check - make sure it doesn't already exist */
- SLIST_FOREACH(v, sc->vlan_filters, next) {
- if (v->vlan == vtag)
- return;
+ if ((reg == VIRTCHNL_VFR_VFACTIVE) ||
+ (reg == VIRTCHNL_VFR_COMPLETED))
+ return (0);
+ i40e_msec_pause(100);
}
- mtx_lock(&sc->mtx);
- ++vsi->num_vlans;
- v = malloc(sizeof(struct ixlv_vlan_filter), M_DEVBUF, M_NOWAIT | M_ZERO);
- SLIST_INSERT_HEAD(sc->vlan_filters, v, next);
- v->vlan = vtag;
- v->flags = IXL_FILTER_ADD;
- ixl_vc_enqueue(&sc->vc_mgr, &sc->add_vlan_cmd,
- IXLV_FLAG_AQ_ADD_VLAN_FILTER, ixl_init_cmd_complete, sc);
- mtx_unlock(&sc->mtx);
- return;
+ return (EBUSY);
}
-/*
-** This routine is run via an vlan
-** unconfig EVENT, remove our entry
-** in the soft vfta.
-*/
static void
-ixlv_unregister_vlan(void *arg, struct ifnet *ifp, u16 vtag)
+ixlv_setup_interface(device_t dev, struct ixlv_sc *sc)
{
- struct ixl_vsi *vsi = arg;
- struct ixlv_sc *sc = vsi->back;
- struct ixlv_vlan_filter *v;
- int i = 0;
-
- if (ifp->if_softc != arg)
- return;
+ struct ixl_vsi *vsi = &sc->vsi;
+ if_ctx_t ctx = vsi->ctx;
+ struct ifnet *ifp = iflib_get_ifp(ctx);
- if ((vtag == 0) || (vtag > 4095)) /* Invalid */
- return;
+ INIT_DBG_DEV(dev, "begin");
- mtx_lock(&sc->mtx);
- SLIST_FOREACH(v, sc->vlan_filters, next) {
- if (v->vlan == vtag) {
- v->flags = IXL_FILTER_DEL;
- ++i;
- --vsi->num_vlans;
- }
- }
- if (i)
- ixl_vc_enqueue(&sc->vc_mgr, &sc->del_vlan_cmd,
- IXLV_FLAG_AQ_DEL_VLAN_FILTER, ixl_init_cmd_complete, sc);
- mtx_unlock(&sc->mtx);
- return;
-}
+ vsi->shared->isc_max_frame_size =
+ ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
+ + ETHER_VLAN_ENCAP_LEN;
+#if __FreeBSD_version >= 1100000
+ if_setbaudrate(ifp, IF_Gbps(40));
+#else
+ if_initbaudrate(ifp, IF_Gbps(40));
#endif
+ /* Media types based on reported link speed over AdminQ */
+ ifmedia_add(vsi->media, IFM_ETHER | IFM_100_TX, 0, NULL);
+ ifmedia_add(vsi->media, IFM_ETHER | IFM_1000_T, 0, NULL);
+ ifmedia_add(vsi->media, IFM_ETHER | IFM_10G_SR, 0, NULL);
+ ifmedia_add(vsi->media, IFM_ETHER | IFM_25G_SR, 0, NULL);
+ ifmedia_add(vsi->media, IFM_ETHER | IFM_40G_SR4, 0, NULL);
+
+ ifmedia_add(vsi->media, IFM_ETHER | IFM_AUTO, 0, NULL);
+ ifmedia_set(vsi->media, IFM_ETHER | IFM_AUTO);
+}
+
/*
** Get a new filter and add it to the mac filter list.
*/
@@ -2146,36 +1714,38 @@
{
struct ixlv_sc *sc = arg;
struct i40e_hw *hw = &sc->hw;
- // device_t dev = sc->dev;
- u32 reg;
+ u32 reg, mask;
bool do_task = FALSE;
++sc->admin_irq;
reg = rd32(hw, I40E_VFINT_ICR01);
+ /*
+ * For masking off interrupt causes that need to be handled before
+ * they can be re-enabled
+ */
mask = rd32(hw, I40E_VFINT_ICR0_ENA1);
- reg = rd32(hw, I40E_VFINT_DYN_CTL01);
- reg |= I40E_VFINT_DYN_CTL01_CLEARPBA_MASK;
- wr32(hw, I40E_VFINT_DYN_CTL01, reg);
-
/* Check on the cause */
- if (reg & I40E_VFINT_ICR0_ADMINQ_MASK)
+ if (reg & I40E_VFINT_ICR0_ADMINQ_MASK) {
+ mask &= ~I40E_VFINT_ICR0_ENA_ADMINQ_MASK;
do_task = TRUE;
+ }
+
+ wr32(hw, I40E_VFINT_ICR0_ENA1, mask);
+ ixlv_enable_adminq_irq(hw);
if (do_task)
- iflib_admin_intr_deferred(sc->vsi.ctx);
+ return (FILTER_SCHEDULE_THREAD);
else
- ixlv_enable_adminq_irq(hw);
-
- return (FILTER_HANDLED);
+ return (FILTER_HANDLED);
}
void
ixlv_enable_intr(struct ixl_vsi *vsi)
{
- struct i40e_hw *hw = vsi->hw;
- struct ixl_rx_queue *que = vsi->rx_queues;
+ struct i40e_hw *hw = vsi->hw;
+ struct ixl_rx_queue *que = vsi->rx_queues;
ixlv_enable_adminq_irq(hw);
for (int i = 0; i < vsi->num_rx_queues; i++, que++)
@@ -2185,10 +1755,9 @@
void
ixlv_disable_intr(struct ixl_vsi *vsi)
{
- struct i40e_hw *hw = vsi->hw;
- struct ixl_rx_queue *que = vsi->rx_queues;
+ struct i40e_hw *hw = vsi->hw;
+ struct ixl_rx_queue *que = vsi->rx_queues;
- ixlv_disable_adminq_irq(hw);
for (int i = 0; i < vsi->num_rx_queues; i++, que++)
ixlv_disable_queue_irq(hw, que->rxr.me);
}
@@ -2230,40 +1799,56 @@
wr32(hw, I40E_VFINT_DYN_CTLN1(id),
I40E_VFINT_DYN_CTLN1_ITR_INDX_MASK);
rd32(hw, I40E_VFGEN_RSTAT);
- return;
}
-/*
- * Get initial ITR values from tunable values.
- */
static void
-ixlv_configure_itr(struct ixlv_sc *sc)
+ixlv_configure_tx_itr(struct ixlv_sc *sc)
{
struct i40e_hw *hw = &sc->hw;
struct ixl_vsi *vsi = &sc->vsi;
- struct ixl_rx_queue *rx_que = vsi->rx_queues;
-
- vsi->rx_itr_setting = ixlv_rx_itr;
- //vsi->tx_itr_setting = ixlv_tx_itr;
-
- for (int i = 0; i < vsi->num_rx_queues; i++, rx_que++) {
- struct rx_ring *rxr = &rx_que->rxr;
+ struct ixl_tx_queue *que = vsi->tx_queues;
- wr32(hw, I40E_VFINT_ITRN1(IXL_RX_ITR, i),
- vsi->rx_itr_setting);
- rxr->itr = vsi->rx_itr_setting;
- rxr->latency = IXL_AVE_LATENCY;
+ vsi->tx_itr_setting = sc->tx_itr;
-#if 0
+ for (int i = 0; i < vsi->num_tx_queues; i++, que++) {
struct tx_ring *txr = &que->txr;
+
wr32(hw, I40E_VFINT_ITRN1(IXL_TX_ITR, i),
vsi->tx_itr_setting);
txr->itr = vsi->tx_itr_setting;
txr->latency = IXL_AVE_LATENCY;
-#endif
}
}
+static void
+ixlv_configure_rx_itr(struct ixlv_sc *sc)
+{
+ struct i40e_hw *hw = &sc->hw;
+ struct ixl_vsi *vsi = &sc->vsi;
+ struct ixl_rx_queue *que = vsi->rx_queues;
+
+ vsi->rx_itr_setting = sc->rx_itr;
+
+ for (int i = 0; i < vsi->num_rx_queues; i++, que++) {
+ struct rx_ring *rxr = &que->rxr;
+
+ wr32(hw, I40E_VFINT_ITRN1(IXL_RX_ITR, i),
+ vsi->rx_itr_setting);
+ rxr->itr = vsi->rx_itr_setting;
+ rxr->latency = IXL_AVE_LATENCY;
+ }
+}
+
+/*
+ * Get initial ITR values from tunable values.
+ */
+static void
+ixlv_configure_itr(struct ixlv_sc *sc)
+{
+ ixlv_configure_tx_itr(sc);
+ ixlv_configure_rx_itr(sc);
+}
+
/*
** Provide a update to the queue RX
** interrupt moderation value.
@@ -2274,16 +1859,16 @@
struct ixl_vsi *vsi = que->vsi;
struct i40e_hw *hw = vsi->hw;
struct rx_ring *rxr = &que->rxr;
- u16 rx_itr;
- u16 rx_latency = 0;
- int rx_bytes;
-
+ //u16 rx_itr;
+ //u16 rx_latency = 0;
+ //int rx_bytes;
/* Idle, do nothing */
if (rxr->bytes == 0)
return;
- if (ixlv_dynamic_rx_itr) {
+#if 0
+ if (sc->ixlv_dynamic_rx_itr) {
rx_bytes = rxr->bytes/rxr->itr;
rx_itr = rxr->itr;
@@ -2323,6 +1908,7 @@
que->rxr.me), rxr->itr);
}
} else { /* We may have toggled to non-dynamic */
+#endif
if (vsi->rx_itr_setting & IXL_ITR_DYNAMIC)
vsi->rx_itr_setting = ixlv_rx_itr;
/* Update the hardware if needed */
@@ -2331,13 +1917,15 @@
wr32(hw, I40E_VFINT_ITRN1(IXL_RX_ITR,
que->rxr.me), rxr->itr);
}
+#if 0
}
rxr->bytes = 0;
rxr->packets = 0;
- return;
+#endif
}
+#if 0
/*
** Provide a update to the queue TX
** interrupt moderation value.
@@ -2348,15 +1936,17 @@
struct ixl_vsi *vsi = que->vsi;
struct i40e_hw *hw = vsi->hw;
struct tx_ring *txr = &que->txr;
+#if 0
u16 tx_itr;
u16 tx_latency = 0;
int tx_bytes;
-
+#endif
/* Idle, do nothing */
if (txr->bytes == 0)
return;
+#if 0
if (ixlv_dynamic_tx_itr) {
tx_bytes = txr->bytes/txr->itr;
tx_itr = txr->itr;
@@ -2397,6 +1987,7 @@
}
} else { /* We may have toggled to non-dynamic */
+#endif
if (vsi->tx_itr_setting & IXL_ITR_DYNAMIC)
vsi->tx_itr_setting = ixlv_tx_itr;
/* Update the hardware if needed */
@@ -2405,328 +1996,49 @@
wr32(hw, I40E_VFINT_ITRN1(IXL_TX_ITR,
que->txr.me), txr->itr);
}
+#if 0
}
txr->bytes = 0;
txr->packets = 0;
- return;
-}
-
-#if 0
-/*
-**
-** MSIX Interrupt Handlers and Tasklets
-**
-*/
-static void
-ixlv_handle_que(void *context, int pending)
-{
- struct ixl_queue *que = context;
- struct ixl_vsi *vsi = que->vsi;
- struct i40e_hw *hw = vsi->hw;
- struct tx_ring *txr = &que->txr;
- struct ifnet *ifp = vsi->ifp;
- bool more;
-
- if (ifp->if_drv_flags & IFF_DRV_RUNNING) {
- more = ixl_rxeof(que, IXL_RX_LIMIT);
- mtx_lock(&txr->mtx);
- ixl_txeof(que);
- if (!drbr_empty(ifp, txr->br))
- ixl_mq_start_locked(ifp, txr);
- mtx_unlock(&txr->mtx);
- if (more) {
- taskqueue_enqueue(que->tq, &que->task);
- return;
- }
- }
-
- /* Reenable this interrupt - hmmm */
- ixlv_enable_queue_irq(hw, que->me);
- return;
+#endif
}
#endif
-
static int
ixlv_msix_que(void *arg)
-{
- struct ixl_rx_queue *que = arg;
-
- ++que->irqs;
-
- ixlv_set_queue_rx_itr(que);
- ixlv_set_queue_tx_itr(que);
-
- return (FILTER_SCHEDULE_THREAD);
-}
-
-
-/*********************************************************************
- *
- * Media Ioctl callback
- *
- * This routine is called whenever the user queries the status of
- * the interface using ifconfig.
- *
- **********************************************************************/
-static void
-ixlv_media_status(struct ifnet * ifp, struct ifmediareq * ifmr)
-{
- struct ixl_vsi *vsi = ifp->if_softc;
- struct ixlv_sc *sc = vsi->back;
-
- INIT_DBG_IF(ifp, "begin");
-
- mtx_lock(&sc->mtx);
-
- ixlv_update_link_status(sc);
-
- ifmr->ifm_status = IFM_AVALID;
- ifmr->ifm_active = IFM_ETHER;
-
- if (!sc->link_up) {
- mtx_unlock(&sc->mtx);
- INIT_DBG_IF(ifp, "end: link not up");
- return;
- }
-
- ifmr->ifm_status |= IFM_ACTIVE;
- /* Hardware is always full-duplex */
- ifmr->ifm_active |= IFM_FDX;
-
- /* Based on the link speed reported by the PF over the AdminQ, choose a
- * PHY type to report. This isn't 100% correct since we don't really
- * know the underlying PHY type of the PF, but at least we can report
- * a valid link speed...
- */
- switch (sc->link_speed) {
- case VIRTCHNL_LINK_SPEED_100MB:
- ifmr->ifm_active |= IFM_100_TX;
- break;
- case VIRTCHNL_LINK_SPEED_1GB:
- ifmr->ifm_active |= IFM_1000_T;
- break;
- case VIRTCHNL_LINK_SPEED_10GB:
- ifmr->ifm_active |= IFM_10G_SR;
- break;
- case VIRTCHNL_LINK_SPEED_20GB:
- case VIRTCHNL_LINK_SPEED_25GB:
- ifmr->ifm_active |= IFM_25G_SR;
- break;
- case VIRTCHNL_LINK_SPEED_40GB:
- ifmr->ifm_active |= IFM_40G_SR4;
- break;
- default:
- ifmr->ifm_active |= IFM_UNKNOWN;
- break;
- }
-
- mtx_unlock(&sc->mtx);
- INIT_DBG_IF(ifp, "end");
- return;
-}
-
-/*********************************************************************
- *
- * Media Ioctl callback
- *
- * This routine is called when the user changes speed/duplex using
- * media/mediaopt option with ifconfig.
- *
- **********************************************************************/
-static int
-ixlv_media_change(struct ifnet * ifp)
-{
- struct ixl_vsi *vsi = ifp->if_softc;
- struct ifmedia *ifm = &vsi->media;
-
- INIT_DBG_IF(ifp, "begin");
-
- if (IFM_TYPE(ifm->ifm_media) != IFM_ETHER)
- return (EINVAL);
-
- if_printf(ifp, "Changing speed is not supported\n");
-
- INIT_DBG_IF(ifp, "end");
- return (ENODEV);
-}
-
-
-#if 0
-/*********************************************************************
- * Multicast Initialization
- *
- * This routine is called by init to reset a fresh state.
- *
- **********************************************************************/
-
-static void
-ixlv_init_multi(struct ixl_vsi *vsi)
-{
- struct ixlv_mac_filter *f;
- struct ixlv_sc *sc = vsi->back;
- int mcnt = 0;
-
- IOCTL_DBG_IF(vsi->ifp, "begin");
-
- /* First clear any multicast filters */
- SLIST_FOREACH(f, sc->mac_filters, next) {
- if ((f->flags & IXL_FILTER_USED)
- && (f->flags & IXL_FILTER_MC)) {
- f->flags |= IXL_FILTER_DEL;
- mcnt++;
- }
- }
- if (mcnt > 0)
- ixl_vc_enqueue(&sc->vc_mgr, &sc->del_multi_cmd,
- IXLV_FLAG_AQ_DEL_MAC_FILTER, ixl_init_cmd_complete,
- sc);
-
- IOCTL_DBG_IF(vsi->ifp, "end");
-}
-
-static void
-ixlv_add_multi(struct ixl_vsi *vsi)
-{
- struct ifmultiaddr *ifma;
- struct ifnet *ifp = vsi->ifp;
- struct ixlv_sc *sc = vsi->back;
- int mcnt = 0;
-
- IOCTL_DBG_IF(ifp, "begin");
+{
+ struct ixl_rx_queue *rx_que = arg;
- if_maddr_rlock(ifp);
- /*
- ** Get a count, to decide if we
- ** simply use multicast promiscuous.
- */
- CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
- if (ifma->ifma_addr->sa_family != AF_LINK)
- continue;
- mcnt++;
- }
- if_maddr_runlock(ifp);
-
- /* TODO: Remove -- cannot set promiscuous mode in a VF */
- if (__predict_false(mcnt >= MAX_MULTICAST_ADDR)) {
- /* delete all multicast filters */
- ixlv_init_multi(vsi);
- sc->promiscuous_flags |= FLAG_VF_MULTICAST_PROMISC;
- ixl_vc_enqueue(&sc->vc_mgr, &sc->add_multi_cmd,
- IXLV_FLAG_AQ_CONFIGURE_PROMISC, ixl_init_cmd_complete,
- sc);
- IOCTL_DEBUGOUT("%s: end: too many filters", __func__);
- return;
- }
+ ++rx_que->irqs;
- mcnt = 0;
- if_maddr_rlock(ifp);
- CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
- if (ifma->ifma_addr->sa_family != AF_LINK)
- continue;
- if (!ixlv_add_mac_filter(sc,
- (u8*)LLADDR((struct sockaddr_dl *) ifma->ifma_addr),
- IXL_FILTER_MC))
- mcnt++;
- }
- if_maddr_runlock(ifp);
- /*
- ** Notify AQ task that sw filters need to be
- ** added to hw list
- */
- if (mcnt > 0)
- ixl_vc_enqueue(&sc->vc_mgr, &sc->add_multi_cmd,
- IXLV_FLAG_AQ_ADD_MAC_FILTER, ixl_init_cmd_complete,
- sc);
+ ixlv_set_queue_rx_itr(rx_que);
+ // ixlv_set_queue_tx_itr(que);
- IOCTL_DBG_IF(ifp, "end");
+ return (FILTER_SCHEDULE_THREAD);
}
+/*********************************************************************
+ * Multicast Initialization
+ *
+ * This routine is called by init to reset to a fresh state.
+ *
+ **********************************************************************/
static void
-ixlv_del_multi(struct ixl_vsi *vsi)
+ixlv_init_multi(struct ixlv_sc *sc)
{
struct ixlv_mac_filter *f;
- struct ifmultiaddr *ifma;
- struct ifnet *ifp = vsi->ifp;
- struct ixlv_sc *sc = vsi->back;
- int mcnt = 0;
- bool match = FALSE;
+ int mcnt = 0;
- IOCTL_DBG_IF(ifp, "begin");
-
- /* Search for removed multicast addresses */
- if_maddr_rlock(ifp);
+ /* First clear any multicast filters */
SLIST_FOREACH(f, sc->mac_filters, next) {
if ((f->flags & IXL_FILTER_USED)
&& (f->flags & IXL_FILTER_MC)) {
- /* check if mac address in filter is in sc's list */
- match = FALSE;
- CK_STAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link) {
- if (ifma->ifma_addr->sa_family != AF_LINK)
- continue;
- u8 *mc_addr =
- (u8 *)LLADDR((struct sockaddr_dl *)ifma->ifma_addr);
- if (cmp_etheraddr(f->macaddr, mc_addr)) {
- match = TRUE;
- break;
- }
- }
- /* if this filter is not in the sc's list, remove it */
- if (match == FALSE && !(f->flags & IXL_FILTER_DEL)) {
- f->flags |= IXL_FILTER_DEL;
- mcnt++;
- IOCTL_DBG_IF(ifp, "marked: " MAC_FORMAT,
- MAC_FORMAT_ARGS(f->macaddr));
- }
- else if (match == FALSE)
- IOCTL_DBG_IF(ifp, "exists: " MAC_FORMAT,
- MAC_FORMAT_ARGS(f->macaddr));
+ f->flags |= IXL_FILTER_DEL;
+ mcnt++;
}
}
- if_maddr_runlock(ifp);
-
if (mcnt > 0)
- ixl_vc_enqueue(&sc->vc_mgr, &sc->del_multi_cmd,
- IXLV_FLAG_AQ_DEL_MAC_FILTER, ixl_init_cmd_complete,
- sc);
-
- IOCTL_DBG_IF(ifp, "end");
-}
-
-static void
-ixlv_local_timer(void *arg)
-{
- struct ixlv_sc *sc = arg;
- struct i40e_hw *hw = &sc->hw;
- struct ixl_vsi *vsi = &sc->vsi;
- u32 val;
-
- IXLV_CORE_LOCK_ASSERT(sc);
-
- /* If Reset is in progress just bail */
- if (sc->init_state == IXLV_RESET_PENDING)
- return;
-
- /* Check for when PF triggers a VF reset */
- val = rd32(hw, I40E_VFGEN_RSTAT) &
- I40E_VFGEN_RSTAT_VFR_STATE_MASK;
-
- if (val != VIRTCHNL_VFR_VFACTIVE
- && val != VIRTCHNL_VFR_COMPLETED) {
- DDPRINTF(sc->dev, "reset in progress! (%d)", val);
- return;
- }
-
- ixlv_request_stats(sc);
-
- /* clean and process any events */
- taskqueue_enqueue(sc->tq, &sc->aq_irq);
-
- /* Increment stat when a queue shows hung */
- if (ixl_queue_hang_check(vsi))
- sc->watchdog_events++;
-
- callout_reset(&sc->timer, hz, ixlv_local_timer, sc);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_DEL_MAC_FILTER);
}
/*
@@ -2737,29 +2049,23 @@
void
ixlv_update_link_status(struct ixlv_sc *sc)
{
- struct ixl_vsi *vsi = &sc->vsi;
- struct ifnet *ifp = vsi->ifp;
+ struct ixl_vsi *vsi = &sc->vsi;
+ u64 baudrate;
if (sc->link_up){
if (vsi->link_active == FALSE) {
- if (bootverbose)
- if_printf(ifp,"Link is Up, %s\n",
- ixlv_vc_speed_to_string(sc->link_speed));
vsi->link_active = TRUE;
- if_link_state_change(ifp, LINK_STATE_UP);
+ baudrate = ixl_max_vc_speed_to_value(sc->link_speed);
+ ixlv_dbg_info(sc, "baudrate: %lu\n", baudrate);
+ iflib_link_state_change(vsi->ctx, LINK_STATE_UP, baudrate);
}
} else { /* Link down */
if (vsi->link_active == TRUE) {
- if (bootverbose)
- if_printf(ifp,"Link is Down\n");
- if_link_state_change(ifp, LINK_STATE_DOWN);
vsi->link_active = FALSE;
+ iflib_link_state_change(vsi->ctx, LINK_STATE_DOWN, 0);
}
}
-
- return;
}
-#endif
/*********************************************************************
*
@@ -2772,29 +2078,18 @@
ixlv_stop(struct ixlv_sc *sc)
{
struct ifnet *ifp;
- int start;
ifp = sc->vsi.ifp;
- INIT_DBG_IF(ifp, "begin");
-
- ixl_vc_flush(&sc->vc_mgr);
- ixlv_disable_queues(sc);
-
- start = ticks;
- while ((ifp->if_drv_flags & IFF_DRV_RUNNING) &&
- ((ticks - start) < hz/10))
- ixlv_do_adminq_locked(sc);
- /* Stop the local timer */
- callout_stop(&sc->timer);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_DISABLE_QUEUES);
- INIT_DBG_IF(ifp, "end");
+ ixlv_disable_intr(&sc->vsi);
}
static void
ixlv_if_stop(if_ctx_t ctx)
{
- struct ixl_vsi *vsi = iflib_get_softc(ctx);
+ struct ixlv_sc *sc = iflib_get_softc(ctx);
ixlv_stop(sc);
}
@@ -2886,14 +2181,11 @@
static void
ixlv_config_rss_pf(struct ixlv_sc *sc)
{
- ixl_vc_enqueue(&sc->vc_mgr, &sc->config_rss_key_cmd,
- IXLV_FLAG_AQ_CONFIG_RSS_KEY, ixl_init_cmd_complete, sc);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_CONFIG_RSS_KEY);
- ixl_vc_enqueue(&sc->vc_mgr, &sc->set_rss_hena_cmd,
- IXLV_FLAG_AQ_SET_RSS_HENA, ixl_init_cmd_complete, sc);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_SET_RSS_HENA);
- ixl_vc_enqueue(&sc->vc_mgr, &sc->config_rss_lut_cmd,
- IXLV_FLAG_AQ_CONFIG_RSS_LUT, ixl_init_cmd_complete, sc);
+ ixlv_send_vc_msg(sc, IXLV_FLAG_AQ_CONFIG_RSS_LUT);
}
/*
@@ -2905,41 +2197,15 @@
ixlv_config_rss(struct ixlv_sc *sc)
{
if (sc->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_REG) {
- DDPRINTF(sc->dev, "Setting up RSS using VF registers...");
+ ixlv_dbg_info(sc, "Setting up RSS using VF registers...");
ixlv_config_rss_reg(sc);
} else if (sc->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
- DDPRINTF(sc->dev, "Setting up RSS using messages to PF...");
+ ixlv_dbg_info(sc, "Setting up RSS using messages to PF...");
ixlv_config_rss_pf(sc);
} else
device_printf(sc->dev, "VF does not support RSS capability sent by PF.\n");
}
-/*
-** This routine refreshes vlan filters, called by init
-** it scans the filter table and then updates the AQ
-*/
-static void
-ixlv_setup_vlan_filters(struct ixlv_sc *sc)
-{
- struct ixl_vsi *vsi = &sc->vsi;
- struct ixlv_vlan_filter *f;
- int cnt = 0;
-
- if (vsi->num_vlans == 0)
- return;
- /*
- ** Scan the filter table for vlan entries,
- ** and if found call for the AQ update.
- */
- SLIST_FOREACH(f, sc->vlan_filters, next)
- if (f->flags & IXL_FILTER_ADD)
- cnt++;
- if (cnt > 0)
- ixl_vc_enqueue(&sc->vc_mgr, &sc->add_vlan_cmd,
- IXLV_FLAG_AQ_ADD_VLAN_FILTER, ixl_init_cmd_complete, sc);
-}
-
-
/*
** This routine adds new MAC filters to the sc's list;
** these are later added in hardware by sending a virtual
@@ -2991,226 +2257,79 @@
return (0);
}
+/*
+ * Re-uses the name from the PF driver.
+ */
static void
-ixlv_do_adminq_locked(struct ixlv_sc *sc)
+ixlv_add_device_sysctls(struct ixlv_sc *sc)
{
- struct i40e_hw *hw = &sc->hw;
- struct i40e_arq_event_info event;
- struct virtchnl_msg *v_msg;
- device_t dev = sc->dev;
- u16 result = 0;
- u32 reg, oldreg;
- i40e_status ret;
- bool aq_error = false;
+ struct ixl_vsi *vsi = &sc->vsi;
+ device_t dev = sc->dev;
- event.buf_len = IXL_AQ_BUF_SZ;
- event.msg_buf = sc->aq_buffer;
- v_msg = (struct virtchnl_msg *)&event.desc;
+ struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
+ struct sysctl_oid_list *ctx_list =
+ SYSCTL_CHILDREN(device_get_sysctl_tree(dev));
+ struct sysctl_oid *debug_node;
+ struct sysctl_oid_list *debug_list;
- do {
- ret = i40e_clean_arq_element(hw, &event, &result);
- if (ret)
- break;
- ixlv_vc_completion(sc, v_msg->v_opcode,
- v_msg->v_retval, event.msg_buf, event.msg_len);
- if (result != 0)
- bzero(event.msg_buf, IXL_AQ_BUF_SZ);
- } while (result);
+ SYSCTL_ADD_PROC(ctx, ctx_list,
+ OID_AUTO, "current_speed", CTLTYPE_STRING | CTLFLAG_RD,
+ sc, 0, ixlv_sysctl_current_speed, "A", "Current Port Speed");
- /* check for Admin queue errors */
- oldreg = reg = rd32(hw, hw->aq.arq.len);
- if (reg & I40E_VF_ARQLEN1_ARQVFE_MASK) {
- device_printf(dev, "ARQ VF Error detected\n");
- reg &= ~I40E_VF_ARQLEN1_ARQVFE_MASK;
- aq_error = true;
- }
- if (reg & I40E_VF_ARQLEN1_ARQOVFL_MASK) {
- device_printf(dev, "ARQ Overflow Error detected\n");
- reg &= ~I40E_VF_ARQLEN1_ARQOVFL_MASK;
- aq_error = true;
- }
- if (reg & I40E_VF_ARQLEN1_ARQCRIT_MASK) {
- device_printf(dev, "ARQ Critical Error detected\n");
- reg &= ~I40E_VF_ARQLEN1_ARQCRIT_MASK;
- aq_error = true;
- }
- if (oldreg != reg)
- wr32(hw, hw->aq.arq.len, reg);
+ SYSCTL_ADD_PROC(ctx, ctx_list,
+ OID_AUTO, "tx_itr", CTLTYPE_INT | CTLFLAG_RW,
+ sc, 0, ixlv_sysctl_tx_itr, "I",
+ "Immediately set TX ITR value for all queues");
- oldreg = reg = rd32(hw, hw->aq.asq.len);
- if (reg & I40E_VF_ATQLEN1_ATQVFE_MASK) {
- device_printf(dev, "ASQ VF Error detected\n");
- reg &= ~I40E_VF_ATQLEN1_ATQVFE_MASK;
- aq_error = true;
- }
- if (reg & I40E_VF_ATQLEN1_ATQOVFL_MASK) {
- device_printf(dev, "ASQ Overflow Error detected\n");
- reg &= ~I40E_VF_ATQLEN1_ATQOVFL_MASK;
- aq_error = true;
- }
- if (reg & I40E_VF_ATQLEN1_ATQCRIT_MASK) {
- device_printf(dev, "ASQ Critical Error detected\n");
- reg &= ~I40E_VF_ATQLEN1_ATQCRIT_MASK;
- aq_error = true;
- }
- if (oldreg != reg)
- wr32(hw, hw->aq.asq.len, reg);
+ SYSCTL_ADD_PROC(ctx, ctx_list,
+ OID_AUTO, "rx_itr", CTLTYPE_INT | CTLFLAG_RW,
+ sc, 0, ixlv_sysctl_rx_itr, "I",
+ "Immediately set RX ITR value for all queues");
- if (aq_error) {
- /* Need to reset adapter */
- device_printf(dev, "WARNING: Resetting!\n");
- sc->init_state = IXLV_RESET_REQUIRED;
- ixlv_stop(sc);
- // TODO: Make stop/init calls match
- ixlv_if_init(sc->vsi.ctx);
- }
- ixlv_enable_adminq_irq(hw);
-}
+#if 0
+ SYSCTL_ADD_INT(ctx, ctx_list,
+ OID_AUTO, "dynamic_rx_itr", CTLFLAG_RW,
+ &sc->dynamic_rx_itr, 0, "Enable dynamic RX ITR");
-static void
-ixlv_add_sysctls(struct ixlv_sc *sc)
-{
- device_t dev = sc->dev;
- struct ixl_vsi *vsi = &sc->vsi;
- struct i40e_eth_stats *es = &vsi->eth_stats;
+ SYSCTL_ADD_INT(ctx, ctx_list,
+ OID_AUTO, "dynamic_tx_itr", CTLFLAG_RW,
+ &sc->dynamic_tx_itr, 0, "Enable dynamic TX ITR");
+#endif
- struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
- struct sysctl_oid *tree = device_get_sysctl_tree(dev);
- struct sysctl_oid_list *child = SYSCTL_CHILDREN(tree);
+ /* Add sysctls meant to print debug information, but don't list them
+ * in "sysctl -a" output. */
+ debug_node = SYSCTL_ADD_NODE(ctx, ctx_list,
+ OID_AUTO, "debug", CTLFLAG_RD | CTLFLAG_SKIP, NULL, "Debug Sysctls");
+ debug_list = SYSCTL_CHILDREN(debug_node);
- struct sysctl_oid *vsi_node; // *queue_node;
- struct sysctl_oid_list *vsi_list; // *queue_list;
+ SYSCTL_ADD_UINT(ctx, debug_list,
+ OID_AUTO, "shared_debug_mask", CTLFLAG_RW,
+ &sc->hw.debug_mask, 0, "Shared code debug message level");
-#define QUEUE_NAME_LEN 32
- //char queue_namebuf[QUEUE_NAME_LEN];
+ SYSCTL_ADD_UINT(ctx, debug_list,
+ OID_AUTO, "core_debug_mask", CTLFLAG_RW,
+ &sc->dbg_mask, 0, "Non-shared code debug message level");
-#if 0
- struct ixl_queue *queues = vsi->queues;
- struct tx_ring *txr;
- struct rx_ring *rxr;
-#endif
+ SYSCTL_ADD_PROC(ctx, debug_list,
+ OID_AUTO, "filter_list", CTLTYPE_STRING | CTLFLAG_RD,
+ sc, 0, ixlv_sysctl_sw_filter_list, "A", "SW Filter List");
- /* Driver statistics sysctls */
- SYSCTL_ADD_UQUAD(ctx, child, OID_AUTO, "watchdog_events",
- CTLFLAG_RD, &sc->watchdog_events,
- "Watchdog timeouts");
- SYSCTL_ADD_UQUAD(ctx, child, OID_AUTO, "admin_irq",
- CTLFLAG_RD, &sc->admin_irq,
- "Admin Queue IRQ Handled");
-
- SYSCTL_ADD_INT(ctx, child, OID_AUTO, "tx_ring_size",
- CTLFLAG_RD, &vsi->num_tx_desc, 0,
- "TX ring size");
- SYSCTL_ADD_INT(ctx, child, OID_AUTO, "rx_ring_size",
- CTLFLAG_RD, &vsi->num_rx_desc, 0,
- "RX ring size");
-
- SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "current_speed",
- CTLTYPE_STRING | CTLFLAG_RD,
- sc, 0, ixlv_sysctl_current_speed,
- "A", "Current Port Speed");
-
- /* VSI statistics sysctls */
- vsi_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "vsi",
- CTLFLAG_RD, NULL, "VSI-specific statistics");
- vsi_list = SYSCTL_CHILDREN(vsi_node);
-
- struct ixl_sysctl_info ctls[] =
- {
- {&es->rx_bytes, "good_octets_rcvd", "Good Octets Received"},
- {&es->rx_unicast, "ucast_pkts_rcvd",
- "Unicast Packets Received"},
- {&es->rx_multicast, "mcast_pkts_rcvd",
- "Multicast Packets Received"},
- {&es->rx_broadcast, "bcast_pkts_rcvd",
- "Broadcast Packets Received"},
- {&es->rx_discards, "rx_discards", "Discarded RX packets"},
- {&es->rx_unknown_protocol, "rx_unknown_proto", "RX unknown protocol packets"},
- {&es->tx_bytes, "good_octets_txd", "Good Octets Transmitted"},
- {&es->tx_unicast, "ucast_pkts_txd", "Unicast Packets Transmitted"},
- {&es->tx_multicast, "mcast_pkts_txd",
- "Multicast Packets Transmitted"},
- {&es->tx_broadcast, "bcast_pkts_txd",
- "Broadcast Packets Transmitted"},
- {&es->tx_errors, "tx_errors", "TX packet errors"},
- // end
- {0,0,0}
- };
- struct ixl_sysctl_info *entry = ctls;
- while (entry->stat != NULL)
- {
- SYSCTL_ADD_QUAD(ctx, child, OID_AUTO, entry->name,
- CTLFLAG_RD, entry->stat,
- entry->description);
- entry++;
- }
+ SYSCTL_ADD_PROC(ctx, debug_list,
+ OID_AUTO, "queue_interrupt_table", CTLTYPE_STRING | CTLFLAG_RD,
+ sc, 0, ixlv_sysctl_queue_interrupt_table, "A", "View MSI-X indices for TX/RX queues");
-#if 0
- /* Queue sysctls */
- for (int q = 0; q < vsi->num_queues; q++) {
- snprintf(queue_namebuf, QUEUE_NAME_LEN, "que%d", q);
- queue_node = SYSCTL_ADD_NODE(ctx, vsi_list, OID_AUTO, queue_namebuf,
- CTLFLAG_RD, NULL, "Queue Name");
- queue_list = SYSCTL_CHILDREN(queue_node);
-
- txr = &(queues[q].txr);
- rxr = &(queues[q].rxr);
-
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "mbuf_defrag_failed",
- CTLFLAG_RD, &(queues[q].mbuf_defrag_failed),
- "m_defrag() failed");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "dropped",
- CTLFLAG_RD, &(queues[q].dropped_pkts),
- "Driver dropped packets");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "irqs",
- CTLFLAG_RD, &(queues[q].irqs),
- "irqs on this queue");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "tso_tx",
- CTLFLAG_RD, &(queues[q].tso),
- "TSO");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "tx_dmamap_failed",
- CTLFLAG_RD, &(queues[q].tx_dmamap_failed),
- "Driver tx dma failure in xmit");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "no_desc_avail",
- CTLFLAG_RD, &(txr->no_desc),
- "Queue No Descriptor Available");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "tx_packets",
- CTLFLAG_RD, &(txr->total_packets),
- "Queue Packets Transmitted");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "tx_bytes",
- CTLFLAG_RD, &(txr->tx_bytes),
- "Queue Bytes Transmitted");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "rx_packets",
- CTLFLAG_RD, &(rxr->rx_packets),
- "Queue Packets Received");
- SYSCTL_ADD_QUAD(ctx, queue_list, OID_AUTO, "rx_bytes",
- CTLFLAG_RD, &(rxr->rx_bytes),
- "Queue Bytes Received");
- SYSCTL_ADD_UINT(ctx, queue_list, OID_AUTO, "rx_itr",
- CTLFLAG_RD, &(rxr->itr), 0,
- "Queue Rx ITR Interval");
- SYSCTL_ADD_UINT(ctx, queue_list, OID_AUTO, "tx_itr",
- CTLFLAG_RD, &(txr->itr), 0,
- "Queue Tx ITR Interval");
+ SYSCTL_ADD_PROC(ctx, debug_list,
+ OID_AUTO, "do_vf_reset", CTLTYPE_INT | CTLFLAG_WR,
+ sc, 0, ixlv_sysctl_vf_reset, "I", "Request a VF reset from PF");
+
+ SYSCTL_ADD_PROC(ctx, debug_list,
+ OID_AUTO, "do_vflr_reset", CTLTYPE_INT | CTLFLAG_WR,
+ sc, 0, ixlv_sysctl_vflr_reset, "I", "Request a VFLR reset from HW");
+
+ /* Add stats sysctls */
+ ixl_add_vsi_sysctls(dev, vsi, ctx, "vsi");
+ ixl_add_queues_sysctls(dev, vsi);
-#ifdef IXL_DEBUG
- /* Examine queue state */
- SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "qtx_head",
- CTLTYPE_UINT | CTLFLAG_RD, &queues[q],
- sizeof(struct ixl_queue),
- ixlv_sysctl_qtx_tail_handler, "IU",
- "Queue Transmit Descriptor Tail");
- SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "qrx_head",
- CTLTYPE_UINT | CTLFLAG_RD, &queues[q],
- sizeof(struct ixl_queue),
- ixlv_sysctl_qrx_tail_handler, "IU",
- "Queue Receive Descriptor Tail");
- SYSCTL_ADD_INT(ctx, queue_list, OID_AUTO, "watchdog_timer",
- CTLFLAG_RD, &(txr.watchdog_timer), 0,
- "Ticks before watchdog event is triggered");
-#endif
- }
-#endif
}
static void
@@ -3244,7 +2363,7 @@
free(sc->vlan_filters, M_DEVBUF);
}
-static char *
+char *
ixlv_vc_speed_to_string(enum virtchnl_link_speed link_speed)
{
int index;
@@ -3299,49 +2418,251 @@
return (error);
}
-#ifdef IXL_DEBUG
-/**
- * ixlv_sysctl_qtx_tail_handler
- * Retrieves I40E_QTX_TAIL1 value from hardware
- * for a sysctl.
+/*
+ * Sanity check and save off tunable values.
*/
-static int
-ixlv_sysctl_qtx_tail_handler(SYSCTL_HANDLER_ARGS)
+static void
+ixlv_save_tunables(struct ixlv_sc *sc)
{
- struct ixl_queue *que;
- int error;
- u32 val;
+ device_t dev = sc->dev;
- que = ((struct ixl_queue *)oidp->oid_arg1);
- if (!que) return 0;
+ /* Save tunable information */
+ sc->dbg_mask = ixlv_core_debug_mask;
+ sc->hw.debug_mask = ixlv_shared_debug_mask;
+ sc->vsi.enable_head_writeback = !!(ixlv_enable_head_writeback);
+
+ if (ixlv_tx_itr < 0 || ixlv_tx_itr > IXL_MAX_ITR) {
+ device_printf(dev, "Invalid tx_itr value of %d set!\n",
+ ixlv_tx_itr);
+ device_printf(dev, "tx_itr must be between %d and %d, "
+ "inclusive\n",
+ 0, IXL_MAX_ITR);
+ device_printf(dev, "Using default value of %d instead\n",
+ IXL_ITR_4K);
+ sc->tx_itr = IXL_ITR_4K;
+ } else
+ sc->tx_itr = ixlv_tx_itr;
+
+ if (ixlv_rx_itr < 0 || ixlv_rx_itr > IXL_MAX_ITR) {
+ device_printf(dev, "Invalid rx_itr value of %d set!\n",
+ ixlv_rx_itr);
+ device_printf(dev, "rx_itr must be between %d and %d, "
+ "inclusive\n",
+ 0, IXL_MAX_ITR);
+ device_printf(dev, "Using default value of %d instead\n",
+ IXL_ITR_8K);
+ sc->rx_itr = IXL_ITR_8K;
+ } else
+ sc->rx_itr = ixlv_rx_itr;
+}
- val = rd32(que->vsi->hw, que->txr.tail);
- error = sysctl_handle_int(oidp, &val, 0, req);
- if (error || !req->newptr)
- return error;
- return (0);
+/*
+ * Used to set the Tx ITR value for all of the VF's queues.
+ * Writes to the ITR registers immediately.
+ */
+static int
+ixlv_sysctl_tx_itr(SYSCTL_HANDLER_ARGS)
+{
+ struct ixlv_sc *sc = (struct ixlv_sc *)arg1;
+ device_t dev = sc->dev;
+ int requested_tx_itr;
+ int error = 0;
+
+ requested_tx_itr = sc->tx_itr;
+ error = sysctl_handle_int(oidp, &requested_tx_itr, 0, req);
+ if ((error) || (req->newptr == NULL))
+ return (error);
+ if (sc->dynamic_tx_itr) {
+ device_printf(dev,
+ "Cannot set TX itr value while dynamic TX itr is enabled\n");
+ return (EINVAL);
+ }
+ if (requested_tx_itr < 0 || requested_tx_itr > IXL_MAX_ITR) {
+ device_printf(dev,
+ "Invalid TX itr value; value must be between 0 and %d\n",
+ IXL_MAX_ITR);
+ return (EINVAL);
+ }
+
+ sc->tx_itr = requested_tx_itr;
+ ixlv_configure_tx_itr(sc);
+
+ return (error);
}
-/**
- * ixlv_sysctl_qrx_tail_handler
- * Retrieves I40E_QRX_TAIL1 value from hardware
- * for a sysctl.
+/*
+ * Used to set the Rx ITR value for all of the VF's queues.
+ * Writes to the ITR registers immediately.
*/
-static int
-ixlv_sysctl_qrx_tail_handler(SYSCTL_HANDLER_ARGS)
+static int
+ixlv_sysctl_rx_itr(SYSCTL_HANDLER_ARGS)
{
- struct ixl_queue *que;
- int error;
- u32 val;
+ struct ixlv_sc *sc = (struct ixlv_sc *)arg1;
+ device_t dev = sc->dev;
+ int requested_rx_itr;
+ int error = 0;
+
+ requested_rx_itr = sc->rx_itr;
+ error = sysctl_handle_int(oidp, &requested_rx_itr, 0, req);
+ if ((error) || (req->newptr == NULL))
+ return (error);
+ if (sc->dynamic_rx_itr) {
+ device_printf(dev,
+ "Cannot set RX itr value while dynamic RX itr is enabled\n");
+ return (EINVAL);
+ }
+ if (requested_rx_itr < 0 || requested_rx_itr > IXL_MAX_ITR) {
+ device_printf(dev,
+ "Invalid RX itr value; value must be between 0 and %d\n",
+ IXL_MAX_ITR);
+ return (EINVAL);
+ }
- que = ((struct ixl_queue *)oidp->oid_arg1);
- if (!que) return 0;
+ sc->rx_itr = requested_rx_itr;
+ ixlv_configure_rx_itr(sc);
- val = rd32(que->vsi->hw, que->rxr.tail);
- error = sysctl_handle_int(oidp, &val, 0, req);
- if (error || !req->newptr)
- return error;
- return (0);
+ return (error);
}
-#endif
+static int
+ixlv_sysctl_sw_filter_list(SYSCTL_HANDLER_ARGS)
+{
+ struct ixlv_sc *sc = (struct ixlv_sc *)arg1;
+ struct ixlv_mac_filter *f;
+ struct ixlv_vlan_filter *v;
+ device_t dev = sc->dev;
+ int ftl_len, ftl_counter = 0, error = 0;
+ struct sbuf *buf;
+
+ buf = sbuf_new_for_sysctl(NULL, NULL, 128, req);
+ if (!buf) {
+ device_printf(dev, "Could not allocate sbuf for output.\n");
+ return (ENOMEM);
+ }
+
+ sbuf_printf(buf, "\n");
+
+ /* Print MAC filters */
+ sbuf_printf(buf, "MAC Filters:\n");
+ ftl_len = 0;
+ SLIST_FOREACH(f, sc->mac_filters, next)
+ ftl_len++;
+ if (ftl_len < 1)
+ sbuf_printf(buf, "(none)\n");
+ else {
+ SLIST_FOREACH(f, sc->mac_filters, next) {
+ sbuf_printf(buf,
+ MAC_FORMAT ", flags %#06x\n",
+ MAC_FORMAT_ARGS(f->macaddr), f->flags);
+ }
+ }
+
+ /* Print VLAN filters */
+ sbuf_printf(buf, "VLAN Filters:\n");
+ ftl_len = 0;
+ SLIST_FOREACH(v, sc->vlan_filters, next)
+ ftl_len++;
+ if (ftl_len < 1)
+ sbuf_printf(buf, "(none)");
+ else {
+ SLIST_FOREACH(v, sc->vlan_filters, next) {
+ sbuf_printf(buf,
+ "%d, flags %#06x",
+ v->vlan, v->flags);
+ /* don't print '\n' for last entry */
+ if (++ftl_counter != ftl_len)
+ sbuf_printf(buf, "\n");
+ }
+ }
+
+ error = sbuf_finish(buf);
+ if (error)
+ device_printf(dev, "Error finishing sbuf: %d\n", error);
+
+ sbuf_delete(buf);
+ return (error);
+}
+
+/*
+ * Print out mapping of TX queue indexes and Rx queue indexes
+ * to MSI-X vectors.
+ */
+static int
+ixlv_sysctl_queue_interrupt_table(SYSCTL_HANDLER_ARGS)
+{
+ struct ixlv_sc *sc = (struct ixlv_sc *)arg1;
+ struct ixl_vsi *vsi = &sc->vsi;
+ device_t dev = sc->dev;
+ struct sbuf *buf;
+ int error = 0;
+
+ struct ixl_rx_queue *rx_que = vsi->rx_queues;
+ struct ixl_tx_queue *tx_que = vsi->tx_queues;
+
+ buf = sbuf_new_for_sysctl(NULL, NULL, 128, req);
+ if (!buf) {
+ device_printf(dev, "Could not allocate sbuf for output.\n");
+ return (ENOMEM);
+ }
+
+ sbuf_cat(buf, "\n");
+ for (int i = 0; i < vsi->num_rx_queues; i++) {
+ rx_que = &vsi->rx_queues[i];
+ sbuf_printf(buf, "(rxq %3d): %d\n", i, rx_que->msix);
+ }
+ for (int i = 0; i < vsi->num_tx_queues; i++) {
+ tx_que = &vsi->tx_queues[i];
+ sbuf_printf(buf, "(txq %3d): %d\n", i, tx_que->msix);
+ }
+
+ error = sbuf_finish(buf);
+ if (error)
+ device_printf(dev, "Error finishing sbuf: %d\n", error);
+ sbuf_delete(buf);
+
+ return (error);
+}
+
+#define CTX_ACTIVE(ctx) ((if_getdrvflags(iflib_get_ifp(ctx)) & IFF_DRV_RUNNING))
+static int
+ixlv_sysctl_vf_reset(SYSCTL_HANDLER_ARGS)
+{
+ struct ixlv_sc *sc = (struct ixlv_sc *)arg1;
+ int do_reset = 0, error = 0;
+
+ error = sysctl_handle_int(oidp, &do_reset, 0, req);
+ if ((error) || (req->newptr == NULL))
+ return (error);
+
+ if (do_reset == 1) {
+ ixlv_reset(sc);
+ if (CTX_ACTIVE(sc->vsi.ctx))
+ iflib_request_reset(sc->vsi.ctx);
+ }
+
+ return (error);
+}
+
+static int
+ixlv_sysctl_vflr_reset(SYSCTL_HANDLER_ARGS)
+{
+ struct ixlv_sc *sc = (struct ixlv_sc *)arg1;
+ device_t dev = sc->dev;
+ int do_reset = 0, error = 0;
+
+ error = sysctl_handle_int(oidp, &do_reset, 0, req);
+ if ((error) || (req->newptr == NULL))
+ return (error);
+
+ if (do_reset == 1) {
+ if (!pcie_flr(dev, max(pcie_get_max_completion_timeout(dev) / 1000, 10), true)) {
+ device_printf(dev, "PCIE FLR failed\n");
+ error = EIO;
+ }
+ else if (CTX_ACTIVE(sc->vsi.ctx))
+ iflib_request_reset(sc->vsi.ctx);
+ }
+
+ return (error);
+}
+#undef CTX_ACTIVE
Index: sys/dev/ixl/ixl.h
===================================================================
--- sys/dev/ixl/ixl.h
+++ sys/dev/ixl/ixl.h
@@ -32,7 +32,6 @@
******************************************************************************/
/*$FreeBSD$*/
-
#ifndef _IXL_H_
#define _IXL_H_
@@ -136,8 +135,6 @@
#define IXL_MSIX_BAR 3
#define IXL_ADM_LIMIT 2
-// TODO: Find out which TSO_SIZE to use
-//#define IXL_TSO_SIZE 65535
#define IXL_TSO_SIZE ((255*1024)-1)
#define IXL_TX_BUF_SZ ((u32) 1514)
#define IXL_AQ_BUF_SZ ((u32) 4096)
@@ -210,16 +207,6 @@
#define IXL_RX_CTX_BASE_UNITS 128
#define IXL_TX_CTX_BASE_UNITS 128
-#if 0
-#define IXL_VPINT_LNKLSTN_REG(hw, vector, vf_num) \
- I40E_VPINT_LNKLSTN(((vector) - 1) + \
- (((hw)->func_caps.num_msix_vectors_vf - 1) * (vf_num)))
-
-#define IXL_VFINT_DYN_CTLN_REG(hw, vector, vf_num) \
- I40E_VFINT_DYN_CTLN(((vector) - 1) + \
- (((hw)->func_caps.num_msix_vectors_vf - 1) * (vf_num)))
-#endif
-
#define IXL_PF_PCI_CIAA_VF_DEVICE_STATUS 0xAA
#define IXL_PF_PCI_CIAD_VF_TRANS_PENDING_MASK 0x20
@@ -299,6 +286,9 @@
#define IXL_SET_NOPROTO(vsi, count) (vsi)->noproto = (count)
#endif
+/* For stats sysctl naming */
+#define QUEUE_NAME_LEN 32
+
#define IXL_DEV_ERR(_dev, _format, ...) \
device_printf(_dev, "%s: " _format " (%s:%d)\n", __func__, ##__VA_ARGS__, __FILE__, __LINE__)
@@ -415,16 +405,15 @@
if_ctx_t ctx;
if_softc_ctx_t shared;
struct ifnet *ifp;
- //device_t dev;
+ device_t dev;
struct i40e_hw *hw;
struct ifmedia *media;
-#define num_rx_queues shared->isc_nrxqsets
-#define num_tx_queues shared->isc_ntxqsets
+
+ int num_rx_queues;
+ int num_tx_queues;
void *back;
enum i40e_vsi_type type;
- // TODO: Remove?
- u64 que_mask;
int id;
u32 rx_itr_setting;
u32 tx_itr_setting;
@@ -541,9 +530,18 @@
extern const uint8_t ixl_bcast_addr[ETHER_ADDR_LEN];
/* Common function prototypes between PF/VF driver */
+void ixl_debug_core(device_t dev, u32 enabled_mask, u32 mask, char *fmt, ...);
void ixl_init_tx_ring(struct ixl_vsi *vsi, struct ixl_tx_queue *que);
void ixl_get_default_rss_key(u32 *);
const char * i40e_vc_stat_str(struct i40e_hw *hw,
enum virtchnl_status_code stat_err);
-u64 ixl_max_aq_speed_to_value(u8);
+void ixl_init_tx_rsqs(struct ixl_vsi *vsi);
+void ixl_init_tx_cidx(struct ixl_vsi *vsi);
+u64 ixl_max_vc_speed_to_value(u8 link_speeds);
+void ixl_add_vsi_sysctls(device_t dev, struct ixl_vsi *vsi,
+ struct sysctl_ctx_list *ctx, const char *sysctl_name);
+void ixl_add_sysctls_eth_stats(struct sysctl_ctx_list *ctx,
+ struct sysctl_oid_list *child,
+ struct i40e_eth_stats *eth_stats);
+void ixl_add_queues_sysctls(device_t dev, struct ixl_vsi *vsi);
#endif /* _IXL_H_ */
Index: sys/dev/ixl/ixl_debug.h
===================================================================
--- sys/dev/ixl/ixl_debug.h
+++ sys/dev/ixl/ixl_debug.h
@@ -91,12 +91,9 @@
IXL_DBG_EN_DIS = 0x00000002,
IXL_DBG_AQ = 0x00000004,
IXL_DBG_NVMUPD = 0x00000008,
+ IXL_DBG_FILTER = 0x00000010,
- IXL_DBG_IOCTL_KNOWN = 0x00000010,
- IXL_DBG_IOCTL_UNKNOWN = 0x00000020,
- IXL_DBG_IOCTL_ALL = 0x00000030,
-
- I40E_DEBUG_RSS = 0x00000100,
+ IXL_DEBUG_RSS = 0x00000100,
IXL_DBG_IOV = 0x00001000,
IXL_DBG_IOV_VC = 0x00002000,
@@ -107,4 +104,20 @@
IXL_DBG_ALL = 0xFFFFFFFF
};
+enum ixlv_dbg_mask {
+ IXLV_DBG_INFO = 0x00000001,
+ IXLV_DBG_EN_DIS = 0x00000002,
+ IXLV_DBG_AQ = 0x00000004,
+ IXLV_DBG_INIT = 0x00000008,
+ IXLV_DBG_FILTER = 0x00000010,
+
+ IXLV_DEBUG_RSS = 0x00000100,
+
+ IXLV_DBG_VC = 0x00001000,
+
+ IXLV_DBG_SWITCH_INFO = 0x00010000,
+
+ IXLV_DBG_ALL = 0xFFFFFFFF
+};
+
#endif /* _IXL_DEBUG_H_ */
Index: sys/dev/ixl/ixl_pf.h
===================================================================
--- sys/dev/ixl/ixl_pf.h
+++ sys/dev/ixl/ixl_pf.h
@@ -87,10 +87,6 @@
/* Physical controller structure */
struct ixl_pf {
- /*
- * This is first so that iflib_get_softc can return
- * either the VSI or the PF structures.
- */
struct ixl_vsi vsi;
struct i40e_hw hw;
@@ -103,7 +99,6 @@
int iw_msix;
bool iw_enabled;
#endif
- int if_flags;
u32 state;
u8 supported_speeds;
@@ -111,13 +106,12 @@
struct ixl_pf_qtag qtag;
/* Tunable values */
- bool enable_msix;
- int max_queues;
bool enable_tx_fc_filter;
int dynamic_rx_itr;
int dynamic_tx_itr;
int tx_itr;
int rx_itr;
+ int enable_vf_loopback;
bool link_up;
int advertised_speed;
@@ -126,7 +120,6 @@
bool has_i2c;
/* Misc stats maintained by the driver */
- u64 watchdog_events;
u64 admin_irq;
/* Statistics from hw */
@@ -145,8 +138,7 @@
struct ixl_vf *vfs;
int num_vfs;
uint16_t veb_seid;
- struct task vflr_task;
- int vc_debug_lvl;
+ struct if_irq iov_irq;
};
/*
@@ -226,6 +218,12 @@
"\t3 - Use Admin Queue command (best)\n" \
"Using the Admin Queue is only supported on 710 devices with FW version 1.7 or higher"
+#define IXL_SYSCTL_HELP_VF_LOOPBACK \
+"\nDetermines mode that embedded device switch will use when SR-IOV is initialized:\n" \
+"\t0 - Disable (VEPA)\n" \
+"\t1 - Enable (VEB)\n" \
+"Enabling this will allow VFs in separate VMs to communicate over the hardware bridge."
+
extern const char * const ixl_fc_string[6];
MALLOC_DECLARE(M_IXL);
@@ -242,14 +240,9 @@
ixl_send_vf_nack_msg((pf), (vf), (op), (st), __FILE__, __LINE__)
/* Debug printing */
-#define ixl_dbg(p, m, s, ...) ixl_debug_core(p, m, s, ##__VA_ARGS__)
-void ixl_debug_core(struct ixl_pf *, enum ixl_dbg_mask, char *, ...);
-
-/* For stats sysctl naming */
-#define QUEUE_NAME_LEN 32
-
-/* For netmap(4) compatibility */
-#define ixl_disable_intr(vsi) ixl_disable_rings_intr(vsi)
+#define ixl_dbg(pf, m, s, ...) ixl_debug_core(pf->dev, pf->dbg_mask, m, s, ##__VA_ARGS__)
+#define ixl_dbg_info(pf, s, ...) ixl_debug_core(pf->dev, pf->dbg_mask, IXL_DBG_INFO, s, ##__VA_ARGS__)
+#define ixl_dbg_filter(pf, s, ...) ixl_debug_core(pf->dev, pf->dbg_mask, IXL_DBG_FILTER, s, ##__VA_ARGS__)
/* PF-only function declarations */
int ixl_setup_interface(device_t, struct ixl_pf *);
@@ -292,7 +285,6 @@
u64 *, u64 *);
void ixl_stop(struct ixl_pf *);
-void ixl_add_vsi_sysctls(struct ixl_pf *pf, struct ixl_vsi *vsi, struct sysctl_ctx_list *ctx, const char *sysctl_name);
int ixl_get_hw_capabilities(struct ixl_pf *);
void ixl_link_up_msg(struct ixl_pf *);
void ixl_update_link_status(struct ixl_pf *);
@@ -342,7 +334,7 @@
void ixl_del_filter(struct ixl_vsi *, const u8 *, s16 vlan);
void ixl_reconfigure_filters(struct ixl_vsi *vsi);
-int ixl_disable_rings(struct ixl_vsi *);
+int ixl_disable_rings(struct ixl_pf *, struct ixl_vsi *, struct ixl_pf_qtag *);
int ixl_disable_tx_ring(struct ixl_pf *, struct ixl_pf_qtag *, u16);
int ixl_disable_rx_ring(struct ixl_pf *, struct ixl_pf_qtag *, u16);
int ixl_disable_ring(struct ixl_pf *pf, struct ixl_pf_qtag *, u16);
@@ -400,5 +392,6 @@
int ixl_get_fw_lldp_status(struct ixl_pf *pf);
int ixl_attach_get_link_status(struct ixl_pf *);
+u64 ixl_max_aq_speed_to_value(u8);
#endif /* _IXL_PF_H_ */
Index: sys/dev/ixl/ixl_pf_iov.h
===================================================================
--- sys/dev/ixl/ixl_pf_iov.h
+++ sys/dev/ixl/ixl_pf_iov.h
@@ -45,19 +45,19 @@
/* Public functions */
/*
- * These three are DEVMETHODs required for SR-IOV PF support.
+ * These three are DEVMETHODs required for SR-IOV PF support in iflib.
*/
-int ixl_iov_init(device_t dev, uint16_t num_vfs, const nvlist_t *params);
-void ixl_iov_uninit(device_t dev);
-int ixl_add_vf(device_t dev, uint16_t vfnum, const nvlist_t *params);
+int ixl_if_iov_init(if_ctx_t ctx, uint16_t num_vfs, const nvlist_t *params);
+void ixl_if_iov_uninit(if_ctx_t ctx);
+int ixl_if_iov_vf_add(if_ctx_t ctx, uint16_t vfnum, const nvlist_t *params);
/*
- * The standard PF driver needs to call these during normal execution when
+ * The base PF driver needs to call these during normal execution when
* SR-IOV mode is active.
*/
void ixl_initialize_sriov(struct ixl_pf *pf);
void ixl_handle_vf_msg(struct ixl_pf *pf, struct i40e_arq_event_info *event);
-void ixl_handle_vflr(void *arg, int pending);
+void ixl_handle_vflr(struct ixl_pf *pf);
void ixl_broadcast_link_state(struct ixl_pf *pf);
#endif /* _IXL_PF_IOV_H_ */
Index: sys/dev/ixl/ixl_pf_iov.c
===================================================================
--- sys/dev/ixl/ixl_pf_iov.c
+++ sys/dev/ixl/ixl_pf_iov.c
@@ -77,14 +77,21 @@
static void ixl_vf_config_promisc_msg(struct ixl_pf *pf, struct ixl_vf *vf, void *msg, uint16_t msg_size);
static void ixl_vf_get_stats_msg(struct ixl_pf *pf, struct ixl_vf *vf, void *msg, uint16_t msg_size);
static int ixl_vf_reserve_queues(struct ixl_pf *pf, struct ixl_vf *vf, int num_queues);
+static int ixl_config_pf_vsi_loopback(struct ixl_pf *pf, bool enable);
static int ixl_adminq_err_to_errno(enum i40e_admin_queue_err err);
+/*
+ * TODO: Move pieces of this into iflib and call the rest in a handler?
+ *
+ * e.g. ixl_if_iov_set_schema
+ *
+ * It's odd to do pci_iov_detach() there while doing pci_iov_attach()
+ * in the driver.
+ */
void
ixl_initialize_sriov(struct ixl_pf *pf)
{
- return;
-#if 0
device_t dev = pf->dev;
struct i40e_hw *hw = &pf->hw;
nvlist_t *pf_schema, *vf_schema;
@@ -101,7 +108,7 @@
IOV_SCHEMA_HASDEFAULT, FALSE);
pci_iov_schema_add_uint16(vf_schema, "num-queues",
IOV_SCHEMA_HASDEFAULT,
- max(1, hw->func_caps.num_msix_vectors_vf - 1) % IXLV_MAX_QUEUES);
+ max(1, min(hw->func_caps.num_msix_vectors_vf - 1, IXLV_MAX_QUEUES)));
iov_error = pci_iov_attach(dev, pf_schema, vf_schema);
if (iov_error != 0) {
@@ -110,9 +117,6 @@
iov_error);
} else
device_printf(dev, "SR-IOV ready\n");
-
- pf->vc_debug_lvl = 1;
-#endif
}
@@ -142,7 +146,9 @@
bzero(&vsi_ctx.info, sizeof(vsi_ctx.info));
vsi_ctx.info.valid_sections = htole16(I40E_AQ_VSI_PROP_SWITCH_VALID);
- vsi_ctx.info.switch_id = htole16(0);
+ if (pf->enable_vf_loopback)
+ vsi_ctx.info.switch_id =
+ htole16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB);
vsi_ctx.info.valid_sections |= htole16(I40E_AQ_VSI_PROP_SECURITY_VALID);
vsi_ctx.info.sec_flags = 0;
@@ -172,8 +178,6 @@
return (ixl_adminq_err_to_errno(hw->aq.asq_last_status));
vf->vsi.seid = vsi_ctx.seid;
vf->vsi.vsi_num = vsi_ctx.vsi_number;
- // TODO: How to deal with num tx queues / num rx queues split?
- // I don't think just assigning this variable is going to work
vf->vsi.num_rx_queues = vf->qtag.num_active;
vf->vsi.num_tx_queues = vf->qtag.num_active;
@@ -204,10 +208,15 @@
if (error != 0)
return (error);
+ /* Let VF receive broadcast Ethernet frames */
+ error = i40e_aq_set_vsi_broadcast(hw, vf->vsi.seid, TRUE, NULL);
+ if (error)
+ device_printf(pf->dev, "Error configuring VF VSI for broadcast promiscuous\n");
+ /* Re-add VF's MAC/VLAN filters to its VSI */
+ ixl_reconfigure_filters(&vf->vsi);
+ /* Reset stats? */
vf->vsi.hw_filters_add = 0;
vf->vsi.hw_filters_del = 0;
- // ixl_add_filter(&vf->vsi, ixl_bcast_addr, IXL_VLAN_ANY);
- ixl_reconfigure_filters(&vf->vsi);
return (0);
}
@@ -372,12 +381,16 @@
hw = &pf->hw;
+ ixl_dbg(pf, IXL_DBG_IOV, "Resetting VF-%d\n", vf->vf_num);
+
vfrtrig = rd32(hw, I40E_VPGEN_VFRTRIG(vf->vf_num));
vfrtrig |= I40E_VPGEN_VFRTRIG_VFSWR_MASK;
wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_num), vfrtrig);
ixl_flush(hw);
ixl_reinit_vf(pf, vf);
+
+ ixl_dbg(pf, IXL_DBG_IOV, "Resetting VF-%d done.\n", vf->vf_num);
}
static void
@@ -413,7 +426,7 @@
wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_num), vfrtrig);
if (vf->vsi.seid != 0)
- ixl_disable_rings(&vf->vsi);
+ ixl_disable_rings(pf, &vf->vsi, &vf->qtag);
ixl_vf_release_resources(pf, vf);
ixl_vf_setup_vsi(pf, vf);
@@ -649,7 +662,7 @@
rxq.tphwdesc_ena = 1;
rxq.tphdata_ena = 1;
rxq.tphhead_ena = 1;
- rxq.lrxqthresh = 2;
+ rxq.lrxqthresh = 1;
rxq.prefena = 1;
status = i40e_set_lan_rx_queue_context(hw, global_queue_num, &rxq);
@@ -1003,7 +1016,7 @@
continue;
/* Warn if this queue is already marked as disabled */
if (!ixl_pf_qmgr_is_queue_enabled(&vf->qtag, i, true)) {
- device_printf(pf->dev, "VF %d: TX ring %d is already disabled!\n",
+ ixl_dbg(pf, IXL_DBG_IOV, "VF %d: TX ring %d is already disabled!\n",
vf->vf_num, i);
continue;
}
@@ -1029,7 +1042,7 @@
continue;
/* Warn if this queue is already marked as disabled */
if (!ixl_pf_qmgr_is_queue_enabled(&vf->qtag, i, false)) {
- device_printf(pf->dev, "VF %d: RX ring %d is already disabled!\n",
+ ixl_dbg(pf, IXL_DBG_IOV, "VF %d: RX ring %d is already disabled!\n",
vf->vf_num, i);
continue;
}
@@ -1292,6 +1305,7 @@
void *msg, uint16_t msg_size)
{
struct virtchnl_promisc_info *info;
+ struct i40e_hw *hw = &pf->hw;
enum i40e_status_code code;
if (msg_size != sizeof(*info)) {
@@ -1301,8 +1315,11 @@
}
if (!(vf->vf_flags & VF_FLAG_PROMISC_CAP)) {
+ /*
+ * Do the same thing as the Linux PF driver -- lie to the VF
+ */
i40e_send_vf_nack(pf, vf,
- VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, I40E_ERR_PARAM);
+ VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, I40E_SUCCESS);
return;
}
@@ -1313,19 +1330,25 @@
return;
}
- code = i40e_aq_set_vsi_unicast_promiscuous(&pf->hw, info->vsi_id,
+ code = i40e_aq_set_vsi_unicast_promiscuous(hw, vf->vsi.seid,
info->flags & FLAG_VF_UNICAST_PROMISC, NULL, TRUE);
if (code != I40E_SUCCESS) {
+ device_printf(pf->dev, "i40e_aq_set_vsi_unicast_promiscuous (seid %d) failed: status %s,"
+ " error %s\n", vf->vsi.seid, i40e_stat_str(hw, code),
+ i40e_aq_str(hw, hw->aq.asq_last_status));
i40e_send_vf_nack(pf, vf,
- VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, code);
+ VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, I40E_ERR_PARAM);
return;
}
- code = i40e_aq_set_vsi_multicast_promiscuous(&pf->hw, info->vsi_id,
+ code = i40e_aq_set_vsi_multicast_promiscuous(hw, vf->vsi.seid,
info->flags & FLAG_VF_MULTICAST_PROMISC, NULL);
if (code != I40E_SUCCESS) {
+ device_printf(pf->dev, "i40e_aq_set_vsi_multicast_promiscuous (seid %d) failed: status %s,"
+ " error %s\n", vf->vsi.seid, i40e_stat_str(hw, code),
+ i40e_aq_str(hw, hw->aq.asq_last_status));
i40e_send_vf_nack(pf, vf,
- VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, code);
+ VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, I40E_ERR_PARAM);
return;
}
@@ -1627,19 +1650,25 @@
/* Handle any VFs that have reset themselves via a Function Level Reset (FLR). */
void
-ixl_handle_vflr(void *arg, int pending)
+ixl_handle_vflr(struct ixl_pf *pf)
{
- struct ixl_pf *pf;
struct ixl_vf *vf;
struct i40e_hw *hw;
uint16_t global_vf_num;
uint32_t vflrstat_index, vflrstat_mask, vflrstat, icr0;
int i;
- pf = arg;
hw = &pf->hw;
- /* TODO: May need to lock this */
+ ixl_dbg(pf, IXL_DBG_IOV, "%s: begin\n", __func__);
+
+ /* Re-enable VFLR interrupt cause so driver doesn't miss a
+ * reset interrupt for another VF */
+ icr0 = rd32(hw, I40E_PFINT_ICR0_ENA);
+ icr0 |= I40E_PFINT_ICR0_ENA_VFLR_MASK;
+ wr32(hw, I40E_PFINT_ICR0_ENA, icr0);
+ ixl_flush(hw);
+
for (i = 0; i < pf->num_vfs; i++) {
global_vf_num = hw->func_caps.vf_base_id + i;
@@ -1654,17 +1683,12 @@
wr32(hw, I40E_GLGEN_VFLRSTAT(vflrstat_index),
vflrstat_mask);
+ ixl_dbg(pf, IXL_DBG_IOV, "Reinitializing VF-%d\n", i);
ixl_reinit_vf(pf, vf);
+ ixl_dbg(pf, IXL_DBG_IOV, "Reinitializing VF-%d done\n", i);
}
}
- atomic_clear_32(&pf->state, IXL_PF_STATE_VF_RESET_REQ);
- icr0 = rd32(hw, I40E_PFINT_ICR0_ENA);
- icr0 |= I40E_PFINT_ICR0_ENA_VFLR_MASK;
- wr32(hw, I40E_PFINT_ICR0_ENA, icr0);
- ixl_flush(hw);
-
- // IXL_PF_UNLOCK()
}
static int
@@ -1721,23 +1745,52 @@
}
}
+static int
+ixl_config_pf_vsi_loopback(struct ixl_pf *pf, bool enable)
+{
+ struct i40e_hw *hw = &pf->hw;
+ device_t dev = pf->dev;
+ struct ixl_vsi *vsi = &pf->vsi;
+ struct i40e_vsi_context ctxt;
+ int error;
+
+ memset(&ctxt, 0, sizeof(ctxt));
+
+ ctxt.seid = vsi->seid;
+ if (pf->veb_seid != 0)
+ ctxt.uplink_seid = pf->veb_seid;
+ ctxt.pf_num = hw->pf_id;
+ ctxt.connection_type = IXL_VSI_DATA_PORT;
+
+ ctxt.info.valid_sections = htole16(I40E_AQ_VSI_PROP_SWITCH_VALID);
+ ctxt.info.switch_id = (enable) ?
+ htole16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB) : 0;
+
+ /* error is set to 0 on success */
+ error = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
+ if (error) {
+ device_printf(dev, "i40e_aq_update_vsi_params() failed, error %d,"
+ " aq_error %d\n", error, hw->aq.asq_last_status);
+ }
+
+ return (error);
+}
+
int
-ixl_iov_init(device_t dev, uint16_t num_vfs, const nvlist_t *params)
+ixl_if_iov_init(if_ctx_t ctx, uint16_t num_vfs, const nvlist_t *params)
{
- struct ixl_pf *pf;
+ struct ixl_pf *pf = iflib_get_softc(ctx);
+ device_t dev = iflib_get_dev(ctx);
struct i40e_hw *hw;
struct ixl_vsi *pf_vsi;
enum i40e_status_code ret;
int i, error;
- pf = device_get_softc(dev);
hw = &pf->hw;
pf_vsi = &pf->vsi;
- //IXL_PF_LOCK(pf);
pf->vfs = malloc(sizeof(struct ixl_vf) * num_vfs, M_IXL, M_NOWAIT |
M_ZERO);
-
if (pf->vfs == NULL) {
error = ENOMEM;
goto fail;
@@ -1746,65 +1799,70 @@
for (i = 0; i < num_vfs; i++)
sysctl_ctx_init(&pf->vfs[i].ctx);
+ /*
+ * Add the VEB and ...
+ * - do nothing: VEPA mode
+ * - enable loopback mode on connected VSIs: VEB mode
+ */
ret = i40e_aq_add_veb(hw, pf_vsi->uplink_seid, pf_vsi->seid,
1, FALSE, &pf->veb_seid, FALSE, NULL);
if (ret != I40E_SUCCESS) {
- error = ixl_adminq_err_to_errno(hw->aq.asq_last_status);
- device_printf(dev, "add_veb failed; code=%d error=%d", ret,
- error);
+ error = hw->aq.asq_last_status;
+ device_printf(dev, "i40e_aq_add_veb failed; status %s error %s",
+ i40e_stat_str(hw, ret), i40e_aq_str(hw, error));
goto fail;
}
+ if (pf->enable_vf_loopback)
+ ixl_config_pf_vsi_loopback(pf, true);
pf->num_vfs = num_vfs;
- //IXL_PF_UNLOCK(pf);
return (0);
fail:
free(pf->vfs, M_IXL);
pf->vfs = NULL;
- //IXL_PF_UNLOCK(pf);
return (error);
}
void
-ixl_iov_uninit(device_t dev)
+ixl_if_iov_uninit(if_ctx_t ctx)
{
- struct ixl_pf *pf;
+ struct ixl_pf *pf = iflib_get_softc(ctx);
struct i40e_hw *hw;
struct ixl_vsi *vsi;
struct ifnet *ifp;
struct ixl_vf *vfs;
int i, num_vfs;
- pf = device_get_softc(dev);
hw = &pf->hw;
vsi = &pf->vsi;
ifp = vsi->ifp;
- //IXL_PF_LOCK(pf);
for (i = 0; i < pf->num_vfs; i++) {
if (pf->vfs[i].vsi.seid != 0)
i40e_aq_delete_element(hw, pf->vfs[i].vsi.seid, NULL);
ixl_pf_qmgr_release(&pf->qmgr, &pf->vfs[i].qtag);
ixl_free_mac_filters(&pf->vfs[i].vsi);
- DDPRINTF(dev, "VF %d: %d released\n",
+ ixl_dbg(pf, IXL_DBG_IOV, "VF %d: %d released\n",
i, pf->vfs[i].qtag.num_allocated);
- DDPRINTF(dev, "Unallocated total: %d\n", ixl_pf_qmgr_get_num_free(&pf->qmgr));
+ ixl_dbg(pf, IXL_DBG_IOV, "Unallocated total: %d\n", ixl_pf_qmgr_get_num_free(&pf->qmgr));
}
if (pf->veb_seid != 0) {
i40e_aq_delete_element(hw, pf->veb_seid, NULL);
pf->veb_seid = 0;
}
+ /* Reset PF VSI loopback mode */
+ if (pf->enable_vf_loopback)
+ ixl_config_pf_vsi_loopback(pf, false);
vfs = pf->vfs;
num_vfs = pf->num_vfs;
pf->vfs = NULL;
pf->num_vfs = 0;
- //IXL_PF_UNLOCK(pf);
- /* Do this after the unlock as sysctl_ctx_free might sleep. */
+ /* sysctl_ctx_free might sleep, but this func is called w/ an sx lock */
for (i = 0; i < num_vfs; i++)
sysctl_ctx_free(&vfs[i].ctx);
free(vfs, M_IXL);
@@ -1823,9 +1881,9 @@
if (num_queues < 1) {
device_printf(dev, "Setting VF %d num-queues to 1\n", vf->vf_num);
num_queues = 1;
- } else if (num_queues > 16) {
- device_printf(dev, "Setting VF %d num-queues to 16\n", vf->vf_num);
- num_queues = 16;
+ } else if (num_queues > IXLV_MAX_QUEUES) {
+ device_printf(dev, "Setting VF %d num-queues to %d\n", vf->vf_num, IXLV_MAX_QUEUES);
+ num_queues = IXLV_MAX_QUEUES;
}
error = ixl_pf_qmgr_alloc_scattered(&pf->qmgr, num_queues, &vf->qtag);
if (error) {
@@ -1834,30 +1892,27 @@
return (ENOSPC);
}
- DDPRINTF(dev, "VF %d: %d allocated, %d active",
+ ixl_dbg(pf, IXL_DBG_IOV, "VF %d: %d allocated, %d active\n",
vf->vf_num, vf->qtag.num_allocated, vf->qtag.num_active);
- DDPRINTF(dev, "Unallocated total: %d", ixl_pf_qmgr_get_num_free(&pf->qmgr));
+ ixl_dbg(pf, IXL_DBG_IOV, "Unallocated total: %d\n", ixl_pf_qmgr_get_num_free(&pf->qmgr));
return (0);
}
int
-ixl_add_vf(device_t dev, uint16_t vfnum, const nvlist_t *params)
+ixl_if_iov_vf_add(if_ctx_t ctx, uint16_t vfnum, const nvlist_t *params)
{
+ struct ixl_pf *pf = iflib_get_softc(ctx);
+ device_t dev = pf->dev;
char sysctl_name[QUEUE_NAME_LEN];
- struct ixl_pf *pf;
struct ixl_vf *vf;
const void *mac;
size_t size;
int error;
int vf_num_queues;
- pf = device_get_softc(dev);
vf = &pf->vfs[vfnum];
-
- //IXL_PF_LOCK(pf);
vf->vf_num = vfnum;
-
vf->vsi.back = pf;
vf->vf_flags = VF_FLAG_ENABLED;
SLIST_INIT(&vf->vsi.ftl);
@@ -1893,12 +1948,12 @@
vf->vf_flags |= VF_FLAG_VLAN_CAP;
+ /* VF needs to be reset before it can be used */
ixl_reset_vf(pf, vf);
out:
- //IXL_PF_UNLOCK(pf);
if (error == 0) {
snprintf(sysctl_name, sizeof(sysctl_name), "vf%d", vfnum);
- ixl_add_vsi_sysctls(pf, &vf->vsi, &vf->ctx, sysctl_name);
+ ixl_add_vsi_sysctls(dev, &vf->vsi, &vf->ctx, sysctl_name);
}
return (error);
Index: sys/dev/ixl/ixl_pf_main.c
===================================================================
--- sys/dev/ixl/ixl_pf_main.c
+++ sys/dev/ixl/ixl_pf_main.c
@@ -113,21 +113,6 @@
MALLOC_DEFINE(M_IXL, "ixl", "ixl driver allocations");
-void
-ixl_debug_core(struct ixl_pf *pf, enum ixl_dbg_mask mask, char *fmt, ...)
-{
- va_list args;
-
- if (!(mask & pf->dbg_mask))
- return;
-
- /* Re-implement device_printf() */
- device_print_prettyname(pf->dev);
- va_start(args, fmt);
- vprintf(fmt, args);
- va_end(args);
-}
-
/*
** Put the FW, API, NVM, EEtrackID, and OEM version information into a string
*/
@@ -527,11 +512,11 @@
int
ixl_msix_que(void *arg)
{
- struct ixl_rx_queue *que = arg;
+ struct ixl_rx_queue *rx_que = arg;
- ++que->irqs;
+ ++rx_que->irqs;
- ixl_set_queue_rx_itr(que);
+ ixl_set_queue_rx_itr(rx_que);
// ixl_set_queue_tx_itr(que);
return (FILTER_SCHEDULE_THREAD);
@@ -557,8 +542,10 @@
++pf->admin_irq;
reg = rd32(hw, I40E_PFINT_ICR0);
- // For masking off interrupt causes that need to be handled before
- // they can be re-enabled
+ /*
+ * For masking off interrupt causes that need to be handled before
+ * they can be re-enabled
+ */
mask = rd32(hw, I40E_PFINT_ICR0_ENA);
/* Check on the cause */
@@ -637,11 +624,12 @@
#ifdef PCI_IOV
if (reg & I40E_PFINT_ICR0_VFLR_MASK) {
mask &= ~I40E_PFINT_ICR0_ENA_VFLR_MASK;
- atomic_set_32(&pf->state, IXL_PF_STATE_VF_RESET_REQ);
- do_task = TRUE;
+ iflib_iov_intr_deferred(pf->vsi.ctx);
}
#endif
+
wr32(hw, I40E_PFINT_ICR0_ENA, mask);
+ ixl_enable_intr0(hw);
if (do_task)
return (FILTER_SCHEDULE_THREAD);
@@ -1028,7 +1016,6 @@
INIT_DBG_DEV(dev, "begin");
- /* TODO: Remove VLAN_ENCAP_LEN? */
vsi->shared->isc_max_frame_size =
ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
+ ETHER_VLAN_ENCAP_LEN;
@@ -1067,6 +1054,29 @@
return (0);
}
+/*
+ * Input: bitmap of enum i40e_aq_link_speed
+ */
+u64
+ixl_max_aq_speed_to_value(u8 link_speeds)
+{
+ if (link_speeds & I40E_LINK_SPEED_40GB)
+ return IF_Gbps(40);
+ if (link_speeds & I40E_LINK_SPEED_25GB)
+ return IF_Gbps(25);
+ if (link_speeds & I40E_LINK_SPEED_20GB)
+ return IF_Gbps(20);
+ if (link_speeds & I40E_LINK_SPEED_10GB)
+ return IF_Gbps(10);
+ if (link_speeds & I40E_LINK_SPEED_1GB)
+ return IF_Gbps(1);
+ if (link_speeds & I40E_LINK_SPEED_100MB)
+ return IF_Mbps(100);
+ else
+ /* Minimum supported link speed */
+ return IF_Mbps(100);
+}
+
/*
** Run when the Admin Queue gets a link state change interrupt.
*/
@@ -1194,7 +1204,7 @@
* the driver may not use all of them).
*/
tc_queues = fls(pf->qtag.num_allocated) - 1;
- ctxt.info.tc_mapping[0] = ((0 << I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+ ctxt.info.tc_mapping[0] = ((pf->qtag.first_qidx << I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT)
& I40E_AQ_VSI_TC_QUE_OFFSET_MASK) |
((tc_queues << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT)
& I40E_AQ_VSI_TC_QUE_NUMBER_MASK);
@@ -1493,23 +1503,6 @@
return;
}
-void
-ixl_add_vsi_sysctls(struct ixl_pf *pf, struct ixl_vsi *vsi,
- struct sysctl_ctx_list *ctx, const char *sysctl_name)
-{
- struct sysctl_oid *tree;
- struct sysctl_oid_list *child;
- struct sysctl_oid_list *vsi_list;
-
- tree = device_get_sysctl_tree(pf->dev);
- child = SYSCTL_CHILDREN(tree);
- vsi->vsi_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, sysctl_name,
- CTLFLAG_RD, NULL, "VSI Number");
- vsi_list = SYSCTL_CHILDREN(vsi->vsi_node);
-
- ixl_add_sysctls_eth_stats(ctx, vsi_list, &vsi->eth_stats);
-}
-
#ifdef IXL_DEBUG
/**
* ixl_sysctl_qtx_tail_handler
@@ -1634,131 +1627,17 @@
struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
struct sysctl_oid *tree = device_get_sysctl_tree(dev);
struct sysctl_oid_list *child = SYSCTL_CHILDREN(tree);
- struct sysctl_oid_list *vsi_list, *queue_list;
- struct sysctl_oid *queue_node;
- char queue_namebuf[32];
-
- struct ixl_rx_queue *rx_que;
- struct ixl_tx_queue *tx_que;
- struct tx_ring *txr;
- struct rx_ring *rxr;
/* Driver statistics */
- SYSCTL_ADD_UQUAD(ctx, child, OID_AUTO, "watchdog_events",
- CTLFLAG_RD, &pf->watchdog_events,
- "Watchdog timeouts");
SYSCTL_ADD_UQUAD(ctx, child, OID_AUTO, "admin_irq",
CTLFLAG_RD, &pf->admin_irq,
- "Admin Queue IRQ Handled");
-
- ixl_add_vsi_sysctls(pf, &pf->vsi, ctx, "pf");
- vsi_list = SYSCTL_CHILDREN(pf->vsi.vsi_node);
-
- /* Queue statistics */
- for (int q = 0; q < vsi->num_rx_queues; q++) {
- snprintf(queue_namebuf, QUEUE_NAME_LEN, "rxq%02d", q);
- queue_node = SYSCTL_ADD_NODE(ctx, vsi_list,
- OID_AUTO, queue_namebuf, CTLFLAG_RD, NULL, "RX Queue #");
- queue_list = SYSCTL_CHILDREN(queue_node);
-
- rx_que = &(vsi->rx_queues[q]);
- rxr = &(rx_que->rxr);
-
-
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "irqs",
- CTLFLAG_RD, &(rx_que->irqs),
- "irqs on this queue (both Tx and Rx)");
-
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "packets",
- CTLFLAG_RD, &(rxr->rx_packets),
- "Queue Packets Received");
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "bytes",
- CTLFLAG_RD, &(rxr->rx_bytes),
- "Queue Bytes Received");
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "desc_err",
- CTLFLAG_RD, &(rxr->desc_errs),
- "Queue Rx Descriptor Errors");
- SYSCTL_ADD_UINT(ctx, queue_list, OID_AUTO, "itr",
- CTLFLAG_RD, &(rxr->itr), 0,
- "Queue Rx ITR Interval");
-#ifdef IXL_DEBUG
- SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "qrx_tail",
- CTLTYPE_UINT | CTLFLAG_RD, rx_que,
- sizeof(struct ixl_rx_queue),
- ixl_sysctl_qrx_tail_handler, "IU",
- "Queue Receive Descriptor Tail");
-#endif
- }
- for (int q = 0; q < vsi->num_tx_queues; q++) {
- snprintf(queue_namebuf, QUEUE_NAME_LEN, "txq%02d", q);
- queue_node = SYSCTL_ADD_NODE(ctx, vsi_list,
- OID_AUTO, queue_namebuf, CTLFLAG_RD, NULL, "TX Queue #");
- queue_list = SYSCTL_CHILDREN(queue_node);
-
- tx_que = &(vsi->tx_queues[q]);
- txr = &(tx_que->txr);
-
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "tso",
- CTLFLAG_RD, &(tx_que->tso),
- "TSO");
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "mss_too_small",
- CTLFLAG_RD, &(txr->mss_too_small),
- "TSO sends with an MSS less than 64");
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "packets",
- CTLFLAG_RD, &(txr->tx_packets),
- "Queue Packets Transmitted");
- SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "bytes",
- CTLFLAG_RD, &(txr->tx_bytes),
- "Queue Bytes Transmitted");
- SYSCTL_ADD_UINT(ctx, queue_list, OID_AUTO, "itr",
- CTLFLAG_RD, &(txr->itr), 0,
- "Queue Tx ITR Interval");
-#ifdef IXL_DEBUG
- SYSCTL_ADD_PROC(ctx, queue_list, OID_AUTO, "qtx_tail",
- CTLTYPE_UINT | CTLFLAG_RD, tx_que,
- sizeof(struct ixl_tx_queue),
- ixl_sysctl_qtx_tail_handler, "IU",
- "Queue Transmit Descriptor Tail");
-#endif
- }
+ "Admin Queue IRQs received");
- /* MAC stats */
- ixl_add_sysctls_mac_stats(ctx, child, pf_stats);
-}
+ ixl_add_vsi_sysctls(dev, vsi, ctx, "pf");
-void
-ixl_add_sysctls_eth_stats(struct sysctl_ctx_list *ctx,
- struct sysctl_oid_list *child,
- struct i40e_eth_stats *eth_stats)
-{
- struct ixl_sysctl_info ctls[] =
- {
- {ð_stats->rx_bytes, "good_octets_rcvd", "Good Octets Received"},
- {ð_stats->rx_unicast, "ucast_pkts_rcvd",
- "Unicast Packets Received"},
- {ð_stats->rx_multicast, "mcast_pkts_rcvd",
- "Multicast Packets Received"},
- {ð_stats->rx_broadcast, "bcast_pkts_rcvd",
- "Broadcast Packets Received"},
- {ð_stats->rx_discards, "rx_discards", "Discarded RX packets"},
- {ð_stats->tx_bytes, "good_octets_txd", "Good Octets Transmitted"},
- {ð_stats->tx_unicast, "ucast_pkts_txd", "Unicast Packets Transmitted"},
- {ð_stats->tx_multicast, "mcast_pkts_txd",
- "Multicast Packets Transmitted"},
- {ð_stats->tx_broadcast, "bcast_pkts_txd",
- "Broadcast Packets Transmitted"},
- // end
- {0,0,0}
- };
+ ixl_add_queues_sysctls(dev, vsi);
- struct ixl_sysctl_info *entry = ctls;
- while (entry->stat != 0)
- {
- SYSCTL_ADD_UQUAD(ctx, child, OID_AUTO, entry->name,
- CTLFLAG_RD, entry->stat,
- entry->description);
- entry++;
- }
+ ixl_add_sysctls_mac_stats(ctx, child, pf_stats);
}
void
@@ -2052,8 +1931,6 @@
f->flags |= IXL_FILTER_MC;
else
printf("WARNING: no filter available!!\n");
-
- return;
}
void
@@ -2104,8 +1981,8 @@
else
vsi->num_macs++;
+ f->flags |= IXL_FILTER_USED;
ixl_add_hw_filters(vsi, f->flags, 1);
- return;
}
void
@@ -2165,12 +2042,15 @@
enum i40e_status_code status;
int j = 0;
- MPASS(cnt > 0);
-
pf = vsi->back;
- dev = iflib_get_dev(vsi->ctx);
+ dev = vsi->dev;
hw = &pf->hw;
+ if (cnt < 1) {
+ ixl_dbg_info(pf, "ixl_add_hw_filters: cnt < 1\n");
+ return;
+ }
+
a = malloc(sizeof(struct i40e_aqc_add_macvlan_element_data) * cnt,
M_DEVBUF, M_NOWAIT | M_ZERO);
if (a == NULL) {
@@ -2197,6 +2077,9 @@
b->flags |= I40E_AQC_MACVLAN_ADD_PERFECT_MATCH;
f->flags &= ~IXL_FILTER_ADD;
j++;
+
+ ixl_dbg_filter(pf, "ADD: " MAC_FORMAT "\n",
+ MAC_FORMAT_ARGS(f->macaddr));
}
if (j == cnt)
break;
@@ -2232,7 +2115,7 @@
pf = vsi->back;
hw = &pf->hw;
- dev = iflib_get_dev(vsi->ctx);
+ dev = vsi->dev;
d = malloc(sizeof(struct i40e_aqc_remove_macvlan_element_data) * cnt,
M_DEVBUF, M_NOWAIT | M_ZERO);
@@ -2252,6 +2135,10 @@
} else {
e->vlan_tag = f->vlan;
}
+
+ ixl_dbg_filter(pf, "DEL: " MAC_FORMAT "\n",
+ MAC_FORMAT_ARGS(f->macaddr));
+
/* delete entry from vsi list */
SLIST_REMOVE(&vsi->ftl, f, ixl_mac_filter, next);
free(f, M_DEVBUF);
@@ -2456,18 +2343,16 @@
return (error);
}
-/* For PF VSI only */
int
-ixl_disable_rings(struct ixl_vsi *vsi)
+ixl_disable_rings(struct ixl_pf *pf, struct ixl_vsi *vsi, struct ixl_pf_qtag *qtag)
{
- struct ixl_pf *pf = vsi->back;
- int error = 0;
+ int error = 0;
for (int i = 0; i < vsi->num_tx_queues; i++)
- error = ixl_disable_tx_ring(pf, &pf->qtag, i);
+ error = ixl_disable_tx_ring(pf, qtag, i);
for (int i = 0; i < vsi->num_rx_queues; i++)
- error = ixl_disable_rx_ring(pf, &pf->qtag, i);
+ error = ixl_disable_rx_ring(pf, qtag, i);
return (error);
}
@@ -2578,14 +2463,12 @@
ixl_flush(hw);
}
-/* This only enables HW interrupts for the RX queues */
void
ixl_enable_intr(struct ixl_vsi *vsi)
{
struct i40e_hw *hw = vsi->hw;
struct ixl_rx_queue *que = vsi->rx_queues;
- // TODO: Check iflib interrupt mode instead?
if (vsi->shared->isc_intr == IFLIB_INTR_MSIX) {
for (int i = 0; i < vsi->num_rx_queues; i++, que++)
ixl_enable_queue(hw, que->rxr.me);
@@ -3314,12 +3197,6 @@
OID_AUTO, "read_i2c_diag_data", CTLTYPE_STRING | CTLFLAG_RD,
pf, 0, ixl_sysctl_read_i2c_diag_data, "A", "Dump selected diagnostic data from FW");
}
-
-#ifdef PCI_IOV
- SYSCTL_ADD_UINT(ctx, debug_list,
- OID_AUTO, "vc_debug_level", CTLFLAG_RW, &pf->vc_debug_lvl,
- 0, "PF/VF Virtual Channel debug level");
-#endif
}
/*
@@ -3332,9 +3209,7 @@
struct ixl_pf *pf = (struct ixl_pf *)arg1;
int queues;
- //IXL_PF_LOCK(pf);
queues = (int)ixl_pf_qmgr_get_num_free(&pf->qmgr);
- //IXL_PF_UNLOCK(pf);
return sysctl_handle_int(oidp, NULL, queues, req);
}
@@ -3998,44 +3873,72 @@
struct ixl_pf *pf = (struct ixl_pf *)arg1;
struct ixl_vsi *vsi = &pf->vsi;
struct ixl_mac_filter *f;
- char *buf, *buf_i;
+ device_t dev = pf->dev;
+ int error = 0, ftl_len = 0, ftl_counter = 0;
- int error = 0;
- int ftl_len = 0;
- int ftl_counter = 0;
- int buf_len = 0;
- int entry_len = 42;
+ struct sbuf *buf;
- SLIST_FOREACH(f, &vsi->ftl, next) {
- ftl_len++;
+ buf = sbuf_new_for_sysctl(NULL, NULL, 128, req);
+ if (!buf) {
+ device_printf(dev, "Could not allocate sbuf for output.\n");
+ return (ENOMEM);
}
- if (ftl_len < 1) {
- sysctl_handle_string(oidp, "(none)", 6, req);
- return (0);
- }
+ sbuf_printf(buf, "\n");
- buf_len = sizeof(char) * (entry_len + 1) * ftl_len + 2;
- buf = buf_i = malloc(buf_len, M_DEVBUF, M_WAITOK);
+ /* Print MAC filters */
+ sbuf_printf(buf, "PF Filters:\n");
+ SLIST_FOREACH(f, &vsi->ftl, next)
+ ftl_len++;
- sprintf(buf_i++, "\n");
- SLIST_FOREACH(f, &vsi->ftl, next) {
- sprintf(buf_i,
- MAC_FORMAT ", vlan %4d, flags %#06x",
- MAC_FORMAT_ARGS(f->macaddr), f->vlan, f->flags);
- buf_i += entry_len;
- /* don't print '\n' for last entry */
- if (++ftl_counter != ftl_len) {
- sprintf(buf_i, "\n");
- buf_i++;
+ if (ftl_len < 1)
+ sbuf_printf(buf, "(none)\n");
+ else {
+ SLIST_FOREACH(f, &vsi->ftl, next) {
+ sbuf_printf(buf,
+ MAC_FORMAT ", vlan %4d, flags %#06x",
+ MAC_FORMAT_ARGS(f->macaddr), f->vlan, f->flags);
+ /* don't print '\n' for last entry */
+ if (++ftl_counter != ftl_len)
+ sbuf_printf(buf, "\n");
+ }
+ }
+
+#ifdef PCI_IOV
+ /* TODO: Give each VF its own filter list sysctl */
+ struct ixl_vf *vf;
+ if (pf->num_vfs > 0) {
+ sbuf_printf(buf, "\n\n");
+ for (int i = 0; i < pf->num_vfs; i++) {
+ vf = &pf->vfs[i];
+ if (!(vf->vf_flags & VF_FLAG_ENABLED))
+ continue;
+
+ vsi = &vf->vsi;
+ ftl_len = 0, ftl_counter = 0;
+ sbuf_printf(buf, "VF-%d Filters:\n", vf->vf_num);
+ SLIST_FOREACH(f, &vsi->ftl, next)
+ ftl_len++;
+
+ if (ftl_len < 1)
+ sbuf_printf(buf, "(none)\n");
+ else {
+ SLIST_FOREACH(f, &vsi->ftl, next) {
+ sbuf_printf(buf,
+ MAC_FORMAT ", vlan %4d, flags %#06x\n",
+ MAC_FORMAT_ARGS(f->macaddr), f->vlan, f->flags);
+ }
+ }
}
}
+#endif
- error = sysctl_handle_string(oidp, buf, strlen(buf), req);
+ error = sbuf_finish(buf);
if (error)
- printf("sysctl error: %d\n", error);
- free(buf, M_DEVBUF);
- return error;
+ device_printf(dev, "Error finishing sbuf: %d\n", error);
+ sbuf_delete(buf);
+
+ return (error);
}
#define IXL_SW_RES_SIZE 0x14
Index: sys/dev/ixl/ixl_pf_qmgr.h
===================================================================
--- sys/dev/ixl/ixl_pf_qmgr.h
+++ sys/dev/ixl/ixl_pf_qmgr.h
@@ -53,11 +53,11 @@
/* Manager */
struct ixl_pf_qmgr_qinfo {
- bool allocated;
- bool tx_enabled;
- bool rx_enabled;
- bool tx_configured;
- bool rx_configured;
+ u8 allocated;
+ u8 tx_enabled;
+ u8 rx_enabled;
+ u8 tx_configured;
+ u8 rx_configured;
};
struct ixl_pf_qmgr {
@@ -74,7 +74,10 @@
struct ixl_pf_qtag {
struct ixl_pf_qmgr *qmgr;
enum ixl_pf_qmgr_qalloc_type type;
- u16 qidx[IXL_MAX_SCATTERED_QUEUES];
+ union {
+ u16 qidx[IXL_MAX_SCATTERED_QUEUES];
+ u16 first_qidx;
+ };
u16 num_allocated;
u16 num_active;
};
Index: sys/dev/ixl/ixl_pf_qmgr.c
===================================================================
--- sys/dev/ixl/ixl_pf_qmgr.c
+++ sys/dev/ixl/ixl_pf_qmgr.c
@@ -45,7 +45,7 @@
qmgr->num_queues = num_queues;
qmgr->qinfo = malloc(num_queues * sizeof(struct ixl_pf_qmgr_qinfo),
- M_IXL, M_ZERO | M_WAITOK);
+ M_IXL, M_ZERO | M_NOWAIT);
if (qmgr->qinfo == NULL)
return ENOMEM;
Index: sys/dev/ixl/ixl_txrx.c
===================================================================
--- sys/dev/ixl/ixl_txrx.c
+++ sys/dev/ixl/ixl_txrx.c
@@ -65,8 +65,6 @@
qidx_t budget);
static int ixl_isc_rxd_pkt_get(void *arg, if_rxd_info_t ri);
-extern int ixl_intr(void *arg);
-
struct if_txrx ixl_txrx_hwb = {
ixl_isc_txd_encap,
ixl_isc_txd_flush,
@@ -75,7 +73,7 @@
ixl_isc_rxd_pkt_get,
ixl_isc_rxd_refill,
ixl_isc_rxd_flush,
- ixl_intr
+ NULL
};
struct if_txrx ixl_txrx_dwb = {
@@ -86,7 +84,7 @@
ixl_isc_rxd_pkt_get,
ixl_isc_rxd_refill,
ixl_isc_rxd_flush,
- ixl_intr
+ NULL
};
/*
@@ -133,6 +131,21 @@
return hw->err_str;
}
+void
+ixl_debug_core(device_t dev, u32 enabled_mask, u32 mask, char *fmt, ...)
+{
+ va_list args;
+
+ if (!(mask & enabled_mask))
+ return;
+
+ /* Re-implement device_printf() */
+ device_print_prettyname(dev);
+ va_start(args, fmt);
+ vprintf(fmt, args);
+ va_end(args);
+}
+
static bool
ixl_is_tx_desc_done(struct tx_ring *txr, int idx)
{
@@ -406,9 +419,7 @@
(sizeof(struct i40e_tx_desc)) *
(vsi->shared->isc_ntxd[0] + (vsi->enable_head_writeback ? 1 : 0)));
- // TODO: Write max descriptor index instead of 0?
wr32(vsi->hw, txr->tail, 0);
- wr32(vsi->hw, I40E_QTX_HEAD(txr->me), 0);
}
/*
@@ -547,14 +558,6 @@
nrxd = vsi->shared->isc_nrxd[0];
- if (budget == 1) {
- rxd = &rxr->rx_base[idx];
- qword = le64toh(rxd->wb.qword1.status_error_len);
- status = (qword & I40E_RXD_QW1_STATUS_MASK)
- >> I40E_RXD_QW1_STATUS_SHIFT;
- return !!(status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT));
- }
-
for (cnt = 0, i = idx; cnt < nrxd - 1 && cnt <= budget;) {
rxd = &rxr->rx_base[i];
qword = le64toh(rxd->wb.qword1.status_error_len);
@@ -657,7 +660,7 @@
MPASS((status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT)) != 0);
ri->iri_len += plen;
- rxr->bytes += plen;
+ rxr->rx_bytes += plen;
cur->wb.qword1.status_error_len = 0;
eop = (status & (1 << I40E_RX_DESC_STATUS_EOF_SHIFT));
@@ -745,25 +748,179 @@
ri->iri_csum_data |= htons(0xffff);
}
+/* Set Report Status queue fields to 0 */
+void
+ixl_init_tx_rsqs(struct ixl_vsi *vsi)
+{
+ if_softc_ctx_t scctx = vsi->shared;
+ struct ixl_tx_queue *tx_que;
+ int i, j;
+
+ for (i = 0, tx_que = vsi->tx_queues; i < vsi->num_tx_queues; i++, tx_que++) {
+ struct tx_ring *txr = &tx_que->txr;
+
+ txr->tx_rs_cidx = txr->tx_rs_pidx = txr->tx_cidx_processed = 0;
+
+ for (j = 0; j < scctx->isc_ntxd[0]; j++)
+ txr->tx_rsq[j] = QIDX_INVALID;
+ }
+}
+
+void
+ixl_init_tx_cidx(struct ixl_vsi *vsi)
+{
+ struct ixl_tx_queue *tx_que;
+ int i;
+
+ for (i = 0, tx_que = vsi->tx_queues; i < vsi->num_tx_queues; i++, tx_que++) {
+ struct tx_ring *txr = &tx_que->txr;
+
+ txr->tx_cidx_processed = 0;
+ }
+}
+
/*
- * Input: bitmap of enum i40e_aq_link_speed
+ * Input: bitmap of enum virtchnl_link_speed
*/
u64
-ixl_max_aq_speed_to_value(u8 link_speeds)
+ixl_max_vc_speed_to_value(u8 link_speeds)
{
- if (link_speeds & I40E_LINK_SPEED_40GB)
+ if (link_speeds & VIRTCHNL_LINK_SPEED_40GB)
return IF_Gbps(40);
- if (link_speeds & I40E_LINK_SPEED_25GB)
+ if (link_speeds & VIRTCHNL_LINK_SPEED_25GB)
return IF_Gbps(25);
- if (link_speeds & I40E_LINK_SPEED_20GB)
+ if (link_speeds & VIRTCHNL_LINK_SPEED_20GB)
return IF_Gbps(20);
- if (link_speeds & I40E_LINK_SPEED_10GB)
+ if (link_speeds & VIRTCHNL_LINK_SPEED_10GB)
return IF_Gbps(10);
- if (link_speeds & I40E_LINK_SPEED_1GB)
+ if (link_speeds & VIRTCHNL_LINK_SPEED_1GB)
return IF_Gbps(1);
- if (link_speeds & I40E_LINK_SPEED_100MB)
+ if (link_speeds & VIRTCHNL_LINK_SPEED_100MB)
return IF_Mbps(100);
else
/* Minimum supported link speed */
return IF_Mbps(100);
}
+
+void
+ixl_add_vsi_sysctls(device_t dev, struct ixl_vsi *vsi,
+ struct sysctl_ctx_list *ctx, const char *sysctl_name)
+{
+ struct sysctl_oid *tree;
+ struct sysctl_oid_list *child;
+ struct sysctl_oid_list *vsi_list;
+
+ tree = device_get_sysctl_tree(dev);
+ child = SYSCTL_CHILDREN(tree);
+ vsi->vsi_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, sysctl_name,
+ CTLFLAG_RD, NULL, "VSI Number");
+ vsi_list = SYSCTL_CHILDREN(vsi->vsi_node);
+
+ ixl_add_sysctls_eth_stats(ctx, vsi_list, &vsi->eth_stats);
+}
+
+void
+ixl_add_sysctls_eth_stats(struct sysctl_ctx_list *ctx,
+ struct sysctl_oid_list *child,
+ struct i40e_eth_stats *eth_stats)
+{
+ struct ixl_sysctl_info ctls[] =
+ {
+ {ð_stats->rx_bytes, "good_octets_rcvd", "Good Octets Received"},
+ {ð_stats->rx_unicast, "ucast_pkts_rcvd",
+ "Unicast Packets Received"},
+ {ð_stats->rx_multicast, "mcast_pkts_rcvd",
+ "Multicast Packets Received"},
+ {ð_stats->rx_broadcast, "bcast_pkts_rcvd",
+ "Broadcast Packets Received"},
+ {ð_stats->rx_discards, "rx_discards", "Discarded RX packets"},
+ {ð_stats->tx_bytes, "good_octets_txd", "Good Octets Transmitted"},
+ {ð_stats->tx_unicast, "ucast_pkts_txd", "Unicast Packets Transmitted"},
+ {ð_stats->tx_multicast, "mcast_pkts_txd",
+ "Multicast Packets Transmitted"},
+ {ð_stats->tx_broadcast, "bcast_pkts_txd",
+ "Broadcast Packets Transmitted"},
+ // end
+ {0,0,0}
+ };
+
+ struct ixl_sysctl_info *entry = ctls;
+ while (entry->stat != 0)
+ {
+ SYSCTL_ADD_UQUAD(ctx, child, OID_AUTO, entry->name,
+ CTLFLAG_RD, entry->stat,
+ entry->description);
+ entry++;
+ }
+}
+
+void
+ixl_add_queues_sysctls(device_t dev, struct ixl_vsi *vsi)
+{
+ struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
+ struct sysctl_oid_list *vsi_list, *queue_list;
+ struct sysctl_oid *queue_node;
+ char queue_namebuf[32];
+
+ struct ixl_rx_queue *rx_que;
+ struct ixl_tx_queue *tx_que;
+ struct tx_ring *txr;
+ struct rx_ring *rxr;
+
+ vsi_list = SYSCTL_CHILDREN(vsi->vsi_node);
+
+ /* Queue statistics */
+ for (int q = 0; q < vsi->num_rx_queues; q++) {
+ bzero(queue_namebuf, sizeof(queue_namebuf));
+ snprintf(queue_namebuf, QUEUE_NAME_LEN, "rxq%02d", q);
+ queue_node = SYSCTL_ADD_NODE(ctx, vsi_list,
+ OID_AUTO, queue_namebuf, CTLFLAG_RD, NULL, "RX Queue #");
+ queue_list = SYSCTL_CHILDREN(queue_node);
+
+ rx_que = &(vsi->rx_queues[q]);
+ rxr = &(rx_que->rxr);
+
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "irqs",
+ CTLFLAG_RD, &(rx_que->irqs),
+ "irqs on this queue (both Tx and Rx)");
+
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "packets",
+ CTLFLAG_RD, &(rxr->rx_packets),
+ "Queue Packets Received");
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "bytes",
+ CTLFLAG_RD, &(rxr->rx_bytes),
+ "Queue Bytes Received");
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "desc_err",
+ CTLFLAG_RD, &(rxr->desc_errs),
+ "Queue Rx Descriptor Errors");
+ SYSCTL_ADD_UINT(ctx, queue_list, OID_AUTO, "itr",
+ CTLFLAG_RD, &(rxr->itr), 0,
+ "Queue Rx ITR Interval");
+ }
+ for (int q = 0; q < vsi->num_tx_queues; q++) {
+ bzero(queue_namebuf, sizeof(queue_namebuf));
+ snprintf(queue_namebuf, QUEUE_NAME_LEN, "txq%02d", q);
+ queue_node = SYSCTL_ADD_NODE(ctx, vsi_list,
+ OID_AUTO, queue_namebuf, CTLFLAG_RD, NULL, "TX Queue #");
+ queue_list = SYSCTL_CHILDREN(queue_node);
+
+ tx_que = &(vsi->tx_queues[q]);
+ txr = &(tx_que->txr);
+
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "tso",
+ CTLFLAG_RD, &(tx_que->tso),
+ "TSO");
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "mss_too_small",
+ CTLFLAG_RD, &(txr->mss_too_small),
+ "TSO sends with an MSS less than 64");
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "packets",
+ CTLFLAG_RD, &(txr->tx_packets),
+ "Queue Packets Transmitted");
+ SYSCTL_ADD_UQUAD(ctx, queue_list, OID_AUTO, "bytes",
+ CTLFLAG_RD, &(txr->tx_bytes),
+ "Queue Bytes Transmitted");
+ SYSCTL_ADD_UINT(ctx, queue_list, OID_AUTO, "itr",
+ CTLFLAG_RD, &(txr->itr), 0,
+ "Queue Tx ITR Interval");
+ }
+}
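The sentinel-terminated stats table added in `ixl_add_sysctls_eth_stats()` above can be sketched in a standalone form. The `struct stat_entry` layout and the `register_stats()` helper below are hypothetical simplifications; the registration call stands in for `SYSCTL_ADD_UQUAD()`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Miniature of the table-driven sysctl registration pattern:
 * walk a {pointer, name, description} array until the {0,0,0}
 * sentinel. The printf() is a stand-in for SYSCTL_ADD_UQUAD(). */
struct stat_entry {
	uint64_t *stat;
	const char *name;
	const char *description;
};

static int
register_stats(const struct stat_entry *entry)
{
	int registered = 0;

	while (entry->stat != NULL) {	/* {0,0,0} ends the table */
		/* stand-in for SYSCTL_ADD_UQUAD(ctx, child, ...) */
		printf("sysctl %s: %s\n", entry->name, entry->description);
		registered++;
		entry++;
	}
	return (registered);
}
```

The sentinel row lets new counters be added by editing only the table, without touching the registration loop.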
Index: sys/dev/ixl/ixlv.h
===================================================================
--- sys/dev/ixl/ixlv.h
+++ sys/dev/ixl/ixlv.h
@@ -36,7 +36,7 @@
#ifndef _IXLV_H_
#define _IXLV_H_
-#include "ixlv_vc_mgr.h"
+#include "ixl.h"
#define IXLV_AQ_MAX_ERR 200
#define IXLV_MAX_FILTERS 128
@@ -65,35 +65,31 @@
"\20\1ENABLE_QUEUES\2DISABLE_QUEUES\3ADD_MAC_FILTER" \
"\4ADD_VLAN_FILTER\5DEL_MAC_FILTER\6DEL_VLAN_FILTER" \
"\7CONFIGURE_QUEUES\10MAP_VECTORS\11HANDLE_RESET" \
- "\12CONFIGURE_PROMISC\13GET_STATS"
+ "\12CONFIGURE_PROMISC\13GET_STATS\14CONFIG_RSS_KEY" \
+ "\15SET_RSS_HENA\16GET_RSS_HENA_CAPS\17CONFIG_RSS_LUT"
#define IXLV_PRINTF_VF_OFFLOAD_FLAGS \
- "\20\1I40E_VIRTCHNL_VF_OFFLOAD_L2" \
- "\2I40E_VIRTCHNL_VF_OFFLOAD_IWARP" \
- "\3I40E_VIRTCHNL_VF_OFFLOAD_FCOE" \
- "\4I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ" \
- "\5I40E_VIRTCHNL_VF_OFFLOAD_RSS_REG" \
- "\6I40E_VIRTCHNL_VF_OFFLOAD_WB_ON_ITR" \
- "\21I40E_VIRTCHNL_VF_OFFLOAD_VLAN" \
- "\22I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING" \
- "\23I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2" \
- "\24I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF"
+ "\20\1L2" \
+ "\2IWARP" \
+ "\3RSVD" \
+ "\4RSS_AQ" \
+ "\5RSS_REG" \
+ "\6WB_ON_ITR" \
+ "\7REQ_QUEUES" \
+ "\21VLAN" \
+ "\22RX_POLLING" \
+ "\23RSS_PCTYPE_V2" \
+ "\24RSS_PF" \
+ "\25ENCAP" \
+ "\26ENCAP_CSUM" \
+ "\27RX_ENCAP_CSUM"
static MALLOC_DEFINE(M_IXLV, "ixlv", "ixlv driver allocations");
/* Driver state */
enum ixlv_state_t {
- IXLV_START,
- IXLV_FAILED,
IXLV_RESET_REQUIRED,
IXLV_RESET_PENDING,
- IXLV_VERSION_CHECK,
- IXLV_GET_RESOURCES,
IXLV_INIT_READY,
- IXLV_INIT_START,
- IXLV_INIT_CONFIG,
- IXLV_INIT_MAPPING,
- IXLV_INIT_ENABLE,
- IXLV_INIT_COMPLETE,
IXLV_RUNNING,
};
@@ -115,76 +111,42 @@
/* Software controller structure */
struct ixlv_sc {
+ struct ixl_vsi vsi;
+
struct i40e_hw hw;
struct i40e_osdep osdep;
device_t dev;
struct resource *pci_mem;
- struct resource *msix_mem;
enum ixlv_state_t init_state;
- int init_in_progress;
-
- /*
- * Interrupt resources
- */
- void *tag;
- struct resource *res; /* For the AQ */
struct ifmedia media;
- struct callout timer;
- int msix;
- int pf_version;
- int if_flags;
+ struct virtchnl_version_info version;
+ enum ixl_dbg_mask dbg_mask;
+ u16 promisc_flags;
bool link_up;
enum virtchnl_link_speed link_speed;
- struct mtx mtx;
-
- u32 qbase;
- u32 admvec;
- struct timeout_task timeout;
-#ifdef notyet
- struct task aq_irq;
- struct task aq_sched;
-#endif
-
- struct ixl_vsi vsi;
+ /* Tunable settings */
+ int tx_itr;
+ int rx_itr;
+ int dynamic_tx_itr;
+ int dynamic_rx_itr;
/* Filter lists */
struct mac_list *mac_filters;
struct vlan_list *vlan_filters;
- /* Promiscuous mode */
- u32 promiscuous_flags;
-
- /* Admin queue task flags */
- u32 aq_wait_count;
-
- struct ixl_vc_mgr vc_mgr;
- struct ixl_vc_cmd add_mac_cmd;
- struct ixl_vc_cmd del_mac_cmd;
- struct ixl_vc_cmd config_queues_cmd;
- struct ixl_vc_cmd map_vectors_cmd;
- struct ixl_vc_cmd enable_queues_cmd;
- struct ixl_vc_cmd add_vlan_cmd;
- struct ixl_vc_cmd del_vlan_cmd;
- struct ixl_vc_cmd add_multi_cmd;
- struct ixl_vc_cmd del_multi_cmd;
- struct ixl_vc_cmd config_rss_key_cmd;
- struct ixl_vc_cmd get_rss_hena_caps_cmd;
- struct ixl_vc_cmd set_rss_hena_cmd;
- struct ixl_vc_cmd config_rss_lut_cmd;
-
/* Virtual comm channel */
struct virtchnl_vf_resource *vf_res;
struct virtchnl_vsi_resource *vsi_res;
/* Misc stats maintained by the driver */
- u64 watchdog_events;
u64 admin_irq;
+ /* Buffer used for reading AQ responses */
u8 aq_buffer[IXL_AQ_BUF_SZ];
};
@@ -203,6 +165,12 @@
return (status);
}
+/* Debug printing */
+#define ixlv_dbg(sc, m, s, ...) ixl_debug_core(sc->dev, sc->dbg_mask, m, s, ##__VA_ARGS__)
+#define ixlv_dbg_init(sc, s, ...) ixl_debug_core(sc->dev, sc->dbg_mask, IXLV_DBG_INIT, s, ##__VA_ARGS__)
+#define ixlv_dbg_info(sc, s, ...) ixl_debug_core(sc->dev, sc->dbg_mask, IXLV_DBG_INFO, s, ##__VA_ARGS__)
+#define ixlv_dbg_vc(sc, s, ...) ixl_debug_core(sc->dev, sc->dbg_mask, IXLV_DBG_VC, s, ##__VA_ARGS__)
+
/*
** VF Common function prototypes
*/
@@ -214,28 +182,32 @@
int ixlv_get_vf_config(struct ixlv_sc *);
void ixlv_init(void *);
int ixlv_reinit_locked(struct ixlv_sc *);
-void ixlv_configure_queues(struct ixlv_sc *);
-void ixlv_enable_queues(struct ixlv_sc *);
-void ixlv_disable_queues(struct ixlv_sc *);
-void ixlv_map_queues(struct ixlv_sc *);
+int ixlv_configure_queues(struct ixlv_sc *);
+int ixlv_enable_queues(struct ixlv_sc *);
+int ixlv_disable_queues(struct ixlv_sc *);
+int ixlv_map_queues(struct ixlv_sc *);
void ixlv_enable_intr(struct ixl_vsi *);
void ixlv_disable_intr(struct ixl_vsi *);
-void ixlv_add_ether_filters(struct ixlv_sc *);
-void ixlv_del_ether_filters(struct ixlv_sc *);
-void ixlv_request_stats(struct ixlv_sc *);
-void ixlv_request_reset(struct ixlv_sc *);
+int ixlv_add_ether_filters(struct ixlv_sc *);
+int ixlv_del_ether_filters(struct ixlv_sc *);
+int ixlv_request_stats(struct ixlv_sc *);
+int ixlv_request_reset(struct ixlv_sc *);
void ixlv_vc_completion(struct ixlv_sc *,
enum virtchnl_ops, enum virtchnl_status_code,
u8 *, u16);
-void ixlv_add_ether_filter(struct ixlv_sc *);
-void ixlv_add_vlans(struct ixlv_sc *);
-void ixlv_del_vlans(struct ixlv_sc *);
+int ixlv_add_ether_filter(struct ixlv_sc *);
+int ixlv_add_vlans(struct ixlv_sc *);
+int ixlv_del_vlans(struct ixlv_sc *);
void ixlv_update_stats_counters(struct ixlv_sc *,
struct i40e_eth_stats *);
void ixlv_update_link_status(struct ixlv_sc *);
-void ixlv_get_default_rss_key(u32 *, bool);
-void ixlv_config_rss_key(struct ixlv_sc *);
-void ixlv_set_rss_hena(struct ixlv_sc *);
-void ixlv_config_rss_lut(struct ixlv_sc *);
-
+int ixlv_get_default_rss_key(u32 *, bool);
+int ixlv_config_rss_key(struct ixlv_sc *);
+int ixlv_set_rss_hena(struct ixlv_sc *);
+int ixlv_config_rss_lut(struct ixlv_sc *);
+int ixlv_config_promisc_mode(struct ixlv_sc *);
+
+int ixl_vc_send_cmd(struct ixlv_sc *sc, uint32_t request);
+int ixlv_send_vc_msg(struct ixlv_sc *sc, u32 op);
+char *ixlv_vc_speed_to_string(enum virtchnl_link_speed link_speed);
#endif /* _IXLV_H_ */
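The `ixlv_dbg_*` macros introduced above gate messages on a per-category bit in the softc's `dbg_mask`. A minimal userland sketch of that mask-filtered pattern, with hypothetical stand-in names for `ixl_debug_core()` and the `IXLV_DBG_*` bits:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical stand-ins for the IXLV_DBG_* category bits. */
enum dbg_mask {
	DBG_INIT = (1 << 0),
	DBG_INFO = (1 << 1),
	DBG_VC   = (1 << 2),
};

/* Print only when the message's category bit is enabled in the
 * caller's mask; returns 1 if the message was emitted, else 0. */
static int
debug_core(unsigned enabled_mask, enum dbg_mask msg_mask,
    const char *fmt, ...)
{
	va_list ap;

	if (!(enabled_mask & msg_mask))
		return (0);	/* category disabled: swallow the message */
	va_start(ap, fmt);
	vprintf(fmt, ap);
	va_end(ap);
	return (1);
}

/* Mirrors the shape of ixlv_dbg_vc(): the category is baked in. */
#define dbg_vc(mask, ...) debug_core((mask), DBG_VC, __VA_ARGS__)
```

Baking the category into each macro keeps call sites terse while letting a single tunable mask enable or silence whole classes of output at run time.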
Index: sys/dev/ixl/ixlv_vc_mgr.h
===================================================================
--- sys/dev/ixl/ixlv_vc_mgr.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/******************************************************************************
-
- Copyright (c) 2013-2018, Intel Corporation
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions are met:
-
- 1. Redistributions of source code must retain the above copyright notice,
- this list of conditions and the following disclaimer.
-
- 2. Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in the
- documentation and/or other materials provided with the distribution.
-
- 3. Neither the name of the Intel Corporation nor the names of its
- contributors may be used to endorse or promote products derived from
- this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
- LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- POSSIBILITY OF SUCH DAMAGE.
-
-******************************************************************************/
-/*$FreeBSD$*/
-
-#ifndef _IXLV_VC_MGR_H_
-#define _IXLV_VC_MGR_H_
-
-#include <sys/queue.h>
-
-struct ixl_vc_cmd;
-
-typedef void ixl_vc_callback_t(struct ixl_vc_cmd *, void *,
- enum i40e_status_code);
-
-
-#define IXLV_VC_CMD_FLAG_BUSY 0x0001
-
-struct ixl_vc_cmd
-{
- uint32_t request;
- uint32_t flags;
-
- ixl_vc_callback_t *callback;
- void *arg;
-
- TAILQ_ENTRY(ixl_vc_cmd) next;
-};
-
-struct ixl_vc_mgr
-{
- struct ixlv_sc *sc;
- struct ixl_vc_cmd *current;
- struct callout callout;
-
- TAILQ_HEAD(, ixl_vc_cmd) pending;
-};
-
-#define IXLV_VC_TIMEOUT (2 * hz)
-
-void ixl_vc_init_mgr(struct ixlv_sc *, struct ixl_vc_mgr *);
-void ixl_vc_enqueue(struct ixl_vc_mgr *, struct ixl_vc_cmd *,
- uint32_t, ixl_vc_callback_t *, void *);
-void ixl_vc_flush(struct ixl_vc_mgr *mgr);
-
-#endif
-
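The ixlvc.c changes that follow repeatedly size variable-length virtchnl messages as a fixed header plus `cnt` trailing elements, the way `ixlv_add_vlans()` sizes its AQ buffer. A minimal sketch of that allocation pattern, using a hypothetical miniature of `struct virtchnl_vlan_filter_list` and `calloc()` in place of `malloc(len, M_IXLV, M_NOWAIT | M_ZERO)`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical miniature of virtchnl_vlan_filter_list: a header
 * followed by num_elements trailing VLAN IDs. */
struct vlan_filter_list {
	uint16_t vsi_id;
	uint16_t num_elements;
	uint16_t vlan_id[1];	/* really num_elements entries */
};

static struct vlan_filter_list *
alloc_vlan_list(uint16_t vsi_id, const uint16_t *vlans, uint16_t cnt)
{
	size_t len;
	struct vlan_filter_list *v;

	/* Header + cnt trailing u16s, as in ixlv_add_vlans(); since the
	 * header already holds one vlan_id slot this slightly
	 * over-allocates, which is harmless. */
	len = sizeof(*v) + (cnt * sizeof(uint16_t));
	v = calloc(1, len);	/* stand-in for malloc(..., M_ZERO) */
	if (v == NULL)
		return (NULL);
	v->vsi_id = vsi_id;
	v->num_elements = cnt;
	memcpy(v->vlan_id, vlans, cnt * sizeof(uint16_t));
	return (v);
}
```

The caller must still check the result against the AQ buffer ceiling (`IXL_AQ_BUF_SZ` in the driver) before sending, as the functions below do before returning `EFBIG`.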
Index: sys/dev/ixl/ixlvc.c
===================================================================
--- sys/dev/ixl/ixlvc.c
+++ sys/dev/ixl/ixlvc.c
@@ -40,118 +40,11 @@
#include "ixl.h"
#include "ixlv.h"
-#include "i40e_prototype.h"
-
/* busy wait delay in msec */
#define IXLV_BUSY_WAIT_DELAY 10
#define IXLV_BUSY_WAIT_COUNT 50
-static void ixl_vc_process_resp(struct ixl_vc_mgr *, uint32_t,
- enum virtchnl_status_code);
-static void ixl_vc_process_next(struct ixl_vc_mgr *mgr);
-static void ixl_vc_schedule_retry(struct ixl_vc_mgr *mgr);
-static void ixl_vc_send_current(struct ixl_vc_mgr *mgr);
-
-#ifdef IXL_DEBUG
-/*
-** Validate VF messages
-*/
-static int ixl_vc_validate_vf_msg(struct ixlv_sc *sc, u32 v_opcode,
- u8 *msg, u16 msglen)
-{
- bool err_msg_format = false;
- int valid_len;
-
- /* Validate message length. */
- switch (v_opcode) {
- case VIRTCHNL_OP_VERSION:
- valid_len = sizeof(struct virtchnl_version_info);
- break;
- case VIRTCHNL_OP_RESET_VF:
- valid_len = 0;
- break;
- case VIRTCHNL_OP_GET_VF_RESOURCES:
- /* Valid length in api v1.0 is 0, v1.1 is 4 */
- valid_len = 4;
- break;
- case VIRTCHNL_OP_CONFIG_TX_QUEUE:
- valid_len = sizeof(struct virtchnl_txq_info);
- break;
- case VIRTCHNL_OP_CONFIG_RX_QUEUE:
- valid_len = sizeof(struct virtchnl_rxq_info);
- break;
- case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
- valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
- if (msglen >= valid_len) {
- struct virtchnl_vsi_queue_config_info *vqc =
- (struct virtchnl_vsi_queue_config_info *)msg;
- valid_len += (vqc->num_queue_pairs *
- sizeof(struct
- virtchnl_queue_pair_info));
- if (vqc->num_queue_pairs == 0)
- err_msg_format = true;
- }
- break;
- case VIRTCHNL_OP_CONFIG_IRQ_MAP:
- valid_len = sizeof(struct virtchnl_irq_map_info);
- if (msglen >= valid_len) {
- struct virtchnl_irq_map_info *vimi =
- (struct virtchnl_irq_map_info *)msg;
- valid_len += (vimi->num_vectors *
- sizeof(struct virtchnl_vector_map));
- if (vimi->num_vectors == 0)
- err_msg_format = true;
- }
- break;
- case VIRTCHNL_OP_ENABLE_QUEUES:
- case VIRTCHNL_OP_DISABLE_QUEUES:
- valid_len = sizeof(struct virtchnl_queue_select);
- break;
- case VIRTCHNL_OP_ADD_ETH_ADDR:
- case VIRTCHNL_OP_DEL_ETH_ADDR:
- valid_len = sizeof(struct virtchnl_ether_addr_list);
- if (msglen >= valid_len) {
- struct virtchnl_ether_addr_list *veal =
- (struct virtchnl_ether_addr_list *)msg;
- valid_len += veal->num_elements *
- sizeof(struct virtchnl_ether_addr);
- if (veal->num_elements == 0)
- err_msg_format = true;
- }
- break;
- case VIRTCHNL_OP_ADD_VLAN:
- case VIRTCHNL_OP_DEL_VLAN:
- valid_len = sizeof(struct virtchnl_vlan_filter_list);
- if (msglen >= valid_len) {
- struct virtchnl_vlan_filter_list *vfl =
- (struct virtchnl_vlan_filter_list *)msg;
- valid_len += vfl->num_elements * sizeof(u16);
- if (vfl->num_elements == 0)
- err_msg_format = true;
- }
- break;
- case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
- valid_len = sizeof(struct virtchnl_promisc_info);
- break;
- case VIRTCHNL_OP_GET_STATS:
- valid_len = sizeof(struct virtchnl_queue_select);
- break;
- /* These are always errors coming from the VF. */
- case VIRTCHNL_OP_EVENT:
- case VIRTCHNL_OP_UNKNOWN:
- default:
- return EPERM;
- break;
- }
- /* few more checks */
- if ((valid_len != msglen) || (err_msg_format))
- return EINVAL;
- else
- return 0;
-}
-#endif
-
/*
** ixlv_send_pf_msg
**
@@ -161,31 +54,39 @@
ixlv_send_pf_msg(struct ixlv_sc *sc,
enum virtchnl_ops op, u8 *msg, u16 len)
{
- struct i40e_hw *hw = &sc->hw;
- device_t dev = sc->dev;
- i40e_status err;
-
-#ifdef IXL_DEBUG
- /*
- ** Pre-validating messages to the PF
- */
+ struct i40e_hw *hw = &sc->hw;
+ device_t dev = sc->dev;
+ i40e_status status;
int val_err;
- val_err = ixl_vc_validate_vf_msg(sc, op, msg, len);
+
+	/* Validate the message before sending it to the PF */
+ val_err = virtchnl_vc_validate_vf_msg(&sc->version, op, msg, len);
if (val_err)
device_printf(dev, "Error validating msg to PF for op %d,"
" msglen %d: error %d\n", op, len, val_err);
-#endif
- err = i40e_aq_send_msg_to_pf(hw, op, I40E_SUCCESS, msg, len, NULL);
- if (err)
+ if (!i40e_check_asq_alive(hw)) {
+ if (op != VIRTCHNL_OP_GET_STATS)
+ device_printf(dev, "Unable to send opcode %s to PF, "
+ "ASQ is not alive\n", ixl_vc_opcode_str(op));
+ return (0);
+ }
+
+ if (op != VIRTCHNL_OP_GET_STATS)
+ ixlv_dbg_vc(sc,
+ "Sending msg (op=%s[%d]) to PF\n",
+ ixl_vc_opcode_str(op), op);
+
+ status = i40e_aq_send_msg_to_pf(hw, op, I40E_SUCCESS, msg, len, NULL);
+ if (status && op != VIRTCHNL_OP_GET_STATS)
device_printf(dev, "Unable to send opcode %s to PF, "
"status %s, aq error %s\n",
ixl_vc_opcode_str(op),
- i40e_stat_str(hw, err),
+ i40e_stat_str(hw, status),
i40e_aq_str(hw, hw->aq.asq_last_status));
- return err;
-}
+ return (status);
+}
/*
** ixlv_send_api_ver
@@ -224,7 +125,7 @@
int retries = 0;
event.buf_len = IXL_AQ_BUF_SZ;
- event.msg_buf = malloc(event.buf_len, M_DEVBUF, M_NOWAIT);
+ event.msg_buf = malloc(event.buf_len, M_IXLV, M_NOWAIT);
if (!event.msg_buf) {
err = ENOMEM;
goto out;
@@ -266,8 +167,10 @@
(pf_vvi->minor > VIRTCHNL_VERSION_MINOR))) {
device_printf(dev, "Critical PF/VF API version mismatch!\n");
err = EIO;
- } else
- sc->pf_version = pf_vvi->minor;
+ } else {
+ sc->version.major = pf_vvi->major;
+ sc->version.minor = pf_vvi->minor;
+ }
/* Log PF/VF api versions */
device_printf(dev, "PF API %d.%d / VF API %d.%d\n",
@@ -275,7 +178,7 @@
VIRTCHNL_VERSION_MAJOR, VIRTCHNL_VERSION_MINOR);
out_alloc:
- free(event.msg_buf, M_DEVBUF);
+ free(event.msg_buf, M_IXLV);
out:
return (err);
}
@@ -296,7 +199,10 @@
VIRTCHNL_VF_OFFLOAD_RSS_PF |
VIRTCHNL_VF_OFFLOAD_VLAN;
- if (sc->pf_version == VIRTCHNL_VERSION_MINOR_NO_VF_CAPS)
+ ixlv_dbg_info(sc, "Sending offload flags: 0x%b\n",
+ caps, IXLV_PRINTF_VF_OFFLOAD_FLAGS);
+
+ if (sc->version.minor == VIRTCHNL_VERSION_MINOR_NO_VF_CAPS)
return ixlv_send_pf_msg(sc, VIRTCHNL_OP_GET_VF_RESOURCES,
NULL, 0);
else
@@ -326,7 +232,7 @@
len = sizeof(struct virtchnl_vf_resource) +
sizeof(struct virtchnl_vsi_resource);
event.buf_len = len;
- event.msg_buf = malloc(event.buf_len, M_DEVBUF, M_NOWAIT);
+ event.msg_buf = malloc(event.buf_len, M_IXLV, M_NOWAIT);
if (!event.msg_buf) {
err = ENOMEM;
goto out;
@@ -371,7 +277,7 @@
i40e_vf_parse_hw_config(hw, sc->vf_res);
out_alloc:
- free(event.msg_buf, M_DEVBUF);
+ free(event.msg_buf, M_IXLV);
out:
return err;
}
@@ -381,7 +287,7 @@
**
** Request that the PF set up our queues.
*/
-void
+int
ixlv_configure_queues(struct ixlv_sc *sc)
{
device_t dev = sc->dev;
@@ -401,11 +307,10 @@
pairs = max(vsi->num_tx_queues, vsi->num_rx_queues);
len = sizeof(struct virtchnl_vsi_queue_config_info) +
(sizeof(struct virtchnl_queue_pair_info) * pairs);
- vqci = malloc(len, M_DEVBUF, M_NOWAIT | M_ZERO);
+ vqci = malloc(len, M_IXLV, M_NOWAIT | M_ZERO);
if (!vqci) {
device_printf(dev, "%s: unable to allocate memory\n", __func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (ENOMEM);
}
vqci->vsi_id = sc->vsi_res->vsi_id;
vqci->num_queue_pairs = pairs;
@@ -413,6 +318,7 @@
/* Size check is not needed here - HW max is 16 queue pairs, and we
* can fit info for 31 of them into the AQ buffer before it overflows.
*/
+ // TODO: the above is wrong now; X722 VFs can have 256 queues
for (int i = 0; i < pairs; i++, tx_que++, rx_que++, vqpi++) {
txr = &tx_que->txr;
rxr = &rx_que->rxr;
@@ -422,22 +328,29 @@
vqpi->txq.ring_len = scctx->isc_ntxd[0];
vqpi->txq.dma_ring_addr = txr->tx_paddr;
/* Enable Head writeback */
- vqpi->txq.headwb_enabled = 0;
- vqpi->txq.dma_headwb_addr = 0;
+ if (!vsi->enable_head_writeback) {
+ vqpi->txq.headwb_enabled = 0;
+ vqpi->txq.dma_headwb_addr = 0;
+ } else {
+ vqpi->txq.headwb_enabled = 1;
+ vqpi->txq.dma_headwb_addr = txr->tx_paddr +
+ sizeof(struct i40e_tx_desc) * scctx->isc_ntxd[0];
+ }
vqpi->rxq.vsi_id = vqci->vsi_id;
vqpi->rxq.queue_id = i;
vqpi->rxq.ring_len = scctx->isc_nrxd[0];
vqpi->rxq.dma_ring_addr = rxr->rx_paddr;
vqpi->rxq.max_pkt_size = scctx->isc_max_frame_size;
- // TODO: Get this value from iflib, somehow
vqpi->rxq.databuffer_size = rxr->mbuf_sz;
vqpi->rxq.splithdr_enabled = 0;
}
ixlv_send_pf_msg(sc, VIRTCHNL_OP_CONFIG_VSI_QUEUES,
(u8 *)vqci, len);
- free(vqci, M_DEVBUF);
+ free(vqci, M_IXLV);
+
+ return (0);
}
/*
@@ -445,7 +358,7 @@
**
** Request that the PF enable all of our queues.
*/
-void
+int
ixlv_enable_queues(struct ixlv_sc *sc)
{
struct virtchnl_queue_select vqs;
@@ -453,10 +366,11 @@
vqs.vsi_id = sc->vsi_res->vsi_id;
/* XXX: In Linux PF, as long as neither of these is 0,
* every queue in VF VSI is enabled. */
- vqs.tx_queues = (1 << sc->vsi_res->num_queue_pairs) - 1;
+ vqs.tx_queues = (1 << sc->vsi.num_tx_queues) - 1;
vqs.rx_queues = vqs.tx_queues;
ixlv_send_pf_msg(sc, VIRTCHNL_OP_ENABLE_QUEUES,
(u8 *)&vqs, sizeof(vqs));
+ return (0);
}
/*
@@ -464,7 +378,7 @@
**
** Request that the PF disable all of our queues.
*/
-void
+int
ixlv_disable_queues(struct ixlv_sc *sc)
{
struct virtchnl_queue_select vqs;
@@ -472,10 +386,11 @@
vqs.vsi_id = sc->vsi_res->vsi_id;
/* XXX: In Linux PF, as long as neither of these is 0,
* every queue in VF VSI is disabled. */
- vqs.tx_queues = (1 << sc->vsi_res->num_queue_pairs) - 1;
+ vqs.tx_queues = (1 << sc->vsi.num_tx_queues) - 1;
vqs.rx_queues = vqs.tx_queues;
ixlv_send_pf_msg(sc, VIRTCHNL_OP_DISABLE_QUEUES,
(u8 *)&vqs, sizeof(vqs));
+ return (0);
}
/*
@@ -484,7 +399,7 @@
** Request that the PF map queues to interrupt vectors. Misc causes, including
** admin queue, are always mapped to vector 0.
*/
-void
+int
ixlv_map_queues(struct ixlv_sc *sc)
{
struct virtchnl_irq_map_info *vm;
@@ -502,12 +417,11 @@
q = scctx->isc_vectors - 1;
len = sizeof(struct virtchnl_irq_map_info) +
- (scctx->isc_vectors * sizeof(struct i40e_virtchnl_vector_map));
- vm = malloc(len, M_DEVBUF, M_NOWAIT);
+ (scctx->isc_vectors * sizeof(struct virtchnl_vector_map));
+ vm = malloc(len, M_IXLV, M_NOWAIT);
if (!vm) {
device_printf(dev, "%s: unable to allocate memory\n", __func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (ENOMEM);
}
vm->num_vectors = scctx->isc_vectors;
@@ -515,7 +429,8 @@
for (i = 0; i < q; i++, rx_que++) {
vm->vecmap[i].vsi_id = sc->vsi_res->vsi_id;
vm->vecmap[i].vector_id = i + 1; /* first is adminq */
- // vm->vecmap[i].txq_map = (1 << que->me);
+ // TODO: Re-examine this
+ vm->vecmap[i].txq_map = (1 << rx_que->rxr.me);
vm->vecmap[i].rxq_map = (1 << rx_que->rxr.me);
vm->vecmap[i].rxitr_idx = 0;
vm->vecmap[i].txitr_idx = 1;
@@ -531,7 +446,9 @@
ixlv_send_pf_msg(sc, VIRTCHNL_OP_CONFIG_IRQ_MAP,
(u8 *)vm, len);
- free(vm, M_DEVBUF);
+ free(vm, M_IXLV);
+
+ return (0);
}
/*
@@ -539,10 +456,10 @@
** to be added, then create the data to hand to the AQ
** for handling.
*/
-void
+int
ixlv_add_vlans(struct ixlv_sc *sc)
{
- struct virtchnl_vlan_filter_list *v;
+ struct virtchnl_vlan_filter_list *v;
struct ixlv_vlan_filter *f, *ftmp;
device_t dev = sc->dev;
int len, i = 0, cnt = 0;
@@ -553,11 +470,8 @@
cnt++;
}
- if (!cnt) { /* no work... */
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_ADD_VLAN_FILTER,
- VIRTCHNL_STATUS_SUCCESS);
- return;
- }
+ if (!cnt) /* no work... */
+ return (ENOENT);
len = sizeof(struct virtchnl_vlan_filter_list) +
(cnt * sizeof(u16));
@@ -565,16 +479,14 @@
if (len > IXL_AQ_BUF_SZ) {
device_printf(dev, "%s: Exceeded Max AQ Buf size\n",
__func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (EFBIG);
}
- v = malloc(len, M_DEVBUF, M_NOWAIT);
+ v = malloc(len, M_IXLV, M_NOWAIT);
if (!v) {
device_printf(dev, "%s: unable to allocate memory\n",
__func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (ENOMEM);
}
v->vsi_id = sc->vsi_res->vsi_id;
@@ -592,8 +504,9 @@
}
ixlv_send_pf_msg(sc, VIRTCHNL_OP_ADD_VLAN, (u8 *)v, len);
- free(v, M_DEVBUF);
+ free(v, M_IXLV);
/* add stats? */
+ return (0);
}
/*
@@ -601,12 +514,12 @@
** to be removed, then create the data to hand to the AQ
** for handling.
*/
-void
+int
ixlv_del_vlans(struct ixlv_sc *sc)
{
- device_t dev = sc->dev;
struct virtchnl_vlan_filter_list *v;
struct ixlv_vlan_filter *f, *ftmp;
+ device_t dev = sc->dev;
int len, i = 0, cnt = 0;
/* Get count of VLAN filters to delete */
@@ -615,11 +528,8 @@
cnt++;
}
- if (!cnt) { /* no work... */
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_DEL_VLAN_FILTER,
- VIRTCHNL_STATUS_SUCCESS);
- return;
- }
+ if (!cnt) /* no work... */
+ return (ENOENT);
len = sizeof(struct virtchnl_vlan_filter_list) +
(cnt * sizeof(u16));
@@ -627,16 +537,14 @@
if (len > IXL_AQ_BUF_SZ) {
device_printf(dev, "%s: Exceeded Max AQ Buf size\n",
__func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (EFBIG);
}
- v = malloc(len, M_DEVBUF, M_NOWAIT | M_ZERO);
+ v = malloc(len, M_IXLV, M_NOWAIT | M_ZERO);
if (!v) {
device_printf(dev, "%s: unable to allocate memory\n",
__func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (ENOMEM);
}
v->vsi_id = sc->vsi_res->vsi_id;
@@ -648,15 +556,16 @@
bcopy(&f->vlan, &v->vlan_id[i], sizeof(u16));
i++;
SLIST_REMOVE(sc->vlan_filters, f, ixlv_vlan_filter, next);
- free(f, M_DEVBUF);
+ free(f, M_IXLV);
}
if (i == cnt)
break;
}
ixlv_send_pf_msg(sc, VIRTCHNL_OP_DEL_VLAN, (u8 *)v, len);
- free(v, M_DEVBUF);
+ free(v, M_IXLV);
/* add stats? */
+ return (0);
}
@@ -665,13 +574,14 @@
** table and creates an Admin Queue call to create
** the filters in the hardware.
*/
-void
+int
ixlv_add_ether_filters(struct ixlv_sc *sc)
{
struct virtchnl_ether_addr_list *a;
struct ixlv_mac_filter *f;
- device_t dev = sc->dev;
- int len, j = 0, cnt = 0;
+ device_t dev = sc->dev;
+ int len, j = 0, cnt = 0;
+ enum i40e_status_code status;
/* Get count of MAC addresses to add */
SLIST_FOREACH(f, sc->mac_filters, next) {
@@ -679,21 +589,18 @@
cnt++;
}
if (cnt == 0) { /* Should not happen... */
- DDPRINTF(dev, "cnt == 0, exiting...");
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_ADD_MAC_FILTER,
- VIRTCHNL_STATUS_SUCCESS);
- return;
+ ixlv_dbg_vc(sc, "%s: cnt == 0, exiting...\n", __func__);
+ return (ENOENT);
}
len = sizeof(struct virtchnl_ether_addr_list) +
(cnt * sizeof(struct virtchnl_ether_addr));
- a = malloc(len, M_DEVBUF, M_NOWAIT | M_ZERO);
+ a = malloc(len, M_IXLV, M_NOWAIT | M_ZERO);
if (a == NULL) {
device_printf(dev, "%s: Failed to get memory for "
"virtchnl_ether_addr_list\n", __func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (ENOMEM);
}
a->vsi_id = sc->vsi.id;
a->num_elements = cnt;
@@ -705,7 +612,7 @@
f->flags &= ~IXL_FILTER_ADD;
j++;
- DDPRINTF(dev, "ADD: " MAC_FORMAT,
+ ixlv_dbg_vc(sc, "ADD: " MAC_FORMAT "\n",
MAC_FORMAT_ARGS(f->macaddr));
}
if (j == cnt)
@@ -713,11 +620,12 @@
}
DDPRINTF(dev, "len %d, j %d, cnt %d",
len, j, cnt);
- ixlv_send_pf_msg(sc,
+
+ status = ixlv_send_pf_msg(sc,
VIRTCHNL_OP_ADD_ETH_ADDR, (u8 *)a, len);
/* add stats? */
- free(a, M_DEVBUF);
- return;
+ free(a, M_IXLV);
+ return (status);
}
/*
@@ -725,13 +633,13 @@
** sc MAC filter list and creates an Admin Queue call
** to delete those filters in the hardware.
*/
-void
+int
ixlv_del_ether_filters(struct ixlv_sc *sc)
{
struct virtchnl_ether_addr_list *d;
- device_t dev = sc->dev;
- struct ixlv_mac_filter *f, *f_temp;
- int len, j = 0, cnt = 0;
+ struct ixlv_mac_filter *f, *f_temp;
+ device_t dev = sc->dev;
+ int len, j = 0, cnt = 0;
/* Get count of MAC addresses to delete */
SLIST_FOREACH(f, sc->mac_filters, next) {
@@ -739,21 +647,18 @@
cnt++;
}
if (cnt == 0) {
- DDPRINTF(dev, "cnt == 0, exiting...");
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_DEL_MAC_FILTER,
- VIRTCHNL_STATUS_SUCCESS);
- return;
+ ixlv_dbg_vc(sc, "%s: cnt == 0, exiting...\n", __func__);
+ return (ENOENT);
}
len = sizeof(struct virtchnl_ether_addr_list) +
(cnt * sizeof(struct virtchnl_ether_addr));
- d = malloc(len, M_DEVBUF, M_NOWAIT | M_ZERO);
+ d = malloc(len, M_IXLV, M_NOWAIT | M_ZERO);
if (d == NULL) {
device_printf(dev, "%s: Failed to get memory for "
"virtchnl_ether_addr_list\n", __func__);
- ixl_vc_schedule_retry(&sc->vc_mgr);
- return;
+ return (ENOMEM);
}
d->vsi_id = sc->vsi.id;
d->num_elements = cnt;
@@ -762,11 +667,11 @@
SLIST_FOREACH_SAFE(f, sc->mac_filters, next, f_temp) {
if (f->flags & IXL_FILTER_DEL) {
bcopy(f->macaddr, d->list[j].addr, ETHER_ADDR_LEN);
- DDPRINTF(dev, "DEL: " MAC_FORMAT,
+ ixlv_dbg_vc(sc, "DEL: " MAC_FORMAT "\n",
MAC_FORMAT_ARGS(f->macaddr));
j++;
SLIST_REMOVE(sc->mac_filters, f, ixlv_mac_filter, next);
- free(f, M_DEVBUF);
+ free(f, M_IXLV);
}
if (j == cnt)
break;
@@ -774,15 +679,15 @@
ixlv_send_pf_msg(sc,
VIRTCHNL_OP_DEL_ETH_ADDR, (u8 *)d, len);
/* add stats? */
- free(d, M_DEVBUF);
- return;
+ free(d, M_IXLV);
+ return (0);
}
/*
** ixlv_request_reset
** Request that the PF reset this VF. No response is expected.
*/
-void
+int
ixlv_request_reset(struct ixlv_sc *sc)
{
/*
@@ -792,13 +697,14 @@
*/
wr32(&sc->hw, I40E_VFGEN_RSTAT, VIRTCHNL_VFR_INPROGRESS);
ixlv_send_pf_msg(sc, VIRTCHNL_OP_RESET_VF, NULL, 0);
+ return (0);
}
/*
** ixlv_request_stats
** Request the statistics for this VF's VSI from PF.
*/
-void
+int
ixlv_request_stats(struct ixlv_sc *sc)
{
struct virtchnl_queue_select vqs;
@@ -808,10 +714,10 @@
/* Low priority, we don't need to error check */
error = ixlv_send_pf_msg(sc, VIRTCHNL_OP_GET_STATS,
(u8 *)&vqs, sizeof(vqs));
-#ifdef IXL_DEBUG
if (error)
device_printf(sc->dev, "Error sending stats request to PF: %d\n", error);
-#endif
+
+ return (0);
}
/*
@@ -850,7 +756,7 @@
vsi->eth_stats = *es;
}
-void
+int
ixlv_config_rss_key(struct ixlv_sc *sc)
{
struct virtchnl_rss_key *rss_key_msg;
@@ -867,26 +773,27 @@
/* Send the fetched key */
key_length = IXL_RSS_KEY_SIZE;
msg_len = sizeof(struct virtchnl_rss_key) + (sizeof(u8) * key_length) - 1;
- rss_key_msg = malloc(msg_len, M_DEVBUF, M_NOWAIT | M_ZERO);
+ rss_key_msg = malloc(msg_len, M_IXLV, M_NOWAIT | M_ZERO);
if (rss_key_msg == NULL) {
device_printf(sc->dev, "Unable to allocate msg memory for RSS key msg.\n");
- return;
+ return (ENOMEM);
}
rss_key_msg->vsi_id = sc->vsi_res->vsi_id;
rss_key_msg->key_len = key_length;
bcopy(rss_seed, &rss_key_msg->key[0], key_length);
- DDPRINTF(sc->dev, "config_rss: vsi_id %d, key_len %d",
+ ixlv_dbg_vc(sc, "config_rss: vsi_id %d, key_len %d\n",
rss_key_msg->vsi_id, rss_key_msg->key_len);
ixlv_send_pf_msg(sc, VIRTCHNL_OP_CONFIG_RSS_KEY,
(u8 *)rss_key_msg, msg_len);
- free(rss_key_msg, M_DEVBUF);
+ free(rss_key_msg, M_IXLV);
+ return (0);
}
-void
+int
ixlv_set_rss_hena(struct ixlv_sc *sc)
{
struct virtchnl_rss_hena hena;
@@ -899,9 +806,10 @@
ixlv_send_pf_msg(sc, VIRTCHNL_OP_SET_RSS_HENA,
(u8 *)&hena, sizeof(hena));
+ return (0);
}
-void
+int
ixlv_config_rss_lut(struct ixlv_sc *sc)
{
struct virtchnl_rss_lut *rss_lut_msg;
@@ -912,10 +820,10 @@
lut_length = IXL_RSS_VSI_LUT_SIZE;
msg_len = sizeof(struct virtchnl_rss_lut) + (lut_length * sizeof(u8)) - 1;
- rss_lut_msg = malloc(msg_len, M_DEVBUF, M_NOWAIT | M_ZERO);
+ rss_lut_msg = malloc(msg_len, M_IXLV, M_NOWAIT | M_ZERO);
if (rss_lut_msg == NULL) {
device_printf(sc->dev, "Unable to allocate msg memory for RSS lut msg.\n");
- return;
+ return (ENOMEM);
}
rss_lut_msg->vsi_id = sc->vsi_res->vsi_id;
@@ -942,7 +850,21 @@
ixlv_send_pf_msg(sc, VIRTCHNL_OP_CONFIG_RSS_LUT,
(u8 *)rss_lut_msg, msg_len);
- free(rss_lut_msg, M_DEVBUF);
+ free(rss_lut_msg, M_IXLV);
+ return (0);
+}
+
+int
+ixlv_config_promisc_mode(struct ixlv_sc *sc)
+{
+ struct virtchnl_promisc_info pinfo;
+
+ pinfo.vsi_id = sc->vsi_res->vsi_id;
+ pinfo.flags = sc->promisc_flags;
+
+ ixlv_send_pf_msg(sc, VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE,
+ (u8 *)&pinfo, sizeof(pinfo));
+ return (0);
}
/*
@@ -958,7 +880,10 @@
enum virtchnl_status_code v_retval, u8 *msg, u16 msglen)
{
device_t dev = sc->dev;
- struct ixl_vsi *vsi = &sc->vsi;
+
+ if (v_opcode != VIRTCHNL_OP_GET_STATS)
+ ixlv_dbg_vc(sc, "%s: opcode %s\n", __func__,
+ ixl_vc_opcode_str(v_opcode));
if (v_opcode == VIRTCHNL_OP_EVENT) {
struct virtchnl_pf_event *vpe =
@@ -966,11 +891,9 @@
switch (vpe->event) {
case VIRTCHNL_EVENT_LINK_CHANGE:
-#ifdef IXL_DEBUG
- device_printf(dev, "Link change: status %d, speed %d\n",
+ ixlv_dbg_vc(sc, "Link change: status %d, speed %s\n",
vpe->event_data.link_event.link_status,
- vpe->event_data.link_event.link_speed);
-#endif
+ ixlv_vc_speed_to_string(vpe->event_data.link_event.link_speed));
sc->link_up =
vpe->event_data.link_event.link_status;
sc->link_speed =
@@ -983,8 +906,8 @@
ixlv_if_init(sc->vsi.ctx);
break;
default:
- device_printf(dev, "%s: Unknown event %d from AQ\n",
- __func__, vpe->event);
+ ixlv_dbg_vc(sc, "Unknown event %d from AQ\n",
+ vpe->event);
break;
}
@@ -998,273 +921,87 @@
__func__, i40e_vc_stat_str(&sc->hw, v_retval), ixl_vc_opcode_str(v_opcode));
}
-#ifdef IXL_DEBUG
- if (v_opcode != VIRTCHNL_OP_GET_STATS)
- DDPRINTF(dev, "opcode %d", v_opcode);
-#endif
-
switch (v_opcode) {
case VIRTCHNL_OP_GET_STATS:
ixlv_update_stats_counters(sc, (struct i40e_eth_stats *)msg);
break;
case VIRTCHNL_OP_ADD_ETH_ADDR:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_ADD_MAC_FILTER,
- v_retval);
if (v_retval) {
device_printf(dev, "WARNING: Error adding VF mac filter!\n");
device_printf(dev, "WARNING: Device may not receive traffic!\n");
}
break;
case VIRTCHNL_OP_DEL_ETH_ADDR:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_DEL_MAC_FILTER,
- v_retval);
break;
case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_CONFIGURE_PROMISC,
- v_retval);
break;
case VIRTCHNL_OP_ADD_VLAN:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_ADD_VLAN_FILTER,
- v_retval);
break;
case VIRTCHNL_OP_DEL_VLAN:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_DEL_VLAN_FILTER,
- v_retval);
break;
case VIRTCHNL_OP_ENABLE_QUEUES:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_ENABLE_QUEUES,
- v_retval);
- if (v_retval == 0) {
- /* Update link status */
- ixlv_update_link_status(sc);
- /* Turn on all interrupts */
- ixlv_enable_intr(vsi);
- /* And inform the stack we're ready */
- // vsi->ifp->if_drv_flags |= IFF_DRV_RUNNING;
- /* TODO: Clear a state flag, so we know we're ready to run init again */
- }
break;
case VIRTCHNL_OP_DISABLE_QUEUES:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_DISABLE_QUEUES,
- v_retval);
- if (v_retval == 0) {
- /* Turn off all interrupts */
- ixlv_disable_intr(vsi);
- /* Tell the stack that the interface is no longer active */
- vsi->ifp->if_drv_flags &= ~(IFF_DRV_RUNNING);
- }
break;
case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_CONFIGURE_QUEUES,
- v_retval);
break;
case VIRTCHNL_OP_CONFIG_IRQ_MAP:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_MAP_VECTORS,
- v_retval);
break;
case VIRTCHNL_OP_CONFIG_RSS_KEY:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_CONFIG_RSS_KEY,
- v_retval);
break;
case VIRTCHNL_OP_SET_RSS_HENA:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_SET_RSS_HENA,
- v_retval);
break;
case VIRTCHNL_OP_CONFIG_RSS_LUT:
- ixl_vc_process_resp(&sc->vc_mgr, IXLV_FLAG_AQ_CONFIG_RSS_LUT,
- v_retval);
break;
default:
-#ifdef IXL_DEBUG
- device_printf(dev,
- "%s: Received unexpected message %s from PF.\n",
- __func__, ixl_vc_opcode_str(v_opcode));
-#endif
+ ixlv_dbg_vc(sc,
+ "Received unexpected message %s from PF.\n",
+ ixl_vc_opcode_str(v_opcode));
break;
}
- return;
}
-static void
+int
ixl_vc_send_cmd(struct ixlv_sc *sc, uint32_t request)
{
switch (request) {
case IXLV_FLAG_AQ_MAP_VECTORS:
- ixlv_map_queues(sc);
- break;
+ return ixlv_map_queues(sc);
case IXLV_FLAG_AQ_ADD_MAC_FILTER:
- ixlv_add_ether_filters(sc);
- break;
+ return ixlv_add_ether_filters(sc);
case IXLV_FLAG_AQ_ADD_VLAN_FILTER:
- ixlv_add_vlans(sc);
- break;
+ return ixlv_add_vlans(sc);
case IXLV_FLAG_AQ_DEL_MAC_FILTER:
- ixlv_del_ether_filters(sc);
- break;
+ return ixlv_del_ether_filters(sc);
case IXLV_FLAG_AQ_DEL_VLAN_FILTER:
- ixlv_del_vlans(sc);
- break;
+ return ixlv_del_vlans(sc);
case IXLV_FLAG_AQ_CONFIGURE_QUEUES:
- ixlv_configure_queues(sc);
- break;
+ return ixlv_configure_queues(sc);
case IXLV_FLAG_AQ_DISABLE_QUEUES:
- ixlv_disable_queues(sc);
- break;
+ return ixlv_disable_queues(sc);
case IXLV_FLAG_AQ_ENABLE_QUEUES:
- ixlv_enable_queues(sc);
- break;
+ return ixlv_enable_queues(sc);
case IXLV_FLAG_AQ_CONFIG_RSS_KEY:
- ixlv_config_rss_key(sc);
- break;
+ return ixlv_config_rss_key(sc);
case IXLV_FLAG_AQ_SET_RSS_HENA:
- ixlv_set_rss_hena(sc);
- break;
+ return ixlv_set_rss_hena(sc);
case IXLV_FLAG_AQ_CONFIG_RSS_LUT:
- ixlv_config_rss_lut(sc);
- break;
- }
-}
-
-void
-ixl_vc_init_mgr(struct ixlv_sc *sc, struct ixl_vc_mgr *mgr)
-{
- mgr->sc = sc;
- mgr->current = NULL;
- TAILQ_INIT(&mgr->pending);
- callout_init_mtx(&mgr->callout, &sc->mtx, 0);
-}
-
-static void
-ixl_vc_process_completion(struct ixl_vc_mgr *mgr, enum i40e_status_code err)
-{
- struct ixl_vc_cmd *cmd;
-
- cmd = mgr->current;
- mgr->current = NULL;
- cmd->flags &= ~IXLV_VC_CMD_FLAG_BUSY;
-
- cmd->callback(cmd, cmd->arg, err);
- ixl_vc_process_next(mgr);
-}
+ return ixlv_config_rss_lut(sc);
-static void
-ixl_vc_process_resp(struct ixl_vc_mgr *mgr, uint32_t request,
- enum virtchnl_status_code err)
-{
- struct ixl_vc_cmd *cmd;
-
- cmd = mgr->current;
- if (cmd == NULL || cmd->request != request)
- return;
-
- callout_stop(&mgr->callout);
- /* ATM, the virtchnl codes map to i40e ones directly */
- ixl_vc_process_completion(mgr, (enum i40e_status_code)err);
-}
-
-static void
-ixl_vc_cmd_timeout(void *arg)
-{
- struct ixl_vc_mgr *mgr = (struct ixl_vc_mgr *)arg;
-
- ixl_vc_process_completion(mgr, I40E_ERR_TIMEOUT);
-}
-
-static void
-ixl_vc_cmd_retry(void *arg)
-{
- struct ixl_vc_mgr *mgr = (struct ixl_vc_mgr *)arg;
-
- ixl_vc_send_current(mgr);
-}
-
-static void
-ixl_vc_send_current(struct ixl_vc_mgr *mgr)
-{
- struct ixl_vc_cmd *cmd;
-
- cmd = mgr->current;
- ixl_vc_send_cmd(mgr->sc, cmd->request);
- callout_reset(&mgr->callout, IXLV_VC_TIMEOUT, ixl_vc_cmd_timeout, mgr);
-}
-
-static void
-ixl_vc_process_next(struct ixl_vc_mgr *mgr)
-{
- struct ixl_vc_cmd *cmd;
-
- if (mgr->current != NULL)
- return;
-
- if (TAILQ_EMPTY(&mgr->pending))
- return;
-
- cmd = TAILQ_FIRST(&mgr->pending);
- TAILQ_REMOVE(&mgr->pending, cmd, next);
-
- mgr->current = cmd;
- ixl_vc_send_current(mgr);
-}
-
-static void
-ixl_vc_schedule_retry(struct ixl_vc_mgr *mgr)
-{
-
- callout_reset(&mgr->callout, howmany(hz, 100), ixl_vc_cmd_retry, mgr);
-}
-
-void
-ixl_vc_enqueue(struct ixl_vc_mgr *mgr, struct ixl_vc_cmd *cmd,
- uint32_t req, ixl_vc_callback_t *callback, void *arg)
-{
- if (cmd->flags & IXLV_VC_CMD_FLAG_BUSY) {
- if (mgr->current == cmd)
- mgr->current = NULL;
- else
- TAILQ_REMOVE(&mgr->pending, cmd, next);
+ case IXLV_FLAG_AQ_CONFIGURE_PROMISC:
+ return ixlv_config_promisc_mode(sc);
}
- cmd->request = req;
- cmd->callback = callback;
- cmd->arg = arg;
- cmd->flags |= IXLV_VC_CMD_FLAG_BUSY;
- TAILQ_INSERT_TAIL(&mgr->pending, cmd, next);
-
- ixl_vc_process_next(mgr);
-}
-
-void
-ixl_vc_flush(struct ixl_vc_mgr *mgr)
-{
- struct ixl_vc_cmd *cmd;
-
- KASSERT(TAILQ_EMPTY(&mgr->pending) || mgr->current != NULL,
- ("ixlv: pending commands waiting but no command in progress"));
-
- cmd = mgr->current;
- if (cmd != NULL) {
- mgr->current = NULL;
- cmd->flags &= ~IXLV_VC_CMD_FLAG_BUSY;
- cmd->callback(cmd, cmd->arg, I40E_ERR_ADAPTER_STOPPED);
- }
-
- while ((cmd = TAILQ_FIRST(&mgr->pending)) != NULL) {
- TAILQ_REMOVE(&mgr->pending, cmd, next);
- cmd->flags &= ~IXLV_VC_CMD_FLAG_BUSY;
- cmd->callback(cmd, cmd->arg, I40E_ERR_ADAPTER_STOPPED);
- }
-
- callout_stop(&mgr->callout);
+ return (0);
}
-
Index: sys/modules/Makefile
===================================================================
--- sys/modules/Makefile
+++ sys/modules/Makefile
@@ -205,6 +205,7 @@
${_ix} \
${_ixv} \
${_ixl} \
+ ${_ixlv} \
jme \
joy \
kbdmux \
Index: sys/modules/ixl/Makefile
===================================================================
--- sys/modules/ixl/Makefile
+++ sys/modules/ixl/Makefile
@@ -6,7 +6,7 @@
SRCS = device_if.h bus_if.h pci_if.h ifdi_if.h
SRCS += opt_inet.h opt_inet6.h opt_rss.h opt_ixl.h opt_iflib.h
SRCS += if_ixl.c ixl_pf_main.c ixl_pf_qmgr.c ixl_txrx.c ixl_pf_i2c.c i40e_osdep.c
-SRCS.PCI_IOV = pci_iov_if.h ixl_pf_iov.c
+SRCS.PCI_IOV += pci_iov_if.h ixl_pf_iov.c
# Shared source
SRCS += i40e_common.c i40e_nvm.c i40e_adminq.c i40e_lan_hmc.c i40e_hmc.c i40e_dcb.c
@@ -14,7 +14,11 @@
# Debug messages / sysctls
# CFLAGS += -DIXL_DEBUG
-#CFLAGS += -DIXL_IW
-#SRCS += ixl_iw.c
+# Enable asserts and other debugging facilities
+# CFLAGS += -DINVARIANTS -DINVARIANTS_SUPPORT -DWITNESS
+
+# Enable iWARP client interface
+# CFLAGS += -DIXL_IW
+# SRCS += ixl_iw.c
.include <bsd.kmod.mk>
Index: sys/modules/ixlv/Makefile
===================================================================
--- sys/modules/ixlv/Makefile
+++ sys/modules/ixlv/Makefile
@@ -4,7 +4,7 @@
KMOD = if_ixlv
SRCS = device_if.h bus_if.h pci_if.h ifdi_if.h
-SRCS += opt_inet.h opt_inet6.h opt_rss.h opt_ixl.h opt_iflib.h
+SRCS += opt_inet.h opt_inet6.h opt_rss.h opt_ixl.h opt_iflib.h opt_global.h
SRCS += if_ixlv.c ixlvc.c ixl_txrx.c i40e_osdep.c
# Shared source
@@ -12,5 +12,7 @@
# Debug messages / sysctls
# CFLAGS += -DIXL_DEBUG
+# Enable asserts and other debugging facilities
+# CFLAGS += -DINVARIANTS -DINVARIANTS_SUPPORT -DWITNESS
.include <bsd.kmod.mk>
Index: sys/net/iflib.h
===================================================================
--- sys/net/iflib.h
+++ sys/net/iflib.h
@@ -246,7 +246,7 @@
/* fields necessary for probe */
pci_vendor_info_t *isc_vendor_info;
char *isc_driver_version;
-/* optional function to transform the read values to match the table*/
+ /* optional function to transform the read values to match the table*/
void (*isc_parse_devinfo) (uint16_t *device_id, uint16_t *subvendor_id,
uint16_t *subdevice_id, uint16_t *rev_id);
int isc_nrxd_min[8];
@@ -375,6 +375,8 @@
if_shared_ctx_t iflib_get_sctx(if_ctx_t ctx);
void iflib_set_mac(if_ctx_t ctx, uint8_t mac[ETHER_ADDR_LEN]);
+void iflib_request_reset(if_ctx_t ctx);
+uint8_t iflib_in_detach(if_ctx_t ctx);
/*
* If the driver can plug cleanly in to newbus use these
Index: sys/net/iflib.c
===================================================================
--- sys/net/iflib.c
+++ sys/net/iflib.c
@@ -101,6 +101,10 @@
#include <x86/iommu/busdma_dmar.h>
#endif
+#ifdef PCI_IOV
+#include <dev/pci/pci_iov.h>
+#endif
+
#include <sys/bitstring.h>
/*
* enable accounting of every mbuf as it comes in to and goes out of
@@ -157,9 +161,9 @@
struct iflib_ctx {
KOBJ_FIELDS;
- /*
- * Pointer to hardware driver's softc
- */
+ /*
+ * Pointer to hardware driver's softc
+ */
void *ifc_softc;
device_t ifc_dev;
if_t ifc_ifp;
@@ -178,7 +182,7 @@
uint32_t ifc_if_flags;
uint32_t ifc_flags;
uint32_t ifc_max_fl_buf_size;
- int ifc_in_detach;
+ uint32_t ifc_in_detach;
int ifc_link_state;
int ifc_link_irq;
@@ -256,7 +260,13 @@
void
iflib_set_detach(if_ctx_t ctx)
{
- ctx->ifc_in_detach = 1;
+ atomic_store_rel_32(&ctx->ifc_in_detach, 1);
+}
+
+uint8_t
+iflib_in_detach(if_ctx_t ctx)
+{
+ return (atomic_load_acq_32(&ctx->ifc_in_detach) != 0);
}
void
@@ -3866,8 +3876,9 @@
ctx->ifc_flags &= ~(IFC_DO_RESET|IFC_DO_WATCHDOG);
STATE_UNLOCK(ctx);
- if ((!running & !oactive) &&
- !(ctx->ifc_sctx->isc_flags & IFLIB_ADMIN_ALWAYS_RUN))
+ if ((!running & !oactive) && !(ctx->ifc_sctx->isc_flags & IFLIB_ADMIN_ALWAYS_RUN))
+ return;
+ if (iflib_in_detach(ctx))
return;
CTX_LOCK(ctx);
@@ -3906,7 +3917,8 @@
{
if_ctx_t ctx = context;
- if (!(if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING))
+ if (!(if_getdrvflags(ctx->ifc_ifp) & IFF_DRV_RUNNING) &&
+ !(ctx->ifc_sctx->isc_flags & IFLIB_ADMIN_ALWAYS_RUN))
return;
CTX_LOCK(ctx);
@@ -4678,17 +4690,17 @@
ctx->ifc_flags |= IFC_INIT_DONE;
CTX_UNLOCK(ctx);
return (0);
+
fail_detach:
ether_ifdetach(ctx->ifc_ifp);
fail_intr_free:
- if (scctx->isc_intr == IFLIB_INTR_MSIX || scctx->isc_intr == IFLIB_INTR_MSI)
- pci_release_msi(ctx->ifc_dev);
fail_queues:
iflib_tx_structures_free(ctx);
iflib_rx_structures_free(ctx);
fail:
IFDI_DETACH(ctx);
CTX_UNLOCK(ctx);
+
return (err);
}
@@ -4973,12 +4985,19 @@
/* Make sure VLANS are not using driver */
if (if_vlantrunkinuse(ifp)) {
- device_printf(dev,"Vlan in use, detach first\n");
+ device_printf(dev, "Vlan in use, detach first\n");
+ return (EBUSY);
+ }
+#ifdef PCI_IOV
+ if (!CTX_IS_VF(ctx) && pci_iov_detach(dev) != 0) {
+ device_printf(dev, "SR-IOV in use; detach first.\n");
return (EBUSY);
}
+#endif
+
+ iflib_set_detach(ctx);
CTX_LOCK(ctx);
- ctx->ifc_in_detach = 1;
iflib_stop(ctx);
CTX_UNLOCK(ctx);
@@ -5213,7 +5232,7 @@
CTX_LOCK_INIT(ctx);
STATE_LOCK_INIT(ctx, device_get_nameunit(ctx->ifc_dev));
- ifp = ctx->ifc_ifp = if_gethandle(IFT_ETHER);
+ ifp = ctx->ifc_ifp = if_alloc(IFT_ETHER);
if (ifp == NULL) {
device_printf(dev, "can not allocate ifnet structure\n");
return (ENOMEM);
@@ -5397,7 +5416,7 @@
fl[j].ifl_ifdi = &rxq->ifr_ifdi[j + rxq->ifr_fl_offset];
fl[j].ifl_rxd_size = scctx->isc_rxd_size[j];
}
- /* Allocate receive buffers for the ring*/
+ /* Allocate receive buffers for the ring */
if (iflib_rxsd_alloc(rxq)) {
device_printf(dev,
"Critical Failure setting up receive buffers\n");
@@ -5552,6 +5571,8 @@
for (int i = 0; i < ctx->ifc_softc_ctx.isc_nrxqsets; i++, rxq++) {
iflib_rx_sds_free(rxq);
}
+ free(ctx->ifc_rxqs, M_IFLIB);
+ ctx->ifc_rxqs = NULL;
}
static int
@@ -5812,7 +5833,7 @@
}
void
-iflib_softirq_alloc_generic(if_ctx_t ctx, if_irq_t irq, iflib_intr_type_t type, void *arg, int qid, const char *name)
+iflib_softirq_alloc_generic(if_ctx_t ctx, if_irq_t irq, iflib_intr_type_t type, void *arg, int qid, const char *name)
{
struct grouptask *gtask;
struct taskqgroup *tqg;
@@ -6130,8 +6151,9 @@
if (ctx->ifc_sysctl_qs_eq_override == 0) {
#ifdef INVARIANTS
if (tx_queues != rx_queues)
- device_printf(dev, "queue equality override not set, capping rx_queues at %d and tx_queues at %d\n",
- min(rx_queues, tx_queues), min(rx_queues, tx_queues));
+ device_printf(dev,
+ "queue equality override not set, capping rx_queues at %d and tx_queues at %d\n",
+ min(rx_queues, tx_queues), min(rx_queues, tx_queues));
#endif
tx_queues = min(rx_queues, tx_queues);
rx_queues = min(rx_queues, tx_queues);
@@ -6141,8 +6163,7 @@
vectors = rx_queues + admincnt;
if ((err = pci_alloc_msix(dev, &vectors)) == 0) {
- device_printf(dev,
- "Using MSIX interrupts with %d vectors\n", vectors);
+ device_printf(dev, "Using MSIX interrupts with %d vectors\n", vectors);
scctx->isc_vectors = vectors;
scctx->isc_nrxqsets = rx_queues;
scctx->isc_ntxqsets = tx_queues;
@@ -6150,7 +6171,8 @@
return (vectors);
} else {
- device_printf(dev, "failed to allocate %d msix vectors, err: %d - using MSI\n", vectors, err);
+ device_printf(dev,
+ "failed to allocate %d msix vectors, err: %d - using MSI\n", vectors, err);
bus_release_resource(dev, SYS_RES_MEMORY, bar,
ctx->ifc_msix_mem);
ctx->ifc_msix_mem = NULL;
@@ -6461,6 +6483,15 @@
}
+void
+iflib_request_reset(if_ctx_t ctx)
+{
+
+ STATE_LOCK(ctx);
+ ctx->ifc_flags |= IFC_DO_RESET;
+ STATE_UNLOCK(ctx);
+}
+
#ifndef __NO_STRICT_ALIGNMENT
static struct mbuf *
iflib_fixup_rx(struct mbuf *m)
D16429: ixlv(4): Update to use iflib; change name to iavf(4)