Index: stable/12/share/man/man4/ena.4
===================================================================
--- stable/12/share/man/man4/ena.4 (revision 368011)
+++ stable/12/share/man/man4/ena.4 (revision 368012)
@@ -1,279 +1,281 @@
-.\" Copyright (c) 2015-2017 Amazon.com, Inc. or its affiliates.
+.\" SPDX-License-Identifier: BSD-2-Clause
+.\"
+.\" Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\"
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\"
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in
.\" the documentation and/or other materials provided with the
.\" distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
.\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
.\" OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
.\" SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
.\" LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
.\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
.\" OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd August 16, 2017
.Dt ENA 4
.Os
.Sh NAME
.Nm ena
.Nd "FreeBSD kernel driver for Elastic Network Adapter (ENA) family"
.Sh SYNOPSIS
To compile this driver into the kernel,
place the following line in the kernel configuration file:
.Bd -ragged -offset indent
.Cd "device ena"
.Ed
.Pp
Alternatively, to load the driver as a module at boot time,
place the following line in
.Xr loader.conf 5 :
.Bd -literal -offset indent
if_ena_load="YES"
.Ed
.Sh DESCRIPTION
The ENA is a networking interface designed to make good use of modern CPU
features and system architectures.
.Pp
The ENA device exposes a lightweight management interface with a minimal
set of memory mapped registers and extendable command set through an
Admin Queue.
.Pp
The driver supports a range of ENA devices, is link-speed independent
(i.e., the same driver is used for 10GbE, 25GbE, 40GbE, etc.),
and has a negotiated and extendable feature set.
.Pp
Some ENA devices support SR-IOV.
This driver is used for both the SR-IOV Physical Function (PF) and
Virtual Function (VF) devices.
.Pp
The ENA devices enable high speed and low overhead network traffic
processing by providing multiple Tx/Rx queue pairs (the maximum number is
advertised by the device via the Admin Queue), a dedicated MSI-X
interrupt vector per Tx/Rx queue pair, and CPU cacheline optimized
data placement.
.Pp
The
.Nm
driver supports industry standard TCP/IP offload features such as checksum
offload and TCP transmit segmentation offload (TSO).
Receive-side scaling (RSS) is supported for multi-core scaling.
.Pp
The
.Nm
driver and its corresponding devices implement health monitoring
mechanisms such as watchdog, enabling the device and driver to recover
in a manner transparent to the application, as well as debug logs.
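As an illustration of the attach behavior described above (not part of the manual page itself), a minimal userland sketch can confirm that an ena(4) instance attached by querying the generic newbus description sysctl with sysctlbyname(3); the unit number 0 and the node name dev.ena.0.%desc are assumptions for illustration:

    /*
     * Hypothetical check that the ena(4) driver attached a device.
     * dev.ena.0.%desc is the generic newbus description node that
     * exists for any attached device instance; unit 0 is assumed.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        char desc[128];
        size_t len = sizeof(desc);

        if (sysctlbyname("dev.ena.0.%desc", desc, &len, NULL, 0) == -1) {
            perror("dev.ena.0.%desc");
            return (1);
        }
        printf("ena0 attached: %s\n", desc);
        return (0);
    }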
.Pp
Some of the ENA devices support a working mode called Low-latency
Queue (LLQ), which saves several more microseconds.
This feature will be implemented in the driver in future releases.
.Sh HARDWARE
Supported PCI vendor ID/device IDs:
.Pp
.Bl -bullet -compact
.It
1d0f:0ec2 - ENA PF
.It
1d0f:1ec2 - ENA PF with LLQ support
.It
1d0f:ec20 - ENA VF
.It
1d0f:ec21 - ENA VF with LLQ support
.El
.Sh DIAGNOSTICS
.Ss Device initialization phase:
.Bl -diag
.It ena%d: failed to init mmio read less
.Pp
Error occurred during initialization of the MMIO register read request.
.It ena%d: Can not reset device
.Pp
Device could not be reset.
.br
Device may not be responding or a reset may already be in progress.
.It ena%d: device version is too low
.Pp
Version of the controller is too old and it is not supported by the driver.
.It ena%d: Invalid dma width value %d
.Pp
The controller advertises the supported DMA transaction width.
.br
Device stopped responding or it reported an invalid value.
.It ena%d: Can not initialize ena admin queue with device
.Pp
Initialization of the Admin Queue failed.
.br
Device may not be responding or there was a problem with initialization of
the resources.
.It ena%d: Cannot get attribute for ena device rc: %d
.Pp
Failed to get attributes of the device from the controller.
.It ena%d: Cannot configure aenq groups rc: %d
.Pp
Errors occurred when trying to configure AENQ groups.
.El
.Ss Driver initialization/shutdown phase:
.Bl -diag
.It ena%d: PCI resource allocation failed!
.It ena%d: allocating ena_dev failed
.It ena%d: failed to pmap registers bar
.It ena%d: Error while setting up bufring
.It ena%d: Error with initialization of IO rings
.It ena%d: can not allocate ifnet structure
.It ena%d: Error with network interface setup
.It ena%d: Failed to enable and set the admin interrupts
.It ena%d: Failed to allocate %d, vectors %d
.It ena%d: Failed to enable MSIX, vectors %d rc %d
.It ena%d: Error with MSI-X enablement
.It ena%d: could not allocate irq vector: %d
.It ena%d: Unable to allocate bus resource: registers
.Pp
Resource allocation failed when initializing the device.
.br
Driver will not be attached.
.It ena%d: ENA device init failed (err: %d)
.Pp
Device initialization failed.
.br
Driver will not be attached.
.It ena%d: could not activate irq vector: %d
.Pp
Error occurred when trying to activate interrupt vectors for Admin Queue.
.It ena%d: failed to register interrupt handler for irq %ju: %d
.Pp
Error occurred when trying to register Admin Queue interrupt handler.
.It ena%d: Cannot setup mgmnt queue intr
.Pp
Error occurred during configuration of the Admin Queue interrupts.
.It ena%d: Enable MSI-X failed
.Pp
Configuration of the MSI-X for Admin Queue failed.
.br
There could be a lack of resources, or the interrupts could not be configured.
.br
Driver will not be attached.
.It ena%d: VLAN is in use, detach first
.Pp
VLANs are being used when trying to detach the driver.
.br
VLANs must be detached first, and then the detach routine has to be
called again.
.It ena%d: Unmapped RX DMA tag associations
.It ena%d: Unmapped TX DMA tag associations
.Pp
Error occurred when trying to destroy RX/TX DMA tag.
.It ena%d: Cannot init RSS
.It ena%d: Cannot fill indirect table
.It ena%d: Cannot fill hash function
.It ena%d: Cannot fill hash control
.It ena%d: WARNING: RSS was not properly initialized, it will affect bandwidth
.Pp
Error occurred during initialization of one of the RSS resources.
.br
The device will work with reduced performance because all RX packets will be
passed to queue 0 and there will be no hash information.
.It ena%d: failed to tear down irq: %d
.It ena%d: dev has no parent while releasing res for irq: %d
.Pp
Release of the interrupts failed.
.El
.Ss Additional diagnostics:
.Bl -diag
.It ena%d: Cannot get attribute for ena device
.Pp
This message appears when trying to change the MTU and the driver is unable
to get attributes from the device.
.It ena%d: Invalid MTU setting. new_mtu: %d
.Pp
Requested MTU value is not supported and will not be set.
.It ena%d: keep alive watchdog timeout
.Pp
Device stopped responding and will be reset.
.It ena%d: Found a Tx that wasn't completed on time, qid %d, index %d.
.Pp
Packet was pushed to the NIC but not sent within the given time limit.
.br
It may be caused by a hang of the IO queue.
.It ena%d: The number of lost tx completion is above the threshold (%d > %d). Reset the device
.Pp
If too many Tx packets were not completed on time, the device is going to be
reset.
.br
It may be caused by a hung queue or device.
.It ena%d: trigger reset is on
.Pp
Device will be reset.
.br
Reset is triggered either by the watchdog or if too many TX packets were
not completed on time.
.It ena%d: invalid value recvd
.Pp
Link status received from the device in the AENQ handler is invalid.
.It ena%d: Allocation for Tx Queue %u failed
.It ena%d: Allocation for Rx Queue %u failed
.It ena%d: Unable to create Rx DMA map for buffer %d
.It ena%d: Failed to create io TX queue #%d rc: %d
.It ena%d: Failed to get TX queue handlers. TX queue num %d rc: %d
.It ena%d: Failed to create io RX queue[%d] rc: %d
.It ena%d: Failed to get RX queue handlers. RX queue num %d rc: %d
.It ena%d: failed to request irq
.It ena%d: could not allocate irq vector: %d
.It ena%d: failed to register interrupt handler for irq %ju: %d
.Pp
IO resources initialization failed.
.br
Interface will not be brought up.
.It ena%d: LRO[%d] Initialization failed!
.Pp
Initialization of the LRO for the RX ring failed.
.It ena%d: failed to alloc buffer for rx queue
.It ena%d: failed to add buffer for rx queue %d
.It ena%d: refilled rx queue %d with %d pages only
.Pp
Allocation of resources used on the RX path failed.
.br
If this happened during initialization of the IO queue, the interface will
not be brought up.
.It ena%d: ioctl promisc/allmulti
.Pp
IOCTL request for the device to work in promiscuous/allmulti mode.
.br
See
.Xr ifconfig 8
for more details.
.It ena%d: too many fragments. Last fragment: %d!
.Pp
Packet with an unsupported number of segments was queued for sending to the
device.
.br
Packet will be dropped.
.El
.Sh SUPPORT
If an issue is identified with the released source code on a supported
adapter, please email the specific information related to the issue to
.Aq Mt mk@semihalf.com
and
.Aq Mt mw@semihalf.com .
.Sh SEE ALSO
.Xr vlan 4 ,
.Xr ifconfig 8
.Sh AUTHORS
The
.Nm
driver was written by
.An Semihalf.
Index: stable/12/sys/contrib/ena-com/ena_com.c
===================================================================
--- stable/12/sys/contrib/ena-com/ena_com.c (revision 368011)
+++ stable/12/sys/contrib/ena-com/ena_com.c (revision 368012)
@@ -1,3070 +1,3107 @@
 /*-
- * BSD LICENSE
+ * SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
  * All rights reserved.
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "ena_com.h" /*****************************************************************************/ /*****************************************************************************/ /* Timeout in micro-sec */ #define ADMIN_CMD_TIMEOUT_US (3000000) #define ENA_ASYNC_QUEUE_DEPTH 16 #define ENA_ADMIN_QUEUE_DEPTH 32 #ifdef ENA_EXTENDED_STATS #define ENA_HISTOGRAM_ACTIVE_MASK_OFFSET 0xF08 #define ENA_EXTENDED_STAT_GET_FUNCT(_funct_queue) (_funct_queue & 0xFFFF) #define ENA_EXTENDED_STAT_GET_QUEUE(_funct_queue) (_funct_queue >> 16) #endif /* ENA_EXTENDED_STATS */ #define ENA_CTRL_MAJOR 0 #define ENA_CTRL_MINOR 0 #define ENA_CTRL_SUB_MINOR 1 #define MIN_ENA_CTRL_VER \ (((ENA_CTRL_MAJOR) << \ (ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT)) | \ ((ENA_CTRL_MINOR) << \ (ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT)) | \ (ENA_CTRL_SUB_MINOR)) #define ENA_DMA_ADDR_TO_UINT32_LOW(x) ((u32)((u64)(x))) #define ENA_DMA_ADDR_TO_UINT32_HIGH(x) ((u32)(((u64)(x)) >> 32)) #define ENA_MMIO_READ_TIMEOUT 0xFFFFFFFF #define ENA_COM_BOUNCE_BUFFER_CNTRL_CNT 4 #define ENA_REGS_ADMIN_INTR_MASK 1 -#define ENA_MIN_POLL_US 100 +#define ENA_MIN_ADMIN_POLL_US 100 -#define ENA_MAX_POLL_US 5000 +#define ENA_MAX_ADMIN_POLL_US 5000 /*****************************************************************************/ /*****************************************************************************/ /*****************************************************************************/ enum ena_cmd_status { ENA_CMD_SUBMITTED, ENA_CMD_COMPLETED, /* Abort - canceled by the driver */ ENA_CMD_ABORTED, }; struct ena_comp_ctx { ena_wait_event_t wait_event; struct ena_admin_acq_entry *user_cqe; u32 comp_size; enum ena_cmd_status status; /* status from the device */ u8 comp_status; u8 cmd_opcode; bool occupied; }; struct ena_com_stats_ctx { struct ena_admin_aq_get_stats_cmd get_cmd; struct ena_admin_acq_get_stats_resp get_resp; }; static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev, struct ena_common_mem_addr *ena_addr, dma_addr_t addr) { if ((addr & GENMASK_ULL(ena_dev->dma_addr_bits - 1, 0)) != 
addr) { - ena_trc_err("dma address has more bits that the device supports\n"); + ena_trc_err(ena_dev, "DMA address has more bits that the device supports\n"); return ENA_COM_INVAL; } ena_addr->mem_addr_low = lower_32_bits(addr); ena_addr->mem_addr_high = (u16)upper_32_bits(addr); return 0; } -static int ena_com_admin_init_sq(struct ena_com_admin_queue *queue) +static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue) { - struct ena_com_admin_sq *sq = &queue->sq; - u16 size = ADMIN_SQ_SIZE(queue->q_depth); + struct ena_com_dev *ena_dev = admin_queue->ena_dev; + struct ena_com_admin_sq *sq = &admin_queue->sq; + u16 size = ADMIN_SQ_SIZE(admin_queue->q_depth); - ENA_MEM_ALLOC_COHERENT(queue->q_dmadev, size, sq->entries, sq->dma_addr, + ENA_MEM_ALLOC_COHERENT(admin_queue->q_dmadev, size, sq->entries, sq->dma_addr, sq->mem_handle); if (!sq->entries) { - ena_trc_err("memory allocation failed\n"); + ena_trc_err(ena_dev, "Memory allocation failed\n"); return ENA_COM_NO_MEM; } sq->head = 0; sq->tail = 0; sq->phase = 1; sq->db_addr = NULL; return 0; } -static int ena_com_admin_init_cq(struct ena_com_admin_queue *queue) +static int ena_com_admin_init_cq(struct ena_com_admin_queue *admin_queue) { - struct ena_com_admin_cq *cq = &queue->cq; - u16 size = ADMIN_CQ_SIZE(queue->q_depth); + struct ena_com_dev *ena_dev = admin_queue->ena_dev; + struct ena_com_admin_cq *cq = &admin_queue->cq; + u16 size = ADMIN_CQ_SIZE(admin_queue->q_depth); - ENA_MEM_ALLOC_COHERENT(queue->q_dmadev, size, cq->entries, cq->dma_addr, + ENA_MEM_ALLOC_COHERENT(admin_queue->q_dmadev, size, cq->entries, cq->dma_addr, cq->mem_handle); if (!cq->entries) { - ena_trc_err("memory allocation failed\n"); + ena_trc_err(ena_dev, "Memory allocation failed\n"); return ENA_COM_NO_MEM; } cq->head = 0; cq->phase = 1; return 0; } -static int ena_com_admin_init_aenq(struct ena_com_dev *dev, +static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev, struct ena_aenq_handlers *aenq_handlers) { - struct ena_com_aenq *aenq = &dev->aenq; + struct ena_com_aenq *aenq = &ena_dev->aenq; u32 addr_low, addr_high, aenq_caps; u16 size; - dev->aenq.q_depth = ENA_ASYNC_QUEUE_DEPTH; + ena_dev->aenq.q_depth = ENA_ASYNC_QUEUE_DEPTH; size = ADMIN_AENQ_SIZE(ENA_ASYNC_QUEUE_DEPTH); - ENA_MEM_ALLOC_COHERENT(dev->dmadev, size, + ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, size, aenq->entries, aenq->dma_addr, aenq->mem_handle); if (!aenq->entries) { - ena_trc_err("memory allocation failed\n"); + ena_trc_err(ena_dev, "Memory allocation failed\n"); return ENA_COM_NO_MEM; } aenq->head = aenq->q_depth; aenq->phase = 1; addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(aenq->dma_addr); addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(aenq->dma_addr); - ENA_REG_WRITE32(dev->bus, addr_low, dev->reg_bar + ENA_REGS_AENQ_BASE_LO_OFF); - ENA_REG_WRITE32(dev->bus, addr_high, dev->reg_bar + ENA_REGS_AENQ_BASE_HI_OFF); + ENA_REG_WRITE32(ena_dev->bus, addr_low, ena_dev->reg_bar + ENA_REGS_AENQ_BASE_LO_OFF); + ENA_REG_WRITE32(ena_dev->bus, addr_high, ena_dev->reg_bar + ENA_REGS_AENQ_BASE_HI_OFF); aenq_caps = 0; - aenq_caps |= dev->aenq.q_depth & ENA_REGS_AENQ_CAPS_AENQ_DEPTH_MASK; + aenq_caps |= ena_dev->aenq.q_depth & ENA_REGS_AENQ_CAPS_AENQ_DEPTH_MASK; aenq_caps |= (sizeof(struct ena_admin_aenq_entry) << ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_SHIFT) & ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_MASK; - ENA_REG_WRITE32(dev->bus, aenq_caps, dev->reg_bar + ENA_REGS_AENQ_CAPS_OFF); + ENA_REG_WRITE32(ena_dev->bus, aenq_caps, ena_dev->reg_bar + ENA_REGS_AENQ_CAPS_OFF); if (unlikely(!aenq_handlers)) { - 
ena_trc_err("aenq handlers pointer is NULL\n"); + ena_trc_err(ena_dev, "AENQ handlers pointer is NULL\n"); return ENA_COM_INVAL; } aenq->aenq_handlers = aenq_handlers; return 0; } static void comp_ctxt_release(struct ena_com_admin_queue *queue, struct ena_comp_ctx *comp_ctx) { comp_ctx->occupied = false; ATOMIC32_DEC(&queue->outstanding_cmds); } -static struct ena_comp_ctx *get_comp_ctxt(struct ena_com_admin_queue *queue, +static struct ena_comp_ctx *get_comp_ctxt(struct ena_com_admin_queue *admin_queue, u16 command_id, bool capture) { - if (unlikely(command_id >= queue->q_depth)) { - ena_trc_err("command id is larger than the queue size. cmd_id: %u queue size %d\n", - command_id, queue->q_depth); + if (unlikely(command_id >= admin_queue->q_depth)) { + ena_trc_err(admin_queue->ena_dev, + "Command id is larger than the queue size. cmd_id: %u queue size %d\n", + command_id, admin_queue->q_depth); return NULL; } - if (unlikely(!queue->comp_ctx)) { - ena_trc_err("Completion context is NULL\n"); + if (unlikely(!admin_queue->comp_ctx)) { + ena_trc_err(admin_queue->ena_dev, + "Completion context is NULL\n"); return NULL; } - if (unlikely(queue->comp_ctx[command_id].occupied && capture)) { - ena_trc_err("Completion context is occupied\n"); + if (unlikely(admin_queue->comp_ctx[command_id].occupied && capture)) { + ena_trc_err(admin_queue->ena_dev, + "Completion context is occupied\n"); return NULL; } if (capture) { - ATOMIC32_INC(&queue->outstanding_cmds); - queue->comp_ctx[command_id].occupied = true; + ATOMIC32_INC(&admin_queue->outstanding_cmds); + admin_queue->comp_ctx[command_id].occupied = true; } - return &queue->comp_ctx[command_id]; + return &admin_queue->comp_ctx[command_id]; } static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue, struct ena_admin_aq_entry *cmd, size_t cmd_size_in_bytes, struct ena_admin_acq_entry *comp, size_t comp_size_in_bytes) { struct ena_comp_ctx *comp_ctx; u16 tail_masked, cmd_id; u16 queue_size_mask; u16 cnt; queue_size_mask = admin_queue->q_depth - 1; tail_masked = admin_queue->sq.tail & queue_size_mask; /* In case of queue FULL */ cnt = (u16)ATOMIC32_READ(&admin_queue->outstanding_cmds); if (cnt >= admin_queue->q_depth) { - ena_trc_dbg("admin queue is full.\n"); + ena_trc_dbg(admin_queue->ena_dev, "Admin queue is full.\n"); admin_queue->stats.out_of_space++; return ERR_PTR(ENA_COM_NO_SPACE); } cmd_id = admin_queue->curr_cmd_id; cmd->aq_common_descriptor.flags |= admin_queue->sq.phase & ENA_ADMIN_AQ_COMMON_DESC_PHASE_MASK; cmd->aq_common_descriptor.command_id |= cmd_id & ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK; comp_ctx = get_comp_ctxt(admin_queue, cmd_id, true); if (unlikely(!comp_ctx)) return ERR_PTR(ENA_COM_INVAL); comp_ctx->status = ENA_CMD_SUBMITTED; comp_ctx->comp_size = (u32)comp_size_in_bytes; comp_ctx->user_cqe = comp; comp_ctx->cmd_opcode = cmd->aq_common_descriptor.opcode; ENA_WAIT_EVENT_CLEAR(comp_ctx->wait_event); memcpy(&admin_queue->sq.entries[tail_masked], cmd, cmd_size_in_bytes); admin_queue->curr_cmd_id = (admin_queue->curr_cmd_id + 1) & queue_size_mask; admin_queue->sq.tail++; admin_queue->stats.submitted_cmd++; if (unlikely((admin_queue->sq.tail & queue_size_mask) == 0)) admin_queue->sq.phase = !admin_queue->sq.phase; ENA_DB_SYNC(&admin_queue->sq.mem_handle); ENA_REG_WRITE32(admin_queue->bus, admin_queue->sq.tail, admin_queue->sq.db_addr); return comp_ctx; } -static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *queue) +static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *admin_queue) 
{ - size_t size = queue->q_depth * sizeof(struct ena_comp_ctx); + struct ena_com_dev *ena_dev = admin_queue->ena_dev; + size_t size = admin_queue->q_depth * sizeof(struct ena_comp_ctx); struct ena_comp_ctx *comp_ctx; u16 i; - queue->comp_ctx = ENA_MEM_ALLOC(queue->q_dmadev, size); - if (unlikely(!queue->comp_ctx)) { - ena_trc_err("memory allocation failed\n"); + admin_queue->comp_ctx = ENA_MEM_ALLOC(admin_queue->q_dmadev, size); + if (unlikely(!admin_queue->comp_ctx)) { + ena_trc_err(ena_dev, "Memory allocation failed\n"); return ENA_COM_NO_MEM; } - for (i = 0; i < queue->q_depth; i++) { - comp_ctx = get_comp_ctxt(queue, i, false); + for (i = 0; i < admin_queue->q_depth; i++) { + comp_ctx = get_comp_ctxt(admin_queue, i, false); if (comp_ctx) ENA_WAIT_EVENT_INIT(comp_ctx->wait_event); } return 0; } static struct ena_comp_ctx *ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue, struct ena_admin_aq_entry *cmd, size_t cmd_size_in_bytes, struct ena_admin_acq_entry *comp, size_t comp_size_in_bytes) { unsigned long flags = 0; struct ena_comp_ctx *comp_ctx; ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); if (unlikely(!admin_queue->running_state)) { ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); return ERR_PTR(ENA_COM_NO_DEVICE); } comp_ctx = __ena_com_submit_admin_cmd(admin_queue, cmd, cmd_size_in_bytes, comp, comp_size_in_bytes); if (IS_ERR(comp_ctx)) admin_queue->running_state = false; ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); return comp_ctx; } static int ena_com_init_io_sq(struct ena_com_dev *ena_dev, struct ena_com_create_io_ctx *ctx, struct ena_com_io_sq *io_sq) { size_t size; int dev_node = 0; memset(&io_sq->desc_addr, 0x0, sizeof(io_sq->desc_addr)); io_sq->dma_addr_bits = (u8)ena_dev->dma_addr_bits; io_sq->desc_entry_size = (io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) ? 
sizeof(struct ena_eth_io_tx_desc) : sizeof(struct ena_eth_io_rx_desc); size = io_sq->desc_entry_size * io_sq->q_depth; io_sq->bus = ena_dev->bus; if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST) { ENA_MEM_ALLOC_COHERENT_NODE(ena_dev->dmadev, size, io_sq->desc_addr.virt_addr, io_sq->desc_addr.phys_addr, io_sq->desc_addr.mem_handle, ctx->numa_node, dev_node); if (!io_sq->desc_addr.virt_addr) { ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, size, io_sq->desc_addr.virt_addr, io_sq->desc_addr.phys_addr, io_sq->desc_addr.mem_handle); } if (!io_sq->desc_addr.virt_addr) { - ena_trc_err("memory allocation failed\n"); + ena_trc_err(ena_dev, "Memory allocation failed\n"); return ENA_COM_NO_MEM; } } if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { /* Allocate bounce buffers */ io_sq->bounce_buf_ctrl.buffer_size = ena_dev->llq_info.desc_list_entry_size; io_sq->bounce_buf_ctrl.buffers_num = ENA_COM_BOUNCE_BUFFER_CNTRL_CNT; io_sq->bounce_buf_ctrl.next_to_use = 0; size = io_sq->bounce_buf_ctrl.buffer_size * io_sq->bounce_buf_ctrl.buffers_num; ENA_MEM_ALLOC_NODE(ena_dev->dmadev, size, io_sq->bounce_buf_ctrl.base_buffer, ctx->numa_node, dev_node); if (!io_sq->bounce_buf_ctrl.base_buffer) io_sq->bounce_buf_ctrl.base_buffer = ENA_MEM_ALLOC(ena_dev->dmadev, size); if (!io_sq->bounce_buf_ctrl.base_buffer) { - ena_trc_err("bounce buffer memory allocation failed\n"); + ena_trc_err(ena_dev, "Bounce buffer memory allocation failed\n"); return ENA_COM_NO_MEM; } memcpy(&io_sq->llq_info, &ena_dev->llq_info, sizeof(io_sq->llq_info)); /* Initiate the first bounce buffer */ io_sq->llq_buf_ctrl.curr_bounce_buf = ena_com_get_next_bounce_buffer(&io_sq->bounce_buf_ctrl); memset(io_sq->llq_buf_ctrl.curr_bounce_buf, 0x0, io_sq->llq_info.desc_list_entry_size); io_sq->llq_buf_ctrl.descs_left_in_line = io_sq->llq_info.descs_num_before_header; io_sq->disable_meta_caching = io_sq->llq_info.disable_meta_caching; if (io_sq->llq_info.max_entries_in_tx_burst > 0) io_sq->entries_in_tx_burst_left = io_sq->llq_info.max_entries_in_tx_burst; } io_sq->tail = 0; io_sq->next_to_comp = 0; io_sq->phase = 1; return 0; } static int ena_com_init_io_cq(struct ena_com_dev *ena_dev, struct ena_com_create_io_ctx *ctx, struct ena_com_io_cq *io_cq) { size_t size; int prev_node = 0; memset(&io_cq->cdesc_addr, 0x0, sizeof(io_cq->cdesc_addr)); /* Use the basic completion descriptor for Rx */ io_cq->cdesc_entry_size_in_bytes = (io_cq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) ? 
sizeof(struct ena_eth_io_tx_cdesc) : sizeof(struct ena_eth_io_rx_cdesc_base); size = io_cq->cdesc_entry_size_in_bytes * io_cq->q_depth; io_cq->bus = ena_dev->bus; - ENA_MEM_ALLOC_COHERENT_NODE(ena_dev->dmadev, - size, - io_cq->cdesc_addr.virt_addr, - io_cq->cdesc_addr.phys_addr, - io_cq->cdesc_addr.mem_handle, - ctx->numa_node, - prev_node); + ENA_MEM_ALLOC_COHERENT_NODE_ALIGNED(ena_dev->dmadev, + size, + io_cq->cdesc_addr.virt_addr, + io_cq->cdesc_addr.phys_addr, + io_cq->cdesc_addr.mem_handle, + ctx->numa_node, + prev_node, + ENA_CDESC_RING_SIZE_ALIGNMENT); if (!io_cq->cdesc_addr.virt_addr) { - ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, - size, - io_cq->cdesc_addr.virt_addr, - io_cq->cdesc_addr.phys_addr, - io_cq->cdesc_addr.mem_handle); + ENA_MEM_ALLOC_COHERENT_ALIGNED(ena_dev->dmadev, + size, + io_cq->cdesc_addr.virt_addr, + io_cq->cdesc_addr.phys_addr, + io_cq->cdesc_addr.mem_handle, + ENA_CDESC_RING_SIZE_ALIGNMENT); } if (!io_cq->cdesc_addr.virt_addr) { - ena_trc_err("memory allocation failed\n"); + ena_trc_err(ena_dev, "Memory allocation failed\n"); return ENA_COM_NO_MEM; } io_cq->phase = 1; io_cq->head = 0; return 0; } static void ena_com_handle_single_admin_completion(struct ena_com_admin_queue *admin_queue, struct ena_admin_acq_entry *cqe) { struct ena_comp_ctx *comp_ctx; u16 cmd_id; cmd_id = cqe->acq_common_descriptor.command & ENA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID_MASK; comp_ctx = get_comp_ctxt(admin_queue, cmd_id, false); if (unlikely(!comp_ctx)) { - ena_trc_err("comp_ctx is NULL. Changing the admin queue running state\n"); + ena_trc_err(admin_queue->ena_dev, + "comp_ctx is NULL. Changing the admin queue running state\n"); admin_queue->running_state = false; return; } comp_ctx->status = ENA_CMD_COMPLETED; comp_ctx->comp_status = cqe->acq_common_descriptor.status; if (comp_ctx->user_cqe) memcpy(comp_ctx->user_cqe, (void *)cqe, comp_ctx->comp_size); if (!admin_queue->polling) ENA_WAIT_EVENT_SIGNAL(comp_ctx->wait_event); } static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_queue) { struct ena_admin_acq_entry *cqe = NULL; u16 comp_num = 0; u16 head_masked; u8 phase; head_masked = admin_queue->cq.head & (admin_queue->q_depth - 1); phase = admin_queue->cq.phase; cqe = &admin_queue->cq.entries[head_masked]; /* Go over all the completions */ while ((READ_ONCE8(cqe->acq_common_descriptor.flags) & ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK) == phase) { /* Do not read the rest of the completion entry before the * phase bit was validated */ dma_rmb(); ena_com_handle_single_admin_completion(admin_queue, cqe); head_masked++; comp_num++; if (unlikely(head_masked == admin_queue->q_depth)) { head_masked = 0; phase = !phase; } cqe = &admin_queue->cq.entries[head_masked]; } admin_queue->cq.head += comp_num; admin_queue->cq.phase = phase; admin_queue->sq.head += comp_num; admin_queue->stats.completed_cmd += comp_num; } -static int ena_com_comp_status_to_errno(u8 comp_status) +static int ena_com_comp_status_to_errno(struct ena_com_admin_queue *admin_queue, + u8 comp_status) { if (unlikely(comp_status != 0)) - ena_trc_err("admin command failed[%u]\n", comp_status); + ena_trc_err(admin_queue->ena_dev, + "Admin command failed[%u]\n", comp_status); switch (comp_status) { case ENA_ADMIN_SUCCESS: return ENA_COM_OK; case ENA_ADMIN_RESOURCE_ALLOCATION_FAILURE: return ENA_COM_NO_MEM; case ENA_ADMIN_UNSUPPORTED_OPCODE: return ENA_COM_UNSUPPORTED; case ENA_ADMIN_BAD_OPCODE: case ENA_ADMIN_MALFORMED_REQUEST: case ENA_ADMIN_ILLEGAL_PARAMETER: case ENA_ADMIN_UNKNOWN_ERROR: return ENA_COM_INVAL; 
+ case ENA_ADMIN_RESOURCE_BUSY: + return ENA_COM_TRY_AGAIN; } return ENA_COM_INVAL; } -static inline void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us) +static void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us) { - delay_us = ENA_MAX32(ENA_MIN_POLL_US, delay_us); - delay_us = ENA_MIN32(delay_us * (1 << exp), ENA_MAX_POLL_US); + delay_us = ENA_MAX32(ENA_MIN_ADMIN_POLL_US, delay_us); + delay_us = ENA_MIN32(delay_us * (1U << exp), ENA_MAX_ADMIN_POLL_US); ENA_USLEEP(delay_us); } static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_ctx, struct ena_com_admin_queue *admin_queue) { unsigned long flags = 0; ena_time_t timeout; int ret; u32 exp = 0; timeout = ENA_GET_SYSTEM_TIMEOUT(admin_queue->completion_timeout); while (1) { ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); ena_com_handle_admin_completion(admin_queue); ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); if (comp_ctx->status != ENA_CMD_SUBMITTED) break; if (ENA_TIME_EXPIRE(timeout)) { - ena_trc_err("Wait for completion (polling) timeout\n"); + ena_trc_err(admin_queue->ena_dev, + "Wait for completion (polling) timeout\n"); /* ENA didn't have any completion */ ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); admin_queue->stats.no_completion++; admin_queue->running_state = false; ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); ret = ENA_COM_TIMER_EXPIRED; goto err; } - ena_delay_exponential_backoff_us(exp++, admin_queue->ena_dev->ena_min_poll_delay_us); + ena_delay_exponential_backoff_us(exp++, + admin_queue->ena_dev->ena_min_poll_delay_us); } if (unlikely(comp_ctx->status == ENA_CMD_ABORTED)) { - ena_trc_err("Command was aborted\n"); + ena_trc_err(admin_queue->ena_dev, "Command was aborted\n"); ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); admin_queue->stats.aborted_cmd++; ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); ret = ENA_COM_NO_DEVICE; goto err; } ENA_WARN(comp_ctx->status != ENA_CMD_COMPLETED, - "Invalid comp status %d\n", comp_ctx->status); + admin_queue->ena_dev, "Invalid comp status %d\n", + comp_ctx->status); - ret = ena_com_comp_status_to_errno(comp_ctx->comp_status); + ret = ena_com_comp_status_to_errno(admin_queue, comp_ctx->comp_status); err: comp_ctxt_release(admin_queue, comp_ctx); return ret; } -/** +/* * Set the LLQ configurations of the firmware * * The driver provides only the enabled feature values to the device, * which in turn, checks if they are supported. 
*/ static int ena_com_set_llq(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; struct ena_com_llq_info *llq_info = &ena_dev->llq_info; int ret; memset(&cmd, 0x0, sizeof(cmd)); admin_queue = &ena_dev->admin_queue; cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.feat_common.feature_id = ENA_ADMIN_LLQ; cmd.u.llq.header_location_ctrl_enabled = llq_info->header_location_ctrl; cmd.u.llq.entry_size_ctrl_enabled = llq_info->desc_list_entry_size_ctrl; cmd.u.llq.desc_num_before_header_enabled = llq_info->descs_num_before_header; cmd.u.llq.descriptors_stride_ctrl_enabled = llq_info->desc_stride_ctrl; - if (llq_info->disable_meta_caching) - cmd.u.llq.accel_mode.u.set.enabled_flags |= - BIT(ENA_ADMIN_DISABLE_META_CACHING); + cmd.u.llq.accel_mode.u.set.enabled_flags = + BIT(ENA_ADMIN_DISABLE_META_CACHING) | + BIT(ENA_ADMIN_LIMIT_TX_BURST); - if (llq_info->max_entries_in_tx_burst) - cmd.u.llq.accel_mode.u.set.enabled_flags |= - BIT(ENA_ADMIN_LIMIT_TX_BURST); - ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) - ena_trc_err("Failed to set LLQ configurations: %d\n", ret); + ena_trc_err(ena_dev, "Failed to set LLQ configurations: %d\n", ret); return ret; } static int ena_com_config_llq_info(struct ena_com_dev *ena_dev, struct ena_admin_feature_llq_desc *llq_features, struct ena_llq_configurations *llq_default_cfg) { struct ena_com_llq_info *llq_info = &ena_dev->llq_info; + struct ena_admin_accel_mode_get llq_accel_mode_get; u16 supported_feat; int rc; memset(llq_info, 0, sizeof(*llq_info)); supported_feat = llq_features->header_location_ctrl_supported; if (likely(supported_feat & llq_default_cfg->llq_header_location)) { llq_info->header_location_ctrl = llq_default_cfg->llq_header_location; } else { - ena_trc_err("Invalid header location control, supported: 0x%x\n", + ena_trc_err(ena_dev, "Invalid header location control, supported: 0x%x\n", supported_feat); return -EINVAL; } if (likely(llq_info->header_location_ctrl == ENA_ADMIN_INLINE_HEADER)) { supported_feat = llq_features->descriptors_stride_ctrl_supported; if (likely(supported_feat & llq_default_cfg->llq_stride_ctrl)) { llq_info->desc_stride_ctrl = llq_default_cfg->llq_stride_ctrl; } else { if (supported_feat & ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY) { llq_info->desc_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY; } else if (supported_feat & ENA_ADMIN_SINGLE_DESC_PER_ENTRY) { llq_info->desc_stride_ctrl = ENA_ADMIN_SINGLE_DESC_PER_ENTRY; } else { - ena_trc_err("Invalid desc_stride_ctrl, supported: 0x%x\n", + ena_trc_err(ena_dev, "Invalid desc_stride_ctrl, supported: 0x%x\n", supported_feat); return -EINVAL; } - ena_trc_err("Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", + ena_trc_err(ena_dev, "Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", llq_default_cfg->llq_stride_ctrl, supported_feat, llq_info->desc_stride_ctrl); } } else { llq_info->desc_stride_ctrl = 0; } supported_feat = llq_features->entry_size_ctrl_supported; if (likely(supported_feat & llq_default_cfg->llq_ring_entry_size)) { llq_info->desc_list_entry_size_ctrl = llq_default_cfg->llq_ring_entry_size; llq_info->desc_list_entry_size = llq_default_cfg->llq_ring_entry_size_value; } else { if (supported_feat & ENA_ADMIN_LIST_ENTRY_SIZE_128B) { 
llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_128B; llq_info->desc_list_entry_size = 128; } else if (supported_feat & ENA_ADMIN_LIST_ENTRY_SIZE_192B) { llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_192B; llq_info->desc_list_entry_size = 192; } else if (supported_feat & ENA_ADMIN_LIST_ENTRY_SIZE_256B) { llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_256B; llq_info->desc_list_entry_size = 256; } else { - ena_trc_err("Invalid entry_size_ctrl, supported: 0x%x\n", supported_feat); + ena_trc_err(ena_dev, "Invalid entry_size_ctrl, supported: 0x%x\n", + supported_feat); return -EINVAL; } - ena_trc_err("Default llq ring entry size is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", + ena_trc_err(ena_dev, "Default llq ring entry size is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", llq_default_cfg->llq_ring_entry_size, supported_feat, llq_info->desc_list_entry_size); } if (unlikely(llq_info->desc_list_entry_size & 0x7)) { /* The desc list entry size should be whole multiply of 8 * This requirement comes from __iowrite64_copy() */ - ena_trc_err("illegal entry size %d\n", + ena_trc_err(ena_dev, "Illegal entry size %d\n", llq_info->desc_list_entry_size); return -EINVAL; } if (llq_info->desc_stride_ctrl == ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY) llq_info->descs_per_entry = llq_info->desc_list_entry_size / sizeof(struct ena_eth_io_tx_desc); else llq_info->descs_per_entry = 1; supported_feat = llq_features->desc_num_before_header_supported; if (likely(supported_feat & llq_default_cfg->llq_num_decs_before_header)) { llq_info->descs_num_before_header = llq_default_cfg->llq_num_decs_before_header; } else { if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2) { llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2; } else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_1) { llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_1; } else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_4) { llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_4; } else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8) { llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8; } else { - ena_trc_err("Invalid descs_num_before_header, supported: 0x%x\n", + ena_trc_err(ena_dev, "Invalid descs_num_before_header, supported: 0x%x\n", supported_feat); return -EINVAL; } - ena_trc_err("Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", + ena_trc_err(ena_dev, "Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", llq_default_cfg->llq_num_decs_before_header, supported_feat, llq_info->descs_num_before_header); } /* Check for accelerated queue supported */ + llq_accel_mode_get = llq_features->accel_mode.u.get; + llq_info->disable_meta_caching = - llq_features->accel_mode.u.get.supported_flags & - BIT(ENA_ADMIN_DISABLE_META_CACHING); + !!(llq_accel_mode_get.supported_flags & + BIT(ENA_ADMIN_DISABLE_META_CACHING)); - if (llq_features->accel_mode.u.get.supported_flags & BIT(ENA_ADMIN_LIMIT_TX_BURST)) + if (llq_accel_mode_get.supported_flags & BIT(ENA_ADMIN_LIMIT_TX_BURST)) llq_info->max_entries_in_tx_burst = - llq_features->accel_mode.u.get.max_tx_burst_size / + llq_accel_mode_get.max_tx_burst_size / llq_default_cfg->llq_ring_entry_size_value; rc = 
ena_com_set_llq(ena_dev); if (rc) - ena_trc_err("Cannot set LLQ configuration: %d\n", rc); + ena_trc_err(ena_dev, "Cannot set LLQ configuration: %d\n", rc); return rc; } static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *comp_ctx, struct ena_com_admin_queue *admin_queue) { unsigned long flags = 0; int ret; ENA_WAIT_EVENT_WAIT(comp_ctx->wait_event, admin_queue->completion_timeout); /* In case the command wasn't completed find out the root cause. * There might be 2 kinds of errors * 1) No completion (timeout reached) * 2) There is completion but the device didn't get any msi-x interrupt. */ if (unlikely(comp_ctx->status == ENA_CMD_SUBMITTED)) { ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); ena_com_handle_admin_completion(admin_queue); admin_queue->stats.no_completion++; ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); if (comp_ctx->status == ENA_CMD_COMPLETED) { - ena_trc_err("The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n", + ena_trc_err(admin_queue->ena_dev, + "The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n", comp_ctx->cmd_opcode, admin_queue->auto_polling ? "ON" : "OFF"); /* Check if fallback to polling is enabled */ if (admin_queue->auto_polling) admin_queue->polling = true; } else { - ena_trc_err("The ena device didn't send a completion for the admin cmd %d status %d\n", + ena_trc_err(admin_queue->ena_dev, + "The ena device didn't send a completion for the admin cmd %d status %d\n", comp_ctx->cmd_opcode, comp_ctx->status); } /* Check if shifted to polling mode. * This will happen if there is a completion without an interrupt * and autopolling mode is enabled. Continuing normal execution in such case */ if (!admin_queue->polling) { admin_queue->running_state = false; ret = ENA_COM_TIMER_EXPIRED; goto err; } } - ret = ena_com_comp_status_to_errno(comp_ctx->comp_status); + ret = ena_com_comp_status_to_errno(admin_queue, comp_ctx->comp_status); err: comp_ctxt_release(admin_queue, comp_ctx); return ret; } /* This method read the hardware device register through posting writes * and waiting for response * On timeout the function will return ENA_MMIO_READ_TIMEOUT */ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset) { struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read; volatile struct ena_admin_ena_mmio_req_read_less_resp *read_resp = mmio_read->read_resp; u32 mmio_read_reg, ret, i; unsigned long flags = 0; u32 timeout = mmio_read->reg_read_to; ENA_MIGHT_SLEEP(); if (timeout == 0) timeout = ENA_REG_READ_TIMEOUT; /* If readless is disabled, perform regular read */ if (!mmio_read->readless_supported) return ENA_REG_READ32(ena_dev->bus, ena_dev->reg_bar + offset); ENA_SPINLOCK_LOCK(mmio_read->lock, flags); mmio_read->seq_num++; read_resp->req_id = mmio_read->seq_num + 0xDEAD; mmio_read_reg = (offset << ENA_REGS_MMIO_REG_READ_REG_OFF_SHIFT) & ENA_REGS_MMIO_REG_READ_REG_OFF_MASK; mmio_read_reg |= mmio_read->seq_num & ENA_REGS_MMIO_REG_READ_REQ_ID_MASK; ENA_REG_WRITE32(ena_dev->bus, mmio_read_reg, ena_dev->reg_bar + ENA_REGS_MMIO_REG_READ_OFF); for (i = 0; i < timeout; i++) { if (READ_ONCE16(read_resp->req_id) == mmio_read->seq_num) break; ENA_UDELAY(1); } if (unlikely(i == timeout)) { - ena_trc_err("reading reg failed for timeout. expected: req id[%hu] offset[%hu] actual: req id[%hu] offset[%hu]\n", + ena_trc_err(ena_dev, "Reading reg failed for timeout. 
expected: req id[%hu] offset[%hu] actual: req id[%hu] offset[%hu]\n", mmio_read->seq_num, offset, read_resp->req_id, read_resp->reg_off); ret = ENA_MMIO_READ_TIMEOUT; goto err; } if (read_resp->reg_off != offset) { - ena_trc_err("Read failure: wrong offset provided\n"); + ena_trc_err(ena_dev, "Read failure: wrong offset provided\n"); ret = ENA_MMIO_READ_TIMEOUT; } else { ret = read_resp->reg_val; } err: ENA_SPINLOCK_UNLOCK(mmio_read->lock, flags); return ret; } /* There are two types to wait for completion. * Polling mode - wait until the completion is available. * Async mode - wait on wait queue until the completion is ready * (or the timeout expired). * It is expected that the IRQ called ena_com_handle_admin_completion * to mark the completions. */ static int ena_com_wait_and_process_admin_cq(struct ena_comp_ctx *comp_ctx, struct ena_com_admin_queue *admin_queue) { if (admin_queue->polling) return ena_com_wait_and_process_admin_cq_polling(comp_ctx, admin_queue); return ena_com_wait_and_process_admin_cq_interrupts(comp_ctx, admin_queue); } static int ena_com_destroy_io_sq(struct ena_com_dev *ena_dev, struct ena_com_io_sq *io_sq) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_admin_aq_destroy_sq_cmd destroy_cmd; struct ena_admin_acq_destroy_sq_resp_desc destroy_resp; u8 direction; int ret; memset(&destroy_cmd, 0x0, sizeof(destroy_cmd)); if (io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) direction = ENA_ADMIN_SQ_DIRECTION_TX; else direction = ENA_ADMIN_SQ_DIRECTION_RX; destroy_cmd.sq.sq_identity |= (direction << ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT) & ENA_ADMIN_SQ_SQ_DIRECTION_MASK; destroy_cmd.sq.sq_idx = io_sq->idx; destroy_cmd.aq_common_descriptor.opcode = ENA_ADMIN_DESTROY_SQ; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&destroy_cmd, sizeof(destroy_cmd), (struct ena_admin_acq_entry *)&destroy_resp, sizeof(destroy_resp)); if (unlikely(ret && (ret != ENA_COM_NO_DEVICE))) - ena_trc_err("failed to destroy io sq error: %d\n", ret); + ena_trc_err(ena_dev, "Failed to destroy io sq error: %d\n", ret); return ret; } static void ena_com_io_queue_free(struct ena_com_dev *ena_dev, struct ena_com_io_sq *io_sq, struct ena_com_io_cq *io_cq) { size_t size; if (io_cq->cdesc_addr.virt_addr) { size = io_cq->cdesc_entry_size_in_bytes * io_cq->q_depth; ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, io_cq->cdesc_addr.virt_addr, io_cq->cdesc_addr.phys_addr, io_cq->cdesc_addr.mem_handle); io_cq->cdesc_addr.virt_addr = NULL; } if (io_sq->desc_addr.virt_addr) { size = io_sq->desc_entry_size * io_sq->q_depth; ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, io_sq->desc_addr.virt_addr, io_sq->desc_addr.phys_addr, io_sq->desc_addr.mem_handle); io_sq->desc_addr.virt_addr = NULL; } if (io_sq->bounce_buf_ctrl.base_buffer) { ENA_MEM_FREE(ena_dev->dmadev, io_sq->bounce_buf_ctrl.base_buffer, (io_sq->llq_info.desc_list_entry_size * ENA_COM_BOUNCE_BUFFER_CNTRL_CNT)); io_sq->bounce_buf_ctrl.base_buffer = NULL; } } static int wait_for_reset_state(struct ena_com_dev *ena_dev, u32 timeout, u16 exp_state) { u32 val, exp = 0; ena_time_t timeout_stamp; /* Convert timeout from resolution of 100ms to us resolution. 
*/ timeout_stamp = ENA_GET_SYSTEM_TIMEOUT(100 * 1000 * timeout); while (1) { val = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF); if (unlikely(val == ENA_MMIO_READ_TIMEOUT)) { - ena_trc_err("Reg read timeout occurred\n"); + ena_trc_err(ena_dev, "Reg read timeout occurred\n"); return ENA_COM_TIMER_EXPIRED; } if ((val & ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK) == exp_state) return 0; if (ENA_TIME_EXPIRE(timeout_stamp)) return ENA_COM_TIMER_EXPIRED; ena_delay_exponential_backoff_us(exp++, ena_dev->ena_min_poll_delay_us); } } static bool ena_com_check_supported_feature_id(struct ena_com_dev *ena_dev, enum ena_admin_aq_feature_id feature_id) { u32 feature_mask = 1 << feature_id; /* Device attributes is always supported */ if ((feature_id != ENA_ADMIN_DEVICE_ATTRIBUTES) && !(ena_dev->supported_features & feature_mask)) return false; return true; } static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *get_resp, enum ena_admin_aq_feature_id feature_id, dma_addr_t control_buf_dma_addr, u32 control_buff_size, u8 feature_ver) { struct ena_com_admin_queue *admin_queue; struct ena_admin_get_feat_cmd get_cmd; int ret; if (!ena_com_check_supported_feature_id(ena_dev, feature_id)) { - ena_trc_dbg("Feature %d isn't supported\n", feature_id); + ena_trc_dbg(ena_dev, "Feature %d isn't supported\n", feature_id); return ENA_COM_UNSUPPORTED; } memset(&get_cmd, 0x0, sizeof(get_cmd)); admin_queue = &ena_dev->admin_queue; get_cmd.aq_common_descriptor.opcode = ENA_ADMIN_GET_FEATURE; if (control_buff_size) get_cmd.aq_common_descriptor.flags = ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK; else get_cmd.aq_common_descriptor.flags = 0; ret = ena_com_mem_addr_set(ena_dev, &get_cmd.control_buffer.address, control_buf_dma_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } get_cmd.control_buffer.length = control_buff_size; get_cmd.feat_common.feature_version = feature_ver; get_cmd.feat_common.feature_id = feature_id; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *) &get_cmd, sizeof(get_cmd), (struct ena_admin_acq_entry *) get_resp, sizeof(*get_resp)); if (unlikely(ret)) - ena_trc_err("Failed to submit get_feature command %d error: %d\n", + ena_trc_err(ena_dev, "Failed to submit get_feature command %d error: %d\n", feature_id, ret); return ret; } static int ena_com_get_feature(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *get_resp, enum ena_admin_aq_feature_id feature_id, u8 feature_ver) { return ena_com_get_feature_ex(ena_dev, get_resp, feature_id, 0, 0, feature_ver); } int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev) { return ena_dev->rss.hash_func; } static void ena_com_hash_key_fill_default_key(struct ena_com_dev *ena_dev) { struct ena_admin_feature_rss_flow_hash_control *hash_key = (ena_dev->rss).hash_key; ENA_RSS_FILL_KEY(&hash_key->key, sizeof(hash_key->key)); /* The key buffer is stored in the device in an array of - * uint32 elements. Therefore the number of elements can be derived - * by dividing the buffer length by the size of each array element. - * In current implementation each element is sized at uint32_t - * so it's actually a division by 4 but if the element size changes, - * there is no need to rewrite this code. + * uint32 elements. 
*/ - hash_key->keys_num = sizeof(hash_key->key) / sizeof(hash_key->key[0]); + hash_key->key_parts = ENA_ADMIN_RSS_KEY_PARTS; } static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev) { struct ena_rss *rss = &ena_dev->rss; if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_FUNCTION)) return ENA_COM_UNSUPPORTED; ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, sizeof(*rss->hash_key), rss->hash_key, rss->hash_key_dma_addr, rss->hash_key_mem_handle); if (unlikely(!rss->hash_key)) return ENA_COM_NO_MEM; return 0; } static void ena_com_hash_key_destroy(struct ena_com_dev *ena_dev) { struct ena_rss *rss = &ena_dev->rss; if (rss->hash_key) ENA_MEM_FREE_COHERENT(ena_dev->dmadev, sizeof(*rss->hash_key), rss->hash_key, rss->hash_key_dma_addr, rss->hash_key_mem_handle); rss->hash_key = NULL; } static int ena_com_hash_ctrl_init(struct ena_com_dev *ena_dev) { struct ena_rss *rss = &ena_dev->rss; ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, sizeof(*rss->hash_ctrl), rss->hash_ctrl, rss->hash_ctrl_dma_addr, rss->hash_ctrl_mem_handle); if (unlikely(!rss->hash_ctrl)) return ENA_COM_NO_MEM; return 0; } static void ena_com_hash_ctrl_destroy(struct ena_com_dev *ena_dev) { struct ena_rss *rss = &ena_dev->rss; if (rss->hash_ctrl) ENA_MEM_FREE_COHERENT(ena_dev->dmadev, sizeof(*rss->hash_ctrl), rss->hash_ctrl, rss->hash_ctrl_dma_addr, rss->hash_ctrl_mem_handle); rss->hash_ctrl = NULL; } static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev, u16 log_size) { struct ena_rss *rss = &ena_dev->rss; struct ena_admin_get_feat_resp get_resp; size_t tbl_size; int ret; ret = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, 0); + ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG, 0); if (unlikely(ret)) return ret; if ((get_resp.u.ind_table.min_size > log_size) || (get_resp.u.ind_table.max_size < log_size)) { - ena_trc_err("indirect table size doesn't fit. requested size: %d while min is:%d and max %d\n", + ena_trc_err(ena_dev, "Indirect table size doesn't fit. 
requested size: %d while min is:%d and max %d\n", 1 << log_size, 1 << get_resp.u.ind_table.min_size, 1 << get_resp.u.ind_table.max_size); return ENA_COM_INVAL; } tbl_size = (1ULL << log_size) * sizeof(struct ena_admin_rss_ind_table_entry); ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, tbl_size, rss->rss_ind_tbl, rss->rss_ind_tbl_dma_addr, rss->rss_ind_tbl_mem_handle); if (unlikely(!rss->rss_ind_tbl)) goto mem_err1; tbl_size = (1ULL << log_size) * sizeof(u16); rss->host_rss_ind_tbl = ENA_MEM_ALLOC(ena_dev->dmadev, tbl_size); if (unlikely(!rss->host_rss_ind_tbl)) goto mem_err2; rss->tbl_log_size = log_size; return 0; mem_err2: tbl_size = (1ULL << log_size) * sizeof(struct ena_admin_rss_ind_table_entry); ENA_MEM_FREE_COHERENT(ena_dev->dmadev, tbl_size, rss->rss_ind_tbl, rss->rss_ind_tbl_dma_addr, rss->rss_ind_tbl_mem_handle); rss->rss_ind_tbl = NULL; mem_err1: rss->tbl_log_size = 0; return ENA_COM_NO_MEM; } static void ena_com_indirect_table_destroy(struct ena_com_dev *ena_dev) { struct ena_rss *rss = &ena_dev->rss; size_t tbl_size = (1ULL << rss->tbl_log_size) * sizeof(struct ena_admin_rss_ind_table_entry); if (rss->rss_ind_tbl) ENA_MEM_FREE_COHERENT(ena_dev->dmadev, tbl_size, rss->rss_ind_tbl, rss->rss_ind_tbl_dma_addr, rss->rss_ind_tbl_mem_handle); rss->rss_ind_tbl = NULL; if (rss->host_rss_ind_tbl) ENA_MEM_FREE(ena_dev->dmadev, rss->host_rss_ind_tbl, ((1ULL << rss->tbl_log_size) * sizeof(u16))); rss->host_rss_ind_tbl = NULL; } static int ena_com_create_io_sq(struct ena_com_dev *ena_dev, struct ena_com_io_sq *io_sq, u16 cq_idx) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_admin_aq_create_sq_cmd create_cmd; struct ena_admin_acq_create_sq_resp_desc cmd_completion; u8 direction; int ret; memset(&create_cmd, 0x0, sizeof(create_cmd)); create_cmd.aq_common_descriptor.opcode = ENA_ADMIN_CREATE_SQ; if (io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) direction = ENA_ADMIN_SQ_DIRECTION_TX; else direction = ENA_ADMIN_SQ_DIRECTION_RX; create_cmd.sq_identity |= (direction << ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT) & ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK; create_cmd.sq_caps_2 |= io_sq->mem_queue_type & ENA_ADMIN_AQ_CREATE_SQ_CMD_PLACEMENT_POLICY_MASK; create_cmd.sq_caps_2 |= (ENA_ADMIN_COMPLETION_POLICY_DESC << ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT) & ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK; create_cmd.sq_caps_3 |= ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK; create_cmd.cq_idx = cq_idx; create_cmd.sq_depth = io_sq->q_depth; if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST) { ret = ena_com_mem_addr_set(ena_dev, &create_cmd.sq_ba, io_sq->desc_addr.phys_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } } ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&create_cmd, sizeof(create_cmd), (struct ena_admin_acq_entry *)&cmd_completion, sizeof(cmd_completion)); if (unlikely(ret)) { - ena_trc_err("Failed to create IO SQ. error: %d\n", ret); + ena_trc_err(ena_dev, "Failed to create IO SQ. 
error: %d\n", ret); return ret; } io_sq->idx = cmd_completion.sq_idx; io_sq->db_addr = (u32 __iomem *)((uintptr_t)ena_dev->reg_bar + (uintptr_t)cmd_completion.sq_doorbell_offset); if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { io_sq->header_addr = (u8 __iomem *)((uintptr_t)ena_dev->mem_bar + cmd_completion.llq_headers_offset); io_sq->desc_addr.pbuf_dev_addr = (u8 __iomem *)((uintptr_t)ena_dev->mem_bar + cmd_completion.llq_descriptors_offset); } - ena_trc_dbg("created sq[%u], depth[%u]\n", io_sq->idx, io_sq->q_depth); + ena_trc_dbg(ena_dev, "Created sq[%u], depth[%u]\n", io_sq->idx, io_sq->q_depth); return ret; } static int ena_com_ind_tbl_convert_to_device(struct ena_com_dev *ena_dev) { struct ena_rss *rss = &ena_dev->rss; struct ena_com_io_sq *io_sq; u16 qid; int i; for (i = 0; i < 1 << rss->tbl_log_size; i++) { qid = rss->host_rss_ind_tbl[i]; if (qid >= ENA_TOTAL_NUM_QUEUES) return ENA_COM_INVAL; io_sq = &ena_dev->io_sq_queues[qid]; if (io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX) return ENA_COM_INVAL; rss->rss_ind_tbl[i].cq_idx = io_sq->idx; } return 0; } static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev, u16 intr_delay_resolution) { u16 prev_intr_delay_resolution = ena_dev->intr_delay_resolution; if (unlikely(!intr_delay_resolution)) { - ena_trc_err("Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n"); + ena_trc_err(ena_dev, "Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n"); intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION; } /* update Rx */ ena_dev->intr_moder_rx_interval = ena_dev->intr_moder_rx_interval * prev_intr_delay_resolution / intr_delay_resolution; /* update Tx */ ena_dev->intr_moder_tx_interval = ena_dev->intr_moder_tx_interval * prev_intr_delay_resolution / intr_delay_resolution; ena_dev->intr_delay_resolution = intr_delay_resolution; } /*****************************************************************************/ /******************************* API ******************************/ /*****************************************************************************/ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue, struct ena_admin_aq_entry *cmd, size_t cmd_size, struct ena_admin_acq_entry *comp, size_t comp_size) { struct ena_comp_ctx *comp_ctx; int ret; comp_ctx = ena_com_submit_admin_cmd(admin_queue, cmd, cmd_size, comp, comp_size); if (IS_ERR(comp_ctx)) { if (comp_ctx == ERR_PTR(ENA_COM_NO_DEVICE)) - ena_trc_dbg("Failed to submit command [%ld]\n", + ena_trc_dbg(admin_queue->ena_dev, + "Failed to submit command [%ld]\n", PTR_ERR(comp_ctx)); else - ena_trc_err("Failed to submit command [%ld]\n", + ena_trc_err(admin_queue->ena_dev, + "Failed to submit command [%ld]\n", PTR_ERR(comp_ctx)); - return PTR_ERR(comp_ctx); + return (int)PTR_ERR(comp_ctx); } ret = ena_com_wait_and_process_admin_cq(comp_ctx, admin_queue); if (unlikely(ret)) { if (admin_queue->running_state) - ena_trc_err("Failed to process command. ret = %d\n", - ret); + ena_trc_err(admin_queue->ena_dev, + "Failed to process command. ret = %d\n", ret); else - ena_trc_dbg("Failed to process command. ret = %d\n", - ret); + ena_trc_dbg(admin_queue->ena_dev, + "Failed to process command. 
ret = %d\n", ret); } return ret; } int ena_com_create_io_cq(struct ena_com_dev *ena_dev, struct ena_com_io_cq *io_cq) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_admin_aq_create_cq_cmd create_cmd; struct ena_admin_acq_create_cq_resp_desc cmd_completion; int ret; memset(&create_cmd, 0x0, sizeof(create_cmd)); create_cmd.aq_common_descriptor.opcode = ENA_ADMIN_CREATE_CQ; create_cmd.cq_caps_2 |= (io_cq->cdesc_entry_size_in_bytes / 4) & ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK; create_cmd.cq_caps_1 |= ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK; create_cmd.msix_vector = io_cq->msix_vector; create_cmd.cq_depth = io_cq->q_depth; ret = ena_com_mem_addr_set(ena_dev, &create_cmd.cq_ba, io_cq->cdesc_addr.phys_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&create_cmd, sizeof(create_cmd), (struct ena_admin_acq_entry *)&cmd_completion, sizeof(cmd_completion)); if (unlikely(ret)) { - ena_trc_err("Failed to create IO CQ. error: %d\n", ret); + ena_trc_err(ena_dev, "Failed to create IO CQ. error: %d\n", ret); return ret; } io_cq->idx = cmd_completion.cq_idx; io_cq->unmask_reg = (u32 __iomem *)((uintptr_t)ena_dev->reg_bar + cmd_completion.cq_interrupt_unmask_register_offset); if (cmd_completion.cq_head_db_register_offset) io_cq->cq_head_db_reg = (u32 __iomem *)((uintptr_t)ena_dev->reg_bar + cmd_completion.cq_head_db_register_offset); if (cmd_completion.numa_node_register_offset) io_cq->numa_node_cfg_reg = (u32 __iomem *)((uintptr_t)ena_dev->reg_bar + cmd_completion.numa_node_register_offset); - ena_trc_dbg("created cq[%u], depth[%u]\n", io_cq->idx, io_cq->q_depth); + ena_trc_dbg(ena_dev, "Created cq[%u], depth[%u]\n", io_cq->idx, io_cq->q_depth); return ret; } int ena_com_get_io_handlers(struct ena_com_dev *ena_dev, u16 qid, struct ena_com_io_sq **io_sq, struct ena_com_io_cq **io_cq) { if (qid >= ENA_TOTAL_NUM_QUEUES) { - ena_trc_err("Invalid queue number %d but the max is %d\n", + ena_trc_err(ena_dev, "Invalid queue number %d but the max is %d\n", qid, ENA_TOTAL_NUM_QUEUES); return ENA_COM_INVAL; } *io_sq = &ena_dev->io_sq_queues[qid]; *io_cq = &ena_dev->io_cq_queues[qid]; return 0; } void ena_com_abort_admin_commands(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_comp_ctx *comp_ctx; u16 i; if (!admin_queue->comp_ctx) return; for (i = 0; i < admin_queue->q_depth; i++) { comp_ctx = get_comp_ctxt(admin_queue, i, false); if (unlikely(!comp_ctx)) break; comp_ctx->status = ENA_CMD_ABORTED; ENA_WAIT_EVENT_SIGNAL(comp_ctx->wait_event); } } void ena_com_wait_for_abort_completion(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; unsigned long flags = 0; u32 exp = 0; ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); while (ATOMIC32_READ(&admin_queue->outstanding_cmds) != 0) { ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); ena_delay_exponential_backoff_us(exp++, ena_dev->ena_min_poll_delay_us); ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); } ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); } int ena_com_destroy_io_cq(struct ena_com_dev *ena_dev, struct ena_com_io_cq *io_cq) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_admin_aq_destroy_cq_cmd destroy_cmd; struct ena_admin_acq_destroy_cq_resp_desc destroy_resp; int ret; memset(&destroy_cmd, 0x0, 
sizeof(destroy_cmd)); destroy_cmd.cq_idx = io_cq->idx; destroy_cmd.aq_common_descriptor.opcode = ENA_ADMIN_DESTROY_CQ; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&destroy_cmd, sizeof(destroy_cmd), (struct ena_admin_acq_entry *)&destroy_resp, sizeof(destroy_resp)); if (unlikely(ret && (ret != ENA_COM_NO_DEVICE))) - ena_trc_err("Failed to destroy IO CQ. error: %d\n", ret); + ena_trc_err(ena_dev, "Failed to destroy IO CQ. error: %d\n", ret); return ret; } bool ena_com_get_admin_running_state(struct ena_com_dev *ena_dev) { return ena_dev->admin_queue.running_state; } void ena_com_set_admin_running_state(struct ena_com_dev *ena_dev, bool state) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; unsigned long flags = 0; ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); ena_dev->admin_queue.running_state = state; ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); } void ena_com_admin_aenq_enable(struct ena_com_dev *ena_dev) { u16 depth = ena_dev->aenq.q_depth; - ENA_WARN(ena_dev->aenq.head != depth, "Invalid AENQ state\n"); + ENA_WARN(ena_dev->aenq.head != depth, ena_dev, "Invalid AENQ state\n"); /* Init head_db to mark that all entries in the queue * are initially available */ ENA_REG_WRITE32(ena_dev->bus, depth, ena_dev->reg_bar + ENA_REGS_AENQ_HEAD_DB_OFF); } int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag) { struct ena_com_admin_queue *admin_queue; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; struct ena_admin_get_feat_resp get_resp; int ret; ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG, 0); if (ret) { - ena_trc_info("Can't get aenq configuration\n"); + ena_trc_info(ena_dev, "Can't get aenq configuration\n"); return ret; } if ((get_resp.u.aenq.supported_groups & groups_flag) != groups_flag) { - ena_trc_warn("Trying to set unsupported aenq events. supported flag: 0x%x asked flag: 0x%x\n", + ena_trc_warn(ena_dev, "Trying to set unsupported aenq events. 
supported flag: 0x%x asked flag: 0x%x\n", get_resp.u.aenq.supported_groups, groups_flag); return ENA_COM_UNSUPPORTED; } memset(&cmd, 0x0, sizeof(cmd)); admin_queue = &ena_dev->admin_queue; cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.aq_common_descriptor.flags = 0; cmd.feat_common.feature_id = ENA_ADMIN_AENQ_CONFIG; cmd.u.aenq.enabled_groups = groups_flag; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) - ena_trc_err("Failed to config AENQ ret: %d\n", ret); + ena_trc_err(ena_dev, "Failed to config AENQ ret: %d\n", ret); return ret; } int ena_com_get_dma_width(struct ena_com_dev *ena_dev) { u32 caps = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CAPS_OFF); - int width; + u32 width; if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) { - ena_trc_err("Reg read timeout occurred\n"); + ena_trc_err(ena_dev, "Reg read timeout occurred\n"); return ENA_COM_TIMER_EXPIRED; } width = (caps & ENA_REGS_CAPS_DMA_ADDR_WIDTH_MASK) >> ENA_REGS_CAPS_DMA_ADDR_WIDTH_SHIFT; - ena_trc_dbg("ENA dma width: %d\n", width); + ena_trc_dbg(ena_dev, "ENA dma width: %d\n", width); if ((width < 32) || width > ENA_MAX_PHYS_ADDR_SIZE_BITS) { - ena_trc_err("DMA width illegal value: %d\n", width); + ena_trc_err(ena_dev, "DMA width illegal value: %d\n", width); return ENA_COM_INVAL; } ena_dev->dma_addr_bits = width; return width; } int ena_com_validate_version(struct ena_com_dev *ena_dev) { u32 ver; u32 ctrl_ver; u32 ctrl_ver_masked; /* Make sure the ENA version and the controller version are at least * as the driver expects */ ver = ena_com_reg_bar_read32(ena_dev, ENA_REGS_VERSION_OFF); ctrl_ver = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CONTROLLER_VERSION_OFF); if (unlikely((ver == ENA_MMIO_READ_TIMEOUT) || (ctrl_ver == ENA_MMIO_READ_TIMEOUT))) { - ena_trc_err("Reg read timeout occurred\n"); + ena_trc_err(ena_dev, "Reg read timeout occurred\n"); return ENA_COM_TIMER_EXPIRED; } - ena_trc_info("ena device version: %d.%d\n", + ena_trc_info(ena_dev, "ENA device version: %d.%d\n", (ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >> ENA_REGS_VERSION_MAJOR_VERSION_SHIFT, ver & ENA_REGS_VERSION_MINOR_VERSION_MASK); - ena_trc_info("ena controller version: %d.%d.%d implementation version %d\n", + ena_trc_info(ena_dev, "ENA controller version: %d.%d.%d implementation version %d\n", (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) >> ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT, (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK) >> ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT, (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK), (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK) >> ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT); ctrl_ver_masked = (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) | (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK) | (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK); /* Validate the ctrl version without the implementation ID */ if (ctrl_ver_masked < MIN_ENA_CTRL_VER) { - ena_trc_err("ENA ctrl version is lower than the minimal ctrl version the driver supports\n"); + ena_trc_err(ena_dev, "ENA ctrl version is lower than the minimal ctrl version the driver supports\n"); return -1; } return 0; } +static void +ena_com_free_ena_admin_queue_comp_ctx(struct ena_com_dev *ena_dev, + struct ena_com_admin_queue *admin_queue) + +{ + if (!admin_queue->comp_ctx) + return; + + ENA_WAIT_EVENTS_DESTROY(admin_queue); + 
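/* [Editorial note] The early NULL check at the top of this new helper is
 * the point of the refactor: the removed lines in ena_com_admin_destroy()
 * below dereferenced admin_queue->comp_ctx->wait_event before testing
 * comp_ctx for NULL, so destroying a device whose admin queue never
 * finished initializing could fault. Hoisting the teardown into one
 * helper puts the guard ahead of every use. */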
ENA_MEM_FREE(ena_dev->dmadev, + admin_queue->comp_ctx, + (admin_queue->q_depth * sizeof(struct ena_comp_ctx))); + + admin_queue->comp_ctx = NULL; +} + void ena_com_admin_destroy(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_com_admin_cq *cq = &admin_queue->cq; struct ena_com_admin_sq *sq = &admin_queue->sq; struct ena_com_aenq *aenq = &ena_dev->aenq; u16 size; - ENA_WAIT_EVENT_DESTROY(admin_queue->comp_ctx->wait_event); - if (admin_queue->comp_ctx) - ENA_MEM_FREE(ena_dev->dmadev, - admin_queue->comp_ctx, - (admin_queue->q_depth * sizeof(struct ena_comp_ctx))); - admin_queue->comp_ctx = NULL; + ena_com_free_ena_admin_queue_comp_ctx(ena_dev, admin_queue); + size = ADMIN_SQ_SIZE(admin_queue->q_depth); if (sq->entries) ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, sq->entries, sq->dma_addr, sq->mem_handle); sq->entries = NULL; size = ADMIN_CQ_SIZE(admin_queue->q_depth); if (cq->entries) ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, cq->entries, cq->dma_addr, cq->mem_handle); cq->entries = NULL; size = ADMIN_AENQ_SIZE(aenq->q_depth); if (ena_dev->aenq.entries) ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, aenq->entries, aenq->dma_addr, aenq->mem_handle); aenq->entries = NULL; ENA_SPINLOCK_DESTROY(admin_queue->q_lock); } void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling) { u32 mask_value = 0; if (polling) mask_value = ENA_REGS_ADMIN_INTR_MASK; ENA_REG_WRITE32(ena_dev->bus, mask_value, ena_dev->reg_bar + ENA_REGS_INTR_MASK_OFF); ena_dev->admin_queue.polling = polling; } bool ena_com_get_admin_polling_mode(struct ena_com_dev *ena_dev) { return ena_dev->admin_queue.polling; } void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev, bool polling) { ena_dev->admin_queue.auto_polling = polling; } int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev) { struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read; ENA_SPINLOCK_INIT(mmio_read->lock); ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, sizeof(*mmio_read->read_resp), mmio_read->read_resp, mmio_read->read_resp_dma_addr, mmio_read->read_resp_mem_handle); if (unlikely(!mmio_read->read_resp)) goto err; ena_com_mmio_reg_read_request_write_dev_addr(ena_dev); mmio_read->read_resp->req_id = 0x0; mmio_read->seq_num = 0x0; mmio_read->readless_supported = true; return 0; err: ENA_SPINLOCK_DESTROY(mmio_read->lock); return ENA_COM_NO_MEM; } void ena_com_set_mmio_read_mode(struct ena_com_dev *ena_dev, bool readless_supported) { struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read; mmio_read->readless_supported = readless_supported; } void ena_com_mmio_reg_read_request_destroy(struct ena_com_dev *ena_dev) { struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read; ENA_REG_WRITE32(ena_dev->bus, 0x0, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_LO_OFF); ENA_REG_WRITE32(ena_dev->bus, 0x0, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_HI_OFF); ENA_MEM_FREE_COHERENT(ena_dev->dmadev, sizeof(*mmio_read->read_resp), mmio_read->read_resp, mmio_read->read_resp_dma_addr, mmio_read->read_resp_mem_handle); mmio_read->read_resp = NULL; ENA_SPINLOCK_DESTROY(mmio_read->lock); } void ena_com_mmio_reg_read_request_write_dev_addr(struct ena_com_dev *ena_dev) { struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read; u32 addr_low, addr_high; addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(mmio_read->read_resp_dma_addr); addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(mmio_read->read_resp_dma_addr); ENA_REG_WRITE32(ena_dev->bus, addr_low, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_LO_OFF); 
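/* [Editorial note] read_resp_dma_addr is a full 64-bit DMA address, but
 * the device accepts it only as two 32-bit halves: the low half was
 * written to ENA_REGS_MMIO_RESP_LO_OFF just above, and the high half is
 * written to ENA_REGS_MMIO_RESP_HI_OFF below. */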
ENA_REG_WRITE32(ena_dev->bus, addr_high, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_HI_OFF); } int ena_com_admin_init(struct ena_com_dev *ena_dev, struct ena_aenq_handlers *aenq_handlers) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; u32 aq_caps, acq_caps, dev_sts, addr_low, addr_high; int ret; dev_sts = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF); if (unlikely(dev_sts == ENA_MMIO_READ_TIMEOUT)) { - ena_trc_err("Reg read timeout occurred\n"); + ena_trc_err(ena_dev, "Reg read timeout occurred\n"); return ENA_COM_TIMER_EXPIRED; } if (!(dev_sts & ENA_REGS_DEV_STS_READY_MASK)) { - ena_trc_err("Device isn't ready, abort com init\n"); + ena_trc_err(ena_dev, "Device isn't ready, abort com init\n"); return ENA_COM_NO_DEVICE; } admin_queue->q_depth = ENA_ADMIN_QUEUE_DEPTH; admin_queue->bus = ena_dev->bus; admin_queue->q_dmadev = ena_dev->dmadev; admin_queue->polling = false; admin_queue->curr_cmd_id = 0; ATOMIC32_SET(&admin_queue->outstanding_cmds, 0); ENA_SPINLOCK_INIT(admin_queue->q_lock); ret = ena_com_init_comp_ctxt(admin_queue); if (ret) goto error; ret = ena_com_admin_init_sq(admin_queue); if (ret) goto error; ret = ena_com_admin_init_cq(admin_queue); if (ret) goto error; admin_queue->sq.db_addr = (u32 __iomem *)((uintptr_t)ena_dev->reg_bar + ENA_REGS_AQ_DB_OFF); addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(admin_queue->sq.dma_addr); addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(admin_queue->sq.dma_addr); ENA_REG_WRITE32(ena_dev->bus, addr_low, ena_dev->reg_bar + ENA_REGS_AQ_BASE_LO_OFF); ENA_REG_WRITE32(ena_dev->bus, addr_high, ena_dev->reg_bar + ENA_REGS_AQ_BASE_HI_OFF); addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(admin_queue->cq.dma_addr); addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(admin_queue->cq.dma_addr); ENA_REG_WRITE32(ena_dev->bus, addr_low, ena_dev->reg_bar + ENA_REGS_ACQ_BASE_LO_OFF); ENA_REG_WRITE32(ena_dev->bus, addr_high, ena_dev->reg_bar + ENA_REGS_ACQ_BASE_HI_OFF); aq_caps = 0; aq_caps |= admin_queue->q_depth & ENA_REGS_AQ_CAPS_AQ_DEPTH_MASK; aq_caps |= (sizeof(struct ena_admin_aq_entry) << ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_SHIFT) & ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_MASK; acq_caps = 0; acq_caps |= admin_queue->q_depth & ENA_REGS_ACQ_CAPS_ACQ_DEPTH_MASK; acq_caps |= (sizeof(struct ena_admin_acq_entry) << ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_SHIFT) & ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_MASK; ENA_REG_WRITE32(ena_dev->bus, aq_caps, ena_dev->reg_bar + ENA_REGS_AQ_CAPS_OFF); ENA_REG_WRITE32(ena_dev->bus, acq_caps, ena_dev->reg_bar + ENA_REGS_ACQ_CAPS_OFF); ret = ena_com_admin_init_aenq(ena_dev, aenq_handlers); if (ret) goto error; admin_queue->ena_dev = ena_dev; admin_queue->running_state = true; return 0; error: ena_com_admin_destroy(ena_dev); return ret; } int ena_com_create_io_queue(struct ena_com_dev *ena_dev, struct ena_com_create_io_ctx *ctx) { struct ena_com_io_sq *io_sq; struct ena_com_io_cq *io_cq; int ret; if (ctx->qid >= ENA_TOTAL_NUM_QUEUES) { - ena_trc_err("Qid (%d) is bigger than max num of queues (%d)\n", + ena_trc_err(ena_dev, "Qid (%d) is bigger than max num of queues (%d)\n", ctx->qid, ENA_TOTAL_NUM_QUEUES); return ENA_COM_INVAL; } io_sq = &ena_dev->io_sq_queues[ctx->qid]; io_cq = &ena_dev->io_cq_queues[ctx->qid]; memset(io_sq, 0x0, sizeof(*io_sq)); memset(io_cq, 0x0, sizeof(*io_cq)); /* Init CQ */ io_cq->q_depth = ctx->queue_size; io_cq->direction = ctx->direction; io_cq->qid = ctx->qid; io_cq->msix_vector = ctx->msix_vector; io_sq->q_depth = ctx->queue_size; io_sq->direction = ctx->direction; io_sq->qid = ctx->qid; io_sq->mem_queue_type = ctx->mem_queue_type; if 
(ctx->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) /* header length is limited to 8 bits */ io_sq->tx_max_header_size = ENA_MIN32(ena_dev->tx_max_header_size, SZ_256); ret = ena_com_init_io_sq(ena_dev, ctx, io_sq); if (ret) goto error; ret = ena_com_init_io_cq(ena_dev, ctx, io_cq); if (ret) goto error; ret = ena_com_create_io_cq(ena_dev, io_cq); if (ret) goto error; ret = ena_com_create_io_sq(ena_dev, io_sq, io_cq->idx); if (ret) goto destroy_io_cq; return 0; destroy_io_cq: ena_com_destroy_io_cq(ena_dev, io_cq); error: ena_com_io_queue_free(ena_dev, io_sq, io_cq); return ret; } void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid) { struct ena_com_io_sq *io_sq; struct ena_com_io_cq *io_cq; if (qid >= ENA_TOTAL_NUM_QUEUES) { - ena_trc_err("Qid (%d) is bigger than max num of queues (%d)\n", + ena_trc_err(ena_dev, "Qid (%d) is bigger than max num of queues (%d)\n", qid, ENA_TOTAL_NUM_QUEUES); return; } io_sq = &ena_dev->io_sq_queues[qid]; io_cq = &ena_dev->io_cq_queues[qid]; ena_com_destroy_io_sq(ena_dev, io_sq); ena_com_destroy_io_cq(ena_dev, io_cq); ena_com_io_queue_free(ena_dev, io_sq, io_cq); } int ena_com_get_link_params(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *resp) { return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG, 0); } int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, struct ena_com_dev_get_features_ctx *get_feat_ctx) { struct ena_admin_get_feat_resp get_resp; int rc; rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_DEVICE_ATTRIBUTES, 0); if (rc) return rc; memcpy(&get_feat_ctx->dev_attr, &get_resp.u.dev_attr, sizeof(get_resp.u.dev_attr)); + ena_dev->supported_features = get_resp.u.dev_attr.supported_features; if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) { rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_MAX_QUEUES_EXT, ENA_FEATURE_MAX_QUEUE_EXT_VER); if (rc) return rc; if (get_resp.u.max_queue_ext.version != ENA_FEATURE_MAX_QUEUE_EXT_VER) return -EINVAL; memcpy(&get_feat_ctx->max_queue_ext, &get_resp.u.max_queue_ext, sizeof(get_resp.u.max_queue_ext)); ena_dev->tx_max_header_size = get_resp.u.max_queue_ext.max_queue_ext.max_tx_header_size; } else { rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_MAX_QUEUES_NUM, 0); memcpy(&get_feat_ctx->max_queues, &get_resp.u.max_queue, sizeof(get_resp.u.max_queue)); ena_dev->tx_max_header_size = get_resp.u.max_queue.max_header_size; if (rc) return rc; } rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG, 0); if (rc) return rc; memcpy(&get_feat_ctx->aenq, &get_resp.u.aenq, sizeof(get_resp.u.aenq)); rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0); if (rc) return rc; memcpy(&get_feat_ctx->offload, &get_resp.u.offload, sizeof(get_resp.u.offload)); /* Driver hints isn't mandatory admin command. 
So in case the * command isn't supported set driver hints to 0 */ rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS, 0); if (!rc) memcpy(&get_feat_ctx->hw_hints, &get_resp.u.hw_hints, sizeof(get_resp.u.hw_hints)); else if (rc == ENA_COM_UNSUPPORTED) memset(&get_feat_ctx->hw_hints, 0x0, sizeof(get_feat_ctx->hw_hints)); else return rc; rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ, 0); if (!rc) memcpy(&get_feat_ctx->llq, &get_resp.u.llq, sizeof(get_resp.u.llq)); else if (rc == ENA_COM_UNSUPPORTED) memset(&get_feat_ctx->llq, 0x0, sizeof(get_feat_ctx->llq)); else return rc; - rc = ena_com_get_feature(ena_dev, &get_resp, - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, 0); - if (!rc) - memcpy(&get_feat_ctx->ind_table, &get_resp.u.ind_table, - sizeof(get_resp.u.ind_table)); - else if (rc == ENA_COM_UNSUPPORTED) - memset(&get_feat_ctx->ind_table, 0x0, - sizeof(get_feat_ctx->ind_table)); - else - return rc; - return 0; } void ena_com_admin_q_comp_intr_handler(struct ena_com_dev *ena_dev) { ena_com_handle_admin_completion(&ena_dev->admin_queue); } /* ena_handle_specific_aenq_event: * return the handler that is relevant to the specific event group */ -static ena_aenq_handler ena_com_get_specific_aenq_cb(struct ena_com_dev *dev, +static ena_aenq_handler ena_com_get_specific_aenq_cb(struct ena_com_dev *ena_dev, u16 group) { - struct ena_aenq_handlers *aenq_handlers = dev->aenq.aenq_handlers; + struct ena_aenq_handlers *aenq_handlers = ena_dev->aenq.aenq_handlers; if ((group < ENA_MAX_HANDLERS) && aenq_handlers->handlers[group]) return aenq_handlers->handlers[group]; return aenq_handlers->unimplemented_handler; } /* ena_aenq_intr_handler: * handles the aenq incoming events. * pop events from the queue and apply the specific handler */ -void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data) +void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data) { struct ena_admin_aenq_entry *aenq_e; struct ena_admin_aenq_common_desc *aenq_common; - struct ena_com_aenq *aenq = &dev->aenq; + struct ena_com_aenq *aenq = &ena_dev->aenq; u64 timestamp; ena_aenq_handler handler_cb; u16 masked_head, processed = 0; u8 phase; masked_head = aenq->head & (aenq->q_depth - 1); phase = aenq->phase; aenq_e = &aenq->entries[masked_head]; /* Get first entry */ aenq_common = &aenq_e->aenq_common_desc; /* Go over all the events */ while ((READ_ONCE8(aenq_common->flags) & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == phase) { /* Make sure the phase bit (ownership) is as expected before * reading the rest of the descriptor. */ dma_rmb(); timestamp = (u64)aenq_common->timestamp_low | ((u64)aenq_common->timestamp_high << 32); - ena_trc_dbg("AENQ! Group[%x] Syndrom[%x] timestamp: [%" ENA_PRIu64 "s]\n", + + ena_trc_dbg(ena_dev, "AENQ! 
Group[%x] Syndrome[%x] timestamp: [%" ENA_PRIu64 "s]\n", aenq_common->group, - aenq_common->syndrom, + aenq_common->syndrome, timestamp); /* Handle specific event*/ - handler_cb = ena_com_get_specific_aenq_cb(dev, + handler_cb = ena_com_get_specific_aenq_cb(ena_dev, aenq_common->group); handler_cb(data, aenq_e); /* call the actual event handler*/ /* Get next event entry */ masked_head++; processed++; if (unlikely(masked_head == aenq->q_depth)) { masked_head = 0; phase = !phase; } aenq_e = &aenq->entries[masked_head]; aenq_common = &aenq_e->aenq_common_desc; } aenq->head += processed; aenq->phase = phase; /* Don't update aenq doorbell if there weren't any processed events */ if (!processed) return; /* write the aenq doorbell after all AENQ descriptors were read */ mb(); - ENA_REG_WRITE32_RELAXED(dev->bus, (u32)aenq->head, - dev->reg_bar + ENA_REGS_AENQ_HEAD_DB_OFF); + ENA_REG_WRITE32_RELAXED(ena_dev->bus, (u32)aenq->head, + ena_dev->reg_bar + ENA_REGS_AENQ_HEAD_DB_OFF); mmiowb(); } #ifdef ENA_EXTENDED_STATS /* * Sets the function Idx and Queue Idx to be used for * get full statistics feature * */ int ena_com_extended_stats_set_func_queue(struct ena_com_dev *ena_dev, u32 func_queue) { /* Function & Queue is acquired from user in the following format : * Bottom Half word: funct * Top Half Word: queue */ ena_dev->stats_func = ENA_EXTENDED_STAT_GET_FUNCT(func_queue); ena_dev->stats_queue = ENA_EXTENDED_STAT_GET_QUEUE(func_queue); return 0; } #endif /* ENA_EXTENDED_STATS */ int ena_com_dev_reset(struct ena_com_dev *ena_dev, enum ena_regs_reset_reason_types reset_reason) { u32 stat, timeout, cap, reset_val; int rc; stat = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF); cap = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CAPS_OFF); if (unlikely((stat == ENA_MMIO_READ_TIMEOUT) || (cap == ENA_MMIO_READ_TIMEOUT))) { - ena_trc_err("Reg read32 timeout occurred\n"); + ena_trc_err(ena_dev, "Reg read32 timeout occurred\n"); return ENA_COM_TIMER_EXPIRED; } if ((stat & ENA_REGS_DEV_STS_READY_MASK) == 0) { - ena_trc_err("Device isn't ready, can't reset device\n"); + ena_trc_err(ena_dev, "Device isn't ready, can't reset device\n"); return ENA_COM_INVAL; } timeout = (cap & ENA_REGS_CAPS_RESET_TIMEOUT_MASK) >> ENA_REGS_CAPS_RESET_TIMEOUT_SHIFT; if (timeout == 0) { - ena_trc_err("Invalid timeout value\n"); + ena_trc_err(ena_dev, "Invalid timeout value\n"); return ENA_COM_INVAL; } /* start reset */ reset_val = ENA_REGS_DEV_CTL_DEV_RESET_MASK; reset_val |= (reset_reason << ENA_REGS_DEV_CTL_RESET_REASON_SHIFT) & ENA_REGS_DEV_CTL_RESET_REASON_MASK; ENA_REG_WRITE32(ena_dev->bus, reset_val, ena_dev->reg_bar + ENA_REGS_DEV_CTL_OFF); /* Write again the MMIO read request address */ ena_com_mmio_reg_read_request_write_dev_addr(ena_dev); rc = wait_for_reset_state(ena_dev, timeout, ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK); if (rc != 0) { - ena_trc_err("Reset indication didn't turn on\n"); + ena_trc_err(ena_dev, "Reset indication didn't turn on\n"); return rc; } /* reset done */ ENA_REG_WRITE32(ena_dev->bus, 0, ena_dev->reg_bar + ENA_REGS_DEV_CTL_OFF); rc = wait_for_reset_state(ena_dev, timeout, 0); if (rc != 0) { - ena_trc_err("Reset indication didn't turn off\n"); + ena_trc_err(ena_dev, "Reset indication didn't turn off\n"); return rc; } timeout = (cap & ENA_REGS_CAPS_ADMIN_CMD_TO_MASK) >> ENA_REGS_CAPS_ADMIN_CMD_TO_SHIFT; if (timeout) /* the resolution of timeout reg is 100ms */ ena_dev->admin_queue.completion_timeout = timeout * 100000; else ena_dev->admin_queue.completion_timeout = ADMIN_CMD_TIMEOUT_US; return 0; } 
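/* [Editorial sketch, not part of this change] A minimal example of how a
 * caller might drive ena_com_dev_reset() together with the admin-queue
 * teardown helpers shown earlier. The function name, the ordering, and
 * the generic reset reason are illustrative assumptions, not code from
 * this diff.
 */
static void example_recover_device(struct ena_com_dev *ena_dev)
{
	/* Refuse new admin commands, then flush the outstanding ones. */
	ena_com_set_admin_running_state(ena_dev, false);
	ena_com_abort_admin_commands(ena_dev);
	ena_com_wait_for_abort_completion(ena_dev);

	/* ena_com_dev_reset() (above) encodes the reason into DEV_CTL and
	 * polls DEV_STS until the reset indication turns on and back off.
	 */
	if (ena_com_dev_reset(ena_dev, ENA_REGS_RESET_GENERIC) != 0)
		return; /* device never acknowledged the reset */
}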
static int ena_get_dev_stats(struct ena_com_dev *ena_dev, struct ena_com_stats_ctx *ctx, enum ena_admin_get_stats_type type) { struct ena_admin_aq_get_stats_cmd *get_cmd = &ctx->get_cmd; struct ena_admin_acq_get_stats_resp *get_resp = &ctx->get_resp; struct ena_com_admin_queue *admin_queue; int ret; admin_queue = &ena_dev->admin_queue; get_cmd->aq_common_descriptor.opcode = ENA_ADMIN_GET_STATS; get_cmd->aq_common_descriptor.flags = 0; get_cmd->type = type; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)get_cmd, sizeof(*get_cmd), (struct ena_admin_acq_entry *)get_resp, sizeof(*get_resp)); if (unlikely(ret)) - ena_trc_err("Failed to get stats. error: %d\n", ret); + ena_trc_err(ena_dev, "Failed to get stats. error: %d\n", ret); return ret; } +int ena_com_get_eni_stats(struct ena_com_dev *ena_dev, + struct ena_admin_eni_stats *stats) +{ + struct ena_com_stats_ctx ctx; + int ret; + + memset(&ctx, 0x0, sizeof(ctx)); + ret = ena_get_dev_stats(ena_dev, &ctx, ENA_ADMIN_GET_STATS_TYPE_ENI); + if (likely(ret == 0)) + memcpy(stats, &ctx.get_resp.u.eni_stats, + sizeof(ctx.get_resp.u.eni_stats)); + + return ret; +} + int ena_com_get_dev_basic_stats(struct ena_com_dev *ena_dev, struct ena_admin_basic_stats *stats) { struct ena_com_stats_ctx ctx; int ret; memset(&ctx, 0x0, sizeof(ctx)); ret = ena_get_dev_stats(ena_dev, &ctx, ENA_ADMIN_GET_STATS_TYPE_BASIC); if (likely(ret == 0)) - memcpy(stats, &ctx.get_resp.basic_stats, - sizeof(ctx.get_resp.basic_stats)); + memcpy(stats, &ctx.get_resp.u.basic_stats, + sizeof(ctx.get_resp.u.basic_stats)); return ret; } #ifdef ENA_EXTENDED_STATS int ena_com_get_dev_extended_stats(struct ena_com_dev *ena_dev, char *buff, u32 len) { struct ena_com_stats_ctx ctx; struct ena_admin_aq_get_stats_cmd *get_cmd = &ctx.get_cmd; ena_mem_handle_t mem_handle; void *virt_addr; dma_addr_t phys_addr; int ret; ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, len, virt_addr, phys_addr, mem_handle); if (!virt_addr) { ret = ENA_COM_NO_MEM; goto done; } memset(&ctx, 0x0, sizeof(ctx)); ret = ena_com_mem_addr_set(ena_dev, &get_cmd->u.control_buffer.address, phys_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); goto free_ext_stats_mem; } get_cmd->u.control_buffer.length = len; get_cmd->device_id = ena_dev->stats_func; get_cmd->queue_idx = ena_dev->stats_queue; ret = ena_get_dev_stats(ena_dev, &ctx, ENA_ADMIN_GET_STATS_TYPE_EXTENDED); if (ret < 0) goto free_ext_stats_mem; ret = snprintf(buff, len, "%s", (char *)virt_addr); free_ext_stats_mem: ENA_MEM_FREE_COHERENT(ena_dev->dmadev, len, virt_addr, phys_addr, mem_handle); done: return ret; } #endif int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu) { struct ena_com_admin_queue *admin_queue; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; int ret; if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_MTU)) { - ena_trc_dbg("Feature %d isn't supported\n", ENA_ADMIN_MTU); + ena_trc_dbg(ena_dev, "Feature %d isn't supported\n", ENA_ADMIN_MTU); return ENA_COM_UNSUPPORTED; } memset(&cmd, 0x0, sizeof(cmd)); admin_queue = &ena_dev->admin_queue; cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.aq_common_descriptor.flags = 0; cmd.feat_common.feature_id = ENA_ADMIN_MTU; - cmd.u.mtu.mtu = mtu; + cmd.u.mtu.mtu = (u32)mtu; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) - ena_trc_err("Failed to 
set mtu %d. error: %d\n", mtu, ret); + ena_trc_err(ena_dev, "Failed to set mtu %d. error: %d\n", mtu, ret); return ret; } int ena_com_get_offload_settings(struct ena_com_dev *ena_dev, struct ena_admin_feature_offload_desc *offload) { int ret; struct ena_admin_get_feat_resp resp; ret = ena_com_get_feature(ena_dev, &resp, ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0); if (unlikely(ret)) { - ena_trc_err("Failed to get offload capabilities %d\n", ret); + ena_trc_err(ena_dev, "Failed to get offload capabilities %d\n", ret); return ret; } memcpy(offload, &resp.u.offload, sizeof(resp.u.offload)); return 0; } int ena_com_set_hash_function(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_rss *rss = &ena_dev->rss; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; struct ena_admin_get_feat_resp get_resp; int ret; if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_FUNCTION)) { - ena_trc_dbg("Feature %d isn't supported\n", + ena_trc_dbg(ena_dev, "Feature %d isn't supported\n", ENA_ADMIN_RSS_HASH_FUNCTION); return ENA_COM_UNSUPPORTED; } /* Validate hash function is supported */ ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_RSS_HASH_FUNCTION, 0); if (unlikely(ret)) return ret; if (!(get_resp.u.flow_hash_func.supported_func & BIT(rss->hash_func))) { - ena_trc_err("Func hash %d isn't supported by device, abort\n", + ena_trc_err(ena_dev, "Func hash %d isn't supported by device, abort\n", rss->hash_func); return ENA_COM_UNSUPPORTED; } memset(&cmd, 0x0, sizeof(cmd)); cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.aq_common_descriptor.flags = ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK; cmd.feat_common.feature_id = ENA_ADMIN_RSS_HASH_FUNCTION; cmd.u.flow_hash_func.init_val = rss->hash_init_val; cmd.u.flow_hash_func.selected_func = 1 << rss->hash_func; ret = ena_com_mem_addr_set(ena_dev, &cmd.control_buffer.address, rss->hash_key_dma_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } cmd.control_buffer.length = sizeof(*rss->hash_key); ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) { - ena_trc_err("Failed to set hash function %d. error: %d\n", + ena_trc_err(ena_dev, "Failed to set hash function %d. 
error: %d\n", rss->hash_func, ret); return ENA_COM_INVAL; } return 0; } int ena_com_fill_hash_function(struct ena_com_dev *ena_dev, enum ena_admin_hash_functions func, const u8 *key, u16 key_len, u32 init_val) { struct ena_admin_feature_rss_flow_hash_control *hash_key; struct ena_admin_get_feat_resp get_resp; enum ena_admin_hash_functions old_func; struct ena_rss *rss = &ena_dev->rss; int rc; hash_key = rss->hash_key; /* Make sure size is a mult of DWs */ if (unlikely(key_len & 0x3)) return ENA_COM_INVAL; rc = ena_com_get_feature_ex(ena_dev, &get_resp, ENA_ADMIN_RSS_HASH_FUNCTION, rss->hash_key_dma_addr, sizeof(*rss->hash_key), 0); if (unlikely(rc)) return rc; if (!(BIT(func) & get_resp.u.flow_hash_func.supported_func)) { - ena_trc_err("Flow hash function %d isn't supported\n", func); + ena_trc_err(ena_dev, "Flow hash function %d isn't supported\n", func); return ENA_COM_UNSUPPORTED; } switch (func) { case ENA_ADMIN_TOEPLITZ: if (key) { if (key_len != sizeof(hash_key->key)) { - ena_trc_err("key len (%hu) doesn't equal the supported size (%zu)\n", + ena_trc_err(ena_dev, "key len (%hu) doesn't equal the supported size (%zu)\n", key_len, sizeof(hash_key->key)); return ENA_COM_INVAL; } memcpy(hash_key->key, key, key_len); rss->hash_init_val = init_val; - hash_key->keys_num = key_len / sizeof(hash_key->key[0]); + hash_key->key_parts = key_len / sizeof(hash_key->key[0]); } break; case ENA_ADMIN_CRC32: rss->hash_init_val = init_val; break; default: - ena_trc_err("Invalid hash function (%d)\n", func); + ena_trc_err(ena_dev, "Invalid hash function (%d)\n", func); return ENA_COM_INVAL; } old_func = rss->hash_func; rss->hash_func = func; rc = ena_com_set_hash_function(ena_dev); /* Restore the old function */ if (unlikely(rc)) rss->hash_func = old_func; return rc; } int ena_com_get_hash_function(struct ena_com_dev *ena_dev, enum ena_admin_hash_functions *func) { struct ena_rss *rss = &ena_dev->rss; struct ena_admin_get_feat_resp get_resp; int rc; if (unlikely(!func)) return ENA_COM_INVAL; rc = ena_com_get_feature_ex(ena_dev, &get_resp, ENA_ADMIN_RSS_HASH_FUNCTION, rss->hash_key_dma_addr, sizeof(*rss->hash_key), 0); if (unlikely(rc)) return rc; /* ENA_FFS() returns 1 in case the lsb is set */ rss->hash_func = ENA_FFS(get_resp.u.flow_hash_func.selected_func); if (rss->hash_func) rss->hash_func--; *func = rss->hash_func; return 0; } int ena_com_get_hash_key(struct ena_com_dev *ena_dev, u8 *key) { struct ena_admin_feature_rss_flow_hash_control *hash_key = ena_dev->rss.hash_key; if (key) - memcpy(key, hash_key->key, (size_t)(hash_key->keys_num) << 2); + memcpy(key, hash_key->key, + (size_t)(hash_key->key_parts) * sizeof(hash_key->key[0])); return 0; } int ena_com_get_hash_ctrl(struct ena_com_dev *ena_dev, enum ena_admin_flow_hash_proto proto, u16 *fields) { struct ena_rss *rss = &ena_dev->rss; struct ena_admin_get_feat_resp get_resp; int rc; rc = ena_com_get_feature_ex(ena_dev, &get_resp, ENA_ADMIN_RSS_HASH_INPUT, rss->hash_ctrl_dma_addr, sizeof(*rss->hash_ctrl), 0); if (unlikely(rc)) return rc; if (fields) *fields = rss->hash_ctrl->selected_fields[proto].fields; return 0; } int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_rss *rss = &ena_dev->rss; struct ena_admin_feature_rss_hash_control *hash_ctrl = rss->hash_ctrl; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; int ret; if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_INPUT)) { - ena_trc_dbg("Feature %d isn't supported\n", + 
ena_trc_dbg(ena_dev, "Feature %d isn't supported\n", ENA_ADMIN_RSS_HASH_INPUT); return ENA_COM_UNSUPPORTED; } memset(&cmd, 0x0, sizeof(cmd)); cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.aq_common_descriptor.flags = ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK; cmd.feat_common.feature_id = ENA_ADMIN_RSS_HASH_INPUT; cmd.u.flow_hash_input.enabled_input_sort = ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK | ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK; ret = ena_com_mem_addr_set(ena_dev, &cmd.control_buffer.address, rss->hash_ctrl_dma_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } cmd.control_buffer.length = sizeof(*hash_ctrl); ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) - ena_trc_err("Failed to set hash input. error: %d\n", ret); + ena_trc_err(ena_dev, "Failed to set hash input. error: %d\n", ret); return ret; } int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev) { struct ena_rss *rss = &ena_dev->rss; struct ena_admin_feature_rss_hash_control *hash_ctrl = rss->hash_ctrl; u16 available_fields = 0; int rc, i; /* Get the supported hash input */ rc = ena_com_get_hash_ctrl(ena_dev, 0, NULL); if (unlikely(rc)) return rc; hash_ctrl->selected_fields[ENA_ADMIN_RSS_TCP4].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA | ENA_ADMIN_RSS_L4_DP | ENA_ADMIN_RSS_L4_SP; hash_ctrl->selected_fields[ENA_ADMIN_RSS_UDP4].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA | ENA_ADMIN_RSS_L4_DP | ENA_ADMIN_RSS_L4_SP; hash_ctrl->selected_fields[ENA_ADMIN_RSS_TCP6].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA | ENA_ADMIN_RSS_L4_DP | ENA_ADMIN_RSS_L4_SP; hash_ctrl->selected_fields[ENA_ADMIN_RSS_UDP6].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA | ENA_ADMIN_RSS_L4_DP | ENA_ADMIN_RSS_L4_SP; hash_ctrl->selected_fields[ENA_ADMIN_RSS_IP4].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA; hash_ctrl->selected_fields[ENA_ADMIN_RSS_IP6].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA; hash_ctrl->selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA; hash_ctrl->selected_fields[ENA_ADMIN_RSS_NOT_IP].fields = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA; for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; i++) { available_fields = hash_ctrl->selected_fields[i].fields & hash_ctrl->supported_fields[i].fields; if (available_fields != hash_ctrl->selected_fields[i].fields) { - ena_trc_err("hash control doesn't support all the desire configuration. proto %x supported %x selected %x\n", + ena_trc_err(ena_dev, "Hash control doesn't support all the desire configuration. 
proto %x supported %x selected %x\n", i, hash_ctrl->supported_fields[i].fields, hash_ctrl->selected_fields[i].fields); return ENA_COM_UNSUPPORTED; } } rc = ena_com_set_hash_ctrl(ena_dev); /* In case of failure, restore the old hash ctrl */ if (unlikely(rc)) ena_com_get_hash_ctrl(ena_dev, 0, NULL); return rc; } int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev, enum ena_admin_flow_hash_proto proto, u16 hash_fields) { struct ena_rss *rss = &ena_dev->rss; struct ena_admin_feature_rss_hash_control *hash_ctrl = rss->hash_ctrl; u16 supported_fields; int rc; if (proto >= ENA_ADMIN_RSS_PROTO_NUM) { - ena_trc_err("Invalid proto num (%u)\n", proto); + ena_trc_err(ena_dev, "Invalid proto num (%u)\n", proto); return ENA_COM_INVAL; } /* Get the ctrl table */ rc = ena_com_get_hash_ctrl(ena_dev, proto, NULL); if (unlikely(rc)) return rc; /* Make sure all the fields are supported */ supported_fields = hash_ctrl->supported_fields[proto].fields; if ((hash_fields & supported_fields) != hash_fields) { - ena_trc_err("proto %d doesn't support the required fields %x. supports only: %x\n", + ena_trc_err(ena_dev, "Proto %d doesn't support the required fields %x. supports only: %x\n", proto, hash_fields, supported_fields); } hash_ctrl->selected_fields[proto].fields = hash_fields; rc = ena_com_set_hash_ctrl(ena_dev); /* In case of failure, restore the old hash ctrl */ if (unlikely(rc)) ena_com_get_hash_ctrl(ena_dev, 0, NULL); return 0; } int ena_com_indirect_table_fill_entry(struct ena_com_dev *ena_dev, u16 entry_idx, u16 entry_value) { struct ena_rss *rss = &ena_dev->rss; if (unlikely(entry_idx >= (1 << rss->tbl_log_size))) return ENA_COM_INVAL; if (unlikely((entry_value > ENA_TOTAL_NUM_QUEUES))) return ENA_COM_INVAL; rss->host_rss_ind_tbl[entry_idx] = entry_value; return 0; } int ena_com_indirect_table_set(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_rss *rss = &ena_dev->rss; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; int ret; if (!ena_com_check_supported_feature_id(ena_dev, - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG)) { - ena_trc_dbg("Feature %d isn't supported\n", - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG); + ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG)) { + ena_trc_dbg(ena_dev, "Feature %d isn't supported\n", + ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG); return ENA_COM_UNSUPPORTED; } ret = ena_com_ind_tbl_convert_to_device(ena_dev); if (ret) { - ena_trc_err("Failed to convert host indirection table to device table\n"); + ena_trc_err(ena_dev, "Failed to convert host indirection table to device table\n"); return ret; } memset(&cmd, 0x0, sizeof(cmd)); cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.aq_common_descriptor.flags = ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK; - cmd.feat_common.feature_id = ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG; + cmd.feat_common.feature_id = ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG; cmd.u.ind_table.size = rss->tbl_log_size; cmd.u.ind_table.inline_index = 0xFFFFFFFF; ret = ena_com_mem_addr_set(ena_dev, &cmd.control_buffer.address, rss->rss_ind_tbl_dma_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } - cmd.control_buffer.length = (1ULL << rss->tbl_log_size) * + cmd.control_buffer.length = (u32)(1ULL << rss->tbl_log_size) * sizeof(struct ena_admin_rss_ind_table_entry); ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry 
*)&resp, sizeof(resp)); if (unlikely(ret)) - ena_trc_err("Failed to set indirect table. error: %d\n", ret); + ena_trc_err(ena_dev, "Failed to set indirect table. error: %d\n", ret); return ret; } int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl) { struct ena_rss *rss = &ena_dev->rss; struct ena_admin_get_feat_resp get_resp; u32 tbl_size; int i, rc; - tbl_size = (1ULL << rss->tbl_log_size) * + tbl_size = (u32)(1ULL << rss->tbl_log_size) * sizeof(struct ena_admin_rss_ind_table_entry); rc = ena_com_get_feature_ex(ena_dev, &get_resp, - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, + ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG, rss->rss_ind_tbl_dma_addr, tbl_size, 0); if (unlikely(rc)) return rc; if (!ind_tbl) return 0; for (i = 0; i < (1 << rss->tbl_log_size); i++) ind_tbl[i] = rss->host_rss_ind_tbl[i]; return 0; } int ena_com_rss_init(struct ena_com_dev *ena_dev, u16 indr_tbl_log_size) { int rc; memset(&ena_dev->rss, 0x0, sizeof(ena_dev->rss)); rc = ena_com_indirect_table_allocate(ena_dev, indr_tbl_log_size); if (unlikely(rc)) goto err_indr_tbl; /* The following function might return unsupported in case the * device doesn't support setting the key / hash function. We can safely * ignore this error and have indirection table support only. */ rc = ena_com_hash_key_allocate(ena_dev); if (likely(!rc)) ena_com_hash_key_fill_default_key(ena_dev); else if (rc != ENA_COM_UNSUPPORTED) goto err_hash_key; rc = ena_com_hash_ctrl_init(ena_dev); if (unlikely(rc)) goto err_hash_ctrl; return 0; err_hash_ctrl: ena_com_hash_key_destroy(ena_dev); err_hash_key: ena_com_indirect_table_destroy(ena_dev); err_indr_tbl: return rc; } void ena_com_rss_destroy(struct ena_com_dev *ena_dev) { ena_com_indirect_table_destroy(ena_dev); ena_com_hash_key_destroy(ena_dev); ena_com_hash_ctrl_destroy(ena_dev); memset(&ena_dev->rss, 0x0, sizeof(ena_dev->rss)); } int ena_com_allocate_host_info(struct ena_com_dev *ena_dev) { struct ena_host_attribute *host_attr = &ena_dev->host_attr; ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, SZ_4K, host_attr->host_info, host_attr->host_info_dma_addr, host_attr->host_info_dma_handle); if (unlikely(!host_attr->host_info)) return ENA_COM_NO_MEM; host_attr->host_info->ena_spec_version = ((ENA_COMMON_SPEC_VERSION_MAJOR << ENA_REGS_VERSION_MAJOR_VERSION_SHIFT) | (ENA_COMMON_SPEC_VERSION_MINOR)); return 0; } int ena_com_allocate_debug_area(struct ena_com_dev *ena_dev, u32 debug_area_size) { struct ena_host_attribute *host_attr = &ena_dev->host_attr; ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, debug_area_size, host_attr->debug_area_virt_addr, host_attr->debug_area_dma_addr, host_attr->debug_area_dma_handle); if (unlikely(!host_attr->debug_area_virt_addr)) { host_attr->debug_area_size = 0; return ENA_COM_NO_MEM; } host_attr->debug_area_size = debug_area_size; return 0; } void ena_com_delete_host_info(struct ena_com_dev *ena_dev) { struct ena_host_attribute *host_attr = &ena_dev->host_attr; if (host_attr->host_info) { ENA_MEM_FREE_COHERENT(ena_dev->dmadev, SZ_4K, host_attr->host_info, host_attr->host_info_dma_addr, host_attr->host_info_dma_handle); host_attr->host_info = NULL; } } void ena_com_delete_debug_area(struct ena_com_dev *ena_dev) { struct ena_host_attribute *host_attr = &ena_dev->host_attr; if (host_attr->debug_area_virt_addr) { ENA_MEM_FREE_COHERENT(ena_dev->dmadev, host_attr->debug_area_size, host_attr->debug_area_virt_addr, host_attr->debug_area_dma_addr, host_attr->debug_area_dma_handle); host_attr->debug_area_virt_addr = NULL; } } int ena_com_set_host_attributes(struct ena_com_dev 
*ena_dev) { struct ena_host_attribute *host_attr = &ena_dev->host_attr; struct ena_com_admin_queue *admin_queue; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; int ret; /* Host attribute config is called before ena_com_get_dev_attr_feat * so ena_com can't check if the feature is supported. */ memset(&cmd, 0x0, sizeof(cmd)); admin_queue = &ena_dev->admin_queue; cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.feat_common.feature_id = ENA_ADMIN_HOST_ATTR_CONFIG; ret = ena_com_mem_addr_set(ena_dev, &cmd.u.host_attr.debug_ba, host_attr->debug_area_dma_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } ret = ena_com_mem_addr_set(ena_dev, &cmd.u.host_attr.os_info_ba, host_attr->host_info_dma_addr); if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); + ena_trc_err(ena_dev, "Memory address set failed\n"); return ret; } cmd.u.host_attr.debug_area_size = host_attr->debug_area_size; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) - ena_trc_err("Failed to set host attributes: %d\n", ret); + ena_trc_err(ena_dev, "Failed to set host attributes: %d\n", ret); return ret; } /* Interrupt moderation */ bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev) { return ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_INTERRUPT_MODERATION); } -static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs, +static int ena_com_update_nonadaptive_moderation_interval(struct ena_com_dev *ena_dev, + u32 coalesce_usecs, u32 intr_delay_resolution, u32 *intr_moder_interval) { if (!intr_delay_resolution) { - ena_trc_err("Illegal interrupt delay granularity value\n"); + ena_trc_err(ena_dev, "Illegal interrupt delay granularity value\n"); return ENA_COM_FAULT; } *intr_moder_interval = coalesce_usecs / intr_delay_resolution; return 0; } - int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, u32 tx_coalesce_usecs) { - return ena_com_update_nonadaptive_moderation_interval(tx_coalesce_usecs, + return ena_com_update_nonadaptive_moderation_interval(ena_dev, + tx_coalesce_usecs, ena_dev->intr_delay_resolution, &ena_dev->intr_moder_tx_interval); } int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, u32 rx_coalesce_usecs) { - return ena_com_update_nonadaptive_moderation_interval(rx_coalesce_usecs, + return ena_com_update_nonadaptive_moderation_interval(ena_dev, + rx_coalesce_usecs, ena_dev->intr_delay_resolution, &ena_dev->intr_moder_rx_interval); } int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) { struct ena_admin_get_feat_resp get_resp; u16 delay_resolution; int rc; rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_INTERRUPT_MODERATION, 0); if (rc) { if (rc == ENA_COM_UNSUPPORTED) { - ena_trc_dbg("Feature %d isn't supported\n", + ena_trc_dbg(ena_dev, "Feature %d isn't supported\n", ENA_ADMIN_INTERRUPT_MODERATION); rc = 0; } else { - ena_trc_err("Failed to get interrupt moderation admin cmd. rc: %d\n", - rc); + ena_trc_err(ena_dev, + "Failed to get interrupt moderation admin cmd. 
rc: %d\n", rc); } /* no moderation supported, disable adaptive support */ ena_com_disable_adaptive_moderation(ena_dev); return rc; } /* if moderation is supported by device we set adaptive moderation */ delay_resolution = get_resp.u.intr_moderation.intr_delay_resolution; ena_com_update_intr_delay_resolution(ena_dev, delay_resolution); /* Disable adaptive moderation by default - can be enabled later */ ena_com_disable_adaptive_moderation(ena_dev); return 0; } unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev) { return ena_dev->intr_moder_tx_interval; } unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev) { return ena_dev->intr_moder_rx_interval; } int ena_com_config_dev_mode(struct ena_com_dev *ena_dev, struct ena_admin_feature_llq_desc *llq_features, struct ena_llq_configurations *llq_default_cfg) { struct ena_com_llq_info *llq_info = &ena_dev->llq_info; int rc; if (!llq_features->max_llq_num) { ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return 0; } rc = ena_com_config_llq_info(ena_dev, llq_features, llq_default_cfg); if (rc) return rc; ena_dev->tx_max_header_size = llq_info->desc_list_entry_size - (llq_info->descs_num_before_header * sizeof(struct ena_eth_io_tx_desc)); if (unlikely(ena_dev->tx_max_header_size == 0)) { - ena_trc_err("the size of the LLQ entry is smaller than needed\n"); + ena_trc_err(ena_dev, "The size of the LLQ entry is smaller than needed\n"); return -EINVAL; } ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_DEV; return 0; } Index: stable/12/sys/contrib/ena-com/ena_com.h =================================================================== --- stable/12/sys/contrib/ena-com/ena_com.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_com.h (revision 368012) @@ -1,1031 +1,1062 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef ENA_COM #define ENA_COM #include "ena_plat.h" #define ENA_MAX_NUM_IO_QUEUES 128U /* We need two queues for each IO (one for Tx and one for Rx) */ #define ENA_TOTAL_NUM_QUEUES (2 * (ENA_MAX_NUM_IO_QUEUES)) #define ENA_MAX_HANDLERS 256 #define ENA_MAX_PHYS_ADDR_SIZE_BITS 48 /* Unit in usec */ #define ENA_REG_READ_TIMEOUT 200000 #define ADMIN_SQ_SIZE(depth) ((depth) * sizeof(struct ena_admin_aq_entry)) #define ADMIN_CQ_SIZE(depth) ((depth) * sizeof(struct ena_admin_acq_entry)) #define ADMIN_AENQ_SIZE(depth) ((depth) * sizeof(struct ena_admin_aenq_entry)) +#define ENA_CDESC_RING_SIZE_ALIGNMENT (1 << 12) /* 4K */ + /*****************************************************************************/ /*****************************************************************************/ /* ENA adaptive interrupt moderation settings */ #define ENA_INTR_INITIAL_TX_INTERVAL_USECS ENA_INTR_INITIAL_TX_INTERVAL_USECS_PLAT #define ENA_INTR_INITIAL_RX_INTERVAL_USECS 0 #define ENA_DEFAULT_INTR_DELAY_RESOLUTION 1 #define ENA_HASH_KEY_SIZE 40 #define ENA_HW_HINTS_NO_TIMEOUT 0xFFFF #define ENA_FEATURE_MAX_QUEUE_EXT_VER 1 struct ena_llq_configurations { enum ena_admin_llq_header_location llq_header_location; enum ena_admin_llq_ring_entry_size llq_ring_entry_size; enum ena_admin_llq_stride_ctrl llq_stride_ctrl; enum ena_admin_llq_num_descs_before_header llq_num_decs_before_header; u16 llq_ring_entry_size_value; }; enum queue_direction { ENA_COM_IO_QUEUE_DIRECTION_TX, ENA_COM_IO_QUEUE_DIRECTION_RX }; struct ena_com_buf { dma_addr_t paddr; /**< Buffer physical address */ u16 len; /**< Buffer length in bytes */ }; struct ena_com_rx_buf_info { u16 len; u16 req_id; }; struct ena_com_io_desc_addr { u8 __iomem *pbuf_dev_addr; /* LLQ address */ u8 *virt_addr; dma_addr_t phys_addr; ena_mem_handle_t mem_handle; }; struct ena_com_tx_meta { u16 mss; u16 l3_hdr_len; u16 l3_hdr_offset; u16 l4_hdr_len; /* In words */ }; struct ena_com_llq_info { u16 header_location_ctrl; u16 desc_stride_ctrl; u16 desc_list_entry_size_ctrl; u16 desc_list_entry_size; u16 descs_num_before_header; u16 descs_per_entry; u16 max_entries_in_tx_burst; bool disable_meta_caching; }; struct ena_com_io_cq { struct ena_com_io_desc_addr cdesc_addr; void *bus; /* Interrupt unmask register */ u32 __iomem *unmask_reg; /* The completion queue head doorbell register */ u32 __iomem *cq_head_db_reg; /* numa configuration register (for TPH) */ u32 __iomem *numa_node_cfg_reg; /* The value to write to the above register to unmask * the interrupt of this queue */ u32 msix_vector; enum queue_direction direction; /* holds the number of cdescs of the current packet */ u16 cur_rx_pkt_cdesc_count; /* saves the first cdesc idx of the current packet */ u16 cur_rx_pkt_cdesc_start_idx; u16 q_depth; /* Caller qid */ u16 qid; /* Device queue index */ u16 idx; u16 head; u16 last_head_update; u8 phase; u8 cdesc_entry_size_in_bytes; } ____cacheline_aligned; struct ena_com_io_bounce_buffer_control { u8 *base_buffer; u16 next_to_use; u16 buffer_size; u16 buffers_num; /* Must be a power of 2 */ }; /* This struct keeps track of the current location of the next llq entry */ struct ena_com_llq_pkt_ctrl { u8 *curr_bounce_buf; u16 idx; u16 descs_left_in_line; }; struct ena_com_io_sq { struct ena_com_io_desc_addr desc_addr; void *bus; u32 __iomem *db_addr; u8 __iomem *header_addr; enum queue_direction direction; enum ena_admin_placement_policy_type mem_queue_type; bool disable_meta_caching; u32 msix_vector; struct ena_com_tx_meta cached_tx_meta; struct ena_com_llq_info llq_info; struct
ena_com_llq_pkt_ctrl llq_buf_ctrl; struct ena_com_io_bounce_buffer_control bounce_buf_ctrl; u16 q_depth; u16 qid; u16 idx; u16 tail; u16 next_to_comp; u16 llq_last_copy_tail; u32 tx_max_header_size; u8 phase; u8 desc_entry_size; u8 dma_addr_bits; u16 entries_in_tx_burst_left; } ____cacheline_aligned; struct ena_com_admin_cq { struct ena_admin_acq_entry *entries; ena_mem_handle_t mem_handle; dma_addr_t dma_addr; u16 head; u8 phase; }; struct ena_com_admin_sq { struct ena_admin_aq_entry *entries; ena_mem_handle_t mem_handle; dma_addr_t dma_addr; u32 __iomem *db_addr; u16 head; u16 tail; u8 phase; }; struct ena_com_stats_admin { u64 aborted_cmd; u64 submitted_cmd; u64 completed_cmd; u64 out_of_space; u64 no_completion; }; struct ena_com_admin_queue { void *q_dmadev; void *bus; struct ena_com_dev *ena_dev; ena_spinlock_t q_lock; /* spinlock for the admin queue */ struct ena_comp_ctx *comp_ctx; u32 completion_timeout; u16 q_depth; struct ena_com_admin_cq cq; struct ena_com_admin_sq sq; /* Indicate if the admin queue should poll for completion */ bool polling; /* Define if fallback to polling mode should occur */ bool auto_polling; u16 curr_cmd_id; /* Indicate that the ena was initialized and can * process new admin commands */ bool running_state; /* Count the number of outstanding admin commands */ ena_atomic32_t outstanding_cmds; struct ena_com_stats_admin stats; }; struct ena_aenq_handlers; struct ena_com_aenq { u16 head; u8 phase; struct ena_admin_aenq_entry *entries; dma_addr_t dma_addr; ena_mem_handle_t mem_handle; u16 q_depth; struct ena_aenq_handlers *aenq_handlers; }; struct ena_com_mmio_read { struct ena_admin_ena_mmio_req_read_less_resp *read_resp; dma_addr_t read_resp_dma_addr; ena_mem_handle_t read_resp_mem_handle; u32 reg_read_to; /* in us */ u16 seq_num; bool readless_supported; /* spin lock to ensure a single outstanding read */ ena_spinlock_t lock; }; struct ena_rss { /* Indirect table */ u16 *host_rss_ind_tbl; struct ena_admin_rss_ind_table_entry *rss_ind_tbl; dma_addr_t rss_ind_tbl_dma_addr; ena_mem_handle_t rss_ind_tbl_mem_handle; u16 tbl_log_size; /* Hash key */ enum ena_admin_hash_functions hash_func; struct ena_admin_feature_rss_flow_hash_control *hash_key; dma_addr_t hash_key_dma_addr; ena_mem_handle_t hash_key_mem_handle; u32 hash_init_val; /* Flow Control */ struct ena_admin_feature_rss_hash_control *hash_ctrl; dma_addr_t hash_ctrl_dma_addr; ena_mem_handle_t hash_ctrl_mem_handle; }; struct ena_host_attribute { /* Debug area */ u8 *debug_area_virt_addr; dma_addr_t debug_area_dma_addr; ena_mem_handle_t debug_area_dma_handle; u32 debug_area_size; /* Host information */ struct ena_admin_host_info *host_info; dma_addr_t host_info_dma_addr; ena_mem_handle_t host_info_dma_handle; }; /* Each ena_dev is a PCI function. 
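 * (Editorial clarification: one ena_com_dev instance is created per
 * physical or virtual PCI function, i.e., per PF or VF.)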
*/ struct ena_com_dev { struct ena_com_admin_queue admin_queue; struct ena_com_aenq aenq; struct ena_com_io_cq io_cq_queues[ENA_TOTAL_NUM_QUEUES]; struct ena_com_io_sq io_sq_queues[ENA_TOTAL_NUM_QUEUES]; u8 __iomem *reg_bar; void __iomem *mem_bar; void *dmadev; void *bus; + ena_netdev *net_device; enum ena_admin_placement_policy_type tx_mem_queue_type; u32 tx_max_header_size; u16 stats_func; /* Selected function for extended statistic dump */ u16 stats_queue; /* Selected queue for extended statistic dump */ struct ena_com_mmio_read mmio_read; struct ena_rss rss; u32 supported_features; u32 dma_addr_bits; struct ena_host_attribute host_attr; bool adaptive_coalescing; u16 intr_delay_resolution; /* interrupt moderation intervals are in usec divided by * intr_delay_resolution, which is supplied by the device. */ u32 intr_moder_tx_interval; u32 intr_moder_rx_interval; struct ena_intr_moder_entry *intr_moder_tbl; struct ena_com_llq_info llq_info; u32 ena_min_poll_delay_us; }; struct ena_com_dev_get_features_ctx { struct ena_admin_queue_feature_desc max_queues; struct ena_admin_queue_ext_feature_desc max_queue_ext; struct ena_admin_device_attr_feature_desc dev_attr; struct ena_admin_feature_aenq_desc aenq; struct ena_admin_feature_offload_desc offload; struct ena_admin_ena_hw_hints hw_hints; struct ena_admin_feature_llq_desc llq; - struct ena_admin_feature_rss_ind_table ind_table; }; struct ena_com_create_io_ctx { enum ena_admin_placement_policy_type mem_queue_type; enum queue_direction direction; int numa_node; u32 msix_vector; u16 queue_size; u16 qid; }; typedef void (*ena_aenq_handler)(void *data, struct ena_admin_aenq_entry *aenq_e); /* Holds aenq handlers. Indexed by AENQ event group */ struct ena_aenq_handlers { ena_aenq_handler handlers[ENA_MAX_HANDLERS]; ena_aenq_handler unimplemented_handler; }; /*****************************************************************************/ /*****************************************************************************/ #if defined(__cplusplus) extern "C" { #endif /* ena_com_mmio_reg_read_request_init - Init the mmio reg read mechanism * @ena_dev: ENA communication layer struct * * Initialize the register read mechanism. * * @note: This method must be the first stage in the initialization sequence. * * @return - 0 on success, negative value on failure. */ int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev); /* ena_com_set_mmio_read_mode - Enable/disable the indirect mmio reg read mechanism * @ena_dev: ENA communication layer struct * @readless_supported: readless mode (enable/disable) */ void ena_com_set_mmio_read_mode(struct ena_com_dev *ena_dev, bool readless_supported); /* ena_com_mmio_reg_read_request_write_dev_addr - Write the mmio reg read return * value physical address. * @ena_dev: ENA communication layer struct */ void ena_com_mmio_reg_read_request_write_dev_addr(struct ena_com_dev *ena_dev); /* ena_com_mmio_reg_read_request_destroy - Destroy the mmio reg read mechanism * @ena_dev: ENA communication layer struct */ void ena_com_mmio_reg_read_request_destroy(struct ena_com_dev *ena_dev); /* ena_com_admin_init - Init the admin and the async queues * @ena_dev: ENA communication layer struct * @aenq_handlers: Those handlers to be called upon event. * * Initialize the admin submission and completion queues. * Initialize the asynchronous events notification queues. * * @return - 0 on success, negative value on failure. 
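 *
 * [Editorial sketch] A typical bring-up order around this call; the
 * aenq_handlers table and aenq_groups mask are hypothetical
 * caller-supplied names, and error handling is elided:
 *
 *	ena_com_mmio_reg_read_request_init(ena_dev);
 *	ena_com_dev_reset(ena_dev, ENA_REGS_RESET_NORMAL);
 *	ena_com_validate_version(ena_dev);
 *	ena_com_admin_init(ena_dev, &aenq_handlers);
 *	ena_com_set_aenq_config(ena_dev, aenq_groups);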
 */
int ena_com_admin_init(struct ena_com_dev *ena_dev,
		       struct ena_aenq_handlers *aenq_handlers);

/* ena_com_admin_destroy - Destroy the admin and the async events queues.
 * @ena_dev: ENA communication layer struct
 *
 * @note: Before calling this method, the caller must validate that the device
 * won't send any additional admin completions/aenq.
 * To achieve that, a FLR is recommended.
 */
void ena_com_admin_destroy(struct ena_com_dev *ena_dev);

/* ena_com_dev_reset - Perform an FLR on the device.
 * @ena_dev: ENA communication layer struct
 * @reset_reason: Specify the trigger for the reset in case of an error.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_dev_reset(struct ena_com_dev *ena_dev,
		      enum ena_regs_reset_reason_types reset_reason);

/* ena_com_create_io_queue - Create an IO queue.
 * @ena_dev: ENA communication layer struct
 * @ctx - create context structure
 *
 * Create the submission and the completion queues.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_create_io_queue(struct ena_com_dev *ena_dev,
			    struct ena_com_create_io_ctx *ctx);

/* ena_com_destroy_io_queue - Destroy the IO queue with the queue id - qid.
 * @ena_dev: ENA communication layer struct
 * @qid - the caller virtual queue id.
 */
void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid);

/* ena_com_get_io_handlers - Return the io queue handlers
 * @ena_dev: ENA communication layer struct
 * @qid - the caller virtual queue id.
 * @io_sq - IO submission queue handler
 * @io_cq - IO completion queue handler.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_get_io_handlers(struct ena_com_dev *ena_dev, u16 qid,
			    struct ena_com_io_sq **io_sq,
			    struct ena_com_io_cq **io_cq);

/* ena_com_admin_aenq_enable - ENAble asynchronous event notifications
 * @ena_dev: ENA communication layer struct
 *
 * After this method, aenq events can be received via the AENQ.
 */
void ena_com_admin_aenq_enable(struct ena_com_dev *ena_dev);

/* ena_com_set_admin_running_state - Set the state of the admin queue
 * @ena_dev: ENA communication layer struct
 *
 * Change the state of the admin queue (enable/disable)
 */
void ena_com_set_admin_running_state(struct ena_com_dev *ena_dev, bool state);

/* ena_com_get_admin_running_state - Get the admin queue state
 * @ena_dev: ENA communication layer struct
 *
 * Retrieve the state of the admin queue (enable/disable)
 *
 * @return - current admin queue state (enable/disable)
 */
bool ena_com_get_admin_running_state(struct ena_com_dev *ena_dev);

/* ena_com_set_admin_polling_mode - Set the admin completion queue polling mode
 * @ena_dev: ENA communication layer struct
 * @polling: Enable/Disable polling mode
 *
 * Set the admin completion mode.
 */
void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling);

/* ena_com_get_admin_polling_mode - Get the admin completion queue polling mode
 * @ena_dev: ENA communication layer struct
 *
 * Get the admin completion mode.
 * If polling mode is on, ena_com_execute_admin_command will perform a
 * polling on the admin completion queue for the commands completion,
 * otherwise it will wait on the wait event.
 *
 * @return state
 */
bool ena_com_get_admin_polling_mode(struct ena_com_dev *ena_dev);
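The comments above imply a strict bring-up order: the mmio read mechanism first (documented as the first stage of initialization), then the admin queue, with polling mode as a safe default until MSI-X is configured. A minimal sketch of that order, assuming an attach-style context; the function name, the aenq handler table, and the choice of readless mode are illustrative, not mandated by this API:

/*
 * Hypothetical attach-path sketch (not part of the API): bring-up order
 * implied by the comments above.
 */
static int
example_admin_bringup(struct ena_com_dev *ena_dev,
    struct ena_aenq_handlers *aenq_handlers)
{
	int rc;

	/* Documented as the first stage of the initialization sequence. */
	rc = ena_com_mmio_reg_read_request_init(ena_dev);
	if (rc != 0)
		return (rc);

	/* Assume readless register reads here; this is device dependent. */
	ena_com_set_mmio_read_mode(ena_dev, true);

	rc = ena_com_admin_init(ena_dev, aenq_handlers);
	if (rc != 0) {
		ena_com_mmio_reg_read_request_destroy(ena_dev);
		return (rc);
	}

	/* Poll for admin completions until MSI-X is configured. */
	ena_com_set_admin_polling_mode(ena_dev, true);

	return (0);
}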
/* ena_com_set_admin_auto_polling_mode - Enable autoswitch to polling mode
 * @ena_dev: ENA communication layer struct
 * @polling: Enable/Disable polling mode
 *
 * Set the autopolling mode.
 * If autopolling is on, the driver will switch to polling mode in case of
 * a missing interrupt while data is available.
 */
void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev,
					 bool polling);

/* ena_com_admin_q_comp_intr_handler - admin queue interrupt handler
 * @ena_dev: ENA communication layer struct
 *
 * This method goes over the admin completion queue and wakes up all the pending
 * threads that wait on the commands wait event.
 *
 * @note: Should be called after MSI-X interrupt.
 */
void ena_com_admin_q_comp_intr_handler(struct ena_com_dev *ena_dev);

/* ena_com_aenq_intr_handler - AENQ interrupt handler
 * @ena_dev: ENA communication layer struct
 *
 * This method goes over the async event notification queue and calls the proper
 * aenq handler.
 */
-void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data);
+void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data);

/* ena_com_abort_admin_commands - Abort all the outstanding admin commands.
 * @ena_dev: ENA communication layer struct
 *
 * This method aborts all the outstanding admin commands.
 * The caller should then call ena_com_wait_for_abort_completion to make sure
 * all the commands were completed.
 */
void ena_com_abort_admin_commands(struct ena_com_dev *ena_dev);

/* ena_com_wait_for_abort_completion - Wait for admin commands abort.
 * @ena_dev: ENA communication layer struct
 *
 * This method waits until all the outstanding admin commands are completed.
 */
void ena_com_wait_for_abort_completion(struct ena_com_dev *ena_dev);

/* ena_com_validate_version - Validate the device parameters
 * @ena_dev: ENA communication layer struct
 *
 * This method verifies the device parameters are the same as the saved
 * parameters in ena_dev.
 * This method is useful after device reset, to validate that the device mac
 * address and the device offloads are the same as before the reset.
 *
 * @return - 0 on success, negative value otherwise.
 */
int ena_com_validate_version(struct ena_com_dev *ena_dev);

/* ena_com_get_link_params - Retrieve physical link parameters.
 * @ena_dev: ENA communication layer struct
 * @resp: Link parameters
 *
 * Retrieve the physical link parameters,
 * like speed, auto-negotiation and full duplex support.
 *
 * @return - 0 on Success, negative value otherwise.
 */
int ena_com_get_link_params(struct ena_com_dev *ena_dev,
			    struct ena_admin_get_feat_resp *resp);

/* ena_com_get_dma_width - Retrieve physical dma address width the device
 * supports.
 * @ena_dev: ENA communication layer struct
 *
 * Retrieve the maximum physical address bits the device can handle.
 *
 * @return: > 0 on Success and negative value otherwise.
 */
int ena_com_get_dma_width(struct ena_com_dev *ena_dev);

/* ena_com_set_aenq_config - Set aenq groups configurations
 * @ena_dev: ENA communication layer struct
 * @groups_flag: bit field flags of enum ena_admin_aenq_group.
 *
 * Configure which aenq event groups the driver would like to receive.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag);

/* ena_com_get_dev_attr_feat - Get device features
 * @ena_dev: ENA communication layer struct
 * @get_feat_ctx: returned context that contains the get features.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
			      struct ena_com_dev_get_features_ctx *get_feat_ctx);
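Both the basic counters and the ENI allowance counters added in this revision are fetched over the admin queue. A small illustrative dump routine, assuming a fully initialized ena_dev; the printf reporting is for demonstration only, and ena_com_get_eni_stats is expected to fail on devices that do not expose ENI stats:

/* Illustrative diagnostic dump; not part of the API. */
static void
example_dump_stats(struct ena_com_dev *ena_dev)
{
	struct ena_admin_basic_stats basic;
	struct ena_admin_eni_stats eni;
	uint64_t rx_drops;

	if (ena_com_get_dev_basic_stats(ena_dev, &basic) == 0) {
		/* Basic counters are split into low/high 32-bit halves. */
		rx_drops = ((uint64_t)basic.rx_drops_high << 32) |
		    basic.rx_drops_low;
		printf("rx_drops: %ju\n", (uintmax_t)rx_drops);
	}

	/* Expected to fail on devices without ENI stats support. */
	if (ena_com_get_eni_stats(ena_dev, &eni) == 0)
		printf("pps_allowance_exceeded: %ju\n",
		    (uintmax_t)eni.pps_allowance_exceeded);
}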
/* ena_com_get_dev_basic_stats - Get device basic statistics
 * @ena_dev: ENA communication layer struct
 * @stats: stats return value
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_get_dev_basic_stats(struct ena_com_dev *ena_dev,
				struct ena_admin_basic_stats *stats);

+/* ena_com_get_eni_stats - Get extended network interface statistics
+ * @ena_dev: ENA communication layer struct
+ * @stats: stats return value
+ *
+ * @return: 0 on Success and negative value otherwise.
+ */
+int ena_com_get_eni_stats(struct ena_com_dev *ena_dev,
+			  struct ena_admin_eni_stats *stats);
+
/* ena_com_set_dev_mtu - Configure the device mtu.
 * @ena_dev: ENA communication layer struct
 * @mtu: mtu value
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu);

/* ena_com_get_offload_settings - Retrieve the device offloads capabilities
 * @ena_dev: ENA communication layer struct
 * @offload: offload return value
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
				 struct ena_admin_feature_offload_desc *offload);

/* ena_com_rss_init - Init RSS
 * @ena_dev: ENA communication layer struct
 * @log_size: indirection log size
 *
 * Allocate RSS/RFS resources.
 * The caller then can configure rss using ena_com_set_hash_function,
 * ena_com_set_hash_ctrl and ena_com_indirect_table_set.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_rss_init(struct ena_com_dev *ena_dev, u16 log_size);

/* ena_com_rss_destroy - Destroy rss
 * @ena_dev: ENA communication layer struct
 *
 * Free all the RSS/RFS resources.
 */
void ena_com_rss_destroy(struct ena_com_dev *ena_dev);

/* ena_com_get_current_hash_function - Get RSS hash function
 * @ena_dev: ENA communication layer struct
 *
 * Return the current hash function.
 * @return: 0 or one of the ena_admin_hash_functions values.
 */
int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev);

/* ena_com_fill_hash_function - Fill RSS hash function
 * @ena_dev: ENA communication layer struct
 * @func: The hash function (Toeplitz or crc)
 * @key: Hash key (for toeplitz hash)
 * @key_len: key length (max length 10 DW)
 * @init_val: initial value for the hash function
 *
 * Fill the ena_dev resources with the desired hash function, hash key, key_len
 * and key initial value (if needed by the hash function).
 * To flush the key into the device the caller should call
 * ena_com_set_hash_function.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
			       enum ena_admin_hash_functions func,
			       const u8 *key, u16 key_len, u32 init_val);

/* ena_com_set_hash_function - Flush the hash function and its dependencies to
 * the device.
 * @ena_dev: ENA communication layer struct
 *
 * Flush the hash function and its dependencies (key, key length and
 * initial value) if needed.
 *
 * @note: Prior to this method the caller should call ena_com_fill_hash_function
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_set_hash_function(struct ena_com_dev *ena_dev);

/* ena_com_get_hash_function - Retrieve the hash function from the device.
 * @ena_dev: ENA communication layer struct
 * @func: hash function
 *
 * Retrieve the hash function from the device.
 *
 * @note: If the caller called ena_com_fill_hash_function but didn't flush
 * it to the device, the new configuration will be lost.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
			      enum ena_admin_hash_functions *func);
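The fill/set split above means the key is only staged in host memory by ena_com_fill_hash_function and reaches the device when ena_com_set_hash_function is called. A minimal sketch of that sequence, assuming ena_com_rss_init has already allocated the RSS resources; the key, its length, and the initial value are illustrative:

/* Illustrative: stage a Toeplitz key, then flush it to the device. */
static int
example_set_rss_key(struct ena_com_dev *ena_dev, const u8 *key, u16 key_len)
{
	int rc;

	/* The key and init value only reach host memory here. */
	rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_TOEPLITZ, key,
	    key_len, 0 /* illustrative init value */);
	if (rc != 0)
		return (rc);

	/* The staged configuration is flushed to the device here. */
	return (ena_com_set_hash_function(ena_dev));
}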
/* ena_com_get_hash_key - Retrieve the hash key
 * @ena_dev: ENA communication layer struct
 * @key: hash key
 *
 * Retrieve the hash key.
 *
 * @note: If the caller called ena_com_fill_hash_function but didn't flush
 * it to the device, the new configuration will be lost.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_get_hash_key(struct ena_com_dev *ena_dev, u8 *key);

/* ena_com_fill_hash_ctrl - Fill RSS hash control
 * @ena_dev: ENA communication layer struct.
 * @proto: The protocol to configure.
 * @hash_fields: bit mask of ena_admin_flow_hash_fields
 *
 * Fill the ena_dev resources with the desired hash control (the ethernet
 * fields that take part of the hash) for a specific protocol.
 * To flush the hash control to the device, the caller should call
 * ena_com_set_hash_ctrl.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
			   enum ena_admin_flow_hash_proto proto,
			   u16 hash_fields);

/* ena_com_set_hash_ctrl - Flush the hash control resources to the device.
 * @ena_dev: ENA communication layer struct
 *
 * Flush the hash control (the ethernet fields that take part of the hash)
 *
 * @note: Prior to this method the caller should call ena_com_fill_hash_ctrl.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev);

/* ena_com_get_hash_ctrl - Retrieve the hash control from the device.
 * @ena_dev: ENA communication layer struct
 * @proto: The protocol to retrieve.
 * @fields: bit mask of ena_admin_flow_hash_fields.
 *
 * Retrieve the hash control from the device.
 *
 * @note: If the caller called ena_com_fill_hash_ctrl but didn't flush
 * it to the device, the new configuration will be lost.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_get_hash_ctrl(struct ena_com_dev *ena_dev,
			  enum ena_admin_flow_hash_proto proto,
			  u16 *fields);

/* ena_com_set_default_hash_ctrl - Set the hash control to a default
 * configuration.
 * @ena_dev: ENA communication layer struct
 *
 * Fill the ena_dev resources with the default hash control configuration.
 * To flush the hash control to the device, the caller should call
 * ena_com_set_hash_ctrl.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev);

/* ena_com_indirect_table_fill_entry - Fill a single entry in the RSS
 * indirection table
 * @ena_dev: ENA communication layer struct.
 * @entry_idx - indirection table entry.
 * @entry_value - redirection value
 *
 * Fill a single entry of the RSS indirection table in the ena_dev resources.
 * To flush the indirection table to the device, the caller should call
 * ena_com_indirect_table_set.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_indirect_table_fill_entry(struct ena_com_dev *ena_dev,
				      u16 entry_idx, u16 entry_value);

/* ena_com_indirect_table_set - Flush the indirection table to the device.
 * @ena_dev: ENA communication layer struct
 *
 * Flush the indirection hash control to the device.
 * Prior to this method the caller should call ena_com_indirect_table_fill_entry
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_indirect_table_set(struct ena_com_dev *ena_dev);
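The indirection table follows the same staged pattern: fill the entries one by one, then flush the whole table. A sketch that spreads the entries round-robin across the Rx queues, assuming ena_com_rss_init(ena_dev, tbl_log_size) has already allocated the table; num_queues and the round-robin mapping are illustrative:

/* Illustrative: round-robin 2^tbl_log_size entries over num_queues. */
static int
example_fill_indirect_table(struct ena_com_dev *ena_dev, u16 tbl_log_size,
    u16 num_queues)
{
	u16 i, table_size;
	int rc;

	table_size = 1 << tbl_log_size;
	for (i = 0; i < table_size; i++) {
		/* Entries are only staged in host memory here. */
		rc = ena_com_indirect_table_fill_entry(ena_dev, i,
		    i % num_queues);
		if (rc != 0)
			return (rc);
	}

	/* Flush the staged entries to the device in one command. */
	return (ena_com_indirect_table_set(ena_dev));
}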
/* ena_com_indirect_table_get - Retrieve the indirection table from the device.
 * @ena_dev: ENA communication layer struct
 * @ind_tbl: indirection table
 *
 * Retrieve the RSS indirection table from the device.
 *
 * @note: If the caller called ena_com_indirect_table_fill_entry but didn't flush
 * it to the device, the new configuration will be lost.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl);

/* ena_com_allocate_host_info - Allocate host info resources.
 * @ena_dev: ENA communication layer struct
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_allocate_host_info(struct ena_com_dev *ena_dev);

/* ena_com_allocate_debug_area - Allocate debug area.
 * @ena_dev: ENA communication layer struct
 * @debug_area_size - debug area size.
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_allocate_debug_area(struct ena_com_dev *ena_dev,
				u32 debug_area_size);

/* ena_com_delete_debug_area - Free the debug area resources.
 * @ena_dev: ENA communication layer struct
 *
 * Free the allocated debug area.
 */
void ena_com_delete_debug_area(struct ena_com_dev *ena_dev);

/* ena_com_delete_host_info - Free the host info resources.
 * @ena_dev: ENA communication layer struct
 *
 * Free the allocated host info.
 */
void ena_com_delete_host_info(struct ena_com_dev *ena_dev);

/* ena_com_set_host_attributes - Update the device with the host
 * attributes (debug area and host info) base address.
 * @ena_dev: ENA communication layer struct
 *
 * @return: 0 on Success and negative value otherwise.
 */
int ena_com_set_host_attributes(struct ena_com_dev *ena_dev);

/* ena_com_create_io_cq - Create io completion queue.
 * @ena_dev: ENA communication layer struct
 * @io_cq - io completion queue handler
 *
 * Create an IO completion queue.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
			 struct ena_com_io_cq *io_cq);

/* ena_com_destroy_io_cq - Destroy io completion queue.
 * @ena_dev: ENA communication layer struct
 * @io_cq - io completion queue handler
 *
 * Destroy an IO completion queue.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_destroy_io_cq(struct ena_com_dev *ena_dev,
			  struct ena_com_io_cq *io_cq);

/* ena_com_execute_admin_command - Execute admin command
 * @admin_queue: admin queue.
 * @cmd: the admin command to execute.
 * @cmd_size: the command size.
 * @cmd_completion: command completion return value.
 * @cmd_comp_size: command completion size.
 *
 * Submit an admin command and then wait until the device returns a
 * completion.
 * The completion will be copied into cmd_completion.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
				  struct ena_admin_aq_entry *cmd,
				  size_t cmd_size,
				  struct ena_admin_acq_entry *cmd_comp,
				  size_t cmd_comp_size);

/* ena_com_init_interrupt_moderation - Init interrupt moderation
 * @ena_dev: ENA communication layer struct
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev);

/* ena_com_interrupt_moderation_supported - Return if interrupt moderation
 * capability is supported by the device.
 *
 * @return - supported or not.
 */
bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev);

/* ena_com_update_nonadaptive_moderation_interval_tx - Update the
 * non-adaptive interval in Tx direction.
 * @ena_dev: ENA communication layer struct
 * @tx_coalesce_usecs: Interval in usec.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev,
						      u32 tx_coalesce_usecs);
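Putting the moderation calls together: moderation must be initialized first, support should be checked, and only then can the static Tx/Rx intervals be programmed. A sketch with illustrative interval values:

/* Illustrative: program static 64/256 usec Tx/Rx moderation intervals. */
static int
example_setup_intr_moderation(struct ena_com_dev *ena_dev)
{
	int rc;

	rc = ena_com_init_interrupt_moderation(ena_dev);
	if (rc != 0)
		return (rc);

	/* Devices without moderation support have nothing to configure. */
	if (!ena_com_interrupt_moderation_supported(ena_dev))
		return (0);

	rc = ena_com_update_nonadaptive_moderation_interval_tx(ena_dev, 64);
	if (rc != 0)
		return (rc);

	return (ena_com_update_nonadaptive_moderation_interval_rx(ena_dev,
	    256));
}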
/* ena_com_update_nonadaptive_moderation_interval_rx - Update the
 * non-adaptive interval in Rx direction.
 * @ena_dev: ENA communication layer struct
 * @rx_coalesce_usecs: Interval in usec.
 *
 * @return - 0 on success, negative value on failure.
 */
int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev,
						      u32 rx_coalesce_usecs);

/* ena_com_get_nonadaptive_moderation_interval_tx - Retrieve the
 * non-adaptive interval in Tx direction.
 * @ena_dev: ENA communication layer struct
 *
 * @return - interval in usec
 */
unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev);

/* ena_com_get_nonadaptive_moderation_interval_rx - Retrieve the
 * non-adaptive interval in Rx direction.
 * @ena_dev: ENA communication layer struct
 *
 * @return - interval in usec
 */
unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev);

/* ena_com_config_dev_mode - Configure the placement policy of the device.
 * @ena_dev: ENA communication layer struct
 * @llq_features: LLQ feature descriptor, retrieved via
 *	ena_com_get_dev_attr_feat.
 * @llq_default_config: The default driver LLQ parameters configuration
 */
int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
			    struct ena_admin_feature_llq_desc *llq_features,
			    struct ena_llq_configurations *llq_default_config);
+
+/* ena_com_io_sq_to_ena_dev - Extract ena_com_dev using contained field io_sq.
+ * @io_sq: IO submit queue struct
+ *
+ * @return - ena_com_dev struct extracted from io_sq
+ */
+static inline struct ena_com_dev *ena_com_io_sq_to_ena_dev(struct ena_com_io_sq *io_sq)
+{
+	return container_of(io_sq, struct ena_com_dev, io_sq_queues[io_sq->qid]);
+}
+
+/* ena_com_io_cq_to_ena_dev - Extract ena_com_dev using contained field io_cq.
+ * @io_cq: IO completion queue struct
+ *
+ * @return - ena_com_dev struct extracted from io_cq
+ */
+static inline struct ena_com_dev *ena_com_io_cq_to_ena_dev(struct ena_com_io_cq *io_cq)
+{
+	return container_of(io_cq, struct ena_com_dev, io_cq_queues[io_cq->qid]);
+}

static inline bool ena_com_get_adaptive_moderation_enabled(struct ena_com_dev *ena_dev)
{
	return ena_dev->adaptive_coalescing;
}

static inline void ena_com_enable_adaptive_moderation(struct ena_com_dev *ena_dev)
{
	ena_dev->adaptive_coalescing = true;
}

static inline void ena_com_disable_adaptive_moderation(struct ena_com_dev *ena_dev)
{
	ena_dev->adaptive_coalescing = false;
}

/* ena_com_update_intr_reg - Prepare interrupt register
 * @intr_reg: interrupt register to update.
 * @rx_delay_interval: Rx interval in usecs
 * @tx_delay_interval: Tx interval in usecs
 * @unmask: unmask enable/disable
 *
 * Prepare interrupt update register with the supplied parameters.
*/ static inline void ena_com_update_intr_reg(struct ena_eth_io_intr_reg *intr_reg, u32 rx_delay_interval, u32 tx_delay_interval, bool unmask) { intr_reg->intr_control = 0; intr_reg->intr_control |= rx_delay_interval & ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK; intr_reg->intr_control |= (tx_delay_interval << ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT) & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK; if (unmask) intr_reg->intr_control |= ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK; } static inline u8 *ena_com_get_next_bounce_buffer(struct ena_com_io_bounce_buffer_control *bounce_buf_ctrl) { u16 size, buffers_num; u8 *buf; size = bounce_buf_ctrl->buffer_size; buffers_num = bounce_buf_ctrl->buffers_num; buf = bounce_buf_ctrl->base_buffer + (bounce_buf_ctrl->next_to_use++ & (buffers_num - 1)) * size; prefetchw(bounce_buf_ctrl->base_buffer + (bounce_buf_ctrl->next_to_use & (buffers_num - 1)) * size); return buf; } #ifdef ENA_EXTENDED_STATS int ena_com_get_dev_extended_stats(struct ena_com_dev *ena_dev, char *buff, u32 len); int ena_com_extended_stats_set_func_queue(struct ena_com_dev *ena_dev, u32 funct_queue); #endif #if defined(__cplusplus) } #endif /* __cplusplus */ #endif /* !(ENA_COM) */ Index: stable/12/sys/contrib/ena-com/ena_defs/ena_admin_defs.h =================================================================== --- stable/12/sys/contrib/ena-com/ena_defs/ena_admin_defs.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_defs/ena_admin_defs.h (revision 368012) @@ -1,1683 +1,1723 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
 */

#ifndef _ENA_ADMIN_H_
#define _ENA_ADMIN_H_

#define ENA_ADMIN_EXTRA_PROPERTIES_STRING_LEN 32
#define ENA_ADMIN_EXTRA_PROPERTIES_COUNT 32

+#define ENA_ADMIN_RSS_KEY_PARTS 10
+
enum ena_admin_aq_opcode {
	ENA_ADMIN_CREATE_SQ = 1,
	ENA_ADMIN_DESTROY_SQ = 2,
	ENA_ADMIN_CREATE_CQ = 3,
	ENA_ADMIN_DESTROY_CQ = 4,
	ENA_ADMIN_GET_FEATURE = 8,
	ENA_ADMIN_SET_FEATURE = 9,
	ENA_ADMIN_GET_STATS = 11,
};

enum ena_admin_aq_completion_status {
	ENA_ADMIN_SUCCESS = 0,
	ENA_ADMIN_RESOURCE_ALLOCATION_FAILURE = 1,
	ENA_ADMIN_BAD_OPCODE = 2,
	ENA_ADMIN_UNSUPPORTED_OPCODE = 3,
	ENA_ADMIN_MALFORMED_REQUEST = 4,
	/* Additional status is provided in ACQ entry extended_status */
	ENA_ADMIN_ILLEGAL_PARAMETER = 5,
	ENA_ADMIN_UNKNOWN_ERROR = 6,
	ENA_ADMIN_RESOURCE_BUSY = 7,
};

+/* subcommands for the set/get feature admin commands */
enum ena_admin_aq_feature_id {
	ENA_ADMIN_DEVICE_ATTRIBUTES = 1,
	ENA_ADMIN_MAX_QUEUES_NUM = 2,
	ENA_ADMIN_HW_HINTS = 3,
	ENA_ADMIN_LLQ = 4,
	ENA_ADMIN_EXTRA_PROPERTIES_STRINGS = 5,
	ENA_ADMIN_EXTRA_PROPERTIES_FLAGS = 6,
	ENA_ADMIN_MAX_QUEUES_EXT = 7,
	ENA_ADMIN_RSS_HASH_FUNCTION = 10,
	ENA_ADMIN_STATELESS_OFFLOAD_CONFIG = 11,
-	ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG = 12,
+	ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG = 12,
	ENA_ADMIN_MTU = 14,
	ENA_ADMIN_RSS_HASH_INPUT = 18,
	ENA_ADMIN_INTERRUPT_MODERATION = 20,
	ENA_ADMIN_AENQ_CONFIG = 26,
	ENA_ADMIN_LINK_CONFIG = 27,
	ENA_ADMIN_HOST_ATTR_CONFIG = 28,
	ENA_ADMIN_FEATURES_OPCODE_NUM = 32,
};

enum ena_admin_placement_policy_type {
	/* descriptors and headers are in host memory */
	ENA_ADMIN_PLACEMENT_POLICY_HOST = 1,
	/* descriptors and headers are in device memory (a.k.a Low Latency
	 * Queue)
	 */
	ENA_ADMIN_PLACEMENT_POLICY_DEV = 3,
};

enum ena_admin_link_types {
	ENA_ADMIN_LINK_SPEED_1G = 0x1,
	ENA_ADMIN_LINK_SPEED_2_HALF_G = 0x2,
	ENA_ADMIN_LINK_SPEED_5G = 0x4,
	ENA_ADMIN_LINK_SPEED_10G = 0x8,
	ENA_ADMIN_LINK_SPEED_25G = 0x10,
	ENA_ADMIN_LINK_SPEED_40G = 0x20,
	ENA_ADMIN_LINK_SPEED_50G = 0x40,
	ENA_ADMIN_LINK_SPEED_100G = 0x80,
	ENA_ADMIN_LINK_SPEED_200G = 0x100,
	ENA_ADMIN_LINK_SPEED_400G = 0x200,
};

enum ena_admin_completion_policy_type {
	/* completion queue entry for each sq descriptor */
	ENA_ADMIN_COMPLETION_POLICY_DESC = 0,
	/* completion queue entry upon request in sq descriptor */
	ENA_ADMIN_COMPLETION_POLICY_DESC_ON_DEMAND = 1,
	/* current queue head pointer is updated in OS memory upon sq
	 * descriptor request
	 */
	ENA_ADMIN_COMPLETION_POLICY_HEAD_ON_DEMAND = 2,
	/* current queue head pointer is updated in OS memory for each sq
	 * descriptor
	 */
	ENA_ADMIN_COMPLETION_POLICY_HEAD = 3,
};

/* basic stats return ena_admin_basic_stats while extended stats return a
 * buffer (string format) with additional statistics per queue and per
 * device id
 */
enum ena_admin_get_stats_type {
	ENA_ADMIN_GET_STATS_TYPE_BASIC = 0,
	ENA_ADMIN_GET_STATS_TYPE_EXTENDED = 1,
+	/* extra HW stats for specific network interface */
+	ENA_ADMIN_GET_STATS_TYPE_ENI = 2,
};

enum ena_admin_get_stats_scope {
	ENA_ADMIN_SPECIFIC_QUEUE = 0,
	ENA_ADMIN_ETH_TRAFFIC = 1,
};

struct ena_admin_aq_common_desc {
	/* 11:0 : command_id
	 * 15:12 : reserved12
	 */
	uint16_t command_id;

	/* as appears in ena_admin_aq_opcode */
	uint8_t opcode;

	/* 0 : phase
	 * 1 : ctrl_data - control buffer address valid
	 * 2 : ctrl_data_indirect - control buffer address
	 *    points to list of pages with addresses of control
	 *    buffers
	 * 7:3 : reserved3
	 */
	uint8_t flags;
};

/* used in ena_admin_aq_entry. Can point directly to control data, or to a
 * page list chunk. Used also at the end of indirect mode page list chunks,
 * for chaining.
*/ struct ena_admin_ctrl_buff_info { uint32_t length; struct ena_common_mem_addr address; }; struct ena_admin_sq { uint16_t sq_idx; /* 4:0 : reserved * 7:5 : sq_direction - 0x1 - Tx; 0x2 - Rx */ uint8_t sq_identity; uint8_t reserved1; }; struct ena_admin_aq_entry { struct ena_admin_aq_common_desc aq_common_descriptor; union { uint32_t inline_data_w1[3]; struct ena_admin_ctrl_buff_info control_buffer; } u; uint32_t inline_data_w4[12]; }; struct ena_admin_acq_common_desc { /* command identifier to associate it with the aq descriptor * 11:0 : command_id * 15:12 : reserved12 */ uint16_t command; uint8_t status; /* 0 : phase * 7:1 : reserved1 */ uint8_t flags; uint16_t extended_status; /* indicates to the driver which AQ entry has been consumed by the - * device and could be reused + * device and could be reused */ uint16_t sq_head_indx; }; struct ena_admin_acq_entry { struct ena_admin_acq_common_desc acq_common_descriptor; uint32_t response_specific_data[14]; }; struct ena_admin_aq_create_sq_cmd { struct ena_admin_aq_common_desc aq_common_descriptor; /* 4:0 : reserved0_w1 * 7:5 : sq_direction - 0x1 - Tx, 0x2 - Rx */ uint8_t sq_identity; uint8_t reserved8_w1; /* 3:0 : placement_policy - Describing where the SQ * descriptor ring and the SQ packet headers reside: * 0x1 - descriptors and headers are in OS memory, * 0x3 - descriptors and headers in device memory * (a.k.a Low Latency Queue) * 6:4 : completion_policy - Describing what policy * to use for generation completion entry (cqe) in * the CQ associated with this SQ: 0x0 - cqe for each * sq descriptor, 0x1 - cqe upon request in sq * descriptor, 0x2 - current queue head pointer is * updated in OS memory upon sq descriptor request * 0x3 - current queue head pointer is updated in OS * memory for each sq descriptor * 7 : reserved15_w1 */ uint8_t sq_caps_2; /* 0 : is_physically_contiguous - Described if the * queue ring memory is allocated in physical * contiguous pages or split. * 7:1 : reserved17_w1 */ uint8_t sq_caps_3; - /* associated completion queue id. This CQ must be created prior to - * SQ creation + /* associated completion queue id. This CQ must be created prior to SQ + * creation */ uint16_t cq_idx; /* submission queue depth in entries */ uint16_t sq_depth; /* SQ physical base address in OS memory. This field should not be * used for Low Latency queues. Has to be page aligned. */ struct ena_common_mem_addr sq_ba; /* specifies queue head writeback location in OS memory. Valid if * completion_policy is set to completion_policy_head_on_demand or * completion_policy_head. 
Has to be cache aligned */ struct ena_common_mem_addr sq_head_writeback; uint32_t reserved0_w7; uint32_t reserved0_w8; }; enum ena_admin_sq_direction { ENA_ADMIN_SQ_DIRECTION_TX = 1, ENA_ADMIN_SQ_DIRECTION_RX = 2, }; struct ena_admin_acq_create_sq_resp_desc { struct ena_admin_acq_common_desc acq_common_desc; uint16_t sq_idx; uint16_t reserved; /* queue doorbell address as an offset to PCIe MMIO REG BAR */ uint32_t sq_doorbell_offset; /* low latency queue ring base address as an offset to PCIe MMIO * LLQ_MEM BAR */ uint32_t llq_descriptors_offset; /* low latency queue headers' memory as an offset to PCIe MMIO * LLQ_MEM BAR */ uint32_t llq_headers_offset; }; struct ena_admin_aq_destroy_sq_cmd { struct ena_admin_aq_common_desc aq_common_descriptor; struct ena_admin_sq sq; }; struct ena_admin_acq_destroy_sq_resp_desc { struct ena_admin_acq_common_desc acq_common_desc; }; struct ena_admin_aq_create_cq_cmd { struct ena_admin_aq_common_desc aq_common_descriptor; /* 4:0 : reserved5 * 5 : interrupt_mode_enabled - if set, cq operates * in interrupt mode, otherwise - polling * 7:6 : reserved6 */ uint8_t cq_caps_1; /* 4:0 : cq_entry_size_words - size of CQ entry in * 32-bit words, valid values: 4, 8. * 7:5 : reserved7 */ uint8_t cq_caps_2; /* completion queue depth in # of entries. must be power of 2 */ uint16_t cq_depth; /* msix vector assigned to this cq */ uint32_t msix_vector; /* cq physical base address in OS memory. CQ must be physically * contiguous */ struct ena_common_mem_addr cq_ba; }; struct ena_admin_acq_create_cq_resp_desc { struct ena_admin_acq_common_desc acq_common_desc; uint16_t cq_idx; /* actual cq depth in number of entries */ uint16_t cq_actual_depth; uint32_t numa_node_register_offset; uint32_t cq_head_db_register_offset; uint32_t cq_interrupt_unmask_register_offset; }; struct ena_admin_aq_destroy_cq_cmd { struct ena_admin_aq_common_desc aq_common_descriptor; uint16_t cq_idx; uint16_t reserved1; }; struct ena_admin_acq_destroy_cq_resp_desc { struct ena_admin_acq_common_desc acq_common_desc; }; /* ENA AQ Get Statistics command. Extended statistics are placed in control * buffer pointed by AQ entry */ struct ena_admin_aq_get_stats_cmd { struct ena_admin_aq_common_desc aq_common_descriptor; union { /* command specific inline data */ uint32_t inline_data_w1[3]; struct ena_admin_ctrl_buff_info control_buffer; } u; /* stats type as defined in enum ena_admin_get_stats_type */ uint8_t type; /* stats scope defined in enum ena_admin_get_stats_scope */ uint8_t scope; uint16_t reserved3; /* queue id. used when scope is specific_queue */ uint16_t queue_idx; /* device id, value 0xFFFF means mine. only privileged device can get - * stats of other device + * stats of other device */ uint16_t device_id; }; /* Basic Statistics Command. */ struct ena_admin_basic_stats { uint32_t tx_bytes_low; uint32_t tx_bytes_high; uint32_t tx_pkts_low; uint32_t tx_pkts_high; uint32_t rx_bytes_low; uint32_t rx_bytes_high; uint32_t rx_pkts_low; uint32_t rx_pkts_high; uint32_t rx_drops_low; uint32_t rx_drops_high; uint32_t tx_drops_low; uint32_t tx_drops_high; }; +/* ENI Statistics Command. 
*/ +struct ena_admin_eni_stats { + /* The number of packets shaped due to inbound aggregate BW + * allowance being exceeded + */ + uint64_t bw_in_allowance_exceeded; + + /* The number of packets shaped due to outbound aggregate BW + * allowance being exceeded + */ + uint64_t bw_out_allowance_exceeded; + + /* The number of packets shaped due to PPS allowance being exceeded */ + uint64_t pps_allowance_exceeded; + + /* The number of packets shaped due to connection tracking + * allowance being exceeded and leading to failure in establishment + * of new connections + */ + uint64_t conntrack_allowance_exceeded; + + /* The number of packets shaped due to linklocal packet rate + * allowance being exceeded + */ + uint64_t linklocal_allowance_exceeded; +}; + struct ena_admin_acq_get_stats_resp { struct ena_admin_acq_common_desc acq_common_desc; - struct ena_admin_basic_stats basic_stats; + union { + uint64_t raw[7]; + + struct ena_admin_basic_stats basic_stats; + + struct ena_admin_eni_stats eni_stats; + } u; }; struct ena_admin_get_set_feature_common_desc { /* 1:0 : select - 0x1 - current value; 0x3 - default * value * 7:3 : reserved3 */ uint8_t flags; /* as appears in ena_admin_aq_feature_id */ uint8_t feature_id; /* The driver specifies the max feature version it supports and the - * device responds with the currently supported feature version. The - * field is zero based + * device responds with the currently supported feature version. The + * field is zero based */ uint8_t feature_version; uint8_t reserved8; }; struct ena_admin_device_attr_feature_desc { uint32_t impl_id; uint32_t device_version; - /* bitmap of ena_admin_aq_feature_id */ + /* bitmap of ena_admin_aq_feature_id, which represents supported + * subcommands for the set/get feature admin commands. + */ uint32_t supported_features; uint32_t reserved3; /* Indicates how many bits are used physical address access. */ uint32_t phys_addr_width; /* Indicates how many bits are used virtual address access. */ uint32_t virt_addr_width; /* unicast MAC address (in Network byte order) */ uint8_t mac_addr[6]; uint8_t reserved7[2]; uint32_t max_mtu; }; enum ena_admin_llq_header_location { /* header is in descriptor list */ ENA_ADMIN_INLINE_HEADER = 1, /* header in a separate ring, implies 16B descriptor list entry */ ENA_ADMIN_HEADER_RING = 2, }; enum ena_admin_llq_ring_entry_size { ENA_ADMIN_LIST_ENTRY_SIZE_128B = 1, ENA_ADMIN_LIST_ENTRY_SIZE_192B = 2, ENA_ADMIN_LIST_ENTRY_SIZE_256B = 4, }; enum ena_admin_llq_num_descs_before_header { ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_0 = 0, ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_1 = 1, ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2 = 2, ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_4 = 4, ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8 = 8, }; /* packet descriptor list entry always starts with one or more descriptors, * followed by a header. The rest of the descriptors are located in the * beginning of the subsequent entry. Stride refers to how the rest of the * descriptors are placed. This field is relevant only for inline header * mode */ enum ena_admin_llq_stride_ctrl { ENA_ADMIN_SINGLE_DESC_PER_ENTRY = 1, ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY = 2, }; enum ena_admin_accel_mode_feat { ENA_ADMIN_DISABLE_META_CACHING = 0, ENA_ADMIN_LIMIT_TX_BURST = 1, }; struct ena_admin_accel_mode_get { /* bit field of enum ena_admin_accel_mode_feat */ uint16_t supported_flags; /* maximum burst size between two doorbells. 
The size is in bytes */
	uint16_t max_tx_burst_size;
};

struct ena_admin_accel_mode_set {
	/* bit field of enum ena_admin_accel_mode_feat */
	uint16_t enabled_flags;

	uint16_t reserved;
};

struct ena_admin_accel_mode_req {
	union {
		uint32_t raw[2];

		struct ena_admin_accel_mode_get get;

		struct ena_admin_accel_mode_set set;
	} u;
};

struct ena_admin_feature_llq_desc {
	uint32_t max_llq_num;

	uint32_t max_llq_depth;

-	/* specify the header locations the device supports. bitfield of
-	 * enum ena_admin_llq_header_location.
+	/* specify the header locations the device supports. bitfield of enum
+	 * ena_admin_llq_header_location.
	 */
	uint16_t header_location_ctrl_supported;

	/* the header location the driver selected to use. */
	uint16_t header_location_ctrl_enabled;

-	/* if inline header is specified - this is the size of descriptor
-	 * list entry. If header in a separate ring is specified - this is
-	 * the size of header ring entry. bitfield of enum
-	 * ena_admin_llq_ring_entry_size. specify the entry sizes the device
-	 * supports
+	/* if inline header is specified - this is the size of descriptor list
+	 * entry. If header in a separate ring is specified - this is the size
+	 * of header ring entry. bitfield of enum ena_admin_llq_ring_entry_size.
+	 * specify the entry sizes the device supports
	 */
	uint16_t entry_size_ctrl_supported;

	/* the entry size the driver selected to use. */
	uint16_t entry_size_ctrl_enabled;

-	/* valid only if inline header is specified. First entry associated
-	 * with the packet includes descriptors and header. Rest of the
-	 * entries occupied by descriptors. This parameter defines the max
-	 * number of descriptors precedding the header in the first entry.
-	 * The field is bitfield of enum
-	 * ena_admin_llq_num_descs_before_header and specify the values the
-	 * device supports
+	/* valid only if inline header is specified. First entry associated with
+	 * the packet includes descriptors and header. Rest of the entries
+	 * occupied by descriptors. This parameter defines the max number of
+	 * descriptors preceding the header in the first entry. The field is
+	 * bitfield of enum ena_admin_llq_num_descs_before_header and specifies
+	 * the values the device supports
	 */
	uint16_t desc_num_before_header_supported;

	/* the desired field the driver selected to use */
	uint16_t desc_num_before_header_enabled;

	/* valid only if inline was chosen. bitfield of enum
-	 * ena_admin_llq_stride_ctrl
+	 * ena_admin_llq_stride_ctrl
	 */
	uint16_t descriptors_stride_ctrl_supported;

	/* the stride control the driver selected to use */
	uint16_t descriptors_stride_ctrl_enabled;

	/* reserved */
	uint32_t reserved1;

-	/* accelerated low latency queues requirment. driver needs to
-	 * support those requirments in order to use accelerated llq
+	/* accelerated low latency queues requirement.
driver needs to + * support those requirements in order to use accelerated llq */ struct ena_admin_accel_mode_req accel_mode; }; struct ena_admin_queue_ext_feature_fields { uint32_t max_tx_sq_num; uint32_t max_tx_cq_num; uint32_t max_rx_sq_num; uint32_t max_rx_cq_num; uint32_t max_tx_sq_depth; uint32_t max_tx_cq_depth; uint32_t max_rx_sq_depth; uint32_t max_rx_cq_depth; uint32_t max_tx_header_size; - /* Maximum Descriptors number, including meta descriptor, allowed for - * a single Tx packet + /* Maximum Descriptors number, including meta descriptor, allowed for a + * single Tx packet */ uint16_t max_per_packet_tx_descs; /* Maximum Descriptors number allowed for a single Rx packet */ uint16_t max_per_packet_rx_descs; }; struct ena_admin_queue_feature_desc { uint32_t max_sq_num; uint32_t max_sq_depth; uint32_t max_cq_num; uint32_t max_cq_depth; uint32_t max_legacy_llq_num; uint32_t max_legacy_llq_depth; uint32_t max_header_size; - /* Maximum Descriptors number, including meta descriptor, allowed for - * a single Tx packet + /* Maximum Descriptors number, including meta descriptor, allowed for a + * single Tx packet */ uint16_t max_packet_tx_descs; /* Maximum Descriptors number allowed for a single Rx packet */ uint16_t max_packet_rx_descs; }; struct ena_admin_set_feature_mtu_desc { /* exclude L2 */ uint32_t mtu; }; struct ena_admin_get_extra_properties_strings_desc { uint32_t count; }; struct ena_admin_get_extra_properties_flags_desc { uint32_t flags; }; struct ena_admin_set_feature_host_attr_desc { /* host OS info base address in OS memory. host info is 4KB of * physically contiguous */ struct ena_common_mem_addr os_info_ba; /* host debug area base address in OS memory. debug area must be * physically contiguous */ struct ena_common_mem_addr debug_ba; /* debug area size */ uint32_t debug_area_size; }; struct ena_admin_feature_intr_moder_desc { /* interrupt delay granularity in usec */ uint16_t intr_delay_resolution; uint16_t reserved; }; struct ena_admin_get_feature_link_desc { /* Link speed in Mb */ uint32_t speed; /* bit field of enum ena_admin_link types */ uint32_t supported; /* 0 : autoneg * 1 : duplex - Full Duplex * 31:2 : reserved2 */ uint32_t flags; }; struct ena_admin_feature_aenq_desc { /* bitmask for AENQ groups the device can report */ uint32_t supported_groups; /* bitmask for AENQ groups to report */ uint32_t enabled_groups; }; struct ena_admin_feature_offload_desc { /* 0 : TX_L3_csum_ipv4 * 1 : TX_L4_ipv4_csum_part - The checksum field * should be initialized with pseudo header checksum * 2 : TX_L4_ipv4_csum_full * 3 : TX_L4_ipv6_csum_part - The checksum field * should be initialized with pseudo header checksum * 4 : TX_L4_ipv6_csum_full * 5 : tso_ipv4 * 6 : tso_ipv6 * 7 : tso_ecn */ uint32_t tx; /* Receive side supported stateless offload * 0 : RX_L3_csum_ipv4 - IPv4 checksum * 1 : RX_L4_ipv4_csum - TCP/UDP/IPv4 checksum * 2 : RX_L4_ipv6_csum - TCP/UDP/IPv6 checksum * 3 : RX_hash - Hash calculation */ uint32_t rx_supported; uint32_t rx_enabled; }; enum ena_admin_hash_functions { ENA_ADMIN_TOEPLITZ = 1, ENA_ADMIN_CRC32 = 2, }; struct ena_admin_feature_rss_flow_hash_control { - uint32_t keys_num; + uint32_t key_parts; uint32_t reserved; - uint32_t key[10]; + uint32_t key[ENA_ADMIN_RSS_KEY_PARTS]; }; struct ena_admin_feature_rss_flow_hash_function { /* 7:0 : funcs - bitmask of ena_admin_hash_functions */ uint32_t supported_func; /* 7:0 : selected_func - bitmask of * ena_admin_hash_functions */ uint32_t selected_func; /* initial value */ uint32_t init_val; }; /* RSS flow hash 
protocols */ enum ena_admin_flow_hash_proto { ENA_ADMIN_RSS_TCP4 = 0, ENA_ADMIN_RSS_UDP4 = 1, ENA_ADMIN_RSS_TCP6 = 2, ENA_ADMIN_RSS_UDP6 = 3, ENA_ADMIN_RSS_IP4 = 4, ENA_ADMIN_RSS_IP6 = 5, ENA_ADMIN_RSS_IP4_FRAG = 6, ENA_ADMIN_RSS_NOT_IP = 7, /* TCPv6 with extension header */ ENA_ADMIN_RSS_TCP6_EX = 8, /* IPv6 with extension header */ ENA_ADMIN_RSS_IP6_EX = 9, ENA_ADMIN_RSS_PROTO_NUM = 16, }; /* RSS flow hash fields */ enum ena_admin_flow_hash_fields { /* Ethernet Dest Addr */ ENA_ADMIN_RSS_L2_DA = BIT(0), /* Ethernet Src Addr */ ENA_ADMIN_RSS_L2_SA = BIT(1), /* ipv4/6 Dest Addr */ ENA_ADMIN_RSS_L3_DA = BIT(2), /* ipv4/6 Src Addr */ ENA_ADMIN_RSS_L3_SA = BIT(3), /* tcp/udp Dest Port */ ENA_ADMIN_RSS_L4_DP = BIT(4), /* tcp/udp Src Port */ ENA_ADMIN_RSS_L4_SP = BIT(5), }; struct ena_admin_proto_input { /* flow hash fields (bitwise according to ena_admin_flow_hash_fields) */ uint16_t fields; uint16_t reserved2; }; struct ena_admin_feature_rss_hash_control { struct ena_admin_proto_input supported_fields[ENA_ADMIN_RSS_PROTO_NUM]; struct ena_admin_proto_input selected_fields[ENA_ADMIN_RSS_PROTO_NUM]; struct ena_admin_proto_input reserved2[ENA_ADMIN_RSS_PROTO_NUM]; struct ena_admin_proto_input reserved3[ENA_ADMIN_RSS_PROTO_NUM]; }; struct ena_admin_feature_rss_flow_hash_input { /* supported hash input sorting * 1 : L3_sort - support swap L3 addresses if DA is * smaller than SA * 2 : L4_sort - support swap L4 ports if DP smaller * SP */ uint16_t supported_input_sort; /* enabled hash input sorting * 1 : enable_L3_sort - enable swap L3 addresses if * DA smaller than SA * 2 : enable_L4_sort - enable swap L4 ports if DP * smaller than SP */ uint16_t enabled_input_sort; }; enum ena_admin_os_type { ENA_ADMIN_OS_LINUX = 1, ENA_ADMIN_OS_WIN = 2, ENA_ADMIN_OS_DPDK = 3, ENA_ADMIN_OS_FREEBSD = 4, ENA_ADMIN_OS_IPXE = 5, ENA_ADMIN_OS_ESXI = 6, ENA_ADMIN_OS_GROUPS_NUM = 6, }; struct ena_admin_host_info { /* defined in enum ena_admin_os_type */ uint32_t os_type; /* os distribution string format */ uint8_t os_dist_str[128]; /* OS distribution numeric format */ uint32_t os_dist; /* kernel version string format */ uint8_t kernel_ver_str[32]; /* Kernel version numeric format */ uint32_t kernel_ver; /* 7:0 : major * 15:8 : minor * 23:16 : sub_minor * 31:24 : module_type */ uint32_t driver_version; /* features bitmap */ uint32_t supported_network_features[2]; /* ENA spec version of driver */ uint16_t ena_spec_version; /* ENA device's Bus, Device and Function * 2:0 : function * 7:3 : device * 15:8 : bus */ uint16_t bdf; /* Number of CPUs */ uint16_t num_cpus; uint16_t reserved; - /* 0 : mutable_rss_table_size + /* 0 : reserved * 1 : rx_offset * 2 : interrupt_moderation - * 3 : map_rx_buf_bidirectional - * 31:4 : reserved + * 3 : rx_buf_mirroring + * 4 : rss_configurable_function_key + * 31:5 : reserved */ uint32_t driver_supported_features; }; struct ena_admin_rss_ind_table_entry { uint16_t cq_idx; uint16_t reserved; }; struct ena_admin_feature_rss_ind_table { /* min supported table size (2^min_size) */ uint16_t min_size; /* max supported table size (2^max_size) */ uint16_t max_size; /* table size (2^size) */ uint16_t size; /* 0 : one_entry_update - The ENA device supports * setting a single RSS table entry */ uint8_t flags; uint8_t reserved; /* index of the inline entry. 0xFFFFFFFF means invalid */ uint32_t inline_index; /* used for updating single entry, ignored when setting the entire * table through the control buffer. 
*/ struct ena_admin_rss_ind_table_entry inline_entry; }; /* When hint value is 0, driver should use it's own predefined value */ struct ena_admin_ena_hw_hints { /* value in ms */ uint16_t mmio_read_timeout; /* value in ms */ uint16_t driver_watchdog_timeout; /* Per packet tx completion timeout. value in ms */ uint16_t missing_tx_completion_timeout; uint16_t missed_tx_completion_count_threshold_to_reset; /* value in ms */ uint16_t admin_completion_tx_timeout; uint16_t netdev_wd_timeout; uint16_t max_tx_sgl_size; uint16_t max_rx_sgl_size; uint16_t reserved[8]; }; struct ena_admin_get_feat_cmd { struct ena_admin_aq_common_desc aq_common_descriptor; struct ena_admin_ctrl_buff_info control_buffer; struct ena_admin_get_set_feature_common_desc feat_common; uint32_t raw[11]; }; struct ena_admin_queue_ext_feature_desc { /* version */ uint8_t version; uint8_t reserved1[3]; union { struct ena_admin_queue_ext_feature_fields max_queue_ext; uint32_t raw[10]; - } ; + }; }; struct ena_admin_get_feat_resp { struct ena_admin_acq_common_desc acq_common_desc; union { uint32_t raw[14]; struct ena_admin_device_attr_feature_desc dev_attr; struct ena_admin_feature_llq_desc llq; struct ena_admin_queue_feature_desc max_queue; struct ena_admin_queue_ext_feature_desc max_queue_ext; struct ena_admin_feature_aenq_desc aenq; struct ena_admin_get_feature_link_desc link; struct ena_admin_feature_offload_desc offload; struct ena_admin_feature_rss_flow_hash_function flow_hash_func; struct ena_admin_feature_rss_flow_hash_input flow_hash_input; struct ena_admin_feature_rss_ind_table ind_table; struct ena_admin_feature_intr_moder_desc intr_moderation; struct ena_admin_ena_hw_hints hw_hints; struct ena_admin_get_extra_properties_strings_desc extra_properties_strings; struct ena_admin_get_extra_properties_flags_desc extra_properties_flags; } u; }; struct ena_admin_set_feat_cmd { struct ena_admin_aq_common_desc aq_common_descriptor; struct ena_admin_ctrl_buff_info control_buffer; struct ena_admin_get_set_feature_common_desc feat_common; union { uint32_t raw[11]; /* mtu size */ struct ena_admin_set_feature_mtu_desc mtu; /* host attributes */ struct ena_admin_set_feature_host_attr_desc host_attr; /* AENQ configuration */ struct ena_admin_feature_aenq_desc aenq; /* rss flow hash function */ struct ena_admin_feature_rss_flow_hash_function flow_hash_func; /* rss flow hash input */ struct ena_admin_feature_rss_flow_hash_input flow_hash_input; /* rss indirection table */ struct ena_admin_feature_rss_ind_table ind_table; /* LLQ configuration */ struct ena_admin_feature_llq_desc llq; } u; }; struct ena_admin_set_feat_resp { struct ena_admin_acq_common_desc acq_common_desc; union { uint32_t raw[14]; } u; }; struct ena_admin_aenq_common_desc { uint16_t group; - uint16_t syndrom; + uint16_t syndrome; /* 0 : phase * 7:1 : reserved - MBZ */ uint8_t flags; uint8_t reserved1[3]; uint32_t timestamp_low; uint32_t timestamp_high; }; /* asynchronous event notification groups */ enum ena_admin_aenq_group { ENA_ADMIN_LINK_CHANGE = 0, ENA_ADMIN_FATAL_ERROR = 1, ENA_ADMIN_WARNING = 2, ENA_ADMIN_NOTIFICATION = 3, ENA_ADMIN_KEEP_ALIVE = 4, ENA_ADMIN_AENQ_GROUPS_NUM = 5, }; -enum ena_admin_aenq_notification_syndrom { +enum ena_admin_aenq_notification_syndrome { ENA_ADMIN_SUSPEND = 0, ENA_ADMIN_RESUME = 1, ENA_ADMIN_UPDATE_HINTS = 2, }; struct ena_admin_aenq_entry { struct ena_admin_aenq_common_desc aenq_common_desc; /* command specific inline data */ uint32_t inline_data_w4[12]; }; struct ena_admin_aenq_link_change_desc { struct ena_admin_aenq_common_desc 
aenq_common_desc; /* 0 : link_status */ uint32_t flags; }; struct ena_admin_aenq_keep_alive_desc { struct ena_admin_aenq_common_desc aenq_common_desc; uint32_t rx_drops_low; uint32_t rx_drops_high; uint32_t tx_drops_low; uint32_t tx_drops_high; }; struct ena_admin_ena_mmio_req_read_less_resp { uint16_t req_id; uint16_t reg_off; /* value is valid when poll is cleared */ uint32_t reg_val; }; /* aq_common_desc */ #define ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK GENMASK(11, 0) #define ENA_ADMIN_AQ_COMMON_DESC_PHASE_MASK BIT(0) #define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT 1 #define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK BIT(1) #define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT 2 #define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK BIT(2) /* sq */ #define ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT 5 #define ENA_ADMIN_SQ_SQ_DIRECTION_MASK GENMASK(7, 5) /* acq_common_desc */ #define ENA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID_MASK GENMASK(11, 0) #define ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK BIT(0) /* aq_create_sq_cmd */ #define ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT 5 #define ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK GENMASK(7, 5) #define ENA_ADMIN_AQ_CREATE_SQ_CMD_PLACEMENT_POLICY_MASK GENMASK(3, 0) #define ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT 4 #define ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK GENMASK(6, 4) #define ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK BIT(0) /* aq_create_cq_cmd */ #define ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT 5 #define ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK BIT(5) #define ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK GENMASK(4, 0) /* get_set_feature_common_desc */ #define ENA_ADMIN_GET_SET_FEATURE_COMMON_DESC_SELECT_MASK GENMASK(1, 0) /* get_feature_link_desc */ #define ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK BIT(0) #define ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT 1 #define ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK BIT(1) /* feature_offload_desc */ #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK BIT(0) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT 1 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK BIT(1) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT 2 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK BIT(2) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT 3 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK BIT(3) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT 4 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK BIT(4) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT 5 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK BIT(5) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT 6 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK BIT(6) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT 7 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK BIT(7) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK BIT(0) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT 1 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK BIT(1) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT 2 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK BIT(2) #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT 3 #define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK BIT(3) /* feature_rss_flow_hash_function */ #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK GENMASK(7, 0) #define 
ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK GENMASK(7, 0) /* feature_rss_flow_hash_input */ #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT 1 #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK BIT(1) #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT 2 #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK BIT(2) #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT 1 #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK BIT(1) #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT 2 #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK BIT(2) /* host_info */ #define ENA_ADMIN_HOST_INFO_MAJOR_MASK GENMASK(7, 0) #define ENA_ADMIN_HOST_INFO_MINOR_SHIFT 8 #define ENA_ADMIN_HOST_INFO_MINOR_MASK GENMASK(15, 8) #define ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT 16 #define ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK GENMASK(23, 16) #define ENA_ADMIN_HOST_INFO_MODULE_TYPE_SHIFT 24 #define ENA_ADMIN_HOST_INFO_MODULE_TYPE_MASK GENMASK(31, 24) #define ENA_ADMIN_HOST_INFO_FUNCTION_MASK GENMASK(2, 0) #define ENA_ADMIN_HOST_INFO_DEVICE_SHIFT 3 #define ENA_ADMIN_HOST_INFO_DEVICE_MASK GENMASK(7, 3) #define ENA_ADMIN_HOST_INFO_BUS_SHIFT 8 #define ENA_ADMIN_HOST_INFO_BUS_MASK GENMASK(15, 8) -#define ENA_ADMIN_HOST_INFO_MUTABLE_RSS_TABLE_SIZE_MASK BIT(0) #define ENA_ADMIN_HOST_INFO_RX_OFFSET_SHIFT 1 #define ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK BIT(1) #define ENA_ADMIN_HOST_INFO_INTERRUPT_MODERATION_SHIFT 2 #define ENA_ADMIN_HOST_INFO_INTERRUPT_MODERATION_MASK BIT(2) -#define ENA_ADMIN_HOST_INFO_MAP_RX_BUF_BIDIRECTIONAL_SHIFT 3 -#define ENA_ADMIN_HOST_INFO_MAP_RX_BUF_BIDIRECTIONAL_MASK BIT(3) +#define ENA_ADMIN_HOST_INFO_RX_BUF_MIRRORING_SHIFT 3 +#define ENA_ADMIN_HOST_INFO_RX_BUF_MIRRORING_MASK BIT(3) +#define ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_SHIFT 4 +#define ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_MASK BIT(4) /* feature_rss_ind_table */ #define ENA_ADMIN_FEATURE_RSS_IND_TABLE_ONE_ENTRY_UPDATE_MASK BIT(0) /* aenq_common_desc */ #define ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK BIT(0) /* aenq_link_change_desc */ #define ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK BIT(0) #if !defined(DEFS_LINUX_MAINLINE) static inline uint16_t get_ena_admin_aq_common_desc_command_id(const struct ena_admin_aq_common_desc *p) { return p->command_id & ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK; } static inline void set_ena_admin_aq_common_desc_command_id(struct ena_admin_aq_common_desc *p, uint16_t val) { p->command_id |= val & ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK; } static inline uint8_t get_ena_admin_aq_common_desc_phase(const struct ena_admin_aq_common_desc *p) { return p->flags & ENA_ADMIN_AQ_COMMON_DESC_PHASE_MASK; } static inline void set_ena_admin_aq_common_desc_phase(struct ena_admin_aq_common_desc *p, uint8_t val) { p->flags |= val & ENA_ADMIN_AQ_COMMON_DESC_PHASE_MASK; } static inline uint8_t get_ena_admin_aq_common_desc_ctrl_data(const struct ena_admin_aq_common_desc *p) { return (p->flags & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK) >> ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT; } static inline void set_ena_admin_aq_common_desc_ctrl_data(struct ena_admin_aq_common_desc *p, uint8_t val) { p->flags |= (val << ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT) & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK; } static inline uint8_t get_ena_admin_aq_common_desc_ctrl_data_indirect(const struct ena_admin_aq_common_desc *p) { return (p->flags & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK) >> 
ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT; } static inline void set_ena_admin_aq_common_desc_ctrl_data_indirect(struct ena_admin_aq_common_desc *p, uint8_t val) { p->flags |= (val << ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT) & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK; } static inline uint8_t get_ena_admin_sq_sq_direction(const struct ena_admin_sq *p) { return (p->sq_identity & ENA_ADMIN_SQ_SQ_DIRECTION_MASK) >> ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT; } static inline void set_ena_admin_sq_sq_direction(struct ena_admin_sq *p, uint8_t val) { p->sq_identity |= (val << ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT) & ENA_ADMIN_SQ_SQ_DIRECTION_MASK; } static inline uint16_t get_ena_admin_acq_common_desc_command_id(const struct ena_admin_acq_common_desc *p) { return p->command & ENA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID_MASK; } static inline void set_ena_admin_acq_common_desc_command_id(struct ena_admin_acq_common_desc *p, uint16_t val) { p->command |= val & ENA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID_MASK; } static inline uint8_t get_ena_admin_acq_common_desc_phase(const struct ena_admin_acq_common_desc *p) { return p->flags & ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK; } static inline void set_ena_admin_acq_common_desc_phase(struct ena_admin_acq_common_desc *p, uint8_t val) { p->flags |= val & ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK; } static inline uint8_t get_ena_admin_aq_create_sq_cmd_sq_direction(const struct ena_admin_aq_create_sq_cmd *p) { return (p->sq_identity & ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK) >> ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT; } static inline void set_ena_admin_aq_create_sq_cmd_sq_direction(struct ena_admin_aq_create_sq_cmd *p, uint8_t val) { p->sq_identity |= (val << ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT) & ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK; } static inline uint8_t get_ena_admin_aq_create_sq_cmd_placement_policy(const struct ena_admin_aq_create_sq_cmd *p) { return p->sq_caps_2 & ENA_ADMIN_AQ_CREATE_SQ_CMD_PLACEMENT_POLICY_MASK; } static inline void set_ena_admin_aq_create_sq_cmd_placement_policy(struct ena_admin_aq_create_sq_cmd *p, uint8_t val) { p->sq_caps_2 |= val & ENA_ADMIN_AQ_CREATE_SQ_CMD_PLACEMENT_POLICY_MASK; } static inline uint8_t get_ena_admin_aq_create_sq_cmd_completion_policy(const struct ena_admin_aq_create_sq_cmd *p) { return (p->sq_caps_2 & ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK) >> ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT; } static inline void set_ena_admin_aq_create_sq_cmd_completion_policy(struct ena_admin_aq_create_sq_cmd *p, uint8_t val) { p->sq_caps_2 |= (val << ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT) & ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK; } static inline uint8_t get_ena_admin_aq_create_sq_cmd_is_physically_contiguous(const struct ena_admin_aq_create_sq_cmd *p) { return p->sq_caps_3 & ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK; } static inline void set_ena_admin_aq_create_sq_cmd_is_physically_contiguous(struct ena_admin_aq_create_sq_cmd *p, uint8_t val) { p->sq_caps_3 |= val & ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK; } static inline uint8_t get_ena_admin_aq_create_cq_cmd_interrupt_mode_enabled(const struct ena_admin_aq_create_cq_cmd *p) { return (p->cq_caps_1 & ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK) >> ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT; } static inline void set_ena_admin_aq_create_cq_cmd_interrupt_mode_enabled(struct ena_admin_aq_create_cq_cmd *p, uint8_t val) { p->cq_caps_1 |= (val << 
ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT) & ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK; } static inline uint8_t get_ena_admin_aq_create_cq_cmd_cq_entry_size_words(const struct ena_admin_aq_create_cq_cmd *p) { return p->cq_caps_2 & ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK; } static inline void set_ena_admin_aq_create_cq_cmd_cq_entry_size_words(struct ena_admin_aq_create_cq_cmd *p, uint8_t val) { p->cq_caps_2 |= val & ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK; } static inline uint8_t get_ena_admin_get_set_feature_common_desc_select(const struct ena_admin_get_set_feature_common_desc *p) { return p->flags & ENA_ADMIN_GET_SET_FEATURE_COMMON_DESC_SELECT_MASK; } static inline void set_ena_admin_get_set_feature_common_desc_select(struct ena_admin_get_set_feature_common_desc *p, uint8_t val) { p->flags |= val & ENA_ADMIN_GET_SET_FEATURE_COMMON_DESC_SELECT_MASK; } static inline uint32_t get_ena_admin_get_feature_link_desc_autoneg(const struct ena_admin_get_feature_link_desc *p) { return p->flags & ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK; } static inline void set_ena_admin_get_feature_link_desc_autoneg(struct ena_admin_get_feature_link_desc *p, uint32_t val) { p->flags |= val & ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK; } static inline uint32_t get_ena_admin_get_feature_link_desc_duplex(const struct ena_admin_get_feature_link_desc *p) { return (p->flags & ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK) >> ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT; } static inline void set_ena_admin_get_feature_link_desc_duplex(struct ena_admin_get_feature_link_desc *p, uint32_t val) { p->flags |= (val << ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT) & ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_TX_L3_csum_ipv4(const struct ena_admin_feature_offload_desc *p) { return p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK; } static inline void set_ena_admin_feature_offload_desc_TX_L3_csum_ipv4(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= val & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_part(const struct ena_admin_feature_offload_desc *p) { return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT; } static inline void set_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_part(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_full(const struct ena_admin_feature_offload_desc *p) { return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT; } static inline void set_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_full(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_part(const struct ena_admin_feature_offload_desc *p) { return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT; } static inline void 
set_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_part(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_full(const struct ena_admin_feature_offload_desc *p) { return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT; } static inline void set_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_full(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_tso_ipv4(const struct ena_admin_feature_offload_desc *p) { return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT; } static inline void set_ena_admin_feature_offload_desc_tso_ipv4(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_tso_ipv6(const struct ena_admin_feature_offload_desc *p) { return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT; } static inline void set_ena_admin_feature_offload_desc_tso_ipv6(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_tso_ecn(const struct ena_admin_feature_offload_desc *p) { return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT; } static inline void set_ena_admin_feature_offload_desc_tso_ecn(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_RX_L3_csum_ipv4(const struct ena_admin_feature_offload_desc *p) { return p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK; } static inline void set_ena_admin_feature_offload_desc_RX_L3_csum_ipv4(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->rx_supported |= val & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_RX_L4_ipv4_csum(const struct ena_admin_feature_offload_desc *p) { return (p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT; } static inline void set_ena_admin_feature_offload_desc_RX_L4_ipv4_csum(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->rx_supported |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_RX_L4_ipv6_csum(const struct ena_admin_feature_offload_desc *p) { return (p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT; } static inline void set_ena_admin_feature_offload_desc_RX_L4_ipv6_csum(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->rx_supported |= (val << 
ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK; } static inline uint32_t get_ena_admin_feature_offload_desc_RX_hash(const struct ena_admin_feature_offload_desc *p) { return (p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT; } static inline void set_ena_admin_feature_offload_desc_RX_hash(struct ena_admin_feature_offload_desc *p, uint32_t val) { p->rx_supported |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK; } static inline uint32_t get_ena_admin_feature_rss_flow_hash_function_funcs(const struct ena_admin_feature_rss_flow_hash_function *p) { return p->supported_func & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK; } static inline void set_ena_admin_feature_rss_flow_hash_function_funcs(struct ena_admin_feature_rss_flow_hash_function *p, uint32_t val) { p->supported_func |= val & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK; } static inline uint32_t get_ena_admin_feature_rss_flow_hash_function_selected_func(const struct ena_admin_feature_rss_flow_hash_function *p) { return p->selected_func & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK; } static inline void set_ena_admin_feature_rss_flow_hash_function_selected_func(struct ena_admin_feature_rss_flow_hash_function *p, uint32_t val) { p->selected_func |= val & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK; } static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_L3_sort(const struct ena_admin_feature_rss_flow_hash_input *p) { return (p->supported_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT; } static inline void set_ena_admin_feature_rss_flow_hash_input_L3_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val) { p->supported_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK; } static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_L4_sort(const struct ena_admin_feature_rss_flow_hash_input *p) { return (p->supported_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT; } static inline void set_ena_admin_feature_rss_flow_hash_input_L4_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val) { p->supported_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK; } static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_enable_L3_sort(const struct ena_admin_feature_rss_flow_hash_input *p) { return (p->enabled_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT; } static inline void set_ena_admin_feature_rss_flow_hash_input_enable_L3_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val) { p->enabled_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK; } static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_enable_L4_sort(const struct ena_admin_feature_rss_flow_hash_input *p) { return (p->enabled_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT; } static inline void 
set_ena_admin_feature_rss_flow_hash_input_enable_L4_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val) { p->enabled_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK; } static inline uint32_t get_ena_admin_host_info_major(const struct ena_admin_host_info *p) { return p->driver_version & ENA_ADMIN_HOST_INFO_MAJOR_MASK; } static inline void set_ena_admin_host_info_major(struct ena_admin_host_info *p, uint32_t val) { p->driver_version |= val & ENA_ADMIN_HOST_INFO_MAJOR_MASK; } static inline uint32_t get_ena_admin_host_info_minor(const struct ena_admin_host_info *p) { return (p->driver_version & ENA_ADMIN_HOST_INFO_MINOR_MASK) >> ENA_ADMIN_HOST_INFO_MINOR_SHIFT; } static inline void set_ena_admin_host_info_minor(struct ena_admin_host_info *p, uint32_t val) { p->driver_version |= (val << ENA_ADMIN_HOST_INFO_MINOR_SHIFT) & ENA_ADMIN_HOST_INFO_MINOR_MASK; } static inline uint32_t get_ena_admin_host_info_sub_minor(const struct ena_admin_host_info *p) { return (p->driver_version & ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK) >> ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT; } static inline void set_ena_admin_host_info_sub_minor(struct ena_admin_host_info *p, uint32_t val) { p->driver_version |= (val << ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT) & ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK; } static inline uint32_t get_ena_admin_host_info_module_type(const struct ena_admin_host_info *p) { return (p->driver_version & ENA_ADMIN_HOST_INFO_MODULE_TYPE_MASK) >> ENA_ADMIN_HOST_INFO_MODULE_TYPE_SHIFT; } static inline void set_ena_admin_host_info_module_type(struct ena_admin_host_info *p, uint32_t val) { p->driver_version |= (val << ENA_ADMIN_HOST_INFO_MODULE_TYPE_SHIFT) & ENA_ADMIN_HOST_INFO_MODULE_TYPE_MASK; } static inline uint16_t get_ena_admin_host_info_function(const struct ena_admin_host_info *p) { return p->bdf & ENA_ADMIN_HOST_INFO_FUNCTION_MASK; } static inline void set_ena_admin_host_info_function(struct ena_admin_host_info *p, uint16_t val) { p->bdf |= val & ENA_ADMIN_HOST_INFO_FUNCTION_MASK; } static inline uint16_t get_ena_admin_host_info_device(const struct ena_admin_host_info *p) { return (p->bdf & ENA_ADMIN_HOST_INFO_DEVICE_MASK) >> ENA_ADMIN_HOST_INFO_DEVICE_SHIFT; } static inline void set_ena_admin_host_info_device(struct ena_admin_host_info *p, uint16_t val) { p->bdf |= (val << ENA_ADMIN_HOST_INFO_DEVICE_SHIFT) & ENA_ADMIN_HOST_INFO_DEVICE_MASK; } static inline uint16_t get_ena_admin_host_info_bus(const struct ena_admin_host_info *p) { return (p->bdf & ENA_ADMIN_HOST_INFO_BUS_MASK) >> ENA_ADMIN_HOST_INFO_BUS_SHIFT; } static inline void set_ena_admin_host_info_bus(struct ena_admin_host_info *p, uint16_t val) { p->bdf |= (val << ENA_ADMIN_HOST_INFO_BUS_SHIFT) & ENA_ADMIN_HOST_INFO_BUS_MASK; } -static inline uint32_t get_ena_admin_host_info_mutable_rss_table_size(const struct ena_admin_host_info *p) -{ - return p->driver_supported_features & ENA_ADMIN_HOST_INFO_MUTABLE_RSS_TABLE_SIZE_MASK; -} - -static inline void set_ena_admin_host_info_mutable_rss_table_size(struct ena_admin_host_info *p, uint32_t val) -{ - p->driver_supported_features |= val & ENA_ADMIN_HOST_INFO_MUTABLE_RSS_TABLE_SIZE_MASK; -} - static inline uint32_t get_ena_admin_host_info_rx_offset(const struct ena_admin_host_info *p) { return (p->driver_supported_features & ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK) >> ENA_ADMIN_HOST_INFO_RX_OFFSET_SHIFT; } static inline void set_ena_admin_host_info_rx_offset(struct ena_admin_host_info *p, uint32_t val) { 
p->driver_supported_features |= (val << ENA_ADMIN_HOST_INFO_RX_OFFSET_SHIFT) & ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK; } static inline uint32_t get_ena_admin_host_info_interrupt_moderation(const struct ena_admin_host_info *p) { return (p->driver_supported_features & ENA_ADMIN_HOST_INFO_INTERRUPT_MODERATION_MASK) >> ENA_ADMIN_HOST_INFO_INTERRUPT_MODERATION_SHIFT; } static inline void set_ena_admin_host_info_interrupt_moderation(struct ena_admin_host_info *p, uint32_t val) { p->driver_supported_features |= (val << ENA_ADMIN_HOST_INFO_INTERRUPT_MODERATION_SHIFT) & ENA_ADMIN_HOST_INFO_INTERRUPT_MODERATION_MASK; } -static inline uint32_t get_ena_admin_host_info_map_rx_buf_bidirectional(const struct ena_admin_host_info *p) +static inline uint32_t get_ena_admin_host_info_rx_buf_mirroring(const struct ena_admin_host_info *p) { - return (p->driver_supported_features & ENA_ADMIN_HOST_INFO_MAP_RX_BUF_BIDIRECTIONAL_MASK) >> ENA_ADMIN_HOST_INFO_MAP_RX_BUF_BIDIRECTIONAL_SHIFT; + return (p->driver_supported_features & ENA_ADMIN_HOST_INFO_RX_BUF_MIRRORING_MASK) >> ENA_ADMIN_HOST_INFO_RX_BUF_MIRRORING_SHIFT; } -static inline void set_ena_admin_host_info_map_rx_buf_bidirectional(struct ena_admin_host_info *p, uint32_t val) +static inline void set_ena_admin_host_info_rx_buf_mirroring(struct ena_admin_host_info *p, uint32_t val) { - p->driver_supported_features |= (val << ENA_ADMIN_HOST_INFO_MAP_RX_BUF_BIDIRECTIONAL_SHIFT) & ENA_ADMIN_HOST_INFO_MAP_RX_BUF_BIDIRECTIONAL_MASK; + p->driver_supported_features |= (val << ENA_ADMIN_HOST_INFO_RX_BUF_MIRRORING_SHIFT) & ENA_ADMIN_HOST_INFO_RX_BUF_MIRRORING_MASK; +} + +static inline uint32_t get_ena_admin_host_info_rss_configurable_function_key(const struct ena_admin_host_info *p) +{ + return (p->driver_supported_features & ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_MASK) >> ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_SHIFT; +} + +static inline void set_ena_admin_host_info_rss_configurable_function_key(struct ena_admin_host_info *p, uint32_t val) +{ + p->driver_supported_features |= (val << ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_SHIFT) & ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_MASK; } static inline uint8_t get_ena_admin_feature_rss_ind_table_one_entry_update(const struct ena_admin_feature_rss_ind_table *p) { return p->flags & ENA_ADMIN_FEATURE_RSS_IND_TABLE_ONE_ENTRY_UPDATE_MASK; } static inline void set_ena_admin_feature_rss_ind_table_one_entry_update(struct ena_admin_feature_rss_ind_table *p, uint8_t val) { p->flags |= val & ENA_ADMIN_FEATURE_RSS_IND_TABLE_ONE_ENTRY_UPDATE_MASK; } static inline uint8_t get_ena_admin_aenq_common_desc_phase(const struct ena_admin_aenq_common_desc *p) { return p->flags & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK; } static inline void set_ena_admin_aenq_common_desc_phase(struct ena_admin_aenq_common_desc *p, uint8_t val) { p->flags |= val & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK; } static inline uint32_t get_ena_admin_aenq_link_change_desc_link_status(const struct ena_admin_aenq_link_change_desc *p) { return p->flags & ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK; } static inline void set_ena_admin_aenq_link_change_desc_link_status(struct ena_admin_aenq_link_change_desc *p, uint32_t val) { p->flags |= val & ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK; } #endif /* !defined(DEFS_LINUX_MAINLINE) */ #endif /* _ENA_ADMIN_H_ */ Index: stable/12/sys/contrib/ena-com/ena_defs/ena_common_defs.h =================================================================== --- 
stable/12/sys/contrib/ena-com/ena_defs/ena_common_defs.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_defs/ena_common_defs.h (revision 368012) @@ -1,49 +1,49 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef _ENA_COMMON_H_ #define _ENA_COMMON_H_ #define ENA_COMMON_SPEC_VERSION_MAJOR 2 #define ENA_COMMON_SPEC_VERSION_MINOR 0 /* ENA operates with 48-bit memory addresses. ena_mem_addr_t */ struct ena_common_mem_addr { uint32_t mem_addr_low; uint16_t mem_addr_high; /* MBZ */ uint16_t reserved16; }; #endif /* _ENA_COMMON_H_ */ Index: stable/12/sys/contrib/ena-com/ena_defs/ena_eth_io_defs.h =================================================================== --- stable/12/sys/contrib/ena-com/ena_defs/ena_eth_io_defs.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_defs/ena_eth_io_defs.h (revision 368012) @@ -1,970 +1,970 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef _ENA_ETH_IO_H_ #define _ENA_ETH_IO_H_ enum ena_eth_io_l3_proto_index { ENA_ETH_IO_L3_PROTO_UNKNOWN = 0, ENA_ETH_IO_L3_PROTO_IPV4 = 8, ENA_ETH_IO_L3_PROTO_IPV6 = 11, ENA_ETH_IO_L3_PROTO_FCOE = 21, ENA_ETH_IO_L3_PROTO_ROCE = 22, }; enum ena_eth_io_l4_proto_index { ENA_ETH_IO_L4_PROTO_UNKNOWN = 0, ENA_ETH_IO_L4_PROTO_TCP = 12, ENA_ETH_IO_L4_PROTO_UDP = 13, ENA_ETH_IO_L4_PROTO_ROUTEABLE_ROCE = 23, }; struct ena_eth_io_tx_desc { /* 15:0 : length - Buffer length in bytes, must * include any packet trailers that the ENA supposed * to update like End-to-End CRC, Authentication GMAC * etc. This length must not include the * 'Push_Buffer' length. This length must not include * the 4-byte added in the end for 802.3 Ethernet FCS * 21:16 : req_id_hi - Request ID[15:10] * 22 : reserved22 - MBZ * 23 : meta_desc - MBZ * 24 : phase * 25 : reserved1 - MBZ * 26 : first - Indicates first descriptor in * transaction * 27 : last - Indicates last descriptor in * transaction * 28 : comp_req - Indicates whether completion * should be posted, after packet is transmitted. * Valid only for first descriptor * 30:29 : reserved29 - MBZ * 31 : reserved31 - MBZ */ uint32_t len_ctrl; /* 3:0 : l3_proto_idx - L3 protocol. This field * required when l3_csum_en,l3_csum or tso_en are set. * 4 : DF - IPv4 DF, must be 0 if packet is IPv4 and * DF flags of the IPv4 header is 0. Otherwise must * be set to 1 * 6:5 : reserved5 * 7 : tso_en - Enable TSO, For TCP only. * 12:8 : l4_proto_idx - L4 protocol. This field need * to be set when l4_csum_en or tso_en are set. * 13 : l3_csum_en - enable IPv4 header checksum. * 14 : l4_csum_en - enable TCP/UDP checksum. * 15 : ethernet_fcs_dis - when set, the controller * will not append the 802.3 Ethernet Frame Check * Sequence to the packet * 16 : reserved16 * 17 : l4_csum_partial - L4 partial checksum. when * set to 0, the ENA calculates the L4 checksum, * where the Destination Address required for the * TCP/UDP pseudo-header is taken from the actual * packet L3 header. when set to 1, the ENA doesn't * calculate the sum of the pseudo-header, instead, * the checksum field of the L4 is used instead. When * TSO enabled, the checksum of the pseudo-header * must not include the tcp length field. L4 partial * checksum should be used for IPv6 packet that * contains Routing Headers. * 20:18 : reserved18 - MBZ * 21 : reserved21 - MBZ * 31:22 : req_id_lo - Request ID[9:0] */ uint32_t meta_ctrl; uint32_t buff_addr_lo; /* address high and header size * 15:0 : addr_hi - Buffer Pointer[47:32] * 23:16 : reserved16_w2 * 31:24 : header_length - Header length. For Low * Latency Queues, this fields indicates the number * of bytes written to the headers' memory. For * normal queues, if packet is TCP or UDP, and longer * than max_header_size, then this field should be * set to the sum of L4 header offset and L4 header * size(without options), otherwise, this field * should be set to 0. For both modes, this field * must not exceed the max_header_size. 
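 * (Editorial aside, not upstream text: as a worked example of the
 * TCP rule above, a packet with a 14-byte Ethernet header, a 20-byte
 * IPv4 header and a 20-byte TCP header without options has its L4
 * header at offset 34, so header_length = 34 + 20 = 54, which must
 * still be within the advertised max_header_size.)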
* max_header_size value is reported by the Max * Queues Feature descriptor */ uint32_t buff_addr_hi_hdr_sz; }; struct ena_eth_io_tx_meta_desc { /* 9:0 : req_id_lo - Request ID[9:0] * 11:10 : reserved10 - MBZ * 12 : reserved12 - MBZ * 13 : reserved13 - MBZ * 14 : ext_valid - if set, offset fields in Word2 * are valid Also MSS High in Word 0 and bits [31:24] * in Word 3 * 15 : reserved15 * 19:16 : mss_hi * 20 : eth_meta_type - 0: Tx Metadata Descriptor, 1: * Extended Metadata Descriptor * 21 : meta_store - Store extended metadata in queue * cache * 22 : reserved22 - MBZ * 23 : meta_desc - MBO * 24 : phase * 25 : reserved25 - MBZ * 26 : first - Indicates first descriptor in * transaction * 27 : last - Indicates last descriptor in * transaction * 28 : comp_req - Indicates whether completion * should be posted, after packet is transmitted. * Valid only for first descriptor * 30:29 : reserved29 - MBZ * 31 : reserved31 - MBZ */ uint32_t len_ctrl; /* 5:0 : req_id_hi * 31:6 : reserved6 - MBZ */ uint32_t word1; /* 7:0 : l3_hdr_len * 15:8 : l3_hdr_off * 21:16 : l4_hdr_len_in_words - counts the L4 header * length in words. there is an explicit assumption * that L4 header appears right after L3 header and * L4 offset is based on l3_hdr_off+l3_hdr_len * 31:22 : mss_lo */ uint32_t word2; uint32_t reserved; }; struct ena_eth_io_tx_cdesc { /* Request ID[15:0] */ uint16_t req_id; uint8_t status; /* flags * 0 : phase * 7:1 : reserved1 */ uint8_t flags; uint16_t sub_qid; uint16_t sq_head_idx; }; struct ena_eth_io_rx_desc { /* In bytes. 0 means 64KB */ uint16_t length; /* MBZ */ uint8_t reserved2; /* 0 : phase * 1 : reserved1 - MBZ * 2 : first - Indicates first descriptor in * transaction * 3 : last - Indicates last descriptor in transaction * 4 : comp_req * 5 : reserved5 - MBO * 7:6 : reserved6 - MBZ */ uint8_t ctrl; uint16_t req_id; /* MBZ */ uint16_t reserved6; uint32_t buff_addr_lo; uint16_t buff_addr_hi; /* MBZ */ uint16_t reserved16_w3; }; /* 4-word format Note: all ethernet parsing information are valid only when * last=1 */ struct ena_eth_io_rx_cdesc_base { /* 4:0 : l3_proto_idx * 6:5 : src_vlan_cnt * 7 : reserved7 - MBZ * 12:8 : l4_proto_idx * 13 : l3_csum_err - when set, either the L3 * checksum error detected, or, the controller didn't * validate the checksum. This bit is valid only when * l3_proto_idx indicates IPv4 packet * 14 : l4_csum_err - when set, either the L4 * checksum error detected, or, the controller didn't * validate the checksum. This bit is valid only when * l4_proto_idx indicates TCP/UDP packet, and, * ipv4_frag is not set. This bit is valid only when * l4_csum_checked below is set. * 15 : ipv4_frag - Indicates IPv4 fragmented packet * 16 : l4_csum_checked - L4 checksum was verified * (could be OK or error), when cleared the status of * checksum is unknown * 23:17 : reserved17 - MBZ * 24 : phase * 25 : l3_csum2 - second checksum engine result * 26 : first - Indicates first descriptor in * transaction * 27 : last - Indicates last descriptor in * transaction * 29:28 : reserved28 * 30 : buffer - 0: Metadata descriptor. 
1: Buffer * Descriptor was used * 31 : reserved31 */ uint32_t status; uint16_t length; uint16_t req_id; /* 32-bit hash result */ uint32_t hash; uint16_t sub_qid; uint8_t offset; uint8_t reserved; }; /* 8-word format */ struct ena_eth_io_rx_cdesc_ext { struct ena_eth_io_rx_cdesc_base base; uint32_t buff_addr_lo; uint16_t buff_addr_hi; uint16_t reserved16; uint32_t reserved_w6; uint32_t reserved_w7; }; struct ena_eth_io_intr_reg { /* 14:0 : rx_intr_delay * 29:15 : tx_intr_delay * 30 : intr_unmask * 31 : reserved */ uint32_t intr_control; }; struct ena_eth_io_numa_node_cfg_reg { /* 7:0 : numa * 30:8 : reserved * 31 : enabled */ uint32_t numa_cfg; }; /* tx_desc */ #define ENA_ETH_IO_TX_DESC_LENGTH_MASK GENMASK(15, 0) #define ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT 16 #define ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK GENMASK(21, 16) #define ENA_ETH_IO_TX_DESC_META_DESC_SHIFT 23 #define ENA_ETH_IO_TX_DESC_META_DESC_MASK BIT(23) #define ENA_ETH_IO_TX_DESC_PHASE_SHIFT 24 #define ENA_ETH_IO_TX_DESC_PHASE_MASK BIT(24) #define ENA_ETH_IO_TX_DESC_FIRST_SHIFT 26 #define ENA_ETH_IO_TX_DESC_FIRST_MASK BIT(26) #define ENA_ETH_IO_TX_DESC_LAST_SHIFT 27 #define ENA_ETH_IO_TX_DESC_LAST_MASK BIT(27) #define ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT 28 #define ENA_ETH_IO_TX_DESC_COMP_REQ_MASK BIT(28) #define ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK GENMASK(3, 0) #define ENA_ETH_IO_TX_DESC_DF_SHIFT 4 #define ENA_ETH_IO_TX_DESC_DF_MASK BIT(4) #define ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT 7 #define ENA_ETH_IO_TX_DESC_TSO_EN_MASK BIT(7) #define ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT 8 #define ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK GENMASK(12, 8) #define ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT 13 #define ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK BIT(13) #define ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT 14 #define ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK BIT(14) #define ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT 15 #define ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK BIT(15) #define ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT 17 #define ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK BIT(17) #define ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT 22 #define ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK GENMASK(31, 22) #define ENA_ETH_IO_TX_DESC_ADDR_HI_MASK GENMASK(15, 0) #define ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT 24 #define ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK GENMASK(31, 24) /* tx_meta_desc */ #define ENA_ETH_IO_TX_META_DESC_REQ_ID_LO_MASK GENMASK(9, 0) #define ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT 14 #define ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK BIT(14) #define ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT 16 #define ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK GENMASK(19, 16) #define ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT 20 #define ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK BIT(20) #define ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT 21 #define ENA_ETH_IO_TX_META_DESC_META_STORE_MASK BIT(21) #define ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT 23 #define ENA_ETH_IO_TX_META_DESC_META_DESC_MASK BIT(23) #define ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT 24 #define ENA_ETH_IO_TX_META_DESC_PHASE_MASK BIT(24) #define ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT 26 #define ENA_ETH_IO_TX_META_DESC_FIRST_MASK BIT(26) #define ENA_ETH_IO_TX_META_DESC_LAST_SHIFT 27 #define ENA_ETH_IO_TX_META_DESC_LAST_MASK BIT(27) #define ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT 28 #define ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK BIT(28) #define ENA_ETH_IO_TX_META_DESC_REQ_ID_HI_MASK GENMASK(5, 0) #define ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK GENMASK(7, 0) #define ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT 8 #define ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK GENMASK(15, 8) 
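/*
 * Editorial sketch, not part of the upstream header: a minimal example of
 * how the tx_meta_desc flag masks above are meant to be combined when a
 * driver prepares a metadata descriptor for a single-descriptor
 * transaction. The example_ name is hypothetical; the masks and the
 * struct layout are the ones defined earlier in this file.
 */
static inline void
example_tx_meta_desc_init(struct ena_eth_io_tx_meta_desc *p, uint32_t phase)
{
	/* Mark the descriptor as a metadata descriptor. */
	p->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_DESC_MASK;
	/* Single-descriptor transaction: first and last are both set. */
	p->len_ctrl |= ENA_ETH_IO_TX_META_DESC_FIRST_MASK |
	    ENA_ETH_IO_TX_META_DESC_LAST_MASK;
	/* Encode the current phase bit so the device sees a new entry. */
	p->len_ctrl |= (phase << ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) &
	    ENA_ETH_IO_TX_META_DESC_PHASE_MASK;
}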
#define ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT 16 #define ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK GENMASK(21, 16) #define ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT 22 #define ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK GENMASK(31, 22) /* tx_cdesc */ #define ENA_ETH_IO_TX_CDESC_PHASE_MASK BIT(0) /* rx_desc */ #define ENA_ETH_IO_RX_DESC_PHASE_MASK BIT(0) #define ENA_ETH_IO_RX_DESC_FIRST_SHIFT 2 #define ENA_ETH_IO_RX_DESC_FIRST_MASK BIT(2) #define ENA_ETH_IO_RX_DESC_LAST_SHIFT 3 #define ENA_ETH_IO_RX_DESC_LAST_MASK BIT(3) #define ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT 4 #define ENA_ETH_IO_RX_DESC_COMP_REQ_MASK BIT(4) /* rx_cdesc_base */ #define ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK GENMASK(4, 0) #define ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT 5 #define ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK GENMASK(6, 5) #define ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT 8 #define ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK GENMASK(12, 8) #define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT 13 #define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK BIT(13) #define ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT 14 #define ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK BIT(14) #define ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT 15 #define ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK BIT(15) #define ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_SHIFT 16 #define ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_MASK BIT(16) #define ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT 24 #define ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK BIT(24) #define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT 25 #define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK BIT(25) #define ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT 26 #define ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK BIT(26) #define ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT 27 #define ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK BIT(27) #define ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT 30 #define ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK BIT(30) /* intr_reg */ #define ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK GENMASK(14, 0) #define ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT 15 #define ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK GENMASK(29, 15) #define ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT 30 #define ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK BIT(30) /* numa_node_cfg_reg */ #define ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK GENMASK(7, 0) #define ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT 31 #define ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK BIT(31) #if !defined(DEFS_LINUX_MAINLINE) static inline uint32_t get_ena_eth_io_tx_desc_length(const struct ena_eth_io_tx_desc *p) { return p->len_ctrl & ENA_ETH_IO_TX_DESC_LENGTH_MASK; } static inline void set_ena_eth_io_tx_desc_length(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= val & ENA_ETH_IO_TX_DESC_LENGTH_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_req_id_hi(const struct ena_eth_io_tx_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK) >> ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT; } static inline void set_ena_eth_io_tx_desc_req_id_hi(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT) & ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_meta_desc(const struct ena_eth_io_tx_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_DESC_META_DESC_MASK) >> ENA_ETH_IO_TX_DESC_META_DESC_SHIFT; } static inline void set_ena_eth_io_tx_desc_meta_desc(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_META_DESC_SHIFT) & ENA_ETH_IO_TX_DESC_META_DESC_MASK; } static inline uint32_t 
get_ena_eth_io_tx_desc_phase(const struct ena_eth_io_tx_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_DESC_PHASE_MASK) >> ENA_ETH_IO_TX_DESC_PHASE_SHIFT; } static inline void set_ena_eth_io_tx_desc_phase(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_DESC_PHASE_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_first(const struct ena_eth_io_tx_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_DESC_FIRST_MASK) >> ENA_ETH_IO_TX_DESC_FIRST_SHIFT; } static inline void set_ena_eth_io_tx_desc_first(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_FIRST_SHIFT) & ENA_ETH_IO_TX_DESC_FIRST_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_last(const struct ena_eth_io_tx_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_DESC_LAST_MASK) >> ENA_ETH_IO_TX_DESC_LAST_SHIFT; } static inline void set_ena_eth_io_tx_desc_last(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_LAST_SHIFT) & ENA_ETH_IO_TX_DESC_LAST_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_comp_req(const struct ena_eth_io_tx_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_DESC_COMP_REQ_MASK) >> ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT; } static inline void set_ena_eth_io_tx_desc_comp_req(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT) & ENA_ETH_IO_TX_DESC_COMP_REQ_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_l3_proto_idx(const struct ena_eth_io_tx_desc *p) { return p->meta_ctrl & ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK; } static inline void set_ena_eth_io_tx_desc_l3_proto_idx(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= val & ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_DF(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_DF_MASK) >> ENA_ETH_IO_TX_DESC_DF_SHIFT; } static inline void set_ena_eth_io_tx_desc_DF(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_DF_SHIFT) & ENA_ETH_IO_TX_DESC_DF_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_tso_en(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_TSO_EN_MASK) >> ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT; } static inline void set_ena_eth_io_tx_desc_tso_en(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT) & ENA_ETH_IO_TX_DESC_TSO_EN_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_l4_proto_idx(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK) >> ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT; } static inline void set_ena_eth_io_tx_desc_l4_proto_idx(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT) & ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_l3_csum_en(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK) >> ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT; } static inline void set_ena_eth_io_tx_desc_l3_csum_en(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT) & ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_l4_csum_en(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK) >> ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT; } static inline void set_ena_eth_io_tx_desc_l4_csum_en(struct 
ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT) & ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_ethernet_fcs_dis(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK) >> ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT; } static inline void set_ena_eth_io_tx_desc_ethernet_fcs_dis(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT) & ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_l4_csum_partial(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK) >> ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT; } static inline void set_ena_eth_io_tx_desc_l4_csum_partial(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT) & ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_req_id_lo(const struct ena_eth_io_tx_desc *p) { return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK) >> ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT; } static inline void set_ena_eth_io_tx_desc_req_id_lo(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT) & ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_addr_hi(const struct ena_eth_io_tx_desc *p) { return p->buff_addr_hi_hdr_sz & ENA_ETH_IO_TX_DESC_ADDR_HI_MASK; } static inline void set_ena_eth_io_tx_desc_addr_hi(struct ena_eth_io_tx_desc *p, uint32_t val) { p->buff_addr_hi_hdr_sz |= val & ENA_ETH_IO_TX_DESC_ADDR_HI_MASK; } static inline uint32_t get_ena_eth_io_tx_desc_header_length(const struct ena_eth_io_tx_desc *p) { return (p->buff_addr_hi_hdr_sz & ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK) >> ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT; } static inline void set_ena_eth_io_tx_desc_header_length(struct ena_eth_io_tx_desc *p, uint32_t val) { p->buff_addr_hi_hdr_sz |= (val << ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT) & ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_req_id_lo(const struct ena_eth_io_tx_meta_desc *p) { return p->len_ctrl & ENA_ETH_IO_TX_META_DESC_REQ_ID_LO_MASK; } static inline void set_ena_eth_io_tx_meta_desc_req_id_lo(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= val & ENA_ETH_IO_TX_META_DESC_REQ_ID_LO_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_ext_valid(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK) >> ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_ext_valid(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT) & ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_mss_hi(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK) >> ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_mss_hi(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT) & ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_eth_meta_type(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK) >> ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT; } static 
inline void set_ena_eth_io_tx_meta_desc_eth_meta_type(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT) & ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_meta_store(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_META_STORE_MASK) >> ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_meta_store(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT) & ENA_ETH_IO_TX_META_DESC_META_STORE_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_meta_desc(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_META_DESC_MASK) >> ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_meta_desc(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT) & ENA_ETH_IO_TX_META_DESC_META_DESC_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_phase(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_PHASE_MASK) >> ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_phase(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_META_DESC_PHASE_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_first(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_FIRST_MASK) >> ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_first(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT) & ENA_ETH_IO_TX_META_DESC_FIRST_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_last(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_LAST_MASK) >> ENA_ETH_IO_TX_META_DESC_LAST_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_last(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_LAST_SHIFT) & ENA_ETH_IO_TX_META_DESC_LAST_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_comp_req(const struct ena_eth_io_tx_meta_desc *p) { return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK) >> ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_comp_req(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT) & ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_req_id_hi(const struct ena_eth_io_tx_meta_desc *p) { return p->word1 & ENA_ETH_IO_TX_META_DESC_REQ_ID_HI_MASK; } static inline void set_ena_eth_io_tx_meta_desc_req_id_hi(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->word1 |= val & ENA_ETH_IO_TX_META_DESC_REQ_ID_HI_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_l3_hdr_len(const struct ena_eth_io_tx_meta_desc *p) { return p->word2 & ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK; } static inline void set_ena_eth_io_tx_meta_desc_l3_hdr_len(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->word2 |= val & ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_l3_hdr_off(const struct ena_eth_io_tx_meta_desc *p) { return (p->word2 & ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK) >> 
ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_l3_hdr_off(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->word2 |= (val << ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT) & ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_l4_hdr_len_in_words(const struct ena_eth_io_tx_meta_desc *p) { return (p->word2 & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK) >> ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_l4_hdr_len_in_words(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->word2 |= (val << ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT) & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK; } static inline uint32_t get_ena_eth_io_tx_meta_desc_mss_lo(const struct ena_eth_io_tx_meta_desc *p) { return (p->word2 & ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK) >> ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT; } static inline void set_ena_eth_io_tx_meta_desc_mss_lo(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->word2 |= (val << ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT) & ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK; } static inline uint8_t get_ena_eth_io_tx_cdesc_phase(const struct ena_eth_io_tx_cdesc *p) { return p->flags & ENA_ETH_IO_TX_CDESC_PHASE_MASK; } static inline void set_ena_eth_io_tx_cdesc_phase(struct ena_eth_io_tx_cdesc *p, uint8_t val) { p->flags |= val & ENA_ETH_IO_TX_CDESC_PHASE_MASK; } static inline uint8_t get_ena_eth_io_rx_desc_phase(const struct ena_eth_io_rx_desc *p) { return p->ctrl & ENA_ETH_IO_RX_DESC_PHASE_MASK; } static inline void set_ena_eth_io_rx_desc_phase(struct ena_eth_io_rx_desc *p, uint8_t val) { p->ctrl |= val & ENA_ETH_IO_RX_DESC_PHASE_MASK; } static inline uint8_t get_ena_eth_io_rx_desc_first(const struct ena_eth_io_rx_desc *p) { return (p->ctrl & ENA_ETH_IO_RX_DESC_FIRST_MASK) >> ENA_ETH_IO_RX_DESC_FIRST_SHIFT; } static inline void set_ena_eth_io_rx_desc_first(struct ena_eth_io_rx_desc *p, uint8_t val) { p->ctrl |= (val << ENA_ETH_IO_RX_DESC_FIRST_SHIFT) & ENA_ETH_IO_RX_DESC_FIRST_MASK; } static inline uint8_t get_ena_eth_io_rx_desc_last(const struct ena_eth_io_rx_desc *p) { return (p->ctrl & ENA_ETH_IO_RX_DESC_LAST_MASK) >> ENA_ETH_IO_RX_DESC_LAST_SHIFT; } static inline void set_ena_eth_io_rx_desc_last(struct ena_eth_io_rx_desc *p, uint8_t val) { p->ctrl |= (val << ENA_ETH_IO_RX_DESC_LAST_SHIFT) & ENA_ETH_IO_RX_DESC_LAST_MASK; } static inline uint8_t get_ena_eth_io_rx_desc_comp_req(const struct ena_eth_io_rx_desc *p) { return (p->ctrl & ENA_ETH_IO_RX_DESC_COMP_REQ_MASK) >> ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT; } static inline void set_ena_eth_io_rx_desc_comp_req(struct ena_eth_io_rx_desc *p, uint8_t val) { p->ctrl |= (val << ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT) & ENA_ETH_IO_RX_DESC_COMP_REQ_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_proto_idx(const struct ena_eth_io_rx_cdesc_base *p) { return p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK; } static inline void set_ena_eth_io_rx_cdesc_base_l3_proto_idx(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= val & ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_src_vlan_cnt(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_src_vlan_cnt(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT) & 
ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_l4_proto_idx(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_l4_proto_idx(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_csum_err(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_l3_csum_err(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_l4_csum_err(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_l4_csum_err(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_ipv4_frag(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_ipv4_frag(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_l4_csum_checked(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_l4_csum_checked(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_phase(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_phase(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_csum2(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_l3_csum2(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_first(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_first(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK; } static inline uint32_t 
get_ena_eth_io_rx_cdesc_base_last(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_last(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK; } static inline uint32_t get_ena_eth_io_rx_cdesc_base_buffer(const struct ena_eth_io_rx_cdesc_base *p) { return (p->status & ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT; } static inline void set_ena_eth_io_rx_cdesc_base_buffer(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK; } static inline uint32_t get_ena_eth_io_intr_reg_rx_intr_delay(const struct ena_eth_io_intr_reg *p) { return p->intr_control & ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK; } static inline void set_ena_eth_io_intr_reg_rx_intr_delay(struct ena_eth_io_intr_reg *p, uint32_t val) { p->intr_control |= val & ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK; } static inline uint32_t get_ena_eth_io_intr_reg_tx_intr_delay(const struct ena_eth_io_intr_reg *p) { return (p->intr_control & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK) >> ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT; } static inline void set_ena_eth_io_intr_reg_tx_intr_delay(struct ena_eth_io_intr_reg *p, uint32_t val) { p->intr_control |= (val << ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT) & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK; } static inline uint32_t get_ena_eth_io_intr_reg_intr_unmask(const struct ena_eth_io_intr_reg *p) { return (p->intr_control & ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK) >> ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT; } static inline void set_ena_eth_io_intr_reg_intr_unmask(struct ena_eth_io_intr_reg *p, uint32_t val) { p->intr_control |= (val << ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT) & ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK; } static inline uint32_t get_ena_eth_io_numa_node_cfg_reg_numa(const struct ena_eth_io_numa_node_cfg_reg *p) { return p->numa_cfg & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK; } static inline void set_ena_eth_io_numa_node_cfg_reg_numa(struct ena_eth_io_numa_node_cfg_reg *p, uint32_t val) { p->numa_cfg |= val & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK; } static inline uint32_t get_ena_eth_io_numa_node_cfg_reg_enabled(const struct ena_eth_io_numa_node_cfg_reg *p) { return (p->numa_cfg & ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK) >> ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT; } static inline void set_ena_eth_io_numa_node_cfg_reg_enabled(struct ena_eth_io_numa_node_cfg_reg *p, uint32_t val) { p->numa_cfg |= (val << ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT) & ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK; } #endif /* !defined(DEFS_LINUX_MAINLINE) */ #endif /* _ENA_ETH_IO_H_ */ Index: stable/12/sys/contrib/ena-com/ena_defs/ena_gen_info.h =================================================================== --- stable/12/sys/contrib/ena-com/ena_defs/ena_gen_info.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_defs/ena_gen_info.h (revision 368012) @@ -1,34 +1,34 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ -#define ENA_GEN_DATE "Mon Apr 20 15:41:59 DST 2020" -#define ENA_GEN_COMMIT "daa45ac" +#define ENA_GEN_DATE "Fri Sep 18 17:09:00 IDT 2020" +#define ENA_GEN_COMMIT "0f80d82" Index: stable/12/sys/contrib/ena-com/ena_defs/ena_regs_defs.h =================================================================== --- stable/12/sys/contrib/ena-com/ena_defs/ena_regs_defs.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_defs/ena_regs_defs.h (revision 368012) @@ -1,159 +1,159 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef _ENA_REGS_H_ #define _ENA_REGS_H_ enum ena_regs_reset_reason_types { ENA_REGS_RESET_NORMAL = 0, ENA_REGS_RESET_KEEP_ALIVE_TO = 1, ENA_REGS_RESET_ADMIN_TO = 2, ENA_REGS_RESET_MISS_TX_CMPL = 3, ENA_REGS_RESET_INV_RX_REQ_ID = 4, ENA_REGS_RESET_INV_TX_REQ_ID = 5, ENA_REGS_RESET_TOO_MANY_RX_DESCS = 6, ENA_REGS_RESET_INIT_ERR = 7, ENA_REGS_RESET_DRIVER_INVALID_STATE = 8, ENA_REGS_RESET_OS_TRIGGER = 9, ENA_REGS_RESET_OS_NETDEV_WD = 10, ENA_REGS_RESET_SHUTDOWN = 11, ENA_REGS_RESET_USER_TRIGGER = 12, ENA_REGS_RESET_GENERIC = 13, ENA_REGS_RESET_MISS_INTERRUPT = 14, ENA_REGS_RESET_LAST, }; /* ena_registers offsets */ /* 0 base */ #define ENA_REGS_VERSION_OFF 0x0 #define ENA_REGS_CONTROLLER_VERSION_OFF 0x4 #define ENA_REGS_CAPS_OFF 0x8 #define ENA_REGS_CAPS_EXT_OFF 0xc #define ENA_REGS_AQ_BASE_LO_OFF 0x10 #define ENA_REGS_AQ_BASE_HI_OFF 0x14 #define ENA_REGS_AQ_CAPS_OFF 0x18 #define ENA_REGS_ACQ_BASE_LO_OFF 0x20 #define ENA_REGS_ACQ_BASE_HI_OFF 0x24 #define ENA_REGS_ACQ_CAPS_OFF 0x28 #define ENA_REGS_AQ_DB_OFF 0x2c #define ENA_REGS_ACQ_TAIL_OFF 0x30 #define ENA_REGS_AENQ_CAPS_OFF 0x34 #define ENA_REGS_AENQ_BASE_LO_OFF 0x38 #define ENA_REGS_AENQ_BASE_HI_OFF 0x3c #define ENA_REGS_AENQ_HEAD_DB_OFF 0x40 #define ENA_REGS_AENQ_TAIL_OFF 0x44 #define ENA_REGS_INTR_MASK_OFF 0x4c #define ENA_REGS_DEV_CTL_OFF 0x54 #define ENA_REGS_DEV_STS_OFF 0x58 #define ENA_REGS_MMIO_REG_READ_OFF 0x5c #define ENA_REGS_MMIO_RESP_LO_OFF 0x60 #define ENA_REGS_MMIO_RESP_HI_OFF 0x64 #define ENA_REGS_RSS_IND_ENTRY_UPDATE_OFF 0x68 /* version register */ #define ENA_REGS_VERSION_MINOR_VERSION_MASK 0xff #define ENA_REGS_VERSION_MAJOR_VERSION_SHIFT 8 #define ENA_REGS_VERSION_MAJOR_VERSION_MASK 0xff00 /* controller_version register */ #define ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK 0xff #define ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT 8 #define ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK 0xff00 #define ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT 16 #define ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK 0xff0000 #define ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT 24 #define ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK 0xff000000 /* caps register */ #define ENA_REGS_CAPS_CONTIGUOUS_QUEUE_REQUIRED_MASK 0x1 #define ENA_REGS_CAPS_RESET_TIMEOUT_SHIFT 1 #define ENA_REGS_CAPS_RESET_TIMEOUT_MASK 0x3e #define ENA_REGS_CAPS_DMA_ADDR_WIDTH_SHIFT 8 #define ENA_REGS_CAPS_DMA_ADDR_WIDTH_MASK 0xff00 #define ENA_REGS_CAPS_ADMIN_CMD_TO_SHIFT 16 #define ENA_REGS_CAPS_ADMIN_CMD_TO_MASK 0xf0000 /* aq_caps register */ #define ENA_REGS_AQ_CAPS_AQ_DEPTH_MASK 0xffff #define ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_SHIFT 16 #define ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_MASK 0xffff0000 /* acq_caps register */ #define ENA_REGS_ACQ_CAPS_ACQ_DEPTH_MASK 0xffff #define ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_SHIFT 16 #define ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_MASK 0xffff0000 /* aenq_caps register */ #define ENA_REGS_AENQ_CAPS_AENQ_DEPTH_MASK 0xffff #define ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_SHIFT 16 #define ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_MASK 
0xffff0000 /* dev_ctl register */ #define ENA_REGS_DEV_CTL_DEV_RESET_MASK 0x1 #define ENA_REGS_DEV_CTL_AQ_RESTART_SHIFT 1 #define ENA_REGS_DEV_CTL_AQ_RESTART_MASK 0x2 #define ENA_REGS_DEV_CTL_QUIESCENT_SHIFT 2 #define ENA_REGS_DEV_CTL_QUIESCENT_MASK 0x4 #define ENA_REGS_DEV_CTL_IO_RESUME_SHIFT 3 #define ENA_REGS_DEV_CTL_IO_RESUME_MASK 0x8 #define ENA_REGS_DEV_CTL_RESET_REASON_SHIFT 28 #define ENA_REGS_DEV_CTL_RESET_REASON_MASK 0xf0000000 /* dev_sts register */ #define ENA_REGS_DEV_STS_READY_MASK 0x1 #define ENA_REGS_DEV_STS_AQ_RESTART_IN_PROGRESS_SHIFT 1 #define ENA_REGS_DEV_STS_AQ_RESTART_IN_PROGRESS_MASK 0x2 #define ENA_REGS_DEV_STS_AQ_RESTART_FINISHED_SHIFT 2 #define ENA_REGS_DEV_STS_AQ_RESTART_FINISHED_MASK 0x4 #define ENA_REGS_DEV_STS_RESET_IN_PROGRESS_SHIFT 3 #define ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK 0x8 #define ENA_REGS_DEV_STS_RESET_FINISHED_SHIFT 4 #define ENA_REGS_DEV_STS_RESET_FINISHED_MASK 0x10 #define ENA_REGS_DEV_STS_FATAL_ERROR_SHIFT 5 #define ENA_REGS_DEV_STS_FATAL_ERROR_MASK 0x20 #define ENA_REGS_DEV_STS_QUIESCENT_STATE_IN_PROGRESS_SHIFT 6 #define ENA_REGS_DEV_STS_QUIESCENT_STATE_IN_PROGRESS_MASK 0x40 #define ENA_REGS_DEV_STS_QUIESCENT_STATE_ACHIEVED_SHIFT 7 #define ENA_REGS_DEV_STS_QUIESCENT_STATE_ACHIEVED_MASK 0x80 /* mmio_reg_read register */ #define ENA_REGS_MMIO_REG_READ_REQ_ID_MASK 0xffff #define ENA_REGS_MMIO_REG_READ_REG_OFF_SHIFT 16 #define ENA_REGS_MMIO_REG_READ_REG_OFF_MASK 0xffff0000 /* rss_ind_entry_update register */ #define ENA_REGS_RSS_IND_ENTRY_UPDATE_INDEX_MASK 0xffff #define ENA_REGS_RSS_IND_ENTRY_UPDATE_CQ_IDX_SHIFT 16 #define ENA_REGS_RSS_IND_ENTRY_UPDATE_CQ_IDX_MASK 0xffff0000 #endif /* _ENA_REGS_H_ */ Index: stable/12/sys/contrib/ena-com/ena_eth_com.c =================================================================== --- stable/12/sys/contrib/ena-com/ena_eth_com.c (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_eth_com.c (revision 368012) @@ -1,646 +1,685 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
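Note: the dev_ctl masks above are combined when the driver asks the device to reset: the DEV_RESET bit is set together with a reset reason packed into bits 28..31, the reasons being the ena_regs_reset_reason_types values listed earlier. A sketch of that composition under those definitions; the actual write to ENA_REGS_DEV_CTL_OFF is elided:

#include <assert.h>
#include <stdint.h>

#define ENA_REGS_DEV_CTL_DEV_RESET_MASK     0x1u
#define ENA_REGS_DEV_CTL_RESET_REASON_SHIFT 28
#define ENA_REGS_DEV_CTL_RESET_REASON_MASK  0xf0000000u

static uint32_t ena_make_reset_cmd(uint32_t reason)
{
    uint32_t ctl = ENA_REGS_DEV_CTL_DEV_RESET_MASK;

    ctl |= (reason << ENA_REGS_DEV_CTL_RESET_REASON_SHIFT) &
        ENA_REGS_DEV_CTL_RESET_REASON_MASK;
    return ctl;    /* value the driver would write to dev_ctl */
}

int main(void)
{
    /* ENA_REGS_RESET_OS_TRIGGER == 9 in the enum above */
    assert(ena_make_reset_cmd(9) == 0x90000001u);
    return 0;
}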
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "ena_eth_com.h" static struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc( struct ena_com_io_cq *io_cq) { struct ena_eth_io_rx_cdesc_base *cdesc; u16 expected_phase, head_masked; u16 desc_phase; head_masked = io_cq->head & (io_cq->q_depth - 1); expected_phase = io_cq->phase; cdesc = (struct ena_eth_io_rx_cdesc_base *)(io_cq->cdesc_addr.virt_addr + (head_masked * io_cq->cdesc_entry_size_in_bytes)); desc_phase = (READ_ONCE32(cdesc->status) & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT; if (desc_phase != expected_phase) return NULL; /* Make sure we read the rest of the descriptor after the phase bit * has been read */ dma_rmb(); return cdesc; } static void *get_sq_desc_regular_queue(struct ena_com_io_sq *io_sq) { u16 tail_masked; u32 offset; tail_masked = io_sq->tail & (io_sq->q_depth - 1); offset = tail_masked * io_sq->desc_entry_size; return (void *)((uintptr_t)io_sq->desc_addr.virt_addr + offset); } static int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq, u8 *bounce_buffer) { struct ena_com_llq_info *llq_info = &io_sq->llq_info; u16 dst_tail_mask; u32 dst_offset; dst_tail_mask = io_sq->tail & (io_sq->q_depth - 1); dst_offset = dst_tail_mask * llq_info->desc_list_entry_size; if (is_llq_max_tx_burst_exists(io_sq)) { if (unlikely(!io_sq->entries_in_tx_burst_left)) { - ena_trc_err("Error: trying to send more packets than tx burst allows\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Error: trying to send more packets than tx burst allows\n"); return ENA_COM_NO_SPACE; } io_sq->entries_in_tx_burst_left--; - ena_trc_dbg("decreasing entries_in_tx_burst_left of queue %d to %d\n", + ena_trc_dbg(ena_com_io_sq_to_ena_dev(io_sq), + "Decreasing entries_in_tx_burst_left of queue %d to %d\n", io_sq->qid, io_sq->entries_in_tx_burst_left); } /* Make sure everything was written into the bounce buffer before * writing the bounce buffer to the device */ wmb(); /* The line is completed. 
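Note: ena_com_get_next_rx_cdesc() above relies on a phase (ownership) bit rather than a producer index: the device stamps each completion with the current phase, and the consumer's expected phase flips every time it wraps the ring, so stale entries from the previous lap never match. A host-only model of that protocol, with the dma_rmb() reduced to a comment; in the real driver the head advance and phase flip live in ena_com_cq_inc_head():

#include <assert.h>
#include <stdint.h>

#define Q_DEPTH 8    /* power of two, as the masking above assumes */

struct cdesc { uint8_t phase; uint32_t data; };

struct cq {
    struct cdesc ring[Q_DEPTH];
    uint16_t head;
    uint8_t phase;    /* starts at 1; ring memory starts zeroed */
};

static struct cdesc *cq_next(struct cq *cq)
{
    struct cdesc *c = &cq->ring[cq->head & (Q_DEPTH - 1)];

    if (c->phase != cq->phase)
        return NULL;    /* device has not produced this entry yet */
    /* real code issues dma_rmb() here before reading the rest of c */
    cq->head++;
    if ((cq->head & (Q_DEPTH - 1)) == 0)
        cq->phase ^= 1;    /* wrapped: expect the flipped phase */
    return c;
}

int main(void)
{
    struct cq cq = { .head = 0, .phase = 1 };

    assert(cq_next(&cq) == NULL);    /* ring still empty: phase 0 */
    cq.ring[0] = (struct cdesc){ .phase = 1, .data = 42 };    /* "device" writes */
    assert(cq_next(&cq)->data == 42);
    return 0;
}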
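Note: the LLQ helpers around this point stage the descriptors and the packet header in a host-side bounce buffer and push the whole line to device memory in one shot, so ena_com_write_header_to_bounce() must check that the header fits behind the leading descriptors. A reduced model of that bounds check, with made-up entry geometry (the DESC_*/ENTRY_* constants are illustrative):

#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

#define DESC_SIZE        16     /* illustrative descriptor size */
#define ENTRY_SIZE       128    /* desc_list_entry_size */
#define DESCS_BEFORE_HDR 2      /* descs_num_before_header */

static int write_header(uint8_t *bounce, const void *hdr, uint16_t hdr_len)
{
    uint16_t off = DESCS_BEFORE_HDR * DESC_SIZE;

    /* The header must fit in the entry after the leading descriptors. */
    if (off + hdr_len > ENTRY_SIZE)
        return EFAULT;
    memcpy(bounce + off, hdr, hdr_len);
    return 0;
}

int main(void)
{
    uint8_t bounce[ENTRY_SIZE] = {0};
    uint8_t hdr[100] = {0};

    assert(write_header(bounce, hdr, 64) == 0);        /* 32 + 64 <= 128 */
    assert(write_header(bounce, hdr, 100) == EFAULT);  /* 32 + 100 > 128 */
    return 0;
}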
Copy it to dev */ ENA_MEMCPY_TO_DEVICE_64(io_sq->desc_addr.pbuf_dev_addr + dst_offset, bounce_buffer, llq_info->desc_list_entry_size); io_sq->tail++; /* Switch phase bit in case of wrap around */ if (unlikely((io_sq->tail & (io_sq->q_depth - 1)) == 0)) io_sq->phase ^= 1; return ENA_COM_OK; } static int ena_com_write_header_to_bounce(struct ena_com_io_sq *io_sq, u8 *header_src, u16 header_len) { struct ena_com_llq_pkt_ctrl *pkt_ctrl = &io_sq->llq_buf_ctrl; struct ena_com_llq_info *llq_info = &io_sq->llq_info; u8 *bounce_buffer = pkt_ctrl->curr_bounce_buf; u16 header_offset; if (unlikely(io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)) return 0; header_offset = llq_info->descs_num_before_header * io_sq->desc_entry_size; if (unlikely((header_offset + header_len) > llq_info->desc_list_entry_size)) { - ena_trc_err("trying to write header larger than llq entry can accommodate\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Trying to write header larger than llq entry can accommodate\n"); return ENA_COM_FAULT; } if (unlikely(!bounce_buffer)) { - ena_trc_err("bounce buffer is NULL\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Bounce buffer is NULL\n"); return ENA_COM_FAULT; } memcpy(bounce_buffer + header_offset, header_src, header_len); return 0; } static void *get_sq_desc_llq(struct ena_com_io_sq *io_sq) { struct ena_com_llq_pkt_ctrl *pkt_ctrl = &io_sq->llq_buf_ctrl; u8 *bounce_buffer; void *sq_desc; bounce_buffer = pkt_ctrl->curr_bounce_buf; if (unlikely(!bounce_buffer)) { - ena_trc_err("bounce buffer is NULL\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Bounce buffer is NULL\n"); return NULL; } sq_desc = bounce_buffer + pkt_ctrl->idx * io_sq->desc_entry_size; pkt_ctrl->idx++; pkt_ctrl->descs_left_in_line--; return sq_desc; } static int ena_com_close_bounce_buffer(struct ena_com_io_sq *io_sq) { struct ena_com_llq_pkt_ctrl *pkt_ctrl = &io_sq->llq_buf_ctrl; struct ena_com_llq_info *llq_info = &io_sq->llq_info; int rc; if (unlikely(io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)) return ENA_COM_OK; /* bounce buffer was used, so write it and get a new one */ if (pkt_ctrl->idx) { rc = ena_com_write_bounce_buffer_to_dev(io_sq, pkt_ctrl->curr_bounce_buf); if (unlikely(rc)) { - ena_trc_err("failed to write bounce buffer to device\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Failed to write bounce buffer to device\n"); return rc; } pkt_ctrl->curr_bounce_buf = ena_com_get_next_bounce_buffer(&io_sq->bounce_buf_ctrl); memset(io_sq->llq_buf_ctrl.curr_bounce_buf, 0x0, llq_info->desc_list_entry_size); } pkt_ctrl->idx = 0; pkt_ctrl->descs_left_in_line = llq_info->descs_num_before_header; return ENA_COM_OK; } static void *get_sq_desc(struct ena_com_io_sq *io_sq) { if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) return get_sq_desc_llq(io_sq); return get_sq_desc_regular_queue(io_sq); } static int ena_com_sq_update_llq_tail(struct ena_com_io_sq *io_sq) { struct ena_com_llq_pkt_ctrl *pkt_ctrl = &io_sq->llq_buf_ctrl; struct ena_com_llq_info *llq_info = &io_sq->llq_info; int rc; if (!pkt_ctrl->descs_left_in_line) { rc = ena_com_write_bounce_buffer_to_dev(io_sq, pkt_ctrl->curr_bounce_buf); if (unlikely(rc)) { - ena_trc_err("failed to write bounce buffer to device\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Failed to write bounce buffer to device\n"); return rc; } pkt_ctrl->curr_bounce_buf = ena_com_get_next_bounce_buffer(&io_sq->bounce_buf_ctrl); memset(io_sq->llq_buf_ctrl.curr_bounce_buf, 0x0, llq_info->desc_list_entry_size); pkt_ctrl->idx 
= 0; if (unlikely(llq_info->desc_stride_ctrl == ENA_ADMIN_SINGLE_DESC_PER_ENTRY)) pkt_ctrl->descs_left_in_line = 1; else pkt_ctrl->descs_left_in_line = llq_info->desc_list_entry_size / io_sq->desc_entry_size; } return ENA_COM_OK; } static int ena_com_sq_update_tail(struct ena_com_io_sq *io_sq) { if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) return ena_com_sq_update_llq_tail(io_sq); io_sq->tail++; /* Switch phase bit in case of wrap around */ if (unlikely((io_sq->tail & (io_sq->q_depth - 1)) == 0)) io_sq->phase ^= 1; return ENA_COM_OK; } static struct ena_eth_io_rx_cdesc_base * ena_com_rx_cdesc_idx_to_ptr(struct ena_com_io_cq *io_cq, u16 idx) { idx &= (io_cq->q_depth - 1); return (struct ena_eth_io_rx_cdesc_base *) ((uintptr_t)io_cq->cdesc_addr.virt_addr + idx * io_cq->cdesc_entry_size_in_bytes); } static u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq, u16 *first_cdesc_idx) { struct ena_eth_io_rx_cdesc_base *cdesc; u16 count = 0, head_masked; u32 last = 0; do { cdesc = ena_com_get_next_rx_cdesc(io_cq); if (!cdesc) break; ena_com_cq_inc_head(io_cq); count++; last = (READ_ONCE32(cdesc->status) & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT; } while (!last); if (last) { *first_cdesc_idx = io_cq->cur_rx_pkt_cdesc_start_idx; count += io_cq->cur_rx_pkt_cdesc_count; head_masked = io_cq->head & (io_cq->q_depth - 1); io_cq->cur_rx_pkt_cdesc_count = 0; io_cq->cur_rx_pkt_cdesc_start_idx = head_masked; - ena_trc_dbg("ena q_id: %d packets were completed. first desc idx %u descs# %d\n", + ena_trc_dbg(ena_com_io_cq_to_ena_dev(io_cq), + "ENA q_id: %d packets were completed. first desc idx %u descs# %d\n", io_cq->qid, *first_cdesc_idx, count); } else { io_cq->cur_rx_pkt_cdesc_count += count; count = 0; } return count; } static int ena_com_create_meta(struct ena_com_io_sq *io_sq, struct ena_com_tx_meta *ena_meta) { struct ena_eth_io_tx_meta_desc *meta_desc = NULL; meta_desc = get_sq_desc(io_sq); + if (unlikely(!meta_desc)) + return ENA_COM_FAULT; + memset(meta_desc, 0x0, sizeof(struct ena_eth_io_tx_meta_desc)); meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_DESC_MASK; meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK; /* bits 0-9 of the mss */ - meta_desc->word2 |= (ena_meta->mss << + meta_desc->word2 |= ((u32)ena_meta->mss << ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT) & ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK; /* bits 10-13 of the mss */ meta_desc->len_ctrl |= ((ena_meta->mss >> 10) << ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT) & ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK; /* Extended meta desc */ meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK; - meta_desc->len_ctrl |= (io_sq->phase << + meta_desc->len_ctrl |= ((u32)io_sq->phase << ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_META_DESC_PHASE_MASK; meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_FIRST_MASK; meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK; meta_desc->word2 |= ena_meta->l3_hdr_len & ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK; meta_desc->word2 |= (ena_meta->l3_hdr_offset << ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT) & ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK; - meta_desc->word2 |= (ena_meta->l4_hdr_len << + meta_desc->word2 |= ((u32)ena_meta->l4_hdr_len << ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT) & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK; return ena_com_sq_update_tail(io_sq); } static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq, struct ena_com_tx_ctx *ena_tx_ctx, bool *have_meta) { struct ena_com_tx_meta *ena_meta 
= &ena_tx_ctx->ena_meta; /* When disable meta caching is set, don't bother to save the meta and * compare it to the stored version, just create the meta */ if (io_sq->disable_meta_caching) { if (unlikely(!ena_tx_ctx->meta_valid)) return ENA_COM_INVAL; *have_meta = true; return ena_com_create_meta(io_sq, ena_meta); - } else if (ena_com_meta_desc_changed(io_sq, ena_tx_ctx)) { + } + + if (ena_com_meta_desc_changed(io_sq, ena_tx_ctx)) { *have_meta = true; /* Cache the meta desc */ memcpy(&io_sq->cached_tx_meta, ena_meta, sizeof(struct ena_com_tx_meta)); return ena_com_create_meta(io_sq, ena_meta); - } else { - *have_meta = false; - return ENA_COM_OK; } + + *have_meta = false; + return ENA_COM_OK; } -static void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx, - struct ena_eth_io_rx_cdesc_base *cdesc) +static void ena_com_rx_set_flags(struct ena_com_io_cq *io_cq, + struct ena_com_rx_ctx *ena_rx_ctx, + struct ena_eth_io_rx_cdesc_base *cdesc) { ena_rx_ctx->l3_proto = cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK; ena_rx_ctx->l4_proto = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT; ena_rx_ctx->l3_csum_err = !!((cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT); ena_rx_ctx->l4_csum_err = !!((cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT); ena_rx_ctx->l4_csum_checked = !!((cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_CHECKED_SHIFT); ena_rx_ctx->hash = cdesc->hash; ena_rx_ctx->frag = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT; - ena_trc_dbg("ena_rx_ctx->l3_proto %d ena_rx_ctx->l4_proto %d\nena_rx_ctx->l3_csum_err %d ena_rx_ctx->l4_csum_err %d\nhash frag %d frag: %d cdesc_status: %x\n", + ena_trc_dbg(ena_com_io_cq_to_ena_dev(io_cq), + "l3_proto %d l4_proto %d l3_csum_err %d l4_csum_err %d hash %d frag %d cdesc_status %x\n", ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto, ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err, ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status); } /*****************************************************************************/ /***************************** API **********************************/ /*****************************************************************************/ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, struct ena_com_tx_ctx *ena_tx_ctx, int *nb_hw_desc) { struct ena_eth_io_tx_desc *desc = NULL; struct ena_com_buf *ena_bufs = ena_tx_ctx->ena_bufs; void *buffer_to_push = ena_tx_ctx->push_header; u16 header_len = ena_tx_ctx->header_len; u16 num_bufs = ena_tx_ctx->num_bufs; u16 start_tail = io_sq->tail; int i, rc; bool have_meta; u64 addr_hi; ENA_WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_TX, - "wrong Q type"); + ena_com_io_sq_to_ena_dev(io_sq), "wrong Q type"); /* num_bufs +1 for potential meta desc */ if (unlikely(!ena_com_sq_have_enough_space(io_sq, num_bufs + 1))) { - ena_trc_dbg("Not enough space in the tx queue\n"); + ena_trc_dbg(ena_com_io_sq_to_ena_dev(io_sq), + "Not enough space in the tx queue\n"); return ENA_COM_NO_MEM; } if (unlikely(header_len > io_sq->tx_max_header_size)) { - ena_trc_err("header size is too large %d max header: %d\n", + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Header size is too large %d max header: %d\n", header_len, io_sq->tx_max_header_size); return ENA_COM_INVAL; } if (unlikely(io_sq->mem_queue_type == 
ENA_ADMIN_PLACEMENT_POLICY_DEV && !buffer_to_push)) { - ena_trc_err("push header wasn't provided on LLQ mode\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Push header wasn't provided on LLQ mode\n"); return ENA_COM_INVAL; } rc = ena_com_write_header_to_bounce(io_sq, buffer_to_push, header_len); if (unlikely(rc)) return rc; rc = ena_com_create_and_store_tx_meta_desc(io_sq, ena_tx_ctx, &have_meta); if (unlikely(rc)) { - ena_trc_err("failed to create and store tx meta desc\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Failed to create and store tx meta desc\n"); return rc; } /* If the caller doesn't want to send packets */ if (unlikely(!num_bufs && !header_len)) { rc = ena_com_close_bounce_buffer(io_sq); if (rc) - ena_trc_err("failed to write buffers to LLQ\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Failed to write buffers to LLQ\n"); *nb_hw_desc = io_sq->tail - start_tail; return rc; } desc = get_sq_desc(io_sq); if (unlikely(!desc)) return ENA_COM_FAULT; memset(desc, 0x0, sizeof(struct ena_eth_io_tx_desc)); /* Set first desc when we don't have meta descriptor */ if (!have_meta) desc->len_ctrl |= ENA_ETH_IO_TX_DESC_FIRST_MASK; - desc->buff_addr_hi_hdr_sz |= (header_len << + desc->buff_addr_hi_hdr_sz |= ((u32)header_len << ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT) & ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK; - desc->len_ctrl |= (io_sq->phase << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) & + desc->len_ctrl |= ((u32)io_sq->phase << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_DESC_PHASE_MASK; desc->len_ctrl |= ENA_ETH_IO_TX_DESC_COMP_REQ_MASK; /* Bits 0-9 */ - desc->meta_ctrl |= (ena_tx_ctx->req_id << + desc->meta_ctrl |= ((u32)ena_tx_ctx->req_id << ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT) & ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK; desc->meta_ctrl |= (ena_tx_ctx->df << ENA_ETH_IO_TX_DESC_DF_SHIFT) & ENA_ETH_IO_TX_DESC_DF_MASK; /* Bits 10-15 */ desc->len_ctrl |= ((ena_tx_ctx->req_id >> 10) << ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT) & ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK; if (ena_tx_ctx->meta_valid) { desc->meta_ctrl |= (ena_tx_ctx->tso_enable << ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT) & ENA_ETH_IO_TX_DESC_TSO_EN_MASK; desc->meta_ctrl |= ena_tx_ctx->l3_proto & ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK; desc->meta_ctrl |= (ena_tx_ctx->l4_proto << ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT) & ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK; desc->meta_ctrl |= (ena_tx_ctx->l3_csum_enable << ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT) & ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK; desc->meta_ctrl |= (ena_tx_ctx->l4_csum_enable << ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT) & ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK; desc->meta_ctrl |= (ena_tx_ctx->l4_csum_partial << ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT) & ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK; } for (i = 0; i < num_bufs; i++) { /* The first desc share the same desc as the header */ if (likely(i != 0)) { rc = ena_com_sq_update_tail(io_sq); if (unlikely(rc)) { - ena_trc_err("failed to update sq tail\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Failed to update sq tail\n"); return rc; } desc = get_sq_desc(io_sq); if (unlikely(!desc)) return ENA_COM_FAULT; memset(desc, 0x0, sizeof(struct ena_eth_io_tx_desc)); - desc->len_ctrl |= (io_sq->phase << + desc->len_ctrl |= ((u32)io_sq->phase << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_DESC_PHASE_MASK; } desc->len_ctrl |= ena_bufs->len & ENA_ETH_IO_TX_DESC_LENGTH_MASK; addr_hi = ((ena_bufs->paddr & GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32); desc->buff_addr_lo = (u32)ena_bufs->paddr; desc->buff_addr_hi_hdr_sz |= addr_hi & 
ENA_ETH_IO_TX_DESC_ADDR_HI_MASK; ena_bufs++; } /* set the last desc indicator */ desc->len_ctrl |= ENA_ETH_IO_TX_DESC_LAST_MASK; rc = ena_com_sq_update_tail(io_sq); if (unlikely(rc)) { - ena_trc_err("failed to update sq tail of the last descriptor\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Failed to update sq tail of the last descriptor\n"); return rc; } rc = ena_com_close_bounce_buffer(io_sq); if (rc) - ena_trc_err("failed when closing bounce buffer\n"); + ena_trc_err(ena_com_io_sq_to_ena_dev(io_sq), + "Failed when closing bounce buffer\n"); *nb_hw_desc = io_sq->tail - start_tail; return rc; } int ena_com_rx_pkt(struct ena_com_io_cq *io_cq, struct ena_com_io_sq *io_sq, struct ena_com_rx_ctx *ena_rx_ctx) { struct ena_com_rx_buf_info *ena_buf = &ena_rx_ctx->ena_bufs[0]; struct ena_eth_io_rx_cdesc_base *cdesc = NULL; + u16 q_depth = io_cq->q_depth; u16 cdesc_idx = 0; u16 nb_hw_desc; u16 i = 0; ENA_WARN(io_cq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX, - "wrong Q type"); + ena_com_io_cq_to_ena_dev(io_cq), "wrong Q type"); nb_hw_desc = ena_com_cdesc_rx_pkt_get(io_cq, &cdesc_idx); if (nb_hw_desc == 0) { ena_rx_ctx->descs = nb_hw_desc; return 0; } - ena_trc_dbg("fetch rx packet: queue %d completed desc: %d\n", + ena_trc_dbg(ena_com_io_cq_to_ena_dev(io_cq), + "Fetch rx packet: queue %d completed desc: %d\n", io_cq->qid, nb_hw_desc); if (unlikely(nb_hw_desc > ena_rx_ctx->max_bufs)) { - ena_trc_err("Too many RX cdescs (%d) > MAX(%d)\n", + ena_trc_err(ena_com_io_cq_to_ena_dev(io_cq), + "Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc, ena_rx_ctx->max_bufs); return ENA_COM_NO_SPACE; } cdesc = ena_com_rx_cdesc_idx_to_ptr(io_cq, cdesc_idx); ena_rx_ctx->pkt_offset = cdesc->offset; do { - ena_buf->len = cdesc->length; - ena_buf->req_id = cdesc->req_id; - ena_buf++; - } while ((++i < nb_hw_desc) && (cdesc = ena_com_rx_cdesc_idx_to_ptr(io_cq, cdesc_idx + i))); + ena_buf[i].len = cdesc->length; + ena_buf[i].req_id = cdesc->req_id; + if (unlikely(ena_buf[i].req_id >= q_depth)) + return ENA_COM_EIO; + if (++i >= nb_hw_desc) + break; + + cdesc = ena_com_rx_cdesc_idx_to_ptr(io_cq, cdesc_idx + i); + + } while (1); + /* Update SQ head ptr */ io_sq->next_to_comp += nb_hw_desc; - ena_trc_dbg("[%s][QID#%d] Updating SQ head to: %d\n", __func__, + ena_trc_dbg(ena_com_io_cq_to_ena_dev(io_cq), + "[%s][QID#%d] Updating SQ head to: %d\n", __func__, io_sq->qid, io_sq->next_to_comp); /* Get rx flags from the last pkt */ - ena_com_rx_set_flags(ena_rx_ctx, cdesc); + ena_com_rx_set_flags(io_cq, ena_rx_ctx, cdesc); ena_rx_ctx->descs = nb_hw_desc; + return 0; } int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq, struct ena_com_buf *ena_buf, u16 req_id) { struct ena_eth_io_rx_desc *desc; ENA_WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX, - "wrong Q type"); + ena_com_io_sq_to_ena_dev(io_sq), "wrong Q type"); if (unlikely(!ena_com_sq_have_enough_space(io_sq, 1))) return ENA_COM_NO_SPACE; desc = get_sq_desc(io_sq); if (unlikely(!desc)) return ENA_COM_FAULT; memset(desc, 0x0, sizeof(struct ena_eth_io_rx_desc)); desc->length = ena_buf->len; desc->ctrl = ENA_ETH_IO_RX_DESC_FIRST_MASK | - ENA_ETH_IO_RX_DESC_LAST_MASK | - (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK) | - ENA_ETH_IO_RX_DESC_COMP_REQ_MASK; + ENA_ETH_IO_RX_DESC_LAST_MASK | + ENA_ETH_IO_RX_DESC_COMP_REQ_MASK | + (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK); desc->req_id = req_id; + + ena_trc_dbg(ena_com_io_sq_to_ena_dev(io_sq), + "[%s] Adding single RX desc, Queue: %u, req_id: %u\n", + __func__, io_sq->qid, req_id); desc->buff_addr_lo = 
(u32)ena_buf->paddr; desc->buff_addr_hi = ((ena_buf->paddr & GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32); return ena_com_sq_update_tail(io_sq); } bool ena_com_cq_empty(struct ena_com_io_cq *io_cq) { struct ena_eth_io_rx_cdesc_base *cdesc; cdesc = ena_com_get_next_rx_cdesc(io_cq); if (cdesc) return false; else return true; } Index: stable/12/sys/contrib/ena-com/ena_eth_com.h =================================================================== --- stable/12/sys/contrib/ena-com/ena_eth_com.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_eth_com.h (revision 368012) @@ -1,286 +1,291 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
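Note: many descriptor writes in ena_com_create_meta() and ena_com_prepare_tx() above gained (u32) casts, e.g. ((u32)ena_meta->mss << ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT). The motivation is C's integer promotions: a u16 operand of << is first promoted to signed int, and a shift whose result reaches the sign bit is undefined behaviour. Casting to an unsigned 32-bit type keeps the whole computation in unsigned arithmetic; a minimal demonstration of the safe form:

#include <assert.h>
#include <stdint.h>

/* Packs a 16-bit value into a 32-bit descriptor word.  Without the cast,
 * (val << shift) is evaluated as signed int after promotion, and e.g.
 * 0xffff << 16 would set the sign bit of that int. */
static uint32_t pack_field(uint16_t val, unsigned int shift, uint32_t mask)
{
    return ((uint32_t)val << shift) & mask;
}

int main(void)
{
    assert(pack_field(0xffff, 16, 0xffff0000u) == 0xffff0000u);
    return 0;
}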
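Note: the rewritten completion loop in ena_com_rx_pkt() above now also validates every req_id handed back by the device against the queue depth and fails with ENA_COM_EIO (newly defined in ena_plat.h later in this diff) on a corrupted completion, instead of letting a bogus id index the driver's per-request bookkeeping. The shape of that defensive check, with illustrative types:

#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define Q_DEPTH 1024u

struct rx_buf_info { uint16_t req_id; uint16_t len; };

static int record_completion(struct rx_buf_info *bufs, uint16_t req_id,
    uint16_t len)
{
    if (req_id >= Q_DEPTH)
        return EIO;    /* device returned a request id out of range */
    bufs[req_id].req_id = req_id;
    bufs[req_id].len = len;
    return 0;
}

int main(void)
{
    struct rx_buf_info bufs[Q_DEPTH] = {{0}};

    assert(record_completion(bufs, 5, 1500) == 0);
    assert(record_completion(bufs, Q_DEPTH, 1500) == EIO);
    return 0;
}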
*/ #ifndef ENA_ETH_COM_H_ #define ENA_ETH_COM_H_ #if defined(__cplusplus) extern "C" { #endif #include "ena_com.h" /* head update threshold in units of (queue size / ENA_COMP_HEAD_THRESH) */ #define ENA_COMP_HEAD_THRESH 4 struct ena_com_tx_ctx { struct ena_com_tx_meta ena_meta; struct ena_com_buf *ena_bufs; /* For LLQ, header buffer - pushed to the device mem space */ void *push_header; enum ena_eth_io_l3_proto_index l3_proto; enum ena_eth_io_l4_proto_index l4_proto; u16 num_bufs; u16 req_id; /* For regular queue, indicate the size of the header * For LLQ, indicate the size of the pushed buffer */ u16 header_len; u8 meta_valid; u8 tso_enable; u8 l3_csum_enable; u8 l4_csum_enable; u8 l4_csum_partial; u8 df; /* Don't fragment */ }; struct ena_com_rx_ctx { struct ena_com_rx_buf_info *ena_bufs; enum ena_eth_io_l3_proto_index l3_proto; enum ena_eth_io_l4_proto_index l4_proto; bool l3_csum_err; bool l4_csum_err; u8 l4_csum_checked; /* fragmented packet */ bool frag; u32 hash; u16 descs; int max_bufs; u8 pkt_offset; }; int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, struct ena_com_tx_ctx *ena_tx_ctx, int *nb_hw_desc); int ena_com_rx_pkt(struct ena_com_io_cq *io_cq, struct ena_com_io_sq *io_sq, struct ena_com_rx_ctx *ena_rx_ctx); int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq, struct ena_com_buf *ena_buf, u16 req_id); bool ena_com_cq_empty(struct ena_com_io_cq *io_cq); static inline void ena_com_unmask_intr(struct ena_com_io_cq *io_cq, struct ena_eth_io_intr_reg *intr_reg) { ENA_REG_WRITE32(io_cq->bus, intr_reg->intr_control, io_cq->unmask_reg); } static inline int ena_com_free_q_entries(struct ena_com_io_sq *io_sq) { u16 tail, next_to_comp, cnt; next_to_comp = io_sq->next_to_comp; tail = io_sq->tail; cnt = tail - next_to_comp; return io_sq->q_depth - 1 - cnt; } /* Check if the submission queue has enough space to hold required_buffers */ static inline bool ena_com_sq_have_enough_space(struct ena_com_io_sq *io_sq, u16 required_buffers) { int temp; if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST) return ena_com_free_q_entries(io_sq) >= required_buffers; /* This calculation doesn't need to be 100% accurate. So to reduce * the calculation overhead just Subtract 2 lines from the free descs * (one for the header line and one to compensate the devision * down calculation. 
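Note: ena_com_free_q_entries() above computes the in-flight count as the plain u16 difference tail - next_to_comp. Both counters increase monotonically and are never masked, so 16-bit unsigned wraparound keeps the subtraction correct even after tail passes 65535. Demonstrated standalone:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint16_t next_to_comp = 65530, tail = 4;    /* tail has wrapped */
    uint16_t in_flight = tail - next_to_comp;   /* wrap-safe in u16 */

    assert(in_flight == 10);    /* 6 before the wrap + 4 after it */
    return 0;
}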
*/ temp = required_buffers / io_sq->llq_info.descs_per_entry + 2; return ena_com_free_q_entries(io_sq) > temp; } static inline bool ena_com_meta_desc_changed(struct ena_com_io_sq *io_sq, struct ena_com_tx_ctx *ena_tx_ctx) { if (!ena_tx_ctx->meta_valid) return false; return !!memcmp(&io_sq->cached_tx_meta, &ena_tx_ctx->ena_meta, sizeof(struct ena_com_tx_meta)); } static inline bool is_llq_max_tx_burst_exists(struct ena_com_io_sq *io_sq) { return (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) && io_sq->llq_info.max_entries_in_tx_burst > 0; } static inline bool ena_com_is_doorbell_needed(struct ena_com_io_sq *io_sq, struct ena_com_tx_ctx *ena_tx_ctx) { struct ena_com_llq_info *llq_info; int descs_after_first_entry; int num_entries_needed = 1; u16 num_descs; if (!is_llq_max_tx_burst_exists(io_sq)) return false; llq_info = &io_sq->llq_info; num_descs = ena_tx_ctx->num_bufs; if (llq_info->disable_meta_caching || unlikely(ena_com_meta_desc_changed(io_sq, ena_tx_ctx))) ++num_descs; if (num_descs > llq_info->descs_num_before_header) { descs_after_first_entry = num_descs - llq_info->descs_num_before_header; num_entries_needed += DIV_ROUND_UP(descs_after_first_entry, llq_info->descs_per_entry); } - ena_trc_dbg("queue: %d num_descs: %d num_entries_needed: %d\n", + ena_trc_dbg(ena_com_io_sq_to_ena_dev(io_sq), + "Queue: %d num_descs: %d num_entries_needed: %d\n", io_sq->qid, num_descs, num_entries_needed); return num_entries_needed > io_sq->entries_in_tx_burst_left; } static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq) { u16 max_entries_in_tx_burst = io_sq->llq_info.max_entries_in_tx_burst; u16 tail = io_sq->tail; - ena_trc_dbg("write submission queue doorbell for queue: %d tail: %d\n", + ena_trc_dbg(ena_com_io_sq_to_ena_dev(io_sq), + "Write submission queue doorbell for queue: %d tail: %d\n", io_sq->qid, tail); ENA_REG_WRITE32(io_sq->bus, tail, io_sq->db_addr); if (is_llq_max_tx_burst_exists(io_sq)) { - ena_trc_dbg("reset available entries in tx burst for queue %d to %d\n", - io_sq->qid, max_entries_in_tx_burst); + ena_trc_dbg(ena_com_io_sq_to_ena_dev(io_sq), + "Reset available entries in tx burst for queue %d to %d\n", + io_sq->qid, max_entries_in_tx_burst); io_sq->entries_in_tx_burst_left = max_entries_in_tx_burst; } return 0; } static inline int ena_com_update_dev_comp_head(struct ena_com_io_cq *io_cq) { u16 unreported_comp, head; bool need_update; if (unlikely(io_cq->cq_head_db_reg)) { head = io_cq->head; unreported_comp = head - io_cq->last_head_update; need_update = unreported_comp > (io_cq->q_depth / ENA_COMP_HEAD_THRESH); if (unlikely(need_update)) { - ena_trc_dbg("Write completion queue doorbell for queue %d: head: %d\n", + ena_trc_dbg(ena_com_io_cq_to_ena_dev(io_cq), + "Write completion queue doorbell for queue %d: head: %d\n", io_cq->qid, head); ENA_REG_WRITE32(io_cq->bus, head, io_cq->cq_head_db_reg); io_cq->last_head_update = head; } } return 0; } static inline void ena_com_update_numa_node(struct ena_com_io_cq *io_cq, u8 numa_node) { struct ena_eth_io_numa_node_cfg_reg numa_cfg; if (!io_cq->numa_node_cfg_reg) return; numa_cfg.numa_cfg = (numa_node & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK) | ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK; ENA_REG_WRITE32(io_cq->bus, numa_cfg.numa_cfg, io_cq->numa_node_cfg_reg); } static inline void ena_com_comp_ack(struct ena_com_io_sq *io_sq, u16 elem) { io_sq->next_to_comp += elem; } static inline void ena_com_cq_inc_head(struct ena_com_io_cq *io_cq) { io_cq->head++; /* Switch phase bit in case of wrap around */ if 
(unlikely((io_cq->head & (io_cq->q_depth - 1)) == 0)) io_cq->phase ^= 1; } static inline int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, u16 *req_id) { u8 expected_phase, cdesc_phase; struct ena_eth_io_tx_cdesc *cdesc; u16 masked_head; masked_head = io_cq->head & (io_cq->q_depth - 1); expected_phase = io_cq->phase; cdesc = (struct ena_eth_io_tx_cdesc *) ((uintptr_t)io_cq->cdesc_addr.virt_addr + (masked_head * io_cq->cdesc_entry_size_in_bytes)); /* When the current completion descriptor phase isn't the same as the * expected, it mean that the device still didn't update * this completion. */ cdesc_phase = READ_ONCE16(cdesc->flags) & ENA_ETH_IO_TX_CDESC_PHASE_MASK; if (cdesc_phase != expected_phase) return ENA_COM_TRY_AGAIN; dma_rmb(); *req_id = READ_ONCE16(cdesc->req_id); if (unlikely(*req_id >= io_cq->q_depth)) { - ena_trc_err("Invalid req id %d\n", cdesc->req_id); + ena_trc_err(ena_com_io_cq_to_ena_dev(io_cq), + "Invalid req id %d\n", cdesc->req_id); return ENA_COM_INVAL; } ena_com_cq_inc_head(io_cq); return 0; } #if defined(__cplusplus) } #endif #endif /* ENA_ETH_COM_H_ */ Index: stable/12/sys/contrib/ena-com/ena_plat.h =================================================================== --- stable/12/sys/contrib/ena-com/ena_plat.h (revision 368011) +++ stable/12/sys/contrib/ena-com/ena_plat.h (revision 368012) @@ -1,414 +1,437 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * Neither the name of copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
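Note: ena_com_update_dev_comp_head() above batches completion-queue head doorbells: the head register is written only once more than q_depth / ENA_COMP_HEAD_THRESH completions have gone unreported, trading doorbell traffic for slightly staler device-visible state, and again leaning on wrap-safe u16 subtraction. A host-only model of the trigger condition:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ENA_COMP_HEAD_THRESH 4

static bool head_db_needed(uint16_t head, uint16_t last_head_update,
    uint16_t q_depth)
{
    uint16_t unreported = head - last_head_update;

    return unreported > (uint16_t)(q_depth / ENA_COMP_HEAD_THRESH);
}

int main(void)
{
    assert(!head_db_needed(100, 0, 1024));    /* 100 <= 256: keep batching */
    assert(head_db_needed(300, 0, 1024));     /* 300 > 256: write doorbell */
    return 0;
}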
*/ #ifndef ENA_PLAT_H_ #define ENA_PLAT_H_ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include extern struct ena_bus_space ebs; /* Levels */ #define ENA_ALERT (1 << 0) /* Alerts are providing more error info. */ #define ENA_WARNING (1 << 1) /* Driver output is more error sensitive. */ #define ENA_INFO (1 << 2) /* Provides additional driver info. */ #define ENA_DBG (1 << 3) /* Driver output for debugging. */ /* Detailed info that will be printed with ENA_INFO or ENA_DEBUG flag. */ #define ENA_TXPTH (1 << 4) /* Allows TX path tracing. */ #define ENA_RXPTH (1 << 5) /* Allows RX path tracing. */ #define ENA_RSC (1 << 6) /* Goes with TXPTH or RXPTH, free/alloc res. */ #define ENA_IOQ (1 << 7) /* Detailed info about IO queues. */ #define ENA_ADMQ (1 << 8) /* Detailed info about admin queue. */ #define ENA_NETMAP (1 << 9) /* Detailed info about netmap. */ +#define DEFAULT_ALLOC_ALIGNMENT 8 + extern int ena_log_level; -#define ena_trace_raw(level, fmt, args...) \ +#define container_of(ptr, type, member) \ + ({ \ + const __typeof(((type *)0)->member) *__p = (ptr); \ + (type *)((uintptr_t)__p - offsetof(type, member)); \ + }) + +#define ena_trace_raw(ctx, level, fmt, args...) \ do { \ + ((void)(ctx)); \ if (((level) & ena_log_level) != (level)) \ break; \ printf(fmt, ##args); \ } while (0) -#define ena_trace(level, fmt, args...) \ - ena_trace_raw(level, "%s() [TID:%d]: " \ +#define ena_trace(ctx, level, fmt, args...) \ + ena_trace_raw(ctx, level, "%s() [TID:%d]: " \ fmt, __func__, curthread->td_tid, ##args) -#define ena_trc_dbg(format, arg...) ena_trace(ENA_DBG, format, ##arg) -#define ena_trc_info(format, arg...) ena_trace(ENA_INFO, format, ##arg) -#define ena_trc_warn(format, arg...) ena_trace(ENA_WARNING, format, ##arg) -#define ena_trc_err(format, arg...) ena_trace(ENA_ALERT, format, ##arg) +#define ena_trc_dbg(ctx, format, arg...) \ + ena_trace(ctx, ENA_DBG, format, ##arg) +#define ena_trc_info(ctx, format, arg...) \ + ena_trace(ctx, ENA_INFO, format, ##arg) +#define ena_trc_warn(ctx, format, arg...) \ + ena_trace(ctx, ENA_WARNING, format, ##arg) +#define ena_trc_err(ctx, format, arg...) \ + ena_trace(ctx, ENA_ALERT, format, ##arg) #define unlikely(x) __predict_false(!!(x)) #define likely(x) __predict_true(!!(x)) #define __iomem #define ____cacheline_aligned __aligned(CACHE_LINE_SIZE) #define MAX_ERRNO 4095 #define IS_ERR_VALUE(x) unlikely((x) <= (unsigned long)MAX_ERRNO) -#define ENA_ASSERT(cond, format, arg...) \ +#define ENA_WARN(cond, ctx, format, arg...) \ do { \ - if (unlikely(!(cond))) { \ - ena_trc_err( \ - "Assert failed on %s:%s:%d:" format, \ - __FILE__, __func__, __LINE__, ##arg); \ - } \ - } while (0) - -#define ENA_WARN(cond, format, arg...) 
\ - do { \ if (unlikely((cond))) { \ - ena_trc_warn(format, ##arg); \ + ena_trc_warn(ctx, format, ##arg); \ } \ } while (0) static inline long IS_ERR(const void *ptr) { return IS_ERR_VALUE((unsigned long)ptr); } static inline void *ERR_PTR(long error) { return (void *)error; } static inline long PTR_ERR(const void *ptr) { return (long) ptr; } #define GENMASK(h, l) (((~0U) - (1U << (l)) + 1) & (~0U >> (32 - 1 - (h)))) #define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (64 - 1 - (h)))) #define BIT(x) (1UL << (x)) #define ENA_ABORT() BUG() #define BUG() panic("ENA BUG") #define SZ_256 (256) #define SZ_4K (4096) #define ENA_COM_OK 0 #define ENA_COM_FAULT EFAULT #define ENA_COM_INVAL EINVAL #define ENA_COM_NO_MEM ENOMEM #define ENA_COM_NO_SPACE ENOSPC #define ENA_COM_TRY_AGAIN -1 #define ENA_COM_UNSUPPORTED EOPNOTSUPP #define ENA_COM_NO_DEVICE ENODEV #define ENA_COM_PERMISSION EPERM #define ENA_COM_TIMER_EXPIRED ETIMEDOUT +#define ENA_COM_EIO EIO #define ENA_MSLEEP(x) pause_sbt("ena", SBT_1MS * (x), SBT_1MS, 0) #define ENA_USLEEP(x) pause_sbt("ena", SBT_1US * (x), SBT_1US, 0) #define ENA_UDELAY(x) DELAY(x) #define ENA_GET_SYSTEM_TIMEOUT(timeout_us) \ ((long)cputick2usec(cpu_ticks()) + (timeout_us)) #define ENA_TIME_EXPIRE(timeout) ((timeout) < cputick2usec(cpu_ticks())) #define ENA_MIGHT_SLEEP() #define min_t(type, _x, _y) ((type)(_x) < (type)(_y) ? (type)(_x) : (type)(_y)) #define max_t(type, _x, _y) ((type)(_x) > (type)(_y) ? (type)(_x) : (type)(_y)) #define ENA_MIN32(x,y) MIN(x, y) #define ENA_MIN16(x,y) MIN(x, y) #define ENA_MIN8(x,y) MIN(x, y) #define ENA_MAX32(x,y) MAX(x, y) #define ENA_MAX16(x,y) MAX(x, y) #define ENA_MAX8(x,y) MAX(x, y) /* Spinlock related methods */ #define ena_spinlock_t struct mtx #define ENA_SPINLOCK_INIT(spinlock) \ mtx_init(&(spinlock), "ena_spin", NULL, MTX_SPIN) #define ENA_SPINLOCK_DESTROY(spinlock) \ do { \ if (mtx_initialized(&(spinlock))) \ mtx_destroy(&(spinlock)); \ } while (0) #define ENA_SPINLOCK_LOCK(spinlock, flags) \ do { \ (void)(flags); \ mtx_lock_spin(&(spinlock)); \ } while (0) #define ENA_SPINLOCK_UNLOCK(spinlock, flags) \ do { \ (void)(flags); \ mtx_unlock_spin(&(spinlock)); \ } while (0) /* Wait queue related methods */ #define ena_wait_event_t struct { struct cv wq; struct mtx mtx; } #define ENA_WAIT_EVENT_INIT(waitqueue) \ do { \ cv_init(&((waitqueue).wq), "cv"); \ mtx_init(&((waitqueue).mtx), "wq", NULL, MTX_DEF); \ } while (0) -#define ENA_WAIT_EVENT_DESTROY(waitqueue) \ +#define ENA_WAIT_EVENTS_DESTROY(admin_queue) \ do { \ - cv_destroy(&((waitqueue).wq)); \ - mtx_destroy(&((waitqueue).mtx)); \ + struct ena_comp_ctx *comp_ctx; \ + int i; \ + for (i = 0; i < admin_queue->q_depth; i++) { \ + comp_ctx = get_comp_ctxt(admin_queue, i, false); \ + if (comp_ctx != NULL) { \ + cv_destroy(&((comp_ctx->wait_event).wq)); \ + mtx_destroy(&((comp_ctx->wait_event).mtx)); \ + } \ + } \ } while (0) #define ENA_WAIT_EVENT_CLEAR(waitqueue) \ cv_init(&((waitqueue).wq), (waitqueue).wq.cv_description) #define ENA_WAIT_EVENT_WAIT(waitqueue, timeout_us) \ do { \ mtx_lock(&((waitqueue).mtx)); \ cv_timedwait(&((waitqueue).wq), &((waitqueue).mtx), \ timeout_us * hz / 1000 / 1000 ); \ mtx_unlock(&((waitqueue).mtx)); \ } while (0) #define ENA_WAIT_EVENT_SIGNAL(waitqueue) \ do { \ mtx_lock(&((waitqueue).mtx)); \ cv_broadcast(&((waitqueue).wq)); \ mtx_unlock(&((waitqueue).mtx)); \ } while (0) #define dma_addr_t bus_addr_t #define u8 uint8_t #define u16 uint16_t #define u32 uint32_t #define u64 uint64_t typedef struct { bus_addr_t paddr; caddr_t vaddr; 
bus_dma_tag_t tag; bus_dmamap_t map; bus_dma_segment_t seg; int nseg; } ena_mem_handle_t; struct ena_bus { bus_space_handle_t reg_bar_h; bus_space_tag_t reg_bar_t; bus_space_handle_t mem_bar_h; bus_space_tag_t mem_bar_t; }; typedef uint32_t ena_atomic32_t; #define ENA_PRIu64 PRIu64 typedef uint64_t ena_time_t; +typedef struct ifnet ena_netdev; void ena_dmamap_callback(void *arg, bus_dma_segment_t *segs, int nseg, int error); int ena_dma_alloc(device_t dmadev, bus_size_t size, ena_mem_handle_t *dma, - int mapflags); + int mapflags, bus_size_t alignment); static inline uint32_t ena_reg_read32(struct ena_bus *bus, bus_size_t offset) { uint32_t v = bus_space_read_4(bus->reg_bar_t, bus->reg_bar_h, offset); rmb(); return v; } #define ENA_MEMCPY_TO_DEVICE_64(dst, src, size) \ do { \ int count, i; \ volatile uint64_t *to = (volatile uint64_t *)(dst); \ const uint64_t *from = (const uint64_t *)(src); \ count = (size) / 8; \ \ for (i = 0; i < count; i++, from++, to++) \ *to = *from; \ } while (0) #define ENA_MEM_ALLOC(dmadev, size) malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO) #define ENA_MEM_ALLOC_NODE(dmadev, size, virt, node, dev_node) (virt = NULL) #define ENA_MEM_FREE(dmadev, ptr, size) \ do { \ (void)(size); \ free(ptr, M_DEVBUF); \ } while (0) -#define ENA_MEM_ALLOC_COHERENT_NODE(dmadev, size, virt, phys, handle, node, \ - dev_node) \ +#define ENA_MEM_ALLOC_COHERENT_NODE_ALIGNED(dmadev, size, virt, phys, \ + handle, node, dev_node, alignment) \ do { \ ((virt) = NULL); \ (void)(dev_node); \ } while (0) -#define ENA_MEM_ALLOC_COHERENT(dmadev, size, virt, phys, dma) \ +#define ENA_MEM_ALLOC_COHERENT_NODE(dmadev, size, virt, phys, handle, \ + node, dev_node) \ + ENA_MEM_ALLOC_COHERENT_NODE_ALIGNED(dmadev, size, virt, \ + phys, handle, node, dev_node, DEFAULT_ALLOC_ALIGNMENT) + +#define ENA_MEM_ALLOC_COHERENT_ALIGNED(dmadev, size, virt, phys, dma, \ + alignment) \ do { \ - ena_dma_alloc((dmadev), (size), &(dma), 0); \ + ena_dma_alloc((dmadev), (size), &(dma), 0, alignment); \ (virt) = (void *)(dma).vaddr; \ (phys) = (dma).paddr; \ } while (0) + +#define ENA_MEM_ALLOC_COHERENT(dmadev, size, virt, phys, dma) \ + ENA_MEM_ALLOC_COHERENT_ALIGNED(dmadev, size, virt, \ + phys, dma, DEFAULT_ALLOC_ALIGNMENT) #define ENA_MEM_FREE_COHERENT(dmadev, size, virt, phys, dma) \ do { \ (void)size; \ bus_dmamap_unload((dma).tag, (dma).map); \ bus_dmamem_free((dma).tag, (virt), (dma).map); \ bus_dma_tag_destroy((dma).tag); \ (dma).tag = NULL; \ (virt) = NULL; \ } while (0) /* Register R/W methods */ #define ENA_REG_WRITE32(bus, value, offset) \ do { \ wmb(); \ ENA_REG_WRITE32_RELAXED(bus, value, offset); \ } while (0) #define ENA_REG_WRITE32_RELAXED(bus, value, offset) \ bus_space_write_4( \ ((struct ena_bus*)bus)->reg_bar_t, \ ((struct ena_bus*)bus)->reg_bar_h, \ (bus_size_t)(offset), (value)) #define ENA_REG_READ32(bus, offset) \ ena_reg_read32((struct ena_bus*)(bus), (bus_size_t)(offset)) #define ENA_DB_SYNC_WRITE(mem_handle) bus_dmamap_sync( \ (mem_handle)->tag, (mem_handle)->map, BUS_DMASYNC_PREWRITE) #define ENA_DB_SYNC_PREREAD(mem_handle) bus_dmamap_sync( \ (mem_handle)->tag, (mem_handle)->map, BUS_DMASYNC_PREREAD) #define ENA_DB_SYNC_POSTREAD(mem_handle) bus_dmamap_sync( \ (mem_handle)->tag, (mem_handle)->map, BUS_DMASYNC_POSTREAD) #define ENA_DB_SYNC(mem_handle) ENA_DB_SYNC_WRITE(mem_handle) #define time_after(a,b) ((long)((unsigned long)(b) - (unsigned long)(a)) < 0) #define VLAN_HLEN sizeof(struct ether_vlan_header) #define CSUM_OFFLOAD (CSUM_IP|CSUM_TCP|CSUM_UDP) #define prefetch(x) (void)(x) #define 
prefetchw(x) (void)(x) /* DMA buffers access */ #define dma_unmap_addr(p, name) ((p)->dma->name) #define dma_unmap_addr_set(p, name, v) (((p)->dma->name) = (v)) #define dma_unmap_len(p, name) ((p)->name) #define dma_unmap_len_set(p, name, v) (((p)->name) = (v)) #define memcpy_toio memcpy #define ATOMIC32_INC(I32_PTR) atomic_add_int(I32_PTR, 1) #define ATOMIC32_DEC(I32_PTR) atomic_add_int(I32_PTR, -1) #define ATOMIC32_READ(I32_PTR) atomic_load_acq_int(I32_PTR) #define ATOMIC32_SET(I32_PTR, VAL) atomic_store_rel_int(I32_PTR, VAL) #define barrier() __asm__ __volatile__("": : :"memory") #define dma_rmb() barrier() #define mmiowb() barrier() #define ACCESS_ONCE(x) (*(volatile __typeof(x) *)&(x)) #define READ_ONCE(x) ({ \ __typeof(x) __var; \ barrier(); \ __var = ACCESS_ONCE(x); \ barrier(); \ __var; \ }) #define READ_ONCE8(x) READ_ONCE(x) #define READ_ONCE16(x) READ_ONCE(x) #define READ_ONCE32(x) READ_ONCE(x) #define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16)) #define lower_32_bits(n) ((uint32_t)(n)) #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d)) #define ENA_FFS(x) ffs(x) void ena_rss_key_fill(void *key, size_t size); #define ENA_RSS_FILL_KEY(key, size) ena_rss_key_fill(key, size) #include "ena_defs/ena_includes.h" #endif /* ENA_PLAT_H_ */ Index: stable/12/sys/dev/ena/ena.c =================================================================== --- stable/12/sys/dev/ena/ena.c (revision 368011) +++ stable/12/sys/dev/ena/ena.c (revision 368012) @@ -1,3858 +1,3931 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-2-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
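Note: the container_of() macro added to ena_plat.h above recovers a pointer to an enclosing structure from a pointer to one of its members; conversions such as ena_com_io_sq_to_ena_dev(), used throughout the retargeted trace calls in this diff, can be built from exactly this arithmetic. A self-contained usage example with a simplified version of the macro (the one above additionally type-checks the member pointer via __typeof):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define container_of(ptr, type, member) \
    ((type *)((uintptr_t)(ptr) - offsetof(type, member)))

struct dev_model { int id; struct { int qid; } io_sq; };  /* illustrative */

int main(void)
{
    struct dev_model dev = { .id = 7, .io_sq = { .qid = 0 } };
    struct dev_model *back =
        container_of(&dev.io_sq, struct dev_model, io_sq);

    assert(back == &dev && back->id == 7);
    return 0;
}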
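Note: GENMASK(h, l) and GENMASK_ULL(h, l) above build contiguous bit masks covering bits l..h inclusive; the 64-bit variant is what ena_com_prepare_tx() uses to split a DMA address into its high bits (the addr_hi computation earlier in this diff). Two quick sanity checks:

#include <assert.h>

#define GENMASK(h, l) (((~0U) - (1U << (l)) + 1) & (~0U >> (32 - 1 - (h))))
#define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (64 - 1 - (h))))

int main(void)
{
    assert(GENMASK(7, 4) == 0xf0U);                 /* bits 4..7 */
    assert(GENMASK_ULL(39, 32) == 0xff00000000ULL); /* bits 32..39 */
    return 0;
}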
*/ #include __FBSDID("$FreeBSD$"); #include "opt_rss.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef RSS #include #endif #include #include #include #include #include #include #include #include #include #include #include #include "ena_datapath.h" #include "ena.h" #include "ena_sysctl.h" #ifdef DEV_NETMAP #include "ena_netmap.h" #endif /* DEV_NETMAP */ /********************************************************* * Function prototypes *********************************************************/ static int ena_probe(device_t); static void ena_intr_msix_mgmnt(void *); static void ena_free_pci_resources(struct ena_adapter *); static int ena_change_mtu(if_t, int); static inline void ena_alloc_counters(counter_u64_t *, int); static inline void ena_free_counters(counter_u64_t *, int); static inline void ena_reset_counters(counter_u64_t *, int); static void ena_init_io_rings_common(struct ena_adapter *, struct ena_ring *, uint16_t); static void ena_init_io_rings_basic(struct ena_adapter *); static void ena_init_io_rings_advanced(struct ena_adapter *); static void ena_init_io_rings(struct ena_adapter *); static void ena_free_io_ring_resources(struct ena_adapter *, unsigned int); static void ena_free_all_io_rings_resources(struct ena_adapter *); static int ena_setup_tx_dma_tag(struct ena_adapter *); static int ena_free_tx_dma_tag(struct ena_adapter *); static int ena_setup_rx_dma_tag(struct ena_adapter *); static int ena_free_rx_dma_tag(struct ena_adapter *); static void ena_release_all_tx_dmamap(struct ena_ring *); static int ena_setup_tx_resources(struct ena_adapter *, int); static void ena_free_tx_resources(struct ena_adapter *, int); static int ena_setup_all_tx_resources(struct ena_adapter *); static void ena_free_all_tx_resources(struct ena_adapter *); static int ena_setup_rx_resources(struct ena_adapter *, unsigned int); static void ena_free_rx_resources(struct ena_adapter *, unsigned int); static int ena_setup_all_rx_resources(struct ena_adapter *); static void ena_free_all_rx_resources(struct ena_adapter *); static inline int ena_alloc_rx_mbuf(struct ena_adapter *, struct ena_ring *, struct ena_rx_buffer *); static void ena_free_rx_mbuf(struct ena_adapter *, struct ena_ring *, struct ena_rx_buffer *); static void ena_free_rx_bufs(struct ena_adapter *, unsigned int); static void ena_refill_all_rx_bufs(struct ena_adapter *); static void ena_free_all_rx_bufs(struct ena_adapter *); static void ena_free_tx_bufs(struct ena_adapter *, unsigned int); static void ena_free_all_tx_bufs(struct ena_adapter *); static void ena_destroy_all_tx_queues(struct ena_adapter *); static void ena_destroy_all_rx_queues(struct ena_adapter *); static void ena_destroy_all_io_queues(struct ena_adapter *); static int ena_create_io_queues(struct ena_adapter *); static int ena_handle_msix(void *); static int ena_enable_msix(struct ena_adapter *); static void ena_setup_mgmnt_intr(struct ena_adapter *); static int ena_setup_io_intr(struct ena_adapter *); static int ena_request_mgmnt_irq(struct ena_adapter *); static int ena_request_io_irq(struct ena_adapter *); static void ena_free_mgmnt_irq(struct ena_adapter *); static void ena_free_io_irq(struct ena_adapter *); static void ena_free_irqs(struct ena_adapter*); static void ena_disable_msix(struct ena_adapter *); static void ena_unmask_all_io_irqs(struct 
ena_adapter *);
static int ena_rss_configure(struct ena_adapter *);
static int ena_up_complete(struct ena_adapter *);
static uint64_t ena_get_counter(if_t, ift_counter);
static int ena_media_change(if_t);
static void ena_media_status(if_t, struct ifmediareq *);
static void ena_init(void *);
static int ena_ioctl(if_t, u_long, caddr_t);
static int ena_get_dev_offloads(struct ena_com_dev_get_features_ctx *);
static void ena_update_host_info(struct ena_admin_host_info *, if_t);
static void ena_update_hwassist(struct ena_adapter *);
static int ena_setup_ifnet(device_t, struct ena_adapter *,
    struct ena_com_dev_get_features_ctx *);
static int ena_enable_wc(struct resource *);
static int ena_set_queues_placement_policy(device_t, struct ena_com_dev *,
    struct ena_admin_feature_llq_desc *, struct ena_llq_configurations *);
static uint32_t ena_calc_max_io_queue_num(device_t, struct ena_com_dev *,
    struct ena_com_dev_get_features_ctx *);
static int ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *);
static int ena_rss_init_default(struct ena_adapter *);
static void ena_rss_init_default_deferred(void *);
static void ena_config_host_info(struct ena_com_dev *, device_t);
static int ena_attach(device_t);
static int ena_detach(device_t);
static int ena_device_init(struct ena_adapter *, device_t,
    struct ena_com_dev_get_features_ctx *, int *);
static int ena_enable_msix_and_set_admin_interrupts(struct ena_adapter *);
static void ena_update_on_link_change(void *, struct ena_admin_aenq_entry *);
static void unimplemented_aenq_handler(void *, struct ena_admin_aenq_entry *);
+static int ena_copy_eni_metrics(struct ena_adapter *);
static void ena_timer_service(void *);

static char ena_version[] = DEVICE_NAME DRV_MODULE_NAME " v" DRV_MODULE_VERSION;

static ena_vendor_info_t ena_vendor_info_array[] = {
    { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_PF, 0},
-    { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_LLQ_PF, 0},
+    { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_PF_RSERV0, 0},
    { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_VF, 0},
-    { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_LLQ_VF, 0},
+    { PCI_VENDOR_ID_AMAZON, PCI_DEV_ID_ENA_VF_RSERV0, 0},
    /* Last entry */
    { 0, 0, 0 }
};

/*
 * Contains pointers to event handlers, e.g. link state change.
*/ static struct ena_aenq_handlers aenq_handlers; void ena_dmamap_callback(void *arg, bus_dma_segment_t *segs, int nseg, int error) { if (error != 0) return; *(bus_addr_t *) arg = segs[0].ds_addr; } int ena_dma_alloc(device_t dmadev, bus_size_t size, - ena_mem_handle_t *dma , int mapflags) + ena_mem_handle_t *dma, int mapflags, bus_size_t alignment) { struct ena_adapter* adapter = device_get_softc(dmadev); uint32_t maxsize; uint64_t dma_space_addr; int error; maxsize = ((size - 1) / PAGE_SIZE + 1) * PAGE_SIZE; dma_space_addr = ENA_DMA_BIT_MASK(adapter->dma_width); if (unlikely(dma_space_addr == 0)) dma_space_addr = BUS_SPACE_MAXADDR; error = bus_dma_tag_create(bus_get_dma_tag(dmadev), /* parent */ - 8, 0, /* alignment, bounds */ + alignment, 0, /* alignment, bounds */ dma_space_addr, /* lowaddr of exclusion window */ BUS_SPACE_MAXADDR,/* highaddr of exclusion window */ NULL, NULL, /* filter, filterarg */ maxsize, /* maxsize */ 1, /* nsegments */ maxsize, /* maxsegsize */ BUS_DMA_ALLOCNOW, /* flags */ NULL, /* lockfunc */ NULL, /* lockarg */ &dma->tag); if (unlikely(error != 0)) { - ena_trace(ENA_ALERT, "bus_dma_tag_create failed: %d\n", error); + ena_trace(NULL, ENA_ALERT, "bus_dma_tag_create failed: %d\n", error); goto fail_tag; } error = bus_dmamem_alloc(dma->tag, (void**) &dma->vaddr, BUS_DMA_COHERENT | BUS_DMA_ZERO, &dma->map); if (unlikely(error != 0)) { - ena_trace(ENA_ALERT, "bus_dmamem_alloc(%ju) failed: %d\n", + ena_trace(NULL, ENA_ALERT, "bus_dmamem_alloc(%ju) failed: %d\n", (uintmax_t)size, error); goto fail_map_create; } dma->paddr = 0; error = bus_dmamap_load(dma->tag, dma->map, dma->vaddr, size, ena_dmamap_callback, &dma->paddr, mapflags); if (unlikely((error != 0) || (dma->paddr == 0))) { - ena_trace(ENA_ALERT, ": bus_dmamap_load failed: %d\n", error); + ena_trace(NULL, ENA_ALERT, ": bus_dmamap_load failed: %d\n", error); goto fail_map_load; } bus_dmamap_sync(dma->tag, dma->map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE); return (0); fail_map_load: bus_dmamem_free(dma->tag, dma->vaddr, dma->map); fail_map_create: bus_dma_tag_destroy(dma->tag); fail_tag: dma->tag = NULL; dma->vaddr = NULL; dma->paddr = 0; return (error); } /* * This function should generate unique key for the whole driver. * If the key was already genereated in the previous call (for example * for another adapter), then it should be returned instead. 
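Note: ena_rss_key_fill() just below implements the generate-once policy described in the surrounding comment: the first call draws random bytes, and every later call (for example, for a second adapter) returns the same cached key, so all ports hash identically. A userspace model of that pattern, with rand() standing in for the kernel's arc4random_buf() and the key size assumed to be the driver's 40-byte ENA_HASH_KEY_SIZE:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define KEY_SIZE 40    /* assumed value of ENA_HASH_KEY_SIZE */

static void rss_key_fill(void *key, size_t size)
{
    static bool generated;
    static unsigned char cached[KEY_SIZE];
    size_t i;

    if (!generated) {
        for (i = 0; i < KEY_SIZE; i++)    /* arc4random_buf() in-kernel */
            cached[i] = (unsigned char)rand();
        generated = true;
    }
    memcpy(key, cached, size);    /* callers request at most KEY_SIZE */
}

int main(void)
{
    unsigned char k1[KEY_SIZE], k2[KEY_SIZE];

    rss_key_fill(k1, sizeof(k1));
    rss_key_fill(k2, sizeof(k2));    /* second "adapter": same key */
    return memcmp(k1, k2, KEY_SIZE) != 0;    /* exit 0: keys match */
}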
*/ void ena_rss_key_fill(void *key, size_t size) { static bool key_generated; static uint8_t default_key[ENA_HASH_KEY_SIZE]; KASSERT(size <= ENA_HASH_KEY_SIZE, ("Requested more bytes than ENA RSS key can hold")); if (!key_generated) { arc4random_buf(default_key, ENA_HASH_KEY_SIZE); key_generated = true; } memcpy(key, default_key, size); } static void ena_free_pci_resources(struct ena_adapter *adapter) { device_t pdev = adapter->pdev; if (adapter->memory != NULL) { bus_release_resource(pdev, SYS_RES_MEMORY, PCIR_BAR(ENA_MEM_BAR), adapter->memory); } if (adapter->registers != NULL) { bus_release_resource(pdev, SYS_RES_MEMORY, PCIR_BAR(ENA_REG_BAR), adapter->registers); } } static int ena_probe(device_t dev) { ena_vendor_info_t *ent; char adapter_name[60]; uint16_t pci_vendor_id = 0; uint16_t pci_device_id = 0; pci_vendor_id = pci_get_vendor(dev); pci_device_id = pci_get_device(dev); ent = ena_vendor_info_array; while (ent->vendor_id != 0) { if ((pci_vendor_id == ent->vendor_id) && (pci_device_id == ent->device_id)) { - ena_trace(ENA_DBG, "vendor=%x device=%x\n", + ena_trace(NULL, ENA_DBG, "vendor=%x device=%x\n", pci_vendor_id, pci_device_id); sprintf(adapter_name, DEVICE_DESC); device_set_desc_copy(dev, adapter_name); return (BUS_PROBE_DEFAULT); } ent++; } return (ENXIO); } static int ena_change_mtu(if_t ifp, int new_mtu) { struct ena_adapter *adapter = if_getsoftc(ifp); int rc; if ((new_mtu > adapter->max_mtu) || (new_mtu < ENA_MIN_MTU)) { device_printf(adapter->pdev, "Invalid MTU setting. " "new_mtu: %d max mtu: %d min mtu: %d\n", new_mtu, adapter->max_mtu, ENA_MIN_MTU); return (EINVAL); } rc = ena_com_set_dev_mtu(adapter->ena_dev, new_mtu); if (likely(rc == 0)) { - ena_trace(ENA_DBG, "set MTU to %d\n", new_mtu); + ena_trace(NULL, ENA_DBG, "set MTU to %d\n", new_mtu); if_setmtu(ifp, new_mtu); } else { device_printf(adapter->pdev, "Failed to set MTU to %d\n", new_mtu); } return (rc); } static inline void ena_alloc_counters(counter_u64_t *begin, int size) { counter_u64_t *end = (counter_u64_t *)((char *)begin + size); for (; begin < end; ++begin) *begin = counter_u64_alloc(M_WAITOK); } static inline void ena_free_counters(counter_u64_t *begin, int size) { counter_u64_t *end = (counter_u64_t *)((char *)begin + size); for (; begin < end; ++begin) counter_u64_free(*begin); } static inline void ena_reset_counters(counter_u64_t *begin, int size) { counter_u64_t *end = (counter_u64_t *)((char *)begin + size); for (; begin < end; ++begin) counter_u64_zero(*begin); } static void ena_init_io_rings_common(struct ena_adapter *adapter, struct ena_ring *ring, uint16_t qid) { ring->qid = qid; ring->adapter = adapter; ring->ena_dev = adapter->ena_dev; ring->first_interrupt = false; ring->no_interrupt_event_cnt = 0; } static void ena_init_io_rings_basic(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev; struct ena_ring *txr, *rxr; struct ena_que *que; int i; ena_dev = adapter->ena_dev; for (i = 0; i < adapter->num_io_queues; i++) { txr = &adapter->tx_ring[i]; rxr = &adapter->rx_ring[i]; /* TX/RX common ring state */ ena_init_io_rings_common(adapter, txr, i); ena_init_io_rings_common(adapter, rxr, i); /* TX specific ring state */ txr->tx_max_header_size = ena_dev->tx_max_header_size; txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type; que = &adapter->que[i]; que->adapter = adapter; que->id = i; que->tx_ring = txr; que->rx_ring = rxr; txr->que = que; rxr->que = que; rxr->empty_rx_queue = 0; rxr->rx_mbuf_sz = ena_mbuf_sz; } } static void ena_init_io_rings_advanced(struct ena_adapter *adapter) { 
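	/*
	 * Everything allocated here (buf ring, statistics counters and the
	 * ring mutex) is released again in ena_free_io_ring_resources().
	 */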
	struct ena_ring *txr, *rxr;
	int i;

	for (i = 0; i < adapter->num_io_queues; i++) {
		txr = &adapter->tx_ring[i];
		rxr = &adapter->rx_ring[i];

		/* Allocate a buf ring */
		txr->buf_ring_size = adapter->buf_ring_size;
		txr->br = buf_ring_alloc(txr->buf_ring_size, M_DEVBUF,
		    M_WAITOK, &txr->ring_mtx);

		/* Allocate Tx statistics. */
		ena_alloc_counters((counter_u64_t *)&txr->tx_stats,
		    sizeof(txr->tx_stats));

		/* Allocate Rx statistics. */
		ena_alloc_counters((counter_u64_t *)&rxr->rx_stats,
		    sizeof(rxr->rx_stats));

		/* Initialize locks */
		snprintf(txr->mtx_name, nitems(txr->mtx_name), "%s:tx(%d)",
		    device_get_nameunit(adapter->pdev), i);
		snprintf(rxr->mtx_name, nitems(rxr->mtx_name), "%s:rx(%d)",
		    device_get_nameunit(adapter->pdev), i);

		mtx_init(&txr->ring_mtx, txr->mtx_name, NULL, MTX_DEF);
	}
}

static void
ena_init_io_rings(struct ena_adapter *adapter)
{
	/*
	 * IO rings initialization can be divided into two steps:
	 * 1. Initialize variables and fields with initial values and copy
	 *    them from adapter/ena_dev (basic)
	 * 2. Allocate mutex, counters and buf_ring (advanced)
	 */
	ena_init_io_rings_basic(adapter);
	ena_init_io_rings_advanced(adapter);
}

static void
ena_free_io_ring_resources(struct ena_adapter *adapter, unsigned int qid)
{
	struct ena_ring *txr = &adapter->tx_ring[qid];
	struct ena_ring *rxr = &adapter->rx_ring[qid];

	ena_free_counters((counter_u64_t *)&txr->tx_stats,
	    sizeof(txr->tx_stats));
	ena_free_counters((counter_u64_t *)&rxr->rx_stats,
	    sizeof(rxr->rx_stats));

	ENA_RING_MTX_LOCK(txr);
	drbr_free(txr->br, M_DEVBUF);
	ENA_RING_MTX_UNLOCK(txr);

	mtx_destroy(&txr->ring_mtx);
}

static void
ena_free_all_io_rings_resources(struct ena_adapter *adapter)
{
	int i;

	for (i = 0; i < adapter->num_io_queues; i++)
		ena_free_io_ring_resources(adapter, i);
}

static int
ena_setup_tx_dma_tag(struct ena_adapter *adapter)
{
	int ret;

	/* Create DMA tag for Tx buffers */
	ret = bus_dma_tag_create(bus_get_dma_tag(adapter->pdev),
	    1, 0, /* alignment, bounds */
	    ENA_DMA_BIT_MASK(adapter->dma_width), /* lowaddr of excl window */
	    BUS_SPACE_MAXADDR, /* highaddr of excl window */
	    NULL, NULL, /* filter, filterarg */
	    ENA_TSO_MAXSIZE, /* maxsize */
	    adapter->max_tx_sgl_size - 1, /* nsegments */
	    ENA_TSO_MAXSIZE, /* maxsegsize */
	    0, /* flags */
	    NULL, /* lockfunc */
	    NULL, /* lockfuncarg */
	    &adapter->tx_buf_tag);

	return (ret);
}

static int
ena_free_tx_dma_tag(struct ena_adapter *adapter)
{
	int ret;

	ret = bus_dma_tag_destroy(adapter->tx_buf_tag);

	if (likely(ret == 0))
		adapter->tx_buf_tag = NULL;

	return (ret);
}

static int
ena_setup_rx_dma_tag(struct ena_adapter *adapter)
{
	int ret;

	/* Create DMA tag for Rx buffers */
	ret = bus_dma_tag_create(bus_get_dma_tag(adapter->pdev), /* parent */
	    1, 0, /* alignment, bounds */
	    ENA_DMA_BIT_MASK(adapter->dma_width), /* lowaddr of excl window */
	    BUS_SPACE_MAXADDR, /* highaddr of excl window */
	    NULL, NULL, /* filter, filterarg */
	    ena_mbuf_sz, /* maxsize */
	    adapter->max_rx_sgl_size, /* nsegments */
	    ena_mbuf_sz, /* maxsegsize */
	    0, /* flags */
	    NULL, /* lockfunc */
	    NULL, /* lockarg */
	    &adapter->rx_buf_tag);

	return (ret);
}

static int
ena_free_rx_dma_tag(struct ena_adapter *adapter)
{
	int ret;

	ret = bus_dma_tag_destroy(adapter->rx_buf_tag);

	if (likely(ret == 0))
		adapter->rx_buf_tag = NULL;

	return (ret);
}

static void
ena_release_all_tx_dmamap(struct ena_ring *tx_ring)
{
	struct ena_adapter *adapter = tx_ring->adapter;
	struct ena_tx_buffer *tx_info;
	bus_dma_tag_t tx_tag = adapter->tx_buf_tag;
	int i;
#ifdef DEV_NETMAP
	struct ena_netmap_tx_info *nm_info;
	int j;
#endif /* DEV_NETMAP */

	for (i = 0; i < tx_ring->ring_size;
++i) { tx_info = &tx_ring->tx_buffer_info[i]; #ifdef DEV_NETMAP if (adapter->ifp->if_capenable & IFCAP_NETMAP) { nm_info = &tx_info->nm_info; for (j = 0; j < ENA_PKT_MAX_BUFS; ++j) { if (nm_info->map_seg[j] != NULL) { bus_dmamap_destroy(tx_tag, nm_info->map_seg[j]); nm_info->map_seg[j] = NULL; } } } #endif /* DEV_NETMAP */ if (tx_info->dmamap != NULL) { bus_dmamap_destroy(tx_tag, tx_info->dmamap); tx_info->dmamap = NULL; } } } /** * ena_setup_tx_resources - allocate Tx resources (Descriptors) * @adapter: network interface device structure * @qid: queue index * * Returns 0 on success, otherwise on failure. **/ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid) { struct ena_que *que = &adapter->que[qid]; struct ena_ring *tx_ring = que->tx_ring; int size, i, err; #ifdef DEV_NETMAP bus_dmamap_t *map; int j; ena_netmap_reset_tx_ring(adapter, qid); #endif /* DEV_NETMAP */ size = sizeof(struct ena_tx_buffer) * tx_ring->ring_size; tx_ring->tx_buffer_info = malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO); if (unlikely(tx_ring->tx_buffer_info == NULL)) return (ENOMEM); size = sizeof(uint16_t) * tx_ring->ring_size; tx_ring->free_tx_ids = malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO); if (unlikely(tx_ring->free_tx_ids == NULL)) goto err_buf_info_free; size = tx_ring->tx_max_header_size; tx_ring->push_buf_intermediate_buf = malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO); if (unlikely(tx_ring->push_buf_intermediate_buf == NULL)) goto err_tx_ids_free; /* Req id stack for TX OOO completions */ for (i = 0; i < tx_ring->ring_size; i++) tx_ring->free_tx_ids[i] = i; /* Reset TX statistics. */ ena_reset_counters((counter_u64_t *)&tx_ring->tx_stats, sizeof(tx_ring->tx_stats)); tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; tx_ring->acum_pkts = 0; /* Make sure that drbr is empty */ ENA_RING_MTX_LOCK(tx_ring); drbr_flush(adapter->ifp, tx_ring->br); ENA_RING_MTX_UNLOCK(tx_ring); /* ... 
and create the buffer DMA maps */ for (i = 0; i < tx_ring->ring_size; i++) { err = bus_dmamap_create(adapter->tx_buf_tag, 0, &tx_ring->tx_buffer_info[i].dmamap); if (unlikely(err != 0)) { - ena_trace(ENA_ALERT, + ena_trace(NULL, ENA_ALERT, "Unable to create Tx DMA map for buffer %d\n", i); goto err_map_release; } #ifdef DEV_NETMAP if (adapter->ifp->if_capenable & IFCAP_NETMAP) { map = tx_ring->tx_buffer_info[i].nm_info.map_seg; for (j = 0; j < ENA_PKT_MAX_BUFS; j++) { err = bus_dmamap_create(adapter->tx_buf_tag, 0, &map[j]); if (unlikely(err != 0)) { - ena_trace(ENA_ALERT, "Unable to create " + ena_trace(NULL, ENA_ALERT, "Unable to create " "Tx DMA for buffer %d %d\n", i, j); goto err_map_release; } } } #endif /* DEV_NETMAP */ } /* Allocate taskqueues */ TASK_INIT(&tx_ring->enqueue_task, 0, ena_deferred_mq_start, tx_ring); tx_ring->enqueue_tq = taskqueue_create_fast("ena_tx_enque", M_NOWAIT, taskqueue_thread_enqueue, &tx_ring->enqueue_tq); if (unlikely(tx_ring->enqueue_tq == NULL)) { - ena_trace(ENA_ALERT, + ena_trace(NULL, ENA_ALERT, "Unable to create taskqueue for enqueue task\n"); i = tx_ring->ring_size; goto err_map_release; } tx_ring->running = true; taskqueue_start_threads(&tx_ring->enqueue_tq, 1, PI_NET, "%s txeq %d", device_get_nameunit(adapter->pdev), que->cpu); return (0); err_map_release: ena_release_all_tx_dmamap(tx_ring); err_tx_ids_free: free(tx_ring->free_tx_ids, M_DEVBUF); tx_ring->free_tx_ids = NULL; err_buf_info_free: free(tx_ring->tx_buffer_info, M_DEVBUF); tx_ring->tx_buffer_info = NULL; return (ENOMEM); } /** * ena_free_tx_resources - Free Tx Resources per Queue * @adapter: network interface device structure * @qid: queue index * * Free all transmit software resources **/ static void ena_free_tx_resources(struct ena_adapter *adapter, int qid) { struct ena_ring *tx_ring = &adapter->tx_ring[qid]; #ifdef DEV_NETMAP struct ena_netmap_tx_info *nm_info; int j; #endif /* DEV_NETMAP */ while (taskqueue_cancel(tx_ring->enqueue_tq, &tx_ring->enqueue_task, NULL)) taskqueue_drain(tx_ring->enqueue_tq, &tx_ring->enqueue_task); taskqueue_free(tx_ring->enqueue_tq); ENA_RING_MTX_LOCK(tx_ring); /* Flush buffer ring, */ drbr_flush(adapter->ifp, tx_ring->br); /* Free buffer DMA maps, */ for (int i = 0; i < tx_ring->ring_size; i++) { bus_dmamap_sync(adapter->tx_buf_tag, tx_ring->tx_buffer_info[i].dmamap, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(adapter->tx_buf_tag, tx_ring->tx_buffer_info[i].dmamap); bus_dmamap_destroy(adapter->tx_buf_tag, tx_ring->tx_buffer_info[i].dmamap); #ifdef DEV_NETMAP if (adapter->ifp->if_capenable & IFCAP_NETMAP) { nm_info = &tx_ring->tx_buffer_info[i].nm_info; for (j = 0; j < ENA_PKT_MAX_BUFS; j++) { if (nm_info->socket_buf_idx[j] != 0) { bus_dmamap_sync(adapter->tx_buf_tag, nm_info->map_seg[j], BUS_DMASYNC_POSTWRITE); ena_netmap_unload(adapter, nm_info->map_seg[j]); } bus_dmamap_destroy(adapter->tx_buf_tag, nm_info->map_seg[j]); nm_info->socket_buf_idx[j] = 0; } } #endif /* DEV_NETMAP */ m_freem(tx_ring->tx_buffer_info[i].mbuf); tx_ring->tx_buffer_info[i].mbuf = NULL; } ENA_RING_MTX_UNLOCK(tx_ring); /* And free allocated memory. */ free(tx_ring->tx_buffer_info, M_DEVBUF); tx_ring->tx_buffer_info = NULL; free(tx_ring->free_tx_ids, M_DEVBUF); tx_ring->free_tx_ids = NULL; free(tx_ring->push_buf_intermediate_buf, M_DEVBUF); tx_ring->push_buf_intermediate_buf = NULL; } /** * ena_setup_all_tx_resources - allocate all queues Tx resources * @adapter: network interface device structure * * Returns 0 on success, otherwise on failure. 
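 * On failure, any Tx rings that were already set up are freed again
 * before returning.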
**/ static int ena_setup_all_tx_resources(struct ena_adapter *adapter) { int i, rc; for (i = 0; i < adapter->num_io_queues; i++) { rc = ena_setup_tx_resources(adapter, i); if (rc != 0) { device_printf(adapter->pdev, "Allocation for Tx Queue %u failed\n", i); goto err_setup_tx; } } return (0); err_setup_tx: /* Rewind the index freeing the rings as we go */ while (i--) ena_free_tx_resources(adapter, i); return (rc); } /** * ena_free_all_tx_resources - Free Tx Resources for All Queues * @adapter: network interface device structure * * Free all transmit software resources **/ static void ena_free_all_tx_resources(struct ena_adapter *adapter) { int i; for (i = 0; i < adapter->num_io_queues; i++) ena_free_tx_resources(adapter, i); } /** * ena_setup_rx_resources - allocate Rx resources (Descriptors) * @adapter: network interface device structure * @qid: queue index * * Returns 0 on success, otherwise on failure. **/ static int ena_setup_rx_resources(struct ena_adapter *adapter, unsigned int qid) { struct ena_que *que = &adapter->que[qid]; struct ena_ring *rx_ring = que->rx_ring; int size, err, i; size = sizeof(struct ena_rx_buffer) * rx_ring->ring_size; #ifdef DEV_NETMAP ena_netmap_reset_rx_ring(adapter, qid); rx_ring->initialized = false; #endif /* DEV_NETMAP */ /* * Alloc extra element so in rx path * we can always prefetch rx_info + 1 */ size += sizeof(struct ena_rx_buffer); rx_ring->rx_buffer_info = malloc(size, M_DEVBUF, M_WAITOK | M_ZERO); size = sizeof(uint16_t) * rx_ring->ring_size; rx_ring->free_rx_ids = malloc(size, M_DEVBUF, M_WAITOK); for (i = 0; i < rx_ring->ring_size; i++) rx_ring->free_rx_ids[i] = i; /* Reset RX statistics. */ ena_reset_counters((counter_u64_t *)&rx_ring->rx_stats, sizeof(rx_ring->rx_stats)); rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; /* ... 
and create the buffer DMA maps */ for (i = 0; i < rx_ring->ring_size; i++) { err = bus_dmamap_create(adapter->rx_buf_tag, 0, &(rx_ring->rx_buffer_info[i].map)); if (err != 0) { - ena_trace(ENA_ALERT, + ena_trace(NULL, ENA_ALERT, "Unable to create Rx DMA map for buffer %d\n", i); goto err_buf_info_unmap; } } /* Create LRO for the ring */ if ((adapter->ifp->if_capenable & IFCAP_LRO) != 0) { int err = tcp_lro_init(&rx_ring->lro); if (err != 0) { device_printf(adapter->pdev, "LRO[%d] Initialization failed!\n", qid); } else { - ena_trace(ENA_INFO, + ena_trace(NULL, ENA_INFO, "RX Soft LRO[%d] Initialized\n", qid); rx_ring->lro.ifp = adapter->ifp; } } return (0); err_buf_info_unmap: while (i--) { bus_dmamap_destroy(adapter->rx_buf_tag, rx_ring->rx_buffer_info[i].map); } free(rx_ring->free_rx_ids, M_DEVBUF); rx_ring->free_rx_ids = NULL; free(rx_ring->rx_buffer_info, M_DEVBUF); rx_ring->rx_buffer_info = NULL; return (ENOMEM); } /** * ena_free_rx_resources - Free Rx Resources * @adapter: network interface device structure * @qid: queue index * * Free all receive software resources **/ static void ena_free_rx_resources(struct ena_adapter *adapter, unsigned int qid) { struct ena_ring *rx_ring = &adapter->rx_ring[qid]; /* Free buffer DMA maps, */ for (int i = 0; i < rx_ring->ring_size; i++) { bus_dmamap_sync(adapter->rx_buf_tag, rx_ring->rx_buffer_info[i].map, BUS_DMASYNC_POSTREAD); m_freem(rx_ring->rx_buffer_info[i].mbuf); rx_ring->rx_buffer_info[i].mbuf = NULL; bus_dmamap_unload(adapter->rx_buf_tag, rx_ring->rx_buffer_info[i].map); bus_dmamap_destroy(adapter->rx_buf_tag, rx_ring->rx_buffer_info[i].map); } /* free LRO resources, */ tcp_lro_free(&rx_ring->lro); /* free allocated memory */ free(rx_ring->rx_buffer_info, M_DEVBUF); rx_ring->rx_buffer_info = NULL; free(rx_ring->free_rx_ids, M_DEVBUF); rx_ring->free_rx_ids = NULL; } /** * ena_setup_all_rx_resources - allocate all queues Rx resources * @adapter: network interface device structure * * Returns 0 on success, otherwise on failure. 
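 * On failure, any Rx rings that were already set up are freed again
 * before returning.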
**/ static int ena_setup_all_rx_resources(struct ena_adapter *adapter) { int i, rc = 0; for (i = 0; i < adapter->num_io_queues; i++) { rc = ena_setup_rx_resources(adapter, i); if (rc != 0) { device_printf(adapter->pdev, "Allocation for Rx Queue %u failed\n", i); goto err_setup_rx; } } return (0); err_setup_rx: /* rewind the index freeing the rings as we go */ while (i--) ena_free_rx_resources(adapter, i); return (rc); } /** * ena_free_all_rx_resources - Free Rx resources for all queues * @adapter: network interface device structure * * Free all receive software resources **/ static void ena_free_all_rx_resources(struct ena_adapter *adapter) { int i; for (i = 0; i < adapter->num_io_queues; i++) ena_free_rx_resources(adapter, i); } static inline int ena_alloc_rx_mbuf(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info) { struct ena_com_buf *ena_buf; bus_dma_segment_t segs[1]; int nsegs, error; int mlen; /* if previous allocated frag is not used */ if (unlikely(rx_info->mbuf != NULL)) return (0); /* Get mbuf using UMA allocator */ rx_info->mbuf = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, rx_ring->rx_mbuf_sz); if (unlikely(rx_info->mbuf == NULL)) { counter_u64_add(rx_ring->rx_stats.mjum_alloc_fail, 1); rx_info->mbuf = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR); if (unlikely(rx_info->mbuf == NULL)) { counter_u64_add(rx_ring->rx_stats.mbuf_alloc_fail, 1); return (ENOMEM); } mlen = MCLBYTES; } else { mlen = rx_ring->rx_mbuf_sz; } /* Set mbuf length*/ rx_info->mbuf->m_pkthdr.len = rx_info->mbuf->m_len = mlen; /* Map packets for DMA */ - ena_trace(ENA_DBG | ENA_RSC | ENA_RXPTH, + ena_trace(NULL, ENA_DBG | ENA_RSC | ENA_RXPTH, "Using tag %p for buffers' DMA mapping, mbuf %p len: %d\n", adapter->rx_buf_tag,rx_info->mbuf, rx_info->mbuf->m_len); error = bus_dmamap_load_mbuf_sg(adapter->rx_buf_tag, rx_info->map, rx_info->mbuf, segs, &nsegs, BUS_DMA_NOWAIT); if (unlikely((error != 0) || (nsegs != 1))) { - ena_trace(ENA_WARNING, "failed to map mbuf, error: %d, " + ena_trace(NULL, ENA_WARNING, "failed to map mbuf, error: %d, " "nsegs: %d\n", error, nsegs); counter_u64_add(rx_ring->rx_stats.dma_mapping_err, 1); goto exit; } bus_dmamap_sync(adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_PREREAD); ena_buf = &rx_info->ena_buf; ena_buf->paddr = segs[0].ds_addr; ena_buf->len = mlen; - ena_trace(ENA_DBG | ENA_RSC | ENA_RXPTH, + ena_trace(NULL, ENA_DBG | ENA_RSC | ENA_RXPTH, "ALLOC RX BUF: mbuf %p, rx_info %p, len %d, paddr %#jx\n", rx_info->mbuf, rx_info,ena_buf->len, (uintmax_t)ena_buf->paddr); return (0); exit: m_freem(rx_info->mbuf); rx_info->mbuf = NULL; return (EFAULT); } static void ena_free_rx_mbuf(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info) { if (rx_info->mbuf == NULL) { - ena_trace(ENA_WARNING, "Trying to free unallocated buffer\n"); + ena_trace(NULL, ENA_WARNING, "Trying to free unallocated buffer\n"); return; } bus_dmamap_sync(adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_POSTREAD); bus_dmamap_unload(adapter->rx_buf_tag, rx_info->map); m_freem(rx_info->mbuf); rx_info->mbuf = NULL; } /** * ena_refill_rx_bufs - Refills ring with descriptors * @rx_ring: the ring which we want to feed with free descriptors * @num: number of descriptors to refill * Refills the ring with newly allocated DMA-mapped mbufs for receiving **/ int ena_refill_rx_bufs(struct ena_ring *rx_ring, uint32_t num) { struct ena_adapter *adapter = rx_ring->adapter; uint16_t next_to_use, req_id; uint32_t i; int rc; - ena_trace(ENA_DBG | ENA_RXPTH | ENA_RSC, "refill qid: %d\n", 
+ ena_trace(NULL, ENA_DBG | ENA_RXPTH | ENA_RSC, "refill qid: %d\n", rx_ring->qid); next_to_use = rx_ring->next_to_use; for (i = 0; i < num; i++) { struct ena_rx_buffer *rx_info; - ena_trace(ENA_DBG | ENA_RXPTH | ENA_RSC, + ena_trace(NULL, ENA_DBG | ENA_RXPTH | ENA_RSC, "RX buffer - next to use: %d\n", next_to_use); req_id = rx_ring->free_rx_ids[next_to_use]; rx_info = &rx_ring->rx_buffer_info[req_id]; #ifdef DEV_NETMAP if (ena_rx_ring_in_netmap(adapter, rx_ring->qid)) rc = ena_netmap_alloc_rx_slot(adapter, rx_ring, rx_info); else #endif /* DEV_NETMAP */ rc = ena_alloc_rx_mbuf(adapter, rx_ring, rx_info); if (unlikely(rc != 0)) { - ena_trace(ENA_WARNING, + ena_trace(NULL, ENA_WARNING, "failed to alloc buffer for rx queue %d\n", rx_ring->qid); break; } rc = ena_com_add_single_rx_desc(rx_ring->ena_com_io_sq, &rx_info->ena_buf, req_id); if (unlikely(rc != 0)) { - ena_trace(ENA_WARNING, + ena_trace(NULL, ENA_WARNING, "failed to add buffer for rx queue %d\n", rx_ring->qid); break; } next_to_use = ENA_RX_RING_IDX_NEXT(next_to_use, rx_ring->ring_size); } if (unlikely(i < num)) { counter_u64_add(rx_ring->rx_stats.refil_partial, 1); - ena_trace(ENA_WARNING, + ena_trace(NULL, ENA_WARNING, "refilled rx qid %d with only %d mbufs (from %d)\n", rx_ring->qid, i, num); } if (likely(i != 0)) ena_com_write_sq_doorbell(rx_ring->ena_com_io_sq); rx_ring->next_to_use = next_to_use; return (i); } int ena_update_buf_ring_size(struct ena_adapter *adapter, uint32_t new_buf_ring_size) { uint32_t old_buf_ring_size; int rc = 0; bool dev_was_up; ENA_LOCK_LOCK(adapter); old_buf_ring_size = adapter->buf_ring_size; adapter->buf_ring_size = new_buf_ring_size; dev_was_up = ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter); ena_down(adapter); /* Reconfigure buf ring for all Tx rings. */ ena_free_all_io_rings_resources(adapter); ena_init_io_rings_advanced(adapter); if (dev_was_up) { /* * If ena_up() fails, it's not because of recent buf_ring size * changes. Because of that, we just want to revert old drbr * value and trigger the reset because something else had to * go wrong. */ rc = ena_up(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to configure device after setting new drbr size: %u. Reverting old value: %u and triggering the reset\n", new_buf_ring_size, old_buf_ring_size); /* Revert old size and trigger the reset */ adapter->buf_ring_size = old_buf_ring_size; ena_free_all_io_rings_resources(adapter); ena_init_io_rings_advanced(adapter); ENA_FLAG_SET_ATOMIC(ENA_FLAG_DEV_UP_BEFORE_RESET, adapter); ena_trigger_reset(adapter, ENA_REGS_RESET_OS_TRIGGER); } } ENA_LOCK_UNLOCK(adapter); return (rc); } int ena_update_queue_size(struct ena_adapter *adapter, uint32_t new_tx_size, uint32_t new_rx_size) { uint32_t old_tx_size, old_rx_size; int rc = 0; bool dev_was_up; ENA_LOCK_LOCK(adapter); old_tx_size = adapter->requested_tx_ring_size; old_rx_size = adapter->requested_rx_ring_size; adapter->requested_tx_ring_size = new_tx_size; adapter->requested_rx_ring_size = new_rx_size; dev_was_up = ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter); ena_down(adapter); /* Configure queues with new size. */ ena_init_io_rings_basic(adapter); if (dev_was_up) { rc = ena_up(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to configure device with the new sizes - Tx: %u Rx: %u. Reverting old values - Tx: %u Rx: %u\n", new_tx_size, new_rx_size, old_tx_size, old_rx_size); /* Revert old size. 
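			 * If ena_up() keeps failing with the reverted
			 * values, the code below falls back to triggering
			 * a device reset.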
*/ adapter->requested_tx_ring_size = old_tx_size; adapter->requested_rx_ring_size = old_rx_size; ena_init_io_rings_basic(adapter); /* And try again. */ rc = ena_up(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to revert old queue sizes. Triggering device reset.\n"); /* * If we've failed again, something had to go * wrong. After reset, the device should try to * go up */ ENA_FLAG_SET_ATOMIC( ENA_FLAG_DEV_UP_BEFORE_RESET, adapter); ena_trigger_reset(adapter, ENA_REGS_RESET_OS_TRIGGER); } } } ENA_LOCK_UNLOCK(adapter); return (rc); } static void ena_update_io_rings(struct ena_adapter *adapter, uint32_t num) { ena_free_all_io_rings_resources(adapter); /* Force indirection table to be reinitialized */ ena_com_rss_destroy(adapter->ena_dev); adapter->num_io_queues = num; ena_init_io_rings(adapter); } /* Caller should sanitize new_num */ int ena_update_io_queue_nb(struct ena_adapter *adapter, uint32_t new_num) { uint32_t old_num; int rc = 0; bool dev_was_up; ENA_LOCK_LOCK(adapter); dev_was_up = ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter); old_num = adapter->num_io_queues; ena_down(adapter); ena_update_io_rings(adapter, new_num); if (dev_was_up) { rc = ena_up(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to configure device with %u IO queues. " "Reverting to previous value: %u\n", new_num, old_num); ena_update_io_rings(adapter, old_num); rc = ena_up(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to revert to previous setup IO " "queues. Triggering device reset.\n"); ENA_FLAG_SET_ATOMIC( ENA_FLAG_DEV_UP_BEFORE_RESET, adapter); ena_trigger_reset(adapter, ENA_REGS_RESET_OS_TRIGGER); } } } ENA_LOCK_UNLOCK(adapter); return (rc); } static void ena_free_rx_bufs(struct ena_adapter *adapter, unsigned int qid) { struct ena_ring *rx_ring = &adapter->rx_ring[qid]; unsigned int i; for (i = 0; i < rx_ring->ring_size; i++) { struct ena_rx_buffer *rx_info = &rx_ring->rx_buffer_info[i]; if (rx_info->mbuf != NULL) ena_free_rx_mbuf(adapter, rx_ring, rx_info); #ifdef DEV_NETMAP if (((if_getflags(adapter->ifp) & IFF_DYING) == 0) && (adapter->ifp->if_capenable & IFCAP_NETMAP)) { if (rx_info->netmap_buf_idx != 0) ena_netmap_free_rx_slot(adapter, rx_ring, rx_info); } #endif /* DEV_NETMAP */ } } /** * ena_refill_all_rx_bufs - allocate all queues Rx buffers * @adapter: network interface device structure * */ static void ena_refill_all_rx_bufs(struct ena_adapter *adapter) { struct ena_ring *rx_ring; int i, rc, bufs_num; for (i = 0; i < adapter->num_io_queues; i++) { rx_ring = &adapter->rx_ring[i]; bufs_num = rx_ring->ring_size - 1; rc = ena_refill_rx_bufs(rx_ring, bufs_num); if (unlikely(rc != bufs_num)) - ena_trace(ENA_WARNING, "refilling Queue %d failed. " + ena_trace(NULL, ENA_WARNING, "refilling Queue %d failed. 
" "Allocated %d buffers from: %d\n", i, rc, bufs_num); #ifdef DEV_NETMAP rx_ring->initialized = true; #endif /* DEV_NETMAP */ } } static void ena_free_all_rx_bufs(struct ena_adapter *adapter) { int i; for (i = 0; i < adapter->num_io_queues; i++) ena_free_rx_bufs(adapter, i); } /** * ena_free_tx_bufs - Free Tx Buffers per Queue * @adapter: network interface device structure * @qid: queue index **/ static void ena_free_tx_bufs(struct ena_adapter *adapter, unsigned int qid) { bool print_once = true; struct ena_ring *tx_ring = &adapter->tx_ring[qid]; ENA_RING_MTX_LOCK(tx_ring); for (int i = 0; i < tx_ring->ring_size; i++) { struct ena_tx_buffer *tx_info = &tx_ring->tx_buffer_info[i]; if (tx_info->mbuf == NULL) continue; if (print_once) { device_printf(adapter->pdev, "free uncompleted tx mbuf qid %d idx 0x%x\n", qid, i); print_once = false; } else { - ena_trace(ENA_DBG, + ena_trace(NULL, ENA_DBG, "free uncompleted tx mbuf qid %d idx 0x%x\n", qid, i); } bus_dmamap_sync(adapter->tx_buf_tag, tx_info->dmamap, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(adapter->tx_buf_tag, tx_info->dmamap); m_free(tx_info->mbuf); tx_info->mbuf = NULL; } ENA_RING_MTX_UNLOCK(tx_ring); } static void ena_free_all_tx_bufs(struct ena_adapter *adapter) { for (int i = 0; i < adapter->num_io_queues; i++) ena_free_tx_bufs(adapter, i); } static void ena_destroy_all_tx_queues(struct ena_adapter *adapter) { uint16_t ena_qid; int i; for (i = 0; i < adapter->num_io_queues; i++) { ena_qid = ENA_IO_TXQ_IDX(i); ena_com_destroy_io_queue(adapter->ena_dev, ena_qid); } } static void ena_destroy_all_rx_queues(struct ena_adapter *adapter) { uint16_t ena_qid; int i; for (i = 0; i < adapter->num_io_queues; i++) { ena_qid = ENA_IO_RXQ_IDX(i); ena_com_destroy_io_queue(adapter->ena_dev, ena_qid); } } static void ena_destroy_all_io_queues(struct ena_adapter *adapter) { struct ena_que *queue; int i; for (i = 0; i < adapter->num_io_queues; i++) { queue = &adapter->que[i]; while (taskqueue_cancel(queue->cleanup_tq, &queue->cleanup_task, NULL)) taskqueue_drain(queue->cleanup_tq, &queue->cleanup_task); taskqueue_free(queue->cleanup_tq); } ena_destroy_all_tx_queues(adapter); ena_destroy_all_rx_queues(adapter); } static int ena_create_io_queues(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev = adapter->ena_dev; struct ena_com_create_io_ctx ctx; struct ena_ring *ring; struct ena_que *queue; uint16_t ena_qid; uint32_t msix_vector; int rc, i; /* Create TX queues */ for (i = 0; i < adapter->num_io_queues; i++) { msix_vector = ENA_IO_IRQ_IDX(i); ena_qid = ENA_IO_TXQ_IDX(i); ctx.mem_queue_type = ena_dev->tx_mem_queue_type; ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX; ctx.queue_size = adapter->requested_tx_ring_size; ctx.msix_vector = msix_vector; ctx.qid = ena_qid; rc = ena_com_create_io_queue(ena_dev, &ctx); if (rc != 0) { device_printf(adapter->pdev, "Failed to create io TX queue #%d rc: %d\n", i, rc); goto err_tx; } ring = &adapter->tx_ring[i]; rc = ena_com_get_io_handlers(ena_dev, ena_qid, &ring->ena_com_io_sq, &ring->ena_com_io_cq); if (rc != 0) { device_printf(adapter->pdev, "Failed to get TX queue handlers. 
TX queue num" " %d rc: %d\n", i, rc); ena_com_destroy_io_queue(ena_dev, ena_qid); goto err_tx; } } /* Create RX queues */ for (i = 0; i < adapter->num_io_queues; i++) { msix_vector = ENA_IO_IRQ_IDX(i); ena_qid = ENA_IO_RXQ_IDX(i); ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX; ctx.queue_size = adapter->requested_rx_ring_size; ctx.msix_vector = msix_vector; ctx.qid = ena_qid; rc = ena_com_create_io_queue(ena_dev, &ctx); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to create io RX queue[%d] rc: %d\n", i, rc); goto err_rx; } ring = &adapter->rx_ring[i]; rc = ena_com_get_io_handlers(ena_dev, ena_qid, &ring->ena_com_io_sq, &ring->ena_com_io_cq); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to get RX queue handlers. RX queue num" " %d rc: %d\n", i, rc); ena_com_destroy_io_queue(ena_dev, ena_qid); goto err_rx; } } for (i = 0; i < adapter->num_io_queues; i++) { queue = &adapter->que[i]; TASK_INIT(&queue->cleanup_task, 0, ena_cleanup, queue); queue->cleanup_tq = taskqueue_create_fast("ena cleanup", M_WAITOK, taskqueue_thread_enqueue, &queue->cleanup_tq); taskqueue_start_threads(&queue->cleanup_tq, 1, PI_NET, "%s queue %d cleanup", device_get_nameunit(adapter->pdev), i); } return (0); err_rx: while (i--) ena_com_destroy_io_queue(ena_dev, ENA_IO_RXQ_IDX(i)); i = adapter->num_io_queues; err_tx: while (i--) ena_com_destroy_io_queue(ena_dev, ENA_IO_TXQ_IDX(i)); return (ENXIO); } /********************************************************************* * * MSIX & Interrupt Service routine * **********************************************************************/ /** * ena_handle_msix - MSIX Interrupt Handler for admin/async queue * @arg: interrupt number **/ static void ena_intr_msix_mgmnt(void *arg) { struct ena_adapter *adapter = (struct ena_adapter *)arg; ena_com_admin_q_comp_intr_handler(adapter->ena_dev); if (likely(ENA_FLAG_ISSET(ENA_FLAG_DEVICE_RUNNING, adapter))) ena_com_aenq_intr_handler(adapter->ena_dev, arg); } /** * ena_handle_msix - MSIX Interrupt Handler for Tx/Rx * @arg: queue **/ static int ena_handle_msix(void *arg) { struct ena_que *queue = arg; struct ena_adapter *adapter = queue->adapter; if_t ifp = adapter->ifp; if (unlikely((if_getdrvflags(ifp) & IFF_DRV_RUNNING) == 0)) return (FILTER_STRAY); taskqueue_enqueue(queue->cleanup_tq, &queue->cleanup_task); return (FILTER_HANDLED); } static int ena_enable_msix(struct ena_adapter *adapter) { device_t dev = adapter->pdev; int msix_vecs, msix_req; int i, rc = 0; if (ENA_FLAG_ISSET(ENA_FLAG_MSIX_ENABLED, adapter)) { device_printf(dev, "Error, MSI-X is already enabled\n"); return (EINVAL); } /* Reserved the max msix vectors we might need */ msix_vecs = ENA_MAX_MSIX_VEC(adapter->max_num_io_queues); adapter->msix_entries = malloc(msix_vecs * sizeof(struct msix_entry), M_DEVBUF, M_WAITOK | M_ZERO); - ena_trace(ENA_DBG, "trying to enable MSI-X, vectors: %d\n", msix_vecs); + ena_trace(NULL, ENA_DBG, "trying to enable MSI-X, vectors: %d\n", msix_vecs); for (i = 0; i < msix_vecs; i++) { adapter->msix_entries[i].entry = i; /* Vectors must start from 1 */ adapter->msix_entries[i].vector = i + 1; } msix_req = msix_vecs; rc = pci_alloc_msix(dev, &msix_vecs); if (unlikely(rc != 0)) { device_printf(dev, "Failed to enable MSIX, vectors %d rc %d\n", msix_vecs, rc); rc = ENOSPC; goto err_msix_free; } if (msix_vecs != msix_req) { if (msix_vecs == ENA_ADMIN_MSIX_VEC) { device_printf(dev, "Not enough number of MSI-x allocated: %d\n", msix_vecs); pci_release_msi(dev); rc = ENOSPC; 
goto err_msix_free; } device_printf(dev, "Enable only %d MSI-x (out of %d), reduce " "the number of queues\n", msix_vecs, msix_req); } adapter->msix_vecs = msix_vecs; ENA_FLAG_SET_ATOMIC(ENA_FLAG_MSIX_ENABLED, adapter); return (0); err_msix_free: free(adapter->msix_entries, M_DEVBUF); adapter->msix_entries = NULL; return (rc); } static void ena_setup_mgmnt_intr(struct ena_adapter *adapter) { snprintf(adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].name, ENA_IRQNAME_SIZE, "ena-mgmnt@pci:%s", device_get_nameunit(adapter->pdev)); /* * Handler is NULL on purpose, it will be set * when mgmnt interrupt is acquired */ adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].handler = NULL; adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].data = adapter; adapter->irq_tbl[ENA_MGMNT_IRQ_IDX].vector = adapter->msix_entries[ENA_MGMNT_IRQ_IDX].vector; } static int ena_setup_io_intr(struct ena_adapter *adapter) { static int last_bind_cpu = -1; int irq_idx; if (adapter->msix_entries == NULL) return (EINVAL); for (int i = 0; i < adapter->num_io_queues; i++) { irq_idx = ENA_IO_IRQ_IDX(i); snprintf(adapter->irq_tbl[irq_idx].name, ENA_IRQNAME_SIZE, "%s-TxRx-%d", device_get_nameunit(adapter->pdev), i); adapter->irq_tbl[irq_idx].handler = ena_handle_msix; adapter->irq_tbl[irq_idx].data = &adapter->que[i]; adapter->irq_tbl[irq_idx].vector = adapter->msix_entries[irq_idx].vector; - ena_trace(ENA_INFO | ENA_IOQ, "ena_setup_io_intr vector: %d\n", + ena_trace(NULL, ENA_INFO | ENA_IOQ, "ena_setup_io_intr vector: %d\n", adapter->msix_entries[irq_idx].vector); /* * We want to bind rings to the corresponding cpu * using something similar to the RSS round-robin technique. */ if (unlikely(last_bind_cpu < 0)) last_bind_cpu = CPU_FIRST(); adapter->que[i].cpu = adapter->irq_tbl[irq_idx].cpu = last_bind_cpu; last_bind_cpu = CPU_NEXT(last_bind_cpu); } return (0); } static int ena_request_mgmnt_irq(struct ena_adapter *adapter) { struct ena_irq *irq; unsigned long flags; int rc, rcc; flags = RF_ACTIVE | RF_SHAREABLE; irq = &adapter->irq_tbl[ENA_MGMNT_IRQ_IDX]; irq->res = bus_alloc_resource_any(adapter->pdev, SYS_RES_IRQ, &irq->vector, flags); if (unlikely(irq->res == NULL)) { device_printf(adapter->pdev, "could not allocate " "irq vector: %d\n", irq->vector); return (ENXIO); } rc = bus_setup_intr(adapter->pdev, irq->res, INTR_TYPE_NET | INTR_MPSAFE, NULL, ena_intr_msix_mgmnt, irq->data, &irq->cookie); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "failed to register " "interrupt handler for irq %ju: %d\n", rman_get_start(irq->res), rc); goto err_res_free; } irq->requested = true; return (rc); err_res_free: - ena_trace(ENA_INFO | ENA_ADMQ, "releasing resource for irq %d\n", + ena_trace(NULL, ENA_INFO | ENA_ADMQ, "releasing resource for irq %d\n", irq->vector); rcc = bus_release_resource(adapter->pdev, SYS_RES_IRQ, irq->vector, irq->res); if (unlikely(rcc != 0)) device_printf(adapter->pdev, "dev has no parent while " "releasing res for irq: %d\n", irq->vector); irq->res = NULL; return (rc); } static int ena_request_io_irq(struct ena_adapter *adapter) { struct ena_irq *irq; unsigned long flags = 0; int rc = 0, i, rcc; if (unlikely(!ENA_FLAG_ISSET(ENA_FLAG_MSIX_ENABLED, adapter))) { device_printf(adapter->pdev, "failed to request I/O IRQ: MSI-X is not enabled\n"); return (EINVAL); } else { flags = RF_ACTIVE | RF_SHAREABLE; } for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) { irq = &adapter->irq_tbl[i]; if (unlikely(irq->requested)) continue; irq->res = bus_alloc_resource_any(adapter->pdev, SYS_RES_IRQ, &irq->vector, flags); if (unlikely(irq->res == NULL)) { 
			rc = ENOMEM;
			device_printf(adapter->pdev, "could not allocate "
			    "irq vector: %d\n", irq->vector);
			goto err;
		}

		rc = bus_setup_intr(adapter->pdev, irq->res,
		    INTR_TYPE_NET | INTR_MPSAFE, irq->handler, NULL,
		    irq->data, &irq->cookie);
		if (unlikely(rc != 0)) {
			device_printf(adapter->pdev, "failed to register "
			    "interrupt handler for irq %ju: %d\n",
			    rman_get_start(irq->res), rc);
			goto err;
		}
		irq->requested = true;

-		ena_trace(ENA_INFO, "queue %d - cpu %d\n",
+		ena_trace(NULL, ENA_INFO, "queue %d - cpu %d\n",
		    i - ENA_IO_IRQ_FIRST_IDX, irq->cpu);
	}

	return (rc);

err:

	for (; i >= ENA_IO_IRQ_FIRST_IDX; i--) {
		irq = &adapter->irq_tbl[i];
		rcc = 0;

		/*
		 * Once we entered err: section and irq->requested is true we
		 * free both intr and resources
		 */
		if (irq->requested)
			rcc = bus_teardown_intr(adapter->pdev, irq->res,
			    irq->cookie);
		if (unlikely(rcc != 0))
			device_printf(adapter->pdev, "could not release"
			    " irq: %d, error: %d\n", irq->vector, rcc);

		/*
		 * If we entered err: section without irq->requested set we
		 * know it was bus_alloc_resource_any() that needs cleanup,
		 * provided res is not NULL. In case res is NULL no work is
		 * needed in this iteration
		 */
		rcc = 0;
		if (irq->res != NULL) {
			rcc = bus_release_resource(adapter->pdev, SYS_RES_IRQ,
			    irq->vector, irq->res);
		}
		if (unlikely(rcc != 0))
			device_printf(adapter->pdev, "dev has no parent while "
			    "releasing res for irq: %d\n", irq->vector);
		irq->requested = false;
		irq->res = NULL;
	}

	return (rc);
}

static void
ena_free_mgmnt_irq(struct ena_adapter *adapter)
{
	struct ena_irq *irq;
	int rc;

	irq = &adapter->irq_tbl[ENA_MGMNT_IRQ_IDX];
	if (irq->requested) {
-		ena_trace(ENA_INFO | ENA_ADMQ, "tear down irq: %d\n",
+		ena_trace(NULL, ENA_INFO | ENA_ADMQ, "tear down irq: %d\n",
		    irq->vector);
		rc = bus_teardown_intr(adapter->pdev, irq->res, irq->cookie);
		if (unlikely(rc != 0))
			device_printf(adapter->pdev, "failed to tear "
			    "down irq: %d\n", irq->vector);
		irq->requested = 0;
	}

	if (irq->res != NULL) {
-		ena_trace(ENA_INFO | ENA_ADMQ, "release resource irq: %d\n",
+		ena_trace(NULL, ENA_INFO | ENA_ADMQ, "release resource irq: %d\n",
		    irq->vector);
		rc = bus_release_resource(adapter->pdev, SYS_RES_IRQ,
		    irq->vector, irq->res);
		irq->res = NULL;
		if (unlikely(rc != 0))
			device_printf(adapter->pdev, "dev has no parent while "
			    "releasing res for irq: %d\n", irq->vector);
	}
}

static void
ena_free_io_irq(struct ena_adapter *adapter)
{
	struct ena_irq *irq;
	int rc;

	for (int i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) {
		irq = &adapter->irq_tbl[i];
		if (irq->requested) {
-			ena_trace(ENA_INFO | ENA_IOQ, "tear down irq: %d\n",
+			ena_trace(NULL, ENA_INFO | ENA_IOQ, "tear down irq: %d\n",
			    irq->vector);
			rc = bus_teardown_intr(adapter->pdev, irq->res,
			    irq->cookie);
			if (unlikely(rc != 0)) {
				device_printf(adapter->pdev, "failed to tear "
				    "down irq: %d\n", irq->vector);
			}
			irq->requested = 0;
		}

		if (irq->res != NULL) {
-			ena_trace(ENA_INFO | ENA_IOQ, "release resource irq: %d\n",
+			ena_trace(NULL, ENA_INFO | ENA_IOQ, "release resource irq: %d\n",
			    irq->vector);
			rc = bus_release_resource(adapter->pdev, SYS_RES_IRQ,
			    irq->vector, irq->res);
			irq->res = NULL;
			if (unlikely(rc != 0)) {
				device_printf(adapter->pdev, "dev has no parent"
				    " while releasing res for irq: %d\n",
				    irq->vector);
			}
		}
	}
}

static void
ena_free_irqs(struct ena_adapter* adapter)
{

	ena_free_io_irq(adapter);
	ena_free_mgmnt_irq(adapter);
	ena_disable_msix(adapter);
}

static void
ena_disable_msix(struct ena_adapter *adapter)
{

	if (ENA_FLAG_ISSET(ENA_FLAG_MSIX_ENABLED, adapter)) {
		ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_MSIX_ENABLED, adapter);
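		/* The flag is only set after a successful pci_alloc_msix(). */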
pci_release_msi(adapter->pdev); } adapter->msix_vecs = 0; if (adapter->msix_entries != NULL) free(adapter->msix_entries, M_DEVBUF); adapter->msix_entries = NULL; } static void ena_unmask_all_io_irqs(struct ena_adapter *adapter) { struct ena_com_io_cq* io_cq; struct ena_eth_io_intr_reg intr_reg; uint16_t ena_qid; int i; /* Unmask interrupts for all queues */ for (i = 0; i < adapter->num_io_queues; i++) { ena_qid = ENA_IO_TXQ_IDX(i); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; ena_com_update_intr_reg(&intr_reg, 0, 0, true); ena_com_unmask_intr(io_cq, &intr_reg); } } /* Configure the Rx forwarding */ static int ena_rss_configure(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev = adapter->ena_dev; int rc; /* In case the RSS table was destroyed */ if (!ena_dev->rss.tbl_log_size) { rc = ena_rss_init_default(adapter); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) { device_printf(adapter->pdev, "WARNING: RSS was not properly re-initialized," " it will affect bandwidth\n"); ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_RSS_ACTIVE, adapter); return (rc); } } /* Set indirect table */ rc = ena_com_indirect_table_set(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) return (rc); /* Configure hash function (if supported) */ rc = ena_com_set_hash_function(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) return (rc); /* Configure hash inputs (if supported) */ rc = ena_com_set_hash_ctrl(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) return (rc); return (0); } static int ena_up_complete(struct ena_adapter *adapter) { int rc; if (likely(ENA_FLAG_ISSET(ENA_FLAG_RSS_ACTIVE, adapter))) { rc = ena_rss_configure(adapter); if (rc != 0) { device_printf(adapter->pdev, "Failed to configure RSS\n"); return (rc); } } rc = ena_change_mtu(adapter->ifp, adapter->ifp->if_mtu); if (unlikely(rc != 0)) return (rc); ena_refill_all_rx_bufs(adapter); ena_reset_counters((counter_u64_t *)&adapter->hw_stats, sizeof(adapter->hw_stats)); return (0); } static void set_io_rings_size(struct ena_adapter *adapter, int new_tx_size, int new_rx_size) { int i; for (i = 0; i < adapter->num_io_queues; i++) { adapter->tx_ring[i].ring_size = new_tx_size; adapter->rx_ring[i].ring_size = new_rx_size; } } static int create_queues_with_size_backoff(struct ena_adapter *adapter) { int rc; uint32_t cur_rx_ring_size, cur_tx_ring_size; uint32_t new_rx_ring_size, new_tx_ring_size; /* * Current queue sizes might be set to smaller than the requested * ones due to past queue allocation failures. */ set_io_rings_size(adapter, adapter->requested_tx_ring_size, adapter->requested_rx_ring_size); while (1) { /* Allocate transmit descriptors */ rc = ena_setup_all_tx_resources(adapter); if (unlikely(rc != 0)) { - ena_trace(ENA_ALERT, "err_setup_tx\n"); + ena_trace(NULL, ENA_ALERT, "err_setup_tx\n"); goto err_setup_tx; } /* Allocate receive descriptors */ rc = ena_setup_all_rx_resources(adapter); if (unlikely(rc != 0)) { - ena_trace(ENA_ALERT, "err_setup_rx\n"); + ena_trace(NULL, ENA_ALERT, "err_setup_rx\n"); goto err_setup_rx; } /* Create IO queues for Rx & Tx */ rc = ena_create_io_queues(adapter); if (unlikely(rc != 0)) { - ena_trace(ENA_ALERT, + ena_trace(NULL, ENA_ALERT, "create IO queues failed\n"); goto err_io_que; } return (0); err_io_que: ena_free_all_rx_resources(adapter); err_setup_rx: ena_free_all_tx_resources(adapter); err_setup_tx: /* * Lower the ring size if ENOMEM. Otherwise, return the * error straightaway. 
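	 * Smaller rings need less DMA memory, so retrying with halved
	 * sizes has a reasonable chance to succeed.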
	 */
		if (unlikely(rc != ENOMEM)) {
-			ena_trace(ENA_ALERT,
+			ena_trace(NULL, ENA_ALERT,
			    "Queue creation failed with error code: %d\n", rc);
			return (rc);
		}

		cur_tx_ring_size = adapter->tx_ring[0].ring_size;
		cur_rx_ring_size = adapter->rx_ring[0].ring_size;

		device_printf(adapter->pdev,
		    "Not enough memory to create queues with sizes TX=%d, RX=%d\n",
		    cur_tx_ring_size, cur_rx_ring_size);

		new_tx_ring_size = cur_tx_ring_size;
		new_rx_ring_size = cur_rx_ring_size;

		/*
		 * Decrease the size of a larger queue, or decrease both if they are
		 * the same size.
		 */
		if (cur_rx_ring_size <= cur_tx_ring_size)
			new_tx_ring_size = cur_tx_ring_size / 2;
		if (cur_rx_ring_size >= cur_tx_ring_size)
			new_rx_ring_size = cur_rx_ring_size / 2;

		if (new_tx_ring_size < ENA_MIN_RING_SIZE ||
		    new_rx_ring_size < ENA_MIN_RING_SIZE) {
			device_printf(adapter->pdev,
			    "Queue creation failed with the smallest possible queue size "
			    "of %d for both queues. Not retrying with smaller queues\n",
			    ENA_MIN_RING_SIZE);
			return (rc);
		}

		set_io_rings_size(adapter, new_tx_ring_size, new_rx_ring_size);
	}
}

int
ena_up(struct ena_adapter *adapter)
{
	int rc = 0;

	if (unlikely(device_is_attached(adapter->pdev) == 0)) {
		device_printf(adapter->pdev, "device is not attached!\n");
		return (ENXIO);
	}

	if (ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter))
		return (0);

	device_printf(adapter->pdev, "device is going UP\n");

	/* setup interrupts for IO queues */
	rc = ena_setup_io_intr(adapter);
	if (unlikely(rc != 0)) {
-		ena_trace(ENA_ALERT, "error setting up IO interrupt\n");
+		ena_trace(NULL, ENA_ALERT, "error setting up IO interrupt\n");
		goto error;
	}
	rc = ena_request_io_irq(adapter);
	if (unlikely(rc != 0)) {
-		ena_trace(ENA_ALERT, "err_req_irq\n");
+		ena_trace(NULL, ENA_ALERT, "err_req_irq\n");
		goto error;
	}

	device_printf(adapter->pdev,
	    "Creating %u IO queues. Rx queue size: %d, Tx queue size: %d, "
	    "LLQ is %s\n",
	    adapter->num_io_queues,
	    adapter->requested_rx_ring_size,
	    adapter->requested_tx_ring_size,
	    (adapter->ena_dev->tx_mem_queue_type ==
	        ENA_ADMIN_PLACEMENT_POLICY_DEV) ? "ENABLED" : "DISABLED");

	rc = create_queues_with_size_backoff(adapter);
	if (unlikely(rc != 0)) {
-		ena_trace(ENA_ALERT,
+		ena_trace(NULL, ENA_ALERT,
		    "error creating queues with size backoff\n");
		goto err_create_queues_with_backoff;
	}

	if (ENA_FLAG_ISSET(ENA_FLAG_LINK_UP, adapter))
		if_link_state_change(adapter->ifp, LINK_STATE_UP);

	rc = ena_up_complete(adapter);
	if (unlikely(rc != 0))
		goto err_up_complete;

	counter_u64_add(adapter->dev_stats.interface_up, 1);

	ena_update_hwassist(adapter);

	if_setdrvflagbits(adapter->ifp, IFF_DRV_RUNNING,
	    IFF_DRV_OACTIVE);

	/*
	 * Activate timer service only if the device is running.
	 * If this flag is not set, it means that the driver is being
	 * reset and timer service will be activated afterwards.
	 */
	if (ENA_FLAG_ISSET(ENA_FLAG_DEVICE_RUNNING, adapter)) {
		callout_reset_sbt(&adapter->timer_service, SBT_1S, SBT_1S,
		    ena_timer_service, (void *)adapter, 0);
	}

	ENA_FLAG_SET_ATOMIC(ENA_FLAG_DEV_UP, adapter);

	ena_unmask_all_io_irqs(adapter);

	return (0);

err_up_complete:
	ena_destroy_all_io_queues(adapter);
	ena_free_all_rx_resources(adapter);
	ena_free_all_tx_resources(adapter);
err_create_queues_with_backoff:
	ena_free_io_irq(adapter);
error:
	return (rc);
}

static uint64_t
ena_get_counter(if_t ifp, ift_counter cnt)
{
	struct ena_adapter *adapter;
	struct ena_hw_stats *stats;

	adapter = if_getsoftc(ifp);
	stats = &adapter->hw_stats;

	switch (cnt) {
	case IFCOUNTER_IPACKETS:
		return (counter_u64_fetch(stats->rx_packets));
	case IFCOUNTER_OPACKETS:
		return (counter_u64_fetch(stats->tx_packets));
	case IFCOUNTER_IBYTES:
		return (counter_u64_fetch(stats->rx_bytes));
	case IFCOUNTER_OBYTES:
		return (counter_u64_fetch(stats->tx_bytes));
	case IFCOUNTER_IQDROPS:
		return (counter_u64_fetch(stats->rx_drops));
	case IFCOUNTER_OQDROPS:
		return (counter_u64_fetch(stats->tx_drops));
	default:
		return (if_get_counter_default(ifp, cnt));
	}
}

static int
ena_media_change(if_t ifp)
{
	/* Media Change is not supported by firmware */
	return (0);
}

static void
ena_media_status(if_t ifp, struct ifmediareq *ifmr)
{
	struct ena_adapter *adapter = if_getsoftc(ifp);
-	ena_trace(ENA_DBG, "enter\n");
+	ena_trace(NULL, ENA_DBG, "enter\n");

	ENA_LOCK_LOCK(adapter);

	ifmr->ifm_status = IFM_AVALID;
	ifmr->ifm_active = IFM_ETHER;

	if (!ENA_FLAG_ISSET(ENA_FLAG_LINK_UP, adapter)) {
		ENA_LOCK_UNLOCK(adapter);
-		ena_trace(ENA_INFO, "Link is down\n");
+		ena_trace(NULL, ENA_INFO, "Link is down\n");
		return;
	}

	ifmr->ifm_status |= IFM_ACTIVE;
	ifmr->ifm_active |= IFM_UNKNOWN | IFM_FDX;

	ENA_LOCK_UNLOCK(adapter);
}

static void
ena_init(void *arg)
{
	struct ena_adapter *adapter = (struct ena_adapter *)arg;

	if (!ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter)) {
		ENA_LOCK_LOCK(adapter);
		ena_up(adapter);
		ENA_LOCK_UNLOCK(adapter);
	}
}

static int
ena_ioctl(if_t ifp, u_long command, caddr_t data)
{
	struct ena_adapter *adapter;
	struct ifreq *ifr;
	int rc;

	adapter = ifp->if_softc;
	ifr = (struct ifreq *)data;

	/*
	 * Acquire the lock to prevent the up and down routines from running
	 * in parallel.
*/ rc = 0; switch (command) { case SIOCSIFMTU: if (ifp->if_mtu == ifr->ifr_mtu) break; ENA_LOCK_LOCK(adapter); ena_down(adapter); ena_change_mtu(ifp, ifr->ifr_mtu); rc = ena_up(adapter); ENA_LOCK_UNLOCK(adapter); break; case SIOCSIFFLAGS: if ((ifp->if_flags & IFF_UP) != 0) { if ((if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0) { if ((ifp->if_flags & (IFF_PROMISC | IFF_ALLMULTI)) != 0) { device_printf(adapter->pdev, "ioctl promisc/allmulti\n"); } } else { ENA_LOCK_LOCK(adapter); rc = ena_up(adapter); ENA_LOCK_UNLOCK(adapter); } } else { if ((if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0) { ENA_LOCK_LOCK(adapter); ena_down(adapter); ENA_LOCK_UNLOCK(adapter); } } break; case SIOCADDMULTI: case SIOCDELMULTI: break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: rc = ifmedia_ioctl(ifp, ifr, &adapter->media, command); break; case SIOCSIFCAP: { int reinit = 0; if (ifr->ifr_reqcap != ifp->if_capenable) { ifp->if_capenable = ifr->ifr_reqcap; reinit = 1; } if ((reinit != 0) && ((if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0)) { ENA_LOCK_LOCK(adapter); ena_down(adapter); rc = ena_up(adapter); ENA_LOCK_UNLOCK(adapter); } } break; default: rc = ether_ioctl(ifp, command, data); break; } return (rc); } static int ena_get_dev_offloads(struct ena_com_dev_get_features_ctx *feat) { int caps = 0; if ((feat->offload.tx & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK)) != 0) caps |= IFCAP_TXCSUM; if ((feat->offload.tx & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK)) != 0) caps |= IFCAP_TXCSUM_IPV6; if ((feat->offload.tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK) != 0) caps |= IFCAP_TSO4; if ((feat->offload.tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK) != 0) caps |= IFCAP_TSO6; if ((feat->offload.rx_supported & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK)) != 0) caps |= IFCAP_RXCSUM; if ((feat->offload.rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK) != 0) caps |= IFCAP_RXCSUM_IPV6; caps |= IFCAP_LRO | IFCAP_JUMBO_MTU; return (caps); } static void ena_update_host_info(struct ena_admin_host_info *host_info, if_t ifp) { host_info->supported_network_features[0] = (uint32_t)if_getcapabilities(ifp); } static void ena_update_hwassist(struct ena_adapter *adapter) { if_t ifp = adapter->ifp; uint32_t feat = adapter->tx_offload_cap; int cap = if_getcapenable(ifp); int flags = 0; if_clearhwassist(ifp); if ((cap & IFCAP_TXCSUM) != 0) { if ((feat & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK) != 0) flags |= CSUM_IP; if ((feat & (ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK | ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK)) != 0) flags |= CSUM_IP_UDP | CSUM_IP_TCP; } if ((cap & IFCAP_TXCSUM_IPV6) != 0) flags |= CSUM_IP6_UDP | CSUM_IP6_TCP; if ((cap & IFCAP_TSO4) != 0) flags |= CSUM_IP_TSO; if ((cap & IFCAP_TSO6) != 0) flags |= CSUM_IP6_TSO; if_sethwassistbits(ifp, flags, 0); } static int ena_setup_ifnet(device_t pdev, struct ena_adapter *adapter, struct ena_com_dev_get_features_ctx *feat) { if_t ifp; int caps = 0; ifp = adapter->ifp = if_gethandle(IFT_ETHER); if (unlikely(ifp == NULL)) { - ena_trace(ENA_ALERT, "can not allocate ifnet structure\n"); + ena_trace(NULL, ENA_ALERT, "can not allocate ifnet structure\n"); return (ENXIO); } if_initname(ifp, device_get_name(pdev), device_get_unit(pdev)); if_setdev(ifp, pdev); 
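	/* Wire up the softc and the driver entry points used by the stack. */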
	if_setsoftc(ifp, adapter);

	if_setflags(ifp, IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST);
	if_setinitfn(ifp, ena_init);
	if_settransmitfn(ifp, ena_mq_start);
	if_setqflushfn(ifp, ena_qflush);
	if_setioctlfn(ifp, ena_ioctl);
	if_setgetcounterfn(ifp, ena_get_counter);

	if_setsendqlen(ifp, adapter->requested_tx_ring_size);
	if_setsendqready(ifp);
	if_setmtu(ifp, ETHERMTU);
	if_setbaudrate(ifp, 0);
	/* Zeroize capabilities... */
	if_setcapabilities(ifp, 0);
	if_setcapenable(ifp, 0);
	/* check hardware support */
	caps = ena_get_dev_offloads(feat);
	/* ... and set them */
	if_setcapabilitiesbit(ifp, caps, 0);

	/* TSO parameters */
	ifp->if_hw_tsomax = ENA_TSO_MAXSIZE -
	    (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN);
	ifp->if_hw_tsomaxsegcount = adapter->max_tx_sgl_size - 1;
	ifp->if_hw_tsomaxsegsize = ENA_TSO_MAXSIZE;

	if_setifheaderlen(ifp, sizeof(struct ether_vlan_header));
	if_setcapenable(ifp, if_getcapabilities(ifp));

	/*
	 * Specify the media types supported by this adapter and register
	 * callbacks to update media and link information
	 */
	ifmedia_init(&adapter->media, IFM_IMASK,
	    ena_media_change, ena_media_status);
	ifmedia_add(&adapter->media, IFM_ETHER | IFM_AUTO, 0, NULL);
	ifmedia_set(&adapter->media, IFM_ETHER | IFM_AUTO);

	ether_ifattach(ifp, adapter->mac_addr);

	return (0);
}

void
ena_down(struct ena_adapter *adapter)
{
	int rc;

	if (!ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter))
		return;

	device_printf(adapter->pdev, "device is going DOWN\n");

	callout_drain(&adapter->timer_service);

	ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_DEV_UP, adapter);
	if_setdrvflagbits(adapter->ifp, IFF_DRV_OACTIVE,
	    IFF_DRV_RUNNING);

	ena_free_io_irq(adapter);

	if (ENA_FLAG_ISSET(ENA_FLAG_TRIGGER_RESET, adapter)) {
		rc = ena_com_dev_reset(adapter->ena_dev,
		    adapter->reset_reason);
		if (unlikely(rc != 0))
			device_printf(adapter->pdev,
			    "Device reset failed\n");
	}

	ena_destroy_all_io_queues(adapter);

	ena_free_all_tx_bufs(adapter);
	ena_free_all_rx_bufs(adapter);
	ena_free_all_tx_resources(adapter);
	ena_free_all_rx_resources(adapter);

	counter_u64_add(adapter->dev_stats.interface_down, 1);
}

static uint32_t
ena_calc_max_io_queue_num(device_t pdev, struct ena_com_dev *ena_dev,
    struct ena_com_dev_get_features_ctx *get_feat_ctx)
{
	uint32_t io_tx_sq_num, io_tx_cq_num, io_rx_num, max_num_io_queues;

	/* Regular queues capabilities */
	if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
		    &get_feat_ctx->max_queue_ext.max_queue_ext;
		io_rx_num = min_t(int, max_queue_ext->max_rx_sq_num,
		    max_queue_ext->max_rx_cq_num);

		io_tx_sq_num = max_queue_ext->max_tx_sq_num;
		io_tx_cq_num = max_queue_ext->max_tx_cq_num;
	} else {
		struct ena_admin_queue_feature_desc *max_queues =
		    &get_feat_ctx->max_queues;
		io_tx_sq_num = max_queues->max_sq_num;
		io_tx_cq_num = max_queues->max_cq_num;
		io_rx_num = min_t(int, io_tx_sq_num, io_tx_cq_num);
	}

	/* In case of LLQ use the llq fields for the tx SQ/CQ */
	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
		io_tx_sq_num = get_feat_ctx->llq.max_llq_num;

	max_num_io_queues = min_t(uint32_t, mp_ncpus, ENA_MAX_NUM_IO_QUEUES);
	max_num_io_queues = min_t(uint32_t, max_num_io_queues, io_rx_num);
	max_num_io_queues = min_t(uint32_t, max_num_io_queues, io_tx_sq_num);
	max_num_io_queues = min_t(uint32_t, max_num_io_queues, io_tx_cq_num);
	/* 1 IRQ for mgmnt and 1 IRQ for each TX/RX pair */
	max_num_io_queues = min_t(uint32_t, max_num_io_queues,
	    pci_msix_count(pdev) - 1);

	return (max_num_io_queues);
}

static int
ena_enable_wc(struct resource *res)
{
#if defined(__i386) || defined(__amd64) ||
defined(__aarch64__) vm_offset_t va; vm_size_t len; va = (vm_offset_t)rman_get_virtual(res); len = rman_get_size(res); #if defined(__i386) || defined(__amd64) int rc; rc = pmap_change_attr(va, len, VM_MEMATTR_WRITE_COMBINING); if (unlikely(rc != 0)) - ena_trace(ENA_ALERT, "pmap_change_attr failed, %d\n", rc); + ena_trace(NULL, ENA_ALERT, "pmap_change_attr failed, %d\n", rc); return (rc); #else /* defined(__aarch64__) */ vm_paddr_t pa; pa = rman_get_start(res); pmap_kenter(va, len, pa, VM_MEMATTR_WRITE_COMBINING); return (0); #endif /* defined(__i386) || defined(__amd64) */ #endif return (EOPNOTSUPP); } static int ena_set_queues_placement_policy(device_t pdev, struct ena_com_dev *ena_dev, struct ena_admin_feature_llq_desc *llq, struct ena_llq_configurations *llq_default_configurations) { struct ena_adapter *adapter = device_get_softc(pdev); int rc, rid; uint32_t llq_feature_mask; llq_feature_mask = 1 << ENA_ADMIN_LLQ; if (!(ena_dev->supported_features & llq_feature_mask)) { device_printf(pdev, "LLQ is not supported. Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return (0); } rc = ena_com_config_dev_mode(ena_dev, llq, llq_default_configurations); if (unlikely(rc != 0)) { device_printf(pdev, "Failed to configure the device mode. " "Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return (0); } /* Nothing to config, exit */ if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST) return (0); /* Try to allocate resources for LLQ bar */ rid = PCIR_BAR(ENA_MEM_BAR); adapter->memory = bus_alloc_resource_any(pdev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (unlikely(adapter->memory == NULL)) { device_printf(pdev, "unable to allocate LLQ bar resource. " "Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return (0); } /* Enable write combining for better LLQ performance */ rc = ena_enable_wc(adapter->memory); if (unlikely(rc != 0)) { device_printf(pdev, "failed to enable write combining.\n"); return (rc); } /* * Save virtual address of the device's memory region * for the ena_com layer. 
*/ ena_dev->mem_bar = rman_get_virtual(adapter->memory); return (0); } static inline void set_default_llq_configurations(struct ena_llq_configurations *llq_config) { llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER; llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B; llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY; llq_config->llq_num_decs_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2; llq_config->llq_ring_entry_size_value = 128; } static int ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx) { struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq; struct ena_com_dev *ena_dev = ctx->ena_dev; uint32_t tx_queue_size = ENA_DEFAULT_RING_SIZE; uint32_t rx_queue_size = ENA_DEFAULT_RING_SIZE; uint32_t max_tx_queue_size; uint32_t max_rx_queue_size; if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) { struct ena_admin_queue_ext_feature_fields *max_queue_ext = &ctx->get_feat_ctx->max_queue_ext.max_queue_ext; max_rx_queue_size = min_t(uint32_t, max_queue_ext->max_rx_cq_depth, max_queue_ext->max_rx_sq_depth); max_tx_queue_size = max_queue_ext->max_tx_cq_depth; if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) max_tx_queue_size = min_t(uint32_t, max_tx_queue_size, llq->max_llq_depth); else max_tx_queue_size = min_t(uint32_t, max_tx_queue_size, max_queue_ext->max_tx_sq_depth); ctx->max_tx_sgl_size = min_t(uint16_t, ENA_PKT_MAX_BUFS, max_queue_ext->max_per_packet_tx_descs); ctx->max_rx_sgl_size = min_t(uint16_t, ENA_PKT_MAX_BUFS, max_queue_ext->max_per_packet_rx_descs); } else { struct ena_admin_queue_feature_desc *max_queues = &ctx->get_feat_ctx->max_queues; max_rx_queue_size = min_t(uint32_t, max_queues->max_cq_depth, max_queues->max_sq_depth); max_tx_queue_size = max_queues->max_cq_depth; if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) max_tx_queue_size = min_t(uint32_t, max_tx_queue_size, llq->max_llq_depth); else max_tx_queue_size = min_t(uint32_t, max_tx_queue_size, max_queues->max_sq_depth); ctx->max_tx_sgl_size = min_t(uint16_t, ENA_PKT_MAX_BUFS, max_queues->max_packet_tx_descs); ctx->max_rx_sgl_size = min_t(uint16_t, ENA_PKT_MAX_BUFS, max_queues->max_packet_rx_descs); } /* round down to the nearest power of 2 */ max_tx_queue_size = 1 << (flsl(max_tx_queue_size) - 1); max_rx_queue_size = 1 << (flsl(max_rx_queue_size) - 1); tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE, max_tx_queue_size); rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE, max_rx_queue_size); tx_queue_size = 1 << (flsl(tx_queue_size) - 1); rx_queue_size = 1 << (flsl(rx_queue_size) - 1); ctx->max_tx_queue_size = max_tx_queue_size; ctx->max_rx_queue_size = max_rx_queue_size; ctx->tx_queue_size = tx_queue_size; ctx->rx_queue_size = rx_queue_size; return (0); } static int ena_rss_init_default(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev = adapter->ena_dev; device_t dev = adapter->pdev; int qid, rc, i; rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); if (unlikely(rc != 0)) { device_printf(dev, "Cannot init indirect table\n"); return (rc); } for (i = 0; i < ENA_RX_RSS_TABLE_SIZE; i++) { qid = i % adapter->num_io_queues; rc = ena_com_indirect_table_fill_entry(ena_dev, i, ENA_IO_RXQ_IDX(qid)); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) { device_printf(dev, "Cannot fill indirect table\n"); goto err_rss_destroy; } } #ifdef RSS uint8_t rss_algo = rss_gethashalgo(); if (rss_algo == RSS_HASH_TOEPLITZ) { uint8_t hash_key[RSS_KEYSIZE]; rss_getkey(hash_key); rc = 
ena_com_fill_hash_function(ena_dev, ENA_ADMIN_TOEPLITZ, hash_key, RSS_KEYSIZE, 0xFFFFFFFF); } else #endif rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_CRC32, NULL, ENA_HASH_KEY_SIZE, 0xFFFFFFFF); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) { device_printf(dev, "Cannot fill hash function\n"); goto err_rss_destroy; } rc = ena_com_set_default_hash_ctrl(ena_dev); if (unlikely((rc != 0) && (rc != EOPNOTSUPP))) { device_printf(dev, "Cannot fill hash control\n"); goto err_rss_destroy; } return (0); err_rss_destroy: ena_com_rss_destroy(ena_dev); return (rc); } static void ena_rss_init_default_deferred(void *arg) { struct ena_adapter *adapter; devclass_t dc; int max; int rc; dc = devclass_find("ena"); if (unlikely(dc == NULL)) { - ena_trace(ENA_ALERT, "No devclass ena\n"); + ena_trace(NULL, ENA_ALERT, "No devclass ena\n"); return; } max = devclass_get_maxunit(dc); while (max-- >= 0) { adapter = devclass_get_softc(dc, max); if (adapter != NULL) { rc = ena_rss_init_default(adapter); ENA_FLAG_SET_ATOMIC(ENA_FLAG_RSS_ACTIVE, adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "WARNING: RSS was not properly initialized," " it will affect bandwidth\n"); ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_RSS_ACTIVE, adapter); } } } } SYSINIT(ena_rss_init, SI_SUB_KICK_SCHEDULER, SI_ORDER_SECOND, ena_rss_init_default_deferred, NULL); static void ena_config_host_info(struct ena_com_dev *ena_dev, device_t dev) { struct ena_admin_host_info *host_info; uintptr_t rid; int rc; /* Allocate only the host info */ rc = ena_com_allocate_host_info(ena_dev); if (unlikely(rc != 0)) { - ena_trace(ENA_ALERT, "Cannot allocate host info\n"); + ena_trace(NULL, ENA_ALERT, "Cannot allocate host info\n"); return; } host_info = ena_dev->host_attr.host_info; if (pci_get_id(dev, PCI_ID_RID, &rid) == 0) host_info->bdf = rid; host_info->os_type = ENA_ADMIN_OS_FREEBSD; host_info->kernel_ver = osreldate; sprintf(host_info->kernel_ver_str, "%d", osreldate); host_info->os_dist = 0; strncpy(host_info->os_dist_str, osrelease, sizeof(host_info->os_dist_str) - 1); host_info->driver_version = (DRV_MODULE_VER_MAJOR) | (DRV_MODULE_VER_MINOR << ENA_ADMIN_HOST_INFO_MINOR_SHIFT) | (DRV_MODULE_VER_SUBMINOR << ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT); host_info->num_cpus = mp_ncpus; + host_info->driver_supported_features = + ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK; rc = ena_com_set_host_attributes(ena_dev); if (unlikely(rc != 0)) { if (rc == EOPNOTSUPP) - ena_trace(ENA_WARNING, "Cannot set host attributes\n"); + ena_trace(NULL, ENA_WARNING, "Cannot set host attributes\n"); else - ena_trace(ENA_ALERT, "Cannot set host attributes\n"); + ena_trace(NULL, ENA_ALERT, "Cannot set host attributes\n"); goto err; } return; err: ena_com_delete_host_info(ena_dev); } static int ena_device_init(struct ena_adapter *adapter, device_t pdev, struct ena_com_dev_get_features_ctx *get_feat_ctx, int *wd_active) { struct ena_com_dev* ena_dev = adapter->ena_dev; bool readless_supported; uint32_t aenq_groups; int dma_width; int rc; rc = ena_com_mmio_reg_read_request_init(ena_dev); if (unlikely(rc != 0)) { device_printf(pdev, "failed to init mmio read less\n"); return (rc); } /* * The PCIe configuration space revision id indicates if mmio reg * read is disabled */ readless_supported = !(pci_get_revid(pdev) & ENA_MMIO_DISABLE_REG_READ); ena_com_set_mmio_read_mode(ena_dev, readless_supported); rc = ena_com_dev_reset(ena_dev, ENA_REGS_RESET_NORMAL); if (unlikely(rc != 0)) { device_printf(pdev, "Can not reset device\n"); goto err_mmio_read_less; } rc = ena_com_validate_version(ena_dev);
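/*
 * For reference: the driver_version word set in ena_config_host_info()
 * above packs major/minor/subminor into one 32-bit value. A minimal
 * sketch, assuming ENA_ADMIN_HOST_INFO_MINOR_SHIFT = 8 and
 * ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT = 16 (shift values assumed here
 * for illustration, not taken from this diff):
 *
 *	uint32_t ver = DRV_MODULE_VER_MAJOR |
 *	    (DRV_MODULE_VER_MINOR << 8) | (DRV_MODULE_VER_SUBMINOR << 16);
 *
 * e.g. driver 2.3.0 encodes as 0x00000302.
 */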
if (unlikely(rc != 0)) { device_printf(pdev, "device version is too low\n"); goto err_mmio_read_less; } dma_width = ena_com_get_dma_width(ena_dev); if (unlikely(dma_width < 0)) { device_printf(pdev, "Invalid dma width value %d", dma_width); rc = dma_width; goto err_mmio_read_less; } adapter->dma_width = dma_width; /* ENA admin level init */ rc = ena_com_admin_init(ena_dev, &aenq_handlers); if (unlikely(rc != 0)) { device_printf(pdev, "Can not initialize ena admin queue with device\n"); goto err_mmio_read_less; } /* * To enable the msix interrupts the driver needs to know the number * of queues. So the driver uses polling mode to retrieve this * information */ ena_com_set_admin_polling_mode(ena_dev, true); ena_config_host_info(ena_dev, pdev); /* Get Device Attributes */ rc = ena_com_get_dev_attr_feat(ena_dev, get_feat_ctx); if (unlikely(rc != 0)) { device_printf(pdev, "Cannot get attribute for ena device rc: %d\n", rc); goto err_admin_init; } aenq_groups = BIT(ENA_ADMIN_LINK_CHANGE) | BIT(ENA_ADMIN_FATAL_ERROR) | BIT(ENA_ADMIN_WARNING) | BIT(ENA_ADMIN_NOTIFICATION) | BIT(ENA_ADMIN_KEEP_ALIVE); aenq_groups &= get_feat_ctx->aenq.supported_groups; rc = ena_com_set_aenq_config(ena_dev, aenq_groups); if (unlikely(rc != 0)) { device_printf(pdev, "Cannot configure aenq groups rc: %d\n", rc); goto err_admin_init; } *wd_active = !!(aenq_groups & BIT(ENA_ADMIN_KEEP_ALIVE)); return (0); err_admin_init: ena_com_delete_host_info(ena_dev); ena_com_admin_destroy(ena_dev); err_mmio_read_less: ena_com_mmio_reg_read_request_destroy(ena_dev); return (rc); } static int ena_enable_msix_and_set_admin_interrupts(struct ena_adapter *adapter) { struct ena_com_dev *ena_dev = adapter->ena_dev; int rc; rc = ena_enable_msix(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Error with MSI-X enablement\n"); return (rc); } ena_setup_mgmnt_intr(adapter); rc = ena_request_mgmnt_irq(adapter); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Cannot setup mgmnt queue intr\n"); goto err_disable_msix; } ena_com_set_admin_polling_mode(ena_dev, false); ena_com_admin_aenq_enable(ena_dev); return (0); err_disable_msix: ena_disable_msix(adapter); return (rc); } /* Function called on ENA_ADMIN_KEEP_ALIVE event */ static void ena_keep_alive_wd(void *adapter_data, struct ena_admin_aenq_entry *aenq_e) { struct ena_adapter *adapter = (struct ena_adapter *)adapter_data; struct ena_admin_aenq_keep_alive_desc *desc; sbintime_t stime; uint64_t rx_drops; uint64_t tx_drops; desc = (struct ena_admin_aenq_keep_alive_desc *)aenq_e; rx_drops = ((uint64_t)desc->rx_drops_high << 32) | desc->rx_drops_low; tx_drops = ((uint64_t)desc->tx_drops_high << 32) | desc->tx_drops_low; counter_u64_zero(adapter->hw_stats.rx_drops); counter_u64_add(adapter->hw_stats.rx_drops, rx_drops); counter_u64_zero(adapter->hw_stats.tx_drops); counter_u64_add(adapter->hw_stats.tx_drops, tx_drops); stime = getsbinuptime(); atomic_store_rel_64(&adapter->keep_alive_timestamp, stime); } /* Check for keep alive expiration */ static void check_for_missing_keep_alive(struct ena_adapter *adapter) { sbintime_t timestamp, time; if (adapter->wd_active == 0) return; if (adapter->keep_alive_timeout == ENA_HW_HINTS_NO_TIMEOUT) return; timestamp = atomic_load_acq_64(&adapter->keep_alive_timestamp); time = getsbinuptime() - timestamp; if (unlikely(time > adapter->keep_alive_timeout)) { device_printf(adapter->pdev, "Keep alive watchdog timeout.\n"); counter_u64_add(adapter->dev_stats.wd_expired, 1); ena_trigger_reset(adapter, ENA_REGS_RESET_KEEP_ALIVE_TO); } } /* Check if 
admin queue is enabled */ static void check_for_admin_com_state(struct ena_adapter *adapter) { if (unlikely(ena_com_get_admin_running_state(adapter->ena_dev) == false)) { device_printf(adapter->pdev, "ENA admin queue is not in running state!\n"); counter_u64_add(adapter->dev_stats.admin_q_pause, 1); ena_trigger_reset(adapter, ENA_REGS_RESET_ADMIN_TO); } } static int check_for_rx_interrupt_queue(struct ena_adapter *adapter, struct ena_ring *rx_ring) { if (likely(rx_ring->first_interrupt)) return (0); if (ena_com_cq_empty(rx_ring->ena_com_io_cq)) return (0); rx_ring->no_interrupt_event_cnt++; if (rx_ring->no_interrupt_event_cnt == ENA_MAX_NO_INTERRUPT_ITERATIONS) { device_printf(adapter->pdev, "Potential MSIX issue on Rx side " "Queue = %d. Reset the device\n", rx_ring->qid); ena_trigger_reset(adapter, ENA_REGS_RESET_MISS_INTERRUPT); return (EIO); } return (0); } static int check_missing_comp_in_tx_queue(struct ena_adapter *adapter, struct ena_ring *tx_ring) { struct bintime curtime, time; struct ena_tx_buffer *tx_buf; sbintime_t time_offset; uint32_t missed_tx = 0; int i, rc = 0; getbinuptime(&curtime); for (i = 0; i < tx_ring->ring_size; i++) { tx_buf = &tx_ring->tx_buffer_info[i]; if (bintime_isset(&tx_buf->timestamp) == 0) continue; time = curtime; bintime_sub(&time, &tx_buf->timestamp); time_offset = bttosbt(time); if (unlikely(!tx_ring->first_interrupt && time_offset > 2 * adapter->missing_tx_timeout)) { /* * If after the graceful period the interrupt is still not * received, we schedule a reset. */ device_printf(adapter->pdev, "Potential MSIX issue on Tx side Queue = %d. " "Reset the device\n", tx_ring->qid); ena_trigger_reset(adapter, ENA_REGS_RESET_MISS_INTERRUPT); return (EIO); } /* Check again if packet is still waiting */ if (unlikely(time_offset > adapter->missing_tx_timeout)) { if (!tx_buf->print_once) - ena_trace(ENA_WARNING, "Found a Tx that wasn't " + ena_trace(NULL, ENA_WARNING, "Found a Tx that wasn't " "completed on time, qid %d, index %d.\n", tx_ring->qid, i); tx_buf->print_once = true; missed_tx++; } } if (unlikely(missed_tx > adapter->missing_tx_threshold)) { device_printf(adapter->pdev, "The number of lost tx completion is above the threshold " "(%d > %d). Reset the device\n", missed_tx, adapter->missing_tx_threshold); ena_trigger_reset(adapter, ENA_REGS_RESET_MISS_TX_CMPL); rc = EIO; } counter_u64_add(tx_ring->tx_stats.missing_tx_comp, missed_tx); return (rc); } /* * Check for TX which were not completed on time. * Timeout is defined by "missing_tx_timeout". * Reset will be performed if number of uncompleted * transactions exceeds "missing_tx_threshold".
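 *
 * As a restatement of the code below (not new logic), the scan is
 * budgeted and rotates across the IO queues:
 *
 *	budget = adapter->missing_tx_max_queues;  (4 by default)
 *	start at adapter->next_monitored_tx_qid, check the TX and RX ring
 *	of each queue, and store i % adapter->num_io_queues so the next
 *	timer tick resumes where this one stopped.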
*/ static void check_for_missing_completions(struct ena_adapter *adapter) { struct ena_ring *tx_ring; struct ena_ring *rx_ring; int i, budget, rc; /* Make sure the driver isn't turning the device off in another process */ rmb(); if (!ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter)) return; if (ENA_FLAG_ISSET(ENA_FLAG_TRIGGER_RESET, adapter)) return; if (adapter->missing_tx_timeout == ENA_HW_HINTS_NO_TIMEOUT) return; budget = adapter->missing_tx_max_queues; for (i = adapter->next_monitored_tx_qid; i < adapter->num_io_queues; i++) { tx_ring = &adapter->tx_ring[i]; rx_ring = &adapter->rx_ring[i]; rc = check_missing_comp_in_tx_queue(adapter, tx_ring); if (unlikely(rc != 0)) return; rc = check_for_rx_interrupt_queue(adapter, rx_ring); if (unlikely(rc != 0)) return; budget--; if (budget == 0) { i++; break; } } adapter->next_monitored_tx_qid = i % adapter->num_io_queues; } /* trigger rx cleanup after 2 consecutive detections */ #define EMPTY_RX_REFILL 2 /* For the rare case where the device runs out of Rx descriptors and the * msix handler failed to refill new Rx descriptors (due to a lack of memory * for example). * This case will lead to a deadlock: * The device won't send interrupts since all the new Rx packets will be dropped. * The msix handler won't allocate new Rx descriptors so the device won't be * able to send new packets. * * When such a situation is detected - execute the rx cleanup task in another thread */ static void check_for_empty_rx_ring(struct ena_adapter *adapter) { struct ena_ring *rx_ring; int i, refill_required; if (!ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter)) return; if (ENA_FLAG_ISSET(ENA_FLAG_TRIGGER_RESET, adapter)) return; for (i = 0; i < adapter->num_io_queues; i++) { rx_ring = &adapter->rx_ring[i]; refill_required = ena_com_free_q_entries(rx_ring->ena_com_io_sq); if (unlikely(refill_required == (rx_ring->ring_size - 1))) { rx_ring->empty_rx_queue++; if (rx_ring->empty_rx_queue >= EMPTY_RX_REFILL) { counter_u64_add(rx_ring->rx_stats.empty_rx_ring, 1); device_printf(adapter->pdev, "trigger refill for ring %d\n", i); taskqueue_enqueue(rx_ring->que->cleanup_tq, &rx_ring->que->cleanup_task); rx_ring->empty_rx_queue = 0; } } else { rx_ring->empty_rx_queue = 0; } } } static void ena_update_hints(struct ena_adapter *adapter, struct ena_admin_ena_hw_hints *hints) { struct ena_com_dev *ena_dev = adapter->ena_dev; if (hints->admin_completion_tx_timeout) ena_dev->admin_queue.completion_timeout = hints->admin_completion_tx_timeout * 1000; if (hints->mmio_read_timeout) /* convert to usec */ ena_dev->mmio_read.reg_read_to = hints->mmio_read_timeout * 1000; if (hints->missed_tx_completion_count_threshold_to_reset) adapter->missing_tx_threshold = hints->missed_tx_completion_count_threshold_to_reset; if (hints->missing_tx_completion_timeout) { if (hints->missing_tx_completion_timeout == ENA_HW_HINTS_NO_TIMEOUT) adapter->missing_tx_timeout = ENA_HW_HINTS_NO_TIMEOUT; else adapter->missing_tx_timeout = SBT_1MS * hints->missing_tx_completion_timeout; } if (hints->driver_watchdog_timeout) { if (hints->driver_watchdog_timeout == ENA_HW_HINTS_NO_TIMEOUT) adapter->keep_alive_timeout = ENA_HW_HINTS_NO_TIMEOUT; else adapter->keep_alive_timeout = SBT_1MS * hints->driver_watchdog_timeout; } } +/** + * ena_copy_eni_metrics - Get and copy ENI metrics from the HW. + * @adapter: ENA device adapter + * + * Returns 0 on success, EOPNOTSUPP if the current HW doesn't support those metrics + * and other error codes on failure. + * + * This function can possibly cause a race with other calls to the admin queue.
+ * Because of that, the caller should either lock this function or make sure + * that there is no race in the current context. + */ +static int +ena_copy_eni_metrics(struct ena_adapter *adapter) +{ + static bool print_once = true; + int rc; + + rc = ena_com_get_eni_stats(adapter->ena_dev, &adapter->eni_metrics); + + if (rc != 0) { + if (rc == ENA_COM_UNSUPPORTED) { + if (print_once) { + device_printf(adapter->pdev, + "Retrieving ENI metrics is not supported.\n"); + print_once = false; + } else { + ena_trace(NULL, ENA_DBG, + "Retrieving ENI metrics is not supported.\n"); + } + } else { + device_printf(adapter->pdev, + "Failed to get ENI metrics: %d\n", rc); + } + } + + return (rc); +} + static void ena_timer_service(void *data) { struct ena_adapter *adapter = (struct ena_adapter *)data; struct ena_admin_host_info *host_info = adapter->ena_dev->host_attr.host_info; check_for_missing_keep_alive(adapter); check_for_admin_com_state(adapter); check_for_missing_completions(adapter); check_for_empty_rx_ring(adapter); + /* + * User-controlled update of the ENI metrics. + * If the delay was set to 0, then the stats shouldn't be updated at + * all. + * Otherwise, wait 'eni_metrics_sample_interval' seconds before + * updating stats. + * As the timer service is executed every second, it's enough to increment + * the appropriate counter each time the timer service is executed. + */ + if ((adapter->eni_metrics_sample_interval != 0) && + (++adapter->eni_metrics_sample_interval_cnt >= + adapter->eni_metrics_sample_interval)) { + /* + * There is no race with other admin queue calls, as: + * - Timer service runs after interface is up, so all + * configuration calls to the admin queue are finished. + * - After interface is up, the driver doesn't use (at least + * for now) other functions writing to the admin queue. + * + * It may change in the future, so in that situation, the lock + * will be needed. ENA_LOCK_*() cannot be used for that purpose, + * as callout ena_timer_service is protected by them. It could + * lead to a deadlock if callout_drain() held the lock + * while ena_copy_eni_metrics() was executing. It's advised to + * use a separate lock in that situation, which will be used only + * for the admin queue. + */ + (void)ena_copy_eni_metrics(adapter); + adapter->eni_metrics_sample_interval_cnt = 0; + } + + if (host_info != NULL) ena_update_host_info(host_info, adapter->ifp); if (unlikely(ENA_FLAG_ISSET(ENA_FLAG_TRIGGER_RESET, adapter))) { device_printf(adapter->pdev, "Trigger reset is on\n"); taskqueue_enqueue(adapter->reset_tq, &adapter->reset_task); return; } /* * Schedule another timeout one second from now.
*/ callout_schedule_sbt(&adapter->timer_service, SBT_1S, SBT_1S, 0); } void ena_destroy_device(struct ena_adapter *adapter, bool graceful) { if_t ifp = adapter->ifp; struct ena_com_dev *ena_dev = adapter->ena_dev; bool dev_up; if (!ENA_FLAG_ISSET(ENA_FLAG_DEVICE_RUNNING, adapter)) return; if_link_state_change(ifp, LINK_STATE_DOWN); callout_drain(&adapter->timer_service); dev_up = ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter); if (dev_up) ENA_FLAG_SET_ATOMIC(ENA_FLAG_DEV_UP_BEFORE_RESET, adapter); if (!graceful) ena_com_set_admin_running_state(ena_dev, false); if (ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, adapter)) ena_down(adapter); /* * Stop the device from sending AENQ events (if the device was up, and * the trigger reset was on, ena_down already performs device reset) */ if (!(ENA_FLAG_ISSET(ENA_FLAG_TRIGGER_RESET, adapter) && dev_up)) ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason); ena_free_mgmnt_irq(adapter); ena_disable_msix(adapter); /* * IO rings resources should be freed because `ena_restore_device()` * calls (not directly) `ena_enable_msix()`, which re-allocates MSIX * vectors. The number of MSIX vectors after destroy-restore may be * different than before. Therefore, IO rings resources should be * established from scratch each time. */ ena_free_all_io_rings_resources(adapter); ena_com_abort_admin_commands(ena_dev); ena_com_wait_for_abort_completion(ena_dev); ena_com_admin_destroy(ena_dev); ena_com_mmio_reg_read_request_destroy(ena_dev); adapter->reset_reason = ENA_REGS_RESET_NORMAL; ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_TRIGGER_RESET, adapter); ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_DEVICE_RUNNING, adapter); } static int ena_device_validate_params(struct ena_adapter *adapter, struct ena_com_dev_get_features_ctx *get_feat_ctx) { if (memcmp(get_feat_ctx->dev_attr.mac_addr, adapter->mac_addr, ETHER_ADDR_LEN) != 0) { device_printf(adapter->pdev, "Error, mac address are different\n"); return (EINVAL); } if (get_feat_ctx->dev_attr.max_mtu < if_getmtu(adapter->ifp)) { device_printf(adapter->pdev, "Error, device max mtu is smaller than ifp MTU\n"); return (EINVAL); } return (0); } int ena_restore_device(struct ena_adapter *adapter) { struct ena_com_dev_get_features_ctx get_feat_ctx; struct ena_com_dev *ena_dev = adapter->ena_dev; if_t ifp = adapter->ifp; device_t dev = adapter->pdev; int wd_active; int rc; ENA_FLAG_SET_ATOMIC(ENA_FLAG_ONGOING_RESET, adapter); rc = ena_device_init(adapter, dev, &get_feat_ctx, &wd_active); if (rc != 0) { device_printf(dev, "Cannot initialize device\n"); goto err; } /* * Only enable WD if it was enabled before reset, so it won't override * the value set by the user via sysctl. */ if (adapter->wd_active != 0) adapter->wd_active = wd_active; rc = ena_device_validate_params(adapter, &get_feat_ctx); if (rc != 0) { device_printf(dev, "Validation of device parameters failed\n"); goto err_device_destroy; } ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_ONGOING_RESET, adapter); /* Make sure we don't have a race with the AENQ link state handler */ if (ENA_FLAG_ISSET(ENA_FLAG_LINK_UP, adapter)) if_link_state_change(ifp, LINK_STATE_UP); rc = ena_enable_msix_and_set_admin_interrupts(adapter); if (rc != 0) { device_printf(dev, "Enable MSI-X failed\n"); goto err_device_destroy; } /* * Effective value of used MSIX vectors should be the same as before * `ena_destroy_device()`, if possible, or closest to it if fewer vectors * are available.
if ((adapter->msix_vecs - ENA_ADMIN_MSIX_VEC) < adapter->num_io_queues) adapter->num_io_queues = adapter->msix_vecs - ENA_ADMIN_MSIX_VEC; /* Re-initialize rings basic information */ ena_init_io_rings(adapter); /* If the interface was up before the reset, bring it up */ if (ENA_FLAG_ISSET(ENA_FLAG_DEV_UP_BEFORE_RESET, adapter)) { rc = ena_up(adapter); if (rc != 0) { device_printf(dev, "Failed to create I/O queues\n"); goto err_disable_msix; } } /* Indicate that device is running again and ready to work */ ENA_FLAG_SET_ATOMIC(ENA_FLAG_DEVICE_RUNNING, adapter); if (ENA_FLAG_ISSET(ENA_FLAG_DEV_UP_BEFORE_RESET, adapter)) { /* * As the AENQ handlers weren't executed during reset because * the flag ENA_FLAG_DEVICE_RUNNING was turned off, the * timestamp must be updated again. That will prevent the next reset * caused by missing keep alive. */ adapter->keep_alive_timestamp = getsbinuptime(); callout_reset_sbt(&adapter->timer_service, SBT_1S, SBT_1S, ena_timer_service, (void *)adapter, 0); } ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_DEV_UP_BEFORE_RESET, adapter); device_printf(dev, "Device reset completed successfully, Driver info: %s\n", ena_version); return (rc); err_disable_msix: ena_free_mgmnt_irq(adapter); ena_disable_msix(adapter); err_device_destroy: ena_com_abort_admin_commands(ena_dev); ena_com_wait_for_abort_completion(ena_dev); ena_com_admin_destroy(ena_dev); ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE); ena_com_mmio_reg_read_request_destroy(ena_dev); err: ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_DEVICE_RUNNING, adapter); ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_ONGOING_RESET, adapter); device_printf(dev, "Reset attempt failed. Can not reset the device\n"); return (rc); } static void ena_reset_task(void *arg, int pending) { struct ena_adapter *adapter = (struct ena_adapter *)arg; if (unlikely(!ENA_FLAG_ISSET(ENA_FLAG_TRIGGER_RESET, adapter))) { device_printf(adapter->pdev, "device reset scheduled but trigger_reset is off\n"); return; } ENA_LOCK_LOCK(adapter); ena_destroy_device(adapter, false); ena_restore_device(adapter); ENA_LOCK_UNLOCK(adapter); } /** * ena_attach - Device Initialization Routine * @pdev: device information struct * * Returns 0 on success, an error code otherwise. * * ena_attach initializes an adapter identified by a device structure. * The OS initialization, configuring of the adapter private structure, * and a hardware reset occur. **/ static int ena_attach(device_t pdev) { struct ena_com_dev_get_features_ctx get_feat_ctx; struct ena_llq_configurations llq_config; struct ena_calc_queue_size_ctx calc_queue_ctx = { 0 }; static int version_printed; struct ena_adapter *adapter; struct ena_com_dev *ena_dev = NULL; uint32_t max_num_io_queues; int rid, rc; adapter = device_get_softc(pdev); adapter->pdev = pdev; ENA_LOCK_INIT(adapter); /* * Set up the timer service - driver is responsible for avoiding * concurrency, as the callout won't be using any locking inside.
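 *
 * As a restatement of the pattern used here and in ena_timer_service()
 * (not new logic): callout_init(&c, true) creates an MPSAFE callout,
 * callout_reset_sbt(&c, SBT_1S, SBT_1S, fn, arg, 0) arms it to fire in
 * one second with one-second precision, and the handler then re-arms
 * itself via callout_schedule_sbt().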
*/ callout_init(&adapter->timer_service, true); adapter->keep_alive_timeout = DEFAULT_KEEP_ALIVE_TO; adapter->missing_tx_timeout = DEFAULT_TX_CMP_TO; adapter->missing_tx_max_queues = DEFAULT_TX_MONITORED_QUEUES; adapter->missing_tx_threshold = DEFAULT_TX_CMP_THRESHOLD; if (version_printed++ == 0) device_printf(pdev, "%s\n", ena_version); /* Allocate memory for ena_dev structure */ ena_dev = malloc(sizeof(struct ena_com_dev), M_DEVBUF, M_WAITOK | M_ZERO); adapter->ena_dev = ena_dev; ena_dev->dmadev = pdev; rid = PCIR_BAR(ENA_REG_BAR); adapter->memory = NULL; adapter->registers = bus_alloc_resource_any(pdev, SYS_RES_MEMORY, &rid, RF_ACTIVE); if (unlikely(adapter->registers == NULL)) { device_printf(pdev, "unable to allocate bus resource: registers!\n"); rc = ENOMEM; goto err_dev_free; } ena_dev->bus = malloc(sizeof(struct ena_bus), M_DEVBUF, M_WAITOK | M_ZERO); /* Store register resources */ ((struct ena_bus*)(ena_dev->bus))->reg_bar_t = rman_get_bustag(adapter->registers); ((struct ena_bus*)(ena_dev->bus))->reg_bar_h = rman_get_bushandle(adapter->registers); if (unlikely(((struct ena_bus*)(ena_dev->bus))->reg_bar_h == 0)) { device_printf(pdev, "failed to pmap registers bar\n"); rc = ENXIO; goto err_bus_free; } ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; /* Initially clear all the flags */ ENA_FLAG_ZERO(adapter); /* Device initialization */ rc = ena_device_init(adapter, pdev, &get_feat_ctx, &adapter->wd_active); if (unlikely(rc != 0)) { device_printf(pdev, "ENA device init failed! (err: %d)\n", rc); rc = ENXIO; goto err_bus_free; } set_default_llq_configurations(&llq_config); rc = ena_set_queues_placement_policy(pdev, ena_dev, &get_feat_ctx.llq, &llq_config); if (unlikely(rc != 0)) { device_printf(pdev, "failed to set placement policy\n"); goto err_com_free; } if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) adapter->disable_meta_caching = !!(get_feat_ctx.llq.accel_mode.u.get.supported_flags & BIT(ENA_ADMIN_DISABLE_META_CACHING)); adapter->keep_alive_timestamp = getsbinuptime(); adapter->tx_offload_cap = get_feat_ctx.offload.tx; memcpy(adapter->mac_addr, get_feat_ctx.dev_attr.mac_addr, ETHER_ADDR_LEN); calc_queue_ctx.pdev = pdev; calc_queue_ctx.ena_dev = ena_dev; calc_queue_ctx.get_feat_ctx = &get_feat_ctx; /* Calculate initial and maximum IO queue number and size */ max_num_io_queues = ena_calc_max_io_queue_num(pdev, ena_dev, &get_feat_ctx); rc = ena_calc_io_queue_size(&calc_queue_ctx); if (unlikely((rc != 0) || (max_num_io_queues <= 0))) { rc = EFAULT; goto err_com_free; } adapter->requested_tx_ring_size = calc_queue_ctx.tx_queue_size; adapter->requested_rx_ring_size = calc_queue_ctx.rx_queue_size; adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size; adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size; adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size; adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size; adapter->max_num_io_queues = max_num_io_queues; adapter->buf_ring_size = ENA_DEFAULT_BUF_RING_SIZE; adapter->max_mtu = get_feat_ctx.dev_attr.max_mtu; adapter->reset_reason = ENA_REGS_RESET_NORMAL; /* set up dma tags for rx and tx buffers */ rc = ena_setup_tx_dma_tag(adapter); if (unlikely(rc != 0)) { device_printf(pdev, "Failed to create TX DMA tag\n"); goto err_com_free; } rc = ena_setup_rx_dma_tag(adapter); if (unlikely(rc != 0)) { device_printf(pdev, "Failed to create RX DMA tag\n"); goto err_tx_tag_free; } /* * The amount of requested MSIX vectors is equal to * adapter::max_num_io_queues (see `ena_enable_msix()`), plus 
a constant * number of admin queue interrupts. The former is initially determined * by HW capabilities (see `ena_calc_max_io_queue_num()`) but may not be * achieved if there are not enough system resources. By default, the * number of effectively used IO queues is the same, but later on it can * be limited by the user using the sysctl interface. */ rc = ena_enable_msix_and_set_admin_interrupts(adapter); if (unlikely(rc != 0)) { device_printf(pdev, "Failed to enable and set the admin interrupts\n"); goto err_io_free; } /* By default, all of the allocated MSIX vectors are actively used */ adapter->num_io_queues = adapter->msix_vecs - ENA_ADMIN_MSIX_VEC; /* initialize rings basic information */ ena_init_io_rings(adapter); /* setup network interface */ rc = ena_setup_ifnet(pdev, adapter, &get_feat_ctx); if (unlikely(rc != 0)) { device_printf(pdev, "Error with network interface setup\n"); goto err_msix_free; } /* Initialize reset task queue */ TASK_INIT(&adapter->reset_task, 0, ena_reset_task, adapter); adapter->reset_tq = taskqueue_create("ena_reset_enqueue", M_WAITOK | M_ZERO, taskqueue_thread_enqueue, &adapter->reset_tq); taskqueue_start_threads(&adapter->reset_tq, 1, PI_NET, "%s rstq", device_get_nameunit(adapter->pdev)); /* Initialize statistics */ ena_alloc_counters((counter_u64_t *)&adapter->dev_stats, sizeof(struct ena_stats_dev)); ena_alloc_counters((counter_u64_t *)&adapter->hw_stats, sizeof(struct ena_hw_stats)); ena_sysctl_add_nodes(adapter); #ifdef DEV_NETMAP rc = ena_netmap_attach(adapter); if (rc != 0) { device_printf(pdev, "netmap attach failed: %d\n", rc); goto err_detach; } #endif /* DEV_NETMAP */ /* Tell the stack that the interface is not active */ if_setdrvflagbits(adapter->ifp, IFF_DRV_OACTIVE, IFF_DRV_RUNNING); ENA_FLAG_SET_ATOMIC(ENA_FLAG_DEVICE_RUNNING, adapter); return (0); #ifdef DEV_NETMAP err_detach: ether_ifdetach(adapter->ifp); #endif /* DEV_NETMAP */ err_msix_free: ena_com_dev_reset(adapter->ena_dev, ENA_REGS_RESET_INIT_ERR); ena_free_mgmnt_irq(adapter); ena_disable_msix(adapter); err_io_free: ena_free_all_io_rings_resources(adapter); ena_free_rx_dma_tag(adapter); err_tx_tag_free: ena_free_tx_dma_tag(adapter); err_com_free: ena_com_admin_destroy(ena_dev); ena_com_delete_host_info(ena_dev); ena_com_mmio_reg_read_request_destroy(ena_dev); err_bus_free: free(ena_dev->bus, M_DEVBUF); ena_free_pci_resources(adapter); err_dev_free: free(ena_dev, M_DEVBUF); return (rc); } /** * ena_detach - Device Removal Routine * @pdev: device information struct * * ena_detach is called by the device subsystem to alert the driver * that it should release a PCI device.
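 *
 * The teardown below mirrors ena_attach() in reverse; as a restatement
 * of the code that follows (not new logic): ether_ifdetach(), stop the
 * timer service, drain and free the reset taskqueue, ena_down() plus
 * ena_destroy_device(), free counters, DMA tags, IRQs and PCI
 * resources, and finally if_free() the ifnet.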
**/ static int ena_detach(device_t pdev) { struct ena_adapter *adapter = device_get_softc(pdev); struct ena_com_dev *ena_dev = adapter->ena_dev; int rc; /* Make sure VLANs are not using the driver */ if (adapter->ifp->if_vlantrunk != NULL) { device_printf(adapter->pdev, "VLAN is in use, detach first\n"); return (EBUSY); } ether_ifdetach(adapter->ifp); /* Stop timer service */ ENA_LOCK_LOCK(adapter); callout_drain(&adapter->timer_service); ENA_LOCK_UNLOCK(adapter); /* Release reset task */ while (taskqueue_cancel(adapter->reset_tq, &adapter->reset_task, NULL)) taskqueue_drain(adapter->reset_tq, &adapter->reset_task); taskqueue_free(adapter->reset_tq); ENA_LOCK_LOCK(adapter); ena_down(adapter); ena_destroy_device(adapter, true); ENA_LOCK_UNLOCK(adapter); #ifdef DEV_NETMAP netmap_detach(adapter->ifp); #endif /* DEV_NETMAP */ ena_free_counters((counter_u64_t *)&adapter->hw_stats, sizeof(struct ena_hw_stats)); ena_free_counters((counter_u64_t *)&adapter->dev_stats, sizeof(struct ena_stats_dev)); rc = ena_free_rx_dma_tag(adapter); if (unlikely(rc != 0)) device_printf(adapter->pdev, "Unmapped RX DMA tag associations\n"); rc = ena_free_tx_dma_tag(adapter); if (unlikely(rc != 0)) device_printf(adapter->pdev, "Unmapped TX DMA tag associations\n"); ena_free_irqs(adapter); ena_free_pci_resources(adapter); if (likely(ENA_FLAG_ISSET(ENA_FLAG_RSS_ACTIVE, adapter))) ena_com_rss_destroy(ena_dev); ena_com_delete_host_info(ena_dev); ENA_LOCK_DESTROY(adapter); if_free(adapter->ifp); if (ena_dev->bus != NULL) free(ena_dev->bus, M_DEVBUF); if (ena_dev != NULL) free(ena_dev, M_DEVBUF); return (bus_generic_detach(pdev)); } /****************************************************************************** ******************************** AENQ Handlers ******************************* *****************************************************************************/ /** * ena_update_on_link_change: * Notify the network interface about the change in link status **/ static void ena_update_on_link_change(void *adapter_data, struct ena_admin_aenq_entry *aenq_e) { struct ena_adapter *adapter = (struct ena_adapter *)adapter_data; struct ena_admin_aenq_link_change_desc *aenq_desc; int status; if_t ifp; aenq_desc = (struct ena_admin_aenq_link_change_desc *)aenq_e; ifp = adapter->ifp; status = aenq_desc->flags & ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK; if (status != 0) { device_printf(adapter->pdev, "link is UP\n"); ENA_FLAG_SET_ATOMIC(ENA_FLAG_LINK_UP, adapter); if (!ENA_FLAG_ISSET(ENA_FLAG_ONGOING_RESET, adapter)) if_link_state_change(ifp, LINK_STATE_UP); } else { device_printf(adapter->pdev, "link is DOWN\n"); if_link_state_change(ifp, LINK_STATE_DOWN); ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_LINK_UP, adapter); } } static void ena_notification(void *adapter_data, struct ena_admin_aenq_entry *aenq_e) { struct ena_adapter *adapter = (struct ena_adapter *)adapter_data; struct ena_admin_ena_hw_hints *hints; - ENA_WARN(aenq_e->aenq_common_desc.group != ENA_ADMIN_NOTIFICATION, + ENA_WARN(NULL, aenq_e->aenq_common_desc.group != ENA_ADMIN_NOTIFICATION, "Invalid group(%x) expected %x\n", aenq_e->aenq_common_desc.group, ENA_ADMIN_NOTIFICATION); - switch (aenq_e->aenq_common_desc.syndrom) { + switch (aenq_e->aenq_common_desc.syndrome) { case ENA_ADMIN_UPDATE_HINTS: hints = (struct ena_admin_ena_hw_hints *)(&aenq_e->inline_data_w4); ena_update_hints(adapter, hints); break; default: device_printf(adapter->pdev, "Invalid aenq notification link state %d\n", - aenq_e->aenq_common_desc.syndrom); + aenq_e->aenq_common_desc.syndrome); } } /** * This
handler will be called for an unknown event group or unimplemented handlers **/ static void unimplemented_aenq_handler(void *adapter_data, struct ena_admin_aenq_entry *aenq_e) { struct ena_adapter *adapter = (struct ena_adapter *)adapter_data; device_printf(adapter->pdev, "Unknown event was received or event with unimplemented handler\n"); } static struct ena_aenq_handlers aenq_handlers = { .handlers = { [ENA_ADMIN_LINK_CHANGE] = ena_update_on_link_change, [ENA_ADMIN_NOTIFICATION] = ena_notification, [ENA_ADMIN_KEEP_ALIVE] = ena_keep_alive_wd, }, .unimplemented_handler = unimplemented_aenq_handler }; /********************************************************************* * FreeBSD Device Interface Entry Points *********************************************************************/ static device_method_t ena_methods[] = { /* Device interface */ DEVMETHOD(device_probe, ena_probe), DEVMETHOD(device_attach, ena_attach), DEVMETHOD(device_detach, ena_detach), DEVMETHOD_END }; static driver_t ena_driver = { "ena", ena_methods, sizeof(struct ena_adapter), }; devclass_t ena_devclass; DRIVER_MODULE(ena, pci, ena_driver, ena_devclass, 0, 0); MODULE_PNP_INFO("U16:vendor;U16:device", pci, ena, ena_vendor_info_array, nitems(ena_vendor_info_array) - 1); MODULE_DEPEND(ena, pci, 1, 1, 1); MODULE_DEPEND(ena, ether, 1, 1, 1); #ifdef DEV_NETMAP MODULE_DEPEND(ena, netmap, 1, 1, 1); #endif /* DEV_NETMAP */ /*********************************************************************/ Index: stable/12/sys/dev/ena/ena.h =================================================================== --- stable/12/sys/dev/ena/ena.h (revision 368011) +++ stable/12/sys/dev/ena/ena.h (revision 368012) @@ -1,532 +1,520 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-2-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ * */ #ifndef ENA_H #define ENA_H #include #include "ena-com/ena_com.h" #include "ena-com/ena_eth_com.h" #define DRV_MODULE_VER_MAJOR 2 -#define DRV_MODULE_VER_MINOR 2 +#define DRV_MODULE_VER_MINOR 3 #define DRV_MODULE_VER_SUBMINOR 0 #define DRV_MODULE_NAME "ena" #ifndef DRV_MODULE_VERSION #define DRV_MODULE_VERSION \ __XSTRING(DRV_MODULE_VER_MAJOR) "." \ __XSTRING(DRV_MODULE_VER_MINOR) "."
\ __XSTRING(DRV_MODULE_VER_SUBMINOR) #endif #define DEVICE_NAME "Elastic Network Adapter (ENA)" #define DEVICE_DESC "ENA adapter" /* Calculate DMA mask - width for ena cannot exceed 48, so it is safe */ #define ENA_DMA_BIT_MASK(x) ((1ULL << (x)) - 1ULL) /* 1 for AENQ + ADMIN */ #define ENA_ADMIN_MSIX_VEC 1 #define ENA_MAX_MSIX_VEC(io_queues) (ENA_ADMIN_MSIX_VEC + (io_queues)) #define ENA_REG_BAR 0 #define ENA_MEM_BAR 2 #define ENA_BUS_DMA_SEGS 32 #define ENA_DEFAULT_BUF_RING_SIZE 4096 #define ENA_DEFAULT_RING_SIZE 1024 #define ENA_MIN_RING_SIZE 256 /* * Refill Rx queue when number of required descriptors is above * QUEUE_SIZE / ENA_RX_REFILL_THRESH_DIVIDER or ENA_RX_REFILL_THRESH_PACKET */ #define ENA_RX_REFILL_THRESH_DIVIDER 8 #define ENA_RX_REFILL_THRESH_PACKET 256 #define ENA_IRQNAME_SIZE 40 #define ENA_PKT_MAX_BUFS 19 #define ENA_RX_RSS_TABLE_LOG_SIZE 7 #define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) #define ENA_HASH_KEY_SIZE 40 #define ENA_MAX_FRAME_LEN 10000 #define ENA_MIN_FRAME_LEN 60 #define ENA_TX_RESUME_THRESH (ENA_PKT_MAX_BUFS + 2) #define DB_THRESHOLD 64 #define TX_COMMIT 32 /* * TX budget for cleaning. It should be half of the RX budget to reduce the * number of TCP retransmissions. */ #define TX_BUDGET 128 /* RX cleanup budget. -1 stands for infinity. */ #define RX_BUDGET 256 /* * How many times we can repeat cleanup in the io irq handling routine if the * RX or TX budget was depleted. */ #define CLEAN_BUDGET 8 #define RX_IRQ_INTERVAL 20 #define TX_IRQ_INTERVAL 50 #define ENA_MIN_MTU 128 #define ENA_TSO_MAXSIZE 65536 #define ENA_MMIO_DISABLE_REG_READ BIT(0) #define ENA_TX_RING_IDX_NEXT(idx, ring_size) (((idx) + 1) & ((ring_size) - 1)) #define ENA_RX_RING_IDX_NEXT(idx, ring_size) (((idx) + 1) & ((ring_size) - 1)) #define ENA_IO_TXQ_IDX(q) (2 * (q)) #define ENA_IO_RXQ_IDX(q) (2 * (q) + 1) #define ENA_MGMNT_IRQ_IDX 0 #define ENA_IO_IRQ_FIRST_IDX 1 #define ENA_IO_IRQ_IDX(q) (ENA_IO_IRQ_FIRST_IDX + (q)) #define ENA_MAX_NO_INTERRUPT_ITERATIONS 3 /* * ENA device should send keep alive msg every 1 sec. * We wait for 6 sec just to be on the safe side. */ #define DEFAULT_KEEP_ALIVE_TO (SBT_1S * 6) /* Time in sbintime units before concluding the transmitter is hung.
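 * (sbintime_t is FreeBSD's 32.32 fixed-point count of seconds, so SBT_1S
 * is one second and SBT_1S * 5 below is a five-second timeout; the
 * hardware hints applied in ena_update_hints() scale millisecond values
 * the same way with SBT_1MS.)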
*/ #define DEFAULT_TX_CMP_TO (SBT_1S * 5) /* Number of queues to check for missing completions per timer tick */ #define DEFAULT_TX_MONITORED_QUEUES (4) /* Max number of timed out packets before device reset */ #define DEFAULT_TX_CMP_THRESHOLD (128) /* * Supported PCI vendor and device IDs */ #define PCI_VENDOR_ID_AMAZON 0x1d0f -#define PCI_DEV_ID_ENA_PF 0x0ec2 -#define PCI_DEV_ID_ENA_LLQ_PF 0x1ec2 -#define PCI_DEV_ID_ENA_VF 0xec20 -#define PCI_DEV_ID_ENA_LLQ_VF 0xec21 +#define PCI_DEV_ID_ENA_PF 0x0ec2 +#define PCI_DEV_ID_ENA_PF_RSERV0 0x1ec2 +#define PCI_DEV_ID_ENA_VF 0xec20 +#define PCI_DEV_ID_ENA_VF_RSERV0 0xec21 /* * Flags indicating current ENA driver state */ enum ena_flags_t { ENA_FLAG_DEVICE_RUNNING, ENA_FLAG_DEV_UP, ENA_FLAG_LINK_UP, ENA_FLAG_MSIX_ENABLED, ENA_FLAG_TRIGGER_RESET, ENA_FLAG_ONGOING_RESET, ENA_FLAG_DEV_UP_BEFORE_RESET, ENA_FLAG_RSS_ACTIVE, ENA_FLAGS_NUMBER = ENA_FLAG_RSS_ACTIVE }; BITSET_DEFINE(_ena_state, ENA_FLAGS_NUMBER); typedef struct _ena_state ena_state_t; #define ENA_FLAG_ZERO(adapter) \ BIT_ZERO(ENA_FLAGS_NUMBER, &(adapter)->flags) #define ENA_FLAG_ISSET(bit, adapter) \ BIT_ISSET(ENA_FLAGS_NUMBER, (bit), &(adapter)->flags) #define ENA_FLAG_SET_ATOMIC(bit, adapter) \ BIT_SET_ATOMIC(ENA_FLAGS_NUMBER, (bit), &(adapter)->flags) #define ENA_FLAG_CLEAR_ATOMIC(bit, adapter) \ BIT_CLR_ATOMIC(ENA_FLAGS_NUMBER, (bit), &(adapter)->flags) struct msix_entry { int entry; int vector; }; typedef struct _ena_vendor_info_t { uint16_t vendor_id; uint16_t device_id; unsigned int index; } ena_vendor_info_t; struct ena_irq { /* Interrupt resources */ struct resource *res; driver_filter_t *handler; void *data; void *cookie; unsigned int vector; bool requested; int cpu; char name[ENA_IRQNAME_SIZE]; }; struct ena_que { struct ena_adapter *adapter; struct ena_ring *tx_ring; struct ena_ring *rx_ring; struct task cleanup_task; struct taskqueue *cleanup_tq; uint32_t id; int cpu; }; struct ena_calc_queue_size_ctx { struct ena_com_dev_get_features_ctx *get_feat_ctx; struct ena_com_dev *ena_dev; device_t pdev; uint32_t tx_queue_size; uint32_t rx_queue_size; uint32_t max_tx_queue_size; uint32_t max_rx_queue_size; uint16_t max_tx_sgl_size; uint16_t max_rx_sgl_size; }; #ifdef DEV_NETMAP struct ena_netmap_tx_info { uint32_t socket_buf_idx[ENA_PKT_MAX_BUFS]; bus_dmamap_t map_seg[ENA_PKT_MAX_BUFS]; unsigned int sockets_used; }; #endif struct ena_tx_buffer { struct mbuf *mbuf; /* # of ena desc for this specific mbuf * (includes data desc and metadata desc) */ unsigned int tx_descs; /* # of buffers used by this mbuf */ unsigned int num_of_bufs; bus_dmamap_t dmamap; /* Used to detect missing tx packets */ struct bintime timestamp; bool print_once; #ifdef DEV_NETMAP struct ena_netmap_tx_info nm_info; #endif /* DEV_NETMAP */ struct ena_com_buf bufs[ENA_PKT_MAX_BUFS]; } __aligned(CACHE_LINE_SIZE); struct ena_rx_buffer { struct mbuf *mbuf; bus_dmamap_t map; struct ena_com_buf ena_buf; #ifdef DEV_NETMAP uint32_t netmap_buf_idx; #endif /* DEV_NETMAP */ } __aligned(CACHE_LINE_SIZE); struct ena_stats_tx { counter_u64_t cnt; counter_u64_t bytes; counter_u64_t prepare_ctx_err; counter_u64_t dma_mapping_err; counter_u64_t doorbells; counter_u64_t missing_tx_comp; counter_u64_t bad_req_id; counter_u64_t collapse; counter_u64_t collapse_err; counter_u64_t queue_wakeup; counter_u64_t queue_stop; counter_u64_t llq_buffer_copy; }; struct ena_stats_rx { counter_u64_t cnt; counter_u64_t bytes; counter_u64_t refil_partial; counter_u64_t bad_csum; counter_u64_t mjum_alloc_fail; counter_u64_t mbuf_alloc_fail; counter_u64_t
dma_mapping_err; counter_u64_t bad_desc_num; counter_u64_t bad_req_id; counter_u64_t empty_rx_ring; }; struct ena_ring { /* Holds the empty requests for TX/RX out of order completions */ union { uint16_t *free_tx_ids; uint16_t *free_rx_ids; }; struct ena_com_dev *ena_dev; struct ena_adapter *adapter; struct ena_com_io_cq *ena_com_io_cq; struct ena_com_io_sq *ena_com_io_sq; uint16_t qid; /* Determines if device will use LLQ or normal mode for TX */ enum ena_admin_placement_policy_type tx_mem_queue_type; union { /* The maximum length the driver can push to the device (For LLQ) */ uint8_t tx_max_header_size; /* The maximum (and default) mbuf size for the Rx descriptor. */ uint16_t rx_mbuf_sz; }; bool first_interrupt; uint16_t no_interrupt_event_cnt; struct ena_com_rx_buf_info ena_bufs[ENA_PKT_MAX_BUFS]; struct ena_que *que; struct lro_ctrl lro; uint16_t next_to_use; uint16_t next_to_clean; union { struct ena_tx_buffer *tx_buffer_info; /* context of tx packet */ struct ena_rx_buffer *rx_buffer_info; /* context of rx packet */ }; int ring_size; /* number of tx/rx_buffer_info's entries */ struct buf_ring *br; /* only for TX */ uint32_t buf_ring_size; struct mtx ring_mtx; char mtx_name[16]; struct { struct task enqueue_task; struct taskqueue *enqueue_tq; }; union { struct ena_stats_tx tx_stats; struct ena_stats_rx rx_stats; }; union { int empty_rx_queue; /* For Tx ring to indicate if it's running or not */ bool running; }; /* How many packets are sent in one Tx loop, used for doorbells */ uint32_t acum_pkts; /* Used for LLQ */ uint8_t *push_buf_intermediate_buf; #ifdef DEV_NETMAP bool initialized; #endif /* DEV_NETMAP */ } __aligned(CACHE_LINE_SIZE); struct ena_stats_dev { counter_u64_t wd_expired; counter_u64_t interface_up; counter_u64_t interface_down; counter_u64_t admin_q_pause; }; struct ena_hw_stats { counter_u64_t rx_packets; counter_u64_t tx_packets; counter_u64_t rx_bytes; counter_u64_t tx_bytes; counter_u64_t rx_drops; counter_u64_t tx_drops; }; /* Board specific private data structure */ struct ena_adapter { struct ena_com_dev *ena_dev; /* OS defined structs */ if_t ifp; device_t pdev; struct ifmedia media; /* OS resources */ struct resource *memory; struct resource *registers; struct sx global_lock; /* MSI-X */ struct msix_entry *msix_entries; int msix_vecs; /* DMA tags used throughout the driver adapter for Tx and Rx */ bus_dma_tag_t tx_buf_tag; bus_dma_tag_t rx_buf_tag; int dma_width; uint32_t max_mtu; uint32_t num_io_queues; uint32_t max_num_io_queues; uint32_t requested_tx_ring_size; uint32_t requested_rx_ring_size; uint32_t max_tx_ring_size; uint32_t max_rx_ring_size; uint16_t max_tx_sgl_size; uint16_t max_rx_sgl_size; uint32_t tx_offload_cap; uint32_t buf_ring_size; /* RSS */ uint8_t rss_ind_tbl[ENA_RX_RSS_TABLE_SIZE]; uint8_t mac_addr[ETHER_ADDR_LEN]; /* mdio and phy */ ena_state_t flags; /* Queue will represent one TX and one RX ring */ struct ena_que que[ENA_MAX_NUM_IO_QUEUES] __aligned(CACHE_LINE_SIZE); /* TX */ struct ena_ring tx_ring[ENA_MAX_NUM_IO_QUEUES] __aligned(CACHE_LINE_SIZE); /* RX */ struct ena_ring rx_ring[ENA_MAX_NUM_IO_QUEUES] __aligned(CACHE_LINE_SIZE); struct ena_irq irq_tbl[ENA_MAX_MSIX_VEC(ENA_MAX_NUM_IO_QUEUES)]; /* Timer service */ struct callout timer_service; sbintime_t keep_alive_timestamp; uint32_t next_monitored_tx_qid; struct task reset_task; struct taskqueue *reset_tq; int wd_active; sbintime_t keep_alive_timeout; sbintime_t missing_tx_timeout; uint32_t missing_tx_max_queues; uint32_t missing_tx_threshold; bool disable_meta_caching; + uint16_t
eni_metrics_sample_interval; + uint16_t eni_metrics_sample_interval_cnt; + /* Statistics */ struct ena_stats_dev dev_stats; struct ena_hw_stats hw_stats; + struct ena_admin_eni_stats eni_metrics; enum ena_regs_reset_reason_types reset_reason; }; #define ENA_RING_MTX_LOCK(_ring) mtx_lock(&(_ring)->ring_mtx) #define ENA_RING_MTX_TRYLOCK(_ring) mtx_trylock(&(_ring)->ring_mtx) #define ENA_RING_MTX_UNLOCK(_ring) mtx_unlock(&(_ring)->ring_mtx) #define ENA_LOCK_INIT(adapter) \ sx_init(&(adapter)->global_lock, "ENA global lock") #define ENA_LOCK_DESTROY(adapter) sx_destroy(&(adapter)->global_lock) #define ENA_LOCK_LOCK(adapter) sx_xlock(&(adapter)->global_lock) #define ENA_LOCK_UNLOCK(adapter) sx_unlock(&(adapter)->global_lock) #define clamp_t(type, _x, min, max) min_t(type, max_t(type, _x, min), max) #define clamp_val(val, lo, hi) clamp_t(__typeof(val), val, lo, hi) static inline int ena_mbuf_count(struct mbuf *mbuf) { int count = 1; while ((mbuf = mbuf->m_next) != NULL) ++count; return count; } int ena_up(struct ena_adapter *adapter); void ena_down(struct ena_adapter *adapter); int ena_restore_device(struct ena_adapter *adapter); void ena_destroy_device(struct ena_adapter *adapter, bool graceful); int ena_refill_rx_bufs(struct ena_ring *rx_ring, uint32_t num); int ena_update_buf_ring_size(struct ena_adapter *adapter, uint32_t new_buf_ring_size); int ena_update_queue_size(struct ena_adapter *adapter, uint32_t new_tx_size, uint32_t new_rx_size); int ena_update_io_queue_nb(struct ena_adapter *adapter, uint32_t new_num); static inline void ena_trigger_reset(struct ena_adapter *adapter, enum ena_regs_reset_reason_types reset_reason) { if (likely(!ENA_FLAG_ISSET(ENA_FLAG_TRIGGER_RESET, adapter))) { adapter->reset_reason = reset_reason; ENA_FLAG_SET_ATOMIC(ENA_FLAG_TRIGGER_RESET, adapter); } -} - -static inline int -validate_rx_req_id(struct ena_ring *rx_ring, uint16_t req_id) -{ - if (likely(req_id < rx_ring->ring_size)) - return (0); - - device_printf(rx_ring->adapter->pdev, "Invalid rx req_id: %hu\n", - req_id); - counter_u64_add(rx_ring->rx_stats.bad_req_id, 1); - - /* Trigger device reset */ - ena_trigger_reset(rx_ring->adapter, ENA_REGS_RESET_INV_RX_REQ_ID); - - return (EFAULT); } #endif /* !(ENA_H) */ Index: stable/12/sys/dev/ena/ena_datapath.c =================================================================== --- stable/12/sys/dev/ena/ena_datapath.c (revision 368011) +++ stable/12/sys/dev/ena/ena_datapath.c (revision 368012) @@ -1,1125 +1,1119 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-2-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include "opt_rss.h" #include "ena.h" #include "ena_datapath.h" #ifdef DEV_NETMAP #include "ena_netmap.h" #endif /* DEV_NETMAP */ /********************************************************************* * Static functions prototypes *********************************************************************/ static int ena_tx_cleanup(struct ena_ring *); static int ena_rx_cleanup(struct ena_ring *); static inline int validate_tx_req_id(struct ena_ring *, uint16_t); static void ena_rx_hash_mbuf(struct ena_ring *, struct ena_com_rx_ctx *, struct mbuf *); static struct mbuf* ena_rx_mbuf(struct ena_ring *, struct ena_com_rx_buf_info *, struct ena_com_rx_ctx *, uint16_t *); static inline void ena_rx_checksum(struct ena_ring *, struct ena_com_rx_ctx *, struct mbuf *); static void ena_tx_csum(struct ena_com_tx_ctx *, struct mbuf *, bool); static int ena_check_and_collapse_mbuf(struct ena_ring *tx_ring, struct mbuf **mbuf); static int ena_xmit_mbuf(struct ena_ring *, struct mbuf **); static void ena_start_xmit(struct ena_ring *); /********************************************************************* * Global functions *********************************************************************/ void ena_cleanup(void *arg, int pending) { struct ena_que *que = arg; struct ena_adapter *adapter = que->adapter; if_t ifp = adapter->ifp; struct ena_ring *tx_ring; struct ena_ring *rx_ring; struct ena_com_io_cq* io_cq; struct ena_eth_io_intr_reg intr_reg; int qid, ena_qid; int txc, rxc, i; if (unlikely((if_getdrvflags(ifp) & IFF_DRV_RUNNING) == 0)) return; - ena_trace(ENA_DBG, "MSI-X TX/RX routine\n"); + ena_trace(NULL, ENA_DBG, "MSI-X TX/RX routine\n"); tx_ring = que->tx_ring; rx_ring = que->rx_ring; qid = que->id; ena_qid = ENA_IO_TXQ_IDX(qid); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; tx_ring->first_interrupt = true; rx_ring->first_interrupt = true; for (i = 0; i < CLEAN_BUDGET; ++i) { rxc = ena_rx_cleanup(rx_ring); txc = ena_tx_cleanup(tx_ring); if (unlikely((if_getdrvflags(ifp) & IFF_DRV_RUNNING) == 0)) return; if ((txc != TX_BUDGET) && (rxc != RX_BUDGET)) break; } /* Signal that work is done and unmask interrupt */ ena_com_update_intr_reg(&intr_reg, RX_IRQ_INTERVAL, TX_IRQ_INTERVAL, true); ena_com_unmask_intr(io_cq, &intr_reg); } void ena_deferred_mq_start(void *arg, int pending) { struct ena_ring *tx_ring = (struct ena_ring *)arg; struct ifnet *ifp = tx_ring->adapter->ifp; while (!drbr_empty(ifp, tx_ring->br) && tx_ring->running && (if_getdrvflags(ifp) & IFF_DRV_RUNNING) != 0) { ENA_RING_MTX_LOCK(tx_ring); ena_start_xmit(tx_ring); ENA_RING_MTX_UNLOCK(tx_ring); } } int ena_mq_start(if_t ifp, struct mbuf *m) { struct ena_adapter *adapter = ifp->if_softc; struct ena_ring *tx_ring; int ret, is_drbr_empty; uint32_t i; if (unlikely((if_getdrvflags(adapter->ifp) & IFF_DRV_RUNNING) == 0)) return (ENODEV); /* Which queue to use */ /* * If everything is set up correctly, it should be the * same bucket as the one for the CPU we're currently on.
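 * In short, as a restatement of the selection below (not new logic):
 *
 *	i = (M_HASHTYPE_GET(m) != M_HASHTYPE_NONE) ?
 *	    m->m_pkthdr.flowid % adapter->num_io_queues :
 *	    curcpu % adapter->num_io_queues;
 *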
* It should improve performance. */ if (M_HASHTYPE_GET(m) != M_HASHTYPE_NONE) { i = m->m_pkthdr.flowid % adapter->num_io_queues; } else { i = curcpu % adapter->num_io_queues; } tx_ring = &adapter->tx_ring[i]; /* Check if drbr is empty before putting packet */ is_drbr_empty = drbr_empty(ifp, tx_ring->br); ret = drbr_enqueue(ifp, tx_ring->br, m); if (unlikely(ret != 0)) { taskqueue_enqueue(tx_ring->enqueue_tq, &tx_ring->enqueue_task); return (ret); } if (is_drbr_empty && (ENA_RING_MTX_TRYLOCK(tx_ring) != 0)) { ena_start_xmit(tx_ring); ENA_RING_MTX_UNLOCK(tx_ring); } else { taskqueue_enqueue(tx_ring->enqueue_tq, &tx_ring->enqueue_task); } return (0); } void ena_qflush(if_t ifp) { struct ena_adapter *adapter = ifp->if_softc; struct ena_ring *tx_ring = adapter->tx_ring; int i; for (i = 0; i < adapter->num_io_queues; ++i, ++tx_ring) if (!drbr_empty(ifp, tx_ring->br)) { ENA_RING_MTX_LOCK(tx_ring); drbr_flush(ifp, tx_ring->br); ENA_RING_MTX_UNLOCK(tx_ring); } if_qflush(ifp); } /********************************************************************* * Static functions *********************************************************************/ static inline int validate_tx_req_id(struct ena_ring *tx_ring, uint16_t req_id) { struct ena_adapter *adapter = tx_ring->adapter; struct ena_tx_buffer *tx_info = NULL; if (likely(req_id < tx_ring->ring_size)) { tx_info = &tx_ring->tx_buffer_info[req_id]; if (tx_info->mbuf != NULL) return (0); device_printf(adapter->pdev, "tx_info doesn't have valid mbuf\n"); } device_printf(adapter->pdev, "Invalid req_id: %hu\n", req_id); counter_u64_add(tx_ring->tx_stats.bad_req_id, 1); /* Trigger device reset */ ena_trigger_reset(adapter, ENA_REGS_RESET_INV_TX_REQ_ID); return (EFAULT); } /** * ena_tx_cleanup - clear sent packets and corresponding descriptors * @tx_ring: ring for which we want to clean packets * * Once packets are sent, we ask the device in a loop for no longer used * descriptors. We find the related mbuf chain in a map (index in an array) * and free it, then update ring state. * This is performed in an "endless" loop, updating ring pointers every * TX_COMMIT. The first check of free descriptors is performed before the actual * loop, then repeated at the loop end.
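 *
 * As a restatement of the loop below (not new logic):
 *
 *	while (budget-- && ena_com_tx_comp_req_id_get(io_cq, &req_id) == 0) {
 *		free the mbuf recorded for req_id and recycle the id;
 *		if (--commit == 0) {
 *			commit = TX_COMMIT;
 *			tx_ring->next_to_clean = next_to_clean;
 *			ena_com_comp_ack(io_sq, total_done);
 *		}
 *	}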
**/ static int ena_tx_cleanup(struct ena_ring *tx_ring) { struct ena_adapter *adapter; struct ena_com_io_cq* io_cq; uint16_t next_to_clean; uint16_t req_id; uint16_t ena_qid; unsigned int total_done = 0; int rc; int commit = TX_COMMIT; int budget = TX_BUDGET; int work_done; bool above_thresh; adapter = tx_ring->que->adapter; ena_qid = ENA_IO_TXQ_IDX(tx_ring->que->id); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; next_to_clean = tx_ring->next_to_clean; #ifdef DEV_NETMAP if (netmap_tx_irq(adapter->ifp, tx_ring->qid) != NM_IRQ_PASS) return (0); #endif /* DEV_NETMAP */ do { struct ena_tx_buffer *tx_info; struct mbuf *mbuf; rc = ena_com_tx_comp_req_id_get(io_cq, &req_id); if (unlikely(rc != 0)) break; rc = validate_tx_req_id(tx_ring, req_id); if (unlikely(rc != 0)) break; tx_info = &tx_ring->tx_buffer_info[req_id]; mbuf = tx_info->mbuf; tx_info->mbuf = NULL; bintime_clear(&tx_info->timestamp); bus_dmamap_sync(adapter->tx_buf_tag, tx_info->dmamap, BUS_DMASYNC_POSTWRITE); bus_dmamap_unload(adapter->tx_buf_tag, tx_info->dmamap); - ena_trace(ENA_DBG | ENA_TXPTH, "tx: q %d mbuf %p completed\n", + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "tx: q %d mbuf %p completed\n", tx_ring->qid, mbuf); m_freem(mbuf); total_done += tx_info->tx_descs; tx_ring->free_tx_ids[next_to_clean] = req_id; next_to_clean = ENA_TX_RING_IDX_NEXT(next_to_clean, tx_ring->ring_size); if (unlikely(--commit == 0)) { commit = TX_COMMIT; /* update ring state every TX_COMMIT descriptor */ tx_ring->next_to_clean = next_to_clean; ena_com_comp_ack( &adapter->ena_dev->io_sq_queues[ena_qid], total_done); ena_com_update_dev_comp_head(io_cq); total_done = 0; } } while (likely(--budget)); work_done = TX_BUDGET - budget; - ena_trace(ENA_DBG | ENA_TXPTH, "tx: q %d done. total pkts: %d\n", + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "tx: q %d done. total pkts: %d\n", tx_ring->qid, work_done); /* If there is still something to commit update ring state */ if (likely(commit != TX_COMMIT)) { tx_ring->next_to_clean = next_to_clean; ena_com_comp_ack(&adapter->ena_dev->io_sq_queues[ena_qid], total_done); ena_com_update_dev_comp_head(io_cq); } /* * Need to make the rings circular update visible to * ena_xmit_mbuf() before checking for tx_ring->running. */ mb(); above_thresh = ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, ENA_TX_RESUME_THRESH); if (unlikely(!tx_ring->running && above_thresh)) { ENA_RING_MTX_LOCK(tx_ring); above_thresh = ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, ENA_TX_RESUME_THRESH); if (!tx_ring->running && above_thresh) { tx_ring->running = true; counter_u64_add(tx_ring->tx_stats.queue_wakeup, 1); taskqueue_enqueue(tx_ring->enqueue_tq, &tx_ring->enqueue_task); } ENA_RING_MTX_UNLOCK(tx_ring); } return (work_done); } static void ena_rx_hash_mbuf(struct ena_ring *rx_ring, struct ena_com_rx_ctx *ena_rx_ctx, struct mbuf *mbuf) { struct ena_adapter *adapter = rx_ring->adapter; if (likely(ENA_FLAG_ISSET(ENA_FLAG_RSS_ACTIVE, adapter))) { mbuf->m_pkthdr.flowid = ena_rx_ctx->hash; #ifdef RSS /* * Hardware and software RSS are in agreement only when both are * configured to Toeplitz algorithm. This driver configures * that algorithm only when software RSS is enabled and uses it. 
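 *
 * A build note, hedged: the branch below is compiled only when the kernel
 * is built with software RSS, which generates the opt_rss.h header included
 * at the top of this file. On stable/12 that is typically enabled with the
 * following kernel config options:
 *
 *	options RSS
 *	options PCBGROUP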
*/ if (adapter->ena_dev->rss.hash_func != ENA_ADMIN_TOEPLITZ && ena_rx_ctx->l3_proto != ENA_ETH_IO_L3_PROTO_UNKNOWN) { M_HASHTYPE_SET(mbuf, M_HASHTYPE_OPAQUE_HASH); return; } #endif if (ena_rx_ctx->frag && (ena_rx_ctx->l3_proto != ENA_ETH_IO_L3_PROTO_UNKNOWN)) { M_HASHTYPE_SET(mbuf, M_HASHTYPE_OPAQUE_HASH); return; } switch (ena_rx_ctx->l3_proto) { case ENA_ETH_IO_L3_PROTO_IPV4: switch (ena_rx_ctx->l4_proto) { case ENA_ETH_IO_L4_PROTO_TCP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_TCP_IPV4); break; case ENA_ETH_IO_L4_PROTO_UDP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_UDP_IPV4); break; default: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_IPV4); } break; case ENA_ETH_IO_L3_PROTO_IPV6: switch (ena_rx_ctx->l4_proto) { case ENA_ETH_IO_L4_PROTO_TCP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_TCP_IPV6); break; case ENA_ETH_IO_L4_PROTO_UDP: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_UDP_IPV6); break; default: M_HASHTYPE_SET(mbuf, M_HASHTYPE_RSS_IPV6); } break; case ENA_ETH_IO_L3_PROTO_UNKNOWN: M_HASHTYPE_SET(mbuf, M_HASHTYPE_NONE); break; default: M_HASHTYPE_SET(mbuf, M_HASHTYPE_OPAQUE_HASH); } } else { mbuf->m_pkthdr.flowid = rx_ring->qid; M_HASHTYPE_SET(mbuf, M_HASHTYPE_NONE); } } /** * ena_rx_mbuf - assemble mbuf from descriptors * @rx_ring: ring for which we want to clean packets * @ena_bufs: buffer info * @ena_rx_ctx: metadata for this packet(s) * @next_to_clean: ring pointer, will be updated only upon success * **/ static struct mbuf* ena_rx_mbuf(struct ena_ring *rx_ring, struct ena_com_rx_buf_info *ena_bufs, struct ena_com_rx_ctx *ena_rx_ctx, uint16_t *next_to_clean) { struct mbuf *mbuf; struct ena_rx_buffer *rx_info; struct ena_adapter *adapter; unsigned int descs = ena_rx_ctx->descs; - int rc; uint16_t ntc, len, req_id, buf = 0; ntc = *next_to_clean; adapter = rx_ring->adapter; len = ena_bufs[buf].len; req_id = ena_bufs[buf].req_id; - rc = validate_rx_req_id(rx_ring, req_id); - if (unlikely(rc != 0)) - return (NULL); - rx_info = &rx_ring->rx_buffer_info[req_id]; if (unlikely(rx_info->mbuf == NULL)) { device_printf(adapter->pdev, "NULL mbuf in rx_info"); return (NULL); } - ena_trace(ENA_DBG | ENA_RXPTH, "rx_info %p, mbuf %p, paddr %jx\n", + ena_trace(NULL, ENA_DBG | ENA_RXPTH, "rx_info %p, mbuf %p, paddr %jx\n", rx_info, rx_info->mbuf, (uintmax_t)rx_info->ena_buf.paddr); bus_dmamap_sync(adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_POSTREAD); mbuf = rx_info->mbuf; mbuf->m_flags |= M_PKTHDR; mbuf->m_pkthdr.len = len; mbuf->m_len = len; + // Only for the first segment, the data starts at a specific offset + mbuf->m_data = mtodo(mbuf, ena_rx_ctx->pkt_offset); + ena_trace(NULL, ENA_DBG | ENA_RXPTH, + "Mbuf data offset=%u\n", ena_rx_ctx->pkt_offset); mbuf->m_pkthdr.rcvif = rx_ring->que->adapter->ifp; /* Fill mbuf with hash key and its interpretation for optimization */ ena_rx_hash_mbuf(rx_ring, ena_rx_ctx, mbuf); - ena_trace(ENA_DBG | ENA_RXPTH, "rx mbuf 0x%p, flags=0x%x, len: %d\n", + ena_trace(NULL, ENA_DBG | ENA_RXPTH, "rx mbuf 0x%p, flags=0x%x, len: %d\n", mbuf, mbuf->m_flags, mbuf->m_pkthdr.len); /* DMA address is not needed anymore, unmap it */ bus_dmamap_unload(rx_ring->adapter->rx_buf_tag, rx_info->map); rx_info->mbuf = NULL; rx_ring->free_rx_ids[ntc] = req_id; ntc = ENA_RX_RING_IDX_NEXT(ntc, rx_ring->ring_size); /* * While we have more than one descriptor for one received packet, append * the other mbufs to the main one */ while (--descs) { ++buf; len = ena_bufs[buf].len; req_id = ena_bufs[buf].req_id; - rc = validate_rx_req_id(rx_ring, req_id); - if (unlikely(rc != 0)) { - /* - * If the req_id is invalid, then the device will
be - * reset. In that case we must free all mbufs that - * were already gathered. - */ - m_freem(mbuf); - return (NULL); - } rx_info = &rx_ring->rx_buffer_info[req_id]; if (unlikely(rx_info->mbuf == NULL)) { device_printf(adapter->pdev, "NULL mbuf in rx_info"); /* * If one of the required mbufs was not allocated yet, * we can break there. * All earlier used descriptors will be reallocated * later and not used mbufs can be reused. * The next_to_clean pointer will not be updated in case * of an error, so caller should advance it manually * in error handling routine to keep it up to date * with hw ring. */ m_freem(mbuf); return (NULL); } bus_dmamap_sync(adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_POSTREAD); if (unlikely(m_append(mbuf, len, rx_info->mbuf->m_data) == 0)) { counter_u64_add(rx_ring->rx_stats.mbuf_alloc_fail, 1); - ena_trace(ENA_WARNING, "Failed to append Rx mbuf %p\n", + ena_trace(NULL, ENA_WARNING, "Failed to append Rx mbuf %p\n", mbuf); } - ena_trace(ENA_DBG | ENA_RXPTH, + ena_trace(NULL, ENA_DBG | ENA_RXPTH, "rx mbuf updated. len %d\n", mbuf->m_pkthdr.len); /* Free already appended mbuf, it won't be useful anymore */ bus_dmamap_unload(rx_ring->adapter->rx_buf_tag, rx_info->map); m_freem(rx_info->mbuf); rx_info->mbuf = NULL; rx_ring->free_rx_ids[ntc] = req_id; ntc = ENA_RX_RING_IDX_NEXT(ntc, rx_ring->ring_size); } *next_to_clean = ntc; return (mbuf); } /** * ena_rx_checksum - indicate in mbuf if hw indicated a good cksum **/ static inline void ena_rx_checksum(struct ena_ring *rx_ring, struct ena_com_rx_ctx *ena_rx_ctx, struct mbuf *mbuf) { /* if IP and error */ if (unlikely((ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV4) && ena_rx_ctx->l3_csum_err)) { /* ipv4 checksum error */ mbuf->m_pkthdr.csum_flags = 0; counter_u64_add(rx_ring->rx_stats.bad_csum, 1); - ena_trace(ENA_DBG, "RX IPv4 header checksum error\n"); + ena_trace(NULL, ENA_DBG, "RX IPv4 header checksum error\n"); return; } /* if TCP/UDP */ if ((ena_rx_ctx->l4_proto == ENA_ETH_IO_L4_PROTO_TCP) || (ena_rx_ctx->l4_proto == ENA_ETH_IO_L4_PROTO_UDP)) { if (ena_rx_ctx->l4_csum_err) { /* TCP/UDP checksum error */ mbuf->m_pkthdr.csum_flags = 0; counter_u64_add(rx_ring->rx_stats.bad_csum, 1); - ena_trace(ENA_DBG, "RX L4 checksum error\n"); + ena_trace(NULL, ENA_DBG, "RX L4 checksum error\n"); } else { mbuf->m_pkthdr.csum_flags = CSUM_IP_CHECKED; mbuf->m_pkthdr.csum_flags |= CSUM_IP_VALID; } } } /** * ena_rx_cleanup - handle rx irq * @arg: ring for which irq is being handled **/ static int ena_rx_cleanup(struct ena_ring *rx_ring) { struct ena_adapter *adapter; struct mbuf *mbuf; struct ena_com_rx_ctx ena_rx_ctx; struct ena_com_io_cq* io_cq; struct ena_com_io_sq* io_sq; + enum ena_regs_reset_reason_types reset_reason; if_t ifp; uint16_t ena_qid; uint16_t next_to_clean; uint32_t refill_required; uint32_t refill_threshold; uint32_t do_if_input = 0; unsigned int qid; int rc, i; int budget = RX_BUDGET; #ifdef DEV_NETMAP int done; #endif /* DEV_NETMAP */ adapter = rx_ring->que->adapter; ifp = adapter->ifp; qid = rx_ring->que->id; ena_qid = ENA_IO_RXQ_IDX(qid); io_cq = &adapter->ena_dev->io_cq_queues[ena_qid]; io_sq = &adapter->ena_dev->io_sq_queues[ena_qid]; next_to_clean = rx_ring->next_to_clean; #ifdef DEV_NETMAP if (netmap_rx_irq(adapter->ifp, rx_ring->qid, &done) != NM_IRQ_PASS) return (0); #endif /* DEV_NETMAP */ - ena_trace(ENA_DBG, "rx: qid %d\n", qid); + ena_trace(NULL, ENA_DBG, "rx: qid %d\n", qid); do { ena_rx_ctx.ena_bufs = rx_ring->ena_bufs; ena_rx_ctx.max_bufs = adapter->max_rx_sgl_size; ena_rx_ctx.descs = 0; + 
ena_rx_ctx.pkt_offset = 0; + bus_dmamap_sync(io_cq->cdesc_addr.mem_handle.tag, io_cq->cdesc_addr.mem_handle.map, BUS_DMASYNC_POSTREAD); rc = ena_com_rx_pkt(io_cq, io_sq, &ena_rx_ctx); + if (unlikely(rc != 0)) { + if (rc == ENA_COM_NO_SPACE) { + counter_u64_add(rx_ring->rx_stats.bad_desc_num, + 1); + reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS; + } else { + counter_u64_add(rx_ring->rx_stats.bad_req_id, + 1); + reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID; + } + ena_trigger_reset(adapter, reset_reason); + return (0); + } - if (unlikely(rc != 0)) - goto error; - if (unlikely(ena_rx_ctx.descs == 0)) break; - ena_trace(ENA_DBG | ENA_RXPTH, "rx: q %d got packet from ena. " + ena_trace(NULL, ENA_DBG | ENA_RXPTH, "rx: q %d got packet from ena. " "descs #: %d l3 proto %d l4 proto %d hash: %x\n", rx_ring->qid, ena_rx_ctx.descs, ena_rx_ctx.l3_proto, ena_rx_ctx.l4_proto, ena_rx_ctx.hash); /* Receive mbuf from the ring */ mbuf = ena_rx_mbuf(rx_ring, rx_ring->ena_bufs, &ena_rx_ctx, &next_to_clean); bus_dmamap_sync(io_cq->cdesc_addr.mem_handle.tag, io_cq->cdesc_addr.mem_handle.map, BUS_DMASYNC_PREREAD); /* Exit if we failed to retrieve a buffer */ if (unlikely(mbuf == NULL)) { for (i = 0; i < ena_rx_ctx.descs; ++i) { rx_ring->free_rx_ids[next_to_clean] = rx_ring->ena_bufs[i].req_id; next_to_clean = ENA_RX_RING_IDX_NEXT(next_to_clean, rx_ring->ring_size); } break; } if (((ifp->if_capenable & IFCAP_RXCSUM) != 0) || ((ifp->if_capenable & IFCAP_RXCSUM_IPV6) != 0)) { ena_rx_checksum(rx_ring, &ena_rx_ctx, mbuf); } counter_enter(); counter_u64_add_protected(rx_ring->rx_stats.bytes, mbuf->m_pkthdr.len); counter_u64_add_protected(adapter->hw_stats.rx_bytes, mbuf->m_pkthdr.len); counter_exit(); /* * LRO is only for IP/TCP packets and TCP checksum of the packet * should be computed by hardware. */ do_if_input = 1; if (((ifp->if_capenable & IFCAP_LRO) != 0) && ((mbuf->m_pkthdr.csum_flags & CSUM_IP_VALID) != 0) && (ena_rx_ctx.l4_proto == ENA_ETH_IO_L4_PROTO_TCP)) { /* * Send to the stack if: * - LRO not enabled, or * - no LRO resources, or * - lro enqueue fails */ if ((rx_ring->lro.lro_cnt != 0) && (tcp_lro_rx(&rx_ring->lro, mbuf, 0) == 0)) do_if_input = 0; } if (do_if_input != 0) { - ena_trace(ENA_DBG | ENA_RXPTH, + ena_trace(NULL, ENA_DBG | ENA_RXPTH, "calling if_input() with mbuf %p\n", mbuf); (*ifp->if_input)(ifp, mbuf); } counter_enter(); counter_u64_add_protected(rx_ring->rx_stats.cnt, 1); counter_u64_add_protected(adapter->hw_stats.rx_packets, 1); counter_exit(); } while (--budget); rx_ring->next_to_clean = next_to_clean; refill_required = ena_com_free_q_entries(io_sq); refill_threshold = min_t(int, rx_ring->ring_size / ENA_RX_REFILL_THRESH_DIVIDER, ENA_RX_REFILL_THRESH_PACKET); if (refill_required > refill_threshold) { ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq); ena_refill_rx_bufs(rx_ring, refill_required); } tcp_lro_flush_all(&rx_ring->lro); return (RX_BUDGET - budget); - -error: - counter_u64_add(rx_ring->rx_stats.bad_desc_num, 1); - - /* Too many desc from the device. 
Trigger reset */ - ena_trigger_reset(adapter, ENA_REGS_RESET_TOO_MANY_RX_DESCS); - - return (0); } static void ena_tx_csum(struct ena_com_tx_ctx *ena_tx_ctx, struct mbuf *mbuf, bool disable_meta_caching) { struct ena_com_tx_meta *ena_meta; struct ether_vlan_header *eh; struct mbuf *mbuf_next; u32 mss; bool offload; uint16_t etype; int ehdrlen; struct ip *ip; int iphlen; struct tcphdr *th; int offset; offload = false; ena_meta = &ena_tx_ctx->ena_meta; mss = mbuf->m_pkthdr.tso_segsz; if (mss != 0) offload = true; if ((mbuf->m_pkthdr.csum_flags & CSUM_TSO) != 0) offload = true; if ((mbuf->m_pkthdr.csum_flags & CSUM_OFFLOAD) != 0) offload = true; if (!offload) { if (disable_meta_caching) { memset(ena_meta, 0, sizeof(*ena_meta)); ena_tx_ctx->meta_valid = 1; } else { ena_tx_ctx->meta_valid = 0; } return; } /* Determine where frame payload starts. */ eh = mtod(mbuf, struct ether_vlan_header *); if (eh->evl_encap_proto == htons(ETHERTYPE_VLAN)) { etype = ntohs(eh->evl_proto); ehdrlen = ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN; } else { etype = ntohs(eh->evl_encap_proto); ehdrlen = ETHER_HDR_LEN; } mbuf_next = m_getptr(mbuf, ehdrlen, &offset); ip = (struct ip *)(mtodo(mbuf_next, offset)); iphlen = ip->ip_hl << 2; mbuf_next = m_getptr(mbuf, iphlen + ehdrlen, &offset); th = (struct tcphdr *)(mtodo(mbuf_next, offset)); if ((mbuf->m_pkthdr.csum_flags & CSUM_IP) != 0) { ena_tx_ctx->l3_csum_enable = 1; } if ((mbuf->m_pkthdr.csum_flags & CSUM_TSO) != 0) { ena_tx_ctx->tso_enable = 1; ena_meta->l4_hdr_len = (th->th_off); } switch (etype) { case ETHERTYPE_IP: ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV4; if ((ip->ip_off & htons(IP_DF)) != 0) ena_tx_ctx->df = 1; break; case ETHERTYPE_IPV6: ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV6; default: break; } if (ip->ip_p == IPPROTO_TCP) { ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP; if ((mbuf->m_pkthdr.csum_flags & (CSUM_IP_TCP | CSUM_IP6_TCP)) != 0) ena_tx_ctx->l4_csum_enable = 1; else ena_tx_ctx->l4_csum_enable = 0; } else if (ip->ip_p == IPPROTO_UDP) { ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP; if ((mbuf->m_pkthdr.csum_flags & (CSUM_IP_UDP | CSUM_IP6_UDP)) != 0) ena_tx_ctx->l4_csum_enable = 1; else ena_tx_ctx->l4_csum_enable = 0; } else { ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UNKNOWN; ena_tx_ctx->l4_csum_enable = 0; } ena_meta->mss = mss; ena_meta->l3_hdr_len = iphlen; ena_meta->l3_hdr_offset = ehdrlen; ena_tx_ctx->meta_valid = 1; } static int ena_check_and_collapse_mbuf(struct ena_ring *tx_ring, struct mbuf **mbuf) { struct ena_adapter *adapter; struct mbuf *collapsed_mbuf; int num_frags; adapter = tx_ring->adapter; num_frags = ena_mbuf_count(*mbuf); /* One segment must be reserved for the configuration descriptor. */ if (num_frags < adapter->max_tx_sgl_size) return (0); counter_u64_add(tx_ring->tx_stats.collapse, 1); collapsed_mbuf = m_collapse(*mbuf, M_NOWAIT, adapter->max_tx_sgl_size - 1); if (unlikely(collapsed_mbuf == NULL)) { counter_u64_add(tx_ring->tx_stats.collapse_err, 1); return (ENOMEM); } /* If the mbuf was collapsed successfully, the original mbuf is released.
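 *
 * For illustration, here is a hedged sketch of the same defragment-before-DMA
 * idea as a standalone helper (sg_limit is an illustrative parameter,
 * standing in for whatever segment count the hardware supports):
 *
 *	static struct mbuf *
 *	defrag_for_dma(struct mbuf *m, int sg_limit)
 *	{
 *		if (ena_mbuf_count(m) < sg_limit)
 *			return (m);	// already fits the SG list
 *		// On success m_collapse() frees the original chain; on
 *		// failure it returns NULL and leaves the chain untouched.
 *		return (m_collapse(m, M_NOWAIT, sg_limit - 1));
 *	}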
*/ *mbuf = collapsed_mbuf; return (0); } static int ena_tx_map_mbuf(struct ena_ring *tx_ring, struct ena_tx_buffer *tx_info, struct mbuf *mbuf, void **push_hdr, u16 *header_len) { struct ena_adapter *adapter = tx_ring->adapter; struct ena_com_buf *ena_buf; bus_dma_segment_t segs[ENA_BUS_DMA_SEGS]; size_t iseg = 0; uint32_t mbuf_head_len; uint16_t offset; int rc, nsegs; mbuf_head_len = mbuf->m_len; tx_info->mbuf = mbuf; ena_buf = tx_info->bufs; /* * For easier maintenance of the DMA map, map the whole mbuf even if * the LLQ is used. The descriptors will be filled using the segments. */ rc = bus_dmamap_load_mbuf_sg(adapter->tx_buf_tag, tx_info->dmamap, mbuf, segs, &nsegs, BUS_DMA_NOWAIT); if (unlikely((rc != 0) || (nsegs == 0))) { - ena_trace(ENA_WARNING, + ena_trace(NULL, ENA_WARNING, "dmamap load failed! err: %d nsegs: %d\n", rc, nsegs); goto dma_error; } if (tx_ring->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { /* * When the device is in LLQ mode, the driver will copy * the header into the device memory space. * The ena_com layer assumes the header is in a linear * memory space. * This assumption might be wrong since part of the header * can be in the fragmented buffers. * First, check if the header fits in the mbuf. If not, copy it to a * separate buffer that will hold the linearized data. */ *header_len = min_t(uint32_t, mbuf->m_pkthdr.len, tx_ring->tx_max_header_size); /* If header is in linear space, just point into mbuf's data. */ if (likely(*header_len <= mbuf_head_len)) { *push_hdr = mbuf->m_data; /* * Otherwise, copy the whole header from multiple mbufs * to an intermediate buffer. */ } else { m_copydata(mbuf, 0, *header_len, tx_ring->push_buf_intermediate_buf); *push_hdr = tx_ring->push_buf_intermediate_buf; counter_u64_add(tx_ring->tx_stats.llq_buffer_copy, 1); } - ena_trace(ENA_DBG | ENA_TXPTH, + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "mbuf: %p header_buf->vaddr: %p push_len: %d\n", mbuf, *push_hdr, *header_len); /* If the packet fits in the LLQ header, there is no need for DMA segments. */ if (mbuf->m_pkthdr.len <= tx_ring->tx_max_header_size) { return (0); } else { offset = tx_ring->tx_max_header_size; /* * As the header part is mapped to the LLQ header, we can skip it and just * map the remainder of the mbuf to DMA segments. */ while (offset > 0) { if (offset >= segs[iseg].ds_len) { offset -= segs[iseg].ds_len; } else { ena_buf->paddr = segs[iseg].ds_addr + offset; ena_buf->len = segs[iseg].ds_len - offset; ena_buf++; tx_info->num_of_bufs++; offset = 0; } iseg++; } } } else { *push_hdr = NULL; /* * header_len is just a hint for the device. Because FreeBSD does not * give us information about the packet header length and it is not * guaranteed that all packet headers will be in the 1st mbuf, setting * header_len to 0 makes the device ignore this value and resolve the * header on its own.
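 *
 * To make the header split concrete, a hedged sketch of the sizing decision
 * described above (the names here are illustrative, not the exact code):
 *
 *	push_len = min(pkt_len, tx_max_header_size);
 *	if (push_len <= first_seg_len)
 *		push_hdr = first_seg_data;		// header already linear
 *	else
 *		m_copydata(m, 0, push_len, bounce_buf);	// linearize a copy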
*/ *header_len = 0; } /* Map rest of the mbuf */ while (iseg < nsegs) { ena_buf->paddr = segs[iseg].ds_addr; ena_buf->len = segs[iseg].ds_len; ena_buf++; iseg++; tx_info->num_of_bufs++; } return (0); dma_error: counter_u64_add(tx_ring->tx_stats.dma_mapping_err, 1); tx_info->mbuf = NULL; return (rc); } static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct mbuf **mbuf) { struct ena_adapter *adapter; struct ena_tx_buffer *tx_info; struct ena_com_tx_ctx ena_tx_ctx; struct ena_com_dev *ena_dev; struct ena_com_io_sq* io_sq; void *push_hdr; uint16_t next_to_use; uint16_t req_id; uint16_t ena_qid; uint16_t header_len; int rc; int nb_hw_desc; ena_qid = ENA_IO_TXQ_IDX(tx_ring->que->id); adapter = tx_ring->que->adapter; ena_dev = adapter->ena_dev; io_sq = &ena_dev->io_sq_queues[ena_qid]; rc = ena_check_and_collapse_mbuf(tx_ring, mbuf); if (unlikely(rc != 0)) { - ena_trace(ENA_WARNING, + ena_trace(NULL, ENA_WARNING, "Failed to collapse mbuf! err: %d\n", rc); return (rc); } - ena_trace(ENA_DBG | ENA_TXPTH, "Tx: %d bytes\n", (*mbuf)->m_pkthdr.len); + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "Tx: %d bytes\n", (*mbuf)->m_pkthdr.len); next_to_use = tx_ring->next_to_use; req_id = tx_ring->free_tx_ids[next_to_use]; tx_info = &tx_ring->tx_buffer_info[req_id]; tx_info->num_of_bufs = 0; rc = ena_tx_map_mbuf(tx_ring, tx_info, *mbuf, &push_hdr, &header_len); if (unlikely(rc != 0)) { - ena_trace(ENA_WARNING, "Failed to map TX mbuf\n"); + ena_trace(NULL, ENA_WARNING, "Failed to map TX mbuf\n"); return (rc); } memset(&ena_tx_ctx, 0x0, sizeof(struct ena_com_tx_ctx)); ena_tx_ctx.ena_bufs = tx_info->bufs; ena_tx_ctx.push_header = push_hdr; ena_tx_ctx.num_bufs = tx_info->num_of_bufs; ena_tx_ctx.req_id = req_id; ena_tx_ctx.header_len = header_len; /* Set flags and meta data */ ena_tx_csum(&ena_tx_ctx, *mbuf, adapter->disable_meta_caching); if (tx_ring->acum_pkts == DB_THRESHOLD || ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx)) { - ena_trace(ENA_DBG | ENA_TXPTH, + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "llq tx max burst size of queue %d achieved, writing doorbell to send burst\n", tx_ring->que->id); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); counter_u64_add(tx_ring->tx_stats.doorbells, 1); tx_ring->acum_pkts = 0; } /* Prepare the packet's descriptors and send them to device */ rc = ena_com_prepare_tx(io_sq, &ena_tx_ctx, &nb_hw_desc); if (unlikely(rc != 0)) { if (likely(rc == ENA_COM_NO_MEM)) { - ena_trace(ENA_DBG | ENA_TXPTH, + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "tx ring[%d] if out of space\n", tx_ring->que->id); } else { device_printf(adapter->pdev, "failed to prepare tx bufs\n"); } counter_u64_add(tx_ring->tx_stats.prepare_ctx_err, 1); goto dma_error; } counter_enter(); counter_u64_add_protected(tx_ring->tx_stats.cnt, 1); counter_u64_add_protected(tx_ring->tx_stats.bytes, (*mbuf)->m_pkthdr.len); counter_u64_add_protected(adapter->hw_stats.tx_packets, 1); counter_u64_add_protected(adapter->hw_stats.tx_bytes, (*mbuf)->m_pkthdr.len); counter_exit(); tx_info->tx_descs = nb_hw_desc; getbinuptime(&tx_info->timestamp); tx_info->print_once = true; tx_ring->next_to_use = ENA_TX_RING_IDX_NEXT(next_to_use, tx_ring->ring_size); /* stop the queue when no more space available, the packet can have up * to sgl_size + 2. one for the meta descriptor and one for header * (if the header is larger than tx_max_header_size). 
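 *
 * The check below follows the usual lock-free stop/wakeup pattern; a hedged,
 * generic sketch with hypothetical helpers:
 *
 *	stop_queue();
 *	mb();			// order the stop against the space re-check
 *	if (have_space())	// cleanup may have freed space meanwhile
 *		wake_queue();	// undo our own stop, nobody else will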
*/ if (unlikely(!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, adapter->max_tx_sgl_size + 2))) { - ena_trace(ENA_DBG | ENA_TXPTH, "Stop queue %d\n", + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "Stop queue %d\n", tx_ring->que->id); tx_ring->running = false; counter_u64_add(tx_ring->tx_stats.queue_stop, 1); /* There is a rare condition where this function decides to * stop the queue but meanwhile tx_cleanup() updates * next_to_completion and terminates. * The queue will remain stopped forever. * To solve this issue this function performs mb(), checks * the wakeup condition and wakes up the queue if needed. */ mb(); if (ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, ENA_TX_RESUME_THRESH)) { tx_ring->running = true; counter_u64_add(tx_ring->tx_stats.queue_wakeup, 1); } } bus_dmamap_sync(adapter->tx_buf_tag, tx_info->dmamap, BUS_DMASYNC_PREWRITE); return (0); dma_error: tx_info->mbuf = NULL; bus_dmamap_unload(adapter->tx_buf_tag, tx_info->dmamap); return (rc); } static void ena_start_xmit(struct ena_ring *tx_ring) { struct mbuf *mbuf; struct ena_adapter *adapter = tx_ring->adapter; struct ena_com_io_sq* io_sq; int ena_qid; int ret = 0; if (unlikely((if_getdrvflags(adapter->ifp) & IFF_DRV_RUNNING) == 0)) return; if (unlikely(!ENA_FLAG_ISSET(ENA_FLAG_LINK_UP, adapter))) return; ena_qid = ENA_IO_TXQ_IDX(tx_ring->que->id); io_sq = &adapter->ena_dev->io_sq_queues[ena_qid]; while ((mbuf = drbr_peek(adapter->ifp, tx_ring->br)) != NULL) { - ena_trace(ENA_DBG | ENA_TXPTH, "\ndequeued mbuf %p with flags %#x and" + ena_trace(NULL, ENA_DBG | ENA_TXPTH, "\ndequeued mbuf %p with flags %#x and" " header csum flags %#jx\n", mbuf, mbuf->m_flags, (uint64_t)mbuf->m_pkthdr.csum_flags); if (unlikely(!tx_ring->running)) { drbr_putback(adapter->ifp, tx_ring->br, mbuf); break; } if (unlikely((ret = ena_xmit_mbuf(tx_ring, &mbuf)) != 0)) { if (ret == ENA_COM_NO_MEM) { drbr_putback(adapter->ifp, tx_ring->br, mbuf); } else if (ret == ENA_COM_NO_SPACE) { drbr_putback(adapter->ifp, tx_ring->br, mbuf); } else { m_freem(mbuf); drbr_advance(adapter->ifp, tx_ring->br); } break; } drbr_advance(adapter->ifp, tx_ring->br); if (unlikely((if_getdrvflags(adapter->ifp) & IFF_DRV_RUNNING) == 0)) return; tx_ring->acum_pkts++; BPF_MTAP(adapter->ifp, mbuf); } if (likely(tx_ring->acum_pkts != 0)) { /* Trigger the dma engine */ ena_com_write_sq_doorbell(io_sq); counter_u64_add(tx_ring->tx_stats.doorbells, 1); tx_ring->acum_pkts = 0; } if (unlikely(!tx_ring->running)) taskqueue_enqueue(tx_ring->que->cleanup_tq, &tx_ring->que->cleanup_task); } Index: stable/12/sys/dev/ena/ena_datapath.h =================================================================== --- stable/12/sys/dev/ena/ena_datapath.h (revision 368011) +++ stable/12/sys/dev/ena/ena_datapath.h (revision 368012) @@ -1,42 +1,42 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-2-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ * */ #ifndef ENA_TXRX_H #define ENA_TXRX_H void ena_cleanup(void *arg, int pending); void ena_qflush(if_t ifp); int ena_mq_start(if_t ifp, struct mbuf *m); void ena_deferred_mq_start(void *arg, int pending); #endif /* ENA_TXRX_H */ Index: stable/12/sys/dev/ena/ena_netmap.c =================================================================== --- stable/12/sys/dev/ena/ena_netmap.c (revision 368011) +++ stable/12/sys/dev/ena/ena_netmap.c (revision 368012) @@ -1,1092 +1,1095 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-2-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #include <sys/cdefs.h> __FBSDID("$FreeBSD$"); #ifdef DEV_NETMAP #include "ena.h" #include "ena_netmap.h" #define ENA_NETMAP_MORE_FRAMES 1 #define ENA_NETMAP_NO_MORE_FRAMES 0 #define ENA_MAX_FRAMES 16384 struct ena_netmap_ctx { struct netmap_kring *kring; struct ena_adapter *adapter; struct netmap_adapter *na; struct netmap_slot *slots; struct ena_ring *ring; struct ena_com_io_cq *io_cq; struct ena_com_io_sq *io_sq; u_int nm_i; uint16_t nt; uint16_t lim; }; /* Netmap callbacks */ static int ena_netmap_reg(struct netmap_adapter *, int); static int ena_netmap_txsync(struct netmap_kring *, int); static int ena_netmap_rxsync(struct netmap_kring *, int); /* Helper functions */ static int ena_netmap_tx_frames(struct ena_netmap_ctx *); static int ena_netmap_tx_frame(struct ena_netmap_ctx *); static inline uint16_t ena_netmap_count_slots(struct ena_netmap_ctx *); static inline uint16_t ena_netmap_packet_len(struct netmap_slot *, u_int, uint16_t); static int ena_netmap_copy_data(struct netmap_adapter *, struct netmap_slot *, u_int, uint16_t, uint16_t, void *); static int ena_netmap_map_single_slot(struct netmap_adapter *, struct netmap_slot *, bus_dma_tag_t, bus_dmamap_t, void **, uint64_t *); static int ena_netmap_tx_map_slots(struct ena_netmap_ctx *, struct ena_tx_buffer *, void **, uint16_t *, uint16_t *); static void ena_netmap_unmap_last_socket_chain(struct ena_netmap_ctx *, struct ena_tx_buffer *); static void ena_netmap_tx_cleanup(struct ena_netmap_ctx *); static uint16_t ena_netmap_tx_clean_one(struct ena_netmap_ctx *, uint16_t); static inline int validate_tx_req_id(struct ena_ring *, uint16_t); static int ena_netmap_rx_frames(struct ena_netmap_ctx *); static int ena_netmap_rx_frame(struct ena_netmap_ctx *); static int ena_netmap_rx_load_desc(struct ena_netmap_ctx *, uint16_t, int *); static void ena_netmap_rx_cleanup(struct ena_netmap_ctx *); static void ena_netmap_fill_ctx(struct netmap_kring *, struct ena_netmap_ctx *, uint16_t); int ena_netmap_attach(struct ena_adapter *adapter) { struct netmap_adapter na; - ena_trace(ENA_NETMAP, "netmap attach\n"); + ena_trace(NULL, ENA_NETMAP, "netmap attach\n"); bzero(&na, sizeof(na)); na.na_flags = NAF_MOREFRAG; na.ifp = adapter->ifp; na.num_tx_desc = adapter->requested_tx_ring_size; na.num_rx_desc = adapter->requested_rx_ring_size; na.num_tx_rings = adapter->num_io_queues; na.num_rx_rings = adapter->num_io_queues; na.rx_buf_maxsize = adapter->buf_ring_size; na.nm_txsync = ena_netmap_txsync; na.nm_rxsync = ena_netmap_rxsync; na.nm_register = ena_netmap_reg; return (netmap_attach(&na)); } int ena_netmap_alloc_rx_slot(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info) { struct netmap_adapter *na = NA(adapter->ifp); struct netmap_kring *kring; struct netmap_ring *ring; struct netmap_slot *slot; void *addr; uint64_t paddr; int nm_i, qid, head, lim, rc; /* if previously allocated frag is not used */ if (unlikely(rx_info->netmap_buf_idx != 0)) return (0); qid = rx_ring->qid; kring = na->rx_rings[qid]; nm_i = kring->nr_hwcur; head = kring->rhead; - ena_trace(ENA_NETMAP | ENA_DBG, "nr_hwcur: %d, nr_hwtail: %d, " + ena_trace(NULL, ENA_NETMAP | ENA_DBG, "nr_hwcur: %d, nr_hwtail: %d, " "rhead: %d, rcur: %d, rtail: %d\n", kring->nr_hwcur, kring->nr_hwtail, kring->rhead, kring->rcur, kring->rtail); if ((nm_i == head) && rx_ring->initialized) { - ena_trace(ENA_NETMAP, "No free slots in netmap ring\n"); + ena_trace(NULL, ENA_NETMAP, "No free slots in netmap ring\n"); return (ENOMEM); } ring = kring->ring; if (ring == NULL) {
device_printf(adapter->pdev, "Rx ring %d is NULL\n", qid); return (EFAULT); } slot = &ring->slot[nm_i]; addr = PNMB(na, slot, &paddr); if (addr == NETMAP_BUF_BASE(na)) { device_printf(adapter->pdev, "Bad buff in slot\n"); return (EFAULT); } rc = netmap_load_map(na, adapter->rx_buf_tag, rx_info->map, addr); if (rc != 0) { - ena_trace(ENA_WARNING, "DMA mapping error\n"); + ena_trace(NULL, ENA_WARNING, "DMA mapping error\n"); return (rc); } bus_dmamap_sync(adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_PREREAD); rx_info->ena_buf.paddr = paddr; rx_info->ena_buf.len = ring->nr_buf_size; rx_info->mbuf = NULL; rx_info->netmap_buf_idx = slot->buf_idx; slot->buf_idx = 0; lim = kring->nkr_num_slots - 1; kring->nr_hwcur = nm_next(nm_i, lim); return (0); } void ena_netmap_free_rx_slot(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info) { struct netmap_adapter *na; struct netmap_kring *kring; struct netmap_slot *slot; int nm_i, qid, lim; na = NA(adapter->ifp); if (na == NULL) { device_printf(adapter->pdev, "netmap adapter is NULL\n"); return; } if (na->rx_rings == NULL) { device_printf(adapter->pdev, "netmap rings are NULL\n"); return; } qid = rx_ring->qid; kring = na->rx_rings[qid]; if (kring == NULL) { device_printf(adapter->pdev, "netmap kernel ring %d is NULL\n", qid); return; } lim = kring->nkr_num_slots - 1; nm_i = nm_prev(kring->nr_hwcur, lim); if (kring->nr_mode != NKR_NETMAP_ON) return; bus_dmamap_sync(adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_POSTREAD); netmap_unload_map(na, adapter->rx_buf_tag, rx_info->map); KASSERT(kring->ring == NULL, ("Netmap Rx ring is NULL\n")); slot = &kring->ring->slot[nm_i]; - ENA_ASSERT(slot->buf_idx == 0, "Overwrite slot buf\n"); + ENA_WARN(slot->buf_idx != 0, NULL, "Overwrite slot buf\n"); slot->buf_idx = rx_info->netmap_buf_idx; slot->flags = NS_BUF_CHANGED; rx_info->netmap_buf_idx = 0; kring->nr_hwcur = nm_i; } static bool ena_ring_in_netmap(struct ena_adapter *adapter, int qid, enum txrx x) { struct netmap_adapter *na; struct netmap_kring *kring; if (adapter->ifp->if_capenable & IFCAP_NETMAP) { na = NA(adapter->ifp); kring = (x == NR_RX) ? na->rx_rings[qid] : na->tx_rings[qid]; if (kring->nr_mode == NKR_NETMAP_ON) return true; } return false; } bool ena_tx_ring_in_netmap(struct ena_adapter *adapter, int qid) { return ena_ring_in_netmap(adapter, qid, NR_TX); } bool ena_rx_ring_in_netmap(struct ena_adapter *adapter, int qid) { return ena_ring_in_netmap(adapter, qid, NR_RX); } static void ena_netmap_reset_ring(struct ena_adapter *adapter, int qid, enum txrx x) { if (!ena_ring_in_netmap(adapter, qid, x)) return; netmap_reset(NA(adapter->ifp), x, qid, 0); - ena_trace(ENA_NETMAP, "%s ring %d is in netmap mode\n", + ena_trace(NULL, ENA_NETMAP, "%s ring %d is in netmap mode\n", (x == NR_TX) ? 
"Tx" : "Rx", qid); } void ena_netmap_reset_rx_ring(struct ena_adapter *adapter, int qid) { ena_netmap_reset_ring(adapter, qid, NR_RX); } void ena_netmap_reset_tx_ring(struct ena_adapter *adapter, int qid) { ena_netmap_reset_ring(adapter, qid, NR_TX); } static int ena_netmap_reg(struct netmap_adapter *na, int onoff) { struct ifnet *ifp = na->ifp; struct ena_adapter* adapter = ifp->if_softc; struct netmap_kring *kring; enum txrx t; int rc, i; ENA_LOCK_LOCK(adapter); ENA_FLAG_CLEAR_ATOMIC(ENA_FLAG_TRIGGER_RESET, adapter); ena_down(adapter); if (onoff) { - ena_trace(ENA_NETMAP, "netmap on\n"); + ena_trace(NULL, ENA_NETMAP, "netmap on\n"); for_rx_tx(t) { for (i = 0; i <= nma_get_nrings(na, t); i++) { kring = NMR(na, t)[i]; if (nm_kring_pending_on(kring)) { kring->nr_mode = NKR_NETMAP_ON; } } } nm_set_native_flags(na); } else { - ena_trace(ENA_NETMAP, "netmap off\n"); + ena_trace(NULL, ENA_NETMAP, "netmap off\n"); nm_clear_native_flags(na); for_rx_tx(t) { for (i = 0; i <= nma_get_nrings(na, t); i++) { kring = NMR(na, t)[i]; if (nm_kring_pending_off(kring)) { kring->nr_mode = NKR_NETMAP_OFF; } } } } rc = ena_up(adapter); if (rc != 0) { - ena_trace(ENA_WARNING, "ena_up failed with rc=%d\n", rc); + ena_trace(NULL, ENA_WARNING, "ena_up failed with rc=%d\n", rc); adapter->reset_reason = ENA_REGS_RESET_DRIVER_INVALID_STATE; nm_clear_native_flags(na); ena_destroy_device(adapter, false); ENA_FLAG_SET_ATOMIC(ENA_FLAG_DEV_UP_BEFORE_RESET, adapter); rc = ena_restore_device(adapter); } ENA_LOCK_UNLOCK(adapter); return (rc); } static int ena_netmap_txsync(struct netmap_kring *kring, int flags) { struct ena_netmap_ctx ctx; int rc = 0; ena_netmap_fill_ctx(kring, &ctx, ENA_IO_TXQ_IDX(kring->ring_id)); ctx.ring = &ctx.adapter->tx_ring[kring->ring_id]; ENA_RING_MTX_LOCK(ctx.ring); if (unlikely(!ENA_FLAG_ISSET(ENA_FLAG_DEV_UP, ctx.adapter))) goto txsync_end; if (unlikely(!ENA_FLAG_ISSET(ENA_FLAG_LINK_UP, ctx.adapter))) goto txsync_end; rc = ena_netmap_tx_frames(&ctx); ena_netmap_tx_cleanup(&ctx); txsync_end: ENA_RING_MTX_UNLOCK(ctx.ring); return (rc); } static int ena_netmap_tx_frames(struct ena_netmap_ctx *ctx) { struct ena_ring *tx_ring = ctx->ring; int rc = 0; ctx->nm_i = ctx->kring->nr_hwcur; ctx->nt = ctx->ring->next_to_use; __builtin_prefetch(&ctx->slots[ctx->nm_i]); while (ctx->nm_i != ctx->kring->rhead) { if ((rc = ena_netmap_tx_frame(ctx)) != 0) { /* * When there is no empty space in Tx ring, error is * still being returned. It should not be passed to the * netmap, as application knows current ring state from * netmap ring pointers. Returning error there could * cause application to exit, but the Tx ring is commonly * being full. */ if (rc == ENA_COM_NO_MEM) rc = 0; break; } tx_ring->acum_pkts++; } /* If any packet was sent... */ if (likely(ctx->nm_i != ctx->kring->nr_hwcur)) { /* ...send the doorbell to the device. 
*/ ena_com_write_sq_doorbell(ctx->io_sq); counter_u64_add(ctx->ring->tx_stats.doorbells, 1); tx_ring->acum_pkts = 0; ctx->ring->next_to_use = ctx->nt; ctx->kring->nr_hwcur = ctx->nm_i; } return (rc); } static int ena_netmap_tx_frame(struct ena_netmap_ctx *ctx) { struct ena_com_tx_ctx ena_tx_ctx; struct ena_adapter *adapter; struct ena_ring *tx_ring; struct ena_tx_buffer *tx_info; uint16_t req_id; uint16_t header_len; uint16_t packet_len; int nb_hw_desc; int rc; void *push_hdr; adapter = ctx->adapter; if (ena_netmap_count_slots(ctx) > adapter->max_tx_sgl_size) { - ena_trace(ENA_WARNING, "Too many slots per packet\n"); + ena_trace(NULL, ENA_WARNING, "Too many slots per packet\n"); return (EINVAL); } tx_ring = ctx->ring; req_id = tx_ring->free_tx_ids[ctx->nt]; tx_info = &tx_ring->tx_buffer_info[req_id]; tx_info->num_of_bufs = 0; tx_info->nm_info.sockets_used = 0; rc = ena_netmap_tx_map_slots(ctx, tx_info, &push_hdr, &header_len, &packet_len); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "Failed to map Tx slot\n"); return (rc); } bzero(&ena_tx_ctx, sizeof(struct ena_com_tx_ctx)); ena_tx_ctx.ena_bufs = tx_info->bufs; ena_tx_ctx.push_header = push_hdr; ena_tx_ctx.num_bufs = tx_info->num_of_bufs; ena_tx_ctx.req_id = req_id; ena_tx_ctx.header_len = header_len; /* There are no offloads, as netmap does not support them */ if (tx_ring->acum_pkts == DB_THRESHOLD || ena_com_is_doorbell_needed(ctx->io_sq, &ena_tx_ctx)) { ena_com_write_sq_doorbell(ctx->io_sq); counter_u64_add(tx_ring->tx_stats.doorbells, 1); tx_ring->acum_pkts = 0; } rc = ena_com_prepare_tx(ctx->io_sq, &ena_tx_ctx, &nb_hw_desc); if (unlikely(rc != 0)) { if (likely(rc == ENA_COM_NO_MEM)) { - ena_trace(ENA_NETMAP | ENA_DBG | ENA_TXPTH, + ena_trace(NULL, ENA_NETMAP | ENA_DBG | ENA_TXPTH, "Tx ring[%d] is out of space\n", tx_ring->que->id); } else { device_printf(adapter->pdev, "Failed to prepare Tx bufs\n"); } counter_u64_add(tx_ring->tx_stats.prepare_ctx_err, 1); ena_netmap_unmap_last_socket_chain(ctx, tx_info); return (rc); } counter_enter(); counter_u64_add_protected(tx_ring->tx_stats.cnt, 1); counter_u64_add_protected(tx_ring->tx_stats.bytes, packet_len); counter_u64_add_protected(adapter->hw_stats.tx_packets, 1); counter_u64_add_protected(adapter->hw_stats.tx_bytes, packet_len); counter_exit(); tx_info->tx_descs = nb_hw_desc; ctx->nt = ENA_TX_RING_IDX_NEXT(ctx->nt, ctx->ring->ring_size); for (unsigned int i = 0; i < tx_info->num_of_bufs; i++) bus_dmamap_sync(adapter->tx_buf_tag, tx_info->nm_info.map_seg[i], BUS_DMASYNC_PREWRITE); return (0); } static inline uint16_t ena_netmap_count_slots(struct ena_netmap_ctx *ctx) { uint16_t slots = 1; uint16_t nm = ctx->nm_i; while ((ctx->slots[nm].flags & NS_MOREFRAG) != 0) { slots++; nm = nm_next(nm, ctx->lim); } return slots; } static inline uint16_t ena_netmap_packet_len(struct netmap_slot *slots, u_int slot_index, uint16_t limit) { struct netmap_slot *nm_slot; uint16_t packet_size = 0; do { nm_slot = &slots[slot_index]; packet_size += nm_slot->len; slot_index = nm_next(slot_index, limit); } while ((nm_slot->flags & NS_MOREFRAG) != 0); return packet_size; } static int ena_netmap_copy_data(struct netmap_adapter *na, struct netmap_slot *slots, u_int slot_index, uint16_t limit, uint16_t bytes_to_copy, void *destination) { struct netmap_slot *nm_slot; void *slot_vaddr; uint16_t packet_size; uint16_t data_amount; packet_size = 0; do { nm_slot = &slots[slot_index]; slot_vaddr = NMB(na, nm_slot); if (unlikely(slot_vaddr == NULL)) return (EINVAL); data_amount = min_t(uint16_t,
bytes_to_copy, nm_slot->len); memcpy(destination, slot_vaddr, data_amount); bytes_to_copy -= data_amount; slot_index = nm_next(slot_index, limit); } while ((nm_slot->flags & NS_MOREFRAG) != 0 && bytes_to_copy > 0); return (0); } static int ena_netmap_map_single_slot(struct netmap_adapter *na, struct netmap_slot *slot, bus_dma_tag_t dmatag, bus_dmamap_t dmamap, void **vaddr, uint64_t *paddr) { int rc; *vaddr = PNMB(na, slot, paddr); if (unlikely(vaddr == NULL)) { - ena_trace(ENA_ALERT, "Slot address is NULL\n"); + ena_trace(NULL, ENA_ALERT, "Slot address is NULL\n"); return (EINVAL); } rc = netmap_load_map(na, dmatag, dmamap, *vaddr); if (unlikely(rc != 0)) { - ena_trace(ENA_ALERT, "Failed to map slot %d for DMA\n", + ena_trace(NULL, ENA_ALERT, "Failed to map slot %d for DMA\n", slot->buf_idx); return (EINVAL); } return (0); } static int ena_netmap_tx_map_slots(struct ena_netmap_ctx *ctx, struct ena_tx_buffer *tx_info, void **push_hdr, uint16_t *header_len, uint16_t *packet_len) { struct netmap_slot *slot; struct ena_com_buf *ena_buf; struct ena_adapter *adapter; struct ena_ring *tx_ring; struct ena_netmap_tx_info *nm_info; bus_dmamap_t *nm_maps; void *vaddr; uint64_t paddr; uint32_t *nm_buf_idx; uint32_t slot_head_len; uint32_t frag_len; uint32_t remaining_len; uint16_t push_len; uint16_t delta; int rc; adapter = ctx->adapter; tx_ring = ctx->ring; ena_buf = tx_info->bufs; nm_info = &tx_info->nm_info; nm_maps = nm_info->map_seg; nm_buf_idx = nm_info->socket_buf_idx; slot = &ctx->slots[ctx->nm_i]; slot_head_len = slot->len; *packet_len = ena_netmap_packet_len(ctx->slots, ctx->nm_i, ctx->lim); remaining_len = *packet_len; delta = 0; __builtin_prefetch(&ctx->slots[ctx->nm_i + 1]); if (tx_ring->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { /* * When the device is in LLQ mode, the driver will copy * the header into the device memory space. * The ena_com layer assumes that the header is in a linear * memory space. * This assumption might be wrong since part of the header * can be in the fragmented buffers. * First, check if header fits in the first slot. If not, copy * it to separate buffer that will be holding linearized data. */ push_len = min_t(uint32_t, *packet_len, tx_ring->tx_max_header_size); *header_len = push_len; /* If header is in linear space, just point to socket's data. */ if (likely(push_len <= slot_head_len)) { *push_hdr = NMB(ctx->na, slot); if (unlikely(push_hdr == NULL)) { device_printf(adapter->pdev, "Slot vaddress is NULL\n"); return (EINVAL); } /* * Otherwise, copy whole portion of header from multiple slots * to intermediate buffer. */ } else { rc = ena_netmap_copy_data(ctx->na, ctx->slots, ctx->nm_i, ctx->lim, push_len, tx_ring->push_buf_intermediate_buf); if (unlikely(rc)) { device_printf(adapter->pdev, "Failed to copy data from slots to push_buf\n"); return (EINVAL); } *push_hdr = tx_ring->push_buf_intermediate_buf; counter_u64_add(tx_ring->tx_stats.llq_buffer_copy, 1); delta = push_len - slot_head_len; } - ena_trace(ENA_NETMAP | ENA_DBG | ENA_TXPTH, + ena_trace(NULL, ENA_NETMAP | ENA_DBG | ENA_TXPTH, "slot: %d header_buf->vaddr: %p push_len: %d\n", slot->buf_idx, *push_hdr, push_len); /* * If header was in linear memory space, map for the dma rest of the data * in the first mbuf of the mbuf chain. 
*/ if (slot_head_len > push_len) { rc = ena_netmap_map_single_slot(ctx->na, slot, adapter->tx_buf_tag, *nm_maps, &vaddr, &paddr); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "DMA mapping error\n"); return (rc); } nm_maps++; ena_buf->paddr = paddr + push_len; ena_buf->len = slot->len - push_len; ena_buf++; tx_info->num_of_bufs++; } remaining_len -= slot->len; /* Save buf idx before advancing */ *nm_buf_idx = slot->buf_idx; nm_buf_idx++; slot->buf_idx = 0; /* Advance to the next socket */ ctx->nm_i = nm_next(ctx->nm_i, ctx->lim); slot = &ctx->slots[ctx->nm_i]; nm_info->sockets_used++; /* * If header is in non linear space (delta > 0), then skip mbufs * containing header and map the last one containing both header * and the packet data. * The first segment is already counted in. */ while (delta > 0) { __builtin_prefetch(&ctx->slots[ctx->nm_i + 1]); frag_len = slot->len; /* * If whole segment contains header just move to the * next one and reduce delta. */ if (unlikely(delta >= frag_len)) { delta -= frag_len; } else { /* * Map the data and then assign it with the * offsets */ rc = ena_netmap_map_single_slot(ctx->na, slot, adapter->tx_buf_tag, *nm_maps, &vaddr, &paddr); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "DMA mapping error\n"); goto error_map; } nm_maps++; ena_buf->paddr = paddr + delta; ena_buf->len = slot->len - delta; ena_buf++; tx_info->num_of_bufs++; delta = 0; } remaining_len -= slot->len; /* Save buf idx before advancing */ *nm_buf_idx = slot->buf_idx; nm_buf_idx++; slot->buf_idx = 0; /* Advance to the next socket */ ctx->nm_i = nm_next(ctx->nm_i, ctx->lim); slot = &ctx->slots[ctx->nm_i]; nm_info->sockets_used++; } } else { *push_hdr = NULL; /* * header_len is just a hint for the device. Because netmap is * not giving us any information about packet header length and * it is not guaranteed that all packet headers will be in the * 1st slot, setting header_len to 0 is making the device ignore * this value and resolve header on it's own. */ *header_len = 0; } /* Map all remaining data (regular routine for non-LLQ mode) */ while (remaining_len > 0) { __builtin_prefetch(&ctx->slots[ctx->nm_i + 1]); rc = ena_netmap_map_single_slot(ctx->na, slot, adapter->tx_buf_tag, *nm_maps, &vaddr, &paddr); if (unlikely(rc != 0)) { device_printf(adapter->pdev, "DMA mapping error\n"); goto error_map; } nm_maps++; ena_buf->paddr = paddr; ena_buf->len = slot->len; ena_buf++; tx_info->num_of_bufs++; remaining_len -= slot->len; /* Save buf idx before advancing */ *nm_buf_idx = slot->buf_idx; nm_buf_idx++; slot->buf_idx = 0; /* Advance to the next socket */ ctx->nm_i = nm_next(ctx->nm_i, ctx->lim); slot = &ctx->slots[ctx->nm_i]; nm_info->sockets_used++; } return (0); error_map: ena_netmap_unmap_last_socket_chain(ctx, tx_info); return (rc); } static void ena_netmap_unmap_last_socket_chain(struct ena_netmap_ctx *ctx, struct ena_tx_buffer *tx_info) { struct ena_netmap_tx_info *nm_info; int n; nm_info = &tx_info->nm_info; /** * As the used sockets must not be equal to the buffers used in the LLQ * mode, they must be treated separately. * First, unmap the DMA maps. 
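 *
 * Shape of this two-phase cleanup, as a hedged sketch with hypothetical
 * helpers (dma_unmap() and return_slot_to_netmap() are stand-ins):
 *
 *	for (n = bufs_used; n-- > 0;)
 *		dma_unmap(map_seg[n]);		// phase 1: release DMA state
 *	for (n = slots_used; n-- > 0;)
 *		return_slot_to_netmap(n);	// phase 2: give buffers back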
*/ n = tx_info->num_of_bufs; while (n--) { netmap_unload_map(ctx->na, ctx->adapter->tx_buf_tag, nm_info->map_seg[n]); } tx_info->num_of_bufs = 0; /* Next, return the sockets back to userspace */ n = nm_info->sockets_used; while (n--) { ctx->slots[ctx->nm_i].buf_idx = nm_info->socket_buf_idx[n]; ctx->slots[ctx->nm_i].flags = NS_BUF_CHANGED; nm_info->socket_buf_idx[n] = 0; ctx->nm_i = nm_prev(ctx->nm_i, ctx->lim); } nm_info->sockets_used = 0; } static void ena_netmap_tx_cleanup(struct ena_netmap_ctx *ctx) { uint16_t req_id; uint16_t total_tx_descs = 0; ctx->nm_i = ctx->kring->nr_hwtail; ctx->nt = ctx->ring->next_to_clean; /* Reclaim buffers for completed transmissions */ while (ena_com_tx_comp_req_id_get(ctx->io_cq, &req_id) >= 0) { if (validate_tx_req_id(ctx->ring, req_id) != 0) break; total_tx_descs += ena_netmap_tx_clean_one(ctx, req_id); } ctx->kring->nr_hwtail = ctx->nm_i; if (total_tx_descs > 0) { /* acknowledge completion of sent packets */ ctx->ring->next_to_clean = ctx->nt; ena_com_comp_ack(ctx->ring->ena_com_io_sq, total_tx_descs); ena_com_update_dev_comp_head(ctx->ring->ena_com_io_cq); } } static uint16_t ena_netmap_tx_clean_one(struct ena_netmap_ctx *ctx, uint16_t req_id) { struct ena_tx_buffer *tx_info; struct ena_netmap_tx_info *nm_info; int n; tx_info = &ctx->ring->tx_buffer_info[req_id]; nm_info = &tx_info->nm_info; /** * As the used sockets must not be equal to the buffers used in the LLQ * mode, they must be treated separately. * First, unmap the DMA maps. */ n = tx_info->num_of_bufs; for (n = 0; n < tx_info->num_of_bufs; n++) { netmap_unload_map(ctx->na, ctx->adapter->tx_buf_tag, nm_info->map_seg[n]); } tx_info->num_of_bufs = 0; /* Next, return the sockets back to userspace */ for (n = 0; n < nm_info->sockets_used; n++) { ctx->nm_i = nm_next(ctx->nm_i, ctx->lim); - ENA_ASSERT(ctx->slots[ctx->nm_i].buf_idx == 0, + ENA_WARN(ctx->slots[ctx->nm_i].buf_idx != 0, NULL, "Tx idx is not 0.\n"); ctx->slots[ctx->nm_i].buf_idx = nm_info->socket_buf_idx[n]; ctx->slots[ctx->nm_i].flags = NS_BUF_CHANGED; nm_info->socket_buf_idx[n] = 0; } nm_info->sockets_used = 0; ctx->ring->free_tx_ids[ctx->nt] = req_id; ctx->nt = ENA_TX_RING_IDX_NEXT(ctx->nt, ctx->lim); return tx_info->tx_descs; } static inline int validate_tx_req_id(struct ena_ring *tx_ring, uint16_t req_id) { struct ena_adapter *adapter = tx_ring->adapter; if (likely(req_id < tx_ring->ring_size)) return (0); - ena_trace(ENA_WARNING, "Invalid req_id: %hu\n", req_id); + ena_trace(NULL, ENA_WARNING, "Invalid req_id: %hu\n", req_id); counter_u64_add(tx_ring->tx_stats.bad_req_id, 1); ena_trigger_reset(adapter, ENA_REGS_RESET_INV_TX_REQ_ID); return (EFAULT); } static int ena_netmap_rxsync(struct netmap_kring *kring, int flags) { struct ena_netmap_ctx ctx; int rc; ena_netmap_fill_ctx(kring, &ctx, ENA_IO_RXQ_IDX(kring->ring_id)); ctx.ring = &ctx.adapter->rx_ring[kring->ring_id]; if (ctx.kring->rhead > ctx.lim) { /* Probably not needed to release slots from RX ring.
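 *
 * For reference, the circular-index helpers used throughout this file
 * behave like the following (simplified from netmap's netmap_kern.h):
 *
 *	nm_next(i, lim) == (i == lim) ? 0 : i + 1;
 *	nm_prev(i, lim) == (i == 0) ? lim : i - 1;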
*/ return (netmap_ring_reinit(ctx.kring)); } if (unlikely((if_getdrvflags(ctx.na->ifp) & IFF_DRV_RUNNING) == 0)) return (0); if (unlikely(!ENA_FLAG_ISSET(ENA_FLAG_LINK_UP, ctx.adapter))) return (0); if ((rc = ena_netmap_rx_frames(&ctx)) != 0) return (rc); ena_netmap_rx_cleanup(&ctx); return (0); } static inline int ena_netmap_rx_frames(struct ena_netmap_ctx *ctx) { int rc = 0; int frames_counter = 0; ctx->nt = ctx->ring->next_to_clean; ctx->nm_i = ctx->kring->nr_hwtail; while((rc = ena_netmap_rx_frame(ctx)) == ENA_NETMAP_MORE_FRAMES) { frames_counter++; /* In case of multiple frames, it is not an error. */ rc = 0; if (frames_counter > ENA_MAX_FRAMES) { device_printf(ctx->adapter->pdev, "Driver is stuck in the Rx loop\n"); break; } }; ctx->kring->nr_hwtail = ctx->nm_i; ctx->kring->nr_kflags &= ~NKR_PENDINTR; ctx->ring->next_to_clean = ctx->nt; return (rc); } static inline int ena_netmap_rx_frame(struct ena_netmap_ctx *ctx) { struct ena_com_rx_ctx ena_rx_ctx; + enum ena_regs_reset_reason_types reset_reason; int rc, len = 0; uint16_t buf, nm; ena_rx_ctx.ena_bufs = ctx->ring->ena_bufs; ena_rx_ctx.max_bufs = ctx->adapter->max_rx_sgl_size; bus_dmamap_sync(ctx->io_cq->cdesc_addr.mem_handle.tag, ctx->io_cq->cdesc_addr.mem_handle.map, BUS_DMASYNC_POSTREAD); rc = ena_com_rx_pkt(ctx->io_cq, ctx->io_sq, &ena_rx_ctx); if (unlikely(rc != 0)) { - ena_trace(ENA_ALERT, "Too many desc from the device.\n"); - counter_u64_add(ctx->ring->rx_stats.bad_desc_num, 1); - ena_trigger_reset(ctx->adapter, - ENA_REGS_RESET_TOO_MANY_RX_DESCS); + ena_trace(NULL, ENA_ALERT, + "Failed to read pkt from the device with error: %d\n", rc); + if (rc == ENA_COM_NO_SPACE) { + counter_u64_add(ctx->ring->rx_stats.bad_desc_num, 1); + reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS; + } else { + counter_u64_add(ctx->ring->rx_stats.bad_req_id, 1); + reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID; + } + ena_trigger_reset(ctx->adapter, reset_reason); return (rc); } if (unlikely(ena_rx_ctx.descs == 0)) return (ENA_NETMAP_NO_MORE_FRAMES); - ena_trace(ENA_NETMAP | ENA_DBG, "Rx: q %d got packet from ena. descs #:" + ena_trace(NULL, ENA_NETMAP | ENA_DBG, "Rx: q %d got packet from ena. descs #:" " %d l3 proto %d l4 proto %d hash: %x\n", ctx->ring->qid, ena_rx_ctx.descs, ena_rx_ctx.l3_proto, ena_rx_ctx.l4_proto, ena_rx_ctx.hash); for (buf = 0; buf < ena_rx_ctx.descs; buf++) if ((rc = ena_netmap_rx_load_desc(ctx, buf, &len)) != 0) break; /* * ena_netmap_rx_load_desc doesn't know the number of descriptors. * It just set flag NS_MOREFRAG to all slots, then here flag of * last slot is cleared. 
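 *
 * In other words, for a frame spanning three slots the flags end up as
 * follows (a sketch, showing only the MOREFRAG bit):
 *
 *	slot[i + 0].flags = NS_MOREFRAG;	// more fragments follow
 *	slot[i + 1].flags = NS_MOREFRAG;
 *	slot[i + 2].flags = 0;			// last fragment of the frame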
*/ ctx->slots[nm_prev(ctx->nm_i, ctx->lim)].flags = NS_BUF_CHANGED; if (rc != 0) { goto rx_clear_desc; } bus_dmamap_sync(ctx->io_cq->cdesc_addr.mem_handle.tag, ctx->io_cq->cdesc_addr.mem_handle.map, BUS_DMASYNC_PREREAD); counter_enter(); counter_u64_add_protected(ctx->ring->rx_stats.bytes, len); counter_u64_add_protected(ctx->adapter->hw_stats.rx_bytes, len); counter_u64_add_protected(ctx->ring->rx_stats.cnt, 1); counter_u64_add_protected(ctx->adapter->hw_stats.rx_packets, 1); counter_exit(); return (ENA_NETMAP_MORE_FRAMES); rx_clear_desc: nm = ctx->nm_i; /* Remove failed packet from ring */ while(buf--) { ctx->slots[nm].flags = 0; ctx->slots[nm].len = 0; nm = nm_prev(nm, ctx->lim); } return (rc); } static inline int ena_netmap_rx_load_desc(struct ena_netmap_ctx *ctx, uint16_t buf, int *len) { struct ena_rx_buffer *rx_info; uint16_t req_id; - int rc; req_id = ctx->ring->ena_bufs[buf].req_id; - rc = validate_rx_req_id(ctx->ring, req_id); - if (unlikely(rc != 0)) - return (rc); - rx_info = &ctx->ring->rx_buffer_info[req_id]; bus_dmamap_sync(ctx->adapter->rx_buf_tag, rx_info->map, BUS_DMASYNC_POSTREAD); netmap_unload_map(ctx->na, ctx->adapter->rx_buf_tag, rx_info->map); - ENA_ASSERT(ctx->slots[ctx->nm_i].buf_idx == 0, "Rx idx is not 0.\n"); + ENA_WARN(ctx->slots[ctx->nm_i].buf_idx != 0, NULL, + "Rx idx is not 0.\n"); ctx->slots[ctx->nm_i].buf_idx = rx_info->netmap_buf_idx; rx_info->netmap_buf_idx = 0; /* * Set NS_MOREFRAG to all slots. * Then ena_netmap_rx_frame clears it from last one. */ ctx->slots[ctx->nm_i].flags |= NS_MOREFRAG | NS_BUF_CHANGED; ctx->slots[ctx->nm_i].len = ctx->ring->ena_bufs[buf].len; *len += ctx->slots[ctx->nm_i].len; ctx->ring->free_rx_ids[ctx->nt] = req_id; - ena_trace(ENA_DBG, "rx_info %p, buf_idx %d, paddr %jx, nm: %d\n", + ena_trace(NULL, ENA_DBG, "rx_info %p, buf_idx %d, paddr %jx, nm: %d\n", rx_info, ctx->slots[ctx->nm_i].buf_idx, (uintmax_t)rx_info->ena_buf.paddr, ctx->nm_i); ctx->nm_i = nm_next(ctx->nm_i, ctx->lim); ctx->nt = ENA_RX_RING_IDX_NEXT(ctx->nt, ctx->ring->ring_size); return (0); } static inline void ena_netmap_rx_cleanup(struct ena_netmap_ctx *ctx) { int refill_required; refill_required = ctx->kring->rhead - ctx->kring->nr_hwcur; if (ctx->kring->nr_hwcur != ctx->kring->nr_hwtail) refill_required -= 1; if (refill_required == 0) return; else if (refill_required < 0) refill_required += ctx->kring->nkr_num_slots; ena_refill_rx_bufs(ctx->ring, refill_required); } static inline void ena_netmap_fill_ctx(struct netmap_kring *kring, struct ena_netmap_ctx *ctx, uint16_t ena_qid) { ctx->kring = kring; ctx->na = kring->na; ctx->adapter = ctx->na->ifp->if_softc; ctx->lim = kring->nkr_num_slots - 1; ctx->io_cq = &ctx->adapter->ena_dev->io_cq_queues[ena_qid]; ctx->io_sq = &ctx->adapter->ena_dev->io_sq_queues[ena_qid]; ctx->slots = kring->ring->slot; } void ena_netmap_unload(struct ena_adapter *adapter, bus_dmamap_t map) { struct netmap_adapter *na = NA(adapter->ifp); netmap_unload_map(na, adapter->tx_buf_tag, map); } #endif /* DEV_NETMAP */ Index: stable/12/sys/dev/ena/ena_netmap.h =================================================================== --- stable/12/sys/dev/ena/ena_netmap.h (revision 368011) +++ stable/12/sys/dev/ena/ena_netmap.h (revision 368012) @@ -1,60 +1,60 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-2-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * $FreeBSD$ * */ #ifndef _ENA_NETMAP_H_ #define _ENA_NETMAP_H_ /* Undef (un)likely as they are defined in netmap_kern.h */ #ifdef likely #undef likely #endif /* likely */ #ifdef unlikely #undef unlikely #endif /* unlikely */ #include <net/netmap.h> #include <sys/selinfo.h> #include <dev/netmap/netmap_kern.h> int ena_netmap_attach(struct ena_adapter *adapter); int ena_netmap_alloc_rx_slot(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info); void ena_netmap_free_rx_slot(struct ena_adapter *adapter, struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info); bool ena_rx_ring_in_netmap(struct ena_adapter *adapter, int qid); bool ena_tx_ring_in_netmap(struct ena_adapter *adapter, int qid); void ena_netmap_reset_rx_ring(struct ena_adapter *adapter, int qid); void ena_netmap_reset_tx_ring(struct ena_adapter *adapter, int qid); void ena_netmap_unload(struct ena_adapter *adapter, bus_dmamap_t map); #endif /* _ENA_NETMAP_H_ */ Index: stable/12/sys/dev/ena/ena_sysctl.c =================================================================== --- stable/12/sys/dev/ena/ena_sysctl.c (revision 368011) +++ stable/12/sys/dev/ena/ena_sysctl.c (revision 368012) @@ -1,462 +1,557 @@ /*- - * BSD LICENSE + * SPDX-License-Identifier: BSD-2-Clause * * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED.
Index: stable/12/sys/dev/ena/ena_sysctl.c
===================================================================
--- stable/12/sys/dev/ena/ena_sysctl.c	(revision 368011)
+++ stable/12/sys/dev/ena/ena_sysctl.c	(revision 368012)
@@ -1,462 +1,557 @@
/*-
- * BSD LICENSE
+ * SPDX-License-Identifier: BSD-2-Clause
 *
 * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "ena_sysctl.h"

static void	ena_sysctl_add_wd(struct ena_adapter *);
static void	ena_sysctl_add_stats(struct ena_adapter *);
+static void	ena_sysctl_add_eni_metrics(struct ena_adapter *);
static void	ena_sysctl_add_tuneables(struct ena_adapter *);
static int	ena_sysctl_buf_ring_size(SYSCTL_HANDLER_ARGS);
static int	ena_sysctl_rx_queue_size(SYSCTL_HANDLER_ARGS);
static int	ena_sysctl_io_queues_nb(SYSCTL_HANDLER_ARGS);
+static int	ena_sysctl_eni_metrics_interval(SYSCTL_HANDLER_ARGS);

+/* Limit the maximum ENI metrics sample interval to one hour. */
+#define ENI_METRICS_MAX_SAMPLE_INTERVAL 3600
+
static SYSCTL_NODE(_hw, OID_AUTO, ena, CTLFLAG_RD, 0, "ENA driver parameters");

/*
 * Logging level for changing verbosity of the output
 */
int ena_log_level = ENA_ALERT | ENA_WARNING;
SYSCTL_INT(_hw_ena, OID_AUTO, log_level, CTLFLAG_RWTUN,
    &ena_log_level, 0, "Logging level indicating verbosity of the logs");

SYSCTL_CONST_STRING(_hw_ena, OID_AUTO, driver_version, CTLFLAG_RD,
    DRV_MODULE_VERSION, "ENA driver version");
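Because log_level is attached under _hw_ena with CTLFLAG_RWTUN, it is
adjustable at runtime through the hw.ena.log_level OID. A userspace sketch of
reading and raising it (the numeric bit set here is illustrative only; the
ENA_* level flags are driver-internal values):

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	int
	main(void)
	{
		int level;
		size_t len = sizeof(level);

		/* Read the current verbosity bitmask. */
		if (sysctlbyname("hw.ena.log_level", &level, &len,
		    NULL, 0) == -1)
			return (1);
		printf("hw.ena.log_level: %d\n", level);

		/* Raise verbosity; a real tool would take the ENA_* flag
		 * value from the user rather than hardcode a bit. */
		level |= 0x04;	/* illustrative bit only */
		return (sysctlbyname("hw.ena.log_level", NULL, NULL,
		    &level, sizeof(level)) == -1);
	}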
/*
 * Use 9k mbufs for the Rx buffers. Default to 0 (use page-size mbufs instead).
 * Using 9k mbufs in low-memory conditions may cause the allocation to take a
 * long time and lead to OS instability, as the system has to search for
 * contiguous physical pages.
 * However, page-size mbufs give slightly lower throughput than 9k mbufs, so
 * if network performance is the priority, 9k mbufs can be used.
 */
int ena_enable_9k_mbufs = 0;
SYSCTL_INT(_hw_ena, OID_AUTO, enable_9k_mbufs, CTLFLAG_RDTUN,
    &ena_enable_9k_mbufs, 0, "Use 9 kB mbufs for Rx descriptors");

void
ena_sysctl_add_nodes(struct ena_adapter *adapter)
{
	ena_sysctl_add_wd(adapter);
	ena_sysctl_add_stats(adapter);
+	ena_sysctl_add_eni_metrics(adapter);
	ena_sysctl_add_tuneables(adapter);
}

static void
ena_sysctl_add_wd(struct ena_adapter *adapter)
{
	device_t dev;

	struct sysctl_ctx_list *ctx;
	struct sysctl_oid *tree;
	struct sysctl_oid_list *child;

	dev = adapter->pdev;

	ctx = device_get_sysctl_ctx(dev);
	tree = device_get_sysctl_tree(dev);
	child = SYSCTL_CHILDREN(tree);

	/* Sysctl calls for Watchdog service */
	SYSCTL_ADD_INT(ctx, child, OID_AUTO, "wd_active", CTLFLAG_RWTUN,
	    &adapter->wd_active, 0, "Watchdog is active");

	SYSCTL_ADD_QUAD(ctx, child, OID_AUTO, "keep_alive_timeout",
	    CTLFLAG_RWTUN, &adapter->keep_alive_timeout,
	    "Timeout for Keep Alive messages");

	SYSCTL_ADD_QUAD(ctx, child, OID_AUTO, "missing_tx_timeout",
	    CTLFLAG_RWTUN, &adapter->missing_tx_timeout,
	    "Timeout for TX completion");

	SYSCTL_ADD_U32(ctx, child, OID_AUTO, "missing_tx_max_queues",
	    CTLFLAG_RWTUN, &adapter->missing_tx_max_queues, 0,
	    "Number of TX queues to check per run");

	SYSCTL_ADD_U32(ctx, child, OID_AUTO, "missing_tx_threshold",
	    CTLFLAG_RWTUN, &adapter->missing_tx_threshold, 0,
	    "Maximum number of timed-out packets");
}

static void
ena_sysctl_add_stats(struct ena_adapter *adapter)
{
	device_t dev;

	struct ena_ring *tx_ring;
	struct ena_ring *rx_ring;

	struct ena_hw_stats *hw_stats;
	struct ena_stats_dev *dev_stats;
	struct ena_stats_tx *tx_stats;
	struct ena_stats_rx *rx_stats;
	struct ena_com_stats_admin *admin_stats;

	struct sysctl_ctx_list *ctx;
	struct sysctl_oid *tree;
	struct sysctl_oid_list *child;

	struct sysctl_oid *queue_node, *tx_node, *rx_node, *hw_node;
	struct sysctl_oid *admin_node;
	struct sysctl_oid_list *queue_list, *tx_list, *rx_list, *hw_list;
	struct sysctl_oid_list *admin_list;

#define QUEUE_NAME_LEN 32
	char namebuf[QUEUE_NAME_LEN];
	int i;

	dev = adapter->pdev;

	ctx = device_get_sysctl_ctx(dev);
	tree = device_get_sysctl_tree(dev);
	child = SYSCTL_CHILDREN(tree);

	tx_ring = adapter->tx_ring;
	rx_ring = adapter->rx_ring;

	hw_stats = &adapter->hw_stats;
	dev_stats = &adapter->dev_stats;
	admin_stats = &adapter->ena_dev->admin_queue.stats;

	SYSCTL_ADD_COUNTER_U64(ctx, child, OID_AUTO, "wd_expired",
	    CTLFLAG_RD, &dev_stats->wd_expired,
	    "Watchdog expiry count");
	SYSCTL_ADD_COUNTER_U64(ctx, child, OID_AUTO, "interface_up",
	    CTLFLAG_RD, &dev_stats->interface_up,
	    "Network interface up count");
	SYSCTL_ADD_COUNTER_U64(ctx, child, OID_AUTO, "interface_down",
	    CTLFLAG_RD, &dev_stats->interface_down,
	    "Network interface down count");
	SYSCTL_ADD_COUNTER_U64(ctx, child, OID_AUTO, "admin_q_pause",
	    CTLFLAG_RD, &dev_stats->admin_q_pause,
	    "Admin queue pauses");

	for (i = 0; i < adapter->num_io_queues; ++i, ++tx_ring, ++rx_ring) {
		snprintf(namebuf, QUEUE_NAME_LEN, "queue%d", i);

		queue_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO,
		    namebuf, CTLFLAG_RD, NULL, "Queue Name");
		queue_list = SYSCTL_CHILDREN(queue_node);

		/* TX specific stats */
		tx_node = SYSCTL_ADD_NODE(ctx, queue_list, OID_AUTO,
		    "tx_ring", CTLFLAG_RD, NULL, "TX ring");
		tx_list = SYSCTL_CHILDREN(tx_node);

		tx_stats = &tx_ring->tx_stats;

		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "count", CTLFLAG_RD,
		    &tx_stats->cnt, "Packets sent");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "bytes", CTLFLAG_RD,
		    &tx_stats->bytes, "Bytes sent");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "prepare_ctx_err", CTLFLAG_RD,
		    &tx_stats->prepare_ctx_err,
		    "TX buffer preparation failures");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "dma_mapping_err", CTLFLAG_RD,
		    &tx_stats->dma_mapping_err, "DMA mapping failures");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "doorbells", CTLFLAG_RD,
		    &tx_stats->doorbells, "Queue doorbells");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "missing_tx_comp", CTLFLAG_RD,
		    &tx_stats->missing_tx_comp, "TX completions missed");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "bad_req_id", CTLFLAG_RD,
		    &tx_stats->bad_req_id, "Bad request id count");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "mbuf_collapses", CTLFLAG_RD,
		    &tx_stats->collapse, "Mbuf collapse count");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "mbuf_collapse_err", CTLFLAG_RD,
		    &tx_stats->collapse_err, "Mbuf collapse failures");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "queue_wakeups", CTLFLAG_RD,
		    &tx_stats->queue_wakeup, "Queue wakeups");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "queue_stops", CTLFLAG_RD,
		    &tx_stats->queue_stop, "Queue stops");
		SYSCTL_ADD_COUNTER_U64(ctx, tx_list, OID_AUTO,
		    "llq_buffer_copy", CTLFLAG_RD,
		    &tx_stats->llq_buffer_copy,
		    "Header copies for LLQ transactions");

		/* RX specific stats */
		rx_node = SYSCTL_ADD_NODE(ctx, queue_list, OID_AUTO,
		    "rx_ring", CTLFLAG_RD, NULL, "RX ring");
		rx_list = SYSCTL_CHILDREN(rx_node);

		rx_stats = &rx_ring->rx_stats;

		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "count", CTLFLAG_RD,
		    &rx_stats->cnt, "Packets received");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "bytes", CTLFLAG_RD,
		    &rx_stats->bytes, "Bytes received");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "refil_partial", CTLFLAG_RD,
		    &rx_stats->refil_partial, "Partially refilled mbufs");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "bad_csum", CTLFLAG_RD,
		    &rx_stats->bad_csum, "Bad RX checksum");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "mbuf_alloc_fail", CTLFLAG_RD,
		    &rx_stats->mbuf_alloc_fail, "Failed mbuf allocs");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "mjum_alloc_fail", CTLFLAG_RD,
		    &rx_stats->mjum_alloc_fail, "Failed jumbo mbuf allocs");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "dma_mapping_err", CTLFLAG_RD,
		    &rx_stats->dma_mapping_err, "DMA mapping errors");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "bad_desc_num", CTLFLAG_RD,
		    &rx_stats->bad_desc_num, "Bad descriptor count");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "bad_req_id", CTLFLAG_RD,
		    &rx_stats->bad_req_id, "Bad request id count");
		SYSCTL_ADD_COUNTER_U64(ctx, rx_list, OID_AUTO,
		    "empty_rx_ring", CTLFLAG_RD,
		    &rx_stats->empty_rx_ring, "RX descriptor depletion count");
	}

	/* Stats read from device */
	hw_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "hw_stats",
	    CTLFLAG_RD, NULL, "Statistics from hardware");
	hw_list = SYSCTL_CHILDREN(hw_node);

	SYSCTL_ADD_COUNTER_U64(ctx, hw_list, OID_AUTO, "rx_packets",
	    CTLFLAG_RD, &hw_stats->rx_packets, "Packets received");
	SYSCTL_ADD_COUNTER_U64(ctx, hw_list, OID_AUTO, "tx_packets",
	    CTLFLAG_RD, &hw_stats->tx_packets, "Packets transmitted");
	SYSCTL_ADD_COUNTER_U64(ctx, hw_list, OID_AUTO, "rx_bytes",
	    CTLFLAG_RD, &hw_stats->rx_bytes, "Bytes received");
	SYSCTL_ADD_COUNTER_U64(ctx, hw_list, OID_AUTO, "tx_bytes",
	    CTLFLAG_RD, &hw_stats->tx_bytes, "Bytes transmitted");
	SYSCTL_ADD_COUNTER_U64(ctx, hw_list, OID_AUTO, "rx_drops",
	    CTLFLAG_RD, &hw_stats->rx_drops, "Receive packet drops");
	SYSCTL_ADD_COUNTER_U64(ctx, hw_list, OID_AUTO, "tx_drops",
	    CTLFLAG_RD, &hw_stats->tx_drops, "Transmit packet drops");

	/* ENA Admin queue stats */
	admin_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "admin_stats",
	    CTLFLAG_RD, NULL, "ENA Admin Queue statistics");
	admin_list = SYSCTL_CHILDREN(admin_node);

	SYSCTL_ADD_U64(ctx, admin_list, OID_AUTO, "aborted_cmd", CTLFLAG_RD,
	    &admin_stats->aborted_cmd, 0, "Aborted commands");
	SYSCTL_ADD_U64(ctx, admin_list, OID_AUTO, "submitted_cmd", CTLFLAG_RD,
	    &admin_stats->submitted_cmd, 0, "Submitted commands");
	SYSCTL_ADD_U64(ctx, admin_list, OID_AUTO, "completed_cmd", CTLFLAG_RD,
	    &admin_stats->completed_cmd, 0, "Completed commands");
	SYSCTL_ADD_U64(ctx, admin_list, OID_AUTO, "out_of_space", CTLFLAG_RD,
	    &admin_stats->out_of_space, 0, "Queue out of space");
	SYSCTL_ADD_U64(ctx, admin_list, OID_AUTO, "no_completion", CTLFLAG_RD,
	    &admin_stats->no_completion, 0, "Commands not completed");
}
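The nodes above hang off the device's sysctl tree, so each queue's counters
surface as dev.ena.<unit>.queue<N>.{tx_ring,rx_ring}.<stat>. A userspace
sketch that reads one of them (unit and queue number are illustrative):

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		uint64_t packets;
		size_t len = sizeof(packets);

		/* counter_u64 stats read back as 64-bit values. */
		if (sysctlbyname("dev.ena.0.queue0.tx_ring.count",
		    &packets, &len, NULL, 0) == -1) {
			perror("sysctlbyname");
			return (1);
		}
		printf("queue0 packets sent: %ju\n", (uintmax_t)packets);
		return (0);
	}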

static void
+ena_sysctl_add_eni_metrics(struct ena_adapter *adapter)
+{
+	device_t dev;
+	struct ena_admin_eni_stats *eni_metrics;
+
+	struct sysctl_ctx_list *ctx;
+	struct sysctl_oid *tree;
+	struct sysctl_oid_list *child;
+
+	struct sysctl_oid *eni_node;
+	struct sysctl_oid_list *eni_list;
+
+	dev = adapter->pdev;
+
+	ctx = device_get_sysctl_ctx(dev);
+	tree = device_get_sysctl_tree(dev);
+	child = SYSCTL_CHILDREN(tree);
+
+	eni_node = SYSCTL_ADD_NODE(ctx, child, OID_AUTO, "eni_metrics",
+	    CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "ENA's ENI metrics");
+	eni_list = SYSCTL_CHILDREN(eni_node);
+
+	eni_metrics = &adapter->eni_metrics;
+
+	SYSCTL_ADD_U64(ctx, eni_list, OID_AUTO, "bw_in_allowance_exceeded",
+	    CTLFLAG_RD, &eni_metrics->bw_in_allowance_exceeded, 0,
+	    "Inbound BW allowance exceeded");
+	SYSCTL_ADD_U64(ctx, eni_list, OID_AUTO, "bw_out_allowance_exceeded",
+	    CTLFLAG_RD, &eni_metrics->bw_out_allowance_exceeded, 0,
+	    "Outbound BW allowance exceeded");
+	SYSCTL_ADD_U64(ctx, eni_list, OID_AUTO, "pps_allowance_exceeded",
+	    CTLFLAG_RD, &eni_metrics->pps_allowance_exceeded, 0,
+	    "PPS allowance exceeded");
+	SYSCTL_ADD_U64(ctx, eni_list, OID_AUTO, "conntrack_allowance_exceeded",
+	    CTLFLAG_RD, &eni_metrics->conntrack_allowance_exceeded, 0,
+	    "Connection tracking allowance exceeded");
+	SYSCTL_ADD_U64(ctx, eni_list, OID_AUTO, "linklocal_allowance_exceeded",
+	    CTLFLAG_RD, &eni_metrics->linklocal_allowance_exceeded, 0,
+	    "Linklocal packet rate allowance exceeded");
+
+	/*
+	 * Tuneable, which determines how often the ENI metrics will be read.
+	 * 0 means the update is turned off. The maximum allowed value is
+	 * limited by ENI_METRICS_MAX_SAMPLE_INTERVAL.
+	 */
+	SYSCTL_ADD_PROC(ctx, eni_list, OID_AUTO, "sample_interval",
+	    CTLTYPE_U16 | CTLFLAG_RW | CTLFLAG_MPSAFE, adapter, 0,
+	    ena_sysctl_eni_metrics_interval, "SU",
+	    "Interval in seconds for updating ENI metrics. 0 turns off the update.");
+}
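Once sample_interval is non-zero the driver refreshes adapter->eni_metrics
periodically, and the counters can be polled under dev.ena.<unit>.eni_metrics.
A sketch of enabling a 10-second interval and reading one allowance counter
(the unit number is illustrative):

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		uint16_t interval = 10;	/* seconds; 0 would disable sampling */
		uint64_t exceeded;
		size_t len = sizeof(exceeded);

		if (sysctlbyname("dev.ena.0.eni_metrics.sample_interval",
		    NULL, NULL, &interval, sizeof(interval)) == -1)
			return (1);
		if (sysctlbyname("dev.ena.0.eni_metrics.pps_allowance_exceeded",
		    &exceeded, &len, NULL, 0) == -1)
			return (1);
		printf("pps allowance exceeded: %ju times\n",
		    (uintmax_t)exceeded);
		return (0);
	}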
+
+static void
 ena_sysctl_add_tuneables(struct ena_adapter *adapter)
{
	device_t dev;

	struct sysctl_ctx_list *ctx;
	struct sysctl_oid *tree;
	struct sysctl_oid_list *child;

	dev = adapter->pdev;

	ctx = device_get_sysctl_ctx(dev);
	tree = device_get_sysctl_tree(dev);
	child = SYSCTL_CHILDREN(tree);

	/* Tuneable number of buffers in the buf-ring (drbr) */
	SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "buf_ring_size",
	    CTLTYPE_U32 | CTLFLAG_RW | CTLFLAG_MPSAFE, adapter, 0,
	    ena_sysctl_buf_ring_size, "I",
	    "Size of the Tx buffer ring (drbr).");

	/* Tuneable size of the Rx ring */
	SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "rx_queue_size",
	    CTLTYPE_U32 | CTLFLAG_RW | CTLFLAG_MPSAFE, adapter, 0,
	    ena_sysctl_rx_queue_size, "I",
	    "Size of the Rx ring. The size should be a power of 2.");

	/* Tuneable number of IO queues */
	SYSCTL_ADD_PROC(ctx, child, OID_AUTO, "io_queues_nb",
	    CTLTYPE_U32 | CTLFLAG_RW | CTLFLAG_MPSAFE, adapter, 0,
	    ena_sysctl_io_queues_nb, "I", "Number of IO queues.");
}

static int
ena_sysctl_buf_ring_size(SYSCTL_HANDLER_ARGS)
{
	struct ena_adapter *adapter = arg1;
	uint32_t val;
	int error;

	val = 0;
	error = sysctl_wire_old_buffer(req, sizeof(val));
	if (error == 0) {
		val = adapter->buf_ring_size;
-		error = sysctl_handle_int(oidp, &val, 0, req);
+		error = sysctl_handle_32(oidp, &val, 0, req);
	}
	if (error != 0 || req->newptr == NULL)
		return (error);

	if (!powerof2(val) || val == 0) {
		device_printf(adapter->pdev,
		    "Requested new Tx buffer ring size (%u) is not a power of 2\n",
		    val);
		return (EINVAL);
	}

	if (val != adapter->buf_ring_size) {
		device_printf(adapter->pdev,
		    "Requested new Tx buffer ring size: %u. Old size: %u\n",
		    val, adapter->buf_ring_size);
		error = ena_update_buf_ring_size(adapter, val);
	} else {
		device_printf(adapter->pdev,
		    "New Tx buffer ring size is the same as already used: %u\n",
		    adapter->buf_ring_size);
	}

	return (error);
}

static int
ena_sysctl_rx_queue_size(SYSCTL_HANDLER_ARGS)
{
	struct ena_adapter *adapter = arg1;
	uint32_t val;
	int error;

	val = 0;
	error = sysctl_wire_old_buffer(req, sizeof(val));
	if (error == 0) {
		val = adapter->requested_rx_ring_size;
		error = sysctl_handle_32(oidp, &val, 0, req);
	}
	if (error != 0 || req->newptr == NULL)
		return (error);

	if (val < ENA_MIN_RING_SIZE || val > adapter->max_rx_ring_size) {
		device_printf(adapter->pdev,
		    "Requested new Rx queue size (%u) is out of range: [%u, %u]\n",
		    val, ENA_MIN_RING_SIZE, adapter->max_rx_ring_size);
		return (EINVAL);
	}

	/* Check if the parameter is a power of 2 */
	if (!powerof2(val)) {
		device_printf(adapter->pdev,
		    "Requested new Rx queue size (%u) is not a power of 2\n",
		    val);
		return (EINVAL);
	}

	if (val != adapter->requested_rx_ring_size) {
		device_printf(adapter->pdev,
		    "Requested new Rx queue size: %u. Old size: %u\n",
		    val, adapter->requested_rx_ring_size);
		error = ena_update_queue_size(adapter,
		    adapter->requested_tx_ring_size, val);
	} else {
		device_printf(adapter->pdev,
		    "New Rx queue size is the same as already used: %u\n",
		    adapter->requested_rx_ring_size);
	}

	return (error);
}
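Both handlers above gate the new value on powerof2() from <sys/param.h>,
which tests whether at most one bit is set. Note that powerof2(0) is true,
which is why the buf_ring_size handler also rejects val == 0 explicitly
(the Rx handler excludes 0 through its range check). The same idiom outside
the kernel:

	#include <stdio.h>

	/* Same definition as the powerof2() macro in <sys/param.h>. */
	#define powerof2(x)	((((x) - 1) & (x)) == 0)

	int
	main(void)
	{
		unsigned vals[] = { 0, 1, 512, 1000, 1024 };
		for (size_t i = 0; i < sizeof(vals) / sizeof(vals[0]); i++)
			printf("powerof2(%u) = %d\n", vals[i],
			    powerof2(vals[i]));
		/* Prints 1 for 0, 1, 512 and 1024 -- 0 passes, hence the
		 * separate val == 0 check in the handler. */
		return (0);
	}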

/*
 * Change the number of effectively used IO queues (adapter->num_io_queues).
 */
static int
ena_sysctl_io_queues_nb(SYSCTL_HANDLER_ARGS)
{
	struct ena_adapter *adapter = arg1;
	uint32_t tmp = 0;
	int error;

	error = sysctl_wire_old_buffer(req, sizeof(tmp));
	if (error == 0) {
		tmp = adapter->num_io_queues;
		error = sysctl_handle_int(oidp, &tmp, 0, req);
	}
	if (error != 0 || req->newptr == NULL)
		return (error);

	if (tmp == 0) {
		device_printf(adapter->pdev,
		    "Requested number of IO queues is zero\n");
		return (EINVAL);
	}

	/*
	 * adapter::max_num_io_queues is the HW capability; available system
	 * resources may impose a tighter limit. Therefore the relation
	 * `adapter::max_num_io_queues >= adapter::msix_vecs` always holds,
	 * while `adapter::msix_vecs` may vary across a device reset
	 * (`ena_destroy_device()` + `ena_restore_device()`).
	 */
	if (tmp > (adapter->msix_vecs - ENA_ADMIN_MSIX_VEC)) {
		device_printf(adapter->pdev,
		    "Requested number of IO queues is higher than maximum "
		    "allowed (%u)\n", adapter->msix_vecs - ENA_ADMIN_MSIX_VEC);
		return (EINVAL);
	}

	if (tmp == adapter->num_io_queues) {
		device_printf(adapter->pdev,
		    "Requested number of IO queues is equal to current value "
		    "(%u)\n", adapter->num_io_queues);
	} else {
		device_printf(adapter->pdev,
		    "Requested new number of IO queues: %u, current value: "
		    "%u\n", tmp, adapter->num_io_queues);

		error = ena_update_io_queue_nb(adapter, tmp);
	}

	return (error);
+}
+
+static int
+ena_sysctl_eni_metrics_interval(SYSCTL_HANDLER_ARGS)
+{
+	struct ena_adapter *adapter = arg1;
+	uint16_t interval;
+	int error;
+
+	error = sysctl_wire_old_buffer(req, sizeof(interval));
+	if (error == 0) {
+		interval = adapter->eni_metrics_sample_interval;
+		error = sysctl_handle_16(oidp, &interval, 0, req);
+	}
+	if (error != 0 || req->newptr == NULL)
+		return (error);
+
+	if (interval > ENI_METRICS_MAX_SAMPLE_INTERVAL) {
+		device_printf(adapter->pdev,
+		    "ENI metrics update interval is out of range - maximum allowed value: %d seconds\n",
+		    ENI_METRICS_MAX_SAMPLE_INTERVAL);
+		return (EINVAL);
+	}
+
+	if (interval == 0) {
+		device_printf(adapter->pdev,
+		    "ENI metrics update is now turned off\n");
+		bzero(&adapter->eni_metrics, sizeof(adapter->eni_metrics));
+	} else {
+		device_printf(adapter->pdev,
+		    "ENI metrics update interval is set to: %"PRIu16" seconds\n",
+		    interval);
+	}
+
+	adapter->eni_metrics_sample_interval = interval;
+
+	return (0);
}
Index: stable/12/sys/dev/ena/ena_sysctl.h
===================================================================
--- stable/12/sys/dev/ena/ena_sysctl.h	(revision 368011)
+++ stable/12/sys/dev/ena/ena_sysctl.h	(revision 368012)
@@ -1,47 +1,47 @@
/*-
- * BSD LICENSE
+ * SPDX-License-Identifier: BSD-2-Clause
 *
 * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * $FreeBSD$
 *
 */

#ifndef ENA_SYSCTL_H
#define ENA_SYSCTL_H

#include
#include

#include "ena.h"

void	ena_sysctl_add_nodes(struct ena_adapter *adapter);

extern int ena_enable_9k_mbufs;
#define ena_mbuf_sz (ena_enable_9k_mbufs ? MJUM9BYTES : MJUMPAGESIZE)

#endif /* !(ENA_SYSCTL_H) */
Index: stable/12/sys/modules/ena/Makefile
===================================================================
--- stable/12/sys/modules/ena/Makefile	(revision 368011)
+++ stable/12/sys/modules/ena/Makefile	(revision 368012)
@@ -1,43 +1,43 @@
#
-# BSD LICENSE
+# SPDX-License-Identifier: BSD-2-Clause
#
-# Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+# Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# $FreeBSD$
#

.PATH:	${SRCTOP}/sys/dev/ena \
	${SRCTOP}/sys/contrib/ena-com

KMOD	= if_ena
SRCS	= ena_com.c ena_eth_com.c
SRCS	+= ena.c ena_sysctl.c ena_datapath.c ena_netmap.c
SRCS	+= device_if.h bus_if.h pci_if.h
SRCS	+= opt_rss.h
CFLAGS	+= -I${SRCTOP}/sys/contrib

.include <bsd.kmod.mk>
Index: stable/12
===================================================================
--- stable/12	(revision 368011)
+++ stable/12	(revision 368012)

Property changes on: stable/12
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
   Merged /head:r367795,367799-367803,367805
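Finally, the ena_mbuf_sz macro in ena_sysctl.h above re-evaluates at each use,
so the hw.ena.enable_9k_mbufs tunable decides which jumbo cluster size the
driver requests for Rx buffers. A standalone sketch of the selection (the
cluster sizes are hardcoded stand-ins for the kernel's MJUM9BYTES and
MJUMPAGESIZE on a 4 kB-page machine; the kernel takes them from its headers):

	#include <stdio.h>

	/* Stand-ins for the kernel's mbuf cluster size constants. */
	#define MJUMPAGESIZE	4096
	#define MJUM9BYTES	(9 * 1024)

	static int ena_enable_9k_mbufs = 0;	/* mirrors the RDTUN default */
	#define ena_mbuf_sz	\
	    (ena_enable_9k_mbufs ? MJUM9BYTES : MJUMPAGESIZE)

	int
	main(void)
	{
		printf("default Rx mbuf size: %d\n", ena_mbuf_sz);    /* 4096 */
		ena_enable_9k_mbufs = 1;
		printf("with 9k mbufs enabled: %d\n", ena_mbuf_sz);  /* 9216 */
		return (0);
	}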