Index: stable/12/share/man/man4/mlx5io.4
===================================================================
--- stable/12/share/man/man4/mlx5io.4	(revision 353238)
+++ stable/12/share/man/man4/mlx5io.4	(revision 353239)
@@ -1,168 +1,191 @@
.\"
.\" Copyright (c) 2018, 2019 Mellanox Technologies
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
-.Dd May 7, 2019
+.Dd October 2, 2019
.Dt mlx5io 4
.Os
.Sh NAME
.Nm mlx5io
-.Nd IOCTL interface to manage Connect-X 4/5 Mellanox network adapters
+.Nd IOCTL interface to manage Connect-X 4/5/6 Mellanox network adapters
.Sh SYNOPSIS
.In dev/mlx5/mlx5io.h
.Sh DESCRIPTION
The
.Nm
-interface is provided for management of the Connect-X 4 and 5 network adapters
+interface is provided for management of the Connect-X 4, 5 and 6 network
+adapters
in aspects not covered by the generic network configuration, mostly related
to the PCIe attachment and the internal workings of the card.
The interface consists of commands, which are passed by means of
.Xr ioctl 2
on a file descriptor opened from the
.Pa /dev/mlx5ctl
device node.
.Pp
The following commands are implemented:
.Bl -tag -width indent
.It Dv MLX5_FWDUMP_FORCE
Take a snapshot of the firmware registers state and store it in a
kernel buffer.
The buffer must be empty; that is, either no dump has been taken so far,
or the existing dump has been cleared with the
.Dv MLX5_FWDUMP_RESET
command for the specified device.
The argument for the command should point to the
.Vt struct mlx5_tool_addr
structure, containing the PCIe bus address of the device.
.Bd -literal
struct mlx5_tool_addr {
	uint32_t domain;
	uint8_t bus;
	uint8_t slot;
	uint8_t func;
};
.Ed
.It Dv MLX5_FWDUMP_RESET
Clear the stored firmware dump, preparing the kernel buffer for the
next dump.
The argument for the command should point to the
.Vt struct mlx5_tool_addr
structure, containing the PCIe bus address of the device.
.It Dv MLX5_FWDUMP_GET
Fetch the stored firmware dump into user memory.
The argument to the command should point to the input/output
.Vt struct mlx5_fwdump_get
structure.
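.Pp
A typical consumer forces a dump and then fetches it with two
.Dv MLX5_FWDUMP_GET
calls, using the size-query convention described below.
The following sketch is illustrative only; error checking is omitted and
the PCIe address is an example:
.Bd -literal
#include <sys/types.h>
#include <sys/ioctl.h>
#include <dev/mlx5/mlx5io.h>
#include <fcntl.h>
#include <stdlib.h>

int
main(void)
{
	/* Example address 0000:03:00.0; substitute the real device. */
	struct mlx5_tool_addr addr = { .domain = 0, .bus = 3,
	    .slot = 0, .func = 0 };
	struct mlx5_fwdump_get fwg = { .devaddr = addr, .buf = NULL };
	int fd = open("/dev/mlx5ctl", O_RDWR);

	ioctl(fd, MLX5_FWDUMP_FORCE, &addr);	/* snapshot the registers */
	ioctl(fd, MLX5_FWDUMP_GET, &fwg);	/* NULL buf: size query */
	fwg.buf = calloc(fwg.reg_filled, sizeof(struct mlx5_fwdump_reg));
	fwg.reg_cnt = fwg.reg_filled;
	ioctl(fd, MLX5_FWDUMP_GET, &fwg);	/* fetch the records */
	return (0);
}
.Ed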
The
.Dv devaddr
field specifies the address of the device, the
.Dv buf
field points to an array of
.Vt struct mlx5_fwdump_reg
records of register values, and the size of the array is specified in the
.Dv reg_cnt
field.
.Bd -literal
struct mlx5_fwdump_get {
	struct mlx5_tool_addr devaddr;
	struct mlx5_fwdump_reg *buf;
	size_t reg_cnt;
	size_t reg_filled; /* out */
};
.Ed
.Pp
On successful return, the
.Dv reg_filled
field reports the number of the
.Dv buf
array elements actually filled with register values.
If
.Dv buf
contains the
.Dv NULL
pointer, no registers are filled, but
.Dv reg_filled
still contains the number of registers required for the complete dump.
.Pp
The
.Vt struct mlx5_fwdump_reg
element contains the address of the register in the field
.Dv addr ,
and its value in the field
.Dv val .
.Bd -literal
struct mlx5_fwdump_reg {
	uint32_t addr;
	uint32_t val;
};
.Ed
.It Dv MLX5_FW_UPDATE
Requests a firmware update (flash) on the adapter specified by
.Dv devaddr ,
using the firmware image in
.Dv MFA2
format.
The argument for the ioctl command is the
.Vt struct mlx5_fw_update
with the following definition.
.Bd -literal
struct mlx5_fw_update {
	struct mlx5_tool_addr devaddr;
	void *img_fw_data;
	size_t img_fw_data_len;
};
.Ed
.Pp
The image address in memory is passed in
.Dv img_fw_data ;
the length of the image is specified in the
.Dv img_fw_data_len
field.
.It Dv MLX5_FW_RESET
Requests a PCIe link-level reset on the device.
The address of the device is specified by the
.Vt struct mlx5_tool_addr
structure, which should be passed as an argument.
+.It Dv MLX5_EEPROM_GET
+Fetch EEPROM information.
+The argument to the command should point to the input/output
+.Vt struct mlx5_eeprom_get
+structure, where the
+.Dv devaddr
+field specifies the address of the device.
+.Bd -literal
+struct mlx5_eeprom_get {
+	struct mlx5_tool_addr devaddr;
+	size_t eeprom_info_page_valid;
+	uint32_t *eeprom_info_buf;
+	size_t eeprom_info_out_len;
+};
+.Ed
+.Pp
+On successful return, the
+.Dv eeprom_info_out_len
+field reports the length of the EEPROM information, the
+.Dv eeprom_info_buf
+field contains the actual EEPROM data, and the
+.Dv eeprom_info_page_valid
+field reports whether the third page is valid.
.El
.Sh FILES
The
.Pa /dev/mlx5ctl
.Xr devfs 5
node is used to pass commands to the driver.
.Sh RETURN VALUES
If successful, the IOCTL returns zero.
Otherwise, -1 is returned and the global variable
.Va errno
is set to indicate the error.
.Sh SEE ALSO
.Xr errno 2 ,
.Xr ioctl 2 ,
.Xr mlx5en 4 ,
.Xr mlx5ib 4 ,
.Xr mlx5tool 8 ,
.Xr pci 9
Index: stable/12/sys/dev/mlx5/diagnostics.h
===================================================================
--- stable/12/sys/dev/mlx5/diagnostics.h	(revision 353238)
+++ stable/12/sys/dev/mlx5/diagnostics.h	(revision 353239)
@@ -1,138 +1,155 @@
/*-
 * Copyright (c) 2013-2017, Mellanox Technologies, Ltd. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
* * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef MLX5_CORE_DIAGNOSTICS_H #define MLX5_CORE_DIAGNOSTICS_H #define MLX5_CORE_DIAGNOSTICS_NUM(n, s, t) n #define MLX5_CORE_DIAGNOSTICS_STRUCT(n, s, t) s, #define MLX5_CORE_DIAGNOSTICS_ENTRY(n, s, t) { #s, (t) }, +static MALLOC_DEFINE(M_MLX5_EEPROM, "MLX5EEPROM", "MLX5 EEPROM information"); + struct mlx5_core_diagnostics_entry { const char *desc; u16 counter_id; }; #define MLX5_CORE_PCI_DIAGNOSTICS(m) \ m(+1, pxd_ready_bp, 0x0401) \ m(+1, pci_write_bp, 0x0402) \ m(+1, pci_read_bp, 0x0403) \ m(+1, pci_read_stuck_no_completion_buffer, 0x0404) \ m(+1, max_pci_bw, 0x0405) \ m(+1, used_pci_bw, 0x0406) \ m(+1, rx_pci_errors, 0) \ m(+1, tx_pci_errors, 0) \ m(+1, tx_pci_correctable_errors, 0) \ m(+1, tx_pci_non_fatal_errors, 0) \ m(+1, tx_pci_fatal_errors, 0) #define MLX5_CORE_PCI_DIAGNOSTICS_NUM \ (0 MLX5_CORE_PCI_DIAGNOSTICS(MLX5_CORE_DIAGNOSTICS_NUM)) union mlx5_core_pci_diagnostics { u64 array[MLX5_CORE_PCI_DIAGNOSTICS_NUM]; struct { u64 MLX5_CORE_PCI_DIAGNOSTICS( MLX5_CORE_DIAGNOSTICS_STRUCT) dummy[0]; } counter; }; extern const struct mlx5_core_diagnostics_entry mlx5_core_pci_diagnostics_table[MLX5_CORE_PCI_DIAGNOSTICS_NUM]; #define MLX5_CORE_GENERAL_DIAGNOSTICS(m) \ m(+1, l0_mtt_miss, 0x0801) \ m(+1, l0_mtt_hit, 0x0802) \ m(+1, l1_mtt_miss, 0x0803) \ m(+1, l1_mtt_hit, 0x0804) \ m(+1, l0_mpt_miss, 0x0805) \ m(+1, l0_mpt_hit, 0x0806) \ m(+1, l1_mpt_miss, 0x0807) \ m(+1, l1_mpt_hit, 0x0808) \ m(+1, rxb_no_slow_path_credits, 0x0c01) \ m(+1, rxb_no_fast_path_credits, 0x0c02) \ m(+1, rxb_rxt_no_slow_path_cred_perf_count, 0x0c03) \ m(+1, rxb_rxt_no_fast_path_cred_perf_count, 0x0c04) \ m(+1, rxt_ctrl_perf_slice_load_slow, 0x1001) \ m(+1, rxt_ctrl_perf_slice_load_fast, 0x1002) \ m(+1, rxt_steering_perf_count_steering0_rse_work_rate, 0x1003) \ m(+1, rxt_steering_perf_count_steering1_rse_work_rate, 0x1004) \ m(+1, perf_count_tpt_credit, 0x1401) \ m(+1, perf_wb_miss, 0x1402) \ m(+1, perf_wb_hit, 0x1403) \ m(+1, rxw_perf_rx_l1_slow_miss_ldb, 0x1404) \ m(+1, rxw_perf_rx_l1_slow_hit_ldb, 0x1405) \ m(+1, rxw_perf_rx_l1_fast_miss_ldb, 0x1406) \ m(+1, rxw_perf_rx_l1_fast_hit_ldb, 0x1407) \ m(+1, rxw_perf_l2_cache_read_miss_ldb, 0x1408) \ m(+1, rxw_perf_l2_cache_read_hit_ldb, 0x1409) \ m(+1, rxw_perf_rx_l1_slow_miss_reqsl, 0x140a) \ m(+1, rxw_perf_rx_l1_slow_hit_reqsl, 0x140b) \ m(+1, rxw_perf_rx_l1_fast_miss_reqsl, 0x140c) \ m(+1, rxw_perf_rx_l1_fast_hit_reqsl, 0x140d) \ m(+1, rxw_perf_l2_cache_read_miss_reqsl, 0x140e) \ m(+1, rxw_perf_l2_cache_read_hit_reqsl, 0x140f) \ m(+1, rxs_no_pxt_credits, 0x1801) \ m(+1, rxc_eq_all_slices_busy, 0x1c01) \ m(+1, rxc_cq_all_slices_busy, 0x1c02) \ m(+1, rxc_msix_all_slices_busy, 0x1c03) \ m(+1, sxw_qp_done_due_to_vl_limited, 0x2001) \ m(+1, sxw_qp_done_due_to_desched, 0x2002) \ m(+1, sxw_qp_done_due_to_work_done, 
0x2003) \ m(+1, sxw_qp_done_due_to_limited, 0x2004) \ m(+1, sxw_qp_done_due_to_e2e_credits, 0x2005) \ m(+1, sxw_packet_send_sxw2sxp_go_vld, 0x2006) \ m(+1, sxw_perf_count_steering_hit, 0x2007) \ m(+1, sxw_perf_count_steering_miss, 0x2008) \ m(+1, sxw_perf_count_steering_rse_0, 0x2009) \ m(+1, sxd_no_sched_credits, 0x2401) \ m(+1, sxd_no_slow_path_sched_credits, 0x2402) \ m(+1, tpt_indirect_mem_key, 0x2801) #define MLX5_CORE_GENERAL_DIAGNOSTICS_NUM \ (0 MLX5_CORE_GENERAL_DIAGNOSTICS(MLX5_CORE_DIAGNOSTICS_NUM)) union mlx5_core_general_diagnostics { u64 array[MLX5_CORE_GENERAL_DIAGNOSTICS_NUM]; struct { u64 MLX5_CORE_GENERAL_DIAGNOSTICS( MLX5_CORE_DIAGNOSTICS_STRUCT) dummy[0]; } counter; }; extern const struct mlx5_core_diagnostics_entry mlx5_core_general_diagnostics_table[MLX5_CORE_GENERAL_DIAGNOSTICS_NUM]; +struct mlx5_eeprom { + int lock_bit; + int i2c_addr; + int page_num; + int device_addr; + int module_num; + int len; + int type; + int page_valid; + u32 *data; +}; + /* function prototypes */ int mlx5_core_set_diagnostics_full(struct mlx5_core_dev *mdev, u8 enable_pci, u8 enable_general); int mlx5_core_get_diagnostics_full(struct mlx5_core_dev *mdev, union mlx5_core_pci_diagnostics *ppci, union mlx5_core_general_diagnostics *pgen); int mlx5_core_supports_diagnostics(struct mlx5_core_dev *mdev, u16 counter_id); +int mlx5_read_eeprom(struct mlx5_core_dev *dev, struct mlx5_eeprom *eeprom); +int mlx5_get_eeprom_info(struct mlx5_core_dev *dev, struct mlx5_eeprom *eeprom); +int mlx5_get_eeprom(struct mlx5_core_dev *dev, struct mlx5_eeprom *ee); #endif /* MLX5_CORE_DIAGNOSTICS_H */ Index: stable/12/sys/dev/mlx5/mlx5_core/mlx5_diagnostics.c =================================================================== --- stable/12/sys/dev/mlx5/mlx5_core/mlx5_diagnostics.c (revision 353238) +++ stable/12/sys/dev/mlx5/mlx5_core/mlx5_diagnostics.c (revision 353239) @@ -1,286 +1,442 @@ /*- * Copyright (c) 2013-2017, Mellanox Technologies, Ltd. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #include +#include #include +#include +#include const struct mlx5_core_diagnostics_entry mlx5_core_pci_diagnostics_table[ MLX5_CORE_PCI_DIAGNOSTICS_NUM] = { MLX5_CORE_PCI_DIAGNOSTICS(MLX5_CORE_DIAGNOSTICS_ENTRY) }; const struct mlx5_core_diagnostics_entry mlx5_core_general_diagnostics_table[ MLX5_CORE_GENERAL_DIAGNOSTICS_NUM] = { MLX5_CORE_GENERAL_DIAGNOSTICS(MLX5_CORE_DIAGNOSTICS_ENTRY) }; static int mlx5_core_get_index_of_diag_counter( const struct mlx5_core_diagnostics_entry *entry, int size, u16 counter_id) { int x; /* check for invalid counter ID */ if (counter_id == 0) return -1; /* lookup counter ID in table */ for (x = 0; x != size; x++) { if (entry[x].counter_id == counter_id) return x; } return -1; } static void mlx5_core_put_diag_counter( const struct mlx5_core_diagnostics_entry *entry, u64 *array, int size, u16 counter_id, u64 value) { int x; /* check for invalid counter ID */ if (counter_id == 0) return; /* lookup counter ID in table */ for (x = 0; x != size; x++) { if (entry[x].counter_id == counter_id) { array[x] = value; break; } } } int mlx5_core_set_diagnostics_full(struct mlx5_core_dev *dev, u8 enable_pci, u8 enable_general) { void *diag_params_ctx; void *in; int numcounters; int inlen; int err; int x; int y; if (MLX5_CAP_GEN(dev, debug) == 0) return 0; numcounters = MLX5_CAP_GEN(dev, num_of_diagnostic_counters); if (numcounters == 0) return 0; inlen = MLX5_ST_SZ_BYTES(set_diagnostic_params_in) + MLX5_ST_SZ_BYTES(diagnostic_counter) * numcounters; in = mlx5_vzalloc(inlen); if (in == NULL) return -ENOMEM; diag_params_ctx = MLX5_ADDR_OF(set_diagnostic_params_in, in, diagnostic_params_ctx); MLX5_SET(diagnostic_params_context, diag_params_ctx, enable, enable_pci || enable_general); MLX5_SET(diagnostic_params_context, diag_params_ctx, single, 1); MLX5_SET(diagnostic_params_context, diag_params_ctx, on_demand, 1); /* collect the counters we want to enable */ for (x = y = 0; x != numcounters; x++) { u16 counter_id = MLX5_CAP_DEBUG(dev, diagnostic_counter[x].counter_id); int index = -1; if (index < 0 && enable_pci != 0) { /* check if counter ID exists in local table */ index = mlx5_core_get_index_of_diag_counter( mlx5_core_pci_diagnostics_table, MLX5_CORE_PCI_DIAGNOSTICS_NUM, counter_id); } if (index < 0 && enable_general != 0) { /* check if counter ID exists in local table */ index = mlx5_core_get_index_of_diag_counter( mlx5_core_general_diagnostics_table, MLX5_CORE_GENERAL_DIAGNOSTICS_NUM, counter_id); } if (index < 0) continue; MLX5_SET(diagnostic_params_context, diag_params_ctx, counter_id[y].counter_id, counter_id); y++; } /* recompute input length */ inlen = MLX5_ST_SZ_BYTES(set_diagnostic_params_in) + MLX5_ST_SZ_BYTES(diagnostic_counter) * y; /* set number of counters */ MLX5_SET(diagnostic_params_context, diag_params_ctx, num_of_counters, y); /* execute firmware command */ err = mlx5_set_diagnostic_params(dev, in, inlen); kvfree(in); return err; } int mlx5_core_get_diagnostics_full(struct mlx5_core_dev *dev, union mlx5_core_pci_diagnostics *pdiag, union mlx5_core_general_diagnostics *pgen) { void *out; void *in; int numcounters; int outlen; int inlen; int err; int x; if (MLX5_CAP_GEN(dev, debug) == 0) return 0; numcounters = MLX5_CAP_GEN(dev, num_of_diagnostic_counters); if (numcounters == 0) return 0; outlen = MLX5_ST_SZ_BYTES(query_diagnostic_counters_out) + MLX5_ST_SZ_BYTES(diagnostic_counter) * numcounters; out = mlx5_vzalloc(outlen); if (out == NULL) return -ENOMEM; err = mlx5_query_diagnostic_counters(dev, 1, 0, out, outlen); if (err == 0) { for (x 
= 0; x != numcounters; x++) { u16 counter_id = MLX5_GET( query_diagnostic_counters_out, out, diag_counter[x].counter_id); u64 counter_value = MLX5_GET64( query_diagnostic_counters_out, out, diag_counter[x].counter_value_h); if (pdiag != NULL) { mlx5_core_put_diag_counter( mlx5_core_pci_diagnostics_table, pdiag->array, MLX5_CORE_PCI_DIAGNOSTICS_NUM, counter_id, counter_value); } if (pgen != NULL) { mlx5_core_put_diag_counter( mlx5_core_general_diagnostics_table, pgen->array, MLX5_CORE_GENERAL_DIAGNOSTICS_NUM, counter_id, counter_value); } } } kvfree(out); if (pdiag != NULL) { inlen = MLX5_ST_SZ_BYTES(mpcnt_reg); outlen = MLX5_ST_SZ_BYTES(mpcnt_reg); in = mlx5_vzalloc(inlen); if (in == NULL) return -ENOMEM; out = mlx5_vzalloc(outlen); if (out == NULL) { kvfree(in); return -ENOMEM; } MLX5_SET(mpcnt_reg, in, grp, MLX5_PCIE_PERFORMANCE_COUNTERS_GROUP); err = mlx5_core_access_reg(dev, in, inlen, out, outlen, MLX5_REG_MPCNT, 0, 0); if (err == 0) { void *pcounters = MLX5_ADDR_OF(mpcnt_reg, out, counter_set.pcie_perf_counters); pdiag->counter.rx_pci_errors = MLX5_GET(pcie_perf_counters, pcounters, rx_errors); pdiag->counter.tx_pci_errors = MLX5_GET(pcie_perf_counters, pcounters, tx_errors); } MLX5_SET(mpcnt_reg, in, grp, MLX5_PCIE_TIMERS_AND_STATES_COUNTERS_GROUP); err = mlx5_core_access_reg(dev, in, inlen, out, outlen, MLX5_REG_MPCNT, 0, 0); if (err == 0) { void *pcounters = MLX5_ADDR_OF(mpcnt_reg, out, counter_set.pcie_timers_states); pdiag->counter.tx_pci_non_fatal_errors = MLX5_GET(pcie_timers_states, pcounters, non_fatal_err_msg_sent); pdiag->counter.tx_pci_fatal_errors = MLX5_GET(pcie_timers_states, pcounters, fatal_err_msg_sent); } kvfree(in); kvfree(out); } return 0; } int mlx5_core_supports_diagnostics(struct mlx5_core_dev *dev, u16 counter_id) { int numcounters; int x; if (MLX5_CAP_GEN(dev, debug) == 0) return 0; /* check for any counter */ if (counter_id == 0) return 1; numcounters = MLX5_CAP_GEN(dev, num_of_diagnostic_counters); /* check if counter ID exists in debug capability */ for (x = 0; x != numcounters; x++) { if (MLX5_CAP_DEBUG(dev, diagnostic_counter[x].counter_id) == counter_id) return 1; } return 0; /* not supported counter */ } + +/* + * Read the first three bytes of the eeprom in order to get the needed info + * for the whole reading. 
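+ * (As the mask tests below assume, the three bytes come back packed into
+ * the low bits of a single 32-bit word in ascending address order:
+ * identifier in bits 7:0, revision in bits 15:8, status in bits 23:16.)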
+ * Byte 0 - Identifier byte + * Byte 1 - Revision byte + * Byte 2 - Status byte + */ +int +mlx5_get_eeprom_info(struct mlx5_core_dev *dev, struct mlx5_eeprom *eeprom) +{ + u32 data = 0; + int size_read = 0; + int ret; + + ret = mlx5_query_module_num(dev, &eeprom->module_num); + if (ret) { + mlx5_core_err(dev, "Failed query module error=%d\n", ret); + return (-ret); + } + + /* Read the first three bytes to get Identifier, Revision and Status */ + ret = mlx5_query_eeprom(dev, eeprom->i2c_addr, eeprom->page_num, + eeprom->device_addr, MLX5_EEPROM_INFO_BYTES, eeprom->module_num, &data, + &size_read); + if (ret) { + mlx5_core_err(dev, + "Failed query EEPROM module error=0x%x\n", ret); + return (-ret); + } + + switch (data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK) { + case SFF_8024_ID_QSFP: + eeprom->type = MLX5_ETH_MODULE_SFF_8436; + eeprom->len = MLX5_ETH_MODULE_SFF_8436_LEN; + break; + case SFF_8024_ID_QSFPPLUS: + case SFF_8024_ID_QSFP28: + if ((data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK) == SFF_8024_ID_QSFP28 || + ((data & MLX5_EEPROM_REVISION_ID_BYTE_MASK) >> 8) >= 0x3) { + eeprom->type = MLX5_ETH_MODULE_SFF_8636; + eeprom->len = MLX5_ETH_MODULE_SFF_8636_LEN; + } else { + eeprom->type = MLX5_ETH_MODULE_SFF_8436; + eeprom->len = MLX5_ETH_MODULE_SFF_8436_LEN; + } + if ((data & MLX5_EEPROM_PAGE_3_VALID_BIT_MASK) == 0) + eeprom->page_valid = 1; + break; + case SFF_8024_ID_SFP: + eeprom->type = MLX5_ETH_MODULE_SFF_8472; + eeprom->len = MLX5_ETH_MODULE_SFF_8472_LEN; + break; + default: + mlx5_core_err(dev, "Not recognized cable type = 0x%x(%s)\n", + data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK, + sff_8024_id[data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK]); + return (EINVAL); + } + return (0); +} + +/* Read both low and high pages of the eeprom */ +int +mlx5_get_eeprom(struct mlx5_core_dev *dev, struct mlx5_eeprom *ee) +{ + int size_read = 0; + int ret; + + if (ee->len == 0) + return (EINVAL); + + /* Read low page of the eeprom */ + while (ee->device_addr < ee->len) { + ret = mlx5_query_eeprom(dev, ee->i2c_addr, ee->page_num, ee->device_addr, + ee->len - ee->device_addr, ee->module_num, + ee->data + (ee->device_addr / 4), &size_read); + if (ret) { + mlx5_core_err(dev, + "Failed reading EEPROM, error = 0x%02x\n", ret); + return (-ret); + } + ee->device_addr += size_read; + } + + /* Read high page of the eeprom */ + if (ee->page_valid == 1) { + ee->device_addr = MLX5_EEPROM_HIGH_PAGE_OFFSET; + ee->page_num = MLX5_EEPROM_HIGH_PAGE; + size_read = 0; + while (ee->device_addr < MLX5_EEPROM_PAGE_LENGTH) { + ret = mlx5_query_eeprom(dev, ee->i2c_addr, ee->page_num, + ee->device_addr, MLX5_EEPROM_PAGE_LENGTH - ee->device_addr, + ee->module_num, ee->data + (ee->len / 4) + + ((ee->device_addr - MLX5_EEPROM_HIGH_PAGE_OFFSET) / 4), + &size_read); + if (ret) { + mlx5_core_err(dev, + "Failed reading EEPROM, error = 0x%02x\n", + ret); + return (-ret); + } + ee->device_addr += size_read; + } + } + return (0); +} + +/* + * Read cable EEPROM module information by first inspecting the first + * three bytes to get the initial information for a whole reading. + * Information will be printed to dmesg. 
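+ * (Summary of the flow below: mlx5_get_eeprom_info() sizes the module,
+ * a buffer of eeprom->len plus one spare MLX5_EEPROM_PAGE_LENGTH page
+ * for the optional page 0x03 is allocated, mlx5_get_eeprom() fills it,
+ * and the buffer is freed again before returning.)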
+ */ +int +mlx5_read_eeprom(struct mlx5_core_dev *dev, struct mlx5_eeprom *eeprom) +{ + int error; + + eeprom->i2c_addr = MLX5_I2C_ADDR_LOW; + eeprom->device_addr = 0; + eeprom->page_num = MLX5_EEPROM_LOW_PAGE; + eeprom->page_valid = 0; + + /* Read three first bytes to get important info */ + error = mlx5_get_eeprom_info(dev, eeprom); + if (error) { + mlx5_core_err(dev, + "Failed reading EEPROM initial information\n"); + return (error); + } + /* + * Allocate needed length buffer and additional space for + * page 0x03 + */ + eeprom->data = malloc(eeprom->len + MLX5_EEPROM_PAGE_LENGTH, + M_MLX5_EEPROM, M_WAITOK | M_ZERO); + + /* Read the whole eeprom information */ + error = mlx5_get_eeprom(dev, eeprom); + if (error) { + mlx5_core_err(dev, "Failed reading EEPROM\n"); + error = 0; + /* + * Continue printing partial information in case of + * an error + */ + } + free(eeprom->data, M_MLX5_EEPROM); + + return (error); +} + + Index: stable/12/sys/dev/mlx5/mlx5_core/mlx5_fwdump.c =================================================================== --- stable/12/sys/dev/mlx5/mlx5_core/mlx5_fwdump.c (revision 353238) +++ stable/12/sys/dev/mlx5/mlx5_core/mlx5_fwdump.c (revision 353239) @@ -1,432 +1,495 @@ /*- * Copyright (c) 2018, 2019 Mellanox Technologies, Ltd. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include +#include #include #include +#include static MALLOC_DEFINE(M_MLX5_DUMP, "MLX5DUMP", "MLX5 Firmware dump"); static unsigned mlx5_fwdump_getsize(const struct mlx5_crspace_regmap *rege) { const struct mlx5_crspace_regmap *r; unsigned sz; for (sz = 0, r = rege; r->cnt != 0; r++) sz += r->cnt; return (sz); } static void mlx5_fwdump_destroy_dd(struct mlx5_core_dev *mdev) { mtx_assert(&mdev->dump_lock, MA_OWNED); free(mdev->dump_data, M_MLX5_DUMP); mdev->dump_data = NULL; } void mlx5_fwdump_prep(struct mlx5_core_dev *mdev) { device_t dev; int error, vsc_addr; unsigned i, sz; u32 addr, in, out, next_addr; mdev->dump_data = NULL; error = mlx5_vsc_find_cap(mdev); if (error != 0) { /* Inability to create a firmware dump is not fatal. 
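	 * The driver continues without the dump facility: dump_data stays
	 * NULL, MLX5_FWDUMP_FORCE becomes a no-op, and MLX5_FWDUMP_GET or
	 * MLX5_FWDUMP_RESET will return ENOENT.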
*/ mlx5_core_warn(mdev, "mlx5_fwdump_prep failed %d\n", error); return; } error = mlx5_vsc_lock(mdev); if (error != 0) return; error = mlx5_vsc_set_space(mdev, MLX5_VSC_DOMAIN_SCAN_CRSPACE); if (error != 0) { mlx5_core_warn(mdev, "VSC scan space is not supported\n"); goto unlock_vsc; } dev = mdev->pdev->dev.bsddev; vsc_addr = mdev->vsc_addr; if (vsc_addr == 0) { mlx5_core_warn(mdev, "Cannot read vsc, no address\n"); goto unlock_vsc; } in = 0; for (sz = 1, addr = 0;;) { MLX5_VSC_SET(vsc_addr, &in, address, addr); pci_write_config(dev, vsc_addr + MLX5_VSC_ADDR_OFFSET, in, 4); error = mlx5_vsc_wait_on_flag(mdev, 1); if (error != 0) { mlx5_core_warn(mdev, "Failed waiting for read complete flag, error %d\n", error); goto unlock_vsc; } pci_read_config(dev, vsc_addr + MLX5_VSC_DATA_OFFSET, 4); out = pci_read_config(dev, vsc_addr + MLX5_VSC_ADDR_OFFSET, 4); next_addr = MLX5_VSC_GET(vsc_addr, &out, address); if (next_addr == 0 || next_addr == addr) break; if (next_addr != addr + 4) sz++; addr = next_addr; } mdev->dump_rege = malloc(sz * sizeof(struct mlx5_crspace_regmap), M_MLX5_DUMP, M_WAITOK | M_ZERO); for (i = 0, addr = 0;;) { MPASS(i < sz); mdev->dump_rege[i].cnt++; MLX5_VSC_SET(vsc_addr, &in, address, addr); pci_write_config(dev, vsc_addr + MLX5_VSC_ADDR_OFFSET, in, 4); error = mlx5_vsc_wait_on_flag(mdev, 1); if (error != 0) { mlx5_core_warn(mdev, "Failed waiting for read complete flag, error %d\n", error); free(mdev->dump_rege, M_MLX5_DUMP); mdev->dump_rege = NULL; goto unlock_vsc; } pci_read_config(dev, vsc_addr + MLX5_VSC_DATA_OFFSET, 4); out = pci_read_config(dev, vsc_addr + MLX5_VSC_ADDR_OFFSET, 4); next_addr = MLX5_VSC_GET(vsc_addr, &out, address); if (next_addr == 0 || next_addr == addr) break; if (next_addr != addr + 4) mdev->dump_rege[++i].addr = next_addr; addr = next_addr; } KASSERT(i + 1 == sz, ("inconsistent hw crspace reads: sz %u i %u addr %#lx", sz, i, (unsigned long)addr)); mdev->dump_size = mlx5_fwdump_getsize(mdev->dump_rege); mdev->dump_data = malloc(mdev->dump_size * sizeof(uint32_t), M_MLX5_DUMP, M_WAITOK | M_ZERO); mdev->dump_valid = false; mdev->dump_copyout = false; unlock_vsc: mlx5_vsc_unlock(mdev); } void mlx5_fwdump(struct mlx5_core_dev *mdev) { const struct mlx5_crspace_regmap *r; uint32_t i, ri; int error; mlx5_core_info(mdev, "Issuing FW dump\n"); mtx_lock(&mdev->dump_lock); if (mdev->dump_data == NULL) goto failed; if (mdev->dump_valid) { /* only one dump */ mlx5_core_warn(mdev, "Only one FW dump can be captured aborting FW dump\n"); goto failed; } /* mlx5_vsc already warns, be silent. 
*/ error = mlx5_vsc_lock(mdev); if (error != 0) goto failed; error = mlx5_vsc_set_space(mdev, MLX5_VSC_DOMAIN_PROTECTED_CRSPACE); if (error != 0) goto unlock_vsc; for (i = 0, r = mdev->dump_rege; r->cnt != 0; r++) { for (ri = 0; ri < r->cnt; ri++) { error = mlx5_vsc_read(mdev, r->addr + ri * 4, &mdev->dump_data[i]); if (error != 0) goto unlock_vsc; i++; } } mdev->dump_valid = true; unlock_vsc: mlx5_vsc_unlock(mdev); failed: mtx_unlock(&mdev->dump_lock); } void mlx5_fwdump_clean(struct mlx5_core_dev *mdev) { mtx_lock(&mdev->dump_lock); while (mdev->dump_copyout) msleep(&mdev->dump_copyout, &mdev->dump_lock, 0, "mlx5fwc", 0); mlx5_fwdump_destroy_dd(mdev); mtx_unlock(&mdev->dump_lock); free(mdev->dump_rege, M_MLX5_DUMP); } static int mlx5_fwdump_reset(struct mlx5_core_dev *mdev) { int error; error = 0; mtx_lock(&mdev->dump_lock); if (mdev->dump_data != NULL) { while (mdev->dump_copyout) { msleep(&mdev->dump_copyout, &mdev->dump_lock, 0, "mlx5fwr", 0); } mdev->dump_valid = false; } else { error = ENOENT; } mtx_unlock(&mdev->dump_lock); return (error); } static int mlx5_dbsf_to_core(const struct mlx5_tool_addr *devaddr, struct mlx5_core_dev **mdev) { device_t dev; struct pci_dev *pdev; dev = pci_find_dbsf(devaddr->domain, devaddr->bus, devaddr->slot, devaddr->func); if (dev == NULL) return (ENOENT); if (device_get_devclass(dev) != mlx5_core_driver.bsdclass) return (EINVAL); pdev = device_get_softc(dev); *mdev = pci_get_drvdata(pdev); if (*mdev == NULL) return (ENOENT); return (0); } static int mlx5_fwdump_copyout(struct mlx5_core_dev *mdev, struct mlx5_fwdump_get *fwg) { const struct mlx5_crspace_regmap *r; struct mlx5_fwdump_reg rv, *urv; uint32_t i, ri; int error; mtx_lock(&mdev->dump_lock); if (mdev->dump_data == NULL) { mtx_unlock(&mdev->dump_lock); return (ENOENT); } if (fwg->buf == NULL) { fwg->reg_filled = mdev->dump_size; mtx_unlock(&mdev->dump_lock); return (0); } if (!mdev->dump_valid) { mtx_unlock(&mdev->dump_lock); return (ENOENT); } mdev->dump_copyout = true; mtx_unlock(&mdev->dump_lock); urv = fwg->buf; for (i = 0, r = mdev->dump_rege; r->cnt != 0; r++) { for (ri = 0; ri < r->cnt; ri++) { if (i >= fwg->reg_cnt) goto out; rv.addr = r->addr + ri * 4; rv.val = mdev->dump_data[i]; error = copyout(&rv, urv, sizeof(rv)); if (error != 0) return (error); urv++; i++; } } out: fwg->reg_filled = i; mtx_lock(&mdev->dump_lock); mdev->dump_copyout = false; wakeup(&mdev->dump_copyout); mtx_unlock(&mdev->dump_lock); return (0); } static int mlx5_fw_reset(struct mlx5_core_dev *mdev) { device_t dev, bus; int error; error = -mlx5_set_mfrl_reg(mdev, MLX5_FRL_LEVEL3); if (error == 0) { dev = mdev->pdev->dev.bsddev; mtx_lock(&Giant); bus = device_get_parent(dev); error = BUS_RESET_CHILD(device_get_parent(bus), bus, DEVF_RESET_DETACH); mtx_unlock(&Giant); } return (error); } static int +mlx5_eeprom_copyout(struct mlx5_core_dev *dev, struct mlx5_eeprom_get *eeprom_info) +{ + struct mlx5_eeprom eeprom; + int error; + + eeprom.i2c_addr = MLX5_I2C_ADDR_LOW; + eeprom.device_addr = 0; + eeprom.page_num = MLX5_EEPROM_LOW_PAGE; + eeprom.page_valid = 0; + + /* Read three first bytes to get important info */ + error = mlx5_get_eeprom_info(dev, &eeprom); + if (error != 0) { + mlx5_core_err(dev, + "Failed reading EEPROM initial information\n"); + return (error); + } + eeprom_info->eeprom_info_page_valid = eeprom.page_valid; + eeprom_info->eeprom_info_out_len = eeprom.len; + + if (eeprom_info->eeprom_info_buf == NULL) + return (0); + /* + * Allocate needed length buffer and additional space for + * page 0x03 + */ + 
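+	/*
+	 * Illustrative expected userspace pattern: issue MLX5_EEPROM_GET
+	 * once with eeprom_info_buf == NULL to learn eeprom_info_out_len,
+	 * allocate that many bytes, then repeat the ioctl to have the
+	 * data copied out below.
+	 */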
eeprom.data = malloc(eeprom.len + MLX5_EEPROM_PAGE_LENGTH, + M_MLX5_EEPROM, M_WAITOK | M_ZERO); + + /* Read the whole eeprom information */ + error = mlx5_get_eeprom(dev, &eeprom); + if (error != 0) { + mlx5_core_err(dev, "Failed reading EEPROM error = %d\n", + error); + error = 0; + /* + * Continue printing partial information in case of + * an error + */ + } + error = copyout(eeprom.data, eeprom_info->eeprom_info_buf, + eeprom.len); + free(eeprom.data, M_MLX5_EEPROM); + + return (error); +} + +static int mlx5_ctl_ioctl(struct cdev *dev, u_long cmd, caddr_t data, int fflag, struct thread *td) { struct mlx5_core_dev *mdev; struct mlx5_fwdump_get *fwg; struct mlx5_tool_addr *devaddr; struct mlx5_fw_update *fu; struct firmware fake_fw; + struct mlx5_eeprom_get *eeprom_info; int error; error = 0; switch (cmd) { case MLX5_FWDUMP_GET: if ((fflag & FREAD) == 0) { error = EBADF; break; } fwg = (struct mlx5_fwdump_get *)data; devaddr = &fwg->devaddr; error = mlx5_dbsf_to_core(devaddr, &mdev); if (error != 0) break; error = mlx5_fwdump_copyout(mdev, fwg); break; case MLX5_FWDUMP_RESET: if ((fflag & FWRITE) == 0) { error = EBADF; break; } devaddr = (struct mlx5_tool_addr *)data; error = mlx5_dbsf_to_core(devaddr, &mdev); if (error == 0) error = mlx5_fwdump_reset(mdev); break; case MLX5_FWDUMP_FORCE: if ((fflag & FWRITE) == 0) { error = EBADF; break; } devaddr = (struct mlx5_tool_addr *)data; error = mlx5_dbsf_to_core(devaddr, &mdev); if (error != 0) break; mlx5_fwdump(mdev); break; case MLX5_FW_UPDATE: if ((fflag & FWRITE) == 0) { error = EBADF; break; } fu = (struct mlx5_fw_update *)data; if (fu->img_fw_data_len > 10 * 1024 * 1024) { error = EINVAL; break; } devaddr = &fu->devaddr; error = mlx5_dbsf_to_core(devaddr, &mdev); if (error != 0) break; bzero(&fake_fw, sizeof(fake_fw)); fake_fw.name = "umlx_fw_up"; fake_fw.datasize = fu->img_fw_data_len; fake_fw.version = 1; fake_fw.data = (void *)kmem_malloc(fu->img_fw_data_len, M_WAITOK); if (fake_fw.data == NULL) { error = ENOMEM; break; } error = copyin(fu->img_fw_data, __DECONST(void *, fake_fw.data), fu->img_fw_data_len); if (error == 0) error = -mlx5_firmware_flash(mdev, &fake_fw); kmem_free((vm_offset_t)fake_fw.data, fu->img_fw_data_len); break; case MLX5_FW_RESET: if ((fflag & FWRITE) == 0) { error = EBADF; break; } devaddr = (struct mlx5_tool_addr *)data; error = mlx5_dbsf_to_core(devaddr, &mdev); if (error != 0) break; error = mlx5_fw_reset(mdev); + break; + case MLX5_EEPROM_GET: + if ((fflag & FREAD) == 0) { + error = EBADF; + break; + } + eeprom_info = (struct mlx5_eeprom_get *)data; + devaddr = &eeprom_info->devaddr; + error = mlx5_dbsf_to_core(devaddr, &mdev); + if (error != 0) + break; + error = mlx5_eeprom_copyout(mdev, eeprom_info); break; default: error = ENOTTY; break; } return (error); } static struct cdevsw mlx5_ctl_devsw = { .d_version = D_VERSION, .d_ioctl = mlx5_ctl_ioctl, }; static struct cdev *mlx5_ctl_dev; int mlx5_ctl_init(void) { struct make_dev_args mda; int error; make_dev_args_init(&mda); mda.mda_flags = MAKEDEV_WAITOK | MAKEDEV_CHECKNAME; mda.mda_devsw = &mlx5_ctl_devsw; mda.mda_uid = UID_ROOT; mda.mda_gid = GID_OPERATOR; mda.mda_mode = 0640; error = make_dev_s(&mda, &mlx5_ctl_dev, "mlx5ctl"); return (-error); } void mlx5_ctl_fini(void) { if (mlx5_ctl_dev != NULL) destroy_dev(mlx5_ctl_dev); } Index: stable/12/sys/dev/mlx5/mlx5_en/en.h =================================================================== --- stable/12/sys/dev/mlx5/mlx5_en/en.h (revision 353238) +++ stable/12/sys/dev/mlx5/mlx5_en/en.h (revision 353239) @@ 
-1,1205 +1,1177 @@ /*- * Copyright (c) 2015-2019 Mellanox Technologies. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef _MLX5_EN_H_ #define _MLX5_EN_H_ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "opt_rss.h" #ifdef RSS #include #include #endif #include #include #include #include #include #include #include #include #include #include #define MLX5E_MAX_PRIORITY 8 /* IEEE 802.1Qaz standard supported values */ #define IEEE_8021QAZ_MAX_TCS 8 #define MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE 0x7 #define MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE 0xa #define MLX5E_PARAMS_MAXIMUM_LOG_SQ_SIZE 0xe #define MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE 0x7 #define MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE 0xa #define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE 0xe #define MLX5E_MAX_BUSDMA_RX_SEGS 15 #ifndef MLX5E_MAX_RX_BYTES #define MLX5E_MAX_RX_BYTES MCLBYTES #endif #define MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ \ MIN(65535, 7 * MLX5E_MAX_RX_BYTES) #define MLX5E_DIM_DEFAULT_PROFILE 3 #define MLX5E_DIM_MAX_RX_CQ_MODERATION_PKTS_WITH_LRO 16 #define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC 0x10 #define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE 0x3 #define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS 0x20 #define MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC 0x10 #define MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS 0x20 #define MLX5E_PARAMS_DEFAULT_MIN_RX_WQES 0x80 #define MLX5E_PARAMS_DEFAULT_RX_HASH_LOG_TBL_SZ 0x7 #define MLX5E_CACHELINE_SIZE CACHE_LINE_SIZE #define MLX5E_HW2SW_MTU(hwmtu) \ ((hwmtu) - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN + ETHER_CRC_LEN)) #define MLX5E_SW2HW_MTU(swmtu) \ ((swmtu) + (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN + ETHER_CRC_LEN)) #define MLX5E_SW2MB_MTU(swmtu) \ (MLX5E_SW2HW_MTU(swmtu) + MLX5E_NET_IP_ALIGN) #define MLX5E_MTU_MIN 72 /* Min MTU allowed by the kernel */ #define MLX5E_MTU_MAX MIN(ETHERMTU_JUMBO, MJUM16BYTES) /* Max MTU of Ethernet * jumbo frames */ #define MLX5E_BUDGET_MAX 8192 /* RX and TX */ #define MLX5E_RX_BUDGET_MAX 256 #define MLX5E_SQ_BF_BUDGET 16 #define MLX5E_SQ_TX_QUEUE_SIZE 4096 /* SQ drbr queue size */ #define MLX5E_MAX_TX_NUM_TC 8 /* units */ #define MLX5E_MAX_TX_HEADER 128 /* bytes */ #define 
MLX5E_MAX_TX_PAYLOAD_SIZE 65536 /* bytes */ #define MLX5E_MAX_TX_MBUF_SIZE 65536 /* bytes */ #define MLX5E_MAX_TX_MBUF_FRAGS \ ((MLX5_SEND_WQE_MAX_WQEBBS * MLX5_SEND_WQEBB_NUM_DS) - \ (MLX5E_MAX_TX_HEADER / MLX5_SEND_WQE_DS) - \ 1 /* the maximum value of the DS counter is 0x3F and not 0x40 */) /* units */ #define MLX5E_MAX_TX_INLINE \ (MLX5E_MAX_TX_HEADER - sizeof(struct mlx5e_tx_wqe) + \ sizeof(((struct mlx5e_tx_wqe *)0)->eth.inline_hdr_start)) /* bytes */ #define MLX5E_100MB (100000) #define MLX5E_1GB (1000000) MALLOC_DECLARE(M_MLX5EN); struct mlx5_core_dev; struct mlx5e_cq; typedef void (mlx5e_cq_comp_t)(struct mlx5_core_cq *); #define mlx5_en_err(_dev, format, ...) \ if_printf(_dev, "ERR: ""%s:%d:(pid %d): " format, \ __func__, __LINE__, curthread->td_proc->p_pid, \ ##__VA_ARGS__) #define mlx5_en_warn(_dev, format, ...) \ if_printf(_dev, "WARN: ""%s:%d:(pid %d): " format, \ __func__, __LINE__, curthread->td_proc->p_pid, \ ##__VA_ARGS__) #define mlx5_en_info(_dev, format, ...) \ if_printf(_dev, "INFO: ""%s:%d:(pid %d): " format, \ __func__, __LINE__, curthread->td_proc->p_pid, \ ##__VA_ARGS__) #define MLX5E_STATS_COUNT(a, ...) a #define MLX5E_STATS_VAR(a, b, c, ...) b c; #define MLX5E_STATS_DESC(a, b, c, d, e, ...) d, e, #define MLX5E_VPORT_STATS(m) \ /* HW counters */ \ m(+1, u64, rx_packets, "rx_packets", "Received packets") \ m(+1, u64, rx_bytes, "rx_bytes", "Received bytes") \ m(+1, u64, tx_packets, "tx_packets", "Transmitted packets") \ m(+1, u64, tx_bytes, "tx_bytes", "Transmitted bytes") \ m(+1, u64, rx_error_packets, "rx_error_packets", "Received error packets") \ m(+1, u64, rx_error_bytes, "rx_error_bytes", "Received error bytes") \ m(+1, u64, tx_error_packets, "tx_error_packets", "Transmitted error packets") \ m(+1, u64, tx_error_bytes, "tx_error_bytes", "Transmitted error bytes") \ m(+1, u64, rx_unicast_packets, "rx_unicast_packets", "Received unicast packets") \ m(+1, u64, rx_unicast_bytes, "rx_unicast_bytes", "Received unicast bytes") \ m(+1, u64, tx_unicast_packets, "tx_unicast_packets", "Transmitted unicast packets") \ m(+1, u64, tx_unicast_bytes, "tx_unicast_bytes", "Transmitted unicast bytes") \ m(+1, u64, rx_multicast_packets, "rx_multicast_packets", "Received multicast packets") \ m(+1, u64, rx_multicast_bytes, "rx_multicast_bytes", "Received multicast bytes") \ m(+1, u64, tx_multicast_packets, "tx_multicast_packets", "Transmitted multicast packets") \ m(+1, u64, tx_multicast_bytes, "tx_multicast_bytes", "Transmitted multicast bytes") \ m(+1, u64, rx_broadcast_packets, "rx_broadcast_packets", "Received broadcast packets") \ m(+1, u64, rx_broadcast_bytes, "rx_broadcast_bytes", "Received broadcast bytes") \ m(+1, u64, tx_broadcast_packets, "tx_broadcast_packets", "Transmitted broadcast packets") \ m(+1, u64, tx_broadcast_bytes, "tx_broadcast_bytes", "Transmitted broadcast bytes") \ m(+1, u64, rx_out_of_buffer, "rx_out_of_buffer", "Receive out of buffer, no recv wqes events") \ /* SW counters */ \ m(+1, u64, tso_packets, "tso_packets", "Transmitted TSO packets") \ m(+1, u64, tso_bytes, "tso_bytes", "Transmitted TSO bytes") \ m(+1, u64, lro_packets, "lro_packets", "Received LRO packets") \ m(+1, u64, lro_bytes, "lro_bytes", "Received LRO bytes") \ m(+1, u64, sw_lro_queued, "sw_lro_queued", "Packets queued for SW LRO") \ m(+1, u64, sw_lro_flushed, "sw_lro_flushed", "Packets flushed from SW LRO") \ m(+1, u64, rx_csum_good, "rx_csum_good", "Received checksum valid packets") \ m(+1, u64, rx_csum_none, "rx_csum_none", "Received no checksum packets") \ m(+1, u64, 
tx_csum_offload, "tx_csum_offload", "Transmit checksum offload packets") \ m(+1, u64, tx_queue_dropped, "tx_queue_dropped", "Transmit queue dropped") \ m(+1, u64, tx_defragged, "tx_defragged", "Transmit queue defragged") \ m(+1, u64, rx_wqe_err, "rx_wqe_err", "Receive WQE errors") \ m(+1, u64, tx_jumbo_packets, "tx_jumbo_packets", "TX packets greater than 1518 octets") \ m(+1, u64, rx_steer_missed_packets, "rx_steer_missed_packets", "RX packets dropped by steering rule(s)") #define MLX5E_VPORT_STATS_NUM (0 MLX5E_VPORT_STATS(MLX5E_STATS_COUNT)) struct mlx5e_vport_stats { struct sysctl_ctx_list ctx; u64 arg [0]; MLX5E_VPORT_STATS(MLX5E_STATS_VAR) }; #define MLX5E_PPORT_IEEE802_3_STATS(m) \ m(+1, u64, frames_tx, "frames_tx", "Frames transmitted") \ m(+1, u64, frames_rx, "frames_rx", "Frames received") \ m(+1, u64, check_seq_err, "check_seq_err", "Sequence errors") \ m(+1, u64, alignment_err, "alignment_err", "Alignment errors") \ m(+1, u64, octets_tx, "octets_tx", "Bytes transmitted") \ m(+1, u64, octets_received, "octets_received", "Bytes received") \ m(+1, u64, multicast_xmitted, "multicast_xmitted", "Multicast transmitted") \ m(+1, u64, broadcast_xmitted, "broadcast_xmitted", "Broadcast transmitted") \ m(+1, u64, multicast_rx, "multicast_rx", "Multicast received") \ m(+1, u64, broadcast_rx, "broadcast_rx", "Broadcast received") \ m(+1, u64, in_range_len_errors, "in_range_len_errors", "In range length errors") \ m(+1, u64, out_of_range_len, "out_of_range_len", "Out of range length errors") \ m(+1, u64, too_long_errors, "too_long_errors", "Too long errors") \ m(+1, u64, symbol_err, "symbol_err", "Symbol errors") \ m(+1, u64, mac_control_tx, "mac_control_tx", "MAC control transmitted") \ m(+1, u64, mac_control_rx, "mac_control_rx", "MAC control received") \ m(+1, u64, unsupported_op_rx, "unsupported_op_rx", "Unsupported operation received") \ m(+1, u64, pause_ctrl_rx, "pause_ctrl_rx", "Pause control received") \ m(+1, u64, pause_ctrl_tx, "pause_ctrl_tx", "Pause control transmitted") #define MLX5E_PPORT_RFC2819_STATS(m) \ m(+1, u64, drop_events, "drop_events", "Dropped events") \ m(+1, u64, octets, "octets", "Octets") \ m(+1, u64, pkts, "pkts", "Packets") \ m(+1, u64, broadcast_pkts, "broadcast_pkts", "Broadcast packets") \ m(+1, u64, multicast_pkts, "multicast_pkts", "Multicast packets") \ m(+1, u64, crc_align_errors, "crc_align_errors", "CRC alignment errors") \ m(+1, u64, undersize_pkts, "undersize_pkts", "Undersized packets") \ m(+1, u64, oversize_pkts, "oversize_pkts", "Oversized packets") \ m(+1, u64, fragments, "fragments", "Fragments") \ m(+1, u64, jabbers, "jabbers", "Jabbers") \ m(+1, u64, collisions, "collisions", "Collisions") #define MLX5E_PPORT_RFC2819_STATS_DEBUG(m) \ m(+1, u64, p64octets, "p64octets", "Bytes") \ m(+1, u64, p65to127octets, "p65to127octets", "Bytes") \ m(+1, u64, p128to255octets, "p128to255octets", "Bytes") \ m(+1, u64, p256to511octets, "p256to511octets", "Bytes") \ m(+1, u64, p512to1023octets, "p512to1023octets", "Bytes") \ m(+1, u64, p1024to1518octets, "p1024to1518octets", "Bytes") \ m(+1, u64, p1519to2047octets, "p1519to2047octets", "Bytes") \ m(+1, u64, p2048to4095octets, "p2048to4095octets", "Bytes") \ m(+1, u64, p4096to8191octets, "p4096to8191octets", "Bytes") \ m(+1, u64, p8192to10239octets, "p8192to10239octets", "Bytes") #define MLX5E_PPORT_RFC2863_STATS_DEBUG(m) \ m(+1, u64, in_octets, "in_octets", "In octets") \ m(+1, u64, in_ucast_pkts, "in_ucast_pkts", "In unicast packets") \ m(+1, u64, in_discards, "in_discards", "In discards") \ m(+1, u64, 
in_errors, "in_errors", "In errors") \ m(+1, u64, in_unknown_protos, "in_unknown_protos", "In unknown protocols") \ m(+1, u64, out_octets, "out_octets", "Out octets") \ m(+1, u64, out_ucast_pkts, "out_ucast_pkts", "Out unicast packets") \ m(+1, u64, out_discards, "out_discards", "Out discards") \ m(+1, u64, out_errors, "out_errors", "Out errors") \ m(+1, u64, in_multicast_pkts, "in_multicast_pkts", "In multicast packets") \ m(+1, u64, in_broadcast_pkts, "in_broadcast_pkts", "In broadcast packets") \ m(+1, u64, out_multicast_pkts, "out_multicast_pkts", "Out multicast packets") \ m(+1, u64, out_broadcast_pkts, "out_broadcast_pkts", "Out broadcast packets") #define MLX5E_PPORT_ETHERNET_EXTENDED_STATS_DEBUG(m) \ m(+1, u64, port_transmit_wait, "port_transmit_wait", "Port transmit wait") \ m(+1, u64, ecn_marked, "ecn_marked", "ECN marked") \ m(+1, u64, no_buffer_discard_mc, "no_buffer_discard_mc", "No buffer discard mc") \ m(+1, u64, rx_ebp, "rx_ebp", "RX EBP") \ m(+1, u64, tx_ebp, "tx_ebp", "TX EBP") \ m(+1, u64, rx_buffer_almost_full, "rx_buffer_almost_full", "RX buffer almost full") \ m(+1, u64, rx_buffer_full, "rx_buffer_full", "RX buffer full") \ m(+1, u64, rx_icrc_encapsulated, "rx_icrc_encapsulated", "RX ICRC encapsulated") \ m(+1, u64, ex_reserved_0, "ex_reserved_0", "Reserved") \ m(+1, u64, ex_reserved_1, "ex_reserved_1", "Reserved") \ m(+1, u64, tx_stat_p64octets, "tx_stat_p64octets", "Bytes") \ m(+1, u64, tx_stat_p65to127octets, "tx_stat_p65to127octets", "Bytes") \ m(+1, u64, tx_stat_p128to255octets, "tx_stat_p128to255octets", "Bytes") \ m(+1, u64, tx_stat_p256to511octets, "tx_stat_p256to511octets", "Bytes") \ m(+1, u64, tx_stat_p512to1023octets, "tx_stat_p512to1023octets", "Bytes") \ m(+1, u64, tx_stat_p1024to1518octets, "tx_stat_p1024to1518octets", "Bytes") \ m(+1, u64, tx_stat_p1519to2047octets, "tx_stat_p1519to2047octets", "Bytes") \ m(+1, u64, tx_stat_p2048to4095octets, "tx_stat_p2048to4095octets", "Bytes") \ m(+1, u64, tx_stat_p4096to8191octets, "tx_stat_p4096to8191octets", "Bytes") \ m(+1, u64, tx_stat_p8192to10239octets, "tx_stat_p8192to10239octets", "Bytes") #define MLX5E_PPORT_STATISTICAL_DEBUG(m) \ m(+1, u64, phy_time_since_last_clear, "phy_time_since_last_clear", \ "Time since last clear in milliseconds") \ m(+1, u64, phy_received_bits, "phy_received_bits", \ "Total amount of traffic received in bits before error correction") \ m(+1, u64, phy_symbol_errors, "phy_symbol_errors", \ "Total number of symbol errors before error correction") \ m(+1, u64, phy_corrected_bits, "phy_corrected_bits", \ "Total number of corrected bits ") \ m(+1, u64, phy_corrected_bits_lane0, "phy_corrected_bits_lane0", \ "Total number of corrected bits for lane 0") \ m(+1, u64, phy_corrected_bits_lane1, "phy_corrected_bits_lane1", \ "Total number of corrected bits for lane 1") \ m(+1, u64, phy_corrected_bits_lane2, "phy_corrected_bits_lane2", \ "Total number of corrected bits for lane 2") \ m(+1, u64, phy_corrected_bits_lane3, "phy_corrected_bits_lane3", \ "Total number of corrected bits for lane 3") #define MLX5E_PPORT_PHYSICAL_LAYER_STATS_DEBUG(m) \ m(+1, u64, time_since_last_clear, "time_since_last_clear", \ "Time since the last counters clear event (msec)") \ m(+1, u64, symbol_errors, "symbol_errors", "Symbol errors") \ m(+1, u64, sync_headers_errors, "sync_headers_errors", \ "Sync header error counter") \ m(+1, u64, bip_errors_lane0, "edpl_bip_errors_lane0", \ "Indicates the number of PRBS errors on lane 0") \ m(+1, u64, bip_errors_lane1, "edpl_bip_errors_lane1", \ "Indicates the number of PRBS 
errors on lane 1") \ m(+1, u64, bip_errors_lane2, "edpl_bip_errors_lane2", \ "Indicates the number of PRBS errors on lane 2") \ m(+1, u64, bip_errors_lane3, "edpl_bip_errors_lane3", \ "Indicates the number of PRBS errors on lane 3") \ m(+1, u64, fc_corrected_blocks_lane0, "fc_corrected_blocks_lane0", \ "FEC correctable block counter lane 0") \ m(+1, u64, fc_corrected_blocks_lane1, "fc_corrected_blocks_lane1", \ "FEC correctable block counter lane 1") \ m(+1, u64, fc_corrected_blocks_lane2, "fc_corrected_blocks_lane2", \ "FEC correctable block counter lane 2") \ m(+1, u64, fc_corrected_blocks_lane3, "fc_corrected_blocks_lane3", \ "FEC correctable block counter lane 3") \ m(+1, u64, rs_corrected_blocks, "rs_corrected_blocks", \ "FEC correcable block counter") \ m(+1, u64, rs_uncorrectable_blocks, "rs_uncorrectable_blocks", \ "FEC uncorrecable block counter") \ m(+1, u64, rs_no_errors_blocks, "rs_no_errors_blocks", \ "The number of RS-FEC blocks received that had no errors") \ m(+1, u64, rs_single_error_blocks, "rs_single_error_blocks", \ "The number of corrected RS-FEC blocks received that had" \ "exactly 1 error symbol") \ m(+1, u64, rs_corrected_symbols_total, "rs_corrected_symbols_total", \ "Port FEC corrected symbol counter") \ m(+1, u64, rs_corrected_symbols_lane0, "rs_corrected_symbols_lane0", \ "FEC corrected symbol counter lane 0") \ m(+1, u64, rs_corrected_symbols_lane1, "rs_corrected_symbols_lane1", \ "FEC corrected symbol counter lane 1") \ m(+1, u64, rs_corrected_symbols_lane2, "rs_corrected_symbols_lane2", \ "FEC corrected symbol counter lane 2") \ m(+1, u64, rs_corrected_symbols_lane3, "rs_corrected_symbols_lane3", \ "FEC corrected symbol counter lane 3") /* Per priority statistics for PFC */ #define MLX5E_PPORT_PER_PRIO_STATS_SUB(m,n,p) \ m(n, p, +1, u64, rx_octets, "rx_octets", "Received octets") \ m(n, p, +1, u64, rx_uc_frames, "rx_uc_frames", "Received unicast frames") \ m(n, p, +1, u64, rx_mc_frames, "rx_mc_frames", "Received multicast frames") \ m(n, p, +1, u64, rx_bc_frames, "rx_bc_frames", "Received broadcast frames") \ m(n, p, +1, u64, rx_frames, "rx_frames", "Received frames") \ m(n, p, +1, u64, tx_octets, "tx_octets", "Transmitted octets") \ m(n, p, +1, u64, tx_uc_frames, "tx_uc_frames", "Transmitted unicast frames") \ m(n, p, +1, u64, tx_mc_frames, "tx_mc_frames", "Transmitted multicast frames") \ m(n, p, +1, u64, tx_bc_frames, "tx_bc_frames", "Transmitted broadcast frames") \ m(n, p, +1, u64, tx_frames, "tx_frames", "Transmitted frames") \ m(n, p, +1, u64, rx_pause, "rx_pause", "Received pause frames") \ m(n, p, +1, u64, rx_pause_duration, "rx_pause_duration", \ "Received pause duration") \ m(n, p, +1, u64, tx_pause, "tx_pause", "Transmitted pause frames") \ m(n, p, +1, u64, tx_pause_duration, "tx_pause_duration", \ "Transmitted pause duration") \ m(n, p, +1, u64, rx_pause_transition, "rx_pause_transition", \ "Received pause transitions") \ m(n, p, +1, u64, rx_discards, "rx_discards", "Discarded received frames") \ m(n, p, +1, u64, device_stall_minor_watermark, \ "device_stall_minor_watermark", "Device stall minor watermark") \ m(n, p, +1, u64, device_stall_critical_watermark, \ "device_stall_critical_watermark", "Device stall critical watermark") #define MLX5E_PPORT_PER_PRIO_STATS_PREFIX(m,p,c,t,f,s,d) \ m(c, t, pri_##p##_##f, "prio" #p "_" s, "Priority " #p " - " d) #define MLX5E_PPORT_PER_PRIO_STATS_NUM_PRIO 8 #define MLX5E_PPORT_PER_PRIO_STATS(m) \ MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,0) \ 
MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,1) \ MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,2) \ MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,3) \ MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,4) \ MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,5) \ MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,6) \ MLX5E_PPORT_PER_PRIO_STATS_SUB(MLX5E_PPORT_PER_PRIO_STATS_PREFIX,m,7) #define MLX5E_PCIE_PERFORMANCE_COUNTERS_64(m) \ m(+1, u64, life_time_counter_high, "life_time_counter", \ "Life time counter.", pcie_perf_counters) \ m(+1, u64, tx_overflow_buffer_pkt, "tx_overflow_buffer_pkt", \ "The number of packets dropped due to lack of PCIe buffers " \ "in receive path from NIC port toward the hosts.", \ pcie_perf_counters) \ m(+1, u64, tx_overflow_buffer_marked_pkt, \ "tx_overflow_buffer_marked_pkt", \ "The number of packets marked due to lack of PCIe buffers " \ "in receive path from NIC port toward the hosts.", \ pcie_perf_counters) #define MLX5E_PCIE_PERFORMANCE_COUNTERS_32(m) \ m(+1, u64, rx_errors, "rx_errors", \ "Number of transitions to recovery due to Framing " \ "errors and CRC errors.", pcie_perf_counters) \ m(+1, u64, tx_errors, "tx_errors", "Number of transitions " \ "to recovery due to EIEOS and TS errors.", pcie_perf_counters) \ m(+1, u64, l0_to_recovery_eieos, "l0_to_recovery_eieos", "Number of " \ "transitions to recovery due to getting EIEOS.", pcie_perf_counters)\ m(+1, u64, l0_to_recovery_ts, "l0_to_recovery_ts", "Number of " \ "transitions to recovery due to getting TS.", pcie_perf_counters) \ m(+1, u64, l0_to_recovery_framing, "l0_to_recovery_framing", "Number "\ "of transitions to recovery due to identifying framing " \ "errors at gen3/4.", pcie_perf_counters) \ m(+1, u64, l0_to_recovery_retrain, "l0_to_recovery_retrain", \ "Number of transitions to recovery due to link retrain request " \ "from data link.", pcie_perf_counters) \ m(+1, u64, crc_error_dllp, "crc_error_dllp", "Number of transitions " \ "to recovery due to identifying CRC DLLP errors.", \ pcie_perf_counters) \ m(+1, u64, crc_error_tlp, "crc_error_tlp", "Number of transitions to "\ "recovery due to identifying CRC TLP errors.", pcie_perf_counters) \ m(+1, u64, outbound_stalled_reads, "outbound_stalled_reads", \ "The percentage of time within the last second that the NIC had " \ "outbound non-posted read requests but could not perform the " \ "operation due to insufficient non-posted credits.", \ pcie_perf_counters) \ m(+1, u64, outbound_stalled_writes, "outbound_stalled_writes", \ "The percentage of time within the last second that the NIC had " \ "outbound posted writes requests but could not perform the " \ "operation due to insufficient posted credits.", \ pcie_perf_counters) \ m(+1, u64, outbound_stalled_reads_events, \ "outbound_stalled_reads_events", "The number of events where " \ "outbound_stalled_reads was above a threshold.", \ pcie_perf_counters) \ m(+1, u64, outbound_stalled_writes_events, \ "outbound_stalled_writes_events", \ "The number of events where outbound_stalled_writes was above " \ "a threshold.", pcie_perf_counters) #define MLX5E_PCIE_TIMERS_AND_STATES_COUNTERS_32(m) \ m(+1, u64, time_to_boot_image_start, "time_to_boot_image_start", \ "Time from start until FW boot image starts running in usec.", \ pcie_timers_states) \ m(+1, u64, time_to_link_image, "time_to_link_image", \ "Time from start until FW pci_link image starts running in usec.", \ pcie_timers_states) \ 
m(+1, u64, calibration_time, "calibration_time", \ "Time it took FW to do calibration in usec.", \ pcie_timers_states) \ m(+1, u64, time_to_first_perst, "time_to_first_perst", \ "Time form start until FW handle first perst. in usec.", \ pcie_timers_states) \ m(+1, u64, time_to_detect_state, "time_to_detect_state", \ "Time from start until first transition to LTSSM.Detect_Q in usec", \ pcie_timers_states) \ m(+1, u64, time_to_l0, "time_to_l0", \ "Time from start until first transition to LTSSM.L0 in usec", \ pcie_timers_states) \ m(+1, u64, time_to_crs_en, "time_to_crs_en", \ "Time from start until crs is enabled in usec", \ pcie_timers_states) \ m(+1, u64, time_to_plastic_image_start, "time_to_plastic_image_start",\ "Time form start until FW plastic image starts running in usec.", \ pcie_timers_states) \ m(+1, u64, time_to_iron_image_start, "time_to_iron_image_start", \ "Time form start until FW iron image starts running in usec.", \ pcie_timers_states) \ m(+1, u64, perst_handler, "perst_handler", \ "Number of persts arrived.", pcie_timers_states) \ m(+1, u64, times_in_l1, "times_in_l1", \ "Number of times LTSSM entered L1 flow.", pcie_timers_states) \ m(+1, u64, times_in_l23, "times_in_l23", \ "Number of times LTSSM entered L23 flow.", pcie_timers_states) \ m(+1, u64, dl_down, "dl_down", \ "Number of moves for DL_active to DL_down.", pcie_timers_states) \ m(+1, u64, config_cycle1usec, "config_cycle1usec", \ "Number of configuration requests that firmware " \ "handled in less than 1 usec.", pcie_timers_states) \ m(+1, u64, config_cycle2to7usec, "config_cycle2to7usec", \ "Number of configuration requests that firmware " \ "handled within 2 to 7 usec.", pcie_timers_states) \ m(+1, u64, config_cycle8to15usec, "config_cycle8to15usec", \ "Number of configuration requests that firmware " \ "handled within 8 to 15 usec.", pcie_timers_states) \ m(+1, u64, config_cycle16to63usec, "config_cycle16to63usec", \ "Number of configuration requests that firmware " \ "handled within 16 to 63 usec.", pcie_timers_states) \ m(+1, u64, config_cycle64usec, "config_cycle64usec", \ "Number of configuration requests that firmware " \ "handled took more than 64 usec.", pcie_timers_states) \ m(+1, u64, correctable_err_msg_sent, "correctable_err_msg_sent", \ "Number of correctable error messages sent.", pcie_timers_states) \ m(+1, u64, non_fatal_err_msg_sent, "non_fatal_err_msg_sent", \ "Number of non-Fatal error msg sent.", pcie_timers_states) \ m(+1, u64, fatal_err_msg_sent, "fatal_err_msg_sent", \ "Number of fatal error msg sent.", pcie_timers_states) #define MLX5E_PCIE_LANE_COUNTERS_32(m) \ m(+1, u64, error_counter_lane0, "error_counter_lane0", \ "Error counter for PCI lane 0", pcie_lanes_counters) \ m(+1, u64, error_counter_lane1, "error_counter_lane1", \ "Error counter for PCI lane 1", pcie_lanes_counters) \ m(+1, u64, error_counter_lane2, "error_counter_lane2", \ "Error counter for PCI lane 2", pcie_lanes_counters) \ m(+1, u64, error_counter_lane3, "error_counter_lane3", \ "Error counter for PCI lane 3", pcie_lanes_counters) \ m(+1, u64, error_counter_lane4, "error_counter_lane4", \ "Error counter for PCI lane 4", pcie_lanes_counters) \ m(+1, u64, error_counter_lane5, "error_counter_lane5", \ "Error counter for PCI lane 5", pcie_lanes_counters) \ m(+1, u64, error_counter_lane6, "error_counter_lane6", \ "Error counter for PCI lane 6", pcie_lanes_counters) \ m(+1, u64, error_counter_lane7, "error_counter_lane7", \ "Error counter for PCI lane 7", pcie_lanes_counters) \ m(+1, u64, error_counter_lane8, 
"error_counter_lane8", \ "Error counter for PCI lane 8", pcie_lanes_counters) \ m(+1, u64, error_counter_lane9, "error_counter_lane9", \ "Error counter for PCI lane 9", pcie_lanes_counters) \ m(+1, u64, error_counter_lane10, "error_counter_lane10", \ "Error counter for PCI lane 10", pcie_lanes_counters) \ m(+1, u64, error_counter_lane11, "error_counter_lane11", \ "Error counter for PCI lane 11", pcie_lanes_counters) \ m(+1, u64, error_counter_lane12, "error_counter_lane12", \ "Error counter for PCI lane 12", pcie_lanes_counters) \ m(+1, u64, error_counter_lane13, "error_counter_lane13", \ "Error counter for PCI lane 13", pcie_lanes_counters) \ m(+1, u64, error_counter_lane14, "error_counter_lane14", \ "Error counter for PCI lane 14", pcie_lanes_counters) \ m(+1, u64, error_counter_lane15, "error_counter_lane15", \ "Error counter for PCI lane 15", pcie_lanes_counters) /* * Make sure to update mlx5e_update_pport_counters() * when adding a new MLX5E_PPORT_STATS block */ #define MLX5E_PPORT_STATS(m) \ MLX5E_PPORT_PER_PRIO_STATS(m) \ MLX5E_PPORT_IEEE802_3_STATS(m) \ MLX5E_PPORT_RFC2819_STATS(m) #define MLX5E_PORT_STATS_DEBUG(m) \ MLX5E_PPORT_RFC2819_STATS_DEBUG(m) \ MLX5E_PPORT_RFC2863_STATS_DEBUG(m) \ MLX5E_PPORT_PHYSICAL_LAYER_STATS_DEBUG(m) \ MLX5E_PPORT_ETHERNET_EXTENDED_STATS_DEBUG(m) \ MLX5E_PPORT_STATISTICAL_DEBUG(m) \ MLX5E_PCIE_PERFORMANCE_COUNTERS_64(m) \ MLX5E_PCIE_PERFORMANCE_COUNTERS_32(m) \ MLX5E_PCIE_TIMERS_AND_STATES_COUNTERS_32(m) \ MLX5E_PCIE_LANE_COUNTERS_32(m) #define MLX5E_PPORT_IEEE802_3_STATS_NUM \ (0 MLX5E_PPORT_IEEE802_3_STATS(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_RFC2819_STATS_NUM \ (0 MLX5E_PPORT_RFC2819_STATS(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_STATS_NUM \ (0 MLX5E_PPORT_STATS(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_PER_PRIO_STATS_NUM \ (0 MLX5E_PPORT_PER_PRIO_STATS(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_RFC2819_STATS_DEBUG_NUM \ (0 MLX5E_PPORT_RFC2819_STATS_DEBUG(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_RFC2863_STATS_DEBUG_NUM \ (0 MLX5E_PPORT_RFC2863_STATS_DEBUG(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_PHYSICAL_LAYER_STATS_DEBUG_NUM \ (0 MLX5E_PPORT_PHYSICAL_LAYER_STATS_DEBUG(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_ETHERNET_EXTENDED_STATS_DEBUG_NUM \ (0 MLX5E_PPORT_ETHERNET_EXTENDED_STATS_DEBUG(MLX5E_STATS_COUNT)) #define MLX5E_PPORT_STATISTICAL_DEBUG_NUM \ (0 MLX5E_PPORT_STATISTICAL_DEBUG(MLX5E_STATS_COUNT)) #define MLX5E_PORT_STATS_DEBUG_NUM \ (0 MLX5E_PORT_STATS_DEBUG(MLX5E_STATS_COUNT)) struct mlx5e_pport_stats { struct sysctl_ctx_list ctx; u64 arg [0]; MLX5E_PPORT_STATS(MLX5E_STATS_VAR) }; struct mlx5e_port_stats_debug { struct sysctl_ctx_list ctx; u64 arg [0]; MLX5E_PORT_STATS_DEBUG(MLX5E_STATS_VAR) }; #define MLX5E_RQ_STATS(m) \ m(+1, u64, packets, "packets", "Received packets") \ m(+1, u64, bytes, "bytes", "Received bytes") \ m(+1, u64, csum_none, "csum_none", "Received packets") \ m(+1, u64, lro_packets, "lro_packets", "Received LRO packets") \ m(+1, u64, lro_bytes, "lro_bytes", "Received LRO bytes") \ m(+1, u64, sw_lro_queued, "sw_lro_queued", "Packets queued for SW LRO") \ m(+1, u64, sw_lro_flushed, "sw_lro_flushed", "Packets flushed from SW LRO") \ m(+1, u64, wqe_err, "wqe_err", "Received packets") #define MLX5E_RQ_STATS_NUM (0 MLX5E_RQ_STATS(MLX5E_STATS_COUNT)) struct mlx5e_rq_stats { struct sysctl_ctx_list ctx; u64 arg [0]; MLX5E_RQ_STATS(MLX5E_STATS_VAR) }; #define MLX5E_SQ_STATS(m) \ m(+1, u64, packets, "packets", "Transmitted packets") \ m(+1, u64, bytes, "bytes", "Transmitted bytes") \ m(+1, u64, tso_packets, "tso_packets", "Transmitted 
packets") \ m(+1, u64, tso_bytes, "tso_bytes", "Transmitted bytes") \ m(+1, u64, csum_offload_none, "csum_offload_none", "Transmitted packets") \ m(+1, u64, defragged, "defragged", "Transmitted packets") \ m(+1, u64, dropped, "dropped", "Transmitted packets") \ m(+1, u64, nop, "nop", "Transmitted packets") #define MLX5E_SQ_STATS_NUM (0 MLX5E_SQ_STATS(MLX5E_STATS_COUNT)) struct mlx5e_sq_stats { struct sysctl_ctx_list ctx; u64 arg [0]; MLX5E_SQ_STATS(MLX5E_STATS_VAR) }; struct mlx5e_stats { struct mlx5e_vport_stats vport; struct mlx5e_pport_stats pport; struct mlx5e_port_stats_debug port_stats_debug; }; struct mlx5e_rq_param { u32 rqc [MLX5_ST_SZ_DW(rqc)]; struct mlx5_wq_param wq; }; struct mlx5e_sq_param { u32 sqc [MLX5_ST_SZ_DW(sqc)]; struct mlx5_wq_param wq; }; struct mlx5e_cq_param { u32 cqc [MLX5_ST_SZ_DW(cqc)]; struct mlx5_wq_param wq; }; struct mlx5e_params { u8 log_sq_size; u8 log_rq_size; u16 num_channels; u8 default_vlan_prio; u8 num_tc; u8 rx_cq_moderation_mode; u8 tx_cq_moderation_mode; u16 rx_cq_moderation_usec; u16 rx_cq_moderation_pkts; u16 tx_cq_moderation_usec; u16 tx_cq_moderation_pkts; u16 min_rx_wqes; bool hw_lro_en; bool cqe_zipping_en; u32 lro_wqe_sz; u16 rx_hash_log_tbl_sz; u32 tx_pauseframe_control __aligned(4); u32 rx_pauseframe_control __aligned(4); u16 tx_max_inline; u8 tx_min_inline_mode; u8 tx_priority_flow_control; u8 rx_priority_flow_control; u8 channels_rsss; }; #define MLX5E_PARAMS(m) \ m(+1, u64, tx_queue_size_max, "tx_queue_size_max", "Max send queue size") \ m(+1, u64, rx_queue_size_max, "rx_queue_size_max", "Max receive queue size") \ m(+1, u64, tx_queue_size, "tx_queue_size", "Default send queue size") \ m(+1, u64, rx_queue_size, "rx_queue_size", "Default receive queue size") \ m(+1, u64, channels, "channels", "Default number of channels") \ m(+1, u64, channels_rsss, "channels_rsss", "Default channels receive side scaling stride") \ m(+1, u64, coalesce_usecs_max, "coalesce_usecs_max", "Maximum usecs for joining packets") \ m(+1, u64, coalesce_pkts_max, "coalesce_pkts_max", "Maximum packets to join") \ m(+1, u64, rx_coalesce_usecs, "rx_coalesce_usecs", "Limit in usec for joining rx packets") \ m(+1, u64, rx_coalesce_pkts, "rx_coalesce_pkts", "Maximum number of rx packets to join") \ m(+1, u64, rx_coalesce_mode, "rx_coalesce_mode", "0: EQE fixed mode 1: CQE fixed mode 2: EQE auto mode 3: CQE auto mode") \ m(+1, u64, tx_coalesce_usecs, "tx_coalesce_usecs", "Limit in usec for joining tx packets") \ m(+1, u64, tx_coalesce_pkts, "tx_coalesce_pkts", "Maximum number of tx packets to join") \ m(+1, u64, tx_coalesce_mode, "tx_coalesce_mode", "0: EQE mode 1: CQE mode") \ m(+1, u64, tx_completion_fact, "tx_completion_fact", "1..MAX: Completion event ratio") \ m(+1, u64, tx_completion_fact_max, "tx_completion_fact_max", "Maximum completion event ratio") \ m(+1, u64, hw_lro, "hw_lro", "set to enable hw_lro") \ m(+1, u64, cqe_zipping, "cqe_zipping", "0 : CQE zipping disabled") \ m(+1, u64, modify_tx_dma, "modify_tx_dma", "0: Enable TX 1: Disable TX") \ m(+1, u64, modify_rx_dma, "modify_rx_dma", "0: Enable RX 1: Disable RX") \ m(+1, u64, diag_pci_enable, "diag_pci_enable", "0: Disabled 1: Enabled") \ m(+1, u64, diag_general_enable, "diag_general_enable", "0: Disabled 1: Enabled") \ m(+1, u64, hw_mtu, "hw_mtu", "Current hardware MTU value") \ m(+1, u64, mc_local_lb, "mc_local_lb", "0: Local multicast loopback enabled 1: Disabled") \ m(+1, u64, uc_local_lb, "uc_local_lb", "0: Local unicast loopback enabled 1: Disabled") #define MLX5E_PARAMS_NUM (0 
MLX5E_PARAMS(MLX5E_STATS_COUNT)) struct mlx5e_params_ethtool { u64 arg [0]; MLX5E_PARAMS(MLX5E_STATS_VAR) u64 max_bw_value[IEEE_8021QAZ_MAX_TCS]; u8 max_bw_share[IEEE_8021QAZ_MAX_TCS]; u8 prio_tc[MLX5E_MAX_PRIORITY]; u8 dscp2prio[MLX5_MAX_SUPPORTED_DSCP]; u8 trust_state; }; -/* EEPROM Standards for plug in modules */ -#ifndef MLX5E_ETH_MODULE_SFF_8472 -#define MLX5E_ETH_MODULE_SFF_8472 0x1 -#define MLX5E_ETH_MODULE_SFF_8472_LEN 128 -#endif - -#ifndef MLX5E_ETH_MODULE_SFF_8636 -#define MLX5E_ETH_MODULE_SFF_8636 0x2 -#define MLX5E_ETH_MODULE_SFF_8636_LEN 256 -#endif - -#ifndef MLX5E_ETH_MODULE_SFF_8436 -#define MLX5E_ETH_MODULE_SFF_8436 0x3 -#define MLX5E_ETH_MODULE_SFF_8436_LEN 256 -#endif - -/* EEPROM I2C Addresses */ -#define MLX5E_I2C_ADDR_LOW 0x50 -#define MLX5E_I2C_ADDR_HIGH 0x51 - -#define MLX5E_EEPROM_LOW_PAGE 0x0 -#define MLX5E_EEPROM_HIGH_PAGE 0x3 - -#define MLX5E_EEPROM_HIGH_PAGE_OFFSET 128 -#define MLX5E_EEPROM_PAGE_LENGTH 256 - -#define MLX5E_EEPROM_INFO_BYTES 0x3 - struct mlx5e_cq { /* data path - accessed per cqe */ struct mlx5_cqwq wq; /* data path - accessed per HW polling */ struct mlx5_core_cq mcq; /* control */ struct mlx5e_priv *priv; struct mlx5_wq_ctrl wq_ctrl; } __aligned(MLX5E_CACHELINE_SIZE); struct mlx5e_rq_mbuf { bus_dmamap_t dma_map; caddr_t data; struct mbuf *mbuf; }; struct mlx5e_rq { /* data path */ struct mlx5_wq_ll wq; struct mtx mtx; bus_dma_tag_t dma_tag; u32 wqe_sz; u32 nsegs; struct mlx5e_rq_mbuf *mbuf; struct ifnet *ifp; struct mlx5e_rq_stats stats; struct mlx5e_cq cq; struct lro_ctrl lro; volatile int enabled; int ix; /* Dynamic Interrupt Moderation */ struct net_dim dim; /* control */ struct mlx5_wq_ctrl wq_ctrl; u32 rqn; struct mlx5e_channel *channel; struct callout watchdog; } __aligned(MLX5E_CACHELINE_SIZE); struct mlx5e_sq_mbuf { bus_dmamap_t dma_map; struct mbuf *mbuf; u32 num_bytes; u32 num_wqebbs; }; enum { MLX5E_SQ_READY, MLX5E_SQ_FULL }; struct mlx5e_snd_tag { struct m_snd_tag m_snd_tag; /* send tag */ u32 type; /* tag type */ }; struct mlx5e_sq { /* data path */ struct mtx lock; bus_dma_tag_t dma_tag; struct mtx comp_lock; /* dirtied @completion */ u16 cc; /* dirtied @xmit */ u16 pc __aligned(MLX5E_CACHELINE_SIZE); u16 bf_offset; u16 cev_counter; /* completion event counter */ u16 cev_factor; /* completion event factor */ u16 cev_next_state; /* next completion event state */ #define MLX5E_CEV_STATE_INITIAL 0 /* timer not started */ #define MLX5E_CEV_STATE_SEND_NOPS 1 /* send NOPs */ #define MLX5E_CEV_STATE_HOLD_NOPS 2 /* don't send NOPs yet */ u16 running; /* set if SQ is running */ struct callout cev_callout; union { u32 d32[2]; u64 d64; } doorbell; struct mlx5e_sq_stats stats; struct mlx5e_cq cq; /* pointers to per packet info: write@xmit, read@completion */ struct mlx5e_sq_mbuf *mbuf; struct buf_ring *br; /* read only */ struct mlx5_wq_cyc wq; struct mlx5_uar uar; struct ifnet *ifp; u32 sqn; u32 bf_buf_size; u32 mkey_be; u16 max_inline; u8 min_inline_mode; u8 min_insert_caps; #define MLX5E_INSERT_VLAN 1 #define MLX5E_INSERT_NON_VLAN 2 /* control path */ struct mlx5_wq_ctrl wq_ctrl; struct mlx5e_priv *priv; int tc; } __aligned(MLX5E_CACHELINE_SIZE); static inline bool mlx5e_sq_has_room_for(struct mlx5e_sq *sq, u16 n) { u16 cc = sq->cc; u16 pc = sq->pc; return ((sq->wq.sz_m1 & (cc - pc)) >= n || cc == pc); } static inline u32 mlx5e_sq_queue_level(struct mlx5e_sq *sq) { u16 cc; u16 pc; if (sq == NULL) return (0); cc = sq->cc; pc = sq->pc; return (((sq->wq.sz_m1 & (pc - cc)) * IF_SND_QUEUE_LEVEL_MAX) / sq->wq.sz_m1); } struct mlx5e_channel { 
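	/*
	 * A channel pairs one receive queue with one send queue per
	 * traffic class; 'ix' is the channel index and 'cpu' records
	 * the CPU this channel is associated with.
	 */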
/* data path */ struct mlx5e_rq rq; struct mlx5e_snd_tag tag; struct mlx5e_sq sq[MLX5E_MAX_TX_NUM_TC]; u32 mkey_be; u8 num_tc; /* control */ struct mlx5e_priv *priv; int ix; int cpu; } __aligned(MLX5E_CACHELINE_SIZE); enum mlx5e_traffic_types { MLX5E_TT_IPV4_TCP, MLX5E_TT_IPV6_TCP, MLX5E_TT_IPV4_UDP, MLX5E_TT_IPV6_UDP, MLX5E_TT_IPV4_IPSEC_AH, MLX5E_TT_IPV6_IPSEC_AH, MLX5E_TT_IPV4_IPSEC_ESP, MLX5E_TT_IPV6_IPSEC_ESP, MLX5E_TT_IPV4, MLX5E_TT_IPV6, MLX5E_TT_ANY, MLX5E_NUM_TT, }; enum { MLX5E_RQT_SPREADING = 0, MLX5E_RQT_DEFAULT_RQ = 1, MLX5E_NUM_RQT = 2, }; struct mlx5_flow_rule; struct mlx5e_eth_addr_info { u8 addr [ETH_ALEN + 2]; u32 tt_vec; /* flow table rule per traffic type */ struct mlx5_flow_rule *ft_rule[MLX5E_NUM_TT]; }; #define MLX5E_ETH_ADDR_HASH_SIZE (1 << BITS_PER_BYTE) struct mlx5e_eth_addr_hash_node; struct mlx5e_eth_addr_hash_head { struct mlx5e_eth_addr_hash_node *lh_first; }; struct mlx5e_eth_addr_db { struct mlx5e_eth_addr_hash_head if_uc[MLX5E_ETH_ADDR_HASH_SIZE]; struct mlx5e_eth_addr_hash_head if_mc[MLX5E_ETH_ADDR_HASH_SIZE]; struct mlx5e_eth_addr_info broadcast; struct mlx5e_eth_addr_info allmulti; struct mlx5e_eth_addr_info promisc; bool broadcast_enabled; bool allmulti_enabled; bool promisc_enabled; }; enum { MLX5E_STATE_ASYNC_EVENTS_ENABLE, MLX5E_STATE_OPENED, }; enum { MLX5_BW_NO_LIMIT = 0, MLX5_100_MBPS_UNIT = 3, MLX5_GBPS_UNIT = 4, }; struct mlx5e_vlan_db { unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)]; struct mlx5_flow_rule *active_vlans_ft_rule[VLAN_N_VID]; struct mlx5_flow_rule *untagged_ft_rule; struct mlx5_flow_rule *any_cvlan_ft_rule; struct mlx5_flow_rule *any_svlan_ft_rule; bool filter_disabled; }; struct mlx5e_flow_table { int num_groups; struct mlx5_flow_table *t; struct mlx5_flow_group **g; }; struct mlx5e_flow_tables { struct mlx5_flow_namespace *ns; struct mlx5e_flow_table vlan; struct mlx5e_flow_table main; struct mlx5e_flow_table inner_rss; }; #ifdef RATELIMIT #include "en_rl.h" #endif #define MLX5E_TSTMP_PREC 10 struct mlx5e_clbr_point { uint64_t base_curr; uint64_t base_prev; uint64_t clbr_hw_prev; uint64_t clbr_hw_curr; u_int clbr_gen; }; struct mlx5e_dcbx { u32 cable_len; u32 xoff; }; struct mlx5e_priv { struct mlx5_core_dev *mdev; /* must be first */ /* priv data path fields - start */ int order_base_2_num_channels; int queue_mapping_channel_mask; int num_tc; int default_vlan_prio; /* priv data path fields - end */ unsigned long state; int gone; #define PRIV_LOCK(priv) sx_xlock(&(priv)->state_lock) #define PRIV_UNLOCK(priv) sx_xunlock(&(priv)->state_lock) #define PRIV_LOCKED(priv) sx_xlocked(&(priv)->state_lock) #define PRIV_ASSERT_LOCKED(priv) sx_assert(&(priv)->state_lock, SA_XLOCKED) struct sx state_lock; /* Protects Interface state */ struct mlx5_uar cq_uar; u32 pdn; u32 tdn; struct mlx5_core_mr mr; volatile unsigned int channel_refs; u32 tisn[MLX5E_MAX_TX_NUM_TC]; u32 rqtn; u32 tirn[MLX5E_NUM_TT]; struct mlx5e_flow_tables fts; struct mlx5e_eth_addr_db eth_addr; struct mlx5e_vlan_db vlan; struct mlx5e_params params; struct mlx5e_params_ethtool params_ethtool; union mlx5_core_pci_diagnostics params_pci; union mlx5_core_general_diagnostics params_general; struct mtx async_events_mtx; /* sync hw events */ struct work_struct update_stats_work; struct work_struct update_carrier_work; struct work_struct set_rx_mode_work; MLX5_DECLARE_DOORBELL_LOCK(doorbell_lock) struct ifnet *ifp; struct sysctl_ctx_list sysctl_ctx; struct sysctl_oid *sysctl_ifnet; struct sysctl_oid *sysctl_hw; int sysctl_debug; struct mlx5e_stats stats; int counter_set_id; 
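	/*
	 * The remaining members are control-path state: the driver
	 * workqueue, VLAN event handlers, media and watchdog state,
	 * timestamp calibration points, and finally the flexible
	 * channel[] array that is allocated together with this
	 * structure.
	 */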
struct workqueue_struct *wq; eventhandler_tag vlan_detach; eventhandler_tag vlan_attach; struct ifmedia media; int media_status_last; int media_active_last; struct callout watchdog; #ifdef RATELIMIT struct mlx5e_rl_priv_data rl; #endif struct callout tstmp_clbr; int clbr_done; int clbr_curr; struct mlx5e_clbr_point clbr_points[2]; u_int clbr_gen; struct mlx5e_dcbx dcbx; struct mlx5e_channel channel[]; }; #define MLX5E_NET_IP_ALIGN 2 struct mlx5e_tx_wqe { struct mlx5_wqe_ctrl_seg ctrl; struct mlx5_wqe_eth_seg eth; }; struct mlx5e_rx_wqe { struct mlx5_wqe_srq_next_seg next; struct mlx5_wqe_data_seg data[]; }; /* the size of the structure above must be power of two */ CTASSERT(powerof2(sizeof(struct mlx5e_rx_wqe))); struct mlx5e_eeprom { int lock_bit; int i2c_addr; int page_num; int device_addr; int module_num; int len; int type; int page_valid; u32 *data; }; #define MLX5E_FLD_MAX(typ, fld) ((1ULL << __mlx5_bit_sz(typ, fld)) - 1ULL) int mlx5e_xmit(struct ifnet *, struct mbuf *); int mlx5e_open_locked(struct ifnet *); int mlx5e_close_locked(struct ifnet *); void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, int event); void mlx5e_rx_cq_comp(struct mlx5_core_cq *); void mlx5e_tx_cq_comp(struct mlx5_core_cq *); struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq); void mlx5e_dim_work(struct work_struct *); void mlx5e_dim_build_cq_param(struct mlx5e_priv *, struct mlx5e_cq_param *); int mlx5e_open_flow_table(struct mlx5e_priv *priv); void mlx5e_close_flow_table(struct mlx5e_priv *priv); void mlx5e_set_rx_mode_core(struct mlx5e_priv *priv); void mlx5e_set_rx_mode_work(struct work_struct *work); void mlx5e_vlan_rx_add_vid(void *, struct ifnet *, u16); void mlx5e_vlan_rx_kill_vid(void *, struct ifnet *, u16); void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv); void mlx5e_disable_vlan_filter(struct mlx5e_priv *priv); int mlx5e_add_all_vlan_rules(struct mlx5e_priv *priv); void mlx5e_del_all_vlan_rules(struct mlx5e_priv *priv); static inline void mlx5e_tx_notify_hw(struct mlx5e_sq *sq, u32 *wqe, int bf_sz) { u16 ofst = MLX5_BF_OFFSET + sq->bf_offset; /* ensure wqe is visible to device before updating doorbell record */ wmb(); *sq->wq.db = cpu_to_be32(sq->pc); /* * Ensure the doorbell record is visible to device before ringing * the doorbell: */ wmb(); if (bf_sz) { __iowrite64_copy(sq->uar.bf_map + ofst, wqe, bf_sz); /* flush the write-combining mapped buffer */ wmb(); } else { mlx5_write64(wqe, sq->uar.map + ofst, MLX5_GET_DOORBELL_LOCK(&sq->priv->doorbell_lock)); } sq->bf_offset ^= sq->bf_buf_size; } static inline void mlx5e_cq_arm(struct mlx5e_cq *cq, spinlock_t *dblock) { struct mlx5_core_cq *mcq; mcq = &cq->mcq; mlx5_cq_arm(mcq, MLX5_CQ_DB_REQ_NOT, mcq->uar->map, dblock, cq->wq.cc); } static inline void mlx5e_ref_channel(struct mlx5e_priv *priv) { KASSERT(priv->channel_refs < INT_MAX, ("Channel refs will overflow")); atomic_fetchadd_int(&priv->channel_refs, 1); } static inline void mlx5e_unref_channel(struct mlx5e_priv *priv) { KASSERT(priv->channel_refs > 0, ("Channel refs is not greater than zero")); atomic_fetchadd_int(&priv->channel_refs, -1); } #define mlx5e_dbg(_IGN, _priv, ...) 
mlx5_core_dbg((_priv)->mdev, __VA_ARGS__) extern const struct ethtool_ops mlx5e_ethtool_ops; void mlx5e_create_ethtool(struct mlx5e_priv *); void mlx5e_create_stats(struct sysctl_ctx_list *, struct sysctl_oid_list *, const char *, const char **, unsigned, u64 *); void mlx5e_send_nop(struct mlx5e_sq *, u32); void mlx5e_sq_cev_timeout(void *); int mlx5e_refresh_channel_params(struct mlx5e_priv *); int mlx5e_open_cq(struct mlx5e_priv *, struct mlx5e_cq_param *, struct mlx5e_cq *, mlx5e_cq_comp_t *, int eq_ix); void mlx5e_close_cq(struct mlx5e_cq *); void mlx5e_free_sq_db(struct mlx5e_sq *); int mlx5e_alloc_sq_db(struct mlx5e_sq *); int mlx5e_enable_sq(struct mlx5e_sq *, struct mlx5e_sq_param *, int tis_num); int mlx5e_modify_sq(struct mlx5e_sq *, int curr_state, int next_state); void mlx5e_disable_sq(struct mlx5e_sq *); void mlx5e_drain_sq(struct mlx5e_sq *); void mlx5e_modify_tx_dma(struct mlx5e_priv *priv, uint8_t value); void mlx5e_modify_rx_dma(struct mlx5e_priv *priv, uint8_t value); void mlx5e_resume_sq(struct mlx5e_sq *sq); void mlx5e_update_sq_inline(struct mlx5e_sq *sq); void mlx5e_refresh_sq_inline(struct mlx5e_priv *priv); int mlx5e_update_buf_lossy(struct mlx5e_priv *priv); #endif /* _MLX5_EN_H_ */ Index: stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_ethtool.c =================================================================== --- stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_ethtool.c (revision 353238) +++ stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_ethtool.c (revision 353239) @@ -1,1429 +1,1215 @@ /*- * Copyright (c) 2015-2019 Mellanox Technologies. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #include "en.h" #include "port_buffer.h" -#include void mlx5e_create_stats(struct sysctl_ctx_list *ctx, struct sysctl_oid_list *parent, const char *buffer, const char **desc, unsigned num, u64 * arg) { struct sysctl_oid *node; unsigned x; sysctl_ctx_init(ctx); node = SYSCTL_ADD_NODE(ctx, parent, OID_AUTO, buffer, CTLFLAG_RD, NULL, "Statistics"); if (node == NULL) return; for (x = 0; x != num; x++) { SYSCTL_ADD_UQUAD(ctx, SYSCTL_CHILDREN(node), OID_AUTO, desc[2 * x], CTLFLAG_RD, arg + x, desc[2 * x + 1]); } } static void mlx5e_ethtool_sync_tx_completion_fact(struct mlx5e_priv *priv) { /* * Limit the maximum distance between completion events to * half of the currently set TX queue size. 
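	 *
	 * As a worked example, assuming the typical value of 16 for
	 * MLX5_SEND_WQE_MAX_WQEBBS: with tx_queue_size set to 1024 the
	 * limit computed below is 1024 / (2 * 16) = 32, so at most 32
	 * maximally sized packets can pass between two completion
	 * events.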
* * The maximum number of queue entries a single IP packet can * consume is given by MLX5_SEND_WQE_MAX_WQEBBS. * * The worst case max value is then given as below: */ uint64_t max = priv->params_ethtool.tx_queue_size / (2 * MLX5_SEND_WQE_MAX_WQEBBS); /* * Update the maximum completion factor value in case the * tx_queue_size field changed. Ensure we don't overflow * 16-bits. */ if (max < 1) max = 1; else if (max > 65535) max = 65535; priv->params_ethtool.tx_completion_fact_max = max; /* * Verify that the current TX completion factor is within the * given limits: */ if (priv->params_ethtool.tx_completion_fact < 1) priv->params_ethtool.tx_completion_fact = 1; else if (priv->params_ethtool.tx_completion_fact > max) priv->params_ethtool.tx_completion_fact = max; } static int mlx5e_getmaxrate(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; u8 max_bw_unit[IEEE_8021QAZ_MAX_TCS]; u8 max_bw_value[IEEE_8021QAZ_MAX_TCS]; int err; int i; PRIV_LOCK(priv); err = -mlx5_query_port_tc_rate_limit(mdev, max_bw_value, max_bw_unit); if (err) goto done; for (i = 0; i <= mlx5_max_tc(mdev); i++) { switch (max_bw_unit[i]) { case MLX5_100_MBPS_UNIT: priv->params_ethtool.max_bw_value[i] = max_bw_value[i] * MLX5E_100MB; break; case MLX5_GBPS_UNIT: priv->params_ethtool.max_bw_value[i] = max_bw_value[i] * MLX5E_1GB; break; case MLX5_BW_NO_LIMIT: priv->params_ethtool.max_bw_value[i] = 0; break; default: priv->params_ethtool.max_bw_value[i] = -1; WARN_ONCE(true, "non-supported BW unit"); break; } } done: PRIV_UNLOCK(priv); return (err); } static int mlx5e_get_max_alloc(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; int err; int x; PRIV_LOCK(priv); err = -mlx5_query_port_tc_bw_alloc(mdev, priv->params_ethtool.max_bw_share); if (err == 0) { /* set default value */ for (x = 0; x != IEEE_8021QAZ_MAX_TCS; x++) { priv->params_ethtool.max_bw_share[x] = 100 / IEEE_8021QAZ_MAX_TCS; } err = -mlx5_set_port_tc_bw_alloc(mdev, priv->params_ethtool.max_bw_share); } PRIV_UNLOCK(priv); return (err); } static int mlx5e_get_dscp(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; int err; if (MLX5_CAP_GEN(mdev, qcam_reg) == 0 || MLX5_CAP_QCAM_REG(mdev, qpts) == 0 || MLX5_CAP_QCAM_REG(mdev, qpdpm) == 0) return (EOPNOTSUPP); PRIV_LOCK(priv); err = -mlx5_query_dscp2prio(mdev, priv->params_ethtool.dscp2prio); if (err) goto done; err = -mlx5_query_trust_state(mdev, &priv->params_ethtool.trust_state); if (err) goto done; done: PRIV_UNLOCK(priv); return (err); } static void mlx5e_tc_get_parameters(struct mlx5e_priv *priv, u64 *new_bw_value, u8 *max_bw_value, u8 *max_bw_unit) { const u64 upper_limit_mbps = 255 * MLX5E_100MB; const u64 upper_limit_gbps = 255 * MLX5E_1GB; u64 temp; int i; memset(max_bw_value, 0, IEEE_8021QAZ_MAX_TCS); memset(max_bw_unit, 0, IEEE_8021QAZ_MAX_TCS); for (i = 0; i <= mlx5_max_tc(priv->mdev); i++) { temp = (new_bw_value != NULL) ? 
new_bw_value[i] : priv->params_ethtool.max_bw_value[i]; if (!temp) { max_bw_unit[i] = MLX5_BW_NO_LIMIT; } else if (temp > upper_limit_gbps) { max_bw_unit[i] = MLX5_BW_NO_LIMIT; } else if (temp <= upper_limit_mbps) { max_bw_value[i] = howmany(temp, MLX5E_100MB); max_bw_unit[i] = MLX5_100_MBPS_UNIT; } else { max_bw_value[i] = howmany(temp, MLX5E_1GB); max_bw_unit[i] = MLX5_GBPS_UNIT; } } } static int mlx5e_tc_maxrate_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv = arg1; struct mlx5_core_dev *mdev = priv->mdev; u8 max_bw_unit[IEEE_8021QAZ_MAX_TCS]; u8 max_bw_value[IEEE_8021QAZ_MAX_TCS]; u64 new_bw_value[IEEE_8021QAZ_MAX_TCS]; u8 max_rates = mlx5_max_tc(mdev) + 1; u8 x; int err; PRIV_LOCK(priv); err = SYSCTL_OUT(req, priv->params_ethtool.max_bw_value, sizeof(priv->params_ethtool.max_bw_value[0]) * max_rates); if (err || !req->newptr) goto done; err = SYSCTL_IN(req, new_bw_value, sizeof(new_bw_value[0]) * max_rates); if (err) goto done; /* range check input value */ for (x = 0; x != max_rates; x++) { if (new_bw_value[x] % MLX5E_100MB) { err = ERANGE; goto done; } } mlx5e_tc_get_parameters(priv, new_bw_value, max_bw_value, max_bw_unit); err = -mlx5_modify_port_tc_rate_limit(mdev, max_bw_value, max_bw_unit); if (err) goto done; memcpy(priv->params_ethtool.max_bw_value, new_bw_value, sizeof(priv->params_ethtool.max_bw_value)); done: PRIV_UNLOCK(priv); return (err); } static int mlx5e_tc_rate_share_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv = arg1; struct mlx5_core_dev *mdev = priv->mdev; u8 max_bw_share[IEEE_8021QAZ_MAX_TCS]; u8 max_rates = mlx5_max_tc(mdev) + 1; int i; int err; int sum; PRIV_LOCK(priv); err = SYSCTL_OUT(req, priv->params_ethtool.max_bw_share, max_rates); if (err || !req->newptr) goto done; err = SYSCTL_IN(req, max_bw_share, max_rates); if (err) goto done; /* range check input value */ for (sum = i = 0; i != max_rates; i++) { if (max_bw_share[i] < 1 || max_bw_share[i] > 100) { err = ERANGE; goto done; } sum += max_bw_share[i]; } /* sum of values should be as close to 100 as possible */ if (sum < (100 - max_rates + 1) || sum > 100) { err = ERANGE; goto done; } err = -mlx5_set_port_tc_bw_alloc(mdev, max_bw_share); if (err) goto done; memcpy(priv->params_ethtool.max_bw_share, max_bw_share, sizeof(priv->params_ethtool.max_bw_share)); done: PRIV_UNLOCK(priv); return (err); } static int mlx5e_get_prio_tc(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; int err = 0; int i; PRIV_LOCK(priv); if (!MLX5_CAP_GEN(priv->mdev, ets)) { PRIV_UNLOCK(priv); return (EOPNOTSUPP); } for (i = 0; i != MLX5E_MAX_PRIORITY; i++) { err = -mlx5_query_port_prio_tc(mdev, i, priv->params_ethtool.prio_tc + i); if (err) break; } PRIV_UNLOCK(priv); return (err); } static int mlx5e_prio_to_tc_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv = arg1; struct mlx5_core_dev *mdev = priv->mdev; uint8_t temp[MLX5E_MAX_PRIORITY]; int err; int i; PRIV_LOCK(priv); err = SYSCTL_OUT(req, priv->params_ethtool.prio_tc, MLX5E_MAX_PRIORITY); if (err || !req->newptr) goto done; err = SYSCTL_IN(req, temp, MLX5E_MAX_PRIORITY); if (err) goto done; for (i = 0; i != MLX5E_MAX_PRIORITY; i++) { if (temp[i] > mlx5_max_tc(mdev)) { err = ERANGE; goto done; } } for (i = 0; i != MLX5E_MAX_PRIORITY; i++) { if (temp[i] == priv->params_ethtool.prio_tc[i]) continue; err = -mlx5_set_port_prio_tc(mdev, i, temp[i]); if (err) goto done; /* update cached value */ priv->params_ethtool.prio_tc[i] = temp[i]; } done: PRIV_UNLOCK(priv); return (err); } static int mlx5e_trust_state_handler(SYSCTL_HANDLER_ARGS) { 
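	/*
	 * This handler follows the same sysctl pattern as the others in
	 * this file: copy out the cached value, let sysctl_handle_8()
	 * process the request, return early on reads or unchanged
	 * input, validate the new value, push it to firmware, and only
	 * then update the cached copy.
	 */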
struct mlx5e_priv *priv = arg1; struct mlx5_core_dev *mdev = priv->mdev; int err; u8 result; PRIV_LOCK(priv); result = priv->params_ethtool.trust_state; err = sysctl_handle_8(oidp, &result, 0, req); if (err || !req->newptr || result == priv->params_ethtool.trust_state) goto done; switch (result) { case MLX5_QPTS_TRUST_PCP: case MLX5_QPTS_TRUST_DSCP: break; case MLX5_QPTS_TRUST_BOTH: if (!MLX5_CAP_QCAM_FEATURE(mdev, qpts_trust_both)) { err = EOPNOTSUPP; goto done; } break; default: err = ERANGE; goto done; } err = -mlx5_set_trust_state(mdev, result); if (err) goto done; priv->params_ethtool.trust_state = result; /* update inline mode */ mlx5e_refresh_sq_inline(priv); #ifdef RATELIMIT mlx5e_rl_refresh_sq_inline(&priv->rl); #endif done: PRIV_UNLOCK(priv); return (err); } static int mlx5e_dscp_prio_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv = arg1; int prio_index = arg2; struct mlx5_core_dev *mdev = priv->mdev; uint8_t dscp2prio[MLX5_MAX_SUPPORTED_DSCP]; uint8_t x; int err; PRIV_LOCK(priv); err = SYSCTL_OUT(req, priv->params_ethtool.dscp2prio + prio_index, sizeof(priv->params_ethtool.dscp2prio) / 8); if (err || !req->newptr) goto done; memcpy(dscp2prio, priv->params_ethtool.dscp2prio, sizeof(dscp2prio)); err = SYSCTL_IN(req, dscp2prio + prio_index, sizeof(dscp2prio) / 8); if (err) goto done; for (x = 0; x != MLX5_MAX_SUPPORTED_DSCP; x++) { if (dscp2prio[x] > 7) { err = ERANGE; goto done; } } err = -mlx5_set_dscp2prio(mdev, dscp2prio); if (err) goto done; /* update local array */ memcpy(priv->params_ethtool.dscp2prio, dscp2prio, sizeof(priv->params_ethtool.dscp2prio)); done: PRIV_UNLOCK(priv); return (err); } int mlx5e_update_buf_lossy(struct mlx5e_priv *priv) { struct ieee_pfc pfc; PRIV_ASSERT_LOCKED(priv); bzero(&pfc, sizeof(pfc)); pfc.pfc_en = priv->params.rx_priority_flow_control; return (-mlx5e_port_manual_buffer_config(priv, MLX5E_PORT_BUFFER_PFC, priv->params_ethtool.hw_mtu, &pfc, NULL, NULL)); } static int mlx5e_buf_size_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv; u32 buf_size[MLX5E_MAX_BUFFER]; struct mlx5e_port_buffer port_buffer; int error, i; priv = arg1; PRIV_LOCK(priv); error = -mlx5e_port_query_buffer(priv, &port_buffer); if (error != 0) goto done; for (i = 0; i < nitems(buf_size); i++) buf_size[i] = port_buffer.buffer[i].size; error = SYSCTL_OUT(req, buf_size, sizeof(buf_size)); if (error != 0 || req->newptr == NULL) goto done; error = SYSCTL_IN(req, buf_size, sizeof(buf_size)); if (error != 0) goto done; error = -mlx5e_port_manual_buffer_config(priv, MLX5E_PORT_BUFFER_SIZE, priv->params_ethtool.hw_mtu, NULL, buf_size, NULL); done: PRIV_UNLOCK(priv); return (error); } static int mlx5e_buf_prio_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv; struct mlx5_core_dev *mdev; u8 buffer[MLX5E_MAX_BUFFER]; int error; priv = arg1; mdev = priv->mdev; PRIV_LOCK(priv); error = -mlx5e_port_query_priority2buffer(mdev, buffer); if (error != 0) goto done; error = SYSCTL_OUT(req, buffer, MLX5E_MAX_BUFFER); if (error != 0 || req->newptr == NULL) goto done; error = SYSCTL_IN(req, buffer, MLX5E_MAX_BUFFER); if (error != 0) goto done; error = -mlx5e_port_manual_buffer_config(priv, MLX5E_PORT_BUFFER_PRIO2BUFFER, priv->params_ethtool.hw_mtu, NULL, NULL, buffer); if (error == 0) error = mlx5e_update_buf_lossy(priv); done: PRIV_UNLOCK(priv); return (error); } static int mlx5e_cable_length_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv; u_int cable_len; int error; priv = arg1; PRIV_LOCK(priv); cable_len = priv->dcbx.cable_len; error = sysctl_handle_int(oidp, 
&cable_len, 0, req); if (error == 0 && req->newptr != NULL && cable_len != priv->dcbx.cable_len) { error = -mlx5e_port_manual_buffer_config(priv, MLX5E_PORT_BUFFER_CABLE_LEN, priv->params_ethtool.hw_mtu, NULL, NULL, NULL); if (error == 0) priv->dcbx.cable_len = cable_len; } PRIV_UNLOCK(priv); return (error); } #define MLX5_PARAM_OFFSET(n) \ __offsetof(struct mlx5e_priv, params_ethtool.n) static int mlx5e_ethtool_handler(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv = arg1; uint64_t value; int mode_modify; int was_opened; int error; PRIV_LOCK(priv); value = priv->params_ethtool.arg[arg2]; if (req != NULL) { error = sysctl_handle_64(oidp, &value, 0, req); if (error || req->newptr == NULL || value == priv->params_ethtool.arg[arg2]) goto done; /* assign new value */ priv->params_ethtool.arg[arg2] = value; } else { error = 0; } /* check if device is gone */ if (priv->gone) { error = ENXIO; goto done; } was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state); mode_modify = MLX5_CAP_GEN(priv->mdev, cq_period_mode_modify); switch (MLX5_PARAM_OFFSET(arg[arg2])) { case MLX5_PARAM_OFFSET(rx_coalesce_usecs): /* import RX coal time */ if (priv->params_ethtool.rx_coalesce_usecs < 1) priv->params_ethtool.rx_coalesce_usecs = 0; else if (priv->params_ethtool.rx_coalesce_usecs > MLX5E_FLD_MAX(cqc, cq_period)) { priv->params_ethtool.rx_coalesce_usecs = MLX5E_FLD_MAX(cqc, cq_period); } priv->params.rx_cq_moderation_usec = priv->params_ethtool.rx_coalesce_usecs; /* check to avoid down and up the network interface */ if (was_opened) error = mlx5e_refresh_channel_params(priv); break; case MLX5_PARAM_OFFSET(rx_coalesce_pkts): /* import RX coal pkts */ if (priv->params_ethtool.rx_coalesce_pkts < 1) priv->params_ethtool.rx_coalesce_pkts = 0; else if (priv->params_ethtool.rx_coalesce_pkts > MLX5E_FLD_MAX(cqc, cq_max_count)) { priv->params_ethtool.rx_coalesce_pkts = MLX5E_FLD_MAX(cqc, cq_max_count); } priv->params.rx_cq_moderation_pkts = priv->params_ethtool.rx_coalesce_pkts; /* check to avoid down and up the network interface */ if (was_opened) error = mlx5e_refresh_channel_params(priv); break; case MLX5_PARAM_OFFSET(tx_coalesce_usecs): /* import TX coal time */ if (priv->params_ethtool.tx_coalesce_usecs < 1) priv->params_ethtool.tx_coalesce_usecs = 0; else if (priv->params_ethtool.tx_coalesce_usecs > MLX5E_FLD_MAX(cqc, cq_period)) { priv->params_ethtool.tx_coalesce_usecs = MLX5E_FLD_MAX(cqc, cq_period); } priv->params.tx_cq_moderation_usec = priv->params_ethtool.tx_coalesce_usecs; /* check to avoid down and up the network interface */ if (was_opened) error = mlx5e_refresh_channel_params(priv); break; case MLX5_PARAM_OFFSET(tx_coalesce_pkts): /* import TX coal pkts */ if (priv->params_ethtool.tx_coalesce_pkts < 1) priv->params_ethtool.tx_coalesce_pkts = 0; else if (priv->params_ethtool.tx_coalesce_pkts > MLX5E_FLD_MAX(cqc, cq_max_count)) { priv->params_ethtool.tx_coalesce_pkts = MLX5E_FLD_MAX(cqc, cq_max_count); } priv->params.tx_cq_moderation_pkts = priv->params_ethtool.tx_coalesce_pkts; /* check to avoid down and up the network interface */ if (was_opened) error = mlx5e_refresh_channel_params(priv); break; case MLX5_PARAM_OFFSET(tx_queue_size): /* network interface must be down */ if (was_opened) mlx5e_close_locked(priv->ifp); /* import TX queue size */ if (priv->params_ethtool.tx_queue_size < (1 << MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)) { priv->params_ethtool.tx_queue_size = (1 << MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE); } else if (priv->params_ethtool.tx_queue_size > priv->params_ethtool.tx_queue_size_max) { 
priv->params_ethtool.tx_queue_size = priv->params_ethtool.tx_queue_size_max; } /* store actual TX queue size */ priv->params.log_sq_size = order_base_2(priv->params_ethtool.tx_queue_size); priv->params_ethtool.tx_queue_size = 1 << priv->params.log_sq_size; /* verify TX completion factor */ mlx5e_ethtool_sync_tx_completion_fact(priv); /* restart network interface, if any */ if (was_opened) mlx5e_open_locked(priv->ifp); break; case MLX5_PARAM_OFFSET(rx_queue_size): /* network interface must be down */ if (was_opened) mlx5e_close_locked(priv->ifp); /* import RX queue size */ if (priv->params_ethtool.rx_queue_size < (1 << MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE)) { priv->params_ethtool.rx_queue_size = (1 << MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE); } else if (priv->params_ethtool.rx_queue_size > priv->params_ethtool.rx_queue_size_max) { priv->params_ethtool.rx_queue_size = priv->params_ethtool.rx_queue_size_max; } /* store actual RX queue size */ priv->params.log_rq_size = order_base_2(priv->params_ethtool.rx_queue_size); priv->params_ethtool.rx_queue_size = 1 << priv->params.log_rq_size; /* update least number of RX WQEs */ priv->params.min_rx_wqes = min( priv->params_ethtool.rx_queue_size - 1, MLX5E_PARAMS_DEFAULT_MIN_RX_WQES); /* restart network interface, if any */ if (was_opened) mlx5e_open_locked(priv->ifp); break; case MLX5_PARAM_OFFSET(channels_rsss): /* network interface must be down */ if (was_opened) mlx5e_close_locked(priv->ifp); /* import number of channels */ if (priv->params_ethtool.channels_rsss < 1) priv->params_ethtool.channels_rsss = 1; else if (priv->params_ethtool.channels_rsss > 128) priv->params_ethtool.channels_rsss = 128; priv->params.channels_rsss = priv->params_ethtool.channels_rsss; /* restart network interface, if any */ if (was_opened) mlx5e_open_locked(priv->ifp); break; case MLX5_PARAM_OFFSET(channels): /* network interface must be down */ if (was_opened) mlx5e_close_locked(priv->ifp); /* import number of channels */ if (priv->params_ethtool.channels < 1) priv->params_ethtool.channels = 1; else if (priv->params_ethtool.channels > (u64) priv->mdev->priv.eq_table.num_comp_vectors) { priv->params_ethtool.channels = (u64) priv->mdev->priv.eq_table.num_comp_vectors; } priv->params.num_channels = priv->params_ethtool.channels; /* restart network interface, if any */ if (was_opened) mlx5e_open_locked(priv->ifp); break; case MLX5_PARAM_OFFSET(rx_coalesce_mode): /* network interface must be down */ if (was_opened != 0 && mode_modify == 0) mlx5e_close_locked(priv->ifp); /* import RX coalesce mode */ if (priv->params_ethtool.rx_coalesce_mode > 3) priv->params_ethtool.rx_coalesce_mode = 3; priv->params.rx_cq_moderation_mode = priv->params_ethtool.rx_coalesce_mode; /* restart network interface, if any */ if (was_opened != 0) { if (mode_modify == 0) mlx5e_open_locked(priv->ifp); else error = mlx5e_refresh_channel_params(priv); } break; case MLX5_PARAM_OFFSET(tx_coalesce_mode): /* network interface must be down */ if (was_opened != 0 && mode_modify == 0) mlx5e_close_locked(priv->ifp); /* import TX coalesce mode */ if (priv->params_ethtool.tx_coalesce_mode != 0) priv->params_ethtool.tx_coalesce_mode = 1; priv->params.tx_cq_moderation_mode = priv->params_ethtool.tx_coalesce_mode; /* restart network interface, if any */ if (was_opened != 0) { if (mode_modify == 0) mlx5e_open_locked(priv->ifp); else error = mlx5e_refresh_channel_params(priv); } break; case MLX5_PARAM_OFFSET(hw_lro): /* network interface must be down */ if (was_opened) mlx5e_close_locked(priv->ifp); /* import HW LRO mode */ if 
(priv->params_ethtool.hw_lro != 0 && MLX5_CAP_ETH(priv->mdev, lro_cap)) { priv->params_ethtool.hw_lro = 1; /* check if feature should actually be enabled */ if (priv->ifp->if_capenable & IFCAP_LRO) { priv->params.hw_lro_en = true; } else { priv->params.hw_lro_en = false; mlx5_en_warn(priv->ifp, "To enable HW LRO " "please also enable LRO via ifconfig(8).\n"); } } else { /* return an error if HW does not support this feature */ if (priv->params_ethtool.hw_lro != 0) error = EINVAL; priv->params.hw_lro_en = false; priv->params_ethtool.hw_lro = 0; } /* restart network interface, if any */ if (was_opened) mlx5e_open_locked(priv->ifp); break; case MLX5_PARAM_OFFSET(cqe_zipping): /* network interface must be down */ if (was_opened) mlx5e_close_locked(priv->ifp); /* import CQE zipping mode */ if (priv->params_ethtool.cqe_zipping && MLX5_CAP_GEN(priv->mdev, cqe_compression)) { priv->params.cqe_zipping_en = true; priv->params_ethtool.cqe_zipping = 1; } else { priv->params.cqe_zipping_en = false; priv->params_ethtool.cqe_zipping = 0; } /* restart network interface, if any */ if (was_opened) mlx5e_open_locked(priv->ifp); break; case MLX5_PARAM_OFFSET(tx_completion_fact): /* network interface must be down */ if (was_opened) mlx5e_close_locked(priv->ifp); /* verify parameter */ mlx5e_ethtool_sync_tx_completion_fact(priv); /* restart network interface, if any */ if (was_opened) mlx5e_open_locked(priv->ifp); break; case MLX5_PARAM_OFFSET(modify_tx_dma): /* check if network interface is opened */ if (was_opened) { priv->params_ethtool.modify_tx_dma = priv->params_ethtool.modify_tx_dma ? 1 : 0; /* modify tx according to value */ mlx5e_modify_tx_dma(priv, value != 0); } else { /* if closed force enable tx */ priv->params_ethtool.modify_tx_dma = 0; } break; case MLX5_PARAM_OFFSET(modify_rx_dma): /* check if network interface is opened */ if (was_opened) { priv->params_ethtool.modify_rx_dma = priv->params_ethtool.modify_rx_dma ? 1 : 0; /* modify rx according to value */ mlx5e_modify_rx_dma(priv, value != 0); } else { /* if closed force enable rx */ priv->params_ethtool.modify_rx_dma = 0; } break; case MLX5_PARAM_OFFSET(diag_pci_enable): priv->params_ethtool.diag_pci_enable = priv->params_ethtool.diag_pci_enable ? 1 : 0; error = -mlx5_core_set_diagnostics_full(priv->mdev, priv->params_ethtool.diag_pci_enable, priv->params_ethtool.diag_general_enable); break; case MLX5_PARAM_OFFSET(diag_general_enable): priv->params_ethtool.diag_general_enable = priv->params_ethtool.diag_general_enable ? 1 : 0; error = -mlx5_core_set_diagnostics_full(priv->mdev, priv->params_ethtool.diag_pci_enable, priv->params_ethtool.diag_general_enable); break; case MLX5_PARAM_OFFSET(mc_local_lb): priv->params_ethtool.mc_local_lb = priv->params_ethtool.mc_local_lb ? 1 : 0; if (MLX5_CAP_GEN(priv->mdev, disable_local_lb)) { error = mlx5_nic_vport_modify_local_lb(priv->mdev, MLX5_LOCAL_MC_LB, priv->params_ethtool.mc_local_lb); } else { error = EOPNOTSUPP; } break; case MLX5_PARAM_OFFSET(uc_local_lb): priv->params_ethtool.uc_local_lb = priv->params_ethtool.uc_local_lb ? 1 : 0; if (MLX5_CAP_GEN(priv->mdev, disable_local_lb)) { error = mlx5_nic_vport_modify_local_lb(priv->mdev, MLX5_LOCAL_UC_LB, priv->params_ethtool.uc_local_lb); } else { error = EOPNOTSUPP; } break; default: break; } done: PRIV_UNLOCK(priv); return (error); } -/* - * Read the first three bytes of the eeprom in order to get the needed info - * for the whole reading. 
- * Byte 0 - Identifier byte - * Byte 1 - Revision byte - * Byte 2 - Status byte - */ -static int -mlx5e_get_eeprom_info(struct mlx5e_priv *priv, struct mlx5e_eeprom *eeprom) -{ - struct mlx5_core_dev *dev = priv->mdev; - u32 data = 0; - int size_read = 0; - int ret; - - ret = mlx5_query_module_num(dev, &eeprom->module_num); - if (ret) { - mlx5_en_err(priv->ifp, "Failed query module error=%d\n", - ret); - return (ret); - } - - /* Read the first three bytes to get Identifier, Revision and Status */ - ret = mlx5_query_eeprom(dev, eeprom->i2c_addr, eeprom->page_num, - eeprom->device_addr, MLX5E_EEPROM_INFO_BYTES, eeprom->module_num, &data, - &size_read); - if (ret) { - mlx5_en_err(priv->ifp, - "Failed query eeprom module error=0x%x\n", ret); - return (ret); - } - - switch (data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK) { - case SFF_8024_ID_QSFP: - eeprom->type = MLX5E_ETH_MODULE_SFF_8436; - eeprom->len = MLX5E_ETH_MODULE_SFF_8436_LEN; - break; - case SFF_8024_ID_QSFPPLUS: - case SFF_8024_ID_QSFP28: - if ((data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK) == SFF_8024_ID_QSFP28 || - ((data & MLX5_EEPROM_REVISION_ID_BYTE_MASK) >> 8) >= 0x3) { - eeprom->type = MLX5E_ETH_MODULE_SFF_8636; - eeprom->len = MLX5E_ETH_MODULE_SFF_8636_LEN; - } else { - eeprom->type = MLX5E_ETH_MODULE_SFF_8436; - eeprom->len = MLX5E_ETH_MODULE_SFF_8436_LEN; - } - if ((data & MLX5_EEPROM_PAGE_3_VALID_BIT_MASK) == 0) - eeprom->page_valid = 1; - break; - case SFF_8024_ID_SFP: - eeprom->type = MLX5E_ETH_MODULE_SFF_8472; - eeprom->len = MLX5E_ETH_MODULE_SFF_8472_LEN; - break; - default: - mlx5_en_err(priv->ifp, - "Not recognized cable type = 0x%x(%s)\n", - data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK, - sff_8024_id[data & MLX5_EEPROM_IDENTIFIER_BYTE_MASK]); - return (EINVAL); - } - return (0); -} - -/* Read both low and high pages of the eeprom */ -static int -mlx5e_get_eeprom(struct mlx5e_priv *priv, struct mlx5e_eeprom *ee) -{ - struct mlx5_core_dev *dev = priv->mdev; - int size_read = 0; - int ret; - - if (ee->len == 0) - return (EINVAL); - - /* Read low page of the eeprom */ - while (ee->device_addr < ee->len) { - ret = mlx5_query_eeprom(dev, ee->i2c_addr, ee->page_num, ee->device_addr, - ee->len - ee->device_addr, ee->module_num, - ee->data + (ee->device_addr / 4), &size_read); - if (ret) { - mlx5_en_err(priv->ifp, - "Failed reading eeprom, error = 0x%02x\n",ret); - return (ret); - } - ee->device_addr += size_read; - } - - /* Read high page of the eeprom */ - if (ee->page_valid) { - ee->device_addr = MLX5E_EEPROM_HIGH_PAGE_OFFSET; - ee->page_num = MLX5E_EEPROM_HIGH_PAGE; - size_read = 0; - while (ee->device_addr < MLX5E_EEPROM_PAGE_LENGTH) { - ret = mlx5_query_eeprom(dev, ee->i2c_addr, ee->page_num, - ee->device_addr, MLX5E_EEPROM_PAGE_LENGTH - ee->device_addr, - ee->module_num, ee->data + (ee->len / 4) + - ((ee->device_addr - MLX5E_EEPROM_HIGH_PAGE_OFFSET) / 4), - &size_read); - if (ret) { - mlx5_en_err(priv->ifp, - "Failed reading eeprom, error = 0x%02x\n", - ret); - return (ret); - } - ee->device_addr += size_read; - } - } - return (0); -} - -static void -mlx5e_print_eeprom(struct mlx5e_eeprom *eeprom) -{ - int row; - int index_in_row; - int byte_to_write = 0; - int line_length = 16; - - printf("\nOffset\t\tValues\n"); - printf("------\t\t------"); - while (byte_to_write < eeprom->len) { - printf("\n0x%04X\t\t", byte_to_write); - for (index_in_row = 0; index_in_row < line_length; index_in_row++) { - printf("%02X ", ((u8 *)eeprom->data)[byte_to_write]); - byte_to_write++; - } - } - - if (eeprom->page_valid) { - row = 
MLX5E_EEPROM_HIGH_PAGE_OFFSET; - printf("\n\nUpper Page 0x03\n"); - printf("\nOffset\t\tValues\n"); - printf("------\t\t------"); - while (row < MLX5E_EEPROM_PAGE_LENGTH) { - printf("\n0x%04X\t\t", row); - for (index_in_row = 0; index_in_row < line_length; index_in_row++) { - printf("%02X ", ((u8 *)eeprom->data)[byte_to_write]); - byte_to_write++; - row++; - } - } - } -} - -/* - * Read cable EEPROM module information by first inspecting the first - * three bytes to get the initial information for a whole reading. - * Information will be printed to dmesg. - */ -static int -mlx5e_read_eeprom(SYSCTL_HANDLER_ARGS) -{ - struct mlx5e_priv *priv = arg1; - struct mlx5e_eeprom eeprom; - int error; - int result = 0; - - PRIV_LOCK(priv); - error = sysctl_handle_int(oidp, &result, 0, req); - if (error || !req->newptr) - goto done; - - /* Check if device is gone */ - if (priv->gone) { - error = ENXIO; - goto done; - } - - if (result == 1) { - eeprom.i2c_addr = MLX5E_I2C_ADDR_LOW; - eeprom.device_addr = 0; - eeprom.page_num = MLX5E_EEPROM_LOW_PAGE; - eeprom.page_valid = 0; - - /* Read three first bytes to get important info */ - error = mlx5e_get_eeprom_info(priv, &eeprom); - if (error) { - mlx5_en_err(priv->ifp, - "Failed reading eeprom's initial information\n"); - error = 0; - goto done; - } - /* - * Allocate needed length buffer and additional space for - * page 0x03 - */ - eeprom.data = malloc(eeprom.len + MLX5E_EEPROM_PAGE_LENGTH, - M_MLX5EN, M_WAITOK | M_ZERO); - - /* Read the whole eeprom information */ - error = mlx5e_get_eeprom(priv, &eeprom); - if (error) { - mlx5_en_err(priv->ifp, "Failed reading eeprom\n"); - error = 0; - /* - * Continue printing partial information in case of - * an error - */ - } - mlx5e_print_eeprom(&eeprom); - free(eeprom.data, M_MLX5EN); - } -done: - PRIV_UNLOCK(priv); - return (error); -} - static const char *mlx5e_params_desc[] = { MLX5E_PARAMS(MLX5E_STATS_DESC) }; static const char *mlx5e_port_stats_debug_desc[] = { MLX5E_PORT_STATS_DEBUG(MLX5E_STATS_DESC) }; static int mlx5e_ethtool_debug_channel_info(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv; struct sbuf sb; struct mlx5e_channel *c; struct mlx5e_sq *sq; struct mlx5e_rq *rq; int error, i, tc; bool opened; priv = arg1; error = sysctl_wire_old_buffer(req, 0); if (error != 0) return (error); if (sbuf_new_for_sysctl(&sb, NULL, 1024, req) == NULL) return (ENOMEM); sbuf_clear_flags(&sb, SBUF_INCLUDENUL); PRIV_LOCK(priv); opened = test_bit(MLX5E_STATE_OPENED, &priv->state); sbuf_printf(&sb, "pages irq %d\n", priv->mdev->priv.msix_arr[MLX5_EQ_VEC_PAGES].vector); sbuf_printf(&sb, "command irq %d\n", priv->mdev->priv.msix_arr[MLX5_EQ_VEC_CMD].vector); sbuf_printf(&sb, "async irq %d\n", priv->mdev->priv.msix_arr[MLX5_EQ_VEC_ASYNC].vector); for (i = 0; i != priv->params.num_channels; i++) { int eqn_not_used = -1; int irqn = MLX5_EQ_VEC_COMP_BASE; if (mlx5_vector2eqn(priv->mdev, i, &eqn_not_used, &irqn) != 0) continue; c = opened ? &priv->channel[i] : NULL; rq = opened ? &c->rq : NULL; sbuf_printf(&sb, "channel %d rq %d cq %d irq %d\n", i, opened ? rq->rqn : -1, opened ? rq->cq.mcq.cqn : -1, priv->mdev->priv.msix_arr[irqn].vector); for (tc = 0; tc != priv->num_tc; tc++) { sq = opened ? &c->sq[tc] : NULL; sbuf_printf(&sb, "channel %d tc %d sq %d cq %d irq %d\n", i, tc, opened ? sq->sqn : -1, opened ? 
sq->cq.mcq.cqn : -1, priv->mdev->priv.msix_arr[irqn].vector); } } PRIV_UNLOCK(priv); error = sbuf_finish(&sb); sbuf_delete(&sb); return (error); } static int mlx5e_ethtool_debug_stats(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv = arg1; int sys_debug; int error; PRIV_LOCK(priv); if (priv->gone != 0) { error = ENODEV; goto done; } sys_debug = priv->sysctl_debug; error = sysctl_handle_int(oidp, &sys_debug, 0, req); if (error != 0 || !req->newptr) goto done; sys_debug = sys_debug ? 1 : 0; if (sys_debug == priv->sysctl_debug) goto done; if ((priv->sysctl_debug = sys_debug)) { mlx5e_create_stats(&priv->stats.port_stats_debug.ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), "debug_stats", mlx5e_port_stats_debug_desc, MLX5E_PORT_STATS_DEBUG_NUM, priv->stats.port_stats_debug.arg); SYSCTL_ADD_PROC(&priv->stats.port_stats_debug.ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "hw_ctx_debug", CTLFLAG_RD | CTLFLAG_MPSAFE | CTLTYPE_STRING, priv, 0, mlx5e_ethtool_debug_channel_info, "S", ""); } else { sysctl_ctx_free(&priv->stats.port_stats_debug.ctx); } done: PRIV_UNLOCK(priv); return (error); } static void mlx5e_create_diagnostics(struct mlx5e_priv *priv) { struct mlx5_core_diagnostics_entry entry; struct sysctl_ctx_list *ctx; struct sysctl_oid *node; int x; /* sysctl context we are using */ ctx = &priv->sysctl_ctx; /* create root node */ node = SYSCTL_ADD_NODE(ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "diagnostics", CTLFLAG_RD, NULL, "Diagnostics"); if (node == NULL) return; /* create PCI diagnostics */ for (x = 0; x != MLX5_CORE_PCI_DIAGNOSTICS_NUM; x++) { entry = mlx5_core_pci_diagnostics_table[x]; if (mlx5_core_supports_diagnostics(priv->mdev, entry.counter_id) == 0) continue; SYSCTL_ADD_UQUAD(ctx, SYSCTL_CHILDREN(node), OID_AUTO, entry.desc, CTLFLAG_RD, priv->params_pci.array + x, "PCI diagnostics counter"); } /* create general diagnostics */ for (x = 0; x != MLX5_CORE_GENERAL_DIAGNOSTICS_NUM; x++) { entry = mlx5_core_general_diagnostics_table[x]; if (mlx5_core_supports_diagnostics(priv->mdev, entry.counter_id) == 0) continue; SYSCTL_ADD_UQUAD(ctx, SYSCTL_CHILDREN(node), OID_AUTO, entry.desc, CTLFLAG_RD, priv->params_general.array + x, "General diagnostics counter"); } } void mlx5e_create_ethtool(struct mlx5e_priv *priv) { struct sysctl_oid *node, *qos_node; const char *pnameunit; struct mlx5e_port_buffer port_buffer; unsigned x; int i; /* set some defaults */ priv->params_ethtool.tx_queue_size_max = 1 << MLX5E_PARAMS_MAXIMUM_LOG_SQ_SIZE; priv->params_ethtool.rx_queue_size_max = 1 << MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE; priv->params_ethtool.tx_queue_size = 1 << priv->params.log_sq_size; priv->params_ethtool.rx_queue_size = 1 << priv->params.log_rq_size; priv->params_ethtool.channels = priv->params.num_channels; priv->params_ethtool.channels_rsss = priv->params.channels_rsss; priv->params_ethtool.coalesce_pkts_max = MLX5E_FLD_MAX(cqc, cq_max_count); priv->params_ethtool.coalesce_usecs_max = MLX5E_FLD_MAX(cqc, cq_period); priv->params_ethtool.rx_coalesce_mode = priv->params.rx_cq_moderation_mode; priv->params_ethtool.rx_coalesce_usecs = priv->params.rx_cq_moderation_usec; priv->params_ethtool.rx_coalesce_pkts = priv->params.rx_cq_moderation_pkts; priv->params_ethtool.tx_coalesce_mode = priv->params.tx_cq_moderation_mode; priv->params_ethtool.tx_coalesce_usecs = priv->params.tx_cq_moderation_usec; priv->params_ethtool.tx_coalesce_pkts = priv->params.tx_cq_moderation_pkts; priv->params_ethtool.hw_lro = priv->params.hw_lro_en; priv->params_ethtool.cqe_zipping = priv->params.cqe_zipping_en; 
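	/*
	 * The defaults above mirror the active priv->params values, so
	 * the "conf" sysctl tree starts out consistent with the running
	 * configuration; mlx5e_ethtool_handler() later propagates any
	 * sysctl writes back into priv->params.
	 */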
mlx5e_ethtool_sync_tx_completion_fact(priv); /* get default values for local loopback, if any */ if (MLX5_CAP_GEN(priv->mdev, disable_local_lb)) { int err; u8 val; err = mlx5_nic_vport_query_local_lb(priv->mdev, MLX5_LOCAL_MC_LB, &val); if (err == 0) priv->params_ethtool.mc_local_lb = val; err = mlx5_nic_vport_query_local_lb(priv->mdev, MLX5_LOCAL_UC_LB, &val); if (err == 0) priv->params_ethtool.uc_local_lb = val; } /* create root node */ node = SYSCTL_ADD_NODE(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "conf", CTLFLAG_RW, NULL, "Configuration"); if (node == NULL) return; for (x = 0; x != MLX5E_PARAMS_NUM; x++) { /* check for read-only parameter */ if (strstr(mlx5e_params_desc[2 * x], "_max") != NULL || strstr(mlx5e_params_desc[2 * x], "_mtu") != NULL) { SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(node), OID_AUTO, mlx5e_params_desc[2 * x], CTLTYPE_U64 | CTLFLAG_RD | CTLFLAG_MPSAFE, priv, x, &mlx5e_ethtool_handler, "QU", mlx5e_params_desc[2 * x + 1]); } else { #if (__FreeBSD_version < 1100000) char path[64]; #endif /* * NOTE: In FreeBSD-11 and newer the * CTLFLAG_RWTUN flag will take care of * loading default sysctl value from the * kernel environment, if any: */ SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(node), OID_AUTO, mlx5e_params_desc[2 * x], CTLTYPE_U64 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE, priv, x, &mlx5e_ethtool_handler, "QU", mlx5e_params_desc[2 * x + 1]); #if (__FreeBSD_version < 1100000) /* compute path for sysctl */ snprintf(path, sizeof(path), "dev.mce.%d.conf.%s", device_get_unit(priv->mdev->pdev->dev.bsddev), mlx5e_params_desc[2 * x]); /* try to fetch tunable, if any */ if (TUNABLE_QUAD_FETCH(path, &priv->params_ethtool.arg[x])) mlx5e_ethtool_handler(NULL, priv, x, NULL); #endif } } SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(node), OID_AUTO, "debug_stats", CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE, priv, 0, &mlx5e_ethtool_debug_stats, "I", "Extended debug statistics"); pnameunit = device_get_nameunit(priv->mdev->pdev->dev.bsddev); SYSCTL_ADD_STRING(&priv->sysctl_ctx, SYSCTL_CHILDREN(node), OID_AUTO, "device_name", CTLFLAG_RD, __DECONST(void *, pnameunit), 0, "PCI device name"); - - /* EEPROM support */ - SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(node), OID_AUTO, "eeprom_info", - CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE, priv, 0, - mlx5e_read_eeprom, "I", "EEPROM information"); /* Diagnostics support */ mlx5e_create_diagnostics(priv); /* create qos node */ qos_node = SYSCTL_ADD_NODE(&priv->sysctl_ctx, SYSCTL_CHILDREN(node), OID_AUTO, "qos", CTLFLAG_RW, NULL, "Quality Of Service configuration"); if (qos_node == NULL) return; /* Priority rate limit support */ if (mlx5e_getmaxrate(priv) == 0) { SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node), OID_AUTO, "tc_max_rate", CTLTYPE_U64 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE, priv, 0, mlx5e_tc_maxrate_handler, "QU", "Max rate for priority, specified in kilobits, where kilo=1000, " "max_rate must be divisible by 100000"); } /* Bandwidth limiting by ratio */ if (mlx5e_get_max_alloc(priv) == 0) { SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node), OID_AUTO, "tc_rate_share", CTLTYPE_U8 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE, priv, 0, mlx5e_tc_rate_share_handler, "QU", "Specify bandwidth ratio from 1 to 100 " "for the available traffic classes"); } /* Priority to traffic class mapping */ if (mlx5e_get_prio_tc(priv) == 0) { SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node), OID_AUTO, "prio_0_7_tc", CTLTYPE_U8 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE, priv, 0, 
	    mlx5e_prio_to_tc_handler, "CU",
	    "Set traffic class 0 to 7 for priority 0 to 7 inclusively");
	}

	/* DSCP support */
	if (mlx5e_get_dscp(priv) == 0) {
		for (i = 0; i != MLX5_MAX_SUPPORTED_DSCP; i += 8) {
			char name[32];
			snprintf(name, sizeof(name), "dscp_%d_%d_prio", i, i + 7);
			SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node),
			    OID_AUTO, name, CTLTYPE_U8 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
			    priv, i, mlx5e_dscp_prio_handler, "CU",
			    "Set DSCP to priority mapping, 0..7");
		}
#define	A	"Set trust state, 1:PCP 2:DSCP"
#define	B	" 3:BOTH"
		SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node),
		    OID_AUTO, "trust_state", CTLTYPE_U8 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
		    priv, 0, mlx5e_trust_state_handler, "CU",
		    MLX5_CAP_QCAM_FEATURE(priv->mdev, qpts_trust_both) ? A B : A);
#undef B
#undef A
	}

	if (mlx5e_port_query_buffer(priv, &port_buffer) == 0) {
		SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node),
		    OID_AUTO, "buffers_size", CTLTYPE_U32 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
		    priv, 0, mlx5e_buf_size_handler, "IU", "Set buffer sizes");
		SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node),
		    OID_AUTO, "buffers_prio", CTLTYPE_U8 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
		    priv, 0, mlx5e_buf_prio_handler, "CU", "Set priority to buffer mapping");
		SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(qos_node),
		    OID_AUTO, "cable_length", CTLTYPE_UINT | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
		    priv, 0, mlx5e_cable_length_handler, "IU",
		    "Set cable length in meters for xoff threshold calculation");
	}
}

Index: stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_main.c
===================================================================
--- stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_main.c	(revision 353238)
+++ stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_main.c	(revision 353239)
@@ -1,4477 +1,4477 @@
/*-
 * Copyright (c) 2015-2018 Mellanox Technologies. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
Index: stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_main.c =================================================================== --- stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_main.c (revision 353238) +++ stable/12/sys/dev/mlx5/mlx5_en/mlx5_en_main.c (revision 353239) @@ -1,4477 +1,4477 @@ /*- * Copyright (c) 2015-2018 Mellanox Technologies. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE.
* * $FreeBSD$ */ #include "en.h" #include <sys/sockio.h> #include <machine/atomic.h> #ifndef ETH_DRIVER_VERSION #define ETH_DRIVER_VERSION "3.5.1" #endif #define DRIVER_RELDATE "April 2019" static const char mlx5e_version[] = "mlx5en: Mellanox Ethernet driver " ETH_DRIVER_VERSION " (" DRIVER_RELDATE ")\n"; static int mlx5e_get_wqe_sz(struct mlx5e_priv *priv, u32 *wqe_sz, u32 *nsegs); struct mlx5e_channel_param { struct mlx5e_rq_param rq; struct mlx5e_sq_param sq; struct mlx5e_cq_param rx_cq; struct mlx5e_cq_param tx_cq; }; struct media { u32 subtype; u64 baudrate; }; static const struct media mlx5e_mode_table[MLX5E_LINK_SPEEDS_NUMBER][MLX5E_LINK_MODES_NUMBER] = { [MLX5E_1000BASE_CX_SGMII][MLX5E_SGMII] = { .subtype = IFM_1000_CX_SGMII, .baudrate = IF_Mbps(1000ULL), }, [MLX5E_1000BASE_KX][MLX5E_KX] = { .subtype = IFM_1000_KX, .baudrate = IF_Mbps(1000ULL), }, [MLX5E_10GBASE_CX4][MLX5E_CX4] = { .subtype = IFM_10G_CX4, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_KX4][MLX5E_KX4] = { .subtype = IFM_10G_KX4, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_KR][MLX5E_KR] = { .subtype = IFM_10G_KR, .baudrate = IF_Gbps(10ULL), }, [MLX5E_20GBASE_KR2][MLX5E_KR2] = { .subtype = IFM_20G_KR2, .baudrate = IF_Gbps(20ULL), }, [MLX5E_40GBASE_CR4][MLX5E_CR4] = { .subtype = IFM_40G_CR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_40GBASE_KR4][MLX5E_KR4] = { .subtype = IFM_40G_KR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_56GBASE_R4][MLX5E_R] = { .subtype = IFM_56G_R4, .baudrate = IF_Gbps(56ULL), }, [MLX5E_10GBASE_CR][MLX5E_CR1] = { .subtype = IFM_10G_CR1, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_SR][MLX5E_SR] = { .subtype = IFM_10G_SR, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_ER_LR][MLX5E_ER] = { .subtype = IFM_10G_ER, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_ER_LR][MLX5E_LR] = { .subtype = IFM_10G_LR, .baudrate = IF_Gbps(10ULL), }, [MLX5E_40GBASE_SR4][MLX5E_SR4] = { .subtype = IFM_40G_SR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_40GBASE_LR4_ER4][MLX5E_LR4] = { .subtype = IFM_40G_LR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_40GBASE_LR4_ER4][MLX5E_ER4] = { .subtype = IFM_40G_ER4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_100GBASE_CR4][MLX5E_CR4] = { .subtype = IFM_100G_CR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_100GBASE_SR4][MLX5E_SR4] = { .subtype = IFM_100G_SR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_100GBASE_KR4][MLX5E_KR4] = { .subtype = IFM_100G_KR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_100GBASE_LR4][MLX5E_LR4] = { .subtype = IFM_100G_LR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_100BASE_TX][MLX5E_TX] = { .subtype = IFM_100_TX, .baudrate = IF_Mbps(100ULL), }, [MLX5E_1000BASE_T][MLX5E_T] = { .subtype = IFM_1000_T, .baudrate = IF_Mbps(1000ULL), }, [MLX5E_10GBASE_T][MLX5E_T] = { .subtype = IFM_10G_T, .baudrate = IF_Gbps(10ULL), }, [MLX5E_25GBASE_CR][MLX5E_CR] = { .subtype = IFM_25G_CR, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GBASE_KR][MLX5E_KR] = { .subtype = IFM_25G_KR, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GBASE_SR][MLX5E_SR] = { .subtype = IFM_25G_SR, .baudrate = IF_Gbps(25ULL), }, [MLX5E_50GBASE_CR2][MLX5E_CR2] = { .subtype = IFM_50G_CR2, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GBASE_KR2][MLX5E_KR2] = { .subtype = IFM_50G_KR2, .baudrate = IF_Gbps(50ULL), }, }; static const struct media mlx5e_ext_mode_table[MLX5E_EXT_LINK_SPEEDS_NUMBER][MLX5E_LINK_MODES_NUMBER] = { [MLX5E_SGMII_100M][MLX5E_SGMII] = { .subtype = IFM_100_SGMII, .baudrate = IF_Mbps(100), }, [MLX5E_1000BASE_X_SGMII][MLX5E_KX] = { .subtype = IFM_1000_KX, .baudrate = IF_Mbps(1000), }, [MLX5E_1000BASE_X_SGMII][MLX5E_CX_SGMII] = { .subtype = IFM_1000_CX_SGMII, .baudrate = 
IF_Mbps(1000), }, [MLX5E_1000BASE_X_SGMII][MLX5E_CX] = { .subtype = IFM_1000_CX, .baudrate = IF_Mbps(1000), }, [MLX5E_1000BASE_X_SGMII][MLX5E_LX] = { .subtype = IFM_1000_LX, .baudrate = IF_Mbps(1000), }, [MLX5E_1000BASE_X_SGMII][MLX5E_SX] = { .subtype = IFM_1000_SX, .baudrate = IF_Mbps(1000), }, [MLX5E_1000BASE_X_SGMII][MLX5E_T] = { .subtype = IFM_1000_T, .baudrate = IF_Mbps(1000), }, [MLX5E_5GBASE_R][MLX5E_T] = { .subtype = IFM_5000_T, .baudrate = IF_Mbps(5000), }, [MLX5E_5GBASE_R][MLX5E_KR] = { .subtype = IFM_5000_KR, .baudrate = IF_Mbps(5000), }, [MLX5E_5GBASE_R][MLX5E_KR1] = { .subtype = IFM_5000_KR1, .baudrate = IF_Mbps(5000), }, [MLX5E_5GBASE_R][MLX5E_KR_S] = { .subtype = IFM_5000_KR_S, .baudrate = IF_Mbps(5000), }, [MLX5E_10GBASE_XFI_XAUI_1][MLX5E_ER] = { .subtype = IFM_10G_ER, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_XFI_XAUI_1][MLX5E_KR] = { .subtype = IFM_10G_KR, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_XFI_XAUI_1][MLX5E_LR] = { .subtype = IFM_10G_LR, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_XFI_XAUI_1][MLX5E_SR] = { .subtype = IFM_10G_SR, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_XFI_XAUI_1][MLX5E_T] = { .subtype = IFM_10G_T, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_XFI_XAUI_1][MLX5E_AOC] = { .subtype = IFM_10G_AOC, .baudrate = IF_Gbps(10ULL), }, [MLX5E_10GBASE_XFI_XAUI_1][MLX5E_CR1] = { .subtype = IFM_10G_CR1, .baudrate = IF_Gbps(10ULL), }, [MLX5E_40GBASE_XLAUI_4_XLPPI_4][MLX5E_CR4] = { .subtype = IFM_40G_CR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_40GBASE_XLAUI_4_XLPPI_4][MLX5E_KR4] = { .subtype = IFM_40G_KR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_40GBASE_XLAUI_4_XLPPI_4][MLX5E_LR4] = { .subtype = IFM_40G_LR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_40GBASE_XLAUI_4_XLPPI_4][MLX5E_SR4] = { .subtype = IFM_40G_SR4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_40GBASE_XLAUI_4_XLPPI_4][MLX5E_ER4] = { .subtype = IFM_40G_ER4, .baudrate = IF_Gbps(40ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_CR] = { .subtype = IFM_25G_CR, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_KR] = { .subtype = IFM_25G_KR, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_SR] = { .subtype = IFM_25G_SR, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_ACC] = { .subtype = IFM_25G_ACC, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_AOC] = { .subtype = IFM_25G_AOC, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_CR1] = { .subtype = IFM_25G_CR1, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_CR_S] = { .subtype = IFM_25G_CR_S, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_KR1] = { .subtype = IFM_5000_KR1, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_KR_S] = { .subtype = IFM_25G_KR_S, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_LR] = { .subtype = IFM_25G_LR, .baudrate = IF_Gbps(25ULL), }, [MLX5E_25GAUI_1_25GBASE_CR_KR][MLX5E_T] = { .subtype = IFM_25G_T, .baudrate = IF_Gbps(25ULL), }, [MLX5E_50GAUI_2_LAUI_2_50GBASE_CR2_KR2][MLX5E_CR2] = { .subtype = IFM_50G_CR2, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GAUI_2_LAUI_2_50GBASE_CR2_KR2][MLX5E_KR2] = { .subtype = IFM_50G_KR2, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GAUI_2_LAUI_2_50GBASE_CR2_KR2][MLX5E_SR2] = { .subtype = IFM_50G_SR2, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GAUI_2_LAUI_2_50GBASE_CR2_KR2][MLX5E_LR2] = { .subtype = IFM_50G_LR2, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR][MLX5E_LR] = { .subtype = IFM_50G_LR, .baudrate = 
IF_Gbps(50ULL), }, [MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR][MLX5E_SR] = { .subtype = IFM_50G_SR, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR][MLX5E_CP] = { .subtype = IFM_50G_CP, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR][MLX5E_FR] = { .subtype = IFM_50G_FR, .baudrate = IF_Gbps(50ULL), }, [MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR][MLX5E_KR_PAM4] = { .subtype = IFM_50G_KR_PAM4, .baudrate = IF_Gbps(50ULL), }, [MLX5E_CAUI_4_100GBASE_CR4_KR4][MLX5E_CR4] = { .subtype = IFM_100G_CR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_CAUI_4_100GBASE_CR4_KR4][MLX5E_KR4] = { .subtype = IFM_100G_KR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_CAUI_4_100GBASE_CR4_KR4][MLX5E_LR4] = { .subtype = IFM_100G_LR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_CAUI_4_100GBASE_CR4_KR4][MLX5E_SR4] = { .subtype = IFM_100G_SR4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_100GAUI_2_100GBASE_CR2_KR2][MLX5E_SR2] = { .subtype = IFM_100G_SR2, .baudrate = IF_Gbps(100ULL), }, [MLX5E_100GAUI_2_100GBASE_CR2_KR2][MLX5E_CP2] = { .subtype = IFM_100G_CP2, .baudrate = IF_Gbps(100ULL), }, [MLX5E_100GAUI_2_100GBASE_CR2_KR2][MLX5E_KR2_PAM4] = { .subtype = IFM_100G_KR2_PAM4, .baudrate = IF_Gbps(100ULL), }, [MLX5E_200GAUI_4_200GBASE_CR4_KR4][MLX5E_DR4] = { .subtype = IFM_200G_DR4, .baudrate = IF_Gbps(200ULL), }, [MLX5E_200GAUI_4_200GBASE_CR4_KR4][MLX5E_LR4] = { .subtype = IFM_200G_LR4, .baudrate = IF_Gbps(200ULL), }, [MLX5E_200GAUI_4_200GBASE_CR4_KR4][MLX5E_SR4] = { .subtype = IFM_200G_SR4, .baudrate = IF_Gbps(200ULL), }, [MLX5E_200GAUI_4_200GBASE_CR4_KR4][MLX5E_FR4] = { .subtype = IFM_200G_FR4, .baudrate = IF_Gbps(200ULL), }, [MLX5E_200GAUI_4_200GBASE_CR4_KR4][MLX5E_CR4_PAM4] = { .subtype = IFM_200G_CR4_PAM4, .baudrate = IF_Gbps(200ULL), }, [MLX5E_200GAUI_4_200GBASE_CR4_KR4][MLX5E_KR4_PAM4] = { .subtype = IFM_200G_KR4_PAM4, .baudrate = IF_Gbps(200ULL), }, }; MALLOC_DEFINE(M_MLX5EN, "MLX5EN", "MLX5 Ethernet"); static void mlx5e_update_carrier(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; u32 out[MLX5_ST_SZ_DW(ptys_reg)]; u32 eth_proto_oper; int error; u8 port_state; u8 is_er_type; u8 i, j; bool ext; struct media media_entry = {}; port_state = mlx5_query_vport_state(mdev, MLX5_QUERY_VPORT_STATE_IN_OP_MOD_VNIC_VPORT, 0); if (port_state == VPORT_STATE_UP) { priv->media_status_last |= IFM_ACTIVE; } else { priv->media_status_last &= ~IFM_ACTIVE; priv->media_active_last = IFM_ETHER; if_link_state_change(priv->ifp, LINK_STATE_DOWN); return; } error = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN, 1); if (error) { priv->media_active_last = IFM_ETHER; priv->ifp->if_baudrate = 1; mlx5_en_err(priv->ifp, "query port ptys failed: 0x%x\n", error); return; } ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet); eth_proto_oper = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_oper); i = ilog2(eth_proto_oper); for (j = 0; j != MLX5E_LINK_MODES_NUMBER; j++) { media_entry = ext ? 
mlx5e_ext_mode_table[i][j] : mlx5e_mode_table[i][j]; if (media_entry.baudrate != 0) break; } if (media_entry.subtype == 0) { mlx5_en_err(priv->ifp, "Could not find operational media subtype\n"); return; } switch (media_entry.subtype) { case IFM_10G_ER: error = mlx5_query_pddr_range_info(mdev, 1, &is_er_type); if (error != 0) { mlx5_en_err(priv->ifp, "query port pddr failed: %d\n", error); } if (error != 0 || is_er_type == 0) media_entry.subtype = IFM_10G_LR; break; case IFM_40G_LR4: error = mlx5_query_pddr_range_info(mdev, 1, &is_er_type); if (error != 0) { mlx5_en_err(priv->ifp, "query port pddr failed: %d\n", error); } if (error == 0 && is_er_type != 0) media_entry.subtype = IFM_40G_ER4; break; } priv->media_active_last = media_entry.subtype | IFM_ETHER | IFM_FDX; priv->ifp->if_baudrate = media_entry.baudrate; if_link_state_change(priv->ifp, LINK_STATE_UP); } static void mlx5e_media_status(struct ifnet *dev, struct ifmediareq *ifmr) { struct mlx5e_priv *priv = dev->if_softc; ifmr->ifm_status = priv->media_status_last; ifmr->ifm_active = priv->media_active_last | (priv->params.rx_pauseframe_control ? IFM_ETH_RXPAUSE : 0) | (priv->params.tx_pauseframe_control ? IFM_ETH_TXPAUSE : 0); } static u32 mlx5e_find_link_mode(u32 subtype, bool ext) { u32 i; u32 j; u32 link_mode = 0; u32 speeds_num = 0; struct media media_entry = {}; switch (subtype) { case IFM_10G_LR: subtype = IFM_10G_ER; break; case IFM_40G_ER4: subtype = IFM_40G_LR4; break; } speeds_num = ext ? MLX5E_EXT_LINK_SPEEDS_NUMBER : MLX5E_LINK_SPEEDS_NUMBER; for (i = 0; i != speeds_num; i++) { for (j = 0; j < MLX5E_LINK_MODES_NUMBER ; ++j) { media_entry = ext ? mlx5e_ext_mode_table[i][j] : mlx5e_mode_table[i][j]; if (media_entry.baudrate == 0) continue; if (media_entry.subtype == subtype) { link_mode |= MLX5E_PROT_MASK(i); } } } return (link_mode); } static int mlx5e_set_port_pause_and_pfc(struct mlx5e_priv *priv) { return (mlx5_set_port_pause_and_pfc(priv->mdev, 1, priv->params.rx_pauseframe_control, priv->params.tx_pauseframe_control, priv->params.rx_priority_flow_control, priv->params.tx_priority_flow_control)); } static int mlx5e_set_port_pfc(struct mlx5e_priv *priv) { int error; if (priv->gone != 0) { error = -ENXIO; } else if (priv->params.rx_pauseframe_control || priv->params.tx_pauseframe_control) { mlx5_en_err(priv->ifp, "Global pauseframes must be disabled before enabling PFC.\n"); error = -EINVAL; } else { error = mlx5e_set_port_pause_and_pfc(priv); } return (error); } static int mlx5e_media_change(struct ifnet *dev) { struct mlx5e_priv *priv = dev->if_softc; struct mlx5_core_dev *mdev = priv->mdev; u32 eth_proto_cap; u32 link_mode; u32 out[MLX5_ST_SZ_DW(ptys_reg)]; int was_opened; int locked; int error; bool ext; locked = PRIV_LOCKED(priv); if (!locked) PRIV_LOCK(priv); if (IFM_TYPE(priv->media.ifm_media) != IFM_ETHER) { error = EINVAL; goto done; } error = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN, 1); if (error != 0) { mlx5_en_err(dev, "Query port media capability failed\n"); goto done; } ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet); link_mode = mlx5e_find_link_mode(IFM_SUBTYPE(priv->media.ifm_media), ext); /* query supported capabilities */ eth_proto_cap = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_capability); /* check for autoselect */ if (IFM_SUBTYPE(priv->media.ifm_media) == IFM_AUTO) { link_mode = eth_proto_cap; if (link_mode == 0) { mlx5_en_err(dev, "Port media capability is zero\n"); error = EINVAL; goto done; } } else { link_mode = link_mode & eth_proto_cap; if (link_mode == 0) { 
mlx5_en_err(dev, "Not supported link mode requested\n"); error = EINVAL; goto done; } } if (priv->media.ifm_media & (IFM_ETH_RXPAUSE | IFM_ETH_TXPAUSE)) { /* check if PFC is enabled */ if (priv->params.rx_priority_flow_control || priv->params.tx_priority_flow_control) { mlx5_en_err(dev, "PFC must be disabled before enabling global pauseframes.\n"); error = EINVAL; goto done; } } /* update pauseframe control bits */ priv->params.rx_pauseframe_control = (priv->media.ifm_media & IFM_ETH_RXPAUSE) ? 1 : 0; priv->params.tx_pauseframe_control = (priv->media.ifm_media & IFM_ETH_TXPAUSE) ? 1 : 0; /* check if device is opened */ was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state); /* reconfigure the hardware */ mlx5_set_port_status(mdev, MLX5_PORT_DOWN); mlx5_set_port_proto(mdev, link_mode, MLX5_PTYS_EN, ext); error = -mlx5e_set_port_pause_and_pfc(priv); if (was_opened) mlx5_set_port_status(mdev, MLX5_PORT_UP); done: if (!locked) PRIV_UNLOCK(priv); return (error); } static void mlx5e_update_carrier_work(struct work_struct *work) { struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv, update_carrier_work); PRIV_LOCK(priv); if (test_bit(MLX5E_STATE_OPENED, &priv->state)) mlx5e_update_carrier(priv); PRIV_UNLOCK(priv); } #define MLX5E_PCIE_PERF_GET_64(a,b,c,d,e,f) \ s_debug->c = MLX5_GET64(mpcnt_reg, out, counter_set.f.c); #define MLX5E_PCIE_PERF_GET_32(a,b,c,d,e,f) \ s_debug->c = MLX5_GET(mpcnt_reg, out, counter_set.f.c); static void mlx5e_update_pcie_counters(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; struct mlx5e_port_stats_debug *s_debug = &priv->stats.port_stats_debug; const unsigned sz = MLX5_ST_SZ_BYTES(mpcnt_reg); void *out; void *in; int err; /* allocate firmware request structures */ in = mlx5_vzalloc(sz); out = mlx5_vzalloc(sz); if (in == NULL || out == NULL) goto free_out; MLX5_SET(mpcnt_reg, in, grp, MLX5_PCIE_PERFORMANCE_COUNTERS_GROUP); err = mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_MPCNT, 0, 0); if (err != 0) goto free_out; MLX5E_PCIE_PERFORMANCE_COUNTERS_64(MLX5E_PCIE_PERF_GET_64) MLX5E_PCIE_PERFORMANCE_COUNTERS_32(MLX5E_PCIE_PERF_GET_32) MLX5_SET(mpcnt_reg, in, grp, MLX5_PCIE_TIMERS_AND_STATES_COUNTERS_GROUP); err = mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_MPCNT, 0, 0); if (err != 0) goto free_out; MLX5E_PCIE_TIMERS_AND_STATES_COUNTERS_32(MLX5E_PCIE_PERF_GET_32) MLX5_SET(mpcnt_reg, in, grp, MLX5_PCIE_LANE_COUNTERS_GROUP); err = mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_MPCNT, 0, 0); if (err != 0) goto free_out; MLX5E_PCIE_LANE_COUNTERS_32(MLX5E_PCIE_PERF_GET_32) free_out: /* free firmware request structures */ kvfree(in); kvfree(out); } /* * This function reads the physical port counters from the firmware * using a pre-defined layout defined by various MLX5E_PPORT_XXX() * macros. The output is converted from big-endian 64-bit values into * host endian ones and stored in the "priv->stats.pport" structure. 
*/ static void mlx5e_update_pport_counters(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; struct mlx5e_pport_stats *s = &priv->stats.pport; struct mlx5e_port_stats_debug *s_debug = &priv->stats.port_stats_debug; u32 *in; u32 *out; const u64 *ptr; unsigned sz = MLX5_ST_SZ_BYTES(ppcnt_reg); unsigned x; unsigned y; unsigned z; /* allocate firmware request structures */ in = mlx5_vzalloc(sz); out = mlx5_vzalloc(sz); if (in == NULL || out == NULL) goto free_out; /* * Get pointer to the 64-bit counter set which is located at a * fixed offset in the output firmware request structure: */ ptr = (const uint64_t *)MLX5_ADDR_OF(ppcnt_reg, out, counter_set); MLX5_SET(ppcnt_reg, in, local_port, 1); /* read IEEE802_3 counter group using predefined counter layout */ MLX5_SET(ppcnt_reg, in, grp, MLX5_IEEE_802_3_COUNTERS_GROUP); mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0); for (x = 0, y = MLX5E_PPORT_PER_PRIO_STATS_NUM; x != MLX5E_PPORT_IEEE802_3_STATS_NUM; x++, y++) s->arg[y] = be64toh(ptr[x]); /* read RFC2819 counter group using predefined counter layout */ MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2819_COUNTERS_GROUP); mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0); for (x = 0; x != MLX5E_PPORT_RFC2819_STATS_NUM; x++, y++) s->arg[y] = be64toh(ptr[x]); for (y = 0; x != MLX5E_PPORT_RFC2819_STATS_NUM + MLX5E_PPORT_RFC2819_STATS_DEBUG_NUM; x++, y++) s_debug->arg[y] = be64toh(ptr[x]); /* read RFC2863 counter group using predefined counter layout */ MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2863_COUNTERS_GROUP); mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0); for (x = 0; x != MLX5E_PPORT_RFC2863_STATS_DEBUG_NUM; x++, y++) s_debug->arg[y] = be64toh(ptr[x]); /* read physical layer stats counter group using predefined counter layout */ MLX5_SET(ppcnt_reg, in, grp, MLX5_PHYSICAL_LAYER_COUNTERS_GROUP); mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0); for (x = 0; x != MLX5E_PPORT_PHYSICAL_LAYER_STATS_DEBUG_NUM; x++, y++) s_debug->arg[y] = be64toh(ptr[x]); /* read Extended Ethernet counter group using predefined counter layout */ MLX5_SET(ppcnt_reg, in, grp, MLX5_ETHERNET_EXTENDED_COUNTERS_GROUP); mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0); for (x = 0; x != MLX5E_PPORT_ETHERNET_EXTENDED_STATS_DEBUG_NUM; x++, y++) s_debug->arg[y] = be64toh(ptr[x]); /* read Extended Statistical Group */ if (MLX5_CAP_GEN(mdev, pcam_reg) && MLX5_CAP_PCAM_FEATURE(mdev, ppcnt_statistical_group) && MLX5_CAP_PCAM_FEATURE(mdev, per_lane_error_counters)) { /* read Extended Statistical counter group using predefined counter layout */ MLX5_SET(ppcnt_reg, in, grp, MLX5_PHYSICAL_LAYER_STATISTICAL_GROUP); mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0); for (x = 0; x != MLX5E_PPORT_STATISTICAL_DEBUG_NUM; x++, y++) s_debug->arg[y] = be64toh(ptr[x]); } /* read PCIE counters */ mlx5e_update_pcie_counters(priv); /* read per-priority counters */ MLX5_SET(ppcnt_reg, in, grp, MLX5_PER_PRIORITY_COUNTERS_GROUP); /* iterate all the priorities */ for (y = z = 0; z != MLX5E_PPORT_PER_PRIO_STATS_NUM_PRIO; z++) { MLX5_SET(ppcnt_reg, in, prio_tc, z); mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0); /* read per priority stats counter group using predefined counter layout */ for (x = 0; x != (MLX5E_PPORT_PER_PRIO_STATS_NUM / MLX5E_PPORT_PER_PRIO_STATS_NUM_PRIO); x++, y++) s->arg[y] = be64toh(ptr[x]); } free_out: /* free firmware request structures */ kvfree(in); kvfree(out); } static void 
mlx5e_grp_vnic_env_update_stats(struct mlx5e_priv *priv) { u32 out[MLX5_ST_SZ_DW(query_vnic_env_out)] = {}; u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {}; if (!MLX5_CAP_GEN(priv->mdev, nic_receive_steering_discard)) return; MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV); MLX5_SET(query_vnic_env_in, in, op_mod, 0); MLX5_SET(query_vnic_env_in, in, other_vport, 0); if (mlx5_cmd_exec(priv->mdev, in, sizeof(in), out, sizeof(out)) != 0) return; priv->stats.vport.rx_steer_missed_packets = MLX5_GET64(query_vnic_env_out, out, vport_env.nic_receive_steering_discard); } /* * This function is called regularly to collect all statistics * counters from the firmware. The values can be viewed through the * sysctl interface. Execution is serialized using the priv's global * configuration lock. */ static void mlx5e_update_stats_locked(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; struct mlx5e_vport_stats *s = &priv->stats.vport; struct mlx5e_sq_stats *sq_stats; struct buf_ring *sq_br; #if (__FreeBSD_version < 1100000) struct ifnet *ifp = priv->ifp; #endif u32 in[MLX5_ST_SZ_DW(query_vport_counter_in)]; u32 *out; int outlen = MLX5_ST_SZ_BYTES(query_vport_counter_out); u64 tso_packets = 0; u64 tso_bytes = 0; u64 tx_queue_dropped = 0; u64 tx_defragged = 0; u64 tx_offload_none = 0; u64 lro_packets = 0; u64 lro_bytes = 0; u64 sw_lro_queued = 0; u64 sw_lro_flushed = 0; u64 rx_csum_none = 0; u64 rx_wqe_err = 0; u64 rx_packets = 0; u64 rx_bytes = 0; u32 rx_out_of_buffer = 0; int i; int j; out = mlx5_vzalloc(outlen); if (out == NULL) goto free_out; /* Collect the SW counters first and then the HW counters, for consistency */ for (i = 0; i < priv->params.num_channels; i++) { struct mlx5e_channel *pch = priv->channel + i; struct mlx5e_rq *rq = &pch->rq; struct mlx5e_rq_stats *rq_stats = &pch->rq.stats; /* collect stats from LRO */ rq_stats->sw_lro_queued = rq->lro.lro_queued; rq_stats->sw_lro_flushed = rq->lro.lro_flushed; sw_lro_queued += rq_stats->sw_lro_queued; sw_lro_flushed += rq_stats->sw_lro_flushed; lro_packets += rq_stats->lro_packets; lro_bytes += rq_stats->lro_bytes; rx_csum_none += rq_stats->csum_none; rx_wqe_err += rq_stats->wqe_err; rx_packets += rq_stats->packets; rx_bytes += rq_stats->bytes; for (j = 0; j < priv->num_tc; j++) { sq_stats = &pch->sq[j].stats; sq_br = pch->sq[j].br; tso_packets += sq_stats->tso_packets; tso_bytes += sq_stats->tso_bytes; tx_queue_dropped += sq_stats->dropped; if (sq_br != NULL) tx_queue_dropped += sq_br->br_drops; tx_defragged += sq_stats->defragged; tx_offload_none += sq_stats->csum_offload_none; } } /* update counters */ s->tso_packets = tso_packets; s->tso_bytes = tso_bytes; s->tx_queue_dropped = tx_queue_dropped; s->tx_defragged = tx_defragged; s->lro_packets = lro_packets; s->lro_bytes = lro_bytes; s->sw_lro_queued = sw_lro_queued; s->sw_lro_flushed = sw_lro_flushed; s->rx_csum_none = rx_csum_none; s->rx_wqe_err = rx_wqe_err; s->rx_packets = rx_packets; s->rx_bytes = rx_bytes; mlx5e_grp_vnic_env_update_stats(priv); /* HW counters */ memset(in, 0, sizeof(in)); MLX5_SET(query_vport_counter_in, in, opcode, MLX5_CMD_OP_QUERY_VPORT_COUNTER); MLX5_SET(query_vport_counter_in, in, op_mod, 0); MLX5_SET(query_vport_counter_in, in, other_vport, 0); memset(out, 0, outlen); /* get number of out-of-buffer drops first */ if (test_bit(MLX5E_STATE_OPENED, &priv->state) != 0 && mlx5_vport_query_out_of_rx_buffer(mdev, priv->counter_set_id, &rx_out_of_buffer) == 0) { s->rx_out_of_buffer = rx_out_of_buffer; } /* get port statistics */ if (mlx5_cmd_exec(mdev, 
in, sizeof(in), out, outlen) == 0) { #define MLX5_GET_CTR(out, x) \ MLX5_GET64(query_vport_counter_out, out, x) s->rx_error_packets = MLX5_GET_CTR(out, received_errors.packets); s->rx_error_bytes = MLX5_GET_CTR(out, received_errors.octets); s->tx_error_packets = MLX5_GET_CTR(out, transmit_errors.packets); s->tx_error_bytes = MLX5_GET_CTR(out, transmit_errors.octets); s->rx_unicast_packets = MLX5_GET_CTR(out, received_eth_unicast.packets); s->rx_unicast_bytes = MLX5_GET_CTR(out, received_eth_unicast.octets); s->tx_unicast_packets = MLX5_GET_CTR(out, transmitted_eth_unicast.packets); s->tx_unicast_bytes = MLX5_GET_CTR(out, transmitted_eth_unicast.octets); s->rx_multicast_packets = MLX5_GET_CTR(out, received_eth_multicast.packets); s->rx_multicast_bytes = MLX5_GET_CTR(out, received_eth_multicast.octets); s->tx_multicast_packets = MLX5_GET_CTR(out, transmitted_eth_multicast.packets); s->tx_multicast_bytes = MLX5_GET_CTR(out, transmitted_eth_multicast.octets); s->rx_broadcast_packets = MLX5_GET_CTR(out, received_eth_broadcast.packets); s->rx_broadcast_bytes = MLX5_GET_CTR(out, received_eth_broadcast.octets); s->tx_broadcast_packets = MLX5_GET_CTR(out, transmitted_eth_broadcast.packets); s->tx_broadcast_bytes = MLX5_GET_CTR(out, transmitted_eth_broadcast.octets); s->tx_packets = s->tx_unicast_packets + s->tx_multicast_packets + s->tx_broadcast_packets; s->tx_bytes = s->tx_unicast_bytes + s->tx_multicast_bytes + s->tx_broadcast_bytes; /* Update calculated offload counters */ s->tx_csum_offload = s->tx_packets - tx_offload_none; s->rx_csum_good = s->rx_packets - s->rx_csum_none; } /* Get physical port counters */ mlx5e_update_pport_counters(priv); s->tx_jumbo_packets = priv->stats.port_stats_debug.tx_stat_p1519to2047octets + priv->stats.port_stats_debug.tx_stat_p2048to4095octets + priv->stats.port_stats_debug.tx_stat_p4096to8191octets + priv->stats.port_stats_debug.tx_stat_p8192to10239octets; #if (__FreeBSD_version < 1100000) /* no get_counters interface in fbsd 10 */ ifp->if_ipackets = s->rx_packets; ifp->if_ierrors = priv->stats.pport.in_range_len_errors + priv->stats.pport.out_of_range_len + priv->stats.pport.too_long_errors + priv->stats.pport.check_seq_err + priv->stats.pport.alignment_err; ifp->if_iqdrops = s->rx_out_of_buffer; ifp->if_opackets = s->tx_packets; ifp->if_oerrors = priv->stats.port_stats_debug.out_discards; ifp->if_snd.ifq_drops = s->tx_queue_dropped; ifp->if_ibytes = s->rx_bytes; ifp->if_obytes = s->tx_bytes; ifp->if_collisions = priv->stats.pport.collisions; #endif free_out: kvfree(out); /* Update diagnostics, if any */ if (priv->params_ethtool.diag_pci_enable || priv->params_ethtool.diag_general_enable) { int error = mlx5_core_get_diagnostics_full(mdev, priv->params_ethtool.diag_pci_enable ? &priv->params_pci : NULL, priv->params_ethtool.diag_general_enable ? 
&priv->params_general : NULL); if (error != 0) mlx5_en_err(priv->ifp, "Failed reading diagnostics: %d\n", error); } } static void mlx5e_update_stats_work(struct work_struct *work) { struct mlx5e_priv *priv; priv = container_of(work, struct mlx5e_priv, update_stats_work); PRIV_LOCK(priv); if (test_bit(MLX5E_STATE_OPENED, &priv->state) != 0) mlx5e_update_stats_locked(priv); PRIV_UNLOCK(priv); } static void mlx5e_update_stats(void *arg) { struct mlx5e_priv *priv = arg; queue_work(priv->wq, &priv->update_stats_work); callout_reset(&priv->watchdog, hz, &mlx5e_update_stats, priv); } static void mlx5e_async_event_sub(struct mlx5e_priv *priv, enum mlx5_dev_event event) { switch (event) { case MLX5_DEV_EVENT_PORT_UP: case MLX5_DEV_EVENT_PORT_DOWN: queue_work(priv->wq, &priv->update_carrier_work); break; default: break; } } static void mlx5e_async_event(struct mlx5_core_dev *mdev, void *vpriv, enum mlx5_dev_event event, unsigned long param) { struct mlx5e_priv *priv = vpriv; mtx_lock(&priv->async_events_mtx); if (test_bit(MLX5E_STATE_ASYNC_EVENTS_ENABLE, &priv->state)) mlx5e_async_event_sub(priv, event); mtx_unlock(&priv->async_events_mtx); } static void mlx5e_enable_async_events(struct mlx5e_priv *priv) { set_bit(MLX5E_STATE_ASYNC_EVENTS_ENABLE, &priv->state); } static void mlx5e_disable_async_events(struct mlx5e_priv *priv) { mtx_lock(&priv->async_events_mtx); clear_bit(MLX5E_STATE_ASYNC_EVENTS_ENABLE, &priv->state); mtx_unlock(&priv->async_events_mtx); } static void mlx5e_calibration_callout(void *arg); static int mlx5e_calibration_duration = 20; static int mlx5e_fast_calibration = 1; static int mlx5e_normal_calibration = 30; static SYSCTL_NODE(_hw_mlx5, OID_AUTO, calibr, CTLFLAG_RW, 0, "MLX5 timestamp calibration parameters"); SYSCTL_INT(_hw_mlx5_calibr, OID_AUTO, duration, CTLFLAG_RWTUN, &mlx5e_calibration_duration, 0, "Duration of initial calibration"); SYSCTL_INT(_hw_mlx5_calibr, OID_AUTO, fast, CTLFLAG_RWTUN, &mlx5e_fast_calibration, 0, "Recalibration interval during initial calibration"); SYSCTL_INT(_hw_mlx5_calibr, OID_AUTO, normal, CTLFLAG_RWTUN, &mlx5e_normal_calibration, 0, "Recalibration interval during normal operations"); /* * Ignites the calibration process. */ static void mlx5e_reset_calibration_callout(struct mlx5e_priv *priv) { if (priv->clbr_done == 0) mlx5e_calibration_callout(priv); else callout_reset_curcpu(&priv->tstmp_clbr, (priv->clbr_done < mlx5e_calibration_duration ? mlx5e_fast_calibration : mlx5e_normal_calibration) * hz, mlx5e_calibration_callout, priv); } static uint64_t mlx5e_timespec2usec(const struct timespec *ts) { return ((uint64_t)ts->tv_sec * 1000000000 + ts->tv_nsec); } static uint64_t mlx5e_hw_clock(struct mlx5e_priv *priv) { struct mlx5_init_seg *iseg; uint32_t hw_h, hw_h1, hw_l; iseg = priv->mdev->iseg; do { hw_h = ioread32be(&iseg->internal_timer_h); hw_l = ioread32be(&iseg->internal_timer_l); hw_h1 = ioread32be(&iseg->internal_timer_h); } while (hw_h1 != hw_h); return (((uint64_t)hw_h << 32) | hw_l); } /* * The calibration callout runs either in the context of the thread * which enables calibration, or from the callout system. It takes a * snapshot of the system and adapter clocks, then advances the pointer * to the calibration point, so that the rx path can read consistent * data locklessly. */
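For context, a hedged sketch of the consumer side of the scheme described above: an RX-path reader can convert a raw hardware timestamp against the current calibration point locklessly by sampling clbr_gen before and after its reads and retrying on a mismatch. The function name and the omitted delta scaling are illustrative assumptions; only the generation-counter protocol mirrors the publication done by the callout below.

/*
 * Illustrative reader-side sketch (not part of this change).  It
 * pairs with atomic_store_rel_int() in mlx5e_calibration_callout():
 * a generation of zero means the point is invalid, and a generation
 * change during the reads means a new point was published.
 */
static uint64_t
mlx5e_hw_tstmp_to_nsec_sketch(struct mlx5e_priv *priv, uint64_t hw_tstmp)
{
	struct mlx5e_clbr_point *cp;
	uint64_t base, hw_base;
	u_int gen;

	do {
		cp = &priv->clbr_points[priv->clbr_curr];
		gen = atomic_load_acq_int(&cp->clbr_gen);
		if (gen == 0)
			return (0);	/* calibration disabled */
		base = cp->base_curr;
		hw_base = cp->clbr_hw_curr;
		atomic_thread_fence_acq();
	} while (gen != cp->clbr_gen);
	/* Scaling of (hw_tstmp - hw_base) into nanoseconds is omitted. */
	return (base + (hw_tstmp - hw_base));
}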
static void mlx5e_calibration_callout(void *arg) { struct mlx5e_priv *priv; struct mlx5e_clbr_point *next, *curr; struct timespec ts; int clbr_curr_next; priv = arg; curr = &priv->clbr_points[priv->clbr_curr]; clbr_curr_next = priv->clbr_curr + 1; if (clbr_curr_next >= nitems(priv->clbr_points)) clbr_curr_next = 0; next = &priv->clbr_points[clbr_curr_next]; next->base_prev = curr->base_curr; next->clbr_hw_prev = curr->clbr_hw_curr; next->clbr_hw_curr = mlx5e_hw_clock(priv); if (((next->clbr_hw_curr - curr->clbr_hw_curr) >> MLX5E_TSTMP_PREC) == 0) { if (priv->clbr_done != 0) { mlx5_en_err(priv->ifp, "HW failed tstmp frozen %#jx %#jx, disabling\n", next->clbr_hw_curr, curr->clbr_hw_prev); priv->clbr_done = 0; } atomic_store_rel_int(&curr->clbr_gen, 0); return; } nanouptime(&ts); next->base_curr = mlx5e_timespec2usec(&ts); curr->clbr_gen = 0; atomic_thread_fence_rel(); priv->clbr_curr = clbr_curr_next; atomic_store_rel_int(&next->clbr_gen, ++(priv->clbr_gen)); if (priv->clbr_done < mlx5e_calibration_duration) priv->clbr_done++; mlx5e_reset_calibration_callout(priv); } static const char *mlx5e_rq_stats_desc[] = { MLX5E_RQ_STATS(MLX5E_STATS_DESC) }; static int mlx5e_create_rq(struct mlx5e_channel *c, struct mlx5e_rq_param *param, struct mlx5e_rq *rq) { struct mlx5e_priv *priv = c->priv; struct mlx5_core_dev *mdev = priv->mdev; char buffer[16]; void *rqc = param->rqc; void *rqc_wq = MLX5_ADDR_OF(rqc, rqc, wq); int wq_sz; int err; int i; u32 nsegs, wqe_sz; err = mlx5e_get_wqe_sz(priv, &wqe_sz, &nsegs); if (err != 0) goto done; /* Create DMA descriptor TAG */ if ((err = -bus_dma_tag_create( bus_get_dma_tag(mdev->pdev->dev.bsddev), 1, /* any alignment */ 0, /* no boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ nsegs * MLX5E_MAX_RX_BYTES, /* maxsize */ nsegs, /* nsegments */ nsegs * MLX5E_MAX_RX_BYTES, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockfuncarg */ &rq->dma_tag))) goto done; err = mlx5_wq_ll_create(mdev, &param->wq, rqc_wq, &rq->wq, &rq->wq_ctrl); if (err) goto err_free_dma_tag; rq->wq.db = &rq->wq.db[MLX5_RCV_DBR]; err = mlx5e_get_wqe_sz(priv, &rq->wqe_sz, &rq->nsegs); if (err != 0) goto err_rq_wq_destroy; wq_sz = mlx5_wq_ll_get_size(&rq->wq); err = -tcp_lro_init_args(&rq->lro, c->tag.m_snd_tag.ifp, TCP_LRO_ENTRIES, wq_sz); if (err) goto err_rq_wq_destroy; rq->mbuf = malloc(wq_sz * sizeof(rq->mbuf[0]), M_MLX5EN, M_WAITOK | M_ZERO); for (i = 0; i != wq_sz; i++) { struct mlx5e_rx_wqe *wqe = mlx5_wq_ll_get_wqe(&rq->wq, i); int j; err = -bus_dmamap_create(rq->dma_tag, 0, &rq->mbuf[i].dma_map); if (err != 0) { while (i--) bus_dmamap_destroy(rq->dma_tag, rq->mbuf[i].dma_map); goto err_rq_mbuf_free; } /* set value for constant fields */ for (j = 0; j < rq->nsegs; j++) wqe->data[j].lkey = c->mkey_be; } INIT_WORK(&rq->dim.work, mlx5e_dim_work); if (priv->params.rx_cq_moderation_mode < 2) { rq->dim.mode = NET_DIM_CQ_PERIOD_MODE_DISABLED; } else { void *cqc = container_of(param, struct mlx5e_channel_param, rq)->rx_cq.cqc; switch (MLX5_GET(cqc, cqc, cq_period_mode)) { case MLX5_CQ_PERIOD_MODE_START_FROM_EQE: rq->dim.mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE; break; case MLX5_CQ_PERIOD_MODE_START_FROM_CQE: rq->dim.mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_CQE; break; default: rq->dim.mode = NET_DIM_CQ_PERIOD_MODE_DISABLED; break; } } rq->ifp = c->tag.m_snd_tag.ifp; rq->channel = c; rq->ix = c->ix; snprintf(buffer, sizeof(buffer), "rxstat%d", c->ix); mlx5e_create_stats(&rq->stats.ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), 
buffer, mlx5e_rq_stats_desc, MLX5E_RQ_STATS_NUM, rq->stats.arg); return (0); err_rq_mbuf_free: free(rq->mbuf, M_MLX5EN); tcp_lro_free(&rq->lro); err_rq_wq_destroy: mlx5_wq_destroy(&rq->wq_ctrl); err_free_dma_tag: bus_dma_tag_destroy(rq->dma_tag); done: return (err); } static void mlx5e_destroy_rq(struct mlx5e_rq *rq) { int wq_sz; int i; /* destroy all sysctl nodes */ sysctl_ctx_free(&rq->stats.ctx); /* free leftover LRO packets, if any */ tcp_lro_free(&rq->lro); wq_sz = mlx5_wq_ll_get_size(&rq->wq); for (i = 0; i != wq_sz; i++) { if (rq->mbuf[i].mbuf != NULL) { bus_dmamap_unload(rq->dma_tag, rq->mbuf[i].dma_map); m_freem(rq->mbuf[i].mbuf); } bus_dmamap_destroy(rq->dma_tag, rq->mbuf[i].dma_map); } free(rq->mbuf, M_MLX5EN); mlx5_wq_destroy(&rq->wq_ctrl); bus_dma_tag_destroy(rq->dma_tag); } static int mlx5e_enable_rq(struct mlx5e_rq *rq, struct mlx5e_rq_param *param) { struct mlx5e_channel *c = rq->channel; struct mlx5e_priv *priv = c->priv; struct mlx5_core_dev *mdev = priv->mdev; void *in; void *rqc; void *wq; int inlen; int err; inlen = MLX5_ST_SZ_BYTES(create_rq_in) + sizeof(u64) * rq->wq_ctrl.buf.npages; in = mlx5_vzalloc(inlen); if (in == NULL) return (-ENOMEM); rqc = MLX5_ADDR_OF(create_rq_in, in, ctx); wq = MLX5_ADDR_OF(rqc, rqc, wq); memcpy(rqc, param->rqc, sizeof(param->rqc)); MLX5_SET(rqc, rqc, cqn, c->rq.cq.mcq.cqn); MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RST); MLX5_SET(rqc, rqc, flush_in_error_en, 1); if (priv->counter_set_id >= 0) MLX5_SET(rqc, rqc, counter_set_id, priv->counter_set_id); MLX5_SET(wq, wq, log_wq_pg_sz, rq->wq_ctrl.buf.page_shift - PAGE_SHIFT); MLX5_SET64(wq, wq, dbr_addr, rq->wq_ctrl.db.dma); mlx5_fill_page_array(&rq->wq_ctrl.buf, (__be64 *) MLX5_ADDR_OF(wq, wq, pas)); err = mlx5_core_create_rq(mdev, in, inlen, &rq->rqn); kvfree(in); return (err); } static int mlx5e_modify_rq(struct mlx5e_rq *rq, int curr_state, int next_state) { struct mlx5e_channel *c = rq->channel; struct mlx5e_priv *priv = c->priv; struct mlx5_core_dev *mdev = priv->mdev; void *in; void *rqc; int inlen; int err; inlen = MLX5_ST_SZ_BYTES(modify_rq_in); in = mlx5_vzalloc(inlen); if (in == NULL) return (-ENOMEM); rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx); MLX5_SET(modify_rq_in, in, rqn, rq->rqn); MLX5_SET(modify_rq_in, in, rq_state, curr_state); MLX5_SET(rqc, rqc, state, next_state); err = mlx5_core_modify_rq(mdev, in, inlen); kvfree(in); return (err); } static void mlx5e_disable_rq(struct mlx5e_rq *rq) { struct mlx5e_channel *c = rq->channel; struct mlx5e_priv *priv = c->priv; struct mlx5_core_dev *mdev = priv->mdev; mlx5_core_destroy_rq(mdev, rq->rqn); } static int mlx5e_wait_for_min_rx_wqes(struct mlx5e_rq *rq) { struct mlx5e_channel *c = rq->channel; struct mlx5e_priv *priv = c->priv; struct mlx5_wq_ll *wq = &rq->wq; int i; for (i = 0; i < 1000; i++) { if (wq->cur_sz >= priv->params.min_rx_wqes) return (0); msleep(4); } return (-ETIMEDOUT); } static int mlx5e_open_rq(struct mlx5e_channel *c, struct mlx5e_rq_param *param, struct mlx5e_rq *rq) { int err; err = mlx5e_create_rq(c, param, rq); if (err) return (err); err = mlx5e_enable_rq(rq, param); if (err) goto err_destroy_rq; err = mlx5e_modify_rq(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY); if (err) goto err_disable_rq; c->rq.enabled = 1; return (0); err_disable_rq: mlx5e_disable_rq(rq); err_destroy_rq: mlx5e_destroy_rq(rq); return (err); } static void mlx5e_close_rq(struct mlx5e_rq *rq) { mtx_lock(&rq->mtx); rq->enabled = 0; callout_stop(&rq->watchdog); mtx_unlock(&rq->mtx); callout_drain(&rq->watchdog); mlx5e_modify_rq(rq, 
MLX5_RQC_STATE_RDY, MLX5_RQC_STATE_ERR); } static void mlx5e_close_rq_wait(struct mlx5e_rq *rq) { mlx5e_disable_rq(rq); mlx5e_close_cq(&rq->cq); cancel_work_sync(&rq->dim.work); mlx5e_destroy_rq(rq); } void mlx5e_free_sq_db(struct mlx5e_sq *sq) { int wq_sz = mlx5_wq_cyc_get_size(&sq->wq); int x; for (x = 0; x != wq_sz; x++) { if (sq->mbuf[x].mbuf != NULL) { bus_dmamap_unload(sq->dma_tag, sq->mbuf[x].dma_map); m_freem(sq->mbuf[x].mbuf); } bus_dmamap_destroy(sq->dma_tag, sq->mbuf[x].dma_map); } free(sq->mbuf, M_MLX5EN); } int mlx5e_alloc_sq_db(struct mlx5e_sq *sq) { int wq_sz = mlx5_wq_cyc_get_size(&sq->wq); int err; int x; sq->mbuf = malloc(wq_sz * sizeof(sq->mbuf[0]), M_MLX5EN, M_WAITOK | M_ZERO); /* Create DMA descriptor MAPs */ for (x = 0; x != wq_sz; x++) { err = -bus_dmamap_create(sq->dma_tag, 0, &sq->mbuf[x].dma_map); if (err != 0) { while (x--) bus_dmamap_destroy(sq->dma_tag, sq->mbuf[x].dma_map); free(sq->mbuf, M_MLX5EN); return (err); } } return (0); } static const char *mlx5e_sq_stats_desc[] = { MLX5E_SQ_STATS(MLX5E_STATS_DESC) }; void mlx5e_update_sq_inline(struct mlx5e_sq *sq) { sq->max_inline = sq->priv->params.tx_max_inline; sq->min_inline_mode = sq->priv->params.tx_min_inline_mode; /* * Check if trust state is DSCP or if inline mode is NONE which * indicates CX-5 or newer hardware. */ if (sq->priv->params_ethtool.trust_state != MLX5_QPTS_TRUST_PCP || sq->min_inline_mode == MLX5_INLINE_MODE_NONE) { if (MLX5_CAP_ETH(sq->priv->mdev, wqe_vlan_insert)) sq->min_insert_caps = MLX5E_INSERT_VLAN | MLX5E_INSERT_NON_VLAN; else sq->min_insert_caps = MLX5E_INSERT_NON_VLAN; } else { sq->min_insert_caps = 0; } } static void mlx5e_refresh_sq_inline_sub(struct mlx5e_priv *priv, struct mlx5e_channel *c) { int i; for (i = 0; i != c->num_tc; i++) { mtx_lock(&c->sq[i].lock); mlx5e_update_sq_inline(&c->sq[i]); mtx_unlock(&c->sq[i].lock); } } void mlx5e_refresh_sq_inline(struct mlx5e_priv *priv) { int i; /* check if channels are closed */ if (test_bit(MLX5E_STATE_OPENED, &priv->state) == 0) return; for (i = 0; i < priv->params.num_channels; i++) mlx5e_refresh_sq_inline_sub(priv, &priv->channel[i]); } static int mlx5e_create_sq(struct mlx5e_channel *c, int tc, struct mlx5e_sq_param *param, struct mlx5e_sq *sq) { struct mlx5e_priv *priv = c->priv; struct mlx5_core_dev *mdev = priv->mdev; char buffer[16]; void *sqc = param->sqc; void *sqc_wq = MLX5_ADDR_OF(sqc, sqc, wq); int err; /* Create DMA descriptor TAG */ if ((err = -bus_dma_tag_create( bus_get_dma_tag(mdev->pdev->dev.bsddev), 1, /* any alignment */ 0, /* no boundary */ BUS_SPACE_MAXADDR, /* lowaddr */ BUS_SPACE_MAXADDR, /* highaddr */ NULL, NULL, /* filter, filterarg */ MLX5E_MAX_TX_PAYLOAD_SIZE, /* maxsize */ MLX5E_MAX_TX_MBUF_FRAGS, /* nsegments */ MLX5E_MAX_TX_MBUF_SIZE, /* maxsegsize */ 0, /* flags */ NULL, NULL, /* lockfunc, lockfuncarg */ &sq->dma_tag))) goto done; err = mlx5_alloc_map_uar(mdev, &sq->uar); if (err) goto err_free_dma_tag; err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, &sq->wq, &sq->wq_ctrl); if (err) goto err_unmap_free_uar; sq->wq.db = &sq->wq.db[MLX5_SND_DBR]; sq->bf_buf_size = (1 << MLX5_CAP_GEN(mdev, log_bf_reg_size)) / 2; err = mlx5e_alloc_sq_db(sq); if (err) goto err_sq_wq_destroy; sq->mkey_be = c->mkey_be; sq->ifp = priv->ifp; sq->priv = priv; sq->tc = tc; mlx5e_update_sq_inline(sq); snprintf(buffer, sizeof(buffer), "txstat%dtc%d", c->ix, tc); mlx5e_create_stats(&sq->stats.ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), buffer, mlx5e_sq_stats_desc, MLX5E_SQ_STATS_NUM, sq->stats.arg); return (0); err_sq_wq_destroy: 
mlx5_wq_destroy(&sq->wq_ctrl); err_unmap_free_uar: mlx5_unmap_free_uar(mdev, &sq->uar); err_free_dma_tag: bus_dma_tag_destroy(sq->dma_tag); done: return (err); } static void mlx5e_destroy_sq(struct mlx5e_sq *sq) { /* destroy all sysctl nodes */ sysctl_ctx_free(&sq->stats.ctx); mlx5e_free_sq_db(sq); mlx5_wq_destroy(&sq->wq_ctrl); mlx5_unmap_free_uar(sq->priv->mdev, &sq->uar); bus_dma_tag_destroy(sq->dma_tag); } int mlx5e_enable_sq(struct mlx5e_sq *sq, struct mlx5e_sq_param *param, int tis_num) { void *in; void *sqc; void *wq; int inlen; int err; inlen = MLX5_ST_SZ_BYTES(create_sq_in) + sizeof(u64) * sq->wq_ctrl.buf.npages; in = mlx5_vzalloc(inlen); if (in == NULL) return (-ENOMEM); sqc = MLX5_ADDR_OF(create_sq_in, in, ctx); wq = MLX5_ADDR_OF(sqc, sqc, wq); memcpy(sqc, param->sqc, sizeof(param->sqc)); MLX5_SET(sqc, sqc, tis_num_0, tis_num); MLX5_SET(sqc, sqc, cqn, sq->cq.mcq.cqn); MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RST); MLX5_SET(sqc, sqc, tis_lst_sz, 1); MLX5_SET(sqc, sqc, flush_in_error_en, 1); MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_CYCLIC); MLX5_SET(wq, wq, uar_page, sq->uar.index); MLX5_SET(wq, wq, log_wq_pg_sz, sq->wq_ctrl.buf.page_shift - PAGE_SHIFT); MLX5_SET64(wq, wq, dbr_addr, sq->wq_ctrl.db.dma); mlx5_fill_page_array(&sq->wq_ctrl.buf, (__be64 *) MLX5_ADDR_OF(wq, wq, pas)); err = mlx5_core_create_sq(sq->priv->mdev, in, inlen, &sq->sqn); kvfree(in); return (err); } int mlx5e_modify_sq(struct mlx5e_sq *sq, int curr_state, int next_state) { void *in; void *sqc; int inlen; int err; inlen = MLX5_ST_SZ_BYTES(modify_sq_in); in = mlx5_vzalloc(inlen); if (in == NULL) return (-ENOMEM); sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx); MLX5_SET(modify_sq_in, in, sqn, sq->sqn); MLX5_SET(modify_sq_in, in, sq_state, curr_state); MLX5_SET(sqc, sqc, state, next_state); err = mlx5_core_modify_sq(sq->priv->mdev, in, inlen); kvfree(in); return (err); } void mlx5e_disable_sq(struct mlx5e_sq *sq) { mlx5_core_destroy_sq(sq->priv->mdev, sq->sqn); } static int mlx5e_open_sq(struct mlx5e_channel *c, int tc, struct mlx5e_sq_param *param, struct mlx5e_sq *sq) { int err; err = mlx5e_create_sq(c, tc, param, sq); if (err) return (err); err = mlx5e_enable_sq(sq, param, c->priv->tisn[tc]); if (err) goto err_destroy_sq; err = mlx5e_modify_sq(sq, MLX5_SQC_STATE_RST, MLX5_SQC_STATE_RDY); if (err) goto err_disable_sq; WRITE_ONCE(sq->running, 1); return (0); err_disable_sq: mlx5e_disable_sq(sq); err_destroy_sq: mlx5e_destroy_sq(sq); return (err); } static void mlx5e_sq_send_nops_locked(struct mlx5e_sq *sq, int can_sleep) { /* fill up remainder with NOPs */ while (sq->cev_counter != 0) { while (!mlx5e_sq_has_room_for(sq, 1)) { if (can_sleep != 0) { mtx_unlock(&sq->lock); msleep(4); mtx_lock(&sq->lock); } else { goto done; } } /* send a single NOP */ mlx5e_send_nop(sq, 1); atomic_thread_fence_rel(); } done: /* Check if we need to write the doorbell */ if (likely(sq->doorbell.d64 != 0)) { mlx5e_tx_notify_hw(sq, sq->doorbell.d32, 0); sq->doorbell.d64 = 0; } } void mlx5e_sq_cev_timeout(void *arg) { struct mlx5e_sq *sq = arg; mtx_assert(&sq->lock, MA_OWNED); /* check next state */ switch (sq->cev_next_state) { case MLX5E_CEV_STATE_SEND_NOPS: /* fill TX ring with NOPs, if any */ mlx5e_sq_send_nops_locked(sq, 0); /* check if completed */ if (sq->cev_counter == 0) { sq->cev_next_state = MLX5E_CEV_STATE_INITIAL; return; } break; default: /* send NOPs on next timeout */ sq->cev_next_state = MLX5E_CEV_STATE_SEND_NOPS; break; } /* restart timer */ callout_reset_curcpu(&sq->cev_callout, hz, mlx5e_sq_cev_timeout, sq); } void 
mlx5e_drain_sq(struct mlx5e_sq *sq) { int error; struct mlx5_core_dev *mdev= sq->priv->mdev; /* * Check if already stopped. * * NOTE: Serialization of this function is managed by the * caller ensuring the priv's state lock is locked or in case * of rate limit support, a single thread manages drain and * resume of SQs. The "running" variable can therefore safely * be read without any locks. */ if (READ_ONCE(sq->running) == 0) return; /* don't put more packets into the SQ */ WRITE_ONCE(sq->running, 0); /* serialize access to DMA rings */ mtx_lock(&sq->lock); /* teardown event factor timer, if any */ sq->cev_next_state = MLX5E_CEV_STATE_HOLD_NOPS; callout_stop(&sq->cev_callout); /* send dummy NOPs in order to flush the transmit ring */ mlx5e_sq_send_nops_locked(sq, 1); mtx_unlock(&sq->lock); /* make sure it is safe to free the callout */ callout_drain(&sq->cev_callout); /* wait till SQ is empty or link is down */ mtx_lock(&sq->lock); while (sq->cc != sq->pc && (sq->priv->media_status_last & IFM_ACTIVE) != 0 && mdev->state != MLX5_DEVICE_STATE_INTERNAL_ERROR) { mtx_unlock(&sq->lock); msleep(1); sq->cq.mcq.comp(&sq->cq.mcq); mtx_lock(&sq->lock); } mtx_unlock(&sq->lock); /* error out remaining requests */ error = mlx5e_modify_sq(sq, MLX5_SQC_STATE_RDY, MLX5_SQC_STATE_ERR); if (error != 0) { mlx5_en_err(sq->ifp, "mlx5e_modify_sq() from RDY to ERR failed: %d\n", error); } /* wait till SQ is empty */ mtx_lock(&sq->lock); while (sq->cc != sq->pc && mdev->state != MLX5_DEVICE_STATE_INTERNAL_ERROR) { mtx_unlock(&sq->lock); msleep(1); sq->cq.mcq.comp(&sq->cq.mcq); mtx_lock(&sq->lock); } mtx_unlock(&sq->lock); } static void mlx5e_close_sq_wait(struct mlx5e_sq *sq) { mlx5e_drain_sq(sq); mlx5e_disable_sq(sq); mlx5e_destroy_sq(sq); } static int mlx5e_create_cq(struct mlx5e_priv *priv, struct mlx5e_cq_param *param, struct mlx5e_cq *cq, mlx5e_cq_comp_t *comp, int eq_ix) { struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_core_cq *mcq = &cq->mcq; int eqn_not_used; int irqn; int err; u32 i; param->wq.buf_numa_node = 0; param->wq.db_numa_node = 0; err = mlx5_vector2eqn(mdev, eq_ix, &eqn_not_used, &irqn); if (err) return (err); err = mlx5_cqwq_create(mdev, ¶m->wq, param->cqc, &cq->wq, &cq->wq_ctrl); if (err) return (err); mcq->cqe_sz = 64; mcq->set_ci_db = cq->wq_ctrl.db.db; mcq->arm_db = cq->wq_ctrl.db.db + 1; *mcq->set_ci_db = 0; *mcq->arm_db = 0; mcq->vector = eq_ix; mcq->comp = comp; mcq->event = mlx5e_cq_error_event; mcq->irqn = irqn; mcq->uar = &priv->cq_uar; for (i = 0; i < mlx5_cqwq_get_size(&cq->wq); i++) { struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, i); cqe->op_own = 0xf1; } cq->priv = priv; return (0); } static void mlx5e_destroy_cq(struct mlx5e_cq *cq) { mlx5_wq_destroy(&cq->wq_ctrl); } static int mlx5e_enable_cq(struct mlx5e_cq *cq, struct mlx5e_cq_param *param, int eq_ix) { struct mlx5_core_cq *mcq = &cq->mcq; void *in; void *cqc; int inlen; int irqn_not_used; int eqn; int err; inlen = MLX5_ST_SZ_BYTES(create_cq_in) + sizeof(u64) * cq->wq_ctrl.buf.npages; in = mlx5_vzalloc(inlen); if (in == NULL) return (-ENOMEM); cqc = MLX5_ADDR_OF(create_cq_in, in, cq_context); memcpy(cqc, param->cqc, sizeof(param->cqc)); mlx5_fill_page_array(&cq->wq_ctrl.buf, (__be64 *) MLX5_ADDR_OF(create_cq_in, in, pas)); mlx5_vector2eqn(cq->priv->mdev, eq_ix, &eqn, &irqn_not_used); MLX5_SET(cqc, cqc, c_eqn, eqn); MLX5_SET(cqc, cqc, uar_page, mcq->uar->index); MLX5_SET(cqc, cqc, log_page_size, cq->wq_ctrl.buf.page_shift - PAGE_SHIFT); MLX5_SET64(cqc, cqc, dbr_addr, cq->wq_ctrl.db.dma); err = 
mlx5_core_create_cq(cq->priv->mdev, mcq, in, inlen); kvfree(in); if (err) return (err); mlx5e_cq_arm(cq, MLX5_GET_DOORBELL_LOCK(&cq->priv->doorbell_lock)); return (0); } static void mlx5e_disable_cq(struct mlx5e_cq *cq) { mlx5_core_destroy_cq(cq->priv->mdev, &cq->mcq); } int mlx5e_open_cq(struct mlx5e_priv *priv, struct mlx5e_cq_param *param, struct mlx5e_cq *cq, mlx5e_cq_comp_t *comp, int eq_ix) { int err; err = mlx5e_create_cq(priv, param, cq, comp, eq_ix); if (err) return (err); err = mlx5e_enable_cq(cq, param, eq_ix); if (err) goto err_destroy_cq; return (0); err_destroy_cq: mlx5e_destroy_cq(cq); return (err); } void mlx5e_close_cq(struct mlx5e_cq *cq) { mlx5e_disable_cq(cq); mlx5e_destroy_cq(cq); } static int mlx5e_open_tx_cqs(struct mlx5e_channel *c, struct mlx5e_channel_param *cparam) { int err; int tc; for (tc = 0; tc < c->num_tc; tc++) { /* open completion queue */ err = mlx5e_open_cq(c->priv, &cparam->tx_cq, &c->sq[tc].cq, &mlx5e_tx_cq_comp, c->ix); if (err) goto err_close_tx_cqs; } return (0); err_close_tx_cqs: for (tc--; tc >= 0; tc--) mlx5e_close_cq(&c->sq[tc].cq); return (err); } static void mlx5e_close_tx_cqs(struct mlx5e_channel *c) { int tc; for (tc = 0; tc < c->num_tc; tc++) mlx5e_close_cq(&c->sq[tc].cq); } static int mlx5e_open_sqs(struct mlx5e_channel *c, struct mlx5e_channel_param *cparam) { int err; int tc; for (tc = 0; tc < c->num_tc; tc++) { err = mlx5e_open_sq(c, tc, &cparam->sq, &c->sq[tc]); if (err) goto err_close_sqs; } return (0); err_close_sqs: for (tc--; tc >= 0; tc--) mlx5e_close_sq_wait(&c->sq[tc]); return (err); } static void mlx5e_close_sqs_wait(struct mlx5e_channel *c) { int tc; for (tc = 0; tc < c->num_tc; tc++) mlx5e_close_sq_wait(&c->sq[tc]); } static void mlx5e_chan_mtx_init(struct mlx5e_channel *c) { int tc; mtx_init(&c->rq.mtx, "mlx5rx", MTX_NETWORK_LOCK, MTX_DEF); callout_init_mtx(&c->rq.watchdog, &c->rq.mtx, 0); for (tc = 0; tc < c->num_tc; tc++) { struct mlx5e_sq *sq = c->sq + tc; mtx_init(&sq->lock, "mlx5tx", MTX_NETWORK_LOCK " TX", MTX_DEF); mtx_init(&sq->comp_lock, "mlx5comp", MTX_NETWORK_LOCK " TX", MTX_DEF); callout_init_mtx(&sq->cev_callout, &sq->lock, 0); sq->cev_factor = c->priv->params_ethtool.tx_completion_fact; /* ensure the TX completion event factor is not zero */ if (sq->cev_factor == 0) sq->cev_factor = 1; } } static void mlx5e_chan_mtx_destroy(struct mlx5e_channel *c) { int tc; mtx_destroy(&c->rq.mtx); for (tc = 0; tc < c->num_tc; tc++) { mtx_destroy(&c->sq[tc].lock); mtx_destroy(&c->sq[tc].comp_lock); } } static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix, struct mlx5e_channel_param *cparam, struct mlx5e_channel *c) { int err; memset(c, 0, sizeof(*c)); c->priv = priv; c->ix = ix; /* setup send tag */ c->tag.m_snd_tag.ifp = priv->ifp; c->tag.type = IF_SND_TAG_TYPE_UNLIMITED; c->mkey_be = cpu_to_be32(priv->mr.key); c->num_tc = priv->num_tc; /* init mutexes */ mlx5e_chan_mtx_init(c); /* open transmit completion queue */ err = mlx5e_open_tx_cqs(c, cparam); if (err) goto err_free; /* open receive completion queue */ err = mlx5e_open_cq(c->priv, &cparam->rx_cq, &c->rq.cq, &mlx5e_rx_cq_comp, c->ix); if (err) goto err_close_tx_cqs; err = mlx5e_open_sqs(c, cparam); if (err) goto err_close_rx_cq; err = mlx5e_open_rq(c, &cparam->rq, &c->rq); if (err) goto err_close_sqs; /* poll receive queue initially */ c->rq.cq.mcq.comp(&c->rq.cq.mcq); return (0); err_close_sqs: mlx5e_close_sqs_wait(c); err_close_rx_cq: mlx5e_close_cq(&c->rq.cq); err_close_tx_cqs: mlx5e_close_tx_cqs(c); err_free: /* destroy mutexes */ 
mlx5e_chan_mtx_destroy(c); return (err); } static void mlx5e_close_channel(struct mlx5e_channel *c) { mlx5e_close_rq(&c->rq); } static void mlx5e_close_channel_wait(struct mlx5e_channel *c) { mlx5e_close_rq_wait(&c->rq); mlx5e_close_sqs_wait(c); mlx5e_close_tx_cqs(c); /* destroy mutexes */ mlx5e_chan_mtx_destroy(c); } static int mlx5e_get_wqe_sz(struct mlx5e_priv *priv, u32 *wqe_sz, u32 *nsegs) { u32 r, n; r = priv->params.hw_lro_en ? priv->params.lro_wqe_sz : MLX5E_SW2MB_MTU(priv->ifp->if_mtu); if (r > MJUM16BYTES) return (-ENOMEM); if (r > MJUM9BYTES) r = MJUM16BYTES; else if (r > MJUMPAGESIZE) r = MJUM9BYTES; else if (r > MCLBYTES) r = MJUMPAGESIZE; else r = MCLBYTES; /* * n + 1 must be a power of two, because stride size must be. * Stride size is 16 * (n + 1), as the first segment is * control. */ for (n = howmany(r, MLX5E_MAX_RX_BYTES); !powerof2(n + 1); n++) ; if (n > MLX5E_MAX_BUSDMA_RX_SEGS) return (-ENOMEM); *wqe_sz = r; *nsegs = n; return (0); } static void mlx5e_build_rq_param(struct mlx5e_priv *priv, struct mlx5e_rq_param *param) { void *rqc = param->rqc; void *wq = MLX5_ADDR_OF(rqc, rqc, wq); u32 wqe_sz, nsegs; mlx5e_get_wqe_sz(priv, &wqe_sz, &nsegs); MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_LINKED_LIST); MLX5_SET(wq, wq, end_padding_mode, MLX5_WQ_END_PAD_MODE_ALIGN); MLX5_SET(wq, wq, log_wq_stride, ilog2(sizeof(struct mlx5e_rx_wqe) + nsegs * sizeof(struct mlx5_wqe_data_seg))); MLX5_SET(wq, wq, log_wq_sz, priv->params.log_rq_size); MLX5_SET(wq, wq, pd, priv->pdn); param->wq.buf_numa_node = 0; param->wq.db_numa_node = 0; param->wq.linear = 1; } static void mlx5e_build_sq_param(struct mlx5e_priv *priv, struct mlx5e_sq_param *param) { void *sqc = param->sqc; void *wq = MLX5_ADDR_OF(sqc, sqc, wq); MLX5_SET(wq, wq, log_wq_sz, priv->params.log_sq_size); MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB)); MLX5_SET(wq, wq, pd, priv->pdn); param->wq.buf_numa_node = 0; param->wq.db_numa_node = 0; param->wq.linear = 1; } static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv, struct mlx5e_cq_param *param) { void *cqc = param->cqc; MLX5_SET(cqc, cqc, uar_page, priv->cq_uar.index); } static void mlx5e_get_default_profile(struct mlx5e_priv *priv, int mode, struct net_dim_cq_moder *ptr) { *ptr = net_dim_get_profile(mode, MLX5E_DIM_DEFAULT_PROFILE); /* apply LRO restrictions */ if (priv->params.hw_lro_en && ptr->pkts > MLX5E_DIM_MAX_RX_CQ_MODERATION_PKTS_WITH_LRO) { ptr->pkts = MLX5E_DIM_MAX_RX_CQ_MODERATION_PKTS_WITH_LRO; } } static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, struct mlx5e_cq_param *param) { struct net_dim_cq_moder curr; void *cqc = param->cqc; /* * We use MLX5_CQE_FORMAT_HASH because the RX hash mini CQE * format is more beneficial for FreeBSD use case. * * Adding support for MLX5_CQE_FORMAT_CSUM will require changes * in mlx5e_decompress_cqe. 
*/ if (priv->params.cqe_zipping_en) { MLX5_SET(cqc, cqc, mini_cqe_res_format, MLX5_CQE_FORMAT_HASH); MLX5_SET(cqc, cqc, cqe_compression_en, 1); } MLX5_SET(cqc, cqc, log_cq_size, priv->params.log_rq_size); switch (priv->params.rx_cq_moderation_mode) { case 0: MLX5_SET(cqc, cqc, cq_period, priv->params.rx_cq_moderation_usec); MLX5_SET(cqc, cqc, cq_max_count, priv->params.rx_cq_moderation_pkts); MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE); break; case 1: MLX5_SET(cqc, cqc, cq_period, priv->params.rx_cq_moderation_usec); MLX5_SET(cqc, cqc, cq_max_count, priv->params.rx_cq_moderation_pkts); if (MLX5_CAP_GEN(priv->mdev, cq_period_start_from_cqe)) MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_CQE); else MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE); break; case 2: mlx5e_get_default_profile(priv, NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE, &curr); MLX5_SET(cqc, cqc, cq_period, curr.usec); MLX5_SET(cqc, cqc, cq_max_count, curr.pkts); MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE); break; case 3: mlx5e_get_default_profile(priv, NET_DIM_CQ_PERIOD_MODE_START_FROM_CQE, &curr); MLX5_SET(cqc, cqc, cq_period, curr.usec); MLX5_SET(cqc, cqc, cq_max_count, curr.pkts); if (MLX5_CAP_GEN(priv->mdev, cq_period_start_from_cqe)) MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_CQE); else MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE); break; default: break; } mlx5e_dim_build_cq_param(priv, param); mlx5e_build_common_cq_param(priv, param); } static void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv, struct mlx5e_cq_param *param) { void *cqc = param->cqc; MLX5_SET(cqc, cqc, log_cq_size, priv->params.log_sq_size); MLX5_SET(cqc, cqc, cq_period, priv->params.tx_cq_moderation_usec); MLX5_SET(cqc, cqc, cq_max_count, priv->params.tx_cq_moderation_pkts); switch (priv->params.tx_cq_moderation_mode) { case 0: MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE); break; default: if (MLX5_CAP_GEN(priv->mdev, cq_period_start_from_cqe)) MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_CQE); else MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE); break; } mlx5e_build_common_cq_param(priv, param); } static void mlx5e_build_channel_param(struct mlx5e_priv *priv, struct mlx5e_channel_param *cparam) { memset(cparam, 0, sizeof(*cparam)); mlx5e_build_rq_param(priv, &cparam->rq); mlx5e_build_sq_param(priv, &cparam->sq); mlx5e_build_rx_cq_param(priv, &cparam->rx_cq); mlx5e_build_tx_cq_param(priv, &cparam->tx_cq); } static int mlx5e_open_channels(struct mlx5e_priv *priv) { struct mlx5e_channel_param cparam; int err; int i; int j; mlx5e_build_channel_param(priv, &cparam); for (i = 0; i < priv->params.num_channels; i++) { err = mlx5e_open_channel(priv, i, &cparam, &priv->channel[i]); if (err) goto err_close_channels; } for (j = 0; j < priv->params.num_channels; j++) { err = mlx5e_wait_for_min_rx_wqes(&priv->channel[j].rq); if (err) goto err_close_channels; } return (0); err_close_channels: while (i--) { mlx5e_close_channel(&priv->channel[i]); mlx5e_close_channel_wait(&priv->channel[i]); } return (err); } static void mlx5e_close_channels(struct mlx5e_priv *priv) { int i; for (i = 0; i < priv->params.num_channels; i++) mlx5e_close_channel(&priv->channel[i]); for (i = 0; i < priv->params.num_channels; i++) mlx5e_close_channel_wait(&priv->channel[i]); } static int mlx5e_refresh_sq_params(struct mlx5e_priv *priv, struct mlx5e_sq *sq) { if 
(MLX5_CAP_GEN(priv->mdev, cq_period_mode_modify)) { uint8_t cq_mode; switch (priv->params.tx_cq_moderation_mode) { case 0: case 2: cq_mode = MLX5_CQ_PERIOD_MODE_START_FROM_EQE; break; default: cq_mode = MLX5_CQ_PERIOD_MODE_START_FROM_CQE; break; } return (mlx5_core_modify_cq_moderation_mode(priv->mdev, &sq->cq.mcq, priv->params.tx_cq_moderation_usec, priv->params.tx_cq_moderation_pkts, cq_mode)); } return (mlx5_core_modify_cq_moderation(priv->mdev, &sq->cq.mcq, priv->params.tx_cq_moderation_usec, priv->params.tx_cq_moderation_pkts)); } static int mlx5e_refresh_rq_params(struct mlx5e_priv *priv, struct mlx5e_rq *rq) { if (MLX5_CAP_GEN(priv->mdev, cq_period_mode_modify)) { uint8_t cq_mode; uint8_t dim_mode; int retval; switch (priv->params.rx_cq_moderation_mode) { case 0: case 2: cq_mode = MLX5_CQ_PERIOD_MODE_START_FROM_EQE; dim_mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE; break; default: cq_mode = MLX5_CQ_PERIOD_MODE_START_FROM_CQE; dim_mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_CQE; break; } /* tear down dynamic interrupt moderation */ mtx_lock(&rq->mtx); rq->dim.mode = NET_DIM_CQ_PERIOD_MODE_DISABLED; mtx_unlock(&rq->mtx); /* wait for dynamic interrupt moderation work task, if any */ cancel_work_sync(&rq->dim.work); if (priv->params.rx_cq_moderation_mode >= 2) { struct net_dim_cq_moder curr; mlx5e_get_default_profile(priv, dim_mode, &curr); retval = mlx5_core_modify_cq_moderation_mode(priv->mdev, &rq->cq.mcq, curr.usec, curr.pkts, cq_mode); /* set dynamic interrupt moderation mode and zero defaults */ mtx_lock(&rq->mtx); rq->dim.mode = dim_mode; rq->dim.state = 0; rq->dim.profile_ix = MLX5E_DIM_DEFAULT_PROFILE; mtx_unlock(&rq->mtx); } else { retval = mlx5_core_modify_cq_moderation_mode(priv->mdev, &rq->cq.mcq, priv->params.rx_cq_moderation_usec, priv->params.rx_cq_moderation_pkts, cq_mode); } return (retval); } return (mlx5_core_modify_cq_moderation(priv->mdev, &rq->cq.mcq, priv->params.rx_cq_moderation_usec, priv->params.rx_cq_moderation_pkts)); } static int mlx5e_refresh_channel_params_sub(struct mlx5e_priv *priv, struct mlx5e_channel *c) { int err; int i; err = mlx5e_refresh_rq_params(priv, &c->rq); if (err) goto done; for (i = 0; i != c->num_tc; i++) { err = mlx5e_refresh_sq_params(priv, &c->sq[i]); if (err) goto done; } done: return (err); } int mlx5e_refresh_channel_params(struct mlx5e_priv *priv) { int i; /* check if channels are closed */ if (test_bit(MLX5E_STATE_OPENED, &priv->state) == 0) return (EINVAL); for (i = 0; i < priv->params.num_channels; i++) { int err; err = mlx5e_refresh_channel_params_sub(priv, &priv->channel[i]); if (err) return (err); } return (0); } static int mlx5e_open_tis(struct mlx5e_priv *priv, int tc) { struct mlx5_core_dev *mdev = priv->mdev; u32 in[MLX5_ST_SZ_DW(create_tis_in)]; void *tisc = MLX5_ADDR_OF(create_tis_in, in, ctx); memset(in, 0, sizeof(in)); MLX5_SET(tisc, tisc, prio, tc); MLX5_SET(tisc, tisc, transport_domain, priv->tdn); return (mlx5_core_create_tis(mdev, in, sizeof(in), &priv->tisn[tc])); } static void mlx5e_close_tis(struct mlx5e_priv *priv, int tc) { mlx5_core_destroy_tis(priv->mdev, priv->tisn[tc]); } static int mlx5e_open_tises(struct mlx5e_priv *priv) { int num_tc = priv->num_tc; int err; int tc; for (tc = 0; tc < num_tc; tc++) { err = mlx5e_open_tis(priv, tc); if (err) goto err_close_tises; } return (0); err_close_tises: for (tc--; tc >= 0; tc--) mlx5e_close_tis(priv, tc); return (err); } static void mlx5e_close_tises(struct mlx5e_priv *priv) { int num_tc = priv->num_tc; int tc; for (tc = 0; tc < num_tc; tc++) mlx5e_close_tis(priv, 
tc); } static int mlx5e_open_rqt(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; u32 *in; u32 out[MLX5_ST_SZ_DW(create_rqt_out)] = {0}; void *rqtc; int inlen; int err; int sz; int i; sz = 1 << priv->params.rx_hash_log_tbl_sz; inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + sizeof(u32) * sz; in = mlx5_vzalloc(inlen); if (in == NULL) return (-ENOMEM); rqtc = MLX5_ADDR_OF(create_rqt_in, in, rqt_context); MLX5_SET(rqtc, rqtc, rqt_actual_size, sz); MLX5_SET(rqtc, rqtc, rqt_max_size, sz); for (i = 0; i < sz; i++) { int ix = i; #ifdef RSS ix = rss_get_indirection_to_bucket(ix); #endif /* ensure we don't overflow */ ix %= priv->params.num_channels; /* apply receive side scaling stride, if any */ ix -= ix % (int)priv->params.channels_rsss; MLX5_SET(rqtc, rqtc, rq_num[i], priv->channel[ix].rq.rqn); } MLX5_SET(create_rqt_in, in, opcode, MLX5_CMD_OP_CREATE_RQT); err = mlx5_cmd_exec(mdev, in, inlen, out, sizeof(out)); if (!err) priv->rqtn = MLX5_GET(create_rqt_out, out, rqtn); kvfree(in); return (err); } static void mlx5e_close_rqt(struct mlx5e_priv *priv) { u32 in[MLX5_ST_SZ_DW(destroy_rqt_in)] = {0}; u32 out[MLX5_ST_SZ_DW(destroy_rqt_out)] = {0}; MLX5_SET(destroy_rqt_in, in, opcode, MLX5_CMD_OP_DESTROY_RQT); MLX5_SET(destroy_rqt_in, in, rqtn, priv->rqtn); mlx5_cmd_exec(priv->mdev, in, sizeof(in), out, sizeof(out)); } static void mlx5e_build_tir_ctx(struct mlx5e_priv *priv, u32 * tirc, int tt) { void *hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer); __be32 *hkey; MLX5_SET(tirc, tirc, transport_domain, priv->tdn); #define ROUGH_MAX_L2_L3_HDR_SZ 256 #define MLX5_HASH_IP (MLX5_HASH_FIELD_SEL_SRC_IP |\ MLX5_HASH_FIELD_SEL_DST_IP) #define MLX5_HASH_ALL (MLX5_HASH_FIELD_SEL_SRC_IP |\ MLX5_HASH_FIELD_SEL_DST_IP |\ MLX5_HASH_FIELD_SEL_L4_SPORT |\ MLX5_HASH_FIELD_SEL_L4_DPORT) #define MLX5_HASH_IP_IPSEC_SPI (MLX5_HASH_FIELD_SEL_SRC_IP |\ MLX5_HASH_FIELD_SEL_DST_IP |\ MLX5_HASH_FIELD_SEL_IPSEC_SPI) if (priv->params.hw_lro_en) { MLX5_SET(tirc, tirc, lro_enable_mask, MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO | MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO); MLX5_SET(tirc, tirc, lro_max_msg_sz, (priv->params.lro_wqe_sz - ROUGH_MAX_L2_L3_HDR_SZ) >> 8); /* TODO: add the option to choose timer value dynamically */ MLX5_SET(tirc, tirc, lro_timeout_period_usecs, MLX5_CAP_ETH(priv->mdev, lro_timer_supported_periods[2])); } /* setup parameters for hashing TIR type, if any */ switch (tt) { case MLX5E_TT_ANY: MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_DIRECT); MLX5_SET(tirc, tirc, inline_rqn, priv->channel[0].rq.rqn); break; default: MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT); MLX5_SET(tirc, tirc, indirect_table, priv->rqtn); MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_TIRC_RX_HASH_FN_HASH_TOEPLITZ); hkey = (__be32 *) MLX5_ADDR_OF(tirc, tirc, rx_hash_toeplitz_key); #ifdef RSS /* * The FreeBSD RSS implementation does currently not * support symmetric Toeplitz hashes: */ MLX5_SET(tirc, tirc, rx_hash_symmetric, 0); rss_getkey((uint8_t *)hkey); #else MLX5_SET(tirc, tirc, rx_hash_symmetric, 1); hkey[0] = cpu_to_be32(0xD181C62C); hkey[1] = cpu_to_be32(0xF7F4DB5B); hkey[2] = cpu_to_be32(0x1983A2FC); hkey[3] = cpu_to_be32(0x943E1ADB); hkey[4] = cpu_to_be32(0xD9389E6B); hkey[5] = cpu_to_be32(0xD1039C2C); hkey[6] = cpu_to_be32(0xA74499AD); hkey[7] = cpu_to_be32(0x593D56D9); hkey[8] = cpu_to_be32(0xF3253C06); hkey[9] = cpu_to_be32(0x2ADC1FFC); #endif break; } switch (tt) { case MLX5E_TT_IPV4_TCP: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV4); MLX5_SET(rx_hash_field_select, 
hfso, l4_prot_type, MLX5_L4_PROT_TYPE_TCP); #ifdef RSS if (!(rss_gethashconfig() & RSS_HASHTYPE_RSS_TCP_IPV4)) { MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP); } else #endif MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_ALL); break; case MLX5E_TT_IPV6_TCP: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV6); MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, MLX5_L4_PROT_TYPE_TCP); #ifdef RSS if (!(rss_gethashconfig() & RSS_HASHTYPE_RSS_TCP_IPV6)) { MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP); } else #endif MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_ALL); break; case MLX5E_TT_IPV4_UDP: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV4); MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, MLX5_L4_PROT_TYPE_UDP); #ifdef RSS if (!(rss_gethashconfig() & RSS_HASHTYPE_RSS_UDP_IPV4)) { MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP); } else #endif MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_ALL); break; case MLX5E_TT_IPV6_UDP: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV6); MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, MLX5_L4_PROT_TYPE_UDP); #ifdef RSS if (!(rss_gethashconfig() & RSS_HASHTYPE_RSS_UDP_IPV6)) { MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP); } else #endif MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_ALL); break; case MLX5E_TT_IPV4_IPSEC_AH: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV4); MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP_IPSEC_SPI); break; case MLX5E_TT_IPV6_IPSEC_AH: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV6); MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP_IPSEC_SPI); break; case MLX5E_TT_IPV4_IPSEC_ESP: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV4); MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP_IPSEC_SPI); break; case MLX5E_TT_IPV6_IPSEC_ESP: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV6); MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP_IPSEC_SPI); break; case MLX5E_TT_IPV4: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV4); MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP); break; case MLX5E_TT_IPV6: MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, MLX5_L3_PROT_TYPE_IPV6); MLX5_SET(rx_hash_field_select, hfso, selected_fields, MLX5_HASH_IP); break; default: break; } } static int mlx5e_open_tir(struct mlx5e_priv *priv, int tt) { struct mlx5_core_dev *mdev = priv->mdev; u32 *in; void *tirc; int inlen; int err; inlen = MLX5_ST_SZ_BYTES(create_tir_in); in = mlx5_vzalloc(inlen); if (in == NULL) return (-ENOMEM); tirc = MLX5_ADDR_OF(create_tir_in, in, tir_context); mlx5e_build_tir_ctx(priv, tirc, tt); err = mlx5_core_create_tir(mdev, in, inlen, &priv->tirn[tt]); kvfree(in); return (err); } static void mlx5e_close_tir(struct mlx5e_priv *priv, int tt) { mlx5_core_destroy_tir(priv->mdev, priv->tirn[tt]); } static int mlx5e_open_tirs(struct mlx5e_priv *priv) { int err; int i; for (i = 0; i < MLX5E_NUM_TT; i++) { err = mlx5e_open_tir(priv, i); if (err) goto err_close_tirs; } return (0); err_close_tirs: for (i--; i >= 0; i--) mlx5e_close_tir(priv, i); return (err); } static void mlx5e_close_tirs(struct mlx5e_priv *priv) { int i; for (i = 0; i < MLX5E_NUM_TT; i++) mlx5e_close_tir(priv, i); } /* * SW 
MTU does not include headers, * HW MTU includes all headers and checksums. */ static int mlx5e_set_dev_port_mtu(struct ifnet *ifp, int sw_mtu) { struct mlx5e_priv *priv = ifp->if_softc; struct mlx5_core_dev *mdev = priv->mdev; int hw_mtu; int err; hw_mtu = MLX5E_SW2HW_MTU(sw_mtu); err = mlx5_set_port_mtu(mdev, hw_mtu); if (err) { mlx5_en_err(ifp, "mlx5_set_port_mtu failed setting %d, err=%d\n", sw_mtu, err); return (err); } /* Update vport context MTU */ err = mlx5_set_vport_mtu(mdev, hw_mtu); if (err) { mlx5_en_err(ifp, "Failed updating vport context with MTU size, err=%d\n", err); } ifp->if_mtu = sw_mtu; err = mlx5_query_vport_mtu(mdev, &hw_mtu); if (err || !hw_mtu) { /* fallback to port oper mtu */ err = mlx5_query_port_oper_mtu(mdev, &hw_mtu); } if (err) { mlx5_en_err(ifp, "Query port MTU, after setting new MTU value, failed\n"); return (err); } else if (MLX5E_HW2SW_MTU(hw_mtu) < sw_mtu) { err = -E2BIG; mlx5_en_err(ifp, "Port MTU %d is smaller than ifp mtu %d\n", hw_mtu, sw_mtu); } else if (MLX5E_HW2SW_MTU(hw_mtu) > sw_mtu) { err = -EINVAL; mlx5_en_err(ifp, "Port MTU %d is bigger than ifp mtu %d\n", hw_mtu, sw_mtu); } priv->params_ethtool.hw_mtu = hw_mtu; return (err); } int mlx5e_open_locked(struct ifnet *ifp) { struct mlx5e_priv *priv = ifp->if_softc; int err; u16 set_id; /* check if already opened */ if (test_bit(MLX5E_STATE_OPENED, &priv->state) != 0) return (0); #ifdef RSS if (rss_getnumbuckets() > priv->params.num_channels) { mlx5_en_info(ifp, "NOTE: There are more RSS buckets(%u) than channels(%u) available\n", rss_getnumbuckets(), priv->params.num_channels); } #endif err = mlx5e_open_tises(priv); if (err) { mlx5_en_err(ifp, "mlx5e_open_tises failed, %d\n", err); return (err); } err = mlx5_vport_alloc_q_counter(priv->mdev, MLX5_INTERFACE_PROTOCOL_ETH, &set_id); if (err) { mlx5_en_err(priv->ifp, "mlx5_vport_alloc_q_counter failed: %d\n", err); goto err_close_tises; } /* store counter set ID */ priv->counter_set_id = set_id; err = mlx5e_open_channels(priv); if (err) { mlx5_en_err(ifp, "mlx5e_open_channels failed, %d\n", err); goto err_dalloc_q_counter; } err = mlx5e_open_rqt(priv); if (err) { mlx5_en_err(ifp, "mlx5e_open_rqt failed, %d\n", err); goto err_close_channels; } err = mlx5e_open_tirs(priv); if (err) { mlx5_en_err(ifp, "mlx5e_open_tirs failed, %d\n", err); goto err_close_rqls; } err = mlx5e_open_flow_table(priv); if (err) { mlx5_en_err(ifp, "mlx5e_open_flow_table failed, %d\n", err); goto err_close_tirs; } err = mlx5e_add_all_vlan_rules(priv); if (err) { mlx5_en_err(ifp, "mlx5e_add_all_vlan_rules failed, %d\n", err); goto err_close_flow_table; } set_bit(MLX5E_STATE_OPENED, &priv->state); mlx5e_update_carrier(priv); mlx5e_set_rx_mode_core(priv); return (0); err_close_flow_table: mlx5e_close_flow_table(priv); err_close_tirs: mlx5e_close_tirs(priv); err_close_rqls: mlx5e_close_rqt(priv); err_close_channels: mlx5e_close_channels(priv); err_dalloc_q_counter: mlx5_vport_dealloc_q_counter(priv->mdev, MLX5_INTERFACE_PROTOCOL_ETH, priv->counter_set_id); err_close_tises: mlx5e_close_tises(priv); return (err); } static void mlx5e_open(void *arg) { struct mlx5e_priv *priv = arg; PRIV_LOCK(priv); if (mlx5_set_port_status(priv->mdev, MLX5_PORT_UP)) mlx5_en_err(priv->ifp, "Setting port status to up failed\n"); mlx5e_open_locked(priv->ifp); priv->ifp->if_drv_flags |= IFF_DRV_RUNNING; PRIV_UNLOCK(priv); } int mlx5e_close_locked(struct ifnet *ifp) { struct mlx5e_priv *priv = ifp->if_softc; /* check if already closed */ if (test_bit(MLX5E_STATE_OPENED, &priv->state) == 0) return (0);
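/* Teardown mirrors mlx5e_open_locked() in reverse: RX mode and VLAN rules go first, then the flow table, TIRs, RQT, channels, the queue counter and finally the TISes. */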
clear_bit(MLX5E_STATE_OPENED, &priv->state); mlx5e_set_rx_mode_core(priv); mlx5e_del_all_vlan_rules(priv); if_link_state_change(priv->ifp, LINK_STATE_DOWN); mlx5e_close_flow_table(priv); mlx5e_close_tirs(priv); mlx5e_close_rqt(priv); mlx5e_close_channels(priv); mlx5_vport_dealloc_q_counter(priv->mdev, MLX5_INTERFACE_PROTOCOL_ETH, priv->counter_set_id); mlx5e_close_tises(priv); return (0); } #if (__FreeBSD_version >= 1100000) static uint64_t mlx5e_get_counter(struct ifnet *ifp, ift_counter cnt) { struct mlx5e_priv *priv = ifp->if_softc; u64 retval; /* PRIV_LOCK(priv); XXX not allowed */ switch (cnt) { case IFCOUNTER_IPACKETS: retval = priv->stats.vport.rx_packets; break; case IFCOUNTER_IERRORS: retval = priv->stats.pport.in_range_len_errors + priv->stats.pport.out_of_range_len + priv->stats.pport.too_long_errors + priv->stats.pport.check_seq_err + priv->stats.pport.alignment_err; break; case IFCOUNTER_IQDROPS: retval = priv->stats.vport.rx_out_of_buffer; break; case IFCOUNTER_OPACKETS: retval = priv->stats.vport.tx_packets; break; case IFCOUNTER_OERRORS: retval = priv->stats.port_stats_debug.out_discards; break; case IFCOUNTER_IBYTES: retval = priv->stats.vport.rx_bytes; break; case IFCOUNTER_OBYTES: retval = priv->stats.vport.tx_bytes; break; case IFCOUNTER_IMCASTS: retval = priv->stats.vport.rx_multicast_packets; break; case IFCOUNTER_OMCASTS: retval = priv->stats.vport.tx_multicast_packets; break; case IFCOUNTER_OQDROPS: retval = priv->stats.vport.tx_queue_dropped; break; case IFCOUNTER_COLLISIONS: retval = priv->stats.pport.collisions; break; default: retval = if_get_counter_default(ifp, cnt); break; } /* PRIV_UNLOCK(priv); XXX not allowed */ return (retval); } #endif static void mlx5e_set_rx_mode(struct ifnet *ifp) { struct mlx5e_priv *priv = ifp->if_softc; queue_work(priv->wq, &priv->set_rx_mode_work); } static int mlx5e_ioctl(struct ifnet *ifp, u_long command, caddr_t data) { struct mlx5e_priv *priv; struct ifreq *ifr; struct ifi2creq i2c; int error = 0; int mask = 0; int size_read = 0; int module_status; int module_num; int max_mtu; uint8_t read_addr; priv = ifp->if_softc; /* check if detaching */ if (priv == NULL || priv->gone != 0) return (ENXIO); switch (command) { case SIOCSIFMTU: ifr = (struct ifreq *)data; PRIV_LOCK(priv); mlx5_query_port_max_mtu(priv->mdev, &max_mtu); if (ifr->ifr_mtu >= MLX5E_MTU_MIN && ifr->ifr_mtu <= MIN(MLX5E_MTU_MAX, max_mtu)) { int was_opened; was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state); if (was_opened) mlx5e_close_locked(ifp); /* set new MTU */ mlx5e_set_dev_port_mtu(ifp, ifr->ifr_mtu); if (was_opened) mlx5e_open_locked(ifp); } else { error = EINVAL; mlx5_en_err(ifp, "Invalid MTU value. 
Min val: %d, Max val: %d\n", MLX5E_MTU_MIN, MIN(MLX5E_MTU_MAX, max_mtu)); } PRIV_UNLOCK(priv); break; case SIOCSIFFLAGS: if ((ifp->if_flags & IFF_UP) && (ifp->if_drv_flags & IFF_DRV_RUNNING)) { mlx5e_set_rx_mode(ifp); break; } PRIV_LOCK(priv); if (ifp->if_flags & IFF_UP) { if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) { if (test_bit(MLX5E_STATE_OPENED, &priv->state) == 0) mlx5e_open_locked(ifp); ifp->if_drv_flags |= IFF_DRV_RUNNING; mlx5_set_port_status(priv->mdev, MLX5_PORT_UP); } } else { if (ifp->if_drv_flags & IFF_DRV_RUNNING) { mlx5_set_port_status(priv->mdev, MLX5_PORT_DOWN); if (test_bit(MLX5E_STATE_OPENED, &priv->state) != 0) mlx5e_close_locked(ifp); mlx5e_update_carrier(priv); ifp->if_drv_flags &= ~IFF_DRV_RUNNING; } } PRIV_UNLOCK(priv); break; case SIOCADDMULTI: case SIOCDELMULTI: mlx5e_set_rx_mode(ifp); break; case SIOCSIFMEDIA: case SIOCGIFMEDIA: case SIOCGIFXMEDIA: ifr = (struct ifreq *)data; error = ifmedia_ioctl(ifp, ifr, &priv->media, command); break; case SIOCSIFCAP: ifr = (struct ifreq *)data; PRIV_LOCK(priv); mask = ifr->ifr_reqcap ^ ifp->if_capenable; if (mask & IFCAP_TXCSUM) { ifp->if_capenable ^= IFCAP_TXCSUM; ifp->if_hwassist ^= (CSUM_TCP | CSUM_UDP | CSUM_IP); if (IFCAP_TSO4 & ifp->if_capenable && !(IFCAP_TXCSUM & ifp->if_capenable)) { ifp->if_capenable &= ~IFCAP_TSO4; ifp->if_hwassist &= ~CSUM_IP_TSO; mlx5_en_err(ifp, "tso4 disabled due to -txcsum.\n"); } } if (mask & IFCAP_TXCSUM_IPV6) { ifp->if_capenable ^= IFCAP_TXCSUM_IPV6; ifp->if_hwassist ^= (CSUM_UDP_IPV6 | CSUM_TCP_IPV6); if (IFCAP_TSO6 & ifp->if_capenable && !(IFCAP_TXCSUM_IPV6 & ifp->if_capenable)) { ifp->if_capenable &= ~IFCAP_TSO6; ifp->if_hwassist &= ~CSUM_IP6_TSO; mlx5_en_err(ifp, "tso6 disabled due to -txcsum6.\n"); } } if (mask & IFCAP_RXCSUM) ifp->if_capenable ^= IFCAP_RXCSUM; if (mask & IFCAP_RXCSUM_IPV6) ifp->if_capenable ^= IFCAP_RXCSUM_IPV6; if (mask & IFCAP_TSO4) { if (!(IFCAP_TSO4 & ifp->if_capenable) && !(IFCAP_TXCSUM & ifp->if_capenable)) { mlx5_en_err(ifp, "enable txcsum first.\n"); error = EAGAIN; goto out; } ifp->if_capenable ^= IFCAP_TSO4; ifp->if_hwassist ^= CSUM_IP_TSO; } if (mask & IFCAP_TSO6) { if (!(IFCAP_TSO6 & ifp->if_capenable) && !(IFCAP_TXCSUM_IPV6 & ifp->if_capenable)) { mlx5_en_err(ifp, "enable txcsum6 first.\n"); error = EAGAIN; goto out; } ifp->if_capenable ^= IFCAP_TSO6; ifp->if_hwassist ^= CSUM_IP6_TSO; } if (mask & IFCAP_VLAN_HWFILTER) { if (ifp->if_capenable & IFCAP_VLAN_HWFILTER) mlx5e_disable_vlan_filter(priv); else mlx5e_enable_vlan_filter(priv); ifp->if_capenable ^= IFCAP_VLAN_HWFILTER; } if (mask & IFCAP_VLAN_HWTAGGING) ifp->if_capenable ^= IFCAP_VLAN_HWTAGGING; if (mask & IFCAP_WOL_MAGIC) ifp->if_capenable ^= IFCAP_WOL_MAGIC; VLAN_CAPABILITIES(ifp); /* turning off LRO also turns off HW LRO, if it is on */ if (mask & IFCAP_LRO) { int was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state); bool need_restart = false; ifp->if_capenable ^= IFCAP_LRO; /* figure out if updating HW LRO is needed */ if (!(ifp->if_capenable & IFCAP_LRO)) { if (priv->params.hw_lro_en) { priv->params.hw_lro_en = false; need_restart = true; } } else { if (priv->params.hw_lro_en == false && priv->params_ethtool.hw_lro != 0) { priv->params.hw_lro_en = true; need_restart = true; } } if (was_opened && need_restart) { mlx5e_close_locked(ifp); mlx5e_open_locked(ifp); } } if (mask & IFCAP_HWRXTSTMP) { ifp->if_capenable ^= IFCAP_HWRXTSTMP; if (ifp->if_capenable & IFCAP_HWRXTSTMP) { if (priv->clbr_done == 0) mlx5e_reset_calibration_callout(priv); } else { callout_drain(&priv->tstmp_clbr);
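/* calibration is stopped; clearing clbr_done below makes a later IFCAP_HWRXTSTMP enable restart it via mlx5e_reset_calibration_callout() */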
priv->clbr_done = 0; } } out: PRIV_UNLOCK(priv); break; case SIOCGI2C: ifr = (struct ifreq *)data; /* * Copy from the user-space address ifr_data to the * kernel-space address i2c */ error = copyin(ifr_data_get_ptr(ifr), &i2c, sizeof(i2c)); if (error) break; if (i2c.len > sizeof(i2c.data)) { error = EINVAL; break; } PRIV_LOCK(priv); /* Get module_num which is required for the query_eeprom */ error = mlx5_query_module_num(priv->mdev, &module_num); if (error) { mlx5_en_err(ifp, "Query module num failed, eeprom reading is not supported\n"); error = EINVAL; goto err_i2c; } /* Check if module is present before doing an access */ module_status = mlx5_query_module_status(priv->mdev, module_num); if (module_status != MLX5_MODULE_STATUS_PLUGGED_ENABLED) { error = EINVAL; goto err_i2c; } /* * Currently 0xA0 and 0xA2 are the only addresses permitted. * The internal conversion is as follows: */ if (i2c.dev_addr == 0xA0) - read_addr = MLX5E_I2C_ADDR_LOW; + read_addr = MLX5_I2C_ADDR_LOW; else if (i2c.dev_addr == 0xA2) - read_addr = MLX5E_I2C_ADDR_HIGH; + read_addr = MLX5_I2C_ADDR_HIGH; else { mlx5_en_err(ifp, "Query eeprom failed, Invalid Address: %X\n", i2c.dev_addr); error = EINVAL; goto err_i2c; } error = mlx5_query_eeprom(priv->mdev, - read_addr, MLX5E_EEPROM_LOW_PAGE, + read_addr, MLX5_EEPROM_LOW_PAGE, (uint32_t)i2c.offset, (uint32_t)i2c.len, module_num, (uint32_t *)i2c.data, &size_read); if (error) { mlx5_en_err(ifp, "Query eeprom failed, eeprom reading is not supported\n"); error = EINVAL; goto err_i2c; } if (i2c.len > MLX5_EEPROM_MAX_BYTES) { error = mlx5_query_eeprom(priv->mdev, - read_addr, MLX5E_EEPROM_LOW_PAGE, + read_addr, MLX5_EEPROM_LOW_PAGE, (uint32_t)(i2c.offset + size_read), (uint32_t)(i2c.len - size_read), module_num, (uint32_t *)(i2c.data + size_read), &size_read); } if (error) { mlx5_en_err(ifp, "Query eeprom failed, eeprom reading is not supported\n"); error = EINVAL; goto err_i2c; } error = copyout(&i2c, ifr_data_get_ptr(ifr), sizeof(i2c)); err_i2c: PRIV_UNLOCK(priv); break; default: error = ether_ioctl(ifp, command, data); break; } return (error); } static int mlx5e_check_required_hca_cap(struct mlx5_core_dev *mdev) { /* * TODO: uncomment once FW really sets all these bits if * (!mdev->caps.eth.rss_ind_tbl_cap || !mdev->caps.eth.csum_cap || * !mdev->caps.eth.max_lso_cap || !mdev->caps.eth.vlan_cap || * !(mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_SCQE_BRK_MOD)) return * -ENOTSUPP; */ /* TODO: add more must-have features */ if (MLX5_CAP_GEN(mdev, port_type) != MLX5_CAP_PORT_TYPE_ETH) return (-ENODEV); return (0); } static u16 mlx5e_get_max_inline_cap(struct mlx5_core_dev *mdev) { uint32_t bf_buf_size = (1U << MLX5_CAP_GEN(mdev, log_bf_reg_size)) / 2U; bf_buf_size -= sizeof(struct mlx5e_tx_wqe) - 2; /* verify against driver hardware limit */ if (bf_buf_size > MLX5E_MAX_TX_INLINE) bf_buf_size = MLX5E_MAX_TX_INLINE; return (bf_buf_size); } static int mlx5e_build_ifp_priv(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv, int num_comp_vectors) { int err; /* * TODO: Consider link speed for setting "log_sq_size", * "log_rq_size" and "cq_moderation_xxx": */ priv->params.log_sq_size = MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE; priv->params.log_rq_size = MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE; priv->params.rx_cq_moderation_usec = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ? MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE : MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC; priv->params.rx_cq_moderation_mode = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ? 1 : 0;
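/* Moderation mode here matches the switch in mlx5e_build_rx_cq_param(): 0 = static EQE-based, 1 = static CQE-based (when the cq_period_start_from_cqe capability is set), while modes 2 and 3 are the dynamic (net_dim) variants. */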
priv->params.rx_cq_moderation_pkts = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS; priv->params.tx_cq_moderation_usec = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC; priv->params.tx_cq_moderation_pkts = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS; priv->params.min_rx_wqes = MLX5E_PARAMS_DEFAULT_MIN_RX_WQES; priv->params.rx_hash_log_tbl_sz = (order_base_2(num_comp_vectors) > MLX5E_PARAMS_DEFAULT_RX_HASH_LOG_TBL_SZ) ? order_base_2(num_comp_vectors) : MLX5E_PARAMS_DEFAULT_RX_HASH_LOG_TBL_SZ; priv->params.num_tc = 1; priv->params.default_vlan_prio = 0; priv->counter_set_id = -1; priv->params.tx_max_inline = mlx5e_get_max_inline_cap(mdev); err = mlx5_query_min_inline(mdev, &priv->params.tx_min_inline_mode); if (err) return (err); /* * HW LRO currently defaults to off. Once that changes, we will * also consider the HW capability: "!!MLX5_CAP_ETH(mdev, lro_cap)" */ priv->params.hw_lro_en = false; priv->params.lro_wqe_sz = MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ; /* * CQE zipping currently defaults to off. Once that changes, we will * also consider the HW capability: * "!!MLX5_CAP_GEN(mdev, cqe_compression)" */ priv->params.cqe_zipping_en = false; priv->mdev = mdev; priv->params.num_channels = num_comp_vectors; priv->params.channels_rsss = 1; priv->order_base_2_num_channels = order_base_2(num_comp_vectors); priv->queue_mapping_channel_mask = roundup_pow_of_two(num_comp_vectors) - 1; priv->num_tc = priv->params.num_tc; priv->default_vlan_prio = priv->params.default_vlan_prio; INIT_WORK(&priv->update_stats_work, mlx5e_update_stats_work); INIT_WORK(&priv->update_carrier_work, mlx5e_update_carrier_work); INIT_WORK(&priv->set_rx_mode_work, mlx5e_set_rx_mode_work); return (0); } static int mlx5e_create_mkey(struct mlx5e_priv *priv, u32 pdn, struct mlx5_core_mr *mkey) { struct ifnet *ifp = priv->ifp; struct mlx5_core_dev *mdev = priv->mdev; int inlen = MLX5_ST_SZ_BYTES(create_mkey_in); void *mkc; u32 *in; int err; in = mlx5_vzalloc(inlen); if (in == NULL) { mlx5_en_err(ifp, "failed to allocate inbox\n"); return (-ENOMEM); } mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); MLX5_SET(mkc, mkc, access_mode, MLX5_ACCESS_MODE_PA); MLX5_SET(mkc, mkc, lw, 1); MLX5_SET(mkc, mkc, lr, 1); MLX5_SET(mkc, mkc, pd, pdn); MLX5_SET(mkc, mkc, length64, 1); MLX5_SET(mkc, mkc, qpn, 0xffffff); err = mlx5_core_create_mkey(mdev, mkey, in, inlen); if (err) mlx5_en_err(ifp, "mlx5_core_create_mkey failed, %d\n", err); kvfree(in); return (err); } static const char *mlx5e_vport_stats_desc[] = { MLX5E_VPORT_STATS(MLX5E_STATS_DESC) }; static const char *mlx5e_pport_stats_desc[] = { MLX5E_PPORT_STATS(MLX5E_STATS_DESC) }; static void mlx5e_priv_mtx_init(struct mlx5e_priv *priv) { mtx_init(&priv->async_events_mtx, "mlx5async", MTX_NETWORK_LOCK, MTX_DEF); sx_init(&priv->state_lock, "mlx5state"); callout_init_mtx(&priv->watchdog, &priv->async_events_mtx, 0); MLX5_INIT_DOORBELL_LOCK(&priv->doorbell_lock); } static void mlx5e_priv_mtx_destroy(struct mlx5e_priv *priv) { mtx_destroy(&priv->async_events_mtx); sx_destroy(&priv->state_lock); } static int sysctl_firmware(SYSCTL_HANDLER_ARGS) { /* * "%d.%d.%d" is the string format. * fw_rev_{maj,min,sub} return u16, 2^16 = 65536. * We need at most 5 chars to store that. * It also has two "." and a NUL at the end, which means we need 18 * (5*3 + 3) chars at most.
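* For example, the worst case "65535.65535.65535" takes 17 characters plus the terminating NUL.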
*/ char fw[18]; struct mlx5e_priv *priv = arg1; int error; snprintf(fw, sizeof(fw), "%d.%d.%d", fw_rev_maj(priv->mdev), fw_rev_min(priv->mdev), fw_rev_sub(priv->mdev)); error = sysctl_handle_string(oidp, fw, sizeof(fw), req); return (error); } static void mlx5e_disable_tx_dma(struct mlx5e_channel *ch) { int i; for (i = 0; i < ch->num_tc; i++) mlx5e_drain_sq(&ch->sq[i]); } static void mlx5e_reset_sq_doorbell_record(struct mlx5e_sq *sq) { sq->doorbell.d32[0] = cpu_to_be32(MLX5_OPCODE_NOP); sq->doorbell.d32[1] = cpu_to_be32(sq->sqn << 8); mlx5e_tx_notify_hw(sq, sq->doorbell.d32, 0); sq->doorbell.d64 = 0; } void mlx5e_resume_sq(struct mlx5e_sq *sq) { int err; /* check if already enabled */ if (READ_ONCE(sq->running) != 0) return; err = mlx5e_modify_sq(sq, MLX5_SQC_STATE_ERR, MLX5_SQC_STATE_RST); if (err != 0) { mlx5_en_err(sq->ifp, "mlx5e_modify_sq() from ERR to RST failed: %d\n", err); } sq->cc = 0; sq->pc = 0; /* reset doorbell prior to moving from RST to RDY */ mlx5e_reset_sq_doorbell_record(sq); err = mlx5e_modify_sq(sq, MLX5_SQC_STATE_RST, MLX5_SQC_STATE_RDY); if (err != 0) { mlx5_en_err(sq->ifp, "mlx5e_modify_sq() from RST to RDY failed: %d\n", err); } sq->cev_next_state = MLX5E_CEV_STATE_INITIAL; WRITE_ONCE(sq->running, 1); } static void mlx5e_enable_tx_dma(struct mlx5e_channel *ch) { int i; for (i = 0; i < ch->num_tc; i++) mlx5e_resume_sq(&ch->sq[i]); } static void mlx5e_disable_rx_dma(struct mlx5e_channel *ch) { struct mlx5e_rq *rq = &ch->rq; int err; mtx_lock(&rq->mtx); rq->enabled = 0; callout_stop(&rq->watchdog); mtx_unlock(&rq->mtx); callout_drain(&rq->watchdog); err = mlx5e_modify_rq(rq, MLX5_RQC_STATE_RDY, MLX5_RQC_STATE_ERR); if (err != 0) { mlx5_en_err(rq->ifp, "mlx5e_modify_rq() from RDY to ERR failed: %d\n", err); } while (!mlx5_wq_ll_is_empty(&rq->wq)) { msleep(1); rq->cq.mcq.comp(&rq->cq.mcq); } /* * Transitioning into RST state will allow the FW to track fewer ERR state queues, * thus reducing the recv queue flushing time */ err = mlx5e_modify_rq(rq, MLX5_RQC_STATE_ERR, MLX5_RQC_STATE_RST); if (err != 0) { mlx5_en_err(rq->ifp, "mlx5e_modify_rq() from ERR to RST failed: %d\n", err); } } static void mlx5e_enable_rx_dma(struct mlx5e_channel *ch) { struct mlx5e_rq *rq = &ch->rq; int err; rq->wq.wqe_ctr = 0; mlx5_wq_ll_update_db_record(&rq->wq); err = mlx5e_modify_rq(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY); if (err != 0) { mlx5_en_err(rq->ifp, "mlx5e_modify_rq() from RST to RDY failed: %d\n", err); } rq->enabled = 1; rq->cq.mcq.comp(&rq->cq.mcq); } void mlx5e_modify_tx_dma(struct mlx5e_priv *priv, uint8_t value) { int i; if (test_bit(MLX5E_STATE_OPENED, &priv->state) == 0) return; for (i = 0; i < priv->params.num_channels; i++) { if (value) mlx5e_disable_tx_dma(&priv->channel[i]); else mlx5e_enable_tx_dma(&priv->channel[i]); } } void mlx5e_modify_rx_dma(struct mlx5e_priv *priv, uint8_t value) { int i; if (test_bit(MLX5E_STATE_OPENED, &priv->state) == 0) return; for (i = 0; i < priv->params.num_channels; i++) { if (value) mlx5e_disable_rx_dma(&priv->channel[i]); else mlx5e_enable_rx_dma(&priv->channel[i]); } } static void mlx5e_add_hw_stats(struct mlx5e_priv *priv) { SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_hw), OID_AUTO, "fw_version", CTLTYPE_STRING | CTLFLAG_RD, priv, 0, sysctl_firmware, "A", "HCA firmware version"); SYSCTL_ADD_STRING(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_hw), OID_AUTO, "board_id", CTLFLAG_RD, priv->mdev->board_id, 0, "Board ID"); } static int mlx5e_sysctl_tx_priority_flow_control(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv
*priv = arg1; uint8_t temp[MLX5E_MAX_PRIORITY]; uint32_t tx_pfc; int err; int i; PRIV_LOCK(priv); tx_pfc = priv->params.tx_priority_flow_control; for (i = 0; i != MLX5E_MAX_PRIORITY; i++) temp[i] = (tx_pfc >> i) & 1; err = SYSCTL_OUT(req, temp, MLX5E_MAX_PRIORITY); if (err || !req->newptr) goto done; err = SYSCTL_IN(req, temp, MLX5E_MAX_PRIORITY); if (err) goto done; priv->params.tx_priority_flow_control = 0; /* range check input value */ for (i = 0; i != MLX5E_MAX_PRIORITY; i++) { if (temp[i] > 1) { err = ERANGE; goto done; } priv->params.tx_priority_flow_control |= (temp[i] << i); } /* check if update is required */ if (tx_pfc != priv->params.tx_priority_flow_control) err = -mlx5e_set_port_pfc(priv); done: if (err != 0) priv->params.tx_priority_flow_control= tx_pfc; PRIV_UNLOCK(priv); return (err); } static int mlx5e_sysctl_rx_priority_flow_control(SYSCTL_HANDLER_ARGS) { struct mlx5e_priv *priv = arg1; uint8_t temp[MLX5E_MAX_PRIORITY]; uint32_t rx_pfc; int err; int i; PRIV_LOCK(priv); rx_pfc = priv->params.rx_priority_flow_control; for (i = 0; i != MLX5E_MAX_PRIORITY; i++) temp[i] = (rx_pfc >> i) & 1; err = SYSCTL_OUT(req, temp, MLX5E_MAX_PRIORITY); if (err || !req->newptr) goto done; err = SYSCTL_IN(req, temp, MLX5E_MAX_PRIORITY); if (err) goto done; priv->params.rx_priority_flow_control = 0; /* range check input value */ for (i = 0; i != MLX5E_MAX_PRIORITY; i++) { if (temp[i] > 1) { err = ERANGE; goto done; } priv->params.rx_priority_flow_control |= (temp[i] << i); } /* check if update is required */ if (rx_pfc != priv->params.rx_priority_flow_control) { err = -mlx5e_set_port_pfc(priv); if (err == 0) err = mlx5e_update_buf_lossy(priv); } done: if (err != 0) priv->params.rx_priority_flow_control= rx_pfc; PRIV_UNLOCK(priv); return (err); } static void mlx5e_setup_pauseframes(struct mlx5e_priv *priv) { #if (__FreeBSD_version < 1100000) char path[96]; #endif int error; /* enable pauseframes by default */ priv->params.tx_pauseframe_control = 1; priv->params.rx_pauseframe_control = 1; /* disable ports flow control, PFC, by default */ priv->params.tx_priority_flow_control = 0; priv->params.rx_priority_flow_control = 0; #if (__FreeBSD_version < 1100000) /* compute path for sysctl */ snprintf(path, sizeof(path), "dev.mce.%d.tx_pauseframe_control", device_get_unit(priv->mdev->pdev->dev.bsddev)); /* try to fetch tunable, if any */ TUNABLE_INT_FETCH(path, &priv->params.tx_pauseframe_control); /* compute path for sysctl */ snprintf(path, sizeof(path), "dev.mce.%d.rx_pauseframe_control", device_get_unit(priv->mdev->pdev->dev.bsddev)); /* try to fetch tunable, if any */ TUNABLE_INT_FETCH(path, &priv->params.rx_pauseframe_control); #endif /* register pauseframe SYSCTLs */ SYSCTL_ADD_INT(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "tx_pauseframe_control", CTLFLAG_RDTUN, &priv->params.tx_pauseframe_control, 0, "Set to enable TX pause frames. Clear to disable."); SYSCTL_ADD_INT(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "rx_pauseframe_control", CTLFLAG_RDTUN, &priv->params.rx_pauseframe_control, 0, "Set to enable RX pause frames. Clear to disable."); /* register priority flow control, PFC, SYSCTLs */ SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "tx_priority_flow_control", CTLTYPE_U8 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE, priv, 0, &mlx5e_sysctl_tx_priority_flow_control, "CU", "Set to enable TX ports flow control frames for priorities 0..7. 
Clear to disable."); SYSCTL_ADD_PROC(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "rx_priority_flow_control", CTLTYPE_U8 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE, priv, 0, &mlx5e_sysctl_rx_priority_flow_control, "CU", "Set to enable RX ports flow control frames for priorities 0..7. Clear to disable."); PRIV_LOCK(priv); /* range check */ priv->params.tx_pauseframe_control = priv->params.tx_pauseframe_control ? 1 : 0; priv->params.rx_pauseframe_control = priv->params.rx_pauseframe_control ? 1 : 0; /* update firmware */ error = mlx5e_set_port_pause_and_pfc(priv); if (error == -EINVAL) { mlx5_en_err(priv->ifp, "Global pauseframes must be disabled before enabling PFC.\n"); priv->params.rx_priority_flow_control = 0; priv->params.tx_priority_flow_control = 0; /* update firmware */ (void) mlx5e_set_port_pause_and_pfc(priv); } PRIV_UNLOCK(priv); } static int mlx5e_ul_snd_tag_alloc(struct ifnet *ifp, union if_snd_tag_alloc_params *params, struct m_snd_tag **ppmt) { struct mlx5e_priv *priv; struct mlx5e_channel *pch; priv = ifp->if_softc; if (unlikely(priv->gone || params->hdr.flowtype == M_HASHTYPE_NONE)) { return (EOPNOTSUPP); } else { /* keep this code synced with mlx5e_select_queue() */ u32 ch = priv->params.num_channels; #ifdef RSS u32 temp; if (rss_hash2bucket(params->hdr.flowid, params->hdr.flowtype, &temp) == 0) ch = temp % ch; else #endif ch = (params->hdr.flowid % 128) % ch; /* * NOTE: The channels array is only freed at detach * and it safe to return a pointer to the send tag * inside the channels structure as long as we * reference the priv. */ pch = priv->channel + ch; /* check if send queue is not running */ if (unlikely(pch->sq[0].running == 0)) return (ENXIO); mlx5e_ref_channel(priv); *ppmt = &pch->tag.m_snd_tag; return (0); } } static int mlx5e_ul_snd_tag_query(struct m_snd_tag *pmt, union if_snd_tag_query_params *params) { struct mlx5e_channel *pch = container_of(pmt, struct mlx5e_channel, tag.m_snd_tag); params->unlimited.max_rate = -1ULL; params->unlimited.queue_level = mlx5e_sq_queue_level(&pch->sq[0]); return (0); } static void mlx5e_ul_snd_tag_free(struct m_snd_tag *pmt) { struct mlx5e_channel *pch = container_of(pmt, struct mlx5e_channel, tag.m_snd_tag); mlx5e_unref_channel(pch->priv); } static int mlx5e_snd_tag_alloc(struct ifnet *ifp, union if_snd_tag_alloc_params *params, struct m_snd_tag **ppmt) { switch (params->hdr.type) { #ifdef RATELIMIT case IF_SND_TAG_TYPE_RATE_LIMIT: return (mlx5e_rl_snd_tag_alloc(ifp, params, ppmt)); #endif case IF_SND_TAG_TYPE_UNLIMITED: return (mlx5e_ul_snd_tag_alloc(ifp, params, ppmt)); default: return (EOPNOTSUPP); } } static int mlx5e_snd_tag_modify(struct m_snd_tag *pmt, union if_snd_tag_modify_params *params) { struct mlx5e_snd_tag *tag = container_of(pmt, struct mlx5e_snd_tag, m_snd_tag); switch (tag->type) { #ifdef RATELIMIT case IF_SND_TAG_TYPE_RATE_LIMIT: return (mlx5e_rl_snd_tag_modify(pmt, params)); #endif case IF_SND_TAG_TYPE_UNLIMITED: default: return (EOPNOTSUPP); } } static int mlx5e_snd_tag_query(struct m_snd_tag *pmt, union if_snd_tag_query_params *params) { struct mlx5e_snd_tag *tag = container_of(pmt, struct mlx5e_snd_tag, m_snd_tag); switch (tag->type) { #ifdef RATELIMIT case IF_SND_TAG_TYPE_RATE_LIMIT: return (mlx5e_rl_snd_tag_query(pmt, params)); #endif case IF_SND_TAG_TYPE_UNLIMITED: return (mlx5e_ul_snd_tag_query(pmt, params)); default: return (EOPNOTSUPP); } } static void mlx5e_snd_tag_free(struct m_snd_tag *pmt) { struct mlx5e_snd_tag *tag = container_of(pmt, struct mlx5e_snd_tag, m_snd_tag); switch 
(tag->type) { #ifdef RATELIMIT case IF_SND_TAG_TYPE_RATE_LIMIT: mlx5e_rl_snd_tag_free(pmt); break; #endif case IF_SND_TAG_TYPE_UNLIMITED: mlx5e_ul_snd_tag_free(pmt); break; default: break; } } static void * mlx5e_create_ifp(struct mlx5_core_dev *mdev) { struct ifnet *ifp; struct mlx5e_priv *priv; u8 dev_addr[ETHER_ADDR_LEN] __aligned(4); u8 connector_type; struct sysctl_oid_list *child; int ncv = mdev->priv.eq_table.num_comp_vectors; char unit[16]; int err; int i,j; u32 eth_proto_cap; u32 out[MLX5_ST_SZ_DW(ptys_reg)]; bool ext = 0; u32 speeds_num; struct media media_entry = {}; if (mlx5e_check_required_hca_cap(mdev)) { mlx5_core_dbg(mdev, "mlx5e_check_required_hca_cap() failed\n"); return (NULL); } /* * Try to allocate the priv and make room for worst-case * number of channel structures: */ priv = malloc(sizeof(*priv) + (sizeof(priv->channel[0]) * mdev->priv.eq_table.num_comp_vectors), M_MLX5EN, M_WAITOK | M_ZERO); mlx5e_priv_mtx_init(priv); ifp = priv->ifp = if_alloc(IFT_ETHER); if (ifp == NULL) { mlx5_core_err(mdev, "if_alloc() failed\n"); goto err_free_priv; } ifp->if_softc = priv; if_initname(ifp, "mce", device_get_unit(mdev->pdev->dev.bsddev)); ifp->if_mtu = ETHERMTU; ifp->if_init = mlx5e_open; ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST; ifp->if_ioctl = mlx5e_ioctl; ifp->if_transmit = mlx5e_xmit; ifp->if_qflush = if_qflush; #if (__FreeBSD_version >= 1100000) ifp->if_get_counter = mlx5e_get_counter; #endif ifp->if_snd.ifq_maxlen = ifqmaxlen; /* * Set driver features */ ifp->if_capabilities |= IFCAP_HWCSUM | IFCAP_HWCSUM_IPV6; ifp->if_capabilities |= IFCAP_VLAN_MTU | IFCAP_VLAN_HWTAGGING; ifp->if_capabilities |= IFCAP_VLAN_HWCSUM | IFCAP_VLAN_HWFILTER; ifp->if_capabilities |= IFCAP_LINKSTATE | IFCAP_JUMBO_MTU; ifp->if_capabilities |= IFCAP_LRO; ifp->if_capabilities |= IFCAP_TSO | IFCAP_VLAN_HWTSO; ifp->if_capabilities |= IFCAP_HWSTATS | IFCAP_HWRXTSTMP; ifp->if_capabilities |= IFCAP_TXRTLMT; ifp->if_snd_tag_alloc = mlx5e_snd_tag_alloc; ifp->if_snd_tag_free = mlx5e_snd_tag_free; ifp->if_snd_tag_modify = mlx5e_snd_tag_modify; ifp->if_snd_tag_query = mlx5e_snd_tag_query; /* set TSO limits so that we don't have to drop TX packets */ ifp->if_hw_tsomax = MLX5E_MAX_TX_PAYLOAD_SIZE - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN); ifp->if_hw_tsomaxsegcount = MLX5E_MAX_TX_MBUF_FRAGS - 1 /* hdr */; ifp->if_hw_tsomaxsegsize = MLX5E_MAX_TX_MBUF_SIZE; ifp->if_capenable = ifp->if_capabilities; ifp->if_hwassist = 0; if (ifp->if_capenable & IFCAP_TSO) ifp->if_hwassist |= CSUM_TSO; if (ifp->if_capenable & IFCAP_TXCSUM) ifp->if_hwassist |= (CSUM_TCP | CSUM_UDP | CSUM_IP); if (ifp->if_capenable & IFCAP_TXCSUM_IPV6) ifp->if_hwassist |= (CSUM_UDP_IPV6 | CSUM_TCP_IPV6); /* ifnet sysctl tree */ sysctl_ctx_init(&priv->sysctl_ctx); priv->sysctl_ifnet = SYSCTL_ADD_NODE(&priv->sysctl_ctx, SYSCTL_STATIC_CHILDREN(_dev), OID_AUTO, ifp->if_dname, CTLFLAG_RD, 0, "MLX5 ethernet - interface name"); if (priv->sysctl_ifnet == NULL) { mlx5_core_err(mdev, "SYSCTL_ADD_NODE() failed\n"); goto err_free_sysctl; } snprintf(unit, sizeof(unit), "%d", ifp->if_dunit); priv->sysctl_ifnet = SYSCTL_ADD_NODE(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, unit, CTLFLAG_RD, 0, "MLX5 ethernet - interface unit"); if (priv->sysctl_ifnet == NULL) { mlx5_core_err(mdev, "SYSCTL_ADD_NODE() failed\n"); goto err_free_sysctl; } /* HW sysctl tree */ child = SYSCTL_CHILDREN(device_get_sysctl_tree(mdev->pdev->dev.bsddev)); priv->sysctl_hw = SYSCTL_ADD_NODE(&priv->sysctl_ctx, child, OID_AUTO, "hw", CTLFLAG_RD, 0, "MLX5 
ethernet dev hw"); if (priv->sysctl_hw == NULL) { mlx5_core_err(mdev, "SYSCTL_ADD_NODE() failed\n"); goto err_free_sysctl; } err = mlx5e_build_ifp_priv(mdev, priv, ncv); if (err) { mlx5_core_err(mdev, "mlx5e_build_ifp_priv() failed (%d)\n", err); goto err_free_sysctl; } /* reuse mlx5core's watchdog workqueue */ priv->wq = mdev->priv.health.wq_watchdog; err = mlx5_alloc_map_uar(mdev, &priv->cq_uar); if (err) { mlx5_en_err(ifp, "mlx5_alloc_map_uar failed, %d\n", err); goto err_free_wq; } err = mlx5_core_alloc_pd(mdev, &priv->pdn); if (err) { mlx5_en_err(ifp, "mlx5_core_alloc_pd failed, %d\n", err); goto err_unmap_free_uar; } err = mlx5_alloc_transport_domain(mdev, &priv->tdn); if (err) { mlx5_en_err(ifp, "mlx5_alloc_transport_domain failed, %d\n", err); goto err_dealloc_pd; } err = mlx5e_create_mkey(priv, priv->pdn, &priv->mr); if (err) { mlx5_en_err(ifp, "mlx5e_create_mkey failed, %d\n", err); goto err_dealloc_transport_domain; } mlx5_query_nic_vport_mac_address(priv->mdev, 0, dev_addr); /* check if we should generate a random MAC address */ if (MLX5_CAP_GEN(priv->mdev, vport_group_manager) == 0 && is_zero_ether_addr(dev_addr)) { random_ether_addr(dev_addr); mlx5_en_err(ifp, "Assigned random MAC address\n"); } #ifdef RATELIMIT err = mlx5e_rl_init(priv); if (err) { mlx5_en_err(ifp, "mlx5e_rl_init failed, %d\n", err); goto err_create_mkey; } #endif /* set default MTU */ mlx5e_set_dev_port_mtu(ifp, ifp->if_mtu); /* Set default media status */ priv->media_status_last = IFM_AVALID; priv->media_active_last = IFM_ETHER | IFM_AUTO | IFM_ETH_RXPAUSE | IFM_FDX; /* setup default pauseframes configuration */ mlx5e_setup_pauseframes(priv); /* Setup supported medias */ //TODO: If we failed to query ptys is it ok to proceed?? if (!mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN, 1)) { ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet); eth_proto_cap = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_capability); if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) connector_type = MLX5_GET(ptys_reg, out, connector_type); } else { eth_proto_cap = 0; mlx5_en_err(ifp, "Query port media capability failed, %d\n", err); } ifmedia_init(&priv->media, IFM_IMASK | IFM_ETH_FMASK, mlx5e_media_change, mlx5e_media_status); speeds_num = ext ? MLX5E_EXT_LINK_SPEEDS_NUMBER : MLX5E_LINK_SPEEDS_NUMBER; for (i = 0; i != speeds_num; i++) { for (j = 0; j < MLX5E_LINK_MODES_NUMBER ; ++j) { media_entry = ext ? 
mlx5e_ext_mode_table[i][j] : mlx5e_mode_table[i][j]; if (media_entry.baudrate == 0) continue; if (MLX5E_PROT_MASK(i) & eth_proto_cap) { ifmedia_add(&priv->media, media_entry.subtype | IFM_ETHER, 0, NULL); ifmedia_add(&priv->media, media_entry.subtype | IFM_ETHER | IFM_FDX | IFM_ETH_RXPAUSE | IFM_ETH_TXPAUSE, 0, NULL); } } } ifmedia_add(&priv->media, IFM_ETHER | IFM_AUTO, 0, NULL); ifmedia_add(&priv->media, IFM_ETHER | IFM_AUTO | IFM_FDX | IFM_ETH_RXPAUSE | IFM_ETH_TXPAUSE, 0, NULL); /* Set autoselect by default */ ifmedia_set(&priv->media, IFM_ETHER | IFM_AUTO | IFM_FDX | IFM_ETH_RXPAUSE | IFM_ETH_TXPAUSE); ether_ifattach(ifp, dev_addr); /* Register for VLAN events */ priv->vlan_attach = EVENTHANDLER_REGISTER(vlan_config, mlx5e_vlan_rx_add_vid, priv, EVENTHANDLER_PRI_FIRST); priv->vlan_detach = EVENTHANDLER_REGISTER(vlan_unconfig, mlx5e_vlan_rx_kill_vid, priv, EVENTHANDLER_PRI_FIRST); /* Link is down by default */ if_link_state_change(ifp, LINK_STATE_DOWN); mlx5e_enable_async_events(priv); mlx5e_add_hw_stats(priv); mlx5e_create_stats(&priv->stats.vport.ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), "vstats", mlx5e_vport_stats_desc, MLX5E_VPORT_STATS_NUM, priv->stats.vport.arg); mlx5e_create_stats(&priv->stats.pport.ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), "pstats", mlx5e_pport_stats_desc, MLX5E_PPORT_STATS_NUM, priv->stats.pport.arg); mlx5e_create_ethtool(priv); mtx_lock(&priv->async_events_mtx); mlx5e_update_stats(priv); mtx_unlock(&priv->async_events_mtx); SYSCTL_ADD_INT(&priv->sysctl_ctx, SYSCTL_CHILDREN(priv->sysctl_ifnet), OID_AUTO, "rx_clbr_done", CTLFLAG_RD, &priv->clbr_done, 0, "RX timestamps calibration state"); callout_init(&priv->tstmp_clbr, CALLOUT_DIRECT); mlx5e_reset_calibration_callout(priv); return (priv); #ifdef RATELIMIT err_create_mkey: mlx5_core_destroy_mkey(priv->mdev, &priv->mr); #endif err_dealloc_transport_domain: mlx5_dealloc_transport_domain(mdev, priv->tdn); err_dealloc_pd: mlx5_core_dealloc_pd(mdev, priv->pdn); err_unmap_free_uar: mlx5_unmap_free_uar(mdev, &priv->cq_uar); err_free_wq: flush_workqueue(priv->wq); err_free_sysctl: sysctl_ctx_free(&priv->sysctl_ctx); if (priv->sysctl_debug) sysctl_ctx_free(&priv->stats.port_stats_debug.ctx); if_free(ifp); err_free_priv: mlx5e_priv_mtx_destroy(priv); free(priv, M_MLX5EN); return (NULL); } static void mlx5e_destroy_ifp(struct mlx5_core_dev *mdev, void *vpriv) { struct mlx5e_priv *priv = vpriv; struct ifnet *ifp = priv->ifp; /* don't allow more IOCTLs */ priv->gone = 1; /* XXX wait a bit to allow IOCTL handlers to complete */ pause("W", hz); #ifdef RATELIMIT /* * The kernel can have reference(s) via the m_snd_tag's into * the ratelimit channels, and these must go away before * detaching: */ while (READ_ONCE(priv->rl.stats.tx_active_connections) != 0) { mlx5_en_err(priv->ifp, "Waiting for all ratelimit connections to terminate\n"); pause("W", hz); } #endif /* stop watchdog timer */ callout_drain(&priv->watchdog); callout_drain(&priv->tstmp_clbr); if (priv->vlan_attach != NULL) EVENTHANDLER_DEREGISTER(vlan_config, priv->vlan_attach); if (priv->vlan_detach != NULL) EVENTHANDLER_DEREGISTER(vlan_unconfig, priv->vlan_detach); /* make sure device gets closed */ PRIV_LOCK(priv); mlx5e_close_locked(ifp); PRIV_UNLOCK(priv); /* wait for all unlimited send tags to go away */ while (priv->channel_refs != 0) { mlx5_en_err(priv->ifp, "Waiting for all unlimited connections to terminate\n"); pause("W", hz); } /* unregister device */ ifmedia_removeall(&priv->media); ether_ifdetach(ifp); if_free(ifp); #ifdef RATELIMIT mlx5e_rl_cleanup(priv); 
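/* safe now: the wait loop above ensured no rate-limited send tags remain */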
#endif /* destroy all remaining sysctl nodes */ sysctl_ctx_free(&priv->stats.vport.ctx); sysctl_ctx_free(&priv->stats.pport.ctx); if (priv->sysctl_debug) sysctl_ctx_free(&priv->stats.port_stats_debug.ctx); sysctl_ctx_free(&priv->sysctl_ctx); mlx5_core_destroy_mkey(priv->mdev, &priv->mr); mlx5_dealloc_transport_domain(priv->mdev, priv->tdn); mlx5_core_dealloc_pd(priv->mdev, priv->pdn); mlx5_unmap_free_uar(priv->mdev, &priv->cq_uar); mlx5e_disable_async_events(priv); flush_workqueue(priv->wq); mlx5e_priv_mtx_destroy(priv); free(priv, M_MLX5EN); } static void * mlx5e_get_ifp(void *vpriv) { struct mlx5e_priv *priv = vpriv; return (priv->ifp); } static struct mlx5_interface mlx5e_interface = { .add = mlx5e_create_ifp, .remove = mlx5e_destroy_ifp, .event = mlx5e_async_event, .protocol = MLX5_INTERFACE_PROTOCOL_ETH, .get_dev = mlx5e_get_ifp, }; void mlx5e_init(void) { mlx5_register_interface(&mlx5e_interface); } void mlx5e_cleanup(void) { mlx5_unregister_interface(&mlx5e_interface); } static void mlx5e_show_version(void __unused *arg) { printf("%s", mlx5e_version); } SYSINIT(mlx5e_show_version, SI_SUB_DRIVERS, SI_ORDER_ANY, mlx5e_show_version, NULL); module_init_order(mlx5e_init, SI_ORDER_THIRD); module_exit_order(mlx5e_cleanup, SI_ORDER_THIRD); #if (__FreeBSD_version >= 1100000) MODULE_DEPEND(mlx5en, linuxkpi, 1, 1, 1); #endif MODULE_DEPEND(mlx5en, mlx5, 1, 1, 1); MODULE_VERSION(mlx5en, 1); Index: stable/12/sys/dev/mlx5/mlx5io.h =================================================================== --- stable/12/sys/dev/mlx5/mlx5io.h (revision 353238) +++ stable/12/sys/dev/mlx5/mlx5io.h (revision 353239) @@ -1,151 +1,159 @@ /*- * Copyright (c) 2018, Mellanox Technologies, Ltd. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * $FreeBSD$ */ #ifndef _DEV_MLX5_MLX5IO_H_ #define _DEV_MLX5_MLX5IO_H_ #include struct mlx5_fwdump_reg { uint32_t addr; uint32_t val; }; struct mlx5_tool_addr { uint32_t domain; uint8_t bus; uint8_t slot; uint8_t func; }; struct mlx5_fwdump_get { struct mlx5_tool_addr devaddr; struct mlx5_fwdump_reg *buf; size_t reg_cnt; size_t reg_filled; /* out */ }; struct mlx5_fw_update { struct mlx5_tool_addr devaddr; void *img_fw_data; size_t img_fw_data_len; }; +struct mlx5_eeprom_get { + struct mlx5_tool_addr devaddr; + uint32_t *eeprom_info_buf; + uint8_t eeprom_info_page_valid; + size_t eeprom_info_out_len; +}; + #define MLX5_FWDUMP_GET _IOWR('m', 1, struct mlx5_fwdump_get) #define MLX5_FWDUMP_RESET _IOW('m', 2, struct mlx5_tool_addr) #define MLX5_FWDUMP_FORCE _IOW('m', 3, struct mlx5_tool_addr) #define MLX5_FW_UPDATE _IOW('m', 4, struct mlx5_fw_update) #define MLX5_FW_RESET _IOW('m', 5, struct mlx5_tool_addr) +#define MLX5_EEPROM_GET _IOWR('m', 6, struct mlx5_eeprom_get) #ifndef _KERNEL #define MLX5_DEV_PATH _PATH_DEV"mlx5ctl" #endif enum mlx5_fpga_id { MLX5_FPGA_NEWTON = 0, MLX5_FPGA_EDISON = 1, MLX5_FPGA_MORSE = 2, MLX5_FPGA_MORSEQ = 3, }; enum mlx5_fpga_image { MLX5_FPGA_IMAGE_USER = 0, MLX5_FPGA_IMAGE_FACTORY = 1, MLX5_FPGA_IMAGE_FACTORY_FAILOVER = 2, MLX5_FPGA_IMAGE_RESET = 17, MLX5_FPGA_IMAGE_RELOAD = 18, }; enum mlx5_fpga_status { MLX5_FPGA_STATUS_SUCCESS = 0, MLX5_FPGA_STATUS_FAILURE = 1, MLX5_FPGA_STATUS_IN_PROGRESS = 2, MLX5_FPGA_STATUS_DISCONNECTED = 3, }; struct mlx5_fpga_query { enum mlx5_fpga_image admin_image; enum mlx5_fpga_image oper_image; enum mlx5_fpga_status image_status; }; enum mlx5_fpga_tee { MLX5_FPGA_TEE_DISABLE = 0, MLX5_FPGA_TEE_GENERATE_EVENT = 1, MLX5_FPGA_TEE_GENERATE_SINGLE_EVENT = 2, }; enum mlx5_fpga_connect { MLX5_FPGA_CONNECT_QUERY = 0, MLX5_FPGA_CONNECT_DISCONNECT = 0x9, MLX5_FPGA_CONNECT_CONNECT = 0xA, }; /** * enum mlx5_fpga_access_type - Enumerated the different methods possible for * accessing the device memory address space */ enum mlx5_fpga_access_type { /** Use the slow CX-FPGA I2C bus*/ MLX5_FPGA_ACCESS_TYPE_I2C = 0x0, /** Use the fast 'shell QP' */ MLX5_FPGA_ACCESS_TYPE_RDMA, /** Use the fastest available method */ MLX5_FPGA_ACCESS_TYPE_DONTCARE, MLX5_FPGA_ACCESS_TYPE_MAX = MLX5_FPGA_ACCESS_TYPE_DONTCARE, }; #define MLX5_FPGA_INTERNAL_SENSORS_LOW 63 #define MLX5_FPGA_INTERNAL_SENSORS_HIGH 63 struct mlx5_fpga_temperature { uint32_t temperature; uint32_t index; uint32_t tee; uint32_t max_temperature; uint32_t temperature_threshold_hi; uint32_t temperature_threshold_lo; uint32_t mte; uint32_t mtr; char sensor_name[16]; }; #define MLX5_FPGA_CAP_ARR_SZ 0x40 #define MLX5_FPGA_ACCESS_TYPE _IOWINT('m', 0x80) #define MLX5_FPGA_LOAD _IOWINT('m', 0x81) #define MLX5_FPGA_RESET _IO('m', 0x82) #define MLX5_FPGA_IMAGE_SEL _IOWINT('m', 0x83) #define MLX5_FPGA_QUERY _IOR('m', 0x84, struct mlx5_fpga_query) #define MLX5_FPGA_CAP _IOR('m', 0x85, uint32_t[MLX5_FPGA_CAP_ARR_SZ]) #define MLX5_FPGA_TEMPERATURE _IOWR('m', 0x86, struct mlx5_fpga_temperature) #define MLX5_FPGA_CONNECT _IOWR('m', 0x87, enum mlx5_fpga_connect) #define MLX5_FPGA_RELOAD _IO('m', 0x88) #define MLX5_FPGA_TOOLS_NAME_SUFFIX "_mlx5_fpga_tools" #endif Index: stable/12/sys/dev/mlx5/port.h =================================================================== --- stable/12/sys/dev/mlx5/port.h (revision 353238) +++ stable/12/sys/dev/mlx5/port.h (revision 353239) @@ -1,274 +1,295 @@ /*- * Copyright (c) 2016-2018, Mellanox Technologies, Ltd. All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS `AS IS' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef __MLX5_PORT_H__ #define __MLX5_PORT_H__ #include enum mlx5_beacon_duration { MLX5_BEACON_DURATION_OFF = 0x0, MLX5_BEACON_DURATION_INF = 0xffff, }; enum mlx5_module_id { MLX5_MODULE_ID_SFP = 0x3, MLX5_MODULE_ID_QSFP = 0xC, MLX5_MODULE_ID_QSFP_PLUS = 0xD, MLX5_MODULE_ID_QSFP28 = 0x11, }; enum mlx5_an_status { MLX5_AN_UNAVAILABLE = 0, MLX5_AN_COMPLETE = 1, MLX5_AN_FAILED = 2, MLX5_AN_LINK_UP = 3, MLX5_AN_LINK_DOWN = 4, }; +/* EEPROM I2C Addresses */ +#define MLX5_I2C_ADDR_LOW 0x50 +#define MLX5_I2C_ADDR_HIGH 0x51 +#define MLX5_EEPROM_PAGE_LENGTH 256 #define MLX5_EEPROM_MAX_BYTES 32 #define MLX5_EEPROM_IDENTIFIER_BYTE_MASK 0x000000ff #define MLX5_EEPROM_REVISION_ID_BYTE_MASK 0x0000ff00 #define MLX5_EEPROM_PAGE_3_VALID_BIT_MASK 0x00040000 -#define MLX5_I2C_ADDR_LOW 0x50 -#define MLX5_I2C_ADDR_HIGH 0x51 -#define MLX5_EEPROM_PAGE_LENGTH 256 +#define MLX5_EEPROM_LOW_PAGE 0x0 +#define MLX5_EEPROM_HIGH_PAGE 0x3 +#define MLX5_EEPROM_HIGH_PAGE_OFFSET 128 +#define MLX5_EEPROM_INFO_BYTES 0x3 + +/* EEPROM Standards for plug in modules */ +#ifndef MLX5_ETH_MODULE_SFF_8472 +#define MLX5_ETH_MODULE_SFF_8472 0x1 +#define MLX5_ETH_MODULE_SFF_8472_LEN 128 +#endif + +#ifndef MLX5_ETH_MODULE_SFF_8636 +#define MLX5_ETH_MODULE_SFF_8636 0x2 +#define MLX5_ETH_MODULE_SFF_8636_LEN 256 +#endif + +#ifndef MLX5_ETH_MODULE_SFF_8436 +#define MLX5_ETH_MODULE_SFF_8436 0x3 +#define MLX5_ETH_MODULE_SFF_8436_LEN 256 +#endif enum mlx5e_link_speed { MLX5E_1000BASE_CX_SGMII = 0, MLX5E_1000BASE_KX = 1, MLX5E_10GBASE_CX4 = 2, MLX5E_10GBASE_KX4 = 3, MLX5E_10GBASE_KR = 4, MLX5E_20GBASE_KR2 = 5, MLX5E_40GBASE_CR4 = 6, MLX5E_40GBASE_KR4 = 7, MLX5E_56GBASE_R4 = 8, MLX5E_10GBASE_CR = 12, MLX5E_10GBASE_SR = 13, MLX5E_10GBASE_ER_LR = 14, MLX5E_40GBASE_SR4 = 15, MLX5E_40GBASE_LR4_ER4 = 16, MLX5E_50GBASE_SR2 = 18, MLX5E_100GBASE_CR4 = 20, MLX5E_100GBASE_SR4 = 21, MLX5E_100GBASE_KR4 = 22, MLX5E_100GBASE_LR4 = 23, MLX5E_100BASE_TX = 24, MLX5E_1000BASE_T = 25, MLX5E_10GBASE_T = 26, MLX5E_25GBASE_CR = 27, MLX5E_25GBASE_KR = 28, MLX5E_25GBASE_SR = 29, MLX5E_50GBASE_CR2 = 30, MLX5E_50GBASE_KR2 = 31, MLX5E_LINK_SPEEDS_NUMBER, }; enum mlx5e_ext_link_speed { MLX5E_SGMII_100M = 0, MLX5E_1000BASE_X_SGMII = 1, MLX5E_5GBASE_R = 3, MLX5E_10GBASE_XFI_XAUI_1 = 4, MLX5E_40GBASE_XLAUI_4_XLPPI_4 = 5, 
	MLX5E_25GAUI_1_25GBASE_CR_KR = 6,
	MLX5E_50GAUI_2_LAUI_2_50GBASE_CR2_KR2 = 7,
	MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR = 8,
	MLX5E_CAUI_4_100GBASE_CR4_KR4 = 9,
	MLX5E_100GAUI_2_100GBASE_CR2_KR2 = 10,
	MLX5E_200GAUI_4_200GBASE_CR4_KR4 = 12,
	MLX5E_400GAUI_8 = 15,
	MLX5E_EXT_LINK_SPEEDS_NUMBER,
};

enum mlx5e_link_mode {
	MLX5E_ACC,
	MLX5E_AOC,
	MLX5E_AUI,
	MLX5E_AUI_AC,
	MLX5E_AUI2,
	MLX5E_AUI2_AC,
	MLX5E_AUI4,
	MLX5E_AUI4_AC,
	MLX5E_CAUI2,
	MLX5E_CAUI2_AC,
	MLX5E_CAUI4,
	MLX5E_CAUI4_AC,
	MLX5E_CP,
	MLX5E_CP2,
	MLX5E_CR,
	MLX5E_CR_S,
	MLX5E_CR1,
	MLX5E_CR2,
	MLX5E_CR4,
	MLX5E_CR_PAM4,
	MLX5E_CR4_PAM4,
	MLX5E_CX4,
	MLX5E_CX,
	MLX5E_CX_SGMII,
	MLX5E_DR,
	MLX5E_DR4,
	MLX5E_ER,
	MLX5E_ER4,
	MLX5E_FR,
	MLX5E_FR4,
	MLX5E_KR,
	MLX5E_KR1,
	MLX5E_KR_PAM4,
	MLX5E_KR_S,
	MLX5E_KR2,
	MLX5E_KR2_PAM4,
	MLX5E_KR4,
	MLX5E_KR4_PAM4,
	MLX5E_KX,
	MLX5E_KX4,
	MLX5E_LR,
	MLX5E_LR2,
	MLX5E_LR4,
	MLX5E_LX,
	MLX5E_R,
	MLX5E_SGMII,
	MLX5E_SR,
	MLX5E_SR2,
	MLX5E_SR4,
	MLX5E_SX,
	MLX5E_T,
	MLX5E_TX,
	MLX5E_LINK_MODES_NUMBER,
};

enum mlx5e_connector_type {
	MLX5E_PORT_UNKNOWN = 0,
	MLX5E_PORT_NONE = 1,
	MLX5E_PORT_TP = 2,
	MLX5E_PORT_AUI = 3,
	MLX5E_PORT_BNC = 4,
	MLX5E_PORT_MII = 5,
	MLX5E_PORT_FIBRE = 6,
	MLX5E_PORT_DA = 7,
	MLX5E_PORT_OTHER = 8,
	MLX5E_CONNECTOR_TYPE_NUMBER,
};

enum mlx5_qpts_trust_state {
	MLX5_QPTS_TRUST_PCP = 1,
	MLX5_QPTS_TRUST_DSCP = 2,
	MLX5_QPTS_TRUST_BOTH = 3,
};

struct mlx5e_port_eth_proto {
	u32 cap;
	u32 admin;
	u32 oper;
};

#ifndef SPEED_40000
#define SPEED_40000 40000
#endif

#define MLX5E_PROT_MASK(link_mode) (1 << (link_mode))

#define PORT_MODULE_EVENT_MODULE_STATUS_MASK 0xF
#define PORT_MODULE_EVENT_ERROR_TYPE_MASK 0xF

#define MLX5_GET_ETH_PROTO(reg, out, ext, field)	\
	((ext) ? MLX5_GET(reg, out, ext_##field) :	\
	MLX5_GET(reg, out, field))

int mlx5_set_port_caps(struct mlx5_core_dev *dev, u8 port_num, u32 caps);
int mlx5_query_port_ptys(struct mlx5_core_dev *dev, u32 *ptys,
    int ptys_size, int proto_mask, u8 local_port);
int mlx5_query_port_proto_cap(struct mlx5_core_dev *dev,
    u32 *proto_cap, int proto_mask);
int mlx5_query_port_autoneg(struct mlx5_core_dev *dev, int proto_mask,
    u8 *an_disable_cap, u8 *an_disable_status);
int mlx5_set_port_autoneg(struct mlx5_core_dev *dev, bool disable,
    u32 eth_proto_admin, int proto_mask);
int mlx5_query_port_proto_admin(struct mlx5_core_dev *dev,
    u32 *proto_admin, int proto_mask);
int mlx5_query_port_eth_proto_oper(struct mlx5_core_dev *dev,
    u32 *proto_oper, u8 local_port);
int mlx5_set_port_proto(struct mlx5_core_dev *dev, u32 proto_admin,
    int proto_mask, bool ext);
int mlx5_set_port_status(struct mlx5_core_dev *dev,
    enum mlx5_port_status status);
int mlx5_query_port_status(struct mlx5_core_dev *dev, u8 *status);
int mlx5_query_port_admin_status(struct mlx5_core_dev *dev,
    enum mlx5_port_status *status);
int mlx5_set_port_pause_and_pfc(struct mlx5_core_dev *dev, u32 port,
    u8 rx_pause, u8 tx_pause, u8 pfc_en_rx, u8 pfc_en_tx);
int mlx5_query_port_pause(struct mlx5_core_dev *dev, u32 port,
    u32 *rx_pause, u32 *tx_pause);
int mlx5_query_port_pfc(struct mlx5_core_dev *dev, u8 *pfc_en_tx,
    u8 *pfc_en_rx);
int mlx5_set_port_mtu(struct mlx5_core_dev *dev, int mtu);
int mlx5_query_port_max_mtu(struct mlx5_core_dev *dev, int *max_mtu);
int mlx5_query_port_oper_mtu(struct mlx5_core_dev *dev, int *oper_mtu);
unsigned int mlx5_query_module_status(struct mlx5_core_dev *dev,
    int module_num);
int mlx5_query_module_num(struct mlx5_core_dev *dev, int *module_num);
int mlx5_query_eeprom(struct mlx5_core_dev *dev, int i2c_addr, int page_num,
    int device_addr, int size, int module_num, u32 *data, int *size_read);
int mlx5_max_tc(struct mlx5_core_dev *mdev);
int mlx5_query_port_tc_rate_limit(struct mlx5_core_dev *mdev,
    u8 *max_bw_value, u8 *max_bw_units);
int mlx5_modify_port_tc_rate_limit(struct mlx5_core_dev *mdev,
    const u8 *max_bw_value, const u8 *max_bw_units);
int mlx5_query_port_prio_tc(struct mlx5_core_dev *mdev, u8 prio, u8 *tc);
int mlx5_set_port_prio_tc(struct mlx5_core_dev *mdev, int prio_index,
    const u8 prio_tc);
int mlx5_set_port_tc_group(struct mlx5_core_dev *mdev, const u8 *tc_group);
int mlx5_query_port_tc_group(struct mlx5_core_dev *mdev, u8 tc, u8 *tc_group);
int mlx5_set_port_tc_bw_alloc(struct mlx5_core_dev *mdev, const u8 *tc_bw);
int mlx5_query_port_tc_bw_alloc(struct mlx5_core_dev *mdev, u8 *bw_pct);
int mlx5_set_trust_state(struct mlx5_core_dev *mdev, u8 trust_state);
int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state);

#define MLX5_MAX_SUPPORTED_DSCP 64
int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, const u8 *dscp2prio);
int mlx5_query_dscp2prio(struct mlx5_core_dev *mdev, u8 *dscp2prio);

int mlx5_query_pddr_range_info(struct mlx5_core_dev *mdev, u8 local_port,
    u8 *is_er_type);
u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper);
int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
int mlx5_port_query_eth_proto(struct mlx5_core_dev *dev, u8 port, bool ext,
    struct mlx5e_port_eth_proto *eproto);
int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out);
int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in);
int mlx5e_port_query_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer);
int mlx5e_port_set_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer);

#endif /* __MLX5_PORT_H__ */

Index: stable/12
===================================================================
--- stable/12 (revision 353238)
+++ stable/12 (revision 353239)

Property changes on: stable/12
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
   Merged /head:r352983,353001
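
Usage note (illustrative only, not part of the merged change): the fwdump
ioctls declared in mlx5io.h are driven from userland through /dev/mlx5ctl,
with mlx5tool(8) as the canonical consumer. The sketch below shows the
two-pass pattern suggested by the reg_cnt/reg_filled fields of struct
mlx5_fwdump_get; device addressing and error handling are deliberately
minimal, and the program is a hypothetical example rather than the real tool.

/* fwdump_example.c -- hypothetical sketch, not part of this commit. */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <dev/mlx5/mlx5io.h>
#include <err.h>
#include <fcntl.h>
#include <paths.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	struct mlx5_tool_addr addr;
	struct mlx5_fwdump_get fdg;
	size_t i;
	int fd;

	if (argc != 5)
		errx(1, "usage: fwdump_example domain bus slot func");
	memset(&addr, 0, sizeof(addr));
	addr.domain = atoi(argv[1]);
	addr.bus = atoi(argv[2]);
	addr.slot = atoi(argv[3]);
	addr.func = atoi(argv[4]);

	fd = open(MLX5_DEV_PATH, O_RDWR);
	if (fd == -1)
		err(1, "open(%s)", MLX5_DEV_PATH);

	/* Snapshot the firmware registers into the kernel buffer. */
	if (ioctl(fd, MLX5_FWDUMP_FORCE, &addr) == -1)
		err(1, "MLX5_FWDUMP_FORCE");

	/* Pass 1: with buf == NULL, reg_filled reports the size needed. */
	memset(&fdg, 0, sizeof(fdg));
	fdg.devaddr = addr;
	if (ioctl(fd, MLX5_FWDUMP_GET, &fdg) == -1)
		err(1, "MLX5_FWDUMP_GET (size query)");

	/* Pass 2: fetch the stored dump into user memory. */
	fdg.reg_cnt = fdg.reg_filled;
	fdg.buf = calloc(fdg.reg_cnt, sizeof(struct mlx5_fwdump_reg));
	if (fdg.buf == NULL)
		err(1, "calloc");
	if (ioctl(fd, MLX5_FWDUMP_GET, &fdg) == -1)
		err(1, "MLX5_FWDUMP_GET");
	for (i = 0; i < fdg.reg_filled; i++)
		printf("0x%08x 0x%08x\n", fdg.buf[i].addr, fdg.buf[i].val);

	/* Clear the kernel buffer so a future dump can be taken. */
	if (ioctl(fd, MLX5_FWDUMP_RESET, &addr) == -1)
		warn("MLX5_FWDUMP_RESET");

	free(fdg.buf);
	close(fd);
	return (0);
}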
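
A similar hypothetical sketch for the newly added MLX5_EEPROM_GET follows.
Because struct mlx5_eeprom_get carries no input length field, the buffer
sizing used here (one low page plus the high-page window, mirroring the
constants added to port.h above) and the byte interpretation of
eeprom_info_out_len are assumptions of this example, not a documented
contract; consult mlx5tool(8) for the authoritative usage.

/* eeprom_example.c -- hypothetical sketch, not part of this commit. */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <dev/mlx5/mlx5io.h>
#include <err.h>
#include <fcntl.h>
#include <paths.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* From port.h in this change; duplicated because port.h is kernel-only. */
#define EEPROM_PAGE_LENGTH 256
#define EEPROM_HIGH_PAGE_OFFSET 128

int
main(int argc, char **argv)
{
	struct mlx5_eeprom_get req;
	size_t i;
	int fd;

	if (argc != 5)
		errx(1, "usage: eeprom_example domain bus slot func");
	memset(&req, 0, sizeof(req));
	req.devaddr.domain = atoi(argv[1]);
	req.devaddr.bus = atoi(argv[2]);
	req.devaddr.slot = atoi(argv[3]);
	req.devaddr.func = atoi(argv[4]);

	/* Assumed sizing: one low page plus the high-page window. */
	req.eeprom_info_buf = calloc(1,
	    EEPROM_PAGE_LENGTH + EEPROM_HIGH_PAGE_OFFSET);
	if (req.eeprom_info_buf == NULL)
		err(1, "calloc");

	fd = open(MLX5_DEV_PATH, O_RDWR);
	if (fd == -1)
		err(1, "open(%s)", MLX5_DEV_PATH);
	if (ioctl(fd, MLX5_EEPROM_GET, &req) == -1)
		err(1, "MLX5_EEPROM_GET");

	printf("EEPROM info length: %zu, page 3 %svalid\n",
	    req.eeprom_info_out_len,
	    req.eeprom_info_page_valid ? "" : "not ");
	/* eeprom_info_out_len is taken to be in bytes (assumption). */
	for (i = 0; i < req.eeprom_info_out_len / sizeof(uint32_t); i++)
		printf("%08x%c", req.eeprom_info_buf[i],
		    (i + 1) % 8 == 0 ? '\n' : ' ');
	putchar('\n');

	free(req.eeprom_info_buf);
	close(fd);
	return (0);
}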